

This article describes how to set up the Maestrano Hub component on a server / auto-scaling group using mno-deploy, Maestrano's Ansible automation framework.


1 - Prerequisites

This documentation is tailored for AWS. While the steps can be adapted to other platforms, you may need to contact us for further guidance if you are trying to apply the instructions below to Azure, SoftLayer or Google Compute Cloud.

In order to perform this installation you will need to set up a GitHub repository to store your configuration (e.g. mno-deploy-myproject) and set up a build process that packages this repository as a tar.gz archive and pushes it to one of your AWS S3 buckets. The next section provides more information on what to include in this configuration repository.
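
As an illustration, a minimal build step on your CI tool could look like the sketch below. This is an assumption about your own pipeline; the bucket name and path are placeholders and should match the values you later use for mno_deploy_configuration_bucket and mno_deploy_configuration_path in your user data.

# Hypothetical CI build step - archive the configuration repository and push it to S3
# The bucket name and path below are placeholders
tar -czf mno-deploy-myproject.tar.gz .
aws s3 cp mno-deploy-myproject.tar.gz s3://mno-deploy-yourcompany/ansible/production/mno-deploy-myproject.tar.gz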

2 - Configuration

This section describes how to set up the Ansible configuration parameters in mno-deploy to deploy Maestrano Hub. These configuration activities should be performed in your configuration project (e.g. mno-deploy-myproject) under ansible/vars/<environment_name>.yml and ansible/vars/<environment_name>_secrets.yml.

2.1 - Infrastructure configuration

In order to set up Maestrano Hub you need to fill in certain parameters describing how the Load Balancer, Launch Configuration and Auto-Scaling group should be configured for MnoHub.

A basic example is provided here: https://github.com/maestrano/mno-deploy/blob/develop/ansible/vars/example.yml#L91

Below is a commented example explaining what each parameter means. You can see the full list of default values by looking at the group_vars on GitHub.

# MnoHub Infrastructure Configuration
mnohub:
  # Whether this component should be set up or not
  skip: false
  launch_configuration:
    # AWS instance size for MnoHub
    instance_type: c3.large
    # OPTIONAL - you can use spot instances in test environments
    spot_price: 0.0235
  auto_scaling_group:
    # Minimum number of MnoHub instances to keep running
    min_size: 1
    # Maximum number of instances to launch
    max_size: 2
    # Target number of instances - used when a failure occurs on one of the machines
    desired_capacity: 1
  elastic_load_balancer:
    # This block specifies how to load balance traffic
    listeners:
      -
        # Incoming traffic protocol - if you are behind a firewall you may set it to http
        protocol: https
        # Incoming traffic port - if you are behind a firewall you may set it to 80
        load_balancer_port: 443
        # Protocol to use on the instance - set it to https if encryption in transit is required
        instance_protocol: http
        # Port to use on the instance - set it to 443 if encryption in transit is required
        instance_port: 80
        # IAM certificate to use on the ELB side. Only required if load_balancer_port is 443 and protocol is https
        ssl_certificate_id: "arn:aws:iam::647381683421:server-certificate/some.iam.certificate"


2.2 - Application configuration

The configuration block below relates to the Maestrano Hub runtime configuration. This runtime configuration is split between settings and secrets, so it is spread over two files:

  • settings: ansible/vars/<environment_name>.yml
  • secrets (encrypted): ansible/vars/<environment_name>_secrets.yml

Below is a commented example explaining what each parameter means. You can see the full list of default values by looking at the group_vars on GitHub.

mnohub_config:
  # Whether to skip running database migrations. Should be set to false
  skip_migrations: false
  # Whether to disable Sidekiq background jobs for Maestrano Hub. Should be set to false
  background_jobs_disabled: false
  # This section describes how assets (images, js/css files) are stored
  carrierwave:
    # Name of the AWS bucket where assets will be stored
    public_bucket: some-aws-bucket
    # Domain used to serve assets. Typically this will be the S3 host, or the CloudFront host
    # if you have activated CloudFront for your S3 bucket
    asset_host: some-cloudfront-id.cloudfront.net
  general:
    # OPTIONAL: only required if you want to enable apps hosted by us. Please contact us.
    nex_host: "https://{{ nex.dns_record.record }}"
    # OPTIONAL: only required if hosted apps are enabled. Please contact us.
    apps_domain: "apps.uat.maestrano.io"
    # This value is used to generate unique email addresses for application providers
    virtual_email_domain: "appmail.somearbitrarydomain.com"
  # OPTIONAL (will default to maestrano.dns_record.record)
  # This block defines how Maestrano Hub should be accessed outside of the VPC
  # This configuration block is typically used by other applications
  public_dns:
    scheme: https
    host: api-hub.mydomain.com
  # OPTIONAL (will default to maestrano.dns_record.record)
  # This block defines how Maestrano Hub should be accessed inside the VPC
  # This configuration block is typically used by other applications
  private_dns:
    scheme: http
    host: api-hub.internal
  # OPTIONAL (will default to rds.endpoint)
  # Describes the database access
  database:
    host: some-db-host.somedomain.com
  # OPTIONAL: if specified then an initial tenant will be created on first deploy
  default_tenant:
    name: Some Tenant Name
    id: some-uuid
    key: some-secret-key
    scheme: https
    host: frontend-host.example.com
  # OPTIONAL: New Relic app name and enablement flag. The global var newrelic_license_key must be set.
  new_relic:
    enabled: true
    app_name: MnoHub
  # OPTIONAL: whether to monitor the application using Splunk. The global vars splunk_* must be set
  splunk:
    enabled: true
  # OPTIONAL: whether to monitor the application using Sumo Logic. The global configuration block 'sumologic' must be set
  sumocollector:
    enabled: true


Below is an example of the secrets configuration block. Note that the <environment_name>_secrets.yml file must stay encrypted using ansible-vault (an example encryption command is shown after the block below).

# MnoHub configuration
mnohub_config:
  # Database credentials
  database:
    username: db_username
    password: some_password
  # Rails secrets
  secrets:
    # This key is used to encrypt credentials in the database. Use "rake secret" to generate it.
    encryption_key: some_random_key
    # This service is used to convert monetary values. Put your OpenExchangeRate.org API key here.
    open_exchange_rate_id: open_exchange_rate_api_key
    # Rails secret. Use "rake secret" to generate it.
    secret_token: some_random_key
    # Rails secret. Use "rake secret" to generate it.
    secret_key_base: some_random_key
  # The AWS credentials to use to manage the 'assets' bucket (used to upload images, js/css files)
  s3_bucket:
    access_key: aws_key
    secret_access_key: aws_secret
  # OPTIONAL: Payment gateway. eWay credentials.
  eway:
    login: eway_login
    username: eway_username
    password: eway_password
  # OPTIONAL: Payment gateway. Braintree credentials.
  braintree:
    merchant_id: braintree_merchant_id
    public_key: braintree_public_api_key
    private_key: braintree_private_api_key
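
For reference, the secrets file can be encrypted and maintained with ansible-vault, for example as follows. The environment name and password file path simply reuse the example values from section 3 below.

# Encrypt the secrets file before committing it (environment name and password file path are example values)
ansible-vault encrypt ansible/vars/myenv_secrets.yml --vault-password-file some/path/to/key.txt
# Later edits can be done in place while keeping the file encrypted
ansible-vault edit ansible/vars/myenv_secrets.yml --vault-password-file some/path/to/key.txt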


3 - Automated Infrastructure setup (Ansible)

You can use the mno-deploy Ansible script to deploy Maestrano Hub. Usually Maestrano Hub will be set up as part of running the setup_infrastructure.sh script, which sets up the whole Virtual Private Cloud.

To run that script, just follow the instructions below:

# Export your AWS API keys. Your keys must have enough privileges to create and manage an entire VPC
export AWS_ACCESS_KEY_ID=aws-api-key
export AWS_SECRET_ACCESS_KEY=aws-api-secret
export AWS_DEFAULT_REGION=aws-region # e.g. ap-southeast-1

# Specify the version of the configuration scripts to use. Note that the configuration scripts will be fetched
# from the AWS S3 bucket specified in your continuous integration tool (e.g. Codeship), which packages the configuration
# scripts from GitHub and sends them to AWS S3.
export MNO_DEPLOY_VERSION=develop/latest

# If you want to use your local configuration - e.g. if you have checked out the configuration project from GitHub - just
# specify 'local'
export MNO_DEPLOY_VERSION=local

# Name of the environment configuration to use. The environment must have corresponding <environment>.yml and <environment>_secrets.yml files.
export ENVIRONMENT_CONFIGURATION=myenv

# Path to key or script used to decrypt the <environment>_secrets.yml file.
export ANSIBLE_VAULT_PASSWORD_FILE=some/path/to/key.txt

# Finally run the script
sh scripts/setup_infrastructure.sh
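
Once the script has completed, a quick way to confirm that Maestrano Hub is responding is to hit its health check endpoint through the public DNS or load balancer. The hostname below is the example value from the public_dns block above, and /ping is the health check path referenced in section 4.1.

# Check that MnoHub answers on its health check endpoint (hostname is an example value)
curl -i https://api-hub.mydomain.com/ping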

4 - Manual Infrastructure setup (AWS)

This section provides step-by-step instructions on how to set up Maestrano Hub on AWS using auto-scaling groups. Note that this "manual" setup still relies on Ansible for the actual configuration of the servers. The only "manual" part in the instructions below is the creation of the load balancer, launch configuration and auto-scaling group.

MySQL Database

Maestrano Hub requires a MySQL database in order to be deployed. The steps below assume that a MySQL database has already been set up, either using AWS RDS or any other means. We do not have a dedicated guide for this at this stage, but AWS has a guide explaining how to create a MySQL database on RDS.
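
If you prefer the command line, a purely illustrative sketch of creating a small MySQL instance on RDS with the AWS CLI is shown below. Every identifier, credential, security group and subnet group name is a placeholder, and you should size the instance class and storage for your own environment.

# Hypothetical example - create a small MySQL database on RDS (all names and credentials are placeholders)
aws rds create-db-instance \
  --db-instance-identifier yourcompany-production-mnohub-db \
  --engine mysql \
  --db-instance-class db.t2.medium \
  --allocated-storage 20 \
  --master-username db_username \
  --master-user-password some_password \
  --vpc-security-group-ids sg-0123456789abcdef0 \
  --db-subnet-group-name yourcompany-production-db-subnet-group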

4.1 - Create the Elastic Load Balancer

The AWS load balancer handles incoming traffic, SSL termination (if required) and balances traffic between the healthy instances of your auto-scaling group.

  1. Login to the AWS Console and go to the EC2 tab
  2. Under Load Balancing, click on Load Balancers then on Create Load Balancer
  3. Select Classic Load Balancer. Note that choosing an Application Load Balancer is possible as well but configuration steps are not covered by this guide.
  4. Enter the name of your load balancer. To be compatible with our Ansible scripts we recommend naming it "{{ application_name }}-{{ environment_name }}-mnohub-elb". 
    For example: "yourcompany-production-mnohub-elb"
    See here for more details on how our Ansible scripts are configured.
  5. Select the VPC inside which this ELB should be setup
  6. Select the subnets to use. You should select at least two subnets in two different availability zones. If you have previously used the Ansible scripts to set up your infrastructure you should select the following subnets:
    yourcompany-yourenvironment-uat_or_production-elb_tier_a
    yourcompany-yourenvironment-uat_or_production-elb_tier_b
  7. On the next screen you will be asked to select security groups. If you have previously used the Ansible scripts to set up your infrastructure you should select the following security group:
    yourcompany-yourenvironment-uat_or_production-elb: allow public traffic on port 80 and 443
  8. On the health check screen you can select the following options:
    Protocol: HTTP
    Port: 80 (switch to 443 if your setup is configured to have encryption in transit inside the VPC)
    Path: /ping
  9. You can skip selecting instances, as instances will be automatically added by the Auto-Scaling group.
  10. The above is the minimal configuration required to get started with your ELB. As a next step you may want to upload an SSL certificate to your ELB if you wish to terminate HTTPS connections at the load balancer level. An equivalent AWS CLI sketch is shown after this list.
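
For reference, a Classic Load Balancer along these lines could also be created with the AWS CLI. The subnet and security group identifiers are placeholders, while the listener and health check values mirror the settings described in the steps above.

# Hypothetical example - create the Classic Load Balancer and its health check (subnet and security group IDs are placeholders)
aws elb create-load-balancer \
  --load-balancer-name yourcompany-production-mnohub-elb \
  --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::647381683421:server-certificate/some.iam.certificate" \
  --subnets subnet-aaaaaaaa subnet-bbbbbbbb \
  --security-groups sg-0123456789abcdef0

aws elb configure-health-check \
  --load-balancer-name yourcompany-production-mnohub-elb \
  --health-check Target=HTTP:80/ping,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2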

4.2 - Create the Launch Configuration

The Launch Configuration (LC) defines how the auto-scaling group should launch new instances. The LC typically defines the instance type, user data (boot script) and network configuration.

  1. Login to the AWS Console and go to the EC2 tab
  2. Under Auto-Scaling, click on Launch Configurations then on Create Launch Configuration
  3. Select the Ubuntu Server 14.04 LTS image (64-bit)
  4. Select an instance type, for example t2.large
  5. Choose a name. To be compatible with our Ansible scripts we recommend naming your LC "{{ application_name }}-{{ environment_name }}-mnohub-lc".
    For example: "yourcompany-production-mnohub-lc"
    See here for more details on how our Ansible scripts are configured
  6. Still on the same page: if you are setting up a test environment you may want to tick the "Request Spot Instances" checkbox
  7. Still on the same page: click on Advanced Details and enter the user data using this Ansible template. You need to replace the Ansible variables with actual values.
    "{{ proxy_host }}": enter the host of your proxy if using any. The address must be reachable from within your VPC.
    "{{ proxy_ignore }}": is a comma separated list of domains/IPs to bypass
    "{{ tenant_dropbox_s3_aws_access_key }}": is the AWS Access Key ID to use to retrieve the Ansible configuration script from AWS S3
    "{{ tenant_dropbox_s3_aws_secret_key }}":  is the AWS Secret Key to use to retrieve the Ansible configuration script from AWS S3
    "{{ tenant_dropbox_s3_aws_region }}": is the region where your S3 bucket is located
    "{{ mongodb_master }}": remove the section related to this variable. Is it not useful here.
    "{{ mno_deploy_configuration_bucket }}": is the name of the S3 bucket where your Ansible configuration scripts are located. E.g. "mno-deploy-yourcompany"
    "{{ mno_deploy_configuration_path }}":  is the relative path inside the S3 bucket where the Ansible configuration scripts are stored. E.g. "/ansible/production"
    "{{ maestrano_component }}": is the name of the Ansible role to use to configure servers. Set it to mnohub-app-server-local. You can see the definition of the role here.
    "{{ env_config }}": is the name of group of variables to use to configure servers. E.g. "yourcompany_production". See examples here for an env_config set to "example"
  8. On the next screen select your instance storage size. We recommend 20 to 30 GiB.
  9. On the next screen select the security groups that should be associated with the instances. If you have previously used the Ansible scripts to set up your infrastructure you should select the following security groups. If you have not, you can use the two examples below as guidance.
    yourcompany-yourenvironment-uat_or_production-rds: security group allowing access to the MySQL AWS RDS database
    yourcompany-yourenvironment-uat_or_production-public: security group allowing incoming traffic on port 22, 80 and 443 from inside the VPC
  10. Once done, review the details and click next. AWS will ask you to select the SSH keypair to use for these instances to finalise the setup. An equivalent AWS CLI sketch is shown after this list.
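
For reference, a launch configuration along these lines could also be created with the AWS CLI. The AMI ID, key pair name, security group IDs and user data file are placeholders; the user data file would contain the rendered Ansible user data template described in step 7.

# Hypothetical example - create the launch configuration (AMI, key pair, security groups and user data file are placeholders)
aws autoscaling create-launch-configuration \
  --launch-configuration-name yourcompany-production-mnohub-lc \
  --image-id ami-00000000 \
  --instance-type t2.large \
  --key-name yourcompany-production-keypair \
  --security-groups sg-0123456789abcdef0 sg-0fedcba9876543210 \
  --user-data file://mnohub-user-data.sh \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":30}}]'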

4.3 - Create the Auto-Scaling group

The auto-scaling group ensures that instances are always running for this application. Auto-scaling groups can also be configured to automatically scale up/down based on specific metrics such as CPU usage.

  1. Login to the AWS Console and go to the EC2 tab
  2. Under Auto-Scaling, click on Auto-Scaling Groups then on Create Auto-Scaling Group
  3. Enter the name of your auto-scaling group. To be compatible with our Ansible scripts we recommend naming your group "{{ application_name }}-{{ environment_name }}-mnohub-asg". 
    For example: "yourcompany-production-mnohub-asg"
    See here for more details on how our Ansible scripts are configured.
  4. Select the VPC to use to launch instances
  5. Still on the same screen: click on Advanced Details
  6. Click the checkbox Receive traffic from one or more load balancers and select the load balancer created in step 4.1
  7. On the next screen select the subnets to use to launch instances. You should select at least two subnets in two different availability zones. If you have previously used the Ansible scripts to set up your infrastructure you should select the following subnets:
    yourcompany-yourenvironment-uat_or_production-public_tier_a
    yourcompany-yourenvironment-uat_or_production-public_tier_b
  8. The above is the minimal configuration required to get started with your auto-scaling group. You can configure scaling policies now if you wish, or do it later. An equivalent AWS CLI sketch is shown after this list.
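
For reference, an equivalent auto-scaling group could be created with the AWS CLI as sketched below. The subnet IDs are placeholders, and the size values match the infrastructure configuration example in section 2.1.

# Hypothetical example - create the auto-scaling group and attach it to the ELB (subnet IDs are placeholders)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name yourcompany-production-mnohub-asg \
  --launch-configuration-name yourcompany-production-mnohub-lc \
  --load-balancer-names yourcompany-production-mnohub-elb \
  --min-size 1 --max-size 2 --desired-capacity 1 \
  --health-check-type ELB \
  --health-check-grace-period 300 \
  --vpc-zone-identifier "subnet-aaaaaaaa,subnet-bbbbbbbb"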


