Update MongoDB Cluster instances types on AWS EC2
Connec! uses a MongoDB cluster to store its data. By default, the cluster runs on three servers, whose instance type can be modified by following the instructions below.
To avoid downtime when updating the MongoDB cluster, the procedure is to perform a rolling upgrade, sequentially replacing cluster nodes.
1. Infrastructure upgrade
1.1 Update Ansible scripts
The MongoDB Cluster default configuration is detailed here: https://github.com/maestrano/mno-deploy/blob/develop/ansible/group_vars/all#L820.
To change the default instance type and enable volume encryption at rest, you can modify your configuration as follows:
```yaml
mongo:
  encrypted: True
  launch_configuration:
    instance_type: t2.medium
```
1.2 Create a backup image
Before running the upgrade, it is highly recommended to back up the existing MongoDB cluster data.
From the AWS EC2 console, select one of the Mongo servers, then right-click > Image > Create Image.
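The same backup can also be scripted with the AWS CLI. A minimal sketch, assuming a placeholder instance ID that you should replace with the ID of one of your Mongo nodes:

```shell
# Create an AMI of one Mongo node before the upgrade.
# i-0123456789abcdef0 is a placeholder -- substitute the actual instance ID.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "mongo-backup-$(date +%Y-%m-%d)" \
  --description "Pre-upgrade backup of MongoDB node"
```

Note that `create-image` reboots the instance by default; pass `--no-reboot` to avoid the reboot, at the cost of filesystem consistency in the snapshot.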
2. Replace cluster nodes
You will need to repeat the procedure below for each node in the Mongo cluster sequentially to avoid any downtime.
2.1 Stop cluster node
Log into the AWS Console and stop ONE node from the cluster (AWS Instance > Stop).
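If you prefer the command line, the same step can be performed with the AWS CLI; the instance ID below is a placeholder:

```shell
# Stop a single Mongo node and block until it is fully stopped.
# i-0123456789abcdef0 is a placeholder -- substitute the actual instance ID.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
```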
2.2 Run infrastructure upgrade
From the folder mno-deploy-myenvironment, execute the following:
```shell
# Export AWS environment variables
export AWS_REGION=ap-southeast-1
export AWS_DEFAULT_REGION=ap-southeast-1
export AWS_ACCESS_KEY_ID=[KEY]
export AWS_SECRET_ACCESS_KEY=[SECRET]
export MNO_DEPLOY_VERSION=local
export ENVIRONMENT_CONFIGURATION=environment
export ANSIBLE_VAULT_PASSWORD_FILE=[PATH TO FILE]

# Execute the ansible scripts
sh scripts/setup_infrastructure.sh --tags "infra_mongo,infra_vpc"
```
This will create a new instance and add it to the MongoDB cluster.
2.3 Validate AWS infrastructure update
New instances are running
From the AWS Console, verify that the mongo nodes are running with the name *\[environment\]-mongo*. Also make sure that the volumes attached to the instances are encrypted if this option has been enabled.
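The encryption status can also be checked from the AWS CLI. A sketch, assuming the instances' volumes carry a `Name` tag matching the naming scheme above (adjust the filter to your environment):

```shell
# List the encryption status of the volumes attached to the Mongo instances.
# The tag filter is an assumption -- adapt it to how your volumes are tagged.
aws ec2 describe-volumes \
  --filters "Name=tag:Name,Values=*-mongo*" \
  --query 'Volumes[].{ID:VolumeId,Encrypted:Encrypted}' \
  --output table
```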
Ansible configuration has been correctly executed
SSH into the instances and tail the log file /var/log/ansible.log; it should end with:
```
2017-04-24 05:46:46,429 p=10094 u=root | PLAY RECAP *********************************************************************
2017-04-24 05:46:46,431 p=10094 u=root | localhost : ok=67 changed=45 unreachable=0 failed=0
```
MongoDB nodes are synchronising
SSH into the instances and tail the log file /var/log/mongodb.log; the following entries should appear (note that it may take up to a few hours to synchronise all data):
```
2017-04-24T06:15:29.533+0000 [rsSync] replSet initial sync finishing up
2017-04-24T06:15:29.533+0000 [rsSync] replSet set minValid=58538b59:1
2017-04-24T06:15:29.590+0000 [rsSync] replSet RECOVERING
2017-04-24T06:15:29.590+0000 [rsSync] replSet initial sync done
2017-04-24T06:15:30.593+0000 [rsSync] replSet SECONDARY
```
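The replica set state can also be queried directly with the legacy `mongo` shell from any node. A sketch, reusing the hostnames from this guide:

```shell
# Print the state (PRIMARY / SECONDARY / RECOVERING...) of each replica set member.
mongo --host r0.mongo.production.maestrano.local --quiet \
  --eval 'rs.status().members.forEach(function(m) { print(m.name + " -> " + m.stateStr); })'
```

All members should eventually report PRIMARY or SECONDARY once the initial sync completes.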
The size of the files in /data/db/ should also be approximately the same between all servers.
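A quick way to compare the data directory sizes is to loop over the nodes from a host that can SSH into them. A sketch, assuming the hostname scheme used in this guide:

```shell
# Compare the size of /data/db across the three replicas.
# Hostnames follow the r[0-2].mongo.production.maestrano.local scheme.
for i in 0 1 2; do
  host="r${i}.mongo.production.maestrano.local"
  printf '%s: ' "$host"
  ssh "$host" 'du -sh /data/db | cut -f1'
done
```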
SSH into a server within the VPC and run the following commands to validate the Security Group access:
```shell
curl -L http://r0.mongo.production.maestrano.local:27017
curl -L http://r1.mongo.production.maestrano.local:27017
curl -L http://r2.mongo.production.maestrano.local:27017
```

Each request is expected to return: "It looks like you are trying to access MongoDB over HTTP on the native driver port."
Verify that the Connec! API works correctly by sending a request (using Postman or cURL):
```
GET https://api-connec.myapplication.io/api/v2/org-1234/company
```
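The equivalent cURL call is sketched below. The authentication scheme (HTTP Basic with an API key/secret pair) is an assumption; use whatever credentials your Connec! deployment expects:

```shell
# [API_KEY]/[API_SECRET] are placeholders for your Connec! credentials.
curl -u "[API_KEY]:[API_SECRET]" \
  -H "Accept: application/json" \
  "https://api-connec.myapplication.io/api/v2/org-1234/company"
```

A 200 response with a JSON body indicates that the API is reachable and the cluster is serving data.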