AWS

How To: Create An AWS Lambda Function To Backup/Snapshot Your EBS Volumes

AWS Lambda functions are a great way to run code on a trigger or schedule without needing a whole server dedicated to it. They can be cost effective, but be careful: depending on how long they run and the number of executions per hour, they can be quite costly as well.

For my use case, I wanted to create snapshot backups of EBS volumes for a MongoDB database every day. I originally implemented this using only CloudWatch, which is primarily a monitoring service, but its Events feature also provides the scheduling/cron-like capability AWS leans on for jobs like this. Unfortunately, the CloudWatch-only implementation of snapshot backups was very limited: I could not tag the backups, which was certainly something I needed for easy finding and cleanup later (past a retention period).

Anyway, there were a couple of pitfalls I ran into when creating this function.

Pitfalls

  1. Make sure your security group allows you to communicate out to the Internet for any AWS APIs you need to talk to.
  2. Make sure your timeout is set to 1 minute or greater depending on your use case. The default is only a few seconds, which is likely not high enough.
  3. “The Lambda function execution role must have permissions to create, describe and delete ENIs. AWS Lambda provides a permissions policy, AWSLambdaVPCAccessExecutionRole, with permissions for the necessary EC2 actions (ec2:CreateNetworkInterface, ec2:DescribeNetworkInterfaces, and ec2:DeleteNetworkInterface) that you can use when creating a role”
    1. Personally, I used an inline policy and included the specific actions (see the CLI example after this list).
  4. Upload your zip file and make sure your handler section is configured with the exact file_name.method_in_your_code_for_the_handler
  5. Also, this one is more of an FYI: Lambda functions have a maximum timeout of 5 minutes (300 seconds).
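
For reference, both the timeout and the ENI permissions can also be handled from the CLI. Here is a minimal sketch, assuming a role named ebs-snapshots-role and a function named ebs-snapshots (those names are placeholders, not from the original setup):

# Attach the AWS managed policy that grants the ENI actions quoted above
aws iam attach-role-policy \
    --role-name ebs-snapshots-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole

# Raise the timeout to the (then) maximum of 300 seconds
aws lambda update-function-configuration \
    --function-name ebs-snapshots \
    --timeout 300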

I think that was it; after that everything worked fine. To finish this short article off, screenshots and the code!

Screenshots

And finally the code…

Function Code

# Backup cis volumes

import boto3


def lambda_handler(event, context):
    # Region to operate in
    reg = 'us-east-1'

    # Connect to EC2 in that region
    ec2 = boto3.client('ec2', region_name=reg)

    response = ec2.describe_instances(Filters=[{'Name': 'instance-state-name', 'Values': ['running']},
                                               {'Name': 'tag-key', 'Values': ['Name']},
                                               {'Name': 'tag-value', 'Values': ['cis-mongo*']},
                                               ])

    # Use the EC2 resource API for tagging the snapshots we create below
    ec2resource = boto3.resource('ec2', region_name=reg)

    for r in response['Reservations']:
        for i in r['Instances']:
            for mapping in i['BlockDeviceMappings']:
                volId = mapping['Ebs']['VolumeId']

                # Create snapshot
                result = ec2.create_snapshot(VolumeId=volId,
                                             Description='Created by Lambda backup function ebs-snapshots')

                # Add a Name tag to the snapshot for easier identification and cleanup later
                snapshot = ec2resource.Snapshot(result['SnapshotId'])
                snapshot.create_tags(Tags=[{'Key': 'Name', 'Value': 'cis-mongo-snapshot-backup'}])
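
To actually run this on the daily schedule mentioned earlier, the function can be wired to a CloudWatch Events rule. Here is a hedged sketch using the CLI; the rule name, function name (ebs-snapshots), region, and account ID (123456789012) are placeholders for whatever your setup uses:

aws events put-rule --name ebs-snapshot-daily --schedule-expression "rate(1 day)"

aws lambda add-permission --function-name ebs-snapshots \
    --statement-id ebs-snapshot-daily --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:us-east-1:123456789012:rule/ebs-snapshot-daily

aws events put-targets --rule ebs-snapshot-daily \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:ebs-snapshots"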

And here is an additional function to add for cleanup

import boto3
from datetime import timedelta, datetime


def lambda_handler(event, context):
    # if older than days delete
    days = 14

    filters = [{'Name': 'tag:Name', 'Values': ['cis-mongo-snapshot-backup']}]

    client = boto3.client('ec2', region_name='us-east-1')
    snapshots = client.describe_snapshots(Filters=filters)

    for snapshot in snapshots["Snapshots"]:
        start_time = snapshot["StartTime"]
        delete_time = datetime.now(start_time.tzinfo) - timedelta(days=days)

        if start_time < delete_time:
            print('Deleting {id}'.format(id=snapshot["SnapshotId"]))
            client.delete_snapshot(SnapshotId=snapshot["SnapshotId"], DryRun=False)

The end, happy server-lessing (ha !)

MongoDB data loss avoided courtesy of AWS EBS & Snapshots

Cross Region MongoDB Across A Slow Network (Napster) Bad, AWS Snapshots (Metallica) Good!

I recently found myself in a bit of a pickle. My team and I had deployed a 3-node MongoDB cluster configured as two nodes in us-east-1 and one node in us-west-2, to maximize our availability while minimizing cost. Ultimately, there were two problems with this approach. The first is that, for reasons mostly outside of our control, the rest of our application stack above the database was deployed in us-east-1, drastically reducing any availability benefit the tertiary node in us-west-2 was buying us. Additionally, we were not aware at the time we made this choice that our cluster/replication traffic was going across a VPN with very limited bandwidth that frequently suffered network partitions due to network maintenance and a lack of redundancy. We found our MongoDB cluster failing over frequently due to losing communication with its members, and when it did, the cluster had a difficult time recovering because replication couldn’t catch up across the VPN.

After restarting Mongo several times, including removing the data directory and starting over fresh, it became clear replication was going to take days to sync, and we could not afford to wait that long. We needed to restore the cluster health ASAP so we could move all nodes to us-east-1 and mitigate the VPN network issue that was introducing so much pain.

Now, the system I am referring to is production: it cannot lose data, and it cannot take downtime or a maintenance window. Given these constraints I started googling ways to catch up a MongoDB replica when it will not catch up on its own. I tried some of the things I found, like rsync, before realizing none of it was any faster across that slow VPN link. Ultimately, I decided I was going to try a snapshot. The documentation I read warned me that a live snapshot may result in potentially inconsistent data, but given the constraints I mentioned before, I had few options and had to try it. In the end, it worked perfectly, and in under an hour I had my entire cluster healthy. Using the AWS CLI utility, here is how I did it…

Step 1, take the snapshot of the healthy node

I actually took the snapshot in the GUI at first… so it is not shown here, but for the record: to create a snapshot, go to your volume under EC2 Volumes, click Actions, then Create Snapshot, and save the snapshot ID. (Or alternatively do it with the CLI like I did for everything else.)
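
For completeness, the equivalent CLI call would look something like the following; the volume ID here is a placeholder, not the actual volume from this incident:

aws --region us-west-2 ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "cis-mongo-prod-3-snapshot-05-19-2017"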

Step 2, copy the snapshot from your source region to your destination region

aws --region us-east-1 ec2 copy-snapshot --source-region us-west-2 --source-snapshot-id snap-01f185929341abd3b --description "cis-mongo-prod-3-snapshot-05-19-2017"

Make sure you copy to your clipboard the snapshot ID returned…

Step 3, Create a new volume from the copied snapshot

aws ec2 create-volume --size 300 --region us-east-1 --availability-zone us-east-1d --volume-type gp2 --snapshot-id snap-085b986dae85dfed1

Response:

{
"AvailabilityZone": "us-east-1c",
"Encrypted": false,
"VolumeType": "gp2",
"VolumeId": "vol-0fa49fde34e88a1c6",
"State": "creating",
"Iops": 900,
"SnapshotId": "snap-085b986dae85dfed1",
"CreateTime": "2017-05-19T20:37:33.304Z",
"Size": 300
}

Step 4, Attach the volume to the system

aws ec2 attach-volume --volume-id vol-0fa49fde34e88a1c6 --instance-id i-0717cd609275fdbef --device /dev/sdc

Oh No We Got An Error!

An error occurred (InvalidVolume.ZoneMismatch) when calling the AttachVolume operation: The volume 'vol-0fa49fde34e88a1c6' is not in the same availability zone as instance 'i-0717cd609275fdbef'

Ah ok, simple fix, we created the volume in a different AZ than the node we were attaching to.

(delete the old volume) Then…

Step 5, create a new volume from the snapshot, but this time specify the same AZ (us-east-1b instead of us-east-1c) as the node we wish to attach it to

aws ec2 create-volume --size 300 --region us-east-1 --availability-zone us-east-1b --volume-type gp2 --snapshot-id snap-085b986dae85dfed1

Step 6, try attaching the new volume (cross your fingers)

aws ec2 attach-volume --volume-id vol-095cc214c8a5e74e0 --instance-id i-0717cd609275fdbef --device /dev/sdc

Response:

{
"AttachTime": "2017-05-19T21:58:07.586Z",
"InstanceId": "i-0717cd609275fdbef",
"VolumeId": "vol-095cc214c8a5e74e0",
"State": "attaching",
"Device": "/dev/sdc"
}

Sweet it worked…Now it’s time to do some work on the node we attached this volume to.

Step 7, check if the new attachment is visible to the system

[root@ip-10-5-0-149 mongo]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 10G 0 disk
└─xvda1 202:1 0 10G 0 part /
xvdb 202:16 0 300G 0 disk /mnt
xvdc 202:32 0 300G 0 disk
[root@ip-10-5-0-149 mongo]#

Yup, sure is. We can see our device ‘xvdc’ is a 300G disk that has no mount point. We can also see ‘xvdb’, which is our original Mongo data volume, mounted under /mnt.

Step 8, create mount point and mount the new device

[root@ip-10-5-0-149 mongo]# mkdir /mnt/mongo2
[root@ip-10-5-0-149 mongo]# mount /dev/xvdc /mnt/mongo2
[root@ip-10-5-0-149 mongo]#
[root@ip-10-5-0-149 mongo]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.8G 5.6G 3.9G 60% /
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 1.6G 15G 10% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/xvdb 296G 65M 281G 1% /mnt
none 64K 4.0K 60K 7% /.subd/tmp
tmpfs 3.2G 0 3.2G 0% /run/user/11272
/dev/xvdc 296G 12G 269G 5% /mnt/mongo2
[root@ip-10-5-0-149 mongo]#

Step 9, shut down Mongo if it’s running

[root@ip-10-5-0-149 mongo]# service mongod stop

Step 10, copy the snapshot data to the existing MongoDB data directory

[root@ip-10-5-0-149 mongo]# pwd
/mnt/mongo
[root@ip-10-5-0-149 mongo]# ls
[root@ip-10-5-0-149 mongo]# cp -r /mnt/mongo2/* .

Step 11, fix permissions for the copied data

[root@ip-10-5-0-149 mongo]# chown mongod:mongod /mnt/mongo -R

NOTE: Do not forget this step or you will get errors starting the MongoDB service

Step 12, start Mongo back up

[root@ip-10-5-0-149 mongo]# service mongod start
Starting mongod (via systemctl): [ OK ]
[root@ip-10-5-0-149 mongo]#

Step 13, Check Mongo Cluster Status

[root@ip-10-5-0-149 mongo]# mongo
cisreplset:SECONDARY> rs.status()
{
"set" : "cisreplset",
"date" : ISODate("2017-05-19T22:17:00.483Z"),
"myState" : 2,
"term" : NumberLong(111),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "ip-10-5-0-149:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 6,
"optime" : {
"ts" : Timestamp(1495222912, 3),
"t" : NumberLong(110)
},
"optimeDate" : ISODate("2017-05-19T19:41:52Z"),
"configVersion" : 3,
"self" : true
},
{
"_id" : 1,
"name" : "10.5.5.182:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4,
"optime" : {
"ts" : Timestamp(1495230671, 1033),
"t" : NumberLong(111)
},
"optimeDate" : ISODate("2017-05-19T21:51:11Z"),
"lastHeartbeat" : ISODate("2017-05-19T22:16:56.263Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T22:16:59.620Z"),
"pingMs" : NumberLong(1),
"syncingTo" : "10.100.0.17:27017",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "10.100.0.17:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3,
"optime" : {
"ts" : Timestamp(1495232212, 24),
"t" : NumberLong(111)
},
"optimeDate" : ISODate("2017-05-19T22:16:52Z"),
"lastHeartbeat" : ISODate("2017-05-19T22:16:56.516Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T22:16:58.751Z"),
"pingMs" : NumberLong(84),
"electionTime" : Timestamp(1495225273, 1),
"electionDate" : ISODate("2017-05-19T20:21:13Z"),
"configVersion" : 3
}
],
"ok" : 1
}
cisreplset:SECONDARY>

For contrast, here is what it looked like before; pay close attention to node/member 10.5.0.149

cisreplset:PRIMARY> rs.status()
{
"set" : "cisreplset",
"date" : ISODate("2017-05-19T22:00:07.185Z"),
"myState" : 1,
"term" : NumberLong(111),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "ip-10-5-0-149:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-05-19T22:00:06.570Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T21:49:39.839Z"),
"pingMs" : NumberLong(82),
"lastHeartbeatMessage" : "Connection refused",
"configVersion" : -1
},
{
"_id" : 1,
"name" : "10.5.5.182:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2138,
"optime" : {
"ts" : Timestamp(1495228916, 7491),
"t" : NumberLong(111)
},
"optimeDate" : ISODate("2017-05-19T21:21:56Z"),
"lastHeartbeat" : ISODate("2017-05-19T22:00:05.507Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T22:00:05.358Z"),
"pingMs" : NumberLong(83),
"syncingTo" : "10.100.0.17:27017",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "10.100.0.17:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1200895,
"optime" : {
"ts" : Timestamp(1495231207, 1111),
"t" : NumberLong(111)
},
"optimeDate" : ISODate("2017-05-19T22:00:07Z"),
"electionTime" : Timestamp(1495225273, 1),
"electionDate" : ISODate("2017-05-19T20:21:13Z"),
"configVersion" : 3,
"self" : true
}
],
"ok" : 1
}
cisreplset:PRIMARY>

Now that our DB is verified healthy, it’s time to clean up.

Step 14, clean up our now unnecessary waste (and thank the gods)

Umount & Delete

[root@ip-10-5-0-149 mongo]# umount /mnt/mongo2/
[root@ip-10-5-0-149 mongo]# rm -rf /mnt/mongo2/

Detach Volume

(env) ➜ ~ aws ec2 detach-volume --volume-id vol-0fa49fde34e88a1c6
{
"AttachTime": "2017-05-19T20:43:29.000Z",
"InstanceId": "i-0d535ee1cdfd79073",
"VolumeId": "vol-0fa49fde34e88a1c6",
"State": "detaching",
"Device": "/dev/sdc"
}
(env) ➜ ~

Delete Volume & Snapshots

(env) ➜ ~ aws ec2 delete-volume --volume-id vol-095cc214c8a5e74e0
(env) ➜ ~ aws ec2 delete-snapshot --snapshot-id snap-085b986dae85dfed1
(env) ➜ ~ aws ec2 delete-snapshot --snapshot-id snap-01f185929341abd3b --region us-west-2

When I ran into this issue and googled around a bit, I really didn’t find anyone with a detailed account of how they got out of it. Thus I was inspired by the opportunity to help others in the future and the result is this post. I hope it finds someone, someday, facing a similar scenario and graciously lifts them out of the depths! Godspeed, happy clouding.


How To: Launch A Jump Host In AWS Using Terraform

I have been a HashiCorp fanboy for a couple of years now. I am impressed with, and happy about, pretty much everything they have done, from Vagrant to Consul and more. In short, they make the DevOps world a better place. That being said, this article is about the aptly named Terraform product. Here is how HashiCorp describes Terraform in their own words…

“Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.”

Interestingly enough, the description doesn’t point out (though it implies it by omitting any mention of specific providers) that Terraform is multi-cloud (or cloud agnostic). Terraform works with AWS, GCP, Azure and OpenStack, among others. In this article we will be covering how to use Terraform with AWS.

Step 1, download Terraform, I am not going to cover that part 😉
https://www.terraform.io/downloads.html

Step 2, Configuration…

Configuration

HashiCorp uses its own configuration language (HCL) for Terraform; it is fully JSON compatible, which is nice. The details are covered here https://github.com/hashicorp/hcl.

After downloading and installing Terraform, it’s time to start generating the configs.

AWS IAM Keys

AWS keys are required to do anything with Terraform. You can read about how to generate an access key / secret key for a user here : http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey

Terraform Configuration Files Overview

When you execute the terraform commands, you should be within a directory containing terraform configuration files. Those files ending in a ‘.tf’ extension will be loaded by Terraform in alphabetical order.
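
For this article the working directory ends up looking something like the sketch below (the directory name ‘jump’ matches the shell prompts shown later):

jump/
    main.tf        # provider and resources
    variables.tf   # variable definitions and defaults
    output.tf      # outputs printed after a successful apply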

Before we jump into our configuration files for our Jump box, it may be helpful to review a quick primer on the syntax here https://www.terraform.io/docs/configuration/syntax.html

& some more advanced features such as lookup here https://www.terraform.io/docs/configuration/interpolation.html

Most users of Terraform choose to make their configs modular, resulting in multiple .tf files with names like main.tf, variables.tf and data.tf… This is not required… you can choose to put everything in one big Terraform file, but I caution you: modularity/compartmentalization is always a better approach than one monolithic file. Let’s take a look at our main.tf.

Main.tf

Typically, if you only see one Terraform config file, it is called main.tf. Most commonly there will be at least one other file called variables.tf, used specifically for providing values for variables referenced in other TF files such as main.tf. Let’s take a look at our main.tf file section by section.

Provider

The provider keyword is used to identify the platform (cloud) you will be talking to, whether it is AWS or another cloud. In our case it is AWS, and we define three configuration items: an access key, a secret key, and a region, all of which we are inserting variables for. These will later be looked up / translated into real values via variables.tf.

provider "aws" {
        access_key = "${var.aws_access_key_id}"
        secret_key = "${var.aws_secret_access_key}"
        region = "${var.aws_region}"
}

Resource aws_instance

This section defines a resource, an “aws_instance” that we are calling “jump_box”. We define all the configuration requirements for that instance, substituting variable names where necessary and in some cases just hard coding the value. Notice we are attaching two security groups to the instance to allow ICMP & SSH. We are also tagging our instance, which is critical in an AWS environment so your administrators/teammates have some idea what a machine that is spun up is used for.

resource "aws_instance" "jump_box" {
        ami = "${lookup(var.ami_base, "centos-7")}"
        instance_type = "t2.medium"
        key_name = "${var.key_name}"
        vpc_security_group_ids = [
                "${lookup(var.security-groups, "allow-icmp-from-home")}",
                "${lookup(var.security-groups, "allow-ssh-from-home")}"
        ]
        subnet_id = "${element(var.subnets_private, 0)}"
        root_block_device {
                volume_size = 8
                volume_type = "standard"
        }
        user_data = <<-EOF
                                #!/bin/bash
                                yum -y update
                                EOF
        tags = {
                Name = "${var.instance_name_prefix}-jump"
                ApplicationName = "jump-box"
                ApplicationRole = "ops"
                Cluster = "${var.tags["Cluster"]}"
                Environment = "${var.tags["Environment"]}"
                Project = "${var.tags["Project"]}"
                BusinessUnit = "${var.tags["BusinessUnit"]}"
                OwnerEmail = "${var.tags["OwnerEmail"]}"
                SupportEmail = "${var.tags["SupportEmail"]}"
        }
}

Btw, this resource type is provided by Terraform’s AWS provider, documented here https://www.terraform.io/docs/providers/aws/r/instance.html

You may have to download required modules using terraform get (which can be done once you have written some config files 🙂 ).

Also, note the usage of ‘user_data’ here to update the machine’s packages at boot time. This is an AWS feature that is exposed through the AWS provider in Terraform.

Resource aws_security_group

Next we define a new security group (vs. attaching an existing one as in the section above). We are creating this new security group for other VMs in the environment to attach later, so that it can be used to allow SSH from the jump host to those VMs.

Also notice under cidr_blocks we define a single IP address, a /32 of our jump host… but more important is to notice how we determine that jump host’s IP address: by using .private_ip to access the attribute of the “jump_box” aws_instance we are creating/just created in AWS. That is pretty cool.

resource "aws_security_group" "jump_box_sg" {
        name = "${var.instance_name_prefix}-allow-ssh-from-jumphost"
        description = "Allow SSH from the jump host"
        vpc_id = "${var.vpc_id}"

        ingress {
                from_port = 22
                to_port = 22
                protocol = "tcp"
                cidr_blocks = ["${aws_instance.jump_box.private_ip}/32"]
        }

        tags = "${var.tags_infra_default}"
}

Resource aws_route53_record

The last entry in our main.tf creates a DNS entry for our jump host in Route53. Again, notice we are specifying a name with ‘jump.’ as a prefix, while the remainder of the FQDN is figured out by the lookup command. The lookup command is used to look up values inside of a map. In this case the map is defined in our variables.tf, which we will review next.

resource "aws_route53_record" "jump_box_dns" {
        zone_id = "${lookup(var.route53_zone, "id")}"
        type = "A"
        ttl = "300"
        name = "jump.${lookup(var.route53_zone, "name")}"
        records = ["${aws_instance.jump_box.private_ip}"]
}

Variables.tf

I will attempt to match the section structure I used above for main.tf when explaining the variables in variables.tf, though variables.tf itself is not laid out in sections quite as clearly.

Provider variables

When Terraform is run it compiles all .tf files and replaces any reference to a variable with the value it finds defined (in our case) in the variables.tf file via the variable keyword. Notice that the first two variables are empty; they have no value defined. Why? Terraform supports taking input at runtime: by leaving these values blank, Terraform will prompt us for the values. Region is pretty straightforward; default is the value returned, and description in this case is really an unused value except as a comment.

variable "aws_access_key_id" {}
variable "aws_secret_access_key" {}

variable "aws_region" {
        description = "AWS region to create resources in"
        default = "us-east-1"
}

I would like to demonstrate the behavior of Terraform described above when the variables are left empty:

➜  jump terraform plan
var.aws_access_key_id
  Enter a value: aaaa

var.aws_secret_access_key
  Enter a value: bbbb

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

...

At this phase you would enter your AWS key info, and Terraform would ‘plan’ out your deployment, meaning it would run through your configs and print its plan to your screen, but not actually change any state in AWS. That is the difference between terraform plan & terraform apply.
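
If you would rather not be prompted each run, Terraform can also pick these values up from -var flags or from TF_VAR_-prefixed environment variables; a minimal sketch (the key values are placeholders):

export TF_VAR_aws_access_key_id="AKIA..."
export TF_VAR_aws_secret_access_key="your-secret-key"
terraform plan

# or pass them inline per command
terraform plan -var 'aws_access_key_id=AKIA...' -var 'aws_secret_access_key=your-secret-key'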

Resource aws_instance variables

Here we define the values of our AMI, SSH key, instance prefix for the name, tags, security groups, and subnets. Again this should be pretty straightforward, no magic here, just the use of string variables & maps where necessary.

variable "ami_base" {
        description = "AWS AMIs for base images"
        default = {
                "centos-7" = "ami-2af1ca3d"
                "ubuntu-14.04" = "ami-d79487c0"
        }
}

variable "key_name" {
        default = "tuxninja-rsa-2048"
}

variable "instance_name_prefix" {
        default = "tuxlabs-"
}

variable "tags" {
        type = "map"
        default = {
                        ApplicationName = "Jump"
                        ApplicationRole = "jump box - bastion"
                        Cluster = "Jump"
                        Environment = "Dev"
                        Project = "Jump"
                        BusinessUnit = "TuxLabs"
                        OwnerEmail = "tuxninja@tuxlabs.com"
                        SupportEmail = "tuxninja@tuxlabs.com"
        }
}

variable "tags_infra_default" {
        type = "map"
        default = {
                        ApplicationName = "Jump"
                        ApplicationRole = "jump box - bastion"
                        Cluster = "Jump"
                        Environment = "DEV"
                        Project = "Jump"
                        BusinessUnit = "TuxLabs"
                        OwnerEmail = "tuxninja@tuxlabs.com"
                        SupportEmail = "tuxninja@tuxlabs.com"
        }
}

variable "security-groups" {
        description = "maintained security groups"
        default = {
                "allow-icmp-from-home" = "sg-a1b75ddc"
                "allow-ssh-from-home" = "sg-aab75dd7"
        }
}

variable "vpc_id" {
        description = "VPC us-east-1-vpc-tuxlabs-dev01"
        default = "vpc-c229daa5"
}

variable "subnets_private" {
        description = "Private subnets within us-east-1-vpc-tuxlabs-dev01 vpc"
        default = ["subnet-78dfb852", "subnet-a67322d0", "subnet-7aa1cd22", "subnet-75005c48"]
}

variable "subnets_public" {
        description = "Public subnets within us-east-1-vpc-tuxlabs-dev01 vpc"
        default = ["subnet-7bdfb851", "subnet-a57322d3", "subnet-47a1cd1f", "subnet-73005c4e"]
}

It’s important to note the variables above are also used in other sections as needed, such as the aws_security_group section in main.tf …

Resource aws_route53_record variables

Here we define the ID & Name that are used in the ‘lookup’ functionality from our main.tf Route53 section above.

variable "route53_zone" {
        description = "Route53 zone used for DNS records"
        default = {
                id = "Z1ME2RCUVBYEW2"
                name = "tuxlabs.com"
        }
}

It’s important to note that Terraform does not care when or where things are defined across TF files. All files are loaded, and variables require no specific order relative to any other part of the configuration. All that is required is that each variable you reference has a value defined via the variable keyword in a TF file somewhere.

Output.tf

Again, I want to remind folks you could put all of this Terraform configuration in one file if you wanted to, but I choose to split things up for readability and simplicity. So we have an output.tf file specifically for the output command; there is only one output, which lists the results of our Terraform run upon success.

output "jump-box-details" {
	value = "${aws_route53_record.jump_box_dns.fqdn} - ${aws_instance.jump_box.private_ip} - ${aws_instance.jump_box.id} - ${aws_instance.jump_box.availability_zone}"
}

Ok, so let’s run this and see how it looks… First a reminder: to test your config you can run terraform plan first. It will tell you the changes it’s going to make… example:

➜  jump terraform plan
var.aws_access_key_id
  Enter a value: blahblah

var.aws_secret_access_key
  Enter a value: blahblahblahblah

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.jump_box
    ami:                                       "ami-2af1ca3d"
    associate_public_ip_address:               "<computed>"
    availability_zone:                         "<computed>"
    ebs_block_device.#:                        "<computed>"
    ephemeral_block_device.#:                  "<computed>"
    instance_state:                            "<computed>"
    instance_type:                             "t2.medium"

...

Plan: 3 to add, 0 to change, 0 to destroy.

If everything looks good & is green, you are ready to apply.
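
For clarity, the command that produces the output below is simply the following (it prompts for the same AWS key variables as plan did):

terraform apply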

aws_security_group.jump_box_sg: Creation complete
aws_route53_record.jump_box_dns: Still creating... (10s elapsed)
aws_route53_record.jump_box_dns: Still creating... (20s elapsed)
aws_route53_record.jump_box_dns: Still creating... (30s elapsed)
aws_route53_record.jump_box_dns: Still creating... (40s elapsed)
aws_route53_record.jump_box_dns: Still creating... (50s elapsed)
aws_route53_record.jump_box_dns: Creation complete

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

jump-box-details = jump.tuxlabs.com - 10.10.195.46 - i-037f5b15bce6cc16d - us-east-1b
➜  jump

Congratulations, you now have a jump box in AWS using Terraform. Remember to attach the new security group to each machine you want to grant access to, and start locking down your jump box / bastion and VMs.

Outro

Remember, if you take the above config and try to run it, swapping out only the variables, it may error about a required module. Downloading the required modules is as simple as typing ‘terraform get‘, which I believe the error message even tells you 🙂

So again, this was a brief intro to Terraform; it does a lot & is extremely powerful. One of the things I did when setting up a Mongo cluster using Terraform was to take advantage of a map to change node count per region. So if you wanted to deploy a different number of instances in different regions, your config might look something like…

main.tf

  count = "${var.region_instance_count[var.region_name]}"

variables.tf

variable "region_instance_count" {
  type = "map"
  default = {
    us-east-1 = 2
    us-west-2 = 1
    eu-central-1 = 1
    eu-west-1 = 1
  }
}

It also supports split() if you want to turn a delimited string variable into a list of values.

Another couple of things before I forget: terraform apply doesn’t just set up new infrastructure, it can also be used to modify existing infrastructure, which is a very powerful feature. So if you have something deployed and want to make a change, terraform apply is your friend.

And finally, when you are done with the infrastructure you spun up or it’s time to bring her down… ‘terraform destroy’

➜  jump terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

var.aws_access_key_id
  Enter a value: blahblah

var.aws_secret_access_key
  Enter a value: blahblahblahblah

...

Destroy complete! Resources: 3 destroyed.
➜  jump

I hope this article helps.

Happy Terraforming 😉

How To: Interact with AWS S3 Using the Go SDK and not lose your mind

After these messages we will carry on with our regularly scheduled programming…

Yesterday (during the scribbling of this article) AWS suffered one of its worst outages in history, in the us-east-1 region. A reminder to us all to be multi-region and, more importantly, multi-cloud. Please see my other articles on HA deployments using AWS and my perspective & caution on the path to centralization or singularity we appear to be on (though the outage may help people wake up).

Now back to your regularly scheduled program…


My team and I are building a CMDB for AWS, which provides us with everything happening in our AWS environment + OS level metadata + change history. There will be a separate article on the CMDB journey, but today I want to focus on a specific AWS service called S3, which is their object store. S3 is a bit of a special snowflake when it comes to AWS services, and because of that I ran into challenges structuring my code: up until S3 (which was the last service I wrote code for) everything had been very similar and easily modularized. We will get to more detail, but let’s start this article by covering how to use the Go SDK for AWS.

Dependencies

This article assumes you already program in Go and have Go installed on your machine. To get started you will need a couple additional items.

  1. Download and install the SDK here : https://github.com/aws/aws-sdk-go 
  2. This is the documentation for the SDK, you will need it, bookmark it : http://docs.aws.amazon.com/sdk-for-go/api/
  3. It is extremely helpful when working with the APIs to have aws-shell installed : https://github.com/awslabs/aws-shell
    • This enables you to interact with AWS APIs on the fly, so you can understand the output of commands as you search for what you are trying to accomplish.

The Collector Structure

The collector is the component in my CMDB architecture that does all the work of collecting the metadata that we shove into our CMDB. The collector is  heavily threaded using go routines for performance. The basic structure looks like this.

  • Call a go routine for each service you want to collect
    • //pass in all accounts, regions (from config-file) and pre-established awsSessions for each account you are collecting
    • Inside of a service’s go routine, loop over accounts & regions
      • Launch a go routine for each account & region
        • Inside of those go routines make your AWS API call(s), for example DescribeInstances
        • Store the response (I loop through mine and store it in a map using the resource-id as the key)
        • Finally, kick off another go routine to write to our API and store the data.

Ok, so hopefully that seems straightforward as a basic structure… let’s get to why S3 threw me for a loop.

S3 Challenges

It will be best if I show you what I tried first. Basically, I tried to marry my existing pattern to S3, and that certainly was a bad idea from the start. Here was the structure of the S3 part of the code.

  • The S3 go routine gets called from main.go
  • //all accounts, regions and AWS Sessions are passed into the next go routine
    • Inside of the S3 go routine, loop over accounts & regions
      • Launch a go routine for each account & region
        • Inside of those go routines List S3 Buckets
          • For each S3 buckets returned
            • Call additional API’s such as GetBucketTagging()

Ok, so what happened? I got a lot of errors, that’s what 🙂 Ones like this…

BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region
status code: 301, request id:

At first, I thought maybe my code wasn’t thread safe…but that didn’t make much sense given the other services had no issues like this.

So as I debugged my code, I began to realize the bucket list I was getting wasn’t limited to the region I was passing in / establishing a session for.

Naturally, I googled: can I list buckets for a single region?

https://github.com/aws/aws-sdk-java/issues/920 (even though this is the Java SDK it still applies)..

"spfink commented on Nov 16, 2016
It is not possible to list the buckets in a single region. Regardless of the endpoint or region that you set, when calling list buckets you will get buckets from all regions.

In order to determine the region of a bucket you can use getBucketLocation(String bucketName).

https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/AmazonS3.java#L1026”

Ah, ok: the bucket list returned on an AWS session established with a specific account and region ignores the region, because S3 buckets are global to an account, and thus all buckets under an account are returned in the ListBuckets() call. I knew S3 buckets were global per account, but failed to expect that behavior/output when a specific region is passed into the SDK/API.

Ok so how then can I distinguish where a bucket actually lives?

As spfink says above, I needed to run GetBucketLocation() per bucket. Thus my code structure started to look like this…

  • For each account, region
    • ListBuckets
      • For each bucket returned in that account, region
        • GetBucketLocation
        • If a LocationConstraint (region) is returned, set the new region (otherwise if response is null, do nothing)
        • Get tags for the bucket in account, region

With this code I was still getting errors about region, but why ?

Well, I made the mistake of thinking a ‘null’ response from the API for LocationConstraint had no meaning (or meant I could query the bucket from any region). Wrong: null actually means us-east-1 (see my google result below). Thus the IF condition evaluated false, the existing region from the outer loop was used because GetBucketLocation() returned null, and this resulted in many errors.

Here’s what the google turned up..

https://github.com/aws/aws-cli/issues/564

"kyleknap commented on Mar 16, 2015
@xurume

For buckets located in US Standard, the location constraint will be null. For S3, here is the list of region names with their corresponding regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region. Notice that the location constraint is none for US Standard.

The CLI uses the values in the region column for the --region parameter. So for S3's US Standard, you need to use us-east-1 as the region.”

So let’s clarify my mistakes…

  1. The S3 ListBuckets call returns all buckets under an account globally.
    • It does not abide by a region configured in an API session.
    • Thus I/you should not loop over regions from a config file for the S3 service.
    • Instead I/you need to find a bucket’s ‘real’ location using GetBucketLocation.
    • Then set the region for actions other than ListBuckets (which is global per account and ignores the region passed).
  2. GetBucketLocation returning null doesn’t mean the bucket is global or that you can interact with the bucket from any endpoint you please… it actually means us-east-1 (see the CLI example after this list): http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
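
You can see the same behavior from the AWS CLI; a quick hedged example, where the bucket name is a placeholder for a bucket living in US Standard (us-east-1):

aws s3api get-bucket-location --bucket my-us-standard-bucket
{
    "LocationConstraint": null
}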

The Working Code

So in the end the working code for S3 looks like this…

  • collector/main.go fires off a bunch of go routines per service we are collecting for.
  • It passes in accounts, and regions from a config file.
  • For the S3 service/file under the ‘services’ package the entry point is a function called StoreS3Resources.

Everything in the code should be self explanatory from that point on. You will note a function call to ‘writeToCis’… CIS is the name of our internal CMDB project/service. Again, I will later be blogging about the entire system in detail once we open source the code. Please keep in mind this code is MVP; it will be changed a lot (optimization, modularization, bug fixes, etc.) before & after we open source it, but for now here is the quick and dirty, yet hopefully functional, code 🙂 Use at your own risk!

package services

import (
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/aws"
	"sync"
	"fmt"
	"time"
	"encoding/json"
	"strings"
)

var wgS3BucketList sync.WaitGroup
var wgS3GetBucketDetails sync.WaitGroup
var accountRegionsMap = make(map[string]map[string][]string)
var accountToBuckets = make(map[string][]string)
var bucketToAccount = make(map[string]string)
var defaultRegion string = "us-east-1"

func writeS3ResourceToCis(resType string, resourceData map[string]interface{}, account string, region string){
	b, err := json.Marshal(resourceData)
	check(err)

	err, status, url := writeToCisBulk(resType, region, b)
	check(err)
	fmt.Printf("%s - %s - %s - %s - Bytes: %d\n", status, url, account, region, cap(b))
}

func StoreS3Resources(awsSessions map[string]*session.Session, accounts []string, configuredRegions []string) {
	s3Start := time.Now()

	wgS3BucketList.Add(1)
	go func () {
		defer wgS3BucketList.Done()
		for _, account := range accounts {
			awsSession := awsSessions[account]
			getS3AccountBucketList(awsSession, account)
		}
	}()
	wgS3BucketList.Wait()

	getS3BucketDetails(awsSessions, configuredRegions)

	s3Elapsed := time.Since(s3Start)
	fmt.Printf("S3 completed in: %s\n", s3Elapsed)
}

func getS3AccountBucketList(awsSession *session.Session, account string) {
	svcS3 := s3.New(awsSession, &aws.Config{Region: aws.String(defaultRegion)})

	//list returned is for all buckets in an account ( no regard for region )
	resp, err := svcS3.ListBuckets(nil)
	check(err)

	var buckets []string

	for _,bucket := range resp.Buckets {
		buckets = append(buckets, *bucket.Name)

		//reverse mapping needed for lookups in other funcs
		bucketToAccount[*bucket.Name] = account
	}

	//a list of buckets per account
	accountToBuckets[account] = buckets
}


func getS3BucketLocation(awsSession *session.Session, bucket string, bucketToRegion map[string]string, regionToBuckets map[string][]string)  {
	wgS3GetBucketDetails.Add(1)
	go func() {
		defer wgS3GetBucketDetails.Done()
		svcS3 := s3.New(awsSession, &aws.Config{Region: aws.String(defaultRegion)}) // default

		var requiredRegion string

		locationParams := &s3.GetBucketLocationInput{
			Bucket: aws.String(bucket),
		}
		respLocation, err := svcS3.GetBucketLocation(locationParams)
		check(err)

		//We must query the bucket based on the location constraint
		if strings.Contains(respLocation.String(), "LocationConstraint") {
			requiredRegion = *respLocation.LocationConstraint
		} else {
			//if getBucketLocation is null us-east-1 used
			//http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
			requiredRegion = "us-east-1"
		}

		bucketToRegion[bucket] = requiredRegion
		regionToBuckets[requiredRegion] = append(regionToBuckets[requiredRegion], bucket)
		accountRegionsMap[bucketToAccount[bucket]] = regionToBuckets
	}()
}

func getS3BucketsTags(awsSession *session.Session, buckets []string, account string, region string) {
	wgS3GetBucketDetails.Add(1)
	go func() {
		defer wgS3GetBucketDetails.Done()
		svcS3 := s3.New(awsSession, &aws.Config{Region: aws.String(region)})

		var resourceData = make(map[string]interface{})

		for _, bucket := range buckets {
			taggingParams := &s3.GetBucketTaggingInput{
				Bucket: aws.String(bucket),
			}
			respTags, err := svcS3.GetBucketTagging(taggingParams)
			check(err)

			resourceData[bucket] = respTags
		}
		writeS3ResourceToCis("buckets", resourceData, account, region)
	}()
}


func getS3BucketDetails(awsSessions map[string]*session.Session, configuredRegions []string) {

	for account, buckets := range accountToBuckets {
		//reset regions for each account
		var bucketToRegion = make(map[string]string)
		var regionToBuckets = make(map[string][]string)
		for _,bucket := range buckets {
			awsSession := awsSessions[account]
			getS3BucketLocation(awsSession, bucket, bucketToRegion, regionToBuckets)
		}
	}
	wgS3GetBucketDetails.Wait()

	//Preparing configured regions to make sure we only write to CIS for regions configured
	var configuredRegionsMap = make(map[string]bool)
	for _,region := range configuredRegions {
		configuredRegionsMap[region] = true
	}

	for account := range accountRegionsMap {
		awsSession := awsSessions[account]
		for region, buckets := range accountRegionsMap[account] {
			//Only proceed if it's a configuredRegion from the config file.
			if _, ok := configuredRegionsMap[region]; ok {
				fmt.Printf("%s %s has %d buckets\n", account, region, len(buckets))
				getS3BucketsTags(awsSession, buckets, account, region)
			} else {
				fmt.Printf("Skipping buckets in %s because is not a configured region\n", region)
			}
		}
	}
	wgS3GetBucketDetails.Wait()
}


How To: Launch EC2 Instances In AWS Using The AWS CLI

 

It occurred to me recently that while I have written articles on Boto for AWS (the Python SDK), I have yet to write articles on how to use the AWS CLI, Terraform, and the Go SDK. All of that will come in due time; for starters, this article is going to be about the AWS CLI.

To start you will need to install the AWS CLI  following these links:
https://aws.amazon.com/cli/
https://github.com/aws/aws-cli

Note you will need to make sure you have an account with an access key and have set up the required credentials under ~/.aws/ for the CLI to work. How to do this is covered near the end of the second link above to the git repo.
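
For reference, the quickest way to set those credentials up is with aws configure; a minimal sketch, where the profile name and key values are placeholders:

aws configure --profile my-account
AWS Access Key ID [None]: AKIA...
AWS Secret Access Key [None]: ...
Default region name [None]: us-east-1
Default output format [None]: json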

After that is done you are ready to rock and roll. To test it out you can run…

aws ec2 describe-instances

Assuming your default region and profile settings are correct, it should output JSON.

Launching an EC2 instance

To launch an EC2 instance from the command line, use the command below, replacing the variables preceded with $ with their real values.

aws --profile $account --region $region ec2 run-instances --image-id $image_id --count $count --instance-type $instance_type --key-name $ssh_key_name --subnet-id $subnet_id

(Assuming you have set up the required dependencies, like uploading your SSH key to AWS and specifying its name in the command above, this should launch your VM. See the sketch below for uploading a key.)
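
Uploading an existing public key can also be done from the CLI; a hedged example, where the profile, key name, and key path are placeholders (older CLI versions may want file:// instead of fileb://):

aws --profile my-account --region us-east-1 ec2 import-key-pair \
    --key-name my_ssh_key-rsa-2048 \
    --public-key-material fileb://~/.ssh/id_rsa.pub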

It should be noted there is a lot more you can do to tweak your instance, such as changing the EBS volume size for your root disk or tagging. You will see examples of this in my shell script. The purpose of this article is to share a shell script I have written and use whenever I want to quickly launch a test VM (which is common). For more permanent things I use an infrastructure-as-code approach via Terraform. But the need for launching quick test VMs never goes away, thus this shell script was born. You will notice my script auto-tags our VMs… I do this because in our environment if your VM isn’t tagged appropriately it is deleted, plus it’s courtesy in an AWS environment to tag your resources; otherwise no one will ever know what tree to bark up when there is a problem such as ‘are you still using this cause it looks idle?’ 🙂

My Shell Script for Launching EC2 VM’s

#!/bin/bash

# Global Settings
account="my-account"
region="us-east-1"

# Instance settings
image_id="ami-03ebd214" # ubuntu 14.04
ssh_key_name="my_ssh_key-rsa-2048"
instance_type="m4.xlarge"
subnet_id="subnet-b8214792"
root_vol_size=20
count=1

# Tags
tags_Name="my-test-instance"
tags_Owner="tuxninja"
tags_ApplicationRole="Testing"
tags_Cluster="Test Cluster"
tags_Environment="dev"
tags_OwnerEmail="tuxninja@tuxlabs.com"
tags_Project="Test"
tags_BusinessUnit="Cloud Platform Engineering"
tags_SupportEmail="tuxninja@tuxlabs.com"

echo 'creating instance...'
id=$(aws --profile $account --region $region ec2 run-instances --image-id $image_id --count $count --instance-type $instance_type --key-name $ssh_key_name --subnet-id $subnet_id --block-device-mapping "[ { \"DeviceName\": \"/dev/sda1\", \"Ebs\": { \"VolumeSize\": $root_vol_size } } ]" --query 'Instances[*].InstanceId' --output text)

echo "$id created"

# tag it

echo "tagging $id..."

aws --profile $account --region $region ec2 create-tags --resources $id --tags Key=Name,Value="$tags_Name" Key=Owner,Value="$tags_Owner"  Key=ApplicationRole,Value="$tags_ApplicationRole" Key=Cluster,Value="$tags_Cluster" Key=Environment,Value="$tags_Environment" Key=OwnerEmail,Value="$tags_OwnerEmail" Key=Project,Value="$tags_Project" Key=BusinessUnit,Value="$tags_BusinessUnit" Key=SupportEmail,Value="$tags_SupportEmail"

echo "storing instance details..."
# store the data
aws --profile $account --region $region ec2 describe-instances --instance-ids $id > instance-details.json

echo "create termination script"
echo "#!/bin/bash" > terminate-instance.sh
echo "aws --profile $account --region $region ec2 terminate-instances --instance-ids $id" >> terminate-instance.sh
chmod +x terminate-instance.sh

After substituting the required variables at the top with your real values you can run this script. Notice that after creating the VM I capture the instance details in a file & the ID in a variable so I can subsequently tag it, and then I create a termination script…this makes for very simple operations when you need to repeatedly start and then kill/destroy/delete a VM.

Using these scripts should come in quite handy. A copy of create-instance.sh can be found on my github here.
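
Typical usage ends up being a quick two-step loop; a minimal sketch, assuming you saved the script as create-instance.sh:

chmod +x create-instance.sh
./create-instance.sh        # launches, tags, and writes terminate-instance.sh
# ... do your testing ...
./terminate-instance.sh     # tears the instance back down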

One other thing… I use the normal AWS CLI for automation as shown here… but for poking around interactively I use something called ‘aws-shell’, formerly ‘saws’. Check it out and you won’t be disappointed!

My next post will be on Terraform or the Go SDK…but both are coming soon!
