TuxLabs LLC

All things DevOps


How To: Launch A Jump Host In AWS Using Terraform

Published by tuxninja

I have been a HashiCorp fanboy for a couple of years now. I am impressed with, and happy about, pretty much everything they have done, from Vagrant to Consul and beyond. In short, they make the DevOps world a better place. That being said, this article is about the aptly named Terraform product. Here is how HashiCorp describes Terraform in their own words…

“Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.”

Interestingly enough, that description never points it out directly (it implies it by omitting any mention of specific providers), but Terraform is multi-cloud (or cloud agnostic). Terraform works with AWS, GCP, Azure and OpenStack. In this article we will cover how to use Terraform with AWS.

Step 1: download Terraform. I am not going to cover that part 😉
https://www.terraform.io/downloads.html

Step 2, Configuration…

Configuration

HashiCorp uses their own configuration language (HCL) for Terraform; it is fully JSON compatible, which is nice. The details are covered here: https://github.com/hashicorp/hcl.

After downloading and installing Terraform, it’s time to start generating the configs.

AWS IAM Keys

AWS keys are required to do anything with Terraform. You can read about how to generate an access key / secret key for a user here : http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey

Terraform Configuration Files Overview

When you execute the terraform commands, you should be within a directory containing terraform configuration files. Those files ending in a ‘.tf’ extension will be loaded by Terraform in alphabetical order.

Before we jump into our configuration files for our Jump box, it may be helpful to review a quick primer on the syntax here https://www.terraform.io/docs/configuration/syntax.html

& some more advanced features such as lookup here https://www.terraform.io/docs/configuration/interpolation.html

Most users of Terraform choose to make their configs modular, resulting in multiple .tf files with names like main.tf, variables.tf and data.tf… This is not required; you can choose to put everything in one big Terraform file, but I caution you: modularity/compartmentalization is always a better approach than one monolithic file. Let’s take a look at our main.tf.

Main.tf

Typically, if you only see one Terraform config file, it is called main.tf. Most commonly there will be at least one other file called variables.tf, used specifically for providing values for variables referenced in other TF files such as main.tf. Let’s take a look at our main.tf file section by section.

Provider

The provider keyword is used to identify the platform (cloud) you will be talking to, whether it is AWS or another cloud. In our case it is AWS, and we define three configuration items: an access key, a secret key, and a region, all of which we insert variables for; those variables will later be looked up / translated into real values in variables.tf.
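A minimal provider block looks something like this (the variable names are illustrative, not necessarily the exact ones from my files):

```hcl
provider "aws" {
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
  region     = "${var.aws_region}"
}
```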

Resource aws_instance

This section defines a resource, which is an “aws_instance” that we are calling “jump_box”. We define all configuration requirements for that instance, substituting variable names where necessary and hard coding the value in a few cases. Notice we are attaching two security groups to the instance to allow for ICMP & SSH. We are also tagging our instance, which is critical in an AWS environment so your administrators/teammates have some idea about the machine that is spun up and what it is used for.

Btw, this resource type is provided by an AWS module found here https://www.terraform.io/docs/providers/aws/r/instance.html

You have to download that module using terraform get (which you can do once you have written some config files 🙂 ).

Also, note the usage of ‘user_data’ here to update the machine’s packages at boot time. This is an AWS feature that is exposed through the AWS module in Terraform.
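Putting those pieces together, the resource block looks roughly like this (the instance type, variable names and the cloud-init script are placeholders/examples, not my exact config):

```hcl
resource "aws_instance" "jump_box" {
  ami                    = "${var.ami}"
  instance_type          = "t2.micro"
  key_name               = "${var.key_name}"
  subnet_id              = "${var.subnet_id}"
  vpc_security_group_ids = ["${var.icmp_security_group_id}", "${var.ssh_security_group_id}"]

  # Update packages on first boot via cloud-init user data
  user_data = <<EOF
#!/bin/bash
apt-get update -y
apt-get upgrade -y
EOF

  tags {
    Name  = "${var.instance_prefix}-jump"
    Owner = "tuxninja"
  }
}
```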

Resource aws_security_group

Next we define a new security group (vs. attaching an existing one as in the section above). We are creating this new security group so other VMs in the environment can attach it later, allowing SSH from the jump host to those VMs.

Also notice under cidr_blocks we define a single IP address, a /32 of our jump host… but more important is to notice how we determine that jump host’s IP address: we use .private_ip to access the attribute of the “jump_box” aws_instance we are creating/just created in AWS. That is pretty cool.
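A sketch of that security group (the group name and the variable holding the VPC ID are illustrative):

```hcl
resource "aws_security_group" "ssh_from_jump" {
  name        = "ssh-from-jump"
  description = "Allow SSH from the jump host only"
  vpc_id      = "${var.vpc_id}"

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    # Reference the private IP of the jump box defined above
    cidr_blocks = ["${aws_instance.jump_box.private_ip}/32"]
  }
}
```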

Resource aws_route53_record

The last entry in our main.tf creates a DNS entry for our jump host in Route53. Again, notice we are specifying a name of ‘jump.’ as a prefix to the entry, but the remainder of the FQDN is figured out by the lookup function. The lookup function is used to look up values inside of a map. In this case the map is defined in our variables.tf, which we will review next.
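Something along these lines (the map variable name route53_zone is illustrative):

```hcl
resource "aws_route53_record" "jump" {
  zone_id = "${lookup(var.route53_zone, "id")}"
  name    = "jump.${lookup(var.route53_zone, "name")}"
  type    = "A"
  ttl     = "300"
  records = ["${aws_instance.jump_box.private_ip}"]
}
```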

Variables.tf

I will attempt to match the section structure I used above for main.tf when explaining the variables in variables.tf, though variables.tf itself is not laid out in such clear sections.

Provider variables

When Terraform runs it compiles all .tf files and replaces any key that references a variable with the value it finds defined (in our case) in the variables.tf file under the variable keyword. Notice that the first two variables are empty; they have no value defined. Why? Terraform supports taking input at runtime, so by leaving these values blank, Terraform will prompt us for the values. Region is pretty straightforward: default is the value returned, and description in this case is really unused except as a comment.
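That part of variables.tf looks roughly like this (the region default is just an example):

```hcl
variable "aws_access_key" {} # left empty on purpose, Terraform prompts at runtime
variable "aws_secret_key" {} # left empty on purpose, Terraform prompts at runtime

variable "aws_region" {
  description = "AWS region to deploy the jump host into"
  default     = "us-west-2"
}
```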

I would like to demonstrate the behavior of Terraform described above when the variables are left empty.

At this phase you would enter your AWS key info, and Terraform would ‘plan’ out your deployment, meaning it runs through your configs and prints its plan to your screen, but does not actually change any state in AWS. That is the difference between terraform plan & terraform apply.

Resource aws_instance variables

Here we define the values of our AMI, SSH key, instance prefix for the name, tags, security groups, and subnets. Again this should be pretty straightforward; no magic here, just the use of string variables & maps where necessary.
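Roughly along these lines (every ID and name below is a placeholder):

```hcl
variable "ami" {
  description = "Ubuntu AMI ID for the chosen region"
  default     = "ami-12345678"
}

variable "key_name" {
  default = "my-ssh-key"
}

variable "instance_prefix" {
  default = "tuxlabs"
}

variable "subnet_id" {
  default = "subnet-12345678"
}

variable "vpc_id" {
  default = "vpc-12345678"
}

variable "icmp_security_group_id" {
  default = "sg-11111111"
}

variable "ssh_security_group_id" {
  default = "sg-22222222"
}
```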

It’s important to note the variables above are also used in other sections as needed, such as the aws_security_group section in main.tf …

Resource aws_route53_record variables

Here we define the ID & Name that are used in the ‘lookup’ functionality from our main.tf Route53 section above.
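For example (the zone ID and name are placeholders):

```hcl
variable "route53_zone" {
  type = "map"
  default = {
    id   = "Z1234567890ABC"
    name = "aws.tuxlabs.com"
  }
}
```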

It’s important to note that Terraform does not care when or where things are defined across TF files. All files are loaded, and variables require no specific ordering relative to any other part of the configuration. All that is required is that every variable you try to insert a value for has a value defined via the variable keyword in a TF file somewhere.

Output.tf

Again, I want to remind folks you could put all of this Terraform syntax in one file if you wanted to, but I choose to split things up for readability and simplicity. So we have an output.tf file specifically for outputs; there is only one output block, which prints the results of our Terraform run upon success.
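A minimal output.tf in this spirit (the output names are illustrative):

```hcl
output "jump_host_private_ip" {
  value = "${aws_instance.jump_box.private_ip}"
}

output "jump_host_fqdn" {
  value = "${aws_route53_record.jump.fqdn}"
}
```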

Ok, so let’s run this and see how it looks… First a reminder: to test your config you can run terraform plan first. It will tell you the changes it’s going to make… example:
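From the directory containing your .tf files:

```bash
terraform plan
```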

If everything looks good & is green, you are ready to apply.
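Applying is just as simple:

```bash
terraform apply
```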

Congratulations, you now have a jump box in AWS using Terraform. Remember to attach the required security group to each machine you want to grant access to, and start locking down your jump box / bastion and VM’s.

Outro

Remember, if you take the above config and try to run it, swapping out only the variables, it will error with something about a required module. Downloading the required modules is as simple as typing ‘terraform get‘, which I believe the error message even tells you 🙂

So again, this was a brief intro to Terraform; it does a lot & is extremely powerful. One of the things I did when setting up a Mongo cluster using Terraform was to take advantage of a map to change the node count per region. So if you wanted to deploy a different number of instances in different regions, your config might look something like…

main.tf
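A rough sketch of the idea (the resource and variable names here are illustrative, not my actual Mongo configs):

```hcl
resource "aws_instance" "mongo" {
  count         = "${lookup(var.node_count_per_region, var.aws_region)}"
  ami           = "${var.ami}"
  instance_type = "t2.medium"

  tags {
    Name = "mongo-${var.aws_region}-${count.index}"
  }
}
```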

variables.tf
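And the matching map (the counts are made up):

```hcl
variable "node_count_per_region" {
  type = "map"
  default = {
    "us-east-1" = 3
    "us-west-2" = 5
    "eu-west-1" = 1
  }
}
```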

It also supports split if you want to multi-value a string variable.
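For example, turning a comma separated string into a list (the variable name is illustrative):

```hcl
variable "availability_zones" {
  default = "us-west-2a,us-west-2b,us-west-2c"
}

# elsewhere: azs = "${split(",", var.availability_zones)}"
```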

Another couple of things before I forget: terraform apply doesn’t just set up new infrastructure, it can also be used to modify existing infrastructure, which is a very powerful feature. So if you have something deployed and want to make a change, terraform apply is your friend.

And finally, when you are done with the infrastructure you spun up, or it’s time to bring her down… ‘terraform destroy’.

I hope this article helps.

Happy Terraforming 😉

 

How To: Interact with AWS S3 Using the Go SDK and not lose your mind

Published by tuxninja

After these messages we will carry on with our regularly scheduled programming…

Yesterday (during the scribbling of this article) AWS suffered one of its worst outages in history, in the us-east-1 region. A reminder to us all to be multi-region and, more importantly, multi-cloud. Please see my other articles on HA deployments using AWS and my perspective & caution on the path to centralization or singularity we appear to be on (though the outage may help people wake up).

Now back to your regularly scheduled program…


My team and I are building a CMDB for AWS, which provides us with everything happening in our AWS environment + OS level metadata + change history. There will be a separate article on the CMDB journey, but today I want to focus on a specific AWS service called S3, which is their object store. S3 is a bit of a special snowflake when it comes to AWS services, and because of that I ran into challenges structuring my code: up until S3 (which was the last service I wrote code for) everything had been very similar and easily modularized. We will get to more detail, but let’s start this article by covering how to use the Go SDK for AWS.

Dependencies

This article assumes you already program in Go and have Go installed on your machine. To get started you will need a couple additional items.

  1. Download and install the SDK here : https://github.com/aws/aws-sdk-go 
  2. This is the documentation for the SDK, you will need it, bookmark it : http://docs.aws.amazon.com/sdk-for-go/api/
  3. It is extremely helpful when working with the API’s to have aws-shell installed : https://github.com/awslabs/aws-shell
    • This enables you to interact with AWS API’s on the fly so you can understand the output of commands as you are searching for what you are trying to accomplish.

The Collector Structure

The collector is the component in my CMDB architecture that does all the work of collecting the metadata that we shove into our CMDB. The collector is heavily threaded using go routines for performance. The basic structure looks like this (a rough Go sketch follows the list).

  • Call a go routine for each service you want to collect
    • //pass in all accounts, regions (from config-file) and pre-established awsSessions to each account you are collecting
    • Inside of a service’s go routine, loop over accounts & regions
      • Launch a go routine for each account & region
        • Inside of those go routines make your AWS API call(s), example DescribeInstances
        • Store the response (I loop through mine and store it in a map using the resource-id as the key)
        • Finally, kick off another go routine to write to our API and store the data.
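Here is a rough sketch in Go of that fan-out pattern for a single service (EC2 instances). The sessions map, the storeInCMDB helper and all of the names are illustrative stand-ins, not the real collector code:

```go
// Rough illustration of the collector fan-out described above.
// The sessions map and storeInCMDB are hypothetical stand-ins.
package collector

import (
	"log"
	"sync"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// CollectEC2 is launched as one go routine per service from main.go.
func CollectEC2(accounts, regions []string, sessions map[string]*session.Session, wg *sync.WaitGroup) {
	defer wg.Done()
	var inner sync.WaitGroup
	for _, account := range accounts {
		for _, region := range regions {
			inner.Add(1)
			// One go routine per account & region.
			go func(account, region string) {
				defer inner.Done()
				// Re-scope the pre-established account session to this region.
				svc := ec2.New(sessions[account], aws.NewConfig().WithRegion(region))
				resp, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{})
				if err != nil {
					log.Printf("DescribeInstances failed for %s/%s: %v", account, region, err)
					return
				}
				// Store the response in a map keyed by resource ID.
				instances := make(map[string]*ec2.Instance)
				for _, res := range resp.Reservations {
					for _, inst := range res.Instances {
						instances[aws.StringValue(inst.InstanceId)] = inst
					}
				}
				// Finally, kick off another go routine to write to our API.
				go storeInCMDB(account, region, instances)
			}(account, region)
		}
	}
	inner.Wait()
}

// storeInCMDB is a placeholder for the call that writes to our CMDB API.
func storeInCMDB(account, region string, instances map[string]*ec2.Instance) {
	log.Printf("%s/%s: collected %d instances", account, region, len(instances))
}
```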

Ok, so hopefully that seems straight forward as a basic structure… let’s get to why S3 threw me for a loop.

S3 Challenges

It will be best if I show you what I tried first; basically I tried to marry my existing pattern to S3, and that certainly was a bad idea from the start. Here was the structure of the S3 part of the code.

  • The S3 go routine gets called from main.go
  • //all accounts, regions and AWS Sessions are passed into the next go routine
    • Inside of the S3 go routine, loop over accounts & regions
      • Launch a go routine for each account & region
        • Inside of those go routines List S3 Buckets
          • For each S3 bucket returned
            • Call additional API’s such as GetBucketTagging()

Ok, so what happened? I got a lot of errors, that’s what 🙂 Ones like this…

At first, I thought maybe my code wasn’t thread safe…but that didn’t make much sense given the other services had no issues like this.

So as I debugged my code, I began to realize the bucket list I was getting wasn’t limited to the region I was passing in / establishing a session for.

Naturally, I googled can I list buckets for a single region ?

https://github.com/aws/aws-sdk-java/issues/920 (even though this is the Java SDK it still applies)..

Ah, ok: the bucket list returned on an AWS session established with a specific account and region ignores the region, because S3 buckets are global to an account; thus all buckets under an account are returned by the ListBuckets() call. I knew S3 buckets were global per account, but failed to expect matching behavior/output when a specific region is passed into the SDK/API.

Ok so how then can I distinguish where a bucket actually lives?

As spfink says above, I needed to run GetBucketLocation() per bucket. Thus my code structure started to look like this…

  • For each account, region
    • ListBuckets
      • For each bucket returned in that account, region
        • GetBucketLocation
        • If a LocationConstraint (region) is returned, set the new region (otherwise if response is null, do nothing)
        • Get tags for the bucket in account, region

With this code I was still getting errors about region, but why ?

Well, I made the mistake of thinking a ‘null’ response from the API for LocationConstraint had no meaning (or meant the bucket could be queried from any region). Wrong: null actually means us-east-1 (see what my googling turned up below). Thus the IF condition evaluated false, the existing region from the outer loop was used whenever GetBucketLocation() returned null, and this resulted in many errors.

Here’s what the google turned up..

https://github.com/aws/aws-cli/issues/564

So let’s clarify my mistakes…

  1. The S3 ListBuckets call returns all buckets under an account globally.
    • It does not abide by a region configured in an API Session
    • Thus I/you should not loop over regions from a config file for the S3 service.
    • Instead I/you need to find a bucket’s ‘real’ location using GetBucketLocation
    • Then set the region for actions other than ListBuckets (which is global per account and ignores the region passed).
  2. GetBucketLocation returning null doesn’t mean the bucket is global or that you can interact with the bucket from any endpoint you please… it actually means us-east-1 http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

The Working Code

So in the end the working code for S3 looks like this…

  • collector/main.go fires off a bunch of go routines per service we are collecting for.
  • It passes in accounts, and regions from a config file.
  • For the S3 service/file under the ‘services’ package the entry point is a function called StoreS3Resources.

Everything in the code should be self explanatory from that point on. You will note a function call to ‘writeToCis’… CIS is the name of our internal CMDB project/service. Again, I will later be blogging about the entire system in detail once we open source the code. Please keep in mind this code is MVP; it will be changed a lot (optimization, modularization, bug fixes, etc.) before & after we open source it, but for now here is the quick and dirty, yet hopefully functional, code 🙂 Use at your own risk!
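Since the full collector isn’t reproduced in this post, here is a condensed sketch of the final S3 flow in Go: list buckets once per account, resolve each bucket’s real region (treating null as us-east-1), then make region-scoped calls. The function and helper names are illustrative, not the real StoreS3Resources code:

```go
// Condensed sketch of the S3 flow described above: list buckets once per
// account, resolve each bucket's real region, then make region-scoped calls.
package services

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func collectS3(account string, sess *session.Session) {
	// ListBuckets is global per account; no need to loop over regions here.
	svc := s3.New(sess)
	out, err := svc.ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		log.Printf("ListBuckets failed for %s: %v", account, err)
		return
	}

	for _, b := range out.Buckets {
		name := aws.StringValue(b.Name)

		loc, err := svc.GetBucketLocation(&s3.GetBucketLocationInput{Bucket: b.Name})
		if err != nil {
			log.Printf("GetBucketLocation failed for %s: %v", name, err)
			continue
		}

		// A nil LocationConstraint does NOT mean "any region" -- it means us-east-1.
		region := "us-east-1"
		if loc.LocationConstraint != nil {
			region = aws.StringValue(loc.LocationConstraint)
		}

		// Now make region-scoped calls, e.g. fetch the bucket's tags.
		regional := s3.New(sess, aws.NewConfig().WithRegion(region))
		tags, err := regional.GetBucketTagging(&s3.GetBucketTaggingInput{Bucket: b.Name})
		if err != nil {
			// Buckets with no tag set return an error; fine to skip for this sketch.
			continue
		}
		log.Printf("%s (%s): %d tags", name, region, len(tags.TagSet))
	}
}
```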

How To: Launch EC2 Instances In AWS Using The AWS CLI

Published by tuxninja

 

It occurred to me recently that while I have written articles on Boto for AWS (the Python SDK), I have yet to write articles on how to use the AWS CLI, Terraform and the Go SDK. All of that will come in due time; for starters this article is going to be about the AWS CLI.

To start you will need to install the AWS CLI  following these links:
https://aws.amazon.com/cli/
https://github.com/aws/aws-cli

Note you will need to make sure you have an account with an access key and have set up the required credentials under ~/.aws/ for the CLI to work. How to do this is covered near the end of the second link above (the git repo).

After that is done you are ready to rock and roll. To test it out you can run…
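Any read-only call works as a quick smoke test, for example:

```bash
aws ec2 describe-regions
```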

Assuming your default region and profile settings are correct, it should output JSON.

Launching an EC2 instance

To launch an EC2 instance from the command line, use the command below, replacing the variables preceded with $ with their real values.
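It looks roughly like this (the $ variables stand in for your own AMI, instance type, key, security group and subnet):

```bash
aws ec2 run-instances \
  --image-id $AMI_ID \
  --count 1 \
  --instance-type $INSTANCE_TYPE \
  --key-name $KEY_NAME \
  --security-group-ids $SECURITY_GROUP_ID \
  --subnet-id $SUBNET_ID
```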

(Assuming you have set up the required dependencies, like uploading your SSH key to AWS and specifying its name in the command above, this should launch your VM.)

It should be noted there is a lot more you can do to tweak your instance, such as changing the EBS volume size for the root disk or tagging. You will see examples of this in my shell script. The purpose of this article is to share a shell script I have written and use whenever I want to quickly launch a test VM (which is common). For more permanent things I use an infrastructure as code approach via Terraform, but the need for launching quick test VMs never goes away, thus this shell script was born. You will notice my script auto-tags our VMs… I do this because in our environment, if your VM isn’t tagged appropriately it is deleted. Plus, it’s courtesy in an AWS environment to tag your resources; otherwise no one will ever know what tree to bark up when there is a problem such as ‘are you still using this, cause it looks idle?’ 🙂

My Shell Script for Launching EC2 VM’s

After substituting the required variables at the top with your real values you can run this script. Notice that after creating the VM I capture the instance details in a file & the ID in a variable so I can subsequently tag it, and then I create a termination script…this makes for very simple operations when you need to repeatedly start and then kill/destroy/delete a VM.
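The real create-instance.sh lives on my GitHub (linked below); a trimmed-down sketch of the idea looks like this (all IDs and tag values are placeholders):

```bash
#!/bin/bash
# Trimmed-down sketch of create-instance.sh -- substitute your own values.
AMI_ID="ami-12345678"
INSTANCE_TYPE="t2.micro"
KEY_NAME="my-ssh-key"
SECURITY_GROUP_ID="sg-12345678"
SUBNET_ID="subnet-12345678"
NAME_TAG="tuxlabs-test"
OWNER_TAG="tuxninja"

# Launch the instance and capture the full response in a file.
aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --count 1 \
  --instance-type "$INSTANCE_TYPE" \
  --key-name "$KEY_NAME" \
  --security-group-ids "$SECURITY_GROUP_ID" \
  --subnet-id "$SUBNET_ID" > instance-details.json

# Grab the instance ID so we can tag it and build a termination script.
INSTANCE_ID=$(grep -o '"InstanceId": "[^"]*' instance-details.json | head -1 | cut -d'"' -f4)

aws ec2 create-tags --resources "$INSTANCE_ID" \
  --tags Key=Name,Value="$NAME_TAG" Key=Owner,Value="$OWNER_TAG"

# Write a matching terminate script for easy cleanup later.
echo "aws ec2 terminate-instances --instance-ids $INSTANCE_ID" > terminate-instance.sh
chmod +x terminate-instance.sh

echo "Launched $INSTANCE_ID (terminate with ./terminate-instance.sh)"
```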

Using these scripts should come in quite handy. A copy of create-instance.sh can be found on my github here.

One other thing… I use the normal AWS CLI for automation as shown here, but for poking around interactively I use something called ‘aws-shell’, formerly ‘saws’. Check it out and you won’t be disappointed!

My next post will be on Terraform or the Go SDK…but both are coming soon!

Setting up Netflix’s Edda (CMDB) in AWS on Ubuntu

Published by tuxninja

If you are running any kind of environment with more than 10 servers, then you need a CMDB (Configuration Management DataBase). CMDBs are the brain of your fleet & its environment. You can store anything in a CMDB, but commonly the metadata in CMDBs consists of any of the following: physical & digital asset inventory, software licenses, software configuration data, policy information, relationships (i.e. this VM -> compute -> rack -> availability zone -> datacenter), automation metadata, and more… they also commonly provide change history for your environment.

In the world of infrastructure as code, CMDB is king.

CMDBs enable endless automation possibilities; without them you are stuck gathering and collecting ‘current’ configuration state about your infrastructure every time you want to perform an automated change or run an audit/report. In my career I have built or been a part of CMDB efforts at nearly every company I have worked for. They are simply necessary, and by their nature they tend to require the choice of ‘build it ourselves’ vs ‘buy or run something’.

However, if you have the luxury of only running in AWS, you are in luck, because Netflix (the AWS poster child) open sourced Edda in 2012 for this purpose!

Rather than talk about the specific features of Edda (refer to the blog post or documentation for that), I want to keep this article short and jump right into setting up Edda, which is a bit tricky, because the documentation is out of date!

Setting Up Edda (2016)

First, in AWS you need to set up an EC2 VM that has at least 6G of disk for the OS + dependencies including Mongo, and then however much disk you need to store the metadata for your environment (keep in mind it keeps change history). Personally, I just created a root partition with 100G to keep things simple. For the instance type I used ‘m4.xlarge’, and the Ubuntu version is 14.04.

After booting the VM, SSH to it and create a directory, wherever your storage is allocated partition-wise, to store Edda & its dependencies. I will be using /cmdb/ in my example.

Initial Install Steps

For the record, the Edda wiki has the build steps wrong; it appears they are no longer using Gradle, but have switched to SBT… which reminds me, be aware Edda is written in Scala, which isn’t as popular as Java, Python, etc. In addition it’s functional programming, which I don’t personally know a lot about, but I hear it’s got quite the learning curve… so beware if you need to make custom code changes, I would not recommend it unless you know Scala! 🙂
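Getting the source and building it goes roughly like this; treat the exact sbt task as an assumption and defer to the README in the Edda repo:

```bash
cd /cmdb
git clone https://github.com/Netflix/edda.git
cd edda
# Build with sbt -- the exact task may differ by version, check the repo's README
sbt compile
```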

After the build of Edda succeeds, install Mongo:
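On Ubuntu 14.04 the stock package is enough to get going (you can of course install a newer Mongo from the upstream repos instead):

```bash
sudo apt-get update
sudo apt-get install -y mongodb
```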

That’s it for dependencies

Configuring Mongo

For Edda to use Mongo all we need to do is ‘use’ the database we want to use for Edda & create an associated user. (Mongo will auto-create DB’s upon insert).
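In the mongo shell, that is something like the following (the username/password are placeholders; on older Mongo versions the equivalent call is db.addUser):

```
$ mongo
> use edda
> db.createUser({ user: "edda", pwd: "supersecret", roles: [ "readWrite", "dbAdmin" ] })
> exit
```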

You can test the user is working by doing… 
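For example, authenticating directly against the edda database with the user created above:

```bash
mongo edda -u edda -p supersecret
```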

Configuring Edda

Under /cmdb/edda/src/main/resources we need to modify ‘edda.properties’ with valid config values for accounts, regions & mongo access.

Relevant Mongo Values
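They look something like this; the property names should be double-checked against the sample edda.properties shipped with the source, and the values are placeholders:

```properties
edda.mongo.address=127.0.0.1:27017
edda.mongo.database=edda
edda.mongo.user=edda
edda.mongo.password=supersecret
```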

Account & Region Values 
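And the account/region related properties, again illustrative only; copy the exact key names from the sample file and the Google Groups thread mentioned below:

```properties
# Illustrative only -- verify key names against the sample edda.properties
edda.accounts=prod
edda.prod.region=us-east-1
edda.prod.aws.accessKey=AKIAXXXXXXXXXXXXXXXX
edda.prod.aws.secretKey=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```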

The above example is using one account and only one region. The Edda configuration uses generic labels; they are very flexible, but when using them you might assume a label’s intent from its name and get confused. Don’t fall into that trap. I did, and then I found this post on Google Groups… Check it out to gain more insight into how the configuration works and can be tweaked for your needs. There is also the standard documentation, but it’s a little light IMO.

Running Edda

Congrats, you made it, time to run Edda! Again the documentation has this wrong (listed as Gradle & Jetty)… instead we’re using SBT + Jetty…
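From the Edda directory, starting the embedded Jetty container via sbt looks roughly like this; the exact task name depends on the sbt web plugin version in the repo, so treat it as an assumption:

```bash
cd /cmdb/edda
# start the embedded Jetty container from sbt; the task name
# (e.g. jetty:start vs container:start) depends on the plugin version
sbt jetty:start
```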

If everything goes smoothly you will start to see logs about crawling AWS API’s spewing to your screen 🙂 After about 2 minutes you should see data. You can check by doing a curl.
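Something like the following, substituting your own host (the private IP or localhost, depending on what Jetty binds to); the view path is the standard Edda v2 API:

```bash
curl http://<edda-host>:8080/api/v2/view/instances
```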

This API URL should return a JSON object with instance ID’s for the account & region specified.

Additionally, Edda listens on whatever private IP address you have set up; you will just need to modify the default security group to allow port 8080 to reach your machine.

I get a bit frustrated with out-of-date documentation… so I hope this helps! Happy automating!

AWS, Google Cloud, Azure and the singularity of the future Internet

Published by tuxninja

I have just left AWS re:Invent and I wanted to give my brief thoughts on the future of cloud computing. I believe in the next few years the shift we have been witnessing will be completed. That is to say, the thousands of enterprises and small businesses alike will finish their migrations to public clouds, simply because the benefits are far too great: less people, less hardware, less glue code, more functionality, more value, etc. AWS will be the dominant public cloud for the next couple of years minimum due to their first-to-market advantage, and if you look at the announcements at re:Invent 2016, you see a series of products that solve common problems. In fact a lot of the “innovations” AWS announced today replace many SaaS solutions, which ironically (or maybe not so much) are hosted on AWS. None of this concerns me; this is great disruption. AWS is teaching these businesses to move even further up and away from creating DevOps tooling as products (they will take care of that) and to focus on products that provide value differently, like sifting through massive amounts of data, increasing its quality, and providing intelligence from that data. This is all great, and it’s definitely where things are headed; kudos to Amazon for guiding folks.

BUT here is what is disturbing to me, and it seems like no one talks about it. It’s as if they can’t see the elephant in the room.

The elephant in the room is that every company in the world is converging on a smaller number of physical devices, paths, datacenters, etc. This means the global failure domains that should be distributed in nature are actually becoming more centralized.

If you really think about it, cloud was always the return of utility computing (mainframes), and as we go down this journey it’s becoming more evident it’s simply a more distributed version of the mainframe, and in my opinion, at the moment, that is giving people a false sense of comfort.

Early in my career, almost 20 years ago, I was working at an ISP when one of the core services of the Internet (DNS) was directly attacked. There was of course a widespread failure, and what we soon realized was that the Internet had more of a shared fate than most believed.

Fast forward to the present, and it just happened again with Dyn, which hosted DNS for some very critical companies. This problem hasn’t been solved; it is getting worse.

This is the same problem we are going to have with AWS, Google Cloud and Azure.

Companies & governments are converging on fewer datacenters, and those datacenters connect to common interconnected fabrics (aka the Internet itself) and shared resources.

The Internet is becoming far more grouped… far more shared & central… and thus the shared fate of the Internet will lie solely on the shoulders of giants, or as we like to call them in our industry, monoliths.

Public cloud monoliths, monopolies…etc

Perhaps my worries will be mitigated by fantastic diversification and investment in truly distributed, distinct network paths, independent power plants, etc. BUT my fear is that the convergence is happening so fast the providers won’t be able to make that a reality fast enough, and what’s the incentive for them? They have to invest a tremendous amount of capital when they are already successful, and this problem has not publicly and visibly humiliated us yet. But I fear that it will in the next few years…

So Godspeed to the journeymen of the cloud, as we enjoy the luxuries that AWS, Azure, & Google Cloud offer us. We are entering a beautiful and dangerous time. Beware, and hedge your company & product by distributing them as much as you can to avoid these central dependencies. Avoid these massive, shared, global failure domains.