TuxLabs LLC

All things DevOps

Category: AWS

How To: Launch A Jump Host In AWS Using Terraform

Published / by tuxninja / Leave a Comment

I have been a Hashicorp fan boy for a couple of years now. I am impressed, and happy, with pretty much everything they have done from Vagrant to Consul and more. In short, they make the DevOps world a better place. That being said, this article is about the aptly named Terraform product. Here is how Hashicorp describes Terraform in their own words…

“Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.”

Interestingly enough, the description doesn’t point out (though it implies it by omitting any mention of a specific provider) that Terraform is multi-cloud (cloud agnostic). Terraform works with AWS, GCP, Azure and OpenStack. In this article we will be covering how to use Terraform with AWS.

Step 1, download Terraform; I am not going to cover that part 😉
https://www.terraform.io/downloads.html

Step 2, Configuration…

Configuration

Hashicorp uses their own configuration language for Terraform; it is fully JSON compatible, which is nice. The details are covered here: https://github.com/hashicorp/hcl.

After downloading and installing Terraform, it’s time to start generating the configs.

AWS IAM Keys

AWS keys are required to do anything with Terraform. You can read about how to generate an access key / secret key for a user here : http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey

Terraform Configuration Files Overview

When you execute the terraform commands, you should be within a directory containing terraform configuration files. Those files ending in a ‘.tf’ extension will be loaded by Terraform in alphabetical order.

Before we jump into our configuration files for our Jump box, it may be helpful to review a quick primer on the syntax here https://www.terraform.io/docs/configuration/syntax.html

& some more advanced features such as lookup here https://www.terraform.io/docs/configuration/interpolation.html

Most users of Terraform choose to make their configs modular resulting in multiple .tf files with names like main.tf, variables.tf and data.tf… This is not required…you can choose to put everything in one big Terraform file, but I caution you modularity/compartmentalization is always a better approach than one monolithic file. Let’s take a look at our main.tf

Main.tf

Typically, if you only see one Terraform config file, it is called main.tf. Most commonly there will be at least one other file called variables.tf, used specifically for providing values for the variables referenced in other TF files such as main.tf. Let’s take a look at our main.tf file section by section.

Provider

The provider keyword is used to identify the platform (cloud) you will be talking to, whether it is AWS or another cloud. In our case it is AWS, and we define three configuration items: an access key, a secret key, and a region, all of which we insert variables for; those variables are later looked up / translated into real values in variables.tf.
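The original snippet isn’t reproduced here, but a minimal provider block along those lines reads roughly like this (variable names are illustrative):

```hcl
# Minimal sketch of the AWS provider block; values come from variables.tf.
provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}
```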

Resource aws_instance

This section defines a resource, an “aws_instance” that we are calling “jump_box”. We define all configuration requirements for that instance, substituting variable names where necessary, and in some cases hard coding the value. Notice we are attaching two security groups to the instance to allow for ICMP & SSH. We are also tagging our instance, which is critical in an AWS environment so your administrators/teammates have some idea about the machine that is spun up and what it is used for.

By the way, this resource type is provided by the AWS provider module, documented here: https://www.terraform.io/docs/providers/aws/r/instance.html

You have to download the module using terraform get (which can be done once you write some config files and type terraform get 🙂 ).

Also, note the usage of ‘user_data’ here to update the machine’s packages at boot time. This is an AWS feature that is exposed through the AWS provider in Terraform.
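A sketch of what such an instance resource can look like; the AMI, key, subnet, security group and tag variable names below are placeholders, not the author’s actual values:

```hcl
# Sketch of the jump box instance resource (illustrative values/variable names).
resource "aws_instance" "jump_box" {
  ami                    = "${var.ami}"
  instance_type          = "t2.micro"
  key_name               = "${var.key_name}"
  subnet_id              = "${var.subnet_id}"
  vpc_security_group_ids = ["${var.ssh_sg_id}", "${var.icmp_sg_id}"]

  # user_data runs at boot; here it simply updates the OS packages.
  user_data = <<EOF
#!/bin/bash
apt-get update && apt-get -y upgrade
EOF

  tags {
    Name  = "${var.instance_name_prefix}-jump"
    Owner = "${var.owner}"
  }
}
```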

Resource aws_security_group

Next we define a new security group (vs attaching an existing one in the above section). We are creating this new security group for other VM’s in the environment to attach later, such that it can be used to allow SSH from the Jump host to the VM’s in the environment.

Also notice under cidr_blocks we define a single IP address, a /32, of our jump host… but more important is to notice how we determine that jump host’s IP address: using .private_ip to access the attribute of the “jump_box” aws_instance we are creating / just created in AWS. That is pretty cool.
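A minimal sketch of that security group, assuming a hypothetical var.vpc_id and illustrative names:

```hcl
# Sketch: security group other VMs attach so only the jump host can SSH in.
resource "aws_security_group" "jump_box_ssh_access" {
  name        = "allow-ssh-from-jump-host"
  description = "Allow SSH only from the jump host"
  vpc_id      = "${var.vpc_id}"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${aws_instance.jump_box.private_ip}/32"]
  }
}
```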

Resource aws_route53_record

The last entry in our main.tf creates a DNS entry for our jump host in Route53. Again, notice we are specifying a name of ‘jump.’ as a prefix to the entry, while the remainder of the FQDN is figured out by the lookup function. The lookup function is used to look up values inside of a map. In this case the map is defined in our variables.tf, which we will review next.
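A sketch of that Route53 record; the map variable names (route53_zone_id / route53_zone_name) are illustrative stand-ins for whatever the original config used:

```hcl
# Sketch: DNS record "jump.<zone name looked up per region>".
resource "aws_route53_record" "jump_box_dns" {
  zone_id = "${lookup(var.route53_zone_id, var.region)}"
  name    = "jump.${lookup(var.route53_zone_name, var.region)}"
  type    = "A"
  ttl     = "300"
  records = ["${aws_instance.jump_box.private_ip}"]
}
```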

Variables.tf

I will attempt to match the section structure I used above for main.tf when explaining the variables in variables.tf, though variables.tf is not laid out in sections quite as clearly.

Provider variables

When Terraform runs, it compiles all .tf files and replaces any reference to a variable with the value it finds declared (via the variable keyword) in variables.tf, in our case. Notice that the first two variables are empty; they have no value defined. Why? Terraform supports taking input at runtime; by leaving these values blank, Terraform will prompt us for the values. Region is pretty straightforward: default is the value returned, and description in this case is really an unused value except as a comment.
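A sketch of what those provider variable declarations look like in variables.tf:

```hcl
# access_key and secret_key are intentionally left empty so Terraform prompts for them.
variable "access_key" {}
variable "secret_key" {}

variable "region" {
  description = "AWS region to deploy the jump host into"
  default     = "us-east-1"
}
```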

I would like to demonstrate the behavior of Terraform described above when the variables are left empty:
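Running terraform plan with those two variables left blank produces prompts along these lines (abbreviated):

```
$ terraform plan
var.access_key
  Enter a value:

var.secret_key
  Enter a value:
```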

At this phase you would enter your AWS key info, and Terraform would ‘plan’ out your deployment, meaning it would run through your configs and print its plan to your screen, but not actually change any state in AWS. That is the difference between terraform plan & terraform apply.

Resource aws_instance variables

Here we define the values of our AMI, SSH key, instance prefix for the name, tags, security groups, and subnets. Again this should be pretty straightforward, no magic here, just the use of string variables & maps where necessary.
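A sketch of those declarations, using the same illustrative variable names as the earlier resource blocks (all values are placeholders):

```hcl
variable "ami" {
  default = "ami-xxxxxxxx" # placeholder AMI ID
}

variable "key_name" {
  default = "my-ssh-key"
}

variable "instance_name_prefix" {
  default = "tuxlabs"
}

variable "owner" {
  default = "tuxninja"
}

variable "subnet_id" {
  default = "subnet-xxxxxxxx"
}

variable "vpc_id" {
  default = "vpc-xxxxxxxx"
}

variable "ssh_sg_id" {
  default = "sg-xxxxxxxx"
}

variable "icmp_sg_id" {
  default = "sg-yyyyyyyy"
}
```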

It’s important to note the variables above are also used in other sections as needed, such as the aws_security_group section in main.tf …

Resource aws_route53_record variables

Here we define the ID & Name that are used in the ‘lookup’ functionality from our main.tf Route53 section above.
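A sketch of those two maps, keyed by region, matching the lookup() calls in the Route53 resource above (zone IDs and names are placeholders):

```hcl
variable "route53_zone_id" {
  type = "map"
  default = {
    "us-east-1" = "ZXXXXXXXXXXXXX"
    "us-west-2" = "ZYYYYYYYYYYYYY"
  }
}

variable "route53_zone_name" {
  type = "map"
  default = {
    "us-east-1" = "aws.example.com"
    "us-west-2" = "aws-west.example.com"
  }
}
```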

It’s important to note that Terraform does not care where or in what order things are declared across TF files. All files are loaded, and variables require no specific ordering relative to any other part of the configuration. All that is required is that every variable you reference has a value declared, via the variable keyword, in a TF file somewhere.

Output.tf

Again, I want to remind folks you could put all of this Terraform configuration in one file if you wanted to, but I choose to split things up for readability and simplicity. So we have an output.tf file specifically for output; there is only one output defined, which lists the result of our Terraform run upon success.
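A sketch of an output.tf along those lines, printing the DNS name of the jump host (the fqdn attribute is exported by aws_route53_record):

```hcl
output "jump_box_dns" {
  value = "${aws_route53_record.jump_box_dns.fqdn}"
}
```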

Ok, so let’s run this and see how it looks… First a reminder: to test your config you can run terraform plan first. It will tell you the changes it’s going to make… example:
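Abbreviated and illustrative, a plan for the sketched config above looks something like this:

```
$ terraform plan
...
+ aws_instance.jump_box
    ami:           "ami-xxxxxxxx"
    instance_type: "t2.micro"
    ...

+ aws_route53_record.jump_box_dns
    ...

+ aws_security_group.jump_box_ssh_access
    ...

Plan: 3 to add, 0 to change, 0 to destroy.
```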

If everything looks good & is green, you are ready to apply.
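Applying then creates the resources and prints the output; again abbreviated and illustrative:

```
$ terraform apply
...
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

  jump_box_dns = jump.aws.example.com
```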

Congratulations, you now have a jump box in AWS using Terraform. Remember to attach the required security group to each machine you want to grant access to, and start locking down your jump box / bastion and VM’s.

Outro

Remember, if you take the above config and try to run it, swapping out only the variables, it will error with something about a required module. Downloading the required modules is as simple as typing ‘terraform get’, which I believe the error message even tells you 🙂

So again, this was a brief intro to Terraform; it does a lot & is extremely powerful. One of the things I did when setting up a Mongo cluster using Terraform was to take advantage of a map to change node count per region. So if you wanted to deploy a different number of instances in different regions, your config might look something like…

main.tf
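A sketch of the idea: drive count from a per-region map via lookup() (names and values are illustrative, not the original cluster config):

```hcl
resource "aws_instance" "mongo" {
  # Number of nodes depends on which region we are deploying to.
  count         = "${lookup(var.node_count, var.region)}"
  ami           = "${var.ami}"
  instance_type = "m4.large"

  tags {
    Name = "mongo-${var.region}-${count.index}"
  }
}
```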

variables.tf
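And the corresponding map (illustrative values):

```hcl
variable "node_count" {
  type = "map"
  default = {
    "us-east-1"    = "5"
    "us-west-2"    = "3"
    "eu-central-1" = "3"
  }
}
```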

Terraform also supports split if you want to store multiple values in a single string variable.
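For example, a comma-separated string variable can be turned into a list with split(); this snippet is illustrative, not from the original post:

```hcl
variable "subnet_ids" {
  default = "subnet-aaaaaaaa,subnet-bbbbbbbb,subnet-cccccccc"
}

# e.g. inside a resource:
#   subnet_ids = ["${split(",", var.subnet_ids)}"]
```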

Another couple of things before I forget: terraform apply doesn’t just set up new infrastructure, it can also be used to modify existing infrastructure, which is a very powerful feature. So if you have something deployed and want to make a change, terraform apply is your friend.

And finally, when you are done with the infrastructure you spun up, or it’s time to bring her down… ‘terraform destroy’.

I hope this article helps.

Happy Terraforming 😉

 

How To: Interact with AWS S3 Using the Go SDK and not lose your mind

Published / by tuxninja / Leave a Comment

After these messages we will carry on with our regularly scheduled programming…

Yesterday (during the scribbling of this article) AWS suffered one of its worst outages in history, in the us-east-1 region. A reminder to us all to be multi-region and, more importantly, multi-cloud. Please see my other articles on HA deployments using AWS and my perspective & caution on the path to centralization or singularity we appear to be on (though the outage may help people wake up).

Now back to your regularly scheduled program…


My team and I are building a CMDB for AWS, which provides us with everything happening in our AWS environment + OS level metadata + change history. There will be a separate article on the CMDB journey, but today I want to focus on a specific service in AWS called S3, which is their object store. S3 is a bit of a special snowflake when it comes to AWS services, and because of that I ran into challenges structuring my code; up until S3 (which was the last service I wrote code for) everything had been very similar and easily modularized. We will get to more detail, but let’s start this article by covering how to use the Go SDK for AWS.

Dependencies

This article assumes you already program in Go and have Go installed on your machine. To get started you will need a couple of additional items.

  1. Download and install the SDK here: https://github.com/aws/aws-sdk-go
  2. This is the documentation for the SDK, you will need it, bookmark it: http://docs.aws.amazon.com/sdk-for-go/api/
  3. It is extremely helpful when working with the APIs to have aws-shell installed: https://github.com/awslabs/aws-shell
    • This enables you to interact with AWS API’s on the fly so you can understand the output of commands as you are searching for what you are trying to accomplish.

The Collector Structure

The collector is the component in my CMDB architecture that does all the work of collecting the metadata that we shove into our CMDB. The collector is heavily threaded using goroutines for performance. The basic structure looks like this (a rough Go sketch follows the list below).

  • Call a go routine for each service you want to collect
    • //pass in all accounts, regions (from config-file) and pre-established awsSessions to each account you are collecting
    • Inside of a service’s go routine, loop over accounts & regions
      • Launch a go routine for each account & region
        • Inside of those go routines make your AWS API call(s), example DescribeInstances
        • Store the response (I loop through mine and store it in a map using the resource-id as the key)
        • Finally, kick off another go routine to write to our API and store the data.
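To make that structure concrete, here is a minimal, hypothetical Go sketch of the fan-out for the EC2 service. The function and variable names are mine, not the actual collector’s, and the final CMDB write is stubbed out with a print:

```go
package main

import (
	"fmt"
	"sync"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// collectEC2 launches one goroutine per account/region pair and gathers instances.
func collectEC2(wg *sync.WaitGroup, accounts, regions []string, sessions map[string]*session.Session) {
	defer wg.Done()
	var inner sync.WaitGroup
	for _, account := range accounts {
		for _, region := range regions {
			inner.Add(1)
			go func(account, region string) {
				defer inner.Done()
				svc := ec2.New(sessions[account], &aws.Config{Region: aws.String(region)})
				resp, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{})
				if err != nil {
					fmt.Println("describe instances failed:", err)
					return
				}
				// Store each instance keyed by its resource ID, then ship to the CMDB API.
				instances := make(map[string]*ec2.Instance)
				for _, r := range resp.Reservations {
					for _, i := range r.Instances {
						instances[*i.InstanceId] = i
					}
				}
				fmt.Printf("%s/%s: collected %d instances\n", account, region, len(instances))
			}(account, region)
		}
	}
	inner.Wait()
}

func main() {
	accounts := []string{"prod"}                  // hypothetical account label
	regions := []string{"us-east-1", "us-west-2"} // hypothetical region list
	sessions := map[string]*session.Session{"prod": session.Must(session.NewSession())}

	var wg sync.WaitGroup
	wg.Add(1)
	go collectEC2(&wg, accounts, regions, sessions)
	wg.Wait()
}
```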

Ok, so hopefully that seems straightforward as a basic structure… let’s get to why S3 threw me for a loop.

S3 Challenges

It will be best if I show you what I tried first: basically, I tried to marry my existing pattern to S3, and that certainly was a bad idea from the start. Here was the structure of the S3 part of the code.

  • The S3 go routine gets called from main.go
  • //all accounts, regions and AWS Sessions are passed into the next go routine
    • Inside of the S3 go routine, loop over accounts & regions
      • Launch a go routine for each account & region
        • Inside of those go routines List S3 Buckets
          • For each S3 buckets returned
            • Call additional API’s such as GetBucketTagging()

Ok, so what happened? I got a lot of errors, that’s what 🙂 Ones like this….

At first, I thought maybe my code wasn’t thread safe…but that didn’t make much sense given the other services had no issues like this.

So as I debugged my code, I began to realize the buckets list I was getting, wasn’t limited to the region I was passing in/ establishing a session for.

Naturally, I googled: can I list buckets for a single region?

https://github.com/aws/aws-sdk-java/issues/920 (even though this is the Java SDK it still applies)..

Ah, ok: the bucket list returned on an AWS session established with a specific account and region ignores the region. S3 buckets are global to an account, thus all buckets under an account are returned in the ListBuckets() call. I knew S3 buckets were global per account, but failed to expect that behavior/output when a specific region is passed into the SDK/API.

Ok so how then can I distinguish where a bucket actually lives?

As spfink says above, I needed to run GetBucketLocation() per bucket. Thus my code structure started to look like this (see the Go sketch after the list)…

  • For each account, region
    • ListBuckets
      • For each bucket returned in that account, region
        • GetBucketLocation
        • If a LocationConstraint (region) is returned, set the new region (otherwise if response is null, do nothing)
        • Get tags for the bucket in account, region
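A sketch of that corrected flow using the aws-sdk-go S3 client; this is illustrative rather than the actual collector code (error handling trimmed, and a print stands in for the CMDB write):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// ListBuckets is global per account; the session's region is ignored here.
	buckets, err := svc.ListBuckets(&s3.ListBucketsInput{})
	if err != nil {
		panic(err)
	}

	for _, b := range buckets.Buckets {
		loc, err := svc.GetBucketLocation(&s3.GetBucketLocationInput{Bucket: b.Name})
		if err != nil {
			fmt.Println("GetBucketLocation failed:", err)
			continue
		}
		// A nil LocationConstraint does NOT mean "any region"; it means us-east-1.
		region := "us-east-1"
		if loc.LocationConstraint != nil {
			region = *loc.LocationConstraint
		}

		// Talk to the bucket's real region for the per-bucket calls.
		regional := s3.New(sess, &aws.Config{Region: aws.String(region)})
		tags, err := regional.GetBucketTagging(&s3.GetBucketTaggingInput{Bucket: b.Name})
		if err != nil {
			// Buckets with no tags return an error (NoSuchTagSet); treat as empty.
			fmt.Printf("%s (%s): no tags or error: %v\n", *b.Name, region, err)
			continue
		}
		fmt.Printf("%s (%s): %d tags\n", *b.Name, region, len(tags.TagSet))
	}
}
```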

With this code I was still getting errors about region, but why ?

Well, I made the mistake of thinking a ‘null’ response from the API for LocationConstraint had no meaning (or meant I could query the bucket from any region). Wrong: null actually means us-east-1 (see my Google result below). Thus the IF condition evaluated false, the existing region from the outer loop was used because GetBucketLocation() returned null, and this resulted in many errors.

Here’s what Google turned up…

https://github.com/aws/aws-cli/issues/564

So let’s clarify my mistakes…

  1. The S3 ListBuckets call returns all buckets under an account globally.
    • It does not abide by a region configured in an API Session
    • Thus I/you should not loop over regions from a config file for the S3 service.
    • Instead I/you need to find a bucket’s ‘real’ location using GetBucketLocation.
    • Then set the region for actions other than ListBuckets (which is global per account and ignores the region passed).
  2. GetBucketLocation returning null doesn’t mean the bucket is global or that you can interact with the bucket from any endpoint you please… it actually means us-east-1: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region

The Working Code

So in the end the working code for S3 looks like this…

  • collector/main.go fires off a bunch of go routines per service we are collecting for.
  • It passes in accounts, and regions from a config file.
  • For the S3 service/file under the ‘services’ package the entry point is a function called StoreS3Resources.

Everything in the code should be self explanatory from that point on. You will note a function call to ‘writeToCis’… CIS is the name of our internal CMDB project/service. Again, I will later be blogging about the entire system in detail once we open source the code. Please keep in mind this code is MVP; it will be changed a lot (optimization, modularization, bug fixes, etc.) before & after we open source it, but for now here is the quick and dirty, but hopefully functional, code 🙂 Use at your own risk!

How To: Launch EC2 Instances In AWS Using The AWS CLI

Published / by tuxninja / Leave a Comment

 

It occurred to me recently that while I have written articles on Boto for AWS (the Python SDK), I have yet to write articles on how to use the AWS CLI, Terraform and the Go SDK. All of that will come in due time; for starters, this article is going to be about the AWS CLI.

To start you will need to install the AWS CLI, following these links:
https://aws.amazon.com/cli/
https://github.com/aws/aws-cli

Note you will need to make sure you have an account with an access key and have set up the required credentials under ~/.aws/ for the CLI to work. How to do this is covered near the end of the second link above to the git repo.

After that is done you are ready to rock and roll. To test it out you can run…
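The original command isn’t shown here, but a simple smoke test is something like:

```bash
aws ec2 describe-instances
```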

Assuming your default region, and profile settings are correct it should output JSON.

Launching an EC2 instance

To launch an EC2 instance from the command line use the command below replacing the variables preceded with $ with their real values.
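The original one-liner isn’t reproduced here; a sketch of such a run-instances call looks like this (the shell variable names are illustrative):

```bash
aws ec2 run-instances \
  --image-id $AMI_ID \
  --count 1 \
  --instance-type $INSTANCE_TYPE \
  --key-name $KEY_NAME \
  --security-group-ids $SECURITY_GROUP_ID \
  --subnet-id $SUBNET_ID
```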

(Assuming you have set up the required dependencies, like uploading your SSH key to AWS and specifying its name in the command above, this should launch your VM.)

It should be noted there is a lot more you can do to tweak your instance, such as changing the EBS volume size for the root disk that is launched, or tagging. You will see examples of this in my shell script. The purpose of this article is to share a shell script I have written and use whenever I want to quickly launch a test VM (which is common). For more permanent things I use an infrastructure as code approach via Terraform. But the need for launching quick test VM’s never goes away, thus this shell script was born. You will notice my script auto-tags our VM’s… I do this because in our environment, if your VM isn’t tagged appropriately it is deleted. Plus, it’s courtesy in an AWS environment to tag your resources; otherwise no one will ever know what tree to bark up when there is a problem such as ‘are you still using this, cause it looks idle?’ 🙂

My Shell Script for Launching EC2 VM’s

After substituting the required variables at the top with your real values you can run this script. Notice that after creating the VM I capture the instance details in a file & the ID in a variable so I can subsequently tag it, and then I create a termination script…this makes for very simple operations when you need to repeatedly start and then kill/destroy/delete a VM.
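The real create-instance.sh lives on my GitHub (linked below); what follows is only a rough sketch of the flow just described, with placeholder values:

```bash
#!/bin/bash
# Rough sketch of a create-instance.sh style script; all values are placeholders.
AMI_ID="ami-xxxxxxxx"
INSTANCE_TYPE="t2.micro"
KEY_NAME="my-ssh-key"
SECURITY_GROUP_ID="sg-xxxxxxxx"
SUBNET_ID="subnet-xxxxxxxx"
NAME_TAG="tuxninja-test"

# Launch the instance and capture just the instance ID in a variable.
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --count 1 \
  --instance-type "$INSTANCE_TYPE" \
  --key-name "$KEY_NAME" \
  --security-group-ids "$SECURITY_GROUP_ID" \
  --subnet-id "$SUBNET_ID" \
  --query 'Instances[0].InstanceId' \
  --output text)

# Save the full instance details to a file for later reference.
aws ec2 describe-instances --instance-ids "$INSTANCE_ID" > "instance-${INSTANCE_ID}.json"

# Tag the instance so teammates know who owns it and why it exists.
aws ec2 create-tags --resources "$INSTANCE_ID" \
  --tags Key=Name,Value="$NAME_TAG" Key=Owner,Value=tuxninja

# Generate a matching termination script for easy cleanup.
cat > "terminate-${INSTANCE_ID}.sh" <<EOF
#!/bin/bash
aws ec2 terminate-instances --instance-ids ${INSTANCE_ID}
EOF
chmod +x "terminate-${INSTANCE_ID}.sh"

echo "Launched ${INSTANCE_ID}; run ./terminate-${INSTANCE_ID}.sh to clean up."
```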

Using these scripts should come in quite handy. A copy of create-instance.sh can be found on my github here.

One other thing… I use the normal AWS CLI for automation as shown here…but for poking around interactively I use something called ‘aws-shell’ formerly ‘saw’. Check it out and you won’t be disappointed !

My next post will be on Terraform or the Go SDK…but both are coming soon!

Setting up Netflix’s Edda (CMDB) in AWS on Ubuntu

Published / by tuxninja / Leave a Comment

If you are running any kind of environment with greater than 10 servers, then you need a CMDB (Configuration Management DataBase). CMDB’s are the brain of your fleet & its environment. You can store anything in a CMDB, but commonly the metadata in CMDB’s consists of any of the following: physical & digital asset inventory, software licenses, software configuration data, policy information, relationships (i.e. this VM -> Compute -> Rack -> Availability Zone -> Datacenter), automation metadata, and more… they also commonly provide change history for changes in your environment.

In the world of infrastructure as code, CMDB is king.

CMDB’s enable endless automation possibilities; without them you are stuck gathering and collecting ‘current’ configuration state about your infrastructure every time you want to perform an automated change or run an audit/report. In my career I have built or been a part of CMDB efforts at nearly every company I have worked for. They are simply necessary, and by their nature they tend to require the choice of ‘built by us’ vs ‘buy or run’.

However, if you have the luxury of only running in AWS, you are in luck, because Netflix (the AWS poster child) open sourced Edda in 2012 for this purpose!

Rather than talk about the specific features of Edda (refer to the blog post or documentation for that), I want to keep this article short and jump right into setting up Edda, which is a bit tricky, because the documentation is out of date!

Setting Up Edda (2016)

First, in AWS you need to set up an EC2 VM that has at least 6G of disk for the OS + dependencies including Mongo, and then however much disk you need to store the metadata for your environment (keep in mind it keeps change history). Personally, I just created a root partition with 100G to keep things simple. For instance type I used ‘m4.xlarge’, and the Ubuntu version is 14.04.

After booting the VM, SSH to it and create a directory on whichever partition your storage is allocated to, to hold Edda & its dependencies. I will be using /cmdb/ in my example.

Initial Install Steps

For the record, the Edda wiki has the build steps wrong; it appears they are no longer using Gradle, but have switched to SBT… which reminds me: be aware Edda is written in Scala, which isn’t as popular as Java, Python, etc. In addition it’s functional programming, which I don’t personally know a lot about, but I hear it’s got quite the learning curve… so beware if you need to make custom code changes; I would not recommend it unless you know Scala! 🙂
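A sketch of the clone/build steps, assuming Java and sbt are already installed (the exact sbt task may differ; check the project README):

```bash
cd /cmdb
git clone https://github.com/Netflix/edda.git
cd edda
sbt compile
```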

After the build of Edda succeeds, install Mongo
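On Ubuntu 14.04 the distro package is the quickest route (you may prefer MongoDB’s own mongodb-org packages instead):

```bash
sudo apt-get update
sudo apt-get install -y mongodb
```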

That’s it for dependencies

Configuring Mongo

For Edda to use Mongo, all we need to do is ‘use’ the database we want for Edda & create an associated user (Mongo will auto-create DB’s upon insert).
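For example, from the mongo shell, assuming a database named ‘edda’ (MongoDB 2.6+ syntax; older versions use db.addUser instead):

```
$ mongo
> use edda
> db.createUser({ user: "edda", pwd: "REPLACE_ME", roles: [ "readWrite", "dbAdmin" ] })
```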

You can test the user is working by doing…
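For example, authenticating directly against that database:

```bash
mongo edda -u edda -p REPLACE_ME
```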

Configuring Edda

Under /cmdb/edda/src/main/resources we need to modify ‘edda.properties’ with valid config values for accounts, regions & mongo access.

Relevant Mongo Values
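The authoritative key names are in the sample edda.properties that ships with the repo; they look roughly like this (treat these as illustrative, not gospel):

```
edda.mongo.address=127.0.0.1:27017
edda.mongo.database=edda
edda.mongo.user=edda
edda.mongo.password=REPLACE_ME
```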

Account & Region Values 
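Again illustrative only: the account label (‘prod’ here) is whatever you choose, and the exact property names should be checked against the sample edda.properties and the Google Groups thread mentioned below:

```
edda.accounts=prod
edda.prod.region=us-east-1
edda.prod.aws.accessKey=AKIAXXXXXXXXXXXXXXXX
edda.prod.aws.secretKey=REPLACE_ME
```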

The above example is using one account and only one region. The Edda configuration uses generic labels; they are very flexible, but when using them you might be confused by the name of a label versus its intent. Don’t fall into that trap; I did, and then I found this post on Google Groups… check it out to gain more insight into how the configuration works and can be tweaked for your needs. There is also the standard documentation, but it’s a little light IMO.

Running Edda

Congrats, you made it, time to run Edda! Again the documentation has this wrong (listed as Gradle & Jetty)… instead we’re using SBT + Jetty…

If everything goes smoothly you will start to see logs about crawling AWS APIs spewing to your screen 🙂 After about 2 minutes you should see data. You can check by doing a curl.
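For example, assuming Jetty is listening on the default port 8080:

```bash
curl http://localhost:8080/api/v2/view/instances
```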

This API URL should return a JSON object with instance ID’s for the account & region specified.

Additionally, Edda is listening on whatever private IP address you have set up; you will just need to modify the default security group to allow port 8080 to your machine.

I get a bit frustrated with out of date documentation… so I hope this helps! Happy automating!

How To: Maximize Availability Efficiently Using AWS Availability Zones

Published / by tuxninja / Leave a Comment

For the TL;DR version, skip straight to the Cassandra Examples

Intro & Background

During my years at PayPal I was fortunate enough to be a part of a pioneering architecture & engineering team that designed & delivered a new paradigm for how we deployed & operated applications, using a model that included 5 Availability Zones per Region (multiple regions) & global traffic management. This new deployment pattern increased our cost efficiency and capacity while providing high availability to our production application stack. The key to increasing cost efficiency while not losing availability is how you manage your capacity. Failure detection and global traffic management go the rest of the way to make use of all your Availability Zones & Regions, giving you better availability. There is a little more to it, in terms of how you deploy your applications advantageously to this design… but we will get into that with our Cassandra/AWS examples later.

Prior to this model our approach was the traditional 3 datacenter model. 2 Active + 1 DR all with 100% of the capacity required to operate independently and they required manual failover in the event of an outage.

Oh, the joys of operating your own private cloud & datacenters at scale 🙂

Most companies, and a recent article suggests PayPal as well, are thinking about or are already moving to public cloud. Public cloud gives most companies cheaper and faster access to economies of scale as well as an immediate tap into a global infrastructure.

Amazon Web Services (AWS) is the market share leader in public cloud today. They operate tens of Regions & Availability Zones all around the world. Deploying your application(s) on these massive scale public clouds means opportunity for operating them in more efficient ways.

Recently I came across a few articles that felt like reading a history book on going from a Failover to an Always On model. I’ll just leave these here for inquiring minds…

http://highscalability.com/blog/2016/8/23/the-always-on-architecture-moving-beyond-legacy-disaster-rec.html
https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44686.pdf

Ok, enough with the introduction, let’s move into the future, starting with the Principles of High Availability.

Principles for High Availability – Always

  • Use Multiple Regions (Min 2, Max as many as you need)
  • Use Multiple Availability Zones (Min 2, Max as many as is prudent to optimize availability and capacity with cost)
  • Design for an Active/Active architecture
  • Deploy at least N+1 capacity per cluster when Active/Active is not easily accomplished
    • Note: N = 100% of the capacity required to run your workload
  • Eliminate single points of failure / ensure redundancies in all components of the distributed system(s)
  • Make services fault tolerant / ensure they continue in the event of various failures
  • Make services resilient by implementing timeout/retry patterns (circuit breaker, exponential backoff algorithm, etc.)
  • Make services implement graceful failure, such as degrading the quality of a response while still providing a response
  • Use an ESB (Enterprise Service Bus) to make calls to your application stack asynchronous where applicable
  • Use caching layers to speed up responses
  • Use Auto-Scaling for adding dynamic capacity for bursty workloads (this can also save money)
  • Design deployments to be easily reproduced & self-healing (i.e. using Spinnaker (Netflix), Kubernetes (Google) for containers, Scalr, etc.)
  • Design deployments with effective monitoring visibility – synthetic transactions, predictive analytics, event triggers, etc.

Principles for High Availability – Never’s

  • Deploy to a single availability zone
  • Deploy to a single region
  • Depend on either a single availability zone or region to always be UP.
  • Depend on capacity in a single availability zone in a region to be AVAILABLE.
  • Give up too easily on an Active/Active design/implementation
  • Rely on datacenter failover to provide HA (Instead prefer active/active/multi-region/multi-az)
  • Make synchronous calls across regions (this negatively affects your failure domain across regions)
  • Use sticky load balancing to pin requests to a specific node for a response to succeed (i.e. if you are relying on it for ‘state’ purposes this is really bad)

Summarizing

  • YOU MUST deploy your application components to a minimum of 2 Regions.
  • Because failures of an entire region in AWS happen frequently…
  • YOU MUST deploy your application components to a minimum of 2 Availability Zones within a Region.
  • Because an AWS AZ can go down for maintenance, outage or be out of new capacity frequently…
  • YOU MUST aim to maximize the use of Availability Zones and Regions, but balance that with cost requirements and be aware of diminishing returns
  • YOU MUST use global traffic routing (Route53) and health-check monitoring / automated markdowns to route around failure to healthy deployments
  • YOU MUST follow the rest of the Principles of High Availability Always directives to the best of your ability

Active/Active Request Routing (AWS)

Diagram Components

  • Global Traffic Management: Using AWS Route53 as a Global Traffic Manager (configuring Traffic Flows + geolocation)
  • 3 Regions and their AZ counts: us-east-1 = 4, us-west-2 = 3, eu-central-1 = 2

Additional Info on Active/Active Request Routing

  • Route53 load balances requests to the application stack within each availability zone and across 3 regions.
  • This is an Active/Active design, i.e. Route53 is configured to route requests to all available zones until one fails. (Traffic Flow + Geolocation)

The Design Above Is Actually Wasteful (Cost & Capacity must be optimized for HA)

  • This example shows all application deployments (& components) in every AZ deployed with 100% of the capacity required to run without another AZ
  • This means you could lose ALL, BUT ONE availability zone and still service requests from your one remaining AZ

While this would achieve the highest availability, it is not the most optimal approach because it is wasteful

  • Each application team must decide the right mix of regions, availability zones & capacity to provide HA at the lowest cost for their consumers based on their application requirements
  • Depending on how much capacity you deploy for your application per AZ it is possible to deploy too much/many and have diminishing returns on availability while wasting a lot of $$$
  • You must find the right balance for your application & business contexts ( see below examples with Cassandra for more details on how this approach can be wasteful )

Understanding Availability Zones

The crux of achieving a highly available architecture lies within the proper usage & understanding of Availability Zones. If you ignore everything else, read this section !

What Do AZ’s Provide?

In Amazon’s own words:

“Each region contains multiple distinct locations called Availability Zones, or AZs. Each Availability Zone is engineered to be isolated from failures in other Availability Zones, and to provide inexpensive, low-latency network connectivity to other zones in the same region. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location.” Source: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html

How to think of an AZ

  • In the AWS World it is helpful to use the analogy of an Availability Zone being a Rack in a traditional datacenter
  • I.E. instead of striping a Cassandra cluster across Racks, you would do it across Availability Zones.

What about using an Availability Zone as a complete application failure domain ?

  • In a traditional private datacenter an Availability Zone is a separate datacenter container, with shared network (although it can be isolated as well).
  • It is built/populated with the required infrastructure & capacity to run the applications and it has all the physical characteristics mentioned above that the AWS AZ has
  • However, the way it is used in a private datacenter can be different, because YOU have full visibility & control of capacity
  • For example, you could limit service calls such that service calls are to/from the same AZ, effectively creating a distinct application failure domain, piggy-backing the already distinct infrastructure failure domain (provided by an Availability Zone)

Why not use an AWS Availability Zone as a distinct application failure domain then ?

  • You can, but this fails to be robust in the public cloud context because capacity might not always be available in the AZ and you don’t know when it will run out
  • BEING CLEAR – You can run out of capacity in an Availability Zone FOREVER. The AZ will still be available to manage existing infrastructure/capacity, but new capacity cannot be added
  • This would negatively affect auto-scaling & new provisioning, and ultimately implies you cannot rely on a single AZ to be treated as a datacenter container
  • To continue operating when an AZ runs out of capacity in public cloud you have to utilize a new AZ for new capacity
  • That new AZ would follow the same isolated service calls principles except now 2 AZ’s would be called as part of a failure domain
  • Effectively multiple AZ’s would become a single failure domain organically
  • Therefore, it is not advantageous in the public cloud world to try to treat a single AZ as a failure domain
  • Instead stripe applications across as many AZ’s as necessary to provide up to N+2 capacity per Region while minimizing waste.
  • The region becomes the failure domain of your application and this is a more efficient use of AZs to provide availability in the public cloud context
  • This is why N+N regions is paramount in managing your availability in public cloud

What is the latency between Availability Zones ?

  • Tests with micro instances (they have the worst network profile) show that the latency between AZ’s within a region is on average 1ms
  • Most workloads will have no challenge dealing with 1ms latency and taking full advantage of availability zones to augment their availability
  • However, some applications are very chatty, making hundreds to thousands of calls in series
  • These workloads often must be re-architected to work well in a public cloud environment

What happens if I do not use multiple Availability Zones ?

Truths…

  • An Availability Zone can become unavailable at any time, for any reason
  • An Availability Zone can become unavailable due to planned maintenance or an outage
  • An Availability Zone can run out of capacity for long periods of time and indefinitely (forever)

Thus the result of not deploying to multiple availability zones is…

  • Provisioning outages for new capacity and the inability to self-heal deployments

The reality is…

  • If you only deploy to a single AZ in a Region, you guarantee that you will experience a 100% failure in that Region, when the AZ you rely on becomes unavailable
  • You can failover or continue (if Active/Active) to service transactions in another Region, but you can achieve higher availability per Region by striping your application deployment across multiple AZ’s in a region
  • And you have less latency to deal with when failing over to another AZ locally vs. an AZ in a remote region

Finding The Right Balance Between High Availability & Cost

Oftentimes availability decisions come with a cost. This section will describe a more practical and better way to achieve high availability infrastructure for your application while keeping the costs within reason. It is important to note that cost optimization heavily depends on application context, because you must first figure out the lowest common denominator of AZ’s / Regions you can use and still achieve your target availability. This typically depends on the number of AZ’s required to achieve quorum for stateful services. For example, if we were configuring Cassandra for eventual consistency + high availability, we would want 5 nodes in the cluster and a replication factor of 3, and we would want, to the best of our ability, to spread those nodes across AZ’s (see the sketch below).
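To make that concrete, here is an illustrative CQL snippet (not from the original post). With the EC2 snitches, the AWS region surfaces as the Cassandra datacenter and each AZ as a rack, so NetworkTopologyStrategy spreads the replicas across AZ’s for you:

```sql
-- Illustrative only: with Ec2Snitch / Ec2MultiRegionSnitch an AWS region is
-- reported as the Cassandra datacenter and each AZ as a rack, so
-- NetworkTopologyStrategy places the 3 replicas on distinct AZ's when it can.
-- The datacenter name below ('us-east') matches the classic Ec2Snitch naming;
-- check `nodetool status` for the exact name your snitch reports.
CREATE KEYSPACE cmdb
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'us-east': 3
  };
```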

100% of application capacity deployed to every Availability Zone is bad

  • The traffic routing example in the beginning shows 100% capacity being deployed for every application component in every AZ
  • As covered previously and shown in the later sections, this is a sub-optimal deployment strategy.
  • Instead you must stripe your applications capacity across AZ’s, keeping in mind a target of at least N+1 capacity or N+2 where possible

What if I deploy 100% of application capacity to only two Availability Zones per Region?

  • Let’s assume we are talking about MySQL or another RDBMs with ACID properties.
  • You might want to deploy MySQL as a master/master where each master is in a separate AZ and can support 100% of the required capacity
  • Given this configuration you have met the N+1 requirement per Region
  • This might be acceptable to you from the availability perspective, however keep in mind you do not control when those AZ’s become unavailable or worse run out of new capacity.
  • Always make sure an application like this is able to be redeployed through automation (i.e. self-healing) in another availability zone

Stateless/Stateful Applications & Capacity Define the number of AZ’s per Region you need

  • Stateless applications are much easier to provide high availability for. You simply maximize redundant instances across AZ’s within reason (cost being the reason/limiting factor)
  • So effectively with stateless applications you tend to deploy to as many AZ’s as you have available
  • But what should determine the number of AZ’s you utilize per region is your stateful applications requirements for quorum or availability and how you choose to manage capacity

Examples of balancing High Availability & Cost using Apache Cassandra in AWS

This Cassandra calculator is a useful reference for this section
http://www.ecyrd.com/cassandracalculator/

Cassandra N+1 ( 4 AZ’s ) – Wasteful Version

The following assumes Cassandra is configured for eventual consistency with the following parameters:

  • Cluster Size=5, Replication Factor=3, Write/Read=1
  • Region=us-east-1, Available AZ’s=4

Diagram Explanation

  • In this configuration you can lose 2 nodes without data availability impact
  • If you lose a 3rd node you would lose access to 60% of your data
  • Your node availability is N+2. HOWEVER, because you have only 4 AZ’s in us-east-1 that can be used, 2 nodes are required to be in 1 AZ
  • The available AZ’s in the region reduce your overall availability as a whole to N+1 in the event of ‘us-east-1e’ becoming unavailable
  • Thus the 5th node is completely unnecessary and wasteful!

[Diagram: cassandran1wasteful – Cassandra N+1 (4 AZ’s), wasteful configuration]

Cassandra N+1 Cost Optimized (4 AZ’s) Version

The following assumes Cassandra is configured for eventual consistency with the following parameters:

  • Set Cluster Size=4, Replication Factor=2
  • Region=us-east-1, Available AZ’s=4

Diagram Explanation

  • You can now lose 1 node or one availability zone and still service 100% of requests
  • Losing a 2nd node results in 50% impact to data accessibility

[Diagram: cassandran1optimized – Cassandra N+1 cost optimized (4 AZ’s)]

Cassandra N+2 Cost Optimized (4 AZ’s) Version

  • Set Cluster Size=4, Replication Factor=3
  • Region=us-east-1, Available AZ’s=4

Diagram Explanation

  • In order to optimize for cost and achieve N+2 you must have at least 4 AZ’s available
  • Otherwise, 100% of data per node is required to achieve N+2 with only 3 AZ’s
  • A better scenario is having 4 AZ’s or more where you can play with Replication Factor (aka data availability thus capacity needed to run the service)
  • To achieve cost optimized N+2 with 4 AZ’s we will set cluster size to 4 and Replication Factor to 3 resulting in this diagram/outcome

[Diagram: cassandran2optimized – Cassandra N+2 cost optimized (4 AZ’s)]

This achieves the following

  • You can lose any 2 nodes or AZ’s above, and continue to run. If you lose a 3rd node you would have 75% impact to your data availability and need to fail/markdown the region for another region to takeover.
  • Also if you have a 4 node cluster with a replication factor=4 you have 100% of data on each node…this is not cost optimized for availability, but allows you to lose 3/4 nodes.
  • However, N+3 is considered too many eggs in one region and thus is not the right approach.
  • Instead if you architect for N+2 in each region + have global traffic management routing to active/active deployments, you will maximize your availability.

Cassandra With Only 2 Availability Zones

  • us-east-1 has 4 AZ’s; this is actually a high number of available AZ’s for an AWS region. In regions like us-west-2 and eu-central-1 you only have 3 and 2 AZ’s, respectively.
  • To clearly show the risk of ignoring availability zones & capacity while architecting for availability let’s look at the extreme case of deploying in eu-central-1 changing nothing else
  • Again we use a Cluster Size=5,Replication Factor=3 as our initial example
  • eu-central-1 has only 2 AZ’s

[Diagram: cassandra2azbadness – 5-node Cassandra cluster spread across only 2 AZ’s in eu-central-1]

  • As you can see, if you lose eu-central-1a you lose 3 nodes at one time and 60% of your data becomes inaccessible.
  • If you lose eu-central-1b you are ok because you have > 100% of the required data available provided by your remaining nodes, but you cannot lose another node.
  • You cannot and should not ever deploy a service this way. Instead you need to tune the Cassandra cluster for the number of AZ’s.
  • Setting Cluster Size=2 and Replication Factor=2 for eu-central-1 allows an N+1 design
  • Unfortunately, cost optimization is not possible with only 2 AZ’s

[Diagram: cassandra2azn1 – 2-node Cassandra cluster, one node per AZ, in eu-central-1 (N+1)]

  • You can lose either node or AZ, and still have access to 100% of your data. You must lose access to the region or both AZs/nodes to have a service impact
  • You cannot achieve true N+2 with only 2 AZ’s in a region (You can achieve it via node redundancy only)
  • What I recommend in this case is have multi-region deployments (kudos if they happen to be in different public clouds), but within AWS deploy to eu-central-1 + eu-west-1 using N+1 deployments & global traffic management to achieve higher availability than a single region deployment would allow.

I hope this lengthy overview helps someone somewhere struggling with how to deploy applications to Availability Zones in AWS.
Feel free to contact me with any questions or corrections, thanks for reading !