tuxninja

Tuxninja aka Jason Riedel has worked as a Systems & Network Administrator and Code Hacker since 1999. Since 2005 he has worked for PayPal with a focus on Operations Architecture. He is also CEO of Tuxlabs LLC where he dedicates his time to the experimentation and close study of new technologies and programming languages.

Setting up Kubernetes to manage containers on the Google Cloud Platform

These days the pace of innovation in DevOps can leave you feeling like you’re jogging on a treadmill programmed to run faster than Usain Bolt. Mastery requires hours of practice and the last decade in DevOps has not allowed for it. Before you could rack up 10 years of experience running virtual machines on VMware in private data centers, private cloud software like OpenStack and CloudStack came along, and just when you and your team painfully achieved a stable install you were told that running virtual machines in public clouds like AWS, GCP, and Azure was the way forward. By the time you got there it was time to switch to containers, and before you can fully appreciate those, serverless functions are on the horizon, but I digress. If you want to know more about serverless functions, see my previous article on AWS Lambda. Instead, this article will focus on running Docker containers inside of a Kubernetes cluster on Google’s Cloud Platform.

Linux containers, recently popularized by Docker, need something to help manage them, and while there are many choices, Kubernetes, the open-sourced container management system from Google, is the undisputed king at this time. Given that Kubernetes was started by Google, it should be no surprise that the easiest way to install it is using Google’s Cloud Platform (GCP). However, OpenShift from Red Hat also provides a nice batteries-included abstraction if you need to get up and running quickly, as does kops.

Pre-Requisites

The main pre-requisites for this article are a Google Cloud Platform account and the gcloud utility, installed via the SDK.
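If you have not installed the SDK yet, the quick interactive install has generally looked like the following (a rough sketch of Google’s documented one-liner installer; check the SDK install docs for the current method):

# One-liner installer from Google, then re-exec your shell and initialize
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init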

In addition, you need some form of a computer with Internet connectivity, some typing skills, a brain that can read, and the determination to finish… For now I will give you the benefit of the doubt and assume you have all of these. It is also nice to have your beverage of choice while you do this; a fine tea, an ice-cold beer, or a glass of wine will work, but for Cancer’s sake please skip the sugar.

Here is where I would normally insert a link to facts on the link between sugar and Cancer, but I literally just learned I would be spreading rumors… Fine, drink your Kool-Aid, but don’t blame me for your calories.

The Build Out of our Self Healing IRC Server Hosting Containers

I lied, dude. IRC is so 1995 and unfortunately ICQ’s been dead, and Slack won’t let me host their sexy chat application with game-like spirit and better jokes than Kevin Hart. So… sorry to excite you… but I guess I will fall back to the docs here and install Nginx like us newbs are supposed to.

Numero Uno (Step 1 dude)

As part of the installation of the gcloud / SDK you should have run gcloud init, which requires you to log in with your Google account via a web browser.

You must log in to continue. Would you like to log in (Y/n)?  Y

Your browser has been opened to visit:

    https://accounts.google.com/o/oauth2/auth?redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&access_type=offline


You are logged in as: [tuxninja@tuxlabs.com].

This account has no projects.

Would you like to create one? (Y/n)?  Y

After clicking allow in your browser you will be logged in…and asked about creating an initial Project. Say yes (type Y and hit enter).

Enter a Project ID. Note that a Project ID CANNOT be changed later.
Project IDs must be 6-30 characters (lowercase ASCII, digits, or
hyphens) in length and start with a lowercase letter. tuxlabsdemo
Your current project has been set to: [tuxlabsdemo].

Not setting default zone/region (this feature makes it easier to use
[gcloud compute] by setting an appropriate default value for the
--zone and --region flag).
See https://cloud.google.com/compute/docs/gcloud-compute section on how to set
default compute region and zone manually. If you would like [gcloud init] to be
able to do this for you the next time you run it, make sure the
Compute Engine API is enabled for your project on the
https://console.developers.google.com/apis page.

Your Google Cloud SDK is configured and ready to use!

Sweet, your Project is now created. In order to use the Google Cloud APIs you must first enable access by visiting https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview and clicking enable.
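If you prefer the command line, newer SDK versions can enable APIs directly (a hedged sketch; the service names below are the usual ones for GKE and Compute Engine and may differ from the exact API the console link above enables):

# Adjust the service names to whichever APIs you actually need
gcloud services enable container.googleapis.com compute.googleapis.com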

That will take a minute. Once it completes you will be able to run gcloud commands against your Project. We can set the default region for our project like so:

tuxninja@tldev1:~/google-cloud-sdk$ gcloud compute project-info add-metadata --metadata google-compute-default-region=us-west1
Updated [https://www.googleapis.com/compute/v1/projects/tuxlabsdemo].
tuxninja@tldev1:~/google-cloud-sdk$ 

If you get an error here, stop being cheap and link your project to your billing account in the console.

Additionally, we want to set the default region/zone for gcloud commands like so:

tuxninja@tldev1:~$ gcloud config set compute/region us-west1
Updated property [compute/region].
tuxninja@tldev1:~$ gcloud config set compute/zone us-west1-a
Updated property [compute/zone].
tuxninja@tldev1:~$ 

Numero Dos Equis

We need to install kubectl so we can interact with Kubernetes.

tuxninja@tldev1:~$ gcloud components install kubectl


Your current Cloud SDK version is: 175.0.0
Installing components from version: 175.0.0

┌──────────────────────────────────────────────────────────────────┐
│               These components will be installed.                │
├─────────────────────┬─────────────────────┬──────────────────────┤
│         Name        │       Version       │         Size         │
├─────────────────────┼─────────────────────┼──────────────────────┤
│ kubectl             │               1.7.6 │             16.0 MiB │
│ kubectl             │                     │                      │
└─────────────────────┴─────────────────────┴──────────────────────┘

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

Do you want to continue (Y/n)?  Y

╔════════════════════════════════════════════════════════════╗
╠═ Creating update staging area                             ═╣
╠════════════════════════════════════════════════════════════╣
╠═ Installing: kubectl                                      ═╣
╠════════════════════════════════════════════════════════════╣
╠═ Installing: kubectl                                      ═╣
╠════════════════════════════════════════════════════════════╣
╠═ Creating backup and activating new installation          ═╣
╚════════════════════════════════════════════════════════════╝

Performing post processing steps...done.                                                                                                                      

Update done!

tuxninja@tldev1:~$ 

Once that is done, quickly realize someone spent an obscene amount of time making that install as pretty as it was without using ncurses. Shout out to that geek.

Numero Tres Delinquentes

Time to create our Kubernetes cluster. Run this command and “it’s going to be LEGEND….Wait for it….

tuxninja@tldev1:~$ gcloud container clusters create tuxlabs-kubernetes                           
Creating cluster tuxlabs-kubernetes...done.                                                   
Created [https://container.googleapis.com/v1/projects/tuxlabsdemo/zones/us-west1-a/clusters/tuxlabs-kubernetes].
kubeconfig entry generated for tuxlabs-kubernetes.
NAME                ZONE        MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
tuxlabs-kubernetes  us-west1-a  1.7.6-gke.1     35.197.120.249  n1-standard-1  1.7.6         3          RUNNING
tuxninja@tldev1:~$

And I hope you’re not lactose intolerant cause the second half of that word is DAIRY.” – NPH
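A note in passing: the create command accepts flags if you want something other than the defaults shown above (three n1-standard-1 nodes). This is just a sketch with example values, not what I ran:

gcloud container clusters create tuxlabs-kubernetes \
  --num-nodes 3 \
  --machine-type n1-standard-2 \
  --zone us-west1-a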

Numero (Audi) Quattro

Now you should be able to see all running Kubernetes services in your cluster like so:

tuxninja@tldev1:~$ kubectl get --all-namespaces services
NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   10.19.240.1     <none>        443/TCP         15m
kube-system   default-http-backend   NodePort    10.19.254.83    <none>        80:31154/TCP    14m
kube-system   heapster               ClusterIP   10.19.247.182   <none>        80/TCP          14m
kube-system   kube-dns               ClusterIP   10.19.240.10    <none>        53/UDP,53/TCP   14m
kube-system   kubernetes-dashboard   ClusterIP   10.19.249.188   <none>        80/TCP          14m
tuxninja@tldev1:~$

And we can see the pods like so:

tuxninja@tldev1:~$ kubectl get --all-namespaces pods
NAMESPACE     NAME                                                           READY     STATUS    RESTARTS   AGE
kube-system   event-exporter-1421584133-zlvnd                                2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0-1nb9x                                         2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0-bpqtv                                         2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0-mntjl                                         2/2       Running   0          16m
kube-system   heapster-v1.4.2-339128277-gxh5g                                3/3       Running   0          15m
kube-system   kube-dns-3468831164-5nn05                                      3/3       Running   0          15m
kube-system   kube-dns-3468831164-wcwtg                                      3/3       Running   0          16m
kube-system   kube-dns-autoscaler-244676396-fnq9g                            1/1       Running   0          16m
kube-system   kube-proxy-gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg   1/1       Running   0          16m
kube-system   kube-proxy-gke-tuxlabs-kubernetes-default-pool-6ede7d6a-pr82   1/1       Running   0          16m
kube-system   kube-proxy-gke-tuxlabs-kubernetes-default-pool-6ede7d6a-w6p8   1/1       Running   0          16m
kube-system   kubernetes-dashboard-1265873680-gftnz                          1/1       Running   0          16m
kube-system   l7-default-backend-3623108927-57292                            1/1       Running   0          16m
tuxninja@tldev1:~$ 

Numero Cinco (de Mayo)

You now have an active Kubernetes cluster. That is pretty sweet, huh? Make sure you take the time to check out what’s running under the hood in Google Compute Engine as well.

tuxninja@tldev1:~$ gcloud compute instances list
NAME                                               ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  us-west1-a  n1-standard-1               10.138.0.2   35.197.94.114   RUNNING
gke-tuxlabs-kubernetes-default-pool-6ede7d6a-pr82  us-west1-a  n1-standard-1               10.138.0.3   35.197.2.247    RUNNING
gke-tuxlabs-kubernetes-default-pool-6ede7d6a-w6p8  us-west1-a  n1-standard-1               10.138.0.4   35.197.117.173  RUNNING
tuxninja@tldev1:~$ 

Ok, for our final act, I promised Nginx…sigh…Let’s get this over with!

Step 1, create this nifty YAML file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Save it as deployment.yaml, then apply it!

tuxninja@tldev1:~$ kubectl apply -f deployment.yaml 
deployment "nginx-deployment" created
tuxninja@tldev1:~$
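As an aside, if you later want more (or fewer) replicas without editing the YAML, kubectl can scale the deployment in place (a one-liner sketch):

kubectl scale deployment nginx-deployment --replicas=3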

We can describe our deployment like this:

tuxninja@tldev1:~$ kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Sun, 15 Oct 2017 07:10:52 +0000
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replicas":2,"se...
Selector:               app=nginx
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-431080787 (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  3m    deployment-controller  Scaled up replica set nginx-deployment-431080787 to 2
tuxninja@tldev1:~$

And we can take a gander at the pods created for this deployment:

tuxninja@tldev1:~$ kubectl get pods -l app=nginx
NAME                               READY     STATUS    RESTARTS   AGE
nginx-deployment-431080787-7131f   1/1       Running   0          4m
nginx-deployment-431080787-cgwn8   1/1       Running   0          4m
tuxninja@tldev1:~$

To see info about a specific pod run: 

tuxninja@tldev1:~$ kubectl describe pod nginx-deployment-431080787-7131f
Name:           nginx-deployment-431080787-7131f
Namespace:      default
Node:           gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg/10.138.0.2
Start Time:     Sun, 15 Oct 2017 07:10:52 +0000
Labels:         app=nginx
                pod-template-hash=431080787
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-deployment-431080787","uid":"faa4d17b-b177-11e7-b439-42010...
                kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nginx
Status:         Running
IP:             10.16.1.4
Created By:     ReplicaSet/nginx-deployment-431080787
Controlled By:  ReplicaSet/nginx-deployment-431080787
Containers:
  nginx:
    Container ID:   docker://ce850ea012243e6d31e5eabfcc07aa71c33b3c1935e1ff1670282f22ac1d0907
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    State:          Running
      Started:      Sun, 15 Oct 2017 07:11:01 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gw047 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-gw047:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gw047
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                                        Message
  ----    ------                 ----  ----                                                        -------
  Normal  Scheduled              5m    default-scheduler                                           Successfully assigned nginx-deployment-431080787-7131f to gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg
  Normal  SuccessfulMountVolume  5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  MountVolume.SetUp succeeded for volume "default-token-gw047"
  Normal  Pulling                5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  pulling image "nginx:1.7.9"
  Normal  Pulled                 5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  Successfully pulled image "nginx:1.7.9"
  Normal  Created                5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  Created container
  Normal  Started                5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  Started container
tuxninja@tldev1:~$ 

Finally, it’s time to expose Nginx to the Internet:

tuxninja@tldev1:~$ kubectl expose deployment/nginx-deployment --port=80 --target-port=80 --name=nginx-deployment --type=LoadBalancer
service "nginx-deployment" exposed
tuxninja@tldev1:~

Check the status of our service:

tuxninja@tldev1:~$ kubectl get svc nginx-deployment
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-deployment   LoadBalancer   10.19.244.29   <pending>     80:31867/TCP   20s
tuxninja@tldev1:~$

Note the EXTERNAL-IP is in a pending state; once the LoadBalancer is created, it will have an IP address.
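If you don’t feel like re-running the command by hand, you can also watch until the IP shows up:

kubectl get svc nginx-deployment --watch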

tuxninja@tldev1:~$ kubectl get svc nginx-deployment
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
nginx-deployment   LoadBalancer   10.19.244.29   35.203.155.123   80:31867/TCP   1m
tuxninja@tldev1:~$ curl http://35.203.155.123
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

And we’re all done, congratulations! 🙂

In Closing…

Kubernetes is cool as a fan, and setting it up on GCP is almost as easy as pressing the big EASY button. We have barely scraped the surface here, so for continued learning I recommend buying Kubernetes: Up & Running by Kelsey Hightower, Brendan Burns, and Joe Beda. I would follow these folks on Twitter, and in addition follow Kubernetes co-founder Tim Hockin as well as former Docker, Google, and now Microsoft employee/guru of all things containers Jessie Frazelle.

After you are done following these inspirational leaders in the community, go to YouTube and watch every Kelsey Hightower video you can find. Kelsey Hightower is perhaps the tech community’s best presenter, and no one has done more to educate and bring Kubernetes to the mainstream than Kelsey. So a quick shout out and thank you to Kelsey for his contributions to the community. In his honor here are two of my favorite videos from Kelsey. [ one ] [ two ].


How To: Create An AWS Lambda Function To Backup/Snapshot Your EBS Volumes

AWS Lambda functions are a great way to run some code on a trigger/schedule without needing a whole server dedicated to it. They can be cost effective, but be careful: depending on how long they run and the number of executions per hour, they can be quite costly as well.

For my use case, I wanted to create snapshot backups of EBS volumes for a Mongo database every day. I originally implemented this using only CloudWatch, which is primarily a monitoring service, but because its Events feature handles scheduling, AWS also uses it for other things that require scheduling/cron-like behavior. Unfortunately, the CloudWatch-only implementation of snapshot backups was very limited. I could not ‘tag’ the backups, which was certainly something I needed for easily finding and cleaning them up later (past a retention period).
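Even with the snapshot logic moved into Lambda, CloudWatch Events still provides the daily trigger. Roughly, wiring that up from the CLI looks like the sketch below; the rule name, account ID, and ARNs are placeholders, and the function name simply echoes the description used in the code further down:

# All names/ARNs below are placeholders; substitute your own
aws events put-rule --name daily-ebs-snapshot --schedule-expression "rate(1 day)"

# Let CloudWatch Events invoke the function
aws lambda add-permission \
  --function-name ebs-snapshots \
  --statement-id daily-ebs-snapshot \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/daily-ebs-snapshot

# Point the rule at the function
aws events put-targets --rule daily-ebs-snapshot \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:ebs-snapshots"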

Anyway, there were a couple of pitfalls I ran into when creating this function.

Pitfalls

  1. Make sure your security group allows you to communicate out to the Internet for any AWS APIs you need to talk to.
  2. Make sure your time-out is set to 1 minute or greater depending on your use case. The default is only a few seconds, and that is likely not high enough.
  3. “The Lambda function execution role must have permissions to create, describe and delete ENIs. AWS Lambda provides a permissions policy, AWSLambdaVPCAccessExecutionRole, with permissions for the necessary EC2 actions (ec2:CreateNetworkInterface, ec2:DescribeNetworkInterfaces, and ec2:DeleteNetworkInterface) that you can use when creating a role”
    1. Personally, I did inline permissions and included the specific actions (a sketch of such a policy follows this list).
  4. Upload your zip file and make sure your handler section is configured with the exact file_name.method_in_your_code_for_the_handler
  5. Also, this one is more of an FYI: Lambda functions have a maximum timeout of 5 minutes (300 seconds).
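Here is roughly what the inline policy from pitfall 3.1 looked like, expressed as an AWS CLI call (a sketch only: the role and policy names are hypothetical, the EC2/logs actions mirror what the functions below call, and you would add the ENI actions from the quote above if the function runs inside a VPC):

aws iam put-role-policy \
  --role-name lambda-ebs-snapshot-role \
  --policy-name ebs-snapshot-inline \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances", "ec2:CreateSnapshot", "ec2:CreateTags",
        "ec2:DescribeSnapshots", "ec2:DeleteSnapshot",
        "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"
      ],
      "Resource": "*"
    }]
  }'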

I think that was it; after that everything worked fine. To finish this short article off, screenshots and the code!

Screenshots

And finally the code…

Function Code

# Backup cis volumes

import boto3


def lambda_handler(event, context):
    reg = 'us-east-1'

    # Connect to the EC2 API in the target region
    ec2 = boto3.client('ec2', region_name=reg)

    response = ec2.describe_instances(Filters=[{'Name': 'instance-state-name', 'Values': ['running']},
                                               {'Name': 'tag-key', 'Values': ['Name']},
                                               {'Name': 'tag-value', 'Values': ['cis-mongo*']},
                                               ])

    for r in response['Reservations']:
        for i in r['Instances']:
            for mapping in i['BlockDeviceMappings']:
                volId = mapping['Ebs']['VolumeId']

                # Create snapshot
                result = ec2.create_snapshot(VolumeId=volId,
                                             Description='Created by Lambda backup function ebs-snapshots')

                # Get snapshot resource
                ec2resource = boto3.resource('ec2', region_name=reg)
                snapshot = ec2resource.Snapshot(result['SnapshotId'])

                # Add volume name to snapshot for easier identification
                snapshot.create_tags(Tags=[{'Key': 'Name', 'Value': 'cis-mongo-snapshot-backup'}])

And here is an additional function to add for cleanup:

import boto3
from datetime import timedelta, datetime


def lambda_handler(event, context):
    # if older than days delete
    days = 14

    filters = [{'Name': 'tag:Name', 'Values': ['cis-mongo-snapshot-backup']}]

    # Set up the default boto3 session in our region, then create an EC2 client
    boto3.setup_default_session(region_name='us-east-1')
    client = boto3.client('ec2')
    snapshots = client.describe_snapshots(Filters=filters)

    for snapshot in snapshots["Snapshots"]:
        start_time = snapshot["StartTime"]
        delete_time = datetime.now(start_time.tzinfo) - timedelta(days=days)

        if start_time < delete_time:
            print('Deleting {id}'.format(id=snapshot["SnapshotId"]))
            client.delete_snapshot(SnapshotId=snapshot["SnapshotId"], DryRun=False)

The end, happy server-lessing (ha !)

How To: Use Spinnaker to deploy into AWS

Spinnaker is a tool created by Netflix (of whom I have always been a big fan) that succeeded Asgard, a tool I used in my past at PayPal. Not to digress, but my favorite companies when it comes to DevOps tools are HashiCorp and Netflix. Obviously, that’s a bit of apples and oranges, but they both make solid DevOps tools… Moving on…

Here is a quick overview of Spinnaker’s GUI.

If you are familiar with Jenkins, then Spinnaker will make a lot of sense to you. Spinnaker is all about configuring a pipeline with stages to automate/orchestrate a number of steps around ‘continuously’ deploying your application code to an environment. Spinnaker puts the CD in CI/CD 😉 Corny, but I had to say it…

Moving on…

To use Spinnaker effectively, you need to use Jenkins with it. Jenkins is responsible for your git / bake phase, where code is downloaded and then launched on a VM in your environment, i.e. AWS. At the end of that launch, assuming everything works out, a snapshot is taken and an AMI is created to be used in subsequent steps for install. A typical pipeline looks like this:

  1. Grab the latest check-in from the Git repo
  2. Build a package (rpm/deb) of your application/code (+ dependencies)
  3. Install the package above, aka bake the code on a VM in AWS, then take a snapshot (create an AMI)
  4. Subsequently deploy your code to compute, including LB setup if there is any.
    • Should also mention Spinnaker automatically sets up ASGs (auto-scaling groups), which is a nice feature (if a machine dies it’s re-created based on the capacity you set/require)
  5. And finally, if you need a cleanup task such as “destroy the boxes running the old code”, that runs.

Visually, the pipeline configuration looks like this…

Notice the shrink cluster step. This is one of Spinnaker’s built in stages that you can use. However, there is a better way to handle this vs. manually creating the ‘cleanup’ phases after a deploy. You can instead employ what are called strategies…

For example, if you click Deploy in the pipeline to go to the configuration for that…. and then click on edit under a Server Group you have created under Deploy Configuration…

You will get a new window popup that looks like this…

As you can see here I have clicked into Strategy, and am currently using Highlander which states ‘Destroys all previous server groups in the cluster as soon as new server group passes health checks’.

The highlander strategy is extremely useful for rolling out new builds. Essentially your old code will run until your new code is healthy, at which point the old server groups are destroyed. Assuming the new build is healthy (i.e. your health checks and all previous build tests etc. have good coverage) you should be good to go in the shortest amount of time possible. I tried all sorts of customizations to my pipeline to emulate the above behavior without using the strategy and found that the highlander strategy is the fastest.

Anyway, your use cases may be different depending on the type of pipeline you are configuring. So take some time to familiarize yourself with the available strategies for your specific use case.

Now I kinda jumped ahead a few chapters, but I did that to make the point quickly before you lose interest: Spinnaker will deploy your code, it will make sure your nodes are always running, and it can do this all automatically once configured…

What I mean is, if we go backwards now to the beginning of the pipeline, let’s look at the Configuration step…

Under the section called “Automated Triggers” I have added some configuration to listen to a Jenkins server for changes on a job that I have defined in Jenkins. I am not going to go into much detail on Jenkins because this article would be too long, but I will show this job in Jenkins very quickly for a full understanding.

This is a screenshot of a job that polls Git every minute for changes. So with these configured, Spinnaker will listen for changes to our specified repo via Jenkins. Subsequently, if changes are detected, Spinnaker will proceed with the rest of the pipeline configuration. The next step is Build.

The first step, Configuration, detects changes to our git repo. The second step, Build, turns those changes into an installable package, again utilizing Jenkins to do this. Here we tell Spinnaker to run our build job in Jenkins… let’s take a look at that in Jenkins… first we do our git configuration, with nothing under build triggers.

Then, if necessary, we can inject files during build time for packaging…

To make use of the injected files we must copy them from a variable to the host using a build step to execute a shell…

We then run our actual packaging script that turns it into an rpm/deb etc. Mine is called ‘package-collector.sh’ and looks like this…

And finally there is magic… The last line of our executed shell build step uploads our package to S3….

deb-s3 upload --bucket global-s3-prod-spinnaker --prefix spinnaker-ubuntu-mirror --arch amd64 --codename trusty --preserve-versions true ./collector/*.deb
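For what it’s worth, deb-s3 is a Ruby gem, so if your Jenkins box doesn’t already have it, getting it is typically just:

gem install deb-s3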

And the last step in our Jenkins is to archive the artifacts.

The end of the output of this Job when run looks like this.

Next Spinnaker goes to Bake; this is the step where the package gets installed to a VM (“baked”), which is then snapshotted and turned into an AMI for the Deploy step.

The Bake step in Spinnaker is the most straightforward…

Essentially, as long as your package name, ‘ciscollector’ as shown here, matches the base name of the package you are creating, Spinnaker (already configured to look in our S3 bucket) will find it and install it on a VM. At the end it will snapshot that VM and create an AMI to use for the install in the subsequent Deploy stage of the pipeline.

Finally, we are back to the Deploy step. The deploy step depends on the bake step, just like the bake step depends on Build and Build depends on Configuration. This is how we create the workflow of our pipeline, using ‘depends on’ (sorry for not explaining that sooner). So when Bake completes successfully we start our Deploy step, which as shown before has a server group created. Of course you will not have one yet, so you must create a new server group, which is how Spinnaker manages/groups servers & load balancers for deployments.

The deploy step requires that you fill out the following sections with your specific configuration…

As discussed previously under Basic Settings, it is best to pick a strategy as part of the Deploy stage rather than trying to do many custom/complicated things yourself. So rather than screenshot through all of this… it’s pretty self-explanatory to configure Load Balancers, Security Groups, Instance Type etc… The one thing I would highlight is the capacity section and also tagging under Advanced Settings.

The capacity section is important for two reasons. First, whatever you set for the number of instances turns into an auto-scaling group requirement; thus whatever you set there, if a node is killed or lost for any reason, your auto-scaling group in AWS will make sure you always have that number of nodes running. This is helpful if nodes are dying for whatever reason, although not super helpful if they continually die quickly due to misconfiguration, etc. The second part, about considering deployments successful when a certain % of instances are healthy, is very important for the speed of your pipeline. Let’s say you have a pool of 16 servers… if all those servers run the same image, the chances that you need to wait until 100% here are slim. For example, let’s say you run 4 servers, and you only need 3 servers to service 100% of the request capacity. In this case it would be acceptable to move on from the Deploy step at 75% capacity, because waiting for the last 25% isn’t really necessary to service requests and you are 99% sure that last host is going to come up. So feel free to tweak.

Lastly, as mentioned, make sure you take the opportunity to tag things in the Advanced Settings section.

Another highlight: this is also where you would inject userdata into EC2 instances (under Advanced Settings). This is helpful when you want to pass at-boot-time configuration to a system. For example, you might want to override a configuration file at boot time and say something like: if I get this value in userdata, override the config; if I don’t, use the existing config. Just remember to pass your userdata as base64.
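The encoding itself is trivial; on a Linux box with GNU coreutils something like this does it (userdata.sh being whatever boot-time script you want to pass):

# -w0 disables line wrapping so the encoded blob stays on one line
base64 -w0 userdata.sh > userdata.b64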

I want to leave you with some final comments. Spinnaker is a great deployment tool. It is helpful to use the pipeline/stage workflow for reproducing deployments; however, Spinnaker has many limitations, and it will always lag behind providers like AWS on feature-set parity. As an example, the ALB support is quite limited at the time of this writing. So for that I had to add a custom Jenkins step that runs a script on the Jenkins server to manually add nodes to target groups for an ALB. If you are interested or need more details on that solution, email me at tuxninja@tuxlabs.com.

I hope you found this brief overview on Spinnaker useful. It’s a great tool that can be used to easily reproduce application deployments.


MongoDB data loss avoided courtesy of AWS EBS & Snapshots

Cross Region MongoDB Across A Slow Network (Napster) Bad, AWS Snapshots (Metallica) Good!

I recently found myself in a bit of a pickle. My team and I had deployed a 3-node MongoDB cluster configured as two nodes in us-east-1 and one node in us-west-2 to maximize our availability while minimizing cost. Ultimately, there were two problems with this approach. The first is that, for reasons mostly outside of our control, the rest of our application stack above the database was deployed in us-east-1, drastically reducing any availability benefit the tertiary node in us-west-2 was buying us. Additionally, we were not aware at the time we made this choice, but our cluster/replication traffic was going across a VPN with very limited bandwidth that frequently suffered network partitions due to network maintenance and a lack of redundancy. We found our MongoDB cluster failing over frequently due to losing communication with its members, and when it did, our cluster had a difficult time recovering because replication couldn’t catch up across the VPN.

After restarting Mongo several times, including removing the data directory and starting over fresh, it became clear replication was going to take days to sync, and we could not afford to wait that long. We needed to restore the cluster health ASAP so we could move all nodes to us-east-1, mitigating the VPN network issue that was introducing so much pain.

Now, the system I am referring to is production; it cannot lose data, and it cannot take downtime or a maintenance window. Given these constraints I started googling ways to catch up your MongoDB when it will not catch up on its own. I tried some things I found, like rsync etc., before realizing it wasn’t any faster across that slow VPN link. Ultimately, I decided I was going to try a snapshot. Now, the documentation I read warned me that a live snapshot may result in potentially inconsistent data, but again I had to try it given the constraints I mentioned before. I had few options. In the end, as it turns out, it worked perfectly and in under an hour I had my entire cluster healthy. Using the AWS CLI utility, here is how I did it…

Step 1, take a snapshot of the healthy node

I actually took the snapshot in the GUI at first… so it’s not shown here, but for the record, to create a snapshot go to your volume under EC2 → Volumes, click Actions, then Create Snapshot, and save the snapshot ID. (Or alternatively do it with the CLI like I did for everything else.)
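For completeness, the CLI equivalent of that GUI step looks roughly like this (the volume ID is a placeholder; the description mirrors the one used in step 2):

aws --region us-west-2 ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "cis-mongo-prod-3-snapshot-05-19-2017"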

Step 2, copy the snapshot from your source region to your destination region

aws --region us-east-1 ec2 copy-snapshot --source-region us-west-2 --source-snapshot-id snap-01f185929341abd3b --description "cis-mongo-prod-3-snapshot-05-19-2017"

Make sure you copy to your clipboard the snapshot ID returned…

Step 3, Create a new volume from the copied snapshot

aws ec2 create-volume --size 300 --region us-east-1 --availability-zone us-east-1d --volume-type gp2 --snapshot-id snap-085b986dae85dfed1

Response:

{
    "AvailabilityZone": "us-east-1c",
    "Encrypted": false,
    "VolumeType": "gp2",
    "VolumeId": "vol-0fa49fde34e88a1c6",
    "State": "creating",
    "Iops": 900,
    "SnapshotId": "snap-085b986dae85dfed1",
    "CreateTime": "2017-05-19T20:37:33.304Z",
    "Size": 300
}

Step 4, Attach the volume to the system

aws ec2 attach-volume --volume-id vol-0fa49fde34e88a1c6 --instance-id i-0717cd609275fdbef --device /dev/sdc

Oh No We Got An Error!

An error occurred (InvalidVolume.ZoneMismatch) when calling the AttachVolume operation: The volume 'vol-0fa49fde34e88a1c6' is not in the same availability zone as instance 'i-0717cd609275fdbef'

Ah ok, simple fix, we created the volume in a different AZ than the node we were attaching to.

(Delete the old volume.) Then…

Step 5, create a new volume from the snapshot, but this time specify the same AZ (us-east-1b instead of us-east-1c) as the node we wish to attach it to

aws ec2 create-volume --size 300 --region us-east-1 --availability-zone us-east-1b --volume-type gp2 --snapshot-id snap-085b986dae85dfed1

Step 6, try attaching the new volume (cross your fingers)

aws ec2 attach-volume --volume-id vol-095cc214c8a5e74e0 --instance-id i-0717cd609275fdbef --device /dev/sdc

Response:

{
    "AttachTime": "2017-05-19T21:58:07.586Z",
    "InstanceId": "i-0717cd609275fdbef",
    "VolumeId": "vol-095cc214c8a5e74e0",
    "State": "attaching",
    "Device": "/dev/sdc"
}

Sweet, it worked… Now it’s time to do some work on the node we attached this volume to.

Step 7, check if the new attachment is visible to the system

[root@ip-10-5-0-149 mongo]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 10G 0 disk
└─xvda1 202:1 0 10G 0 part /
xvdb 202:16 0 300G 0 disk /mnt
xvdc 202:32 0 300G 0 disk
[root@ip-10-5-0-149 mongo]#

Yup, sure is. We can see our device ‘xvdc’ is a 300G disk that has no mount point. We can also see ‘xvdb’, which is our original Mongo data volume, mounted under /mnt.

Step 8, create mount point and mount the new device

[root@ip-10-5-0-149 mongo]# mkdir /mnt/mongo2
[root@ip-10-5-0-149 mongo]# mount /dev/xvdc /mnt/mongo2
[root@ip-10-5-0-149 mongo]#
[root@ip-10-5-0-149 mongo]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.8G 5.6G 3.9G 60% /
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 1.6G 15G 10% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/xvdb 296G 65M 281G 1% /mnt
none 64K 4.0K 60K 7% /.subd/tmp
tmpfs 3.2G 0 3.2G 0% /run/user/11272
/dev/xvdc 296G 12G 269G 5% /mnt/mongo2
[root@ip-10-5-0-149 mongo]#

Step 9, shut down Mongo if it’s running

[root@ip-10-5-0-149 mongo]# service mongod stop

Step 10, copy the snapshot data to the existing MongoDB data directory

[root@ip-10-5-0-149 mongo]# pwd
/mnt/mongo
[root@ip-10-5-0-149 mongo]# ls
[root@ip-10-5-0-149 mongo]# cp -r /mnt/mongo2/* .

Step 11, fix permissions for the copied data

[root@ip-10-5-0-149 mongo]# chown mongod:mongod /mnt/mongo -R

NOTE: Do not forget this step or you will get errors starting the MongoDB service

Step 12, start Mongo back up

[root@ip-10-5-0-149 mongo]# service mongod start
Starting mongod (via systemctl): [ OK ]
[root@ip-10-5-0-149 mongo]#

Step 13, Check Mongo Cluster Status

[root@ip-10-5-0-149 mongo]# mongo
cisreplset:SECONDARY> rs.status()
{
"set" : "cisreplset",
"date" : ISODate("2017-05-19T22:17:00.483Z"),
"myState" : 2,
"term" : NumberLong(111),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "ip-10-5-0-149:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 6,
"optime" : {
"ts" : Timestamp(1495222912, 3),
"t" : NumberLong(110)
},
"optimeDate" : ISODate("2017-05-19T19:41:52Z"),
"configVersion" : 3,
"self" : true
},
{
"_id" : 1,
"name" : "10.5.5.182:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4,
"optime" : {
"ts" : Timestamp(1495230671, 1033),
"t" : NumberLong(111)
},
"optimeDate" : ISODate("2017-05-19T21:51:11Z"),
"lastHeartbeat" : ISODate("2017-05-19T22:16:56.263Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T22:16:59.620Z"),
"pingMs" : NumberLong(1),
"syncingTo" : "10.100.0.17:27017",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "10.100.0.17:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3,
"optime" : {
"ts" : Timestamp(1495232212, 24),
"t" : NumberLong(111)
},
"optimeDate" : ISODate("2017-05-19T22:16:52Z"),
"lastHeartbeat" : ISODate("2017-05-19T22:16:56.516Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T22:16:58.751Z"),
"pingMs" : NumberLong(84),
"electionTime" : Timestamp(1495225273, 1),
"electionDate" : ISODate("2017-05-19T20:21:13Z"),
"configVersion" : 3
}
],
"ok" : 1
}
cisreplset:SECONDARY>

For contrast, here is what it looked like before; pay close attention to node/member 10.5.0.149

cisreplset:PRIMARY> rs.status()
{
"set" : "cisreplset",
"date" : ISODate("2017-05-19T22:00:07.185Z"),
"myState" : 1,
"term" : NumberLong(111),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "ip-10-5-0-149:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2017-05-19T22:00:06.570Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T21:49:39.839Z"),
"pingMs" : NumberLong(82),
"lastHeartbeatMessage" : "Connection refused",
"configVersion" : -1
},
{
"_id" : 1,
"name" : "10.5.5.182:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2138,
"optime" : {
"ts" : Timestamp(1495228916, 7491),
"t" : NumberLong(111)
},
"optimeDate" : ISODate("2017-05-19T21:21:56Z"),
"lastHeartbeat" : ISODate("2017-05-19T22:00:05.507Z"),
"lastHeartbeatRecv" : ISODate("2017-05-19T22:00:05.358Z"),
"pingMs" : NumberLong(83),
"syncingTo" : "10.100.0.17:27017",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "10.100.0.17:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1200895,
"optime" : {
"ts" : Timestamp(1495231207, 1111),
"t" : NumberLong(111)
},
"optimeDate" : ISODate("2017-05-19T22:00:07Z"),
"electionTime" : Timestamp(1495225273, 1),
"electionDate" : ISODate("2017-05-19T20:21:13Z"),
"configVersion" : 3,
"self" : true
}
],
"ok" : 1
}
cisreplset:PRIMARY>

Now that our DB is verified healthy, it’s time to clean up.

Step 14, clean up our now-unnecessary waste (and thank the gods)

Unmount & Delete

[root@ip-10-5-0-149 mongo]# umount /mnt/mongo2/
[root@ip-10-5-0-149 mongo]# rm -rf /mnt/mongo2/

Detach Volume

(env) ➜ ~ aws ec2 detach-volume --volume-id vol-0fa49fde34e88a1c6
{
    "AttachTime": "2017-05-19T20:43:29.000Z",
    "InstanceId": "i-0d535ee1cdfd79073",
    "VolumeId": "vol-0fa49fde34e88a1c6",
    "State": "detaching",
    "Device": "/dev/sdc"
}
(env) ➜ ~

Delete Volume & Snapshots

(env) ➜ ~ aws ec2 delete-volume --volume-id vol-095cc214c8a5e74e0
(env) ➜ ~ aws ec2 delete-snapshot --snapshot-id snap-085b986dae85dfed1
(env) ➜ ~ aws ec2 delete-snapshot --snapshot-id snap-01f185929341abd3b --region us-west-2

When I ran into this issue and googled around a bit, I really didn’t find anyone with a detailed account of how they got out of it. Thus I was inspired by the opportunity to help others in the future and the result is this post. I hope it finds someone, someday, facing a similar scenario and graciously lifts them out of the depths! Godspeed, happy clouding.


How To: Launch A Jump Host In AWS Using Terraform

I have been a HashiCorp fanboy for a couple of years now. I am impressed with, and happy about, pretty much everything they have done, from Vagrant to Consul and more. In short, they make the DevOps world a better place. That being said, this article is about the aptly named Terraform product. Here is how HashiCorp describes Terraform in their own words…

“Terraform enables you to safely and predictably create, change, and improve production infrastructure. It is an open source tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.”

Interestingly enough, it doesn’t point out, but in a way implies by omitting any mention of specific providers, that Terraform is multi-cloud (or cloud agnostic). Terraform works with AWS, GCP, Azure and OpenStack. In this article we will be covering how to use Terraform with AWS.

Step 1, download Terraform, I am not going to cover that part 😉
https://www.terraform.io/downloads.html

Step 2, Configuration…

Configuration

HashiCorp uses their own configuration language for Terraform; it is fully JSON compatible, which is nice. The details are covered here: https://github.com/hashicorp/hcl.

After downloading and installing Terraform, it’s time to start generating the configs.

AWS IAM Keys

AWS keys are required to do anything with Terraform. You can read about how to generate an access key / secret key for a user here : http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_CreateAccessKey

Terraform Configuration Files Overview

When you execute the terraform commands, you should be within a directory containing terraform configuration files. Those files ending in a ‘.tf’ extension will be loaded by Terraform in alphabetical order.

Before we jump into our configuration files for our Jump box, it may be helpful to review a quick primer on the syntax here https://www.terraform.io/docs/configuration/syntax.html

& some more advanced features such as lookup here https://www.terraform.io/docs/configuration/interpolation.html

Most users of Terraform choose to make their configs modular, resulting in multiple .tf files with names like main.tf, variables.tf and data.tf… This is not required… you can choose to put everything in one big Terraform file, but I caution you that modularity/compartmentalization is always a better approach than one monolithic file. Let’s take a look at our main.tf.

Main.tf

Typically, if you only see one Terraform config file, it is called main.tf; most commonly there will be at least one other file called variables.tf, used specifically for providing values for variables used in other TF files such as main.tf. Let’s take a look at our main.tf file section by section.

Provider

The provider keyword is used to identify the platform (cloud) you will be talking to, whether it is AWS or another cloud. In our case it is AWS, and we define three configuration items: an access key, a secret key, and a region, all of which we are inserting variables for, which will later be looked up / translated into real values in variables.tf.

provider "aws" {
        access_key = "${var.aws_access_key_id}"
        secret_key = "${var.aws_secret_access_key}"
        region = "${var.aws_region}"
}

Resource aws_instance

This section defines a resource, which is an “aws_instance” that we are calling “jump_box”. We define all configuration requirements for that instance, substituting variable names where necessary, and in some cases we just hard code the value. Notice we are attaching two security groups to the instance to allow ICMP & SSH. We are also tagging our instance, which is critical in an AWS environment so your administrators/teammates have some idea about the machine that is spun up and what it is used for.

resource "aws_instance" "jump_box" {
        ami = "${lookup(var.ami_base, "centos-7")}"
        instance_type = "t2.medium"
        key_name = "${var.key_name}"
        vpc_security_group_ids = [
                "${lookup(var.security-groups, "allow-icmp-from-home")}",
                "${lookup(var.security-groups, "allow-ssh-from-home")}"
        ]
        subnet_id = "${element(var.subnets_private, 0)}"
        root_block_device {
                volume_size = 8
                volume_type = "standard"
        }
        user_data = <<-EOF
                                #!/bin/bash
                                yum -y update
                                EOF
        tags = {
                Name = "${var.instance_name_prefix}-jump"
                ApplicationName = "jump-box"
                ApplicationRole = "ops"
                Cluster = "${var.tags["Cluster"]}"
                Environment = "${var.tags["Environment"]}"
                Project = "${var.tags["Project"]}"
                BusinessUnit = "${var.tags["BusinessUnit"]}"
                OwnerEmail = "${var.tags["OwnerEmail"]}"
                SupportEmail = "${var.tags["SupportEmail"]}"
        }
}

Btw, this resource type is provided by an AWS module found here: https://www.terraform.io/docs/providers/aws/r/instance.html

You have to download the module using terraform get (which can be done once you write some config files and type terraform get 🙂 ).

Also, note the usage of ‘user_data’ here to update the machine’s packages at boot time. This is an AWS feature that is exposed through the AWS module in Terraform.

Resource aws_security_group

Next we define a new security group (vs. attaching an existing one as in the above section). We are creating this new security group so other VMs in the environment can attach it later, such that it can be used to allow SSH from the jump host to the VMs in the environment.

Also notice under cidr_blocks we define a single IP address, a /32 of our jump host… but more important is to notice how we determine that jump host’s IP address: using .private_ip to access the attribute of the “jump_box” aws_instance we are creating/just created in AWS. That is pretty cool.

resource "aws_security_group" "jump_box_sg" {
        name = "${var.instance_name_prefix}-allow-ssh-from-jumphost"
        description = "Allow SSH from the jump host"
        vpc_id = "${var.vpc_id}"

        ingress {
                from_port = 22
                to_port = 22
                protocol = "tcp"
                cidr_blocks = ["${aws_instance.jump_box.private_ip}/32"]
        }

        tags = "${var.tags_infra_default}"
}

Resource aws_route53_record

The last entry in our main.tf creates a DNS entry for our jump host in Route53. Again notice we are specifying a name of “jump.” as a prefix to the entry, but the remainder of the FQDN is figured out by the lookup command. The lookup command is used to look up values inside of a map. In this case the map is defined in our variables.tf, which we will review next.

resource "aws_route53_record" "jump_box_dns" {
        zone_id = "${lookup(var.route53_zone, "id")}"
        type = "A"
        ttl = "300"
        name = "jump.${lookup(var.route53_zone, "name")}"
        records = ["${aws_instance.jump_box.private_ip}"]
}

Variables.tf

I will attempt to match the section structure I used above for main.tf when explaining the variables in variables.tf, though variables.tf itself is not laid out in sections as clearly.

Provider variables

When Terraform is run it compiles all .tf files and replaces any key that equals a variable with the value it finds listed in the variables.tf file (in our case) under the variable keyword. Notice that the first two variables are empty; they have no value defined. Why? Terraform supports taking input at runtime: by leaving these values blank, Terraform will prompt us for the values. Region is pretty straightforward; default is the value returned, and description in this case is really an unused value except as a comment.

variable "aws_access_key_id" {}
variable "aws_secret_access_key" {}

variable "aws_region" {
        description = "AWS region to create resources in"
        default = "us-east-1"
}

I would like to demonstrate the behavior of Terraform described above when the variables are left empty:

➜  jump terraform plan
var.aws_access_key_id
  Enter a value: aaaa

var.aws_secret_access_key
  Enter a value: bbbb

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

...

At this phase you would enter your AWS key info, and Terraform would ‘plan’ out your deployment, meaning it would run through your configs and print its plan to your screen, but not actually change any state in AWS. That is the difference between terraform plan & terraform apply.
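If you get tired of typing the keys on every plan/apply, Terraform will also read them from -var flags or from TF_VAR_-prefixed environment variables (the key values below are obviously placeholders):

# Option 1: pass variables on the command line
terraform plan -var 'aws_access_key_id=AKIA...' -var 'aws_secret_access_key=...'

# Option 2: export them once and Terraform picks them up automatically
export TF_VAR_aws_access_key_id='AKIA...'
export TF_VAR_aws_secret_access_key='...'
terraform plan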

Resource aws_instance variables

Here we define the values of our AMI, SSH Key, Instance prefix for the name, Tags, security groups, and subnets. Again this should be pretty straight forward, no magic here, just the use of string variables & maps where necessary.

variable "ami_base" {
        description = "AWS AMIs for base images"
        default = {
                "centos-7" = "ami-2af1ca3d"
                "ubuntu-14.04" = "ami-d79487c0"
        }
}

variable "key_name" {
        default = "tuxninja-rsa-2048"
}

variable "instance_name_prefix" {
        default = "tuxlabs-"
}

variable "tags" {
        type = "map"
        default = {
                        ApplicationName = "Jump"
                        ApplicationRole = "jump box - bastion"
                        Cluster = "Jump"
                        Environment = "Dev"
                        Project = "Jump"
                        BusinessUnit = "TuxLabs"
                        OwnerEmail = "tuxninja@tuxlabs.com"
                        SupportEmail = "tuxninja@tuxlabs.com"
        }
}

variable "tags_infra_default" {
        type = "map"
        default = {
                        ApplicationName = "Jump"
                        ApplicationRole = "jump box - bastion"
                        Cluster = "Jump"
                        Environment = "DEV"
                        Project = "Jump"
                        BusinessUnit = "TuxLabs"
                        OwnerEmail = "tuxninja@tuxlabs.com"
                        SupportEmail = "tuxninja@tuxlabs.com"
        }
}

variable "security-groups" {
        description = "maintained security groups"
        default = {
                "allow-icmp-from-home" = "sg-a1b75ddc"
                "allow-ssh-from-home" = "sg-aab75dd7"
        }
}

variable "vpc_id" {
        description = "VPC us-east-1-vpc-tuxlabs-dev01"
        default = "vpc-c229daa5"
}

variable "subnets_private" {
        description = "Private subnets within us-east-1-vpc-tuxlabs-dev01 vpc"
        default = ["subnet-78dfb852", "subnet-a67322d0", "subnet-7aa1cd22", "subnet-75005c48"]
}

variable "subnets_public" {
        description = "Public subnets within us-east-1-vpc-tuxlabs-dev01 vpc"
        default = ["subnet-7bdfb851", "subnet-a57322d3", "subnet-47a1cd1f", "subnet-73005c4e"]
}

It’s important to note the variables above are also used in other sections as needed, such as the aws_security_group section in main.tf …

Resource aws_route53_record variables

Here we define the ID & Name that are used in the ‘lookup’ functionality from our main.tf Route53 section above.

variable "route53_zone" {
        description = "Route53 zone used for DNS records"
        default = {
                id = "Z1ME2RCUVBYEW2"
                name = "tuxlabs.com"
        }
}

It’s important to note that Terraform does not care when or where things are loaded within your TF files. All files are loaded, and variables require no specific order relative to any other part of the configuration. All that is required is that each variable you try to insert a value for has a value listed via the variable keyword in a TF file somewhere.

Output.tf

Again, I want to remind folks you can put all of this Terraform syntax in one file if you wanted to, but I choose to split things up for readability and simplicity. So we have an output.tf file specifically for the output command; there is only one, which lists the results of our Terraform configuration upon success.

output "jump-box-details" {
	value = "${aws_route53_record.jump_box_dns.fqdn} - ${aws_instance.jump_box.private_ip} - ${aws_instance.jump_box.id} - ${aws_instance.jump_box.availability_zone}"
}

Ok, so let’s run this and see how it looks… First a reminder: to test your config you can run terraform plan first. It will tell you the changes it’s going to make… example:

➜  jump terraform plan
var.aws_access_key_id
  Enter a value: blahblah

var.aws_secret_access_key
  Enter a value: blahblahblahblah

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.


The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.jump_box
    ami:                                       "ami-2af1ca3d"
    associate_public_ip_address:               "<computed>"
    availability_zone:                         "<computed>"
    ebs_block_device.#:                        "<computed>"
    ephemeral_block_device.#:                  "<computed>"
    instance_state:                            "<computed>"
    instance_type:                             "t2.medium"

...

Plan: 3 to add, 0 to change, 0 to destroy.

If everything looks good & is green, you are ready to apply.

aws_security_group.jump_box_sg: Creation complete
aws_route53_record.jump_box_dns: Still creating... (10s elapsed)
aws_route53_record.jump_box_dns: Still creating... (20s elapsed)
aws_route53_record.jump_box_dns: Still creating... (30s elapsed)
aws_route53_record.jump_box_dns: Still creating... (40s elapsed)
aws_route53_record.jump_box_dns: Still creating... (50s elapsed)
aws_route53_record.jump_box_dns: Creation complete

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate

Outputs:

jump-box-details = jump.tuxlabs.com - 10.10.195.46 - i-037f5b15bce6cc16d - us-east-1b
➜  jump
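As that output mentions, the state lives in terraform.tfstate; two read-only commands are handy for poking at it later:

terraform show     # dump the full state, resource by resource
terraform output   # re-print just the outputs, e.g. jump-box-details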

Congratulations, you now have a jump box in AWS using Terraform. Remember to attach the required security group to each machine you want to grant access to, and start locking down your jump box / bastion and VMs.

Outro

Remember, if you take the above config and try to run it, swapping out only the variables, it will error with something about a required module. Downloading the required modules is as simple as typing ‘terraform get‘, which I believe the error message even tells you 🙂

So again, this was a brief intro to Terraform; it does a lot & is extremely powerful. One of the things I did when setting up a Mongo cluster using Terraform was to take advantage of a map to change the node count per region. So if you wanted to deploy a different number of instances in different regions, your config might look something like…

main.tf

  count = "${var.region_instance_count[var.region_name]}"

variables.tf

variable "region_instance_count" {
  type = "map"
  default = {
    us-east-1 = 2
    us-west-2 = 1
    eu-central-1 = 1
    eu-west-1 = 1
  }
}

It also supports split if you want to multi-value a string variable.

Another couple of things before I forget: terraform apply doesn’t just set up new infrastructure, it can also be used to modify existing infrastructure, which is a very powerful feature. So if you have something deployed and want to make a change, terraform apply is your friend.

And finally, when you are done with the infrastructure you spun up or it’s time to bring her down… ‘terraform destroy’

➜  jump terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

var.aws_access_key_id
  Enter a value: blahblah

var.aws_secret_access_key
  Enter a value: blahblahblahblah

...

Destroy complete! Resources: 3 destroyed.
➜  jump

I hope this article helps.

Happy Terraforming 😉