Cloud

How to Set Up Flask and Apache on an Ubuntu VM in DigitalOcean with a Custom Domain

In this video I show how to set up Flask and Apache on an Ubuntu VM in DigitalOcean with a custom domain. This was made after someone in the comments on my other DigitalOcean video requested it. If there is something else anyone would like to see, please let me know; I am happy to provide these walkthroughs.

Note: I hit a number of challenges with DNS in this one; I think it’s fun to watch me struggle. Enjoy!


Setting up Kubernetes to manage containers on the Google Cloud Platform

These days the pace of innovation in DevOps can leave you feeling like you’re jogging on a treadmill programmed to run faster than Usain Bolt. Mastery requires hours of practice, and the last decade in DevOps has not allowed for it. Before you could gain 10 years of experience running virtual machines on VMware in private data centers, private cloud software like OpenStack and CloudStack came along; just when you and your team painfully achieved a stable install, you were told running virtual machines in public clouds like AWS, GCP, and Azure was the way forward. By the time you got there it was time to switch to containers, and before you can fully appreciate those, serverless functions are on the horizon. But I digress; if you want to know more about serverless functions, see my previous article on AWS Lambda. Instead, this article will focus on running Docker containers inside of a Kubernetes cluster on Google’s Cloud Platform.

Linux containers, recently popularized by Docker, need something to help manage them, and while there are many choices, Kubernetes, the open-source container management system from Google, is the undisputed king at this time. Given that Kubernetes was started by Google, it should be no surprise that the easiest way to install it is using Google’s Cloud Platform (GCP). That said, OpenShift from Red Hat also provides a nice batteries-included abstraction if you need to get up and running quickly, as does kops.

Pre-Requisites

The main pre-requisites for this article are a Google Cloud Platform account and the gcloud utility, installed via the SDK.
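
If you have not installed the SDK yet, a minimal sketch for Linux/macOS follows; the interactive installer below is Google’s documented one, but check https://cloud.google.com/sdk for your platform’s current instructions:

# download and run Google's interactive SDK installer
curl https://sdk.cloud.google.com | bash
# reload your shell so gcloud lands on your PATH
exec -l $SHELL
# authenticate and create/select a project
gcloud init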

In addition, you need some form of a computer with Internet connectivity, some typing skills, a brain that can read, and the determination to finish… For now I will give you the benefit of the doubt and assume you have all of these. It is also nice to have your beverage of choice while you do this; a fine tea, an ice cold beer, or a glass of wine will work, but for cancer’s sake please skip the sugar.

Here is where I would normally insert a link to facts on the link between sugar and cancer, but I literally just learned I would be spreading rumors… Fine, drink your Kool-Aid, but don’t blame me for your calories.

The Build Out of our Self Healing IRC Server Hosting Containers

I lied, dude. IRC is so 1995, and unfortunately ICQ’s been dead for years, and Slack won’t let me host their sexy chat application with game-like spirit and better jokes than Kevin Hart. So… sorry to excite you… but I guess I will fall back to the docs here and install Nginx like us newbs are supposed to.

Numero Uno (Step 1 dude)

As part of the installation of the gcloud SDK you should have run gcloud init, which requires you to log in with your Google account via a web browser.

You must log in to continue. Would you like to log in (Y/n)?  Y

Your browser has been opened to visit:

    https://accounts.google.com/o/oauth2/auth?redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&access_type=offline


You are logged in as: [tuxninja@tuxlabs.com].

This account has no projects.

Would you like to create one? (Y/n)?  Y

After clicking Allow in your browser you will be logged in… and asked about creating an initial Project. Say yes (type Y and hit enter).

Enter a Project ID. Note that a Project ID CANNOT be changed later.
Project IDs must be 6-30 characters (lowercase ASCII, digits, or
hyphens) in length and start with a lowercase letter. tuxlabsdemo
Your current project has been set to: [tuxlabsdemo].

Not setting default zone/region (this feature makes it easier to use
[gcloud compute] by setting an appropriate default value for the
--zone and --region flag).
See https://cloud.google.com/compute/docs/gcloud-compute section on how to set
default compute region and zone manually. If you would like [gcloud init] to be
able to do this for you the next time you run it, make sure the
Compute Engine API is enabled for your project on the
https://console.developers.google.com/apis page.

Your Google Cloud SDK is configured and ready to use!

Sweet, your Project is now created. In order to use the Google Cloud APIs you must first enable access by visiting https://console.developers.google.com/apis/api/replicapool.googleapis.com/overview and clicking enable.
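
If you would rather stay in the terminal, newer SDK releases can also enable APIs from the CLI; a sketch (the service names here are my assumption for GKE, verify them with gcloud services list):

# enable the Compute Engine and Container (GKE) APIs on the current project
gcloud services enable compute.googleapis.com container.googleapis.com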

That will take a minute. Once completed, you will be able to run gcloud commands against your Project. We can set the default region for our project like so:

tuxninja@tldev1:~/google-cloud-sdk$ gcloud compute project-info add-metadata --metadata google-compute-default-region=us-west1
Updated [https://www.googleapis.com/compute/v1/projects/tuxlabsdemo].
tuxninja@tldev1:~/google-cloud-sdk$ 

If you get an error here, stop being cheap and link your project to your billing account in the console.

Additionally, we want to set the default region/zone for gcloud commands like so:

tuxninja@tldev1:~$ gcloud config set compute/region us-west1
Updated property [compute/region].
tuxninja@tldev1:~$ gcloud config set compute/zone us-west1-a
Updated property [compute/zone].
tuxninja@tldev1:~$ 
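
Before moving on, it’s worth sanity-checking the defaults you just set:

# confirm the active account, project, and compute region/zone defaults
gcloud config list

If the compute/region and compute/zone properties show up as you set them, you are good to go.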

Numero Dos Equis

We need to install kubectl so we can interact with Kubernetes.

tuxninja@tldev1:~$ gcloud components install kubectl


Your current Cloud SDK version is: 175.0.0
Installing components from version: 175.0.0

┌──────────────────────────────────────────────────────────────────┐
│               These components will be installed.                │
├─────────────────────┬─────────────────────┬──────────────────────┤
│         Name        │       Version       │         Size         │
├─────────────────────┼─────────────────────┼──────────────────────┤
│ kubectl             │               1.7.6 │             16.0 MiB │
│ kubectl             │                     │                      │
└─────────────────────┴─────────────────────┴──────────────────────┘

For the latest full release notes, please visit:
  https://cloud.google.com/sdk/release_notes

Do you want to continue (Y/n)?  Y

╔════════════════════════════════════════════════════════════╗
╠═ Creating update staging area                             ═╣
╠════════════════════════════════════════════════════════════╣
╠═ Installing: kubectl                                      ═╣
╠════════════════════════════════════════════════════════════╣
╠═ Installing: kubectl                                      ═╣
╠════════════════════════════════════════════════════════════╣
╠═ Creating backup and activating new installation          ═╣
╚════════════════════════════════════════════════════════════╝

Performing post processing steps...done.                                                                                                                      

Update done!

tuxninja@tldev1:~$ 

Once that is done, quickly realize that someone spent an obscene amount of time making that install as pretty as it is without using ncurses. Shout out to that geek.

Numero Tres Deliquentes

Time to create our Kubernetes cluster. Run this command and “it’s going to be LEGEND….Wait for it….

tuxninja@tldev1:~$ gcloud container clusters create tuxlabs-kubernetes                           
Creating cluster tuxlabs-kubernetes...done.                                                   
Created [https://container.googleapis.com/v1/projects/tuxlabsdemo/zones/us-west1-a/clusters/tuxlabs-kubernetes].
kubeconfig entry generated for tuxlabs-kubernetes.
NAME                ZONE        MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
tuxlabs-kubernetes  us-west1-a  1.7.6-gke.1     35.197.120.249  n1-standard-1  1.7.6         3          RUNNING
tuxninja@tldev1:~$

And I hope you’re not lactose intolerant cause the second half of that word is DAIRY.” – NPH

Numero (Audi) Quattro

Now you should be able to see all running Kubernetes services in your cluster like so:

tuxninja@tldev1:~$ kubectl get --all-namespaces services
NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   10.19.240.1     <none>        443/TCP         15m
kube-system   default-http-backend   NodePort    10.19.254.83    <none>        80:31154/TCP    14m
kube-system   heapster               ClusterIP   10.19.247.182   <none>        80/TCP          14m
kube-system   kube-dns               ClusterIP   10.19.240.10    <none>        53/UDP,53/TCP   14m
kube-system   kubernetes-dashboard   ClusterIP   10.19.249.188   <none>        80/TCP          14m
tuxninja@tldev1:~$

And we can see the pods like so:

tuxninja@tldev1:~$ kubectl get --all-namespaces pods
NAMESPACE     NAME                                                           READY     STATUS    RESTARTS   AGE
kube-system   event-exporter-1421584133-zlvnd                                2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0-1nb9x                                         2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0-bpqtv                                         2/2       Running   0          16m
kube-system   fluentd-gcp-v2.0-mntjl                                         2/2       Running   0          16m
kube-system   heapster-v1.4.2-339128277-gxh5g                                3/3       Running   0          15m
kube-system   kube-dns-3468831164-5nn05                                      3/3       Running   0          15m
kube-system   kube-dns-3468831164-wcwtg                                      3/3       Running   0          16m
kube-system   kube-dns-autoscaler-244676396-fnq9g                            1/1       Running   0          16m
kube-system   kube-proxy-gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg   1/1       Running   0          16m
kube-system   kube-proxy-gke-tuxlabs-kubernetes-default-pool-6ede7d6a-pr82   1/1       Running   0          16m
kube-system   kube-proxy-gke-tuxlabs-kubernetes-default-pool-6ede7d6a-w6p8   1/1       Running   0          16m
kube-system   kubernetes-dashboard-1265873680-gftnz                          1/1       Running   0          16m
kube-system   l7-default-backend-3623108927-57292                            1/1       Running   0          16m
tuxninja@tldev1:~$ 

Numero Cinco (de Mayo)

You now have an active Kubernetes cluster. That is pretty sweet, huh? Make sure you take the time to check out what’s running under the hood in Google Compute Engine as well.

tuxninja@tldev1:~$ gcloud compute instances list
NAME                                               ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  us-west1-a  n1-standard-1               10.138.0.2   35.197.94.114   RUNNING
gke-tuxlabs-kubernetes-default-pool-6ede7d6a-pr82  us-west1-a  n1-standard-1               10.138.0.3   35.197.2.247    RUNNING
gke-tuxlabs-kubernetes-default-pool-6ede7d6a-w6p8  us-west1-a  n1-standard-1               10.138.0.4   35.197.117.173  RUNNING
tuxninja@tldev1:~$ 

Ok, for our final act, I promised Nginx…sigh…Let’s get this over with!

Step 1, create this nifty YAML file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Save it as deployment.yaml, then apply it!

tuxninja@tldev1:~$ kubectl apply -f deployment.yaml 
deployment "nginx-deployment" created
tuxninja@tldev1:~$

We can describe our deployment like this:

tuxninja@tldev1:~$ kubectl describe deployment nginx-deployment
Name:                   nginx-deployment
Namespace:              default
CreationTimestamp:      Sun, 15 Oct 2017 07:10:52 +0000
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision=1
                        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx-deployment","namespace":"default"},"spec":{"replicas":2,"se...
Selector:               app=nginx
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deployment-431080787 (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  3m    deployment-controller  Scaled up replica set nginx-deployment-431080787 to 2
tuxninja@tldev1:~$

And we can take a gander at the pods created for this deployment:

tuxninja@tldev1:~$ kubectl get pods -l app=nginx
NAME                               READY     STATUS    RESTARTS   AGE
nginx-deployment-431080787-7131f   1/1       Running   0          4m
nginx-deployment-431080787-cgwn8   1/1       Running   0          4m
tuxninja@tldev1:~$

To see info about a specific pod, run:

tuxninja@tldev1:~$ kubectl describe pod nginx-deployment-431080787-7131f
Name:           nginx-deployment-431080787-7131f
Namespace:      default
Node:           gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg/10.138.0.2
Start Time:     Sun, 15 Oct 2017 07:10:52 +0000
Labels:         app=nginx
                pod-template-hash=431080787
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"nginx-deployment-431080787","uid":"faa4d17b-b177-11e7-b439-42010...
                kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container nginx
Status:         Running
IP:             10.16.1.4
Created By:     ReplicaSet/nginx-deployment-431080787
Controlled By:  ReplicaSet/nginx-deployment-431080787
Containers:
  nginx:
    Container ID:   docker://ce850ea012243e6d31e5eabfcc07aa71c33b3c1935e1ff1670282f22ac1d0907
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    State:          Running
      Started:      Sun, 15 Oct 2017 07:11:01 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gw047 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  default-token-gw047:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-gw047
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                                        Message
  ----    ------                 ----  ----                                                        -------
  Normal  Scheduled              5m    default-scheduler                                           Successfully assigned nginx-deployment-431080787-7131f to gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg
  Normal  SuccessfulMountVolume  5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  MountVolume.SetUp succeeded for volume "default-token-gw047"
  Normal  Pulling                5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  pulling image "nginx:1.7.9"
  Normal  Pulled                 5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  Successfully pulled image "nginx:1.7.9"
  Normal  Created                5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  Created container
  Normal  Started                5m    kubelet, gke-tuxlabs-kubernetes-default-pool-6ede7d6a-nvfg  Started container
tuxninja@tldev1:~$ 
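
If describing a pod isn’t enough, you can go straight into the container itself; pod names are generated, so substitute one from your own kubectl get pods output:

# tail the logs from one of the Nginx pods
kubectl logs nginx-deployment-431080787-7131f

# or open an interactive shell inside the container
kubectl exec -it nginx-deployment-431080787-7131f -- /bin/bash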

Finally it’s time to expose Nginx to the Internet

tuxninja@tldev1:~$ kubectl expose deployment/nginx-deployment --port=80 --target-port=80 --name=nginx-deployment --type=LoadBalancer
service "nginx-deployment" exposed
tuxninja@tldev1:~$

Check the status of our service:

tuxninja@tldev1:~$ kubectl get svc nginx-deployment
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-deployment   LoadBalancer   10.19.244.29   <pending>     80:31867/TCP   20s
tuxninja@tldev1:~$

Note the EXTERNAL-IP is in a pending state; once the LoadBalancer is created, this will have an IP address.
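
Rather than re-running the command until the address appears, you can watch it:

# --watch (-w) keeps printing updates until you Ctrl-C
kubectl get svc nginx-deployment -w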

tuxninja@tldev1:~$ kubectl get svc nginx-deployment
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)        AGE
nginx-deployment   LoadBalancer   10.19.244.29   35.203.155.123   80:31867/TCP   1m
tuxninja@tldev1:~$ curl http://35.203.155.123
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

And we’re all done, congratulations! 🙂
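
Before we close out, two more commands worth knowing: scaling is one line, and when you are done playing, tear things down so Google stops billing you (a sketch using the names from this walkthrough):

# scale the deployment from 2 replicas to 4
kubectl scale deployment/nginx-deployment --replicas=4

# clean up: delete the service (and its cloud load balancer), then the cluster
kubectl delete svc nginx-deployment
gcloud container clusters delete tuxlabs-kubernetes --zone us-west1-a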

In Closing…

Kubernetes is cool as a fan, and setting it up on GCP is almost as easy as pressing the big EASY button. We have barely scratched the surface here, so for continued learning I recommend buying Kubernetes: Up & Running by Kelsey Hightower, Brendan Burns, and Joe Beda. I would follow these folks on Twitter, and in addition follow Kubernetes co-founder Tim Hockin as well as former Docker, Google, and now Microsoft employee/guru of all things containers, Jessie Frazelle.

After you are done following these inspirational leaders in the community, go to YouTube and watch every Kelsey Hightower video you can find. Kelsey Hightower is perhaps the tech community’s best presenter, and no one has done more to educate and bring Kubernetes to the mainstream than Kelsey. So a quick shout out and thank you to Kelsey for his contributions to the community. In his honor, here are two of my favorite videos from Kelsey. [ one ] [ two ].


How To: Interact with AWS S3 Using the Go SDK and not lose your mind

After these messages we will carry on with our regularly scheduled programming…

Yesterday (during the scribbling of this article) AWS suffered one of its worst outages in history in the us-east-1 region. A reminder to us all to be multi-region and, more importantly, multi-cloud. Please see my other articles on HA deployments using AWS and my perspective & caution on the path to centralization or singularity we appear to be on (though the outage may help people wake up).

Now back to your regularly scheduled program…


My team and I are building a CMDB for AWS, which provides us with everything happening in our AWS environment + OS-level metadata + change history. There will be a separate article on the CMDB journey, but today I want to focus on a specific service in AWS called S3, which is their object store. S3 is a bit of a special snowflake when it comes to AWS services, and because of that I ran into challenges structuring my code: up until S3 (which was the last service I wrote code for) everything had been very similar and easily modularized. We will get to more detail, but let’s start this article by covering how to use the Go SDK for AWS.

Dependencies

This article assumes you already program in Go and have Go installed on your machine. To get started you will need a couple of additional items.

  1. Download and install the SDK here: https://github.com/aws/aws-sdk-go
  2. This is the documentation for the SDK; you will need it, so bookmark it: http://docs.aws.amazon.com/sdk-for-go/api/
  3. It is extremely helpful when working with the APIs to have aws-shell installed: https://github.com/awslabs/aws-shell
    • This enables you to interact with AWS APIs on the fly so you can understand the output of commands as you search for what you are trying to accomplish.

The Collector Structure

The collector is the component in my CMDB architecture that does all the work of collecting the metadata that we shove into our CMDB. The collector is heavily threaded using goroutines for performance. The basic structure looks like this.

  • Call a goroutine for each service you want to collect
    • //pass in all accounts, regions (from config file) and pre-established awsSessions to each account you are collecting
    • Inside of a service’s goroutine, loop over accounts & regions
      • Launch a goroutine for each account & region
        • Inside of those goroutines make your AWS API call(s), for example DescribeInstances
        • Store the response (I loop through mine and store it in a map using the resource-id as the key)
        • Finally, kick off another goroutine to write to our API and store the data.

Ok, so hopefully that seems straightforward as a basic structure… let’s get to why S3 threw me for a loop.

S3 Challenges

It will be best if I show you what I tried first; basically I tried to marry my existing pattern to S3, and that certainly was a bad idea from the start. Here was the structure of the S3 part of the code.

  • The S3 goroutine gets called from main.go
  • //all accounts, regions and AWS Sessions are passed into the next goroutine
    • Inside of the S3 goroutine, loop over accounts & regions
      • Launch a goroutine for each account & region
        • Inside of those goroutines, list S3 buckets
          • For each S3 bucket returned
            • Call additional APIs such as GetBucketTagging()

Ok, so what happened? I got a lot of errors, that’s what 🙂 Ones like this…

BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region
status code: 301, request id:

At first, I thought maybe my code wasn’t thread safe…but that didn’t make much sense given the other services had no issues like this.

So as I debugged my code, I began to realize the bucket list I was getting wasn’t limited to the region I was passing in / establishing a session for.

Naturally, I googled: can I list buckets for a single region?

https://github.com/aws/aws-sdk-java/issues/920 (even though this is the Java SDK, it still applies):

"spfink commented on Nov 16, 2016
It is not possible to list the buckets in a single region. Regardless of the endpoint or region that you set, when calling list buckets you will get buckets from all regions.

In order to determine the region of a bucket you can use getBucketLocation(String bucketName).

https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/AmazonS3.java#L1026”

Ah, ok: the bucket list returned on an AWS session established with a specific account and region ignores the region, because S3 buckets are global to an account; all buckets under an account are returned by the ListBuckets() call. I knew S3 buckets were global per account, but I failed to expect that behavior when a specific region is passed into the SDK/API.

Ok so how then can I distinguish where a bucket actually lives?

As spfink says above, I needed to run GetBucketLocation() per bucket. Thus my code structure started to look like this…

  • For each account, region
    • ListBuckets
      • For each bucket returned in that account, region
        • GetBucketLocation
        • If a LocationConstraint (region) is returned, set the new region (otherwise if response is null, do nothing)
        • Get tags for the bucket in account, region

With this code I was still getting errors about region, but why?

Well, I made the mistake of thinking a ‘null’ response from the API for LocationConstraint had no meaning (or meant I could query the bucket from any region). Wrong: null actually means us-east-1 (see my Google result below). Thus the IF condition evaluated false, the existing region from the outer loop was used whenever GetBucketLocation() returned null, and this resulted in many errors.

Here’s what the google turned up..

https://github.com/aws/aws-cli/issues/564

"kyleknap commented on Mar 16, 2015
@xurume

For buckets located in US Standard, the location constraint will be null. For S3, here is the list of region names with their corresponding regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region. Notice that the location constraint is none for US Standard.

The CLI uses the values in the region column for the --region parameter. So for S3's US Standard, you need to use us-east-1 as the region.”

So let’s clarify my mistakes…

  1. The S3 ListBuckets call returns all buckets under an account, globally.
    • It does not abide by a region configured in an API session.
    • Thus I/you should not loop over regions from a config file for the S3 service.
    • Instead, I/you need to find a bucket’s ‘real’ location using GetBucketLocation.
    • Then set the region for actions other than ListBuckets (which is global per account and ignores the region passed).
  2. GetBucketLocation returning null doesn’t mean the bucket is global or that you can interact with it from any endpoint you please… it actually means us-east-1 (the CLI sketch below shows both behaviors): http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
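
You can verify both behaviors from the AWS CLI / aws-shell mentioned earlier before writing any Go; the bucket name below is a placeholder:

# returns every bucket in the account, regardless of your configured region
aws s3api list-buckets

# returns {"LocationConstraint": null} for a US Standard (us-east-1) bucket,
# and a region name like "us-west-2" for everything else
aws s3api get-bucket-location --bucket my-example-bucket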

The Working Code

So in the end the working code for S3 looks like this…

  • collector/main.go fires off a bunch of goroutines per service we are collecting for.
  • It passes in accounts and regions from a config file.
  • For the S3 service/file under the ‘services’ package, the entry point is a function called StoreS3Resources.

Everything in the code should be self-explanatory from that point on. You will note a function call to ‘writeToCis’… CIS is the name of our internal CMDB project/service. Again, I will later be blogging about the entire system in detail once we open source the code. Please keep in mind this code is MVP; it will be changed a lot (optimization, modularization, bug fixes, etc.) before & after we open source it, but for now here is the quick and dirty, but hopefully functional, code 🙂 Use at your own risk!

package services

import (
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/aws"
	"sync"
	"fmt"
	"time"
	"encoding/json"
)

var wgS3BucketList sync.WaitGroup
var wgS3GetBucketDetails sync.WaitGroup
var s3MapsMutex sync.Mutex //guards the shared maps below, which are written from concurrent goroutines
var accountRegionsMap = make(map[string]map[string][]string)
var accountToBuckets = make(map[string][]string)
var bucketToAccount = make(map[string]string)
var defaultRegion string = "us-east-1"

func writeS3ResourceToCis(resType string, resourceData map[string]interface{}, account string, region string){
	b, err := json.Marshal(resourceData)
	check(err)

	err, status, url := writeToCisBulk(resType, region, b)
	check(err)
	fmt.Printf("%s - %s - %s - %s - Bytes: %d\n", status, url, account, region, cap(b))
}

func StoreS3Resources(awsSessions map[string]*session.Session, accounts []string, configuredRegions []string) {
	s3Start := time.Now()

	wgS3BucketList.Add(1)
	go func () {
		defer wgS3BucketList.Done()
		for _, account := range accounts {
			awsSession := awsSessions[account]
			getS3AccountBucketList(awsSession, account)
		}
	}()
	wgS3BucketList.Wait()

	getS3BucketDetails(awsSessions, configuredRegions)

	s3Elapsed := time.Since(s3Start)
	fmt.Printf("S3 completed in: %s\n", s3Elapsed)
}

func getS3AccountBucketList(awsSession *session.Session, account string) {
	svcS3 := s3.New(awsSession, &aws.Config{Region: aws.String(defaultRegion)})

	//list returned is for all buckets in an account ( no regard for region )
	resp, err := svcS3.ListBuckets(nil)
	check(err)

	var buckets []string

	for _,bucket := range resp.Buckets {
		buckets = append(buckets, *bucket.Name)

		//reverse mapping needed for lookups in other funcs
		bucketToAccount[*bucket.Name] = account
	}

	//a list of buckets per account
	accountToBuckets[account] = buckets
}


func getS3BucketLocation(awsSession *session.Session, bucket string, bucketToRegion map[string]string, regionToBuckets map[string][]string)  {
	wgS3GetBucketDetails.Add(1)
	go func() {
		defer wgS3GetBucketDetails.Done()
		svcS3 := s3.New(awsSession, &aws.Config{Region: aws.String(defaultRegion)}) // default

		var requiredRegion string

		locationParams := &s3.GetBucketLocationInput{
			Bucket: aws.String(bucket),
		}
		respLocation, err := svcS3.GetBucketLocation(locationParams)
		check(err)

		//We must query the bucket based on its location constraint.
		//A nil LocationConstraint means US Standard, which is us-east-1:
		//http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
		if respLocation.LocationConstraint != nil {
			requiredRegion = *respLocation.LocationConstraint
		} else {
			requiredRegion = "us-east-1"
		}

		//these goroutines run concurrently per bucket, so serialize
		//writes to the shared maps
		s3MapsMutex.Lock()
		bucketToRegion[bucket] = requiredRegion
		regionToBuckets[requiredRegion] = append(regionToBuckets[requiredRegion], bucket)
		accountRegionsMap[bucketToAccount[bucket]] = regionToBuckets
		s3MapsMutex.Unlock()
	}()
}

func getS3BucketsTags(awsSession *session.Session, buckets []string, account string, region string) {
	wgS3GetBucketDetails.Add(1)
	go func() {
		defer wgS3GetBucketDetails.Done()
		svcS3 := s3.New(awsSession, &aws.Config{Region: aws.String(region)})

		var resourceData = make(map[string]interface{})

		for _, bucket := range buckets {
			taggingParams := &s3.GetBucketTaggingInput{
				Bucket: aws.String(bucket),
			}
			respTags, err := svcS3.GetBucketTagging(taggingParams)
			check(err)

			resourceData[bucket] = respTags
		}
		writeS3ResourceToCis("buckets", resourceData, account, region)
	}()
}


func getS3BucketDetails(awsSessions map[string]*session.Session, configuredRegions []string) {

	for account, buckets := range accountToBuckets {
		//reset regions for each account
		var bucketToRegion = make(map[string]string)
		var regionToBuckets = make(map[string][]string)
		for _,bucket := range buckets {
			awsSession := awsSessions[account]
			getS3BucketLocation(awsSession, bucket, bucketToRegion, regionToBuckets)
		}
	}
	wgS3GetBucketDetails.Wait()

	//Preparing configured regions to make sure we only write to CIS for regions configured
	var configuredRegionsMap = make(map[string]bool)
	for _,region := range configuredRegions {
		configuredRegionsMap[region] = true
	}

	for account := range accountRegionsMap {
		awsSession := awsSessions[account]
		for region, buckets := range accountRegionsMap[account] {
			//Only proceed if it's a configuredRegion from the config file.
			if _, ok := configuredRegionsMap[region]; ok {
				fmt.Printf("%s %s has %d buckets\n", account, region, len(buckets))
				getS3BucketsTags(awsSession, buckets, account, region)
			} else {
				fmt.Printf("Skipping buckets in %s because is not a configured region\n", region)
			}
		}
	}
	wgS3GetBucketDetails.Wait()
}


How To: Launch EC2 Instances In AWS Using The AWS CLI


It occurred to me recently that while I have written articles on Boto for AWS (the Python SDK), I have yet to write articles on how to use the AWS CLI, Terraform, and the Go SDK. All of that will come in due time; for starters, this article is going to be about the AWS CLI.

To start you will need to install the AWS CLI following these links:
https://aws.amazon.com/cli/
https://github.com/aws/aws-cli

Note you will need to make sure you have an account with an access key and have set up the required credentials under ~/.aws/ for the CLI to work. How to do this is covered near the end of the second link above, the git repo.
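
If you have not done that before, the quickest path is aws configure, which prompts for everything and writes the files for you:

# prompts for your access key ID, secret access key, default region, and
# output format, then writes them to ~/.aws/credentials and ~/.aws/config
aws configure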

After that is done you are ready to rock and roll. To test it out you can run…

aws ec2 describe-instances

Assuming your default region and profile settings are correct, it should output JSON.
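
As an aside, the CLI’s --query (JMESPath) and --output flags are worth learning early, since raw describe-instances JSON is enormous; for example:

# trim describe-instances down to a readable table of ID, state, and private IP
aws ec2 describe-instances \
    --query 'Reservations[*].Instances[*].[InstanceId,State.Name,PrivateIpAddress]' \
    --output table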

Launching an EC2 instance

To launch an EC2 instance from the command line use the command below replacing the variables preceded with $ with their real values.

aws --profile $account --region $region ec2 run-instances --image-id $image_id --count $count --instance-type $instance_type --key-name $ssh_key_name --subnet-id $subnet_id

(Assuming you have set up the required dependencies, like uploading your SSH key to AWS and specifying its name in the command above, this should launch your VM.)

It should be noted there is a lot more you can do to tweak your instance, such as changing the EBS volume size of the root disk it launches with, or tagging. You will see examples of this in my shell script. The purpose of this article is to share a shell script I have written and use whenever I want to quickly launch a test VM (which is common). For more permanent things I use an infrastructure-as-code approach via Terraform, but the need for launching quick test VMs never goes away; thus this shell script was born. You will notice my script auto-tags our VMs… I do this because in our environment, if your VM isn’t tagged appropriately it is deleted. Plus, it’s courtesy in an AWS environment to tag your resources; otherwise no one will ever know what tree to bark up when there is a problem, such as ‘are you still using this, cause it looks idle?’ 🙂

My Shell Script for Launching EC2 VM’s

#!/bin/bash

# Global Settings
account="my-account"
region="us-east-1"

# Instance settings
image_id="ami-03ebd214" # ubuntu 14.04
ssh_key_name="my_ssh_key-rsa-2048"
instance_type="m4.xlarge"
subnet_id="subnet-b8214792"
root_vol_size=20
count=1

# Tags
tags_Name="my-test-instance"
tags_Owner="tuxninja"
tags_ApplicationRole="Testing"
tags_Cluster="Test Cluster"
tags_Environment="dev"
tags_OwnerEmail="tuxninja@tuxlabs.com"
tags_Project="Test"
tags_BusinessUnit="Cloud Platform Engineering"
tags_SupportEmail="tuxninja@tuxlabs.com"
tags_OwnerGroups="tuxninja" # referenced in the create-tags call below; define it or the tag value will be empty

echo 'creating instance...'
id=$(aws --profile $account --region $region ec2 run-instances --image-id $image_id --count $count --instance-type $instance_type --key-name $ssh_key_name --subnet-id $subnet_id --block-device-mapping "[ { \"DeviceName\": \"/dev/sda1\", \"Ebs\": { \"VolumeSize\": $root_vol_size } } ]" --query 'Instances[*].InstanceId' --output text)

echo "$id created"

# tag it

echo "tagging $id..."

aws --profile $account --region $region ec2 create-tags --resources $id --tags Key=Name,Value="$tags_Name" Key=Owner,Value="$tags_Owner"  Key=ApplicationRole,Value="$tags_ApplicationRole" Key=Cluster,Value="$tags_Cluster" Key=Environment,Value="$tags_Environment" Key=OwnerEmail,Value="$tags_OwnerEmail" Key=Project,Value="$tags_Project" Key=BusinessUnit,Value="$tags_BusinessUnit" Key=SupportEmail,Value="$tags_SupportEmail" Key=OwnerGroups,Value="$tags_OwnerGroups"

echo "storing instance details..."
# store the data
aws --profile $account --region $region ec2 describe-instances --instance-ids $id > instance-details.json

echo "create termination script"
echo "#!/bin/bash" > terminate-instance.sh
echo "aws --profile $account --region $region ec2 terminate-instances --instance-ids $id" >> terminate-instance.sh
chmod +x terminate-instance.sh

After substituting the required variables at the top with your real values you can run this script. Notice that after creating the VM I capture the instance ID in a variable so I can subsequently tag it, and the instance details in a file; then I create a termination script… this makes for very simple operations when you need to repeatedly start and then kill/destroy/delete a VM.
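
The day-to-day loop with these scripts ends up looking something like this:

./create-instance.sh        # launches and tags the VM, writes terminate-instance.sh
cat instance-details.json   # grab the private IP or anything else you need
./terminate-instance.sh     # kill the instance when you are done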

Using these scripts should come in quite handy. A copy of create-instance.sh can be found on my GitHub here.

One other thing… I use the normal AWS CLI for automation as shown here… but for poking around interactively I use something called ‘aws-shell’, formerly ‘saws’. Check it out and you won’t be disappointed!

My next post will be on Terraform or the Go SDK…but both are coming soon!


Consul for Service Discovery

Why Service Discovery?

Service Discovery effectively replaces the process of having to manually assign or automate your own DNS entries for nodes on your network. Service Discovery aims to move us even further from treating VMs like pets and closer to treating them like cattle, by getting rid of the age-old practice of hostnames & FQDNs having contextual value. Instead, when using service discovery, nodes are automatically registered by an agent, and DNS is automatically configured for both the nodes and the services running on them.

Consul

Consul by HashiCorp is becoming the de-facto standard for Service Discovery. Consul’s full feature set & simple deployment model make it an optimal choice for organizations looking to quickly deploy Service Discovery capabilities in their environment.

Components of Consul

  1. The Consul Agent
  2. An optional JSON config file for each service, located under /etc/consul.d/<service>.json
    1. If you do not specify a JSON file, Consul can still start and will provide discovery for the nodes (they will have DNS as well)

A Quick Example of Consul

How easy is it to deploy Consul?

  1. Download / Decompress and install the Consul agent – https://www.consul.io/downloads.html
  2. Define services in a JSON file (if you want) – https://www.consul.io/intro/getting-started/services.html
  3. Start the agent on the nodes – https://www.consul.io/intro/getting-started/join.html
  4. Make one node join one other node (it does not matter which); this forms the cluster and gets you access to all cluster metadata

Steps 1 and 2 Above

  1. After downloading the Consul binary to each machine and decompressing it, copy it to /usr/local/bin/ so it’s in your path.
  2. Create the directory
    sudo mkdir /etc/consul.d
  3. Optionally, run the following to create a JSON file defining a fake service (sudo tee is needed since /etc/consul.d is root-owned):
echo '{"service": {"name": "web", "tags": ["rails"], "port": 80}}' \
    | sudo tee /etc/consul.d/web.json

Step 3 Above

Run the agent on each node, changing the IP accordingly.

tuxninja@consul-d415:~$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node=agent-one -bind=10.73.172.110 -config-dir /etc/consul.d

Step 4 Above

tuxninja@consul-d415:~$ consul join 10.73.172.108
Successfully joined cluster by contacting 1 nodes.

Wow, simple… ok, now for the examples…

Join the cluster from the second node

tuxninja@consul-dcb3:~$ consul join 10.73.172.110
Successfully joined cluster by contacting 1 nodes.
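
With the agents joined, either node can now list the whole cluster:

# prints each node's name, address, and status (alive/left/failed)
consul members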

Look up DNS for a node

tuxninja@consul-dcb3:~$ dig @127.0.0.1 -p 8600 agent-one.node.consul
; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> @127.0.0.1 -p 8600 agent-one.node.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2450
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;agent-one.node.consul.		IN	A
;; ANSWER SECTION:
agent-one.node.consul.	0	IN	A	10.73.172.110
;; Query time: 1 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Tue May 03 21:43:47 UTC 2016
;; MSG SIZE  rcvd: 76
tuxninja@consul-dcb3:~$

Look up DNS for a service

tuxninja@consul-dcb3:~$  dig @127.0.0.1 -p 8600 web.service.consul
; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55798
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web.service.consul.		IN	A
;; ANSWER SECTION:
web.service.consul.	0	IN	A	10.73.172.110
;; Query time: 2 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Tue May 03 21:46:54 UTC 2016
;; MSG SIZE  rcvd: 70
tuxninja@consul-dcb3:~$

Query the REST API for Nodes

tuxninja@consul-dcb3:~$ curl localhost:8500/v1/catalog/nodes
[{"Node":"agent-one","Address":"10.73.172.110","TaggedAddresses":{"wan":"10.73.172.110"},"CreateIndex":3,"ModifyIndex":1311},{"Node":"agent-two","Address":"10.73.172.108","TaggedAddresses":{"wan":"10.73.172.108"},"CreateIndex":1338,"ModifyIndex":1339}

Query the REST API for Services

tuxninja@consul-dcb3:~$ curl http://localhost:8500/v1/catalog/service/web
[{"Node":"agent-one","Address":"10.73.172.110","ServiceID":"web","ServiceName":"web","ServiceTags":["rails"],"ServiceAddress":"","ServicePort":80,"ServiceEnableTagOverride":false,"CreateIndex":5,"ModifyIndex":772}
