TuxLabs LLC

All things DevOps


How To: curl the OpenStack APIs (v3 Keystone Auth)

Published / by tuxninja

While OpenStack provides Python clients for interacting with its services, I frequently find myself needing to get data out of them without the pain of awk/sed'ing out the ASCII-art tables.

To get at the raw data quickly, we can query the APIs directly using curl and parse the JSON instead, which is much better 🙂

Authentication

Before we can interact with the other OpenStack APIs, we need to authenticate to Keystone, OpenStack's identity service. After authenticating we receive a token to use with our subsequent API requests. So step 1: we are going to create a JSON object with the required authentication details.

Create a file called ‘token-request.json’ with an object that looks like this.
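Here is a minimal v3 password-auth object; substitute your own user, project & password (mine are scoped to the default domain):

    {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "admin",
                        "domain": { "id": "default" },
                        "password": "changeme"
                    }
                }
            },
            "scope": {
                "project": {
                    "name": "admin",
                    "domain": { "id": "default" }
                }
            }
        }
    }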

Btw, if you followed my tutorial on how to install OpenStack Kilo, your authentication details for 'admin' are in your keystonerc_admin file.

Now we can use this file to authenticate like so:
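Something along these lines does the trick (192.168.1.10 is my controller; substitute your Keystone endpoint):

    # the token comes back in the X-Subject-Token response header
    export TOKEN=$(curl -si -H "Content-Type: application/json" \
        -d @token-request.json http://192.168.1.10:5000/v3/auth/tokens \
        | awk '/^X-Subject-Token:/ {print $2}' | tr -d '\r')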

The token is actually returned in the header of the HTTP response, which is why we need '-i' when curling. Notice we parse out the token and store the value in the environment variable $TOKEN.

Now we can include this $TOKEN and run whatever API commands we want (assuming admin privileges for the tenant/project).

Curl Commands (Numerous Examples!)

I sometimes pipe the output to python -m json.tool, which pretty-prints JSON. Let's take a closer look at an example.

Listing servers (vm’s)
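The server list lives under the Nova (compute) endpoint. The exact URL is in the service catalog returned by the auth call above, so lift it from there; mine looks like this (with $TENANT_ID standing in for my project's ID):

    curl -s -H "X-Auth-Token: $TOKEN" \
        http://192.168.1.10:8774/v2/$TENANT_ID/servers | python -m json.tool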

I only have 1 VM currently, called spin1, but for the tutorial's sake, if I had tens or hundreds of VMs and all I cared about was the VM name or ID, I would still need to parse this JSON object to avoid getting all the other metadata.

My favorite command line way to do that without going full Python is using the handy JQ tool.

Here is how to use it !
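For example, against the same Nova endpoint (the first command pretty-prints everything, the second pulls out just the names):

    curl -s -H "X-Auth-Token: $TOKEN" http://192.168.1.10:8774/v2/$TENANT_ID/servers | jq .
    curl -s -H "X-Auth-Token: $TOKEN" http://192.168.1.10:8774/v2/$TENANT_ID/servers | jq '.servers[].name'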

The first command just takes whatever the STDOUT from curl is and indents and colors the JSON, making it pretty (colors give it +1 vs. python -m json.tool).

In the second example we actually parse out what we're after. As you can see it is pretty simple. jq's query language may not be 100% intuitive at first, but I promise it is easy to pick up if you have ever parsed JSON before. Read up more on jq @ https://stedolan.github.io/jq/ & check out the OpenStack docs for more API commands: http://developer.openstack.org/api-ref.html

Hope you enjoyed this post ! Until next time.


Installing OpenStack Kilo on CentOS 7

Published / by tuxninja

In a previous article I wrote about how to install OpenStack Icehouse on CentOS 6.5 in great detail. In this article, I am going to keep verbosity to a minimum and just give you the commands ! I am hoping this will be refreshing for my audience. If, however, you are curious about the what, when, and why, please read my previous article.

Pre-requisites

  1. You need a machine with x86_64 architecture, at least 4 GB of memory & 2 NICs.
  2. On this machine you need to install CentOS 7 as a minimal install
  3. You should create a user with admin privileges (i.e. wheel, in my case ‘tuxninja’ was created)
  4. Disable SELinux
    1. vi /etc/sysconfig/selinux
    2. SELINUX=disabled
    3. save changes

Jumping Right In

Here are the commands you need to run.

  1. sudo yum update -y
  2. sudo yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo-1.noarch.rpm
  3. sudo yum install epel-release
  4. sudo yum install -y openstack-packstack

Now at this point, if you ran 'packstack', you would run into a bug (a puppet/hiera dependency conflict).

The workaround for this bug is as follows:

  1. sudo rpm -e puppet
  2. sudo rpm -e hiera
  3. curl -O https://yum.puppetlabs.com/el/7/products/x86_64/hiera-1.3.4-1.el7.noarch.rpm
  4. sudo rpm -ivh hiera-1.3.4-1.el7.noarch.rpm
  5. vi /etc/yum.repos.d/epel.repo
    1. At the bottom of the [epel] section, after the gpgkey add a newline with: exclude=hiera*
    2. Save the file
  6. sudo yum install -y puppet-3.6.2-3.el7.noarch
  7. reboot
  8. sudo rm /etc/puppet/hiera.yaml
  9. sudo packstack --allinone

This should successfully install. Godspeed.

Networking

Now that OpenStack is set up, we still have to set up our network with private & public routed networks, so we can turn this into a real multi-node setup, ssh to our hosts, let them reach the internet, etc. To do this, much like in my previous post, you need to modify your /etc/sysconfig/network-scripts/ files.
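As a sketch (interface names and addresses here are examples; adjust to your NICs and subnets), the two files end up looking something like:

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- public/external NIC
    DEVICE=eth0
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1
    DNS1=8.8.8.8

    # /etc/sysconfig/network-scripts/ifcfg-eth1 -- private NIC
    DEVICE=eth1
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=10.0.0.10
    NETMASK=255.255.255.0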

Note: I deleted all the IPv6 crap, I think it messes some stuff up. When you're done making the changes with your favorite editor, restart networking: sudo /etc/init.d/network restart or sudo systemctl restart network

Next, go into the Horizon Dashboard GUI and delete the demo project. See my previous article for details on how.

Back On the All-In-One Node Console

Next, 'reboot' or restart all OpenStack services:
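One way to restart everything, assuming the openstack-utils package is installed:

    sudo openstack-service restart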

Note: it appears the --full-restart flag is gone; it used to work !

When logging into your dashboard located at http://192.168.1.10/dashboard, at some point you might hit a bug that prevents you from logging into the Horizon dashboard, see: https://bugzilla.redhat.com/show_bug.cgi?id=1218894 … the work-around for this is to clear your browser cookies.

You’re Done

That's it. Next steps would be to create a project & new admin user, re-create the required network mappings in OpenStack using the above commands (modify the names to make them unique), create your ssh key and import it, download some images and import them using glance, and create some VMs. Also, I like to delete the demo project (you can also prevent it from being created with a flag on the packstack command). Make sure you delete all default security rules and add back ICMP, TCP, and UDP allow ingress/egress rules for 0.0.0.0/0, aka any/any; again, see my CentOS 6.5 article for the specifics on how to do this. Additionally, I have an article on how to add additional compute nodes as well.

As always I can be reached for assistance @ tuxninja [at] tuxlabs.com

Happy Stacking !

Creating a bootable USB for CentOS on Mac OS X

Published / by tuxninja

I'm a huge Ubuntu fan. However, most of my 'day job' work requires CentOS or RHEL, thus I commonly have to re-image my on-premise cloud with the latest and greatest CentOS. My servers are 3 Rackables by SGI, two with more CPU & memory and one (the controller node) with tons of disk (12x1TB RAID 10), and for off-premise I use Digital Ocean, who has a fantastic product. Most modern servers do not have a CDROM, and neither do my on-premise systems. Therefore, I need to place the CentOS image on a USB drive so I can re-image my lab. Here are the steps to do that on Mac OS X.

List the current Disks & Partitions
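On the Mac, that's diskutil's job:

    diskutil list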

My USB drive is the 2GB drive at the bottom; we need to unmount it.
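Assuming it showed up as /dev/disk2 (check the diskutil output carefully; picking the wrong disk here is destructive):

    diskutil unmountDisk /dev/disk2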

Next we copy the CentOS image onto the unmounted USB disk.
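With dd, using the raw device for speed (again, /dev/rdisk2 and the ISO filename are assumptions, substitute your own):

    sudo dd if=CentOS-7-x86_64-Minimal.iso of=/dev/rdisk2 bs=1m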

When that gets done, your Mac will pop up a window asking you to initialize the drive; ignore that. Remove the drive, and you're ready to boot off this USB!


Preventing (bind9) DNS Naughty-ness (named.conf & iptables/ufw) on Ubuntu

Published / by tuxninja

If you run a DNS server on the Internet with a default configuration, many people/robots will take advantage of you. The same is true for mail, but that is another article. Needless to say, if you are running a service on the Internet, the naughty goblins will find you. To thwart these dirty criminals, all that's necessary is to configure your named.conf properly. However, since these robots are being naughty, there is a high degree of certainty they are infected endpoints, and as such I really don't want them coming anywhere near me or my machines. After all, for humanity's sake, we don't want to be infected by the deadly plague ! This article is short and sweet; here is how to protect your DNS server & your server in one article, using named.conf & ufw (iptables).


Named.conf.options

Nowadays named.conf is really just a file that includes 3 other files: named.conf.local, named.conf.options, and named.conf.default-zones. The one we are going to fix is named.conf.options. The configuration below should only be applied in a scenario where you want to run an authoritative nameserver plus a caching nameserver, where the key is that only people you know personally (or yourself) are allowed to query the cache, vs. allowing the entire Internet, because then bad things happen. If this is not the setup you are going for, don't do this 🙂 But if it is, follow along.

Add the following section, with the proper IPs, to the top of the file.
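Something like this (the addresses are examples, list the hosts you actually trust):

    acl "trusted" {
        127.0.0.1;      // yourself
        192.168.1.5;    // a machine you know personally
    };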

Note you can also add a CIDR for a subnet like 192.168.0.0/16

After that’s done under the options {} section… make it look like this
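A sketch; keep your existing directives and add the allow-* statements (192.168.1.6 stands in for a secondary nameserver):

    options {
        // ... your existing directives ...
        recursion yes;
        allow-query { any; };            // authoritative zones stay public
        allow-query-cache { trusted; };  // only trusted clients may use the cache
        allow-recursion { trusted; };    // only trusted clients may recurse
        allow-transfer { 192.168.1.6; }; // secondary nameserver, if you have one
    };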

Note, allow-transfer is necessary if you have a secondary nameserver that needs to receive updates. Now restart bind9:
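    sudo service bind9 restart    # or: sudo systemctl restart bind9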

Ok, now all querying from non-trusted people will be denied. To check that it is working, look in your /var/log/syslog and you will see denies like this (an illustrative line; your hosts and IPs will differ):
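    Jul 21 07:36:37 tuxlabs named[1021]: client 203.0.113.99#33526: query (cache) 'hehehey.ru/ANY/IN' denied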

My actual log file was full of lines just like that; I was quite annoyed that clients were basically abusing the hell out of hehehey.ru… so I decided I don't want to talk to those people at all. To those people I should be a blackhole. To do this I used UFW, which is short for Uncomplicated Firewall and essentially makes dealing with iptables much, much nicer. It's only my 2nd time using UFW, but I've been using iptables for well over a decade. Anyway, here is the simple setup with UFW that I came up with.
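In essence (22, 80 & 443 being the SSH and web ports I want reachable):

    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 22/tcp     # SSH
    sudo ufw allow 80/tcp     # Apache / web
    sudo ufw allow 443/tcp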

So we are configuring the default policy to deny all incoming traffic, allow all outgoing traffic, and then allow SSH & Apache/web traffic, basically. Next I created a script called block.sh to add ufw deny rules for bad actors I parsed out of my log. Here's what block.sh looks like (a minimal version of it, anyway; all it needs to do is insert a deny rule for the IP you hand it):
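    #!/bin/bash
    # block.sh -- blackhole a single bad actor
    # usage: ./block.sh <ip>
    ufw insert 1 deny from "$1" to any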

Don't forget to chmod +x your shell script. Then I did this… blocking all bad actors in one loop. Double-check the grep and the awk field number against your own syslog format first, because it blocks everything it matches:
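    grep "query (cache)" /var/log/syslog | grep denied | awk '{print $7}' \
        | cut -d'#' -f1 | sort -u | while read ip; do sudo ./block.sh "$ip"; done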

Note, use sudo if you don't run this as root. This will go through the log, find all the bad requests, and block the requestor. It's quite aggressive, so be careful; make sure you thoroughly limit your parsing with grep to only block things you really don't want talking to your server, because this blocks ALL traffic from the requestor to your server, not just DNS.

Once that is complete, you finally need to permit good DNS requests by running:
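    sudo ufw allow 53    # DNS, tcp & udp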

And then finally enable your firewall
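    sudo ufw enable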

If you are successful, you should see entries in your log that look something like this (again illustrative):
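    Jul 21 07:48:09 tuxlabs kernel: [UFW BLOCK] IN=eth0 OUT= SRC=203.0.113.99 DST=203.0.113.1 PROTO=UDP SPT=53640 DPT=53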

You can also view all your firewall rules by running
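    sudo ufw status numbered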

Happy Blocking !


Runner Features Have Been Updated !

Published / by tuxninja

Runner Reminder

Runner is a command line tool for running commands on thousands of devices that support SSH. I wrote Runner and use it every single day, because unlike Ansible, Runner truly has no dependencies on the client or server side other than SSH. I have used Runner to build entire datacenters, so it is proven and tested, and it has a lot of well-thought-out features, which brings me to today's post. Since I initially debuted Runner I have added a lot of features, but I had yet to check them into GitHub, until now. Here is a rundown of Runner's features.

Features

  • Runner takes your login credentials & doesn't require you to set up SSH keys on the client machines/devices.
  • Runner can be used through a bastion/jump host via an SSH tunnel (see prunner.py)
  • Runner reads its main host list from the file ~/.runner/hosts/hosts-all
  • Runner can accept custom hosts lists via -f
  • -e can be used to echo a command before it is run; this is useful for running commands on F5 load balancers, for example, when no output is returned on success.
  • -T will allow you to tune the number of threads, but be careful: you can easily exhaust your system or site resources (i.e. do NOT DoS your LDAP authentication servers by trying to do hundreds of threads across thousands of machines, unless you know they can handle it 😉 ).
  • -s is for sudo for those users who have permissions in the sudoers file.
  • -1 reduces any host list down to one host per pool. It uses a regex, which you will likely have to modify for your own host / device naming standard.
  • -r can be used to supply a regular expression for matching hosts. Remember sometimes you have to quote the regex and/or escape the shell when using certain characters.
  • -c will run a single command on many hosts, but -cf will run a series of commands listed in a file on any hosts specified. This is particularly useful for automations. For example, I used it to build out load balancer virtuals and pools on an F5.
  • -p enables you to break apart the number of hosts to run at a time using a percentage. This is a handy & more humanized way to ensure you do not kill your machine or the infrastructure you are managing when you crank threads through the roof 😉

Now that I have taken the time to explain some of those cool features, here’s an example of what it looks like in action.

Runner Demo

Host List
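The hosts file is just one host per line, e.g. (hypothetical names):

    web1.tuxlabs.com
    web2.tuxlabs.com
    db1.tuxlabs.com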

Basic Run Using Only -c, -u and defaults
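An invocation looks roughly like this (I am assuming the script is named runner.py, in line with its companion prunner.py):

    ./runner.py -u tuxninja -c 'uptime'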

Note: the user defaults to the user you are logged in as if you don't specify -u. Since I am logged in as 'jriedel', I have specified the user tuxninja instead.

The Same Run Using Sudo 
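The same run, adding the -s flag:

    ./runner.py -u tuxninja -s -c 'uptime'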

Note: I just realized that if you do not prompt for a password for sudo, it will fail; I will have to fix that ! Whoops ! P.S. You should always prompt for a password when using sudo !

Runner with a command file in super quiet mode !

Example of a simple regex & a failure

I hope you enjoyed the overview and the new features. You can clone Runner on GitHub.

Enjoy,
Jason Riedel