TuxLabs LLC

All things DevOps


Runner Features Have Been Updated!

Published by tuxninja

Runner Reminder

Runner is a command-line tool for running commands on thousands of devices that support SSH. I wrote Runner and use it every single day, because unlike Ansible, Runner truly has no dependencies on the client or server side other than SSH. I have used Runner to build entire datacenters, so it is proven and tested and has a lot of well-thought-out features, which brings me to today's post. Since I initially debuted Runner I have added a lot of features, but I had yet to check them into GitHub, until now. Here is a rundown of Runner's features.


  • Runner takes your login credentials & doesn't require you to set up SSH keys on the client machines/devices.
  • Runner can be used through a bastion/jump host via an SSH tunnel (see prunner.py).
  • Runner reads its main host list from a file: ~/.runner/hosts/hosts-all
  • Runner can accept custom host lists via -f.
  • -e can be used to echo a command before it is run; this is useful for running commands on F5 load balancers, for example, when no output is returned on success.
  • -T will allow you to tune the number of threads, but be careful: you can easily exhaust your system or site resources (i.e., do NOT DoS your LDAP authentication servers by trying to run hundreds of threads across thousands of machines, unless you know they can handle it 😉 ).
  • -s is for sudo for those users who have permissions in the sudoers file.
  • -1 reduces any host list down to one host per pool. It uses a regex, which you will likely have to modify for your own host / device naming standard.
  • -r can be used to supply a regular expression for matching hosts. Remember sometimes you have to quote the regex and/or escape the shell when using certain characters.
  • -c will run a single command on many hosts, but -cf will run a series of commands listed in a file on any hosts specified. This is particularly useful for automations. For example, I used it to build out load balancer virtuals and pools on an F5.
  • -p enables you to break apart the number of hosts to run at a time using a percentage. This is a handy & more humanized way to ensure you do not kill your machine or the infrastructure you are managing when you crank threads through the roof 😉

Now that I have taken the time to explain some of those cool features, here’s an example of what it looks like in action.

Runner Demo

Host List
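The host list screenshot isn't reproduced in this copy; the file is simply one FQDN per line, so a small ~/.runner/hosts/hosts-all might look like the sketch below (hostnames are made up for illustration).

  web1001.tuxlabs.com
  web1002.tuxlabs.com
  db1001.tuxlabs.com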

Basic Run Using Only -c, -u and defaults

Note: The user defaults to the user you are logged in as if you don't specify -u. Since I am logged in as 'jriedel', I have specified the user tuxninja instead.
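The terminal capture for this run isn't included here; the command itself (output omitted) would be roughly the following, assuming the script is invoked as runner.py (the exact name in your checkout may differ).

  ./runner.py -u tuxninja -c 'uptime'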

The Same Run Using Sudo 

Note: I just realized that if you do not prompt for a password for sudo, it will fail. I will have to fix that! Whoops! P.S. You should always prompt for a password when using sudo!

Runner with a command file in super quiet mode!

Example of a simple regex & a failure
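This capture is also omitted; a run of the following shape (again assuming runner.py) selects only hosts matching 'web' and will surface any per-host failures in its output.

  ./runner.py -u tuxninja -r 'web' -c 'uptime'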

I hope you enjoyed the overview and new features. You can clone Runner on GitHub.

Jason Riedel

SSH Tunneling

Published by tuxninja

In my last post about Runner I briefly explained needing to modify your ~/.ssh/config to use a ProxyCommand to allow for automatic tunneling with SSH.

What I didn't explain is that there is an alternative method that is arguably simpler. It requires creating three small shell scripts & placing them in your path, or a common host path like /usr/local/bin/, with the chmod +x permission. Here is the script that sets up the SSH tunnel.

Script: starttunnel
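The script body was an embedded snippet in the original post; a minimal sketch consistent with the description below (the bastion hostname and port 8081 come from the post, the specific ssh flags are my assumption) could be:

  #!/bin/bash
  # Connect to the bastion, background the session (-f -N) with keepalives,
  # and open a dynamic (SOCKS) forward on local port 8081.
  ssh -f -N -D 8081 \
      -o ServerAliveInterval=60 -o ServerAliveCountMax=3 \
      tlbastion.tuxlabs.com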

Running starttunnel will connect you to your bastion/jump box and then background that connection with keepalives on. It listens on port 8081 and dynamically forwards SSH requests through tlbastion.tuxlabs.com. Additionally, if you want to tunnel a specific web port on a machine that sits within your network back to the machine you are tunneling from, you can add it to the script, such that the required host/port always gets tunneled and is available on your machine when you run starttunnel. An example config would look like this:

Script: starttunnel + forwarding http
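The forwarding variant isn't reproduced here either; adding a local forward (-L) to the same command is all that changes. The internal hostname and ports below are placeholders:

  #!/bin/bash
  # Same tunnel as above, plus a fixed local forward so that
  # localhost:8080 reaches <internal-webserver>:80 inside the network.
  ssh -f -N -D 8081 \
      -L 8080:<internal-webserver>:80 \
      -o ServerAliveInterval=60 -o ServerAliveCountMax=3 \
      tlbastion.tuxlabs.com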


Now that you have authenticated to your bastion and have a working tunnel, you need to get SSH requests to go through this tunnel. However, if you're like me, you still want the ability to SSH to other stuff without going through that tunnel. So I created a new script called 'sshp'. When I want to SSH through the tunnel/proxy I use 'sshp'; when I want to SSH somewhere else on the internet or another network I use plain old 'ssh'. Here is my sshp script, used to connect to machines behind the bastion.

Script: sshp
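The sshp body isn't shown in this copy; based on the explanation below that it uses 'nc' as the ProxyCommand through the local SOCKS port, a sketch (OpenBSD netcat syntax assumed) would be:

  #!/bin/bash
  # SSH via the SOCKS tunnel on localhost:8081 created by starttunnel.
  ssh -o ProxyCommand="nc -X 5 -x localhost:8081 %h %p" "$@"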

Now, when you run sshp tuxlabs1@tuxlabs.com you will be connected through the tuxlabs bastion into tuxlabs1. Also notice that in my previous post I used sconnect as the ProxyCommand; in this one we are using 'nc', aka netcat. I have found this method of tunneling to be the most simple and effective in my daily life. One more script is needed if you want to copy files, since that requires scp: a similar command, 'scpp', for tunneling your file copies. Here's the script.

Script: scpp
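Likewise for scpp, a sketch that simply hands scp the same ProxyCommand:

  #!/bin/bash
  # scp via the SOCKS tunnel on localhost:8081 created by starttunnel.
  scp -o ProxyCommand="nc -X 5 -x localhost:8081 %h %p" "$@"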

One final note: if you need to use '*' (aka splat) for copying many files, you cannot use the script above, because the shell or script expands it incorrectly. Instead, just use the full command yourself from the command line.

scp’ing with *
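The original command was a screenshot; it would be roughly the following, with the ProxyCommand spelled out inline (user, destination host, and path are placeholders; the ProxyCommand is only needed when the destination sits behind the bastion):

  scp -o ProxyCommand="nc -X 5 -x localhost:8081 %h %p" copy.all.* <user>@<destination>:<path>/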

This would copy all files named 'copy.all.<whatever>' to the bastion. Hope this helps the folks out there feeling limited by bastions. They provide great security and are an absolute requirement in secure environments, so learning tricks that ensure you only need to authenticate once for an extended period of time can come in real handy.

Jason Riedel



Runner: Multi-threaded SSH with Sudo support using Python & Paramiko

Published by tuxninja

Example of Runner


Why Runner?

I have been working as a Systems & Network Administrator since 1999. In that time I have repeatedly had the need for rapidly executing commands across thousands of servers. There are many applications out there that solve this problem in various ways; to name a few: pdsh, Ansible, Salt, Chef, Puppet (mcollective), even Cfengine, and more. Some require agents running on the machines; some use SSH but require keys, or come with learning curves. Alternatively, you can write your own code to solve this problem, which is what I did, mostly for fun. I don't recommend re-inventing the wheel if you need this for your job; just use what is already out there, or download Runner and hack it to your heart's content for your purposes.

Fabric vs. Paramiko

Because I use Python for most of my code at work these days, I decided to write my multi-threaded SSH command runner in Python; this way I can use Runner for parallel SSH transport & easily bolt on my other Python scripts for additional functionality. Python has fantastic support for SSH via two libraries: Fabric & Paramiko. Fabric is built on top of Paramiko and provides a simpler interface for doing just about anything you can think of. Create a fabfile, run it, and voilà: instant results from commands run via SSH. Fabric is really great for running & re-running a set of commands to automate an install or reporting, for example. All that being said, I still chose Paramiko over Fabric for three reasons.

  1. I don't like abstraction. Fabric hides the ugliness of Paramiko, which I prefer to understand better.
  2. Writing this using Paramiko lent itself better to a command-line utility used for ad hoc commands than Fabric did.
  3. I wasn't sure if Fabric's abstraction would limit me later based on needing custom functionality. So for Runner I chose Paramiko, but to be clear, 9 times out of 10 I think I would choose Fabric.


A bastion or jump box is a machine that is used as the gatekeeper of access to the rest of the machines in your network. In secure environments where your corp network is separate from your production network, you will have to SSH into a bastion, which usually has some form of 2-factor authentication (at least it should!), and then from there you may SSH into other hosts. A bastion can throw a real wrench in trying to manage thousands of machines in seconds, because you would have to authenticate to the bastion 1,000 times! The way around this is by setting up your SSH config to use a ProxyCommand.

ProxyCommand & Sconnect

Sconnect (or connect.c) is a binary that is most commonly used as the ProxyCommand for SSH. You can download / read more about sconnect here: https://bitbucket.org/gotoh/connect/wiki/Home and it will also tell you how to set up your SSH config. Using a ProxyCommand with Runner is required; you can, however, use any ProxyCommand you would like. Really quickly, here is what you basically need to do.

  1. Download / Compile connect.c
  2. Copy it to /usr/local/bin/sconnect and set executable permissions
  3. In your SSH Config (.ssh/config) add…
    1. Host <ssh-config-profile-name>
      User tuxninja
      ForwardAgent yes
      HostName <bastion_name>
      DynamicForward 8081 (any uncommon port is fine)
    2. Host *.tuxlabs.com
      User tuxninja
      ProxyCommand /usr/local/bin/sconnect -4 -w 4 -S localhost:8081 %h.tuxlabs.com %p

That is basically it. Then you should start a screen session so you can background the SSH session, since you will leave this open for other SSH sessions to proxy through so you don’t have to go through 2-factor authentication more than once. So something like…
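The original showed this as a screenshot; a minimal version, where <ssh-config-profile-name> is the Host entry from your SSH config above, would be:

  screen -S bastion
  ssh <ssh-config-profile-name>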

After you authenticate, detach yourself from the screen using CTRL-A then D. Now you can ssh to anything at the domain name, in my case tuxlabs.com, and it will forward through the bastion. At this point you still have to authenticate using a username/password, which is fine. Runner deals with this.


Runner requires a hosts file to run. By default it is configured to look in hosts/hosts-all for a list of all hosts. I use a script called 'update-runner-hosts.pl', which is included in my GitHub repo, to gather hosts from a URL and update the required hosts file. Once you have populated hosts/hosts-all with the FQDN of your hosts, you are ready to use Runner.

Note: You can use ‘-f’ to provide a custom location for your hosts file.

Great Flags / Features

Some of the really great features of Runner are threading (-t), sudo (-s), list-only mode (-l), and regular expressions (-r). -r is for pattern matching against your host lists, which is incredibly handy and absolutely required in an environment with hundreds to thousands of hosts where you only want to select, say, the hosts matching -r 'web'.

One-host-per-pool mode (-1) is a great feature, however it is dependent on understanding your environment's hostname pattern, so you will have to modify the regular expression in the code to make sure it works for you. It is currently set up to identify hostnames in pools when the naming convention is something like apache1234.tuxlabs.com.

Ok, I could go on and on about Runner, but it's better to just share the code at this point and let you go! Note the statically defined proxy_command in the code; you may need to change this if you didn't use sconnect or the same port.

Note: by default Runner SSHes as the user you are logged in as; you can be prompted for a different user with '-u'.
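For reference, a typical ad hoc run combining the flags above might look like the following (assuming the main script is invoked as runner.py; the exact invocation in the repo may differ):

  ./runner.py -r 'web' -t 25 -s -u tuxninja -c 'uptime'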

All code and accessories are available for download on GitHub: https://github.com/tuxninja/tuxlabs-code/tree/master/runner

Email tuxninja@tuxlabs.com with any questions! Happy SSH'ing, admins!

Note: In various versions of this code I had a '-h' flag allowing you to pass a CSV list of hosts; somehow I let that drop out of this version, sorry! Feel free to re-add it!

The Runner Code





How To: Add A Compute Node To Openstack Icehouse Using Packstack

Published by tuxninja



This article is a continuation of the previous article I wrote on how to do a single-node all-in-one (AIO) Openstack Icehouse install using Red Hat's packstack. A working Openstack AIO installation using packstack is required for this article. If you do not already have a functioning AIO install of Openstack, please refer to the previous article before continuing on to this article's steps.

Preparing Our Compute Node

Much like in the previous article, we first need to go through and set up our system and network properly to work with Openstack. I started with a minimal CentOS 6.5 install, and then configured the following:

  1. resolv.conf
  2. sudoers
  3. my network interfaces, eth0 (192.x) and eth1 (10.x)
    1. Hostname: ruby.tuxlabs.com (I also set up DNS for this)
    2. EXT IP:
    3. INT IP:
  4. A local user + added him to wheel for sudo
  5. I installed these handy dependencies
    1. yum install -y openssh-clients
    2. yum install -y yum-utils
    3. yum install -y wget
    4. yum install -y bind-utils
  6. And I disabled SELinux
    1. Don't forget to reboot after

To see how I set up the above prerequisites, see the "Setting Up Our Initial System" section of the previous controller install here: http://tuxlabs.com/?p=82

Adding Our Compute Node Using PackStack

For starters we need to follow the steps in this link: https://openstack.redhat.com/Adding_a_compute_node

I am including the link for reference, but you don’t have to click it as I will be listing the steps below.

On your controller node ( diamond.tuxlabs.com )

First, locate your answers file from your previous packstack all-in-one install.

 Edit the answers file
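The commands were shown as a screenshot; packstack writes the answers file into the directory it was run from, named with a timestamp, so locating and editing it looks roughly like this (the filename shown is just an example):

  ls ~/packstack-answers-*.txt
  vi ~/packstack-answers-20140801-123456.txt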

Change lo to eth1 (assuming that is your private 10. interface) for both CONFIG_NOVA_COMPUTE_PRIVIF & CONFIG_NOVA_NETWORK_PRIVIF

Change CONFIG_COMPUTE_HOSTS to the IP address of the compute node you want to add. In our case ''. Additionally, validate that the IP address for CONFIG_NETWORK_HOSTS is your controller's IP, since you do not run a separate network node.
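In the answers file the relevant lines end up looking something like this (the IP addresses are placeholders for your compute node and controller):

  CONFIG_NOVA_COMPUTE_PRIVIF=eth1
  CONFIG_NOVA_NETWORK_PRIVIF=eth1
  CONFIG_COMPUTE_HOSTS=<compute node IP>
  CONFIG_NETWORK_HOSTS=<controller IP>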

That's it. Now run packstack again on the controller.
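The command itself was a screenshot; re-running packstack against the edited answers file is along these lines:

  packstack --answer-file=<path to your answers file>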

When that completes, ssh into or switch terminals over to the compute node you just added.

On the compute node ( ruby.tuxlabs.com )

Validate that the relevant openstack compute services are running
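The validation output isn't reproduced; on a CentOS 6 compute node something like the following will do (service names are from a standard RDO/packstack Icehouse install):

  sudo service openstack-nova-compute status
  chkconfig --list | grep -E 'openstack|neutron'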

 Back on the controller ( diamond.tuxlabs.com )

We should now be able to validate that ruby.tuxlabs.com has been added as a compute node hypervisor.
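The command output was a screenshot; as the admin user on the controller it comes from the nova client (packstack drops a keystonerc_admin file in the home directory of the user that ran it):

  source ~/keystonerc_admin
  nova hypervisor-list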

Additionally, you can verify it in the Openstack Dashboard


Next we are going to try to boot an instance using the new ruby.tuxlabs.com hypervisor. To do this we will need a few pieces of information. First let’s get our OS images list.
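The listing itself was a screenshot; with the admin credentials sourced, either of these works on Icehouse:

  nova image-list
  # or
  glance image-list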

Great, now we need the ID of our private network
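Likewise for the network ID; which client you use depends on whether your install runs Neutron or nova-network:

  neutron net-list
  # or, on a nova-network install
  nova net-list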

Ok now we are ready to proceed with the nova boot command.
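The exact command from the post isn't reproduced; a boot of this shape matches the description that follows (flavor, image ID, network ID, and instance name are placeholders):

  nova boot --flavor m1.small \
            --image <image-id> \
            --nic net-id=<private-net-id> \
            --hint force_hosts=ruby.tuxlabs.com \
            <instance-name>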

Fantastic. That command should look familiar from our previous tutorial; it is the standard command for launching new VM instances using the command line, with one exception: '--hint force_hosts=ruby.tuxlabs.com'. This part of the command line forces the scheduler to use ruby.tuxlabs.com as its hypervisor.

Once the VM is building we can validate that it is on the right hypervisor like so.
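The output itself isn't reproduced here, but as admin you can ask nova which hypervisor each instance landed on, for example:

  nova hypervisor-servers ruby.tuxlabs.com
  # or inspect a single instance
  nova show <instance-name> | grep hypervisor_hostname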

You can see from the output above that I have 2 VMs on my existing controller 'diamond.tuxlabs.com' and the newly created instance is on 'ruby.tuxlabs.com' as instructed. Awesome.

Now that you are sure you set up your compute node correctly, and can boot a VM on a specific hypervisor via the command line, you might be wondering how this works using the GUI. The answer is: a little differently 🙂

The Openstack Nova Scheduler

The Nova Scheduler in Openstack is responsible for determining which compute node a VM should be created on. If you are familiar with VMware, this is like DRS, except it only happens on initial creation; there is no rebalancing as resources are consumed over time. Using the Openstack Dashboard GUI I am unable to tell nova to boot off a specific hypervisor; to do that I have to use the command line above (if someone knows of a way to do this using the GUI let me know; I have a feeling that if it is not added already, they will add the ability to send a hint to nova from the GUI in a later version). In theory you can trust the nova-scheduler service to automatically balance the usage of compute resources (CPU, memory, disk, etc.) based on its default configuration. However, if you want to ensure that certain VMs live on certain hypervisors, you will want to use the command line above. For more information on how the scheduler works see: http://cloudarchitectmusings.com/2013/06/26/openstack-for-vmware-admins-nova-compute-with-vsphere-part-2/

The End

That is all for now. Hopefully this tutorial was helpful and accurately assisted you in expanding your Openstack compute resources & your knowledge of Openstack. Until next time!