TuxLabs LLC

All things DevOps


SSH Tunneling

Published by tuxninja

In my last post about Runner, I briefly explained the need to modify your ~/.ssh/config to use a ProxyCommand that allows for automatic tunneling with SSH.

What I didn't explain is that there is an alternative method that is arguably simpler. It requires creating three small shell scripts, making them executable with chmod +x, and placing them in your PATH or a common location like /usr/local/bin/. Here is the script that sets up the SSH tunnel.

Script: starttunnel
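The original script block was not preserved in this post, so here is a minimal sketch of what starttunnel can look like, assuming the bastion hostname and port described below:

#!/bin/bash
# starttunnel - open a backgrounded SSH session to the bastion with
# keep-alives on, dynamically forwarding (SOCKS) local port 8081.
ssh -f -N -D 8081 \
  -o ServerAliveInterval=60 \
  -o ServerAliveCountMax=3 \
  tlbastion.tuxlabs.com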

Running starttunnel will connect you to your bastion/jump box and then background the connection with keep-alives on. It listens on local port 8081 and dynamically forwards SSH requests through tlbastion.tuxlabs.com. Additionally, if you want to tunnel a specific web port on a machine inside your network back to the machine you are tunneling from, you can add it to the script, so that the required host/port always gets tunneled and is available on your machine whenever you run starttunnel. An example config would look like this.

Script: starttunnel + forwarding http
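Again a sketch; the internal host webhost.tuxlabs.com and the local port 8080 are hypothetical:

#!/bin/bash
# starttunnel - as above, plus a static local forward so that
# http://localhost:8080 always reaches the internal web host.
ssh -f -N -D 8081 \
  -o ServerAliveInterval=60 \
  -o ServerAliveCountMax=3 \
  -L 8080:webhost.tuxlabs.com:80 \
  tlbastion.tuxlabs.com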

 

Now that you have authenticated to your bastion and have a working tunnel, you need to get SSH requests to go through it. However, if you're like me, you still want the ability to SSH to other machines without going through that tunnel. So I created a new script called 'sshp'. When I want to SSH through the tunnel/proxy I use 'sshp'; when I want to SSH somewhere else on the internet or another network I use plain old 'ssh'. Here is my sshp script, used to connect to machines behind the bastion.

Script: sshp
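A sketch of sshp, using a netcat that supports -x (BSD/OpenBSD nc) to route the connection through the SOCKS proxy on port 8081:

#!/bin/bash
# sshp - ssh through the local SOCKS tunnel set up by starttunnel.
ssh -o ProxyCommand="nc -x localhost:8081 %h %p" "$@"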

Now, when you run sshp tuxlabs1@tuxlabs.com you will be connecting through the tuxlabs bastion into tuxlabs1. Also notice that in my previous post I used sconnect as the proxy command; in this one we are using 'nc', aka netcat. I have found this method of tunneling to be the most simple and effective in my daily life. One more script is needed: if you want to copy files, you need to use scp, so you have to make a similar command, 'scpp', for tunneling your file copies. Here's the script.

Script: scpp
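scpp is the same trick applied to scp (sketch):

#!/bin/bash
# scpp - scp through the same SOCKS tunnel.
scp -o ProxyCommand="nc -x localhost:8081 %h %p" "$@"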

One final note: if you need to use '*' (aka splat) to copy many files, you cannot use the script above, because the glob gets expanded incorrectly on its way through the script. Instead, just run the full command yourself from the command line.

scp’ing with *
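For example (the username and destination path are illustrative):

scp -o ProxyCommand="nc -x localhost:8081 %h %p" copy.all.* tuxninja@tlbastion.tuxlabs.com:~/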

This would copy all files named 'copy.all.<whatever>' to the bastion. Hope this helps the folks out there feeling limited by bastions. They provide great security and are an absolute requirement in secure environments, so tricks that ensure you only need to authenticate once for an extended period of time can come in real handy.

Enjoy,
Jason Riedel

 

 

Runner: Multi-threaded SSH with Sudo support using Python & Paramiko

Published by tuxninja

Example of Runner

 

Why Runner?

I have been working as a Systems & Network Administrator since 1999. In that time I have repeatedly needed to rapidly execute commands across thousands of servers. There are many applications out there that solve this problem in various ways; to name a few: pdsh, Ansible, Salt, Chef, Puppet (mcollective), even Cfengine, and more. Some require agents running on the machines; some use SSH but require keys, or come with learning curves. Alternatively, you can write your own code to solve this problem, which is what I did, mostly for fun. I don't recommend re-inventing the wheel if you need this for your job; just use what is already out there, or download Runner and hack it to your heart's content.

Fabric vs. Paramiko

Because I use Python for most of my work code these days, I decided to write my multi-threaded SSH command runner in Python; this way I can use Runner for parallel SSH transport and easily bolt on my other Python scripts for additional functionality. Python has fantastic support for SSH via two libraries, Fabric and Paramiko. Fabric is built on top of Paramiko and provides a simpler interface for doing just about anything you can think of. Create a fabfile, run it, and voilà: instant results from commands run via SSH. Fabric is really great for running and re-running a set of commands, to automate an install or reporting for example. All that being said, I still chose Paramiko over Fabric, for three reasons.

  1. I don't like abstraction. Fabric hides the ugliness of Paramiko, which I prefer to understand better.
  2. Writing this with Paramiko lent itself better to a command-line utility for ad hoc commands than Fabric did.
  3. I wasn't sure if Fabric's abstraction would limit me later if I needed custom functionality. So for Runner I chose Paramiko, but to be clear, 9 times out of 10 I think I would choose Fabric.

Bastions

A bastion, or jump box, is a machine used as the gatekeeper of access to the rest of the machines in your network. In secure environments where your corp network is separate from your production network, you have to SSH into a bastion, which usually has some form of 2-factor authentication (at least it should!), and from there you may SSH into other hosts. A bastion can throw a real wrench into trying to manage thousands of machines in seconds, because you would have to authenticate to the bastion 1,000 times! The way around this is to set up your SSH config to proxy commands.

ProxyCommand & Sconnect

Sconnect (or connect.c) is a binary that is most commonly used as the ProxyCommand for SSH. You can download and read more about sconnect here: https://bitbucket.org/gotoh/connect/wiki/Home, which will also tell you how to set up your SSH config. A ProxyCommand is required for Runner, though you can use any ProxyCommand you would like. Really quickly, here is what you basically need to do (the combined config is shown after the list).

  1. Download / Compile connect.c
  2. Copy it to /usr/local/bin/sconnect and set executable permissions
  3. In your SSH Config (.ssh/config) add…
    1. Host <ssh-config-profile-name>
      User tuxninja
      ForwardAgent yes
      HostName <bastion_name>
      DynamicForward 8081 (any uncommon port is fine)
    2. Host *.tuxlabs.com
      User tuxninja
      ProxyCommand /usr/local/bin/sconnect -4 -w 4 -S localhost:8081 %h.tuxlabs.com %p
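Put together, the resulting ~/.ssh/config looks like this (placeholders left as in the list above):

Host <ssh-config-profile-name>
    User tuxninja
    ForwardAgent yes
    HostName <bastion_name>
    DynamicForward 8081

Host *.tuxlabs.com
    User tuxninja
    ProxyCommand /usr/local/bin/sconnect -4 -w 4 -S localhost:8081 %h.tuxlabs.com %p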

That is basically it. Then you should start a screen session so you can background the SSH session, since you will leave it open for other SSH sessions to proxy through and won't have to go through 2-factor authentication more than once. So, something like this.
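A sketch, using the profile name from the config above:

screen -S bastion
ssh <ssh-config-profile-name>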

After you authenticate, detach yourself from the screen using Ctrl-A then D. Now you can ssh to anything at the domain name, in my case tuxlabs.com, and it will forward through the bastion. At this point you still have to authenticate using a username/password, which is fine; Runner deals with this.

Hosts

Runner requires a hosts file to run. By default it is configured to look in hosts/hosts-all for a list of all hosts. I use a script called 'update-runner-hosts.pl', which is included in my github, to gather hosts from a URL and update the required hosts file. Once you have populated hosts/hosts-all with the FQDNs of your hosts, you are ready to use Runner.

Note: You can use ‘-f’ to provide a custom location for your hosts file.

Great Flags / Features

Some of the really great features of Runner are threading (-t), sudo (-s), list-only mode (-l), and regular expressions (-r). -r is for pattern matching against your hosts list, which is incredibly handy, and absolutely required in an environment with hundreds to thousands of hosts where you only want to select, say, the hosts matching -r 'web'.
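Hypothetical invocations based on the flags described here and below; the exact argument layout may differ, so check the code on GitHub:

./runner.py -r 'web' -l                # list matching hosts without running anything
./runner.py -r 'web' -t 50 'uptime'    # run 'uptime' across matches with 50 threads
./runner.py -r 'db' -s -u 'whoami'     # run via sudo, prompting for a different user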

(-1), one-host-per-pool mode, is a great feature; however, it depends on understanding your environment's hostname pattern, so you will have to modify the regular expression in the code to make sure it works for you. It is currently set up to identify hostnames in pools when the naming convention is something like apache1234.tuxlabs.com.

Ok, I could go on and on about Runner, but it's better to just share the code at this point and let you go! Note the statically defined proxy_command in the code; you may need to change it if you didn't use sconnect or the same port.

Note: by default Runner SSHes as the user you are logged in as; you can prompt for a different user with '-u'.

All code and accessories are available for download on github : https://github.com/tuxninja/tuxlabs-code/tree/master/runner

Email tuxninja@tuxlabs.com with any questions! Happy SSH'ing, admins!

Note: in various versions of this code I had a '-h' allowing you to pass a CSV list of hosts; somehow I let that drop out of this version, sorry! Feel free to re-add it!

The Runner Code

The full Runner source, along with update-runner-hosts.pl, is in the GitHub repository linked above.

 

 

 

 

How To: Add A Compute Node To Openstack Icehouse Using Packstack

Published by tuxninja


Pre-requisites

This article is a continuation of the previous article I wrote on how to do a single-node all-in-one (AIO) Openstack Icehouse install using Redhat's packstack. A working Openstack AIO installation using packstack is required for this article. If you do not already have a functioning AIO install of Openstack, please refer to the previous article before continuing on to this article's steps.

Preparing Our Compute Node

Much like in the previous article, we first need to set up our system and network properly to work with Openstack. I started with a minimal CentOS 6.5 install and then configured the following (a condensed sketch of steps 5 and 6 follows the list).

  1. resolv.conf
  2. sudoers
  3. my network interfaces eth0(192) and eth1 (10)
    1. Hostname: ruby.tuxlabs.com ( I also setup DNS for this )
    2. EXT IP: 192.168.1.11
    3. INT IP: 10.0.0.2
  4. A local user + added him to wheel for sudo
  5. I installed these handy dependencies
    1. yum install -y openssh-clients
    2. yum install -y yum-utils
    3. yum install -y wget
    4. yum install -y bind-utils
  6. And I disabled SELinux
    1. Don’t forget to reboot after
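Condensing steps 5 and 6 into commands (the sed line is one common way to disable SELinux; reboot afterward):

yum install -y openssh-clients yum-utils wget bind-utils
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
reboot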

To see how I set up the above pre-requisites, see the "Setting Up Our Initial System" section of the previous controller install here: http://tuxlabs.com/?p=82

Adding Our Compute Node Using PackStack

For starters, we need to follow the steps in this link: https://openstack.redhat.com/Adding_a_compute_node

I am including the link for reference, but you don’t have to click it as I will be listing the steps below.

On your controller node ( diamond.tuxlabs.com )

First, locate your answers file from your previous packstack all-in-one install.

Edit the answers file.
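Packstack writes a timestamped answers file into the directory it was run from (typically root's home); the exact filename below is hypothetical:

ls ~/packstack-answers-*.txt
vi ~/packstack-answers-20140816-123456.txt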

Change lo to eth1 (assuming that is your private 10. interface) for both CONFIG_NOVA_COMPUTE_PRIVIF & CONFIG_NOVA_NETWORK_PRIVIF

Change CONFIG_COMPUTE_HOSTS to the IP address of the compute node you want to add, in our case '192.168.1.11'. Additionally, validate that the IP address for CONFIG_NETWORK_HOSTS is your controller's IP, since you do not run a separate network node.
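The relevant lines should end up looking like this (the controller IP is a placeholder):

CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_COMPUTE_HOSTS=192.168.1.11
CONFIG_NETWORK_HOSTS=<controller_ip>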

That's it. Now run packstack again on the controller.
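Using the same (hypothetical) answers file:

packstack --answer-file=~/packstack-answers-20140816-123456.txt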

When that completes, SSH into (or switch terminals over to) the compute node you just added.

On the compute node ( ruby.tuxlabs.com )

Validate that the relevant Openstack compute services are running.
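For example (openstack-status ships in the openstack-utils package; checking the nova compute service directly also works):

openstack-status
service openstack-nova-compute status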

 Back on the controller ( diamond.tuxlabs.com )

We should now be able to validate that ruby.tuxlabs.com has been added as a compute node hypervisor.
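With the admin credentials packstack writes on the controller (keystonerc_admin), the hypervisor list should now show both nodes:

source ~/keystonerc_admin
nova hypervisor-list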

Additionally, you can verify it in the Openstack Dashboard

[Screenshot: Hypervisors in the Openstack Dashboard]

Next we are going to try to boot an instance on the new ruby.tuxlabs.com hypervisor. To do this we will need a few pieces of information. First, let's get our OS image list.
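For example:

nova image-list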

Great, now we need the ID of our private network.
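With nova-network (as in this packstack setup), the nova client can list it:

nova net-list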

Ok now we are ready to proceed with the nova boot command.
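A sketch; the flavor name, image ID, and network ID are placeholders to fill in from the listings above:

nova boot --flavor m1.small \
  --image <image_id> \
  --nic net-id=<private_net_id> \
  --hint force_hosts=ruby.tuxlabs.com \
  tuxlabs-test-vm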

Fantastic. That command should look familiar from our previous tutorial; it is the standard command for launching new VM instances from the command line, with one exception: '--hint force_hosts=ruby.tuxlabs.com'. This part of the command forces the scheduler to use ruby.tuxlabs.com as its hypervisor.

Once the VM is building we can validate that it is on the right hypervisor like so.
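One way is to ask each hypervisor which instances it is running:

nova hypervisor-servers diamond.tuxlabs.com
nova hypervisor-servers ruby.tuxlabs.com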

You can see from the output above that I have two VMs on my existing controller, 'diamond.tuxlabs.com', and the newly created instance is on 'ruby.tuxlabs.com' as instructed. Awesome.

Now that you are sure you set up your compute node correctly and can boot a VM on a specific hypervisor via the command line, you might be wondering how this works using the GUI. The answer is: a little differently 🙂

The Openstack Nova Scheduler

The Nova Scheduler in Openstack is responsible for determining which compute node a VM should be created on. If you are familiar with VMware, this is like DRS, except it only happens on initial creation; there is no rebalancing as resources are consumed over time. Using the Openstack Dashboard GUI I am unable to tell nova to boot off a specific hypervisor; to do that I have to use the command line above (if someone knows of a way to do this using the GUI, let me know; I have a feeling that if it has not been added already, they will add the ability to send a hint to nova from the GUI in a later version). In theory you can trust the nova-scheduler service to automatically balance the usage of compute resources (CPU, memory, disk, etc.) based on its default configuration. However, if you want to ensure that certain VMs live on certain hypervisors, you will want to use the command line above. For more information on how the scheduler works, see: http://cloudarchitectmusings.com/2013/06/26/openstack-for-vmware-admins-nova-compute-with-vsphere-part-2/

The End

That is all for now. Hopefully this tutorial was helpful and accurately assisted you in expanding your Openstack compute resources and your knowledge of Openstack. Until next time!

 

How To: Install The Foreman on CentOS 6.5

Published by tuxninja


Overview

In this tutorial I will demonstrate how to install The Foreman on the Linux CentOS 6.5 operating system. If you are not familiar with CentOS, it is the community version of RedHat Enterprise Linux, hence the name C(ommunity)ENT(erprise) O(perating)S(ystem). For more information on CentOS see: https://www.centos.org/. I chose CentOS over Ubuntu and other Linux flavors for this set of tutorials because of its ubiquity in enterprise environments and because of RedHat's leadership in the Openstack ecosystem. We will discuss Openstack (www.openstack.org) in later tutorials; for now, The Foreman represents a great first step toward setting up a world-class Cloud environment.

Why The Foreman

“Foreman is an open source project that helps system administrators manage servers throughout their lifecycle, from provisioning and configuration to orchestration and monitoring. Using Puppet or Chef and Foreman’s smart proxy architecture, you can easily automate repetitive tasks, quickly deploy applications, and proactively manage change, both on-premise with VMs and bare-metal or in the cloud.” From their website (www.theforeman.org).

In other words, it builds on well-known configuration management systems (Puppet in our case) and fully automates 'things' to provision and manage bare-metal (physical) systems and Cloud virtual machines. This is really helpful when you have an Openstack Cloud and need to add compute nodes quickly.

Starting The Install

My 3 machines in my lab, diamond, ruby, and emerald, are all running CentOS 6.5. To make sure I cover all the dependencies needed, I did a minimal install on these machines, so you will see me install some convenience dependencies as well, which I will point out. The following steps are based loosely on the quickstart guide: http://theforeman.org/manuals/1.5/quickstart_guide.html

You will want to install The Foreman on your 'master', or the machine you intend to be the 'controller' of your Cloud. In my case, diamond.tuxlabs.com.

Installing Basic System Dependencies

scl-utils is used for managing the rails console; the rest should be pretty self-explanatory.
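The original command block is missing here; a plausible sketch (everything beyond scl-utils is a hypothetical convenience install):

yum install -y scl-utils wget vim-enhanced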

Installing Foreman Dependencies

Add Yum Repos
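For Foreman 1.5 on CentOS 6 this means the EPEL and Foreman release RPMs; the URLs below follow the project's usual pattern, so verify them against the quickstart guide:

rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh http://yum.theforeman.org/releases/1.5/el6/x86_64/foreman-release.rpm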

Check to make sure the repo was added, using:
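yum repolist enabled | grep foreman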

Installing The Foreman

After grabbing the packages, finally, run the installer.
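That is (per the 1.5 quickstart):

yum install -y foreman-installer
foreman-installer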

This part is pretty cool, because Foreman's automated installer seems to work quite well. It takes a few minutes, but when completed you should see a success message.

Iptables

If you have iptables running, you will have to flush the rules or open the needed ports.
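The blunt, lab-only option is to flush the rules and persist that:

iptables -F
service iptables save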

 Puppet

At this point Puppet should be installed on diamond (and your machine). You should test it by running the agent twice, yes, twice 🙂 The first time clears some warnings.
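That is:

puppet agent --test
puppet agent --test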

Follow the instructions for installing the NTP module in Puppet

http://theforeman.org/manuals/1.5/index.html#2.2PuppetManagement

I could have re-written or cropped this, but it was written very simply. Definitely do this though, so you get a feel for how to add puppet modules and manage them in The Foreman.

The Clients

Now, before we get into The Foreman Dashboard, let's set up our clients so we have more interesting stuff to look at than this one node. SSH to each of your machines (ruby and emerald in my case), sudo or become root, and run…
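A sketch; the clients need the Puppet packages (from the repos added earlier) and must point at the puppet master:

yum install -y puppet
# then set 'server = diamond.tuxlabs.com' in /etc/puppet/puppet.conf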

Then start puppet.
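On CentOS 6:

service puppet start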

 Sign The Certs

Puppet uses SSL certificates to authenticate the clients it manages. By default you must permit each client by manually signing its certificate before the client is authenticated with the puppet master. Back on the master server (diamond), run the following, modified for the correct FQDNs (full hostnames).
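With Puppet 3's cert subcommand, list the pending requests and sign each client:

puppet cert list
puppet cert sign ruby.tuxlabs.com
puppet cert sign emerald.tuxlabs.com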

Auto Starting

Make sure Puppet starts on boot; do this on all machines.
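On CentOS 6 that is:

chkconfig puppet on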

The Dashboard

Finally 🙂 if you have set everything up correctly, you should be able to reach your console at https://<your-server-ip>, or if you have DNS, at https://<your-server-name> (aka the FQDN, Fully Qualified Domain Name). Note that the console uses SSL, so you want HTTPS (on port 443 by default), not HTTP (port 80 by default). Remember the HTTPS part, and you should see a login screen like this.

[Screenshot: The Foreman login screen]

The first-time login is admin and the password is changeme… make sure you do what the password says! Also, feel free to change your display name in your profile.

The End

Now that you have The Foreman installed, the next logical step is to install Openstack. I will cover that next time, folks. I hope this helps out!