
Upgrading to Python 2.7 on CentOS 6.5

Published by tuxninja

Hey Folks,

The systems running Tuxlabs are currently on CentOS 6.5 to emulate a production, RHEL-like setup for an Openstack Cloud. Running an operating system this old has its drawbacks, such as dependencies. I was recently installing a well-known Python framework and ran into compatibility issues: the framework required Python 2.7, and CentOS 6.5 ships with 2.6. Below is a step-by-step procedure for upgrading to Python 2.7 on CentOS 6.5, should you ever need it. As a reminder though, run a newer OS when possible, and for god's sakes, if you don't need Redhat support, run Ubuntu.

Step one, we verify we are indeed running Python 2.6
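
A quick check from the shell confirms the starting point:

    $ python -V
    Python 2.6.6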

Ok then, let's upgrade Python to 2.7. First, let's update all of our system packages, both to head off version dependency issues and because it's good for security.
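
On a stock box that is just:

    # yum -y update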

Next, we have to install Development Tools; it is a required dependency for compiling Python.
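
The group install pulls in gcc, make, and friends:

    # yum groupinstall -y "Development tools"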

Additionally, we will need these…
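
These are the usual development libraries Python's standard modules build against (add tk-devel too if you want Tkinter):

    # yum install -y zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel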

Now, let’s install Python 2.7
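
I grabbed 2.7.8, the latest 2.7.x at the time; substitute whatever release is current:

    # cd /usr/src
    # wget http://python.org/ftp/python/2.7.8/Python-2.7.8.tgz
    # tar xzf Python-2.7.8.tgz
    # cd Python-2.7.8
    # ./configure --prefix=/usr/local
    # make && make altinstall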

It is important to use 'altinstall'; otherwise you will end up with two different versions of Python on your filesystem, both named 'python'.

You can verify the install like so
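
The new interpreter lands as python2.7, and the system python is left untouched:

    $ python2.7 -V
    Python 2.7.8
    $ python -V
    Python 2.6.6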

That's it! Enjoy.

SSH Tunneling

Published by tuxninja

In my last post about Runner I briefly explained needing to modify your ~/.ssh/config to use a ProxyCommand to allow for automatic tunneling with SSH.

What I didn't explain is that there is an alternative method that is arguably simpler. It requires creating three small shell scripts and placing them in your path, or a common host path like /usr/local/bin/, with chmod +x permissions. Here is the script that sets up the SSH tunnel.

Script: starttunnel
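
Mine looks roughly like this; tlbastion.tuxlabs.com is my bastion, and the keep-alive values are just sane defaults to tune to taste:

    #!/bin/bash
    # Background an SSH connection to the bastion (-f), run no remote command (-N),
    # stay quiet (-q), keep it alive, and open a dynamic SOCKS proxy on local
    # port 8081 (-D).
    ssh -f -N -q -D 8081 \
        -o ServerAliveInterval=60 \
        -o ServerAliveCountMax=3 \
        tlbastion.tuxlabs.com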

Running starttunnel will connect you to your bastion/jump box and then background this connection with keep-alives on. It will listen on local port 8081 and dynamically forward SSH requests through tlbastion.tuxlabs.com. Additionally, if you want to tunnel a specific web port on a machine that sits within your network back to the machine you are tunneling from, you can add it to the script, such that the required host/port always gets tunneled and is available on your machine when you run starttunnel. An example config would look like this.

Script: starttunnel + forwarding http
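
Same script with one added -L line; internalweb.tuxlabs.com is a made-up example host whose port 80 shows up locally on 8080:

    #!/bin/bash
    # Dynamic SOCKS proxy on 8081, plus a static forward: localhost:8080 reaches
    # port 80 on an internal web host behind the bastion.
    ssh -f -N -q -D 8081 \
        -L 8080:internalweb.tuxlabs.com:80 \
        -o ServerAliveInterval=60 \
        -o ServerAliveCountMax=3 \
        tlbastion.tuxlabs.com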


Now that you have authenticated to your bastion and have a working tunnel, you need to get SSH requests to go through this tunnel. However, if you're like me, you still want the ability to SSH to other machines without going through that tunnel. So I created a new script called 'sshp'. When I want to SSH through the tunnel/proxy I use 'sshp'; when I want to SSH to somewhere else on the internet or another network I use plain old 'ssh'. Here is my sshp script, used to connect to machines behind the bastion.

Script: sshp
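
Something like the following; note that -x is the OpenBSD netcat syntax for a SOCKS proxy (nmap's ncat spells it --proxy):

    #!/bin/bash
    # SSH to the target host/port (%h %p), proxying the connection through the
    # SOCKS tunnel that starttunnel opened on localhost:8081.
    ssh -o "ProxyCommand nc -x 127.0.0.1:8081 %h %p" "$@"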

Now, when you run sshp tuxlabs1@tuxlabs.com you will be connecting through the tuxlabs bastion into tuxlabs1. Also notice that in my previous post I used sconnect as the proxy command; in this one we are using 'nc', aka netcat. I have found this method of tunneling to be the most simple and effective in my daily life. One more script you need: if you want to copy files, you need to use scp, so you have to make a similar command, 'scpp', for tunneling your file copies. Here's the script.

Script: scpp
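
It is the same one-liner with scp swapped in:

    #!/bin/bash
    # scp through the same SOCKS tunnel on localhost:8081.
    scp -o "ProxyCommand nc -x 127.0.0.1:8081 %h %p" "$@"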

One final note… if you need to use '*' aka splat for copying many files, you cannot use the script above, because the quoting in the script mangles the glob expansion. Instead, just type the full command yourself at the command line.

scp’ing with *
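
For example, with a made-up user and the bastion from above:

    $ scp -o "ProxyCommand nc -x 127.0.0.1:8081 %h %p" copy.all.* youruser@tlbastion.tuxlabs.com: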

This would copy all files named 'copy.all.<whatever>' to the bastion. Hope this helps the folks out there feeling limited by bastions. They provide great security and are an absolute requirement in secure environments, so learning tricks that ensure you only need to authenticate once for an extended period of time can come in real handy.

Enjoy,
Jason Riedel


How To: Add A Compute Node To Openstack Icehouse Using Packstack

Published by tuxninja


Pre-requisites

This article is a continuation of the previous article I wrote on how to do a single node all-in-one (AIO) Openstack Icehouse install using Redhat's packstack. A working Openstack AIO installation using packstack is required for this article. If you do not already have a functioning AIO install of Openstack, please refer to the previous article before continuing on to this article's steps.

Preparing Our Compute Node

Much like in our previous article, we first need to set up our system and network properly to work with Openstack. I started with a minimal CentOS 6.5 install, and then configured the following:

  1. resolv.conf
  2. sudoers
  3. my network interfaces eth0 (192.x) and eth1 (10.x)
    1. Hostname: ruby.tuxlabs.com (I also set up DNS for this)
    2. EXT IP: 192.168.1.11
    3. INT IP: 10.0.0.2
  4. A local user + added him to wheel for sudo
  5. I installed these handy dependencies
    1. yum install -y openssh-clients
    2. yum install -y yum-utils
    3. yum install -y wget
    4. yum install -y bind-utils
  6. And I disabled SELinux
    1. Don't forget to reboot after

To see how I set up the above pre-requisites, see the "Setting Up Our Initial System" section in the previous controller install here: http://tuxlabs.com/?p=82

Adding Our Compute Node Using PackStack

For starters we need to follow the steps in this link: https://openstack.redhat.com/Adding_a_compute_node

I am including the link for reference, but you don’t have to click it as I will be listing the steps below.

On your controller node (diamond.tuxlabs.com)

First, locate your answers file from your previous packstack all-in-one install.
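
packstack names it with a timestamp and leaves it wherever you ran packstack from, typically root's home directory:

    # ls ~/packstack-answers-*.txt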

Edit the answers file

Change lo to eth1 (assuming that is your private 10.x interface) for both CONFIG_NOVA_COMPUTE_PRIVIF & CONFIG_NOVA_NETWORK_PRIVIF.

Change CONFIG_COMPUTE_HOSTS to the IP address of the compute node you want to add, in our case '192.168.1.11'. Additionally, validate that the IP address for CONFIG_NETWORK_HOSTS is your controller's IP, since you do not run a separate network node.
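
After the edits, the relevant lines should look something like this (192.168.1.10 here is standing in for your controller's address):

    CONFIG_NOVA_COMPUTE_PRIVIF=eth1
    CONFIG_NOVA_NETWORK_PRIVIF=eth1
    CONFIG_COMPUTE_HOSTS=192.168.1.11
    CONFIG_NETWORK_HOSTS=192.168.1.10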

That’s it. Now run packstack again on the controller
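
Point it at the same answers file, substituting your own file name:

    # packstack --answer-file=/root/packstack-answers-<timestamp>.txt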

When that completes, ssh into, or switch terminals over to, the compute node you just added.

On the compute node (ruby.tuxlabs.com)

Validate that the relevant openstack compute services are running
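
On CentOS 6 these are plain SysV services, so a quick status check does it:

    # service openstack-nova-compute status
    # chkconfig --list | grep openstack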

Back on the controller (diamond.tuxlabs.com)

We should now be able to validate that ruby.tuxlabs.com has been added as a compute node hypervisor.
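
Source the admin credentials packstack wrote, then list the hypervisors:

    # source ~/keystonerc_admin
    # nova hypervisor-list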

Additionally, you can verify it in the Openstack Dashboard

[Screenshot: the Hypervisors panel in the Openstack Dashboard]

Next we are going to try to boot an instance using the new ruby.tuxlabs.com hypervisor. To do this we will need a few pieces of information. First let’s get our OS images list.
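
nova can list the Glance images along with their IDs:

    # nova image-list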

Great, now we need the ID of our private network
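
This setup uses nova-network, so nova lists the networks itself (under neutron you would use neutron net-list instead):

    # nova net-list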

Ok now we are ready to proceed with the nova boot command.
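
The flavor and instance name here are just examples; swap in the image and network IDs from the listings above:

    # nova boot --flavor m1.small \
        --image <image-id> \
        --nic net-id=<net-id> \
        --hint force_hosts=ruby.tuxlabs.com \
        tuxlabs-test1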

Fantastic. That command should look familiar from our previous tutorial; it is the standard command for launching new VM instances using the command line, with one exception: '--hint force_hosts=ruby.tuxlabs.com'. This part of the command line forces the scheduler to use ruby.tuxlabs.com as its hypervisor.

Once the VM is building we can validate that it is on the right hypervisor like so.
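
One way is to ask nova which instances live on each hypervisor:

    # nova hypervisor-servers diamond.tuxlabs.com
    # nova hypervisor-servers ruby.tuxlabs.com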

You can see from the output above that I have two VMs on my existing controller 'diamond.tuxlabs.com', and the newly created instance is on 'ruby.tuxlabs.com' as instructed, awesome.

Now that you are sure you set up your compute node correctly, and can boot a VM on a specific hypervisor via the command line, you might be wondering how this works using the GUI. The answer is: a little differently 🙂

The Openstack Nova Scheduler

The Nova Scheduler in Openstack is responsible for determining which compute node a VM should be created on. If you are familiar with VMware, this is like DRS, except it only happens on initial creation; there is no rebalancing as resources are consumed over time. Using the Openstack Dashboard GUI I am unable to tell nova to boot off a specific hypervisor; to do that I have to use the command line above (if someone knows of a way to do this using the GUI let me know; I have a feeling that if it has not been added already, they will add the ability to send a hint to nova from the GUI in a later version). In theory you can trust the nova-scheduler service to automatically balance the usage of compute resources (CPU, memory, disk, etc.) based on its default configuration. However, if you want to ensure that certain VMs live on certain hypervisors, you will want to use the command line above. For more information on how the scheduler works see: http://cloudarchitectmusings.com/2013/06/26/openstack-for-vmware-admins-nova-compute-with-vsphere-part-2/

The End

That is all for now. Hopefully this tutorial was helpful and accurately assisted you in expanding your Openstack compute resources and your knowledge of Openstack. Until next time!


How To: Install The Foreman on CentOS 6.5

Published by tuxninja


Overview

In this tutorial I will be demonstrating how to install The Foreman on the Linux CentOS 6.5 operating system. If you are not familiar with CentOS, it is the community version of Red Hat Enterprise Linux, hence the name C(ommunity)ENT(erprise) O(perating)S(ystem). For more information on CentOS see: https://www.centos.org/. I chose CentOS over Ubuntu or other Linux flavors for this set of tutorials because of its ubiquity in enterprise environments and because of RedHat's leadership in the Openstack ecosystem. We will discuss Openstack (www.openstack.org) in later tutorials; for now The Foreman represents a great first step to setting up a world-class Cloud environment.

Why The Foreman

“Foreman is an open source project that helps system administrators manage servers throughout their lifecycle, from provisioning and configuration to orchestration and monitoring. Using Puppet or Chef and Foreman’s smart proxy architecture, you can easily automate repetitive tasks, quickly deploy applications, and proactively manage change, both on-premise with VMs and bare-metal or in the cloud.” From their website (www.theforeman.org).

In other words, it builds on well-known configuration management systems (Puppet in our case) and fully automates 'things' to provision and manage bare-metal (physical) systems and Cloud virtual machines. This is really helpful when you have an Openstack Cloud and need to add compute nodes quickly.

Starting The Install

My 3 machines in my lab, diamond, ruby and emerald, are all running CentOS 6.5. To make sure I cover all the dependencies needed, I did a minimal install on these machines. So you will see me install some convenience dependencies as well, which I will point out. The following steps are based loosely on the quickstart guide: http://theforeman.org/manuals/1.5/quickstart_guide.html

You will want to install The Foreman on your 'master', or the machine you intend to be the 'controller' of your Cloud. In my case, diamond.tuxlabs.com.

Installing Basic System Dependencies

scl-utils is used for managing the rails console; the rest should be pretty self-explanatory.
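
On a minimal install I add a couple of convenience tools alongside it; pick your own favorites:

    # yum install -y scl-utils wget vim-enhanced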

Installing Foreman Dependencies

Add Yum Repos
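
These were the repo RPMs for Puppet, EPEL, and Foreman 1.5 at the time of writing; check the current URLs if they have moved:

    # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
    # rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
    # yum install -y http://yum.theforeman.org/releases/1.5/el6/x86_64/foreman-release.rpm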

Check to make sure the Repo was added using
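
A quick repolist should show all three:

    # yum repolist enabled | grep -iE 'foreman|puppet|epel'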

Installing The Foreman

After grabbing the packages… finally… run the installer.
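
The installer itself is one package and one command:

    # yum install -y foreman-installer
    # foreman-installer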

This part is pretty cool, because Foreman's automated installer seems to work quite well. It takes a few minutes, but when it completes you should see a success message pointing you at the dashboard URL and the initial credentials.

Iptables

If you have iptables running, you will have to flush the rules or open the needed ports.
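
Flushing is the quick lab option; on anything shared, open just the web UI and Puppet master ports instead (80/443 and 8140 are the defaults):

    # The quick option: flush everything
    iptables -F
    service iptables save

    # Or open just what Foreman and Puppet need
    iptables -I INPUT -p tcp --dport 80 -j ACCEPT
    iptables -I INPUT -p tcp --dport 443 -j ACCEPT
    iptables -I INPUT -p tcp --dport 8140 -j ACCEPT
    service iptables save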

Puppet

At this point Puppet should be installed on diamond (and your machine). You should test it by running the agent twice, yes twice 🙂 The first run clears some warnings.
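
Both runs are the same command:

    # puppet agent --test
    # puppet agent --test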

Follow the instructions for installing the NTP module in Puppet

http://theforeman.org/manuals/1.5/index.html#2.2PuppetManagement

I could have re-written this or cropped it, but it is written very simply. Definitely do this though, so you get a feel for how to add Puppet modules and manage them in The Foreman.

The Clients

Now, before we get into The Foreman Dashboard, let's set up our clients so we have more interesting stuff to look at than this one node. SSH to each of your machines (ruby and emerald in my case), sudo or become root, and run…
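
On each client, add the Puppet Labs repo and install the agent:

    # rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
    # yum install -y puppet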

Then start Puppet
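
A first run with --server pointed at the master submits the client's certificate request (put the same server line in /etc/puppet/puppet.conf so the daemon uses it too), then start the service:

    # puppet agent --test --server diamond.tuxlabs.com
    # service puppet start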

Sign The Certs

Puppet uses SSL certificates to authenticate clients that it manages. By default you must permit each client by manually signing its certificate before the client is authenticated with the puppet master. Back on the master server (diamond), run the following, modified for the correct FQDNs (full hostnames).
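
List the pending requests, then sign each client:

    # puppet cert list
    # puppet cert sign ruby.tuxlabs.com
    # puppet cert sign emerald.tuxlabs.com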

Auto Starting

Make sure Puppet starts on boot; do this on all machines.
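
On CentOS 6 that is a chkconfig one-liner:

    # chkconfig puppet on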

The Dashboard

Finally :-) If you have set everything up correctly, you should be able to reach your console at https://<your server's IP>, or if you have DNS, at https://<your server's name> (aka the FQDN, Fully Qualified Domain Name). Note the console uses SSL, so you want HTTPS (on port 443 by default), not HTTP (which is port 80 by default). Just remember the HTTPS part and you should see a login screen, like this.

[Screenshot: The Foreman login screen]

The first-time login is: admin and the password is: changeme … make sure you do what the password says! Also feel free to change your display name in your profile.

The End

Now that you have The Foreman installed, the next logical step is to install Openstack. I will cover that next time folks, I hope this helps out!