Openstack

Consul for Service Discovery

Why Service Discovery ?

Service Discovery replaces the process of manually assigning, or automating, your own DNS entries for nodes on your network. It moves us further along the path from treating VMs like pets to treating them like cattle, by retiring the age-old practice of giving hostnames and FQDNs contextual value. Instead, nodes are automatically registered by an agent, and DNS entries are automatically created both for the nodes themselves and for the services running on them.

Consul

Consul by Hashicorp is becoming the de facto standard for Service Discovery. Consul’s rich feature set and simple deployment model make it an optimal choice for organizations looking to quickly add Service Discovery capabilities to their environment.

Components of Consul

  1. The Consul Agent
  2. An optional JSON config file for each service located under /etc/consul.d/<service>.json
    1. If you do not specify a JSON file, Consul will still start and provide discovery for the nodes themselves (they will have DNS entries as well)

A Quick Example of Consul

How easy is it to deploy Consul ?

  1. Download / Decompress and install the Consul agent – https://www.consul.io/downloads.html
  2. Define services in a JSON file (if you want) – https://www.consul.io/intro/getting-started/services.html
  3. Start the agent on the nodes – https://www.consul.io/intro/getting-started/join.html
  4. Make one node join any other node (it does not matter which) to form the cluster, which gives you access to all cluster metadata

Steps 1 and 2 Above

  1. After downloading the Consul binary to each machine and decompressing it, copy it to /usr/local/bin/ so it’s in your path.
  2. Create the directory
    sudo mkdir /etc/consul.d
  3. Optionally, run the following to create a JSON file defining a fake service (if the agent is already running, see the reload note below)
echo '{"service": {"name": "web", "tags": ["rails"], "port": 80}}' \
    | sudo tee /etc/consul.d/web.json
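
If the Consul agent is already running when you drop in a new service definition, it will not pick the file up until it is told to. A small optional step (not part of the original walkthrough) is to reload the agent:

# re-read the config directory without restarting the agent
consul reload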

Step 3 Above

Run the agent on each node, changing IP accordingly.

tuxninja@consul-d415:~$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -node=agent-one -bind=10.73.172.110 -config-dir /etc/consul.d
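
That starts a server agent on the first node. On the second node you would run an agent the same way, just without the server bootstrap flags and with its own IP and node name. A rough sketch, inferring the second node's details (agent-two, 10.73.172.108) from the node list shown further down:

# on the second node: run a client (non-server) agent bound to its own IP
consul agent -data-dir /tmp/consul -node=agent-two -bind=10.73.172.108 -config-dir /etc/consul.d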

Step 4 Above

tuxninja@consul-d415:~$ consul join 10.73.172.108
Successfully joined cluster by contacting 1 nodes.

Wow, simple…ok now for the examples….

Join the cluster from the second node

tuxninja@consul-dcb3:~$ consul join 10.73.172.110
Successfully joined cluster by contacting 1 nodes.
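
With both agents joined together, any member can list the full cluster. The output is omitted here, but the command (the actual "show cluster members" step) is simply:

# list every node in the cluster, with address, status, and role
consul members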

Look up DNS for a node

tuxninja@consul-dcb3:~$ dig @127.0.0.1 -p 8600 agent-one.node.consul
; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> @127.0.0.1 -p 8600 agent-one.node.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2450
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;agent-one.node.consul.		IN	A
;; ANSWER SECTION:
agent-one.node.consul.	0	IN	A	10.73.172.110
;; Query time: 1 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Tue May 03 21:43:47 UTC 2016
;; MSG SIZE  rcvd: 76
tuxninja@consul-dcb3:~$

Look up DNS for a service

tuxninja@consul-dcb3:~$  dig @127.0.0.1 -p 8600 web.service.consul
; <<>> DiG 9.9.5-3ubuntu0.8-Ubuntu <<>> @127.0.0.1 -p 8600 web.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55798
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;web.service.consul.		IN	A
;; ANSWER SECTION:
web.service.consul.	0	IN	A	10.73.172.110
;; Query time: 2 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Tue May 03 21:46:54 UTC 2016
;; MSG SIZE  rcvd: 70
tuxninja@consul-dcb3:~$
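
Beyond A records, Consul will also answer SRV queries for services, which return the registered port (80 for our fake web service) along with the node. An optional extra check, not shown in the original output:

# SRV lookup includes the service port in addition to the node address
dig @127.0.0.1 -p 8600 web.service.consul SRV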

Query the REST API for Nodes

tuxninja@consul-dcb3:~$ curl localhost:8500/v1/catalog/nodes
[{"Node":"agent-one","Address":"10.73.172.110","TaggedAddresses":{"wan":"10.73.172.110"},"CreateIndex":3,"ModifyIndex":1311},{"Node":"agent-two","Address":"10.73.172.108","TaggedAddresses":{"wan":"10.73.172.108"},"CreateIndex":1338,"ModifyIndex":1339}

Query the REST API for Services

tuxninja@consul-dcb3:~$ curl http://localhost:8500/v1/catalog/service/web
[{"Node":"agent-one","Address":"10.73.172.110","ServiceID":"web","ServiceName":"web","ServiceTags":["rails"],"ServiceAddress":"","ServicePort":80,"ServiceEnableTagOverride":false,"CreateIndex":5,"ModifyIndex":772}


How To: curl the Openstack APIs (v3 Keystone Auth)

While Openstack provides Python clients for interacting with its services….

[root@diamond ~]# source keystonerc_tuxninja
[root@diamond ~(keystone_tuxninja)]# openstack server list
+--------------------------------------+-------+--------+----------------------------------------+
| ID                                   | Name  | Status | Networks                               |
+--------------------------------------+-------+--------+----------------------------------------+
| e5b35d6a-a9ba-4714-a9e1-6361706bd047 | spin1 | ACTIVE | private_tuxlabs=10.0.0.8, 192.168.1.52 |
+--------------------------------------+-------+--------+----------------------------------------+
[root@diamond ~(keystone_tuxninja)]#

I frequently find myself needing to get data out of them without the pain of awk/sed’ing through the ASCII-art tables.

Thus, to quickly access the raw data, we can query the APIs directly using curl and parse the JSON instead, which is much better 🙂

Authentication

Before we can interact with the other Openstack APIs we need to authenticate to Keystone, Openstack’s identity service. After authenticating we receive a token to use with our subsequent API requests. So for step 1 we are going to create a JSON object with the required authentication details.

Create a file called ‘token-request.json’ with an object that looks like this.

{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "id": "default"
                    },
                    "name": "tuxninja",
                    "password": "put_your_openstack_pass"
                }
            }
        }
    }
}

Btw, if you followed my tutorial on how to install Openstack Kilo, your authentication details for ‘admin’ are in your keystonerc_admin file.

Now we can use this file to authenticate like so:

export TOKEN=`curl -si -d @token-request.json -H "Content-type: application/json" http://localhost:35357/v3/auth/tokens | awk '/X-Subject-Token/ {print $2}'`

The token is actually returned in the header of the HTTP response, so this is why we need ‘-i’ when curling. Notice we are parsing out the token and returning the value to an environment variable $TOKEN.
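
Before moving on, it is worth a quick sanity check that the token actually landed in the variable (my addition, not in the original post):

# an empty variable means authentication failed - recheck token-request.json
[ -n "$TOKEN" ] && echo "got a token" || echo "authentication failed"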

Now we can include this $TOKEN and run whatever API commands we want (assuming admin privileges for the tenant/project).

Curl Commands (Numerous Examples!)

# list domains
curl -si -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:35357/v3/domains

# create a domain
curl  -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" -d '{"domain": {"description": "--optional--", "enabled": true, "name": "dom1"}}'  http://localhost:35357/v3/domains


# list users
curl -si -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:35357/v3/users

# To create a user, create a file named create_user.json like this:

{
    "user": {
           "default_project_id": "18ed894bb8b84a5b9144c129fc754722",
            "description": "Description",
            "domain_id": "default",
            "email": "tuxninja@tuxlabs.com",
            "enabled": true,
            "name": "tuxninja",
            "password": "changeme" }
}

# then run
curl -si -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:35357/v3/users -d @create_user.json


# list images in nova (replace the tenant ID in the URL - 18ed894bb8b84a5b9144c129fc754722 here - with your own)
curl -s -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/18ed894bb8b84a5b9144c129fc754722/images | python -m json.tool

# list servers (vms)

curl -s -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/18ed894bb8b84a5b9144c129fc754722/servers | python -m json.tool

# neutron networks

curl -s -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:9696/v2.0/networks | python -m json.tool

# neutron subnets

curl -s -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:9696/v2.0/subnets | python -m json.tool

I sometimes pipe the output to python -m json.tool, which pretty-prints the JSON. Let’s take a closer look at an example.

Listing servers (VMs)

[root@diamond ~(keystone_tuxninja)]# curl -s -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/18ed894bb8b84a5b9144c129fc754722/servers | python -m json.tool
{
    "servers": [
        {
            "id": "e5b35d6a-a9ba-4714-a9e1-6361706bd047",
            "links": [
                {
                    "href": "http://localhost:8774/v2/18ed894bb8b84a5b9144c129fc754722/servers/e5b35d6a-a9ba-4714-a9e1-6361706bd047",
                    "rel": "self"
                },
                {
                    "href": "http://localhost:8774/18ed894bb8b84a5b9144c129fc754722/servers/e5b35d6a-a9ba-4714-a9e1-6361706bd047",
                    "rel": "bookmark"
                }
            ],
            "name": "spin1"
        }
    ]
}
[root@diamond ~(keystone_tuxninja)]#

I only have one VM currently, called spin1, but for the tutorial’s sake, if I had tens or hundreds of VMs and all I cared about was the VM name or ID, I would still need to parse this JSON object to avoid getting all this other metadata.

My favorite command line way to do that without going full Python is using the handy JQ tool.

Here is how to use it !

[root@diamond ~(keystone_tuxninja)]# curl -s -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/18ed894bb8b84a5b9144c129fc754722/servers | jq .
{
  "servers": [
    {
      "name": "spin1",
      "links": [
        {
          "rel": "self",
          "href": "http://localhost:8774/v2/18ed894bb8b84a5b9144c129fc754722/servers/e5b35d6a-a9ba-4714-a9e1-6361706bd047"
        },
        {
          "rel": "bookmark",
          "href": "http://localhost:8774/18ed894bb8b84a5b9144c129fc754722/servers/e5b35d6a-a9ba-4714-a9e1-6361706bd047"
        }
      ],
      "id": "e5b35d6a-a9ba-4714-a9e1-6361706bd047"
    }
  ]
}
[root@diamond ~(keystone_tuxninja)]#
[root@diamond ~(keystone_tuxninja)]# curl -s -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/18ed894bb8b84a5b9144c129fc754722/servers | jq .servers[0].name -r
spin1
[root@diamond ~(keystone_tuxninja)]#

The first command takes whatever curl writes to STDOUT and indents and colors the JSON, making it pretty (the colors give it +1 vs. python -m json.tool).

In the second example we actually parse out what we’re after. As you can see it is pretty simple; jq’s query language may not be 100% intuitive at first, but I promise it is easy to pick up if you have ever parsed JSON before. Read up more on jq @ https://stedolan.github.io/jq/ & check out the Openstack docs for more API commands: http://developer.openstack.org/api-ref.html
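
Building on that, jq can also pull one field out of every element at once, which is usually what I want when there are many VMs. For instance (assuming the same servers response as above):

# print just the name of every server, one per line
curl -s -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" http://localhost:8774/v2/18ed894bb8b84a5b9144c129fc754722/servers | jq -r '.servers[].name'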

Hope you enjoyed this post ! Until next time.

 

 


Installing Openstack Kilo on Centos 7

In a previous article I wrote about how to install Openstack Icehouse on CentOS 6.5 in great detail. In this article, I am going to keep verbosity to a minimum and just give you the commands ! I am hoping this will be refreshing for my audience. If, however, you are curious about the what, when, and why, please read my previous article.

Pre-requisites

  1. You need a machine with x86_64 architecture, at least 4 GB of memory, and 2 NICs.
  2. On this machine you need to install CentOS 7 as a minimal install
  3. You should create a user with admin privileges (i.e. wheel, in my case ‘tuxninja’ was created)
  4. Disable SELinux (a quick verification sketch follows this list)
    1. vi /etc/sysconfig/selinux
    2. SELINUX=disabled
    3. Save changes
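
The SELINUX=disabled setting only takes full effect after a reboot. A quick optional check, assuming the standard SELinux userland tools are present:

# switch off enforcement for the current boot and confirm the persistent setting
sudo setenforce 0
getenforce    # reports Permissive now, Disabled after the next reboot
grep ^SELINUX= /etc/sysconfig/selinux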

Jumping Right In

Here are the commands you need to run.

  1. sudo yum update -y
  2. sudo yum install -y https://repos.fedorapeople.org/repos/openstack/openstack-kilo/rdo-release-kilo-1.noarch.rpm
  3. sudo yum install epel-release
  4. sudo yum install -y openstack-packstack

Now at this point if you ran ‘packstack’ you would run into a bug with this message

ERROR : Error appeared during Puppet run: 192.168.1.10_prescript.pp
Error: Could not find data item CONFIG_USE_SUBNETS in any Hiera data file and no default supplied at /var/tmp/packstack/053c9a3614de4404b906141268c08f0a/manifests/192.168.1.10_prescript.pp:2 on node diamond.tuxlabs.com

The workaround for this bug is as follows

  1. sudo rpm -e puppet
  2. sudo rpm -e hiera
  3. curl -O https://yum.puppetlabs.com/el/7/products/x86_64/hiera-1.3.4-1.el7.noarch.rpm
  4. sudo rpm -ivh hiera-1.3.4-1.el7.noarch.rpm
  5. vi /etc/yum.repos.d/epel.repo
    1. At the bottom of the [epel] section, after the gpgkey add a newline with: exclude=hiera*
    2. Save the file
  6. sudo yum install -y puppet-3.6.2-3.el7.noarch
  7. reboot
  8. sudo rm /etc/puppet/hiera.yaml
  9. sudo packstack --allinone

This should successfully install. Godspeed.
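
Once packstack finishes, a quick way to confirm the services actually came up is the openstack-status helper (the same tool used later in this series on the compute node); roughly:

# summarizes the state of every Openstack service on the node
sudo openstack-status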

Networking

Now that Openstack is setup, we still have to setup our network with private & public routed networks, so we can turn this into a real multi-node setup and ssh to our hosts and let them reach the internet etc. To do this, much like my previous post you need to modify your /etc/sysconfig/network-scripts/ files to reflect this.

[tuxninja@diamond network-scripts]$ cat ifcfg-enp4s0f0
NAME="enp4s0f0"
UUID="e0c3929c-1f9b-44d1-9c59-6c8872f603bd"
DEVICE="enp4s0f0"
TYPE="OVSPort"
NM_CONTROLLED="no"
DEVICETYPE="ovs"
OVS_BRIDGE="br-ex"
BOOTPROTO="none"
ONBOOT="yes"
[tuxninja@diamond network-scripts]$ cat ifcfg-enp4s0f1
NAME=enp4s0f1
UUID=ed50b4b6-2c29-4307-bbb0-f3c923f6552a
DEVICE=enp4s0f1
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
NETWORK=10.0.0.0
IPADDR=10.0.0.1
NETMASK=255.255.255.0
[tuxninja@diamond network-scripts]$ cat ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=8.8.8.8
DNS2=8.8.4.4
DNS3=192.168.1.1
ONBOOT=yes
[tuxninja@diamond network-scripts]$

Note: I deleted all the IPv6 crap; I think it messes some stuff up. When you’re done making the changes with your favorite editor, restart networking: sudo /etc/init.d/network restart or sudo systemctl restart network
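
After restarting networking it is worth confirming that Open vSwitch actually owns the external interface and that br-ex carries the public IP. A small optional check, assuming the interface names used above:

# br-ex should list enp4s0f0 as a port and hold 192.168.1.10
sudo ovs-vsctl show
ip addr show br-ex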

Next go into in the Horizon Dashboard GUI and delete the demo project. See my previous article for details on how.

Back On the All-In-One Node Console

[root@diamond ~]# source keystonerc_admin 
[root@diamond ~(keystone_admin)]# neutron router-create router1
[root@diamond ~(keystone_admin)]# neutron net-create private
[root@diamond ~(keystone_admin)]# neutron subnet-create private 10.0.0.0/24 --name private_subnet
[root@diamond ~(keystone_admin)]# neutron router-interface-add router1 private_subnet
[root@diamond ~(keystone_admin)]# neutron net-create public --router:external
[root@diamond ~(keystone_admin)]# neutron subnet-create public 192.168.1.0/24 --name public_subnet --enable_dhcp=False --allocation-pool start=192.168.1.51,end=192.168.1.99 --gateway=192.168.1.1
[root@diamond ~(keystone_admin)]# neutron router-gateway-set router1 public
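
Before restarting everything, you can sanity-check what was just created with the standard neutron list commands (output omitted):

# confirm the router, networks, and subnets exist
neutron router-list
neutron net-list
neutron subnet-list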

Next ‘reboot’ or restart all openstack services :

for service in `openstack-service list`; do openstack-service restart $service; done

Note: it appears the --full-restart flag is gone; it used to work !

When logging into your dashboard located at http://192.168.1.10/dashboard, at some point you might hit a bug that prevents you from logging into the Horizon dashboard, see: https://bugzilla.redhat.com/show_bug.cgi?id=1218894 … the workaround for this is to clear your browser cookies.

You’re Done

That’s it. Next steps would be to create a project & a new admin user, re-create the required network mappings in Openstack using the above commands (modify the names to make them unique), create your ssh key and import it, download some images and import them using glance, and create some VMs. Also, I like to delete the demo project (you can also prevent it from being created with a flag on the packstack command). Make sure you delete all default security rules and add back ICMP, TCP, and UDP allow ingress/egress rules for 0.0.0.0 aka any/any; again, you can see my article on CentOS 6.5 for more specifics on how to do this. Additionally, I have an article on how to add additional compute nodes as well.

As always I can be reached for assistance @ tuxninja [at] tuxlabs.com

Happy Stacking !


How To: Add A Compute Node To Openstack Icehouse Using Packstack


Pre-requisites

This article is a continuation of the previous article I wrote on how to do a single-node all-in-one (AIO) Openstack Icehouse install using Redhat’s packstack. A working Openstack AIO installation using packstack is required for this article. If you do not already have a functioning AIO install of Openstack, please refer to the previous article before continuing on to this article’s steps.

Preparing Our Compute Node

Much like in our previous article, we first need to set up our system and network properly to work with Openstack. I started with a minimal CentOS 6.5 install and then configured the following:

  1. resolv.conf
  2. sudoers
  3. my network interfaces: eth0 (192.168.1.x) and eth1 (10.0.0.x)
    1. Hostname: ruby.tuxlabs.com ( I also setup DNS for this )
    2. EXT IP: 192.168.1.11
    3. INT IP: 10.0.0.2
  4. A local user + added him to wheel for sudo
  5. I installed these handy dependencies
    1. yum install -y openssh-clients
    2. yum install -y yum-utils
    3. yum install -y wget
    4. yum install -y bind-utils
  6. And I disabled SELinux
    1. Don’t forget to reboot after

To see how I set up the above pre-requisites, see the “Setting Up Our Initial System” section of the previous controller install here: http://tuxlabs.com/?p=82

Adding Our Compute Node Using PackStack

For starters we need to follow the steps in this link  https://openstack.redhat.com/Adding_a_compute_node

I am including the link for reference, but you don’t have to click it as I will be listing the steps below.

On your controller node ( diamond.tuxlabs.com )

First, locate your answers file from your previous packstack all-in-one install.

[root@diamond tuxninja]# ls *answers*
packstack-answers-20140802-125113.txt
[root@diamond tuxninja]#

 Edit the answers file

Change lo to eth1 (assuming that is your private 10. interface) for both CONFIG_NOVA_COMPUTE_PRIVIF & CONFIG_NOVA_NETWORK_PRIVIF

[root@diamond tuxninja]# egrep 'CONFIG_NOVA_COMPUTE_PRIVIF|CONFIG_NOVA_NETWORK_PRIVIF' packstack-answers-20140802-125113.txt
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_PRIVIF=eth1
[root@diamond tuxninja]#

Change CONFIG_COMPUTE_HOSTS to the ip address of the compute node you want to add. In our case ‘192.168.1.11’. Additionally, validate the ip address for CONFIG_NETWORK_HOSTS is your controller’s ip since you do not run a separate network node.

[root@diamond tuxninja]# egrep 'CONFIG_COMPUTE_HOSTS|CONFIG_NETWORK_HOSTS' packstack-answers-20140802-125113.txt
CONFIG_COMPUTE_HOSTS=192.168.1.11
CONFIG_NETWORK_HOSTS=192.168.1.10
[root@diamond tuxninja]#

That’s it. Now run packstack again on the controller

[tuxninja@diamond yum.repos.d]$ sudo packstack --answer-file=packstack-answers-20140802-125113.txt

When that completes, ssh into or switch terminals over to your compute node you just added.

On the compute node ( ruby.tuxlabs.com )

Validate that the relevant openstack compute services are running

[root@ruby ~]# openstack-status
== Nova services ==
openstack-nova-api:                     dead      (disabled on boot)
openstack-nova-compute:                 active
openstack-nova-network:                 dead      (disabled on boot)
openstack-nova-scheduler:               dead      (disabled on boot)
== neutron services ==
neutron-server:                         inactive  (disabled on boot)
neutron-dhcp-agent:                     inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:                 inactive  (disabled on boot)
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
== Ceilometer services ==
openstack-ceilometer-api:               dead      (disabled on boot)
openstack-ceilometer-central:           dead      (disabled on boot)
openstack-ceilometer-compute:           active
openstack-ceilometer-collector:         dead      (disabled on boot)
== Support services ==
libvirtd:                               active
openvswitch:                            active
messagebus:                             active
Warning novarc not sourced
[root@ruby ~]#

 Back on the controller ( diamond.tuxlabs.com )

We should now be able to validate that ruby.tuxlabs.com has been added as a compute node hypervisor.

[tuxninja@diamond ~]$ sudo -s
[root@diamond tuxninja]# source keystonerc_admin
[root@diamond tuxninja(keystone_admin)]# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | diamond.tuxlabs.com |
| 2  | ruby.tuxlabs.com    |
+----+---------------------+
[root@diamond tuxninja(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth diamond.tuxlabs.com                  internal         enabled    :-)   2014-10-12 20:48:34
nova-conductor   diamond.tuxlabs.com                  internal         enabled    :-)   2014-10-12 20:48:35
nova-scheduler   diamond.tuxlabs.com                  internal         enabled    :-)   2014-10-12 20:48:27
nova-compute     diamond.tuxlabs.com                  nova             enabled    :-)   2014-10-12 20:48:32
nova-cert        diamond.tuxlabs.com                  internal         enabled    :-)   2014-10-12 20:48:31
nova-compute     ruby.tuxlabs.com                     nova             enabled    :-)   2014-10-12 20:48:35
[root@diamond tuxninja(keystone_admin)]#

Additionally, you can verify it in the Openstack Dashboard

hypervisors

Next we are going to try to boot an instance using the new ruby.tuxlabs.com hypervisor. To do this we will need a few pieces of information. First let’s get our OS images list.

[root@diamond tuxninja(keystone_admin)]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
| 0b3f2474-73cc-4df2-ad0e-fdb7a7f7c8a1 | cirros              | qcow2       | bare             | 13147648  | active |
| 737a0060-6e80-415c-b66b-a20893d9888b | Fedora 6.4          | qcow2       | bare             | 210829312 | active |
| 952ac512-19da-47a7-81a4-cfede18c7f45 | ubuntu-server-12.04 | qcow2       | bare             | 260964864 | active |
+--------------------------------------+---------------------+-------------+------------------+-----------+--------+
[root@diamond tuxninja(keystone_admin)]#

Great, now we need the ID of our private network

[root@diamond tuxninja(keystone_admin)]# neutron net-show private
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | d1a89c10-0ae2-43f0-8cf2-f02c20e19618 |
| name                      | private                              |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 10                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | b8760f9b-3c0a-47c7-a5af-9cb533242f5b |
| tenant_id                 | 7bdf35c08112447b8d2d78cdbbbcfa09     |
+---------------------------+--------------------------------------+
[root@diamond tuxninja(keystone_admin)]#

Ok now we are ready to proceed with the nova boot command.

[root@diamond tuxninja(keystone_admin)]#  nova boot --flavor m1.small --image 'ubuntu-server-12.04' --key-name cloud --nic net-id=d1a89c10-0ae2-43f0-8cf2-f02c20e19618 --hint force_hosts=ruby.tuxlabs.com test
+--------------------------------------+------------------------------------------------------------+
| Property                             | Value                                                      |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                       |
| OS-EXT-SRV-ATTR:host                 | -                                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                          |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000019                                          |
| OS-EXT-STS:power_state               | 0                                                          |
| OS-EXT-STS:task_state                | scheduling                                                 |
| OS-EXT-STS:vm_state                  | building                                                   |
| OS-SRV-USG:launched_at               | -                                                          |
| OS-SRV-USG:terminated_at             | -                                                          |
| accessIPv4                           |                                                            |
| accessIPv6                           |                                                            |
| adminPass                            | XHUumC5YbE3J                                               |
| config_drive                         |                                                            |
| created                              | 2014-10-12T20:59:47Z                                       |
| flavor                               | m1.small (2)                                               |
| hostId                               |                                                            |
| id                                   | f7b9e8bb-df45-4b94-a896-5600f47c269b                       |
| image                                | ubuntu-server-12.04 (952ac512-19da-47a7-81a4-cfede18c7f45) |
| key_name                             | cloud                                                      |
| metadata                             | {}                                                         |
| name                                 | test                                                       |
| os-extended-volumes:volumes_attached | []                                                         |
| progress                             | 0                                                          |
| security_groups                      | default                                                    |
| status                               | BUILD                                                      |
| tenant_id                            | 7bdf35c08112447b8d2d78cdbbbcfa09                           |
| updated                              | 2014-10-12T20:59:47Z                                       |
| user_id                              | 6bb8fcf3ce9446838e50a6b98fbb5afe                           |
+--------------------------------------+------------------------------------------------------------+
[root@diamond tuxninja(keystone_admin)]#

Fantastic. That command should look familiar from our previous tutorial; it is the standard command for launching new VM instances using the command line, with one exception: ‘--hint force_hosts=ruby.tuxlabs.com’. This part of the command line forces the scheduler to use ruby.tuxlabs.com as its hypervisor.
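
As a side note, nova also supports pinning an instance to a host via the availability-zone syntax (admin only); a hedged alternative to the --hint flag above, reusing the same image and network IDs and a hypothetical instance name test2:

# the zone:host syntax forces the instance onto ruby.tuxlabs.com
nova boot --flavor m1.small --image 'ubuntu-server-12.04' --key-name cloud --nic net-id=d1a89c10-0ae2-43f0-8cf2-f02c20e19618 --availability-zone nova:ruby.tuxlabs.com test2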

Once the VM is building we can validate that it is on the right hypervisor like so.

[root@diamond tuxninja(keystone_admin)]# nova hypervisor-servers ruby.tuxlabs.com
+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| f7b9e8bb-df45-4b94-a896-5600f47c269b | instance-00000019 | 2             | ruby.tuxlabs.com    |
+--------------------------------------+-------------------+---------------+---------------------+
[root@diamond tuxninja(keystone_admin)]# nova hypervisor-servers diamond.tuxlabs.com
+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| a4c67465-d7ef-42b6-9c2a-439f3b13e841 | instance-00000017 | 1             | diamond.tuxlabs.com |
| 0c34028d-dfb6-4fdf-b9f7-daade66f2107 | instance-00000018 | 1             | diamond.tuxlabs.com |
+--------------------------------------+-------------------+---------------+---------------------+
[root@diamond tuxninja(keystone_admin)]#

You can see from the output above I have 2 VM’s on my existing controller ‘diamond.tuxlabs.com’ and the newly created instance is on ‘ruby.tuxlabs.com’ as instructed, awesome.

Now that you are sure you set up your compute node correctly and can boot a VM on a specific hypervisor via the command line, you might be wondering how this works using the GUI. The answer is: a little differently 🙂

The Openstack Nova Scheduler

The Nova Scheduler in Openstack is responsible for determining which compute node a VM should be created on. If you are familiar with VMware, this is like DRS, except it only happens at initial creation; there is no rebalancing as resources are consumed over time. Using the Openstack Dashboard GUI I am unable to tell nova to boot off a specific hypervisor; to do that I have to use the command line above (if someone knows of a way to do this using the GUI let me know; I have a feeling that if it is not there already, they will add the ability to send a hint to nova from the GUI in a later version). In theory you can trust the nova-scheduler service to automatically balance the usage of compute resources (CPU, memory, disk, etc.) based on its default configuration. However, if you want to ensure that certain VMs live on certain hypervisors, you will want to use the command line above. For more information on how the scheduler works see : http://cloudarchitectmusings.com/2013/06/26/openstack-for-vmware-admins-nova-compute-with-vsphere-part-2/
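
For reference, which filters the scheduler applies is controlled in nova.conf on the controller; a quick way to peek at it (scheduler_default_filters is the Icehouse-era option name, and the exact default list varies by release):

# show the active scheduler filters on the controller
grep scheduler_default_filters /etc/nova/nova.conf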

The End

That is all for now, hopefully this tutorial was helpful and accurately assisted you in expanding your Openstack compute resources & knowledge of Openstack. Until next time !

 
