{"id":151,"date":"2014-10-13T04:56:07","date_gmt":"2014-10-13T04:56:07","guid":{"rendered":"http:\/\/tuxlabs.com\/?p=151"},"modified":"2016-03-01T23:02:47","modified_gmt":"2016-03-01T23:02:47","slug":"how-to-add-a-compute-node-to-openstack-icehouse-using-packstack","status":"publish","type":"post","link":"https:\/\/tuxlabs.com\/?p=151","title":{"rendered":"How To: Add A Compute Node To Openstack Icehouse Using Packstack"},"content":{"rendered":"<h2><a href=\"http:\/\/tuxlabs.com\/wp-content\/uploads\/2014\/10\/openstack-compute-icon.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-154\" src=\"http:\/\/tuxlabs.com\/wp-content\/uploads\/2014\/10\/openstack-compute-icon.png\" alt=\"openstack-compute-icon\" width=\"111\" height=\"91\" \/><\/a><a href=\"http:\/\/tuxlabs.com\/wp-content\/uploads\/2014\/10\/openstack-compute-icon.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-154\" src=\"http:\/\/tuxlabs.com\/wp-content\/uploads\/2014\/10\/openstack-compute-icon.png\" alt=\"openstack-compute-icon\" width=\"111\" height=\"91\" \/><\/a><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-154\" src=\"http:\/\/tuxlabs.com\/wp-content\/uploads\/2014\/10\/openstack-compute-icon.png\" alt=\"openstack-compute-icon\" width=\"111\" height=\"91\" \/><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-154\" src=\"http:\/\/tuxlabs.com\/wp-content\/uploads\/2014\/10\/openstack-compute-icon.png\" alt=\"openstack-compute-icon\" width=\"111\" height=\"91\" \/><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-154\" src=\"http:\/\/tuxlabs.com\/wp-content\/uploads\/2014\/10\/openstack-compute-icon.png\" alt=\"openstack-compute-icon\" width=\"111\" height=\"91\" \/><\/h2>\n<h2>Pre-requisites<\/h2>\n<p>This article is a continuation on the previous article I wrote on how to do a single node\u00a0all-in-one\u00a0(AIO)\u00a0Openstack Icehouse 
install using Red Hat&#8217;s packstack. A working Openstack AIO installation using packstack is required for this article. If you do not already have a functioning AIO install of Openstack, please refer to <a href=\"http:\/\/tuxlabs.com\/?p=82\">the previous article<\/a> before continuing on to this article&#8217;s steps.<\/p>\n<h3>Preparing Our Compute Node<\/h3>\n<p>Much like in our previous article, we first need to set up our system and network properly to work with Openstack. I started with a minimal CentOS 6.5 install, and then configured the following:<\/p>\n<ol>\n<li>resolv.conf<\/li>\n<li>sudoers<\/li>\n<li>my network interfaces eth0 (192) and eth1 (10)\n<ol>\n<li>Hostname: ruby.tuxlabs.com (I also set up DNS for this)<\/li>\n<li>EXT IP: 192.168.1.11<\/li>\n<li>INT IP: 10.0.0.2<\/li>\n<\/ol>\n<\/li>\n<li>A local user + added him to wheel for sudo<\/li>\n<li>I installed these handy dependencies\n<ol>\n<li>yum install -y openssh-clients<\/li>\n<li>yum install -y yum-utils<\/li>\n<li>yum install -y wget<\/li>\n<li>yum install -y bind-utils<\/li>\n<\/ol>\n<\/li>\n<li>And I disabled SELinux\n<ol>\n<li>Don&#8217;t forget to reboot afterwards<\/li>\n<\/ol>\n<\/li>\n<\/ol>\n<p>To see how I set up the above pre-requisites, see the &#8220;Setting Up Our Initial System&#8221; section in the previous controller install here: <a href=\"http:\/\/tuxlabs.com\/?p=82\">http:\/\/tuxlabs.com\/?p=82<\/a><\/p>\n<h2>Adding Our Compute Node Using PackStack<\/h2>\n<p>For starters we need to follow the steps in this link: <a href=\"https:\/\/openstack.redhat.com\/Adding_a_compute_node\"><span style=\"color: #0066cc;\">https:\/\/openstack.redhat.com\/Adding_a_compute_node<\/span><\/a><\/p>\n<p>I am including the link for reference, but you don&#8217;t have to click it, as I will be listing the steps below.<\/p>\n<p><strong>On your controller node ( diamond.tuxlabs.com )<\/strong><\/p>\n<p>First, locate your answers file from your previous
packstack all-in-one install.<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true\">[root@diamond tuxninja]# ls *answers*\r\npackstack-answers-20140802-125113.txt\r\n[root@diamond tuxninja]#\r\n<\/pre>\n<p><strong>Edit the answers file<\/strong><\/p>\n<p>Change lo to eth1 (assuming that is your private 10.x interface) for both CONFIG_NOVA_COMPUTE_PRIVIF &amp; CONFIG_NOVA_NETWORK_PRIVIF<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true \">[root@diamond tuxninja]# egrep 'CONFIG_NOVA_COMPUTE_PRIVIF|CONFIG_NOVA_NETWORK_PRIVIF' packstack-answers-20140802-125113.txt\r\nCONFIG_NOVA_COMPUTE_PRIVIF=eth1\r\nCONFIG_NOVA_NETWORK_PRIVIF=eth1\r\n[root@diamond tuxninja]#\r\n<\/pre>\n<p>Change CONFIG_COMPUTE_HOSTS to the IP address of the compute node you want to add, in our case &#8216;192.168.1.11&#8217;. Additionally, validate that the IP address for CONFIG_NETWORK_HOSTS is your controller&#8217;s IP, since we are not running a separate network node.<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true \">[root@diamond tuxninja]# egrep 'CONFIG_COMPUTE_HOSTS|CONFIG_NETWORK_HOSTS' packstack-answers-20140802-125113.txt\r\nCONFIG_COMPUTE_HOSTS=192.168.1.11\r\nCONFIG_NETWORK_HOSTS=192.168.1.10\r\n[root@diamond tuxninja]#\r\n<\/pre>\n<p>That&#8217;s it.
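<\/p>
<p>The answers-file edits above can also be made non-interactively with sed. A minimal sketch follows; the filename is the one from this install (yours will differ), and the printf line only seeds a throwaway demo file so the sketch is self-contained. On a real controller, skip the printf and run the sed commands against your existing answers file.<\/p>

```shell
# Sketch only: seed a demo answers file with packstack-style defaults.
# On a real controller, SKIP this printf and use your existing file.
ANSWERS=packstack-answers-20140802-125113.txt
printf 'CONFIG_NOVA_COMPUTE_PRIVIF=lo\nCONFIG_NOVA_NETWORK_PRIVIF=lo\nCONFIG_COMPUTE_HOSTS=192.168.1.10\n' > $ANSWERS

# Point both private-interface settings at eth1 and register the new node:
sed -i 's/^CONFIG_NOVA_COMPUTE_PRIVIF=.*/CONFIG_NOVA_COMPUTE_PRIVIF=eth1/' $ANSWERS
sed -i 's/^CONFIG_NOVA_NETWORK_PRIVIF=.*/CONFIG_NOVA_NETWORK_PRIVIF=eth1/' $ANSWERS
sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.1.11/' $ANSWERS

# Confirm the result:
grep -E 'PRIVIF|COMPUTE_HOSTS' $ANSWERS
```

<p>Either way, re-run the egrep checks above to confirm the values before proceeding.<\/p>
<p>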
Now run packstack again on the controller<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true\">[tuxninja@diamond yum.repos.d]$ sudo packstack --answer-file=packstack-answers-20140802-125113.txt\r\n<\/pre>\n<p>When that completes, ssh into or switch terminals over to the compute node you just added.<\/p>\n<p><strong>On the compute node ( ruby.tuxlabs.com )<\/strong><\/p>\n<p>Validate that the relevant Openstack compute services are running:<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true\">[root@ruby ~]# openstack-status\r\n== Nova services ==\r\nopenstack-nova-api:                     dead      (disabled on boot)\r\nopenstack-nova-compute:                 active\r\nopenstack-nova-network:                 dead      (disabled on boot)\r\nopenstack-nova-scheduler:               dead      (disabled on boot)\r\n== neutron services ==\r\nneutron-server:                         inactive  (disabled on boot)\r\nneutron-dhcp-agent:                     inactive  (disabled on boot)\r\nneutron-l3-agent:                       inactive  (disabled on boot)\r\nneutron-metadata-agent:                 inactive  (disabled on boot)\r\nneutron-lbaas-agent:                    inactive  (disabled on boot)\r\nneutron-openvswitch-agent:              active\r\n== Ceilometer services ==\r\nopenstack-ceilometer-api:               dead      (disabled on boot)\r\nopenstack-ceilometer-central:           dead      (disabled on boot)\r\nopenstack-ceilometer-compute:           active\r\nopenstack-ceilometer-collector:         dead      (disabled on boot)\r\n== Support services ==\r\nlibvirtd:                               active\r\nopenvswitch:                            active\r\nmessagebus:                             active\r\nWarning novarc not sourced\r\n[root@ruby ~]#\r\n<\/pre>\n<p><strong>Back on the controller ( diamond.tuxlabs.com )<\/strong><\/p>\n<p>We should now be able to validate that ruby.tuxlabs.com has been added as a
compute node hypervisor.<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true\">[tuxninja@diamond ~]$ sudo -s\r\n[root@diamond tuxninja]# source keystonerc_admin\r\n[root@diamond tuxninja(keystone_admin)]# nova hypervisor-list\r\n+----+---------------------+\r\n| ID | Hypervisor hostname |\r\n+----+---------------------+\r\n| 1  | diamond.tuxlabs.com |\r\n| 2  | ruby.tuxlabs.com    |\r\n+----+---------------------+\r\n[root@diamond tuxninja(keystone_admin)]# nova-manage service list\r\nBinary           Host                                 Zone             Status     State Updated_At\r\nnova-consoleauth diamond.tuxlabs.com                  internal         enabled    :-)   2014-10-12 20:48:34\r\nnova-conductor   diamond.tuxlabs.com                  internal         enabled    :-)   2014-10-12 20:48:35\r\nnova-scheduler   diamond.tuxlabs.com                  internal         enabled    :-)   2014-10-12 20:48:27\r\nnova-compute     diamond.tuxlabs.com                  nova             enabled    :-)   2014-10-12 20:48:32\r\nnova-cert        diamond.tuxlabs.com                  internal         enabled    :-)   2014-10-12 20:48:31\r\nnova-compute     ruby.tuxlabs.com                     nova             enabled    :-)   2014-10-12 20:48:35\r\n[root@diamond tuxninja(keystone_admin)]#<\/pre>\n<p>Additionally, you can verify it in the Openstack Dashboard<\/p>\n<p><a href=\"http:\/\/tuxlabs.com\/wp-content\/uploads\/2014\/10\/hypervisors.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-158\" src=\"http:\/\/tuxlabs.com\/wp-content\/uploads\/2014\/10\/hypervisors.png\" alt=\"hypervisors\" width=\"1894\" height=\"728\" \/><\/a><\/p>\n<p>Next we are going to try to boot an instance using the new ruby.tuxlabs.com hypervisor. To do this we will need a few pieces of information. 
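<\/p>
<p>Those lookups can also be scripted instead of copied by hand. A hedged sketch: the heredoc below stands in for live <code>neutron net-show private<\/code> output (the values are the ones from this install), and a small awk pulls the id column out of the table:<\/p>

```shell
# Parse the network ID out of a neutron-style table. The heredoc is canned
# sample output; on a live controller you would pipe the real command in.
NET_ID=$(awk '/ id / {print $4}' <<'EOF'
| admin_state_up            | True                                 |
| id                        | d1a89c10-0ae2-43f0-8cf2-f02c20e19618 |
| name                      | private                              |
EOF
)
echo $NET_ID
```

<p>On the controller itself (with keystonerc_admin sourced), the same idea becomes <code>NET_ID=$(neutron net-show private | awk '\/ id \/ {print $4}')<\/code>, which you can then pass to nova boot via <code>--nic net-id=$NET_ID<\/code>.<\/p>
<p>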
First let&#8217;s get our OS\u00a0images list.<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true \">[root@diamond tuxninja(keystone_admin)]# glance image-list\r\n+--------------------------------------+---------------------+-------------+------------------+-----------+--------+\r\n| ID                                   | Name                | Disk Format | Container Format | Size      | Status |\r\n+--------------------------------------+---------------------+-------------+------------------+-----------+--------+\r\n| 0b3f2474-73cc-4df2-ad0e-fdb7a7f7c8a1 | cirros              | qcow2       | bare             | 13147648  | active |\r\n| 737a0060-6e80-415c-b66b-a20893d9888b | Fedora 6.4          | qcow2       | bare             | 210829312 | active |\r\n| 952ac512-19da-47a7-81a4-cfede18c7f45 | ubuntu-server-12.04 | qcow2       | bare             | 260964864 | active |\r\n+--------------------------------------+---------------------+-------------+------------------+-----------+--------+\r\n[root@diamond tuxninja(keystone_admin)]#<\/pre>\n<p>Great, now we need the ID of our private network<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true\">[root@diamond tuxninja(keystone_admin)]# neutron net-show private\r\n+---------------------------+--------------------------------------+\r\n| Field                     | Value                                |\r\n+---------------------------+--------------------------------------+\r\n| admin_state_up            | True                                 |\r\n| id                        | d1a89c10-0ae2-43f0-8cf2-f02c20e19618 |\r\n| name                      | private                              |\r\n| provider:network_type     | vxlan                                |\r\n| provider:physical_network |                                      |\r\n| provider:segmentation_id  | 10                                   |\r\n| router:external           | False                                |\r\n| 
shared                    | False                                |\r\n| status                    | ACTIVE                               |\r\n| subnets                   | b8760f9b-3c0a-47c7-a5af-9cb533242f5b |\r\n| tenant_id                 | 7bdf35c08112447b8d2d78cdbbbcfa09     |\r\n+---------------------------+--------------------------------------+\r\n[root@diamond tuxninja(keystone_admin)]#<\/pre>\n<p>Ok now we are ready to proceed with the nova boot command.<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true\">[root@diamond tuxninja(keystone_admin)]#  nova boot --flavor m1.small --image 'ubuntu-server-12.04' --key-name cloud --nic net-id=d1a89c10-0ae2-43f0-8cf2-f02c20e19618 --hint force_hosts=ruby.tuxlabs.com test\r\n+--------------------------------------+------------------------------------------------------------+\r\n| Property                             | Value                                                      |\r\n+--------------------------------------+------------------------------------------------------------+\r\n| OS-DCF:diskConfig                    | MANUAL                                                     |\r\n| OS-EXT-AZ:availability_zone          | nova                                                       |\r\n| OS-EXT-SRV-ATTR:host                 | -                                                          |\r\n| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                          |\r\n| OS-EXT-SRV-ATTR:instance_name        | instance-00000019                                          |\r\n| OS-EXT-STS:power_state               | 0                                                          |\r\n| OS-EXT-STS:task_state                | scheduling                                                 |\r\n| OS-EXT-STS:vm_state                  | building                                                   |\r\n| OS-SRV-USG:launched_at               | -                                            
              |\r\n| OS-SRV-USG:terminated_at             | -                                                          |\r\n| accessIPv4                           |                                                            |\r\n| accessIPv6                           |                                                            |\r\n| adminPass                            | XHUumC5YbE3J                                               |\r\n| config_drive                         |                                                            |\r\n| created                              | 2014-10-12T20:59:47Z                                       |\r\n| flavor                               | m1.small (2)                                               |\r\n| hostId                               |                                                            |\r\n| id                                   | f7b9e8bb-df45-4b94-a896-5600f47c269b                       |\r\n| image                                | ubuntu-server-12.04 (952ac512-19da-47a7-81a4-cfede18c7f45) |\r\n| key_name                             | cloud                                                      |\r\n| metadata                             | {}                                                         |\r\n| name                                 | test                                                       |\r\n| os-extended-volumes:volumes_attached | []                                                         |\r\n| progress                             | 0                                                          |\r\n| security_groups                      | default                                                    |\r\n| status                               | BUILD                                                      |\r\n| tenant_id                            | 7bdf35c08112447b8d2d78cdbbbcfa09                           |\r\n| updated                              | 2014-10-12T20:59:47Z                              
          |\r\n| user_id                              | 6bb8fcf3ce9446838e50a6b98fbb5afe                           |\r\n+--------------------------------------+------------------------------------------------------------+\r\n[root@diamond tuxninja(keystone_admin)]#<\/pre>\n<p>Fantastic. That command should look familiar from our previous tutorial; it is the standard command for launching new VM instances from the command line, with one exception: &#8216;--hint force_hosts=ruby.tuxlabs.com&#8217;. This part of the command forces the scheduler to use ruby.tuxlabs.com as its hypervisor.<\/p>\n<p>Once the VM is building, we can validate that it is on the right hypervisor like so.<\/p>\n<pre class=\"toolbar-overlay:false nums:false lang:default decode:true \">[root@diamond tuxninja(keystone_admin)]# nova hypervisor-servers ruby.tuxlabs.com\r\n+--------------------------------------+-------------------+---------------+---------------------+\r\n| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |\r\n+--------------------------------------+-------------------+---------------+---------------------+\r\n| f7b9e8bb-df45-4b94-a896-5600f47c269b | instance-00000019 | 2             | ruby.tuxlabs.com    |\r\n+--------------------------------------+-------------------+---------------+---------------------+\r\n[root@diamond tuxninja(keystone_admin)]# nova hypervisor-servers diamond.tuxlabs.com\r\n+--------------------------------------+-------------------+---------------+---------------------+\r\n| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |\r\n+--------------------------------------+-------------------+---------------+---------------------+\r\n| a4c67465-d7ef-42b6-9c2a-439f3b13e841 | instance-00000017 | 1             | diamond.tuxlabs.com |\r\n| 0c34028d-dfb6-4fdf-b9f7-daade66f2107 | instance-00000018 | 1             | diamond.tuxlabs.com
|\r\n+--------------------------------------+-------------------+---------------+---------------------+\r\n[root@diamond tuxninja(keystone_admin)]#<\/pre>\n<p>You can see from the output above that I have 2 VMs on my existing controller &#8216;diamond.tuxlabs.com&#8217;, and the newly created instance is on &#8216;ruby.tuxlabs.com&#8217; as instructed, awesome.<\/p>\n<p>Now that you are sure you have set up your compute node correctly, and can boot a VM on a specific hypervisor via the command line, you might be wondering how this works using the GUI. The answer is: a little differently \ud83d\ude42<\/p>\n<h2>The Openstack Nova Scheduler<\/h2>\n<p>The Nova Scheduler in Openstack is responsible for determining which compute node a VM should be created on. If you are familiar with VMware, this is like DRS, except it only happens at initial creation; there is no rebalancing as resources are consumed over time. Using the Openstack Dashboard GUI I am unable to tell nova to boot off a specific hypervisor; to do that I have to use the command line above (if someone knows of a way to do this using the GUI, let me know; I have a feeling that if it has not been added already, the ability to send a hint to nova from the GUI will come in a later version). In theory you can trust the nova-scheduler service to automatically balance the usage of compute resources (CPU, memory, disk, etc.) based on its default configuration. However, if you want to ensure that certain VMs live on certain hypervisors, you will want to use the command line above.
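<\/p>
<p>The scheduler&#8217;s default configuration lives in nova.conf on the controller. As an illustrative, hedged example (the names below reflect common Icehouse-era defaults; verify against your own \/etc\/nova\/nova.conf rather than pasting blindly), the filter scheduler and its filter list look roughly like this:<\/p>

```ini
# /etc/nova/nova.conf (controller) -- illustrative values, not prescriptive
[DEFAULT]
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
```

<p>Each filter prunes the candidate host list (enough RAM, compute service up, image properties satisfied, and so on) before the remaining hosts are weighed; the force_hosts hint used above effectively pins the request to one host instead.<\/p>
<p>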
For more information on how the scheduler works, see: <a href=\"http:\/\/cloudarchitectmusings.com\/2013\/06\/26\/openstack-for-vmware-admins-nova-compute-with-vsphere-part-2\/\">http:\/\/cloudarchitectmusings.com\/2013\/06\/26\/openstack-for-vmware-admins-nova-compute-with-vsphere-part-2\/<\/a><\/p>\n<h2>The End<\/h2>\n<p>That is all for now. Hopefully this tutorial was helpful and accurately assisted you in expanding your Openstack compute resources &amp; knowledge of Openstack. Until next time!<\/p>\n","protected":false},"excerpt":{"rendered":"<a href=\"https:\/\/tuxlabs.com\/?p=151\" rel=\"bookmark\" title=\"Permalink to How To: Add A Compute Node To Openstack Icehouse Using Packstack\"><p>Pre-requisites This article is a continuation of the previous article I wrote on how to do a single node\u00a0all-in-one\u00a0(AIO)\u00a0Openstack Icehouse install using Red Hat&#8217;s packstack. A working Openstack\u00a0AIO installation using packstack is required for this article.\u00a0If you do not already have a functioning\u00a0AIO install of Openstack please\u00a0refer to the previous article before\u00a0continuing on to this article&#8217;s
[&hellip;]<\/p>\n<\/a>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[130,1,28,12],"tags":[15,31,32,17,30,13,16,29,22],"class_list":{"0":"post-151","1":"post","2":"type-post","3":"status-publish","4":"format-standard","6":"category-cloud","7":"category-howtos","8":"category-openstack-howtos","9":"category-systems-administration","10":"tag-centos","11":"tag-compute","12":"tag-compute-node","13":"tag-nova","14":"tag-nova-scheduler","15":"tag-openstack","16":"tag-packstack","17":"tag-rdo","18":"tag-vmware","19":"h-entry","20":"hentry"},"_links":{"self":[{"href":"https:\/\/tuxlabs.com\/index.php?rest_route=\/wp\/v2\/posts\/151","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tuxlabs.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tuxlabs.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tuxlabs.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tuxlabs.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=151"}],"version-history":[{"count":8,"href":"https:\/\/tuxlabs.com\/index.php?rest_route=\/wp\/v2\/posts\/151\/revisions"}],"predecessor-version":[{"id":161,"href":"https:\/\/tuxlabs.com\/index.php?rest_route=\/wp\/v2\/posts\/151\/revisions\/161"}],"wp:attachment":[{"href":"https:\/\/tuxlabs.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=151"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tuxlabs.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=151"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tuxlabs.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=151"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}