Getting Started with TripleO

OpenStack is big. I’ve been focused on my little corner of it, Keystone, for a long time. Now it is time for me to help out with some of the more downstream aspects of configuring RDO deployments. In order to do so, I need to do an RDO deployment. Until recently, this has meant Packstack. However, Packstack is really not meant for production deployments; RDO Manager is the right tool for that. So I am gearing up on RDO Manager. The upstream of RDO Manager is TripleO.

I have a Dell T1700 with 32 GB of RAM and a single NIC. I am going to run everything I need in virtual machines on this one machine. While this does not match a production install, it seems to be the minimal hardware commitment needed to get work done.

I’ve installed CentOS 7.1 on it. This is the latest released version of CentOS, and the version that RDO is targeting for deployment.

It has booted and gotten an IP address from DHCP. I’ve copied this IP address to /etc/hosts on my laptop and given it the name ayoung_dell_t1700.test.
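As a sketch, the laptop-side entry looks like this; the IP address below is hypothetical, so substitute whatever DHCP actually handed out:

```shell
# Append the hosts entry on the laptop (192.168.1.50 is a placeholder IP)
echo "192.168.1.50  ayoung_dell_t1700.test" | sudo tee -a /etc/hosts
```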

From a login console, I edited /etc/ssh/sshd_config to allow root login:

PermitRootLogin yes

And then, to be able to connect to the machine automatically:

ssh-copy-id root@ayoung_dell_t1700.test 
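The server-side half of this can be scripted from the console instead of hand-editing; a sketch, assuming the stock CentOS 7 sshd_config:

```shell
# Set PermitRootLogin (uncommenting it if needed), then restart sshd
# so the change takes effect before running ssh-copy-id from the laptop.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd.service
```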

There are many ways to get things installed. I am opting for minimal effort here, which means instack; even that is too much work, so I am using inlunch to run instack.

One small hiccup I hit was that instack needs to expose port 2200 to the outside world in order to allow SSH into the VMs attached to the nested network. I originally just stopped firewalld, but that kills NAT, which means the VMs can’t fetch packages from the outside world. To fix it, I instead opened up port 2200 on the hypervisor machine:

 systemctl start firewalld.service
 firewall-cmd --permanent --zone=public --add-port=2200/tcp
 firewall-cmd --reload
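To confirm the rule took effect without digging through iptables output, firewall-cmd can list the ports open in the zone:

```shell
# The output should include 2200/tcp after the reload
firewall-cmd --zone=public --list-ports
```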

To run the install:

cp answers.yml.example answers.yml
INLUNCH_FQDN=ayoung_dell_t1700.test  ./instack-virt.sh

It took some time (I should have gotten lunch…), but it seems to have succeeded in getting the undercloud installed.

$ ssh stack@ayoung-dell-t1700.test -p2200
Last login: Thu Dec  3 15:49:51 2015 from 192.168.122.1
[stack@instack ~]$ . ./stackrc 
[stack@instack ~]$ openstack image list
+--------------------------------------+------------------------+
| ID                                   | Name                   |
+--------------------------------------+------------------------+
| 2a1dbaf1-d5b3-489c-943d-5fd8e1c84459 | bm-deploy-kernel       |
| 173d20f1-c160-4a4d-bbad-e5c04df5e0be | bm-deploy-ramdisk      |
| eff2db2f-0c75-4709-953b-27ca93797e8e | overcloud-full         |
| a7b83a9b-b859-449e-852c-13ac2a563330 | overcloud-full-vmlinuz |
| 8b3ccdae-c6b8-491e-a964-e5c4defe6b30 | overcloud-full-initrd  |
+--------------------------------------+------------------------+

To install the overcloud, I ran:

. ./stackrc 
openstack overcloud deploy --templates  --libvirt-type qemu

This is still running as I write (I really should get lunch). To check the status as it runs, from a second ssh session:

[stack@instack ~]$ . ./stackrc 
[stack@instack ~]$ heat resource-list overcloud -n5 | grep PROG
| Compute                                   | f5060a84-eacc-4f65-9aea-bf7457635bc4          | OS::Heat::ResourceGroup                           | CREATE_IN_PROGRESS | 2015-12-03T15:38:06 | overcloud                                                                       |
| 0                                         | 3c28c035-d047-4201-9d6b-b970244231b2          | OS::TripleO::Compute                              | CREATE_IN_PROGRESS | 2015-12-03T15:38:24 | overcloud-Compute-njgvkybncrzg                                                  |
| NetworkDeployment                         | 35297ccc-065f-4dc7-bb47-8b9f616ff0b4          | OS::TripleO::SoftwareDeployment                   | CREATE_IN_PROGRESS | 2015-12-03T15:38:25 | overcloud-Compute-njgvkybncrzg-0-z3nnepjk3zfz                                   |
| UpdateDeployment                          | 313941dd-f19b-4e15-97e6-31c7bf31f47a          | OS::Heat::SoftwareDeployment                      | CREATE_IN_PROGRESS | 2015-12-03T15:38:25 | overcloud-Compute-njgvkybncrzg-0-z3nnepjk3zfz                                   |
[stack@instack ~]$ ironic node-list
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
| 59761ad5-e72c-459e-b88f-e2f58ec494e9 | None | None                                 | power off   | available          | False       |
| cd73ef5a-5889-41bb-b936-6dede3fa5811 | None | None                                 | power off   | available          | False       |
| d6de3560-0e47-4add-a350-dc67f5ced804 | None | None                                 | power off   | available          | False       |
| 7f7816ff-c284-4ac7-917f-67d0c5521d56 | None | 19c4d6e1-7a3d-43f6-b8c1-b761082a5d46 | power on    | active             | False       |
| f1a7da5d-74f1-4113-8a49-aa1dd5a9ed56 | None | None                                 | power off   | available          | False       |
| 8fa69e00-559f-430c-ad5d-2438980b2f4a | None | 2ca4888d-b803-49a2-8e4e-d1f4d271f3ce | power on    | active             | False       |
+--------------------------------------+------+--------------------------------------+-------------+--------------------+-------------+
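Rather than rerunning the status query by hand, a watch loop works just as well (a sketch; the 30-second interval is arbitrary):

```shell
# Re-run the resource listing every 30 seconds and show only
# the resources still in progress; Ctrl-C when the list empties.
watch -n 30 "heat resource-list overcloud -n5 | grep PROG"
```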

Once it is done running, there is a separate Keystone resource file, overcloudrc. Sourcing that allows the user to query the overcloud:

$ . ./overcloudrc 
[stack@instack ~]$ openstack user list
+----------------------------------+------------+
| ID                               | Name       |
+----------------------------------+------------+
| 187635147c994b35a8ed438c04f25645 | swift      |
| 39f8484d167b4f809e576b4433e62f19 | cinder     |
| 6ead6ea9f2af4ac295e8d75e0e3f960d | heat       |
| 867d4a61fba94d1382473629f662eff9 | admin      |
| 89930c33a4314fa289252f47f76f0a0e | cinderv2   |
| a69eaa0ff159403587b0f4ae6115174a | ceilometer |
| c3d5b13e8d354a83b77d12cbe2be4950 | neutron    |
| d4e09a73a9b54a7d819deac1afcbabdf | glance     |
| d64c2792a8364750bf875ee7235471f9 | nova       |
+----------------------------------+------------+
[stack@instack ~]$ openstack compute service list
+------------------+-------------------------+----------+---------+-------+----------------------------+
| Binary           | Host                    | Zone     | Status  | State | Updated At                 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
| nova-cert        | overcloud-controller-0  | internal | enabled | up    | 2015-12-07T19:33:47.000000 |
| nova-consoleauth | overcloud-controller-0  | internal | enabled | up    | 2015-12-07T19:33:50.000000 |
| nova-scheduler   | overcloud-controller-0  | internal | enabled | up    | 2015-12-07T19:33:43.000000 |
| nova-conductor   | overcloud-controller-0  | internal | enabled | up    | 2015-12-07T19:33:44.000000 |
| nova-compute     | overcloud-novacompute-0 | nova     | enabled | up    | 2015-12-07T19:33:41.000000 |
+------------------+-------------------------+----------+---------+-------+----------------------------+
[stack@instack ~]$ openstack image create --location https://launchpadlibrarian.net/170024918/cirros-0.3.2-source.tar.gz  cirros
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| checksum         | None                                 |
| container_format | bare                                 |
| created_at       | 2015-12-07T19:36:47.000000           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | raw                                  |
| id               | f6690870-96fd-4b1a-80fb-dee80b0169ef |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | cirros                               |
| owner            | 0018147affb84b3b946b40a105767b12     |
| properties       |                                      |
| protected        | False                                |
| size             | 429582                               |
| status           | active                               |
| updated_at       | 2015-12-07T19:36:47.000000           |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[stack@instack ~]$ openstack server create --image cirros --flavor m1.tiny  test
+--------------------------------------+-----------------------------------------------+
| Field                                | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          |                                               |
| OS-EXT-SRV-ATTR:host                 | None                                          |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                          |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                             |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | None                                          |
| OS-SRV-USG:terminated_at             | None                                          |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| addresses                            |                                               |
| adminPass                            | TmSwanMP3A4K                                  |
| config_drive                         |                                               |
| created                              | 2015-12-07T19:37:19Z                          |
| flavor                               | m1.tiny (1)                                   |
| hostId                               |                                               |
| id                                   | 67166662-64a4-4265-8e3a-11926063e8c3          |
| image                                | cirros (f6690870-96fd-4b1a-80fb-dee80b0169ef) |
| key_name                             | None                                          |
| name                                 | test                                          |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| project_id                           | 0018147affb84b3b946b40a105767b12              |
| properties                           |                                               |
| security_groups                      | [{u'name': u'default'}]                       |
| status                               | BUILD                                         |
| updated                              | 2015-12-07T19:37:19Z                          |
| user_id                              | 867d4a61fba94d1382473629f662eff9              |
+--------------------------------------+-----------------------------------------------+
[stack@instack ~]$ openstack server  list
+--------------------------------------+------+--------+----------+
| ID                                   | Name | Status | Networks |
+--------------------------------------+------+--------+----------+
| 67166662-64a4-4265-8e3a-11926063e8c3 | test | ACTIVE |          |
+--------------------------------------+------+--------+----------+
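Once done poking at the overcloud, the test instance and image can be cleaned up with the standard openstackclient commands, still with overcloudrc sourced:

```shell
# Remove the test server and the cirros image created above
openstack server delete test
openstack image delete cirros
```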

Inlunch is Jiri’s personal tool, and I thank him for sharing it. It is not intended to be a large community effort, and it may break or bit-rot in the future.

In order to get something that is supportable, John Trowbridge has started a comparable effort called tripleo-quickstart, which we are going to try to have ready for the Mitaka-based RDO test day this cycle. The major differences in focus between the two efforts:

  • inlunch is upstream TripleO; tripleo-quickstart is RDO based.
  • inlunch generates the VM images; tripleo-quickstart downloads them.
  • inlunch has a lot of its customization in bash scriptlets configurable from answers.yml, which reflects its use as a developer’s tool. tripleo-quickstart is much more straight Ansible.
