Tie Your Rabbit Down

I’ve been running Tripleo Quickstart to set up my development deployments. While looking into the setup, I noticed that the default Rabbit deployment is wide open; I can’t see anything other than firewall port blocking in place. So I dug deeper.

All of the services use the following values to talk to the queues:

  RabbitUserName:  guest
  RabbitPassword: guest

The Access Control List (ACL) allows all powers over all queues. There is no Transport Layer Security on the network communication.
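For contrast, here is roughly what scoping a user down would look like. This is illustrative only: the user name and queue-name regexes are hypothetical, and (as the reply-queue discussion below shows) a truly tight pattern is not actually possible today. The three regexes govern the configure, write, and read permissions respectively.

```shell
# Hypothetical: restrict a per-service "nova" user to compute-related
# queues and reply queues on the default vhost.
sudo rabbitmqctl set_permissions -p / nova \
    "^(compute|reply_).*" "^(compute|reply_).*" "^(compute|reply_).*"
```

The stock deployment instead grants guest the equivalent of ".*" ".*" ".*", i.e. everything.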

I was able to address the first issue by editing the openstack-deploy.sh script that Tripleo Quickstart generates. There is a heredoc section that sets many of the defaults that go into the YAML config file used as the input for openstack overcloud deploy. I added:

  RabbitUserName:  fubar
  RabbitPassword: fumtu

I confirmed that the cloud worked with these changes by running:

git clone https://git.openstack.org/openstack-infra/tripleo-ci
tripleo-ci/scripts/tripleo.sh  --overcloud-pingtest

As well as by SSHing to the controller and running:

$ sudo rabbitmqctl list_users
Listing users ...
fubar	[administrator]
...done.
$ sudo grep -i rabbit_password /etc/nova/nova.conf 
# Deprecated group;name - DEFAULT;rabbit_password
#rabbit_password=guest
rabbit_password=fumtu

While I was tempted to tackle this in Quickstart, I think it is better to leave the issue visible there and instead tackle it in the Tripleo library.

We deploy all of Rabbit in a single vhost:

$ sudo rabbitmqctl list_vhosts
Listing vhosts ...
/
...done.

But we do allow for separating the RPC mechanism from the notifications. In the Nova config file:

# The topic compute nodes listen on (string value)
#compute_topic=compute
...
[cells]
#  (string value)
#topic=cells
#rpc_driver_queue_base=cells.intercell
...
[conductor]
#topic=conductor

[oslo_messaging_notifications]
#topics=notifications

The Keystone config file has only the notifications section. All of these files carry the Rabbit user ID and password in the clear.
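For illustration, per-service credentials would replace the shared guest pair in each service’s [oslo_messaging_rabbit] section. A hypothetical nova.conf fragment on the compute node, using one of the uuidgen-based passwords created below:

```ini
[oslo_messaging_rabbit]
rabbit_userid = overcloud-novacompute-0
rabbit_password = 53493010-37b3-4188-bd88-b933b9322c7c
```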

The Oslo RPC call is based on creating a response queue. I would like to permit only the intended RPC target to write to this response queue. However, these queues are named using a random UUID:

    def _get_reply_q(self):
        with self._reply_q_lock:
            if self._reply_q is not None:
                return self._reply_q

            reply_q = 'reply_' + uuid.uuid4().hex

            conn = self._get_connection(rpc_common.PURPOSE_LISTEN)

            self._waiter = ReplyWaiter(reply_q, conn,
                                       self._allowed_remote_exmods)

            self._reply_q = reply_q
            self._reply_q_conn = conn

This makes it impossible to write a regular expression to limit the set of accessible queues.
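A quick sketch of the problem: any regex broad enough to match one randomly generated reply queue name necessarily matches every client’s reply queue, so a permission regex grants all or nothing.

```python
import re
import uuid

# Reply queue names are generated the way the oslo.messaging excerpt
# above shows: 'reply_' plus a random UUID in hex form.
names = ['reply_' + uuid.uuid4().hex for _ in range(3)]

# The tightest pattern we can write still matches every client's queue,
# so it cannot restrict a client to only its own reply queue.
pattern = re.compile(r'^reply_[0-9a-f]{32}$')
assert all(pattern.match(n) for n in names)
```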

What services actually have presence on the compute nodes? (some lines removed for clarity)

$ sudo lsof -i tcp:amqp
COMMAND     PID       USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
neutron-o 17236    neutron    8u  IPv4  40581      0t0  TCP overcloud-novacompute-0.localdomain:53049->overcloud-controller-0.localdomain:amqp (ESTABLISHED)
...
neutron-o 17236    neutron   19u  IPv4  40590      0t0  TCP overcloud-novacompute-0.localdomain:53058->overcloud-controller-0.localdomain:amqp (ESTABLISHED)
nova-comp 17269       nova    4u  IPv4  40572      0t0  TCP overcloud-novacompute-0.localdomain:53047->overcloud-controller-0.localdomain:amqp (ESTABLISHED)
...
nova-comp 17269       nova   19u  IPv4 130115      0t0  TCP overcloud-novacompute-0.localdomain:53157->overcloud-controller-0.localdomain:amqp (ESTABLISHED)
ceilomete 17682 ceilometer   12u  IPv4 130381      0t0  TCP overcloud-novacompute-0.localdomain:53162->overcloud-controller-0.localdomain:amqp (ESTABLISHED)

In order to trace the connections, I created four rabbit users (one per compute-node service, plus one for Keystone) with uuidgen-based passwords:

sudo rabbitmqctl add_user overcloud-ceil-0 28d90d7c-1ebb-47a6-b58b-3df7aef1f6bf
sudo rabbitmqctl add_user overcloud-neutron-0 1290a77d-35a1-4afa-b5ea-cbc8f9387754
sudo rabbitmqctl add_user overcloud-novacompute-0 53493010-37b3-4188-bd88-b933b9322c7c
sudo rabbitmqctl add_user keystone 4810a2c6-60f0-4014-8fbb-d628ad9d52f9
sudo rabbitmqctl set_permissions overcloud-ceil-0 ".*" ".*" ".*"
sudo rabbitmqctl set_permissions overcloud-neutron-0 ".*" ".*" ".*"
sudo rabbitmqctl set_permissions overcloud-novacompute-0 ".*" ".*" ".*"
sudo rabbitmqctl set_permissions keystone ".*" ".*" ".*"

First, I updated the Keystone configuration on the controller, and was able to see its connections change from the guest user to the keystone user.

Then I set the appropriate rabbit_userid and rabbit_password values on the compute node in the files:

/etc/ceilometer/ceilometer.conf
/etc/nova/nova.conf 
/etc/neutron/neutron.conf

Then I restarted the node. After the reboot, Nova and Neutron came back, but Ceilometer was not happy (even after cycling the services on both the control node and the compute node).

$ sudo lsof -i tcp:amqp
COMMAND    PID    USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
neutron-o 1680 neutron    8u  IPv4  23125      0t0  TCP overcloud-novacompute-0.localdomain:49085->overcloud-controller-0.localdomain:amqp (ESTABLISHED)
...
neutron-o 1680 neutron   19u  IPv4  23449      0t0  TCP overcloud-novacompute-0.localdomain:49096->overcloud-controller-0.localdomain:amqp (ESTABLISHED)
nova-comp 1682    nova    4u  IPv4  24066      0t0  TCP overcloud-novacompute-0.localdomain:49097->overcloud-controller-0.localdomain:amqp (ESTABLISHED)
...
nova-comp 1682    nova   20u  IPv4 487795      0t0  TCP overcloud-novacompute-0.localdomain:49582->overcloud-controller-0.localdomain:amqp (ESTABLISHED)

Going back to the controller: there is obviously a one-to-one relationship between the connections from the compute node and the connections that rabbitmqctl lists:

$ sudo rabbitmqctl list_connections
keystone	192.0.2.21	43714	running
keystone	192.0.2.21	43921	running
overcloud-neutron-0	192.0.2.20	49085	running
...
overcloud-neutron-0	192.0.2.20	49096	running
overcloud-novacompute-0	192.0.2.20	49097	running
...
overcloud-novacompute-0	192.0.2.20	49582	running
...done.

With this information, we should be able to put together a map of which service talks over which connection.
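As a sketch (not part of the deployment), the two listings can be joined on the client-side ephemeral port: lsof gives the service name and local port, while list_connections gives the rabbit user and peer port. The sample lines are taken from the listings above.

```python
# Join `lsof -i tcp:amqp` output with `rabbitmqctl list_connections`
# output on the client-side port to map services to rabbit users.
lsof_lines = (
    "neutron-o 1680 neutron    8u  IPv4  23125      0t0  TCP "
    "overcloud-novacompute-0.localdomain:49085->"
    "overcloud-controller-0.localdomain:amqp (ESTABLISHED)\n"
    "nova-comp 1682    nova    4u  IPv4  24066      0t0  TCP "
    "overcloud-novacompute-0.localdomain:49097->"
    "overcloud-controller-0.localdomain:amqp (ESTABLISHED)"
)

conn_lines = (
    "overcloud-neutron-0\t192.0.2.20\t49085\trunning\n"
    "overcloud-novacompute-0\t192.0.2.20\t49097\trunning"
)

def parse_lsof(lines):
    # Map local (source) port -> command name.
    ports = {}
    for line in lines.splitlines():
        fields = line.split()
        local = fields[8]  # "host:port->host:amqp"
        port = int(local.split('->')[0].rsplit(':', 1)[1])
        ports[port] = fields[0]
    return ports

def parse_connections(lines):
    # Map peer port -> rabbit user name.
    users = {}
    for line in lines.splitlines():
        user, _host, port, _state = line.split('\t')
        users[int(port)] = user
    return users

services = parse_lsof(lsof_lines)
users = parse_connections(conn_lines)
mapping = {services[p]: users[p] for p in services if p in users}
print(mapping)
# {'neutron-o': 'overcloud-neutron-0', 'nova-comp': 'overcloud-novacompute-0'}
```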

This is a complex system. I’m going to do some more digging, and see if I can come up with an approach to lock things down a bit better.
