Many years ago, when I first started working at Red Hat, I worked up a package management domain model diagram. I’ve referred to it many times over the years, but have never posted or explained it in detail. Recently, discussions over image building software caused me to refer to it a few times. Here it is, with annotations below.
This one is going to be a little light on details, as we are still working through it, but I’d just like to share what I’ve been working on for the past couple of weeks. Note that this is for a proof-of-concept cluster and is not for production.
Ansible exists to help automate the time-consuming, repetitive tasks that technologists depend on. One very common job is creating and tearing down a virtual machine. While cloud technologies have made this possible to perform remotely, there are many times when I’ve needed to set up and tear down virtual machines on stand-alone Linux servers. In this case, the main interfaces to the machine are ssh and libvirt. I recently worked through an Ansible role to set up and tear down a virtual machine via libvirt, and I’d like to walk through it and record my reasons for some of the decisions I made.
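As a rough, Ansible-free sketch of the libvirt operations such a role ends up driving (the VM name, disk path, and sizes below are placeholders, not values from the post), the create and teardown halves look something like this:

#!/bin/sh
# Placeholder values; a real role would take these from Ansible variables.
VM_NAME=testvm
DISK=/var/lib/libvirt/images/${VM_NAME}.qcow2

case "$1" in
  create)
    sudo virt-install --name "$VM_NAME" --ram 2048 --vcpus 2 \
      --import --disk "$DISK" --noautoconsole
    ;;
  teardown)
    sudo virsh destroy "$VM_NAME"
    sudo virsh undefine "$VM_NAME" --remove-all-storage
    ;;
  *)
    echo "usage: $0 create|teardown" >&2
    exit 1
    ;;
esac

Both halves consume the same handful of values, which is what makes it natural to fold create and teardown into a single role.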
I do this infrequently enough that I want to record a reminder of how I do it:
sudo cp ~/Downloads/rhel-server-7.6-x86_64-kvm.qcow2 /var/lib/libvirt/images/tower.qcow2
sudo virt-install --vcpus=2 --name tower --ram 4096 --import --disk /var/lib/libvirt/images/tower.qcow2
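Once the import finishes, something like the following confirms the guest is running and attaches to its console (assuming the default system libvirt connection):

sudo virsh list --all
sudo virsh console tower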
Not all of my virtual machines run on OpenStack; I have to run a fair number of virtual machines on my personal workstation via libvirt. However, I like using the cloud versions of RHEL, as they most closely match what I run in OpenStack. The disconnect is that the cloud images are designed to accept cloud-init, which pulls the ssh public keys from a metadata web server. Without that, there are no public keys added to the cloud-user account, and the VM is inaccessible. Here is how I add the ssh keys manually.
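One common way to do this (not necessarily the exact approach the post walks through) is virt-customize from libguestfs-tools; the image path here is a placeholder:

sudo virt-customize -a /var/lib/libvirt/images/rhel-cloud.qcow2 \
  --ssh-inject cloud-user:file:$HOME/.ssh/id_rsa.pub \
  --selinux-relabel

This has to run while no VM is using the image, and the --selinux-relabel option keeps the injected authorized_keys file from being rejected by SELinux on first boot.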
While my company has wonderful resources to allow employees to study for our certifications, they are time-limited to prevent waste. I find I’ve often kicked off the lab, only to get distracted by a real-world interrupt and come back to find the lab has timed out. I like working on my own systems and having my own servers to work on. As such, I’m setting up a complementary system to the corporate one for my own study.
Nothing lasts forever. Except some developer setups whose owner no one seems to know, and that no one is willing to tear down. I’ve tried to build code into my provisioning systems that cleans up after itself. One pattern I’ve noticed is that the same data is required for building and for cleaning up a cluster. When I built Ossipee, each task had both a create and a teardown stage. I want the same from Ansible. Here is how I’ve made it work thus far.
Today I tried to use our local OpenStack instance to deploy CloudForms Management Engine (CFME). Our OpenStack deployment has a set of flavors that are all defined with 20 GB disks. The CFME image is larger than this and will not deploy with any of those flavors. Here is how I worked around it.
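One straightforward workaround (not necessarily the one described in the full post, and it requires admin rights on the OpenStack deployment) is to define a larger flavor; the name and sizes here are placeholders:

openstack flavor create --ram 12288 --disk 80 --vcpus 4 m1.cfme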
Red Hat Satellite Server is a key tool in the provisioning process for the systems in our labs. In one of our labs we have an older deployment running Satellite 6, which maps to version 1.11 of the upstream project, The Foreman. Since I want to be able to perform repeatable operations on this server, I need to make Web API calls.
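The API itself is reachable with nothing more than curl and basic auth; the hostname and credentials here are placeholders:

curl -k -u admin:changeme https://satellite.example.com/api/v2/hosts | python -m json.tool

Raw calls like that work, but they get tedious for anything beyond a quick check.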
The easiest way to do this is to use the Hammer CLI. But it turns out the version of Hammer is somewhat tied to the version of the Satellite server; the version I have in Fedora 27 does not talk to this older Satellite instance. So, I want to run an older Hammer.
I decided to use this as an opportunity to walk through running an RPM-managed application targeted for RHEL 6/EPEL 6 via Docker.
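As a sketch of that approach (the repo URL and package names below are assumptions based on the upstream repo layout, not taken from the post), it amounts to starting an EL6 userspace in a container and installing Hammer inside it:

docker run -it --rm centos:6 /bin/bash
# Inside the container; repo URL and package names are assumptions, adjust for the release you need:
yum -y install http://yum.theforeman.org/releases/1.11/el6/x86_64/foreman-release.rpm
yum -y install rubygem-hammer_cli_foreman
hammer -s https://satellite.example.com -u admin -p changeme host list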
Edit: actually, this might not be the case, but the rest of the learning process was interesting enough that I kept working at it.
Edit 2: This was necessary; see the bottom. Also, the 1.11 in the URL refers to the upstream repo for theforeman. I’d use a different repo when building with supported Red Hat RPMs.
Here is what I learned.