The cloud image is shipped as a qcow2 file. It has about 3 GB of usable space. I need more.
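The fix is in two steps: grow the qcow2 container, then grow the partition and filesystem inside it. A sketch, assuming the image is named cloud.qcow2 (both the file name and the size increase here are placeholders):

# Grow the qcow2 container by 20 GB; the guest still has to
# grow its partition and filesystem to match.
qemu-img resize cloud.qcow2 +20G

# On first boot, cloud-init's growpart normally expands the root
# partition and filesystem into the new space automatically.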
Network setup for a custom Qemu Virtual Machine
After building a custom Qemu, there are a couple of ways to run a VM to get to it. The older approach to VM management is to create a block device, run the VM with a boot device, do a full install, and log in to the serial console. However, if you run the Qemu/KVM machine from the command line, hitting Ctrl-C will stop your VM, and this is annoying. I have found it worthwhile to set up networking and then to SSH in to the machine.
My notes here suck. I am going to try to document what I have working here and, over time, reverse engineer how I got here.
Edit: here are the steps
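The heart of the setup is a single QEMU invocation with user-mode networking and a port forward. A minimal sketch, assuming an x86_64 guest and a disk image named vm.qcow2 (the image name, guest architecture, and forwarded port are all assumptions):

# Forward host port 2222 to the guest's SSH port, and detach from
# the terminal so a stray Ctrl-C cannot kill the VM.
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=vm.qcow2,format=qcow2,if=virtio \
    -netdev user,id=net0,hostfwd=tcp::2222-:22 \
    -device virtio-net-pci,netdev=net0 \
    -display none -daemonize

# Then SSH in from the host:
ssh -p 2222 user@localhost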
Parsing a yum repo with XPath
Let's say you want to see what src RPMs are in a given yum repo. If the author used createrepo to create the yum repo, it should be in a fairly standard layout. The following XPath query should pull it out.
Note that you can get xmllint, from libxml2, to run the XPath query: https://gnome.pages.gitlab.gnome.org/libxml2/xmllint.html
curl http://$yumserver/$somerepo/ > repo.html
xmllint --html --xpath "//html/body/table/tr/td/a/@href" repo.html | grep src
The a/@href portion of the query will match the href attribute of a tag like this
<a href="https://blam.src.rpm">
Working with the Booked scheduler API
One benefit of working in a hardware company is that you actually have hardware. I have worked in software for a long time, and I have learned to appreciate when new servers are not such a scarce resource as to impact productivity. However, hardware in our group needs to be shared amongst a large group of developers, and constantly reserved, assigned, and reprovisioned. We use an install of the Booked scheduler to reserve servers. As with many tools, I am most interested in using it in a scripted fashion. Booked comes with an API. Here are some of the things I can do with it.
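For example, authentication is a POST that returns a session token and user id, which later calls send back as headers. A sketch with curl and jq, assuming a Booked install at booked.example.com and placeholder credentials (the exact service paths can vary per install):

BOOKED=http://booked.example.com/Web/Services

# Authenticate; the response contains sessionToken and userId.
curl -s -X POST "$BOOKED/Authentication/Authenticate" \
    -H "Content-Type: application/json" \
    -d '{"username": "myuser", "password": "mypassword"}' > auth.json

TOKEN=$(jq -r .sessionToken auth.json)
USERID=$(jq -r .userId auth.json)

# List reservations, passing the session back as headers.
curl -s "$BOOKED/Reservations/" \
    -H "X-Booked-SessionToken: $TOKEN" \
    -H "X-Booked-UserId: $USERID"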
Metaprogramming in bash
My job requires me to perform operations on multiple machines. These operations can be run either against the platform management server (reboot, change firmware options) or via a remote connection to the operating system running on that machine. Here's how I manage the scripts to simplify my work.
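As a taste of the technique, here is a sketch that generates a helper function per machine with eval; the host names and SSH users are hypothetical:

SERVERS="server1 server2 server3"

for s in $SERVERS; do
    # Define reboot_<host> and bmc_<host> commands on the fly.
    eval "reboot_${s}() { ssh root@${s}.example.com reboot; }"
    eval "bmc_${s}() { ssh admin@${s}-bmc.example.com \"\$@\"; }"
done

# Now reboot_server1, bmc_server2, and so on exist as commands.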
pxelinux.cfg file name format
When to Ansible? When to Shell?
Any new technology requires a mental effort to understand. When trying to automate the boring stuff, one decision I have to make is whether to use straight shell scripting or whether to perform that operation using Ansible. What I want to do is look at a simple Ansible playbook I have written, compare it with what the equivalent shell script would look like, and decide whether it would help my team to use Ansible in this situation.
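To make the comparison concrete before diving into the playbook, consider rebooting a group of lab machines; the inventory group and host names below are made up:

# Ansible: one ad-hoc call; parallelism and the wait-for-reboot
# logic come for free.
ansible lab -b -m reboot

# Shell: the comparable loop, serial and with no error handling.
for host in node1 node2 node3; do
    ssh root@"$host" reboot
done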
Cleaning a machine
After you get something working, you may find you missed a step in documenting how you got there. You might have installed a package that you didn't remember, or maybe you set up a network connection. In my case, I find I have often brute-forced the SSH setup for later provisioning. Since this is done once and then forgotten, often in the push to “just get work done,” I have had to go back and redo it (again, usually manually) when I get to a new machine.
To avoid this, I am documenting what I can do to get a new machine up and running in a state where SSH connections (and forwarding) can be reliably run. This process should be automatable, but at a minimum, it should be understood.
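The core of it is small. A sketch, assuming a fresh machine reachable as root@newhost (the host name is a placeholder):

# Generate a key once, if one does not already exist.
[ -f ~/.ssh/id_ed25519 ] || ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Install the public key on the new machine.
ssh-copy-id root@newhost

# Confirm that login and agent forwarding both work.
ssh -A root@newhost 'echo ok'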
Functional Fixedness
Today I was reminded how easy it is to get fixed in your thinking.
The short lesson learned: if the hostname fails (due to SSL), try the IP address.
Longer story:
ipxe.efi for aarch64
To make the aarch64 iPXE process work using bifrost, I had to:
git clone https://github.com/ipxe/ipxe.git
cd ipxe/src/
make bin-arm64-efi/snponly.efi ARCH=arm64
sudo cp bin-arm64-efi/snponly.efi /var/lib/tftpboot/ipxe.efi
This works for the Ampere reference implementation servers that use a Mellanox network interface card, which supports (only) SNP, the UEFI Simple Network Protocol.