About Adam Young

Once upon a time I was an Army Officer, but that was long ago. Now I work as a Software Engineer. I climb rocks, play saxophone, and spend way too much time in front of a computer.

Merging root and home filesystems

Yocto takes up a lot of space when it builds. If the /home partition is 30 GB or smaller, I am going to fill it up. The systems I get provisioned from Beaker routinely split their disks between / and /home. Since both are logical volumes in the same volume group, they are easy to merge.

In order to merge them I find myself performing the following steps.

umount /home/
mkdir /althome

I then modify /etc/fstab so that the /home entry points to /althome instead. If I have done any work in /home/ayoung (almost always), I have to copy it over to the new /home directory.
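
The fstab change is just the mount point on the existing entry. A sketch, using the device naming from this machine (yours will differ):

# before: /dev/mapper/rhel_hpe--moonshot--02--c07-home  /home     xfs  defaults  0 0
/dev/mapper/rhel_hpe--moonshot--02--c07-home  /althome  xfs  defaults  0 0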

mount /althome/
cp -a /althome/ayoung /home/ayoung

Once the home volume has been copied over and unmounted, I can reclaim the space. The following lines will vary depending on the name of the machine.

lvremove /dev/rhel_hpe-moonshot-02-c07/home
lvresize  -L   +32.48G  /dev/rhel_hpe-moonshot-02-c07/root

I am explicitly reclaiming the size of the /home volume, which in this case is 32.48 GB.

A little bit of foresight can obviously avoid this problem; properly allocate the disks according to the workload. Requesting a machine with more disk is also an option.

But sometimes we have to fix mistakes.

Note that I use the lvdisplay command to see the names of the volumes.
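
For example, a quick way to pull just the volume paths out of its output:

lvdisplay | grep "LV Path"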

In order to make use of the new space, I have to resize the file system. Since it is XFS, I use the xfs_growfs command. I want the full size, so I don’t need to pass a size parameter.

xfs_growfs /dev/mapper/rhel_hpe--moonshot--02--c07-root

Updating config.sub in a bitbake recipe

config.sub is used to determine, among other things, the architecture of the machine. It is consumed by the configure script in an autotools-based build.

Older config.sub files don’t know how to handle aarch64, the generic name used for ARM64 servers in the build process. We have a recipe that pulls in code using an older config.sub file, and I need to update it.

My first approach was to build a patch. This works fine, and it was my fallback, but it is tedious to do for every recipe that needs this update, every time it needs it. It turns out we have a better approach that follows the principle of “don’t repeat yourself.”
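
One common pattern in the Yocto world, and not necessarily the exact approach from the full post, is to copy the up-to-date config.sub staged by gnu-config-native over the stale one before configure runs. A sketch of what that looks like in a recipe:

# Assumes gnu-config-native is in DEPENDS (recipes that inherit
# autotools already pull it in). Older releases spell the override
# do_configure_prepend instead of do_configure:prepend.
do_configure:prepend() {
    install -m 0755 ${STAGING_DATADIR_NATIVE}/gnu-config/config.sub ${S}/config.sub
}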

Continue reading

Jamulus Server with a Low Latency Kernel on F33

I’m trying to run a Jamulus server. I got it running, but the latency was high. My first step was to add the real-time kernel from CCRMA.

CCRMA no longer ships a super-package for core. The main thing missing seems to be the rtirq package.

  • installed the CCRMA repo file
  • installed the real-time kernel
  • set the RT kernel as the default
  • installed the rtirq scripts rpm
  • enabled the systemd unit for rtirq
  • rebooted
  • cloned the Jamulus repo from git
  • configured, built, and installed Jamulus from the sources
  • added a systemd unit for Jamulus (see the sketch after this list)
  • set SELinux to permissive mode (starting Jamulus failed without this)
  • started Jamulus
  • ensured I could connect to it
  • stopped Jamulus
  • set SELinux to enforcing mode
  • restarted Jamulus from systemctl
  • connected from my desktop to the Jamulus server
  • jammed
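
The Jamulus unit itself is small. Here is a minimal sketch of the kind of file I mean; the binary path and flags are assumptions based on a from-source install, so adjust them to match yours:

# /etc/systemd/system/jamulus.service
[Unit]
Description=Jamulus server
After=network.target

[Service]
# -s runs Jamulus in server mode, -n suppresses the GUI
ExecStart=/usr/local/bin/Jamulus -s -n
Restart=on-failure

[Install]
WantedBy=multi-user.target

After dropping that in place, systemctl daemon-reload followed by systemctl enable --now jamulus starts it.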

It does not seem to have had much impact on the latency I am seeing. I think that is bound more by the network.

Setting the Default Kernel on Fedora 33

I have a server on which I want to run the Real Time Kernel from CCRMA. Once I followed the steps to get the kernel installed, I had to reboot to use it.

Rebooting on a server with a short timeout for grub is frustrating.

Since the Fedora kernel is still installed, and I want to be able to run it as a backup kernel, I had to figure out how to change the default kernel for grub2. Most of the docs out there assume that you can list the menu entries in the grub2 config file, but that is a thing of the past. The entries are now auto-generated from a regex match of the places where one might place the vmlinuz files.

I ended up booting the machine and looking at the grub menu, which showed three kernels installed: two Fedora kernels and the RT one from CCRMA. The RT kernel was the second one in the list. But grub is zero-indexed, so to set the default kernel:

sudo grub2-set-default 1
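
If you would rather not reboot just to read the menu, grubby can list the auto-generated entries along with their indices. This assumes the stock Fedora grubby; the grep is just to trim the output:

sudo grubby --info=ALL | grep -E '^(index|title)'

Running grub2-editenv list afterwards confirms which entry is saved as the default.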

The next time it booted, it was running the RT kernel:

$ uname -r
5.10.2-200.rt20.1.fc33.ccrma.x86_64+rt


Working with the beaker command line

A graphical user interface has the potential to guide users on their journey from n00b13 to power user. If a user has never used a system before, the graphical user interface can provide a visual orientation to the system that is intuitive and inviting.

Once a user starts to depend on a system and use it regularly, they often want to automate tasks performed in that system.

I am reminded of these principles as I start making use of my company’s Beaker server. I need short-term access to machines of various architectures to develop and test our Yocto-based coding efforts.

Continue reading

That Yocto Thing

Many hardware vendors use Yocto as a way to provide a version of the Linux kernel and a board bring-up package. This is a very Linux-From-Scratch-type approach that grew out of Gentoo. My current work is on closing the gap between these vendors and the RPM-based code management approach in Fedora etc.

This is a lot of fun.

Our code repository is on GitLab.

I’ll be posting some of the more interesting things that I learn while working on this.

Getting hostname information from the beaker command line

We use Beaker to allocate and loan computer hardware. If you want to talk to it via the command line, you can use the bkr executable. Some of the information comes back as JSON, but Beaker tends to speak XML. To look up a host name from a job, you need to be able to parse the XML. To do that, I used the xq executable from the python yq package. Yes, x and y.

I installed yq via pip. That puts both the xq and yq executables into ~/.local/bin.
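
If your setup matches mine, the install looks something like this; make sure ~/.local/bin is on your PATH:

pip install --user yq
export PATH="$PATH:$HOME/.local/bin"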

If I know the job ID, I can parse it using the following syntax.

bkr job-results 'J:5078388' | xq -r  ".job | .recipeSet | .recipe| .task | .[] | .roles | .role | .system | .\"@value\""
hpe-apollo-cn99xx-14-vm-12.khw4.lab.eng.bos.redhat.com
hpe-apollo-cn99xx-14-vm-12.khw4.lab.eng.bos.redhat.com

UPDATE: So here is a useful script that makes use of bkr, jq, and xq to list the hostnames of the hosts I currently have on loan from Beaker.

for job in $( bkr job-list -o $USER --unfinished | jq -r '.[]' ); do
    bkr job-results $job | xq -r ".job | .recipeSet | .recipe | .task | .[] | .roles | .role | .system | .\"@value\""
done

Homelab OpenShift 4 on Baremetal: Part 1

My work as a cloud Solutions Architect is focused on OpenShift. Since I work in the financial sector, my customers are very security focused. These two factors have converged on me working on OpenShift installs on disconnected networks.

The current emphasis in OpenShift is on virtualization. While virtualization can be nested, nesting typically carries a performance penalty. More important, though, is that virtualization is a technology for taking full advantage of bare metal installs.

I need to run OpenShift 4 on bare metal in my homelab via a disconnected install. Here we go.

Continue reading