Jamulus Server with a Low Latency Kernel on F33

I’m trying to run a Jamulus server. I got it running, but the latency was high. My first step was to add the real-time kernel from CCRMA.

CCRMA no longer ships a super-package for its core set of packages. The main thing missing seems to be the rtirq package.

  • Installed the CCRMA repo file.
  • Installed the real-time kernel.
  • Set the RT kernel as the default.
  • Installed the rtirq scripts RPM.
  • Enabled the systemd unit for rtirq.
  • Rebooted.
  • Cloned the Jamulus repo from git.
  • Configured, built, and installed Jamulus from source.
  • Added a systemd unit for Jamulus (sketched below).
  • Set SELinux to permissive mode (starting Jamulus failed without this).
  • Started Jamulus.
  • Ensured I could connect to it.
  • Stopped Jamulus.
  • Set SELinux back to enforcing mode.
  • Restarted Jamulus via systemctl.
  • Connected from my desktop to the Jamulus server.
  • Jammed.
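
For reference, here is a minimal sketch of the kind of systemd unit I mean. The install path, the dedicated jamulus user, and the restart policy are assumptions that will vary by setup; -s runs Jamulus in server mode and -n disables the GUI.

[Unit]
Description=Jamulus server
After=network.target

[Service]
Type=simple
# Assumed install path and service user; adjust to match your build
ExecStart=/usr/local/bin/Jamulus -s -n
User=jamulus
Restart=on-failure

[Install]
WantedBy=multi-user.target

Dropped into /etc/systemd/system/jamulus.service, a systemctl daemon-reload followed by systemctl enable --now jamulus covers the start/stop steps above.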

The RT kernel does not seem to have much impact on the latency I am seeing; I think that is bound more by the network.

Setting the Default Kernel on Fedora 33

I have a server on which I want to run the real-time kernel from CCRMA. Once I followed the steps to get the kernel installed, I had to reboot to use it.

Rebooting a server with a short grub timeout is frustrating.

Since the Fedora kernel is still installed, and I want to be able to fall back to it as a backup kernel, I had to figure out how to change the default kernel for grub2. Most of the docs out there assume that you can list the menu items in the grub2 config file, but that is a thing of the past. The entries are now auto-generated from a regex match against the places where one might put the vmlinuz files.

I ended up booting the machine and looking at the grub menu, which showed three kernels installed: two Fedora kernels and the RT one from CCRMA. The RT kernel was second on the list, but grub numbers entries from zero, so to set the default kernel:

sudo grub2-set-default 1
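
To double-check before rebooting, grubby --info=ALL lists the entries with their indexes, and grub2-editenv shows what grub2-set-default saved:

$ sudo grub2-editenv list
saved_entry=1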

The next time it booted, it came up on the RT kernel:

$ uname -r
5.10.2-200.rt20.1.fc33.ccrma.x86_64+rt


Working with the beaker command line

A graphical user interface has the potential to guide users on their journey from n00b13 to power user. If a user has never used a system before, the graphical user interface can provide a visual orientation to the system that is intuitive and inviting.

Once a user starts to depend on a system and use it regularly, they often want to automate tasks performed in that system.

I am reminded of these principles as I start making use of my company’s Beaker server. I need short-term access to machines of various architectures to develop and test our Yocto-based coding efforts.


Getting hostname information from the beaker command line

We use Beaker to allocate and loan computer hardware. If you want to talk to it via the command line, you can use the bkr executable. Some of the information comes back as JSON, but Beaker tends to speak XML. To look up a hostname from a job, you need to be able to parse the XML. To do that, I used the xq executable from the Python yq package. Yes, x and y.

I installed yq via pip. That puts both the xq and yq executables into ~/.local/bin.
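
In case ~/.local/bin is not already on your PATH, the setup looks roughly like this; note that xq and yq delegate the actual filtering to jq, so jq needs to be installed as well:

$ pip install --user yq
$ export PATH=$PATH:~/.local/bin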

If I know the job ID, I can parse the results using the following syntax:

bkr job-results 'J:5078388' | xq -r  ".job | .recipeSet | .recipe| .task | .[] | .roles | .role | .system | .\"@value\""
hpe-apollo-cn99xx-14-vm-12.khw4.lab.eng.bos.redhat.com
hpe-apollo-cn99xx-14-vm-12.khw4.lab.eng.bos.redhat.com

UPDATE: Here is a useful script that combines bkr, jq, and xq to list the hostnames of the machines I currently have on loan from Beaker.

for job in $( bkr job-list -o $USER --unfinished | jq -r '.[]' ); do
    bkr job-results $job | xq -r ".job | .recipeSet | .recipe | .task | .[] | .roles | .role | .system | .\"@value\""
done

Homelab OpenShift 4 on Baremetal: Part 1

My work as a cloud Solutions Architect is focused on OpenShift. Since I work in the financial sector, my customers are very security focused. These two factors have converged on me working on OpenShift installs on disconnected networks.

The current emphasis in OpenShift is on virtualization. While virtualization can be nested, nesting typically carries a performance penalty. More important, though, virtualization is a technology meant to take advantage of bare metal installs.

I need to run OpenShift 4 on bare metal in my homelab via a disconnected install. Here we go.


Getting SweetHome3D To Run on Fedora 33

When I tried running SweetHome3D, I hit two different problems depending on which of the launch scripts I tried. I eventually was able to get ./SweetHome3D-Java3D-1_5_2 to run. At first I got this error:

$ ./SweetHome3D-Java3D-1_5_2 
Exception in thread "main" java.lang.UnsatisfiedLinkError: /home/ayoung/apps/sweet/SweetHome3D-6.4.2/lib/libj3dcore-ogl.so: libnsl.so.1: cannot open shared object file: No such file or directory
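
A quick sanity check is to run ldd against the bundled native library; before the fix, it reports the NIS library as unresolved:

$ ldd lib/libj3dcore-ogl.so | grep nsl
	libnsl.so.1 => not found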

I was able to resolve it with guidance from this thread: I had to install libnsl.

$ yum search libnsl
========================= Name Exactly Matched: libnsl =========================
libnsl.i686 : Legacy support library for NIS
libnsl.x86_64 : Legacy support library for NIS
======================== Name & Summary Matched: libnsl ========================
libnsl2-devel.i686 : Development files for libnsl
libnsl2-devel.x86_64 : Development files for libnsl
============================= Name Matched: libnsl =============================
libnsl2.x86_64 : Public client interface library for NIS(YP) and NIS+
libnsl2.i686 : Public client interface library for NIS(YP) and NIS+
[ayoung@ayoungP40 SweetHome3D-6.4.2]$ sudo yum install libnsl

And then it runs.

Adding an IP address to a Bridge

OpenShift requires a load balancer to provide access to the hosted applications. Although I can run a three-node cluster, I need a fourth location to run a load balancer that can provide access to the cluster.

For my home lab setup, this means I want to run one on my bastion host… but that host is already running HTTP and Red Hat IdM (FreeIPA), and I don’t want to break those. So I want to add a second IP address to the bastion host, and have all of the existing services keep using the existing IP address. Only the new HAProxy instance will use the new IP address.

This would be trivial for a simple Ethernet port, but I am using a bridge, which makes it a touch trickier, though not terribly so.
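
With NetworkManager, the gist is to append a second address to the bridge connection rather than replace the first. A sketch, assuming the bridge connection is named br0 and using 192.168.1.99/24 as a stand-in for the new address:

# The leading + appends to ipv4.addresses instead of overwriting it
sudo nmcli connection modify br0 +ipv4.addresses 192.168.1.99/24
sudo nmcli connection up br0

HAProxy can then bind only to the new address, leaving the existing services untouched on the old one.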
