Working with the Booked scheduler API

One benefit of working in a hardware company is that you actually have hardware. I have worked in software for a long time, and I have learned to appreciate when new servers are not such a scarce resource as to impact productivity. However, hardware in our group needs to be shared amongst a large group of developers, and is constantly reserved, assigned, and reprovisioned. We use an installation of the Booked scheduler to reserve servers. As with many tools, I am most interested in using it in a scripted fashion. Booked comes with an API. Here are some of the things I can do with it.
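
As a taste, here is the shape of a scripted session using curl and jq. The hostname and credentials are placeholders, and the endpoint paths and session headers follow Booked's Web Services API as I understand it, so treat this as a sketch rather than a drop-in script:

    #!/bin/bash
    # Base URL of the Booked Web Services API (hostname is hypothetical).
    BOOKED=https://booked.example.com/Web/Services/index.php

    # Authenticate; the response carries a session token and user id.
    AUTH=$(curl -s -X POST "$BOOKED/Authentication/Authenticate" \
        -H "Content-Type: application/json" \
        -d '{"username": "demo", "password": "secret"}')
    TOKEN=$(echo "$AUTH" | jq -r .sessionToken)
    USERID=$(echo "$AUTH" | jq -r .userId)

    # Every later call passes those values back as headers.
    curl -s "$BOOKED/Reservations/" \
        -H "X-Booked-SessionToken: $TOKEN" \
        -H "X-Booked-UserId: $USERID" | jq .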


Building a Custom Fedora-Based Kernel with Local Patches

How can I create a binary kernel RPM that includes patches that have not yet been merged into the mainline Kernel? One approach to building the Kernel RPM is to use the Makefile target provided with the Kernel. While we typically do this, it does not provide us with the userland tools, like perf and its libraries, that are used to test certain patches.

An alternative approach is to take the Fedora Kernel source RPM that matches the targeted upstream Kernel version, and modify it to apply the set of patches. Here is a walk-through of the process I recently got working.
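
In outline, the steps look something like the following. The patch name and bcond flags are illustrative, and the exact way patches are declared and applied varies between Fedora kernel spec versions:

    # Fetch and unpack the source RPM for the matching kernel version
    # (a specific build can also be pulled from Koji).
    dnf download --source kernel
    rpm -ivh kernel-*.src.rpm

    # Drop the local patch into SOURCES and declare it in the spec,
    # e.g. by adding a line like:  Patch9999: my-feature.patch
    cp ~/patches/my-feature.patch ~/rpmbuild/SOURCES/

    # Install build dependencies and rebuild the binary RPMs.
    cd ~/rpmbuild/SPECS
    sudo dnf builddep kernel.spec
    rpmbuild -bb --without debug --without debuginfo kernel.spec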


How not to waste time developing long-running processes

Developing long-running tasks might be my least favorite coding activity. I love writing and debugging code…I’d be crazy to be in this profession if I did not. But when a task takes long enough, your attention wanders and you get out of the zone.

Building the Linux Kernel takes time. Even checking the Linux Kernel out of git takes a non-trivial amount of time. The Ansible work I did back in the OpenStack days to build and tear down environments took a good bit of time as well. How do I keep from getting out of the zone while coding on these? It is hard, but here are some techniques.
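
One example of the kind of trick that helps (not necessarily one from the full post): have the long-running command announce its own completion, so attention can go elsewhere without polling the terminal.

    # Start the build, then raise a desktop notification when it finishes,
    # pass or fail (a terminal bell or chat webhook works just as well).
    make -j"$(nproc)"; notify-send "kernel build" "exit status: $?"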


Keeping the CI logic in bash

As much as I try to be a “real” programmer, the reality is that we need automation, and setting up automation is a grind. A necessary grind.

One thing that I found frustrating was that, in order to test our automation, I needed to kick off a pipeline in our git server (GitLab, but the logic holds for others) even though the majority of the heavy lifting was done in a single bash script.

In order to get to the point where we could run that script in a GitLab runner, we needed to install a bunch of packages (dwarves, make, and so forth) as well as do some SSH key provisioning in order to copy the artifacts off to a server. The .gitlab-ci.yml file ended up being a couple dozen lines long, and all of those lines were bash commands.
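
That provisioning front matter looked roughly like this; dwarves and make come straight from the list above, while the CI variable and artifact host names are assumptions for the sake of the sketch:

    # Packages the kernel build needs (dwarves supplies pahole).
    dnf install -y dwarves make

    # Provision an SSH key so artifacts can be copied off to a server.
    # Assumes the private key arrives in a CI variable named DEPLOY_KEY.
    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    printf '%s\n' "$DEPLOY_KEY" > ~/.ssh/id_ed25519
    chmod 600 ~/.ssh/id_ed25519
    ssh-keyscan artifacts.example.com >> ~/.ssh/known_hosts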

So I pulled the lines out of .gitlab-ci.yml and put them into the somewhat intuitively named file workflow.sh. Now my .gitlab-ci.yml file is basically a one-liner that calls workflow.sh.

But I also made it so that workflow.sh can be called from the bash command line of a new machine. This is the key part. By doing this, I am creating automation that the rest of my team can use without relying on GitLab. Since the automation will still be run from GitLab, no one can check in a change that breaks the CI without it being caught, but they can make changes that will make life easier for them on the remote systems.

The next step is to start breaking apart the workflow into separate pipelines, due to CI requirements. To do this, I do three things:

  • Move the majority of the logic into functions, and source a functions.sh file. This lets me share code across the top-level bash scripts.
  • Make one top-level function for each pipeline.
  • Replace workflow.sh with a script per pipeline. These are named pipeline_<stage>. These scripts merely change to the source directory, and then call top-level functions in functions.sh.

The reason for the last split is to keep logic from creeping into the pipeline functions. They are merely interfaces to the single set of logic in functions.sh.
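
A sketch of the layout, with hypothetical function and stage names:

    # functions.sh -- the single home for the real logic, sourced by
    # every pipeline script.
    build_kernel() {
        make -j"$(nproc)"
    }

    publish_artifacts() {
        scp -r artifacts/ builder@artifacts.example.com:/srv/builds/
    }

    # One top-level function per pipeline.
    pipeline_build() {
        build_kernel
        publish_artifacts
    }

The per-pipeline script is then nothing but a shim:

    #!/bin/bash
    # pipeline_build: change to the source directory, then delegate.
    cd "$(dirname "$0")" || exit 1
    source ./functions.sh
    pipeline_build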

The goal of having the separate functions source-able is to be able to run interior steps of the overall process without having to run the end-to-end work. This saves the sitting-around time spent waiting for a long-running process to complete…more on that in a future article.
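
That means a single interior step can be run by hand on a dev box, using the names from the sketch above:

    # Pull in the shared logic and run just one step of the workflow.
    source ./functions.sh
    build_kernel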

Remotely checking out from git using ssh key forwarding

Much of my work is done on machines that are only on loan to me, not permanently assigned. Thus, I need to be able to provision them quickly and with a minimum of fuss. One action I routinely need to perform is checking code out of a git server, such as gitlab.com. We use ssh keys to authenticate to GitLab. I need a way to do this securely when working on a remote machine. Here’s what I have found.
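
The short version is ssh agent forwarding, which lets the remote machine authenticate without the private key ever leaving my workstation. A minimal sketch, with a hypothetical remote host:

    # On the workstation: load the key into the agent, then forward it (-A).
    ssh-add ~/.ssh/id_ed25519
    ssh -A dev-machine.example.com

    # On the remote machine: the forwarded agent answers gitlab.com's
    # challenge, so no private key is ever copied to the loaner box.
    git clone git@gitlab.com:example/project.git

ForwardAgent can also be enabled per-host in ~/.ssh/config; it is best limited to machines you trust, since anyone with root on the remote host can make use of the forwarded agent socket.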


Print the line after a match using AWK

We have an internal system for allocating hardware to developers on a short-term basis. While the software does have a web API, it is not enabled by default, nor in our deployment. Thus, we end up caching a local copy of the data about the machines. The machine names are a glom of architecture and location. So I make a file with the name of each machine, and a symlink to the one I am currently using.
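
The idiom from the title, with a hypothetical pattern and file name:

    # Print the line that follows each line matching the pattern.
    awk '/x86_64-bos/ { getline; print }' machines.txt

    # Flag-based alternative: avoids reprinting the matching line itself
    # when the match lands on the last line and getline has nothing to read.
    awk 'p { print; p = 0 } /x86_64-bos/ { p = 1 }' machines.txt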
