Sysadmin’s Req

(If you can’t guess the tune, get off my lawn!)

Are you posting a Sys Admin’s Req
telnet, CUPS, Sendmail,  and BIND
Consider her I worked with at DEC
She once was an admin of mine

Have her crimp me a crossover cable
telnet, CUPS, Sendmail,  and BIND
Bridging hub and subnet unstable
And then she’ll be an admin of mine

Have her build me a server of web
telnet, CUPS, Sendmail,  and BIND
Without virtualized overhead
And then she’ll be an admin of mine

Have her craft me a Kerberos Key
telnet, CUPS, Sendmail,  and BIND
But not based on code writ at MIT
And then she’ll be an admin of mine

Have her write me a recovery plan
telnet, CUPS, Sendmail,  and BIND
With servers unconnected to SAN
And then she’ll be an admin of mine

Streamline development with autoexpect

expect is one of the old UNIX tools that people seem to continually rediscover. It is a hole plugger, linking together other tools to do things that you just can’t do any other way, or at least, not without some serious coding.

I am continually deploying and undeploying IPA Server as part of my development. Installing requires, amongst other things, typing in a password at least four times. I was sick of typing it, and decided to turn to expect.

I’m lazy. I didn’t want to learn another domain-specific language. So, while procrastinating by reading man pages and such, I came across a tool that made my life much easier.

autoexpect

It is basically a macro recorder for the bash shell.  I ran

autoexpect

ipa-server-install --uninstall

And ran through the uninstall process. When I was done, I typed exit, and there was a beautiful expect script all ready for me in script.exp.  I renamed it to ipa-uninstall.exp.  Same thing for the install process.

I should rarely, if ever, have to type those passwords again.
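
The recordings are ordinary expect scripts, so replaying them is just a matter of pointing expect at them. Roughly (ipa-install.exp is my assumed name for the second recording; autoexpect writes everything to script.exp until you rename it):

expect ipa-uninstall.exp
expect ipa-install.exp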

Found ‘${BUILDROOT}’ in installed files; aborting

If you get the above-mentioned error while building an RPM, here’s what it means.  rpmbuild has executed the script:

/usr/lib/rpm/check-buildroot

This looks through the files that are set to be installed in your rpm to see if any of them contain the embedded path used to build or install them in the rpmbuild process.  For example, I have my ~/.rpmmacros file set up with the following entry:

%_topdir %(echo $HOME)/rpmbuild

which means that I build in /home/ayoung/rpmbuild.  Underneath this directory, I see, amongst other things, the subdirectory BUILDROOT.  The current wisdom says that an RPM should use BUILDROOT as the target for any installs.  This is the set of files that get packaged up for the final RPM.  The files here then get checked to see if they have this path embedded in them.  For example, when building rubygem-libxml-ruby, I see:

+ /usr/lib/rpm/check-buildroot
Binary file /home/ayoung/rpmbuild/BUILDROOT/rubygem-libxml-ruby-1.1.4-1.young.x86_64/usr/lib/ruby/gems/1.8/gems/libxml-ruby-1.1.4/lib/libxml_ruby.so matches
Found '/home/ayoung/rpmbuild/BUILDROOT/rubygem-libxml-ruby-1.1.4-1.young.x86_64' in installed files; aborting

There is a simple workaround.  In ~/.rpmmacros, add the line:

%__arch_install_post   /usr/lib/rpm/check-rpaths   /usr/lib/rpm/check-buildroot

Which is, I think, a typo, but it shuts off the check.  However, I wouldn’t advise doing this.

In the case of libxml, the paths are there as an artifact of the build process.  The .so carried the full path to the directory in which it was built as a way to link to the source files, for debugging.  I can see the path by running:

objdump --dwarf libxml_ruby.so
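
A quicker and dirtier check, enough to confirm that the path really is embedded in the binary, is to grep the output of strings:

strings libxml_ruby.so | grep /home/ayoung/rpmbuild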

I don’t have a final solution, because I need to work through the gem install process, but the end effect will be to:

  1. Remove any non-essential files that have this path in them
  2. Rewrite the path in the remaining files to point to the correct final installed location in the RPM

Update:  Since this is a Ruby binary RPM, the correct thing to do is to move the gem install into the %build stage and then copy it into ${BUILDROOT}.  It currently happens in the %install stage.  RPM is wise enough to do much magic in the %build stage, such as producing the debuginfo RPM and so on.
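
In spec file terms, the restructuring would look roughly like this. This is only a sketch: the tmpgemdir scratch directory and the %{gemdir} macro are placeholders of mine, not the actual rubygem-libxml-ruby spec.

%build
# Install the gem into a scratch directory during %build,
# so the build-stage magic can operate on the results.
gem install --local --install-dir ./tmpgemdir %{SOURCE0}

%install
# Copy the already-built gem tree into the buildroot for packaging.
mkdir -p %{buildroot}%{gemdir}
cp -a ./tmpgemdir/* %{buildroot}%{gemdir}/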

Candlepin: Metaphor for an Entitlement System

The planning meeting was held in Massachusetts. When brainstorming project names, someone mentioned that most New England of activities: Candlepin Bowling. Thus, the project is named Candlepin.

When describing a project, especially something fairly abstract like an entitlement system, you can clarify communication by using a strong metaphor for the system. So, to explain entitlements, I am going to use a bowling alley as my metaphor.

One way to think of an entitlement is this:

An entitlement is a contract that says you can hook up your computer system to my content stream.

But for our metaphor, I’m going to say:

An entitlement is kinda like getting a lane at a bowling alley.

To which you say:

Huh?

Think about it.  When you go bowling, you pay money, but you don’t get a good, and you don’t get a service.  What you get is access to a resource for a limited time.  Say a small company wants to do a team building activity:

We’re going bowling!

This company has 18 employees.  So, we go over to Westgate Lanes (a nod to the local candlepin alley of my childhood; indulge me) and we walk to the main desk.  We’ve organized ourselves into six teams of three people each.  We get our shoes, and our group gets three lanes assigned to us.  We go, and each team pairs up with another team, the two teams select a lane from the three available, and they bowl.  After each game, the teams reshuffle the matchups, switch lanes, and play another game.  When each team has played against all the other teams, we return our shoes and go home.

Here is how the analogy maps to entitlement management.

The Data Center is the Bowling Alley.

The lanes are the physical machines that the virtual machines will run on.

The company is still the company paying the bills.

The front desk is the assignment system where you buy slices of time on the machines of the data center.

The three lanes that our company is assigned have a communication network, because we all need to coordinate our games.  This is the VPN and VLAN setup that lets you specify that a cluster of machines can all work together.

The pin setter and the ball retrieval and the scoring projector are analogous to the resources required to run the programs.

The score card is the backing store for the database instance that your applications talk to.

We can extend the metaphor to a larger world, too.  Say we have a bowling league that spans multiple towns and multiple bowling alleys.  This league is composed of teams.  The league sets the schedule, and the games are played at the various alleys throughout the district.  At the end of the season, the lead team from our league actually plays against the lead team from another league.

This reflects the hierarchical structure of resource management.  You can see that the bowling alley doesn’t really care about leagues except as a way to generate traffic through the alleys.  From the Alley’s perspective, the league is just another customer, paying for lane time.  Perhaps in some cases, the league pays for the time, in others, the individual teams do.  Authority to use a specific lane may have to be cleared not only through the clerk at the desk of the alley, but through the league official that is managing a tournament.  Just like if my company buys a chunk of virtual machines on a cloud somewhere, and then delegates them for internal usage.

Note that the metaphor works for internal clouds as well.  At the Really Big Company (RBC) campus, they take their bowling so seriously that they have a series of lanes installed into a building on their campus.  Now, the scheduling and resource management have been brought in house, but the rest of the rules still apply.

Popup notifications

I am easily distracted. If a build takes more than, say, three seconds, I usually will flip to doing something else. This means that I often miss when a build is completed, and end up losing a few minutes here, a few minutes there.

Well, no longer. I use Zenity! What is this, you ask? I didn’t know either until today. Zenity is a command-line tool for making a popup window appear.

Now my build scripts look like this:

mvn -o -Pdev install
zenity --info --text "Build is completed"

This kicks off the build, and, when it is done, I get a lovely popup window telling me the build has completed.

As the corollary to Murphy’s Law states: if it’s stupid, but it works, it ain’t stupid.

Why zenity? I mean, there are at least a dozen different ways to pop up a window. Well, in keeping with that cardinal programmer virtue of laziness, it is because zenity is in the Fedora 11 repo, and I am running Fedora 11. yum install is my friend.

Yes, I realize that if I were cooler, I would make my script tell me success versus failure, and pop up the appropriate window for that. I’m not that cool.

OK, I want to be cool. Here’s the new version:

mvn -o -Pdev install && zenity --info --text "Build is completed" || zenity --warning --text "Build Failed"

This pops up a warning message box when mvn returns non-zero for failure. Note the use of the && and the ||. The evaluation of this is kind of cool: the && (logical and) has short-circuit semantics, so the second portion only gets evaluated if the first part succeeds. The || (logical or) works the other way around: its right-hand side only gets evaluated if everything before it fails.
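
One nitpick, for the truly pedantic: with the && || one-liner, the warning branch also fires if mvn succeeds but the first zenity call itself fails. A plain if/else says exactly what is meant:

if mvn -o -Pdev install ; then
    zenity --info --text "Build is completed"
else
    zenity --warning --text "Build Failed"
fi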

Highlander Syndrome in Package Management

Somewhere between systems work and application development lies the realm of package management. There are two main schools of thought in package management: inclusive or exclusive. If you are inclusive, you want everything inside a package management system, and everything should be inside one package management system. If you are exclusive, you want the system to provide little more than an operational environment, and you will manage your own applications, thank-you-very-much.

One problem with the inclusive approach is that, in the attempt to clean up old versions, you often end up with the Highlander Syndrome: there can be only one version of a library or binary installed on your system. The exclusive approach is more end-application focused. I may need to run a different version of Python than is provided by the system, and I don’t want to be locked into using only the version installed system-wide. In fact, I may require several different versions, and each of these requires its own approach.

CPAN, Pear, and Maven provide language-specific APIs for resolving dependencies at the per-application level. Maven is particularly good at providing multiple versions of an API: it errs so far in this direction that often the same JAR file will exist multiple times in the Maven repository, but under different paths.

There should be middle ground for the end user between all or nothing in package management. As a system administrator, I don’t want users running “just any” software on their system, but as an end user I don’t want to be locked into a specific version of a binary.

If the role of application maintainer is split from the role of system administrator, then the people that fill those two roles may have reason to use different approaches to package management. Since the app developer can’t be trusted, the sys admin doesn’t provide root access. With no root access, the app developer can’t deploy an RPM/Deb/MSI. The app developer doesn’t want the system administrator updating the packages that the app depends on just because there is a new bugfix/feature pack. So, the app developer doesn’t use the libraries provided by the distribution, but instead provides a limited set. Essentially, the system has two administrators, two sets of policy, and two mechanisms for applying that policy.

Each scripting language has its own package management system, but the binary languages tend to use the package management system provided by the operating system.  Most scripting-language programmers prefer to work inside their language of choice, so the Perl system is written in Perl, the Emacs system is written in Lisp, the Python one in Python, and so on.  The Wikipedia article goes into depth on the subject, so I’ll refrain from rewriting that here.

A package management system is really a tuple.  The variables of that system are:

  • The binary format of the package
  • The database used to track the state of the system
  • The mechanism used to fetch packages
  • The conventions for file placement

There is some redundancy in this list.  A file in the package may also be considered a capability, as is the “good name” of the package.  A package may contain empty sets for some of the items in this list.  For example, an administrative package may only specify the code to be executed during install, but may not place any files on a file system.  At the other extreme, a package may provide a set of files with no executable code to be run during the install process.
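
On an RPM-based system you can poke at several of these pieces directly; for example, to list the capabilities a package provides and requires, and to ask which package owns a given file:

rpm -q --provides bash
rpm -q --requires bash
rpm -qf /bin/bash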

Of these items, it is the conventions that really prevent interoperability.  This should come as no surprise: it is always easier to write an adapter on top of an explicit interface than an implicit one.  The Linux Standard Base helps, as do the standards guidelines posted by Debian, Red Hat, and other distribution providers.  However, if you look at the amount of traffic on the mailing lists regarding “file X is in the wrong place for its type,” you can understand why automating a cross-package install is tricky.  Meta package management schemes attempt to mitigate the problem, but they can really only deal with things that are in the right place.

Take the placement of 64-bit binaries.  For library files, Red Hat has provided a dual system: put 32-bit libraries under /usr/lib and 64-bit libraries under /usr/lib64.  Debian puts them all into the same directory, and uses naming to keep them apart.  In neither case, however, did they provide a place to make 32- and 64-bit binaries co-exist. How much easier would migration have been if we had /usr/bin32 and /usr/bin64, with a symlink from either into /usr/bin?

Thus we see a couple of the dimensions of the problem.  An application should have a good name: web server, mail client, and so on.  A system should support multiple things which provide this capability, a reasonable default, and customizability for more advanced users.  The system should provide protection against applications with known security holes, but provide for the possibility of multiple implementations released at different points in time.

An interesting take on package management comes from OSGi.  It is a language-specific package management approach, specifically for Java.  It takes advantage of portions of the Java language to allow the deployment of multiple versions of the same package inside a single process.  When I mentioned this to some old-time Linux sys admins, they blanched.  OSGi does not specify how to fetch the packages, much like RPM without yum or dpkg without APT.  OSGi packages are installed into the application.  As such, they are much more like shared libraries, with specific code sections run on module load and unload.  Different OSGi containers provide different sets of rules, but basically the packages must exist inside a subset of directories in order to be available for activation.  I have heard an interesting idea that the JPackage/RPM approach and OSGi should ideally merge in the future.  To install a Jar into your OSGi container, you would have to install an RPM.

One additional issue on the Java/RPM front is Maven.  Both Maven and RPM want to run the entire build process from start to finish.  Both have the concept of a local database of packages to resolve dependencies.  For long-term Java/RPM peaceful coexistence, RPM is going to have to treat Maven as a first-class citizen, the way that it does make.  Maven should provide a means to generate a spec file that has the absolute minimum in it to track dependencies, and to kick off an RPM build of the Maven artifacts.

    interface2addr

    This little script will give you the IPv4 address for a given network interface, or list all of them if you leave the parameter blank:

    #!/bin/bash

    INTERFACE=$1

    /sbin/ifconfig $INTERFACE | grep "inet addr" | cut -d\: -f 2 | cut -d" " -f 1

    Call it like this:

    ~/bin/interface2addr eth0
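
    It also drops neatly into other scripts. For example, to capture the address in a variable (the interface name here is just an example):

    IP=$(~/bin/interface2addr eth0)
    echo "eth0 is at $IP"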

    Hacking the Palm Pre from Fedora

    My work machine is a Fedora 11 (F11) x86_64 system. The Palm development SDK is distributed as a series of .deb packages, specifically targeted at a 32-bit Ubuntu 8 system. I have the advantage of a 32-bit Debian system at home, so I was able to run through the setup process for development, but ideally I would be able to attach to and control the Pre from my work machine.

    The first step is to download the .deb files onto the F11 machine. I actually only needed the novacom deb, which in my case is novacom_1.0.38_i386.deb. Deb files are accessible using ar (happy Talk Like a Pirate Day!).

    In a new and empty directory, run
    ar -vxf ~/novacom_1.0.38_i386.deb

    And you will see the three contained files:

    control.tar.gz data.tar.gz debian-binary

    Extract the data file using tar

    tar -zxf data.tar.gz

    This will add a usr directory with the binaries in

    usr/local/bin/novacom{d}
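
    If you would rather skip the intermediate files, ar can also print a member straight to stdout and feed it to tar in one step:

    ar p ~/novacom_1.0.38_i386.deb data.tar.gz | tar -zx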

    Novacom is a two-piece effort: a daemon and a client. First make sure you can run the daemon.

    First, let’s see the file type:

    file novacomd

    novacomd: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped

    Let’s see what libraries it requires:

    ldd novacomd
    linux-gate.so.1 => (0x008f1000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x0078a000)
    libusb-0.1.so.4 => not found
    libc.so.6 => /lib/libc.so.6 (0x00110000)
    /lib/ld-linux.so.2 (0x006ae000)

    Note that the USB library is missing. I have it installed on my system, but only the 64-bit version. To get the 32-bit version, first figure out what the 32-bit version is named.

    yum search libusb

    libusb.i586

    And install

    sudo yum install libusb.i586

    The F11 and RHEL approach for running 32-bit apps on 64-bit makes this fairly easy to do. Unlike Debian-based systems, which pretty much require building a chroot if you are going to run a significant number of 32-bit binaries, Red Hat-based systems put 64-bit libraries into /usr/lib64 and 32-bit libraries into /usr/lib, so they don’t conflict. Now ldd shows we have everything:

    ldd novacomd
    linux-gate.so.1 => (0x00262000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x008a5000)
    libusb-0.1.so.4 => /usr/lib/libusb-0.1.so.4 (0x00770000)
    libc.so.6 => /lib/libc.so.6 (0x00263000)
    /lib/ld-linux.so.2 (0x006ae00)

    And we can now run this. Since it is talking straight to hardware, it insists on running as root:

    ./novacomd
    [2009/9/22 11:40:48] novacomd version novacomd-62 starting…
    [2009/9/22 11:40:48] novacomctl socket ready to accept
    [2009/9/22 11:40:48] need to run as super user to access usb

    so:

    sudo ./novacomd
    [2009/9/22 11:41:11] novacomd version novacomd-62 starting…
    [2009/9/22 11:41:11] novacomctl socket ready to accept
    [2009/9/22 11:41:11] sending rst
    [2009/9/22 11:41:11] sending rst
    [2009/9/22 11:41:11] sending rst
    [2009/9/22 11:41:11] going online
    [2009/9/22 11:41:11] novacom_register_device:188: dev 'e851588c804e8caa722490a0314ce9782dd4d9a4' via usb type castle-linux

    Now we turn our attention to the client piece.

    file novacom
    novacom: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
    [ayoung@ayoung novacom]$ ldd novacom
    linux-gate.so.1 =>  (0x00173000)
    libc.so.6 => /lib/libc.so.6 (0x006d2000)
    /lib/ld-linux.so.2 (0x006ae000)

    So we are ready to run.  There is no novaterm in this deb.  Instead, you run novacom in terminal mode.  A little-noted line that I will make big here is:

    ./novacom $* -t open tty://0