About Adam Young

Once upon a time I was an Army Officer, but that was long ago. Now I work as a Software Engineer. I climb rocks, play saxophone, and spend way too much time in front of a computer.

Importing JBoss Application Server 5 code into Eclipse

I’ve been battling to get the JBoss source to import into Eclipse for a couple of days now. I just got the project to show no errors. Here are the steps I took.

Checked the project out from Subversion:

svn co http://anonsvn.jboss.org/repos/jbossas/tags/JBoss_5_1_0_GA jbossas

Built using mvn install. Note that I have a local install of Maven at ~/apps/maven, which is version 2.0.9, newer than the 2.0.4 from the Fedora 11 repo.

I created a file ~/.m2/settings.xml and populated it with the JBoss repo information.  I’ll include a link.

Opened the Galileo version of Eclipse JEE. Created a vanilla workspace.

Importing the projects into the Eclipse workspace showed many issues, mostly dealing with bad classpaths. If you look at the .classpath files for each of the subprojects, you will see that they refer to libs in /thirdparty/. This is the local Maven repository defined in a pom.xml in the project. However, the Maven build puts them under the thirdparty subproject inside of your checkout, leaving most of the projects with the majority of their references unmet.

Open up the build path for a project. Click on the Libraries tab and create a new variable. This variable, which I called THIRD_PARTY, points to your jbossas/thirdparty directory.

Close Eclipse so you can safely munge the .classpath files.

I ran variations of the following bash commands to rewire the dependencies.

for CLASSPATH in `find . -name .classpath`; do awk '/thirdparty/ { sub( "kind=\"lib\"", "kind=\"var\"" ); sub( "/thirdparty", "THIRD_PARTY" ); print $0 } $0 !~ /thirdparty/ { print $0 }' < $CLASSPATH > $CLASSPATH.new; mv $CLASSPATH.new $CLASSPATH; done

Note that I should have used gsub instead of sub, as there are two instances of /thirdparty to convert to THIRD_PARTY on a given line: path and sourcepath. Instead, I ran the command twice.

Reopening the projects in Eclipse showed a slew of build problems due to multiple definitions of the same JAR files. Argh!

Close Eclipse.

Run the following bash command to get rid of the duplicates.

for CLASSPATH in `find . -name .classpath`; do awk '$0 != PREVLINE { print $0 } { PREVLINE = $0 }' < $CLASSPATH > $CLASSPATH.new; mv $CLASSPATH.new $CLASSPATH; done

I’m sure there is a better way of getting rid of duplicate lines, but this worked well enough. When I reopened the project, most of the duplicate library build errors were gone. I deleted the rest by hand on the individual projects’ Libraries pages.

The next set of errors involved the source paths being incorrectly set up for generated code. Again, I modified these by hand.

An svn diff shows these changes in the .classpath files to be of the form:

-    <classpathentry kind="src" path="output/gen-src"/>
+    <classpathentry kind="src" path="target/generated-sources/idl"/>

The final changes involved adding exclude rules to the source paths for certain files that do not build. These can be gleaned from the pom.xml files. For instance:

./varia/pom.xml:                <exclude>org/jboss/varia/stats/*JDK5.java</exclude>

I was never able to get the embedded project to build correctly.  I closed that project and ignored it.

I had to create a couple of test classes for the test code to compile as well:  MySingleton and CtsCmp2Local.java.  I suspect that these should be generated or just didn’t get checked in.  Obviously, this didn’t break the Maven build.

Now I just need to figure out how to run it.

True Two Tiered OS Deployment

JBoss clustering and Penguin’s Clusterware (bproc) have one thing in common: the view of the system spans more than a single underlying system. Other systems have this concept as well, but these two are the ones I know best. Virtualization is currently changing how people do work in the datacenter. Many people have “go virtual first” strategies: all software deployed can only be deployed inside virtual machines. While this simplifies some aspects of system administration, it complicates others: now the system administrators need tools to manage large arrays of systems.

If you combine current virtualization practices with current cluster practices, you have an interesting system. Make a clustered OS instance out of virtual machines and deploy it across an array of embedded hypervisors. Any one of the VMs that make up the clustered OS image can migrate to a different machine: after running for a length of time, no VM may be on any of the machines that were originally used to run the clustered OS image.

Such a system would have many benefits. The virtualization technology helps minimize points of failure such that, in theory, the whole system could be checkpointed and restarted from an earlier state, assuming that the networking fabric plays nice. System administration would be simplified, as a unified process tree allows for killing remote processes without having to log in to each and every node to kill them. Naming service management is centralized, as is all policy for the cluster. Additionally, multiple OS images could be installed on the same physical cluster, allowing clear delineation of authority while promoting resource sharing. Meta system administrators would see to the allocation of nodes to a clustered image, while department system admins would manage their particular cluster without handling hardware.

Context Map of an Application

Of all of the inversion of control containers I’ve come across, the one that most matches how I like to develop is PicoContainer. What I like best about it is that I can code in Java from start to finish. I don’t like switching to a different language in order to define my dependencies. Spring and JBoss have you define your dependencies in XML, which means that all of the Java tools know nothing about them, and javac can’t check your work. You don’t know until run time whether you made a mistake.
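As a rough sketch of what that buys you, assuming the PicoContainer 2.x API and two made-up components, the wiring is ordinary Java that javac can check:

import org.picocontainer.DefaultPicoContainer;
import org.picocontainer.MutablePicoContainer;

// Hypothetical components, for illustration only.
class UserStore {
}

class UserService {
    private final UserStore store;

    public UserService(UserStore store) {
        this.store = store;
    }
}

public class WiringExample {
    public static void main(String[] args) {
        MutablePicoContainer pico = new DefaultPicoContainer();
        pico.addComponent(UserStore.class);
        // The container satisfies UserService's constructor from what is registered.
        pico.addComponent(UserService.class);
        UserService service = pico.getComponent(UserService.class);
    }
}

A typo in a component name is a compile error here, rather than something you discover when the XML is parsed.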

One reason people like XML is that it gives you a place to look. You know that you are looking for the strategy used to create an object. The web.xml file provides a starting point to say “Ah, they are using the struts servlet, let me look for the struts config XML file, and then….” Of course, this implies that you know servlets and struts. Coming at a project with no prior knowledge puts you in murkier waters.

An application has a dynamic and a static aspect to it. The dynamic aspect can be captured in a snapshot of the register state, the stack, the heap, and the open files. The static structure is traditionally seen as the code, but that view is a little limiting. Tools like UML and ER diagrams give you a visual representation that is easier to digest. We need a comparable view for IofC.

Many applications have the structure of a directed acyclic graph. The servlet model has components that are scoped global, application, session, request, and page. Each tier of the component model lives a shorter lifetime than the next higher level. However, this general model only provides context in terms of HTTP, not in terms of your actual application. For instance, if you have a single page that has two forms, and wish to register two components that each represent a button, there is no way to distinguish which form a given button is inside. Or, if an application has multiple databases, say one for user authentication and a different one for content, but both are registered as application-scoped components, the programmer has to resort to naming the components in order to keep them separate.

While it is not uncommon to have multiple instances of the same class inside of a context scope, keeping the scope small allows the developer to use simple naming schemes to keep them distinct, and the naming scheme itself can make sense within the context of the application. For example, if an application reads from two files, one containing historical user data and one containing newly discovered user information, and performs a complex merge of the two into an output file, the three objects that represent the files can be named based on the expected content of the files as well as their role. If there is another portion of the application that does something like this, but with product data, and the two parts really have little to no commonality of code, the file objects end up getting the context baked into their registration names:

  • fetchHistoricalUserDataFile
  • fetchNewUserDataFile
  • fetchHistoricalProductDataFile
  • fetchNewProductDataFile

Note now that the application developer must be aware of the components registered elsewhere in the application in order to deconflict names, and that we start depending on naming conventions and other processes that inhibit progress and don’t scale.

We see a comparable idea in Java packages: I don’t have to worry about conflicting class names, so long as the two classes are in separate packages.

To define an application, then, each section should have a container. The container should have a parent that determines the scope of resolution. The application developer should be comfortable defining new containers for new scopes. Two things that need access to the same object need to be contained inside descendants of the container that holds that dependency.
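A minimal sketch of that layout, again assuming PicoContainer-style child containers and hypothetical component names:

import org.picocontainer.DefaultPicoContainer;
import org.picocontainer.MutablePicoContainer;

// Hypothetical components, for illustration only.
class ContentDatabase {
}

class UserMerge {
}

class ProductMerge {
}

public class ScopedContainers {
    public static void main(String[] args) {
        // Application-wide scope: things everybody may resolve.
        MutablePicoContainer app = new DefaultPicoContainer();
        app.addComponent(ContentDatabase.class);

        // Each section of the application gets its own child scope.
        MutablePicoContainer userScope = new DefaultPicoContainer(app);
        userScope.addComponent(UserMerge.class);

        MutablePicoContainer productScope = new DefaultPicoContainer(app);
        productScope.addComponent(ProductMerge.class);

        // Resolution falls back to the parent, so both scopes can see
        // ContentDatabase, but neither can see the other's components.
        ContentDatabase db = userScope.getComponent(ContentDatabase.class);
    }
}

With the scopes kept small, the user and product sections can each register their own components under simple names without colliding, and anything shared lives in the parent.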

A tool to make this much more manageable would produce a javadoc-like view of the application. It would iterate through each of the containers, from the parent down the tree, and show what classes were registered, and under what names. This would provide a much simpler view of the overall application than traversing XML files.

Dependency Collectors

Certain portions of an application function as a registration point, whether they are in the native language of the project or in a configuration file that is read in. These files provide a valuable resource to the code spelunker. For instance, when starting to understand a Java web archive, the standard directory structure with WEB-INF/web.xml provides a very valuable starting point, just as when reading C code you can start with main. The dependency collectors are often an XML file, like struts-config.xml, or the startup portion of a servlet.

The concept in Inversion of Control is that you separate the creation policy of the object from the object itself, such that the two can be varied independently. Often, a project that otherwise does a decent job of cutting dependencies via IofC will build a dependency collector as a way to register all of the factories for the components. The XML files that Spring uses to define all of the control functions are dependency collectors just as surely as a C++ file with an endless Init function that calls “registerFactory” for each component in the inventory.

As you might be able to tell from my tone, I respect the usefulness of the dependency collector, but I still feel that there is a mistake in design here. In C++, you can specify a chunk of code guaranteed to run before main that will initialize your factories, so the language provides support for IofC. In Java, classes can have static blocks, but that code only gets executed if the class is somehow referenced, which means this is not a suitable mechanism for registering factories. The common approach of using XML and introspection for factory registration violates the principle of not postponing until runtime that which should be done at compile/link time.
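To make the static-block objection concrete, here is a sketch using hypothetical StoreFactory and FactoryRegistry names (nothing from a real project). The registration itself is plain Java, but the static block only runs once the JVM actually loads PostgresStoreFactory, so something else still has to reference the class:

import java.util.HashMap;
import java.util.Map;

interface StoreFactory {
    Object create();
}

class FactoryRegistry {
    private static final Map<String, StoreFactory> FACTORIES = new HashMap<String, StoreFactory>();

    static void register(String name, StoreFactory factory) {
        FACTORIES.put(name, factory);
    }

    static StoreFactory lookup(String name) {
        return FACTORIES.get(name);
    }
}

class PostgresStoreFactory implements StoreFactory {
    // Runs only when this class is loaded; nothing forces that to happen.
    static {
        FactoryRegistry.register("postgres", new PostgresStoreFactory());
    }

    public Object create() {
        return new Object(); // stand-in for a real store
    }
}

Forcing that load usually means a Class.forName call or a classpath scan, which is exactly the sort of runtime machinery the XML approach relies on.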

So I give myself two goals. 1) To find a suitable Java based mechanism for registering factories and 2) to provide a method to compensate for the lack of orientation that a dependency collector provides.

Using JNA with librpm

Much has changed in the Java world since my last professional project in Java. One significant advance has been in native bindings. JNA (Java Native Access) is a much more straightforward approach than the old Java Native Interface (JNI). As a proof of concept, I tried reading the information in my system’s RPM database using librpm and the code generated by JNAerator from the RPM headers in /usr/include/rpm.

Here’s how I generated the Java bindings from the headers in the first place.

java -jar ~/Download/jnaerator-v0.8-b519.jar -I /usr/include/linux -package rpm -library rpm /usr/include/rpm/*

This was overkill, but, as we all know, you can never have too much overkill. I forced them all into a single package (called rpm for now) and into a single interface in rpm/RpmLibrary.java.

Here is a simple unit test proving that it works. I don’t claim that this doesn’t leak memory, won’t corrupt your database, or steal your wallet. Caveat Coder. It isn’t even a decent unit test.

package rpmdb;

import java.nio.ByteBuffer;

import junit.framework.TestCase;
import rpm.RpmLibrary;
import rpm.RpmLibrary.headerToken_s;
import rpm.RpmLibrary.rpmdbMatchIterator_s;
import rpm.RpmLibrary.rpmts_s;

import com.sun.jna.NativeLong;
import com.sun.jna.ptr.PointerByReference;

public class RpmdbTest extends TestCase {

    public void testReadDbPath() {
        int status = RpmLibrary.INSTANCE.rpmReadConfigFiles((ByteBuffer) null,
                null);
        assertEquals(0, status);
        ByteBuffer buffer = ByteBuffer.wrap("%_dbpath".getBytes());

        String value = RpmLibrary.INSTANCE.rpmExpand(buffer, (Object[]) null);

        System.out.println("Value of macro is " + value);
    }

    public void testReadFromDB() {

        int status = RpmLibrary.INSTANCE.rpmReadConfigFiles((ByteBuffer) null,
                null);
        assertEquals(0, status);

        rpmts_s ts = RpmLibrary.INSTANCE.rpmtsCreate();
        assertNotNull(ts);

        rpmdbMatchIterator_s iter = RpmLibrary.INSTANCE.rpmtsInitIterator(ts,
                rpm.RpmLibrary.rpmTag_e.RPMTAG_NAME, "java-1.6.0-openjdk",
                new NativeLong(0));
        headerToken_s header;
        while ((header = RpmLibrary.INSTANCE.rpmdbNextIterator(iter)) != null) {
            PointerByReference namePtr = new PointerByReference();
            PointerByReference releasePtr = new PointerByReference();
            PointerByReference versionPtr = new PointerByReference();
            RpmLibrary.INSTANCE.headerNVR(header, namePtr, versionPtr,
                    releasePtr);
            System.out.println("Name    = " + namePtr.getValue().getString(0));
            System.out.println("release = " + releasePtr.getValue().getString(0));
            System.out.println("version = " + versionPtr.getValue().getString(0));
        }

        if (ts != null) {
            RpmLibrary.INSTANCE.rpmtsFree(ts);
        }
    }
}

I did have to massage some of the generated code by hand: rpmExpand returned a BytePointerByReference, and modifying the method signature to return a String worked fine.
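For reference, the hand-edited portion of the binding looks roughly like the sketch below. This is an assumption reconstructed from the calls in the test above, not the actual generated file, and it shows only the one method:

package rpm;

import java.nio.ByteBuffer;

import com.sun.jna.Library;
import com.sun.jna.Native;

public interface RpmLibrary extends Library {
    RpmLibrary INSTANCE = (RpmLibrary) Native.loadLibrary("rpm", RpmLibrary.class);

    // Originally generated with a pointer return type; changed by hand so
    // JNA marshals the returned char* straight into a Java String.
    String rpmExpand(ByteBuffer arg, Object... args);

    // ... the rest of the generated librpm bindings ...
}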

Reading the links from a webpage

I needed to see the set of RPMs in a YUM repository, and I wanted to do this as part of a larger script. To do this, I fetched the page via wget and then applied an XSL transform to it using the command line tool xsltproc.

Here is how I called it:

wget -q -O - http://spacewalk.redhat.com/yum/0.5/Fedora/10/x86_64/os/Packages/ | xsltproc --html showhrefs.xslt -

And here is the XSLT file, showhrefs.xslt:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="xml" indent="yes"/>

<!-- shut off the default matching rule -->
<xsl:template match="text()" />

<!-- print the href value of the hyperlinks -->
<xsl:template match="a">
<xsl:value-of select="@href" />
<xsl:text>
</xsl:text>

</xsl:template>

</xsl:stylesheet>

Reading HTML docs on an ebook reader

The Calibre project is essential to my making full use of my Sony ebook reader. I recently wanted to pull down the HTML documentation for Red Hat Satellite server and load it onto the reader. It was this simple:

wget -rL http://www.redhat.com/docs/manuals/satellite/Red_Hat_Network_Satellite-5.1.0/html/Installation_Guide/index.html

html2epub www.redhat.com/docs/manuals/satellite/Red_Hat_Network_Satellite-5.1.0/html/Installation_Guide/index.html

I probably should have used the -t option to set the title, as I had to rename the file from index.epub.

Compile Time Dynamic Proxies in C++

These are my notes for compile time proxies generated from C++.  I’m not sure I will be able to understand them in the future, so good luck to you if you feel the need to read them.

Java dynamic proxies are a well-established means of reducing code by extracting a cross-cutting concern. The C++ philosophy is more “Why put off to runtime that which can be performed at compile time.” How would we get the same kind of flexibility from C++ as we get from Java dynamic proxies?
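For contrast, here is roughly what the Java side looks like: a minimal sketch using a hypothetical Greeter interface, where the timing logic is written once in an InvocationHandler instead of in every implementation:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class TimingProxyDemo {

    // Hypothetical interface, for illustration only.
    interface Greeter {
        String greet(String name);
    }

    static class SimpleGreeter implements Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    public static void main(String[] args) {
        final Greeter target = new SimpleGreeter();

        InvocationHandler timer = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] methodArgs)
                    throws Throwable {
                long start = System.nanoTime();
                try {
                    return method.invoke(target, methodArgs);
                } finally {
                    System.out.println(method.getName() + " took "
                            + (System.nanoTime() - start) + " ns");
                }
            }
        };

        Greeter proxied = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                timer);

        System.out.println(proxied.greet("world"));
    }
}

The handler is handed a Method object at run time; the C++ question is how to get equivalent Method and Field information with no runtime reflection available, which is what the AST digging below is about.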

First, we would need a handful of helper classes that mimic the introspection API of Java. If we have the simple classes of Method, Field, Parameter, and Class, we can perform much of the logic we need. Refer to the Java reflection API to see roughly what these classes should contain and what they do.

Code generation is the obvious approach, and the lack of introspection in C++ makes abstract syntax tree analysis the only viable approach currently available. We can get all the information we require from g++ if we just ask nicely. If we add the flag -fdump-translation-unit to g++, we get a file with the AST in an ultra-normalized form. For example, say I want to find all of the classes defined in the file generated when I compile ExampleTestCase.cpp. The file ExampleTestCase.cpp.t00.tu on line 414 has:

@1086 identifier_node strg: ExampleTestCase lngt: 15

If we then search for what @1086 means:

adyoung@adyoung-devd$ grep -n "@1086 " ExampleTestCase.cpp.t00.tu

1749:@783 type_decl name: @1086 type: @554 srcp: ExampleTestCase.h:14
1762:@787 function_decl name: @1086 type: @1093 scpe: @554
2414:@1086 identifier_node strg: ExampleTestCase lngt: 15
4237:@1932 type_decl name: @1086 type: @554 scpe: @554
4242:@1935 function_decl name: @1086 mngl: @2450 type: @2451
28445:@13185 function_decl name: @1086 mngl: @14801 type: @14802

We see that this identifier is used in several places, but the two interesting ones are the type_decl lines, and they both refer to entry @554. Most likely the function definitions are something like the constructors. This is the data on that record:

@554    record_type      name: @783     size: @43      algn: 64
vfld: @784     base: @785     accs: priv
tag : struct   flds: @786     fncs: @787
binf: @788

It needs some prettying up to get it all on one line, but other than that, it looks right. The big thing is the tag: struct, which tells us this is a C struct. C++ must be forced to conform to C at some point, so classes become structs.

Let’s take it even simpler.  If we make an empty C++ file, called empty.cpp and compile it with:

g++   -fdump-translation-unit   -c -o empty.o empty.cpp

we get a file with a lot of standard symbols defined:

grep identifier empty.cpp.001t.tu | wc -l
1215

If we add a single static variable, the venerable xyzzy, we can easily find it in the file:

adam@frenzy:~/devel/cpp/proxy$ echo "static int xyzzy;" >> xyzzy.cpp
adam@frenzy:~/devel/cpp/proxy$ g++   -fdump-translation-unit   -c -o xyzzy.o xyzzy.cpp
adam@frenzy:~/devel/cpp/proxy$ grep identifier  xyzzy.cpp.001t.tu | wc -l
1216

We’ve only added a single line, that looks like this:

@4      identifier_node  strg: xyzzy    lngt: 5

If we now add a Noop struct to that, we get a little bit more info:

adam@frenzy:~/devel/cpp/proxy$ echo "struct Noop{}; static int xyzzy;" >> Noop.cpp
adam@frenzy:~/devel/cpp/proxy$ make Noop.o
g++  -fdump-translation-unit    -c -o Noop.o Noop.cpp
adam@frenzy:~/devel/cpp/proxy$ grep identifier  Noop.cpp.001t.tu | wc -l
1217

Note that I’ve added -fdump-translation-unit  to the CPPFLAGS in a Makefile.

Each change has a significant effect on the resultant file:

adam@frenzy:~/devel/cpp/proxy$ wc -l Noop.cpp.001t.tu
6853 Noop.cpp.001t.tu
adam@frenzy:~/devel/cpp/proxy$ wc -l xyzzy.cpp.001t.tu
6845 xyzzy.cpp.001t.tu
adam@frenzy:~/devel/cpp/proxy$ wc -l empty.cpp.001t.tu
6841 empty.cpp.001t.tu

Because the symbol gets added early (@4) it bumps all of the other symbols in the file up one, so a diff would take a little parsing.  A visual inspection quickly shows that the following section has been added to xyzzy.cpp.001t.tu

@3      var_decl         name: @4       type: @5       srcp: xyzzy.cpp:1
chan: @6       link: static   size: @7
algn: 32       used: 0
@4      identifier_node  strg: xyzzy    lngt: 5
@5      integer_type     name: @8       size: @7       algn: 32
prec: 32       sign: signed   min : @9
max : @10

If we compare the two files based on the @ signs:

adam@frenzy:~/devel/cpp/proxy$ grep -- @ xyzzy.cpp.001t.tu | wc -l
4427
adam@frenzy:~/devel/cpp/proxy$ grep -- @ empty.cpp.001t.tu | wc -l
4424

We can see we have added three, which corresponds with what we have above.

Just adding the empty struct adds 10 lines:

adam@frenzy:~/devel/cpp/proxy$ grep -- @ Noop.cpp.001t.tu | wc -l
4434

To make it a little easier, I went in and put a carriage return after struct Noop{}; now I can look for Noop.cpp:1 or Noop.cpp:2.

This seems to be the set of lines added for struct Noop:

@6      type_decl        name: @11      type: @12      srcp: Noop.cpp:1
note: artificial              chan: @13
@7      integer_cst      type: @14      low : 32
@8      type_decl        name: @15      type: @5       srcp: <built-in>:0
note: artificial
@9      integer_cst      type: @5       high: -1       low : -2147483648
@10     integer_cst      type: @5       low : 2147483647
@11     identifier_node  strg: Noop     lngt: 4
@12     record_type      name: @6       size: @16      algn: 8
tag : struct   flds: @17      binf: @18

Let’s see what happens if we add a field.

Here’s OneOp.cpp

struct OneOp{
    int aaa;
};
static int xyzzy;

adam@frenzy:~/devel/cpp/proxy$ grep -- @ Noop.cpp.001t.tu | wc -l
4434
adam@frenzy:~/devel/cpp/proxy$ grep -- @ OneOp.cpp.001t.tu | wc -l
4439

We get another five lines.  Let’s see if this is linear.

adam@frenzy:~/devel/cpp/proxy$ grep -- @ TwoOp.cpp.001t.tu | wc -l
4444

adam@frenzy:~/devel/cpp/proxy$ grep -- @ ThreeOp.cpp.001t.tu | wc -l
4449

Let’s try a function now.

adam@frenzy:~/devel/cpp/proxy$ cat OneFunc.cpp
struct OneFunc{
    int narf();
};
static int xyzzy;

adam@frenzy:~/devel/cpp/proxy$ grep -- @ OneOp.cpp.001t.tu | wc -l
4439
adam@frenzy:~/devel/cpp/proxy$ grep -- @ OneFunc.cpp.001t.tu | wc -l
4448

About double the info.

My next goal will be to diagram out the data structures we have here using UML.

Things look fairly straightforward in the deciphering until we get to function_type. There, we have a reference to retn, which in this case happens to be a void, but could conceivably be any of the data types.

I have long since abandoned this approach, but I may pick it back up again some day, so I will publish this and let the great crawlers out there make it available to some poor sap who wants to continue it. If you do so, please let me know.

Attitude Shift

When I got out of the Army, I had the choice of moving back to Massachusetts or anywhere closer to my last duty station. Since I was in Hawaii at the time, I could choose from a huge swath of the country. I went on several job interviews, and had a few places I could have moved. I picked for location as much as for the job: I moved to San Francisco.
