Importing JBoss Application Server 5 code into Eclipse

I've been battling getting the JBoss source to import into Eclipse for a couple of days now. I just got the project to show no errors. Here are the steps I took.

Checked the project out from Subversion:

svn co http://anonsvn.jboss.org/repos/jbossas/tags/JBoss_5_1_0_GA jbossas

Built using mvn install. Note that I have a local install of Maven at ~/apps/maven, version 2.0.9, newer than the 2.0.4 from the Fedora 11 repo.

I created a file ~/.m2/settings.xml and populated it with the JBoss repo information.  I’ll include a link.

Opened the Galileo version of Eclipse JEE. Created a vanilla workspace.

Importing the projects into Eclipse showed many issues, mostly dealing with bad classpaths. If you look at the .classpath files for each of the subprojects, you will see that they refer to libs in /thirdparty/. This is the local Maven repository defined in a pom.xml in the project. However, the Maven build puts them under the thirdparty subproject inside of your checkout, leaving most of the projects with the majority of their references unmet.

Open up the build path for a project. Click on the Libraries tab and create a new variable. This variable, which I called THIRD_PARTY, points to your jbossas/thirdparty directory.

Close Eclipse to safely munge the .classpath files.

I ran variations of the following bash commands to rewire the dependencies.

for CLASSPATH in `find . -name .classpath`; do awk '/thirdparty/ { sub( "kind=\"lib\"", "kind=\"var\"" ); sub( "/thirdparty", "THIRD_PARTY" ); print $0 } $0 !~ /thirdparty/ { print $0 }' < $CLASSPATH > $CLASSPATH.new ; mv $CLASSPATH.new $CLASSPATH ; done

Note that I should have used gsub instead of sub, as there are two instances of converting /thirdparty to THIRD_PARTY: path and sourcepath. Instead, I ran the command twice.

Reopening the project in Eclipse showed a slew of build problems due to multiple definitions of the same jar files. Argh!

Close Eclipse.

Run the following bash command to get rid of multiples.

for CLASSPATH in `find . -name .classpath`; do awk '$0 != PREVLINE { print $0 } { PREVLINE = $0 }' < $CLASSPATH > $CLASSPATH.new ; mv $CLASSPATH.new $CLASSPATH ; done

I'm sure there is a better way of getting rid of duplicate lines, but this worked well enough. When I reopened the project, most of the duplicate library build errors were gone. I deleted the rest by hand on the individual projects' Libraries pages.

The next set of errors involved the source paths being incorrectly set up for generated code. Again, I modified these by hand.

An svn diff shows these changes in the .classpath files to be of the form:

-    <classpathentry kind="src" path="output/gen-src"/>

+    <classpathentry kind="src" path="target/generated-sources/idl"/>

The final changes involved adding exclude rules to the source paths for certain files that do not build. These can be gleaned from the pom.xml files. For instance:

./varia/pom.xml:                <exclude>org/jboss/varia/stats/*JDK5.java</exclude>

I was never able to get the embedded project to build correctly.  I closed that project and ignored it.

I had to create a couple of test classes for the test code to compile as well:  MySingleton and CtsCmp2Local.java.  I suspect that these should be generated or just didn’t get checked in.  Obviously, this didn’t break the Maven build.

Now I just need to figure out how to run it.

Context Map of an Application

Of all of the inversion of control containers I've come across, the one that most matches how I like to develop is PicoContainer. What I like best about it is that I can code in Java from start to finish. I don't like switching to a different language in order to define my dependencies. Spring and JBoss have you define your dependencies in XML, which means that all of the Java tools know nothing about them, and javac can't check your work. You don't know until runtime if you made a mistake.
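
To make the contrast concrete, here is a minimal sketch of what staying in Java looks like, assuming the PicoContainer 2.x API (DefaultPicoContainer, addComponent, getComponent); the UserStore and UserService components are made up for illustration.

import org.picocontainer.DefaultPicoContainer;
import org.picocontainer.MutablePicoContainer;

// Hypothetical components, just to show constructor injection.
class UserStore {}

class UserService {
    private final UserStore store;
    public UserService(UserStore store) { this.store = store; }
}

public class Wiring {
    public static void main(String[] args) {
        MutablePicoContainer pico = new DefaultPicoContainer();
        // Plain Java: if UserService's constructor changes, this wiring
        // fails to compile instead of failing at runtime the way a stale
        // XML definition would.
        pico.addComponent(UserStore.class);
        pico.addComponent(UserService.class);
        UserService service = pico.getComponent(UserService.class);
    }
}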

One reason people like XML is it gives a place to look. You know that you are looking for the strategy used to create an object. The web.xml file provides you a starting point to say "Ah, they are using the struts servlet, let me look for the struts config XML file, and then...." Of course, this implies that you know servlets and struts. Coming at a project with no prior knowledge puts you in murkier waters.

An application has a dynamic and a static aspect to it. The dynamic aspect can be captured in a snapshot of the register state, the stack, the heap, and the open files. The static structure is traditionally seen as the code, but that view is a little limiting. Tools like UML and ER diagrams give you a visual representation that is easier to digest. We need a comparable view for IofC.

Many applications have the structure of a directed acyclic graph. The servlet model has components that are scoped global, application, session, request, and page. Each tier of the component model lives a shorter lifetime than the next higher level. However, this general model only provides context in terms of HTTP, not in terms of your actual application. For instance, if you have a single page that has two forms, and wish to register two components that each represent a button, there is no way to distinguish which form a button is inside. Or, if an application has multiple databases, say one for user authentication and a different one for content, but both are registered as application-scoped components, the programmer has to resort to naming the components in order to keep them separate.

While it is not uncommon to have multiple instances of the same class inside of a context scope, keeping the scope small allows the developer to use simple naming schemes to keep them distinct, and that naming scheme itself can make sense within the context of the application. For example, if an application reads from two files, one containing historical user data and one containing newly discovered user information, and performs a complex merge of the two into an output file, the three objects that represent the files can be named based on the expected content of the files as well as their role. If there is another portion of the application that does something like this, but with product data, and the two parts really have little to no commonality of code, the file objects will end up getting the context baked into their registration names:

  • fetchHistoricalUserDataFile
  • fetchNewUserDataFile
  • fetchHistoricalProductDataFile
  • fetchNewProductDataFile

Note that the application developer now must be aware of the components registered elsewhere in the application in order to deconflict names, and that we start depending on naming conventions and other processes that inhibit progress and don't scale.

We see a comparable idea in Java packages: I don't have to worry about conflicting class names, so long as the two classes are in separate packages.

To define an application, then, each section should have a container. The container should have a parent that determines the scope of resolution. The application developer should be comfortable defining new containers for new scopes. Two things that need access to the same object need to be contained inside descendants of the container that holds that dependency.
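
Here is a rough sketch of that shape, again assuming the PicoContainer 2.x API (child containers resolve what they cannot find locally from their parent); the DataFile component and the scope names are hypothetical, standing in for the user and product merge example above.

import org.picocontainer.DefaultPicoContainer;
import org.picocontainer.MutablePicoContainer;

class DataFile {}   // stand-in for the real file-handling component

public class ContextMap {
    public static void main(String[] args) {
        MutablePicoContainer app = new DefaultPicoContainer();

        // Each merge job gets its own scope; lookups fall back to the parent.
        MutablePicoContainer userMerge = app.makeChildContainer();
        userMerge.addComponent("historical", DataFile.class);
        userMerge.addComponent("new", DataFile.class);

        MutablePicoContainer productMerge = app.makeChildContainer();
        productMerge.addComponent("historical", DataFile.class);
        productMerge.addComponent("new", DataFile.class);

        // The names only need to be unique within their own scope.
        DataFile historicalUsers = (DataFile) userMerge.getComponent("historical");
    }
}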

A tool to make this much more manageable would produce a javadoc-like view of the application. It would iterate through each of the containers, from the parent down the tree, and show what classes were registered, and under what names. This would provide a much simpler view of the overall application than traversing XML files.
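
A minimal sketch of such a report, assuming the PicoContainer 2.x ComponentAdapter API for listing registrations; since I do not rely on the container knowing its children, this assumes the application keeps its own ordered map of named scopes.

import java.util.LinkedHashMap;
import java.util.Map;
import org.picocontainer.ComponentAdapter;
import org.picocontainer.PicoContainer;

public class ContainerReport {
    // name -> container, in the order the application created the scopes
    private final Map<String, PicoContainer> scopes = new LinkedHashMap<String, PicoContainer>();

    public void register(String scopeName, PicoContainer container) {
        scopes.put(scopeName, container);
    }

    // Print one section per scope: the key and the implementing class,
    // roughly what a javadoc-style page of the application would show.
    public void print() {
        for (Map.Entry<String, PicoContainer> scope : scopes.entrySet()) {
            System.out.println("== " + scope.getKey() + " ==");
            for (ComponentAdapter<?> adapter : scope.getValue().getComponentAdapters()) {
                System.out.println("  " + adapter.getComponentKey()
                        + " -> " + adapter.getComponentImplementation().getName());
            }
        }
    }
}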

Dependency Collectors

Certain portions of an application function as a registration point, whether they are in the native language of the project or a configuration file that gets read in. These files provide a valuable resource to the code spelunker. For instance, when starting to understand a Java web archive, the standard directory structure with WEB-INF/web.xml provides a very valuable starting point, just as when reading C code you can start with main. The dependency collectors are often an XML file, like struts-config.xml, or the startup portion of a servlet.

The concept in Inversion of Control is that you separate the creation policy of the object from the object itself, such that the two can be varied independently. Often, a project that otherwise does a decent job of cutting dependencies via IofC will build a dependency collector as a way to register all of the factories for the components. The XML files that Spring uses to define all of the control functions are dependency collectors just as surely as a C++ file with an endless Init function that calls "registerFactory" for each component in the inventory.
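
In Java terms, a dependency collector ends up looking something like the sketch below: one class whose only job is to register a factory for every component in the inventory. The Factory interface, the Registry, and the MailSender component are all made up for illustration; the point is the shape, a single ever-growing list of registerFactory calls.

import java.util.HashMap;
import java.util.Map;

// Hypothetical factory interface and registry.
interface Factory<T> {
    T create();
}

class Registry {
    private final Map<Class<?>, Factory<?>> factories = new HashMap<Class<?>, Factory<?>>();

    public <T> void registerFactory(Class<T> type, Factory<T> factory) {
        factories.put(type, factory);
    }

    @SuppressWarnings("unchecked")
    public <T> T create(Class<T> type) {
        return ((Factory<T>) factories.get(type)).create();
    }
}

class MailSender {}

// The dependency collector itself: every component in the inventory shows up here.
class ApplicationInit {
    static void init(Registry registry) {
        registry.registerFactory(MailSender.class, new Factory<MailSender>() {
            public MailSender create() { return new MailSender(); }
        });
        // ... one registration per component, growing without bound.
    }
}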

As you might be able to tell from my tone, I respect the usefulness of the dependency collector, but still feel that there is a mistake in design here. In C++, you can specify a chunk of code guaranteed to run before main that will initialize your factories, so the language provides support for IofC. In Java, classes can have static blocks, but this code only gets executed if the class is somehow referenced, which means this is not a suitable mechanism for registering factories. The common approach of using XML and introspection for factory registration violates the principle of not postponing until runtime that which should be done at compile/link time.
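
A quick illustration of why static blocks fall short (the class names are hypothetical): the block below never runs unless some other code references the class, which is exactly the hard-coded reference we were trying to avoid.

// Hypothetical component whose factory registration lives in a static block.
class PostgresDriverComponent {
    static {
        // Runs only when the class is first loaded, i.e. only after some
        // other code references PostgresDriverComponent by name.
        System.out.println("registering PostgresDriverComponent factory");
    }
}

public class StaticBlockDemo {
    public static void main(String[] args) {
        // Comment this line out and the static block above never executes.
        new PostgresDriverComponent();
    }
}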

So I give myself two goals. 1) To find a suitable Java based mechanism for registering factories and 2) to provide a method to compensate for the lack of orientation that a dependency collector provides.

Using JNA with librpm

Much has changed in the Java world since my last professional project in Java. One significant advance has been in native bindings. JNA (Java Native Access) is a much more straightforward approach than the old Java Native Interface. As a proof of concept, I tried reading the information in my system's RPM database using librpm and the code generated by JNAerator from the rpm headers in /usr/include/rpm.

Here's how I generated the bindings in the first place.

java -jar ~/Download/jnaerator-v0.8-b519.jar -I /usr/include/linux -package rpm -library rpm /usr/include/rpm/*

This was overkill, but, as we all know, you can never have too much overkill. I forced them all into a single package (called rpm for now) and into a single interface in rpm/RpmLibrary.java.

Here is a simple unit test proving that it works. I don't claim that this doesn't leak memory, won't corrupt your database, or won't steal your wallet. Caveat coder. It isn't even a decent unit test.

package rpmdb;

import java.nio.ByteBuffer;

import junit.framework.TestCase;
import rpm.RpmLibrary;
import rpm.RpmLibrary.headerToken_s;
import rpm.RpmLibrary.rpmdbMatchIterator_s;
import rpm.RpmLibrary.rpmts_s;

import com.sun.jna.NativeLong;
import com.sun.jna.ptr.PointerByReference;

public class RpmdbTest extends TestCase {

    public void testReadDbPath() {
        int status = RpmLibrary.INSTANCE.rpmReadConfigFiles((ByteBuffer) null,
                null);
        assertEquals(0, status);
        ByteBuffer buffer = ByteBuffer.wrap("%_dbpath".getBytes());

        String value = RpmLibrary.INSTANCE.rpmExpand(buffer, (Object[]) null);

        System.out.println("Value of macro is " + value);
    }

    public void testReadFromDB() {

        int status = RpmLibrary.INSTANCE.rpmReadConfigFiles((ByteBuffer) null,
                null);
        assertEquals(0, status);

        rpmts_s ts = RpmLibrary.INSTANCE.rpmtsCreate();
        assertNotNull(ts);

        rpmdbMatchIterator_s iter = RpmLibrary.INSTANCE.rpmtsInitIterator(ts,
                rpm.RpmLibrary.rpmTag_e.RPMTAG_NAME, "java-1.6.0-openjdk",
                new NativeLong(0));
        headerToken_s header;
        while ((header = RpmLibrary.INSTANCE.rpmdbNextIterator(iter)) != null) {
            PointerByReference namePtr = new PointerByReference();
            PointerByReference releasePtr = new PointerByReference();
            PointerByReference versionPtr = new PointerByReference();
            RpmLibrary.INSTANCE.headerNVR(header, namePtr, versionPtr,
                    releasePtr);
            System.out.println("Name    = " + namePtr.getValue().getString(0));
            System.out.println("release = " + releasePtr.getValue().getString(0));
            System.out.println("version = " + versionPtr.getValue().getString(0));
        }

        if (ts != null) {
            RpmLibrary.INSTANCE.rpmtsFree(ts);
        }
    }
}

I did have to massage some of the generated code by hand: rpmExpand returned a BytePointerByReference, and modifying the method signature to return a String worked fine.
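
For reference, the hand edit amounts to changing the declaration in the generated rpm/RpmLibrary.java to something like the following (the exact parameter types JNAerator emits may differ):

// Have JNA map the returned char* straight to a Java String
// instead of the generated by-reference wrapper type.
String rpmExpand(ByteBuffer arg, Object... varArgs);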

Reading the links from a webpage

I needed to see the set of RPMs in a YUM repository. I wanted to do this as part of a larger script. To do this, I fetched the page via wget, and then applied an XSL transform to it using the command-line tool xsltproc.

Here is how I called it:

wget -q -O - http://spacewalk.redhat.com/yum/0.5/Fedora/10/x86_64/os/Packages/ | xsltproc --html showhrefs.xslt -

And here is the XSLT file showhrefs.xslt:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="xml" indent="yes"/>

<!-- shut off the default matching rule -->
<xsl:template match="text()" />

<!-- print the href value of the hyperlinks -->
<xsl:template match="a">
<xsl:value-of select="@href" />
<xsl:text>
</xsl:text>

</xsl:template>

</xsl:stylesheet>

Compile Time Dynamic Proxies in C++

These are my notes for compile time proxies generated from C++.  I’m not sure I will be able to understand them in the future, so good luck to you if you feel the need to read them.

Java Dynamic proxies are a well established means of reducing code by extracting a cross cutting concern. The C++ philosophy is more “Why put off to runtime that which can be performed at compile time.” How would we get the same kind of flexibility from C++ as we get from Java Dynamic proxies?

First, we would need a handful of helper classes that mimic the introspection API of Java. If we have the simple classes of Method, Field, Parameter, and Class, we can perform much of the logic we need. Refer to the Java reflection API to see roughly what these classes should contain and what they do.

Code generation is the obvious approach, and the lack of introspection in C++ makes abstract syntax tree analysis the only viable approach currently available. We can get all the information we require from g++ if we just ask nicely. For example, if we add the flag -fdump-translation-unit to g++ we get a file with the AST in an ultra-normalized form. Say I want to find all of the classes defined in the file generated when I compile ExampleTestCase.cpp. The file ExampleTestCase.cpp.t00.tu on line 414 has:

@1086 identifier_node strg: ExampleTestCase lngt: 15

If we then search for what @1086 means:

adyoung@adyoung-devd$ grep -n "@1086 " ExampleTestCase.cpp.t00.tu

1749:@783 type_decl name: @1086 type: @554 srcp: ExampleTestCase.h:14
1762:@787 function_decl name: @1086 type: @1093 scpe: @554
2414:@1086 identifier_node strg: ExampleTestCase lngt: 15
4237:@1932 type_decl name: @1086 type: @554 scpe: @554
4242:@1935 function_decl name: @1086 mngl: @2450 type: @2451
28445:@13185 function_decl name: @1086 mngl: @14801 type: @14802

We see that this identifier is used several places, but the two interesting ones are the type_decl lines, and they both refer to entry @554. Most likely the function definitions are something like the constructors. This is the data on that record:

@554    record_type      name: @783     size: @43      algn: 64
vfld: @784     base: @785     accs: priv
tag : struct   flds: @786     fncs: @787
binf: @788

It needs some prettying up to get it all on one line, but other than that, it looks right. The big thing is the tag: struct that tells us this is a C struct. C++ must be forced to conform to C at some point, so classes become structs.

Let’s take it even simpler.  If we make an empty C++ file, called empty.cpp and compile it with:

g++   -fdump-translation-unit   -c -o empty.o empty.cpp

we get a file with a lot of standard symbols defined:

grep identifier empty.cpp.001t.tu | wc -l
1215

If we add a single static variable, the venerable xyzzy, we can easily find it in the file:

adam@frenzy:~/devel/cpp/proxy$ echo "static int xyzzy;" >> xyzzy.cpp
adam@frenzy:~/devel/cpp/proxy$ g++ -fdump-translation-unit -c -o xyzzy.o xyzzy.cpp
adam@frenzy:~/devel/cpp/proxy$ grep identifier xyzzy.cpp.001t.tu | wc -l
1216

We’ve only added a single line, that looks like this:

@4      identifier_node  strg: xyzzy    lngt: 5

If we now add a Noop struct to that, we get a little bit more info:

adam@frenzy:~/devel/cpp/proxy$ echo "struct Noop{}; static int xyzzy;" >> Noop.cpp
adam@frenzy:~/devel/cpp/proxy$ make Noop.o
g++ -fdump-translation-unit -c -o Noop.o Noop.cpp
adam@frenzy:~/devel/cpp/proxy$ grep identifier Noop.cpp.001t.tu | wc -l
1217

Note that I’ve added -fdump-translation-unit  to the CPPFLAGS in a Makefile.

Each change has a significant effect on the resultant file:

adam@frenzy:~/devel/cpp/proxy$ wc -l Noop.cpp.001t.tu
6853 Noop.cpp.001t.tu
adam@frenzy:~/devel/cpp/proxy$ wc -l xyzzy.cpp.001t.tu
6845 xyzzy.cpp.001t.tu
adam@frenzy:~/devel/cpp/proxy$ wc -l empty.cpp.001t.tu
6841 empty.cpp.001t.tu

Because the symbol gets added early (@4) it bumps all of the other symbols in the file up one, so a diff would take a little parsing.  A visual inspection quickly shows that the following section has been added to xyzzy.cpp.001t.tu

@3      var_decl         name: @4       type: @5       srcp: xyzzy.cpp:1
chan: @6       link: static   size: @7
algn: 32       used: 0
@4      identifier_node  strg: xyzzy    lngt: 5
@5      integer_type     name: @8       size: @7       algn: 32
prec: 32       sign: signed   min : @9
max : @10

If we compare the two files based on the @ signs:

adam@frenzy:~/devel/cpp/proxy$ grep -- @ xyzzy.cpp.001t.tu | wc -l
4427
adam@frenzy:~/devel/cpp/proxy$ grep -- @ empty.cpp.001t.tu | wc -l
4424

We can see we have added three, which corresponds with what we have above.

Just adding the empty struct adds 10 lines:

adam@frenzy:~/devel/cpp/proxy$ grep -- @ Noop.cpp.001t.tu | wc -l
4434

To make it a little easier, I went in and put a carriage return after struct Noop{};. Now I can look for Noop.cpp:1 or Noop.cpp:2.

This seems to be the set of lines added for struct Noop:

@6      type_decl        name: @11      type: @12      srcp: Noop.cpp:1
note: artificial              chan: @13
@7      integer_cst      type: @14      low : 32
@8      type_decl        name: @15      type: @5       srcp: <built-in>:0
note: artificial
@9      integer_cst      type: @5       high: -1       low : -2147483648
@10     integer_cst      type: @5       low : 2147483647
@11     identifier_node  strg: Noop     lngt: 4
@12     record_type      name: @6       size: @16      algn: 8
tag : struct   flds: @17      binf: @18

Let's see what happens if we add a field.

Here’s OneOp.cpp

struct OneOp{
    int aaa;
};
static int xyzzy;

adam@frenzy:~/devel/cpp/proxy$ grep -- @ Noop.cpp.001t.tu | wc -l
4434
adam@frenzy:~/devel/cpp/proxy$ grep -- @ OneOp.cpp.001t.tu | wc -l
4439

We get another five lines.  Let’s see if this is linear.

adam@frenzy:~/devel/cpp/proxy$ grep -- @ TwoOp.cpp.001t.tu | wc -l
4444

adam@frenzy:~/devel/cpp/proxy$ grep -- @ ThreeOp.cpp.001t.tu | wc -l
4449

Let’s try a function now.

adam@frenzy:~/devel/cpp/proxy$ cat OneFunc.cpp
struct OneFunc{
    int narf();
};
static int xyzzy;

adam@frenzy:~/devel/cpp/proxy$ grep -- @ OneOp.cpp.001t.tu | wc -l
4439
adam@frenzy:~/devel/cpp/proxy$ grep -- @ OneFunc.cpp.001t.tu | wc -l
4448

About double the info.

My next goal will be to diagram out the data structures we have here using UML.

Things look fairly straightforward in the deciphering until we get to function_type. There, we have a reference to retn, which in this case happens to be a void, but could conceivably be any of the data types.

I have long since abandoned this approach, but may pick it back up again some day, so I will publish this and let the great crawlers out there make it available to some poor sap who wants to continue it. If you do so, please let me know.

Proxies in C++

The Proxy design pattern and Aspect Oriented Programming have the common goal of extracting cross-cutting concerns from code and encapsulating them. A cross-cutting concern usually happens on a function boundary: security checks, object creation, and so on. Proxies allow you to make an object that mimics the interface of the called object, but which provides additional functionality.

For an inversion of control container, object dependency and object creation may follow two different policies. If object A needs an object of type B, that dependency should be initialized when object A is created. However, if creating object B is expensive, and object B is not always needed, object B should be created on demand. This approach is called "Lazy Load" and it is one of the types of proxies that the Gang of Four book enumerates.

Java provides a mechanism to make a proxy on the fly. The user of the proxy object provides a function

public Object invoke(Object proxy, Method m, Object[] args)
throws Throwable
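
For comparison with the C++ that follows, here is a minimal runnable sketch of the Java side using java.lang.reflect.Proxy; the Greeter and RealGreeter types are made up for illustration, and the handler plays the lazy-load role discussed below.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

class RealGreeter implements Greeter {
    public String greet(String name) { return "hello " + name; }
}

public class LazyProxyDemo {
    public static void main(String[] args) {
        // The handler is the one place the cross-cutting concern lives;
        // here it lazily creates the real object on first use.
        InvocationHandler handler = new InvocationHandler() {
            private Greeter delegate;

            public Object invoke(Object proxy, Method m, Object[] methodArgs)
                    throws Throwable {
                if (delegate == null) {
                    delegate = new RealGreeter();
                }
                return m.invoke(delegate, methodArgs);
            }
        };

        Greeter g = (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);
        System.out.println(g.greet("world"));
    }
}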

Let’s define a C++ class as a pure abstract base class:

class Interface {
public:
    virtual void action1(int i) = 0;
    virtual void action2(int j) = 0;
};

And a class that implements that interface with some side effect.

class RealClass : public Interface {
    int val;
public:
    void action1(int i){ val = i; }
    void action2(int i){ val = 333 * i; }
};

Then a Lazy Load Proxy would be defined like this:

typedef Interface* (*create_delegate_fn)();

class LazyLoadProxy : public Interface {
    create_delegate_fn fetcher;
    Interface* delegate;

    Interface* fetch(){
        if (!delegate){
            delegate = fetcher();
        }
        return delegate;
    }
public:
    LazyLoadProxy(create_delegate_fn create_delegate):
        delegate(0)
    {
        fetcher = create_delegate;
    };

    virtual void action1(int i){
        fetch()->action1(i);
    };
    virtual void action2(int j){
        fetch()->action2(j);
    };
};

This cannot be completely templatized, but a good portion of it can be abstracted away, leaving the compiler to check your work for the rest. If we want to tie this into our inversion of control framework, we need to make sure that the create_delegate has access to the same Zone used to create the proxy object. Thus the Zone should be stored in a member variable of the dynamic proxy. We should really tie this into the resolver.h code from previous posts, and pass the Zone along to be stored in the lazy load proxy. It is also likely that you will want the lazy load proxy to own the delegated item, so you may want to add a virtual destructor to the interface (always a good idea), and then delete the delegate in the destructor of the proxy. Here's the templatized code:

#include <resolver.h>

template <typename T> class LazyLoadProxy : public T {
public:
    typedef T* (*create_delegate_fn)(dependency::Zone&);

private:
    T* (*fetcher)(dependency::Zone&);
    T* delegate;
    dependency::Zone& zone_;

protected:
    T* fetch(){
        if (!delegate){
            delegate = fetcher(zone_);
        }
        return delegate;
    }
public:
    LazyLoadProxy(dependency::Zone& zone, create_delegate_fn create_delegate):
        zone_(zone),
        delegate(0)
    {
        fetcher = create_delegate;
    };

    virtual ~LazyLoadProxy(){
        if (delegate){
            delete delegate;
        }
    }
};

And the code specific to creating and registering the Interface version of the LazyLoadProxy is:

class InterfaceLazy : public LazyLoadProxy<Interface> {
public:
    InterfaceLazy(dependency::Zone& zone, create_delegate_fn create_delegate):
        LazyLoadProxy<Interface>(zone, create_delegate)
    {
    };

    virtual void action1(int i){
        fetch()->action1(i);
    };
    virtual void action2(int j){
        fetch()->action2(j);
    };
};

static Interface* createReal(dependency::Zone& zone){
    return new RealClass;
}

static Interface* createProxy(dependency::Zone& zone){
    return new InterfaceLazy(zone, createReal);
}

DEPENDENCY_INITIALIZATION{
    dependency::supply<Interface>::configure(0, createProxy);
    return true;
}

Java dynamic proxies reduce the code for the proxy down to a single function that gets executed for each method on the public interface, with the assumption that any delegation will be done via the reflection API. C++ does not have a reflection API, so we can't take that approach. If the C++ language were extended to allow the introspection of classes passed to a template, we could build a similar approach at compile time by providing a simple template function that gets expanded for each method of the abstract interface.

Dynamic proxies that are parameter-agnostic are possible in C++, but they are architecture-specific and depend on the parameter-passing convention. I'm looking into this, and will publish what I find in a future article.

Immutability in Databases and Database Access

If we are to follow the advice of Joshua Bloch in Effective Java, we should minimize the mutability of our objects. How does this apply to data access layers, and databases in general?

A good rule of thumb for databases is that if it is important enough to record in a database, it is important enough not to delete from your database... at least, not in the normal course of events. If database tables are primarily read-only, then reading the current item becomes something like "select * from table where key = (select max(key) from table)". Deletes indicate that an error was made. And so on. Business objects are then required to provide the rule that selects the current record for a given entity.

A good example is the physical fitness test given in the Army (the APFT). A soldier takes this test at least once per year, probably more. In order to be considered "in good standing" they have to score more than the minimum in push-ups and sit-ups, and run two miles in less than the maximum time, all scored according to age. The interesting thing is that the active record for a soldier may not be the latest record, but merely the highest score inside of a time range. Failing an APFT only puts a soldier in bad standing if they do not have another test scored in the same time period that is above the minimum standards. A soldier might take the APFT for some reason beyond just minimum qualifications, such as for entry into a school or for a competition.
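
A sketch of that selection rule as a data-access method; the apft_score table, its columns, and the JDBC wiring are all assumptions for illustration.

import java.sql.Connection;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ApftRecords {
    // The "active" record is a business rule, not simply the newest row:
    // take the best score inside the qualification window.
    public Integer bestScoreInWindow(Connection conn, long soldierId,
            Date windowStart, Date windowEnd) throws SQLException {
        PreparedStatement query = conn.prepareStatement(
                "select max(score) from apft_score"
                + " where soldier_id = ? and taken_on between ? and ?");
        query.setLong(1, soldierId);
        query.setDate(2, windowStart);
        query.setDate(3, windowEnd);
        ResultSet rs = query.executeQuery();
        try {
            if (rs.next()) {
                int best = rs.getInt(1);
                return rs.wasNull() ? null : best;
            }
            return null;
        } finally {
            rs.close();
            query.close();
        }
    }
}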

As an aside, notice that the tests are scored based on age. Age should not be recorded, but rather calculated from the date of the test and the soldier's birth date. Never record what you can calculate, especially if the result of the calculation will change over time. Although in this case, it would be OK to record the age of the soldier at the time of the test as a performance optimization, provided said calculation was done by the computer and not by the person entering the scores. Note, however, that doing so will prevent adjustments like recalculating the scores if we find out a soldier lied about his birthday.

Relations are tricky in this regard. For instance, should removing an item from a shopping cart in an eCommerce application be recorded as a delete, or handled in accordance with the no-delete rule? If possible, go with no-delete, as it allows you to track the add-to-cart and remove-from-cart actions of the shopper, something that the marketing side probably wants to know. As a performance optimization, you can delete the relation, but make sure you send the events to some other backing store as well.
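
A sketch of the event-style alternative (the class and field names are made up): instead of deleting the cart row, append a remove event and derive the current contents from the history.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical cart that never deletes history: the current contents are
// derived from the add/remove events, which marketing can still mine.
public class Cart {
    enum Action { ADDED, REMOVED }

    static class CartEvent {
        final String sku;
        final Action action;
        CartEvent(String sku, Action action) { this.sku = sku; this.action = action; }
    }

    private final List<CartEvent> events = new ArrayList<CartEvent>();

    public void add(String sku)    { events.add(new CartEvent(sku, Action.ADDED)); }
    public void remove(String sku) { events.add(new CartEvent(sku, Action.REMOVED)); }

    // Replay the history to get the current contents.
    public Map<String, Integer> contents() {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (CartEvent e : events) {
            int delta = (e.action == Action.ADDED) ? 1 : -1;
            Integer current = counts.get(e.sku);
            counts.put(e.sku, (current == null ? 0 : current) + delta);
        }
        return counts;
    }
}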

Move to Red Hat

Sometimes you can’t tell where you are headed. But, after a while, if you look back, you realize that you have been headed in a straight line exactly where you want to go. Such is the case, I find, with my current acceptance of an offer of employment at Red Hat.

Very shortly, I will take a position as a senior software engineer at Red Hat, in Westford, MA. I am on the team responsible for, amongst other things, Red Hat Satellite Server. This pulls together several trends in my career: Java, Linux, systems management, and JBoss. I look forward to posting lessons learned from this new venture.