Duck Typing in C++

One common description of object-oriented languages is that they use “Duck Typing.”  The idea is that if it looks like a duck, walks like a duck, and sounds like a duck, you can treat it like a duck.  Java and C++ are typically set in opposition to duck typing: you must have a complete symbol match in order to be treated like a duck.

C++ is not duck typed at run time, but it might be helpful to think in terms of duck typing at compile time; template programming is based on the duck principle.  In C++, this is called the implied interface.  A template only cares that the type passed in as the typename has the members that the template uses.  The major difference is that object-oriented languages make this distinction at run time, while C++ makes it at build time.

One rule of thumb that I have found useful in understanding the difference in approach between Java and C++ is this: Java assumes that code will be reused without a recompile; C++ assumes that the compiler will be involved whenever code is reused.  Note that when I say C++, I mean Bjarne Stroustrup and the STL developers, not COM, CORBA, or the many languages built in C++ but on top of the language.  I’m not saying I approve or disapprove of this approach, just that it is a valuable way to think about the language.

Using InitialContext for Inversion of Control in Java

If I were to try to apply my approach to IofC in C++ to Java, the logical starting point is the JNDI Context object.  In JNDI, to get an instance of an object, you call InitialContext.doLookup(String name), which allows you to set a generic type parameter so that casting is handled for you.  This is really close to what I want.  Also, when you request an object by name, what you have registered is either an instance of that object (via the InitialContext.bind method) or a factory, registered earlier, that is called to create the instance.  So far, we are in the right vicinity.

I’ve been doing some Tomcat work recently, so I’ll use that as a starting point. The JNDI implementation embedded in Tomcat is an Apache project with the top-level package name org.apache.naming.  The InitialContext object is actually a participant in a Bridge design pattern: the user creates an InitialContext, and that will call a factory to create a Context object to use internally. Apache Naming creates an instance of org.apache.naming.NamingContext.  The interesting code is here:

protected Object lookup(Name name, boolean resolveLinks)
    throws NamingException {

    // Removing empty parts
    while ((!name.isEmpty()) && (name.get(0).length() == 0))
        name = name.getSuffix(1);
    if (name.isEmpty()) {
        // If name is empty, a newly allocated naming context is returned
        return new NamingContext(env, this.name, bindings);
    }

    NamingEntry entry = (NamingEntry) bindings.get(name.get(0));

    if (entry == null) {
        throw new NameNotFoundException
            (sm.getString("namingContext.nameNotBound", name.get(0)));
    }

We don’t really care what comes after this.  The point is that if the NamingEntry is not in the locally defined set of names, the context does not know how to create the instance, and throws an exception.

What would happen if we turned this assumption on its head?  What if the failure to resolve a name just meant that the context should delegate to the next context in the chain?  We could restrict this approach to a certain naming scheme: if the name given falls inside a new scheme, say java:comp/chain/<classname>, use a chain of responsibility to walk the contexts back up toward the root to resolve the object.

The concept of lookupLink as the default way to fetch something is intriguing.  It means that there is some method of chaining available.  Right now the only things that get resolved are the links that are explicitly put into the namespace.  Immediately following the code quoted above is:

if (name.size() > 1) {
    // If the size of the name is greater than 1, then we go through a
    // number of subcontexts.
    if (entry.type != NamingEntry.CONTEXT) {
        throw new NamingException
            (sm.getString("namingContext.contextExpected"));
    }
    return ((Context) entry.value).lookup(name.getSuffix(1));

Beware that there are two trees here: the naming tree and the scopes of resolution.  It makes sense to think of this as two dimensions rather than one.  The name may be a compound name, and we need to traverse down the tree to find it.  This is the top-down thing I was talking about before: JNDI is designed assuming that the initial context is the root of the tree, as opposed to the current leaf node.  At least, Tomcat starts there.

The nice thing about lazy resolution (backtracking) is that creating a new context is really quick.  If most components are resolved in the request namespace, and only rarely make it all the way up to the global namespace, then there is no performance problem.

In the current Java landscape, there are many APIs for resolving a reference.  Baseline to the language is javax.naming.  The Servlet API has explicit ones for looking up objects in the request, session, and global contexts.  Spring has the BeanFactory interface.  OSGi has BundleContext.  PicoContainer has the pico object.  The fact is that, even with inversion of control, at some point you need to kick off an object creation chain.

For instance, Struts maps a segment of a URL to a class that extends the Struts Action class, after binding the context to an ActionForm.  These objects are configured via an XML file.  The Action object is a flyweight, designed to encapsulate business behavior, whereas the form object is a minimal request- or session-scoped object designed to do some validation prior to handover to the action.  All of this is configured by a servlet.  Once the servlet is deployed, there is no way to change the URL scheme of the application without redeploying the app, making it ill suited to end-user-defined content such as a wiki or blog.  Layout is controlled by Tiles, another related project that merged with Struts, leading to a hybrid API.  JavaServer Faces has its own object registration and creation API as well.  Since JSF and Struts solve similar issues with similar approaches, what I write about one can be applied fairly easily to the other.

As I look at these APIs, I am struck once again by their procedural nature.  Time and again, we see: create object, set properties, call execute method.  Most of these take the Class.forName approach to create the objects, and dependencies are injected via setters.  Not my style.

When I last did this full time, I ended up with a common approach where the primary interaction between the parameters and the Java objects was through builders.  I was never fond of the concept of ‘Validators,’ objects whose sole purpose was to validate a string and then pass it on as a string.  Once I have validated a string, I want to bind it to a class that states the validation as a precondition.  For instance, a Social Security number is in the format DDD-DD-DDDD.  Further business objects should not pass around String ssns, but rather instances of class SSN.  If the SSN is part of a Person object, the SSN is then passed in as a constructor parameter and bound to a final field, making the end object immutable.  If there is an intermediate stage, perhaps a multi-page form to construct the ‘Person,’ intermediate state is held in a PersonBuilder.  The create method of the PersonBuilder enforces that the preconditions are met, or returns a collection of errors.  The completed Person object probably then becomes either a session- or application-scoped variable.  Note that all fields would be final and immutable, meaning the end product is thread safe.

Struts, Tiles, and so on have an API where a context object is passed around from object to object throughout the hierarchy.  Each of these should be wrapped in an Adapter that extends javax.naming.Context and is bound to a well known name via InitialContext.bind().  These objects can then be stacked one after another inside a composite naming context, and called each in turn.  Here’s a first take at the adapter for Tiles.

public TilesApplicationContextAdapter(TilesApplicationContext context)
        throws NamingException {
    if (context == null) throw new IllegalArgumentException(MESSAGE);
    this.context = context;
    new InitialContext().bind(
        "java:/comp/context/" + TilesApplicationContextAdapter.class.getName(),
        context);
}

public final TilesApplicationContext context;

public void bind(Name arg0, Object arg1) throws NamingException {
...
}
/* now the hard work starts, converting from one API to another. */

Now,  the trick is to encapsulate all of this inside a typesafe API.

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class GenericResolver<T> {

    Class<T> classOf;

    GenericResolver(Class<T> c) {
        classOf = c;
    }

    public T fetch(Context context) throws NamingException {
        return (T) context.lookup(lookupStrategy(classOf));
    }

    public T fetch() throws NamingException {
        return fetch(new InitialContext());
    }
}

I’ll leave on this note:

Entangled Dependencies

Our LDAP Client has a wrapper for creating a persistent search.  In order to execute a persistent search, a polling mechanism has to call the client code with notifications of changes from the LDAP server.  This means a threading library.  The call looks something like this:

LDAPClient->GetApp()->GetThreadPool()->ScheduleTimer(MakeFunctor(callback))

Here the call to GetApp is completely superfluous to the logic: we should not care where the ThreadPool comes from.  Instead, the LDAP client should either take a thread pool pointer in its constructor, or a thread pool should be passed as a parameter to CreatePersistentSearch.  I prefer to resolve all dependencies like this at object construction time.

GetApp throws an exception if it is called prior to a call to AppImpl::Init.  Init reads an XML-based config file.  So our dependencies now include both the config file and the XML parser on top of the App object.  The LogObject and LogName are also initialized in the App.

What we are seeing here is how dependencies get entangled implicitly.  In order to reuse the LDAP client, or to create a Unit Test for it, I have to initialize an App object, which is far beyond the scope of LDAP.

Continuing to look at the ThreadPool, I see that the AppImpl actually creates the ThreadPoolWin32 object by passing the config file view to it, and the config file view is used to fetch values for the state of the thread pool value by value.  Example:

_config->Get("TaskMax", &_maxWorkerThreads);
_minIOThreads = 1;
_config->Get("IoMin", &_minIOThreads);
_maxIOThreads = 2 * _maxWorkerThreads + 1;
_config->Get("IoMax", &_maxIOThreads);

The binding of the values to the object should be external to the constructor, as it is just one initialization scheme.  What if we want to reuse this object and read the values in from a database, or from a different config file format?

The LDAP client should have a field typed to the ThreadPool base class.  For the unit test, we could mock this out.  Of course, maybe the persistent search itself should be its own class.

Coding at Walker Interactive Systems

My time at Walker Interactive Systems, a now defunct Mainframe financials company, served both as my transition out of the Army and into the civilian world, and as my introduction to life as a professional programmer.  It marked a rapid change in many of my attitudes about life, software, and business.

When I started at Walker, the company had recently gone through a round of layoffs, and was just recovering enough to start hiring again.  Many of the people on staff around me were veterans of the company, somewhat jaded, and ready to move on.  When I started in January of 1997, there were two other developers in my group: Laura and Tony.  By the time August rolled around, they were both gone.

I worked in a group developing internal software.  The end users were primarily the consultants who customized the company’s software for customers.  Overall, the company sold a suite of financial software: General Ledger, Accounts Receivable, and the like.  The software ran on IBM System/390 mainframes.  Customers were behemoths like AAFES and one of the airlines (I think United).  The primary software was designed to run on a 3270 terminal: a fixed-size, fixed-font terminal run on a screen-by-screen basis.  It was an old technology.  Keystrokes were scanned and processed by the terminal software and stayed on the local system until an explicit “Enter” command would send the whole buffer back to the mainframe.

My team was responsible for turning these screens into an API that Visual Basic programmers could use, a process known as screen scraping.  An offline process would read the database of screens, including position and length information, and store it in a database on the client side.  Code inside the applications would act like a terminal, buffer up the changed information, and send it back and forth to the mainframe.  The code to execute this was split into two pieces, one for the UI and one for direct communication to the mainframe.  When Laura left, I inherited the front end.  When Tony left, I got the rest.

Microsoft technology was then at the DCOM stage.  What had started as Object Linking and Embedding had evolved into a development philosophy that pervaded the Windows flavors of that time: Windows 95 and NT4.  I became a student of the Component Object Model, and learned about monikers and marshalling, reflective cross-language interfaces, and remote procedure calls.  Wrox Press provided most of the information I needed, and Stacey’s bookstore graciously let me spend a portion of my salary on the books I needed.

The first thing I did was a rewrite of the UI code.  This was less a matter of hubris and more a matter of it being easier to learn by writing than by reading the existing code base.  The existing code was in C, and I was immersed in C++.  The existing code was a series of dynamic link libraries for the non-preemptive-multitasking version of Windows that had been forward-ported to Win32, but that retained many vestiges of the older systems.  The old code was more error handling than anything else.  My version had the benefit of focusing on a single platform.  Since I had the freedom to develop the code from scratch without worrying about backwards compatibility, it was much easier to make the new code re-entrant, something that the old code had to work to prevent.

One big lesson learned from this effort was to respect the amount of embedded knowledge that there is in a pre-existing code base.  After the first release of my code, I spent a lot of time fielding bug reports from the UI developers about features I had missed.  I learned the value of having a quick-to-run debugging code driver that allowed me to easily reproduce their problems.  The integrated debugger was invaluable in stepping through, viewing what was actually happening, and allowing me to quickly make and distribute fixes.  I got the notion of step through your code drilled into my bones.

When adapting the Gateway code, I had to deal once again with the rapidly changing API from Microsoft.  The older method of remote procedure call was deprecated; DCOM was the new hotness.  After much trial and error with their Interface Definition Language (IDL) and SAFEARRAYs, I finally got a simple network service to run.  Since my version was expected to be remote, we had reduced licensing from each client machine to a single NT server as well… or at least the potential to do that.  I’m not sure if the company ever followed up on it.

Of course, all of this distracted me from what I was “supposed” to be working on: the replacement for the screen scraping, which was an asynchronous messaging layer.  This was my first exposure to the recurring theme that I will state here as a rule: never name a software project Phoenix.  Really, do you want to imply that your project will go down in flames, dying time and time again?  This project Phoenix was no different.  The concept was good: pull out a section of the COBOL code that could be executed as a standalone transaction, turn this into an asynchronous message, and let the developers send the message to the mainframe.  The reality was that so much of Walker’s code would need to be restructured that even after we got the messaging layer working, the concept was unusable.

One of the end goals of the messaging layer was to work with data bound controls.  These are things like spreadsheets and fields that can be bound to a column from a database query.  Since that meant playing nicely with the Microsoft database technologies, I dove in and made the messaging layer an OLEDB provider.  If this doesn’t scare you, just be glad and move on with your life.  If this does scare you, then you know way too much about Microsoft technologies and I feel somewhat sorry for you.  Basically, I had to learn about the technology that Microsoft was then pushing as a replacement for ODBC, a technology which never really should have caught on, and a technology that should be quietly taken out to the back 40 and shot.

One bug while developing the OLEDB provider took me two weeks to track down and fix.  The mistake was mine and mine alone: I was passing in a NULL to a field that required an allocated structure.  The problem was that, while I owned the calling and implementing code, in the middle it had to go through a Microsoft technology called ActiveX Data Objects (ADO).  This particular API was closed source.  I could see only the disassembly of the executing code, which didn’t provide the context I needed to deduce my mistake.  This adventure really sold me on the concept of Free (as in Speech) Software and Open Source coding.  I had just started experimenting with Linux, and this really drove home why the development model was so successful.

The very tail end of my time at Walker was spent learning about Java.  IBM was pushing Visual Age, and the company was planning a big transition.  Walker really suffered from the .com boom: people fled to cooler, more interesting jobs throughout 1998 and ’99.  I left in the summer of ’99 to go to Fort Point Partners, a boutique services company that was pretty focused on ATG Dynamo and eCommerce.  Soon after I left, I happened across a group of former co-workers.  We chatted, and they let me know that the company was by then very Java and WebSphere focused.  Not long after that, Walker was acquired and then passed on to the corporate afterlife.

Parsing and Inversion of Control

Parsers are essential to the interface layers of an application. A message-based system requires a socket listener that will convert the contents of the stream from the network into a usable collection of objects. In an inversion of control container, these request-scoped objects should be registered instances in the container in order to build/fetch more complex objects, potentially of longer-lived scope. Parsed request-scoped objects should be of classes that enforce invariants, but often will be simple strings in language-specific form; char * or std::string being the most common for C++.

Take the example of a userid passed in as part of the request. There really is no format that this string conforms to other than, perhaps, some constraints of the system. To create an immutable UserID class may really just force casts to char * or std::string when the UserID is to be used.

There are many objects, specifically request-scoped objects, that need to be created based on the runtime values provided by the system. In a pure Inversion of Control (IofC) environment, the parser would create each, and then add them to the container. This may require a large amount of coding in a manner specific to the IofC container. One goal of IofC is to minimize the amount of work that is specific to the container.

Many APIs handle this by creating a map. In the Java HTTP Servlet API, the request object has two string-to-string maps, one for parameters and one for cookies. This may be a good middle ground between the two approaches. A better approach is what Struts does using the Apache Jakarta PropertyUtils API. The action field of a form specifies a URL that in turn maps to a Java object of type Action. The action is configured in an XML file (ugh!) that specifies the subclass of ActionForm. The request parameters are bound to the properties of the ActionForm using the JavaBean coding convention. Thus a parameter user.homePhone.areaCode=415 would force the reflection equivalent of:

action.getForm().getUser().getHomePhone().setAreaCode("415");

One problem with this implementation is that any exception thrown at any point of this binding forces a halt to the entire setting of parameters. The net effect is lots of Struts-specific coding. Personally, I am not a big fan of getter/setter dependency injection, as it tends to violate the precept that classes enforce invariants. It doesn’t have to; it is just that code written that way tends to be of the nature where an object can have an invalid state until the setter is called. However, the setter approach does work well with builders, if the set method can be called multiple times in the role of “BuildPart.”

When discussing the marshaling layer, we can often think in terms of combining interpreters with builders. The interpreter is responsible for converting from the marshaled format to object format for simple data objects. Each of these objects is added to a larger complex object. In terms of the Builder pattern, the interpreter plays the role of the Director. Only once the entirety of the message is parsed will the director call GetResult. Exceptions thrown during the parsing of the message are collected up so that they can be reported in bulk.

One common objection to this style of programming is the cost of multiple exceptions thrown during the parsing stage. Reducing the number of exceptions thrown should be a performance tune performed after the system is operational. The general structure is to split the parse of a data object into a three-stage process. First, create a builder that takes the object in string form. Second, ask the builder if the marshaled form is valid. Third, fetch the object from the builder. If stage two returns false, add an error to the collection of errors and short-circuit the building process. Note the higher degree of complexity compared to performing the parsing in the constructor: the constructor either has to validate the format it is given by the builder, or has to know about the builder object, creating a bidirectional dependency. The logic for calling the builder and checking the return code has to be coded by hand, or has to fit into some larger framework.

The degenerate case also seems to be the most prevalent: confirm the format of the data objects, but then transfer them around as strings. The problem with this approach is that each layer of the application will be written not trusting the other layers, and the objects will be re-parsed, potentially following different rules. From the increase in code size, complexity, and potential for error, we can infer that we should avoid this approach.

Since the builder can be created with minimal dependencies, and most of these can be defined statically, it should be a request scoped component defined in the IofC container. We have to decide at what point in the parsing process do we switch from a general purpose parser to a specific class dedicated to the data coming off the socket. Ideally, the message format provides a way to specify the message type and version early in the stream. This can be the key used to fetch the specific parser. The stream coming off the socket should conform to one of the standard types of the language. In C++ this means something out of std::. The parser can depend on the builder. After the parsing process, one last call into the IofC container kicks off the business processing, that which takes the parsed message and does something with it. At this point we can use something as simple as a functor: Everything is in the IofC layer.

There is a lot here subject to interpretation. The stream may or may not be encoded according to a public protocol, a language specific feature, XML, UUEncoded, encrypted, compressed, and so on. A pipeline can handle the transformation. Converting from stream to objects may use this pipeline, or be a portion of the pipeline, converting from a character or binary based stream to a stream of objects. The issue here is always how to get the objects available to the container without too much container specific logic. If the value comes in key value pairs, the key needs to provide enough information to determine which instance of which class to create.

Since your inversion of control layer should be minimally invasive, you don’t want to have each call in the building of the input validation require knowledge of this layer.  Something about the network protocol is going to be unique enough to pull in the processor for your code.  This is going to be the URL, the message type, or something unique to your protocol.  That should be enough to select the interpreter and the builder objects, connect them, and let the parser become the pump from there on out.  The builder should be registered in the Inversion of Control container.  When something needs the end product of the build process, it should get this from the container, which will delegate to the builder.  This should strike the right balance between type safety and flexibility.

My Ideal Technology Setup for work

“Since I’m dreaming, I’d like a pony” –Susie, in Calvin and Hobbes.

“I’m not just the President of the Hair Club for Men, I’m also a client.” –President of the Hair Club for Men

Not only do I write software, I use it. A whole bunch. I am a Linux guy, and whenever I end up in a situation where I have to work around a proprietary solution that just doesn’t make sense for what I am trying to do, it adds a point or two to my diastolic. So here is my dream setup:


OVER keyword in Oracle

I’m looking at the SQL scripts that create and modify the database for VPXD. Specifically, I am looking at the Oracle version of the stats[123] scripts. They include a query that uses the “OVER” keyword, and I have not found a decent description of what this keyword does. This is the query:

SELECT
    st.counter_id AS COUNTER_ID
    ,v_time_rowid AS TIME_ID
    ,ST.STAT_VAL AS STAT_VAL
    ,DENSE_RANK() OVER (PARTITION BY ST.COUNTER_ID ORDER BY SM.SAMPLE_TIME DESC) AS DENSE_RANK
FROM vpx_sample_time1 sm, vpx_stat_counter sc, vpx_stat_def sd, vpx_hist_stat1 st
WHERE sm.rollup_counter=to_number(to_char(v_start_time_id)||to_char(v_cnt))
    AND st.time_id = sm.time_id
    AND st.counter_id=SC.COUNTER_ID
    AND SC.STAT_ID=SD.ID
    AND ((SC.DEVICE_ID=v_null_device_id and v_stat_rollup_level<3) or (v_stat_rollup_level>=3))
    AND SD.STAT_LEVEL<=v_stat_rollup_level
    AND SD.ROLLUP_TYPE=3)
WHERE DENSE_RANK=1
;

The one description of the Over command I have found has come from the Microsoft website:

“Determines the partitioning and ordering of the rowset before the associated window function is applied”

OK, so it is a union of two types of functionality: partitioning and ordering. The queries I am looking at use both. Let’s start with partitioning:

PARTITION BY: Divides the result set into partitions. The window function is applied to each partition separately and computation restarts for each partition.

The window function in this case must somehow refer to DENSE_RANK. To me this looks like it is possibly trying to insert a subselect into the overall select. Here is how MSDN defines DENSE_RANK. I’d like to reiterate that these are Oracle functions I’m trying to understand, and Microsoft is doing all the work explaining them. When I tried executing this SQL query in SQLPLUS via the Oracle 11 instant client, it balked:
SQL> 2 3 4 5 6 SQL> SP2-0734: unknown command beginning "OVER (PART..." - rest of line ignored.
SQL> SP2-0734: unknown command beginning "FROM vpx_..." - rest of line ignored.

It parses fine when I do it as part of a PL/SQL stored procedure.

Time to dig out the slightly larger guns: I picked up a copy of the SQL Cookbook by Anthony Molinaro from our friends in Sebastopol, CA. This covers both DENSE_RANK and the OVER keyword.

OVER, it turns out, is one of a new set of functions added in SQL:2003 that the standard calls window functions. This means they operate on a subset of the rows returned by the outer query. The best example I saw showed that if you are selecting all employees from all departments, and next to each one you wanted to show how many employees were in that department, you could use count(*) with an OVER clause that specified PARTITION BY department. This example is in the Cookbook.

DENSE_RANK() tells you how many distinct values come before the current row in some ordering scheme. The OVER clause is used to define that ordering scheme. If two rows have the same rank, they have equivalent values according to DENSE_RANK. For instance, if we have five people with five salaries, their DENSE_RANK of salaries will be 1 through 5, highest to lowest. If the top two salaries were identical, DENSE_RANK would return 1, 1, 2, 3, 4.

OK, so it looks like the original query wants to use DENSE_RANK on a subset of the returned rows of the query. The window will be defined by this portion of the clause: PARTITION BY ST.COUNTER_ID

And the order within this window will be determined by: ORDER BY SM.SAMPLE_TIME DESC

OK time to get serious about playing:

I’ve learned a little bit since the above frustrations trying to run the stored procedure. Here’s what I do now:

run sqlplus. For me this is

/home/adyoung/apps/oracle/instantclient_11_1/sqlplus adyoung/adyoung@10.17.195.159/DRORACLE

Now make it so anything I print to the screen is visible:

set serveroutput on

Load my sql file in and execute it.

get myfile.sql

/

OK, I have a development method. To print debugging output:
DBMS_OUTPUT.PUT_LINE('Text Here');
And we can debug. In the time-honored tradition of always moving from success, I comment everything out but a simple output statement. In this case, I know that the real code uses a variable for the rollup level, so my file (minus the commented-out section) looks like this:
create or replace procedure stattest
is
  v_stat_rollup_level int;
begin
  v_stat_rollup_level := 2;
  dbms_output.put_line('Value: ' || TO_CHAR(v_stat_rollup_level));
end stattest;
Which executes using the above method. Now I start adding in parts of the SELECT.
SELECT st.counter_id AS COUNTER_ID FROM vpx_hist_stat1 st;
Of course, it hates this. When I load it into sqlplus I get:
Warning: Procedure created with compilation errors.
Of course, in order to see these compilation errors, I have to jump through a couple more hoops:
SQL> set arraysize 1
SQL> set MAXDATA 6000
SQL> show errors procedure stattest
LINE/COL ERROR
-------- -----------------------------------------------------------------
6/1 PLS-00428: an INTO clause is expected in this SELECT statement
So I add a variable called maxcounter in the declare section and modify my select:
SELECT
  max(st.counter_id) into maxcounter
FROM
  vpx_hist_stat1 st;
dbms_output.put_line('Maxcounter: ' || TO_CHAR(maxcounter));
And I have a procedure that executes and does something. On to the rest of the select. After several iterations of trial and error, I have the SQL query working as a cursor. I’ve now figured out an even better way to work. I’ve modified my file as below, and can now cat it into the sqlplus interpreter:
set arraysize 1
set MAXDATA 6000
set serveroutput on

create or replace procedure stattest
is
  v_stat_rollup_level int := 2;
  maxcounter int := 0;

  CURSOR MYCURSOR is
    SELECT st.counter_id as COUNTER_ID,
           st.STAT_VAL as STAT_VAL
    FROM vpx_sample_time1 sm,
         vpx_stat_counter sc,
         vpx_stat_def sd,
         vpx_hist_stat1 st
    WHERE st.time_id = sm.time_id
      AND st.counter_id = SC.COUNTER_ID
      AND SC.STAT_ID = SD.ID
      AND SD.ROLLUP_TYPE = 3
      AND SD.STAT_LEVEL <= v_stat_rollup_level;

begin

  -- A cursor FOR loop opens the cursor itself; an explicit OPEN here
  -- would raise ORA-06511 (cursor already open).
  FOR recname in MYCURSOR
  LOOP
    dbms_output.put_line('Value: ' || TO_CHAR(v_stat_rollup_level));
  END LOOP;

  dbms_output.put_line('Maxcounter: ' || TO_CHAR(maxcounter));

end stattest;

/

show errors procedure stattest


OK, I am going to bail here.  This post has been half-written for a while, and is not going to get completed anytime soon.  I think I’d vote “OVER” as something that would be useful in the next rev of Postgres, if only to simplify porting from Oracle.