Massachusetts has one of the best public school systems in the country. It has been ranked first in the past, and was rated fourth last I checked. California, which has one of the largest economies in the world, is ranked in the bottom quintile. I was lucky that my parents moved from California back to Massachusetts before I was born. My wife and I made the same decision shortly after our son was born. Education, and public education in particular, is important to us.
Playing Football after Graduation
West Point has started letting graduates who are recruited by pro sports teams serve as Army recruiters while they play pro ball. This has caused much heated discussion, especially amongst my fellow alumni. I initially wrote this as a response to an email discussion, but decided to sit on it for a while and ruminate. This is really more a collection of my thoughts at the time than a coherent essay.
It seems particularly stark in contrast to the classmates who are headed to Iraq. Would people feel as strongly about the matter if we were not sending people into harm’s way?
Also, is admissions the only reason that success in Army Sports, Football in particular, is important?
What about the rest of us who “did our time” and are now sitting out this conflict? Yeah, we played Army for our total of 8 years Active and Reserve. For many, it was a great experience that has led to success later in life. Are we any less guilty of avoiding our Duty? How about the guy who “only” goes Signal Corps as opposed to going into a combat arms branch, or who goes Artillery to avoid Infantry? There is always a way that someone who goes less than the full HUAH can be said to be shirking.
Is it really doing our Country any good to be sending our Grads over to Iraq? I think most people would say that it is not cut and dried: some yes, some no, many I-don’t-knows. So why is it so important that these kids go to Iraq instead of playing Football? Is it really just a question of paying your dues?
Maybe the best way this kid can serve his country is by being a kick-ass footballer, getting the name of West Point up in front of the country, and helping to raise the awareness of civilians that we even have service academies. Maybe he’ll have a two year career, get cut, and end up back on Active Duty. Maybe he’ll be such a kick-ass recruiter that he’ll fill the Army’s quota single-handedly. Or maybe the Army wasted money in training him, and it was a mistake to send him to the NFL.
Is keeping a bunch of barely post-adolescents isolated from the rest of civilization for four years the best way to prepare them for the officer corps? Does the Army get as much bang for its buck via the Service Academies as it does via ROTC? Sure, West Point has produced its share of generals, but would those same people be great generals if they had gone ROTC? Would the opportunities in the Army be different if the largest block of officers in the Army didn’t come from the same school? I have no idea if what we are doing makes sense or not. I know I gained a lot and gave up a lot by going to West Point. I’ll never know what I would have gained if I had gone another route.
Letting Cadets go professional will allow the coaches to recruit players who, as seniors in high school, think they have a chance to play pro ball. Most college football players want to go pro, but few are chosen. I suspect that a good portion of these players would make decent soldiers. So Army Football gets a better team, and the good-but-less-than-great ball players now have the chance of a career as an Army officer.
Many kids enter West Point keeping their options open, and only develop the drive to be Army officers while they are there. I suspect that this is one of the most important roles that West Point plays in support of our officer corps.
Interview Question for Distributed Computing
This is an updated version of my interview question, reworked so that it is no longer bproc-specific.
Using only the following API:
- printf(…)
- int get_node_count() // number of compute nodes attached to the head node; 0 means no other nodes
- int get_current_node() // 0 for the head node, 1-n for the compute nodes
- int remote_fork(int node) // like fork, but returns an fd to the child/parent process
- void send_long_sync(int fd, long value) // send and wait; blocks until receipt
- long recv_long_sync(int fd) // block until the value is available
- long gettime()
Calculate the average clock skew on the cluster. Return 0 on success, and -1 on any failures.
Assume that all nodes are up and running. The language is a C subset of C++. Each of these functions throws an exception upon failure. remote_fork has the same semantics as fork: when it returns, there are two copies of the program running, just on separate machines. The next line of code to execute will be the line immediately after the fork on both machines. However, the returned value is not a process ID, and the parent process does not need to wait for the remote process to finish: the child process is automatically reaped by init.
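For reference, here is a sketch of one possible answer (certainly not the only one), using just the API above. It assumes the child side of remote_fork() should simply report its local time once and then exit, and it estimates each node’s skew as the child’s timestamp minus the midpoint of the parent’s round trip:

int main() {
    try {
        int nodes = get_node_count();
        if (nodes == 0) {
            printf("no compute nodes attached\n");
            return -1;
        }
        long total_skew = 0;
        for (int node = 1; node <= nodes; node++) {
            int fd = remote_fork(node);
            if (get_current_node() != 0) {
                // Child, now running on the compute node: answer one request and exit.
                recv_long_sync(fd);             // wait for the parent's "go"
                send_long_sync(fd, gettime());  // report the local clock
                return 0;
            }
            // Parent, still on the head node: bracket the child's timestamp.
            long t0 = gettime();
            send_long_sync(fd, 0);              // the "go" signal
            long child_time = recv_long_sync(fd);
            long t1 = gettime();
            total_skew += child_time - (t0 + t1) / 2;
        }
        printf("average clock skew: %ld\n", total_skew / nodes);
        return 0;
    } catch (...) {
        return -1;
    }
}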
Oracle to PostgreSQL, Part 1
Here are the steps I am going through to port some code from Oracle PL/SQL to PostgreSQL PL/pgSQL.
Here is the first line in the Oracle version
create or replace procedure stats_rollup1_proc is
This becomes
create or replace function stats_rollup1_proc() returns int as $$
DECLARE
Now, Postgres is not my full-time focus, just a part of the overall job; a PG expert could probably do this better.
The things to note here:
- procedure is not a keyword in plpgsql, thus the function and returns. I suspect I could return a void type, but haven’t looked that hard.
- PostgreSQL requires the text of a stored procedure to be quoted. The $$ is a nice way to deal with that requirement.
- DECLARE is optional in Oracle, but required in PostgreSQL.
At the end of the function:
end stats_rollup1_proc;
Becomes
return 0;
end /*stats_rollup1_proc;*/
$$ LANGUAGE plpgsql
I like leaving the comment in there to match the original begin, since the functions get long enough that it is hard to track. There is no harm in returning 0, even if we don’t really use it as a return code. The final $$ closes out the one from the start of the function. We have to specify the language used, as this same mechanism can be used for any of the languages embedded inside PostgreSQL. Yes, even Python.
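Putting those pieces together, the shell of the converted function looks roughly like this (the body is elided, and note that PL/pgSQL also wants a BEGIN, which I glossed over above):

create or replace function stats_rollup1_proc() returns int as $$
DECLARE
    -- variable declarations go here
BEGIN
    -- the rollup logic goes here
    return 0;
end /*stats_rollup1_proc;*/
$$ LANGUAGE plpgsql;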
OK, now for some mechanics.
In the DECLARE section of the Oracle code we have:
cursor time_cur(current_time_in date) IS
select distinct sample_time
from VPX_SAMPLE_TIME1
WHERE ROLLUP_COUNTER IS NULL
AND SAMPLE_TIME < current_time_in-1/24
order by 1 asc;
v_time VPX_SAMPLE_TIME1.sample_time%type;
which is later used like this:
open time_cur(v_rollup_start_time);
loop
fetch time_cur into v_time;
exit when time_cur%notfound;
In PostgreSQL these can be inlined like this:
for v_time in
select distinct sample_time
from VPX_SAMPLE_TIME1
WHERE ROLLUP_COUNTER IS NULL
AND SAMPLE_TIME < current_time_in-1/24
order by 1 asc
loop
I have not yet figured out how to handle the %notfound, though.
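One approach that looks promising, based on the PL/pgSQL docs, is the implicit FOUND variable: a FOR loop sets it to true if it iterated at least once, so a check right after the loop should roughly mimic %notfound. Something like this (untested) fragment:

end loop;
if not found then
    raise notice 'no sample times to roll up';  -- the %notfound case
end if;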
Working around an NFS hang
While Sun might have wanted us to believe “The Network is The Computer,” the truth is that we often only need the network for access to stuff at brief points in time, and can get away with doing our real work on our local machines. One of the systems at work went down today and is still currently unavailable. This machine exported several directories which I and some of my co-workers have NFS mounted. When it failed, basic utilities on my machine were no longer functioning. I tried several simple commands, like clear, ls, and which.
The problem was that my PATH environment variable had a directory on the remote machine before the local ones. This was required by our build process so that things like make would resolve to the correct version, consistent across all machines. When you type a command at the command line, the shell resolves the command by trying each directory specified by the PATH variable, in order. In this case, the very first directory was not only failing, but hanging.
One trick that helped to confirm the problem was using echo * on a directory to see what files were there. Since echo is built into the shell, it does not require a PATH lookup. To view /usr/bin you execute echo /usr/bin/*.
To work around the problem, export PATH=/bin:/usr/bin:$PATH. With that, basic utilities are once again resolved on the local machine. Once the NFS server comes back up, exit your shell and start a new one, or re-export your original PATH.
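In short, the workaround boiled down to:

# confirm what is in a directory without a PATH lookup (echo is a shell builtin)
echo /usr/bin/*
# put the local directories back in front of the hung NFS mount
export PATH=/bin:/usr/bin:$PATH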
Small Scale High Performance Computing
At the top end of computing there are the supercomputers. At the bottom end there are embedded devices. In between, there is a wide array of computer systems. Personal computers, workstations, and servers are all really just a sliding scale of the same general set of technologies. These systems are, more and more, the building blocks of the technologies higher up the scale. Enterprise computing typically involves high-availability and high Input/Output (I/O) based systems. Scientific and technical computing is similar, but high availability is not as important as performance.
Three of the variables that factor into system design are parallelization, running time, and (disk) storage requirements. If a job is small enough that it can run on a single machine in a reasonable amount of time, it is usually best to leave it to do so. Any speedup you would get by parallelizing the job and distributing the workload is offset (Amdahl’s law) by the serial portion of the job, the added overhead of parallelization, and the fact that you could run a different job on the other machine. If your task is parallelizable but very storage intensive, you need a high speed disk interconnect. Nowadays that means Fibre Channel.
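To put some made-up but plausible numbers on the Amdahl’s law point: if 90 percent of a job parallelizes perfectly, sixteen nodes buy you a speedup of only 1 / (0.10 + 0.90/16), or about 6.4x, and that is before paying any communication overhead. That is a big part of why the single machine so often wins.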
Only if a job takes so long that it makes sense to parallelize, and does not require significant access to storage, does it make sense to go to a traditional Beowulf cluster. Although InfiniBand does handle the interconnect for both network and storage access, the file systems themselves do not yet handle access by large clusters.
This is the point for which we need a new term: storage-bound, single-system jobs that should be run on their own machine. Examples of this abound throughout science, engineering, enterprise, and government. Potential terms for this are Small Scale HPC, Single System HPC, and Storage Bound HPC, but none of them really rolls off the tongue.
Soldier Design Competition at MIT
Last night, USMA and MIT went head to head in a design competition. The details are here:
It was cool to be with the Cadets and MIT students in such a creative environment. The designs were smart, focused, low cost, and viable. Not all of them could be deployed as-is, but even those furthest from field-ready had something to contribute to solving the problems that soldiers face in the field. While there was not a lot of cross talk between competitors, I think the real value of a competition like this would be the cross breeding of ideas.
Two different teams provided solutions aimed at keeping soldiers cool, in order to prevent heat casualties. In both cases, the teams approached the solution by trying to cool off the head. The MIT team made “cool pack” inserts that replaced a portion of the pad in the Kevlar helmet. The packs were activated by punching them, starting an endothermic chemical reaction. The packs in the display room registered 56 degrees, well below the 75 degree or so room temperature. The problems with the design were that the packs didn’t last long enough, and the helmet had to be removed in order to replace the pads. The Cadet team created an insert composed primarily of lightweight aluminum (There should be another I in that word, dammit!) that acted as a heat conductor. Small cartridges made of sponges at the back of the helmet activated the system by evaporation. The problems with this design were the requirement for low ambient humidity (not a problem in Iraq) and the weight of the solution. However, what occurred to me is that you could combine the two solutions, use the cold pack to power the conductor, and get the best of both worlds. I suspect the final design will be somewhere along those lines.
One MIT student had done a stellar job with a wearable solar-energy-based electricity generator. He used fragile solar cells that converted 20% of the sunlight that contacted them, providing 18 Watts of power, just under the 20 Watt target. The innovative part of his research was the attempt to make the panels rugged enough to survive the beating soldiers put on them. Another team of Cadets made a strobe light that was only visible through the latest versions of night vision devices. The idea was that older versions had fallen into the hands of the enemy. The strobe was fragile, and one point they identified as grounds for further research was making it more durable. The materials work of the solar panel project would be a great starting point.
Many of the other projects were worthy of note:
- A firewall that was capable of blocking Skype
- A two battery UPS system for the radios, also field chargeable.
- A spring and cable based system designed to pull a HMMWV turret gunner back into the vehicle in case it is about to flip.
- A Wireless network for a minefield, allowing the friendly forces to turn off the mines to minimize friendly casualties and collateral damage
- A “Spy Rock”
- Two different positioning systems based on things like gyros, accelerometers, and cheap wireless transceivers
- A radio controlled dirigible with autopilot capable of carrying a 3 pound payload.
The projects were judged by a panel with members from industry, academia, and the military. It was especially good to see two Command Sergeants Major on the panel, with a solid understanding of the harsh reality of the life of the soldiers. One was the CSM of the Infantry School at Fort Benning. I can’t think of anyone better equipped to say “Good idea” or “That is too heavy,” or to ask the question “Is that addressing the right problem?”
There were six prizes donated by several companies, each worth several thousand dollars. The USMA team won the highest award and the overall trophy. I was really impressed by the creativity and ingenuity of the students, and the quality of the design process they employed.
Booting into single user mode with GRUB
When the GRUB menu comes up, hit e to edit. Now hit e to edit again. Yes, you have to do it twice.
Add the word single to the end of the boot options.
Hit b to boot.
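For example, a kernel line that originally read something like this (the kernel version and root device here are just placeholders; yours will differ):

kernel /vmlinuz-2.6.9 ro root=/dev/hda1

becomes

kernel /vmlinuz-2.6.9 ro root=/dev/hda1 single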
Echoes of Erudition
Mr. Homer, my ninth grade English teacher, once made a point of describing the joy he felt on that day in Spring when you first notice the buds on the trees. I’d long forgotten that description until moving back to Massachusetts.
In California, there are always some trees that have leaves. The winter months there mean rain and a return to lushness from the brown of Summer.
New England is defined by the transition of colors: orange, gray, white, gray, green.
Biking to work these past few days has required a quicker set of reflexes to avoid the reemergence of the joggers. Many exposed legs and arms iterating above the root-gnarled path along the Charles. They wear t-shirts that don’t quite hide the thin layer of Winter insulation that motivates their activity.
The buds are on the trees, and I only noticed yesterday. Thanks, Mr. Homer.
Faking out PAM Authentication
I am working on a server application that uses Pluggable Authentication Modules (PAM) for authentication support. This application must run as root. As part of development, people need to log in to this server. I don’t want to give out the root password of my development machine to people. My hack was to create a setup in /etc/pam.d/emo-auth that always allows the login to succeed, provided the account exists. The emo-auth configuration is what the application looks up to authenticate network connections.
$ cat /chroot/etc/pam.d/emo-auth
account  required  pam_permit.so
auth     required  pam_permit.so
session  required  pam_permit.so
Now people log in as root, and any password will let them in.
Since this is only for development, this solution works fine, and does not require any code changes.
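For reference, here is a rough sketch, not the actual server code, of how a server can drive that emo-auth service through the standard Linux-PAM calls. The check_login() helper and its conversation function are made up for this example; the conversation function just hands back whatever password the client supplied:

#include <security/pam_appl.h>
#include <stdlib.h>
#include <string.h>

/* Hand the client-supplied password back for any prompt PAM raises. */
static int conv_fn(int num_msg, const struct pam_message **msg,
                   struct pam_response **resp, void *appdata_ptr)
{
    struct pam_response *replies = calloc(num_msg, sizeof(*replies));
    if (replies == NULL)
        return PAM_CONV_ERR;
    for (int i = 0; i < num_msg; i++) {
        if (msg[i]->msg_style == PAM_PROMPT_ECHO_OFF ||
            msg[i]->msg_style == PAM_PROMPT_ECHO_ON)
            replies[i].resp = strdup((const char *)appdata_ptr);
    }
    *resp = replies;
    return PAM_SUCCESS;
}

/* Returns 0 if PAM lets the user in, -1 otherwise. */
int check_login(const char *user, const char *password)
{
    struct pam_conv conv = { conv_fn, (void *)password };
    pam_handle_t *pamh = NULL;

    /* "emo-auth" selects /etc/pam.d/emo-auth, which pam_permit.so turns into a rubber stamp. */
    int rc = pam_start("emo-auth", user, &conv, &pamh);
    if (rc == PAM_SUCCESS)
        rc = pam_authenticate(pamh, 0);
    if (rc == PAM_SUCCESS)
        rc = pam_acct_mgmt(pamh, 0);
    pam_end(pamh, rc);
    return (rc == PAM_SUCCESS) ? 0 : -1;
}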