At the top end of computing there are supercomputers; at the bottom end, embedded devices. In between lies a wide array of computer systems. Personal computers, workstations, and servers are really just a sliding scale of the same general set of technologies, and these systems are, more and more, the building blocks of the technologies higher up the scale. Enterprise computing typically involves high-availability, high Input/Output (I/O) systems. Scientific and technical computing is similar, but high availability matters less than performance. Three of the variables that factor into system design are parallelization, running time, and (disk) storage requirements. If a job is small enough to run on a single machine in a reasonable amount of time, it is usually best to leave it there. Any speedup you would get from parallelizing the job and distributing the workload is offset by the serial portion of the job (Amdahl's law), the added overhead of parallelization, and the fact that you could run a different job on the other machine. If your task is parallelizable but very storage intensive, you need a high-speed disk interconnect; nowadays that means Fibre Channel.
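The Amdahl's law trade-off mentioned above can be made concrete with a short sketch. This is an illustrative calculation, not from the original text: it assumes a hypothetical job whose serial fraction is 10% and shows how quickly the returns from adding machines diminish.

```python
def amdahl_speedup(serial_fraction: float, n_machines: int) -> float:
    """Ideal speedup of a job with the given serial fraction
    when its parallel portion is spread across n machines."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_machines)

# Hypothetical job that is 10% serial: even with unlimited machines,
# speedup can never exceed 1 / 0.10 = 10x.
for n in (1, 2, 8, 64):
    print(f"{n:3d} machines -> {amdahl_speedup(0.10, n):.2f}x speedup")
```

With 64 machines the job above runs only about 8.8 times faster, and that is before counting the overhead of parallelization itself, which is why small jobs are usually best left on a single machine.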
Only when a job takes so long that parallelizing it makes sense, and that job does not require significant access to storage, is a traditional Beowulf cluster the right choice. Although InfiniBand can handle the interconnect for both network and storage access, the file systems themselves do not yet handle access by large clusters.
This is the point for which we need a new term: storage-bound, single-system jobs that should be run on their own machine. Examples abound throughout science, engineering, enterprise, and government. Potential terms include Small Scale HPC, Single System HPC, and Storage Bound HPC, but none of them really rolls off the tongue.