next up previous contents
Next: Shared-memory SIMD machines Up: Overview of Recent Previous: Introduction and account

The Main Architectural Classes

For many years the taxonomy of Flynn [#flynn##1#] has proven useful for the classification of high-performance computers. This classification is based on the way instruction streams and data streams are manipulated and comprises four main architectural classes. We will first briefly sketch these classes and afterwards fill in some details when each class is described separately.

Although the difference between shared- and distributed-memory machines seems clear-cut, this is not always entirely the case from the user's point of view. For instance, the late Kendall Square Research systems employed the idea of ``virtual shared memory'' at the hardware level. Virtual shared memory can also be simulated at the programming level: the first draft proposal for High Performance Fortran (HPF), published in November 1992 [#HPFspec##1#] and fixed by May 1993, distributes the data over the available processors by means of compiler directives. A system on which HPF is implemented will therefore act as a shared-memory machine to the user. Other vendors of Massively Parallel Processing systems (the buzz-word ``MPP systems'' is fashionable here), like Convex and Cray, also support proprietary virtual shared-memory programming models, which means that these physically distributed-memory systems, by virtue of the programming model, logically behave as shared-memory systems. In addition, packages like TreadMarks [#tredmarks##1#] provide a virtual shared-memory environment for networks of workstations.
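The data-distribution idea behind such compiler directives can be sketched in miniature. The following Python helpers (hypothetical names, not actual HPF syntax or API) show what a BLOCK-style distribution of an array over a number of processors amounts to: each processor owns one contiguous chunk, so the compiler or runtime knows where every element lives.

```python
# Conceptual sketch of a BLOCK data distribution, as used by HPF-style
# compiler directives. The helper names are invented for illustration.

def block_owner(index, n, nprocs):
    """Processor owning element `index` of an n-element array
    distributed blockwise over nprocs processors."""
    block = -(-n // nprocs)          # ceiling division: block size
    return index // block

def local_indices(rank, n, nprocs):
    """Indices of the elements stored locally on processor `rank`."""
    block = -(-n // nprocs)
    return range(rank * block, min((rank + 1) * block, n))

if __name__ == "__main__":
    n, nprocs = 10, 4                # 10 elements over 4 processors
    print([block_owner(i, n, nprocs) for i in range(n)])
    # blocks of size ceil(10/4) = 3: element i goes to processor i // 3
```

In a real HPF implementation the programmer only states the distribution in a directive; the compiler generates the ownership arithmetic and the communication for non-local accesses.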

Another trend that has come up in the last few years is distributed processing. This takes the DM-MIMD concept one step further: instead of many integrated processors in one or several boxes, workstations, mainframes, etc., are connected by Ethernet, FDDI, or otherwise and set to work concurrently on tasks in the same program. Conceptually, this is no different from DM-MIMD computing, but the communication between processors is often orders of magnitude slower. Many packages to realise distributed computing are available, both commercial and non-commercial. Examples are Parasoft's Express (commercial), PVM (standing for Parallel Virtual Machine, non-commercial) [#pvm##1#], and MPI (Message Passing Interface, [#mpi##1#], also non-commercial). PVM and MPI have been adopted by, for instance, Convex, Cray, IBM, and Intel for the transition stage between distributed computing and MPP on clusters of their favourite processors, and they are available on a large number of distributed-memory MIMD systems, and even on shared-memory MIMD systems for compatibility reasons. In addition, there is a tendency to cluster shared-memory systems, for instance by HIPPI channels, to obtain systems with very high computational power. For example, Silicon Graphics already provides such arrays of systems; the Intel Paragon with MP (Multi Processor) nodes and the NEC SX-4 also have this structure. The Convex Exemplar SPP-1200 can be seen as a more integrated example (although its software environment is much more complete and allows shared-memory addressing).
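The explicit message-passing style of PVM and MPI can be mimicked in miniature. The sketch below uses Python's multiprocessing module as a stand-in (an assumption for illustration; real MPI programs use calls such as MPI_Send and MPI_Recv over a communicator): each worker process has its own address space, as on a distributed-memory machine, and data moves only through explicit send and receive operations.

```python
# Miniature sketch of DM-MIMD message passing in the spirit of PVM/MPI,
# using Python's multiprocessing module (illustrative only, not the
# actual PVM or MPI API).
from multiprocessing import Process, Pipe

def worker(conn):
    # The worker owns a separate address space; data arrives only via
    # explicit messages, as on a distributed-memory machine.
    data = conn.recv()               # blocking receive of a work chunk
    conn.send(sum(data))             # send the partial result back
    conn.close()

def scatter_sum(values, nworkers=2):
    """Split `values` over nworkers processes and sum partial results."""
    chunk = -(-len(values) // nworkers)   # ceiling division
    pipes, procs = [], []
    for w in range(nworkers):
        parent, child = Pipe()
        p = Process(target=worker, args=(child,))
        p.start()
        parent.send(values[w * chunk:(w + 1) * chunk])
        pipes.append(parent)
        procs.append(p)
    total = sum(conn.recv() for conn in pipes)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(scatter_sum(list(range(100))))   # sum of 0..99 is 4950
```

The point of the sketch is the programming model, not performance: the programmer decomposes the data and inserts every communication by hand, which is exactly the burden that the virtual shared-memory models discussed above try to remove.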






Jack Dongarra
Sat Feb 10 15:12:38 EST 1996