
Introduction and History

The early days of parallel computing were characterized by experimentation, proof-of-concept demonstrations, and a willingness to re-implement programs from scratch for every new computer that came along. This is a fine way to learn how to do parallel computation, but a lousy way to build the infrastructure necessary for growth and stability, and for making parallel computing interesting to anyone but academics. Such infrastructure requires standardization.

During 1993 and 1994, a group of representatives of the computer industry, government labs and academia met to develop a standard interface for the ``message passing'' model of parallel programming. This organization, known as the Message Passing Interface (MPI) Forum, finished its work in June, 1994, producing an industry standard known as MPI [1]. Since this initial (1.0) standard, the MPI Forum has produced versions 1.1 (June, 1995) and 1.2 (July, 1997), which correct errors and minor omissions, and version 2.0 (July, 1997), which adds substantial new functionality to MPI-1.2 [2]. At the time of this writing, there are not yet any full MPI 2.0 implementations. In the rest of this document, MPI refers to MPI 1.1 unless otherwise noted.

An MPI program consists of a set of processes and a logical communication medium connecting those processes. The processes may all execute the same program (SPMD, Single Program Multiple Data) or different programs (MPMD, Multiple Program Multiple Data). The MPI memory model is logically distributed: an MPI process cannot directly access memory in another MPI process, and interprocess communication requires calling MPI routines in both processes. MPI defines a library of subroutines through which MPI processes communicate; this library is the core of MPI and implicitly defines the programming model.
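
To make the model concrete, here is a minimal SPMD example in C (a sketch for illustration, not drawn from the standard's text): every process runs the same source file, and each learns its identity at run time by calling MPI_Comm_rank.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* enter the MPI environment */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many processes in all? */

        printf("Process %d of %d\n", rank, size);

        MPI_Finalize();                        /* leave the MPI environment */
        return 0;
    }

How such a program is started, and with how many processes, is left to the implementation, a point taken up below.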

The most important routines in the MPI library are the so-called ``point-to-point'' communication routines, which allow processes to exchange data cooperatively -- one process sends data to another process, which receives the data. This cooperative form of communication is called ``message passing.''
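
As an illustration (again a sketch, with the payload and message tag chosen arbitrarily for the example), the following fragment shows one such cooperative exchange using only MPI-1 point-to-point calls: process 0 sends an integer, and process 1 posts a matching receive.

    #include <stdio.h>
    #include <mpi.h>

    /* Run with at least two processes. */
    int main(int argc, char *argv[])
    {
        int rank, value = 0;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;  /* arbitrary payload for the example */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Process 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Note that the send alone does not complete the transfer; the receiver must participate, which is precisely what makes the communication cooperative.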

The MPI standard is a specification, not a piece of software. What is specified is the application interface, not the implementation of that interface. To leave implementors free to implement MPI efficiently, the standard specifies no protocols and does not require that implementations interoperate. Moreover, so that MPI can make sense in a wide range of environments, the standard does not specify how processes are created or destroyed, and does not even specify precisely what a process is.

The most important considerations in the design of MPI were portability, high performance, and rich functionality: the interface had to admit efficient implementations on every significant parallel architecture while remaining expressive enough for real applications.

This review discusses several representative implementations of the MPI standard: MPICH, LAM, and CHIMP, which are freely available, multi-platform implementations, as well as optimized implementations supplied and supported by SGI/Cray, IBM, HP, Digital, Sun, and NEC. While an attempt has been made to cover the most visible implementations, some less well-known ones may have been omitted, along with implementations for hardware that is no longer available.

The primary conclusion is that MPI implementations are almost all of high quality, both robust and efficient. While there are minor problems here and there, an application developer considering MPI can be confident that there is a well-supported MPI implementation on almost every commercially important parallel computer. Performance of MPI implementations is usually close to what the hardware can provide, although, for the reasons discussed in Section 3, this review does not evaluate performance in detail.

While this is the main story, there are a few side stories having to do with behavior the MPI standard does not specify: the integration of MPI applications into a parallel environment, tools for tracing and debugging, handling of standard input and output, documentation, buffering strategies, and so on. These side stories are as much the subject of this review as are the MPI implementations themselves. The goal of this review is to orient the potential MPI user in the world of MPI and to describe interesting features of MPI implementations, rather than to compare and rank them, which would be an unproductive exercise.


