MPI is intended to be a standard message passing interface for applications
and libraries running on concurrent computers with a logically
distributed memory.
MPI is not specifically designed for use by parallelizing
compilers.
MPI provides no explicit
support for multithreading, although one of the design goals of MPI was
to ensure that it can be implemented efficiently in a multithreaded
environment.
The MPI standard does not mandate that an implementation be
interoperable with other MPI implementations. However, MPI does supply
the message passing routines with all the information needed to allow a
single MPI implementation to operate in a heterogeneous environment.
The main features of MPI are as follows:
- A set of routines that support point-to-point communication between
pairs of processes. Blocking and nonblocking versions of these routines
are provided, and they may be used in four different communication
modes. As discussed in Section 4.1, these modes correspond to different
communication protocols. Message selectivity in point-to-point
communication is by source process and message tag, each of which may
be wildcarded to indicate that any valid value is acceptable; the first
sketch after this list gives an example.
- The communicator abstraction that provides support for the design of
safe, modular parallel software libraries.
- General, or derived, datatypes that permit messages containing
noncontiguous data of differing datatypes to be specified; the second
sketch after this list gives an example.
- Application topologies that specify the logical layout of processes.
A common example is a Cartesian grid, which is often used in two- and
three-dimensional problems; the third sketch after this list shows one
way to construct such a grid.
- A rich set of collective communication routines that perform coordinated
communication among a set of processes.
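
To illustrate point-to-point communication and message selectivity, the
following C program is a minimal sketch (not taken from the MPI
specification; it assumes at least two processes). Process 0 sends an
integer to process 1 with a blocking, standard-mode send; process 1
wildcards both the source and the tag of its receive and then recovers
the actual values from the status argument. Nonblocking variants such
as MPI_Isend and MPI_Irecv follow the same calling pattern but return
before the operation completes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                      /* illustrative payload */
        /* Blocking standard-mode send to process 1 with tag 7 */
        MPI_Send(&value, 1, MPI_INT, 1, 7, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Wildcarded receive: any source and any tag are acceptable */
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        printf("Received %d from process %d with tag %d\n",
               value, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}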
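
The use of a derived datatype can be sketched as follows; the matrix
size and the ranks involved are illustrative. Assuming a 10x10 matrix
of doubles stored in C row-major order, MPI_Type_vector describes one
column as ten noncontiguous elements a stride of ten apart, so a whole
column can be transferred as a single message.

#include <mpi.h>

int main(int argc, char *argv[])
{
    double a[10][10];            /* row-major; contents omitted */
    MPI_Datatype column;
    MPI_Status status;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* One column: 10 blocks of 1 double, stride of 10 doubles */
    MPI_Type_vector(10, 1, 10, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0) {
        /* Send column 3 of the matrix as a single message */
        MPI_Send(&a[0][3], 1, column, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&a[0][3], 1, column, 0, 0, MPI_COMM_WORLD, &status);
    }

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}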
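
An application topology might be constructed as in the following
sketch; the choice of a nonperiodic two-dimensional grid is
illustrative. MPI_Dims_create factors the number of processes into a
balanced grid, MPI_Cart_create builds a communicator with the Cartesian
topology attached, and each process then queries its own grid
coordinates.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int dims[2] = {0, 0};        /* 0 means: let MPI choose */
    int periods[2] = {0, 0};     /* no wraparound in either dimension */
    int nprocs, rank, coords[2];
    MPI_Comm grid;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Factor nprocs into a 2-D grid and attach the topology */
    MPI_Dims_create(nprocs, 2, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &grid);

    MPI_Comm_rank(grid, &rank);
    MPI_Cart_coords(grid, rank, 2, coords);
    printf("Process %d is at grid position (%d,%d)\n",
           rank, coords[0], coords[1]);

    MPI_Comm_free(&grid);
    MPI_Finalize();
    return 0;
}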
In MPI there is no mechanism for creating processes, and
an MPI program is parallel ab initio, i.e., there is a fixed number of
processes from the start to the end of an application program.
All processes are members of at least one process group. Initially all
processes are members of the same group, and a number of routines are
provided that allow an application to create (and destroy) new subgroups.
Within a group each process is assigned a unique rank in the range
0 to n-1, where n is the number of processes in the group. This rank
is used to identify a process, and, in particular, is used to specify the
source and destination processes in a point-to-point communication
operation, and the root process in certain collective communication
operations.
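
For example, in the following sketch (the value broadcast is
illustrative) each process obtains its rank and the size of the initial
group with MPI_Comm_rank and MPI_Comm_size, and the process with rank 0
serves as the root of a collective broadcast over MPI_COMM_WORLD.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, nprocs, n = 0;

    MPI_Init(&argc, &argv);
    /* Each process has a unique rank in the range 0 to nprocs-1 */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (rank == 0)
        n = 100;                 /* illustrative value set at the root */

    /* Rank 0 is the root of this collective operation */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Process %d of %d has n = %d\n", rank, nprocs, n);

    MPI_Finalize();
    return 0;
}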
MPI was designed as a message passing interface rather than a complete
parallel programming environment, and in its current form intentionally
omits many desirable features. For example, MPI lacks
- mechanisms for process creation and control;
- one-sided communication operations that would permit put and get
messages, and active messages;
- nonblocking collective communication operations, and the ability
for a collective communication operation to involve more than one
group;
- language bindings for Fortran 90 and C++.
These issues, and other possible extensions to MPI, are currently being
considered in the MPI-2 effort. Extensions to MPI for performing parallel
I/O are also under consideration.