UTK / ORNL / ICL
PVMPI: PVM integrating MPI applications
Introduction
PVMPI is a software system that allows independent MPI applications to inter-communicate even if they are running under different MPI implementations and using different language bindings.
Rationale behind the project
Under MPI-1, applications cannot communicate with processes outside their MPI_COMM_WORLD once it has been formed (without using some other entity).
This means that there is only a static process model.
This has advantages such as:
- High-speed address resolution, since addresses never change.
- Simple fault tolerance: all or nothing!
- Fixed topologies are easy to construct and use.
The main disadvantages of such a static model are:
- No ability to use additional resources as they become available.
- Debuggers and monitors cannot dynamically join applications, but instead must start them.
- Loss of a single MPI process causes the loss of the whole application.
- Most MPP implementations disallow off-machine or even off-partition process communication from within MPI.
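As a brief illustration of the static MPI-1 process model, the minimal C program below (standard MPI-1 only, not part of PVMPI) shows that the set of processes is fixed once MPI_Init has formed MPI_COMM_WORLD; MPI-1 offers no call to add processes or to contact another already-running MPI application.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* MPI_COMM_WORLD is now fixed */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* All communication is confined to the "size" processes started
       together; MPI-1 provides no routine to spawn new processes or to
       reach a separately started MPI application. */
    printf("process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}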
Even though the process model in MPI-1 was simple, the diversity of methods available for starting jobs on MPPs meant that no standard method for starting jobs was declared (even the support application MPIRUN varies).
Many message-passing users have come from network/cluster computing environments and hence may feel restricted by MPI and by many of the run-time environments on MPPs.
PVMPI aims to give users of MPI-1 the capabilities and functionality that they have become accustomed to from systems such as PVM or LAM. This includes some features that are not even being considered in the MPI-2 forum.
Goals of the project
The main goal of the PVMPI project is to allow MPI implementations from different MPP vendors to inter-communicate, in order to aid the use of multiple MPPs in solving grand challenge problems.
This would allow users to place different sections of an application on the system best suited to its execution: true heterogeneous computing.
Selected requirements and goals:
- Transparent
User applications are to use native MPI API calls wherever possible (see the sketch after this list).
- Efficient
Additional layers are to be as lightweight as possible so that intra-communication remains as fast as possible.
- Process Control
Unified for all supported MPI implementations, ahead of any MPI-2 implementations.
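To make the transparency and efficiency goals more concrete, the fragment below is a hypothetical sketch only: the names pvmpi.h, PVMPI_Register and PVMPI_Intercomm are illustrative placeholders, not the actual PVMPI interface. The idea it illustrates is that an application registers under a global name, obtains an MPI inter-communicator to an independently started application, and from then on uses nothing but native MPI calls.

#include <mpi.h>
#include "pvmpi.h"   /* hypothetical header; names below are illustrative */

int main(int argc, char **argv)
{
    MPI_Comm inter;          /* inter-communicator to the other application */
    double result = 3.14;

    MPI_Init(&argc, &argv);

    /* Register this application under a global name (hypothetical call). */
    PVMPI_Register("ocean_model", MPI_COMM_WORLD);

    /* Obtain an inter-communicator to an independently started application
       registered as "atmosphere_model" (hypothetical call). */
    PVMPI_Intercomm("atmosphere_model", &inter);

    /* From here on, only native MPI calls are used. */
    MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, inter);

    MPI_Finalize();
    return 0;
}

Because ordinary MPI calls are used once the inter-communicator exists, intra-application traffic stays on the native MPI path and only the cross-application set-up goes through the extra layer.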
Targeted users of the project
Any users who require multi-part dynamic MPI-1 applications and cannot wait for MPI-2 to appear.
Current work
PowerPoint presentation for SuperComputing 96 on PVMPI.
Heterogeneous MPI Application Interoperation and Process Management under PVMPI,
G. Fagg, J. Dongarra, and A. Geist,
University of Tennessee Computer Science Technical Report, CS-97-???, June 1997.
Submitted to the Euro PVM-MPI Conference, Cracow, Poland, November 1997.
A postscript version is available.
Related Projects, Papers, etc.
PVM
Software
Will be released in public beta* form in the third quarter of 1997.
Should support C and F77 bindings under MPIF, LAM and MPICH.
Will require PVM 3.3.8 or above.
PVM patches are needed if the uniform spawn functions are used
(these are provided for PVM 3.3.11).
Please watch for announcements on comp.parallel.pvm and comp.parallel.mpi.
* Note: once enough user comments have been collected about requirements and about the system itself, it will become fixed and officially released. This is being done because some of the concepts used are, and should be, open to discussion.
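As background on the PVM side of process control (PVM 3.3.8 or above is required, as noted above), the fragment below is a minimal sketch of starting tasks with the standard PVM 3 routine pvm_spawn. It shows plain PVM only, not the PVMPI uniform spawn functions, and the executable name my_mpi_app is just a placeholder.

#include <stdio.h>
#include <pvm3.h>

int main(void)
{
    int tids[4];
    int started;
    int mytid = pvm_mytid();   /* enrol this process in the virtual machine */

    printf("spawner tid = t%x\n", mytid);

    /* Start four copies of an executable as PVM tasks; with PvmTaskDefault
       PVM chooses the hosts itself. "my_mpi_app" is a placeholder name. */
    started = pvm_spawn("my_mpi_app", (char **)0, PvmTaskDefault, "", 4, tids);
    if (started < 4)
        fprintf(stderr, "only %d of 4 tasks started\n", started);

    pvm_exit();
    return 0;
}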
Project team (ICL/UTK/ORNL)
Project design, implementation, support and documentation by Graham Fagg.
Testing (beta) team (is or will be!): Kevin London.
Management and support: Jack Dongarra, Shirley Browne and Al Geist.
Last updated August 1 1997 by Browne.