
MPI-style Message Passing and Collective Operations

Using two different styles of API, one for message passing and another for process control, may in itself cause difficulties for users, especially if they have never used the PVM or MPI buffered pack routines before. Thus, some basic send and receive operations are provided in a form similar to the original MPI buffered operations. For example, the routines

 pvmpi_send(void* buf, int count, MPI_Datatype dtype, int destination, int tag, char* group)
 pvmpi_recv(void* buf, int count, MPI_Datatype dtype, int source, int tag, char* group)

are used in the same way as the current MPI point-to-point operations, except that a group name is given instead of a communicator handle. They support basic contiguous data types, with more advanced derived data types addressed by the PVM CCL project [GROUPS2].
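
As a concrete illustration, the fragment below sketches how a process might exchange a message with the same-numbered rank of a separately started application registered under another group name. The header name pvmpi.h, the group name "ocean", the buffer size, and the tag are assumptions made for the sake of the example; only the two routines above are taken from the interface described here.

  #include <mpi.h>
  #include "pvmpi.h"   /* assumed header declaring pvmpi_send/pvmpi_recv */

  #define NPTS 512

  /* Send a vector of doubles to the same-numbered rank in the "ocean"
     application and receive its reply.  The group name takes the place
     of an MPI communicator handle; ranks and tags are used exactly as
     in the standard MPI point-to-point calls. */
  void exchange_with_ocean(double *buf, int my_rank)
  {
      pvmpi_send(buf, NPTS, MPI_DOUBLE, my_rank, 10, "ocean");
      pvmpi_recv(buf, NPTS, MPI_DOUBLE, my_rank, 10, "ocean");
  }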

The need for collective operations across communicators has been identified by other research groups and has led to an experimental library based upon MPICH, called MPIX (MPI eXtensions) [MPIX]. This library allows many of the current intracommunicator collective operations, such as All Gather and All to All, to work across intercommunicators.
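
For readers unfamiliar with intercommunicators, the sketch below shows how one can be formed between two halves of a single MPI application using standard MPI-1 calls; MPIX then extends collectives such as All Gather to accept the resulting handle, which the standard MPI-1 collectives do not. The splitting scheme and tag value are illustrative choices, and no MPIX function names are reproduced here.

  #include <mpi.h>

  /* Run with an even number of processes so both halves are non-empty. */
  int main(int argc, char **argv)
  {
      MPI_Comm half, inter;
      int rank;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      /* Split the processes into even and odd halves, then join the two
         halves with an intercommunicator whose leaders are the lowest
         rank of each half (world ranks 0 and 1). */
      MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &half);
      MPI_Intercomm_create(half, 0, MPI_COMM_WORLD,
                           (rank % 2 == 0) ? 1 : 0, 1, &inter);

      /* An MPIX-style intercommunicator All Gather would deliver data
         from every process of the remote half to every process of the
         local half, which an MPI-1 MPI_Allgather cannot do. */

      MPI_Comm_free(&inter);
      MPI_Comm_free(&half);
      MPI_Finalize();
      return 0;
  }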

The current PVM group services, based upon the pvm_bcast function, can be used to link different implementations of MPI. Again, PVMPI operations that ease the use of groups can be created and are currently being investigated. One operation of particular interest is an intercommunicator sendrecv call. This call assists the synchronization of two independent applications and allows them to exchange data in a convenient way that matches many domain decomposition models (see Figure 6).

Figure 6: Passing boundary values between two separately initiated applications using pvmpi_sendrecv
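
A sketch of how such a combined call might look in the boundary-exchange setting of Figure 6 is given below. Since the intercommunicator sendrecv is still under investigation, the signature used here, modelled on MPI_Sendrecv with a group name in place of the communicator handle, is purely hypothetical, as are the application name, buffer names, and tags.

  #include <mpi.h>

  /* Hypothetical signature, modelled on MPI_Sendrecv with a group name
     replacing the communicator handle; the routine is still under
     investigation and the final interface may differ. */
  int pvmpi_sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                     int dest, int sendtag,
                     void *recvbuf, int recvcount, MPI_Datatype recvtype,
                     int source, int recvtag, char *group);

  /* Each rank swaps one row of boundary values with the matching rank
     of the separately started "ocean" application in a single call. */
  void swap_boundary(double *mine, double *theirs, int n, int my_rank)
  {
      pvmpi_sendrecv(mine,   n, MPI_DOUBLE, my_rank, 20,
                     theirs, n, MPI_DOUBLE, my_rank, 20, "ocean");
  }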


