Groups, Contexts, and Communicators




A key requirement for building robust parallel libraries is a guarantee that communication within a library routine cannot conflict with communication outside the routine. The concepts encapsulated by an MPI communicator provide this support.

A communicator is a data object that specifies the scope of a communication operation: the group of processes involved and the communication context. Contexts partition the communication space; a message sent in one context cannot be received in another. Process ranks are interpreted with respect to the process group associated with a communicator. MPI applications begin with a default communicator, MPI_COMM_WORLD, whose process group contains all processes of the parallel job. New communicators are created from existing communicators, and communicator creation is a collective operation.
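As a minimal sketch of these ideas (assuming a job run with at least two processes), the following C program duplicates MPI_COMM_WORLD with MPI_Comm_dup. The duplicate has the same group but a fresh context, so a message sent on it cannot be received on MPI_COMM_WORLD.

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, msg;
        MPI_Comm dup;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Collective call: every process of MPI_COMM_WORLD participates.
           The duplicate has the same group but a new, distinct context. */
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);

        if (rank == 0) {
            msg = 42;
            MPI_Send(&msg, 1, MPI_INT, 1, 0, dup);
        } else if (rank == 1) {
            /* This receive must be posted on dup: an otherwise identical
               receive on MPI_COMM_WORLD would never match the message,
               because the two communicators carry different contexts. */
            MPI_Recv(&msg, 1, MPI_INT, 0, 0, dup, &status);
        }

        MPI_Comm_free(&dup);
        MPI_Finalize();
        return 0;
    }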

Communicators are especially important for the design of parallel software libraries. Suppose a library provides a parallel matrix-multiplication routine, and we would like distinct subgroups of processes to perform different matrix multiplications concurrently. A communicator provides a convenient mechanism for passing the group of processes involved into the library routine; within the routine, process ranks are interpreted relative to this group. Because of these grouping and ranking mechanisms, communicators are typically passed into library routines that perform internal communication.
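The calling convention might look like the following sketch, in which a hypothetical routine lib_sum (a global sum standing in for the matrix-multiplication internals) takes its communicator from the caller:

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical library routine: all communication is expressed in
       terms of the communicator the caller passes in, so concurrent
       invocations on disjoint groups cannot interfere. */
    double lib_sum(double local, MPI_Comm comm)
    {
        double total;
        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, comm);
        return total;
    }

    int main(int argc, char *argv[])
    {
        int world_rank;
        MPI_Comm subcomm;
        double result;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Split MPI_COMM_WORLD into two disjoint subgroups (even and
           odd world ranks); each subgroup invokes the library
           concurrently, with ranks interpreted relative to its group. */
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank,
                       &subcomm);
        result = lib_sum((double) world_rank, subcomm);
        printf("world rank %d: group sum = %g\n", world_rank, result);

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }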

Such library routines can also create their own unique communicator for internal use. For example, consider an application in which process 0 posts a wildcarded, non-blocking receive just before entering a library routine. Such "promiscuous" posting of receives is a common technique for increasing performance. If the library does not create an internal communicator, incorrect behavior can result: the receive may be satisfied by a message that process 1 sends from within the library routine, if process 1 invokes the library ahead of process 0. In a second example, a process sends a message before entering a library routine, but the destination process does not post the matching receive until after exiting the routine; the message may then be received, incorrectly, within the library routine.
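A sketch of the first scenario (assuming at least two processes; lib_work is a hypothetical library routine, not an MPI name), in which an internal duplicate communicator keeps library traffic away from the wildcarded receive:

    #include <mpi.h>

    /* Hypothetical library routine: it duplicates the caller's
       communicator so that its internal message cannot match the
       caller's wildcarded receive. */
    void lib_work(MPI_Comm comm)
    {
        int rank, token = 1;
        MPI_Comm priv;
        MPI_Status status;

        MPI_Comm_dup(comm, &priv);              /* collective */
        MPI_Comm_rank(priv, &rank);
        if (rank == 1)
            MPI_Send(&token, 1, MPI_INT, 0, 0, priv);
        else if (rank == 0)
            MPI_Recv(&token, 1, MPI_INT, 1, 0, priv, &status);
        MPI_Comm_free(&priv);
    }

    int main(int argc, char *argv[])
    {
        int rank, app_msg;
        MPI_Request req;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Promiscuous receive: matches any source and any tag on
           MPI_COMM_WORLD.  Were lib_work to send on MPI_COMM_WORLD,
           its message from process 1 could be intercepted here. */
        if (rank == 0)
            MPI_Irecv(&app_msg, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                      MPI_COMM_WORLD, &req);

        lib_work(MPI_COMM_WORLD);

        /* The application message the receive was actually meant for. */
        if (rank == 1) {
            int app_data = 7;
            MPI_Send(&app_data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
        if (rank == 0)
            MPI_Wait(&req, &status);

        MPI_Finalize();
        return 0;
    }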

These problems are avoided by proper design and usage of parallel libraries. One workable design is for the application program to pass into the library routine a communicator that specifies the group and guarantees a safe context. Another design has the library create a "hidden," unique communicator in a library initialization call, again correctly partitioning the message space between application and library.
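A condensed sketch of the hidden-communicator design follows; lib_init, lib_finalize, and lib_comm are illustrative names, not MPI identifiers.

    #include <mpi.h>

    /* The library keeps a private communicator, created once in an
       initialization call and used for all internal communication. */
    static MPI_Comm lib_comm = MPI_COMM_NULL;

    void lib_init(MPI_Comm user_comm)
    {
        /* Collective over user_comm; the duplicate carries a fresh
           context, so library messages cannot match receives posted
           by the application on user_comm. */
        MPI_Comm_dup(user_comm, &lib_comm);
    }

    void lib_finalize(void)
    {
        MPI_Comm_free(&lib_comm);
    }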

Sidebar C shows how one might implement the second type of design. As separate communicators are created for libraries, it becomes convenient to associate each new communicator with the communicator from which it was derived. The MPI caching mechanism provides a way to set up such an association. Although caching can attach arbitrary objects to communicators, associating library-internal communicators with user communicators is one of its most important uses.
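One might combine duplication and caching along the following lines, using the MPI-1 attribute functions; the key value and the routine lib_get_comm are illustrative names.

    #include <mpi.h>
    #include <stdlib.h>

    /* Key under which the library caches its duplicate communicator. */
    static int lib_keyval = MPI_KEYVAL_INVALID;

    /* Return the library's private duplicate of user_comm, creating
       and caching it on first use. */
    MPI_Comm lib_get_comm(MPI_Comm user_comm)
    {
        MPI_Comm *hidden;
        int flag;

        /* A real library would supply a delete function that frees
           the duplicate when user_comm is freed; the null functions
           keep this sketch short. */
        if (lib_keyval == MPI_KEYVAL_INVALID)
            MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN,
                              &lib_keyval, NULL);

        /* Has a duplicate already been cached on this communicator? */
        MPI_Attr_get(user_comm, lib_keyval, &hidden, &flag);
        if (!flag) {
            /* No: create one (collective) and cache it for next time. */
            hidden = (MPI_Comm *) malloc(sizeof(MPI_Comm));
            MPI_Comm_dup(user_comm, hidden);
            MPI_Attr_put(user_comm, lib_keyval, hidden);
        }
        return *hidden;
    }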


