Collective communication routines provide for coordinated communication
among a group of processes
[1, 2]. The process group is given by the
communicator object that is input to the routine.
The MPI collective communication routines have
been designed so that their syntax and semantics are consistent with those of
the point-to-point routines. The collective communication
routines may be (but do not have to be) implemented using the MPI
point-to-point routines. Collective communication routines do not have message
tag arguments, though an implementation in terms of the point-to-point
routines may need to make use of tags.
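For example, the following sketch (with rank 0 chosen as the root purely for illustration) broadcasts one integer to every process in MPI_COMM_WORLD; the communicator alone identifies the participating group, and no tag argument appears:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            value = 42;   /* the root supplies the data to be broadcast */

        /* every process calls MPI_Bcast; the communicator, not a tag,
           determines who participates */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("process %d now holds %d\n", rank, value);
        MPI_Finalize();
        return 0;
    }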
A collective communication routine must be called by
all members of the group with consistent arguments. As soon as a process
has completed its role in the collective communication, it may continue
with other tasks. Thus, a collective communication is not necessarily a barrier
synchronization for the group.
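A scatter illustrates this point. In the sketch below (the data values are arbitrary), each process calls MPI_Scatter with consistent arguments, receives its piece, and may then proceed with local work without waiting for the rest of the group:

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int rank, size, chunk;
        int *data = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {   /* the root prepares one value per process */
            data = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++)
                data[i] = i * i;
        }

        /* all members of the group call the routine with consistent arguments */
        MPI_Scatter(data, 1, MPI_INT, &chunk, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* having completed its role, a process may continue with other tasks;
           the call need not act as a barrier for the group */
        chunk = chunk + rank;

        free(data);
        MPI_Finalize();
        return 0;
    }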
MPI does not include nonblocking forms of the collective communication
routines. MPI collective communication routines are
divided into two broad classes: data movement routines and global
computation routines.
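MPI_Bcast and MPI_Scatter in the sketches above are data movement routines; a global computation routine such as MPI_Reduce combines values contributed by the members of the group. A minimal sketch, in which each process contributes its rank and the sum is left on rank 0:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* each process contributes its rank; the values are combined
           with MPI_SUM and the result is placed on the root process */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum of ranks = %d\n", sum);

        MPI_Finalize();
        return 0;
    }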