All to All Scatter/Gather




All to All Scatter/Gather


int MPI_Alltoall(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

MPI_ALLTOALL(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

MPI_ALLTOALL is an extension of MPI_ALLGATHER to the case where each process sends distinct data to each of the receivers. The jth block sent from process i is received by process j and is placed in the ith block of recvbuf.

The type signature associated with sendcount and sendtype at a process must be equal to the type signature associated with recvcount and recvtype at any other process. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of processes. As usual, however, the type maps may be different.

The outcome is as if each process executed a send to each process (itself included) with a call to MPI_Send(sendbuf + i*sendcount*extent(sendtype), sendcount, sendtype, i, ...), and a receive from every other process with a call to MPI_Recv(recvbuf + i*recvcount*extent(recvtype), recvcount, recvtype, i, ...), where i = 0, ..., n-1.

All arguments on all processes are significant. The argument comm must represent the same intragroup communication domain on all processes.

 Rationale.  The definition of MPI_ALLTOALL gives as much 
    flexibility as one would achieve by specifying at each process n 
    independent, point-to-point communications, with two exceptions: all
    messages use the same datatype, and messages are scattered from
    (or gathered to) sequential storage. (End of rationale.)




Jack Dongarra
Fri Sep 1 06:16:55 EDT 1995