All-to-all, vector variant
int MPI_Alltoallv(void* sendbuf, int *sendcounts, int *sdispls, MPI_Datatype sendtype, void* recvbuf, int *recvcounts, int *rdispls, MPI_Datatype recvtype, MPI_Comm comm)
MPI_ALLTOALLV(SENDBUF, SENDCOUNTS, SDISPLS, SENDTYPE, RECVBUF, RECVCOUNTS, RDISPLS, RECVTYPE, COMM, IERROR)
    <type> SENDBUF(*), RECVBUF(*)
    INTEGER SENDCOUNTS(*), SDISPLS(*), SENDTYPE, RECVCOUNTS(*), RDISPLS(*), RECVTYPE, COMM, IERROR
MPI_ALLTOALLV adds flexibility to MPI_ALLTOALL: the location of the data to be sent is specified by sdispls, and the location where the received data is placed is specified by rdispls.
The jth block sent from process i is received by process j and is placed in the ith block of recvbuf. These blocks need not all have the same size.
The type signature associated with sendcounts[j] and sendtype at process i must be equal to the type signature associated with recvcounts[i] and recvtype at process j. This implies that the amount of data sent must be equal to the amount of data received, pairwise between every pair of processes. Distinct type maps between sender and receiver are still allowed.
The outcome is as if each process sent a message to process i with MPI_Send( sendbuf + sdispls[i]·extent(sendtype), sendcounts[i], sendtype, i, ...), and received a message from process i with a call to MPI_Recv( recvbuf + rdispls[i]·extent(recvtype), recvcounts[i], recvtype, i, ...), for i = 0, ..., n - 1.
All arguments on all processes are significant. The argument comm must specify the same intragroup communication domain on all processes.
Rationale. The definition of MPI_ALLTOALLV gives as much flexibility as one would achieve by specifying n independent, point-to-point communications at each process, with the exception that all messages use the same datatype. (End of rationale.)