The exchange communication pattern exhibited by the last example is sufficiently frequent to justify special support. The send-receive operation combines, in one call, the sending of one message to a destination and the receiving of another message from a source. The source and destination are possibly the same. Send-receive is useful for communication patterns where each node both sends and receives messages. One example is an exchange of data between two processes. Another example is a shift operation across a chain of processes. A safe program that implements such a shift with regular sends and receives needs to use an odd/even ordering of communications, similar to the one used in Example . When send-receive is used, data flows simultaneously in both directions (logically, at least), and cycles in the communication pattern do not lead to deadlock.
Send-receive can be used in conjunction with the functions described in Chapter to perform shifts on logical topologies. Send-receive can also be used to implement remote procedure calls: one blocking send-receive call sends the input parameters to the callee and receives back the output parameters.
Send-receive is compatible with normal sends and receives: a message sent by a send-receive can be received by a regular receive or probed by a regular probe, and a send-receive can receive a message sent by a regular send.
int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype, int dest, int sendtag, void *recvbuf, int recvcount, MPI_Datatype recvtype, int source, int recvtag, MPI_Comm comm, MPI_Status *status)
MPI_SENDRECV(SENDBUF, SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVBUF, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS, IERROR)
<type> SENDBUF(*), RECVBUF(*)
INTEGER SENDCOUNT, SENDTYPE, DEST, SENDTAG, RECVCOUNT, RECVTYPE, SOURCE, RECVTAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
MPI_SENDRECV executes a blocking send and receive operation. Both the send and the receive use the same communicator, but have distinct tag arguments. The send and receive buffers must be disjoint, and may have different lengths and datatypes. The next function handles the case where the buffers are not disjoint.
The semantics of a send-receive operation are what would be obtained if the caller forked two concurrent threads, one to execute the send and one to execute the receive, followed by a join of these two threads.
int MPI_Sendrecv_replace(void *buf, int count, MPI_Datatype datatype, int dest, int sendtag, int source, int recvtag, MPI_Comm comm, MPI_Status *status)
MPI_SENDRECV_REPLACE(BUF, COUNT, DATATYPE, DEST, SENDTAG, SOURCE, RECVTAG, COMM, STATUS, IERROR)
<type> BUF(*)
INTEGER COUNT, DATATYPE, DEST, SENDTAG, SOURCE, RECVTAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
MPI_SENDRECV_REPLACE executes a blocking send and receive. The same buffer is used both for the send and for the receive, so that the message sent is replaced by the message received.
The example below shows the main loop of the parallel Jacobi code, reimplemented using send-receive.
! Main loop
DO WHILE(.NOT.converged)
   ! compute
   DO j=1, m
      DO i=1, n
         B(i,j) = 0.25*(A(i-1,j)+A(i+1,j)+A(i,j-1)+A(i,j+1))
      END DO
   END DO
   DO j=1, m
      DO i=1, n
         A(i,j) = B(i,j)
      END DO
   END DO
   ! Communicate
   IF (myrank.GT.0) THEN
      CALL MPI_SENDRECV(B(1,1), n, MPI_REAL, myrank-1, tag, &
                        A(1,0), n, MPI_REAL, myrank-1, tag, comm, status, ierr)
   END IF
   IF (myrank.LT.p-1) THEN
      CALL MPI_SENDRECV(B(1,m), n, MPI_REAL, myrank+1, tag, &
                        A(1,m+1), n, MPI_REAL, myrank+1, tag, comm, status, ierr)
   END IF
   ...
END DO
This code is safe, notwithstanding its cyclic communication pattern.