The model implementation uses the packing and unpacking functions described in Section Pack and unpack and the nonblocking communication functions described in Section Nonblocking communication.
We assume that a circular queue of pending message entries (PME) is maintained. Each entry contains a communication request handle that identifies a pending nonblocking send, a pointer to the next entry, and the packed message data. The entries are stored in successive locations in the buffer. Free space is available between the queue tail and the queue head.
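As an illustration only, a pending message entry might be laid out in C roughly as follows; the structure and its field names are assumptions of this sketch, not part of MPI.

    #include <mpi.h>

    /* Hypothetical layout of a pending message entry (PME) stored in the
       attached buffer; the field names are illustrative, not mandated by MPI. */
    typedef struct PME {
        MPI_Request request;   /* handle of the pending nonblocking send       */
        struct PME *next;      /* pointer to the next entry in the queue       */
        int size;              /* number of bytes of packed data that follow   */
        char data[];           /* packed message data, stored after the header */
    } PME;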
A buffered send call results in the execution of the following steps (a code sketch appears after the list).
- Traverse the PME queue sequentially from the head toward the tail, deleting all entries for communications that have completed, up to the first entry with an uncompleted request; update the queue head to point to that entry.
- Compute the number, n, of bytes needed to store an entry for the new message. An upper bound on n can be computed as follows: a call to the function MPI_PACK_SIZE(count, datatype, comm, size), with the count, datatype, and comm arguments used in the MPI_BSEND call, returns an upper bound on the amount of space needed to buffer the message data (see Section Pack and unpack). The MPI constant MPI_BSEND_OVERHEAD provides an upper bound on the additional space consumed by the entry (e.g., for pointers or envelope information).
- Find the next contiguous empty space of n bytes in the buffer (space following the queue tail, or space at the start of the buffer if the queue tail is too close to the end of the buffer). If no such space is found, raise a buffer overflow error.
- Append to the end of the PME queue, in the contiguous space found, a new entry that contains the request handle, the next pointer, and the packed message data; MPI_PACK is used to pack the data.
- Post a nonblocking send (standard mode) for the packed data.
- Return.
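A minimal C sketch of these steps follows, assuming the PME structure sketched above and two hypothetical helpers: pme_release_completed, which performs the first step, and pme_append, which performs the space-finding step and returns NULL if no contiguous space of n bytes is available. None of these names are defined by MPI.

    #include <mpi.h>
    #include <stddef.h>

    /* Hypothetical queue helpers, assumed to operate on the attached buffer. */
    extern void pme_release_completed(void);   /* step 1: drop completed entries */
    extern PME *pme_append(int nbytes);        /* step 3: reserve space, or NULL */

    /* Model of the work done by a buffered send; the name is illustrative. */
    int model_bsend(const void *buf, int count, MPI_Datatype datatype,
                    int dest, int tag, MPI_Comm comm)
    {
        int packsize, n, position = 0;
        PME *entry;

        /* Step 1: reclaim space held by sends that have completed. */
        pme_release_completed();

        /* Step 2: upper bound on the space needed for the new entry. */
        MPI_Pack_size(count, datatype, comm, &packsize);
        n = packsize + MPI_BSEND_OVERHEAD;

        /* Step 3: find contiguous free space and append a new entry. */
        entry = pme_append(n);
        if (entry == NULL)
            return MPI_ERR_BUFFER;              /* buffer overflow */

        /* Step 4: pack the message data into the entry. */
        MPI_Pack(buf, count, datatype, entry->data, packsize, &position, comm);

        /* Step 5: post a nonblocking standard-mode send of the packed data. */
        MPI_Isend(entry->data, position, MPI_PACKED, dest, tag, comm,
                  &entry->request);

        /* Step 6: return. */
        return MPI_SUCCESS;
    }

In an actual implementation, the overflow case would invoke the error handler attached to the communicator rather than simply returning an error code.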