MPI: The Complete Reference

Contents

Introduction
  The Goals of MPI
  Who Should Use This Standard?
  What Platforms are Targets for Implementation?
  What is Included in MPI?
  What is Not Included in MPI?
  Version of MPI
  MPI Conventions and Design Choices
  Document Notation
  Procedure Specification
  Semantic Terms
  Processes
  Types of MPI Calls
  Opaque Objects
  Named Constants
  Choice Arguments
  Language Binding
  Fortran 77 Binding Issues
  C Binding Issues

Point-to-Point Communication
  Introduction and Overview
  Blocking Send and Receive Operations
  Blocking Send
  Send Buffer and Message Data
  Message Envelope
  Comments on Send
  Blocking Receive
  Receive Buffer
  Message Selection
  Return Status
  Comments on Receive
  Datatype Matching and Data Conversion
  Type Matching Rules
  Type MPI_CHARACTER
  Data Conversion
  Comments on Data Conversion
  Semantics of Blocking Point-to-point
  Buffering and Safety
  Multithreading
  Order
  Progress
  Fairness
  Example - Jacobi iteration
  Send-Receive
  Null Processes
  Nonblocking Communication
  Request Objects
  Posting Operations
  Completion Operations
  Examples
  Freeing Requests
  Semantics of Nonblocking Communications
  Order
  Progress
  Fairness
  Buffering and resource limitations
  Comments on Semantics of Nonblocking Communications
  Multiple Completions
  Probe and Cancel
  Persistent Communication Requests
  Communication-Complete Calls with Null Request Handles
  Communication Modes
  Blocking Calls
  Nonblocking Calls
  Persistent Requests
  Buffer Allocation and Usage
  Model Implementation of Buffered Mode
  Comments on Communication Modes

User-Defined Datatypes and Packing
  Introduction
  Introduction to User-Defined Datatypes
  Datatype Constructors
  Contiguous
  Vector
  Hvector
  Indexed
  Hindexed
  Struct
  Use of Derived Datatypes
  Commit
  Deallocation
  Relation to count
  Type Matching
  Message Length
  Address Function
  Lower-bound and Upper-bound Markers
  Absolute Addresses
  Pack and Unpack
  Derived Datatypes vs Pack/Unpack

Collective Communications
  Introduction and Overview
  Operational Details
  Communicator Argument
  Barrier Synchronization
  Broadcast
  Example Using MPI_BCAST
  Gather
  Examples Using MPI_GATHER
  Gather, Vector Variant
  Examples Using MPI_GATHERV
  Scatter
  An Example Using MPI_SCATTER
  Scatter: Vector Variant
  Examples Using MPI_SCATTERV
  Gather to All
  An Example Using MPI_ALLGATHER
  Gather to All: Vector Variant
  All to All Scatter/Gather
  All to All: Vector Variant
  Global Reduction Operations
  Reduce
  Predefined Reduce Operations
  MINLOC and MAXLOC
  All Reduce
  Reduce-Scatter
  Scan
  User-Defined Operations for Reduce and Scan
  The Semantics of Collective Communications

Communicators
  Introduction
  Division of Processes
  Avoiding Message Conflicts Between Modules
  Extensibility by Users
  Safety
  Overview
  Groups
  Communicator
  Communication Domains
  Compatibility with Current Practice
  Group Management
  Group Accessors
  Group Constructors
  Group Destructors
  Communicator Management
  Communicator Accessors
  Communicator Constructors
  Communicator Destructor
  Safe Parallel Libraries
  Caching
  Introduction
  Caching Functions
  Intercommunication
  Introduction
  Intercommunicator Accessors
  Intercommunicator Constructors

Process Topologies
  Introduction
  Virtual Topologies
  Overlapping Topologies
  Embedding in MPI
  Cartesian Topology Functions
  Cartesian Constructor Function
  Cartesian Convenience Function: MPI_DIMS_CREATE
  Cartesian Inquiry Functions
  Cartesian Translator Functions
  Cartesian Shift Function
  Cartesian Partition Function
  Cartesian Low-level Functions
  Graph Topology Functions
  Graph Constructor Function
  Graph Inquiry Functions
  Graph Information Functions
  Low-level Graph Functions
  Topology Inquiry Functions
  An Application Example

Environmental Management
  Implementation Information
  Environmental Inquiries
  Tag Values
  Host Rank
  I/O Rank
  Clock Synchronization
  Timers and Synchronization
  Initialization and Exit
  Error Handling
  Error Handlers
  Error Codes
  Interaction with Executing Environment
  Independence of Basic Runtime Routines
  Interaction with Signals in POSIX

The MPI Profiling Interface
  Requirements
  Discussion
  Logic of the Design
  Miscellaneous Control of Profiling
  Examples
  Profiler Implementation
  MPI Library Implementation
  Systems with Weak Symbols
  Systems without Weak Symbols
  Complications
  Multiple Counting
  Linker Oddities
  Multiple Levels of Interception

Conclusions
  Design Issues
  Why is MPI so big?
  Should we be concerned about the size of MPI?
  Why does MPI not guarantee buffering?
  Portable Programming with MPI
  Dependency on Buffering
  Collective Communication and Synchronization
  Ambiguous Communications and Portability
  Heterogeneous Computing with MPI
  MPI Implementations
  Extensions to MPI

References