SGI recently released version 3.0 of its ``Array'' software for clustering. This package includes an implementation of MPI that runs on all current SGI MIPS-based systems. SGI MPI was originally derived from MPICH but has evolved considerably; it also incorporates the xmpi tool from LAM.
SGI MPI is optimized for shared memory inside SMP servers and for a special HIPPI ``bypass'' that provides low-latency communication over HIPPI and stripes large messages across multiple HIPPI connections. It falls back to TCP if HIPPI isn't available or the bypass is disabled. SGI MPI is interoperable among different SGI systems as long as all parts of an MPI application are compiled for the same mode, either 32-bit or 64-bit.
In most cases, MPI applications run inside a single Origin 2000 system. SGI's array services software provides the infrastructure for running on a cluster of systems. Applications running on such a cluster are identified by an array session handle (ash), and array equivalents of ps and kill allow an array session to be treated as a single unit. The array software is required even when running on a single node, and it must be installed and maintained by a system administrator.
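As a rough illustration of what such an application looks like from the programmer's side, the minimal MPI program below is generic MPI 1.x C code, not anything SGI-specific; under SGI MPI its processes, started with mpirun, would all be grouped under a single array session. Each process simply reports its rank and the host it is running on.

    /* Minimal MPI 1.x example: each process reports its rank and host.
       Generic MPI code; nothing here is specific to SGI MPI. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, namelen;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &namelen);

        printf("Process %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }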
SGI MPI is compliant with MPI 1.2.
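The only routine MPI 1.2 adds over MPI 1.1 is MPI_Get_version, so a quick way to see which level of the standard a given installation reports is the small check below (again generic MPI code, not specific to SGI MPI); on a compliant library it should print ``MPI 1.2''.

    /* Query the MPI standard version reported by the library. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int version, subversion;

        MPI_Init(&argc, &argv);
        MPI_Get_version(&version, &subversion);
        printf("This library reports MPI %d.%d\n", version, subversion);
        MPI_Finalize();
        return 0;
    }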