From lusk@antares.mcs.anl.gov Mon Nov 23 11:47:34 1992
Return-Path: <lusk@antares.mcs.anl.gov>
Received: from antares.mcs.anl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA19455; Mon, 23 Nov 92 11:47:30 -0500
Received: from donner.mcs.anl.gov by antares.mcs.anl.gov (4.1/SMI-GAR)
	id AA12931; Mon, 23 Nov 92 10:47:25 CST
Received: by donner.mcs.anl.gov (4.1/GCF-5.8)
	id AA14689; Mon, 23 Nov 92 10:47:24 CST
Date: Mon, 23 Nov 92 10:47:24 CST
From: lusk@antares.mcs.anl.gov
Message-Id: <9211231647.AA14689@donner.mcs.anl.gov>
To: dongarra@cs.utk.edu, zenith@antares.mcs.anl.gov
Subject: Some points for the introduction
Status: RO


I got to thinking about the general issues we were discussing and decided
it would be best to write my thoughts down.  This is an abbreviated form of
what the introduction might look like.  It needs more work (which Jack
volunteered to do) before it even becomes a first draft for the four of us
(Jack, can you send me Tony's address?).

- Rusty




		    The MPI Draft Message-Passing Standard

Introduction

     One of the most widely used paradigms for programming parallel machines
is that of message-passing.  Although there are many variations, the basic
concept of processes communicating through messages is well understood.
Over the last ten years, substantial progress has been made in casting
significant algorithms in this paradigm.  Vendors have found ways to implement
it efficiently.  More recently, several systems have demonstrated that it can
be implemented portably as well, and that users appreciate being able to write
a message-passing code that runs unchanged on a variety of parallel computing
environments.  It is thus an appropriate time to try to define both the syntax
and semantics of a core of library routines that will be useful to a wide
range of users and efficiently implementable on a wide range of computers.

Who Should Use This Standard?

     This standard is intended for use by all those who want to write portable
message-passing programs.  This includes individual application programmers,
developers of software designed to run on parallel machines, and creators of
higher-level programming languages, environments, and tools.  In order to be
attractive to this wide audience, it must provide a simple, easy-to-use
interface for the basic user while not semantically precluding the
high-performance message-passing operations available on advanced machines.

What Platforms Are Targets For Implementation?

     The attractiveness of the message-passing paradigm at least partially
stems from its wide portability.  Programs expressed this way can run
efficiently on shared-memory multiprocessors, distributed-memory
multiprocessors, networks of workstations, and combinations of all of these.
The paradigm will not be made obsolete by architectures combining the shared-
and distributed-memory views, or by increases in network speeds.  It thus
should be both possible and useful to implement this standard on a great
variety of machines, including those "machines" consisting of collections of
other machines, parallel or not, connected by a communication network.

What Is Included In The Standard?

     The standard includes (this is temporarily as inclusive as possible):

    o  A simple way to create processes for the SPMD model
    o  Point-to-point communication in a variety of modes, including modes
         that allow fast communication and heterogeneous communication
    o  Process groups 
    o  Communication contexts
    o  Collective operations
    o  Bindings for both Fortran and C
    o  A model implementation 

What Is Not Included In The Standard?

     The standard does not specify:

    o  Explicit shared-memory operations
    o  Operations that require more operating system support than is currently
          standard; for example, interrupt-driven receives, remote execution,
          or active messages
    o  Program construction tools
    o  Debugging facilities
    o  Tracing facilities
     
Features that are not included may be offered as extensions by specific
implementations.
     


From kailand!kai.com!zenith@uunet.UU.NET Mon Nov 23 19:15:17 1992
Return-Path: <kailand!kai.com!zenith@uunet.UU.NET>
Received: from relay2.UU.NET by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA00482; Mon, 23 Nov 92 19:14:56 -0500
Received: from uunet.uu.net (via LOCALHOST.UU.NET) by relay2.UU.NET with SMTP 
	(5.61/UUNET-internet-primary) id AA19308; Mon, 23 Nov 92 19:14:46 -0500
Received: from kailand.UUCP by uunet.uu.net with UUCP/RMAIL
	(queueing-rmail) id 191338.15395; Mon, 23 Nov 1992 19:13:38 EST
Received: from brisk.kai.com (brisk) by kailand.kai.com via SMTP
  (5.65d-92031301 for <dongarra@cs.utk.edu>) id AA06638; Mon, 23 Nov 1992 17:29:39 -0600
Received: by brisk.kai.com
  (920330.SGI-92101201) id AA04785; Mon, 23 Nov 92 17:29:37 -0600
Date: Mon, 23 Nov 92 17:29:37 -0600
Message-Id: <9211232329.AA04785@brisk.kai.com>
To: lusk@antares.mcs.anl.gov
Cc: dongarra@cs.utk.edu
In-Reply-To: Rusty Lusk's message of Mon, 23 Nov 92 10:49:04 CST <9211231649.AA14708@donner.mcs.anl.gov>
Subject: Some points for the introduction
From: Steven Ericsson Zenith <zenith@kai.com>
Sender: zenith@kai.com
Organization: 	Kuck and Associates, Inc.
		1906 Fox Drive, Champaign IL USA 61820-7334,
		voice 217-356-2288, fax 217-356-5199
Status: RO


In your draft

	"One of the most widely used paradigms for programming parallel
	machines is that of message-passing."

I'd like to see this changed to "Message passing is a paradigm used
widely on certain classes of parallel machines; especially those with
distributed memory."

	"it must provide a simple, easy-to-use interface for the basic
	user while not semantically precluding the high-performance
	message-passing operations available on advanced machines."

Can you give an example of what you mean by "high-performance
message-passing operations available on advanced machines" - I have
difficulty imagining the conflict you describe.

	"The attractiveness of the message-passing paradigm at least
	partially stems from its wide portability.  Programs expressed
	this way can run efficiently on shared-memory multiprocessors,
	distributed-memory multiprocessors, networks of workstations,
	and combinations of all of these."

This is contentious. The statement is false if you keep in the word
"efficiently" and true otherwise.

Message passing has proven to be a difficult addendum to existing
programming practices primarily because it preoccupies the programmer
with issues of data distribution.  In my experience, that preoccupation
makes message passing unsuitable as a general purpose model for
parallel programming.

Forget for the moment issues of mapping programs to hardware topology.
The programmer using a message passing model is forced to consider, in
some detail, multiplexing and routing issues when distributing data
among groups of processes.

Generalized message passing does not map well onto the range of parallel
machine architectures for a general set of applications.  The semantics
of most message passing systems compel an implementation to copy data
that might otherwise be passed by reference. Exchange by reference in
message passing produces side effects that most programmers do not
consider and specializes the program (I'm not saying we shouldn't
include such facility).

However, I recognize that message passing is an easy concept to sell -
since we defined Occam (I was one of the principal people involved with
that language design) I've spent the past 8 years writing and watching
other engineers write message passing programs. I still write message
passing programs myself - I'm an implementor of higher level
abstractions (in this regard I've used Express), a compiler writer (I
want to target this standard) and I also write code for embedded systems
(robots etc., in Occam and C). I believe that the first-order audience
for this standard is really people like me - specialists, not general
purpose high level algorithm designers; i.e. generally not chemists and
physicists, excepting those currently forced to use such machines by
their proximity to them.

There is a distinction between message passing as a component of
parallel/distributed machine systems architecture and generalized
message passing as a programming model. I would like us to focus on
the former rather than the latter, since that is where I see the need;
we can leave the other debate to some other time and place. We should
take a noncommittal stance; i.e. whether or not message passing is THE
way to program parallel machines, standardization of message passing is
an important and useful activity.

The only other comment I have concerns the list of items included in the
standard. You say

     o  A simple way to create processes for the SPMD model

I think this is loaded with problems for reasons I mentioned at the
meeting; i.e., it introduces more complexity and opportunity for
deadlock etc. I would rather not see us say too much about the process
model at this stage; except to say that there is some process model
defined by the language using the standard and to say something about
side effects between processes that affect the message passing semantics
(i.e., that there should be none).

Regards,
Steven


From kailand!kai.com!zenith@uunet.UU.NET Tue Nov 24 11:17:37 1992
Return-Path: <kailand!kai.com!zenith@uunet.UU.NET>
Received: from surfer.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA14475; Tue, 24 Nov 92 11:17:33 -0500
Received: from relay1.UU.NET by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA21485; Tue, 24 Nov 92 11:17:22 -0500
Received: from uunet.uu.net (via LOCALHOST.UU.NET) by relay1.UU.NET with SMTP 
	(5.61/UUNET-internet-primary) id AA20731; Tue, 24 Nov 92 11:17:17 -0500
Received: from kailand.UUCP by uunet.uu.net with UUCP/RMAIL
	(queueing-rmail) id 111703.29902; Tue, 24 Nov 1992 11:17:03 EST
Received: from brisk.kai.com (brisk) by kailand.kai.com via SMTP
  (5.65d-92031301 for <dongarra@surfer.epm.ornl.gov>) id AA03497; Tue, 24 Nov 1992 09:35:47 -0600
Received: by brisk.kai.com
  (920330.SGI-92101201) id AA05528; Tue, 24 Nov 92 09:35:45 -0600
Date: Tue, 24 Nov 92 09:35:45 -0600
Message-Id: <9211241535.AA05528@brisk.kai.com>
To: lusk@antares.mcs.anl.gov
Cc: dongarra@surfer.EPM.ORNL.GOV, dongarra@cs.utk.edu
In-Reply-To: Rusty Lusk's message of Mon, 23 Nov 92 23:13:51 CST <9211240513.AA16389@donner.mcs.anl.gov>
Subject: Some points for the introduction 
From: Steven Ericsson Zenith <zenith@kai.com>
Sender: zenith@kai.com
Organization: 	Kuck and Associates, Inc.
		1906 Fox Drive, Champaign IL USA 61820-7334,
		voice 217-356-2288, fax 217-356-5199
Status: RO


Rusty said:

   I agree with most of what you say but believe we should define a
   message-passing standard anyway.

I wasn't questioning the importance of defining the standard - on the
contrary; I was trying to focus the rationale for it and enhance the
chances of public interest even among those who do not "believe" in
message passing.

I agree that we should recycle this discussion through the main list but
we don't have to do it literally - a modified form will do. You lead,
I'll follow.

Steven


From owner-mpi-intro@CS.UTK.EDU  Tue Nov 24 15:07:39 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA01325; Tue, 24 Nov 92 15:07:39 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA19417; Tue, 24 Nov 92 15:04:08 -0500
Received: from THUD.CS.UTK.EDU by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA19415; Tue, 24 Nov 92 15:04:06 -0500
From: Jack Dongarra <dongarra@cs.utk.edu>
Received:  by thud.cs.utk.edu (5.61++/2.7c-UTK)
	id AA20345; Tue, 24 Nov 92 15:04:05 -0500
Date: Tue, 24 Nov 92 15:04:05 -0500
Message-Id: <9211242004.AA20345@thud.cs.utk.edu>
To: mpi-intro@cs.utk.edu
Subject: more on the intro

Here is my two cents on the intro. I have taken input from Rusty
and Steven. Comments please.
Jack



\section{Introduction}
\label{sec:intro}

\subsection{Overview and Goals}

Message passing is a paradigm used
widely on certain classes of parallel machines; especially those with
distributed memory.
Although there are many variations, the basic
concept of processes communicating through messages is well understood.
Over the last ten years, substantial progress has been made in casting
significant applications in this paradigm.  Each vendor
has implemented its own variant.
More recently, several systems \cite{need references} have demonstrated
that a message passing system can be implemented both efficiently and
portably.
It is thus an appropriate time to try to define both the syntax
and semantics of a core of library routines that will be useful to a wide
range of users and efficiently implementable on a wide range of computers.

In designing MPI we have sought to make use of the most attractive features
of a number of existing message passing systems, rather than selecting one
of them and adopting it as the standard. Thus, MPI has been strongly
influenced by work at the IBM T. J. Watson Research Center by
Bala, Kipnis, Snir and colleagues \cite{}, Intel's NX/2 \cite{},
Express \cite{}, nCUBE's Vertex \cite{}, and PARMACS \cite{}.
Other important contributions have come from Zipcode \cite{},
Chimp \cite{}, PVM \cite{}, and PICL \cite{}.

One of the objectives of this paper is to promote a discussion within the
concurrent computing research community of the issues that must be addressed
in establishing a practical, portable, and flexible standard for message
passing. This cooperative process began with a workshop on standards
for message passing held in April 1992 \cite{}.

The main advantages of establishing a message passing standard are portability
and ease-of-use. In a distributed memory communication environment in which
the higher level routines and/or abstractions are built upon lower level
message passing routines the benefits of standardization are particularly
apparent. Furthermore, the definition of a message passing standard, such
as that proposed here, provides vendors with a clearly defined base set of
routines that they can implement efficiently, or in some cases provide
hardware support for, thereby enhancing scalability.

The goal of the Message Passing Interface, simply stated, is to
develop a {\it de facto} standard for writing message-passing programs.
As such, the interface should
establish a practical, portable, efficient, and flexible standard for message
passing.

\subsection{Who Should Use This Standard?}

This standard is intended for use by all those who want to write portable
message-passing programs in Fortran 77 and/or C.  
This includes individual application programmers,
developers of software designed to run on parallel machines, and creators of
higher-level programming languages, environments, and tools.  In order to be
attractive to this wide audience, the standard must provide a simple, easy-to-use
interface for the basic user while not semantically precluding the
high-performance message-passing operations available on advanced machines.


\subsection{What Platforms Are Targets For Implementation?}

The attractiveness of the message-passing paradigm at least partially
stems from its wide portability.  Programs expressed this way can run
efficiently on distributed-memory
multiprocessors, networks of workstations, and combinations of all of these.
In addition, shared-memory implementations are possible.
The paradigm will not be made obsolete by architectures combining the shared-
and distributed-memory views, or by increases in network speeds.  It thus
should be both possible and useful to implement this standard on a great
variety of machines, including those ``machines'' consisting of collections of
other machines, parallel or not, connected by a communication network.

\subsection{What Is Included In The Standard?}

The standard includes (this is temporarily as inclusive as possible):

\begin{itemize}
    \item  Point-to-point communication in a variety of modes, including modes
         that allow fast communication and heterogeneous communication
    \item  Collective operations
    \item  Process groups
    \item  Communication contexts
    \item  A simple way to create processes for the SPMD model
    \item  Bindings for both Fortran and C
    \item  A model implementation
\end{itemize}

\subsection{What Is Not Included In The Standard?}


The standard does not specify:

\begin{itemize}
    \item  Explicit shared-memory operations
    \item  Operations that require more operating system support than is currently
          standard; for example, interrupt-driven receives, remote execution,
          or active messages
    \item  Program construction tools
    \item  Debugging facilities
    \item  Tracing facilities
\end{itemize}

Features that are not included can always be offered as extensions 
by specific
implementations.

From owner-mpi-intro@CS.UTK.EDU  Tue Nov 24 18:07:35 1992
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA08292; Tue, 24 Nov 92 18:07:35 -0500
Received:  by CS.UTK.EDU (5.61++/2.8s-UTK)
	id AA22632; Tue, 24 Nov 92 17:44:23 -0500
Received: from relay1.UU.NET by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA22628; Tue, 24 Nov 92 17:44:21 -0500
Received: from uunet.uu.net (via LOCALHOST.UU.NET) by relay1.UU.NET with SMTP 
	(5.61/UUNET-internet-primary) id AA02022; Tue, 24 Nov 92 17:44:19 -0500
Received: from kailand.UUCP by uunet.uu.net with UUCP/RMAIL
	(queueing-rmail) id 174203.17222; Tue, 24 Nov 1992 17:42:03 EST
Received: from brisk.kai.com (brisk) by kailand.kai.com via SMTP
  (5.65d-92031301) id AA01849; Tue, 24 Nov 1992 15:34:20 -0600
Received: by brisk.kai.com
  (920330.SGI-92101201) id AA06768; Tue, 24 Nov 92 15:34:18 -0600
Date: Tue, 24 Nov 92 15:34:18 -0600
Message-Id: <9211242134.AA06768@brisk.kai.com>
To: dongarra@cs.utk.edu
Cc: mpi-intro@cs.utk.edu
In-Reply-To: Jack Dongarra's message of Tue, 24 Nov 92 15:04:05 -0500 <9211242004.AA20345@thud.cs.utk.edu>
Subject: more on the intro
From: Steven Ericsson Zenith <zenith@kai.com>
Sender: zenith@kai.com
Organization: 	Kuck and Associates, Inc.
		1906 Fox Drive, Champaign IL USA 61820-7334,
		voice 217-356-2288, fax 217-356-5199


To be inclusive "What Is Included In The Standard?" should include:

	\item a formal specification.

From owner-mpi-intro@CS.UTK.EDU  Mon Jan 25 15:21:07 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA25043; Mon, 25 Jan 93 15:21:07 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA18314; Mon, 25 Jan 93 15:20:45 -0500
X-Resent-To: mpi-intro@CS.UTK.EDU ; Mon, 25 Jan 1993 15:20:43 EST
Errors-To: owner-mpi-intro@CS.UTK.EDU
Received: from beagle.cps.msu.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA18223; Mon, 25 Jan 93 15:20:06 -0500
Received: from uranium.cps.msu.edu by beagle.cps.msu.edu (4.1/rpj-5.0); id AA05995; Mon, 25 Jan 93 15:19:58 EST
Received: by uranium.cps.msu.edu (4.1/4.1)
	id AA12809; Mon, 25 Jan 93 15:19:58 EST
Date: Mon, 25 Jan 93 15:19:58 EST
From: huangch@cps.msu.edu
Message-Id: <9301252019.AA12809@uranium.cps.msu.edu>
To: mpi-intro@cs.utk.edu
Subject: Subscription 
Cc: mpi-collcomm@cs.utk.edu


Please add my name into your mailing list.

Thanks,

--Chengchang Huang
From owner-mpi-intro@CS.UTK.EDU  Tue Feb 16 13:46:51 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA04698; Tue, 16 Feb 93 13:46:51 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA06187; Tue, 16 Feb 93 13:46:26 -0500
X-Resent-To: mpi-intro@CS.UTK.EDU ; Tue, 16 Feb 1993 13:46:25 EST
Errors-To: owner-mpi-intro@CS.UTK.EDU
Received: from THUD.CS.UTK.EDU by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA06181; Tue, 16 Feb 93 13:46:23 -0500
From: Jack Dongarra <dongarra@cs.utk.edu>
Received:  by thud.cs.utk.edu (5.61++/2.7c-UTK)
	id AA02926; Tue, 16 Feb 93 13:46:22 -0500
Date: Tue, 16 Feb 93 13:46:22 -0500
Message-Id: <9302161846.AA02926@thud.cs.utk.edu>
To: mpi-intro@cs.utk.edu
Subject: revised intro draft

Here is a slightly revised intro section.
See you tomorrow,
Jack


\documentstyle[11pt]{article}
\textwidth=6.0in
\textheight=9.0in
\hoffset=-0.5in
\voffset=-1.0in
\parskip=.1in

\begin{document}
\begin{center}
{\Large \bf Introduction to MPI}
\end{center}



\section{Introduction}
\label{sec:intro}


\subsection{Overview and Goals}

Message passing is a paradigm used
widely on certain classes of parallel machines; especially those with
distributed memory.
Although there are many variations, the basic
concept of processes communicating through messages is well understood.
Over the last ten years, substantial progress has been made in casting
significant applications in this paradigm.  Each vendor
has implemented its own variant.
More recently, several systems have demonstrated
that a message passing system can be implemented both efficiently and
portably.
It is thus an appropriate time to try to define both the syntax
and semantics of a core of library routines that will be useful to a wide
range of users and efficiently implementable on a wide range of computers.

In designing MPI we have sought to make use of the most attractive features
of a number of existing message passing systems, rather than selecting one
of them and adopting it as the standard. Thus, MPI has been strongly
influenced by work at the IBM T. J. Watson Research Center by
Bala, Kipnis, Snir and colleagues \cite{IBM-report1,IBM-report2}, 
Intel's NX/2 \cite{NX-2},
Express \cite{express}, nCUBE's Vertex \cite{Vertex}, P4 \cite{p4book}, and PARMACS \cite{parmacs1,parmacs2}.
Other important contributions have come from Zipcode \cite{zipcode1,zipcode2},
Chimp \cite{chimp1,chimp2}, PVM \cite{PVM2,PVM1}, and PICL \cite{picl}.

One of the objectives of this paper is to promote a discussion within the
concurrent computing research community of the issues that must be addressed
in establishing a practical, portable, and flexible standard for message
passing. This cooperative process began with a workshop on standards
for message passing held in April 1992 \cite{walker92b}.

The main advantages of establishing a message passing standard are portability
and ease-of-use. In a distributed memory communication environment in which
the higher level routines and/or abstractions are built upon lower level
message passing routines the benefits of standardization are particularly
apparent. Furthermore, the definition of a message passing standard, such
as that proposed here, provides vendors with a clearly defined base set of
routines that they can implement efficiently, or in some cases provide
hardware support for, thereby enhancing scalability.

The goal of the Message Passing Interface, simply stated, is to
develop a widely used
standard for writing message-passing programs.
As such, the interface should
establish a practical, portable, efficient, and flexible standard for message
passing.

A complete list of goals follows.

\begin{itemize}

\item
Design an application programming interface (not necessarily for compilers
or a system implementation library).

\item
Allow efficient communication: avoid memory-to-memory copying,
allow overlap of computation and communication, and allow offload to a
communication coprocessor, where available.

\item
Allow (but not mandate) extensions for use in a heterogeneous environment.

\item
Allow convenient C, Fortran 77, Fortran 90, and C++ bindings for the interface.

\item
Assume a reliable communication interface:
the user need not cope with communication failures;
such failures are dealt with by the underlying communication subsystem.

\item
Focus on a proposal that can be agreed upon in 6 months.

\item
Define an interface that is not too different from current practice,
such as PVM, Express, P4, etc.

\item
Define an interface that can be quickly implemented on many
vendors' platforms, with no significant changes in the underlying
communication and system software.

\item 
The interface should
not contain more functions than are really necessary.

\item 
The semantics of the interface should be language-independent.
\end{itemize}

\subsection{Who Should Use This Standard?}

This standard is intended for use by all those who want to write portable
message-passing programs in Fortran 77, C, Fortran 90, or C++.
This includes individual application programmers,
developers of software designed to run on parallel machines, and creators of
environments and tools.  In order to be
attractive to this wide audience, the standard must provide a simple, easy-to-use
interface for the basic user while not semantically precluding the
high-performance message-passing operations available on advanced machines.


\subsection{What Platforms Are Targets For Implementation?}

The attractiveness of the message-passing paradigm at least partially
stems from its wide portability.  Programs expressed this way may run
on distributed-memory
multiprocessors, networks of workstations, and combinations of all of these.
In addition, shared-memory implementations are possible.
The paradigm will not be made obsolete by architectures combining the shared-
and distributed-memory views, or by increases in network speeds.  It thus
should be both possible and useful to implement this standard on a great
variety of machines, including those ``machines'' consisting of collections of
other machines, parallel or not, connected by a communication network.

NEED to add some words about MIMD environments and threads.

\subsection{What Is Included In The Standard?}

The standard includes (this is temporarily as inclusive as possible):

\begin{itemize}
    \item  Point-to-point communication 
    \item  Collective operations
    \item  Process groups
    \item  Communication contexts
    \item  A simple way to create processes for the SPMD model
    \item  Bindings for Fortran 77, Fortran 90, C and C++
    \item  A model implementation
    \item  A formal specification
    \item  Process topology
    \item  A validation suite

\end{itemize}

\subsection{What Is Not Included In The Standard?}


The standard does not specify:

\begin{itemize}
    \item  Explicit shared-memory operations
    \item  Operations that require more operating system support than is currently
          standard; for example, interrupt-driven receives, remote execution,
          or active messages
    \item  Program construction tools
    \item  Debugging facilities
    \item  Auxiliary functions such as timers
\end{itemize}

Many features were considered and
not included in this standard. This happened for
a number of reasons, one of which is the time constraint
that was self-imposed in finishing the standard.
Features that are not included can always be offered as extensions
by specific
implementations.


% \bibliographystyle{plain}
% \bibliography{refs}
\begin{thebibliography}{10}

\bibitem{IBM-report1}
V.~Bala and S.~Kipnis.
\newblock Process groups: a mechanism for the coordination of and communication
  among processes in the {V}enus collective communication library.
\newblock Technical report, {IBM} {T}. {J}. {W}atson {R}esearch {C}enter,
  October 1992.
\newblock Preprint.

\bibitem{IBM-report2}
V.~Bala, S.~Kipnis, L.~Rudolph, and Marc Snir.
\newblock Designing efficient, scalable, and portable collective communication
  libraries.
\newblock Technical report, {IBM} {T}. {J}. {W}atson {R}esearch {C}enter,
  October 1992.
\newblock Preprint.

\bibitem{PVM2}
A.~Beguelin, J.~J. Dongarra, G.~A. Geist, R.~Manchek, and V.~S. Sunderam.
\newblock A users' guide to {PVM} parallel virtual machine.
\newblock Technical Report {TM}-11826, Oak {R}idge {N}ational {L}aboratory,
  July 1991.

\bibitem{p4book}
R.~Butler and E.~Lusk.
\newblock User's guide to the p4 programming system.
\newblock Technical Report {TM}-{ANL}--92/17, Argonne {N}ational {L}aboratory,
  1992.

\bibitem{chimp1}
Edinburgh Parallel Computing Centre, University of Edinburgh.
\newblock {\em {CHIMP} Concepts}, June 1991.

\bibitem{chimp2}
Edinburgh Parallel Computing Centre, University of Edinburgh.
\newblock {\em {CHIMP} Version 1.0 Interface}, May 1992.

\bibitem{picl}
G.~A. Geist, M.~T. Heath, B.~W. Peyton, and P.~H. Worley.
\newblock A user's guide to {PICL}: a portable instrumented communication
  library.
\newblock Technical Report {TM}-11616, Oak {R}idge {N}ational {L}aboratory,
  October 1990.

\bibitem{parmacs1}
R.~Hempel.
\newblock The {ANL/GMD} macros ({PARMACS}) in fortran for portable parallel
  programming using the message passing programming model -- users' guide and
  reference manual.
\newblock Technical report, {GMD}, Postfach 1316, {D}-5205 {S}ankt {A}ugustin
  1, {G}ermany, November 1991.

\bibitem{parmacs2}
R.~Hempel, H.-C. Hoppe, and A.~Supalov.
\newblock A proposal for a {PARMACS} library interface.
\newblock Technical report, {GMD}, Postfach 1316, {D}-5205 {S}ankt {A}ugustin
  1, {G}ermany, October 1992.

\bibitem{Vertex}
{nCUBE} Corporation.
\newblock {\em {nCUBE} 2 Programmers Guide, r2.0}, December 1990.

\bibitem{express}
Parasoft {C}orporation.
\newblock {\em Express Version 1.0: A Communication Environment for Parallel
  Computers}, 1988.

\bibitem{NX-2}
Paul Pierce.
\newblock The {NX/2} operating system.
\newblock In {\em Proceedings of the Third Conference on Hypercube Concurrent
  Computers and Applications}, pages 384--390. ACM Press, 1988.

\bibitem{zipcode1}
A.~Skjellum and A.~Leung.
\newblock Zipcode: a portable multicomputer communication library atop the
  reactive kernel.
\newblock In D.~W. Walker and Q.~F. Stout, editors, {\em Proceedings of the
  Fifth Distributed Memory Concurrent Computing Conference}, pages 767--776.
  {IEEE} {P}ress, 1990.

\bibitem{zipcode2}
A.~Skjellum, S.~Smith, C.~Still, A.~Leung, and M.~Morari.
\newblock The {Z}ipcode message passing system.
\newblock Technical report, Lawrence {L}ivermore {N}ational {L}aboratory,
  September 1992.

\bibitem{PVM1}
V.~Sunderam.
\newblock {PVM}: a framework for parallel distributed computing.
\newblock {\em Concurrency: Practice and Experience}, 2(4):315--339, 1990.

\bibitem{walker92b}
D.~Walker.
\newblock Standards for message passing in a distributed memory environment.
\newblock Technical Report {TM}-12147, Oak {R}idge {N}ational {L}aboratory,
  August 1992.

\end{thebibliography}
\newpage

\end{document}
From owner-mpi-intro@CS.UTK.EDU  Sun Mar  7 12:17:21 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA16442; Sun, 7 Mar 93 12:17:21 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA01904; Sun, 7 Mar 93 12:16:41 -0500
X-Resent-To: mpi-intro@CS.UTK.EDU ; Sun, 7 Mar 1993 12:16:39 EST
Errors-To: owner-mpi-intro@CS.UTK.EDU
Received: from THUD.CS.UTK.EDU by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK)
	id AA01898; Sun, 7 Mar 93 12:16:38 -0500
From: Jack Dongarra <dongarra@cs.utk.edu>
Received:  by thud.cs.utk.edu (5.61++/2.7c-UTK)
	id AA14323; Sun, 7 Mar 93 12:16:37 -0500
Date: Sun, 7 Mar 93 12:16:37 -0500
Message-Id: <9303071716.AA14323@thud.cs.utk.edu>
To: mpi-intro@cs.utk.edu
Subject: our chapter

I would like to get our chapter of MPI finalized.
Enclosed is the current draft. Please send me any
updates or corrections you would like to see made.
I will then mail it to everyone on the general mailing
list so we can have the first-reading next meeting.
Regards,
Jack

\documentstyle[11pt]{article}
\textwidth=6.0in
\textheight=9.0in
\hoffset=-0.5in
\voffset=-1.0in
\parskip=.1in

\begin{document}
\begin{center}
{\Large \bf Introduction to MPI}
\end{center}



\section{Introduction}
\label{sec:intro}


\subsection{Overview and Goals}

Message passing is a paradigm used
widely on certain classes of parallel machines, especially those with
distributed memory.
Although there are many variations, the basic
concept of processes communicating through messages is well understood.
Over the last ten years, substantial progress has been made in casting
significant applications in this paradigm.  Each vendor
has implemented its own variant.
More recently, several systems have demonstrated
that a message passing system can be implemented both efficiently
and portably.
It is thus an appropriate time to try to define both the syntax
and semantics of a core of library routines that will be useful to a wide
range of users and efficiently implementable on a wide range of computers.
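The paradigm can be made concrete with a small sketch. The following
Python program (purely illustrative; operating-system pipes stand in for
a message-passing library, and this is not MPI itself) shows two
processes that share no memory and interact only through explicit send
and receive operations:

```python
# Illustrative sketch of the message-passing paradigm: two OS processes
# with no shared state, communicating only by explicit send/receive.
from multiprocessing import Pipe, Process

def worker(conn):
    msg = conn.recv()          # blocking receive
    conn.send(msg.upper())     # send a transformed reply back
    conn.close()

if __name__ == "__main__":
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    parent.send("hello")       # explicit send; no shared memory involved
    reply = parent.recv()      # explicit receive
    p.join()
    print(reply)               # prints "HELLO"
```

All interaction between the two processes goes through the channel; this
is the property that makes the paradigm portable across distributed- and
shared-memory machines alike.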

In designing MPI we have sought to make use of the most attractive features
of a number of existing message passing systems, rather than selecting one
of them and adopting it as the standard. Thus, MPI has been strongly
influenced by work at the IBM T. J. Watson Research Center by
Bala, Kipnis, Snir and colleagues \cite{IBM-report1,IBM-report2}, 
Intel's NX/2 \cite{NX-2},
Express \cite{express}, nCUBE's Vertex \cite{Vertex}, P4 \cite{p4book}, and PARMACS \cite{parmacs1,parmacs2}.
Other important contributions have come from Zipcode \cite{zipcode1,zipcode2},
Chimp \cite{chimp1,chimp2}, PVM \cite{PVM2,PVM1}, and PICL \cite{picl}.

One of the objectives of this document is to promote a discussion within the
concurrent computing research community of the issues that must be addressed
in establishing a practical, portable, and flexible standard for message
passing. This cooperative process began with a workshop on standards
for message passing held in April 1992 \cite{walker92b}.

The main advantages of establishing a message passing standard are portability
and ease of use. In a distributed memory communication environment, in which
higher level routines and/or abstractions are built upon lower level
message passing routines, the benefits of standardization are particularly
apparent. Furthermore, the definition of a message passing standard, such
as that proposed here, provides vendors with a clearly defined base set of
routines that they can implement efficiently, or in some cases provide
hardware support for, thereby enhancing scalability.
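The layering argument can be illustrated with a sketch: a higher-level
collective operation (here, a broadcast) built entirely from lower-level
point-to-point sends and receives. Python pipes again stand in for the
point-to-point layer; the function names are hypothetical, not part of
any proposed interface:

```python
# Illustrative sketch: a "broadcast" collective layered on point-to-point
# send/receive primitives, the kind of abstraction the text refers to.
from multiprocessing import Pipe, Process

def leaf(conn, result):
    # At a non-root process, a broadcast is simply a receive.
    result.send(conn.recv())

def broadcast(value, nprocs):
    """Root-side broadcast built from nprocs-1 point-to-point sends."""
    results, procs = [], []
    for _ in range(nprocs - 1):
        c_parent, c_child = Pipe()
        r_parent, r_child = Pipe()
        p = Process(target=leaf, args=(c_child, r_child))
        p.start()
        c_parent.send(value)          # point-to-point send from the root
        results.append(r_parent)
        procs.append(p)
    received = [r.recv() for r in results]
    for p in procs:
        p.join()
    return received

if __name__ == "__main__":
    print(broadcast("data", 4))       # every other process received "data"
```

If the point-to-point layer is standardized, a library like this runs
unchanged on any conforming implementation, which is precisely the
benefit claimed above.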

The goal of the Message Passing Interface, simply stated, is to
develop a widely used
standard for writing message-passing programs.
As such, the interface should
establish a practical, portable, efficient, and flexible standard for message
passing.

A complete list of goals follows.

\begin{itemize}

\item
Design an application programming interface (not necessarily for compilers
or a system implementation library).

\item
Allow efficient communication: avoid memory-to-memory copying,
allow overlap of computation and communication, and offload to a communication
coprocessor, where available.

\item
Allow (but not mandate) extensions for use in a heterogeneous environment.

\item
Allow convenient C, Fortran 77, Fortran 90, and C++ bindings for the interface.

\item
Assume a reliable communication interface:
the user need not cope with communication failures.
Such failures are dealt with by the underlying communication subsystem.

\item
Focus on a proposal that can be agreed upon in 6 months.

\item
Define an interface that is not too different from current practice,
such as PVM, Express, P4, etc.

\item
Define an interface that can be quickly implemented on many
vendors' platforms, with no significant changes in the underlying
communication and system software.

\item 
The interface should
not contain more functions than are really necessary.

\item 
Semantics of the interface should be language independent.
\end{itemize}

\subsection{Who Should Use This Standard?}

This standard is intended for use by all those who want to write portable
message-passing programs in Fortran 77, C, Fortran 90, or C++.
This includes individual application programmers,
developers of software designed to run on parallel machines, and creators of
environments and tools.  In order to be
attractive to this wide audience, the standard must provide a simple, easy-to-use
interface for the basic user while not semantically precluding the
high-performance message-passing operations available on advanced machines.


\subsection{What Platforms Are Targets For Implementation?}

The attractiveness of the message-passing paradigm at least partially
stems from its wide portability.  Programs expressed this way may run
on distributed-memory
multiprocessors, networks of workstations, and combinations of these.
In addition, shared-memory implementations are possible.
The paradigm will not be made obsolete by architectures combining the shared-
and distributed-memory views, or by increases in network speeds.  It thus
should be both possible and useful to implement this standard on a great
variety of machines, including those ``machines'' consisting of collections of
other machines, parallel or not, connected by a communication network.

NEED to add some words about MIMD environments and threads.

\subsection{What Is Included In The Standard?}

The standard includes (this is temporarily as inclusive as possible):

\begin{itemize}
    \item  Point-to-point communication 
    \item  Collective operations
    \item  Process groups
    \item  Communication contexts
    \item  A simple way to create processes for the SPMD model
    \item  Bindings for Fortran 77, Fortran 90, C and C++
    \item  A model implementation
    \item  A formal specification
    \item  Process topology
    \item  A validation suite

\end{itemize}
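The SPMD (single program, multiple data) model in the list above can be
sketched as follows: every process runs the same program and branches on
its rank. In this hedged Python illustration, pipes play the role of the
standard's point-to-point layer, and the names \verb|rank| and
\verb|size| follow common message-passing usage rather than any
interface defined here:

```python
# Sketch of the SPMD style: each process runs the same function, computes
# a partial result on its share of the data, and non-root ranks send their
# partial sums to rank 0, which gathers them.
from multiprocessing import Pipe, Process

def spmd(rank, size, n, conn):
    # Each rank sums its strided share of 0..n-1.
    partial = sum(range(rank, n, size))
    if rank != 0:
        conn.send(partial)        # point-to-point send to rank 0

def parallel_sum(size=4, n=100):
    pipes, procs = [], []
    for rank in range(1, size):
        recv_end, send_end = Pipe(duplex=False)
        p = Process(target=spmd, args=(rank, size, n, send_end))
        p.start()
        pipes.append(recv_end)
        procs.append(p)
    total = sum(range(0, n, size))          # rank 0's own share
    total += sum(c.recv() for c in pipes)   # gather the partial sums
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum())         # prints 4950, the sum of 0..99
```

The single-source structure, with behavior selected by rank, is what the
standard's process-creation support for the SPMD model is intended to
make convenient.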

\subsection{What Is Not Included In The Standard?}


The standard does not specify:

\begin{itemize}
    \item  Explicit shared-memory operations
    \item  Operations that require more operating system support than is currently
          standard; for example, interrupt-driven receives, remote execution,
          or active messages
    \item  Program construction tools
    \item  Debugging facilities
    \item  Auxiliary functions such as timers
\end{itemize}

Many features were considered and
not included in this standard. This happened for
a number of reasons, one of which is the self-imposed time constraint
on finishing the standard.
Features that are not included can always be offered as extensions
by specific
implementations.


% \bibliographystyle{plain}
% \bibliography{refs}
\begin{thebibliography}{10}

\bibitem{IBM-report1}
V.~Bala and S.~Kipnis.
\newblock Process groups: a mechanism for the coordination of and communication
  among processes in the {V}enus collective communication library.
\newblock Technical report, {IBM} {T}. {J}. {W}atson {R}esearch {C}enter,
  October 1992.
\newblock Preprint.

\bibitem{IBM-report2}
V.~Bala, S.~Kipnis, L.~Rudolph, and M.~Snir.
\newblock Designing efficient, scalable, and portable collective communication
  libraries.
\newblock Technical report, {IBM} {T}. {J}. {W}atson {R}esearch {C}enter,
  October 1992.
\newblock Preprint.

\bibitem{PVM2}
A.~Beguelin, J.~J. Dongarra, G.~A. Geist, R.~Manchek, and V.~S. Sunderam.
\newblock A users' guide to {PVM} parallel virtual machine.
\newblock Technical Report {TM}-11826, Oak {R}idge {N}ational {L}aboratory,
  July 1991.

\bibitem{p4book}
R.~Butler and E.~Lusk.
\newblock User's guide to the p4 programming system.
\newblock Technical Report {TM}-{ANL}--92/17, Argonne {N}ational {L}aboratory,
  1992.

\bibitem{chimp1}
Edinburgh Parallel Computing Centre, University of Edinburgh.
\newblock {\em {CHIMP} Concepts}, June 1991.

\bibitem{chimp2}
Edinburgh Parallel Computing Centre, University of Edinburgh.
\newblock {\em {CHIMP} Version 1.0 Interface}, May 1992.

\bibitem{picl}
G.~A. Geist, M.~T. Heath, B.~W. Peyton, and P.~H. Worley.
\newblock A user's guide to {PICL}: a portable instrumented communication
  library.
\newblock Technical Report {TM}-11616, Oak {R}idge {N}ational {L}aboratory,
  October 1990.

\bibitem{parmacs1}
R.~Hempel.
\newblock The {ANL/GMD} macros ({PARMACS}) in fortran for portable parallel
  programming using the message passing programming model -- users' guide and
  reference manual.
\newblock Technical report, {GMD}, Postfach 1316, {D}-5205 {S}ankt {A}ugustin
  1, {G}ermany, November 1991.

\bibitem{parmacs2}
R.~Hempel, H.-C. Hoppe, and A.~Supalov.
\newblock A proposal for a {PARMACS} library interface.
\newblock Technical report, {GMD}, Postfach 1316, {D}-5205 {S}ankt {A}ugustin
  1, {G}ermany, October 1992.

\bibitem{Vertex}
{nCUBE} Corporation.
\newblock {\em {nCUBE} 2 Programmers Guide, r2.0}, December 1990.

\bibitem{express}
Parasoft {C}orporation.
\newblock {\em Express Version 1.0: A Communication Environment for Parallel
  Computers}, 1988.

\bibitem{NX-2}
P.~Pierce.
\newblock The {NX/2} operating system.
\newblock In {\em Proceedings of the Third Conference on Hypercube Concurrent
  Computers and Applications}, pages 384--390. ACM Press, 1988.

\bibitem{zipcode1}
A.~Skjellum and A.~Leung.
\newblock Zipcode: a portable multicomputer communication library atop the
  reactive kernel.
\newblock In D.~W. Walker and Q.~F. Stout, editors, {\em Proceedings of the
  Fifth Distributed Memory Concurrent Computing Conference}, pages 767--776.
  {IEEE} {P}ress, 1990.

\bibitem{zipcode2}
A.~Skjellum, S.~Smith, C.~Still, A.~Leung, and M.~Morari.
\newblock The {Z}ipcode message passing system.
\newblock Technical report, Lawrence {L}ivermore {N}ational {L}aboratory,
  September 1992.

\bibitem{PVM1}
V.~Sunderam.
\newblock {PVM}: a framework for parallel distributed computing.
\newblock {\em Concurrency: Practice and Experience}, 2(4):315--339, 1990.

\bibitem{walker92b}
D.~Walker.
\newblock Standards for message passing in a distributed memory environment.
\newblock Technical Report {TM}-12147, Oak {R}idge {N}ational {L}aboratory,
  August 1992.

\end{thebibliography}
\newpage

\end{document}
