From wade@cs.utk.edu Mon Jan 18 10:04:48 1993 Received: from THUD.CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA15560; Mon, 18 Jan 93 10:04:48 -0500 Received: from LOCALHOST.cs.utk.edu by thud.cs.utk.edu with SMTP (5.61++/2.7c-UTK) id AA03584; Mon, 18 Jan 93 10:04:42 -0500 Message-Id: <9301181504.AA03584@thud.cs.utk.edu> To: mpi-context-archive@surfer.EPM.ORNL.GOV Cc: wade@cs.utk.edu Subject: Date: Mon, 18 Jan 93 10:04:42 EST From: Reed Wade test From walker@rios2.epm.ornl.gov Mon Jan 18 13:23:40 1993 Received: from rios2.EPM.ORNL.GOV by surfer.EPM.ORNL.GOV (5.61/1.34) id AA19959; Mon, 18 Jan 93 13:23:40 -0500 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA17128; Mon, 18 Jan 1993 13:23:36 -0500 Date: Mon, 18 Jan 1993 13:23:36 -0500 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9301181823.AA17128@rios2.epm.ornl.gov> To: mpi-context-archive@surfer.EPM.ORNL.GOV From root Fri Jan 15 15:33:35 1993 Received: from msr.EPM.ORNL.GOV by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA12164; Fri, 15 Jan 1993 15:33:34 -0500 Received: from CS.UTK.EDU by msr.epm.ornl.gov (5.67/1.34) id AA07918; Fri, 15 Jan 93 15:33:30 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA13121; Fri, 15 Jan 93 15:29:52 -0500 X-Resent-To: mpi-comm@CS.UTK.EDU ; Fri, 15 Jan 1993 15:29:51 EST Errors-To: owner-mpi-comm@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA13112; Fri, 15 Jan 93 15:29:50 -0500 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA13694; Fri, 15 Jan 1993 15:29:49 -0500 Date: Fri, 15 Jan 1993 15:29:49 -0500 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9301152029.AA13694@rios2.epm.ornl.gov> To: mpi-comm@cs.utk.edu Subject: Communication contexts Status: RO I thought the discussion of communication contexts at the last Dallas meeting was lacking in focus, and I'm afraid the existing subcommittees may not deal with this area. If there are enough people interested perhaps there should be a separate communication context subcommittee. If you have any strong opinions on this please let me know. Also let me know if you would like to be on the communication context subcommittee if one is established. On the topic of communication contexts I think (at least) 4 approaches have been proposed. 1) Implicit communication contexts as in MPI1 controlled by push/pop. 2) Explicit registration as in Zipcode 3) Explicit communication contexts should be merged with explicit group contexts. Since groups cannot communicate with each other a new communication context can be created by creating a new group with the same members as the current group. 4) Chuck Simmons of Oracle has suggested that communication contexts are really a particular type of thread, and so can be handled using existing threads packages. If you have any ideas, or are strongly for or against any of the above approaches let mpi-comm know so we can initiate a discussion of this topic. 
David Walker From root Fri Jan 15 15:47:37 1993 Received: from antares.mcs.anl.gov by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA16031; Fri, 15 Jan 1993 15:47:36 -0500 Received: from donner.mcs.anl.gov by antares.mcs.anl.gov (4.1/SMI-GAR) id AA03636; Fri, 15 Jan 93 14:47:35 CST Received: by donner.mcs.anl.gov (4.1/GCF-5.8) id AA00709; Fri, 15 Jan 93 14:47:32 CST Message-Id: <9301152047.AA00709@donner.mcs.anl.gov> To: mpi-comm@cs.utk.edu Subject: Communication contexts Cc: walker@rios2.epm.ornl.gov Date: Fri, 15 Jan 93 14:47:31 CST From: Rusty Lusk Status: RO | | On the topic of communication contexts I think (at least) 4 approaches have | been proposed. | 1) Implicit communication contexts as in MPI1 controlled by push/pop. | 2) Explicit registration as in Zipcode | 3) Explicit communication contexts should be merged with explicit | group contexts. Since groups cannot communicate with each other | a new communication context can be created by creating a new group | with the same members as the current group. | 4) Chuck Simmons of Oracle has suggested that communication contexts | are really a particular type of thread, and so can be handled | using existing threads packages. | If you have any ideas, or are strongly for or against any of the above | approaches let mpi-comm know so we can initiate a discussion of this topic. I think that 1) contradicts our genral agreement to avoid state as much as possible. 4) is a problem on systems that don't support threads, and we have more-or-less agreed to be consistent with thread packages but not depend upon them. 3) should be discussed along with the converse: that contexts rather than groups should be the primary concept and groups defined in terms of contexts. David, perhaps you could create a mailing list on which this discussion can take place, parallel to the subcommittee mailing lists, so it doesn't have to go to the whole committee. Thanks. Rusty From root Fri Jan 15 16:14:41 1993 Received: from msr.EPM.ORNL.GOV by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA13265; Fri, 15 Jan 1993 16:14:39 -0500 Received: from CS.UTK.EDU by msr.epm.ornl.gov (5.67/1.34) id AA08064; Fri, 15 Jan 93 16:14:35 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA16784; Fri, 15 Jan 93 16:12:04 -0500 X-Resent-To: mpi-comm@CS.UTK.EDU ; Fri, 15 Jan 1993 16:12:03 EST Errors-To: owner-mpi-comm@CS.UTK.EDU Received: from convex.convex.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA16768; Fri, 15 Jan 93 16:12:01 -0500 Received: from mozart.convex.com by convex.convex.com (5.64/1.35) id AA21882; Fri, 15 Jan 93 15:11:54 -0600 Received: by mozart.convex.com (5.64/1.28) id AA01950; Fri, 15 Jan 93 15:11:17 -0600 Date: Fri, 15 Jan 93 15:11:17 -0600 From: weeks@mozart.convex.com (Dennis Weeks) Message-Id: <9301152111.AA01950@mozart.convex.com> To: mpi-comm@cs.utk.edu Subject: Communication contexts Cc: lusk@antares.mcs.anl.gov, walker@rios2.epm.ornl.gov Status: RO Rusty Lusk writes: > David, perhaps you could create a mailing list on which this discussion can > take place, parallel to the subcommittee mailing lists, so it doesn't have to > go to the whole committee. Thanks. I believe the default inclusion of everyone is a good idea, although creating a separate mailing list might be appropriate if some mpi-comm members wish to "unsubscribe". 
In the meantime, I will add one brief opinion: I felt that the discussions of contexts in the topology subgroup were quite sufficient, and it would probably be better to have the single subgroup which covers both groups and contexts, so that both will appear in the standard in a style which does not make them conflict with each other, and/or to avoid a proposal where groups and contexts would be totally redundant. I definitely agree that it is an important topic, and that the discussions last week in Dallas were far away from getting a proposal for groups and contexts that would be unanimously acceptable, so much work remains to be done. From root Fri Jan 15 17:03:11 1993 Received: from msr.EPM.ORNL.GOV by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA13294; Fri, 15 Jan 1993 17:03:06 -0500 Received: from CS.UTK.EDU by msr.epm.ornl.gov (5.67/1.34) id AA08266; Fri, 15 Jan 93 17:03:01 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21190; Fri, 15 Jan 93 17:00:57 -0500 X-Resent-To: mpi-comm@CS.UTK.EDU ; Fri, 15 Jan 1993 17:00:56 EST Errors-To: owner-mpi-comm@CS.UTK.EDU Received: from fslg8.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21182; Fri, 15 Jan 93 17:00:54 -0500 Received: by fslg8.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA20492; Fri, 15 Jan 93 22:00:50 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA14823; Fri, 15 Jan 93 14:59:49 MST Date: Fri, 15 Jan 93 14:59:49 MST From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9301152159.AA14823@macaw.fsl.noaa.gov> To: mpi-comm@cs.utk.edu Subject: Re: Communication contexts Status: RO > | > | On the topic of communication contexts I think (at least) 4 approaches have > | been proposed. > | 1) Implicit communication contexts as in MPI1 controlled by push/pop. > | 2) Explicit registration as in Zipcode > | 3) Explicit communication contexts should be merged with explicit > | group contexts. Since groups cannot communicate with each other > | a new communication context can be created by creating a new group > | with the same members as the current group. > | 4) Chuck Simmons of Oracle has suggested that communication contexts > | are really a particular type of thread, and so can be handled > | using existing threads packages. > | If you have any ideas, or are strongly for or against any of the above > | approaches let mpi-comm know so we can initiate a discussion of this topic. > > I think that 1) contradicts our genral agreement to avoid state as much as > possible. 4) is a problem on systems that don't support threads, and we have > more-or-less agreed to be consistent with thread packages but not depend upon > them. 3) should be discussed along with the converse: that contexts rather > than groups should be the primary concept and groups defined in terms of > contexts. > > David, perhaps you could create a mailing list on which this discussion can > take place, parallel to the subcommittee mailing lists, so it doesn't have to > go to the whole committee. Thanks. > > Rusty I'm not sure, but I believe that Zipcode uses push/pop to control contexts (Tony, please correct me if I'm wrong). I'd like to add a 5th approach, basically the one proposed by Paul Pierce: 5) Contexts are used exclusively to insure that message collisions will not occur if independently developed sub-programs are combined. Contexts and groups are orthogonal. Contexts and threads are orthogonal. Each message has an associated context and tag. 
Message context is managed by library routines and is completely out of a user's control. Message tag is selected by the user. The method of managing message contexts is a separate issue (assuming we want contexts). Existing proposals are: 5a) Stack-based management (objected to due to hidden states). 5b) Explicit registration with user-defined "names" (probably requires some communication). 5c) Explicit registration by a central authority ("dollar bill" registration mentioned by Jim Cownie.) There is enough confusion about contexts and groups that we may want to keep this discussion open to everyone. If the consensus is to make a separate mailing list, please add both of us to it. Thanks, Tom Henderson hender@fsl.noaa.gov Leslie Hart hart@fsl.noaa.gov From root Fri Jan 15 19:07:57 1993 Received: from msr.EPM.ORNL.GOV by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA13061; Fri, 15 Jan 1993 19:07:55 -0500 Received: from CS.UTK.EDU by msr.epm.ornl.gov (5.67/1.34) id AA08466; Fri, 15 Jan 93 19:07:50 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA27214; Fri, 15 Jan 93 19:05:42 -0500 X-Resent-To: mpi-comm@CS.UTK.EDU ; Fri, 15 Jan 1993 19:05:41 EST Errors-To: owner-mpi-comm@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA27206; Fri, 15 Jan 93 19:05:40 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA00450; Fri, 15 Jan 93 18:05:36 CST Date: Fri, 15 Jan 93 18:05:36 CST From: Tony Skjellum Message-Id: <9301160005.AA00450@Aurora.CS.MsState.Edu> To: hender@macaw.fsl.noaa.gov, mpi-comm@cs.utk.edu Subject: Re: Communication contexts Status: RO In Zipcode, contexts have a group-wide scope. Groups of processes can involve more than one context. For instance, if 10 processes are involved in several stages of a calculation, each stage might have a different notion of how to do messaging, but each would use all the processes. Groups are important, because they allow control over the scope of global operations. Contexts are important because they provide guarantees to sub-programs about restricted scope of messages. Contexts are additional typing-like information, but information which is controlled by the system, not ad hoc by each user program. When they decide to start messaging, sub-programs request a context from a "postmaster general" in Zipcode. Currently, this is a group-oriented request (loose synchronization) resulting in each group member getting the context, and other information. In a lower-level system, simpler tactics might be possible. The way we did it was supposed to avoid the possibility of races or other problems like that. To build large-scale software, without globalization of the local use of messaging resources, and to provide scalable algorithms with efficient global operations, both groups and contexts are needed. The push/pop state is an implementation detail of little importance. 
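To make the "postmaster general" idea concrete, a minimal sketch in C might look as follows; the names (pm_request_context, ctxt_send, ctxt_recv) are invented for illustration and are not the actual Zipcode or MPI interface:

    typedef int pm_group_t;      /* hypothetical group handle   */
    typedef int pm_context_t;    /* hypothetical context value  */

    extern pm_context_t pm_request_context(pm_group_t group);
    extern void ctxt_send(int dest, int tag, pm_context_t ctx, const void *buf, int len);
    extern void ctxt_recv(int src,  int tag, pm_context_t ctx, void *buf, int len);

    pm_context_t library_init(pm_group_t group)
    {
        /* Loosely synchronous: every member of `group' calls this, and the
           registry ("postmaster general") returns the same fresh context to
           all of them, so the library's traffic cannot collide with user
           traffic carried in other contexts.                               */
        return pm_request_context(group);
    }

    void library_exchange(pm_context_t ctx, int partner, void *buf, int len)
    {
        /* The context travels with every message; the user never picks it. */
        ctxt_send(partner, 0, ctx, buf, len);
        ctxt_recv(partner, 0, ctx, buf, len);
    }

The only point of the sketch is that the context is obtained by a group-wide request and is then carried implicitly on every message the library sends or receives.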
-Tony From root Sat Jan 16 02:08:07 1993 Received: from msr.EPM.ORNL.GOV by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA15898; Sat, 16 Jan 1993 02:08:06 -0500 Received: from CS.UTK.EDU by msr.epm.ornl.gov (5.67/1.34) id AA09478; Sat, 16 Jan 93 02:08:01 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA11696; Sat, 16 Jan 93 02:06:16 -0500 X-Resent-To: mpi-comm@CS.UTK.EDU ; Sat, 16 Jan 1993 02:06:15 EST Errors-To: owner-mpi-comm@CS.UTK.EDU Received: from sol.cs.wmich.edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA11688; Sat, 16 Jan 93 02:06:13 -0500 Received: from id.wmich.edu (id.cs.wmich.edu) by cs.wmich.edu (4.1/SMI-4.1) id AA06036; Sat, 16 Jan 93 02:01:08 EST Date: Sat, 16 Jan 93 02:01:08 EST From: john@cs.wmich.edu (John Kapenga) Message-Id: <9301160701.AA06036@cs.wmich.edu> To: mpi-comm@cs.utk.edu Status: RO hi; It was my feeling that the concepts of group and context, that were clearly understood differently by different people at the start of the discussions, became somewhat better agreed on as the pt2pt discussion took place. Groups - used to define groups of processors for collective communication primitives and to allow mapping topologies onto processors. This allows a group of processors to be used in isolation to compute together. Contexts - As Tom (and Tony) express below, > From: hender@macaw.fsl.noaa.gov (Tom Henderson) > Message-Id: <9301152159.AA14823@macaw.fsl.noaa.gov> > To: mpi-comm@cs.utk.edu > Subject: Re: Communication contexts > Status: R > > I'd like to add a 5th approach, basically the one proposed by Paul Pierce: > > 5) Contexts are used exclusively to insure that message collisions will not > occur if independently developed sub-programs are combined. Contexts and > groups are orthogonal. Contexts and threads are orthogonal. Each message > has an associated context and tag. Message context is managed by library > routines and is completely out of a user's control. Message tag is > selected by the user. The only expression stated so far about the difference between tags and contexts I had gleaned was that context should not be wildcardable (IE no -1 for All contexts) while a receive ALL or some other MASK variant would be allowed on tags. > > The method of managing message contexts is a separate issue (assuming we > want contexts). Existing proposals are: > > 5a) Stack-based management (objected to due to hidden states). > 5b) Explicit registration with user-defined "names" (probably requires > some communication). > 5c) Explicit registration by a central authority ("dollar bill" > registration mentioned by Jim Cownie.) Add: 5d) A context is not strictly managed, but use should follow suggested guidelines. This could allow static allocation of contexts when reasonable, but also support a context server (as the Postmaster General in Zipcode). Think of the static contexts or non-registrar provided contexts as reserved with respect to any registration system. This could be part of the MPI initialization call, if a registrar is included. The drawback here of course is the user could violate whatever guidelines were given. It seems to me most types of name registration or stack allocation schemes, that are not too restrictive and allow user code to interact, are likely to allow errant management of contexts as well. I expect that allowing 5d) would let strictly controlled context systems be built over MPI. 
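As a rough illustration of 5d) only — the names and the split of the value range below are invented for the example — the convention might look like this in C:

    /* Advisory context management in the spirit of 5d): a low range of
       context values is reserved by convention for static use, anything
       above it comes from an optional registrar.  Nothing enforces the
       convention -- that is the stated drawback.                        */
    #define CTX_STATIC_BASE    0      /* static/reserved values start here    */
    #define CTX_STATIC_LIMIT   1023
    #define CTX_DYNAMIC_BASE   1024   /* registrar-provided values start here */

    typedef int context_t;

    extern context_t ctx_server_alloc(void);    /* hypothetical registrar call */

    context_t get_context(int have_registrar)
    {
        static context_t next_static = CTX_STATIC_BASE;

        if (have_registrar)
            return ctx_server_alloc();          /* like Zipcode's Postmaster General */
        if (next_static > CTX_STATIC_LIMIT)
            return -1;                          /* out of reserved values */
        return next_static++;                   /* static allocation, on trust */
    }

A strictly managed scheme (stack-based or fully registered) could then be layered on top of such a convention without MPI itself having to police it.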
> From: Tony Skjellum > Message-Id: <9301160005.AA00450@Aurora.CS.MsState.Edu> > To: hender@macaw.fsl.noaa.gov, mpi-comm@cs.utk.edu > Subject: Re: Communication contexts > Status: R > > In Zipcode, contexts have a group-wide scope. Groups of processes can Of course if we eliminate groups from the pt2pt call then contexts may not be explicitly limited to groups in MPI. (though a context server could be asked to provide a context relative to a group). > involve more than one context. For instance, if 10 processes are involved > in several stages of a calculation, each stage might have a different notion > of how to do messaging, but each would use all the processes. Groups > are important, because they allow control over the scope of global operations. > Contexts are important because they provide guarantees to sub-programs > about restricted scope of messages. Contexts are added typing-like > information, but which is controlled by the system, not ad hoc by each > user program. When they determine to start messaging, sub-programs > request a context from a "postmaster general" in Zipcode. Currently, this ... > To build large-scale software, without globalization of the local use of > messaging resources, and to provide scalable algorithms with efficient > global operations, both groups and contexts are needed. The push/pop > state is an implementation detail of little importance. I expect the use push/pop for context state would make implementation in some environments hard and I have some concern about requiring registration on very large systems. I do think a strictly controlled stack context system could be built over 5d). The question about context management relates to how high a level MPI should be at. I see it at a low level, that higher level systems can be built on. That is one reason for keeping context management "advisory". > > -Tony > I liked the zipcode documentation. I think it would be a good idea to make it available on netlib. john From root Sat Jan 16 06:08:50 1993 Received: from msr.EPM.ORNL.GOV by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA13121; Sat, 16 Jan 1993 06:08:44 -0500 Received: from CS.UTK.EDU by msr.epm.ornl.gov (5.67/1.34) id AA09829; Sat, 16 Jan 93 06:07:24 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA00620; Sat, 16 Jan 93 06:03:55 -0500 X-Resent-To: mpi-pt2pt@CS.UTK.EDU ; Sat, 16 Jan 1993 06:03:46 EST Errors-To: owner-mpi-pt2pt@CS.UTK.EDU Received: from gatekeeper.oracle.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA00612; Sat, 16 Jan 93 06:03:37 -0500 Received: from jewel.us.oracle.com by gatekeeper.oracle.com (5.59.11/37.7) id AA21711; Sat, 16 Jan 93 03:03:35 PST Received: by jewel.us.oracle.com (5.59.10/37.3) id AA01231; Sat, 16 Jan 93 03:08:56 PST Message-Id: <9301161108.AA01231@jewel.us.oracle.com> Date: Sat, 16 Jan 93 03:08:56 PST From: Charles Simmons To: mpi-pt2pt@cs.utk.edu Subject: RE: Communication contexts Status: RO > | 4) Chuck Simmons of Oracle has suggested that communication contexts > | are really a particular type of thread, and so can be handled > | using existing threads packages. > possible. 4) is a problem on systems that don't support threads, and we have > more-or-less agreed to be consistent with thread packages but not depend upon > them. 3) should be discussed along with the converse: that contexts rather Allow me to clarify my sentiments. 
The example of context usage given in the MPI draft documentation appears to explicitly assume the existence of threads, and here contexts appear to be being used as threads. > 5) Contexts are used exclusively to insure that message collisions will not > occur if independently developed sub-programs are combined. Contexts and > groups are orthogonal. Contexts and threads are orthogonal. Each message > has an associated context and tag. Message context is managed by library > routines and is completely out of a user's control. Message tag is > selected by the user. I could strongly support almost all of the above. [The part I wouldn't support is a "tag" as I will explain below.] We do our work on the nCUBE, which, as you may know, has a message passing interface essentially equivalent to that proposed by MPI. We ran into the exact problem quoted above: it was difficult to write libraries that were guaranteed not to conflict in their usage of message types. The usage of the word "contexts" in the above is equivalent to the usage of the word "ports" amongst operating systems programmers. Because "port" is slightly less overloaded than "context", I prefer the word "port". Note that ports are sufficiently powerful that they subsume the concepts of "contexts", "pids", and "message types" or "tags" as used by MPI. >The only expression stated so far about the difference between tags and >contexts I had gleaned was that context should not be wildcardable (IE no >-1 for All contexts) while a receive ALL or some other MASK variant would >be allowed on tags. I agree that there is little difference between tags and contexts. This helps explain why ports are so good at subsuming both contexts and tags. > The method of managing message contexts is a separate issue (assuming we > want contexts). Existing proposals are: > > 5a) Stack-based management (objected to due to hidden states). > 5b) Explicit registration with user-defined "names" (probably requires > some communication). > 5c) Explicit registration by a central authority ("dollar bill" > registration mentioned by Jim Cownie.) I don't particularly understand the above, probably because I haven't been listening in on enough of the discussion. I think I understand what you mean by stack-based management, and we do use the word "registration"... When using ports, stack-based management isn't an issue for the same reasons you don't manage pids and message types using stack based management. For example, when sending a message, instead of using the MPI

    push_context (contextp);
    send (dest, type, buf, buflen);
    pop_context (contextp);

you use

    send (port, buf, buflen);

In our usage, a "port" is simply a queue to which messages can be sent. The receiver can remove a message from the head of the queue. Wildcards are neither needed nor implemented, making ports more efficient. The efficiency arises from the fact that a process can have multiple ports. In MPI, you essentially implement one receive queue for each pid. If you want to use a wildcard to access messages out-of-order, you need to waste time scanning the queue. With multiple ports, however, you can set up multiple queues so that you never need to access anything other than the head of a queue. Sometimes ports do need to be "registered". In our terminology, registration is the process of publishing in a well-known location sufficient information about a "port" created by one process so that another process can send messages to that port. 
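As a minimal sketch of the port style described above — port_t, port_send and port_recv are invented names for this illustration, not the nCUBE or MPI interface — a library-side fragment might read:

    /* A port is a receive queue owned by one process; the sender names the
       destination queue directly, and the receiver only ever takes the
       message at the head of a queue, so no tag, pid or wildcard matching
       is required.                                                         */
    typedef struct message {
        struct message *next;
        int             len;
        /* payload would follow in a real implementation */
    } message_t;

    typedef struct port {
        message_t *head, *tail;       /* FIFO of delivered messages */
    } port_t;

    /* Hypothetical primitives. */
    extern void port_send(port_t *dest, const void *buf, int len);
    extern int  port_recv(port_t *p, void *buf, int maxlen);   /* dequeues the head */

    /* A library keeps its own reply port, so its messages can never be
       picked up by a user receive and no queue scanning is ever needed.  */
    void library_step(port_t *reply_port, port_t *peer, const void *request, int len)
    {
        char answer[256];
        port_send(peer, request, len);
        port_recv(reply_port, answer, (int)sizeof answer);
    }

Registration, in these terms, would simply mean publishing the identifying bit-pattern of a port in some well-known place so that an unrelated process can obtain it and send to it.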
This is a very high level action that is almost outside the scope of MPI. We use a nameserver for this purpose. (The nameserver is accessed via a well-known port, i.e. a port whose bit-pattern is a well-known constant.) [Do note that we require that the bit-pattern representing a port be allowed to be transmitted between to processes without the operating system needing to know that the bit-pattern represents a port. This means that accessing the nameserver is not required in order to use a port.] For groups, we have thought about extending the concept of a port to an array-port. The simple implementation of this allows each process in a group to create a port. These ports are all sent to a central location which creates an array of ports and broadcasts the array to all ports in the array. The complex memory efficient implementation may require processes of a group to be created in a special fashion. Basically, it allows the address of a port for a specific process in a group to be easily computed by another process given the port of the first process in the group and the index of the target process in the group. -- Chuck From owner-mpi-context@CS.UTK.EDU Tue Jan 19 08:54:15 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA14013; Tue, 19 Jan 93 08:54:15 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA13129; Tue, 19 Jan 93 08:53:41 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 19 Jan 1993 08:53:40 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA13121; Tue, 19 Jan 93 08:53:39 -0500 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA16410; Tue, 19 Jan 1993 08:53:38 -0500 Date: Tue, 19 Jan 1993 08:53:38 -0500 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9301191353.AA16410@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Chair Tony Skjellum of Mississippi State University has kindly agreed to chair the MPI subcommittee on contexts. Regards, David From owner-mpi-context@CS.UTK.EDU Tue Jan 19 11:37:59 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17976; Tue, 19 Jan 93 11:37:59 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21893; Tue, 19 Jan 93 11:37:36 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 19 Jan 1993 11:37:30 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21879; Tue, 19 Jan 93 11:37:21 -0500 Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA13277 (5.65c/IDA-1.4.4); Tue, 19 Jan 1993 11:36:49 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA27411; Tue, 19 Jan 93 16:36:38 GMT Date: Tue, 19 Jan 93 16:36:37 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9301191636.AA27411@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA05440; Tue, 19 Jan 93 16:35:18 GMT To: csimmons@us.oracle.com Cc: mpi-pt2pt@cs.utk.edu, mpi-context@cs.utk.edu In-Reply-To: Charles Simmons's message of Sat, 16 Jan 93 02:21:14 PST <9301161021.AA01226@jewel.us.oracle.com> Subject: mpi Content-Length: 3447 To: mpi-pt2pt@meiko.co.uk, mpi-context.@meiko.co.uk (Apologies to those who get it twice) Gentlepeople, Chuck Simmons has raised some interesting issues, with which I have a great deal of sympathy. 
The original messaging model which we implemented on our machines (and which we still support, and expect to continue to support) is in exactly the vein which Chuck is asking for (and has been used for an Oracle port !). In particular CSN supports 1) multiple end points for communication in a single process (known as "transports") 2) No tagging of messages 3) No system buffering, but the ability for users to queue multiple non-blocking receives on a transport, thus providing the buffering they require. 4) Send and receive by struct iovec. (As Chuck observes, the implementation ends up building an iovec (on the stack) for the simpler forms) 5) Both blocking and non-blocking tests for I/O completion. (our test actually has a timeout value). 6) The ability to pass transport addresses around the machine without the system being involved. 7) A standard name server service to associate textual names with transport addresses. If people are really interested then I can probably send the man pages. (Though this is not the main point of this note). HOWEVER (and this is one of the points) although this system had clean, specified semantics and a fast implementation (on our hardware), it hasn't helped to sell machines, and we have now produced an NX style interface as an alternative. I think that the problem with popularising such an interface among users is that it appears to do less for them than a model with implicit buffering, and is therefore harder to start to use. The fact that it allows them greater control and can ultimately produce higher performance is not their immediate concern and does not therefore count for much. Many FORTRAN programmers in particular do not wish to be concerned with "system programming issues" like buffer management. I would certainly like MPI to be able to support such an interface (so that we can achieve the higher performance it offers, and also make MPI applicable in non-scientific application areas). (Though those who were present in Dallas will have noted that I wasn't trying to push such an approach there, mainly because I think it's a lost cause. NX style is what we have to live with...) (Second point coming up...) However it crosses my mind that there may be some potential for embedding such an interface within the MPI model. (This is a vague thought, but I'm throwing it out so others can think about it as well). Observe that 1) Marc's persistent communication descriptors seem to remove the tag matching requirement (though they're still rather vague !) 2) One way of viewing contexts would be to implement a context as a particular queue of messages (or in other words an entirely separate communication end point) within a process. (The implications of this for contexts would be a) contexts must be declared before use b) their number may be limited c) they should be freed after use ) The combination of the two things could then come somewhere near to what Chuck is asking for... -- Jim James Cownie Meiko Limited Meiko Inc. 
650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 FAX : +44 454 618188 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Tue Feb 2 11:00:00 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21520; Tue, 2 Feb 93 11:00:00 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA22663; Tue, 2 Feb 93 10:59:18 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 2 Feb 1993 10:59:17 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA22655; Tue, 2 Feb 93 10:59:11 -0500 Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA28603 (5.65c/IDA-1.4.4 for ); Tue, 2 Feb 1993 10:59:07 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA22488; Tue, 2 Feb 93 15:58:57 GMT Date: Tue, 2 Feb 93 15:58:57 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9302021558.AA22488@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA03309; Tue, 2 Feb 93 15:57:21 GMT To: mpi-context@cs.utk.edu Subject: Restricted contexts Content-Length: 6135 A view on contexts ================== At the last meeting in Dallas there was a strong feeling that some form of "context" was required in MPI. The major reason for this is the desire to separate communications on behalf of (independently compiled) library routines from those performed by the user's application. The use of specific message tags for this is discounted because it would require tag registration both by the library and the user code. This intrusion into the user's constitutional right to use whatever tags she sees fit is viewed as unacceptable, hence the requirement to add what is essentially just another tag field. [Of course here in Britain we don't have a written constitution, and the government just does what it likes...] In fact adding a further tag field (called context) does not remove the registration requirement, though it does mean that registration is only required by library routines. (In effect a default context is reserved by the allocator for the user, and she need not concern herself about it.) I should like to propose a different view of what a context is. Suppose we think of each context as a separate communications end point in the process (and intend to implement it this way too !). This has various implications :-- 1) A context should be created before it can be used. (This has the effect of allocating a new message queue inside the MPI implementation). 2) The number of contexts will be limited (you might expect < 256) [ I don't see this as a problem, after all you still have all of the tags available in each context, and how many nested libraries do you expect to use ? ] 3) A context should be deleted when use of it is complete. (So that the system can release the queue. This also gives a good point at which to report errors such as unreceived messages still on the queue). 4) Creation of a context can be a local operation. (Since the context is probably just an index into a local table. cf. unix file descriptors) [Though this may not be true when collective operations filtered by context are taking place, in this case one will want the same context in all members of the collective group, perhaps some contexts need to be reserved for allocation in group.] 5) Context values can be passed in messages without needing translation or registration. (This is important for maintaining scalability. 
The context information need only be passed to those processes which require it. [Though there's an unaddressed bootstrap problem here...]) 6) It is still possible to have a context lookup/allocation scheme if necessary. 7) It is possible for the user to avoid most of the tag matching cost (as has been requested by Chuck Simmons), all she (the user, not Chuck!) has to do is allocate contexts and target messages at them. Within a particular context all receives can have tag MATCH_ANY, and no queue searching is required. Since the context selection is fast (because we have restricted the number of contexts so that it is sensible to implement the selection as an array access) this improves the achieved performance. Given this style, then the implementation is extremely easy (and will be fast). One simply has an array of message lists, the context number being the index into the array. All of the operations (such as tag matching) can then use the existing code. This will (in general) produce an implemetation which runs faster than having a single queue of messages and matching on both tag and context, since the message queues will be shorter, and only contain relevant buffers. [Contrast with the case of additional code to match contexts on one message list which would slow down all code, whether or not contexts are being used !] The model of communication is that the context is a part of the TARGET address, both for collective and diadic communications. In effect the end point for communication is only relevant at the receiver. [Is this what we want ? The only place it seems to make a difference is in the tests for completion of non-blocking sends. If the user has queued many of these on the single outgoing queue, then when the library is searching for completed ones it has to skip them. If we have many outgoing queues this is not necessary. It rather depends on the way we test for completion of non-blocking sends, which has yet to be determined...] Functions might look like typedef ... MPI_CONTEXT; MPI_CONTEXT MPI_NEWCONTEXT(MPI_GROUP group); Allocate a new context for use within the given group (may be MPI_NO_GROUP, in which case the allocation is entirely local). Returns MPI_NO_CONTEXTS if there are no suitable free contexts left. If the group is not MPI_NO_GROUP, then this is a barrier synchronisation of all the processes in the group, and the result of the function is the same in all of the participating processes. int MPI_FREECONTEXT(const MPI_CONTEXT context); Free the given context, returns MPI_SUCCESS if the context has been freed MPI_NOTCONTEXT if the context is not valid MPI_BUSY if the context still has outstanding messages (I'm not sure if we need this last one, but the model of unix file descriptors suggests it might be useful...) int MPI_MOVECONTEXT(const MPI_CONTEXT to, const MPI_CONTEXT from); Rename the context from to be the context to. (cf dup2). This is a simple renaming operation. MPI_SUCCESS if the context has been renamed MPI_NOTCONTEXT if the from context is not valid MPI_BUSY if the to context is already allocated One of the arguments to the send routines (both diadic and collective) would then be an MPI_CONTEXT to which the message will be sent. Feedback please. Although this is a long message I'm not offering beer. [The pound is now down to $1.45 and still falling !] -- Jim James Cownie Meiko Limited Meiko Inc. 
650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Tue Feb 2 11:49:30 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA22912; Tue, 2 Feb 93 11:49:30 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA24957; Tue, 2 Feb 93 11:49:03 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 2 Feb 1993 11:49:02 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from fslg8.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA24949; Tue, 2 Feb 93 11:48:58 -0500 Received: by fslg8.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA28146; Tue, 2 Feb 93 16:48:54 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA01526; Tue, 2 Feb 93 09:47:49 MST Date: Tue, 2 Feb 93 09:47:49 MST From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9302021647.AA01526@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu Subject: Re: Restricted contexts A few (probably dumb) questions about Jim's context proposal: > MPI_CONTEXT MPI_NEWCONTEXT(MPI_GROUP group); > Allocate a new context for use within the given group (may be > MPI_NO_GROUP, in which case the allocation is entirely local). Returns ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > MPI_NO_CONTEXTS if there are no suitable free contexts left. > If the group is not MPI_NO_GROUP, then this is a barrier > synchronisation of all the processes in the group, and the result of > the function is the same in all of the participating processes. In what situations would I want a context that is entirely local? Would I need to send this context to a process I intend to communicate with? I like the idea of being able to create a context within a group for scalability reasons. How do I know that two groups will not get the "same" new context returned from separate calls to MPI_NEWCONTEXT()? Is this even a problem? > One of the arguments to the send routines (both diadic and collective) > would then be an MPI_CONTEXT to which the message will be sent. Does this mean that MPI_CONTEXT is NOT a required argument to a receive routine? (Maybe I'm missing the whole point of this proposal! If so, please educate me. A pseudo-code examle might be very helpful.) > Although this is a long message I'm not offering beer. [The pound is > now down to $1.45 and still falling !] This whole beer thing is getting pretty scary. I think I'll keep all my messages short! :) Tom Henderson hender@fsl.noaa.gov From owner-mpi-context@CS.UTK.EDU Tue Feb 2 12:15:44 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA23454; Tue, 2 Feb 93 12:15:44 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA26210; Tue, 2 Feb 93 12:15:20 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 2 Feb 1993 12:15:18 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA26192; Tue, 2 Feb 93 12:15:16 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03830; Tue, 2 Feb 93 11:14:40 CST Date: Tue, 2 Feb 93 11:14:40 CST From: Tony Skjellum Message-Id: <9302021714.AA03830@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, jim@meiko.co.uk Subject: Re: Restricted contexts Ladies/Gentlemen: The notion of context implementation for efficiency should not drive it to be a highly limited field. 
It is true that if there are only a few contexts, then they could be moved to hardware support. I think that large software will have many hundreds of contexts, and that this resource should be less limited. I dislike the notion that hardware-specific details creep into the standard. For instance, CM-5 could/does have 8 hardware contexts of messages, at a low level, but these are not really published to the user at all. The issue of whether one or many queues are supported is completely separate, and can be addressed separately. Separate queues weaken the partial ordering guarantee of a message passing system; ie, that messages are received in pairwise-preserved order. Hence, they have been avoided thus far in the work I have done. However, there is nothing in principle wrong with letting an implementation disperse messages as they come in based on context. The best partial screening for messages is highly application dependent. Perhaps some type of adaptive hashing could be used internal to strong implementations, but it should not be required. For instance, independent of context support totally, adaptive hashing on type/sender could be incorporated into existing systems. I do not think it too inefficient to test 64 bits instead of 32 bits for typing. In practice, there are much more serious overheads in a system. The notion that we have full push-down message stacks is an example of this. Active messages systems are highly context oriented. They make the context bigger (eg, an address on the destination machine) rather than smaller. One can see that there is a basic unity between message selection and context support... contexts help to route messages to the right execution part of a destination process. Registry is very important, because it guarantees that code (be it user or other library code) will cooperate reasonably well, given the message resource. This type of registry is accepted for file systems, for instance, where we accept that FORTRAN unit number is a bad way to program. Unix file descriptors provide insulation between different parts of a program. Same idea for contexts. Is a context associated with a group? Well, I hope so, but enforcing this is less important. Groups are very important in and of themselves, and to have a guarantee that a context is available in a group is very useful. It also makes sense to me that a loosely synchronous registry by a group of processes to get a context is a good idea. For instance, a big group of processes initially has a context. It can ask for more contexts, and pinch off subgroups, establishing an MPMD processing environment. Overlap is permitted. Smaller groups of processes, starting with their own contexts, would need a way to union into bigger groups. A mechanism for this is needed. This direction is important for scalability, for fault tolerance, and for dynamism in processing on large collections of distributed or parallel processors. For instance, homogeneous clusters could union into a heterogeneous collection, and the resulting communication structure would provide a tree-like (at least two-level) access to messaging, which could also be useful for optimizing performance of global communication in a heterogeneous setting. Other ideas? 
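For concreteness only, one way the per-context dispersal mentioned above could be organised internally is sketched below in C; the structures are invented and imply nothing about how many contexts MPI should require:

    /* Messages are filed by context on arrival, so a receive within a
       context only looks at the head of that context's queue.  With a
       small dense context space the queue is found by indexing; with
       32-bit contexts the same idea needs a hash, and ordering is then
       only preserved within a single context.                          */
    #define MAX_CONTEXTS 256                 /* illustrative limit only */

    typedef struct msg   { struct msg *next; int tag; int source; } msg_t;
    typedef struct queue { msg_t *head, *tail; } queue_t;

    static queue_t ctx_queue[MAX_CONTEXTS];

    void deliver(int context, msg_t *m)      /* called when a message arrives */
    {
        queue_t *q = &ctx_queue[context];    /* no searching, no tag compare  */
        m->next = NULL;
        if (q->tail) q->tail->next = m; else q->head = m;
        q->tail = m;
    }

    msg_t *receive_any(int context)          /* receive with a wildcard tag   */
    {
        queue_t *q = &ctx_queue[context];
        msg_t *m = q->head;
        if (m) { q->head = m->next; if (!q->head) q->tail = NULL; }
        return m;                            /* NULL if nothing has arrived   */
    }

Whether such dispersal is required, merely permitted, or ruled out by the ordering guarantee is exactly the implementation question under discussion here.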
- Tony Skjellum From owner-mpi-context@CS.UTK.EDU Tue Feb 2 12:52:23 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA24044; Tue, 2 Feb 93 12:52:23 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA27903; Tue, 2 Feb 93 12:51:29 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 2 Feb 1993 12:51:28 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA27887; Tue, 2 Feb 93 12:51:22 -0500 Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA00407 (5.65c/IDA-1.4.4 for ); Tue, 2 Feb 1993 12:51:19 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA22983; Tue, 2 Feb 93 17:51:16 GMT Date: Tue, 2 Feb 93 17:51:15 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9302021751.AA22983@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA03315; Tue, 2 Feb 93 17:49:35 GMT To: hender@macaw.fsl.noaa.gov Cc: mpi-context@cs.utk.edu In-Reply-To: Tom Henderson's message of Tue, 2 Feb 93 09:47:49 MST <9302021647.AA01526@macaw.fsl.noaa.gov> Subject: Restricted contexts Content-Length: 2607 > A few (probably dumb) questions about Jim's context proposal: Not dumb. > In what situations would I want a context that is entirely local? Would I > need to send this context to a process I intend to communicate with? The basic model I have is that contexts are very similar to Unix file descriptors (maybe I should have said this more explicitly). Therefore you can create them locally, however if someone is going to send a message to one of your contexts the they need to know its identity. So the answer to the second question is YES, you'll have to send it to them [hence the bootstrap problem...]. > I like the idea of being able to create a context within a group for > scalability reasons. How do I know that two groups will not get the "same" > new context returned from separate calls to MPI_NEWCONTEXT()? Is this even > a problem? Since contexts are allocated within a group and context creation is a barrier synchronisation there are two cases to consider :- 1) the two groups creating contexts are disjoint. In this case the contexts are independently created, and may indeed be given the same number. (Just as opening files in two processes can return the same file descriptor). I don't think this is a problem. 2) The two groups share some members. In this case we only have a problem if we have threads (since otherwise a single thread needs to be at two barriers simultaneously, which it can't.) If we do have threads, then there will have to be a lock in the library to ensure that different results are given to the two groups. > > One of the arguments to the send routines (both diadic and collective) > > would then be an MPI_CONTEXT to which the message will be sent. > > Does this mean that MPI_CONTEXT is NOT a required argument to a receive > routine? (Maybe I'm missing the whole point of this proposal! If so, please > educate me. A pseudo-code examle might be very helpful.) No you do need a context in the receive (though it might be optional in the "simple" versions with some specific default value). The issue I was addressing was whether a send needed to specify one context (that at the target), or two (that at the target and that at the sender). I was tending towards the former solution. I'll try to write some examples tomorrow. -- Jim James Cownie Meiko Limited Meiko Inc. 
650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Tue Feb 2 13:12:02 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA24463; Tue, 2 Feb 93 13:12:02 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA28864; Tue, 2 Feb 93 13:10:58 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 2 Feb 1993 13:10:56 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA28850; Tue, 2 Feb 93 13:10:51 -0500 Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA00681 (5.65c/IDA-1.4.4 for ); Tue, 2 Feb 1993 13:10:42 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA23047; Tue, 2 Feb 93 18:10:37 GMT Date: Tue, 2 Feb 93 18:10:37 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9302021810.AA23047@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA03318; Tue, 2 Feb 93 18:09:00 GMT To: tony@Aurora.CS.MsState.Edu Cc: mpi-context@cs.utk.edu In-Reply-To: Tony Skjellum's message of Tue, 2 Feb 93 11:14:40 CST <9302021714.AA03830@Aurora.CS.MsState.Edu> Subject: Restricted contexts Content-Length: 4249 > The notion of context implementation for efficiency should not drive > it to be a highly limited field. I'd like the additional cost of contexts to be small. I'd also like it NOT to impact people who don't use contexts. Both of these drive me in the direction of the sort of implementation I was proposing. This is easy provided there are relatively few contexts, hard if they're as anarchic as tags. > It is true that if there are only a > few contexts, then they could be moved to hardware support. Yes, but not the reason for my proposal. I have no desire (or need) to do this. > I think that > large software will have many hundreds of contexts, and that this resource > should be less limited. I can't imagine why you need another 10 bits of context when you already have 32 of tag. What are you going to use the context for ? > I dislike the notion that hardware-specific > details creep into the standard. For instance, CM-5 could/does have 8 > hardware contexts of messages, at a low level, but these are not really > published to the user at all. I agree entirely. My proposal is not based on any specific hardware implementation. (My hardware has a small processor in it which can execute C code and run comms processor threads in a remote user space without requiring any main processor intervention, but I'm not asking for that in the standard !) > The issue of whether one or many queues are supported is completely > separate, and can be addressed separately. Separate queues weaken > the partial ordering guaranteee of a message passing system; ie, that messages > are received in pairwise-preserved order. Hence, they have been avoided > thus far in the work I have done. True, I didn't see this as a problem, since I thought that messages in different contexts could reasonably be unordered with respect to each other. (This was based on my view of what contexts are for, as explained in the intro to my proposal, maybe you have a different use for them in mind ?) > However, there is nothing in principle > wrong with letting an implementation disperse messages as they come in > based on context. The best partial screening for messages is highly > application dependent. 
Perhaps some type of adaptive hashing could be > used internal to strong implementations, but it should not be required. > For instance, independent of context support totally, adaptive hashing > on type/sender could be incorporated to existing systems. Of course all of these things are possible, and all add to the baseline latency. I was hoping to reduce this, not increase it. > I do not think it too inefficient to test 64 bits instead of 32 bits for > typing. In practice, there are much more serious overheads in a system. > The notion that we have full push-down message stacks, is an example of > this. I haven't seen any proposal in the MPI context which doesn't take this approach, and since I'm not trying to rewrite the whole thing... > Registry is very important, because it guarantees that code (be it user > or other library code) will cooperate reasonably well, given the message > resource. This type of registry is accepted for file systems, for instance, > where we accept that FORTRAN unit number is a bad way to program. Unix > file descriptors provide insulation between different parts of a program. > Same idea for contexts. Exactly. That's why my context proposal is based on the file descriptor model. > Is a context associated with a group? Well, I hope so, but enforcing this > is less important. Groups are very important in and of themselves, and to > have guarantee that a context is available in a group is very useful. It > also makes sense to me that a loosely synchronous registry by a group of > processes to get a context is a good idea. In my model a group can request a context. It does not have one a priori. (Maybe it doesn't need one since it can use one which the relevant processes already have). -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Tue Feb 2 13:45:12 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA25590; Tue, 2 Feb 93 13:45:12 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA01587; Tue, 2 Feb 93 13:44:48 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 2 Feb 1993 13:44:47 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA01579; Tue, 2 Feb 93 13:44:45 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03874; Tue, 2 Feb 93 12:44:09 CST Date: Tue, 2 Feb 93 12:44:09 CST From: Tony Skjellum Message-Id: <9302021844.AA03874@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, jim@meiko.co.uk Subject: Re: Restricted contexts Addendum to my previous comments. I do not think it is a bad idea for vendors to provide limited, extremely fast registry mechanisms, like the CM-5's 8 hardware contexts, or whatever Meiko might have in mind along the same lines. These should be viewed as low-level services that can be incorporated into standard interfaces, increasing their performance, when they can be used. Active messages, for instance, while they are lower-level than the MPI ideas, should not be discarded by vendors. They may be a very good way for strong, MPI-upward-compatible, message systems to do a good job with hardware. 
We must provide for a way for vendors to extract performance from their hardware, within the context of MPI, or else they will continue to gain commercial advantage from ignoring MPI, rather than supporting it fully. - Tony From owner-mpi-context@CS.UTK.EDU Tue Feb 2 13:52:39 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA25863; Tue, 2 Feb 93 13:52:39 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA02049; Tue, 2 Feb 93 13:52:10 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 2 Feb 1993 13:52:09 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA02040; Tue, 2 Feb 93 13:52:08 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03880; Tue, 2 Feb 93 12:51:25 CST Date: Tue, 2 Feb 93 12:51:25 CST From: Tony Skjellum Message-Id: <9302021851.AA03880@Aurora.CS.MsState.Edu> To: jim@meiko.co.uk, mpi-context@CS.UTK.EDU Subject: Re: Restricted contexts Jim, I do not think it is better to keep it a 8 or 10 bits, rather than 32, for the following reasons 1) alignment 2) comparison cost (8 v. 32) on modern architectures Yes, we are going to use more than 256. Libraries that use processes in multiple configurations, and with multiple sub-groupings generate a number of contexts. I will send you the Zipcode 1.0 tech. doc. later (when finished) and you can see for yourself that many contexts can reasonably be used. For each different library in a system, one could imagine using about twenty of these message contexts. It could be up to the implementor to use more/less, depending on how services are being provided, and how service priorities are set. Tags are an unstructured field. 32 bits for the tag may be too much, but when it is all the programmer has to control selectivity, 32 is a good quantity to choose. Context suggests structuring part of the selectivity. In this case, 32 bits may be more than enough. Real software will use contexts. People who don't use them are the people who will be writing their own codes, using no one else's libraries or codes, and basically not doing the same type of top-down design we expect from good software development. I would argue that people who want to save these incremental test costs will also be unwilling to do anything but drive the hardware from the most primitive system calls. 
- Tony From owner-mpi-context@CS.UTK.EDU Wed Feb 3 05:35:52 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA18460; Wed, 3 Feb 93 05:35:52 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21954; Wed, 3 Feb 93 05:34:51 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 3 Feb 1993 05:34:50 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21946; Wed, 3 Feb 93 05:34:47 -0500 Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA13139 (5.65c/IDA-1.4.4 for ); Wed, 3 Feb 1993 05:34:43 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA26644; Wed, 3 Feb 93 10:34:39 GMT Date: Wed, 3 Feb 93 10:34:38 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9302031034.AA26644@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA03416; Wed, 3 Feb 93 10:32:56 GMT To: tony@Aurora.CS.MsState.Edu Cc: mpi-context@CS.UTK.EDU In-Reply-To: Tony Skjellum's message of Tue, 2 Feb 93 12:51:25 CST <9302021851.AA03880@Aurora.CS.MsState.Edu> Subject: Restricted contexts Content-Length: 1848 Tony, > I do not think it is better to keep it a 8 or 10 bits, rather than 32, > for the following reasons > > 1) alignment > 2) comparison cost (8 v. 32) on modern architectures Alignment may or may not be an issue. I have no objection to you sending the context as an aligned quadword in the message header if that is faster on your hardware. I'm only interested in the restricting range of valid values. My point is that if I have only a few (8 or 10 bits) then I can use a qualitatively different implementation [an array] than if I have to deal with random 32 bit values. Of course the cost of comparison is the same, but if I only have a few contexts I can remove the comparisons from the loop entirely. > Yes, we are going to use more than 256. Libraries that use processes > in multiple configurations, and with multiple sub-groupings generate a > number of contexts. I will send you the Zipcode 1.0 tech. doc. later > (when finished) and you can see for yourself that many contexts can > reasonably be used. > For each different library in a system, one could imagine using about > twenty of these message contexts. It could be up to the implementor to > use more/less, depending on how services are being provided, and how > service priorities are set. I agree with this, BUT this doesn't mean you need that many contexts all at once. The analogy with unix file descriptors is very close. Each library might use 20 open files, but you can still get by with a limit of 64 or 128 on the number of files open at one time. Similarly with contexts. -- Jim James Cownie Meiko Limited Meiko Inc. 
650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Wed Mar 3 16:57:19 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA12038; Wed, 3 Mar 93 16:57:19 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA08269; Wed, 3 Mar 93 16:56:38 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 3 Mar 1993 16:56:37 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA08261; Wed, 3 Mar 93 16:56:36 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA05229; Wed, 3 Mar 93 15:53:56 CST Date: Wed, 3 Mar 93 15:53:56 CST From: Tony Skjellum Message-Id: <9303032153.AA05229@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: Status Dear MPI contexts people, Per our agreement at the last MPI meeting, a sub-sub-committee consisting of Cownie, Clarke, and Skjellum are developing a multi- alternative "closure" proposal for MPI. This will be presented to the entire contexts sub-committee within two weeks. After further discussion, at least two viable scenarios, required to resolve group, context, tag, group ID conflicts/overlaps, will be presented at next MPI meeting. You will receive our proposal for review as soon as possible. - Tony From owner-mpi-context@CS.UTK.EDU Mon Mar 8 17:14:45 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21363; Mon, 8 Mar 93 17:14:45 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA23364; Mon, 8 Mar 93 17:14:16 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 8 Mar 1993 17:14:15 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA23356; Mon, 8 Mar 93 17:14:14 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA08120; Mon, 8 Mar 93 16:10:45 CST Date: Mon, 8 Mar 93 16:10:45 CST From: Tony Skjellum Message-Id: <9303082210.AA08120@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: PRECIS for closure proposal Please comment. - Tony Skjellum ---------------------------------------------------------------------------- Here is the precis of the post-MPI meeting... Input was from Skjellum, Littlefield, Pierce, Ranka [and others I don't recall specifically]. There shall be three self-consistent "closures" for MPI, that completely remove inconsistency between the usage of the following concepts (while fully defining the following concepts): 1) tag 2) context 3) group 4) process ID It is our task to assemble such proposals and to place them before the context sub-committee prior to the next meeting, and provide a refined set of proposals (with recommendations) before the full committee at the next MPI meeting. The first reading of this section of the standard is needed to make the rest of the standard go forward. --------------------------------------------------------------------- Proposal I. GROUP ID = CONTEXT = OBJECT, aka "Snir" model groups, contexts, objects are local and opaque. naming of processes is group-relative. this option was discussed, in most part, by Snir at meeting. Proposal II. 
GROUP ID != CONTEXT, aka "Littlefield" model GROUP corresponds to "data/instance" in this model CONTEXT is statically defined at link time, corresponding to code or class In this model, group ID's appear explicitly, and separately from CONTEXTS in global operations. It was noted by Littlefield that he would write global operations, and product bits of the GROUP ID and static CONTEXT to guarantee that his code produced unique, deterministic, non-interfering results. Proposal III. GROUP ID null, GROUP scope == CONTEXT scope, aka Zipcode model CONTEXT is a dynamic concept, and global (or as global as necessary) Whenever a group is created, a context is assigned to it. CONTEXTS are used to disambiguate global operations. GROUP ID is the CONTEXT for the group. GROUP is an enumeration of PROCESS ID's OR ONE OF SEVERAL possible hashing formulas, with exceptions (more scalable) Proposal IV. GROUP ID != CONTEXT, aka fully "object-oriented" model CONTEXT is a dynamic concept, and global (or as global as necessary) GROUP ID is emasculated, in the sense that there are process groups, but CONTEXT is still needed in global operations The general case of "code U data" (the object-oriented model) is taken, with context+group forming a scope in the system. The "Server model," where CONTEXTS that have no group association made to them, is to be supported. GROUP is an enumeration of PROCESS ID's OR ONE OF SEVERAL possible hashing formulas, with exceptions (more scalable) ---------------------------------------------------------------------- Requirements for all proposals to meet closure 1) All global operations must be describable in terms of point-to-point operations without loss of generality 2) Global operations, including non-blocking barriers, and so on, must work correctly with overlapping process groups 3) There must be reasonable/scalable mechanism for assigning contexts 4) Tag usage seems to be non-controversial according to above discussion, but we need to watch out for the following. 1) desire to use part of tag for context, group ID or other field, and make that part of the standard 5) Provision of enough contexts to support all describable or reasonable scenarios that we can justify, but not more than are foreseeable, because there could be a performance penalty by requiring the number of contexts to be too great. 6) Justification is perceived to be required for dynamic contexts (why?) 7) A fully consistent definition for how process IDs are to work. 8) An explanation of how to extend to dynamic groups later, if possible. 
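To make closure requirement 1 concrete, here is a minimal sketch of a group broadcast written purely in terms of point-to-point operations within a context. Every function and name in it (pt2pt_send, pt2pt_recv, group_rank, group_size, BCAST_TAG) is a hypothetical primitive assumed for illustration; none of the four proposals specifies this interface.

    /* Sketch: a blocking group broadcast expressed only with hypothetical
     * point-to-point primitives, to illustrate closure requirement 1.
     * A linear fan-out is used for brevity; a tree would scale better. */

    /* Hypothetical primitives assumed to exist for this sketch. */
    extern int  group_rank(int group);
    extern int  group_size(int group);
    extern void pt2pt_send(const void *buf, int len, int dest, int tag,
                           int group, int context);
    extern void pt2pt_recv(void *buf, int len, int src, int tag,
                           int group, int context);

    #define BCAST_TAG 0   /* a tag this (hypothetical) module reserves for itself */

    void group_bcast(void *buf, int len, int root, int group, int context)
    {
        int me = group_rank(group);   /* rank of this process within the group */
        int np = group_size(group);
        int r;

        if (me == root) {
            for (r = 0; r < np; r++)
                if (r != root)
                    pt2pt_send(buf, len, r, BCAST_TAG, group, context);
        } else {
            pt2pt_recv(buf, len, root, BCAST_TAG, group, context);
        }
    }

Requirement 2 (overlapping groups) is what the context argument buys here: two such broadcasts running in different contexts cannot intercept each other's messages even when their groups share members.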
From owner-mpi-context@CS.UTK.EDU Tue Mar 9 18:14:44 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA16448; Tue, 9 Mar 93 18:14:44 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA03977; Tue, 9 Mar 93 18:13:41 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 9 Mar 1993 18:13:39 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from antares.mcs.anl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA03969; Tue, 9 Mar 93 18:13:37 -0500 Received: from godzilla.mcs.anl.gov by antares.mcs.anl.gov with SMTP id AA15144 (5.65c/IDA-1.4.4 for ); Tue, 9 Mar 1993 17:13:34 -0600 From: William Gropp Received: by godzilla.mcs.anl.gov (4.1/GeneV4) id AA12285; Tue, 9 Mar 93 17:13:30 CST Date: Tue, 9 Mar 93 17:13:30 CST Message-Id: <9303092313.AA12285@godzilla.mcs.anl.gov> To: tony@aurora.cs.msstate.edu Cc: mpi-context@cs.utk.edu In-Reply-To: Tony Skjellum's message of Mon, 8 Mar 93 16:10:45 CST <9303082210.AA08120@Aurora.CS.MsState.Edu> Subject: PRECIS for closure proposal The one point in Tony's excellent summary on which more elaboration is needed is whether process id's, when being used in point-to-point operations, are group-relative (ranks) or absolute (and possibly opaque). Only Proposal I specifies this (ids = ranks); the others leave this unspecified. In the interests of keeping the latency down, I'd like them to be absolute, at least at the lower levels. In any event, this choice is a critical one but one that is mostly orthogonal to the other issues. In fact, I'd have chosen absolute process id's as the "Snir" model. Bill From owner-mpi-context@CS.UTK.EDU Tue Mar 9 18:40:34 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA16750; Tue, 9 Mar 93 18:40:34 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA05349; Tue, 9 Mar 93 18:40:04 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 9 Mar 1993 18:40:01 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA05321; Tue, 9 Mar 93 18:39:58 -0500 Date: Tue, 9 Mar 93 23:39:42 GMT Message-Id: <2472.9303092339@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: PRECIS for closure proposal To: William Gropp In-Reply-To: William Gropp's message of Tue, 9 Mar 93 17:13:30 CST Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Bill writes: > In the > interests of keeping the latency down, I'd like them to be absolute, at least > at the lower levels. In any event, this choice is a critical one but one > that is mostly orthogonal to the other issues. In fact, I'd have chosen > absolute process id's as the "Snir" model. I think we all realise what your choice would be, Bill :-) There are arguments both ways; perhaps we should have both. On the other hand, the speed demons probably should be advised to use the communication handle layer, at which any overhead of mapping (group,rank) to absolute-implementation-specific-best-you-can-get, and other per-initialisation overheads, should be paid exactly once in the handle initialisation. Isn't that exactly why we have this layer?
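To make the "pay it exactly once" point concrete, a small sketch of a communication handle that caches the (group,rank)-to-absolute-process-id translation at initialisation, so the per-message path does no mapping at all. The types and functions used here (comm_handle, resolve_group_rank, low_level_send) are assumptions invented for the sketch, not anything proposed.

    /* Hypothetical lower-level primitives assumed by this sketch. */
    extern int  resolve_group_rank(int group, int rank);   /* possibly slow  */
    extern void low_level_send(int abs_pid, int context, int tag,
                               const void *buf, int len);

    typedef struct {
        int abs_pid;    /* absolute, implementation-specific process id */
        int context;    /* context stamped on every message             */
    } comm_handle;

    /* Paid once: any table search or communication needed for the mapping. */
    void handle_init(comm_handle *h, int group, int rank, int context)
    {
        h->abs_pid = resolve_group_rank(group, rank);
        h->context = context;
    }

    /* Paid per message: no lookup, just the cached values. */
    void handle_send(const comm_handle *h, const void *buf, int len, int tag)
    {
        low_level_send(h->abs_pid, h->context, tag, buf, len);
    }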
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Mar 9 18:42:28 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA16757; Tue, 9 Mar 93 18:42:28 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA05578; Tue, 9 Mar 93 18:42:14 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 9 Mar 1993 18:42:13 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA05570; Tue, 9 Mar 93 18:42:10 -0500 Date: Tue, 9 Mar 93 23:42:06 GMT Message-Id: <2480.9303092342@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: [d39135@sodium.pnl.gov: Re: group descriptors] To: Tony Skjellum , Jim Cownie , Rik Littlefield , Sanjay Ranka Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Rik writes: > > PS. Tony, I'm a little surprised that your reply to me was not > broadcast, especially considering your reaction to being left > out of a loop earlier. > > I am also concerned that this group is not leaving a publicly > accessible trail. These discussions have a lot of content > that should be available to others even if it does not appear > in the final proposal. > > Should we not be using an mpi-contexts reflector and archiver at > Oak Ridge? > I'm beginning to think this is correct. We seem to be talking things which seem to go beyond the "Four Proposals" agreement. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Mar 10 00:36:37 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21055; Wed, 10 Mar 93 00:36:37 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA18558; Wed, 10 Mar 93 00:36:04 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 00:36:03 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA18550; Wed, 10 Mar 93 00:36:02 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA09605; Tue, 9 Mar 93 23:30:48 CST Date: Tue, 9 Mar 93 23:30:48 CST From: Tony Skjellum Message-Id: <9303100530.AA09605@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, ranka@top.cis.syr.edu Subject: Re: [d39135@sodium.pnl.gov: Re: group descriptors] Cc: mpi-context@cs.utk.edu I am not sure to which comment I incorrectly misresponded. Rik, please remind me... I think there are a total of two letters which are improperly sent only to you, and I do want them shared with everyone. I was quite rushed today, and hit the wrong R key, no doubt. To my recollection, both deal with Methods within group/contexts. If you want to elaborate mailing to entire mpi-contexts subcommittee, I concur. Let's make sure all of you are on that list, so we don't lose track of the main discussion people, however. So, my basic request to Rik is that he echo the two letters sent to him alone, which was my error. Mea culpa. 
:-) Tony From owner-mpi-context@CS.UTK.EDU Wed Mar 10 01:10:06 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21883; Wed, 10 Mar 93 01:10:06 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA19605; Wed, 10 Mar 93 01:09:48 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 01:09:47 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA19597; Wed, 10 Mar 93 01:09:46 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA09722; Wed, 10 Mar 93 00:05:40 CST Date: Wed, 10 Mar 93 00:05:40 CST From: Tony Skjellum Message-Id: <9303100605.AA09722@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov Subject: Re: group descriptors Cc: jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu This corrects the second message, accidently sent only to Rik. - Tony From tony Tue Mar 9 23:23:17 1993 To: d39135@sodium.pnl.gov Subject: Re: group descriptors Content-Length: 897 Rik, I was foolish not to mention that we have implemented a registration mechanism with trees, in Zipcode 1.0. The user can add available methods for combine, and then,when a mailer is created, specific choices can be taken from the list. Currently, we implement with numbers, but this could be done with names, to allow better sharing. I will check on choices we made. It has been some time since I worked on that internal code. Anyway, I concur on the need to have registerable choices, that allow sharing, per your letter. As regards new functionality, they could go in the tree too, but there is no structural mechanism in Zipcode, other than defining a class' method list, for adding new instantiations to a mailer at creation. All of this was very elaborate coding, which I will elucidate in my article for Rolf. I am happy to share details with you as well, beforehand. - Tony From owner-mpi-context@CS.UTK.EDU Wed Mar 10 06:16:34 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA13775; Wed, 10 Mar 93 06:16:34 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA06917; Wed, 10 Mar 93 06:15:36 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 06:15:33 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA06900; Wed, 10 Mar 93 06:15:29 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA09848; Wed, 10 Mar 93 05:10:50 CST Date: Wed, 10 Mar 93 05:10:50 CST From: Tony Skjellum Message-Id: <9303101110.AA09848@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu Subject: re: static contexts writeup [My comments prepended with **** - Tony] **** In mail immemorial, Rik agreed to merge his (enclosed) write-up with **** the first-generation PRECIS, to generate a more useful one, **** Please correct me if I am wrong, but I cannot find this! **** (Again, I apologize for calling this a PRECIS, not a SKETCH.) **** I want to generate the next version of this SKETCH/PRECIS, **** with specifics garnered from our lengthy bantering and **** discussions. Particularly, I will discuss the sparse matrix **** of proposals X closure options. Lyndon, you indicated that **** one of my comments challenged you to make a new added entry **** to one of the proposal categories. May I have that now/soon? 
**** **** In interest of refocusing discussion, I will work on such a **** new PRECIS/SKETCH, reviewing all the mail flurry from last days, **** and then suggest writing assignments. Lyndon will be off-line, but **** Lyndon will still get some brain work to do off-line. :-) Lyndon **** will you phone me periodically, to stay abreast of things? **** Perhaps we will need the manpower of Rusty/Bill to help us, given **** the amount of work on filling the sparse matrix of issues. Willing? **** **** Our PRECIS/SKETCH and work from it will not address the following things **** (related to discussion in our group, but not central to it): **** - the fully portable MPI program **** - I/O **** - the semantics and syntax of process management **** - the time-table for MPI-1,MPI-2,MPI-3,... **** In a truly global sense, these could be construed to be closure as **** well, but I want to exclude them for sanity reasons. I do believe **** that parts of these issues could be addressed within MPI-1, but **** it is not now this subcommittee's role to do that for MPI at-large. **** Strong objections? If so, we are in deep trouble:-) **** **** We will assume that discussion in this 4-proposal X closure effort is **** all essential to MPI-1. Anything you really don't think is essential **** for MPI-1... once we agree to same .... should probably get deferred. **** Is that controversial? **** **** I thank everyone for their sincere efforts. **** - Tony **** **** PS Per constructive advice from Rik, all of my mail is getting copied to **** mpi-context now. All should do so. I did not hear any dissent **** about this suggestion, and it adds potential help to **** discussion. As we need the help, and few people are writing **** anyway, this will not likely cause problems. **** Continue to include the five names jim, lyndon, tony, sanjay & **** Rik, incase any of us are not on the mpi-context mailing list!!! **** **** PPS Lyndon, where are you. I got up at 4am to read the next round **** of mail from you, but none to be found :-) **** **** Rik writes... >Before it gets TOO stale, here's a copy of my writeup on static >contexts et.al. as it last went to Lyndon and Tony. (You two can >stop reading at the line of dashes.) > >I have opted not to hack on static contexts any more because I >would rather kill them. **** Is this a proposal to include static contexts as in within **** the 4-proposal model, and that we stop working on same? **** If so, I can make it so... >I now suspect that I would trade static contexts for something >like the MPI_set_group_attribute cacheing facility that I have >outlined in recent email. I will think more about this and get >back to you. > >In the meantime, I would very much like more feedback on the >cacheing facility, especially how it breaks. > >Thanks, >--Rik > >---------------------------------------------------------------------- > >This note discusses message context ID's -- what they're good for, >how to generate them, how to use them. > >Credits.. At the last Dallas meeting, Mark Snir asked me to write >up some thoughts on this topic and to send them around to the >whole committee. Paul Pierce, Tony Skjellum, several other >people (whose names I can't remember) and I discussed some of the >ideas over lunch on the last day. Then Lyndon Clarke of the >contexts subcommittee emailed me asking for some clarification of >static context generation, which I guess I'm the main proponent >of. He reviewed an earlier (and even longer) draft of this note, >and it is now much improved. 
(Thanks, Lyndon!) However, I have >substantially rewritten it since talking with any of those folks, >so I have to take full blame for the fallacies of this version. > >1. Notation and assumptions > > In this note, I assume that > > . "context ID" means a message selection field > that cannot be wildcarded, > > . "tag" means a message selection field > that *can* be wildcarded, and > > . context ID, tag, and process ID are the *only* fields > that can be used for pt-pt message selection. > > In particular, I assume that "group ID" cannot be used for > selection unless it is somehow propagated into the tag or > context ID fields. > > I will speak of "modules" that operate within "groups". By > "module", I mean a procedure (subroutine) or collection of > procedures that can exploit knowledge of each other's internal > workings in order to coordinate their message passing. By > "group", I mean a collection of processors identified by a > globally unique "group ID". By "operating within a group", I > mean that the module is called simultaneously by all processes > belonging to the group and exchanges messages primarily with > those copies of itself. (I assume that arbitrary communication > is always possible. The focus of this discussion is how to > make restricted communication easy.) I assume that the group > ID is passed into the module. > > I will also speak of an "instance of a module". By this, I > mean a module, plus the group that it is operating in, plus the > data it is working on. (The latter becomes important if the > module supports nonblocking operation. In that case, I assume > that some data ID is passed into the module. An example is the > 'tag' argument of the nonblocking collective routines.) > > My notation requires that independently written modules can be > called sequentially within the same group. Specifically, if a > a group must be "duplicated" in order to guarantee that two > modules do not conflict, then I will say that those modules are > operating in different groups. > > By definition, independent modules do not coordinate their use > of tags. (A module can of course coordinate with itself. > That's what tags are for.) > >2. Why do we want context ID's? > > The most important purpose of context ID's is to avoid message > collisions between independent modules. (This is the only way > to avoid such collisions, since coordination of tag values is > excluded by definition.) > > Context ID's are sometimes also be the best way to avoid > collisions between instances of the same module. (This depends > on the definition of "best", as discussed below.) The tag > field can also be used for such avoidance, since the module has > full control over how the tags are used. > >3. Desirable features of contexts and groups: > > I think there will be little debate about these: > > A. It should easy to write modules that don't conflict with the > same or other modules, even when the modules are called > within overlapping groups. (Or, save us all, with multiple > outstanding non-blocking operations.) > > B. It should be easy to call these modules. > > There may be more debate about these, but I like them as sanity > checks: > > C. The MPI routines that manage groups and contexts should > be definable in terms of MPI point-to-point communication > and process-local operations. > > (For example, I get nervous about group and context > management semantics that require asynchronous servers or > their equivalent, like an asynchronous atomic > fetch-and-increment. 
We already decided to eliminate > interrupt receive at the user level, on the argument that > it was too hard to implement correctly. Asynchronous > servers are a bit easier, since they can run on > resources other than the processes calling MPI, but > personally I would rather avoid servers if possible.) > > D. The "closed-system constraint": standard global operations > (e.g., broadcasting within a group) should be definable in > terms of MPI point-to-point and group/context operations. > > (For example, we must be careful to avoid bootstrap > problems like needing a context specific to a group, in > order to create the group.) > >3. How do modules get and use context ID's ? > > Assume that we have already created a group and distributed its > group ID, and that we have a mechanism for broadcasting within > the group. (More on this later.) > > Then we have at least the following options for managing > context ID's: > > Option 1. "Context ID = group ID" (and no group-dup'ing). > > By my definition of "same group", i.e. dup'ing gets you > a different group, this option means that two modules can > get called back-to-back with the same context. > > I think this is OK, courtesy sequencing constraints, if and > only if both modules use exact match on tag and source. > Since the group ID is already available (passed in), this > provides a simple and efficient mechanism for many useful > algorithms, such as blocking global ops. > > This option fails, however, with modules that use wildcard > tag or source. (I won't bother to provide an example -- > having the same context is the same as having no context, > and we know what havoc that creates.) > > Option 2. One process obtains from MPI a unique context ID > for each instance of a module, and distributes it > for that instance's use. > > I believe that this is the strategy proposed by Tony Skjellum > in his 3-page handout at Dallas. The distribution might be > done either within a module or by its caller. Presumably > there are more callers than modules, so doing it within the > module is preferred. As noted above, distribution of the new > context can be done with a broadcast using exact match on > tag and source, with context=group, presuming of course that > the module follows the rule of doing wildcard matches only > in a different context. > > In the Dallas discussions, it seemed to be assumed that this > option requires an asynchronous server, since the context > generation calls have to be coordinated to ensure uniqueness > but can occur at any time. This bothered me. > > However, I realized while writing this note that this option > can also be done without a server by having a process-local > pool of preassigned context ID's. For example, let the > bottom bits of a context ID be the ordinal (in some global > numbering scheme) of the process that assigned it, and manage > the upper bits however you would in a server. For P > processes, this would require at most log(P) extra bits in > the context field, compared to having a server. > > So now my main concerns about this approach are the extra > communication to distribute the context ID's, plus a nagging > feeling that there's still a bootstrap problem if this is > the only option. > > Option 3. "Static context ID's" > > This is the option that I pushed at the Dallas meeting. > > The idea is that each module gets a context ID that is based > solely on a source code key, then manages its tag values to > keep the groups and instances separate. 
> > To be definite, assume that a module can obtain a context ID > via a call like > > error = MPI_get_context_for_name (name, &contextID) > > where <name> is the character string unique to the module, and > the generated context ID depends only on the supplied name. > This call is made independently on all processes wishing to > share the context. > > The simplest convention would be to require that <name> be > the name of a procedure within the module. That way the > linker guarantees uniqueness. There are other ways to get > unique strings, such as the "dollar bill algorithm" and > whatever strategies people use to reserve names for expansion > of libraries, but these do not affect the basic idea. > However, you cannot use the address of a routine in place of > its name, since this would not work in heterogeneous systems. > > The most efficient strategy is to require that all names be > "registered offline". By this, I mean that MPI be given a > list, prior to execution, of all the names that might be > passed to MPI_get_context_for_name by this particular > program. There are several ways that this could be > accomplished. For example, if we require that the names > always be those of modules, then a (conservative) list can be > generated at link time using Unix 'nm' or equivalent. > > With offline registration, MPI_get_context_for_name can > be implemented so as to require no communication at all. > > We can go farther and require that <name> be a constant > string, in which case the call to MPI_get_context_for_name > could be implemented as a compile-time macro substitution. > (I hate citing it, but as precedent I note that this strategy > is used by the optimizer for some versions of 'Linda'.) > > Without offline registration, it seems that MPI_get_context_for_name > would require an asynchronous server. I will not discuss this > further because in my view static contexts are intended to > make for simple implementations and fast execution. > > OK, now we can quickly and efficiently (zero execution > cost) get a context ID that is unique to the module, but all > instances of that module have to share the context. (Please, > no kneejerk reactions -- hear me out before labeling this as > a stupid idea in complete violation of the spirit of contexts.) > > How does the module keep multiple instances straight? > It uses the message tag field. > > The idea is to generate an MPI tag value by smushing together > the group ID, data ID (if needed), and the nominal tag value > that the module would have used if it didn't have to worry > about multiple instances. No other module cares how the > smushing is done -- the context is unique and we have agreed > that modules can use the tag bits however they want. This is > just an example of a module coordinating with itself > regarding how to handle the tag bits. > > Yes, this scheme does seem like jumping through a hoop, but > it does have some advantages. If the group and data ID's are > short enough to be smushed into a message tag without losing > any bits, then the MPI tag can be generated with absolutely > no communication. This means that: > > 1. It provides a method to implement other group > and context management functions that is bulletproof > and free of bootstrap problems. > > 2. It executes fast. > > To understand the first advantage, consider the problem of > how to implement MPI_close_group(gid) using only MPI pt-pt > operations.
Remember that the module executing within that > group on other nodes can legitimately still have a wildcard > receive posted. If MPI_close_group tries to use the group > context, its messages can get eaten. Sync the group first? > That just means you have the same problem implementing sync. > Having static contexts for MPI_close_group and/or sync solves > this problem. Are there other solutions for which MPI pt-pt > operations would be sufficient? > > The second advantage is perhaps weaker, but I for one keep > finding myself writing modules that are sensitive to > communication performance, particularly latency. In many > cases, I would prefer to write a few extra lines of code than > to use functionality that would cost extra messages. I > have lots of experience with tag-smushing due to working > with low-functionality communication systems (like select > on message tag only, and no useful wildcarding). I find > that extra coding effort is very small. Typically it's > embedded in a macro call or cover routine, so that I still > have to write only one line of code to handle a message. > > Note that I do NOT propose static contexts as a solution for > everyone. However, they are useful for speed, and may be > required under the closed-system constraint. > >4. Summary > > It looks to me like all three options presented here are > complementary. They might all might be provided, in the > expectation that they will be used by different audiences. > >Questions, comments? > >--Rik Littlefield > > From owner-mpi-context@CS.UTK.EDU Wed Mar 10 06:34:10 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17635; Wed, 10 Mar 93 06:34:10 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA07441; Wed, 10 Mar 93 06:33:22 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 06:33:21 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA07433; Wed, 10 Mar 93 06:33:19 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA09870; Wed, 10 Mar 93 05:28:59 CST Date: Wed, 10 Mar 93 05:28:59 CST From: Tony Skjellum Message-Id: <9303101128.AA09870@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@CS.UTK.EDU Subject: latest lyndon-mail **** My latest comments start lines with ****. **** - Tony >From lyndon@epcc.ed.ac.uk Wed Mar 10 05:12:18 1993 >Received: from epcc.ed.ac.uk (daedalus.epcc.ed.ac.uk) by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); > id AA09849; Wed, 10 Mar 93 05:11:49 CST >Date: Wed, 10 Mar 93 11:14:44 GMT >Message-Id: <2772.9303101114@subnode.epcc.ed.ac.uk> >From: L J Clarke >Subject: [Tony Skjellum: Re: [Tony Skjellum: Re: Intra/inter group communications, UBONBOUND, PIDs]To: Tony Skjellum , Jim Cownie , > Rik Littlefield , > Sanjay Ranka >Reply-To: lyndon@epcc.ed.ac.uk >Apparently-To: ranka@top.cis.syr.edu >Apparently-To: d39135@sodium.pnl.gov >Apparently-To: jim@meiko.co.uk >Apparently-To: tony@aurora.cs.msstate.edu >Status: R > >Good morning > >Lyndon replying to Tony replying to Lyndon replying to Tony replying to ... > > >> >(again). Please help. >> > >> >First way, "context" is being used to indicate is a binding of group >> >plus more, and therefore I understand it easily. >> > >> >Second way, "context" is being used in a way which I understand. >> > >> ***** I think my comment made more sense to me when I wrote it before. >> ***** I withdraw it. > >Arrrghhhh! 
I realise that I have badly typed the "Second way", which >should say that "context" is being used in a whay which I do not >understand. This is very badly typed indeed, apologies. > >I think this was a request for information/sharing of understanding, >which I expect I still need. **** I was viewing the whole context-thing as a "port" specification. **** How about a domain-wide name for a piece of mail to be delivered? **** Such as mail from UK to US? **** I am still not sure this was cogent... **** >> >The point is that you do not actually want to produce a new group. You >> >want to keep thinking of the two groups as seperate entities, each >> >acting individually in a coherent fashion, and communicating with one >> >another perhaps according to some rules based on ranks in each >> >indicidual group. >> ***** I understand that, but why do you want to keep thinking of it that way, >> ***** except for some possible performance benefit? >> **** > >I tell you why, it's because we find it easy to think of it and discuss >it in that way. This really is the whole point, expressive power. **** **** I believe in the expressive-power argument, no problem. I am **** also willing to believe that we should keep the minimum cost of **** inter-group down to the point where it remains useful. It is **** dynamicism of groups that motivates inter-group communication. **** Else, the one-shot cost of merging groups, and staying intra-group **** would not be unacceptable. There must be some ineffable quality **** of groups (dynamic need to change, for instance) that you hope for! **** I agree with this abstractly, but please help me more... **** >> > >> >Same "I can read this two ways" as above. >> ***** OK, I withdraw this remark too. > >Look, I found it vague. I'd rather you helped me to understand, please. > **** Related thoughts... **** I am trying to understand my original comment, myself. I think **** of context/group/rank_id as a MPI_TID, if you wish. It all made more **** sense to me structured that way, rather than having a pair of **** contexts as a mechanism for inter-group. In short, I felt we were **** inventing our own TID called context/group/rank_id, and that this **** was a nice structure, and superceded the PVM notion of a TID. **** If I am foggy, sorry, but if we worked in those terms, with accessors **** on such a TID, we would know how to handle some of the problems **** described elsewhere. For instance, we could have the meaning of **** rank_id be group-scope/dependent. Rank would be a nice, fast hash on **** static groups, and a slower, more complex thing to map for **** intergroup [are people still liking NUL/UNBOUND???]. **** >> > >> >If this is an invitation to add a further proposal to the set we are >> >doing, then I accept such invitation. In fact, I can frame it as an >> >extension of proposal I. >> ***** YES. > >Will do. > >> > >> >> Of course, point-to-point could provide another set of procedures which >> >> handle inter-group communication cleanly. This is my preferred >> >> approach, but again I don't suppose that this will be acceptable, >> >> because so may people are already so concerned about the number of >> >> procedures in the point-to-point section. >> >> **** >> >> **** Again, why not try. We need to fill in our matrix of proposals X >> >> **** closures :-) >> >> **** >> > >> >Because I perceive that it would be like trying to drag a dead whale >> >along a beach :-) (You never tried that? I can tell you, its hard work >> >and slow going!) >> ***** Acknowledged. 
> >Ah, so you tried pulling a dead whale as well. Funny old world :-) > >> ***** I think I was interested in implementation with p2p, because >> ***** of the degrees of freedom inside an MPI envelope dictated by >> ***** some kinds of transcations. These transactions might, for >> ***** instance, motivate >> ***** 1) support of more than K contexts (where K is currently like 256) >> ***** 2) require dynamic contexts which are separate from group ID's >> ***** >> ***** Hence, by asking how the functions are to be done, I am asking >> ***** for possible justifications for the more general context model. >> ***** If hidden contexts are needed to do some of these things, >> ***** perhaps they could be justifiably be accomplished with an open >> ***** context mechanism. >> ***** >> ***** That was my motivation. - Tony >> ***** >> > >I'll think about these comments, but I suspect there is a language >barrier in operation here. > >Best Wishes >Lyndon > **** Lyndon, we know that these functions will have to be implemented some how. **** MPI itself might provide such mechanisms as published functionality or **** hidden functionality. If published functionality is denied in the **** standard (eg, general context management) because we cannot justify it **** properly, but must be implemented internally for most/all complying **** versions of MPI, then I think we lose out. My vague way of stating **** the requirement was to reveal the possibility of this, and thereby to **** promote such internal degrees of freedom into the public part of MPI, **** so applications can use them... Does that help???? **** - Tony From owner-mpi-context@CS.UTK.EDU Wed Mar 10 07:01:54 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA23631; Wed, 10 Mar 93 07:01:54 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA08660; Wed, 10 Mar 93 07:01:08 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 07:01:07 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA08652; Wed, 10 Mar 93 07:00:58 -0500 Date: Wed, 10 Mar 93 12:00:52 GMT Message-Id: <2831.9303101200@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: [Tony Skjellum: re: static contexts writeup] To: Tony Skjellum , Jim Cownie , Rik Littlefield , Sanjay Ranka Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Tony writes: > **** PPS Lyndon, where are you. I got up at 4am to read the next round > **** of mail from you, but none to be found :-) > **** First off, many thanks to Tony for getting to his terminal at 4am. I really appreciate the effort to be UK compatible. I fell asleep at mine about 2am and then went home unable to stir mental activity. From 9am to 11am I was in meetings. I had a fair bit of mail to read, but more replies to my copious mail of last night yet expected. You should have got two messages beiefly describing some fairly basic inter-group communications kind of things that are important to me. I'll make as much time today as I can, but as I said last night, I have a hell of a lot of other things to get through as well today. I will come in additionally tomorrow morning from 9am to 10:30 am, to catch up on any further discussion developments. Then I really have to get on the train. 
> **** In mail immemorial, Rik agreed to merge his (enclosed) write-up with Hey, I got about uncountable copies of Rik's note saved away, now another one :-) > **** Lyndon, you indicated that > **** one of my comments challenged you to make a new added entry > **** to one of the proposal categories. May I have that now/soon? challenged?, invited! Soon, Tony, soon. > **** Lyndon will be off-line, but > **** Lyndon will still get some brain work to do off-line. :-) :-) I can write up Proposal I, and Ib (Lyndon extension amendment) while away. > **** Lyndon > **** will you phone me periodically, to stay abreast of things? Okay, but this will be after 8pm my time, as the calls will come out of my pocket and appear on my parents 'phone bill, and its cheaper then, and they'll have to be real short. > **** Our PRECIS/SKETCH and work from it will not address the following things > **** (related to discussion in our group, but not central to it): > **** - the fully portable MPI program I vote we exclude from our discussion. > **** - I/O I vote we exclude from our discussion. > **** - the semantics and syntax of process management Perhaps something of this should be thought about. Quick thoughts/suggestions: * Create new process group (blob of processes :-) * Resize existing process group (shrink/grow blob, synchronously in blob) > **** - the time-table for MPI-1,MPI-2,MPI-3,... I vote we exclude from our discussion. > **** In a truly global sense, these could be construed to be closure as > **** well, but I want to exclude them for sanity reasons. I do believe > **** that parts of these issues could be addressed within MPI-1, but > **** it is not now this subcommittee's role to do that for MPI at-large. > **** Strong objections? If so, we are in deep trouble:-) No objections. > **** > **** We will assume that discussion in this 4-proposal X closure effort is > **** all essential to MPI-1. Anything you really don't think is essential > **** for MPI-1... once we agree to same .... should probably get deferred. > **** Is that controversial? Not controversial. > **** I thank everyone for their sincere efforts. - ditto - > **** PS Per constructive advice from Rik, all of my mail is getting copied to > **** mpi-context now. All should do so. I did not hear any dissent > **** about this suggestion, and it adds potential help to > **** discussion. As we need the help, and few people are writing > **** anyway, this will not likely cause problems. > **** Continue to include the five names jim, lyndon, tony, sanjay & > **** Rik, incase any of us are not on the mpi-context mailing list!!! Concur, being done. > > > **** Rik writes... > > >Before it gets TOO stale, here's a copy of my writeup on static > >contexts et.al. as it last went to Lyndon and Tony. (You two can > >stop reading at the line of dashes.) > > > >I have opted not to hack on static contexts any more because I > >would rather kill them. > **** Is this a proposal to include static contexts as in within > **** the 4-proposal model, and that we stop working on same? > **** If so, I can make it so... I'd like us to now ask Rik to write Proposal II. 
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Mar 10 08:37:24 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA28930; Wed, 10 Mar 93 08:37:24 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA12056; Wed, 10 Mar 93 08:36:57 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 08:36:56 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA12047; Wed, 10 Mar 93 08:36:43 -0500 Date: Wed, 10 Mar 93 13:36:37 GMT Message-Id: <2981.9303101336@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: [Tony Skjellum: re: static contexts writeup] To: Tony Skjellum , Jim Cownie , Rik Littlefield , Sanjay Ranka Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Proposal Ib Proposal Ib extends Proposal I (nee aka "snir" model) to provide group identification and inter-group communication for static (constant membership) groups. Group identification is effected by: facilities to send group in message; group registry service Inter-group communication needs to extend the basic (group,rank) notation. Two suggestions will be offered: procedures to do this; syntactic trickery (see previous emails). Regards Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Mar 10 11:12:48 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA03467; Wed, 10 Mar 93 11:12:48 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA20567; Wed, 10 Mar 93 11:12:19 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 11:12:18 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA20557; Wed, 10 Mar 93 11:12:11 -0500 Date: Wed, 10 Mar 93 16:12:01 GMT Message-Id: <3127.9303101612@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: private message To: Tony Skjellum In-Reply-To: Tony Skjellum's message of Wed, 10 Mar 93 10:00:55 CST Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu No problem. This would be just fine. I perfer not to have to administrate it, especially as I will be off-line and therefore unable to service administration requests. 
Can I suggest that the address be precis@aurora.cs.mssatte.edu :-) /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Mar 10 11:32:09 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA03864; Wed, 10 Mar 93 11:32:09 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21943; Wed, 10 Mar 93 11:31:32 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 11:31:29 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21932; Wed, 10 Mar 93 11:31:27 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA10032; Wed, 10 Mar 93 10:25:42 CST Date: Wed, 10 Mar 93 10:25:42 CST From: Tony Skjellum Message-Id: <9303101625.AA10032@Aurora.CS.MsState.Edu> To: tony@aurora.cs.msstate.edu, jim@meiko.co.uk, d39135@sodium.pnl.gov, ranka@top.cis.syr.edu, lyndon@epcc.ed.ac.uk Subject: Re: [Tony Skjellum: re: static contexts writeup] Cc: mpi-context@cs.utk.edu **** My latest comments appear prepended with for ****'s. **** -Tony ----- Begin Included Message ----- From owner-mpi-context@CS.UTK.EDU Wed Mar 10 05:58:24 1993 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 07:01:07 EST Date: Wed, 10 Mar 93 12:00:52 GMT From: L J Clarke Subject: Re: [Tony Skjellum: re: static contexts writeup] To: Tony Skjellum , Jim Cownie , Rik Littlefield , Sanjay Ranka Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Content-Length: 4438 Tony writes: > **** PPS Lyndon, where are you. I got up at 4am to read the next round > **** of mail from you, but none to be found :-) > **** [comments on interactions over next day omitted, :-)] **** In mail immemorial, Rik agreed to merge his (enclosed) write-up with Hey, I got about uncountable copies of Rik's note saved away, now another one :-) > **** Lyndon, you indicated that > **** one of my comments challenged you to make a new added entry > **** to one of the proposal categories. May I have that now/soon? challenged?, invited! Soon, Tony, soon. > **** Lyndon will be off-line, but > **** Lyndon will still get some brain work to do off-line. :-) :-) I can write up Proposal I, and Ib (Lyndon extension amendment) while away. > **** Lyndon > **** will you phone me periodically, to stay abreast of things? Okay, but this will be after 8pm my time, as the calls will come out of my pocket and appear on my parents 'phone bill, and its cheaper then, and they'll have to be real short. ***** Fine, I will phone you, if you provide time windows and phone number(s). ***** I will address costs. > **** Our PRECIS/SKETCH and work from it will not address the following things > **** (related to discussion in our group, but not central to it): > **** - the fully portable MPI program I vote we exclude from our discussion. > **** - I/O I vote we exclude from our discussion. > **** - the semantics and syntax of process management Perhaps something of this should be thought about. Quick thoughts/suggestions: * Create new process group (blob of processes :-) * Resize existing process group (shrink/grow blob, synchronously in blob) **** OK, I will think about the proposal X closure matrix placement of this. > **** - the time-table for MPI-1,MPI-2,MPI-3,... I vote we exclude from our discussion. 
> **** In a truly global sense, these could be construed to be closure as > **** well, but I want to exclude them for sanity reasons. I do believe > **** that parts of these issues could be addressed within MPI-1, but > **** it is not now this subcommittee's role to do that for MPI at-large. > **** Strong objections? If so, we are in deep trouble:-) No objections. > **** > **** We will assume that discussion in this 4-proposal X closure effort is > **** all essential to MPI-1. Anything you really don't think is essential > **** for MPI-1... once we agree to same .... should probably get deferred. > **** Is that controversial? Not controversial. > **** I thank everyone for their sincere efforts. - ditto - > **** PS Per constructive advice from Rik, all of my mail is getting copied to > **** mpi-context now. All should do so. I did not hear any dissent > **** about this suggestion, and it adds potential help to > **** discussion. As we need the help, and few people are writing > **** anyway, this will not likely cause problems. > **** Continue to include the five names jim, lyndon, tony, sanjay & > **** Rik, incase any of us are not on the mpi-context mailing list!!! Concur, being done. > > > **** Rik writes... > > >Before it gets TOO stale, here's a copy of my writeup on static > >contexts et.al. as it last went to Lyndon and Tony. (You two can > >stop reading at the line of dashes.) > > > >I have opted not to hack on static contexts any more because I > >would rather kill them. > **** Is this a proposal to include static contexts as in within > **** the 4-proposal model, and that we stop working on same? > **** If so, I can make it so... I'd like us to now ask Rik to write Proposal II. **** Yes, Rik please do proposal II!!! Best Wishes Lyndon ----- End Included Message ----- **** Best wishes, **** Tony Summary of what went on above: We agreed that Lyndon would do I and Ib (his extension) We requested that Rik would do II (please agree/decline :-) I am still not sure if Rik did/plans to do a precis/sketch merger??? By implication, I am left with III/IV, expecting to get help from Jim, Sanjay, et al on it. I have promised to do the proposal X closure precis/sketch update. - Tony From owner-mpi-context@CS.UTK.EDU Wed Mar 10 12:19:48 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA05037; Wed, 10 Mar 93 12:19:48 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA24634; Wed, 10 Mar 93 12:19:24 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 12:19:22 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA24626; Wed, 10 Mar 93 12:19:20 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 09:14 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA21499; Wed, 10 Mar 93 09:12:13 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA20633; Wed, 10 Mar 93 09:12:10 PST Date: Wed, 10 Mar 93 09:12:10 PST From: d39135@sodium.pnl.gov Subject: Problem: group descriptors & coherency To: d39135@sodium.pnl.gov, tony@Aurora.CS.MsState.Edu Cc: jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu Message-Id: <9303101712.AA20633@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Here's a problem that applies to all proposals. Groups are dynamic. Group ID's must be reusable unless they are very long or we bound the number of groups that can ever get created. 
Every reference to a group ID must be interpreted in terms of the current definition. This introduces a coherency problem analogous to shared memory. There are at least a half-dozen solutions to the coherency problem, with wide differences in overhead costs and convenience for the user. Explicit coherency control by the application tends to be cheaper but less convenient than implicit control. ==> I suggest that each proposal include some mechanism for solving the group ID coherency problem, including a discussion of its probable costs. Aside: Coherency of tid's seems like a smaller problem, based on the assumption that processes will come and go much less often than groups will be re-formed. But I'm not sure that we can ignore it. --Rik From owner-mpi-context@CS.UTK.EDU Wed Mar 10 13:18:27 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA06332; Wed, 10 Mar 93 13:18:27 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA28066; Wed, 10 Mar 93 13:18:05 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 13:18:04 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA28057; Wed, 10 Mar 93 13:18:02 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 10:10 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA22798; Wed, 10 Mar 93 10:08:19 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA20772; Wed, 10 Mar 93 10:08:13 PST Date: Wed, 10 Mar 93 10:08:13 PST From: d39135@sodium.pnl.gov Subject: implementation assumptions To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.edinburgh.ac.uk, mpi-comm@cs.utk.edu, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Message-Id: <9303101808.AA20772@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu, mpi-comm@cs.utk.edu In a discussion we are having within the contexts subcommittee, Tony Skjellum commented that: > ... I want to make sure that we do > not provide illusory improvements to the standard, while letting other > parts of it exceed ours in the unscalability department, or let > implementations qualify as compliant, whose unscalability disables the > implied scalability of our syntax/semantics... I agree. This is why (I think) proposals should explicitly address performance. If a proposal assumes that the cost of a particular function is irrelevant, it should say that. If a proposal assumes that a particular function is going to be cheap, it should justify that assumption by naming a time/memory price and outlining an implementation that can achieve it. Sometimes the suggested implementation will reveal further assumptions that have wide implications. Ideally, each proposal will eventually evolve a set of bottom-line assumptions that are consistent and believable. These comments are particularly addressed to those of us beating on contexts and groups, where there seem to be lots of subtle interactions. But I am cross-posting them to the whole committee as food for thought... 
--Rik ("The Skeptic") Littlefield From owner-mpi-context@CS.UTK.EDU Wed Mar 10 13:29:00 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA06699; Wed, 10 Mar 93 13:29:00 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA28812; Wed, 10 Mar 93 13:28:44 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 13:28:43 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA28804; Wed, 10 Mar 93 13:28:41 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 10:21 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA23065; Wed, 10 Mar 93 10:19:47 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA20875; Wed, 10 Mar 93 10:19:42 PST Date: Wed, 10 Mar 93 10:19:42 PST From: d39135@sodium.pnl.gov Subject: Re: descriptors vs ID's To: jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu Cc: d39135@sodium.pnl.gov Message-Id: <9303101819.AA20875@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu (This is a resend. Apparently several addresses got fouled up yesterday.) Rik said: > But I argue that the global ID's should NOT be the things that get > passed to the collective comm routines, and possibly not even to the > pt-pt routines. > > Passing a global ID implies some kind of table searching, unless we > put tight bounds on the value ranges so that we can do a simple > indirection. Of course, I suppose I could equally well argue that the cost of such a lookup would be small compared to the overall cost of the collective comm routine. Maybe this is a don't-care. --Rik (Good grief, Lyndon -- replying to one's own mail seems to be catching!) From owner-mpi-context@CS.UTK.EDU Wed Mar 10 13:59:18 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA07404; Wed, 10 Mar 93 13:59:18 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA00653; Wed, 10 Mar 93 13:58:49 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 13:58:47 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA00645; Wed, 10 Mar 93 13:58:45 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 10:49 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA23233; Wed, 10 Mar 93 10:47:14 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA21230; Wed, 10 Mar 93 10:47:12 PST Date: Wed, 10 Mar 93 10:47:12 PST From: d39135@sodium.pnl.gov Subject: Re: descriptors vs ID's To: jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu Message-Id: <9303101847.AA21230@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu (This is a resend with modifications [flagged with **]. The original, yesterday, had fouled up addresses.) Jim raises the issue: > Are the tid's of local or global scope. i.e. Can I pass a tid to > someone else, and it still work as an address for the person I > passed it to ? > For global scope : It's simpler for the user > Against : The implementor might want the "opaque identifier" > to be a pointer to a big data structure containing > route tables or something. The same problem arises with groups and I'm not sure where else. I agree that we need global scope ID's for tids and probably for groups. 
But I argue that the global ID's should NOT be the things that get passed to the collective comm routines, and possibly not even to the pt-pt routines. Passing a global ID implies some kind of table searching, unless we put tight bounds on the value ranges so that we can do a simple indirection. For tid's, that's OK because there are only as many tid's as there are processes, and I'm willing to believe that's a small bound. But in theory there can be lots more groups than processes, and even in practice I'm not sure that we can keep the global ID's both unique and small without excessive overhead. (E.g., recall my trick of generating unique context values without communication by cramming the creator's process number into the bottom bits.) I propose that most routines be passed a local descriptor reference, rather than the global ID value. ** The above may be don't-care. For collective comms, the cost ** of finding a (local) group descriptor, given the group ID, ** is arguably small compared to the cost of the comm itself. ** For tid's, perhaps the tid values can be kept small enough to ** permit simple indirection. Then we need either 1. routines to retrieve the global ID from a local descriptor (easy) and to construct a local descriptor from the global ID (harder) or 2. a mechanism to translate local descriptors to and from some machine-independent form. I prefer the second option because of my bias against servers. ** In another message, I raised the problem of maintaining ** coherence of group descriptors if the group ID's can be reused. ** The comments above also relate to methods of explicitly ** restoring coherency. Scheme 1 would be used if a process was ** told only "group XX has been redefined, please invalidate your ** descriptor"; scheme 2 would be used if a process was told ** "group XX has been redefined and here's the new definition". --Rik Littlefield From owner-mpi-context@CS.UTK.EDU Wed Mar 10 14:20:17 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA08095; Wed, 10 Mar 93 14:20:17 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA01852; Wed, 10 Mar 93 14:19:58 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 14:19:57 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA01844; Wed, 10 Mar 93 14:19:56 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 11:14 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA23523; Wed, 10 Mar 93 11:13:08 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA21416; Wed, 10 Mar 93 11:13:06 PST Date: Wed, 10 Mar 93 11:13:06 PST From: d39135@sodium.pnl.gov Subject: Re: [d39135@sodium.pnl.gov: Re: group descriptors] To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Cc: mpi-context@cs.utk.edu Message-Id: <9303101913.AA21416@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu > So, my basic request to Rik is that he echo the two letters sent to him > alone, which was my error. Mea culpa. I think there was only one, and I included the whole thing in my note referencing it, so it has been echoed already. No problem. 
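As an aside on the "descriptors vs ID's" note above: a minimal sketch of the second option, translating a local group descriptor to and from a machine-independent form so that it can be carried in an ordinary message. The packed layout, the struct fields, and every routine name here are assumptions made for illustration only, not proposed syntax.

    /* Sketch only: flatten a local group descriptor into an integer array
       that can be sent with ordinary pt-pt primitives, and rebuild a
       (cache-free) descriptor on receipt. */

    #include <stdlib.h>
    #include <string.h>

    struct group_desc {
        int  context;     /* globally unique context value */
        int  nmembers;    /* number of processes in group  */
        int *tids;        /* member tids, indexed by rank  */
        /* ... plus process-local cached info, which is not packed ... */
    };

    /* Pack: returns a malloc'ed buffer of *len integers. */
    int *pack_group (const struct group_desc *g, int *len)
    {
        int *buf = malloc ((2 + g->nmembers) * sizeof (int));
        buf[0] = g->context;
        buf[1] = g->nmembers;
        memcpy (buf + 2, g->tids, g->nmembers * sizeof (int));
        *len = 2 + g->nmembers;
        return buf;
    }

    /* Unpack: rebuild a descriptor, with no cached information, on the
       receiving side. */
    struct group_desc *unpack_group (const int *buf)
    {
        struct group_desc *g = malloc (sizeof *g);
        g->context  = buf[0];
        g->nmembers = buf[1];
        g->tids     = malloc (g->nmembers * sizeof (int));
        memcpy (g->tids, buf + 2, g->nmembers * sizeof (int));
        return g;
    }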
--Rik Littlefield From owner-mpi-context@CS.UTK.EDU Wed Mar 10 14:21:50 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA08148; Wed, 10 Mar 93 14:21:50 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA01996; Wed, 10 Mar 93 14:21:19 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 14:21:18 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA01988; Wed, 10 Mar 93 14:21:16 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 11:10 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA23493; Wed, 10 Mar 93 11:08:57 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA21333; Wed, 10 Mar 93 11:08:53 PST Date: Wed, 10 Mar 93 11:08:53 PST From: d39135@sodium.pnl.gov Subject: Re: group descriptors To: d39135@sodium.pnl.gov, tony@Aurora.CS.MsState.Edu Cc: jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu Message-Id: <9303101908.AA21333@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Tony writes > Anyway, I concur on the need to have registerable choices, that allow > sharing, per your letter. As regards new functionality, they could go > in the tree too, but there is no structural mechanism in Zipcode, > other than defining a class' method list, for adding new instantiations > to a mailer at creation. All of this was very elaborate coding, which > I will elucidate in my article for Rolf. I am happy to share details > with you as well, beforehand. Maybe I haven't thought about this enough, but I do not understand why elaborate coding is required if we do everything at runtime and do not try to provide type-safety. No one has yet pointed out a deficiency with my proposed interface: MPI_set_group_attribute (gid,key,value,destructor_routine) and MPI_test_group_attribute (gid,key,&value) This can be implemented at a data cost of adding a single slot to a group descriptor (pointing to the attribute dictionary), plus a loop in the group closing routine to call the destructors (if any), plus the routines to manage the dictionary. I do not believe that there would be much resistance from the committee to adding this capability, since it seems to be easy to implement and (almost) "free if you don't use it". What have I missed? --Rik Littlefield PS. OK, since nobody else has pointed out a deficiency, I will. The definition provided above does not support transferring attributes between processes. The uses that I had in mind involve information that is so local to the process that transferring it doesn't even make sense (e.g., lists of partners for collective comms). If one wanted to define attributes that could be transferred, the interface would have to be extended. 
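To make the claimed cost concrete, a minimal sketch of the interface above, assuming the single added slot is a pointer to a small linked dictionary and that the group-closing routine walks it calling the destructors. All names and types are illustrative; only the behaviour follows the description above.

    /* Sketch only: the attribute dictionary is a linked list hung off one
       extra slot in the group descriptor.  Set/test cost at most a short
       list walk; the destructor loop runs once, when the group is closed. */

    #include <stdlib.h>

    struct group_desc;                       /* MPI's group descriptor */
    typedef void (*attr_dtor) (struct group_desc *g, int key, void *value);

    struct attr_entry {
        int                key;
        void              *value;
        attr_dtor          destructor;
        struct attr_entry *next;
    };

    struct group_desc {
        /* ... whatever else MPI keeps about the group ... */
        struct attr_entry *attributes;       /* the single added slot */
    };

    void set_group_attribute (struct group_desc *g, int key,
                              void *value, attr_dtor destructor)
    {
        struct attr_entry *e = malloc (sizeof *e);
        e->key = key;  e->value = value;  e->destructor = destructor;
        e->next = g->attributes;
        g->attributes = e;
    }

    int test_group_attribute (struct group_desc *g, int key, void **value)
    {
        struct attr_entry *e;
        for (e = g->attributes; e != NULL; e = e->next)
            if (e->key == key) { *value = e->value; return 1; }
        return 0;                            /* not found */
    }

    /* Called from the group-closing routine before the descriptor is freed. */
    void close_group_attributes (struct group_desc *g)
    {
        while (g->attributes != NULL) {
            struct attr_entry *e = g->attributes;
            g->attributes = e->next;
            if (e->destructor != NULL) e->destructor (g, e->key, e->value);
            free (e);
        }
    }

A hash table could replace the list if many keys per group were expected; for the handful of keys envisaged here, a list keeps the "free if you don't use it" property.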
From owner-mpi-context@CS.UTK.EDU Wed Mar 10 14:51:58 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA08797; Wed, 10 Mar 93 14:51:58 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA03683; Wed, 10 Mar 93 14:51:06 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 14:51:04 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA03673; Wed, 10 Mar 93 14:51:02 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 11:48 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA24208; Wed, 10 Mar 93 11:46:25 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA21672; Wed, 10 Mar 93 11:46:20 PST Date: Wed, 10 Mar 93 11:46:20 PST From: d39135@sodium.pnl.gov Subject: re: static contexts writeup To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Message-Id: <9303101946.AA21672@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Tony, You write > **** In mail immemorial, Rik agreed to merge his (enclosed) write-up with > **** the first-generation PRECIS, to generate a more useful one, > **** Please correct me if I am wrong, but I cannot find this! I think I'm clean. The exchange was > > > (Tony writes) > > > So, my first action item is to get a merger of what you wrote and our > > > precis, and redistribute that to group. OK? > > > > (Rik replies) > > I'm not sure that I've seen your precis yet. The stuff you've sent me > > so far seems to be quite a ways from precise (pardon the pun), so I'm > > reluctant to guess what's what. Can you send me a labeled copy? > > > > It would be faster and perhaps less confusing if I incorporated the > > last round of your and Lyndon's comments into my words and distributed > > that to the subcommittee as a separate document. Then we would have > > a collective opinion (well, five separate ones anyway) of how the > > two documents fit together, instead of just the one opinion of whoever > > did the merger. > > (Tony replies) > Yes, I concur. Go ahead. Several copies of my document were subsequently distributed, including ones by me and by you! As I noted when I sent it out, I did not incorporate your and Lyndon's comments because I decided my time might better go into exploring cacheing mechanisms. By the way, I'm sorry, but I'm still not sure that I have correctly identified your precis or sketch or whatever. You sent me some stuff at a level of detail like > Proposal II. GROUP ID != CONTEXT > > GROUP corresponds to "data/instance" in this model > CONTEXT is statically defined at link time, corresponding to > code or class > > In this model, group ID's appear explicitly, and separately > from CONTEXTS in global operations. It was noted by Littlefield > that he would write global operations, and product bits of the > GROUP ID and static CONTEXT to guarantee that his code produced > unique, deterministic, non-interfering results. > > Proposal III. GROUP ID null, GROUP scope == CONTEXT scope > > CONTEXT is a dynamic concept, and global (or as global as necessary) > Whenever a group is created, a context is assigned to it. CONTEXTS > are used to disambiguate global operations. > > GROUP ID is the CONTEXT for the group. > > GROUP is an enumeration of PROCESS ID's OR ONE OF SEVERAL > possible hashing formulas, with exceptions (more scalable) with a bunch of comments interspersed with it. 
Is this what you are talking about? In other mail, you have mentioned a "sparse matrix of proposals X closure options", but I haven't the foggiest idea what that refers to. Help? --Rik Littlefield From owner-mpi-context@CS.UTK.EDU Wed Mar 10 15:13:12 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA09008; Wed, 10 Mar 93 15:13:12 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA06326; Wed, 10 Mar 93 15:12:24 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 15:12:22 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA06312; Wed, 10 Mar 93 15:12:19 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 12:09 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA24295; Wed, 10 Mar 93 12:07:30 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA21860; Wed, 10 Mar 93 12:07:27 PST Date: Wed, 10 Mar 93 12:07:27 PST From: d39135@sodium.pnl.gov Subject: writing assignments To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.edinburgh.ac.uk, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu Cc: mpi-context@cs.utk.edu Message-Id: <9303102007.AA21860@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu > I'd like us to now ask Rik to write Proposal II. > > Best Wishes > Lyndon I will be happy to write "Proposal II", but I'm not sure that we agree on what that means. I would definitely like to include a proposal for a cacheing facility, but this seems orthogonal to all other issues about groups, contexts, tid/pid format etc. I will also discuss the idea of static contexts, but will specifically not propose them, in favor of cacheing, if I can convince myself that cacheing combined with other capabilities meets all my requirements. (Cacheing is clearly superior in some ways, but I'm not sure that it covers all the bases.) --Rik Littlefield From owner-mpi-context@CS.UTK.EDU Wed Mar 10 17:06:27 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA12404; Wed, 10 Mar 93 17:06:27 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA13359; Wed, 10 Mar 93 17:05:52 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 17:05:50 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA13347; Wed, 10 Mar 93 17:05:49 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA13032; Wed, 10 Mar 93 16:00:56 CST Date: Wed, 10 Mar 93 16:00:56 CST From: Tony Skjellum Message-Id: <9303102200.AA13032@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu Subject: fixing misunderstanding about nomenclature In your last letter, Rik, you clearly identify what I have called the PRECIS/SKETCH. I have been talking loosely about a set of proposals (so marked), and Lyndon has understood these all along too. They are called I...IV, with some sub-proposals also. Then, for each of these, there are certain options on what things like tags do, and so on, so we imagine a (hopefully sparse) matrix of alternatives to be looked at each of which is a valid closure of MPI. Of the ones that make sense (eg, Proposal II with tags being blah, and TIDS being blah', and the following other side conditions defined, we have one possible closure of MPI. The sub-committee assigns a rank to this choice; we will hopefully have more than one. 
We vote on all of them, in rank order. If we disagree violently on rank, we present all of them in a coherent order, and vote on them in a reasonable order.) I want to move this SKETCH to have more precise information in it. I will look at doing same. Hopefully, you are doing proposal II, per Lyndon's suggestion, and the place to find what proposal II is is still in that SKETCH. - Tony From owner-mpi-context@CS.UTK.EDU Wed Mar 10 17:08:12 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA12413; Wed, 10 Mar 93 17:08:12 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA13470; Wed, 10 Mar 93 17:07:55 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 17:07:54 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA13455; Wed, 10 Mar 93 17:07:53 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA13048; Wed, 10 Mar 93 16:03:26 CST Date: Wed, 10 Mar 93 16:03:26 CST From: Tony Skjellum Message-Id: <9303102203.AA13048@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.edinburgh.ac.uk, ranka@top.cis.syr.edu Subject: Re: writing assignments Cc: mpi-context@cs.utk.edu Rik writes... >From d39135@sodium.pnl.gov Wed Mar 10 14:08:49 1993 >Received: from pnlg.pnl.gov by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); > id AA11263; Wed, 10 Mar 93 14:08:46 CST >Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 > 12:09 PST >Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA24295; Wed, > 10 Mar 93 12:07:30 PST >Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA21860; Wed, 10 Mar 93 12:07:27 > PST >Date: Wed, 10 Mar 93 12:07:27 PST >From: d39135@sodium.pnl.gov >Subject: writing assignments >To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.edinburgh.ac.uk, > ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu >Cc: mpi-context@cs.utk.edu >Message-Id: <9303102007.AA21860@sodium.pnl.gov> >X-Envelope-To: tony@aurora.cs.msstate.edu >Status: R > >> I'd like us to now ask Rik to write Proposal II. >> >> Best Wishes >> Lyndon > >I will be happy to write "Proposal II", but I'm not sure that >we agree on what that means. > >I would definitely like to include a proposal for a cacheing >facility, but this seems orthogonal to all other issues about >groups, contexts, tid/pid format etc. > >I will also discuss the idea of static contexts, but will >specifically not propose them, in favor of cacheing, if I can >convince myself that cacheing combined with other capabilities >meets all my requirements. (Cacheing is clearly superior in some >ways, but I'm not sure that it covers all the bases.) > >--Rik Littlefield > OK. The cacheing thing seems to add another item to our list of proposal things. Go ahead... think of it as another column in our sparse matrix, for which caching is the same for each major Proposal... 
- Tony From owner-mpi-context@CS.UTK.EDU Wed Mar 10 18:48:22 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA13684; Wed, 10 Mar 93 18:48:22 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA17757; Wed, 10 Mar 93 18:47:50 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 18:47:43 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA17747; Wed, 10 Mar 93 18:47:40 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Wed, 10 Mar 93 15:45 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA24514; Wed, 10 Mar 93 15:43:28 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA23252; Wed, 10 Mar 93 15:43:24 PST Date: Wed, 10 Mar 93 15:43:24 PST From: d39135@sodium.pnl.gov Subject: Re: fixing misunderstanding about nomenclature To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Message-Id: <9303102343.AA23252@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu > ... Of the ones that make sense > (eg, Proposal II with tags being blah, and TIDS being blah', and the > following other side conditions defined, we have one possible closure > of MPI. The sub-committee assigns a rank to this choice; we will hopefully > have more than one. We vote on all of them, in rank order. If we > disagree violently on rank, we present all of them in a coherent order, > and vote on them in a reasonable order.) > > I want to move this SKETCH to have more precise information in it. I will > look at doing same. Hopefully, you are doing proposal II, per Lyndon's > suggestion, and the place to find what proposal II is is still in that > SKETCH. > > - Tony > I assume that the goal is to get several proposals that are self-consistent but different. Since Proposal II was at one point represented as "Littlefield's model", I will try to write my personal favorite proposal into bullet-item form. I don't know close it will be to your current Proposal II, but it'll probably be different from anybody else's. I will try to get you this by tomorrow (Thursday). Once we have several different bullet-item proposals in hand, what is the best way to proceed? My inclination is to have several rounds of critique, with the author of each proposal responding to each criticism by one or more of: . modifying or extending the proposal while retaining its character . modifying or extending the statement of assumptions . modifying or specifying an implemention strategy to overcome the criticism . acknowledging a deficiency. Hopefully there will not be too much convergence between proposals. 
--Rik Littlefield From owner-mpi-context@CS.UTK.EDU Wed Mar 10 23:22:19 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17588; Wed, 10 Mar 93 23:22:19 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA27668; Wed, 10 Mar 93 23:21:36 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 10 Mar 1993 23:21:34 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA27659; Wed, 10 Mar 93 23:21:33 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA13324; Wed, 10 Mar 93 22:17:22 CST Date: Wed, 10 Mar 93 22:17:22 CST From: Tony Skjellum Message-Id: <9303110417.AA13324@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Subject: Re: fixing misunderstanding about nomenclature Rik writes... (my responses prepended with four *'s - Tony)... > ... Of the ones that make sense > (eg, Proposal II with tags being blah, and TIDS being blah', and the > following other side conditions defined, we have one possible closure > of MPI. The sub-committee assigns a rank to this choice; we will hopefully > have more than one. We vote on all of them, in rank order. If we > disagree violently on rank, we present all of them in a coherent order, > and vote on them in a reasonable order.) > > I want to move this SKETCH to have more precise information in it. I will > look at doing same. Hopefully, you are doing proposal II, per Lyndon's > suggestion, and the place to find what proposal II is is still in that > SKETCH. > > - Tony > I assume that the goal is to get several proposals that are self-consistent but different. **** Yes. Since Proposal II was at one point represented as "Littlefield's model", I will try to write my personal favorite proposal into bullet-item form. I don't know close it will be to your current Proposal II, but it'll probably be different from anybody else's. I will try to get you this by tomorrow (Thursday). **** Thank you. **** Rik suggests the following strategy for handling proposal review. **** I would like to use it... **** Once we have several different bullet-item proposals in hand, what is the best way to proceed? My inclination is to have several rounds of critique, with the author of each proposal responding to each criticism by one or more of: . modifying or extending the proposal while retaining its character . modifying or extending the statement of assumptions . modifying or specifying an implemention strategy to overcome the criticism . acknowledging a deficiency. **** Fine, and we will use this as the criterion for all proposals. **** It is a sound strategy. **** **** Thanks on both counts. 
- Tony From owner-mpi-context@CS.UTK.EDU Thu Mar 11 19:08:48 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA11521; Thu, 11 Mar 93 19:08:48 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA25952; Thu, 11 Mar 93 19:08:20 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 11 Mar 1993 19:08:18 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA25936; Thu, 11 Mar 93 19:08:14 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Thu, 11 Mar 93 15:59 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA26618; Thu, 11 Mar 93 15:58:00 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA25111; Thu, 11 Mar 93 15:57:57 PST Date: Thu, 11 Mar 93 15:57:57 PST From: d39135@sodium.pnl.gov Subject: contexts Proposal V To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Cc: gropp@mcs.anl.gov, lusk@mcs.anl.gov Message-Id: <9303112357.AA25111@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Tony, OK, here's a detailed proposal regarding groups and contexts. It's much different from the earlier sketch called "Proposal II", so I renamed it. This proposal seems to meet my needs. Barring any new difficulties, I will be happy to drop static contexts. The proposal is long but hierarchical. I think there's enough in the summary to characterise it. I am also sending this description to Bill Gropp and Rusty Lusk. I hope they will wring it out, particularly my assertion that this proposal can be implemented as a layer on top of MPI pt-pt. --Rik Littlefield ---------------------------------------------------------------------- PROPOSAL V. Summary: . Group ID's have local scope, not global. Groups are represented by descriptors of indefinite size and opaque structure, managed by MPI. Group descriptors can be passed between processes by MPI routines that translate them as necessary. (This satisfies the same needs as global group ID's, but its performance implications are easier to understand and control.) . Arbitrary process-local info can be cached in group descriptors. (This allows collective operations to run quickly after the first execution in each group, without complicating the calling sequences.) . There are restrictions that permit groups to be layered on top of pt-pt. . Pt-pt communications use only TID, context, and tag, and are specified to be fast. . Group control operations (forming, disbanding, and translating between rank and TID) are NOT specified to be fast. Detailed Proposal: . Pt-pt uses only "TID", "context", and "message tag". TID's, contexts, and message tags have global scope; they can be copied between processes as integers. TID's and contexts are opaque; their values are managed by MPI. Message tags are controlled entirely by the application. . Pt-pt communication is specified to be fast in all cases. (E.g., MPI must initialize all processes such that any required translation of the TID is faster than the fastest pt-pt communication call.) . A group is represented by a "group descriptor", of indefinite size and opaque structure, managed by MPI. The group descriptor can be referenced via a "process-local group ID", which is typically just a pointer to the group descriptor. (Group ID's with global scope do not exist.) . 
Group descriptors can be copied between arbitrary processes via MPI-provided routines that translate them to and from machine- and process-independent form. A process receiving a group descriptor does not become a member of the group, but does become able to obtain information about the group (e.g., to translate between rank and TID). . Arbitrary information can be cached in a group descriptor. That is, MPI routines are provided that allow any routine to augment a group descriptor with arbitrary information, identified by a key value, that subsequently can be retrieved by any routine having access to the descriptor. Cached information is not visible to other processes and is stripped when a group descriptor is sent to another process. Retrieval of cached information is specified to be fast (significantly cheaper than a pt-pt communication call). . Group descriptors contain enough information to asynchronously translate between (group,rank) and TID for any member of the group, by any process holding a descriptor for the group. MPI routines are provided to do these translations. Translation is not specified to be fast. . At creation, each group is assigned a globally unique "default group context" which is kept in the group descriptor and can be quickly extracted. This extraction is specified to be fast (comparable to a procedure call). . The default group context can be used for pt-pt communication by any module operating within the group, but only for exactly paired send/receive with exact match on TID, context, and message tag. Call this the "paired-exact-match constraint". This constraint allows independent modules to use the same context without coordinating their tag values. (Assumes: tight sequencing constraints for both blocking and non-blocking comms in pt-pt MPI.) . A module that does not meet the paired-exact-match constraint must use a different globally unique context value. An MPI routine will exist to generate such a value and broadcast it to the group. The cost of this operation is specified to be comparable to that of an efficient broadcast in the group, i.e. log(G) pt-pt startup costs for a group of G processes. [See Implementation Note 1.] . Context values are a plentiful but exhaustible resource. If further use is anticipated, a context may be cached; otherwise it should be returned. (Note: the number of available contexts should be several times the number of groups that can be formed.) . When a group disbands, its group descriptor is closed and any cached information is released. (The MPI group-disbanding routine does this by calling destructor routines that the application provided when the information was cached.) Copies of the group descriptor must be explicitly released. It is an error to make any reference to a descriptor of a disbanded group, except to release that descriptor. . All processes that are initially allocated belong to the INITIAL group. (In a static process model, this means all processes.) An MPI routine exists for obtaining a reference to the INITIAL group descriptor (i.e., a local group ID for INITIAL). . Groups are formed and disbanded by synchronous calls on all participating processes. In the case of overlapping groups, all processes must form and disband groups in the same order. [See Implementation Note 2.] . All group formation calls are treated as a partitioning of some group, INITIAL if none other. 
The calls include a reference to the descriptor of the group being partitioned, the TID of the new group's "root process", the number of processes in the group, and an integer "group tag" provided by the application. The group tag must discriminate between overlapping groups that may be formed concurrently, say by multiple threads within a process. [See Implementation Note 2.] . Collective communication routines are called by all members of a group in the same order. . Blocking collective communication routines are passed only a reference to the group descriptor. To communicate, they can either use the default group context or internally allocate a new context, with or without cacheing. . Non-blocking collective communication routines are passed a reference to the group descriptor, plus a "data tag" to discriminate between concurrent operations in the same group. (Question: is sequencing enough to do this?) An implementation is allowed but not required to impose synchronization of all cooperating processes upon entry to a non-blocking collective communication routine. Advantages: . This proposal is implementable with no servers and can be layered easily on existing systems. . Cacheing auxiliary information in group descriptors allows collective operations to run at maximum speed after the first execution in each group. (Assumes that sufficient context values are available to permit cacheing.) . Speed of cached collective operations does not depend on speed of group operations such as formation or translation between rank and TID. . Requires explicit translation between (group,rank) and TID, which makes pt-pt performance predictable no matter how the functionality of groups gets extended. . Communication both within and between groups seems conceptually straightforward. Disadvantages: . Requires explicit translation between (group,rank) and TID, which may be considered awkward. . Communication between different groups may be considered awkward. . No free-for-all group formation. A process must know something about who its collaborators are going to be (minimally, the root of the group). . Requires explicit coherency of group descriptors, which is not convenient -- application must keep track of which processes have copies and what to do with them. . Static group model. Processes can join and leave collaborations only with the cooperation of the other group members in forming larger or smaller groups. . No direct support for identifying the group from which a message came. The sender cannot embed its group ID in its message, since group ID's are not globally meaningful. One method to address this problem is to have the sender embed its group context as data in the message, then have the receiver use that value to select among group descriptors that it had already been sent. Comments / Alternatives: . The performance of group control functions is explicitly unspecified in order to provide performance insurance. The intent is to encourage program designs whose performance won't be killed if the functionality of groups is expanded so as to be unscalable or require expensive services. . Adding global group ID's would seriously interfere with layering this proposal on pt-pt. The problem is not generating a unique ID, but using it. 
If just holding a global group ID were to allow a process to asynchronously ask questions about the group (like translating between rank and TID), then a group information server of some sort would be required, and MPI pt-pt does not provide enough capability to construct such a beast. . The restriction that only paired-exact-match modules can run in the default group context could be relaxed to something like "a module that violates the paired-exact-match constraint can run in the default group context if and only if it is the only module to run in that context". This seems too error-prone to be worth the trouble, since even the standard collective ops might be excluded. (It would also conflict with Implementation Note 1.) . Group formation can be faster when more information is provided than just the group tag and root process. E.g. if all members of the group can identify all other members of a P-member group, in rank order, then O(log P) time is sufficient; if only the root is known, O(P) time may be required. This aspect should be considered by the groups subcommittee in evaluating scalability. Implementation Notes: 1. To generate and broadcast a new context value: Generate the context value without communication by concatenating a locally maintained unique value with the process number in some 0..P-1 global ordering scheme. This assumes that context values can be significantly longer than log(P) bits for an application involving P processes. If not, then a server may be required, in which case by specification it has to be fast (comparable to a pt-pt call). There is no need to generate a new context for the broadcast since it can be coded to use the default group context by meeting the paired-exact-match constraint. 2. The following is intended only as a sanity check on the claim that this proposal can be layered on MPI. Suggestions to improve efficiency or to use fewer contexts will be greatly appreciated. In addition to a default group context, each group descriptor contains a "group partitioning context" and a "group disbanding context" that are obtained and broadcast at the time the group is created. In the case of the INITIAL group, this is done using paired-exact-match code and any context, immediately after initialization of the MPI pt-pt level. (At that time, no application code will have had a chance to issue wildcard receives to mess things up.) Group partitioning is accomplished using pt-pt messages in the partitioning context of the current group (i.e., the one being partitioned), with message tag equal to the group tag provided by the application. In the worst (least scalable) case, the root of the new group must wildcard the source TID. (This violates the paired-exact-match constraint, which is why group formation must happen in a special context.) Group disbanding is done with pt-pt messages in the group disbanding context. This keeps group control from being messed up no matter how the application uses wildcards. 
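A minimal sketch of Implementation Note 1's context generator, assuming contexts are plain 32-bit integers with room for both fields; the bit split and all names are illustrative only.

    /* Sketch only: build globally unique context values with no
       communication by packing a locally maintained counter above this
       process's index in the agreed 0..P-1 ordering (Implementation
       Note 1).  Assumes contexts are 32-bit integers and P <= 1024. */

    #define PROC_BITS 10                     /* room for up to 1024 processes  */

    static unsigned next_local = 1;          /* locally maintained unique part */

    unsigned new_context (unsigned my_index) /* my_index in the 0..P-1 ordering */
    {
        return (next_local++ << PROC_BITS) | my_index;
    }

The new value would then be broadcast over the default group context with exactly paired sends and receives, as the note describes, so the broadcast itself consumes no extra context.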
Examples: (To be provided) From owner-mpi-context@CS.UTK.EDU Sun Mar 14 13:58:46 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA05196; Sun, 14 Mar 93 13:58:46 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA16166; Sun, 14 Mar 93 13:58:14 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 14 Mar 1993 13:58:10 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA16158; Sun, 14 Mar 93 13:58:04 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Sun, 14 Mar 93 10:57 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA29374; Sun, 14 Mar 93 10:55:26 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA05149; Sun, 14 Mar 93 10:55:22 PST Date: Sun, 14 Mar 93 10:55:22 PST From: rj_littlefield@pnlg.pnl.gov Subject: proposal to mpi-collcomm To: d39135@sodium.pnl.gov, geist@gstws.epm.ornl.gov, gropp@mcs.anl.gov, jim@meiko.co.uk, lusk@mcs.anl.gov, lyndon@epcc.ed.ac.uk, mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Message-Id: <9303141855.AA05149@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu Al & Tony, et.al.: I am about to send to mpi-collcomm, two notes regarding changes I propose to the collective communication specification. (One note summarizes the changes; the other discusses the reasons for them.) I am also sending these notes to mpi-context and friends because they relate to other discussions going on there. Thought you'd like to know... --Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Sun Mar 14 15:04:11 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA06324; Sun, 14 Mar 93 15:04:11 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA18133; Sun, 14 Mar 93 15:03:51 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 14 Mar 1993 15:03:49 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA18125; Sun, 14 Mar 93 15:03:46 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Sun, 14 Mar 93 12:01 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA29382; Sun, 14 Mar 93 11:59:17 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA05208; Sun, 14 Mar 93 11:59:13 PST Date: Sun, 14 Mar 93 11:59:13 PST From: rj_littlefield@pnlg.pnl.gov Subject: collcomm changes, summary To: geist@gstws.epm.ornl.gov, gropp@mcs.anl.gov, jim@meiko.co.uk, lusk@mcs.anl.gov, lyndon@epcc.ed.ac.uk, mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Cc: d39135@sodium.pnl.gov Message-Id: <9303141959.AA05208@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu SUMMARY OF SUGGESTED CHANGES TO COLLECTIVE COMMUNICATION PROPOSAL The draft proposal that Al Geist distributed several days ago contains some features that would prevent it from being implemented as a layer on top of MPI point-to-point facilities. The purpose of this note is to propose changes to the group control routines in order to permit layering, and to propose other changes for better and more predictable performance. A discussion of the rationale for these proposed changes will be distributed separately because of its length. 
The main changes introduced in this note are: . The concept of group identification is firmed up. Most operations use a "group handle" that is local to the process. (Think of the group handle as being just the address of a potentially large and complex "group descriptor".) There is still a "group ID" that is globally unique, but it has only a secondary role and can be ignored by most applications. The "group name" is entirely removed from MPI-1. (Group names are still anticipated in MPI-2, but upward-compatibility is maintained in a different way from the draft proposal.) . A semantic restriction is introduced, that a process can access information about a group only if the process holds a group handle for it. Group handles can be obtained in two ways: 1) they are produced by group formation routines, and 2) a process can explicitly distribute copies of its group handles to other processes, using new routines introduced specifically for that purpose. . A cacheing mechanism is introduced, that allows modules to attach arbitrary information to a group descriptor in such a way that it can be quickly retrieved. Cacheing facilitates the construction of collective communication routines that are "fast after the first execution in a group", no matter how the other group operations are implemented. . A new group formation routine is introduced, that is less synchronous and more general than MPI_PARTGROUP. Specifically, the following routines are proposed to be added or modified: 1. Arbitrary group formation: newgrp_handle = MPI_FORMGROUP (grouptag,groupsize,knownmembers) where grouptag is a user-provided integer tag, sufficiently unique to disambiguate overlapping groups that might be formed simultaneously (say by multiple threads). groupsize is the number of members that will compose the group. knownmembers is a set of pid's of some or all members of the group. Each member of the group must provide the same set of knownmembers. newgrp_handle is a group handle for the newly formed group This new routine must be called synchronously, but only by those processes forming the group. 2. Group partitioning: newgrp_handle = MPI_PARTGROUP (oldgrp_handle,grouptag) where the semantics are the same as the draft proposal except that the return value is now a new group handle instead of a rank. (The rank can be determined by a separate call to MPI_GETRANK(group_handle,pid) .) 3. Group disbanding: MPI_LVGROUP (group_handle) where the semantics are the same as the draft proposal except that MPI_LVGROUP now does not return any result. (Since groups can now be formed arbitrarily, not just by partitioning, it is not obvious what MPI_LVGROUP could return in general.) This routine can be called only by members of the group. 4. Distribution of group handles and disposition of distributed handles: MPI_SendGroupHandle (pid,context,tag,old_group_handle) new_group_handle = MPI_RecvGroupHandle (pid,context,tag) MPI_FreeGroupHandle (group_handle) (The latter routine is similar to MPI_LVGROUP except that it can be called only for distributed group handles. This is solely for semantic clarity; a single interface routine would do.) 5. Cacheing group-specific process-local information: The following routines get and free keys for use with group cacheing. key = MPI_GetAttributeKey () MPI_FreeAttributeKey () The following routines cache and retrieve information. 
MPI_SetGroupAttribute (grouphandle,key,value,destructor_routine) status = MPI_TestGroupAttribute (grouphandle,key,&value) where key must be unique within the group value is anything the size of a pointer destructor_routine is an application-provided routine that is called by MPI_LVGROUP, with arguments being the group handle, cached key and value. Cached information is stripped from the new group handle returned by MPI_SendGroupHandle. In a conforming implementation, MPI_TestGroupAttribute must be no slower than a point-to-point communication call. 6. Retrieving global group ID: global_id = MPI_GetGlobalGroupID (grouphandle) 7. Other collective communications: Consistently substitute "grouphandle" in place of "group". ---------------------------------------------------------------------- rj_littlefield@pnl.gov Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Sun Mar 14 15:50:25 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA06915; Sun, 14 Mar 93 15:50:25 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA19420; Sun, 14 Mar 93 15:49:43 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 14 Mar 1993 15:49:41 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA19411; Sun, 14 Mar 93 15:49:35 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Sun, 14 Mar 93 12:48 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA29389; Sun, 14 Mar 93 12:46:59 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA05301; Sun, 14 Mar 93 12:46:57 PST Date: Sun, 14 Mar 93 12:46:57 PST From: rj_littlefield@pnlg.pnl.gov Subject: collcomm changes, rationale To: geist@gstws.epm.ornl.gov, gropp@mcs.anl.gov, jim@meiko.co.uk, lusk@mcs.anl.gov, lyndon@epcc.ed.ac.uk, mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Cc: d39135@sodium.pnl.gov Message-Id: <9303142046.AA05301@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu RATIONALE FOR SUGGESTED CHANGES TO COLLECTIVE COMMUNICATION PROPOSAL In a related summary, I outlined a set of suggested changes to the concepts and routines in the collective communication proposal. The purpose of this note is to present the rationale for those suggestions and to discuss possible alternatives. The discussion is organized into 5 areas, flagged with "----- Topic #". Entries flagged with > are from my summary of suggested changes. Entries flagged with >>> are from the draft proposal sent out by Al Geist. ----- Topic #1: Group Identification ----- > . The concept of group identification has been firmed up. Most > operations use a "group handle" that is local to the process. > (Think of the group handle as being just the address of a > potentially large and complex "group descriptor".) > ... > . A semantic restriction is introduced, that a process can access > information about a group only if the process holds a group > handle for it. Group handles can be obtained in two ways: 1) > they are produced by group formation routines, and 2) a process > can explicitly distribute copies of its group handles to other > processes, using new routines introduced specifically for that > purpose. There are two issues here: one of being able to layer collective communications on top of point-to-point at all, and a secondary one of efficiency. The more fundamental issue is layering. 
Given only MPI point- to-point functionality, how can a group identifier (whatever it is) be transmitted between processes so as to be useful to the receiver? Presumably we want to allow group identifiers to be passed around so that any process holding the group identifier can use it for purposes like translating between (group,rank) and pid. We also want to allow this translation to be done asynchronously, i.e., without requiring the explicit cooperation of any other MPI process at the time of translation. Since MPI pt-pt does not support asynchronous servers or an interrupt receive capability, this implies that the group identifier must come complete with enough information to resolve all translations without communication. This prompts the concept that the group identifier must be associated with a "group descriptor" that is large and complex enough to fully describe the group. How is the association done? This is a question of efficiency. If the identifier is allowed to be process-local, the group descriptor can be located very quickly -- just make the identifier be a pointer to the group descriptor. Requiring the identifier to have global scope would not be so good. In that case, either the identifier has to be carefully constructed or the association has to be done with some sort of table search. These issues also arise with global pid's. However, groups can be formed much more often and in greater numbers than processes. I doubt that careful construction tricks could be assured to be adequate, and if not, then a table search would be required on each collective communication call. The conclusion is that, for most purposes, a process-local identifier generated by the system is preferred. Such things are typically called "handles", hence the term "group handle". > 4. Distribution of group handles and disposition of distributed handles: > > MPI_SendGroupHandle (pid,context,tag,old_group_handle) > > new_group_handle = MPI_RecvGroupHandle (pid,context,tag) > > MPI_FreeGroupHandle (group_handle) The next question is how group handles should be distributed. Implicit distribution is out because MPI pt-pt doesn't support a server capability, and presumably we aren't willing to synchronize all of the processes whenever somebody creates a group handle. So, explicit distribution is required. How do we handle it? Two ideas that I do not like are the following. MPI might provide routines to translate to and from some machine- and process- independent format, so that the translated information could be sent using normal point-point primitives. This strategy requires that the user program manage the storage of indefinite-length objects, which makes for an ugly Fortran interface. Or, group descriptors (and their translation routines) might be built into point-point MPI as another data type. This violates the spirit of layering collective communication on point-to-point, and has the same storage management problem. The three routines proposed above were the cleanest interface I could think of. ----- Topic #2: Global Group ID ----- > There is > still a "group ID" that is globally unique, but it has only a > secondary role and can be ignored by most applications. > ... > global_id = MPI_GetGlobalGroupID (grouphandle) Given that we are now able (and required) to pass around copies of group handles, it is not clear to me that MPI really needs special support for the concept of a global group ID. 
On the other hand, it's easy to provide, since we have to construct one or more globally unique context values for each group anyway. So just use the first such context value as the global ID. This gives something unique that all processes can agree on. But note that knowing just the global group ID does not let you get other information about the group -- you have to hold a group handle for that. (We could add a routine that would accept the global group ID and return a handle for that group, presuming that the process held one. This would be cheap to do, since group handles are managed by MPI anyway, and I can vaguely imagine that it might help some applications. On the other hand, there are no similar "handle lookup" facilities provided elsewhere in MPI, and I'm reluctant to set that kind of precedent without clear need.) ----- Topic #3: Group Formation ----- > . A new group formation routine is introduced, that is less > synchronous and more general than MPI_PARTGROUP. > ... > 1. Arbitrary group formation: > > newgrp_handle = MPI_FORMGROUP (grouptag,groupsize,knownmembers) > > where > grouptag is a user-provided integer tag, sufficiently unique > to disambiguate overlapping groups that might be > formed simultaneously, say by multiple threads. > > groupsize is the number of members that will compose the group. > > knownmembers is a set of pid's of some or all members of the group. > Each member of the group must provide the same > set of knownmembers. > > newgrp_handle is a group handle for the newly formed group > > This new routine must be called synchronously, but only by those > processes forming the group. The draft proposal distributed by Al Geist says that >>> A group is identified by a group name that is supplied by the user. A group name by itself is not enough to allow implementing groups as a layer on top of point-to-point, unless we impose restrictions that I think would be not acceptable. The problem is: how does a group-forming routine know whom it should send messages to, in order to form the group? MPI_PARTGROUP does not have a problem with this, because it has to be called synchronously by all members of the group. Since each current member of the group holds a handle (descriptor) for that group, it is easy for each member to figure out who talks to whom. Unfortunately, there are some important application designs that I do not see how to implement with just MPI_PARTGROUP. For example, I am now doing an application that uses a master-slaves strategy to asynchronously parcel out chunks of work, with each chunk being done by several processes working collaboratively. Collective communication between those processes is required, so it seems natural to organize them into MPI groups. Using a synchronous group partitioning routine would introduce a risk of load imbalance, because the varying chunk size implies that groups can finish their work at different times, and synchronous partitioning would delay their reassignment. Applications like this could benefit from a group formation routine that is called synchronously, but only by those processes forming the group -- hence MPI_FORMGROUP. This type of routine does have the problem of identifying its collaborators, and the only solution I can think of is to tell it. That's what the knownmembers argument is for. I have specified knownmembers in terms of pid's because I assume that point-to-point communication based on pid's is always fast and unrestricted. 
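A hedged usage sketch of MPI_FORMGROUP in the master/slaves setting described above. Only the routine names MPI_FORMGROUP, MPI_GETRANK and MPI_LVGROUP come from the summary note; the C declarations, the opaque pointer used as the group handle, and the way the knownmembers set is passed are assumptions made for illustration.

    /* Sketch only: processes assigned to one chunk of work form a group
       among themselves, without involving the rest of the machine.
       The declarations are an assumed C rendering of the proposed calls. */

    void *MPI_FORMGROUP (int grouptag, int groupsize, int knownmembers[]);
    int   MPI_GETRANK   (void *group_handle, int pid);
    void  MPI_LVGROUP   (void *group_handle);

    void work_on_chunk (int grouptag, int groupsize,
                        int known_pids[], int my_pid)
    {
        void *grp;                  /* group handle; opaque to the caller */
        int   rank;

        /* Called synchronously, but only by the processes doing this
           chunk; each supplies the same grouptag and knownmembers set.  */
        grp  = MPI_FORMGROUP (grouptag, groupsize, known_pids);
        rank = MPI_GETRANK (grp, my_pid);

        /* ... collective communication within the chunk's group,
           using rank to organize the work ...                           */

        MPI_LVGROUP (grp);          /* disband once the chunk is finished */
    }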
If knownmembers were based on (group,rank) pairs, then per the discussion above, all processes making this call would have to hold handles (descriptors) for the referenced groups. This seems to me to be more trouble than it's worth, but others may disagree. Another comment about efficiency... The size of the knownmembers set affects the efficiency of group formation. At one extreme, only one member is required to be known. This is scalable in a memory sense, but not in a time sense, because it implies O(P) group formation time for a group of P processes. At the other extreme, all members can be specified. This is not scalable in a memory sense, but allows guaranteed O(log P) formation time. Other tradeoffs are possible, such as O(sqrt P) knownmembers and O(sqrt P) formation time. The interface as specified allows each application to choose the type of scalability it wants. ----- Topic #4: Group Names ----- > ... The > "group name" is entirely removed from MPI-1. (Group names are > still anticipated in MPI-2, but upward-compatibility is > maintained in a different way from the draft proposal.) The draft distributed by Al Geist states: >>> To allow for future extensibility of the group concept >>> the present draft specifies that groups be named. Requiring names has the drawback that 1) it burdens the user with at least the appearance of having to create unique names, in order to be upward-compatible with dynamic groups, even though 2) in a layered MPI-1, there is no way in general to check global uniqueness, and thus programs can work fine with non-unique names. This combination strikes me as actually impeding upward- compatibility. The tendency will be for programmers to use non-unique names because it works and it's easy. But such programs would break when MPI-2 came along and started actually using the names for something. I don't like encouraging people to write programs that are going to break. I do support upward compatibility. However, rather than requiring names in MPI-1, I propose that they be deferred entirely to MPI-2, at which point they can be supported either just through MPI_JOINGROUP (as an alternative to MPI_FORMGROUP) or via additional routines to attach globally unique names to groups that have already been formed via MPI_JOINGROUP. ----- Topic #5: Cacheing ----- > 5. Cacheing group-specific process-local information: > > The following routines get and free keys for use with group > cacheing. > > key = MPI_GetAttributeKey () > MPI_FreeAttributeKey () > > The following routines cache and retrieve information. > > MPI_SetGroupAttribute (grouphandle,key,value,destructor_routine) > MPI_TestGroupAttribute (grouphandle,key,&value) > > where > key must be unique within the group > value is anything the size of a pointer > destructor_routine is an application-provided routine that > is called by MPI_LVGROUP, with arguments > being the group handle, cached key and value. > > Cached information is stripped from the new group handle > returned by MPI_SendGroupHandle. > > In a conforming implementation, MPI_TestGroupAttribute must > be no slower than a point-to-point communication call. This feature is purely for efficiency, but I think it's so valuable, cheap, and clean that something like it has to go in. One feature of collective communication is that the fastest algorithm for any particular job usually depends on the machine topology, which processes belong to the group, and the amount of data being manipulated. 
For example, global combine of L data elements across P = RC processes on a 2-D RxC mesh can be done in O(L log(P)) time using a fanin/fanout algorithm, or in O(L + sqrt(P)) time using a nested rings algorithm. The former is better for small L, the latter for big L, and using the wrong one can easily cost a factor of 3 in execution time. So, there is strong motivation to write collective communication routines that are adaptive in the sense of figuring out which algorithm is best. The problem is that it can take quite a lot of time to make the decision, starting from a scratch position of not even knowing which processes belong to the group. It's going to take lots of calls to the inquiry routines to get that information, and then some more cycles to make the proper decisions. Obviously it would be profitable to cache the information and/or decisions. The question is, where? It is tempting to say that the collective communication routine could or should keep its own cache, indexed by group handle and/or global group ID. The problem is, groups are dynamic in the sense of being formed and disbanded, so that unless group IDs can get very large, eventually they will have to be reused. Now, it wouldn't do to have a collective communication routine use stale cached information, so if the collective communication routine is keeping its own cache, then it needs to be notified of the reuse so that it can release the cached stuff. Alternatively, perhaps the cached information could be automatically released. (Either strategy guarantees immediate release of cached info when the group handle/descriptor is released. I presume we want to do that, to avoid getting into the morass of garbage collection.) The method proposed here can be thought of as implementing both strategies. The idea is that the routines that free group handles (and the associated descriptors) loop through the cached information, calling an application-provided destructor routine for each piece of cached information. Typically, the cached information will be a pointer to a hunk of memory managed by the collective communication, which the destructor will free in whatever way it has to. Upon return from the destructor, the group-freeing routine will release the little piece of memory holding the pointer, and everything will be cleaned up. If that group handle/descriptor is ever reused, it will be reinitialized to indicate no cached information, and MPI_TestGroupAttribute will return "not found". An efficient-after-first-call group-global operation using cacheing might look like this: static int gop_key_assigned = 0; /* 0 only on first entry */ static MPI_key_type gop_key; /* key for this module's stuff */ efficient_global_op (grphandle, ...) struct group_descriptor_type *grphandle; { struct gop_stuff_type *gop_stuff; /* whatever we need */ if (!gop_key_assigned) /* get a key on first call ever */ { gop_key_assigned = 1; if ( ! (gop_key = MPI_GetAttributeKey()) ) { MPI_abort ("Insufficient keys available"); } } if (MPI_TestGroupAttribute (grphandle,gop_key,&gop_stuff)) { /* This module has executed in this group before. We will use the cached information */ } else { /* This is a group that we have not yet cached anything in. We will now do so. */ gop_stuff = /* malloc a gop_stuff_type */ /* ... fill in *gop_stuff with whatever we want ... */ MPI_SetGroupAttribute (grphandle, gop_key, gop_stuff, gop_stuff_destructor); } /* ... use contents of *gop_stuff to do the global op ... 
*/ } gop_stuff_destructor (gop_stuff) /* called by MPI on group close */ struct gop_stuff_type *gop_stuff; { /* ... free storage pointed to by gop_stuff ... */ }
----------------------------------------------------------------------
rj_littlefield@pnl.gov Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 P.O.Box 999, Richland, WA 99352
From owner-mpi-context@CS.UTK.EDU Thu Mar 18 13:18:41 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA16171; Thu, 18 Mar 93 13:18:41 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA18320; Thu, 18 Mar 93 13:17:37 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 18 Mar 1993 13:17:35 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA18296; Thu, 18 Mar 93 13:17:07 -0500 Date: Thu, 18 Mar 93 18:17:01 GMT Message-Id: <9471.9303181817@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Proposal I To: Tony Skjellum , Jim Cownie , Rik Littlefield , Sanjay Ranka Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu
Here is Proposal I, basic and extended proposals, as promised. I have not added detailed justifications or implementation complexity discussions in this document. This should be transparent. I have crafted the proposal using LaTeX, in the form of a "chapter" in the "report" document style. You should be able to run this through LaTeX and get it printed off no problem; please ask me for postscript if you do have some problem. The stuff before and including \begin{document}, and the stuff after and including \end{document}, could be removed for inclusion into a report document style. Comments? Best Wishes Lyndon
>------------------------------ Cut Here ------------------------------<
\documentstyle{report} \begin{document} \title{``Proposal'' I for MPI Communication Context Subcommittee} \author{Lyndon~J~Clarke} \date{March 1993} \maketitle
%======================================================================%
% BEGIN "Proposal I"
% Lyndon J. Clarke
% March 1993
%
\chapter{Proposal I}
\section{Introduction}
This chapter proposes that ordered groups are used to provide communication contexts, and that communication contexts do not appear independently of process groupings. The proposal reflects the observation that an instance of a module in a parallel program typically operates within a group of processes, and as such any communication contexts associated with an instance of a module also bind the semantics of process groups. This chapter makes a basic proposal which provides intra-group communication but does not provide inter-group communication, and an extended proposal which also provides inter-group communication. The proposals say nothing about use of message tags. It is assumed that these will be a bit string, expressed as an integer in the host language (i.e. ANSI C or Fortran~77, in the first instance). These proposals should be viewed as a collection of recommendations to other subcommittees within MPI, primarily the collective communications subcommittee, the point-to-point communications subcommittee and the process topologies subcommittee.
Concrete syntax is given in the style of the C language for purposes of discussion only, and should be viewed as an example of possible syntax as detailed syntax of described operations is the responsibility of the language bindings subcommittee %----------------------------------------------------------------------% % BEGIN "Basic Proposal" % \subsection{Basic Proposal} The main features of the basic proposal are: \begin{itemize} \item Process identifiers and group identifiers have opaque values expressed as an integer type in the base language. Semantics for passing process identifiers in a message are defined, whereas semantics for passing group identifiers in a message are not defined. \item Group creation and destruction are concerted, synchronous, actions on the part of the membership of the group concerned. \item Groups are {\it static\/} in that no provision is made for modification of the membership of a group over the lifetime of the group. Dynamic group operations such as {\it resize\/} can be effected by destruction of the existing group and creation of a new resized group. \item Group creation is effected by four means: explicit definition of the membership of a group; partition of an existing group into one or more distinct subgroups; identical duplication of an existing group; topological duplication of an existing group. \item Point-to-point communication provides for intra-group communication, and makes no provision for inter-group communication. \item Collective communication provides for operations which are concerted actions on the part of members of one group, and makes no provision for operations which are concerted actions on the part of members of multiple groups. \end{itemize} \subsubsection{Process and Group Identifiers} A {\it process identifier\/} is an opaque reference to an object which is a single process. A process identifier is expressed as an integer in the host language and has a value defined by the system. The only meaningful host language operations on process identifiers are assignment ({\tt =}), equality ({\tt ==}) and inequality ({\tt !=}). Each process has exactly one process identifier. MPI should provide a procedure which allows the user to determine the process identifier of the calling process. For example, {\tt mypid = mpi\_mypid()}. The identifier of the {\it null process\/}, defined to be a process which cannot exist, is defined as a named constant and shall be referred to as {\tt MPI\_PID\_NULL} in this proposal. With the single exception of the identifier of the null process, the value of a process identifier is {\it process local\/}, meaning that if two processes A and B know the identifier of a process P then the relationship between the values of the identifiers known to A and B is undefined. The user can pass the value of a process identifier in a message, since it is an integer type in the host language, however the recipient of the value cannot make defined use of that value in the MPI operations described below --- the received process identifier is {\it invalid\/}. MPI will provide a mechanism which allows a process identifier to be passed in a message in such a manner that the received identifier is valid. It is proposed that this shall be integrated with the buffer descriptor mechanism (proposed by Bill Gropp and Rusty Lusk), by addition of a procedure which places a logical reference to a process identifier into the buffer descriptor, e.g. {\tt mpi\_bd\_pid(bd, \&pid)}. 
Transmission of a process identifier using this mechanism returns to the recipient a process identifier which is valid for use in the MPI operations described below. This transmission may side effect state in the implementation of MPI at the recipient, and in particular may reserve state at the recipient. MPI will provide a procedure which invalidates a process identifier, allowing the implementation of MPI to recover reserved state, e.g. {\tt mpi\_pid\_invalidate(pid)}. This is an error if {\tt pid} is {\tt MPI\_PID\_NULL}, or if {\tt pid} is the identifier of the calling process.

It is further proposed that MPI provide a process identifier registry service. This service allows any process to register its own process identifier by name, and deregister its process identifier. The service allows any process to determine whether a name has been registered without blocking the calling process, and to map that name into a valid process identifier with the possibility of blocking the calling process. Use of this service is not mandated, and components of programs which do not require this service are not expected to make use thereof.

A {\it group identifier\/} is an opaque reference to an object which is a group of processes. A group identifier is expressed as an integer in the host language and has a value defined by the system. The only meaningful host language operations on group identifiers are assignment ({\tt =}), equality ({\tt ==}) and inequality ({\tt !=}). The identifier of the {\it null group\/}, defined to be a group which cannot exist, is defined as a named constant and shall be referred to as {\tt MPI\_GID\_NULL} in this proposal. With the single exception of the identifier of the null group, the value of a group identifier is {\it process local\/}, meaning that if two processes A and B know the identifier of a group G then the relationship between the values of the identifiers known to A and B is undefined. The user can pass the value of a group identifier in a message, since it is an integer type in the host language; however, the recipient of the value cannot make defined use of that value in the MPI operations described below --- the identifier is {\it invalid\/}. MPI will not provide a mechanism which allows a group identifier to be passed in a message in such a manner that the received identifier is valid.

The canonical representation of a group is an array of distinct process identifiers, although it may be possible to use hashing functions with lower space complexity and marginally higher time complexity. A group is {\it static\/}, in that its membership may not change over the lifetime of the group. There is a well defined {\it size\/} of a group, and MPI will provide a procedure which allows the user to determine the size of a group. For example, {\tt size = mpi\_grp\_size(gid)}. This procedure is an error if {\tt gid} is {\tt MPI\_GID\_NULL}. There is a well defined {\it rank\/} of a process within the group, i.e. the position in the representational array at which the process identifier is held. MPI will provide a procedure which allows the user to determine the rank of the calling process within a group. For example, {\tt mpi\_grp\_myrank(gid)} returns the rank of the calling process within the group referred to by {\tt gid}. This procedure is an error if {\tt gid} is {\tt MPI\_GID\_NULL}. MPI should provide a procedure to determine the member rank of a process within a group given the group identifier and a valid process identifier, e.g.
{\tt rank = mpi\_grp\_rank(gid,pid)}. This procedure may ``fail'' if the process identified by {\tt pid} is not a member of the group identified by {\tt gid}, and can be used to determine whether a given process is a member of a given group. This procedure is an error if {\tt gid} is {\tt MPI\_GID\_NULL}. MPI should provide a procedure to determine the process identifier of a group member given the group identifier and member rank, e.g. {\tt pid = mpi\_pid(gid,rank)}. This procedure should validate the returned process identifier. This procedure is an error if {\tt gid} is {\tt MPI\_GID\_NULL}.

\subsubsection{Group Creation and Destruction}

MPI will provide four methods for creation of groups, and a procedure to destroy an existing group.

A group may be created by explicit definition of the pids of the members of the group. For example, {\tt gid = mpi\_grp\_definition(npids,pids)} where {\tt gid} is the identifier of the newly created group, {\tt npids} is the number of processes in the new group, and {\tt pids} is an array containing the (valid) process identifiers of the processes in the group. This procedure must be called by all members of the group, and does not return until all members have made the call.

A group may be created by identical duplication of an existing group. For example, {\tt gidb = mpi\_grp\_duplication(gida)} where {\tt gidb} is the group identifier of the newly created group and {\tt gida} is the identifier of an existing group. The created group inherits all properties of the source group, including any topological properties. This operation has the same synchronisation properties as creation of group by definition.

A group may be created by topological duplication of an existing group. Details of topological groups are under consideration within the process topologies subcommittee and will not be further discussed here. A group created by topological duplication inherits the size of the source group, and also inherits the membership list of the source group although the list may be ordered differently in the created group. These operations have the same synchronisation properties as creation of group by identical duplication. Where groups have additional topological attributes MPI should also provide procedures which allow the user to determine such attributes.

One or more groups may be created by partition of an existing group into distinct subgroups by key. For example, {\tt gidb = mpi\_grp\_partition(gida, key)} where {\tt gidb} is the group identifier of the newly created group corresponding to the given {\tt key} and {\tt gida} is the group identifier of the group which is being partitioned according to the {\tt key} values supplied. MPI should define a named constant which is a {\it null\/} {\tt key} value, for example {\tt MPI\_KEY\_NULL}, in order that members of the parent group can choose not to become members of any child group, in which case the procedure should return {\tt MPI\_GID\_NULL}. Groups created by partition share the same ordering of process member ranks as the parent group. This operation synchronises the members of the parent group, and therefore implicitly synchronises the members of the created group(s).

A group may be destroyed, e.g. {\tt mpi\_grp\_deletion(gid)}, which destroys the group identified by {\tt gid}. This operation synchronises the group members, and invalidates the group identifier.
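For illustration only, the following fragment sketches how the creation and destruction procedures described above might be combined to partition an existing group into the row subgroups of a two-dimensional mesh. The type {\tt mpi\_gid\_t} and the precise argument and return conventions are assumptions made for the sketch, not part of this proposal.

\begin{verbatim}
/* Sketch only: procedure names follow the examples above; the typedef
 * and exact signatures are assumptions made for this illustration. */
#include <stdio.h>

typedef int mpi_gid_t;                      /* opaque group identifier */

extern int       mpi_grp_size(mpi_gid_t gid);
extern int       mpi_grp_myrank(mpi_gid_t gid);
extern mpi_gid_t mpi_grp_partition(mpi_gid_t gid, int key);
extern void      mpi_grp_deletion(mpi_gid_t gid);

/* Partition the group "world" into the rows of an ncols-wide mesh. */
void row_work(mpi_gid_t world, int ncols)
{
    int myrank = mpi_grp_myrank(world);     /* rank in the parent group    */
    int key    = myrank / ncols;            /* equal keys -> same subgroup */
    mpi_gid_t row;

    /* Concerted call: every member of "world" takes part. */
    row = mpi_grp_partition(world, key);

    printf("row group: size %d, my rank %d\n",
           mpi_grp_size(row), mpi_grp_myrank(row));

    /* ... intra-group communication within "row" ... */

    mpi_grp_deletion(row);                  /* synchronises and invalidates */
}
\end{verbatim}

Note that, as described above, the partition is a concerted call on the whole parent group, and the deletion is a concerted call on the members of each row group.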
\subsubsection{Point-to-point communication}

There are arguments either way for point-to-point routines which accept a {\tt (gid, rank)} pair, and for routines which accept a {\tt pid}. This proposal supports and separates both approaches to addressing and selection within the same syntax, avoiding the introduction of sets of procedures to handle each case.

In order to provide expressive power to intra-group communication, point-to-point communication should accept a {\tt (gid, rank)} pair, where {\tt gid} is a valid group identifier and {\tt rank} is a member rank within the group identified by {\tt gid}. In message addressing the sender specifies the {\tt rank} and {\tt gid} of the receiver. In message selection the receiver specifies the {\tt rank} and {\tt gid} of the sender. The sender and receiver must both specify the same {\tt gid} in order for a match to occur. The {\tt gid} field is not allowed to take a {\it wildcard\/} value in message selection. The {\tt rank} field is allowed to take a {\it wildcard\/} value in message selection, e.g. {\tt MPI\_RANK\_WILD}, which will match with any rank. One is encouraged to visualise a separate message queue or port at each process for each group of which that process is a member (and indeed this may be an advantageous implementation feature).

In order to accommodate process identifier based addressing and selection into the same syntax, this proposal advocates that point-to-point communication should also accept the null group ({\tt gid = MPI\_GID\_NULL}), in which case the {\tt rank} is interpreted as a valid {\tt pid}. The {\tt pid} is allowed to take a {\it wildcard\/} value in message selection, e.g. {\tt MPI\_PID\_WILD}. The point-to-point section should provide a procedure which allows a recipient to recover the process identifier of the sender. The discussion of matching above extends to this case, and one is also encouraged to visualise a separate message queue or port for messages referred to by {\tt MPI\_GID\_NULL}. In the case of a process identifier wildcard receive, the process identifier recovered by the receiver may be unknown to the receiver. It is proposed that an implicit validation of the process identifier must be performed by the MPI implementation, in order that the recipient is returned a valid process identifier, else the returned identifier is of little or no use to the recipient.

\subsubsection{Collective communication}

Collective communication operations within MPI should be restricted to the scope of a single group. It will be sufficient for these procedures to accept a group identifier, and possibly a message tag in order to distinguish multiple outstanding operations within the same group. These procedures must not accept {\tt MPI\_GID\_NULL}. It is not possible to determine whether this strategy allows all of the MPI collective communication routines to be written in terms of MPI point-to-point routines without loss of generality, since the set of collective communication routines is not yet determined. This proposal takes the view that it is the responsibility of the collective communications subcommittee to determine whether such a goal is desirable, and if so to describe procedures which comply with this goal.
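Purely as an illustration of the intra-group addressing described above, the fragment below implements a simple broadcast from member rank 0 to the remaining members of a group using only {\tt (gid, rank)} addressed point-to-point calls. The procedures {\tt mpi\_send} and {\tt mpi\_recv} and their argument lists are invented for the sketch; concrete point-to-point syntax remains the responsibility of the point-to-point and language bindings subcommittees.

\begin{verbatim}
/* Sketch only: mpi_send/mpi_recv and their signatures are hypothetical. */
typedef int mpi_gid_t;

extern int  mpi_grp_size(mpi_gid_t gid);
extern int  mpi_grp_myrank(mpi_gid_t gid);
extern void mpi_send(void *buf, int len, mpi_gid_t gid, int rank, int tag);
extern void mpi_recv(void *buf, int len, mpi_gid_t gid, int rank, int tag);

#define TAG_BCAST 17                  /* arbitrary tag for the example */

/* Broadcast *value from member rank 0 to every other member of "gid". */
void naive_bcast(mpi_gid_t gid, int *value)
{
    int size = mpi_grp_size(gid);
    int me   = mpi_grp_myrank(gid);
    int rank;

    if (me == 0) {
        for (rank = 1; rank < size; rank++)
            mpi_send(value, sizeof *value, gid, rank, TAG_BCAST);
    } else {
        /* Selection names the sender: member rank 0 of the same gid. */
        mpi_recv(value, sizeof *value, gid, 0, TAG_BCAST);
    }
}
\end{verbatim}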
% % END "Basic Proposal" %----------------------------------------------------------------------% %----------------------------------------------------------------------% % BEGIN "Extended Proposal" % \subsection{Extended Proposal} The main additional features of the extended proposal are: \begin{itemize} \item Semantics for passing a group identifier in a message are defined, and a group registry is introduced. \item Point-to-point communication is extended to allow inter-group communication without syntactic intrusion on intra-group communication. \item Collective communication operations involving a concerted action on the part of members of two or more groups is suggested. \end{itemize} \subsubsection{Process and Group Identifiers} The extended proposal says nothing more about process identifiers than the basic proposal. The extended proposal defines a mechanism by which a group identifier may be passed in a message in such a manner that the received identifier is valid, and the mechanism should be analagous to that which allows process identifiers to be transmitted. It is proposed that this shall be integrated with the buffer descriptor mechanism (proposed by Bill Gropp and Rusty Lusk), by addition of a procedure which places a logical reference to a group identifier into the buffer descriptor, e.g. {\tt mpi\_bd\_gid(bd, \&gid)}. Transmission of a group identifier using this mechanism returns to the recipient a group identifier which is valid for use in the MPI operations described above and below, and as further qualified below. This transmission may side effect state in the implementation of MPI at the recipient, and in particular may reserve state at the recipient. MPI will provide a procedure which invalidates a group identifier, allowing the implementation of MPI to recover reserved state, e.g. {\tt mpi\_invalidate\_gid(gid)}. This is an error if {\tt gid} is {\tt MPI\_GID\_NULL}, or if {\tt gid} is the identifier of a group of which the calling process is a member. It is further proposed that MPI provide a group identifier registry service. This service allows any group to register its own group identifer by name, and deregister its group identifier. The service allows any group to determine whether a name has been registered without blocking the calling group, and to map that name into a validated group identifier with the possibility of blocking the calling group. The group registry procedures should synchronise the calling group, permitting more efficient implementations than asynchronous operations. The definition of a valid group identifier is extended to include all groups of which the process is a member and all those which are validated by one of the above mechanisms. The small suite of procedures which map between process identifiers, group identifiers and member ranks are defined to work with valid group and process identifiers. \subsubsection{Point-to-point communication} The point-to-point communication syntax and semantics described in the basic proposal do not extend directly to the case of inter-group communication. The reason is quite simple --- the sender and receiver do not supply the same group since the group of the sender and the group of the receiver are different. The basic approach is to express point-to-point communication in terms of a triplet {\tt (localGroup, remoteGroup, remoteRank)}. In this notation the sender specifies the sender group identifier, then the receiver group identifier and finally the receiver rank. 
The receiver specifies the receiver group identifier, then the sender group identifier, and finally the sender rank. The {\tt localGroup} field may not take a wildcard value, corresponding directly to the rule that the group in the basic proposal may not take a wildcard value. The {\tt remoteGroup} field may take a wildcard value, e.g. {\tt MPI\_GID\_WILD}, which matches with any group. The {\tt remoteRank} may also take a wildcard value, as in the basic proposal. The point-to-point section should provide procedures which allow a message recipient to recover the group and rank of the sender. In the case of a {\tt remoteGroup} wildcard receive, the group identifier recovered by the receiver may be unknown to the receiver. It is proposed that an implicit validation of the group identifier must be performed by the MPI implementation, in order that the recipient is returned a valid group identifier, else the returned identifier is of little or no use to the recipient.

In order to accommodate inter-group addressing and selection into the framework of the basic proposal, the extended proposal suggests a careful redefinition of the {\tt gid} discussed in the basic proposal. With careful presentation this redefinition need not intrude conceptually on the basic proposal, although I shall give a less careful description here, suggestive of two different flavours of presentation. In the basic proposal, the {\tt gid} was formally a reference to a group. In the extended proposal the {\tt gid} is formally composed of references to two groups, and can be thought of as a shorthand notation for {\tt (localGroup, remoteGroup)}. The identifier of the null group, a {\it null\/} identifier, is a valid identifier which is formally composed of a pair of references to the null group, and may be used in the fashion described in the basic proposal.

The group creation functions provide a symmetric, or {\it unary\/}, identifier formally composed of two references to the same group. This group is {\it local\/} since the process in question is a member of the group. An identifier which composes a pair of references to a local group is logically identical to a group identifier as implied in the basic proposal, and may be used for point-to-point and collective communications in an identical fashion. The group identifier transmission and registry lookup procedures also provide a symmetric, or {\it unary\/}, identifier which again is composed of a pair of references to the same group. This group is {\it remote\/} when the process is not a member of the group, or is local when the process is a member of the group. An identifier which composes a pair of references to a remote group is logically identical to the identifier of a remote group as implied above, and is not valid for either point-to-point or collective communications.

Inter-group communication is effected by use of asymmetric, or {\it binary\/}, identifiers, composed of references to two different groups, the first of which is local and the second of which is either local or remote. Such identifiers are constructed by the use of a ``glob'' operation, which returns an identifier referencing the first operand as the local field and the second operand as the remote field. The glob is defined to be an error unless: both operands are symmetric (unary); the first operand refers to a local group. The identifier returned by a glob is valid for point-to-point communication and invalid for collective communication.
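For illustration only, the fragment below sketches how a binary identifier constructed by the glob operation might be used to address a member of a remote group. The procedure names {\tt mpi\_gid\_glob} and {\tt mpi\_send} and their signatures are invented for the sketch; only the concept of the glob operation is part of this proposal.

\begin{verbatim}
/* Sketch only: mpi_gid_glob and mpi_send are hypothetical names. */
typedef int mpi_gid_t;

extern mpi_gid_t mpi_gid_glob(mpi_gid_t localGroup, mpi_gid_t remoteGroup);
extern void      mpi_send(void *buf, int len, mpi_gid_t gid, int rank, int tag);

/* "mine" is a unary identifier of a group of which the caller is a member;
 * "servers" is a unary identifier of a remote group, obtained for example
 * from the group registry or by transmission in a message. */
void notify_servers(mpi_gid_t mine, mpi_gid_t servers, int tag)
{
    int request = 1;
    mpi_gid_t to_servers;

    /* Binary identifier: local field = mine, remote field = servers. */
    to_servers = mpi_gid_glob(mine, servers);

    /* remoteRank 0 addresses the first member of the remote group. */
    mpi_send(&request, sizeof request, to_servers, 0, tag);
}
\end{verbatim}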
The point-to-point communication section should provide a procedure to determine the group identifier object of a completed receive. Procedures which ``unglob'' an asymmetric (binary) group identifier should be provided, returning the local and remote fields as valid identifiers.

This can be viewed as a continuation of a generalised approach to point-to-point communication begun in the basic proposal, in which operations are specified in terms of object identifiers and instance identifiers where objects are composed of multiple instances. The nature of the instance identified, and the semantics of the operation, are dependent on the nature of the object identified, exploiting some genericity.

\begin{center}
\begin{tabular}{lll}
object identifier & instance identifier & action \\
\hline
null group identifier & basic process identifier & basic communication \\
unary group identifier & local process rank & intra-group communication \\
binary group identifier & remote process rank & inter-group communication \\
\end{tabular}
\end{center}

\subsubsection{Collective communication}

There are a number of collective communication operations which logically extend over two or more groups. Some examples of two-group collective communications which are common in the simple host-node programming model are:
\begin{itemize}
\item {\it Broadcast\/} is an implicitly asymmetric operation. There is exactly one sender and there are many receivers, each of which receives an identical message. The sender is a member of a singleton group G and the receivers are members of a group H.
\item {\it Scatter\/} is an implicitly asymmetric operation. There is exactly one sender and there are many receivers, each of which receives a different message. The sender is a member of a singleton group G and the receivers are members of a group H.
\item There is a variant of {\it gather\/}, which returns the gathered data to a single process, which is an implicitly asymmetric operation. There is exactly one receiver and there are many senders. The receiver is a member of a singleton group G and the senders are members of a group H.
\item There is a variant of {\it reduce\/}, which returns the reduced data to a single process, which is an implicitly asymmetric operation. There is exactly one receiver and there are many senders. The receiver is a member of a singleton group G and the senders are members of a group H.
\end{itemize}
Other patterns arise in ``process'' graphs where each ``process'' is allowed to be parallel. For example:
\begin{itemize}
\item {\it all-to-all\/} communication in which the senders and receivers are distinct processes. The senders are members of a group G and the receivers are members of a group H. The two groups G and H need not be of the same size.
\end{itemize}
The asymmetric (binary) group identifier objects described for point-to-point communications in this extended proposal are immediately suitable for each of the two-group collective communication operations described.
%
% END "Extended Proposal"
%----------------------------------------------------------------------%

\section{Conclusion}

This chapter has proposed that ordered groups are used to provide communication contexts, and that communication contexts do not appear independently of process groupings. The chapter made a basic proposal which provides intra-group communication but does not provide inter-group communication, and an extended proposal which also provides inter-group communication.
The basic proposal provides expressive semantics for the case of intra-group communication such as arises in program which compose data driven parallelism, and is closely related to that which was discussed by Marc Snir at the February meeting in Dallas, and builds on discussions which have taken place in various subcommittees. The key additional features are: point-to-point communication can also be expressed in terms of process identifiers; a process registry service was added, use of which is optional. The extended proposal adds expressive semantics for the case of inter-group communication such as arises in programs which compose combinations of data and function driven parallelism. This functionality has been constructed in such a manner that there is no syntactic or performance intrusion on the content of the basic proposal, and the additional conceptual content can be presented seperately from a presentation of intra-group communication. % % END "Proposal I" %======================================================================% \end{document} >------------------------------ Cut Here ------------------------------< /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Mar 18 13:58:05 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA16568; Thu, 18 Mar 93 13:58:05 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA20541; Thu, 18 Mar 93 13:53:16 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 18 Mar 1993 13:53:15 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA20533; Thu, 18 Mar 93 13:53:14 -0500 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA17730; Thu, 18 Mar 1993 13:53:13 -0500 Date: Thu, 18 Mar 1993 13:53:13 -0500 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9303181853.AA17730@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Agenda Can the context subcommittee please get together and tell me when it wants to present its idea to the full group. Also is a separate context subcommittee session required at the meeting prior to the full group meeting on contexts. What would be the length of the session(s). Just let me know and I'll change the agenda accordingly. David From owner-mpi-context@CS.UTK.EDU Thu Mar 18 14:14:48 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17038; Thu, 18 Mar 93 14:14:48 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21495; Thu, 18 Mar 93 14:11:59 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 18 Mar 1993 14:11:58 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21487; Thu, 18 Mar 93 14:11:57 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA29366; Thu, 18 Mar 93 13:07:04 CST Date: Thu, 18 Mar 93 13:07:04 CST From: Tony Skjellum Message-Id: <9303181907.AA29366@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, walker@rios2.epm.ornl.gov Subject: Re: Agenda We need to meet as a sub-committee, to vote on the priority of our internal proposals, which will also be presented beforehand to everyone, starting next Monday (night). We need about 2 hours for our own meeting time. 
For the meeting before the full committee, I think one hour is appropriate. Thank you, Tony From owner-mpi-context@CS.UTK.EDU Thu Mar 18 14:14:50 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17042; Thu, 18 Mar 93 14:14:50 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21341; Thu, 18 Mar 93 14:10:06 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 18 Mar 1993 14:10:03 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21325; Thu, 18 Mar 93 14:10:02 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA29337; Thu, 18 Mar 93 13:03:12 CST Date: Thu, 18 Mar 93 13:03:12 CST From: Tony Skjellum Message-Id: <9303181903.AA29337@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, ranka@top.cis.syr.edu Subject: Proposals I & V Cc: mpi-context@cs.utk.edu Dear Context sub-committee.... we are alive and well, despite rumors of our demise :-), By 7pm EST on Monday, I would like this committee's accepted forms of Proposals I and V to be ready for repromulgation to the entire MPI list. This is necessary because we need to replace Marc Snir's straw CONTEXT proposals in pt2pt and collcomm; they are only supposed to be placeholders from what I understand, awaiting our efforts. From what I also have understood by speaking with Lyndon on telephone just now, Proposal I is our substantive replacement for these placeholders (but only one of the options for closure as we see it). For those of you who don't know, Rik sent out V a few days ago, but no commentary has been seen; Lyndon just sent out I (or I++ as he calls it ). This is to be followed by my presentation of III/IV by March 24, which we will discuss forthwith. There is no proposal 2 anymore. Please interact on I & V (I will comment as well), and help me to get these in shape by Monday, as stipulated. Please urge the organizers to include CONTEXT subcommittee in agenda as originally discussed in Dallas. We are no longer figuring as significantly as originally discussed/agreed, as far as I can tell, because of the chosen ordering of presentations (I am not sure we are even on the first day at all). Are we on the agenda now? Thanks, - Tony From owner-mpi-context@CS.UTK.EDU Thu Mar 18 14:16:53 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17146; Thu, 18 Mar 93 14:16:53 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21578; Thu, 18 Mar 93 14:14:01 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 18 Mar 1993 14:14:00 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21570; Thu, 18 Mar 93 14:13:59 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA29392; Thu, 18 Mar 93 13:09:05 CST Date: Thu, 18 Mar 93 13:09:05 CST From: Tony Skjellum Message-Id: <9303181909.AA29392@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, walker@rios2.epm.ornl.gov Subject: Re: Agenda When is before pt2pt and collcomm does there's. 
- Tony From owner-mpi-context@CS.UTK.EDU Thu Mar 18 14:19:50 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17303; Thu, 18 Mar 93 14:19:50 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21794; Thu, 18 Mar 93 14:17:09 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 18 Mar 1993 14:17:08 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA21786; Thu, 18 Mar 93 14:17:07 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA29426; Thu, 18 Mar 93 13:12:13 CST Date: Thu, 18 Mar 93 13:12:13 CST From: Tony Skjellum Message-Id: <9303181912.AA29426@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, walker@rios2.epm.ornl.gov Subject: Re: Agenda Sorry for fragments... if "less controversial" topics could be discussed at beginning, context subcommittee could miss that general session to vote, followed by our presentation to the whole. If we do not present first, I am afraid our value to the whole will be lost. I know Lyndon agrees on this; I have not heard opinions other than fear that the "straw" context proposals now in pt2pt and collcomm would be used as the de facto choice. We have two of our three proposals in solid form. By Monday, they will go to everyone. We will be prioritizing these choices at our meeting at Dallas. I will not arrive till noon [arriving from Portland with Jack]. So, just making the opening bell, I see no hope for us to vote amongst ourselves, unless you discuss "other parts of MPI" at opening. Can this be done? Thanks, - Tony From owner-mpi-context@CS.UTK.EDU Fri Mar 19 06:37:38 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA23767; Fri, 19 Mar 93 06:37:38 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA09670; Fri, 19 Mar 93 06:36:57 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 19 Mar 1993 06:36:51 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA09648; Fri, 19 Mar 93 06:36:45 -0500 Date: Fri, 19 Mar 93 11:36:36 GMT Message-Id: <10215.9303191136@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Proposals V and I To: Tony Skjellum , Jim Cownie , Rik Littlefield , Sanjay Ranka Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Dear Colleagues (1) Please do send comments on Proposal I++ :-) negative or positive as you think fit, as quickly as you can. I will receive and reply to email over the weekend for brief periods on both Saturday and Sunday, probably afternoon GMT. (2) I intend to give comments on Proposal V, thanks to Rik, this afternoon, most probably around 4pm GMT. (3) I distributed Proposal I++ as a chapter of LaTeX report document style --- I will place Proposal V into this form and post to the addressee list. This will happen before (2), real soon. Rik --- I will do this in such a way that you can read it as plain text without particular grief. Tony --- I am happy to do likewise with your proposal if you like. Hopefully this could make it easier to distribute to the whole committee as a single document, and at a later stage progress toward a draft for the MPI working document. 
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Mar 19 07:38:47 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA02144; Fri, 19 Mar 93 07:38:47 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA12096; Fri, 19 Mar 93 07:37:50 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 19 Mar 1993 07:37:49 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA12088; Fri, 19 Mar 93 07:37:34 -0500 Date: Fri, 19 Mar 93 12:37:29 GMT Message-Id: <10292.9303191237@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Proposal V, LaTeX To: Tony Skjellum , Jim Cownie , Rik Littlefield , Sanjay Ranka Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu LaTeX-ified Proposal V of Rik. PostScript to follow. ---------------------------------------------------------------------- \documentstyle{report} \begin{document} \title{``Proposal V'' for MPI Communication Context Subcommittee} \author{Rik~Littlefield} \date{March 1993} \maketitle %======================================================================% % BEGIN "Proposal V" % Rik Littlefield % March 1993 % \chapter{Proposal V} \section{Summary} \begin{itemize} \item Group ID's have local scope, not global. Groups are represented by descriptors of indefinite size and opaque structure, managed by MPI. Group descriptors can be passed between processes by MPI routines that translate them as necessary. (This satisfies the same needs as global group ID's, but its performance implications are easier to understand and control.) \item Arbitrary process-local info can be cached in group descriptors. (This allows collective operations to run quickly after the first execution in each group, without complicating the calling sequences.) \item There are restrictions that permit groups to be layered on top of pt-pt. \item Pt-pt communications use only TID, context, and tag, and are specified to be fast. \item Group control operations (forming, disbanding, and translating between rank and TID) are NOT specified to be fast. \end{itemize} \section{Detailed Proposal} \begin{itemize} \item Pt-pt uses only ``TID'', ``context'', and ``message tag''. TID's, contexts, and message tags have global scope; they can be copied between processes as integers. TID's and contexts are opaque; their values are managed by MPI. Message tags are controlled entirely by the application. \item Pt-pt communication is specified to be fast in all cases. (E.g., MPI must initialize all processes such that any required translation of the TID is faster than the fastest pt-pt communication call.) \item A group is represented by a ``group descriptor'', of indefinite size and opaque structure, managed by MPI. The group descriptor can be referenced via a ``process-local group ID'', which is typically just a pointer to the group descriptor. (Group ID's with global scope do not exist.) \item Group descriptors can be copied between arbitrary processes via MPI-provided routines that translate them to and from machine- and process-independent form. A process receiving a group descriptor does not become a member of the group, but does become able to obtain information about the group (e.g., to translate between rank and TID). 
\item Arbitrary information can be cached in a group descriptor. That is, MPI routines are provided that allow any routine to augment a group descriptor with arbitrary information, identified by a key value, that subsequently can be retrieved by any routine having access to the descriptor. Cached information is not visible to other processes and is stripped when a group descriptor is sent to another process. Retrieval of cached information is specified to be fast (significantly cheaper than a pt-pt communication call).
\item Group descriptors contain enough information to asynchronously translate between (group,rank) and TID for any member of the group, by any process holding a descriptor for the group. MPI routines are provided to do these translations. Translation is not specified to be fast.
\item At creation, each group is assigned a globally unique ``default group context'' which is kept in the group descriptor and can be quickly extracted. This extraction is specified to be fast (comparable to a procedure call).
\item The default group context can be used for pt-pt communication by any module operating within the group, but only for exactly paired send/receive with exact match on TID, context, and message tag. Call this the ``paired-exact-match constraint''. This constraint allows independent modules to use the same context without coordinating their tag values. (Assumes: tight sequencing constraints for both blocking and non-blocking comms in pt-pt MPI.)
\item A module that does not meet the paired-exact-match constraint must use a different globally unique context value. An MPI routine will exist to generate such a value and broadcast it to the group. The cost of this operation is specified to be comparable to that of an efficient broadcast in the group, i.e. log(G) pt-pt startup costs for a group of G processes. [See Implementation Note 1.]
\item Context values are a plentiful but exhaustible resource. If further use is anticipated, a context may be cached; otherwise it should be returned. (Note: the number of available contexts should be several times the number of groups that can be formed.)
\item When a group disbands, its group descriptor is closed and any cached information is released. (The MPI group-disbanding routine does this by calling destructor routines that the application provided when the information was cached.) Copies of the group descriptor must be explicitly released. It is an error to make any reference to a descriptor of a disbanded group, except to release that descriptor.
\item All processes that are initially allocated belong to the INITIAL group. (In a static process model, this means all processes.) An MPI routine exists for obtaining a reference to the INITIAL group descriptor (i.e., a local group ID for INITIAL).
\item Groups are formed and disbanded by synchronous calls on all participating processes. In the case of overlapping groups, all processes must form and disband groups in the same order. [See Implementation Note 2.]
\item All group formation calls are treated as a partitioning of some group, INITIAL if none other. The calls include a reference to the descriptor of the group being partitioned, the TID of the new group's ``root process'', the number of processes in the group, and an integer ``group tag'' provided by the application. The group tag must discriminate between overlapping groups that may be formed concurrently, say by multiple threads within a process. [See Implementation Note 2.]
\item Collective communication routines are called by all members of a group in the same order. \item Blocking collective communication routines are passed only a reference to the group descriptor. To communicate, they can either use the default group context or internally allocate a new context, with or without cacheing. \item Non-blocking collective communication routines are passed a reference to the group descriptor, plus a ``data tag'' to discriminate between concurrent operations in the same group. (Question: is sequencing enough to do this?) An implementation is allowed but not required to impose synchronization of all cooperating processes upon entry to a non-blocking collective communication routine. \end{itemize} \section{Advantages \& Disadvantages} \subsection*{Advantages} \begin{itemize} \item This proposal is implementable with no servers and can be layered easily on existing systems. \item Cacheing auxiliary information in group descriptors allows collective operations to run at maximum speed after the first execution in each group. (Assumes that sufficient context values are available to permit cacheing.) \item Speed of cached collective operations does not depend on speed of group operations such as formation or translation between rank and TID. \item Requires explicit translation between (group,rank) and TID, which makes pt-pt performance predictable no matter how the functionality of groups gets extended. \item Communication both within and between groups seems conceptually straightforward. \end{itemize} \subsection*{Disadvantages} \begin{itemize} \item Requires explicit translation between (group,rank) and TID, which may be considered awkward. \item Communication between different groups may be considered awkward. \item No free-for-all group formation. A process must know something about who its collaborators are going to be (minimally, the root of the group). \item Requires explicit coherency of group descriptors, which is not convenient -- application must keep track of which processes have copies and what to do with them. \item Static group model. Processes can join and leave collaborations only with the cooperation of the other group members in forming larger or smaller groups. \item No direct support for identifying the group from which a message came. The sender cannot embed its group ID in its message, since group ID's are not globally meaningful. One method to address this problem is to have the sender embed its group context as data in the message, then have the receiver use that value to select among group descriptors that it had already been sent. \end{itemize} \section{Comments \& Alternatives} \begin{itemize} \item The performance of group control functions is explicitly unspecified in order to provide performance insurance. The intent is to encourage program designs whose performance won't be killed if the functionality of groups is expanded so as to be unscalable or require expensive services. \item Adding global group ID's would seriously interfere with layering this proposal on pt-pt. The problem is not generating a unique ID, but using it. If just holding a global group ID were to allow a process to asynchronously ask questions about the group (like translating between rank and TID), then a group information server of some sort would be required, and MPI pt-pt does not provide enough capability to construct such a beast. 
\item The restriction that only paired-exact-match modules can run in the default group context could be relaxed to something like ``a module that violates the paired-exact-match constraint can run in the default group context if and only if it is the only module to run in that context''. This seems too error-prone to be worth the trouble, since even the standard collective ops might be excluded. (It would also conflict with Implementation Note 1.)
\item Group formation can be faster when more information is provided than just the group tag and root process. E.g. if all members of the group can identify all other members of a P-member group, in rank order, then O(log P) time is sufficient; if only the root is known, O(P) time may be required. This aspect should be considered by the groups subcommittee in evaluating scalability.
\end{itemize}

\section{Implementation Notes}
\begin{enumerate}
\item To generate and broadcast a new context value: Generate the context value without communication by concatenating a locally maintained unique value with the process number in some 0..P-1 global ordering scheme. This assumes that context values can be significantly longer than log(P) bits for an application involving P processes. If not, then a server may be required, in which case by specification it has to be fast (comparable to a pt-pt call). (There is no need to generate a new context for the broadcast since it can be coded to use the default group context by meeting the paired-exact-match constraint.)
\item The following is intended only as a sanity check on the claim that this proposal can be layered on MPI. Suggestions to improve efficiency or to use fewer contexts will be greatly appreciated. In addition to a default group context, each group descriptor contains a ``group partitioning context'' and a ``group disbanding context'' that are obtained and broadcast at the time the group is created. In the case of the INITIAL group, this is done using paired-exact-match code and any context, immediately after initialization of the MPI pt-pt level. (At that time, no application code will have had a chance to issue wildcard receives to mess things up.) Group partitioning is accomplished using pt-pt messages in the partitioning context of the current group (i.e., the one being partitioned), with message tag equal to the group tag provided by the application. In the worst (least scalable) case, the root of the new group must wildcard the source TID. (This violates the paired-exact-match constraint, which is why group formation must happen in a special context.) Group disbanding is done with pt-pt messages in the group disbanding context. This keeps group control from being messed up no matter how the application uses wildcards.
\end{enumerate}

\section{Examples}
{\bf (To be provided)}
%
% END "Proposal V"
%======================================================================%
\end{document}
----------------------------------------------------------------------
/--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/
From owner-mpi-context@CS.UTK.EDU Fri Mar 19 07:40:10 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA02169; Fri, 19 Mar 93 07:40:10 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA12140; Fri, 19 Mar 93 07:39:47 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 19 Mar 1993 07:39:46 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA12112; Fri, 19 Mar 93 07:38:58 -0500 Date: Fri, 19 Mar 93 12:38:51 GMT Message-Id: <10301.9303191238@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Proposal V To: Tony Skjellum , Jim Cownie , Rik Littlefield , Sanjay Ranka Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu
PostScript of LaTeX-ified Proposal V of Rik
----------------------------------------------------------------------
[dvips PostScript attachment omitted; its content is identical to the LaTeX source of Proposal V above]
/--------------------------------------------------------\
e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Fri Mar 19 11:53:58 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA07957; Fri, 19 Mar 93 11:53:58 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA26990; Fri, 19 Mar 93 11:53:19 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 19 Mar 1993 11:53:18 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA26973; Fri, 19 Mar 93 11:53:02 -0500
Date: Fri, 19 Mar 93 16:52:24 GMT
Message-Id: <10550.9303191652@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: delayed comments
To: mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

Hi all

I'm afraid that comments on Rik's Proposal V will be delayed until approx 6pm GMT, two hours later than promised.

Best Wishes
Lyndon

/--------------------------------------------------------\
e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Fri Mar 19 12:35:52 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA08995; Fri, 19 Mar 93 12:35:52 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA29197; Fri, 19 Mar 93 12:35:21 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 19 Mar 1993 12:35:19 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA29127; Fri, 19 Mar 93 12:34:35 -0500
Date: Fri, 19 Mar 93 17:34:23 GMT
Message-Id: <10600.9303191734@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: comments arrive
To: d39135@sodium.pnl.gov
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-context@cs.utk.edu

Hi Rik

Here are my comments of this afternoon. I hope they make some sense to you!

Regards
Lyndon

General points
--------------

1) I had to ``mine'' the text :-) Perhaps one of us (i.e., I am offering if you wish) should attempt to construct a more transparent presentation before circulation to the whole committee, for the convenience of committee members.

2) I'm not a fan of much of this proposal, although I do indeed like some of the ideas which it introduces. [On the other hand, I'm not a great fan of all of the proposal which I wrote. I shall mail a self-criticism of my proposal, and may have to write an amended or alternative proposal :-)]

3) I really like the way in which groups are something like ``frames'' in which contexts are created. This is conceptually much neater than duplication of groups.

4) I like the idea of pushing information into the group structure. I have a few qualms with the proposed details --- see the specific points, and the illustrative sketch below.

5) See ``Writing a server in the point-to-point layer of MPI in four easy steps'' at the foot of the message.

Specific points
---------------

Dealt with as LaTeX comments in the body of the text, appearing in the form

%[Lyndon]
% text of point

for your navigational convenience. These are quite detailed.
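As a concrete reading of general point 4 above (and of the key-arbitration and cache-cost qualms raised in the specific points), the following minimal C sketch shows one plausible shape for a per-descriptor cache keyed by registered key values. Every name in it (group_desc, group_keyval_create, group_attr_put, group_attr_get, group_cache_release, my_bcast) is a hypothetical illustration, not an interface defined by Proposal V or by any MPI draft.

/* Sketch only: a process-local cache of (key, value, destructor)
 * entries hung off a group descriptor.  Assumed names throughout. */

#include <stdio.h>
#include <stdlib.h>

typedef struct attr_entry {
    int key;
    void *value;
    void (*destructor)(void *value);   /* run when the group disbands */
    struct attr_entry *next;
} attr_entry;

typedef struct group_desc {            /* opaque to the application */
    int default_context;               /* assumed globally unique   */
    int nmembers;
    attr_entry *cache;                 /* process-local, never sent */
} group_desc;

/* Key registration: arbitration between independently written modules
 * is done simply by handing out distinct integers at run time. */
static int next_key = 0;
int group_keyval_create(void) { return next_key++; }

/* Attach cached information under a key. */
void group_attr_put(group_desc *g, int key, void *value,
                    void (*destructor)(void *))
{
    attr_entry *e = malloc(sizeof *e);
    e->key = key;
    e->value = value;
    e->destructor = destructor;
    e->next = g->cache;
    g->cache = e;
}

/* Retrieval is a short list walk, far cheaper than a pt-pt call. */
void *group_attr_get(group_desc *g, int key)
{
    for (attr_entry *e = g->cache; e != NULL; e = e->next)
        if (e->key == key)
            return e->value;
    return NULL;
}

/* Called by the group-disbanding routine: strip and destroy the cache. */
void group_cache_release(group_desc *g)
{
    while (g->cache != NULL) {
        attr_entry *e = g->cache;
        g->cache = e->next;
        if (e->destructor != NULL)
            e->destructor(e->value);
        free(e);
    }
}

/* Usage pattern for a collective-operation module: build a schedule
 * once per group, cache it, and reuse it on later calls. */
static int bcast_key = -1;

void my_bcast(group_desc *g /*, buffer, root, ... */)
{
    if (bcast_key < 0)
        bcast_key = group_keyval_create();
    int *schedule = group_attr_get(g, bcast_key);
    if (schedule == NULL) {
        schedule = malloc(g->nmembers * sizeof *schedule);
        for (int i = 0; i < g->nmembers; i++)
            schedule[i] = i;           /* placeholder schedule */
        group_attr_put(g, bcast_key, schedule, free);
    }
    /* ... drive the broadcast from the cached schedule ... */
}

int main(void)
{
    group_desc g = { 42, 4, NULL };
    my_bcast(&g);                      /* first call builds the cache */
    my_bcast(&g);                      /* later calls hit the cache   */
    printf("cached schedule present: %s\n",
           group_attr_get(&g, bcast_key) != NULL ? "yes" : "no");
    group_cache_release(&g);
    return 0;
}

Run-time handout of distinct integer keys is one simple answer to the arbitration question, and the short list walk is intended to keep retrieval well below the cost of a pt-pt call for the handful of keys a process typically registers.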
----------------------------------------------------------------------

\documentstyle{report}

\begin{document}

\title{``Proposal V'' for MPI Communication Context Subcommittee}
\author{Rik~Littlefield}
\date{March 1993}
\maketitle

%======================================================================%
% BEGIN "Proposal V"
% Rik Littlefield
% March 1993
%
\chapter{Proposal V}

\section{Summary}

\begin{itemize}

\item Group ID's have local scope, not global. Groups are represented by descriptors of indefinite size and opaque structure, managed by MPI. Group descriptors can be passed between processes by MPI routines that translate them as necessary. (This satisfies the same needs as global group ID's, but its performance implications are easier to understand and control.)
%[Lyndon]
% I support the approach whereby group descriptors are local
% objects. They could be pointers to structures, or indices
% into tables thereof. We let the implementation consider that.
%
% One difficulty arises as group descriptors can only be passed
% from process P to process Q if both P and Q are members of some
% group G since the communication presumably must use a context
% known to both P and Q. Imagine that P is a member of F and Q is not
% a member of F; that Q is a member of H and P is not a member of H; that
% both P and Q are members of G. Let M be arbitrary message data.
%
% Initially -
% P can send F to Q, and Q can receive F from P, in a context of G.
% Q can send H to P, and P can receive H from Q, in a context of G.
% Thereafter -
% P can allocate a context C in F.
% P can send C to Q, and Q can receive C in the default context of H.
% Q can allocate a context D in H.
% Q can send D to P, and P can receive D, in the default context of F.
% Thereafter -
% P as a member of F, and Q as a member of H, can communicate using
% wildcard pid and tag by use of contexts C and D.
%
% Okay, this is possible, but it is messy :-)

\item Arbitrary process-local info can be cached in group descriptors. (This allows collective operations to run quickly after the first execution in each group, without complicating the calling sequences.)
%[Lyndon]
% I rather like this idea.

\item There are restrictions that permit groups to be layered on top of pt-pt.

\item Pt-pt communications use only TID, context, and tag, and are specified to be fast.

\item Group control operations (forming, disbanding, and translating between rank and TID) are NOT specified to be fast.

\end{itemize}

\section{Detailed Proposal}

\begin{itemize}

\item Pt-pt uses only ``TID'', ``context'', and ``message tag''. TID's, contexts, and message tags have global scope; they can be copied between processes as integers. TID's and contexts are opaque; their values are managed by MPI. Message tags are controlled entirely by the application.
%[Lyndon]
% You probably ought to say that context and TID are integers with
% opaque values.

\item Pt-pt communication is specified to be fast in all cases. (E.g., MPI must initialize all processes such that any required translation of the TID is faster than the fastest pt-pt communication call.)
%[Lyndon]
% How do you imagine this to be achieved, considering that TIDs
% are global entities?
% I guess that you are thinking a TID is a (processor_number,
% process_number) pair of bit fields, a bit like one sees in NX and RK,
% and that network interface hardware will route based on the
% processor_number.
%
% In another approach a TID is a process-local entity just like the
% group descriptor.
% This satisfies efficiency when the above scheme
% is not applicable, for example in a workstation network.

\item A group is represented by a ``group descriptor'', of indefinite size and opaque structure, managed by MPI. The group descriptor can be referenced via a ``process-local group ID'', which is typically just a pointer to the group descriptor. (Group ID's with global scope do not exist.)
%[Lyndon]
% I think I see, it is the context identifier which has global scope.
% Now, this really is just getting on the way toward the proposal
% that I really wish I had written for the subcommittee. I will flame
% myself!

\item Group descriptors can be copied between arbitrary processes via MPI-provided routines that translate them to and from machine- and process-independent form. A process receiving a group descriptor does not become a member of the group, but does become able to obtain information about the group (e.g., to translate between rank and TID).
%[Lyndon]
% Also crucially, to obtain and use the default context identifier
% of the received group descriptor.

\item Arbitrary information can be cached in a group descriptor. That is, MPI routines are provided that allow any routine to augment a group descriptor with arbitrary information, identified by a key value, that subsequently can be retrieved by any routine having access to the descriptor. Cached information is not visible to other processes and is stripped when a group descriptor is sent to another process. Retrieval of cached information is specified to be fast (significantly cheaper than a pt-pt communication call).
%[Lyndon]
% I like the general idea, but I'm nervous about two things:
% (a) implied associativity of group descriptor cache - this
% will potentially be time-expensive in implementation.
% (b) there is no method proposed for arbitration of keys
% between independently written modules, so we are
% in the same problem regime as just having message tag
% and no message context.
% However, keys are local, so presumably you would find
% it acceptable to add a key registration service?

\item Group descriptors contain enough information to asynchronously translate between (group,rank) and TID for any member of the group, by any process holding a descriptor for the group. MPI routines are provided to do these translations. Translation is not specified to be fast.
%[Lyndon]
% How do you imagine that this will be done?
% (a) Perhaps an array of TIDs which is just indexed on rank? Then
% where is the case for not using rank directly?
% (b) Perhaps a hashing function? Then the case for not using rank
% directly is marginal.
% (c) Perhaps generating a request to a service process? In which
% case you admit here that a service process exists, which must
% be propagated throughout the proposal and changes one of your
% fundamental objectives.
% (d) Something else? Do tell!

\item At creation, each group is assigned a globally unique ``default group context'' which is kept in the group descriptor and can be quickly extracted. This extraction is specified to be fast (comparable to a procedure call).

\item The default group context can be used for pt-pt communication by any module operating within the group, but only for exactly paired send/receive with exact match on TID, context, and message tag. Call this the ``paired-exact-match constraint''. This constraint allows independent modules to use the same context without coordinating their tag values.
(Assumes: tight sequencing constraints for both blocking and non-blocking comms in pt-pt MPI.) %[Lyndon] % I understand what you want the paired-exact-match for. This % might appear as pragmatics and advice to module writers. I % think you should be firmer about sequencing constraints % for point-to-point in MPI that this requires, to be % sure that the constraint is not too large. \item A modules that does not meet the paired-exact-match constraint must use a different globally unique context value. An MPI routine will exist to generate such a value and broadcast it to the group. The cost of this operation is specified to be comparable to that of an efficient broadcast in the group, i.e. log(G) pt-pt startup costs for a group of G processes. [See Implementation note 1.] %[Lyndon] % Perhaps I am missing something here. Please help. This % is what my mind is thinking. % The synchronisation requirement means that all context % allocations in a group G must be performed in an identical % order by all members of G. Then the sequence number of the % allocation is unique among all allocations within G. % Therefore the duplet % (default context of G, allocation sequence number) % is a globally unique identification of the allocated % context. The sequence number can be replaced by any one-to-one % map of the sequence number, of course. So, according to your % synchronisation constraint, context generation can be ``free''. \item Context values are a plentiful but exhaustible resource. If further use is anticipated, a context may be cached; otherwise it should be returned. (Note: the number of available contexts should be several times the number of groups that can be formed.) \item When a group disbands, its group descriptor is closed and any cached information is released. (The MPI group-disbanding routine does this by calling destructor routines that the application provided when the information was cached.) Copies of the group descriptor must be explicitly released. It is an error to make any reference to a descriptor of a disbanded group, except to release that descriptor. \item All processes that are initially allocated belong to the INITIAL group. (In a static process model, this means all processes.) An MPI routine exists for obtaining a reference to the INITIAL group descriptor (i.e., a local group ID for INITIAL). \item Groups are formed and disbanded by synchronous calls on all participating processes. In the case of overlapping groups, all processes must form and disband groups in the same order. [See Implementation Note 2.] \item All group formation calls are treated as a partitioning of some group, INITIAL if none other. The calls include a reference to the descriptor of the group being partitioned, the TID of the new group's ``root process'', the number of processes in the group, and an integer ``group tag'' provided by the application. The group tag must discriminate between overlapping groups that may be formed concurrently, say by multiple threads within a process. [See Implementation Note 2.] %[Lyndon] % The group partition you propose is essentially no different to % the partition by key which has already been discussed, except % that the key can encapsulate both (root process, group tag). % So perhaps partition by key was better in the first place? \item Collective communication routines are called by all members of a group in the same order. \item Blocking collective communication routines are passed only a reference to the group descriptor. 
To communicate, they can either use the default group context or internally allocate a new context, with or without cacheing. \item Non-blocking collective communication routines are passed a reference to the group descriptor, plus a ``data tag'' to discriminate between concurrent operations in the same group. (Question: is sequencing enough to do this?) An implementation is allowed but not required to impose synchronization of all cooperating processes upon entry to a non-blocking collective communication routine. %[Lyndon] % If the requirement that collective operations within a group G are % done in the identical order by all members of G even when such % operations are non-blocking, then the sequence number of the operation % is unique and sufficient for disambiguation. % % The permission to force synchronisation - i.e., blocking - in the % implementation of a non-blocking routine seems to make the routine % less than useful. I can see whay you are asking for this, in order % that you can generate a context for the routine call. In fact Rik % I don't think you need the constraint, as I pointed out cheaper % context generation exists above, unless of course I am missing % something. \end{itemize} \section{Advantages \& Disadvantages} \subsection*{Advantages} \begin{itemize} \item This proposal is implementable with no servers and can be layered easily on existing systems. %[Lyndon] % I am not of the opinion that the absence of services is such a big % deal. I do think that programs which can conveniently not use % services should not be forced to, but programs which cannot % conveniently not use services should be allowed to. \item Cacheing auxiliary information in group descriptors allows collective operations to run at maximum speed after the first execution in each group. (Assumes that sufficient context values are available to permit cacheing.) %[Lyndon] % If you agree that context allocation is ``free'', then you can % delete the bracketed qualifier. \item Speed of cached collective operations does not depend on speed of group operations such as formation or translation between rank and TID. %[Lyndon] % True, but the cache is going to get big as user's are going to store % arrays of TIDs in it. \item Requires explicit translation between (group,rank) and TID, which makes pt-pt performance predictable no matter how the functionality of groups gets extended. %[Lyndon] % This is only true because you have asserted that implementations % must have the property that: % `` Pt-pt communication is specified to be fast in all cases. % (E.g., MPI must initialize all processes such that any % required translation of the TID is faster than the fastest % pt-pt communication call.)'' % So the advantage is not that which you have quoted, it is that % you have made this assertion. \item Communication both within and between groups seems conceptually straightforward. %[Lyndon] % This is a conjecture. I believe that conjecture to be false. % I especially believe this in the case of communication between % groups. The methods which are available for ``hooking up'' % allows are at least perverse. I guess that the user could make % use of a service process, to make life easier in this hooking up, % so whay not provide one. % % A further point. It seems to me that ``seems'' means that it seems % to you. This is not the point. It is how it seems to a lesser % wizard than yourself which is of importance here. 
I conjecture % that the reverse statment is true when the person doing the seeming % is changed to a lesser wizard. \end{itemize} \subsection*{Disadvantages} \begin{itemize} \item Requires explicit translation between (group,rank) and TID, which may be considered awkward. %[Lyndon] % It is true that (group,rank) must be translated to TID. I can % assure you that this is considered both awkward and redundant. \item Communication between different groups may be considered awkward. %[Lyndon] % You bet! Please see below. \item No free-for-all group formation. A process must know something about who its collaborators are going to be (minimally, the root of the group). %[Lyndon] % Please see comments above on group creation. \item Requires explicit coherency of group descriptors, which is not convenient -- application must keep track of which processes have copies and what to do with them. %[Lyndon] % I think all of the proposals will have this problem. \item Static group model. Processes can join and leave collaborations only with the cooperation of the other group members in forming larger or smaller groups. \item No direct support for identifying the group from which a message came. The sender cannot embed its group ID in its message, since group ID's are not globally meaningful. One method to address this problem is to have the sender embed its group context as data in the message, then have the receiver use that value to select among group descriptors that it had already been sent. %[Lyndon] % Yup, the user can ``do it manually with a search''. If you want % to invoke this argument then I can dispose of almost everything % in MPI in a period of a few minutes - in fact Steven Zenith will % do it faster - so I refute the validity of the argument and claim % that the MPI interfce should transmit said information. % % Further, the receiver is likely to want to be able to ask which % rank in the sender group the sender was. Oh dear, well I suppose % you think that's okay because the sender can put its rank into % the message. This is just being inconvenient to the user who % wants to send an array of something (double complex?) and has % to pack a rank in by copying or sending a pre-message or the % buffer descriptor kind of thing. \end{itemize} \section{Comments \& Alternatives} \begin{itemize} \item The performance of group control functions is explicitly unspecified in order to provide performance insurance. The intent is to encourage program designs whose performance won't be killed if the functionality of groups is expanded so as to be unscalable or require expensive services. %[Lyndon] % I don't think that the intent expressed in the second sentence % is satisfied. For example - group control is allowed to become the % dominant feature of application time complexity. \item Adding global group ID's would seriously interfere with layering this proposal on pt-pt. The problem is not generating a unique ID, but using it. If just holding a global group ID were to allow a process to asynchronously ask questions about the group (like translating between rank and TID), then a group information server of some sort would be required, and MPI pt-pt does not provide enough capability to construct such a beast. %[Lyndon] % It is not the global uniqueness of group identifiers which creates % the problem. There are globally unique labels of groups in your % proposal anyway - the value of the default context identifier. 
% The problem is that of allowing query of group information when
% that information cannot be recorded in the local process/processor
% memory.
%
% You claim that point-to-point does not have enough capability to
% construct an information server. Firstly I should ask you whether
% this is an artefact of the manner in which you have defined the
% point-to-point communication. Secondly I assert that your claim
% is false. I shall append a description of server implementation
% to the foot of this message.

\item The restriction that only paired-exact-match modules can run in
the default group context could be relaxed to something like ``a module
that violates the paired-exact-match constraint can run in the default
group context if and only if it is the only module to run in that
context''. This seems too error-prone to be worth the trouble, since
even the standard collective ops might be excluded.
(It would also conflict with Implementation Note 1.)

\item Group formation can be faster when more information is provided
than just the group tag and root process. E.g. if all members of the
group can identify all other members of a P-member group, in rank
order, then O(log P) time is sufficient; if only the root is known,
O(P) time may be required. This aspect should be considered by the
groups subcommittee in evaluating scalability.
%[Lyndon]
% Yes, partition does appear to be O(P) whereas definition by ordered
% list appears to be O(log(P)).

\end{itemize}

\section{Implementation Notes}

\begin{enumerate}

\item To generate and broadcast a new context value: Generate the
context value without communication by concatenating a locally
maintained unique value with the process number in some 0..P-1 global
ordering scheme. This assumes that context values can be significantly
longer than log(P) bits for an application involving P processes. If
not, then a server may be required, in which case by specification it
has to be fast (comparable to a pt-pt call). There is no need to
generate a new context for the broadcast since it can be coded to use
the default group context by meeting the paired-exact-match constraint.
%[Lyndon]
% Please see notes above on the subject of context generation.

\item The following is intended only as a sanity check on the claim
that this proposal can be layered on MPI. Suggestions for improving
efficiency or using fewer contexts will be greatly appreciated.

In addition to a default group context, each group descriptor contains
a ``group partitioning context'' and a ``group disbanding context''
that are obtained and broadcast at the time the group is created. In
the case of the INITIAL group, this is done using paired-exact-match
code and any context, immediately after initialization of the MPI
pt-pt level. (At that time, no application code will have had a chance
to issue wildcard receives to mess things up.)

Group partitioning is accomplished using pt-pt messages in the
partitioning context of the current group (i.e., the one being
partitioned), with message tag equal to the group tag provided by the
application. In the worst (least scalable) case, the root of the new
group must wildcard the source TID. (This violates the
paired-exact-match constraint, which is why group formation must
happen in a special context.)

Group disbanding is done with pt-pt messages in the group disbanding
context. This keeps group control from being messed up no matter how
the application uses wildcards.
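%
% [For concreteness only: a minimal C sketch of the Note 1 scheme above,
%  assuming a hypothetical my_initial_rank() that returns this process's
%  number in the 0..P-1 ordering, and a RANK_BITS wide enough to hold it.
%  None of these names are part of the proposal.]
%
%    /* Each process owns the context values whose low RANK_BITS bits    */
%    /* equal its process number, so no two processes can ever generate  */
%    /* the same value and no communication is needed.                   */
%    #define RANK_BITS 12                 /* enough for 4096 processes   */
%
%    static unsigned int next_local = 1;  /* locally maintained counter  */
%
%    unsigned int new_context_value(void)
%    {
%        return (next_local++ << RANK_BITS)
%               | (unsigned int) my_initial_rank();
%    }
%
%    /* The caller then broadcasts the value over the group using        */
%    /* exactly paired sends and receives in the default group context,  */
%    /* as the note prescribes.                                          */
%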
\end{enumerate} \section{Examples} {\bf (To be provided)} % % END "Proposal V" %======================================================================% \end{document} ---------------------------------------------------------------------- Writing a server in the point-to-point layer of MPI in four easy steps ---------------------------------------------------------------------- 1) Partition the INITIAL group into two groups. A singleton group, SERVER, and a group CLIENT which contains all of the other processes. 2) The single process in SERVER group records its TID. 3) The processes in INITIAL group allocate a context SERVICE which they remember either in the group cache or static data or something. 4) Use a broadcast in INITIAL group with ``sender'' as the one process which is also in SERVER group, and the ``receivers'' as the (many) processes which are also in CLIENT group, in the SERVICE context, in order to disseminate the TID of the server process. [Fanfare] a server process is in place as is a dedicated context for the purposes of messages required to implement the service. [Observation] the mpi point-to-point initialisation can do this automatically. ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Mar 19 14:04:14 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA10876; Fri, 19 Mar 93 14:04:14 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA04631; Fri, 19 Mar 93 14:03:37 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 19 Mar 1993 14:03:35 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61++/2.8s-UTK) id AA04623; Fri, 19 Mar 93 14:03:32 -0500 Date: Fri, 19 Mar 93 19:03:21 GMT Message-Id: <10723.9303191903@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: gone To: d39135@sodium.pnl.gov Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Hi Rik I away home with the laptop, and off-line for the evening. If you want to discuss my comments, or make comments to me, or whatever, today then feel free to phone me at home if you are able. My homephone is + 44 31 557 0480. I am going to work on proposal I, to contract it to essentially the "Snir" model as originally discussed --- i.e. I'm going to delete the extended part at least. I never really did like retrofitting my requirements onto that. I am then going to write a proposal II' which separates contexts and groups, and meets requirements, and which I all along preferred in shape. I hope you don't mind me borrowing the number, I have annotated to indicate that this is not the original II :-) There will be much room for the group identifier cache in II'. I will mention this but not deal with it in any detail at this point. This is work I will do with the optimistic assumption that the "gang of three (four, five?)" will find this acceptable. I hope to mail out tomorrow afternoon GMT. 
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Sun Mar 21 12:08:47 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA29146; Sun, 21 Mar 93 12:08:47 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20050; Sun, 21 Mar 93 12:08:12 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 12:08:09 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20042; Sun, 21 Mar 93 12:08:08 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03376; Sun, 21 Mar 93 11:02:21 CST Date: Sun, 21 Mar 93 11:02:21 CST From: Tony Skjellum Message-Id: <9303211702.AA03376@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, ranka@top.cis.syr.edu Subject: Re: Proposals V and I Cc: mpi-context@cs.utk.edu I will put my own doc in Latex, but I agree this is good. We will make one jointly authored "document", ala Rusty & Bill. More to follow [lots] - Tony From owner-mpi-context@CS.UTK.EDU Sun Mar 21 13:14:46 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA29712; Sun, 21 Mar 93 13:14:46 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21849; Sun, 21 Mar 93 13:14:13 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 13:14:11 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21841; Sun, 21 Mar 93 13:14:10 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03573; Sun, 21 Mar 93 12:08:23 CST Date: Sun, 21 Mar 93 12:08:23 CST From: Tony Skjellum Message-Id: <9303211808.AA03573@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu Subject: My comments on Proposal V [tex format] Step 1. I am commenting directly on Proposal V. Step 2. I am comment on Lyndon's comments on Proposal V. This will come in a separate e-mail, to reduce [my] confusion, and hopefully yours. - Tony 1. My annotations to proposal V by Rik; %**** is my prefix \documentstyle{report} \begin{document} \title{``Proposal V'' for MPI Communication Context Subcommittee} \author{Rik~Littlefield} \date{March 1993} \maketitle %======================================================================% % BEGIN "Proposal V" % Rik Littlefield % March 1993 % \chapter{Proposal V} \section{Summary} \begin{itemize} \item Group ID's have local scope, not global. Groups are represented by descriptors of indefinite size and opaque structure, managed by MPI. Group descriptors can be passed between processes by MPI routines that translate them as necessary. (This satisfies the same needs as global group ID's, but its performance implications are easier to understand and control.) %**** %**** Seems doable. %**** \item Arbitrary process-local info can be cached in group descriptors. (This allows collective operations to run quickly after the first execution in each group, without complicating the calling sequences.) %**** %**** Seems doable. %**** \item There are restrictions that permit groups to be layered on top of pt-pt. 
%**** %**** I don't understand the word 'restriction' here. %**** Restriction of what. %**** \item Pt-pt communications use only TID, context, and tag, and are specified to be fast. %**** %**** What does "fast" mean. %**** \item Group control operations (forming, disbanding, and translating between rank and TID) are NOT specified to be fast. %**** %**** OK, the above two items are identical to what Zipcode %**** provides in practice, but people have argued that groups %**** might be created/deleted more often in some apps, and %**** that these apps ought to be supportable %**** \end{itemize} \section{Detailed Proposal} \begin{itemize} \item Pt-pt uses only ``TID'', ``context'', and ``message tag''. TID's, contexts, and message tags have global scope; they can be copied between processes as integers. TID's and contexts are opaque; their values are managed by MPI. Message tags are controlled entirely by the application. %**** %**** Yes, and I want at least 32-bits of message tag. %**** \item Pt-pt communication is specified to be fast in all cases. (E.g., MPI must initialize all processes such that any required translation of the TID is faster than the fastest pt-pt communication call.) %**** %**** This could be difficult, in practice, if one mails a %**** message to one's own process, and MPI is smart enough %**** to optimize. %**** \item A group is represented by a ``group descriptor'', of indefinite size and opaque structure, managed by MPI. The group descriptor can be referenced via a ``process-local group ID'', which is typically just a pointer to the group descriptor. (Group ID's with global scope do not exist.) %**** %**** Sounds good. %**** \item Group descriptors can be copied between arbitrary processes via MPI-provided routines that translate them to and from machine- and process-independent form. A process receiving a group descriptor does not become a member of the group, but does become able to obtain information about the group (e.g., to translate between rank and TID). \item Arbitrary information can be cached in a group descriptor. This is, MPI routines are provided that allow any routine to augment a group descriptor with arbitrary information, identified by a key value, that subsequently can be retrieved by any routine having access to the descriptor. Cached information is not visible to other processes and is stripped when a group descriptor is sent to another process. Retrieval of cached information is specified to be fast (significantly cheaper than a pt-pt communication call). %**** %**** In Zipcode 1.0, we allow multiple global operations %**** to be provided on a message-class (eg, grid-oriented messages) %**** The identifiers for these possible operations are user-specified %**** presently, but the "names" of the global operations are fixed %**** at compile-time. %**** %**** That means that there is O(1) time to find combine, fanout, send, %**** etc, on a group-wide scope. However, other operations cannot %**** be accessed in O(1) time (they are not in the opaque structure). %**** %**** The same mechanism used by Zipcode to allow multiple methods for %**** combine to be registered by the user, could also allow extensibility %**** just like Rik describes, with little effort. We use AVL trees. %**** %**** In fact, I will add this to Zipcode 1.x. Why say this? It is %**** not far from existing practice, and I have a lot of the machinery %**** in place already, and I am confident that it is useful. 
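%
% [A sketch, not part of the proposal, of what the caching interface
%  could look like in C; the names group_desc, group_cache_put,
%  group_cache_get and the destructor hook are hypothetical stand-ins.]
%
%    typedef struct group_desc group_desc;     /* opaque, managed by MPI  */
%    typedef void (*cache_destructor)(void *value);
%
%    /* Attach (key, value) to the local descriptor; the destructor is   */
%    /* called when the group disbands or the descriptor is released.    */
%    int group_cache_put(group_desc *g, int key, void *value,
%                        cache_destructor destroy);
%
%    /* Retrieve what was cached under key, at well under the cost of a  */
%    /* pt-pt call; returns nonzero and fills *value if the key is found.*/
%    int group_cache_get(group_desc *g, int key, void **value);
%
%    /* A collective-communication module might allocate a context and a */
%    /* precomputed spanning tree on its first call in a group, cache    */
%    /* them under its own key, and find them cheaply on later calls.    */
%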
%**** \item Group descriptors contain enough information to asynchronously translate between (group,rank) and TID for any member of the group, by any process holding a descriptor for the group. MPI routines are provided to do these translations. Translation is not specified to be fast. %**** %**** This seems to be a serious flaw. It will have to be cached %**** on an LRU basis, with system/user/both specifying how much %**** caching is allowed (ie, how much unscalable memory use). %**** If the first time is expensive, OK, but not the Nth time. %**** \item At creation, each group is assigned a globally unique ``default group context'' which is kept in the group descriptor and can be quickly extracted. This extraction is specified to be fast (comparable to a procedure call). %**** %**** OK, I see no problem with this (so far). %**** \item The default group context can be used for pt-pt communication by any module operating within the group, but only for exactly paired send/receive with exact match on TID, context, and message tag. Call this the ``paired-exact-match constraint''. This constraint allows independent modules to use the same context without coordinating their tag values. (Assumes: tight sequencing constraints for both blocking and non-blocking comms in pt-pt MPI.) %**** %**** NO. This violates the concept of context entirely. %**** (ie, an oxymoron ... contexts same, but still no need for %**** tag disambiguation...) %**** %**** Use the default group context to establish (cooperatively) %**** other contexts, and then use these. This is a seriously %**** bad feature, in my mind. %**** \item A modules that does not meet the paired-exact-match constraint must use a different globally unique context value. An MPI routine will exist to generate such a value and broadcast it to the group. The cost of this operation is specified to be comparable to that of an efficient broadcast in the group, i.e. log(G) pt-pt startup costs for a group of G processes. [See Implementation note 1.] %**** %**** I do not think we should support the paired-exact-match thing. %**** \item Context values are a plentiful but exhaustible resource. If further use is anticipated, a context may be cached; otherwise it should be returned. (Note: the number of available contexts should be several times the number of groups that can be formed.) %**** %**** Concur. This suggests many more than "256" %**** \item When a group disbands, its group descriptor is closed and any cached information is released. (The MPI group-disbanding routine does this by calling destructor routines that the application provided when the information was cached.) Copies of the group descriptor must be explicitly released. It is an error to make any reference to a descriptor of a disbanded group, except to release that descriptor. \item All processes that are initially allocated belong to the INITIAL group. (In a static process model, this means all processes.) An MPI routine exists for obtaining a reference to the INITIAL group descriptor (i.e., a local group ID for INITIAL). \item Groups are formed and disbanded by synchronous calls on all participating processes. In the case of overlapping groups, all processes must form and disband groups in the same order. [See Implementation Note 2.] %**** %**** This is the Zipcode model. It could say loosely synchronous. %**** \item All group formation calls are treated as a partitioning of some group, INITIAL if none other. 
The calls include a reference to the descriptor of the group being partitioned, the TID of the new group's ``root process'', the number of processes in the group, and an integer ``group tag'' provided by the application. The group tag must discriminate between overlapping groups that may be formed concurrently, say by multiple threads within a process. [See Implementation Note 2.] %**** %**** I don't understand the thread issue here. %**** \item Collective communication routines are called by all members of a group in the same order. %**** %**** Yes. \item Blocking collective communication routines are passed only a reference to the group descriptor. To communicate, they can either use the default group context or internally allocate a new context, with or without cacheing. %**** What does caching really imply here ??? Help. %**** \item Non-blocking collective communication routines are passed a reference to the group descriptor, plus a ``data tag'' to discriminate between concurrent operations in the same group. (Question: is sequencing enough to do this?) An implementation is allowed but not required to impose synchronization of all cooperating processes upon entry to a non-blocking collective communication routine. %**** I think that contexts are really important in this case, %**** to keep things straight, but that non-blocking collcomm should %**** be omitted from MPI1 (cf, Geist). Sequencing supports %**** a sufficient disambiguation, as long as the entire group %**** is always the participant in operations. That is, you have %**** to form subgroups, with new contexts, to do global ops on %**** subsets. \end{itemize} \section{Advantages \& Disadvantages} \subsection*{Advantages} \begin{itemize} \item This proposal is implementable with no servers and can be layered easily on existing systems. %**** Why aren't servers needed to create contexts. Where do they %**** come from? If you rely on the fact that INITIAL will do %**** a loosely synchonous cooperative operation each time a new %**** context is needed, then a simple (easily implementable server, %**** or fetch-and-add remote access) is replaced by a more rigid %**** computation model. %**** %**** If we can get rid of this disagreement, me might be able to %**** reduce our total proposal space by one whole proposal. %**** \item Cacheing auxiliary information in group descriptors allows collective operations to run at maximum speed after the first execution in each group. (Assumes that sufficient context values are available to permit cacheing.) %**** %**** If contexts are being used very dynamically, how are they being %**** assigned, kept, released, reissued without a server? Sorry if %**** I missed something, but I don't see it, without a restrictive %**** SPMD model of computation (Zipcode obviates its server for the %**** SPMD model, for instance). \item Speed of cached collective operations does not depend on speed of group operations such as formation or translation between rank and TID. %**** %**** Can you clarify this with examples. %**** \item Requires explicit translation between (group,rank) and TID, which makes pt-pt performance predictable no matter how the functionality of groups gets extended. %**** I like this, of course. \item Communication both within and between groups seems conceptually straightforward. %**** Well, is point-to-point group oriented. Not. \end{itemize} \subsection*{Disadvantages} \begin{itemize} \item Requires explicit translation between (group,rank) and TID, which may be considered awkward. 
%**** I think it is awkward. \item Communication between different groups may be considered awkward. %**** OK, but one can form a new group, as I have argued before. %**** Use the "awkward" pt2pt to get the right info shared between %**** group leaders, make the new group, use unawkward collective %**** operations on new group (with new context). \item No free-for-all group formation. A process must know something about who its collaborators are going to be (minimally, the root of the group). %**** %**** This again is in practice, in Zipcode. %**** \item Requires explicit coherency of group descriptors, which is not convenient -- application must keep track of which processes have copies and what to do with them. %**** %**** Sounds dangerous. What must application do to maintain %**** coherency, since group descriptors are opaque. %**** \item Static group model. Processes can join and leave collaborations only with the cooperation of the other group members in forming larger or smaller groups. %**** No, loosely synchronous process model, unless you mean %**** cooperation of INITIAL at all such join/leave steps. \item No direct support for identifying the group from which a message came. The sender cannot embed its group ID in its message, since group ID's are not globally meaningful. One method to address this problem is to have the sender embed its group context as data in the message, then have the receiver use that value to select among group descriptors that it had already been sent. %**** %**** No, you can't know the group or rank in group of sender. %**** If there were one context per group (isn't that so here?), %**** then all you need is the rank. With TID_TO_RANK_IN_GROUP %**** operation, this could be provided, but no wildcarding %**** or receipt selectivity could be done at this level. %**** \end{itemize} \section{Comments \& Alternatives} \begin{itemize} \item The performance of group control functions is explicitly unspecified in order to provide performance insurance. The intent is to encourage program designs whose performance won't be killed if the functionality of groups is expanded so as to be unscalable or require expensive services. %**** %**** No, it just does not provide guarantees that certain kinds %**** of applications will run OK. (ie, those that do group %**** creation/deletion relatively often). Zipcode has assumed %**** that such operations would be relatively seldom. Thus, I do %**** not quibble that this is a reasonable choice,but a fairer %**** way to say this is that it may be difficult to support such %**** applications. That reveals an issue to be studied more. %**** \item Adding global group ID's would seriously interfere with layering this proposal on pt-pt. The problem is not generating a unique ID, but using it. If just holding a global group ID were to allow a process to asynchronously ask questions about the group (like translating between rank and TID), then a group information server of some sort would be required, and MPI pt-pt does not provide enough capability to construct such a beast. %**** %**** Perhaps they should do. \item The restriction that only paired-exact-match modules can run in the default group context could be relaxed to something like ``a module that violates the paired-exact-match constraint can run in the default group context if and only if it is the only module to run in that context''. This seems too error-prone to be worth the trouble, since even the standard collective ops might be excluded. 
(It would also conflict with Implementation Note 1.)
%**** Dump this.

\item Group formation can be faster when more information is provided
than just the group tag and root process. E.g. if all members of the
group can identify all other members of a P-member group, in rank
order, then O(log P) time is sufficient; if only the root is known,
O(P) time may be required. This aspect should be considered by the
groups subcommittee in evaluating scalability.
%****
%**** No, a non-deterministic broadcast can be used, with a token.
%**** This requires a token server. Again, implementable with fetch+
%**** add on most systems, or a light reactive server.
%****
%**** Once the non-deterministic broadcast has finished, a fanin/collapse
%**** is done to the original root, which then frees the token.
%****

\end{itemize}

\section{Implementation Notes}

\begin{enumerate}

\item To generate and broadcast a new context value: Generate the
context value without communication by concatenating a locally
maintained unique value with the process number in some 0..P-1 global
ordering scheme. This assumes that context values can be significantly
longer than log(P) bits for an application involving P processes. If
not, then a server may be required, in which case by specification it
has to be fast (comparable to a pt-pt call). There is no need to
generate a new context for the broadcast since it can be coded to use
the default group context by meeting the paired-exact-match constraint.
%****
%**** Why not just give in and allow the server.
%**** I don't like the paired-exact-match constraint AT ALL.
%****

\item The following is intended only as a sanity check on the claim
that this proposal can be layered on MPI. Suggestions for improving
efficiency or using fewer contexts will be greatly appreciated.

In addition to a default group context, each group descriptor contains
a ``group partitioning context'' and a ``group disbanding context''
that are obtained and broadcast at the time the group is created. In
the case of the INITIAL group, this is done using paired-exact-match
code and any context, immediately after initialization of the MPI
pt-pt level. (At that time, no application code will have had a chance
to issue wildcard receives to mess things up.)
%****
%**** Seems OK, but why need the paired-exact-match thing again.
%****

Group partitioning is accomplished using pt-pt messages in the
partitioning context of the current group (i.e., the one being
partitioned), with message tag equal to the group tag provided by the
application. In the worst (least scalable) case, the root of the new
group must wildcard the source TID. (This violates the
paired-exact-match constraint, which is why group formation must
happen in a special context.)
%****
%**** Again, OK, but I want to see this work without the paired-exact-
%**** match, if possible.

Group disbanding is done with pt-pt messages in the group disbanding
context. This keeps group control from being messed up no matter how
the application uses wildcards.
%****
%**** So, now, you have concurred with my (previously flamed) idea
%**** that group construction/destruction should be realizable using
%**** pt2pt, just like global operations should do. I like this
%**** because 1) it is explicable to the implementor, 2) it allows
%**** simple initial implementations, 3) it sets some ideas for how
%**** much these things will cost [upper bound].
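%
% [A sketch of the partitioning step in Note 2, only to make the layering
%  claim concrete.  The pt-pt calls pt_send/pt_recv(buf, len, tid, context,
%  tag), the wildcard ANY_TID, the helpers my_tid(), alloc_descriptor(),
%  allocate_context() and the descriptor fields are all hypothetical;
%  ranks in the new group are assigned in order of arrival at the root.]
%
%    typedef long tid_t;
%    typedef struct {
%        int    partitioning_context, default_context;
%        int    nmembers;
%        tid_t *tid_of;                        /* rank -> TID             */
%    } group_desc;
%
%    group_desc *form_group(group_desc *parent, tid_t root, int nmembers,
%                           int group_tag)
%    {
%        int ctx = parent->partitioning_context;
%        group_desc *g = alloc_descriptor(nmembers);
%        int r;
%
%        if (my_tid() == root) {
%            g->tid_of[0] = root;              /* root takes rank 0       */
%            for (r = 1; r < nmembers; r++)    /* worst case: wildcard    */
%                pt_recv(&g->tid_of[r], sizeof(tid_t), ANY_TID, ctx,
%                        group_tag);
%            g->default_context = allocate_context();
%            for (r = 1; r < nmembers; r++) {  /* hand back the result    */
%                pt_send(g->tid_of, nmembers * sizeof(tid_t),
%                        g->tid_of[r], ctx, group_tag);
%                pt_send(&g->default_context, sizeof(int),
%                        g->tid_of[r], ctx, group_tag);
%            }
%        } else {
%            tid_t me = my_tid();
%            pt_send(&me, sizeof(tid_t), root, ctx, group_tag);
%            pt_recv(g->tid_of, nmembers * sizeof(tid_t), root, ctx,
%                    group_tag);
%            pt_recv(&g->default_context, sizeof(int), root, ctx, group_tag);
%        }
%        return g;                             /* each process finds its  */
%    }                                         /* own rank in tid_of[]    */
%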
\end{enumerate} \section{Examples} {\bf (To be provided)} % % END "Proposal V" %======================================================================% \end{document} From owner-mpi-context@CS.UTK.EDU Sun Mar 21 13:22:19 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA29866; Sun, 21 Mar 93 13:22:19 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22128; Sun, 21 Mar 93 13:22:00 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 13:21:59 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22120; Sun, 21 Mar 93 13:21:58 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03587; Sun, 21 Mar 93 12:16:22 CST Date: Sun, 21 Mar 93 12:16:22 CST From: Tony Skjellum Message-Id: <9303211816.AA03587@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu Subject: mini-proposal on layerability Dear Context subcommittee, This is something we should share with pt2pt, but I put it before you first.... (and to topologies, for their interest). To build effective layers in MPI, a reasonable receipt selectivity must be provided. As suggested by Jim, this can be accomplished by two integers as selectors for message wildcarding... dont_care care So, each receipt function uses the algorithm (received_rag & ~dont_care)& care With such a feature, layerability (eg, virtual topologies) could be much more easily added into MPI. Without this, (ie, accept or reject tag as ANY or only one specific), it is very difficult to layer. It is useful that wildcarding of sender is provided, and all-or-nothing wildcarding is fine for sender, since sender is specified by opaque TID. Example, wildcard receipt in a 3D grid usage of tags to implement a virtual topology. If a topology is embedded in a tag (p,q,r), then it is conveniently possible to do wildcarding on any dimension, given the care/dont_care bits. Another option would always be to multiple out the (p,q,r) triplet to give the rank-in-group. This could be stored in the tag instead, but severe multiplications/division needs would occur to determine a match (provided a general matching function were also contemplated). In short, a lot of flexiblity occurs with the care/dont_care bits, and I will ask for this feature, so that layerability can reasonably be claimed in MPI1. From owner-mpi-context@CS.UTK.EDU Sun Mar 21 13:44:13 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00158; Sun, 21 Mar 93 13:44:13 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22724; Sun, 21 Mar 93 13:43:47 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 13:43:46 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22716; Sun, 21 Mar 93 13:43:44 -0500 Date: Sun, 21 Mar 93 18:43:27 GMT Message-Id: <12975.9303211843@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Proposals V and I To: Tony Skjellum , d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, ranka@top.cis.syr.edu In-Reply-To: Tony Skjellum's message of Sun, 21 Mar 93 11:02:21 CST Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu > I will put my own doc in Latex, but I agree this is good. We will make > one jointly authored "document", ala Rusty & Bill. Super. 
I thought about this and concluded that, until a proposal is chosen that
goes through, a report documentstyle would probably be easiest, so each
proposal appears as a \chapter. I think we should probably attribute
authorship and credits to the individual proposal chapters at least for
now.

> More to follow [lots]

Goodie Goodie :-) Me too, lots more to follow.

Best Wishes
Lyndon

 /--------------------------------------------------------\
e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
 \--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Sun Mar 21 14:07:25 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00224; Sun, 21 Mar 93 14:07:25 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23503; Sun, 21 Mar 93 14:06:52 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 14:06:50 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23495; Sun, 21 Mar 93 14:06:45 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03668; Sun, 21 Mar 93 13:00:37 CST
Date: Sun, 21 Mar 93 13:00:37 CST
From: Tony Skjellum
Message-Id: <9303211900.AA03668@Aurora.CS.MsState.Edu>
To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu
Subject: my comments on Lyndon's comments on PropV

%**** Step 2. My comments next to Lyndon's comments, critique of Rik's
%**** proposal V.
%****
%**** I have added my further remarks, with %****
%****
%**** - Tony

General points
--------------

1) I had to ``mine'' the text :-) Perhaps one of us (i.e., I am offering
if you wish) should attempt to construct a more transparent presentation
before circulation to the whole committee, for the convenience of
committee members.
%****
%**** I felt that things appear twice, because of summary (good)
%**** and because of implementation notes at end (confusing)
%****

2) I'm not a fan of much of this proposal, although I do indeed like
some of the ideas which it introduces. [On the other hand, I'm not a
great fan of all of the proposal which I wrote. I shall mail self
criticism of my proposal, and may have to write an amended or
alternative proposal :-)]
%****
%**** Please be more specific. I am having a hard time understanding
%**** why you really don't like it, Lyndon. If the process model
%**** were a little less static, and servers were permitted (though
%**** hopefully bounded in cost), I think we would have an excellent
%**** proposal.
%****

3) I really like the way in which groups are something like ``frames''
in which contexts are created. This is conceptually much neater than
duplication of groups.
%****
%**** In practice, group subsetting will require groups to be copied,
%**** otherwise, subgroups will unfairly be penalized by the size
%**** of their ancestor.
%****

4) I like the idea of pushing information into the group structure.
I have a few qualms with the proposed details --- see specific points.
%****
%**** I have more confidence about this idea, and could demonstrate
%**** by June/July time-frame in Zipcode.
%****

5) See ``Writing a server in the point-to-point layer of MPI in four
easy steps'' at the foot of the message.
%****
%**** This seems like a nice thing.
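For what it is worth, the four-step recipe reads roughly as follows in C.
Every name below (group_desc, mpi_initial_group, form_group, rank_to_tid,
nmembers, new_group_context, bcast, my_rank, my_tid, tid_t, and the two
group tags) is a hypothetical stand-in rather than anything a proposal has
defined, and rank 0 of INITIAL is taken as the server purely for
concreteness.

    group_desc *initial = mpi_initial_group();
    int         i_am_server = (my_rank(initial) == 0);
    tid_t       server_tid;
    int         service;

    /* 1. Partition INITIAL: the server forms the singleton SERVER group, */
    /*    everyone else forms the CLIENT group.                           */
    if (i_am_server)
        form_group(initial, my_tid(), 1, SERVER_TAG);
    else
        form_group(initial, rank_to_tid(initial, 1),
                   nmembers(initial) - 1, CLIENT_TAG);

    /* 2. The server records its own TID.                                 */
    if (i_am_server)
        server_tid = my_tid();

    /* 3. All of INITIAL allocates a SERVICE context and remembers it,    */
    /*    say in the INITIAL group's cache.                               */
    service = new_group_context(initial);

    /* 4. Broadcast the server's TID over INITIAL in the SERVICE context, */
    /*    so every client learns where to direct requests.                */
    bcast(&server_tid, sizeof(tid_t), /* root rank */ 0, initial, service);

    /* Thereafter clients send requests to server_tid in the SERVICE      */
    /* context, and the server can receive them with a wildcard TID.      */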
%**** Specific points --------------- Dealt with as LaTeX comments to body of text, appearing in the form %[Lyndon] % text of point for your navigational convenience. These are quite detailed. ---------------------------------------------------------------------- \documentstyle{report} \begin{document} \title{``Proposal V'' for MPI Communication Context Subcommittee} \author{Rik~Littlefield} \date{March 1993} \maketitle %======================================================================% % BEGIN "Proposal V" % Rik Littlefield % March 1993 % \chapter{Proposal V} \section{Summary} \begin{itemize} \item Group ID's have local scope, not global. Groups are represented by descriptors of indefinite size and opaque structure, managed by MPI. Group descriptors can be passed between processes by MPI routines that translate them as necessary. (This satisfies the same needs as global group ID's, but its performance implications are easier to understand and control.) %[Lyndon] % I support the approach whereby group descriptors are local % objects. They could be pointers to structures, or indices % into tables thereof. We let the implementation consider that. % % One difficulty arises as group descriptors can only be passed % from process P to process Q if both P and Q members of some % group G since the communication presumably must use a context % known to both P and Q. Imagine that P is member of F and Q is not % member of F; that Q is member of H and P is not member of H; that % both P and Q are member of G. Let M be abritrary message data. % % Initially - % P can send F to Q, and Q can receive F from P, in a context of G. % Q can send H to P, and P can receive H from Q, in a context of G. % Thereafter - % P can allocate a context C in F. % P can send C to Q, and Q can receive C in the default context of H. % Q can allocate a context D in H. % Q can send D to P, and P can receive D, in the default context of F. % Thereafter - % P as member of F, and Q as member of H, can communicate using % wildcard pid and tag by use of contexts C and D. % % Okay, this is possible, but it is messy :-) %**** %**** Alternatives, Lyndon? %**** \item Arbitrary process-local info can be cached in group descriptors. (This allows collective operations to run quickly after the first execution in each group, without complicating the calling sequences.) %[Lyndon] % I rather like this idea. %**** Me too. \item There are restrictions that permit groups to be layered on top of pt-pt. \item Pt-pt communications use only TID, context, and tag, and are specified to be fast. \item Group control operations (forming, disbanding, and translating between rank and TID) are NOT specified to be fast. \end{itemize} \section{Detailed Proposal} \begin{itemize} \item Pt-pt uses only ``TID'', ``context'', and ``message tag''. TID's, contexts, and message tags have global scope; they can be copied between processes as integers. TID's and contexts are opaque; their values are managed by MPI. Message tags are controlled entirely by the application. %[Lyndon] % You probably ought to say that context and TID are integer with % opaque values. %**** %**** 1) It is not obvious that TIDs should be restricted to 32 bits. %**** 2) It is not obvious that contexts will be 32 bits (eg, 16 bits). %**** I favor a whole word for a context, despite other limits, %***** just to make things simpler. %**** %**** Internet addresses are going to get augmented from 32 to ??? 
bits %**** is it reasonable to assume that certain MPI implementations might %**** incorporate such internet addresses as TIDs (in future), %**** %**** Opacity is partially violated if we say how big the data type is??? \item Pt-pt communication is specified to be fast in all cases. (E.g., MPI must initialize all processes such that any required translation of the TID is faster than the fastest pt-pt communication call.) %[Lyndon] % How do you imagine this to be acheived, considering that TIDs % are global entities? % I guess that you are thinking a TID is a (processor_number, % process_number) pair of bit fields, a bit like one sees in NX and RK, % and that network interface hardware will route based on the % processor_number. % % In another approach a TID is a process local entity just like the % group descriptor. This satisfies efficiency when the above scheme % is not applicable, for example in a workstation network. %**** %**** where does this get us??? %**** Remember, we have to choose on some things, so we can have something %**** to present in Dallas. Is there an important difference here? %**** %**** TIDs are global entities. Is structure assumed to be global; %**** in a truly opaque system, some TID component would have to be %**** fixed, but the rest could vary structurally... %**** \item A group is represented by a ``group descriptor'', of indefinite size and opaque structure, managed by MPI. The group descriptor can be referenced via a ``process-local group ID'', which is typically just a pointer to the group descriptor. (Group ID's with global scope do not exist.) %[Lyndon] % I think I see, it is the context identifier which has global scope. % Now, this really is just getting on the way toward the proposal % that I really wish I had written for the subcommittee. I will flame % myself! %**** %**** Yes, contexts are global; group identifiers are just pointers %**** typically, to data structures, describing %**** %**** 1) a groups context %**** 2) group members and their ranks (mappings, inverses, %**** cached, hashed, unscalably stored, etc) %**** 3) TID-to-rank map and inverse (see possibilities in 2) %**** 4) A set of fixed global operations, accepted as standard, %**** an accessible in O(1) time. Possibly, each %**** such operation should be a method, so that %**** a parameter block can be passed with it. Zipcode %**** supports the Method type to do this. %**** This is effectively a cache for some parts of item #5 %**** ... %**** 5) An AVL or similar tree of extensible operations. %**** New operations are registerable by the user. These %**** tags are unique within a group, a specify an operation %**** i) pre-defined by MPI (in which case it can be cached %**** in 4 %**** ii) alternative operations (even if they do something %**** standard, that are wanted to be accessed by %**** name) This name is group unique. %**** %**** A mechanism for DO_METHOD_FROM_GROUP(name,....) %**** or GET_METHOD_FROM_GROUP(name,...) %**** and SET_METHOD_IN_GROUP(name,...) are clearly needed. %**** Group descriptors can be copied between arbitrary processes via MPI-provided routines that translate them to and from machine- and process-independent form. A process receiving a group descriptor does not become a member of the group, but does become able to obtain information about the group (e.g., to translate between rank and TID). %[Lyndon] % Also crucially, to obtain and use the default context identifier % of the received group descriptor. %**** Yes, that is included, I believe, in concept. 
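%
% [To make items 1)--3) of the descriptor structure sketched above
%  concrete, one possible layout in C.  All names are hypothetical, and
%  the rank->TID array is only one of the options (a hash table or a
%  query to a service process could sit behind the same two calls).]
%
%    typedef long tid_t;                      /* stand-in for a pt-pt TID */
%    typedef struct cache_entry cache_entry;  /* local (key, value) items */
%
%    typedef struct group_desc {
%        int          default_context;  /* globally unique, set at creation  */
%        int          nmembers;
%        tid_t       *tid_of;           /* rank -> TID, indexed 0..nmembers-1 */
%        cache_entry *cache;            /* cached info, never sent with the   */
%    } group_desc;                      /* descriptor                         */
%
%    /* rank -> TID is then a single array reference ...                 */
%    tid_t rank_to_tid(const group_desc *g, int rank)
%    {
%        return g->tid_of[rank];
%    }
%
%    /* ... while TID -> rank is a search, which is consistent with the  */
%    /* proposal promising only that translation works, not that it is   */
%    /* fast.                                                            */
%    int tid_to_rank(const group_desc *g, tid_t t)
%    {
%        int r;
%        for (r = 0; r < g->nmembers; r++)
%            if (g->tid_of[r] == t)
%                return r;
%        return -1;
%    }
%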
%**** \item Arbitrary information can be cached in a group descriptor. This is, MPI routines are provided that allow any routine to augment a group descriptor with arbitrary information, identified by a key value, that subsequently can be retrieved by any routine having access to the descriptor. Cached information is not visible to other processes and is stripped when a group descriptor is sent to another process. Retrieval of cached information is specified to be fast (significantly cheaper than a pt-pt communication call). %[Lyndon] % I like the general idea, but I'm nervous about two things: % (a) implied associativity of group descriptor cache - this % will potentially be time expensive in implementation. % (b) there is no method proposed for abritration of keys % between independently written modules, so we are % in the same problem regime as just having message tag % and no message context. % However, key's are local, so presumably you would find % it acceptable to add a key registration service? %**** %**** Stripping is extremely controversial aspect, and arbitrary. %**** If the recipient has the methods with the same name, then %**** a new rendezvous could be accomplished at the far end \item Group descriptors contain enough information to asynchronously translate between (group,rank) and TID for any member of the group, by any process holding a descriptor for the group. MPI routines are provided to do these translations. Translation is not specified to be fast. %[Lyndon] % How do you imagine that this will be done? % (a) Perhaps an array of TIDs which is just indexed on rank? Then % where is the case for not using directly rank. % (b) Perhaps a hashing function? Then the case for not using rank % directly is marginal. % (c) Perhaps generating a request to a service process? In which % case you admit here that a service process exists, which must % be propogated throughout the proposal and changes one of your % fundamental objectives. % (d) Something else? Do tell! %**** %**** Yes, these are all options. Fastness seems to be an important %**** issue. If translation is very expensive, none of the "good" %**** features will be used. \item At creation, each group is assigned a globally unique ``default group context'' which is kept in the group descriptor and can be quickly extracted. This extraction is specified to be fast (comparable to a procedure call). \item The default group context can be used for pt-pt communication by any module operating within the group, but only for exactly paired send/receive with exact match on TID, context, and message tag. Call this the ``paired-exact-match constraint''. This constraint allows independent modules to use the same context without coordinating their tag values. (Assumes: tight sequencing constraints for both blocking and non-blocking comms in pt-pt MPI.) %[Lyndon] % I understand what you want the paired-exact-match for. This % might appear as pragmatics and advice to module writers. I % think you should be firmer about sequencing constraints % for point-to-point in MPI that this requires, to be % sure that the constraint is not too large. %**** Again, I think this should be eliminated, and all references %**** to this idea should be expunged. It denies the context's %**** ability to manage messages. \item A modules that does not meet the paired-exact-match constraint must use a different globally unique context value. An MPI routine will exist to generate such a value and broadcast it to the group. 
The cost of this operation is specified to be comparable to that of an efficient broadcast in the group, i.e. log(G) pt-pt startup costs for a group of G processes. [See Implementation note 1.] %[Lyndon] % Perhaps I am missing something here. Please help. This % is what my mind is thinking. % The synchronisation requirement means that all context % allocations in a group G must be performed in an identical % order by all members of G. Then the sequence number of the % allocation is unique among all allocations within G. % Therefore the duplet % (default context of G, allocation sequence number) % is a globally unique identification of the allocated % context. The sequence number can be replaced by any one-to-one % map of the sequence number, of course. So, according to your % synchronisation constraint, context generation can be ``free''. %**** I agree that context allocation has to be done in sequence. %**** That is why I am in favor of providing calls that allow %**** groups to get numerous contexts at creation, and then %****cooperatively, but potentially without further communication %**** divide them(as they build subgroups, for instance). %**** %**** I see these as services to be used in building virtual topology %**** features, which will then be more widely used by users of MPI. %**** \item Context values are a plentiful but exhaustible resource. If further use is anticipated, a context may be cached; otherwise it should be returned. (Note: the number of available contexts should be several times the number of groups that can be formed.) \item When a group disbands, its group descriptor is closed and any cached information is released. (The MPI group-disbanding routine does this by calling destructor routines that the application provided when the information was cached.) Copies of the group descriptor must be explicitly released. It is an error to make any reference to a descriptor of a disbanded group, except to release that descriptor. \item All processes that are initially allocated belong to the INITIAL group. (In a static process model, this means all processes.) An MPI routine exists for obtaining a reference to the INITIAL group descriptor (i.e., a local group ID for INITIAL). \item Groups are formed and disbanded by synchronous calls on all participating processes. In the case of overlapping groups, all processes must form and disband groups in the same order. [See Implementation Note 2.] \item All group formation calls are treated as a partitioning of some group, INITIAL if none other. The calls include a reference to the descriptor of the group being partitioned, the TID of the new group's ``root process'', the number of processes in the group, and an integer ``group tag'' provided by the application. The group tag must discriminate between overlapping groups that may be formed concurrently, say by multiple threads within a process. [See Implementation Note 2.] %[Lyndon] % The group partition you propose is essentially no different to % the partition by key which has already been discussed, except % that the key can encapsulate both (root process, group tag). % So perhaps partition by key was better in the first place? %**** %**** Do we get anything by having the root process? %**** \item Collective communication routines are called by all members of a group in the same order. \item Blocking collective communication routines are passed only a reference to the group descriptor. 
To communicate, they can either use the default group context or internally allocate a new context, with or without cacheing. \item Non-blocking collective communication routines are passed a reference to the group descriptor, plus a ``data tag'' to discriminate between concurrent operations in the same group. (Question: is sequencing enough to do this?) An implementation is allowed but not required to impose synchronization of all cooperating processes upon entry to a non-blocking collective communication routine. %[Lyndon] % If the requirement that collective operations within a group G are % done in the identical order by all members of G even when such % operations are non-blocking, then the sequence number of the operation % is unique and sufficient for disambiguation. % % The permission to force synchronisation - i.e., blocking - in the % implementation of a non-blocking routine seems to make the routine % less than useful. I can see whay you are asking for this, in order % that you can generate a context for the routine call. In fact Rik % I don't think you need the constraint, as I pointed out cheaper % context generation exists above, unless of course I am missing % something. %**** %**** I think that non-blocking collcomm is moribund in MPI1 or %**** else MPI1 is moribund. :-) %**** \end{itemize} \section{Advantages \& Disadvantages} \subsection*{Advantages} \begin{itemize} \item This proposal is implementable with no servers and can be layered easily on existing systems. %[Lyndon] % I am not of the opinion that the absence of services is such a big % deal. I do think that programs which can conveniently not use % services should not be forced to, but programs which cannot % conveniently not use services should be allowed to. %**** Too many negatives here for me to parse :-) \item Cacheing auxiliary information in group descriptors allows collective operations to run at maximum speed after the first execution in each group. (Assumes that sufficient context values are available to permit cacheing.) %[Lyndon] % If you agree that context allocation is ``free'', then you can % delete the bracketed qualifier. %**** %**** Context allocation need not be free provided it can be made cheap, %**** or cheap enough. %**** %**** If one knows one will need several, then a single call could %**** provide such contexts, amortizing overhead. This is likely when %**** bulding grids (ie, virtual topologies) in Zipcode, so it is %**** true in existing practice. %**** %**** One should recognize the need for layering virtual top. calls %**** on top of these calls, then these calls may appear painful, %**** but perhaps they would be less used. Some users will use the %**** provided virtual topology calls, others will prefer their own. %**** Both will have equal power (see also,separate note on layerability). %**** %**** If getting N contexts is a send-and-receive, plus a reactive server, %**** then this is reasonably light weight,provided that hundreds of %**** messages, or global operations ensue thereafter. We can know in %**** advance how heavy weight the context server will be. %**** %**** if an implemention can use some locations of remote memory, with %**** fetch and add, or locks, to achieve contexts, then this is even %**** cheaper, in principle. 
%**** %**** Despite Jim's earlier insistence that context numbers be kept to %**** 256 or so, I think that this number should be much larger, so that %**** much less efort goes into returning contexts, and so on, except %**** occasionally, by processes. Otherwise, a new kind of overhead, %**** get-rid-of-context-because-I-am-out ensues, or programs block %**** until contexts become available, offering the possibility of %**** deadlocks. %**** \item Speed of cached collective operations does not depend on speed of group operations such as formation or translation between rank and TID. %[Lyndon] % True, but the cache is going to get big as user's are going to store % arrays of TIDs in it. %**** %**** Unscalability (of a limited form) should be permitted/selectable %**** by user, to use as much per-node memory as the user wants, to reduce %**** communication. %**** \item Requires explicit translation between (group,rank) and TID, which makes pt-pt performance predictable no matter how the functionality of groups gets extended. %[Lyndon] % This is only true because you have asserted that implementations % must have the property that: % `` Pt-pt communication is specified to be fast in all cases. % (E.g., MPI must initialize all processes such that any % required translation of the TID is faster than the fastest % pt-pt communication call.)'' % So the advantage is not that which you have quoted, it is that % you have made this assertion. %**** I see,but what he means here is that there is no unpredictable %**** translation cost because we do not write (group,rank) in pt2pt %**** calls. So, there is some validity to the statement. \item Communication both within and between groups seems conceptually straightforward. %[Lyndon] % This is a conjecture. I believe that conjecture to be false. % I especially believe this in the case of communication between % groups. The methods which are available for ``hooking up'' % allows are at least perverse. I guess that the user could make % use of a service process, to make life easier in this hooking up, % so whay not provide one. %**** Yes, that is why I have one in Zipcode. I wish Zipcode were %**** on netlib today, so you could try it. Well, we are writingthe %**** manual, and working at it as fast as we can. % % A further point. It seems to me that ``seems'' means that it seems % to you. This is not the point. It is how it seems to a lesser % wizard than yourself which is of importance here. I conjecture % that the reverse statment is true when the person doing the seeming % is changed to a lesser wizard. %**** %**** I lost something here, but I agree with the sense. The word %**** seems is subjective,and should disappear from our discussions, %**** as much as seems prudent, anyway :-) \end{itemize} \subsection*{Disadvantages} \begin{itemize} \item Requires explicit translation between (group,rank) and TID, which may be considered awkward. %[Lyndon] % It is true that (group,rank) must be translated to TID. I can % assure you that this is considered both awkward and redundant. %**** %**** Yes,awkward, because it is nice to escape the TID realm and %**** work within the (albeit simple) abstraction of group,rank. %**** When layering virtual topologies on this, it would be so nice %**** to write them to a group,rank syntax, not enforcing TID mappings %**** everywhere. \item Communication between different groups may be considered awkward. %[Lyndon] % You bet! Please see below. %**** Indeed. \item No free-for-all group formation. 
A process must know something about who its collaborators are going to be (minimally, the root of the group). %[Lyndon] % Please see comments above on group creation. \item Requires explicit coherency of group descriptors, which is not convenient -- application must keep track of which processes have copies and what to do with them. %[Lyndon] % I think all of the proposals will have this problem. %**** Yes, and I think that loosely synchronous operations can maintain %**** coherency, in practice. That is, no operations that modify the %**** group descriptors (other than cached lookup info) are permitted, %**** without loose synchronization. %**** This is nasty in that is would prohibit sending descriptors to %**** processes not part of the group, so it is a clear trade-off. %**** Perhaps such send-to-non-group-member operations could stipulate %**** that this group information is somehow ephemeral, and that they %**** need to join a new group to keep useful information over time??? %**** \item Static group model. Processes can join and leave collaborations only with the cooperation of the other group members in forming larger or smaller groups. \item No direct support for identifying the group from which a message came. The sender cannot embed its group ID in its message, since group ID's are not globally meaningful. One method to address this problem is to have the sender embed its group context as data in the message, then have the receiver use that value to select among group descriptors that it had already been sent. %[Lyndon] % Yup, the user can ``do it manually with a search''. If you want % to invoke this argument then I can dispose of almost everything % in MPI in a period of a few minutes - in fact Steven Zenith will % do it faster - so I refute the validity of the argument and claim % that the MPI interfce should transmit said information. %**** %**** Yes, that is exactly what Zipcode was written to avoid. The %**** user wants help managing things like this!!!! %**** %**** The search, if any, must be MPI-supported, and as efficient as %**** possible (eg, AVL trees, hash, partial hash with exceptions). %**** % % Further, the receiver is likely to want to be able to ask which % rank in the sender group the sender was. Oh dear, well I suppose % you think that's okay because the sender can put its rank into % the message. This is just being inconvenient to the user who % wants to send an array of something (double complex?) and has % to pack a rank in by copying or sending a pre-message or the % buffer descriptor kind of thing. %**** %**** This is why I remain a strong advocate of (group,rank) %**** addresssing in pt2pt. %**** \end{itemize} \section{Comments \& Alternatives} \begin{itemize} \item The performance of group control functions is explicitly unspecified in order to provide performance insurance. The intent is to encourage program designs whose performance won't be killed if the functionality of groups is expanded so as to be unscalable or require expensive services. %[Lyndon] % I don't think that the intent expressed in the second sentence % is satisfied. For example - group control is allowed to become the % dominant feature of application time complexity. %**** %**** I addressed this in my Step-1 remarks. Please see that. %**** \item Adding global group ID's would seriously interfere with layering this proposal on pt-pt. The problem is not generating a unique ID, but using it. 
If just holding a global group ID were to allow a process to asynchronously ask questions about the group (like translating between rank and TID), then a group information server of some sort would be required, and MPI pt-pt does not provide enough capability to construct such a beast. %[Lyndon] % It is not the global uniqueness of group identifiers which creates % the problem. There are globally unique labels of groups in your % proposal anyway - the value of the default context identifier. % The problem is that of allowing query of group information when % that information cannot be recorded in the local process/processor % memory. % % You claim that point-to-point does not have enough capability to % construct an information server. Firstly I should ask you whether % this is an artefact of the manner in which you have defined the % point-to-point communication. Secondly I assert that your claim % is false. I shall append a description of server implementation % to the foot of this message. %**** %**** Thank you. These points are both well taken (ie these two paragraphs) \item The restriction that only paired-exact-match modules can run in the default group context could be relaxed to something like ``a module that violates the paired-exact-match constraint can run in the default group context if and only if it is the only module to run in that context''. This seems too error-prone to be worth the trouble, since even the standard collective ops might be excluded. (It would also conflict with Implementation Note 1.] \item Group formation can be faster when more information is provided than just the group tag and root process. E.g. if all members of the group can identify all other members of a P-member group, in rank order, then O(log P) time is sufficient; if only the root is known, O(P) time may be required. This aspect should be considered by the groups subcommittee in evaluating scalability. %[Lyndon] % Yes, partition does appear to be O(P) whereas definition by ordererd % list appears to be O(log(P)). %**** Also,see what I wrote in my Step-1 comments. %**** I believe O(log(P)) is still possible. %**** \end{itemize} \section{Implementation Notes} \begin{enumerate} \item To generate and broadcast a new context value: Generate the context value without communication by concatenating a locally maintained unique value with the process number in some 0..P-1 global ordering scheme. This assumes that context values can be significantly longer than log(P) bits for an application involving P processes. If not, then a server may be required, in which case by specification it has to be fast (comparable to a pt-pt call). There is no need to generate a new context for the broadcast since it can be coded to use the default group context by meeting the paired-exact-match constraint.) %[Lyndon] % Please see notes above on the subject of context generation. %**** Please see my Step-1 comments. \item The following is intended only as a sanity check on the claim that this proposal can be layered on MPI. Improvements to improve efficiency or to use fewer contexts will be greatly appreciated. In addition to a default group context, each group descriptor contains a ``group partitioning context'' and a ``group disbanding context'' that are obtained and broadcast at the time the group is created. In the case of the INITIAL group, this is done using paired-exact-match code and any context, immediately after initialization of the MPI pt-pt level. 
(At that time, no application code will have had a chance to issue wildcard receives to mess things up.) Group partitioning is accomplished using pt-pt messages in the partitioning context of the current group (i.e., the one being partitioned), with message tag equal to the group tag provided by the application. In the worst (least scalable) case, the root of the new group must wildcard the source TID. (This violates the paired-exact-match constraint, which is why group formation must happen in a special context.) Group disbanding is done with pt-pt messages in the group disbanding context. This keeps group control from being messed up no matter how the application uses wildcards. \end{enumerate} \section{Examples} {\bf (To be provided)} % % END "Proposal V" %======================================================================% \end{document} ---------------------------------------------------------------------- Writing a server in the point-to-point layer of MPI in four easy steps ---------------------------------------------------------------------- 1) Partition the INITIAL group into two groups. A singleton group, SERVER, and a group CLIENT which contains all of the other processes. 2) The single process in SERVER group records its TID. 3) The processes in INITIAL group allocate a context SERVICE which they remember either in the group cache or static data or something. 4) Use a broadcast in INITIAL group with ``sender'' as the one process which is also in SERVER group, and the ``receivers'' as the (many) processes which are also in CLIENT group, in the SERVICE context, in order to disseminate the TID of the server process. [Fanfare] a server process is in place as is a dedicated context for the purposes of messages required to implement the service. [Observation] the mpi point-to-point initialisation can do this automatically. %**** Zipcode's postmaster general works in this way, more or less. %**** - Tony ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Sun Mar 21 14:37:39 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00298; Sun, 21 Mar 93 14:37:39 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24385; Sun, 21 Mar 93 14:37:22 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 14:37:21 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24377; Sun, 21 Mar 93 14:37:19 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03706; Sun, 21 Mar 93 13:31:14 CST Date: Sun, 21 Mar 93 13:31:14 CST From: Tony Skjellum Message-Id: <9303211931.AA03706@Aurora.CS.MsState.Edu> To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu Subject: my comments on Lyndon's Proposal I %**** %**** Below are my comments on Lyndon's Proposal I. In the next paragraph %**** I note that we are actually converging to less than N proposals, though %**** I have not seen Lyndon's new II proposal yet. Does it exist now? 
%****
%**** To achieve a reasonable presentation at Dallas, we can have multiple
%**** proposals still on the table (I think this is a fair, well-thought-out
%**** approach, but if we can condense some, let's do it).
%****
%****
%**** After reading V, making my comments, and adding my comments
%**** on top of Lyndon's comments, I am convinced we can
%**** advance Proposal V into something that is acceptable in the
%**** III/IV mold, without a further III/IV proposal.  Rik's
%**** dropping of the static context concept has simplified our
%**** group's efforts considerably, and I cannot disambiguate my
%**** III/IV proposal from Proposal V, given the Lyndon and Tony
%**** provisos and suggested improvements.  This is not a wimp-out
%**** on my part.  I do not see the benefit of advancing something that
%**** will look 90% like Proposal V at this point, and 97% like it
%**** if the Lyndon/Tony comments obtain in it (which they would if
%**** I wrote it now).
%**** I would prefer that we hone Proposal V.  If Rik wants to keep
%**** his style, then I propose that Proposal III/IV become exactly
%**** what I just said, a reworking of V + comments.
%**** However, I think this will cause an unnecessary delay and
%**** digression just to settle details.  Instead, we might pull
%**** such choices into Proposal V at this time (server vs. no server),
%**** to make what is common in our "approaches" more obvious.
%**** - Tony
%****
%**** (PS, Lyndon: Rik/Tony/Lyndon are authors of the whole paper, because
%**** we are all working on this, and because one of us (i.e., me) will
%**** make the final document cohesive, creating a unified style, format and
%**** set of meanings.  This is the nature of collaboration.  I do
%**** not propose to include our other co-conspirators on such a document's
%**** list of authors, as there has been minimal input from them.  I
%**** think that Rusty/Bill and Rik/Tony/Lyndon are operating equivalently.)
\documentstyle{report}
\begin{document}
\title{``Proposal'' I for MPI Communication Context Subcommittee}
\author{Lyndon~J~Clarke}
\date{March 1993}
\maketitle
%======================================================================%
% BEGIN "Proposal I"
% Lyndon J. Clarke
% March 1993
%
\chapter{Proposal I}
\section{Introduction}
This chapter proposes that ordered groups are used to provide communication
contexts, and that communication contexts do not appear independently of
process groupings.
The proposal reflects the observation that an instance of a module in a
parallel program typically operates within a group of processes, and as such
any communication contexts associated with an instance of a module also bind
the semantics of process groups.
This chapter makes a basic proposal which provides intra-group communication
but does not provide inter-group communication, and an extended proposal which
also provides inter-group communication.
The proposals say nothing about use of message tags.
It is assumed that these will be a bit string, expressed as an integer in the
host language (i.e. ANSI C or Fortran~77, in the first instance).
These proposals should be viewed as a collection of recommendations to other
subcommittees within MPI, primarily the collective communications
subcommittee, the point-to-point communications subcommittee and the process
topologies subcommittee.
Concrete syntax is given in the style of the C language for purposes of discussion only, and should be viewed as an example of possible syntax as detailed syntax of described operations is the responsibility of the language bindings subcommittee %----------------------------------------------------------------------% % BEGIN "Basic Proposal" % \subsection{Basic Proposal} The main features of the basic proposal are: \begin{itemize} \item Process identifiers and group identifiers have opaque values expressed as an integer type in the base language. Semantics for passing process identifiers in a message are defined, whereas semantics for passing group identifiers in a message are not defined. \item Group creation and destruction are concerted, synchronous, actions on the part of the membership of the group concerned. %**** %**** loosely synchronous %**** \item Groups are {\it static\/} in that no provision is made for modification of the membership of a group over the lifetime of the group. Dynamic group operations such as {\it resize\/} can be effected by destruction of the existing group and creation of a new resized group. %**** Why not by making an off-spring, instead of destructively!!! \item Group creation is effected by four means: explicit definition of the membership of a group; partition of an existing group into one or more distinct subgroups; identical duplication of an existing group; topological duplication of an existing group. \item Point-to-point communication provides for intra-group communication, and makes no provision for inter-group communication. \item Collective communication provides for operations which are concerted actions on the part of members of one group, and makes no provision for operations which are concerted actions on the part of members of multiple groups. \end{itemize} \subsubsection{Process and Group Identifiers} A {\it process identifier\/} is an opaque reference to an object which is a single process. A process identifier is expressed as an integer in the host language and has a value defined by the system. The only meaningful host language operations on process identifiers are assignment ({\tt =}), equality ({\tt ==}) and inequality ({\tt !=}). Each process has exactly one process identifier. MPI should provide a procedure which allows the user to determine the process identifier of the calling process. For example, {\tt mypid = mpi\_mypid()}. The identifier of the {\it null process\/}, defined to be a process which cannot exist, is defined as a named constant and shall be referred to as {\tt MPI\_PID\_NULL} in this proposal. With the single exception of the identifier of the null process, the value of a process identifier is {\it process local\/}, meaning that if two processes A and B know the identifier of a process P then the relationship between the values of the identifiers known to A and B is undefined. The user can pass the value of a process identifier in a message, since it is an integer type in the host language, however the recipient of the value cannot make defined use of that value in the MPI operations described below --- the received process identifier is {\it invalid\/}. MPI will provide a mechanism which allows a process identifier to be passed in a message in such a manner that the received identifier is valid. 
It is proposed that this shall be integrated with the buffer descriptor mechanism (proposed by Bill Gropp and Rusty Lusk), by addition of a procedure which places a logical reference to a process identifier into the buffer descriptor, e.g. {\tt mpi\_bd\_pid(bd, \&pid)}. Transmission of a process identifier using this mechanism returns to the recipient a process identifier which is valid for use in the MPI operations described below. This transmission may side effect state in the implementation of MPI at the recipient, and in particular may reserve state at the recipient. MPI will provide a procedure which invalidates a process identifier, allowing the implementation of MPI to recover reserved state, e.g. {\tt mpi\_pid\_invalidate(pid)}. This is an error if {\tt pid} is {\tt MPI\_PID\_NULL}, or if {\tt pid} is the identifier of the calling process. It is further proposed that MPI provide a process identifier registry service. This service allows any process to register its own process identifer by name, and deregister its process identifier. The service allows any process to determine whether a name has been registered without blocking the calling process, and to map that name into a valid process identifier with the possibility of blocking the calling process. Use of this service is not mandated, and components of programs which do not require this service are not expected to make use thereof. %**** I don't get all of this. Why? %**** A {\it group identifier\/} is an opaque reference to an object which is a group of processes. A group identifier is expressed as an integer in the host language and has a value defined by the system. The only meaningful host language operations on group identifiers are assignment ({\tt =}), equality ({\tt ==}) and inequality ({\tt !=}). The identifier of the {\it null group\/}, defined to be a group which cannot exist, is defined as a named constant and shall be referred to as {\tt MPI\_GID\_NULL} in this proposal. With the single exception of the identifier of the null group, the value of a group identifier is {\it process local\/}, meaning that if two processes A and B know the identifier of a group G then the relationship between the values of the identifiers known to A and B is undefined. The user can pass the value of a group identifier in a message, since it is an integer type in the host language, however the recipient of the value cannot make defined use of that value in the MPI operations described below --- the identifier is {\tt invalid\/}. MPI will not provide a mechanism which allows a group identifier to be passed in a message in such a manner that the received identifier is valid. %**** Then why allow it to be passed? %**** The canonical representation of a group is an array of distinct process identifiers, although it may be possible to use hashing functions with lower space complexity and marginally higher time complexity. A group is {\it static\/}, in that it's membership may not change over the lifetime of the group. There is a well defined {\it size\/} of a group, and MPI will provide a procedure which allows the user to determine the size of a group. For example, {\tt size = mpi\_grp\_size(gid)}. This procedure is an error if {\tt gid} is {\tt MPI\_GID\_NULL}. There is a well defined {\it rank\/} of a process within the group, i.e. the position in the representational array at which the process identifier is held. MPI will provide a procedure which allows the user to determine the rank of the calling process within a group. 
For example, {\tt mpi\_grp\_myrank(gid)} returns the rank of the calling process within the group referred to by {\tt gid}. This procedure is an error if {\tt gid} is {\tt MPI\_GID\_NULL}. MPI should provide a procedure to determine the member rank of a process within a group given the group identifier and a valid process identifier, e.g. {\tt rank = mpi\_grp\_rank(gid,pid)}. This procedure may ``fail'' if the process identified by {\tt pid} is not a member of the group identifier by {\tt gid}, and can be used to determine whether a given process is a member of a given group. This procedure is an error if {\tt gid} is {\tt MPI\_GID\_NULL}. %**** %**** Why an error, why not just fail? %**** MPI should provide a procedure to determine the process identifier of a group member given the group identifier and member rank, e.g. {\tt pid = mpi\_pid(gid,rank)}. This procedure should validate the returned process identifier. This procedure is an error if {\tt gid} is {\tt MPI\_GID\_NULL}. \subsubsection{Group Creation and Destruction} MPI will provide four methods for creation of groups, and a procedure to destroy an existing group. A group may be created by explicit definition of the pids of the members of the group. For example, {\tt gid = mpi\_grp\_definition(npids,pids)} where {\tt gid} is the identifier of the newly created group, {\tt npids} is the number of processes in the new group, and {\tt pids} is an array containing the (valid) process identifiers of the processes in the group. This procedure must be called my all members of the group, and does not return until all members have made the call. A group may be created by identical duplication of an existing group. For example, {\tt gidb = mpi\_grp\_duplication(gida)} where {\tt gidb} is the group identifier of the newly created group and {\tt gida} is the identifier of an existing group. The created group inherits all properties of the source group, including any topological properties. This operation has the same synchronisation properties as creation of group by definition. %**** It is not obvious to me that we want to enforce topology at this %**** juncture. However, we could register topology information in %**** the extensible structure strategy of Proposal V. %**** A group may be created by topological duplication of an existing group. Details of topological groups are under consideration within the process topologies subcommittee and will not be further discussed here. A group created by topological duplication inherits the size of the source group, and also inherits the membership list of the source group although the list may be ordered differently in the created group. These operations have the same synchronisation properties as creation of group by identical duplication. Where groups have additional topological attributes MPI should also provide procedures which allow the user to determine such attributes. One or more groups may be created by partition of an existing group into distinct subgroups by key. For example, {\tt gidb = mpi\_grp\_partition(gida, key)} where {\tt gidb} is the group identifier of the newly created group corresponding to the given {\tt key} and {\tt gida} is the group identifier of the group which is being partitioned according to the {\tt key} values supplied. 
MPI should define a named constant which is a {\it null\/} {\tt key} value, for example {\tt MPI\_KEY\_NULL}, in order that members of the parent group can choose not to become members of any child group, and in which case the procedure should return {\tt MPI\_GID\_NULL}. Groups created by partition share the same ordering of process member ranks as the parent group. This operation synchronises the members of the parent group, and therefore implicitly synchronises the members of the created group(s). A group may be destroyed, e.g. {\tt mpi\_grp\_deletion(gid)}, which destroys the group identified by {\tt gid}. This operation synchronises the group members, and invalidates the group identifier. \subsubsection{Point-to-point communication} There are arguments either way for point-to-point routines which accept a {\tt (gid, rank)} pair, and for routines which accept a {\tt pid}. This proposal supports and seperates both approaches to addressing and selection within the same syntax, avoiding the introduction of sets of procedures to handle each case. In order to provide expressive power to intra-group communication, point-to-point communication should accept a {\tt (gid, rank)} pair, where {\tt gid} is a valid group identifier and {\tt rank} is a member rank within the group identified by {\tt gid}. In message addressing the sender specifies the {\tt rank} and {\tt gid} of the receiver. In message selection the receiver specifies the {\tt rank} and {\tt gid} of the receiver. The sender and receiver must both specify the same {\tt gid} in order for a match to occur. The {\tt gid} field is not allowed to take a {\it wildcard\/} value in message selection. The {\tt rank} field is allowed to take a {\it wildcard\/} value in message selection, e.g. {\tt MPI\_RANK\_WILD}, which will match with any rank. One is encouraged to visualise a seperate message queue or port at each process for each group of which that process is a member (and indeed this may be an advantageous implementation feature). %**** But not a required feature of an implementation! In order to accommodate process identifier based addressing and selection into the same syntax, this proposal advocates that point-to-point communication should also accept the null group ({\tt gid = MPI\_GID\_NULL}), in which case the {\tt rank} is interpreted as a valid {\tt pid}. The {\tt pid} is allowed to take a {\it wildcard\/} value in message selection, e.g. {\tt MPI\_PID\_WILD}. The point-to-point section should provide a procedure which allows a recipient to recover the process identifier of the sender. The discussion of matching above extends to this case, and one is also encouraged to visualise a separate message queue or port for messages referred to by {\tt MPI\_GID\_NULL}. In the case of process identified wildcard receive, the process identifier recovered by the receiver may be unknown to the receiver. It is proposed that an implicit validation of the process identifier must be performed by the MPI implementation, in order that the recipient is returned a valid process identifier, else the returned identifier is of little or no use to the recipient. %**** %**** I do not understand the usefulness or formal need for all this %**** validation and invalidation of process identifiers. Why, where %**** did it come from, what does it get us? How can this be related %**** to anything I have seen before? %**** \subsubsection{Collective communication} Collective communication operations within MPI should be restricted to the scope of a single group. 
It will be sufficient for these procedures to accept a group identifier, and
possibly a message tag in order to distinguish multiple outstanding operations
within the same group.
These procedures must not accept {\tt MPI\_GID\_NULL}.
%**** Why not, but do nothing...
It is not possible to determine whether this strategy allows all of the MPI
collective communication routines to be written in terms of MPI
point-to-point routines without loss of generality, since the set of
collective communication routines is not yet determined.
This proposal takes the view that it is the responsibility of the collective
communications subcommittee to determine whether such a goal is desirable,
and if so to describe procedures which comply with this goal.
%**** Check, but it is desirable that they be so writable, so we will
%**** have to watch.
%
% END "Basic Proposal"
%----------------------------------------------------------------------%
%----------------------------------------------------------------------%
% BEGIN "Extended Proposal"
%
\subsection{Extended Proposal}
The main additional features of the extended proposal are:
\begin{itemize}
\item Semantics for passing a group identifier in a message are defined, and
a group registry is introduced.
\item Point-to-point communication is extended to allow inter-group
communication without syntactic intrusion on intra-group communication.
\item Collective communication operations involving a concerted action on the
part of members of two or more groups are suggested.
\end{itemize}
\subsubsection{Process and Group Identifiers}
The extended proposal says nothing more about process identifiers than the
basic proposal.
The extended proposal defines a mechanism by which a group identifier may be
passed in a message in such a manner that the received identifier is valid,
and the mechanism should be analogous to that which allows process
identifiers to be transmitted.
It is proposed that this shall be integrated with the buffer descriptor
mechanism (proposed by Bill Gropp and Rusty Lusk), by addition of a procedure
which places a logical reference to a group identifier into the buffer
descriptor, e.g. {\tt mpi\_bd\_gid(bd, \&gid)}.
Transmission of a group identifier using this mechanism returns to the
recipient a group identifier which is valid for use in the MPI operations
described above and below, and as further qualified below.
This transmission may side-effect state in the implementation of MPI at the
recipient, and in particular may reserve state at the recipient.
MPI will provide a procedure which invalidates a group identifier, allowing
the implementation of MPI to recover reserved state, e.g.
{\tt mpi\_invalidate\_gid(gid)}.
This is an error if {\tt gid} is {\tt MPI\_GID\_NULL}, or if {\tt gid} is the
identifier of a group of which the calling process is a member.
%**** I understand the idea here, but not all the details. Can this
%**** be justified/exemplified/simplified?
%****
It is further proposed that MPI provide a group identifier registry service.
This service allows any group to register its own group identifier by name,
and to deregister its group identifier.
The service allows any group to determine whether a name has been registered
without blocking the calling group, and to map that name into a validated
group identifier with the possibility of blocking the calling group.
The group registry procedures should synchronise the calling group,
permitting more efficient implementations than asynchronous operations.
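%[Editorial sketch]
% A minimal usage sketch of the group registry described above, in the
% C-flavoured style of this chapter.  The names mpi_grp_register and
% mpi_grp_lookup, and their argument lists, are assumptions for illustration
% only; concrete syntax is left to the language bindings subcommittee.
% mpi_invalidate_gid is the invalidation procedure suggested earlier in this
% section.
\begin{verbatim}
int solver_gid;   /* identifier of a group this process belongs to          */
int remote_gid;   /* identifier of some other group, obtained by name       */

/* Collectively publish the group identifier under a well-known name.       */
/* (Hypothetical call, made by all members of the registering group.)       */
mpi_grp_register(solver_gid, "solver");

/* Any process (or group) may later map the name to a validated identifier; */
/* the lookup may block until the name has been registered.                 */
mpi_grp_lookup("solver", &remote_gid);

/* Release any state reserved for the validated identifier when done.       */
mpi_invalidate_gid(remote_gid);
\end{verbatim}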
The definition of a valid group identifier is extended to include all groups of which the process is a member and all those which are validated by one of the above mechanisms. The small suite of procedures which map between process identifiers, group identifiers and member ranks are defined to work with valid group and process identifiers. \subsubsection{Point-to-point communication} The point-to-point communication syntax and semantics described in the basic proposal do not extend directly to the case of inter-group communication. The reason is quite simple --- the sender and receiver do not supply the same group since the group of the sender and the group of the receiver are different. The basic approach is to express point-to-point communication in terms of a triplet {\tt (localGroup, remoteGroup, remoteRank)}. In this notation the sender specifies the sender group identifier, then the receiver group identifier and finally the receiver rank. The receiver specifies the receiver group identifier, then the sender group identifier, and finally the sender rank. The {\tt localGroup} field may not take a wildcard value, corresponding directly to the rule that the group in the basic proposal may not take a wildcard value. The {\tt remoteGroup} field may take a wildcard value, e.g. {\tt MPI\_GID\_WILD}, which matches with any group. The {\tt remoteRank} may also take a wildcard value, as in the basic proposal. The point-to-point section should provide procedures which allow a message recipient to recover the group and rank of the sender. In the case of {\tt remoteGroup} wildcard receive, the group identifier recovered by the receiver may be unknown to the receiver. It is proposed that an implicit validation of the group identifier must be performed by the MPI implementation, in order that the recipient is returned a valid group identifier, else the returned identifier is of little or no use to the recipient. In order to accomodate inter-group addressing and selection into the framework of the basic proposal, the extended proposal suggests a careful redefinition of the {\tt gid} discussed in the basic proposal. With careful presentation this redefinition need not intrude conceptually on the basic proposal, although I shall give a less careful description here, suggestive of two different flavours of presentation. In the basic proposal, the {\tt gid} was formally a reference to a group. In the extended proposal the {\tt gid} is formally composed of references to two groups, and can be thought of as a shorthand notation for {\tt (localGroup, remoteGroup)}. %**** I dislike this intensely. There should be a group-pair data %**** structure. Group is never a pair of sub-groups. It is a %**** bad idea. This is all to get around changing syntax, no? The identifier of the null group, a {\it null\/} identifier, is a valid identifier which is formally composed of a pair of references to the null group, and may be used in the fashion described in the basic proposal. The group creation functions provide a symmetric, or {\it unary\/}, identifier formally composed of two references to the same group. This group is {\it local\/} since the process in question is a member of the group. An identifier which composes a pair of references to a local group is logically identical to a group identifier as implied in the basic proposal, and may be used for point-to-point and collective communications in an identical fashion. 
The group identifier transmission and registry lookup procedures also provide
a symmetric, or {\it unary\/}, identifier which again is composed of a pair
of references to the same group.
This group is {\it remote\/} when the process is not a member of the group,
or is local when the process is a member of the group.
An identifier which composes a pair of references to a remote group is
logically identical to the identifier of a remote group as implied above, and
is not valid for either point-to-point or collective communications.
Inter-group communication is effected by use of asymmetric, or {\it binary\/},
identifiers, composed of references to two different groups, the first of
which is local and the second of which is either local or remote.
Such identifiers are constructed by the use of a ``glob'' operation, which
returns an identifier referencing the first operand as the local field and
the second operand as the remote field.
The glob is defined to be an error unless both operands are symmetric (unary)
and the first operand refers to a local group.
The identifier returned by a glob is valid for point-to-point communication
and invalid for collective communication.
The point-to-point communication section should provide a procedure to
determine the group identifier object of a completed receive.
Procedures which ``unglob'' an asymmetric (binary) group identifier should be
provided, returning the local and remote fields as valid identifiers.
This can be viewed as a continuation of a generalised approach to
point-to-point communication begun in the basic proposal, in which operations
are specified in terms of object identifiers and instance identifiers where
objects are composed of multiple instances.
The nature of the instance identified, and the semantics of the operation,
are dependent on the nature of the object identified, exploiting some
genericity.
\begin{center}
\begin{tabular}{lll}
object identifier       & instance identifier      & action \\
\hline
null group identifier   & basic process identifier & basic communication \\
unary group identifier  & local process rank       & intra-group communication \\
binary group identifier & remote process rank      & inter-group communication \\
\end{tabular}
\end{center}
\subsubsection{Collective communication}
There are a number of collective communication operations which logically
extend over two or more groups.
Some examples of two-group collective communications which are common in the
simple host-node programming model are:
\begin{itemize}
\item {\it Broadcast\/} is an implicitly asymmetric operation.
There is exactly one sender and there are many receivers, each of which
receives an identical message.
The sender is a member of a singleton group G and the receivers are members
of a group H.
\item {\it Scatter\/} is an implicitly asymmetric operation.
There is exactly one sender and there are many receivers, each of which
receives a different message.
The sender is a member of a singleton group G and the receivers are members
of a group H.
\item There is a variant of {\it gather\/}, which returns the gathered data
to a single process, and which is an implicitly asymmetric operation.
There is exactly one receiver and there are many senders.
The receiver is a member of a singleton group G and the senders are members
of a group H.
\item There is a variant of {\it reduce\/}, which returns the reduced data to
a single process, and which is an implicitly asymmetric operation.
There is exactly one receiver and there are many senders.
The receiver is a member of a singleton group G and the senders are members
of a group H.
\end{itemize}
Other patterns arise in ``process'' graphs where each ``process'' is allowed
to be parallel.
For example:
\begin{itemize}
\item {\it all-to-all\/} communication in which the senders and receivers are
distinct processes.
The senders are members of a group G and the receivers are members of a group
H.
The two groups G and H need not be of the same size.
\end{itemize}
The asymmetric (binary) group identifier objects described for point-to-point
communications in this extended proposal are immediately suitable for each of
the two-group collective communication operations described.
%
% END "Extended Proposal"
%----------------------------------------------------------------------%
%**** So, I gather that a set of groups is passable to a collcomm,
%**** and a pair is passable to a pt2pt.  That is neat, but it should
%**** still be a separate data structure, with separate calls from
%**** the intra-group version (at least for the pt2pt calls).
%****
\section{Conclusion}
This chapter has proposed that ordered groups are used to provide
communication contexts, and that communication contexts do not appear
independently of process groupings.
The chapter made a basic proposal which provides intra-group communication
but does not provide inter-group communication, and an extended proposal
which also provides inter-group communication.
The basic proposal provides expressive semantics for the case of intra-group
communication such as arises in programs which compose data-driven
parallelism; it is closely related to that which was discussed by Marc Snir
at the February meeting in Dallas, and builds on discussions which have taken
place in various subcommittees.
The key additional features are: point-to-point communication can also be
expressed in terms of process identifiers; a process registry service was
added, use of which is optional.
The extended proposal adds expressive semantics for the case of inter-group
communication such as arises in programs which compose combinations of
data-driven and function-driven parallelism.
This functionality has been constructed in such a manner that there is no
syntactic or performance intrusion on the content of the basic proposal, and
the additional conceptual content can be presented separately from a
presentation of intra-group communication.
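%[Editorial sketch]
% The fragment below illustrates the three addressing modes summarised in the
% table of the extended proposal, in the C-flavoured style of this chapter.
% The names mpi_send and mpi_glob, and the argument orders shown, are
% assumptions for illustration only; MPI_GID_NULL is the named constant of
% the basic proposal, and ``glob'' is the operation described above.
\begin{verbatim}
int my_group;      /* unary identifier of a group this process belongs to   */
int other_group;   /* unary identifier of a remote group (e.g. from lookup) */
int pair;          /* binary (local, remote) identifier returned by glob    */
int peer_pid;      /* a valid process identifier                            */
int tag = 0;
double buf[64];

/* Basic communication: the null group identifier, with the rank field      */
/* interpreted as a process identifier.                                     */
mpi_send(MPI_GID_NULL, peer_pid, tag, buf, 64);

/* Intra-group communication: a unary identifier plus a rank in that group. */
mpi_send(my_group, 0, tag, buf, 64);

/* Inter-group communication: glob a local and a remote unary identifier    */
/* into a binary identifier, then address a rank in the remote group.       */
pair = mpi_glob(my_group, other_group);
mpi_send(pair, 0, tag, buf, 64);
\end{verbatim}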
% % END "Proposal I" %======================================================================% \end{document} >------------------------------ Cut Here ------------------------------< /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Sun Mar 21 14:47:07 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00323; Sun, 21 Mar 93 14:47:07 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24683; Sun, 21 Mar 93 14:46:49 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 14:46:48 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24675; Sun, 21 Mar 93 14:46:45 -0500 Date: Sun, 21 Mar 93 19:46:42 GMT Message-Id: <13090.9303211946@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Proposal I - LaTeX To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk This is LaTeX of Proposal I, credits entirely to Marc Snir, 15 minutes earlier than previously advertised. PostScript to follow. My sincere apology to everyone who has prepared comments on Proposal I++. Some of these comments will carry through to Proposal II, which will be sent out shortly. ---------------------------------------------------------------------- \documentstyle{report} \begin{document} \title{Proposal I\\MPI Context Subcommittee} \author{Marc~Snir} \date{March 1993} \maketitle %======================================================================% % BEGIN "Proposal I" % Written by Marc Snir % Edited by Lyndon J. Clarke % March 1993 % \newcommand{\discuss}[1]{ \ \\ \ \\ {\small {\bf Discussion:} #1} \ \\ \ \\ } \newcommand{\missing}[1]{ \ \\ \ \\ {\small {\bf Missing:} #1} \\ \ \\ } \chapter{Proposal I} {\it Editorial Note: This chapter is the proposal of Marc~Snir which also appears in the working documents of the point-to-point subcommittee (March 15) and collective communication subcommittee (March 16). This is a minor edit of the \LaTeX\ source of the point-to-point document provided by Marc. The edit was performed by Lyndon~Clarke.} \section{Contexts} A {\bf context} consists of: \begin{itemize} \item A set of processes that currently belong to the context (possibly all processes, or a proper subset). \item A {\bf ranking} of the processes within that context, i.e., a numbering of the processes in that context from 0 to $n-1$, where $n$ is the number of processes in that context. \end{itemize} A process may belong to several contexts at the same time. Any interprocess communication occurs within a context, and messages sent within one context can be received only within the same context. A context is specified using a {\em context handle} (i.e., a handle to an opaque object that identifies a context). Context handles cannot be transferred for one process to another; they can be used only on the process where they where created. Follows examples of possible uses for contexts. \subsection{Loosely synchronous library call interface} Consider the case where a parallel application executes a ``parallel call'' to a library routine, i.e., where all processes transfer control to the library routine. 
If the library was developed separately, then one should beware of the
possibility that the library code may receive by mistake messages sent by the
caller code, and vice-versa.
To prevent such an occurrence one might use a barrier synchronization before
and after the parallel library call.
Instead, one can allocate a different context to the library, thus preventing
unwanted interference.
Now, the transfer of control to the library need not be synchronized.
\subsection{Functional decomposition and modular code development}
Often, a parallel application is developed by integrating several distinct
functional modules, each of which is developed separately.
Each module is a parallel program that runs on a dedicated set of processes,
and the computation consists of phases where modules compute separately,
intermixed with global phases where all processes communicate.
It is convenient to allow each module to use its own private process
numbering scheme for the intramodule computation.
This is achieved by using a private module context for intramodule
computation, and a global context for intermodule communication.
\subsection{Collective communication}
MPI supports collective communication within dynamically created groups of
processes.
Each such group can be represented by a distinct context.
This provides a simple mechanism to ensure that communication that pertains
to collective communication within one group is not confused with collective
communication within another group.
\subsection{Lightweight gang scheduling}
Consider an environment where processes are multithreaded.
Contexts can be used to provide a mechanism whereby all processes are
time-shared between several parallel executions, and can context switch from
one parallel execution to another, in a loosely synchronous manner.
A thread is allocated on each process to each parallel execution, and a
different context is used to identify each parallel execution.
Thus, traffic from one execution cannot be confused with traffic from another
execution.
The blocking and unblocking of threads due to communication events provide a
``lazy'' context switching mechanism.
This can be extended to the case where the parallel executions span distinct
process subsets.
(MPI does not require multithreaded processes.)
\discuss{
A context handle might be implemented as a pointer to a structure that
consists of a context label (that is carried by messages sent within this
context) and a context member table, that translates process ranks within a
context to absolute addresses or to routing information.
Of course, other implementations are possible, including implementations that
do not require each context member to store a full list of the context
members.
Contexts can be used only on the process where they were created.
Since the context carries information on the group of processes that belong
to this context, a process can send a message within a context only to other
processes that belong to that context.
Thus, each process needs to keep track only of the contexts that were created
at that process; the total number of contexts per process is likely to be
small.
The only difference I see between this current definition of context, which
subsumes the group concept, and a pared-down definition, is that I assume
here that process numbering is relative to the context, rather than being
global, thus requiring a context member table.
I argue that this is not much added overhead, and gives much additional
needed functionality.
\begin{itemize} \item If a new context is created by copying a previous context, then one does not need a new member table; rather, one needs just a new context label and a new pointer to the same old context member table. This holds true, in particular, for contexts that include all processes. \item A context member table makes sure that a message is sent only to a process that can execute in the context of the message. The alternative mechanism, which is checking at reception, is less efficient, and requires that each context label be system-wide unique. This requires that, to the least, all processes in a context execute a collective agreement algorithm at the creation of this context. \item The use of relative addressing within each context is needed to support true modular development of subcomputations that execute on a subset of the processes. There is also a big advantage in using the same context construct for collective communications as well. \end{itemize} } \section{Context Operations} A global context {\bf ALL} is predefined. All processes belong to this context when computation starts. MPI does not specify how processes are initially ranked within the context ALL. It is expected that the start-up procedure used to initiate an MPI program (at load-time or run-time) will provide information or control on this initial ranking (e.g., by specifying that processes are ranked according to their pid's, or according to the physical addresses of the executing processors, or according to a numbering scheme specified at load time). \discuss{If we think of adding new processes at run-time, then {\tt ALL} conveys the wrong impression, since it is just the initial set of processes.} The following operations are available for creating new contexts. {\bf \ \\ MPI\_COPY\_CONTEXT(newcontext, context)} Create a new context that includes all processes in the old context. The rank of the processes in the previous context is preserved. The call must be executed by all processes in the old context. It is a blocking call: No call returns until all processes have called the function. The parameters are \begin{description} \item[OUT newcontext] handle to newly created context. The handle should not be associated with an object before the call. \item[IN context] handle to old context \end{description} \discuss{ I considered adding a string parameter, to provide a unique identifier to the next context. But, in an environment where processes are single threaded, this is not much help: Either all processes agree on the order they create new contexts, or the application deadlocks. A key may help in an environment where processes are multithreaded, to distinguish call from distinct threads of the same process; but it might be simpler to use a mutex algorithm at each process. {\bf Implementation note:} No communication is needed to create a new context, beyond a barrier synchronization; all processes can agree to use the same naming scheme for successive copies of the same context. Also, no new rank table is needed, just a new context label and a new pointer to the same old table. } {\bf \ \\ MPI\_NEW\_CONTEXT(newcontext, context, key, index)} \begin{description} \item[OUT newcontext] handle to newly created context at calling process. This handle should not be associated with an object before the call. 
{\bf \ \\ MPI\_NEW\_CONTEXT(newcontext, context, key, index)}
\begin{description}
\item[OUT newcontext] handle to newly created context at calling process. This
handle should not be associated with an object before the call.
\item[IN context] handle to old context
\item[IN key] integer
\item[IN index] integer
\end{description}

A new context is created for each distinct value of {\tt key}; this context is
shared by all processes that made the call with this key value. Within each new
context the processes are ranked according to the order of the {\tt index}
values they provided; in case of ties, processes are ranked according to their
rank in the old context. This call is blocking: no call returns until all
processes in the old context have executed the call.

Particular uses of this function are: (i) Reordering processes: all processes
provide the same {\tt key} value, and provide their index in the new order.
(ii) Splitting a context into subcontexts, while preserving the old relative
order among processes: all processes provide the same {\tt index} value, and
provide a key identifying their new subcontext.

{\bf \ \\ MPI\_RANK(rank, context)}
\begin{description}
\item[OUT rank] integer
\item[IN context] context handle
\end{description}
Return the rank of the calling process within the specified context.

{\bf \ \\ MPI\_SIZE(size, context)}
\begin{description}
\item[OUT size] integer
\item[IN context] context handle
\end{description}
Return the number of processes that belong to the specified context.

\subsection{Usage note}

\begin{itemize}
\item Use of contexts for libraries: Each library may provide an initialization
routine that is to be called by all processes, and that generates a context for
the use of that library.
\item Use of contexts for functional decomposition: A harness program, running
in the context {\tt ALL}, generates a subcontext for each module and then
starts each module within the corresponding context.
\item Use of contexts for collective communication: A context is created for
each group of processes where collective communication is to occur.
\item Use of contexts for context-switching among several parallel executions:
A preamble code is used to generate a different context for each execution;
this preamble needs to use a mutual exclusion protocol to make sure that each
thread claims the right context.
\end{itemize}

\discuss{
If process handles are made explicit in MPI, then an additional function is
needed: {\bf MPI\_PROCESS(process, context, rank)}, which returns a handle to
the process identified by the {\tt rank} and {\tt context} parameters.

A possible addition is a function of the form {\bf
MPI\_CREATE\_CONTEXT(newcontext, list\_of\_process\_handles)} which creates a
new context out of an explicit list of members (and ranks them in their order
of occurrence in the list). This, coupled with a mechanism for spawning new
processes into the computation, would allow the creation of a new all-inclusive
context that includes the additional processes. However, I oppose the idea of
requiring dynamic process creation as part of MPI. Many implementers want to
run MPI in an environment where processes are statically allocated at
load-time.
}
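The splitting use of {\bf MPI\_NEW\_CONTEXT}, together with the harness
pattern from the usage note above, might look as follows. The handle type
{\tt MPI\_CONTEXT}, the name {\tt MPI\_ALL} for the predefined context
{\bf ALL}, and the C binding are assumptions made for this sketch.

\begin{verbatim}
/* Sketch: a harness splits the initial context into one subcontext
   per functional module, preserving the old relative order within
   each module.  MPI_CONTEXT, MPI_ALL and the C binding are assumed
   for illustration only.                                           */

typedef struct mpi_context *MPI_CONTEXT;
extern MPI_CONTEXT MPI_ALL;              /* assumed handle for ALL   */
void MPI_NEW_CONTEXT(MPI_CONTEXT *newcontext, MPI_CONTEXT context,
                     int key, int index);
void MPI_RANK(int *rank, MPI_CONTEXT context);
void MPI_SIZE(int *size, MPI_CONTEXT context);

void harness(void)
{
    int global_rank, module, module_rank, module_size;
    MPI_CONTEXT module_context;

    MPI_RANK(&global_rank, MPI_ALL);

    /* Even-ranked processes form module 0, odd-ranked processes
       form module 1.  All processes pass the same index, so ties
       are broken by the old rank and the old order is preserved
       within each subcontext.                                      */
    module = global_rank % 2;
    MPI_NEW_CONTEXT(&module_context, MPI_ALL, module, 0);

    MPI_RANK(&module_rank, module_context);
    MPI_SIZE(&module_size, module_context);

    /* Each module now computes with its private numbering
       0 .. module_size-1, and can use MPI_ALL for the global,
       intermodule phases.                                          */
}
\end{verbatim}

With more modules, each process simply supplies the key of the module it is to
join.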
%
% END "Proposal I"
%======================================================================%

\end{document}
----------------------------------------------------------------------
      /--------------------------------------------------------\
e||)  | Lyndon J Clarke     Edinburgh Parallel Computing Centre |  e||)
c||c  | Tel: 031 650 5021   Email: lyndon@epcc.edinburgh.ac.uk  |  c||c
      \--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Sun Mar 21 14:49:16 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34)
	id AA00329; Sun, 21 Mar 93 14:49:16 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA24720; Sun, 21 Mar 93 14:48:46 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 14:48:45 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK)
	id AA24711; Sun, 21 Mar 93 14:48:36 -0500
Date: Sun, 21 Mar 93 19:48:30 GMT
Message-Id: <13097.9303211948@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Proposal I - PostScript
To: mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

This is PostScript of Proposal I, credits entirely to Marc Snir, 15 minutes
earlier than previously advertised. LaTeX preceded.
----------------------------------------------------------------------
pop}N /eop{SI restore showpage userdict /eop-hook known{eop-hook}if}N /@start{userdict /start-hook known{start-hook} if pop /VResolution X /Resolution X 1000 div /DVImag X /IE 256 array N 0 1 255 {IE S 1 string dup 0 3 index put cvn put}for 65781.76 div /vsize X 65781.76 div /hsize X}N /p{show}N /RMat[1 0 0 -1 0 0]N /BDot 260 string N /rulex 0 N /ruley 0 N /v{/ruley X /rulex X V}B /V{}B /RV statusdict begin /product where{ pop product dup length 7 ge{0 7 getinterval dup(Display)eq exch 0 4 getinterval(NeXT)eq or}{pop false}ifelse}{false}ifelse end{{gsave TR -.1 -.1 TR 1 1 scale rulex ruley false RMat{BDot}imagemask grestore}}{{gsave TR -.1 -.1 TR rulex ruley scale 1 1 false RMat{BDot}imagemask grestore}}ifelse B /QV{ gsave transform round exch round exch itransform moveto rulex 0 rlineto 0 ruley neg rlineto rulex neg 0 rlineto fill grestore}B /a{moveto}B /delta 0 N /tail{dup /delta X 0 rmoveto}B /M{S p delta add tail}B /b{S p tail}B /c{-4 M} B /d{-3 M}B /e{-2 M}B /f{-1 M}B /g{0 M}B /h{1 M}B /i{2 M}B /j{3 M}B /k{4 M}B /w{0 rmoveto}B /l{p -4 w}B /m{p -3 w}B /n{p -2 w}B /o{p -1 w}B /q{p 1 w}B /r{ p 2 w}B /s{p 3 w}B /t{p 4 w}B /x{0 S rmoveto}B /y{3 2 roll p a}B /bos{/SS save N}B /eos{SS restore}B end %%EndProcSet TeXDict begin 39158280 55380996 1000 300 300 (/a/obsidian/disk/home/u36/lyndon/mpi/context-i.dvi) @start /Fa 9 122 df<00E00001F00001F00001B00001B00003B80003B80003B800031800071C00071C 00071C00071C00071C000E0E000E0E000FFE000FFE001FFF001C07001C07001C07007F1FC0FF1F E07F1FC013197F9816>65 D76 D<003F00007F00003F0000070000070000070000070003 C7000FF7001FFF003C1F00780F00700700E00700E00700E00700E00700E00700E00700700F0070 0F003C1F001FFFE00FE7F007C7E014197F9816>100 D<03E00FF81FFC3C1E780E7007E007FFFF FFFFFFFFE000E000700778073C0F1FFE0FFC03F010127D9116>I<018003C003C0018000000000 000000007FC07FC07FC001C001C001C001C001C001C001C001C001C001C001C001C07FFFFFFF7F FF101A7D9916>105 D<7E0000FE00007E00000E00000E00000E00000E00000E7FE00E7FE00E7F E00E0F000E1E000E3C000E78000EF0000FF0000FF8000FBC000F1E000E0E000E07000E07807F87 F0FFCFF07F87F01419809816>107 D<7E3C00FEFE007FFF000F87800F03800E03800E03800E03 800E03800E03800E03800E03800E03800E03800E03807FC7F0FFE7F87FC7F01512809116>110 D<7F1FC07F3FC07F1FC00F1C00073C0003B80003F00001F00000E00001E00001F00003B800073C 00071C000E0E007F1FC0FF3FE07F1FC013127F9116>120 D<7F1FC0FF9FE07F1FC01C07000E07 000E0E000E0E00070E00071C00071C00039C00039C0003980001B80001B80000F00000F00000F0 0000E00000E00000E00001C00079C0007BC0007F80003F00003C0000131B7F9116>I E /Fb 11 121 df<01C00003E00003E0000360000360000770000770000770000770000630000E 38000E38000E38000E38000E38001FFC001FFC001C1C001C1C003C1E00380E00FE3F80FE3F8011 177F9614>65 D76 D<1FC0007FF000707800201800001C00001C 0007FC001FFC003C1C00701C00E01C00E01C00E01C00707C003FFF800F8F8011107E8F14>97 D<03F80FFC1C1C380870006000E000E000E000E00060007000380E1C1E0FFC03F00F107E8F14> 99 D<07E00FF01C38301C700CE00EE00EFFFEFFFEE00060007000380E1C1E0FFC03F00F107E8F 14>101 D107 D110 D<07C01FF03C78701C701CE00EE00EE00E E00EE00EE00E701C783C3C781FF007C00F107E8F14>I 114 D<030007000700070007007FFCFFFC07000700070007000700070007000700070E070E070E 070C03FC00F00F157F9414>116 D<7E3F007E3F001E38000E780007700007E00003E00001C000 03C00003E0000770000E78000E38001C1C00FE3F80FE3F8011107F8F14>120 D E /Fc 1 16 df<07801FE03FF07FF87FF8FFFCFFFCFFFCFFFCFFFCFFFC7FF87FF83FF01FE007 800E107E9013>15 D E /Fd 48 123 df<00FC7C0183C607078E0607040E07000E07000E07000E 07000E07000E0700FFFFF00E07000E07000E07000E07000E07000E07000E07000E07000E07000E 07000E07000E07000E07000E07007F0FF0171A809916>11 
D<00FC000182000703000607000E02 000E00000E00000E00000E00000E0000FFFF000E07000E07000E07000E07000E07000E07000E07 000E07000E07000E07000E07000E07000E07000E07007F0FE0131A809915>I<007E1F8001C170 400703C060060380E00E0380400E0380000E0380000E0380000E0380000E038000FFFFFFE00E03 80E00E0380E00E0380E00E0380E00E0380E00E0380E00E0380E00E0380E00E0380E00E0380E00E 0380E00E0380E00E0380E00E0380E07F8FE3FC1E1A809920>14 D<00800100020004000C000800 18003000300030006000600060006000E000E000E000E000E000E000E000E000E000E000600060 0060006000300030003000180008000C00040002000100008009267D9B0F>40 D<8000400020001000180008000C00060006000600030003000300030003800380038003800380 0380038003800380038003000300030003000600060006000C0008001800100020004000800009 267E9B0F>I<60F0F07010101020204080040B7D830B>44 DI<60F0F060 04047D830B>I<60F0F060000000000000000060F0F06004107D8F0B>58 D<60F0F060000000000000000060F0F0701010102020408004177D8F0B>I<000C0000000C0000 000C0000001E0000001E0000003F000000270000002700000043800000438000004380000081C0 000081C0000081C0000100E0000100E00001FFE000020070000200700006007800040038000400 380008001C0008001C001C001E00FF00FFC01A1A7F991D>65 DI<003F0201C0C6 03002E0E001E1C000E1C0006380006780002700002700002F00000F00000F00000F00000F00000 F000007000027000027800023800041C00041C00080E000803003001C0C0003F00171A7E991C> I69 D72 DI77 DI<007F000001C1C000070070000E0038001C001C003C001E0038000E0078000F 0070000700F0000780F0000780F0000780F0000780F0000780F0000780F0000780F00007807800 0F0078000F0038000E003C001E001C001C000E0038000700700001C1C000007F0000191A7E991E >II<0FC21836200E6006C006C002C002C002E00070007E003FE01FF807FC003E 000E00070003800380038003C002C006E004D81887E0101A7E9915>83 D<7FFFFF00701C070040 1C0100401C0100C01C0180801C0080801C0080801C0080001C0000001C0000001C0000001C0000 001C0000001C0000001C0000001C0000001C0000001C0000001C0000001C0000001C0000001C00 00001C0000001C0000001C000003FFE000191A7F991C>I<3F8070C070E020700070007007F01C 7030707070E070E071E071E0F171FB1E3C10107E8F13>97 DI<07F80C1C381C30 087000E000E000E000E000E000E0007000300438080C1807E00E107F8F11>I<007E00000E0000 0E00000E00000E00000E00000E00000E00000E00000E0003CE000C3E00380E00300E00700E00E0 0E00E00E00E00E00E00E00E00E00E00E00600E00700E00381E001C2E0007CFC0121A7F9915>I< 07C01C3030187018600CE00CFFFCE000E000E000E0006000300438080C1807E00E107F8F11>I< 01F0031807380E100E000E000E000E000E000E00FFC00E000E000E000E000E000E000E000E000E 000E000E000E000E000E007FE00D1A80990C>I<0FCE187330307038703870387038303018602F C02000600070003FF03FFC1FFE600FC003C003C003C0036006381C07E010187F8F13>II<18003C003C001800000000000000000000000000FC001C001C001C001C001C001C001C 001C001C001C001C001C001C001C00FF80091A80990A>I<018003C003C0018000000000000000 00000000000FC001C001C001C001C001C001C001C001C001C001C001C001C001C001C001C001C0 01C001C041C0E180E3007E000A2182990C>IIII< FCF8001D0C001E0E001E0E001C0E001C0E001C0E001C0E001C0E001C0E001C0E001C0E001C0E00 1C0E001C0E00FF9FC012107F8F15>I<07E01C38300C700E6006E007E007E007E007E007E00760 06700E381C1C3807E010107F8F13>II<03C2000C2600381E00300E00700E00E00E00E00E00E00E00E0 0E00E00E00E00E00700E00700E00381E001C2E0007CE00000E00000E00000E00000E00000E0000 0E00007FC012177F8F14>II<1F2060E04020C020C020F0007F003FC01FE000F080708030C030C0 20F0408F800C107F8F0F>I<0400040004000C000C001C003C00FFC01C001C001C001C001C001C 001C001C001C201C201C201C201C200E4003800B177F960F>IIIIII<7FF86070407040E041C041C003 80070007000E081C081C08381070107030FFF00D107F8F11>I E /Fe 36 121 df<004000800300020006000C001C001800380038007000700070007000F000F000F000F0 
00F000F000F000F000F00070007000700070003800380018001C000C0006000200030000800040 0A257C9B11>40 D<800040003000100018000C000E00060007000700038003800380038003C003 C003C003C003C003C003C003C003C003800380038003800700070006000E000C00180010003000 400080000A257E9B11>I<70F8FCFCFC7C04040808102040060D7D850C>44 D<78FCFCFCFC78000000000078FCFCFCFC7806117D900C>58 D<00030000000780000007800000 078000000FC000000FC000001BE000001BE000001BE0000031F0000031F0000060F8000060F800 00E0FC0000C07C0000C07C0001803E0001FFFE0003FFFF0003001F0003001F0006000F8006000F 800E000FC0FFC07FFCFFC07FFC1E1A7F9921>65 D<001FE02000FFFCE003F80FE007C003E01F80 01E01F0000E03E0000E07E0000607C000060FC000000FC000000FC000000FC000000FC000000FC 000000FC000000FC0000007C0000607E0000603E0000601F0000C01F8000C007C0038003F80F00 00FFFC00001FF0001B1A7E9920>67 DII73 D77 D I<003FC00001E0780007801E000F000F001F000F803E0007C03E0007C07C0003E07C0003E0FC00 03F0FC0003F0FC0003F0FC0003F0FC0003F0FC0003F0FC0003F0FC0003F07C0003E07E0007E03E 0007C03E0007C01F000F800F801F0007C03E0001E07800003FC0001C1A7E9921>II82 D<07F0401FFDC03C0FC07803C07001C0F0 01C0F000C0F000C0F80000FF00007FF8003FFF001FFF800FFFC001FFE0000FE00003F00001F0C0 00F0C000F0C000F0E000E0F001E0FC03C0EFFF8083FE00141A7E9919>I<7FFFFF807FFFFF8078 1F0780701F0380601F0180E01F01C0C01F00C0C01F00C0C01F00C0001F0000001F0000001F0000 001F0000001F0000001F0000001F0000001F0000001F0000001F0000001F0000001F0000001F00 00001F0000001F000007FFFC0007FFFC001A1A7E991F>I<7FF87FF07FF87FF007E0060003E00C 0003F01C0001F8380000F83000007C6000007EE000003FC000001F8000001F8000000FC000000F C000000FE000001BF0000039F8000030F8000060FC0000E07E0001C03E0001801F0003001F8007 000FC0FFE07FFCFFE07FFC1E1A7F9921>88 D<0FF0001C3C003E1E003E0E003E0F001C0F00000F 0000FF000FCF003E0F007C0F00F80F00F80F00F80F00F817007C27E01FC3E013117F9015>97 D<03FC000F0E001C1F003C1F00781F00780E00F80000F80000F80000F80000F800007800007800 003C01801C03000F060003FC0011117F9014>99 D<000FE0000FE00001E00001E00001E00001E0 0001E00001E00001E003F9E00F07E01C03E03C01E07801E07801E0F801E0F801E0F801E0F801E0 F801E07801E07801E03C01E01C03E00F0DFC03F9FC161A7F9919>I<03F0000E1C001C0E003C07 00780700780780F80780F80780FFFF80F80000F800007800007800003C01801C03000E060003FC 0011117F9014>I<00FE0003C700078F800F0F800F0F800F07000F00000F00000F0000FFF000FF F0000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F 00007FE0007FE000111A80990E>I104 D<1C003E003E003E003E001C00000000 00000000007E007E001E001E001E001E001E001E001E001E001E001E001E001E001E00FF80FF80 091B7F9A0D>I107 DIII<03F8000E0E003C0780 3803807803C07803C0F803E0F803E0F803E0F803E0F803E0F803E07803C07C07C03C07800E0E00 03F80013117F9016>II114 D<1FB020704030C030C030F000FF807FE03FF807F8003CC00C C00CE00CE008F830CFE00E117F9011>I<06000600060006000E000E001E003FF0FFF01E001E00 1E001E001E001E001E001E001E181E181E181E181E180F3003E00D187F9711>II119 DI E /Ff 31 122 df45 D<387CFEFEFE7C3807077C8610>I<00 180000780001F800FFF800FFF80001F80001F80001F80001F80001F80001F80001F80001F80001 F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001 F80001F80001F80001F8007FFFE07FFFE013207C9F1C>49 D<03FC000FFF003C1FC07007E07C07 F0FE03F0FE03F8FE03F8FE01F87C01F83803F80003F80003F00003F00007E00007C0000F80001F 00003E0000380000700000E01801C0180380180700180E00380FFFF01FFFF03FFFF07FFFF0FFFF F0FFFFF015207D9F1C>I<00FE0007FFC00F07E01E03F03F03F03F81F83F81F83F81F81F03F81F 03F00003F00003E00007C0001F8001FE0001FF000007C00001F00001F80000FC0000FC3C00FE7E 00FEFF00FEFF00FEFF00FEFF00FC7E01FC7801F81E07F00FFFC001FE0017207E9F1C>I<0000E0 
0001E00003E00003E00007E0000FE0001FE0001FE00037E00077E000E7E001C7E00187E00307E0 0707E00E07E00C07E01807E03807E07007E0E007E0FFFFFEFFFFFE0007E00007E00007E00007E0 0007E00007E00007E000FFFE00FFFE17207E9F1C>I<0003FE0080001FFF818000FF01E38001F8 003F8003E0001F8007C0000F800F800007801F800007803F000003803F000003807F000001807E 000001807E00000180FE00000000FE00000000FE00000000FE00000000FE00000000FE00000000 FE00000000FE000000007E000000007E000001807F000001803F000001803F000003801F800003 000F8000030007C000060003F0000C0001F800380000FF00F000001FFFC0000003FE000021227D A128>67 D70 D76 D85 D<07FC001FFF803F07C03F03E03F01E03F01F01E01F00001F00001F000 3FF003FDF01FC1F03F01F07E01F0FC01F0FC01F0FC01F0FC01F07E02F07E0CF81FF87F07E03F18 167E951B>97 DI<00FF 8007FFE00F83F01F03F03E03F07E03F07C01E07C0000FC0000FC0000FC0000FC0000FC0000FC00 007C00007E00007E00003E00301F00600FC0E007FF8000FE0014167E9519>I<0001FE000001FE 0000003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E0000003E000000 3E0000003E0001FC3E0007FFBE000F81FE001F007E003E003E007E003E007C003E00FC003E00FC 003E00FC003E00FC003E00FC003E00FC003E00FC003E00FC003E007C003E007C003E003E007E00 1E00FE000F83BE0007FF3FC001FC3FC01A237EA21F>I<00FE0007FF800F87C01E01E03E01F07C 00F07C00F8FC00F8FC00F8FFFFF8FFFFF8FC0000FC0000FC00007C00007C00007E00003E00181F 00300FC07003FFC000FF0015167E951A>I<003F8000FFC001E3E003C7E007C7E00F87E00F83C0 0F80000F80000F80000F80000F80000F8000FFFC00FFFC000F80000F80000F80000F80000F8000 0F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F8000 7FF8007FF80013237FA211>I<03FC1E0FFF7F1F0F8F3E07CF3C03C07C03E07C03E07C03E07C03 E07C03E03C03C03E07C01F0F801FFF0013FC003000003000003800003FFF801FFFF00FFFF81FFF FC3800FC70003EF0001EF0001EF0001EF0001E78003C7C007C3F01F80FFFE001FF0018217E951C >II<1C003F007F007F 007F003F001C000000000000000000000000000000FF00FF001F001F001F001F001F001F001F00 1F001F001F001F001F001F001F001F001F001F001F00FFE0FFE00B247EA310>I108 DI I<00FE0007FFC00F83E01E00F03E00F87C007C7C007C7C007CFC007EFC007EFC007EFC007EFC00 7EFC007EFC007E7C007C7C007C3E00F81F01F00F83E007FFC000FE0017167E951C>II114 D<0FF3003FFF00781F00600700E00300E00300F00300FC00007FE0007F F8003FFE000FFF0001FF00000F80C00780C00380E00380E00380F00700FC0E00EFFC00C7F00011 167E9516>I<0180000180000180000180000380000380000780000780000F80003F8000FFFF00 FFFF000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F8180 0F81800F81800F81800F81800F830007C30003FE0000F80011207F9F16>IIII121 D E /Fg 1 111 df<381F004E61804681C04701C08F01C08E01C00E01C00E01C01C 03801C03801C03801C0700380710380710380E10380E2070064030038014127E9119>110 D E /Fh 2 16 df0 D<03C00FF01FF83FFC7FFE7FFEFFFFFF FFFFFFFFFF7FFE7FFE3FFC1FF80FF003C010107E9115>15 D E /Fi 37 123 df<0020004001800380030006000E001C001C003C0038003800780078007800F800F000F0 00F000F000F000F000F000F000F000F800780078007800380038003C001C001C000E0006000300 03800180004000200B297C9E13>40 D<800040003000380018000C000E00070007000780038003 8003C003C003C003E001E001E001E001E001E001E001E001E001E003E003C003C003C003800380 0780070007000E000C00180038003000400080000B297D9E13>I<78FCFCFEFE7A020204040808 3040070E7D850D>44 D<00038000000380000007C0000007C0000007C000000FE000000FE00000 1FF000001BF000001BF0000031F8000031F8000061FC000060FC0000E0FE0000C07E0000C07E00 01803F0001FFFF0003FFFF8003001F8003001F8006000FC006000FC00E000FE00C0007E0FFC07F FEFFC07FFE1F1C7E9B24>65 D<001FE02000FFF8E003F80FE007C003E00F8001E01F0000E03E00 00E03E0000607E0000607C000060FC000000FC000000FC000000FC000000FC000000FC000000FC 000000FC0000007C0000607E0000603E0000603E0000C01F0000C00F80018007C0030003F80E00 
00FFFC00001FE0001B1C7D9B22>67 D69 D73 D75 DIII<003FE00001F07C0003C01E000F800F801F0007C01E0003C03E 0003E07E0003F07C0001F07C0001F0FC0001F8FC0001F8FC0001F8FC0001F8FC0001F8FC0001F8 FC0001F8FC0001F87C0001F07E0003F07E0003F03E0003E03F0007E01F0007C00F800F8003C01E 0001F07C00003FE0001D1C7D9B24>II82 D<07F8201FFEE03C07E07801E07000E0F000E0F00060F00060F80000FE00 00FFE0007FFE003FFF003FFF800FFFC007FFE0007FE00003F00001F00000F0C000F0C000F0C000 E0E000E0F001C0FC03C0EFFF0083FC00141C7D9B1B>I<7FFFFFE07FFFFFE0781F81E0701F80E0 601F8060E01F8070C01F8030C01F8030C01F8030C01F8030001F8000001F8000001F8000001F80 00001F8000001F8000001F8000001F8000001F8000001F8000001F8000001F8000001F8000001F 8000001F8000001F800007FFFE0007FFFE001C1C7E9B21>II87 D<7FFE1FFE007FFE1FFE0007F001800003F803800001FC07000000FC06000000 FE0C0000007F1C0000003F380000003FB00000001FE00000000FE00000000FE000000007F00000 0003F800000007F80000000FFC0000000CFE000000187E000000387F000000703F800000601F80 0000C01FC00001C00FE000018007F000030007F000FFF03FFF80FFF03FFF80211C7F9B24>II<7FFFFC7FFFFC7E01F878 03F87003F0E007E0E007E0C00FC0C01FC0C01F80003F00007F00007E0000FC0000FC0001F80003 F80603F00607E0060FE0060FC00E1F800E1F801C3F001C7F003C7E00FCFFFFFCFFFFFC171C7D9B 1D>I<0FF8001C1E003E0F803E07803E07C01C07C00007C0007FC007E7C01F07C03C07C07C07C0 F807C0F807C0F807C0780BC03E13F80FE1F815127F9117>97 D<03FC000E0E001C1F003C1F0078 1F00780E00F80000F80000F80000F80000F80000F800007800007801803C01801C03000E0E0003 F80011127E9115>99 D<000FF0000FF00001F00001F00001F00001F00001F00001F00001F00001 F00001F001F9F00F07F01C03F03C01F07801F07801F0F801F0F801F0F801F0F801F0F801F0F801 F07801F07801F03C01F01C03F00F0FFE03F9FE171D7E9C1B>I<01FC000F07001C03803C01C078 01C07801E0F801E0F801E0FFFFE0F80000F80000F800007800007C00603C00601E00C00F038001 FC0013127F9116>I<03F8F00E0F381E0F381C07303C07803C07803C07803C07801C07001E0F00 0E0E001BF8001000001800001800001FFF001FFFC00FFFE01FFFF07801F8F00078F00078F00078 7000707800F01E03C007FF00151B7F9118>103 D<1E003F003F003F003F001E00000000000000 000000000000FF00FF001F001F001F001F001F001F001F001F001F001F001F001F001F001F00FF E0FFE00B1E7F9D0E>105 D107 D110 D<01FC000F07801C01C03C01E07800F07800F0F800F8F800F8 F800F8F800F8F800F8F800F87800F07800F03C01E01E03C00F078001FC0015127F9118>I114 D<1FD830786018E018E018F000FF807FE07F F01FF807FC007CC01CC01CE01CE018F830CFC00E127E9113>I<0300030003000300070007000F 000F003FFCFFFC1F001F001F001F001F001F001F001F001F001F0C1F0C1F0C1F0C0F08079803F0 0E1A7F9913>I119 DII< 3FFF803C1F00303F00303E00607C0060FC0060F80001F00003F00007E00007C1800F81801F8180 1F03803E03007E07007C0F00FFFF0011127F9115>I E /Fj 15 121 df<3C7EFFFFFFFF7E3C08 087B8712>46 D<000C00001C0000FC000FFC00FFFC00F0FC0000FC0000FC0000FC0000FC0000FC 0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC 0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC00FFFFFCFFFF FC16257BA420>49 D<00FF000007FFE0001E07F8003801FC007800FE007E00FF00FF007F00FF00 7F80FF007F80FF003F807E003F803C003F8000007F8000007F0000007F000000FE000000FE0000 01FC000001F8000003F0000007C00000078000000F0000001E0000003800000070018000E00180 01C001800380030007000300060003000FFFFF001FFFFF003FFFFF007FFFFE00FFFFFE00FFFFFE 0019257DA420>I<0000FF80080007FFF018003FC03C38007E000E7801FC0003F803F00001F807 E00000F80FE00000781FC00000781FC00000383F800000383F800000387F800000187F00000018 7F00000018FF00000000FF00000000FF00000000FF00000000FF00000000FF00000000FF000000 00FF00000000FF00000000FF000000007F000000007F000000187F800000183F800000183F8000 00181FC00000301FC00000300FE000006007E000006003F00000C001FC000180007E000700003F 
C03C000007FFF8000000FFC00025287CA72E>67 D<0001FF0000001FFFF000007F01FC0000FC00 7E0003F8003F8007F0001FC00FE0000FE00FC00007E01FC00007F03F800003F83F800003F83F80 0003F87F000001FC7F000001FC7F000001FCFF000001FEFF000001FEFF000001FEFF000001FEFF 000001FEFF000001FEFF000001FEFF000001FEFF000001FEFF000001FE7F000001FC7F000001FC 7F800003FC7F800003FC3F800003F83FC00007F81FC00007F00FC00007E00FE0000FE007F0001F C003F8003F8000FC007E00007F01FC00001FFFF0000001FF000027287CA730>79 D<03FF00000FFFE0001F03F0003F80F8003F80FC003F807C001F007E001F007E0000007E000000 7E0000007E00000FFE0001FFFE0007F07E001FC07E003F807E007F007E00FE007E00FE007E18FE 007E18FE007E18FE00BE187F01BE183F873FF01FFE1FE003F80F801D1A7E9920>97 D<007F8003FFE007E1F00F80F81F007C3F007E7E003E7E003E7E003FFE003FFE003FFFFFFFFFFF FFFE0000FE0000FE0000FE00007E00007E00003F00003F00031F80060FC00607F01C01FFF0003F C0181A7E991D>101 D<0F001F801FC03FC03FC01FC01F800F0000000000000000000000000000 00FFC0FFC00FC00FC00FC00FC00FC00FC00FC00FC00FC00FC00FC00FC00FC00FC00FC00FC00FC0 0FC00FC00FC00FC00FC0FFFCFFFC0E297FA811>105 D110 D<007FC00001FFF00007E0FC000F803E001F001F00 3F001F803E000F807E000FC07E000FC0FE000FE0FE000FE0FE000FE0FE000FE0FE000FE0FE000F E0FE000FE0FE000FE07E000FC07E000FC07F001FC03F001F801F001F000F803E0007E0FC0001FF F000007FC0001B1A7E9920>II114 D<03F8C01FFFC03C07C07801C07001C0F000C0 F000C0F800C0FC0000FFC0007FFC003FFF001FFF800FFFC001FFE0000FF00003F0C001F0C000F0 E000F0E000F0F000E0F801E0FE03C0E7FF80C1FC00141A7E9919>I<0060000060000060000060 0000E00000E00001E00001E00003E00007E0001FE000FFFFC0FFFFC007E00007E00007E00007E0 0007E00007E00007E00007E00007E00007E00007E00007E00007E00007E00007E06007E06007E0 6007E06007E06007E06003F0C001F0C000FF80003E0013257FA419>I120 D E /Fk 1 98 df<00200000700000700000700000B80000B80000B800011C00011C00011C00020E00020E0004 070004070007FF000803800803800803801801C03803C0FE0FF815157F9419>97 D E /Fl 62 123 df<007E1F0001C1B1800303E3C00703C3C00E03C1800E01C0000E01C0000E01 C0000E01C0000E01C0000E01C000FFFFFC000E01C0000E01C0000E01C0000E01C0000E01C0000E 01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C000 0E01C0007F87FC001A1D809C18>11 D<007E0001C1800301800703C00E03C00E01800E00000E00 000E00000E00000E0000FFFFC00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01 C00E01C00E01C00E01C00E01C00E01C00E01C00E01C07F87F8151D809C17>I<003F07E00001C0 9C18000380F018000701F03C000E01E03C000E00E018000E00E000000E00E000000E00E000000E 00E000000E00E00000FFFFFFFC000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C00 0E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C 000E00E01C000E00E01C000E00E01C007FC7FCFF80211D809C23>14 D<6060F0F0F8F868680808 08080808101010102020404080800D0C7F9C15>34 D<60F0F8680808081010204080050C7C9C0C >39 D<004000800100020006000C000C0018001800300030007000600060006000E000E000E000 E000E000E000E000E000E000E000E000E000600060006000700030003000180018000C000C0006 0002000100008000400A2A7D9E10>I<800040002000100018000C000C00060006000300030003 8001800180018001C001C001C001C001C001C001C001C001C001C001C001C00180018001800380 03000300060006000C000C00180010002000400080000A2A7E9E10>I<60F0F070101010102020 4080040C7C830C>44 DI<60F0F06004047C830C>I<03C00C301818300C 300C700E60066006E007E007E007E007E007E007E007E007E007E007E007E007E0076006600670 0E300C300C18180C3007E0101D7E9B15>48 D<030007003F00C700070007000700070007000700 07000700070007000700070007000700070007000700070007000700070007000F80FFF80D1C7C 9B15>I<07C01830201C400C400EF00FF80FF807F8077007000F000E000E001C001C0038007000 
6000C00180030006010C01180110023FFE7FFEFFFE101C7E9B15>I<07E01830201C201C781E78 0E781E381E001C001C00180030006007E00030001C001C000E000F000F700FF80FF80FF80FF00E 401C201C183007E0101D7E9B15>I<000C00000C00001C00003C00003C00005C0000DC00009C00 011C00031C00021C00041C000C1C00081C00101C00301C00201C00401C00C01C00FFFFC0001C00 001C00001C00001C00001C00001C00001C0001FFC0121C7F9B15>I<300C3FF83FF03FC0200020 00200020002000200023E024302818301C200E000E000F000F000F600FF00FF00FF00F800E401E 401C2038187007C0101D7E9B15>I<00F0030C06040C0E181E301E300C700070006000E3E0E430 E818F00CF00EE006E007E007E007E007E007600760077006300E300C18180C3003E0101D7E9B15 >I<60F0F0600000000000000000000060F0F06004127C910C>58 D<60F0F06000000000000000 00000060F0F0701010101020204080041A7C910C>I<000600000006000000060000000F000000 0F0000000F00000017800000178000001780000023C0000023C0000023C0000041E0000041E000 0041E0000080F0000080F0000180F8000100780001FFF80003007C0002003C0002003C0006003E 0004001E0004001E000C001F001E001F00FF80FFF01C1D7F9C1F>65 D<001F808000E061800180 1980070007800E0003801C0003801C00018038000180780000807800008070000080F0000000F0 000000F0000000F0000000F0000000F0000000F0000000F0000000700000807800008078000080 380000801C0001001C0001000E000200070004000180080000E03000001FC000191E7E9C1E>67 D69 DI73 D76 DII<003F800000E0E0000380380007001C000E00 0E001C0007003C00078038000380780003C0780003C0700001C0F00001E0F00001E0F00001E0F0 0001E0F00001E0F00001E0F00001E0F00001E0700001C0780003C0780003C0380003803C000780 1C0007000E000E0007001C000380380000E0E000003F80001B1E7E9C20>II82 D<07E0801C1980300580700380600180E00180E00080E00080E00080 F00000F800007C00007FC0003FF8001FFE0007FF0000FF80000F800007C00003C00001C08001C0 8001C08001C0C00180C00180E00300D00200CC0C0083F800121E7E9C17>I<7FFFFFC0700F01C0 600F00C0400F0040400F0040C00F0020800F0020800F0020800F0020000F0000000F0000000F00 00000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F 0000000F0000000F0000000F0000000F0000001F800003FFFC001B1C7F9B1E>II< FFE0FFE0FF1F001F003C1E001E00180F001F00100F001F00100F001F001007801F002007802780 20078027802003C027804003C043C04003C043C04003E043C04001E081E08001E081E08001E081 E08000F100F10000F100F10000F100F100007900FA00007A007A00007A007A00003E007C00003C 003C00003C003C00003C003C00001800180000180018000018001800281D7F9B2B>87 D<7FF0FFC00FC03E000780180003C0180003E0100001E0200001F0600000F0400000788000007D 8000003D0000001E0000001F0000000F0000000F8000000F80000013C0000023E0000021E00000 41F00000C0F8000080780001007C0003003C0002001E0006001F001F003F80FFC0FFF01C1C7F9B 1F>I<08081010202040404040808080808080B0B0F8F8787830300D0C7A9C15>92 D<1FC000307000783800781C00301C00001C00001C0001FC000F1C00381C00701C00601C00E01C 40E01C40E01C40603C40304E801F870012127E9115>97 D I<07E00C301878307870306000E000E000E000E000E000E00060007004300418080C3007C00E12 7E9112>I<003F0000070000070000070000070000070000070000070000070000070000070003 E7000C1700180F00300700700700600700E00700E00700E00700E00700E00700E0070060070070 0700300700180F000C370007C7E0131D7E9C17>I<03E00C301818300C700E6006E006FFFEE000 E000E000E00060007002300218040C1803E00F127F9112>I<00F8018C071E061E0E0C0E000E00 0E000E000E000E00FFE00E000E000E000E000E000E000E000E000E000E000E000E000E000E000E 000E007FE00F1D809C0D>I<00038003C4C00C38C01C3880181800381C00381C00381C00381C00 1818001C38000C300013C0001000003000001800001FF8001FFF001FFF803003806001C0C000C0 C000C0C000C06001803003001C0E0007F800121C7F9215>II<18003C003C0018000000000000000000000000000000FC001C001C001C001C001C001C001C 001C001C001C001C001C001C001C001C001C00FF80091D7F9C0C>I<00C001E001E000C0000000 
00000000000000000000000FE000E000E000E000E000E000E000E000E000E000E000E000E000E0 00E000E000E000E000E000E000E060E0F0C0F1C061803E000B25839C0D>IIIII<03F000 0E1C00180600300300700380600180E001C0E001C0E001C0E001C0E001C0E001C0600180700380 3003001806000E1C0003F00012127F9115>II<03C1000C3300180B00300F0070 0700700700E00700E00700E00700E00700E00700E00700600700700700300F00180F000C370007 C700000700000700000700000700000700000700000700003FE0131A7E9116>II<1F90 30704030C010C010E010F8007F803FE00FF000F880388018C018C018E010D0608FC00D127F9110 >I<04000400040004000C000C001C003C00FFE01C001C001C001C001C001C001C001C001C001C 101C101C101C101C100C100E2003C00C1A7F9910>IIII<7F8FF00F03800F03000702 0003840001C80001D80000F00000700000780000F800009C00010E00020E000607000403801E07 C0FF0FF81512809116>II<7FFC70386038407040F040E041C003C0038007000F 040E041C043C0C380870087038FFF80E127F9112>I E /Fm 38 122 df<000300060008001800 30006000C000C0018003000300060006000C000C001C0018001800380030003000700070006000 600060006000E000E000E000E000E0006000600060006000600020003000100008000800102A7B 9E11>40 D<001000100008000C0004000600060006000600060007000700070007000700060006 00060006000E000E000C000C001C001800180038003000300060006000C000C001800300030006 000C00180010006000C000102A809E11>I45 D<3078F06005047C830D>I<00020006000C001C007C039C003800380038003800700070007000 7000E000E000E000E001C001C001C001C003800380038003800780FFF00F1C7C9B15>49 D<00C06000FFC001FF8001FE00010000010000020000020000020000020000047800058C000606 00040600080600000700000700000600000E00000E00700E00700C00E01C008018008038004030 0040600021C0001F0000131D7C9B15>53 D<000F0000308000C080018380038380030000060000 0E00000C00001C00001CF0003B18003C0C00380C00780C00700E00700E00700E00601C00E01C00 E01C00E01C00E03800E03800E0700060600060C0002180001E0000111D7B9B15>I<060F0F0600 0000000000000000003078F06008127C910D>58 D<0003F020001E0C60003002E000E003C001C0 01C0038001C0070000C00E0000801E0000801C0000803C0000803C000000780000007800000078 000000F0000000F0000000F0000000F0000000F0000400F0000400F0000400F000080070000800 7000100038002000180040000C0180000706000001F800001B1E7A9C1E>67 D<01FFFFE0003C00E0003800600038004000380040003800400070004000700040007020400070 200000E0400000E0400000E0C00000FFC00001C0800001C0800001C0800001C080000381010003 8001000380020003800200070004000700040007000C00070018000E007800FFFFF0001B1C7D9B 1C>69 D<01FFE0003C0000380000380000380000380000700000700000700000700000E00000E0 0000E00000E00001C00001C00001C00001C0000380080380080380080380100700100700300700 600700E00E03C0FFFFC0151C7D9B1A>76 D<01FE0007F8003E000780002E000F00002E00170000 2E001700002E002700004E002E00004E004E00004E004E00004E008E00008E011C00008E011C00 008E021C00008E021C000107043800010704380001070838000107103800020710700002072070 0002072070000207407000040740E000040780E000040700E0000C0700E0001C0601E000FF861F FC00251C7D9B25>I<01FC03FE001C0070003C0060002E0040002E0040002E0040004700800047 008000470080004380800083810000838100008181000081C1000101C2000101C2000100E20001 00E2000200E4000200740002007400020074000400380004003800040038000C0018001C001000 FF8010001F1C7D9B1F>I<000F8400304C00403C00801801001803001803001806001006001006 000007000007000003E00003FC0001FF00007F800007C00001C00001C00000C00000C02000C020 00C0600180600180600300600200F00400CC180083E000161E7D9C17>83 D<1FFFFFC01C0701C0300E00C0200E0080600E0080400E0080401C0080801C0080801C0080001C 0000003800000038000000380000003800000070000000700000007000000070000000E0000000 E0000000E0000000E0000001C0000001C0000001C0000001C0000003C000007FFE00001A1C799B 
1E>I<03CC063C0C3C181C3838303870387038E070E070E070E070E0E2C0E2C0E261E462643C38 0F127B9115>97 D<3F00070007000E000E000E000E001C001C001C001C0039C03E603830383070 38703870387038E070E070E070E060E0E0C0C0C1C0618063003C000D1D7B9C13>I<01F007080C 08181C3838300070007000E000E000E000E000E000E008E010602030C01F000E127B9113>I<00 1F80000380000380000700000700000700000700000E00000E00000E00000E0003DC00063C000C 3C00181C00383800303800703800703800E07000E07000E07000E07000E0E200C0E200C0E20061 E4006264003C3800111D7B9C15>I<01E007100C1018083810701070607F80E000E000E000E000 E000E0086010602030C01F000D127B9113>I<0003C0000670000C70001C60001C00001C000038 0000380000380000380000380003FF8000700000700000700000700000700000E00000E00000E0 0000E00000E00001C00001C00001C00001C00001C0000380000380000380000300000300000700 00C60000E60000CC00007800001425819C0D>I<00F3018F030F06070E0E0C0E1C0E1C0E381C38 1C381C381C383830383038187818F00F700070007000E000E0C0C0E1C0C3007E00101A7D9113> I<0FC00001C00001C0000380000380000380000380000700000700000700000700000E78000E8C 000F0E000E0E001C0E001C0E001C0E001C0E00381C00381C00381C003838007038807038807070 80707100E03200601C00111D7D9C15>I<01800380010000000000000000000000000000001C00 2600470047008E008E000E001C001C001C0038003800710071007100720072003C00091C7C9B0D >I<0FC00001C00001C0000380000380000380000380000700000700000700000700000E0F000E 11000E23800E43801C83001C80001D00001E00003F800039C00038E00038E00070E20070E20070 E20070E400E06400603800111D7D9C13>107 D<1F800380038007000700070007000E000E000E 000E001C001C001C001C0038003800380038007000700070007000E400E400E400E40068003800 091D7C9C0B>I<3C1E0780266318C04683A0E04703C0E08E0380E08E0380E00E0380E00E0380E0 1C0701C01C0701C01C0701C01C070380380E0388380E0388380E0708380E0710701C0320300C01 C01D127C9122>I<3C3C002646004687004707008E07008E07000E07000E07001C0E001C0E001C 0E001C1C00381C40381C40383840383880701900300E0012127C9117>I<01E007180C0C180C38 0C300E700E700EE01CE01CE01CE018E038E030E06060C031801E000F127B9115>I<07870004D9 8008E0C008E0C011C0E011C0E001C0E001C0E00381C00381C00381C00381800703800703000707 000706000E8C000E70000E00000E00001C00001C00001C00001C00003C0000FF8000131A7F9115 >I<3C3C26C2468747078E068E000E000E001C001C001C001C0038003800380038007000300010 127C9112>114 D<01F006080C080C1C18181C001F001FC00FF007F0007800386030E030C03080 6060C01F000E127D9111>I<00C001C001C001C00380038003800380FFE00700070007000E000E 000E000E001C001C001C001C00384038403840388019000E000B1A7D990E>I<1E030027070047 0700470700870E00870E000E0E000E0E001C1C001C1C001C1C001C1C0038388038388018388018 39001C5900078E0011127C9116>I<1E06270E470E4706870287020E020E021C041C041C041C08 18083808181018200C4007800F127C9113>I<1E01832703874703874703838707018707010E07 010E07011C0E021C0E021C0E021C0E04180C04181C04181C081C1C100C263007C3C018127C911C >I<070E0019910010E38020E38041C30041C00001C00001C00003800003800003800003800007 0200670200E70400CB04008B080070F00011127D9113>I<1E03270747074707870E870E0E0E0E 0E1C1C1C1C1C1C1C1C38383838183818381C7007F00070007000E0E0C0E1C0818047003C00101A 7C9114>I E /Fn 8 116 df73 D80 D<0007FFC0000000003FFFFC00000000FFFFFF00000003F801 FFC0000007F0003FE0000007F8001FF000000FFC000FF800000FFC000FFC00000FFC0007FC0000 0FFC0007FE00000FFC0003FE000007F80003FF000003F00003FF000000000003FF000000000003 FF000000000003FF000000000003FF000000000003FF000000000003FF000000000003FF000000 0007FFFF00000000FFFFFF0000000FFFE3FF0000007FF803FF000001FFC003FF000003FF0003FF 000007FC0003FF00000FF80003FF00001FF00003FF00003FF00003FF00007FE00003FF00007FE0 0003FF0380FFC00003FF0380FFC00003FF0380FFC00003FF0380FFC00003FF0380FFC00007FF03 
80FFC00007FF03807FE0000DFF03807FE0001DFF03803FF00039FF87001FF80070FFCF000FFE03 E07FFE0007FFFF807FFC0000FFFE001FF800001FF00007E000312E7CAD37>97 D<00FF80FFFF80FFFF80FFFF8003FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF 8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF 8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF 8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF 8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF 8001FF8001FF8001FF8001FF80FFFFFFFFFFFFFFFFFF18487DC71F>108 D<00003FFC00000001FFFF8000000FFFFFF000003FF00FFC00007F8001FE0001FE00007F8003FC 00003FC007F800001FE007F800001FE00FF000000FF01FE0000007F81FE0000007F83FE0000007 FC3FE0000007FC7FC0000003FE7FC0000003FE7FC0000003FE7FC0000003FEFFC0000003FFFFC0 000003FFFFC0000003FFFFC0000003FFFFC0000003FFFFC0000003FFFFC0000003FFFFC0000003 FFFFC0000003FFFFC0000003FF7FC0000003FE7FC0000003FE7FC0000003FE7FE0000007FE3FE0 000007FC3FE0000007FC1FE0000007F81FF000000FF80FF000000FF007F800001FE007FC00003F E003FE00007FC001FF0000FF80007F8001FE00003FF00FFC00000FFFFFF0000003FFFFC0000000 3FFC0000302E7DAD37>111 D<00FF801FF80000FFFF80FFFF0000FFFF83FFFFC000FFFF8FC07F F00003FF9E000FF80001FFF80007FC0001FFF00003FE0001FFE00001FF0001FFC00000FF8001FF 800000FFC001FF8000007FC001FF8000007FE001FF8000007FE001FF8000003FE001FF8000003F F001FF8000003FF001FF8000003FF001FF8000001FF801FF8000001FF801FF8000001FF801FF80 00001FF801FF8000001FF801FF8000001FF801FF8000001FF801FF8000001FF801FF8000001FF8 01FF8000001FF801FF8000001FF801FF8000001FF001FF8000003FF001FF8000003FF001FF8000 003FF001FF8000003FE001FF8000007FE001FF8000007FC001FF800000FFC001FF800000FF8001 FFC00001FF8001FFE00001FF0001FFF00003FE0001FFF8000FFC0001FF9E001FF80001FF8F80FF E00001FF87FFFFC00001FF81FFFE000001FF803FF0000001FF800000000001FF800000000001FF 800000000001FF800000000001FF800000000001FF800000000001FF800000000001FF80000000 0001FF800000000001FF800000000001FF800000000001FF800000000001FF800000000001FF80 0000000001FF800000000001FF800000000001FF8000000000FFFFFF00000000FFFFFF00000000 FFFFFF0000000035427DAD3D>I<00FF007F00FFFF01FFC0FFFF03FFE0FFFF0787F003FF0E0FF0 01FF1C1FF801FF381FF801FF301FF801FF701FF801FF600FF001FF600FF001FFE003C001FFC000 0001FFC0000001FFC0000001FFC0000001FF80000001FF80000001FF80000001FF80000001FF80 000001FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF 80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001 FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF800000FFFFFFC000 FFFFFFC000FFFFFFC000252E7DAD2C>114 D<001FFC030000FFFF870003FFFFCF000FE007FF00 1F8000FF003E00003F003E00001F007C00001F007C00000F00FC00000F00FC00000700FC000007 00FE00000700FF00000700FF80000000FFE00000007FFE0000007FFFF800003FFFFF00003FFFFF C0001FFFFFF0000FFFFFF80007FFFFFC0001FFFFFE00007FFFFF00001FFFFF000000FFFF800000 07FF80000000FFC0E000007FC0E000003FC0E000001FC0F000000FC0F000000FC0F800000FC0F8 00000FC0F800000F80FC00000F80FE00001F80FF00001F00FF80003E00FFC0007C00F9F803F800 F0FFFFF000E03FFFC000C007FE0000222E7CAD2B>I E /Fo 8 117 df<00003000000070000001 F0000007F000007FF000FFFFF000FFFFF000FF8FF000000FF000000FF000000FF000000FF00000 0FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000 000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF0 00000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000F F000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF00000 
0FF0007FFFFFFE7FFFFFFE7FFFFFFE1F3779B62D>49 D<000000FFE0001800000FFFFC00180000 7FFFFF00380001FFE00FC0780007FE0001F0F8000FF8000079F8003FE000003FF8007FC000000F F800FF80000007F801FF00000007F803FE00000003F807FC00000001F807F800000001F80FF800 000000F80FF000000000F81FF000000000781FF000000000783FE000000000783FE00000000038 7FE000000000387FE000000000387FE000000000387FC000000000007FC00000000000FFC00000 000000FFC00000000000FFC00000000000FFC00000000000FFC00000000000FFC00000000000FF C00000000000FFC00000000000FFC00000000000FFC00000000000FFC000000000007FC0000000 00007FC000000000007FE000000000007FE000000000387FE000000000383FE000000000383FE0 00000000381FF000000000381FF000000000700FF000000000700FF8000000007007F800000000 E007FC00000000E003FE00000001C001FF000000038000FF8000000780007FC000000F00003FE0 00001E00000FF800003C000007FE0000F0000001FFE007E00000007FFFFF800000000FFFFE0000 000000FFE00000353B7BB940>67 D<003FFC00000001FFFF80000007E00FE000000FC003F80000 0FE001FC00001FF001FE00001FF000FF00001FF000FF00001FF0007F00000FE0007F800007C000 7F80000000007F80000000007F80000000007F80000000007F80000000007F800000000FFF8000 0007FFFF8000003FE07F800001FF007F800007FC007F80000FF0007F80001FE0007F80003FE000 7F80007FC0007F80007FC0007F8380FF80007F8380FF80007F8380FF80007F8380FF8000BF8380 FF8000BF83807FC0013F83807FC0033F83803FE0061FC7001FF81C0FFE0007FFF007FC00007FC0 03F00029257DA42D>97 D<0003FF0000001FFFE000007F07F00001FC01FC0003F000FE0007E000 7F000FE0003F001FC0003F801FC0003F803FC0001FC03F80001FC07F80001FC07F80001FE07F80 001FE0FF80001FE0FF80001FE0FFFFFFFFE0FFFFFFFFE0FF80000000FF80000000FF80000000FF 80000000FF800000007F800000007F800000007F800000003FC00000003FC00000001FC00000E0 1FE00000E00FE00001C007F000038003F800070000FE000E00007FC07C00001FFFF0000001FF80 0023257EA428>101 D<01FC00000000FFFC00000000FFFC00000000FFFC0000000007FC000000 0003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC 0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC000000 0003FC0000000003FC0000000003FC0000000003FC01FF000003FC07FFC00003FC1C0FF00003FC 3007F80003FC6003F80003FCC003FC0003FC8001FC0003FD0001FE0003FF0001FE0003FE0001FE 0003FE0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC 0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE 0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC 0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE00FFFFF07FFFF8FFFFF07FFF F8FFFFF07FFFF82D3A7EB932>104 D<01FC03FE0000FFFC1FFFC000FFFC780FF000FFFDE003F8 0007FF8001FC0003FF0000FE0003FE0000FF0003FC00007F8003FC00007FC003FC00003FC003FC 00003FE003FC00003FE003FC00001FE003FC00001FE003FC00001FF003FC00001FF003FC00001F F003FC00001FF003FC00001FF003FC00001FF003FC00001FF003FC00001FF003FC00001FF003FC 00001FE003FC00003FE003FC00003FE003FC00003FC003FC00003FC003FC00007F8003FC00007F 8003FE0000FF0003FF0001FE0003FF8003FC0003FDC007F80003FCF81FE00003FC3FFF800003FC 07FC000003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC000000 0003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC 00000000FFFFF0000000FFFFF0000000FFFFF00000002C357EA432>112 D<01F80FC0FFF83FF0FFF870F8FFF8C1FC07F883FE03F983FE03F903FE03FB03FE03FA01FC03FA 00F803FA007003FE000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003 FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC0000 03FC000003FC000003FC000003FC000003FC0000FFFFF800FFFFF800FFFFF8001F257EA424> 114 D<001C0000001C0000001C0000001C0000001C0000003C0000003C0000003C0000003C0000 
007C0000007C000000FC000000FC000001FC000003FC000007FC00001FFFFFC0FFFFFFC0FFFFFF C003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC 000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003 FC00E003FC00E003FC00E003FC00E003FC00E003FC00E003FC00E003FC00E001FC00C001FC01C0 01FE01C000FE0380007F8700001FFE000007F8001B357EB423>116 D E /Fp 11 115 df<008003800F80F380038003800380038003800380038003800380038003800380 03800380038003800380038003800380038003800380038003800380038007C0FFFE0F217CA018 >49 D<03F8000C1E001007002007804007C07807C07803C07807C03807C0000780000780000700 000F00000E0000380003F000001C00000F000007800007800003C00003C00003E02003E07003E0 F803E0F803E0F003C04003C0400780200780100F000C1C0003F00013227EA018>51 D<01F000060C000C0600180700380380700380700380F001C0F001C0F001C0F001E0F001E0F001 E0F001E0F001E07001E07003E03803E01805E00C05E00619E003E1E00001C00001C00001C00003 80000380300300780700780600700C002018001030000FC00013227EA018>57 D77 D<03F0200C0C601802603001E07000E060 0060E00060E00060E00020E00020E00020F00000F000007800007F00003FF0001FFE000FFF0003 FF80003FC00007E00001E00000F00000F0000070800070800070800070800070C00060C00060E0 00C0F000C0C80180C6070081FC0014247DA21B>83 D<0FE0001838003C0C003C0E001807000007 0000070000070000FF0007C7001E07003C0700780700700700F00708F00708F00708F00F087817 083C23900FC1E015157E9418>97 D<01FE000703000C07801C0780380300780000700000F00000 F00000F00000F00000F00000F00000F000007000007800403800401C00800C010007060001F800 12157E9416>99 D<0E0000FE00001E00000E00000E00000E00000E00000E00000E00000E00000E 00000E00000E00000E00000E1F800E60C00E80E00F00700F00700E00700E00700E00700E00700E 00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E0070FFE7FF18237FA2 1B>104 D<1C001E003E001E001C00000000000000000000000000000000000E00FE001E000E00 0E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FFC00A227FA10E >I<0E1F80FE60C01E80E00F00700F00700E00700E00700E00700E00700E00700E00700E00700E 00700E00700E00700E00700E00700E00700E00700E0070FFE7FF18157F941B>110 D<0E3CFE461E8F0F0F0F060F000E000E000E000E000E000E000E000E000E000E000E000E000E00 0F00FFF010157F9413>114 D E /Fq 20 121 df<00003FC0020001FFF8060007E01C06001F00 070E003C00018E00780000DE00F000005E01E000003E03C000001E078000001E0F0000000E0F00 00000E1E000000061E000000063E000000063C000000027C000000027C000000027C0000000278 00000000F800000000F800000000F800000000F800000000F800000000F800000000F800000000 F800000000F800000000F80000000078000000007C000000007C000000027C000000023C000000 023E000000021E000000021E000000040F000000040F0000000C078000000803C000001801E000 001000F00000200078000040003C000180001F0003000007E01E000001FFF80000003FC0002732 7CB02F>67 D 73 D77 D80 D<007F004001FFC0400780F0C00F0018C01C000DC03C0007C0 380003C0780001C0700001C0F00000C0F00000C0F00000C0F0000040F0000040F0000040F80000 40F80000007C0000007E0000003F0000003FE000001FFE00000FFFE00007FFF80001FFFE00007F FF000007FF8000007F8000000FC0000007E0000003E0000003E0000001F0000001F0800000F080 0000F0800000F0800000F0800000F0C00000F0C00000E0E00001E0E00001E0F00001C0F8000380 EC000780C7000F00C1E03E0080FFF800801FE0001C327CB024>83 D<00FE0000070380000801E0 001000F0003C0078003E0078003E003C003E003C001C003C0000003C0000003C0000003C000007 FC00007C3C0003E03C0007803C001F003C003E003C003C003C007C003C0078003C08F8003C08F8 003C08F8003C08F8007C08F8007C087C00BC083E011E100F060FE003F807C01D1E7D9D20>97 D<018000003F800000FF800000FF8000000F800000078000000780000007800000078000000780 000007800000078000000780000007800000078000000780000007800000078000000780000007 
81F80007860700079803C007A000E007C000F007C00078078000380780003C0780003E0780001E 0780001E0780001F0780001F0780001F0780001F0780001F0780001F0780001F0780001F078000 1E0780003E0780003C0780003C0780007807C00070074000F0072001C006100380060C0E000403 F80020317EB024>I<001FC00000F0380001C00400078002000F000F001E001F001E001F003C00 1F007C000E007C00000078000000F8000000F8000000F8000000F8000000F8000000F8000000F8 000000F8000000780000007C0000007C0000003C0000801E0000801E0001000F00010007800200 01C00C0000F03000001FC000191E7E9D1D>I<003F800000E0F0000380380007001C000E001E00 1E000E003C000F003C000F007C000F807800078078000780F8000780FFFFFF80F8000000F80000 00F8000000F8000000F8000000F800000078000000780000007C0000003C0000801C0000801E00 01000F0002000700020001C00C0000F03000001FC000191E7E9D1D>101 D<07000F801F801F800F8007000000000000000000000000000000000000000000000001801F80 FF80FF800F80078007800780078007800780078007800780078007800780078007800780078007 80078007800780078007800FC0FFF8FFF80D2F7EAE12>105 D<01803F80FF80FF800F80078007 800780078007800780078007800780078007800780078007800780078007800780078007800780 078007800780078007800780078007800780078007800780078007800780078007800780078007 800FC0FFFCFFFC0E317EB012>108 D<0181FE003FC0003F860780C0F000FF8801C1003800FF90 01E2003C000FA000E4001C0007A000F4001E0007C000F8001E0007C000F8001E00078000F0001E 00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000 F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00 078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0 001E00078000F0001E000FC001F8003F00FFFC1FFF83FFF0FFFC1FFF83FFF0341E7E9D38>I<01 81FC003F860F00FF880380FF9003C00FA001C007C001E007C001E007C001E0078001E0078001E0 078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001 E0078001E0078001E0078001E0078001E0078001E0078001E0078001E00FC003F0FFFC3FFFFFFC 3FFF201E7E9D24>I<003FC00000E0700003801C0007000E000E0007001E0007803C0003C03C00 03C07C0003E0780001E0780001E0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F8 0001F0F80001F0780001E0780001E07C0003E03C0003C03C0003C01E0007800E00070007000E00 03801C0000E07000003FC0001C1E7E9D20>I<0181F8003F860F00FF9803C0FFA001E007C000F0 07C00078078000780780003C0780003E0780003E0780001E0780001F0780001F0780001F078000 1F0780001F0780001F0780001F0780001F0780001E0780003E0780003C0780003C0780007807C0 00F007C000F007A001C007900380078C0E000783F8000780000007800000078000000780000007 8000000780000007800000078000000780000007800000078000000FC00000FFFC0000FFFC0000 202C7E9D24>I<0183E03F8C18FF907CFF907C0FA07C07C03807C00007C00007C0000780000780 000780000780000780000780000780000780000780000780000780000780000780000780000780 000780000780000780000FC000FFFE00FFFE00161E7E9D19>114 D<03FC200E02603801E03000 E0600060E00060E00020E00020F00020F000207C00007F80003FFC001FFF0007FF8001FFE0000F E00001F08000F8800078800038C00038C00038C00038E00030F00070F00060C800C0C6038081FE 00151E7E9D19>I<00400000400000400000400000400000C00000C00000C00001C00001C00003 C00007C0000FC0001FFFE0FFFFE003C00003C00003C00003C00003C00003C00003C00003C00003 C00003C00003C00003C00003C00003C00003C00003C01003C01003C01003C01003C01003C01003 C01003C01001C02001E02000E0400078C0001F00142B7FAA19>I<018000603F800FE0FF803FE0 FF803FE00F8003E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001 E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E00780 03E0078003E0078003E0038005E003C009E001C019F000F021FF001FC1FF201E7E9D24>I120 D E end %%EndProlog %%BeginSetup %%Feature: *Resolution 300dpi TeXDict begin %%PaperSize: A4 %%EndSetup %%Page: 
Proposal I
MPI Context Subcommittee
Marc Snir
March 1993

Chapter 1: Proposal I

Editorial Note: This chapter is the proposal of Marc Snir which also appears in
the working documents of the point-to-point subcommittee (March 15) and the
collective communication subcommittee (March 16). This is a minor edit of the
LaTeX source of the point-to-point document provided by Marc. The edit was
performed by Lyndon Clarke.

1.1 Contexts

A context consists of:

  * A set of processes that currently belong to the context (possibly all
    processes, or a proper subset).

  * A ranking of the processes within that context, i.e., a numbering of the
    processes in that context from 0 to n-1, where n is the number of
    processes in that context.

A process may belong to several contexts at the same time.

Any interprocess communication occurs within a context, and messages sent
within one context can be received only within the same context. A context is
specified using a context handle (i.e., a handle to an opaque object that
identifies a context). Context handles cannot be transferred from one process
to another; they can be used only on the process where they were created.

Examples of possible uses for contexts follow.

1.1.1 Loosely synchronous library call interface

Consider the case where a parallel application executes a "parallel call" to a
library routine, i.e., where all processes transfer control to the library
routine. If the library was developed separately, then one should beware of
the possibility that the library code may receive by mistake messages sent by
the caller code, and vice versa. To prevent such an occurrence one might use a
barrier synchronization before and after the parallel library call. Instead,
one can allocate a different context to the library, thus preventing unwanted
interference. Now, the transfer of control to the library need not be
synchronized.

1.1.2 Functional decomposition and modular code development

Often, a parallel application is developed by integrating several distinct
functional modules, each developed separately. Each module is a parallel
program that runs on a dedicated set of processes, and the computation
consists of phases where modules compute separately, intermixed with global
phases where all processes communicate. It is convenient to allow each module
to use its own private process numbering scheme for the intramodule
computation. This is achieved by using a private module context for
intramodule computation, and a global context for intermodule communication.

1.1.3 Collective communication

MPI supports collective communication within dynamically created groups of
processes. Each such group can be represented by a distinct context. This
provides a simple mechanism to ensure that communication that pertains to
collective communication within one group is not confused with collective
communication within another group.

1.1.4 Lightweight gang scheduling

Consider an environment where processes are multithreaded. Contexts can be
used to provide a mechanism whereby all processes are time-shared between
several parallel executions, and can context switch from one parallel
execution to another, in a loosely synchronous manner. A thread is allocated
on each process to each parallel execution, and a different context is used to
identify each parallel execution. Thus, traffic from one execution cannot be
confused with traffic from another execution. The blocking and unblocking of
threads due to communication events provide a "lazy" context switching
mechanism. This can be extended to the case where the parallel executions are
spanning distinct process subsets. (MPI does not require multithreaded
processes.)

Discussion: A context handle might be implemented as a pointer to a structure
that consists of a context label (that is carried by messages sent within this
context) and a context member table, that translates process ranks within a
context to absolute addresses or to routing information. Of course, other
implementations are possible, including implementations that do not require
each context member to store a full list of the context members.

Contexts can be used only on the process where they were created. Since the
context carries information on the group of processes that belong to this
context, a process can send a message within a context only to other processes
that belong to that context. Thus, each process needs to keep track only of
the contexts that were created at that process; the total number of contexts
per process is likely to be small.

The only difference I see between this current definition of context, which
subsumes the group concept, and a pared down definition, is that I assume here
that process numbering is relative to the context, rather than being global,
thus requiring a context member table. I argue that this is not much added
overhead, and gives much additional needed functionality.

  * If a new context is created by copying a previous context, then one does
    not need a new member table; rather, one needs just a new context label
    and a new pointer to the same old context member table. This holds true,
    in particular, for contexts that include all processes.

  * A context member table makes sure that a message is sent only to a process
    that can execute in the context of the message. The alternative mechanism,
    which is checking at reception, is less efficient, and requires that each
    context label be system-wide unique. This requires that, at the least, all
    processes in a context execute a collective agreement algorithm at the
    creation of this context.

  * The use of relative addressing within each context is needed to support
    true modular development of subcomputations that execute on a subset of
    the processes. There is also a big advantage in using the same context
    construct for collective communications as well.
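A possible C rendering of the implementation sketched in this discussion is
given below; the type and field names are invented here purely for
illustration and are not part of the proposal.

    /* Illustrative only: one possible layout for the opaque object
       behind a context handle, following the discussion above.        */
    typedef int mpi_route_t;           /* absolute address or routing info */

    typedef struct mpi_context_struct {
        int          label;            /* context label carried by messages */
        int          size;             /* number of member processes        */
        mpi_route_t *members;          /* rank -> absolute address / route  */
    } *MPI_CONTEXT;

    /* A copy of a context needs only a fresh label; the member table
       can be shared with the old context, as noted in the first bullet. */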
1.2 Context Operations

A global context ALL is predefined. All processes belong to this context when
computation starts. MPI does not specify how processes are initially ranked
within the context ALL. It is expected that the start-up procedure used to
initiate an MPI program (at load-time or run-time) will provide information or
control on this initial ranking (e.g., by specifying that processes are ranked
according to their pid's, or according to the physical addresses of the
executing processors, or according to a numbering scheme specified at load
time).

Discussion: If we think of adding new processes at run-time, then ALL conveys
the wrong impression, since it is just the initial set of processes.

The following operations are available for creating new contexts.

MPI_COPY_CONTEXT(newcontext, context)

Create a new context that includes all processes in the old context. The rank
of the processes in the previous context is preserved. The call must be
executed by all processes in the old context. It is a blocking call: No call
returns until all processes have called the function. The parameters are

OUT newcontext   handle to newly created context. The handle should not be
                 associated with an object before the call.
IN  context      handle to old context

Discussion: I considered adding a string parameter, to provide a unique
identifier to the next context. But, in an environment where processes are
single threaded, this is not much help: Either all processes agree on the
order they create new contexts, or the application deadlocks. A key may help
in an environment where processes are multithreaded, to distinguish calls from
distinct threads of the same process; but it might be simpler to use a mutex
algorithm at each process.
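A possible C rendering of the library use described in Section 1.1.1 follows;
the concrete C binding and the handle type MPI_CONTEXT are assumed here for
illustration, since the proposal gives only abstract parameter lists.

    /* Sketch: a parallel library isolates its traffic by copying the
       caller's context once, in a collective initialisation routine.
       Binding and handle type are assumed for illustration.           */
    static MPI_CONTEXT lib_context;

    void lib_init(MPI_CONTEXT caller_context)
    {
        /* Blocking and collective over all processes in caller_context. */
        MPI_COPY_CONTEXT(&lib_context, caller_context);
    }

    /* Point-to-point and collective calls made inside the library then
       pass lib_context, so they cannot match messages of the caller.  */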
Implementation note: No communication is needed to create a new context,
beyond a barrier synchronization; all processes can agree to use the same
naming scheme for successive copies of the same context. Also, no new rank
table is needed, just a new context label and a new pointer to the same old
table.

MPI_NEW_CONTEXT(newcontext, context, key, index)

OUT newcontext   handle to newly created context at calling process. This
                 handle should not be associated with an object before the call.
IN  context      handle to old context
IN  key          integer
IN  index        integer

A new context is created for each distinct value of key; this context is
shared by all processes that made the call with this key value. Within each
new context the processes are ranked according to the order of the index
values they provided; in case of ties, processes are ranked according to their
rank in the old context. This call is blocking: No call returns until all
processes in the old context have executed the call.

Particular uses of this function are:

(i) Reordering processes: All processes provide the same key value, and
provide their index in the new order.

(ii) Splitting a context into subcontexts, while preserving the old relative
order among processes: All processes provide the same index value, and provide
a key identifying their new subcontext.

MPI_RANK(rank, context)

OUT rank      integer
IN  context   context handle

Return the rank of the calling process within the specified context.

MPI_SIZE(size, context)

OUT size      integer
IN  context   context handle

Return the number of processes that belong to the specified context.
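As a concrete illustration of use (ii) above, the sketch below splits an
existing context into contiguous "row" subcontexts; the C binding is again
assumed for illustration only.

    /* Sketch: split a context into nrows row subcontexts, preserving
       the old relative order (use (ii) above).  Assumes the context
       size is a multiple of nrows; C binding assumed.                 */
    MPI_CONTEXT split_into_rows(MPI_CONTEXT context, int nrows)
    {
        MPI_CONTEXT row_context;
        int rank, size;

        MPI_RANK(&rank, context);
        MPI_SIZE(&size, context);

        /* Same key => same new context; a constant index keeps the
           old relative ordering within each subcontext.               */
        MPI_NEW_CONTEXT(&row_context, context,
                        /* key   */ rank / (size / nrows),
                        /* index */ 0);
        return row_context;
    }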
1.2.1 Usage note

Use of contexts for libraries: Each library may provide an initialization
routine that is to be called by all processes, and that generates a context
for the use of that library.

Use of contexts for functional decomposition: A harness program, running in
the context ALL, generates a subcontext for each module and then starts the
submodule within the corresponding context.

Use of contexts for collective communication: A context is created for each
group of processes where collective communication is to occur.

Use of contexts for context-switching among several parallel executions: A
preamble code is used to generate a different context for each execution; this
preamble code needs to use a mutual exclusion protocol to make sure each
thread claims the right context.

Discussion: If process handles are made explicit in MPI, then an additional
function needed is MPI_PROCESS(process, context, rank), which returns a handle
to the process identified by the rank and context parameters.

A possible addition is a function of the form
MPI_CREATE_CONTEXT(newcontext, list_of_process_handles) which creates a new
context out of an explicit list of members (and ranks them in their order of
occurrence in the list). This, coupled with a mechanism for requiring the
spawning of new processes to the computation, will allow the creation of a new
all inclusive context that includes the additional processes. However, I
oppose the idea of requiring dynamic process creation as part of MPI. Many
implementers want to run MPI in an environment where processes are statically
allocated at load-time.

----------------------------------------------------------------------
       /--------------------------------------------------------\
e||)  | Lyndon J Clarke     Edinburgh Parallel Computing Centre  |  e||)
c||c  | Tel: 031 650 5021   Email: lyndon@epcc.edinburgh.ac.uk   |  c||c
       \--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Sun Mar 21 14:50:32 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00341; Sun, 21 Mar 93 14:50:32 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24784; Sun, 21 Mar 93 14:50:19 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 14:50:18 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24770; Sun, 21 Mar 93 14:50:17 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03760; Sun, 21 Mar 93 13:44:53 CST
Date: Sun, 21 Mar 93 13:44:53 CST
From: Tony Skjellum
Message-Id: <9303211944.AA03760@Aurora.CS.MsState.Edu>
To: lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu
Subject: Re: Proposal I - LaTeX

Lyndon, what about your extensive proposal (that I just scribbled about
endlessly)? Are you shooting that?
Anyway, tell me specifically, and call whatever new thing Proposal VI, not
Proposal II' or II.
 - Tony

From owner-mpi-context@CS.UTK.EDU Sun Mar 21 14:52:15 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00353; Sun, 21 Mar 93 14:52:15 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24896; Sun, 21 Mar 93 14:52:01 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 14:52:00 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24888; Sun, 21 Mar 93 14:51:55 -0500
Date: Sun, 21 Mar 93 19:51:46 GMT
Message-Id: <13105.9303211951@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Proposal II' - LaTeX
To: mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

This is the (all new) Proposal II', credits to myself, which was written over
the last two days, intensely. PostScript follows.

I have now received comments from Tony on Proposal I++ (now defunct, I truly
hope). I will propagate these into the LaTeX of Proposal II' where
appropriate, in the style of my reply to Proposal V, shortly.

Best Wishes
Lyndon

----------------------------------------------------------------------
\documentstyle{report}

\begin{document}

\title{Proposal II$^\prime$\\MPI Context Subcommittee}
\author{Lyndon~J~Clarke}
\date{March 1993}
\maketitle

%======================================================================%
% BEGIN "Proposal II'"
% Written by Lyndon J. Clarke
% March 1993
%
\newcommand{\LabelNote}[1]{\label{ii:note:#1}}
\newcommand{\SeeReferNote}[1]{{\bf See~Note~\ref{ii:note:#1}.}}
\newcommand{\LabelSection}[1]{\label{ii:sect:#1}}
\newcommand{\ReferSection}[1]{{\bf Section~\ref{ii:sect:#1}.}}

\chapter{Proposal II$^\prime$}

{\it Editorial Note: This is not Proposal II as identified at the post-meet
lunch after the February meeting in Dallas. Rik~Littlefield came on board the
proposal writing process, and decided to write a different proposal which
appears as Proposal V. There is no longer a proposal which contains static
contexts as discussed at the post-meet lunch.}

%----------------------------------------------------------------------%
% BEGIN "Introduction"
%
\section{Introduction}

This chapter proposes that communication contexts and process groupings
within MPI appear as related concepts. In particular, process groupings
appear as ``frames'' which are used in the creation of communication
contexts. Communication contexts retain a reference to, and inherit
properties of, their process grouping frames. This reflects the observation
that an invocation of a module in a parallel program typically operates
within one or more groups of processes, and as such any communication
contexts associated with invocations of modules also bind certain semantics
of process groupings.

The proposal provides process identified communication, communications which
are limited in scope to single contexts, and communications which have scope
spanning pairs of contexts.

The proposal makes no statements regarding message tags. It is assumed that
these will be a bit string expressed as an integer in the host language.

Much of this proposal must be viewed as recommendations to other
subcommittees of MPI, primarily the point-to-point communication subcommittee
and the collective communications subcommittee. Concrete syntax is given in
the style of the ANSI C host language for purposes of discussion only.
The detailed proposal is presented in \ReferSection{proposal}, which refers
the reader to a set of discussion notes in \ReferSection{notes}. The notes
assume knowledge of the proposal text and are therefore best examined after
familiarisation with that text. Aspects of the proposal are discussed in
\ReferSection{discussion}, and it is also recommended that this material be
read after familiarisation with the text of the proposal.
%
% END "Introduction"
%----------------------------------------------------------------------%

%----------------------------------------------------------------------%
% BEGIN "Detailed Proposal"
%
\section{Detailed Proposal}
\LabelSection{proposal}

This section presents the detailed proposal, discussing in order of
appearance: processes; process groupings; communication contexts;
point-to-point communication; collective communication.

\subsection{Processes}
\LabelSection{processes}

This proposal views processes in the familiar way, as one thinks of processes
in Unix or NX for example. Each process is a distinct space of instructions
and data. Each process is allowed to compose multiple concurrent threads and
MPI does not distinguish such threads.

\subsubsection*{Process Descriptor}

Each process is described by a {\it process descriptor\/} which is expressed
as an integer type in the host language and has an opaque value which is
process local.
\SeeReferNote{integer:descriptors}
\SeeReferNote{process:identifiers}
\SeeReferNote{descriptor:cache}

The initialisation of MPI services will assign to each process an {\it own\/}
process descriptor. Each process retains its own process descriptor until the
termination of MPI services. MPI provides a procedure which returns the own
descriptor of the calling process. For example, {\tt pd = mpi\_own\_pd()}.

\subsubsection*{Process Creation and Destruction}

This proposal makes no statements regarding creation and destruction of
processes.
\SeeReferNote{dynamic:processes}

\subsubsection*{Descriptor Transmission}

The value of a process descriptor can be transmitted in a message as an
integer since it is an integer type in the host language. However the
recipient of the descriptor can make no defined use of the value in the MPI
operations described in this proposal --- the descriptor is {\it invalid}.

MPI provides a mechanism whereby the user can transmit a valid process
descriptor in a message such that the received descriptor is valid. This is
integrated with the capability to transmit typed messages. It is suggested
that a notional data type should be introduced for this purpose, e.g.
{\tt MPI\_PD\_TYPE}.

MPI provides a process descriptor registry service. The operations which this
service supports are: register descriptor by name; deregister descriptor;
lookup descriptor identifier by name and validate, blocking the caller until
the name has been registered.
\SeeReferNote{registry:check}
Use of this service is not mandated. Programs which can conveniently be
expressed without using the service can ignore it without penalty.

Note that receipt of a process descriptor in the fashions described above may
have a persistent effect on the implementation of MPI at the receiver, and in
particular may reserve state. MPI will provide a procedure which invalidates
a valid descriptor, allowing the implementation to free reserved state. For
example, {\tt mpi\_invalidate\_pd(pd)}. The user is not allowed to invalidate
the process descriptor of the calling process.
\SeeReferNote{process:coherency}

\subsubsection*{Descriptor Attributes}

This proposal makes no statements regarding process descriptor attributes.
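For illustration, the own-descriptor and registry operations above might be
used from ANSI~C as in the fragment below. Only {\tt mpi\_own\_pd} and
{\tt mpi\_invalidate\_pd} are named by this proposal; the registry calls
{\tt mpi\_register\_pd} and {\tt mpi\_lookup\_pd} are notional names for the
register and lookup-and-validate operations the registry service is described
as supporting.
\begin{verbatim}
/* Sketch only: concrete signatures are assumed for discussion.        */
int pd, server_pd;

pd = mpi_own_pd();                    /* own process descriptor        */
mpi_register_pd("server", pd);        /* notional registry call        */
server_pd = mpi_lookup_pd("server");  /* blocks until name registered  */
/* ... use server_pd in process identified communication ...           */
mpi_invalidate_pd(server_pd);         /* allow reserved state to go    */
\end{verbatim}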
\subsection{Process Groupings}
\LabelSection{groupings}

This proposal views a process grouping as an ordered collection of
(references to?) distinct processes, the membership and ordering of which
does not change over the lifetime of the grouping.
\SeeReferNote{dynamic:groups}
The canonical representation of a grouping reflects the process ordering and
is a one-to-one map from $Z_N$ to the descriptors of the $N$ processes
composing the grouping. The structure of a process grouping is defined by a
process grouping topology and this proposal makes no further statements
regarding such structures.

\subsubsection*{Group Descriptor}

Each group is identified by a {\it group descriptor\/} which is expressed as
an integer type in the host language and has an opaque value which is process
local.
\SeeReferNote{integer:descriptors}
\SeeReferNote{group:identifiers}
\SeeReferNote{descriptor:cache}

The initialisation of MPI services will assign to each process an {\it own}
group descriptor for a process grouping of which the process is a member.
Each process retains its own group descriptor and membership of the process
grouping until the termination of MPI services.
\SeeReferNote{group:blobs}
MPI provides a procedure which returns the descriptor of the home group of
the calling process. For example, {\tt gd = mpi\_home\_gd()}.
\SeeReferNote{own:group}

\subsubsection*{Group Creation and Deletion}

MPI provides a procedure which allows users to dynamically create one or more
groups which are subsets of existing groups. For example,
{\tt gdb = mpi\_group\_partition(gda, key)} creates one or more new groups
{\tt gdb} by partitioning an existing group {\tt gda} into one or more
distinct subsets. This procedure is called by and synchronises all members of
{\tt gda}.

MPI provides a procedure which allows users to dynamically create one group
by explicit definition of its membership as a list of process descriptors.
For example, {\tt gd = mpi\_group\_definition(listofpd)} creates one new
group {\tt gd} with membership and ordering described by the process
descriptor list {\tt listofpd}. This procedure is called by and synchronises
all processes identified in {\tt listofpd}.

MPI provides a procedure which allows users to delete a created group. This
procedure accepts the descriptor of a group which was created by the calling
process and destroys the identified group. For example,
{\tt mpi\_group\_deletion(gd)} deletes an existing group {\tt gd}. This
procedure is called by and synchronises all members of {\tt gd}.

MPI provides additional procedures which allow users to construct process
groupings which have a process grouping topology.

\subsubsection*{Descriptor Transmission}

The value of a group descriptor can be transmitted in a message as an integer
since it is an integer type in the host language. However the recipient of
the descriptor can make no defined use of the value in the MPI operations
described in this proposal --- the descriptor is {\it invalid}.

MPI provides a mechanism whereby the user can transmit a valid group
descriptor in a message such that the received descriptor is valid. This is
integrated with the capability to transmit typed messages. It is suggested
that a notional data type should be introduced for this purpose, e.g.
{\tt MPI\_GD\_TYPE}.

MPI provides a group descriptor registry service. The operations which this
service supports are: register descriptor by name; deregister descriptor;
lookup descriptor identifier by name and validate, blocking the caller until
the name has been registered.
\SeeReferNote{registry:check}
Use of this service is not mandated. Programs which can conveniently be
expressed without using the service can ignore it without penalty.

Note that receipt of a group descriptor in the fashions described above may
have a persistent effect on the implementation of MPI at the receiver, and in
particular may reserve state. MPI will provide a procedure which invalidates
a valid descriptor, allowing the implementation to free reserved state. For
example, {\tt mpi\_invalidate\_gd(gd)}. The user is not allowed to invalidate
the own group descriptor of the process or the group descriptor of any group
created by the calling process.
\SeeReferNote{group:coherency}

\subsubsection*{Descriptor Attributes}

MPI provides a procedure which accepts a valid group descriptor and returns
the rank of the calling process within the identified group. For example,
{\tt rank = mpi\_group\_rank(gd)}.

MPI provides a procedure which accepts a valid group descriptor and returns
the number of members, or {\it size}, of the identified group. For example,
{\tt size = mpi\_group\_size(gd)}.

MPI provides a procedure which accepts a valid group descriptor and process
order number, or {\it rank}, and returns the valid descriptor of the process
to which the supplied rank maps within the identified group. For example,
{\tt pd = mpi\_group\_pd(gd, rank)}.
\SeeReferNote{pd:to:rank}

MPI provides additional procedures which allow users to determine the process
grouping topology attributes.
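For illustration, the group procedures named above might be used together as
in the following ANSI~C fragment; the concrete types and signatures are
assumed here for discussion only.
\begin{verbatim}
int gda, gdb;          /* group descriptors (opaque integer values)    */
int rank, size, pd;

gda  = mpi_home_gd();                 /* the own (home) group          */
rank = mpi_group_rank(gda);
size = mpi_group_size(gda);

/* Partition the home group into two halves: members supplying the
   same key value end up in the same new group.                        */
gdb = mpi_group_partition(gda, rank < size / 2 ? 0 : 1);

/* Map rank 0 of the new group back to a process descriptor.           */
pd = mpi_group_pd(gdb, 0);
\end{verbatim}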
\subsection{Communication Contexts}
\LabelSection{contexts}

This proposal views a communication context as a uniquely identified
reference to exactly one process grouping, which is a field in a message
envelope and may therefore be used to distinguish messages. The context
inherits the referenced process grouping as a ``frame''. Each process
grouping is used as a frame for multiple contexts.

\subsubsection*{Context Descriptor}

Each context is identified by a {\it context descriptor\/} which is expressed
as an integer type in the host language and has an opaque value which is
process local.
\SeeReferNote{integer:descriptors}
\SeeReferNote{context:identifiers}
\SeeReferNote{descriptor:cache}

The creation of MPI process groupings allocates an {\it own\/} context which
inherits the created grouping as a frame and can be thought of as a property
of the created grouping. The grouping retains its own context until MPI
process grouping deletion. MPI provides a procedure which accepts a valid
group descriptor and returns the context descriptor of the own context of the
identified group. For example, {\tt cd = mpi\_own\_cd(gd)}.
\SeeReferNote{own:context}

\subsubsection*{Context Creation and Deletion}

MPI provides a procedure which allows users to dynamically create contexts.
This procedure accepts a valid descriptor of a group of which the calling
process is a member, and returns a context descriptor which references the
identified group. For example, {\tt cd = mpi\_context\_create(gd)}. This
procedure is called by and synchronises all members of {\tt gd}.
\SeeReferNote{dynamic:contexts}

MPI provides a procedure which allows users to destroy created contexts. This
procedure accepts a valid context descriptor which was created by the calling
process and deletes the identified context. For example,
{\tt mpi\_context\_delete(cd)}. This procedure is called by and synchronises
all members of the frame of {\tt cd}.

\subsubsection*{Descriptor Transmission}

The value of a context descriptor can be transmitted in a message as an
integer since it is an integer type in the host language. However the
recipient of the descriptor can make no defined use of the value in the MPI
operations described in this proposal --- the descriptor is {\it invalid}.

MPI provides a mechanism whereby the user can transmit a valid context
descriptor in a message such that the received descriptor is valid. This is
integrated with the capability to transmit typed messages. It is suggested
that a notional data type should be introduced for this purpose, e.g.
{\tt MPI\_CD\_TYPE}.

MPI provides a context descriptor registry service. The operations which this
service supports are: register descriptor by name; deregister descriptor;
lookup descriptor identifier by name and validate, blocking the caller until
the name has been registered.
\SeeReferNote{registry:check}
Use of this service is not mandated. Programs which can conveniently be
expressed without using the service can ignore it without penalty.

Note that receipt of a context descriptor in the fashions described above may
have a persistent effect on the implementation of MPI at the receiver, and in
particular may reserve state. MPI will provide a procedure which invalidates
a valid descriptor, allowing the implementation to free reserved state. For
example, {\tt mpi\_invalidate\_cd(cd)}. The user is not allowed to invalidate
the own context descriptor of a group or the context descriptor of any
context created by the calling process.
\SeeReferNote{context:coherency}

\subsubsection*{Descriptor Attributes}

MPI provides a procedure which allows users to determine the process grouping
which is the frame of a context. For example, {\tt gd = mpi\_context\_gd(cd)}.
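For illustration, a parallel library running over a group might obtain a
private context as in the fragment below; as before, the concrete types and
signatures are assumed for discussion only.
\begin{verbatim}
int gd, cd_own, cd_lib;

gd     = mpi_home_gd();          /* group used as the frame            */
cd_own = mpi_own_cd(gd);         /* context owned by the group itself  */

/* Collective over the members of gd: a fresh context on the same
   frame, reserved for the library's own traffic.                      */
cd_lib = mpi_context_create(gd);

/* ... communication scoped to cd_lib ...                              */

mpi_context_delete(cd_lib);      /* also collective over the frame     */
\end{verbatim}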
\subsection{Point-to-Point Communication}
\LabelSection{point2point}

This proposal recommends three forms for MPI point-to-point message
addressing and selection: null context; closed context; open context. It is
further recommended that messages communicated in each form are distinguished
such that a Send operation of form X cannot match with a Receive operation of
form Y, which requires that the form is embedded into the message envelope.
The three forms are described, followed by considerations of uniform
integration of these forms in the point-to-point communication section of
MPI.

\subsubsection*{Null Context Form}

The {\it null context\/} form contains no message context.
\SeeReferNote{null:context}
Message selection and addressing are expressed by {\tt (pd, tag)} where:
{\tt pd} is a process descriptor; {\tt tag} is a message tag. Send supplies
the {\tt pd} of the receiver. Receive supplies the {\tt pd} of the sender.
Receive can wildcard on {\tt pd} by supplying the value of a named constant
process descriptor, e.g. {\tt MPI\_PD\_WILD}. This proposal makes no
statement about the provision for wildcard on {\tt tag}.

\subsubsection*{Closed Context Form}

The {\it closed context\/} form permits communication between members of the
same context.
\SeeReferNote{closed:context}
Message selection and addressing are expressed by {\tt (cd, rank, tag)}
where: {\tt cd} is a context descriptor; {\tt rank} is a process rank in the
frame of {\tt cd}; {\tt tag} is a message tag. Send supplies the {\tt cd} of
the receiver and sender, and the {\tt rank} of the receiver.
Receive supplies the {\tt cd} of the sender and receiver, and the {\tt rank}
of the sender. The {\tt (cd, rank)} pair in Send (Receive) is sufficient to
determine the process descriptor of the receiver (sender). Receive cannot
wildcard on {\tt cd}. Receive can wildcard on {\tt rank} by supplying the
value of a named constant integer, e.g. {\tt MPI\_RANK\_WILD}. This proposal
makes no statement about the provision for wildcard on {\tt tag}.

\subsubsection*{Open Context Form}

The {\it open context\/} form permits communication between members of any
two contexts.
\SeeReferNote{open:context}
Message selection and addressing are expressed by {\tt (lcd, rcd, rank, tag)}
where: {\tt lcd} is a ``local'' context descriptor; {\tt rcd} is a ``remote''
context descriptor; {\tt rank} is a process rank in the frame of {\tt rcd};
{\tt tag} is a message tag. Send supplies the context descriptor for the
sender in {\tt lcd}, the context descriptor for the receiver in {\tt rcd},
and the {\tt rank} of the receiver in the frame of {\tt rcd}. Receive
supplies the context descriptor for the receiver in {\tt lcd}, the context
descriptor for the sender in {\tt rcd}, and the {\tt rank} of the sender in
the frame of {\tt rcd}. The {\tt (rcd, rank)} pair in Send (Receive) is
sufficient to determine the process descriptor of the receiver (sender).
Receive cannot wildcard on {\tt lcd}. Receive can wildcard on {\tt rcd} by
supplying the value of a named constant context descriptor, e.g.
{\tt MPI\_CD\_WILD}, in which case Receive {\it must\/} also wildcard on
{\tt rank} as there is insufficient information to determine the process
descriptor of the sender. Receive can wildcard on {\tt rank} by supplying the
value of a named constant integer, e.g. {\tt MPI\_RANK\_WILD}. This proposal
makes no statement about the provision for wildcard on {\tt tag}.
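For illustration, the three addressing forms might appear as follows in an
ANSI~C binding; the procedure names and argument orders shown are purely
notional, since this proposal leaves the Send and Receive syntax to the
point-to-point subcommittee.
\begin{verbatim}
/* Null context form: (pd, tag).                                       */
mpi_send(buf, len, pd_receiver, tag);
mpi_recv(buf, len, MPI_PD_WILD, tag);

/* Closed context form: (cd, rank, tag), ranks taken in the frame.     */
mpi_send_closed(buf, len, cd, rank_receiver, tag);
mpi_recv_closed(buf, len, cd, MPI_RANK_WILD, tag);

/* Open context form: (lcd, rcd, rank, tag).                           */
mpi_send_open(buf, len, lcd, rcd, rank_receiver, tag);
mpi_recv_open(buf, len, lcd, MPI_CD_WILD, MPI_RANK_WILD, tag);
\end{verbatim}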
\subsubsection*{Uniform Integration}

The three forms of addressing and selection described have different
syntactic frameworks. We can consider integrating these forms into the
point-to-point section of MPI by defining a further orthogonal axis (as in
the multi-level proposal of Gropp \& Lusk) which deals with the form. This is
at the expense of multiplying the number of Send and Receive procedures by a
factor of three, and some difficulty in details of the current point-to-point
working document which uniformly assumes a single addressing and selection
form. There are various approaches to unification of the syntactic frameworks
which may simplify integration. Three options are now described, each based
on retention and extension of the framework of one form. These options each
have advantages and disadvantages.

\paragraph*{Option i:}
The framework of the open context form is adopted and extended. We introduce
the {\it null\/} descriptor, the value of which is defined by a named
constant, e.g. {\tt MPI\_NULL}. The null context form is expressed as
{\tt (MPI\_NULL, MPI\_NULL, pd, tag)}, which is a little clumsy. The closed
context form is expressed as {\tt (MPI\_NULL, cd, rank, tag)}, which is
marginally inconvenient. The open context form is expressed as
{\tt (lcd, rcd, rank, tag)}, which is of course natural.

\paragraph*{Option ii:}
The framework of the closed context form is adopted and extended. We
introduce the {\it null\/} descriptor, the value of which is defined by a
named constant, e.g. {\tt MPI\_NULL}. The null context form is expressed as
{\tt (MPI\_NULL, pd, tag)}, which is marginally inconvenient. The closed
context form is expressed as {\tt (cd, rank, tag)}, which is of course
natural. Expression of the open context form requires a little more work. We
can use the {\tt cd} field as ``shorthand notation'' for the
{\tt (lcd, rcd)} pair at the expense of introducing some trickery. We can
define a mechanism which ``globs'' together these two fields into an
identifier which Send and Receive can distinguish from a context identifier
and treat as the shorthand notation. Then we should also define a mechanism
by which a receiver which has completed a Receive with wildcard on {\tt rcd}
is able to determine the valid context descriptor of the sender. This is
inconvenient.

\paragraph*{Option iii:}
The framework of the null context form is adopted and extended. The null
context form is expressed as {\tt (pd, tag)}, which is of course natural.
Expression of the open and closed context forms requires a little more work.
We can use the {\tt pd} field as ``shorthand notation'' for {\tt (cd, rank)}
and {\tt (lcd, rcd, rank)} by continuation of the trickery used in the
previous option. This is clumsy.

\subsection{Collective Communication}
\LabelSection{collective}

Symmetric collective communication operations are compliant with the closed
context form described above. This proposal recommends that such operations
accept a context descriptor which identifies the context and frame in which
they are to operate. MPI does plan to describe symmetric collective
communication operations. It is not possible to determine whether the
proposal is sufficient to allow implementation of the collective
communication section of MPI in terms of the point-to-point section of MPI
without loss of generality, since the collective operations are not yet
defined.

Asymmetric collective communication operations, especially those in which the
sender(s) and receiver(s) are distinct processes, are compliant with the open
context form described above. This proposal recommends that such operations
accept a pair of context descriptors (perhaps in a ``glob'' form) which
identify the contexts and frames in which they are to operate. MPI does not
plan to describe asymmetric collective communication operations. Such
operations are expressive when writing programs beyond the SPMD model and
comprise communicating, functionally distinct process groupings. This
proposal recommends that such operations should be considered in some
reincarnation of MPI.
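For illustration, a symmetric collective operation following this
recommendation might have the shape shown below; the procedure name
{\tt mpi\_broadcast} and its argument order are notional, since the
collective operations themselves are not defined by this proposal.
\begin{verbatim}
/* Closed context form: the context descriptor cd supplies both the
   matching scope and the frame in which the root rank is taken.       */
mpi_broadcast(buf, len, /* root rank */ 0, cd);
\end{verbatim}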
%
% END "Proposal"
%----------------------------------------------------------------------%

%----------------------------------------------------------------------%
% BEGIN "Discussion & Notes"
%
\section{Discussion \& Notes}

This section comprises a discussion of certain aspects of this proposal
followed by the notes referenced in the detailed proposal.

\subsection{Discussion}
\LabelSection{discussion}

We can dissect the proposal into two parts: an SPMD model core; an MIMD model
annex. In this discussion the dissection is exposed and the conceptual
foundation of each part is described. The discussion also presents arguments
for and against the MIMD model annex, and to some extent explores the
consequences of these arguments for MPI in a wider sense.

\subsubsection*{SPMD model core}

The SPMD model core provides noncommunicative process groupings and
communication contexts for writers of SPMD parallel libraries. It is intended
to provide expressive power beyond the ``SPIMD'' model, in which processes
execute in a strictly SIMD fashion.

The material describing processes in \ReferSection{processes} is simplified:
\begin{itemize}
\item processes have identical instruction blocks and different data blocks;
\item process descriptor transmission and registry become redundant (Note,
however, that they are anyway redundant as context descriptor transmission
and registry coupled with context descriptor attribute query and group
descriptor attribute query is capable of providing access to all process
descriptors);
\item dynamic process models are not considered.
\end{itemize}

The material describing process groupings in \ReferSection{groupings} is
simplified:
\begin{itemize}
\item group descriptor transmission and registry become redundant (Note,
however, that they are anyway redundant as context descriptor transmission
and registry coupled with context descriptor attribute query is capable of
providing access to all group descriptors.);
\item the own process grouping explicitly becomes a group containing all
processes.
\end{itemize}

The material describing communication contexts in \ReferSection{contexts} is
simplified:
\begin{itemize}
\item context descriptor transmission and registry become unnecessary.
\end{itemize}

The material describing point-to-point communication in
\ReferSection{point2point} is simplified:
\begin{itemize}
\item the open context form becomes redundant;
\item uniform integration ``Option i'' is deleted, and ``Option ii'' loses
``globs'' thus becoming simple enough that ``Option iii'' need not be further
considered.
\end{itemize}

The material describing collective communication in \ReferSection{collective}
is simplified:
\begin{itemize}
\item there is no possibility of collective communication operations spanning
more than one context.
\end{itemize}

\subsubsection*{MIMD model annex}

The MIMD model annex extends and modifies the SPMD model core to provide
expressive power for MIMD programs which combine (coarse grain) function and
data driven parallelism. The MIMD model annex is not intended to provide
expressive power to fine grained function driven parallel programs --- it is
conjectured that message passing approaches such as MPI are not suited to
fine grained function driven or data driven programming. The annex is
intended to provide expressive power for the ``MSPMD'' model, which is now
described.

One of the simplest MIMD models is the ``host-node'' model, familiar in
EXPRESS and PARMACS, containing two functional groups: one node group (SPMD
like); one host group (a singleton). The ``parallel client-server'' model, in
which each of the $n$ clients is composed of parallel processes, and in which
the server may also be composed of parallel processes, contains $1+n$
functional groups: $n$ client groups (SPMD like); one server group
(singleton, SPMD like). The ``host-node'' model is a case of this model in
which the host can be viewed as a singleton client and the nodes can be
viewed as an SPMD like server (or vice versa!). The ``parallel module graph''
model, in which each module within the graph may be composed of parallel
processes (singleton, SPMD like), contains any number of functional groups
with arbitrarily complex relations. The ``parallel client-server'' model is a
case of this model in which the module graph contains arcs joining the server
to each client. The MIMD model annex is intended to provide expressive power
for the ``parallel module graph'' model, which I refer to as the MSPMD model.
This model requires support at some level as commercial and modular
applications are increasingly moving into parallel computing. The debate is
whether or not message passing approaches such as MPI (which I simply refer
to as MPI) should provide for this model.

The negative argument is that such SPMD modules should be controlled and
communicate with one another as ``parallel processes'' at the distributed
operating system level. The argument has some appeal as the world of
distributed operating systems must deal with difficult issues such as process
control and coherency. Avoidance of duplication in MPI allows MPI to focus on
provision of a smaller set of facilities with greater emphasis on maximum
performance for data driven SPMD parallel programs.

The positive argument is that communication between SPMD modules requires
high performance and MPI can provide such performance with tuned semantics
which expect the user to deal with coherency issues. There is also the
argument that MPI is able to deal with this in a shorter time than
development and standardisation procedures for distributed operating systems.
The latter argument is comparable with the argument for MPI versus parallel
compilation.

\subsection{Notes}
\LabelSection{notes}

\begin{enumerate}

\item\LabelNote{integer:descriptors}
Descriptors are a plentiful but bounded resource. They are expressed as
integers for two reasons. Firstly, this expression facilitates uniform
binding to ANSI~C and Fortran~77. Secondly, there is a later convenience in
ensuring that process descriptors in particular are expressed as integers,
and I made all the descriptors integers for consistency. In practice the
descriptor could be an index into a table of object description structures or
pointers to such structures.

\item\LabelNote{descriptor:cache}
It is possible to allow the user to save user defined attributes in
descriptors, as Littlefield has suggested. Such attributes should not be
communicated in either descriptor transmission or the descriptor registry.

\item\LabelNote{process:identifiers}
The process descriptor is not a global unique process identifier but must
reference such an identifier. The use of descriptors hides such identifiers
in order that they may be implementation dependent, and enables more
efficient access to additional process information in implementations. For
example, the process identifier of a receiver may not be sufficient to route
a message to the receiver, and this information can be referenced from the
process descriptor.

\item\LabelNote{group:identifiers}
The group descriptor is not a global unique group identifier but must
reference such an identifier. The use of descriptors hides such identifiers
in order that they may be implementation dependent, and enables more
efficient access to additional group information in implementations. For
example, the size of the group and the map from member ranks to process
descriptors can be referenced from the group descriptor.

\item\LabelNote{context:identifiers}
The context descriptor is not a global unique context identifier but must
reference such an identifier. The use of descriptors hides such identifiers
in order that they may be implementation dependent, and enables more
efficient access to additional context information in implementations. For
example, the group descriptor of the frame can be referenced from the context
descriptor.
\item\LabelNote{dynamic:processes}
The proposal does not prevent a process model which allows dynamic creation
and termination of processes; however, it does not favour an asynchronous
process creation model in which singleton processes are created and
terminated in an unstructured fashion. The proposal does favour a model in
which blobs of processes are created (and terminated) in a concerted fashion,
and in which each group so created is assigned a different own process
grouping. This model does not take into account the potential desire to
expand or contract an existing blob of processes in order to take into
account (presumably slowly) time varying workloads. The author suggests that
concerted blob expand and contract operations are more suitable for this
purpose than asynchronous spawn/kill operations.

\item\LabelNote{registry:check}
There should probably also be a registry operation which checks whether a
name has been registered without the possibility of blocking the calling
process indefinitely. The same registry could be used for process, group and
context descriptors.

\item\LabelNote{process:coherency}
In the static process model it is assumed that processes are created in
concert, and that termination of one process implies termination of all
processes. In this case there is no coherency problem associated with
processes and process descriptors. In a dynamic process model there is a
coherency problem with processes and process descriptors since a process can
retain the descriptor of a process which has been deleted. The proposal
expects the user to ensure coherent usage.

\item\LabelNote{dynamic:groups}
Process groupings are dynamic in the sense that they can be created at any
time, and static in the sense that the membership is constant over the
lifetime of the process grouping. The author suggests concerted group expand
and contract operations are more generally useful than asynchronous
join/leave operations.

\item\LabelNote{group:blobs}
MPI has discussed the concept of the ``all'' group which contains all
processes. The ``own'' group concept is intended to be a generalisation of
the ``all'' group concept which is expressive for programs including and
beyond the SPMD model. Processes are created in ``blobs'', where each member
of a blob is a member of the same own process grouping, and different blobs
have different own process groupings. An SPMD program is a single blob. A
host-node program composes two blobs, the node blob and the host blob (which
is a singleton). There is a sense in which a blob has a locally SPMD view.

\item\LabelNote{own:group}
This procedure looks like a process descriptor attribute query.

\item\LabelNote{group:coherency}
Dynamic processes potentially introduce a group coherency problem as a group
descriptor can contain the process descriptor of a process which has been
deleted. Group transmission introduces a group and group descriptor coherency
problem since a process can retain a group descriptor of a group which has
been deleted. The proposal expects the user to ensure coherent usage.

\item\LabelNote{pd:to:rank}
The proposal did not include a function to convert a {\tt (gd, pd)} pair into
a rank. It is suggested that this inverse map is allowed to be arbitrarily
slow.
\item\LabelNote{dynamic:contexts}
Marc Snir has described a method by which global unique group identifiers can
be generated without use of shared global data. The description of context
creation anticipates a similar method for global unique context identifier
generation. However the synchronisation requirement of this method makes it
unnecessary for context creation. Synchronisation at creation of context
within a process grouping frame implies that all processes within the frame
perform the same context creations in the same sequence. Therefore the
combination of global unique frame identifier and context creation sequence
number provides a global unique context identifier without communication
(a small illustrative sketch of this scheme follows these notes).

\item\LabelNote{own:context}
This procedure looks like a group descriptor attribute query.

\item\LabelNote{context:coherency}
The retention of a reference to a group descriptor by a context introduces
the potential for a context and context descriptor coherency problem, as a
context could contain a reference to the group descriptor of a group which
has been deleted. This could be circumvented by demanding that all such
contexts are deleted before a group is deleted. Context descriptor
transmission introduces a coherency problem since a process can retain the
descriptor of a context which has been deleted. The proposal expects the user
to ensure coherent usage.

\item\LabelNote{null:context}
This is somewhat like PARMACS and PVM~3. It is general purpose, but not
particularly expressive. It does not provide facilities for writers of
parallel libraries.

\item\LabelNote{open:context}
This is somewhat like ZIPCODE and VENUS. It is expressive in certain SPMD
programs where noncommunicative data driven parallel computations can be
performed concurrently. It provides facilities for writers of SPMD like
parallel libraries.

\item\LabelNote{closed:context}
This is somewhat like CHIMP and PVM~2. It is expressive in certain MIMD
programs where communicative data driven parallel computations can be
performed concurrently. It provides facilities for MSPMD like parallel
libraries.

\end{enumerate}
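For illustration, the identifier scheme described in the note on dynamic
context creation might be realised as below; the type and field names are
invented for this sketch.
\begin{verbatim}
/* A globally unique context identifier formed without communication:
   every member of the frame performs the same context creations in
   the same order, so (frame identifier, creation sequence number) is
   identical on all members and unique across the program.             */
typedef struct {
    long frame_id;      /* globally unique identifier of the frame     */
    long creation_seq;  /* per-frame count of contexts created so far  */
} context_id;

context_id next_context_id(long frame_id, long *seq_counter)
{
    context_id id;
    id.frame_id     = frame_id;
    id.creation_seq = (*seq_counter)++;
    return id;
}
\end{verbatim}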
%
% END "Discussion & Notes"
%----------------------------------------------------------------------%

%----------------------------------------------------------------------%
% BEGIN "Conclusion"
%
\section{Conclusion}

This chapter has presented and discussed a proposal for communication
contexts within MPI. In the proposal, process groupings appeared as frames
for the creation of communication contexts, and communication contexts
retained certain properties of the frames used in their creation. The author
strongly recommends this proposal to the committee.
%
% END "Conclusion"
%----------------------------------------------------------------------%
%
% END "Proposal II'"
%======================================================================%

\end{document}
----------------------------------------------------------------------
       /--------------------------------------------------------\
e||)  | Lyndon J Clarke     Edinburgh Parallel Computing Centre  |  e||)
c||c  | Tel: 031 650 5021   Email: lyndon@epcc.edinburgh.ac.uk   |  c||c
       \--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Sun Mar 21 14:53:25 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00376; Sun, 21 Mar 93 14:53:25 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24914; Sun, 21 Mar 93 14:53:01 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 14:52:59 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24905; Sun, 21 Mar 93 14:52:46 -0500
Date: Sun, 21 Mar 93 19:52:39 GMT
Message-Id: <13111.9303211952@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Proposal II' - PostScript
To: mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

This is the PostScript of the (all new) Proposal II'. LaTeX preceded.

Best Wishes
Lyndon

----------------------------------------------------------------------
[PostScript rendering of Proposal II' (dvips output, 15 pages) omitted; it
duplicates the LaTeX source in the preceding message.]
F80020317EB024>I<001FC00000F0380001C00400078002000F000F001E001F001E001F003C00 1F007C000E007C00000078000000F8000000F8000000F8000000F8000000F8000000F8000000F8 000000F8000000780000007C0000007C0000003C0000801E0000801E0001000F00010007800200 01C00C0000F03000001FC000191E7E9D1D>I<003F800000E0F0000380380007001C000E001E00 1E000E003C000F003C000F007C000F807800078078000780F8000780FFFFFF80F8000000F80000 00F8000000F8000000F8000000F800000078000000780000007C0000003C0000801C0000801E00 01000F0002000700020001C00C0000F03000001FC000191E7E9D1D>101 D<07000F801F801F800F8007000000000000000000000000000000000000000000000001801F80 FF80FF800F80078007800780078007800780078007800780078007800780078007800780078007 80078007800780078007800FC0FFF8FFF80D2F7EAE12>105 D<01803F80FF80FF800F80078007 800780078007800780078007800780078007800780078007800780078007800780078007800780 078007800780078007800780078007800780078007800780078007800780078007800780078007 800FC0FFFCFFFC0E317EB012>108 D<0181FE003FC0003F860780C0F000FF8801C1003800FF90 01E2003C000FA000E4001C0007A000F4001E0007C000F8001E0007C000F8001E00078000F0001E 00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000 F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00 078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0 001E00078000F0001E000FC001F8003F00FFFC1FFF83FFF0FFFC1FFF83FFF0341E7E9D38>I<01 81FC003F860F00FF880380FF9003C00FA001C007C001E007C001E007C001E0078001E0078001E0 078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001 E0078001E0078001E0078001E0078001E0078001E0078001E0078001E00FC003F0FFFC3FFFFFFC 3FFF201E7E9D24>I<003FC00000E0700003801C0007000E000E0007001E0007803C0003C03C00 03C07C0003E0780001E0780001E0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F8 0001F0F80001F0780001E0780001E07C0003E03C0003C03C0003C01E0007800E00070007000E00 03801C0000E07000003FC0001C1E7E9D20>I<0181F8003F860F00FF9803C0FFA001E007C000F0 07C00078078000780780003C0780003E0780003E0780001E0780001F0780001F0780001F078000 1F0780001F0780001F0780001F0780001F0780001E0780003E0780003C0780003C0780007807C0 00F007C000F007A001C007900380078C0E000783F8000780000007800000078000000780000007 8000000780000007800000078000000780000007800000078000000FC00000FFFC0000FFFC0000 202C7E9D24>I<0183E03F8C18FF907CFF907C0FA07C07C03807C00007C00007C0000780000780 000780000780000780000780000780000780000780000780000780000780000780000780000780 000780000780000780000FC000FFFE00FFFE00161E7E9D19>114 D<03FC200E02603801E03000 E0600060E00060E00020E00020F00020F000207C00007F80003FFC001FFF0007FF8001FFE0000F E00001F08000F8800078800038C00038C00038C00038E00030F00070F00060C800C0C6038081FE 00151E7E9D19>I<00400000400000400000400000400000C00000C00000C00001C00001C00003 C00007C0000FC0001FFFE0FFFFE003C00003C00003C00003C00003C00003C00003C00003C00003 C00003C00003C00003C00003C00003C00003C00003C01003C01003C01003C01003C01003C01003 C01003C01001C02001E02000E0400078C0001F00142B7FAA19>I<018000603F800FE0FF803FE0 FF803FE00F8003E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001 E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E0078001E00780 03E0078003E0078003E0038005E003C009E001C019F000F021FF001FC1FF201E7E9D24>I120 D E end %%EndProlog %%BeginSetup %%Feature: *Resolution 300dpi TeXDict begin %%PaperSize: A4 %%EndSetup %%Page: 0 1 0 0 bop 811 1086 a Fo(Prop)r(osal)24 b(I)r(I)1129 1048 y Fn(0)575 1178 y Fo(MPI)f(Con)n(text)f(Sub)r(committee)799 1360 y Fm(Lyndon)17 b(J)f(Clark)o(e)853 1481 y(Marc)o(h)g(1993)p eop %%Page: 1 2 1 1 bop 262 619 a Fl(Chapter)28 
Chapter 1

Proposal II'

Editorial Note: This is not Proposal II as identified at the post-meet lunch
after the February meeting in Dallas. Rik Littlefield came on board the
proposal writing process, and decided to write a different proposal which
appears as Proposal V. There is no longer a proposal which contains static
contexts as discussed at the post-meet lunch.

1.1  Introduction

This chapter proposes that communication contexts and process groupings within
MPI appear as related concepts. In particular process groupings appear as
"frames" which are used in the creation of communication contexts.
Communications contexts retain a reference to, and inherit properties of,
their process grouping frames. This reflects the observation that an
invocation of a module in a parallel program typically operates within one or
more groups of processes, and as such any communication contexts associated
with invocations of modules also bind certain semantics of process groupings.

The proposal provides process identified communication, communications which
are limited in scope to single contexts, and communications which have scope
spanning pairs of contexts. The proposal makes no statements regarding message
tags. It is assumed that these will be a bit string expressed as an integer in
the host language.

Much of this proposal must be viewed as recommendations to other subcommittees
of MPI, primarily the point-to-point communication subcommittee and the
collective communications subcommittee. Concrete syntax is given in the style
of the ANSI C host language for purposes of discussion only.

The detailed proposal is presented in Section 1.2, which refers the reader to
a set of discussion notes in Section 1.3.2. The notes assume knowledge of the
proposal text and are therefore best examined after familiarisation with that
text. Aspects of the proposal are discussed in Section 1.3.1, and it is also
recommended that this material be read after familiarisation with the text of
the proposal.
1.2  Detailed Proposal

This section presents the detailed proposal, discussing in order of
appearance: processes; process groupings; communication contexts;
point-to-point communication; collective communication.

1.2.1  Processes

This proposal views processes in the familiar way, as one thinks of processes
in Unix or NX for example. Each process is a distinct space of instructions
and data. Each process is allowed to compose multiple concurrent threads and
MPI does not distinguish such threads.

Process Descriptor

Each process is described by a process descriptor which is expressed as an
integer type in the host language and has an opaque value which is process
local. See Note 1. See Note 3. See Note 2.

The initialisation of MPI services will assign to each process an own process
descriptor. Each process retains its own process descriptor until the
termination of MPI services. MPI provides a procedure which returns the own
descriptor of the calling process. For example, pd = mpi_own_pd().

Process Creation and Destruction

This proposal makes no statements regarding creation and destruction of
processes. See Note 6.

Descriptor Transmission

The value of a process descriptor can be transmitted in a message as an
integer since it is an integer type in the host language. However the
recipient of the descriptor can make no defined use of the value in the MPI
operations described in this proposal -- the descriptor is invalid.

MPI provides a mechanism whereby the user can transmit a valid process
descriptor in a message such that the received descriptor is valid. This is
integrated with the capability to transmit typed messages. It is suggested
that a notional data type should be introduced for this purpose, e.g.
MPI_PD_TYPE.
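As a purely illustrative sketch of how such a notional type might be used:
mpi_own_pd() and MPI_PD_TYPE are taken from the text above, while the header
name, mpi_send() and its argument order are invented placeholders, since
point-to-point syntax is left to that subcommittee.

    /* Sketch only: mpi_send() and its argument list are assumed, not
     * proposed; descriptors are plain int as stated in the text.         */
    #include "mpi.h"                  /* hypothetical header               */

    void send_own_descriptor(int dest_pd, int tag)
    {
        int my_pd = mpi_own_pd();     /* own descriptor, an integer type   */

        /* Sent as a plain integer, the value would arrive at dest_pd as
         * an invalid descriptor.  Sent as the notional MPI_PD_TYPE, the
         * implementation may translate it in transit so that the received
         * descriptor is valid at the destination.                         */
        mpi_send(&my_pd, 1, MPI_PD_TYPE, dest_pd, tag);
    }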
MPI provides a process descriptor registry service. The operations which this
service supports are: register descriptor by name; deregister descriptor;
lookup descriptor identifier by name and validate, blocking the caller until
the name has been registered. See Note 7. Use of this service is not mandated.
Programs which can conveniently be expressed without using the service can
ignore it without penalty.

Note that receipt of a process descriptor in the fashions described above may
have a persistent effect on the implementation of MPI at the receiver, and in
particular may reserve state. MPI will provide a procedure which invalidates a
valid descriptor, allowing the implementation to free reserved state. For
example, mpi_invalidate_pd(pd). The user is not allowed to invalidate the
process descriptor of the calling process. See Note 8.

Descriptor Attributes

This proposal makes no statements regarding process descriptor attributes.

1.2.2  Process Groupings

This proposal views a process grouping as an ordered collection of (references
to?) distinct processes, the membership and ordering of which does not change
over the lifetime of the grouping. See Note 9. The canonical representation of
a grouping reflects the process ordering and is a one-to-one map from Z_N to
the descriptors of the N processes composing the grouping.

The structure of a process grouping is defined by a process grouping topology
and this proposal makes no further statements regarding such structures.

Group Descriptor

Each group is identified by a group descriptor which is expressed as an
integer type in the host language and has an opaque value which is process
local. See Note 1. See Note 4. See Note 2.

The initialisation of MPI services will assign to each process an own group
descriptor for a process grouping of which the process is a member. Each
process retains its own group descriptor and membership of the process
grouping until the termination of MPI services. See Note 10. MPI provides a
procedure which returns the descriptor of the home group of the calling
process. For example, gd = mpi_home_gd(). See Note 11.

Group Creation and Deletion

MPI provides a procedure which allows users to dynamically create one or more
groups which are subsets of existing groups. For example,
gdb = mpi_group_partition(gda, key) creates one or more new groups gdb which
partition an existing group gda into one or more distinct subsets. This
procedure is called by and synchronises all members of gda.
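A minimal sketch of the partition call, in the discussion-only C syntax: the
assumption that callers supplying equal key values are placed in the same new
group is mine, since the text does not pin down the semantics of key, and
mpi_group_rank() is described under "Descriptor Attributes" below.

    /* Sketch only, under the assumptions stated above.                    */
    #include "mpi.h"                  /* hypothetical header                */

    void split_home_group(void)
    {
        int gda = mpi_home_gd();              /* home group of this process */
        int key = mpi_group_rank(gda) % 2;    /* e.g. odd/even halves       */

        /* Collective over gda: every member calls, and each member obtains
         * the descriptor of the subset it now belongs to.                  */
        int gdb = mpi_group_partition(gda, key);

        /* ... use gdb, e.g. as a frame for new communication contexts ...  */
    }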
y(groups)d(whic)o(h) g(are)h(subsets)h(of)e(existing)g(groups.)16 b(F)m(or)9 b(example,)g Fd(gdb)21 b(=)g(mpi)p 1489 2249 V 15 w(group)p 1614 2249 V 15 w(partition\(gda,)262 2299 y(key\))13 b Fg(creates)k(one)e(or)g(more)e (new)j(groups)f Fd(gdb)f Fg(partition)g(an)g(existing)h(group)f Fd(gda)g Fg(in)o(to)262 2349 y(one)h(or)h(more)e(distinct)i(subsets.)25 b(This)16 b(pro)q(cedure)h(is)f(called)f(b)o(y)g(and)h(sync)o(hronises)h(all) 262 2399 y(mem)o(b)q(ers)12 b(of)h Fd(gda)p Fg(.)967 2574 y(3)p eop %%Page: 4 5 4 4 bop 324 307 a Fg(MPI)9 b(pro)o(vides)h(a)f(pro)q(cedure)i(whic)o(h)f (allo)o(ws)e(users)j(to)e(dynamically)d(create)11 b(one)f(group)262 357 y(b)o(y)j(explicit)h(de\014nition)g(of)f(its)h(mem)o(b)q(ership)f(as)h(a) g(list)f(of)h(pro)q(cess)i(descriptors.)k(F)m(or)13 b(ex-)262 407 y(ample,)c Fd(gd)21 b(=)h(mpi)p 571 407 14 2 v 15 w(group)p 696 407 V 14 w(definition\(listofpd)o(\))8 b Fg(creates)k(one)f(new)g(group)g Fd(gd)f Fg(with)262 457 y(mem)o(b)q(ership)17 b(and)i(ordering)g(describ)q (ed)i(b)o(y)e(the)h(pro)q(cess)h(descriptor)f(list)f Fd(listofpd)p Fg(.)262 506 y(This)9 b(pro)q(cedure)j(is)e(called)f(b)o(y)h(and)g(sync)o (hronises)h(all)e(pro)q(cesses)j(iden)o(ti\014ed)e(in)g Fd(listofpd)p Fg(.)324 556 y(MPI)h(pro)o(vides)h(a)f(pro)q(cedure)i(whic)o(h)e(allo)o(ws)f (users)j(to)e(delete)i(a)e(created)i(group.)k(This)262 606 y(pro)q(cedure)11 b(accepts)h(the)f(descriptor)g(of)f(a)f(group)h(whic)o(h)g (w)o(as)g(created)i(b)o(y)d(the)i(calling)e(pro-)262 656 y(cess)15 b(and)f(destro)o(ys)i(the)f(iden)o(ti\014ed)f(group.)19 b(F)m(or)14 b(example,)e Fd(mpi)p 1295 656 V 15 w(group)p 1420 656 V 15 w(deletion\(gd\))262 706 y Fg(deletes)17 b(an)e(existing)h(group)f Fd(gd)p Fg(.)23 b(This)16 b(pro)q(cedure)i(is)d(called)h(b)o(y)f(and)h(sync)o (hronises)h(all)262 756 y(mem)o(b)q(ers)12 b(of)h Fd(gd)p Fg(.)324 805 y(MPI)k(pro)o(vides)g(additional)e(pro)q(cedure)k(whic)o(h)e(allo)o(w)e (users)j(to)f(construct)i(pro)q(cess)262 855 y(groupings)13 b(whic)o(h)h(ha)o(v)o(e)f(a)h(pro)q(cess)i(grouping)d(top)q(ology)m(.)262 963 y Ff(Descriptor)g(T)l(ransmission)262 1040 y Fg(The)i(v)n(alue)f(of)g(a)g (group)h(descriptor)h(can)f(b)q(e)g(transmitted)g(in)f(a)g(message)h(as)g(an) f(in)o(teger)262 1089 y(since)j(it)g(is)f(an)h(in)o(teger)g(t)o(yp)q(e)g(in)g (the)g(host)g(language.)26 b(Ho)o(w)o(ev)o(er)17 b(the)g(recipien)o(t)h(of)e (the)262 1139 y(descriptor)h(can)g(mak)o(e)e(no)h(de\014ned)h(use)h(of)d(the) i(v)n(alue)f(of)g(in)g(the)h(MPI)g(op)q(erations)f(de-)262 1189 y(scrib)q(ed)f(in)e(this)h(prop)q(osal)g(|)f(the)h(descriptor)i(is)d Fi(invalid)p Fg(.)324 1239 y(MPI)19 b(pro)o(vides)g(a)g(mec)o(hanism)d (whereb)o(y)k(the)g(user)g(can)f(transmit)f(a)g(v)n(alid)g(group)262 1289 y(descriptor)g(in)e(a)g(message)h(suc)o(h)h(that)e(the)i(receiv)o(ed)g (descriptor)g(is)f(v)n(alid.)25 b(This)17 b(is)f(in-)262 1339 y(tegrated)e(with)g(the)g(capabilit)o(y)e(to)i(transmit)e(t)o(yp)q(ed)j (messages.)j(It)13 b(is)h(suggested)h(that)f(a)262 1388 y(notional)e(data)i (t)o(yp)q(e)g(should)g(b)q(e)g(in)o(tro)q(duced)h(for)e(this)h(purp)q(ose,)h (e.g.)i Fd(MPI)p 1468 1388 V 16 w(GD)p 1528 1388 V 15 w(TYPE)p Fg(.)324 1438 y(MPI)c(pro)o(vides)f(a)h(group)f(descriptor)i(registry)f (service.)19 b(The)13 b(op)q(erations)g(whic)o(h)g(this)262 1488 y(service)j(upp)q(orts)h(are:)22 b(register)16 b(descriptor)h(b)o(y)e (name;)g(deregister)i(descriptor;)g(lo)q(okup)262 1538 y(descriptor)11 b(iden)o(ti\014er)g(b)o(y)f(name)g(and)g(v)n(alidate,)f(blo)q(c)o(king)h(the) h(caller)f(un)o(til)g(the)h(name)e(has)262 1588 
Descriptor Transmission

The value of a group descriptor can be transmitted in a message as an integer since it is an integer type in the host language. However the recipient of the descriptor can make no defined use of the value in the MPI operations described in this proposal --- the descriptor is invalid.

MPI provides a mechanism whereby the user can transmit a valid group descriptor in a message such that the received descriptor is valid. This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. MPI_GD_TYPE.

MPI provides a group descriptor registry service. The operations which this service supports are: register descriptor by name; deregister descriptor; lookup descriptor identifier by name and validate, blocking the caller until the name has been registered. See Note 7. Use of this service is not mandated. Programs which can conveniently be expressed without using the service can ignore it without penalty.

Note that receipt of a group descriptor in the fashions described above may have a persistent effect on the implementation of MPI at the receiver, and in particular may reserve state. MPI will provide a procedure which invalidates a valid descriptor, allowing the implementation to free reserved state. For example, mpi_invalidate_gd(gd). The user is not allowed to invalidate the own group descriptor of the process or the group descriptor of any group created by the calling process. See Note 12.

Descriptor Attributes

MPI provides a procedure which accepts a valid group descriptor and returns the rank of the calling process within the identified group. For example, rank = mpi_group_rank(gd).

MPI provides a procedure which accepts a valid group descriptor and returns the number of members, or size, of the identified group. For example, size = mpi_group_size(gd).

MPI provides a procedure which accepts a valid group descriptor and process order number, or rank, and returns the valid descriptor of the process to which the supplied rank maps within the identified group. For example, pd = mpi_group_pd(gd, rank). See Note 13.

MPI provides additional procedures which allow users to determine the process grouping topology attributes.
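Purely as an illustrative sketch (again with assumed prototypes; only the operation names and the integer expression of descriptors come from the text), the three attribute queries combine to enumerate the members of a group:

    extern int mpi_group_rank(int gd);          /* rank of the caller in gd    */
    extern int mpi_group_size(int gd);          /* number of members of gd     */
    extern int mpi_group_pd(int gd, int rank);  /* rank -> process descriptor  */

    void example_group_attributes(int gd)
    {
        int myrank = mpi_group_rank(gd);
        int size   = mpi_group_size(gd);
        int rank;

        /* Ranks are sketched here as 0 .. size-1, following the one-to-one
           map from Z_N to the member process descriptors.                  */
        for (rank = 0; rank < size; rank++) {
            int pd = mpi_group_pd(gd, rank);
            /* pd is a valid process descriptor; rank == myrank identifies
               the calling process itself.                                  */
            (void) pd;
        }
        (void) myrank;
    }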
1.2.3 Communication Contexts

This proposal views a communication context as a uniquely identified reference to exactly one process grouping, which is a field in a message envelope and may therefore be used to distinguish messages. The context inherits the referenced process grouping as a "frame". Each process grouping is used as a frame for multiple contexts.

Context Descriptor

Each context is identified by a context descriptor which is expressed as an integer type in the host language and has an opaque value which is process local. See Note 1. See Note 5. See Note 2.

The creation of MPI process groupings allocates an own context which inherits the created grouping as a frame and can be thought of as a property of the created grouping. The grouping retains its own context until MPI process grouping deletion. MPI provides a procedure which accepts a valid group descriptor and returns the context descriptor of the own context of the identified group. For example, cd = mpi_own_cd(gd). See Note 15.

Context Creation and Deletion

MPI provides a procedure which allows users to dynamically create contexts. This procedure accepts a valid descriptor of a group of which the calling process is a member, and returns a context descriptor which references the identified group. For example, cd = mpi_context_create(gd). This procedure is called by and synchronises all members of gd. See Note 14.

MPI provides a procedure which allows users to destroy created contexts. This procedure accepts a valid context descriptor which was created by the calling process and deletes that context identifier. For example, mpi_context_delete(cd). This procedure is called by and synchronises all members of the frame of cd.
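A minimal sketch of the context life cycle follows, once more with assumed ANSI C prototypes given for discussion only; the collective calling discipline in the comments restates the text above.

    extern int  mpi_home_gd(void);
    extern int  mpi_own_cd(int gd);          /* own context of a grouping       */
    extern int  mpi_context_create(int gd);  /* collective over members of gd   */
    extern void mpi_context_delete(int cd);  /* collective over the frame of cd */

    void example_context_lifecycle(void)
    {
        int gd  = mpi_home_gd();
        int own = mpi_own_cd(gd);            /* a property of the grouping; it is
                                                not deletable by the user        */
        int cd  = mpi_context_create(gd);    /* a fresh context framed by gd     */

        /* ... communication distinguished by cd, e.g. inside a library ... */

        mpi_context_delete(cd);              /* synchronises the frame of cd     */
        (void) own;
    }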
Descriptor Transmission

The value of a context descriptor can be transmitted in a message as an integer since it is an integer type in the host language. However the recipient of the descriptor can make no defined use of the value in the MPI operations described in this proposal --- the descriptor is invalid.

MPI provides a mechanism whereby the user can transmit a valid context descriptor in a message such that the received descriptor is valid. This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. MPI_CD_TYPE.

MPI provides a context descriptor registry service. The operations which this service supports are: register descriptor by name; deregister descriptor; lookup descriptor identifier by name and validate, blocking the caller until the name has been registered. See Note 7. Use of this service is not mandated. Programs which can conveniently be expressed without using the service can ignore it without penalty.

Note that receipt of a context descriptor in the fashions described above may have a persistent effect on the implementation of MPI at the receiver, and in particular may reserve state. MPI will provide a procedure which invalidates a valid descriptor, allowing the implementation to free reserved state. For example, mpi_invalidate_cd(cd). The user is not allowed to invalidate the own context descriptor of a group or the context descriptor of any context created by the calling process. See Note 16.

Descriptor Attributes

MPI provides a procedure which allows users to determine the process grouping which is the frame of a context. For example, gd = mpi_context_gd(cd).

1.2.4 Point-to-Point Communication

This proposal recommends three forms for MPI point-to-point message addressing and selection: null context; closed context; open context. It is further recommended that messages communicated in each form are distinguished such that a Send operation of form X cannot match with a receive operation of form Y, which requires that the form is embedded into the message envelope.

The three forms are described, followed by considerations of uniform integration of these forms in the point-to-point communication section of MPI.

Null Context Form

The null context form contains no message context. See Note 17.

Message selection and addressing are expressed by (pd, tag) where: pd is a process descriptor; tag is a message tag.

Send supplies the pd of the receiver. Receive supplies the pd of the sender.

Receive can wildcard on pd by supplying the value of a named constant process descriptor, e.g. MPI_PD_WILD. This proposal makes no statement about the provision for wildcard on tag.

Closed Context Form

The closed context form permits communication between members of the same context. See Note 19.

Message selection and addressing are expressed by (cd, rank, tag) where: cd is a context descriptor; rank is a process rank in the frame of cd; tag is a message tag.

Send supplies the cd of the receiver and sender, and the rank of the receiver. Receive supplies the cd of the sender and receiver, and the rank of the sender. The (cd, rank) pair in Send (Receive) is sufficient to determine the process descriptor of the receiver (sender).

Receive cannot wildcard on cd. Receive can wildcard on rank by supplying the value of a named constant integer, e.g. MPI_RANK_WILD. This proposal makes no statement about the provision for wildcard on tag.
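To make the first two envelope forms concrete, the sketch below assumes one Send and Receive entry point per form. The proposal does not define Send and Receive syntax (that belongs to the point-to-point subcommittee), so every prototype, buffer argument and tag value here is an assumption for discussion only.

    extern const int MPI_PD_WILD;    /* named constant process descriptor    */
    extern const int MPI_RANK_WILD;  /* named constant integer rank wildcard */

    /* Hypothetical per-form entry points, sketched only to show where the
       envelope fields (pd, tag) and (cd, rank, tag) would appear.          */
    extern void mpi_send_null(void *buf, int len, int pd, int tag);
    extern void mpi_recv_null(void *buf, int len, int pd, int tag);
    extern void mpi_send_closed(void *buf, int len, int cd, int rank, int tag);
    extern void mpi_recv_closed(void *buf, int len, int cd, int rank, int tag);

    void example_point_to_point(int peer_pd, int cd, int peer_rank)
    {
        double x = 1.0, y = 0.0;

        /* Null context form: the envelope is (pd, tag). */
        mpi_send_null(&x, (int) sizeof x, peer_pd, 99);
        mpi_recv_null(&y, (int) sizeof y, MPI_PD_WILD, 99);  /* wildcard on pd */

        /* Closed context form: the envelope is (cd, rank, tag); a message
           sent in one context cannot match a receive in another.          */
        mpi_send_closed(&x, (int) sizeof x, cd, peer_rank, 99);
        mpi_recv_closed(&y, (int) sizeof y, cd, MPI_RANK_WILD, 99);
    }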
Open Context Form

The open context form permits communication between members of any two contexts. See Note 18.

Message selection and addressing are expressed by (lcd, rcd, rank, tag) where: lcd is a "local" context descriptor; rcd is a "remote" context descriptor; rank is a process rank in the frame of rcd; tag is a message tag.

Send supplies the context descriptor for the sender in lcd, the context descriptor for the receiver in rcd, and the rank of the receiver in the frame of rcd. Receive supplies the context descriptor for the receiver in lcd, the context descriptor for the sender in rcd, and the rank of the sender in the frame of rcd. The (rcd, rank) pair in Send (Receive) is sufficient to determine the process descriptor of the receiver (sender).

Receive cannot wildcard on lcd. Receive can wildcard on rcd by supplying the value of a named constant context descriptor, e.g. MPI_CD_WILD, in which case Receive must also wildcard on rank as there is insufficient information to determine the process descriptor of the sender. Receive can wildcard on rank by supplying the value of a named constant integer, e.g. MPI_RANK_WILD. This proposal makes no statement about the provision for wildcard on tag.

Uniform Integration

The three forms of addressing and selection described have different syntactic frameworks. We can consider integrating these forms into the point-to-point section of MPI by defining a further orthogonal axis (as in the multi-level proposal of Gropp & Lusk) which deals with the form. This is at the expense of multiplying the number of Send and Receive procedures by a factor of three, and some difficulty in details of the current point-to-point working document which uniformly assumes a single addressing and selection form.

There are various approaches to unification of the syntactic frameworks which may simplify integration. Three options are now described, each based on retention and extension of the framework of one form. These options each have advantages and disadvantages.

Option i: The framework of the open context form is adopted and extended.

We introduce the null descriptor, the value of which is defined by a named constant, e.g. MPI_NULL.

The null context form is expressed as (MPI_NULL, MPI_NULL, pd, tag), which is a little clumsy. The closed context form is expressed as (MPI_NULL, cd, rank, tag), which is marginally inconvenient. The open context form is expressed as (lcd, rcd, rank, tag), which is of course natural.

Option ii: The framework of the closed context form is adopted and extended.

We introduce the null descriptor, the value of which is defined by a named constant, e.g. MPI_NULL.

The null context form is expressed as (MPI_NULL, pd, tag), which is marginally inconvenient. The closed context form is expressed as (cd, rank, tag), which is of course natural. Expression of the open context form requires a little more work.

We can use the cd field as "shorthand notation" for the (lcd, rcd) pair at the expense of introducing some trickery. We can define a mechanism which "globs" together these two fields into an identifier which Send and Receive can distinguish from a context identifier and treat as the shorthand notation. Then we should also define a mechanism by which a receiver which has completed a Receive with wildcard on rcd is able to determine the valid context descriptor of the sender. This is inconvenient.

Option iii: The framework of the null context form is adopted and extended.

The null context form is expressed as (pd, tag), which is of course natural. Expression of the open and closed context forms requires a little more work.

We can use the pd field as "shorthand notation" for (cd, rank) and (lcd, rcd, rank) by continuation of the trickery used in the previous option. This is clumsy.
1.2.5 Collective Communication

Symmetric collective communication operations are compliant with the closed context form described above. This proposal recommends that such operations accept a context descriptor which identifies the context and frame in which they are to operate.

MPI does plan to describe symmetric collective communication operations. It is not possible to determine whether the proposal is sufficient to allow implementation of the collective communication section of MPI in terms of the point-to-point section of MPI without loss of generality, since the collective operations are not yet defined.

Asymmetric collective communication operations, especially those in which the sender(s) and receiver(s) are distinct processes, are compliant with the open context form described above. This proposal recommends that such operations accept a pair of context descriptors (perhaps in a "glob" form) which identify the contexts and frames in which they are to operate.

MPI does not plan to describe asymmetric collective communication operations. Such operations are expressive when writing programs beyond the SPMD model which comprise communicative, functionally distinct process groupings. This proposal recommends that such operations should be considered in some reincarnation of MPI.
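As a purely hypothetical illustration of a symmetric collective operation parameterised by a context descriptor: the proposal defines no collective syntax, so the name, argument list and root convention below are all assumptions.

    extern void mpi_broadcast(void *buf, int len, int root, int cd);
    extern int  mpi_context_gd(int cd);   /* frame of the context */
    extern int  mpi_group_rank(int gd);

    void example_collective(int cd)
    {
        int gd = mpi_context_gd(cd);
        double work[4] = {0.0, 0.0, 0.0, 0.0};

        if (mpi_group_rank(gd) == 0)
            work[0] = 42.0;               /* the root member fills the buffer */

        /* Every member of the frame of cd calls; the cd carried in the
           envelope keeps this traffic separate from messages in other
           contexts over the same frame.                                */
        mpi_broadcast(work, (int) sizeof work, 0, cd);
    }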
1.3 Discussion & Notes

This section comprises a discussion of certain aspects of this proposal followed by the notes referenced in the detailed proposal.

1.3.1 Discussion

We can dissect the proposal into two parts: an SPMD model core; an MIMD model annex. In this discussion the dissection is exposed and the conceptual foundation of each part is described. The discussion also presents arguments for and against the MIMD model annex, and to some extent explores the consequences of these arguments for MPI in a wider sense.

SPMD model core

The SPMD model core provides noncommunicative process groupings and communication contexts for writers of SPMD parallel libraries. It is intended to provide expressive power beyond the "SPIMD" model, in which processes execute in a strictly SIMD fashion.

The material describing processes in Section 1.2.1. is simplified:

- processes have identical instruction blocks and different data blocks;
- process descriptor transmission and registry become redundant (Note, however, that they are anyway redundant as context descriptor transmission and registry coupled with context descriptor attribute query and group descriptor attribute query is capable of providing access to all process descriptors);
- dynamic process models are not considered.

The material describing process groupings in Section 1.2.2. is simplified:

- group descriptor transmission and registry become redundant (Note, however, that they are anyway redundant as context descriptor transmission and registry coupled with context descriptor attribute query is capable of providing access to all group descriptors.);
- the own process grouping explicitly becomes a group containing all processes.

The material describing communication contexts in Section 1.2.3. is simplified:

- context descriptor transmission and registry become unnecessary.

The material describing point-to-point communication in Section 1.2.4. is simplified:

- the open context form becomes redundant;
- uniform integration "Option i" is deleted, and "Option ii" loses "globs" thus becoming simple enough that "Option iii" need not be further considered.

The material describing collective communication in Section 1.2.5. is simplified:

- there is no possibility of collective communication operations spanning more than one context.

MIMD model annex

The MIMD model annex extends and modifies the SPMD model core to provide expressive power for MIMD programs which combine (coarse grain) function and data driven parallelism. The MIMD model annex is not intended to provide expressive power to fine grained function driven parallel programs --- it is conjectured that message passing approaches such as MPI are not suited to fine grained function driven or data driven programming. The annex is intended to provide expressive power for the "MSPMD" model, which is now described.

One of the simplest MIMD models is the "host-node" model, familiar in EXPRESS and PARMACS, containing two functional groups: one node group (SPMD like); one host group (a singleton).

The "parallel client-server" model, in which each of the n clients is composed of parallel processes, and in which the server may also be composed of parallel processes, contains 1 + n functional groups: n client groups (SPMD like); one server group (singleton, SPMD like). The "host-node" model is a case of this model in which the host can be viewed as a singleton client and the nodes can be viewed as an SPMD like server (or vice versa!).

The "parallel module graph" model, in which each module within the graph may be composed of parallel processes (singleton, SPMD like), contains any number of functional groups with arbitrarily complex relations. The "parallel client-server" model is a case of this model in which the module graph contains arcs joining the server to each client.

The MIMD model annex is intended to provide expressive power for the "parallel module graph" model, which I refer to as the MSPMD model. This model requires support at some level as commercial and modular applications are increasingly moving into parallel computing. The debate is whether or not message passing approaches such as MPI (which I simply refer to as MPI) should provide for this model.

The negative argument is that such SPMD modules should be controlled and communicate with one another as "parallel processes" at the distributed operating system level. The argument has some appeal as the world of distributed operating systems must deal with difficult issues such as process control and coherency. Avoidance of duplication in MPI allows MPI to focus on provision of a smaller set of facilities with greater emphasis on maximum performance for data driven SPMD parallel programs.

The positive argument is that communication between SPMD modules requires high performance and MPI can provide such performance with tuned semantics which expect the user to deal with coherency issues. There is also the argument that MPI is able to deal with this in a shorter time than development and standardisation procedures for distributed operating systems. The latter argument is comparable with the argument for MPI versus parallel compilation.

1.3.2 Notes

1. Descriptors are a plentiful but bounded resource. They are expressed as integers for two reasons. Firstly, this expression facilitates uniform binding to ANSI C and Fortran 77. Secondly, there is a later convenience in ensuring that process descriptors in particular are expressed in integers, and I made all the descriptors integers for consistency. In practice the descriptor could be an index into a table of object description structures, or a pointer to such a structure.

2. It is possible to allow the user to save user defined attributes in descriptors, as Littlefield has suggested. Such attributes should not be communicated in either descriptor transmission or the descriptor registry.

3. The process descriptor is not a global unique process identifier but must reference such an identifier. The use of descriptors hides such identifiers in order that they may be implementation dependent, and enables more efficient access to additional process information in implementations. For example, the process identifier of a receiver may not be sufficient to route a message to the receiver, and this information can be referenced from the process descriptor.

4. The group descriptor is not a global unique group identifier but must reference such an identifier. The use of descriptors hides such identifiers in order that they may be implementation dependent, and enables more efficient access to additional group information in implementations. For example, the size of the group and the map from member ranks to process descriptors can be referenced from the group descriptor.

5. The context descriptor is not a global unique context identifier but must reference such an identifier. The use of descriptors hides such identifiers in order that they may be implementation dependent, and enables more efficient access to additional context information in implementations. For example, the group descriptor of the frame can be referenced from the context descriptor.

6. The proposal does not prevent a process model which allows dynamic creation and termination of processes, however it does not favour an asynchronous process creation model in which singleton processes are created and terminated in an unstructured fashion. The proposal does favour a model in which blobs of processes are created (and terminated) in a concerted fashion, and in which each group so created is assigned a different own process grouping. This model does not take into account the potential desire to expand or contract an existing blob of processes in order to take into account (presumably slowly) time varying workloads. The author suggests that concerted blob expand and contract operations are more suitable for this purpose than asynchronous spawn/kill operations.

7. There should probably also be a registry operation which checks whether a name has been registered without the possibility of blocking the calling process indefinitely. The same registry could be used for process, group and context descriptors.

8. In the static process model it is assumed that processes are created in concert, and that termination of one process implies termination of all processes. In this case there is no coherency problem associated with processes and process descriptors. In a dynamic process model there is a coherency problem with processes and process descriptors since a process can retain the descriptor of a process which has been deleted. The proposal expects the user to ensure coherent usage.

9. Process groupings are dynamic in the sense that they can be created at any time, and static in the sense that the membership is constant over the lifetime of the process grouping. The author suggests concerted group expand and contract operations are more generally useful than asynchronous join/leave operations.

10. MPI has discussed the concept of the "all" group which contains all processes. The "own" group concept is intended to be a generalisation of the "all" group concept which is expressive for programs including and beyond the SPMD model. Processes are created in "blobs", where each member of a blob is a member of the same own process grouping, and different blobs have different own process groupings. An SPMD program is a single blob. A host-node program composes two blobs, the node blob and the host blob (which is a singleton). There is a sense in which a blob has a locally SPMD view.

11. This procedure looks like a process descriptor attribute query.

12. Dynamic processes potentially introduce a group coherency problem as a group descriptor can contain the process descriptor of a process which has been deleted. Group transmission introduces a group and group descriptor coherency problem since a process can retain a group descriptor of a group which has been deleted. The proposal expects the user to ensure coherent usage.

13. The proposal did not include a function to convert a (gd, pd) pair into a rank. It is suggested that this inverse map is allowed to be arbitrarily slow.

14. Marc Snir has described a method by which global unique group identifiers can be generated without use of shared global data. The description of context creation anticipates a similar method for global unique context identifier generation. However the synchronisation requirement of this method makes it unnecessary for context creation. Synchronisation at creation of a context within a process grouping frame implies that all processes within the frame perform the same context creations in the same sequence. Therefore the combination of global unique frame identifier and context creation sequence number provides a global unique context identifier without communication.

15. This procedure looks like a group descriptor attribute query.

16. The retention of a reference to a group descriptor by a context introduces the potential for a context and context descriptor coherency problem, as a context could contain a reference to the group descriptor of a group which has been deleted. This could be circumvented by demanding that all such contexts are deleted before a group is deleted. Context descriptor transmission introduces a coherency problem since a process can retain the descriptor of a context which has been deleted. The proposal expects the user to ensure coherent usage.

17. This is somewhat like PARMACS and PVM 3. It is general purpose, but not particularly expressive. It does not provide facilities for writers of parallel libraries.

18. This is somewhat like ZIPCODE and VENUS. It is expressive in certain SPMD programs where noncommunicative data driven parallel computations can be performed concurrently. It provides facilities for writers of SPMD like parallel libraries.

19. This is somewhat like CHIMP and PVM 2. It is expressive in certain MIMD programs where communicative data driven parallel computations can be performed concurrently.
It provides facilities for MSPMD like parallel libraries.

1.4 Conclusion

This chapter has presented and discussed a proposal for communication contexts within MPI. In the proposal process groupings appeared as frames for the creation of communication contexts, and communication contexts retained certain properties of the frames used in their creation.

The author strongly recommends this proposal to the committee.

---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Sun Mar 21 14:58:13 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00429; Sun, 21 Mar 93 14:58:13 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25091; Sun, 21 Mar 93 14:58:01 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 14:58:00 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25083; Sun, 21 Mar 93 14:57:58 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03816; Sun, 21 Mar 93 13:52:35 CST Date: Sun, 21 Mar 93 13:52:35 CST From: Tony Skjellum Message-Id: <9303211952.AA03816@Aurora.CS.MsState.Edu> To: lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu Subject: Re: Proposal II' - LaTeX Your subcommittee chair has been telling you to call this Proposal VI for the last half hour. Please listen. Drop the Editorial note, forthwith, after naming it proposal VI. Please don't be write-only. - Tony From owner-mpi-context@CS.UTK.EDU Sun Mar 21 15:07:27 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00520; Sun, 21 Mar 93 15:07:27 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25403; Sun, 21 Mar 93 15:07:05 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 15:07:04 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25395; Sun, 21 Mar 93 15:07:02 -0500 Date: Sun, 21 Mar 93 20:07:00 GMT Message-Id: <13161.9303212007@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: clarification, i hope :-) To: tony@aurora.cs.msstate.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Dear Tony I am trying to send you a "clean table" as requested.

Proposal Number   Notes on Proposal
---------------   -----------------
I++               Defunct number, defunct proposal. Was circulated as Proposal I.
II'               Defunct number, live proposal. To be circulated as Proposal VI.
I                 Marc Snir's proposal which appeared in point-to-point working document. I think we would do ourselves harm to just ignore this.
VI                My proposal which I am personally happy with and am also happy to move along towards the other proposals prepared or being prepared by Rik and Tony. Was called Proposal II'.
III/IV            The proposal which Tony is working on.
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Sun Mar 21 17:15:06 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA01141; Sun, 21 Mar 93 17:15:06 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28876; Sun, 21 Mar 93 17:14:29 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 17:14:28 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28868; Sun, 21 Mar 93 17:14:22 -0500 Date: Sun, 21 Mar 93 22:14:17 GMT Message-Id: <13284.9303212214@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Proposal VI (was Proposal II') - LaTeX To: tony@aurora.cs.msstate.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Hi Tony Thanks for the phone call. I have stopped calling this thing II' and it is now Proposal VI, as you ask, and of course you are right that we should avoid confusion. Here is the LaTeX for Proposal VI. I have propogated the comments that you gave me for Proposal I++ (now defunct name and defunct proposal) into this document as I believe appropriate. Sometimes this has required replicating your comment at several points in the document, and I have tried to give the same second order comments in each case. Perhaps it will be as well to read the whole thing before making further comments, rather than "real time" comments (which I often do, bad practice). PostScript will follow. Best Wishes Lyndon ---------------------------------------------------------------------- \documentstyle{report} \begin{document} \title{Proposal VI\\MPI Context Subcommittee} \author{Lyndon~J~Clarke} \date{March 1993} \maketitle %======================================================================% % BEGIN "Proposal VI" % Written by Lyndon J. Clarke % March 1993 % \newcommand{\LabelNote}[1]{\label{vi:note:#1}} \newcommand{\SeeReferNote}[1]{{\bf See~Note~\ref{vi:note:#1}.}} \newcommand{\LabelSection}[1]{\label{vi:sect:#1}} \newcommand{\ReferSection}[1]{{\bf Section~\ref{vi:sect:#1}.}} \chapter{Proposal VI} {\it Editorial Note: This is not Proposal II as identified at the post-meet lunch after the February meeting in Dallas. Rik~Littlefied came on board the proposal writing process, and decided to write a different proposal which appears as Proposal V. There is no longer a proposal which contains static contexts as discussed at the post-meet lunch.} %----------------------------------------------------------------------% % BEGIN "Introduction" % \section{Introduction} This chapter proposes that communication contexts and process groupings within MPI appear as related concepts. In particular process groupings appear as ``frames'' which are used in the creation of communication contexts. Communications contexts retain a reference to, and inherit properties of, their process grouping frames. This reflects the observation that an invocation of a module in a parallel program typically operates within one or more groups of processes, and as such any communication contexts associated with invocations of modules also bind certain semantics of process groupings. 
The proposal provides process identified communication, communications which are limited in scope to single contexts, and communications which have scope spanning pairs of contexts. The proposal makes no statements regarding message tags. It is assumed that these will be a bit string expressed as an integer in the host language. Much of this proposal must be viewed as recommendations to other subcommittees of MPI, primarily the point-to-point communication subcommittee and the collective communications subcommittee. Concrete syntax is given in the style of the ANSI C host language for only purposes of discussion. The detailed proposal is presented in \ReferSection{proposal}, which refers the reader to a set of discussion notes in \ReferSection{notes}. The notes assumes knowledge of the proposal text and are therefore best examined after familiarisation with that text. Aspects of the proposal are discussed in section \ReferSection{discussion}, and it is also recommended that this material be read after familiarisation with the text of the proposal. % % END "Introduction" %----------------------------------------------------------------------% %----------------------------------------------------------------------% % BEGIN "Detailed Proposal" % \section{Detailed Proposal} \LabelSection{proposal} This section presents the detailed proposal, discussing in order of appearance: processes; process groupings; communication contexts; point-to-point communication; collective communication. \subsection{Processes} \LabelSection{processes} This proposal views processes in the familiar way, as one thinks of processes in Unix or NX for example. Each process is a distinct space of instructions and data. Each process is allowed to compose multiple concurrent threads and MPI does not distinguish such threads. \subsubsection*{Process Descriptor} Each process is described by a {\it process descriptor\/} which is expressed as an integer type in the host language and has an opaque value which is process local. \SeeReferNote{integer:descriptors} \SeeReferNote{process:identifiers} \SeeReferNote{descriptor:cache} The initialisation of MPI services will assign to each process an {\it own\/} process descriptor. Each process retains its own process descriptor until the termination of MPI services. MPI provides a procedure which returns the own descriptor of the calling process. For example, {\tt pd = mpi\_own\_pd()}. \subsubsection*{Process Creation and Destruction} This proposal makes no statements regarding creation and destruction of processes. \SeeReferNote{dynamic:processes} \subsubsection*{Descriptor Transmission} The value of a process descriptor can be transmitted in a message as an integer since it is an integer type in the host language. However the recipient of the descriptor can make no defined use of the value of in the MPI operations described in this proposal --- the descriptor is {\it invalid}. %[Tony] % Then why allow it to be passed? % %[Lyndon] % Since it is an integer one cannot prevent the user from passing it. % The discussion notes should help answer why is is an integer. More % here. % We'd like (gd,rank) and (NULL,pd) to be suppliable to point-to-point. % Because rank will be an integer, then pd must also be an integer, % so I write that all descriptors are integers for consistency beteen % the different descriptors. MPI provides a mechanism whereby the user can transmit a valid process descriptor in a message such that the received descriptor is valid. 
This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. {\tt MPI\_PD\_TYPE}. MPI provides a process descriptor registry service. The operations which this service upports are: register descriptor by name; deregister descriptor; lookup descriptor identifier by name and validate, blocking the caller until the name has been registered. \SeeReferNote{registry:check} Use of this service is not mandated. Programs which can conveniently be expressed without using the service can ignore it without penalty. %[Tony] % I don't get all of this. Why? % %[Lyndon] % I don't understand what you don't get. A registry service is just % an easier (nonscalable, okay) way for concurrent entities to hook up % with one another, rather than having some common ancestor send descriptors % around in messages. % In fact I don't really need a group or process registry service, as mentioned % in the discussion section (yes, I know, not well presented), but % I do need a context registry service, and I'm being consistent % between different kinds of descriptors again. % Suggestive syntax for a registry service is pretty boring, so % I skipped it. Note that receipt of a process descriptor in the fashions described above may have a persistent effect on the implementation of MPI at the receiver, and in particular may reserve state. MPI will provide a procedure which invalidates a valid descriptor, allowing the implementation to free reserved state. For example, {\tt mpi\_invalidate\_pd(pd)}. The user is not allowed to invalidate the process descriptor of the calling process. \SeeReferNote{process:coherency} %[Tony] % I do not understand the usefulness or formal need for all this % validation and invalidation of process identifiers. Why, where % did it come from, what does it get us? How can this be related % to anything I have seen before? % %[Lyndon] % Acquisition of a descriptor requires an allocator, and consumes % resource. In such cases it is good practice to provide a % deallocator which frees resource. % [Tony] % I understand the idea here, but not all the details. Can this % be justified/exemplified/simplified? % %[Lyndon] % Unless I am missing the additional point in this comment, I can't % see the problem. Probably poor presentation of Proposal I++ :-) \subsubsection*{Descriptor Attributes} This proposal makes no statements regarding processor descriptor attributes. \subsection{Process Groupings} \LabelSection{groupings} This proposal views a process grouping as an ordered collection of (references to?) distinct processes, the membership and ordering of which does not change over the lifetime of the grouping. \SeeReferNote{dynamic:groups} The canonical representation of a grouping reflects the process ordering and is a one-to-one map from $Z_N$ to the descriptor of the $N$ processes composing the grouping. The structure of a process grouping is defined by a process grouping topology and this proposal makes no further statements regarding such structures. %[Tony] % It is not obvious to me that we want to enforce topology at this % juncture. However, we could register topology information in % the extensible structure strategy of Proposal V. % %[Lyndon] % I am deliberately making weak statements about topology while % acknolwedging the the process topology subcommittee. 
\subsubsection*{Group Descriptor} Each group is identified by a {\it group descriptor\/} which is expressed as an integer type in the host language and has an opaque value which is process local. \SeeReferNote{integer:descriptors} \SeeReferNote{group:identifiers} \SeeReferNote{descriptor:cache} The initialisation of MPI services will assign to each process an {\it own} group descriptor for a process grouping of which the process is a member. Each process retains its own group descriptor and membership of the process grouping until the termination of MPI services. \SeeReferNote{group:blobs} MPI provides a procedure which returns the descriptor of the home group of the calling process. For example, {\tt gd = mpi\_home\_gd()}. \SeeReferNote{own:group} \subsubsection*{Group Creation and Deletion} MPI provides a procedure which allows users to dynamically create one or more groups which are subsets of existing groups. For example, {\tt gdb = mpi\_group\_partition(gda, key)} creates one or more new groups {\tt gdb} partition an existing group {\tt gda} into one or more distinct subsets. This procedure is called by and synchronises all members of {\tt gda}. MPI provides a procedure which allows users to dynamically create one group by explicit definition of its membership as a list of process descriptors. For example, {\tt gd = mpi\_group\_definition(listofpd)} creates one new group {\tt gd} with membership and ordering described by the process descriptor list {\tt listofpd}. This procedure is called by and synchronises all processes identified in {\tt listofpd}. %[Tony] % loosely synchronous % %[Lyndon] % Do we mean the same thing? the constructors require each member % if the object under construction to participate in the construction, % and return a descriptor for the constructed operation. Therefore % no member can terminate construction before all other members have % commenced, at least. MPI provides a procedure which allows users to delete a created group. This procedure accepts the descriptor of a group which was created by the calling process and destroys the identified group. For example, {\tt mpi\_group\_deletion(gd)} deletes an existing group {\tt gd}. This procedure is called by and synchronises all members of {\tt gd}. MPI provides additional procedure which allow users to construct process groupings which have a process grouping topology. \subsubsection*{Descriptor Transmission} The value of a group descriptor can be transmitted in a message as an integer since it is an integer type in the host language. However the recipient of the descriptor can make no defined use of the value of in the MPI operations described in this proposal --- the descriptor is {\it invalid}. %[Tony] % Then why allow it to be passed? % %[Lyndon] % Since it is an integer one cannot prevent the user from passing it. % The discussion notes should help answer why is is an integer. More % here. % We'd like (gd,rank) and (NULL,pd) to be suppliable to point-to-point. % Because rank will be an integer, then pd must also be an integer, % so I write that all descriptors are integers for consistency beteen % the different descriptors. MPI provides a mechanism whereby the user can transmit a valid group descriptor in a message such that the received descriptor is valid. This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. {\tt MPI\_GD\_TYPE}. MPI provides a group descriptor registry service. 
The operations which this service supports are: register descriptor by name;
deregister descriptor; lookup descriptor identifier by name and validate,
blocking the caller until the name has been registered.
\SeeReferNote{registry:check} Use of this service is not mandated. Programs
which can conveniently be expressed without using the service can ignore it
without penalty.
%[Tony]
% I don't get all of this. Why?
%
%[Lyndon]
% I don't understand what you don't get. A registry service is just
% an easier (nonscalable, okay) way for concurrent entities to hook up
% with one another, rather than having some common ancestor send descriptors
% around in messages.
% In fact I don't really need a group or process registry service, as mentioned
% in the discussion section (yes, I know, not well presented), but
% I do need a context registry service, and I'm being consistent
% between different kinds of descriptors again.
% Suggestive syntax for a registry service is pretty boring, so
% I skipped it.

Note that receipt of a group descriptor in the fashions described above may
have a persistent effect on the implementation of MPI at the receiver, and in
particular may reserve state. MPI will provide a procedure which invalidates a
valid descriptor, allowing the implementation to free reserved state. For
example, {\tt mpi\_invalidate\_gd(gd)}. The user is not allowed to invalidate
the own group descriptor of the process or the group descriptor of any group
created by the calling process. \SeeReferNote{group:coherency}
%[Tony]
% I do not understand the usefulness or formal need for all this
% validation and invalidation of process identifiers. Why, where
% did it come from, what does it get us? How can this be related
% to anything I have seen before?
%
%[Lyndon]
% Acquisition of a descriptor requires an allocator, and consumes
% resource. In such cases it is good practice to provide a
% deallocator which frees resource.
%
% [Tony]
% I understand the idea here, but not all the details. Can this
% be justified/exemplified/simplified?
%
%[Lyndon]
% Unless I am missing the additional point in this comment, I can't
% see the problem. Probably poor presentation of Proposal I++ :-)
%

\subsubsection*{Descriptor Attributes}

MPI provides a procedure which accepts a valid group descriptor and returns
the rank of the calling process within the identified group. For example,
{\tt rank = mpi\_group\_rank(gd)}.

MPI provides a procedure which accepts a valid group descriptor and returns
the number of members, or {\it size}, of the identified group. For example,
{\tt size = mpi\_group\_size(gd)}.

MPI provides a procedure which accepts a valid group descriptor and process
order number, or {\it rank}, and returns the valid descriptor of the process
to which the supplied rank maps within the identified group. For example,
{\tt pd = mpi\_group\_pd(gd, rank)}. \SeeReferNote{pd:to:rank}

MPI provides additional procedures which allow users to determine the process
grouping topology attributes.
%[Tony]
% It is not obvious to me that we want to enforce topology at this
% juncture. However, we could register topology information in
% the extensible structure strategy of Proposal V.
%
%[Lyndon]
% I am deliberately making weak statements about topology while
% acknowledging the process topology subcommittee.
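
The attribute queries above are sufficient to enumerate a group. The fragment
below is a hedged usage sketch only: the C prototypes mirror the suggestive
syntax used in this proposal ({\tt mpi\_home\_gd}, {\tt mpi\_group\_rank},
{\tt mpi\_group\_size}, {\tt mpi\_group\_pd}), which is notional and not a
defined binding.
\begin{verbatim}
/* Hedged usage sketch; the prototypes mirror the suggestive syntax
 * of this proposal and are not a defined MPI binding.              */
typedef int mpi_gd;                          /* group descriptor    */
typedef int mpi_pd;                          /* process descriptor  */

extern mpi_gd mpi_home_gd(void);                  /* own group of caller     */
extern int    mpi_group_rank(mpi_gd gd);          /* rank of caller in gd    */
extern int    mpi_group_size(mpi_gd gd);          /* number of members of gd */
extern mpi_pd mpi_group_pd(mpi_gd gd, int rank);  /* rank -> descriptor      */

void list_home_group(void)
{
    mpi_gd gd   = mpi_home_gd();
    int    me   = mpi_group_rank(gd);
    int    size = mpi_group_size(gd);
    int    rank;

    for (rank = 0; rank < size; rank++) {
        mpi_pd pd = mpi_group_pd(gd, rank);
        /* pd is opaque and process local; it identifies the member
         * holding rank `rank' of the caller's own group.            */
        (void) pd;                 /* silence unused-variable warnings */
    }
    (void) me;
}
\end{verbatim}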
\subsection{Communication Contexts}
\LabelSection{contexts}

This proposal views a communication context as a uniquely identified reference
to exactly one process grouping, which is a field in a message envelope and
may therefore be used to distinguish messages. The context inherits the
referenced process grouping as a ``frame''. Each process grouping is used as a
frame for multiple contexts.

\subsubsection*{Context Descriptor}

Each context is identified by a {\it context descriptor\/} which is expressed
as an integer type in the host language and has an opaque value which is
process local. \SeeReferNote{integer:descriptors}
\SeeReferNote{context:identifiers} \SeeReferNote{descriptor:cache}

The creation of MPI process groupings allocates an {\it own\/} context which
inherits the created grouping as a frame and can be thought of as a property
of the created grouping. The grouping retains its own context until MPI
process grouping deletion.

MPI provides a procedure which accepts a valid group descriptor and returns
the context descriptor of the own context of the identified group. For
example, {\tt cd = mpi\_own\_cd(gd)}. \SeeReferNote{own:context}

\subsubsection*{Context Creation and Deletion}

MPI provides a procedure which allows users to dynamically create contexts.
This procedure accepts a valid descriptor of a group of which the calling
process is a member, and returns a context descriptor which references the
identified group. For example, {\tt cd = mpi\_context\_create(gd)}. This
procedure is called by and synchronises all members of {\tt gd}.
\SeeReferNote{dynamic:contexts}
%[Tony]
% loosely synchronous
%
%[Lyndon]
% Do we mean the same thing? the constructors require each member
% of the object under construction to participate in the construction,
% and return a descriptor for the constructed operation. Therefore
% no member can terminate construction before all other members have
% commenced, at least.

MPI provides a procedure which allows users to destroy created contexts. This
procedure accepts a valid context descriptor which was created by the calling
process and deletes that context identifier. For example,
{\tt mpi\_context\_delete(cd)}. This procedure is called by and synchronises
all members of the frame of {\tt cd}.

\subsubsection*{Descriptor Transmission}

The value of a context descriptor can be transmitted in a message as an
integer since it is an integer type in the host language. However the
recipient of the descriptor can make no defined use of the value in the MPI
operations described in this proposal --- the descriptor is {\it invalid}.
%[Tony]
% Then why allow it to be passed?
%
%[Lyndon]
% Since it is an integer one cannot prevent the user from passing it.
% The discussion notes should help answer why it is an integer. More
% here.
% We'd like (gd,rank) and (NULL,pd) to be suppliable to point-to-point.
% Because rank will be an integer, then pd must also be an integer,
% so I write that all descriptors are integers for consistency between
% the different descriptors.

MPI provides a mechanism whereby the user can transmit a valid context
descriptor in a message such that the received descriptor is valid. This is
integrated with the capability to transmit typed messages. It is suggested
that a notional data type should be introduced for this purpose, e.g.
{\tt MPI\_CD\_TYPE}.
%[Tony]
% I don't get all of this. Why?
%
%[Lyndon]
% I don't understand what you don't get. A registry service is just
% an easier (nonscalable, okay) way for concurrent entities to hook up
% with one another, rather than having some common ancestor send descriptors
% around in messages.
% In fact I don't really need a group or process registry service, as mentioned
% in the discussion section (yes, I know, not well presented), but
% I do need a context registry service, and I'm being consistent
% between different kinds of descriptors again.
% Suggestive syntax for a registry service is pretty boring, so
% I skipped it.

MPI provides a context descriptor registry service. The operations which this
service supports are: register descriptor by name; deregister descriptor;
lookup descriptor identifier by name and validate, blocking the caller until
the name has been registered. \SeeReferNote{registry:check} Use of this
service is not mandated. Programs which can conveniently be expressed without
using the service can ignore it without penalty.

Note that receipt of a context descriptor in the fashions described above may
have a persistent effect on the implementation of MPI at the receiver, and in
particular may reserve state. MPI will provide a procedure which invalidates a
valid descriptor, allowing the implementation to free reserved state. For
example, {\tt mpi\_invalidate\_cd(cd)}. The user is not allowed to invalidate
the own context descriptor of a group or the context descriptor of any context
created by the calling process. \SeeReferNote{context:coherency}
%[Tony]
% I do not understand the usefulness or formal need for all this
% validation and invalidation of process identifiers. Why, where
% did it come from, what does it get us? How can this be related
% to anything I have seen before?
%
%[Lyndon]
% Acquisition of a descriptor requires an allocator, and consumes
% resource. In such cases it is good practice to provide a
% deallocator which frees resource.
%
% [Tony]
% I understand the idea here, but not all the details. Can this
% be justified/exemplified/simplified?
%
%[Lyndon]
% Unless I am missing the additional point in this comment, I can't
% see the problem. Probably poor presentation of Proposal I++ :-)
%

\subsubsection*{Descriptor Attributes}

MPI provides a procedure which allows users to determine the process grouping
which is the frame of a context. For example, {\tt gd = mpi\_context\_gd(cd)}.

\subsection{Point-to-Point Communication}
\LabelSection{point2point}

This proposal recommends three forms for MPI point-to-point message addressing
and selection: null context; closed context; open context. It is further
recommended that messages communicated in each form are distinguished such
that a Send operation of form X cannot match with a Receive operation of form
Y, which requires that the form is embedded into the message envelope. The
three forms are described, followed by considerations of uniform integration
of these forms in the point-to-point communication section of MPI.

\subsubsection*{Null Context Form}

The {\it null context\/} form contains no message context.
\SeeReferNote{null:context} Message selection and addressing are expressed by
{\tt (pd, tag)} where: {\tt pd} is a process descriptor; {\tt tag} is a
message tag.

Send supplies the {\tt pd} of the receiver. Receive supplies the {\tt pd} of
the sender. Receive can wildcard on {\tt pd} by supplying the value of a named
constant process descriptor, e.g. {\tt MPI\_PD\_WILD}. This proposal makes no
statement about the provision for wildcard on {\tt tag}.
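
To make the addressing tuple concrete, the fragment below sketches what a null
context exchange might look like in C. The procedure names
({\tt mpi\_send\_null}, {\tt mpi\_recv\_null}) and argument order are
hypothetical, invented only for illustration; the proposal specifies the
selection tuple {\tt (pd, tag)} but not the calling sequence.
\begin{verbatim}
/* Hedged sketch of the null context form; names and signatures are
 * hypothetical, only the (pd, tag) selection tuple comes from the
 * proposal.                                                         */
typedef int mpi_pd;

extern int mpi_send_null(mpi_pd dest_pd, int tag,
                         const void *buf, int len);
extern int mpi_recv_null(mpi_pd src_pd, int tag,
                         void *buf, int len);

void exchange(mpi_pd peer)
{
    double x = 3.14, y;

    /* Send addresses the receiver by process descriptor and tag.   */
    mpi_send_null(peer, /* tag */ 99, &x, sizeof x);

    /* Receive selects on the sender's descriptor and tag; the named
     * constant MPI_PD_WILD could replace `peer' to wildcard on the
     * sender, as described above.                                   */
    mpi_recv_null(peer, 99, &y, sizeof y);
}
\end{verbatim}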
\subsubsection*{Closed Context Form}

The {\it closed context\/} form permits communication between members of the
same context. \SeeReferNote{closed:context} Message selection and addressing
are expressed by {\tt (cd, rank, tag)} where: {\tt cd} is a context
descriptor; {\tt rank} is a process rank in the frame of {\tt cd}; {\tt tag}
is a message tag.

Send supplies the {\tt cd} of the receiver and sender, and the {\tt rank} of
the receiver. Receive supplies the {\tt cd} of the sender and receiver, and
the rank of the sender. The {\tt (cd, rank)} pair in Send (Receive) is
sufficient to determine the process descriptor of the receiver (sender).
Receive cannot wildcard on {\tt cd}. Receive can wildcard on {\tt rank} by
supplying the value of a named constant integer, e.g. {\tt MPI\_RANK\_WILD}.
This proposal makes no statement about the provision for wildcard on
{\tt tag}.

\subsubsection*{Open Context Form}

The {\it open context\/} form permits communication between members of any
two contexts. \SeeReferNote{open:context} Message selection and addressing are
expressed by {\tt (lcd, rcd, rank, tag)} where: {\tt lcd} is a ``local''
context descriptor; {\tt rcd} is a ``remote'' context descriptor; {\tt rank}
is a process rank in the frame of {\tt rcd}; {\tt tag} is a message tag.

Send supplies the context descriptor for the sender in {\tt lcd}, the context
descriptor for the receiver in {\tt rcd}, and the {\tt rank} of the receiver
in the frame of {\tt rcd}. Receive supplies the context descriptor for the
receiver in {\tt lcd}, the context descriptor for the sender in {\tt rcd}, and
the {\tt rank} of the sender in the frame of {\tt rcd}. The {\tt (rcd, rank)}
pair in Send (Receive) is sufficient to determine the process descriptor of
the receiver (sender).

Receive cannot wildcard on {\tt lcd}. Receive can wildcard on {\tt rcd} by
supplying the value of a named constant context descriptor, e.g.
{\tt MPI\_CD\_WILD}, in which case Receive {\it must\/} also wildcard on
{\tt rank} as there is insufficient information to determine the process
descriptor of the sender. Receive can wildcard on {\tt rank} by supplying the
value of a named constant integer, e.g. {\tt MPI\_RANK\_WILD}. This proposal
makes no statement about the provision for wildcard on {\tt tag}.

\subsubsection*{Uniform Integration}

The three forms of addressing and selection described have different syntactic
frameworks. We can consider integrating these forms into the point-to-point
section of MPI by defining a further orthogonal axis (as in the multi-level
proposal of Gropp \& Lusk) which deals with the form. This is at the expense
of multiplying the number of Send and Receive procedures by a factor of three,
and some difficulty in details of the current point-to-point working document
which uniformly assumes a single addressing and selection form. There are
various approaches to unification of the syntactic frameworks which may
simplify integration. Three options are now described, each based on retention
and extension of the framework of one form. These options each have advantages
and disadvantages.

\paragraph*{Option i:}
The framework of the open context form is adopted and extended. We introduce
the {\it null\/} descriptor, the value of which is defined by a named
constant, e.g. {\tt MPI\_NULL}. The null context form is expressed as
{\tt (MPI\_NULL, MPI\_NULL, pd, tag)}, which is a little clumsy. The closed
context form is expressed as {\tt (MPI\_NULL, cd, rank, tag)}, which is
marginally inconvenient.
The open context form is expressed as {\tt (lcd, rcd, rank, tag)}, which is of
course natural.

\paragraph*{Option ii:}
The framework of the closed context form is adopted and extended. We introduce
the {\it null\/} descriptor, the value of which is defined by a named
constant, e.g. {\tt MPI\_NULL}. The null context form is expressed as
{\tt (MPI\_NULL, pd, tag)}, which is marginally inconvenient. The closed
context form is expressed as {\tt (cd, rank, tag)}, which is of course
natural. Expression of the open context form requires a little more work. We
can use the {\tt cd} field as ``shorthand notation'' for the {\tt (lcd, rcd)}
pair at the expense of introducing some trickery. We can define a mechanism
which ``globs'' together these two fields into an identifier which Send and
Receive can distinguish from a context identifier and treat as the shorthand
notation. Then we should also define a mechanism by which a receiver which has
completed a Receive with wildcard on {\tt rcd} is able to determine the valid
context descriptor of the sender. This is inconvenient.
%[Tony]
% I dislike this intensely. There should be a group-pair data
% structure. Group is never a pair of sub-groups. It is a
% bad idea. This is all to get around changing syntax, no?
%
% [Lyndon]
% I dislike this also. Of course it is all to get around extending
% the syntax, it stinks of that.

\paragraph*{Option iii:}
The framework of the null context form is adopted and extended. The null
context form is expressed as {\tt (pd, tag)}, which is of course natural.
Expression of the open and closed context forms requires a little more work.
We can use the {\tt pd} field as ``shorthand notation'' for {\tt (cd, rank)}
and {\tt (lcd, rcd, rank)} by continuation of the trickery used in the
previous option. This is clumsy.
%[Tony]
% I dislike this intensely. There should be a group-pair data
% structure. Group is never a pair of sub-groups. It is a
% bad idea. This is all to get around changing syntax, no?
%
% [Lyndon]
% I dislike this also. Of course it is all to get around extending
% the syntax, it stinks of that.

\subsection{Collective Communication}
\LabelSection{collective}

Symmetric collective communication operations are compliant with the closed
context form described above. This proposal recommends that such operations
accept a context descriptor which identifies the context and frame in which
they are to operate. MPI does plan to describe symmetric collective
communication operations. It is not possible to determine whether the proposal
is sufficient to allow implementation of the collective communication section
of MPI in terms of the point-to-point section of MPI without loss of
generality, since the collective operations are not yet defined.
%[Tony]
% Check, but it is desirable that they be so writable, so we will
% have to watch.
%

Asymmetric collective communication operations, especially those in which the
sender(s) and receiver(s) are distinct processes, are compliant with the open
context form described above. This proposal recommends that such operations
accept a pair of context descriptors (perhaps in a ``glob'' form) which
identify the contexts and frames in which they are to operate. MPI does not
plan to describe asymmetric collective communication operations. Such
operations are expressive when writing programs beyond the SPMD model and
comprise communicative, functionally distinct process groupings. This proposal
recommends that such operations should be considered in some reincarnation of
MPI.
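
As an illustration of how a symmetric collective operation might pick up both
its context and its frame from a single argument, consider the following
hedged sketch. The broadcast name and signature, and the closed-form send and
receive prototypes, are hypothetical; only the recommendation that collectives
accept a context descriptor, and the attribute queries
{\tt mpi\_context\_gd}, {\tt mpi\_group\_rank} and {\tt mpi\_group\_size},
come from this proposal.
\begin{verbatim}
/* Hedged sketch: a symmetric collective addressed by a context
 * descriptor alone, layered naively on the closed context form.
 * Names and signatures are hypothetical.                          */
typedef int mpi_gd;
typedef int mpi_cd;

extern mpi_gd mpi_context_gd(mpi_cd cd);     /* frame of the context  */
extern int    mpi_group_rank(mpi_gd gd);     /* my rank in the frame  */
extern int    mpi_group_size(mpi_gd gd);     /* size of the frame     */
extern int    mpi_send_closed(mpi_cd cd, int rank, int tag,
                              const void *buf, int len);
extern int    mpi_recv_closed(mpi_cd cd, int rank, int tag,
                              void *buf, int len);

/* Broadcast from rank `root' to all members of the frame of cd.    */
void bcast_in_context(mpi_cd cd, int root, void *buf, int len)
{
    mpi_gd gd   = mpi_context_gd(cd);
    int    me   = mpi_group_rank(gd);
    int    size = mpi_group_size(gd);
    int    tag  = 0;              /* private to this operation       */
    int    rank;

    if (me == root) {
        for (rank = 0; rank < size; rank++)
            if (rank != root)
                mpi_send_closed(cd, rank, tag, buf, len);
    } else {
        mpi_recv_closed(cd, root, tag, buf, len);
    }
}
\end{verbatim}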
%
% END "Proposal"
%----------------------------------------------------------------------%
% [Tony]
% So, I gather that a set of groups is passable to a collcomm,
% and a pair is passable to a pt2pt. That is neat, but it should
% still be a separate data structure, with separate calls than
% the intra-group version (at least for the pt2pt calls).
%
% [Lyndon]
% Currently, I am only thinking of passing either singlets or duplets
% to point-to-point and collective.
% I was trying to avoid separate - extra - calls to point-to-point
% because that is already very large and there is a swell of concern
% about the size thereof.
%----------------------------------------------------------------------%
% BEGIN "Discussion & Notes"
%

\section{Discussion \& Notes}

This section comprises a discussion of certain aspects of this proposal
followed by the notes referenced in the detailed proposal.

\subsection{Discussion}
\LabelSection{discussion}

We can dissect the proposal into two parts: an SPMD model core; an MIMD model
annex. In this discussion the dissection is exposed and the conceptual
foundation of each part is described. The discussion also presents arguments
for and against the MIMD model annex, and to some extent explores the
consequences of these arguments for MPI in a wider sense.

\subsubsection*{SPMD model core}

The SPMD model core provides noncommunicative process groupings and
communication contexts for writers of SPMD parallel libraries. It is intended
to provide expressive power beyond the ``SPIMD'' model, in which processes
execute in a strictly SIMD fashion.

The material describing processes in \ReferSection{processes} is simplified:
\begin{itemize}
\item processes have identical instruction blocks and different data blocks;
\item process descriptor transmission and registry become redundant (Note,
however, that they are anyway redundant as context descriptor transmission and
registry coupled with context descriptor attribute query and group descriptor
attribute query is capable of providing access to all process descriptors);
\item dynamic process models are not considered.
\end{itemize}

The material describing process groupings in \ReferSection{groupings} is
simplified:
\begin{itemize}
\item group descriptor transmission and registry become redundant (Note,
however, that they are anyway redundant as context descriptor transmission and
registry coupled with context descriptor attribute query is capable of
providing access to all group descriptors.);
\item the own process grouping explicitly becomes a group containing all
processes.
\end{itemize}

The material describing communication contexts in \ReferSection{contexts} is
simplified:
\begin{itemize}
\item context descriptor transmission and registry become unnecessary.
\end{itemize}

The material describing point-to-point communication in
\ReferSection{point2point} is simplified:
\begin{itemize}
\item the open context form becomes redundant;
\item uniform integration ``Option i'' is deleted, and ``Option ii'' loses
``globs'', thus becoming simple enough that ``Option iii'' need not be further
considered.
\end{itemize}

The material describing collective communication in \ReferSection{collective}
is simplified:
\begin{itemize}
\item there is no possibility of collective communication operations spanning
more than one context.
\end{itemize}

\subsubsection*{MIMD model annex}

The MIMD model annex extends and modifies the SPMD model core to provide
expressive power for MIMD programs which combine (coarse grain) function and
data driven parallelism.
The MIMD model annex is not intended to provide expressive power to fine
grained function driven parallel programs --- it is conjectured that message
passing approaches such as MPI are not suited to fine grained function driven
or data driven programming. The annex is intended to provide expressive power
for the ``MSPMD'' model, which is now described.

One of the simplest MIMD models is the ``host-node'' model, familiar in
EXPRESS and PARMACS, containing two functional groups: one node group (SPMD
like); one host group (a singleton). The ``parallel client-server'' model, in
which each of the $n$ clients is composed of parallel processes, and in which
the server may also be composed of parallel processes, contains $1+n$
functional groups: $n$ client groups (SPMD like); one server group (singleton,
SPMD like). The ``host-node'' model is a case of this model in which the host
can be viewed as a singleton client and the nodes can be viewed as an SPMD
like server (or vice versa!). The ``parallel module graph'' model, in which
each module within the graph may be composed of parallel processes (singleton,
SPMD like), contains any number of functional groups with arbitrarily complex
relations. The ``parallel client-server'' model is a case of this model in
which the module graph contains arcs joining the server to each client.

The MIMD model annex is intended to provide expressive power for the
``parallel module graph'' model, which I refer to as the MSPMD model. This
model requires support at some level as commercial and modular applications
are increasingly moving into parallel computing. The debate is whether or not
message passing approaches such as MPI (which I simply refer to as MPI) should
provide for this model.

The negative argument is that such SPMD modules should be controlled and
communicate with one another as ``parallel processes'' at the distributed
operating system level. The argument has some appeal as the world of
distributed operating systems must deal with difficult issues such as process
control and coherency. Avoidance of duplication in MPI allows MPI to focus on
provision of a smaller set of facilities with greater emphasis on maximum
performance for data driven SPMD parallel programs.

The positive argument is that communication between SPMD modules requires high
performance and MPI can provide such performance with tuned semantics which
expect the user to deal with coherency issues. There is also the argument that
MPI is able to deal with this in a shorter time than development and
standardisation procedures for distributed operating systems. The latter
argument is comparable with the argument for MPI versus parallel compilation.

\subsection{Notes}
\LabelSection{notes}

\begin{enumerate}

\item\LabelNote{integer:descriptors}
Descriptors are a plentiful but bounded resource. They are expressed as
integers for two reasons. Firstly, this expression facilitates uniform binding
to ANSI~C and Fortran~77. Secondly, there is a later convenience in ensuring
that process descriptors in particular are expressed as integers, and I made
all the descriptors integers for consistency. In practice the descriptor could
be an index into a table of object description structures or pointers to such
structures.

\item\LabelNote{descriptor:cache}
It is possible to allow the user to save user defined attributes in
descriptors, as Littlefield has suggested. Such attributes should not be
communicated in either descriptor transmission or the descriptor registry.
\item\LabelNote{process:identifiers}
The process descriptor is not a global unique process identifier but must
reference such an identifier. The use of descriptors hides such identifiers in
order that they may be implementation dependent, and enables more efficient
access to additional process information in implementations. For example, the
process identifier of a receiver may not be sufficient to route a message to
the receiver, and this information can be referenced from the process
descriptor.

\item\LabelNote{group:identifiers}
The group descriptor is not a global unique group identifier but must
reference such an identifier. The use of descriptors hides such identifiers in
order that they may be implementation dependent, and enables more efficient
access to additional group information in implementations. For example, the
size of the group and the map from member ranks to process descriptors can be
referenced from the group descriptor.

\item\LabelNote{context:identifiers}
The context descriptor is not a global unique context identifier but must
reference such an identifier. The use of descriptors hides such identifiers in
order that they may be implementation dependent, and enables more efficient
access to additional context information in implementations. For example, the
group descriptor of the frame can be referenced from the context descriptor.

\item\LabelNote{dynamic:processes}
The proposal does not prevent a process model which allows dynamic creation
and termination of processes; however, it does not favour an asynchronous
process creation model in which singleton processes are created and terminated
in an unstructured fashion. The proposal does favour a model in which blobs of
processes are created (and terminated) in a concerted fashion, and in which
each group so created is assigned a different own process grouping. This model
does not take into account the potential desire to expand or contract an
existing blob of processes in order to take into account (presumably slowly)
time varying workloads. The author suggests that concerted blob expand and
contract operations are more suitable for this purpose than asynchronous
spawn/kill operations.

\item\LabelNote{registry:check}
There should probably also be a registry operation which checks whether a name
has been registered without the possibility of blocking the calling process
indefinitely. The same registry could be used for process, group and context
descriptors.

\item\LabelNote{process:coherency}
In the static process model it is assumed that processes are created in
concert, and that termination of one process implies termination of all
processes. In this case there is no coherency problem associated with
processes and process descriptors. In a dynamic process model there is a
coherency problem with processes and process descriptors, since a process can
retain the descriptor of a process which has been deleted. The proposal
expects the user to ensure coherent usage.

\item\LabelNote{dynamic:groups}
Process groupings are dynamic in the sense that they can be created at any
time, and static in the sense that the membership is constant over the
lifetime of the process grouping. The author suggests concerted group expand
and contract operations are more generally useful than asynchronous join/leave
operations.

\item\LabelNote{group:blobs}
MPI has discussed the concept of the ``all'' group which contains all
processes.
The ``own'' group concept is intended to be a generalisation of the ``all''
group concept which is expressive for programs including and beyond the SPMD
model. Processes are created in ``blobs'', where each member of a blob is a
member of the same own process grouping, and different blobs have different
own process groupings. An SPMD program is a single blob. A host-node program
comprises two blobs, the node blob and the host blob (which is a singleton).
There is a sense in which a blob has a locally SPMD view.

\item\LabelNote{own:group}
This procedure looks like a process descriptor attribute query.

\item\LabelNote{group:coherency}
Dynamic processes potentially introduce a group coherency problem as a group
descriptor can contain the process descriptor of a process which has been
deleted. Group descriptor transmission introduces a group and group descriptor
coherency problem since a process can retain a group descriptor of a group
which has been deleted. The proposal expects the user to ensure coherent
usage.

\item\LabelNote{pd:to:rank}
The proposal did not include a function to convert a {\tt (gd, pd)} pair into
a rank. It is suggested that, if provided, this inverse map be allowed to be
arbitrarily slow.

\item\LabelNote{dynamic:contexts}
Marc Snir has described a method by which global unique group identifiers can
be generated without use of shared global data. The description of context
creation anticipates a similar method for global unique context identifier
generation. However, the synchronisation requirement makes such a method
unnecessary for context creation. Synchronisation at creation of a context
within a process grouping frame implies that all processes within the frame
perform the same context creations in the same sequence. Therefore the
combination of global unique frame identifier and context creation sequence
number provides a global unique context identifier without communication.

\item\LabelNote{own:context}
This procedure looks like a group descriptor attribute query.

\item\LabelNote{context:coherency}
The retention of a reference to a group descriptor by a context introduces the
potential for a context and context descriptor coherency problem, as a context
could contain a reference to the group descriptor of a group which has been
deleted. This could be circumvented by demanding that all such contexts are
deleted before a group is deleted. Context descriptor transmission introduces
a coherency problem since a process can retain the descriptor of a context
which has been deleted. The proposal expects the user to ensure coherent
usage.

\item\LabelNote{null:context}
This is somewhat like PARMACS and PVM~3. It is general purpose, but not
particularly expressive. It does not provide facilities for writers of
parallel libraries.

\item\LabelNote{open:context}
This is somewhat like ZIPCODE and VENUS. It is expressive in certain SPMD
programs where noncommunicative data driven parallel computations can be
performed concurrently. It provides facilities for writers of SPMD like
parallel libraries.

\item\LabelNote{closed:context}
This is somewhat like CHIMP and PVM~2. It is expressive in certain MIMD
programs where communicative data driven parallel computations can be
performed concurrently. It provides facilities for MSPMD like parallel
libraries.
\end{enumerate} % % END "Discussion & Notes" %----------------------------------------------------------------------% %----------------------------------------------------------------------% % BEGIN "Conclusion" % \section{Conclusion} This chapter has presented and discussed a proposal for communication contexts within MPI. In the proposal process groupings appeared as frames for the creation of communication contexts, and communication contexts retained certain properties of the frames used in their creation. The author strongly recommends this proposal to the committee. % % END "Conclusion" %----------------------------------------------------------------------% % % END "Proposal VI" %======================================================================% \end{document} ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Sun Mar 21 18:51:28 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA01899; Sun, 21 Mar 93 18:51:28 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01788; Sun, 21 Mar 93 18:50:17 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 21 Mar 1993 18:50:14 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01768; Sun, 21 Mar 93 18:50:09 -0500 Date: Sun, 21 Mar 93 23:50:04 GMT Message-Id: <13403.9303212350@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: V commenting To: tony@aurora.cs.msstate.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Hi Tony I ran over schedule before finishing. Comments are marked in the Lyndon favoured style % %[name] % text % Comments to comments are nested %% %[name] % text %%[name2] %%text2 %% % There are comments to comments to comments down there as well %%% Good luck! Lyndon ---------------------------------------------------------------------- % [Lyndon] General points % -------------- % % 1) I had to ``mine'' the text :-) Perhaps one of us (i.e., I am % offering if you wish) should attempt to construct a more transparent % presentation before circulation to whole committee, for the % convenience of committee members. %%[Tony] %% I felt that things appear twice, because of summary (good) %% and because of implementation notes at end (confusing) %% % 2) I'm not a fan of much of this proposal, although I do indeed like % some of the ideas which it introduces. [On the other hand, I'm not a % great fan of all of the proposal which I wrote. I shall mail self % criticism of my proposal, and may have to write amended or alternative % proposal :-)] %%[Tony] %% Please be more specific. I am having a hard time understanding %% why you really don't like it, Lyndon. If the process model %% were a little less static, and servers were permitted (though %% hopefully bounded in cost), I think we would have an excellent %% proposal. %% %%%[Lyndon] %%% I would have thought that the points below make my major %%% objections perfectly clear. Perhaps not. Here they are: %%% a) Paired-exact-match stuff %%% b) Translation of (group,rank) into TID all over the place % % 3) I really like the way in which groups are something like ``frames'' % in which contexts are created. 
This is conceptually much neater than % duplication of groups. %%[Tony] %% In practice, group subsetting will require groups to be copied, %% otherwise, subgroups will unfairly be penalized by the size %% of their ancestor. %% %%%[Lyndon] %%% I am anticipating that when one or more groups are created by %%% subsetting that, for example if the parent were described by %%% a proces list, then the children will be described by process %%% lists which are distinct sublists of the parent. So each element %%% of the parent list gets copied, exactly once. %%% The difficulty I have is that if a group were to be expanded %%% or contracted, then the ``duplicates'' thereof would no longer %%% be duplicates. Saying that duplicate creates a group bu retains %%% the process list of the old group is conceptually muddy since %%% the new group is a reference to a group, whereas the old group %%% or an even older group must be an actual group. Yuk! Now, if %%% we introduce the concept of a reference to an actual group, %%% and say this reference is unqieuly identified, then is is %%% conceptually sound and this object we describe really is a context. % % 4) I like the idea of pushing information into the group structure. I % have a few qualms with the proposed details --- see specific points. %%[Tony] %% I have more confidence about this idea, and could demonstrate %% by June/July time-frame in Zipcode. %% % % 5) See ``Writing a server in the point-to-point layer of MPI in four % easy steps'' at the foot of the message. %%[Tony] %% This seems like a nice thing. %% %%%[Lyndon] %%% You are too kind :-) \documentstyle{report} \begin{document} \title{``Proposal V'' for MPI Communication Context Subcommittee} \author{Rik~Littlefield} \date{March 1993} \maketitle %======================================================================% % BEGIN "Proposal V" % Rik Littlefield % March 1993 % \chapter{Proposal V} \section{Summary} \begin{itemize} \item Group ID's have local scope, not global. Groups are represented by descriptors of indefinite size and opaque structure, managed by MPI. Group descriptors can be passed between processes by MPI routines that translate them as necessary. (This satisfies the same needs as global group ID's, but its performance implications are easier to understand and control.) %[Lyndon] % I support the approach whereby group descriptors are local % objects. They could be pointers to structures, or indices % into tables thereof. We let the implementation consider that. % % One difficulty arises as group descriptors can only be passed % from process P to process Q if both P and Q members of some % group G since the communication presumably must use a context % known to both P and Q. Imagine that P is member of F and Q is not % member of F; that Q is member of H and P is not member of H; that % both P and Q are member of G. Let M be abritrary message data. % % Initially - % P can send F to Q, and Q can receive F from P, in a context of G. % Q can send H to P, and P can receive H from Q, in a context of G. % Thereafter - % P can allocate a context C in F. % P can send C to Q, and Q can receive C in the default context of H. % Q can allocate a context D in H. % Q can send D to P, and P can receive D, in the default context of F. % Thereafter - % P as member of F, and Q as member of H, can communicate using % wildcard pid and tag by use of contexts C and D. % % Okay, this is possible, but it is messy :-) %%[Tony] %% Alternatives, Lyndon? 
%% %%%[Lyndon] %%% I don't suppose for one minute that you will like this, but I really %%% would suggest that in this case a group descriptor registry may %%% be appropriate. %[Tony] % Seems doable. % %%[Lyndon] %% But usable with some grief, as above. \item Arbitrary process-local info can be cached in group descriptors. (This allows collective operations to run quickly after the first execution in each group, without complicating the calling sequences.) %[Lyndon] % I rather like this idea. %% [Tony] %% Me too. %[Tony] % Seems doable. % \item There are restrictions that permit groups to be layered on top of pt-pt. %[Tony] % I don't understand the word 'restriction' here. % Restriction of what. % %%[Lyndon] %% Rik is speaking of the pair-exact-match stuff you see later on. \item Pt-pt communications use only TID, context, and tag, and are specified to be fast. %[Tony] % What does "fast" mean. % %%[Lyndon] %% Fair question! \item Group control operations (forming, disbanding, and translating between rank and TID) are NOT specified to be fast. %[Tony] % OK, the above two items are identical to what Zipcode % provides in practice, but people have argued that groups % might be created/deleted more often in some apps, and % that these apps ought to be supportable % %%[Lyndon] %% In our work group creation/deletion is an infrequent operation %% and we are happy to live with reasonable cost for this operation. %% I think Marc Snir is thinking abour a different group model in %% group created/deletion is frequent. %% Perhaps we should provide both or neither (even handedness principle). \end{itemize} \section{Detailed Proposal} \begin{itemize} \item Pt-pt uses only ``TID'', ``context'', and ``message tag''. TID's, contexts, and message tags have global scope; they can be copied between processes as integers. TID's and contexts are opaque; their values are managed by MPI. Message tags are controlled entirely by the application. %[Lyndon] % You probably ought to say that context and TID are integer with % opaque values. %% [Tony] %% 1) It is not obvious that TIDs should be restricted to 32 bits. %%%[Lyndon] %%% I did not imply that they were. %% 2) It is not obvious that contexts will be 32 bits (eg, 16 bits). %% I favor a whole word for a context, despite other limits, %% just to make things simpler. %% %%%[Lyndon] %%% ditto %% Internet addresses are going to get augmented from 32 to ??? bits %% is it reasonable to assume that certain MPI implementations might %% incorporate such internet addresses as TIDs (in future), %%%[Lyndon] %%% No, it is not reasonable, in the least. And both you and Rik appear %%% to be ignoring the possibility that the process descriptor could %%% be required to store routing data of significant length. Therefore %%% the sensible thing to do is use a process descriptor which in %%% practice might be a table index --- fits into integer for sure --- %%% the value of which is process local for sure, and the table %%% contains the real process identifier of implementation defined size. %% Opacity is partially violated if we say how big the data type is??? %%%[Lyndon] %%% I understand the point you make, but it gets blown away by the %%% point I have just made in reply to you. %[Tony] % Yes, and I want at least 32-bits of message tag. % %%[Lyndon] %% Yes, and I want exactly zero bits of message tag. %% I'll just keep quiet about message tags. \item Pt-pt communication is specified to be fast in all cases. 
(E.g., MPI must initialize all processes such that any required translation of the TID is faster than the fastest pt-pt communication call.) %[Lyndon] % How do you imagine this to be acheived, considering that TIDs % are global entities? % I guess that you are thinking a TID is a (processor_number, % process_number) pair of bit fields, a bit like one sees in NX and RK, % and that network interface hardware will route based on the % processor_number. % % In another approach a TID is a process local entity just like the % group descriptor. This satisfies efficiency when the above scheme % is not applicable, for example in a workstation network. %% %%[Tony] %% where does this get us??? %% Remember, we have to choose on some things, so we can have something %% to present in Dallas. Is there an important difference here? %%%[Lyndon] %%% For sure their is an important difference. See comment above. %% %% TIDs are global entities. Is structure assumed to be global; %% in a truly opaque system, some TID component would have to be %% fixed, but the rest could vary structurally... %% %[Tony] % This could be difficult, in practice, if one mails a % message to one's own process, and MPI is smart enough % to optimize. % \item A group is represented by a ``group descriptor'', of indefinite size and opaque structure, managed by MPI. The group descriptor can be referenced via a ``process-local group ID'', which is typically just a pointer to the group descriptor. (Group ID's with global scope do not exist.) %[Lyndon] % I think I see, it is the context identifier which has global scope. % Now, this really is just getting on the way toward the proposal % that I really wish I had written for the subcommittee. I will flame % myself! %% %% Yes, contexts are global; group identifiers are just pointers %% typically, to data structures, describing %% %% 1) a groups context %% 2) group members and their ranks (mappings, inverses, %% cached, hashed, unscalably stored, etc) %% 3) TID-to-rank map and inverse (see possibilities in 2) %% 4) A set of fixed global operations, accepted as standard, %% an accessible in O(1) time. Possibly, each %% such operation should be a method, so that %% a parameter block can be passed with it. Zipcode %% supports the Method type to do this. %% This is effectively a cache for some parts of item #5 %% ... %% 5) An AVL or similar tree of extensible operations. %% New operations are registerable by the user. These %% tags are unique within a group, a specify an operation %% i) pre-defined by MPI (in which case it can be cached %% in 4 %% ii) alternative operations (even if they do something %% standard, that are wanted to be accessed by %% name) This name is group unique. %% %% A mechanism for DO_METHOD_FROM_GROUP(name,....) %% or GET_METHOD_FROM_GROUP(name,...) %% and SET_METHOD_IN_GROUP(name,...) are clearly needed. %% %[Tony] % Sounds good. % \item Group descriptors can be copied between arbitrary processes via MPI-provided routines that translate them to and from machine- and process-independent form. A process receiving a group descriptor does not become a member of the group, but does become able to obtain information about the group (e.g., to translate between rank and TID). %[Lyndon] % Also crucially, to obtain and use the default context identifier % of the received group descriptor. %%[Tony] %% Yes, that is included, I believe, in concept. %% \item Arbitrary information can be cached in a group descriptor. 
This is, MPI routines are provided that allow any routine to augment a group descriptor with arbitrary information, identified by a key value, that subsequently can be retrieved by any routine having access to the descriptor. Cached information is not visible to other processes and is stripped when a group descriptor is sent to another process. Retrieval of cached information is specified to be fast (significantly cheaper than a pt-pt communication call). %[Lyndon] % I like the general idea, but I'm nervous about two things: % (a) implied associativity of group descriptor cache - this % will potentially be time expensive in implementation. % (b) there is no method proposed for abritration of keys % between independently written modules, so we are % in the same problem regime as just having message tag % and no message context. % However, key's are local, so presumably you would find % it acceptable to add a key registration service? %%[Tony] %% Stripping is extremely controversial aspect, and arbitrary. %% If the recipient has the methods with the same name, then %% a new rendezvous could be accomplished at the far end %[Tony] % In Zipcode 1.0, we allow multiple global operations % to be provided on a message-class (eg, grid-oriented messages) % The identifiers for these possible operations are user-specified % presently, but the "names" of the global operations are fixed % at compile-time. % % That means that there is O(1) time to find combine, fanout, send, % etc, on a group-wide scope. However, other operations cannot % be accessed in O(1) time (they are not in the opaque structure). % % The same mechanism used by Zipcode to allow multiple methods for % combine to be registered by the user, could also allow extensibility % just like Rik describes, with little effort. We use AVL trees. % % In fact, I will add this to Zipcode 1.x. Why say this? It is % not far from existing practice, and I have a lot of the machinery % in place already, and I am confident that it is useful. % \item Group descriptors contain enough information to asynchronously translate between (group,rank) and TID for any member of the group, by any process holding a descriptor for the group. MPI routines are provided to do these translations. Translation is not specified to be fast. %[Lyndon] % How do you imagine that this will be done? % (a) Perhaps an array of TIDs which is just indexed on rank? Then % where is the case for not using directly rank. % (b) Perhaps a hashing function? Then the case for not using rank % directly is marginal. % (c) Perhaps generating a request to a service process? In which % case you admit here that a service process exists, which must % be propogated throughout the proposal and changes one of your % fundamental objectives. % (d) Something else? Do tell! %%[Tony] %% Yes, these are all options. Fastness seems to be an important %% issue. If translation is very expensive, none of the "good" %% features will be used. %[Tony] % This seems to be a serious flaw. It will have to be cached % on an LRU basis, with system/user/both specifying how much % caching is allowed (ie, how much unscalable memory use). % If the first time is expensive, OK, but not the Nth time. % %%[Lyndon] %% Check. \item At creation, each group is assigned a globally unique ``default group context'' which is kept in the group descriptor and can be quickly extracted. This extraction is specified to be fast (comparable to a procedure call). %[Tony] % OK, I see no problem with this (so far). 
% \item The default group context can be used for pt-pt communication by any module operating within the group, but only for exactly paired send/receive with exact match on TID, context, and message tag. Call this the ``paired-exact-match constraint''. This constraint allows independent modules to use the same context without coordinating their tag values. (Assumes: tight sequencing constraints for both blocking and non-blocking comms in pt-pt MPI.) %[Lyndon] % I understand what you want the paired-exact-match for. This % might appear as pragmatics and advice to module writers. I % think you should be firmer about sequencing constraints % for point-to-point in MPI that this requires, to be % sure that the constraint is not too large. %%[Tony] %% Again, I think this should be eliminated, and all references %% to this idea should be expunged. It denies the context's %% ability to manage messages. %%%[Lyndon] %%% Check. %[Tony] % NO. This violates the concept of context entirely. % (ie, an oxymoron ... contexts same, but still no need for % tag disambiguation...) % % Use the default group context to establish (cooperatively) % other contexts, and then use these. This is a seriously % bad feature, in my mind. % \item A modules that does not meet the paired-exact-match constraint must use a different globally unique context value. An MPI routine will exist to generate such a value and broadcast it to the group. The cost of this operation is specified to be comparable to that of an efficient broadcast in the group, i.e. log(G) pt-pt startup costs for a group of G processes. [See Implementation note 1.] %[Lyndon] % Perhaps I am missing something here. Please help. This % is what my mind is thinking. % The synchronisation requirement means that all context % allocations in a group G must be performed in an identical % order by all members of G. Then the sequence number of the % allocation is unique among all allocations within G. % Therefore the duplet % (default context of G, allocation sequence number) % is a globally unique identification of the allocated % context. The sequence number can be replaced by any one-to-one % map of the sequence number, of course. So, according to your % synchronisation constraint, context generation can be ``free''. %%[Tony] %% I agree that context allocation has to be done in sequence. %% That is why I am in favor of providing calls that allow %% groups to get numerous contexts at creation, and then %% cooperatively, but potentially without further communication %% divide them(as they build subgroups, for instance). %% %% I see these as services to be used in building virtual topology %% features, which will then be more widely used by users of MPI. %% %%%[Lyndon] %%% If the context allocations are done in sequence, then I have %%% indicated how they can be done for free. I am getting confused. %[Tony] % I do not think we should support the paired-exact-match thing. % \item Context values are a plentiful but exhaustible resource. If further use is anticipated, a context may be cached; otherwise it should be returned. (Note: the number of available contexts should be several times the number of groups that can be formed.) %[Tony] % Concur. This suggests many more than "256" % %%[Lyndon] %% The number of contexts in the whole program? Sure 256 is too small! %% The number of contexts in each process? Maybe something like 256 is okay? \item When a group disbands, its group descriptor is closed and any cached information is released. 
(The MPI group-disbanding routine does this by calling destructor routines that the application provided when the information was cached.) Copies of the group descriptor must be explicitly released. It is an error to make any reference to a descriptor of a disbanded group, except to release that descriptor. \item All processes that are initially allocated belong to the INITIAL group. (In a static process model, this means all processes.) An MPI routine exists for obtaining a reference to the INITIAL group descriptor (i.e., a local group ID for INITIAL). \item Groups are formed and disbanded by synchronous calls on all participating processes. In the case of overlapping groups, all processes must form and disband groups in the same order. [See Implementation Note 2.] %[Tony] % This is the Zipcode model. It could say loosely synchronous. % \item All group formation calls are treated as a partitioning of some group, INITIAL if none other. The calls include a reference to the descriptor of the group being partitioned, the TID of the new group's ``root process'', the number of processes in the group, and an integer ``group tag'' provided by the application. The group tag must discriminate between overlapping groups that may be formed concurrently, say by multiple threads within a process. [See Implementation Note 2.] %[Lyndon] % The group partition you propose is essentially no different to % the partition by key which has already been discussed, except % that the key can encapsulate both (root process, group tag). % So perhaps partition by key was better in the first place? %%[Tony] %% Do we get anything by having the root process? %% %%%[Lyndon] %%% No. %[Tony] % I don't understand the thread issue here. % %%[Lyndon] %% If two threads are concurrently partitioning the same group, you %% need to disambiguate the partition operations. Analagous to %% concurrent collective operations or nonblocking. \item Collective communication routines are called by all members of a group in the same order. %[Tony] % Yes. \item Blocking collective communication routines are passed only a reference to the group descriptor. To communicate, they can either use the default group context or internally allocate a new context, with or without cacheing. %[Tony] % What does caching really imply here ??? Help. % %%[Lyndon] %% Dunno. \item Non-blocking collective communication routines are passed a reference to the group descriptor, plus a ``data tag'' to discriminate between concurrent operations in the same group. (Question: is sequencing enough to do this?) An implementation is allowed but not required to impose synchronization of all cooperating processes upon entry to a non-blocking collective communication routine. %[Lyndon] % If the requirement that collective operations within a group G are % done in the identical order by all members of G even when such % operations are non-blocking, then the sequence number of the operation % is unique and sufficient for disambiguation. % % The permission to force synchronisation - i.e., blocking - in the % implementation of a non-blocking routine seems to make the routine % less than useful. I can see whay you are asking for this, in order % that you can generate a context for the routine call. In fact Rik % I don't think you need the constraint, as I pointed out cheaper % context generation exists above, unless of course I am missing % something. %%[Tony] %% I think that non-blocking collcomm is moribund in MPI1 or %% else MPI1 is moribund. :-) %% %%%[Lyndon] %%% Check. 
%[Tony] % I think that contexts are really important in this case, % to keep things straight, but that non-blocking collcomm should % be omitted from MPI1 (cf, Geist). Sequencing supports % a sufficient disambiguation, as long as the entire group % is always the participant in operations. That is, you have % to form subgroups, with new contexts, to do global ops on % subsets. \end{itemize} \section{Advantages \& Disadvantages} \subsection*{Advantages} \begin{itemize} \item This proposal is implementable with no servers and can be layered easily on existing systems. %[Lyndon] % I am not of the opinion that the absence of services is such a big % deal. I do think that programs which can conveniently not use % services should not be forced to, but programs which cannot % conveniently not use services should be allowed to. %%[Tony] %% Too many negatives here for me to parse :-) %[Tony] % Why aren't servers needed to create contexts. Where do they % come from? If you rely on the fact that INITIAL will do % a loosely synchonous cooperative operation each time a new % context is needed, then a simple (easily implementable server, % or fetch-and-add remote access) is replaced by a more rigid % computation model. % % If we can get rid of this disagreement, me might be able to % reduce our total proposal space by one whole proposal. % \item Cacheing auxiliary information in group descriptors allows collective operations to run at maximum speed after the first execution in each group. (Assumes that sufficient context values are available to permit cacheing.) %[Lyndon] % If you agree that context allocation is ``free'', then you can % delete the bracketed qualifier. %%[Tony] %% Context allocation need not be free provided it can be made cheap, %% or cheap enough. %% %% If one knows one will need several, then a single call could %% provide such contexts, amortizing overhead. This is likely when %% bulding grids (ie, virtual topologies) in Zipcode, so it is %% true in existing practice. %% %% One should recognize the need for layering virtual top. calls %% on top of these calls, then these calls may appear painful, %% but perhaps they would be less used. Some users will use the %% provided virtual topology calls, others will prefer their own. %% Both will have equal power (see also,separate note on layerability). %% %% If getting N contexts is a send-and-receive, plus a reactive server, %% then this is reasonably light weight,provided that hundreds of %% messages, or global operations ensue thereafter. We can know in %% advance how heavy weight the context server will be. %% %% if an implemention can use some locations of remote memory, with %% fetch and add, or locks, to achieve contexts, then this is even %% cheaper, in principle. %% %% Despite Jim's earlier insistence that context numbers be kept to %% 256 or so, I think that this number should be much larger, so that %% much less efort goes into returning contexts, and so on, except %% occasionally, by processes. Otherwise, a new kind of overhead, %% get-rid-of-context-because-I-am-out ensues, or programs block %% until contexts become available, offering the possibility of %% deadlocks. %% %[Tony] % If contexts are being used very dynamically, how are they being % assigned, kept, released, reissued without a server? Sorry if % I missed something, but I don't see it, without a restrictive % SPMD model of computation (Zipcode obviates its server for the % SPMD model, for instance). %%[Lyndon] %% MPI stinks of SPMD. 
I wouldn't mind if MPI would just say SPMD. \item Speed of cached collective operations does not depend on speed of group operations such as formation or translation between rank and TID. %[Lyndon] % True, but the cache is going to get big as users are going to store % arrays of TIDs in it. %%[Tony] %% Unscalability (of a limited form) should be permitted/selectable %% by user, to use as much per-node memory as the user wants, to reduce %% communication. %% %[Tony] % Can you clarify this with examples? % \item Requires explicit translation between (group,rank) and TID, which makes pt-pt performance predictable no matter how the functionality of groups gets extended. %[Lyndon] % This is only true because you have asserted that implementations % must have the property that: % `` Pt-pt communication is specified to be fast in all cases. % (E.g., MPI must initialize all processes such that any % required translation of the TID is faster than the fastest % pt-pt communication call.)'' % So the advantage is not that which you have quoted, it is that % you have made this assertion. % %%[Tony] %% I see, but what he means here is that there is no unpredictable %% translation cost because we do not write (group,rank) in pt2pt %% calls. So, there is some validity to the statement. %%%[Lyndon] %%% However he ignores that the TID might require translation, the %%% cost of which might be unpredictable. This is because the TID has %%% a global value and cannot therefore hold process-local information %%% such as ``how do I route to that process''. %[Tony] % I like this, of course. \item Communication both within and between groups seems conceptually straightforward. %[Lyndon] % This is a conjecture. I believe that conjecture to be false. % I especially believe this in the case of communication between % groups. The methods which are available for ``hooking up'' % are at least perverse. I guess that the user could make % use of a service process, to make life easier in this hooking up, % so why not provide one. %%[Tony] %% Yes, that is why I have one in Zipcode. I wish Zipcode were %% on netlib today, so you could try it. Well, we are writing the %% manual, and working at it as fast as we can. % % A further point. It seems to me that ``seems'' means that it seems % to you. This is not the point. It is how it seems to a lesser % wizard than yourself which is of importance here. I conjecture % that the reverse statement is true when the person doing the seeming % is changed to a lesser wizard. %%[Tony] %% I lost something here, but I agree with the sense. The word %% seems is subjective, and should disappear from our discussions, %% as much as seems prudent, anyway :-) %[Tony] % Well, is point-to-point group oriented? Not. %%[Lyndon] %% Check. \end{itemize} \subsection*{Disadvantages} \begin{itemize} \item Requires explicit translation between (group,rank) and TID, which may be considered awkward. %[Lyndon] % It is true that (group,rank) must be translated to TID. I can % assure you that this is considered both awkward and redundant. %%[Tony] %% Yes, awkward, because it is nice to escape the TID realm and %% work within the (albeit simple) abstraction of group,rank. %% When layering virtual topologies on this, it would be so nice %% to write them to a group,rank syntax, not enforcing TID mappings %% everywhere. %[Tony] % I think it is awkward. \item Communication between different groups may be considered awkward. %[Lyndon] % You bet! Please see below. %%[Tony] %% Indeed.
%%%[Lyndon] %%% More so than you think, I think! %[Tony] % OK, but one can form a new group, as I have argued before. % Use the "awkward" pt2pt to get the right info shared between % group leaders, make the new group, use unawkward collective % operations on new group (with new context). %%[Lyndon] %% This is only one model of group-group interaction, which in my %% experience and understanding really is still steeped in SPMD. %% Please consider the examples of non SPMD group usage which I %% mailed out. You can say to me - oh, you shouldn't do this kind %% of thing with MPI, Lyndon - if you like, if you believe that. \item No free-for-all group formation. A process must know something about who its collaborators are going to be (minimally, the root of the group). %[Lyndon] % Please see comments above on group creation. %[Tony] % This again is in practice, in Zipcode. % \item Requires explicit coherency of group descriptors, which is not convenient -- application must keep track of which processes have copies and what to do with them. %[Lyndon] % I think all of the proposals will have this problem. %%[Tony] %% Yes, and I think that loosely synchronous operations can maintain %% coherency, in practice. That is, no operations that modify the %% group descriptors (other than cached lookup info) are permitted, %% without loose synchronization. %% This is nasty in that is would prohibit sending descriptors to %% processes not part of the group, so it is a clear trade-off. %% Perhaps such send-to-non-group-member operations could stipulate %% that this group information is somehow ephemeral, and that they %% need to join a new group to keep useful information over time??? %% %%%[Lyndon] %%% I am over schedule. I have to stop here. I will come back to this %%% tomorrow, if applicable (ie, you may have overtaken me). %[Tony] % Sounds dangerous. What must application do to maintain % coherency, since group descriptors are opaque. % \item Static group model. Processes can join and leave collaborations only with the cooperation of the other group members in forming larger or smaller groups. %[Tony] % No, loosely synchronous process model, unless you mean % cooperation of INITIAL at all such join/leave steps. \item No direct support for identifying the group from which a message came. The sender cannot embed its group ID in its message, since group ID's are not globally meaningful. One method to address this problem is to have the sender embed its group context as data in the message, then have the receiver use that value to select among group descriptors that it had already been sent. %[Lyndon] % Yup, the user can ``do it manually with a search''. If you want % to invoke this argument then I can dispose of almost everything % in MPI in a period of a few minutes - in fact Steven Zenith will % do it faster - so I refute the validity of the argument and claim % that the MPI interfce should transmit said information. %%[Tony] %% Yes, that is exactly what Zipcode was written to avoid. The %% user wants help managing things like this!!!! %% %% The search, if any, must be MPI-supported, and as efficient as %% possible (eg, AVL trees, hash, partial hash with exceptions). %% % % Further, the receiver is likely to want to be able to ask which % rank in the sender group the sender was. Oh dear, well I suppose % you think that's okay because the sender can put its rank into % the message. This is just being inconvenient to the user who % wants to send an array of something (double complex?) 
and has % to pack a rank in by copying or sending a pre-message or the % buffer descriptor kind of thing. %%[Tony] %% This is why I remain a strong advocate of (group,rank) %% addresssing in pt2pt. %% %[Tony] % No, you can't know the group or rank in group of sender. % If there were one context per group (isn't that so here?), % then all you need is the rank. With TID_TO_RANK_IN_GROUP % operation, this could be provided, but no wildcarding % or receipt selectivity could be done at this level. % \end{itemize} \section{Comments \& Alternatives} \begin{itemize} \item The performance of group control functions is explicitly unspecified in order to provide performance insurance. The intent is to encourage program designs whose performance won't be killed if the functionality of groups is expanded so as to be unscalable or require expensive services. %[Lyndon] % I don't think that the intent expressed in the second sentence % is satisfied. For example - group control is allowed to become the % dominant feature of application time complexity. %%[Tony] %% I addressed this in my Step-1 remarks. Please see that. BELOW %[Tony] % No, it just does not provide guarantees that certain kinds % of applications will run OK. (ie, those that do group % creation/deletion relatively often). Zipcode has assumed % that such operations would be relatively seldom. Thus, I do % not quibble that this is a reasonable choice,but a fairer % way to say this is that it may be difficult to support such % applications. That reveals an issue to be studied more. % \item Adding global group ID's would seriously interfere with layering this proposal on pt-pt. The problem is not generating a unique ID, but using it. If just holding a global group ID were to allow a process to asynchronously ask questions about the group (like translating between rank and TID), then a group information server of some sort would be required, and MPI pt-pt does not provide enough capability to construct such a beast. %[Lyndon] % It is not the global uniqueness of group identifiers which creates % the problem. There are globally unique labels of groups in your % proposal anyway - the value of the default context identifier. % The problem is that of allowing query of group information when % that information cannot be recorded in the local process/processor % memory. % % You claim that point-to-point does not have enough capability to % construct an information server. Firstly I should ask you whether % this is an artefact of the manner in which you have defined the % point-to-point communication. Secondly I assert that your claim % is false. I shall append a description of server implementation % to the foot of this message. %%[Tony] %% Thank you. These points are both well taken (ie these two paragraphs) %[Tony] % Perhaps they should do. \item The restriction that only paired-exact-match modules can run in the default group context could be relaxed to something like ``a module that violates the paired-exact-match constraint can run in the default group context if and only if it is the only module to run in that context''. This seems too error-prone to be worth the trouble, since even the standard collective ops might be excluded. (It would also conflict with Implementation Note 1.] %[Tony] % Dump this. \item Group formation can be faster when more information is provided than just the group tag and root process. E.g. 
if all members of the group can identify all other members of a P-member group, in rank order, then O(log P) time is sufficient; if only the root is known, O(P) time may be required. This aspect should be considered by the groups subcommittee in evaluating scalability. %[Lyndon] % Yes, partition does appear to be O(P) whereas definition by ordered % list appears to be O(log(P)). %%[Tony] %% Also, see what I wrote in my Step-1 comments. BELOW. %% I believe O(log(P)) is still possible. %% %[Tony] % No, a non-deterministic broadcast can be used, with a token. % This requires a token server. Again, implementable with fetch+ % add on most systems, or a light reactive server. % % Once the non-deterministic broadcast has finished, a fanin/collapse % is done to the original root, which then frees the token. % \end{itemize} \section{Implementation Notes} \begin{enumerate} \item To generate and broadcast a new context value: Generate the context value without communication by concatenating a locally maintained unique value with the process number in some 0..P-1 global ordering scheme. This assumes that context values can be significantly longer than log(P) bits for an application involving P processes. If not, then a server may be required, in which case by specification it has to be fast (comparable to a pt-pt call). There is no need to generate a new context for the broadcast since it can be coded to use the default group context by meeting the paired-exact-match constraint. %[Lyndon] % Please see notes above on the subject of context generation. %%[Tony] %% Please see my Step-1 comments. %[Tony] % Why not just give in and allow the server? % I don't like the paired-exact-match constraint AT ALL. % \item The following is intended only as a sanity check on the claim that this proposal can be layered on MPI. Improvements that increase efficiency or use fewer contexts will be greatly appreciated. In addition to a default group context, each group descriptor contains a ``group partitioning context'' and a ``group disbanding context'' that are obtained and broadcast at the time the group is created. In the case of the INITIAL group, this is done using paired-exact-match code and any context, immediately after initialization of the MPI pt-pt level. (At that time, no application code will have had a chance to issue wildcard receives to mess things up.) %[Tony] % Seems OK, but why need the paired-exact-match thing again? % Group partitioning is accomplished using pt-pt messages in the partitioning context of the current group (i.e., the one being partitioned), with message tag equal to the group tag provided by the application. In the worst (least scalable) case, the root of the new group must wildcard the source TID. (This violates the paired-exact-match constraint, which is why group formation must happen in a special context.) %[Tony] % Again, OK, but I want to see this work without the paired-exact- % match, if possible. Group disbanding is done with pt-pt messages in the group disbanding context. This keeps group control from being messed up no matter how the application uses wildcards. %[Tony] % So, now, you have concurred with my (previously flamed) idea % that group construction/destruction should be realizable using % pt2pt, just like global operations should be. I like this % because 1) it is explicable to the implementor, 2) it allows % simple initial implementations, 3) it sets some ideas for how % much these things will cost [upper bound].
\end{enumerate} \section{Examples} {\bf (To be provided)} % % END "Proposal V" %======================================================================% \end{document} %[Lyndon] % Writing a server in the point-to-point layer of MPI in four easy steps % ---------------------------------------------------------------------- % % 1) Partition the INITIAL group into two groups. A singleton group, % SERVER, and a group CLIENT which contains all of the other processes. % % 2) The single process in SERVER group records its TID. % % 3) The processes in INITIAL group allocate a context SERVICE which % they remember either in the group cache or static data or something. % % 4) Use a broadcast in INITIAL group with ``sender'' as the one process % which is also in SERVER group, and the ``receivers'' as the (many) % processes which are also in CLIENT group, in the SERVICE context, in % order to disseminate the TID of the server process. % % [Fanfare] a server process is in place as is a dedicated context for % the purposes of messages required to implement the service. % % [Observation] the mpi point-to-point initialisation can do this % automatically. %%[Tony] %% Zipcode's postmaster general works in this way, more or less. ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Mon Mar 22 01:52:07 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA07571; Mon, 22 Mar 93 01:52:07 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15011; Mon, 22 Mar 93 01:51:24 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 01:51:23 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15000; Mon, 22 Mar 93 01:51:21 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Sun, 21 Mar 93 22:47 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA06109; Sun, 21 Mar 93 22:45:30 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA15778; Sun, 21 Mar 93 22:45:28 PST Date: Sun, 21 Mar 93 22:45:28 PST From: rj_littlefield@pnlg.pnl.gov Subject: Re: mini-proposal on layerability To: jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Cc: rj_littlefield@pnlg.pnl.gov Message-Id: <9303220645.AA15778@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Tony writes: > So, each receipt function uses the algorithm (received_rag & ~dont_care)& > care Shouldn't this be (received_tag & ~dont_care) == care i.e. ignore some bits but check the others for equality? If so then yes, I will strongly support this feature. 
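For concreteness, a minimal C sketch of the masked-tag test being discussed; the function and variable names here are illustrative only and not part of any MPI draft:

    #include <stdio.h>

    /* A received message is accepted if every bit NOT marked "don't care"
       matches the corresponding bit of the requested tag value. */
    static int tag_matches(unsigned received_tag,
                           unsigned dont_care,  /* 1 bits are ignored  */
                           unsigned care)       /* required bit values */
    {
        return (received_tag & ~dont_care) == care;
    }

    int main(void)
    {
        /* ignore the low 4 bits, require 0x50 in the remaining bits */
        printf("%d\n", tag_matches(0x53, 0x0F, 0x50));  /* 1: accepted */
        printf("%d\n", tag_matches(0x63, 0x0F, 0x50));  /* 0: rejected */
        return 0;
    }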
--Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Mon Mar 22 03:30:44 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA11999; Mon, 22 Mar 93 03:30:44 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20829; Mon, 22 Mar 93 03:29:57 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 03:29:56 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20810; Mon, 22 Mar 93 03:29:52 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Mon, 22 Mar 93 00:13 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA06118; Mon, 22 Mar 93 00:11:53 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA15810; Mon, 22 Mar 93 00:11:50 PST Date: Mon, 22 Mar 93 00:11:50 PST From: rj_littlefield@pnlg.pnl.gov Subject: new proposal To: jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Cc: d39135@sodium.pnl.gov Message-Id: <9303220811.AA15810@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Tony and Lyndon, I will respond in a separate message to some of your detailed comments on Proposal V. But maybe we can move faster by popping up a level. It seems to me that you like the idea of cacheing arbitrary info in group descriptors, you like the idea of groups as things within which contexts get formed, you like performance guarantees, and you don't like having to use opaque id's in point-to-point communications. I'm not quite sure whether static groups and synchronous group control are OK, but I'll presume here that they are. Well, this is pretty neat. If this keeps up maybe we can converge on just one or two proposals. Most of my gripes with other proposals have been based on performance and/or a need for asynchronous servers (with attendant performance and non-portability gripes). But I notice that explicit performance expectations have been gradually appearing and the need for servers disappearing from other people's proposals. Perhaps it is time for a synthesis. Here's a sketch of a new proposal (VI ?). . The functionality of point-to-point communications is per Snir, Lusk, and Gropp, augmented by my proposed MPI_FORM_CONTEXT to allow assembling arbitrary collections of processes. (Marc has already accepted this in a private email to me -- don't know why he didn't post it.) . Performance expectations of point-to-point communications are explicitly stated as follows: - MPI_COPY_CONTEXT does not synchronize the participating processes and costs significantly less than a point-to-point fanout among them (e.g., it uses a communication-free counting strategy); - all other context formation routines cost no more than if they were implemented using a single fanin/fanout among the participating processes; - translation of (context,rank) to absolute processor ID costs no more than if it were implemented via the lookup table that Snir suggests. . Groups and contexts are not equal. A group consists of a base context (from which other contexts can be created quickly by MPI_COPY_CONTEXT), plus topology information, plus my cacheing facility. Conceptually, I like this better than proposal V. Do we already have a proposal like this? Should we have one? 
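A minimal C sketch of the communication-free counting strategy mentioned above for MPI_COPY_CONTEXT; the struct and function names are illustrative, and it is assumed that contexts are plain integers, that all members of a group copy contexts in the same order, and that the context field is wide enough to hold the folded pair:

    /* Per-group state kept identically by every member of the group. */
    typedef struct {
        int base_context;   /* globally unique value fixed at group creation  */
        int next_sequence;  /* advances identically on every member, because  */
                            /* all members copy contexts in the same order    */
    } group_ctx_state;

    /* Called by every member in the same order; each obtains the same new
       context value without sending any messages. */
    static int copy_context(group_ctx_state *g, int contexts_per_group)
    {
        int seq = g->next_sequence++;
        /* The pair (base_context, seq) is unique; fold it into one integer. */
        return g->base_context * contexts_per_group + seq;
    }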
In general, do you guys buy off on the concept of including performance expectations in the specification? A couple of discussion points... 1. The separation of group and context is defensive. I think I understand what it means to copy a context. I am not sure of either the functional or performance implications of copying a group. E.g., does cached info get copied? 2. I will respond here to two criticisms that have been raised against the cacheing facility. Lyndon notes > % I like the general idea, but I'm nervous about two things: > % (a) implied associativity of group descriptor cache - this > % will potentially be time expensive in implementation. > % (b) there is no method proposed for abritration of keys > % between independently written modules, so we are > % in the same problem regime as just having message tag > % and no message context. > % However, key's are local, so presumably you would find > % it acceptable to add a key registration service? I implicitly proposed a key assignment service in my long-ago example. It said in part: static int gop_key_assigned = 0; /* 0 only on first entry */ static MPI_key_type gop_key; /* key for this module's stuff */ ... if (!gop_key_assigned) /* get a key on first call ever */ { gop_key_assigned = 1; if ( ! (gop_key = MPI_GetAttributeKey()) ) { MPI_abort ("Insufficient keys available"); } } This is not really "registration" because nothing goes into the key assignment routine except the number of times it's called. But assuming that each call site is protected by a separate variable, the effect is to register the call site. As Lyndon notes, this is highly local. But it also allows the returned keys to have their values restricted so as to permit rapid testing and/or retrieval. Tony notes > %**** Stripping is extremely controversial aspect, and arbitrary. > %**** If the recipient has the methods with the same name, then > %**** a new rendezvous could be accomplished at the far end Yes, stripping is arbitrary. My motivation is that this greatly simplifies the design and satisfies what I view as the most critical need: to make collective comms run fast without complicating the calling sequence. I have no objection in principle to extending the facility to include classes of information that do not get stripped. But I sure didn't want to try creating and selling a spec that would handle, e.g., heterogeneous systems. Enough for here -- what do you think of "proposal VI" ? 
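One possible reading of that last point, sketched in self-contained C: if keys are handed out as small integers in call order, a group descriptor can hold cached values in a plain array, so retrieval is a bounds check plus an index rather than a search. All names here are illustrative stand-ins rather than proposed MPI names, and the descriptor is assumed to be zero-filled when the group is created:

    #include <stddef.h>

    #define MAX_KEYS 64

    typedef struct {
        void *attr[MAX_KEYS];    /* cached values, indexed directly by key */
    } group_descriptor;

    static int next_key = 1;     /* 0 is reserved to mean "no key assigned" */

    static int get_attribute_key(void)       /* cf. MPI_GetAttributeKey above */
    {
        return (next_key < MAX_KEYS) ? next_key++ : 0;
    }

    static void put_attribute(group_descriptor *g, int key, void *value)
    {
        if (key > 0 && key < MAX_KEYS)
            g->attr[key] = value;
    }

    static void *get_attribute(group_descriptor *g, int key)  /* fast lookup */
    {
        return (key > 0 && key < MAX_KEYS) ? g->attr[key] : NULL;
    }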
--Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Mon Mar 22 05:42:49 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA25028; Mon, 22 Mar 93 05:42:49 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00467; Mon, 22 Mar 93 05:42:01 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 05:41:55 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00422; Mon, 22 Mar 93 05:41:52 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Mon, 22 Mar 93 01:48 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA06136; Mon, 22 Mar 93 01:47:08 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA15840; Mon, 22 Mar 93 01:47:05 PST Date: Mon, 22 Mar 93 01:47:05 PST From: rj_littlefield@pnlg.pnl.gov Subject: Rik's comments on Lyndon's Proposal I To: jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Cc: d39135@sodium.pnl.gov Message-Id: <9303220947.AA15840@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu >>>>> I have added here a few additions to Tony's comments. >>>>> I have also deleted the bulk of the proposal regarding which >>>>> I have no comments at 1:45 AM. >>>>> -- Rik %**** %**** Below are my comments on Lyndon's Proposal I. In the next paragraph %**** I note that we are actually converging to less than N proposals, though %**** I have not seen Lyndon's new II proposal yet. Does it exist now? %**** %**** To achieve a reasonable presentation at Dallas, we can have multiple %**** proposals still on table (I think this a fair, well-thought-out %**** approach, but if we can condense some, let's do it). %**** %**** %**** After reading V, making my comments, and making my comments %**** in addition to Lyndon's comments, I am convinced we can %**** advance proposal V into something that is acceptable in the %**** III/IV mold, without a further III/IV proposal. Rik's %**** dropping of the static context concept has simplified our %**** group's efforts considerably, and I cannot disambiguate my %**** III/IV proposal from Proposal V, given the Lyndon and Tony %**** provisos and suggested improvements. This is not a wimp out %**** on my part. I do not see benefit of advancing something that %**** will look 90% like Proposal V at this point, and 97% like it %**** if the Lyndon/Tony comments obtain in it (which they would if %**** I wrote it now). %**** I would prefer that we hone Proposal V. If Rik wants to keep %**** his style, then I propose that Proposal III/IV become exactly %**** what I just said, a reworking of V + comments. %**** However, I think this will cause an unnecessary delay and %**** digression just to achieve details. Instead, we might pull %**** such choices into Proposal V at this time (server vs. no server). %**** to make what is common in our "approaches" more obvious. %**** - Tony %**** %**** (PS, Lyndon: Rik/Tony/Lyndon are authors of whole paper, because %**** we are all working on this, and because one of us (ie, me) will %**** make the final document cohesive, create a unified style, format, %**** set of meanings. This is the nature of collaboration. 
I do %**** not propose to include our other co-conspirators on such a document's %**** list of authors, as there has been minimal input from them. I %**** think that Rusty/Bill and Rik/Tony/Lyndon are operating equivalently.) >>>>> >>>>> I happen to think that the style of my draft proposal V stinks. >>>>> I had much too much trouble becoming (fairly) sure that the content >>>>> was right, to also make it readable. >>>>> --Rik .... A group may be created by identical duplication of an existing group. For example, {\tt gidb = mpi\_grp\_duplication(gida)} where {\tt gidb} is the group identifier of the newly created group and {\tt gida} is the identifier of an existing group. The created group inherits all properties of the source group, including any topological properties. This operation has the same synchronisation properties as creation of group by definition. %**** It is not obvious to me that we want to enforce topology at this %**** juncture. However, we could register topology information in %**** the extensible structure strategy of Proposal V. %**** >>>>> Why such strong synchronization?! >>>>> If this has the same synchronization properties as creation, >>>>> then it won't return until all members have made the call. >>>>> But that means you actually have to sync everybody, which >>>>> implies 2 log(P) messages. Isn't it enough to just require >>>>> a loosely synchronous call, and use a counting strategy? ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Mon Mar 22 05:42:50 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA25031; Mon, 22 Mar 93 05:42:50 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00499; Mon, 22 Mar 93 05:42:07 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 05:42:06 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00443; Mon, 22 Mar 93 05:41:56 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Mon, 22 Mar 93 01:30 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA06131; Mon, 22 Mar 93 01:29:00 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA15820; Mon, 22 Mar 93 01:28:55 PST Date: Mon, 22 Mar 93 01:28:55 PST From: rj_littlefield@pnlg.pnl.gov Subject: Rik on Tony on Lyndon on PropV To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Cc: d39135@sodium.pnl.gov Message-Id: <9303220928.AA15820@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu This is the Proposal V first draft, critiqued by Lyndon, then by Tony, and now with responses by Rik. Comments are flagged as % Lyndon's %**** Tony's >>>>> Rik's responses [Lyndon's leadin] General points -------------- 1) I had to ``mine'' the text :-) Perhaps one of us (i.e., I am offering if you wish) should attempt to construct a more transparent presentation before circulation to whole committee, for the convenience of committee members. %**** %**** I felt that things appear twice, because of summary (good) %**** and because of implementation notes at end (confusing) %**** >>>>> I agree with both of the above.
But let's decide whether >>>>> Proposal V will be replaced by VI or its equivalent before >>>>> we do any rewriting. 2) I'm not a fan of much of this proposal, although I do indeed like some of the ideas which it introduces. [On the other hand, I'm not a great fan of all of the proposal which I wrote. I shall mail self criticism of my proposal, and may have to write amended or alternative proposal :-)] %**** %**** Please be more specific. I am having a hard time understanding %**** why you really don't like it, Lyndon. If the process model %**** were a little less static, and servers were permitted (though %**** hopefully bounded in cost), I think we would have an excellent %**** proposal. %**** >>>>> See Proposal VI. I'd be happy to see the good ideas adopted >>>>> and the crap die quietly. 3) I really like the way in which groups are something like ``frames'' in which contexts are created. This is conceptually much neater than duplication of groups. %**** %**** In practice, group subsetting will require groups to be copied, %**** otherwise, subgroups will unfairly be penalized by the size %**** of their ancestor. %**** >>>>> Right, but I think of that as creating a new group. After all, >>>>> even the ranking structure is different. 4) I like the idea of pushing information into the group structure. I have a few qualms with the proposed details --- see specific points. %**** %**** I have more confidence about this idea, and could demonstrate %**** by June/July time-frame in Zipcode. %**** 5) See ``Writing a server in the point-to-point layer of MPI in four easy steps'' at the foot of the message. %**** %**** This seems like a nice thing. %**** >>>>> The implementation that Lyndon suggests consumes an entire >>>>> process for the server. There are times when this is OK, >>>>> but also times when it isn't. E.g., if you have a divide-and- >>>>> conquer algorithm that really wants 2^i non-server processes, >>>>> and you're working on a machine with 2^n processors that >>>>> doesn't support multiple processes per processor, then >>>>> some of the users will get upset that they can only use >>>>> half the machine. A year and a half ago, I got flamed for >>>>> making a suggestion like this in connection with the Delta. >>>>> (And now I suppose I get flamed for using the Delta as an >>>>> example again... Maybe if I use the CM5...) Specific points --------------- Dealt with as LaTeX comments to body of text, appearing in the form %[Lyndon] % text of point for your navigational convenience. These are quite detailed. ---------------------------------------------------------------------- \documentstyle{report} \begin{document} \title{``Proposal V'' for MPI Communication Context Subcommittee} \author{Rik~Littlefield} \date{March 1993} \maketitle %======================================================================% % BEGIN "Proposal V" % Rik Littlefield % March 1993 % \chapter{Proposal V} \section{Summary} \begin{itemize} \item Group ID's have local scope, not global. Groups are represented by descriptors of indefinite size and opaque structure, managed by MPI. Group descriptors can be passed between processes by MPI routines that translate them as necessary. (This satisfies the same needs as global group ID's, but its performance implications are easier to understand and control.) %[Lyndon] % I support the approach whereby group descriptors are local % objects. They could be pointers to structures, or indices % into tables thereof. We let the implementation consider that. 
% % One difficulty arises as group descriptors can only be passed % from process P to process Q if both P and Q members of some % group G since the communication presumably must use a context % known to both P and Q. Imagine that P is member of F and Q is not % member of F; that Q is member of H and P is not member of H; that % both P and Q are member of G. Let M be abritrary message data. % % Initially - % P can send F to Q, and Q can receive F from P, in a context of G. % Q can send H to P, and P can receive H from Q, in a context of G. % Thereafter - % P can allocate a context C in F. % P can send C to Q, and Q can receive C in the default context of H. % Q can allocate a context D in H. % Q can send D to P, and P can receive D, in the default context of F. % Thereafter - % P as member of F, and Q as member of H, can communicate using % wildcard pid and tag by use of contexts C and D. % % Okay, this is possible, but it is messy :-) %**** %**** Alternatives, Lyndon? %**** \item Arbitrary process-local info can be cached in group descriptors. (This allows collective operations to run quickly after the first execution in each group, without complicating the calling sequences.) %[Lyndon] % I rather like this idea. %**** Me too. >>>>> Progress!! \item There are restrictions that permit groups to be layered on top of pt-pt. \item Pt-pt communications use only TID, context, and tag, and are specified to be fast. \item Group control operations (forming, disbanding, and translating between rank and TID) are NOT specified to be fast. \end{itemize} \section{Detailed Proposal} \begin{itemize} \item Pt-pt uses only ``TID'', ``context'', and ``message tag''. TID's, contexts, and message tags have global scope; they can be copied between processes as integers. TID's and contexts are opaque; their values are managed by MPI. Message tags are controlled entirely by the application. %[Lyndon] % You probably ought to say that context and TID are integer with % opaque values. %**** %**** 1) It is not obvious that TIDs should be restricted to 32 bits. %**** 2) It is not obvious that contexts will be 32 bits (eg, 16 bits). %**** I favor a whole word for a context, despite other limits, %***** just to make things simpler. %**** %**** Internet addresses are going to get augmented from 32 to ??? bits %**** is it reasonable to assume that certain MPI implementations might %**** incorporate such internet addresses as TIDs (in future), %**** %**** Opacity is partially violated if we say how big the data type is??? \item Pt-pt communication is specified to be fast in all cases. (E.g., MPI must initialize all processes such that any required translation of the TID is faster than the fastest pt-pt communication call.) %[Lyndon] % How do you imagine this to be acheived, considering that TIDs % are global entities? % I guess that you are thinking a TID is a (processor_number, % process_number) pair of bit fields, a bit like one sees in NX and RK, % and that network interface hardware will route based on the % processor_number. % % In another approach a TID is a process local entity just like the % group descriptor. This satisfies efficiency when the above scheme % is not applicable, for example in a workstation network. %**** %**** where does this get us??? %**** Remember, we have to choose on some things, so we can have something %**** to present in Dallas. Is there an important difference here? %**** %**** TIDs are global entities. 
Is structure assumed to be global; %**** in a truly opaque system, some TID component would have to be %**** fixed, but the rest could vary structurally... %**** \item A group is represented by a ``group descriptor'', of indefinite size and opaque structure, managed by MPI. The group descriptor can be referenced via a ``process-local group ID'', which is typically just a pointer to the group descriptor. (Group ID's with global scope do not exist.) %[Lyndon] % I think I see, it is the context identifier which has global scope. % Now, this really is just getting on the way toward the proposal % that I really wish I had written for the subcommittee. I will flame % myself! %**** %**** Yes, contexts are global; group identifiers are just pointers %**** typically, to data structures, describing %**** %**** 1) a group's context %**** 2) group members and their ranks (mappings, inverses, %**** cached, hashed, unscalably stored, etc) %**** 3) TID-to-rank map and inverse (see possibilities in 2) %**** 4) A set of fixed global operations, accepted as standard, %**** and accessible in O(1) time. Possibly, each %**** such operation should be a method, so that %**** a parameter block can be passed with it. Zipcode %**** supports the Method type to do this. %**** This is effectively a cache for some parts of item #5 %**** ... %**** 5) An AVL or similar tree of extensible operations. %**** New operations are registerable by the user. These %**** tags are unique within a group, and specify an operation %**** i) pre-defined by MPI (in which case it can be cached %**** in 4) %**** ii) alternative operations (even if they do something %**** standard, that are wanted to be accessed by %**** name). This name is group unique. %**** %**** Mechanisms for DO_METHOD_FROM_GROUP(name,....) %**** or GET_METHOD_FROM_GROUP(name,...) %**** and SET_METHOD_IN_GROUP(name,...) are clearly needed. %**** >>>>> The model of contexts used in this proposal is that the >>>>> context value has global scope. It doesn't have to be that way. >>>>> The Snir, Lusk, and Gropp proposal could be implemented with each >>>>> processor contributing a different context value to the context >>>>> descriptor. E.g., process 3 says "if you want to talk to me >>>>> in this context, you have to use context value 7", while process >>>>> 4 says to use context value 2. I don't know whether there are >>>>> advantages to that extra flexibility. Group descriptors can be copied between arbitrary processes via MPI-provided routines that translate them to and from machine- and process-independent form. A process receiving a group descriptor does not become a member of the group, but does become able to obtain information about the group (e.g., to translate between rank and TID). %[Lyndon] % Also crucially, to obtain and use the default context identifier % of the received group descriptor. %**** Yes, that is included, I believe, in concept. %**** >>>>> Right. \item Arbitrary information can be cached in a group descriptor. That is, MPI routines are provided that allow any routine to augment a group descriptor with arbitrary information, identified by a key value, that subsequently can be retrieved by any routine having access to the descriptor. Cached information is not visible to other processes and is stripped when a group descriptor is sent to another process. Retrieval of cached information is specified to be fast (significantly cheaper than a pt-pt communication call).
%[Lyndon] % I like the general idea, but I'm nervous about two things: % (a) implied associativity of group descriptor cache - this % will potentially be time expensive in implementation. % (b) there is no method proposed for abritration of keys % between independently written modules, so we are % in the same problem regime as just having message tag % and no message context. % However, key's are local, so presumably you would find % it acceptable to add a key registration service? %**** %**** Stripping is extremely controversial aspect, and arbitrary. %**** If the recipient has the methods with the same name, then %**** a new rendezvous could be accomplished at the far end >>>>> >>>>> See my "Proposal VI" note for responses to this. \item Group descriptors contain enough information to asynchronously translate between (group,rank) and TID for any member of the group, by any process holding a descriptor for the group. MPI routines are provided to do these translations. Translation is not specified to be fast. %[Lyndon] % How do you imagine that this will be done? % (a) Perhaps an array of TIDs which is just indexed on rank? Then % where is the case for not using directly rank. % (b) Perhaps a hashing function? Then the case for not using rank % directly is marginal. % (c) Perhaps generating a request to a service process? In which % case you admit here that a service process exists, which must % be propogated throughout the proposal and changes one of your % fundamental objectives. % (d) Something else? Do tell! %**** %**** Yes, these are all options. Fastness seems to be an important %**** issue. If translation is very expensive, none of the "good" %**** features will be used. >>>>> I was actually trying to be nice to the service process >>>>> people, giving them a designated hook where they could be >>>>> slow and I could work around them via the cacheing mechanism. >>>>> I'd really rather just specify it as being fast. \item At creation, each group is assigned a globally unique ``default group context'' which is kept in the group descriptor and can be quickly extracted. This extraction is specified to be fast (comparable to a procedure call). \item The default group context can be used for pt-pt communication by any module operating within the group, but only for exactly paired send/receive with exact match on TID, context, and message tag. Call this the ``paired-exact-match constraint''. This constraint allows independent modules to use the same context without coordinating their tag values. (Assumes: tight sequencing constraints for both blocking and non-blocking comms in pt-pt MPI.) %[Lyndon] % I understand what you want the paired-exact-match for. This % might appear as pragmatics and advice to module writers. I % think you should be firmer about sequencing constraints % for point-to-point in MPI that this requires, to be % sure that the constraint is not too large. %**** Again, I think this should be eliminated, and all references %**** to this idea should be expunged. It denies the context's %**** ability to manage messages. >>>>> >>>>> When I wrote this, I was presuming that contexts could >>>>> not be generated "for free" and that therefore it was >>>>> a good idea to specify a method by which codes could >>>>> be written so as to not require them. By the way, >>>>> you may be interested to note that the sample collective >>>>> comms defined by Snir and Geist in their recent proposal >>>>> implicitly rely on exactly the paired-exact-match >>>>> constraint. 
As written, those routines would break >>>>> if preceded by a call to a module that issued a wildcard >>>>> receive in the same group. (And there's an endnote >>>>> that suggests they know that.) \item A modules that does not meet the paired-exact-match constraint must use a different globally unique context value. An MPI routine will exist to generate such a value and broadcast it to the group. The cost of this operation is specified to be comparable to that of an efficient broadcast in the group, i.e. log(G) pt-pt startup costs for a group of G processes. [See Implementation note 1.] %[Lyndon] % Perhaps I am missing something here. Please help. This % is what my mind is thinking. % The synchronisation requirement means that all context % allocations in a group G must be performed in an identical % order by all members of G. Then the sequence number of the % allocation is unique among all allocations within G. % Therefore the duplet % (default context of G, allocation sequence number) % is a globally unique identification of the allocated % context. The sequence number can be replaced by any one-to-one % map of the sequence number, of course. So, according to your % synchronisation constraint, context generation can be ``free''. >>>>> I think this is right. This counting strategy probably >>>>> places some constraints on how big the context value field >>>>> has to be, but I've incorporated it into Proposal VI. %**** I agree that context allocation has to be done in sequence. %**** That is why I am in favor of providing calls that allow %**** groups to get numerous contexts at creation, and then %****cooperatively, but potentially without further communication %**** divide them(as they build subgroups, for instance). %**** %**** I see these as services to be used in building virtual topology %**** features, which will then be more widely used by users of MPI. %**** \item Context values are a plentiful but exhaustible resource. If further use is anticipated, a context may be cached; otherwise it should be returned. (Note: the number of available contexts should be several times the number of groups that can be formed.) \item When a group disbands, its group descriptor is closed and any cached information is released. (The MPI group-disbanding routine does this by calling destructor routines that the application provided when the information was cached.) Copies of the group descriptor must be explicitly released. It is an error to make any reference to a descriptor of a disbanded group, except to release that descriptor. \item All processes that are initially allocated belong to the INITIAL group. (In a static process model, this means all processes.) An MPI routine exists for obtaining a reference to the INITIAL group descriptor (i.e., a local group ID for INITIAL). \item Groups are formed and disbanded by synchronous calls on all participating processes. In the case of overlapping groups, all processes must form and disband groups in the same order. [See Implementation Note 2.] \item All group formation calls are treated as a partitioning of some group, INITIAL if none other. The calls include a reference to the descriptor of the group being partitioned, the TID of the new group's ``root process'', the number of processes in the group, and an integer ``group tag'' provided by the application. The group tag must discriminate between overlapping groups that may be formed concurrently, say by multiple threads within a process. [See Implementation Note 2.] 
%[Lyndon] % The group partition you propose is essentially no different to % the partition by key which has already been discussed, except % that the key can encapsulate both (root process, group tag). % So perhaps partition by key was better in the first place? %**** %**** Do we get anything by having the root process? %**** >>>>> The group partitioning concept has been refined in several >>>>> other postings to mpi-context, mpi-collcom, and mpi-pt2pt, >>>>> in which "root" was replaced by a "knownmember" set. >>>>> The idea is that if you have lots of processes joining a >>>>> group, they don't have to know who all their compatriots >>>>> are, but they should know at least one that they all have >>>>> in common, who can serve to organize the communications. \item Collective communication routines are called by all members of a group in the same order. \item Blocking collective communication routines are passed only a reference to the group descriptor. To communicate, they can either use the default group context or internally allocate a new context, with or without cacheing. \item Non-blocking collective communication routines are passed a reference to the group descriptor, plus a ``data tag'' to discriminate between concurrent operations in the same group. (Question: is sequencing enough to do this?) An implementation is allowed but not required to impose synchronization of all cooperating processes upon entry to a non-blocking collective communication routine. %[Lyndon] % If the requirement that collective operations within a group G are % done in the identical order by all members of G even when such % operations are non-blocking, then the sequence number of the operation % is unique and sufficient for disambiguation. % % The permission to force synchronisation - i.e., blocking - in the % implementation of a non-blocking routine seems to make the routine % less than useful. I can see whay you are asking for this, in order % that you can generate a context for the routine call. In fact Rik % I don't think you need the constraint, as I pointed out cheaper % context generation exists above, unless of course I am missing % something. %**** %**** I think that non-blocking collcomm is moribund in MPI1 or %**** else MPI1 is moribund. :-) %**** >>>>> Nicely put. \end{itemize} \section{Advantages \& Disadvantages} \subsection*{Advantages} \begin{itemize} \item This proposal is implementable with no servers and can be layered easily on existing systems. %[Lyndon] % I am not of the opinion that the absence of services is such a big % deal. I do think that programs which can conveniently not use % services should not be forced to, but programs which cannot % conveniently not use services should be allowed to. %**** Too many negatives here for me to parse :-) \item Cacheing auxiliary information in group descriptors allows collective operations to run at maximum speed after the first execution in each group. (Assumes that sufficient context values are available to permit cacheing.) %[Lyndon] % If you agree that context allocation is ``free'', then you can % delete the bracketed qualifier. %**** %**** Context allocation need not be free provided it can be made cheap, %**** or cheap enough. %**** %**** If one knows one will need several, then a single call could %**** provide such contexts, amortizing overhead. This is likely when %**** bulding grids (ie, virtual topologies) in Zipcode, so it is %**** true in existing practice. %**** %**** One should recognize the need for layering virtual top. 
calls %**** on top of these calls; then these calls may appear painful, %**** but perhaps they would be less used. Some users will use the %**** provided virtual topology calls, others will prefer their own. %**** Both will have equal power (see also, separate note on layerability). %**** %**** If getting N contexts is a send-and-receive, plus a reactive server, %**** then this is reasonably light weight, provided that hundreds of %**** messages, or global operations, ensue thereafter. We can know in %**** advance how heavy weight the context server will be. %**** %**** If an implementation can use some locations of remote memory, with %**** fetch and add, or locks, to achieve contexts, then this is even %**** cheaper, in principle. %**** %**** Despite Jim's earlier insistence that context numbers be kept to %**** 256 or so, I think that this number should be much larger, so that %**** much less effort goes into returning contexts, and so on, except %**** occasionally, by processes. Otherwise, a new kind of overhead, %**** get-rid-of-context-because-I-am-out, ensues, or programs block %**** until contexts become available, offering the possibility of %**** deadlocks. %**** \item Speed of cached collective operations does not depend on speed of group operations such as formation or translation between rank and TID. %[Lyndon] % True, but the cache is going to get big as users are going to store % arrays of TIDs in it. %**** %**** Unscalability (of a limited form) should be permitted/selectable %**** by user, to use as much per-node memory as the user wants, to reduce %**** communication. %**** >>>>> Besides which, the system will do this anyway if (group,rank) >>>>> translations are required to be fast. All I'm doing is moving >>>>> it to the user level. \item Requires explicit translation between (group,rank) and TID, which makes pt-pt performance predictable no matter how the functionality of groups gets extended. %[Lyndon] % This is only true because you have asserted that implementations % must have the property that: % `` Pt-pt communication is specified to be fast in all cases. % (E.g., MPI must initialize all processes such that any % required translation of the TID is faster than the fastest % pt-pt communication call.)'' % So the advantage is not that which you have quoted, it is that % you have made this assertion. %**** I see, but what he means here is that there is no unpredictable %**** translation cost because we do not write (group,rank) in pt2pt %**** calls. So, there is some validity to the statement. >>>>> Tony has it right. \item Communication both within and between groups seems conceptually straightforward. %[Lyndon] % This is a conjecture. I believe that conjecture to be false. % I especially believe this in the case of communication between % groups. The methods which are available for ``hooking up'' % are at least perverse. I guess that the user could make % use of a service process, to make life easier in this hooking up, % so why not provide one. %**** Yes, that is why I have one in Zipcode. I wish Zipcode were %**** on netlib today, so you could try it. Well, we are writing the %**** manual, and working at it as fast as we can. % % A further point. It seems to me that ``seems'' means that it seems % to you. This is not the point. It is how it seems to a lesser % wizard than yourself which is of importance here. I conjecture % that the reverse statement is true when the person doing the seeming % is changed to a lesser wizard.
%**** %**** I lost something here, but I agree with the sense. The word %**** seems is subjective, and should disappear from our discussions, %**** as much as seems prudent, anyway :-) >>>>> Silly me -- trying to call out an opinion that might be >>>>> disagreed with. It occurred to me when I wrote the "s" word >>>>> that I'd probably be better off to just follow standard practice >>>>> and state my opinions as fact. \end{itemize} \subsection*{Disadvantages} \begin{itemize} \item Requires explicit translation between (group,rank) and TID, which may be considered awkward. %[Lyndon] % It is true that (group,rank) must be translated to TID. I can % assure you that this is considered both awkward and redundant. %**** %**** Yes, awkward, because it is nice to escape the TID realm and %**** work within the (albeit simple) abstraction of group,rank. %**** When layering virtual topologies on this, it would be so nice %**** to write them to a group,rank syntax, not enforcing TID mappings %**** everywhere. >>>>> >>>>> I happen to agree with this. But even though it's late, I can't >>>>> resist pointing out that there's not a shred of scientific >>>>> evidence that these opinions are in fact true. \item Communication between different groups may be considered awkward. %[Lyndon] % You bet! Please see below. %**** Indeed. \item No free-for-all group formation. A process must know something about who its collaborators are going to be (minimally, the root of the group). %[Lyndon] % Please see comments above on group creation. \item Requires explicit coherency of group descriptors, which is not convenient -- application must keep track of which processes have copies and what to do with them. %[Lyndon] % I think all of the proposals will have this problem. %**** Yes, and I think that loosely synchronous operations can maintain %**** coherency, in practice. That is, no operations that modify the %**** group descriptors (other than cached lookup info) are permitted, %**** without loose synchronization. %**** This is nasty in that it would prohibit sending descriptors to %**** processes not part of the group, so it is a clear trade-off. %**** Perhaps such send-to-non-group-member operations could stipulate %**** that this group information is somehow ephemeral, and that they %**** need to join a new group to keep useful information over time??? %**** \item Static group model. Processes can join and leave collaborations only with the cooperation of the other group members in forming larger or smaller groups. \item No direct support for identifying the group from which a message came. The sender cannot embed its group ID in its message, since group ID's are not globally meaningful. One method to address this problem is to have the sender embed its group context as data in the message, then have the receiver use that value to select among group descriptors that it had already been sent. %[Lyndon] % Yup, the user can ``do it manually with a search''. If you want % to invoke this argument then I can dispose of almost everything % in MPI in a period of a few minutes - in fact Steven Zenith will % do it faster - so I refute the validity of the argument and claim % that the MPI interface should transmit said information. %**** %**** Yes, that is exactly what Zipcode was written to avoid. The %**** user wants help managing things like this!!!! %**** %**** The search, if any, must be MPI-supported, and as efficient as %**** possible (eg, AVL trees, hash, partial hash with exceptions).
%**** % % Further, the receiver is likely to want to be able to ask which % rank in the sender group the sender was. Oh dear, well I suppose % you think that's okay because the sender can put its rank into % the message. This is just being inconvenient to the user who % wants to send an array of something (double complex?) and has % to pack a rank in by copying or sending a pre-message or the % buffer descriptor kind of thing. %**** %**** This is why I remain a strong advocate of (group,rank) %**** addresssing in pt2pt. %**** \end{itemize} \section{Comments \& Alternatives} \begin{itemize} \item The performance of group control functions is explicitly unspecified in order to provide performance insurance. The intent is to encourage program designs whose performance won't be killed if the functionality of groups is expanded so as to be unscalable or require expensive services. %[Lyndon] % I don't think that the intent expressed in the second sentence % is satisfied. For example - group control is allowed to become the % dominant feature of application time complexity. %**** %**** I addressed this in my Step-1 remarks. Please see that. %**** \item Adding global group ID's would seriously interfere with layering this proposal on pt-pt. The problem is not generating a unique ID, but using it. If just holding a global group ID were to allow a process to asynchronously ask questions about the group (like translating between rank and TID), then a group information server of some sort would be required, and MPI pt-pt does not provide enough capability to construct such a beast. %[Lyndon] % It is not the global uniqueness of group identifiers which creates % the problem. There are globally unique labels of groups in your % proposal anyway - the value of the default context identifier. % The problem is that of allowing query of group information when % that information cannot be recorded in the local process/processor % memory. >>>>> I thought I said that. % % You claim that point-to-point does not have enough capability to % construct an information server. Firstly I should ask you whether % this is an artefact of the manner in which you have defined the % point-to-point communication. Secondly I assert that your claim % is false. I shall append a description of server implementation % to the foot of this message. %**** %**** Thank you. These points are both well taken (ie these two paragraphs) >>>>> >>>>> See my comments near the start of this message, regarding >>>>> the proposed server implementation. What I should have said >>>>> was that MPI did not provide enough capability to construct >>>>> a server without consuming a processor, since it neither >>>>> provides an interrupt-receive function nor specifies that >>>>> processors be able to time-share between multiple processes. \item The restriction that only paired-exact-match modules can run in the default group context could be relaxed to something like ``a module that violates the paired-exact-match constraint can run in the default group context if and only if it is the only module to run in that context''. This seems too error-prone to be worth the trouble, since even the standard collective ops might be excluded. (It would also conflict with Implementation Note 1.] \item Group formation can be faster when more information is provided than just the group tag and root process. E.g. 
if all members of the group can identify all other members of a P-member group, in rank order, then O(log P) time is sufficient; if only the root is known, O(P) time may be required. This aspect should be considered by the groups subcommittee in evaluating scalability.
%[Lyndon]
% Yes, partition does appear to be O(P) whereas definition by ordered list appears to be O(log(P)).
%**** Also, see what I wrote in my Step-1 comments. I believe O(log(P)) is still possible.
%****
>>>>> I'd be interested in seeing the O(log(P)) algorithm, sometime after MPI quiets down.
\end{itemize}
\section{Implementation Notes}
\begin{enumerate}
\item To generate and broadcast a new context value: Generate the context value without communication by concatenating a locally maintained unique value with the process number in some 0..P-1 global ordering scheme. This assumes that context values can be significantly longer than log(P) bits for an application involving P processes. If not, then a server may be required, in which case by specification it has to be fast (comparable to a pt-pt call). (There is no need to generate a new context for the broadcast, since it can be coded to use the default group context by meeting the paired-exact-match constraint.)
%[Lyndon]
% Please see notes above on the subject of context generation.
%**** Please see my Step-1 comments.
\item The following is intended only as a sanity check on the claim that this proposal can be layered on MPI. Improvements that increase efficiency or use fewer contexts will be greatly appreciated. In addition to a default group context, each group descriptor contains a ``group partitioning context'' and a ``group disbanding context'' that are obtained and broadcast at the time the group is created. In the case of the INITIAL group, this is done using paired-exact-match code and any context, immediately after initialization of the MPI pt-pt level. (At that time, no application code will have had a chance to issue wildcard receives to mess things up.) Group partitioning is accomplished using pt-pt messages in the partitioning context of the current group (i.e., the one being partitioned), with message tag equal to the group tag provided by the application. In the worst (least scalable) case, the root of the new group must wildcard the source TID. (This violates the paired-exact-match constraint, which is why group formation must happen in a special context.) Group disbanding is done with pt-pt messages in the group disbanding context. This keeps group control from being messed up no matter how the application uses wildcards.
\end{enumerate}
\section{Examples}
{\bf (To be provided)}
%
% END "Proposal V"
%======================================================================%
\end{document}
----------------------------------------------------------------------
Writing a server in the point-to-point layer of MPI in four easy steps
>>>>> by potentially consuming an entire processor
----------------------------------------------------------------------
1) Partition the INITIAL group into two groups. A singleton group, SERVER, and a group CLIENT which contains all of the other processes.
2) The single process in SERVER group records its TID.
3) The processes in INITIAL group allocate a context SERVICE which they remember either in the group cache or static data or something.
4) Use a broadcast in INITIAL group with ``sender'' as the one process which is also in SERVER group, and the ``receivers'' as the (many) processes which are also in CLIENT group, in the SERVICE context, in order to disseminate the TID of the server process. [Fanfare] a server process is in place as is a dedicated context for the purposes of messages required to implement the service. [Observation] the mpi point-to-point initialisation can do this automatically. %**** Zipcode's postmaster general works in this way, more or less. %**** - Tony ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Mon Mar 22 05:50:55 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA26950; Mon, 22 Mar 93 05:50:55 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20829; Mon, 22 Mar 93 03:29:57 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 03:29:56 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20810; Mon, 22 Mar 93 03:29:52 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Mon, 22 Mar 93 00:13 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA06118; Mon, 22 Mar 93 00:11:53 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA15810; Mon, 22 Mar 93 00:11:50 PST Date: Mon, 22 Mar 93 00:11:50 PST From: rj_littlefield@pnlg.pnl.gov Subject: new proposal To: jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu Cc: d39135@sodium.pnl.gov Message-Id: <9303220811.AA15810@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Tony and Lyndon, I will respond in a separate message to some of your detailed comments on Proposal V. But maybe we can move faster by popping up a level. It seems to me that you like the idea of cacheing arbitrary info in group descriptors, you like the idea of groups as things within which contexts get formed, you like performance guarantees, and you don't like having to use opaque id's in point-to-point communications. I'm not quite sure whether static groups and synchronous group control are OK, but I'll presume here that they are. Well, this is pretty neat. If this keeps up maybe we can converge on just one or two proposals. Most of my gripes with other proposals have been based on performance and/or a need for asynchronous servers (with attendant performance and non-portability gripes). But I notice that explicit performance expectations have been gradually appearing and the need for servers disappearing from other people's proposals. Perhaps it is time for a synthesis. Here's a sketch of a new proposal (VI ?). . The functionality of point-to-point communications is per Snir, Lusk, and Gropp, augmented by my proposed MPI_FORM_CONTEXT to allow assembling arbitrary collections of processes. (Marc has already accepted this in a private email to me -- don't know why he didn't post it.) . 
Performance expectations of point-to-point communications are explicitly stated as follows: - MPI_COPY_CONTEXT does not synchronize the participating processes and costs significantly less than a point-to-point fanout among them (e.g., it uses a communication-free counting strategy); - all other context formation routines cost no more than if they were implemented using a single fanin/fanout among the participating processes; - translation of (context,rank) to absolute processor ID costs no more than if it were implemented via the lookup table that Snir suggests. . Groups and contexts are not equal. A group consists of a base context (from which other contexts can be created quickly by MPI_COPY_CONTEXT), plus topology information, plus my cacheing facility. Conceptually, I like this better than proposal V. Do we already have a proposal like this? Should we have one? In general, do you guys buy off on the concept of including performance expectations in the specification? A couple of discussion points... 1. The separation of group and context is defensive. I think I understand what it means to copy a context. I am not sure of either the functional or performance implications of copying a group. E.g., does cached info get copied? 2. I will respond here to two criticisms that have been raised against the cacheing facility. Lyndon notes > % I like the general idea, but I'm nervous about two things: > % (a) implied associativity of group descriptor cache - this > % will potentially be time expensive in implementation. > % (b) there is no method proposed for abritration of keys > % between independently written modules, so we are > % in the same problem regime as just having message tag > % and no message context. > % However, key's are local, so presumably you would find > % it acceptable to add a key registration service? I implicitly proposed a key assignment service in my long-ago example. It said in part: static int gop_key_assigned = 0; /* 0 only on first entry */ static MPI_key_type gop_key; /* key for this module's stuff */ ... if (!gop_key_assigned) /* get a key on first call ever */ { gop_key_assigned = 1; if ( ! (gop_key = MPI_GetAttributeKey()) ) { MPI_abort ("Insufficient keys available"); } } This is not really "registration" because nothing goes into the key assignment routine except the number of times it's called. But assuming that each call site is protected by a separate variable, the effect is to register the call site. As Lyndon notes, this is highly local. But it also allows the returned keys to have their values restricted so as to permit rapid testing and/or retrieval. Tony notes > %**** Stripping is extremely controversial aspect, and arbitrary. > %**** If the recipient has the methods with the same name, then > %**** a new rendezvous could be accomplished at the far end Yes, stripping is arbitrary. My motivation is that this greatly simplifies the design and satisfies what I view as the most critical need: to make collective comms run fast without complicating the calling sequence. I have no objection in principle to extending the facility to include classes of information that do not get stripped. But I sure didn't want to try creating and selling a spec that would handle, e.g., heterogeneous systems. Enough for here -- what do you think of "proposal VI" ? 
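For concreteness, here is a minimal sketch in C (with purely illustrative names) of the sort of communication-free counting strategy mentioned above. It assumes only the ordering rule already discussed -- all members of a group make their context copies in the same order -- so every member computes the same new value without exchanging any messages:

    #define COPIES_PER_CONTEXT 256   /* assumed capacity per context  */

    typedef struct {
        long value;                  /* globally unique context value */
        int  ncopies;                /* copies made from this context */
    } context_t;

    /* Every member of the group calls this in the same order, so all
       members derive the same (base value, sequence number) pair and
       hence the same new context value -- no communication needed.
       Assumes context values have enough bits to spare.              */
    context_t copy_context (context_t *base)
    {
        context_t newc;
        newc.ncopies = 0;
        newc.value   = base->value * COPIES_PER_CONTEXT + (++base->ncopies);
        return newc;
    }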
--Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Mon Mar 22 05:53:49 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA27630; Mon, 22 Mar 93 05:53:49 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01772; Mon, 22 Mar 93 05:53:21 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 05:53:19 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01761; Mon, 22 Mar 93 05:53:16 -0500 Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA18140 (5.65c/IDA-1.4.4 for ); Mon, 22 Mar 1993 04:48:07 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA24191; Mon, 22 Mar 93 09:48:03 GMT Date: Mon, 22 Mar 93 09:48:03 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9303220948.AA24191@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA00311; Mon, 22 Mar 93 09:44:36 GMT To: tony@Aurora.CS.MsState.Edu Cc: d39135@sodium.pnl.gov, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu In-Reply-To: Tony Skjellum's message of Sun, 21 Mar 93 12:16:22 CST <9303211816.AA03587@Aurora.CS.MsState.Edu> Subject: mini-proposal on layerability Content-Length: 695 There is a typo in Tony's message : > So, each receipt function uses the algorithm > (received_rag & ~dont_care) & care This should read (received_tag & ~dontcare) == care i.e. don't care specifies a set of bits to be ignored in the comparison, care specifies the precise value of the other bits Note that you can do complete wild-carding like this by setting dont_care = -1; care = 0; -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Mon Mar 22 06:21:32 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA03967; Mon, 22 Mar 93 06:21:32 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02909; Mon, 22 Mar 93 06:20:50 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 06:20:49 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02898; Mon, 22 Mar 93 06:20:46 -0500 Date: Mon, 22 Mar 93 10:01:09 GMT Message-Id: <13945.9303221001@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Rik's comments on Lyndon's Proposal I To: rj_littlefield@pnlg.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@Aurora.CS.MsState.Edu In-Reply-To: rj_littlefield@pnlg.pnl.gov's message of Mon, 22 Mar 93 01:47:05 PST Reply-To: lyndon@epcc.ed.ac.uk Cc: d39135@sodium.pnl.gov Hi Rik Thanks for the comments. The proposal which I had sent out is dead. It has been replaced with a cut-and-paste of the section on contexts which Marc wrote into the point-to-point document. There is no proposal II. There is however a proposal VI. This naming/numbering is Tony's suggestion with which I concur. 
I will propogate the relevant comments you have made into the LaTeX of proposal VI (as identified LaTeX comments) and repost. I really am very sorry to have messed you about. I have a lot of other email, incl. from yourself, to read. (I am cancelling meetings left, right and centre here.) I will post later today. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Mon Mar 22 06:32:56 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA06460; Mon, 22 Mar 93 06:32:56 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03270; Mon, 22 Mar 93 06:32:22 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 06:32:21 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03262; Mon, 22 Mar 93 06:32:15 -0500 Date: Mon, 22 Mar 93 11:32:10 GMT Message-Id: <14066.9303221132@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: MPI-CONTEXT: Reference Point To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Dear Colleagues This message is a reference point summary of the current state of evolution of proposals. This is intended to help reduce confusion, as there has been a lot of development in the last 48 hours. Here is a complete summary of proposals to date, with proposal identifiers, contacts, status and brief description. In summary of the table: There are currently FOUR live proposals, one of which will shortly be circulated. These are labelled: I (in circulation); III/IV (not circulated); V (in circulated); VI (in circulated). There are currently THREE dead proposals. These are labelled: I++ (circulated as I); II (circulated?); II' (circulated as II). Identifier Status Contact Brief ---------- ------ ------- ----- I++ dead lyndon The "Proposal I" agreed at the post meet lunch, augmented by lyndon, and now defunct. This was circulated as Proposal I last week. Please kill this one. I live marc The proposal of marc which also appeared in the point-to-point document. It was cut-and-paste'd out of that document. This is currently in circulation as Proposal I. II dead rik The "Proposal II" agreed at the post meet lunch, now defunct. Please kill this one. II' dead lyndon A proposal unrelated to II. This has been named VI as of yesterday, to help avoid confusion. This was circulated as Proposal II' yestreday. Please kill this one. III/IV live tony The proposals III and IV agreed at the post lunch meet, to be merged. These have not yet appeared and should do within < 24 hours. This will be circulated as Proposal III/IV. V live Rik Proposal of Rik after abandonment of Proposal II. This was circulated as Proposal V in plain text and LaTeX source formats, last week. VI live lyndon This was, for about 4 hours, called II', yesterday. It has some common ground with I++ and other proposals. This was circulated as Proposal VI around 11:30pm GMT yesterday as LaTeX. There is currently a sketch which Rik is also naming as Proposal VI. Please Rik, either call this Proposal VII or call it a sketch, just to save confusion. The sketch suggests that we may have sufficient convergence to offer a single proposal. Tony and I discussed this yesterday evening and we concur. 
Tony and I suggest that the following occur: 1) Tony completes Proposal III/IV 2) Tony/Rik/Lyndon/??? discuss and hopefully agree a merger of VI, III/IV and V. There is no current suggestion to attempt to merge in I. I will call this Proposal X, for now. 3) if [ merger agreed in 2) ] then Draft contains Proposal I Proposal X else Draft contains Proposal I Proposal III/IV Proposal V Proposal VI fi 4) The false branch of 3) can be modified by a pair merger if we find that two of the three extant proposals can be merged but not all. There is currently also one mini-proposal from Tony dealing with layerability and tag selection. I understand that there will be one further mini-proposal from Tony dealing with threads. I sincerely apologise for having been the prime creator of confusion. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Mon Mar 22 07:10:23 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA15088; Mon, 22 Mar 93 07:10:23 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04660; Mon, 22 Mar 93 07:09:57 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 07:09:56 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04652; Mon, 22 Mar 93 07:09:52 -0500 Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA00212 (5.65c/IDA-1.4.4 for ); Mon, 22 Mar 1993 07:09:47 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA24691; Mon, 22 Mar 93 12:09:44 GMT Date: Mon, 22 Mar 93 12:09:44 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9303221209.AA24691@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA00429; Mon, 22 Mar 93 12:06:23 GMT To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu In-Reply-To: L J Clarke's message of Mon, 22 Mar 93 11:32:10 GMT <14066.9303221132@subnode.epcc.ed.ac.uk> Subject: MPI-CONTEXT: Reference Point Content-Length: 599 Lyndon, since I've been rather busy with other things (which are of more immediate import to Meiko), I haven't been actively contributing to the debate. (Though I have been skimming, and filing all of the mail). Is it possible to pin down (or send out once again) the definitive versions of each proposal. (A copy of the table in your previous mail with a mail date and source for the top copy of each would do, alternatively a clean current copy of each...) Your last mail gives the references, but without following all of the history it's hard to get to the top copy of each. 
Thanks -- Jim From owner-mpi-context@CS.UTK.EDU Mon Mar 22 07:18:09 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA16848; Mon, 22 Mar 93 07:18:09 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04947; Mon, 22 Mar 93 07:17:51 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 07:17:50 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04939; Mon, 22 Mar 93 07:17:45 -0500 Date: Mon, 22 Mar 93 12:17:35 GMT Message-Id: <14115.9303221217@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: MPI-CONTEXT: PLEASE README - Proposal Comments To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Dear Colleagues In the previous email "Reference Point" I sent out a summary of where the proposals are. I have been religously keeping "clean" copies of proposals in LaTeX form, and collating/embedding comments as LaTeX comments. I will now circulate the LaTeX source of each live proposal, with collated and up-to-date comments streams embedded therein. I will circulate PostScript (in which our comments do not appear) on request. Please, delete all previous circulations from your memory :-) First, I explain the comment notation I used. Reader comment lines begin with '%'. First line of block is %[reader-name] Reader comment to comment lines are considered literally as comments to comments and therefore begin with %%[reader-name]. This is a comment tree, as is the '%' indent tree of the LaTeX source. This has been time consuming and I do not propose to keep receiving comments and embedding in this fashion - things are more fluid than I had expected. Three LaTeX files to follow, in turn. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Mon Mar 22 07:18:45 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA16981; Mon, 22 Mar 93 07:18:45 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04963; Mon, 22 Mar 93 07:18:29 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 07:18:28 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04954; Mon, 22 Mar 93 07:18:18 -0500 Date: Mon, 22 Mar 93 12:18:13 GMT Message-Id: <14121.9303221218@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: MPI-CONTEXT: PLEASE README - Proposal I - LaTeX To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk ---------------------------------------------------------------------- \documentstyle{report} \begin{document} \title{Proposal I\\MPI Context Subcommittee} \author{Marc~Snir} \date{March 1993} \maketitle %======================================================================% % BEGIN "Proposal I" % Written by Marc Snir % Edited by Lyndon J. Clarke % March 1993 % \newcommand{\discuss}[1]{ \ \\ \ \\ {\small {\bf Discussion:} #1} \ \\ \ \\ } \newcommand{\missing}[1]{ \ \\ \ \\ {\small {\bf Missing:} #1} \\ \ \\ } \chapter{Proposal I} \section{Contexts} A {\bf context} consists of: \begin{itemize} \item A set of processes that currently belong to the context (possibly all processes, or a proper subset). 
\item A {\bf ranking} of the processes within that context, i.e., a numbering of the processes in that context from 0 to $n-1$, where $n$ is the number of processes in that context.
\end{itemize}
A process may belong to several contexts at the same time. Any interprocess communication occurs within a context, and messages sent within one context can be received only within the same context. A context is specified using a {\em context handle} (i.e., a handle to an opaque object that identifies a context). Context handles cannot be transferred from one process to another; they can be used only on the process where they were created.
Examples of possible uses for contexts follow.
\subsection{Loosely synchronous library call interface}
Consider the case where a parallel application executes a ``parallel call'' to a library routine, i.e., where all processes transfer control to the library routine. If the library was developed separately, then one should beware of the possibility that the library code may receive by mistake messages sent by the caller code, and vice-versa. To prevent such an occurrence one might use a barrier synchronization before and after the parallel library call. Instead, one can allocate a different context to the library, thus preventing unwanted interference. Now, the transfer of control to the library need not be synchronized.
\subsection{Functional decomposition and modular code development}
Often, a parallel application is developed by integrating several distinct functional modules, each of which is developed separately. Each module is a parallel program that runs on a dedicated set of processes, and the computation consists of phases where modules compute separately, intermixed with global phases where all processes communicate. It is convenient to allow each module to use its own private process numbering scheme for the intramodule computation. This is achieved by using a private module context for intramodule computation, and a global context for intermodule communication.
\subsection{Collective communication}
MPI supports collective communication within dynamically created groups of processes. Each such group can be represented by a distinct context. This provides a simple mechanism to ensure that communication that pertains to collective communication within one group is not confused with collective communication within another group.
\subsection{Lightweight gang scheduling}
Consider an environment where processes are multithreaded. Contexts can be used to provide a mechanism whereby all processes are time-shared between several parallel executions, and can context switch from one parallel execution to another, in a loosely synchronous manner. A thread is allocated on each process to each parallel execution, and a different context is used to identify each parallel execution. Thus, traffic from one execution cannot be confused with traffic from another execution. The blocking and unblocking of threads due to communication events provide a ``lazy'' context switching mechanism. This can be extended to the case where the parallel executions span distinct process subsets. (MPI does not require multithreaded processes.)
\discuss{
A context handle might be implemented as a pointer to a structure that consists of a context label (that is carried by messages sent within this context) and a context member table, that translates process ranks within a context to absolute addresses or to routing information.
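For illustration only, such a structure might look like the following sketch; the field and type names are illustrative, not part of the proposal.
\begin{verbatim}
/* Sketch of one possible context handle representation; the
   names are illustrative only and not part of the proposal.   */
typedef struct {
    int   label;     /* context label carried by every message
                        sent within this context               */
    int   size;      /* number of processes in this context    */
    int   myrank;    /* rank of the local process, 0..size-1   */
    long *members;   /* member table: rank -> absolute address
                        or routing information                 */
} context_object;    /* a context handle is a pointer to this  */
\end{verbatim}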
Of course, other implementations are possible, including implementations that do not require each context member to store a full list of the context members.
Contexts can be used only on the process where they were created. Since the context carries information on the group of processes that belong to this context, a process can send a message within a context only to other processes that belong to that context. Thus, each process needs to keep track only of the contexts that were created at that process; the total number of contexts per process is likely to be small.
The only difference I see between this current definition of context, which subsumes the group concept, and a pared-down definition, is that I assume here that process numbering is relative to the context, rather than being global, thus requiring a context member table. I argue that this is not much added overhead, and gives much additional needed functionality.
\begin{itemize}
\item If a new context is created by copying a previous context, then one does not need a new member table; rather, one needs just a new context label and a new pointer to the same old context member table. This holds true, in particular, for contexts that include all processes.
\item A context member table makes sure that a message is sent only to a process that can execute in the context of the message. The alternative mechanism, which is checking at reception, is less efficient, and requires that each context label be system-wide unique. This requires that, at the least, all processes in a context execute a collective agreement algorithm at the creation of this context.
\item The use of relative addressing within each context is needed to support true modular development of subcomputations that execute on a subset of the processes. There is also a big advantage in using the same context construct for collective communications as well.
\end{itemize}
}
\section{Context Operations}
A global context {\bf ALL} is predefined. All processes belong to this context when computation starts. MPI does not specify how processes are initially ranked within the context ALL. It is expected that the start-up procedure used to initiate an MPI program (at load-time or run-time) will provide information or control on this initial ranking (e.g., by specifying that processes are ranked according to their pid's, or according to the physical addresses of the executing processors, or according to a numbering scheme specified at load time).
\discuss{If we think of adding new processes at run-time, then {\tt ALL} conveys the wrong impression, since it is just the initial set of processes.}
The following operations are available for creating new contexts.
{\bf \ \\ MPI\_COPY\_CONTEXT(newcontext, context)}
Create a new context that includes all processes in the old context. The ranks of the processes in the previous context are preserved. The call must be executed by all processes in the old context. It is a blocking call: No call returns until all processes have called the function. The parameters are
\begin{description}
\item[OUT newcontext] handle to newly created context. The handle should not be associated with an object before the call.
\item[IN context] handle to old context
\end{description}
\discuss{
I considered adding a string parameter, to provide a unique identifier to the next context. But, in an environment where processes are single threaded, this is not much help: Either all processes agree on the order they create new contexts, or the application deadlocks.
A key may help in an environment where processes are multithreaded, to distinguish calls from distinct threads of the same process; but it might be simpler to use a mutex algorithm at each process.
{\bf Implementation note:} No communication is needed to create a new context, beyond a barrier synchronization; all processes can agree to use the same naming scheme for successive copies of the same context. Also, no new rank table is needed, just a new context label and a new pointer to the same old table.
}
{\bf \ \\ MPI\_NEW\_CONTEXT(newcontext, context, key, index)}
\begin{description}
\item[OUT newcontext] handle to newly created context at calling process. This handle should not be associated with an object before the call.
\item[IN context] handle to old context
\item[IN key] integer
\item[IN index] integer
\end{description}
A new context is created for each distinct value of {\tt key}; this context is shared by all processes that made the call with this key value. Within each new context the processes are ranked according to the order of the {\tt index} values they provided; in case of ties, processes are ranked according to their rank in the old context. This call is blocking: No call returns until all processes in the old context have executed the call.
Particular uses of this function are: (i) Reordering processes: All processes provide the same {\tt key} value, and provide their index in the new order. (ii) Splitting a context into subcontexts, while preserving the old relative order among processes: All processes provide the same {\tt index} value, and provide a key identifying their new subcontext.
{\bf \ \\ MPI\_RANK(rank, context)}
\begin{description}
\item[OUT rank] integer
\item[IN context] context handle
\end{description}
Return the rank of the calling process within the specified context.
{\bf \ \\ MPI\_SIZE(size, context)}
\begin{description}
\item[OUT size] integer
\item[IN context] context handle
\end{description}
Return the number of processes that belong to the specified context.
\subsection{Usage note}
Use of contexts for libraries: Each library may provide an initialization routine that is to be called by all processes, and that generates a context for the use of that library.
Use of contexts for functional decomposition: A harness program, running in the context {\tt ALL}, generates a subcontext for each module and then starts the submodule within the corresponding context.
Use of contexts for collective communication: A context is created for each group of processes where collective communication is to occur.
Use of contexts for context-switching among several parallel executions: A preamble code is used to generate a different context for each execution; this preamble code needs to use a mutual exclusion protocol to make sure each thread claims the right context.
\discuss{
If process handles are made explicit in MPI, then an additional function needed is {\bf MPI\_PROCESS(process, context, rank)}, which returns a handle to the process identified by the {\tt rank} and {\tt context} parameters.
A possible addition is a function of the form {\bf MPI\_CREATE\_CONTEXT(newcontext, list\_of\_process\_handles)} which creates a new context out of an explicit list of members (and ranks them in their order of occurrence in the list). This, coupled with a mechanism for requesting the spawning of new processes into the computation, would allow the creation of a new all-inclusive context that includes the additional processes.
However, I oppose the idea of requiring dynamic process creation as part of MPI. Many implementers want to run MPI in an environment where processes are statically allocated at load-time. } % % END "Proposal I" %======================================================================% \end{document} ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Mon Mar 22 07:20:03 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17043; Mon, 22 Mar 93 07:20:03 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05007; Mon, 22 Mar 93 07:19:37 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 07:19:35 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04984; Mon, 22 Mar 93 07:19:13 -0500 Date: Mon, 22 Mar 93 12:18:58 GMT Message-Id: <14127.9303221218@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: MPI-CONTEXT: PLEASE README - Proposal V - LaTeX To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk ---------------------------------------------------------------------- % [Lyndon] General points % -------------- % % 1) I had to ``mine'' the text :-) Perhaps one of us (i.e., I am % offering if you wish) should attempt to construct a more transparent % presentation before circulation to whole committee, for the % convenience of committee members. %%[Tony] %% I felt that things appear twice, because of summary (good) %% and because of implementation notes at end (confusing) %% %%%[Rik] %%% I agree with both of the above. But let's decide whether %%% Proposal V will be replaced by VI or its equivalent before %%% we do any rewriting. %%%%[Lyndon] %%%% Proposal VI as referred to is the sketch which Rik sent out %%%% suggesting a merger. There is an actual Proposal VI circulated. % 2) I'm not a fan of much of this proposal, although I do indeed like % some of the ideas which it introduces. [On the other hand, I'm not a % great fan of all of the proposal which I wrote. I shall mail self % criticism of my proposal, and may have to write amended or alternative % proposal :-)] %%[Tony] %% Please be more specific. I am having a hard time understanding %% why you really don't like it, Lyndon. If the process model %% were a little less static, and servers were permitted (though %% hopefully bounded in cost), I think we would have an excellent %% proposal. %%%[Rik] %%% See Proposal VI. I'd be happy to see the good ideas adopted %%% and the crap die quietly. %% %%%[Lyndon] %%% I would have thought that the points below make my major %%% objections perfectly clear. Perhaps not. Here they are: %%% a) Paired-exact-match stuff %%% b) Translation of (group,rank) into TID all over the place % % 3) I really like the way in which groups are something like ``frames'' % in which contexts are created. This is conceptually much neater than % duplication of groups. %%[Tony] %% In practice, group subsetting will require groups to be copied, %% otherwise, subgroups will unfairly be penalized by the size %% of their ancestor. %%%[Tony] %%% Right, but I think of that as creating a new group. After all, %%% even the ranking structure is different. 
%%
%%%[Lyndon]
%%% I am anticipating that when one or more groups are created by subsetting then, for example, if the parent were described by a process list, the children will be described by process lists which are distinct sublists of the parent. So each element of the parent list gets copied, exactly once.
%%% The difficulty I have is that if a group were to be expanded or contracted, then the ``duplicates'' thereof would no longer be duplicates. Saying that duplicate creates a group but retains the process list of the old group is conceptually muddy, since the new group is a reference to a group, whereas the old group or an even older group must be an actual group. Yuk! Now, if we introduce the concept of a reference to an actual group, and say this reference is uniquely identified, then it is conceptually sound and this object we describe really is a context.
%
% 4) I like the idea of pushing information into the group structure. I have a few qualms with the proposed details --- see specific points.
%%[Tony]
%% I have more confidence about this idea, and could demonstrate by June/July time-frame in Zipcode.
%%
%
% 5) See ``Writing a server in the point-to-point layer of MPI in four easy steps'' at the foot of the message.
%%[Tony]
%% This seems like a nice thing.
%%%[Tony]
%%% The implementation that Lyndon suggests consumes an entire process for the server. There are times when this is OK, but also times when it isn't. E.g., if you have a divide-and-conquer algorithm that really wants 2^i non-server processes, and you're working on a machine with 2^n processors that doesn't support multiple processes per processor, then some of the users will get upset that they can only use half the machine. A year and a half ago, I got flamed for making a suggestion like this in connection with the Delta. (And now I suppose I get flamed for using the Delta as an example again... Maybe if I use the CM5...)
%%
%%%[Lyndon]
%%% You are too kind :-)
%%
\documentstyle{report}
\begin{document}
\title{``Proposal V'' for MPI Communication Context Subcommittee}
\author{Rik~Littlefield}
\date{March 1993}
\maketitle
%======================================================================%
% BEGIN "Proposal V"
% Rik Littlefield
% March 1993
%
\chapter{Proposal V}
\section{Summary}
\begin{itemize}
\item Group ID's have local scope, not global. Groups are represented by descriptors of indefinite size and opaque structure, managed by MPI. Group descriptors can be passed between processes by MPI routines that translate them as necessary. (This satisfies the same needs as global group ID's, but its performance implications are easier to understand and control.)
%[Lyndon]
% I support the approach whereby group descriptors are local objects. They could be pointers to structures, or indices into tables thereof. We let the implementation consider that.
%
% One difficulty arises as group descriptors can only be passed from process P to process Q if both P and Q are members of some group G, since the communication presumably must use a context known to both P and Q. Imagine that P is a member of F and Q is not a member of F; that Q is a member of H and P is not a member of H; that both P and Q are members of G. Let M be arbitrary message data.
%
% Initially -
% P can send F to Q, and Q can receive F from P, in a context of G.
% Q can send H to P, and P can receive H from Q, in a context of G.
% Thereafter - % P can allocate a context C in F. % P can send C to Q, and Q can receive C in the default context of H. % Q can allocate a context D in H. % Q can send D to P, and P can receive D, in the default context of F. % Thereafter - % P as member of F, and Q as member of H, can communicate using % wildcard pid and tag by use of contexts C and D. % % Okay, this is possible, but it is messy :-) %%[Tony] %% Alternatives, Lyndon? %% %%%[Lyndon] %%% I don't suppose for one minute that you will like this, but I really %%% would suggest that in this case a group descriptor registry may %%% be appropriate. %[Tony] % Seems doable. % %%[Lyndon] %% But usable with some grief, as above. \item Arbitrary process-local info can be cached in group descriptors. (This allows collective operations to run quickly after the first execution in each group, without complicating the calling sequences.) %[Lyndon] % I rather like this idea. %% [Tony] %% Me too. %%%[Rik] %%% Progress!! %[Tony] % Seems doable. % \item There are restrictions that permit groups to be layered on top of pt-pt. %[Tony] % I don't understand the word 'restriction' here. % Restriction of what. % %%[Lyndon] %% Rik is speaking of the pair-exact-match stuff you see later on. \item Pt-pt communications use only TID, context, and tag, and are specified to be fast. %[Tony] % What does "fast" mean. % %%[Lyndon] %% Fair question! \item Group control operations (forming, disbanding, and translating between rank and TID) are NOT specified to be fast. %[Tony] % OK, the above two items are identical to what Zipcode % provides in practice, but people have argued that groups % might be created/deleted more often in some apps, and % that these apps ought to be supportable % %%[Lyndon] %% In our work group creation/deletion is an infrequent operation %% and we are happy to live with reasonable cost for this operation. %% I think Marc Snir is thinking abour a different group model in %% group created/deletion is frequent. %% Perhaps we should provide both or neither (even handedness principle). \end{itemize} \section{Detailed Proposal} \begin{itemize} \item Pt-pt uses only ``TID'', ``context'', and ``message tag''. TID's, contexts, and message tags have global scope; they can be copied between processes as integers. TID's and contexts are opaque; their values are managed by MPI. Message tags are controlled entirely by the application. %[Lyndon] % You probably ought to say that context and TID are integer with % opaque values. %% [Tony] %% 1) It is not obvious that TIDs should be restricted to 32 bits. %%%[Lyndon] %%% I did not imply that they were. %% 2) It is not obvious that contexts will be 32 bits (eg, 16 bits). %% I favor a whole word for a context, despite other limits, %% just to make things simpler. %% %%%[Lyndon] %%% ditto %% Internet addresses are going to get augmented from 32 to ??? bits %% is it reasonable to assume that certain MPI implementations might %% incorporate such internet addresses as TIDs (in future), %%%[Lyndon] %%% No, it is not reasonable, in the least. And both you and Rik appear %%% to be ignoring the possibility that the process descriptor could %%% be required to store routing data of significant length. Therefore %%% the sensible thing to do is use a process descriptor which in %%% practice might be a table index --- fits into integer for sure --- %%% the value of which is process local for sure, and the table %%% contains the real process identifier of implementation defined size. 
%% Opacity is partially violated if we say how big the data type is???
%%%[Lyndon]
%%% I understand the point you make, but it gets blown away by the point I have just made in reply to you.
%[Tony]
% Yes, and I want at least 32 bits of message tag.
%
%%[Lyndon]
%% Yes, and I want exactly zero bits of message tag. I'll just keep quiet about message tags.
\item Pt-pt communication is specified to be fast in all cases. (E.g., MPI must initialize all processes such that any required translation of the TID is faster than the fastest pt-pt communication call.)
%[Lyndon]
% How do you imagine this to be achieved, considering that TIDs are global entities?
% I guess that you are thinking a TID is a (processor_number, process_number) pair of bit fields, a bit like one sees in NX and RK, and that network interface hardware will route based on the processor_number.
%
% In another approach a TID is a process-local entity just like the group descriptor. This satisfies efficiency when the above scheme is not applicable, for example in a workstation network.
%%
%%[Tony]
%% Where does this get us??? Remember, we have to choose on some things, so we can have something to present in Dallas. Is there an important difference here?
%%%[Lyndon]
%%% For sure there is an important difference. See comment above.
%%
%% TIDs are global entities. Is structure assumed to be global; in a truly opaque system, some TID component would have to be fixed, but the rest could vary structurally...
%%
%[Tony]
% This could be difficult, in practice, if one mails a message to one's own process, and MPI is smart enough to optimize.
%
\item A group is represented by a ``group descriptor'', of indefinite size and opaque structure, managed by MPI. The group descriptor can be referenced via a ``process-local group ID'', which is typically just a pointer to the group descriptor. (Group ID's with global scope do not exist.)
%[Lyndon]
% I think I see, it is the context identifier which has global scope. Now, this really is just getting on the way toward the proposal that I really wish I had written for the subcommittee. I will flame myself!
%%
%% Yes, contexts are global; group identifiers are just pointers, typically, to data structures, describing
%% 1) a group's context
%% 2) group members and their ranks (mappings, inverses, cached, hashed, unscalably stored, etc)
%% 3) TID-to-rank map and inverse (see possibilities in 2)
%% 4) A set of fixed global operations, accepted as standard, and accessible in O(1) time. Possibly, each such operation should be a method, so that a parameter block can be passed with it. Zipcode supports the Method type to do this. This is effectively a cache for some parts of item #5 ...
%% 5) An AVL or similar tree of extensible operations. New operations are registerable by the user. These tags are unique within a group, and specify an operation i) pre-defined by MPI (in which case it can be cached in 4) or ii) alternative operations (even if they do something standard, that are wanted to be accessed by name). This name is group unique.
%%
%% Mechanisms such as DO_METHOD_FROM_GROUP(name,....) or GET_METHOD_FROM_GROUP(name,...) and SET_METHOD_IN_GROUP(name,...) are clearly needed.
%%
%%%[Rik]
%%% The model of contexts used in this proposal is that the context value has global scope. It doesn't have to be that way.
%%% The Snir, Lusk, and Gropp proposal could be implemented with each %%% processor contributing a different context value to the context %%% descriptor. E.g., process 3 says "if you want to talk to me %%% in this context, you have to use context value 7", while process %%% 4 says to use context value 2. I don't know whether there are %%% advantages to that extra flexibility. %[Tony] % Sounds good. % \item Group descriptors can be copied between arbitrary processes via MPI-provided routines that translate them to and from machine- and process-independent form. A process receiving a group descriptor does not become a member of the group, but does become able to obtain information about the group (e.g., to translate between rank and TID). %[Lyndon] % Also crucially, to obtain and use the default context identifier % of the received group descriptor. %%[Tony] %% Yes, that is included, I believe, in concept. %%[Rik] %% Right. \item Arbitrary information can be cached in a group descriptor. This is, MPI routines are provided that allow any routine to augment a group descriptor with arbitrary information, identified by a key value, that subsequently can be retrieved by any routine having access to the descriptor. Cached information is not visible to other processes and is stripped when a group descriptor is sent to another process. Retrieval of cached information is specified to be fast (significantly cheaper than a pt-pt communication call). %[Lyndon] % I like the general idea, but I'm nervous about two things: % (a) implied associativity of group descriptor cache - this % will potentially be time expensive in implementation. % (b) there is no method proposed for abritration of keys % between independently written modules, so we are % in the same problem regime as just having message tag % and no message context. % However, key's are local, so presumably you would find % it acceptable to add a key registration service? %%[Tony] %% Stripping is extremely controversial aspect, and arbitrary. %% If the recipient has the methods with the same name, then %% a new rendezvous could be accomplished at the far end %%%[Rik] %%% See my "Proposal VI" note for responses to this. %[Tony] % In Zipcode 1.0, we allow multiple global operations % to be provided on a message-class (eg, grid-oriented messages) % The identifiers for these possible operations are user-specified % presently, but the "names" of the global operations are fixed % at compile-time. % % That means that there is O(1) time to find combine, fanout, send, % etc, on a group-wide scope. However, other operations cannot % be accessed in O(1) time (they are not in the opaque structure). % % The same mechanism used by Zipcode to allow multiple methods for % combine to be registered by the user, could also allow extensibility % just like Rik describes, with little effort. We use AVL trees. % % In fact, I will add this to Zipcode 1.x. Why say this? It is % not far from existing practice, and I have a lot of the machinery % in place already, and I am confident that it is useful. % \item Group descriptors contain enough information to asynchronously translate between (group,rank) and TID for any member of the group, by any process holding a descriptor for the group. MPI routines are provided to do these translations. Translation is not specified to be fast. %[Lyndon] % How do you imagine that this will be done? % (a) Perhaps an array of TIDs which is just indexed on rank? Then % where is the case for not using directly rank. 
% (b) Perhaps a hashing function? Then the case for not using rank % directly is marginal. % (c) Perhaps generating a request to a service process? In which % case you admit here that a service process exists, which must % be propogated throughout the proposal and changes one of your % fundamental objectives. % (d) Something else? Do tell! %%[Tony] %% Yes, these are all options. Fastness seems to be an important %% issue. If translation is very expensive, none of the "good" %% features will be used. %%%[Rik] %%% I was actually trying to be nice to the service process %%% people, giving them a designated hook where they could be %%% slow and I could work around them via the cacheing mechanism. %%% I'd really rather just specify it as being fast. %[Tony] % This seems to be a serious flaw. It will have to be cached % on an LRU basis, with system/user/both specifying how much % caching is allowed (ie, how much unscalable memory use). % If the first time is expensive, OK, but not the Nth time. % %%[Lyndon] %% Check. \item At creation, each group is assigned a globally unique ``default group context'' which is kept in the group descriptor and can be quickly extracted. This extraction is specified to be fast (comparable to a procedure call). %[Tony] % OK, I see no problem with this (so far). % \item The default group context can be used for pt-pt communication by any module operating within the group, but only for exactly paired send/receive with exact match on TID, context, and message tag. Call this the ``paired-exact-match constraint''. This constraint allows independent modules to use the same context without coordinating their tag values. (Assumes: tight sequencing constraints for both blocking and non-blocking comms in pt-pt MPI.) %[Lyndon] % I understand what you want the paired-exact-match for. This % might appear as pragmatics and advice to module writers. I % think you should be firmer about sequencing constraints % for point-to-point in MPI that this requires, to be % sure that the constraint is not too large. %%[Tony] %% Again, I think this should be eliminated, and all references %% to this idea should be expunged. It denies the context's %% ability to manage messages. %%%[Lyndon] %%% Check. %%%[Rik] %%% When I wrote this, I was presuming that contexts could %%% not be generated "for free" and that therefore it was %%% a good idea to specify a method by which codes could %%% be written so as to not require them. By the way, %%% you may be interested to note that the sample collective %%% comms defined by Snir and Geist in their recent proposal %%% implicitly rely on exactly the paired-exact-match %%% constraint. As written, those routines would break %%% if preceded by a call to a module that issued a wildcard %%% receive in the same group. (And there's an endnote %%% that suggests they know that.) %[Tony] % NO. This violates the concept of context entirely. % (ie, an oxymoron ... contexts same, but still no need for % tag disambiguation...) % % Use the default group context to establish (cooperatively) % other contexts, and then use these. This is a seriously % bad feature, in my mind. % \item A modules that does not meet the paired-exact-match constraint must use a different globally unique context value. An MPI routine will exist to generate such a value and broadcast it to the group. The cost of this operation is specified to be comparable to that of an efficient broadcast in the group, i.e. log(G) pt-pt startup costs for a group of G processes. [See Implementation note 1.] 
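As a sketch only of the scheme that Implementation Note 1 has in mind (the names below are illustrative and not part of the proposal), the new value can be formed locally and then broadcast to the group in the default group context:
\begin{verbatim}
/* Concatenate a locally maintained counter with the process
   number in some fixed 0..P-1 ordering.  The result is unique
   across the application provided the context field is
   significantly wider than log(P) bits, as Implementation
   Note 1 assumes; it is then broadcast to the group.          */
static long next_local_value = 0;     /* per-process counter   */

long new_unique_context (int myrank, int logP_bits)
{
    return (++next_local_value << logP_bits) | (long) myrank;
}
\end{verbatim}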
%[Lyndon] % Perhaps I am missing something here. Please help. This % is what my mind is thinking. % The synchronisation requirement means that all context % allocations in a group G must be performed in an identical % order by all members of G. Then the sequence number of the % allocation is unique among all allocations within G. % Therefore the duplet % (default context of G, allocation sequence number) % is a globally unique identification of the allocated % context. The sequence number can be replaced by any one-to-one % map of the sequence number, of course. So, according to your % synchronisation constraint, context generation can be ``free''. %%[Tony] %% I agree that context allocation has to be done in sequence. %% That is why I am in favor of providing calls that allow %% groups to get numerous contexts at creation, and then %% cooperatively, but potentially without further communication %% divide them(as they build subgroups, for instance). %% %% I see these as services to be used in building virtual topology %% features, which will then be more widely used by users of MPI. %% %%%[Lyndon] %%% If the context allocations are done in sequence, then I have %%% indicated how they can be done for free. I am getting confused. %%[Rik] %% I think this is right. This counting strategy probably %% places some constraints on how big the context value field %% has to be, but I've incorporated it into Proposal VI. %[Tony] % I do not think we should support the paired-exact-match thing. % \item Context values are a plentiful but exhaustible resource. If further use is anticipated, a context may be cached; otherwise it should be returned. (Note: the number of available contexts should be several times the number of groups that can be formed.) %[Tony] % Concur. This suggests many more than "256" % %%[Lyndon] %% The number of contexts in the whole program? Sure 256 is too small! %% The number of contexts in each process? Maybe something like 256 is okay? \item When a group disbands, its group descriptor is closed and any cached information is released. (The MPI group-disbanding routine does this by calling destructor routines that the application provided when the information was cached.) Copies of the group descriptor must be explicitly released. It is an error to make any reference to a descriptor of a disbanded group, except to release that descriptor. \item All processes that are initially allocated belong to the INITIAL group. (In a static process model, this means all processes.) An MPI routine exists for obtaining a reference to the INITIAL group descriptor (i.e., a local group ID for INITIAL). \item Groups are formed and disbanded by synchronous calls on all participating processes. In the case of overlapping groups, all processes must form and disband groups in the same order. [See Implementation Note 2.] %[Tony] % This is the Zipcode model. It could say loosely synchronous. % \item All group formation calls are treated as a partitioning of some group, INITIAL if none other. The calls include a reference to the descriptor of the group being partitioned, the TID of the new group's ``root process'', the number of processes in the group, and an integer ``group tag'' provided by the application. The group tag must discriminate between overlapping groups that may be formed concurrently, say by multiple threads within a process. [See Implementation Note 2.] 
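A caller's-eye sketch of such a formation call follows (illustrative only: the routine name, argument order, and stub body are invented for discussion, not proposed MPI syntax). Two overlapping groups are formed in the same order by every participant, and the distinct group tags keep the two formations apart:
\begin{verbatim}
/* Illustrative only.  The stub exists solely so the fragment compiles
 * and runs; a real call would perform the partitioning protocol.    */
#include <stdio.h>

#define TAG_ROW_GROUP 1     /* application-chosen group tags that    */
#define TAG_COL_GROUP 2     /* discriminate overlapping formations   */

/* Hypothetical: partition 'parent_gd', rooted at 'root_tid', into a
 * new group of 'nmembers' processes; returns a local descriptor.    */
static int form_group(int parent_gd, long root_tid, int nmembers,
                      int group_tag)
{
    static int next_gd = 1;            /* placeholder bookkeeping    */
    printf("form group: parent=%d root=%ld size=%d tag=%d\n",
           parent_gd, root_tid, nmembers, group_tag);
    return next_gd++;
}

int main(void)
{
    int initial_gd = 0;                 /* descriptor of INITIAL     */
    long row_root = 40, col_root = 47;  /* TIDs known to the caller  */

    /* Every participating process makes the same calls in the same
     * order; the tags keep the two overlapping groups distinct.     */
    int row_gd = form_group(initial_gd, row_root, 4, TAG_ROW_GROUP);
    int col_gd = form_group(initial_gd, col_root, 4, TAG_COL_GROUP);

    printf("row group descriptor %d, column group descriptor %d\n",
           row_gd, col_gd);
    return 0;
}
\end{verbatim}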
%[Lyndon] % The group partition you propose is essentially no different to % the partition by key which has already been discussed, except % that the key can encapsulate both (root process, group tag). % So perhaps partition by key was better in the first place? %%[Tony] %% Do we get anything by having the root process? %% %%%[Lyndon] %%% No. %%%[Rik] %%% The group partitioning concept has been refined in several %%% other postings to mpi-context, mpi-collcom, and mpi-pt2pt, %%% in which "root" was replaced by a "knownmember" set. %%% The idea is that if you have lots of processes joining a %%% group, they don't have to know who all their compatriots %%% are, but they should know at least one that they all have %%% in common, who can serve to organize the communications. %[Tony] % I don't understand the thread issue here. % %%[Lyndon] %% If two threads are concurrently partitioning the same group, you %% need to disambiguate the partition operations. Analagous to %% concurrent collective operations or nonblocking. \item Collective communication routines are called by all members of a group in the same order. %[Tony] % Yes. \item Blocking collective communication routines are passed only a reference to the group descriptor. To communicate, they can either use the default group context or internally allocate a new context, with or without cacheing. %[Tony] % What does caching really imply here ??? Help. % %%[Lyndon] %% Dunno. \item Non-blocking collective communication routines are passed a reference to the group descriptor, plus a ``data tag'' to discriminate between concurrent operations in the same group. (Question: is sequencing enough to do this?) An implementation is allowed but not required to impose synchronization of all cooperating processes upon entry to a non-blocking collective communication routine. %[Lyndon] % If the requirement that collective operations within a group G are % done in the identical order by all members of G even when such % operations are non-blocking, then the sequence number of the operation % is unique and sufficient for disambiguation. % % The permission to force synchronisation - i.e., blocking - in the % implementation of a non-blocking routine seems to make the routine % less than useful. I can see whay you are asking for this, in order % that you can generate a context for the routine call. In fact Rik % I don't think you need the constraint, as I pointed out cheaper % context generation exists above, unless of course I am missing % something. %%[Tony] %% I think that non-blocking collcomm is moribund in MPI1 or %% else MPI1 is moribund. :-) %% %%%[Lyndon] %%% Check. %%%[Rik] %%% Nicely put. %[Tony] % I think that contexts are really important in this case, % to keep things straight, but that non-blocking collcomm should % be omitted from MPI1 (cf, Geist). Sequencing supports % a sufficient disambiguation, as long as the entire group % is always the participant in operations. That is, you have % to form subgroups, with new contexts, to do global ops on % subsets. \end{itemize} \section{Advantages \& Disadvantages} \subsection*{Advantages} \begin{itemize} \item This proposal is implementable with no servers and can be layered easily on existing systems. %[Lyndon] % I am not of the opinion that the absence of services is such a big % deal. I do think that programs which can conveniently not use % services should not be forced to, but programs which cannot % conveniently not use services should be allowed to. 
%%[Tony] %% Too many negatives here for me to parse :-) %[Tony] % Why aren't servers needed to create contexts. Where do they % come from? If you rely on the fact that INITIAL will do % a loosely synchonous cooperative operation each time a new % context is needed, then a simple (easily implementable server, % or fetch-and-add remote access) is replaced by a more rigid % computation model. % % If we can get rid of this disagreement, me might be able to % reduce our total proposal space by one whole proposal. % \item Cacheing auxiliary information in group descriptors allows collective operations to run at maximum speed after the first execution in each group. (Assumes that sufficient context values are available to permit cacheing.) %[Lyndon] % If you agree that context allocation is ``free'', then you can % delete the bracketed qualifier. %%[Tony] %% Context allocation need not be free provided it can be made cheap, %% or cheap enough. %% %% If one knows one will need several, then a single call could %% provide such contexts, amortizing overhead. This is likely when %% bulding grids (ie, virtual topologies) in Zipcode, so it is %% true in existing practice. %% %% One should recognize the need for layering virtual top. calls %% on top of these calls, then these calls may appear painful, %% but perhaps they would be less used. Some users will use the %% provided virtual topology calls, others will prefer their own. %% Both will have equal power (see also,separate note on layerability). %% %% If getting N contexts is a send-and-receive, plus a reactive server, %% then this is reasonably light weight,provided that hundreds of %% messages, or global operations ensue thereafter. We can know in %% advance how heavy weight the context server will be. %% %% if an implemention can use some locations of remote memory, with %% fetch and add, or locks, to achieve contexts, then this is even %% cheaper, in principle. %% %% Despite Jim's earlier insistence that context numbers be kept to %% 256 or so, I think that this number should be much larger, so that %% much less efort goes into returning contexts, and so on, except %% occasionally, by processes. Otherwise, a new kind of overhead, %% get-rid-of-context-because-I-am-out ensues, or programs block %% until contexts become available, offering the possibility of %% deadlocks. %% %[Tony] % If contexts are being used very dynamically, how are they being % assigned, kept, released, reissued without a server? Sorry if % I missed something, but I don't see it, without a restrictive % SPMD model of computation (Zipcode obviates its server for the % SPMD model, for instance). %%[Lyndon] %% MPI stinks of SPMD. I wouldn't mind if MPI would just say SPMD. \item Speed of cached collective operations does not depend on speed of group operations such as formation or translation between rank and TID. %[Lyndon] % True, but the cache is going to get big as user's are going to store % arrays of TIDs in it. %%[Tony] %% Unscalability (of a limited form) should be permitted/selectable %% by user, to use as much per-node memory as the user wants, to reduce %% communication. %% %%%[Rik] %%% Besides which, the system will do this anyway if (group,rank) %%% translations are required to be fast. All I'm doing is moving %%% it to the user level. %[Tony] % Can you clarify this with examples. % \item Requires explicit translation between (group,rank) and TID, which makes pt-pt performance predictable no matter how the functionality of groups gets extended. 
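The intended usage pattern can be sketched as follows (illustrative only; the lookup table and the send stub stand in for whatever translation routine and pt-pt call MPI finally adopts). The translation is hoisted out of the communication loop, so the loop pays only the pt-pt cost even if translation itself is slow:
\begin{verbatim}
/* Illustrative only: translate (group,rank) to a TID once, up front,
 * then address the sends by TID inside the loop.                    */
#include <stdio.h>

#define GROUP_SIZE 4

static long group_tid[GROUP_SIZE] = { 101, 205, 309, 412 };

/* Stand-in for the rank->TID translation routine, which is allowed
 * to be slow; that is why it is called outside the loop.            */
static long rank_to_tid(int rank)
{
    return group_tid[rank];
}

/* Stand-in for a pt-pt send addressed directly by TID.              */
static void send_to_tid(long tid, int tag, double value)
{
    printf("send %g to TID %ld with tag %d\n", value, tid, tag);
}

int main(void)
{
    int myrank = 1, iter;
    long right_tid = rank_to_tid((myrank + 1) % GROUP_SIZE);

    for (iter = 0; iter < 3; iter++)
        send_to_tid(right_tid, 99, 3.14 * iter);
    return 0;
}
\end{verbatim}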
%[Lyndon] % This is only true because you have asserted that implementations % must have the property that: % `` Pt-pt communication is specified to be fast in all cases. % (E.g., MPI must initialize all processes such that any % required translation of the TID is faster than the fastest % pt-pt communication call.)'' % So the advantage is not that which you have quoted, it is that % you have made this assertion. % %%[Tony] %% I see,but what he means here is that there is no unpredictable %% translation cost because we do not write (group,rank) in pt2pt %% calls. So, there is some validity to the statement. %%%[Lyndon] %%% However he ignores that the TID might require translation the %%% cost of which might be unpredictable. This because the TID has %%% a global value and cannot therefore hold process local information %%% such as ``how do I route to that process''. %%%[Rik] %%% Tony has it right. %[Tony] % I like this, of course. \item Communication both within and between groups seems conceptually straightforward. %[Lyndon] % This is a conjecture. I believe that conjecture to be false. % I especially believe this in the case of communication between % groups. The methods which are available for ``hooking up'' % allows are at least perverse. I guess that the user could make % use of a service process, to make life easier in this hooking up, % so whay not provide one. %%[Tony] %% Yes, that is why I have one in Zipcode. I wish Zipcode were %% on netlib today, so you could try it. Well, we are writingthe %% manual, and working at it as fast as we can. % % A further point. It seems to me that ``seems'' means that it seems % to you. This is not the point. It is how it seems to a lesser % wizard than yourself which is of importance here. I conjecture % that the reverse statment is true when the person doing the seeming % is changed to a lesser wizard. %%[Tony] %% I lost something here, but I agree with the sense. The word %% seems is subjective,and should disappear from our discussions, %% as much as seems prudent, anyway :-) %%%[Rik] %%% Silly me -- trying to call out an opinion that might be %%% disagreed with. It occurred to me when I wrote the "s" word %%% that I'd probably be better off to just follow standard practice %%% and state my opinions as fact. %[Tony] % Well, is point-to-point group oriented. Not. %%[Lyndon] %% Check. \end{itemize} \subsection*{Disadvantages} \begin{itemize} \item Requires explicit translation between (group,rank) and TID, which may be considered awkward. %[Lyndon] % It is true that (group,rank) must be translated to TID. I can % assure you that this is considered both awkward and redundant. %%[Tony] %% Yes,awkward, because it is nice to escape the TID realm and %% work within the (albeit simple) abstraction of group,rank. %% When layering virtual topologies on this, it would be so nice %% to write them to a group,rank syntax, not enforcing TID mappings %% everywhere. %%%[Rik] %%% I happen to agree with this. But even though it's late, I can't %%% resist pointing out that there's not a shred of scientific %%% evidence that these opinions are in fact true. %[Tony] % I think it is awkward. \item Communication between different groups may be considered awkward. %[Lyndon] % You bet! Please see below. %%[Tony] %% Indeed. %%%[Lyndon] %%% More so than you think, I think! %[Tony] % OK, but one can form a new group, as I have argued before. 
% Use the "awkward" pt2pt to get the right info shared between % group leaders, make the new group, use unawkward collective % operations on new group (with new context). %%[Lyndon] %% This is only one model of group-group interaction, which in my %% experience and understanding really is still steeped in SPMD. %% Please consider the examples of non SPMD group usage which I %% mailed out. You can say to me - oh, you shouldn't do this kind %% of thing with MPI, Lyndon - if you like, if you believe that. \item No free-for-all group formation. A process must know something about who its collaborators are going to be (minimally, the root of the group). %[Lyndon] % Please see comments above on group creation. %[Tony] % This again is in practice, in Zipcode. % \item Requires explicit coherency of group descriptors, which is not convenient -- application must keep track of which processes have copies and what to do with them. %[Lyndon] % I think all of the proposals will have this problem. %%[Tony] %% Yes, and I think that loosely synchronous operations can maintain %% coherency, in practice. That is, no operations that modify the %% group descriptors (other than cached lookup info) are permitted, %% without loose synchronization. %% This is nasty in that is would prohibit sending descriptors to %% processes not part of the group, so it is a clear trade-off. %% Perhaps such send-to-non-group-member operations could stipulate %% that this group information is somehow ephemeral, and that they %% need to join a new group to keep useful information over time??? %% %%%[Lyndon] %%% I agree that the (loosely) synchronous semantics remove coherency %%% until processes find out about groups of which they are not %%% members. I cannot advocate prohibition of this. I know there is a %%% coherency issue and I think we should let the user deal with it. %%% In practice, the user implements protocols such that if group A %%% communicates with group B, then before group B is destroyed it %%% must have let group A know of its impending doom, unless group A %%% can already have knowledge that this will be the case, which it %%% often can anyway. %[Tony] % Sounds dangerous. What must application do to maintain % coherency, since group descriptors are opaque. % \item Static group model. Processes can join and leave collaborations only with the cooperation of the other group members in forming larger or smaller groups. %[Tony] % No, loosely synchronous process model, unless you mean % cooperation of INITIAL at all such join/leave steps. %%[Lyndon] %% Tony, I have difficulty understanding what you mean. If you are %% thinking of group creation by giving a list of group members then %% I can understand. If you are thinking of the partition operation %% then I cannot understand as this operation seems (to me:-) to be %% a synchronisation anyway. \item No direct support for identifying the group from which a message came. The sender cannot embed its group ID in its message, since group ID's are not globally meaningful. One method to address this problem is to have the sender embed its group context as data in the message, then have the receiver use that value to select among group descriptors that it had already been sent. %[Lyndon] % Yup, the user can ``do it manually with a search''. 
If you want % to invoke this argument then I can dispose of almost everything % in MPI in a period of a few minutes - in fact Steven Zenith will % do it faster - so I refute the validity of the argument and claim % that the MPI interfce should transmit said information. %%[Tony] %% Yes, that is exactly what Zipcode was written to avoid. The %% user wants help managing things like this!!!! %% %% The search, if any, must be MPI-supported, and as efficient as %% possible (eg, AVL trees, hash, partial hash with exceptions). %% % % Further, the receiver is likely to want to be able to ask which % rank in the sender group the sender was. Oh dear, well I suppose % you think that's okay because the sender can put its rank into % the message. This is just being inconvenient to the user who % wants to send an array of something (double complex?) and has % to pack a rank in by copying or sending a pre-message or the % buffer descriptor kind of thing. %%[Tony] %% This is why I remain a strong advocate of (group,rank) %% addresssing in pt2pt. %% %%%[Lyndon] %%% Et moi! %[Tony] % No, you can't know the group or rank in group of sender. % If there were one context per group (isn't that so here?), % then all you need is the rank. With TID_TO_RANK_IN_GROUP % operation, this could be provided, but no wildcarding % or receipt selectivity could be done at this level. % %%[Lyndon] %% Where a member of group A sends to a member of group B then the %% receiver in group B really does often want to know the rank of the %% in sender in group A. This is not hypothetical, we program this way. \end{itemize} \section{Comments \& Alternatives} \begin{itemize} \item The performance of group control functions is explicitly unspecified in order to provide performance insurance. The intent is to encourage program designs whose performance won't be killed if the functionality of groups is expanded so as to be unscalable or require expensive services. %[Lyndon] % I don't think that the intent expressed in the second sentence % is satisfied. For example - group control is allowed to become the % dominant feature of application time complexity. %%[Tony] %% I addressed this in my Step-1 remarks. Please see that. BELOW %[Tony] % No, it just does not provide guarantees that certain kinds % of applications will run OK. (ie, those that do group % creation/deletion relatively often). Zipcode has assumed % that such operations would be relatively seldom. Thus, I do % not quibble that this is a reasonable choice,but a fairer % way to say this is that it may be difficult to support such % applications. That reveals an issue to be studied more. % %%[Lyndon] %% Check. On performance specification this belongs in the %% implementation profile and the specification actually %% cannot say anything about performance. %% \item Adding global group ID's would seriously interfere with layering this proposal on pt-pt. The problem is not generating a unique ID, but using it. If just holding a global group ID were to allow a process to asynchronously ask questions about the group (like translating between rank and TID), then a group information server of some sort would be required, and MPI pt-pt does not provide enough capability to construct such a beast. %[Lyndon] % It is not the global uniqueness of group identifiers which creates % the problem. There are globally unique labels of groups in your % proposal anyway - the value of the default context identifier. 
% The problem is that of allowing query of group information when % that information cannot be recorded in the local process/processor % memory. %% [Rik] %% I thought I said that. % % You claim that point-to-point does not have enough capability to % construct an information server. Firstly I should ask you whether % this is an artefact of the manner in which you have defined the % point-to-point communication. Secondly I assert that your claim % is false. I shall append a description of server implementation % to the foot of this message. %%[Tony] %% Thank you. These points are both well taken (ie these two paragraphs) %%%[Tony] %%% See my comments near the start of this message, regarding %%% the proposed server implementation. What I should have said %%% was that MPI did not provide enough capability to construct %%% a server without consuming a processor, since it neither %%% provides an interrupt-receive function nor specifies that %%% processors be able to time-share between multiple processes. %[Tony] % Perhaps they should do. \item The restriction that only paired-exact-match modules can run in the default group context could be relaxed to something like ``a module that violates the paired-exact-match constraint can run in the default group context if and only if it is the only module to run in that context''. This seems too error-prone to be worth the trouble, since even the standard collective ops might be excluded. (It would also conflict with Implementation Note 1.) %[Tony] % Dump this. \item Group formation can be faster when more information is provided than just the group tag and root process. E.g. if all members of the group can identify all other members of a P-member group, in rank order, then O(log P) time is sufficient; if only the root is known, O(P) time may be required. This aspect should be considered by the groups subcommittee in evaluating scalability. %[Lyndon] % Yes, partition does appear to be O(P) whereas definition by ordered % list appears to be O(log(P)). %%[Tony] %% Also, see what I wrote in my Step-1 comments. BELOW. %% I believe O(log(P)) is still possible. %% %%%[Rik] %%% I'd be interested in seeing the O(log(P)) algorithm, %%% sometime after MPI quiets down. %%%%[Lyndon] %%%% Me too! %[Tony] % No, a non-deterministic broadcast can be used, with a token. % This requires a token server. Again, implementable with fetch+ % add on most systems, or a light reactive server. % % Once the non-deterministic broadcast has finished, a fanin/collapse % is done to the original root, which then frees the token. % \end{itemize} \section{Implementation Notes} \begin{enumerate} \item To generate and broadcast a new context value: Generate the context value without communication by concatenating a locally maintained unique value with the process number in some 0..P-1 global ordering scheme. This assumes that context values can be significantly longer than log(P) bits for an application involving P processes. If not, then a server may be required, in which case by specification it has to be fast (comparable to a pt-pt call). (There is no need to generate a new context for the broadcast since it can be coded to use the default group context by meeting the paired-exact-match constraint.) %[Lyndon] % Please see notes above on the subject of context generation. %%[Tony] %% Please see my Step-1 comments. %[Tony] % Why not just give in and allow the server. % I don't like the paired-exact-match constraint AT ALL. % %%[Lyndon] %% Ni moi!
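A minimal sketch of the generation scheme in Note 1 (illustrative only; the routine name and the multiply-and-add encoding are merely one way to realise the concatenation, and the note's caveat about context values needing more than log(P) bits applies):
\begin{verbatim}
/* Illustrative only: a context value unique across the application,
 * generated with no communication, by combining a never-reused local
 * counter with the process number in a fixed 0..P-1 ordering.       */
#include <stdio.h>

#define NPROCS 16                /* P, known after initialization    */

static long local_counter = 0;   /* per process, monotonically grows */

static long new_context(int myproc)
{
    /* Unique because no two processes share myproc, and one process
     * never repeats local_counter.                                  */
    return local_counter++ * NPROCS + myproc;
}

int main(void)
{
    int myproc = 3, i;           /* this process's global number     */
    for (i = 0; i < 3; i++)
        printf("new context: %ld\n", new_context(myproc));
    return 0;
}
\end{verbatim}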
\item The following is intended only as a sanity check on the claim that this proposal can be layered on MPI. Suggestions that improve efficiency or use fewer contexts will be greatly appreciated. In addition to a default group context, each group descriptor contains a ``group partitioning context'' and a ``group disbanding context'' that are obtained and broadcast at the time the group is created. In the case of the INITIAL group, this is done using paired-exact-match code and any context, immediately after initialization of the MPI pt-pt level. (At that time, no application code will have had a chance to issue wildcard receives to mess things up.) %[Tony] % Seems OK, but why need the paired-exact-match thing again. % Group partitioning is accomplished using pt-pt messages in the partitioning context of the current group (i.e., the one being partitioned), with message tag equal to the group tag provided by the application. In the worst (least scalable) case, the root of the new group must wildcard the source TID. (This violates the paired-exact-match constraint, which is why group formation must happen in a special context.) %[Tony] % Again, OK, but I want to see this work without the paired-exact- % match, if possible. Group disbanding is done with pt-pt messages in the group disbanding context. This keeps group control from being messed up no matter how the application uses wildcards. %[Tony] % So, now, you have concurred with my (previously flamed) idea % that group construction/destruction should be realizable using % pt2pt, just like global operations should do. I like this % because 1) it is explicable to the implementor, 2) it allows % simple initial implementations, 3) it sets some ideas for how % much these things will cost [upper bound]. % %%[Lyndon] %% I still do not concur. It is restrictive on what we can do. \end{enumerate} \section{Examples} {\bf (To be provided)} % % END "Proposal V" %======================================================================% \end{document} %[Lyndon] % Writing a server in the point-to-point layer of MPI in four easy steps %%[Rik] %% by potentially consuming an entire processor %%%[Lyndon] %%% Oh dear, how sad. I know what a pain it can be when you have one %%% thumping great number cruncher which can't multitask, and it sits %%% there as a do nothing server all day long. But look, it's a do %%% nothing server, so it *can* run in the user environment like on %%% the ``host'' or ``login'' processor. Look again, this is because %%% of the ``cannot multitask'' - let us hope that is history :-) % ---------------------------------------------------------------------- % % 1) Partition the INITIAL group into two groups. A singleton group, % SERVER, and a group CLIENT which contains all of the other processes. % % 2) The single process in SERVER group records its TID. % % 3) The processes in INITIAL group allocate a context SERVICE which % they remember either in the group cache or static data or something. % % 4) Use a broadcast in INITIAL group with ``sender'' as the one process % which is also in SERVER group, and the ``receivers'' as the (many) % processes which are also in CLIENT group, in the SERVICE context, in % order to disseminate the TID of the server process. % % [Fanfare] a server process is in place as is a dedicated context for % the purposes of messages required to implement the service. % % [Observation] the mpi point-to-point initialisation can do this % automatically.
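The four steps might look roughly as follows in the ANSI C style used elsewhere in these proposals. This is only a trace-style sketch: every routine below is an invented placeholder, not a proposed MPI call, and a real implementation would perform the partition, the context allocation, and the broadcast rather than merely print them.

#include <stdio.h>

/* Placeholder stand-ins for the four steps above; names are invented. */
static int partition_initial(void)
{
    printf("step 1: partition INITIAL into SERVER and CLIENT\n");
    return 1;                        /* placeholder group descriptor  */
}

static long record_server_tid(void)
{
    printf("step 2: the single SERVER process records its TID\n");
    return 4242;                     /* placeholder TID               */
}

static int allocate_service_context(void)
{
    printf("step 3: allocate context SERVICE in INITIAL\n");
    return 7;                        /* placeholder context           */
}

static long broadcast_server_tid(int context, long tid)
{
    printf("step 4: broadcast TID %ld in context %d\n", tid, context);
    return tid;                      /* every CLIENT now knows it     */
}

int main(void)
{
    int service;
    long server_tid;

    partition_initial();
    server_tid = record_server_tid();
    service    = allocate_service_context();
    server_tid = broadcast_server_tid(service, server_tid);

    printf("server reachable at TID %ld in context %d\n",
           server_tid, service);
    return 0;
}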
%%[Tony] %% Zipcode's postmaster general works in this way, more or less. ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Mon Mar 22 07:20:39 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17057; Mon, 22 Mar 93 07:20:39 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05062; Mon, 22 Mar 93 07:20:16 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 07:20:15 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05009; Mon, 22 Mar 93 07:19:57 -0500 Date: Mon, 22 Mar 93 12:19:43 GMT Message-Id: <14133.9303221219@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: MPI-CONTEXT: PLEASE README - Proposal VI - LaTeX To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk ---------------------------------------------------------------------- \documentstyle{report} \begin{document} \title{Proposal VI\\MPI Context Subcommittee} \author{Lyndon~J~Clarke} \date{March 1993} \maketitle %======================================================================% % BEGIN "Proposal VI" % Written by Lyndon J. Clarke % March 1993 % \newcommand{\LabelNote}[1]{\label{vi:note:#1}} \newcommand{\SeeReferNote}[1]{{\bf See~Note~\ref{vi:note:#1}.}} \newcommand{\LabelSection}[1]{\label{vi:sect:#1}} \newcommand{\ReferSection}[1]{{\bf Section~\ref{vi:sect:#1}.}} \chapter{Proposal VI} %----------------------------------------------------------------------% % BEGIN "Introduction" % \section{Introduction} This chapter proposes that communication contexts and process groupings within MPI appear as related concepts. In particular process groupings appear as ``frames'' which are used in the creation of communication contexts. Communications contexts retain a reference to, and inherit properties of, their process grouping frames. This reflects the observation that an invocation of a module in a parallel program typically operates within one or more groups of processes, and as such any communication contexts associated with invocations of modules also bind certain semantics of process groupings. The proposal provides process identified communication, communications which are limited in scope to single contexts, and communications which have scope spanning pairs of contexts. The proposal makes no statements regarding message tags. It is assumed that these will be a bit string expressed as an integer in the host language. Much of this proposal must be viewed as recommendations to other subcommittees of MPI, primarily the point-to-point communication subcommittee and the collective communications subcommittee. Concrete syntax is given in the style of the ANSI C host language for only purposes of discussion. The detailed proposal is presented in \ReferSection{proposal}, which refers the reader to a set of discussion notes in \ReferSection{notes}. The notes assumes knowledge of the proposal text and are therefore best examined after familiarisation with that text. Aspects of the proposal are discussed in section \ReferSection{discussion}, and it is also recommended that this material be read after familiarisation with the text of the proposal. 
% % END "Introduction" %----------------------------------------------------------------------% %----------------------------------------------------------------------% % BEGIN "Detailed Proposal" % \section{Detailed Proposal} \LabelSection{proposal} This section presents the detailed proposal, discussing in order of appearance: processes; process groupings; communication contexts; point-to-point communication; collective communication. \subsection{Processes} \LabelSection{processes} This proposal views processes in the familiar way, as one thinks of processes in Unix or NX for example. Each process is a distinct space of instructions and data. Each process is allowed to compose multiple concurrent threads and MPI does not distinguish such threads. \subsubsection*{Process Descriptor} Each process is described by a {\it process descriptor\/} which is expressed as an integer type in the host language and has an opaque value which is process local. \SeeReferNote{integer:descriptors} \SeeReferNote{process:identifiers} \SeeReferNote{descriptor:cache} The initialisation of MPI services will assign to each process an {\it own\/} process descriptor. Each process retains its own process descriptor until the termination of MPI services. MPI provides a procedure which returns the own descriptor of the calling process. For example, {\tt pd = mpi\_own\_pd()}. \subsubsection*{Process Creation and Destruction} This proposal makes no statements regarding creation and destruction of processes. \SeeReferNote{dynamic:processes} \subsubsection*{Descriptor Transmission} The value of a process descriptor can be transmitted in a message as an integer since it is an integer type in the host language. However the recipient of the descriptor can make no defined use of the value in the MPI operations described in this proposal --- the descriptor is {\it invalid}. %[Tony] % Then why allow it to be passed? % %%[Lyndon] %% Since it is an integer one cannot prevent the user from passing it. %% The discussion notes should help answer why it is an integer. More %% here. %% We'd like (gd,rank) and (NULL,pd) to be suppliable to point-to-point. %% Because rank will be an integer, then pd must also be an integer, %% so I write that all descriptors are integers for consistency between %% the different descriptors. %% MPI provides a mechanism whereby the user can transmit a valid process descriptor in a message such that the received descriptor is valid. This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. {\tt MPI\_PD\_TYPE}. MPI provides a process descriptor registry service. The operations which this service supports are: register descriptor by name; deregister descriptor; lookup descriptor identifier by name and validate, blocking the caller until the name has been registered. \SeeReferNote{registry:check} Use of this service is not mandated. Programs which can conveniently be expressed without using the service can ignore it without penalty. %[Tony] % I don't get all of this. Why? % %%[Lyndon] %% I don't understand what you don't get. A registry service is just %% an easier (nonscalable, okay) way for concurrent entities to hook up %% with one another, rather than having some common ancestor send descriptors %% around in messages.
%% In fact I don't really need a group or process registry service, as mentioned %% in the discussion section (yes, I know, not well presented), but %% I do need a context registry service, and I'm being consistent %% between different kinds of descriptors again. %% Suggestive syntax for a registry service is pretty boring, so %% I skipped it. %% Note that receipt of a process descriptor in the fashions described above may have a persistent effect on the implementation of MPI at the receiver, and in particular may reserve state. MPI will provide a procedure which invalidates a valid descriptor, allowing the implementation to free reserved state. For example, {\tt mpi\_invalidate\_pd(pd)}. The user is not allowed to invalidate the process descriptor of the calling process. \SeeReferNote{process:coherency} %[Tony] % I do not understand the usefulness or formal need for all this % validation and invalidation of process identifiers. Why, where % did it come from, what does it get us? How can this be related % to anything I have seen before? % %%[Lyndon] %% Acquisition of a descriptor requires an allocator, and consumes %% resource. In such cases it is good practice to provide a %% deallocator which frees resource. %% % [Tony] % I understand the idea here, but not all the details. Can this % be justified/exemplified/simplified? % %%[Lyndon] %% Unless I am missing the additional point in this comment, I can't %% see the problem. Probably poor presentation of Proposal I++ :-) %% \subsubsection*{Descriptor Attributes} This proposal makes no statements regarding processor descriptor attributes. \subsection{Process Groupings} \LabelSection{groupings} This proposal views a process grouping as an ordered collection of (references to?) distinct processes, the membership and ordering of which does not change over the lifetime of the grouping. \SeeReferNote{dynamic:groups} The canonical representation of a grouping reflects the process ordering and is a one-to-one map from $Z_N$ to the descriptor of the $N$ processes composing the grouping. The structure of a process grouping is defined by a process grouping topology and this proposal makes no further statements regarding such structures. %[Tony] % It is not obvious to me that we want to enforce topology at this % juncture. However, we could register topology information in % the extensible structure strategy of Proposal V. % %%[Lyndon] %% I am deliberately making weak statements about topology while %% acknolwedging the the process topology subcommittee. %% \subsubsection*{Group Descriptor} Each group is identified by a {\it group descriptor\/} which is expressed as an integer type in the host language and has an opaque value which is process local. \SeeReferNote{integer:descriptors} \SeeReferNote{group:identifiers} \SeeReferNote{descriptor:cache} The initialisation of MPI services will assign to each process an {\it own} group descriptor for a process grouping of which the process is a member. Each process retains its own group descriptor and membership of the process grouping until the termination of MPI services. \SeeReferNote{group:blobs} MPI provides a procedure which returns the descriptor of the home group of the calling process. For example, {\tt gd = mpi\_home\_gd()}. \SeeReferNote{own:group} \subsubsection*{Group Creation and Deletion} MPI provides a procedure which allows users to dynamically create one or more groups which are subsets of existing groups. 
For example, {\tt gdb = mpi\_group\_partition(gda, key)} creates one or more new groups {\tt gdb} by partitioning an existing group {\tt gda} into one or more distinct subsets. This procedure is called by and synchronises all members of {\tt gda}. MPI provides a procedure which allows users to dynamically create one group by explicit definition of its membership as a list of process descriptors. For example, {\tt gd = mpi\_group\_definition(listofpd)} creates one new group {\tt gd} with membership and ordering described by the process descriptor list {\tt listofpd}. This procedure is called by and synchronises all processes identified in {\tt listofpd}. %[Tony] % loosely synchronous % %%[Lyndon] %% Do we mean the same thing? the constructors require each member %% of the object under construction to participate in the construction, %% and return a descriptor for the constructed operation. Therefore %% no member can terminate construction before all other members have %% commenced, at least. %% %[Rik] % Why such strong synchronization?! % If this has the same synchronization properties as creation, % then it won't return until all members have made the call. % But that means you actually have to sync everybody, which % implies 2 log(P) messages. Isn't it enough to just require % a loosely synchronous call, and use a counting strategy? % MPI provides a procedure which allows users to delete a created group. This procedure accepts the descriptor of a group which was created by the calling process and destroys the identified group. For example, {\tt mpi\_group\_deletion(gd)} deletes an existing group {\tt gd}. This procedure is called by and synchronises all members of {\tt gd}. MPI provides additional procedures which allow users to construct process groupings which have a process grouping topology. \subsubsection*{Descriptor Transmission} The value of a group descriptor can be transmitted in a message as an integer since it is an integer type in the host language. However the recipient of the descriptor can make no defined use of the value in the MPI operations described in this proposal --- the descriptor is {\it invalid}. %[Tony] % Then why allow it to be passed? % %%[Lyndon] %% Since it is an integer one cannot prevent the user from passing it. %% The discussion notes should help answer why it is an integer. More %% here. %% We'd like (gd,rank) and (NULL,pd) to be suppliable to point-to-point. %% Because rank will be an integer, then pd must also be an integer, %% so I write that all descriptors are integers for consistency between %% the different descriptors. %% MPI provides a mechanism whereby the user can transmit a valid group descriptor in a message such that the received descriptor is valid. This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. {\tt MPI\_GD\_TYPE}. MPI provides a group descriptor registry service. The operations which this service supports are: register descriptor by name; deregister descriptor; lookup descriptor identifier by name and validate, blocking the caller until the name has been registered. \SeeReferNote{registry:check} Use of this service is not mandated. Programs which can conveniently be expressed without using the service can ignore it without penalty. %[Tony] % I don't get all of this. Why? % %%[Lyndon] %% I don't understand what you don't get.
A registry service is just %% an easier (nonscalable, okay) way for concurrent entities to hook up %% with one another, rather than having some common ancestor send descriptors %% around in messages. %% In fact I don't really need a group or process registry service, as mentioned %% in the discussion section (yes, I know, not well presented), but %% I do need a context registry service, and I'm being consistent %% between different kinds of descriptors again. %% Suggestive syntax for a registry service is pretty boring, so %% I skipped it. %% Note that receipt of a group descriptor in the fashions described above may have a persistent effect on the implementation of MPI at the receiver, and in particular may reserve state. MPI will provide a procedure which invalidates a valid descriptor, allowing the implementation to free reserved state. For example, {\tt mpi\_invalidate\_gd(gd)}. The user is not allowed to invalidate the own group descriptor of the process or the group descriptor of any group created by the calling process. \SeeReferNote{group:coherency} %[Tony] % I do not understand the usefulness or formal need for all this % validation and invalidation of process identifiers. Why, where % did it come from, what does it get us? How can this be related % to anything I have seen before? % %%[Lyndon] %% Acquisition of a descriptor requires an allocator, and consumes %% resource. In such cases it is good practice to provide a %% deallocator which frees resource. %% % [Tony] % I understand the idea here, but not all the details. Can this % be justified/exemplified/simplified? % %%[Lyndon] %% Unless I am missing the additional point in this comment, I can't %% see the problem. Probably poor presentation of Proposal I++ :-) %% \subsubsection*{Descriptor Attributes} MPI provides a procedure which accepts a valid group descriptor and returns the rank of the calling process within the identifier group. For example, {\tt rank = mpi\_group\_rank(gd)}. MPI provides a procedure which accepts a valid group descriptor and returns the number of members, or {\it size}, of the identified group. For example, {\tt size = mpi\_group\_size(gd)}. MPI provides a procedure which accepts a valid group descriptor and process order number, or {\it rank}, and returns the valid descriptor of the process to which the supplied rank maps within the identified group. For example, {\tt pd = mpi\_group\_pd(gd, rank)}. \SeeReferNote{pd:to:rank} MPI provides additional procedures which allow users to determine the process grouping topology attributes. %[Tony] It is not obvious to me that we want to enforce topology at this % juncture. However, we could register topology information in % the extensible structure strategy of Proposal V. % %%[Lyndon] %% I am deliberately making weak statements about topology while %% acknolwedging the the process topology subcommittee. %% \subsection{Communication Contexts} \LabelSection{contexts} This proposal views a communication context as a uniquely identified reference to exactly one process grouping, which is a field in a message envelope and may therefore be used to distinguish messages. The context inherits the referenced process grouping as a ``frame''. Each process grouping is used as a frame for multiple contexts. \subsubsection*{Context Descriptor} Each context is identified by a {\it context descriptor\/} which is expressed as an integer type in the host language and has an opaque value which is process local. 
\SeeReferNote{integer:descriptors} \SeeReferNote{context:identifiers} \SeeReferNote{descriptor:cache} The creation of MPI process groupings allocates an {\it own\/} context which inherits the created grouping as a frame and can be thought of as a property of the created grouping. The grouping retains its own context until MPI process grouping deletion. MPI provides a procedure which accepts a valid group descriptor and returns the context descriptor of the own context of the identified group. For example, {\tt cd = mpi\_own\_cd(gd)}. \SeeReferNote{own:context} \subsubsection*{Context Creation and Deletion} MPI provides a procedure which allows users to dynamically create contexts. This procedure accepts a valid descriptor of a group of which the calling process is a member, and returns a context descriptor which references the identified group. For example, {\tt cd = mpi\_context\_create(gd)}. This procedure is called by and synchronises all members of {\tt gd}. \SeeReferNote{dynamic:contexts} %[Tony] % loosely synchronous % %%[Lyndon] %% Do we mean the same thing? the constructors require each member %% if the object under construction to participate in the construction, %% and return a descriptor for the constructed operation. Therefore %% no member can terminate construction before all other members have %% commenced, at least %% MPI provides a procedure which allows users to destroy created contexts. This procedure accepts a valid context descriptor which was created by the calling process and deletes that context identifier. For example, {\tt mpi\_context\_delete(cd)}. This procedure is called by and synchronises all members of the frame of {\tt cd}. \subsubsection*{Descriptor Transmission} The value of a context descriptor can be transmitted in a message as an integer since it is an integer type in the host language. However the recipient of the descriptor can make no defined use of the value of in the MPI operations described in this proposal --- the descriptor is {\it invalid}. %[Tony] % Then why allow it to be passed? % %%[Lyndon] %% Since it is an integer one cannot prevent the user from passing it. %% The discussion notes should help answer why is is an integer. More %% here. %% We'd like (gd,rank) and (NULL,pd) to be suppliable to point-to-point. %% Because rank will be an integer, then pd must also be an integer, %% so I write that all descriptors are integers for consistency beteen %% the different descriptors. %% MPI provides a mechanism whereby the user can transmit a valid context descriptor in a message such that the received descriptor is valid. This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. {\tt MPI\_CD\_TYPE}. %[Tony] % I don't get all of this. Why? % %%[Lyndon] %% I don't understand what you don't get. A registry service is just %% an easier (nonscalable, okay) way for concurrent entities to hook up %% with one another, rather than having some common ancestor send descriptors %% around in messages. %% In fact I don't really need a group or process registry service, as mentioned %% in the discussion section (yes, I know, not well presented), but %% I do need a context registry service, and I'm being consistent %% between different kinds of descriptors again. %% Suggestive syntax for a registry service is pretty boring, so %% I skipped it. %% MPI provides a context descriptor registry service. 
The operations which this service supports are: register descriptor by name; deregister descriptor; lookup descriptor identifier by name and validate, blocking the caller until the name has been registered. \SeeReferNote{registry:check} Use of this service is not mandated. Programs which can conveniently be expressed without using the service can ignore it without penalty. Note that receipt of a context descriptor in the fashions described above may have a persistent effect on the implementation of MPI at the receiver, and in particular may reserve state. MPI will provide a procedure which invalidates a valid descriptor, allowing the implementation to free reserved state. For example, {\tt mpi\_invalidate\_cd(cd)}. The user is not allowed to invalidate the own context descriptor of a group or the context descriptor of any context created by the calling process. \SeeReferNote{context:coherency} %[Tony] % I do not understand the usefulness or formal need for all this % validation and invalidation of process identifiers. Why, where % did it come from, what does it get us? How can this be related % to anything I have seen before? % %%[Lyndon] %% Acquisition of a descriptor requires an allocator, and consumes %% resource. In such cases it is good practice to provide a %% deallocator which frees resource. %% % [Tony] % I understand the idea here, but not all the details. Can this % be justified/exemplified/simplified? % %%[Lyndon] %% Unless I am missing the additional point in this comment, I can't %% see the problem. Probably poor presentation of Proposal I++ :-) %% \subsubsection*{Descriptor Attributes} MPI provides a procedure which allows users to determine the process grouping which is the frame of a context. For example, {\tt gd = mpi\_context\_gd(cd)}. \subsection{Point-to-Point Communication} \LabelSection{point2point} This proposal recommends three forms for MPI point-to-point message addressing and selection: null context; closed context; open context. It is further recommended that messages communicated in each form are distinguished such that a Send operation of form X cannot match with a receive operation of form Y, which requires that the form is embedded into the message envelope. The three forms are described followed by considerations of uniform integration of these forms in the point-to-point communication section of MPI. \subsubsection*{Null Context Form} The {\it null context\/} form contains no message context. \SeeReferNote{null:context} Message selection and addressing are expressed by {\tt (pd, tag)} where: {\tt pd} is a process descriptor; {\tt tag} is a message tag. Send supplies the {\tt pd} of the receiver. Receive supplies the {\tt pd} of the sender. Receive can wildcard on {\tt pd} by supplying the value of a named constant process descriptor, e.g. {\tt MPI\_PD\_WILD}. This proposal makes no statement about the provision for wildcard on {\tt tag}. \subsubsection*{Closed Context Form} The {\it closed context\/} form permits communication between members of the same context. \SeeReferNote{closed:context} Message selection and addressing are expressed by {\tt (cd, rank, tag)} where: {\tt cd} is a context descriptor; {\tt rank} is a process rank in the frame of {\tt cd}; {\tt tag} is a message tag. Send supplies the {\tt cd} of the receiver and sender, and the {\tt rank} of the receiver. Receive supplies the {\tt cd} of the sender and receiver, and the rank of the sender.
The {\tt (cd, rank)} pair in Send (Receive) is sufficient to determine the process descriptor of the receiver (sender). Receive cannot wildcard on {\tt cd}. Receive can wildcard on {\tt rank} by supplying the value of a named constant integer, e.g. {\tt MPI\_RANK\_WILD}. This proposal makes no statement about the provision for wildcard on {\tt tag}. \subsubsection*{Open Context Form} The {\it open context\/} form permits communication between members of any two contexts. \SeeReferNote{open:context} Message selection and addressing are expressed by {\tt (lcd, rcd, rank, tag)} where: {\tt lcd} is a ``local'' context descriptor; {\tt rcd} is a ``remote'' context descriptor; {\tt rank} is a process rank in the frame of {\tt rcd}; {\tt tag} is a message tag. Send supplies the context descriptor for the sender in {\tt lcd}, the context descriptor for the receiver in {\tt rcd}, and the {\tt rank} of the receiver in the frame of {\tt rcd}. Receive supplies the context descriptor for the receiver in {\tt lcd}, the context descriptor for the sender in {\tt rcd}, and the {\tt rank} of the sender in the frame of {\tt rcd}. The {\tt (rcd, rank)} pair in Send (Receive) is sufficient to determine the process descriptor of the receiver (sender). Receive cannot wildcard on {\tt lcd}. Receive can wildcard on {\tt rcd} by supplying the value of a named constant context descriptor, e.g. {\tt MPI\_CD\_WILD}, in which case Receive {\it must\/} also wildcard on {\tt rank} as there is insufficient information to determine the process descriptor of the sender. Receive can wildcard on {\tt rank} by supplying the value of a named constant integer, e.g. {\tt MPI\_RANK\_WILD}. This proposal makes no statement about the provision for wildcard on {\tt tag}. \subsubsection*{Uniform Integration} The three forms of addressing and selection described have different syntactic frameworks. We can consider integrating these forms into the point-to-point section of MPI by defining a further orthogonal axis (as in the multi-level proposal of Gropp \& Lusk) which deals with the form. This is at the expense of multiplying the number of Send and Receive procedures by a factor of three, and some difficulty in details of the current point-to-point working document which uniformly assumes a single addressing and selection form. There are various approaches to unification of the syntactic frameworks which may simplify integration. Three options are now described, each based on retention and extension of the framework of one form. These options each have advantages and disadvantages. \paragraph*{Option i:} The framework of the open context form is adopted and extended. We introduce the {\it null\/} descriptor, the value of which is defined by a named constant, e.g. {\tt MPI\_NULL}. The null context form is expressed as {\tt (MPI\_NULL, MPI\_NULL, pd, tag)}, which is a little clumsy. The closed context form is expressed as {\tt (MPI\_NULL, cd, rank, tag)}, which is marginally inconvenient. The open context form is expressed as {\tt (lcd, rcd, rank, tag)}, which is of course natural. \paragraph*{Option ii:} The framework of the closed context form is adopted and extended. We introduce the {\it null\/} descriptor, the value of which is defined by a named constant, e.g. {\tt MPI\_NULL}. The null context form is expressed as {\tt (MPI\_NULL, pd, tag)}, which is marginally inconvenient. The closed context form is expressed as {\tt (cd, rank, tag)}, which is of course natural.
Expression of the open context form requires a little more work. We can use the {\tt cd} field as ``shorthand notation'' for the {\tt (lcd, rcd)} pair at the expense of introducing some trickery. We can define a mechanism which ``globs'' together these two fields into an identifier which Send and Receive can distinguish from a context identifier and treat as the shorthand notation. Then we should also define a mechanism by which a receiver which has completed a Receive with wildcard on {\tt rcd} is able to determine the valid context descriptor of the sender. This is inconvenient. %[Tony] % I dislike this intensely. There should be a group-pair data % structure. Group is never a pair of sub-groups. It is a % bad idea. This is all to get around changing syntax, no? % %% [Lyndon] %% I dislike this also. Of course it is all to get around extending %% the syntax, it stinks of that. %% \paragraph*{Option iii:} The framework of the null context form is adopted and extended. The null context form is expressed as {\tt (pd, tag)}, which is of course natural. Expression of the open and closed context forms requires a little more work. We can use the {\tt pd} field as ``shorthand notation'' for {\tt (cd, rank)} and {\tt (lcd, rcd, rank)} by continuation of the trickery used in the previous option. This is clumsy. %[Tony] % I dislike this intensely. There should be a group-pair data % structure. Group is never a pair of sub-groups. It is a % bad idea. This is all to get around changing syntax, no? % %% [Lyndon] %% I dislike this also. Of course it is all to get around extending %% the syntax, it stinks of that. %% \subsection{Collective Communication} \LabelSection{collective} Symmetric collective communication operations are compliant with the closed context form described above. This proposal recommends that such operations accept a context descriptor which identifies the context and frame in which they are to operate. MPI does plan to describe symmetric collective communication operations. It is not possible to determine whether the proposal is sufficient to allow implementation of the collective communication section of MPI in terms of the point-to-point section of MPI without loss of generality, since the collective operations are not yet defined. %[Tony] Check, but it is desirable that they be so writable, so we will % have to watch. % Asymmetric collective communication operations, especially those in which the sender(s) and receiver(s) are distinct processes, are compliant with the open context form described above. This proposal recommends that such operations accept a pair of context descriptors (perhaps in a ``glob'' form) which identify the contexts and frames in which they are to operate. MPI does not plan to describe asymmetric collective communication operations. Such operations are expressive when writing programs which go beyond the SPMD model and comprise communicating, functionally distinct process groupings. This proposal recommends that such operations should be considered in some reincarnation of MPI. % % END "Proposal" %----------------------------------------------------------------------% % [Tony] % So, I gather that a set of groups is passable to a collcomm, % and a pair is passable to a pt2pt. That is neat, but it should % still be a separate data structure, with separate calls from % the intra-group version (at least for the pt2pt calls). % %% [Lyndon] %% Currently, I am only thinking of passing either singlets or duplets %% to point-to-point and collective.
%% I was trying to avoid separate - extra - calls to point-to-point
%% because that is already very large and there is a swell of concern
%% about the size thereof.
%%
%----------------------------------------------------------------------%
% BEGIN "Discussion & Notes"
%
\section{Discussion \& Notes}

This section comprises a discussion of certain aspects of this proposal followed by the notes referenced in the detailed proposal.

\subsection{Discussion} \LabelSection{discussion}

We can dissect the proposal into two parts: an SPMD model core; an MIMD model annex. In this discussion the dissection is exposed and the conceptual foundation of each part is described. The discussion also presents arguments for and against the MIMD model annex, and to some extent explores the consequences of these arguments for MPI in a wider sense.

\subsubsection*{SPMD model core}

The SPMD model core provides noncommunicative process groupings and communication contexts for writers of SPMD parallel libraries. It is intended to provide expressive power beyond the ``SPIMD'' model, in which processes execute in a strictly SIMD fashion. The material describing processes in \ReferSection{processes} is simplified:
\begin{itemize}
\item processes have identical instruction blocks and different data blocks;
\item process descriptor transmission and registry become redundant (Note, however, that they are anyway redundant as context descriptor transmission and registry coupled with context descriptor attribute query and group descriptor attribute query are capable of providing access to all process descriptors.);
\item dynamic process models are not considered.
\end{itemize}
The material describing process groupings in \ReferSection{groupings} is simplified:
\begin{itemize}
\item group descriptor transmission and registry become redundant (Note, however, that they are anyway redundant as context descriptor transmission and registry coupled with context descriptor attribute query are capable of providing access to all group descriptors.);
\item the own process grouping explicitly becomes a group containing all processes.
\end{itemize}
The material describing communication contexts in \ReferSection{contexts} is simplified:
\begin{itemize}
\item context descriptor transmission and registry become unnecessary.
\end{itemize}
The material describing point-to-point communication in \ReferSection{point2point} is simplified:
\begin{itemize}
\item the open context form becomes redundant;
\item uniform integration ``Option i'' is deleted, and ``Option ii'' loses ``globs'' thus becoming simple enough that ``Option iii'' need not be further considered.
\end{itemize}
The material describing collective communication in \ReferSection{collective} is simplified:
\begin{itemize}
\item there is no possibility of collective communication operations spanning more than one context.
\end{itemize}

\subsubsection*{MIMD model annex}

The MIMD model annex extends and modifies the SPMD model core to provide expressive power for MIMD programs which combine (coarse grain) function and data driven parallelism. The MIMD model annex is not intended to provide expressive power to fine grained function driven parallel programs --- it is conjectured that message passing approaches such as MPI are not suited to fine grained function driven or data driven programming. The annex is intended to provide expressive power for the ``MSPMD'' model, which is now described.
One of the simplest MIMD models is the ``host-node'' model, familiar in EXPRESS and PARMACS, containing two functional groups: one node group (SPMD like); one host group (a singleton). The ``parallel client-server'' model, in which each of the $n$ clients is composed of parallel processes, and in which the server may also be composed of parallel processes, contains $1+n$ functional groups: $n$ client groups (SPMD like); one server group (singleton, SPMD like). The ``host-node'' model is a case of this model in which the host can be viewed as a singleton client and the nodes can be viewed as an SPMD like server (or vice versa!). The ``parallel module graph'' model, in which each module within the graph may be composed of parallel processes (singleton, SPMD like), contains any number of functional groups with arbitrarily complex relations. The ``parallel client-server'' model is a case of this model in which the module graph contains arcs joining the server to each client. The MIMD model annex is intended to provide expressive power for the ``parallel module graph'' model, which I refer to as the MSPMD model. This model requires support at some level as commercial and modular applications are increasingly moving into parallel computing. The debate is whether or not message passing approaches such as MPI (which I refer to simply as MPI) should provide for this model. The negative argument is that such SPMD modules should be controlled, and should communicate with one another, as ``parallel processes'' at the distributed operating system level. The argument has some appeal as the world of distributed operating systems must deal with difficult issues such as process control and coherency. Avoidance of duplication in MPI allows MPI to focus on provision of a smaller set of facilities with greater emphasis on maximum performance for data driven SPMD parallel programs. The positive argument is that communication between SPMD modules requires high performance, and MPI can provide such performance with tuned semantics which expect the user to deal with coherency issues. There is also the argument that MPI is able to deal with this in a shorter time than development and standardisation procedures for distributed operating systems. The latter argument is comparable with the argument for MPI versus parallel compilation.

\subsection{Notes} \LabelSection{notes}

\begin{enumerate}
\item\LabelNote{integer:descriptors} Descriptors are a plentiful but bounded resource. They are expressed as integers for two reasons. Firstly, this expression facilitates uniform binding to ANSI~C and Fortran~77. Secondly, there is a later convenience in ensuring that process descriptors in particular are expressed as integers, and all the descriptors are made integers for consistency. In practice the descriptor could be an index into a table of object description structures, or a pointer to such a structure.
\item\LabelNote{descriptor:cache} It is possible to allow the user to save user defined attributes in descriptors, as Littlefield has suggested. Such attributes should not be communicated in either descriptor transmission or the descriptor registry.
\item\LabelNote{process:identifiers} The process descriptor is not a global unique process identifier but must reference such an identifier. The use of descriptors hides such identifiers in order that they may be implementation dependent, and enables more efficient access to additional process information in implementations.
For example, the process identifier of a receiver may not be sufficient to route a message to the receiver, and this information can be referenced from the process descriptor.
\item\LabelNote{group:identifiers} The group descriptor is not a global unique group identifier but must reference such an identifier. The use of descriptors hides such identifiers in order that they may be implementation dependent, and enables more efficient access to additional group information in implementations. For example, the size of the group and the map from member ranks to process descriptors can be referenced from the group descriptor.
\item\LabelNote{context:identifiers} The context descriptor is not a global unique context identifier but must reference such an identifier. The use of descriptors hides such identifiers in order that they may be implementation dependent, and enables more efficient access to additional context information in implementations. For example, the group descriptor of the frame can be referenced from the context descriptor.
\item\LabelNote{dynamic:processes} The proposal does not prevent a process model which allows dynamic creation and termination of processes; however, it does not favour an asynchronous process creation model in which singleton processes are created and terminated in an unstructured fashion. The proposal does favour a model in which blobs of processes are created (and terminated) in a concerted fashion, and in which each group so created is assigned a different own process grouping. This model does not take into account the potential desire to expand or contract an existing blob of processes in order to take into account (presumably slowly) time varying workloads. The author suggests that concerted blob expand and contract operations are more suitable for this purpose than asynchronous spawn/kill operations.
\item\LabelNote{registry:check} There should probably also be a registry operation which checks whether a name has been registered without the possibility of blocking the calling process indefinitely. The same registry could be used for process, group and context descriptors.
\item\LabelNote{process:coherency} In the static process model it is assumed that processes are created in concert, and that termination of one process implies termination of all processes. In this case there is no coherency problem associated with processes and process descriptors. In a dynamic process model there is a coherency problem with processes and process descriptors, since a process can retain the descriptor of a process which has been deleted. The proposal expects the user to ensure coherent usage.
\item\LabelNote{dynamic:groups} Process groupings are dynamic in the sense that they can be created at any time, and static in the sense that the membership is constant over the lifetime of the process grouping. The author suggests concerted group expand and contract operations are more generally useful than asynchronous join/leave operations.
\item\LabelNote{group:blobs} MPI has discussed the concept of the ``all'' group which contains all processes. The ``own'' group concept is intended to be a generalisation of the ``all'' group concept which is expressive for programs including and beyond the SPMD model. Processes are created in ``blobs'', where each member of a blob is a member of the same own process grouping, and different blobs have different own process groupings. An SPMD program is a single blob.
A host-node program comprises two blobs, the node blob and the host blob (which is a singleton). There is a sense in which a blob has a locally SPMD view.
\item\LabelNote{own:group} This procedure looks like a process descriptor attribute query.
\item\LabelNote{group:coherency} Dynamic processes potentially introduce a group coherency problem, as a group descriptor can contain the process descriptor of a process which has been deleted. Group transmission introduces a group and group descriptor coherency problem, since a process can retain a group descriptor of a group which has been deleted. The proposal expects the user to ensure coherent usage.
\item\LabelNote{pd:to:rank} The proposal did not include a function to convert a {\tt (gd, pd)} pair into a rank. It is suggested that this inverse map is allowed to be arbitrarily slow.
\item\LabelNote{dynamic:contexts} Marc Snir has described a method by which global unique group identifiers can be generated without use of shared global data. The description of context creation anticipates a similar method for global unique context identifier generation. However, the synchronisation requirement of this method makes it unnecessary for context creation. Synchronisation at creation of a context within a process grouping frame implies that all processes within the frame perform the same context creations in the same sequence. Therefore the combination of global unique frame identifier and context creation sequence number provides a global unique context identifier without communication.
\item\LabelNote{own:context} This procedure looks like a group descriptor attribute query.
\item\LabelNote{context:coherency} The retention of a reference to a group descriptor by a context introduces the potential for a context and context descriptor coherency problem, as a context could contain a reference to the group descriptor of a group which has been deleted. This could be circumvented by demanding that all such contexts are deleted before a group is deleted. Context descriptor transmission introduces a coherency problem since a process can retain the descriptor of a context which has been deleted. The proposal expects the user to ensure coherent usage.
\item\LabelNote{null:context} This is somewhat like PARMACS and PVM~3. It is general purpose, but not particularly expressive. It does not provide facilities for writers of parallel libraries.
\item\LabelNote{open:context} This is somewhat like ZIPCODE and VENUS. It is expressive in certain SPMD programs where noncommunicative data driven parallel computations can be performed concurrently. It provides facilities for writers of SPMD like parallel libraries.
\item\LabelNote{closed:context} This is somewhat like CHIMP and PVM~2. It is expressive in certain MIMD programs where communicative data driven parallel computations can be performed concurrently. It provides facilities for MSPMD like parallel libraries.
\end{enumerate}
%
% END "Discussion & Notes"
%----------------------------------------------------------------------%
%----------------------------------------------------------------------%
% BEGIN "Conclusion"
%
\section{Conclusion}

This chapter has presented and discussed a proposal for communication contexts within MPI.
In the proposal process groupings appeared as frames for the creation of communication contexts, and communication contexts retained certain properties of the frames used in their creation. The author strongly recommends this proposal to the committee. % % END "Conclusion" %----------------------------------------------------------------------% % % END "Proposal VI" %======================================================================% \end{document} ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Mon Mar 22 11:39:45 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA25825; Mon, 22 Mar 93 11:39:45 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14730; Mon, 22 Mar 93 11:38:27 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 11:38:25 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14722; Mon, 22 Mar 93 11:38:23 -0500 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Mon, 22 Mar 93 08:34 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA06464; Mon, 22 Mar 93 08:32:36 PST Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA16539; Mon, 22 Mar 93 08:32:32 PST Date: Mon, 22 Mar 93 08:32:32 PST From: rj_littlefield@pnlg.pnl.gov Subject: RE: Rik's comments on Lyndon's Proposal I To: jim@meiko.co.uk, lyndon@epcc.edinburgh.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, rj_littlefield@pnlg.pnl.gov, tony@aurora.cs.msstate.edu Cc: d39135@carbon.pnl.gov Message-Id: <9303221632.AA16539@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Lyndon says > There is no proposal II. There is however a proposal VI. This > naming/numbering is Tony's suggestion with which I concur. Is this the same or different from the "proposal VI" I sketched last night? For reference, that was (Rik said) > Here's a sketch of a new proposal (VI ?). > > . The functionality of point-to-point communications is per Snir, > Lusk, and Gropp, augmented by my proposed MPI_FORM_CONTEXT to allow > assembling arbitrary collections of processes. (Marc has already > accepted this in a private email to me -- don't know why he > didn't post it.) > > . Performance expectations of point-to-point communications are > explicitly stated as follows: > > - MPI_COPY_CONTEXT does not synchronize the participating processes > and costs significantly less than a point-to-point fanout among > them (e.g., it uses a communication-free counting strategy); > > - all other context formation routines cost no more than if they > were implemented using a single fanin/fanout among the > participating processes; > > - translation of (context,rank) to absolute processor ID costs no > more than if it were implemented via the lookup table that Snir > suggests. > > . Groups and contexts are not equal. A group consists of a base > context (from which other contexts can be created quickly by > MPI_COPY_CONTEXT), plus topology information, plus my cacheing > facility. Lyndon says > I really am very sorry to have messed you about. I have no idea what this refers to. It would bother me to have MPI produce a bad standard. Not much else does. 
I would be delighted to have all of my specific suggestions and/or words replaced by something better. --Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Mon Mar 22 12:30:13 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA27511; Mon, 22 Mar 93 12:30:13 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17027; Mon, 22 Mar 93 12:29:16 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 12:29:14 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from cs.sandia.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17019; Mon, 22 Mar 93 12:29:12 -0500 Received: from newton.sandia.gov (newton.cs.sandia.gov) by cs.sandia.gov (4.1/SMI-4.1) id AA17003; Mon, 22 Mar 93 10:29:10 MST Received: by newton.sandia.gov (5.57/Ultrix3.0-C) id AA00503; Mon, 22 Mar 93 10:30:02 -0700 Date: Mon, 22 Mar 93 10:30:02 -0700 From: mpsears@newton.cs.sandia.gov (Mark P. Sears) Message-Id: <9303221730.AA00503@newton.sandia.gov> To: mpi-context@cs.utk.edu Subject: A new proposal for context and groups. Proposal VI Mark P. Sears Sandia National Laboratories The following proposal for context and group definitions is intended to fill a gap in Tony Skjellum's list. It is most closely related to Rik Littlefield's proposal V. The main difference is that in my proposal, context and groups are completely orthogonal, both in purpose and function. There are a number of advantages to this proposal: 1) The meanings of context and group are simple and it is easy to see how they can be implemented quickly and efficiently. The proposal is also free of chicken vs. egg problems. 2) MPI 1.0 can be closed at the point-to-point level without any reference to groups or group operations. We might actually finish on time. 3) Other proposals which combine aspects of context and group as defined here could be layered on top of this proposal, but not vice versa. Indeed, some of these proposals are very different from each other and it is not clear which is the best. 4) Elsewhere in MPI we have agreed to define low level layers on which higher level stuff can be built. This proposal follows that line of reasoning. 5) This proposal requires no server code and most of the group code is not even parallel. I did a test implementation of groups as defined here and was able to build code for identity groups, permutation groups, linear and bilinear groups, general set groups (Tony's favorite), composition groups, Cartesian products and cartesian subsets (my favorite), all in about 700 lines of code. The group definition really lends itself to object-oriented user extensibility, like X widgets but easier. 6) Groups are easily constructed and destroyed, since no global communication is required. Dynamic groups are not excluded, although they must be used carefully. Since groups have no associated context, there are no resources limiting their construction other than memory and CPU time. To summarize, in this proposal: 1) The purpose of context is to allow different software modules to maintain independence by allowing independence of tag spaces. A context is an integer-valued extension to the tag field which must match exactly for both sender and receiver. Allocation and deallocation of contexts must occur globally and synchronously. 
Two predefined contexts exist, one for internal use by MPI and one a free-for-all context for application programs.

2) Processes are addressed by their rank within the parallel task. This global rank is fixed and assigned in an implementation-defined way. There must exist environment functions which obtain the topology of the global process assignment (hypercube, mesh, random network, ...) [ This is an area where MPI 2.0 could really break some new ground: define interaction between different parallel tasks, define creation and deletion of parallel tasks, ...]

3) The purpose of groups is to allow the management of subsets of processes in a parallel task. A group is defined simply as an ordered set of unique integers (the elements). Two groups are functionally the same if they define the same ordered set. Groups have no associated context, default or otherwise. They are also free of state. Groups are defined locally by construction, and have no global meaning. A group can be passed from one process to another by passing the information needed to construct it. A process is a member of a group if its global rank is an element of the group.

4) Groups have an implicit topology, defined by the ordering of the elements. Any other ordering can be defined by constructing a new group with the same elements in the new order. There is no need for any other topology function.

5) Any group implementation must define at least the following functions:

   order(group)              -- Number of elements in the group.
   range(group)              -- Upper limit on elements. Lower limit is always zero.
   iselement(group, element) -- Predicate to determine if element is in a group.
   rank(group, element)      -- Compute rank of element.
   element(group, rank)      -- Compute element given rank.

Some other proposals have skimped on the need for being able to compute the rank of an element or the predicate defining whether something is even an element of the group. Using these functions, it is easy to construct a minimum spanning tree or set of binary exchanges for all of the interesting collective communication functions, for any group. This information could be cached with the local group definition. There is no reason to disallow user-defined groups (e.g. dynamic groups). Note that since groups are ordered, the previous and next elements are easily computed using the rank and element functions.

6) MPI collective communication functions which operate on subsets of processors use groups to define the subset of interest, and must be called with the same context, tag(s) and group, by at least all processes which are members of the group. Note that with this definition of group it is the responsibility of the caller to provide context and tags. Two (or more) such communication functions are guaranteed to complete if they are called in the same order on all processes which are members of both groups (this condition is less stringent for multithreaded environments). Otherwise the program is erroneous.
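As an illustration of the functions in point 5, here is a minimal sketch in C of one possible group representation, an explicitly stored ordered set of global ranks. The type name group_t and these exact signatures are hypothetical, not part of the proposal; a Cartesian product or subset group would instead keep a compact descriptor and compute rank() and element() arithmetically, which is presumably where the extensibility mentioned above pays off.

    /* Illustrative sketch only: a group as an ordered set of unique
       global ranks, stored explicitly.  All names are hypothetical. */
    typedef struct {
        int  n;         /* number of elements (the order of the group) */
        int *elements;  /* elements[r] is the element with rank r      */
    } group_t;

    int order(const group_t *g) { return g->n; }

    int range(const group_t *g)              /* upper limit on elements */
    {
        int i, max = -1;
        for (i = 0; i < g->n; i++)
            if (g->elements[i] > max) max = g->elements[i];
        return max + 1;
    }

    int iselement(const group_t *g, int element)
    {
        int i;
        for (i = 0; i < g->n; i++)
            if (g->elements[i] == element) return 1;
        return 0;
    }

    int rank(const group_t *g, int element)  /* -1 if not a member */
    {
        int i;
        for (i = 0; i < g->n; i++)
            if (g->elements[i] == element) return i;
        return -1;
    }

    int element(const group_t *g, int rank) { return g->elements[rank]; }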
From owner-mpi-context@CS.UTK.EDU Mon Mar 22 19:01:22 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA04642; Mon, 22 Mar 93 19:01:22 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03825; Mon, 22 Mar 93 19:00:39 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 22 Mar 1993 19:00:38 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03811; Mon, 22 Mar 93 19:00:35 -0500 Date: Tue, 23 Mar 93 00:00:31 GMT Message-Id: <15033.9303230000@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: tum-tee-tum To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Dear Colleagues A colleage of mine (not of MPI) reviewed Proposal VI and made a number of very sensible suggestions regarding technical details and presentational details, although not technical guts. The outcome is that I must now revise this proposal. I will repost to this group if there is any interest --- can not happen for approx. 18 hours. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Mar 23 10:42:13 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA19318; Tue, 23 Mar 93 10:42:13 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14088; Tue, 23 Mar 93 10:41:12 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 23 Mar 1993 10:41:12 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14077; Tue, 23 Mar 93 10:41:02 -0500 Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA01482 (5.65c/IDA-1.4.4 for ); Tue, 23 Mar 1993 10:39:47 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA11198; Tue, 23 Mar 93 15:39:42 GMT Date: Tue, 23 Mar 93 15:39:41 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9303231539.AA11198@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA05333; Tue, 23 Mar 93 15:36:15 GMT To: mpi-context@cs.utk.edu Cc: snir@watson.ibm.com, rj_littlefield@pnlg.pnl.gov, lyndon@epcc.ed.ac.uk Subject: Context proposals overview Content-Length: 5160 Jim's initial summary/comments on the Contexts proposals Gentlemen, here is my summary and initial comclusions on the three extant Contexts proposals received via Lyndon. Since I have probably misunderstood or misrepresented some of the proposals, please correct me as appropriate. Proposal I (Marc Snir) ---------------------- Key points : A group and a context are identical. All communication (including point to point) is addressed by group and rank. The receiver MUST be a member of the same group as the sender, and explicitly give the group as an argument to the receive function. Group descriptors are not passable to other processors. Creation of a new context involves creation of a new group which is a group synchronising operation. For: Simplicity, it's easy to explain and understand (and probably implement). Insisting on a group for all addressing gives the 0..n-1 rank model for subgroups which aids in importing libraries. Against: The receiver of a message has to both know the group of the sender, and be a part of it. This feels like it makes servers hard, but may be ok. 
If you keep having to go into the ALL group, then the security which was the whole point of a context is lost. The argument that contexts can be checked at the sender is incorrect because a single process can be in many different groups/contexts, and therefore the group/context must be transferred to ensure correct matching. (Maybe this wasn't the intent of the comments anyway...) I don't understand how to build a group/context using only point to point messages. I still seem to have the bootstrap problem that I need the new context to safely receive the message which will tell me what the new context is. Proposal V (Rik Littlefield) ---------------------------- Key points : A desire to have caching of arbitrary library (or even user) defined entities on the groups. Point to point communication is expected to be addressed by TID and Context. TIDs have global scope, they can be passed around arbitrarily. There is explicit translation from (group,rank) to TID. Group descriptors are local objects, but can be passed around through special mechanisms. Contexts are globally unique, (and expected to be allocated locally through one of the normal bit space partitioned schemes. i.e. processor number // identifier. There appears to be an expectation that the identifiers are managed closely), and can be passed transparently in messages (? is this true ?). Contexts are created inside groups, creating a context is a barrier synchronisation (? is this true ?). For: The caching is good idea, but orthogonal to all the other issues, it could be added just as easily to any of the other proposals The use of TIDs in the point to point removes any issues of group membership from these functions. There is thus a way to achieve any inter group communication which is required (even if it is unpleasant). Against: The complexity of the "paired exact match criterion" is non trivial to explain. Proposal VI (Lyndon Clarke) --------------------------- Key points: ALL descriptors (process, group, context) are process local, but may be sent elsewhere through special mechanisms. Contexts are created inside groups, and are bound to them. The context can be used to refer (indirectly) to the owning group. Creating a context is a barrier synchronisation of the group. Three different forms of process address are discussed for point to point messaging. (Ouch !) For: Can be made to address complex inter group communications Against: The complexity of the whole proposal. The passing of opaque entities with translation is rather unpleasant for people with homogeneous machines. Before this we could ignore all of the type info, now we have to check some of it and take special actions. Jim's initial conclusions... ---------------------------- 1) Marc's approach of insisting that all communications occur inside a group has some simplifying effects. In particular it easily gives the base shift for sub-groups, so that they "think" their processors are numbered 0..n-1. It has the disadvantage that there is an extra translation required on ALL communications. This will add to the latency at the most fundamental level (though not much if we expect to consume space linear in the group size). 2) Creating a new group for each context may be expensive, though Marc suggests an implementation which makes this relatively cheap. Adding Rik's caching ideas here could also defer the cost until it was required. (e.g. there is no need to build a broadcast tree in a group until the first broadcast is performed. 
[Fundamental optimisation 0: "Don't do it until you need to, you might never have to do it at all"]) At the moment I favour Marc's with added caching. Can anyone explain where the big gain is from the added complexity of the others ? -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Tue Mar 23 12:22:41 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA23133; Tue, 23 Mar 93 12:22:41 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19811; Tue, 23 Mar 93 12:21:57 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 23 Mar 1993 12:21:56 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19803; Tue, 23 Mar 93 12:21:49 -0500 Date: Tue, 23 Mar 93 17:21:18 GMT Message-Id: <15889.9303231721@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Context proposals overview To: jim@meiko.co.uk (James Cownie), mpi-context@cs.utk.edu In-Reply-To: James Cownie's message of Tue, 23 Mar 93 15:39:41 GMT Reply-To: lyndon@epcc.ed.ac.uk Cc: snir@watson.ibm.com, rj_littlefield@pnlg.pnl.gov, lyndon@epcc.ed.ac.uk Jim writes: > Gentlemen, here is my summary and initial comclusions on the three > extant Contexts proposals received via Lyndon. Since I have probably > misunderstood or misrepresented some of the proposals, please correct > me as appropriate. > > Proposal I (Marc Snir) > ---------------------- > Creation of a new context involves creation of a new group which is a > group synchronising operation. More accurately it involves duplication of an existing group. One problem which I have with this is that if we anticipate future extension to groups which are dynamic in membership (they expand and contract) then the duplication concept breaks because the identical nature of the grouping is lost. This is why I much prefer to have contexts as references to groups and groups as frames of contexts - the contexts shrink and stretch according to the expansion and contraction of the underlying group. > For: > Simplicity, it's easy to explain and understand (and probably > implement). Indeed, but please see further comments attached to Proposal VI below. > Insisting on a group for all addressing gives the 0..n-1 rank model > for subgroups which aids in importing libraries. Indeed, and Proposal VI gives all (and more) of this aid. > Against: > The receiver of a message has to both know the group of the sender, > and be a part of it. This feels like it makes servers hard, but may be > ok. If you keep having to go into the ALL group, then the security > which was the whole point of a context is lost. The restriction in Proposal I really does make programming of communicative functionally distinct groups hard. I am thinking well beyond singleton service processes and I have explained some of the things I am thinking in previous messages. > The argument that contexts can be checked at the sender is incorrect > because a single process can be in many different groups/contexts, and > therefore the group/context must be transferred to ensure correct > matching. (Maybe this wasn't the intent of the comments anyway...) Our interpretations concur.
> Proposal V (Rik Littlefield) > ---------------------------- > Key points : > A desire to have caching of arbitrary library (or even user) defined > entities on the groups. > > Point to point communication is expected to be addressed by TID and > Context. TIDs have global scope, they can be passed around > arbitrarily. > > There is explicit translation from (group,rank) to TID. ... which the poor user *must* do. > Group descriptors are local objects, but can be passed around through > special mechanisms. Check. If this feature of Proposal VI is a disadvantage of that proposal then it is also a disadvantage of Proposal V! Just being consistent :-) > Contexts are globally unique, (and expected to be allocated locally > through one of the normal bit space partitioned schemes. i.e. > processor number // identifier. There appears to be an expectation > that the identifiers are managed closely), and can be passed > transparently in messages (? is this true ?). Our interpretations concur. > Contexts are created inside groups, creating a context is a barrier > synchronisation (? is this true ?). Our interpretations concur. > For: > The caching is good idea, but orthogonal to all the other issues, it > could be added just as easily to any of the other proposals Precisely, and this is recognised in the revision of Proposal VI which will be circulated within the hour. > The use of TIDs in the point to point removes any issues of group > membership from these functions. There is thus a way to achieve any > inter group communication which is required (even if it is unpleasant). It is unpleasant indeed. I appreciate that I am stating a value judgement here which may be interpreted as "my personal opinion". In fact at Edinburgh we have done plenty of intergroup communication programming and the opinion is shared. > Against: > The complexity of the "paired exact match criterion" is non trivial to > explain. I concur. > Proposal VI (Lyndon Clarke) > --------------------------- > > Key points: > ALL descriptors (process, group, context) are process local, but may > be sent elsewhere through special mechanisms. The revision, particularly in the improved discussion notes, explores the possibility that processes are expressed as identifiers but groups and contexts are process local opaque references to objects of undefined size and structure (implementation dependent). > Contexts are created inside groups, and are bound to them. The context > can be used to refer (indirectly) to the owning group. Exactly. Please recall the point I made above about group (context) duplication in Proposal I. > Creating a context is a barrier synchronisation of the group. The proposal says this, the discussion explains that it may not be necessary to actually synchronise. My mind is still working on this one. > Three different forms of process address are discussed for point to > point messaging. (Ouch !) From the point of view of the MPI provider this is just a small amount of additional work. The three forms link to a generic send/recv so there is not at all three times as much implementation. The size of a test suite increases of course. From the point of view of the MPI user there could be an issue here, however the facilities can be described in a stepwise manner such that users do not need to be exposed to more complexity than they need to use.
The revision tries to clean up the business of presenting these forms in a uniform syntactic framework, which is one way to address the "point-to-point contains too many procedures" concern. > For: > Can be made to address complex inter group communications Does address such communications directly and with expressive power. I'd like to add "For: VI is functional superset of Proposal I; group/context user cache of V is easy to add to VI." > Against: > The complexity of the whole proposal. Proposal VI is larger than Proposal I, which is at a similarly mature stage of development, because it is more powerful; as I say, it is functionally a superset of Proposal I (and this is drawn out better in the discussion in the revision). The facilities can be presented to users in a friendly fashion which does not require substantially different entry level user sophistication than Proposal I. The more "advanced" facilities can be introduced in a user manual after the more "basic" facilities. I conjecture that there is no real problem here. > The passing of opaque entities with translation is rather unpleasant > for people with homogeneous machines. Before this we could ignore all > of the type info, now we have to check some of it and take special > actions. > I mentioned above that if this is a disadvantage of VI then it must also be a disadvantage of V. This was one suggestion as to how we might give the user an interface to transmission of these opaque objects. I'm sure we can think of other approaches which avoid the implementor having to check the data type description in homogeneous kit. For example we could provide a set of procedures which do the send and receive, although this does again add more procedures to point-to-point which I am trying to avoid, honest guv' :-) I do not believe that this is any big deal for the user, since the kind of users who are currently doing this kind of programming (and they are certainly not high priests) understand and think about data translation issues anyway. If these are the only real objections you have to passing around descriptors then I am confident we can do something about that. Would you like to make a suggestion as to how you would prefer to see transmission done? > Jim's initial conclusions... > ---------------------------- > > 1) Marc's approach of insisting that all communications occur inside a > group has some simplifying effects. In particular it easily gives the > base shift for sub-groups, so that they "think" their processors are > numbered 0..n-1. It has the disadvantage that there is an extra > translation required on ALL communications. This will add to the > latency at the most fundamental level (though not much if we expect to > consume space linear in the group size). Proposal VI has all of these facilities, which have been implemented and used in systems for a couple of years now, and a whole lot more. For speed demons Proposal VI lets them use process descriptors (or identifiers) directly. > At the moment I favour Marc's with added caching. Can anyone explain > where the big gain is from the added complexity of the others ? I hope I did. If you need more explanation then please do ask.
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Mar 23 13:28:15 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA24476; Tue, 23 Mar 93 13:28:15 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22791; Tue, 23 Mar 93 13:27:53 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 23 Mar 1993 13:27:52 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22783; Tue, 23 Mar 93 13:27:50 -0500 Message-Id: <9303231827.AA22783@CS.UTK.EDU> Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 6757; Tue, 23 Mar 93 13:27:47 EST Date: Tue, 23 Mar 93 13:10:01 EST From: "Marc Snir" X-Addr: (914) 945-3204 (862-3204) 28-226 IBM T.J. Watson Research Center P.O. Box 218 Yorktown Heights NY 10598 To: mpi-context@cs.utk.edu Reply-To: SNIR@watson.ibm.com Subject: Context proposals overview Reference: Attached note from jim@meiko.co.uk Reply *************** Forwarded Note *************** Received: from marge.meiko.com by watson.ibm.com (IBM VM SMTP V2R3) with TCP; Tue, 23 Mar 93 10:39:58 EST Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA01482 (5.65c/IDA-1.4.4 for ); Tue, 23 Mar 1993 10:39:47 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA11198; Tue, 23 Mar 93 15:39:42 GMT Date: Tue, 23 Mar 93 15:39:41 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9303231539.AA11198@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA05333; Tue, 23 Mar 93 15:36:15 GMT To: mpi-context@cs.utk.edu Cc: snir@watson.ibm.com, rj_littlefield@pnlg.pnl.gov, lyndon@epcc.ed.ac.uk Subject: Context proposals overview Content-Length: 5160 Jim's initial summary/comments on the Contexts proposals Gentlemen, here is my summary and initial comclusions on the three extant Contexts proposals received via Lyndon. Since I have probably misunderstood or misrepresented some of the proposals, please correct me as appropriate. Proposal I (Marc Snir) ---------------------- Key points : A group and a context are identical. All communication (including point to point) is addressed by group and rank. The receiver MUST be a member of the same group as the sender, and explicitly give the group as an argument to the receive function. Group descriptors are not passable to other processors. Creation of a new context involves creation of a new group which is a group synchronising operation. For: Simplicity, it's easy to explain and understand (and probably implement). Insisting on a group for all addressing gives the 0..n-1 rank model for subgroups which aids in importing libraries. Against: The receiver of a message has to both know the group of the sender, and be a part of it. This feels like it makes servers hard, but may be ok. If you keep having to go into the ALL group, then the security which was the whole point of a context is lost. >>> One can, of course, have several distinct contexts that include >>> all processes. The argument that contexts can be checked at the sender is incorrect because a single process can be in many different groups/contexts, and therefore the group/context must be transferred to ensure correct matching. 
(Maybe this wasn't the intent of the comments anyway...) >>> group/context need be transfered in the message. The point I was >>> trying to make is that one can make sure that a message with a >>> given context tag will be sent only to a process that "knows" about >>> this context. This allows some relaxations: e.g., context names >>> need not be system-wide unique, only locally unique at each process; >>> handling of "illegal context" errors is simplified. I don't understand how to build a group/context using only point to point messages. I still seem to have the bootstrap problem that I need the new context to safely receive the message which will tell me what the new context is. >>> Well, we have an existential proof, since we support dynamic group >>> creation in our system. A new context is created within an old, >>> preexisting context; so ALL need to be there from start. Proposal V (Rik Littlefield) ---------------------------- Key points : A desire to have caching of arbitrary library (or even user) defined entities on the groups. Point to point communication is expected to be addressed by TID and Context. TIDs have global scope, they can be passed around arbitrarily. There is explicit translation from (group,rank) to TID. Group descriptors are local objects, but can be passed around through special mechanisms. Contexts are globally unique, (and expected to be allocated locally through one of the normal bit space partitioned schemes. i.e. processor number // identifier. There appears to be an expectation that the identifiers are managed closely), and can be passed transparently in messages (? is this true ?). Contexts are created inside groups, creating a context is a barrier synchronisation (? is this true ?). For: The caching is good idea, but orthogonal to all the other issues, it could be added just as easily to any of the other proposals The use of TIDs in the point to point removes any issues of group membership from these functions. There is thus a way to achieve any inter group communication which is required (even if it is unpleasant). Against: The complexity of the "paired exact match criterion" is non trivial to explain. Proposal VI (Lyndon Clarke) --------------------------- Key points: ALL descriptors (process, group, context) are process local, but may be sent elsewhere through special mechanisms. Contexts are created inside groups, and are bound to them. The context can be used to refer (indirectly) to the owning group. Creating a context is a barrier synchronisation of the group. Three different forms of process address are discussed for point to point messaging. (Ouch !) For: Can be made to address complex inter group communications Against: The complexity of the whole proposal. The passing of opaque entities with translation is rather unpleasant for people with homogeneous machines. Before this we could ignore all of the type info, now we have to check some of it and take special actions. Jim's initial conclusions... ---------------------------- 1) Marc's approach of insisting that all communications occur inside a group has some simplifying effects. In particular it easily gives the base shift for sub-groups, so that they "think" their processors are numbered 0..n-1. It has the disadvantage that there is an extra translation required on ALL communications. This will add to the latency at the most fundamental level (though not much if we expect to consume space linear in the group size). 
2) Creating a new group for each context may be expensive, though Marc suggests an implementation which makes this relatively cheap. Adding Rik's caching ideas here could also defer the cost until it was required. (e.g. there is no need to build a broadcast tree in a group until the first broadcast is performed. [Fundamental optimisation 0: "Don't do it until you need to, you might never have to do it at all"]) At the moment I favour Marc's with added caching. Can anyone explain where the big gain is from the added complexity of the others ? -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Tue Mar 23 13:44:12 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA24968; Tue, 23 Mar 93 13:44:12 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23491; Tue, 23 Mar 93 13:43:28 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 23 Mar 1993 13:43:27 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23478; Tue, 23 Mar 93 13:43:11 -0500 Date: Tue, 23 Mar 93 18:43:04 GMT Message-Id: <15951.9303231843@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Proposal VI, First Revision, LaTeX To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Here is LaTeX for the revision of Proposal VI. Please see my reply to Jim of today for an idea of what is changed. I apologise that this is one hour later then I had previously announced. PostScript follows. ---------------------------------------------------------------------- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% % LYNDON % % REMBER TO MERGE IN THE COMMENTS FROM THE PREVIOUS REVISION OF THIS % % DOCUMENT. OH, AND TO GET BETTER SET UP FOR TRANSFER TO LAPTOP! % %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentstyle{report} \begin{document} \title{MPI Context Subcommittee\\Proposal VI\\First Revision} \author{Lyndon~J~Clarke} \date{March 23, 1993} \maketitle %======================================================================% % BEGIN "Proposal VI" % Lyndon J. Clarke % March 1993 % \chapter{Proposal VI} \newcommand{\LabelNote}[1]{\label{vi:note:#1}} \newcommand{\ReferNote}[1]{Note~\ref{vi:note:#1}} \newcommand{\LabelSection}[1]{\label{vi:sect:#1}} \newcommand{\ReferSection}[1]{Section~\ref{vi:sect:#1}} %----------------------------------------------------------------------% % BEGIN "Introduction" % \section{Introduction}\LabelSection{introduction} This chapter proposes that communication contexts and process groupings within {\sc mpi} appear as loosely coupled concepts. One or more communication contexts may be associated with each process grouping, and each communication context inherits properties of the associated process grouping. This reflects the observations that invocations of modules in a parallel program typically operate within process groupings, and that there may be multiple modules operating within each process grouping. The proposal provides process identified communication, communications which are limited in scope to single contexts, and communications which have scope spanning pairs of contexts. The proposal makes no statements regarding message tags. 
It is assumed that these will be bit strings expressed as integers in the host language. Much of this proposal must be viewed as recommendations to other subcommittees of {\sc mpi}, primarily the point-to-point communication subcommittee and the collective communications subcommittee. Concrete syntax is given in the style of the ANSI C host language, only for purposes of discussion. The detailed proposal is presented in \ReferSection{proposal}, which refers the reader to a set of discussion notes in \ReferSection{notes}. The notes assume forward knowledge of the proposal text and are therefore best examined after familiarisation with the proposal text. Aspects of the proposal are discussed in \ReferSection{discussion}, and it is also recommended that this material be read after familiarisation with the proposal text.
%
% END "Introduction"
%----------------------------------------------------------------------%
%----------------------------------------------------------------------%
% BEGIN "Detailed Proposal"
%
\section{Detailed Proposal}\LabelSection{proposal}

This section presents the detailed proposal, describing, in order of appearance: processes; process groupings; communication contexts; point-to-point communication; collective communication.

\subsection{Processes}\LabelSection{processes}

This proposal views processes in the familiar way, as one thinks of processes in Unix or NX for example. Each process is a distinct space of instructions and data. Each process is allowed to compose multiple concurrent threads and {\sc mpi} does not distinguish such threads.

\subsubsection*{Process Descriptor}

Each process is described by a {\it process descriptor\/} (or {\it handle}) which is expressed as an integer type in the host language and has an opaque value which is process local. See \ReferNote{descriptor}. The initialisation of {\sc mpi} services will assign to each process an {\it own\/} process descriptor. Each process retains its own process descriptor until the termination of {\sc mpi} services. {\sc mpi} provides a procedure which returns the own descriptor of the calling process. For example, \mbox{\tt pd = mpi\_own\_process()}.

\subsubsection*{Process Creation \& Destruction}

This proposal makes no statements regarding creation and destruction of processes. See \ReferNote{processes:dynamic}.

\subsubsection*{Descriptor Transmission}

Since a descriptor is an integer type in the host language the value may be transmitted in a message as an integer. The recipient of the descriptor value can make no defined use of the value in the {\sc mpi} operations described in this proposal --- the descriptor is {\it invalid}. {\sc mpi} provides a mechanism whereby the user can transmit a valid process descriptor in a message such that the received descriptor is valid. This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. \mbox{\tt MPI\_PD\_TYPE}. See \ReferNote{descriptor:transmission}. {\sc mpi} provides a process descriptor registry service. Use of this service is not mandated. Programs which can conveniently be expressed without using the service can ignore it without penalty. See \ReferNote{descriptor:registry}. Note that receipt of a valid process descriptor may have a persistent effect on the implementation of {\sc mpi} at the receiver, and in particular may reserve state.
{\sc mpi} will provide a procedure which frees (or invalidates) a valid descriptor, allowing the implementation to free reserved state. For example, \mbox{\tt mpi\_free\_pd(pd)}. The user is not allowed to free the process descriptor of the calling process. See \ReferNote{descriptor:deallocation}. See \ReferNote{coherency}. \subsubsection*{Descriptor Attributes} This proposal makes no statements regarding process descriptor attributes. \subsection{Process Groupings}\LabelSection{groupings} This proposal views a process grouping as an ordered collection of (references to) distinct processes, the membership and ordering of which does not change over the lifetime of the grouping. See \ReferNote{groupings:dynamic}. The canonical representation of a grouping reflects the process ordering and is a one-to-one map from $Z_N$ to descriptors of the $N$ processes composing the grouping. There may be structure associated with a process grouping defined by a process topology. This proposal makes no further statements regarding such structures. \subsubsection*{Group Descriptor} Each group is identified by a {\it group descriptor\/} (or {\it handle\/}) which is expressed as an integer type in the host language and has an opaque value which is process local. See \ReferNote{descriptor}. The initialisation of {\sc mpi} services will assign to each process an {\it own\/} group descriptor for a process grouping of which the process is a member. Each process retains its own group descriptor and membership of the own process grouping until the termination of {\sc mpi} services. See \ReferNote{grouping:blobs}. {\sc mpi} provides a procedure which accepts a valid process descriptor and returns the own group descriptor of the identified process. For example, \mbox{\tt gd = mpi\_own\_group(pd)}. See \ReferNote{grouping:own}. \subsubsection*{Group Creation and Deletion} {\sc mpi} provides facilities which allow users to dynamically create and delete process groupings in addition to the own grouping(s). The procedures described here generate groups which are static in membership. {\sc mpi} provides a procedure which allows users to create one or more groups which are subsets of existing groups. For example, \mbox{\tt gdb = mpi\_group\_partition(gda, key)} creates one or more new groups {\tt gdb} which are distinct subsets of an existing group {\tt gda} according to the supplied values of {\tt key}. This procedure is called by and synchronises all members of {\tt gda}. {\sc mpi} provides a procedure which allows users to create a group by permutation of an existing group. For example, \mbox{\tt gdb = mpi\_group\_permutation(gda, rank)} creates one new group with the same membership as {\tt gda} with a permutation of process ranking, and returns the created group descriptor in {\tt gdb}. This procedure is called by and synchronises all members of {\tt gda}. {\sc mpi} provides a procedure which allows users to create a group by explicit definition of its membership as a list of process descriptors. For example, \mbox{\tt gd = mpi\_group\_definition(listofpd)} creates one new group {\tt gd} with membership and ordering described by the process descriptor list {\tt listofpd}. This procedure is called by and synchronises all processes identified in {\tt listofpd}. {\sc mpi} provides a procedure which allows users to delete user created groups. This procedure accepts the descriptor of a group which was created by the calling process and deletes the identified group. 
For example, \mbox{\tt mpi\_group\_deletion(gd)} deletes an existing group {\tt gd}. This procedure is called by and synchronises all members of {\tt gd}. {\sc mpi} may provide additional procedures which allow users to construct process groupings with a process grouping topology. \subsubsection*{Descriptor Transmission} Since a descriptor is an integer type in the host language the value may be transmitted in a message as an integer. The recipient of the descriptor can make no defined use of the value in the {\sc mpi} operations described in this proposal --- the descriptor is {\it invalid}. See \ReferNote{descriptor}. {\sc mpi} provides a mechanism whereby the user can transmit a valid group descriptor in a message such that the received descriptor is valid. This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. \mbox{\tt MPI\_GD\_TYPE}. See \ReferNote{descriptor:transmission}. {\sc mpi} provides a group descriptor registry service. Use of this service is not mandated. Programs which can conveniently be expressed without using the service can ignore it without penalty. See \ReferNote{descriptor:registry}. Note that receipt of a valid group descriptor may have a persistent effect on the implementation of {\sc mpi} at the receiver, and in particular may reserve state. {\sc mpi} will provide a procedure which frees (or invalidates) a valid descriptor, allowing the implementation to free reserved state. For example, \mbox{\tt mpi\_free\_gd(gd)}. The user is not allowed to free the own group descriptor of the calling process or the group descriptor of any group created by the calling process. See \ReferNote{descriptor:deallocation}. See \ReferNote{coherency}. \subsubsection*{Descriptor Attributes} {\sc mpi} provides a procedure which accepts a valid group descriptor and returns the rank of the calling process within the identified group. For example, \mbox{\tt rank = mpi\_group\_rank(gd)}. {\sc mpi} provides a procedure which accepts a valid group descriptor and returns the number of members, or {\it size}, of the identified group. For example, \mbox{\tt size = mpi\_group\_size(gd)}. {\sc mpi} provides a procedure which accepts a valid group descriptor and process order number, or {\it rank}, and returns the valid descriptor of the process to which the supplied rank maps within the identified group. For example, \mbox{\tt pd = mpi\_group\_pd(gd, rank)}. See \ReferNote{inverse:map}. {\sc mpi} may provide additional procedures which allow users to determine the process grouping topology attributes. \subsection{Communication Contexts}\LabelSection{contexts} This proposal views a communication context as a uniquely identified reference to exactly one process grouping, which is a field in a message envelope and may therefore be used to distinguish messages. The context inherits the referenced process grouping as a ``frame''. Each process grouping may be used as a frame for multiple contexts. \subsubsection*{Context Descriptor} Each context is identified by a {\it context descriptor\/} (or {\it handle}) which is expressed as an integer type in the host language and has an opaque value which is process local. See \ReferNote{descriptor}. The creation of {\sc mpi} process groupings allocates an {\it own\/} context which inherits the created grouping as a frame and can be thought of as a property of the created grouping. 
The grouping retains the descriptor of its own context until {\sc mpi} process grouping deletion. {\sc mpi} provides a procedure which accepts a valid group descriptor and returns the context descriptor of the own context of the identified group. For example, \mbox{\tt cd = mpi\_own\_context(gd)}. See \ReferNote{context:own}.
\subsubsection*{Context Creation and Deletion}
{\sc mpi} provides facilities which allow users to dynamically create and delete contexts in addition to the own contexts associated with process groupings. See \ReferNote{grouping:deletion}. {\sc mpi} provides a procedure which allows users to create contexts. This procedure accepts a valid descriptor of a group of which the calling process is a member, and returns a context descriptor which references the identified group. For example, \mbox{\tt cd = mpi\_context\_creation(gd)}. This procedure is called by and synchronises all members of {\tt gd}. {\sc mpi} provides a procedure which allows users to delete user created contexts. This procedure accepts a valid context descriptor which was created by the calling process and deletes the identified context. For example, \mbox{\tt mpi\_context\_deletion(cd)}. This procedure is called by and synchronises all members of the frame of {\tt cd}. See \ReferNote{context:creation:deletion}.
\subsubsection*{Descriptor Transmission}
Since a descriptor is an integer type in the host language the value may be transmitted in a message as an integer. The recipient of the descriptor can make no defined use of the value in the {\sc mpi} operations described in this proposal --- the descriptor is {\it invalid}. See \ReferNote{descriptor}. {\sc mpi} provides a mechanism whereby the user can transmit a valid context descriptor in a message such that the received descriptor is valid. This is integrated with the capability to transmit typed messages. It is suggested that a notional data type should be introduced for this purpose, e.g. \mbox{\tt MPI\_CD\_TYPE}. See \ReferNote{descriptor:transmission}. {\sc mpi} provides a context descriptor registry service. Use of this service is not mandated. Programs which can conveniently be expressed without using the service can ignore it without penalty. See \ReferNote{descriptor:registry}. Note that receipt of a valid context descriptor may have a persistent effect on the implementation of {\sc mpi} at the receiver, and in particular may reserve state. {\sc mpi} will provide a procedure which frees (or invalidates) a valid descriptor, allowing the implementation to free reserved state. For example, \mbox{\tt mpi\_free\_cd(cd)}. The user is not allowed to free the own context descriptor of any group or the context descriptor of any context created by the calling process. See \ReferNote{coherency}.
\subsubsection*{Descriptor Attributes}
{\sc mpi} provides a procedure which allows users to determine the process grouping which is the frame of a context. For example, \mbox{\tt gd = mpi\_context\_group(cd)}.
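As a purely illustrative sketch, the grouping and context procedures described above might be combined by an SPMD parallel library as follows. Every name is the notional form suggested in this proposal and does not constitute an agreed binding.
\begin{verbatim}
/* Illustrative sketch only; all procedure names are the notional
   forms suggested in this proposal, not an agreed interface.      */
void library_initialisation(void)
{
    int pd, gd, cd, rank, size;

    pd   = mpi_own_process();      /* own process descriptor         */
    gd   = mpi_own_group(pd);      /* own (blob) process grouping    */
    rank = mpi_group_rank(gd);     /* rank of the caller in the frame*/
    size = mpi_group_size(gd);     /* number of members of the frame */

    cd = mpi_context_creation(gd); /* a private context over the same
                                      frame; synchronises all of gd  */
    if (rank == 0 && size > 1) {
        /* e.g. member 0 might coordinate library set-up             */
    }
    /* ... library traffic selected on cd, kept separate from user
           traffic on the own context of gd ...                      */
    mpi_context_deletion(cd);      /* synchronises the frame again   */
}
\end{verbatim}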
\subsection{Point-to-Point Communication}\LabelSection{point2point}
This proposal recommends three forms for {\sc mpi} point-to-point message addressing and selection: null context; closed context; open context. (See \ReferNote{context:form}.) It is further recommended that messages communicated in each form are distinguished such that a {\tt Send} operation of form X cannot match with a {\tt Receive} operation of form Y, requiring that the form is embedded in the message envelope. The three forms are described, followed by considerations of uniform integration of these forms in the point-to-point communication chapter of {\sc mpi}.
\subsubsection*{Null Context Form}
The {\it null context\/} form contains no message context. Message selection and addressing are expressed by \mbox{\tt (pd, tag)} where: {\tt pd} is a process descriptor; {\tt tag} is a message tag. {\tt Send} supplies the {\tt pd} of the receiver. {\tt Receive} supplies the {\tt pd} of the sender. {\tt Receive} can wildcard on {\tt pd} by supplying the value of a named constant process descriptor, e.g. {\tt MPI\_PD\_WILD}. See \ReferNote{descriptor:wildcard}. This proposal makes no statement about the provision for wildcard on {\tt tag}.
\subsubsection*{Closed Context Form}
The {\it closed context\/} form permits communication between members of the same context. Message selection and addressing are expressed by \mbox{\tt (cd, rank, tag)} where: {\tt cd} is a context descriptor; {\tt rank} is a process rank in the frame of {\tt cd}; {\tt tag} is a message tag. {\tt Send} supplies the {\tt cd} of the receiver (and sender), and the {\tt rank} of the receiver. {\tt Receive} supplies the {\tt cd} of the sender (and receiver), and the {\tt rank} of the sender. The \mbox{\tt (cd, rank)} pair in {\tt Send} ({\tt Receive}) is sufficient to determine the process descriptor of the receiver (sender). {\tt Receive} cannot wildcard on {\tt cd}. {\tt Receive} can wildcard on {\tt rank} by supplying the value of a named constant integer, e.g. {\tt MPI\_RANK\_WILD}. See \ReferNote{rank:wildcard}. This proposal makes no statement about the provision for wildcard on {\tt tag}.
\subsubsection*{Open Context Form}
The {\it open context\/} form permits communication between members of any two contexts. Message selection and addressing are expressed by \mbox{\tt (lcd, rcd, rank, tag)} where: {\tt lcd} is a ``local'' context descriptor; {\tt rcd} is a ``remote'' context descriptor; {\tt rank} is a process rank in the frame of {\tt rcd}; {\tt tag} is a message tag. {\tt Send} supplies the context descriptor for the sender in {\tt lcd}, the context descriptor for the receiver in {\tt rcd}, and the {\tt rank} of the receiver in the frame of {\tt rcd}. {\tt Receive} supplies the context descriptor for the receiver in {\tt lcd}, the context descriptor for the sender in {\tt rcd}, and the {\tt rank} of the sender in the frame of {\tt rcd}. The \mbox{\tt (rcd, rank)} pair in {\tt Send} ({\tt Receive}) is sufficient to determine the process descriptor of the receiver (sender). {\tt Receive} cannot wildcard on {\tt lcd}, which is the open context form analogue of there being no wildcard on {\tt cd} in the closed context form. {\tt Receive} can wildcard on {\tt rcd} by supplying the value of a named constant context descriptor, e.g. {\tt MPI\_CD\_WILD} (See \ReferNote{descriptor:wildcard}), in which case {\tt Receive} {\it must\/} also wildcard on {\tt rank} as there is insufficient information to determine the process descriptor of the sender. {\tt Receive} can wildcard on {\tt rank} by supplying the value of a named constant integer, e.g. {\tt MPI\_RANK\_WILD}. See \ReferNote{rank:wildcard}. This proposal makes no statement about the provision for wildcard on {\tt tag}.
\subsubsection*{Uniform Integration}
The three forms of addressing and selection described have different syntactic frameworks.
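For illustration only, the difference can be seen by writing the three selection tuples as hypothetical receive prototypes; the procedure names below are invented for this comparison and are not proposed interfaces.
\begin{verbatim}
/* Hypothetical prototypes, invented only to contrast the three
   selection tuples; they are not proposed interfaces.             */
int mpi_recv_null(void *buf, int len, int pd, int tag);
int mpi_recv_closed(void *buf, int len, int cd, int rank, int tag);
int mpi_recv_open(void *buf, int len, int lcd, int rcd,
                  int rank, int tag);
\end{verbatim}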
We can consider integrating these forms into the point-to-point chapter of {\sc mpi} by defining a further orthogonal axis (as in the multi-level proposal of Gropp \& Lusk) which deals with form. This is at the expense of multiplying the number of {\tt Send} and {\tt Receive} procedures by a factor of three, and some further but trivial work with details of the current point-to-point chapter, which uniformly assumes a single addressing and selection form. There are various approaches to unification of the syntactic frameworks which may simplify integration. Three options are now described, each based on retention and extension of the framework of one form. These options represent different compromises between the three forms.
\paragraph*{Option i: Open Context Framework}
The framework of the open context form is adopted and extended. Introduce the {\it null\/} descriptor, the value of which is defined by a named constant, e.g. {\tt MPI\_NULL}. See \ReferNote{descriptor:null}. The null context form is expressed as \mbox{\tt (MPI\_NULL, MPI\_NULL, pd, tag)}, which is a little clumsy. The closed context form is expressed as \mbox{\tt (MPI\_NULL, cd, rank, tag)}, which is marginally inconvenient. The open context form is expressed as \mbox{\tt (lcd, rcd, rank, tag)}, which is of course natural.
\paragraph*{Option ii: Closed Context Framework}
The framework of the closed context form is adopted and extended. Introduce the {\it null\/} descriptor, the value of which is defined by a named constant, e.g. {\tt MPI\_NULL}. See \ReferNote{descriptor:null}. The null context form is expressed as \mbox{\tt (MPI\_NULL, pd, tag)}, which is marginally inconvenient. The closed context form is expressed as \mbox{\tt (cd, rank, tag)}, which is of course natural. Expression of the open context form requires a little more work. We can use the {\tt cd} field as ``shorthand notation'' for the \mbox{\tt (lcd, rcd)} pair at the expense of introducing some trickery. We define a ``context duplet descriptor'' which is formally composed of two references to contexts, and provide a procedure which constructs such a descriptor given two context descriptors. Both {\tt Send} and {\tt Receive} will accept a duplet descriptor in {\tt cd}, are able to distinguish the duplet descriptor from a singlet descriptor, and treat the duplet as shorthand notation. We should also define a mechanism by which a receiver which has completed a {\tt Receive} with wildcard on {\tt rcd} is able to determine the valid singlet descriptor of the sender, which appears to add just one further enquiry procedure to the point-to-point chapter. This option is a little inconvenient but does have some useful properties for collective communications. It is conjectured that this option is the best choice for {\sc mpi}.
\paragraph*{Option iii: Null Context Framework}
The framework of the null context form is adopted and extended. The null context form is expressed as \mbox{\tt (pd, tag)}, which is of course natural. Expression of the open and closed context forms requires a little more work. We can use the {\tt pd} field as ``shorthand notation'' for {\tt (cd, rank)} and {\tt (lcd, rcd, rank)} by continuation of the trickery used in the previous option. This is rather clumsy.
\subsection{Collective Communication}\LabelSection{collective}
Symmetric collective communication operations are compliant with the closed context form described above.
This proposal recommends that such operations accept a context descriptor which identifies the context (thus frame) in which they are to operate. {\sc mpi} does plan to describe symmetric collective communication operations. It is not possible to determine whether this proposal is sufficient to allow implementation of the collective communication chapter of {\sc mpi} in terms of the point-to-point chapter of {\sc mpi} without loss of generality, since the collective operations are not yet defined. Asymmetric collective communication operations, especially those in which sender(s) and receiver(s) are distinct processes, are compliant with the open context form described above. This proposal recommends that such operations accept a pair of context descriptors (perhaps in a duplet descriptor form) which identify the contexts (thus frames) in which they are to operate. {\sc mpi} does not plan to describe asymmetric collective communication operations. Such operations are expressive when writing programs beyond the SPMD model, which are composed of communicative, functionally distinct process groupings. This proposal recommends that such operations should be considered in some reincarnation of {\sc mpi}.
%
% END "Proposal"
%----------------------------------------------------------------------%
%----------------------------------------------------------------------%
% BEGIN "Discussion & Notes"
%
\section{Discussion \& Notes}
This section comprises a discussion of certain aspects of this proposal followed by the notes referenced in the detailed proposal.
\subsection{Discussion}\LabelSection{discussion}
We can dissect the proposal into two parts: an SPMD model core; an MIMD model annex. In this discussion the dissection is exposed and the conceptual foundation of each part is described. The discussion also presents brief arguments for and against the MIMD model annex.
\subsubsection*{SPMD model core}
The SPMD model core provides noncommunicative process groupings and communication contexts for writers of SPMD parallel libraries. It is intended to provide expressive power beyond the ``SPIMD'' model (in which processes execute in an SIMD fashion). The material describing processes in \ReferSection{processes} is simplified: processes have identical instruction blocks and different data blocks; process descriptor transmission and registry become redundant; dynamic process models are not considered. The material describing process groupings in \ReferSection{groupings} is simplified: group descriptor transmission and registry become redundant; the own process grouping explicitly becomes a single group containing all processes. The material describing communication contexts in \ReferSection{contexts} is simplified: context descriptor transmission and registry become unnecessary. The material describing point-to-point communication in \ReferSection{point2point} is simplified: the open context form becomes redundant; uniform integration ``Option i'' is deleted, and ``Option ii'' loses duplet descriptors, becoming simple enough that ``Option iii'' need not be further considered. The material describing collective communication in \ReferSection{collective} is simplified: there is no possibility of collective communication operations spanning more than one context.
\subsubsection*{MIMD model annex}
The MIMD model annex extends and modifies the SPMD model core to provide expressive power for MIMD programs which combine (coarse grain) function and data driven parallelism.
The MIMD model annex is not intended to provide expressive power to fine grained function driven parallel programs --- it is conjectured that message passing approaches such as {\sc mpi} are not suited to fine grained parallel programming. The annex is intended to provide expressive power for the ``MSPMD'' model, which is now described. One of the simplest MIMD models is the ``host-node'' model, familiar in {\sc express} and {\sc parmacs}, containing two functional groups: one node group (SPMD like); one host group (a singleton). The ``parallel client-server'' model, in which each of the $n$ clients is composed of parallel processes, and in which the server may also be composed of parallel processes, contains $1+n$ functional groups: $n$ client groups (SPMD like); one server group (singleton, SPMD like). The ``host-node'' model is a case of this model in which the host can be viewed as a singleton client and the nodes can be viewed as an SPMD like server (or the host as a singleton server and the nodes as an SPMD like client). The ``parallel module graph'' model, in which each module within the graph may be composed of parallel processes (singleton, SPMD like), contains any number of functional groups with arbitrarily complex relations. The ``parallel client-server'' model is a case of this model in which the module graph only contains arcs joining the server to each client. The MIMD model annex is intended to provide expressive power for the ``parallel module graph'' model, which I refer to as the MSPMD model. This model requires support at some level as commercial and modular applications are increasingly moving into parallel computing. The debate is whether or not message passing approaches such as {\sc mpi} (which I simply refer to as MPI) should provide for this model. The negative argument is that such SPMD like modules should be controlled and communicate with one another as ``parallel processes'' at the distributed operating system level. The argument has some appeal as the world of distributed operating systems must deal with difficult issues such as process control and coherency. Avoidance of duplication in MPI allows MPI to focus on provision of a smaller set of facilities with greater emphasis on maximum performance for data driven SPMD like parallel programs. The positive argument is that communications between such SPMD like modules require high performance and MPI can provide such performance with tuned semantics which expect the user to deal with coherency issues. There is also the argument that MPI is able to deal with this in a shorter time than development (and standardisation) procedures for distributed operating systems. The latter argument is somewhat comparable with the argument for message passing versus parallel compilation.
\subsection{Notes}\LabelSection{notes}
\begin{enumerate}
\item\LabelNote{descriptor} {\bf Descriptors:}
Descriptors are assumed to be a plentiful but bounded resource. They are opaque references to objects of undefined size and structure. They are not global unique identifiers; however, they must reference such identifiers, and they protect the user from the form of such identifiers, allowing them to be implementation dependent. The proposal expresses descriptors as integer types in the host language (in practice we might expect descriptors to be indices into tables of structures, or tables of pointers to structures, or indeed pointers to structures themselves). This expression facilitates uniform binding to ANSI~C and Fortran~77.
The context descriptor must at least reference: the global unique context identifier; the group descriptor of the frame. The group descriptor must at least reference: the global unique group identifier; the own context descriptor; the rank space to process descriptor map of the group (including the size of the group); the process rank. The process descriptor must at least reference: the global unique process identifier; the group descriptor of the own group. The proposal text distinguishes process, group and context identifiers more strongly than is strictly necessary. There is potential advantage in the concept of a unified descriptor which can be used to reference any kind of actual descriptor. For example, descriptor unification goes some way toward cleaning up the duplet descriptors described in ``Option ii'' of \ReferSection{point2point}. Definition in this fashion requires the referenced object to contain a class identifier (which in this proposal could be as little as 3 bits wide). This suggestion is explored in some of the notes below. Rik Littlefield has suggested descriptors could be used to ``cache'' per object user information, as appears in {\sc zipcode}. This descriptor capability could be seriously useful, for example in context or group specific implementations of collective communications. The suggestion both requires and deserves more work. There were additional motivations for expressing process descriptors as integers in the proposal. Firstly, the author anticipated ``Option ii'' of \ReferSection{point2point}. Secondly, it is observed that the definition of process descriptors can be weakened such that they may be implemented as global unique process identifiers, expressed as integers, should this confer advantage. {[\it I wish I had realised the second point before the first!]} The consequence of allowing process descriptors to be implemented as process identifiers is explored in some of the notes below. Please also note that this would exclude process descriptors from the ``caching'' capability mentioned above. These ideas suggest an alternate proposal, containing global unique process identifiers expressed as integers, and context and group descriptors as opaque references of yet unspecified language binding (i.e., perhaps not an integer). This suggestion is also explored in some of the notes below.
\item\LabelNote{processes:dynamic} {\bf Dynamic Processes:}
The proposal does not prevent a process model which allows dynamic creation and deletion of processes; however, it does not favour an asynchronous process model in which singleton processes are created and deleted in an arbitrary fashion. The proposal does favour a model in which blobs of processes are created (and deleted) in a concerted fashion, and in which each blob so created is assigned a different own process grouping. This model does not take into account the potential desire to expand or contract an existing blob of processes in order to adapt to (presumably slowly) time varying workloads. It is conjectured that concerted blob expand and contract operations are more suitable for this purpose than asynchronous singleton spawn and kill operations.
\item\LabelNote{descriptor:transmission} {\bf Descriptor transmission:}
In the spirit of descriptor unification (See \ReferNote{descriptor}) the three {\tt MPI\_?D\_TYPE} names can be collapsed into something like {\tt MPI\_DESC\_TYPE}.
If process descriptors are replaced by global unique process identifiers (See \ReferNote{descriptor}) then no special measures are required for transmission thereof. If group and context descriptors are expressed as opaque objects of yet unspecified type then, in ANSI~C at least, it will be possible to prevent nonsemantic transmission thereof.
\item\LabelNote{descriptor:registry} {\bf Descriptor registry:}
The registry service is just a simple way for concurrent objects to identify and establish communications with one another. The operations of interest, expressed in the spirit of descriptor unification (See \ReferNote{descriptor}), are: registration of a descriptor by name, e.g. \mbox{\tt mpi\_register(name,desc)}; deregistration, e.g. \mbox{\tt mpi\_deregister(desc)}; and lookup by name, e.g. \mbox{\tt mpi\_lookup(name, \&desc, wait)} where {\tt wait} controls whether the lookup waits for the name to be registered. If process descriptors are replaced by global unique process identifiers (See \ReferNote{descriptor}) then perhaps process identifier registry is not so important. The MIMD model does not actually require process descriptor or group descriptor registry to be visible to the user, since context descriptor registry and context descriptor attribute determination give access to all groups, and thus group descriptor attribute determination gives access to all processes. The proposal was written to handle descriptors consistently.
\item\LabelNote{descriptor:deallocation} {\bf Descriptor deallocation:}
In the spirit of descriptor unification (See \ReferNote{descriptor}) the three \mbox{\tt mpi\_free\_?d(?d)} procedures can be collapsed into something like \mbox{\tt mpi\_desc\_free(desc)}. The receipt of a descriptor in descriptor transmission and registry is an allocator, hence the provision of the deallocator. Perhaps there should be an explicit allocator which the user must call in order to receive a descriptor, and can deallocate when no longer required. If process descriptors are replaced by global unique process identifiers (See \ReferNote{descriptor}) then process identifier deallocation is moot.
\item\LabelNote{coherency} {\bf Coherency:}
The proposal admits incoherency as descriptors may be received in transmission or registry. The SPMD core contains no incoherency. The inclusion of dynamic process creation and deletion admits incoherency since processes can retain descriptors of processes which have been deleted. The inclusion of grouping descriptor transmission and registry admits incoherency since processes can retain descriptors of groupings which have been deleted. The inclusion of dynamic groupings admits incoherency since processes can retain descriptors of groupings of which the rank to process map has changed. The inclusion of context descriptor transmission and registry admits incoherency since processes can retain descriptors of contexts which have been deleted. The proposal expects the user to ensure coherent usage. It is conjectured that this is acceptable provided that the user is not also expected to implement process fault tolerance.
\item\LabelNote{groupings:dynamic} {\bf Dynamic groupings:}
Process groupings are dynamic in the sense that they can be created at any time, and static in the sense that the membership is constant over the lifetime of the process grouping.
The proposal specifies static groupings; however, the loose separation of communication contexts from process groupings simplifies extension to dynamic groupings, as contexts stretch or shrink according to the changes in their frames. It is conjectured that concerted grouping expand and contract operations are more suitable than asynchronous singleton join and leave operations.
\item\LabelNote{grouping:blobs} {\bf Process blobs:}
{\sc mpi} has discussed the concept of the ``all'' group which contains all processes. The ``own'' group concept is a generalisation of the ``all'' group concept which is expressive for programs including and beyond the SPMD model. Processes are created in ``blobs'', where each member of a blob is a member of the same own process grouping, and different blobs have different own process groupings. An SPMD program is a single blob. A host-node program comprises two blobs, the node blob and the host blob (a singleton). There is a sense in which a blob is SPMD like.
\item\LabelNote{grouping:own} {\bf \mbox{\tt mpi\_own\_group(pd)}:}
This procedure looks like a case of process descriptor attribute determination. If process descriptors are allowed to be implemented as global unique process identifiers, or are replaced, this procedure should accept no arguments and return the own group descriptor of the calling process.
\item\LabelNote{inverse:map} {\bf Inverse map:}
The proposal did not include a function to convert a \mbox{\tt (gd, pd)} pair into a rank. It is suggested that this inverse map be allowed to be ``slow'', i.e. it could be a linear search over members of the group, but it probably should be included for completeness. It can be used as a membership predicate.
\item\LabelNote{context:own} {\bf \mbox{\tt mpi\_own\_context(gd)}:}
This procedure looks like a case of group descriptor attribute determination.
\item\LabelNote{grouping:deletion} {\bf Grouping Deletion:}
The process grouping deletion operation should probably be defined to fail when there are user created contexts with that frame which have not themselves been deleted. This just requires a reference count in the group descriptor static attribute store.
\item\LabelNote{context:creation:deletion} {\bf Context Creation \& Deletion:}
Marc Snir has described a method by which global unique group identifiers can be generated without use of shared global data. The proposal states that context creation and deletion operations synchronise the processes within the frame of the context, anticipating use of this method for generation of context identifiers. However, the method requires only that context creation and deletion calls within a frame are performed in an identical sequence by all members of the frame. The global unique group identifier and context creator reference count are then sufficient to generate a global unique context identifier without communication or synchronisation. Should context creation and deletion therefore not synchronise the frame? There may be advantage in defining context creation and deletion such that a number of contexts are created or deleted simultaneously, depending on how heavy we expect context management to be in implementations of {\sc mpi}.
\item\LabelNote{descriptor:wildcard} {\bf Descriptor wildcard:}
In the spirit of descriptor unification (See \ReferNote{descriptor}) the three named constants {\tt MPI\_?D\_WILD} can be collapsed into something like {\tt MPI\_DESC\_WILD}.
If process descriptors are replaced with global unique process identifiers then perhaps the wildcard process identifier value can be the same as the wildcard tag value, and the same named constant.
\item\LabelNote{context:form} {\bf Context forms:}
The null form is like PVM~3. It is general purpose, but not particularly expressive. It does not provide facilities for writers of parallel libraries. It has the potential to provide maximum performance. The closed form is like {\sc zipcode}. It is expressive in SPMD programs where noncommunicative distinct data driven parallel computations can be performed concurrently. It provides facilities for writers of SPMD like parallel libraries. The open form is like {\sc chimp}. It is expressive in MIMD programs where communicative data driven parallel computations can be performed concurrently. It provides facilities for MIMD like parallel libraries.
\item\LabelNote{rank:wildcard} {\bf Rank wildcard:}
Since rank is an integer like the message tag, perhaps they should have the same wildcard value, and the same named constant.
\item\LabelNote{descriptor:null} {\bf {\tt MPI\_NULL}:}
I am following the spirit of descriptor unification (See \ReferNote{descriptor}) in the proposal text here. There may be advantage in defining the value of the null descriptor to be the ANSI~C constant {\tt NULL}, or even defining the value to be exactly zero (every rule having a useful exception).
\end{enumerate}
%
% END "Discussion & Notes"
%----------------------------------------------------------------------%
%----------------------------------------------------------------------%
% BEGIN "Conclusion"
%
\section{Conclusion}\LabelSection{conclusion}
This chapter has presented and discussed a proposal for communication contexts within {\sc mpi}. In the proposal, process groupings appeared as frames (or templates) for the construction of communication contexts, and communication contexts retained certain properties of the frames used in their construction.
%
% END "Conclusion"
%----------------------------------------------------------------------%
%
% END "Proposal VI"
%======================================================================%
\end{document}
----------------------------------------------------------------------
/--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Tue Mar 23 13:47:14 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA25054; Tue, 23 Mar 93 13:47:14 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23695; Tue, 23 Mar 93 13:46:26 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 23 Mar 1993 13:46:23 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23555; Tue, 23 Mar 93 13:45:19 -0500
Date: Tue, 23 Mar 93 18:45:08 GMT
Message-Id: <15959.9303231845@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Proposal VI, First Revision, PostScript
To: mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk

Here is PostScript for the revision of Proposal VI. Please see my reply to Jim of today for an idea of what is changed. I apologise that this is one hour later than I had previously announced. LaTeX previously sent.
Best Wishes
Lyndon
----------------------------------------------------------------------
[PostScript attachment omitted: dvips 5.495 output of context-vi.dvi, 16 pages, created Tue Mar 23 18:43:31 1993.]
E0FE03C0E7FF80C1FC00141A7E9919>I<00600000600000600000600000E00000E00001E00001 E00003E00007E0001FE000FFFFC0FFFFC007E00007E00007E00007E00007E00007E00007E00007 E00007E00007E00007E00007E00007E00007E00007E06007E06007E06007E06007E06007E06003 F0C001F0C000FF80003E0013257FA419>II E /Fj 9 116 df73 D80 D86 D<0007FFC0000000003FFFFC00 000000FFFFFF00000003F801FFC0000007F0003FE0000007F8001FF000000FFC000FF800000FFC 000FFC00000FFC0007FC00000FFC0007FE00000FFC0003FE000007F80003FF000003F00003FF00 0000000003FF000000000003FF000000000003FF000000000003FF000000000003FF0000000000 03FF000000000003FF0000000007FFFF00000000FFFFFF0000000FFFE3FF0000007FF803FF0000 01FFC003FF000003FF0003FF000007FC0003FF00000FF80003FF00001FF00003FF00003FF00003 FF00007FE00003FF00007FE00003FF0380FFC00003FF0380FFC00003FF0380FFC00003FF0380FF C00003FF0380FFC00007FF0380FFC00007FF03807FE0000DFF03807FE0001DFF03803FF00039FF 87001FF80070FFCF000FFE03E07FFE0007FFFF807FFC0000FFFE001FF800001FF00007E000312E 7CAD37>97 D<00FF80FFFF80FFFF80FFFF8003FF8001FF8001FF8001FF8001FF8001FF8001FF80 01FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF80 01FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF80 01FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF80 01FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF8001FF80 01FF8001FF8001FF8001FF8001FF8001FF80FFFFFFFFFFFFFFFFFF18487DC71F>108 D<00003FFC00000001FFFF8000000FFFFFF000003FF00FFC00007F8001FE0001FE00007F8003FC 00003FC007F800001FE007F800001FE00FF000000FF01FE0000007F81FE0000007F83FE0000007 FC3FE0000007FC7FC0000003FE7FC0000003FE7FC0000003FE7FC0000003FEFFC0000003FFFFC0 000003FFFFC0000003FFFFC0000003FFFFC0000003FFFFC0000003FFFFC0000003FFFFC0000003 FFFFC0000003FFFFC0000003FF7FC0000003FE7FC0000003FE7FC0000003FE7FE0000007FE3FE0 000007FC3FE0000007FC1FE0000007F81FF000000FF80FF000000FF007F800001FE007FC00003F E003FE00007FC001FF0000FF80007F8001FE00003FF00FFC00000FFFFFF0000003FFFFC0000000 3FFC0000302E7DAD37>111 D<00FF801FF80000FFFF80FFFF0000FFFF83FFFFC000FFFF8FC07F F00003FF9E000FF80001FFF80007FC0001FFF00003FE0001FFE00001FF0001FFC00000FF8001FF 800000FFC001FF8000007FC001FF8000007FE001FF8000007FE001FF8000003FE001FF8000003F F001FF8000003FF001FF8000003FF001FF8000001FF801FF8000001FF801FF8000001FF801FF80 00001FF801FF8000001FF801FF8000001FF801FF8000001FF801FF8000001FF801FF8000001FF8 01FF8000001FF801FF8000001FF801FF8000001FF001FF8000003FF001FF8000003FF001FF8000 003FF001FF8000003FE001FF8000007FE001FF8000007FC001FF800000FFC001FF800000FF8001 FFC00001FF8001FFE00001FF0001FFF00003FE0001FFF8000FFC0001FF9E001FF80001FF8F80FF E00001FF87FFFFC00001FF81FFFE000001FF803FF0000001FF800000000001FF800000000001FF 800000000001FF800000000001FF800000000001FF800000000001FF800000000001FF80000000 0001FF800000000001FF800000000001FF800000000001FF800000000001FF800000000001FF80 0000000001FF800000000001FF800000000001FF8000000000FFFFFF00000000FFFFFF00000000 FFFFFF0000000035427DAD3D>I<00FF007F00FFFF01FFC0FFFF03FFE0FFFF0787F003FF0E0FF0 01FF1C1FF801FF381FF801FF301FF801FF701FF801FF600FF001FF600FF001FFE003C001FFC000 0001FFC0000001FFC0000001FFC0000001FF80000001FF80000001FF80000001FF80000001FF80 000001FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF 80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001 FF80000001FF80000001FF80000001FF80000001FF80000001FF80000001FF800000FFFFFFC000 FFFFFFC000FFFFFFC000252E7DAD2C>114 D<001FFC030000FFFF870003FFFFCF000FE007FF00 1F8000FF003E00003F003E00001F007C00001F007C00000F00FC00000F00FC00000700FC000007 
00FE00000700FF00000700FF80000000FFE00000007FFE0000007FFFF800003FFFFF00003FFFFF C0001FFFFFF0000FFFFFF80007FFFFFC0001FFFFFE00007FFFFF00001FFFFF000000FFFF800000 07FF80000000FFC0E000007FC0E000003FC0E000001FC0F000000FC0F000000FC0F800000FC0F8 00000FC0F800000F80FC00000F80FE00001F80FF00001F00FF80003E00FFC0007C00F9F803F800 F0FFFFF000E03FFFC000C007FE0000222E7CAD2B>I E /Fk 8 117 df<00003000000070000001 F0000007F000007FF000FFFFF000FFFFF000FF8FF000000FF000000FF000000FF000000FF00000 0FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000 000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF0 00000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000F F000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF00000 0FF0007FFFFFFE7FFFFFFE7FFFFFFE1F3779B62D>49 D<000000FFE0001800000FFFFC00180000 7FFFFF00380001FFE00FC0780007FE0001F0F8000FF8000079F8003FE000003FF8007FC000000F F800FF80000007F801FF00000007F803FE00000003F807FC00000001F807F800000001F80FF800 000000F80FF000000000F81FF000000000781FF000000000783FE000000000783FE00000000038 7FE000000000387FE000000000387FE000000000387FC000000000007FC00000000000FFC00000 000000FFC00000000000FFC00000000000FFC00000000000FFC00000000000FFC00000000000FF C00000000000FFC00000000000FFC00000000000FFC00000000000FFC000000000007FC0000000 00007FC000000000007FE000000000007FE000000000387FE000000000383FE000000000383FE0 00000000381FF000000000381FF000000000700FF000000000700FF8000000007007F800000000 E007FC00000000E003FE00000001C001FF000000038000FF8000000780007FC000000F00003FE0 00001E00000FF800003C000007FE0000F0000001FFE007E00000007FFFFF800000000FFFFE0000 000000FFE00000353B7BB940>67 D<003FFC00000001FFFF80000007E00FE000000FC003F80000 0FE001FC00001FF001FE00001FF000FF00001FF000FF00001FF0007F00000FE0007F800007C000 7F80000000007F80000000007F80000000007F80000000007F80000000007F800000000FFF8000 0007FFFF8000003FE07F800001FF007F800007FC007F80000FF0007F80001FE0007F80003FE000 7F80007FC0007F80007FC0007F8380FF80007F8380FF80007F8380FF80007F8380FF8000BF8380 FF8000BF83807FC0013F83807FC0033F83803FE0061FC7001FF81C0FFE0007FFF007FC00007FC0 03F00029257DA42D>97 D<0003FF0000001FFFE000007F07F00001FC01FC0003F000FE0007E000 7F000FE0003F001FC0003F801FC0003F803FC0001FC03F80001FC07F80001FC07F80001FE07F80 001FE0FF80001FE0FF80001FE0FFFFFFFFE0FFFFFFFFE0FF80000000FF80000000FF80000000FF 80000000FF800000007F800000007F800000007F800000003FC00000003FC00000001FC00000E0 1FE00000E00FE00001C007F000038003F800070000FE000E00007FC07C00001FFFF0000001FF80 0023257EA428>101 D<01FC00000000FFFC00000000FFFC00000000FFFC0000000007FC000000 0003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC 0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC000000 0003FC0000000003FC0000000003FC0000000003FC01FF000003FC07FFC00003FC1C0FF00003FC 3007F80003FC6003F80003FCC003FC0003FC8001FC0003FD0001FE0003FF0001FE0003FE0001FE 0003FE0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC 0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE 0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC 0001FE0003FC0001FE0003FC0001FE0003FC0001FE0003FC0001FE00FFFFF07FFFF8FFFFF07FFF F8FFFFF07FFFF82D3A7EB932>104 D<01FC03FE0000FFFC1FFFC000FFFC780FF000FFFDE003F8 0007FF8001FC0003FF0000FE0003FE0000FF0003FC00007F8003FC00007FC003FC00003FC003FC 00003FE003FC00003FE003FC00001FE003FC00001FE003FC00001FF003FC00001FF003FC00001F F003FC00001FF003FC00001FF003FC00001FF003FC00001FF003FC00001FF003FC00001FF003FC 
00001FE003FC00003FE003FC00003FE003FC00003FC003FC00003FC003FC00007F8003FC00007F 8003FE0000FF0003FF0001FE0003FF8003FC0003FDC007F80003FCF81FE00003FC3FFF800003FC 07FC000003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC000000 0003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC0000000003FC 00000000FFFFF0000000FFFFF0000000FFFFF00000002C357EA432>112 D<01F80FC0FFF83FF0FFF870F8FFF8C1FC07F883FE03F983FE03F903FE03FB03FE03FA01FC03FA 00F803FA007003FE000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003 FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC0000 03FC000003FC000003FC000003FC000003FC0000FFFFF800FFFFF800FFFFF8001F257EA424> 114 D<001C0000001C0000001C0000001C0000001C0000003C0000003C0000003C0000003C0000 007C0000007C000000FC000000FC000001FC000003FC000007FC00001FFFFFC0FFFFFFC0FFFFFF C003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC 000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003FC000003 FC00E003FC00E003FC00E003FC00E003FC00E003FC00E003FC00E003FC00E001FC00C001FC01C0 01FE01C000FE0380007F8700001FFE000007F8001B357EB423>116 D E /Fl 20 122 df<70F8FCFC7404040404080810102040060F7C840E>44 D<008003800F80F38003 800380038003800380038003800380038003800380038003800380038003800380038003800380 038003800380038003800380038007C0FFFE0F217CA018>49 D<03F0000C1C0010070020078040 03C04003C08003E0F003E0F801E0F801E0F801E02003E00003E00003C00003C000078000070000 0E00001C0000180000300000600000C0000180000100000200200400200800201800603000403F FFC07FFFC0FFFFC013217EA018>I<03F8000C1E001007002007804007C07807C07803C07807C0 3807C0000780000780000700000F00000E0000380003F000001C00000F000007800007800003C0 0003C00003E02003E07003E0F803E0F803E0F003C04003C0400780200780100F000C1C0003F000 13227EA018>I<01F000060C000C0600180700380380700380700380F001C0F001C0F001C0F001 E0F001E0F001E0F001E0F001E07001E07003E03803E01805E00C05E00619E003E1E00001C00001 C00001C0000380000380300300780700780600700C002018001030000FC00013227EA018>57 D<0007E0100038183000E0063001C00170038000F0070000F00E0000701E0000701C0000303C00 00303C0000307C0000107800001078000010F8000000F8000000F8000000F8000000F8000000F8 000000F8000000F800000078000000780000107C0000103C0000103C0000101C0000201E000020 0E000040070000400380008001C0010000E0020000381C000007E0001C247DA223>67 D<03FFF0001F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F 00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F 00700F00F80F00F80F00F80E00F01E00401C0020380018700007C00014237EA119>74 D76 DI<0FE0001838003C0C003C0E0018070000070000070000070000FF0007C7 001E07003C0700780700700700F00708F00708F00708F00F087817083C23900FC1E015157E9418 >97 D<01FE000703000C07801C0780380300780000700000F00000F00000F00000F00000F00000 F00000F000007000007800403800401C00800C010007060001F80012157E9416>99 D<0000E0000FE00001E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000 E00000E001F8E00704E00C02E01C01E03800E07800E07000E0F000E0F000E0F000E0F000E0F000 E0F000E0F000E07000E07800E03800E01801E00C02E0070CF001F0FE17237EA21B>I<01FC0007 07000C03801C01C03801C07801E07000E0F000E0FFFFE0F00000F00000F00000F00000F0000070 00007800203800201C00400E008007030000FC0013157F9416>I<0E0000FE00001E00000E0000 0E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E1F800E60C00E80E0 0F00700F00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E0070 0E00700E00700E00700E0070FFE7FF18237FA21B>104 D<0E0000FE00001E00000E00000E0000 0E00000E00000E00000E00000E00000E00000E00000E00000E00000E03FC0E01F00E01C00E0180 
0E02000E04000E08000E10000E38000EF8000F1C000E1E000E0E000E07000E07800E03C00E01C0 0E01E00E00F00E00F8FFE3FE17237FA21A>107 D<0E00FE001E000E000E000E000E000E000E00 0E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E 000E000E000E000E000E00FFE00B237FA20E>I<0E1F80FE60C01E80E00F00700F00700E00700E 00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E 0070FFE7FF18157F941B>110 D<01FC000707000C01801800C03800E0700070700070F00078F0 0078F00078F00078F00078F00078F000787000707800F03800E01C01C00E038007070001FC0015 157F9418>I<0E3CFE461E8F0F0F0F060F000E000E000E000E000E000E000E000E000E000E000E 000E000E000F00FFF010157F9413>114 D 121 D E /Fm 24 121 df<00003FC0020001FFF8060007E01C06001F00070E003C00018E007800 00DE00F000005E01E000003E03C000001E078000001E0F0000000E0F0000000E1E000000061E00 0000063E000000063C000000027C000000027C000000027C000000027800000000F800000000F8 00000000F800000000F800000000F800000000F800000000F800000000F800000000F800000000 F80000000078000000007C000000007C000000027C000000023C000000023E000000021E000000 021E000000040F000000040F0000000C078000000803C000001801E000001000F0000020007800 0040003C000180001F0003000007E01E000001FFF80000003FC00027327CB02F>67 D70 D73 D77 D80 D82 D<007F004001FFC0400780F0C00F0018C01C000DC03C0007C0380003C0780001 C0700001C0F00000C0F00000C0F00000C0F0000040F0000040F0000040F8000040F80000007C00 00007E0000003F0000003FE000001FFE00000FFFE00007FFF80001FFFE00007FFF000007FF8000 007F8000000FC0000007E0000003E0000003E0000001F0000001F0800000F0800000F0800000F0 800000F0800000F0C00000F0C00000E0E00001E0E00001E0F00001C0F8000380EC000780C7000F 00C1E03E0080FFF800801FE0001C327CB024>I86 D<00FE0000070380000801E0001000F0003C0078003E0078003E003C003E 003C001C003C0000003C0000003C0000003C000007FC00007C3C0003E03C0007803C001F003C00 3E003C003C003C007C003C0078003C08F8003C08F8003C08F8003C08F8007C08F8007C087C00BC 083E011E100F060FE003F807C01D1E7D9D20>97 D<018000003F800000FF800000FF8000000F80 000007800000078000000780000007800000078000000780000007800000078000000780000007 800000078000000780000007800000078000000781F80007860700079803C007A000E007C000F0 07C00078078000380780003C0780003E0780001E0780001E0780001F0780001F0780001F078000 1F0780001F0780001F0780001F0780001F0780001E0780003E0780003C0780003C0780007807C0 0070074000F0072001C006100380060C0E000403F80020317EB024>I<001FC00000F0380001C0 0400078002000F000F001E001F001E001F003C001F007C000E007C00000078000000F8000000F8 000000F8000000F8000000F8000000F8000000F8000000F8000000780000007C0000007C000000 3C0000801E0000801E0001000F0001000780020001C00C0000F03000001FC000191E7E9D1D>I< 003F800000E0F0000380380007001C000E001E001E000E003C000F003C000F007C000F80780007 8078000780F8000780FFFFFF80F8000000F8000000F8000000F8000000F8000000F80000007800 0000780000007C0000003C0000801C0000801E0001000F0002000700020001C00C0000F0300000 1FC000191E7E9D1D>101 D<07000F801F801F800F800700000000000000000000000000000000 0000000000000001801F80FF80FF800F8007800780078007800780078007800780078007800780 078007800780078007800780078007800780078007800FC0FFF8FFF80D2F7EAE12>105 D<01803F80FF80FF800F8007800780078007800780078007800780078007800780078007800780 078007800780078007800780078007800780078007800780078007800780078007800780078007 8007800780078007800780078007800FC0FFFCFFFC0E317EB012>108 D<0181FE003FC0003F86 0780C0F000FF8801C1003800FF9001E2003C000FA000E4001C0007A000F4001E0007C000F8001E 0007C000F8001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000 F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00 
MPI Context Subcommittee
Proposal VI
First Revision

Lyndon J Clarke

March 23, 1993


Chapter 1

Proposal VI

1.1 Introduction

This chapter proposes that communication contexts and process groupings within MPI appear as loosely coupled concepts. One or more communication contexts may be associated with each process grouping, and each communication context inherits properties of the associated process grouping. This reflects the observations that invocations of modules in a parallel program typically operate within process groupings, and that there may be multiple modules operating within each process grouping.
The proposal provides process-identified communication, communications which are limited in scope to single contexts, and communications which have scope spanning pairs of contexts. The proposal makes no statements regarding message tags; it is assumed that these will be a bit string expressed as an integer in the host language.

Much of this proposal must be viewed as recommendations to other subcommittees of MPI, primarily the point-to-point communication subcommittee and the collective communications subcommittee. Concrete syntax is given in the style of the ANSI C host language, only for purposes of discussion.

The detailed proposal is presented in Section 1.2, which refers the reader to a set of discussion notes in Section 1.3.2. The notes assume forward knowledge of the proposal text and are therefore best examined after familiarisation with the proposal text. Aspects of the proposal are discussed in Section 1.3.1, and it is also recommended that this material be read after familiarisation with the proposal text.

1.2 Detailed Proposal

This section presents the detailed proposal, describing, in order of appearance: processes; process groupings; communication contexts; point-to-point communication; collective communication.

1.2.1 Processes

This proposal views processes in the familiar way, as one thinks of processes in Unix or NX for example. Each process is a distinct space of instructions and data. Each process is allowed to compose multiple concurrent threads, and MPI does not distinguish such threads.

Process Descriptor

Each process is described by a process descriptor (or handle) which is expressed as an integer type in the host language and has an opaque value which is process local. See Note 1.

The initialisation of MPI services will assign to each process an own process descriptor. Each process retains its own process descriptor until the termination of MPI services. MPI provides a procedure which returns the own descriptor of the calling process. For example, pd = mpi_own_process().
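To make the descriptor calls concrete, the sketch below strings together the example names used in this proposal: mpi_own_process above, together with the group and context enquiries introduced later in Sections 1.2.2 and 1.2.3. It is offered only for discussion, in the spirit of the ANSI C examples in the text; the int-valued prototypes, and the assumption that the fragment is linked against some implementation of these procedures, are illustrative additions and not part of the proposal.

    /* Sketch only: the proposal names these procedures but does not fix
     * their exact signatures; the prototypes below are assumptions made
     * so that the fragment is self-contained. */
    #include <stdio.h>

    extern int mpi_own_process(void);   /* own process descriptor           */
    extern int mpi_own_group(int pd);   /* own grouping of a given process  */
    extern int mpi_group_rank(int gd);  /* rank of the caller within group  */
    extern int mpi_group_size(int gd);  /* number of members of the group   */
    extern int mpi_own_context(int gd); /* own context framed by the group  */

    int main(void)
    {
        int pd   = mpi_own_process();   /* opaque, process-local value */
        int gd   = mpi_own_group(pd);
        int rank = mpi_group_rank(gd);
        int size = mpi_group_size(gd);
        int cd   = mpi_own_context(gd);

        printf("process %d of %d: pd=%d gd=%d cd=%d\n", rank, size, pd, gd, cd);
        return 0;
    }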
1323 V 15 w(process\(\))n Fh(.)262 1431 y Fe(Pro)q(cess)15 b(Creation)f(&)i(Destruction)262 1508 y Fh(This)f(prop)q(osal)g(mak)o(es)f(no)h(statemen)o(ts)h(regarding)f(creation) h(and)f(destruction)i(of)e(pro-)262 1558 y(cesses.)20 b(See)15 b(Note)f(2.)262 1666 y Fe(Descriptor)f(T)l(ransmission)262 1742 y Fh(Since)19 b(a)g(descriptor)h(is)f(an)f(in)o(teger)i(t)o(yp)q(e)f(in) f(the)i(host)f(language)f(the)i(v)n(alue)e(ma)o(y)f(b)q(e)262 1792 y(transmitted)f(in)g(a)g(message)h(as)g(an)f(in)o(teger.)27 b(The)17 b(recipien)o(t)h(of)e(the)h(descriptor)h(v)n(alue)262 1842 y(can)f(mak)o(e)f(no)h(de\014ned)h(use)g(of)f(the)h(v)n(alue)e(in)h(the) h Fg(mpi)f Fh(op)q(erations)h(describ)q(ed)h(in)e(this)262 1892 y(prop)q(osal)c(|)g(the)i(descriptor)g(is)f Fd(invalid)p Fh(.)324 1942 y Fg(mpi)j Fh(pro)o(vides)h(a)g(mec)o(hanism)d(whereb)o(y)k (the)f(user)h(can)f(transmit)f(a)h(v)n(alid)e(pro)q(cess)262 1991 y(descriptor)i(in)e(a)g(message)h(suc)o(h)h(that)e(the)i(receiv)o(ed)g (descriptor)g(is)f(v)n(alid.)25 b(This)17 b(is)f(in-)262 2041 y(tegrated)h(with)f(the)h(capabilit)o(y)e(to)h(transmit)f(t)o(yp)q(ed)i (messages.)26 b(It)17 b(is)f(suggested)i(that)262 2091 y(a)d(notional)f(data) h(t)o(yp)q(e)h(should)f(b)q(e)h(in)o(tro)q(duced)g(for)f(this)h(purp)q(ose,)g (e.g.)23 b Fc(MPI)p 1524 2091 V 15 w(PD)p 1583 2091 V 15 w(TYPE)o Fh(.)262 2141 y(See)14 b(Note)h(3.)324 2191 y Fg(mpi)f Fh(pro)o(vides)g(a)g (pro)q(cess)i(descriptor)f(registry)g(service.)20 b(Use)15 b(of)f(this)g(service)i(is)e(not)262 2240 y(mandated.)27 b(Programs)16 b(whic)o(h)h(can)g(con)o(v)o(enien)o(tly)h(b)q(e)f(expressed)j(without)d (using)g(the)262 2290 y(service)e(can)f(ignore)g(it)f(without)h(p)q(enalt)o (y)m(.)j(See)e(Note)f(4.)324 2340 y(Note)d(that)g(receipt)h(of)e(a)g(v)n (alid)g(pro)q(cess)i(descriptor)g(ma)o(y)d(ha)o(v)o(e)h(a)h(p)q(ersisten)o(t) h(e\013ect)h(on)262 2390 y(the)h(implemen)o(tation)d(of)i Fg(mpi)h Fh(at)f(the)i(receiv)o(er,)g(and)f(in)g(particular)f(ma)o(y)f(reserv)o(e)k (state.)967 2574 y(2)p eop %%Page: 3 4 3 3 bop 262 307 a Fg(mpi)9 b Fh(will)e(pro)o(vide)j(a)f(pro)q(cedure)i(whic)o (h)e(frees)i(\(or)e(in)o(v)n(alidates\))f(a)h(v)n(alid)f(descriptor,)j(allo)o (w-)262 357 y(ing)j(the)h(implemen)o(tatio)o(n)d(to)j(free)g(reserv)o(ed)i (state.)22 b(F)m(or)14 b(example,)f Fc(mpi)p 1436 357 14 2 v 15 w(free)p 1539 357 V 15 w(pd\(pd\))o Fh(.)262 407 y(The)g(user)g(is)g (not)g(allo)o(w)o(ed)e(to)h(free)i(the)f(pro)q(cess)i(descriptor)f(of)e(the)h (calling)e(pro)q(cess.)20 b(See)262 457 y(Note)14 b(5.)324 506 y(See)h(Note)f(6.)262 614 y Fe(Descriptor)f(A)o(ttribu)o(tes)262 691 y Fh(This)g(prop)q(osal)h(mak)o(es)f(no)g(statemen)o(ts)h(regarding)g (pro)q(cess)i(descriptor)f(attributes.)262 807 y Ff(1.2.2)55 b(Pro)r(cess)18 b(Groupings)262 884 y Fh(This)d(prop)q(osal)h(views)g(a)f (pro)q(cess)j(grouping)d(as)h(an)g(ordered)h(collection)e(of)h(\(references) 262 934 y(to\))f(distinct)h(pro)q(cesses,)i(the)e(mem)o(b)q(ership)e(and)h (ordering)g(of)g(whic)o(h)h(do)q(es)g(not)f(c)o(hange)262 983 y(o)o(v)o(er)g(the)g(lifetime)e(of)h(the)i(grouping.)21 b(See)15 b(Note)h(7.)21 b(The)15 b(canonical)g(represen)o(tation)h(of)262 1033 y(a)g(grouping)h(re\015ects)i(the)f(pro)q(cess)h(ordering)e(and)g(is)g (a)g(one-to-one)g(map)e(from)h Fb(Z)1608 1039 y Fa(N)1657 1033 y Fh(to)262 1083 y(descriptors)f(of)e(the)i Fb(N)k Fh(pro)q(cesses)d(comp)q (osing)c(the)j(grouping.)324 1133 y(There)20 b(ma)o(y)e(b)q(e)h(structure)j (asso)q(ciated)e(with)f(a)g(pro)q(cess)i(grouping)d(de\014ned)j(b)o(y)e(a)262 1183 y(pro)q(cess)g(top)q(ology)m(.)29 
b(This)18 b(prop)q(osal)g(mak)o(es)f (no)h(further)h(statemen)o(ts)f(regarding)g(suc)o(h)262 1232 y(structures.)262 1340 y Fe(Group)c(Descriptor)262 1417 y Fh(Eac)o(h)e(group) g(is)h(iden)o(ti\014ed)f(b)o(y)g(a)h Fd(gr)n(oup)g(descriptor)j Fh(\(or)d Fd(hand)r(le)s Fh(\))g(whic)o(h)g(is)f(expressed)j(as)262 1467 y(an)f(in)o(teger)h(t)o(yp)q(e)g(in)f(the)h(host)f(language)g(and)g(has) h(an)f(opaque)g(v)n(alue)g(whic)o(h)g(is)h(pro)q(cess)262 1517 y(lo)q(cal.)i(See)d(Note)h(1.)324 1566 y(The)h(initialisation)d(of)j Fg(mpi)f Fh(services)j(will)c(assign)i(to)g(eac)o(h)g(pro)q(cess)i(an)e Fd(own)j Fh(group)262 1616 y(descriptor)11 b(for)f(a)g(pro)q(cess)i(grouping) d(of)h(whic)o(h)g(the)g(pro)q(cess)i(is)e(a)g(mem)o(b)q(er.)15 b(Eac)o(h)c(pro)q(cess)262 1666 y(retains)k(its)h(o)o(wn)f(group)g (descriptor)i(and)e(mem)o(b)q(ership)f(of)g(the)i(o)o(wn)f(pro)q(cess)i (grouping)262 1716 y(un)o(til)f(the)i(termination)e(of)h Fg(mpi)g Fh(services.)31 b(See)18 b(Note)g(8.)29 b Fg(mpi)17 b Fh(pro)o(vides)g(a)h (pro)q(cedure)262 1766 y(whic)o(h)d(accepts)h(a)f(v)n(alid)f(pro)q(cess)j (descriptor)f(and)f(returns)i(the)f(o)o(wn)e(group)h(descriptor)262 1816 y(of)e(the)h(iden)o(ti\014ed)g(pro)q(cess.)20 b(F)m(or)14 b(example,)e Fc(gd)21 b(=)h(mpi)p 1149 1816 V 15 w(own)p 1230 1816 V 15 w(group\(pd\))n Fh(.)c(See)d(Note)f(9.)262 1923 y Fe(Group)g(Creation)g(and)h(Deletion)262 2000 y Fg(mpi)9 b Fh(pro)o(vides)h(facilities)f(whic)o(h)h(allo)o(w)e(users)k(to)e(dynamically) d(create)k(and)f(delete)h(pro)q(cess)262 2050 y(groupings)k(in)h(addition)g (to)g(the)h(o)o(wn)f(grouping\(s\).)25 b(The)17 b(pro)q(cedures)h(describ)q (ed)g(here)262 2100 y(generate)d(groups)f(whic)o(h)g(are)g(static)g(in)g(mem) o(b)q(ership.)324 2149 y Fg(mpi)i Fh(pro)o(vides)g(a)g(pro)q(cedure)i(whic)o (h)e(allo)o(ws)f(users)i(to)f(create)i(one)e(or)g(more)f(groups)262 2199 y(whic)o(h)9 b(are)g(subsets)i(of)e(existing)g(groups.)17 b(F)m(or)9 b(example,)f Fc(gdb)21 b(=)h(mpi)p 1360 2199 V 15 w(group)p 1485 2199 V 15 w(partition\(gda,)c(key\))262 2249 y Fh(creates)g(one)f(or)g(more)f(new)h(groups)g Fc(gdb)f Fh(whic)o(h)g(are)i (distinct)f(subsets)h(of)e(an)h(existing)262 2299 y(group)d Fc(gda)f Fh(according)h(to)g(the)h(supplied)g(v)n(alues)f(of)f Fc(key)p Fh(.)19 b(This)14 b(pro)q(cedure)i(is)e(called)g(b)o(y)262 2349 y(and)f(sync)o(hronises)j(all)c(mem)o(b)q(ers)h(of)g Fc(gda)p Fh(.)324 2399 y Fg(mpi)g Fh(pro)o(vides)i(a)e(pro)q(cedure)j(whic)o(h)e(allo) o(ws)f(users)j(to)e(create)h(a)f(group)g(b)o(y)f(p)q(erm)o(uta-)262 2448 y(tion)8 b(of)h(an)g(existing)g(group.)16 b(F)m(or)9 b(example,)g Fc(gdb)21 b(=)g(mpi)p 1159 2448 V 15 w(group)p 1284 2448 V 15 w(permutation\(gda,)e(rank\))967 2574 y Fh(3)p eop %%Page: 4 5 4 4 bop 262 307 a Fh(creates)14 b(one)e(new)h(group)f(with)g(the)h(same)f (mem)o(b)q(ership)f(as)h Fc(gda)g Fh(with)g(a)g(p)q(erm)o(utation)f(of)262 357 y(pro)q(cess)k(ranking,)e(and)g(returns)i(the)g(created)g(group)e (descriptor)i(in)f Fc(gdb)p Fh(.)j(This)d(pro)q(ced-)262 407 y(ure)g(is)g(called)f(b)o(y)h(and)g(sync)o(hronises)h(all)e(mem)o(b)q(ers)f (of)i Fc(gda)p Fh(.)324 457 y Fg(mpi)19 b Fh(pro)o(vides)h(a)f(pro)q(cedure)i (whic)o(h)f(allo)o(ws)e(users)j(to)e(create)i(a)f(group)f(b)o(y)g(expli-)262 506 y(cit)d(de\014nition)h(of)f(its)h(mem)o(b)q(ership)e(as)i(a)f(list)h(of)f (pro)q(cess)j(descriptors.)28 b(F)m(or)16 b(example,)262 556 y Fc(gd)21 b(=)g(mpi)p 439 556 14 2 v 16 w(group)p 565 556 V 14 w(definition\(listofp)o(d\))10 b Fh(creates)16 b(one)d(new)h(group)f Fc(gd)g Fh(with)g(mem-)262 606 
y(b)q(ership)k(and)g(ordering)g(describ)q(ed)h (b)o(y)f(the)g(pro)q(cess)i(descriptor)f(list)e Fc(listofpd)p Fh(.)25 b(This)262 656 y(pro)q(cedure)15 b(is)f(called)g(b)o(y)f(and)h(sync)o (hronises)h(all)e(pro)q(cesses)k(iden)o(ti\014ed)d(in)f Fc(listofpd)p Fh(.)324 706 y Fg(mpi)i Fh(pro)o(vides)g(a)g(pro)q(cedure)i(whic)o(h)f(allo)o (ws)e(users)i(to)f(delete)i(user)f(created)h(groups.)262 756 y(This)9 b(pro)q(cedure)i(accepts)g(the)g(descriptor)f(of)f(a)g(group)h(whic) o(h)f(w)o(as)h(created)h(b)o(y)e(the)h(calling)262 805 y(pro)q(cess)h(and)e (deletes)i(the)f(iden)o(ti\014ed)f(group.)17 b(F)m(or)9 b(example,)f Fc(mpi)p 1295 805 V 15 w(group)p 1420 805 V 15 w(deletion\(gd\))262 855 y Fh(deletes)17 b(an)e(existing)h(group)f Fc(gd)p Fh(.)23 b(This)16 b(pro)q(cedure)i(is)d(called)h(b)o(y)f(and)h(sync)o(hronises)h(all) 262 905 y(mem)o(b)q(ers)12 b(of)h Fc(gd)p Fh(.)324 955 y Fg(mpi)h Fh(ma)o(y)f(pro)o(vide)i(additional)e(pro)q(cedures)k(whic)o(h)d(allo)o(w)f (users)k(to)d(construct)j(pro-)262 1005 y(cess)e(groupings)e(with)h(a)g(pro)q (cess)h(grouping)e(top)q(ology)m(.)262 1112 y Fe(Descriptor)g(T)l (ransmission)262 1189 y Fh(Since)19 b(a)g(descriptor)h(is)f(an)f(in)o(teger)i (t)o(yp)q(e)f(in)f(the)i(host)f(language)f(the)i(v)n(alue)e(ma)o(y)f(b)q(e) 262 1239 y(transmitted)10 b(in)g(a)h(message)f(as)h(an)g(in)o(teger.)17 b(The)12 b(recipien)o(t)f(of)f(the)i(descriptor)g(can)f(mak)o(e)262 1289 y(no)j(de\014ned)h(use)h(of)e(the)h(v)n(alue)f(in)g(the)h Fg(mpi)f Fh(op)q(erations)h(describ)q(ed)h(in)e(this)h(prop)q(osal)f(|)262 1339 y(the)g(descriptor)h(is)f Fd(invalid)p Fh(.)k(See)d(Note)f(1.)324 1388 y Fg(mpi)20 b Fh(pro)o(vides)g(a)g(mec)o(hanism)e(whereb)o(y)j(the)g (user)h(can)e(transmit)f(a)h(v)n(alid)f(group)262 1438 y(descriptor)f(in)e(a) g(message)h(suc)o(h)h(that)e(the)i(receiv)o(ed)g(descriptor)g(is)f(v)n(alid.) 
25 b(This)17 b(is)f(in-)262 1488 y(tegrated)h(with)f(the)h(capabilit)o(y)e (to)h(transmit)f(t)o(yp)q(ed)i(messages.)26 b(It)17 b(is)f(suggested)i(that) 262 1538 y(a)d(notional)f(data)h(t)o(yp)q(e)h(should)f(b)q(e)h(in)o(tro)q (duced)g(for)f(this)h(purp)q(ose,)g(e.g.)23 b Fc(MPI)p 1524 1538 V 15 w(GD)p 1583 1538 V 15 w(TYPE)o Fh(.)262 1588 y(See)14 b(Note)h(3.)324 1637 y Fg(mpi)h Fh(pro)o(vides)g(a)g(group)g(descriptor)h (registry)g(service.)26 b(Use)17 b(of)e(this)h(service)i(is)e(not)262 1687 y(mandated.)27 b(Programs)16 b(whic)o(h)h(can)g(con)o(v)o(enien)o(tly)h (b)q(e)f(expressed)j(without)d(using)g(the)262 1737 y(service)e(can)f(ignore) g(it)f(without)h(p)q(enalt)o(y)m(.)j(See)e(Note)f(4.)324 1787 y(Note)f(that)g(receipt)h(of)e(a)h(v)n(alid)e(group)i(descriptor)h(ma)o(y)d (ha)o(v)o(e)h(a)h(p)q(ersisten)o(t)h(e\013ect)h(on)262 1837 y(the)f(implemen)o(tation)d(of)i Fg(mpi)h Fh(at)f(the)i(receiv)o(er,)g(and)f (in)g(particular)f(ma)o(y)f(reserv)o(e)k(state.)262 1886 y Fg(mpi)9 b Fh(will)e(pro)o(vide)j(a)f(pro)q(cedure)i(whic)o(h)e(frees)i(\(or) e(in)o(v)n(alidates\))f(a)h(v)n(alid)f(descriptor,)j(allo)o(w-)262 1936 y(ing)j(the)h(implemen)o(tatio)o(n)d(to)j(free)g(reserv)o(ed)i(state.)22 b(F)m(or)14 b(example,)f Fc(mpi)p 1436 1936 V 15 w(free)p 1539 1936 V 15 w(gd\(gd\))o Fh(.)262 1986 y(The)i(user)h(is)f(not)f(allo)o(w)o(ed) g(to)h(free)h(the)f(o)o(wn)g(group)f(descriptor)i(of)f(the)g(calling)f(pro)q (cess)262 2036 y(or)e(the)h(group)f(descriptor)i(of)e(an)o(y)g(group)g (created)i(b)o(y)f(the)g(calling)e(pro)q(cess.)19 b(See)13 b(Note)g(5.)324 2086 y(See)i(Note)f(6.)262 2194 y Fe(Descriptor)f(A)o(ttribu) o(tes)262 2270 y Fg(mpi)d Fh(pro)o(vides)i(a)f(pro)q(cedure)i(whic)o(h)e (accepts)h(a)f(v)n(alid)f(group)h(descriptor)h(and)f(returns)i(the)262 2320 y(rank)c(of)f(the)i(calling)e(pro)q(cess)j(within)e(the)h(iden)o (ti\014ed)f(group.)16 b(F)m(or)9 b(example,)g Fc(rank)21 b(=)g(mpi)p 1691 2320 V 15 w(group)p 1816 2320 V 15 w(rank\(gd\))n Fh(.)324 2370 y Fg(mpi)e Fh(pro)o(vides)g(a)g(pro)q(cedure)i(whic)o(h)e(accepts)h(a)f (v)n(alid)f(group)h(descriptor)h(and)f(re-)262 2420 y(turns)f(the)g(n)o(um)o (b)q(er)f(of)h(mem)o(b)q(ers,)f(or)g Fd(size)p Fh(,)h(of)f(the)i(iden)o (ti\014ed)f(group.)29 b(F)m(or)17 b(example,)967 2574 y(4)p eop %%Page: 5 6 5 5 bop 262 307 a Fc(size)20 b(=)i(mpi)p 483 307 14 2 v 15 w(group)p 608 307 V 15 w(size\(gd\))n Fh(.)324 357 y Fg(mpi)16 b Fh(pro)o(vides)h(a)g(pro)q(cedure)h(whic)o(h)f(accepts)i(a)d(v)n(alid)f (group)i(descriptor)h(and)f(pro-)262 407 y(cess)22 b(order)f(n)o(um)o(b)q (er,)g(or)f Fd(r)n(ank)p Fh(,)i(and)e(returns)i(the)f(v)n(alid)e(descriptor)j (of)e(the)h(pro)q(cess)262 457 y(to)e(whic)o(h)g(the)h(supplied)g(rank)f (maps)f(within)h(the)h(iden)o(ti\014ed)g(group.)34 b(F)m(or)19 b(example,)262 506 y Fc(pd)i(=)g(mpi)p 439 506 V 16 w(group)p 565 506 V 14 w(pd\(gd,)g(rank\))o Fh(.)324 556 y(See)15 b(Note)f(10.)324 606 y Fg(mpi)h Fh(ma)o(y)e(pro)o(vide)i(additional)f(pro)q(cedures)j(whic)o (h)e(allo)o(w)f(users)j(to)e(determine)g(the)262 656 y(pro)q(cess)g(grouping) e(top)q(ology)g(attributes.)262 772 y Ff(1.2.3)55 b(Comm)n(unication)16 b(Con)n(texts)262 849 y Fh(This)c(prop)q(osal)h(views)g(a)g(comm)o(unicatio)o (n)d(con)o(text)k(as)f(a)g(uniquely)f(iden)o(ti\014ed)h(reference)262 899 y(to)f(exactly)g(one)h(pro)q(cess)h(grouping,)e(whic)o(h)g(is)g(a)h (\014eld)f(in)g(a)g(message)g(en)o(v)o(elop)q(e)h(and)g(ma)o(y)262 948 y(therefore)j(b)q(e)g(used)g(to)f(distinguish)g(messages.)21 b(The)16 b(con)o(text)g(inherits)f(the)h(referenced)262 998 
y(pro)q(cess)g(grouping)d(as)h(a)g(\\frame".)j(Eac)o(h)d(pro)q(cess)i (grouping)e(ma)o(y)e(b)q(e)j(used)g(as)f(a)g(frame)262 1048 y(for)f(m)o(ultiple)f(con)o(texts.)262 1156 y Fe(Con)o(text)i(Descriptor)262 1232 y Fh(Eac)o(h)e(con)o(text)g(is)g(iden)o(ti\014ed)g(b)o(y)g(a)g Fd(c)n(ontext)h(descriptor)j Fh(\(or)c Fd(hand)r(le)p Fh(\))h(whic)o(h)e(is)h (expressed)262 1282 y(as)f(an)f(in)o(teger)i(t)o(yp)q(e)f(in)g(the)h(host)f (language)f(and)h(has)g(an)g(opaque)g(v)n(alue)f(whic)o(h)h(is)g(pro)q(cess) 262 1332 y(lo)q(cal.)17 b(See)d(Note)h(1.)324 1382 y(The)i(creation)g(of)f Fg(mpi)h Fh(pro)q(cess)h(groupings)f(allo)q(cates)f(an)h Fd(own)j Fh(con)o(text)e(whic)o(h)e(in-)262 1432 y(herits)f(the)h(created)h(grouping)d (as)i(a)e(frame)g(and)h(can)h(b)q(e)g(though)o(t)e(of)h(as)g(a)g(prop)q(ert)o (y)h(of)262 1482 y(the)h(created)i(grouping.)26 b(The)17 b(grouping)g (retains)g(the)h(descriptor)g(of)e(its)h(o)o(wn)g(con)o(text)262 1531 y(un)o(til)c Fg(mpi)h Fh(pro)q(cess)i(grouping)e(deletion.)19 b Fg(mpi)14 b Fh(pro)o(vides)g(a)g(pro)q(cedure)j(whic)o(h)d(accepts)i(a)262 1581 y(v)n(alid)c(group)h(descriptor)i(and)e(returns)i(the)f(con)o(text)h (descriptor)g(of)e(the)h(o)o(wn)f(con)o(text)h(of)262 1631 y(the)g(iden)o(ti\014ed)g(group.)k(F)m(or)13 b(example,)g Fc(cd)21 b(=)g(mpi)p 1074 1631 V 16 w(own)p 1156 1631 V 15 w(context\(gd\))m Fh(.)d(See)d(Note)f(11.)262 1739 y Fe(Con)o(text)g(Creation)h(and)g(Deletion) 262 1816 y Fg(mpi)f Fh(pro)o(vides)h(facilities)f(whic)o(h)h(allo)o(ws)f (user)i(to)e(dynamically)e(create)17 b(and)e(delete)h(con-)262 1865 y(texts)i(in)e(addition)g(to)h(the)h(o)o(wn)f(con)o(texts)h(asso)q (ciated)g(with)e(pro)q(cess)j(groupings.)28 b(See)262 1915 y(Note)14 b(12.)324 1965 y Fg(mpi)i Fh(pro)o(vides)g(a)g(pro)q(cedure)i(whic) o(h)e(allo)o(ws)f(users)i(to)f(create)i(con)o(texts.)26 b(This)16 b(pro-)262 2015 y(cedure)j(accepts)g(a)e(v)n(alid)f(descriptor)j(of)d(a)i (group)f(of)g(whic)o(h)g(the)h(calling)e(pro)q(cess)k(is)d(a)262 2065 y(mem)o(b)q(er,)10 b(and)i(returns)i(a)e(con)o(text)h(descriptor)g(whic) o(h)f(references)j(the)e(iden)o(ti\014ed)f(group.)262 2114 y(F)m(or)18 b(example,)g Fc(cd)j(=)h(mpi)p 699 2114 V 15 w(context)p 868 2114 V 14 w(creation\(gd\))m Fh(.)32 b(This)19 b(pro)q(cedure)h(is)f (called)f(b)o(y)262 2164 y(and)13 b(sync)o(hronises)j(all)c(mem)o(b)q(ers)h (of)g Fc(gd)p Fh(.)324 2214 y Fg(mpi)f Fh(pro)o(vides)h(a)f(pro)q(cedure)i (whic)o(h)f(allo)o(ws)e(users)j(to)f(delete)g(user)h(created)g(con)o(texts.) 
262 2264 y(This)d(pro)q(cedure)j(accepts)f(a)e(v)n(alid)f(con)o(text)j (descriptor)g(whic)o(h)e(w)o(as)h(created)h(b)o(y)e(the)i(call-)262 2314 y(ing)8 b(pro)q(cess)j(and)e(deletes)i(the)f(iden)o(ti\014ed)f(con)o (text.)17 b(F)m(or)9 b(example,)g Fc(mpi)p 1389 2314 V 15 w(context)p 1558 2314 V 14 w(deletion\(cd\))n Fh(.)262 2363 y(This)k(pro)q(cedure)j(is)e (called)f(b)o(y)h(and)g(sync)o(hronises)h(all)e(mem)o(b)q(ers)f(of)i(the)g (frame)f(of)g Fc(cd)p Fh(.)324 2413 y(See)i(Note)f(13)967 2574 y(5)p eop %%Page: 6 7 6 6 bop 262 307 a Fe(Descriptor)13 b(T)l(ransmission)262 384 y Fh(Since)19 b(a)g(descriptor)h(is)f(an)f(in)o(teger)i(t)o(yp)q(e)f(in)f (the)i(host)f(language)f(the)i(v)n(alue)e(ma)o(y)f(b)q(e)262 434 y(transmitted)10 b(in)g(a)h(message)f(as)h(an)g(in)o(teger.)17 b(The)12 b(recipien)o(t)f(of)f(the)i(descriptor)g(can)f(mak)o(e)262 483 y(no)j(de\014ned)h(use)h(of)e(the)h(v)n(alue)f(in)g(the)h Fg(mpi)f Fh(op)q(erations)h(describ)q(ed)h(in)e(this)h(prop)q(osal)f(|)262 533 y(the)g(descriptor)h(is)f Fd(invalid)p Fh(.)k(See)d(Note)f(1.)324 583 y(MPI)i(pro)o(vides)h(a)f(mec)o(hanism)d(whereb)o(y)18 b(the)f(user)g(can)f(transmit)f(a)h(v)n(alid)f(con)o(text)262 633 y(descriptor)j(in)e(a)g(message)h(suc)o(h)h(that)e(the)i(receiv)o(ed)g (descriptor)g(is)f(v)n(alid.)25 b(This)17 b(is)f(in-)262 683 y(tegrated)h(with)f(the)h(capabilit)o(y)e(to)h(transmit)f(t)o(yp)q(ed)i (messages.)26 b(It)17 b(is)f(suggested)i(that)262 732 y(a)d(notional)f(data)h (t)o(yp)q(e)h(should)f(b)q(e)h(in)o(tro)q(duced)g(for)f(this)h(purp)q(ose,)g (e.g.)23 b Fc(MPI)p 1524 732 14 2 v 15 w(CD)p 1583 732 V 15 w(TYPE)o Fh(.)262 782 y(See)14 b(Note)h(3.)324 832 y Fg(mpi)e Fh(pro)o(vides)i(a)e(con)o(text)i(descriptor)g(registry)g(service.)k(Use)c (of)f(this)g(service)h(is)f(not)262 882 y(mandated.)27 b(Programs)16 b(whic)o(h)h(can)g(con)o(v)o(enien)o(tly)h(b)q(e)f(expressed)j(without)d (using)g(the)262 932 y(service)e(can)f(ignore)g(it)f(without)h(p)q(enalt)o(y) m(.)j(See)e(Note)f(4.)324 982 y(Note)d(that)f(receipt)i(of)e(a)g(v)n(alid)f (con)o(text)i(descriptor)h(ma)o(y)d(ha)o(v)o(e)h(a)g(p)q(ersisten)o(t)j (e\013ect)f(on)262 1031 y(the)i(implemen)o(tation)d(of)i Fg(mpi)h Fh(at)f(the)i(receiv)o(er,)g(and)f(in)g(particular)f(ma)o(y)f(reserv)o(e)k (state.)262 1081 y Fg(mpi)j Fh(will)g(pro)o(vide)h(a)g(pro)q(cedure)i(whic)o (h)d(frees)j(\(or)e(in)o(v)n(alidates\))f(in)o(v)n(alidates)f(a)i(v)n(alid) 262 1131 y(descriptor,)g(allo)o(wing)c(the)j(implemen)o(tation)c(to)j(free)h (reserv)o(ed)i(state.)32 b(F)m(or)18 b(example,)262 1181 y Fc(mpi)p 331 1181 V 15 w(free)p 434 1181 V 14 w(cd\(cd\))o Fh(.)26 b(The)17 b(user)h(is)e(not)h(allo)o(w)o(ed)e(to)h(free)i(the)f(o)o (wn)f(con)o(text)h(descriptor)262 1231 y(of)g(an)o(y)h(group)h(or)f(the)h (con)o(text)g(descriptor)h(of)e(an)o(y)g(con)o(text)h(created)h(b)o(y)e(the)h (calling)262 1280 y(pro)q(cess.)324 1330 y(See)c(Note)f(6.)262 1438 y Fe(Descriptor)f(A)o(ttribu)o(tes)262 1515 y Fg(mpi)f Fh(pro)o(vides)i(a)f(pro)q(cedure)i(whic)o(h)e(allo)o(ws)f(users)j(to)e (determine)g(the)h(pro)q(cess)h(grouping)262 1565 y(whic)o(h)e(is)h(the)h (frame)d(of)h(a)h(con)o(text.)19 b(F)m(or)13 b(example,)f Fc(gd)22 b(=)f(mpi)p 1282 1565 V 15 w(context)p 1451 1565 V 15 w(group\(cd\))n Fh(.)262 1681 y Ff(1.2.4)55 b(P)n(oin)n(t-to-P)n(oin)n(t)19 b(Comm)n(unication)262 1757 y Fh(This)12 b(prop)q(osal)h(recommends)f(three)i (forms)e(for)h Fg(mpi)f Fh(p)q(oin)o(t-to-p)q(oin)o(t)g(message)g(address-) 262 1807 y(ing)h(and)h(selection:)19 b(n)o(ull)13 b(con)o(text;)h(closed)g 
(con)o(text;)h(op)q(en)f(con)o(text.)19 b(\(See)c(Note)g(15\).)j(It)262 1857 y(is)d(further)i(recommended)e(that)i(messages)f(comm)o(uni)o(cated)e (in)i(eac)o(h)g(form)f(are)h(distin-)262 1907 y(guished)h(suc)o(h)h(that)f(a) g Fc(Send)f Fh(op)q(eration)i(of)e(form)g(X)h(cannot)g(matc)o(h)f(with)h(a)g Fc(Receive)262 1957 y Fh(op)q(eration)10 b(of)g(form)f(Y,)h(requiring)g(that) h(form)e(is)h(em)o(b)q(edded)h(in)o(to)f(the)h(message)g(en)o(v)o(elop)q(e.) 324 2006 y(The)k(three)g(forms)e(are)i(describ)q(ed,)h(follo)o(w)o(ed)d(b)o (y)h(considerations)h(of)f(uniform)e(in)o(teg-)262 2056 y(ration)h(of)g (these)i(forms)e(in)g(the)i(p)q(oin)o(t-to-p)q(oin)o(t)d(comm)o(unication)f (c)o(hapter)k(of)e Fg(mpi)p Fh(.)262 2164 y Fe(Null)h(Con)o(text)g(F)l(orm) 262 2241 y Fh(The)20 b Fd(nul)r(l)h(c)n(ontext)k Fh(form)18 b(con)o(tains)j(no)f(message)g(con)o(text.)38 b(Message)22 b(selection)f(and)262 2291 y(addressing)16 b(are)f(expressed)j(b)o(y)e Fc(\(pd,)k(tag\))15 b Fh(where:)22 b Fc(pd)15 b Fh(is)h(a)f(pro)q(cess)i (descriptor;)g Fc(tag)262 2340 y Fh(is)c(a)h(message)f(tag.)324 2390 y Fc(Send)g Fh(supplies)h(the)g Fc(pd)f Fh(of)g(the)h(receiv)o(er.)20 b Fc(Receive)12 b Fh(supplies)i(the)g Fc(pd)f Fh(of)g(the)h(sender.)967 2574 y(6)p eop %%Page: 7 8 7 7 bop 324 307 a Fc(Receive)16 b Fh(can)j(wildcard)e(on)h Fc(pd)g Fh(b)o(y)f(supplying)h(the)h(v)n(alue)e(of)g(a)h(named)f(constan)o(t) 262 357 y(pro)q(cess)j(descriptor,)h(e.g.)33 b Fc(MPI)p 788 357 14 2 v 15 w(PD)p 847 357 V 15 w(WILD)p Fh(.)18 b(See)i(Note)f(14.)33 b(This)18 b(prop)q(osal)h(mak)o(es)f(no)262 407 y(statemen)o(t)13 b(ab)q(out)h(the)h(pro)o(vision)d(for)i(wildcard)f(on)h Fc(tag)p Fh(.)262 515 y Fe(Closed)g(Con)o(text)h(F)l(orm)262 591 y Fh(The)f Fd(close)n(d)g(c)n(ontext)k Fh(form)12 b(p)q(ermits)h(comm)o(unication)d(b)q (et)o(w)o(een)15 b(mem)o(b)q(ers)d(of)h(the)h(same)262 641 y(con)o(text.)22 b(Message)16 b(selection)g(and)f(addressing)h(are)f (expressed)j(b)o(y)d Fc(\(cd,)21 b(rank,)f(tag\))262 691 y Fh(where:)e Fc(cd)11 b Fh(is)h(a)g(con)o(text)h(descriptor;)g Fc(rank)f Fh(is)f(a)h(pro)q(cess)i(rank)e(in)g(the)g(frame)f(of)h Fc(cd)p Fh(;)f Fc(tag)262 741 y Fh(is)i(a)h(message)f(tag.)324 791 y Fc(Send)i Fh(supplies)i(the)g Fc(cd)f Fh(of)g(the)h(receiv)o(er)h (\(and)f(sender\),)h(and)e(the)i Fc(rank)d Fh(of)h(the)h(re-)262 840 y(ceiv)o(er.)26 b Fc(Receive)15 b Fh(supplies)i(the)g Fc(cd)f Fh(of)g(the)h(sender)h(\(and)e(receiv)o(er\),)j(and)d(the)h(rank)f(of)262 890 y(the)f(sender.)24 b(The)16 b Fc(\(cd,)21 b(rank\))14 b Fh(pair)h(in)f Fc(Send)h Fh(\()p Fc(Receive)p Fh(\))f(is)h(su\016cien)o(t)h (to)f(determine)262 940 y(the)f(pro)q(cess)i(descriptor)f(of)e(the)i(receiv)o (er)g(\(sender\).)324 990 y Fc(Receive)9 b Fh(cannot)i(wildcard)f(on)g Fc(cd)p Fh(.)17 b Fc(Receive)9 b Fh(can)i(wildcard)f(on)g Fc(rank)g Fh(b)o(y)g(supplying)262 1040 y(the)15 b(v)n(alue)f(of)h(a)f(named)g(constan) o(t)i(in)o(teger,)f(e.g.)21 b Fc(MPI)p 1133 1040 V 15 w(RANK)p 1236 1040 V 15 w(WILD)p Fh(.)14 b(See)i(Note)f(16.)21 b(This)262 1089 y(prop)q(osal)13 b(mak)o(es)g(no)g(statemen)o(t)h(ab)q(out)g(the)h(pro)o (vision)d(for)i(wildcard)f(on)h Fc(tag)p Fh(.)262 1197 y Fe(Op)q(en)h(Con)o (text)f(F)l(orm)262 1274 y Fh(The)k Fd(op)n(en)h(c)n(ontext)j Fh(form)16 b(p)q(ermits)h(comm)o(unication)e(b)q(et)o(w)o(een)k(mem)o(b)q (ers)e(of)g(an)o(y)g(t)o(w)o(o)262 1324 y(con)o(texts.)g(Message)11 b(selection)e(and)h(addressing)g(are)f(expressed)j(b)o(y)d Fc(\(lcd,)21 b(rcd,)g(rank,)f(tag\))262 1374 y Fh(where:)d Fc(lcd)11 b 
Fh(is)h(a)f(\\lo)q(cal")f(con)o(text)i(descriptor;)h Fc(rcd)e Fh(is)g(a)g(\\remote")g(con)o(text)h(descriptor;)262 1423 y Fc(rank)h Fh(is)g(a)h(pro)q(cess)i(rank)d(in)h(the)g(frame)f(of)g Fc(rcd)p Fh(;)g Fc(tag)g Fh(is)h(a)f(message)h(tag.)324 1473 y Fc(Send)21 b Fh(supplies)h(the)g(con)o(text)h(descriptor)g(for)f(the)g (sender)h(in)f Fc(lcd)p Fh(,)g(the)h(con)o(text)262 1523 y(descriptor)c(for)f (the)h(receiv)o(er)h(in)e Fc(rcd)p Fh(,)h(and)f(the)h Fc(rank)e Fh(of)h(the)h(receiv)o(er)h(in)e(the)h(frame)262 1573 y(of)f Fc(rcd)p Fh(.)31 b Fc(Receive)17 b Fh(supplies)j(the)f(con)o(text)g (descriptor)h(for)f(the)g(receiv)o(er)h(in)e Fc(lcd)p Fh(,)h(the)262 1623 y(con)o(text)f(descriptor)g(for)f(the)h(sender)h(in)e Fc(rcd)p Fh(,)g(and)h(the)g Fc(rank)e Fh(of)h(the)h(sender)h(receiv)o(er)262 1672 y(in)c(the)h(frame)f(of)g Fc(rcd)p Fh(.)23 b(The)16 b Fc(\(rcd,)k(rank\))15 b Fh(pair)g(in)h Fc(Send)e Fh(\()p Fc(Receive)p Fh(\))h(is)h(su\016cien)o(t)g(to)262 1722 y(determine)d(the)i(pro)q(cess)h (descriptor)f(of)e(the)i(receiv)o(er)g(\(sender\).)324 1772 y Fc(Receive)f Fh(cannot)h(wildcard)g(on)h Fc(lcd)f Fh(whic)o(h)g(is)g(the)h (op)q(en)g(con)o(text)g(form)e(analog)g(of)262 1822 y(there)g(b)q(eing)f(no)f (wildcard)h(on)f Fc(cd)h Fh(in)f(the)i(closed)f(con)o(text)h(form.)h(Receiv)o (e)f(can)f(wildcard)262 1872 y(on)19 b Fc(rcd)g Fh(b)o(y)h(supplying)f(the)h (v)n(alue)f(of)g(a)h(named)f(constan)o(t)h(con)o(text)g(descriptor,)i(e.g.) 262 1922 y Fc(MPI)p 331 1922 V 15 w(CD)p 390 1922 V 15 w(WILD)12 b Fh(\(See)j(Note)f(14\),)e(in)h(whic)o(h)h(case)g Fc(Receive)e Fd(must)17 b Fh(also)c(wildcard)g(on)g Fc(rank)262 1971 y Fh(as)k(there)i(is) e(insu\016cien)o(t)h(information)c(to)k(determine)f(the)h(pro)q(cess)h (descriptor)g(of)e(the)262 2021 y(sender.)39 b Fc(Receive)18 b Fh(can)j(wildcard)f(on)g Fc(rank)f Fh(b)o(y)i(supplying)e(the)i(v)n(alue)f (of)g(a)g(named)262 2071 y(constan)o(t)e(in)o(teger,)i(e.g.)30 b Fc(MPI)p 750 2071 V 15 w(RANK)p 853 2071 V 15 w(WILD)p Fh(.)17 b(See)i(Note)g(16.)31 b(This)18 b(prop)q(osal)g(mak)o(es)f(no)262 2121 y(statemen)o(t)c(ab)q(out)h(the)h(pro)o(vision)d(for)i(wildcard)f(on)h Fc(tag)p Fh(.)262 2229 y Fe(Uniform)f(In)o(tegration)262 2305 y Fh(The)j(three)h(forms)e(of)g(addressing)i(and)f(selection)g(describ)q(ed)i (ha)o(v)o(e)e(di\013eren)o(t)h(syn)o(tactic)262 2355 y(framew)o(orks.)29 b(W)m(e)17 b(can)h(consider)h(in)o(tegrating)e(these)j(forms)c(in)o(to)h(the) i(p)q(oin)o(t-to-p)q(oin)o(t)967 2574 y(7)p eop %%Page: 8 9 8 8 bop 262 307 a Fh(c)o(hapter)15 b(of)f Fg(mpi)g Fh(b)o(y)h(de\014ning)f(a) h(further)g(orthogonal)f(axis)g(\(as)h(in)f(the)h(m)o(ulti-lev)o(el)d(pro-) 262 357 y(p)q(osal)i(of)f(Gropp)i(&)f(Lusk\))h(whic)o(h)g(deals)f(with)g (form.)k(This)d(is)f(at)g(the)i(exp)q(ense)g(of)e(m)o(ul-)262 407 y(tiplying)f(the)i(n)o(um)o(b)q(er)e(of)h Fc(Send)g Fh(and)g Fc(Receive)f Fh(pro)q(cedures)k(b)o(y)e(a)f(factor)g(of)g(three,)i(and)262 457 y(some)c(further)i(but)g(trivial)e(w)o(ork)h(with)g(details)h(of)f(the)h (curren)o(t)h(p)q(oin)o(t-to-p)q(oin)o(t)d(c)o(hapter)262 506 y(whic)o(h)h(uniformly)f(assumes)h(a)h(single)f(addressing)i(and)f(selection) g(form.)324 556 y(There)22 b(are)g(v)n(arious)f(approac)o(hes)h(to)f (uni\014cation)g(of)g(the)h(syn)o(tactic)g(framew)o(orks)262 606 y(whic)o(h)10 b(ma)o(y)f(simplify)f(in)o(tegration.)17 b(Three)12 b(options)e(are)i(no)o(w)e(describ)q(ed,)j(eac)o(h)e(based)h(on) 262 656 y(reten)o(tion)i(and)g(extension)g(of)f(the)i(framew)o(ork)d(of)h (one)h(form.)j(These)e(options)e(represen)o(t)262 706 y(di\013eren)o(t)h 
(compromises)e(b)q(et)o(w)o(een)k(the)e(three)h(forms.)262 814 y Fe(Option)d(i:)21 b(Op)q(en)14 b(Con)o(text)h(F)l(ramew)o(ork)41 b Fh(The)13 b(framew)o(ork)f(of)h(the)h(op)q(en)g(con)o(text)262 863 y(form)e(is)h(adopted)i(and)e(extended.)324 913 y(In)o(tro)q(duce)21 b(the)g Fd(nul)r(l)j Fh(descriptor,)e(the)f(v)n(alue)f(of)f(whic)o(h)h(is)g (de\014ned)h(b)o(y)f(a)g(named)262 963 y(constan)o(t,)e(e.g.)28 b Fc(MPI)p 605 963 14 2 v 15 w(NULL)p Fh(.)16 b(See)i(Note)g(17.)27 b(The)18 b(n)o(ull)e(con)o(text)i(form)e(is)h(expressed)j(as)262 1013 y Fc(\(MPI)p 353 1013 V 14 w(NULL,)h(MPI)p 564 1013 V 15 w(NULL,)g(pd,)g(tag\))o Fh(,)14 b(whic)o(h)f(is)h(a)f(little)g(clumsy)m(.) j(The)f(closed)f(con)o(text)262 1063 y(form)f(is)j(expressed)h(as)f Fc(\(MPI)p 736 1063 V 15 w(NULL,)21 b(cd,)g(rank,)g(tag\))o Fh(,)15 b(whic)o(h)g(is)h(marginally)c(incon-)262 1112 y(v)o(enien)o(t.)17 b(The)12 b(op)q(en)f(con)o(text)i(form)c(is)i(expressed)j(as)d Fc(\(lcd,)21 b(rcd,)g(rank,)g(tag)o Fh(\),)12 b(whic)o(h)262 1162 y(is)h(of)h(course)h(natural.)262 1270 y Fe(Option)d(ii:)19 b(Closed)14 b(Con)o(text)g(F)l(ramew)o(ork)41 b Fh(The)13 b(framew)o(ork)e (of)i(the)g(closed)h(con-)262 1320 y(text)g(form)e(is)i(adopted)g(and)g (extended.)324 1370 y(In)o(tro)q(duce)21 b(the)g Fd(nul)r(l)j Fh(descriptor,)e(the)f(v)n(alue)f(of)f(whic)o(h)h(is)g(de\014ned)h(b)o(y)f(a) g(named)262 1420 y(constan)o(t,)e(e.g.)28 b Fc(MPI)p 605 1420 V 15 w(NULL)p Fh(.)16 b(See)i(Note)g(17.)27 b(The)18 b(n)o(ull)e(con)o(text)i (form)e(is)h(expressed)j(as)262 1469 y Fc(\(MPI)p 353 1469 V 14 w(NULL,)h(pd,)g(tag\))o Fh(,)d(whic)o(h)g(is)f(marginally)e(incon)o(v)o (enien)o(t.)29 b(The)18 b(closed)g(con)o(text)262 1519 y(form)12 b(is)j(expressed)i(as)e Fc(\(cd,)21 b(rank,)g(tag\))o Fh(,)14 b(whic)o(h)h(is)f(of)h(course)h(natural.)k(Expression)262 1569 y(of)13 b(the)h(op)q(en)h(con)o(text)f(form)e(requires)j(a)f(little)f(more)g (w)o(ork.)324 1619 y(W)m(e)j(can)i(use)g(the)f Fc(cd)g Fh(\014eld)g(as)g (\\shorthand)h(notation")e(for)g(the)i Fc(\(lcd,)j(rcd\))16 b Fh(pair)262 1669 y(at)22 b(the)g(exp)q(ense)i(of)e(in)o(tro)q(ducing)g (some)f(tric)o(k)o(ery)m(.)43 b(W)m(e)22 b(de\014ne)h(a)f(\\con)o(text)g (duplet)262 1718 y(descriptor")10 b(whic)o(h)f(is)g(formally)d(comp)q(osed)j (of)g(t)o(w)o(o)g(references)j(to)d(con)o(texts,)i(and)e(pro)o(vide)262 1768 y(a)15 b(pro)q(cedure)j(whic)o(h)e(constructs)i(suc)o(h)f(a)f (descriptor)h(giv)o(en)f(t)o(w)o(o)g(con)o(text)g(descriptors.)262 1818 y(Both)g Fc(Send)e Fh(and)i Fc(Receive)e Fh(will)g(accept)j(a)f(duplet)f (descriptor)i(in)f Fc(cd)p Fh(,)f(are)h(able)f(to)h(dis-)262 1868 y(tinguish)g(the)i(duplet)g(descriptor)g(from)e(a)h(singlet)g (descriptor,)i(and)e(treat)h(the)g(duplet)262 1918 y(as)13 b(shorthand)i(notation.)i(W)m(e)c(should)h(also)f(de\014ne)i(a)e(mec)o (hanism)e(b)o(y)j(whic)o(h)g(a)f(receiv)o(er)262 1968 y(whic)o(h)h(has)h (completed)f(a)g Fc(Receive)f Fh(with)h(wildcard)g(on)h Fc(rcd)e Fh(is)i(able)f(to)h(determine)f(the)262 2017 y(v)n(alid)f(singlet)h (descriptor)i(of)e(the)h(sender,)h(whic)o(h)e(just)h(adds)g(one)g(further)g (enquiry)g(pro-)262 2067 y(cedure)f(to)f(the)h(p)q(oin)o(t-to-p)q(oin)o(t)e (c)o(hapter\(?\).)19 b(This)13 b(option)f(is)h(a)g(little)f(incon)o(v)o (enien)o(t)h(but)262 2117 y(do)q(es)d(ha)o(v)o(e)g(some)f(useful)h(prop)q (erties)i(for)d(collectiv)o(e)h(comm)o(unications.)k(It)c(is)g(conjectured) 262 2167 y(that)j(this)h(option)g(is)f(the)i(b)q(est)g(c)o(hoice)f(for)g Fg(mpi)p Fh(.)262 2275 y Fe(Option)g(iii:)21 b(Null)16 b(Con)o(text)f(F)l (ramew)o(ork)41 b Fh(The)15 
Option iii: Null Context Framework

The framework of the null context form is adopted and extended. The null context form is expressed as (pd, tag), which is of course natural. Expression of the open and closed context forms requires a little more work. We can use the pd field as "shorthand notation" for (cd, rank) and (lcd, rcd, rank) by continuation of the trickery used in the previous option. This is rather clumsy.

1.2.5 Collective Communication

Symmetric collective communication operations are compliant with the closed context form described above. This proposal recommends that such operations accept a context descriptor which identifies the context (thus frame) in which they are to operate.

mpi does plan to describe symmetric collective communication operations. It is not possible to determine whether this proposal is sufficient to allow implementation of the collective communication chapter of mpi in terms of the point-to-point chapter of mpi without loss of generality, since the collective operations are not yet defined.

Asymmetric collective communication operations, especially those in which sender(s) and receiver(s) are distinct processes, are compliant with the open context form described above. This proposal recommends that such operations accept a pair of context descriptors (perhaps in a duplet descriptor form) which identify the contexts (thus frames) in which they are to operate.

mpi does not plan to describe asymmetric collective communication operations. Such operations are expressive when writing programs beyond the SPMD model, which are composed of communicative functionally distinct process groupings. This proposal recommends that such operations should be considered in some reincarnation of mpi.

1.3 Discussion & Notes

This section comprises a discussion of certain aspects of this proposal followed by the notes referenced in the detailed proposal.
1.3.1 Discussion

We can dissect the proposal into two parts: an SPMD model core; an MIMD model annex. In this discussion the dissection is exposed and the conceptual foundation of each part is described. The discussion also presents brief arguments for and against the MIMD model annex.

SPMD model core

The SPMD model core provides noncommunicative process groupings and communication contexts for writers of SPMD parallel libraries. It is intended to provide expressive power beyond the "SPIMD" model (in which processes execute in an SIMD fashion).

The material describing processes in Section 1.2.1 is simplified: processes have identical instruction blocks and different data blocks; process descriptor transmission and registry become redundant; dynamic process models are not considered.

The material describing process groupings in Section 1.2.2 is simplified: group descriptor transmission and registry become redundant; the own process grouping explicitly becomes a single group containing all processes.

The material describing communication contexts in Section 1.2.3 is simplified: context descriptor transmission and registry become unnecessary.

The material describing point-to-point communication in Section 1.2.4 is simplified: the open context form becomes redundant; uniform integration "Option i" is deleted, and "Option ii" loses duplet descriptors, becoming simple enough that "Option iii" need not be further considered.

The material describing collective communication in Section 1.2.5 is simplified: there is no possibility of collective communication operations spanning more than one context.
MIMD model annex

The MIMD model annex extends and modifies the SPMD model core to provide expressive power for MIMD programs which combine (coarse grain) function and data driven parallelism. The MIMD model annex is not intended to provide expressive power to fine grained function driven parallel programs; it is conjectured that message passing approaches such as mpi are not suited to fine grained parallel programming. The annex is intended to provide expressive power for the "MSPMD" model, which is now described.

One of the simplest MIMD models is the "host-node" model, familiar in express and parmacs, containing two functional groups: one node group (SPMD like); one host group (a singleton).

The "parallel client-server" model, in which each of the n clients is composed of parallel processes, and in which the server may also be composed of parallel processes, contains 1 + n functional groups: n client groups (SPMD like); one server group (singleton, SPMD like). The "host-node" model is a case of this model in which the host can be viewed as a singleton client and the nodes can be viewed as an SPMD like server (or the host as a singleton server and the nodes as an SPMD like client).

The "parallel module graph" model, in which each module within the graph may be composed of parallel processes (singleton, SPMD like), contains any number of functional groups with arbitrarily complex relations. The "parallel client-server" model is a case of this model in which the module graph only contains arcs joining the server to each client.

The MIMD model annex is intended to provide expressive power for the "parallel module graph" model, which I refer to as the MPSMD model. This model requires support at some level as commercial and modular applications are increasingly moving into parallel computing. The debate is whether or not message passing approaches such as mpi (which I simply refer to as MPI) should provide for this model.

The negative argument is that such SPMD like modules should be controlled and communicate with one another as "parallel processes" at the distributed operating system level. The argument has some appeal as the world of distributed operating systems must deal with difficult issues such as process control and coherency. Avoidance of duplication in MPI allows MPI to focus on provision of a smaller set of facilities with greater emphasis on maximum performance for data driven SPMD like parallel programs.
The positive argument is that communications between such SPMD like modules require high performance and MPI can provide such performance with tuned semantics which expect the user to deal with coherency issues. There is also the argument that MPI is able to deal with this in a shorter time than development (and standardisation) procedures for distributed operating systems. The latter argument is somewhat comparable with the argument for message passing versus parallel compilation.

1.3.2 Notes

1. Descriptors: Descriptors are assumed to be a plentiful but bounded resource. They are opaque references to objects of undefined size and structure. They are not global unique identifiers, however they must reference such identifiers, and they protect the user from the form of such identifiers, allowing them to be implementation dependent. The proposal expresses descriptors as integer types in the host language (in practice we might expect descriptors to be indices into tables of structures, or tables of pointers to structures, or indeed pointers to structures themselves). This expression facilitates uniform binding to ANSI C and Fortran 77.

The context descriptor must at least reference: the global unique context identifier; the group descriptor of the frame. The group descriptor must at least reference: the global unique group identifier; the own context descriptor; the rank space to process descriptor map of the group (including the size of the group); the process rank. The process descriptor must at least reference: the global unique process identifier; the group descriptor of the own group.

The proposal text distinguishes process, group and context identifiers more strongly than is strictly necessary. There is potential advantage in the concept of a unified descriptor which can be used to reference either kind of actual descriptor.
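As an illustration of the references listed above (not proposal text; every structure and field name here is invented), one plausible layout for the objects behind integer descriptors is:

    /* Illustrative sketch only of what integer descriptors might reference. */
    typedef int mpi_desc_t;                /* descriptors bound as host-language integers */

    struct mpi_process {                   /* referenced by a process descriptor          */
        long       global_pid;             /* global unique process identifier            */
        mpi_desc_t own_group;              /* group descriptor of the own group           */
    };

    struct mpi_group {                     /* referenced by a group descriptor            */
        long        global_gid;            /* global unique group identifier              */
        mpi_desc_t  own_context;           /* own context descriptor                      */
        int         size;                  /* size of the group                           */
        mpi_desc_t *rank_to_process;       /* rank space -> process descriptor map        */
        int         my_rank;               /* process rank of the caller in this group    */
    };

    struct mpi_context {                   /* referenced by a context descriptor          */
        long       global_cid;             /* global unique context identifier            */
        mpi_desc_t frame_group;            /* group descriptor of the frame               */
    };

    /* In practice a descriptor would index a per-process table of such structures. */
    static struct mpi_context *context_table[1024];
    static struct mpi_group   *group_table[1024];
    static struct mpi_process *process_table[1024];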
For example, descriptor unification goes some way toward cleaning up the duplet descriptors described in "Option ii" of Section 1.2.4. Definition in this fashion requires the referenced object to contain a class identifier (which in this proposal could be as little as 3 bits wide). This suggestion is explored in some of the notes below.

Rik Littlefield has suggested descriptors could be used to "cache" per object user information, as appears in zipcode. This descriptor capability could be seriously useful, for example in context or group specific implementations of collective communications. The suggestion both requires and deserves more work.

There were additional motivations for expressing process descriptors as integers in the proposal. Firstly, the author anticipated "Option ii" of Section 1.2.4. Secondly, it is observed that the definition of process descriptors can be weakened such that they may be implemented as global unique process identifiers, expressed as integers, should this infer advantage. [I wish I had realised the second point before the first!]

The consequence of allowing process descriptors to be implemented as process identifiers is explored in some of the notes below. Please also note that this would exclude process descriptors from the "caching" capability mentioned above. These ideas suggest an alternate proposal, containing global unique process identifiers expressed as integers, and context and group descriptors as opaque references of yet unspecified language binding (i.e., perhaps not an integer). This suggestion is also explored in some of the notes below.
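To make the unified descriptor idea concrete, a small sketch follows (the names and the 3-bit encoding are assumptions, not proposed bindings) of a class identifier carried in the low bits of a descriptor:

    /* Sketch only: a "unified descriptor" with a class identifier in its low bits. */
    #include <stdio.h>

    enum mpi_desc_class { MPI_PROCESS_DESC = 1, MPI_GROUP_DESC = 2, MPI_CONTEXT_DESC = 3 };

    typedef int mpi_desc_t;

    static mpi_desc_t make_desc(enum mpi_desc_class cls, int table_index)
    {
        return (table_index << 3) | (int)cls;       /* class identifier in 3 low bits */
    }

    static int desc_class(mpi_desc_t d) { return d & 0x7; }
    static int desc_index(mpi_desc_t d) { return d >> 3; }

    int main(void)
    {
        mpi_desc_t gd = make_desc(MPI_GROUP_DESC, 5);
        printf("class=%d index=%d\n", desc_class(gd), desc_index(gd));
        return 0;
    }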
2. Dynamic Processes: The proposal does not prevent a process model which allows dynamic creation and deletion of processes, however it does not favour an asynchronous process model in which singleton processes are created and deleted in an arbitrary fashion. The proposal does favour a model in which blobs of processes are created (and deleted) in a concerted fashion, and in which each blob so created is assigned a different own process grouping. This model does not take into account the potential desire to expand or contract an existing blob of processes in order to take into account (presumably slowly) time varying workloads. It is conjectured that concerted blob expand and contract operations are more suitable for this purpose than asynchronous singleton spawn and kill operations.

3. Descriptor transmission: In the spirit of descriptor unification (See Note 1) the three MPI_?D_TYPE names can be collapsed into something like MPI_DESC_TYPE.

If process descriptors are replaced by global unique process identifiers (See Note 1) then no special measures are required for transmission thereof.

If group and context descriptors are expressed as opaque objects of yet unspecified type then, in ANSI C at least, it will be possible to prevent nonsemantic transmission thereof.

4. Descriptor registry: The registry service is just a simpler way for concurrent objects to identify and establish communications with one another. The operations of interest, expressed in the spirit of descriptor unification (See Note 1), are: registration of descriptor by name, e.g. mpi_register(name,desc); deregistration, e.g. mpi_deregister(desc); and lookup by name, e.g. mpi_lookup(name, &desc, wait), where wait controls whether the lookup waits for the name to be registered.

If process descriptors are replaced by global unique process identifiers (See Note 1) then perhaps process identifier registry is not so important.

The MIMD model does not actually require process descriptor or group descriptor registry to be visible to the user, since context descriptor registry and context descriptor attribute determination gives access to all groups, and thus group descriptor attribute determination gives access to all processes. The proposal was written to handle descriptors consistently.
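A usage sketch for the registry operations named in Note 4 follows; the toy in-file registry is an illustration only, since a real implementation would need a proper (possibly asynchronous) registry service:

    /* Usage sketch for mpi_register / mpi_lookup, with a toy local registry. */
    #include <stdio.h>
    #include <string.h>

    typedef int mpi_desc_t;

    static const char *reg_names[16];
    static mpi_desc_t  reg_descs[16];
    static int         reg_count = 0;

    int mpi_register(const char *name, mpi_desc_t desc)   /* register descriptor by name */
    {
        reg_names[reg_count] = name;
        reg_descs[reg_count] = desc;
        return reg_count++;
    }

    int mpi_lookup(const char *name, mpi_desc_t *desc, int wait)   /* lookup by name */
    {
        (void)wait;                      /* the toy registry never needs to block     */
        for (int i = 0; i < reg_count; i++)
            if (strcmp(reg_names[i], name) == 0) { *desc = reg_descs[i]; return 0; }
        return -1;
    }

    int main(void)
    {
        mpi_desc_t server_cd = 42, found;            /* stand-in context descriptor   */
        mpi_register("ocean-model", server_cd);      /* server side                   */
        if (mpi_lookup("ocean-model", &found, 1) == 0)   /* client side               */
            printf("found descriptor %d\n", found);
        return 0;
    }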
5. Descriptor deallocation: In the spirit of descriptor unification (See Note 1) the three mpi_?d_free(?d) calls can be collapsed into something like mpi_desc_free(desc).

The receipt of a descriptor in descriptor transmission and registry is an allocator, hence the provision of the deallocator. Perhaps there should be an explicit allocator which the user must call in order to receive a descriptor, and can deallocate when no longer required.

If process descriptors are replaced by global unique process identifiers (See Note 1) then process identifier deallocation is moot.

6. Coherency: The proposal admits incoherency as descriptors may be received in transmission or registry. The SPMD core contains no incoherency. The inclusion of dynamic process creation and deletion admits incoherency since processes can retain descriptors of processes which have been deleted. The inclusion of grouping descriptor transmission and registry admits incoherency since processes can retain descriptors of groupings which have been deleted. The inclusion of dynamic groupings admits incoherency since processes can retain descriptors of groupings of which the rank to process map has changed. The inclusion of context descriptor transmission and registry admits incoherency since processes can retain descriptors of contexts which have been deleted. The proposal expects the user to ensure coherent usage. It is conjectured that this is acceptable provided that the user is not also expected to implement process fault tolerance.

7. Dynamic groupings: Process groupings are dynamic in the sense that they can be created at any time, and static in the sense that the membership is constant over the lifetime of the process grouping. The proposal specifies static groupings, however the loose separation of communication contexts from process groupings simplifies extension to dynamic groupings, as contexts stretch or shrink according to the changes in their frames. It is conjectured that concerted grouping expand and contract operations are more suitable than asynchronous singleton join and leave operations.
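The descriptor lifecycle of Note 5, in which receipt through transmission or registry acts as the allocation and mpi_desc_free as the balancing deallocation, might look like this in outline (the reference counting shown is an assumption, not part of the proposal):

    /* Sketch of the receive-as-allocate, explicit-free lifecycle from Note 5. */
    #include <assert.h>

    typedef int mpi_desc_t;

    static int refcount[1024];

    static mpi_desc_t desc_received(int index)   /* e.g. via transmission or registry */
    {
        refcount[index]++;                       /* receipt allocates                 */
        return (mpi_desc_t)index;
    }

    void mpi_desc_free(mpi_desc_t desc)          /* user releases when no longer needed */
    {
        assert(refcount[desc] > 0);
        refcount[desc]--;
    }

    int main(void)
    {
        mpi_desc_t cd = desc_received(7);
        /* ... use cd for open context communication ... */
        mpi_desc_free(cd);
        return 0;
    }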
(programs)g(including)g(and)365 689 y(b)q(ey)o(ond)g(the)g(SPMD)f(mo)q(del.) 24 b(Pro)q(cesses)19 b(are)d(created)i(in)e(\\blobs",)g(where)h(eac)o(h)365 739 y(mem)o(b)q(er)g(of)g(a)h(blob)f(is)h(a)f(mem)o(b)q(er)g(of)g(the)i(same) e(o)o(wn)g(pro)q(cess)j(grouping,)e(and)365 789 y(di\013eren)o(t)e(blobs)e (ha)o(v)o(e)g(di\013eren)o(t)h(o)o(wn)f(pro)q(cess)i(groupings.)j(An)14 b(SPMD)g(program)365 839 y(is)f(a)h(single)e(blob.)18 b(A)13 b(host-no)q(de)h(program)e(comp)q(oses)h(t)o(w)o(o)g(blobs,)g(the)g(no)q(de)h (blob)365 888 y(and)e(the)h(host)g(blob)e(\(a)h(singleton\).)17 b(There)d(is)e(a)g(sense)h(in)f(whic)o(h)g(a)g(blob)g(is)g(SPMD)365 938 y(lik)o(e.)312 1021 y(9.)20 b Fc(mpi)p 434 1021 14 2 v 15 w(own)p 515 1021 V 15 w(group\(pd\))o Fe(:)d Fh(This)10 b(pro)q(cedure)j(lo)q(oks)d(lik)o(e)g(a)h(case)h(of)e(pro)q(cess)i (descriptor)365 1071 y(attribute)18 b(determination.)26 b(If)17 b(pro)q(cess)i(descriptors)g(are)e(allo)o(w)o(ed)f(to)h(b)q(e)h(imple-)365 1121 y(men)o(ted)11 b(as)g(global)e(unique)i(pro)q(cess)i(iden)o(ti\014ers,)e (or)g(are)h(replaced,)g(this)f(pro)q(cedure)365 1171 y(should)16 b(accept)i(no)e(argumen)o(ts)f(and)h(return)h(the)g(o)o(wn)f(group)g (descriptor)h(of)f(the)365 1220 y(calling)d(pro)q(cess.)291 1303 y(10.)20 b Fe(In)o(v)o(erse)9 b(map:)16 b Fh(The)10 b(prop)q(osal)f(did) g(not)g(include)g(a)g(function)g(to)g(con)o(v)o(ert)h Fc(\(gd,)21 b(pd\))365 1353 y Fh(pair)c(in)o(to)g(a)g(rank.)28 b(It)17 b(is)h(suggested)g(that)g(this)f(in)o(v)o(erse)h(map)e(is)h(allo)o(w)o(ed)f (to)h(b)q(e)365 1403 y(\\slo)o(w",)12 b(i.e.)17 b(could)12 b(b)q(e)h(a)f(linear)g(searc)o(h)i(o)o(v)o(er)f(mem)o(b)q(ers)e(of)h(the)h (group,)f(but)h(prob-)365 1453 y(ably)e(should)h(b)q(e)g(included)g(for)g (completeness.)18 b(It)12 b(can)g(b)q(e)g(used)h(as)f(a)f(mem)o(b)q(ership) 365 1503 y(predicate.)291 1586 y(11.)20 b Fc(mpi)p 434 1586 V 15 w(own)p 515 1586 V 15 w(context\(gd\))n Fe(:)d Fh(This)9 b(pro)q(cedure)i(lo)q(oks)e(lik)o(e)f(a)h(case)h(of)f(group)g(descriptor)365 1636 y(attribute)15 b(determination.)291 1719 y(12.)20 b Fe(Grouping)f (Deletion:)26 b Fh(The)20 b(pro)q(cess)h(grouping)e(deletion)g(op)q(eration)g (should)365 1768 y(probably)h(b)q(e)h(de\014ned)h(to)e(fail)f(when)i(there)g (are)g(user)h(created)f(con)o(texts)h(with)365 1818 y(that)15 b(frame)f(whic)o(h)h(ha)o(v)o(e)g(not)g(themselv)o(es)g(b)q(een)h(deleted.)23 b(This)14 b(just)i(requires)g(a)365 1868 y(reference)h(coun)o(t)d(in)f(the)i (group)e(descriptor)j(static)e(attribute)g(store.)291 1951 y(13.)20 b Fe(Con)o(text)i(Creation)f(&)i(Deletion:)k Fh(Marc)21 b(Snir)e(has)h(describ)q(ed)i(a)d(metho)q(d)365 2001 y(b)o(y)g(whic)o(h)g (global)f(unique)h(group)g(iden)o(ti\014ers)h(can)f(b)q(e)h(generated)g (without)f(use)365 2051 y(of)h(shared)i(global)d(data.)38 b(The)21 b(prop)q(osal)g(states)h(that)e(con)o(text)i(creation)f(and)365 2100 y(deletion)13 b(op)q(erations)f(sync)o(hronise)i(the)e(pro)q(cesses)j (within)d(the)h(frame)e(of)h(the)h(con-)365 2150 y(text,)h(an)o(ticipating)f (use)i(of)e(this)h(metho)q(d)f(for)g(generation)h(of)g(con)o(text)g(iden)o (ti\014ers.)365 2200 y(Ho)o(w)o(ev)o(er,)h(the)g(sync)o(hronisation)g (requires)g(that)g(con)o(text)g(creation)g(and)g(deletion)365 2250 y(calls)10 b(within)f(a)g(frame)g(are)h(p)q(erformed)g(in)f(the)i(iden)o (tical)e(sequence)j(b)o(y)d(all)g(mem)o(b)q(ers)365 2300 y(of)i(the)i(frame.) 
13. Context Creation & Deletion: Marc Snir has described a method by which global unique group identifiers can be generated without use of shared global data. The proposal states that context creation and deletion operations synchronise the processes within the frame of the context, anticipating use of this method for generation of context identifiers. However, the synchronisation requires that context creation and deletion calls within a frame are performed in the identical sequence by all members of the frame. The global unique group identifier and context creator reference count are then sufficient to generate a global unique context identifier without communication or synchronisation. Should context creation and deletion therefore not synchronise the frame?

There may be advantage in defining context creation and deletion such that a number of contexts are created or deleted simultaneously, depending on how heavy we expect context management to be in implementations of mpi.

14. Descriptor wildcard: In the spirit of descriptor unification (See Note 1) the three named constants MPI_?D_WILD can be collapsed into something like MPI_DESC_WILD.

If process descriptors are replaced with global unique process identifiers then perhaps the wildcard process identifier value can be the same as the wildcard tag value, and the same named constant.

15. Context forms: The null form is like PVM 3. It is general purpose, but not particularly expressive. It does not provide facilities for writers of parallel libraries. It has the potential to provide maximum performance.

The closed form is like zipcode. It is expressive in SPMD programs where noncommunicative distinct data driven parallel computations can be performed concurrently. It provides facilities for writers of SPMD like parallel libraries.

The open form is like chimp. It is expressive in MIMD programs where communicative data driven parallel computations can be performed concurrently. It provides facilities for MIMD like parallel libraries.

16. Rank wildcard: Since rank is an integer like message tag, perhaps they should have the same wildcard value, and the same named constant.

17. MPI_NULL: I am following the spirit of context unification (See Note 1) in the proposal text here. There may be advantage in defining the value of the null descriptor to be the ANSI C constant NULL, or even defining the value to be exactly zero (every rule having a useful exception).
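As an illustration of the identifier scheme discussed in Note 13 (names and field widths invented): provided every member of a frame performs creation calls in the same sequence, each member can derive the same new context identifier locally, with no communication:

    /* Sketch of (global group id, creation count) -> global context id. */
    #include <stdio.h>

    struct frame {
        long global_gid;        /* globally unique group identifier */
        int  contexts_created;  /* context creator reference count  */
    };

    long new_context_id(struct frame *f)
    {
        return (f->global_gid << 16) | (long)(f->contexts_created++);
    }

    int main(void)
    {
        struct frame f = { 12, 0 };
        printf("%ld %ld\n", new_context_id(&f), new_context_id(&f));
        return 0;
    }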
1.4 Conclusion

This chapter has presented and discussed a proposal for communication contexts within mpi. In the proposal, process groupings appeared as frames (or templates) for the construction of communication contexts, and communication contexts retained certain properties of the frames used in their construction.

----------------------------------------------------------------------
/--------------------------------------------------------\        e||)
| Lyndon J Clarke   Edinburgh Parallel Computing Centre   |  e||) c||c
| Tel: 031 650 5021   Email: lyndon@epcc.edinburgh.ac.uk  |  c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Tue Mar 23 14:34:44 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA25818; Tue, 23 Mar 93 14:34:44 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25950; Tue, 23 Mar 93 14:33:51 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 23 Mar 1993 14:33:50 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25936; Tue, 23 Mar 93 14:33:47 -0500
Date: Tue, 23 Mar 93 19:33:41 GMT
Message-Id: <16020.9303231933@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Re: Context proposals overview
To: SNIR@watson.ibm.com, mpi-context@cs.utk.edu
In-Reply-To: Marc Snir's message of Tue, 23 Mar 93 13:10:01 EST
Reply-To: lyndon@epcc.ed.ac.uk

Marc writes (in reply to Jim):

> Proposal I (Marc Snir)
> ----------------------
> Against:
> The receiver of a message has to both know the group of the sender,
> and be a part of it. This feels like it makes servers hard, but may be
> ok. If you keep having to go into the ALL group, then the security
> which was the whole point of a context is lost.
>
> >>> One can, of course, have several distinct contexts that include
> >>> all processes.

Sure, I see: if there are M distinct services then you create M contexts which are replicates of the ALL context, and messages for each service are protected from one another. Of course this is ignoring the nagging realisation that the client has to remember which rank within the all group the server has, when the expressive way of doing this is actually to describe a SERVER context and a CLIENT context, forget about the ALL context, and let the clients send messages to the server context and the server send messages to the client context.

Well, actually this is only part of the story, because we should think about the CLIENT as being a concerted entity (a parallel client process) and allow there to be multiple distinct clients. Don't things look rather bad for this proposal now? Well, again we left something out of the story, because the SERVER itself could compose multiple processes acting as a concerted entity (a parallel server process). Now it's just getting a bit messy for the client to remember the set of server indices? It's worse for the server remembering the set of client indices. You bet the programmer ends up implementing inter-context communication by hand. Finally, are we in good shape when the parallel server for service X is the parallel client for service Z, which has a parallel server? I suggest not.
Okay, well this is beginning to get a wee bit hypothetical; I don't know of anyone programming at this final level of complexity quite yet. But I do know of people programming at the parallel servers - parallel clients level.

> I don't understand how to build a group/context using only point to
> point messages. I still seem to have the bootstrap problem that I need
> the new context to safely receive the message which will tell me what
> the new context is.
>
> >>> Well, we have an existential proof, since we support dynamic group
> >>> creation in our system. A new context is created within an old,
> >>> preexisting context; so ALL need to be there from start.

There is an implementation of this kind of group creation in your system - existence proof that this is implementable - but if your system does not use the point-to-point facilities of MPI then this may not be an existence proof that this is implementable with the point-to-point facilities of MPI.

This doesn't bother me, personally. I cannot bring myself to be an advocate that the group and context constructors and destructors have to be implementable using the point-to-point section of MPI. This just seems to have the effect of restricting the constructors and destructors. The argument is not the same as the similar argument for collective communications -- there should be few constructors and destructors whereas there may be many collective communications.

Best Wishes
Lyndon

/--------------------------------------------------------\        e||)
| Lyndon J Clarke   Edinburgh Parallel Computing Centre   |  e||) c||c
| Tel: 031 650 5021   Email: lyndon@epcc.edinburgh.ac.uk  |  c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Tue Mar 23 18:24:24 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA01132; Tue, 23 Mar 93 18:24:24 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06265; Tue, 23 Mar 93 18:23:42 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 23 Mar 1993 18:23:40 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06257; Tue, 23 Mar 93 18:23:37 -0500
Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Tue, 23 Mar 93 15:19 PST
Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA08184; Tue, 23 Mar 93 15:17:47 PST
Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA18045; Tue, 23 Mar 93 15:17:43 PST
Date: Tue, 23 Mar 93 15:17:43 PST
From: rj_littlefield@pnlg.pnl.gov
Subject: overview and "sketch VII"
To: jim@meiko.co.uk, mpi-context@cs.utk.edu
Cc: d39135@sodium.pnl.gov, lyndon@epcc.ed.ac.uk, snir@watson.ibm.com
Message-Id: <9303232317.AA18045@sodium.pnl.gov>
X-Envelope-To: mpi-context@cs.utk.edu

Many thanks to Jim for his summary of various proposals. The earlier proposals have become rather Talmudic, with layer upon layer of comments so detailed that it is difficult to tell what are the fundamental differences. Even after integration of the comments, proposals are now so long that they require much study. (Proposal VI is gonna take a while, Lyndon.)

Rather than comment further on Jim's summary or any other specific proposal, I would like to sketch anew a possible synthesis of key ideas selected from all the proposals. This is a somewhat expanded version of the sketch that I distributed a couple of days ago with a confusing choice of title. Following both of Lyndon's suggestions, I will now call it "Sketch VII".
My hope here is to provide a BRIEF description that may help clarify our thinking and contribute to a unified proposal. BASE FEATURES Functionality . All descriptors/identifiers are local and opaque. . A "context" consists of a collection of processes that have agreed amongst themselves regarding context-relative process names ("ranks"), and have also negotiated a protection mechanism to keep contexts separate. . There is a predefined context holding all the initial processes. . Contexts are formed and destroyed by loosely synchronous calls in only those processes belonging to the context. . Point-to-point messages can be sent and received within a context, in which case both sender and receiver are identified by rank. . Groups are not equal to contexts. From the user's standpoint, a group consists of a base context (from which other contexts can be created cheaply by copying), plus topology information, plus a facility for attaching arbitrary information to the group descriptor ("cacheing"). Performance/Implementation Expectations . Assembling a new collection of processes requires communication and imposes actual synchronization. The implementation is expected to be scalable and to cost no more than a point-to-point fanin/fanout on the participating processes. . "Copying" a context does not require communication and does not impose actual synchronization. (Use a counting strategy.) . There is no concept of a globally unique context value. If desired by the MPI implementor, each process can choose its own message selector value and broadcast it to the other processes upon context formation. EXTENDED FEATURES Functionality . Process, group, and context descriptors can be passed around by special mechanisms. This allows a process to obtain the descriptor for a group or context of which it is not a member. If a process receives a descriptor by one of these mechanisms, then that process assumes responsibility for explicitly releasing it when the descriptor is no longer needed or becomes stale. . Point-to-point communications can be extended to allow a process to send a message to any context for which it holds a descriptor. The receiver needs an easy way to identify and/or select on sender. (Lyndon's proposal VI introduces an "open context" form that works well if sender and receiver hold descriptors for each other's contexts, but is not clear on how the sender is identified if the receiver does not know the sender's context.) . Registration facilities can be added to allow global naming of processes, contexts, and groups. The registration facility translates between globally unique names and process-local descriptors. (I.e., to register a name, one specifies the name and matching descriptor; to lookup, one specifies the name and receives a descriptor in reply.) This capability, and none of the capabilities before it, requires an asynchronous server. Happy thinking... 
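To make the counting strategy mentioned under Performance/Implementation Expectations concrete, a small C sketch follows; it is an illustration only, not part of Sketch VII, and the representation and selector arithmetic are invented:

    /* Sketch: "copying" a context without communication, via a counting strategy. */
    #include <stdio.h>

    struct context {
        long selector;       /* message selector value agreed at context formation */
        int  copies_made;    /* local count of copies taken from this context      */
    };

    /* Every member makes the same call in the same order, so all members derive
     * the same child selector independently, with no messages exchanged.         */
    struct context copy_context(struct context *parent)
    {
        struct context child;
        child.selector    = parent->selector * 1000 + (++parent->copies_made);
        child.copies_made = 0;
        return child;
    }

    int main(void)
    {
        struct context all = { 1, 0 };
        struct context lib = copy_context(&all);
        printf("parent=%ld child=%ld\n", all.selector, lib.selector);
        return 0;
    }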
--Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Wed Mar 24 09:39:35 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA14644; Wed, 24 Mar 93 09:39:35 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA16981; Wed, 24 Mar 93 09:38:32 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 24 Mar 1993 09:38:31 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA16973; Wed, 24 Mar 93 09:38:30 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA06527; Wed, 24 Mar 93 08:32:44 CST Date: Wed, 24 Mar 93 08:32:44 CST From: Tony Skjellum Message-Id: <9303241432.AA06527@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: status of this subcommittee Hi, with the strong efforts of Lyndon, Rik, and the full-proposal contributed by Marc, we have two proposals, and I am writing the last one now. These will be put to your straw poll by next Monday. Goal of straw poll is to rank proposals for order of presentation and precedence by committee. I will keep you informed. Current proposals: I - by Snir VI/VII - by Littlefield & Clarke - Tony From owner-mpi-context@CS.UTK.EDU Wed Mar 24 09:49:31 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA14858; Wed, 24 Mar 93 09:49:31 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17436; Wed, 24 Mar 93 09:49:15 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 24 Mar 1993 09:49:14 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17425; Wed, 24 Mar 93 09:49:08 -0500 Date: Wed, 24 Mar 93 14:48:58 GMT Message-Id: <16984.9303241448@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: status of this subcommittee To: Tony Skjellum , mpi-context@cs.utk.edu In-Reply-To: Tony Skjellum's message of Wed, 24 Mar 93 08:32:44 CST Reply-To: lyndon@epcc.ed.ac.uk Hi Tony > Goal of straw poll is to rank proposals for order of presentation and > precedence by committee. I will keep you informed. > > Current proposals: > I - by Snir > VI/VII - by Littlefield & Clarke Correction: VI/VII should be replaced by VII. 
Cheers Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Mar 24 09:52:12 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA14948; Wed, 24 Mar 93 09:52:12 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17626; Wed, 24 Mar 93 09:51:53 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 24 Mar 1993 09:51:52 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17618; Wed, 24 Mar 93 09:51:49 -0500 Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA07283 (5.65c/IDA-1.4.4 for ); Wed, 24 Mar 1993 09:51:44 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA15569; Wed, 24 Mar 93 14:51:40 GMT Date: Wed, 24 Mar 93 14:51:40 GMT From: jim@meiko.co.uk (James Cownie) Message-Id: <9303241451.AA15569@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA06620; Wed, 24 Mar 93 14:48:11 GMT To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Subject: Proposal VI' Content-Length: 1687 Lyndon, Please help me, I am having trouble understanding proposal VI'. It seems to me that there is a bootstrap problem with (at least) the NULL Context form of point to point. (I'd hoped that this would be the simplest and easiest to understand and use, so I started with it). First problem ------------- ALL descriptors are local (including process descriptors). Therefore before I can send a message to someone I have to find out (or construct) their (local to me) process descriptor. How do I do this ? They can send it to me, except that the problem just moves to them, as they don't know my address either ! (We had this exact problem in CSN, which is why we introduced the CSN nameserver at a known, fixed, address. Having used it I don't actually much like that solution !) Second Problem (more of a feature). ----------------------------------- Since the descriptors are ALL local, it will be extremely hard to make sense of trace/debug information. Consider code like this MPI_PD from; MPI_CRECV(... ,&from, ...); /* Wildcard receive but tell me where it came from */ printf("[%08X] Received from %8X\n",MPI_MYPD(), from); Coupled with code like this printf("[%08X]Sending to %08X\n", MPI_MYPD(), to); The output will be TOTALLY meaningless and confusing ! e.g. It could very easily look like this [00000001] Sending to 00000002 [00000001] Received from 00000002 etc. -- Jim James Cownie Meiko Limited Meiko Inc. 
650 Aztec West                  Reservoir Place
Bristol BS12 4SD                1601 Trapelo Road
England                         Waltham MA 02154

Phone : +44 454 616171          +1 617 890 7676
FAX   : +44 454 618188          +1 617 890 5042

E-Mail: jim@meiko.co.uk or jim@meiko.com

From owner-mpi-context@CS.UTK.EDU Wed Mar 24 10:43:23 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17289; Wed, 24 Mar 93 10:43:23 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19426; Wed, 24 Mar 93 10:18:30 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 24 Mar 1993 10:18:29 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19407; Wed, 24 Mar 93 10:18:23 -0500
Date: Wed, 24 Mar 93 15:18:19 GMT
Message-Id: <17017.9303241518@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Re: Proposal VI'
To: jim@meiko.co.uk (James Cownie)
In-Reply-To: James Cownie's message of Wed, 24 Mar 93 14:51:40 GMT
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-context@cs.utk.edu

Hi Jim

> Please help me, I am having trouble understanding proposal VI'.

Sure. First thing, Rik and I have done a "merger" of Proposals V and VI, to produce a Proposal VII. This is a cut down of VI with inclusion of group/context user cache capability. I think your queries propagate into Proposal VII, and I shall reply.

> It seems to me that there is a bootstrap problem with (at least) the
> NULL Context form of point to point. (I'd hoped that this would be the
> simplest and easiest to understand and use, so I started with it).

There kind of is, indeed, and I know about that. Hence I explore in the discussion having global process identifiers instead of local process descriptor handles. Proposal VII went to identifiers and then back to descriptors as I and Rik played tug-of-war with the thing.

> First problem
> -------------
> ALL descriptors are local (including process descriptors).
>
> Therefore before I can send a message to someone I have to find out
> (or construct) their (local to me) process descriptor.

Yes.

> How do I do this ?
> They can send it to me, except that the problem just moves to them, as
> they don't know my address either !

Indeed. For processes in which you share the initial group, they can be sent in a message using closed context communication, or more easily you can use the (group, rank) -> process mapping routine. Of course this doesn't let you find out about any processes in a different initial group (either because the program was kicked off that way or because the other processes were created later, although that kind of process creation is not *yet* in MPI).

> (We had this exact problem in CSN, which is why we introduced the
> CSN nameserver at a known, fixed, address. Having used it I don't
> actually much like that solution !)

Well as you know we used that CS~Tools software a lot for a lot of different things. For data driven parallel programming we never liked it either. However for modular applications --- distinct data driven (SPMD like) modules composed as module graphs --- we find the name server approach for register and lookup of group very convenient and expressive indeed!

> Second Problem (more of a feature).
> -----------------------------------
> Since the descriptors are ALL local, it will be extremely hard to make
> sense of trace/debug information.
>
> Consider code like this
>
>    MPI_PD from;
>
>    MPI_CRECV(... ,&from, ...);  /* Wildcard receive but tell me
>                                    where it came from */
>    printf("[%08X] Received from %8X\n",MPI_MYPD(), from);
>
> Coupled with code like this
>
>    printf("[%08X]Sending to %08X\n", MPI_MYPD(), to);
>
> The output will be TOTALLY meaningless and confusing !
>
> e.g. It could very easily look like this
> [00000001] Sending to 00000002
> [00000001] Received from 00000002
>

Yes, I should hope that it would! This is a good point. So I suggest that for this kind of thing one needs to either use one of the context oriented forms, or we need just to add a (group, process) -> rank mapping (which might be slow, but it's only debugging after all), or a descriptor attribute query procedure to return a global unique process identifier which will be embedded into the descriptor anyway.

So for me, your helpful questions translate into a requirement to add one or two additional procedures into Proposal VII as outlined in the above paragraph. This will have to happen at or after the meeting, since Proposal I and VII are already en route to Steve Otto our Draft Editor.

By the way, Jim, I'd be real grateful if you would explain to me what the Elan/Elite "contexts" that I have heard mentioned do.

Best Wishes
Lyndon

/--------------------------------------------------------\        e||)
| Lyndon J Clarke   Edinburgh Parallel Computing Centre   |  e||) c||c
| Tel: 031 650 5021   Email: lyndon@epcc.edinburgh.ac.uk  |  c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Wed Mar 24 12:06:42 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA20034; Wed, 24 Mar 93 12:06:42 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24341; Wed, 24 Mar 93 12:05:49 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 24 Mar 1993 12:05:47 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24327; Wed, 24 Mar 93 12:05:42 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA06808; Wed, 24 Mar 93 10:57:27 CST
Date: Wed, 24 Mar 93 10:57:27 CST
From: Tony Skjellum
Message-Id: <9303241657.AA06808@Aurora.CS.MsState.Edu>
To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu
Subject: draft of MPI Context proposals (Latex)

\documentstyle{report}
\begin{document}
\title{Context Subcommittee Proposals}
\author{Lyndon~Clarke \and Anthony Skjellum \and Rik~Littlefield \and Marc~Snir}
\date{March 24, 1993}
\maketitle
\newpage
%======================================================================%
% BEGIN "Proposal I"
%======================================================================%
% BEGIN "Proposal I"
% Written by Marc Snir
% Edited by Lyndon J. Clarke
% March 1993
%
\newcommand{\discuss}[1]{ \ \\ \ \\ {\small {\bf Discussion:} #1} \ \\ \ \\ }
\newcommand{\missing}[1]{ \ \\ \ \\ {\small {\bf Missing:} #1} \\ \ \\ }

\chapter{Proposal I --- Marc~Snir}

\section{Contexts}

A {\bf context} consists of:
\begin{itemize}
\item A set of processes that currently belong to the context (possibly all processes, or a proper subset).
\item A {\bf ranking} of the processes within that context, i.e., a numbering of the processes in that context from 0 to $n-1$, where $n$ is the number of processes in that context.
\end{itemize}

A process may belong to several contexts at the same time.
Any interprocess communication occurs within a context, and messages sent within one context can be received only within the same context. A context is specified using a {\em context handle} (i.e., a handle to an opaque object that identifies a context). Context handles cannot be transferred from one process to another; they can be used only on the process where they were created.

Examples of possible uses of contexts follow.

\subsection{Loosely synchronous library call interface}

Consider the case where a parallel application executes a ``parallel call'' to a library routine, i.e., where all processes transfer control to the library routine. If the library was developed separately, then one should beware of the possibility that the library code may receive by mistake messages sent by the caller code, and vice-versa. To prevent such an occurrence one might use a barrier synchronization before and after the parallel library call. Instead, one can allocate a different context to the library, thus preventing unwanted interference. Now, the transfer of control to the library need not be synchronized.

\subsection{Functional decomposition and modular code development}

Often, a parallel application is developed by integrating several distinct functional modules, each of which is developed separately. Each module is a parallel program that runs on a dedicated set of processes, and the computation consists of phases where modules compute separately, intermixed with global phases where all processes communicate. It is convenient to allow each module to use its own private process numbering scheme for the intramodule computation. This is achieved by using a private module context for intramodule computation, and a global context for intermodule communication.

\subsection{Collective communication}

MPI supports collective communication within dynamically created groups of processes. Each such group can be represented by a distinct context. This provides a simple mechanism to ensure that communication that pertains to collective communication within one group is not confused with collective communication within another group.

\subsection{Lightweight gang scheduling}

Consider an environment where processes are multithreaded. Contexts can be used to provide a mechanism whereby all processes are time-shared between several parallel executions, and can context switch from one parallel execution to another, in a loosely synchronous manner. A thread is allocated on each process to each parallel execution, and a different context is used to identify each parallel execution. Thus, traffic from one execution cannot be confused with traffic from another execution. The blocking and unblocking of threads due to communication events provide a ``lazy'' context switching mechanism. This can be extended to the case where the parallel executions are spanning distinct process subsets. (MPI does not require multithreaded processes.)

\discuss{
A context handle might be implemented as a pointer to a structure that consists of a context label (that is carried by messages sent within this context) and a context member table that translates process ranks within a context to absolute addresses or to routing information. Of course, other implementations are possible, including implementations that do not require each context member to store a full list of the context members. Contexts can be used only on the process where they were created.
Since the context carries information on the group of processes that belong to this context, a process can send a message within a context only to other processes that belong to that context. Thus, each process needs to keep track only of the contexts that were created at that process; the total number of contexts per process is likely to be small.

The only difference I see between this current definition of context, which subsumes the group concept, and a pared down definition, is that I assume here that process numbering is relative to the context, rather than being global, thus requiring a context member table. I argue that this is not much added overhead, and gives much additional needed functionality.
\begin{itemize}
\item If a new context is created by copying a previous context, then one does not need a new member table; rather, one needs just a new context label and a new pointer to the same old context member table. This holds true, in particular, for contexts that include all processes.
\item A context member table makes sure that a message is sent only to a process that can execute in the context of the message. The alternative mechanism, which is checking at reception, is less efficient, and requires that each context label be system-wide unique. This requires that, at the least, all processes in a context execute a collective agreement algorithm at the creation of this context.
\item The use of relative addressing within each context is needed to support true modular development of subcomputations that execute on a subset of the processes. There is also a big advantage in using the same context construct for collective communications as well.
\end{itemize}
}

\section{Context Operations}

A global context {\bf ALL} is predefined. All processes belong to this context when computation starts. MPI does not specify how processes are initially ranked within the context ALL. It is expected that the start-up procedure used to initiate an MPI program (at load-time or run-time) will provide information or control on this initial ranking (e.g., by specifying that processes are ranked according to their pid's, or according to the physical addresses of the executing processors, or according to a numbering scheme specified at load time).

\discuss{If we think of adding new processes at run-time, then {\tt ALL} conveys the wrong impression, since it is just the initial set of processes.}

The following operations are available for creating new contexts.

{\bf \ \\ MPI\_COPY\_CONTEXT(newcontext, context)}

Create a new context that includes all processes in the old context. The rank of the processes in the previous context is preserved. The call must be executed by all processes in the old context. It is a blocking call: No call returns until all processes have called the function. The parameters are
\begin{description}
\item[OUT newcontext] handle to newly created context. The handle should not be associated with an object before the call.
\item[IN context] handle to old context
\end{description}

\discuss{
I considered adding a string parameter, to provide a unique identifier to the next context. But, in an environment where processes are single threaded, this is not much help: Either all processes agree on the order they create new contexts, or the application deadlocks. A key may help in an environment where processes are multithreaded, to distinguish calls from distinct threads of the same process; but it might be simpler to use a mutex algorithm at each process.
{\bf Implementation note:} No communication is needed to create a new context, beyond a barrier synchronization; all processes can agree to use the same naming scheme for successive copies of the same context. Also, no new rank table is needed, just a new context label and a new pointer to the same old table.
}
{\bf \ \\ MPI\_NEW\_CONTEXT(newcontext, context, key, index)}
\begin{description}
\item[OUT newcontext] handle to newly created context at calling process. This handle should not be associated with an object before the call.
\item[IN context] handle to old context
\item[IN key] integer
\item[IN index] integer
\end{description}
A new context is created for each distinct value of {\tt key}; this context is shared by all processes that made the call with this key value. Within each new context the processes are ranked according to the order of the {\tt index} values they provided; in case of ties, processes are ranked according to their rank in the old context. This call is blocking: No call returns until all processes in the old context have executed the call. Particular uses of this function are: (i) Reordering processes: All processes provide the same {\tt key} value, and provide their index in the new order. (ii) Splitting a context into subcontexts, while preserving the old relative order among processes: All processes provide the same {\tt index} value, and provide a key identifying their new subcontext.
{\bf \ \\ MPI\_RANK(rank, context)}
\begin{description}
\item[OUT rank] integer
\item[IN context] context handle
\end{description}
Return the rank of the calling process within the specified context.
{\bf \ \\ MPI\_SIZE(size, context)}
\begin{description}
\item[OUT size] integer
\item[IN context] context handle
\end{description}
Return the number of processes that belong to the specified context.
\subsection{Usage note}
\begin{itemize}
\item Use of contexts for libraries: Each library may provide an initialization routine that is to be called by all processes, and that generates a context for the use of that library.
\item Use of contexts for functional decomposition: A harness program, running in the context {\tt ALL}, generates a subcontext for each module and then starts each submodule within the corresponding context.
\item Use of contexts for collective communication: A context is created for each group of processes where collective communication is to occur.
\item Use of contexts for context-switching among several parallel executions: A preamble code is used to generate a different context for each execution; this preamble code needs to use a mutual exclusion protocol to make sure each thread claims the right context.
\end{itemize}
\discuss{
If process handles are made explicit in MPI, then an additional function needed is {\bf MPI\_PROCESS(process, context, rank)}, which returns a handle to the process identified by the {\tt rank} and {\tt context} parameters. A possible addition is a function of the form {\bf MPI\_CREATE\_CONTEXT(newcontext, list\_of\_process\_handles)} which creates a new context out of an explicit list of members (and ranks them in their order of occurrence in the list). This, coupled with a mechanism for requesting the spawning of new processes into the computation, will allow the creation of a new all-inclusive context that includes the additional processes. However, I oppose the idea of requiring dynamic process creation as part of MPI. Many implementers want to run MPI in an environment where processes are statically allocated at load-time.
}
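To make the semantics of {\bf MPI\_NEW\_CONTEXT} concrete, a brief illustrative sketch follows. The C-like notation, the constant {\tt NCOLS} and all variable names are assumptions made for this example only; they are not proposed syntax.
\begin{verbatim}
/* Illustrative sketch only: split the predefined context ALL into one
   subcontext per "row" of a virtual process grid with NCOLS columns,
   preserving the old relative order within each row (use (ii) above). */
int myrank, nprocs, row, rowcontext, myrowrank;

MPI_RANK(&myrank, ALL);                   /* rank in the initial context */
MPI_SIZE(&nprocs, ALL);
row = myrank / NCOLS;                     /* NCOLS chosen by the user    */
MPI_NEW_CONTEXT(&rowcontext, ALL, row, 0);   /* key = row, index = 0     */
MPI_RANK(&myrowrank, rowcontext);         /* rank relative to the new
                                             subcontext                  */
\end{verbatim}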
%
% END "Proposal I"
%======================================================================%
%\end{document}
%\documentstyle{report}
%
\chapter{Proposal VII --- Lyndon~J~Clarke \& Rik~J~Littlefield}
%----------------------------------------------------------------------
% Introduction
\section{Introduction}
This chapter is similar in basic principles to Proposal~I and includes all of the functionality of that proposal as a subset --- it extends that proposal in several ways and differs in some details. Certain features of other, now defunct, proposals discussed in the context subcommittee are included. In particular, this chapter proposes that:
\begin{enumerate}
\item Contexts and groups are not identical. A context is always associated with one group, but a group may have several contexts. Properties of groups are inherited by all of the associated contexts, for example process rank.
\item Context and group descriptors can be explicitly transferred to processes that are not members of the context or group.
\item In point-to-point messages, processes can be identified in any of three ways: by process, by rank in a shared context, or by ranks in separate sender and receiver contexts.
\item A ``cache'' facility is provided that allows modules to attach arbitrary information to both contexts and groups.
\end{enumerate}
These extensions are somewhat independent of each other. The first reflects the observation that multiple modules often operate within each process group, so that context formation should be lighter weight than group formation. The second and third together provide expressive support for communication between modules within different groups of processes. The fourth allows modules to be significantly faster in common cases, without complicating their interface to the application. Much of this proposal must be viewed as recommendations to other subcommittees of {\sc mpi}, primarily the point-to-point communication subcommittee and the collective communications subcommittee. Concrete syntax is given in the style of the ANSI C host language, only for purposes of discussion.
%----------------------------------------------------------------------
% Processes
\section{Processes}
This proposal views processes in the familiar way, as one thinks of processes in Unix or NX for example. Each process is a distinct space of instructions and data. Each process is allowed to compose multiple concurrent threads and {\sc mpi} does not distinguish such threads.
\subsection*{Process Identifier}
Each process is identified by a process-local {\it process handle}, which is a reference to a {\it process descriptor} of undefined size and opaque structure. In a static process model process handles can be obtained by mapping from a group (or context) and rank. In a future extension for dynamic processes, handles may be returned by process creation functions. {\sc mpi} provides a procedure which returns a handle for the calling process.
\begin{verbatim}
process = mpi_my_process()
\end{verbatim}
\subsection*{Process Creation \& Destruction}
This proposal makes no statements regarding creation and destruction of processes. {\sc mpi} provides facilities for descriptor transmission allowing the user to explicitly transfer a process descriptor from one process to another. These facilities are described below.
%---------------------------------------------------------------------- % Groups \section{Process Groups} This proposal views a process group as an ordered collection of (references to) distinct processes, the membership and ordering of which does not change over the lifetime of the group. The canonical representation of a group is a one-to-one map from the integers $(0, 1, \ldots, N-1)$ to handles of the $N$ processes composing the group. There may be structure associated with a process group defined by a process topology. This proposal makes no further statements regarding such structures. \subsection*{Group Identifier} Each group is identified by a process-local {\it group handle}, which is a reference to a {\it group descriptor} of undefined size and opaque structure. The initialization of {\sc mpi} makes each process a member of the ``initial'' group. {\sc mpi} provides a procedure that returns a handle to this group. \begin{verbatim} group = mpi_initial_group() \end{verbatim} {\sc mpi} provides facilities for descriptor transmission allowing the user to explicitly transfer a group descriptor from one process to another. \subsection*{Group Creation and Deletion} {\sc mpi} provides facilities which allow users to dynamically create and delete process groups. The procedures described here generate groups which are static in membership. {\sc mpi} provides a procedure which allows users to create one or more groups which are subsets of existing groups. \begin{verbatim} groupb = mpi_group_partition(groupa, key) \end{verbatim} This procedure creates one or more new groups {\tt groupb} which are distinct subsets of an existing group {\tt groupa} according to the supplied values of {\tt key}. This procedure is called by and synchronises all members of {\tt groupa}. {\sc mpi} provides a procedure which allows users to create a group by permutation of an existing group. \begin{verbatim} groupb = mpi_group_permutation(groupa, rank) \end{verbatim} This procedure creates one new group with the same membership as {\tt groupa} with a permutation of process ranking, and returns the created group descriptor in {\tt groupb}. It is called by and synchronises all members of {\tt groupa}. {\sc mpi} provides a procedure which allows users to create a group by explicit definition of its membership as a list of process handles. \begin{verbatim} group = mpi_group_definition(listofprocess) \end{verbatim} This procedure creates one new group {\tt group} with membership and ordering described by the process handle list {\tt listofprocess}. It is called by and synchronises all processes identified in {\tt listofprocess}. {\sc mpi} provides a procedure which allows users to delete user created groups. \begin{verbatim} mpi_group_deletion(group) \end{verbatim} This procedure deletes an existing group {\tt group}. It is called by and synchronises all members of {\tt group}. {\sc mpi} may provide additional procedures which allow users to construct process groups with a process group topology. \subsection*{Group Attributes} {\sc mpi} provides a procedure which accepts a valid group handle and returns the rank of the calling process within the identified group. \begin{verbatim} rank = mpi_group_rank(group) \end{verbatim} {\sc mpi} provides a procedure which accepts a valid group handle and returns the number of members, or {\it size}, of the identified group. 
\begin{verbatim}
size = mpi_group_size(group)
\end{verbatim}
{\sc mpi} provides a procedure which accepts a valid group handle and a process order number, or {\it rank}, and returns the valid process handle to which the supplied rank maps within the identified group.
\begin{verbatim}
process = mpi_group_process(group, rank)
\end{verbatim}
{\sc mpi} may provide additional procedures which allow users to determine the process group topology attributes. {\sc mpi} provides a group descriptor cache facility which allows the user to attach attributes to group descriptors.
%----------------------------------------------------------------------
% Contexts
\section{Communication Contexts}
This proposal views a communication context as the combination of a process group and a protection mechanism that avoids collision between messages sent to different contexts. The context inherits process ranking from its associated group, referred to as a {\it frame}. Each process group may be used as a frame for multiple contexts.
\subsubsection*{Context Identifier}
Each context is identified by a process-local {\it context handle}, which is a reference to a {\it context descriptor} of undefined size and opaque structure. The creation of a process group allocates a {\it base context} which inherits the created group as a frame and can be thought of as an attribute of the created group. {\sc mpi} provides a procedure which accepts a valid group handle and returns a handle to the base context within the identified group.
\begin{verbatim}
context = mpi_base_context(group)
\end{verbatim}
{\sc mpi} provides facilities for descriptor transmission allowing the user to explicitly transfer a context descriptor from one process to another.
\subsubsection*{Context Creation and Deletion}
{\sc mpi} provides facilities which allow the user to dynamically create and delete contexts in addition to the base context associated with a process group. Contexts created in this fashion can be thought of as copies of the base context of the process group. {\sc mpi} provides a procedure which allows users to create contexts. This procedure accepts the handle of a group of which the calling process is a member, and returns a handle to the new context.
\begin{verbatim}
context = mpi_context_creation(group)
\end{verbatim}
This procedure must be called loosely synchronously by all members of {\tt group}. The procedure need not actually synchronize the member processes --- it is suggested that this is a lightweight procedure that can be implemented so as not to require interprocess communication. {\sc mpi} provides a procedure which allows users to delete user-created contexts. The procedure accepts a context handle that was created by the calling process and deletes the identified context.
\begin{verbatim}
mpi_context_deletion(context)
\end{verbatim}
This procedure has the same synchronization behavior as context creation.
\subsubsection*{Context Attributes}
{\sc mpi} provides a procedure which allows users to determine the process group that is the frame of a context.
\begin{verbatim}
group = mpi_context_frame(context)
\end{verbatim}
{\sc mpi} provides a context descriptor cache facility which allows the user to attach attributes to context descriptors.
%----------------------------------------------------------------------
% Descriptors
\section{Descriptor Facilities}
This section describes the descriptor transmission and user cache facilities.
\subsection*{Transmission Facility}
{\sc mpi} provides a mechanism whereby the user can transmit a valid descriptor in a message such that the received descriptor handle is valid. This can be integrated with the capability to transmit typed messages, and it is suggested that a notional data type should be introduced for this purpose, e.g. {\tt MPI\_DSCR\_TYPE}. There are other reasonable approaches to providing this facility. The descriptor is translated as necessary to be meaningful on the destination process, storage is allocated for it, and a handle to that storage is returned. Decorations are not transmitted. Handles are guaranteed to be unique within each process --- if processes A and B independently send to process C a descriptor for an object D, then process C will get two copies of the same handle. As with all transfers of descriptors, the receiving process is responsible for releasing the descriptor and its handle when it is no longer needed or becomes stale. {\sc mpi} provides a procedure which frees a descriptor.
\begin{verbatim}
mpi_free_dscr(handle)
\end{verbatim}
A descriptor registry service which allows descriptors to be identified by name would be a useful additional feature. This service can be implemented at the user level using the point-to-point chapter of {\sc mpi} and the descriptor transmission facilities. These services can be deferred in this session of {\sc mpi}.
\subsection*{Cache Facility}
{\sc mpi} provides a ``cache'' facility that allows an application to attach arbitrary pieces of information, called {\em decorations}, to context and group descriptors. Decorations are local to the process and are not included if the descriptor is sent to another process. This facility is intended to support optimizations such as saving persistent communication handles and recording topology-based decisions by adaptive algorithms. {\sc mpi} provides the following services related to cacheing:
\begin{description}
\item [Generate key:] Generate cache key.
\begin{verbatim}
keyval = mpi_GetDecorationKey()
\end{verbatim}
\item [Store decoration:] Store decoration in cache by key.
\begin{verbatim}
mpi_SetDecoration(handle, keyval, decoration_val,
                  decoration_destructor_routine)
\end{verbatim}
\item [Retrieve decoration:] Retrieve decoration from cache by key.
\begin{verbatim}
mpi_TestDecoration(handle, keyval, decoration)
\end{verbatim}
\item [Delete decoration:] Delete decoration from cache by key.
\begin{verbatim}
mpi_DeleteDecoration(handle, keyval)
\end{verbatim}
\end{description}
Each decoration consists of a pointer or a value of the same size as a pointer, and would typically be a reference to a larger block of storage managed by the module. As an example, a global operation using cacheing to be more efficient for all contexts of a group after the first call might look like this:
{\small
\begin{verbatim}
static int gop_key_assigned = 0;  /* 0 only on first entry */
static MPI_KEY_TYPE gop_key;      /* key for this module's stuff */

efficient_global_op (context_handle, ...)
int context_handle;
{
  struct gop_stuff_type *gop_stuff;   /* whatever we need */
  int group_handle = mpi_context_frame(context_handle);

  if (!gop_key_assigned)    /* get a key on first call ever */
  {
    gop_key_assigned = 1;
    if ( ! (gop_key = MPI_GetDecorationKey()) )
    {
      MPI_abort ("Insufficient keys available");
    }
  }
  if (MPI_TestDecoration (group_handle, gop_key, &gop_stuff))
  {
    /* This module has executed in this group before.
       We will use the cached information */
  }
  else
  {
    /* This is a group that we have not yet cached anything in.
       We will now do so. */
    gop_stuff = /* malloc a gop_stuff_type */
    /* ... fill in *gop_stuff with whatever we want ... */

    MPI_SetDecoration (group_handle, gop_key, gop_stuff,
                       gop_stuff_destructor);
  }
  /* ... use contents of *gop_stuff to do the global op ... */
}

gop_stuff_destructor (gop_stuff)  /* called by MPI on group delete */
struct gop_stuff_type *gop_stuff;
{
  /* ... free storage pointed to by gop_stuff ... */
}
\end{verbatim}
}
The cache facility could also be provided for process descriptors, but it is less clear how such provision would be useful. It is suggested that the cache store, retrieve and delete decoration procedures should fail when applied to a process descriptor handle.
%----------------------------------------------------------------------
% Point-to-point
\section{Point-to-Point Communication}
This proposal recommends three forms for {\sc mpi} point-to-point message addressing and selection: null context; closed context; open context. It is further recommended that messages communicated in each form are distinguished such that a {\tt Send} operation of form X cannot match with a {\tt Receive} operation of form Y, requiring that the form be embedded in the message envelope. The three forms are described, followed by considerations of uniform integration of these forms in the point-to-point communication chapter of {\sc mpi}.
\subsection*{Null Context Form}
The {\it null context\/} form contains no message context. Message selection and addressing are expressed by
\begin{verbatim}
(process, tag)
\end{verbatim}
where: {\tt process} is a process handle; {\tt tag} is a message tag. {\tt Send} supplies the {\tt process} of the receiver. {\tt Receive} supplies the {\tt process} of the sender. {\tt Receive} can wildcard on {\tt process} by supplying the wildcard descriptor handle value {\tt MPI\_WILDCARD}. In this case the receiver may not have obtained the process descriptor of the sender, and the null descriptor handle {\tt MPI\_NULL} is returned in the relevant point-to-point enquiry procedure.
\subsection*{Closed Context Form}
The {\it closed context\/} form permits communication between members of the same context. Message selection and addressing are expressed by
\begin{verbatim}
(context, rank, tag)
\end{verbatim}
where: {\tt context} is a context handle; {\tt rank} is a process rank in the frame of {\tt context}; {\tt tag} is a message tag. The calling process must be a member of the frame of {\tt context}. {\tt Send} supplies the {\tt context} of the receiver (and sender), and the {\tt rank} of the receiver. {\tt Receive} supplies the {\tt context} of the sender (and receiver), and the {\tt rank} of the sender. The {\tt (context, rank)} pair in {\tt Send} ({\tt Receive}) is sufficient to determine the process identifier of the receiver (sender). {\tt Receive} cannot wildcard on {\tt context}. {\tt Receive} can wildcard on {\tt rank} by supplying the wildcard integer {\tt MPI\_DONTCARE}. This proposal makes no statement about the provision for wildcard on {\tt tag}.
\subsection*{Open Context Form}
The {\it open context\/} form permits communication between members of any two contexts. Message selection and addressing are expressed by
\begin{verbatim}
(lcontext, rcontext, rank, tag)
\end{verbatim}
where: {\tt lcontext} is a context handle; {\tt rcontext} is a context handle; {\tt rank} is a process rank in the frame of {\tt rcontext}; {\tt tag} is a message tag.
The calling process must be a member of the frame of {\tt lcontext} and need not be a member of the frame of {\tt rcontext}. {\tt Send} supplies the context of the sender in {\tt lcontext}, the context of the receiver in {\tt rcontext}, and the {\tt rank} of the receiver in the frame of {\tt rcontext}. {\tt Receive} supplies the context of the receiver in {\tt lcontext}, the context of the sender in {\tt rcontext}, and the {\tt rank} of the sender in the frame of {\tt rcontext}. The {\tt (rcontext, rank)} pair in {\tt Send} ({\tt Receive}) is sufficient to determine the process identifier of the receiver (sender). {\tt Receive} cannot wildcard on {\tt lcontext}. {\tt Receive} can wildcard on {\tt rcontext} by supplying the wildcard descriptor handle value {\tt MPI\_WILDCARD}, in which case it must also wildcard on {\tt rank} since the process descriptor of the sender cannot be determined. In this case the receiver may not have obtained the context descriptor of the sender, and the null descriptor handle {\tt MPI\_NULL} is returned in the relevant point-to-point enquiry procedure. {\tt Receive} can wildcard on {\tt rank} by supplying the wildcard integer value {\tt MPI\_DONTCARE}.
\subsection*{Uniform Integration}
The three forms of addressing and selection described have different syntactic frameworks. We can consider integrating these forms into the point-to-point chapter of {\sc mpi} by defining a further orthogonal axis (as in the multi-level proposal of Gropp \& Lusk) which deals with form. This is at the expense of multiplying the number of {\tt Send} and {\tt Receive} procedures by a factor of three, and some further but trivial work with details of the current point-to-point chapter which uniformly assumes a single addressing and selection form. There are various approaches to unification of the syntactic frameworks which may simplify integration. Two options are now described, each based on retention and extension of the framework of the closed and open context forms.

The framework of the open context form could be adopted and extended. The null context form is expressed as {\tt (MPI\_NULL, MPI\_NULL, process, tag)}, which is a little clumsy. The closed context form is expressed as {\tt (MPI\_NULL, context, rank, tag)}, which is marginally inconvenient. The open context form is expressed as {\tt (lcontext, rcontext, rank, tag)}, which is of course natural.

The framework of the closed context form could be adopted and extended. The null context form is expressed as {\tt (MPI\_NULL, process, tag)}, which is marginally inconvenient, and requires that descriptor handles are expressed as integers. The closed context form is expressed as {\tt (context, rank, tag)}, which is of course natural. Expression of the open context form requires a little more work. We can use the {\tt context} field as ``shorthand notation'' for the {\tt (lcontext, rcontext)} pair at the expense of introducing some trickery. We define a ``duplet descriptor'' which is formally composed of two references to contexts, and provide a procedure which constructs such a descriptor given two context descriptors. Both {\tt Send} and {\tt Receive} accept a duplet descriptor in {\tt context}, are able to distinguish the duplet descriptor from a singlet descriptor, and treat the duplet as shorthand notation. It is conjectured that using this framework is the best choice for {\sc mpi}.
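As a purely illustrative sketch of this closed-context framework with a duplet descriptor: the function names, the argument order of the send, and the use of integer handles below are assumptions made for this example, not proposed syntax.
\begin{verbatim}
/* Illustrative sketch only.  A module sends to a process in another
   module's context using a duplet built from its own context and the
   remote context.                                                     */
void send_to_other_module(int mygroup, int rcontext,
                          int rank, int tag, char *buffer, int length)
{
  int lcontext = mpi_base_context(mygroup);   /* sender's own context   */
  int duplet   = mpi_make_duplet(lcontext,    /* assumed name for the   */
                                 rcontext);   /* duplet constructor     */

  /* Send distinguishes the duplet from a singlet context handle and
     treats it as shorthand for the (lcontext, rcontext) pair.          */
  mpi_send(buffer, length, duplet, rank, tag); /* assumed argument order */
}
\end{verbatim}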
%----------------------------------------------------------------------
% Collective communication
\section{Collective Communication}
Symmetric collective communication operations are compliant with the closed context form described above. This proposal recommends that such operations accept a context descriptor which identifies the context (thus frame) in which they are to operate. {\sc mpi} does plan to describe symmetric collective communication operations. It is not possible to determine whether this proposal is sufficient to allow implementation of the collective communication chapter of {\sc mpi} in terms of the point-to-point chapter of {\sc mpi} without loss of generality, since the collective operations are not yet defined.

Asymmetric collective communication operations, especially those in which sender(s) and receiver(s) are distinct processes, are compliant with the open context form described above. This proposal recommends that such operations accept a pair of context descriptors (a duplet descriptor) which identify the contexts (thus frames) in which they are to operate. {\sc mpi} does not plan to describe asymmetric collective communication operations. Such operations are expressive when writing programs beyond the SPMD model, which are composed of communicating, functionally distinct process groups. These services can be deferred in this session of {\sc mpi}.
%----------------------------------------------------------------------
% Conclusion
\section{Conclusion}
This chapter presented a proposal for communication contexts and process groups for {\sc mpi}. In the proposal process groups are created dynamically and are static in membership. Associated with each process group are one or more communication contexts which inherit process ranking. The recommendations for point-to-point communication are powerful. The proposal provides process addressed communication which occurs within an extended context. The proposal also contains closed context communication addressed in terms of context and rank which protects messages belonging to one context from those belonging to other contexts. The proposal also contains open context communications addressed in terms of sender context, receiver context, and process rank which provides expressive power for intercommunication between modules within different groups. The proposal is extensible to a number of features which might be included in future sessions of {\sc mpi}, for example: dynamic processes; dynamic groups; multiple group collective communications.
%
% END "Proposal VII"
%======================================================================%
%
% BEGIN "Proposal III"
% Anthony Skjellum
% March 1993
%
\chapter{Proposal III --- A.~Skjellum {\em et al.}}
%----------------------------------------------------------------------
% Introduction
\section{Introduction}
This chapter takes a slightly different approach to contexts and groups than does Proposal~VII. It is of roughly equal conceptual ``power'' to Proposal~VII, with some differences. As appropriate, this chapter borrows directly from Proposal~VII, by Clarke and Littlefield.
\begin{enumerate}
\item Contexts are supported to discriminate between messages in the system. A context is a conceptual extension of the tag space into a system-defined part (not wildcardable), and a totally user-defined part (the traditional 32-bit tag).
\item A context is a lower-level concept than a group, so that contexts not associated with groups are permitted.
This permits the user to develop codes that build on the server model, or which build up groups dynamically (not otherwise supported by MPI1).
\item Groups are used to describe cooperative communication in the system. Groups have one or more contexts of communication associated with them. When created, a group is given a context of communication.
\item Context and group descriptors can be explicitly transferred to processes that are not members of the context or group.
\item In point-to-point messages, processes can be identified in either of two ways: by opaque process identifier, or by rank in a group; in either case, communication scope is within a given context.
\item The cache facility, allowing groups to add additional information (described in Proposal~VII), is embraced by this Proposal, with reservations as noted. The possible need to omit this cacheing feature from MPI1 should not invalidate the remainder of Proposal~VII from further consideration (severability).
\end{enumerate}
%----------------------------------------------------------------------
% Processes
\section{Processes}
This Proposal views processes in the familiar way, as one thinks of processes in Unix or NX for example. Each process is a distinct space of instructions and data. Each process is allowed to compose multiple concurrent threads and {\sc MPI\/} does not distinguish such threads. {\sc MPI\/} shall be thread-aware, but not thread-supporting, so we make every attempt to make thread-safe programs possible in defining what follows.
\subsection*{Process Identifier}
Each process is identified by an opaque process identifier, which is associable to a {\it process descriptor} of undefined size and opaque structure, through {\sc MPI} accessor calls. In a static process model process identifiers can be obtained by mapping from a group and rank. In a future extension for dynamic processes, identifiers may be returned by process creation functions. {\sc MPI} provides a procedure that returns an identifier for the calling process.
\begin{verbatim}
my_process = mpi_my_process()
\end{verbatim}
This identifier can be converted to a transmittable form by {\sc MPI} converter functions; though opaque, it is conceptually a pointer.
\subsection*{Process Creation \& Destruction}
This proposal makes no statements regarding creation and destruction of processes. {\sc MPI} provides facilities for identifier transmission allowing the user explicitly to transfer a process identifier from one process to another. These facilities are described below. {\sc MPI} also provides means to transfer the information underlying the opaque process descriptor.
%----------------------------------------------------------------------
% Groups
\section{Process Groups}
This proposal views a process group as an ordered collection of distinct processes (via process identifiers), the membership and ordering of which does not change over the lifetime of the group. The canonical representation of a group is a one-to-one map from the integers $(0, 1, \ldots, N-1)$ to identifiers of the $N$ processes composing the group. There may be structure associated with a process group defined by a process topology. This proposal makes no further statements regarding such structures. There may be non-enumerative ways to construct and manipulate special groups, or for special machine architectures ({\em e.g.}, cohorts). This proposal makes no further statements about such special groups, other than the desirability of avoiding group-name enumeration, when possible.
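To make the canonical representation concrete, a group descriptor might be laid out as follows; this is purely illustrative, and the type and field names are assumptions of this sketch, not proposed here.
\begin{verbatim}
/* Illustrative sketch only: one possible layout of a group descriptor,
   following the canonical representation described above.             */
typedef struct {
    int             size;      /* N, the number of member processes     */
    mpi_process_id *members;   /* members[rank], rank = 0 .. N-1: the
                                  one-to-one map from ranks to process
                                  identifiers                           */
    int             context;   /* the context of communication given to
                                  the group when it was created         */
} group_descriptor;            /* accessed only through a group
                                  identifier and MPI accessor functions */
\end{verbatim}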
\subsection*{Group Identifier}
Each group is identified by an opaque {\it group identifier}, which is associable to a {\it group descriptor} of undefined size and opaque structure, through {\sc MPI} accessor functions. The initialization of {\sc MPI} makes each process a member of the ``initial'' group. {\sc MPI} provides a procedure that returns an identifier to this group.
\begin{verbatim}
group_ident = mpi_initial_group()
\end{verbatim}
{\sc MPI} provides facilities for descriptor transmission allowing the user explicitly to transfer a group descriptor from one process to another.
\subsection*{Group Creation and Deletion}
{\sc MPI} provides facilities which allow users dynamically to create and delete process groups. The procedures described here generate groups which are static in membership. {\sc MPI} provides a procedure that allows users to create one or more groups which are subsets of existing groups.
\begin{verbatim}
new_group = mpi_group_partition(old_group, key)
\end{verbatim}
This procedure creates one or more new groups {\tt new\_group} which are distinct subsets of an existing group {\tt old\_group} according to the supplied values of {\tt key}. This procedure is called by and synchronises all members of {\tt old\_group}. No overlapping is permitted, so that exactly one {\tt new\_group} is achieved in each {\tt old\_group} process. The new groups have new contexts of communication. The number of new contexts depends on the number of different key values asserted. {\sc MPI} provides a procedure which allows users to create a group by permutation of an existing group.
\begin{verbatim}
new_group = mpi_group_permutation(old_group, rank)
\end{verbatim}
This procedure creates one new group with the same membership as {\tt old\_group} with a permutation of process ranking, and returns the created group descriptor in {\tt new\_group}. It is called by and synchronises all members of {\tt old\_group}. The new group has a new context of communication. {\sc MPI} provides a procedure that allows users to create a group by explicit definition of its membership as a list of process identifiers.
\begin{verbatim}
new_group = mpi_group_definition(array_of_process_ids, length,
                                 context_to_use)
\end{verbatim}
This procedure creates one new group {\tt new\_group} with membership and ordering described by the process identifier list {\tt array\_of\_process\_ids}, of length {\tt length}. If {\tt context\_to\_use} is specified as the special context {\tt MPI\_GET\_CONTEXT}, then the system allocates the new context. Otherwise, the system trusts the user to have allocated the context legally (see below). This procedure must be called by and synchronises all processes identified in the list. A further approach to new group definition is as follows:
\begin{verbatim}
new_group = mpi_group_def_by_leader(leader_id, in_length,
                                    in_array_of_process_ids,
                                    context_to_use,
                                    out_array_of_process_ids, out_length)
\end{verbatim}
This weaker form requires all future group members to have identified only a leader (specified by {\tt leader\_id}). The leader knows the length and names of all participants. This call synchronises all participants. The semantics of {\tt context\_to\_use} are the same as above. {\sc MPI} provides a way formally to duplicate a group, in order to obtain a separate context of communication. This can be achieved by using other operations, but this procedure allows optimization in some implementations (no explicit group copy, for instance).
\begin{verbatim}
new_group = mpi_group_duplicate(old_group)
\end{verbatim}
The only difference between the old and new groups is that there is a new context of communication. {\sc MPI} provides a procedure which allows users to delete user-created groups.
\begin{verbatim}
mpi_group_deletion(group)
\end{verbatim}
This procedure deletes an existing group {\tt group}. It is called by and synchronises all members of {\tt group}. {\sc MPI} may provide additional procedures which allow users to construct process groups with a process group topology.
\subsection*{Group Attributes/Accessors}
{\sc MPI} provides a procedure that accepts a valid group identifier and returns the rank of the calling process within the identified group.
\begin{verbatim}
rank = mpi_group_rank(group)
\end{verbatim}
{\sc MPI} provides a procedure that accepts a valid group identifier and returns the number of members, or {\it size}, of the identified group.
\begin{verbatim}
size = mpi_group_size(group)
\end{verbatim}
{\sc MPI} provides a procedure that accepts a valid group identifier and a {\it rank-in-group}, and returns the valid process identifier to which the supplied rank maps within the identified group.
\begin{verbatim}
process_id = mpi_group_process(group, rank)
\end{verbatim}
{\sc MPI} may provide additional procedures which allow users to determine the process group topology attributes. {\sc MPI} provides a group descriptor cache facility that allows the user to attach attributes to group descriptors. See Proposal~VII for details.
%----------------------------------------------------------------------
% Contexts
\section{Communication Contexts}
This proposal views a communication context as a partition of the tag space, which is a protection mechanism that avoids collision between messages sent between processes. Process groups have one or more contexts in {\sc MPI}. Unlike Proposal~VII, more contexts are obtained for a group using the above-discussed group creation and replication functions. In good implementations, replication may be only formal. This approach is viewed as simpler than what Proposal~VII describes.
\subsubsection*{Context Identifier}
Each context is identified by an opaque context identifier. It is conceptually an integer assigned by the system to partition a large tag space into user-defined and system-controlled subspaces. This strategy provides the minimal level of isolation needed to build large libraries, and is close to practice.
\subsubsection*{Context Creation and Deletion}
{\sc MPI} provides facilities that allow users dynamically to allocate and free contexts. When contexts are used with groups, these calls are not needed. For more advanced uses (such as building your own dynamic groups), these calls will be used. Above, where {\tt context\_to\_use} appears as an argument, the following call would have been used to secure such a context in advance.
\begin{verbatim}
mpi_context_creation(number_of_contexts_wanted, array_of_contexts,
                     number_of_contexts_provided)
\end{verbatim}
This call may be made by any process, with no synchronization with other processes. {\sc MPI} provides a procedure that allows users to delete user-created contexts. The procedure accepts a context identifier array, containing zero or more contexts created previously in the system.
\begin{verbatim}
mpi_context_deletion(context_array, length)
\end{verbatim}
No synchronization occurs here. The user can do erroneous things by freeing contexts that are still in use.
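A brief illustrative sketch of the interplay between explicit context allocation and group definition follows; the variables {\tt i\_am\_leader}, {\tt proc\_ids} and {\tt n}, the surrounding control flow, and the by-reference convention for the output count are all assumptions made for this example only.
\begin{verbatim}
/* Illustrative sketch only: a designated process pre-allocates a
   context, and all participants then define a group over it.  The
   context value must be made known to every participant (for example
   through the name service described below) before the group
   definition call is made.                                            */
int contexts[1], provided;

if (i_am_leader) {
    mpi_context_creation(1, contexts, &provided);  /* no synchronization */
    /* ... publish contexts[0] to the other participants ...            */
} else {
    /* ... obtain contexts[0] from the leader ...                       */
}
/* proc_ids, n: the intended membership, known to every participant.
   Passing MPI_GET_CONTEXT instead of contexts[0] would instead ask the
   system to allocate the context itself.                               */
new_group = mpi_group_definition(proc_ids, n, contexts[0]);
\end{verbatim}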
For general applications, it may be nice to have a name service for contexts (necessary for building dynamic groups and servers, for yourself). Herewith:
\begin{verbatim}
mpi_associate_contexts_with_name(string_name, context_array, length)
mpi_disassociate_contexts_with_name(string_name)
mpi_get_contexts_by_name(string_name, max_length, out_length,
                         context_array)
\end{verbatim}
As with context generation, the above calls assume a simple reactive, global server, or shared name space mechanism (both achievable easily in practice).
%----------------------------------------------------------------------
% Descriptors
\section{Descriptor Facilities}
This section describes the descriptor transmission and user cache facilities.
\subsection*{Conversion Facility}
{\sc MPI} provides a mechanism whereby the user can convert a valid descriptor ({\em e.g.}, a group descriptor) identified through an identifier ({\em e.g.}, a group identifier) for use in a message such that the received descriptor can be reconstructed on the remote end. This can be integrated with message transmission as the user sees fit, without additional complication to the send/receive semantics of {\sc MPI}. An example follows:
\begin{verbatim}
error = mpi_group_group_transmit(group, group_buffer, max_length,
                                 act_length)
\end{verbatim}
If the buffer is not long enough to hold the information, an error occurs. A network-independent format can be assumed in the {\tt group\_buffer}. Cached ``attributes'' are not transmitted (see below). A brief illustrative sketch of this facility appears at the end of this section.
\subsection*{Cache Facility}
{\sc MPI} provides a ``cache'' facility that allows an application to attach arbitrary pieces of information, called {\em attributes}, to context and group descriptors. Attributes are local to the process and are not included if the descriptor is sent to another process. This facility is intended to support optimizations such as saving persistent communication handles and recording topology-based decisions by adaptive algorithms. {\sc MPI} provides the following services related to cacheing. We call our attributes `attributes'; Proposal~VII calls them (equivalently) decorations (no big difference, except naming, is anticipated).
\begin{description}
\item [Generate key:] Generate cache key.
\begin{verbatim}
keyval = mpi_get_attribute_key()
\end{verbatim}
\item [Store attribute:] Store attribute in cache by key.
\begin{verbatim}
mpi_set_attribute(handle, keyval, attribute_val,
                  attribute_destructor_routine)
\end{verbatim}
\item [Retrieve attribute:] Retrieve attribute from cache by key.
\begin{verbatim}
mpi_test_attribute(handle, keyval, attribute)
\end{verbatim}
\item [Delete attribute:] Delete attribute from cache by key.
\begin{verbatim}
mpi_delete_attribute(handle, keyval)
\end{verbatim}
\end{description}
Each attribute consists of a pointer or a value of the same size as a pointer, and would typically be a reference to a larger block of storage managed by the module. Our example will appear in a later draft, because we have semantic differences from some of the ancillary aspects of the example of Proposal~VII. The cache facility could also be provided for process identifiers, but it is less clear how such provision would be useful. It is suggested that the cache store, retrieve and delete attribute procedures should fail when applied to a process identifier. Implementations should use AVL trees, or similar efficient data structures, to provide relatively efficient access to attributes.
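The following sending-side sketch of the conversion facility is illustrative only; the buffer size {\tt BUFLEN}, the point-to-point send call and its argument order, the by-reference convention for {\tt act\_length}, and the receive-side reconstruction procedure are assumptions of this sketch, since the proposal names only the conversion procedure itself.
\begin{verbatim}
/* Illustrative sketch only: convert a group descriptor to a
   network-independent buffer and ship it with an ordinary
   point-to-point send in the process-identifier form.                */
void ship_group(int group, int context, int dest_process, int tag)
{
    char group_buffer[BUFLEN];
    int  act_length, error;

    error = mpi_group_group_transmit(group, group_buffer, BUFLEN,
                                     &act_length);
    if (error) {
        /* buffer too short: retry with a larger buffer */
    } else {
        mpi_send(group_buffer, act_length, context, dest_process, tag);
    }
    /* The receiver would reconstruct a local group identifier from
       the buffer with a converse procedure (not named here).         */
}
\end{verbatim}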
%----------------------------------------------------------------------
% Point-to-point
\section{Point-to-Point Communication}
This proposal recommends two forms for {\sc MPI} point-to-point message addressing and selection: by-group notation and by-process-ID notation; in both cases one is always working within a context, as this is the fundamental management tactic of {\sc MPI} for messages. As a group always has a context at creation, and an ALL group is anticipated, this should prove fine for a static process model. We disagree significantly with Proposal~VII in what follows. The two forms are described, followed by considerations of uniform integration of these forms in the point-to-point communication chapter of {\sc MPI}.
\subsection*{Group-Rank Form}
The {\it group-rank\/} form permits communication between members of the same context and group. Message selection and addressing are expressed by
\begin{verbatim}
(group, rank, tag)
\end{verbatim}
where: {\tt group} is a group identifier; {\tt rank} is a process rank in that group; {\tt tag} is a message tag. The calling process must be a member of {\tt group}. {\tt Send} determines the context using information in the group identifier. It does all necessary mappings to the process identifier space. {\tt Receive} cannot wildcard on context, so a valid matching receive must refer to the same group information. {\tt Receive} can wildcard on {\tt rank} by supplying the wildcard integer {\tt MPI\_DONTCARE}. This proposal makes the following statement about the provision for wildcard on {\tt tag}. A two-integer form of wildcard is needed for layering: {\tt care\_bits} and {\tt dont\_care\_bits}. Tags are matched if and only if
\begin{equation}
(\mbox{\tt received\_tag} \;\mbox{AND NOT}\; \mbox{\tt dont\_care\_bits})
\;\mbox{XOR}\; \mbox{\tt care\_bits} = 0 .
\end{equation}
This general format can be used to partition the tag space for virtual topologies or other user-defined needs, and is quite important to the standard's flexibility. (A worked sketch of this matching rule is given at the end of this section.)
\subsection*{Process-Identifier Form}
Communication takes place using the following parameters:
\begin{verbatim}
(context, process_identifier, tag)
\end{verbatim}
where: {\tt context} is a context identifier, {\tt process\_identifier} is a process identifier, {\tt tag} is a message tag. The calling process must be a member of the same context as the recipient. There is no reference to groups here. Contexts can have been shared by using a least common ancestor prior to this call, or by the above-mentioned context naming service. There is never wildcarding on context. Wildcarding on {\tt process\_identifier} is through {\tt MPI\_DONTCARE}. Tag wildcarding is through the integer pair described above.
\subsection*{Uniform Integration}
The two forms of addressing and selection described have different syntactic frameworks. We can consider integrating these forms into the point-to-point chapter of {\sc MPI} by defining a further orthogonal axis (as in the multi-level proposal of Gropp \& Lusk) which deals with form. This is at the expense of multiplying the number of {\tt Send} and {\tt Receive} procedures by a factor of two, and some further but trivial work with details of the current point-to-point chapter which uniformly assumes a single addressing and selection form. No further details are really needed, other than naming that disambiguates the group-rank form from the process-ID-context form, and the naming would seem uncontroversial.
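As promised above, a small worked sketch of the {\tt care\_bits}/{\tt dont\_care\_bits} matching rule follows; the particular values and the helper function are arbitrary illustrations, not proposed syntax.
\begin{verbatim}
/* Illustrative sketch only: accept any tag whose high 16 bits equal
   0x0042, ignoring the low 16 bits.                                  */
unsigned care_bits      = 0x00420000;  /* required bit pattern         */
unsigned dont_care_bits = 0x0000ffff;  /* low 16 bits are "don't care" */

/* A tag matches iff (received_tag AND NOT dont_care_bits)
   XOR care_bits == 0.                                                 */
int tag_matches(unsigned received_tag)
{
    return ((received_tag & ~dont_care_bits) ^ care_bits) == 0;
}
/* Example: tag_matches(0x00421234) is true, since masking the low 16
   bits leaves 0x00420000; tag_matches(0x00431234) is false.           */
\end{verbatim}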
%----------------------------------------------------------------------
% Collective
\section{Collective Communication}
Symmetric collective communication operations are compliant with the group-rank form described above. This proposal recommends that such operations accept a group identifier (which contains the context and other information needed to operate correctly). We recommend that the tag argument be included in collective calls where this could help with debugging. {\sc MPI} does plan to describe symmetric collective communication operations. It is impossible to determine whether this proposal is sufficient to allow implementation of the collective communication chapter of {\sc MPI} in terms of the point-to-point chapter of {\sc MPI} without loss of generality, since the collective operations are not yet defined. Asymmetric collective communication operations, especially those in which sender(s) and receiver(s) are distinct processes, should be made compliant with the group-rank form described above. {\sc MPI1} should forgo non-blocking collective operations, but ask vendors to support thread models in lieu of such operations.
%----------------------------------------------------------------------
% Conclusion
\section{Conclusion}
This proposal is substantially simpler than Proposal~VII, while incorporating many of its finer elements, specifically the ability to cache attributes, which would let users extend the usefulness of groups in their applications. It places groups as conceptually higher objects than contexts, and removes a lot of the framework needed by Proposal~VII to accomplish inter-group transfers, by providing context-process-identifier communication and group-rank communication. Through a simple context management server and name registry server, dynamic group support becomes simple, and synchronization via least-common ancestors becomes avoidable. Compared to Proposal~I, we have offered a more convenient process model, because contexts are seen clearly to partition and manage the tag space, whereas groups are seen as ways to describe collective operations, and attributes of cooperative communication. This offers a much more open interface than either Proposal~VII or Proposal~I, including a legitimate ability to do server-like computation. However, a simple, reactive server is needed to accomplish these services. Arguably, this server would also be needed in Proposal~VII, though not in Proposal~I.
%
% END "Proposal III"
%======================================================================%
\end{document}

From owner-mpi-context@CS.UTK.EDU Wed Mar 24 12:16:07 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA20277; Wed, 24 Mar 93 12:16:07 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24774; Wed, 24 Mar 93 12:15:33 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 24 Mar 1993 12:15:31 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24747; Wed, 24 Mar 93 12:15:23 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA06846; Wed, 24 Mar 93 11:04:43 CST
Date: Wed, 24 Mar 93 11:04:43 CST
From: Tony Skjellum
Message-Id: <9303241704.AA06846@Aurora.CS.MsState.Edu>
To: d39135@sodium.pnl.gov, jim@meiko.co.uk, lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, ranka@top.cis.syr.edu, tony@aurora.cs.msstate.edu
Subject: postscript of draft (uuencoded)
8.1PZ:C"*C=:CD<8=2'<"HO<=G M;:).FY(VOY1L_4:>) !!:S_DS82?\5R]2RKJ$*RBJEV:XHJ&+YTH29A\&"@U MU5ECIJ6@$V$O S#]-RHB*.(,V:+W*'&&BWY6K!N/18^BH U@0.H)':$_R89( MAIJ$]HP3"A/4&W[7E@02!5*S1_X'N-1S%&FV)(HB2ACIU[0>J:%J:,D2BE(O M/Y<"-Y$FH_T'!+I..F7%4#IJ 6@7,LL^H_"H[?H,V3R"&",5()'ZA1801NN MEI,&;0&3?P1SP5S#J!B$<-Q&PU4VM%9<'\CHZU7]M!T:'.0"K,DO%I.3YBO] M)V(!QJ&-C 2@@<TXU9([C$7HIYJ!QY*$X7\HE0Y@)RI8EJ?+*:@ MJ>2!HMQ <5*.4F]XBF/8MR86T&T\TA,R?'PT1P=6"KHA1Y -;;IQA5R3C:9V MJ0T8N^FEQM/(/GG2[A.QC2(MJ^A+M[1P4 A$CD(7W'+9E2>&;]RA MI='"=)7:1FB5L(4\ETPZ*NU .XC7^+K M0H-(AN<)6%9TYJ$FZ;)&WPR LE!^>NG)/OI?)!IQ#"C]QDNTA_:A@!4N2 ,L M,UGIVY6=.ARJR8O6=NRD15M/ZJ'\I#XI@"2,$BZ#R\TU?4HZ!LJA\Z&IHU70 M7'2+S8R)C'0J%EB83PC909ID0\C1PH29$FJ,%C9$A'8\[BG]9UR1)4&':0&* M1FP51Z1RR2&,5)FPXJ&T1OH?"7I%0AQN8%[ _=PPOQ2,JE3]=]N5(.JN&&KU MBGEZH@BA0F@1HIZ..&D?]Y/OS2Y96::(G6*EUZG+@K)4.V?J38KY/%EYC!SC MIOZBZ"G<-!@.I<2;>NIVX$K$2B6E!CFGQA!VT_WP&Q;IPQBH_H0"QT]XIL&E MK@ Q$-!1BEI6!(63C#I36#T&? @]88V MFI4"JIFBO9&$#E>.6:G*T(RJ+T=6FHAVGQG'T>&+ZJ'$4/C2FSVG+@BFY1/T M@2\!3$"]_5I3$FA0);EOP0N %GD9+U(2D+/TA*(5'.;%!6%P59(&5ZR>A[/7 M;J*3R$50B5\X4_H<:@H$TJW:*0J*6::G<)]FV:$R>ZV$N1=-N.P<'%JB8(-. MK6RF1Y)R!!*A&1+X)LHT(JB*Z!C'\"OBBLEQ(BHRF*(2)WLD?/_'A33 <5-> MB M:@ !RE]#"ZM(=7 @(YU$'A66JQ^]9J9UOG-?*A7PO$SUID6B(9'@&T= MG2-:TW 08*";M300.74**IPB@/QV]HA!D[+:7I:*DWC(.(EAUM,5M3* DXC! MP<.%EKC([?C$72'NVHQYE@TW_1W-&>'%I M.&Q)S29TR$6QD-"!W_"*KE%9A*#0I>L76A8782)Q40U&D(U!^=B*B7R%2!/K MZ8="NFYL59N:=*EJ .LBI,2A0:LE]"%[P#OJZ":"N1HK'">R4KBX.4,,0C:E MQ)+Z'*XBL[PAH\H;8AF6>G]*I79_M*Y[G605V7&LM%^_D=>M'O]?"6*R\B6H M6@HBJR!R3*1?4S[VD1%?H_?!B$&8S)%JR9@6'$=%HGG]'! ']+IYY:AHH.EE MO99\@%&V>H7Y)E?)51+#?:^W(+8BONY\D^4=8A1:*?RJOC%(DB\)2B+53WE[ MBU_K>("Q8IT0_@G#Z8+MFAMI"P(G64T>^9(8H #('@G K!J9$V2^KPDX%3K MAJWN7ML;][E:7F-FF:E2E-QUD]Y1F K*, A;I@BVB3[HQRV7YN&H\$L2!:EH M5A+8.W/Q6#+)J_R!V>1,]HZ Q%CI'"]L+#0:T1O*1US([1AU ,K1TJ T1:-/ M9$:Q>2)0RAX'NEPW+%M4=2,M,N\1('4H+6QNDRQ%/4D=.XZ#%T=18&D7QB/* M/')\UL@Q<=QRZQG68_]D:>_9=,3N=&\E69*B:3(E:LT8ZXF=*\\:9.04<;%; MDHMGB:AD)HQ%DA&^HP%(!F3LN$0D;-1&^8 Z.FOY92CV04@**"76@$_'B]0Q MPQZAKU%B18G@L(.'#LMGQ1T;GV3(WU!"'HL XKL]'%O'V*/TC$]TJ;T*O8(= M'M :6R/IH9%1UO1+]0L!AT-)MG%&18AH%0Q2'^Q7Y?,RW;(Z3Q;$EF%X:Q+( M.H@A=X-.//H2#BL2#Y&2M!U1(=J]%QFU([Y.N/6_S5Q2:05** *A+(I.XP/] M7.A6.M-8*;):5Y\ +%P<[0*4V$-?BA)"LD2BUB&[0ZNU/O,: M^'3M5%$Q*(]SY61E9$>4I7L$,/&1\5%P8H:Q;+Y!LO"PH,V$,KGH)80<"-)9 M6AU.BW.7Q02TGV4)BX1I7XO>*+/"ZFN-TA*EJ^!<3-$R6Z*J/WHE.G*[^*Q% M$@3W&H5L<-@0^[]5J=]8W5* :#VI;/W'ODH^X(< DKO<+#?MWS9W2C[=8 \: MIU0C8I-LY!K!A2Y1N(..2H:$:([F:%DO;]C8X:NHM%T22^LUI:$'V@?TRJTI M_IF8QE""?5-DI@?HG'#MD47JE>%)ODL%XR,M@+D/?,+(5%LW%^I3M&Z+(2H5 M=,,0M,65OI&KSEIM +!!H5!O(\FNA*[X* ^C23=F^@ '[:ZU&:):6Q-HN&Z_!0?B!YEIS&G%SAMQ8%V5Y=STUKI[) MWM'!K'NH+7NHG'QSINW,2K)>3X_5]72!O+;2SFP[H[2VMJUK>]O*MKCMWF%? 
M]K95",-SYPBWWTI*![G0771!N*7 A4MA$HGT9AZN7U+(6KZ%>W:8*"N'C6_< M6S8ER>EO:1(<9K!J+?.;A:G,9!X&H4,@OX%"FXM3 J7D;SG:,]LE<7U*2N=J MW7I][NVBI;]-M\YMB%1(?5UTRJS"_4PCG6A"^/08DVW7GN.:A:'#%3IRU9@6 M30CQ\J,(N NA0GB!6"#E$T/($%X\F&I-\*\-]RY MN(R@+T6)++I-QHTI6\3ADRB/T6&U.@5Z>8X,%.BLM#>+CU1H=\:?X)B9DG<^ M4GH1+I,793R^*_JA,.TSQDG*!,;M,G7?&&?PI98.J^J5/PY;\0=T*(-%'9K1 M6@8D;KDD;I<[XM*4W2@>%W]X>#:)7 C!GE: B>9B]U5ACP_G(:[$*AUL(F2=XD'X*$(87<<33 6WH6V!PER1^@&NL6CH O>J7*B M;9]+9#ZZIU=0:4]R0)82V%CT)5U?&G96/D:"(U5L!W_^(];<-/?Z]*A'9RCZ MK)"Z1F'NJ/-*!%YZM[1:)B_892*HX)E=/K-$;*!6KHF/+SRO6U MD6 1\N8R./UJ+/K_^3!UX?+UE)!#INU1F.S<(=SE\-21K*D2+3K&0:90!S9>5X6E[1@,F*%<#/'[I=F4X4KZZ]J^_ZN]CA2"*6<+#T7\V(-.5%DP\AUK=<07?N M-&?:]7*\"JB3>.Q8Q5P8->'BB]1G<3++E6V\#5 BH.R.I2-8\]Z\:>4BB%@"G<)<->?S;E-T MY^8XAH2V;)GK^)F()L.:"M*0#G(29"2RF/PA!=V:LKQI/HS'U+MS"'0('4S@ MB!4@6@Z=LD3=(.!4ZD&T_G![&/$1-](I\-_?IO:"O6RO*H9_@KT%3I[GJ4"@ M<&_2D^CPG#%,@R:(&;[?DLN;],HJ2V\= MTD=R0VW(@9CC2KV@;H+3'"8XNXK7&_OYAS9KQ(8V376]()D8@?0)7'GI#\'_5:;TN_S"]!%)D0(\QL% M*CKR8_(;E?0]#F\]B8],@5;);?,V]J[++FB]W=SP@)_K^-8 ,QPW NAV4A^X)1O'8PZ*R-BZXW (Y M/%)W[V\\=^H.)G72;05G#9:\DKRFH"=:O M[J\F^("6BY5@K%L)ZC16B3 9Y"8=EZ7)5;:V(\'K3)+WYC(;340+$&%O18=' M)X.9'8A0T;&8_8_.%6HWX)4B72*_J/-!83:I_<&DY80"B*OW*^DL^\A/F/X^ M+(E,5=;_H25+#VLV7O$C_BD_8EH$G&)0/6F""9JLJ,VJ!F>?(R8A_ +5P@'=H9P(\P( M/\*+<"2P)8SRM#O!%]^"7L4KDI"NN,S]&[^OK*+I/E-7 MCU7R4,9\DNY^]R;.8#QE.T-FD7Q!3AC'=6TVG2^ATXHP,K5A@B/Q\B-W5.O) MFI5/DV,74K(,K:E*CQE&B1\SY^VB" J%SO#J.8KDO<$L)4(-UX=4'B,2:.V+ M\2(=50'3)MU@^#JSYB8+[.D9PZ6@Y4CY$7=X:;,""\+#/)0K$SQ,X$DY@2* MTR_F=ITC7\)15B&R)0[BX5(B>MLT^"X.Q(I(;1*51$.S8#P&IP")Q[!O(NZ* MH!Q)WMN!J'8X%#H&6R**Y4DCXCTV,AMQ# M&SDY(HSRR*>C[=:ZKH@N")"8/$1(/MR%S(<*<8@ZBMR1_J_6BGW">#->4%SC M#<5/QP#Y$U^+R8 S[\F$M@RRHB4>R:@C&B9R MVF!W17J+R@?U6H?$C;K9T-MD-5DQ)#&,BC$G9LP%!N>*<[_*+TL>"9JAK6.7 MQ>2<@(V =Q/-A?2CNUCY*$:HF.*"Z/(\W$Z_8 [A?4<.7T\Z$]VT1D#7&$(9RP:_YG-7VE[&;\X/D^ M*-!.LLN*X&S//:QH#( L0CUH53C>1*>:A!E&WPV_1H M?8^Q9$SU:HF3<7&,@:Y:&O&U55$F?(%O=%R.NGW66CDJ'<VB1QK<>XL1PWX38[2NEO@ORU;)&O-R(3 MM4'+\:\W,/F;2R$LTV/0:+^Q?]48!=7XH, M\_T@:!]C%7= -4QFP+*+S;[<7XVL'-_(R#$]8ZTA;PO?7:>PF$'G##HRSG!+ M%1@;,ITU?>E5>A7U@7[.U"VR=E(HK/'$X?%A?4.+UH?&;LBDSU.)KNA"&,PQ-GC\DT-?C1P9V\@YLG%\;]J;TBA=R\R&',=5JC7N MA4 QZ4GPS"U%HPYA>R0H4\L 3B+@$W4%0*HI%*VZ2@O3(1BLTE8VD/.>)WG_US*GV)AW0TB$=<[)2?@D0E5,+%NO#;JE3W[AS L#+,E MJN$P^W0H'7,!74OLN3C'3(L#E(A^N]?IQT3EFIE,9]B\R@0]1_ ELQUX@5'[ M8U2^'6R1-F;2^7/#U'[B&?J=JB5O(L_V+54BT3@6CB1F+7+QO2$&7Y91;_X;RD=7E5?[?_J.#0,G^E!3;VK$1 MK[$V*T%"9F5Y#@&+ZS;BA9!4TAF@)L-TH9R M.%-1C3,.-3E?SI)SYJPX:\Z',^> MA)7K4T59KMJ*/EJ^A$Q#;T-#S][.C#/E#,4(1WP,[JP[]\Z,\^A<9IYJ1XYQ M2]?$=59+O@J-KDDM6P4"//_.MC/C3#O[SJ/.0Y,[/\_8<_0,/5O/E#-64HRD M?JQ9\6PZVX%WY6:8G*A4HY[F%Z:TE4S8PARXM% ):K\&M%,C\##([P(8E MCE?M+,Z>5-3%/^?.UH[_#-X"T/VS%Q) 1\ZZ\P%]/PO0#+1/9C+3M."S2G)9 M&L]P'5\6L@$Y)5PV,L&-D10-;CDIRO=PSR'414_]1"Z:WA$;V1]>1 M,D&Q+AAR1Q5Z)L6'9D.K)3WF7^=S _YS?]'"DL5B+"R7/XBL(B\FG%1'P2EH MI%'Q 9G:?#27H_63?"&<2S 3DH)"4NHW:J5M./G9&OT)!5WDVEE'#09=1A=/ MUY5P/=,((H=V;5=%%S;(=/A-&][?HNNE* ?)+ M?T+,DB-]UL5I#V-H0]X":V]0T*>1#FN7"GU#W]PU6I^EE$UCTR0R-ZU-9X,8 MES5]J?2D%Q,'Z)@]+#D,R &LQ=)Y[AXF1W.F7M<[+4^#5_0TI\2!G1T<%(ZB MH[P*H)?>$[.^1D>H/$BG+4I$%0,5I2 L MR7Q!]9;XFD6HNTO##:R'VH-%DA M8DD2,G*HTS)98:R@\2E*#^ZQ](P]ITH]1O@"/1YA_1:'U34%M1B,R+I'LC)$ MF6]8U/MT\'MSXL7:*N,1I_6O T=@^NA4;2F;D^6C3JB@[G\2L^7.TS,>BZ:% M:*6KTE>C:<@8]:G$C*R\FV$OB-;ZRK"TZW&4';&;V#C=J5BWWC0V_<(J/2PU M&WM.P]1--0\ (=^KF:QWA'L$3" UD"7Q3#WEW^YR*\<>(1U Q)\A6/"=!6EG+JESYS)TSE>(<13C_6/GGHW M845H6>O+3_-->,,4N8$U&6<72J=LM<)F31-2[JR+;,I&5F9('V74W61B#2'W 
M&N*[A\@?TE9+I[M4&TM6=]:&K3;5'I4?ETHO-$O=U68(4I9:"\E5#Y\7^5PD M64\VVU8O/1PUGW*3Q4;94Q'R4@/6ZYH%G;R,>SP:L*JL"JL9;GP %&K7V/7A M=<%=UV!3I+))AZ&=-'>]_7P\:Y\W%W&-R"8??"V5OM?O]<."IS!2+B\B-Q>X M<)&C3HB2I=#T9O89D70!A)9A2G"XMV/:'C=VW'AQ7&0QT5IYCBZ^XTI#N)+:T?MG*^K/F M)I$2T,;6M.L' PU6A,?KC^BT9BO^]?RJ[6='K'%/'Y> V-EG;7-;A^VRHPS?/1N4&INWB[EZ:=A'=DIU[1/0)YOJX8!C=[I4U#=H)B,:F:@/!+;:/?6HA M4D'VM,F#P1^#K >&V=BNE':#/=7M7B.8QX;6AIZNT:=M]Z!@I/:H;6R+VBL9 M2I82\EX&DUVH-7/6GB*1_3=#I3X._*)/GG9+-NTGM?)$]F7X.11B-B!8EFUE M5ROC:Y!B+\Z#*"PHLMM:,G].MKW0ADF$[X87<+&R$$@T$J"BKV7K9V>&67%' M&!?F;V=A8!BO@K%Z801W%V9PXW@(-\#]A0$?_W88YG![K %WQ+UP$]S[-J-; M<;.=G#7<&2K=8=5M^!:'G63S;<<#7-+*[AVA\)_W-H<&4=&V.0\?-_[-0$UA@MP3.8U8I/:(J>)R:Z'E M\7BD*8Z'1L9X.!X:2)W"@$EBG&S.(75'W;89UAWQ6-U9M]6==:@FF9Y35W%$ MW31+V!VQ*-JNB$D4 MWSU^/*B/*UZA[['"S]4\V2>2M&V=A!ZG!H8\!'N"CXZKIMX\Q:E(EUF LZUGQ*CNJ82^OS1!AE$W">T#K38O_>?2OPO6+[K4['W@J&1"#OS/U! MH%15GVOZ)KK,?H#=8.2M^' /"]GZNUIDQQ_/32]Y9,-W]WV5>=^;V1+&?5VT M;=7#R7(KV^9>D0!L[=8 7+%5O M4O=RQC*<-8_B%#N;5=FR1\O"L#^K*H:O;:[1*_4*A$^O_?7]5F5O*^.VN(V$ M+^'D-FVR1S;A1S@4'N#I-U9+H[T.5V2T$__S81=,^S?BC0W"PC18;@*&>>&, M(!HMAI_A2*$R8+8J'CF+00U;"X(BIX$T4SPM'%<[GJWE+U MYG.^")XN]%T D56?/C'8)\VJ9*BM"NT6N1[9ZSI+KI8AM1^_@8D 1^OV OCG MJ+<4K$03_Q$CVSCL[8:!*12C4*?\+%$-TE2]B!4ZSI*)5^!H0M"PJWN^/7.F MF V7#PO ,%Q'SH%XY!A(- 2^?N0D^6VUQRZN1H?;(<88Y.;UUUV\&6B95\0A MX7C1'AL&'<%UT:H29*.).J4GAWKCU^C6*(OF)=5QT1HT$/I):[$ &FIDJ\QI MWIYT6@:!X,)97(9#_W>3CC=JE=3LON'0#)T+1S;<+ZD%"$M"H\BJI7Y M2TH3R3;\E^YHPK).K!-ELY&-YKM*SDV:UTL=S'^L@+B3AG$8PO'\Y63R:-)/ M9V<:4T?-7'_4X!,0Y7%+5(PY!>.84R^J262^RGJQO09SFKIL3G\"H"Z:"Z+T6[GVGEW_E$OSX!-P/'8W:,*\FL< M^>2*F0WW!:%J-Q3M;JN:JZTS.#HM$?UJTQJ4=Y;WHY296FS5&"Q;E+67E6%K M%%.WMCL-Z)U9@&Z@LR@%^H!CUT0S-Z#0 Y@?SSB:).=K:J].VH/(ZRK2&'JB M)J\0:?G*F$S)DM'Z44+-A;R'T>QK[493O&WT&ZVBI^@7&Z32@VQIZ273T;Z% M:0QI?3['W>=&]0%TF:TUS>? =M$Q(]M,;>-2B7K9<)'.6-4VE5X_>V#YP>QY MC,90\54/^FASH6OH"6N&[K!=Z8DTE?[]$56D*9F\]L$BT,ESM&D&F:/S)8<9 M+BYN>;O"FX)(5HR,XZ;3.&]ZC!.G\Z/P2IL.IP/,($J+0XZH01)14_RHG270 MY&A595(SF]6!#J C3@2Z@)Z@'^J+TX(^%04C.VQD9(,#K'AX2G[8I+ <^(Q- MT0XYZOE$>W_82Y3/1@00N3F%76M%UVJZ^GFPM):_46UY6SZ7P^6P^ELNJ\^S M85GJ*KVOST^HWFU^IN0MCFI?FI[GO MLWW3HE9D9@,@=^:D50B++;VEE#FV#I &1X(6CJ57 MZ0];4(9O,*>1T:-]_CF]FQ'(]GJ4',>.\Z.=>2!3BW-GF\Y&P%;.,H:PZT[1 MHZVR+=,KFV8Z\)325"J5_J,Q471OP\ZP,U%D36 #U<%DCW;B4\HY?8IZQFZH M)^H;^Z+.L2/H[PR$W=ZYC%7(:SXR#^MR8#3Y#PTQQ)8C+4P#+H1TAIT0Y=3M MBUCT)";',U.]$VY^C9V>%44ZKS4P7EDI&8_@2:;?%,+#=I<:>5Q4Y=% M<6FTMP#AU-WL1.5T#7WXCY[.XN!#8LQ$Y.NE[+Y3 '-ON>SP2],:O,$IT%*R],;,"O-7&T2CT"@0#PAD>_!NFJPFF\#Q M2-*8H]+GE-W+ZFP$*AHV)SL<^W-?>(OMOC&K/(QI%?H1A-$%Q0NF]6I<$;$, M7_OLL4_="*JR*^WN]AXGN^WQ*)HMWM%L%MTI-5$50S(=KU8#P["\'*JL;X@2 M1B="#L_I?GV<.!]Q/!Q?[ZA*]7Z]-\;:._=.(UOOWKOUGKU_[]L[]4Z^A^_E M.\Z')Y,D6 E%W2 H(4) $& #]($]2K@5CP%XO]Y01LB]08#B;$73CGGVT" 3 MP-\K3&'I\8L9FGO7.64N$8ZCY"@"CT_5WH;J_T6KUAO M=;0*R=K^>I5@&E.5*4[L4PGOGZ ?2HA7,K]+YK0.4OS>?+S_BYF9,([OZ:QS M9#,-\+LP@X;#9\KX2ET-P7>VZ-7AV:VTQ!])9VN]4Z"C" ./#>^R-?P./\!O M/)8.<0[%7ST)8Y!+^N8WC[EJM9!4+;NO:!?SK3D(1%+W2 MBLL)--I\0 CBTC0\\\4)'T*)[(5ZXA3+KHHM1Y(&*1\4C4:C"<*>X(@ MY4R54E_K1#J1$/SU?L0W/0\B<>+(CR+.RE1V[(CK1$ 8'RVN\"@;*$3RPH#Y M8K[(9H<]\C<"W""J.^7+?$;*'Z#FCRB/#LYI]HE/MH5P))>@$L^(*/#V+8.F M4A+PELX3;\/C\CL($\?<#2%W2]36].R GCP+FDVJ+V&\"D_&;TE3XW\3CSF+ M<2#8<]I%D?HC#Y_#WSF87GXS\M"X56.*\P)-K3,-],MI\*RR 3ZMF0![PM'_#5\]2\G-GW!IB>X"*YYC5RKF4M\H9X M:!/1$4F5SV>FIS2T>U*#SW:1-834U ML/*!L"20R.+9XHWTX@E,J>W^C-CN2=\TK_3S_# 2?.)P-ASS&,'7<9:UV^'? 
M#@%-TC(O@.KJ+[QR5[E(BJ3+LD(77T])&XTK@;URMQ"A>2U;/72?+E?CLMF0 M_&#G&[??:GDZN4SJ187QQA(($;J8[#Q_AU3M0.[5?K4OXC.K..Z'5#E*-GPH M1!)X](CIBL*+\9A\&9]$0<-@L*)Z^#IT7:6[@Z^XKRDE@NR3@'FQ"%'H_V(P M).X _Y$/V*7BM7KG*/8M?*+*I;LDA\^KA>OBU5/8CG&2OV5OVJV/S MXYO8EXU@LNL0"[]0(I XMX@G#[R%U+AFK H) NQ;0O*#(@A_E6R\L/UQ,JO< M]$W2#,"R#D# ;H1X7#X]J*'P!!?Y=P;T^0O](D+;YP.?4H8C*>52TMP[OG7( MO)G =RNW/,Z74C8]UWUN*8;L\N^/?'C,9_?<_6H%WAOVXOWPR-W[(N1]2P+> M5_]\.]O50()'^-CXD1 X=Y3)*D$\/*(_C/8R5 MO#*3VY_"+&\D*#*=[O0]WCA53O?!X?D+X6LJ*>6$/_,(OQ;^(U/[S"P=C<:I M-GGX'?XL_Y;N9PX\+7@C'I>(H3/OQ>,LSKQHM7W.HAV-'OSAB_@P/L-%VMOW MR3TVG.+)]#A^CW_CW_C:X3=7!..,%^:79N"G-W07M0B/.+V2^%?X8#\B72\* M22KC*PEM--+(>_89\8E/OF]]+WV/_\C <>S)5;+3CT%+I M&^F^J)_JC_JA_JAOZI/ZK?ZJO^H?O=DA2:(;'C5- MDM"=-2WY'C +>JWNQ=V)$-=#JL1/XB>Y_'4/LWT>'I_*C[ MVB<%.F!K?]O]/'_E6^]SVNZX,@*,R*+!LL!K\PLDO.@!=XJX_?#1CMP%9&*S MG=P]/0W^)3CA:_=BUMMH"T&_Q/DM?^AS.VI)FW474OPO_G%/'ZI-$#]]^/#[ M=^0].!*9$#G$:Z(/]5SW/C[*_^.C_-[]CI]'SHEU<\@3U:"&/ MW VOUE?]<#\VC\6[XKU((@_.3RN/O+)O]P>0HX@>'/R9OFHZPWN7!/.=H2-3 M1G(>(,=M;7*\&_QU>)[BL23P*[$O^0^@:&2BBD>F>'=DYN^_;OZ8/R."-$O? MAG''"+_:3I%R,J+GG/XPZ['3RAX[=Z':<3'^\ZX_Q@C[$\=ZSNP_'.LYIG[M M#P52DH#]Q8CZK_Y(O.+6\6I,R^_3;QCCA9#5PU@BZH-X@O\_QOG_S9W_GP!PSC( +.=- M:&@I!8M.D1@/Y,#,DI&1T@9R!4 H &0 $@!G !: !,A%\ (H :P B@![ #^ M_W!F$S4?A![-*U%I0:OD<]I!*ICR3&%LX.$JR6#TF (/U0FSS93K.:+:.9NL M5I@>NK^67@YPW+(#A._->_X78B0@H.R/O/3](@+6_HR \Y!]A'DN"<@M8@(B MC(Z 34 JVA/0">C.!*&ARK'>_3H@ZIE M.ZI)W(\@@$>C9%:^@Y[L-IQK^+(IU\PAJ\%%*N^Y/YY$N:4[H \'NA>0P -: MC * &2,^(%YB#_@'C(7U 0F!@< _X.LD$<@(] /*_BA/ZQ,:"1=#$JBZ:SO$ M77047QKGWSWB"\C]ZJ$MF/IYAR#W40=)?L0ANAHU5^H9;Q$DBC0#[*$*1 6& M[5B!I\ T$3BC^\4$A +BD\!-BZ0?$],K9H9.NZC1[Y)/2XVIQ"\D0M0&Q /Z MD+2 ! L?5U["U.>_R$NHNT!^>HEGX ./">8$1 'B7*2!2#QJH!1P.-8,U$MT M ^8DQ0O"P3G(F$H&1"(4B9\.WD MB^1F (R7PVU- Z,*<5EU-XH6SP?_7*A-\8+2&J)9JHI@Q$"GAR#,3A*DHG,= M4<(H>AV<"]VDS%*QJH6PF70PA2^^PQR(^-,+W*=%V_!1NQ6]1EX%2!=74#7)%.?^H\W$K)P=H[]"H"#P+H$(= 2:!0$2= BQH%JP$5@6; L. M_\Z"P:6QH"%P$/@6S$AP_?J =!T[2^U^/CB#&70 ;%^#H0<#-7!$T,^('Y*P9] MJ]@0;+L\1&@P%)00*PW2) QBSPD-FR]H$"%;&EO)YW04?CIHA9!"-ECI$AUA M3\I($C:U7NLH((+W*_G01'Y]+(DVWI9O.)@LDD($ERIU-AR&&&SHZ@">X//( MKX1]BA:E \&H)+36T]6YG]8GNCK!(+_BZ,+,VE1A6N( )8.Z@*4%!(7WL^R! 
M\+ 7";X\TDC"P.0804G4@]I])[DOGC!0$_'2JV'XX9)B*B(FCQ_I"M$$YP>2Q[S$A1$14 M;'PE%<*3#_2+JK?8\NT]/78;I[LZ%V-KT7;]&$XL$/4($24(LH9102X@C'"!5 M":F$4\(LX9*P2T2)F( 4QM 2#CV&WIDP,8@F].:D"=^$WAR^8&G"_/!VH '< M$KB7R0Z'8T@HR*"AR"@C>(N(6T>#2$!*M!1",OWI:]* J- M<) ^_Z_=!!+E/31'TEPM/Q):$ZY"X:100K@I!!3*"#.%!3%+8?:B2V0>K/?\ MARR$\"T*88F,4SCK817.F?)_>ZQ&4*A0R'0I/'I45%XI.)?LA1\E*%&5T6_@ ML*)Z_SPOGM.#TW>)H$[,OS)D@!@M+UP$M$!I6+MP_P%=UP\IL X#@6/0?, HZB$ M@_B$0IPKEE'(N13"0YLQ")&%XI9Y4^U$&5B7.5G1<418 !YPD@C+3TCHN!+^ MS=1%GBU63[VH3/1H>A::KR@BSC;MA8J(3\CF0N(B>U823 D8>MA_H)^D% $[VASKA14V(B08K0UA#\\E'0I?C6QA.WA] (BV/Q M@J)9"J027:DPS>>DF6[$A[ 7;L/#2]SP5MCKJQ5BOPXO_L&*2MX00T@Q]#FP M!Y>%C4+ 82%"$/,GI$((8GX[V0O"(8]F97AX81P6(A!BX4(*X=;$(/8NPM_Y M2AQ;$R8FQ!M%/LD*_H3^GMD0W7)30BV8\[3]5H5L%5FCOZAL2 MQ"R'Z$&ST=HP"&'G>Q0V 3V'U25\82$"KP3UR/^ISNJ&0['484&,9R@AS!7N M#;==3@_;H<=0LE=#FL+%UZ9$!,/!86#/9RCY(QWN#K]_0;Y088D05;@]U!!V M"$6''L+>7KBA&/.G$0M#$-^7!&59G;7?LO@6C""T30 MVA@GV4,&XJX#@1C@"UB$'$8E,IL:@(FHG.4]O&,YP 96]L-(GGY0.W2/\.\Y MD3I;DKQ^RH3PN.0XC!ZRH/P[=ADJ'!JE+(,; ARNP@"$ZD']RN<0,=$^I ]> MYRI[9\(13G3.=2C0BT08Q3A%+!5B M#FV'90K8H2)BV'%15:6QCQB[A &B-^"U<2LJ2_X1D1C'@Y)",2 M#[,7S"+*81FQC;A&G"/N"8=M?,,Z8E3/=HB>PUYT#&%YO"#_X>Z0$J@KC"(6 M$IN(-B5$HO9OB;BV8B0^C_2(W[@C(E1(D(@^_"%N_A2):[V:GUTB?RB8<"'6 M][B%U FQ@^NP6%B703S- W=^WZ-FF^'P^D$I9.B1$9L3@4/A8:BP]#(26%NH MXVX =P70"UJCX%?K@QKZ$J.&P+TJS"GQ"J',&2:6!XV)\L!CHC(QF:BM$!XZ M$^F!T,0M5S31F%B_FB8N$V-)R$1LXC6Q"E- ++]I$YF)W\1PHC!QFR@.E":: M$[.)XT1P8CI1G/BLL":J$VT[[T1OXCIQG=@!0R?"$]F)\L1[XCS1ME--/">2 M$[V)UX^D82\1F/A 9!J2)! ((X'3Q/RN"8'/05L%$ZEPV4/B%7<64A"KP7J0Q!"L*=^Z)HIT;8D O$#3?,7_5QS:!ER(B7A Q62@\/'S- M@MIW2\2GHJA@3@B >I$TOCZ$^+-PX=/#U!<^[!M^#R6%N$,DCA3Q&X$X3 Z& M#A6+B4/JH8UIL9CXHBPF%@>(<$/)(F01>V%"M"P^%C&+TL/+HB51IZA(!"U^ M#D6+I<718GPMM)"C':DFM1M8A8_"S2%AV+ 2C8HFVQM8A:G"UR M F&%O$59X;SM/31<=$3\S1J+Q<5W7 _1"+%<%.0T@IB+E45)X681*-9#_!V& M]IR+L[TRBL]0%Q@ LRT)B$A#"K&#TL<0 ZA:3!"V#Z>*P9WWH.X0FM?%$[\H M^&")74-*%ROQO9C@&Q$JSX8I"<7G21"*1S%T0YK($LF'AT7NX?5#D@@LO'XH M#[>&!\;D#F/1BNC%4QP.#D$3LK?+(EV-'G'^^99\%,4GT\-TTH41&#*,N ') M$74PTT,M!V$1AM-@+/]D]JJ%FS+-'HI1$J$\,QS2=*Q+DJ6V3(D1S1(ZK()1 M&+,=)D7NU8J1ST,O ATQKDR')Q_KQ8]QKH,:%#)BE_B'"<,3DTPB[?%CS"X$ M$ DF3<88(Y[MQA@>T4E@.ZB,530@$7UL>FA-&0;Q/9J,9C9!');1A.B1FS$F M&9L46,9JX9!BS,A65&^E&:6,I#HNHX*P"<-F'! V*^2,X13'716FR7CM4<"Q M&<.,!IDTHU[O_\9G'#3&DIJ,6)0%R8X1A:1H-#,J".E6;,8\8[;*T7CMN5L) M&A,]ED8YH[[0 +=HW#,B8 2-7XP3(:*1T6B6H33^>$J-H$:F'A%/TZC?.'P< M&NM-WP@9CJDQULAA8C4Z,F2-H$:SHJAHJDQG.(P'#7J M-XJ-]:99DL0PV4A5W#42"*N):$3V%_I.@G@SM-Y1&[&'-:5A(6,KVWAMK'-Q M]"2("Y/\XOI M_8IX +UE8;X1)<%>/!7>$^>-]$: (PZ1IQ?50^*D$E5& D?\7<+1^)=*G+SCBF"2.!D>3C[MJX4@L_.BA%_F-'T>0WB6B=-5ANC=: M^TR.F$%X88E1WICXTOE%&+N'-2Y8HHWIHNA8+"C6)E*+K\73HFTQY[A:G"WR M'&N+4L2?HPVGQ1A9%"T*'2>+4ZE&#G#1YWA;!#HF'1.+(1JF(\ZQZ8A;3#H2 M'3.+1D>J(])Q^8$O7#J*%@V,\4/Q86 1)1%'A!"%$%%\.X>#4'XQ@_C>F5PP MM@J'7$.8X]@Q>\@S3-VA$1E;-,6+8XUK53)O1"0"'/.'W+J3XRP(X"AP)#B6 M%6U,'L<+86TQ\3@?XCC>')>%QT/&HL71W4AQ7#Q6'BF/R<+]%N+Q\KAYS#Q: M'CV/E$?(87 0\N@Z=#SJ%AF//4>0XY%(X$@])#P"#M>*3)PS(L#QOSAOI"1V M'G\]!::88WI0MTASU#DB\D*+)HF[X>0A\/(PL$>>-G$03HHJO?+AR9#[> M&->'PD**$;QH]@@\_#M6'X6%@,>68[)0YYARE#UB'XF(UD=2XO"1WHAY-.:= M'D&/CD6_GQ91[\B"^E]]'Z>%[\9K=ID: /'"/9)!I%7DP)R-*,)58<(:7P\/4GS!B6$C^*3LBO8@X1UP-;!.LE M)^R0\\'UD ]B54()T4FP)PY= <+0H9M)0QCR&DZ,@U82O;R81)X.LA= Q XF MQ2$_$U$ M'?.*U(J\(D[LBW@DE0&PDA7#_*"K$H':8_H2?1FU1*S,.-<)Z(U$A0HM#HG4@H?",2 MB1AN%,,D(O'Q$81^-!^2#V%,K$,7HAIIZX@NU"WZ=WPEVD+>4=CQ'>F.E$?V M$7JU0Q;20U']9':%XG,-44$32GD='W'Z!#^^.^D/O8:Y.;WAST#'U M"2E&KZR/9!H2^*A'K#E2#@585<@[HB/1?OA1U!XV%A.204?R(1]1)@E']!;. 
M)&U,4JDPXX$INN>D64D6Q&*2U F@Y!SI)OF.&T3("A**68T(1A!&)J Z^B]6 M9;J)8,A77B:R13AOA/K=4]Z*;(F(8Y]QT-C;>!B"\G1%?S*P))"P3#C8L2CJ M+P2'T$FR(R32V;ALC#9Z#X^3N$/*W5H2@%C84UL5 M.722_*:JA)'1\B>9;"D-"-TEK2Q;1]\#]">)-#OJXU:1S<),427/ Z*.F $4 M>D ]%D(IHANR6W%UO#OJ)V>+^\FDF!1Q5+1SG!O"%H..!DK$XI\0T(>@-#HF M*).##6E6X#2.0=1\NT2-B;FA77,H, M%1%#JT9?7QIIJN>30$[J#I&3;<;$9&72 T91+%WI!Q$[B">I7AXN9[BC5$\2 MB(B.,$K%I)I1+2DL)%+:&\.2%!4CY404JUI 6C4NBC%%!.*?620DHPY7!22,FM(QDBP+9[5\G(37'2C[0\5$\B M)-56JT@H2@+1"\E3\>)])B97I<3#%HH)/YCB:VG,"9DR=D*)D'?28KA2[DDOR&(EX;L DGI1Q&Q&DU$%T M*N$2G+7<.GA[[!EO@\Z521LZI%W20"87FPMNCW"T".%Q5BX\B(Y<,RCA-X)$"* M'*60J\?9X;>2=0BN[%A*I6J$\$8#R(#/P7B!M!@"!ZV, 3@&X81R9:FR;%EJ M(E"6S\61HX)R9LFR?%G2+%V6C,JA0W]R-HFSY%G:+&N6.$::FTG% N)823GHV\".1(+6=Q\YJM M3ZPI1L;H71;%L[4$FTU.(9IRWS_;TII2EK0B[ /5 *X:#R$_W9B/9]B&C"Z& M'>N6B<4 )9@R0^DI]%DZ+?^60,O 97+B)1FU%%Q"+0V7A4L#S(,R<7FX[%DZ M+FV6^;_%)<\RA>@VI%RV%ST?.5S.RX]H26G2VWH5P'5['-4UFV M\?)*%,N0$P%R&\8@1/G-=:1]#,)O81W#B)?T61$F?9:%:0AR8>')-7F#I#OB M#$-ZIQ:+)8WQN_@MD32V+CU@ ""*X71CY6@V:G,I/G*%J$M)9;=$37FEW%RB MK!9&'T=$6/:RQA2T?/J)\N2+$SU$WE,2V-5M#%^2+Q\1YBD.I-W!UK2*0+A! M*^*(;+;XY:PR:!DWK%\FP.Z7O2_Z)?[2>>FRM%^*+/^7;XRKH_^2?_FT'&#V M+_>70TL$9@%3@:F_!& >,/U<"TP'9@23@-G I&!:02IB#$R?I0%S@OG #&!F M,#>8%4P/H5]K@DC"'(D%2_H5ZDOY'ZUC;N@!HZTFGLF6T5)W*7@ISV!XA030F *_"8 M'$5Y#D30WZH+%>';V%T='EV4\^8_90_9K0RD'F&^V8@\PYIHZ0C(G&!&0J,E&$C$PWYB%3D%F%D&.V M,!>9:LP^YB1S5);(Y&/**#>9DDSI&R+S))?@^T2R"15#$+TV85!PM>4%E_+$+J)*69)T*<)#13;<4LND-2 M,A^9WLQMYB5339DKM&:.,\.9)CAP9B;3G*EX3&>>,]>9TLQ1ICH37RG/?&?B M,=F9]4PX9@)3GPG//&+:,PF9[DQ]9MGQ&_?@8&6ZUPR:$ Z$9D%3A#=,FIPP M*W%Z1(! 91DO,>D TTCV^F*3GC]?9;<"AIB<,#ZJB(9X1<0%H;,#DN>+V%%& M+6.Q5#/B M-/&-/$W'8K600 DJZ6D&*<=&0,V9IC?SJ&G.>XWQ)5.:)+$<);#2C&E*FQ&N M""V,^@WC8R72\4<.!!.&\.J#R4IHB._RFX,>E"6Q(YB5J4F5Q2Y3P'-*+%<6 M$,T8?:1WXU12P4=P=!DRO88;=YF0X?72G[GS _U1-0&:Y99K9!96+3O:C-!&SR] :5[@[(YM/OE#B,BR:.,RB;]\S(IF4SL]F (%!B M-AV;FLT2%S.SL:G8W%66-CE]AN0[F2A&PB%EN2H,WCIBURG5F'U% B M-XV;R<&\(SHSM/C<+&X6-SV99T>\I6:QMXD_6U!R-Y\._8KFE#I"HNF08L]$ MB(",/)#H8?6"#R;1Y= M 9.;R4W/YFF11/>@+'!"'VN,*4O#I8$3PR]FQ7%G#%',6.LD4H\1O8:*3QXE&;'0N M+'>QIYC,Y'YZ33TAGI7%B.BEHRIQT19!T%U,G(P>N( M.D^=5#!&CE$OB1+9\:M9%3>$'B_7Y+&R.$'4! [:&RY,=D(Z(5U S::PU&]$ MB&"(BS9@)]5+@N0@0WL(.W^=+"C?BZ22G,F?9':*,_F9]\QXYC?3 T;-C';R M-?.9V,YI9RFBF]G/='8&-+.=S\YRIK0SW*GM!'<&--^:U\YRI[J3W+G-Y&/2 M-J^0WLZM)!!SW.G"$W":'[N:7D52F;\)DZEKT6+.AJ87%*^2'Q_,4N3D@R@B MO%*;3CZ5Q4-SO F=NL$5,VV844J(Y\,S?/G)TS;^'7*;AD4T9>MHU;CQA$R" MPSR>VD4K)9S3S@B\5$C0$ L\PT@C'*APT/B6Y&VG<]%8R-Q&1;<28I %+0QB"*'K2#;^>?4+D)O9(0@B41'"^ M]O)(IIR"F&9&C>@3:\EDKT9E[4@)'-XR] /G^PQ9".6 U(F,0Q+1S*ES)##* M)B.+R,TL1ZXS6K2. %J,K?R+ 4MT8^)S:QAILR\R/A>?(\*W@VFA-)&OB)-$ MH49IF):&09^,!D$>W.BI!M.!*+_S9-/KYW9 1(1D?("&J$^\WS!R]=E1K&K. M!U6?J<^-9NSS];G1?,)=#&^?K$_8I^Y3]CFM$#Z.-'F?P$_:I_ 3WQG\A$?F M/HN?O\_A9_*3^"G\]$45 MA4J(6N8\.!('!)G'QA74R8OM M)ITI)+@$(=A0172RC ^M-/68@TPBX?)S &J@7&G>9=2>:#Y!GZ10"/F6C S(LE M"&-EI$F%'TO&I;D$"UA%$-&.R,D(Q?9S"-#]5 !->0"-5$@HA>YQAU98=/%@ MB^(1,421XK'0B^>N&A#V&8.4H+9OX13"WOF(#%.*^B"A6A\JZ#:-5H4*X([$15E@*C>P9+]5#GE"_S^DPOC%B'+UH M0A(SY)6Z+T1,HSK(DSA!#& M-;&.\D>I@\IB^1"_&PF,B=)^*(G#E\11(3:V1/HD4O2%+,"?I7DQY.2YY&_: M)C].W,KI1.RQ+^3&JX2J!P>2 ,:JA,_-8MCGO*3T.5D>@AS8%U<34Q@L+"(. 
M1'^?"2WLI0P#1]F";(*V%SNB3J_O3_N3WF-F0VFV# L6Y$))4+(E'#IP)$L" M3_A!2!>OA*P D<.:C%K>-H&%GDN^'@XRN,7$ >H8)ZB*W;&P!X]Q)]8?5)5I M*K^AKY"#8)7S0'CZQ'(B_%B)4%'<9X&I;D@5+49:1>=#5- ^ISK46Y&:*AE" MMD!@U\OAED\$4UCBT#Z=0[F)TD*MQ/P.K2$3E?0!+,82Y4V$*!MQ+CKEB1SV M"/V< ,3K(>50*UJ7\1VN$?FBOLZ\J&#T('J1!(PV01]_>M'$J&!T0DA19!WY M.B5(G8T=D(V+X6/CZBS%^5YE+96$YM&&'GK4QNGT2H\" M9*HS"WE# $;%"_(1SS%Y)?!"J=X M\ ;AR%?IU:4+PJ-5(P21A^DB-'!Z.//+[H7_8M:2"ND>T@E;7J%C]BB(=)91?VG\2?#L:S-<+ZD7E+VD)@4I#=.:EA: M^\I_C4$V'Z2PBUD^PAHEOE"950O9FYRTHTDG79.^(%^0!,HYJ9VTV<@G?3[J M23,F&!- WYP45Q>UX8^*Q(@87K-#*:&4]V,H34GLW9X?]3KO39[,QM7$Y//Y M7ZPF%Y3TP[@CY8!I^0C,*@X(?J"W18+HJFEV^]9\ZBJ@-Y: E->E%F7&ZZVT M2FEBKE((D=DQA64*:ZT5YHK]97J M2D\7<2#)!W$+)O"JV#>\@I!;/CE@51>-@Z:J.*^I0!YR%IIBU1_&@_-R05$L M(KL_5 M>NZEEIU83\WB@:4LH:E->]H]3WIN8^H7"G&9Y/A"63/^7F!/A6&2FW!U M;590M+>4*;YTUO-3PF0L>ZAUQ$^\DFYO4]0N9=?,.H(BP*$'UH@EA1%UV%\1 M@$(6;CO"Q1[G>440R4D1/K)(#*!Y#/>P H* M>CX:ZJN):4PE7138,[L!',HI!2?FQW%T7R87P;]@N98=EE/$W=G(-K&YTH_5 MN (QW1M>6.4+^:&.:CT]Y! 0IM/9&<_'6@/!?)S='!Q(626W)8FMQ(G6HE%< MS O6(?BZ3B,7>5CT9EJB1*%TZL.7IE$%^:A071L*HN7^8<+DQA/^_D.O544 MP?B#(SF3')BT2ZIR*LDAI-:GZ=/W:9B4?>IZF9_&3^&G]#!N1O>F@F%,H]^$ M3A-)N#0SB4>H$*>8:Z?QG*X1MAP2447&:T1].?Q1$"V2S[CM4"+FZ3;0N7KQ M9"RH#9CB1NK&&:&6 M8K"EC8J'W++*A3I#E:$RJQYRGAA-A?.![K6(0;Y![(H/&3D7:A 58:KD<:&V M=DZGGA8DZB^F^'*TTAT%M#R3/$:,8.\E0A%(^BO",3\2$:V0G'_C)!0.*ZG5 M;E2#FHLOZ@,KC,H)"3B(4=]CW1M#&QV.9,B1V!MJ-KP. PG:E27#+'.T0)\> M8 YS:M0P24=(_OCCV$"\S,8W_--4%_XF#S7YB-A!\0QBD8DO458)=L>"()'B M(9I7/ C+J3#%O32SZ$YA17(N@[01AU(J"!:"$Q=))Q):<,JWHJU#Z^!!W&.Z MW"!F+[2_G=!TE9ICRWP\417,HOD MAE@>NI43EE0B8<;OF%]88DX6U(N)3LHJK,:PZ&QHAG J4)!]Q[]2:G^Q,GIJSFZ=F2Z0X!X[RB(",_+)S@\[, M:V(F.%6;QTU5I[H8W*G>/((?ZRYI4F7IEOKQ69U@EC!SKHP]9K>APN.DZ):K7:T M5DT4J=79QVHUMLJ<>:T&06:KAXC#!FO5GUHW<8) IT@" 2QA<]"B:3W,*&4 M)$YQ9B[-A\Q#YN%3;:[F:W0FS]6J4>,/"(5\U5SF4YIY=5_\4;%3.X=P9P$L>8_Y2[5U1"B!U I*EQ8Y0J4:@W@.OJ*LV6H,P]J@>-T= M/C$M+@&[UO3F#V0/*GL,PJ83^U(@#G=FM=3-< 1Y),YJTPE'WP-K_/5=K."Y M *<5U$*GQMMP?:F(>'L^2["BA,)?AQMIY8G8D^-8Z)R%@-;Q80&,MT-HO>T< M(T\^QTF87YXUSVIG%15((V!% M,KQ6MM96H9Y)^I1KM;#N6A67NM9$!V#DU@KC0?7X6&2M+AVYUA*GU7H615$2 M1%>#;B1XD9UUS4KEE+9:6PN%U]9H*[9UVPHHG+9Z6[.M9T-MJ[@U.X'II!=^ M).P*#A4^*(&/7*'<.CO :QH1#C,01!VD5F;BL )-AZAV4#877_!MWPJXD GY M6QU84DH7E]=ATNI)/9PFBP9L_LLTI9+*_ITX_J:2, <)=ZLZ95M%?4;?XI/XPB@TOZ:/A6%+KK&68V61&4-DKPV@Q'_K[,?#\)*& M))BG[E/ MQ^Q%,XI.&: #&EH%NQLY*_)>TZF.0[BCY1K\BO"@A=!C@3*T:6]6\I[4ZU](#>0\1<$57/^E5!!10:G#"#%I=? T+32"X-.DA 7HOP47*Q>% M&>$\J-* 6WUK.F.U L/DMHZOKSUH5[!K^0IV=9ZR7C^>T-?7:TUM^JIA0Z"8 MQ$A+L"U/R?PL,R3*<^;%:6H"*$[R5A-"+*'F>0IEF7X.'$N/I2*B>$>((]^0 M;P"H!AR15?RU?$-_=;_.7^VOVYT11 -B'6$="0) Y@Z6[(>X70_/_BI_++>"O"U@'+ -V @MOJ^+(!'5[90IQA8W*!!B %>.@ M4V$215 9!NC!VP%^0CM,+,@:Y)3\A&O5^<0MPCN5:8*NK:U.16Y+!ENXDL$J MJO)L"KB04 +.*;&#I<'F)H*N-5A$6P\6"/N#S>#IV4Q".-B*4!)6BW&$S<'R M8&NP1-@FK!)6!PN%W>,,80]OE"EA637E/!<#2SNJBGY;7=CU27EF3Q>&S3QDNHQ=',\S3N[-NY5M.@UT5 M4 =EJJ78%G523L-!";^:(Q1 8;"I4Q&TZW66F+ONCR);M:VT3D4([RI*V;LJ M7O6M@-?>!.(U\-H^-5\L8Y6QS%C;D3$VWSJ,M;UB3 44AM-A*MTJ_JT(C1V3;N:7= "H^9(Z!IW"4\73FC\L&PN'CP M@!XI9KWIJY'5)519PZR)4S-K)=-^[$T( 0N0+$S+LEF8X]Q M]&I"B2J=[$]6* N4S58598>R)2'>&X;-'T>"8F@IWYZR3MD=G":KZ6"F, =A M+PT>7QI81!! 
7$'L$<1B+F^ &B_>Z1_6$*N!.!KE80NQ9UE"K%H6#XN6=>#= M80$R;=FU+%Q6+ON6)7D$8NZPO EVUP9"&PK(^S_(4GH3=)-5UYS/EW>ON"S9 M8F->M)/PB&>+5%@()*60>N0B1ZL/E\P# Y$/2\9PK_IA-XN8!/Y*?YB-!4[4 M*J6M>,Y!A !F]@(J)-YH$MEJZ2%(Q,+L<#?A8FZIH[RG ,A!S'YK"48CA:+T M@?:=F0>X#EV@7+%RP#9 !/8"9(!M@VW@#$ 8< UT&YP$2P$C $4@N# 1R BH M ,X*( (+@<"!P* 9 #>X!,0-V08RP'&6V 1X IX!=X"9(&#PJQ!*1>T( D@ M#XP = 4@$G #9 "< .@ "(#> Z0 J 5R4&0 %, : '*0 Y J@0. 7< I$ M!LH *0 S I@VP 'R,^B (X-Z($4P+)$')(/ 0'(#8P

.@ "T-PS6 H@",">]= &#RRT'MKRPEZK/CL%6 .H M > +$H(V0 H@:*$ J,]* =( '-H5K836/PL5X +8,^ +9( 4@&G"P" J,!7D M 5 \X(?0@K@#&"?=0.D >0 0EJ*4\@A\G*D3=*. 3JT* T0( 6!7!A8@&D M +0"* "8 ,DA!? D""IL!6@)&0&;0%Y /.M7F-^! .8"\(;SK'Z6:@ 'T!,H M:*<$]=D8P'S639O>& L\:., UK%0($V0HN@90.D !H4-5H40!+ O( "X .D M - 2-JDP)560#L%<-)":?%!@()=XH,6#A"F-26D (8P]=GU;'OV/2L9$ SL M:?,",R,C+0@ 93"BG=$:"Z('1@#W;(&@1$NJC<\N:5>U'MI9;7M@6C '^-(> M:,T .@ %K0<*=C=@>-"V 5 /MK1Q'P6+@"K]= & 6:T98+Z+)S@4?NKW=(> M%U( @%I,PAP@/="K50.\9Z,&\EDRP'ZV2' DP-(^!6:I>TR%H40($64'ML: ^( =@ Y04!;4_ 4"N@/2ZT M !2TTQ>Z0$W 0;N?_0IP:[VT:8 #+92V#( "" /@:X\+^]H/+92623L^@!-P M 5( +@!A[91@@4&:<-#N'XZUR=H;P+*6-8"K;=5:MF"U[MGMP!K A&!>:-D6 M!:BUP0, +:3V6INM==A^!5( WEK?0?#@,'"A9=<&#RH"U-IZK8FV5.NE9=*F M 5P 90 7@)>68FNQC=C2:NL ,EH/K;H64!M%H-E2;&^VD-J<+<)68!%R^+,< M:7FV#=MM[<^68DNT%=<>;>6SD%JE;:U6/LND-0,H!F2T%%L8 ,[V#;"DW3^P M 5 [-DG@1$ 96MO(-.V;&, $%L4 ',AB@"@S1$8 ?ZV[%FA *N6/("O%=M" M:J^V65L4P-968[NEO=D29_6UB@%L;=J66SNV?0F,!(P%1UJB+<6I/ONVQ=?* M;>&S=("0K6YO9!LKP"L<:9^U3%JTK;:V5PNI31)@:678L6V E4!$($%]H#K>AVF% DZ-6N M 12W3@$T -&6:HN^I=7";INV%=NT +[V+["MI27T:I&W_ML40"90/!C!\-/B M;X>V[%E(K8$ 0!NQ?=W>:YFT/ 'VK(!6>$NMG0A@$C $RML@K037#0"P-2^P M:X6V^EL-[K>6@HL"F")@;P6T%MSX;,CV,R"B?=:^;/>SSEO8+;Z6ZW5UKW+,C@7ANQC1H( M-I&;;VTXMJJ;9;64$ &R.(*:(^W 5QJ[4D@#! '@!1T M;M6U^MD7[6C > NN9=3^#@ #Z]OW;!'W1]NKQ=2.)NX*#%N+[0TWADL'^-C. M<%^UZMD<[KV6GU:?M>*6<7^V<5M#046!;JNY%>)6;K<#AH,HPD1 A(L"B"PP M:B>X%]O0;<]V6VN\Y=M.;P.XI(7FP@[@5/L2V#>4"8ZT.=OR L4V@XNOY>!" M:A4*05H!K8&@A*O+C>"N<*FU/%MV[>X67[NXG0A4;JT'Y04F;3&W<;N?G0A\ M!\@ LMMA+:G@E8L", *D (ZUP@:^K8?V#M"K%==*!JZWXP5(+KO63"-!Z:V, I@1/;:P@4)F:14#%05K ;66#8#.I0K( 9*Y^]F!+;66:.N>U1.< M!S )IUIWEZWE0>NA7<^V![(&KEH1K=BV@CM-V-(NV+MU9[L!6!%"YY=OJ M=+>T#-V#+:;V,X!R.-** 1RZ$%TL 86@1 # ]>)*;4VZS046KD17I2O*'=W^ M;.^WV@%#029W8HO#+=$:;;,(\%J^+9/6Y>5V*[4BWG,NOW=:B MU$=N_+ER745L@$-V6!GRUOEL%0(HVB2O7'>)> M#PR['EH K;=67;O;I=BV <@#U (.;7!W@.NP->!2:],# ES-;OAVK5N]%>8" M=3>V.%S;;AG 2[O4%9\&<@6XWMI+@D( N*NQG2+,<.$-]5DJP#57?DNA2-16 M!I*WP=T$;L3V>7"E]=!J:9FT?-ML+9' 2 8P.A&;$&YQEWL@;F6MZM0 -HV M<4^Y/0,C ;H 2KO8+>AB:=$ B@&EKINV#U3N.-(Z:?4 [UL+;?!6B!M9$.X& M<(.T5%NC0!0!9UO=U=A&!EZZ"MRE+CN")'"DU0DD:V< C%OY;EG77FNK7>N: M2. M;TFU<@!GP9$ DR"[=0C49YT VES%+7=WPIOE(/+^W[UE,KII7C1O:7?*: M:-&Z8=LG[TB772OEC>N6")"X5EO2PI:6@\NQ=?#:=Q&Z-@)H;@H -4(28.!, M"8ZT>EIB@YFV.XNF!39D3=BTF%K,B]!@57O/S>?* %( L0(1K36W!1"M7><> M"&8$S%OZ+ J@-,"?+=#^<]L#,UV!;72778M2,- V"32TS5M?+X!60%O8'=7" M%PZV&0=CP;*$FGL2^ NP9RNZK%SB15Z HXL"> *(>-T 8-P#KXF7I0L52/'" M=!FZ" +XK=.V5QNQ+?8&=Q&[0MS,;B5WMRN@)=KJ=DFU&H;W[;L6>4O>7=X: M:B&U.H%IP6%@CENW10N, 22\=8&U15T EALB8-X*:'N]@-I?[U? 
2YOE;=U" M:BFW3%J*0'TW2XNO-?>V:T\"Z=[G;AF@L1O07>^V>1.U18 PP,172VO710$@ M?&>]JP7Q[0&WB;NS/?&V= VUF%H-Y, 7=/L&B/?*:#&TK5MOK6N!.Z# Q>^J M<"VV3-K2@&C@F:OQA=M6:\6UYENU[1R N\NL_?#&;]FZM=X)08:7/:L F.4: M" "^((&%[9%V5VNJ[9/59\FY<("H >D64GO29=*>?!6^O5JK;QM@7/#9;>%N M?$^\CMWR F1WA^N_C2(<=_$ 2%P!+0J76GL&L WT#%*\8=]I0A]W?O=Z:5 P M;$^ZD%J);L&VJ[N[_>KB<-6W'MKV+9860;"]C=@F >2SV8_Z+.7VG4NKK?>F!VBZ75Q3+:;V4U#/E=]N M;ZVV5][,K?!69U"\%=>B"/*XUH._ ,.WDWL1..8R M<\^\85W5KQGW\3LNF/J&;.^W5 #][>!7267X3=WV:L$M]5F1[X,775#O5?<2 M=U$ $-TE[KRWB9N[50D8=G>_/UZS;X]7X1N@%>;"<^6S$5MW;A?7?TL5D/X" M:-6^U=\9P?*7C)O1O0O4?BVVWMJ^+T+ X.OGG=L:>#VY;-NZKZ[!MIORQ>"F M;S.W[=MPKX$@9+O !0&\6HB\I-Y.+3YW!H#J;?6:!%JZHP'=[X=V1HO>??S* M:-VS&>"I;T;7]^"@;=DV : BUKCKY,@8WL1,-56<(4-0UP*0;UW2\N?Q0 [ M:37 [%D,+0J7:*L&8->2 2BTUUOWK18AKCL&<.9&!H*T]UL-PUG (]"YS>OZ M<0EL#-ND+[=6=CMVJ,^&?$>^+& 1+7T7]0NIU0(["8C .MYTK26723L1>.]" M=^VU0=J([?D7>,O^=0(C=_>]1E]"KNRVU9O]Y>&Z:5<3J%N:+Y:6(U :^/[V M*<*_^-KVP-IW^OL7&/D:"L@#,MHS;NL7,/ \,.:&;>NX%F#(KZM7IIL&P-): M:,< U%ND[?TVL>M5B!SP?)V\\]P%1B*X0( "%@(K;T.Z]=DB\)7W"/R^-06S M9\D )]S\+1/X0%LBP->Z@0NTXUY\[22W>>L&, /@]]W;BT8A+O<#04# M?!M4Z9,CK1&8%&RAI=@N@=FS.MU);G/!5WL,?O&*!V<5\(:B@1&@U(O/I0&4 M@$6T3 !+[37/KZ:L7 D-R*K3'WE(L!9C+D M;\VUSERA;\@V8BNJ7=K&RA900/ M>]F_)M\A[H/W$CP'R 0K;K.Z>V!?K>)7$1P'$.;Z;+NU&N'_KQ[W+] "P"EX M=G.YE5RNK7QVFZN\Q>/Z<4<"U-ST0+V7I9O$9=?^=%^^C%J;+I%@1K#T-? 2 M@(6ZOEQ"[G-7:(L&%M#:;=.WO%U0;A-XVXOBW=**A:.S[%D_KF.(FANOO=\: M"E*Y483E+]%68(NOM0V8<],#)=Q&[X$ ^8O0/0M ;Y4%)0(Y@.S6)E"?M>&V M@(,'(>$=;J]68*M%Z/%2?M6]%MJ(K_C7OIO1%5M0<\7 >UN[\&B8XIN[=?EV M>\/">-^Q<(C 2^NM)0 3AD^Y1MZY2))WP/L#KOLV>;NX:5_7+Z+W/A6>Z.5Z"@:)7);P"M@ W@Y_!?ERTQJ7W@:NE/>-2 M!;S#5]K/K7CXI2L.'O&:AZG#,X(V[W5X7FOCK<_^:SW#6-J9[?J7 ,RD!?-J M;0W#1>'];(*89HO?S>M2=T/"V]^70(-@/MPD4.?&:]^YHN 8\$R78KOE!?T: M;>VW-&&(\!UW]$NM[>EZ!_0 >0"BKM77-RR?]=;. =C"X]]WK7%7)VS?Y0FW M 3C ?EPEQ'RX$+SRI>7*@5?"$=_X[-A@.!RO==O"=B.ZKU^]KT7X$SSL]0J[ MAEVZ#F$ <8B W)L4WM**%R2Y!.%#<'J#0_P'?O7&;U'#6=Z.K6- GDO#K<]R M<6_ BUH\[T% B N]708+:"<";=PGK3JW(TPAP Q3!(2^O-U@KE?W*O"QG>?> M1VK 1H"(;7B!2&#:-1 $:VTML%I(K0T7QML UN%":AVY6=PL[W&7)3#@W>#. M@;NXD^ +PT/X"BP4[N;BA'G T%__,'&819PF?A2,!HH$ -JEKN2AM5O'Y?QR M?:&W-UO.+Y/7Z.L=$-[>>_.UKEUV+>4VDJO@1>)*=]F].>$"KE< >GL'#OWJ M?87!X.!O[>:W4ERJ+13W<#\#CJ'O;I#6Z\ORI1;;:@^^%P8L[? 79QL&2.@R M=8VT%=M7 DV!=*NC;?["@NG 9X&I;O'VX=L,O@C("4($,UR20'W6V@NM3=06 M".JWVUU?+6 82MON#15/A$VT4U^DL&?W3'P8]M="!>:\>8&];B)XWRLA^!J( M;SW&HV(9,*!VY8LNGA,S:>W$O=T\L3'7/#P:9M?6A[W%E-R_@/!7B(O=_1"W M<#D,]N)^KQ:A,(S<]1?K&]+!U5KJKCI7+=R[#1=K;8G&\MS9K5_A54'-+0D_ M;U, 4%P=[^>6T2L!B(+#V,++9BV&_S+%= > M@S7&?5Z,,/2V='MZ^:P.]\EJ(KY_7:,NTW?'J=75[#ES4<'HX$CPV\!R# MBY7%+-QZ,9,VI6O_I1&K<<_&J>&O;)&V%%M?,?'6P!NOO183 M'&:^K=L%AHA6$#RC'>BB +##D=^R,(;6.4SQ7?QZAL^WE]MG 1G@@&NTG0DK M=S^T&U]X+55@XLNA'>&6<"6X?5ZQK7MX5SPYKA6WC8VYUV*R!<.V;DST%=@: M?5NWSN/YK<-VE4LE+NI2"PRU-^/@K[B8:-RWC1H#C5>^K&!K[;N82SPM9NA> MCSN^F&-/\4KXE-L3=O%.>G>)EEY$L8?VU%NF/0F<:;^S)0,I08WV=: U%O6> M9R&UV5\ ;0MXEUM>*/?N<1<825Z,+J0V!^SV9>8*;Z?"BMNJ< 27>CLL'M?R M=Q.XQ-XR@!'723O3_>_F=0?']]_$+7-A)X#ZM>Q"#YX'!-P@KYO6/ID(%MOJ M:!_'<5L# 2:A3@O 91!K>U>ZBMU9;YSWHE#_G?7JD.T9/.39L(%W0<#;=?@J M:D>_VUJL+0O7(BS91=KN; ?(&=TL!]08<[L_YMH2?V>T=0'-[\IW(K NI@:W MAU?%DEUF+B&W:B[6TR6-&+9ZX:7!'/NK. 
MB'?%2N2 +Y1$&VPUC@IS;D6[Y6*H0%U78GQ&;@$G!HS$O=\Z@U^]QN34[F[W6!O-+1-H(,<.#UHF;]R M!!S#]-H@LMI8A0O*Q??V>'^V MIE\+)7 8 KF?D>>H.T8=PM;8F LPLN'AK; MD8VWH>0RP"J7#EP1:"Y4>7&\XV1H ?16#6 MD,]F$KK%"V5F,":92=LK%N(: M:L/!4.&=[CB9L[OL?=/Z@\ZVYMQ4\4*X#+PJ?NTR5#;J*62#RCI11#%+ &DV/70L8X @S,A2KL!'@$20$O;4MY89((]N0" MD6G$O-M4L/1V__M2J /<=G6Z2^/V[_-8Z5M-M@LP7;+)R%J!%C>Y1NM-=MDV M?:G&.6&',N=VK1M1'N"& 0[(*]].,#.7'6TY^&;,-&V?JPK/B<3EO.Z@>34[U]Y>_NJ&"H3 MB%N]/0$]@1O@?3L$CMA6@G?"3MJ>, !WB(P&;BD/'XJUP.'X[P8KXENND;>-REBG#QV/M+[161#M)UA/3 MC,&U55XTKOB8T$L1F!94D9?&$5O)L1W7EZM25@NWE/$"69,C;63YL$P+A@JD M@$O+%5L-P[/@8_R>?>FR M1_N72C:W \%OP<>88>DL\UNV5 M"4B1B&)MLEI9'[Q_ "'/:S&T!5X),\66CTQ:O@77A@7&#MM%LNKX0[L:UM)> MDBW+"H"'[V8XHPPH3B"W<'? YERO LY6J5RZC0_8%1BVTM^D */6XCN\A0-, MA-^]6^5<\8UY71U'@H_(C@$^[]MX7IOE-=CV MF(U3Z^.C\(MW0.PVUM%JCM$#?%MZ>>>D&UB=QGKP/XSS(QMO5J;$/&\]PJ">IV_^"LQ04S MDOV^#&"UL!XH5CNB90(P 6JVDUL \B;WBQQR*!%(>L-"E>&EL?"X/BM?'NZ: ME'?'K-\)<_]VI-PVS@L3?E\%EI' ^F&P0# N+Z+)RX>"RBG1-S M;>O$BE[+;F*7B)L!HK) UL^L,PVA#O/544= M?J'%Y&;+;>N8V!P$<#6';)FTEU^5+[56,K!M)A5';-'%XN.C\57A#2Q67MT7%PW,J$7LQ QT-Y^ MOZ M?(^YFN-S,X6 >3MBEA?3;".V7-RH[1G@JLM.1O_"?H.X7%MNLSQY_6M&EB_K M=(VYXMIU#A.+!U5W8> !\D6@+"S!Y1BS M:\G.6>=%,AI7I4Q'OAK;D1?)QU_[KJCXN;MM/B$?<3FYJ^2-TMR!MD: MBH\!Z[W69CALKGO#R M&PZ_"%U_,*C8#MQYQO;Z:ML#!P+)\E9Y6,RNY2A/"]3&=%]TLZ'61IQE%M>" M?6//66,/5"RC2F*R9=6&D*FU9X/.<)B79NM]9@6#FOW"&^$F;DWXS3PQJ"N+ M:SVYZQ NHL7SY?ALU6 3?@H6Z+5VH\&CWF9P"W@ ; M>,_/J&"Z\%] 3[!H-N92>OT*^XKY, FX@LR=+0%X9Z,"4@*0P 9YU;!$QOJ< M9]VS3X J@'KWA@N"AD'O$EBUF&*E,!IWH:NH(.TWEKB<82""%T@'N.F M!;*X'%N( F97&:SH'2C;)^V\'%RJ[;;6,YP&:!_3AW>\5>8#+1P7!2#'I>0* M;SN\)-XJ<417B_!_)EXHDU>U[MDD@!,@G^PZ7C7WB)_1(-K1,QI7#@V#[@## M:3//X(7-L[6@^1RLQ3R#GN/$I&1=K4(8B7MB=AD3AO_!A078[T]W%!T8CAVG MB%?.EN11L'?Y"7T<-M>F :[!&X?YL!%WQ+MK7A7?F6_&]]K^\SGY^HL"L!K0 M 6J\>6CZ4Y-.RV+-S#Q24Z<)N_WEJV[K>7&QPM+@WD M!"#$$F(/[;QXA@M>MN$B<@_(:&/?+;O79PQ?*-#VEG^^,^G.[Y/60$!@-OJ> M=./'.^D:],BXAUN$ #)?>3W"?N*Q7&38->+:!62QLVOD%W M7J@2<$=ESY+*8V>(%--OD* M>$._&%I;;@MW@Q#.+1H_>4/,VEX,,'QVQWM@3C '#V2T.EV/\$S76TNY%=<> ME)G$Q>%FKMW8V*&,/L7UC7[*'C-3NC1L5=Z(G 3CM?>1R;/W,NUM?,OKV7%"#5C('G&73YF3.L0R7,QT$T ^\ M 6J\K=Z*@D*9G(RO73?3E=.Y@65E,#.9#$"4_O@X<*W/X5J0,H]Z!&TI;NL^ ME[FVO^*S,$XXI)R[C2F;==&\HE^;@AP!- \V!]D"O ME">_^F25M(SZ)8VO!4RG /ZV@N%W A[ 6GP(7KDR;'/1ZMV ,H\Z!.TZEB@ MH\FX6=RM\THY,2R?)OKVAC')[F,XL!;!RRMNWM9^?-.[XN66\@*#*7FD#4;/ M:$?/EUTO]CH,H?7IH!]CN2R MHUG"0Q@!M*)V'JV2=@ W@2_5^.A7M4AW)LR/5@OKA9/-9&IYM('74#VV:M;N M?^VUN8)V\[($41R/GD>;F1750-N6;:HX7#V[]>$B$+ZW,UKP\@M:+6R%[E3[ MFYF_9%WEPJ ':_$@*P-&9Y54NQ70.4 7I/75KNSLYW9).U@'N?^%$:\(N9?+O1&.;O_+0W@ 9*X+=OWKPT:4L EAAO[ M<4T3MMN+]!V7#+#*34J'HWD"K]_4LVBW7;#\[367F^6Z1^BI<_\8\"RT-BQS MG;&[/NIOM5CZ0.NE]N,V_PC-0FB1K6JB@>' )2HGHX?"-60^=+;6#WT&"-;* M>LW3E&) =(_:,5 DX/E&;!W$1MN) 6\7#)VQ=OS"H:NW=&MUK[6:\.M ;)W$U.=.KK\WUYL4#M9J MC;]$U%PN;A$W!9 &MM&V%([6=^2<=60:+*"@_?2.:!W4(MI)M-;ZMNLDOBEO MJ$\#3&N#=3*7U(RRGEK?GIV^JUU<8MY9Y;PB+DGG;+O.AVFG]7PY'2VDYNS& MF"6V68/1-2 I=]W%W5W[:J< ,=T\P2)9G*P6/DR;:^/3XX,<]/VW+'R_'2)G M<3&T6&@Y $?@78M[SBE[J4?,\(42\W+W6HQ 8/8"K^/5(MKK+;"Y?R/-%R8/E+*: !V]4=/6 M4D$ %V@W-0I "A $< (L ;BX,MO$+2&W4$SIS725;*F][F!6] 49%ETFP"N M>FO1YEDR-:1ZO%R?73?K@]75==]P=:CWU5)^+E<7E8?'^&)[- P:0^NJUAJ7 M)F:^#P)$],=ZW5SUK=;6CW.\H^IW+61Z=.NKK1=GECFVC>>1]$X913NH_C^7 M"NJY9&HB]K+$)O#(?@FLJ=O4RP7[;!) 
"U $X.(2"P"X4VS>=0\7K3'S'6/[ MI-\#GV\7UO8M*!8!QUOYECKC@79J^(_=@LWD?VCS0,#C2/-J!'4;0C8TJS/;?56 M :8(@.62[R2W^"P>A%2EH*_9 %H,K9*9D$NRA@"K>-^Z!&KB]?WV#(SZ%=<6 M>K>T@FFC<_R8A7W;3?S.D/>_VF<%]4P70SO7+09_@;/3M&)5<&P9%GPD_B,# MHO.^ZUZCLP)@=8RQEAS3AIV^3F*D+9/6*Q 9YOUFI.O7^.HM<]NWP%LAC@5G M.0+:#>9W-C69>WMF@1J#LUW7L&ER=O_81"V[AB8;"!C M-YE\V&XF.QY;A.W M:FO997,_3:0TO17MZR MBX7"C-H>]GVZ2:W_U5?W;N>_JU^G+Y78E]M5,-7>F0O"$0JH\?C _\O;K1<7 ML)NW_%VD,S'89=RLAMU>IU43H@*3]NNWHZV6+M5RB4@L(HZ%)QHGOH& M:R.VS]J/,D:;[;^V0PTL9!7#23CW7M4_"^=_F[XMY;ML4QA'; M=^?)%5N4PKO6^ O?K>ERA5?!5V)#K939#+TE-EPC(R M^UIUR6MKXZ2;1]GOW>BLNSN+:H6N[R6T#[T&99AR?+O!FI)73W *[,!Z@J]Q\O@@K M!N*U/NP4KX6:UWM7OD3?F(VYWEI]M60W,ZP7%F[O9[T#&X2J]9.X/@!)+C#' MJ.G0\]P&P9@Z41R01A$,I$,$MNL\SCW;LU^ZK&=8KZ'6F3R[AM_"G*E%R^1* D)YH+V=+E-+ MH"U;YQ'_P)1 6W)SF1# :0 6-HG0%ACVL#%%0-#;.?-YV0 KLRV4"RXA=1. M;7_"1&2G]"&Z<;UHWM*ZJO_-\6FQ;4'X@;&]UEF/D6G8>EQ%MH$W*-R\=N+V MK#FTTUO@,P&X2V $X/SJ<$78T.$\,REZ=*UOV#C JU6Z)6-T;A:W(4W3KC(+ MBV?,*F5.]$PW:6HI0+3H4'T\+@(8 M;F7!,PA*-Q);0YS)UE/SJ3O0'^A)-="X2.UM8=@R=*_/KX8_BT2FTDWJ;D#? M;%\-0[=9]ZC;LM6 1@T3&V0%!H96=Q([DPWB+A3_HG'266Z*;]QV!8WHWE%S M=J_;3=YQ+FZZH[TZGDGC 4[>U_^"=0I@7J'U;OMFZ MH'G4PFO;<7,[X-QDGOHR?(/!&&O2+U585@SN)EJ?I>G3?^K$+\::9;S$!3#7 M:Y',]]K7KD2;JTR#+A_?A(G),F[[]EAY#&S@10)LD96WP%W;=G[ZLD3-C1/G MD0^T< >\.OWX:T4ON-:;"7,=^+S B6Y]3SKA0(?=R>U[N*=-Q$8PAP:KM@" MD*$*76!\+9SXWARII=6BC8G34(%K<"RCY-V8CB+0BSN["=SF]LY7[$UQ=MK* MI.W"BFF;])4ZQ7W2U4G;CI_.2N&G H4 $%U"MD'CI3_#_N*];B!7/NO7Y3A3 M"/X"(=M3= 29VEL#,-QN>EW1G=ZQ0!GX=3#[]57):=6S48,Z[8B7'2&B/?7N MIL4"B&) +8$V!4"HG2(8:A&U]=FX NO[C"R@==1B:NM*#-L);84V'*T$P-<. M 32T'-IC[4W8!##/_3JXA)?8.%IJ;>^;21L/SA/T:'^TAMJ$[SPWXD1IYM)Z M:C'-20!-LW; GXOR[40;FU&U_@H0L/E9O(SGM02+OD'.#^OA]'%W=?VM3M?Z M#(C(3>A];T< *C!CCM'RSQ3W/O#&VAV$&\& W(ZTFKA$W:G/37>12[=@ *0R%U@K+ MC9FWD^W.[N\6T1W=G?6>D)&\^MP)C=["+"W05_'0.1 /["%%M$. M :(($@/V[ W765 MH"EHF76YLV, KHX:GAQ#SB$KNA.W$'#]MWY@2YOOY3QG MD8FVUVJ;];PV&;UG9M=*M8O _MEGP4O7$TSS!H!+>G5[(MKL=^]ZVSOXABK+ M 0"X0]LB,?(W46"G%2LKO7^V\6$6^+>V(F"[3FY1*"J\G5IJK==X+8U'3M1* MJ8_!(>ZY,> VV]S%17,/OGL"0^C*<(<;CNTZ[BT'AWL"YNLQ[PA\Q[N-QE43 M?7?5P8,>\XI*YEP,?B)C"> %PQP+1;V_LX/E_'L^.ZE>CY\W)WL5LBY@I# MNZ'-\]E6[^K;CDPA^-G*YKZ^?5Y N"8<#SXRMGLW%YP"".V@,$,;Y5RM/6NO MCBOA$6(B[B7\?USEU6[C <"YF>J,-;E99CL91M6BIQBV# %R\Q[\7AO&Y24S MDC_A3/"O0!6X(^!+0 K3LL36&/P)KX:3KA<8"H C[0R@!A[CO=A: MJ^.F^==4;X',ZN%4[?<;N\:NYV;SB:S"O/]A_[ MB6_9[)+/P(LD(4YZEC^C?XG7/V8#WD7G.';Z/.1VB[-\K;*2RQ!M,. MDCNWV-TU--KZ]&PX#DTJP.VXLN8.=A;9C"SR5MR: MBA&^3N#50M7ZR5L1H!:@@9_.OH&P[I2WI=P'TFPYG!'',N"V0Z(V7NMKYM=F MPIG4=NU=,E08$*Z^=E0?:4/(<^WI<&2 ._!D/N7VPENXZ5MRLOF8C)#8?22W>+&99\H%V=ER']BR#:!&Y1MP4."5W(I"LK0$$@3/$S5(I>$YY7YR%?C/3 MKLW(65OVK4NXKW!\+6!\1-S"E49[J0O:;5^&+J>8?$O,5=[^I@O,;61?;F,;7YTX=@.P MI,_/5>S)-PK !F#YMB!S>C'()(%',.<;C#WJ9=6F!V:UVV)3<@E\:/M3*.I6 M"<;!!6=Z-C.7WSPP!N&^>]6YSX.D=XZ9DHO7O@!GMC7 6-K([2\;ZWN-9HG# MD!W-SP()N6_P'?#H=\$\"=\QJQD'F:+F;G)S.C1\]F@2) )9T-O MD5>^&_(>;OB[M3L3%DZW!\X 3EHC+BW71,O+[>3Z#.RT#&,B\ 58PPLEQH@? 
MMU&X">-[\228%*PC5G?KL+_5Y-N++C=[#V3G-1H/>BNV26]PK_67>ULE<> V M ?S,[>QJ[6N7[D%H/[@5X; !J=MNV M?C?&SUW\K ^[#7 -KMT"F;/@LN=<\:I67+OV+04GK%$ 96N50)) )> ._^5Z M:RGB$'&+P$3X HM0K11Y(O#C;48;J,VU88"G=;W:,.#YG3-_ MM=VT#5RFN (\&NZBYC*/:)T 4X!%+:1V","HA2AV18)][3B7)PT:;X)?N#W1K6PV]J;6^GUI;@%O&PRTQ&S*],LA!4U4+DDG MP0^TKF]J;9N M!N;%E^5ZN4:< _[]YGK,"ABV50 G[=-:EXO^=@)@ ;:TSG!H.&=Y],S.)@WK M:%'#CG 6\>?V,PT(9CR+?'/*F.7];Y_9^QWB%N#>O@''L][C0<@64MLQI_@" ML$6YD>TG]"-<_4P55EL'FW_ :]_#,#.73SS_9D&7=?>[U>T"0::+.XA_+E+VL9]7&P%$$*H$ZK<&8>JUDSA+0 MFFD MN;GLXV@_TU8WBELI#FVKF+F[/:VY"L&S_SBA 7?1.2"\TYZ^NVKT@;/ M?UVZCW(X,B7:EAT<-A)WQ#_%"($<.!(7'=Y.[E@GO9FX^V%ZN,4664R&WG+GM)&X5V&8 $L\*)SQ7IO;J^&WX>$:MX$@_#RR?4*D M@Z/FA?+Z+ZMVA:TB;@0GC<7-".?4L?X6#8S 1D1;SAOGMFR,^;6<>BM('MB2 MJL7 86H1*:(XY3S,M6GW &"_A?(O0!L@#_ %."[L;KFXC&H&;EG(GHO_1I>? M;\6U$H'E;(09!4#\%M 2 7H)K@7O]\0:]4WM)2K#GO/?\/+]=\<:KZQ03C,+ MG",#C6GH[42 ']RV/N7JGRVT]FRH<",X=]YL%I/3?+&^YQ$'K@XZM-VG1103 MRJ?F0(0)>NA60+[RO>B:Q=L%(G3:<2-X'*XZ9_">=<6ZQ]U&L82\+>X[0&:+ M;1714P7#X>XJ7-? J-V/]S+5><& MSB?'=/'^L&1X/FL\AHR#:(W'=G2&;BT\0\PLI>:.C#W3XG"5P-:VINO,UAV' M>?_/V.('NA& 4\L0( %CFM/FA&Q.,PK@"# B=QH'H%NV[?+1\[N\4"LO1Q?0 MRPG#!?3?[S=\"SRY;IEWD;/.C%MG=EY;B<[MYNGV=C/GT6<4L44@A[XU9BL4 M>?G5KEKW+OCYH2VVW1'+E=W>^ML"+=56@#N6[MI>CI?06&XT\,C\GXQMIN0: MO '+[VY5..>VNQN [NNJ>)T%CV^!M?-7@1NQ+0?/:/7=!.*W\E^8%QRRK@/3 M%&+&/>2C;[?X<9P,SO*JE">X$?$6 "A7R&M.MYY+QU'3=V;[]=(V5[#RY>+J M;8D-CUL4P"I78GRX-=S&U"?**(!@+9-6IYY33P&D9W'J\^A5,^$6-(#J9=FR M:F, A6+!N$R86IL_IVM#>5>U?UMS-4U=CFXO-Q77>F?:'FK.=1):5$X0WP(W MQM/0C_&<=,%R\_TLD 14]H&Q%=P9[ M%<;!K>+O0!9]2[M%3PJXC$_4HO&5N>Y0IY5+XJEI&3 MOF'84.LG[C=<=/X&OJ2CSLGAJFV;[6/;U6M7!R-GD8D%M///[1KW=GXT1JO# MBD?7I8E$\/G9W/QP9NAB>&>ZWN>\KE; "/!&+P4[;CVTBG7E[=#VUW'!+5L*NDW8@4](YQ*+R:O;3/:WG4^NB9]K8O)#:37 MMUWDX&8F[B%=.$P[CM>2B/$ 6%^[@R2:'7ZN+4*S;B?I)?87.S@\R[M'SZ1? MRT_3<&'O9XWA;G2I>2D8O4N]UK#?<=NY97%5.AY=2ISL7BX;OG_>K.NI@C@]/CT_CP(3 MOA6X@^,5-+G9-NQ%QA-XTN'(+W,E-&-<00M1CQ4/&#C8\N,,.I>XS([.S?@V MSMW&CN/GMB%9G:L+]P8[UY_#76(&LI/Y2ALD_P)7S3G)[_04>#R=SSU<";1' MT)?F07:U>)>[PPXM:)]/MF_M%%^J+<_<+;ZE]2G,Q;O()^__;C'X:XO=12G$ MGW7A?EPUQ^$7VNYB[A+/=[',7NTE=$T7#\ UW]*.VOVXCZ$5^C3=2:"NC;&; MEF?LQ^ :NE,9AAPBX.(6;XG@X/ Q;[97>DU.7Q],N#VT[7*\N>_WPYZ,9G9W M;HO!%',%[L&76%V?YOL6TQVW8O8ML+I6C=+G@)',S\8E:#$"S%A5D MA=?$/_ T+J16,M!N=QQSSY':Q^"=;LN6;>TR!G-WD4>["?*8[O2\>MT3H%HG MS#'NM>6#M4W=0\L0#R\;@E&UHX+3\,*]?7[#G18?M'GKR.(Y=PO7T7O035#O MK&_0?N5@^S_<0@QS_X1;; _E?.Z,PX3[PDZ]?1ZT];A+ MVPW#V]P9L[6==UP<-D;/I^G)+G=J^K&X),X*5DZ' =#HI8GY,'K8?QP'7K>W MW.?M<( 4NIO6Q8YXK[\,UTBN+]=R.XD" -X M:">Q#J'3W^IMQFL)?N">>S-*ZX& P,3GDCWHFY'^&YM6S:G>Y3AJ=W M!"CA7]K]0\:]\PYKWN9^A./#XO(F[@VW[W[E=3*'I?W8!?9"LL]Y+7#EMB*/ MV'/:"NZ;\%F:D_LXKJO'TL'KZ&\H^+&9-$%ZW[W# =2U8/I" MW4OB4G<#]<@7TCOV=403PX7&M?<4^,^]Z[X%_KJW;/G<70=,PUY%[1/ MWK? 
7X#9 0\92]MO;^O2X&L*6%HS@ U=P#X'")\?@M\-!W>C>Z_V9OX\%L#; MJUO4$/<_]40;'![G7L$OM;_156.M>SR==O[?!5[;G36ZU%Q6N_ 6*BP4#SB# MS)'K65QW=M.X[$ZQM7<7!6SHY]OPNP?JKHOY)25KCI_NB][P;1C=2TRA57OG M*^;#ZW>K.T#9\!TH=D(WH_?F-6PU-WU=@EZ&1^Z>X14#P]SP^^TZY'#IA:!/ MS?_KR^L >'7:*&R$GYR3AM_N+W)&[=R]L\W;73(X!EBX/@77K6$[R_MNG_ " MDN3M,?1\NR>>WVX@\+=C:3_A.?0O#0JZB4[^!B\'X)WMG?9/_*)9A_A=1YMO3DK/F.'PPE<_?[@9;_#P;7J!?6L= ?>\%S>U3(# MW1_'BO@3-.//PKY_IW,#V/?G?'?_^XI8IQOC-B;7DU'OCE]+/,6W]4Y(K[8G MX#GQ5V7-NH'7RPR)WJ$3LKWO,^:W^M]983X\%Y%3TUF_@?6:[V ]\@U!3D4S M;'$ [?%6]"LZ(\"VC."43A3^U0KVR=&Q]!!P:; MW>GH4.VX>K,]X:R.+[>3G3>^/N F+L56:MT3^!V8<%_O5=P0>K5;W?M\=US_ M(GE2[F0;WJRE[BT;PG?K[O:">$9W BUO-V,_ MWF7LDO=\^Y1[FG[EA0,4BGGLYO3]PX]]JNZA-;1;T*W,S/9*O-O]ZCQ%3LFG M=%'6CFB7?-[]#7PIELD[?>G C&OH.RLW"/57G]>ZI_7'U/@1K[L[Z^Z!!^ : MC WGPVE4-G,=ZIVA!L2[UD,.M62'>@?8[C#P16JCU^WM+OA\.[%@\Q[G=1)< MY3U0#XR$N".^% S^3;:?W2WH#?A;/$E^+'^2K].*C'F[_FJ M5K^]=Z6CU;/ MY'N_6N^^,]97)I"%U\L#;O'R46&A-V%Y8PN8+ZX?H9GKK_FHMP<=RGP39LDS M='?&#T3+LDOU!>[.V"&+@#]ST[P'HIS:,V^W@38.P2W?SS:-$U_Z/>7.W>/2E/9/_).!=K!0#[<_ MS(?;;?;Q.&#=G"M8[R+7Q67I.%T&>J]J7*)=FFQT;C1[W,'*<'=J^]PZ%(\YIMKF":C30FW)S^+[(I^ACS?/ MA#GJ7N3O>A^=3=RH7IAHYG?IUF]W,*:Y+_ "'@4C@5'DLG+M=V+\*HPK*3\C MW,G?K=YE>KQ<0#LOYQ+7EQWDJ7((N8=60H[1'NR>KSG><&&M[6% !BS.SNO6 MAJO3P?1#<)4$:CP>MR+O=+/>D0$ENFF9MFS%Q027D>NX1UV!>J7<-2T-?G6+ M>W_9%_(\;H;O6GKPW=+^NO=N N.C^MR3!+F%8/]S_O0E[ M,PVR=.KW:;E)O M?H7%WOIFMK ZZ@ZGW]T"U^>WS%O0N818BAL82$DCU_.S[.6AN\"X6@ELSCY"U^G5S\GP[\#S6['[?U67;O[#5O/ MZXG!.MPGO'->) ZJMWF7AEFBZP7;:?IC?4\W1MNMCXETWB/0ML MQ\'CH-V7KC5]=9PGP- 3QP'B@F3#MJS83OBJ]ZK3=._SQ_9N\D=^R-[E!M"S MW?/IE'.R/$K^@XZ6MT&[Y%?P,?EH-<6^@QN7Y_G*B@?VV/6T_2,W-7R2[HB7 M=*/CS^+M/!,ZMPX25]>OY1/O3F/M->-]5JO#1<^[X,$(98 O .&^6DR5C_Q: MY7?L13ZW)7G^PAY'!]$KV]'/*_I5^B:]^9MD-[-O;/_HW $G._P6RNY[;Y)/ MV6GT\W#) -;W58%688ECN[OL7NAT\5@WS Z#[MQCWT_J:79B, OWDYZE?K,W M%PCRE5Z#_)$V!Y"0WV*G:?T#I_:'/&J=U;RMQ_R.GO'LB^ZW/'@!OK!=L-#V M<,U3QO:I^1/>*T]DS]POVL'VN'9"NMP]D0ZFK=57VJ_>Z6;:KJ:]&2VE7L5/ MC:^^T5YVA(4\,3Z(7QV7XQG'\5ICN.;V;@^#[MFOU?>SJWK8[]Z7?QM-WVDC MWEG2S?L,<7P]9V\W5O?N;\';6=[SP ?^^&XD)MI3=XWV#>XD\WA=68_#95'_ MQL/'QG+$??\8K@P:)X#7!60%B6#4\-]^=%TJR"]OYDO86_O_.U@>@XZ+'] _ MVL.\XWNW]XG>:)QMM];O:0?$E-N/>81W+E]A/SBWSX'697D.O-@6A*ZVQZ<_ MO5_OU&[P/6O8)V]"1V:CAN?"A&V<,/P]+W\&G\MG@YV^PGD2^YM>7'MB)UJO MEUWORF@W+;9>WKZXM]4:[F/H;7PZ@.(>9)_'IZ@[A35"CLI79D/QQL=I]W+R&OX G MI=')*G 4)A"W#G &6!:3BO&\\&/P=KN7PZ[B/=HGZH.[W_$&LXF;OZXS.YW[]7 M[B/Q9'O1O := $^ZI[9_E _T;G;W/9TYD;[L;N"+E*GB8>I\!DO\_.RUYX*7 MYDVU].2K,B:Y-3^WI^[.O!W-RGJ[.]'7U[S;;7,;JQFZ5GQ0^R$XY PEKJ[ M[Y'"8ML^MG<>*?\]C]UNSA_T2&TX?L)>CK]D4-YFWAGL95T8L[F61__C]="RTM<6W?P(^K/:H+C;C%8F,\M,T>T-RL[PH7P[6]:7P&+D:?:0UC?\I' MWA7W&_TO@-UV0ZUY#^DK;\'GD/N1;6\D>\^9A_%ZYD'R&6DHLA.88:]))^FF MZ-W"F?N;N!1?2PNCS\1_>6_\XT>#1PD#D+MZ)WV1W8T+I"^1%\.1\%[ M>JK?#('*-Z;YB"ZDUZ3+>E>]E&+O/8#6J'^SA]OO\8/XB-S[>$9="T\G)E]; MWTO[G'O9O>C>0JM)B*MH_PK@TE<39_/C<0J U6N$#MV3 M!^K;N.9J[G(?M$S<3^VZ9Y7#$UY]P]A]_[ZZ_>MS[3W'56.#MY^^BW ^+@,$ MA9G>ZWRD=417A1O6=>XWF)_XQOG5/&/[M^:C=[N\U5O!]XC7?=7>$8KY<+R75*P$_@CC MK".Z.P7?@"I]/Y^_U^VG=D6[O]O$;A[8KXPZ+@NO[9VVL/P\>2V_E[ 6K^,* M>:_K35P,[237'$_L/1XH<,.V(?YT>'7\+""KMN!#R?.\6&T/NW3<8*S+U]=F MRX7F3^X/K0M@Z PV1FI'NBF??H6]HND6V+" +A,TH(D 4X A )+[2? J2)^\ M*DC?%VB!10.:"I %@ )XNM7P=-Z-_*;>F$NUC3_CR>'R/H.7-I Z\9OT5@RT MF '*U>/9-#V;5)T&Q]Q7]\_B(^UA1_#^BAVF?:E7?=WCF&\,,KAE&,*F[2!' 
MY,??(%KP\GR?=CPM;N]W!XB^QW A]>YVMFMS[V.3C?\"E^"U@#.8& Y&-VJ/ MT=?T,]U0+^ 40J^S9OHF:FL*5 ' \BY_]&NR]_%FD5G:3H$4-LK;5;WQ!@5; MP7T)$7TWCO9B?8ZT1_2R,Z@YXBP(G^9;^H^[>8)BOPEZ2F 9]@B M 9CG'W=:.[-@.V"OK24;CQG[TG!(K:/6W#Q>-UL# MM]VT77)M\:0W42L$>$*_$SKCAG!.=5-D9;UTM8'D<' MJR&U1( K=\XV6HP:COD"#FTW!0G(//N28I;_8?5XO MYP^Y-=SO+Z4X&LR'/^5:^M6\2.%N.6%_[YR?#I4?I=6Y'MS?+FP[;[X9+B1_ M!7C E'$@==VV^1OUABE GV_+2WR O^=>*"S -=5#;UW5H^R'?[&9MYO:G?7K MPPO#Q%O;_'O:F]G$\O1_NO_.7OG/BR^A]YHJ_FWKMC]\7S?2"6./T\G-T2Y_'S MULWJVEZ%PJA_QWN_'RVG>'G@NO/SKFH>*E#O3?'6F;'/\>6'N(DVI XI#_T.CM?@YV^4-JR9[#YSIZ=' MH??^7^"Q%2RWR?[/_@?^&O M:/OOEWMZLC;Y*W[G'XO[R./_!7W%?(U].'S\8;!?II@*%V3?<)D&F%Y M<8!=\':6?6=;Z +:6GUPN'#B6RUGY7%7;G!X2VOW6[==I'-27+YY9&'Z>FAP M5G1*6X-JX&B+<;=UDW[R7-MAR7YH=+,"\V$7 :!FC'Y=8E%>P77$%F(D=(1=AW]!:O]@@V4V=^)SB6!_88=D07EL8FY> M0GC2:O59T'_[>:UD=5_O!)Q<1F,Y=PMR&F.[9<==,WT+= I<&%I@<')R;6N!7@%Y M9P!Y "T B'MB=$-EAQ M9&I' +)VO%],6H!IAFM:>QY:K8 5;;5RFGJE@"-JU%J&:B-MJFJ? @=AT6L' M:]X%,%_@92]T9E\S=JV = #)?C* ,P)+ $)W=7T%=W9,I^@W=$6SE>*7J.?X5J)FL>7SUFV6DN81E],W;= M!LN AF0U>'-_T6H>6C5>+6#2:\. RU\Q;?& !@)? /F *0!*"#X*, %^<>" MV8#]@&-T:'6F8'( 7P"<=U-=JWW(6B1]=0&? 9MD^P?N67EL!&I#8"5I^X!^ M;=N K%H7@>1?MV#7@.& AWK[88)M=WXV<UV->$QK,729 =," M\(!37UQ6AY:^X"=79EW$(%Z71XEH;@;1K)8'!?8IG*(%&9P5KZ("]@-,7KPKB>HQW M,71/@64 ,X&96L!Z1V]2@:-[95VW8AM_>G6D70=:,F4W?MYSVE[&>1=[ M)G'&IIS\'A]94YJ<@#A:NE:AV\)8@ESEEW ?H!EIF6& K0$.VY!86=M,G+'?B9? M6&U:?_=;00"F8@UATV$:<\UW!EZ-9,Q;[6=$8]]IKE]>7M9=LV6"9U9TGW[$ M:B1BP660>LADJWX9;:MC0V$F@%]CWEHB>TEYU'2)7R)>16/&EO>8W%=\7-/6M=G$5\N Q!= M*@ O ,):#@'["M5ENW9T;TD 7P!+ $4 60!? .I='EIE?EA:GFZ.8_F!TFNK M7#PA(EH(@F-P.H$[?QQCX&V?6R@%"E]@9UYBP06&!/A:%((6@F]:+2OZ988$ M$GB 7)\6%H*8.)BS%L>6E\ <5[(8"!]<7Q57BX 3X*7>7 !>WT.9Q9P<7Q? M /. 05H>6EE<"FL6@C<)I@4,6WL "%MP :Y<0%RY5U<@KE$]5DM@MMAS 7? 54$4%J-9AY>!V5R M@A6";!L7 YAG5()==4X"5X($8%J"2F$O='-A&GWU>D]K)EIR?:]XX'@D9R@ M@&IU7UA:\5E? #MA$7(W@<%='UNK7&P;7 ]4>ZE;*7-<7"$ =(+[@>!XI%P> M6BL!M%I(7'5_DUZM7GN"+6!5 D!>$5LZ@5];5%^ 9@^"6V6*:Q%??X(Z@H6" M7X(Q 0!C88)M8"B"!VO\@5A:IEU$@K.".VZ0@F%:JUP'!VEHO@[S861DW7/X M6B$ ^%K^=LV"<5TM>'!B7P#)@$=T@&I06IAW-'C.6\Z _FD@?85BM8(46X5B M$5MA@AD'I@("$%!CY8+E8CU@O%U<7"( V6_/ T""$U_"911A07?#9V!:5 0S M=/EV-GT"7(Z"/8&,9!A?2(* :2N#)@#_=DAOGX)&?<$%"EP> M6FV"$(%7>5^"$"0S7V&"$5LM@DQ:1&43=_1?('306Y]K%X!T7.=Z4'R:6[4# M$P$A6P6DH!.?R-Z='5:985L-V%1 U4$ MU&A&7T$!PH)<@\EZ;@!Y !!' 5"#+1PA!\%W58.%;!-_^G#A9G-BBUUO M %"#6E:R @Y<8X/9@J4",U]T@FN".8+B6I""[%IR@TD"G6'@84U<"6#Y@9R# M.7A? IP<8(86I>#; MH#8E@+8+>6D^"(5MN81-_.G'G:W."IX-V@F5M'V'/ M=GZ"P&R*6QY>2 &! 5M:M(/46JZ#&F)!*"QQY8+O@ ^!96= @>R"OV#Y6B># MX'C/:,I; %RM@@=KRU^;@[R#1'!>6C$!&0JM*HQMNX-L@F8 -(-!>B!AL60P M9!#68-"6C9C1( 3;:AFI'Z&:81U3X*$@G=:_@']"G9>:(-,%STH &/= M@^:#6%HQ> ^!.0-Y7&!R)EH*;^&" H0X@CEXA6*I7GJ""8+;87!X&UP07=-F M(UR*@O)T@WV>=LB#A8(6A&@5\V'R>">")(3F@^B#QP2'9@"$&82&!*6"_@I? M&N)V1UIA@@0'\ 8D$]1IIAGV,%=?1O%&8P!3&"WV9489AU;&!::T]V M,63O7F^$:G*@>.D!HW[686QX#W4B6C5DSP,K?MAEZEY\91QO)X%]=6J#6U\< M!E5>M'(8@;=@/("Y>#]>"WLC@5UOXGPD:[%J.V[G 9)K4F)X7#I=67+H IIH M,V$66_5XDFAP9#ED W[]8\%HD6^ 4^IQ= %T@8E^L&C"8QMLRGN/?KQTG@*? 
M6N9RHG,47>]9(76G:U)O]] M7 9[%W;;>[I_T822]YO=7 B6E=;CVAO9?=?#&H';@!=.VXO;^=DJUS;@9UQ M*&$_; I:QF/Y9C9M-62.=8=D,G+1?$=:WHE?D)R(6+J M:3MW1%L\8[)D[%J.;CV A%U<6)_)WQ@?75<)5]W>DQKC0'T=#9; MPW& @3UY+84H !=S)%WJ84%C%0']>GM?^G_!> \"47'_7SV%QX$9=3=;,5]X M<(YNL7M9 ,M?Y67!:\5\^F$.8VAQOEH(=EQ;M&3=?_=K'VN->O6$#V&U?U!T MVUO3?F]:(P$=>U-P8W/,7(ACCW )9[1]886)6^EG,7H8-K*764@=!?O'_OA.%: M]E\FA9IM,%P[8<%_VX%<=Y: AG*6?+MI"'$\=$X / *&6MMAHEH571=@'G+ H9:6%Q)6Z)SDX+R6;]H.(-=A25?KUNN8$AA M.FJG8Q=Q[WR-;!.%(EI- )MEOGIJ8 "%M&L$8+=O9V7VA 5R_(2:>I=Q!'(^ M6P%J06')?Y6 \X(T7[M;B5O4 F< L7C"$L" '7>,6SIU2&WL>5YVZFI>>OQX MA&S>6L%HJUP>>:!D"V/OA4!>56/Z?71E3'Y:8S,C$VX46SN%(%XM6SEWTGEC M?<%X[84.>U![OEHQ?[YDHW0Q84AM3(6_72!::UH%AM%Y>'3U8W1N+'0_=.)U MFW,!"UQUW&?O!*-TVWQ_!?)V$X88 N%L+5L5?%>#9 "$8V9C3EPT7"+@8U;P7A$;%42_@'E!C,( M0VYL:D5N#6A7 $D 3 !9#4$ 4@!$ /IZ4(1B<+AG3G+V$C&Y/8DUQ%6G!>5%H!(4I7GM_ MF&U.SQTOV\*A6-P MJH4.8$Q:JUZM7EV%!6&X$N)OLX53@(!Y+P4 7:-:FFN[A3I6OW6\R% M^GVT;?YE@W><=C($UH5#:[UPZW;,9_)PW86'8%86\G9<7$V"#8;R8N.%70'F MA:F1R:'WX8OQ\4 'T8\%XZV8.>S5D3WUF;B%G M1V@B<1]K2%B(:Q9?I_](2R?N]970$S M7)IZ76B):U5\RF?)>=)ZKE^!?#I^XF3'A@]E!8<-7WYC/WAK@#][V%OY=3!\ M,V(Z7-EX(UO1;/)V X82=$9]&(:R9TM\[X8;?8]N_VFP::-=7X9R8RUB3W%6 M<$MB<@ ?6XE;!&!M;DMK/6<^:PN&RH8FAQ*!6@\,6S^'P5VC=!$*;&5,:RB& M>X,O6Q>&M(0WAUQF.8>\A=EU/(?D:ZQP(X8G>N1N(%[_?;]=%H9=I$#V%WC@$AM M,X=E=SEVFH*"6Q1;68?+=2U;)6>X=<\#TH$)7*M=R2VP7$]>6'V\AEM]1( % M?(9CZ7J(>=A=)7S& M0WD[AI9U#WBN!RVBP6JV'385O;-YJM%RTARZ&MH=1KWX82@5\BLWQ9VI4 #@]FFY7AHQ:1&5::8AUJ5_];<%=:5_L6CIJSF:2==.!S8>+=Q5;FUM) M6'@GZH:(1^"PS$A MY4WZ,7,F$ M86SK8-5>GF;%AL9CIH8W:ZV%JH:PA1IY 6'U6Z]=;]\\EO6=7AX M^VLC84A:S86>@-6&@(%V>F]G/W\[7IY@W(8#8UEDS(<@ 39>C'=<7&P Y(8Z M7^2&<7+FAN1BY841@;8NZX997:F 1G5E>/"&*UKYA4==IVU&7_2%5WLC9O"& M'WH0A]UB]&K+>?1C2&WZAM1K1W=MA5!C(WMP9#I?1VA,92=I*8<*=61X(?&8S]Y675U 9\"NH1:;QA::'(?AYA_ M*F4RB"->?6P-YN;HA^B-1KHVOV$3*'I6,1AO9Q-HSI]<(9M7W6']G[9=5V( M.H? >1EJ)'P@?-QL9 ?V>*2'MGG[6["(PH@M6U5>_H0.9FZ( (>LN MAV"&WVH&?7J(86] =S\$PHBRB)=\NG2_AQ2&L(&^'1716YZ(?HC/B-AG5'"N8]UT-WF$7[)\6U]$A^-K>X9]B,Z(K%JZ?AJ&TXA, MA]6(9(>GAR!=07)::F1]+GU@6\V(88>[AR%;WEKG9[=O&X5(;5!L4(C6>4EU MS%J"AUUG07)\B#.'3W@U@8N'2VS8B(Z'6UR\:QUU-V:3AP->-F:9APMZ!7.< MA])P7V!\78Q_KW4;?WM[[HC]8YYL1(=X?/^%^0PGAB:)%UHKAK*'%G#^A_UQ MO5R$>&U=\HBL6HEP$H9"B9UYK%JT7$%;3@/5AVEC!GMFB#J'X78;732&X5M& M9-"'@F=/8D>):8=[?V!:KFI<>B=L;ES(ASYN7'ZX3KTJMA84)T1NO&XH %"& M4H;CAU:&NX=57NENSGF17*]?X79X<*E=86@>AP9F<83X6[6'_7$W9WR(/FH% MAGMCY&O,=D)Y0G^W>P6'LG?\B%E>&5S+8/!VSH9$>P>$"GI:?<5^8' -<25? MWW]%!XEVWB:&B&SWDP:&!M>'![?WA>HH>.EOBX8!:HV& M<&V:B6Q=?&3)6[M?E(;RA%\%\826A6%AFX:49)V&.'-&?$^#/WR^AR6)P(?= M?5*).E[$ASR&QX<$B1Q:!6%U7BEH>77GB.,%.(:2A2V&MH?6A^]9J8$\=P%K M2HF1;@\C(UOOB<>)X(<::..'9WZBA7)Z:X A8!%S"P+ A0]JZ'.C6I$!BX76 M>W)Z &VT>$9S)W'H9^F$KVF$IET$;&!\0X= B%UO=H4?6LYL9(;^;=-M M*' W:2]]N'(I8%=:RV$5>'J(CF87 7MI\UDM8')JPF_QANM]_5L?7>I]ZVKQ M@#IB)'1+;^1Q0W@_?I-ZHUW7B9>&AH&4A\=F/7;K8YR%=EIM>H]NQ'^L6@1T M7F0[9*1J3WP8A0!R1FLG6@AFGG\O7>QV+76V>D%R_==Y''77^Q:3G3I;S-O MHUW\9D)WN^;*I\?FR&AZ5CH8B[7EB'0HG; M>^AX*&+QB8EN(E]+=HV!%&I'ARYSQV9A&D!9=<& (/=HQ: M,XH;6@!OAW%>9*D#N&(I<_)F-F1)Z-M(63H8T=O MEW$L>2H!^&0"6W5^MP&S9:!T763M6SIR,F!1?,N&%%HMBH-O)W5Y9K&*"',% M8)&(Q7I1;6AO-FQS9BMU2X'19L=C,XK%8XE:)P$B@*MC7'!#<#%D MMWWC<%MX7EKW6_IJN6/3!*QZ*6,G"IQPO%VTO**R88[?*%KM@'I 9!GWG!JA6EA^VMZ76@5 &-SA6]ML0P$!V^%SGHABPJ! M<&Z-A5Z AHK[@_M\N89 9--H8%JU:@1W_V'=ALYL@UV26^"%_HE0$F :QHG?AX6&KF;<7:=V;(E[,V,! 
[Remainder of uuencoded PostScript attachment omitted: binary uuencode data not reproduced in this archive copy.]
end

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 00:08:53 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA07003; Thu, 25 Mar 93 00:08:53 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21855; Thu, 25 Mar 93 00:08:25 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 00:08:24 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21847; Thu, 25 Mar 93 00:08:23 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA07335; Wed, 24 Mar 93 23:02:31 CST
Date: Wed, 24 Mar 93 23:02:31 CST
From: Tony Skjellum
Message-Id: <9303250502.AA07335@Aurora.CS.MsState.Edu>
To: mpi-context@cs.utk.edu
Subject: Silence?

I have no mail in my inbox on contexts. Hasn't anyone anything to say
about the tripartite proposal?
I am hoping for input from others than the co-authors, of course :-)
- Tony

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 05:10:00 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA14110; Thu, 25 Mar 93 05:10:00 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05401; Thu, 25 Mar 93 05:09:19 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 05:09:18 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05393; Thu, 25 Mar 93 05:09:14 -0500
Date: Thu, 25 Mar 93 10:09:07 GMT
Message-Id: <17744.9303251009@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Re: Silence?
To: Tony Skjellum , mpi-context@cs.utk.edu
In-Reply-To: Tony Skjellum's message of Wed, 24 Mar 93 23:02:31 CST
Reply-To: lyndon@epcc.ed.ac.uk

Hi Tony.

Until your mail of yesterday, I hadn't realised that you intended the suggested straw poll of proposals to be held by email. I could be wrong, but I don't think this will work --- just too few people doing the email stuff.

> I have no mail in my inbox on contexts. Hasn't anyone anything to say
> about the tripartite proposal?

Well, to date there have really only been Rik, Jim, Marc (one message, response to Jim), Tom Henderson (one message, clarification) and myself doing anything in this subcommittee mail list. Perhaps it's just too early yet for any responses.

> I am hoping for input from others than the co-authors, of course :-)

However one will accept input from other authors, of course :-)

I certainly do have comments to make on each of the proposals.

Best Wishes
Lyndon

/--------------------------------------------------------\
e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 09:54:00 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA15684; Thu, 25 Mar 93 09:54:00 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19738; Thu, 25 Mar 93 09:53:23 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 09:53:22 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from fslg8.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19730; Thu, 25 Mar 93 09:53:20 -0500
Received: by fslg8.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA28293; Thu, 25 Mar 93 14:53:16 GMT
Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA02709; Thu, 25 Mar 93 07:51:59 MST
Date: Thu, 25 Mar 93 07:51:59 MST
From: hender@macaw.fsl.noaa.gov (Tom Henderson)
Message-Id: <9303251451.AA02709@macaw.fsl.noaa.gov>
To: mpi-context@cs.utk.edu
Subject: Re: Silence?

> I have no mail in my inbox on contexts. Hasn't anyone anything to say
> about the tripartite proposal?
>
> I am hoping for input from others than the co-authors, of course :-)
> - Tony

Hey, I haven't even finished reading it!  I only got it yesterday...
:-)

Tom Henderson
NOAA Forecast Systems Laboratory
hender@fsl.noaa.gov

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 10:37:55 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA16999; Thu, 25 Mar 93 10:37:55 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21399; Thu, 25 Mar 93 10:37:13 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 10:37:12 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from cs.sandia.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21391; Thu, 25 Mar 93 10:37:11 -0500
Received: from panther.cs.sandia.gov by cs.sandia.gov (4.1/SMI-4.1) id AA03293; Thu, 25 Mar 93 08:37:09 MST
Received: by panther.cs.sandia.gov (Smail3.1.28.1 #1) id m0nbtzQ-0016bbC; Thu, 25 Mar 93 08:37 MST
Message-Id:
Date: Thu, 25 Mar 93 08:37 MST
From: srwheat@cs.sandia.gov (Stephen R. Wheat)
To: mpi-context@cs.utk.edu
Subject: Re: Silence?

I'm still trying to dig through a pile of documents to find which one I'm supposed to "be non-silent about" and which ones are too old. When you get 3 or so of these a day out of context (pun not intended), i.e., not an author, it's hard to keep up with it. By the time one doc is printed it is already history.

Stephen

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 10:48:37 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17260; Thu, 25 Mar 93 10:48:37 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21849; Thu, 25 Mar 93 10:47:57 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 10:47:56 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21766; Thu, 25 Mar 93 10:45:43 -0500
Date: Thu, 25 Mar 93 15:44:59 GMT
Message-Id: <18119.9303251544@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Helpful Summary of Contexts Proposals
To: mpi-comm@cs.utk.edu, mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk
Cc: tony@aurora.cs.msstate.edu, nbm@castle.ed.ac.uk, bobf@epcc.ed.ac.uk

Dear MPI Colleagues

I imagine that many of you have started or are about to start reading the three contexts subcommittee proposals. We have prepared a comparative, and non-judgmental, summary of the three proposals which may be of some assistance to MPI.

The three proposals in the context subcommittee share common features, and have differences both in concept and detail. Two of these proposals contain features which are "separable" and could equally appear as components of one or more other proposals.

Hopefully the summary will: (a) help us to discuss the important differences between the proposals and make agreements on how we should proceed with respect to those issues; (b) help us to isolate the separable points and make separate agreements on those issues.

I hope that the summary is both accurate and complete. Please make corrections and additions if you discover such. I apologise in advance for my errors, which are surely inevitable.

Best Wishes
Lyndon

----------------------------------------------------------------------

Summary of context subcommittee proposals
*****************************************

The three proposals in the context subcommittee share common features, and have differences both in concept and detail. Two of these proposals contain features which are "separable" and could equally appear as components of one or more other proposals.

This summary identifies features of the proposals as: Common Features; Separable Features; Concept Differences; Detail Differences.

Common Features
===============

1. Process group management
---------------------------
In each proposal groups are created dynamically and have static membership. In each proposal a group can be created as a partition of an existing group and as a permutation of an existing group. Each proposal allows (or suggests) that a group can be created as an explicit list of processes.

2. Provision for point-to-point communication within group.
-----------------------------------------------------------
In each proposal point-to-point communication of scope closed within a group can be expressed in terms of a reference to a group coupled with a process rank within the group.

3. Provision for collective communication within group.
------------------------------------------------------
In each proposal collective communication of scope closed within a group can be expressed in terms of a reference to a group.

4. Opacity of group and process description.
--------------------------------------------
In each proposal the description of groups and processes is opaque. Groups and processes are referred to by a handle-like object.

Separable Features
==================

1. Tag usage in point-to-point communication.
---------------------------------------------
Proposal III describes tag selection for Receive in a two-integer form. Proposals I and VII say nothing about tag usage. This feature can be placed in all Proposals I, III and VII. [Historical note: Tony did say to me that this would appear as an appendix, with our mutual recognition that it can equally appear as a feature of any of the proposals.]

2. Tag usage in collective communication.
-----------------------------------------
Proposal III suggests that tag should be used as an argument to collective communication where this will assist debugging. Proposals I and VII say nothing about usage. This feature can be placed in all Proposals I, III and VII.

3. Context or Group cache
-------------------------
Proposal VII describes a cache facility associated with contexts and groups. Proposal III describes a similar cache facility associated with groups. This feature can be placed in all Proposals I, III and VII.

4. Opaque object (descriptor) transmission.
-------------------------------------------
Proposal VII suggests that opaque object transmission can be provided by integration with transmission of typed data. Proposal III suggests that opaque transmission is provided by a mechanism for flattening a descriptor into a memory buffer. These are just details of different ways of providing the feature. This feature can be placed in Proposals III and VII. This feature cannot be placed in Proposal I.

5. Context registry.
--------------------
Proposal III describes a context name registry service. Proposal VII indicates that such a service would be useful. This feature can be placed in Proposals III and VII. This feature cannot be placed in Proposal I.

Concept Differences.
====================

1. Concept of CONTEXT and GROUP
-------------------------------
In Proposal I CONTEXT and GROUP are identical concepts and are not distinguished. In Proposal III CONTEXT is a lower degree concept than GROUP. The GROUP concept inherits aspects of the CONTEXT concept. In Proposal VII CONTEXT is a higher concept than GROUP. The CONTEXT concept inherits aspects of the GROUP concept.

2. Scope of point-to-point communication.
-----------------------------------------
In Proposal I the scope of point-to-point communication is limited to the CONTEXT. Processes which are members of distinct groups can only communicate through a common ancestor group. In Proposals III and VII the scope of point-to-point communication is not limited. Processes which are members of distinct groups can communicate without reference to a common ancestor group.

3. Transmission of group or context.
------------------------------------
In Proposal I the CONTEXT cannot be transmitted from one process to another. In Proposals VII and III both CONTEXT and GROUP can be transmitted from one process to another. In Proposal VII PROCESS can also be transmitted (Proposal III suggests such but makes no specific provision, presumably a small oversight?)

Detail differences.
===================

1. Manifestation of context
---------------------------
In Proposals I and VII context is an opaque object. In Proposal III context is an integer(?).

2. Deletion of group.
---------------------
In Proposals VII and III groups can be deleted. In Proposal I there is no provision for group deletion (possibly a small oversight?).

3. Duplication of group.
------------------------
In Proposals I and III there is explicit provision for duplication of an existing group to form a new (distinct, homomorphic) group. In Proposal VII there is no such provision, as similar functionality is provided by the context (although the provision for group partition, permutation and definition can be used to create a snapshot copy of a group).

4. Global shared variables.
---------------------------
Proposals I and VII do not require global shared variables. Proposal III requires a global shared variable (which can be implemented as such or of course in the traditional approach as a global service process.)

5. Process identifier addressed communication.
----------------------------------------------
Proposal I does not make provision for process identifier addressed communication. Proposal III makes provision for process identifier addressed communication within multiple distinct tag spaces. Proposal VII makes provision for process identifier addressed communication within a single distinct tag space.

6. Inter-group communication.
-----------------------------
Proposal I does not provide inter-group communication as it limits the scope of point-to-point communication to be closed within a group. Proposal VII provides inter-group communication in an addressing form specified by sender (receiver) group, receiver (sender) group and sender (receiver) rank. Proposal III provides inter-group communication as process identifier addressed communication.
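
[Editor's note: the following is a minimal, runnable C sketch of the "Common Features" above: a group as an ordered list of process identifiers with static membership, created as a partition of a parent group, and point-to-point addressing expressed as (group, rank). All names and types are invented purely for illustration; this is not the interface of Proposal I, III or VII.]

    #include <stdio.h>

    /* Toy model of a process group: an ordered list of global process
       ids.  A process's rank is simply its index in that list.         */
    typedef struct { int nproc; int pid[8]; } toy_group;

    /* Create a subgroup as a partition of 'parent': members whose
       'color' matches 'mycolor' are kept, in parent-rank order, so the
       new group has static membership fixed at creation time.          */
    static toy_group partition(const toy_group *parent, const int *color, int mycolor)
    {
        toy_group g = { 0, {0} };
        for (int r = 0; r < parent->nproc; r++)
            if (color[r] == mycolor)
                g.pid[g.nproc++] = parent->pid[r];
        return g;
    }

    /* Point-to-point addressing "closed within a group": a (group, rank)
       pair resolves to a global process id.                             */
    static int resolve(const toy_group *g, int rank) { return g->pid[rank]; }

    int main(void)
    {
        toy_group all  = { 4, {10, 11, 12, 13} };  /* four processes, global ids */
        int color[4]   = { 0, 1, 0, 1 };           /* split them into two groups */
        toy_group even = partition(&all, color, 0);

        /* "send to rank 1 of the subgroup" really means global process 12 */
        printf("rank 1 of the subgroup -> global process id %d\n", resolve(&even, 1));
        return 0;
    }

[In the real proposals the group description is opaque, so user code would hold only a handle to such an object rather than the structure itself, leaving the representation to the implementation.]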
----------------------------------------------------------------------

/--------------------------------------------------------\
e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 12:15:37 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA20646; Thu, 25 Mar 93 12:15:37 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25731; Thu, 25 Mar 93 12:15:00 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 12:14:59 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25723; Thu, 25 Mar 93 12:14:55 -0500
Date: Thu, 25 Mar 93 17:14:46 GMT
Message-Id: <18221.9303251714@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Re: Silence?
To: srwheat@cs.sandia.gov (Stephen R. Wheat), mpi-context@cs.utk.edu
In-Reply-To: Stephen R. Wheat's message of Thu, 25 Mar 93 08:37 MST
Reply-To: lyndon@epcc.ed.ac.uk

Dear MPI Context Colleagues

Stephen writes:
> I'm still trying to dig through a pile of documents to find
> which one I'm supposed to "be non-silent about" and which
> ones are too old.

I guess plenty of subcommittee members will have this difficulty. I believe that I can be of service by providing clarification.

The chapter(s) from this subcommittee which (to my knowledge) Tony has sent for inclusion in the MPI draft were circulated by Tony to the subcommittee mail list. The messages of interest are:

  Date: Wed, 24 Mar 93 10:57:27 CST
  From: Tony Skjellum
  Subject: draft of MPI Context proposals (Latex)

  Date: Wed, 24 Mar 93 11:04:43 CST
  From: Tony Skjellum
  Subject: postscript of draft (uuencoded)

I also refer you to a helpful summary sent out by myself :-) The message of interest is:

  Date: Thu, 25 Mar 93 15:44:59 GMT
  From: L J Clarke
  Subject: Helpful Summary of Contexts Proposals

> When you get 3 or so of these a day
> out of context (pun not intended), i.e., not an author,
> it's hard to keep up with it. By the time one doc
> is printed it is already history.

Yes, this has been a busy mail list recently, and I do apologise for having worked so hard on the MPI communication contexts :-) If you have time and enthusiasm I expect that you would find tracing through the historical evolution of these documents over the past week both interesting and revealing.
Best Wishes
Lyndon

/--------------------------------------------------------\
e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 12:33:30 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21280; Thu, 25 Mar 93 12:33:30 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26541; Thu, 25 Mar 93 12:33:01 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 12:33:00 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26533; Thu, 25 Mar 93 12:32:59 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA07552; Thu, 25 Mar 93 11:27:02 CST
Date: Thu, 25 Mar 93 11:27:02 CST
From: Tony Skjellum
Message-Id: <9303251727.AA07552@Aurora.CS.MsState.Edu>
To: tony@Aurora.CS.MsState.Edu, mpi-context@cs.utk.edu, lyndon@epcc.ed.ac.uk
Subject: Re: Silence?

Jack told me that it is essential that we have our act together as far as possible by the meeting. A straw poll by e-mail might be meaningless, as you say, because of small interactions (many readers, few writers).

Educationally speaking, I would like to present all three proposals to the whole committee, to explain differences and similarities. Since our act is together now, it would be educational/beneficial not to exclude any one of the three.

My straw poll was following Jack's advice. If there is no strong sentiment for it, we will not have it. Comments from everyone, please?

I had assumed, at the moment of the conception of the straw poll, that we might have 5 rather than 3 proposals! Three is a manageable presentation. They are sufficiently different that I do not see a way to merge them, and preserve their intended properties.

- Tony

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 12:35:44 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21339; Thu, 25 Mar 93 12:35:44 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26701; Thu, 25 Mar 93 12:35:25 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 12:35:23 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26684; Thu, 25 Mar 93 12:35:22 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA07560; Thu, 25 Mar 93 11:29:14 CST
Date: Thu, 25 Mar 93 11:29:14 CST
From: Tony Skjellum
Message-Id: <9303251729.AA07560@Aurora.CS.MsState.Edu>
To: tony@aurora.cs.msstate.edu, lyndon@epcc.ed.ac.uk
Subject: Re: proposal iii
Cc: mpi-context@cs.utk.edu

----- Begin Included Message -----

From lyndon@epcc.ed.ac.uk Thu Mar 25 06:16:18 1993
Date: Thu, 25 Mar 93 12:21:59 GMT
From: L J Clarke
Subject: proposal iii
To: tony@aurora.cs.msstate.edu
Reply-To: lyndon@epcc.ed.ac.uk
Content-Length: 950

Private message
---------------

Dear Tony

Regarding Proposals I, III and VII I will be prepared to make value judgement oriented comments over the next couple of days.

Whereas the comparisons of Proposal III versus I and VII which you have included in III are useful, I believe that the value judgments which you also included are inappropriate to the document, and unhelpful to the process of the subcommittee (not to mention unfair on the authors of the other proposals especially Rik and myself).
I hope that you will endeavour to edit such material out of your proposal and resubmit to Steve Otto for circulation in the MPI draft.

Best Wishes
Lyndon

/--------------------------------------------------------\
e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
\--------------------------------------------------------/

----- End Included Message -----

Lyndon,

I disagree and I will not remove them. Feel free to add your own conclusion comments to your proposal and ask Marc to do so for his. I find them completely appropriate.

- Tony

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 12:40:13 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21397; Thu, 25 Mar 93 12:40:13 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26868; Thu, 25 Mar 93 12:39:39 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 12:39:37 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26850; Thu, 25 Mar 93 12:39:35 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA07566; Thu, 25 Mar 93 11:33:39 CST
Date: Thu, 25 Mar 93 11:33:39 CST
From: Tony Skjellum
Message-Id: <9303251733.AA07566@Aurora.CS.MsState.Edu>
To: mpi-comm@cs.utk.edu, mpi-context@cs.utk.edu, lyndon@epcc.ed.ac.uk
Subject: Re: Helpful Summary of Contexts Proposals
Cc: tony@aurora.cs.msstate.edu, nbm@castle.ed.ac.uk, bobf@epcc.ed.ac.uk

Thank you for this additional work, Lyndon. Why don't we include this as the preface to our proposal? It looks good, but I will have to read it in detail, before rendering my complete view on this matter. Who are these other, new cc people...

- Tony

----- Begin Included Message -----

From owner-mpi-comm@CS.UTK.EDU Thu Mar 25 10:07:51 1993
X-Resent-To: mpi-comm@CS.UTK.EDU ; Thu, 25 Mar 1993 10:45:59 EST
Date: Thu, 25 Mar 93 15:44:59 GMT
From: L J Clarke
Subject: Helpful Summary of Contexts Proposals
To: mpi-comm@cs.utk.edu, mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk
Cc: tony@aurora.cs.msstate.edu, nbm@castle.ed.ac.uk, bobf@epcc.ed.ac.uk
Content-Length: 8050

[The text of this included message, Lyndon's "Helpful Summary of Contexts Proposals", is identical to the message reproduced in full earlier in this archive and is omitted here.]
----- End Included Message -----

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 12:42:23 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21429; Thu, 25 Mar 93 12:42:23 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27079; Thu, 25 Mar 93 12:41:54 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 12:41:53 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from marge.meiko.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27031; Thu, 25 Mar 93 12:41:07 -0500
Received: from hub.meiko.co.uk by marge.meiko.com with SMTP id AA02886 (5.65c/IDA-1.4.4); Thu, 25 Mar 1993 12:41:03 -0500
Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk (4.1/SMI-4.1) id AA18733; Thu, 25 Mar 93 17:40:59 GMT
Date: Thu, 25 Mar 93 17:40:59 GMT
From: jim@meiko.co.uk (James Cownie)
Message-Id: <9303251740.AA18733@hub.meiko.co.uk>
Received: by float.co.uk (5.0/SMI-SVR4) id AA07372; Thu, 25 Mar 93 17:37:30 GMT
To: snir@watson.ibm.com
Cc: mpi-pt2pt@cs.utk.edu
Cc: mpi-context@cs.utk.edu
Subject: PT2PT draft (MARCH 23)
Content-Length: 5131

Marc,

[This is a reconstruction from memory of some mail I sent this morning, which seems to have disappeared... If anyone got the original they can test how good my memory is by comparing the two mails :-)]

Here are some comments and queries on the current (23 March) point-to-point proposal. To avoid duplication of lots of text I will reference the proposal by page number. Apologies to those of you who haven't printed it !

Page 3 : Discussion of options for Lists of Handles

My preference is for option 3 (a separate length parameter). I prefer this over 1 (length as first element) because this leads to off-by-one errors, or a funny structure in C with an arbitrary sized array. I prefer it over 2 (a delimited list) because it avoids a whole pass over the list to work out its length when this is required (e.g. if a copy needs to be made; cf. the horrible C code

    char * copy = strcpy(malloc(strlen(string)+1), string);

which accesses every byte of the string twice). [Ed. note: a toy C comparison of options 2 and 3 is appended after this message.]

Page 4: Discussion of named constants in FORTRAN.

The F90 discussion should mention MODULEs (e.g. "... can be made available via an INCLUDE file, or MODULE.") PLEASE PLEASE PLEASE don't use character strings. This is guaranteed to be slow. I'd rather have literal values in place than strings !

Page 8 et seq: Contexts.

I don't understand how to implement your context model in the way you seem to be describing. On page 8 you suggest that you can check the context at transmission, and avoid a collective agreement algorithm at context creation. Similarly on page 10 in the implementation note you say "No communication is needed to create a new context beyond a barrier synchronisation".

Surely you MUST check the context at the receiver (you seemed to agree with this in previous mail in MPI-context), therefore the sender and receiver must agree on the context name. (Or at least the sender must know the value by which the context is known at the receiver, which could be different for each receiver if you like [I don't !]) To achieve this surely you need a group co-operation on context creation. Consider

                    Process number, i.e.
      Rank in       ALL        0    1
                    Group 0    0    1
                    Group 1    1    0

The number in the table is the rank of the process in the group. Certainly when process 1 receives a message from zero he must know the group/context from which it was sent, since the result of the sender inquiry will be different depending on the group/context.

Even if you only allow partitioning, I don't think you can avoid needing a cooperation; consider a scenario like this

                  Process number
    Time 0        0   1   2
    Time 1        0 | 1   2        Partition of all
    Time 2            1 | 2        Partition of subgroup
    Time 3        0   1 | 2        Partition of all

Where the | denotes a partition of the group. Now with only a sequence number process 0 has been involved in 2 partitions, but process 1 (which should be in the same group) has been involved in 3. Ah, you can get the right answer by using a count of the partitions of the parent group, and its depth in the tree. (Somehow...) Anyway it all falls apart when you let in the group construction by list (I believe) !

Page 12: I agree that we shouldn't REQUIRE dynamic process creation.

Page 13: A global errno is the CLASSIC example of global data which gives threads a problem. Do we REALLY want to do it like this ?

Page 14: Data types

I think we should have both MPI_CHAR and MPI_BYTE. The difference being that in a heterogeneous environment MPI_CHAR would be translated according to the local character sets (e.g. ASCII -> EBCDIC), MPI_BYTE would be an 8 bit integer. What should we do about Fortran 90 KINDs ?

Page 15: Data buffers.

Vector buffer : Do we allow a zero or negative inter-block stride ? Zero is a neat way to perform a fill operation...

Page 17: Last line on the page... What is the (4,2,1) ? It's not Fortran 77. Looks a bit like an F90 array constructor ...

Page 18: Data conversion

You state "No data conversion occurs when data is moved from a sender buffer into a message". In a heterogeneous environment surely this is an implementation issue. The implementation should be free to translate at source, at dest, or on the moon if it likes.

Page 36: You should delete "out of either process' address space." The example only requires buffering, it doesn't matter where the implementation chooses to do it.

General question
================

Where is the store which is referenced by handles allocated ? At present it appears that all of this (even for ephemeral objects) is MPI system managed. I would very much like it to be possible to have the user manage this store, as she can often do it cheaper (e.g. allocating ephemeral objects on the stack).

Hmmm... came out more or less the same I think.
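
[Editor's note: a small, runnable C sketch of the "Lists of Handles" options Jim compares on page 3. The type and function names are invented for illustration and are not taken from the draft; the point is only that a delimited list (option 2) forces an extra pass over the list to discover its length, just like the strlen()/strcpy() example, while a separate count parameter (option 3) does not.]

    #include <stddef.h>

    typedef int toy_handle;                 /* stand-in for an opaque handle       */
    #define TOY_HANDLE_NULL (-1)            /* sentinel terminating option-2 lists */

    /* Option 2 (delimited list): any operation that needs the length must
       first walk the whole list, just as strlen() must walk the string.   */
    static size_t list_length(const toy_handle *list)
    {
        size_t n = 0;
        while (list[n] != TOY_HANDLE_NULL)
            n++;
        return n;
    }

    /* Option 3 (separate length parameter): the count travels with the
       call, so a single pass suffices and no handle value is reserved.    */
    static int first_ready(int count, const toy_handle *list)
    {
        for (int i = 0; i < count; i++)
            if (list[i] >= 0)               /* toy notion of "ready"               */
                return i;
        return -1;
    }

    int main(void)
    {
        toy_handle reqs[] = { 7, 3, 9, TOY_HANDLE_NULL };
        /* With option 2 we pay for an extra traversal before the real work: */
        int idx = first_ready((int)list_length(reqs), reqs);
        return idx < 0;
    }

[Option 1, length carried as the first element of the list, avoids the extra pass but invites the off-by-one errors Jim mentions, since element 0 is then a count rather than a handle.]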
-- Jim

James Cownie
Meiko Limited             Meiko Inc.
650 Aztec West            Reservoir Place
Bristol BS12 4SD          1601 Trapelo Road
England                   Waltham MA 02154
Phone : +44 454 616171    +1 617 890 7676
FAX   : +44 454 618188    +1 617 890 5042
E-Mail: jim@meiko.co.uk or jim@meiko.com

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 12:52:33 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21694; Thu, 25 Mar 93 12:52:33 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27592; Thu, 25 Mar 93 12:51:53 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 12:51:52 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from fslg8.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27577; Thu, 25 Mar 93 12:51:50 -0500
Received: by fslg8.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA28820; Thu, 25 Mar 93 17:51:46 GMT
Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA03192; Thu, 25 Mar 93 10:50:29 MST
Date: Thu, 25 Mar 93 10:50:29 MST
From: hender@macaw.fsl.noaa.gov (Tom Henderson)
Message-Id: <9303251750.AA03192@macaw.fsl.noaa.gov>
To: mpi-context@cs.utk.edu
Subject: Re: Silence?

> Jack told me that it is essential that we have our act together as far
> as possible by the meeting. A straw poll by e-mail might be meaningless, as you
> say, because of small interactions (many readers, few writers).
>
> My straw poll was following Jack's advice. If there is no strong sentiment
> for it, we will not have it. Comments from everyone, please?
>
> I had assumed, at the moment of the conception of the straw poll, that we
> might have 5 rather than 3 proposals! Three is a manageable presentation.
> They are sufficiently different that I do not see a way to merge them,
> and preserve their intended properties.
>
> - Tony

I think an email straw poll is a good idea. At least you'll get some information.

Tom

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 12:53:57 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21701; Thu, 25 Mar 93 12:53:57 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27647; Thu, 25 Mar 93 12:53:38 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 12:53:37 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27639; Thu, 25 Mar 93 12:53:34 -0500
Date: Thu, 25 Mar 93 17:53:29 GMT
Message-Id: <18278.9303251753@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Re: proposal iii
To: Tony Skjellum
In-Reply-To: Tony Skjellum's message of Thu, 25 Mar 93 11:29:14 CST
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-context@cs.utk.edu

Tony

I am disappointed that you find it necessary to circulate the private message publicly. I shall reply to you personally. I believe that the point I made is valid. Since you advise adding comments to the conclusion section of the chapters, will you also advise as to the mechanism, considering that the chapters have gone to the MPI draft editor for inclusion in the draft.

Best Wishes
Lyndon

>
>
> ----- Begin Included Message -----
>
> >From lyndon@epcc.ed.ac.uk Thu Mar 25 06:16:18 1993
> Date: Thu, 25 Mar 93 12:21:59 GMT
> From: L J Clarke
> Subject: proposal iii
> To: tony@aurora.cs.msstate.edu
> Reply-To: lyndon@epcc.ed.ac.uk
> Content-Length: 950
>
> Private message
> ---------------
>
> Dear Tony
>
> Regarding Proposals I, III and VII I will be prepared to make value
> judgement oriented comments over the next couple of days.
>
> Whereas the comparisons of Proposal III versus I and VII which you have
> included in III are useful, I believe that the value judgments which you
> also included are inappropriate to the document, and unhelpful to the
> process of the subcommittee (not to mention unfair on the authors of the
> other proposals especially Rik and myself).
>
> I hope that you will endeavour to edit such material out of your
> proposal and resubmit to Steve Otto for circulation in the MPI draft.
>
>
> Best Wishes
> Lyndon
>
> /--------------------------------------------------------\
> e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
> c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
> \--------------------------------------------------------/
>
>
>
> ----- End Included Message -----
> Lyndon,
>
> I disagree and I will not remove them. Feel free to add your own
> conclusion comments to your proposal and ask Marc to do so for his.
> I find them completely appropriate.
>
> - Tony
>

/--------------------------------------------------------\
e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 12:58:46 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21783; Thu, 25 Mar 93 12:58:46 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27841; Thu, 25 Mar 93 12:58:28 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 12:58:27 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27833; Thu, 25 Mar 93 12:58:23 -0500
Date: Thu, 25 Mar 93 17:58:13 GMT
Message-Id: <18293.9303251758@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Re: Silence?
To: hender@macaw.fsl.noaa.gov (Tom Henderson), mpi-context@cs.utk.edu
In-Reply-To: Tom Henderson's message of Thu, 25 Mar 93 10:50:29 MST
Reply-To: lyndon@epcc.ed.ac.uk

I will of course participate in an email straw poll.

Lyndon

/--------------------------------------------------------\
e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||)
c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 13:43:15 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA22825; Thu, 25 Mar 93 13:43:15 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29945; Thu, 25 Mar 93 13:42:40 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 13:42:38 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from msr.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29937; Thu, 25 Mar 93 13:42:37 -0500
Received: by msr.EPM.ORNL.GOV (5.67/1.34) id AA01033; Thu, 25 Mar 93 13:42:34 -0500
Date: Thu, 25 Mar 93 13:42:34 -0500
From: geist@msr.EPM.ORNL.GOV (Al Geist)
Message-Id: <9303251842.AA01033@msr.EPM.ORNL.GOV>
To: mpi-context@cs.utk.edu
Subject: Re: Silence?

> I have no mail in my inbox...

I'm with Tom. I haven't finished reading it yet.
(-:

-----------------------------
   __o         /\      Al Geist
 _`\<,_       /\/ \    Oak Ridge National Laboratory
(_)/ (_)     /    \    (615) 574-3153   gst@ornl.gov
* * * * * * * * * *
-----------------------------

From owner-mpi-context@CS.UTK.EDU Thu Mar 25 13:59:47 1993
Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA23332; Thu, 25 Mar 93 13:59:47 -0500
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00756; Thu, 25 Mar 93 13:58:38 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 13:58:36 EST
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00747; Thu, 25 Mar 93 13:58:34 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA07654; Thu, 25 Mar 93 12:52:38 CST
Date: Thu, 25 Mar 93 12:52:38 CST
From: Tony Skjellum
Message-Id: <9303251852.AA07654@Aurora.CS.MsState.Edu>
To: mpi-comm@cs.utk.edu, mpi-context@cs.utk.edu, lyndon@epcc.ed.ac.uk
Subject: Re: Helpful Summary of Contexts Proposals
Cc: tony@aurora.cs.msstate.edu, nbm@castle.ed.ac.uk, bobf@epcc.ed.ac.uk

Lyndon, as you know I was immensely under the gun yesterday, and my 9-hour trip was quite an ordeal, but I did get the sub-committee's proposal in on time, and it is basically quite good. I do feel that it is appropriate to include comparisons and even criticisms in the conclusions, but it was not done evenly, since mine was last. To this extent, I see why you did not like that. In the same light, I can see why we should drop your name and Rik's name from proposal III, as this might unfairly indicate your support for proposal III (which is, admittedly, an extended version of the programming model I support most, and is what Zipcode does, plus some additions).

Please restrain yourself from completely dominating this subcommittee, because of the remarkable amount of time you are able to devote to it. To be fair, I cannot devote more than 2hr/day, and you are able to devote something like 12hr/day to it. People at the SIAM meeting were overwhelmed by the volume of mail you generate on different topics. You are a star performer, but you are also very demanding, and I must say that I have had some aggravations in these last weeks, but not only from you. I am trying not to push my personal viewpoint too hard. Please work from a viewpoint of cooperation with me, as we move towards the meeting next week. In this vein, see the next paragraph.

I will read your summary, but I already agree (note: agree) to drop all conclusion comments from all three subproposals that are "judgemental" (a loaded word). Since we are making a report out of this, the comparison should go in as latex. I will give this to Otto today (I will give it to him). Do you want me to latex your contribution, or will you (it is not too late there)? Tell me.

A la Frankie & Johnny, I have to hang up now, since there is more mail from you to read :-)... I will not respond to that which is outdated by this letter. Please work on a latex version of your excellent new contribution, which also makes our joint report much better. I thank you.

PS For the rest of the committee, the new contribution by Lyndon does not invalidate yesterday's tripartite proposal; it will merely be included in it.
----- Begin Included Message -----

From owner-mpi-comm@CS.UTK.EDU Thu Mar 25 10:07:51 1993
X-Resent-To: mpi-comm@CS.UTK.EDU ; Thu, 25 Mar 1993 10:45:59 EST
Date: Thu, 25 Mar 93 15:44:59 GMT
From: L J Clarke
Subject: Helpful Summary of Contexts Proposals
To: mpi-comm@cs.utk.edu, mpi-context@cs.utk.edu
Reply-To: lyndon@epcc.ed.ac.uk
Cc: tony@aurora.cs.msstate.edu, nbm@castle.ed.ac.uk, bobf@epcc.ed.ac.uk
Content-Length: 8050

[The text of this included message, Lyndon's "Helpful Summary of Contexts Proposals", is identical to the message reproduced in full earlier in this archive and is omitted here.]
Tag usage in collective communication. ----------------------------------------- Proposal III suggests that tag should be used as an argument to collective communication where this will assist debugging. Proposals I and VII say nothing about tag usage. This feature can be placed in all Proposals I, III and VII. 3. Context or Group cache ------------------------- Proposal VII describes a cache facility associated with contexts and groups. Proposal III describes a similar cache facility associated with groups. This feature can be placed in all Proposals I, III and VII. 4. Opaque object (descriptor) transmission. ------------------------------------------- Proposal VII suggests that opaque object transmission can be provided by integration with transmission of typed data. Proposal III suggests that opaque transmission is provided by a mechanism for flattening a descriptor into a memory buffer. These are just details of different ways of providing the feature. This feature can be placed in Proposals III and VII. This feature cannot be placed in Proposal I. 5. Context registry. -------------------- Proposal III describes a context name registry service. Proposal VII indicates that such a service would be useful. This feature can be placed in Proposals III and VII. This feature cannot be placed in Proposal I. Concept Differences. ==================== 1. Concept of CONTEXT and GROUP ------------------------------- In Proposal I CONTEXT and GROUP are identical concepts and are not distinguished. In Proposal III CONTEXT is a lower degree concept than GROUP. The GROUP concept inherits aspects of the CONTEXT concept. In Proposal VII CONTEXT is a higher concept than GROUP. The CONTEXT concept inherits aspects of the GROUP concept. 2. Scope of point-to-point communication. ----------------------------------------- In Proposal I the scope of point-to-point communication is limited to the CONTEXT. Processes which are members of distinct groups can only communicate through a common ancestor group. In Proposals III and VII the scope of point-to-point communication is not limited. Processes which are members of distinct groups can communicate without reference to a common ancestor group. 3. Transmission of group or context. ------------------------------------ In Proposal I the CONTEXT cannot be transmitted from one process to another. In Proposals VII and III both CONTEXT and GROUP can be transmitted from one process to another. In Proposal VII PROCESS can also be transmitted (Proposal III suggests such but makes no specific provision, presumably a small oversight?) Detail differences. =================== 1. Manifestation of context --------------------------- In Proposals I and VII context is an opaque object. In Proposal III context is an integer(?). 2. Deletion of group. --------------------- In Proposals VII and III groups can be deleted. In Proposal I there is no provision for group deletion (possibly a small oversight?). 3. Duplication of group. ------------------------ In Proposals I and III there is explicit provision for duplication of an existing group to form a new (distinct, homomorphic) group. In Proposal VII there is no such provision as similar functionality is provided by the context (although the provision for group partition, permutation and definition can be used to create a snapshot copy of a group). 4. Global shared variables. --------------------------- Proposals I and VII do not require global shared variables. 
Proposal III requires a global shared variable (which can be implemented as such or of course in the traditional approach as a global service process.) 3. Process identifier addressed communication. ---------------------------------------------- Proposal I does not make provision for process identifier addressed communication. Proposal III makes provision for process identifier addressed communication within multiple distinct tag spaces. Proposal VII makes provision for process identifier addressed communication within a single distinct tag space. 5. Inter-group communication. ----------------------------- Proposal I does not provide inter-group communication as it limits the scope of point-to-point communication to be closed within a group. Proposal VII provides inter-group communication in an addressing form specified by sender (receiver) group, receiver (sender) group and sender (receiver) rank. Proposal III provides inter-group communication as process identifier addressed communication. ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Thu Mar 25 14:04:15 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA23478; Thu, 25 Mar 93 14:04:15 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00948; Thu, 25 Mar 93 14:03:21 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 14:03:20 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00940; Thu, 25 Mar 93 14:03:19 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA07664; Thu, 25 Mar 93 12:57:17 CST Date: Thu, 25 Mar 93 12:57:17 CST From: Tony Skjellum Message-Id: <9303251857.AA07664@Aurora.CS.MsState.Edu> To: tony@aurora.cs.msstate.edu, lyndon@epcc.ed.ac.uk, tony@Aurora.CS.MsState.Edu Subject: Re: proposal iii Cc: mpi-context@cs.utk.edu ----- Begin Included Message ----- From tony Thu Mar 25 11:29:14 1993 Date: Thu, 25 Mar 93 11:29:14 CST From: Tony Skjellum To: tony@aurora.cs.msstate.edu, lyndon@epcc.ed.ac.uk Subject: Re: proposal iii Cc: mpi-context@cs.utk.edu Content-Length: 1434 ----- Begin Included Message ----- >From lyndon@epcc.ed.ac.uk Thu Mar 25 06:16:18 1993 Date: Thu, 25 Mar 93 12:21:59 GMT From: L J Clarke Subject: proposal iii To: tony@aurora.cs.msstate.edu Reply-To: lyndon@epcc.ed.ac.uk Content-Length: 950 Private message --------------- Dear Tony Regarding Proposals I, III and VII I will be prepared to make value judgement oriented comments over the next couple of days. Whereas the comparisions of Proposal III versus I and VII which you have included in III are useful, I believe that the value judgments which you also included are inappropriate to the document, and unhelpful to the process of the subcommittee (not to mention unfair on the authors of the other proposals especially Rik and myself). I hope that you will endeavour to edit such material out of your proposal and resubmit to Steve Otto for circulation in the MPI draft. 
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ ----- End Included Message ----- Lyndon, I disagree and I will not remove them. Feel free to add your own conclusion comments to your and ask Marc to do so for his. I find them completely appropriate. - Tony ----- End Included Message ----- Please disregard this message. I was annoyed with Lyndon, and I had not seen his helpful summary then. I have proposed a more constructive resolution to our tiff. - Tony From owner-mpi-context@CS.UTK.EDU Thu Mar 25 20:01:56 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA03122; Thu, 25 Mar 93 20:01:56 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15845; Thu, 25 Mar 93 20:00:59 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 25 Mar 1993 20:00:58 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15835; Thu, 25 Mar 93 20:00:54 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA11159; Thu, 25 Mar 93 18:54:43 CST Date: Thu, 25 Mar 93 18:54:43 CST From: Tony Skjellum Message-Id: <9303260054.AA11159@Aurora.CS.MsState.Edu> To: otto@iliamna.cse.ogi.edu Subject: Re: To MPI Authors: Status, Macros, etc Cc: mpi-context@cs.utk.edu Steve, Here is our updated version of the context subcommittee document, adhering to your canonical format. - Tony ---------------------------------------------------------------------------- \documentstyle[twoside,11pt]{report} \pagestyle{headings} %\markright{ {\em Draft Document of the MPI Standard,\/ \today} } \marginparwidth 0pt \oddsidemargin=.25in \evensidemargin .25in \marginparsep 0pt \topmargin=-.5in \textwidth=6.0in \textheight=9.0in \parindent=2em % ---------------------------------------------------------------------- % mpi-macs.tex --- man page macros, % discuss, missing, mpifunc macros % % ---------------------------------------------------------------------- % a couple of commands from Marc Snir, modified S. 
Otto \newlength{\discussSpace} \setlength{\discussSpace}{.7cm} \newcommand{\discuss}[1]{\vspace{\discussSpace} {\small {\bf Discussion:} #1} \vspace{\discussSpace} } \newcommand{\missing}[1]{\vspace{\discussSpace} {\small {\bf Missing:} #1} \vspace{\discussSpace} } \newlength{\codeSpace} \setlength{\codeSpace}{.3cm} \newcommand{\mpifunc}[1]{\vspace{\codeSpace} {\bf #1} \vspace{\codeSpace} } % ----------------------------------------------------------------------- % A few commands to help in writing MPI man pages % \def\twoc#1#2{ \begin{list} {\hbox to95pt{#1\hfil}} {\setlength{\leftmargin}{120pt} \setlength{\labelwidth}{95pt} \setlength{\labelsep}{0pt} \setlength{\partopsep}{0pt} \setlength{\parskip}{0pt} \setlength{\topsep}{0pt} } \item {#2} \end{list} } \outer\long\def\onec#1{ \begin{list} {} {\setlength{\leftmargin}{25pt} \setlength{\labelwidth}{0pt} \setlength{\labelsep}{0pt} \setlength{\partopsep}{0pt} \setlength{\parskip}{0pt} \setlength{\topsep}{0pt} } \item {#1} \end{list} } \def\manhead#1{\noindent{\bf{#1}}} \hyphenation{RE-DIS-TRIB-UT-ABLE sub-script mul-ti-ple} \begin{document} \setcounter{page}{1} \pagenumbering{roman} \title{ {\em D R A F T} \\ Document for a Standard Message-Passing Interface} \author{Scott Berryman, {\em Yale Univ} \\ James Cownie, {\em Meiko Ltd} \\ Jack Dongarra, {\em Univ. of Tennessee and ORNL} \\ Al Geist, {\em ORNL} \\ Bill Gropp, {\em ANL} \\ Rolf Hempel, {\em GMD} \\ Bob Knighten, {\em Intel} \\ Rusty Lusk, {\em ANL} \\ Steve Otto, {\em Oregon Graduate Inst} \\ Tony Skjellum, {\em Missisippi State Univ} \\ Marc Snir, {\em IBM T. J. Watson} \\ David Walker, {\em ORNL} \\ Steve Zenith, {\em Kuck \& Associates} } \date{March 25, 1993 \\ This work was supported by ARPA and NSF under contract number \#\#\#, by the National Science Foundation Science and Technology Center Cooperative Agreement No. CCR-8809615. } \maketitle \hfuzz=5pt %\tableofcontents %\begin{abstract} %We don't have an abstract yet. %\end{abstract} \setcounter{page}{1} \pagenumbering{arabic} \label{sec:context} %======================================================================% % BEGIN "Proposal I" %======================================================================% % BEGIN "Proposal I" % Written by Marc Snir % Edited by Lyndon J. Clarke % March 1993 % \chapter{Contexts -- Proposal I} \begin{center} Marc~Snir \end{center} \section{Contexts} A {\bf context} consists of: \begin{itemize} \item A set of processes that currently belong to the context (possibly all processes, or a proper subset). \item A {\bf ranking} of the processes within that context, i.e., a numbering of the processes in that context from 0 to $n-1$, where $n$ is the number of processes in that context. \end{itemize} A process may belong to several contexts at the same time. Any interprocess communication occurs within a context, and messages sent within one context can be received only within the same context. A context is specified using a {\em context handle} (i.e., a handle to an opaque object that identifies a context). Context handles cannot be transferred for one process to another; they can be used only on the process where they where created. Follows examples of possible uses for contexts. \subsection{Loosely synchronous library call interface} Consider the case where a parallel application executes a ``parallel call'' to a library routine, i.e., where all processes transfer control to the library routine. 
If the library was developed separately, then one should beware of the possibility that the library code may receive by mistake messages send by the caller code, and vice-versa. To prevent such occurrence one might use a barrier synchronization before and after the parallel library call. Instead, one can allocate a different context to the library, thus preventing unwanted interference. Now, the transfer of control to the library need not be synchronized. \subsection{Functional decomposition and modular code development} Often, a parallel application is developed by integrating several distinct functional modules, that is each developed separately. Each module is a parallel program that runs on a dedicated set of processes, and the computation consists of phases where modules compute separately, intermixed with global phases where all processes communicate. It is convenient to allow each module to use its own private process numbering scheme, for the intramodule computation. This is achieved by using a private module context for intramodule computation, and a global context for intermodule communication. \subsection{Collective communication} MPI supports collective communication within dynamically created groups of processes. Each such group can be represented by a distinct context. This provides a simple mechanism to ensure that communication that pertains to collective communication within one group is not confused with collective communication within another group. \subsection{Lightweight gang scheduling} Consider an environment where processes are multithtreaded. Contexts can be used to provide a mechanism whereby all processes are time-shared between several parallel executions, and can context switch from one parallel execution to another, in a loosely synchronous manner. A thread is allocated on each process to each parallel execution, and a different context is used to identify each parallel execution. Thus, traffic from one execution cannot be confused with traffic from another execution. The blocking and unblocking of threads due to communication events provide a ``lazy'' context switching mechanism. This can be extended to the case where the parallel executions are spanning distinct process subsets. (MPI does not require multithreaded processes.) \discuss{ A context handle might be implemented as a pointer to a structure that consists of context label (that is carried by messages sent within this context) and a context member table, that translates process ranks within a context to absolute addresses or to routing information. Of course, other implementations are possible, including implementations that do not require each context member to store a full list of the context members. Contexts can be used only on the process where they were created. Since the context carries information on the group of processes that belong to this context, a process can send a message within a context only to other processes that belong to that context. Thus, each process needs to keep track only of the contexts that where created at that process; the total number of contexts per process is likely to be small. The only difference I see between this current definition of context, which subsumes the group concept, and a pared down definition, if that I assume here that process numbering is relative to the context, rather then being global, thus requiring a context member table. I argue that this is not much added overhead, and gives much additional needed functionality. 
\begin{itemize} \item If a new context is created by copying a previous context, then one does not need a new member table; rather, one needs just a new context label and a new pointer to the same old context member table. This holds true, in particular, for contexts that include all processes. \item A context member table makes sure that a message is sent only to a process that can execute in the context of the message. The alternative mechanism, which is checking at reception, is less efficient, and requires that each context label be system-wide unique. This requires that, to the least, all processes in a context execute a collective agreement algorithm at the creation of this context. \item The use of relative addressing within each context is needed to support true modular development of subcomputations that execute on a subset of the processes. There is also a big advantage in using the same context construct for collective communications as well. \end{itemize} } \section{Context Operations} A global context {\bf ALL} is predefined. All processes belong to this context when computation starts. MPI does not specify how processes are initially ranked within the context ALL. It is expected that the start-up procedure used to initiate an MPI program (at load-time or run-time) will provide information or control on this initial ranking (e.g., by specifying that processes are ranked according to their pid's, or according to the physical addresses of the executing processors, or according to a numbering scheme specified at load time). \discuss{If we think of adding new processes at run-time, then {\tt ALL} conveys the wrong impression, since it is just the initial set of processes.} The following operations are available for creating new contexts. {\bf \ \\ MPI\_COPY\_CONTEXT(newcontext, context)} Create a new context that includes all processes in the old context. The rank of the processes in the previous context is preserved. The call must be executed by all processes in the old context. It is a blocking call: No call returns until all processes have called the function. The parameters are \begin{description} \item[OUT newcontext] handle to newly created context. The handle should not be associated with an object before the call. \item[IN context] handle to old context \end{description} \discuss{ I considered adding a string parameter, to provide a unique identifier to the next context. But, in an environment where processes are single threaded, this is not much help: Either all processes agree on the order they create new contexts, or the application deadlocks. A key may help in an environment where processes are multithreaded, to distinguish call from distinct threads of the same process; but it might be simpler to use a mutex algorithm at each process. {\bf Implementation note:} No communication is needed to create a new context, beyond a barrier synchronization; all processes can agree to use the same naming scheme for successive copies of the same context. Also, no new rank table is needed, just a new context label and a new pointer to the same old table. } {\bf \ \\ MPI\_NEW\_CONTEXT(newcontext, context, key, index)} \begin{description} \item[OUT newcontext] handle to newly created context at calling process. This handle should not be associated with an object before the call. 
\item[IN context] handle to old context \item[IN key] integer \item[IN index] integer \end{description} A new context is created for each distinct value of {\tt key}; this context is shared by all processes that made the call with this key value. Within each new context the processes are ranked according to the order of the {\tt index} values they provided; in case of ties, processes are ranked according to their rank in the old context. This call is blocking: No call returns until all processes in the old context executed the call. Particular uses of this function are: (i) Reordering processes: All processes provide the same {\tt key} value, and provide their index in the new order. (ii) Splitting a context into subcontexts, while preserving the old relative order among processes: All processes provide the same {\tt index} value, and provide a key identifying their new subcontext. {\bf \ \\ MPI\_RANK(rank, context)} \begin{description} \item[OUT rank] integer \item[IN context] context handle \end{description} Return the rank of the calling process within the specified context. {\bf \ \\ MPI\_SIZE(size, context)} \begin{description} \item[OUT size] integer \item[IN context] context handle \end{description} Return the number of processes that belong to the specified context. \subsection{Usage note} Use of contexts for libraries: Each library may provide an initialization routine that is to be called by all processes, and that generate a context for the use of that library. Use of contexts for functional decomposition: A harness program, running in the context {\tt ALL} generates a subcontext for each module and then starts the submodule within the corresponding context. Use of contexts for collective communication: A context is created for each group of processes where collective communication is to occur. Use of contexts for context-switching among several parallel executions: A preamble code is used to generate a different context for each execution; this preamble code needs to use a mutual exclusion protocol to make sure each thread claims the right context. \discuss{ If process handles are made explicit in MPI, then an additional function needed is {\bf MPI\_PROCESS(process, context, rank)}, which returns a handle to the process identified by the {\tt rank} and {\tt context} parameters. A possible addition is a function of the form {\bf MPI\_CREATE\_CONTEXT(newcontext, list\_of\_process\_handles)} which creates a new context out of an explicit list of members (and rank them in their order of occurrence in the list). This, coupled with a mechanism for requiring the spawning of new processes to the computation, will allow to create a new all inclusive context that includes the additional processes. However, I oppose the idea of requiring dynamic process creation as part of MPI. Many implementers want to run MPI in an environment where processes are statically allocated at load-time. } % % END "Proposal I" %======================================================================% %\end{document} %\documentstyle{report} % \chapter{Contexts -- Proposal VII} \begin{center} Lyndon~J~Clarke \& Rik~J~Littlefield \end{center} %---------------------------------------------------------------------- % Introduction \section{Introduction} This chapter is similar in basic principles to Proposal~I and includes all of the functionality of that proposal as a subset --- it extends in several ways and differs in some details. 
Certain features of other, now defunct, proposals discussed in the context subcommittee are included. In particular, this chapter proposes that: \begin{enumerate} \item Contexts and groups are not identical. A context is always associated with one group, but a group may have several contexts. Properties of groups are inherited by all of the associated contexts, for example process rank. \item Context and group descriptors can be explicitly transferred to processes that are not members of the context or group. \item In point-to-point messages, processes can be identified in any of three ways: by process, by rank in a shared context, or by ranks in separate sender and receiver contexts. \item A ``cache'' facility is provided that allows modules to attach arbitrary information to both contexts and groups. \end{enumerate} These extensions are somewhat independent of each other. The first reflects the observation that multiple modules often operate within each process group, so that context formation should be lighter weight than group formation. The second and third together provide expressive support for communication between modules within different groups of processes. The fourth allows modules to be significantly faster in common cases, without complicating their interface to the application. Much of this proposal must be viewed as recommendations to other subcommittees of {\sc mpi}, primarily the point-to-point communication subcommittee and the collective communications subcommittee. Concrete syntax is given in the style of the ANSI C host language, only for purposes of discussion. %---------------------------------------------------------------------- % Processes \section{Processes} This proposal views processes in the familiar way, as one thinks of processes in Unix or NX for example. Each process is a distinct space of instructions and data. Each process is allowed to compose multiple concurrent threads and {\sc mpi} does not distinguish such threads. \subsection*{Process Identifier} Each process is identified by a process-local {\it process handle}, which is a reference to a {\it process descriptor} of undefined size and opaque structure. In a static process model process handles can be obtained by mapping from a group (or context) and rank. In a future extension for dynamic processes, handles may be returned by process creation functions. {\sc mpi} provides a procedure which returns a handle for the calling process. \begin{verbatim} process = mpi_my_process() \end{verbatim} \subsection*{Process Creation \& Destruction} This proposal makes no statements regarding creation and destruction of processes. {\sc mpi} provides facilities for descriptor transmission allowing the user to explicitly transfer a process decriptor from one process to another. These facilities are described below. %---------------------------------------------------------------------- % Groups \section{Process Groups} This proposal views a process group as an ordered collection of (references to) distinct processes, the membership and ordering of which does not change over the lifetime of the group. The canonical representation of a group is a one-to-one map from the integers $(0, 1, \ldots, N-1)$ to handles of the $N$ processes composing the group. There may be structure associated with a process group defined by a process topology. This proposal makes no further statements regarding such structures. 
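As a purely illustrative aside (this proposal mandates only the opaque handle, not any particular representation), the canonical map described above suggests that a group descriptor could be little more than a rank-to-process-handle table; the type names in the sketch below are invented for illustration and appear nowhere else in the proposal.
\begin{verbatim}
/* One conceivable (not mandated) layout of a group descriptor.
   mpi_process_t stands for whatever an implementation uses as an
   opaque process handle. */
typedef void *mpi_process_t;

struct mpi_group_dscr {
    int            size;      /* number of member processes            */
    mpi_process_t *members;   /* members[rank], for rank = 0 .. size-1 */
};

typedef struct mpi_group_dscr *mpi_group_t;   /* group handle */
\end{verbatim}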
\subsection*{Group Identifier} Each group is identified by a process-local {\it group handle}, which is a reference to a {\it group descriptor} of undefined size and opaque structure. The initialization of {\sc mpi} makes each process a member of the ``initial'' group. {\sc mpi} provides a procedure that returns a handle to this group. \begin{verbatim} group = mpi_initial_group() \end{verbatim} {\sc mpi} provides facilities for descriptor transmission allowing the user to explicitly transfer a group descriptor from one process to another. \subsection*{Group Creation and Deletion} {\sc mpi} provides facilities which allow users to dynamically create and delete process groups. The procedures described here generate groups which are static in membership. {\sc mpi} provides a procedure which allows users to create one or more groups which are subsets of existing groups. \begin{verbatim} groupb = mpi_group_partition(groupa, key) \end{verbatim} This procedure creates one or more new groups {\tt groupb} which are distinct subsets of an existing group {\tt groupa} according to the supplied values of {\tt key}. This procedure is called by and synchronises all members of {\tt groupa}. {\sc mpi} provides a procedure which allows users to create a group by permutation of an existing group. \begin{verbatim} groupb = mpi_group_permutation(groupa, rank) \end{verbatim} This procedure creates one new group with the same membership as {\tt groupa} with a permutation of process ranking, and returns the created group descriptor in {\tt groupb}. It is called by and synchronises all members of {\tt groupa}. {\sc mpi} provides a procedure which allows users to create a group by explicit definition of its membership as a list of process handles. \begin{verbatim} group = mpi_group_definition(listofprocess) \end{verbatim} This procedure creates one new group {\tt group} with membership and ordering described by the process handle list {\tt listofprocess}. It is called by and synchronises all processes identified in {\tt listofprocess}. {\sc mpi} provides a procedure which allows users to delete user created groups. \begin{verbatim} mpi_group_deletion(group) \end{verbatim} This procedure deletes an existing group {\tt group}. It is called by and synchronises all members of {\tt group}. {\sc mpi} may provide additional procedures which allow users to construct process groups with a process group topology. \subsection*{Group Attributes} {\sc mpi} provides a procedure which accepts a valid group handle and returns the rank of the calling process within the identified group. \begin{verbatim} rank = mpi_group_rank(group) \end{verbatim} {\sc mpi} provides a procedure which accepts a valid group handle and returns the number of members, or {\it size}, of the identified group. \begin{verbatim} size = mpi_group_size(group) \end{verbatim} {\sc mpi} provides a procedure which accepts a valid group handle and process order number, or {\it rank}, and returns the valid process handle to which the supplied rank maps within the identified group. \begin{verbatim} process = mpi_group_process(group, rank) \end{verbatim} {\sc mpi} may provide additional procedures which allow users to determine the process group topology attributes. {\sc mpi} provides a group descriptor cache facility which allows the user to attach attributes to group descriptors. 
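To make the intended use of these procedures concrete, the sketch below partitions the initial group into a ``low'' half and a ``high'' half and then queries the new rank. It is a sketch only, since this proposal fixes procedure names but not exact C bindings; the handle type {\tt mpi\_group\_t} is the illustrative one used above.
\begin{verbatim}
/* Sketch only: exact C bindings are not fixed by this proposal. */
void split_example (void)
{
    mpi_group_t world, half;
    int rank, size, key, new_rank;

    world = mpi_initial_group ();
    rank  = mpi_group_rank (world);
    size  = mpi_group_size (world);

    /* Members supplying the same key land in the same new group, so
       this partitions the initial group into two halves.  The call is
       made by, and synchronises, all members of the old group. */
    key  = (rank < size / 2) ? 0 : 1;
    half = mpi_group_partition (world, key);

    /* Within each half, ranks run from 0 to mpi_group_size(half)-1. */
    new_rank = mpi_group_rank (half);
}
\end{verbatim}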
%---------------------------------------------------------------------- % Contexts \section{Communication Contexts} This proposal views a communication context as the combination of a process group and a protection mechanism that avoids collision between messages sent to different contexts. The context inherits process ranking from its associated group, referred to as a {\it frame}. Each process group may be used as a frame for multiple contexts. \subsubsection*{Context Identifier} Each context is identified by a process-local {\it context handle}, which is a reference to a {\it context descriptor} of undefined size and opaque structure. The creation of a process group allocates a {\it base context} which inherits the created group as a frame and can be thought of as an attribute of the created group. {\sc mpi} provides a procedure which accepts a valid group handle and returns a handle to the base context within the identified group. \begin{verbatim} context = mpi_base_context(group) \end{verbatim} {\sc mpi} provides facilities for descriptor transmission allowing the user to explicitly transfer a context descriptor from one process to another. \subsubsection*{Context Creation and Deletion} {\sc mpi} provides facilities which allows user to dynamically create and delete contexts in addition to the base context associated with a process group. Contexts created in this fashion can be thought of as copies of the base context of the process group. {\sc mpi} provides a procedure which allows users to create contexts. This procedure accepts the handle of a group of which the calling process is a member, and returns a handle to the new context. \begin{verbatim} context = mpi_context_creation(group). \end{verbatim} This procedure must be called loosely synchronously by all members of {\tt group}. The procedure may not actually synchronize the member processes --- it is suggested that this is a lightweight procedure that can be implemented so as to not require interprocess communication. {\sc mpi} provides a procedure which allows users to delete user created contexts. The procedure accepts a context handle that was created by the calling process and deletes the identified context. \begin{verbatim} mpi_context_deletion(context) \end{verbatim} This procedure has the same synchronization behavior as context creation. \subsubsection*{Context Attributes} {\sc mpi} provides a procedure which allows users to determine the process group that is the frame of a context. \begin{verbatim} group = mpi_context_frame(context) \end{verbatim} {\sc mpi} provides a group descriptor cache facility which allows user to attach attributes to group descriptors. %---------------------------------------------------------------------- % Descriptors \section{Descriptor Facilities} This section describes the descriptor transmission and user cache facilities. \subsection*{Transmission Facility} {\sc mpi} provides a mechanism whereby the user can transmit a valid descriptor in a message such that the received descriptor handle is valid. This can be integrated with the capability to transmit typed messages, and it is suggested that a notional data type should be introduced for this purpose, e.g. {\tt MPI\_DSCR\_TYPE}. There are other reasonable approaches to providing this facility. The descriptor is translated as necessary to be meaningful on the destination process, storage is allocated for it, and a handle to that storage is returned. Decorations are not transmitted. 
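As an indication of how the suggested integration with typed transmission might look, the fragment below sends a group descriptor from one process to another. It is notional only: the point-to-point procedures belong to another chapter, so the names and argument lists used here ({\tt mpi\_send}, {\tt mpi\_recv}, and the count-of-one convention for {\tt MPI\_DSCR\_TYPE}) are placeholders rather than proposed syntax; {\tt mpi\_free\_dscr} is the procedure described immediately below.
\begin{verbatim}
/* Notional fragment: mpi_send/mpi_recv, receiver_process and TAG are
   placeholders; only MPI_DSCR_TYPE, MPI_WILDCARD and mpi_free_dscr are
   taken from this chapter. */
mpi_group_t group;    /* valid handle on the sending process             */
mpi_group_t remote;   /* becomes a valid handle on the receiving process */

/* sender */
mpi_send (&group, 1, MPI_DSCR_TYPE, receiver_process, TAG);

/* receiver */
mpi_recv (&remote, 1, MPI_DSCR_TYPE, MPI_WILDCARD, TAG);
/* ... use remote ... */
mpi_free_dscr (remote);  /* receiver releases the descriptor when stale */
\end{verbatim}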
Handles are guaranteed to be unique within each process --- if processes A and B independently send to process C a descriptor for an object D, then process C will get two copies of the same handle. As with all transfers of descriptors, the receiving process is responsible for releasing the descriptor and its handle when it is no longer needed or becomes stale. MPI provides a procedure which frees a descriptor. \begin{verbatim} mpi_free_dscr(handle) \end{verbatim} A descriptor registry service which allows descriptors to be identified by name would be a useful additional feature. This service can be implemented at the user level using the point-to-point chapter of {\sc mpi} and the descriptor transmission facilities. These services can be deferred in this session of {\sc mpi}. \subsection*{Cache Facility} {\sc mpi} provides a ``cache'' facility that allows an application to attach arbitrary pieces of information, called {\em decorations}, to context and group descriptors. Decorations are local to the process and are not included if the descriptor is sent to another process. This facility is intended to support optimizations such as saving persistent communication handles and recording topology-based decisions by adaptive algorithms. {\sc mpi} provides the following services related to cacheing: \begin{description} \item [Generate key:] Generate cache key. \begin{verbatim} keyval = mpi_GetDecorationKey() \end{verbatim} \item [Store decoration:] Store decoration in cache by key. \begin{verbatim} mpi_SetDecoration(handle, keyval, decoration_val, decoration_destructor_routine) \end{verbatim} \item [Retrieve decoration:] Retrieve decoration from cache by key. \begin{verbatim} mpi_TestDecoration(handle,keyval,decoration) \end{verbatim} \item [Delete decoration:] Delete decoration from cache by key. \begin{verbatim} mpi_DeleteDecoration(handle,keyval) \end{verbatim} \end{description} Each decoration consists of a pointer or a value of the same size as a pointer, and would typically be a reference to a larger block of storage managed by the module. As an example, a global operation using cacheing to be more efficient for all contexts of a group after the first call might look like this:

{\small \begin{verbatim}
static int gop_key_assigned = 0;   /* 0 only on first entry */
static MPI_KEY_TYPE gop_key;       /* key for this module's stuff */

efficient_global_op (context_handle, ...)
int context_handle;
{
  struct gop_stuff_type *gop_stuff;  /* whatever we need */
  int group_handle = mpi_context_frame(context_handle);

  if (!gop_key_assigned)   /* get a key on first call ever */
  {
    gop_key_assigned = 1;
    if ( ! (gop_key = mpi_GetDecorationKey()) )
    { MPI_abort ("Insufficient keys available"); }
  }

  if (mpi_TestDecoration (group_handle, gop_key, &gop_stuff))
  {
    /* This module has executed in this group before.
       We will use the cached information */
  }
  else
  {
    /* This is a group that we have not yet cached anything in.
       We will now do so. */
    gop_stuff = (struct gop_stuff_type *)
                  malloc (sizeof (struct gop_stuff_type));
    /* ... fill in *gop_stuff with whatever we want ... */
    mpi_SetDecoration (group_handle, gop_key, gop_stuff,
                       gop_stuff_destructor);
  }
  /* ... use contents of *gop_stuff to do the global op ... */
}

gop_stuff_destructor (gop_stuff)   /* called by MPI on group delete */
struct gop_stuff_type *gop_stuff;
{
  /* ... free storage pointed to by gop_stuff ... */
}
\end{verbatim} }
The cache facility could also be provided for process descriptors, but it is less clear how such provision would be useful. 
It is suggested that the cache store, retrieve and delete decoration procedures should fail when applied to a process descriptor handle. %---------------------------------------------------------------------- % Point-to-point \section{Point-to-Point Communication} This proposal recommends three forms for {\sc mpi} point-to-point message addressing and selection: null context; closed context; open context. It is further recommended that messages communicated in each form are distinguished such that a {\tt Send} operation of form X cannot match with a {\tt Receive} operation of form Y, requiring that form is embedded into the message envelope. The three forms are described, followed by considerations of uniform integration of these forms in the point-to-point communication chapter of {\sc mpi}. \subsection*{Null Context Form} The {\it null context\/} form contains no message context. Message selection and addressing are expressed by \begin{verbatim} (process, tag) \end{verbatim} where: {\tt process} is a process handle; {\tt tag} is a message tag. {\tt Send} supplies the {\tt process} of the receiver. {\tt Receive} supplies the {\tt process} of the sender. {\tt Receive} can wildcard on {\tt process} by supplying the wildcard descriptor handle value {\tt MPI\_WILDCARD}. In this case the receiver may have obtained the process descriptor of the sender, and the null descriptor handle {\tt MPI\_NULL} is returned in the relevant point-to-point enquiry procedure. \subsection*{Closed Context Form} The {\it closed context\/} form permits communication between members of the same context. Message selection and addressing are expressed by \begin{verbatim} (context, rank, tag) \end{verbatim} where: {\tt context} is a context handle; {\tt rank} is a process rank in the frame of {\tt context}; {\tt tag} is a message tag. The calling process must be a member of the frame of {\tt context}. {\tt Send} supplies the {\tt context} of the receiver (and sender), and the {\tt rank} of the receiver. {\tt Receive} supplies the {\tt context} of the sender (and receiver), and the rank of the sender. The {\tt (context, rank)} pair in {\tt Send} ({\tt Receive}) is sufficient to determine the process identifier of the receiver (sender). {\tt Receive} cannot wildcard on {\tt context}. {\tt Receive} can wildcard on {\tt rank} by supplying the wildcard integer {\tt MPI\_DONTCARE}. This proposal makes no statement about the provision for wildcard on {\tt tag}. \subsection*{Open Context Form} The {\it open context\/} form permits communication between members of any two contexts. Message selection and addressing are expressed by \begin{verbatim} (lcontext, rcontext, rank, tag) \end{verbatim} where: {\tt lcontext} is a context handle; {\tt rcontext} is a context handle; {\tt rank} is a process rank in the frame of {\tt rcontext}; {\tt tag} is a message tag. The calling process must be a member of the frame of {\tt lcontext} and need not be a member of the frame of {\tt rcontext}. {\tt Send} supplies the context of the sender in {\tt lcontext}, the context of the receiver in {\tt rcontext}, and the {\tt rank} of the receiver in the frame of {\tt rcontext}. {\tt Receive} supplies the context of the receiver in {\tt lcontext}, the context of the sender in {\tt rcontext}, and the {\tt rank} of the sender in the frame of {\tt rcontext}. The {\tt (rcontext, rank)} pair in {\tt Send} ({\tt Receive}) is sufficient to determine the process identifier of the receiver (sender). {\tt Receive} cannot wildcard on {\tt lcontext}. 
{\tt Receive} can wildcard on {\tt rcontext} by supplying the wildcard descriptor handle value {\tt MPI\_WILDCARD}, in which case it must also wildcard on {\tt rank} since the process descriptor of the sender cannot be determined. In this case the receiver may not have obtained the context descriptor of the sender, and the null descriptor handle {\tt MPI\_NULL} is returned in the relevant point-to-point enquiry procedure. {\tt Receive} can wildcard on {\tt rank} by supplying the wildard intger value {\tt MPI\_DONTCARE}. \subsection*{Uniform Integration} The three forms of addressing and selection described have different syntactic frameworks. We can consider integrating these forms into the point-to-point chapter of {\sc mpi} by defining a further orthogonal axis (as in the multi-level proposal of Gropp \& Lusk) which deals with form. This is at the expense of multiplying the number of {\tt Send} and {\tt Receive} procedures by a factor of three, and some further but trivial work with details of the current point-to-point chapter which uniformly assumes a single addressing and selection form. There are various approaches to unification of the syntactic frameworks which may simplify integration. Two options are now described, each based on retention and extension of the framework of the closed and open contexts forms. The framework of the open context form could be adopted and extended. The null context form is expressed as {\tt (MPI\_NULL, MPI\_NULL, process, tag)}, which is a little clumsy. The closed context form is expressed as {\tt (MPI\_NULL, context, rank, tag)}, which is marginally inconvenient. The open context form is expressed as {\tt (lcontext, rcontext, rank, tag}), which is of course natural. The framework of the closed context form could be adopted and extended. The null context form is expressed as {\tt (MPI\_NULL, process, tag)}, which is marginally inconvenient, and requires that descriptor handles are expressed as intgers. The closed context form is expressed as {\tt (context, rank, tag)}, which is of course natural. Expression of the open context form requires a little more work. We can use the {\tt context} field as ``shorthand notation'' for the {\tt (lcontext, rcontext)} pair at the expense of introducing some trickery. We define a ``duplet descriptor'' which is formally composed of two references to contexts, and provide a procedure which constructs such a descriptor given two context descriptors. Both {\tt Send} and {\tt Receive} accept a duplet descriptor in {\tt context}, are able to distinguish the duplet descriptor from a singlet descriptor, and treat the duplet as shorthand notation. It is conjectured that using this framework is the best choice for {\sc mpi}. %---------------------------------------------------------------------- % Point-to-point \section{Collective Communication} Symmetric collective communication operations are compliant with the closed context form described above. This proposal recommends that such operations accept a context descriptor which identifies the context (thus frame) in which they are to operate. {\sc mpi} does plan to describe symmetric collective communication operations. It is not possible to determine whether this proposal is sufficient to allow implementation of the collective communication chapter of {\sc mpi} in terms of the point-to-point chapter of {\sc mpi} without loss of generality, since the collective operations are not yet defined. 
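Since the collective operations themselves are not yet defined, the following fragment is only an indication of how a symmetric collective in the closed context form might be invoked; the name {\tt mpi\_broadcast} and its argument list are hypothetical and are not part of this proposal.
\begin{verbatim}
/* Hypothetical illustration: a library obtains a fresh context for its
   own traffic and performs a (not yet defined) broadcast within it.
   mpi_broadcast and its arguments are invented for this sketch. */
ctx = mpi_context_creation (group);   /* lightweight, loosely synchronous */
mpi_broadcast (ctx, 0 /* root rank */, buffer, length, type);
mpi_context_deletion (ctx);
\end{verbatim}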
Asymmetric collective communication operations, especially those in which sender(s) and receiver(s) are distinct processes, are compliant with the open context form described above. This proposal recommends that such operations accept a pair of context descriptors (a duplet descriptor) which identify the contexts (thus frames) in which they are to operate. {\sc mpi} does not plan to describe asymmetric collective communication operations. Such operations are expressive when writing programs beyond the SPMD model, which are composed of communicative functionally distinct process groups. These services can be deferred in this session of {\tt mpi}. %---------------------------------------------------------------------- % Conclusion \section{Conclusion} This chapter presented a proposal for communication contexts and process groups with {\sc mpi}. In the proposal process groups are created dynamically and are static in membership. Associated with each process group are one or more communication contexts which inherit process ranking. The recommendations for point-to-point communication are powerful. The proposal provides process addressed communication which occurs within an extended context. The proposal also contains closed context communication addressed in terms of context and rank which protects messages belonging to one context from those belonging to other contexts. The proposal also contains open context communications adressed in terms of sender context, receiver context, and process rank which provides expressive power for intercommunication between modules within different groups. The proposal is extensible to a number of features which might be included in future sessions of {\sc MPI}, for example: dynamic processes; dynamic groups; multiple group collective communications. % % END "Proposal VII" %======================================================================% % % BEGIN "Proposal III" % Anthony Skjellum % March 1993 % \chapter{Contexts -- Proposal III} \begin{center} A.~Skjellum {\em et al.} \end{center} %---------------------------------------------------------------------- % Introduction \section{Introduction} This chapter takes a slightly different approach to contexts and groups, than does Proposal~VII. It is of roughly equal conceptual ``power'' as Proposal~VII, with some differences. As appropriate, this chapter borrows directly from Proposal~VII, by Clarke and Littlefield. \begin{enumerate} \item Contexts are supported to discriminate between messages in the system. A context is a conceptual extension of the tag space into a system-defined part (not wildcardable), and a totally user-defined part (the traditional 32-bit tag). \item A context is a lower-level concept than a group, so that contexts not associated with groups are permitted. This permits the user to develop codes that build on the server model, or which build up groups dynamically (not otherwise supported by MPI1). \item Groups are used to describe cooperative communication in the system. Groups have one or more context of communication associated with them. When created, a group is given a context of communication. \item Context and group descriptors can be explicitly transferred to processes that are not members of the context or group. \item In point-to-point messages, processes can be identified in either of two ways: by opaque process identifier, or by rank in a group; in either case, communication scope is within a given context. 
\item The cache facility, allowing groups to add additional information (described in Proposal~VII) is embraced by this Proposal, with reservations as noted. The possible need to omit this cacheing feature from MPI1 should not invalidate the remainder of Proposal~VII from further consideration (severability). \end{enumerate} %---------------------------------------------------------------------- % Processes \section{Processes} This Proposal views processes in the familiar way, as one thinks of processes in Unix or NX for example. Each process is a distinct space of instructions and data. Each process is allowed to compose multiple concurrent threads and {\sc MPI\/} does not distinguish such threads. {\sc MPI\/} shall be thread-aware, but not thread supporting, so we make every attempt to make thread safe programs possible in defining what follows. \subsection*{Process Identifier} Each process is identified by an opaque process identifier, which is associable to a {\it process descriptor} of undefined size and opaque structure, through {\sc MPI} accessor calls. In a static process model process identifiers can be obtained by mapping from a group and rank. In a future extension for dynamic processes, identifiers may be returned by process creation functions. {\sc MPI} provides a procedure that returns an identifier for the calling process. \begin{verbatim} my_process = mpi_my_process() \end{verbatim} This identifier can be converted to a transmittable form by {\sc MPI} converter functions, though opaque, it is conceptually a pointer. \subsection*{Process Creation \& Destruction} This proposal makes no statements regarding creation and destruction of processes. {\sc MPI} provides facilities for identifier transmission allowing the user explicitly to transfer a process identifier from one process to another. These facilities are described below. MPI also provides means to transfer underlying information about the opaque process descriptor underlying. %---------------------------------------------------------------------- % Groups \section{Process Groups} This proposal views a process group as an ordered collection of distinct processes (via process identifiers), the membership and ordering of which does not change over the lifetime of the group. The canonical representation of a group is a one-to-one map from the integers $(0, 1, \ldots, N-1)$ to identifiers of the $N$ processes composing the group. There may be structure associated with a process group defined by a process topology. This proposal makes no further statements regarding such structures. There may be non-enumerative ways to construct and manipulate special groups, or for special machine architectures ({\em e.g.}, cohorts). This proposal makes no further statements about such special groups, other than the desirability of avoiding group-name enumeration, when possible. \subsection*{Group Identifier} Each group is identified by an opaque {\it group identifier}, which is a associable to a {\it group descriptor} of undefined size and opaque structure, through {\sc MPI} accessor functions. The initialization of {\sc MPI} makes each process a member of the ``initial'' group. {\sc MPI} provides a procedure that returns an identifier to this group. \begin{verbatim} group_ident = mpi_initial_group() \end{verbatim} {\sc MPI} provides facilities for descriptor transmission allowing the user explicitly transfer a group descriptor from one process to another. 
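For illustration only (this proposal fixes procedure names, not exact C bindings), the fragment below obtains the identifier of the calling process, the initial group, and the identifier of the process at rank 0 of that group via the accessor {\tt mpi\_group\_process} described later in this chapter; the typedefs are stand-ins for the opaque identifiers.
\begin{verbatim}
/* Illustrative stand-ins for the opaque identifiers. */
typedef void *mpi_process_id_t;
typedef void *mpi_group_id_t;

mpi_process_id_t me, leader;
mpi_group_id_t   all;

me     = mpi_my_process ();          /* identifier of the calling process */
all    = mpi_initial_group ();       /* the group every process starts in */
leader = mpi_group_process (all, 0); /* accessor described below: rank 0  */
\end{verbatim}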
\subsection*{Group Creation and Deletion} {\sc MPI} provides facilities which allow users dynamically to create and delete process groups. The procedures described here generate groups which are static in membership. {\sc MPI} provides a procedure that allows users to create one or more groups which are subsets of existing groups. \begin{verbatim} new_group = mpi_group_partition(old_group, key) \end{verbatim} This procedure creates one or more new groups {\tt new\_group} which are distinct subsets of an existing group {\tt old\_group} according to the supplied values of {\tt key}. This procedure is called by and synchronises all members of {\tt old\_group}. No overlapping is permitted, so that exactly one {\tt new\_group} is obtained in each {\tt old\_group} process. The new groups have new contexts of communication. The number of new contexts depends on the number of different key values asserted. {\sc MPI} provides a procedure which allows users to create a group by permutation of an existing group. \begin{verbatim} new_group = mpi_group_permutation(old_group, rank) \end{verbatim} This procedure creates one new group with the same membership as {\tt old\_group} with a permutation of process ranking, and returns the created group descriptor in {\tt new\_group}. It is called by and synchronises all members of {\tt old\_group}. The new group has a new context of communication. {\sc MPI} provides a procedure that allows users to create a group by explicit definition of its membership as a list of process identifiers. \begin{verbatim} new_group = mpi_group_definition(array_of_process_ids,length, context_to_use) \end{verbatim} This procedure creates one new group {\tt new\_group} with membership and ordering described by the process identifier list {\tt array\_of\_process\_ids}, of length {\tt length}. If {\tt context\_to\_use} is specified as the special context {\tt MPI\_GET\_CONTEXT}, then the system allocates the new context. Otherwise, the system trusts the user to have allocated the context legally (see below). This procedure must be called by and synchronises all processes identified in the list. A further approach to new group definition is as follows: \begin{verbatim} new_group = mpi_group_def_by_leader(leader_id, in_length, in_array_of_process_ids, context_to_use, out_array_of_process_ids, out_length) \end{verbatim} This weaker form requires all future group members to have identified only a leader (specified by {\tt leader\_id}). The leader knows the length and names of all participants. This call synchronises all participants. The semantics of {\tt context\_to\_use} are the same as above. {\sc MPI} provides a way formally to duplicate a group, in order to obtain a separate context of communication. This can be achieved by using other operations, but this procedure allows optimization in some implementations (no explicit group copy, for instance). \begin{verbatim} new_group = mpi_group_duplicate(old_group) \end{verbatim} The only difference between the old and new groups is that there is a new context of communication; a sketch of the intended library usage appears at the end of this subsection. {\sc MPI} provides a procedure which allows users to delete user created groups. \begin{verbatim} mpi_group_deletion(group) \end{verbatim} This procedure deletes an existing group {\tt group}. It is called by and synchronises all members of {\tt group}. {\sc MPI} may provide additional procedures which allow users to construct process groups with a process group topology. 
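The sketch below shows the library usage that motivates {\tt mpi\_group\_duplicate}: duplication yields a group with identical membership and ranking but a fresh context of communication, so library traffic cannot collide with caller traffic. It is a sketch only; the identifier typedef is the illustrative stand-in used earlier, and error handling is omitted.
\begin{verbatim}
/* Sketch of library initialisation using group duplication.
   mpi_group_id_t is an illustrative stand-in for the opaque identifier. */
mpi_group_id_t library_init (mpi_group_id_t callers_group)
{
    /* Called (loosely synchronously) by every member of callers_group;
       the result has the same members and ranking but a new context. */
    return mpi_group_duplicate (callers_group);
}

void library_finalize (mpi_group_id_t library_group)
{
    /* Called by, and synchronises, all members of the group. */
    mpi_group_deletion (library_group);
}
\end{verbatim}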
\subsection*{Group Attributes/Accessors} {\sc MPI} provides a procedure that accepts a valid group identifier and returns the rank of the calling process within the identified group. \begin{verbatim} rank = mpi_group_rank(group) \end{verbatim} {\sc MPI} provides a procedure that accepts a valid group identifier and returns the number of members, or {\it size}, of the identified group. \begin{verbatim} size = mpi_group_size(group) \end{verbatim} {\sc MPI} provides a procedure that accepts a valid group identifier {\it rank-in-group}, and returns the valid process identifier to which the supplied rank maps within the identified group. \begin{verbatim} process_id = mpi_group_process(group, rank) \end{verbatim} {\sc MPI} may provide additional procedures which allow users to determine the process group topology attributes. {\sc MPI} provides a group descriptor cache facility thath allows the user to attach attributes to group descriptors. See Proposal~VII for details. %---------------------------------------------------------------------- % Contexts \section{Communication Contexts} This proposal views a communication context as a partition of the tag space, which is a protection mechanism that avoids collision between messages sent between processes. Process groups have one or more contexts in {\sl MPI}. Unlike Proposal~VII, more contexts are obtained for a group using the above-discussed group creation and replication functions. Replication may only be formal for good implementations. \subsubsection*{Context Identifier} Each context is identified by an opaque process identifier. It is conceptually an integer assigned by the system to partition a large tag space into a user-defined and system-controlled subspaces. This stategy provides the minimal level of isolation needed to build large libraries, and is close to practice. \subsubsection*{Context Creation and Deletion} {\sc MPI} provides facilities that allow user dynamically to allocate and free contexts. When contexts are used with groups, these calls are not needed. For more advanced users (such as building your own dynamic groups), these calls will be used. Above, where {\tt context\_to\_use} appears as an argument, the following call would have been used to secure such a context in advance. \begin{verbatim} mpi_context_creation(number_of_contexts_wanted,array_of_contexts, number_of_contexts_provided) \end{verbatim} This call is called by any process, with no synchronization to other processes. {\sc MPI} provides a procedure that allows users to delete user- created contexts. The procedure accepts a context identifier array, containing zero or more contexts created previously in the system. \begin{verbatim} mpi_context_deletion(context_array,length) \end{verbatim} No synchronization occurs here. The user can do erroneous things by freeing contexts that are still in use. For general applications, it may be nice to have a name service for contexts (necessary for building dynamic groups and servers, for yourself). Herewith: \begin{verbatim} mpi_associate_contexts_with_name(string_name,context_array,length) mpi_disassociate_contexts_with_name(string_name) mpi_get_contexts_by_name(string_name,max_length,out_length, context_array \end{verbatim} As with context generation, the above calls assume a simple reactive, global server, or shared name space mechanism (both achieveable easily in practice). 
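The fragment below sketches the server-model usage this section has in mind: one party allocates a block of contexts and publishes them under an agreed name, and another party retrieves them. It is illustrative only; the proposal gives the call names and arguments but not the argument-passing conventions, so the use of pointers for the returned counts is an assumption, and the service name is invented.
\begin{verbatim}
/* Illustrative server-model usage; argument-passing conventions and the
   service name "my_solver_service" are assumptions for this sketch. */
int contexts[4];
int provided, found;

/* server side: allocate contexts and publish them under a name */
mpi_context_creation (4, contexts, &provided);
mpi_associate_contexts_with_name ("my_solver_service", contexts, provided);

/* client side: look the contexts up by the agreed name */
mpi_get_contexts_by_name ("my_solver_service", 4, &found, contexts);

/* later, the owner withdraws the name and frees the contexts
   (no synchronisation is implied by these calls) */
mpi_disassociate_contexts_with_name ("my_solver_service");
mpi_context_deletion (contexts, provided);
\end{verbatim}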
%----------------------------------------------------------------------
% Descriptors
\section{Descriptor Facilities}

This section describes the descriptor transmission and user cache
facilities.

\subsection*{Conversion Facility}

{\sc MPI} provides a mechanism whereby the user can convert a valid
descriptor ({\em e.g.}, a group descriptor), identified through an
identifier ({\em e.g.}, a group identifier), into a form suitable for
inclusion in a message, such that the received descriptor can be
reconstructed on the remote end. This can be integrated with message
transmission as the user sees fit, without additional complication to the
send/receive semantics of {\sc MPI}. An example follows:
\begin{verbatim}
error = mpi_group_group_transmit(group, group_buffer, max_length,
                                 act_length)
\end{verbatim}
If the buffer is not long enough to hold the information, an error occurs.
A network-independent format can be assumed in the {\tt group\_buffer}.
Cached ``attributes'' are not transmitted (see below).

\subsection*{Cache Facility}

{\sc MPI} provides a ``cache'' facility that allows an application to
attach arbitrary pieces of information, called {\em attributes}, to
context and group descriptors. Attributes are local to the process and are
not included if the descriptor is sent to another process. This facility
is intended to support optimizations such as saving persistent
communication handles and recording topology-based decisions by adaptive
algorithms.

{\sc MPI} provides the following services related to caching. We call
these items {\em attributes}; Proposal~VII calls them (equivalently)
decorations (no big difference, except naming, is anticipated).
\begin{description}
\item [Generate key:] Generate cache key.
\begin{verbatim}
keyval = mpi_get_attribute_key()
\end{verbatim}
\item [Store attribute:] Store attribute in cache by key.
\begin{verbatim}
mpi_set_attribute(handle, keyval, attribute_val,
                  attribute_destructor_routine)
\end{verbatim}
\item [Retrieve attribute:] Retrieve attribute from cache by key.
\begin{verbatim}
mpi_test_attribute(handle, keyval, attribute)
\end{verbatim}
\item [Delete attribute:] Delete attribute from cache by key.
\begin{verbatim}
mpi_delete_attribute(handle, keyval)
\end{verbatim}
\end{description}
Each attribute consists of a pointer, or a value of the same size as a
pointer, and would typically be a reference to a larger block of storage
managed by the module. Our example will appear in a later draft, because
we have semantic differences with some of the ancillary aspects of the
example of Proposal~VII.

The cache facility could also be provided for process identifiers, but it
is less clear how such provision would be useful. It is suggested that the
cache store, retrieve and delete procedures should fail when applied to a
process identifier. Implementations should use AVL trees, or similar
efficient data structures, to provide relatively fast access to
attributes.

%----------------------------------------------------------------------
% Point-to-point
\section{Point-to-Point Communication}

This proposal recommends two forms for {\sc MPI} point-to-point message
addressing and selection: by-group notation and by-process-ID notation. In
either form one is always working within a context, as this is the
fundamental message-management tactic of {\sc MPI}. Since a group always
has a context at creation, and an ALL group is anticipated, this should
prove adequate for a static process model. We disagree significantly with
Proposal~VII in what follows.
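To make the cache facility of the previous section concrete, the following
sketch shows a library caching a precomputed spanning tree on a group
descriptor. (An illustration only: the C binding, the {\tt MPI\_Group}
type, the return convention of {\tt mpi\_test\_attribute}, and the
{\tt tree\_t} helper routines are assumptions of the sketch, not part of
the proposal.)
\begin{verbatim}
/* Sketch: a library caches a private attribute on a group descriptor. */
static int tree_key = -1;

void lib_setup(MPI_Group g)
{
    tree_t *tree;

    if (tree_key < 0)
        tree_key = mpi_get_attribute_key();    /* once per process    */

    /* assume a zero return means "attribute present"                 */
    if (mpi_test_attribute(g, tree_key, &tree) != 0) {
        tree = build_spanning_tree(g);         /* hypothetical helper */
        mpi_set_attribute(g, tree_key, tree, free_spanning_tree);
    }
    /* tree is now usable by this library's collective operations     */
}
\end{verbatim}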
The two forms are described below, followed by considerations of uniform
integration of these forms in the point-to-point communication chapter of
{\sc MPI}.

\subsection*{Group-Rank Form}

The {\it group-rank\/} form permits communication between members of the
same context and group. Message selection and addressing are expressed by
\begin{verbatim}
(group, rank, tag)
\end{verbatim}
where {\tt group} is a group identifier, {\tt rank} is a process rank in
that group, and {\tt tag} is a message tag. The calling process must be a
member of {\tt group} (the frame of the context). {\tt Send} determines
the context using information in the group identifier, and does all
necessary mappings to the process identifier space. {\tt Receive} cannot
wildcard on context, so a valid matching receive must refer to the same
group information. {\tt Receive} can wildcard on {\tt rank} by supplying
the wildcard integer {\tt MPI\_DONTCARE}.

This proposal makes the following statement about the provision for
wildcard on {\tt tag}. A two-integer form of wildcard is needed for
layering: {\tt care\_bits} and {\tt dont\_care\_bits}. Tags are matched if
and only if
\begin{equation}
({\tt received\_tag}\ {\rm AND\ NOT}\ {\tt dont\_care\_bits})
\ {\rm XOR}\ {\tt care\_bits} = 0 .
\end{equation}
This general format can be used to partition the tag space for virtual
topologies or other user-defined needs, and is quite important to the
standard's flexibility.

\subsection*{Process-Identifier Form}

Communication takes place using the following parameters:
\begin{verbatim}
(context, process_identifier, tag)
\end{verbatim}
where {\tt context} is a context identifier, {\tt process\_identifier} is
a process identifier, and {\tt tag} is a message tag. The calling process
must be a member of the same context as the recipient. There is no
reference to groups here. Contexts may have been shared via a least common
ancestor prior to this call, or via the above-mentioned context naming
service. There is never wildcarding on context. Wildcarding on
{\tt process\_identifier} is through {\tt MPI\_DONTCARE}. Tag wildcarding
is through the integer pair described above.

\subsection*{Uniform Integration}

The two forms of addressing and selection described have different
syntactic frameworks. We can consider integrating these forms into the
point-to-point chapter of {\sc MPI} by defining a further orthogonal axis
(as in the multi-level proposal of Gropp \& Lusk) which deals with form.
This comes at the expense of multiplying the number of {\tt Send} and
{\tt Receive} procedures by a factor of two, plus some further but trivial
work on details of the current point-to-point chapter, which uniformly
assumes a single addressing and selection form. No further details are
really needed, other than naming that disambiguates the group-rank form
from the process-id-context form, and the naming would seem
uncontroversial.

%----------------------------------------------------------------------
% Collective
\section{Collective Communication}

Symmetric collective communication operations are compliant with the
group-rank form described above. This proposal recommends that such
operations accept a group identifier (which contains the context and other
information needed to operate correctly). We recommend that the tag
argument be included in collective calls where this could help with
debugging. {\sc MPI} does plan to describe symmetric collective
communication operations.
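For concreteness, under the recommendation above a symmetric collective
call might take the following shape. (The name and argument order are
assumptions of this sketch; the proposal does not define the collective
operations themselves.)
\begin{verbatim}
/* Sketch: the group identifier supplies context, membership and
   ordering; the tag is carried only as a debugging aid.             */
error = mpi_broadcast(buffer, count, datatype,
                      root_rank,   /* rank within group              */
                      group,       /* group identifier (has context) */
                      tag);        /* recommended for debugging      */
\end{verbatim}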
It is impossible to determine whether this proposal is sufficient to allow
implementation of the collective communication chapter of {\sc MPI} in
terms of the point-to-point chapter of {\sc MPI} without loss of
generality, since the collective operations are not yet defined.

Asymmetric collective communication operations, especially those in which
sender(s) and receiver(s) are distinct processes, should be made compliant
with the group-rank form described above.

{\sc MPI1} should forego non-blocking collective operations, but ask
vendors to support thread models in lieu of such operations.

%----------------------------------------------------------------------
% Conclusion
\section{Conclusion}

This proposal is substantially different from either Proposal~VII or
Proposal~I. Contexts are integer tag partitions here, and are
fundamentally lower-level objects than groups. Groups have contexts in
this proposal, but not all operations require the presence of group scope.
To avoid invidious comparisons here, more substantial comparisons of all
three proposals are deferred to the following appendix.

\appendix
\chapter{Summary of context subcommittee proposals}

The three proposals in the context subcommittee share common features, and
have differences both in concept and detail. Two of these proposals
contain features which are ``separable'' and could equally appear as
components of one or more other proposals. This summary classifies
features of the proposals as: Common Features; Separable Features; Concept
Differences; Detail Differences.

Hopefully the summary will: (a) help us to discuss the important
differences between the proposals and make agreements on how we should
proceed with respect to those issues; (b) help us to isolate the separable
points and make separate agreements on those issues. I hope that the
summary is both accurate and complete. Please make corrections and
additions if you discover such. I apologise in advance for my errors,
which are surely inevitable.

\section{Common Features}

\subsection{Process group management}
In each proposal groups are created dynamically and have static
membership. In each proposal a group can be created as a partition of an
existing group and as a permutation of an existing group. In each proposal
there is a defined group containing all (or perhaps all initial)
processes. Each proposal allows (or suggests) that a group can be created
as an explicit list of processes.

\subsection{Provision for point-to-point communication within group}
In each proposal point-to-point communication of scope closed within a
group can be expressed in terms of a reference to a group coupled with a
process rank within the group.

\subsection{Provision for collective communication within group}
In each proposal collective communication of scope closed within a group
can be expressed in terms of a reference to a group.

\subsection{Opacity of group and process description}
In each proposal the description of groups and processes is opaque. Groups
and processes are referred to by a handle-like object.

\subsection{Fields of point-to-point communication}
In each proposal point-to-point communication accepts three fields,
inclusive of message tag, in addressing and selection.

\section{Separable Features}

\subsection{Tag usage in point-to-point communication}
Proposal III describes tag selection for Receive in a two-integer form.
Proposals I and VII say nothing about tag usage. This feature can be
placed in all of Proposals I, III and VII.
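As a reminder of how the two-integer form of Proposal III behaves, the
selection rule can be written as follows. (The helper function is
illustrative only; it is not proposed syntax.)
\begin{verbatim}
/* Tag selection under (care_bits, dont_care_bits): bits set in
   dont_care_bits are ignored, the remaining bits must equal
   care_bits.                                                        */
int tag_matches(int received_tag, int care_bits, int dont_care_bits)
{
    return ((received_tag & ~dont_care_bits) ^ care_bits) == 0;
}

/* Example: with care_bits = 0x2A00 and dont_care_bits = 0x00FF,
   any tag of the form 0x2Axx is selected.                           */
\end{verbatim}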
\subsection{Tag usage in collective communication}
Proposal III suggests that tag should be used as an argument to collective
communication where this will assist debugging. Proposals I and VII say
nothing about tag usage. This feature can be placed in all of Proposals I,
III and VII.

\subsection{Context or Group cache}
Proposal VII describes a ``cache'' facility associated with contexts and
groups. Proposal III describes a similar ``cache'' facility associated
with groups. This feature can be placed in all of Proposals I, III and
VII.

\subsection{Opaque object (descriptor) transmission}
Proposal VII suggests that opaque object transmission can be provided by
integration with transmission of typed data. Proposal III suggests that
opaque transmission is provided by a mechanism for flattening a descriptor
into a memory buffer. These are details of different ways of providing the
feature. This feature can be placed in Proposals III and VII. This feature
cannot be placed in Proposal I.

\subsection{Context registry}
Proposal III describes a context name registry service. Proposal VII
indicates that such a service would be useful. This feature can be placed
in Proposals III and VII. This feature cannot be placed in Proposal I.

\section{Concept Differences}

\subsection{Concept of CONTEXT and GROUP}
In Proposal I CONTEXT and GROUP are identical concepts and are not
distinguished. In Proposal III CONTEXT is a lower degree concept than
GROUP. The GROUP concept inherits aspects of the CONTEXT concept. In
Proposal VII CONTEXT is a higher concept than GROUP. The CONTEXT concept
inherits aspects of the GROUP concept.

\subsection{Scope of point-to-point communication}
In Proposal I the scope of point-to-point communication is limited to the
group. Processes which are members of distinct groups can only communicate
through a common ancestor group. In Proposals III and VII the scope of
point-to-point communication is not limited. Processes which are members
of distinct groups can communicate without reference to a common ancestor
group.

\subsection{Transmission of group or context}
In Proposal I the CONTEXT cannot be transmitted from one process to
another. In Proposals VII and III both CONTEXT and GROUP can be
transmitted from one process to another. In Proposal VII PROCESS can also
be transmitted (Proposal III suggests such but makes no specific
provision, presumably a small oversight?).

\section{Detail Differences}

\subsection{Manifestation of context}
In Proposals I and VII context is an opaque object. In Proposal III
context is an integer.

\subsection{Deletion of group}
In Proposals VII and III groups can be deleted. In Proposal I there is no
provision for group deletion (possibly a small oversight?).

\subsection{Duplication of group}
In Proposals I and III there is explicit provision for duplication of an
existing group to form a new (distinct, homomorphic) group. In Proposal
VII there is no such provision, as similar functionality is provided by
the context (although the provision for group partition, permutation and
definition can be used to create a snapshot copy of a group).

\subsection{Global shared variables}
Proposals I and VII do not require global shared variables. Proposal III
requires a global shared variable (which can be implemented as such, or of
course in the traditional approach as a global service process).

\subsection{Process identifier addressed communication}
Proposal I does not make provision for process identifier addressed
communication.
Proposal III makes provision for process identifier addressed communication within multiple distinct tag spaces. Proposal VII makes provision for process identifier addressed communication within a single distinct tag space. \subsection{Inter-group communication} Proposal I does not provide inter-group communication as it limits the scope of point-to-point communication to be closed within a group. Proposal VII provides inter-group communication in a triplet addressing form: sender (receiver) group, receiver (sender) group, sender (receiver) rank. Proposal III provides inter-group communication as process identifier addressed communication. \end{document} From owner-mpi-context@CS.UTK.EDU Fri Mar 26 13:11:55 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA19032; Fri, 26 Mar 93 13:11:55 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02142; Fri, 26 Mar 93 13:11:03 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 26 Mar 1993 13:11:01 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from fslg8.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02125; Fri, 26 Mar 93 13:10:58 -0500 Received: by fslg8.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA01869; Fri, 26 Mar 93 18:10:53 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA05020; Fri, 26 Mar 93 11:09:35 MST Date: Fri, 26 Mar 93 11:09:35 MST From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9303261809.AA05020@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu Subject: Re: A new proposal for context and groups. Hi all, A while back, Mark Sears wrote the beginnings of a "Proposal VI": > Proposal VI > Mark P. Sears > Sandia National Laboratories > > The following proposal for context and group definitions is > intended to fill a gap in Tony Skjellum's list. It is most > closely related to Rik Littlefield's proposal V. The main > difference is that in my proposal, context and groups are > completely orthogonal, both in purpose and function... > [remainder deleted...] Any particular reason why this proposal hasn't been pushed any farther? (Maybe no one has time to do it?? I'm not volunteering! :-) I think "Proposal VI" would fit into Lyndon's summary like this: > In Proposal I CONTEXT and GROUP are identical concepts and are not > distinguished. > > In Proposal III CONTEXT is a lower degree concept that GROUP. The > GROUP concept inherits aspects of the CONTEXT concept. > > In Proposal VII CONTEXT is a higher concept than GROUP. The CONTEXT > concept inherits aspects of the GROUP concept. In Proposal VI CONTEXT and GROUP are othogonal and unrelated. Is this worth pursuing? Tom (I'm STILL reading the proposals... 
:-) Henderson NOAA Forecast Systems Laboratory hender@fsl.noaa.gov From owner-mpi-context@CS.UTK.EDU Fri Mar 26 16:43:03 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA25014; Fri, 26 Mar 93 16:43:03 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12680; Fri, 26 Mar 93 16:41:48 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 26 Mar 1993 16:41:47 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12672; Fri, 26 Mar 93 16:41:46 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA12136; Fri, 26 Mar 93 15:35:39 CST Date: Fri, 26 Mar 93 15:35:39 CST From: Tony Skjellum Message-Id: <9303262135.AA12136@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, hender@macaw.fsl.noaa.gov Subject: CONTEXT agenda @ Dallas; Sears proposal Tom! Concerning Sears proposal (Proposal VIII by my nomenaclature). I have not had time to read this proposal yet. I except any responsiblity for not addressing it. I will read it prior to meeting, and give Mark Sears' proposal time during our 3-hour slot (see below), if he wishes. If it is a workable closure for MPI, I would consider adding it to straw poll, of course. I appreciate Mark's efforts, and I do not mean to snub him! Schedule (we initiate the meeting on Wednesday) 3hr context discussion before committee of the whole... Assume we start at 1:30+epsilon: intro to context sub-committee proposals - Skjellum / 10 mins Proposal VII - Littlefield & Clarke presents / 30 mins discussion on VII - lead by Littlefield & Clarke / 15 mins Proposal I - Snir / 30 mins discussion on I - lead by Snir / 15 mins Proposal III & VIII - Skjellum presents / 30 mins discussion on III & VIII - lead by Skjellum & Sears / 15 mins Overall discussion / Recommendations / Ranking poll 35 mins - Tony From owner-mpi-context@CS.UTK.EDU Mon Mar 29 13:08:29 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA19903; Mon, 29 Mar 93 13:08:29 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05771; Mon, 29 Mar 93 13:07:54 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 29 Mar 1993 13:07:53 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from fslg8.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05763; Mon, 29 Mar 93 13:07:50 -0500 Received: by fslg8.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA02203; Mon, 29 Mar 93 18:07:46 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA00768; Mon, 29 Mar 93 11:06:27 MST Date: Mon, 29 Mar 93 11:06:27 MST From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9303291806.AA00768@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu Subject: My (Preliminary) Vote Hi all, I FINALLY finished reading the context proposals. Right now I am leaning towards "Proposal VIII" (Mark Sears' proposal for orthogonal context and group concepts). However, I would like to see more detail in that proposal. Of course, the discussion on Wednesday may change my opinions! :-) See you all then... 
Tom Henderson NOAA Forecast Systems Laboratory hender@fsl.noaa.gov From owner-mpi-context@CS.UTK.EDU Mon Mar 29 13:24:08 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA20278; Mon, 29 Mar 93 13:24:08 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06707; Mon, 29 Mar 93 13:23:33 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 29 Mar 1993 13:23:32 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06699; Mon, 29 Mar 93 13:23:23 -0500 Received: from float.co.uk (float.meiko.co.uk) by hub.meiko.co.uk with SMTP id AA11427 (5.65c/IDA-1.4.4 for mpi-context@cs.utk.edu); Mon, 29 Mar 1993 19:23:11 +0100 Date: Mon, 29 Mar 1993 19:23:11 +0100 From: James Cownie Message-Id: <199303291823.AA11427@hub.meiko.co.uk> Received: by float.co.uk (5.0/SMI-SVR4) id AA02335; Mon, 29 Mar 93 19:19:34 BST To: hender@macaw.fsl.noaa.gov Cc: mpi-context@cs.utk.edu In-Reply-To: Tom Henderson's message of Mon, 29 Mar 93 11:06:27 MST <9303291806.AA00768@macaw.fsl.noaa.gov> Subject: My (Preliminary) Vote Content-Length: 621 It woudl be really nice if the could be kept separate. Unfortunately they interact because 1) the context needs to be agreed by a set of processors, so context creation is naturally a group operation 2) if collective operations are built on point to point they need a context to remove the ambiguity, and to work safely. See you on Dallas -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Mon Mar 29 13:34:01 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA20599; Mon, 29 Mar 93 13:34:01 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA07253; Mon, 29 Mar 93 13:33:35 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 29 Mar 1993 13:33:34 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA07243; Mon, 29 Mar 93 13:33:32 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA14097; Mon, 29 Mar 93 12:27:00 CST Date: Mon, 29 Mar 93 12:27:00 CST From: Tony Skjellum Message-Id: <9303291827.AA14097@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: straw poll I invite straw poll votes on MPI context proposals. Note that Lyndon has introduced proposal X, but I recommend we defer discussion on that till Wednesday night. 
- TOny From owner-mpi-context@CS.UTK.EDU Mon Mar 29 13:57:52 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21095; Mon, 29 Mar 93 13:57:52 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08541; Mon, 29 Mar 93 13:57:27 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 29 Mar 1993 13:57:26 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from fslg8.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08530; Mon, 29 Mar 93 13:57:24 -0500 Received: by fslg8.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA02384; Mon, 29 Mar 93 18:57:20 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA00904; Mon, 29 Mar 93 11:56:01 MST Date: Mon, 29 Mar 93 11:56:01 MST From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9303291856.AA00904@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu Subject: Re: My (Preliminary) Vote > Mark;s is essentially the same as mine, except that he has weakened > a few things. It is so vague (Clintonlike) that perhaps it has to get > elected :-) > - Tony Unfortunately, I agree that this proposal is vague. Maybe the reason I like it is that I'm confusing vagueness with simplicity... :-) {I'm trying to think which "promises" might be broken within the first week...} I've made a few suggestions for "de-vagueifying" to Mark. I hope a more detailed proposal could be available by Wednesday (i.e. I hope Mark doesn't have any "real" work to do before then...). Tom From owner-mpi-context@CS.UTK.EDU Mon Mar 29 18:27:29 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA27367; Mon, 29 Mar 93 18:27:29 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21729; Mon, 29 Mar 93 18:22:30 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 29 Mar 1993 18:22:29 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from cs.sandia.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21715; Mon, 29 Mar 93 18:22:26 -0500 Received: from newton.sandia.gov (newton.cs.sandia.gov) by cs.sandia.gov (4.1/SMI-4.1) id AA12159; Mon, 29 Mar 93 16:22:23 MST Received: by newton.sandia.gov (5.57/Ultrix3.0-C) id AA05331; Mon, 29 Mar 93 16:23:17 -0700 Date: Mon, 29 Mar 93 16:23:17 -0700 From: mpsears@newton.cs.sandia.gov (Mark P. Sears) Message-Id: <9303292323.AA05331@newton.sandia.gov> To: mpi-context@cs.utk.edu Subject: Response to comments on Pr. VIII Tom (and other colleagues in MPI), Thanks for your interest and let me try to answer some of your questions, according to my point of view. I apologize for any lack of clarity in the proposal (I had the flu when I wrote it). My proposal was certainly not at the level of detail needed for insertion in the MPI document -- it was intended more as a position statement. In the following, my comments begin with **** in response to Tom's statements and questions. I end with some responses to Jim Cownie's comments. Mark P. Sears Sandia National Laboratories ********************************************************************** (from Tom Henderson's email:) 1) Process ID's are globally unique integers. I see this as a minor advantage. When PID is an opaque data type, we get the "identity" problem: how do I tell if PID A is the same as PID B? I need to use a special routine (like MPI_RANK() in Proposal I). **** I agree with this. 
**** In addition, our current model of a parallel task in MPI is rather **** static: all the processes in the task are created and destroyed **** simultaneously and all can communicate with one another. So why not **** just number them? That is current common practice anyway. The numbering **** encodes the system's idea of the best process ordering. Opaque id's **** promote a sense that any ordering is just as good as any other, which **** is not the case for high performance applications. 2) Contexts are globally unique integers. Global uniqueness makes context creation "slow". Given the intended use of contexts, I do not see this as a problem. Your current proposal does not explain how contexts are "created" in much detail. Here's three (non-exclusive) possibilities: A) make_group_contexts(group, num_contexts, new_contexts) num_contexts new contexts are placed in array new_contexts. This routine must be called "loosely-synchronously" in every process in the list of processes called "group". This is a convenient way for a user to create one or more contexts. It does not imply any relationship between groups and contexts. B) get_registered_contexts(context_name, num_contexts, new_contexts) This is a "name-server" version. If one or more contexts are registered under the name "context_name", then the first num_contexts contexts are copied into array new_contexts. If "context_name" is not currently registered, allocate num_contexts new contexts and register them under name "context_name". Some folks will not need groups. There must be a way to allow access to contexts without using groups. (However, see #3 below...) C) make_list_contexts(list_of_PIDs, num_contexts, new_contexts) This could easily be (A). The only reason for having this version would be if "group" has other stuff "cached" with it. This one could be left out if "group" is just a "list of PIDs". **** Here I had in mind a hardware or low level model of context, just **** implemented as a few bits in what is now the tag field of the **** message envelope. From this point of view, contexts are a scarce **** resource, but this matches well with the idea that contexts are **** associated with different software components of a program - most **** programs use only a handful of independent software components **** (the main app, a few libraries). My model of how the underlying **** implementation uses context is that a message (or part thereof) **** arrives at a process and must then be queued or matched with an **** outstanding receive. If there are not many possible context values then **** the number of possible queues is not large either, and all of the **** resources needed to support all context values can be preallocated **** in process startup. For 16 or 256 context values we can easily imagine **** this not taking very long, order milliseconds. Then context values can **** be used like tag values -- they preexist and can be utilized and allocated **** according to any desired scheme. If we must specify an allocation scheme, **** then I would recommend something like **** **** context_value = MPI_get_context() **** **** where the routine must be called synchronously on every processor. **** This routine would typically be called a few times by the various **** initialization code for the various software components. **** The bottom line is that contexts preexist and a very simple (or no) **** allocation/deallocation scheme is needed. 
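For concreteness, a minimal sketch of the preallocation scheme described
above might look like the following; the fixed pool size, the counter, and
the loosely synchronous calling rule are assumptions of this sketch, not
part of Proposal VIII.

    /* Contexts preexist as small integers.  Because every process is
       required to call MPI_get_context() at the same point in the
       program, a local counter yields the same value everywhere and
       no server or communication is needed.                          */
    #define MPI_MAX_CONTEXTS 256
    static int next_context = 1;       /* 0 kept as default context   */

    int MPI_get_context(void)
    {
        if (next_context >= MPI_MAX_CONTEXTS)
            return -1;                 /* pool exhausted              */
        return next_context++;
    }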
3) I did not completely understand this: > 5) This proposal requires no server code and most of the > group code is not even parallel. I did a test implementation of > groups as defined here and was able to build code for identity > groups, permutation groups, linear and bilinear groups, general > set groups (Tony's favorite), composition groups, Cartesian products > and cartesian subsets (my favorite), all in about 700 lines of code. > The group definition really lends itself to object-oriented user > extensibility, like X widgets but easier. Are you talking about groups only here? If not, how are you proposing to maintain globally synchronous allocation/deallocation of contexts? **** Yes, groups only. See above comments on context. 4) Minor questions: > 6) Groups are easily constructed and destroyed, since no > global communication is required. Dynamic groups are not excluded, > although they must be used carefully. Since groups have no associated > context, there are no resources limiting their construction other than > memory and CPU time. You say "global" communication is not required. I assume that communication among the members of a created group is required? (Seems obvious...) **** There is no magic, of course. Sometimes, group creation requires **** communication and sometimes not. Many interesting groups can be **** computed rather than communicated, for example the row and column **** subgroups of a group representing a rectangle. Other groups require **** communication in order for each process to know whether it belongs **** to a group or not, the random list group being a good example, but I **** see that communication as independent of the construction of the group. **** That is, you communicate the list of processes wherever needed, then **** each process that needs the group builds it from the list. 5) Clarification: > There is no reason to disallow user-defined groups (e.g. dynamic > groups). The term "dynamic group" has been used to mean different things in the mpi-context discussion. Are you talking about a user-created group whose membership does not change? Are you talking about a user-created group whose membership can change after creation? Are you talking about something else? **** A group is just a mapping or function. This function could be **** one predefined by MPI or one defined by a (knowlegeable) user. MPI could **** restrict itself to a few simple kinds of groups and allow the user to **** implement code for groups whose membership changes (what I meant by **** dynamic groups) or groups with properties we haven't thought of yet. 6) My understanding of this proposal is that "group" is an array of integers, probably including range(group). You mention that other information could be cached with "group". Are you proposing a "handle" or "opaque object"? **** A group is not explictly an array. Most groups in my view (this is **** biased by the kind of work I do, of course) will be computed rather **** than given as explicit lists. So a group is a union of possible **** structures. Each kind of structure knows how to compute the **** needed functions (element(group, rank), rank(group, element, etc) -- all **** very straightforward with modern C programming techniques. Maybe **** that computation uses a list, maybe it doesn't. **** What the user gets from a group creation operation **** is a pointer to the object defining the group. The pointer is of **** course an opaque type valid only on the process that created that **** group. 
**** The kind of information likely to be cached with a group is probably **** a spanning tree or set of exchanges which are optimized for the **** particular group. Alternatively it is possible to cache optimized methods **** for operations such as broadcast or synchronization. 7) How are you proposing groups be created? It would be nice to see a few proposed routines. (Or, "it's done just like proposal XXX.) **** I hope you don't want code! (I will mail it to anyone that asks). **** Let me begin with defining a classification of groups. Each class **** here is defined in terms of the function element(group, rank). Each **** type of group will have its own constructor with the parameters **** needed for that type. The output of the constructor is a pointer **** to the actual group object. **** **** Class: element(rank) parameters: **** ------ ------------- ----------- **** Identity rank. order **** Permutation p(rank) for some permutation p. order, p **** List list(rank) order, list **** Linear start + rank*delta order, start, delta **** Bilinear start + r1 * d1 + r2 * d2 order, n1, d1, d2 **** where rank = r1 + n1 * r2 **** Composition g1(g2(rank)) (group) g1, g2 **** Product g1(r1) + range(g1)*g2(r2) (group) g1, g2 **** So the constructor for a linear group would be defined like **** **** Group MPI_make_linear_group(order, start, delta), **** **** which returns a pointer to the actual group structure. Each **** class of groups would have a specific constructor. **** **** The functions common to all groups would be defined like **** **** int MPI_group_order(Group g) **** int MPI_group_range(Group g) **** int MPI_group_iselement(Group g, int element) **** int MPI_group_rank(Group g, int element) **** int MPI_group_element(Group g, int rank) **** **** In addition in my test implementation I defined some other **** stuff like a destructor, copy constructor, etc. **** Although I have made heavy use of C pointers here, a Fortran **** binding is not too hard to construct. 8) Is this what you had in mind? (My understanding of some of the things you've said...): * All point-to-point communication uses (PID, context, tag). **** Yes. * All collective communication uses (group, context, tag) (with the caveat that tag may be eliminated by the collcomm committee). **** Here the committee could eliminate tag and context together but not **** independently, I think. How could I call such a routine not knowing **** which tags it was going to use for the operation? If neither is **** available then the MPI routines must use an internal context and **** set of tags. * Third-party library routines that use collective communication must have (group, context) passed in as arguments. * Third-party library routines that use point-to-point communication within a group must have (group, context) passed in as arguments. (PID's are then extracted inside the routine using element(), etc.) * Third-party library routines that use point-to-point communication without reference to groups must have (PID's, context) passed in as arguments. **** Not quite. **** In my view there are two categories of third party routines: one **** category which establishes and uses its own context and tag spaces **** and a second category which uses the context and tag spaces given **** to it. Both kinds are useful -- MPI must be careful not to preclude **** one or the other. * Group-based point-to-point calls will look like: send(element(group,rank), context, tag, ...) recv(element(group,rank), context, tag, ...) **** Yes. 
* Default group and default context are provided. When used, these look like: send(element(group,rank), MPI_DEF_CONTEXT, tag, ...) barrier(MPI_DEF_GROUP, MPI_DEF_CONTEXT) **** Yes. Default context (free-for-all) especially is intended for **** the poor hapless end-user slob (I wear this hat occasionally) **** who hasn't got the time to understand all our machinations. :-) **** Default group is a trivial group (what I would call an identity **** group) which maps rank into itself, assuming PID == rank. 9) The examples above imply that context appears in all communication routines. There could be special versions of all communication routines that use the default application context. I'd prefer to avoid this. No one has proposed special versions of the collective communication routines that use the default group! **** Hear, hear. How many routines do we need anyway? Delete my **** suggestion. 10) If I've got the points above right, inter-group communication seems to be no problem with this proposal. Everything is converted into a globally unique PID. **** Right. 11) Clarification: > 2) Processes are addressed by their rank within the parallel task. > This global rank is fixed and assigned in an implementation-defined way. What do you mean by "parallel task"? Is there only one? Is this extensible to "more than one"? (Seems like it could be as long as "global rank" == PID is globally unique for any process.) A bit more detail on how this would extend would be nice. > [ This is an area where MPI 2.0 could really break some new ground: > define interaction between different parallel tasks, define creation > and deletion of parallel tasks, ...] I think "one" parallel task in MPI 1.0 is definitely a good idea. :-) **** There are several options: **** 1) There exists one parallel task (collection of processes). **** In this case processes can be addressed by rank within the task. **** 2) There exists more than one parallel task. Now we open up **** a Pandora's box of options: can a process belong to more than **** one parallel task (yech), can groups cross task boundaries **** (yech**2). If the answer to both these questions is no, then **** I would support modifying everything so that processes are **** addressed by (parallel-task-id, rank). But this significantly **** extends our work. **** I had a hard time with this one -- am pulled both ways. 12) Suggestion: > 4) Groups have an implicit topology, defined by the ordering of the > elements. Any other ordering can be defined by constructing a new > group with the same elements in the new order. There is no need for > any other topology function. It seems to me that having standard routines that produce "good" orderings for a few common logical topologies is a good idea. > There must exist environment functions which obtain the topology of > the global process assignment (hypercube, mesh, random network, ...) I think that this is more difficult than specifying a few "logical topology" ordering routines. Arguments? :-) **** What you mean by a logical topology ordering routine is (correct **** me if I misinterpret) something that selects, for example a Gray **** ordering or natural ordering for a group. The choice depends on **** the underlying topology and the application and cannot be done **** automatically. Suppose you have constructed a group and want to **** impose Gray ordering on it. 
In this proposal you would compose **** your group with a permutation group (where the permutation was **** the Gray ordering), creating thus a new group with the required **** order. **** The implementation knows what the global topology is. A routine **** like the following could be easily implemented (I think) to return **** this information: **** **** char * MPI_global_topology() **** **** which in various implementations could return a string containing **** something like **** **** "N 564" -- a random network with 564 processes. **** "H 5" -- a 5 dimensional hypercube. **** "R 16 13" -- a 16 by 13 mesh. **** **** or generally **** "topology-class parameter ..." **** **** I don't anticipate very many topology classes. (from James Cownie's email:) It woudl be really nice if the could be kept separate. Unfortunately they interact because 1) the context needs to be agreed by a set of processors, so context creation is naturally a group operation **** In this proposal contexts are global quantities agreed to by **** all processes. Also, contexts are a scarce and therefore limited **** resource. I see no problem in simply creating them all at process **** startup. **** If we insist that contexts do need to be "created" before being **** used, and if we accept the model (in this proposal) that a context **** is just an integer, then there is no problem with implementing **** a routine **** context = MPI_get_context() **** which tells MPI internally that this process is now willing to **** send and receive messages with the specified context value. Such **** a routine could be implemented with other point-to-point routines **** which used one preexisting context (MPI_INTERNAL_CONTEXT). 2) if collective operations are built on point to point they need a context to remove the ambiguity, and to work safely. **** I agree, and my proposal states this. I would feel badly about **** any proposal where collective communications could not be built **** on top of point to point. (the end) From owner-mpi-context@CS.UTK.EDU Tue Mar 30 11:36:42 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA14063; Tue, 30 Mar 93 11:36:42 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12397; Tue, 30 Mar 93 11:36:11 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 30 Mar 1993 11:36:09 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from fslg8.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12386; Tue, 30 Mar 93 11:36:07 -0500 Received: by fslg8.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA04528; Tue, 30 Mar 93 16:35:59 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA02355; Tue, 30 Mar 93 09:34:39 MST Date: Tue, 30 Mar 93 09:34:39 MST From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9303301634.AA02355@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu Subject: A few more ramblings Cc: mpsears@newton.cs.sandia.gov Mark, I had a few more thoughts about Proposal VIII... One of the arguments that was unresolved at the last meeting was whether contexts should be a "scarce" or "plentiful" resource. One concern was a situation like one we have dealt with here. One of my co-workers (Bernardo Rodriguez) has developed a high-level library for finite difference approximation (FDA) problems (common with the weather forecast models we are working with). 
One well-known strategy for getting decent performance from a data-parallel FDA is to compute boundary values, start non-blocking boundary exchange, compute interior values while exchange is happening, and complete the exchange. An example of how this code looks with contexts is shown below. Routines that start with "LIB" are our "third-party" library routines. Routines written by a user of the library start with "USER". Array "A" contains the data. "myContext" is the context used by the LIB_XXX() routines. USER_COMPUTE_BOUNDARY(A, ...) LIB_START_EXCHANGE(A, myContext) USER_COMPUTE_BOUNDARY(A, ...) LIB_END_EXCHANGE(A, myContext) Unfortunately, weather models have lots and lots of arrays (A, B, C, ...). If we use the same idea we need contexts "myContextA", "myContextB", ... The code looks like: USER_COMPUTE_BOUNDARY(A, B, C, ...) LIB_START_EXCHANGE(A, myContextA) LIB_START_EXCHANGE(B, myContextB) LIB_START_EXCHANGE(C, myContextC) ... USER_COMPUTE_BOUNDARY(A, B, C, ...) LIB_END_EXCHANGE(A, myContextA) LIB_END_EXCHANGE(B, myContextB) LIB_END_EXCHANGE(C, myContextC) ... Here's a situation where a "large number" of contexts might be needed. Now, here's how we fix it when contexts are scarce: USER_COMPUTE_BOUNDARY(A, B, C, ...) LIB_START_EXCHANGE(A, myContext, keyA) LIB_START_EXCHANGE(B, myContext, keyB) LIB_START_EXCHANGE(C, myContext, keyC) ... USER_COMPUTE_BOUNDARY(A, B, C, ...) LIB_END_EXCHANGE(A, myContext, keyA) LIB_END_EXCHANGE(B, myContext, keyB) LIB_END_EXCHANGE(C, myContext, keyC) ... The "keys" are defined by the library to be in some range. The library guarantees that communication operations initiated by calls to any routines using different keys will not collide. This might be done by using "key" inside each routine as a base to select a range of tags. It is the user's responsibility to ensure that the "keys" are unique. The bottom line is: the library developer is responsible for managing tags used by a library. The context insures that communication internal to a library routine will not collide with communication external to that routine. OK, now I'm comfortable with "context" as a scarce resource that is allocated ONCE at the beginning of an application. Comments? Flames? Tom From owner-mpi-context@CS.UTK.EDU Tue Mar 30 13:38:26 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17175; Tue, 30 Mar 93 13:38:26 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA18165; Tue, 30 Mar 93 13:37:43 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 30 Mar 1993 13:37:42 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA18157; Tue, 30 Mar 93 13:37:41 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA14756; Tue, 30 Mar 93 12:30:38 CST Date: Tue, 30 Mar 93 12:30:38 CST From: Tony Skjellum Message-Id: <9303301830.AA14756@Aurora.CS.MsState.Edu> To: hender@macaw.fsl.noaa.gov, mpi-context@cs.utk.edu Subject: Re: A few more ramblings Cc: mpsears@newton.cs.sandia.gov Given all the discussion since I left for Salishamn, I will need to rely on Mark to present his part of XIII (oops, VIII) tomorrow. 
- Tony From owner-mpi-context@CS.UTK.EDU Tue Mar 30 13:54:34 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA17387; Tue, 30 Mar 93 13:54:34 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19035; Tue, 30 Mar 93 13:54:12 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 30 Mar 1993 13:54:11 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19026; Tue, 30 Mar 93 13:54:09 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA14808; Tue, 30 Mar 93 12:47:29 CST Date: Tue, 30 Mar 93 12:47:29 CST From: Tony Skjellum Message-Id: <9303301847.AA14808@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: agenda of context committeee we will prorate all speaker times in our agenda, to give equal time to Mark Sears. - I will give exact details tomorrow, Tony From owner-mpi-context@CS.UTK.EDU Fri Apr 2 01:51:46 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA24312; Fri, 2 Apr 93 01:51:46 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08735; Fri, 2 Apr 93 01:51:15 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 2 Apr 1993 01:51:14 EST Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08727; Fri, 2 Apr 93 01:51:13 -0500 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA01906; Fri, 2 Apr 93 00:50:53 CST Date: Fri, 2 Apr 93 00:50:53 CST From: Tony Skjellum Message-Id: <9304020650.AA01906@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: the gathering Cc: mpi-collcomm@cs.utk.edu Dear Context sub-committee members (and observers from collcomm, etc), The meeting this week underscored the need for convergence to a unifying proposal that captures the features of Proposal I, VIII, and III+VII=X. The following work will be accomplished before May 12 to that end, while respecting the current separateness of I and VIII. I regret having to leave current MPI meeting early, but the context discussions were quite sufficient to put me in a higher gear on the problems before us... . Rik Littlefield agrees to organize a set of test cases to be coded for each proposal; proposers including codings in their proposals. Deadline for such examples is April 21, 8pm EST. This will be discussed on mpi-context over next three weeks. . I will develop a unified proposal X (with sensible names, and rationale, details, performance discussion, and examples). . I will ask for help, as needed, from Lyndon/Mark/Marc etc, on understanding nuances of their proposals, . Marc Snir / Lyndon Clarke will discuss changes/enhancements (if any) to Proposal I . Mark Sears will complete (presumably) a full proposal VIII (Tacit in this discussion is the accepted merger of III+VII as X, despite its incomplete state, so we have eliminated some proposals from consideration this round). To be considered for a straw vote (before next meeting), all proposals must be complete in that they must . Address their interactions with the first-reading of pt2pt, and current status of collcomm, including needed changes if any . Provide specific syntax/semantics, as needed for pt2pt & collcomm chapters . Describe any known flaws in syntax / semantics . Describe logical subsets, if any, for MPI1 . Implement the examples that Rik organizes, and upon which we agree together (including those from Wednesday night discussion session) . 
Include discussions of how starting works, and what the spawning semantics must provide them (or through an initial message) so that they can work. . The meaning of the MPI_ALL group in the proposal, if any, or weaker substitutes for same. . The existence/non-existence/requirement for servers or shared-memory locations to effect some features . Include expectations for performance of key operations (eg, how much does it cost to get a new context?, can this be done outside of loops and cached?) . Describe their use of a "cacheing facility," if any . Describe their syntax/semantics of a "cacheing facility" . Describe their reliance on any other MPI1 features not specifically part of context/group/tag/pid nature - - - - - Presumably Proposals I, VIII, and X will fill all requirements to reach the next straw poll deadline. Whichever do make this Straw poll deadline, (May 10, 1993, 5pm EST), can be considered by the voting subcommittee. A ranking will be developed, with the bottom N-2 proposals dropped. We will meet on the evening of Wednesday, May 12, 8:00pm CST, for as long as it takes to choose the final proposal, possibly by further merger of the remaining strong proposals. On Thursday, May 13, we will present our first reading of the Context subcommittee (with possible spill over to Friday, May 14). Actual context sub-committee members will vote, only, in all cases. Please recall the two-sub-committee voting limit of the MPIF (as well as sub-committee membership; observers are always welcome). I will strive not to send fine-grain changes to proposal X's around, but will wait to circulate my product in complete form, prior to May 10, so there is a lower e-mail burden for next weeks; perhaps others will like to keep their updates coarse grain, but share important things with everyone, for sure. If agreements/compromises occur between proposals and/or proposers, please share this with me and the sub-committee in a timely fashion; I do not desire surprises at the next meeting. For instance, if Marc Snir were willing to consider a separate context feature (separate from group) in Proposal I, a lot of effort could be averted, because his proposal is pretty good otherwise (except in re inter-group issues). I think Lyndon will be talking to Marc about making inter-group communication easier in Proposal I, also. If any breakthroughs are made, please let me know. - Tony PS Please copy mpi-collcomm on context-related matters for the duration of MPIF. . . . . . . . . . . "There is no lifeguard at the gene pool." - C. H. Baldwin "In the end ... there can be only one." - Ramirez (Sean Connery) in Anthony Skjellum, MSU/ERC, (601)325-8435; FAX: 325-8997; tony@cs.msstate.edu From owner-mpi-context@CS.UTK.EDU Tue Apr 6 15:32:28 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA25155; Tue, 6 Apr 93 15:32:28 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09620; Tue, 6 Apr 93 15:31:25 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 6 Apr 1993 15:31:24 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09604; Tue, 6 Apr 93 15:31:20 -0400 Date: Tue, 6 Apr 93 20:31:15 BST Message-Id: <826.9304061931@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context: comment and suggestion To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-colcomm@cs.utk.edu Dear MPI context colleagues. 
I'd like to say something about contexts and groups and the extant proposals ... First off, we have two major concepts floating around, which I need to define here for purpose of the discussion below. Group --- is an ordered collection of distinct processes, or formally of references to distinct processes. It provides a naming scheme for processes in terms of a group name and rank of process within group. Context --- is a distinct space of messages, or more formally of message tags. It provides management of messages as a message in context A cannot be received as a message in context B. Within these definitions there are exactly two themes in the extant proposals. Marc Snir, in Proposal I, views Group and Context as identical. This simplifies the number of concepts in MPI, but does mean that we can have intragroup communication and no way at all can you have intergroup communication within the above definition of Group and Context. Rik and I amusingly coined the term "grountext" to describe the group/context entity in this proposal. Tony Skjellum, in Proposal III, views Group and Context as independent. This means two concepts instead of one, but does mean that we can allow intragroup communication and some intergroup communication with restriction on how flexible we can make such communication. Proposals VIII and X are identical to III in the manner in which they treat Context and Group as independent concepts. Please consider Proposal VII as not compliant with the above definitions of Group and Context. We need to decide: 1) Are context and group identical or different? 2) Is intergroup communication provided? Now I want to point out something about intergroup communication which we have in our system and find most expressive and convenient, but does not fit in with the above frameworks and the assumption that the message envelope always contains just (context, process, tag). Receive in intergroup communication can wildcard on (sender group and sender rank) or (sender rank), in addition to message tag. We (at EPCC) do, and want to do (in MPI) (written out in longhand notation) receive(group, group', rank, tag) where group is the receiver group, group' is the sender group, rank is the sender rank in group' and tag is the message tag. The receiver can never wildcard group. The receiver can always wildcard tag. The receiver can always wildcard either (rank) or (group' and rank). (In fact, group and group' in this expression are more like the grountext of Marc's proposal or the "context" of historical proposal VII, but never mind on that point.) In the framework of Marc we can reasonably do intergroup communication without wildcard on group'. To do this we transmit group information in messages and form a group which is the union of group and group'. We cannot add wildcard on group' by saying that to do that one forms a union of group and all cases of group'. This requires the sender to always know too much about the detail of the recieve call with which it is to match (i.e., that the receiver is or is not doing a wildcard). If you disbelive this, then you should probably argue that we do not need source selection in point-to-point as you can use tag to choose the source, as it is the same argument (and bogus in my opinion). In the framework of Tony we can reasonably do intergroup communication without wildcard on group'. To do this we transmit group information in messages and choose a context for the pair of groups to use for intergroup communication. 
We cannot add wildcard on group' by using a context agreed for such use between group and all cases of group'. The argument is the same as that above after a little substitution. If we are serious about intergroup communication then in my opinion we really should provide the facility to wildcard on sender group. This throws up a small number issues, some of which I now address. No process addressed: I didn't mention process addressed communication at all. Perhaps the demons of speed are bothered by this. Well, we could do such as (context,process,tag), and the above does not exclude it. We can fit it in, of course. Size of point-to-point section: I said above "longhand notation". Well that is the most expressive and convenient notation, and if you ask me then I think that (group,group,rank,tag) or (NULL,group,rank,tag) are both acceptable for intragroup communication. On the other hand one can introduce some grunge syntax for intergroup communication which use the same framework as intragroup communication and replaces group in (group,rank,tag) with some glob object which is "shorthand" for (group, group'). This is not the best syntax in the world but we can live with it. We can even fit in the process addressed stuff with this kind of syntax as I have shown in Proposal X. Message envelope: You probably spot that this needs the sender group id to go into the message envelope. Perhaps the demons of speed are bothered by this. Well, you could have a different enevelope for groupless communication, intragroup communication and intergroup communication, and only pay the cost of the bigger envelope when you need it. This is going to take two bits for envelope identification. Big deal! It will anyway be natural not to match communications of different kinds (e.g. intergroup cannot match with intragroup, groupless cannot match with intergroup) so the extra header bits would be useful anyway. Unknown group: You probably also spot that the receive with wildcard on group can pick up a group that the receiver knows nothing of. I would be happiest if the implementation of MPI at the receiver asked the implementation of MPI at the sender about the group in this case, so that the receiver never has to bother about the eventuality. We (at EPCC) could accept that the returned group identifier is a NULL identifier. This means that groups have to exchange flattened group descriptions in messages in a reasonable way before they can make a great deal of sense of intergroup communication. Not ideal, but we can live with it. Comments please? 
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Apr 8 10:57:36 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA11782; Thu, 8 Apr 93 10:57:36 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15395; Thu, 8 Apr 93 10:56:38 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 8 Apr 1993 10:56:37 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15386; Thu, 8 Apr 93 10:56:27 -0400 Date: Thu, 8 Apr 93 15:56:22 BST Message-Id: <2310.9304081456@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context: context and group (medium) To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-collcomm@cs.utk.edu Dear MPI Colleagues This letter is about groups, contexts, independence and coupling thereof, the kinds of point-to-point communication which we have talked about, and to a brief extent libraries. Before embarking on the guts of the letter, I should like to express very strong support for the suggestion that MPI users can cleanly program in the host-node model. In my opinion, this model of programming is of considerable commercial significance, and I observe that there are a number of important programs around which use this model. o--------------------o I understand three different kinds of point-to-point communication which have been discussed by various people in MPIF. I write these out with separate group and context concepts, as per a previous message to mpi-context [Subject: mpi-context: comment and suggestion]. I will then discuss coupling of group and context. I refer the reader to my previous message to mpi-comm which described classes of MPI user libraries [Subject: mpi-comm: various (long)], as there is some follow-on discussion below. Groupless (process addressed) ----------------------------- (process, context, tag) Wildcard on process, tag. No wildcard on context. Intragroup (closed group) ------------------------- (group, rank, context, tag) Wildcard on rank, tag. No wildcard on group, context. Intergroup (open group) ----------------------- (lgroup, rgroup, rank, context, tag) Wildcard on rgroup, rank, tag. No wildcard on lgroup, context. Observe that "group" in intragroup and "lgroup" in intergroup are the same thing: they are the group of the calling process. Since neither "group" nor "context" in intragroup can be wildcarded, there may appear to be appeal in some coupling of them in order to provide shorter syntax and easier context/group management. This implies that we couple context to the group of the calling process. Now this coupling is not compatible with intergroup since the two calling processes have different groups, thus different contexts, thus the send and receive can never match. We can resolve this difficulty by a more careful statement of where the context of the message is coupled. In particular we can state that the context of the message is coupled to the group of the message receiver. In this way we would express intragroup as a coupling of (group,context), and we would express intergroup as a pair of such couplings.
The claim we have heard that context and group must be strongly coupled, resulting in a proposal which asserts that context and group are identical, is possibly nothing more than a consequence of an assumption that messages may only be distinguished on the basis of (process, context, tag) (here process is a process label which can be a rank within a group). Given that assumption, we can only use context to distinguish messages within different groups and the two entities become strongly coupled. Examining records of the early meetings of MPI, I find that this "decision" was made by the point-to-point subcommittee in a straw poll which rejected selection by group by a narrow margin of 10 to 11. Please note also that the same meeting rejected context modifying process identifier --- a "decision" which we are already often ignoring. These "decisions" predate the existence of the contexts subcommittee and the vigorous discussion of contexts and groups which has been and continues to take place. We should uniformly be open-minded enough to allow ourselves to question all such "decisions", and to change them if we see fit. The description of MPI user libraries which has been given by Mark Sears and myself strongly suggests that context and group must be independent entities. Provision of the process addressed communication immediately suggests that a context can appear without coupling to a group, in which case it seems (to me) that they are independent entities. There is an argument against process addressed communication which says that process addressed communication gains nothing in performance over intragroup communication in the group of all processes. The process description in process addressed communication will, for the sake of generality and thus portability, have to be some kind of pointer to a process description object which contains whatever information is needed to route a message to the intended recipient. It could be just that (in C, at least): a pointer. Sometimes, on some machines, it will actually be implementable with some other kind of magic which is more scalable, but it must always appear the same way. It could be an index, representable as an integer in the host language, into a table of process description objects (better for F77, for sure). It could be a rank in a group of (all) processes, used as an index into a process description object table, which is just fine for a static process model (and reflects existing practice). It could be some kind of global unique process identifier which is again used as a table index somewhere. If tables grow too large in either of the latter cases, then there may be some hashing and/or caching involved. There are counter-arguments. I give one, and invite you to give more. On some machines, the global unique process identifier is sufficient to route the message, and is representable as an integer in the host language. For example, the global process id can be a composite of two bit fields (nodeid, procid) where nodeid is a physical processor node number and procid is a process number on the node, and the nodeid bit field is sufficient to route. In these cases, there is no need for a process description object table, and no need to do a table lookup. We probably all have used machines just like this. For me the arguments have piled up in favour of context and group being separate and independent entities. This letter therefore makes the recommendation that context and group are separate and independent entities.
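An illustrative aside on the counter-argument above (editorial sketch, not part of the letter): a composite global process identifier of the kind described might be packed and unpacked as follows; the field widths are invented for the example.

#include <stdio.h>

#define PROCID_BITS 8                          /* invented width */
#define PROCID_MASK ((1 << PROCID_BITS) - 1)

/* Pack (nodeid, procid) into one integer; the nodeid field alone routes. */
static int make_pid(int nodeid, int procid)
{
    return (nodeid << PROCID_BITS) | (procid & PROCID_MASK);
}

static int pid_nodeid(int pid) { return pid >> PROCID_BITS; }  /* enough to route */
static int pid_procid(int pid) { return pid & PROCID_MASK; }

int main(void)
{
    int pid = make_pid(17, 3);
    printf("pid=%d nodeid=%d procid=%d\n", pid, pid_nodeid(pid), pid_procid(pid));
    return 0;
}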
In that light I propose further discussion on management of contexts within and between processes, and within and between groups, and on the subject of the use of objects which bind one or more contexts and one or more groups in order to keep the communication syntax compact by overloading. I shall post another letter to you tomorrow. o--------------------o Comments, questions, (flames :-) please?! Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Apr 8 14:32:12 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA15572; Thu, 8 Apr 93 14:32:12 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26381; Thu, 8 Apr 93 14:31:06 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 8 Apr 1993 14:31:05 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from ssd.intel.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26243; Thu, 8 Apr 93 14:29:58 -0400 Received: from ernie.ssd.intel.com by SSD.intel.com (4.1/SMI-4.1) id AA01330; Thu, 8 Apr 93 11:29:43 PDT Message-Id: <9304081829.AA01330@SSD.intel.com> To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu, prp@SSD.intel.com Subject: Re: mpi-context: context and group (longer) In-Reply-To: Your message of "Thu, 08 Apr 93 15:56:22 BST." <2310.9304081456@subnode.epcc.ed.ac.uk> Date: Thu, 08 Apr 93 11:29:42 -0700 From: prp@SSD.intel.com > From: L J Clarke > Subject: mpi-context: context and group (medium) > > ... > > For me the arguments have piled up in favour of context and group > being separate and independent entities. > > Lyndon I agree, although I also see merit in associating a context with a group. I would like to share my thoughts about context which lead me to think we need two differently managed forms of context. Some of you have already heard this. Most of the discussion about context has revolved around protecting two different entities: libraries and groups. I think this is required, but I think they need very differently managed contexts. One form is not adequate to cover both needs without sacrificing performance. Consider a SPMD program with these calls. Assume the calls are loosely synchronous. call to LibA (Group1) call to LibB (Group1) call to LibB (Group2) In a loosely synchronous environment, messages for the next call can come in before the previous one has completed. Here we see two forms of overlap. Within the call to LibA, we might get messages from processes which have already entered LibB. If LibA and LibB are independently written, they might use some of the same tags. To avoid messages from LibB matching receives in LibA, we must use different contexts. If we have static contexts, allocated when the libraries are initialized, each call in the library can quickly provide the context to its point-to-point calls. If we only have dynamic contexts, especially if contexts are carried inside groups, then a library must be prepared to dynamically allocate a new context on any call when it sees a new group. I know we discussed ways to do this locally, so the context could be created and cached locally on the fly without communication, but I find the idea of incorporating such code into every library call horrifying.
Within the first call to LibB, we might get messages from processes which have entered the second call to LibB. Since these calls are in different groups, it might be difficult to code LibB in such a way that messages could not intermix, since a process' position in Group2 might be quite different from its position in Group1. (I would hope that libraries would be coded so that multiple sequential calls to the same library with the same group would be safe. That seems to be current practice.) To keep the two calls from interfering, it would be convenient to have a different context for each group. If each group contains a dynamically allocated context, that's easy. But if contexts are statically allocated, especially if they require a name server, getting a new context for each new group might be a global operation that wouldn't scale well. So I propose that we need two forms of context, one that is quite static for protecting code, and one that is more dynamic for protecting groups. The only mechanism I know of that is adequate for protecting code is context allocated via a nameserver. In MIMD programs, one cannot say much about the order in which libraries are initialized. Thus, if context is statically allocated at initialization time, there must be a way to obtain the global context value for a piece of code independently of other processes. A more static method, such as a MPI registry or a "dollar bill server" has the disadvantage of requiring a much larger value range for context. That uses precious bits in the envelope of every message. Once a context is allocated to a piece of code, it can be safely stored in a global variable without endangering thread safety or shared memory implementations, because no matter how many instantiations of the library store into the variable, they will always store the same value. There are nice dynamic mechanisms for allocating context for groups, which require only communication within the group. This can piggyback on the communication which is probably required to set up and synchronize the group when it is created. For instance, one might set aside a small number of context values for use by groups. When a group is created, every process in the group could provide its current set of free context values, possibly as a bit vector. After a groupwide reduction, each process chooses the smallest value from the intersection, resulting in every process choosing the same value. Other forms of context protection might be required in the future. I don't predict any, and expect that with both a static and a dynamic form, it is likely that future needs would be covered. The point-to-point calls might be configured to accept (group, rank, context). In this configuration, the static context protecting the code is passed in explicitly, and the context protecting the group is inside the group object. I'm not sure how this interacts with cross-group message passing. Perhaps the simplest solution is to use a well-known group context in such cases, which effectively disables group protection. Those are my thoughts on context. Although I think the methods outlined here are simple enough, I would be happy to see simpler mechanisms that solve the same problems. I am not comfortable with any solution that requires active participation by every library call, no matter how local.
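A rough sketch of the dynamic group-context allocation described above, offered only as an illustration: the pool size and all names are invented, and the groupwide AND-reduction is left abstract since it would piggyback on the group setup communication anyway.

#include <stdint.h>

#define NUM_GROUP_CONTEXTS 16          /* small pool reserved for groups (invented) */

static uint32_t my_free_contexts = (1u << NUM_GROUP_CONTEXTS) - 1;

/* Placeholder for a groupwide bitwise-AND reduction over the new group. */
extern uint32_t group_and_reduce(uint32_t local_bits);

/* Every process ANDs together the free-context bit vectors, then picks the
 * lowest commonly free value, so all members of the group choose the same one. */
static int allocate_group_context(void)
{
    uint32_t common = group_and_reduce(my_free_contexts);
    for (int c = 0; c < NUM_GROUP_CONTEXTS; c++) {
        if (common & (1u << c)) {
            my_free_contexts &= ~(1u << c);   /* mark it in use locally */
            return c;                         /* same c on every process */
        }
    }
    return -1;                                /* pool exhausted */
}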
Paul From owner-mpi-context@CS.UTK.EDU Fri Apr 9 11:48:39 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA29402; Fri, 9 Apr 93 11:48:39 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24713; Fri, 9 Apr 93 11:48:25 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 9 Apr 1993 11:48:24 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24704; Fri, 9 Apr 93 11:48:22 -0400 Date: Fri, 9 Apr 93 16:48:19 BST Message-Id: <3201.9304091548@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: mpi-context: context and group (medium) To: mpi-context@cs.utk.edu In-Reply-To: L J Clarke's message of Thu, 8 Apr 93 15:56:22 BST Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-collcomm@cs.utk.edu Dear MPI Colleagues This is a short letter. First: a colleague here pointed out to me that I left an unfinished point, i.e. failed to draw a conclusion, in the mail message "Subject: mpi-context: context and group (medium)". Apologies to you all for my slipshod work. I conclude that discussion here. The point concerned the process identifier, for which there are arguments for and against its appearance as a global unique process identifier and as a process local handle to an opaque process descriptor object. The discussion in the referenced letter should have concluded that MPI should say that it is a process local identifier of a process expressed as an integer, and no more. This allows the implementation of MPI to choose the "best" form, which may be a global unique process identifier or may be a process local opaque reference to a process description object or may be an index into a table of such objects describing all processes. Second: the letter I sent to you all "Subject: mpi-context: comment and suggestion" contained some errors. Apologies again. I correct those errors here. * The claim that the conceptual framework of Tony regarding Group and Context restricts the possibilities for inter(group)communication is false. It is the restriction of the message envelope to (context,process,tag) which creates the limitation in this case. * When I explained how intergroup communication can be done within the conceptual framework of Marc (Snir) I should have said that this is a method for *simulating* intergroup communication without wildcard on group'. * When I explained how intergroup communication can be done within the framework of Tony I should have said that this is a method for *implementing* intergroup communication without wildcard on group'. Final: Regarding the same letter, which really deals with the subject of inter(group)communication, I may have made errors or at least unhelpful assumptions in the latter couple of paragraphs of the message. Again I apologise. I plan to go into deep thought on the subject of inter(group,context)communication, and promise to deliver some quality discussion to you all next week. Please bear with me. Until such time I shall omit inter(group,context)communication from my discussions.
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Apr 9 12:20:59 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA29769; Fri, 9 Apr 93 12:20:59 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26161; Fri, 9 Apr 93 12:20:16 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 9 Apr 1993 12:20:15 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26150; Fri, 9 Apr 93 12:20:12 -0400 Date: Fri, 9 Apr 93 17:20:03 BST Message-Id: <3227.9304091620@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context: Why scarce contexts? To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-collcomm@cs.utk.edu Dear MPI Colleagues. This question is primarily directed at Mark Sears. Mark, in Proposal VII you say that contexts will be a scarce resource, in fact you suggest 16 which is in my mind very scarce indeed. Why do you say this? It will help me/us if I/we understand, I am sure. Please reply. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Apr 9 13:06:44 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA00684; Fri, 9 Apr 93 13:06:44 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28334; Fri, 9 Apr 93 13:06:08 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 9 Apr 1993 13:06:07 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from cs.sandia.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28326; Fri, 9 Apr 93 13:06:06 -0400 Received: from newton.sandia.gov (newton.cs.sandia.gov) by cs.sandia.gov (4.1/SMI-4.1) id AA04322; Fri, 9 Apr 93 11:06:04 MDT Received: by newton.sandia.gov (5.57/Ultrix3.0-C) id AA01816; Fri, 9 Apr 93 11:06:59 -0600 Message-Id: <9304091706.AA01816@newton.sandia.gov> To: mpi-context@cs.utk.edu Subject: Re: mpi-context: Why scarce contexts? In-Reply-To: Your message of Fri, 09 Apr 93 17:20:03 -0000. <3227.9304091620@subnode.epcc.ed.ac.uk> Date: Fri, 09 Apr 93 11:06:58 MST From: mpsears@newton.cs.sandia.gov Lyndon asks why I think context values will be a scarce resource. First, we don't have any right now, so that is a data point :-) I think there are several reasons. The first is that a context requires underlying resources in the implementation (e.g. queues) which may be limited. A message arrives at a process, it goes into a queue matching the assigned context value in the envelope. Both support for the queue and the matching function take some effort. (16 queues is not too bad; 1000 is a lot.) One way to limit the effort required is to limit the number of supported contexts. Second, the bits in the envelope that support the context value have to come from somewhere, probably the existing tag field. If the tag field is only 16 bits to begin with (for argument's sake), then taking more than 4 bits for a context value might have a large impact. 
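A purely illustrative aside on the bit-budget argument above (an editorial sketch, not from Mark's message): if all of the matching fields had to share a single 32-bit word, the trade-off just described might look like this in C; the field widths are invented for the example.

#include <stdio.h>

/* Illustrative envelope layout only: every bit given to context is a bit
 * taken away from tag or source if the matching fields share one word. */
struct envelope {
    unsigned int context : 4;   /* 16 context values, as in the example above */
    unsigned int tag     : 12;  /* what is left of a 16-bit tag field         */
    unsigned int source  : 16;  /* sender rank or process id                  */
};

int main(void)
{
    struct envelope e = { 3, 1024, 42 };
    printf("context=%u tag=%u source=%u (envelope fits in %lu bytes)\n",
           e.context, e.tag, e.source, (unsigned long)sizeof(struct envelope));
    return 0;
}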
This is a question vendors might answer: how many context values and tag values are you willing to support on future platforms and how many are you willing to back fit on existing ones? Last, I don't see a need for billions of contexts. My model calls for most programs to use handfuls, not thousands. I would also like to think (this is a hopeless cause, but here goes) that much of MPI could be implemented in hardware, not just the communications part but the part that we now think of as overhead. This would greatly extend the class of programs that could benefit from parallelization, and I oppose for this reason things which add unnecessary complexity to the communications process. Mark Sears Sandia National Laboratories P.S. My proposal is VIII, not VII. From owner-mpi-context@CS.UTK.EDU Fri Apr 9 13:32:25 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA01034; Fri, 9 Apr 93 13:32:25 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29401; Fri, 9 Apr 93 13:31:47 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 9 Apr 1993 13:31:46 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29393; Fri, 9 Apr 93 13:31:43 -0400 Date: Fri, 9 Apr 93 18:31:38 BST Message-Id: <3385.9304091731@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: mpi-context: Why scarce contexts? To: mpsears@newton.cs.sandia.gov, mpi-context@cs.utk.edu In-Reply-To: mpsears@newton.cs.sandia.gov's message of Fri, 09 Apr 93 11:06:58 MST Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-collcomm@cs.utk.edu Dear Mark First, apologies for getting the proposal number wrong. > > Lyndon asks why I think context values will be a scarce resource. > [stuff deleted] > I think there are several reasons. The first is that a context > requires underlying resources in the implementation (e.g. queues) > which may be limited. A message arrives at a process, it goes > into a queue matching the assigned context value in the > envelope. Both support for the queue and the matching function > take some effort. (16 queues is not too bad; 1000 is a lot.) > One way to limit the effort required is to > limit the number of supported contexts. What you seem to be asking in this argument is that each process should use a limited number of contexts, which is different to asking that the system as a whole should use a limited number of contexts. Okay, that is just perhaps a subtle point. You are assuming details of an implementation of context. For example, in a different approach there could be just one queue which is searched through (in some fashion) in receive for a matching message, testing for context in no different way to testing for tag and sender. In that implementation contexts do not require resource, and the number of contexts is bounded only by the bit length of the context identifier. I imagine that you must have good reasons for the assumed implementation of context. Please do let me/us know why you make the assumption, I am sure that I am not alone in my concern that the number of contexts should be so scarce, but perhaps you know of very good reasons why they should so be. > Second, the bits in the envelope that support the context value > have to come from somewhere, probably the existing tag field. If > the tag field is only 16 bits to begin with (for argument's sake), > then taking more than 4 bits for a context value might have a > large impact. I must be missing something here again. 
This seems to say that the bit length of the envelope is fixed to some number of bits, and the more fields we want to cram into the envelope, the shorter the bit lengths of the fields must be. Is there a good reason why the bit length of the envelope should be fixed in this fashion, or perhaps are you arguing that the bit length of the envelope should be as short as possible? > This is a question vendors might answer: how many > context values and tag values are you willing to support on future > platforms and how many are you willing to back fit on existing ones? > Yes, this would be a good question for the vendors indeed. VENDORS - PLEASE PLEASE PLEASE DO ADVISE US ON THIS ONE. > Last, I don't see a need for billions of contexts. My model calls > for most programs to use handfuls, not thousands. Yes, your model demands that programs use a handful; the concern which I have is that complex and highly modular software will not be able to conform with your model, inhibiting the development of third-party software. > I would also like to > think (this is a hopeless cause, but here goes) that much of > MPI could be implemented in hardware, not just the communications > part but the part that we now think of as overhead. This would > greatly extend the class of programs that could benefit from > parallelization, and I oppose for this reason things which add > unnecessary complexity to the communications process. I am sure that vendors do take very seriously the possibility of implementing relevant parts of MPI in hardware. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Apr 9 15:33:07 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA05144; Fri, 9 Apr 93 15:33:07 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04518; Fri, 9 Apr 93 15:32:29 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 9 Apr 1993 15:32:28 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04510; Fri, 9 Apr 93 15:32:25 -0400 Date: Fri, 9 Apr 93 20:32:21 BST Message-Id: <3457.9304091932@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context: context management and group binding (long) To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-collcomm@cs.utk.edu Dear MPI Colleagues I now discuss context management and group binding. As promised I omit inter(group,context)communication for the present. This letter is further to letters of today and yesterday to mpi-comm and mpi-context. Some of the people I talked with about contexts at the recent meeting wanted to be able to generate some context values themselves, i.e. not by calling context constructor procedures. This is accommodated in the recommendations of this letter. In my letter to mpi-comm "Subject: various (long)" I suggested that the question of secure/insecure point-to-point and collective communications could be described as a property of a context, with some advantage. In this letter I will incorporate this feature. I will also be discussing communicator objects as in Proposal X, but with sensible names.
Tony Skjellum has made the valued suggestion to me, privately, that it is better to attributise the communicator object with the secure/insecure stuff, rather than the context. I shall adopt this suggestion in this letter and attributise the communicator object rather than the context. o--------------------o Message Contexts ---------------- In this proposal a (message) context identifier is (like a message tag) just an integer which is used in message selection and (unlike a message tag) may not be wildcarded. The interval of context identifiers (1, ..., MPI_NUMUSR_CONTEXTS) is reserved for the MPI user to manage as she sees fit. Use of these contexts allows the user to write programs which do not make use of the provided context creation and deletion facilities. How big should MPI_NUMUSR_CONTEXTS be? Say 1, 2, 4, 8, 16, 32, 64, 128, ... Steve and Tom, and friends, can you advise? The MPI system provides a procedure which creates a unique context outside the interval of reserved user context identifiers, and a procedure which deletes a created context (it does not delete user reserved contexts). For example: context = mpi_create_context() mpi_delete_context(context) There may be advantage in defining the context create and delete functions such that they create and delete more than one context at a time, in order to amortise creation/deletion overhead. Please note that these context generation calls are made by a single process and are asynchronous. They can be implemented as a process local operation by attaching the global process identifier to a process local context allocator, at the expense of needing a lot of bits in the context. They can also be implemented via access to shared data (or a reactive server) in which case the bit length of the context can be made smaller. [I view this as an implementation detail which we should not dwell on in MPI; the implementor should be free to choose any formally correct method which hopefully optimises execution on the target platform.] The user program may make use of the user reserved contexts. ClassB libraries (encapsulated objects) are expected to use system created contexts. These can be created as above or through the Communicator object constructors described below. Communicator Objects -------------------- The context acquired by the user in either of the above ways is not valid for communication. Communication is effected by use of a Communicator object, which is a binding of context, zero or more groups (just zero or one in this letter), and communicator attributes (just one in this letter). Two classes of communicator are described in this letter: * WorldCommunicator - an instance of a WorldCommunicator is a binding of context to nothing. This communicator allows the user to intracommunicate within the world of processes comprising the user application, labelling processes with their (process local) process identifier. * GroupCommunicator - an instance of a GroupCommunicator is a binding of context to a process group. This communicator allows the user to intracommunicate within the group of processes comprising the group, labelling processes with their (group global) rank within group. Communicator creation defines the SECURITY attribute of the communicator to be created, which may be any of the following: * MPI_DEFAULT_COMMUNICATOR - the default Security attribute specified in environmental management.
* MPI_REGULAR_COMMUNICATOR - the regular Security attribute which provides regular point-to-point and collective semantics * MPI_SECURE_COMMUNICATOR - the secure Security attribute which provides secure point-to-point and collective semantics Communicator objects are opaque objects of undefined size referenced by an object handle which is expressed as an integer in the host language. Communicator creation will create a context for the Communicator, or will accept and bind a user managed context. MPI should provide procedures for creation of each class of Communicator objects, and for deletion of any class of Communicator object. For example, handle = mpi_create_world_communicator(context, security) handle = mpi_create_group_communicator(group, context, security) mpi_delete_communicator(handle) In each creation procedure "security" is the security attribute described above. It is the responsibility of the user to ensure that all communicators with the same context also have the same security. In each creation procedure "context" may be a user managed context or may take the value MPI_NULL_CONTEXT (or something like that :-) in which case the creation procedure also creates a context for the communicator. If the creation procedure creates a context then the procedure synchronises the calling processes (all processes for a WorldCommunicator and the group of processes for a GroupCommunicator) and returns the same context to each copy of the communicator object. If a user managed context was supplied then the procedure is process local and it is the responsibility of the user to ensure that each user managed context is bound to no more than one communicator at any time. In the GroupCommunicator creation procedure "group" is a handle to a group description. The communicator deletion procedure deletes the bound context if that context was created in the communicator creation procedure but does not delete a user managed context. Short Examples -------------- A user program which only makes use of two user reserved contexts and makes no use of process groupings can "enable" the user reserved contexts by creating WorldCommunicator objects. For example, c0 = mpi_create_world_communicator(0,MPI_DEFAULT_COMMUNICATOR) c1 = mpi_create_world_communicator(1,MPI_DEFAULT_COMMUNICATOR) A ClassA library can accept a communicator object as argument. For example, void class_a_procedure(int communicator, ...) { /* do it */ } A ClassB library can accept a group as argument and create private GroupCommunicator objects. For example, void class_b_procedure(int group, ...) { static int communicator = MPI_NULL_COMMUNICATOR; if (communicator == MPI_NULL_COMMUNICATOR) { communicator = mpi_create_group_communicator(group, MPI_NULL_CONTEXT, MPI_SECURE_COMMUNICATOR); } /* do it */ } This example could be generalised by adding a group "cache" facility as described by Rik Littlefield. Point-to-point communication ---------------------------- The point-to-point (intra)communication procedures have a generic process and message addressing form (communicator, process_label, message_label). I shall deal with Send and Receive separately. Send(communicator, process-label, message-label) ---- * communicator is a WorldCommunicator or a GroupCommunicator * process-label is { the (process local) identifier of the receiver when { communicator is a WorldCommunicator { { the rank in communicator.group of the receiver when { communicator is a GroupCommunicator * message-label is the message tag in communicator.context.
The point-to-point communication is REGULAR if communicator.security has the value MPI_REGULAR_COMMUNICATOR, and SECURE if communicator.security has the value MPI_SECURE_COMMUNICATOR. Recv(communicator, process-label, message-label) ---- * communicator is a WorldCommunicator or a GroupCommunicator * process-label is { the (process local) identifier of the receiver when { communicator is a WorldCommunicator { { the rank in communicator.group of the receiver when { communicator is a GroupCommunicator { { a wildcard value in either case * message-label is the message tag in communicator.context or a wildcard value The point-to-point communication is REGULAR if communicator.security has the value MPI_REGULAR_COMMUNICATOR, and SECURE if communicator.security has the value MPI_SECURE_COMMUNICATOR. Collective communication ------------------------ The WorldCommunicator is not valid for MPI collective communication. The GroupCommunicator is valid for MPI collective communication procedures. The collective communication is REGULAR if communicator.security has the value MPI_REGULAR_COMMUNICATOR, and SECURE if communicator.security has the value MPI_SECURE_COMMUNICATOR. o--------------------o Comments, questions, (flames :-), please! For your convenience, my plan now is to go into a session of deep thought regarding intercommunication, the work we have done at EPCC, and MPI. I will then discuss these thoughts with my colleagues here, and promise to return quality discussion of intercommunication to you sometime next week. [If anyone wants to discuss intercommunication with me, I prefer to do so privately until I have really thought longer and harder than before.] I have an outstanding reply to Paul Pierce's recent letter, which I shall make now. I'll be off-line for a while, probably come on-line again Sunday, and will reply to letters which I hope you will write in a reactive and less prolific fashion. Happy reading :-) Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Apr 9 16:00:26 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AB06763; Fri, 9 Apr 93 16:00:26 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05670; Fri, 9 Apr 93 16:00:03 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 9 Apr 1993 16:00:02 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05655; Fri, 9 Apr 93 16:00:00 -0400 Date: Fri, 9 Apr 93 20:59:57 BST Message-Id: <3490.9304091959@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context: CORRECTION to previous message To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Dear MPI Colleagues An astute colleague here has pointed out two silly errors and some exceptionally bad phrasing in my previous letter "Subject: mpi-context: context management and group binding (long)". When describing point-to-point receive, please replace the two erroneous occurrences of "receiver" by "sender". Cut and paste errors, sorry. In the final paragraph I am inviting your replies and informing you that I personally will be in a reactive and less prolific mode of operation. The wording implies that I am asking you to be reactive and less prolific, which of course I would not ask.
Tired and hungry errors (its 9pm here now, Easter Friday), sorry. Best Wishes Lyndon "the prolific" ps thanks Al :-) /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Apr 9 16:20:24 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA07110; Fri, 9 Apr 93 16:20:24 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06508; Fri, 9 Apr 93 16:19:39 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 9 Apr 1993 16:19:38 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06500; Fri, 9 Apr 93 16:19:36 -0400 Date: Fri, 9 Apr 93 21:19:33 BST Message-Id: <3512.9304092019@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: mpi-context: context and group (longer) To: prp@SSD.intel.com In-Reply-To: prp@SSD.intel.com's message of Thu, 08 Apr 93 11:29:42 -0700 Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu Paul Pierce writes: > > For me the arguments have piled up in favour of context and group > > being separate and independent entities. > > > > Lyndon > > I agree, although I also see merit in associating a context with a group. > Hey, some consensus here. Magic! BTW, Paul, I wanted to ask what you thought about my suggestions for secure send/receive being bound to a context kind of thing (now communicator object as of last mail to mpi-context), as opposed to having different calls. I think you are right about the microscopic effect on the code. I just tried to give both global default control and per module instance control over the security question. > Consider a SPMD program with these calls. Assume the calls are loosely > synchronous. > > call to LibA (Group1) > call to LibB (Group1) > call to LibB (Group2) > > In a loosely synchronous environment, messages for the next call can come in > before the previous one has completed. Here we see two forms of overlap. > > Within the call to LibA, we might get messages from processes which have already > entered LibB. If LibA and LibB are independently written, they might use some > of the same tags. To avoid messages from LibB matching receives in LibA, we > must use different contexts. If we have static contexts, allocated when the > libraries are initialized, each call in the library can quickly provide the > context to its point-to-point calls. If we only have dynamic contexts, > especially if contexts are carried inside groups, then a library must be > prepared to dynamically allocate a new context on any call when it sees a new > group. I know we discussed ways to do this locally, so the context could be > created and cached locally on the fly without communication, but I find the > idea of incorporating such code into every library call horrifying. Paul, I have a model for libraries like this, which in my mail to mpi-comm "Subject: mpi-comm: various (long)" I referred to as ClassB libraries, which maybe you might want to think about. It's quite simple. We write libraries just like this, which are akin to encapsulated objects. We think in terms of library instances. 
The library provides an instance constructor which accepts a group, creates context(s) for the instance and constructs the instance, returning an instance id to the user which is used to refer to the instance for all calls. That is, all calls including and up to the instance destructor, which asks an instance to destruct itself. Our experience is that users do not find it difficult to manage this model for ClassB libraries. > > So I propose that we need two forms of context, one that is quite static for > protecting code, and one that is more dynamic for protecting groups. I cannot see any difference between the latter of these two contexts "more dynamic for protecting groups" and a global group identifier. > The only mechanism I know of that is adequate for protecting code is context > allocated via a nameserver. In MIMD programs, one cannot say much about the > order in which libraries are initialized. Thus, if context is statically > allocated at initialization time, there must be a way to obtain the global > context value for a piece of code independently of other processes. A more > static method, such as a MPI registry or a "dollar bill server" has the > disadvantage of requiring a much larger value range for context. That uses > precious bits in the envelope of every message. Once a context is allocated to > a piece of code, it can be safely stored in a global variable without > endangering thread safety or shared memory implementations, because no matter > how many instantiations of the library store into the variable, they will > always store the same value. We find that with regard to operations within a process group, and in particular to library instance construction and destruction described above, the main user program has a highly SPMD nature. So we can exploit sequencing. This is a most valuable learning experience, because we had similar thoughts to those you express here, implemented a name server, and never really needed it (for this purpose). > > The point-to-point calls might be configured to accept (group, rank, context). > In this configuration, the static context protecting the code is passed in > explicitly, and the context protecting the group is inside the group object. > > I'm not sure how this interacts with cross-group message passing. Perhaps the > simplest solution is to use a well-known group context in such cases, which > effectively disables group protection. As I point out above, your "group protecting context hidden inside group" really does just seem to me to be a global group identifier. Within the definition of context I see no reason why we necessarily will cause a problem with intercommunication. When you say "use a well-known group context in such cases" I take it you mean a common ancestor like the "group context" of all processes or something? I have promised, I will return quality discussion on intercommunication next week. Did the points in this reply letter help, Paul?
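A minimal sketch of the instance model described above, using (purely as placeholders) the communicator calls proposed in the earlier "context management and group binding" letter; the library names, constant values and the solve routine are invented for illustration.

#include <stdlib.h>

/* Placeholders for the proposed calls and constants; values are invented. */
extern int  mpi_create_group_communicator(int group, int context, int security);
extern void mpi_delete_communicator(int communicator);
#define MPI_NULL_CONTEXT        (-1)
#define MPI_SECURE_COMMUNICATOR 2

typedef struct {
    int group;
    int communicator;   /* private GroupCommunicator for this instance */
} libb_instance;

/* Instance constructor: accepts a group, creates the instance's context
 * (via the communicator constructor) and returns an instance id. */
libb_instance *libb_create(int group)
{
    libb_instance *inst = malloc(sizeof *inst);
    inst->group = group;
    inst->communicator = mpi_create_group_communicator(group, MPI_NULL_CONTEXT,
                                                       MPI_SECURE_COMMUNICATOR);
    return inst;
}

/* All library calls refer to the instance; sends and receives inside would
 * use (inst->communicator, rank, tag). */
void libb_solve(libb_instance *inst)
{
    (void)inst;   /* the real work would go here */
}

/* Instance destructor. */
void libb_destroy(libb_instance *inst)
{
    mpi_delete_communicator(inst->communicator);
    free(inst);
}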
Best Wishes Lyndon "the temporarily less prolific" /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Apr 9 18:45:09 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA08745; Fri, 9 Apr 93 18:45:09 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12895; Fri, 9 Apr 93 18:44:25 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 9 Apr 1993 18:44:24 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from ssd.intel.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12872; Fri, 9 Apr 93 18:43:49 -0400 Received: from ernie.ssd.intel.com by SSD.intel.com (4.1/SMI-4.1) id AA24612; Fri, 9 Apr 93 15:43:36 PDT Message-Id: <9304092243.AA24612@SSD.intel.com> To: lyndon@epcc.ed.ac.uk Cc: prp@SSD.intel.com, mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu Subject: Re: mpi-context: context and group (longer) In-Reply-To: Your message of "Fri, 09 Apr 93 21:19:33 BST." <3512.9304092019@subnode.epcc.ed.ac.uk> Date: Fri, 09 Apr 93 15:43:35 -0700 From: prp@SSD.intel.com > From: L J Clarke > > Paul Pierce writes: > > > Consider a SPMD program with these calls. Assume the calls are loosely > > synchronous. > > > > call to LibA (Group1) > > call to LibB (Group1) > > call to LibB (Group2) > > > > In a loosely synchronous environment, messages for the next call can come in > > before the previous one has completed. Here we see two forms of overlap. > > > > Within the call to LibA, we might get messages from processes which have already > > entered LibB. > > Paul, I have a model for libraries like this, which in my mail to > mpi-comm "Subject: mpi-comm: various (long)" I referred to as ClassB > libraries ... Yes, that is a matching concept. > We think in terms of library instances. ... I found this part hard to understand. However, if you propose to use the mechanisms in your just previous mail: > A ClassB library can accept a group as argument and create private > GroupCommunicator objects. > For example, > void class_b_procedure(int group, ...) > { > static int communicator = MPI_NULL_COMMUNICATOR; > > if (communicator != MPI_NULL_COMMUNICATOR) > { > communicator = mpi_create_group_communicator(group, > MPI_NULL_CONTEXT, > MPI_SECURE_COMMUNICATOR); > } > > /* do it */ > } > This example could be generalised by adding a group "cache" facility > as described by Rik Littlefield. First of all this code doesn't work - if the library is called with a different group (see the LibB(Group{1,2}) example above) it will mistakenly use the communicator for Group1 when called for Group2. This problem can be fixed using cacheing. But... This is exactly the sort of too-dynamic, too-intrusive mechanism I find horrifying. I can't conceive of unleashing on the unsuspecting world a standard that requires you to put code like that in every library call (its even more complex with cacheing.) We _must_ come up with a better mechanism. The example mechanism I talked about might look like this: int my_context; void class_b_initialize() /* Called once at the beginning of time */ { my_context = create_and_or_lookup_context("mylib"); } void class_b_procedure(int group, ...) { /* do it using (group, rank, my_context) */ } Note the total absence of context maintenance in the arbitrary library procedure. 
For group protection, the group must contain an additional embedded context. > > So I propose that we need two forms of context, one that is quite static for > > protecting code, and one that is more dynamic for protecting groups. > > I cannot see any difference between the latter of these two contexts > "more dynamic for protecting groups" and a global group identifier. You are right, its the same. My point is that group context is necessary but _not_ sufficient. > > The only mechanism I know of that is adequate for protecting code is context > > alloctated via a nameserver. ... > > We find that with regard to operations within a process group, and in > particular to library instance construction and desctruction decribed > above, the main user program has a highly SPMD nature. So we can > exploit sequencing. This is a most valuable learning experience, > because we had similar thoughts to those you express here, implemented a > name server, and really didn't need it once (for this purpose). You learned that you have a SPMD universe. We have a mostly SPMD universe, but we have customers already with MPMD applications. One can argue that sequencing is acceptable for a static-process model, but it is not adequate for dynamic processes. We have talked about defining MPI in such a way that it is complete for a static process model but without limiting its extension to a dynamic process model. So we must be careful - if we assume sequencing now, we must do it in a way that allows for a nameserver later. > Lyndon "the temporarily less prolific" Paul From owner-mpi-context@CS.UTK.EDU Fri Apr 9 22:58:14 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA11834; Fri, 9 Apr 93 22:58:14 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20936; Fri, 9 Apr 93 22:58:02 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 9 Apr 1993 22:58:01 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20883; Fri, 9 Apr 93 22:56:53 -0400 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Fri, 9 Apr 93 19:45 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA09922; Fri, 9 Apr 93 19:43:30 PDT Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA22459; Fri, 9 Apr 93 19:43:26 PDT Date: Fri, 9 Apr 93 19:43:26 PDT From: rj_littlefield@pnlg.pnl.gov Subject: proposal -- context and tag limits To: lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu, mpsears@newton.cs.sandia.gov Cc: d39135@carbon.pnl.gov, gropp@mcs.anl.gov, mpi-collcomm@cs.utk.edu, mpi-envir@cs.utk.edu, mpi-pt2pt@cs.utk.edu Message-Id: <9304100243.AA22459@sodium.pnl.gov> X-Envelope-To: mpi-pt2pt@cs.utk.edu, mpi-envir@cs.utk.edu, mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu Lyndon et.al. write: > ... This seems to say that the bit > length of the envelope is fixed to some number of bits and the more > fields we want to cram into the envelope the shorter the bit lengths of > fields must be. Is there a good reason why the bit length of the > envelope shoud be fixed in this fashion, or perhaps are you arguing > that the bit length of the envelope should be as short as possible? > > > This is a question vendors might answer: how many > > context values and tag values are you willing to support on future > > platforms and how many are you willing to back fit on existing ones? > > Yes, this would be a good question for the vendors indeed. > > VENDORS - PLEASE PLEASE PLEASE DO ADVISE US ON THIS ONE. 
I wonder what kind of useful advice vendors could really give us. Hardware support boils down to a question of getting faster performance in exchange for some relatively small resource limit. But in almost every case I can think of, such limits are made functionally transparent to the user by automatic fallback to some slower mechanism without the resource limit. Thus we have.. fixed size register sets with compilers that spill to memory, fixed size caches with automatic flush/reload from main memory, fixed size TLB's with cpu traps for TLB reload, fixed size physical memory with virtual memory support, and so on. The only counterexample that pops to mind is fixed-length numeric values, for which reasonably well established conventions exist. No such conventions currently exist regarding tag and context values. ============ PROPOSAL TO ENVIRONMENT COMMITTEE ============== The MPI specification should 1. require that all MPI implementations provide functional support for specified generous limits (e.g., 32 bits) on tag and context values, and 2. suggest that vendors provide a system-specific mechanism by which the user can optionally specify tag and context limits that the program agrees to abide by. Even the form of these limits should remain unspecified since they may vary from system to system. ======================== END PROPOSAL ======================== Further discussion... If a vendor wishes to provide hardware support to enhance performance for some stricter limits, and if some people are able and willing to write programs within those limits, that's great. Those people on those machines will be lark happy. If the performance increase is substantial, and I'm on one of those machines, and my program is simple enough, I'll probably be one of those people. However, I am not aware of any system on which generous limits could not be supported, albeit with some loss of performance compared to staying within the (currently hypothetical) hardware-supported limits. Everyone I know would MUCH prefer suboptimal performance over HAVING to rewrite applications to conform to varying and inconsistent hard limits. Yes, I recall the many arguments against mandating specific limits. But, I claim that those arguments are misdirected. They are based on analogy to things like word length and memory size, which I again note are subject to well established conventions and principles. (You can't run big programs on small machines, and we pretty much agree about what "big" and "small" mean.) In the case of context and tag values, such conventions do not exist, and a very wide range of conflicting limits have been discussed at various times and places. I believe that we will not meet our goal of portability if we do not specify usable limits on tag and context values. 
--Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Mon Apr 12 13:57:41 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA19566; Mon, 12 Apr 93 13:57:41 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24449; Mon, 12 Apr 93 13:56:46 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 12 Apr 1993 13:56:35 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24426; Mon, 12 Apr 93 13:56:31 -0400 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Mon, 12 Apr 93 10:42 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA11608; Mon, 12 Apr 93 10:40:32 PDT Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA24711; Mon, 12 Apr 93 10:40:28 PDT Date: Mon, 12 Apr 93 10:40:28 PDT From: rj_littlefield@pnlg.pnl.gov Subject: contexts examples/problems 1-3 To: jwf@parasoft.com, lyndon@epcc.ed.ac.uk, mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu, mpsears@cs.sandia.gov, snir@watson.ibm.com, tony@cs.msstate.edu Cc: d39135@carbon.pnl.gov Message-Id: <9304121740.AA24711@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu Folks, As Tony Skjellum noted, I am organizing a set of test cases & issues to be addressed by the various context proposals. I have formulated these as a set of "problems" such as might be found on an essay test. Here are draft versions of the first three "problem statements" for the context proposals. I anticipate that at least one more problem will be submitted. Please tell me about defects and inadequacies in these problems. If you have a favorite concern, now is the time to get it reflected in the problem set. Thanks, --Rik Littlefield BACKGROUND INFO . Be sure that your point-to-point and group/context control calls are specified elsewhere in your proposal. PROBLEM 1 (simple): . Specify your calling sequence for an MPI circular-shift routine that operates on a contiguous buffer of double precision float values. E.g. you might specify MPI_CSHIFTB (inbuf,outbuf,datatype,len,group,shift) where IN inbuf input buffer OUT outbuf output buffer IN datatype symbolic constant MPI_DOUBLE IN len length of inbuf (# of elements) IN group handle to group descriptor IN shift number of processes to shift . Assume that a user desires to write a new collective communication routine with the same calling sequence as cshift, but with different semantics. To be definite, this routine exchanges data in the pattern needed for one stage in a butterfly. I.e., the process of rank i exchanges data with the process of rank i+shift*(1-2*(i%(2*shift)/shift)). Call this routine bflyexchange. (A sketch of this partner computation appears after Problem 3 below.) . Show an implementation of bflyexchange in terms of your point-to-point and group/context control calls. . Specify the conditions necessary to ensure correct operation of this implementation. E.g., you might say "safe under all conditions", "safe if and only if no other routine issues wildcard receives in the same group/context", "safe if and only if context and tag are unique", or something like that. Making these conditions simple and broad is good. Getting caught stating conditions that are too broad is bad. . Discuss the performance of this implementation.
Note that the semantics of bflyexchange require only a single send and receive per process. Explain how this level of performance can be achieved or approached by your implementation. If you assert that group control operations can be done without communications, explain how this works and what implications it has on other system parameters, e.g., the number and range of context values. PROBLEM 2 (medium) . Write a "guidelines for library developers and users" document that explains how to write and call libraries in order to maintain message-passing isolation between the various libraries and between the libraries and the user program. Be sure to explain how to achieve good efficiency. Be complete, but brief. (Long explanations can be interpreted as indicating a complex design.) You may wish to describe two or more self-consistent strategies, along the lines of Lyndon's "ClassA" and "ClassB" libraries as discussed earlier on mpi-context. PROBLEM 3 (hard?) This problem is paraphrased from one posed by Jon Flower. The task is to simulate the host-node programming model through the use of "host" and "node" groups. This is interesting both for backward- compatibility and for its inter-group communication requirements. As stated by Jon, this problem really spans subcommittees. For the sake of the present discussion, I have reformulated it in terms of an SPMD programming model in which a black-box function is used to tell each process whether it's the host or a node. Note in particular that nodes don't know the id of the host. Here is pseudo-code for the desired program: main() { if (I_am_the_host()) host (); else node (); } host (); /* * Form two groups containing: * i) only the host process. * ii) the node processes. */ host_group = mpi_...; node_group = mpi_...; /* * Broadcast from host to all nodes; using "ALL" group. * (It would be nice to have inter-group broadcast for * this since that is more like "current practice".) */ myrank = mpi_...; mpi_bcast( ...., myrank, MPI_GROUP_ALL, ...); /* * Send individual message to each node in turn. */ for(node=0; node < MPI_ORDER(node_group); node++) { mpi_send( ..., (node_group, node), ...); } /* * Receive result from node 0. */ mpi_recv( ..., (node_group, 0), ...); } node() { /* * Form two groups containing: * i) only the host process. * ii) the node processes. */ host_group = mpi_...; node_group = mpi_...; /* * Receive bcast from host using ALL group. */ host_rank = mpi_...; mpi_bcast(..., host_rank, MPI_GROUP_ALL, ...); /* * Receive single message from host. */ mpi_recv(..., 0, host_group, ...); /* * Send point-to-point messages in node group. */ myrank = mpi_... (node_group); nnodes = mpi_... (node_group); sendhandle = mpi_isend( ..., (node_group,(myrank+1)%nnodes), ...); mpi_recv ( ..., (node_group,(myrank-1+nnodes)%nnodes), ...); mpi_complete (sendhandle); /* * Compute global sum in nodes only. */ mpi_reduce(... , node_group, MPI_SUM_OP, ...); /* * Node 0 sends sum to host. */ if(myrank == 0) mpi_send(..., 0, host_group, ...); } . Show how to implement this pseudo-code using your point-to-point and group calls. Note that this code wants to think of node processes in terms of their rank in the node_group, not the ALL group. Be sure to show all details of any translations that are required. . Discuss how the collective comms and point-to-point messages are kept separate, even if the point-to-point calls are changed to used wildcards. 
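A minimal C sketch of the pairing behind Problem 1's bflyexchange; as printed above the expression has lost a closing parenthesis, and the function below is the presumed intent (everything else about the routine is deliberately left to the proposal under test):

/* Butterfly partner for one stage of width "shift": ranks 0..shift-1
   pair with ranks shift..2*shift-1, the next block of 2*shift ranks
   likewise, and so on.  E.g. shift = 2 pairs 0<->2, 1<->3, 4<->6, 5<->7. */
int bfly_partner(int myrank, int shift)
{
    return myrank + shift * (1 - 2 * ((myrank % (2 * shift)) / shift));
}

bflyexchange itself then needs exactly one send and one receive per process, addressed to (group, bfly_partner(myrank, shift)), under whatever tag/context discipline the proposal prescribes.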
---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Mon Apr 12 17:56:06 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA24573; Mon, 12 Apr 93 17:56:06 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10975; Mon, 12 Apr 93 17:54:58 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 12 Apr 1993 17:54:57 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10955; Mon, 12 Apr 93 17:54:02 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA13925; Mon, 12 Apr 93 16:52:04 CDT Date: Mon, 12 Apr 93 16:52:04 CDT From: Tony Skjellum Message-Id: <9304122152.AA13925@Aurora.CS.MsState.Edu> To: tony@aurora@cs.msstate.edu, mpsears@newton.cs.sandia.gov Subject: Re: the gathering Cc: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu Mark, You should explain what your model implies about the starting of processes. If you assume that processes have been started by MPI, that is OK (generally a tacit assumption of MPI1), but in any event you should tell us what the process is told at the moment of spawning (eg, about ALL groups, or its name, etc), that will help it become part of MPI-based communication. We need to see how "safe"/"unsafe" it will be to start MPI in every model. If it is extremely difficult/simple to get from the "just-spawned" state to the "MPI-up-and-running" sate, that should be made clear. I am happy to answer more questions! Please shoot away. - Tony PS Because this is of general interest to all readers, I am echoing to the reflector. I hope that is OK with you. ----- Begin Included Message ----- From mpsears@newton.cs.sandia.gov Mon Apr 12 15:08:18 1993 To: tony@aurora@cs.msstate.edu Subject: Re: the gathering Date: Mon, 12 Apr 93 14:10:36 MST From: mpsears@newton.cs.sandia.gov Content-Length: 243 Tony, I need a little clarification of what you mean by "Include discussion of how starting works and what the spawning semantics must provide them (or through an initial message) so that they can work." Starting and spawning what? mark ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Tue Apr 13 05:34:14 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA15699; Tue, 13 Apr 93 05:34:14 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24925; Tue, 13 Apr 93 05:33:23 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 13 Apr 1993 05:33:22 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24917; Tue, 13 Apr 93 05:33:19 -0400 Date: Tue, 13 Apr 93 10:32:55 BST Message-Id: <5989.9304130932@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: mpi-context: context and group (longer) To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Dear MPI Colleagues This never got to you, looks like I hit the wrong key in my mailer on April 10. Oops. Regards Lyndon ---- Start of forwarded text ---- > From lyndon Sat Apr 10 17:50:29 1993 > Date: Sat, 10 Apr 93 17:50:29 > From: L J Clarke > Subject: Re: mpi-context: context and group (longer) > To: prp@SSD.intel.com Paul Pierce writes: > > We think in terms of library instances. ... > > I found this part hard to understand. 
However, if you propose to use the > mechanisms in your just previous mail: > First off no way do I intend to use the simple example mechanism in my previous email. Of course it does not work if called with different groups, it is just a silly little simple example, it was never meant to work with different groups. Why did you find that part hard to understand? It is just an object oriented approach with a parallel object. I will try to explain again, hopefully making myself more clear, at the foot of this letter. I hope I can make myself clear. > This is exactly the sort of too-dynamic, too-intrusive mechanism I find > horrifying. I can't conceive of unleashing on the unsuspecting world a > standard that requires you to put code like that in every library call > (its even more complex with cacheing.) We _must_ come up with a > better mechanism. Fine, my approach will not do all of this anyway, please see below. > The example mechanism I talked about might look like this: Interesting! This example works in exactly one group/context and therefore is utterly incapable of supporting multiple instances operating on different data. > > int my_context; > > void class_b_initialize() /* Called once at the beginning of time */ > { > my_context = create_and_or_lookup_context("mylib"); > } > > void class_b_procedure(int group, ...) > { > > /* do it using (group, rank, my_context) */ > } > > I cannot see any difference between the latter of these two contexts > > "more dynamic for protecting groups" and a global group identifier. > > You are right, its the same. My point is that group context is necessary but > _not_ sufficient. I cannot concur. I know I mailed a lot of discussion letters recently. I hope you can find time to read them because the mechanisms discussed do mean that you only need the one context. > > > The only mechanism I know of that is adequate for protecting code is context > > > alloctated via a nameserver. ... > > > > We find that with regard to operations within a process group, and in > > particular to library instance construction and desctruction decribed > > above, the main user program has a highly SPMD nature. So we can > > exploit sequencing. This is a most valuable learning experience, > > because we had similar thoughts to those you express here, implemented a > > name server, and really didn't need it once (for this purpose). > > You learned that you have a SPMD universe. Negative! How can you read that from my words? We have a highly MPMD universe, with lots of groups both overlapping and distinct, which do lots of things of closed scope within the groups and lots of things of open scope between groups. We call this MSPMD or M(SPMD), meaning multiple (single program multiple data). The observation we make, and it is hardly surprising, it that the computations and communications associated with a process group are locally SPMD-like in nature. This is an entirely different observation. It is this property of a process group which allows parallel object constructors to be called in sequence. > One can argue that sequencing is acceptable for a static-process model, but it > is not adequate for dynamic processes. Eh? I don't understand what you are suggesting sequencing can be used for in static process model that it cannot in dynamic process model. Please do let me/us know. I'll second guess the sequencing you are thinking about, but I fail to understand how it relates to static/dynamic process model. This is sequencing within groups of calls of context allocators. 
This means that when the processes of a group allocate contexts as a concerted (and synchronous?) action they each bind the same usage to the N'th context generated by the group, for all N. Of course we must realise that we distinguish the N'th context generated by the process P (member of group G) from the N'th generated by the group G, since P will generally also be a member of other groups (H, ...) and such groups (H, ...) are not identical in membership to G. > We have talked about defining MPI in > such a way that it is complete for a static process model but without limiting > its extension to a dynamic process model. So we must be careful - if we assume > sequencing now, we must do it in a way that allows for a nameserver later. > Well I have no problem with this statement. It would be trivial to incorporate a name server into the discussions I have recently posted. In fact, when I get on to some helpful discussion of intercommunication I will probably be asking for some name service anyway. The explanation promised above now follows. I really will do it mainly by an exceptionally simple worked example. We view ClassB libraries as encapsulated parallel objects. Each object class has a constructor, which creates an object, and a destructor, which destroys the object. Each object also provides a number of public operators which you can use to do things with the object. We allow the object to do things with user data as well as its encapsulated data, so we have a hybrid between object oriented and the usual ad hoc. The user calls constructors in all processes within the group G in sequence. Of course, processes which are also in another group say H interleave calls to constructors across H with those across G. The constructor called within G synchronises processes within G. It's a complete doddle. Let me give a very simple example for class Simple in C, assuming that object identifiers are expressed as type "void *" and group and communicator handles are expressed also as type "long". In practice we may implement object identifiers as opaque entities, but it's simpler to show this (void *) stuff here. I omit all error checks and standard speed optimisations as well. 
#---------------------------------------- # Library fragment void * Simple_contructor(long G) { Simple_state *state Simple_alloc_state(); /* Class state memory allocator */ Simple_state->communicator = mpi_create_group_communicator(G, MPI_NULL_CONTEXT, MPI_SECURE_CONTEXT); return (void *)Simple_state; } void Simple_destructor(void *state) { Simple_state *state = (Simple_state *)state; /* Macros can be faster */ mpi_delete_communicator(Simple_state->communicator); Simple_state_dealloc(Simple_state); /* Class state memory deallocator */ return; } void Simple_maketea_double(void *state, double *data, int items) { Simple_state *state = (Simple_state *)state; /* Macros can be faster */ /* next line is just to show how easy it is to get at the communicator */ long communicator = Simple_state->communicator; /* make "items" cups of double precision tea :-) with communication using communicator */ return; } void Simple_maketea_float(void *state, float *data, int items) { Simple_state *state = (Simple_state *)state; /* Macros can be faster */ /* next line is just to show how easy it is to get at the communicator */ long communicator = Simple_state->communicator; /* make "items" cups of single precision tea :-) with communication using communicator */ return; } # #---------------------------------------- #---------------------------------------- # User fragment /* some decls */ void *simple_ga, *simple_gb, *simple_ha, *simple_hb; double data[16]; float fata[16]; /* stuff */ simple_ga = Simple_constructor(G); if (in_H) { simple_ha = Simple_constructor(H); simple_maketea_double(simple_hb, data, 16); simple_maketea_float(simple_hb, fata, 16); simple_hb = Simple_constructor(H); } simple_maketea_float(simple_ga, fata, 16); simple_gb = Simple_constructor(G); simple_maketea_double(simple_gb, data, 7); /* more stuff */ # #---------------------------------------- ---- End of forwarded text ---- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Apr 13 05:59:20 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA21553; Tue, 13 Apr 93 05:59:20 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26245; Tue, 13 Apr 93 05:58:56 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 13 Apr 1993 05:58:56 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26237; Tue, 13 Apr 93 05:58:53 -0400 Date: Tue, 13 Apr 93 10:58:49 BST Message-Id: <6027.9304130958@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: some corrections to example To: lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu In-Reply-To: L J Clarke's message of Tue, 13 Apr 93 10:32:55 BST Reply-To: lyndon@epcc.ed.ac.uk There were some silly mistakes in the example I sent out. Here I just correct that. Regards Lyndon Let me give a very simple example for class Simple in C, assuming that object identifiers are expressed as type "void *" and group and communicator handles are expressed also as type "long". In practice we may implement object identifiers as opaque entities, but it's simpler to show this (void *) stuff here. I omit all error checks and standard speed optimisations as well. 
#---------------------------------------- # Library fragment void * Simple_constructor(long G) { Simple_state *state = Simple_alloc_state(); /* Class state memory allocator */ state->communicator = mpi_create_group_communicator(G, MPI_NULL_CONTEXT, MPI_SECURE_CONTEXT); return (void *)state; } void Simple_destructor(void *id) { Simple_state *state = (Simple_state *)id; /* Macros can be faster */ mpi_delete_communicator(state->communicator); Simple_dealloc_state(state); /* Class state memory deallocator */ return; } void Simple_maketea_double(void *id, double *data, int items) { Simple_state *state = (Simple_state *)id; /* Macros can be faster */ /* next line is just to show how easy it is to get at the communicator */ long communicator = state->communicator; /* make "items" cups of double precision tea :-) with communication using communicator */ return; } void Simple_maketea_float(void *id, float *data, int items) { Simple_state *state = (Simple_state *)id; /* Macros can be faster */ /* next line is just to show how easy it is to get at the communicator */ long communicator = state->communicator; /* make "items" cups of single precision tea :-) with communication using communicator */ return; } # #---------------------------------------- #---------------------------------------- # User fragment /* some decls */ void *simple_ga, *simple_gb, *simple_ha, *simple_hb; double data[16]; float fata[16]; /* stuff */ simple_ga = Simple_constructor(G); if (in_H) { simple_ha = Simple_constructor(H); Simple_maketea_double(simple_ha, data, 16); Simple_maketea_float(simple_ha, fata, 16); simple_hb = Simple_constructor(H); } Simple_maketea_float(simple_ga, fata, 16); simple_gb = Simple_constructor(G); Simple_maketea_double(simple_gb, data, 7); /* more stuff */ # #---------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Mon Apr 19 14:22:00 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA07511; Mon, 19 Apr 93 14:22:00 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03824; Mon, 19 Apr 93 14:21:04 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Apr 1993 14:21:03 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03816; Mon, 19 Apr 93 14:21:02 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA17610; Mon, 19 Apr 1993 14:21:01 -0400 Date: Mon, 19 Apr 1993 14:21:01 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9304191821.AA17610@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Names for communicator objects Has anyone come up with sensible names for communicator objects? At the last MPI meeting they were called floopy, bongo, and bingo. I suggest they be called C1, C2, and C3 communicators instead. David -------------------------------------------------------------------------- | David W. Walker | Office : (615) 574-7401 | | Oak Ridge National Laboratory | Fax : (615) 574-0680 | | Building 6012/MS-6367 | Messages : (615) 574-1936 | | P. O. 
Box 2008 | Email : walker@msr.epm.ornl.gov | | Oak Ridge, TN 37831-6367 | | -------------------------------------------------------------------------- From owner-mpi-context@CS.UTK.EDU Mon Apr 19 14:30:17 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA07582; Mon, 19 Apr 93 14:30:17 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04340; Mon, 19 Apr 93 14:29:56 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Apr 1993 14:29:55 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04332; Mon, 19 Apr 93 14:29:54 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA11638; Mon, 19 Apr 93 13:29:47 CDT Date: Mon, 19 Apr 93 13:29:47 CDT From: Tony Skjellum Message-Id: <9304191829.AA11638@Aurora.CS.MsState.Edu> To: walker@rios2.epm.ornl.gov Subject: Re: Names for communicator objects Cc: mpi-context@cs.utk.edu We have sensible names for these things. It is part of the proposal X rewrite The highest level would be inter-group-communicator, the next lowest is intra-group-communication, and I am not sure what the lowest should be (will review notes). C1, C2, C3 do not contain enough semantic information to be helpful, to my mind... - Tony ----- Begin Included Message ----- From owner-mpi-context@CS.UTK.EDU Mon Apr 19 13:22:12 1993 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Apr 1993 14:21:03 EDT Date: Mon, 19 Apr 1993 14:21:01 -0400 From: walker@rios2.epm.ornl.gov (David Walker) To: mpi-context@cs.utk.edu Subject: Names for communicator objects Content-Length: 729 Has anyone come up with sensible names for communicator objects. At the last MPI meeting they were called floopy, bongo, and bingo. I suggest they be called C1, C2, and C3 communicators instead. David -------------------------------------------------------------------------- | David W. Walker | Office : (615) 574-7401 | | Oak Ridge National Laboratory | Fax : (615) 574-0680 | | Building 6012/MS-6367 | Messages : (615) 574-1936 | | P. O. Box 2008 | Email : walker@msr.epm.ornl.gov | | Oak Ridge, TN 37831-6367 | | -------------------------------------------------------------------------- ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Mon Apr 19 14:43:44 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA07750; Mon, 19 Apr 93 14:43:44 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05458; Mon, 19 Apr 93 14:43:10 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Apr 1993 14:43:09 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05450; Mon, 19 Apr 93 14:43:08 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA17381; Mon, 19 Apr 1993 14:43:07 -0400 Date: Mon, 19 Apr 1993 14:43:07 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9304191843.AA17381@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Re: Names for communicator objects If you're going to use long names like inter-group-communicator, intra-group-communication, and ??? (how about no-group-communicator), at least make sure they can be shortened to distinct acronyms. I think it is not too much to ask people to remember that C1, C2, and C3 stand for the three communicators in increasing order of complexity. 
"Inter" and "intra" will be a problem because there will be a class of people who don't remember that the former is Latin for "between", and the latter is Latin for "within." Also if you're listening to a talk or squinting at someones illegible viewgraphs you might not discern the distinction. David ----- Begin Included Message ----- From root Mon Apr 19 14:29:49 1993 Received: from Aurora.CS.MsState.Edu by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA17617; Mon, 19 Apr 1993 14:29:48 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA11638; Mon, 19 Apr 93 13:29:47 CDT Date: Mon, 19 Apr 93 13:29:47 CDT From: Tony Skjellum Message-Id: <9304191829.AA11638@Aurora.CS.MsState.Edu> To: walker@rios2.epm.ornl.gov Subject: Re: Names for communicator objects Cc: mpi-context@cs.utk.edu Status: R We have sensible names for these things. It is part of the proposal X rewrite The highest level would be inter-group-communicator, the next lowest is intra-group-communication, and I am not sure what the lowest should be (will review notes). C1, C2, C3 do not contain enough semantic information to be helpful, to my mind... - Tony From owner-mpi-context@CS.UTK.EDU Mon Apr 19 16:36:04 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA11316; Mon, 19 Apr 93 16:36:04 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14100; Mon, 19 Apr 93 16:35:21 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Apr 1993 16:35:20 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14081; Mon, 19 Apr 93 16:35:17 -0400 Received: from carbon.pnl.gov (130.20.188.38) by pnlg.pnl.gov; Mon, 19 Apr 93 13:31 PST Received: from sodium.pnl.gov by carbon.pnl.gov (4.1/SMI-4.1) id AA04004; Mon, 19 Apr 93 13:28:59 PDT Received: by sodium.pnl.gov (4.1/SMI-4.0) id AA03387; Mon, 19 Apr 93 13:28:56 PDT Date: Mon, 19 Apr 93 13:28:56 PDT From: rj_littlefield@pnlg.pnl.gov Subject: RE: Names for communicator objects To: mpi-context@cs.utk.edu, walker@rios2.epm.ornl.gov Cc: d39135@carbon.pnl.gov Message-Id: <9304192028.AA03387@sodium.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu > We have sensible names for these things. > It is part of the proposal X rewrite > > The highest level would be inter-group-communicator, the next lowest > is intra-group-communication, and I am not sure what the lowest > should be (will review notes). > > C1, C2, C3 do not contain enough semantic information to be helpful, > to my mind... > > - Tony I share David's concerns about the subtle difference between "inter" and "intra", and likewise Tony's about "C1" etc not being mnemonic. How about "G0", "G1", and "G2", for both level of complexity and the number of groups involved? 
--Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Mon Apr 19 18:28:14 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA15382; Mon, 19 Apr 93 18:28:14 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23910; Mon, 19 Apr 93 18:26:50 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Apr 1993 18:26:46 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23900; Mon, 19 Apr 93 18:26:42 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA18691; Mon, 19 Apr 93 17:26:36 CDT Date: Mon, 19 Apr 93 17:26:36 CDT From: Tony Skjellum Message-Id: <9304192226.AA18691@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, walker@rios2.epm.ornl.gov, rj_littlefield@pnlg.pnl.gov Subject: RE: Names for communicator objects Cc: d39135@carbon.pnl.gov I would suggest GRP0, GRP1, GRP2, to distinguish between GRIDS (eg, synonym for virtual topologies) versus GROUPS. - Tony ----- Begin Included Message ----- From owner-mpi-context@CS.UTK.EDU Mon Apr 19 15:36:10 1993 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Apr 1993 16:35:20 EDT Date: Mon, 19 Apr 93 13:28:56 PDT From: rj_littlefield@pnlg.pnl.gov Subject: RE: Names for communicator objects To: mpi-context@cs.utk.edu, walker@rios2.epm.ornl.gov Cc: d39135@carbon.pnl.gov X-Envelope-To: mpi-context@cs.utk.edu Content-Length: 878 > We have sensible names for these things. > It is part of the proposal X rewrite > > The highest level would be inter-group-communicator, the next lowest > is intra-group-communication, and I am not sure what the lowest > should be (will review notes). > > C1, C2, C3 do not contain enough semantic information to be helpful, > to my mind... > > - Tony I share David's concerns about the subtle difference between "inter" and "intra", and likewise Tony's about "C1" etc not being mnemonic. How about "G0", "G1", and "G2", for both level of complexity and the number of groups involved? --Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Tue Apr 20 06:04:44 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA09503; Tue, 20 Apr 93 06:04:44 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA16292; Tue, 20 Apr 93 06:03:48 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Apr 1993 06:03:47 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA16284; Tue, 20 Apr 93 06:03:43 -0400 Date: Tue, 20 Apr 93 11:03:35 BST Message-Id: <4613.9304201003@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: RE: Names for communicator objects To: Tony Skjellum , mpi-context@cs.utk.edu In-Reply-To: Tony Skjellum's message of Mon, 19 Apr 93 17:26:36 CDT Reply-To: lyndon@epcc.ed.ac.uk Cc: d39135@carbon.pnl.gov Well, to pitch in my penny, I had already suggested some kind of name for the two simplest communicator objects in a previous letter. 
Well, never mind, because I am composing a letter which, among other things, suggests that: 1) The first two ones are called an "intracommunicator" object, which is composed of a context and a group (possibly null group). 2) The third one is called an "intercommunicator" object, which is composed of two intracommunicator objects. I understand the point Dave makes about missing the difference between "er" and "ra" embedded in the string "inxxcommunicator". I don't rate it as importantly as Dave, but that's just my opinion. I would have thought that few enough people educated enough to come into contact with this MPI stuff would not know the difference between the English prefix "intra" and the English prefix "inter" --- I do believe I understood this before leaving high school and my English education was rather poor, and my school never did teach Latin (it did teach Russian :-). Okay, so this is just my opinion, but I really don't think that enumerating the objects is very helpful to the user. If we want to make up concise short names, so users have less to type (which of course is irrelevant to me as I really don't mind typing, as you may have noticed, and at any rate my editor can type the long things for me :-), then imho they really should convey a bit more than just groups. How about ... RACOM - intRACOMmunicator - is Floopy and Bongo as above. ERCOM - intERCOMmunicator - is Bingo as above. Or even ... ACOM - intrACOMmunicator - is Floopy and Bongo as above. ECOM - intErCOMmunicator - is Bingo as above. Or along the cryptic line of thinking ... CG - ContextGroup - is Floopy and Bongo. CGG - ContextGroupGroup - is Bingo. Regards Lyndon > > I would suggest GRP0, GRP1, GRP2, to distinguish between GRIDS > (eg, synonym for virtual topologies) versus GROUPS. > - Tony > > > ----- Begin Included Message ----- > > >From owner-mpi-context@CS.UTK.EDU Mon Apr 19 15:36:10 1993 > X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Apr 1993 16:35:20 EDT > Date: Mon, 19 Apr 93 13:28:56 PDT > From: rj_littlefield@pnlg.pnl.gov > Subject: RE: Names for communicator objects > To: mpi-context@cs.utk.edu, walker@rios2.epm.ornl.gov > Cc: d39135@carbon.pnl.gov > X-Envelope-To: mpi-context@cs.utk.edu > Content-Length: 878 > > > We have sensible names for these things. > > It is part of the proposal X rewrite > > > > The highest level would be inter-group-communicator, the next lowest > > is intra-group-communication, and I am not sure what the lowest > > should be (will review notes). > > > > C1, C2, C3 do not contain enough semantic information to be helpful, > > to my mind... > > > > - Tony > > I share David's concerns about the subtle difference between > "inter" and "intra", and likewise Tony's about "C1" etc not > being mnemonic. > > How about "G0", "G1", and "G2", for both level of complexity > and the number of groups involved? 
> > --Rik > > ---------------------------------------------------------------------- > rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield > Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 > Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 > > > ----- End Included Message ----- > > /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Apr 20 13:26:47 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA02987; Tue, 20 Apr 93 13:26:47 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA18378; Tue, 20 Apr 93 13:24:45 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Apr 1993 13:24:44 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA18365; Tue, 20 Apr 93 13:24:25 -0400 Date: Tue, 20 Apr 93 18:23:45 BST Message-Id: <4968.9304201723@subnode.epcc.ed.ac.uk> From: L J Clarke To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-collcomm@cs.utk.edu Subject: mpi-context; intercommunication etc (long) Dear mpi-context colleagues I previously wrote regarding context management and binding of contexts for intracommunication in the letter [Subject: mpi-context: context management and group binding (long)] and sent out a short correction in the letter [Subject: mpi-context: CORRECTION to previous message] to which I draw your attention. In this letter I wish to briefly revisit and recap the above subjects, then move on to briefly discuss and make a concrete suggestion for intercommunication. This is a long letter. Probably best to print it and read over a coffee. I really must clarify the nature of the context which I am assuming. In this letter contexts are assumed to be global in the sense that if a process P creates a context C, it can send C to another process Q, and Q can both send and receive messages of context C. This is the model adopted by Zipcode, which I view as the exemplar of existing practice regarding message context. Regarding intracommunication I hope to slightly simplify the content of my suggestion compared to the letters referred to above. Regarding intercommunication the particular suggestion I make is motivated by conformity with intracommunication both in the point-to-point syntax class, and in the content of the message envelope. o--------------------o 1. Communicator and Communication ================================= Communicator objects provide point-to-point and collective communication in MPI. A communicator object is a binding of a message context and one or more process worlds. Two subclasses of communicator object are defined below, intracommunicator and intercommunicator. Communicator objects are identified by process local object identifiers. 1.1 Construction, Destruction and Information --------------------------------------------- MPI provides subclass specific communicator constructors described below. MPI provides a subclass generic communicator object destructor procedure. 
mpi_delete_communicator(id) id is identifier of a communicator purpose deletes the communicator object identified by id See Note 1) under intracommunicator construction and Note 1) under intercommunicator construction Notes: 1) This procedure could be replaced with MPI_FREE if we wish to fit in with the manipulation of communication handles and buffer descriptor handles described in the point-to-point chapter. MPI provides a subclass generic procedure which returns the context identifier of a communicator object. context = mpi_communicator_context(id) context is the context bound to the communicator id is the identifier of a communicator See Note 1) under intracommunicator construction and Note 1) under intercommunicator construction purpose informs the caller of the context bound to a communicator 1.2 Discussion -------------- 2. Intracommunicator and Intracommunication =========================================== Intracommunicator objects provide point-to-point communication between processes of the same process world in MPI. Intracommunicator objects also provide collective communication in MPI. 2.1 Construction and Information -------------------------------- MPI provides a subclass intracommunicator constructor. id = mpi_create_intracommunicator(context, world) id is identifier of created communicator context is message context for communications world is process world of receiver and sender in both send and recv purpose creates an intracommunicator object Notes: 1) The context of an intracommunicator is either an actual context or the null context (MPI_NULL). If the context is an actual context then the call does not synchronise processes in the process world of the intracommunicator. If the context is the null context then the call synchronises the process world of the communicator and creates a context for the communicator. In this case the context is deleted when the communicator is itself deleted calling mpi_delete_communicator, and that call will synchronise the process world. In this case the information procedure mpi_communicator_context will return MPI_NULL to the caller --- the caller is not allowed to have knowledge of the context created. 2) The process world of an intracommunicator object is either an actual process group or the null group (MPI_NULL). If the world is an actual process group then the world is understood to contain all processes composing the process group and the communicator object identifies processes in a relative sense, i.e. as a rank within the process group. If the world is the null group then the world is dunerstood to contain all processes composing the program and the communicator object identifies processes in an absolute sense, i.e. as a process identifier. MPI provides a subclass information procedure which returns the identifier of the world of the intracommunicator. world = mpi_intracommunicator_world(id) world is process world of the communicator id is identifier of created communicator purpose returns the world identifier of the intracommunicator, either an actual group identifier or the null group identifier (MPI_NULL) 2.2 Point-to-point ------------------ I deal with generic "send" and "recv" seperately, and can ignore the multiple flavours thereof. send(id, process, label, ...) id is identifier of intracommunicator object process is identifier of receiver in world of object label is message tag in context of object recv(id, process, label, ...) 
id is identifier of intracommunicator object, and cannot be wildcard process is identifier of sender in world of object, and can be wildcard label is message tag in context of object, and can be wildcard Notes: 1) The caller must be in the world of the intracommunicator, i.e. either it is the null process group or an actual process group of which the caller is a member. 2.3 Collective -------------- I deal with a generic collective "operation", and can ignore the multiple flavours thereof. operation(id, ...) id is identifier of intracommunicator object Notes: 1) The intracommunicator must have a world which is an actual process group of which the caller is a member. 2.4 Envelope ------------ The message envelope for intracommunication consists of: * sender identifier within process world of communicator (pid or rank) * receiver routing (implementation defined) * message context of communicator * message tag * message length (implementation defined) The sender and reciever must bind the context to the same process world in an intracommunicator, thus the world is determinable. 2.5 Discussion -------------- The facilities for intracommunication, coupled with the context model, provide a convenient and powerful interface for communications which are closed within the scope of a group and for the serial client-server model. The ability to create an intracommunicator without synchronisation of processes simplifies the construction of libraries in highly MIMD programs, and can be used to advantage in conjunction with the association and location facilities described below. 3. Association, Dissociation, Location, Passivation and Activation ================================================================== 3.1 Association, Dissociation and Location ------------------------------------------ These facilities allow the user to bind names to process, group, and context objects. mpi_associate(name, id) name is a string which is the name bound to the given object id is the object identifier (process, group or context) prupose associates name with object identified by id id = mpi_locate(name, wait) id is the object identifier (process, group or context) name is a string which is the name bound to the given object wait is a boolean value determining whether the caller waits for the name to become associated with an object of given class purpose creates a copy of the object associated with name mpi_dissociate(id) id is the object identifier (process, group or context) purpose removes the association of name with object id, and can only be performed by the process which previously associated name. Notes: 1) These facilities are a name service. This could be implemented by a name server process which can run on a host or login node, and need not consume expensive numerical computation resources. 3.2 Passivation and Activation ------------------------------ These facilities allow the user to transmit a process, group and context objects. Passivation and activation produce a "portable" description of the object in a memory buffer (conventionally these operations produce a description in a file, but a memory buffer is more convenient for transmission in a message :-). 
mpi_passivate(id, buf, len) id is the object identifier (process, group or context) buf is an array of character len is the length of the array buf purpose writes a portable description of object identified by id in the memory buffer buf id = mpi_activate(buf, len) id is the object identifier (process, group or context) buf is an array of character len is the length of the array buf purpose reads a portable description of an object and creates a copy of the object Notes: 1) The detailed type of the memory buffer is not of great importance provided that we define that type. I have used character above, we could choose integer, for example. 3.3 Discussion -------------- I have assumed that MPI can distinguish the class of the object (process, group or context) given the object identifier. If this cannot be the case then we can describe a different set of procedures for each class or we can add a class argument to the above procedures. The name association and location service is the most manageable way of describing which groups communicate with one another. The passivation activation facilities are potentially a building block in the implementation of the name association and location service. Deletion of objects created by activation or location should only delete the process local copy of the object. It should not delete the original copy. When location and activation "create" an object and the object already exists within the calling process, a new object should not be created and the id of the existing object should be returned. This means that such object have multiple references, so we should define the destructors in terms of deleting references to objects, leaving the implementation to delete the object when there are zero references. 4. Intercommunicator and Intercommunication =========================================== Intercommunicator objects provide point-to-point communication between processes of different process worlds in MPI. Intercommunicator objects do not provide collective communication in MPI (yet :-). 4.1 Construction ---------------- id = mpi_create_intercommunicator(context, local_world, remote_world) id is identifier of created communicator context is message context for communications local_world is process world of sender in send and receiver in recv remote_world is process world of receiver in send and sender in recv purpose creates an intercommunicator object Notes: 1) The context can be an actual context or the null context (MPI_NULL). If the context is an actual context then the call does not synchronise processes within the two process worlds of the communicator. If the context is the null context then the call synchronises the two process worlds of the communicator and creates a context for the communicator. In this case the context is deleted when the communicator is itself deleted calling mpi_delete_communicator, and that call will synchronise the process world. In this case the information procedure mpi_communicator_context will return MPI_NULL to the caller --- the caller is not allowed to have knowledge of the context created. 2) Each process world of an intercommunicator object is either an actual process group or the null group (MPI_NULL). If the world is an actual process group then the world is understood to contain all processes composing the process group and the communicator object identifies processes in that world in a relative sense, i.e. as a rank within the process group. 
If the world is the null group then the world is understood to contain all processes composing the program and the communicator object identifies processes in that world in an absolute sense, i.e. as a global process identifier. MPI provides subclass information procedures which return the identifier of the local_world and remote_world of the intercommunicator. world = mpi_intercommunicator_local_world(id) world is local process world of the communicator id is identifier of created communicator purpose returns the local world identifier of the intercommunicator, either an actual group identifier or the null group identifier (MPI_NULL) world = mpi_intercommunicator_remote_world(id) world is remote process world of the communicator id is identifier of created communicator purpose returns the remote world identifier of the intercommunicator, either an actual group identifier or the null group identifier (MPI_NULL) 4.2 Point-to-point ------------------ I deal with generic "send" and "recv" separately, and can ignore the multiple flavours thereof. send(id, process, label, ...) id is identifier of intercommunicator object process is identifier of receiver in remote_world of object label is message tag in context of object recv(id, process, label, ...) id is identifier of intercommunicator object, and cannot be wildcard process is identifier of sender in remote_world of object, and can be a wildcard label is message tag in context of object, and can be a wildcard Notes: 1) The caller must be in the local_world of the intercommunicator, i.e. either it is the null process group or an actual process group of which the caller is a member. 4.3 Envelope ------------ The message envelope for intercommunication consists of: * sender identifier within process world of communicator (pid or rank) * receiver routing (implementation defined) * message context of communicator * message tag * message length (implementation defined) The sender and receiver must bind the context to the same process worlds in an intercommunicator, thus both the local_world and the remote_world are determinable. This is identical to the envelope of intracommunication. 4.4 Discussion -------------- The facilities for intercommunication, coupled with the context model, and the name service, provide a convenient interface for the parallel client-server model and parallel modular-application software, provided that the WAIT_ANY() facilities of point-to-point communication are fair. The ability to create an intercommunicator without synchronisation of processes simplifies the programming of parallel client-server software, and avoids a dependency graph problem when writing parallel modular-application software in which the module graph contains loops. 5. Discussion ============= I find it a wee bit amusing that an intercommunicator in which local_world and remote_world are the same is no different to an intracommunicator. This suggests to me that either (a) there should only be an intercommunicator class or (b) we think of the intracommunicator class as simply syntactic sugar around the intercommunicator class. The communicator object class names are rather long. Perhaps programmers would prefer shorter names in programs. We could take the approach of deriving names from the list of objects which a communicator binds, for example: "intracommunicator" becomes "CW" as it is a binding of a context and a world; "intercommunicator" becomes "CWW" as it is a binding of a context and a world and another world. 
On the other hand we could take collections of letters from the long names, for example: "intracommunicator" becomes "RACO" or "ACO"; "intercommunicator" becomes "ERCO" or "ECO". o--------------------o Comments, questions, flames, please! Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Apr 20 14:07:05 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA04375; Tue, 20 Apr 93 14:07:05 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22093; Tue, 20 Apr 93 14:06:22 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Apr 1993 14:06:16 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22052; Tue, 20 Apr 93 14:06:10 -0400 Date: Tue, 20 Apr 93 19:06:06 BST Message-Id: <5045.9304201806@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: proposal -- context and tag limits To: rj_littlefield@pnlg.pnl.gov, mpi-context@cs.utk.edu In-Reply-To: rj_littlefield@pnlg.pnl.gov's message of Fri, 9 Apr 93 19:43:26 PDT Reply-To: lyndon@epcc.ed.ac.uk Cc: d39135@carbon.pnl.gov, gropp@mcs.anl.gov, mpi-collcomm@cs.utk.edu, mpi-envir@cs.utk.edu, mpi-pt2pt@cs.utk.edu Rik writes: > ============ PROPOSAL TO ENVIRONMENT COMMITTEE ============== Yes, I support the spirit and detail of the proposal. > Everyone I know would MUCH prefer suboptimal performance > over HAVING to rewrite applications to conform to varying and > inconsistent hard limits. Yes, this claim is true of everyone I know except for one very small community of academic scientists who will write their relatively simple programs from scratch for every machine on which they will do major scientific production runs. I know a whole lot more academics and commercials who just will not write programs from scratch in this way. > Yes, I recall the many arguments against mandating specific > limits. But, I claim that those arguments are misdirected. Indeed I believe that your claim is valid. > I believe that we will not meet our goal of portability > if we do not specify usable limits on tag and context values. I have the same belief. I also believe that if we fail on portability then we fail period. 
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Apr 21 08:36:13 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA23891; Wed, 21 Apr 93 08:36:13 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA11515; Wed, 21 Apr 93 08:34:50 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 21 Apr 1993 08:34:49 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA11507; Wed, 21 Apr 93 08:34:48 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA17335; Wed, 21 Apr 1993 08:34:47 -0400 Date: Wed, 21 Apr 1993 08:34:47 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9304211234.AA17335@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Names for communicators I think either C0, C1, C2 or G0, G1, G2 are the best names for communicator objects. Things like "RACOM" and "ERCOM" are not suitable. Does the C0 (aka Floopy) communicator still exist in the latest proposal X. In pt2pt communication using C0 we use process handle to designate source and destination, whereas with C1 and C2 the rank is used. Does this imply we need separate pt2pt routines for C0 communicators, thereby doubling the number of pt2pt routines? David From owner-mpi-context@CS.UTK.EDU Wed Apr 21 08:57:41 1993 Received: from CS.UTK.EDU by surfer.EPM.ORNL.GOV (5.61/1.34) id AA24371; Wed, 21 Apr 93 08:57:41 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14688; Wed, 21 Apr 93 08:57:09 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 21 Apr 1993 08:57:09 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14674; Wed, 21 Apr 93 08:57:04 -0400 Date: Wed, 21 Apr 93 13:56:02 BST Message-Id: <5987.9304211256@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Names for communicators To: walker@rios2.epm.ornl.gov (David Walker), mpi-context@cs.utk.edu In-Reply-To: David Walker's message of Wed, 21 Apr 1993 08:34:47 -0400 Reply-To: lyndon@epcc.ed.ac.uk Hi David > I think either C0, C1, C2 or G0, G1, G2 are the best names for communicator > objects. Things like "RACOM" and "ERCOM" are not suitable. Okay, whatever you think. > Does the C0 (aka Floopy) communicator still exist in the latest proposal X. Hard to say :-) In the letter (long) I posted yesterday I spoke only of two subclasses, which I called intracommunicator and intercommunicator, and which we can call C1, C2. The capability of the C0 (aka Floopy) is in the suggestion of the letter, for sure. The question is how things end up presented in X (aka malcolm :-), which we (Tony, Rik, myself, ...) have not yet discussed. > In pt2pt > communication using C0 we use process handle to designate source and > destination, whereas with C1 and C2 the rank is used. Yes, although we haven't really made a decision about whether its a global process identifier (pid) or a local process identifier (handle), or whether (my preference) we define it such that it can be implemented either way. > Does this imply > we need separate pt2pt routines for C0 communicators, thereby doubling the > number of pt2pt routines? 
> I think not, and this is reflected in the letter of yesterday. The way I see it, a process identifier or process handle should be expressed as an integer which is the same type as a rank. Thus the pt2pt syntax class is the same for all of the cases, and we do not need to multiply the number of pt2pt routines, if we use genericity and overloading. In essence we describe pt2pt as (communicator, process-label, message-label, ...) where communicator is an instance of a Communicator object process-labek is a process label interpreted according to communicator as a process identifier or rank label is a message label which is a tag in the context of the communicator > David > /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri May 7 14:28:21 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA01084; Fri, 7 May 93 14:28:21 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21927; Fri, 7 May 93 14:27:59 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 7 May 1993 14:27:58 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21894; Fri, 7 May 93 14:27:08 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA27019; Fri, 7 May 93 13:27:41 CDT Date: Fri, 7 May 93 13:27:41 CDT From: Tony Skjellum Message-Id: <9305071827.AA27019@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: Re: mpi-context: context and group (longer) Cc: mpi-collcomm@cs.utk.edu For your information, the following was sent (by me) to the IAC subcommittee. - Tony ----- Begin Included Message ----- From tony Fri May 7 13:25:17 1993 Date: Fri, 7 May 93 13:24:37 CDT From: Tony Skjellum To: mpi-iac@CS.UTK.EDU, stoessel@irsun21.ifp.fr Subject: Re: Subset comments Cc: tony Content-Length: 6858 IACers, I think that a subset is essential, because of the number of features in MPI that are so remote from current practice; furthermore, if one were to look at the liberal vs. conservative nature of committee (as others have observed), it is not equal over all features/proposed features. Hence, I offer my thoughts. This is not meant to be a "flame." For instance, I have argued for the addition of a two-word match against tags, in order to allow easier layerability. A tag would be matched as follows (following Jim Cownie): (received_tag xor (not dont_care_bits)) and care_bits This would allow the user, not only complete freedom in use of tags, but also the ability to develop further layers on top of MPI that partition the use of tag. I will bring this idea up again at the next meeting, at the second reading of pt-2-pt. It is necessary to have this, and it is a small step away from usual practice. However, it is hard to convince people to add this, despite its negligible impact on performance (two logical operations, instead of one, assuming user passes the one's complement of the dont_care_bits.) However, its impact on MPI flexibility is immense. Hence, I view this feature as essential to the subset and full MPI likewise. Another instance. Contexts. We are arguing for/not for contexts that are independent of groups. 
Another instance: contexts. We are arguing for/not for contexts that are independent of groups. Contexts as an extended, system-registered part of the tag field help us to build libraries that can co-exist, register at runtime, and do not interfere with the message-passing of other parts of the system. I want an "open system"; hence, I want to see the tag partitioning. Contexts work very well in Zipcode (my message passing system, developed at Caltech and LLNL), and are helpful with the libraries we develop on Zipcode. Because vendor systems do not have contexts, Zipcode, when it layers on vendor systems, must requeue messages. This is undesirable from a performance standpoint. Hence, it is highly desirable for MPI to provide contexts of the type I describe, as a simple tag registration/partitioning mechanism that is understandable as an extension of existing practice. If contexts are limited, and there is a mechanism to find this out (environment), then messaging systems like Zipcode could do requeueing of messages as necessary, or manage contexts themselves at times, and use the precious "fast" contexts on user-specified communications, leaving others to be requeued and slower. Hence, I view contexts (whether plentiful or scarce in a given implementation) as essential to the subset and to the full standard. As Don Heller of Shell has noted, "contexts allow the development of a software industry [for multicomputers]."

Groups. Yes, we need them too. They are important for managing who is communicating with others. So they have to stay in the subset as well.

Rik Littlefield, Lyndon Clarke, and I have argued (and will continue to do so) for attributes based on group/context scope. This would allow the methods implementing communication to be changed in MPI for each group/context scope, permitting optimizations. This is not current practice, except in our Zipcode 1.0 release, which has this useful capability, but it is justifiably useful. I think these ideas can and should remain in the standard and in the subset.

There are multitudinous types of send/receive that we are currently proposing but not using in practice, which have been proposed and accepted with relative ease by MPI. Practically, send, receive, and unblocked receive are enough, provided the kernel is smart enough to do overlapping of communication and computation. Actually, if the semantics of the Reactive Kernel were adopted, which allow the system to handle all memory management, then receive would provide the pointer to the data, and send would be like free, with an allocate mechanism like malloc. These reduce the number of copies of data, except when extremely regular data structures are in use (less and less likely). The RK semantics thought out by Seitz et al. are remarkably simple, but highly optimizable, and can even work very fast in shared memory. These semantics do not appear as options in MPI; we only have multitudinous buffer-oriented operations. When memory management units are involved, binding control of the memory and messaging operations gives even more opportunity for the system to optimize. Allowing the user to receive messages without having first to know their size is elegant, and simplifies error issues.

As we all know, there are faster implementation strategies than RK semantics for message-passing that are low level, such as channels, active messages (unchampioned in this standard), and shared memory (e.g., CRAY Tera 3D). These need not be part of this standard, but it would be helpful if the standard were not hostile to such possibly efficient implementation mechanisms.
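To make the Reactive-Kernel-style semantics mentioned above concrete, here is a toy, single-process illustration in C. The names (rk_alloc, rk_send, rk_receive, rk_release) and the single in-memory queue are inventions for illustration only; they are not from the Reactive Kernel, Zipcode, or any MPI proposal.

    /* Toy illustration of "send is like free, receive returns the pointer":
     * the system owns all message memory; rk_send transfers ownership of a
     * buffer to the system, and rk_receive hands back a pointer and length,
     * so the receiver never needs to know the size in advance.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct rk_hdr { size_t len; struct rk_hdr *next; };    /* hidden header  */
    static struct rk_hdr *rk_queue, *rk_tail;               /* delivery queue */

    void *rk_alloc(size_t len)              /* like malloc */
    {
        struct rk_hdr *h = malloc(sizeof *h + len);
        h->len = len;
        h->next = NULL;
        return h + 1;                       /* caller sees only the payload */
    }

    void rk_send(void *buf)                 /* like free: ownership moves to the system */
    {
        struct rk_hdr *h = (struct rk_hdr *)buf - 1;
        if (rk_tail) rk_tail->next = h; else rk_queue = h;
        rk_tail = h;
    }

    void *rk_receive(size_t *len)           /* system returns a buffer and its size */
    {
        struct rk_hdr *h = rk_queue;
        if (h == NULL) return NULL;
        rk_queue = h->next;
        if (rk_queue == NULL) rk_tail = NULL;
        *len = h->len;
        return h + 1;
    }

    void rk_release(void *buf)              /* receiver is done with the buffer */
    {
        free((struct rk_hdr *)buf - 1);
    }

    int main(void)
    {
        size_t n;
        char *m = rk_alloc(6);
        char *r;

        memcpy(m, "hello", 6);
        rk_send(m);
        r = rk_receive(&n);
        printf("%lu bytes: %s\n", (unsigned long)n, r);
        rk_release(r);
        return 0;
    }

The point of the sketch is only the ownership discipline: no user-side buffer is ever copied, and the receiver learns the size from the system rather than having to preallocate.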
The "buffer descriptor" approach in MPI is the best match (being the highest level interface) to a runtime system that exploits channels, and/or active messages, and/or remote memory writes, etc. The optimizability of the highest level is complemented by the fact that the user no longer knows if a buffer is ever formed on the local or remote end [well it should be written to make that so]. Furthermore, heterogeneity can be encapsulated in transfers at this level. Hence, I am convinced that "buffer descriptor" stuff should remain in the subset. The committee has shied away from defining the process model, and this has led not only to a very static model (arguably OK), but a predilection to the SPMD (handling of groups, definition of subgroups, need for dynamic contexts diminished, etc). All of these factors make the standard backward-looking if so adopted, and make it really difficult to justify in the distributed environment. I am not sure why this has happened, but it is unfortunate. It means that MPI codes will be partially portable, but not totally, as each system will have different process management. SPMD programs will be reasonably portable, as the process management is simple, and therefore localized. The handling of the host/node model is not well established in MPI, and may not be suitably supported. That would be a big problem to my mind. To summarize, it is my view that the enabling mechanisms: group, context, tag selection, and buffer descriptors described above are essential aspects of a standard and subset, and should not be sacrificed. MPMD programming, the host/node model, should be supported. - Tony . . . . . . . . . . "There is no lifeguard at the gene pool." - C. H. Baldwin "In the end ... there can be only one." - Ramirez (Sean Connery) in Anthony Skjellum, MSU/ERC, (601)325-8435; FAX: 325-8997; tony@cs.msstate.edu ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Sun May 9 22:31:43 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA03781; Sun, 9 May 93 22:31:43 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00862; Sun, 9 May 93 22:31:06 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 9 May 1993 22:31:05 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00810; Sun, 9 May 93 22:30:38 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA29012; Sun, 9 May 93 21:31:10 CDT Date: Sun, 9 May 93 21:31:10 CDT From: Tony Skjellum Message-Id: <9305100231.AA29012@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu Subject: Mea culpa on previous letter to Ho/Pierce; Question about proposal VIII (Sears) Cc: tony@CS.UTK.EDU Dear friends/colleagues: 1) I think I misunderstood who was saying what in my last message, re Ho/Pierce. But, I hope my point was clear. I disagree with Howard on the concept that the context is convenient, but not important/essential. I thank Paul for his examples. If I was sounding inflamed, ignore that, I had low blood sugar. I think that the context should be a separate, logical extension to the tag field, the latter should be 32 bits long, for sufficient flexibility in layering, and the tag matching should permit layering by selective bit inclusion/exclusion. Please see that my tag matching and context/tag (as a logical partitioning) are correlated. 
The MPI layer partitions the context field from the "logical tag," and controls it. The user has control over the format of the rest of the tag, and may build layers (or use other layers) that subsequently partition the rest of the tag to continue to build safe service layers. That is why I keep pounding on the receipt selectivity mechanism based on the dont_care bits and the care_bits. I do not view the context as necessarily a large quantity in some implementations. In certain cases, there might be as few as 16, but the implementer (like a compiler writer using registers) allows the code to work if the hard limit is exceeded, by reserving a context that supports a higher-cost protocol (e.g., requeueing of messages).

-------------------------------------------

2) Lyndon suggested some time ago that I revive my two-dimensional grid example, vis-a-vis Rik Littlefield's collection of examples, and ask how that applies to Mark Sears' proposal VIII...

Basically, in this case, there is a two-dimensional logical array of processes (a virtual topology), of shape PxQ. In a given position (p,q), there are three possible contexts for a given global operation G (e.g., combine):

1) G over the whole collective, PxQ topology
2) G over the row that includes process (p,q) [the pth row]
3) G over the column that includes process (p,q) [the qth column]

How many contexts are needed to provide for safe intermixing of operation G over the three possible combinations? Assume that the operations of 1, 2, 3 may operate in sequence correctly, for now (i.e., two G's over the whole collection work correctly). This is true of multiple combines, deterministic broadcasts, etc. It is not true if there are non-deterministic G operations included.

If there are three contexts of communication, then everything is fine, because row G, column G, and whole G cannot interfere. By extension, a total of three unique contexts are needed for the whole PxQ topology, reusing the same context on parallel (row, column) entities, and an additional one for the "whole." I pointed up this example at an earlier MPI meeting. In the Proposal VIII model, contexts and groups are disjoint, so no property of the group will provide the safety needed. Hence, I assert that either three contexts are needed here, or tag values would be needed to disambiguate the G operations.

Now, in an earlier mail message, Ho/Pierce bantered about "how many contexts are needed for a given group." I argued for one per unique library, with the discussion of non-deterministic broadcast, because I wanted my non-deterministic broadcast to work safely when other communication was going on. At least, there must be a point-to-point context for a group, and a global operations context for a group [which is what Zipcode 1.0 has]. Further non-deterministic operations would need more contexts. In the light of this issue, I would suggest that my PxQ topology above reasonably needs 6 contexts of communication to be implemented safely [assuming that all global operations are deterministic, as asserted]. In short, six contexts are needed for this group.

In the PxQxR three-dimensional case, the number is larger:

1) G over the whole collective, PxQxR topology
2) G over a PxQ plane
3) G over a PxR plane
4) G over a QxR plane

By simple study, this requires 1 + 3 * 6 = 19 contexts to be implemented safely, and sets a limit on the number of contexts that one would want to have, to support a single 3D topology safely.
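The counting in the preceding paragraphs can be restated compactly. The small C program below only illustrates the letter's own assumptions (a point-to-point context plus a collective context per communication scope, rows and columns each reusing a single context, and one full 2-D allotment per family of planes in three dimensions); it is not part of any proposal.

    #include <stdio.h>

    /* 2-D PxQ topology: scopes are {whole, rows, columns}, and each scope
     * gets a point-to-point context plus a collective context.            */
    static int contexts_2d(void) { return 3 * 2; }                 /* =  6 */

    /* 3-D PxQxR topology, counted as in the letter: one context for the
     * whole collective plus a full 2-D allotment for each of the three
     * families of planes.                                                 */
    static int contexts_3d(void) { return 1 + 3 * contexts_2d(); } /* = 19 */

    int main(void)
    {
        printf("one 2-D topology:           %d contexts\n", contexts_2d());
        printf("two overlapping 2-D grids:  %d contexts\n", 2 * contexts_2d());
        printf("one 3-D topology:           %d contexts\n", contexts_3d());
        printf("two overlapping 3-D grids:  %d contexts\n", 2 * contexts_3d());
        return 0;
    }

The two-topology figures (12 and 38) are the ones quoted for the Livermore application discussed next.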
Remember, I am assuming that we are talking about Proposal VIII, which does not offer group-based safety for messaging. Proposal I, by Snir, does offer this safety inherently [which masks the need for explicit contexts when dealing with SPMD-type calculations]. I am arguing that the assumption of small numbers of contexts (where small is about 8 or 16, and where the code breaks if the number is exceeded) is not reasonable. The code must continue to work, if more slowly, when many contexts are needed beyond those available as fast, hardware contexts. Proposal VIII must accommodate that to be reasonable.

In an application at Livermore, one of my colleagues uses multiple two-dimensional topologies to implement stages of a calculation. They overlap. So he wants at least two unique topologies, completely supported, or at least 12 contexts. If he goes to three dimensions, he wants 38 minimum for his code to work.

Summary: Proposal VIII cannot cope with the practical situations illustrated above, because it breaks when there are "too many contexts," and contexts are assumed rare (e.g., 8 or 16). To be reasonable, it would have to have a fall-back to support many contexts in some way. Lyndon and others have made this request in the past weeks.

Comments?

- Tony

From owner-mpi-context@CS.UTK.EDU Mon May 10 03:03:53 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA03996; Mon, 10 May 93 03:03:53 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14410; Mon, 10 May 93 02:59:44 -0400
X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 10 May 1993 02:59:42 EDT
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14402; Mon, 10 May 93 02:59:37 -0400
Message-Id: <9305100659.AA14402@CS.UTK.EDU>
Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 6287; Mon, 10 May 93 03:00:12 EDT
Date: Mon, 10 May 93 03:00:11 EDT
From: "Marc Snir"
To: MPI-CONTEXT@CS.UTK.EDU

[The body of this message is a PostScript file (dvips output of CON-03.DVI); the text of the posted draft follows.]
0 N /G 0 N /sf 0 N /CharBuilder{save 3 1 roll S dup /base get 2 index get S /BitMaps get S get /ch-data X pop /ctr 0 N ch-dx 0 ch-xoff ch-yoff ch-height sub ch-xoff ch-width add ch-yoff setcachedevice ch-width ch-height true[1 0 0 -1 -.1 ch-xoff sub ch-yoff .1 add]/id ch-image N /rw ch-width 7 add 8 idiv string N /rc 0 N /gp 0 N /cp 0 N{rc 0 ne{rc 1 sub /rc X rw}{G}ifelse}imagemask restore}B /G{{id gp get /gp gp 1 add N dup 18 mod S 18 idiv pl S get exec}loop}B /adv{cp add /cp X}B /chg{rw cp id gp 4 index getinterval putinterval dup gp add /gp X adv}B /nd{/cp 0 N rw exit}B /lsh{rw cp 2 copy get dup 0 eq{pop 1}{dup 255 eq{pop 254}{dup dup add 255 and S 1 and or}ifelse}ifelse put 1 adv}B /rsh{rw cp 2 copy get dup 0 eq{pop 128}{dup 255 eq{pop 127}{dup 2 idiv S 128 and or}ifelse}ifelse put 1 adv}B /clr{rw cp 2 index string putinterval adv}B /set{rw cp fillstr 0 4 index getinterval putinterval adv}B /fillstr 18 string 0 1 17{2 copy 255 put pop}for N /pl[{adv 1 chg}bind{adv 1 chg nd}bind{1 add chg}bind{1 add chg nd}bind{adv lsh}bind{ adv lsh nd}bind{adv rsh}bind{adv rsh nd}bind{1 add adv}bind{/rc X nd}bind{1 add set}bind{1 add clr}bind{adv 2 chg}bind{adv 2 chg nd}bind{pop nd}bind]N /D{ /cc X dup type /stringtype ne{]}if nn /base get cc ctr put nn /BitMaps get S ctr S sf 1 ne{dup dup length 1 sub dup 2 index S get sf div put}if put /ctr ctr 1 add N}B /I{cc 1 add D}B /bop{userdict /bop-hook known{bop-hook}if /SI save N @rigin 0 0 moveto}N /eop{clear SI restore showpage userdict /eop-hook known{eop-hook}if}N /@start{userdict /start-hook known{start-hook}if /VResolution X /Resolution X 1000 div /DVImag X /IE 256 array N 0 1 255{IE S 1 string dup 0 3 index put cvn put} for}N /p /show load N /RMat[1 0 0 -1 0 0]N /BDot 260 string N /rulex 0 N /ruley 0 N /v{/ruley X /rulex X V}B /V statusdict begin /product where{pop product dup length 7 ge{0 7 getinterval (Display)eq}{pop false}ifelse}{false}ifelse end{{gsave TR -.1 -.1 TR 1 1 scale rulex ruley false RMat{BDot}imagemask grestore}}{{gsave TR -.1 -.1 TR rulex ruley scale 1 1 false RMat{BDot}imagemask grestore}}ifelse B /a{moveto}B /delta 0 N /tail{dup /delta X 0 rmoveto}B /M{S p delta add tail}B /b{S p tail} B /c{-4 M}B /d{-3 M}B /e{-2 M}B /f{-1 M}B /g{0 M}B /h{1 M}B /i{2 M}B /j{3 M}B /k{4 M}B /w{0 rmoveto}B /l{p -4 w}B /m{p -3 w}B /n{p -2 w}B /o{p -1 w}B /q{p 1 w}B /r{p 2 w}B /s{p 3 w}B /t{p 4 w}B /x{0 S rmoveto}B /y{3 2 roll p a}B /bos{ /SS save N}B /eos{clear SS restore}B end %%EndProcSet TeXDict begin 1000 300 300 @start /Fa 1 16 df15 D E /Fb 58 123 df35 D<13E01201EA07C013005A121E5A123812781270A312F05AA77E1270A312781238123C7E7E7E13 C0EA01E012000B217A9C16>40 D<12E07E127C121C121E7EEA0780120313C01201A313E01200A7 120113C0A3120313801207EA0F00121E121C127C12F05A0B217C9C16>II<12 38127C127EA2123E120E121E123C127C12F81260070B798416>44 DI< 127012F8A312700505788416>I III<1238127CA312381200A81238127CA3123C121C123C123812F812 F012600618799116>59 D61 D<13E0487EA213B0A2EA03B8A31318EA071CA5EA0E0EA2EA0FFEA2487EEA1C07A3387F1FC000FF 13E0007F13C013197F9816>65 DI<3801F180EA07 FF5AEA1F0FEA3C0712781303127000F0C7FC5AA77E387003801278A2EA3C07381F0F00EA0FFE6C 5AEA01F011197E9816>II<387FFFC0B5FC7EEA1C01A490C7FCA2 131CA2EA1FFCA3EA1C1CA290C7FC14E0A5EA7FFFB5FC7E13197F9816>I71 D73 D<387F0FE038FF8FF0387F0FE0381C0780EB0F00131E131C133C5B5BEA1DE07F121F7F1338EA1E 3C131CEA1C1E7F7F14801303387F07E038FF8FF0387F07E01419809816>75 DI<38FC07E0EAFE0FA2383A0B80EA3B 1BA413BBA2EA39B3A413F3EA38E3A21303A538FE0FE0A313197F9816>I<387E1FC038FF3FE038 
7F1FC0381D07001387A313C7A2121CA213E7A31367A21377A21337A31317EA7F1FEAFF9FEA7F0F 13197F9816>III82 DI<387FFFE0B5FCA2EAE0E0A400 001300AFEA07FC487E6C5A13197F9816>I<387F07F038FF8FF8387F07F0381C01C0B0EA1E0300 0E1380EA0F8F3807FF006C5AEA00F81519809816>I<38FC07E0EAFE0FEAFC07387001C0A30030 1380EA3803A313E3EA39F3A213B300191300A61313EA1B1BEA0F1EA2EA0E0E13197F9816>87 D<387F1F80133F131F380E1E00131CEA073C1338EA03B813F012015B120012017F120313B81207 131CA2EA0E0EA2487E387F1FC000FF13E0007F13C013197F9816>I<38FE0FE0EAFF1FEAFE0F38 1C0700A2EA0E0EA26C5AA3EA03B8A2EA01F0A26C5AA8EA03F8487E6C5A13197F9816>I<387FFF 80B5FCA238E007005B131E131CEA003C5B137013F0485A5B1203485A90C7FC5A381E0380121C12 3C12781270B5FCA311197E9816>I95 D97 D<127E12FE127E120EA4133EEBFF80000F13C0EB83E01301EB00F0120E1470A4000F13 F014E01381EB83C013FF000E1300EA067C1419809816>II<133F5B7F1307A4EA 03E7EA0FFF123FEA3C1F487E1270EAF00712E0A46C5AA2EA781FEA7C3F383FFFE0381FF7F03807 C7E014197F9816>II<131FEB7F8013FFEA01E7EBC30013C0A2EA 7FFFB5FCA2EA01C0ACEA3FFE487E6C5A11197F9816>I<3803E3C0380FFFE05A381E3CC0383C1E 00EA380EA3EA3C1E6C5AEA1FFC485AEA3BE00038C7FC123CEA1FFC48B4FC4813C0EA780338F001 E0EAE000A3EAF001387C07C0383FFF80380FFE00EA03F8131C7F9116>I<127E12FE127E120EA4 137CEA0FFF148013871303A2120EA9387FC7F038FFE7F8387FC7F01519809816>II<127E12FE127E120EA4EB7F E0A3EB0F00131E5B5B5BEA0FF8A213BC131EEA0E0E130FEB0780387F87F0EAFFCFEA7F87141980 9816>107 DI<38FBC78038FFEFC0EBFFE0EA3E7CEA3C 78EA3870AA38FE7CF8A31512809116>IIII<38FF0F80EB3FE013FFEA07F1EBE0C0EBC0005BA290C7FCA7EAFFFCA313127F 9116>114 DI<12035AA4EA7FFFB5FCA20007C7FCA75BEB03 80A2130713873803FF005BEA00F811177F9616>I<387E1F80EAFE3FEA7E1FEA0E03AA1307EA0F 0FEBFFF06C13F83803F3F01512809116>I<387F1FC000FF13E0007F13C0381C0700EA1E0FEA0E 0EA36C5AA4EA03B8A3EA01F0A26C5A13127F9116>I<38FF1FE013BF131F38380380A413E33819 F300A213B3EA1DB7A4EA0F1EA313127F9116>I<387F1FC0133F131F380F1C00EA073CEA03B813 F012016C5A12017FEA03B8EA073C131CEA0E0E387F1FC038FF3FE0387F1FC013127F9116>I<38 7F1FC038FF9FE0387F1FC0381C0700120E130EA212075BA2EA039CA21398EA01B8A2EA00F0A35B A3485A1279127BEA7F806CC7FC123C131B7F9116>I<383FFFC05AA238700780EB0F00131EC65A 13F8485A485A485A48C7FC381E01C0123C1278B5FCA312127F9116>I E /Fc 48 123 df11 D<127012F812FCA2127C120CA31218A2123012601240060D 7D9C0C>39 D<13C0EA0180EA03001206120E120C121C121812381230A21270A21260A212E0AC12 60A21270A21230A212381218121C120C120E12067EEA0180EA00C00A2A7D9E10>I<12C012607E 7E121C120C120E120612077EA21380A21201A213C0AC1380A21203A21300A25A1206120E120C12 1C12185A5A5A0A2A7E9E10>I<127012F012F8A212781218A31230A2127012601240050D7D840C> 44 DI<127012F8A3127005057D840C>I<12035A123FB4FC12C71207B3 A3EAFFF8A20D1C7C9B15>49 DII<13F0 EA03FCEA070CEA0E0EEA1C1E1238130CEA78001270A2EAF3F0EAF7F8EAFC1CEAF81E130E12F013 0FA51270A2130E1238131CEA1C38EA0FF0EA03E0101D7E9B15>54 D<127012F8A312701200A812 7012F8A3127005127D910C>58 D<127012F8A312701200A8127012F012F8A212781218A31230A2 127012601240051A7D910C>I<1306130FA3497EA4EB33C0A3EB61E0A3EBC0F0A338018078A2EB FFF8487FEB003CA200067FA3001F131F39FFC0FFF0A21C1D7F9C1F>65 D69 D<39FFF3FFC0A2390F003C00AAEBFFFCA2EB003CAC39FFF3FFC0A21A1C7E9B1F> 72 DI76 D80 D<3807E080EA1FF9EA3C1FEA7007130312E01301 A36CC7FCA2127CEA7FC0EA3FF8EA1FFEEA07FFC61380130FEB03C0A2130112C0A300E013801303 00F01300EAFC0EEACFFCEA83F8121E7E9C17>83 D<007FB512C0A238780F030070130100601300 00E014E000C01460A400001400B03803FFFCA21B1C7F9B1E>I<3AFFE0FFE1FFA23A1F001E007C 6C1530143FA20180147000079038678060A32603C0E713C0ECC3C0A2D801E0EBC1809038E181E1 A3D800F3EBF3001400A2017B13F6017E137EA3013C133CA3011C133801181318281D7F9B2B>87 D97 
D<12FCA2121CA9137EEA1DFF381F8780381E01C0001C13E013 0014F0A614E01301001E13C0381F07803819FF00EA187C141D7F9C17>IIII<1378EA01FCEA039EEA071EEA0E0C1300A6EAFFE0A2EA0E 00AEEA7FE0A20F1D809C0D>II<12FCA2121CA9137CEA1DFFEA1F07381E0380A2121CAB38FF9FF0 A2141D7F9C17>I<1218123C127C123C1218C7FCA612FCA2121CAEEAFF80A2091D7F9C0C>II<12FC A2121CA9EB7FC0A2EB3E0013185B5B5BEA1DE0121FEA1E70EA1C781338133C131C7F130F38FF9F E0A2131D7F9C16>I<12FCA2121CB3A7EAFF80A2091D7F9C0C>I<39FC7E07E039FDFF9FF8391F83 B838391E01E01CA2001C13C0AB3AFF8FF8FF80A221127F9124>IIII<3803E180EA0FF9EA1E1FEA3C071278130312F0A612781307123CEA1E1FEA0FFB EA07E3EA0003A6EB1FF0A2141A7F9116>III<120CA5121CA2123CEAFFE0A2EA1C00A81330A5EA 1E60EA0FC0EA07800C1A7F9910>I<38FC1F80A2EA1C03AC1307EA0C0F380FFBF0EA03E314127F 9117>I<38FF0FE0A2381C0780EB0300EA0E06A36C5AA2131CEA0398A213F86C5AA26C5AA31312 7F9116>I<39FF3FCFE0A2391C0F0780EC0300131F380E1B061486A2EB318E000713CCA2136038 03E0F8A33801C070A31B127F911E>I<387F8FF0A2380F078038070600EA038EEA01DC13D8EA00 F01370137813F8EA01DCEA038E130EEA0607380F038038FF8FF8A21512809116>I<38FF0FE0A2 381C0780EB0300EA0E06A36C5AA2131CEA0398A213F86C5AA26C5AA35BA3EAF180A200C7C7FC12 7E123C131A7F9116>II E /Fd 18 118 df<1238127C12FEA312 7C12381200A41238127C12FEA3127C123807127D910D>58 D68 D73 D<3BFFFC7FFE0FFCA23B0FC007E000C081D9 E003130100071680EC07F8D803F0EC0300A29039F80CFC0700011506EC1CFE9039FC187E0E0000 150CEC387F90397E303F18A290397F601FB8013F14B002E013F0ECC00F011F5CA26D486C5AA2EC 00036D5CA22E1C7F9B31>87 D97 D99 D101 D<3803F0F0380FFFF8383E1F38383C0F30007C13 80A4003C1300EA3E1FEA1FFCEA33F00030C7FC12707EEA3FFF14C06C13E04813F0387801F838F0 0078A3007813F0383E03E0381FFFC03803FE00151B7F9118>103 D<121E123FA25A7EA2121EC7 FCA5B4FCA2121FAEEAFFE0A20B1E7F9D0E>105 D108 D<39FF1FC0FE90387FE3FF3A1FE1F70F80903980FC07C0A2EB00F8AB3AFFE7FF3FF8A225127F91 28>I<38FF1FC0EB7FE0381FE1F0EB80F8A21300AB38FFE7FFA218127F911B>II<38FF1FC0EBFFE0381FC1F8130014FC147C147EA6147C14FCEB80F8EBC1F0EB7F E0EB1F8090C7FCA6EAFFE0A2171A7F911B>I114 DI<1203A35AA25AA2123FEA FFFCA2EA1F00A9130CA4EA0F98EA07F0EA03E00E1A7F9913>I<38FF07F8A2EA1F00AC1301EA0F 03EBFEFFEA03F818127F911B>I E /Fe 55 126 df<137013F01201EA03C0EA0780EA0F00121E 121C123C123812781270A212F05AA87E1270A212781238123C121C121E7EEA0780EA03C0EA01F0 120013700C24799F18>40 D<126012F012787E7E7EEA0780120313C0120113E01200A213F01370 A813F013E0A2120113C0120313801207EA0F00121E5A5A5A12600C247C9F18>I<123C127E127F A3123F120F120E121E127C12F81270080C788518>44 D<127812FCA412780606778518>46 D48 DII<123C127EA4123C1200A812 38127C127EA3123E120E121E123C127812F01260071A789318>59 D<387FFFC0B512E0A3C8FCA4 B512E0A36C13C0130C7E9318>61 D<137013F8A213D8A2EA01DCA3138CEA038EA41306EA0707A4 380FFF80A3EA0E03A2381C01C0A2387F07F038FF8FF8387F07F0151C7F9B18>65 DI<3801FCE0EA03FEEA07FFEA0F07EA 1E03EA3C01EA78001270A200F013005AA87E007013E0A21278EA3C01001E13C0EA0F073807FF80 6C1300EA01FC131C7E9B18>I69 DI<3801F9C0EA07FF5AEA1F0FEA1C03123CEA78011270A200F0C7 FC5AA5EB0FF0131F130F38F001C0127013031278123CEA1C07EA1F0FEA0FFFEA07FDEA01F9141C 7E9B18>I73 D76 D<38FC01F8EAFE03A2383B06E0A4138EA2EA398CA213DCA3EA38D8A213F81370A21300A638FE03 F8A3151C7F9B18>I<387E07F038FF0FF8387F07F0381D81C0A313C1121CA213E1A313611371A2 13311339A31319A2131D130DA3EA7F07EAFF87EA7F03151C7F9B18>III82 D<3807F380EA1FFF5AEA7C1FEA7007EAF00312E0A290C7FC7E1278123FEA1FF0EA0FFEEA01FF38 001F80EB03C0EB01E01300A2126012E0130100F013C0EAFC07B512801400EAE7FC131C7E9B18> I<387FFFF8B5FCA238E07038A400001300B2EA07FFA3151C7F9B18>I<38FF07F8A3381C01C0A4 380E0380A4EA0F0700071300A4EA038EA4EA018C13DCA3EA00D813F8A21370151C7F9B18>86 
D<38FE03F8A338700070A36C13E0A513F8A2EA39DCA2001913C0A3138CEA1D8DA4000D13801305 EA0F07A2EA0E03151C7F9B18>I<387F8FE0139F138F380E0700120FEA070E138EEA039C13DCEA 01F8A26C5AA2137013F07F120113DCEA039E138EEA070F7F000E13801303001E13C0387F07F038 FF8FF8387F07F0151C7F9B18>I<38FF07F8A3381C01C0EA1E03000E1380EA0F0700071300A2EA 038EA2EA01DCA213FC6C5AA21370A9EA01FC487E6C5A151C7F9B18>I95 D97 D<127E12FE127E120EA5133EEBFF 80000F13C0EBE3E0EB80F0EB00701478000E1338A5120F14781470EB80F0EBC3E0EBFFC0000E13 8038067E00151C809B18>IIII< EB1FC0EB7FE013FFEA01F1EBC0C01400A3387FFFC0B5FCA23801C000AEEA7FFFA3131C7F9B18> I<3803F1F03807FFF85A381E1F30383C0F00EA3807A5EA3C0FEA1E1EEA1FFC485AEA3BF00038C7 FC123CEA1FFF14C04813E0387801F038F00078481338A36C1378007813F0EA7E03383FFFE0000F 13803803FE00151F7F9318>I<127E12FE127E120EA5133FEBFF80000F13C0EBE1E013801300A2 120EAA387FC3FC38FFE7FE387FC3FC171C809B18>II<12FEA3120EA5EB3FF0137F133FEB0780EB0F00131E5B 5B5BEA0FF87F139C131EEA0E0FEB0780130314C038FFC7F8A3151C7F9B18>107 DI<387DF1F038FFFBF86CB4 7E381F1F1CEA1E1EA2EA1C1CAB387F1F1F39FFBFBF80397F1F1F001914819318>IIII<387F87E038FF9FF8EA7FBF3803FC78EBF030EBE0005BA35BA8EA7FFEB5FC6C5A 15147F9318>114 DI<487E1203A4387FFFC0 B5FCA238038000A9144014E0A21381EBC3C0EA01FF6C1380EB7E0013197F9818>I<387E07E0EA FE0FEA7E07EA0E00AC1301EA0F073807FFFC6C13FE3801FCFC1714809318>I<387F8FF000FF13 F8007F13F0381E03C0000E1380A338070700A3EA038EA4EA01DCA3EA00F8A2137015147F9318> I<38FF8FF8A3383800E0A3381C01C0A2137113F9A213D9A2380DDD80A3138DEA0F8FA238070700 15147F9318>I<387F8FF0139F138F38070700138EEA039EEA01DC13F81200137013F07FEA01DC EA039E138EEA0707000F1380387F8FF000FF13F8007F13F015147F9318>I<387F8FF000FF13F8 007F13F0380E01C0EB0380A21207EB0700A2EA03871386138EEA01CEA2EA00CCA213DC1378A313 70A313F05B1279EA7BC0EA7F806CC7FC121E151E7F9318>I<383FFFF05AA2387001E0EB03C0EB 078038000F00131E137C5B485A485AEA0780380F0070121E5A5AB512F0A314147F9318>II<127CB47E7FEA07E01200AB7FEB7FC0EB3FE0A2EB7FC0EBF0005BAB1207B45A5B007CC7 FC13247E9F18>125 D E /Ff 25 124 df<1238127CA2127E123E120CA31218A21230126012C0 1280070E789F0D>39 D<1230127812F81278127005057C840D>46 D<130C131C13FCEA0FF81338 1200A41370A613E0A6EA01C0A61203EA7FFE12FF0F1E7C9D17>49 D<133FEBFFC03801C1E03803 00F000061378A2000F137C1380A2EB00781206C712F814F0130114E0EB03C0EB0780EB0F00131C 5B5B13C0485A380300601206001C13C05AEA7FFFB51280A2161E7E9D17>I<137F3801FFC03803 83E0EA0701EB00F05A1301A2000013E0A2EB03C0EB0780EB0F0013FE13F8130E7F1480EB03C0A3 EA3007127812F8A238F00F8000C01300EA601EEA383CEA1FF8EA07E0141F7D9D17>II<14181438A21478147C14FCA2EB01BCA2 EB033C143EEB061EA2130CA21318141F497EA21360EB7FFF90B5FCEBC00F3901800780A2EA0300 A21206A2001F14C039FFC07FFCA21E207E9F22>65 D<0007B5FC15C039003C01E015F090387800 F8A515F0EBF001EC03E0EC07C0EC0F809038FFFE00ECFF803901E007C0EC03E0A21401A215F0D8 03C013E01403A2EC07C0A2EC0F800007EB3F00387FFFFEB512F01D1F7E9E20>I<903803F80890 380FFE1890383F073890387801F83801F000D803C013F0000714705B48C7FC5A121E003E146012 3C007C1400A45AA415C01278EC0180127C003CEB0300A26C13065C6C6C5A3807E0703801FFC06C 6CC7FC1D217B9F21>I<0007B5FC15E039003C01F0EC00F849137C153C151EA3151F5BA6484813 1E153EA3153C157C4848137815F0A2EC01E0EC03C0EC0F800007EB3F00387FFFFCB512E0201F7E 9E23>I<0007B512F8A238003C001578491338A5EC0C30EBF0181500A21438EBFFF8A23801E070 1430A2151815301400485A1560A215E0EC01C014030007130F007FB51280B6FC1D1F7E9E1F>I< 3A07FFC7FFC0A23A003C007800A2495BA649485AA490B5FCA23901E003C0A64848485AA6000713 0F397FFCFFF8485A221F7E9E22>72 D<3807FFE0A238003C00A25BA65BA6485AA6485AA61207EA FFFCA2131F7F9E10>I<3A07FFE0FFE0A23A003C003E001538495B5DEC01804AC7FC14065C495A 
5C5CEBF1F013F3EBF778EA01EC13F8497E13E080A248487EA36E7EA26E7E0007497E397FFC1FFC 00FF133F231F7E9E23>75 D<3807FFF0A2D8003CC7FCA25BA65BA6485AA3EC0180A2EC0300EA03 C0A25C1406140E141E0007137E387FFFFCB5FC191F7E9E1C>II<3A07FC03FFC0A23A003E007C001538016F1330A3EB6780A2EB63C001C35BA2EB C1E0A2EBC0F0A2D801805B1478A2143CA33903001F80A2140FA3481307D80F8090C7FC387FF003 12FF221F7E9E22>II<0007B5FC15C039003C03E0 EC01F0EB780015F8A59038F001F0A215E0EC03C0EC0F809038FFFE004813F801E0C7FCA5485AA6 1207EA7FFC12FF1D1F7E9E1F>I<3807FFFC14FF39003C07C0EC03E0EB780115F0A415E0EBF003 15C0EC0780EC1F00EBFFFC14F03801E038143C141CA2141EA23803C03EA41506A20007140C397F FC1F1800FFEB0FF8C7EA03E01F207E9E21>82 DI<001FB512F8A2381E03 C000381438EB078012300070141812601538153038C00F0000001400A5131EA65BA6137C381FFF F05A1D1F7B9E21>I<39FFF007FEA2390F0001F0EC00C0A2EC0180EA0780EC03005C1406140EEB C00C0003131C14185CA25CEA01E05CA2EBE180A2D800F3C7FCA213F6A213FCA21378A21370A21F 207A9E22>86 D<3A03FFC1FFC0A23A003E007C00011E137015606D5B1401EC8380010790C7FC14 C6EB03CC14FC6D5A5C1300801301497E143C1306497E131CEB381FEB300F01607FEBC007000180 38038003D80FC07F39FFF01FFE13E0221F7F9E22>88 D123 D E /Fg 30 122 df<1238127C12FEA3127C123807077C8610>46 D<13181378EA01F812FFA212 01B3A7387FFFE0A213207C9F1C>49 DI I<14E013011303A21307130F131FA21337137713E7EA01C71387EA03071207120E120C12181238 127012E0B512FEA2380007E0A7EBFFFEA217207E9F1C>I67 D70 D76 D 85 D97 D II I<13FE3807FF80380F87C0381E01E0003E13F0EA7C0014F812FCA2B5FCA200FCC7FCA3127CA212 7E003E13186C1330380FC0703803FFC0C6130015167E951A>II<3801FE1F0007B512 80380F87E7EA1F03391E01E000003E7FA5001E5BEA1F03380F87C0EBFF80D819FEC7FC0018C8FC 121CA2381FFFE014F86C13FE80123F397C003F8048131F140FA3007CEB1F00007E5B381F80FC6C B45A000113C019217F951C>II<120E121FEA3F80A3EA1F00120EC7FCA7EAFF80A2121FB2EAFFF0A2 0C247FA30F>I108 D<3AFF87F00FE090399FFC3FF8 3A1FB87E70FC9039E03EC07C9039C03F807EA201801300AE3BFFF1FFE3FFC0A22A167E952F>I< 38FF87E0EB9FF8381FB8FCEBE07CEBC07EA21380AE39FFF1FFC0A21A167E951F>I<13FE3807FF C0380F83E0381E00F0003E13F848137CA300FC137EA7007C137CA26C13F8381F01F0380F83E038 07FFC03800FE0017167E951C>I<38FF8FE0EBBFF8381FF07CEBC03E497E1580A2EC0FC0A8EC1F 80A2EC3F00EBC03EEBE0FCEBBFF8EB8FC00180C7FCA8EAFFF0A21A207E951F>I114 DI<13C0A41201A212031207120F121FB5FCA2EA0FC0ABEBC180A512 07EBE300EA03FEC65A11207F9F16>I<38FF83FEA2381F807EAF14FEA2EA0F833907FF7FC0EA01 FC1A167E951F>I<39FFF01FE0A2390FC00600A2EBE00E0007130CEBF01C0003131813F800015B A26C6C5AA2EB7EC0A2137F6D5AA26DC7FCA2130EA21B167F951E>I<3AFFE3FF87F8A23A1F807C 00C0D80FC0EB0180147E13E0000790387F030014DF01F05B00031486EBF18FD801F913CC13FB90 38FF07DC6C14F8EBFE03017E5BA2EB7C01013C5BEB380001185B25167F9528>I<39FFF01FE0A2 390FC00600A2EBE00E0007130CEBF01C0003131813F800015BA26C6C5AA2EB7EC0A2137F6D5AA2 6DC7FCA2130EA2130CA25B1278EAFC3813305BEA69C0EA7F80001FC8FC1B207F951E>121 D E /Fh 15 122 df97 DI<137E48B4FC38038380EA0F07121E001C1300EA3C0248C7FCA35AA5EA70 021307EA381EEA1FF8EA07E011147C9315>I<1478EB03F814F0EB0070A314E0A4EB01C0A213F1 EA03FD38078F80EA0E07121C123C14001278A3EAF00EA31430EB1C60133CEA707C3878FCC0EA3F CF380F078015207C9F17>I<137C48B4FCEA0783380F0180121E123CEB0300EA780EEA7FFC13E0 00F0C7FCA412701302EA7807EA3C1EEA1FF8EA07E011147C9315>I103 DI<136013F0A213E01300A7120FEA1F80123113C0EA6380 A212C3EA0700A3120EA3EA1C301360A2EA38C01218EA1F80EA0F000C1F7D9E0E>I108 D<381E07C0383F1FE03833B870EA63E0EBC038138000C71370EA0700A3000E13E0A3EB01C3001C 13C6A2EB038C1301003813F8381800F018147D931A>110 D<137C48B4FC38038380380F01C012 1E001C13E0123C1278A338F003C0A3EB07801400EA700F131EEA3838EA1FF0EA07C013147C9317 >I114 D116 
D<38078780380FCFC03818F8E0EA3070EA6071A238C0 E1C03800E000A3485AA30071136038F380C0A238E3818038C7C300EA7CFEEA387C13147D9315> 120 D<000F1360381F8070003113E013C0EA6380A238C381C0EA0701A3380E0380A4EB0700A25B 5BEA07FEEA03EEEA000EA25B12785BEA7070EA60E0EA3FC06CC7FC141D7D9316>I E /Fi 2 16 df0 D15 D E /Fj 1 111 df<380F07C0381F8FE03831D8703861 F03813E013C000C35BEA0380A348485AA3903801C180000EEBC300EB03831486EB018E001C13FC 380C00F019147F931B>110 D E /Fk 52 123 df<90390FF03FFC90387FFDFF9038F83FE0D801 E013810003137FD807C01300806E137CA5B712FCA23A07C01F007CB03B3FF8FFE3FF80A2292080 9F2C>15 D<1318137013E0EA01C0EA0380A2EA0700120EA2121E121C123CA25AA412F85AA97E12 78A47EA2121C121E120EA27EEA0380A2EA01C0EA00E0137013180D2D7DA114>40 D<12C012707E7E7EA27EEA0380A213C0120113E0A2EA00F0A413F81378A913F813F0A4EA01E0A2 13C012031380A2EA0700120EA25A5A5A12C00D2D7DA114>I<1238127C12FE12FFA2127F123B12 03A21206A2120E120C12181270122008107C860F>44 D<146014E0130114C0A213031480A21307 14005B130EA2131E131C133C1338A213781370A213F05B12015BA212035BA2120790C7FC5A120E A2121E121C123C1238A212781270A212F05AA2132D7DA11A>47 D<137013F0120F12FF12F31203 B3A4B51280A2111D7C9C1A>49 DI<14E0A2497EA3497EA2497EA2497E130CA2EB187FA201307F143F01707F EB601FA201C07F140F48B57EA2EB800748486C7EA20006801401000E803AFFE01FFFE0A2231F7E 9E28>65 DI<9038 07FC0290383FFF0E9038FE03DE3903F000FE4848133E4848131E485A48C7120EA2481406127EA2 00FE1400A7127E1506127F7E150C6C7E6C6C13186C6C13386C6C13703900FE01C090383FFF8090 3807FC001F1F7D9E26>I69 DI<903807FC0290383FFF0E9038FE03DE3903F000FE4848133E48 48131E485A48C7120EA2481406127EA200FE91C7FCA591387FFFE0A2007E903800FE00A2127F7E A26C7E6C7E6C7E3803F0013900FE03BE90383FFF1E903807FC06231F7D9E29>I73 D75 DIIIII82 D<3803FC08380FFF38381E03F8EA3C00481378143812F814187E1400 B4FC13F86CB4FC14C06C13E06C13F06C13F8120338001FFC13011300A200C0137CA36C1378A200 F813F038FE01E038E7FFC000811300161F7D9E1D>I<007FB512FCA2397C0FE07C0070141C0060 140CA200E0140E00C01406A400001400B10007B512C0A21F1E7E9D24>II87 DII<003FB51280A2EB807F393E 00FF00383801FEA248485A5CEA6007495AA2C6485A495AA2495A91C7FC5B485AEC0180EA03FCEA 07F8A2380FF00313E0001F140048485A5C48485A38FF007F90B5FCA2191F7D9E20>I97 DIII II<3801FC3C3807FFFE380F07DEEA1E03003E13E0A5001E13C0380F0780EBFF00EA19FC 0018C7FCA2121C381FFF8014F06C13F8003F13FC387C007C0070133E00F0131EA30078133CA238 3F01F8380FFFE000011300171E7F931A>II<121C123F5AA37E121CC7FCA6B4FCA2121FB0EAFFE0A20B217E A00E>I107 DI<3AFE0FE03F8090393FF0FFC03A1E70F9C3E09039C07F01F0381F807EA2EB00 7CAC3AFFE3FF8FFEA227147D932C>I<38FE0FC0EB3FE0381E61F0EBC0F8EA1F801300AD38FFE3 FFA218147D931D>I<48B4FC000713C0381F83F0383E00F8A248137CA200FC137EA6007C137CA2 6C13F8A2381F83F03807FFC00001130017147F931A>I<38FF1FC0EB7FF0381FE1F8EB80FCEB00 7EA2143E143FA6143E147E147CEB80FCEBC1F8EB7FE0EB1F8090C7FCA7EAFFE0A2181D7E931D> I114 DII<38FF07F8A2EA1F00AD1301A2EA0F07 3807FEFFEA03F818147D931D>I<39FFE07F80A2391F001C00380F8018A26C6C5AA26C6C5AA26C 6C5AA213F900005B13FF6DC7FCA2133EA2131CA219147F931C>I<3AFFE7FE1FE0A23A1F00F007 006E7ED80F801306A23907C1BC0CA214BE3903E31E18A23901F60F30A215B03900FC07E0A29038 7803C0A3903830018023147F9326>I<38FFE1FFA2380F80706C6C5A6D5A3803E180EA01F36CB4 C7FC137E133E133F497E136FEBC7C0380183E0380381F0380701F8380E00FC39FF81FF80A21914 7F931C>I<39FFE07F80A2391F001C00380F8018A26C6C5AA26C6C5AA26C6C5AA213F900005B13 FF6DC7FCA2133EA2131CA21318A2EA783012FC5BEAC0E0EAE1C0EA7F80001EC8FC191D7F931C> I<383FFFE0A2383C0FC0EA381F0070138038603F00137E13FEC65A485A485A3807E060120F13C0 381F80E0383F00C0EA7F01EA7E03B5FCA213147F9317>I E /Fl 31 124 
df<121C127FEAFF80A213C0A3127F121C1200A212011380A21203EA07001206120E5A5A12300A 157BA913>39 D<121C127FEAFF80A5EA7F00121C09097B8813>46 D<13075B137FEA07FFB5FCA2 12F8C6FCB3AB007F13FEA317277BA622>49 DII<1407 5C5C5C5C5CA25B5B497E130F130E131C1338137013F013E0EA01C0EA0380EA07005A120E5A5A5A 5AB612F8A3C71300A7017F13F8A31D277EA622>I65 DI<91393FF00180903903FFFE07010FEBFF8F90393FF007FF9038FF8001 4848C7127FD807FC143F49141F4848140F485A003F15075B007F1503A3484891C7FCAB6C7EEE03 80A2123F7F001F15076C6C15006C6C5C6D141ED801FE5C6C6C6C13F890393FF007F0010FB512C0 010391C7FC9038003FF829297CA832>I69 D79 DI82 D<48B47E000F13F0381F81FC486C7E147FA2EC 3F80A2EA0F00C7FCA2EB0FFF90B5FC3807FC3FEA1FE0EA3F80127F130012FEA3147F7E6CEBFFC0 393F83DFFC380FFF0F3801FC031E1B7E9A21>97 DIII< EB3FE03801FFF83803F07E380FE03F391FC01F80393F800FC0A2EA7F00EC07E05AA390B5FCA290 C8FCA47E7F003F14E01401D81FC013C0380FE0033903F81F803900FFFE00EB1FF01B1B7E9A20> I<1207EA1FC013E0123FA3121F13C0EA0700C7FCA7EAFFE0A3120FB3A3EAFFFEA30F2B7DAA14> 105 D 107 DI<3BFFC07F800FF0903AC1FFE03FFC903AC7 83F0F07E3B0FCE03F9C07F903ADC01FB803F01F8D9FF00138001F05BA301E05BAF3CFFFE1FFFC3 FFF8A3351B7D9A3A>I<38FFC07F9038C1FFC09038C787E0390FCE07F09038DC03F813F813F0A3 13E0AF3AFFFE3FFF80A3211B7D9A26>II<38FFE1FE9038E7FF809038FE07E0390FF803F8496C7E01E07F 140081A2ED7F80A9EDFF00A25DEBF0014A5A01F85B9038FE0FE09038EFFF80D9E1FCC7FC01E0C8 FCA9EAFFFEA321277E9A26>I<38FFC3F0EBCFFCEBDC7E380FD8FF13F85BA3EBE03C1400AFB5FC A3181B7E9A1C>114 D<3803FE30380FFFF0EA3E03EA7800127000F01370A27E6C1300EAFFE013 FE387FFFC06C13E06C13F0000713F8C613FC1303130000E0137C143C7EA26C13787E38FF01F038 F7FFC000C11300161B7E9A1B>I<1370A413F0A312011203A21207381FFFF0B5FCA23807F000AD 1438A73803F870000113F03800FFE0EB1F8015267FA51B>I<3AFFFE03FF80A33A07F0007000A2 6D13F000035CEBFC0100015CA26C6C485AA2D97F07C7FCA2148FEB3F8E14DEEB1FDCA2EB0FF8A3 6D5AA26D5AA26D5A211B7F9A24>118 D<39FFFC0FFFA33907F003C06C6C485AEA01FC6C6C48C7 FCEBFF1E6D5AEB3FF86D5A130FA2130780497E497E131EEB3C7F496C7E496C7ED801E07FEBC00F 00036D7E3AFFF01FFF80A3211B7F9A24>120 D123 D E /Fm 67 124 df<90380FC3E090387FEFF09038E07C783801C0F8D8038013303907007000A7 B61280A23907007000B0387FE3FFA21D20809F1B>11 DI<90380F80F890387FE7FE9038E06E 063901C0FC0F380380F8380700F00270C7FCA6B7FCA23907007007B03A7FE3FE3FF0A22420809F 26>14 D<90380FC0FFEB7FE79038E07E0F3801C0FC4848487E38070070A7B7FCA23907007007B0 3A7FE3FE3FF0A22420809F26>I34 D<127012F812FCA2127C120CA31218A21238123012601240 060E7C9F0D>39 D<136013C0EA0180EA03005A12065A121C12181238A212301270A31260A212E0 AC1260A21270A312301238A21218121C120C7E12077EEA0180EA00C013600B2E7DA112>I<12C0 12607E7E121C120C7E12077E1380A2120113C0A31200A213E0AC13C0A21201A313801203A21300 5A12065A121C12185A5A5A0B2E7DA112>I<127012F812FCA2127C120CA31218A2123812301260 1240060E7C840D>44 DI<127012F8A3127005057C840D>I48 DIII<13 0EA2131E133EA2136E13EE13CEEA018E1203130E1206120E120C121812381230126012E0B512F0 A238000E00A7EBFFE0A2141E7F9D17>II<137CEA01FEEA0783380E0380EA0C07121C3838030090C7FC127812 70A2EAF3F8EAF7FEEAFC0E487EEB0380A200F013C0A51270A214801238EB0700121CEA0E1EEA07 FCEA01F0121F7E9D17>I<1260387FFFC0A21480EA600138C003001306A2C65A5BA25B5BA213E0 5B1201A3485AA41207A76CC7FC121F7D9D17>III<127012F8A312701200AA127012F8A312700514 7C930D>I<127012F8A312701200AA127012F8A312781218A41230A21260A21240051D7C930D>I< EB0380A3497EA3EB0DE0A3EB18F0A3EB3078A3497EA3EBE01E13C0EBFFFE487FEB800FA2000314 80EB0007A24814C01403EA0F8039FFE03FFEA21F207F9F22>65 DI<90381FC04090387FF0C03801F8393803C00D38078007380F0003121E003E1301123C 127C1400127812F81500A8007814C0127CA2123C003EEB0180121E6CEB0300EA07803803C00E38 
01F81C38007FF0EB1FC01A217D9F21>I69 DI73 D<39FFFC1FFCA239078007C0150014065C5C5C14705CEB81C0EB83800187C7FC80138FEB9BC0EB B1E013E1EBC0F01380147880A280A280EC0780A215C039FFFC3FFCA21E1F7E9E23>75 D77 D<39FF807FF813C00007EB07809038E00300A2EA06F0A21378133CA2131EA2130FA2EB078314C3 1303EB01E3A2EB00F3A2147BA2143F80A280A2000F7FEAFFF0801D1F7E9E22>III82 D<3807E080EA0FF9EA1C1FEA300FEA7007EA600312E01301A36CC7FCA21278127FEA 3FF0EA1FFC6C7EEA03FF38001F801307EB03C0A2130112C0A400E01380EAF00338F80700EAFE0E EACFFCEA81F812217D9F19>I<007FB512E0A238780F010070130000601460A200E0147000C014 30A400001400B23807FFFEA21C1F7E9E21>I<39FFFC7FF8A23907800780EC0300B3A300031302 EBC006A200015B6C6C5AEB7830EB3FE0EB0FC01D207E9E22>I<3BFFF07FF83FF0A23B0F000780 0F80EE0300A23A07800FC006A3913819E00ED803C0140CA214393A01E030F018A33A00F0607830 A3ECE07C903978C03C60A390393D801EC0A390383F000F6D5CA3010E6DC7FCA32C207F9E2F>87 D92 D97 D<120E12FEA2120EA9133FEBFF80380FC3C0EB00E000 0E13F014701478A7147014F0120FEB01E0EBC3C0380CFF80EB3E0015207F9F19>IIII<13 3C13FEEA01CFEA038F1306EA0700A7EAFFF0A2EA0700B0EA7FF0A21020809F0E>II<120E12FEA2120EA9133E13FF380FC380EB01C0A2120EAD38FFE7FCA216 207F9F19>I<121C121E123E121E121CC7FCA6120E127EA2120EAFEAFFC0A20A1F809E0C>I<13E0 EA01F0A3EA00E01300A61370EA07F0A212001370B3A21260EAF0E0EAF1C0EA7F80EA3E000C2882 9E0E>I<120E12FEA2120EA9EB1FF0A2EB0F80EB0E00130C5B5B137013F0EA0FF81338EA0E1C13 1E130E7F1480130314C038FFCFF8A215207F9F18>I<120E12FEA2120EB3A9EAFFE0A20B20809F 0C>I<390E3F03F039FEFF8FF839FFC1DC1C390F80F80EEB00F0000E13E0AD3AFFE7FE7FE0A223 147F9326>IIII<3803E180EA 0FF9EA1E1FEA3C0712781303127012F0A6127012781307EA3C0FEA1E1FEA0FF3EA03E3EA0003A7 EB3FF8A2151D7E9318>I II<1206A4120EA2121E123EEAFFF8A2EA0E00AA1318A5EA07 3013E0EA03C00D1C7F9B12>I<380E01C0EAFE1FA2EA0E01AC1303A2EA070FEBFDFCEA01F11614 7F9319>I<38FF87F8A2381E01E0000E13C01480A238070300A3EA0386A2138EEA01CCA213FC6C 5AA21370A315147F9318>I<39FF9FF3FCA2391C0780F01560ECC0E0D80E0F13C0130C14E00007 EBE180EB186114713903987300EBB033A2143F3801F03EEBE01EA20000131CEBC00C1E147F9321 >I<387FC7FCA2380703E0148038038300EA01C7EA00EE13EC13781338133C137C13EEEA01C713 8738030380380701C0000F13E038FF87FEA21714809318>I<38FF87F8A2381E01E0000E13C014 80A238070300A3EA0386A2138EEA01CCA213FC6C5AA21370A31360A35B12F0EAF18012F3007FC7 FC123C151D7F9318>III E /Fn 14 124 df67 D73 D80 D<903807FFFE017FEBFFE048B612F84815FE489039001FFF8003077F48D980017F6F7FA2 707EA2707E6C90C7FC6C5A6C5AC9FCA40207B5FC91B6FC130F013FEBF03F3901FFFE004813F048 13C04890C7FC485A485A485AA212FF5BA3167FA26D14FF6C6CEB01EF003F140301FF90380FCFFF 6C9026C07F8F13F8000790B5000713FC6CECFC03C66CEBF0010107903980007FF8362E7DAD3A> 97 D101 D108 D<903A7FC001FFC0B5010F13F8 033F13FE92B6FC9126C1FE077F9126C3F0037F00039026C7C0017F6CEBCF0014DE02DC6D7F14FC 5CA25CA25CB3A8B6D8C07FEBFFE0A53B2E7CAD42>110 DI<90397FC01FFEB590B512E002C314F802CF14FE9139DFF01FFF9126FF80077F00 0349486C7F6C496D7F02F06D7F717E4A81173F84171F84A3711380AB19005FA34D5AA260177F6E 5D6E4A5A6E495B6E495B9126FF800F5BDBE03F90C7FC02EFB512FC02E314F002E014C0DB1FFCC8 FC92CAFCAFB612C0A539427CAD42>I<9039FF803FC0B5EBFFF0028313FC02877F91388FE7FFEC 9F070003D99E0F13806C13BCA214F8A214F06F13004A6C5A6F5A92C8FCA25CB3A6B612E0A5292E 7CAD31>114 D<90390FFF81E090B512F7000314FF5A380FFC01391FE0003FD83F80130F007F14 0790C7FC15035AA27F6D90C7FC13F013FF14F86CEBFFC015F86C14FE6C806C15806C15C06C15E0 C615F0013F14F81307EB001F020013FC153F0078140F00F81407A26C1403A27E16F86C14076D14 F06D130F01F0EB1FE001FEEBFFC090B6128048ECFE00D8F83F13F8D8F00313C0262E7CAD2F>I< EB01F0A61303A31307A3130FA2131F133FA2137FEA01FF5A000F90B512C0B7FCA4C601F0C7FCB3 
A5ED01F0A91503D97FF813E01507D93FFC13C090391FFE1F806DB5FC6D1400010113FC9038003F F024427EC12E>I<007FB5D8801FB5FCA528007FF8000390C7FC6EEB01FC6D6C495A6D6C495A6D 5D6D6D485A6D6D485AEDE03F6D6D48C8FC6DEBF8FE91387FF9FC6EB45A5E6E5B6E5B80806E7F82 824A7F825C91380FEFFFDA1FCF7FDA3F877FDA7F037FECFE0149486C7F4A6D7E49488001076E7E 49486D7E49487F49486D7FD9FFC081B500F8013FEBFFC0A53A2E7EAD3F>120 D123 D E /Fo 8 117 df<140F5C147FEB03FF131FB6FCA313E7EA0007 B3B3A7007FB612E0A4233879B732>49 D 67 D97 D<903801FFC0011F13F8017F13FE9038FFC1FF00039038007F80D807FCEB1FC0484814E0ED 0FF0485A003FEC07F8A2485AED03FCA212FFA290B6FCA301E0C8FCA5127FA27F003F153CA26C6C 147C000F15786C6C14F86C6CEB01F06C6CEB07E06C9038E03FC0013FB51200010F13FC010013E0 26267DA52D>101 D<13FFB5FCA412077EB0ED7FC0913803FFF8020F13FE91381F03FFEC3C0102 7814804A7E4A14C05CA25CA291C7FCB3A4B5D8FC3F13FFA4303C7CBB37>104 D<9039FF01FF80B5000F13F0023F13FC9138FE03FFDAF00113C000039039E0007FE0028014F0EE 3FF891C7121F17FC160F17FEA3EE07FFAAEE0FFEA3EE1FFCA217F86EEB3FF06E137F6EEBFFE06E 481380DAFC07130091383FFFFC020F13F0020190C7FC91C9FCADB512FCA430377DA537>112 D<9038FE03F000FFEB0FFE91383FFF8091387C7FC014F00007ECFFE06C6C5A5CA25CED7FC0ED3F 80ED0E0091C8FCB3A3B512FEA423267DA529>114 D116 D E /Fp 36 119 df38 D<127012F8A212F012E005057A840F>46 D<14035CA25C1580141FA2143714771467 14C7A2EB0187A2EB0307A21306130E130C1318A2133090383FFFC05BEB600313C012011380EA03 00A21206A2121E39FFC03FFC13801E237DA224>65 D<90B512E015F890380F003C151E131E150E 150FA249130E151EA2153C49137815F0EC01E090387FFFC090B5FC9038F003E0EC00F01578485A 1538153CA248481378A315F039078001E01403EC07C0EC1F00B512FE14F020227DA122>I<90B5 12F015FC90380F003E150F011EEB0780150316C015015B16E0A35BA449EB03C0A44848EB0780A2 16005D4848130E5D153C5D48485B4A5AEC0780021FC7FCB512FC14F023227DA125>68 D<91387E0180903903FF810090380F80C390383E00670178133F49133ED801C0131E485A120748 C7121C120E121E5A15185A92C7FCA25AA4EC3FFC5AEC01E0A26C495AA312700078495A1238003C 130F6C131B260FC0F3C7FC3803FFC1C690C8FC212479A226>71 D73 D<903807FFC04913809038003C00 A25CA45CA4495AA4495AA4495AA449C7FCA212381278EAF81EA2485AEA6078EA70F0EA3FE0EA1F 801A237CA11A>I<9039FFF80FFCA290390F0007C01600011E130E5D5D1560495B4A5A4AC7FC14 0E495A5C147814FCEBF1BCEBF33CEBFE1E13FC3801F01F497EA2813803C007A26E7EA2EA07806E 7EA28139FFF80FFEA226227DA125>III<01FFEB1FFC1480010FEB03C01680D91BC01300A3EB19E001 311306A2EB30F0A201605B1478A3496C5AA3141ED801805BA2140FA2D803005BEC07E0A300065C 1403A2121F39FFE00180A226227DA124>I<14FE903807FF8090380F03E090383C00F001701378 4913384848133C4848131C48C7FC48141E121EA25AA25AA348143CA31578A34814F0A26CEB01E0 15C01403EC07800078EB0F00141E6C5B6C13F8380F83E03807FF80D801FCC7FC1F2479A225>I< 90B512C015F090380F0078153C011E131E150EA349131EA3153C491338157815F0EC03C090B512 005CEBF00FEC07803901E003C0A43903C00780A43907800F001503A2EC0706D8FFF8138EEC03FC C7EA01F020237DA124>82 D<903801F06090380FFC4090381E0EC0EB3807EB700301E0138013C0 1201A2D803801300A26DC7FCA27FEA01F813FF6C13E06D7EEB1FF8EB03FCEB007C143C80A30030 131CA31418007013385C00781360387C01C038EF0380D8C7FFC7FCEA81FC1B247DA21B>I<001F B512F8A2391E03C07800381438EB0780123000601430A2EB0F0012C0A3D8001E1300A45BA45BA4 5BA4485AA31203B57E91C7FC1D2277A123>I<393FFE07FFA23903C000F015E0484813C0A4390F 000180A4001EEB0300A4481306A4485BA4485BA25C12705C5C6C485A49C7FCEA1E0EEA0FFCEA03 F0202377A124>I<3BFFF03FF81FF8D9E07FEB3FF03B1F0007800780001E010FEB03001606141F 5E14375E14675E14C75E381F0187000F5DEB0307ED818013060383C7FC130C158613180138138C 0130139C0160139815B801C013B015E013805D13005D120E92C8FC120C2D2376A131>87 
D<39FFF003FFA2000FC712F015E090388001C000071480EC0300EBC0060003130E5CEBE0180001 5B5CEBF0E000005BEBF18001FBC7FC13FF137E137C1378A45BA4485AA4EA3FFE5B202276A124> 89 D97 D<137E48B4FC3803C380EA0703 EA0E07121C003CC7FC12381278A35AA45BEA7003130EEA383CEA1FF0EA0FC011157B9416>99 D<143CEB03F8A2EB0038A21470A414E0A4EB01C013F9EA01FDEA078F380F0780120E121CEA3C03 383807001278A3EAF00EA214101418EB1C30EA703C137C3838FC60383FCFC0380F078016237BA2 19>I<13F8EA03FCEA0F0EEA1E06123C1238EA780CEAF038EAFFF01380EAF0005AA413021306EA 701C1378EA3FE0EA0F800F157A9416>I<143C147F14CF1301EB03861480A3EB0700A5130EEBFF F0A2EB0E00A25BA55BA55BA55BA45B1201A2EA718012F390C7FC127E123C182D82A20F>II<136013F0 13E0A21300A8120EEA1F801233126312C3A3EA0700A2120EA35A13201330EA3860A213C01239EA 1F80EA0E000C217CA00F>105 D<13F0EA0FE0A21200A2485AA4485AA448C7FCEB01E0EB07F0EB 0E30380E1870EB30F01360EBC060381D8000121F13E0EA1CF0EA3838133CEB1C20143038703860 A21440EB18C038E01F8038600F0014237DA216>107 DI<381E0780383F1F E0EA63B8EBE070EAC3C0A21380000713E01300A3380E01C0A214C2EB0383001C1386EB0706140C EB0318003813F0381801E018157C941B>110 D<137E48B4FC3803C380380701C0120E001C13E0 123CA21278A338F003C0A21480130700701300130E5B6C5AEA1FF0EA07C013157B9419>I<3803 C1F03807E3F8380C761C137C3818781E1370A2EA00E0A43801C03CA314780003137014F014E0EB E3C038077F80EB1E0090C7FCA2120EA45AA2EAFFC0A2171F7F9419>I<381E0F80383F1FC03863 B0E013E0EAC3C1A2EB80C00007130090C7FCA3120EA45AA45A121813157C9415>114 D<13FC48B4FC38038380EA0703EA0E07A2EB0200000FC7FC13F0EA07FC6C7EEA007E130FA2EA70 07EAF00EA2485AEA7038EA3FF0EA1FC011157D9414>I<13C01201A4EA0380A4EA0700EAFFF8A2 EA0700120EA45AA45AA213101318EA7030A21360EA71C0EA3F80EA1E000D1F7C9E10>I<000F13 30381F8070EA31C0006113E012C1EAC380A2380381C0EA0701A3380E0380A214841486EB070CA2 130FEB1F183807F3F03803E1E017157C941A>I<380F01C0381F83E0EA31C3EA61C1EAC1C0EAC3 80A2000313C0EA0700A3380E0180A3EB0300A213061304EA0F1CEA07F8EA01E013157C9416>I E /Fq 53 122 df35 D<127012F812FCA2127C12 0CA41218A21230A212601240060F7C840E>44 DI<127012F8A3127005 057C840E>I48 DI51 D<00101380EA1C07381FFF 005B5B13F00018C7FCA613F8EA1BFEEA1F0F381C0780EA180314C0EA000114E0A4126012F0A214 C0EAC0031260148038300700EA1C1EEA0FFCEA03F013227EA018>53 D<137E48B4FC3803C18038 0701C0EA0E03121CEB018048C7FCA2127812701320EAF1FCEAF3FEEAF60738FC038000F813C013 0112F014E0A51270A3003813C0130300181380381C0700EA0E0EEA07FCEA01F013227EA018>I< EA01F0EA07FCEA0E0F38180780EA3803383001C01270A31278EB0380123E383F0700EA1FCEEA0F FCEA03F87FEA0F7F381C3F80EA380F387007C0130338E001E01300A5387001C0A238380380381E 0F00EA0FFEEA03F013227EA018>56 DI<497E497EA3497EA3497E130CA2EB1CF8EB1878 A2EB383C1330A2497EA3497EA348B51280A2EB800739030003C0A30006EB01E0A3000EEB00F000 1F130139FFC00FFFA220237EA225>65 DI<90380FE01090383FF8309038F81C703801E0063903C003 F03807800148C7FC121E003E1470123C127C15301278A212F81500A700781430A2127CA2003C14 60123E121E6C14C06C7E3903C001803901E003003800F80EEB3FF8EB0FE01C247DA223>II 70 D<903807F00890383FFC189038FC0E383801E0033903C001F83807800048C71278121E1538 5AA2007C14181278A212F81500A6EC1FFF1278007CEB0078A2123CA27EA27E6C7E6C6C13F83801 F0013900FC079890383FFE08903807F80020247DA226>I<39FFFC3FFFA239078001E0AD90B5FC A2EB8001AF39FFFC3FFFA220227EA125>I<3803FFF0A238000F00B3A6127012F8A3EAF01EEA60 1CEA3878EA1FF0EA07C014237EA119>74 D<39FFFC07FFA239078001F015C05D4AC7FC14065C5C 14385C5CEB81C0EB8380EB87C080138DEB98F013B0EBE078497E13808080A26E7E8114036E7EA2 6E7E4A7E3AFFFC07FF80A221227EA126>III<39FF800FFF13C00007EB01F89038E000607F12061378A27F133E131E7FA2 EB078014C01303EB01E0A2EB00F01478A2143CA2141E140FA2EC07E0A214031401A2381F8000EA 
FFF0156020227EA125>III82 D<3803F020380FFC60381C0EE0EA3803EA7001A2EAE000A21460A36C1300A21278127FEA3F F0EA1FFE6C7E0003138038003FC0EB07E01301EB00F0A2147012C0A46C136014E06C13C0EAF801 38EF038038C7FF00EA81FC14247DA21B>I<007FB512F8A2387C07800070143800601418A200E0 141C00C0140CA500001400B3A20003B5FCA21E227EA123>I<3BFFF03FFC07FEA23B0F0007C001 F00203EB00E01760D807806D13C0A33B03C007F001801406A216032701E00C781300A33A00F018 3C06A3903978383E0CEC301EA2161C90393C600F18A390391EC007B0A3010F14E0EC8003A36D48 6C5AA32F237FA132>87 D<387FFFFEA2EB003C007C137C0070137814F84813F0EB01E0EAC00314 C01307148038000F005B131E133E133C5B13F85B0001130313E0EA03C012071380000F13071300 001E1306003E130E003C131E007C133E007813FEB5FCA218227DA11E>90 D97 D<120E12FEA2121E120EAAEB1F80EB7FE0380FC0 F0EB0078000E1338143C141C141EA7141C143C000F1338EB8070EBC1F0380C7FC0EB1F0017237F A21B>II<14E0130FA213011300AAEA03F0EA07FEEA1F07EA3C01EA 38001278127012F0A712701278EA3801EA3C03381E0EF0380FFCFEEA03F017237EA21B>II<133C13FEEA01CFEA038FA2EA0700A9EAFFF8A2EA0700 B1EA7FF8A2102380A20F>I<14F03801F1F83807FFB8380F1F38381E0F00EA1C07003C1380A500 1C1300EA1E0FEA0F1EEA1FFCEA19F00018C7FCA2121CEA1FFF6C13C04813E0383801F038700070 481338A400701370007813F0381E03C0380FFF803801FC0015217F9518>I<120E12FEA2121E12 0EAAEB1F80EB7FC0380FC1E0EB80F0EB0070120EAE38FFE7FFA218237FA21B>I<121C121E123E 121E121CC7FCA8120E12FEA2121E120EAFEAFFC0A20A227FA10E>II<120E12FEA212 1E120EAAEB0FFCA2EB07E0EB0380EB0700130E13185B137813F8EA0F9C131EEA0E0E7F1480EB03 C0130114E014F038FFE3FEA217237FA21A>I<120E12FEA2121E120EB3ABEAFFE0A20B237FA20E> I<390E1FC07F3AFE7FE1FF809039C0F303C03A1F807E01E0390F003C00000E1338AE3AFFE3FF8F FEA227157F942A>I<380E1F8038FE7FC038FFC1E0381F80F0380F0070120EAE38FFE7FFA21815 7F941B>II<380E1F8038FE7FE0 38FFC1F0380F0078120E143CA2141EA7143CA2000F1378EB8070EBC1F0380E7FC0EB1F0090C7FC A8EAFFE0A2171F7F941B>I114 DI<1206A5120EA3121E123EEAFFF8A2 EA0E00AA130CA51308EA0718EA03F0EA01E00E1F7F9E13>I<000E137038FE07F0A2EA1E00000E 1370AC14F01301380703783803FE7FEA01F818157F941B>I<38FFC3FEA2381E00F8000E1360A2 6C13C0A338038180A213C300011300A2EA00E6A3137CA31338A217157F941A>I<39FF8FF9FFA2 391E01C07CD81C031338000EEBE030A2EB06600007EB7060A2130E39038C30C01438139C3901D8 1980141DA2EBF00F00001400A2497EEB600620157F9423>I<38FFC3FEA2381E00F8000E1360A2 6C13C0A338038180A213C300011300A2EA00E6A3137CA31338A21330A213701360A2EAF0C012F1 EAF380007FC7FC123E171F7F941A>121 D E /Fr 20 118 df45 D68 D73 D77 D80 D<90387F80203901FFE0603807C0F8390F001CE0001E130F481307003813030078130112701400 12F0A21560A37E1500127C127E7E13C0EA1FF86CB47E6C13F86C7FC613FF010F1380010013C0EC 1FE01407EC03F01401140015F8A200C01478A57E15706C14F015E07E6CEB01C06CEB038039E780 070038C1F01E38C07FFC38800FF01D337CB125>83 D97 D99 DIII<15F090387F03F83901FFCF1C3803C1FC390780F818 390F00780048137C001E133C003E133EA7001E133C001F137C6C13786C6C5A380FC1E0380DFFC0 D81C7FC7FC0018C8FCA2121CA2121E380FFFF814FF6C14804814C0391E0007E00038EB01F048EB 00701578481438A500701470007814F06CEB01E06CEB03C03907C01F003801FFFC38003FE01E2F 7E9F21>I<1207EA0F80121FA2120FEA0700C7FCABEA078012FFA2120F1207B3A6EA0FC0EAFFF8 A20D307EAF12>105 D<260781FEEB3FC03BFF87FF80FFF0903A8E07C1C0F83B0F9803E3007C27 07B001E6133C9026E000FC7F495BA3495BB3486C486C133F3CFFFC1FFF83FFF0A2341F7E9E38> 109 D<380781FE39FF87FF8090388E07C0390F9803E03807B0019038E000F05BA35BB3486C487E 3AFFFC1FFF80A2211F7E9E25>II<380783E038FF8FF8EB9C7CEA0FB0EA07F0EBE0 38EBC000A35BB3487EEAFFFEA2161F7E9E19>114 D<3801FC10380FFF30381E03F0EA38004813 705A1430A37E6C1300127EEA3FF06CB4FC6C1380000313E038003FF0EB03F8EB007800C0133CA2 
141C7EA27E14186C13386C137038EF01E038C3FFC03880FE00161F7E9E1A>I<13C0A51201A312 03A21207120F121FB512E0A23803C000B01430A83801E060A23800F0C0EB7F80EB1F00142C7FAB 19>II E /Fs 5 85 df<1630167016F0A21501A21503A2150715 0FA2151B821531A2156115E115C1EC0181A2EC0301A21406A2140C141C14181430A202607FA2EC C000A249B5FC5B91C7FC1306A25BA25BA25B1370136013E01201000381D80FF01301D8FFFE9038 3FFFF0A22C337CB235>65 D<010FB6FC17C0903A003F8007F0EE01F892C7127C177E4A143E8314 7E188002FE140FA24A15C0A21301A25CA21303171F5CA2130718804A143FA2130F18004A5CA201 1F157E17FE4A5CA2013F4A5A5F91C712034C5A495D160F017E4A5A4CC7FC01FE147E16F849495A ED07E00001EC3F80B600FEC8FC15F032317CB036>68 D<010FB612FEA29039003F8000173E92C7 121EA24A140CA2147EA214FEA25CA20101151CEEC0184A1400A201031301A202F05B1503010713 0F91B5FC93C7FCECE00F010F7FA2ECC006A2011F130EA2EC800C92C8FC133FA291C9FCA25BA213 7EA213FEA25BA21201B512FCA22F317CB02F>70 D<010FB512F816FF903A003F801FC0EE07E092 380003F0EE01F84AEB00FCA2147EA214FE16015CA2010115F816034A14F0EE07E01303EE0F804A EB3F00167E0107EB03F891B512E016809138E007C0010FEB03F015014A6C7EA2011F80A25CA201 3F1301A21400A249495AA2137E170601FE150E170C5B171C000102011338B539F000FC70EE7FE0 C9EA1F802F327CB034>82 D<0007B712F8A23A0FE007F00101801400D80E00491370121E001C13 0F121800385CA20030011F1460127000605CA2023F14E000E016C0C790C8FCA25CA2147EA214FE A25CA21301A25CA21303A25CA21307A25CA2130FA25CA2131FA25C133F497E007FB512C0A22D31 74B033>84 D E end %%EndProlog %%BeginSetup %%Feature: *Resolution 300 TeXDict begin %%EndSetup %%Page: 0 1 bop 795 696 a Fs(D)26 b(R)g(A)f(F)h(T)225 787 y Fr(Do)r(cumen)n(t)20 b(for)i(a)f(Standard)g(Message-P)n(assing)f(In)n(terface)685 981 y Fq(Scott)c(Berryman,)e Fp(Y)l(ale)19 b(Univ)700 1040 y Fq(James)c(Co)o(wnie,)h Fp(Meiko)i(Ltd)474 1098 y Fq(Jac)o(k)e(Dongarra,)i Fp(Univ.)23 b(of)17 b(T)l(ennesse)n(e)j(and)d(ORNL)801 1156 y Fq(Al)f(Geist,)f Fp(ORNL)795 1214 y Fq(Bill)f(Gropp,)j Fp(ANL)767 1272 y Fq(Rolf)f(Hemp)q(el,)e Fp(GMD)762 1330 y Fq(Bob)i(Knigh)o(ten,)g Fp(Intel)786 1388 y Fq(Rust)o(y)g(Lusk,)h Fp(ANL)614 1446 y Fq(Stev)o(e)e(Otto,)h Fp(Or)n(e)n(gon)h(Gr)n(aduate)g(Inst)578 1504 y Fq(T)l(on)o(y)g(Skjellum,)c Fp(Missisippi)j(State)j(Univ)653 1563 y Fq(Marc)d(Snir,)g Fp(IBM)h(T.)g(J.)g(Watson)744 1621 y Fq(Da)o(vid)f(W)l(alk)o(er,)f Fp(ORNL)626 1679 y Fq(Stev)o(e)g(Zenith,)g Fp(Kuck)k(&)f(Asso)n(ciates)844 1803 y Fq(Ma)o(y)e(9,)g(1993)87 1861 y(This)g(w)o(ork)g(w)o(as)h(supp)q(orted)g(b)o(y)f(ARP)l(A)g(and)g(NSF)g (under)g(con)o(tract)g(n)o(um)o(b)q(er)f(###,)g(b)o(y)g(the)192 1919 y(National)h(Science)f(F)l(oundation)i(Science)e(and)i(T)l(ec)o(hnology) f(Cen)o(ter)f(Co)q(op)q(erativ)o(e)650 1977 y(Agreemen)o(t)f(No.)21 b(CCR-8809615.)p eop %%Page: 1 2 bop 75 356 a Fo(Chapter)32 b(1)75 564 y Fn(Con)m(texts)40 b({)g(Prop)s(osal)h (I)876 786 y Fm(Marc)14 b(Snir)75 927 y Fl(1.1)70 b(Con)n(texts)75 1029 y Fm(A)17 b Fk(comm)o(unication)k(con)o(text)c Fm(\(for)f(short,)g Fk(con)o(text)p Fm(\))h(is)g(a)g(mec)o(hanism)g(for)g(the)g(mo)q (dularization)75 1085 y(of)h(MPI)g(comm)o(unication.)29 b(An)o(y)18 b(MPI)g(comm)o(unication)h(o)q(ccurs)f(within)h(a)f(con)o(text,)g(and)g(do)q (es)g(not)75 1142 y(in)o(terfere)j(with)g(comm)o(unication)h(executed)g (within)g(another)e(con)o(text.)37 b(F)l(urthermore,)21 b(a)g(con)o(text)75 1198 y(sp)q(eci\014es)c(a)e(lo)q(cal)h(name)f(space)h(for)e(pro)q(cesses)i (that)e(comm)o(unicate)h(in)h(this)g(con)o(text.)j(The)d(pro)q(cesses)75 1255 y(that)i(participate)i(in)g(a)f(con)o(text)f(are)h(asso)q(ciated)g(with) g(a)g Fk(rank)p Fm(,)g(whic)o(h)h(ranges)e(from)h(0)f(to)h Fj(n)13 b 
Fi(\000)g Fm(1,)75 1311 y(where)18 b Fj(n)h Fm(is)f(the)h(n)o(um)o (b)q(er)f(of)g(pro)q(cesses)g(that)g(participate)h(in)g(the)f(con)o(text.)28 b(This)19 b(rank)f(is)g(used)h(in)75 1368 y(in)o(terpro)q(cess)d(comm)o (unication)h(as)e(the)h(lo)q(cal)h(address)f(of)f(the)h(pro)q(cess)g(within)h (that)e(con)o(text.)21 b(Th)o(us,)75 1424 y(comm)o(unication)16 b(within)g(a)f(con)o(text)g(is)h(una\013ected)f(b)o(y)g(comm)o(unication)h (outside)g(that)e(con)o(text.)166 1480 y(A)22 b(pro)q(cess)g(ma)o(y)g(comm)o (unicate)g(sim)o(ultaneously)h(in)g(sev)o(eral)f(con)o(texts.)40 b(The)22 b(con)o(text)g(of)f(a)75 1537 y(comm)o(unication)16 b(is)g(explicitly)i(stated)c(as)h(a)g(parameter)f(of)h(the)g(comm)o (unication)h(call.)166 1593 y(A)h(pro)q(cess)g(that)f(participates)i(in)g(a)e (comm)o(unication)i(con)o(text)f(accesses)g(this)g(con)o(text)g(using)g(a)75 1650 y Fh(c)n(ontext)h(hand)r(le)g Fm(\(i.e.,)f(a)h(handle)h(to)e(an)g (opaque)h(ob)s(ject)f(that)g(iden)o(ti\014es)j(a)d(con)o(text\).)26 b(This)19 b(handle)75 1706 y(can)c(b)q(e)h(used)g(to)143 1788 y Fi(\017)23 b Fm(Find)14 b(information)f(ab)q(out)g(this)g(con)o(text,)g (suc)o(h)g(as)g(the)g(n)o(um)o(b)q(er)h(of)e(pro)q(cesses)i(that)e (participate)189 1844 y(in)k(the)f(con)o(text,)f(or)h(the)g(rank)g(of)g(the)g (calling)i(pro)q(cess)f(within)g(the)f(con)o(text.)143 1933 y Fi(\017)23 b Fm(Comm)o(unicate)12 b(with)h(other)f(pro)q(cesses)h(that)e (participate)j(in)f(the)f(con)o(text;)h(these)f(pro)q(cesses)h(are)189 1989 y(addressed)i(using)h(their)g(con)o(text)f(rank.)143 2078 y Fi(\017)23 b Fm(Create)14 b(new)i(con)o(texts.)166 2160 y(Con)o(text)i (handles)h(cannot)g(b)q(e)g(transferred)f(for)g(one)h(pro)q(cess)f(to)g (another;)i(they)e(can)h(b)q(e)g(used)75 2216 y(only)12 b(on)g(the)f(pro)q (cess)h(where)g(they)g(w)o(ere)f(created.)19 b(Th)o(us,)11 b(\\kno)o(wledge")h(ab)q(out)f(a)h(con)o(text)f(exists)g(only)75 2273 y(lo)q(cally)l(,)17 b(at)e(the)h(pro)q(cesses)g(that)f(participate)h(in) h(that)d(con)o(text.)21 b(Op)q(erations)16 b(within)h(a)e(comm)o(unica-)75 2329 y(tion)h(con)o(texts)e(\(including)k(the)d(generation)h(of)f(new)g(sub)q (con)o(texts\))g(do)g(not)g(require)h(comm)o(unication)75 2385 y(with)g(pro)q(cesses)f(that)g(do)g(not)g(participate)g(in)i(that)d(con)o (text.)166 2442 y(F)l(ollo)o(ws)h(examples)h(of)f(p)q(ossible)i(uses)e(for)g (con)o(texts.)75 2561 y Fg(1.1.1)55 b(Lo)r(osely)18 b(sync)n(hronous)h (library)h(call)g(in)n(terface)75 2647 y Fm(Consider)13 b(the)g(case)g(where) g(a)f(parallel)i(application)h(executes)e(a)f(\\parallel)i(call")g(to)e(a)g (library)i(routine,)75 2704 y(i.e.,)g(where)h(all)g(pro)q(cesses)g(transfer)f (con)o(trol)g(to)g(the)h(library)g(routine.)20 b(If)15 b(the)f(library)i(w)o (as)d(dev)o(elop)q(ed)964 2828 y(1)p eop %%Page: 2 3 bop 75 -100 a Fm(2)867 b Ff(CHAPTER)15 b(1.)35 b(CONTEXTS)15 b({)g(PR)o(OPOSAL)i(I)75 45 y Fm(separately)l(,)23 b(then)e(one)h(should)g(b) q(ew)o(are)g(of)e(the)i(p)q(ossibilit)o(y)h(that)e(the)g(library)h(co)q(de)g (ma)o(y)f(receiv)o(e)75 102 y(b)o(y)f(mistak)o(e)g(messages)f(send)i(b)o(y)f (the)g(caller)h(co)q(de,)g(and)f(vice-v)o(ersa.)35 b(The)20 b(problem)h(is)g(solv)o(ed)f(b)o(y)75 158 y(allo)q(cating)c(a)f(di\013eren)o (t)h(con)o(text)e(to)h(the)g(library)l(,)h(th)o(us)f(prev)o(en)o(ting)h(un)o (w)o(an)o(ted)e(in)o(terference.)75 277 y Fg(1.1.2)55 b(F)-5 b(unctional)21 b(decomp)r(osition)e(and)g(mo)r(dular)g(co)r(de)f(dev)n (elopmen)n(t)75 363 y Fm(Often,)i(a)f(parallel)i(application)g(is)e(dev)o 

1.1.2 Functional decomposition and modular code development

Often, a parallel application is developed by integrating several distinct
functional modules, each developed separately.  Each module is a parallel
program that runs on a dedicated set of processes, and the computation
consists of phases where modules compute separately, intermixed with global
phases where all processes communicate.  It is convenient to allow each
module to use its own private process numbering scheme for the intramodule
computation.  This is achieved by using a private module context for
intramodule computation, and a global context for intermodule
communication.

1.1.3 Collective communication

MPI supports collective communication within dynamically created groups of
processes.  Each such group can be represented by a distinct communication
context.  This provides a simple mechanism to ensure that communication
that pertains to collective communication within one group is not confused
with collective communication within another group, and avoids the
introduction of two different mechanisms with similar functionality.

1.1.4 Lightweight gang scheduling

Consider an environment where processes are multithreaded.  Contexts can be
used to provide a mechanism whereby all processes are time-shared between
several parallel executions, and can context switch from one parallel
execution to another, in a loosely synchronous manner.  A thread is
allocated on each process to each parallel execution, and a different
context is used to identify each parallel execution.  Thus, traffic from
one execution cannot be confused with traffic from another execution.  The
blocking and unblocking of threads due to communication events provide a
"lazy" context switching mechanism.  This can be extended to the case where
the parallel executions span distinct process subsets.  (MPI does not
require multithreaded processes.)

1.2 Basic Context Operations

A global context MPI_ALL is predefined.  All processes participate in this
context when computation starts.  MPI does not specify how processes are
initially ranked within the context MPI_ALL.  It is expected that the
start-up procedure used to initiate an MPI program (at load-time or
run-time) will provide information or control on this initial ranking
(e.g., by specifying that processes are ranked according to their pid's, or
according to the physical addresses of the executing processors, or
according to a numbering scheme specified at load time).

Discussion: If we think of adding new processes at run-time, then MPI_ALL
conveys the wrong impression, since it is just the initial set of
processes.

The following operations are available for creating new contexts.

MPI_COPY_CONTEXT(newcontext, context)

Create a new context that includes all processes in the old context.  The
rank of the processes in the previous context is preserved.  The call must
be executed by all processes in the old context.  It is a blocking call: No
call returns until all processes have called the function.  The parameters
are

OUT newcontext: handle to newly created context.  The handle should not be
    associated with an object before the call.
IN context: handle to old context

MPI_NEW_CONTEXT(newcontext, context, array_of_ranks, size)

OUT newcontext: handle to newly created context at calling process.  This
    handle should not be associated with an object before the call.
IN context: handle to old context
IN array_of_ranks: ranks in the old context of the processes that join the
    new context
IN size: size of new context (integer)

A new context is created for the processes in the old context that are
listed in the array.  The processes are listed according to their rank in
the old context.  The rank of the processes in the new context is
determined by their place in the list.

The call has to be executed by all processes listed in the array; all make
the call with the same list of parameters.  Processes in the old context
that do not belong to the new context need not make the call.  The call is
blocking; no process returns from the call until all processes have
executed the call.

MPI_SPLIT_CONTEXT(newcontext, context, key, index)

OUT newcontext: handle to newly created context at calling process.  This
    handle should not be associated with an object before the call.
IN context: handle to old context
IN key: integer
IN index: integer

A new context is created for each distinct value of key; this context is
shared by all processes that made the call with this key value.  Within
each new context the processes are ranked according to the order of the
index values they provided; in case of ties, processes are ranked according
to their rank in the old context.

This call is blocking: no call returns until all processes in the old
context have executed the call.

Particular uses of this function are:

(i) Reordering processes: All processes provide the same key value, and
provide their index in the new order.

(ii) Splitting a context into subcontexts, while preserving the old
relative order among processes: All processes provide the same index value,
and provide a key identifying their new subcontext.

MPI_COPY_CONTEXT is a particular case of MPI_SPLIT_CONTEXT, when all
processes provide the same key and index parameter.

MPI_RANK(rank, context)

OUT rank: integer
IN context: context handle

Return the rank of the calling process within the specified context.

MPI_SIZE(size, context)

OUT size: integer
IN context: context handle

Return the number of processes that belong to the specified context.

A context object is destroyed using the MPI_FREE function.
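
To make the calling sequence concrete, the following C sketch is offered in
the style of the examples in Section 1.4.  It is not part of the proposal:
the draft does not fix language bindings, so the handle type mpi_context,
the prototypes shown, and the pass-by-address convention for OUT arguments
are assumptions made only for illustration.  The sketch splits MPI_ALL into
"row" contexts and queries rank and size inside the new context.

    /* Assumed C binding, for illustration only; the draft specifies the
       operations but not their language binding. */
    typedef struct mpi_context_desc *mpi_context;
    extern mpi_context MPI_ALL;
    void mpi_rank(int *rank, mpi_context context);
    void mpi_size(int *size, mpi_context context);
    void mpi_split_context(mpi_context *newcontext, mpi_context context,
                           int key, int index);
    void mpi_free(mpi_context context);

    void row_contexts_example(int ncols)
    {
        mpi_context rowcontext;
        int myrank, rowrank, rowsize;

        mpi_rank(&myrank, MPI_ALL);        /* rank in the predefined global context */

        /* All callers with the same key join the same new context; passing
           the old rank as index keeps the old relative order (ties are
           broken by the old rank anyway). */
        mpi_split_context(&rowcontext, MPI_ALL,
                          myrank / ncols,  /* key: row number       */
                          myrank);         /* index: keep old order */

        mpi_rank(&rowrank, rowcontext);    /* rank within the row context    */
        mpi_size(&rowsize, rowcontext);    /* number of processes in the row */

        /* ... traffic addressed by rowrank stays within rowcontext and
           cannot interfere with communication in other contexts ... */

        mpi_free(rowcontext);              /* destroy the context object */
    }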

1.2.1 Usage note

Use of contexts for libraries: Each library may provide an initialization
routine that is to be called by all processes, and that generates a context
for the use of that library.  A scheme for allowing each linked library to
have its own initialization code could be used for this purpose (assuming
the library will not have several concurrent instantiations).

Use of contexts for functional decomposition: A harness program, running in
the context ALL, generates a subcontext for each module and then starts the
submodule within the corresponding context.

Use of contexts for collective communication: A context is created for each
group of processes where collective communication is to occur.

Use of contexts for context-switching among several parallel executions: A
preamble code is used to generate a different context for each execution;
this preamble code needs to use a mutual exclusion protocol to make sure
each thread claims the right context.

Implementation note:

We outline here two possible implementations of contexts.  They are by no
means the only possible ones.  In each implementation we assume that a
context object is a pointer to a structure that describes the context.  A
component of this structure is a table of the processes that participate in
the context, ordered by rank.  We assume that the number of concurrently
active contexts at each process is relatively small; say 16-32.  In either
implementation one might have disjoint message queues for each context, or
have shared queues, with the right mechanisms for context matching and
buffer allocation.
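
As an illustration of the context object assumed by both proposals below, a
descriptor might look as follows in C.  The field names and the fixed bound
on active contexts are assumptions, not part of the proposal.

    /* Sketch of a possible context descriptor: a handle is a pointer to a
       structure holding a table of the participating processes, ordered by
       rank.  Names and sizes are illustrative only. */
    #define MAX_ACTIVE_CONTEXTS 32   /* "relatively small; say 16-32" */

    struct mpi_context_desc {
        int      size;               /* number of participating processes */
        int      my_rank;            /* rank of the local process         */
        int     *procs;              /* process table, ordered by rank    */
        unsigned tag;                /* context tag; see Proposals 1 and 2 */
        /* ... per-context message queue, or hooks into a shared queue ... */
    };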

Proposal 1: Large context tags.

In this implementation we use large context tags (say 32 bits), so that
matching an incoming tag with a local process requires a search in a hash
table or another similar search structure (this occurs whenever a message
is received).  All messages sent within a context carry the same tag value.
We use as context tag the pid of the lowest numbered process in the context
(let's call it the context leader), concatenated with a counter that is
incremented whenever this process allocates a new tag.  This guarantees a
unique tag for each group (special code is needed for counter wraparound).

MPI_COPY_CONTEXT - A new tag is generated by incrementing the old tag by
one; one can either copy the old context table, or create a new pointer to
the old table.  A global barrier synchronization is needed to make sure the
call is blocking.

MPI_SPLIT_CONTEXT - A naive implementation is to have an all-to-all
communication where each process gathers the (key, index) pairs of all
processes in the old group.  Each process can determine whether it is the
leader of a new group and broadcast the new group tag to all members (using
point to point communication in the old group, or using another all-to-all
communication).  Algorithmic minds will think of many possible
optimizations.

MPI_NEW_CONTEXT - The lowest numbered process in the list broadcasts the
new context tag to all processes in the list (using point to point
communication operations).  A more robust implementation may entail the
broadcast of the member list, for error checking.

MPI_RANK, MPI_SIZE - require local access to the context object.
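
A C sketch of the tag handling in Proposal 1 follows.  The 16/16 split
between leader pid and counter, the chained hash table, and all names are
assumptions used only to make the idea concrete.

    /* Proposal 1 sketch: large (32-bit) context tags, matched by search. */
    struct ctx1 {
        unsigned     tag;     /* tag carried by every message in the context */
        struct ctx1 *next;    /* hash-chain link */
        /* ... process table, ranks, queues ... */
    };

    static unsigned leader_counter = 0;  /* incremented whenever this process,
                                            acting as context leader,
                                            allocates a tag */

    /* Tag = leader pid in the high 16 bits, leader counter in the low 16
       bits.  Real code also needs the "special code for counter
       wraparound". */
    unsigned new_context_tag(unsigned leader_pid)
    {
        return (leader_pid << 16) | (leader_counter++ & 0xFFFFu);
    }

    /* Matching an incoming tag requires a search; here, chained hashing. */
    struct ctx1 *match_incoming(unsigned tag,
                                struct ctx1 *buckets[], int nbuckets)
    {
        struct ctx1 *c;
        for (c = buckets[tag % nbuckets]; c != NULL; c = c->next)
            if (c->tag == tag)
                return c;
        return NULL;          /* no active context carries this tag */
    }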

Proposal 2: Small context tags.

In this implementation the number of distinct context tag values is equal
to the maximal number of contexts that can be active at the same node.
Thus, the context tag of an incoming message can be used to index directly
into a context table, avoiding the need for a search in a hash table.  Each
process has a unique context tag for incoming communication within this
context.  However, different context tag values may be used by different
processes for the same context (this is necessary in order to densely
populate the context tag range).  The context table carries, for each
member of the context, the context tag to be used when sending messages to
it.

MPI_COPY_CONTEXT - A new tag is generated by each process for the new
context, and broadcast to all other members of the context, using an
all-to-all communication.  A new context table is created, with these new
tags.

MPI_SPLIT_CONTEXT - A naive implementation is to have an all-to-all
communication where each process gathers the (key, index, new_context_tag)
triples of all processes in the old group.  Each process then creates a new
context table for the processes that provided the same key value as it.

MPI_NEW_CONTEXT - Similar to MPI_COPY_CONTEXT.

MPI_RANK, MPI_SIZE - require local access to the context object.
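
A corresponding C sketch of Proposal 2, where the incoming tag is a small
index into a table of active contexts and the descriptor records, per
member, the tag to use when sending to that member.  Array bounds and names
are again assumptions.

    /* Proposal 2 sketch: small context tags index the context table
       directly. */
    #define MAX_ACTIVE_CONTEXTS 32   /* bound on concurrently active contexts */
    #define MAX_MEMBERS        256   /* illustrative bound on context size    */

    struct ctx2 {
        int size;                    /* number of members */
        int send_tag[MAX_MEMBERS];   /* tag to put on messages sent to member
                                        i; may differ from member to member */
        /* ... process table ordered by rank, queues ... */
    };

    static struct ctx2 *active[MAX_ACTIVE_CONTEXTS];  /* indexed by local tag */

    /* Receiving side: no search is needed, the tag selects the context. */
    struct ctx2 *lookup_incoming(int tag)
    {
        return (tag >= 0 && tag < MAX_ACTIVE_CONTEXTS) ? active[tag] : NULL;
    }

    /* Sending side: use the tag the destination chose for this context. */
    int send_tag_for(const struct ctx2 *c, int dest_rank)
    {
        return c->send_tag[dest_rank];
    }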

1.3 Advanced context operations

Additional functions are required to support a less static model, where
processes may be created or deleted during execution.  This requires
contexts to grow or be merged.  This section outlines possible mechanisms
for such extension.  The lack of interrupt driven communication mechanisms
in MPI restricts the functionality of such mechanisms: a new process can
join an existing context only if all processes that participate in the old
context execute a call to add this new process.

A local context MPI_ME, in which only one process participates, is
predefined for each process.

MPI_SPAWN(newcontext, oldcontext, array_of_environments, len)

Spawn new processes and create a new context that includes the processes in
the old context, followed by the newly spawned processes.  The array
provides environment parameters for each newly generated process.  The form
of the array entries is implementation dependent; they may include
information on the processor that is to run the new tasks, argv, argc
arguments, etc.

OUT newcontext: handle to new context
OUT oldcontext: handle to old context
IN array_of_environments: list of environments for new processes
IN len: number of new processes (integer)

As particular cases, MPI_SPAWN(newcontext, MPI_ME, ...) allows one process
to spawn new processes and create a new context that consists of the
spawning process, followed by the spawned processes; MPI_SPAWN(newcontext,
MPI_ALL, ...) allows one to create a new "universal" context that contains
all previous processes and the newly spawned processes.
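
For concreteness, a sketch of the first particular case (spawning from
MPI_ME).  It is only an illustration: the draft leaves both the C binding
and the form of the environment entries implementation dependent, so the
types and names below are assumptions.

    /* Assumed binding; the environment entries are passed as opaque
       pointers because their form is implementation dependent. */
    typedef struct mpi_context_desc *mpi_context;
    extern mpi_context MPI_ME;
    void mpi_spawn(mpi_context *newcontext, mpi_context oldcontext,
                   void *array_of_environments[], int len);

    void spawn_workers(void *envs[], int nworkers)
    {
        mpi_context workers;

        /* New context = the spawning process (listed first, hence rank 0),
           followed by the nworkers newly spawned processes. */
        mpi_spawn(&workers, MPI_ME, envs, nworkers);

        /* The spawned processes can now be addressed as ranks 1..nworkers
           within "workers". */
    }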

MPI_BCAST_CONTEXT(newcontext, context, root, array_of_ranks, count)

Behaves like MPI_NEW_CONTEXT, except that only one process in the old
context, namely the process with rank root, has to compute the list of the
ranks of the processes that participate in the new context.  The new
context does not necessarily include the process root.  The call is
blocking, and it has to be executed by all processes in the list and by the
root process.

OUT newcontext: handle to newly created context at calling process.  This
    handle should not be associated with an object before the call.
IN context: handle to old context
IN root: index of new context creator in old context
IN array_of_ranks: ranks in the old context of members of new context;
    significant only at root
IN count: size of new context; significant only at root

Implementation note:

The call can be implemented as a broadcast from the root to all new context
participants, followed by the code of MPI_NEW_CONTEXT (the broadcast is
executed using point-to-point communication).

Example: Suppose one has a server context SERVER, and a client context
CLIENT.  Processes in either context may not know about processes in the
other context, but they all belong to context MPI_ALL, and all know about
process root (the "nameserver").  The root process knows about each of the
two contexts.  A call to MPI_BCAST_CONTEXT can be used to create a new
context that is the union of the CLIENT and SERVER contexts, so as to allow
these to communicate.
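
A sketch of this nameserver example in C, under the same assumed binding as
the earlier sketches; how the root discovers the client and server ranks is
left out.

    /* Assumed C binding, illustration only. */
    typedef struct mpi_context_desc *mpi_context;
    extern mpi_context MPI_ALL;
    void mpi_bcast_context(mpi_context *newcontext, mpi_context context,
                           int root, int *array_of_ranks, int count);

    /* members/nmembers: the MPI_ALL ranks of all client and server
       processes, computed by the root (the nameserver); significant only
       there.  Every listed process, plus the root, makes this same
       blocking call; afterwards clients and servers address one another by
       their rank in *both. */
    void join_clients_and_servers(mpi_context *both, int root,
                                  int *members, int nmembers)
    {
        mpi_bcast_context(both, MPI_ALL, root, members, nmembers);
    }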

Discussion:

Possible extension: the root has two lists; the list of processes that call
to join the new context, and the list of processes that are to join the
next context.  The remaining processes are returned a value indicating they
were left out.  This could be useful for master-slave code.  The root is
the master that creates new contexts for the parallel execution of new
tasks.  All available processes execute the call, but only a subset is
allocated.

MPI_MERGE_CONTEXT(newcontext, context1, context2, root)

This function allows two contexts to be merged even when there is no
context that encompasses the two merged contexts; it is only required that
one process, namely root, be in both contexts to be merged.  The call is
blocking and must be executed by all processes that participate in the
merged context.  Each process provides its old context, and the index of
process root within this old context (the index may be different in the two
contexts that are merged).  The root provides handles to both contexts that
are to be merged.  The processes in context1 precede the processes in
context2 in the ranking of the new context.

OUT newcontext: handle to newly created context at calling process.  This
    handle should not be associated with an object before the call.
IN context1: handle to old context
IN context2: handle to second context; significant only at root
IN root: index of root in old context

Example: a server (the "root") and a set of client processes participate in
context1.  The server spawns new "helpers", or has other available server
processes join it.  A call to MPI_MERGE_CONTEXT can be used to create a new
context that makes these new servers available to the parallel client.

Discussion:

Here, too, the function can be extended to allow the creation of a new
context where only a subset of the processes in the old group participate.

If MPI_MERGE_CONTEXT is available then the functionality of MPI_SPAWN can
be reduced: it is sufficient to create a new context that consists of the
spawning process (rather than the spawning context) and the spawned
processes.  This new context can then be merged with a preexisting context.

Implementation note:

Implementation is similar to the implementation of MPI_BCAST_CONTEXT.

1.4 Examples - Problems in Rick's list

Only the basic context functions are used in these examples.

Warning: late night work

Function that implements shuffle permutation in group/context

    void mpi_shuffle(inbuf, outbuf, datatype, count, context, size)
    ...
    {
        mpi_copy_context(newcontext, context);
        mpi_rank(context, i);
        dest = shuffle(i, size);
        source = unshuffle(i, size);
        mpi_irecvc(handle, inbuf, count, datatype, source, 0, newcontext);
        mpi_sendc(outbuf, count, datatype, dest, 0, newcontext);
        mpi_wait(handle, ret_stat);
        mpi_free(newcontext);
    }

This works OK if processes are single-threaded.  If they are multithreaded,
one has to make sure that other concurrent threads do not create new
contexts.  New context creation can be skipped if there are no pending
messages in the context when mpi_shuffle is called.

Use of contexts for library development.  The code of a parallel library
function should be written so that each message that is produced by some
process is consumed by another process during the execution of the library
code, i.e. the library should "clean its garbage".

A parallel library is invoked collectively by a group of processes.  Any
invocation of the library occurs within a context defined by the user.  All
processes that participate in such a context invoke the parallel library,
and matching invocations occur at all of them in the same order.

The library has to generate a new context, using the function
MPI_COPY_CONTEXT(newcontext, context), for each context context from which
the library may be invoked.  The library will then use the context
newcontext for its communication whenever it is invoked within the context
context.  If the library uses collective communication within dynamically
defined subgroups, then these subgroups will be created by splitting the
group of processes defined by newcontext.

In the general case, the context of the invocation has to be passed as an
explicit parameter when the library is invoked.  The library code will
generate a new context when it starts executing and will free this context
when it terminates.

There are several special cases where dynamic context creation can be
avoided.

Case 1: The library is always invoked within the context MPI_ALL.  Then the
new context can be "cached": a library collective initialization routine
should be invoked by the user at the start of the program; this routine
creates a copy of the context MPI_ALL and stores a handle to it in a C
static variable (Fortran 77 COMMON).  The context MPI_ALL need not be
passed as a parameter when the library is invoked.

Case 2: The library is always invoked in a unique library calling context
on each process.  Then a library initialization routine should be invoked
by the user on each process where the library may be invoked; the
initialization routine is called after the user has defined the library
calling contexts and before the library is invoked.  Each initialization
call is a collective call within the library calling context.  A copy of
the calling context is created (by MPI_COPY_CONTEXT) and stored in a static
library variable.  Subsequent calls to the library need not pass the
calling context as a parameter.

Case 3: The library may be invoked on each process within a fixed (small)
number of library calling contexts.  Copies of these contexts can be
created before the library is invoked and "cached" in static variables.
Subsequent invocations of the library need not create new copies, but only
select the right preexisting copy.

This assumes one can test contexts (context handles) for equality.  This
should be said explicitly in the draft.
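
A sketch of the caching scheme of Cases 1-3, again under an assumed C
binding: a collective initialization routine copies the calling context
once into a static variable, and later library entry points reuse it.  The
names and prototypes are illustrative only.

    /* Assumed binding, as in the earlier sketches. */
    typedef struct mpi_context_desc *mpi_context;
    void mpi_copy_context(mpi_context *newcontext, mpi_context context);
    void mpi_rank(int *rank, mpi_context context);
    void mpi_size(int *size, mpi_context context);
    void mpi_free(mpi_context context);

    static mpi_context lib_context;   /* "stored in a C static variable" */

    /* Collective initialization: called once by every process in the
       calling context, after that context is defined and before any other
       library call. */
    void lib_init(mpi_context calling_context)
    {
        mpi_copy_context(&lib_context, calling_context);
    }

    /* A library entry point: all of its messages travel in lib_context, so
       they cannot be confused with the caller's messages, and the calling
       context need not be passed as a parameter. */
    void lib_solve(void)
    {
        int me, np;
        mpi_rank(&me, lib_context);
        mpi_size(&np, lib_context);
        /* ... library communication, confined to lib_context ... */
    }

    void lib_finalize(void)
    {
        mpi_free(lib_context);        /* release the cached context */
    }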
b(of)f(these)g(con)o(texts)g(can)h(b)q(e)g(created)f(b)q(efore)h(the)f (library)h(is)75 440 y(in)o(v)o(ok)o(ed)e(and)g(\\cac)o(hed")g(in)h(static)f (v)m(ariables.)29 b(Subsequen)o(t)19 b(in)o(v)o(ok)m(ations)f(to)g(the)g (library)h(need)f(not)75 497 y(create)d(new)g(copies,)h(but)f(only)h(select)g (the)f(righ)o(t)g(preexisting)i(cop)o(y)l(.)166 547 y Fc(This)9 b(assumes)h(one)f(can)h(test)g(con)o(texts)h(\(con)o(text)f(handles\))g(for)f (equalit)o(y)m(.)15 b(This)9 b(should)g(b)q(e)h(said)g(explicitely)75 596 y(in)j(the)i(draft)75 704 y Fk(Use)i(of)h(con)o(texts)f(for)g(a)h(host)f (no)q(de)h(computation)i(mo)q(del)42 b Fc(Let's)14 b(assume)g(t)o(w)o(o)f (con)o(texts:)145 779 y Fa(\017)23 b Fb(MPI)p 258 779 14 2 v 15 w(ALL)13 b Fc(with)g(host)i(b)q(eing)e(no)q(de)i(zero)145 854 y Fa(\017)23 b Fb(MPI)p 258 854 V 15 w(NODE)13 b Fc(that)h(do)q(es)g(not) g(include)g(the)h(host)166 928 y(W)m(e)e(assume)h(the)g(load)f(pro)q(cedure)j (ensures)g(that)e(host)g(is)g(pro)q(cess)i(zero)e(in)g(ALL)75 1078 y Fb(host/node)20 b(code)479 b(translation)75 1128 y(--------------)e (-------------)75 1227 y(I_am_the_host\(\))455 b(\(mpi_rank\(MPI_A)o(LL,)19 b(rank\);)860 1277 y(rank==0;\))75 1377 y(form)i(node)g(group)457 b(mpi_rank\(MPI_AL)o(L,)19 b(task\))860 1427 y(mpi_split_conte)o(xt\(MP)o (I_NOD)o(E,)g(MPI_ALL,)903 1476 y(\(task==0\),0\);)75 1576 y(Broadcast)h(from)h(host)g(to)g(nodes)174 b(mpi_bcast\(MPI_A)o(LL,0,)o (...\))75 1676 y(regular)20 b(communications)f(btwn)196 b(mpi_send\(buffer)o (,len,)o(dest,)o(tag,M)o(PI_NO)o(DE\))75 1725 y(nodes)675 b(mpi_recv\(buffer) o(,len,)o(sourc)o(e,tag)o(,MPI_)o(NODE)o(\))184 1775 y(/*)21 b(source,)g(dest)g(are)g(ranging)f(from)h(0)h(to)f(#nodes-1)f(*/)75 1875 y(sum)h(values)g(from)g(each)g(node)239 b(mpi_reduce\(inbu)o(f,out)o (buf,l)o(en,MP)o(I_ALL)o(,)75 1925 y(at)21 b(host)893 b(0,MPI_ISUM\))228 1975 y(/*)21 b(host)g(\(node)g(zero\))f(calls)h(mpi_reduce)f(with)h(inbuf)g (=)g(0)h(*/)p eop %%Trailer end userdict /end-hook known{end-hook}if %%EOF From owner-mpi-context@CS.UTK.EDU Mon May 10 05:13:04 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA04059; Mon, 10 May 93 05:13:04 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14410; Mon, 10 May 93 02:59:44 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 10 May 1993 02:59:42 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14402; Mon, 10 May 93 02:59:37 -0400 Message-Id: <9305100659.AA14402@CS.UTK.EDU> Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 6287; Mon, 10 May 93 03:00:12 EDT Date: Mon, 10 May 93 03:00:11 EDT From: "Marc Snir" To: MPI-CONTEXT@CS.UTK.EDU %!PS-Adobe-2.0 %%Creator: dvips 5.47 Copyright 1986-91 Radical Eye Software %%Title: CON-03.DVI.* %%Pages: 10 1 %%BoundingBox: 0 0 612 792 %%EndComments %%BeginProcSet: texc.pro /TeXDict 250 dict def TeXDict begin /N /def load def /B{bind def}N /S /exch load def /X{S N}B /TR /translate load N /isls false N /vsize 10 N /@rigin{ isls{[0 1 -1 0 0 0]concat}if 72 Resolution div 72 VResolution div neg scale Resolution VResolution vsize neg mul TR matrix currentmatrix dup dup 4 get round 4 exch put dup dup 5 get round 5 exch put setmatrix}N /@letter{/vsize 10 N}B /@landscape{/isls true N /vsize -1 N}B /@a4{/vsize 10.6929133858 N}B /@a3{ /vsize 15.5531 N}B /@ledger{/vsize 16 N}B /@legal{/vsize 13 N}B /@manualfeed{ statusdict /manualfeed true put}B /@copies{/#copies X}B 
0349486C7F6C496D7F02F06D7F717E4A81173F84171F84A3711380AB19005FA34D5AA260177F6E 5D6E4A5A6E495B6E495B9126FF800F5BDBE03F90C7FC02EFB512FC02E314F002E014C0DB1FFCC8 FC92CAFCAFB612C0A539427CAD42>I<9039FF803FC0B5EBFFF0028313FC02877F91388FE7FFEC 9F070003D99E0F13806C13BCA214F8A214F06F13004A6C5A6F5A92C8FCA25CB3A6B612E0A5292E 7CAD31>114 D<90390FFF81E090B512F7000314FF5A380FFC01391FE0003FD83F80130F007F14 0790C7FC15035AA27F6D90C7FC13F013FF14F86CEBFFC015F86C14FE6C806C15806C15C06C15E0 C615F0013F14F81307EB001F020013FC153F0078140F00F81407A26C1403A27E16F86C14076D14 F06D130F01F0EB1FE001FEEBFFC090B6128048ECFE00D8F83F13F8D8F00313C0262E7CAD2F>I< EB01F0A61303A31307A3130FA2131F133FA2137FEA01FF5A000F90B512C0B7FCA4C601F0C7FCB3 A5ED01F0A91503D97FF813E01507D93FFC13C090391FFE1F806DB5FC6D1400010113FC9038003F F024427EC12E>I<007FB5D8801FB5FCA528007FF8000390C7FC6EEB01FC6D6C495A6D6C495A6D 5D6D6D485A6D6D485AEDE03F6D6D48C8FC6DEBF8FE91387FF9FC6EB45A5E6E5B6E5B80806E7F82 824A7F825C91380FEFFFDA1FCF7FDA3F877FDA7F037FECFE0149486C7F4A6D7E49488001076E7E 49486D7E49487F49486D7FD9FFC081B500F8013FEBFFC0A53A2E7EAD3F>120 D123 D E /Fo 8 117 df<140F5C147FEB03FF131FB6FCA313E7EA0007 B3B3A7007FB612E0A4233879B732>49 D 67 D97 D<903801FFC0011F13F8017F13FE9038FFC1FF00039038007F80D807FCEB1FC0484814E0ED 0FF0485A003FEC07F8A2485AED03FCA212FFA290B6FCA301E0C8FCA5127FA27F003F153CA26C6C 147C000F15786C6C14F86C6CEB01F06C6CEB07E06C9038E03FC0013FB51200010F13FC010013E0 26267DA52D>101 D<13FFB5FCA412077EB0ED7FC0913803FFF8020F13FE91381F03FFEC3C0102 7814804A7E4A14C05CA25CA291C7FCB3A4B5D8FC3F13FFA4303C7CBB37>104 D<9039FF01FF80B5000F13F0023F13FC9138FE03FFDAF00113C000039039E0007FE0028014F0EE 3FF891C7121F17FC160F17FEA3EE07FFAAEE0FFEA3EE1FFCA217F86EEB3FF06E137F6EEBFFE06E 481380DAFC07130091383FFFFC020F13F0020190C7FC91C9FCADB512FCA430377DA537>112 D<9038FE03F000FFEB0FFE91383FFF8091387C7FC014F00007ECFFE06C6C5A5CA25CED7FC0ED3F 80ED0E0091C8FCB3A3B512FEA423267DA529>114 D116 D E /Fp 36 119 df38 D<127012F8A212F012E005057A840F>46 D<14035CA25C1580141FA2143714771467 14C7A2EB0187A2EB0307A21306130E130C1318A2133090383FFFC05BEB600313C012011380EA03 00A21206A2121E39FFC03FFC13801E237DA224>65 D<90B512E015F890380F003C151E131E150E 150FA249130E151EA2153C49137815F0EC01E090387FFFC090B5FC9038F003E0EC00F01578485A 1538153CA248481378A315F039078001E01403EC07C0EC1F00B512FE14F020227DA122>I<90B5 12F015FC90380F003E150F011EEB0780150316C015015B16E0A35BA449EB03C0A44848EB0780A2 16005D4848130E5D153C5D48485B4A5AEC0780021FC7FCB512FC14F023227DA125>68 D<91387E0180903903FF810090380F80C390383E00670178133F49133ED801C0131E485A120748 C7121C120E121E5A15185A92C7FCA25AA4EC3FFC5AEC01E0A26C495AA312700078495A1238003C 130F6C131B260FC0F3C7FC3803FFC1C690C8FC212479A226>71 D73 D<903807FFC04913809038003C00 A25CA45CA4495AA4495AA4495AA449C7FCA212381278EAF81EA2485AEA6078EA70F0EA3FE0EA1F 801A237CA11A>I<9039FFF80FFCA290390F0007C01600011E130E5D5D1560495B4A5A4AC7FC14 0E495A5C147814FCEBF1BCEBF33CEBFE1E13FC3801F01F497EA2813803C007A26E7EA2EA07806E 7EA28139FFF80FFEA226227DA125>III<01FFEB1FFC1480010FEB03C01680D91BC01300A3EB19E001 311306A2EB30F0A201605B1478A3496C5AA3141ED801805BA2140FA2D803005BEC07E0A300065C 1403A2121F39FFE00180A226227DA124>I<14FE903807FF8090380F03E090383C00F001701378 4913384848133C4848131C48C7FC48141E121EA25AA25AA348143CA31578A34814F0A26CEB01E0 15C01403EC07800078EB0F00141E6C5B6C13F8380F83E03807FF80D801FCC7FC1F2479A225>I< 90B512C015F090380F0078153C011E131E150EA349131EA3153C491338157815F0EC03C090B512 005CEBF00FEC07803901E003C0A43903C00780A43907800F001503A2EC0706D8FFF8138EEC03FC C7EA01F020237DA124>82 
D<903801F06090380FFC4090381E0EC0EB3807EB700301E0138013C0 1201A2D803801300A26DC7FCA27FEA01F813FF6C13E06D7EEB1FF8EB03FCEB007C143C80A30030 131CA31418007013385C00781360387C01C038EF0380D8C7FFC7FCEA81FC1B247DA21B>I<001F B512F8A2391E03C07800381438EB0780123000601430A2EB0F0012C0A3D8001E1300A45BA45BA4 5BA4485AA31203B57E91C7FC1D2277A123>I<393FFE07FFA23903C000F015E0484813C0A4390F 000180A4001EEB0300A4481306A4485BA4485BA25C12705C5C6C485A49C7FCEA1E0EEA0FFCEA03 F0202377A124>I<3BFFF03FF81FF8D9E07FEB3FF03B1F0007800780001E010FEB03001606141F 5E14375E14675E14C75E381F0187000F5DEB0307ED818013060383C7FC130C158613180138138C 0130139C0160139815B801C013B015E013805D13005D120E92C8FC120C2D2376A131>87 D<39FFF003FFA2000FC712F015E090388001C000071480EC0300EBC0060003130E5CEBE0180001 5B5CEBF0E000005BEBF18001FBC7FC13FF137E137C1378A45BA4485AA4EA3FFE5B202276A124> 89 D97 D<137E48B4FC3803C380EA0703 EA0E07121C003CC7FC12381278A35AA45BEA7003130EEA383CEA1FF0EA0FC011157B9416>99 D<143CEB03F8A2EB0038A21470A414E0A4EB01C013F9EA01FDEA078F380F0780120E121CEA3C03 383807001278A3EAF00EA214101418EB1C30EA703C137C3838FC60383FCFC0380F078016237BA2 19>I<13F8EA03FCEA0F0EEA1E06123C1238EA780CEAF038EAFFF01380EAF0005AA413021306EA 701C1378EA3FE0EA0F800F157A9416>I<143C147F14CF1301EB03861480A3EB0700A5130EEBFF F0A2EB0E00A25BA55BA55BA55BA45B1201A2EA718012F390C7FC127E123C182D82A20F>II<136013F0 13E0A21300A8120EEA1F801233126312C3A3EA0700A2120EA35A13201330EA3860A213C01239EA 1F80EA0E000C217CA00F>105 D<13F0EA0FE0A21200A2485AA4485AA448C7FCEB01E0EB07F0EB 0E30380E1870EB30F01360EBC060381D8000121F13E0EA1CF0EA3838133CEB1C20143038703860 A21440EB18C038E01F8038600F0014237DA216>107 DI<381E0780383F1F E0EA63B8EBE070EAC3C0A21380000713E01300A3380E01C0A214C2EB0383001C1386EB0706140C EB0318003813F0381801E018157C941B>110 D<137E48B4FC3803C380380701C0120E001C13E0 123CA21278A338F003C0A21480130700701300130E5B6C5AEA1FF0EA07C013157B9419>I<3803 C1F03807E3F8380C761C137C3818781E1370A2EA00E0A43801C03CA314780003137014F014E0EB E3C038077F80EB1E0090C7FCA2120EA45AA2EAFFC0A2171F7F9419>I<381E0F80383F1FC03863 B0E013E0EAC3C1A2EB80C00007130090C7FCA3120EA45AA45A121813157C9415>114 D<13FC48B4FC38038380EA0703EA0E07A2EB0200000FC7FC13F0EA07FC6C7EEA007E130FA2EA70 07EAF00EA2485AEA7038EA3FF0EA1FC011157D9414>I<13C01201A4EA0380A4EA0700EAFFF8A2 EA0700120EA45AA45AA213101318EA7030A21360EA71C0EA3F80EA1E000D1F7C9E10>I<000F13 30381F8070EA31C0006113E012C1EAC380A2380381C0EA0701A3380E0380A214841486EB070CA2 130FEB1F183807F3F03803E1E017157C941A>I<380F01C0381F83E0EA31C3EA61C1EAC1C0EAC3 80A2000313C0EA0700A3380E0180A3EB0300A213061304EA0F1CEA07F8EA01E013157C9416>I E /Fq 53 122 df35 D<127012F812FCA2127C12 0CA41218A21230A212601240060F7C840E>44 DI<127012F8A3127005 057C840E>I48 DI51 D<00101380EA1C07381FFF 005B5B13F00018C7FCA613F8EA1BFEEA1F0F381C0780EA180314C0EA000114E0A4126012F0A214 C0EAC0031260148038300700EA1C1EEA0FFCEA03F013227EA018>53 D<137E48B4FC3803C18038 0701C0EA0E03121CEB018048C7FCA2127812701320EAF1FCEAF3FEEAF60738FC038000F813C013 0112F014E0A51270A3003813C0130300181380381C0700EA0E0EEA07FCEA01F013227EA018>I< EA01F0EA07FCEA0E0F38180780EA3803383001C01270A31278EB0380123E383F0700EA1FCEEA0F FCEA03F87FEA0F7F381C3F80EA380F387007C0130338E001E01300A5387001C0A238380380381E 0F00EA0FFEEA03F013227EA018>56 DI<497E497EA3497EA3497E130CA2EB1CF8EB1878 A2EB383C1330A2497EA3497EA348B51280A2EB800739030003C0A30006EB01E0A3000EEB00F000 1F130139FFC00FFFA220237EA225>65 DI<90380FE01090383FF8309038F81C703801E0063903C003 F03807800148C7FC121E003E1470123C127C15301278A212F81500A700781430A2127CA2003C14 
60123E121E6C14C06C7E3903C001803901E003003800F80EEB3FF8EB0FE01C247DA223>II 70 D<903807F00890383FFC189038FC0E383801E0033903C001F83807800048C71278121E1538 5AA2007C14181278A212F81500A6EC1FFF1278007CEB0078A2123CA27EA27E6C7E6C6C13F83801 F0013900FC079890383FFE08903807F80020247DA226>I<39FFFC3FFFA239078001E0AD90B5FC A2EB8001AF39FFFC3FFFA220227EA125>I<3803FFF0A238000F00B3A6127012F8A3EAF01EEA60 1CEA3878EA1FF0EA07C014237EA119>74 D<39FFFC07FFA239078001F015C05D4AC7FC14065C5C 14385C5CEB81C0EB8380EB87C080138DEB98F013B0EBE078497E13808080A26E7E8114036E7EA2 6E7E4A7E3AFFFC07FF80A221227EA126>III<39FF800FFF13C00007EB01F89038E000607F12061378A27F133E131E7FA2 EB078014C01303EB01E0A2EB00F01478A2143CA2141E140FA2EC07E0A214031401A2381F8000EA FFF0156020227EA125>III82 D<3803F020380FFC60381C0EE0EA3803EA7001A2EAE000A21460A36C1300A21278127FEA3F F0EA1FFE6C7E0003138038003FC0EB07E01301EB00F0A2147012C0A46C136014E06C13C0EAF801 38EF038038C7FF00EA81FC14247DA21B>I<007FB512F8A2387C07800070143800601418A200E0 141C00C0140CA500001400B3A20003B5FCA21E227EA123>I<3BFFF03FFC07FEA23B0F0007C001 F00203EB00E01760D807806D13C0A33B03C007F001801406A216032701E00C781300A33A00F018 3C06A3903978383E0CEC301EA2161C90393C600F18A390391EC007B0A3010F14E0EC8003A36D48 6C5AA32F237FA132>87 D<387FFFFEA2EB003C007C137C0070137814F84813F0EB01E0EAC00314 C01307148038000F005B131E133E133C5B13F85B0001130313E0EA03C012071380000F13071300 001E1306003E130E003C131E007C133E007813FEB5FCA218227DA11E>90 D97 D<120E12FEA2121E120EAAEB1F80EB7FE0380FC0 F0EB0078000E1338143C141C141EA7141C143C000F1338EB8070EBC1F0380C7FC0EB1F0017237F A21B>II<14E0130FA213011300AAEA03F0EA07FEEA1F07EA3C01EA 38001278127012F0A712701278EA3801EA3C03381E0EF0380FFCFEEA03F017237EA21B>II<133C13FEEA01CFEA038FA2EA0700A9EAFFF8A2EA0700 B1EA7FF8A2102380A20F>I<14F03801F1F83807FFB8380F1F38381E0F00EA1C07003C1380A500 1C1300EA1E0FEA0F1EEA1FFCEA19F00018C7FCA2121CEA1FFF6C13C04813E0383801F038700070 481338A400701370007813F0381E03C0380FFF803801FC0015217F9518>I<120E12FEA2121E12 0EAAEB1F80EB7FC0380FC1E0EB80F0EB0070120EAE38FFE7FFA218237FA21B>I<121C121E123E 121E121CC7FCA8120E12FEA2121E120EAFEAFFC0A20A227FA10E>II<120E12FEA212 1E120EAAEB0FFCA2EB07E0EB0380EB0700130E13185B137813F8EA0F9C131EEA0E0E7F1480EB03 C0130114E014F038FFE3FEA217237FA21A>I<120E12FEA2121E120EB3ABEAFFE0A20B237FA20E> I<390E1FC07F3AFE7FE1FF809039C0F303C03A1F807E01E0390F003C00000E1338AE3AFFE3FF8F FEA227157F942A>I<380E1F8038FE7FC038FFC1E0381F80F0380F0070120EAE38FFE7FFA21815 7F941B>II<380E1F8038FE7FE0 38FFC1F0380F0078120E143CA2141EA7143CA2000F1378EB8070EBC1F0380E7FC0EB1F0090C7FC A8EAFFE0A2171F7F941B>I114 DI<1206A5120EA3121E123EEAFFF8A2 EA0E00AA130CA51308EA0718EA03F0EA01E00E1F7F9E13>I<000E137038FE07F0A2EA1E00000E 1370AC14F01301380703783803FE7FEA01F818157F941B>I<38FFC3FEA2381E00F8000E1360A2 6C13C0A338038180A213C300011300A2EA00E6A3137CA31338A217157F941A>I<39FF8FF9FFA2 391E01C07CD81C031338000EEBE030A2EB06600007EB7060A2130E39038C30C01438139C3901D8 1980141DA2EBF00F00001400A2497EEB600620157F9423>I<38FFC3FEA2381E00F8000E1360A2 6C13C0A338038180A213C300011300A2EA00E6A3137CA31338A21330A213701360A2EAF0C012F1 EAF380007FC7FC123E171F7F941A>121 D E /Fr 20 118 df45 D68 D73 D77 D80 D<90387F80203901FFE0603807C0F8390F001CE0001E130F481307003813030078130112701400 12F0A21560A37E1500127C127E7E13C0EA1FF86CB47E6C13F86C7FC613FF010F1380010013C0EC 1FE01407EC03F01401140015F8A200C01478A57E15706C14F015E07E6CEB01C06CEB038039E780 070038C1F01E38C07FFC38800FF01D337CB125>83 D97 D99 DIII<15F090387F03F83901FFCF1C3803C1FC390780F818 390F00780048137C001E133C003E133EA7001E133C001F137C6C13786C6C5A380FC1E0380DFFC0 
DRAFT

Document for a Standard Message-Passing Interface

Scott Berryman, Yale Univ
James Cownie, Meiko Ltd
Jack Dongarra, Univ. of Tennessee and ORNL
Al Geist, ORNL
Bill Gropp, ANL
Rolf Hempel, GMD
Bob Knighten, Intel
Rusty Lusk, ANL
Steve Otto, Oregon Graduate Inst
Tony Skjellum, Mississippi State Univ
Marc Snir, IBM T. J. Watson
David Walker, ORNL
Steve Zenith, Kuck & Associates

May 9, 1993

This work was supported by ARPA and NSF under contract number ###, by the National Science Foundation Science and Technology Center Cooperative Agreement No. CCR-8809615.

Chapter 1

Contexts - Proposal I

Marc Snir

1.1 Contexts

A communication context (for short, context) is a mechanism for the modularization of MPI communication. Any MPI communication occurs within a context, and does not interfere with communication executed within another context. Furthermore, a context specifies a local name space for processes that communicate in this context. The processes that participate in a context are associated with a rank, which ranges from 0 to n-1, where n is the number of processes that participate in the context. This rank is used in interprocess communication as the local address of the process within that context. Thus, communication within a context is unaffected by communication outside that context.

A process may communicate simultaneously in several contexts. The context of a communication is explicitly stated as a parameter of the communication call.

A process that participates in a communication context accesses this context using a context handle (i.e., a handle to an opaque object that identifies a context). This handle can be used to

  - Find information about this context, such as the number of processes that participate in the context, or the rank of the calling process within the context.

  - Communicate with other processes that participate in the context; these processes are addressed using their context rank.

  - Create new contexts.

Context handles cannot be transferred from one process to another; they can be used only on the process where they were created. Thus, "knowledge" about a context exists only locally, at the processes that participate in that context. Operations within a communication context (including the generation of new subcontexts) do not require communication with processes that do not participate in that context.

Examples of possible uses for contexts follow.

1.1.1 Loosely synchronous library call interface

Consider the case where a parallel application executes a "parallel call" to a library routine, i.e., where all processes transfer control to the library routine. If the library was developed separately, then one should beware of the possibility that the library code may by mistake receive messages sent by the caller code, and vice versa. The problem is solved by allocating a different context to the library, thus preventing unwanted interference.
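As an illustration of this pattern, the sketch below shows a library routine that does all of its internal communication in a private copy of the caller's context, using the MPI_COPY_CONTEXT, MPI_RANK, MPI_SIZE and MPI_FREE operations defined in Section 1.2 and the mpi_sendc/mpi_irecvc calls that appear in the examples of Section 1.4. The C binding details (the mpi_context handle type, pointers for OUT arguments, and the MPI_DOUBLE datatype constant) are not fixed by this proposal and are assumed here.

/* Sketch only: a parallel library routine whose internal messages cannot
   be confused with messages sent or awaited by the caller, because they
   are sent and received in a private context.  The handle type
   "mpi_context", the use of pointers for OUT arguments, and MPI_DOUBLE
   are assumed C-binding details. */
void lib_ring_shift(double *sendbuf, double *recvbuf, int count,
                    mpi_context caller)
{
    mpi_context ctx;
    int rank, size, handle, status;

    mpi_copy_context(&ctx, caller);   /* collective over the caller's context */
    mpi_rank(&rank, ctx);
    mpi_size(&size, ctx);

    /* Internal traffic: shift data one place around a ring, inside ctx. */
    mpi_irecvc(&handle, recvbuf, count, MPI_DOUBLE,
               (rank + size - 1) % size, 0, ctx);
    mpi_sendc(sendbuf, count, MPI_DOUBLE, (rank + 1) % size, 0, ctx);
    mpi_wait(&handle, &status);

    mpi_free(ctx);                    /* the library cleans up after itself */
}

Since MPI_COPY_CONTEXT is a blocking collective call, a real library would normally create this private context once in an initialization routine and cache it, as discussed in Sections 1.2.1 and 1.4, rather than on every invocation.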
1.1.2 Functional decomposition and modular code development

Often, a parallel application is developed by integrating several distinct functional modules, each of which is developed separately. Each module is a parallel program that runs on a dedicated set of processes, and the computation consists of phases where modules compute separately, intermixed with global phases where all processes communicate. It is convenient to allow each module to use its own private process numbering scheme for the intramodule computation. This is achieved by using a private module context for intramodule computation, and a global context for intermodule communication.

1.1.3 Collective communication

MPI supports collective communication within dynamically created groups of processes. Each such group can be represented by a distinct communication context. This provides a simple mechanism to ensure that communication that pertains to collective communication within one group is not confused with collective communication within another group, and avoids the introduction of two different mechanisms with similar functionality.

1.1.4 Lightweight gang scheduling

Consider an environment where processes are multithreaded. Contexts can be used to provide a mechanism whereby all processes are time-shared between several parallel executions, and can context switch from one parallel execution to another, in a loosely synchronous manner. A thread is allocated on each process to each parallel execution, and a different context is used to identify each parallel execution. Thus, traffic from one execution cannot be confused with traffic from another execution. The blocking and unblocking of threads due to communication events provide a "lazy" context switching mechanism. This can be extended to the case where the parallel executions span distinct process subsets. (MPI does not require multithreaded processes.)

1.2 Basic Context Operations

A global context MPI_ALL is predefined. All processes participate in this context when computation starts. MPI does not specify how processes are initially ranked within the context MPI_ALL. It is expected that the start-up procedure used to initiate an MPI program (at load-time or run-time) will provide information or control on this initial ranking (e.g., by specifying that processes are ranked according to their pid's, or according to the physical addresses of the executing processors, or according to a numbering scheme specified at load time).

Discussion: If we think of adding new processes at run-time, then MPI_ALL conveys the wrong impression, since it is just the initial set of processes.
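As a minimal example, the sketch below shows each process querying its initial rank and the number of participants in MPI_ALL, using the MPI_RANK and MPI_SIZE calls defined below; as before, the C binding details are assumed rather than specified by this proposal.

#include <stdio.h>

/* Sketch only: query the predefined global context at start-up.
   MPI_RANK and MPI_SIZE are defined later in this section; the pointer
   convention for OUT arguments is an assumed C-binding detail. */
void report_initial_ranking(void)
{
    int rank, size;

    mpi_rank(&rank, MPI_ALL);   /* this process's rank in the initial context */
    mpi_size(&size, MPI_ALL);   /* number of processes that started together  */

    printf("process %d of %d in MPI_ALL\n", rank, size);
}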
(are)g(spanning)g(distinct)h(pro)q(cess)e(subsets.)75 1788 y(\(MPI)d(do)q(es)g(not)g(require)h(m)o(ultithreaded)h(pro)q(cesses.\))75 1929 y Fl(1.2)70 b(Basic)22 b(Con)n(text)g(Op)r(erations)75 2030 y Fm(A)17 b(global)g(con)o(text)g Fk(MPI)p 534 2030 16 2 v 18 w(ALL)g Fm(is)g(prede\014ned.)26 b(All)19 b(pro)q(cesses)e (participate)g(in)h(this)f(con)o(text)f(when)75 2087 y(computation)k(starts.) 32 b(MPI)20 b(do)q(es)g(not)f(sp)q(ecify)i(ho)o(w)e(pro)q(cesses)h(are)g (initially)i(rank)o(ed)e(within)h(the)75 2143 y(con)o(text)11 b Fe(MPI)p 308 2143 15 2 v 17 w(ALL)p Fm(.)g(It)h(is)g(exp)q(ected)h(that)f (the)g(start-up)f(pro)q(cedure)i(used)f(to)f(initiate)i(an)f(MPI)g(program)75 2200 y(\(at)j(load-time)j(or)d(run-time\))i(will)h(pro)o(vide)f(information)g (or)e(con)o(trol)h(on)h(this)f(initial)j(ranking)e(\(e.g.,)75 2256 y(b)o(y)12 b(sp)q(ecifying)j(that)c(pro)q(cesses)i(are)f(rank)o(ed)h (according)g(to)e(their)i(pid's,)h(or)d(according)i(to)f(the)h(ph)o(ysical)75 2312 y(addresses)h(of)f(the)h(executing)g(pro)q(cessors,)f(or)g(according)h (to)f(a)h(n)o(um)o(b)q(ering)g(sc)o(heme)g(sp)q(eci\014ed)i(at)d(load)75 2369 y(time\).)166 2508 y Fd(Discussion:)h Fc(If)e(w)o(e)h(think)e(of)h (adding)f(new)i(pro)q(cesses)i(at)d(run-time,)e(then)j Fb(MPI)p 1454 2508 14 2 v 15 w(ALL)f Fc(con)o(v)o(eys)g(the)h(wrong)75 2564 y(impression,)f(since)j(it)f(is)f(just)h(the)h(initial)d(set)j(of)e(pro) q(cesses.)166 2704 y Fm(The)i(follo)o(wing)h(op)q(erations)g(are)e(a)o(v)m (ailable)j(for)e(creating)g(new)h(con)o(texts.)p eop %%Page: 3 4 bop 75 -100 a Ff(1.2.)34 b(BASIC)16 b(CONTEXT)f(OPERA)l(TIONS)964 b Fm(3)166 45 y Fk(MPI)p 275 45 16 2 v 18 w(COPY)p 446 45 V 18 w(CONTEXT\(new)o(con)o(text,)17 b(con)o(text\))166 137 y Fm(Create)g(a)g(new)g(con)o(text)g(that)g(includes)j(all)e(pro)q(cesses)g(in) g(the)f(old)h(con)o(text.)26 b(The)18 b(rank)f(of)g(the)75 193 y(pro)q(cesses)g(in)g(the)f(previous)h(con)o(text)f(is)g(preserv)o(ed.)24 b(The)16 b(call)i(m)o(ust)d(b)q(e)i(executed)g(b)o(y)f(all)i(pro)q(cesses)75 250 y(in)f(the)g(old)g(con)o(text.)22 b(It)17 b(is)g(a)f(blo)q(c)o(king)h (call:)24 b(No)16 b(call)h(returns)f(un)o(til)i(all)f(pro)q(cesses)g(ha)o(v)o (e)f(called)i(the)75 306 y(function.)j(The)15 b(parameters)f(are)75 406 y Fk(OUT)k(new)o(con)o(text)k Fm(handle)16 b(to)d(newly)h(created)g(con)o (text.)19 b(The)14 b(handle)h(should)g(not)e(b)q(e)i(asso)q(ciated)189 462 y(with)g(an)g(ob)s(ject)g(b)q(efore)g(the)h(call.)75 554 y Fk(IN)h(con)o(text)23 b Fm(handle)17 b(to)d(old)i(con)o(text)166 689 y Fk(MPI)p 275 689 V 18 w(NEW)p 422 689 V 19 w(CONTEXT\(new)o(con)o (text,)h(con)o(text,)g(arra)o(y)p 1338 689 V 18 w(of)p 1398 689 V 19 w(ranks,)f(size\))75 824 y(OUT)i(new)o(con)o(text)k Fm(handle)15 b(to)d(newly)h(created)g(con)o(text)f(at)h(calling)h(pro)q (cess.)19 b(This)14 b(handle)g(should)189 880 y(not)g(b)q(e)i(asso)q(ciated)g (with)f(an)g(ob)s(ject)g(b)q(efore)h(the)f(call.)75 972 y Fk(IN)i(con)o(text) 23 b Fm(handle)17 b(to)d(old)i(con)o(text)75 1064 y Fk(IN)h(arra)o(y)p 277 1064 V 18 w(of)p 337 1064 V 19 w(ranks)22 b Fm(ranks)15 b(in)h(the)f(old)h(con)o(text)e(of)h(the)g(pro)q(cesses)h(that)e(join)i(the)f (new)h(con)o(text)75 1156 y Fk(IN)h(size)23 b Fm(size)16 b(of)f(new)h(con)o (text)e(\(in)o(teger\))166 1255 y(A)20 b(new)g(con)o(text)f(is)i(created)e (for)h(the)g(pro)q(cesses)g(in)h(the)f(old)g(con)o(text)f(that)g(are)h (listed)h(in)g(the)75 1312 y(arra)o(y)l(.)f(The)c(pro)q(cesses)f(are)h (listed)g(according)g(to)f(their)h(rank)g(in)g(the)f(old)i(con)o(text.)j(The) c(rank)f(of)g(the)75 1368 
MPI_SPLIT_CONTEXT(newcontext, context, key, index)

OUT newcontext    handle to newly created context at calling process. This handle should not be associated with an object before the call.

IN context    handle to old context

IN key    integer

IN index    integer

A new context is created for each distinct value of key; this context is shared by all processes that made the call with this key value. Within each new context the processes are ranked according to the order of the index values they provided; in case of ties, processes are ranked according to their rank in the old context.

This call is blocking: no call returns until all processes in the old context have executed the call.

Particular uses of this function are:

(i) Reordering processes: All processes provide the same key value, and provide their index in the new order.

(ii) Splitting a context into subcontexts, while preserving the old relative order among processes: All processes provide the same index value, and provide a key identifying their new subcontext.

MPI_COPY_CONTEXT is a particular case of MPI_SPLIT_CONTEXT, when all processes provide the same key and index parameter.

MPI_RANK(rank, context)

OUT rank    integer

IN context    context handle

Return the rank of the calling process within the specified context.

MPI_SIZE(size, context)

OUT size    integer

IN context    context handle

Return the number of processes that belong to the specified context.
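To make the key/index semantics of MPI_SPLIT_CONTEXT concrete, the sketch below splits a context whose processes are viewed as an nrows x ncols grid (in row-major rank order) into one subcontext per row, and then queries the new rank and size; the C binding details are assumed as before.

/* Sketch only: split "gridctx" into per-row subcontexts of a logical
   process grid.  All processes of a row supply the same key (the row
   number), so they share one new context; the column number is used as
   the index, so ranks within each row context run 0 .. ncols-1. */
void make_row_context(mpi_context gridctx, int ncols, mpi_context *rowctx)
{
    int rank, row, col, newrank, newsize;

    mpi_rank(&rank, gridctx);
    row = rank / ncols;                 /* key: which row context to join   */
    col = rank % ncols;                 /* index: position within that row  */

    mpi_split_context(rowctx, gridctx, row, col);

    mpi_rank(&newrank, *rowctx);        /* equals col                       */
    mpi_size(&newsize, *rowctx);        /* equals ncols for a full grid     */
}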
166 1195 y(A)d(con)o(text)g(ob)s(ject)f(is)i(destro)o(y)o(ed)f(using)h(the)f Fe(MPI)p 1036 1195 15 2 v 17 w(FREE)f Fm(function.)75 1316 y Fg(1.2.1)55 b(Usage)19 b(note)75 1402 y Fm(Use)e(of)g(con)o(texts)f(for)h (libraries:)25 b(Eac)o(h)17 b(library)h(ma)o(y)e(pro)o(vide)i(an)f (initialization)j(routine)e(that)e(is)i(to)75 1459 y(b)q(e)c(called)i(b)o(y)e (all)g(pro)q(cesses,)g(and)g(that)f(generate)h(a)g(con)o(text)f(for)g(the)h (use)g(of)f(that)g(library)l(.)21 b(A)14 b(sc)o(heme)75 1515 y(for)j(allo)o(wing)i(eac)o(h)g(link)o(ed)g(library)g(to)f(ha)o(v)o(e)g(its)g (o)o(wn)f(initializati)q(on)k(co)q(de)d(w)o(ould)h(can)f(b)q(e)h(used)g(for) 75 1572 y(this)d(purp)q(ose)f(\(assuming)h(the)f(library)h(will)h(not)e(ha)o (v)o(e)f(sev)o(eral)i(concurren)o(t)f(instan)o(tiations\).)166 1628 y(Use)g(of)g(con)o(texts)g(for)f(functional)j(decomp)q(osition:)k(A)15 b(harness)g(program,)f(running)i(in)g(the)g(con-)75 1684 y(text)e Fe(ALL)g Fm(generates)f(a)h(sub)q(con)o(text)h(for)e(eac)o(h)i(mo)q(dule)g (and)g(then)f(starts)f(the)i(submo)q(dule)g(within)h(the)75 1741 y(corresp)q(onding)g(con)o(text.)166 1797 y(Use)i(of)f(con)o(texts)g (for)g(collectiv)o(e)j(comm)o(unication:)26 b(A)18 b(con)o(text)f(is)h (created)g(for)f(eac)o(h)h(group)f(of)75 1854 y(pro)q(cesses)f(where)f (collectiv)o(e)i(comm)o(unication)f(is)g(to)e(o)q(ccur.)166 1910 y(Use)k(of)g(con)o(texts)f(for)h(con)o(text-switc)o(hing)g(among)g(sev)o (eral)g(parallel)i(executions:)26 b(A)18 b(pream)o(ble)75 1967 y(co)q(de)d(is)g(used)g(to)f(generate)g(a)g(di\013eren)o(t)g(con)o(text)g (for)g(eac)o(h)g(execution;)i(this)e(pream)o(ble)h(co)q(de)g(needs)h(to)75 2023 y(use)g(a)e(m)o(utual)i(exclusion)h(proto)q(col)e(to)f(mak)o(e)h(sure)g (eac)o(h)h(thread)f(claims)h(the)f(righ)o(t)g(con)o(text.)166 2156 y Fd(Implemen)o(tati)o(on)d(note:)166 2205 y Fc(W)m(e)18 b(outline)f(here)i(t)o(w)o(o)f(p)q(ossible)g(implemen)o(tations)d(of)j(con)o (texts.)31 b(They)19 b(are)f(b)o(y)g(no)g(means)f(the)i(only)75 2255 y(p)q(ossible)c(ones.)23 b(In)15 b(eac)o(h)g(implemen)o(tation)d(w)o(e)j (assume)g(that)g(a)g(con)o(text)h(ob)r(ject)g(is)f(a)g(p)q(oin)o(ter)g(to)g (a)g(structure)75 2305 y(that)d(describ)q(es)i(the)f(con)o(text.)18 b(A)12 b(comp)q(onen)o(t)f(of)g(this)h(structure)i(is)e(a)g(table)f(of)h(the) g(pro)q(cesses)j(that)d(participate)75 2355 y(in)17 b(the)i(con)o(text,)f (ordered)h(b)o(y)f(rank.)29 b(W)m(e)17 b(assume)h(that)f(the)i(n)o(um)o(b)q (er)e(of)g(concurren)o(tly)i(activ)o(e)e(con)o(texts)i(at)75 2405 y(eac)o(h)12 b(pro)q(cess)i(is)d(relativ)o(ely)g(small;)e(sa)o(y)j (16-32.)k(In)c(either)g(implemen)o(tatio)o(n)d(one)j(migh)o(t)d(ha)o(v)o(e)j (disjoin)o(t)e(message)75 2455 y(queues)j(for)e(eac)o(h)i(con)o(text,)f(or)g (ha)o(v)o(e)f(shared)i(queues,)g(with)e(the)h(righ)o(t)g(mec)o(hanisms)d(for) j(con)o(text)g(matc)o(hing)e(and)75 2504 y(bu\013er)15 b(allo)q(cation.)166 2554 y(Prop)q(osal)f(1:)j(Large)d(con)o(text)h(tags.)166 2604 y(In)d(this)g(implem)o(en)o(tation)d(w)o(e)j(use)h(large)e(con)o(text)i(tags) f(\(sa)o(y)f(32)h(bits\),)g(so)g(that)f(matc)o(hing)f(of)i(an)f(incoming)75 2654 y(tag)19 b(with)g(a)h(lo)q(cal)e(pro)q(cess)k(requires)f(to)e(p)q (erform)g(a)g(searc)o(h)i(in)e(a)g(hash)h(table)f(or)h(another)g(similar)d (searc)o(h)75 2704 y(structure)e(\(this)d(o)q(ccurs)i(whenev)o(er)g(a)e (message)g(is)h(receiv)o(ed\).)19 b(All)11 b(messages)i(sen)o(t)g(within)f(a) g(con)o(text)h(carry)g(the)p eop %%Page: 5 6 bop 75 -100 a Ff(1.3.)34 b(AD)o(V)-5 b(ANCED)14 b(CONTEXT)i(OPERA)l(TIONS)841 b Fm(5)75 45 y Fc(same)13 
b(tag)h(v)n(alue.)k(W)m(e)c(use)h(as)g(con)o(text)f (tag)g(the)h(pid)f(of)f(the)i(lo)o(w)o(est)f(n)o(um)o(b)q(ered)g(pro)q(cess)i (in)e(the)h(con)o(text)g(\(let's)75 95 y(call)g(it)g(the)h(con)o(text)g (leader\),)h(concatenated)g(with)e(a)g(coun)o(ter)i(that)e(is)h(incremen)o (ted)g(whenev)o(er)h(this)e(pro)q(cess)75 145 y(allo)q(cates)h(a)g(new)g (tag.)24 b(This)16 b(guaran)o(tee)g(a)g(unique)g(tag)g(for)f(eac)o(h)i(group) f(\(sp)q(ecial)g(co)q(de)h(needed)g(for)f(coun)o(ter)75 195 y(wraparound\).)166 247 y Fb(MPI)p 235 247 14 2 v 15 w(COPY)p 338 247 V 15 w(CONTEXT)11 b Fc(-)h(A)h(new)g(tag)f(is)g(generated)i(b)o(y)f (incremen)o(ting)e(the)i(old)f(tag)h(b)o(y)f(one;)h(one)f(can)h(either)75 296 y(cop)o(y)f(the)g(old)f(con)o(text)h(table,)g(or)f(create)j(a)d(new)h(p)q (oin)o(ter)g(to)g(the)g(old)f(table.)17 b(A)12 b(global)e(barrier)i(sync)o (hronization)75 346 y(is)i(needed)h(to)f(mak)o(e)e(sure)j(the)g(call)e(is)h (blo)q(c)o(king.)166 398 y Fb(MPI)p 235 398 V 15 w(SPLIT)p 360 398 V 15 w(CONTEXT)j Fc(-)h(A)h(naiv)o(e)f(implemen)o(tatio)o(n)e(is)j (to)f(ha)o(v)o(e)g(an)h(all-to-all)d(comm)o(unicati)o(on)g(where)75 448 y(eac)o(h)h(pro)q(cess)i(gathers)f(the)f Fb(\(key,)k(index\))15 b Fc(pairs)i(of)f(all)g(pro)q(cesses)j(in)e(the)g(old)f(group.)27 b(Eac)o(h)17 b(pro)q(cess)i(can)75 498 y(determine)14 b(whether)h(it)e(is)h (the)g(leader)h(of)e(a)g(new)i(group)e(and)h(broadcast)g(the)h(new)f(group)g (tag)f(to)h(all)e(mem)o(b)q(ers)75 548 y(\(using)j(p)q(oin)o(t)g(to)g(p)q (oin)o(t)g(comm)o(unicatio)o(n)e(in)i(the)h(old)e(group,)h(or)g(using)h (another)f(all-to-all)e(comm)o(unicatio)o(n\).)75 597 y(Algorithmic)f(minds)g (will)g(think)i(of)f(man)o(y)f(p)q(ossible)i(optimizations.)166 649 y Fb(MPI)p 235 649 V 15 w(NEW)p 316 649 V 15 w(CONTEXT)c Fc(-)g(The)i(lo)o(w)o(est)f(n)o(um)o(b)q(ered)f(pro)q(cess)j(in)e(the)g(list) g(broadcast)g(the)h(new)f(con)o(text)h(tag)e(to)h(all)75 699 y(pro)q(cesses)i(in)d(the)g(list)g(\(using)g(p)q(oin)o(t)g(to)g(p)q(oin)o(t)f (comm)o(unication)e(op)q(erations\).)17 b(A)11 b(more)e(robust)h(implemen)o (tation)75 749 y(ma)o(y)i(en)o(tail)h(the)i(broadcast)f(of)f(the)i(mem)o(b)q (er)d(list,)h(for)h(error)g(c)o(hec)o(king.)166 801 y Fb(MPI)p 235 801 V 15 w(RANK,)21 b(MPI)p 447 801 V 15 w(SIZE)13 b Fc(-)g(require)i(lo) q(cal)e(access)j(to)e(the)g(con)o(text)h(ob)r(ject)166 853 y(Prop)q(osal)f(2:)j(Small)12 b(con)o(text)i(tags.)166 905 y(In)19 b(that)h(implemen)o(tatio)o(n)d(the)j(n)o(um)o(b)q(er)f(of)g (distinct)h(con)o(text)g(tag)f(v)n(alues)g(is)h(equal)f(to)g(the)h(maxima)o (l)75 955 y(n)o(um)o(b)q(er)d(of)g(con)o(texts)i(that)f(can)g(b)q(e)h(activ)o (e)f(at)f(the)i(same)e(no)q(de.)30 b(Th)o(us,)19 b(the)f(con)o(text)h(tag)e (of)h(an)f(incoming)75 1005 y(message)i(can)g(b)q(e)g(used)h(to)f(index)f (directly)i(in)o(to)e(a)g(con)o(text)i(table,)g(a)o(v)o(oiding)c(the)k(need)g (for)e(a)h(searc)o(h)h(in)e(a)75 1055 y(hash)13 b(table.)18 b(Eac)o(h)13 b(pro)q(cess)i(has)e(a)f(unique)h(con)o(text)h(tag)e(for)h (incoming)d(comm)o(unication)g(within)i(this)h(con)o(text.)75 1104 y(Ho)o(w)o(ev)o(er,)h(di\013eren)o(t)h(con)o(text)g(tag)e(v)n(alues)h (ma)o(y)e(b)q(e)j(used)g(b)o(y)f(di\013eren)o(t)h(pro)q(cesses)h(for)e(the)h (same)e(con)o(text)i(\(this)75 1154 y(is)e(necessary)i(in)e(order)h(to)f (densely)h(p)q(opulate)f(the)h(con)o(text)f(tag)g(range\).)18 b(The)c(con)o(text)g(table)f(carries,)h(for)e(eac)o(h)75 1204 y(mem)o(b)q(er)g(of)i(the)g(con)o(text,)g(the)h(con)o(text)f(tag)g(to)f(b)q (e)i(used)g(when)f(sending)g(messages)g(to)g(it.)166 1256 y Fb(MPI)p 235 1256 V 15 
w(COPY)p 338 1256 V 15 w(CONTEXT)c Fc(-)h(A)h(new)g (tag)f(is)h(generated)h(b)o(y)e(eac)o(h)h(pro)q(cess)i(for)d(the)i(new)f(con) o(text,)g(and)f(broadcast)75 1306 y(to)18 b(all)g(other)h(mem)o(b)q(ers)f(of) g(the)h(con)o(text,)h(using)e(an)h(all-to-all)d(comm)o(uni)o(cation.)29 b(A)19 b(new)g(con)o(text)h(table)e(is)75 1356 y(created,)d(with)e(these)j (new)e(tags.)166 1408 y Fb(MPI)p 235 1408 V 15 w(SPLIT)p 360 1408 V 15 w(CONTEXT)j Fc(-)h(A)h(naiv)o(e)f(implemen)o(tatio)o(n)e(is)j(to)f (ha)o(v)o(e)g(an)h(all-to-all)d(comm)o(unicati)o(on)g(where)75 1457 y(eac)o(h)e(pro)q(cess)h(gathers)f(the)h Fb(\(key,)20 b(index,)h(new)p 881 1457 V 15 w(context)p 1050 1457 V 14 w(tag\))13 b Fc(triples)h(of)e(all)h(pro)q(cesses)j(in)d(the)h(old)f(group.)75 1507 y(Eac)o(h)g(pro)q(cess)h(then)f(creates)i(a)d(new)h(con)o(text)g(table)g (for)f(the)h(pro)q(cesses)i(that)e(pro)o(vided)f(the)i(same)d(k)o(ey)i(v)n (alue)f(as)75 1557 y(it.)166 1609 y Fb(MPI)p 235 1609 V 15 w(NEW)p 316 1609 V 15 w(CONTEXT)g Fc(-)i(Similar)d(to)j Fb(MPI)p 785 1609 V 15 w(COPY)p 888 1609 V 15 w(CONTEXT)p Fc(.)166 1661 y Fb(MPI)p 235 1661 V 15 w(RANK,)21 b(MPI)p 447 1661 V 15 w(SIZE)13 b Fc(-)g(require)i(lo)q(cal)e(access)j(to)e(the)g(con)o(text)h(ob)r(ject)75 1899 y Fl(1.3)70 b(Adv)l(anced)23 b(con)n(text)f(op)r(erations)75 2005 y Fm(Additional)e(functions)e(are)f(required)i(to)e(supp)q(ort)h(a)f (less)i(static)e(mo)q(del,)i(where)f(pro)q(cesses)g(ma)o(y)f(b)q(e)75 2062 y(created)i(or)f(deleted)i(during)g(execution.)31 b(This)20 b(requires)f(con)o(texts)f(to)h(gro)o(w)e(or)h(b)q(e)i(merged.)30 b(This)75 2118 y(section)12 b(outlines)g(p)q(ossible)h(mec)o(hanisms)f(for)e (suc)o(h)i(extension.)19 b(The)11 b(lac)o(k)h(of)f(in)o(terupt)g(driv)o(en)h (comm)o(u-)75 2175 y(nication)j(mec)o(hanisms)g(in)g(MPI)f(restricts)g(the)g (functionalit)o(y)h(of)f(suc)o(h)g(mec)o(hanisms:)20 b(A)14 b(new)h(pro)q(cess)75 2231 y(can)g(join)f(an)h(existing)g(con)o(text)f(only)h (if)g(all)g(pro)q(cesses)g(that)f(participate)h(in)g(the)f(old)h(con)o(text)f (execute)75 2287 y(a)h(call)h(to)f(add)g(this)h(new)f(pro)q(cess.)166 2346 y(A)j(prede\014ned)i(lo)q(cal)g(con)o(text)e Fe(MPI)p 792 2346 15 2 v 16 w(ME)g Fm(where)h(only)g(one)f(pro)q(cess)h(participates,) g(is)g(prede\014ned)75 2403 y(for)c(eac)o(h)g(pro)q(cess.)166 2497 y Fk(MPI)p 275 2497 16 2 v 18 w(SP)l(A)-6 b(WN\()17 b(new)o(con)o(text,) f(oldcon)o(text,)i(arra)o(y)p 1202 2497 V 18 w(of)p 1262 2497 V 19 w(en)o(vironmen)o(ts,)d(len\))166 2591 y Fm(Spa)o(wn)h(new)h(pro)q (cesses)g(and)g(create)f(a)g(new)h(con)o(text)f(that)g(includes)j(the)d(pro)q (cesses)h(in)h(the)e(old)75 2647 y(con)o(text,)d(follo)o(w)o(ed)h(b)o(y)g (the)g(newly)h(spa)o(wned)f(pro)q(cesses.)19 b(The)14 b(arra)o(y)f(pro)o (vides)h(en)o(vironmen)o(t)g(param-)75 2704 y(eters)j(for)f(eac)o(h)h(newly)h (generated)e(pro)q(cess.)26 b(The)17 b(form)f(of)g(the)h(arra)o(y)f(en)o (tries)h(are)g(implemen)o(tation)p eop %%Page: 6 7 bop 75 -100 a Fm(6)867 b Ff(CHAPTER)15 b(1.)35 b(CONTEXTS)15 b({)g(PR)o(OPOSAL)i(I)75 45 y Fm(dep)q(enden)o(t)i({)e(they)g(ma)o(y)f (include)k(information)d(on)h(the)f(pro)q(cessor)g(that)f(is)i(to)f(run)g (the)h(new)f(tasks,)75 102 y Fe(argv,)23 b(argc)15 b Fm(argumen)o(ts,)f(etc.) 
75 214 y Fk(OUT)k(new)o(con)o(text)k Fm(handle)17 b(to)d(new)i(con)o(text)75 314 y Fk(OUT)i(oldcon)o(text)24 b Fm(handle)16 b(to)f(old)h(con)o(text)75 413 y Fk(IN)h(arra)o(y)p 277 413 16 2 v 18 w(of)p 337 413 V 19 w(en)o(vironmen)o(ts)k Fm(list)16 b(of)f(en)o(vironmen)o(ts)g(for)g(new)g (pro)q(cesses)75 512 y Fk(IN)i(len)23 b Fm(n)o(um)o(b)q(er)16 b(of)f(new)g(pro)q(cesses)h(\(in)o(teger\))166 625 y(As)23 b(particular)g(cases,)h Fe(MPI)p 669 625 15 2 v 17 w(SPAWN\()f(newcontext,)f (MPI)p 1211 625 V 17 w(ME,...\))42 b Fm(allo)o(ws)23 b(one)g(pro)q(cess)g(to) 75 681 y(spa)o(wn)f(new)g(pro)q(cesses,)j(and)d(create)g(a)g(new)h(con)o (text)e(that)h(consists)g(of)g(the)g(spa)o(wning)h(pro)q(cess,)75 737 y(follo)o(w)o(ed)13 b(b)o(y)g(the)g(spa)o(wned)g(pro)q(cesses;)h Fe(MPI)p 849 737 V 17 w(SPAWN\()23 b(newcontext,)f(MPI)p 1391 737 V 17 w(ALL,...\))c Fm(allo)o(ws)13 b(to)g(add)75 794 y(to)21 b(create)g(a)g(new)h(\\univ)o(ersal")g(con)o(text)e(that)h(con)o(tains)h(all) g(previous)g(pro)q(cesses)g(and)g(the)f(newly)75 850 y(spa)o(wned)15 b(pro)q(cesses.)166 944 y Fk(MPI)p 275 944 16 2 v 18 w(BCAST)p 473 944 V 19 w(CONTEXT\()d(new)o(con)o(text,)g(con)o(text,)g(ro)q(ot,)h(arra) o(y)p 1514 944 V 18 w(of)p 1574 944 V 19 w(ranks,)e(coun)o(t\))166 1093 y Fm(Beha)o(v)o(es)i(lik)o(e)h Fe(MPI)p 495 1093 15 2 v 16 w(NEW)p 583 1093 V 17 w(CONTEXT)p Fm(,)e(except)h(that)f(only)h(one)g (pro)q(cess)g(in)h(the)f(old)g(con)o(text,)f(namely)75 1150 y(the)21 b(pro)q(cess)g(with)h(rank)e Fe(root)p Fm(,)i(has)e(to)h(compute)g (the)g(list)h(of)e(the)h(ranks)g(of)f(the)h(pro)q(cesses)h(that)75 1206 y(participate)c(in)g(the)f(new)h(con)o(text.)25 b(The)17 b(new)g(con)o(text)g(do)q(es)h(not)e(necessarily)j(include)g(the)f(pro)q (cess)75 1263 y Fe(root)p Fm(.)h(The)c(call)h(is)f(blo)q(c)o(king,)h(and)f (it)g(has)f(to)g(b)q(e)i(executed)f(b)o(y)g(all)g(pro)q(cesses)h(in)f(the)g (list)g(and)g(b)o(y)g(the)75 1319 y(ro)q(ot)f(pro)q(cess.)75 1432 y Fk(OUT)k(new)o(con)o(text)k Fm(handle)15 b(to)d(newly)h(created)g(con) o(text)f(at)h(calling)h(pro)q(cess.)19 b(This)14 b(handle)g(should)189 1488 y(not)g(b)q(e)i(asso)q(ciated)g(with)f(an)g(ob)s(ject)g(b)q(efore)h(the) f(call.)75 1587 y Fk(IN)i(con)o(text)23 b Fm(handle)17 b(to)d(old)i(con)o (text)75 1687 y Fk(IN)h(ro)q(ot)23 b Fm(index)17 b(of)e(new)g(con)o(text)g (creator)f(in)i(old)g(con)o(text)75 1786 y Fk(IN)h(arra)o(y)p 277 1786 16 2 v 18 w(of)p 337 1786 V 19 w(ranks)22 b Fm(ranks)17 b(in)h(old)g(con)o(texts)e(of)h(mem)o(b)q(ers)h(of)f(new)g(con)o(text;)g (signi\014can)o(t)i(only)f(at)189 1842 y(ro)q(ot)75 1941 y Fk(IN)f(coun)o(t)23 b Fm(size)16 b(of)f(new)h(con)o(text;)e(signi\014can)o(t) i(only)g(at)e(ro)q(ot)166 2130 y Fd(Implemen)o(tati)o(on)e(note:)166 2181 y Fc(The)19 b(call)e(can)i(b)q(e)g(implemen)o(ted)d(as)i(a)g(broadcast)h (from)e(the)i(ro)q(ot)f(to)g(all)g(new)g(con)o(text)h(participan)o(ts,)75 2231 y(follo)o(w)o(ed)12 b(b)o(y)h(the)h(co)q(de)g(of)f Fb(MPI)p 574 2231 14 2 v 15 w(NEW)p 655 2231 V 15 w(CONTEXT)f Fc(\(the)i(broadcast)g (is)f(executed)i(using)e(p)q(oin)o(t-to-p)q(oin)o(t)f(comm)o(uni-)75 2281 y(cation\).)166 2421 y Fm(Example:)34 b(Supp)q(ose)24 b(one)e(has)g(a)g(serv)o(er)g(con)o(text)g Fe(SERVER)p Fm(,)f(and)h(a)g (clien)o(t)i(con)o(text)d Fe(CLIENT)p Fm(.)75 2478 y(Pro)q(cesses)16 b(in)i(either)f(con)o(texts)e(ma)o(y)h(not)g(kno)o(w)g(ab)q(out)g(pro)q (cesses)h(in)g(the)g(other)f(con)o(text,)f(but)i(they)75 2534 y(all)h(b)q(elong)g(to)e(con)o(text)g Fe(MPI)p 582 2534 15 2 v 17 w(ALL)p Fm(,)g(and)h(all)g(kno)o(w)g(ab)q(out)f(pro)q(cess)h Fe(root)g 
Fm(\(the)f(\\nameserv)o(er"\).)24 b(The)75 2591 y(ro)q(ot)17 b(pro)q(cess)g(kno)o(ws)g(ab)q(out)h(eac)o(h)f(of)g(the)h(t)o(w)o(o)e(con)o (texts.)26 b(A)17 b(call)i(to)e Fe(MPI)p 1408 2591 V 17 w(BCAST)p 1545 2591 V 16 w(CONTEXT)g Fm(can)g(b)q(e)75 2647 y(used)e(to)f(create)g(a)g (new)h(con)o(text)e(that)h(is)h(the)g(union)g(of)f(the)g Fe(CLIENT)g Fm(and)h Fe(SERVER)e Fm(con)o(texts,)h(so)g(as)g(to)75 2704 y(allo)o(w)h(these)h(to)e(comm)o(unicate.)p eop %%Page: 7 8 bop 75 -100 a Ff(1.4.)29 b(EXAMPLES)15 b({)g(PR)o(OBLEMS)h(IN)g(RICK'S)g (LIST)756 b Fm(7)166 45 y Fd(Discussion:)166 96 y Fc(P)o(ossible)12 b(extension:)18 b(the)13 b(ro)q(ot)f(has)g(t)o(w)o(o)g(lists;)g(the)h(list)e (of)h(pro)q(cesses)j(that)d(call)f(to)h(join)f(the)i(new)g(con)o(text,)75 146 y(and)j(the)g(list)f(of)g(pro)q(cesses)k(that)c(are)i(to)e(join)o(t)g (the)h(next)h(con)o(text.)24 b(The)16 b(remianing)e(pro)q(cesses)k(are)e (returned)75 196 y(a)g(v)n(alue)f(indicating)g(they)i(w)o(ere)g(left)e(out.) 25 b(This)16 b(could)g(b)q(e)g(useful)h(for)e(master-sla)o(v)o(e)g(co)q(de.) 26 b(The)16 b(ro)q(ot)g(is)g(the)75 246 y(master)f(that)h(creates)i(new)e (con)o(texts)h(for)e(the)h(parallel)f(executuin)i(of)e(new)h(tasks.)24 b(All)15 b(disp)q(onible)g(pro)q(cesses)75 296 y(execute)h(the)e(call,)f(but) h(only)f(a)h(subset)h(is)f(allo)q(cated.)166 472 y Fk(MPI)p 275 472 16 2 v 18 w(MER)o(GE)p 490 472 V 19 w(CONTEXT\()k(new)o(con)o(text,)f (con)o(text1,)g(con)o(text2,)g(ro)q(ot\))166 565 y Fm(This)e(function)g(allo) o(ws)g(to)e(merge)h(t)o(w)o(o)f(con)o(texts)h(ev)o(en)h(when)g(there)f(is)h (no)f(con)o(text)g(that)g(encom-)75 621 y(passes)k(the)g(t)o(w)o(o)f(merged)h (con)o(texts;)g(it)g(is)h(only)f(required)h(that)e(one)i(pro)q(cess,)f (namely)h Fe(root)p Fm(,)e(b)q(e)i(in)75 678 y(b)q(oth)13 b(con)o(texts)e(to) h(b)q(e)h(merged.)19 b(The)13 b(call)h(is)f(blo)q(c)o(king)h(and)e(m)o(ust)g (b)q(e)h(executed)h(b)o(y)e(all)i(pro)q(cesses)f(that)75 734 y(participate)k(in)f(the)g(merged)g(con)o(text.)21 b(Eac)o(h)16 b(pro)q(cess)g(pro)o(vides)g(its)g(old)g(con)o(text,)f(and)h(the)g(index)h (of)75 791 y(pro)q(cess)c(ro)q(ot)f(within)i(this)g(old)f(con)o(text)f(\(the) h(index)h(ma)o(y)e(b)q(e)i(di\013eren)o(t)f(in)g(the)g(t)o(w)o(o)f(con)o (texts)g(that)g(are)75 847 y(merged\).)19 b(The)14 b(ro)q(ot)g(pro)o(vides)g (handles)h(to)e(b)q(oth)i(con)o(texts)e(that)g(are)h(to)f(b)q(e)i(merged.)k (The)c(pro)q(cesses)75 904 y(in)h Fe(context1)e Fm(precede)i(the)g(pro)q (cesses)f(in)h Fe(context2)e Fm(in)i(the)g(ranking)f(of)g(the)g(new)h(con)o (text.)75 1017 y Fk(OUT)i(new)o(con)o(text)k Fm(handle)15 b(to)d(newly)h (created)g(con)o(text)f(at)h(calling)h(pro)q(cess.)19 b(This)14 b(handle)g(should)189 1073 y(not)g(b)q(e)i(asso)q(ciated)g(with)f(an)g(ob)s (ject)g(b)q(efore)h(the)f(call.)75 1173 y Fk(IN)i(con)o(text1)23 b Fm(handle)17 b(to)d(old)i(con)o(text)75 1272 y Fk(IN)h(con)o(text2)23 b Fm(handle)17 b(to)d(second)i(con)o(text;)e(signi\014can)o(t)i(only)g(at)f (ro)q(ot)75 1372 y Fk(IN)i(ro)q(ot)23 b Fm(index)17 b(of)e(ro)q(ot)f(in)i (old)g(con)o(text)166 1485 y(Example:)23 b(a)17 b(serv)o(er)f(\(the)h(\\ro)q (ot"\))e(and)i(a)f(set)h(of)f(clien)o(t)i(pro)q(cesses)f(participate)h(in)f Fe(context1)p Fm(.)75 1541 y(The)j(serv)o(er)f(spa)o(wns)g(new)h(\\help)q (ers",)h(or)e(has)h(other)f(a)o(v)m(ailable)i(serv)o(er)e(pro)q(cesses)h (join)g(it.)33 b(A)20 b(call)75 1598 y(to)e Fe(MPI)p 209 1598 15 2 v 16 w(MERGE)p 345 1598 V 17 w(CONTEXT)f Fm(can)i(b)q(e)g(used)g(to)e (create)h(a)h(new)f(con)o(text)g(that)f(mak)o(e)h(these)h(new)f(serv)o(ers)75 1654 
y(a)o(v)m(ailable)f(to)d(the)i(parallel)g(clinet.)166 1788 y Fd(Discussion:)166 1839 y Fc(Here,)h(to)q(o,)f(the)h(function)e(can)h (b)q(e)h(extended)h(to)d(allo)o(w)g(the)h(creation)h(of)e(a)h(new)g(con)o (text)h(where)g(only)e(a)75 1889 y(subset)g(of)f(the)g(pro)q(cesses)j(in)c (the)i(old)e(group)g(participate)166 1940 y(If)k Fb(MPI)p 280 1940 14 2 v 15 w(MERGE)p 405 1940 V 14 w(CONTEXT)f Fc(is)h(a)o(v)n(ailable)e (then)j(the)g(functionalit)o(y)d(of)i Fb(MPI)p 1344 1940 V 15 w(SPAWN)f Fc(can)i(b)q(e)f(reduced:)27 b(It)17 b(is)75 1990 y(su\013cien)o(t)e(to)e(create)i(a)f(new)g(con)o(text)g(that)g(consists)h(of) e(a)g(spa)o(wning)g(pro)q(cess)j(\(rather)e(than)g(sp)o(wning)f(con)o(text\)) 75 2040 y(and)h(the)g(spa)o(wned)h(pro)q(cesses.)20 b(This)14 b(new)g(con)o(text)h(can)f(then)h(b)q(e)f(merged)g(with)f(a)h(preexisting)g (con)o(text.)166 2256 y Fd(Implemen)o(tati)o(on)e(note:)166 2308 y Fc(Implemen)o(tation)f(is)i(similar)f(to)i(the)g(implemen)o(tation)c (of)k Fb(MPI)p 1179 2308 V 15 w(BCAST)p 1304 2308 V 14 w(CONTEXT)75 2542 y Fl(1.4)70 b(Examples)22 b({)h(Problems)f(in)g(Ric)n(k's)g(list)75 2646 y Fm(Only)16 b(the)g(basic)g(con)o(text)e(functions)i(are)f(used)h(in)g (these)f(examples.)166 2704 y Fd(W)l(arning)p Fc(:)h(late)e(nigh)o(t)f(w)o (ork)p eop %%Page: 8 9 bop 75 -100 a Fm(8)867 b Ff(CHAPTER)15 b(1.)35 b(CONTEXTS)15 b({)g(PR)o(OPOSAL)i(I)75 45 y Fk(F)l(unction)h(that)h(implemen)o(ts)e(sh)o (u\017e)f(p)q(erm)o(utation)i(in)g(group/con)o(text)75 204 y Fe(void)23 b(function)g(mpi_shuffle\(inbuf,)e(outbuf,)i(datatype,)g(count,) g(context,)g(size\))75 317 y(...)75 430 y({)75 486 y (mpi_copy_context\(newcontex)o(t,)e(context\);)75 543 y(mpi_rank\(context,)h (i\);)75 599 y(dest)h(=)h(shuffle\(i,)f(size\);)75 656 y(source)g(=)h (unshuffle\(i,)e(size\);)75 712 y(mpi_irecvc)g(\(handle,)h(inbuf,)g(count,)g (datatype,)g(source,)g(0,)g(nexcontext\);)75 769 y(mpi_sendc)g(\(outbuf,)f (count,)h(datatype,)g(dest,)g(0,)h(newcontext\);)75 825 y(mpi_wait\(handle,)e (ret_stat\);)75 882 y(mpi_free\(newcontext\);)75 938 y(})166 1095 y Fm(This)15 b(w)o(orks)f(OK,)h(if)g(pro)q(cesses)g(are)g (single-threaded.)21 b(If)15 b(they)g(are)g(m)o(ultithreaded,)g(one)g(has)g (to)75 1152 y(mak)o(e)f(sure)g(that)g(other)f(concurren)o(t)i(threads)f(do)g (not)g(create)g(new)g(con)o(texts.)19 b(New)14 b(con)o(text)g(creation)75 1208 y(can)h(b)q(e)h(skipp)q(ed)h(if)f(there)f(are)g(no)g(p)q(ending)i (messages)e(in)h(the)f(con)o(text)g(when)g(mpi)p 1539 1208 14 2 v 17 w(sh)o(u\017e)h(is)g(called.)75 1338 y Fk(Use)25 b(of)h(con)o(texts)g(for)f(library)h(dev)o(elopmen)o(t)44 b Fm(The)23 b(co)q(de)g(of)f(a)g(parallel)i(library)f(function)75 1395 y(should)18 b(b)q(e)f(written)g(so)f(that)g(eac)o(h)h(message)f(that)g (is)h(pro)q(duced)h(b)o(y)e(some)h(pro)q(cess)g(is)g(consumed)g(b)o(y)75 1451 y(another)d(pro)q(cess)h(during)h(the)f(execution)g(of)g(the)f(library)i (co)q(de)f({)f(i.e.)21 b(the)14 b(library)i(should)f(\\clean)h(its)75 1508 y(garbage".)166 1566 y(A)d(parallel)j(library)e(is)g(in)o(v)o(ok)o(ed)g (collectiv)o(ely)i(b)o(y)d(a)g(group)g(of)h(pro)q(cesses.)19 b(An)o(y)14 b(in)o(v)o(ok)m(ation)g(of)f(the)75 1622 y(library)k(o)q(ccurs)g (within)g(a)f(con)o(text)g(de\014ned)i(b)o(y)e(the)h(user.)23 b(All)18 b(pro)q(cesses)f(that)e(participate)i(in)h(suc)o(h)75 1679 y(con)o(text)g(in)o(v)o(ok)o(e)h(the)g(parallel)h(library)l(,)h(and)e (matc)o(hing)g(in)o(v)o(ok)m(ations)g(o)q(ccur)g(at)f(all)i(of)e(them)h(in)h (the)75 1735 y(same)15 b(order.)166 1793 y(The)10 b(library)h(has)f(to)g 
(generate)g(a)f(new)i(con)o(text,)f(using)h(the)f(function)h Fe(MPI)p 1427 1793 15 2 v 17 w(COPY)p 1540 1793 V 17 w(CONTEXT\(newcontext,) 75 1850 y(context\))p Fm(,)16 b(for)h(eac)o(h)g(con)o(text)g Fe(context)f Fm(where)h(from)g(the)g(library)h(ma)o(y)f(b)q(e)h(in)o(v)o(ok)o (ed.)26 b(The)17 b(library)75 1906 y(will)g(then)f(use)g(the)g(con)o(text)f Fe(newcontext)f Fm(for)h(its)h(comm)o(unication)h(whenev)o(er)f(it)g(is)g(in) o(v)o(ok)o(ed)g(within)75 1963 y(the)h(con)o(text)f Fe(context)p Fm(.)24 b(If)17 b(the)g(library)g(uses)h(collectiv)o(e)g(comm)o(unication)g (within)g(dynamically)g(de-)75 2019 y(\014ned)h(subgroups,)f(then)g(these)g (subgroups)g(will)i(b)q(e)f(created)e(b)o(y)h(splitting)i(the)d(group)h(of)g (pro)q(cesses)75 2076 y(de\014ned)f(b)o(y)e Fe(newcontext)p Fm(.)166 2134 y(In)23 b(the)g(general)g(case,)h(the)e(con)o(text)g(of)g(the)h (in)o(v)o(ok)m(ation)g(has)f(to)g(b)q(e)h(passed)g(as)f(an)h(explicit)75 2190 y(parameter)14 b(when)i(the)f(library)h(is)f(in)o(v)o(ok)o(ed.)21 b(The)15 b(library)h(co)q(de)f(will)i(generate)e(a)g(new)g(con)o(text)f(when) 75 2247 y(it)h(starts)f(executing)j(and)e(will)i(free)e(this)h(con)o(text)e (when)i(it)g(terminates.)166 2305 y(There)f(are)g(sev)o(eral)h(sp)q(ecial)h (cases)e(where)g(dynamic)h(con)o(text)f(creation)g(can)h(b)q(e)f(a)o(v)o (oided.)166 2363 y(Case)10 b(1:)18 b(The)11 b(library)g(is)h(alw)o(a)o(ys)e (in)o(v)o(ok)o(ed)h(within)h(the)e(con)o(text)h Fe(MPI)p 1343 2363 V 16 w(ALL)p Fm(.)f(Then)i(the)f(new)g(con)o(text)75 2420 y(can)18 b(b)q(e)g(\\cac)o(hed":)25 b(A)17 b(library)i(collectiv)o(e)g (initialization)i(routine)d(should)g(b)q(e)h(in)o(v)o(ok)o(ed)f(b)o(y)f(the)h (user)75 2476 y(at)c(the)g(start)f(of)h(the)g(program;)f(this)i(routine)f (creates)g(a)g(cop)o(y)g(of)g(the)g(con)o(text)g Fe(MPI)p 1537 2476 V 17 w(ALL)f Fm(and)i(stores)e(a)75 2532 y(handle)k(to)e(it)h(in)g(a)g (C)f(static)h(v)m(ariable)h(\(F)l(ortran)d(77)h(COMMON\).)g(The)h(con)o(text) f Fe(MPI)p 1602 2532 V 17 w(ALL)g Fm(need)h(not)75 2589 y(b)q(e)g(passed)f (as)g(a)g(parameter)f(when)i(the)f(library)h(is)g(in)o(v)o(ok)o(ed.)166 2647 y(Case)g(2:)22 b(The)17 b(library)g(is)g(alw)o(a)o(ys)e(in)o(v)o(ok)o (ed)i(in)g(a)f(unique)i Fh(libr)n(ary)f(c)n(al)r(ling)f(c)n(ontext)g Fm(on)h(eac)o(h)f(pro-)75 2704 y(cess.)29 b(Then)18 b(a)g(library)h (initialization)i(routine)e(should)g(b)q(e)g(in)o(v)o(ok)o(ed)f(b)o(y)g(the)g (user)g(on)h(eac)o(h)f(pro)q(cess)p eop %%Page: 9 10 bop 75 -100 a Ff(1.4.)34 b(EXAMPLES)16 b({)e(PR)o(OBLEMS)j(IN)e(RICK'S)h (LIST)751 b Fm(9)75 45 y(where)16 b(the)g(library)g(ma)o(y)f(b)q(e)h(in)o(v)o (ok)o(ed;)g(the)f(initialization)k(routine)d(is)g(called)h(after)e(the)h (user)g(de\014ned)75 102 y(the)i(library)g(calling)i(con)o(texts)d(and)h(b)q (efore)g(the)g(library)g(is)g(in)o(v)o(ok)o(ed.)28 b(Eac)o(h)18 b(initialization)i(call)f(is)f(a)75 158 y(collectiv)o(e)d(call)f(within)g (the)f(library)h(calling)g(con)o(text.)19 b(A)13 b(cop)o(y)g(of)f(the)h (calling)i(con)o(text)d(is)h(created)g(\(b)o(y)75 214 y Fe(MPI)p 150 214 15 2 v 17 w(COPY)p 263 214 V 16 w(CONTEXT)p Fm(\))i(and)i(stored)e (in)j(a)d(static)h(library)h(v)m(ariable.)25 b(Subsequen)o(t)17 b(calls)g(to)f(the)g(library)75 271 y(need)g(not)f(pass)g(the)g(calling)i (con)o(text)e(as)g(a)f(parameter.)166 327 y(Case)19 b(3:)27 b(The)20 b(library)g(ma)o(y)e(b)q(e)i(in)o(v)o(ok)o(ed)f(on)h(eac)o(h)f(pro)q (cess)g(within)i(a)d(\014xed)i(\(small\))g(n)o(um)o(b)q(er)75 384 y(of)f(library)h(calling)h(con)o(texts.)32 b(Copies)20 
Use of contexts for a host node computation model

Let's assume two contexts:

  * MPI_ALL with host being node zero
  * MPI_NODE that does not include the host

We assume the load procedure ensures that host is process zero in ALL.

host/node code                        translation
--------------                        -------------
I_am_the_host()                       (mpi_rank(MPI_ALL, rank);
                                       rank==0;)

form node group                       mpi_rank(MPI_ALL, task)
                                      mpi_split_context(MPI_NODE, MPI_ALL,
                                          (task==0),0);

Broadcast from host to nodes          mpi_bcast(MPI_ALL,0,...)

regular communications btwn           mpi_send(buffer,len,dest,tag,MPI_NODE)
nodes                                 mpi_recv(buffer,len,source,tag,MPI_NODE)
                                      /* source, dest are ranging from 0 to #nodes-1 */

sum values from each node             mpi_reduce(inbuf,outbuf,len,MPI_ALL,
at host                                   0,MPI_ISUM)
                                      /* host (node zero) calls mpi_reduce with inbuf = 0 */

From owner-mpi-context@CS.UTK.EDU Wed Jun 9 17:32:25 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA20612; Wed, 9 Jun 93 17:32:25 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02043; Wed, 9 Jun 93 17:32:15 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 9 Jun 1993 17:32:14 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02035; Wed, 9 Jun 93 17:32:13 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA16583; Wed, 9 Jun 93 16:31:59 CDT Date: Wed, 9 Jun 93 16:31:59 CDT From: Tony Skjellum Message-Id: <9306092131.AA16583@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: June 22 context meeting Gentlemen, I will arrive on June 21, and hope to see as many of the context subcommittee members for pre-meeting discussion on June 22, at the Bristol Suites. I am writing this so everyone on the sub-committee will know that we are trying to roll up our sleeves again, this time, pre-meeting, to be sure we're on track by the time the meeting starts.
-Tony From owner-mpi-context@CS.UTK.EDU Thu Jun 10 06:48:29 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA21785; Thu, 10 Jun 93 06:48:29 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25168; Thu, 10 Jun 93 06:48:33 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 06:48:32 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25160; Thu, 10 Jun 93 06:48:29 -0400 Date: Thu, 10 Jun 93 11:48:07 BST Message-Id: <21723.9306101048@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: June 22 context meeting To: Tony Skjellum , mpi-context@cs.utk.edu In-Reply-To: Tony Skjellum's message of Wed, 9 Jun 93 16:31:59 CDT Reply-To: lyndon@epcc.ed.ac.uk I will arrive evening of June 21. > Gentlemen, I will arrive on June 21, and hope to see as many of the context > subcommittee members for pre-meeting discussion on June 22, at the Bristol Suites. > I am writing this so everyone on the sub-committee will know that we are trying > to roll up our sleeves again, this time, pre-meeting, to be sure we're on track > by the time the meeting starts. > -Tony > /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Jun 10 08:38:38 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA22177; Thu, 10 Jun 93 08:38:38 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01083; Thu, 10 Jun 93 08:35:56 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 08:35:55 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01075; Thu, 10 Jun 93 08:35:54 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA18321; Thu, 10 Jun 1993 08:35:55 -0400 Date: Thu, 10 Jun 1993 08:35:55 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306101235.AA18321@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Next MPI meeting Unfortunately I won't be at the next MPI meeting, which is why I'm keen to understand the current context ideas and proposal(s) now. I've not had much feedback on the examples I sent out a couple of days ago. I'm particularly keen to get a correct version of the convolution example. I think examples are important if we are to make others understand a fairly complex proposal. They also help us understand what is really needed. Here's a question that I've asked myself, and which I'm sure others will ask. Are naked contexts really necessary? Communicators provide the scope of a communication operation. Why would an application writer use mpi_alloc_context and mpi_make_comm, rather than mpi_safemake_comm? 
David From owner-mpi-context@CS.UTK.EDU Thu Jun 10 11:02:58 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA22663; Thu, 10 Jun 93 11:02:58 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10074; Thu, 10 Jun 93 11:03:09 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 11:03:08 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09781; Thu, 10 Jun 93 11:00:02 -0400 Date: Thu, 10 Jun 93 15:59:53 BST Message-Id: <22013.9306101459@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Next MPI Meeting (etc) To: walker@rios2.epm.ornl.gov (David Walker), lyndon@epcc.ed.ac.uk In-Reply-To: David Walker's message of Wed, 9 Jun 1993 08:50:17 -0400 Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Hi David Seems to be just me and thee on the contexts and examples front at the moment. I guess people are just too busy. > A more complicated example based on the convolution example could > use overlapping groups. Assume for example that the parallel FFT > requires the number of processes to be a power of two, and we have > a total of 12 processes. Then we could put 0,1,2,..,7 in one group > and 4,5,6,...,11 in the other group. I'm not sure how much this > complicates things. I was thinking, in terms of this example, of something like having arbitrarily different distributions and numbers of nodes in each of the two groups. This would introduce an arbitrary communication pattern between the two groups making sender choice more important to the example. I was really thinking of abritrarily different computations in two or more groups. Trouble is, its real hard to write a complete yet sshort example - conceptually simple is easy, small simple code is not so easy. > Sorry, in my corrected version there was still an extraneous > mpi_safemake_comm inside the first conditional branch. This has > been corrected in the revised version below. Thanks. > The arguments to > my version of mpi_safemake_comm are: > > mpi_safemake_comm (local_group, remote_group, communicator) > IN local_group > IN remote_group > OUT communicator Well of course I support this kind of communicator constructor, as my previous messages make obvious. I guess the point is that this is *not* the kind of constructor in the working document from Marc Snir - this is something that Marc identified as left out for now. Further this is *not* the sort of thing that Tony/Mark were proposing to include in the working document - from what I can gather. Would you like me to type in explanation of what I think is the kind of thing that they are suggesting? Would that help anyone? > In the sends the rank of the destination process is relative to the > remote group, i.e., the receiving group. In the receives the rank > of the source process is relative to the remote group, i.e., the > sending group. This seems the natural way to do things. Yes this is the natural way to do communication between different groupings, of course, in my opinion. > As you > point out, the source rank in the receives can be replace by > MPI_DONTCARE. Yes, in the example given, because the communication is a particular pattern - i.e. one process in A group sends to one process in B group and vica versa. > It's not obvious to me why a C2 receive can't > specify a particular source in the remote group. Please explain > this to me. 
The comment I made was a result of confusing your example with my understanding of the working document and what Tony/Mark are suggesting. In particular the way in which your example assumes to use publish/subscribe is quite incompatible with what I think is meant my Tony/Mark. Are you aware of this? I can send an explanation of what I believe thes guys intend, if that will help. (Frankly, it will mean that you can write your example program in a half-sensible way, but more complicated examples like those I suggested above cannot be written in any sensible way.) I should make a general point about your example. When building the communicator for intercommunication you are using the init_group and the fft_group. Now the init_group is a superset of the fft_group, in fact it encloses the two different fft_group's. This is a rather strange thing to be doing since it means that calculation of the destination rank in the other group refers to the enclosing group. It is *much* more natural and expressive to think of the destination rank in the destination group, so that a better thing to do is to make a communicator with the local group being the fft_group and the remote group being the other (remote) fft_group. Would you like me to adapt your example program to this way of working? It will be easy for me to do - of course I will be inventing definitions of some things like publish and subscribe. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Jun 10 12:02:54 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA22952; Thu, 10 Jun 93 12:02:54 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14511; Thu, 10 Jun 93 12:03:09 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 12:03:08 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14503; Thu, 10 Jun 93 12:03:04 -0400 Date: Thu, 10 Jun 93 17:03:00 BST Message-Id: <22074.9306101603@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Next MPI meeting To: walker@rios2.epm.ornl.gov (David Walker), mpi-context@cs.utk.edu In-Reply-To: David Walker's message of Thu, 10 Jun 1993 08:35:55 -0400 Reply-To: lyndon@epcc.ed.ac.uk Dear David > I think examples > are important if we are to make others understand a fairly complex proposal. I agree. I fear that we may need complex examples, and people will have time to study them. > They also help us understand what is really needed. > I agree again, and have the same fear - i.e. that whereas the examples can be conceptually simple the example code cannot be. > Here's a question that I've asked myself, and which I'm sure others will > ask. Are naked contexts really necessary? Communicators provide the scope > of a communication operation. Why would an application writer use > mpi_alloc_context and mpi_make_comm, rather than mpi_safemake_comm? > Naked groups allow the user to perform group manipulations on groups. This is conceptually cleaner than doing it on communicators, in my humble opinion. The context-free :-) communicator constructors synchronise the processes invloved in the communicator construction. The context-oriented constructors do not. 
Naked contexts allow preallocation of contexts and avoidance of synchronisation for communicator construction. I think that a modification of your example program in line with a comment about destination rank addressing will reveal another use for naked context. I'll have to redefine publish/subscribe a little, but I dont see this as a problem as the example redefines them relative to my understanding of the intentions of Tony and Mark (Sears). I'll send that modification after this message. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ >From owner-mpi-context@CS.UTK.EDU Tue May 11 21:56:56 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA22309; Tue, 11 May 93 21:56:54 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA01509; Tue, 11 May 93 21:58:03 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15824; Tue, 11 May 93 22:52:51 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 11 May 1993 22:52:50 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from cs.sandia.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15806; Tue, 11 May 93 22:52:47 -0400 Received: from anasazi.cs.sandia.gov.noname ([132.175.76.10]) by cs.sandia.gov (4.1/SMI-4.1) id AA26565; Tue, 11 May 93 20:52:45 MDT Received: by anasazi.cs.sandia.gov.noname (4.1/SMI-4.1) id AA13505; Tue, 11 May 93 20:52:44 MDT Date: Tue, 11 May 93 20:52:44 MDT From: mpsears@cs.sandia.gov (Mark P. Sears) Message-Id: <9305120252.AA13505@anasazi.cs.sandia.gov.noname> To: mpi-context@cs.utk.edu Subject: Proposal VIII -- response to Riks probs. Status: R The following is my response to Rik Littlefields problem sets. I will bring copies to the meeting. Mark Sears Sandia National Laboratories Comments on Rik Littlefields problems: 1) (describe a circular shift routine) Here I give a contiguous byte version only. Extensions to other data types and to non contigous buffers is straightforward. Error handling is left out, and I use an aggressive style of message passing (send before receive). Note that tag must be provided in addition to context. // MPI_CSHIFT_B -- shift a contiguous buffer around a group in circular order // // limitations: // uses aggressive style of message passing. // no check for len being same on all processes. // // assumptions: // MPI_MYNODE() returns process id // MOD(a,b) = a modulo b // SEND(buf, len, dest, tag, context) sends buffer to destination. // RECV(buf, len, src, tag, context) receives buffer from source. // void MPI_CSHIFT_B ( char *in, char *out, int len, int shift, MPIGroup group, int context, int tag) { int myrank; // rank of this process in the group int left, right; // process ids for sending and receiving // exit routine if this process is not a group member: if(!MPI_groupiselement(group, MPI_MYNODE())) return; // get my rank in group and compute left and right neighbors myrank = MPI_grouprank(group, MPI_MYNODE()); left = MPI_groupelement(group, MOD((myrank - shift), MPI_grouporder(group))); right = MPI_groupelement(group, MOD((myrank + shift), MPI_grouporder(group))); // now execute the shift: SEND(in, len, right, tag, context); RECV(out, len, left, tag, context); } 2) (bflyexchange) // MPI_BFLY_B -- butterfly exchange // // limitations: // uses aggressive style of message passing. 
// no check for len being same on all processes. // // assumptions: // MPI_MYNODE() returns process id // MOD(a,b) = a modulo b // SEND(buf, len, dest, tag, context) sends buffer to destination. // RECV(buf, len, src, tag, context) receives buffer from source. // void MPI_BFLY_B ( char *in, char *out, int len, int shift, MPIGroup group, int context, int tag) { int myrank; // rank of this process in the group int partner; // process id for exchanging int j; // exit routine if this process is not a group member: if(!MPI_groupiselement(group, MPI_MYNODE())) return; // get my rank in group and compute partner for exchange: myrank = MPI_grouprank(group, MPI_MYNODE()); j = (myrank % (shift<<1))/shift; partner = MPI_groupelement(group, myrank + shift(1-2*j)); // (the problem statement was a little unclear about what to do if // the exchange partner was not in the group. Here I just assume // that it is in the group.) // now execute the exchange: SEND(in, len, partner, tag, context); RECV(out, len, partner, tag, context); } Discussion: Safety: This routine (and other similarly implemented global operations) are safe if no process in the group has a send or receive using the same context and tag. The routine is unsafe if a process in the group issues another send outside the routine to its partner in the dual exchange (using the same tag), or if a process posts a wildcard receive prior to calling this routine which would match the incoming message. Thus, the safety of this routine is the same as the safety associated with normal point to point message passing. Performance: Performance of this routine acheives the limit of one send/receive per process. The computational overhead of the group operations will depend on the type of group used, but with use of code inlining and for simple kinds of groups this overhead will be moderate. Note that group operations require no communications, since a group is defined algorithmically. 2) Guidelines for library developers. There are two possible models for developing library or third party software which uses MPI, but which is in turn intended for use by other independently developed software. The first kind uses context and tags provided to it by the calling software, and is typical of communication services. An example of such software that would not be provided by MPI would be a matrix transpose. Writing software of this type is similar to current practice, but the author must avoid using tags and context for which permission has not been granted. One possibility for the author is to simply specify a context in the interface and make the explicitly declared assumption that tags within that context are usable in any way within the library module. An author of multiple library modules that need to work together can develop an internal scheme for tag management which allows the use of minimum numbers of contexts. The second kind of library software establishes and manages its own context and tag spaces. This kind of software is typical of higher level operations which provide computational functions (which implicitly use communications). There are a number of ways for a library to do this management. For example, to begin with a library module might make use of static contexts and assume it has only one 'instantiation' within a program. More advanced libraries might bind context to data objects that are being manipulated and allow multiple simultaneous use of the library code. 
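For illustration only (this is not taken from the note itself), the second kind of library might bind a context to each object it manages roughly as follows; MPIGroup, MPI_opencontext and MPI_closecontext are the names used elsewhere in this note, while the structure and the context-numbering scheme are invented for the sketch.

// Sketch: give each library-managed object its own context so that several
// instances of the library can be active in a program at the same time.
typedef struct {
    MPIGroup group;       // group over which the object is distributed
    int      context;     // context private to this object
    int      tag_base;    // tags the library is free to use in that context
} lib_object;

static int lib_next_context = 200;    // assumed scheme for choosing contexts

void lib_object_create(lib_object *obj, MPIGroup group)
{
    obj->group    = group;
    obj->context  = lib_next_context++;   // one private context per object
    obj->tag_base = 0;
    MPI_opencontext(obj->context);        // declare the context in use
}

void lib_object_destroy(lib_object *obj)
{
    MPI_closecontext(obj->context);       // context no longer in use
}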
In both cases, a primary source of problems will be receives where the tag filter is wild-carded. Such receives should be used with care, and independent context values associated with each. A secondary source of problems will be collective communication operations among groups which intersect. Again, such operations can be made non-interfering by choosing different contexts for each. 3) host-node programming The description of this problem makes some assumptions that I don't agree with. First is that the ALL group contains the host. My opinion is that there might be two preexisting groups, the ALL group containing all the nodes, and the ALLH group containing all the nodes and the host process. Here I make the assumption that in the ALLH group the host is the last process. To make the problem interesting, the program uses a static context. These groups can be built as follows, assuming constructors for building linear groups and list groups: // build the ALL group MPIGroup ALL; // all processes MPIGroup ALLH; // all processes plus host ... // Build ALL, ALLH groups // // Assume // MPI_HOSTID() -- returns id of host // MPI_NNODES() -- returns number of node processes // MPI_makelineargroup(int order, int start, int stride) // -- constructor for linear group // MPI_makelistgroup(int order, int *list) // -- constructor for list group // void MPIBuildAlls() { int *list; int n; int i; n = MPI_NNODES(); ALL = MPI_makelineargroup( /* order: */ n, /* start */ 0, /* delta */ 1); list = (int *) malloc(sizeof(int) * (n + 1)) for(i=0;i MPIGroup hgroup; // group containing host only MPIGroup ngroup; // group containing all nodes static int me; static int hostid; static int context = 100; // a randomly chosen context static int tags = 13367; // a randomly chosen tag base main() { me = MPI_NODE(); hostid = MPI_HOSTID(); MPI_opencontext(context); // use a static context if(me == hostid) host(); else node(); MPI_closecontext(context); // context is no longer in use } host() { int myrank; // build host, node groups: hgroup = MPI_makelistgroup(1, &hostid); ngroup = ALL; // lets not do more work than necessary // compute rank of host in ALLH group myrank = MPI_grouprank(ALLH, hostid); // broadcast from host to all nodes (perfectly legitimate) MPI_BCAST( ..., myrank, ALLH, tags, context); // send a message to each node in turn for(i=0;iFrom weeks@mozart.convex.com Wed Jun 9 14:53:36 1993 Return-Path: Received: from ens.ens-lyon.fr by lip.ens-lyon.fr (4.1/SMI-4.1) id AA14989; Wed, 9 Jun 93 14:53:35 +0200 Received: from convex.convex.com by ens.ens-lyon.fr (4.1/SMI-4.1) id AA12451; Wed, 9 Jun 93 14:53:05 +0200 Received: from mozart.convex.com by convex.convex.com (5.64/1.35) id AA01222; Wed, 9 Jun 93 07:51:39 -0500 Received: by mozart.convex.com (5.64/1.28) id AA26626; Wed, 9 Jun 93 07:53:48 -0500 Date: Wed, 9 Jun 93 07:53:48 -0500 From: weeks@mozart.convex.com (Dennis Weeks) Message-Id: <9306091253.AA26626@mozart.convex.com> To: Jack.Dongarra@lip.ens-lyon.fr Subject: May 12 mpi mail #2 Status: R LaTeX source (sender forgotten: I deleted the verbiage before the LaTeX stuff so I could just print it.. 
I think it was sent by Steve Otto) DW --------------------------- cut here ---------------------------------------- \documentstyle[twoside,11pt]{report} \pagestyle{headings} %\markright{ {\em Draft Document of the MPI Standard,\/ \today} } \marginparwidth 0pt \oddsidemargin=.25in \evensidemargin .25in \marginparsep 0pt \topmargin=-.5in \textwidth=6.0in \textheight=9.0in \parindent=2em % ---------------------------------------------------------------------- % mpi-macs.tex --- man page macros, % discuss, missing, mpifunc macros % % ---------------------------------------------------------------------- % a couple of commands from Marc Snir, modified S. Otto \newlength{\discussSpace} \setlength{\discussSpace}{.7cm} \newcommand{\discuss}[1]{\vspace{\discussSpace} {\small {\bf Discussion:} #1} \vspace{\discussSpace} } \newcommand{\missing}[1]{\vspace{\discussSpace} {\small {\bf Missing:} #1} \vspace{\discussSpace} } \newlength{\codeSpace} \setlength{\codeSpace}{.3cm} \newcommand{\mpifunc}[1]{\vspace{\codeSpace} {\bf #1} \vspace{\codeSpace} } % ----------------------------------------------------------------------- % A few commands to help in writing MPI man pages % \def\twoc#1#2{ \begin{list} {\hbox to95pt{#1\hfil}} {\setlength{\leftmargin}{120pt} \setlength{\labelwidth}{95pt} \setlength{\labelsep}{0pt} \setlength{\partopsep}{0pt} \setlength{\parskip}{0pt} \setlength{\topsep}{0pt} } \item {#2} \end{list} } \outer\long\def\onec#1{ \begin{list} {} {\setlength{\leftmargin}{25pt} \setlength{\labelwidth}{0pt} \setlength{\labelsep}{0pt} \setlength{\partopsep}{0pt} \setlength{\parskip}{0pt} \setlength{\topsep}{0pt} } \item {#1} \end{list} } \def\manhead#1{\noindent{\bf{#1}}} \hyphenation{RE-DIS-TRIB-UT-ABLE sub-script mul-ti-ple} \begin{document} \setcounter{page}{1} \pagenumbering{roman} \title{ {\em D R A F T} \\ Document for a Standard Message-Passing Interface} \author{Scott Berryman, {\em Yale Univ} \\ James Cownie, {\em Meiko Ltd} \\ Jack Dongarra, {\em Univ. of Tennessee and ORNL} \\ Al Geist, {\em ORNL} \\ Bill Gropp, {\em ANL} \\ Rolf Hempel, {\em GMD} \\ Bob Knighten, {\em Intel} \\ Rusty Lusk, {\em ANL} \\ Steve Otto, {\em Oregon Graduate Inst} \\ Tony Skjellum, {\em Missisippi State Univ} \\ Marc Snir, {\em IBM T. J. Watson} \\ David Walker, {\em ORNL} \\ Steve Zenith, {\em Kuck \& Associates} } \date{May 10, 1993 \\ This work was supported by ARPA and NSF under contract number \#\#\#, by the National Science Foundation Science and Technology Center Cooperative Agreement No. CCR-8809615. } \maketitle \hfuzz=5pt %\tableofcontents %\begin{abstract} %We don't have an abstract yet. %\end{abstract} \setcounter{page}{1} \pagenumbering{arabic} \label{sec:context} %======================================================================% \chapter{Contexts -- Proposal X} \begin{center} {\em Lyndon J.~Clarke, Tony Skjellum, Rik Littlefield}\medskip \end{center} This chapter describes the definition of context, its relation to group and tag, and describes communication safety, while indicating how all this relates to point-to-point and global communication. We include examples that illustrate the features incorporated, including the ``Rik Littlefield'' examples, plus the logical grid topology context example of Skjellum. We have utilized C-language semantics to describe the functionality, whereas other authors have utilized Fortran semantics. This should not be construed to denigrate (or render impossible) a Fortran interface. 
% %---------------------------------------------------------------------- % Context, Tag, Group and Process \section{Context, Tag, Group and Process} This section describes the view of context, tag, group and process taken by MPI. MPI views group and context as different and unrelated entities. Group is used for management of processes and context is used for management of messages. The purpose of these definitions are to promote the generation of library modules, and to allow users safely to combine programs written at different times, or by different programmers, without globalizing the semantics of how messaging is done. To do this, we introduce scope to message passing. This is accomplished through contexts and groups, primarily, and then by tags, within the user's perview. Note that we take a {\em laissez faire\/} view of the context resource (in the same spirit as Unix ports), in that we provide an resource allocation mechanism, but also provide for correct programs that define their own ``well-known'' contexts. Depending on implementation, either ``well-known'' or system-allocated contexts might be of higher performance, or the first allocated might always be faster, depending on the implementation details. (In doing so, we have removed a major incompatibility of this MPI formulation with the alternate Proposal VIII (Mark Sears)). We do not view groups, contexts, and tags as totally ``orthogonal,'' but we seek to make them orthogonal as possible, consistent with providing a safe, optimizable environment for programming. Since key events in the formation of groups may require unique contexts for their creation, and because contexts are often naturally associated with groups, their is a tendency to link context heavily with group. Actually, we have found that the group-to-context relationship is not one to one, both based on experience in experimental systems, and in recent discussions. Except where needed, contexts and groups are separate concepts, but we build up communicator objects in the following, that intentionally bind them together for programming convenience. In the end, this formulation remains at odds with Proposal I (Marc Snir), which uses the word ``context'' in a semantically distinct sense. Here, context always means a logical partition of the tag space into a user-defined and system-defined space. The latter is subsequently allocated/deallocated under either a system- or user-controlled safety mechanism, or a combination of both. Though we permit contexts to be integer values, and hence transmittable like tags, the operating system may choose internal optimizations by hashing such contexts to make reference to hardware message tags, or other facilities that we cannot anticipate in a portable standard. These runtime optimizations are not excluded. Furthermore, in what follows, we note at times that simplifications occur for the SPMD model. These simplifications imply a simpler implementation, and less potential overhead for that programming model, while including (rather than excluding) more general programming in the MPMD model. MPI libraries for SPMD and MPMD may be appropriately optimized at compile-time and/or run-time by the implementor. \subsection{Tag and Context} \subsubsection{Tag} MPI defines a {\em tag\/} as the familiar non-negative integer label of a message. The tag is a field in message selection which may be wildcard. There is no concept of creation or deletion of tags. Any non-negative integer value may be used as a tag. 
The MPI user is responsible for management of tag usage. It is correct for a process {\cal P} to transmit a tag value T to another process {\cal Q}, and, subsequently, for {\cal P} to then send messages of tag T, and for {\cal Q} to then receive messages of tag T. A tag is a 32-bit integer. Users assign non-negative tag values. {\small {\bf Discussion:} This definition appears compatible with the point-to-point definition of tag, at present. We will introduce an amendment to tag matching to point-to-point, to allow bit selection (which was narrowly defeated last time, and justify that better this time). Other than that, tag definition is in no apparent conflict with point-to-point.} \subsubsection{Context} MPI defines a {\em context\/} as the familiar integer label of a distinct message space, or more formally as a distinct message tag space. The context is a field in message selection that may not be wildcarded. There is no concept of creation or deletion of contexts. Key features are as follows: \begin{itemize} \item A context is exactly analogous to a tag, except that it may not be wildcarded in selection. \item In MPI, a context is stored in a 32-bit integer. Implementations that facilitate context management need not span the full 32 bits. Users are constrained to use non-negative contexts. System-provided contexts (see below), will be negative. \end{itemize} The MPI user is responsible for the management of context usage. Any non-negative integer value may be used as a context. It is correct for a process {\cal P} to transmit a context value {\bf C} to another process {\cal Q}, and, subsequently, for {\cal P} to send and receive messages of context {\bf C}, and for {\cal Q} to send and receive messages of context C. {\small {\bf Discussion:} A switch to 16-bit contexts would be a reasonable implementation alternative; we are convinced that a compliant implementation should permit at least 16 bits of context. Because these numbers are essentially a hash to hardware resource, in some implementations, 16 bits may provide nominal performance improvements. It is also thought that 16 bits would provide a sufficiently large number of contexts for practical applications, as currently envisaged, even those that have a number of objects. We strongly oppose less than 16 bits, because they could be insufficient for reasonable applications. We suggest the natural symmetry of tag and context sizes may simplify alignment in implementations, and reduce the number of data types in MPI by one. Again, we view the MPI contexts are software contexts; the mapping to ``hardware contexts'' is an implementation issue; there may only be an infinitesmal number of such hardware contexts. Still, in analogy to register variables in a C program, a program continues to compile and run after the fast register resource is exhausted\ldots one suggests to the implementor that at least one context support hooks to a slower, but bigger internal context map.} \subsubsection{Context Management} MPI provides services that facilitate context management and recommends, although does not mandate, use of these services. MPI provides a context allocation and deallocation facility that allows user processes to generate unique context values. \begin{verbatim} int context_array[]; /* OUT */ int number_to_alloc; /* IN */ mpi_alloc_contexts(context_array, number_to_alloc) \end{verbatim} This procedure allocates \verb|number_to_alloc| unique context values and and stores them in \verb|context_array|. 
None of the context values allocated have been returned by a previous call to the procedure in any process unless such values have also previously been deallocated by calling \verb|mpi_free_contexts|. \begin{verbatim} int context_array[]; /* IN */ int number_to_free; /* IN */ mpi_free_contexts(context_array, number_to_free) \end{verbatim} This procedure deallocates \verb|number_to_free| unique context values stored in the array \verb|context_array|. Each context value must have previously been allocated by calling \verb|mpi_alloc_contexts| or else the program is erroneous. The context values deallocated may be allocated with future calls of \verb|mpi_alloc_contexts|. {\small {\bf Discussion:} Allocation and deallocation of contexts involves a global synchronization in the SPMD model, and the possibility of a context server in the full MPMD programming model. The degree of implementation difficulty depends on which model the programmer is using. SPMD programming involves no server. Thus, extra overhead (however minimal) is removed from the SPMD model. An appropriate MPI implementation could provide an SPMD and MPMD version to facilitate this optimization, or via runtime checking.} {\small {\bf Discussion:} We allow the user to write programs with complicated and possibly unpredictable behavior by allocating a context C in one process P and sending the context to another process Q which frees the context. P continues to use context C while a third process R allocates C for another purpose. Therefore, the user is expected to maintain safety in such cases, through systematic programming. Programs that do not maintain their own context coherence after \verb|mpi_free_contexts| are said to be erroneous.} MPI permits correct programs to include a mixture of code that do their own allocation of contexts with code that utilizes the automatic context registration mechanisms. The ``fastest'' contexts are implementation-defined, and implementations may provide hints as to which contexts are faster, or undertake heuristics to utilize faster contexts before slower contexts. {\small {\bf Discussion:} Selection of a range of system- and user-allocated contexts, as opposed to positive/negative (equi-partition of bits) would be viewed as a friendly amendment, if this further articulation is seen as sufficiently important to merit the additional mechanisms needed to implement it. The functionality is equivalent, but the latter form may be more flexible for implementors, and more useful to users. This issue could be remanded to the Environment Subcommittee.} \subsubsection{Context Global Association} MPI provides name services which allow the user to make global associations of names with contexts and locate contexts by associated names. \begin{verbatim} char *name; /* IN */ int context; /* IN */ mpi_associate_context(name, context) \end{verbatim} This procedure associates \verb|name| with context value \verb|context|. The procedure fails if \verb|name| is associated with any context value. \begin{verbatim} char *name; /* IN */ int context; /* IN */ mpi_dissociate_context(name, context) \end{verbatim} This procedure dissociates \verb|name| from the context value \verb|context|. The procedure fails if \verb|name| is not associated with the context value \verb|context|. \begin{verbatim} char *name; /* IN */ int context; /* OUT */ int wait; /* IN */ mpi_locate_context(name, &context, wait) \end{verbatim} This procedure locates the context value \verb|context| associated with \verb|name|. 
If \verb|name| is not associated with a context value then the behavior is determined by the logical value of \verb|wait|. If \verb|wait| is \verb|true| then the procedure waits until \verb|name| becomes associated with a context value. If \verb|wait| is \verb|false| the procedure returns \verb|MPI_NULL_CONTEXT| in \verb|context|. \subsection{Group and Process} \subsubsection{Process} MPI views a process in the familiar sense, for example as a Unix or Intel NX process. MPI defines a {\em process identifier\/} (pid for short) as a process-local integer entity that refers to an opaque process description object of undefined size. This means that if two processes {\cal P} and {\cal Q} both know the identifier of a process {\cal R} then the relationship between the values of the identifiers known to {\cal P} and {\cal Q} is undefined. \begin{small} {\bf Discussion:} This pid definition leaves freedom for the implementor to do whatever is efficient for the machine(s) at hand, and does not exclude implementations in which the pid is global. \end{small} \subsubsection{Group Definition} MPI defines a group as an ordered collection of processes, or more formally as an ordered collection of pids. MPI defines a {\em group identifier\/} (gid for short) as a process-local integer entity that refers to an opaque group description object of undefined size. This means that if two processes {\cal P} and {\cal Q} both know the identifier of a group {\bf G} then the relationship between the values of the identifiers known to {\cal P} and {\cal Q} is undefined. This MPI formulation defines a pid as formally the identifier of a singleton group containing only the process. \subsubsection{Group Services} MPI provides services that allow the user to create and delete groups. \begin{verbatim} int new_group; /* OUT */ int old_group; /* IN */ int subset_key; /* IN */ mpi_subset_group(&new_group, old_group, subset_key) \end{verbatim} This procedure creates one or more new groups as subsets of an existing group according to a subset key. The procedure synchronizes the member processes of the existing group. \begin{verbatim} mpi_permute_group(&new_group, old_group, permute_rank) \end{verbatim} This procedure creates one new group by permuting the member ranks of an existing group. The procedure synchronizes the member processes of the existing group. \begin{verbatim} int new_group; /* OUT */ int old_group_array[]; /* IN */ int number_of_old_group; /* IN */ mpi_union_group(&new_group, old_group_array, number_of_old_group) \end{verbatim} This procedure creates one new group by a union of one or more distinct existing groups. The procedure synchronizes the member processes of the union. Note in particular that each existing group can of course be the singleton group of a process. \begin{verbatim} mpi_delete_group(group) \end{verbatim} This procedure deletes a group. The procedure synchronizes the member processes of the group. \begin{small} {\bf Discussion: } We have concluded that at least one default context is needed, safely to implement the above group creation, deletion, and union operations. This context may be a hidden context, per the implementation. It is at this point that group and context become non-orthogonal, because a safe messaging mechanism must be supported, deterministically to accomplish these operations. The communicators described below reveal context and group merger explicitly to the user. The above operations do not. 
\end{small} \subsubsection{Group Transmission} MPI provides services which allow the user to write a portable group description into a memory buffer and read a portable group description from a memory buffer. The memory buffer can be transmitted in a message, allowing transmission of group descriptions. \begin{verbatim} char *buf_ptr; /* OUT */ int buf_len; /* IN */ int group; /* IN */ mpi_passivate_group(group, buf_ptr, buf_len) \end{verbatim} This procedure writes a portable description of \verb|group| into the buffer \verb|buf_ptr| of length \verb|buf_len| bytes. \begin{verbatim} char *buf_ptr; /* OUT */ int buf_len; /* IN */ int group; /* OUT */ mpi_activate_group(&group, buf_ptr, buf_len) \end{verbatim} This procedure reads a portable description of a group from the buffer \verb|buf_ptr| of length \verb|buf_len| bytes and returns a group descriptor identifier for the read group in \verb|group|. MPI provides name services that allow the user to make global associations of names with groups and locate groups by associated names. \begin{verbatim} char *name; /* IN */ int group; /* IN */ mpi_associate_group(name, group) \end{verbatim} This procedure associates \verb|name| with group identifier by \verb|group|. The procedure fails if \verb|name| is associated with any group. \begin{verbatim} char *name; /* IN */ int group; /* IN */ mpi_dissociate_group(name, group) \end{verbatim} This procedure dissociates \verb|name| from the group identified by \verb|group|. The procedure fails if \verb|name| is not associated with the group identified by \verb|group|. \begin{verbatim} char *name; /* IN */ int group; /* OUT */ int wait; /* IN */ mpi_locate_group(name, &group, wait) \end{verbatim} This procedure locates the group associated with \verb|name| and returns a group identifier for the located group in \verb|group|. If \verb|name| is not associated with a context value then the behavior is determined by the logical value of \verb|wait|. If \verb|wait| is \verb|true| then the procedure waits until \verb|name| becomes associated with a group. If \verb|wait| is \verb|false| the procedure returns \verb|MPI_NULL_GROUP| in \verb|group|. \begin{small} {\bf Discussion:} When a process receives a group description either by means of \verb|mpi_locate_group| or \verb|mpi_activate_group|, it may later delete that group using \verb|mpi_delete_group|. If so, then the group deletion should simply delete the received copy of the group description rather than the actual group. \end{small} \subsubsection{Default Groups} MPI provides each process with two predefined groups, which cannot be deleted. These are the singleton group of which the process is the only member and a single group of which all processes are a member. \begin{verbatim} int pid; mpi_my_pid(&pid) \end{verbatim} This process returns the singleton group identifier, that is, process identifier, of the calling process. \begin{verbatim} int gid; mpi_all_group(&gid) \end{verbatim} This process returns the group identifier of the group of all processes. \begin{small} {\bf Discussion:} The concept of the ``all'' group is not really extensible to a dynamic process world. We could replace this with an ``initial'' group for example. It might be more general to replace the all group by the following. Creation of the set of processes composing the program is implementation specific. Require an implementation specific method of assigning each process to a {\em base\/} process group, with different and distinct groups of processes in different base groups. 
Default is to make all processes members of the same base group. Provide a procedure to return the base group of the process. \end{small} %---------------------------------------------------------------------- % Communicator and Communication \section{Communicator and Communication} Now that we have established the fundamental workings of tag, context, and group, we can turn to the standard user-level communication objects of MPI. In particular, communication between processes is provided by communicator objects that are a binding of a message context and one or more process ``worlds.'' A process world is either a process group or all processes composing the user program. \subsection{Intra-communication} Intra-communication is communication between processes within the same process world. This service is provided by an intra-communicator object, which is a binding of a context, and a process world. The process world is either a group or all processes composing the user program. \begin{verbatim} int communicator; /* OUT */ int context; /* IN */ int world; /* IN */ mpi_create_intra_communicator(&communicator, context, world) \end{verbatim} This procedure creates an intra-communicator and returns the communicator identifier in \verb|communicator|. The intra-communicator world supplied in \verb|world| may be an actual group identifier or the null group \verb|MPI_NULL_GROUP|. If \verb|world| is an actual group identifier, then the world is the identified group, and processes within the world are identified relatively by {\bf rank} within that group. Otherwise the world contains all processes within the user program and processes within the world are identified absolutely by process identifier. The communicator context supplied in \verb|context| may be an actual context value or the null context \verb|MPI_NULL_CONTEXT|. If \verb|context| is an actual context then the procedure completes asynchronously. If \verb|context| is the null context then the procedure allocates a single context using the context allocator described above, synchronizes all members of the process world, and uses the allocated context in each intra-communicator created. \begin{small} {\bf Discussion:} The name ``intra-communicator'' is long. We propose ``C1'' as the nick-name for an intra-communicator object. {\bf Discussion:} We conserve the Proposal I rank-mechanism for referring to processes in groups, for most cases (see above), and under communication primitives. We see this as the most easily understood, intuitive way to describe communication relative to the group; the level of virtualization implied by this mapping is also desirable from the point of view of safety. \end{small} \begin{verbatim} int communicator; /* IN */ int world; /* OUT */ mpi_intra_communicator_world(communicator, &world) \end{verbatim} This procedure returns in \verb|world| the process world of the intra-communicator identified by \verb|communicator|. \begin{verbatim} int communicator; /* IN */ int context; /* OUT */ mpi_communicator_context(communicator, &context) \end{verbatim} This procedure returns in \verb|context| the context of a communicator object identified by \verb|communicator|. If the context supplied in communicator creation was the null context, then the procedure returns the context allocated during communicator creation. \begin{verbatim} int communicator; /* IN */ mpi_delete_communicator(communicator) \end{verbatim} This procedure deletes a communicator object. 
If the context supplied in communicator creation was the null context then this procedure synchronizes the process members of process world(s) of the communicator and deallocates the context allocated for the communicator. Otherwise the procedure completes asynchronously. \subsection{Inter-communication} Intercommunication is communication between processes within two different process worlds. This service is provided by an inter-communicator object, which is a binding of a context, a local process world, and a remote process world. Each process world is either a group or all processes composing the user program. \begin{verbatim} int communicator; /* OUT */ int context; /* IN */ int local; /* IN */ int remote; /* IN */ mpi_create_intercommunicator(&communicator, context, local, remote) \end{verbatim} This procedure creates an inter-communicator and returns the communicator identifier in \verb|communicator|. The intercommunicator local world supplied in \verb|local| may be an actual group identifier or the null group \verb|MPI_NULL_GROUP|. If \verb|local| is an actual group identifier then the local world is the identified group and processes within the local world are identified relatively by rank within that group. Otherwise the local world contains all processes within the user program and processes within the local world are identified absolutely by process identifier. The inter-communicator remote world supplied in \verb|remote| may be an actual group identifier or the null group \verb|MPI_NULL_GROUP|. If \verb|remote| is an actual group identifier then the remote world is the identified group and processes within the remote world are identified relatively by rank within that group. Otherwise the remote world contains all processes within the user program and processes within the remote world are identified absolutely by process identifier. The communicator context supplied in \verb|context| may be an actual context value or the null context \verb|MPI_NULL_CONTEXT|. If \verb|context| is an actual context then the procedure completes asynchronously. If \verb|context| is the null context then the procedure allocates a single context using the context allocator described above, synchronizes all members of the local and remote process worlds, and uses the allocated context in each intercommunicator created. Where the user explicitly manages context values then he or she must ensure that the same process world(s) are bound to the same context value in each process which uses that context value, otherwise the program is erroneous. \begin{small} {\bf Discussion:} We propose the nickname ``C2'' for the inter-communicator. \end{small} \begin{verbatim} int communicator; /* IN */ int local; /* OUT */ mpi_intercommunicator_local(communicator, &local) \end{verbatim} This procedure returns in \verb|local| the local world of the inter-communicator identified by \verb|communicator|. \begin{verbatim} int communicator; /* IN */ int remote; /* OUT */ mpi_intercommunicator_remote(communicator, &remote) \end{verbatim} This procedure returns in \verb|remote| the remote world of the intercommunicator identified by \verb|communicator|. The context of an inter-communicator may be determined using the procedure\linebreak[4]\verb|mpi_communicator_context| described above. The procedure will again return the context allocated dynamically when the context supplied in communicator creation was the null context. An inter-communicator may be deleted using the procedure \verb|mpi_delete_communicator| described above. 
The procedure will synchronize all members of both the local and remote process worlds when the context supplied in communicator creation was the null context, and will otherwise complete asynchronously. \subsection{Point-to-point} The generic point-to-point communication send \verb|send| accepts a single communicator, a process label, and a message label as arguments and send-specific arguments: \begin{verbatim} send(communicator, process_label, message_label, ...) \end{verbatim} \verb|communicator| is the identifier of a communicator object. If the communicator is an intra-communicator then the sender must be a member of the intra-communicator world. If the communicator is an inter-communicator then the sender must be a member of the inter-communicator local world. \verb|process_label| identifies the receiver process. If the communicator is an intra-communicator and the world is an actual group, or if the communicator is an inter-communicator and the remote world is an actual group, then \verb|process-label| is a rank within the group. If the communicator is an intra-communicator and the world is the null group, or if the communicator is an inter-communicator and the remote world is the null group, then \verb|process-label| is a process identifier. \verb|message_label| is the message tag in the context+tag space of the communicator. The generic point-to-point communication receive \verb|recv| accepts a single communicator, a process label, and a message label as arguments and receive-specific arguments: \begin{verbatim} recv(communicator, process_label, message_label, ...) \end{verbatim} \verb|communicator| is the identifier of a communicator object. If the communicator is an intra-communicator then the sender must be a member of the intra-communicator world. If the communicator is an intercommunicator then the sender must be a member of the intercommunicator local world. \verb|process_label| identifies the sender process. If the communicator is an intra-communicator and the world is an actual group, or if the communicator is an intercommunicator and the remote world is an actual group, then \verb|process-label| is a rank within the group. If the communicator is an intra-communicator and the world is the null group, or if the communicator is an intercommunicator and the remote world is the null group, then \verb|process-label| is a process identifier. This field may be wildcarded by supplying the value \verb|MPI_PID_ANY|. \verb|message_label| is the message tag in the tag space of the communicator. This field may be wildcard by supplying the value \verb|MPI_TAG_ANY|. \subsection{Collective} The generic collective communication operation \verb|operation| accepts a single communicator as argument and operation specific arguments. \begin{verbatim} operation(communicator, ...) \end{verbatim} Communicator must be an intra-communicator with a process world which is a process group of which the caller is a member, or else the program is erroneous. 
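{\small {\bf Discussion:} The fragment below is purely illustrative and is not
part of the proposal; it only shows how the procedures defined above compose.
It assumes the C-style bindings used in this chapter and a program with at
least two processes.
\begin{verbatim}
int all, comm;

mpi_all_group(&all);                   /* predefined group of all processes */
mpi_create_intra_communicator(&comm,   /* null context: a context is        */
    MPI_NULL_CONTEXT, all);            /* allocated and the members of the  */
                                       /* world are synchronized            */

/* on the process of rank 0: send a message with tag 7 to rank 1 */
send(comm, 1, 7, ...);

/* on the process of rank 1: receive it, wildcarding sender and tag */
recv(comm, MPI_PID_ANY, MPI_TAG_ANY, ...);

/* a collective operation over the same world and context */
operation(comm, ...);

mpi_delete_communicator(comm);         /* synchronizes the world and frees  */
                                       /* the allocated context             */
\end{verbatim}
}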
\subsection{Message Envelope} To provide the services and capabilities we envisage for MPI, the prototypical message envelope is to consist of : \begin{description} \item[context], which is the identifier of the communicator \item[sender], which is the label of the sender according to the world, or remote world, of the communicator \item[tag], which is the message tag \item[receiver], which is implementation- specific information required to route the message to the receiver \end{description} \begin{small} {\bf Discussion:} The name of sender and receiver, has to be chosen in a way that is useful to the MPI implementation, rather than the user. The user works in ranks, but the system may actually transfer its own internal name for the sender. This remains an implementation detail, with a standard interface based on rank. \end{small} \section{Safety} The user of collective communications must be aware that the operations have no mechanism for protecting the point-to-point messages of which they are composed from those of the user program within the same context. The user is responsible for ensuring that there are no oustanding communications with the same communicator when the collective communication procedure is called. The writer of collective communications cannot use wildcard on either tag or sender in point-to-point receive calls, and strict sequencing of point-to-point messages is assumed. This is more easily handled by utilizing a duplicate communicator: one for point-to-point only, and one for deterministic global operations, all of which have the strict ordering assumptions. Each additional global operation (non-deterministic) will in general need its own communicator (new context, plus copy of the same group). This issue is now well understood, and the features included in this formulation provide a straightforward means for user and library code to achieve safety. %---------------------------------------------------------------------- % Point-to-point Chapter \section{Point-to-point Chapter} {\bf Changes to draft to go here... we will work these out based on newest Snir chapter, on Wednesday night, May 12} %---------------------------------------------------------------------- % Collective Chapter \section{Collective Chapter} {\bf Changes to draft to go here... we will work these out based on newest Geist chapter, on Wednesday night, May 12 \& 13} %---------------------------------------------------------------------- % Worked Examples \section{Worked Examples} {\bf Worked examples to go here... we will add these for you ASAP} {\bf Littlefield examples, Skjellum examples} %---------------------------------------------------------------------- % Conclusion \section{Conclusion} This formulation of context, group, and tag, provide a powerful, forward-looking standard within MPI, that ``closes the loops'' needed to complete the standard. It does so by providing tiers of services, which keep context and group as separate entities as much as possible at the low level, but which subsequently unite them for the user's benefit in intra-communicator (C1) and inter-communicator (C2) objects, that cope with group and context simultaneously. The latter provides a group union capability, especially important in MPMD and distributed-computing situations. Through careful choice of features, we are able to support safe programs in which both system-provided and user-defined contexts are permissible. 
%----------------------------------------------------------------------
% Point-to-point Chapter
\section{Point-to-point Chapter}

{\bf Changes to draft to go here... we will work these out based on newest
Snir chapter, on Wednesday night, May 12}

%----------------------------------------------------------------------
% Collective Chapter
\section{Collective Chapter}

{\bf Changes to draft to go here... we will work these out based on newest
Geist chapter, on Wednesday night, May 12 \& 13}

%----------------------------------------------------------------------
% Worked Examples
\section{Worked Examples}

{\bf Worked examples to go here... we will add these for you ASAP}

{\bf Littlefield examples, Skjellum examples}

%----------------------------------------------------------------------
% Conclusion
\section{Conclusion}

This formulation of context, group, and tag provides a powerful,
forward-looking standard within MPI that ``closes the loops'' needed to
complete the standard. It does so by providing tiers of services, which keep
context and group as separate entities as much as possible at the low level,
but which subsequently unite them for the user's benefit in
intra-communicator (C1) and inter-communicator (C2) objects that cope with
group and context simultaneously. The latter provides a group union
capability, especially important in MPMD and distributed-computing
situations. Through careful choice of features, we are able to support safe
programs in which both system-provided and user-defined contexts are
permissible.

We are able to provide a tag field that is totally at the user's disposal,
and big enough to allow sophisticated application-dependent selectivity (32
bits). This formulation achieves the goal of enabling the formulation of
software modules, and provides servers to allow name/context matching,
another feature of potential impact in both the MPMD and distributed-computing
arenas. It further has the clear potential to drop servers whenever the SPMD
model is elected, replacing them with synchronizations over all processes.

We have left out some things compared to earlier versions. The caching
facility can be implemented later, in MPI2, or as an implementation-specific
set of features, for now. We have omitted discussion of virtual topologies,
because they can be layered on top of MPI1, using two features:
\begin{itemize}
\item contexts for each sub-topology,
\item layerability of tag selectivity (dont\_care bits, match bits).
\end{itemize}
Given these capabilities, we see no need to stipulate virtual topologies in
the base MPI, and so we have not gone into them further here.

%======================================================================%
\end{document}

>From owner-mpi-context@CS.UTK.EDU Sat May 15 22:36:57 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA19231; Sat, 15 May 93 22:36:55 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA07218; Sat, 15 May 93 22:37:06 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28086; Sat, 15 May 93 23:35:17 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sat, 15 May 1993 23:35:16 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28076; Sat, 15 May 93 23:35:14 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA02134; Sat, 15 May 93 22:35:13 CDT Date: Sat, 15 May 93 22:35:13 CDT From: Tony Skjellum Message-Id: <9305160335.AA02134@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: the gathering (part II) Status: R

We have a unified proposal between Marc Snir, myself, Lyndon Clarke, and Mark Sears, which we worked out at the last meeting, and which we will flesh out further over the next weeks and present to you here. This progress report is mainly for the benefit of those who did not attend the meeting. One point that did not appear in the pre-meeting proposal rounds was the caching facility. It will definitely be needed to support virtual topologies.
We will send you an update soon,
Tony

>From owner-mpi-context@CS.UTK.EDU Fri May 21 13:21:09 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA13482; Fri, 21 May 93 13:20:58 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA10187; Fri, 21 May 93 13:19:12 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09039; Fri, 21 May 93 14:07:48 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 21 May 1993 14:07:46 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09031; Fri, 21 May 93 14:07:40 -0400 Message-Id: <9305211807.AA09031@CS.UTK.EDU> Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 8875; Fri, 21 May 93 14:07:43 EDT Date: Fri, 21 May 93 14:07:42 EDT From: "Marc Snir" To: MPI-CONTEXT@CS.UTK.EDU Status: R

%!PS-Adobe-2.0 %%Creator: dvips 5.47 Copyright 1986-91 Radical Eye Software %%Title: CON-V6.DVI.* %%Pages: 11 1
DRAFT

Maps, Groups and Contexts

Lyndon Clarke, Mark Sears, Tony Skjellum, Marc Snir

May 13, 1993

1  Introduction

MPI provides support for the execution of parallel procedures. A parallel procedure is executed collectively by a set of communicating processes. Transfer of control to the parallel procedure, and back, is achieved by having each executing process transfer control to the local procedure code, and return from it. Not all processes need transfer control to the same parallel procedure; the parallel procedure may involve only a subset of processes, or different parallel calls may be executed by different subsets of processes. The collective communication calls provided by MPI are examples of such parallel procedures. Libraries such as parallel scientific libraries will be another example.

It is highly desirable to allow the processes that execute a parallel procedure to use a "virtual process name space" local to the invocation. Thus, the code of the parallel procedure will look identical, irrespective of the absolute addresses of the executing processes. It is often the case that parallel application code is built by composing several parallel modules (e.g., a numerical solver, and a graphic display module). Support of a virtual name space for each module will allow modules that were developed separately to be composed without changing all message passing calls within each module. The set of processes that execute a parallel procedure may be fixed, or may be determined dynamically before the invocation. Thus, MPI has to provide a mechanism for creating sets of locally named processes dynamically. We always number the processes that execute a parallel procedure consecutively, starting from zero. Thus, a group is an ordered set of processes. Each process in a group is associated with a rank, starting from zero. Processes are identified by their ranks when communication occurs within the group.

Another important goal is the support of a loosely synchronous transfer of control: no synchronization occurs either before or after the call. Thus, a process may start sending messages that pertain to the execution of a parallel procedure before all other participating processes have joined the execution; a process may receive a message that pertains to the execution of a parallel procedure while participating in another parallel execution. A communication context mechanism is needed to distinguish communication that belongs to distinct parallel procedure executions. (Even if the parallel transfer of control is executed synchronously, one still needs a context mechanism for the synchronization calls themselves.)

Normally, a parallel procedure is written so that all messages produced during its execution are also consumed by the processes that execute the procedure. However, if one parallel procedure calls another, then it might be desirable to allow such a call to proceed while messages are pending (the messages will be consumed by the procedure after the call returns). In such a case, a new communication context is needed for the called parallel procedure, even if the transfer of control is synchronized.

A context is the MPI mechanism for partitioning communication space. A send made in a context cannot be received in another context. Contexts are identified in MPI using integer-valued context_id's.

The communication domain used by a parallel procedure is identified by a communicator. Communicators bring together the concepts of process group and communication context. A communicator is an explicit parameter in each point-to-point communication operation. The communicator identifies the communication context of that operation; it identifies the group of processes that can be involved in this communication; and it provides the translation from virtual process names, which are ranks within the group, into absolute addresses. Collective communication calls also take a communicator as a parameter; it is expected that parallel libraries will be built to accept a communicator as a parameter. Communicators are represented by opaque MPI objects.

MPI does not provide for absolute process names. Rather, processes are always identified by their rank inside a group. A universal group that includes all processes available when computation starts is the nearest equivalent to an absolute process name space in MPI. Relative naming is consistent with MPI implementations where processes can be added dynamically.

New process groups are built by subsetting and reordering processes within existing groups (as defined by communicators). A new group is defined from an old group by specifying the correspondence between the rank of each process in the new group and its rank in the old group. Such a correspondence is called a map; it is represented in MPI by an opaque object.

2  Maps

A map is a one-to-one mapping from 0...m-1 into the set of non-negative integers; m is the size of the map f. A possible representation for such a map is a list of m integers; it is often convenient to think of a map as being such a list. Another possible representation is an algorithmic specification of the map (e.g., a hash function). A map is represented by an opaque map object.

Maps are used in order to represent groups. If processes are numbered from 0 to n-1, then a map f : 0...m-1 -> 0...n-1 represents a subset group of m processes: process f(i) has rank i in the new group. A map, by itself, does not represent a group; it does so only relative to a pre-defined containing group, i.e., relative to another ranking of the processes.

Discussion: Should decide if opaque objects are justified, or whether an array of indices is OK. On the "opaque object" side: may have algorithmic map definitions, and may spread the definition over more than one node on a very large system. On the explicit representation side: the representation is simple, we avoid an additional object, and there is not much waste. Also, with an explicit representation, the user can easily add his/her own map constructors, and building communicators is faster. Need to understand what an opaque map would be in Fortran.

If maps are explicitly represented as arrays, then one needs the map size to be an additional parameter.

For each m, an identity map MPI_IDENT(m) of size m is predefined: MPI_IDENT(m)(i) = i, for i = 0, ..., m-1.

Discussion: If maps are non-opaque then the identity map is just a long array with i stored at the i-th entry, and it is just MPI_IDENT. If maps are opaque, we need a different map for each domain size (a function from integers to maps?). Need to think of the Fortran implementation.

2.1  Operations on maps

The following operations are defined on maps. Each function is invoked locally by a process.

MPI_MAP_SIZE(map, size)

  IN  map   handle to map object
  OUT size  size of map (integer)

MPI_MAP_APPLY(map, argument, value)

  IN  map       handle to map object
  IN  argument  map argument (integer)
  OUT value     image of argument under map (integer)

MPI_MAP_INVERSE(map, argument, value)

  IN  map       handle to map object
  OUT argument  inverse image of value under map; -1 if value is not in the range of map
  IN  value     map value (integer)

MPI_MAP_LIST(map, len, array_of_integer, size)

Creates an explicit list representation of a map.

  IN  map               handle to map object
  IN  len               length of array (integer)
  OUT array_of_integer  list of values in the range of map
  OUT size              length of returned list -- size of map (integer)

Discussion: I changed terminology from "element, rank" to "argument, value" since both argument and value are ranks.
2.2  Map constructors

The map constructors may be either local or may require communication.

MPI_MAP_BUILD(map, array_of_integer, size)

Builds a map from an explicit list representation. This function is called locally by one process.

  OUT map               handle to map object
  IN  array_of_integer  list of values in the range of map
  IN  size              number of entries in array -- map size (integer)

MPI_MAP_SPLIT(comm, map, key, index, newmap)

This is a collective function that is called by all processes in the group associated with (comm, map). All calling processes provide the same values for the parameters comm and map. A separate group of processes is formed for each distinct value of key, with the processes ordered according to the value of index. Each process in such a group is returned in newmap the map of this group within comm.

  IN  comm    communicator object handle
  IN  map     map object handle
  IN  key     (integer)
  IN  index   (integer)
  OUT newmap  map object handle

Additional map creation functions may be provided. A possible list of constructors follows.

Discussion: Do we want to mandate them in MPI?

MPI_MAP_UNION(map1, map2, newmap)

map1 and map2 are maps with disjoint ranges. The resulting concatenation is a map with a range which is the union of the ranges of map1 and map2, with the elements in the range of map1 listed first, and the original order preserved within each range. The union operation is associative but not commutative.

    newmap(i) = map1(i)                if i < size(map1)
    newmap(i) = map2(i - size(map1))   otherwise

  IN  map1    map object handle
  IN  map2    map object handle
  OUT newmap  map object handle
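As a worked instance of this definition: if map1 is the list (3, 5) and map2 is the list (0, 7), then the resulting newmap is the list (3, 5, 0, 7), since newmap(0) = map1(0) = 3, newmap(1) = map1(1) = 5, newmap(2) = map2(0) = 0, and newmap(3) = map2(1) = 7.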
MPI_MAP_PRODUCT(map1, map2, range, newmap)

map1 is a map with values in the range 0, ..., range-1. newmap is the cartesian product of map1 and map2, which maps the pair (i, j) into the pair (map1(i), map2(j)). Pairs in the domain and in the range of this mapping are numbered in row-major order, i.e., pair (i, j) has number i * range + j. The product operation is associative but not commutative.

    newmap(i * range + j) = map1(i) * range + map2(j)

  IN  map1    map object handle
  IN  map2    map object handle
  OUT newmap  map object handle

3  Context_id

A context_id is an integer. The range of valid values for context_id is implementation dependent, and can be found by calling a suitable query function, as described in ??.

3.1  Operations on context_id's

MPI_CONTEXT_ALLOCATE(comm, map, array_of_context_ids, len)

Allocates an array of context_id's. This is a collective operation that is executed by all processes in the group defined by (comm, map). The context_ids that are returned are unique within the group associated with (comm, map). The array returned is the same on all processes that call the function (same order, same number of elements). The call may block until all processes within the executing group have invoked the call.

  IN  comm                  communicator object handle
  IN  map                   map object handle
  OUT array_of_context_ids  (array of integers)
  IN  len                   number of context_id's to allocate (integer)

MPI_CONTEXT_RESERVE(context_id)

Reserves a context_id value. This is a local call that is invoked by a single process. A reserved context_id value will not be allocated by a subsequent call to MPI_CONTEXT_ALLOCATE by the same process. It is erroneous to reserve a context_id that has already been allocated by MPI_CONTEXT_ALLOCATE.

  IN  context_id  (integer)

Discussion: May want to reserve an array of values, rather than one value. Implementations may have their own reserved values. The effect is as if calls to MPI_CONTEXT_RESERVE were executed in a preamble.

MPI_CONTEXT_FREE(context_id)

Frees a reserved context_id value. The value becomes available for a subsequent reservation by MPI_CONTEXT_RESERVE or a subsequent allocation by MPI_CONTEXT_ALLOCATE. It is erroneous to free a value that is associated with an active communicator.

  IN  context_id  (integer)

Discussion: Here, too, we might prefer to free an array of values, rather than one. Communicators, like all opaque objects, are freed by a call to MPI_FREE. We may prefer to have that same call also free the associated context_id. With the current design, if we want to free both the communicator and the context_id, we need to store the context_id value, free the communicator, and then free the context_id.
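A minimal sketch of the allocation and release calls above, assuming C bindings in which context_id's are plain integers and comm and map are handles; none of these binding details is fixed by this draft:

    int ids[2];

    /* collective over the group defined by (comm, map): every member makes
       the same call and receives the same two freshly allocated ids      */
    MPI_CONTEXT_ALLOCATE(comm, map, ids, 2);

    /* ... ids[0], ids[1] may now be handed to a communicator constructor ... */

    /* once no active communicator uses them, the values can be released  */
    MPI_CONTEXT_FREE(ids[0]);
    MPI_CONTEXT_FREE(ids[1]);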
4  Communicators

A communicator is an opaque object that identifies a group of processes and a communication context for that group. Like other opaque objects, communicators cannot be transferred between processes; context_id's are used to transfer information on context.

As a short-hand, we shall identify the group of processes associated with a communicator comm with the communicator itself. Thus, "the process with rank i in comm" should be understood as meaning "the process with rank i in the group associated with comm". In the same manner, "the communication context comm" should be understood to mean "the communication context associated with comm".

An initial communicator MPI_COMM_INIT is defined when the program starts. Its associated group contains all processes that start the computation. Applications that do not need multiple process groups or multiple contexts will only use this communicator.

Let comm be a communicator that is associated with a group of size n, and let map : 0..m-1 -> 0..n-1 be a map. Then the pair (comm, map) defines a subgroup, namely the subgroup of processes with ranks map(0),...,map(m-1) in comm.

4.1  Operations on communicators

MPI_COMM_MAP(comm, subcomm, map)

Returns a map such that (comm, map) is the group associated with subcomm; i.e., if i is the rank of a process in the group associated with subcomm, then map(i) is the rank of that same process in the group associated with comm. The call is erroneous if subcomm has a process that is not a member of comm.

IN comm       communicator object handle
IN subcomm    communicator object handle
OUT map       map object handle

The MPI_COMM_MAP function retrieves the group of processes that is associated with a communicator. However, since absolute process names are not visible in MPI, the group can only be defined relative to another encompassing group. The "absolute" process number is obtained by using MPI_COMM_INIT as a reference communicator.

MPI_COMM_SIZE(comm, size)

Returns the size of the group associated with comm.

IN comm     handle to communicator
OUT size    group size (integer)

MPI_COMM_CONTEXTID(comm, context_id)

Returns the context_id associated with the communicator comm.

IN comm           communicator object handle
OUT context_id    context_id

4.2  Communicator constructors

MPI_COMM_MAKE(comm, map, context_id, newcomm)

Creates a new communicator object, which is associated with the group defined by (comm, map), and the context_id context_id. This is a local call executed by one process. However, the new communicator object should not be used for communication between two processes unless they both have called the function.

It is erroneous to create on the same process two distinct communicators with the same context_id.

IN comm          communicator object handle
IN map           map object handle
IN context_id    context_id
OUT newcomm      communicator object handle

The context_id used in this call may be an integer returned by a previous call to MPI_ALLOC_CONTEXT, executed within the same group. In such a case, it is guaranteed that the context_id is unique, and has not been used already to create a communicator. However, the user is free to manage the context_id's on its own, and use other mechanisms for their allocation. For example, one could have a single context_id server generate unique context_id's for the entire system; one can preallocate statically some context_id values, for the use of libraries; etc.

Discussion: Do we want to prohibit implementations that synchronize the creation of communicators (i.e., where a call to MPI_MAKE_COMM blocks until all members of the group have made the call)? Is it preferable to have an array_of_comm as parameter, rather than one communicator? We don't have now the ability to have two different groups with the same context; e.g., two different numberings of the processes, used with the same communication context. The alternative is to remove the restriction that two distinct communicators cannot use the same context_id. It is then the user's responsibility to disambiguate these two communicators; the system does not guarantee that a message sent with one communicator can be received only with this same communicator, if they use the same context_id. We save on context_id's but lose safety (the sender and receiver may be using different process numberings, if two different communicators are used -- the receiver may not identify the sender correctly).
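To make the constructor path concrete, here is a minimal sketch of the allocate-then-make sequence just described; the draft does not fix a C binding, so the MPI_Comm and MPI_Map types and the pass-by-address convention are assumptions.

/* Sketch only: one context_id is allocated collectively over (comm, map),
 * then each member builds the new communicator locally from it. */
void make_comm_example(MPI_Comm comm, MPI_Map map, MPI_Comm *newcomm)
{
    int ctx[1];

    MPI_CONTEXT_ALLOCATE(comm, map, ctx, 1);    /* collective: same unique id on every member   */
    MPI_COMM_MAKE(comm, map, ctx[0], newcomm);  /* local: usable once every member has made it  */
}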
5  Working without context_id's

It is possible to create communicators directly, without using context_id's. A single function combines the allocation of a new context_id and the generation of a new communicator. This allows a "naive" user that does not need to customize context_id allocation to ignore this MPI feature. Direct communicator creation is safer, since uniqueness of context_id's is guaranteed, by construction.

MPI_COMM_SAFEMAKE(comm, map, newcomm)

Creates a new communicator (with an attached context_id) that is associated with the group defined by the pair (comm, map). This is a collective call that has to be invoked by all members in this group. All of them provide the same input parameters comm and map. The call may block until all processes in the group have invoked it. The new communicator may be safely used for communication with any member of the new group, once the call returns.

IN comm        communicator handle
IN map         map handle
OUT newcomm    communicator handle

MPI_COMM_SAFEMAKE is

MPI_CONTEXT_ALLOC(comm, map, context_id, 1);
MPI_COMM_MAKE(comm, map, context_id, newcomm);
``MPI_SYNCH(comm,map)''  /* by which we mean a synchronization
        operation in the subgroup defined by (comm,map) */

Discussion: Do we prefer to create an array of communicators, rather than one? (To be consistent with the MPI_CONTEXT_ALLOC function.) MPI_SYNCH(comm,map) is not currently defined in the collective communication library; it can be coded using point-to-point calls, but seems important enough to be included there. General thought -- if we support collective communication calls with a map as an additional parameter, then we have disassociated the "context" of a collective communication from its supporting "group". We need to do it when we build new communicators, in order to bootstrap -- if we want the call that generates a new communicator to involve only the processes in the new group.

6  Working without maps and context_id's

It is possible to avoid both the use of map objects and of context_id's altogether. A single function combines the generation of a map, the allocation of a context_id and the generation of a new communicator. This allows "naive" users to use MPI without being exposed to maps or context_id's.

MPI_COMM_BUILD(comm, array_of_ranks, size, newcomm)

Build a map from an explicit list representation. This is a collective function that is called by all members of the new group. All provide the same parameters for comm, array_of_ranks and size. The call may block until it was invoked on all the group members. When it returns, the new communicator may be used to communicate with any member of the new group.

IN comm              handle to communicator
IN array_of_ranks    list of values in the range of map
IN size              number of entries in array -- map size (integer)
OUT newcomm          handle to new communicator

MPI_COMM_BUILD is

MPI_MAP_BUILD(comm, array_of_ranks, size, newmap);
MPI_COMM_SAFEMAKE(comm, newmap, newcomm);

MPI_COMM_COPY(comm, newcomm)

Creates a new communicator with the same group as the old communicator. This is a collective function that is called by all members of the new group. All provide the same parameters for comm. The call may block until it was invoked on all the group members. When it returns, the new communicator may be used to communicate with any member of the group.

IN comm        handle to communicator
OUT newcomm    handle to new communicator

MPI_COMM_COPY is

MPI_COMM_SIZE(comm, size);
MPI_COMM_SAFEMAKE(comm, MPI_IDENT(size), newcomm);

MPI_COMM_SPLIT(comm, key, index, newcomm)

Split the group associated with comm; creates a new group for each distinct value of key that contains the processes that supplied that key value; the processes are ranked according to the index values they supplied. A new communicator is created for each subgroup. Each process is returned a handle to the communicator for the new subgroup it belongs to. This is a collective call executed by all processes in the group associated with comm. The call may block until all processes have invoked the function. When the call returns the new communicator may be safely used for communication in the new group.

IN comm        handle to communicator
IN key         (integer)
IN index       (integer)
OUT newcomm    handle to new communicator

MPI_COMM_SPLIT(comm, key, index, newcomm) is

MPI_COMM_SIZE(comm, size);
MPI_MAP_SPLIT(comm, MPI_IDENT(size), key, index, newmap);
MPI_COMM_SAFEMAKE(comm, newmap, newcomm);

Discussion: May want a special DONTCARE key value that indicates that the caller need not have a new communicator.

7  Examples

We say that a parallel procedure is active at a process if the process belongs to a group that may collectively execute the procedure, and some member of that group is currently executing the procedure code. If a parallel procedure is active at a process, then this process may be receiving messages pertaining to this procedure, even if it does not currently execute the code of this procedure.

7.1  Nonreentrant parallel procedures

This covers the case where, at any point in time, at most one invocation of a parallel procedure can be active at any process. I.e., concurrent invocations of the same parallel procedure may occur only within disjoint groups of processes. For example, all invocations of parallel procedures involve all processes, processes are single-threaded, and there are no recursive invocations.

In such a case, a context_id can be statically allocated to each procedure. The static allocation can be done in a preamble, as part of initialization code. Or, it can be done at compile/link time, if the implementation has additional mechanisms to reserve context_id values. Communicators to be used by the different procedures can be built in a preamble, if the executing groups are statically defined; if the executing groups change dynamically, then a new communicator has to be built whenever the executing group changes, but this new communicator can be built using the same preallocated context_id. If the parallel procedures can be organized into libraries, so that only one procedure of each library can be
concurrently active at each processor, then it is sufficient to allocate one context per library.
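As an illustration of the preamble idea in Section 7.1, a library might claim its context once at initialization. The sketch below uses the draft's MPI_CONTEXT_RESERVE, MPI_COMM_SIZE, MPI_COMM_MAKE and MPI_IDENT (identity map); the constant, the function and the C types are made up for the example.

/* Sketch only: a one-time preamble that gives a nonreentrant library its own
 * statically chosen context and a private communicator over the whole group. */
#define LIBFOO_CTX 1001                     /* context_id value agreed at compile time */

static MPI_Comm libfoo_comm;

void libfoo_init(MPI_Comm comm)
{
    int size;

    MPI_COMM_SIZE(comm, &size);
    MPI_CONTEXT_RESERVE(LIBFOO_CTX);        /* keep the value out of the allocator's pool */
    MPI_COMM_MAKE(comm, MPI_IDENT(size), LIBFOO_CTX, &libfoo_comm);
    /* all of the library's internal traffic now uses libfoo_comm */
}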
7.2  Parallel procedures that are nonreentrant within each executing group

This covers the case where, at any point in time, for each process group, there can be at most one active invocation of a parallel procedure by a process member. However, it might be possible that the same procedure is concurrently invoked in two partially (or completely) overlapping groups. For example, the same collective communication function may be concurrently invoked on two partially overlapping groups.

In such a case, a context_id is associated with each parallel procedure and each executing group, so that overlapping execution groups have distinct communication contexts. (One does not need a different context_id for each group; one merely needs a "coloring" of the groups, so that overlapping groups get distinct colors.) One can generate the communicators for each parallel procedure when the execution groups are defined. Here, again, one only needs one context for each library, if no two procedures from the same library can be concurrently active in the same group.

Note that, for collective communication libraries, we do allow several concurrent invocations within the same group: a broadcast in a group may be started at a process before the previous broadcast in that group ended at another process. In such a case, one cannot rely on context mechanisms to disambiguate successive invocations of the same parallel procedure within the same group: the procedure need be implemented so as to avoid confusion. E.g., for broadcast, one may need to carry additional information in messages, such as the broadcast root, to help in such disambiguation; one also relies on preservation of message order by MPI. With such an approach, we may be gaining performance, but we lose modularity. It is not sufficient to implement the parallel procedure so that it works correctly in isolation, when invoked only once; it needs to be implemented so that any number of successive invocations will execute correctly. Of course, the same approach can be used for other parallel libraries.

7.3  Well nested parallel procedures

Calls of parallel procedures are well nested if a new parallel procedure is always invoked in a subset of a group executing the same parallel procedure. Thus, processes that execute the same parallel procedure have the same execution stack.

In such a case, a new context needs to be dynamically allocated for each new invocation of a parallel procedure. However, a stack mechanism can be used for allocating new contexts. Thus, a possible mechanism is to allocate first a large number of context_id's (up to the upper bound on the depth of nested parallel procedure calls), and then use a local stack management of these context_id's on each process to create a new communicator (using MPI_COMM_MAKE) for each new invocation.

Discussion: General case

In the general case, there may be multiple concurrently active invocations of the same parallel procedure within the same group; invocations may not be well nested. A new context needs to be created for each invocation. It is the user's responsibility to make sure that, if two distinct parallel procedures are invoked concurrently on overlapping sets of processes, then context_id allocation or communicator creation is properly coordinated.

8  Left

Remote procedure call and C2 objects.
How to handle growing process set.

>From weeks@mozart.convex.com Wed Jun 9 15:37:19 1993 Return-Path: Received: from ens.ens-lyon.fr by lip.ens-lyon.fr (4.1/SMI-4.1) id AA15217; Wed, 9 Jun 93 15:37:19 +0200 Received: from convex.convex.com by ens.ens-lyon.fr (4.1/SMI-4.1) id AA13402; Wed, 9 Jun 93 15:37:04 +0200 Received: from mozart.convex.com by convex.convex.com (5.64/1.35) id AA02892; Wed, 9 Jun 93 08:36:37 -0500 Received: by mozart.convex.com (5.64/1.28) id AA01496; Wed, 9 Jun 93 08:38:50 -0500 Date: Wed, 9 Jun 93 08:38:50 -0500 From: weeks@mozart.convex.com (Dennis Weeks) Message-Id: <9306091338.AA01496@mozart.convex.com> To: Jack.Dongarra@lip.ens-lyon.fr Subject: An inquiry you may not have received Status: R On May 31 there was a post in comp.parallel asking "what is MPI?" -- I responded to the guy, telling him to contact you if he wanted to get in one or more of the email lists and/or to attend the meeting. He is from exxon.
Now it occurs to me that he might have sent you mail at cs.utk and maybe it didn't get forwarded to france or whatever, so just for reference here is the guy's name and email: Denny Willen dew@custer.exxon.com >From owner-mpi-context@CS.UTK.EDU Wed May 26 15:38:20 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA09084; Wed, 26 May 93 15:38:17 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA09337; Wed, 26 May 93 15:38:29 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21630; Wed, 26 May 93 16:35:42 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 26 May 1993 16:35:41 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21536; Wed, 26 May 93 16:35:08 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA19257; Wed, 26 May 1993 16:35:06 -0400 Date: Wed, 26 May 1993 16:35:06 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9305262035.AA19257@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu, mpi-pt2pt@cs.utk.edu Subject: Group nakedness Status: R It has occurred to me, while reading the "Maps, Groups, and Contexts" (MGC) draft, that groups no longer really exist in MPI in the sense that there is no routine whose input or output is a group. It seems we have discarded the intuitively appealing concept of a group, and replaced it with lesser obvious concept of a map. I would like to get rid of maps, and restore groups to MPI. For most of the routines in MGC this can be done by replacing the word "map" with "group", and clarifying the names of some routines. I would like to see the MGC draft rewritten with the routines below. In most cases there is a clear correspondence between a "map" routine and a "group" routine that does something very similar. David MPI_MAP_DOMAIN --> MPI_GROUP_SIZE size_of_group = MPI_GROUP_SIZE(group) IN group - handle to group object OUT size_of_group - number of processes in group (integer) MPI_MAP_ELEMENT --> MPI_RANK_IN_PARENT old_rank = MPI_RANK_IN_PARENT(group,rank) IN group - handle to group object IN rank - rank in group (integer) OUT old_rank - rank of process (group,rank) in parent group MPI_MAP_RANK --> MPI_RANK_IN_GROUP rank = MPI_RANK_IN_GROUP(group,old_rank) IN group - handle to group object IN old_rank - rank in parent group (integer) OUT rank - rank in group of process whose rank in the parent group is (group,old_rank) (integer) MPI_MAP_FLATTEN --> MPI_INQ_GROUP MPI_INQ_GROUP(group,array_of_integer,max_size) IN group - handle to group object OUT array_of_integer - list of the ranks in the parent group of the processes in group INOUT max_dim - on entry the maximum number of integers that can be put into array_of_integer. on exit the size of the group, i.e. the actual number of integers put into array_of_integer (integer). 
MPI_MAP_BUILD --> MPI_GROUP_BUILD MPI_MAKE_GROUP(group,array_of_integer) OUT group - handle to new group object IN array_of_integer - ranks in parent group of processes in new group (Note: this is still a local group constructor since communicators will still be used in all communication routines) MPI_MAP_CONCAT --> MPI_GROUP_CONCAT MPI_GROUP_CONCAT(group1,group2,new_group) IN group1 - handle to first group IN group2 - handle to second group (does not overlap with group1) OUT new_group - handle to new group that is the concatenation of group1 and group2 MPI_MAP_SPLIT --> MPI_GROUP_SPLIT MPI_GROUP_SPLIT(group,key,index,new_group) IN group - handle to group IN key - key used to partition group (integer) IN index - value whose relative order over all processes in new_group determines the rank within new_group (integer) OUT new_group - handle to new group (Note: this is a collective group constructor) MPI_ALLOC_CONTEXT --> MPI_ALLOC_CONTEXT MPI_ALLOC_CONTEXT(comm,array_of_contextids,len) IN comm - handle to communicator IN len - number of contexts to be created (integer) OUT array_of_contextids - array of len context IDs (Note: this is a collective operation involving all the processes in the group in comm) MPI_MAKE_COMM --> MPI_MAKE_COMM MPI_MAKE_COMM(group,context_id,new_comm) IN group - handle to group IN context_id - context ID OUT new_comm - handle to communicator containing group and context_id MPI_SAFEMAKE_COMM --> MPI_SAFEMAKE_COMM MPI_SAFEMAKE_COMM(group,new_comm) IN group - handle to group OUT new_comm - handle to communicator (Note: this is a collective operation) MPI_MAP --> not needed MPI_CONTEXT --> what is this for? nothing --> MPI_PARENT (a new routine, may be useful) MPI_PARENT(group,parent_group) IN group - handle to group OUT parent_group - handle to parent of group Date: Fri, 28 May 93 14:42:06 BST Message-Id: <7507.9305281342@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Group nakedness To: walker@rios2.epm.ornl.gov (David Walker) In-Reply-To: David Walker's message of Wed, 26 May 1993 16:35:06 -0400 Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Status: RO David writes: > It has occurred to me, while reading the "Maps, Groups, and Contexts" (MGC) > draft, that groups no longer really exist in MPI in the sense that there is > no routine whose input or output is a group. > > It seems we have discarded the intuitively appealing concept of a group, and > replaced it with lesser obvious concept of a map. I would like to get rid > of maps, and restore groups to MPI. Me too! I think this makes a lot of sense. I explained it to future users here and they just said "Eh?". I can't comment on the detailed suggestions which were given below, I ain't had time to read it yet.
Best Wishes Lyndon (the ex-prolific :-) /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ >From owner-mpi-context@CS.UTK.EDU Thu Jun 3 10:29:00 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA09255; Thu, 3 Jun 93 10:28:45 -0500 Received: from [128.169.201.1] by convex.convex.com (5.64/1.35) id AA21898; Thu, 3 Jun 93 10:26:09 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08266; Thu, 3 Jun 93 11:14:59 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 3 Jun 1993 11:14:58 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08257; Thu, 3 Jun 93 11:14:47 -0400 Date: Thu, 3 Jun 93 16:14:37 BST Message-Id: <14127.9306031514@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context: group nakedness and deconvolution To: tony@aurora.cs.msstate.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Status: R Hi Tony How's the context draft coming along between yourself and Mark (Sears)? I'll be interested to hear what's happening on the "group nakedness" which was mentioned over the net by David, yourself and I. I found time to consider the concrete suggestion put forward by David, at last, and have some input which I hope will be of use. First off I think that the deletion of MAP and exposure of GROUP is a Good Thing. It helps to deconvolve the conceptual context of the draft which I now see as a major difficulty. I explained it to people and the convolution was quite a problem. I'll go through David's suggestion making the detailed comments, and then make the overall comments after all that. Lines of David's begin with "> ", and I'll include pointers to the overall comments where appropriate. o--------------------o Detailed Comments - ----------------- > MPI_MAP_DOMAIN --> MPI_GROUP_SIZE > size_of_group = MPI_GROUP_SIZE(group) > IN group - handle to group object > OUT size_of_group - number of processes in group (integer) Looks fine to me. > MPI_MAP_ELEMENT --> MPI_RANK_IN_PARENT > old_rank = MPI_RANK_IN_PARENT(group,rank) > IN group - handle to group object > IN rank - rank in group (integer) > OUT old_rank - rank of process (group,rank) in parent group This introduces the idea of ranking within the space of a parent group, presumably that within which the (child) group was created (overall comment below). The group object must store the handle of the parent group, which doesnt seem to be a problem. > MPI_MAP_RANK --> MPI_RANK_IN_GROUP > rank = MPI_RANK_IN_GROUP(group,old_rank) > IN group - handle to group object > IN old_rank - rank in parent group (integer) > OUT rank - rank in group of process whose rank in the parent group is > (group,old_rank) (integer) Is consistent. > MPI_MAP_FLATTEN --> MPI_INQ_GROUP > MPI_INQ_GROUP(group,array_of_integer,max_size) > IN group - handle to group object > OUT array_of_integer - list of the ranks in the parent group of the > processes in group > INOUT max_dim - on entry the maximum number of integers that can be > put into array_of_integer. > on exit the size of the group, i.e. the actual number of integers put into array_of_integer (integer). Is consistent. 
> MPI_MAP_BUILD --> MPI_GROUP_BUILD > MPI_MAKE_GROUP(group,array_of_integer) > OUT group - handle to new group object > IN array_of_integer - ranks in parent group of processes in new group > (Note: this is still a local group constructor since communicators will stiil > be used to all communication routines) Is not consistent. Needs the parent group as an argument in order to record the parent group in the created group (at least). Needs the size of the group to create as argument. Its far from clear how much use MPI_MAKE_GROUP and MPI_INQ_GROUP really are with the business of always referencing the parent. In order to send a group in a message this way the user will have to send the entire ancestry of the group in the message(s). Please see overall comments below. > MPI_MAP_CONCAT --> MPI_GROUP_CONCAT > MPI_GROUP_CONCAT(group1,group2,new_group) > IN group1 - handle to first group > IN group2 - handle to second group (does not overlap with group1) > OUT new_group - handle to new group that is the concatentation of group1 and > group2 Within the parent reference framework the groups must have the same parent, or with more complication a common ancestor which becomes the parent of the created group. Decision should be made and documented. > MPI_MAP_SPLIT --> MPI_GROUP_SPLIT > MPI_GROUP_SPLIT(group,key,index,new_group) > IN group - handle to group > IN key - key used to partition group (integer) > IN index - value whose relative order over all processes in new_group > determines the rank within new_group (integer) > OUT new_group - handle to new group > (Note: this is a collective group constructor) Is consistent. Should document that "group" is parent of each "new_group" (obvious to us, I know, but ...). > MPI_ALLOC_CONTEXT --> MPI_ALLOC_CONTEXT > MPI_ALLOC_CONTEXT(comm,array_of_contextids,len) > IN comm - handle to communicator > IN len - number of contexts to be created (integer) > OUT array_of_contextids - array of len context IDs > (Note: this is a collective operation involving all the process in the > group in comm) Please see overall comments below. > MPI_MAKE_COMM --> MPI_MAKE_COMM > MPI_MAKE_COMM(group,context_id,new_comm) > IN group - handle to group > IN context_id - context ID > OUT new_comm - handle to communicator containing group and context_id Is consistent (although see following message on intercommuncation). > MPI_SAFEMAKE_COMM --> MPI_SAFEMAKE_COMM > MPI_SAFEMAKE_COMM(group,new_comm) > IN group - handle to group > OUT new_comm - handle to communicator > (Note: this is a collective operation) Is inconsistent. There is no communicator with which to call MPI_ALLOC_CONTEXT. Can be fixed, for example, by redefining MPI_ALLOC_CONTEXT to accept a group rather than a communicator. > MPI_MAP --> not needed I assume this is a reference to MPI_COMM_MAP. Equivalent operator still needed, returning group. MPI_COMM_GROUP(comm, group) IN comm communicator handle OUT group handle of group bound in communicator MPI_CONTEXT --> what is this for? I assume this is a reference to MPI_COMM_CONTEXTID. Should keep it. > nothing --> MPI_PARENT (a new routine, may be useful) > MPI_PARENT(group,parent_group) > IN group - handle to group > OUT parent_group - handle to parent of group In the preant reference framework this is essential. Overall Comments - ---------------- 1. Parent framework. I mentioned that MPI_INQ_GROUP and MPI_BUILD_GROUP will be difficult to use. Recommendation is to drop the parent reference framework and have absolute groups. 
This means absolute process identification which can be, for example, indices within a group containing all processes, but not mandated to be such. 2. Context allocation. I mentioned that MPI_ALLOC_CONTEXT and MPI_SAFEMAKE_COMM are inconsistent. Recommendation is to replace the communicator argument of MPI_ALLOC_CONTEXT with a group. MPI implementation provides the context or whatever for the implied collective operation, as it does for MPI_SAFEMAKE_COMM as described. This recommendation effects deconvolution of the conceptual content of the draft. It will make the whole thing much easier to understand. o--------------------o Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ >From weeks@mozart.convex.com Wed Jun 9 15:47:16 1993 Return-Path: Received: from ens.ens-lyon.fr by lip.ens-lyon.fr (4.1/SMI-4.1) id AA15290; Wed, 9 Jun 93 15:47:15 +0200 Received: from convex.convex.com by ens.ens-lyon.fr (4.1/SMI-4.1) id AA13569; Wed, 9 Jun 93 15:46:55 +0200 Received: from mozart.convex.com by convex.convex.com (5.64/1.35) id AA03206; Wed, 9 Jun 93 08:46:28 -0500 Received: by mozart.convex.com (5.64/1.28) id AA01658; Wed, 9 Jun 93 08:48:42 -0500 Date: Wed, 9 Jun 93 08:48:42 -0500 From: weeks@mozart.convex.com (Dennis Weeks) Message-Id: <9306091348.AA01658@mozart.convex.com> To: Jack.Dongarra@lip.ens-lyon.fr Subject: June 3 mpi mail #1 of 1 Status: R >From owner-mpi-context@CS.UTK.EDU Thu Jun 3 12:29:51 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA14746; Thu, 3 Jun 93 12:29:33 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA27165; Thu, 3 Jun 93 12:26:53 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA16388; Thu, 3 Jun 93 13:22:04 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 3 Jun 1993 13:22:02 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA16375; Thu, 3 Jun 93 13:21:35 -0400 Date: Thu, 3 Jun 93 18:21:28 BST Message-Id: <14220.9306031721@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context: intercommunication etcetara To: tony@aurora.cs.msstate.edu Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Status: R Hi Tony Here are thoughts on how we can proceed with intercommunication. I write this letter as though maps were deleted and groups exposed, in anticipation :-) In summary this letter says what we do not need two different kinds of communicator objects - indeed we can have just one communicator object which is described in the letter. We do not really need different constructors to build communicators for intracommunication and intercommunication, but we should provide them for the convenience of the user. The letter adds at most nine procedures to this section of MPI, and suggests deletion of a few others :-) The "context" in the draft from Marc (Snir) is weaker than that in Zipcode and Proopsal X, as I am sure you are aware. The allocator ensures only that the context is unique within the set of processes performing the allocation. The context (or context_id in the draft) can be sent in a message. Presumably this means the following. A group G could allocate a context C, send it to a group H. 
H can then send messages to G using context C in the knowledge that G knows such messages come from H. However, G cannot send messages to H using context C in the knowledge that such messages have come from G. I have no complaint with this context concept. It does have consequences, one of which is that attempting to simulate intercommunication by group union intracommunication is not particularly palatable. Never mind, I hope this letter can largely resolve difficulties with intercommunication. With this kind of context in mind, I write out generic send and receive in longhand, in two slightly different ways. Please bear with me on this.

send(sender-group,              send(local-group,
     receiver-group,                 remote-group,
     receiver-rank,        OR        remote-rank,
     receiver-context,               remote-context,
     tag,                            tag,
     ...)                            ...)

receive(receiver-group,         receive(local-group,
        sender-group,      OR           remote-group,
        sender-rank,                     remote-rank,
        receiver-context,                local-context,
        tag,                             tag,
        ...)                             ...)

The receive operation is allowed to wildcard on "sender-rank" (OR "remote-rank") and "tag" (OR "tag" :-) only. The "sender-group" (OR "local-group") argument to "send" is just there so that the rank of the sender can be placed in the message envelope. In principle this could just be some identifier for the sender, but the natural identifier to use in MPI terms is the rank within a group, and putting the group there is good programming practice since it documents the send. The two group arguments to receive, "receiver-group" and "sender-group" (OR "local-group" and "remote-group") appear at first to serve no purpose since "receive" cannot select on them. However they may be useful in optimisations of interpretation of "sender-rank" (OR "remote-rank") and the context, and are good programming practice since they document the receive. In general people at MPI (excepting myself) have been thinking and talking about the (special) case where it just so happens that "sender-group = receiver-group" (OR "remote-group = local-group"). This point is key. I suggest that we can simply let this be a special case, provide convenience functions for this case, and retain efficiency. Sounds groovy to me :-) So what is a communicator then? No surprises to learn that the idea is something like

typedef struct {
    group_t   local_group;
    context_t local_context;
    group_t   remote_group;
    context_t remote_context;
} communicator_t;

where I have written "group_t" for the type of a group and "context_t" for the type of a context (and yes, I am talking a bit too close to implementation here, but that is just to demonstrate the potential for a single implementation). If the communicator is to be used for intracommunication, then the construction of the communicator sets "remote_group = local_group" and "remote_context = local_context". If the communicator is to be used for intercommunication, then the construction of the communicator sets "remote_group" and "local_group" differently. It may or may not set "remote_context" and "local_context" differently depending on the scope within which contexts were allocated. Now the interesting case of the two "group_t" being set the same and the two "context_t" being set different. Yes, it even means something, it means two different modules within the same process group which communicate with one another, and need private context for such communications (it's intercommunication really). Now we're motoring :-)
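For concreteness, the two construction rules just described might be restated as compilable C roughly as follows; group_t, context_t and the two helper functions are placeholders for this sketch only, not proposed MPI names.

/* Sketch only: the struct mirrors the pseudo-definition above; the helpers
 * show how the intra and inter cases fill in its four fields. */
typedef int group_t;    /* placeholder: opaque group handle   */
typedef int context_t;  /* placeholder: opaque context handle */

typedef struct {
    group_t   local_group;
    context_t local_context;
    group_t   remote_group;
    context_t remote_context;
} communicator_t;

/* intracommunication: the remote part simply mirrors the local part */
communicator_t make_intra(group_t g, context_t c)
{
    communicator_t comm;
    comm.local_group    = g;
    comm.local_context  = c;
    comm.remote_group   = g;
    comm.remote_context = c;
    return comm;
}

/* intercommunication: two groups, and possibly two contexts, are bound together */
communicator_t make_inter(group_t lg, context_t lc, group_t rg, context_t rc)
{
    communicator_t comm;
    comm.local_group    = lg;
    comm.local_context  = lc;
    comm.remote_group   = rg;
    comm.remote_context = rc;
    return comm;
}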
How is it used in point-to-point? For pedagogical purposes, I now write out the generic send and receive using the communicator object described and show how they correspond to the above.

send(communicator, rank, tag, ...)
    sender-group     = local-group    = communicator.local_group
    receiver-group   = remote-group   = communicator.remote_group
    receiver-rank    = remote-rank    = rank
    receiver-context = remote-context = communicator.remote_context
    tag              = tag            = tag

receive(communicator, rank, tag, ...)
    receiver-group   = local-group    = communicator.local_group
    sender-group     = remote-group   = communicator.remote_group
    sender-rank      = remote-rank    = rank
    receiver-context = local-context  = communicator.local_context
    tag              = tag            = tag

So how is it used in collective? Well we have only collective operations applicable to the case where the local and remote parts of the communicator are the same, so they are all errors if the communicator supplied is not of that nature. We could invent jargon to describe a communicator with that nature, if we really wanted to. I think that deals with what the object is and how it is used. Better let's think about the constructors and accessors and all the such now. Okay, I said we keep functions for making the special case of remote and local parts the same for user convenience. So just two new procedures for constructors. Names for these? Well, as you know I'm not very good at that part :-) I added the letter 'A' to 'COMM' (A for asymmetric, and use of the string "comma" is of course silly).

MPI_COMMA_MAKE(local_group, local_context, remote_group, remote_context, comm)
    IN  local_group     local group for communicator which caller must be member of
    IN  local_context   for communicator which local_group processes must be able to use for receive
    IN  remote_group    for communicator which caller may or may not be a member of
    IN  remote_context  for communicator which remote_group processes must be able to use for receive
    OUT comm            communicator created

MPI_COMMA_SAFEMAKE(local_group, remote_group, comm)
    IN  local_group     as for MPI_COMMA_MAKE
    IN  remote_group    as for MPI_COMMA_MAKE
    OUT comm            as for MPI_COMMA_MAKE
    (Note that this is a collective operation synchronising all members of both process groups)

[Aside - I *much* prefer to drop the "safemake" stuff and be able to pass a NULL or MPI_NULL_CONTEXT to the "make" stuff in which case it allocates the contexts for the user. It allows just one or other of the two contexts in these cases to be allocated automatically. I propose this.]

Six new procedures for accessors. They are pretty self explanatory so I do not detail the argument lists here; they can be added trivially later.

MPI_COMM_LOCAL_GROUP(comm, local_group)
MPI_COMM_REMOTE_GROUP(comm, remote_group)
MPI_COMM_LOCAL_CONTEXTID(comm, local_context)
MPI_COMM_REMOTE_CONTEXTID(comm, remote_context)
MPI_COMM_LOCAL_SIZE(comm, local_size)
MPI_COMM_REMOTE_SIZE(comm, remote_size)

The three accessors in the draft should probably be documented as errors if the remote and local parts of the communicator are not identical. On the other hand, they could just be deleted :-) [Aside - why have accessors for the group size when the user can easily get at the group and ask that object what the size is? If we have this function it should be in Section 6.] Regarding Section 6, "Working without maps and context_id's". I guess we'd have a new procedure which is pretty obvious by now

MPI_COMMA_BUILD(comm, local_arrayofranks, local_size, remote_arrayofranks, remote_size, newcomm)

[Aside - I'd like to see MPI_COMM[A]_BUILD dropped. This should be done at the group level.
I propose this.] MPI_COMM_SPLIT should be documented as an error if the local and remote parts of the existing communicator are not identical. Finally I cannot avoid the matter of providing mechanisms for allowing users to get information about remote groups in order to build communicators in which the local and remote parts are not the same (otherwise there is no point being able to do that in the first place). I'll just say this again. The BEST KNOWN WAY TO DO THIS IS BY A NAME SERVER, now for contexts and groups. We rely on sequencing for allocation of contexts and binding with groups into communicators when we are planning for intracommunication. We justify this by virtue of our SPMD view of the process group - this is the condition under which we can rely on sequencing. When we think about activities spanning pairs of groups we just cannot use sequencing - we certainly cannot invoke an SPMD view of the group pair although we can invoke an SPMD view of each group separately and individually. The NAME SERVICE is TRACTABLE. It is EXISTING PRACTICE. I'd like to see it in MPI. If not then I need the capabiliities which allow me to implement one. Minimum requirement: there are mechanisms for transmission of contexts and groups in messages. I already have given them at the previous meeting. Please don't make them too perverse :-) Oh well, I hope all this typing helped. Best Wishes Lyndon (the now sporadically(sp?) prolific) /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ >From weeks@mozart.convex.com Wed Jun 9 15:48:40 1993 Return-Path: Received: from ens.ens-lyon.fr by lip.ens-lyon.fr (4.1/SMI-4.1) id AA15308; Wed, 9 Jun 93 15:48:40 +0200 Received: from convex.convex.com by ens.ens-lyon.fr (4.1/SMI-4.1) id AA13591; Wed, 9 Jun 93 15:48:28 +0200 Received: from mozart.convex.com by convex.convex.com (5.64/1.35) id AA03233; Wed, 9 Jun 93 08:48:02 -0500 Received: by mozart.convex.com (5.64/1.28) id AA01694; Wed, 9 Jun 93 08:50:15 -0500 Date: Wed, 9 Jun 93 08:50:15 -0500 From: weeks@mozart.convex.com (Dennis Weeks) Message-Id: <9306091350.AA01694@mozart.convex.com> To: Jack.Dongarra@lip.ens-lyon.fr Subject: June 4 mpi mail #1 of 1 Status: R >From owner-mpi-context@CS.UTK.EDU Fri Jun 4 12:42:16 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA17192; Fri, 4 Jun 93 12:42:13 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA14175; Fri, 4 Jun 93 12:39:50 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21363; Fri, 4 Jun 93 13:30:04 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 4 Jun 1993 13:30:02 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from cs.sandia.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21352; Fri, 4 Jun 93 13:30:00 -0400 Received: from newton.sandia.gov (newton.cs.sandia.gov) by cs.sandia.gov (4.1/SMI-4.1) id AA06643; Fri, 4 Jun 93 11:29:57 MDT Received: by newton.sandia.gov (5.57/Ultrix3.0-C) id AA15559; Fri, 4 Jun 93 11:31:02 -0600 Message-Id: <9306041731.AA15559@newton.sandia.gov> To: mpi-context@cs.utk.edu Subject: Progress report on context draft. Date: Fri, 04 Jun 93 11:31:01 MST From: mpsears@newton.cs.sandia.gov Status: R MPI colleagues, We plan to rewrite the draft to include the following: 1) Explanation of what a process knows when it is born. e.g. 
the default communicators. We think the requirements are going to be pretty minimal. For example, a communicator that allows the created process to talk to its parent, itself, and possibly a server. We are trying to avoid overspecifying what the process management external to MPI must provide. 2) Explanation of what happens when an MPI send or receive call occurs, i.e. how the message envelope is built or interpreted. The purpose of this discussion is to reveal to implementors the information they need to know about rank->address translation and when. We do this without specifying a single correct implementation. 3) An explanation of the proper/safe way to use communicators in the presence of subroutines of possibly arbitrary nesting and complexity. The rule we have definitely established is as follows: When a caller passes a communicator (which contains a context and group) to a callee, that communicator must be free of side effects on entry and exit to the subprogram. This provides the basic guarantee of safety. The callee has permission to do whatever communication it likes with the communicator, and under the above guarantee knows that no other communications will interfere. Since we permit the creation of new communicators without synchronization ( assuming preallocated context ids), this does not impose a significant overhead. This form of safety is analogous to other common computer science usages, such as passing a descriptor of an array to a library routine. The library routine has every right to expect such a descriptor to be valid and modifiable. 4) Examples of how inter communication (communication between independent groups) is set up and used safely. We have a mechanism worked out for implementing such communication without a C2 object. C1 objects will provide a 'picture' of the remote group/context (including virtual topology info as appropriate). This communicator will only support pt to pt. A mechanism can also be created to allow construction of a union, but the details are unresolved at present. 5) Demonstration of how a user server might be set up. In order to support servers we will allow receipt of messages in a context from a process outside a group. Since the process is outside the group, there is no valid rank for it, so the query of the rank will produce a MPI_ANYPROCESS (or something like that) return. 6) A better (clearer) definition of what a communicator is. For instance, we are restricting communicators to a single context id, though we may cache virt top info and methods that implement communication inside a communicator. We do not support a hidden context id for collective communication (remember the user can always get another communicator). This strategy is perceived to be thread safe. 7) Clarification of properties of groups and discussion of absolute versus relative groups. We currently favor the former, getting rid of maps a la Walker. 
Mark Sears Tony Skjellum >From weeks@mozart.convex.com Wed Jun 9 15:51:34 1993 Return-Path: Received: from ens.ens-lyon.fr by lip.ens-lyon.fr (4.1/SMI-4.1) id AA15335; Wed, 9 Jun 93 15:51:33 +0200 Received: from convex.convex.com by ens.ens-lyon.fr (4.1/SMI-4.1) id AA13624; Wed, 9 Jun 93 15:51:12 +0200 Received: from mozart.convex.com by convex.convex.com (5.64/1.35) id AA03298; Wed, 9 Jun 93 08:50:16 -0500 Received: by mozart.convex.com (5.64/1.28) id AA01745; Wed, 9 Jun 93 08:52:29 -0500 Date: Wed, 9 Jun 93 08:52:29 -0500 From: weeks@mozart.convex.com (Dennis Weeks) Message-Id: <9306091352.AA01745@mozart.convex.com> To: Jack.Dongarra@lip.ens-lyon.fr Subject: June 5-6 mpi intercommunication discussion thread Status: R >From owner-mpi-context@CS.UTK.EDU Sat Jun 5 10:12:59 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA07594; Sat, 5 Jun 93 10:12:50 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA15797; Sat, 5 Jun 93 10:10:28 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10078; Sat, 5 Jun 93 10:23:36 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sat, 5 Jun 1993 10:23:35 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10070; Sat, 5 Jun 93 10:23:34 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03213; Sat, 5 Jun 93 09:23:10 CDT Date: Sat, 5 Jun 93 09:23:10 CDT From: Tony Skjellum Message-Id: <9306051423.AA03213@Aurora.CS.MsState.Edu> To: lyndon@epcc.ed.ac.uk Subject: intercommunication Cc: mpi-context@cs.utk.edu, mpsears@cs.sandia.gov Status: R Lyndon, Mark and I had a long discussion (well Barney McCabe was also there for a while)... Inter communicator would have these properties (following your intentions, I believe, but without additional data structures): A mechanism called publish will be created: mpi_publish(my_comm, "label-for-my_comm", [permissions], persistence_flag) This registers a passivated C1 communicator my_comm with the server, or server-equivalent mechanism. "label-for-my_comm" uniquely identifies it. [permissions] are flags (eg, an array of contexts that may subscribe, tbd). persistence_flag is either MPI_EPHEMERAL, or MPI_PERSISTENT, so that a server can persistently leave access to a communicator available. mpi_subscribe("label-for_my-comm", new_comm) gives the subscriber (in new_comm) a picture of my_comm of the publisher. Both send and receive by the subscriber refers to ranks in the sender. [NB Appropriate virtual topology info goes with a passivated C1.] Now, what about a symmetric picture? Well, A group: mpi_publish(A_comm, "A_comm", [permissions], MPI_EPHEMERAL); mpi_subscribe(B_comm, "B_comm"); B group: mpi_publish(B_comm, "B_comm", [permissions], MPI_EPHEMERAL}; mpi_subscribe(A_comm, "A_comm"); So, have we failed to deal with any problems? Well, one will have to accept messages as MPI_ANY to get them from outside the group (but with in the context, still, of course). We do not see this as a problem. Factoids: publish and subscribe do not create new contexts we still want something that can take A_comm and B_comm, make a "union" group, and give it new context (that is still in thought stage, because of ordering issues). Ideas? 
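To make the calling sequence concrete, here is a minimal sketch of the symmetric exchange described above, in the same C-ish pseudocode as the rest of this thread and following the signatures stated at the top of the message (the argument order in the A/B fragment above looks transposed relative to that signature). None of these names are settled; permissions is only a placeholder for the tbd [permissions] list, and a_comm/b_comm stand for existing C1 communicators of the two groups.

   comm_t a_comm, b_comm;               /* existing C1 communicators of A and B */
   comm_t picture_of_a, picture_of_b;   /* passivated pictures obtained below   */
   context_t permissions[1];            /* placeholder for the tbd [permissions] */

   /* executed by group A */
   mpi_publish (a_comm, "A_comm", permissions, MPI_EPHEMERAL);
   mpi_subscribe ("B_comm", picture_of_b);

   /* executed by group B */
   mpi_publish (b_comm, "B_comm", permissions, MPI_EPHEMERAL);
   mpi_subscribe ("A_comm", picture_of_a);

A member of B can then send into A by giving a rank in A's group and passing picture_of_a, while A can only receive such messages with a wildcard sender, the sender's rank being reported as MPI_ANYPROCESS (or equivalent).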
- - Tony >From owner-mpi-context@CS.UTK.EDU Sat Jun 5 10:19:36 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA07859; Sat, 5 Jun 93 10:19:30 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA15973; Sat, 5 Jun 93 10:17:09 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10587; Sat, 5 Jun 93 10:34:02 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sat, 5 Jun 1993 10:34:02 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10579; Sat, 5 Jun 93 10:34:01 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03230; Sat, 5 Jun 93 09:33:32 CDT Date: Sat, 5 Jun 93 09:33:32 CDT From: Tony Skjellum Message-Id: <9306051433.AA03230@Aurora.CS.MsState.Edu> To: lyndon@epcc.ed.ac.uk Subject: Re: intercommunication Cc: mpi-context@cs.utk.edu, mpsears@cs.sandia.gov Status: R Further factoids on intercommunication i) mpi_publish is non-blocking ii) mpi_subscribe is blocking iii) cancelling publish or subscribe? First to ask for this is in deep trouble :-) iv) [IMPORTANT] Passivated C1's shared during publish/subscribe process inherently update a processes' internal table that allows it to do address translation on group members acquired through a subscribe. Mark Sears and I discussed the possibility that subscribe could be allowed to fail if subscription implied an unactivable C1. In other words, it might be impossible for certain processes to talk to other processes in a real system. We will have to decide how conforming implementations are allowed to fail ... MPI_EPHEMERAL allows a single publish to be subscribed to and removed without race condition. If MPI server is more powerful than user-space itself, Barney points out that is can be used to connect to servers outside user space, and hence must address permissions intelligently. - - Tony ps Lyndon, please comment further on need for relative groups, re Walker. Why do we need this ??? >From owner-mpi-context@CS.UTK.EDU Sun Jun 6 10:48:27 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA17201; Sun, 6 Jun 93 10:48:19 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA05614; Sun, 6 Jun 93 10:45:56 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04288; Sun, 6 Jun 93 11:40:07 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 6 Jun 1993 11:40:04 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04258; Sun, 6 Jun 93 11:40:01 -0400 Date: Sun, 6 Jun 93 16:39:45 BST Message-Id: <17260.9306061539@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context; comments on "June 5 - intercommunication" To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Status: R Dear Tony Here are (hopefully) better comments on the material you kindly sent out regarding the discussion between yourself and Mark Sears on the subject of intercommunication. I refer to your letter to mpi-context "intercommunication" of June 5. You appear to suggest a mechanism whereby a process which can obtain a picture of a C1 object (i.e., binding of one context and one group) which it has not itself created. This works by the creator using a publish operation to name the C1 within a global name space, and the non-creator using a subscribe operation to find out about the C1 in the global name space by name. 
I really am happy with the acceptance of a global name space! There are some general comments about the publish/subscribe mechanism suggested which follow. These raise an important question that I hope can be answered fairly quickly (latter part of next paragraph), and leave a pointer to material not yet covered.

The subscriber can use the C1 object obtained by subscribe to send messages to the publisher using the usual "rank in group" notation. The subscribe operation will make this possible. The material you sent also says that the subscriber can receive using the usual "rank in group" notation for the publisher group. I don't understand this. The reason is that with the context allocation scheme of the extant draft it would not be safe for the subscriber to attempt to use the communicator for receive. The context embedded therein is safe for receive only at the publisher. Is this just a simple error in preparing the letter, or are you also changing the context allocation scheme? If you are then we really do need to know that because it has important ramifications for later material.

The publisher can receive messages from any subscriber by receiving with the published communicator. The publisher cannot choose the sender and will only be able to receive from subscribers by executing a receive from MPI_ANY sender rank. The publisher cannot find out the identity of the subscriber sender as the rank of the sender will be returned as MPI_ANYPROCESS rank (or equivalent). We recognise that the mechanism suggested is not capable of providing the publisher with the ability to choose the sending subscriber. We also recognise that the mechanism is not capable of providing the identity of the sending subscriber (unless the sending subscriber explicitly places identity in the data part of the message). This is not intended as a criticism, just a statement about the mechanism on which we can agree. There are situations in which not choosing and not knowing the sender is fine (although in my experience they are uncommon). We also need to provide for the situations in which (a) the receiver needs to receive from a particular sender and (b) the receiver needs to receive from any sender and needs to know the identity of the actual sender. I come back to this matter later in the letter.

There are a few detailed comments on the publish/subscribe mechanism suggested. Some of these may be the result of misunderstanding or incomplete information.

* The "[permissions]" argument to the publish operation does seem to have a problem. If this is an array of contexts as suggested, or similar thing, then how can the publisher know which contexts the subscriber(s) will be using? There does not seem to be a mechanism. We could also ask the question the other way around, i.e. how can a subscriber determine a permitted context? Again there does not seem to be a mechanism. Should such a mechanism exist, the subscribe operation does not accept a context and therefore there is nothing to check the permitted contexts against. Is this a small mistake in preparing the letter - should mpi_subscribe accept an existing communicator to provide a context for the subscribe? The MPI server has power which the MPI user does not have. However this does not seem to mean that the user can use the server to connect outside user space since the user is not provided with a mechanism with which to make the server do that. So what purpose do permissions really serve? SUGGEST we remove the permissions argument.
* The "persistence_flag" argument to the publish operation allows the publisher to publish a name which the first subscribe will atomically subscribe and remove. Bearing in mind that the consuming subscribe is executed by a single process and that in general a group of processes will need to subscribe, I find it difficult to see what the advantage of this feature is. Is there a small mistake in preparing the letter - should mpi_subscribe accept a communicator in order to provide a group for the subscribe? * There is no way to remove a persistent publish. We should have a way of doing this. SUGGEST we add an mpi_unpublish("label") which removes "label" from the global name space. [Yes, I know, I'm in deep trouble, and probably launched into the swimming pool in a couple of weeks :-)] Now to sender-knowing intercommunication. The letter suggests that a "union" is the mechanism for communication when the receiver needs to either choose or discover the sender. I think that there are a couple of basic problems with that approach. Observe that with the context allocator of the extant draft we do rely on sequencing to correctly allocate contexts to library modules within a group. If process P in group G is allocating a context for some module then all other processes in G are allocating for the same module, otherwise things get all twisted. We do this because we have a strong SPMD view of the behaviour of a process group. This fine so far. Now if we use a union to do sender-knowing intercommunication between two functionally distinct groups (e.g. a client group and a server group, or two groups in a modular application) then we have created a group which we cannot view as SPMD. It is not possible to rely on sequencing for context allocation in this union group. The suggestion is that we somehow have a union operator which accepts two communicators, makes a "union" group of the groups within the communicators, and gives a context for the "union" group. Neither of the contexts inside the original communicators can be used as the context for the union, since each is allocated within a different group and thus is not safe for receive in the other group. With the extant context allocator of the draft the union operator would have to synchronise the "union" group, the two original groups, in order to allocate a context. Unfortunately this is only possible if we can rely on sequencing across the two groups, which the previous paragraph argues we cannot. This problem can be partially alleviated if we change the context allocator of the extant draft to that intended in X and existing in Zipcode - but that would not be working within the extant draft so I have not worked through the nitty gritty details. The "union" would have to be a union operator of the two groups, which would have to be distinct. This just means that intercommunication between co-located (in terms of mapped into same group) parallel modules would have to be done with different notation between intercommunication between non-co-colocated parallel modules. This is not the biggest point, but I do think it is valid nevertheless. These observations led me to look at other ways in which we can remove the multiplicity of different kinds of communicator objects, which resulted in my previous letter to mpi-context "mpi-context: intercommunication etcetara" of June 3. I make the suggestion that I can probably join the publish/subscribe mechanism you suggest with suggestion of my previous letter, without much grief. 
If you like us to go that way, then please say so, and I will work through the nitty gritty details of that line. Well that's all for now folks. I hope some of the questions raised can be answered. How would we like to move forward from here? Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ >From owner-mpi-context@CS.UTK.EDU Tue Jun 8 16:22:21 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA16080; Tue, 8 Jun 93 16:22:19 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA00894; Tue, 8 Jun 93 16:24:42 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03859; Tue, 8 Jun 93 17:16:02 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 8 Jun 1993 17:16:01 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03851; Tue, 8 Jun 93 17:16:00 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA19383; Tue, 8 Jun 1993 17:15:59 -0400 Date: Tue, 8 Jun 1993 17:15:59 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306082115.AA19383@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Example MPI programs Status: R I've been trying to get a better understanding of the use of communicators, and have actually been trying to write MPI code. Below are two examples that I would like feedback on. Both programs split the processes into two groups that then compute in parallel. The first uses just intragroup communicators. The second example merges results from the two groups, and involves intergroup communication. I think it generally useful to try to build up a small suite of example programs like this. My second example merges results from two disjoint groups. It would be nice to have an example in which they over lap. I look forward to your comments. David /* PROGRAM 1. Just to get warmed up here's a simple MPI program that splits the initial group into subgroups. One subgroup contains the processes with even ranks in the initial group, and the other the odd ranks. Then, one subgroup does a parallel FFT (A->FFT(A)), and the other a parallel matrix multiplication (Z=XY). The global length of A is lenA. The global sizes of X, Y, and Z are M by L, L by N, and M by N. */ main () { group_t init_group, new_group; comm_t comm; int init_rank, key, ret, lenA, M, N, L; double A[100], X[100][100], Y[100][100], Z[100][100]; ret = get_input1 (lenA, M, N, L); /* All processes get scalar input */ init_group = mpi_initial_group (); /* Determine initial group */ init_rank = mpi_rank (init_group); /* Determine my rank in initial group*/ key = init_rank&1; /* Split initial group into odd/even subgroups */ ret = mpi_split_group (init_group, key, init_rank, new_group); ret = mpi_safemake_comm (new_group, new_group, comm); if (key) { ret = initialize_fft (A, lenA, comm); ret = parallel_fft (A, lenA, comm); ret = output_fft (A, lenA, comm); } else { ret = initialize_matmul (X, Y, Z, M, N, L, comm); ret = parallel_matmul (X, Y, Z, M, N, L, comm); ret = output_matmul (X, Y, Z, M, N, L, comm); } exit (0); } /* PROGRAM 2. Now we'll try something a little harder. 
In the next program we find the convolution, C, of two vectors, A and B, by first finding the FFT of A and B, evaluating their elementwise product in Fourier space, and then finding the inverse transform, i.e., C = INVFFT ( FFT(A) * FFT(B) ) Of course, we can do this by having all processes first find FFT(A) and then FFT(B). The product in Fourier space can then be found with no communication being necessary. Finally, all processes cooperate to perform the inverse FFT. Program 2 doesn't do this. Instead, we split the initial group into two equally-sized subgroups, one of which evaluates FFT(A) and the other FFT(B) in parallel. Communication is then necessary between the initial group and the subgroups to re-distribute the data before doing the Fourier space product in the initial group. Finally, the processes in the initial group evaluate the inverse FFT, giving the convolution, C. The point here is that communication between groups is required. */ main () { group_t init_group, fft_group; comm_t comm_fft, comm_merge_A, comm_merge_B, comm_inv; comm_handle_t handle1, handle2; rso_handle_t rso; int init_rank, init_size, key, fftlen, local_len, nbytes; int source_rank, dest_rank1, dest_rank2; double A[100], B[100], C[100]; double tmpA[100], tmpB[100]; ret = get_input2 (fftlen); init_group = mpi_initial_group (); init_rank = mpi_rank (init_group); init_size = mpi_group_size (init_group); local_len = fftlen/init_size; nbytes = local_len*sizeof(double); key = init_rank&1; ret = mpi_split_group (init_group, key, init_rank, fft_group); ret = mpi_safemake_comm (fft_group, fft_group, comm_fft); ret = mpi_safemake_comm (init_group, init_group, comm_inv); source_rank = mpi_rank (fft_group); dest_rank1 = 2*source_rank; dest_rank2 = dest_rank1 + 1; if (key) { ret = mpi_safemake_comm (fft_group, init_group, comm_merge_A); ret = mpi_publish (comm_merge_A, "GROUP_A"); ret = initialize_fft (A, fftlen, comm_fft); ret = parallel_fft (A, fftlen, comm_fft); ret = mpi_isendc (handle1, &A[0], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank1, comm_merge_A); ret = mpi_isendc (handle2, &A[local_len], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank2, comm_merge_A); ret = mpi_subscribe ("GROUP_B", comm_merge_B); } else { ret = mpi_safemake_comm (fft_group, init_group, comm_merge_B); ret = mpi_publish (comm_merge_B, "GROUP_B"); ret = initialize_fft (B, fftlen, comm_fft); ret = parallel_fft (B, fftlen, comm_fft); ret = mpi_isendc (handle1, &B[0], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank1, comm_merge_B); ret = mpi_isendc (handle2, &B[local_len], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank2, comm_merge_B); ret = mpi_subscribe ("GROUP_A", comm_merge_A); } ret = mpi_recvc (tmpA, nbytes, MPI_DOUBLE, MPI_DONTCARE, source_rank, comm_merge_A, rso); ret = mpi_recvc (tmpB, nbytes, MPI_DOUBLE, MPI_DONTCARE, source_rank, comm_merge_B, rso); ret = mpi_wait (handle1, rso); ret = mpi_wait (handle2, rso); ret = parallel_product (tmpA, tmpB, C, fftlen, comm_inv); ret = parallel_invfft (C, fftlen, comm_inv); ret = output_result (A, B, C, fftlen, comm_inv); exit (0); } >From owner-mpi-context@CS.UTK.EDU Tue Jun 8 20:42:16 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA06666; Tue, 8 Jun 93 20:42:14 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA11785; Tue, 8 Jun 93 20:39:57 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00114; Tue, 8 Jun 93 21:35:32 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 8 Jun 1993 21:35:30 EDT Errors-To: 
owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06262; Tue, 8 Jun 93 18:07:21 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA17109; Tue, 8 Jun 1993 18:07:19 -0400 Date: Tue, 8 Jun 1993 18:07:19 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306082207.AA17109@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Correction to second program Status: R Sorry! There was an error in the second example program I sent out this afternoon. Below is a corrected version. David /* PROGRAM 1. Just to get warmed up here's a simple MPI program that splits the initial group into subgroups. One subgroup contains the processes with even ranks in the initial group, and the other the odd ranks. Then, one subgroup does a parallel FFT (A->FFT(A)), and the other a parallel matrix multiplication (Z=XY). The global length of A is lenA. The global sizes of X, Y, and Z are M by L, L by N, and M by N. */ main () { group_t init_group, new_group; comm_t comm; int init_rank, key, ret, lenA, M, N, L; double A[100], X[100][100], Y[100][100], Z[100][100]; ret = get_input1 (lenA, M, N, L); /* All processes get scalar input */ init_group = mpi_initial_group (); /* Determine initial group */ init_rank = mpi_rank (init_group); /* Determine my rank in initial group*/ key = init_rank&1; /* Split initial group into odd/even subgroups */ ret = mpi_split_group (init_group, key, init_rank, new_group); ret = mpi_safemake_comm (new_group, new_group, comm); if (key) { ret = initialize_fft (A, lenA, comm); ret = parallel_fft (A, lenA, comm); ret = output_fft (A, lenA, comm); } else { ret = initialize_matmul (X, Y, Z, M, N, L, comm); ret = parallel_matmul (X, Y, Z, M, N, L, comm); ret = output_matmul (X, Y, Z, M, N, L, comm); } exit (0); } /* PROGRAM 2. Now we'll try something a little harder. In the next program we find the convolution, C, of two vectors, A and B, by first finding the FFT of A and B, evaluating their elementwise product in Fourier space, and then finding the inverse transform, i.e., C = INVFFT ( FFT(A) * FFT(B) ) Of course, we can do this by having all processes first find FFT(A) and then FFT(B). The product in Fourier space can then be found with no communication being necessary. Finally, all processes cooperate to perform the inverse FFT. Program 2 doesn't do this. Instead, we split the initial group into two equally-sized subgroups, one of which evaluates FFT(A) and the other FFT(B) in parallel. Communication is then necessary between the initial group and the subgroups to re-distribute the data before doing the Fourier space product in the initial group. Finally, the processes in the initial group evaluate the inverse FFT, giving the convolution, C. The point here is that communication between groups is required. 
*/ main () { group_t init_group, fft_group; comm_t comm_fft, comm_merge, comm_merge_A, comm_merge_B, comm_inv; comm_handle_t handle1, handle2; rso_handle_t rso; int init_rank, init_size, key, fftlen, local_len, nbytes; int source_rank, dest_rank1, dest_rank2; double A[100], B[100], C[100]; double tmpA[100], tmpB[100]; ret = get_input2 (fftlen); init_group = mpi_initial_group (); init_rank = mpi_rank (init_group); init_size = mpi_group_size (init_group); local_len = fftlen/init_size; nbytes = local_len*sizeof(double); key = init_rank&1; ret = mpi_split_group (init_group, key, init_rank, fft_group); ret = mpi_safemake_comm (fft_group, fft_group, comm_fft); ret = mpi_safemake_comm (fft_group, init_group, comm_merge); ret = mpi_safemake_comm (init_group, init_group, comm_inv); source_rank = mpi_rank (fft_group); dest_rank1 = 2*source_rank; dest_rank2 = dest_rank1 + 1; if (key) { ret = mpi_copy_comm (comm_merge, comm_merge_A); ret = mpi_safemake_comm (fft_group, init_group, comm_merge_A); ret = mpi_publish (comm_merge_A, "GROUP_A"); ret = initialize_fft (A, fftlen, comm_fft); ret = parallel_fft (A, fftlen, comm_fft); ret = mpi_isendc (handle1, &A[0], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank1, comm_merge_A); ret = mpi_isendc (handle2, &A[local_len], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank2, comm_merge_A); ret = mpi_subscribe ("GROUP_B", comm_merge_B); } else { ret = mpi_copy_comm (comm_merge, comm_merge_B); ret = mpi_publish (comm_merge_B, "GROUP_B"); ret = initialize_fft (B, fftlen, comm_fft); ret = parallel_fft (B, fftlen, comm_fft); ret = mpi_isendc (handle1, &B[0], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank1, comm_merge_B); ret = mpi_isendc (handle2, &B[local_len], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank2, comm_merge_B); ret = mpi_subscribe ("GROUP_A", comm_merge_A); } ret = mpi_recvc (tmpA, nbytes, MPI_DOUBLE, MPI_DONTCARE, source_rank, comm_merge_A, rso); ret = mpi_recvc (tmpB, nbytes, MPI_DOUBLE, MPI_DONTCARE, source_rank, comm_merge_B, rso); ret = mpi_wait (handle1, rso); ret = mpi_wait (handle2, rso); ret = parallel_product (tmpA, tmpB, C, fftlen, comm_inv); ret = parallel_invfft (C, fftlen, comm_inv); ret = output_result (A, B, C, fftlen, comm_inv); exit (0); } >From owner-mpi-context@CS.UTK.EDU Wed Jun 9 04:36:46 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA11993; Wed, 9 Jun 93 04:36:35 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA26603; Wed, 9 Jun 93 04:34:18 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10964; Wed, 9 Jun 93 05:30:30 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 9 Jun 1993 05:30:29 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10946; Wed, 9 Jun 93 05:30:25 -0400 Date: Wed, 9 Jun 93 10:30:08 BST Message-Id: <20493.9306090930@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Correction to second program To: walker@rios2.epm.ornl.gov (David Walker), mpi-context@cs.utk.edu In-Reply-To: David Walker's message of Tue, 8 Jun 1993 18:07:19 -0400 Reply-To: lyndon@epcc.ed.ac.uk Status: R Hi y'all I agree that it is useful to have examples. The first example you sent shows that MPI makes it quite easy to do the kind of function driven parallelism where the two "functions" do not communicate. The second example starts to think about communication between two different "functions". 
The two functions communicate by passing use of the common ancestor which was easy as the ancestor was a parent and is in scope at the time of communication. I'm a little confused about the mpi_safemake_comm calls. I'm more confused by the receives and sends in the second example. Here the destination rank in the sends appears to be different to the intended source rank in the receives. Also the receives from the "other" group appear to be choosing the source which the publish/subscribe example described by Tony does not permit with the published/subscribed communicator. [Perhaps this hardly matters in this simpl example since each process receives just one message from the other group.] Hopefully you can help unravel my confusion. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ >From owner-mpi-context@CS.UTK.EDU Wed Jun 9 04:49:38 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA12100; Wed, 9 Jun 93 04:49:37 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA26794; Wed, 9 Jun 93 04:47:20 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13226; Wed, 9 Jun 93 05:44:27 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 9 Jun 1993 05:44:25 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13204; Wed, 9 Jun 93 05:44:21 -0400 Date: Wed, 9 Jun 93 10:44:15 BST Message-Id: <20520.9306090944@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Correction to [Re: Correction to second program] To: lyndon@epcc.ed.ac.uk, walker@rios2.epm.ornl.gov (David Walker), mpi-context@cs.utk.edu In-Reply-To: L J Clarke's message of Wed, 9 Jun 93 10:30:08 BST Reply-To: lyndon@epcc.ed.ac.uk Status: R Oh dear, I'm not quite sure what happened at the end of the first para. in my previous message. I've corrected (read "deleted the junk") and sent again. I agree that it is useful to have examples. The first example you sent shows that MPI makes it quite easy to do the kind of function driven parallelism where the two "functions" do not communicate. The second example starts to think about communication between two different "functions". Perhaps I should devise a more complicated yet simple example of this. I'm a little confused about the mpi_safemake_comm calls. I'm more confused by the receives and sends in the second example. Here the destination rank in the sends appears to be different to the intended source rank in the receives. Also the receives from the "other" group appear to be choosing the source which the publish/subscribe example described by Tony does not permit with the published/subscribed communicator. [Perhaps this hardly matters in this simpl example since each process receives just one message from the other group.] Hopefully you can help unravel my confusion. 
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ >From owner-mpi-context@CS.UTK.EDU Wed Jun 9 07:56:55 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA27150; Wed, 9 Jun 93 07:56:53 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA01327; Wed, 9 Jun 93 07:54:24 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26159; Wed, 9 Jun 93 08:50:43 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 9 Jun 1993 08:50:42 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26149; Wed, 9 Jun 93 08:50:41 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA20601; Wed, 9 Jun 1993 08:50:17 -0400 Date: Wed, 9 Jun 1993 08:50:17 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306091250.AA20601@rios2.epm.ornl.gov> To: lyndon@epcc.ed.ac.uk Subject: Re: Correction to second program Cc: mpi-context@cs.utk.edu Status: R > Date: Wed, 9 Jun 93 10:44:15 BST > From: L J Clarke > I agree that it is useful to have examples. Good. > The first example you sent shows that MPI makes it quite easy to do the > kind of function driven parallelism where the two "functions" do not > communicate. The second example starts to think about communication > between two different "functions". Perhaps I should devise a more > complicated yet simple example of this. A more complicated example based on the convolution example could use overlapping groups. Assume for example that the parallel FFT requires the number of processes to be a power of two, and we have a total of 12 processes. Then we could put 0,1,2,..,7 in one group and 4,5,6,...,11 in the other group. I'm not sure how much this complicates things. > I'm a little confused about the mpi_safemake_comm calls. Sorry, in my corrected version there was still an extraneous mpi_safemake_comm inside the first conditional branch. This has been corrected in the revised version below. The arguments to my version of mpi_safemake_comm are: mpi_safemake_comm (local_group, remote_group, communicator) IN local_group IN remote_group OUT communicator > I'm more confused by the receives and sends in the second example. Here the > destination rank in the sends appears to be different to the intended > source rank in the receives. Also the receives from the "other" group > appear to be choosing the source which the publish/subscribe example > described by Tony does not permit with the published/subscribed > communicator. [Perhaps this hardly matters in this simpl example since > each process receives just one message from the other group.] In the sends the rank of the destination process is relative to the remote group, i.e., the receiving group. In the receives the rank of the source process is relative to the remote group, i.e., the sending group. This seems the natural way to do things. As you point out, the source rank in the receives can be replace by MPI_DONTCARE. It's not obvious to me why a C2 receive can't specify a particular source in the remote group. Please explain this to me. > Hopefully you can help unravel my confusion. Yes, hopefully. Regards David /* PROGRAM 2. Now we'll try something a little harder. 
In the next program we find the convolution, C, of two vectors, A and B, by first finding the FFT of A and B, evaluating their elementwise product in Fourier space, and then finding the inverse transform, i.e., C = INVFFT ( FFT(A) * FFT(B) ) Of course, we can do this by having all processes first find FFT(A) and then FFT(B). The product in Fourier space can then be found with no communication being necessary. Finally, all processes cooperate to perform the inverse FFT. Program 2 doesn't do this. Instead, we split the initial group into two equally-sized subgroups, one of which evaluates FFT(A) and the other FFT(B) in parallel. Communication is then necessary between the initial group and the subgroups to re-distribute the data before doing the Fourier space product in the initial group. Finally, the processes in the initial group evaluate the inverse FFT, giving the convolution, C. The point here is that communication between groups is required. */ main () { group_t init_group, fft_group; comm_t comm_fft, comm_merge, comm_merge_A, comm_merge_B, comm_inv; comm_handle_t handle1, handle2; rso_handle_t rso; int init_rank, init_size, key, fftlen, local_len, nbytes; int source_rank, dest_rank1, dest_rank2; double A[100], B[100], C[100]; double tmpA[100], tmpB[100]; ret = get_input2 (fftlen); init_group = mpi_initial_group (); init_rank = mpi_rank (init_group); init_size = mpi_group_size (init_group); local_len = fftlen/init_size; nbytes = local_len*sizeof(double); key = init_rank&1; ret = mpi_split_group (init_group, key, init_rank, fft_group); ret = mpi_safemake_comm (fft_group, fft_group, comm_fft); ret = mpi_safemake_comm (fft_group, init_group, comm_merge); ret = mpi_safemake_comm (init_group, init_group, comm_inv); source_rank = mpi_rank (fft_group); dest_rank1 = 2*source_rank; dest_rank2 = dest_rank1 + 1; if (key) { ret = mpi_copy_comm (comm_merge, comm_merge_A); ret = mpi_publish (comm_merge_A, "GROUP_A"); ret = initialize_fft (A, fftlen, comm_fft); ret = parallel_fft (A, fftlen, comm_fft); ret = mpi_isendc (handle1, &A[0], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank1, comm_merge_A); ret = mpi_isendc (handle2, &A[local_len], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank2, comm_merge_A); ret = mpi_subscribe ("GROUP_B", comm_merge_B); } else { ret = mpi_copy_comm (comm_merge, comm_merge_B); ret = mpi_publish (comm_merge_B, "GROUP_B"); ret = initialize_fft (B, fftlen, comm_fft); ret = parallel_fft (B, fftlen, comm_fft); ret = mpi_isendc (handle1, &B[0], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank1, comm_merge_B); ret = mpi_isendc (handle2, &B[local_len], nbytes, MPI_DOUBLE, MPI_DONTCARE, dest_rank2, comm_merge_B); ret = mpi_subscribe ("GROUP_A", comm_merge_A); } ret = mpi_recvc (tmpA, nbytes, MPI_DOUBLE, MPI_DONTCARE, MPI_DONTCARE, comm_merge_A, rso); ret = mpi_recvc (tmpB, nbytes, MPI_DOUBLE, MPI_DONTCARE, MPI_DONTCARE, comm_merge_B, rso); ret = mpi_wait (handle1, rso); ret = mpi_wait (handle2, rso); ret = parallel_product (tmpA, tmpB, C, fftlen, comm_inv); ret = parallel_invfft (C, fftlen, comm_inv); ret = output_result (A, B, C, fftlen, comm_inv); exit (0); } >From weeks@mozart.convex.com Wed Jun 9 15:30:09 1993 Return-Path: Received: from ens.ens-lyon.fr by lip.ens-lyon.fr (4.1/SMI-4.1) id AA15187; Wed, 9 Jun 93 15:30:09 +0200 Received: from convex.convex.com by ens.ens-lyon.fr (4.1/SMI-4.1) id AA13219; Wed, 9 Jun 93 15:30:00 +0200 Received: from mozart.convex.com by convex.convex.com (5.64/1.35) id AA02567; Wed, 9 Jun 93 08:29:35 -0500 Received: by mozart.convex.com (5.64/1.28) id 
AA01359; Wed, 9 Jun 93 08:31:47 -0500 Date: Wed, 9 Jun 93 08:31:47 -0500 From: weeks@mozart.convex.com (Dennis Weeks) Message-Id: <9306091331.AA01359@mozart.convex.com> To: Jack.Dongarra@lip.ens-lyon.fr Subject: May 26 mpi mail #1 of 1 Cc: !m@mozart.convex.com Status: R >From owner-mpi-context@CS.UTK.EDU Wed May 26 10:07:53 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA10653; Wed, 26 May 93 10:07:46 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA26142; Wed, 26 May 93 10:08:02 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27620; Wed, 26 May 93 11:05:04 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 26 May 1993 11:05:02 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from cs.sandia.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27596; Wed, 26 May 93 11:05:00 -0400 Received: from newton.sandia.gov (newton.cs.sandia.gov) by cs.sandia.gov (4.1/SMI-4.1) id AA21486; Wed, 26 May 93 09:04:57 MDT Received: by newton.sandia.gov (5.57/Ultrix3.0-C) id AA10163; Wed, 26 May 93 09:06:01 -0600 Message-Id: <9305261506.AA10163@newton.sandia.gov> To: walker@rios2.epm.ornl.gov (David Walker) Cc: mpi-context@cs.utk.edu Subject: Re: local map constructors In-Reply-To: Your message of Wed, 26 May 93 10:43:55 -0400. <9305261443.AA14665@rios2.epm.ornl.gov> Date: Wed, 26 May 93 09:06:00 MST From: mpsears@newton.cs.sandia.gov Status: R We did not have time at the last meeting to write up a large set of useful constructors. We anticipated many of the operations you suggest, and we also anticipate that map constructors might be developed by third parties (i.e. there will be an interface that will let you roll your own.) Also, remember that maps are NOT sets, and therefore set operations need something added to them to be meaningful. For example, suppose I begin with the maps A: [0,1,2] -> [7, 2, 9] and B: [0,1] -> [7, 13]. What is their union? The union of the sets (2,7,9) and (7, 13) is well defined: (2,7,9,13). But what ordering should be applied? Mark Sears >From owner-mpi-context@CS.UTK.EDU Wed May 26 10:24:17 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA13776; Wed, 26 May 93 10:24:15 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA26815; Wed, 26 May 93 10:24:34 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28859; Wed, 26 May 93 11:21:27 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 26 May 1993 11:21:26 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28851; Wed, 26 May 93 11:21:25 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA15726; Wed, 26 May 1993 11:21:24 -0400 Date: Wed, 26 May 1993 11:21:24 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9305261521.AA15726@rios2.epm.ornl.gov> To: mpsears@newton.cs.sandia.gov Subject: Re: local map constructors Cc: mpi-context@cs.utk.edu Status: R Mark Sears writes: > We did not have time at the last meeting to write up a large > set of useful constructors. We anticipated many of the operations > you suggest, and we also anticipate that map constructors might be > developed by third parties (i.e. there will be an interface that will > let you roll your own.) OK > Also, remember that maps are NOT sets, and therefore set operations > need something added to them to be meaningful. 
For example, suppose > I begin with the maps A: [0,1,2] -> [7, 2, 9] and B: [0,1] -> [7, 13]. > What is their union? The union of the sets (2,7,9) and (7, 13) is well > defined: (2,7,9,13). But what ordering should be applied? Well I thought maps were just sets. According to the MGC draft "A possible representation for such [a] map is a list of m integers." No mention of order there. A group, on the other hand, is an ordered set of processes, but the order is arbitary. I think a better definition of group is to say "A group is a set of N processes each of which is uniquely labeled by an integer in the range 0 to N-1." So I would say that the union of the maps A and B is [2,7,9,13], or any other ordering of these 4 integers. David >From weeks@mozart.convex.com Thu Jun 10 14:39:10 1993 Return-Path: Received: from ens.ens-lyon.fr by lip.ens-lyon.fr (4.1/SMI-4.1) id AA21502; Thu, 10 Jun 93 14:39:09 +0200 Received: from convex.convex.com by ens.ens-lyon.fr (4.1/SMI-4.1) id AA26598; Thu, 10 Jun 93 14:39:04 +0200 Received: from mozart.convex.com by convex.convex.com (5.64/1.35) id AA19858; Thu, 10 Jun 93 07:12:31 -0500 Received: by mozart.convex.com (5.64/1.28) id AA12976; Thu, 10 Jun 93 07:14:26 -0500 Date: Thu, 10 Jun 93 07:14:26 -0500 From: weeks@mozart.convex.com (Dennis Weeks) Message-Id: <9306101214.AA12976@mozart.convex.com> To: Jack.Dongarra@lip.ens-lyon.fr Subject: one more mpi email to forward Status: R >From owner-mpi-context@CS.UTK.EDU Thu Jun 10 05:53:09 1993 Received: from convex.convex.com by mozart.convex.com (5.64/1.28) id AA09179; Thu, 10 Jun 93 05:53:01 -0500 Received: from CS.UTK.EDU by convex.convex.com (5.64/1.35) id AA17836; Thu, 10 Jun 93 05:50:42 -0500 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25168; Thu, 10 Jun 93 06:48:33 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 06:48:32 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA25160; Thu, 10 Jun 93 06:48:29 -0400 Date: Thu, 10 Jun 93 11:48:07 BST Message-Id: <21723.9306101048@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: June 22 context meeting To: Tony Skjellum , mpi-context@cs.utk.edu In-Reply-To: Tony Skjellum's message of Wed, 9 Jun 93 16:31:59 CDT Reply-To: lyndon@epcc.ed.ac.uk Status: R I will arrive evening of June 21. > Gentlemen, I will arrive on June 21, and hope to see as many of the context > subcommittee members for pre-meeting discussion on June 22, at the Bristol Suites. > I am writing this so everyone on the sub-committee will know that we are trying > to roll up our sleeves again, this time, pre-meeting, to be sure we're on track > by the time the meeting starts. 
> -Tony > /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ >From MAILER-DAEMON@CS.UTK.EDU Thu Jun 10 17:42:46 1993 Return-Path: Received: from ens.ens-lyon.fr by lip.ens-lyon.fr (4.1/SMI-4.1) id AA23066; Thu, 10 Jun 93 17:42:45 +0200 Received: from CS.UTK.EDU by ens.ens-lyon.fr (4.1/SMI-4.1) id AA01279; Thu, 10 Jun 93 17:42:42 +0200 Received: by CS.UTK.EDU (5.61+IDA+UTK-930125/2.8s-UTK) id AA12530; Thu, 10 Jun 93 11:42:33 -0400 Date: Thu, 10 Jun 93 11:42:33 -0400 From: Mail Delivery Subsystem Message-Id: <9306101542.AA12530@CS.UTK.EDU> Subject: Returned mail: Cannot send message for 3 days To: wade@CS.UTK.EDU, dongarra@lip.ens-lyon.fr, owner-mpi-core@CS.UTK.EDU Status: R ----- Transcript of session follows ----- 421 surfer.epm.ornl.gov (tcp)... Deferred: Connection timed out during user open with surfer.epm.ornl.gov ----- Unsent message follows ----- Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01429; Mon, 7 Jun 93 11:12:36 -0400 X-Resent-To: mpi-core@CS.UTK.EDU ; Mon, 7 Jun 1993 11:12:34 EDT Errors-To: owner-mpi-core@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01419; Mon, 7 Jun 93 11:12:33 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA18426; Mon, 7 Jun 1993 11:12:32 -0400 Date: Mon, 7 Jun 1993 11:12:32 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306071512.AA18426@rios2.epm.ornl.gov> To: mpi-core@cs.utk.edu Subject: Next MPI meeting The next meeting of the Message Passing Interface (MPI) Forum will take place in Dallas, June 23-25, 1993. Details are given below. The minutes of the last meeting in May are not yet available electronically, but should be soon. The ongoing email discussion on MPI standardization issues can be obtained from netlib. To find out what is available via email send the following message to netlib@ornl.gov: send index from mpi or use xnetlib ("send index from xnetlib" for info). The schedule for future meetings is as follows: August 11-13, 1993 Dallas September 22-24, 1993 Dallas October 27-29, 1993 Europe MPI Forum Meeting, June 23-25, 1993 =================================== The next meeting of the MPI forum will take place at Bristol Suites Hotel 7800 Alpha Road Dallas, Texas The meeting will start at 1pm on Wednesday June 23, 1993, and finish at noon on Friday, June 25, 1993. Rooms are $89 per night and reservations may be made by calling (214) 233-7600 (mention MPI meeting). The meeting registration fee will be $75. Please make checks and POs payable to University of Tennessee. The registration fee will be collected at the meeting. The registration fee will go for coffee breaks, meeting rooms, AV and printer rentals. Certain organizations need to see a registration form before giving their employees money for meetings like this. You can get such a form in PostScript by sending the following message to netlib@ornl.gov: send mpi-form.ps from mpi There is no need to send me the registration form or bring it to the meeting. TBS Shuttle Service will be providing complimentary shuttle service to and from the airports. If you fly into DFW, use their courtesy telephone and dial 03. If you fly into Love Field, you'll have to use a pay phone. They can be reached at 817-267-5150. Upon boarding the shuttle refer to the MPI meeting. 
We have NOT been able to make any special arrangements with airlines to get reduced fares. We have secured limited funding from ARPA/NSF for travel expenses of MPI meeting participants who are from U.S. universities. If you would like to apply for financial support to attend the June MPI meeting please send email to me at walker@msr.epm.ornl.gov, with justification of why you need support and an estimate of your travel expenses. Please send comments and/or suggested changes to the agenda below to me at walker@msr.epm.ornl.gov Provisional Agenda for MPI Meeting, June 23-25, 1993 Wednesday 1:30-6:00 First Reading of the contexts draft. (Skjellum) 6:00-7:30 Unofficial dinner break 7:30-10:30 Break up for subcommittee meetings Thursday 9:00-12:00 First reading of the process topologies draft. (Hempel) 12:00-1:30 Lunch (provided) 1:30-4:30 Second reading of the collective communication draft (Geist) 4:30-6:00 Second reading of the profiling draft (Cownie) 6:00-8:00 Dinner (attendees pay, but hotel provides transport to area restaurant) 8:00-10:00 Continued informal subcommittee meetings if necessary Friday 9:00-10:30 First reading of the subset draft. (Huss-Lederman) 10:30-12:00 First reading of the formal specifications draft. (???) ------- End of Forwarded Message From owner-mpi-context@CS.UTK.EDU Thu Jun 10 13:43:28 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA23248; Thu, 10 Jun 93 13:43:28 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21810; Thu, 10 Jun 93 13:43:19 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 13:43:18 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21787; Thu, 10 Jun 93 13:43:14 -0400 Date: Thu, 10 Jun 93 18:43:07 BST Message-Id: <22168.9306101743@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Next MPI meeting To: walker@rios2.epm.ornl.gov (David Walker), mpi-context@cs.utk.edu In-Reply-To: David Walker's message of Thu, 10 Jun 1993 08:35:55 -0400 Reply-To: lyndon@epcc.ed.ac.uk Hi As promised, here is David's convolution example modified. It uses a bare context in one place. Notice how the relation of ranks of sender and receiver is simpler. No calculations needed in this case. I optimised as well by removing some redundant communications - hope I understood correctly! Actually I must say that in this particular example everything is so simple that it really is easier to do the merge in the initial group. This is an artefact of the example. We should imagine the situation where the initial group is no longer in scope (because things are down in libraries) or where there are other things going on in other toplevel subgroups of the initial group at the same time (because there is function and data/event driven parallelism going on at the same time). Then it gets messy to do the merge in the initial group. Discussion of example --------------------- First I better describe the less familiar mpi style routines. mpi_safemake_comm(group, comm) IN group - the local and remote groups for the communicator OUT comm - the communicator made Note that this is an accelerator. It is defined as mpi_alloc_contexts(group, 1, &context) mpi_make_comm(group, context, comm) although it can be implemented faster. It is synchronous becasue mpi_alloc_contexts is synchronous - ie it synchronises processes in group. 
mpi_make_comm(group, context, comm) IN group - the local and remote groups for the communicator IN context - the local and remote contexts for the communicator OUT comm - the communicator made Note that this is an accelerator. It is defined as mpi_make_comma(group, context, group, context, comm) although it can be implemented faster. It is asynchronous - ie does not synchronise processes. mpi_make_comma(group1, context1, group2, context2, comm) IN group1 - the local group for the communicator IN context1 - the local context for the communicator IN group2 - the remote group for the communicator IN context2 - the remote context for the communicator OUT comm - the communicator made This procedure is asynchronous - ie does not synchronise processes. mpi_publish(group, context, name) IN group - group to associate with name, i.e. publish IN context - context to associate with name, i.e. publish IN name - name to publish by This procedure is asynchronous - ie does not synchronise processes. mpi_subscribe(name, group, context) IN name - name to subscribe by OUT group - group associated with name OUT context - context associated with name This procedure is asynchronous - ie does not synchronise processes. Now I explain out why it is better to publish a (group, context) pair rather than a communicator. Its simple really. If you publish by communicator then when you add the two communicators together (mpi_make_comma) you have to delete the old communicators, because the contexts are already in use. This means you just have to create this intermdiate object in order to add two of them together and then delete the intermediate. This is messy. So the code is simpler if you publish/subscribe on a group and context pair instead of a communicator. The other problem is whether we define mpi_free of a communicator to delete the group and/or context. ---------------------------------------------------------------------- main () { group_t init_group, fft_group, other_group; comm_t comm_fft, comm_merge, comm_inv; context_t context, other_context; comm_handle_t handle; rso_handle_t rso; int init_rank, init_size, key, fft_rank, fft_size, fftlen, local_len, nbytes; double A[100], B[100], C[100]; ret = get_input2 (fftlen); init_group = mpi_initial_group (); init_rank = mpi_rank (init_group); init_size = mpi_group_size (init_group); local_len = fftlen/init_size; nbytes = local_len*sizeof(double); key = init_rank&1; ret = mpi_split_group (init_group, key, 0, fft_group); fft_rank = mpi_rank (fft_group); ret = mpi_safemake_comm (fft_group, fft_group, comm_fft); ret = initialize_fft (A, fftlen, comm_fft); ret = parallel_fft (A, fftlen, comm_fft); mpi_alloc_context(fft_group, 1, &context); if (fft_rank == 0) { ret = mpi_publish (fft_group, context , (key ? "even" : "odd"); } ret = mpi_subscribe ((key ? 
"odd" : "even") , &other_group, &other_context); ret = mpi_make_comma(fft_group, context, other_group, other_context, comm_merge); if (key) { ret = mpi_isendc (handle, &A[local_len], nbytes, MPI_DOUBLE, 0, fft_rank, merge_A); ret = mpi_recvc (B, nbytes, MPI_DOUBLE, 0, fft_rank, comm_merge, rso); ret = mpi_wait (handle, rso); } else { ret = mpi_isendc (handle, &A[0], nbytes, MPI_DOUBLE, 0, fft_rank, comm_merge); ret = mpi_recvc (B, nbytes, MPI_DOUBLE, 0, fft_rank, comm_merge, rso); ret = mpi_wait (handle, rso); bcopy((&A[local_len], &[0], nbytes); } ret = mpi_safemake_comm (init_group, comm_inv); ret = parallel_product (A, B, C, fftlen, comm_inv); ret = parallel_invfft (C, fftlen, comm_inv); ret = output_result (C, fftlen, comm_inv); exit (0); } ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Jun 10 14:44:20 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA23431; Thu, 10 Jun 93 14:44:20 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26616; Thu, 10 Jun 93 14:44:45 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 14:44:43 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26608; Thu, 10 Jun 93 14:44:39 -0400 Date: Thu, 10 Jun 93 19:44:35 BST Message-Id: <22249.9306101844@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Next MPI Meeting (etc) To: lyndon@epcc.ed.ac.uk In-Reply-To: L J Clarke's message of Thu, 10 Jun 93 18:52:01 Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Hi Tom I forwarded some stuff to you personally, that I had sent out to mpi-context last week. I think it covers the material. I try again in a different form of words here, for your convenience. From what I can glean, what Tony/Mark suggest to incorporate into the context subcommittee working document. More precisely, this will be part of what they suggest, specifically part to do with inter communication. I hope this helps, or I just wasted 45 minutes of my life :-) Best Wishes Lyndon ---------------------------------------------------------------------- 1. The "map" should be replaced by a naked absolute group. This is similar to David's suggestion, I think, but the group is not relative to a parent or the such. 2. There should be exactly one kind of communicator object, as opposed to two communicator objects. The object should be a binding of exactly one group and exactly one context. 3. The communicator object is used for communication within the group in the way we have all come to expect in MPI - the user can do send addressing the receiver by rank in the group, and receive selecting the sender by rank in the group. 4. There should be a publish/subscribe mechanism which implementa a name service and allows a user process to obtain a communicator object which the process did not create and moreover when the user process is not a mamber of the communicator group. More details follow in the next 4 items. 5. 
The publish operation is:

   mpi_publish(communicator, name, persistence)
   IN communicator - communicator to be published in global name space
   IN name - name by which communicator to be published
   IN persistence - a flag having the value MPI_PERSISTENT for a publish which can be subscribed any number of times and remains in the global name space indefinitely, or MPI_EPHEMERAL for a publish which can be subscribed exactly once and is removed by the first subscribe.

There is also the suggestion of some kind of "permissions" argument, which is suggested to be an array of contexts which are allowed to subscribe to the published communicator. I don't understand this part of the suggestion so I can't make a further (guessed) description.

6. The subscribe operation is:

   mpi_subscribe(name, communicator)
   IN name - the name by which the communicator is published
   OUT communicator - the communicator published by the given name

The communicator obtained by subscribe provides the subscriber with the communicator context and a picture of the publisher group.

7. The publisher can use the published communicator for the following communications:
   a. Send to processes within the communicator group addressing the receiver by rank within group. This is the usual send capability.
   b. Receive from processes within the communicator group selecting the sender by rank within group or by wildcard. This is the usual receive capability.
   c. Receive from subscriber processes selecting the sender by wildcard. The receiver cannot choose the sender since the sender does not have a rank in the group. The receiver cannot discover the sender - the sender will be recorded as "unknown" through a constant such as MPI_UNKNOWN, or MPI_ANYPROCESS or the such.

8. The subscriber can use the subscribed communicator for the following communications:
   a. Send to processes within the communicator, i.e. publisher, group addressing the receiver by rank within group. This is the send capability extended to processes outside the group.

9. Tony's mail of June 5 said that the communicator can be used by the subscriber for receive selecting the sender by rank. This is inconsistent with the context allocation in the working document from Marc Snir (so I assume it is a small mistake) because the subscriber could legitimately be using the communicator context within a different communicator.

10. There is mumbling about using a group union to do receives when the sender needs to be chosen. Tony did send me an email with some C-ish example code for this but the example code does not work with the context allocator of the subcommittee working document.
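A minimal sketch of items 5-8, in the thread's C-ish pseudocode, may help; nothing in it is an agreed interface, "MODULE_X", my_rank and the other variable names are made up, and, because item 7c only ever reports the sender as unknown, the subscriber carries its own identity in the data part of the message - the workaround already noted earlier in this discussion.

   /* publisher group (items 5 and 7c) */
   comm_t my_comm;                      /* the group's own communicator          */
   rso_handle_t rso;
   double msg[101];                     /* msg[0] carries the sender's own rank  */

   mpi_publish (my_comm, "MODULE_X", MPI_PERSISTENT);
   mpi_recvc (msg, sizeof(msg), MPI_DOUBLE, MPI_DONTCARE,
              MPI_DONTCARE, my_comm, rso);       /* sender reported as unknown   */
   /* identity, if needed, comes from the payload the sender put there */

   /* subscriber, not a member of the publisher group (items 6 and 8a) */
   comm_t remote;
   comm_handle_t handle;
   int my_rank;                         /* rank in the subscriber's own group    */

   mpi_subscribe ("MODULE_X", remote);            /* picture of publisher group  */
   msg[0] = (double) my_rank;                     /* self-identification in data */
   mpi_isendc (handle, msg, sizeof(msg), MPI_DOUBLE, MPI_DONTCARE,
               0 /* rank in publisher group */, remote);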
---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Jun 10 15:08:41 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA23596; Thu, 10 Jun 93 15:08:41 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28190; Thu, 10 Jun 93 15:08:51 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 15:08:50 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28182; Thu, 10 Jun 93 15:08:49 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03908; Thu, 10 Jun 93 14:08:20 CDT Date: Thu, 10 Jun 93 14:08:20 CDT From: Tony Skjellum Message-Id: <9306101908.AA03908@Aurora.CS.MsState.Edu> To: lyndon@epcc.ed.ac.uk Subject: Re: Next MPI Meeting (etc) Cc: mpi-context@cs.utk.edu Lyndon, have you sent me e-mail describing the following failures??? - Tony ----- Begin Included Message ----- From owner-mpi-context@CS.UTK.EDU Thu Jun 10 13:44:58 1993 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 14:44:43 EDT Date: Thu, 10 Jun 93 19:44:35 BST From: L J Clarke Subject: Re: Next MPI Meeting (etc) To: lyndon@epcc.ed.ac.uk Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Content-Length: 4598 Hi Tom ... 9. Tony's mail of June 5 said that the communicator can be used by the subscriber for receive selecting the sender by rank. This is inconsistent with the context allocation in the working document from Marc Snir (so I assume it is a small mistake) because the subscriber could legitimately be using the communicator context within a different communicator. 10. There is mumbling about using a group union to do receives when the sender needs to be chosen. Tony did send me an email with some C-ish example code for this but the example code does not work with the context allocator of the subcommittee working document. ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Thu Jun 10 15:16:53 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA23626; Thu, 10 Jun 93 15:16:53 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28849; Thu, 10 Jun 93 15:17:22 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 15:17:21 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28841; Thu, 10 Jun 93 15:17:20 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA20178; Thu, 10 Jun 1993 15:17:07 -0400 Date: Thu, 10 Jun 1993 15:17:07 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306101917.AA20178@rios2.epm.ornl.gov> To: lyndon@epcc.edinburgh.ac.uk Subject: Re: Next MPI meeting Cc: mpi-context@cs.utk.edu Lyndon, First I've corected a few typos in our code. 
These lines I've marked ">>"

main ()
{
   group_t init_group, fft_group, other_group;
   comm_t comm_fft, comm_merge, comm_inv;
   context_t context, other_context;
   comm_handle_t handle;
   rso_handle_t rso;
>> int init_rank, init_size, key, ret, fft_rank, fft_size, fftlen, local_len, nbytes;
   double A[100], B[100], C[100];

   ret = get_input2 (fftlen);

   init_group = mpi_initial_group ();
   init_rank = mpi_rank (init_group);
   init_size = mpi_group_size (init_group);

   local_len = fftlen/init_size;
   nbytes = local_len*sizeof(double);

   key = init_rank&1;
   ret = mpi_split_group (init_group, key, 0, fft_group);
   fft_rank = mpi_rank (fft_group);

>> ret = mpi_safemake_comm (fft_group, comm_fft);

   ret = initialize_fft (A, fftlen, comm_fft);
   ret = parallel_fft (A, fftlen, comm_fft);

   mpi_alloc_context(fft_group, 1, &context);
   if (fft_rank == 0)
   {
      ret = mpi_publish (fft_group, context , (key ? "even" : "odd"));
   }
   ret = mpi_subscribe ((key ? "odd" : "even") , &other_group, &other_context);
   ret = mpi_make_comma(fft_group, context, other_group, other_context, comm_merge);

   if (key)
   {
      ret = mpi_isendc (handle, &A[local_len], nbytes, MPI_DOUBLE,
>>                      0, fft_rank, comm_merge);
>>    ret = mpi_recvc (&B[0], nbytes, MPI_DOUBLE, /* changed this for consistency */
                       0, fft_rank, comm_merge, rso);
      ret = mpi_wait (handle, rso);
   }
   else
   {
      ret = mpi_isendc (handle, &A[0], nbytes, MPI_DOUBLE,
                        0, fft_rank, comm_merge);
>>    ret = mpi_recvc (&B[0], nbytes, MPI_DOUBLE, /* changed this for consistency */
                       0, fft_rank, comm_merge, rso);
      ret = mpi_wait (handle, rso);
>>    bcopy (&A[local_len], &A[0], nbytes);
   }

   ret = mpi_safemake_comm (init_group, comm_inv);

>> ret = parallel_product (A, B, C, local_len); /* No communication -> no comm_inv needed */

   ret = parallel_invfft (C, fftlen, comm_inv);
   ret = output_result (C, fftlen, comm_inv);

   exit (0);
}
----------------------------------------------------------------------
Do we now agree that the above is correct?

Your code differs from mine since you actually communicate between the "odd" and "even" groups, whereas in my example communication is between the "even" group and the initial group, and the "odd" group and the initial group.

Let N be the number of processors in the initial group. Then the 2 subgroups each contain N/2 processes (we assume N to be even). The subgroups do not overlap. Any method can be used to divide the initial group into equal sized, non-intersecting subgroups. We form subgroups based on odd/even ranking in the initial group.

Immediately before the communication phase group 1 (G1) contains the first FFT in A, and group 2 (G2) contains the other FFT in A.

On completion of the communication and buffer copy stuff, what do A and B contain in each group? G1 contains even chunks of FFT1 in A and even chunks of FFT2 in B. G2 contains odd chunks of FFT1 in A and odd chunks of FFT2 in B. Relative to the initial group everything is in the correct place to form a correctly distributed C.

My example code is correct regardless of how the subgroups are formed. Lyndon's code is correct only if the odd/even method for forming subgroups is used. That's why I used communication between each subgroup and the initial group to get the data correctly distributed prior to the product and invfft steps. Anyway this is a minor point. Both examples are valid and useful, though mine could do with a bit more tidying up.

Did we decide at the last meeting that all processes bound into a communicator had to sync before using it? If so the above code needs a sync after the mpi_make_comma call.

Well, that's all for now.
David From owner-mpi-context@CS.UTK.EDU Thu Jun 10 15:45:31 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA23768; Thu, 10 Jun 93 15:45:31 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01286; Thu, 10 Jun 93 15:46:03 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 15:46:02 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01277; Thu, 10 Jun 93 15:46:00 -0400 Received: by gw1.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA05044; Thu, 10 Jun 93 19:45:59 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA20626; Thu, 10 Jun 93 13:44:23 MDT Date: Thu, 10 Jun 93 13:44:23 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9306101944.AA20626@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu Subject: Re: Next MPI Meeting (etc) Lyndon, > Hi Tom > ... > I hope this helps, or I just wasted 45 minutes of my life :-) > > Best Wishes > Lyndon Yes, this helps a lot. Thanks! > 5. The publish operation is: > > mpi_publish(communicator, name, persistence) > > IN communicator - communicator to be published in global name space > IN name - name by which communicator to be published > IN persistence - a flag having the value MPI_PERSISTENT for a publish > which can be subscribed any number of times and > remains in the global name space indefinitely, > or MPI_EPHEMERAL for a publish which can be > subscribed exactly once and is removed by the > first subscribe. > > There is also the suggestion of some kind of "permissions" argument, > which is suggested to be an array of contexts which are allowed to > subscribe to the published communicator. I dont understand this part of > the suggestion so I cant make further (guessed) description. A few comments: 1) Would it be a good idea to have a way of publishing a whole bunch of communicators at once? We designed a tag registration server a while back (and have not implemented it hoping MPI will do a better job :-) with registration routines that allow a calling process to specify how many "tags" it wants to reserve under a particular name. This might be good for context name registration also. The subscribe routine would match: mpi_publish(list_of_communicators, number_of_communicators, name, persistence) IN list_of_communicators - list of communicator to be published in global name space IN number_of_communicators - number of communicators in list_of_communicators IN name - name by which communicators will be published IN persistence - a flag having the value MPI_PERSISTENT for a publish which can be subscribed any number of times and remains in the global name space indefinitely, or MPI_EPHEMERAL for a publish which can be subscribed exactly once and is removed by the first subscribe. mpi_subscribe(name, list_of_communicators, number_of_communicators) IN name - the name by which the communicator is published OUT list_of_communicators - the communicator published by the given name OUT number_of_communicators - the communicator published by the given name 2) What happens when multiple processes call mpi_publish() with the same name? I could see this happening in a distributed server for instance... I assume that mpi_publish() will return a status indicating that the name has already been registered. 
The process would then call mpi_subscribe() to find out what name is associated with: status = mpi_publish(comm, NAME, MPI_PERSISTENT) if (status == MPI_NAME_ALREADY_PUBLISHED) mpi_subscribe(NAME, comm) If this is happens a lot (I think it will with the current MIMD/SPMD model), we may want to modify publish() to do this for us automatically: mpi_publish(in_communicator, name, persistence, out_communicator) IN in_communicator - communicator to be published in global name space IN name - name by which communicator is to be published IN persistence - a flag having the value MPI_PERSISTENT for a publish which can be subscribed any number of times and remains in the global name space indefinitely, or MPI_EPHEMERAL for a publish which can be subscribed exactly once and is removed by the first subscribe. OUT out_communicator - if name already exists in the global name space, out_communicator is set to the communicator registered under name. Otherwise out_communicator is ignored. Return status indicates if in_communicator was successfully registered or if out_communicator was previously registered under name. If the calling process cares, it can check to see if in_communicator and out_communicator are the same in the second case. 3) 1 and 2 could be combined. 4) If we have mpi_unpublish(name) (and I think we should), then we could make the restriction that only the publish()'ing process can unpublish(). I'm not yet sure if this is a good idea... Comments? Tom From owner-mpi-context@CS.UTK.EDU Thu Jun 10 15:49:58 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-UTK) id AA23816; Thu, 10 Jun 93 15:49:58 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01573; Thu, 10 Jun 93 15:50:18 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 10 Jun 1993 15:50:17 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01561; Thu, 10 Jun 93 15:50:15 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA04494; Thu, 10 Jun 93 14:49:52 CDT Date: Thu, 10 Jun 93 14:49:52 CDT From: Tony Skjellum Message-Id: <9306101949.AA04494@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, hender@macaw.fsl.noaa.gov Subject: Re: Next MPI Meeting (etc) 4) If we have mpi_unpublish(name) (and I think we should), then we could make the restriction that only the publish()'ing process can unpublish(). I'm not yet sure if this is a good idea... Comments? Tom Yes, I concur. This was a detail we did not write down yet! - Tony From owner-mpi-context@CS.UTK.EDU Fri Jun 11 05:27:08 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA00174; Fri, 11 Jun 93 05:27:08 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29090; Fri, 11 Jun 93 05:27:27 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 11 Jun 1993 05:27:26 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29081; Fri, 11 Jun 93 05:27:23 -0400 Date: Fri, 11 Jun 93 10:27:13 BST Message-Id: <22929.9306110927@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Next MPI Meeting (etc) To: Tony Skjellum , lyndon@epcc.ed.ac.uk In-Reply-To: Tony Skjellum's message of Thu, 10 Jun 93 14:08:20 CDT Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Dear Tony > > Lyndon, have you sent me e-mail describing the following failures??? 
> I dont see these as "failures", but the answer is "yes and yes". Will it help if I resend same email-s, or perhaps if I send new email covering both points more slowly and in more detail. I'm happy to do either or both if it helps. > 9. Tony's mail of June 5 said that the communicator can be used by the > subscriber for receive selecting the sender by rank. This is > inconsistent with the context allocation in the working document from > Marc Snir (so I assume it is a small mistake) because the subscriber > could legitimately be using the communicator context within a different > communicator. I do believe I sent stuff to you personally during our conversation of last weekend and also to mpi-context this week, both of which stuff covered this point among others. Relevant header data of latter email follows. Date: Sun, 6 Jun 93 16:39:45 BST Subject: mpi-context; comments on "June 5 - intercommunication" To: mpi-context@cs.utk.edu Indeed scanning that email I can see that I asked the question whether yourself/Mark were also suggesting that the context allocation is changed to be like Zipcode/Proposal-X, in which case it would be fine for the subscriber to use the subscribed communicator for receive. > 10. There is mumbling about using a group union to do receives when the > sender needs to be chosen. Tony did send me an email with some C-ish > example code for this but the example code does not work with the > context allocator of the subcommittee working document. > I replied to your mail containing example C-ish code making this point, relevant email header data follows. Did this fail to reach you? Date: Sun, 6 Jun 93 20:17:24 Subject: Re: more ideas on C2-level To: Tony Skjellum /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Jun 11 05:29:17 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA00181; Fri, 11 Jun 93 05:29:17 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29244; Fri, 11 Jun 93 05:29:53 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 11 Jun 1993 05:29:52 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29233; Fri, 11 Jun 93 05:29:50 -0400 Date: Fri, 11 Jun 93 10:29:49 BST Message-Id: <22949.9306110929@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Next MPI Meeting (etc) To: Tony Skjellum , mpi-context@cs.utk.edu, hender@macaw.fsl.noaa.gov In-Reply-To: Tony Skjellum's message of Thu, 10 Jun 93 14:49:52 CDT Reply-To: lyndon@epcc.ed.ac.uk > > 4) > > If we have mpi_unpublish(name) (and I think we should), then we could make the > restriction that only the publish()'ing process can unpublish(). I'm not yet > sure if this is a good idea... Comments? > > Tom > > > Yes, I concur. This was a detail we did not write down yet! > - Tony > Yup, its worth having an unpublish(), and restrict such that publisher process alone can do the unpublish. 
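As a sketch of the name-service lifecycle being converged on in this thread - publish, fall back to subscribe when the name is already taken (Tom's point 2), and publisher-only unpublish (point 4) - the following C-ish fragment uses the call names and constants from the mails above (mpi_publish, mpi_subscribe, mpi_unpublish, MPI_PERSISTENT, MPI_NAME_ALREADY_PUBLISHED); the status-code convention and the control flow are assumptions for illustration only, not agreed semantics.
----------------------------------------------------------------------
name_service_example (comm_t comm)
{
    int status, i_am_publisher;

    /* try to register our own communicator under a well-known name */
    status = mpi_publish (comm, "SERVER", MPI_PERSISTENT);

    if (status == MPI_NAME_ALREADY_PUBLISHED)
    {
        /* another process registered "SERVER" first - attach to the
           communicator it published instead of using our own */
        status = mpi_subscribe ("SERVER", comm);
        i_am_publisher = 0;
    }
    else
        i_am_publisher = 1;

    /* ... communicate using comm ... */

    /* restriction agreed above: only the publishing process may remove
       the name from the global name space */
    if (i_am_publisher)
        status = mpi_unpublish ("SERVER");
}
----------------------------------------------------------------------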
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Jun 11 06:53:27 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA00559; Fri, 11 Jun 93 06:53:27 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03577; Fri, 11 Jun 93 06:53:59 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 11 Jun 1993 06:53:58 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03569; Fri, 11 Jun 93 06:53:51 -0400 Date: Fri, 11 Jun 93 11:53:51 BST Message-Id: <23058.9306111053@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Next MPI meeting To: walker@rios2.epm.ornl.gov (David Walker) In-Reply-To: David Walker's message of Thu, 10 Jun 1993 15:17:07 -0400 Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Hi David Thanks for the typo corrections and further comments. I realise that there are further redundancies/optimisations, and just one typo, so I enclose hopefully a final version of this line at the foot of the message. > > Your code differs from mine since you actually communicate between the "odd" and > "even" groups, whereas in my example communication is between the "even" group and > the initial group, and the "odd" group and the initial group. > Yes. On reflection, in this example, you did it right and I did it wrong. Lets look at the timeline of this program. ------------------------- ------------------------- | odd group - fft | | even group - fft | ------------------------- ------------------------- || || || || \/ \/ ---------------------------------------------------------- | init group - parallel product then inverse fft | ---------------------------------------------------------- The arrows represent the merge communications. These comms are clearly happening between the two fft groups which I've labelled odd and even, and the init group. So its best programming practice to write the communications that way. I got us into this mess - I'll send out a cleaned up version of your original example later. > Let N be the number of processors in the initial group. Then the 2 subgroups each > contain N/2 processes (we assume N to be even). The subgroups do not overlap. Any > method can be used to divide the initial group into equal sized, non-intersecting > subgroups. We form subgroups based on odd/even ranking in the initial group. > > Immediately before the communication phase group 1 (G1) contains the first FFT in A, > and group 2 (G2) contains the other FFT in A. > > On completion of the communication and buffer copy stuff, what do A and B contain > in each group? G1 contains even chunks of FFT1 in A and even chunks of FFT2 in B. > G2 contains odd chunks of FFT1 in A and odd chunks of FFT2 in B. Relative > to the initial group everything is in the correct place to form a correctly > distributed C. Yup. > My example code is correct regardless of how the subgroups are formed. Lyndon's code > is correct only if the odd/even method for forming subgroups is used. I disagree. In the code you wrote the program calculates the destination ranks dest_rank1 and dest_rank2. 
This calculation uses knowledge of how the subgroups were formed to relate ranks in the fft groups to those in the init group. Both codes rely on how the subgroups were formed. > Did we decide at the last meeting that all processes bound into a communicator had > to sync before using it? If so the above code needs a sync after the mpi_make_comma call. I recall Tony explaining that in the Friday morning meeting the opposite agreement was made after a point raised by Paul Pierce. So no, I do not think that an mpi_sync is needed. In this example it doesnt really matter an iota whether or not there has to be a sync there, since the program is strongly SPMD and its safe to rely on sequencing/synchronisation. Best Wishes Lyndon ---------------------------------------------------------------------- main () { group_t init_group, fft_group, other_group; comm_t comm_fft, comm_merge, comm_inv; context_t context, other_context; comm_handle_t handle; rso_handle_t rso; int init_rank, init_size, key, ret, fft_rank, fft_size, fftlen, local_len, nbytes, keep_part, give_part; double A[100], B[100], C[100]; ret = get_input2 (fftlen); init_group = mpi_initial_group (); init_rank = mpi_rank (init_group); init_size = mpi_group_size (init_group); local_len = fftlen/init_size; nbytes = local_len*sizeof(double); key = init_rank&1; ret = mpi_split_group (init_group, key, 0, fft_group); fft_rank = mpi_rank (fft_group); ret = mpi_safemake_comm (fft_group, comm_fft); ret = initialize_fft (A, fftlen, comm_fft); ret = parallel_fft (A, fftlen, comm_fft); mpi_alloc_context(fft_group, 1, &context); if (fft_rank == 0) { ret = mpi_publish (fft_group, context , (key ? "even" : "odd"); } ret = mpi_subscribe ((key ? "odd" : "even") , &other_group, &other_context); ret = mpi_make_comma(fft_group, context, other_group, other_context, comm_merge); keep_part = (key != 0) ? 0 : local_len; give_part = (key == 0) ? 
0 : local_len; ret = mpi_isendc (handle, &A[give_part], nbytes, MPI_DOUBLE, 0, fft_rank, comm_merge); ret = mpi_recvc (&B[keep_part], nbytes, MPI_DOUBLE, 0, fft_rank, comm_merge, rso); ret = mpi_wait (handle, rso); ret = parallel_product (&A[keep_part], &B[keep_part], C, local_len); ret = mpi_safemake_comm (init_group, comm_inv); ret = parallel_invfft (C, fftlen, comm_inv); ret = output_result (C, fftlen, comm_inv); exit (0); } ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Jun 11 09:38:46 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA01137; Fri, 11 Jun 93 09:38:46 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13701; Fri, 11 Jun 93 09:38:49 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 11 Jun 1993 09:38:48 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13692; Fri, 11 Jun 93 09:38:47 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA19992; Fri, 11 Jun 1993 09:37:36 -0400 Date: Fri, 11 Jun 1993 09:37:36 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306111337.AA19992@rios2.epm.ornl.gov> To: lyndon@epcc.ed.ac.uk Subject: Re: Next MPI meeting Cc: mpi-context@cs.utk.edu Date: Fri, 11 Jun 93 11:53:51 BST From: L J Clarke > Hi David > > [stuff deleted] > >> Your code differs from mine since you actually communicate between the "odd" >> and "even" groups, whereas in my example communication is between the "even" >> group and the initial group, and the "odd" group and the initial group. > > Yes. On reflection, in this example, you did it right and I did it > wrong. Lets look at the timeline of this program. > > ------------------------- ------------------------- > | odd group - fft | | even group - fft | > ------------------------- ------------------------- > || || > || || > \/ \/ > ---------------------------------------------------------- > | init group - parallel product then inverse fft | > ---------------------------------------------------------- > > The arrows represent the merge communications. These comms are clearly > happening between the two fft groups which I've labelled odd and even, > and the init group. So its best programming practice to write the > communications that way. I got us into this mess - I'll send out a > cleaned up version of your original example later. I look forward to seeing the cleaned up version of my code. >> >> [stuff deleted] >> >> My example code is correct regardless of how the subgroups are formed. >> Lyndon's code is correct only if the odd/even method for forming >> subgroups is used. > I disagree. In the code you wrote the program calculates the > destination ranks dest_rank1 and dest_rank2. This calculation uses > knowledge of how the subgroups were formed to relate ranks in the fft > groups to those in the init group. Both codes rely on how the subgroups > were formed. I still think I am right. When sending, the rank of the destination process in the init_group is relative to the init_group. Similarly when receiving, the rank of the source process in the fft_group is relative to that group. 
In my code I can change the way key is calculated, and hence the composition of the subgroups, and still everything will work OK. For example, I could say key = (init_rank < init_size/2).

More generally, my example will still work for 3 arbitrary groups, provided the two FFT groups are distinct and each is half the size of the third group (which does the product and inverse FFT). For example, if we had N processes, N/4 could work on FFT1, N/4 could work on FFT2, and N/2 could work on the product and inverse FFT. So I could input one array and find its FFT on a bunch of workstations at ORNL, input and do the second FFT on the Delta at Caltech, and each could send its results to EPCC where the product and inverse are done on a transputer array and displayed to a graphical device.

>> Did we decide at the last meeting that all processes bound into a
>> communicator had to sync before using it? If so the above code needs a
>> sync after the mpi_make_comma call.

> I recall Tony explaining that in the Friday morning meeting the opposite
> agreement was made after a point raised by Paul Pierce. So no, I do not
> think that an mpi_sync is needed. In this example it doesnt really
> matter an iota whether or not there has to be a sync there, since the
> program is strongly SPMD and its safe to rely on
> sequencing/synchronisation.

Well if we're not requiring synchronization (which would amount to another sort of ready mode related to communicators) this discussion is academic (so I'll continue). It seems to me that if one subgroup is running on a much faster bunch of processors and the other isn't then the fast group could whip through its mpi_make_comma and mpi_isendc and get a message to the other group before that group could complete its mpi_make_comma.

Best Wishes,
David

> Lyndon's code -------------------------------------------------------- > main () > { > group_t init_group, fft_group, other_group; > comm_t comm_fft, comm_merge, comm_inv; > context_t context, other_context; > comm_handle_t handle; > rso_handle_t rso; > int init_rank, init_size, key, ret, > fft_rank, fft_size, fftlen, > local_len, nbytes, keep_part, give_part; > double A[100], B[100], C[100]; > > ret = get_input2 (fftlen); > > init_group = mpi_initial_group (); > init_rank = mpi_rank (init_group); > init_size = mpi_group_size (init_group); > > local_len = fftlen/init_size; > nbytes = local_len*sizeof(double); > > key = init_rank&1; > ret = mpi_split_group (init_group, key, 0, fft_group); > fft_rank = mpi_rank (fft_group); > > ret = mpi_safemake_comm (fft_group, comm_fft); > > ret = initialize_fft (A, fftlen, comm_fft); > ret = parallel_fft (A, fftlen, comm_fft); > > mpi_alloc_context(fft_group, 1, &context); > if (fft_rank == 0) > { > ret = mpi_publish (fft_group, context , (key ? "even" : "odd"); > } > ret = mpi_subscribe ((key ? "odd" : "even") , &other_group, &other_context); > ret = mpi_make_comma(fft_group, context, > other_group, other_context, comm_merge); > > keep_part = (key != 0) ? 0 : local_len; > give_part = (key == 0) ?
0 : local_len; > > ret = mpi_isendc (handle, &A[give_part], nbytes, MPI_DOUBLE, > 0, fft_rank, comm_merge); > ret = mpi_recvc (&B[keep_part], nbytes, MPI_DOUBLE, > 0, fft_rank, comm_merge, rso); > ret = mpi_wait (handle, rso); > > ret = parallel_product (&A[keep_part], &B[keep_part], C, local_len); > > ret = mpi_safemake_comm (init_group, comm_inv); > > ret = parallel_invfft (C, fftlen, comm_inv); > ret = output_result (C, fftlen, comm_inv); > > exit (0); > } > ---------------------------------------------------------------------- From owner-mpi-context@CS.UTK.EDU Fri Jun 11 09:52:45 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA01260; Fri, 11 Jun 93 09:52:45 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14733; Fri, 11 Jun 93 09:53:11 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 11 Jun 1993 09:53:10 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14691; Fri, 11 Jun 93 09:53:01 -0400 Date: Fri, 11 Jun 93 14:53:01 BST Message-Id: <23212.9306111353@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Next MPI meeting To: walker@rios2.epm.ornl.gov (David Walker), lyndon@epcc.ed.ac.uk In-Reply-To: David Walker's message of Fri, 11 Jun 1993 09:37:36 -0400 Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu David Sometimes, I am stupid. Sorry. I will clean your original example and send later, but later will now most probably be Monday. > > I still think I am right. You are right. > More generally, my example will still work for 3 arbitrary > groups, provided the two FFT groups are distinct and each is > half the size of the third group (which does the product and inverse > FFT). For example, if we had N processes, N/4 could work on FFT1, > N/4 could work on FFT2, and N/2 could work on the product and > inverse FFT. So I could input one array and find its FFT on a bunch > of workstations at ORNL, input and do the second FFT on the Delta at > Caltech, and each could send its results to EPCC where the product and > inverse are done on a transputer array and displayed to a graphical > device. Yup, my stupidity in the extreme, your example was deeper than I first appreciated. > > Well if we're not requiring synchronization (which would amount to > another sort of ready mode related to communicators) this discussion > is academic (so I'll continue). :-) > It seems to me that if one subgroup > is running on a very faster bunch of processors and the other isn't > then the fast group could whip through its mpi_make_comma and > mpi_isendc and get a message to the other group before that group > could complete its mpi_make_comma So why is this a problem? The message can just wait in a queue until the slow group makes it communicator and does its receive. 
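Purely as an illustration of the "wait in a queue" behaviour described above (and of the buffering that Jim Cownie's following message argues was accepted after Paul Pierce's example), here is one way an implementation might park messages that arrive for a context the receiving process has not yet created. None of this is proposed MPI semantics; every type and function name in the sketch is invented for illustration.
----------------------------------------------------------------------
/* Implementation sketch only - not proposed user-level MPI semantics.
 * Messages that arrive for a not-yet-created context are parked, then
 * adopted when the matching communicator is finally constructed. */

typedef struct pending_msg {
    int                 context_id;  /* context stamped on the incoming message */
    int                 tag;
    int                 nbytes;
    char               *data;        /* private copy of the message body */
    struct pending_msg *next;
} pending_msg_t;

static pending_msg_t *pending_head = 0;

/* called by the message layer when no local communicator matches the
   context carried by an incoming message */
void park_message (pending_msg_t *msg)
{
    pending_msg_t **link = &pending_head;

    while (*link)                    /* append at the tail to preserve arrival order */
        link = &(*link)->next;
    msg->next = 0;
    *link = msg;
}

/* called when a communicator bound to `context_id' is created locally:
   every parked message for that context moves to the communicator's
   ordinary receive queue, where a receive can now match it */
void adopt_parked_messages (int context_id, void (*enqueue) (pending_msg_t *))
{
    pending_msg_t **link = &pending_head;

    while (*link)
    {
        if ((*link)->context_id == context_id)
        {
            pending_msg_t *msg = *link;
            *link = msg->next;       /* unlink from the pending list */
            enqueue (msg);
        }
        else
            link = &(*link)->next;
    }
}
----------------------------------------------------------------------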
Best Wishes Lyndon (the sometimes stupid) /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Fri Jun 11 10:04:05 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA01294; Fri, 11 Jun 93 10:04:05 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15476; Fri, 11 Jun 93 10:04:42 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 11 Jun 1993 10:04:38 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15460; Fri, 11 Jun 93 10:04:34 -0400 Received: from tycho.co.uk (float.meiko.co.uk) by hub.meiko.co.uk with SMTP id AA08063 (5.65c/IDA-1.4.4 for mpi-context@cs.utk.edu); Fri, 11 Jun 1993 15:04:37 +0100 Date: Fri, 11 Jun 1993 15:04:37 +0100 From: James Cownie Message-Id: <199306111404.AA08063@hub.meiko.co.uk> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA00490; Fri, 11 Jun 93 15:03:56 BST To: mpi-context@cs.utk.edu In-Reply-To: <23212.9306111353@subnode.epcc.ed.ac.uk> (message from L J Clarke on Fri, 11 Jun 93 14:53:01 BST) Subject: Re: Next MPI meeting Content-Length: 1720 > > It seems to me that if one subgroup > > is running on a very faster bunch of processors and the other isn't > > then the fast group could whip through its mpi_make_comma and > > mpi_isendc and get a message to the other group before that group > > could complete its mpi_make_comma > > So why is this a problem? The message can just wait in a queue until the > slow group makes it communicator and does its receive. Indeed, as I remember the discussion at the end of the last meeting, the conclusion was that insisting on the synchronisation caused unpleasant problems (especially in Paul Pierce's example), and that we should allow programs to send messages to other nodes which had not yet constructed the correct context in which to receive them. This seems reasonable (though I don't actually like the implementation implications...). The question is what happens to a message which arrives at a node for a context which does not yet exist at that node ? The presentation at the last meeting said this was illegal, hence the need to synchronise. (And the funny synchronisation function on a group which doesn't yet exist). After Paul's argument it seemed to be accepted that such messages should be buffered until the context they could be matched with a receive (which of course implies the context now exists). The analogous situation with the tag field would be to allow only the "ready receiver send"... -- Jim James Cownie Meiko Limited Meiko Inc. 
650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Fri Jun 11 12:10:04 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02151; Fri, 11 Jun 93 12:10:04 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24107; Fri, 11 Jun 93 12:10:22 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 11 Jun 1993 12:10:21 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24089; Fri, 11 Jun 93 12:10:17 -0400 Date: Fri, 11 Jun 93 17:10:13 BST Message-Id: <23318.9306111610@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: [David Walker: Re: Next MPI meeting] To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk David asked me to bounce this on to mpi-context. ---- Start of forwarded text ---- > From walker@rios2.epm.ornl.gov Fri Jun 11 15:02:31 1993 > Received: from rios2.epm.ornl.gov by epcc.ed.ac.uk; Fri, 11 Jun 93 15:02:23 BST > Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) > id AA16959; Fri, 11 Jun 1993 10:02:20 -0400 > Date: Fri, 11 Jun 1993 10:02:20 -0400 > From: walker@rios2.epm.ornl.gov (David Walker) > Message-Id: <9306111402.AA16959@rios2.epm.ornl.gov> > To: lyndon@epcc.ed > Subject: Re: Next MPI meeting > Status: RO > > > > [stuff deleted] > >> > >> Well if we're not requiring synchronization (which would amount to > >> another sort of ready mode related to communicators) this discussion > >> is academic (so I'll continue). > > > >:-) > > > >> It seems to me that if one subgroup > >> is running on a very faster bunch of processors and the other isn't > >> then the fast group could whip through its mpi_make_comma and > >> mpi_isendc and get a message to the other group before that group > >> could complete its mpi_make_comma > > > >So why is this a problem? The message can just wait in a queue until the > >slow group makes it communicator and does its receive. > > Yes, this is what would be done. But the discussion, as I recall it, in > Dallas was about whether a process should stick messages for > communicators its never heard of into a system buffer or not. I can't > recall the pros and cons, but if this was deemed to be OK, then its > OK with me. 
> > Best Wishes, > David > ---- End of forwarded text ---- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Jun 15 17:52:24 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA09546; Tue, 15 Jun 93 17:52:24 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09829; Tue, 15 Jun 93 17:50:51 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 15 Jun 1993 17:50:50 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09816; Tue, 15 Jun 93 17:50:49 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA26001; Tue, 15 Jun 93 16:49:45 CDT Date: Tue, 15 Jun 93 16:49:45 CDT From: Tony Skjellum Message-Id: <9306152149.AA26001@Aurora.CS.MsState.Edu> To: SNIR@watson.ibm.com Subject: Re: context Cc: mpi-context@cs.utk.edu ----- Begin Included Message ----- From snir@watson.ibm.com Tue Jun 15 15:58:58 1993 Date: Tue, 15 Jun 93 16:58:08 EDT From: "Marc Snir" X-Addr: (914) 945-3204 (862-3204) 28-226 IBM T.J. Watson Research Center P.O. Box 218 Yorktown Heights NY 10598 To: tony@cs.msstate.edu Subject: context Reply-To: SNIR@watson.ibm.com Content-Length: 147 Could you send the updated draft before the weekend? Do you need help in updating the map/group constructs, according to the suggestion of Walker? ----- End Included Message ----- Marc, there has been a tremendous amount of discussion about groups, maps, etc, since last meeting. Specifically, the following items have come up (during a meeting between Mark Sears and me, and with EXTENSIVE comments from Lyndon)... maps... concensus is to drop them, returning to groups. groups... make groups be ordered collections of opaque process names. We went round and round at Sandia (Mark and I) discussing the maps, groups, etc, and seeing how everything fits together. David Walker's comments spurred this discussion. We feel that the system is simpler without the relative maps, that is, groups are easier to explain. As groups remain opaque, the full power to retain underlying translation tables is retained. We have come up with a C1 strategy for creating/sharing communicators. This is tentatively the publish/subscribe strategy. I will most likely update the draft this weekend, and work with context sub-committee members next Tuesday, before meeting starts. Can you be there a day early? - Tony From owner-mpi-context@CS.UTK.EDU Wed Jun 16 08:07:11 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA15061; Wed, 16 Jun 93 08:07:11 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA07263; Wed, 16 Jun 93 08:07:11 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 16 Jun 1993 08:07:10 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA07255; Wed, 16 Jun 93 08:07:05 -0400 Date: Wed, 16 Jun 93 13:07:09 BST Message-Id: <29055.9306161207@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: convolution example To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Just for completeness here is David's convolution example cleaned up a bit as promised. 
I've really done little more than add a few comments and localised the manipulation of communicators around the usage thereof. Hope I didnt break it! Best Wishes Lyndon ---------------------------------------------------------------------- /* * * PROGRAM 2. * Now we'll try something a little harder. In the next program we find the * convolution, C, of two vectors, A and B, by first finding the FFT of A * and B, evaluating their elementwise product in Fourier space, and then * finding the inverse transform, i.e., * * C = INVFFT ( FFT(A) * FFT(B) ) * * Of course, we can do this by having all processes first find FFT(A) and * then FFT(B). The product in Fourier space can then be found with no * communication being necessary. Finally, all processes cooperate to * perform the inverse FFT. Program 2 doesn't do this. Instead, we split * the initial group into two equally-sized subgroups, one of which * evaluates FFT(A) and the other FFT(B) in parallel. Communication is * then necessary between the initial group and the subgroups to * re-distribute the data before doing the Fourier space product in the * initial group. Finally, the processes in the initial group evaluate * the inverse FFT, giving the convolution, C. The point here is that * communication between groups is required. * */ main () { group_t init_group, fft_group; comm_t comm_fft, comm_merge_A, comm_merge_B, comm_inv; comm_handle_t handle1, handle2; rso_handle_t rso; int init_rank, init_size, key, fft_rank, fft_size, fftlen, local_len, nbytes; int source_rank, dest_rank1, dest_rank2; double F[100], A[100], B[100], C[100]; /* input the fft length */ ret = get_input2 (fftlen); /* find out about the initial group */ init_group = mpi_initial_group (); init_rank = mpi_rank (init_group); init_size = mpi_group_size (init_group); /* split the group in half, can be done any old way */ key = init_rank&1; ret = mpi_split_group (init_group, key, init_rank, fft_group); /* find out about this fft group */ fft_rank = mpi_rank (fft_group); fft_size = mpi_rank (fft_group); /* work out how big merge messages will be */ local_len = fftlen/init_size; nbytes = local_len*sizeof(double); /* work out sources and destinations of merge messages */ source_rank = fft_rank; dest_rank1 = 2*source_rank; dest_rank2 = dest_rank1 + 1; /* do the fft */ ret = mpi_safemake_comm (fft_group, fft_group, comm_fft); ret = initialize_fft (F, fftlen, comm_fft); ret = parallel_fft (F, fftlen, comm_fft); ret = mpi_free(comm_fft); /* done */ /* do the merge */ if (key != 0) { ret = mpi_safemake_comm (fft_group, init_group, comm_merge_A); ret = mpi_isendc (handle1, &F[0], nbytes, MPI_DOUBLE, 0, dest_rank1, comm_merge_A); ret = mpi_isendc (handle2, &F[local_len], nbytes, MPI_DOUBLE, 0, dest_rank2, comm_merge_A); ret = mpi_publish (comm_merge_A, "GROUP_A"); ret = mpi_subscribe ("GROUP_B", comm_merge_B); } else { ret = mpi_safemake_comm (fft_group, init_group, comm_merge_B); ret = mpi_isendc (handle1, &F[0], nbytes, MPI_DOUBLE, 0, dest_rank1, comm_merge_B); ret = mpi_isendc (handle2, &F[local_len], nbytes, MPI_DOUBLE, 0, dest_rank2, comm_merge_B); ret = mpi_publish (comm_merge_B, "GROUP_B"); ret = mpi_subscribe ("GROUP_A", comm_merge_A); } ret = mpi_recvc (A, nbytes, MPI_DOUBLE, MPI_DONTCARE, source_rank, comm_merge_A, rso); ret = mpi_recvc (B, nbytes, MPI_DOUBLE, MPI_DONTCARE, source_rank, comm_merge_B, rso); ret = mpi_wait (handle1, rso); ret = mpi_wait (handle2, rso); ret = mpi_free(comm_merge_A); ret = mpi_free(comm_merge_B); /* done */ /* do the product */ ret = parallel_product 
(A, B, C, fftlen); /* done */ /* do the inverse fft and output result */ ret = mpi_safemake_comm (init_group, init_group, comm_inv); ret = parallel_invfft (C, fftlen, comm_inv); ret = output_result (C, fftlen, comm_inv); ret = mpi_free(comm_inv); /* done */ /* finished */ exit (0); } ---------------------------------------------------------------------- /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/
From owner-mpi-context@CS.UTK.EDU Wed Jun 16 08:46:08 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA15156; Wed, 16 Jun 93 08:46:08 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09505; Wed, 16 Jun 93 08:46:14 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 16 Jun 1993 08:46:13 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09497; Wed, 16 Jun 93 08:46:12 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA18101; Wed, 16 Jun 1993 08:44:53 -0400 Message-Id: <9306161244.AA18101@rios2.epm.ornl.gov> To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Subject: Convolution Date: Wed, 16 Jun 93 08:44:53 -0500 From: David W. Walker
Lyndon,
I think there's a bug in your/my program. You can't call mpi_safemake_comm inside the conditional because it is a collective routine that must be called loosely synchronously by all processes in fft_group and init_group.
David
From owner-mpi-context@CS.UTK.EDU Wed Jun 16 09:14:45 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA15257; Wed, 16 Jun 93 09:14:45 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA11066; Wed, 16 Jun 93 09:14:50 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 16 Jun 1993 09:14:49 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA11058; Wed, 16 Jun 93 09:14:45 -0400 Date: Wed, 16 Jun 93 14:15:01 BST Message-Id: <29104.9306161315@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Convolution To: David W. Walker , lyndon@epcc.ed.ac.uk In-Reply-To: David W. Walker's message of Wed, 16 Jun 93 08:44:53 -0500 Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu
Hi David
Yeah, I noticed that too. But is there any difference between the two cases? In both cases all processes call mpi_safemake_comm(fft_group, init_group, ...). In both cases half the processes are in the fft_group which is composed of the odd members of init_group, and half the processes are in the fft_group which is composed of the even members of init_group. Yeah, it looks like the example just doesn't work in this respect, wherever the mpi_safemake_comm(fft_group, init_group, ...) call is placed. On the other hand maybe I'm making a stupid error again as I am rushing. What do you think?
Best Wishes
Lyndon
> > Lyndon, > I think there's a bug in your/my program. You can't call mpi_safemake_comm > inside the conditional because it is a collective routine that must be called > loosely synchronously by all processes in fft_group and init_group.
> > David > /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Jun 16 11:24:51 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA16616; Wed, 16 Jun 93 11:24:51 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20846; Wed, 16 Jun 93 11:24:40 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 16 Jun 1993 11:24:39 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20838; Wed, 16 Jun 93 11:24:38 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA27477; Wed, 16 Jun 93 10:23:44 CDT Date: Wed, 16 Jun 93 10:23:44 CDT From: Tony Skjellum Message-Id: <9306161523.AA27477@Aurora.CS.MsState.Edu> To: SNIR@watson.ibm.com Subject: Re: context Cc: mpi-context@cs.utk.edu Marc, I will work on draft all weekend, and TRY HARD to have it out by Monday. I understand your point on the reading of the draft. Perhaps we can do it Thursday at meeting, to guarantee at least on extra day for people to read it. I will be working on this all weekend... if possible also Friday. I would like to have an opportunity to edit the draft first, then show it to you and the others. - Tony ----- Begin Included Message ----- From snir@watson.ibm.com Wed Jun 16 08:52:21 1993 Date: Wed, 16 Jun 93 09:49:15 EDT From: "Marc Snir" X-Addr: (914) 945-3204 (862-3204) 28-226 IBM T.J. Watson Research Center P.O. Box 218 Yorktown Heights NY 10598 To: tony@Aurora.CS.MsState.Edu Reply-To: SNIR@watson.ibm.com Subject: Re: context Content-Length: 429 Reference: Your note of Tue, 15 Jun 93 16:49:45 CDT No, I cannot be a day early, but I would like to see the draft before Wednesday We cannot have a formal reading on a document that is not available at least a few days before the formal reading. I support the move back to groups as ordered sets of processes -- my question was whether you want me to change the draft I wrote accordingly, while you work on the other issues. ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Wed Jun 16 11:39:40 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA16807; Wed, 16 Jun 93 11:39:40 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21884; Wed, 16 Jun 93 11:39:44 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 16 Jun 1993 11:39:43 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21872; Wed, 16 Jun 93 11:39:38 -0400 Date: Wed, 16 Jun 93 16:39:50 BST Message-Id: <29262.9306161539@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: context To: Tony Skjellum , SNIR@watson.ibm.com In-Reply-To: Tony Skjellum's message of Wed, 16 Jun 93 10:23:44 CDT Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Tony - can you bring copies of the draft with you to Dallas on June 21 as some of us will not be able to receive on Monday - for example I will be travelling all of Monday so the first chance I will get to see such draft will be after you nad me hardcopy on evening of Monday 21. I guess we should also make sure we have facilities for eleventh hour editing, printing, ... 
in Dallas if the premeet is to do anything in terms of the draft. I sense that we have a formal reading and unfinished draft work to complete concurrently. Such is life. Regards - Lyndon > > Marc, I will work on draft all weekend, and TRY HARD to have it > out by Monday. I understand your point on the reading of the draft. > Perhaps we can do it Thursday at meeting, to guarantee at least on extra > day for people to read it. I will be working on this all weekend... > if possible also Friday. > > I would like to have an opportunity to edit the draft first, then show > it to you and the others. > - Tony > > > > ----- Begin Included Message ----- > > >From snir@watson.ibm.com Wed Jun 16 08:52:21 1993 > Date: Wed, 16 Jun 93 09:49:15 EDT > From: "Marc Snir" > X-Addr: (914) 945-3204 (862-3204) > 28-226 IBM T.J. Watson Research Center > P.O. Box 218 Yorktown Heights NY 10598 > To: tony@Aurora.CS.MsState.Edu > Reply-To: SNIR@watson.ibm.com > Subject: Re: context > Content-Length: 429 > > Reference: Your note of Tue, 15 Jun 93 16:49:45 CDT > > No, I cannot be a day early, but I would like to see the draft before Wednesday > We cannot have a formal reading on a document that is not available at least a > few days before the formal reading. > > I support the move back to groups as ordered sets of processes -- my question > was whether you want me to change the draft I wrote accordingly, while you > work on the other issues. > > > ----- End Included Message ----- > > /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Jun 17 08:48:59 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA26371; Thu, 17 Jun 93 08:48:59 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14942; Thu, 17 Jun 93 08:48:30 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 17 Jun 1993 08:48:29 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14934; Thu, 17 Jun 93 08:48:28 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA16826; Thu, 17 Jun 1993 08:48:29 -0400 Message-Id: <9306171248.AA16826@rios2.epm.ornl.gov> To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Subject: Convolution still Date: Thu, 17 Jun 93 08:48:29 -0500 From: David W. Walker Dear Lyndon, The problem with forming the merge communicator arises because it must be called by all processes with consistent arguments. Thus, all processes must call mpi_safemake_comm twice, once for each merge communicator. Please see the amended version of Program 2 below. I've also changed the calls so they all return error codes. This code would need to be further modified to cover the general case of three arbitrary groups. Perhaps I'll do that when I have time - perhaps while I'm sitting around in Rome airport next week waiting for some mad European terrorist to blow me up. David ---------------------------------------------------------------------- /* * * PROGRAM 2. * Now we'll try something a little harder. 
In the next program we find the * convolution, C, of two vectors, A and B, by first finding the FFT of A * and B, evaluating their elementwise product in Fourier space, and then * finding the inverse transform, i.e., * * C = INVFFT ( FFT(A) * FFT(B) ) * * Of course, we can do this by having all processes first find FFT(A) and * then FFT(B). The product in Fourier space can then be found with no * communication being necessary. Finally, all processes cooperate to * perform the inverse FFT. Program 2 doesn't do this. Instead, we split * the initial group into two equally-sized subgroups, one of which * evaluates FFT(A) and the other FFT(B) in parallel. Communication is * then necessary between the initial group and the subgroups to * re-distribute the data before doing the Fourier space product in the * initial group. Finally, the processes in the initial group evaluate * the inverse FFT, giving the convolution, C. The point here is that * communication between groups is required. * */ main () { group_t init_group, fft_group, other_group; comm_t comm_fft, comm_merge_A, comm_merge_B, comm_inv, other_comm; comm_handle_t handle1, handle2; rso_handle_t rso; int init_rank, init_size, key, fft_rank, fft_size, fftlen, local_len, nbytes, errno; int source_rank, dest_rank1, dest_rank2; double F[100], A[100], B[100], C[100]; /* input the fft length */ errno = get_input2 (fftlen); /* find out about the initial group */ errno = mpi_initial_group (init_group); errno = mpi_rank (init_group, init_rank); errno = mpi_group_size (init_group, init_size); /* split the group in half, can be done any old way */ key = init_rank&1; errno = mpi_split_group (init_group, key, init_rank, fft_group); /* find out about this fft group */ errno = mpi_rank (fft_groupi, fft_rank); errno = mpi_rank (fft_group, fft_size); /* work out how big merge messages will be */ local_len = fftlen/init_size; nbytes = local_len*sizeof(double); /* work out sources and destinations of merge messages */ source_rank = fft_rank; dest_rank1 = 2*source_rank; dest_rank2 = dest_rank1 + 1; /* create and publish communicator for FFTs */ errno = mpi_safemake_comm (fft_group, fft_group, comm_fft); errno = mpi_publish (comm_fft, (key == 0 ? "even" : "odd")); /* do the FFT */ errno = initialize_fft (F, fftlen, comm_fft); errno = parallel_fft (F, fftlen, comm_fft); /* find out what the other FFT group's communicator is and extract group */ errno = mpi_subscribe (other_comm, (key != 0 ? 
"even" : "odd")); errno = mpi_group (other_comm, other_group); /* free the FFT communicator */ errno = mpi_free(comm_fft); /* do the merge */ if (key != 0) { errno = mpi_safemake_comm (fft_group, init_group, comm_merge_A); errno = mpi_safemake_comm (other_group, init_group, comm_merge_B); errno = mpi_isendc (handle1, &F[0], nbytes, MPI_DOUBLE, 0, dest_rank1, comm_merge_A); errno = mpi_isendc (handle2, &F[local_len], nbytes, MPI_DOUBLE, 0, dest_rank2, comm_merge_A); } else { errno = mpi_safemake_comm (other_group, init_group, comm_merge_A); errno = mpi_safemake_comm (fft_group, init_group, comm_merge_B); errno = mpi_isendc (handle1, &F[0], nbytes, MPI_DOUBLE, 0, dest_rank1, comm_merge_B); errno = mpi_isendc (handle2, &F[local_len], nbytes, MPI_DOUBLE, 0, dest_rank2, comm_merge_B); } errno = mpi_recvc (A, nbytes, MPI_DOUBLE, MPI_DONTCARE, source_rank, comm_merge_A, rso); errno = mpi_recvc (B, nbytes, MPI_DOUBLE, MPI_DONTCARE, source_rank, comm_merge_B, rso); errno = mpi_wait (handle1, rso); errno = mpi_wait (handle2, rso); errno = mpi_free(comm_merge_A); errno = mpi_free(comm_merge_B); /* done */ /* do the product */ errno = parallel_product (A, B, C, fftlen); /* done */ /* do the inverse fft and output result */ errno = mpi_safemake_comm (init_group, init_group, comm_inv); errno = parallel_invfft (C, fftlen, comm_inv); errno = output_result (C, fftlen, comm_inv); errno = mpi_free(comm_inv); /* done */ /* finished */ exit (0); From owner-mpi-context@CS.UTK.EDU Fri Jun 18 15:38:10 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA10726; Fri, 18 Jun 93 15:38:10 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17930; Fri, 18 Jun 93 15:37:32 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 18 Jun 1993 15:37:31 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17922; Fri, 18 Jun 93 15:37:30 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA17905; Fri, 18 Jun 1993 15:37:59 -0400 Date: Fri, 18 Jun 1993 15:37:59 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306181937.AA17905@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Another intergroup example Below is another example MPI code concernec with intergroup communication. This is a routine that partitions a group into two subgroups and returns a communicator for communicating between them. Here I'm following the suggestion from Lyndon of viewing a communicator as consisting of 4 fields, namely: local_group, local_context, remote_group, remote_context For intragroup communication, remote_group=local_group and remote_context=local_context. When communicating between groups, the sender gives the destination relative to the remote group, and the local context is used associated with the message. The receiver uses a source relative to the remote group, and assumes the message is associated with the remote context. I invented a new convenience routine, mpi_merge_comm, to form the intergroup communicator. David /* PROGRAM 4 This subroutine partitions a group into 2 subgroups, and returns a communicator for communicating between them. key must equal either 0 or 1 on entry. 
/* PROGRAM 4
 *
 * This subroutine partitions a group into 2 subgroups, and returns a
 * communicator for communicating between them. key must equal either
 * 0 or 1 on entry.
 */
intergroup (key, index, group, comm)
int key, index;
group_t group;
comm_t comm;
{
    group_t lgroup;
    int errno, rank;
    comm_t lcomm, rcomm;

    /* partition the group, and find rank in subgroup */
    errno = mpi_partition_group (group, key, index, lgroup);
    errno = mpi_rank (lgroup, rank);

    /* form a communicator for local group, get communicator for remote group */
    errno = mpi_safemake_comm (lgroup, lgroup, lcomm);
    if (key==0) {
       if (rank==0) errno = mpi_publish (lcomm, "G0");
       errno = mpi_subscribe (rcomm, "G1");
    }
    else if (key==1) {
       if (rank==0) errno = mpi_publish (lcomm, "G1");
       errno = mpi_subscribe (rcomm, "G0");
    }
    errno = mpi_merge_comm (lcomm, rcomm, comm);

    /* free local and remote communicators */
    errno = mpi_free (lcomm);
    errno = mpi_free (rcomm);
}

From owner-mpi-context@CS.UTK.EDU Sat Jun 19 15:30:41 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA13789; Sat, 19 Jun 93 15:30:41 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12978; Sat, 19 Jun 93 15:30:11 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sat, 19 Jun 1993 15:30:10 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12964; Sat, 19 Jun 93 15:30:08 -0400 Date: Sat, 19 Jun 93 20:30:33 BST Message-Id: <1683.9306191930@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Another intergroup example To: walker@rios2.epm.ornl.gov (David Walker), mpi-context@cs.utk.edu In-Reply-To: David Walker's message of Fri, 18 Jun 1993 15:37:59 -0400 Reply-To: lyndon@epcc.ed.ac.uk

Hi David

> Below is another example MPI code concerned with intergroup
> communication. This is a routine that partitions a group into two
> subgroups and returns a communicator for communicating between them.
> Here I'm following the suggestion from Lyndon of viewing a communicator
> as consisting of 4 fields, namely:
>
>     local_group,
>     local_context,
>     remote_group,
>     remote_context
>
> For intragroup communication, remote_group=local_group and
> remote_context=local_context.
>
> When communicating between groups, the sender gives the destination
> relative to the remote group, and the local context is associated
> with the message. The receiver uses a source relative to the remote group,
> and assumes the message is associated with the remote context.

I think a little explanation is due on my part. In old proposal x there was a communicator kind of object which had one context and a local group and a remote group. More recently I have suggested a communicator object with the fields you describe above. The reason for the change in suggestion is simple and follows directly from the difference between the context allocation which was intended in proposal x and that described in the current subcommittee working document.

In proposal x members of the local group could allocate a context which would be safe for members of the local group - *and* members of the remote group - to use for receive (and the other way around where the remote group allocates the context). Therefore you only needed one context for the communicator object, which either group can allocate and both local and remote groups can use for receive (ergo used for send in both local and remote groups).

In the working document members of the local group can allocate a context which would be safe for members of the local group - *but not* members of the remote group - to use for receive. Therefore the communicator object needs to contain two contexts. One of these is allocated by the local group and is used for receive in the local group (ergo used for send in the remote group). The other is allocated by the remote group and is used for receive in the remote group (ergo used for send in the local group).
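To make the two-context rule above concrete, here is a small sketch of what a four-field communicator might contain and which context each side of an intergroup exchange would use, following the convention in the preceding paragraph. Everything in it is illustrative: the comm_fields structure, the context_t typedef and the low_level_send/low_level_recv calls are names made up for this sketch, not routines from the working document or from the example programs in this thread.

/* Illustrative only: possible contents of an intergroup communicator
 * under the four-field view, with contexts taken to be plain integers. */
typedef int context_t;

typedef struct {
    group_t   local_group;     /* group this process belongs to           */
    context_t local_context;   /* allocated by the local group, used for
                                  receives posted by the local group      */
    group_t   remote_group;    /* group being communicated with           */
    context_t remote_context;  /* allocated by the remote group, used for
                                  receives posted by the remote group     */
} comm_fields;

/* A send to rank `dest' of the remote group carries the context that the
 * *receiving* group allocated, i.e. remote_context ...                   */
send_to_remote (buf, nbytes, type, tag, dest, c)
char *buf; int nbytes, type, tag, dest; comm_fields c;
{
    return low_level_send (buf, nbytes, type, tag,
                           c.remote_group, dest, c.remote_context);
}

/* ... while a receive from rank `src' of the remote group is posted on
 * the context this group allocated for itself, i.e. local_context.       */
recv_from_remote (buf, nbytes, type, tag, src, c)
char *buf; int nbytes, type, tag, src; comm_fields c;
{
    return low_level_recv (buf, nbytes, type, tag,
                           c.remote_group, src, c.local_context);
}

For the intragroup case the two groups and the two contexts coincide, so the same selection rule reduces to the familiar single-context behaviour.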
> I invented a new convenience routine, mpi_merge_comm, to
> form the intergroup communicator.

I thought about seeing what it's like to introduce such a routine, since it allows us to make use of the publish/subscribe mechanism which Tony/Mark suggest. This is nicely demonstrated by part of your example. Let's look, however, at the last part of the example.

>    errno = mpi_merge_comm (lcomm, rcomm, comm);
>
>    /* free local and remote communicators */
>    errno = mpi_free (lcomm);
>    errno = mpi_free (rcomm);

We should not make use of "lcomm" and "rcomm" after mpi_merge_comm since they must contain the same contexts as "comm" and this can lead to messages getting received in the wrong piece of code. Therefore we should immediately free them, as the example does. In order to do this we have to be careful in our definition of what mpi_free(comm) actually frees, remembering that a communicator is a composite of a group and a context.

I expect we would agree that mpi_free(comm) does not free the group associated with comm. The question is whether it does free the context associated with comm. If the answer is YES, then the "comm" communicator is no longer safe, as the contexts could become used for other purposes. If the answer is NO, then we need to provide a mechanism for the user to explicitly free the context, but that would be ugly anyway if the user never explicitly allocated the context in the first place. There is another possible answer, which is YES in general but NO in this case, because MPI recognises that the context is also being used in the communicator "comm". This means that MPI keeps count of references to contexts, which is going to be quite expensive if a context is just an integer as opposed to a handle to an opaque object which can contain a reference count.

Perhaps "mpi_merge_comm(acomm, bcomm, comm)" should "free" acomm and bcomm. I put "free" in quotes because I mean to free the communicator objects but not the contexts therein, the contexts being used in the new communicator "comm". Thus any use of acomm and bcomm after the call to mpi_merge_comm would be an error. This saves the user from having to do the mpi_free, and avoids some nasty questions in the definition of mpi_free.
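As an illustration of the reference-counting alternative mentioned above: if a context were a handle to an opaque object rather than a bare integer, the count could live inside that object. The context_obj structure and the retain/release helpers below are purely illustrative assumptions, not names from the working document.

#include <stdlib.h>

/* Sketch: a context as a handle to an opaque, reference-counted object. */
typedef struct context_obj {
    int value;      /* the context value actually attached to messages    */
    int refcount;   /* number of communicators currently using this value */
} *context_handle;

/* called, e.g., for each context that mpi_merge_comm copies into the
 * communicator it builds */
static void context_retain (context_handle c)
{
    c->refcount++;
}

/* called, e.g., when a communicator referencing the context is freed;
 * the context value becomes reusable only when the last reference goes */
static void context_release (context_handle c)
{
    if (--c->refcount == 0)
        free (c);
}

With such a scheme mpi_free(comm) could always release the contexts that comm references, and the merge-then-free sequence quoted above would stay safe because the merged communicator still holds its own references.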
I include the example 4 amended in this fashion, and another example which just shows how two groups can "hook-up" just knowing a name for the local "group" and a name for the other "group". Then I give example 4 written to use example 5.

Best Wishes

Lyndon

/* PROGRAM 4
 *
 * This subroutine partitions a group into 2 subgroups, and returns a
 * communicator for communicating between them. key must equal either
 * 0 or 1 on entry.
 */
intergroup (key, index, group, comm)
int key, index;
group_t group;
comm_t *comm;
{
    group_t lgroup;
    int errno, rank;
    comm_t lcomm, rcomm;

    /* partition the group, and find rank in subgroup */
    errno = mpi_partition_group (group, key, index, lgroup);
    errno = mpi_rank (lgroup, rank);

    /* form a communicator for local group, get communicator for remote group */
    errno = mpi_safemake_comm (lgroup, lgroup, lcomm);
    if (key==0) {
       if (rank==0) errno = mpi_publish (lcomm, "G0");
       errno = mpi_subscribe (rcomm, "G1");
    }
    else if (key==1) {
       if (rank==0) errno = mpi_publish (lcomm, "G1");
       errno = mpi_subscribe (rcomm, "G0");
    }
    errno = mpi_merge_comm (lcomm, rcomm, *comm);
}

/* PROGRAM 5
 *
 * This example routine shows how the user can generate a communicator
 * for intergroup communication given a group of which the caller is
 * a member, a name for the local side and a name for the remote side.
 */
make_inter_thing(comm_t *comm, group_t group, char *this, char *that)
{
    comm_t lcomm, rcomm;
    int errno, rank;

    errno = mpi_safemake_comm(group, group, lcomm);
    errno = mpi_rank(group, &rank);
    if (rank == 0)
        errno = mpi_publish(lcomm, this);
    errno = mpi_subscribe(rcomm, that);
    errno = mpi_merge_comm(lcomm, rcomm, *comm);
}

/* PROGRAM 4B
 *
 * This is program 4 using the routine of example 5.
 */
intergroup (int key, int index, group_t group, comm_t *comm)
{
    group_t lgroup;
    int errno;

    /* partition the group */
    errno = mpi_partition_group (group, key, index, lgroup);

    /* call make_inter_thing with the appropriate name for each side */
    make_inter_thing(comm, lgroup,
                     ((key == 0) ? "G0" : "G1"),
                     ((key == 0) ? "G1" : "G0"));
}

/--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/
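For reference, a caller of the intergroup routine above might look something like the following sketch. The exchange_between_halves routine is hypothetical; the mpi_* calls follow the conventions of the example programs in this thread, and the assumption that mpi_partition_group orders each half by the index argument (so that global ranks 0 and 1 become rank 0 of their respective halves) is made up for this illustration.

/* Sketch of a caller: split the processes of `group' into two halves and
 * let rank 0 of each half exchange one double with rank 0 of the other. */
exchange_between_halves (group)
group_t group;
{
    comm_t comm;
    comm_handle_t handle;
    rso_handle_t rso;
    int errno, key, rank;
    double mine, theirs;

    errno = mpi_rank (group, rank);
    key = rank & 1;                    /* which half this process joins */

    /* build the intergroup communicator using PROGRAM 4B */
    intergroup (key, rank, group, &comm);

    /* destination and source ranks are relative to the remote group,
     * as in the four-field scheme discussed earlier */
    if (rank == 0 || rank == 1) {
        mine = (double) key;
        errno = mpi_isendc (handle, &mine, sizeof(double), MPI_DOUBLE,
                            0, 0, comm);
        errno = mpi_recvc (&theirs, sizeof(double), MPI_DOUBLE,
                           MPI_DONTCARE, 0, comm, rso);
        errno = mpi_wait (handle, rso);
    }

    errno = mpi_free (comm);
}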
David From owner-mpi-context@CS.UTK.EDU Wed Jun 23 09:36:38 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02021; Wed, 23 Jun 93 09:36:38 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09843; Wed, 23 Jun 93 09:34:40 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 23 Jun 1993 09:34:36 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA09822; Wed, 23 Jun 93 09:34:31 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA17456; Wed, 23 Jun 93 08:33:00 CDT Date: Wed, 23 Jun 93 08:33:00 CDT From: Tony Skjellum Message-Id: <9306231333.AA17456@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: Context chapter (new revision) [ps] Here is the postscript of our chapter, revisions to follow as meeting progresses... - Tony ------------------------------------------------------------------------ %!PS-Adobe-2.0 %%Creator: dvips 5.47 Copyright 1986-91 Radical Eye Software %%Title: unify1.dvi %%Pages: 9 1 %%BoundingBox: 0 0 612 792 %%EndComments %%BeginProcSet: tex.pro /TeXDict 200 dict def TeXDict begin /N /def load def /B{bind def}N /S /exch load def /X{S N}B /TR /translate load N /isls false N /vsize 10 N /@rigin{ isls{[0 1 -1 0 0 0]concat}if 72 Resolution div 72 VResolution div neg scale Resolution VResolution vsize neg mul TR matrix currentmatrix dup dup 4 get round 4 exch put dup dup 5 get round 5 exch put setmatrix}N /@letter{/vsize 10 N}B /@landscape{/isls true N /vsize -1 N}B /@a4{/vsize 10.6929133858 N}B /@a3{ /vsize 15.5531 N}B /@ledger{/vsize 16 N}B /@legal{/vsize 13 N}B /@manualfeed{ statusdict /manualfeed true put}B /@copies{/#copies X}B /FMat[1 0 0 -1 0 0]N /FBB[0 0 0 0]N /nn 0 N /IE 0 N /ctr 0 N /df-tail{/nn 8 dict N nn begin /FontType 3 N /FontMatrix fntrx N /FontBBox FBB N string /base X array /BitMaps X /BuildChar{CharBuilder}N /Encoding IE N end dup{/foo setfont}2 array copy cvx N load 0 nn put /ctr 0 N[}B /df{/sf 1 N /fntrx FMat N df-tail} B /dfs{div /sf X /fntrx[sf 0 0 sf neg 0 0]N df-tail}B /E{pop nn dup definefont setfont}B /ch-width{ch-data dup length 5 sub get}B /ch-height{ch-data dup length 4 sub get}B /ch-xoff{128 ch-data dup length 3 sub get sub}B /ch-yoff{ ch-data dup length 2 sub get 127 sub}B /ch-dx{ch-data dup length 1 sub get}B /ch-image{ch-data dup type /stringtype ne{ctr get /ctr ctr 1 add N}if}B /id 0 N /rw 0 N /rc 0 N /gp 0 N /cp 0 N /G 0 N /sf 0 N /CharBuilder{save 3 1 roll S dup /base get 2 index get S /BitMaps get S get /ch-data X pop /ctr 0 N ch-dx 0 ch-xoff ch-yoff ch-height sub ch-xoff ch-width add ch-yoff setcachedevice ch-width ch-height true[1 0 0 -1 -.1 ch-xoff sub ch-yoff .1 add]{ch-image} imagemask restore}B /D{/cc X dup type /stringtype ne{]}if nn /base get cc ctr put nn /BitMaps get S ctr S sf 1 ne{dup dup length 1 sub dup 2 index S get sf div put}if put /ctr ctr 1 add N}B /I{cc 1 add D}B /bop{userdict /bop-hook known{bop-hook}if /SI save N @rigin 0 0 moveto}N /eop{clear SI restore showpage userdict /eop-hook known{eop-hook}if}N /@start{userdict /start-hook known{start-hook}if /VResolution X /Resolution X 1000 div /DVImag X /IE 256 array N 0 1 255{IE S 1 string dup 0 3 index put cvn put}for}N /p /show load N /RMat[1 0 0 -1 0 0]N /BDot 260 string N /rulex 0 N /ruley 0 N /v{/ruley X /rulex X V}B /V statusdict begin /product where{pop product dup length 7 ge{0 7 getinterval(Display)eq}{pop false}ifelse}{false}ifelse end{{gsave TR -.1 -.1 TR 1 1 scale rulex 
ruley false RMat{BDot}imagemask grestore}}{{gsave TR -.1 -.1 TR rulex ruley scale 1 1 false RMat{BDot}imagemask grestore}}ifelse B /a{ moveto}B /delta 0 N /tail{dup /delta X 0 rmoveto}B /M{S p delta add tail}B /b{ S p tail}B /c{-4 M}B /d{-3 M}B /e{-2 M}B /f{-1 M}B /g{0 M}B /h{1 M}B /i{2 M}B /j{3 M}B /k{4 M}B /w{0 rmoveto}B /l{p -4 w}B /m{p -3 w}B /n{p -2 w}B /o{p -1 w }B /q{p 1 w}B /r{p 2 w}B /s{p 3 w}B /t{p 4 w}B /x{0 S rmoveto}B /y{3 2 roll p a}B /bos{/SS save N}B /eos{clear SS restore}B end %%EndProcSet TeXDict begin 1000 300 300 @start /Fa 15 86 df66 D<07E60FFE1FFE3E3E3C1E780E700EF00E E000E000E000E000E000E000E000F00E700E780E3C1E3E3C1FF80FF807E00F177E9614>I69 D<07E6000FFE001FFE003E3E003C1E00780E00700E00F00E00E00000E00000E00000E00000E07F 80E07F80E07F80F00E00700E00781E003C1E003E3E001FFE000FFE0007EE0011177F9614>71 D73 D75 DIII<1FF07FFC7F FC701CF01EE00EE00EE00EE00EE00EE00EE00EE00EE00EE00EE00EE00EE00EF01E783C7FFC7FFC 1FF00F177E9614>II82 D<0FCC3FFC7FFCF87CF03CE01CE01CF000F8007F003FE01FF801FC003C001E000EE0 0EE00EF01EF83CFFFCFFF8CFE00F177E9614>III E /Fb 3 104 df<7078F87005047C830C>46 D<03F00FF81E38381878387078FFF0FFC0E000E000E000E018E03870F03FE01F800D107C8F12> 101 D<00FB8003FF80079F800E0F001E07001C0F003C0F00380E00380E00381E00381E00381C00 383C003CFC001FFC000FB800003800007800007800E0F000E1F000FFC000FF000011177E8F12> 103 D E /Fc 1 59 df58 D E /Fd 36 119 df<001F0000003F800000 7B80000071800000E1800000E1800000E1800000C3800001C7000001CE000001DE000001FC1FF8 01F81FF801E0078001E0070007E006000FE00E001EF01C003C7038007878300078787000F03CE0 00F01FC000F01F8000F00F0060F80F80C07C7FE1C03FF1FF801FC07E001D1D7D9C20>38 D<7FF0FFE0FFE00C037F890E>45 D<7878F8F87005057D840C>I<00FE0003FF800787C00E03E0 0F01E00F01E01F01E01F01E00F01E00001E00003C00007C0000780000F00001E00003C00007800 00F00001E00003C0000701800E01801C03003803007FFF00FFFE00FFFE00131B7E9A15>50 D<00FE0003FF00078F800F07C00F03C00F03C00F0780000780000F80001F00003E0003FC0003F8 00001C00000E00000F00000F00000F00000F00780F00F80F00F81F00F81E00F03C00F07C007FF0 001FC000121B7D9A15>I<07018007FF8007FF0007FC000600000E00000C00000C00000C00000C 00000DF8001FFE001F1E001C0F00180700000700000780000F80000F00F00F00F00F00F01F00F0 1E00E03C0070F8007FF0001FC000111B7D9A15>53 D<1800003FFFC03FFFC03FFFC07003806007 00600E00C00C00001C0000380000700000E00000C00001C0000380000380000780000700000F00 000F00000E00001E00001E00001E00001E00003E00003C00001C0000121C7B9B15>55 D<00FE0003FF0007C7800F03800E03C00E03C01E03801E03801F07801F8F000FFE000FFC0003F8 0007FE001EFE003C3F00781F00700F00F00700E00700E00700E00700F00E00701E007C7C003FF0 000FC000121B7D9A15>I<00007000000070000000F0000000F0000001F0000001F80000037800 000378000006780000067800000C7800000C3C0000183C0000183C0000303C0000303C0000603C 0000601E0000FFFE0000FFFE0001801E0001801E0003001F0003000F0007000F000F000F007FC0 FFF0FFC0FFF01C1C7F9B1F>65 D<000FF030003FFC7000FC0EE003F007E007C003E00F8001E01F 0001E01E0000E03C0000C03C0000C0780000C07800000078000000F8000000F0000000F0000000 F0000000F0000000F0000380F000030078000300780007003C000E003E001C001F003C000FC0F0 0003FFE00000FF00001C1C7C9B1E>67 D<0FFFFC000FFFFF8000F007C000F001E000F000F000F0 007000F0007801F0007801E0003801E0003801E0003801E0003C01E0003803E0003803C0007803 C0007803C0007803C0007003C000F007C000E0078001E0078001C0078003800780078007800E00 0F807C00FFFFF000FFFFC0001E1C7E9B20>I<0FFFFFE00FFFFFE000F003E000F001C000F000C0 00F000C000F000C001F060C001E060C001E060C001E0600001E1E00001FFE00003FFC00003C1C0 0003C0C00003C0C00003C0C0C003C0C18007C00180078001800780030007800300078007000780 
0E000F803E00FFFFFE00FFFFFC001B1C7E9B1C>I<000FF030003FFC7000FC0EE003F007E007C0 03E00F8001E01F0001E01E0000E03C0000C03C0000C0780000C07800000078000000F8000000F0 000000F0000000F000FFF0F000FFF0F0000F80F0000F8078000F0078000F003C000F003E001F00 1F001F000FC07F0003FFF60000FF82001C1C7C9B21>71 D<0FFF800FFF8000F00000F00000F000 00F00000F00001F00001E00001E00001E00001E00001E00003E00003C00003C00003C00003C000 03C00007C0000780000780000780000780000780000F8000FFF800FFF800111C7F9B0F>73 D<0FFFC00FFFC000F00000F00000F00000F00000F00001F00001E00001E00001E00001E00001E0 0003E00003C00003C00003C00003C00603C00C07C00C07800C07801C0780180780380780780F81 F8FFFFF0FFFFF0171C7E9B1A>76 D<0FFC000FFC0FFC000FFC00FC001F8000FC001F8000FC0037 8000DE00378000DE006F8001DE006F80019E00CF00019E00CF00019E018F00018F018F00018F03 1F00038F031F00030F061E00030F061E0003078C1E0003078C1E000307983E000707983E000607 B03C000607B03C000603E03C000603E03C000603C07C001E03C07C00FFE387FFC0FFC387FF8026 1C7E9B26>I<0FF80FFE0FF80FFE00FC01E000FC00C000FE00C000DE00C000DE01C001DF01C001 8F0180018F8180018781800187C1800187C3800383C3800303E3000301E3000301F3000300F300 0300FF000700FF0006007E0006007E0006003E0006003E0006001E001E001E00FFE01C00FFC00C 001F1C7E9B1F>I<0007F000003FFC0000F81E0001E0070003800380070003C00E0001C01E0001 E03C0001E03C0000E0780000E0780000F0780000E0F00001E0F00001E0F00001E0F00001E0F000 03C0F00003C0F00007807800078078000F0038001E003C003C001E0078000F81F00003FFC00000 FE00001C1C7C9B20>I<0FFFFC000FFFFF0000F00F8000F0038000F003C000F001C000F001C001 F003C001E003C001E003C001E0038001E0078001E00F0003E03E0003FFF80003FFE00003C00000 03C0000003C0000007C00000078000000780000007800000078000000F8000000F800000FFF800 00FFF000001A1C7E9B1C>I<0FFFF8000FFFFE0000F00F8000F0038000F003C000F001C000F001 C001F003C001E003C001E003C001E0078001E00F0001E03E0003FFF80003FFF80003C0FC0003C0 3C0003C03C0003C03E0007C03C0007803C0007803C0007803C0007803C0007803C380F803E70FF F81FF0FFF00FE01D1C7E9B1F>82 D<007F0C01FFDC03C1F80780780F00380E00380E00381E0038 1E00001F00001F80000FF8000FFF0007FFC001FFE0003FE00003E00001E00000E00000E06000E0 6000E06001E07001C0780380FE0F80FFFE00C3F800161C7E9B17>I<1FFFFFF03FFFFFF03C0781 F038078060700780606007806060078060600F8060C00F0060C00F0060000F0000000F0000000F 0000001F0000001E0000001E0000001E0000001E0000001E0000003E0000003C0000003C000000 3C0000003C0000003C0000007C00001FFFE0001FFFE0001C1C7C9B1E>III<03FFFF8007 FFFF0007E01F0007803E0007007C00060078000600F8000E01F0000C03E0000C07C0000007C000 000F8000001F0000003E0000007C0000007C000000F80C0001F00C0003E0180003C0180007C018 000F8038001F0030003E0070003E00F0007C03F000FFFFE000FFFFE000191C7E9B19>90 D<01FE07FE0F8F1E0E3C0E3C00780078007800F800F000F0007800780E7C0C3E1C1FF807E01012 7E9112>99 D<01F807FE0F1E1E0F3C0F7C077FFF7FFF7800F800F000F0007000780E7C0C3E3C1F F807E010127E9112>101 D<01C003E003E003E003C00000000000000000000000001F801F8007 800780070007000700070007000F000E000E000E000E000E001E00FF80FF800B1D7F9C0C>105 D<07E00FE001E001C001C001C001C001C003C00380038003800380038007800700070007000700 07000F000E000E000E000E001E001E00FF80FF800B1D7F9C0C>108 D<1F9FC01FFFE007F1E007 C0E00780E00780E00700E00700E00701E00F01E00E01C00E01C00E01C00E01C00E03C01E03C0FF 8FF0FF9FF014127F9117>110 D<00FC0003FF000F07801E03C03C01C03801C07801E07801E078 01E0F003C0F003C0F003C0700380700780780F003C1E001FF80007E00013127E9115>I<1FBE1F FE07EF07CE078E07800700070007000F000E000E000E000E000E001E00FFC0FFC010127F9110> 114 D<03F60FFE1E1E3C0E380C3C0C3E003FE01FF807FC007C603C601C701C7038F878FFF0CFC0 0F127F9110>I<0600060006000E000E000C001C003C00FFE0FFE01C003C003800380038003800 
3800780070C070C070C070C071C073807F003E000B1A7C9910>III E /Fe 51 122 df<6030F078F078F078F078F078F078F078F078F078E038E0380D0C7C9916>34 D<00E001E007C007800F001E003C0038007800700070007000F000E000E000E000E000E000E000 E000F000700070007000780038003C001E000F00078007C001E000E00B217A9C16>40 DI<387C7E7E 3E0E1E3CFCF860070B798416>44 D<03E0000FF8001FFC001E3C00380E00780F00700700700700 E00380E00380E00380E00380E00380E00380E00380E00380F00780700700700700780F003C1E00 1E3C001FFC000FF80003E00011197E9816>48 D<0380038007800F801F80FF80FF80F380038003 800380038003800380038003800380038003800380038003807FFC7FFC7FFC0E197C9816>I<7F FF00FFFF80FFFF80000000000000000000000000000000FFFF80FFFF807FFF00110B7E9116>61 D<00E00001F00001F00001B00001B00003B80003B80003B800031800071C00071C00071C00071C 00071C000E0E000E0E000FFE000FFE001FFF001C07001C07001C0700FF1FE0FF1FE0FF1FE01319 7F9816>65 DI<03F18007FF800FFF801F0F803C0780780780780380700380F00000E000 00E00000E00000E00000E00000E00000E00000F000007003807803807803803C07801F0F000FFE 0007FC0003F00011197E9816>IIII<03F3000FFF001FFF003F1F003C 0F00780F00780700700700F00000E00000E00000E00000E00000E07FC0E07FC0E07FC0F0070070 0700780F00780F003C0F003F1F001FFF000FFF0003E70012197E9816>III75 DIII<1FFC003FFE007FFF00 780F00F00780E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380 E00380E00380E00380F00780F00780780F007FFF003FFE001FFC0011197E9816>II82 D<0FF3001FFF007FFF00781F00F00F00E00700E00700E00000F000007800007F80003FF8 000FFC0000FE00000F00000780000380000380E00380E00380F00780F81F00FFFE00FFFC00CFF8 0011197E9816>III<7F3F807F3F807F3F800E1E000E1C00073C0007380003B8 0003F00001F00001E00000E00001E00001F00003F00003B80007B800071C00071C000E0E000E0E 001C0700FF1FE0FF1FE0FF1FE013197F9816>88 D95 D<1FE0007FF8007FFC00783C00301E00000E00007E000FFE003FFE007FCE00F80E00E00E00E00E 00F01E00F83E007FFFE03FFFE01FC3E013127E9116>97 DI<03FC0FFE1FFE3E1E780C70 00F000E000E000E000E000F00070077C073E1F1FFE0FFC03F010127D9116>I<007F00007F0000 7F0000070000070000070000070007E7001FFF003FFF003E3F00780F00700F00F00700E00700E0 0700E00700E00700F00F00F00F00781F007E3F003FFFF01FF7F007E7F014197F9816>I<07E01F F83FFC7C3E781FF00FF007FFFFFFFFFFFFE000F000F007780F7E1F3FFE0FFC03F010127D9116> I<001F00007F8000FF8001E78001C30001C00001C000FFFF00FFFF00FFFF0001C00001C00001C0 0001C00001C00001C00001C00001C00001C00001C00001C00001C0007FFF007FFF007FFF001119 7F9816>I<03E7C00FFFE01FFFE01E3CE03C1E00380E00380E00380E003C1E001E3C001FFC003F F8003BE0003800003C00001FFE003FFF807FFFC07807C0F001E0E000E0E000E0E000E0F001E07E 0FC03FFF801FFF0007FC00131C7F9116>II<03C003C003C003C000000000000000007F C07FC07FC001C001C001C001C001C001C001C001C001C001C001C001C0FFFFFFFFFFFF101A7D99 16>I107 DIII<03E0000FF8001FFC003C1E00780F00700700E00380 E00380E00380E00380E00380F00780700700780F003C1E001FFC000FF80003E00011127E9116> II114 D<1FEC3FFC7FFCF03CE01CE01CF8007FC03FF007FC003EE00EE00EF00EF83EFFFCFFF8CFF00F12 7D9116>I<070000070000070000070000070000FFFF00FFFF00FFFF0007000007000007000007 0000070000070000070000070100070380070380070780078F8003FF0003FE0000F80011177F96 16>IIII<7F3FC07F3FC07F3FC0 0F1C00073C0003B80003F00001F00000E00001E00001F00003B800073C00071C000E0E00FF3FE0 FF3FE0FF3FE013127F9116>II E /Ff 1 16 df<07E01FF83FFC7FFE7F FEFFFFFFFFFFFFFFFFFFFFFFFF7FFE7FFE3FFC1FF807E010107E9115>15 D E /Fg 29 119 df<000FF800007FFC0001FC1E0003F01F0007E03F000FE03F000FC03F000FC0 3F000FC00C000FC000000FC000000FC000000FC00000FFFFFF00FFFFFF000FC03F000FC03F000F C03F000FC03F000FC03F000FC03F000FC03F000FC03F000FC03F000FC03F000FC03F000FC03F00 0FC03F000FC03F000FC03F000FC03F000FC03F000FC03F007FF0FFE07FF0FFE01B237FA21F>12 
D<7CFEFEFEFEFE7C07077C8610>46 D<00180000780001F800FFF800FFF80001F80001F80001F8 0001F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F80001F8 0001F80001F80001F80001F80001F80001F80001F80001F80001F8007FFFE07FFFE013207C9F1C >49 D<03FC001FFF803C1FC0700FE0FC07F0FE03F0FE03F8FE03F8FE01F87C01F83803F80003F8 0003F00007F00007E0000FC0000F80001F00003E00007C0000F80001F01801C018038018070018 0E00381FFFF03FFFF07FFFF0FFFFF0FFFFF0FFFFF015207D9F1C>I<01FF0007FFC01F07E01E03 F03F03F83F01F83F81F83F01F83F03F80E03F80003F00007F0000FE0001F8001FF0001FF000007 E00003F80001FC0000FC0000FE0000FE7C00FEFE00FEFE00FEFE00FEFE00FCFE01FC7803F83E07 F01FFFE003FF0017207E9F1C>I<0000E00001E00003E00003E00007E0000FE0001FE0001FE000 37E00077E000E7E001C7E00187E00307E00707E00E07E00C07E01807E03807E07007E0E007E0FF FFFEFFFFFE0007E00007E00007E00007E00007E00007E00007E000FFFE00FFFE17207E9F1C>I< 1800601E03E01FFFE01FFFC01FFF801FFF001FFC001BE00018000018000018000018000019FE00 1FFF801F0FC01C07E01803F00003F00003F80003F80003F87803F8FC03F8FC03F8FC03F8FC03F8 FC03F06007F0780FE03C1FC01FFF0007FC0015207D9F1C>I<003FC001FFE003F07007C0F80F81 F81F01F83E01F83E01F87E00F07C00007C0000FC0800FCFFC0FDFFF0FF81F8FF00F8FE007CFE00 7CFE007EFC007EFC007EFC007EFC007E7C007E7C007E7E007E3E007C3E00FC1F01F80FC3F007FF C000FF0017207E9F1C>I<6000007800007FFFFE7FFFFE7FFFFE7FFFFC7FFFF87FFFF0E000E0E0 00C0C001C0C00380C00700000E00000E00001C00003C0000380000780000780000F80000F00001 F00001F00001F00001F00003F00003F00003F00003F00003F00003F00003F00001E00017227DA1 1C>I<0007FE0180003FFFC38000FF01E78003FC007F8007F0001F800FE0000F801FC0000F801F 800007803F800003807F000003807F000003807F00000180FE00000180FE00000000FE00000000 FE00000000FE00000000FE00000000FE00000000FE00000000FE00000000FE000000007F000001 807F000001807F000001803F800003801F800003001FC00007000FE0000E0007F0001C0003FC00 380000FF01F000003FFFC0000007FF000021227DA128>67 D<0003FF00C0003FFFE1C000FF80F3 C001FC003FC007F0001FC00FE0000FC01FC00007C01F800003C03F800001C07F000001C07F0000 01C07F000000C0FE000000C0FE00000000FE00000000FE00000000FE00000000FE00000000FE00 000000FE00000000FE000FFFFCFE000FFFFC7F00001FC07F00001FC07F00001FC03F80001FC01F 80001FC01FC0001FC00FE0001FC007F0003FC001FC003FC000FF80FFC0003FFFE3C00003FF00C0 26227DA12C>71 D76 D<0007FC0000003FFF800000FC07E00003F001F80007E000FC000FC0007E001F80003F001F8000 3F003F00001F803F00001F807F00001FC07E00000FC07E00000FC0FE00000FE0FE00000FE0FE00 000FE0FE00000FE0FE00000FE0FE00000FE0FE00000FE0FE00000FE0FE00000FE07E00000FC07F 00001FC07F00001FC03F00001F803F80003F801F80003F000FC0007E0007E000FC0003F001F800 00FC07E000003FFF80000007FC000023227DA12A>79 DI<07FE001FFF803F0FC03F07E03F07F03F03F01E03F00003F00003F001FF F00FFFF03FE3F07F03F07E03F0FE03F0FC03F0FC03F0FC07F0FE0FF07F1FF83FFDFF0FF0FF1816 7E951B>97 D<01FF8007FFE01FC3F03F03F03E03F07E03F07C01E0FC0000FC0000FC0000FC0000 FC0000FC0000FC0000FC00007E00007E00003F00303F80701FE1E007FFC001FF0014167E9519> 99 D<0003FE000003FE0000007E0000007E0000007E0000007E0000007E0000007E0000007E00 00007E0000007E0000007E0000007E0001FE7E0007FFFE001FC3FE003F00FE003E007E007E007E 007C007E00FC007E00FC007E00FC007E00FC007E00FC007E00FC007E00FC007E00FC007E007C00 7E007E007E003E00FE003F01FE001F83FE000FFF7FC001FC7FC01A237EA21F>I<01FE0007FF80 1F87E03F03E03E01F07E00F07C00F8FC00F8FC00F8FFFFF8FFFFF8FC0000FC0000FC0000FC0000 7E00007E00003F00181F80380FE0F007FFE000FF8015167E951A>I<1F003F803F803F803F803F 801F000000000000000000000000000000FF80FF801F801F801F801F801F801F801F801F801F80 1F801F801F801F801F801F801F801F801F80FFF0FFF00C247FA30F>105 D 108 
DII<00FE0007FFC00F83E01E00F03E00F87C007C7C007C7C007CFC007EFC007EFC007EFC007E FC007EFC007EFC007E7C007C7C007C3E00F81F01F00F83E007FFC000FE0017167E951C>II114 D<0FFB003FFF007C1F00780700F00300F00300F80000FF0000FFF8 007FFC007FFE001FFF000FFF80007F80C00F80C00F80E00780F00780F80F00FC1F00FFFE00C7F8 0011167E9516>I<00C00000C00000C00000C00001C00001C00003C00007C0000FC0001FC000FF FF00FFFF000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000F C1800FC1800FC1800FC1800FC1800FE38007E70003FF0000FC0011207F9F16>III E /Fh 55 123 df<007E7E01FFFF07CFCF070F8F0F0F0F0E07000E07000E07000E07000E0700FFFFF0FF FFF00E07000E07000E07000E07000E07000E07000E07000E07000E07000E07000E07000E07007F 0FF07F0FF0181A809916>11 D<007E0001FE0007CF00070F000F0F000E0F000E00000E00000E00 000E0000FFFF00FFFF000E07000E07000E07000E07000E07000E07000E07000E07000E07000E07 000E07000E07007F0FE07F0FE0131A809915>I<007F0001FF0007CF00070F000F0F000E07000E 07000E07000E07000E0700FFFF00FFFF000E07000E07000E07000E07000E07000E07000E07000E 07000E07000E07000E07000E07007F9FE07F9FE0131A809915>I<007E1F8001FF7FC007C7F1E0 0707C1E00F07C1E00E0781E00E0380000E0380000E0380000E038000FFFFFFE0FFFFFFE00E0380 E00E0380E00E0380E00E0380E00E0380E00E0380E00E0380E00E0380E00E0380E00E0380E00E03 80E00E0380E07F8FE3FC7F8FE3FC1E1A809920>I39 D<01C00380070007000E001C001C00380038007000700070006000E000E000E000E000E000E000 E000E000E000E000E000E0006000700070007000380038001C001C000E0007000700038001C00A 267E9B0F>II44 DII<1FC03FF071F8F078F83CF83CF81C701C003C003C0038007800 F000E001C0038007000E0C1C0C380C7018FFF8FFF8FFF80E187E9713>50 D58 DI<1FC07FF07070F038F038F038F038007800F001E003C003 8007000700060006000600060000000000000000000F000F000F000F000D1A7E9912>63 D<000C0000001E0000001E0000001E0000003F0000003F0000003F000000778000006780000067 800000C3C00000C3C00000C3C0000181E0000181E0000181E0000300F00003FFF00003FFF00006 00780006007800060078000E003C001E003C00FF81FFC0FF81FFC01A1A7F991D>65 DI<007F0601FFE607E0FE0F803E1E001E3C001E3C000E78000E780006F00006F0 0000F00000F00000F00000F00000F00000F000067800067800063C000E3C000C1E001C0F803807 E0F001FFE0007F80171A7E991C>III73 D76 DII80 D<0FC63FF6787E701EE00EE00EE006E006F000FC007F807FF03FFC0FFE01FE003F000F 000FC007C007E007E00FF00EFC3CDFFCC7F0101A7E9915>83 D<7FFFFF007FFFFF00781E0F0060 1E0300601E0300E01E0380C01E0180C01E0180C01E0180001E0000001E0000001E0000001E0000 001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E00 00001E000003FFF00003FFF000191A7F991C>II87 D<18387060E0C0C0F8F8F878050B7E990B>96 D<7F80FFE0F1F0F0F0607000F01FF03FF07870F0 70E070E073E0F3F1F37FFE3F3C10107E8F13>II<07F81FFC3C3C783C7018E000 E000E000E000E000F0007000780C3E1C1FF807E00E107F8F11>I<007E00007E00000E00000E00 000E00000E00000E00000E00000E00000E000FEE001FFE003C3E00781E00700E00E00E00E00E00 E00E00E00E00E00E00E00E00700E00781E003C3E001FFFC00FCFC0121A7F9915>I<07E01FF03C 78701C701CFFFCFFFCE000E000E000F0007000780C3E1C1FF807E00E107F8F11>I<00F803FC07 BC0F3C0E3C0E000E000E000E000E00FFC0FFC00E000E000E000E000E000E000E000E000E000E00 0E000E007FE07FE00E1A80990C>I<0FDF1FFF38777038703870387038703838703FE07FC07000 70007FF83FFC7FFEF01FE00FE007E007F00F7C3E3FFC0FF010187F8F13>II<3C 003C003C003C00000000000000000000000000FC00FC001C001C001C001C001C001C001C001C00 1C001C001C001C00FF80FF80091A80990A>I<01E001E001E001E0000000000000000000000000 07E007E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E060E0F1 E0F3C0FF807F000B2183990C>IIIII<07E01FF8381C700E6006E007E007E007E007E007E007700E700E3C3C 1FF807E010107F8F13>II<07C6001FF6003E3E00781E00700E00F00E00E00E00E00E00E00E00E00E00 
F00E00700E00781E003C7E001FFE000FCE00000E00000E00000E00000E00000E00007FC0007FC0 12177F8F14>II<3FE07FE0F0E0E060E060F000FF807FC01FE001F0C0F0E070E070F0F0FFE0DF80 0C107F8F0F>I<0C000C000C000C001C001C003C00FFC0FFC01C001C001C001C001C001C001C00 1C601C601C601C601EE00FC007800B177F960F>IIII<7F1FC07F1FC00F1E00071C0003B80003B00001E00000E00000F00001F00003B8 00071C00061C001E0E00FF1FE0FF1FE01310808F14>II<7FF87FF870F860F061E063C063C007800F18 1E181E183C387830F870FFF0FFF00D107F8F11>I E /Fi 8 118 df<78FCFCFCFC780000000000 78FCFCFCFC7806117D900C>58 D68 D<03FC001FFF003F1F007C1F007C1F00F80E00F80000F80000F80000F80000 F80000FC00007C00007E01803F87801FFF0003FC0011117F9014>99 D<1E003F003F003F003F00 1E0000000000000000007F007F001F001F001F001F001F001F001F001F001F001F001F001F001F 00FFC0FFC00A1B809A0C>105 D110 D<03F8000FFE003E0F803C07807803C07803C0F803E0F803E0F803E0F803E0F803E0F803E07803 C07C07C03E0F800FFE0003F80013117F9016>I<1FF07FF07070E030F030FC00FFE07FF07FF81F FC01FCC03CE01CE01CF838FFF8CFE00E117F9011>115 D117 D E /Fj 55 123 df<001FF3F800FFFFFE03F87F3E07E07E3E0FC07E3E0F807C1C0F 807C000F807C000F807C000F807C000F807C00FFFFFFC0FFFFFFC00F807C000F807C000F807C00 0F807C000F807C000F807C000F807C000F807C000F807C000F807C000F807C000F807C000F807C 000F807C007FE1FFC07FE1FFC01F1D809C1C>11 D<001FFC0000FFFC0003F87C0007E07C000FC0 7C000F807C000F807C000F807C000F807C000F807C000F807C00FFFFFC00FFFFFC000F807C000F 807C000F807C000F807C000F807C000F807C000F807C000F807C000F807C000F807C000F807C00 0F807C000F807C000F807C007FF3FF807FF3FF80191D809C1B>13 D<007000E001C003C007800F 000F001E001E003C003C007C00780078007800F800F800F000F000F000F000F000F000F000F800 F8007800780078007C003C003C001E001E000F000F00078003C001C000E000700C297D9E13>40 DI<7CFEFFFFFFFF7F0307060E1C3C7830080F7D860D>44 DI<7CFEFEFEFEFE7C07077D860D>I<00600001E0000FE000FF E000F3E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003 E00003E00003E00003E00003E00003E00003E00003E00003E0007FFF807FFF80111B7D9A18>49 D<07F8003FFF00787F80F81FC0FC0FC0FC0FE0FC0FE0FC07E07807E0000FE0000FE0000FC0001F 80003F00003E00007C0000F00001E00003C0600780600F00601C00E03FFFC07FFFC0FFFFC0FFFF C0FFFFC0131B7E9A18>I<1C03801FFF801FFF801FFF001FFC001FF8001FC00018000018000018 00001BFC001FFF001E1F801C0FC01807C00007E00007E00007E07807E0F807E0F807E0F807E0F8 0FC0700FC07C3F803FFE0007F800131B7E9A18>53 D<00038000000380000007C0000007C00000 07C000000FE000000FE000001FF000001BF000001BF0000031F8000031F8000061FC000060FC00 00E0FE0000C07E0000C07E0001803F0001FFFF0003FFFF8003001F8003001F8006000FC006000F C00E000FE00C0007E0FFC07FFEFFC07FFE1F1C7E9B24>65 DI<001FF06000FFFCE003FC1FE0 0FE007E01FC003E01F8001E03F0000E07F0000E07F0000E07E000060FE000060FE000000FE0000 00FE000000FE000000FE000000FE000000FE0000007E0000607F0000607F0000603F0000E01F80 00C01FC001C00FE0078003FC1F0000FFFC00001FF0001B1C7D9B22>IIII<001FF818 00FFFE3803FC0FF807F003F80FC000F81F8000783F8000787F0000387F0000387E000018FE0000 18FE000000FE000000FE000000FE000000FE000000FE007FFFFE007FFF7E0001F87F0001F87F00 01F83F8001F81F8001F80FE001F807F003F803FE07F800FFFE78001FF818201C7D9B26>II< FFFFFFFF07E007E007E007E007E007E007E007E007E007E007E007E007E007E007E007E007E007 E007E007E007E007E007E007E0FFFFFFFF101C7F9B12>I75 D III<003FE00001FF FC0003F07E000FC01F801F800FC01F0007C03F0007E07F0007F07E0003F07E0003F0FE0003F8FE 0003F8FE0003F8FE0003F8FE0003F8FE0003F8FE0003F8FE0003F87E0003F07E0003F07F0007F0 3F0007E03F800FE01F800FC00FC01F8003F07E0001FFFC00003FE0001D1C7D9B24>II82 D<07F8601FFFE03E0FE078 03E07001E0F000E0F00060F80060F80000FE0000FFF0007FFE007FFF803FFFC01FFFE007FFE000 
7FF00007F00001F00001F0C000F0C000F0E000F0E001E0F001E0FE07C0FFFF80C3FE00141C7D9B 1B>I<7FFFFFE07FFFFFE0781F81E0701F80E0601F8060E01F8070C01F8030C01F8030C01F8030 C01F8030001F8000001F8000001F8000001F8000001F8000001F8000001F8000001F8000001F80 00001F8000001F8000001F8000001F8000001F8000001F8000001F800007FFFE0007FFFE001C1C 7E9B21>III<7FFE 1FFE007FFE1FFE0007F001800003F803800001FC07000000FC06000000FE0C0000007F1C000000 3F380000003FB00000001FE00000000FE00000000FE000000007F000000003F800000007F80000 000FFC0000000CFE000000187E000000387F000000703F800000601F800000C01FC00001C00FE0 00018007F000030007F000FFF03FFF80FFF03FFF80211C7F9B24>88 D<7FFFFC7FFFFC7E01FC78 03F87007F0E007F0E00FE0C01FE0C01FC0C03F80003F80007F0000FE0000FE0001FC0001FC0003 F80607F00607F0060FE0061FE00E1FC00E3F801C3F801C7F003CFE00FCFFFFFCFFFFFC171C7D9B 1D>90 D<0FFC003FFF003E1F803E0FC03E07C01C07C00007C003FFC01FFFC07F87C07F07C0FE07 C0FC07C0FC07C0FE0FC07E3FE03FFBF80FE1F815127F9117>97 DI<03FC001FFF003F1F007E1F007E1F00FC0E00FC0000FC0000FC0000FC0000FC0000FC00 00FC00007E01807F03803F87001FFE0003F80011127E9115>I<03FC000FFF003F0F803E07C07E 03C07C03E0FC03E0FFFFE0FFFFE0FC0000FC0000FC00007C00007E00603F00E01FC3C00FFF8003 FE0013127F9116>101 D<07F9F01FFFF83E1F787C0FB87C0F807C0F807C0F807C0F807C0F803E 1F003FFE0037F8007000007000007800003FFF803FFFE01FFFF07FFFF0F801F8F000F8F00078F0 0078F800F87E03F03FFFE007FF00151B7F9118>103 DI< 1E003F007F007F007F003F001E0000000000000000000000FF00FF001F001F001F001F001F001F 001F001F001F001F001F001F001F001F00FFE0FFE00B1E7F9D0E>I<00F801FC01FC01FC01FC01 FC00F80000000000000000000003FC03FC007C007C007C007C007C007C007C007C007C007C007C 007C007C007C007C007C007C007C707CF87CF8FCF9F87FF03F800E26839D0F>IIIII<01 FC000FFF801F07C03E03E07C01F07C01F0FC01F8FC01F8FC01F8FC01F8FC01F8FC01F87C01F07C 01F03E03E01F07C00FFF8001FC0015127F9118>II114 D<1FF87FF87078E018E018F000FF80FFF07FF83FF80FFC007CC03C E01CE01CF878FFF8CFE00E127E9113>I<030003000300070007000F000F003F00FFFCFFFC1F00 1F001F001F001F001F001F001F001F001F0C1F0C1F0C1F0C1F9C0FF803F00E1A7F9913>IIIIII<3FFF803FFF80 3C3F80307F00707E0060FC0061FC0063F80003F00007E1800FE1801FC1801F83803F03007F0700 FE0F00FFFF00FFFF0011127F9115>I E /Fk 69 123 df<003F1F8001FFFFC003C3F3C00783E3 C00F03E3C00E01C0000E01C0000E01C0000E01C0000E01C0000E01C000FFFFFC00FFFFFC000E01 C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E 01C0000E01C0000E01C0000E01C0007F87FC007F87FC001A1D809C18>11 D<003F0001FF8003C3C00783C00F03C00E03C00E00000E00000E00000E00000E0000FFFFC0FFFF C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01 C00E01C07F87F87F87F8151D809C17>I<003FC001FFC003C3C00783C00F03C00E01C00E01C00E 01C00E01C00E01C00E01C0FFFFC0FFFFC00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E 01C00E01C00E01C00E01C00E01C00E01C00E01C07FCFF87FCFF8151D809C17>I<003F83F00001 FFDFF80003E1FC3C000781F83C000F01F03C000E01E03C000E00E000000E00E000000E00E00000 0E00E000000E00E00000FFFFFFFC00FFFFFFFC000E00E01C000E00E01C000E00E01C000E00E01C 000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E0 1C000E00E01C000E00E01C007FC7FCFF807FC7FCFF80211D809C23>I<7070F8F8FCFCFCFC7C7C 0C0C0C0C1C1C181838387070F0F060600E0D7F9C15>34 D<00F0000001F80000039C0000070C00 00070C0000070C0000070C0000071C0000071C0000073800000770000007F07FE003E07FE003C0 1F0007C00E000FC01C001FC018003DE0380078F0300070F07000F0786000F03CE000F03FC000F0 1F8000F80F0060780FC0E07C3FE1C03FF9FF800FE03F001B1D7E9C20>38 D<70F8FCFC7C0C0C1C183870F060060D7D9C0C>I<01C00380038007000E000C001C0018003800 
38007000700070007000E000E000E000E000E000E000E000E000E000E000E000E000E000E00070 007000700070003800380018001C000C000E0007000380038001C00A2A7D9E10>II<70F8F8F878181818383070E060050D7D840C>44 DI<70F8F8F87005057D840C>I<030007003F00FF00C700070007000700070007000700070007 00070007000700070007000700070007000700070007000700FFF8FFF80D1B7C9A15>49 D<0FE03FF878FC603EF01EF81FF80FF80F700F000F001F001E003E003C007800F001E001C00380 07000E031C0338037006FFFEFFFEFFFE101B7E9A15>I<0FE03FF8387C783E7C1E781E781E001E 003C003C00F807F007E00078003C001E000F000F000F700FF80FF80FF81EF01E787C3FF80FE010 1B7E9A15>I<001C00001C00003C00007C00007C0000DC0001DC00039C00031C00071C000E1C00 0C1C00181C00381C00301C00601C00E01C00FFFFC0FFFFC0001C00001C00001C00001C00001C00 001C0001FFC001FFC0121B7F9A15>I<301C3FFC3FF83FE030003000300030003000300037E03F F83C3C381E301E000F000F000F000FF00FF00FF00FF01E703E787C3FF80FE0101B7E9A15>I<01 F807FC0F8E1E1E3C1E381E781E78007000F080F7F8FFFCFC1CF81EF80FF00FF00FF00FF00FF00F 700F700F781E381E1E3C0FF807E0101B7E9A15>I<6000007FFF807FFF807FFF80600700C00600 C00E00C01C0000380000300000700000600000E00000C00001C00001C00003C000038000038000 038000078000078000078000078000078000078000078000078000111C7E9B15>I<07E01FF83C 3C381E701E700E700E780E7C1E7F3C3FF81FF00FF01FFC3DFC787E703FF00FE00FE007E007E007 F00E781E3C3C1FF807E0101B7E9A15>I<07E01FF83C38781C781EF00EF00EF00FF00FF00FF00F F00FF01F781F383F3FFF1FEF010F000E001E781E781C783C787878F03FE01F80101B7E9A15>I< 70F8F8F870000000000000000070F8F8F87005127D910C>I<70F8F8F870000000000000000070 F8F8F878181818383070E060051A7D910C>I<00060000000F0000000F0000000F0000001F8000 001F8000001F8000003FC0000033C0000033C0000073E0000061E0000061E00000E1F00000C0F0 0000C0F00001C0F8000180780001FFF80003FFFC0003003C0003003C0007003E0006001E000600 1E001F001F00FFC0FFF0FFC0FFF01C1C7F9B1F>65 DI<003FC180 01FFF18003F07B800FC01F801F000F801E0007803C0003807C0003807800038078000180F00001 80F0000000F0000000F0000000F0000000F0000000F0000000F000000078000180780001807C00 01803C0003801E0003001F0007000FC00E0003F03C0001FFF000003FC000191C7E9B1E>II< FFFFFCFFFFFC0F007C0F001C0F000C0F000E0F00060F03060F03060F03060F03000F07000FFF00 0FFF000F07000F03000F03000F03030F03030F00030F00060F00060F00060F000E0F001E0F007C FFFFFCFFFFFC181C7E9B1C>II<003FC18001FFF18003F07B800F C01F801F000F801E0007803C0003807C0003807800038078000180F0000180F0000000F0000000 F0000000F0000000F0000000F000FFF0F000FFF078000780780007807C0007803C0007801E0007 801F0007800FC00F8003F03F8001FFFB80003FE1801C1C7E9B21>III76 DII<003F800001FFF00003E0F80007001C000E 000E001C0007003C00078038000380780003C0700001C0F00001E0F00001E0F00001E0F00001E0 F00001E0F00001E0F00001E0F00001E0780003C0780003C0380003803C0007801E000F000E000E 0007803C0003E0F80001FFF000003F80001B1C7E9B20>II82 D<07F1801FFD803C1F80700780700380E00380E00180E00180F00000F80000FE00007FE0003FFC 001FFE000FFF0000FF80000F800007C00003C00001C0C001C0C001C0E001C0E00380F00780FE0F 00DFFE00C7F800121C7E9B17>I<7FFFFFC07FFFFFC0780F03C0700F01C0600F00C0E00F00E0C0 0F0060C00F0060C00F0060C00F0060000F0000000F0000000F0000000F0000000F0000000F0000 000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F00 0003FFFC0003FFFC001B1C7F9B1E>II87 D<18183C3C383870706060E0E0C0C0C0C0F8F8FCFCFCFC7C7C38380E0D7B 9C15>92 D<1FE0003FF8003C3C003C1E00180E00000E00001E0007FE003FFE007E0E00F80E00F8 0E00F00E60F00E60F81E607C7E607FFFC01FC78013127F9115>97 DI<07F80FFC3E3C3C3C78187800F000F000F000F000F000F000780078063C0E3F1C0FF8 07F00F127F9112>I<001F80001F80000380000380000380000380000380000380000380000380 
00038007F3801FFF803E1F807C0780780380F80380F00380F00380F00380F00380F00380F00380 F003807807807C0F803E1F801FFBF007E3F0141D7F9C17>I<07E01FF83E7C781C781EF01EFFFE FFFEF000F000F000F000780078063C0E3F1C0FF807F00F127F9112>I<00FC03FE079E071E0F1E 0E000E000E000E000E000E00FFE0FFE00E000E000E000E000E000E000E000E000E000E000E000E 000E000E007FE07FE00F1D809C0D>I<07E7C01FFFC03C3DC0781E00781E00781E00781E00781E 00781E003C3C003FF80037E0007000007000007800003FFC003FFF007FFF807807C0F003C0E001 C0E001C0F003C0F807C07C0F801FFE0007F800121B7F9115>II<3C007C007C007C003C00000000000000000000000000FC00FC001C001C001C001C001C00 1C001C001C001C001C001C001C001C001C00FF80FF80091D7F9C0C>I<01C003E003E003E001C0 0000000000000000000000000FE00FE000E000E000E000E000E000E000E000E000E000E000E000 E000E000E000E000E000E000E000E0F0E0F1E0F3C0FF807E000B25839C0D>IIIII<03F0 000FFC001E1E00380700780780700380F003C0F003C0F003C0F003C0F003C0F003C07003807807 803807001E1E000FFC0003F00012127F9115>II<07F1801FF9803F1F803C0F80 780780780380F00380F00380F00380F00380F00380F00380F803807807807C0F803E1F801FFB80 07E380000380000380000380000380000380000380001FF0001FF0141A7F9116>II<1F B07FF0F0F0E070E030F030F8007FC07FE01FF000F8C078C038E038F078F8F0FFF0CFC00D127F91 10>I<0C000C000C000C000C001C001C003C00FFE0FFE01C001C001C001C001C001C001C001C00 1C301C301C301C301C301E700FE007C00C1A7F9910>IIII<7F8FF07F8FF00F0F8007 0F00038E0001DC0001D80000F00000700000780000F80001DC00038E00030E000707001F0780FF 8FF8FF8FF81512809116>II<7FFC7FFC783C707860F061E061E063C00780078C 0F0C1E0C1E1C3C187818F078FFF8FFF80E127F9112>I E /Fl 43 123 df<3E007C007F00FE00 FF81FF00FF81FF00FFC1FF80FFC1FF80FFC1FF807FC0FF803EC07D8000C0018000C0018001C003 8001800300018003000380070007000E0006000C000E001C001C00380038007000300060001915 7EA924>34 D<0003F0000000000FFC000000001F1C000000003E0E000000007C0F00000000FC07 00000000FC0700000001F80700000001F80700000001F80700000001FC0E00000001FC0E000000 01FC1C00000001FC3800000001FC7000000001FEF0007FFC00FFE0007FFC00FF80007FFC00FF00 00078000FF00000F00007F80000E00007F80001E00007FC0003C0001FFC000380003FFE0007800 079FF000F0000F8FF000E0001F0FF801E0003F07FC03C0007F03FE0780007F03FF0F0000FF01FF 0E0000FF00FF9E0000FF007FFC0000FF803FF80000FF801FF000387F800FFC00387FC01FFE0070 3FE07FFF81F01FFFFC7FFFE007FFF01FFFC000FF8001FE002E2A7DA935>38 D<3E007F00FF80FF80FFC0FFC0FFC07FC03EC000C000C001C0018001800380070006000E001C00 380030000A157B8813>44 DI<003F 800001FFF00007E0FC000FC07E001F803F001F803F003F001F803F001F807F001FC07F001FC07F 001FC07F001FC0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0 FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE07F001FC07F001FC07F001F C07F001FC03F001F803F001F801F803F001F803F000FC07E0007E0FC0001FFF000003F80001B27 7DA622>48 D<000700000F00007F0007FF00FFFF00FFFF00F8FF0000FF0000FF0000FF0000FF00 00FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF00 00FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF007FFFFE 7FFFFE7FFFFE17277BA622>I<00FFC00007FFF8001FFFFE003F03FF007E00FF807F007FC0FF80 7FC0FF803FE0FF803FE0FF803FE0FF801FE07F001FE03E003FE000003FE000003FC000003FC000 007F8000007F800000FF000001FE000001FC000003F8000007E000000FC000001F8000003E00E0 007C00E0007800E000F001C001E001C0038001C007FFFFC00FFFFFC01FFFFFC03FFFFFC07FFFFF 80FFFFFF80FFFFFF80FFFFFF801B277DA622>I<007FC00003FFF00007FFFC000FC1FE001F80FF 003FC0FF003FE07F803FE07F803FE07F803FE07F803FE07F801FC0FF800F80FF000000FF000001 FE000003FC000007F00000FFC00000FFF8000001FE000000FF0000007F8000007FC000003FC000 003FE01E003FE07F803FE07F803FE0FFC03FE0FFC03FE0FFC03FE0FFC03FC0FFC07FC07F807F80 
7F00FF803F81FF001FFFFC0007FFF80000FFC0001B277DA622>I<0000070000000F0000001F00 00003F0000007F000000FF000000FF000001FF000003FF0000077F00000F7F00000E7F00001C7F 0000387F0000707F0000F07F0000E07F0001C07F0003807F0007007F000F007F000E007F001C00 7F0038007F0070007F00E0007F00FFFFFFF8FFFFFFF8FFFFFFF80000FF000000FF000000FF0000 00FF000000FF000000FF000000FF00007FFFF8007FFFF8007FFFF81D277EA622>I<0C0007000F C03F000FFFFE000FFFFE000FFFFC000FFFF8000FFFF0000FFFC0000FFF00000E0000000E000000 0E0000000E0000000E0000000E0000000E7FC0000FFFF8000FC1FE000E007F000C007F8000003F 8000003FC000003FC000003FE000003FE03E003FE07F003FE0FF803FE0FF803FE0FF803FE0FF80 3FC0FF003FC07E007FC078007F803C00FF001F83FE000FFFFC0007FFF00000FF80001B277DA622 >I<0007F800003FFC0000FFFE0001FE1F0007F81F000FE03F800FC07F801FC07F803F807F803F 807F807F803F007F001E007F0000007F000000FF000000FF1FE000FF3FF800FF70FE00FFE03F00 FFC03F80FF801FC0FF801FC0FF801FC0FF001FE0FF001FE0FF001FE0FF001FE07F001FE07F001F E07F001FE07F001FE03F801FC03F801FC01F803F800FC03F8007F0FF0003FFFC0001FFF800007F E0001B277DA622>I<380000003E0000003FFFFFF03FFFFFF03FFFFFF03FFFFFE07FFFFFC07FFF FF807FFFFF807FFFFF0070001E0070003C0070007800E0007000E000F000E001E0000003C00000 07C00000078000000F8000000F0000001F0000003F0000003F0000003F0000007E0000007E0000 00FE000000FE000000FE000000FE000001FE000001FE000001FE000001FE000001FE000001FE00 0001FE000001FE000000FC0000007800001C297CA822>I<007FC00001FFF80007FFFC000FC0FE 000F003F001E001F001E001F803E000F803E000F803F000F803FC00F803FF01F803FF81F003FFE 3F001FFFFE001FFFFC000FFFF00007FFFC0003FFFE0003FFFF000FFFFF801FBFFFC03F0FFFC07E 03FFE07C00FFE0FC007FE0F8001FE0F80007E0F80007E0F80003E0F80003E0FC0003C07C0007C0 7E0007803F000F801FC07F000FFFFE0007FFF80000FFC0001B277DA622>I<007FC00003FFF000 07FFFC000FE0FE001FC07E003F803F007F003F807F003F80FF001FC0FF001FC0FF001FC0FF001F C0FF001FE0FF001FE0FF001FE0FF001FE07F003FE07F003FE07F003FE03F807FE01F80FFE00FE1 DFE003FF9FE000FF1FE000001FE000001FC000001FC00F001FC01F803FC03FC03F803FC03F803F C07F003FC07F003F80FE001F01FC001F07F8000FFFE00007FFC00001FE00001B277DA622>I<00 007FF801800007FFFE0780001FFFFF8F80007FF80FFF8000FF8001FF8003FE00007F8007FC0000 3F8007F800001F800FF000000F801FE000000F803FE0000007803FC0000007807FC0000003807F C0000003807FC000000380FF8000000000FF8000000000FF8000000000FF8000000000FF800000 0000FF8000000000FF8000000000FF8000000000FF8000000000FF8000000000FF80000000007F C0000000007FC0000003807FC0000003803FC0000003803FE0000003801FE0000007800FF00000 070007F800000F0007FC00001E0003FE00003C0000FF8000F800007FF807F000001FFFFFC00000 07FFFF000000007FF8000029297CA832>67 D<00007FF003000007FFFE0F00001FFFFF9F00007F F00FFF0000FF8003FF0003FE0000FF0007FC00007F000FF800003F000FF000001F001FE000001F 003FE000000F003FC000000F007FC0000007007FC0000007007FC000000700FF8000000000FF80 00000000FF8000000000FF8000000000FF8000000000FF8000000000FF8000000000FF80000000 00FF8000000000FF8003FFFFF8FF8003FFFFF87FC003FFFFF87FC00001FF007FC00001FF003FC0 0001FF003FE00001FF001FE00001FF000FF00001FF000FF80001FF0007FC0001FF0003FE0003FF 0000FF8003FF00007FF01FFF00001FFFFF3F000007FFFE1F0000007FF007002D297CA836>71 D73 D77 D<0000FFE000000007FFFC0000003FC07F8000007F001FC00001FC0007F00003F80003F80007F0 0001FC000FF00001FE001FE00000FF001FE00000FF003FC000007F803FC000007F807FC000007F C07F8000003FC07F8000003FC07F8000003FC0FF8000003FE0FF8000003FE0FF8000003FE0FF80 00003FE0FF8000003FE0FF8000003FE0FF8000003FE0FF8000003FE0FF8000003FE0FF8000003F E07F8000003FC07FC000007FC07FC000007FC03FC000007F803FC000007F801FE00000FF001FE0 0000FF000FF00001FE0007F00001FC0003F80003F80001FC0007F00000FF001FE000003FC07F80 
00000FFFFE00000000FFE000002B297CA834>79 D 82 D<00FF806003FFF0E00FFFFFE01FC0FFE03F001FE03E0007E07E0003E07C0003E0FC0001E0 FC0001E0FC0000E0FE0000E0FF0000E0FF800000FFF80000FFFFC0007FFFF8007FFFFE003FFFFF 801FFFFFC00FFFFFC007FFFFE001FFFFF0003FFFF00003FFF800001FF800000FF8000007F8E000 03F8E00001F8E00001F8E00001F8F00001F8F00001F0F80003F0FC0003E0FF0007E0FFE01FC0FF FFFF80E1FFFE00C03FF8001D297CA826>I<0300060007000E000E001C001C0038001800300038 0070007000E0006000C0006000C000E001C000C0018000C0018000DF01BE00FF81FF00FFC1FF80 FFC1FF80FFC1FF807FC0FF807FC0FF803F807F001F003E00191578A924>92 D<03FFC0000FFFF0001F81FC003FC0FE003FC07F003FC07F003FC03F803FC03F801F803F800000 3F8000003F80001FFF8001FFFF8007FE3F801FE03F803FC03F807F803F807F003F80FE003F80FE 003F80FE003F80FE007F80FF007F807F00FFC03FC3DFFC1FFF8FFC03FE07FC1E1B7E9A21>97 D<003FF80001FFFE0007F83F000FE07F801FC07F803F807F803F807F807F807F807F003F00FF00 0000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF0000007F0000007F 8000003F8001C03FC001C01FC003C00FE0078007F83F0001FFFC00003FF0001A1B7E9A1F>99 D<00003FF80000003FF80000003FF800000003F800000003F800000003F800000003F800000003 F800000003F800000003F800000003F800000003F800000003F800000003F800000003F800003F E3F80001FFFBF80003F83FF8000FE00FF8001FC007F8003F8003F8003F8003F8007F8003F8007F 0003F800FF0003F800FF0003F800FF0003F800FF0003F800FF0003F800FF0003F800FF0003F800 FF0003F800FF0003F8007F0003F8007F0003F8003F8003F8003F8007F8001FC00FF8000FE01FF8 0007F07FFF8001FFFBFF80003FC3FF80212A7EA926>I<003FE00001FFFC0007F07E000FE03F00 1FC01F803F800FC03F800FC07F000FC07F0007E0FF0007E0FF0007E0FF0007E0FFFFFFE0FFFFFF E0FF000000FF000000FF000000FF0000007F0000007F8000003F8000E03F8001E01FC001C00FE0 07C003F81F8001FFFE00003FF8001B1B7E9A20>I<000FF800003FFE0000FF3F0001FC7F8003F8 7F8003F87F8007F07F8007F07F8007F03F0007F0000007F0000007F0000007F0000007F0000007 F00000FFFFC000FFFFC000FFFFC00007F0000007F0000007F0000007F0000007F0000007F00000 07F0000007F0000007F0000007F0000007F0000007F0000007F0000007F0000007F0000007F000 0007F0000007F0000007F0000007F0000007F000007FFF80007FFF80007FFF8000192A7EA915> I<00FF81F007FFF7FC0FE3FF7C1F80FCFC3F80FE7C3F007E787F007F007F007F007F007F007F00 7F007F007F007F007F003F007E003F80FE001F80FC000FE3F8001FFFF00018FF8000380000003C 0000003C0000003E0000003FFFFC003FFFFF001FFFFFC00FFFFFE007FFFFF03FFFFFF07E000FF8 7C0001F8F80001F8F80000F8F80000F8F80000F8FC0001F87E0003F03F0007E01FE03FC007FFFF 0000FFF8001E287E9A22>II<0F801FC01F E03FE03FE03FE01FE01FC00F800000000000000000000000000000FFE0FFE0FFE00FE00FE00FE0 0FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE0FFFEFF FEFFFE0F2B7DAA14>I108 DII<003FE00001FFFC0003F07E000FC01F801F800FC03F800FE03F0007 E07F0007F07F0007F07F0007F0FF0007F8FF0007F8FF0007F8FF0007F8FF0007F8FF0007F8FF00 07F8FF0007F87F0007F07F0007F03F800FE03F800FE01F800FC00FC01F8007F07F0001FFFC0000 3FE0001D1B7E9A22>II114 D<03FE701FFFF03E03F078 01F07000F0F00070F00070F80070FE0000FFF000FFFF007FFFC03FFFE01FFFF007FFF800FFFC00 07FC0000FCE0007CE0003CF0003CF0003CF80078FC0078FF01F0FFFFC0E1FF00161B7E9A1B>I< 00700000700000700000700000F00000F00000F00001F00003F00003F00007F0001FFFF0FFFFF0 FFFFF007F00007F00007F00007F00007F00007F00007F00007F00007F00007F00007F00007F000 07F00007F03807F03807F03807F03807F03807F03807F03803F87001FCF000FFE0003FC015267F A51B>III 120 DI<3FFFFF803FFFFF803F00FF803C00FF003801FE007803FC007807FC0070 07F800700FF000701FE000001FE000003FC000007F800000FF800000FF000001FE038003FC0380 03FC038007F803800FF007801FF007801FE007003FC00F007F801F00FF807F00FFFFFF00FFFFFF 00191B7E9A1F>I E /Fm 28 122 
df<70F8FCFC7C0C0C0C1C18183870E0E0060F7C840E>44 D<018003800F80FF80F38003800380038003800380038003800380038003800380038003800380 038003800380038003800380038003800380038003800380FFFEFFFE0F217CA018>49 D<03F8000FFE003C3F00380F807007C06007C0E003E0F803E0F803E0F801E0F801E07003E00003 E00003C00007C00007C0000F80000F00001E00003C0000780000F00000E00001C0000380000700 600E00601C00603800E07000C0FFFFC0FFFFC0FFFFC013217EA018>I<03F8000FFE001E1F0038 0F807007C07807C07C07C07807C07807C00007C00007C0000780000F80001F00003E0003FC0003 F800001E00000F000007800007C00003E00003E07003E0F803E0F803E0F803E0F803E0E007C070 07C0780F803E1F000FFE0003F80013227EA018>I<03F8000FFC001F1E003C0700380780780380 7003C0F003C0F001C0F001C0F001E0F001E0F001E0F001E0F001E0F003E07803E07803E03C07E0 3E0FE01FFDE007F9E00081E00001C00003C00003C0000380780780780700780F00781E00787C00 3FF8000FE00013227EA018>57 D<000180000003C0000003C0000003C0000007E0000007E00000 07E000000FF000000CF000000CF000001CF800001878000018780000383C0000303C0000303C00 00601E0000601E0000601E0000C00F0000C00F0000C00F0001FFFF8001FFFF8001800780030003 C0030003C0030003C0060001E0060001E0060001E00E0000F01F0001F0FFC00FFFFFC00FFF2023 7EA225>65 D<000FF030007FFC3000FC1E7003F0077007C003F00F8001F01F0001F01F0000F03E 0000F03C0000707C0000707C0000707C000030F8000030F8000030F8000000F8000000F8000000 F8000000F8000000F8000000F8000000F80000307C0000307C0000307C0000303E0000703E0000 601F0000E01F0000C00F8001C007C0038003F0070000FC1E00007FFC00000FF0001C247DA223> 67 D<03FFF003FFF0000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00 000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00 000F00F80F00F80F00F80F00F81F00F01E00703E00787C003FF8000FE00014237EA119>74 D76 DI<07F0600FFE601E1FE03807E07003E07001E0E000E0E000E0E00060E000 60F00060F00000F800007C00007F00003FF0001FFE000FFF8003FFC0007FC00007E00001E00001 F00000F00000F0C00070C00070C00070E00070E000F0F000E0F801E0FC01C0FF0780C7FF00C1FC 0014247DA21B>83 D<1FF0003FFC003C3E003C0F003C0F00000700000700000F0003FF001FFF00 3F07007C0700F80700F80700F00718F00718F00F18F81F187C3FB83FF3F01FC3C015157E9418> 97 D<03FE000FFF801F07803E07803C0780780000780000F00000F00000F00000F00000F00000 F00000F000007800007800C03C01C03E01801F87800FFF0003FC0012157E9416>99 D<0000E0000FE0000FE00001E00000E00000E00000E00000E00000E00000E00000E00000E00000 E00000E003F8E00FFEE01F0FE03E03E07C01E07800E07800E0F000E0F000E0F000E0F000E0F000 E0F000E0F000E07000E07801E07801E03C03E01F0FF00FFEFE03F0FE17237EA21B>I<01FC0007 FF001F0F803E03C03C01C07801E07801E0FFFFE0FFFFE0F00000F00000F00000F00000F0000078 00007800603C00E03E01C00F83C007FF0001FC0013157F9416>I<0E0000FE0000FE00001E0000 0E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E3F800EFFE00FE1E0 0F80F00F00700F00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E0070 0E00700E00700E0070FFE7FFFFE7FF18237FA21B>104 D<1E003E003E003E001E000000000000 00000000000000000000000E00FE00FE001E000E000E000E000E000E000E000E000E000E000E00 0E000E000E000E000E00FFC0FFC00A227FA10E>I<01C003E003E003E001C00000000000000000 000000000000000001E00FE00FE001E000E000E000E000E000E000E000E000E000E000E000E000 E000E000E000E000E000E000E000E000E000E000E0F1E0F1C0F3C0FF803E000B2C82A10F>I<0E 0000FE0000FE00001E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E 00000E0FFC0E0FFC0E07E00E07800E07000E1E000E3C000E78000EF8000FFC000FFC000F1E000E 0F000E0F800E07800E03C00E03E00E01E00E01F0FFE3FEFFE3FE17237FA21A>I<0E00FE00FE00 1E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E 000E000E000E000E000E000E000E000E000E000E00FFE0FFE00B237FA20E>I<0E3FC0FF00FEFF 
DRAFT

Groups, Contexts, Communicators
Lyndon Clarke, Mark Sears, Anthony Skjellum, Marc Snir
June 22, 1993

1  Introduction

2  Groups

A group is an ordered set of process identifiers (henceforth process);
process identifiers are implementation dependent; a group is an opaque
object.  Each process in a group is associated with an integer rank,
starting from zero.

3  Context

A context is the MPI mechanism for partitioning communication space.  A
defining property of a context is that a send made in a context cannot be
received in another context.
A context is an integer.

4  Communicators

All MPI communication (both point-to-point and collective) functions use
communicators to provide a specific scope (context and group specifications)
for the communication.  In short, communicators bring together the concepts
of group and context; (furthermore, to support implementation-specific
optimizations, and virtual topologies, they "cache" additional information
opaquely).  The source and destination of a message is identified by the
rank of that process within the group, and communication using this
communicator is restricted to processes within this group.  For collective
communication, the communicator specifies the set of processes that
participate in the collective operation.  Thus, the communicator restricts
the "spatial" scope of communication, and provides local process addressing.

  Discussion: `Communicator' replaces the word `context' everywhere in
  current pt2pt and collcomm drafts.

Communicators are represented by opaque communicator objects.  Such objects
are opaque, and cannot be directly transferred from one node to another.

4.1  Predefined Communicators

Initial communicators are as follows:
  * MPI_COMM_SIBLINGS, SPMD siblings of a process.

  * MPI_COMM_HOST, A communicator for talking to one's HOST.

  * MPI_COMM_PARENT, A communicator for talking to one's PARENT (spawner).

  * MPI_COMM_SELF, A communicator for talking to one's self (useful for
    getting contexts for server purposes, etc.).

are defined when the program starts.

MPI implementations are required to provide at least these communicators;
however, not all forms of communication make sense for all systems.
Environmental inquiry will be provided to determine which of these
communicators are usable in a given implementation.

  Discussion: Environmental sub-committee needs to provide such inquiry
  functions for us.

5  Group Management

This section describes the manipulation of groups under various subheadings:
general, constructors, and so on.

5.1  General Operations

The following are all local (non-communicating) operations.

MPI_GROUP_SIZE(group, size)
IN group     handle to group object.
OUT size     is the integer number of processes in the group.

MPI_GROUP_RANK(group, rank)
IN group     handle to group object.
OUT rank     is the integer rank of the calling process in group, or
             MPI_UNDEFINED if you are not a member.

MPI_TRANSLATE_RANKS(group_a, ranks_a, n, group_b, ranks_b)
IN group_a   handle to group object "A"
IN ranks_a   array of zero or more valid ranks in group "A"
IN n         number of ranks in ranks_a array
IN group_b   handle to group object "B"
OUT ranks_b  array of corresponding ranks in group "B", MPI_UNDEFINED when
             no correspondence exists.
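The following informal sketch shows one intended use of these query
operations (pseudocode only; this draft defines no language bindings,
declarations, or array conventions, so the names and indexing below are
illustrative assumptions):

      mpi_group_size(group_a, size_a)
      mpi_group_rank(group_a, my_rank)
      ranks_a(1) = 0
      ranks_a(2) = 1
      mpi_translate_ranks(group_a, ranks_a, 2, group_b, ranks_b)

After the call, ranks_b(i) holds the rank in "B" of the process whose rank
in "A" is ranks_a(i), or MPI_UNDEFINED where no correspondence exists.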
MPI_GROUP_FLATTEN(group, buffer, max_length, actual_length)
IN group            handle to object
OUT buffer          byte-aligned buffer
IN max_length       maximum length of buffer in bytes
OUT actual_length   actual byte length of buffer containing flattened group
                    information

If insufficient space is available in buffer (as specified by max_length),
then the contents of buffer is undefined at exit.  The quantity
actual_length is always well-defined at exit as the number of bytes needed
to store the flattened group.

Though implementations may vary on how they store flattened groups, the
information must be sufficient to reconstruct the group using
MPI_GROUP_UNFLATTEN below.  The purpose of flattening and unflattening is to
allow interprocess transmission of group objects.

5.2  Constructors

Group constructors may either be local or may require communication.

5.2.1  Local Constructors

The execution of the following operations do not require interprocess
communication.

MPI_GROUP_UNFLATTEN(group, buffer, max_length)
OUT group           handle to object
IN buffer           byte-aligned buffer produced by MPI_GROUP_FLATTEN
IN max_length       maximum length of buffer in bytes

See MPI_GROUP_FLATTEN above.
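A minimal sketch of the intended flatten/unflatten usage (informal
pseudocode; the transfer steps stand for any point-to-point exchange and are
not operations defined in this chapter): one process flattens a group into a
byte buffer and ships it, and the receiver reconstructs an equivalent group
object:

      mpi_group_flatten(group, buffer, MAX_LEN, actual_len)
      ... transmit buffer(1 .. actual_len) to the peer process ...

      ... on the peer, after receiving the bytes into buffer ...
      mpi_group_unflatten(group, buffer, MAX_LEN)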
MPI_LOCAL_SUBGROUP(group, n, ranks, new_group)
IN group       handle to group object
IN n           number of elements in array ranks (and size of new_group)
IN ranks       array of integer ranks in group
OUT new_group  new group derived from above, preserving the order defined by
               ranks.

MPI_LOCAL_GROUP_UNION(group1, group2, group_out)
MPI_LOCAL_GROUP_INTERSECT(group1, group2, group_out)
MPI_LOCAL_GROUP_DIFFERENCE(group1, group2, group_out)
IN group1      first group object handle
IN group2      second group object handle
OUT group_out  group object handle

The set-like operations are defined as follows:

union        All elements of the first group (group1), followed by all
             elements of second group (group2) not in first

intersect    all elements of the first group which are also in the second
             group

difference   all elements of the first group which are not in the second
             group

  Discussion: What do people think about these local operations?  More?
  Less?  Note: these operations do not explicitly enumerate ranks, and
  therefore are more scalable if implemented efficiently...

MPI_FREE_GROUP(group)
IN group     frees group previously defined.

  Discussion: The point-to-point chapter suggests that there is a single
  destructor for all MPI opaque objects; however, it is arguable that this
  specifies the implementation of MPI very strongly.

5.2.2  Collective group constructors

The execution of the following operations require collective communication
within a group.

MPI_COLL_SUBGROUP(comm, membership_key, new_group)
IN comm            communicator object handle
IN membership_key  (integer)
OUT new_group      new group object handle

This collective function is called by all processes in the group associated
with comm.  A separate, non-overlapping group of processes is formed for
each distinct value of key, with the processes retaining their relative
order compared to the group of comm.
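For instance (an informal sketch, not part of the interface definition; mod
denotes the integer remainder), the group of comm can be split into an
"even" and an "odd" group by having every process supply its rank parity as
the membership key:

      mpi_comm_rank(comm, my_rank)
      key = mod(my_rank, 2)
      mpi_coll_subgroup(comm, key, new_group)

Each process receives the new group containing the processes that supplied
the same key, in their original relative order.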
MPI_COLL_GROUP_PERMUTE(comm, new_rank, new_group)
IN comm        communicator object handle
IN new_rank    (integer)
OUT new_group  new group object handle

This collective function operates over all elements of the group of comm.  A
correct program specifies a distinct new_rank in each process, which defines
a permutation of the original ordering.

  Discussion: Before, we had one function that implemented collective
  subsetting, plus permutation of ranks.  We convinced ourselves that it is
  better to have the above two functions, each of which has clear usage and
  implementation.  We do not believe that the performance of the
  two-function sequence needed to subset and permute will significantly
  differ from the all-encompassing function previously defined.  Comments?

  By the way, we have intentionally avoided making convenience functions
  that do analogous permutation or subsetting, while creating new
  communicators, because the user can easily do this without additional
  synchronizations beyond getting additional needed contexts.  Are these so
  convenient that we want to add them (e.g., MPI_COLL_COMM_PERMUTE)?

6  Operations on Contexts

6.1  Local Operations

MPI_CONTEXTS_RESERVE(n, contexts)    MPI_CONTEXTS_UNRESERVE(n, contexts)
IN n                      number of contexts to reserve (resp, unreserve)
OUT (resp, IN) contexts   integer array of contexts

Reserves (resp, unreserves) zero or more contexts.  A reserved context will
not be allocated by a subsequent call to MPI_CONTEXTS_ALLOC in the same
process (see below).  It is erroneous to reserve a context that has already
been allocated by MPI_CONTEXTS_ALLOC.

MPI_CONTEXTS_DEALLOC(n, contexts)
IN n          number of contexts to deallocate
IN contexts   integer array of contexts

Local deallocation of contexts allocated by MPI_CONTEXTS_ALLOC.

6.2  Collective Operations

MPI_CONTEXTS_ALLOC(comm, n, contexts)
IN comm       communicator object handle
IN n          number of contexts to allocate
OUT contexts  integer array of contexts

Allocates an array of contexts.  This collective operation is executed by
all processes in the group defined by comm.  The contexts that are allocated
by MPI_CONTEXTS_ALLOC are unique within the group associated with comm.  The
array is the same on all processes that call the function (same order, same
number of elements).

7  Operations on Communicators

7.1  General Operations

The following are all local (non-communicating) operations.

MPI_COMM_SIZE(comm, size)
IN comm     handle to communicator object.
OUT size    is the integer number of processes in the group of comm.

MPI_COMM_RANK(comm, rank)
IN comm     handle to communicator object.
OUT rank    is the integer rank of the calling process in group of comm, or
            MPI_UNDEFINED if you are not a member.

MPI_COMM_FLATTEN(comm, buffer, max_length, actual_length)
IN comm             handle to communicator object.
OUT buffer          byte-aligned buffer
IN max_length       maximum length of buffer in bytes
OUT actual_length   actual byte length of buffer containing flattened
                    communicator information

If insufficient space is available in buffer (as specified by max_length),
then the contents of buffer is undefined at exit.  The quantity
actual_length is always well-defined at exit as the number of bytes needed
to store the flattened communicator.

Though implementations may vary on how they store flattened communicators,
the information must be sufficient to reconstruct the communicator using
MPI_COMM_UNFLATTEN below.  The purpose of flattening and unflattening is to
allow interprocess transmission of communicator objects.

7.2  Local Constructors

MPI_COMM_UNFLATTEN(comm, buffer, max_length)
OUT comm            handle to object
IN buffer           byte-aligned buffer produced by MPI_COMM_FLATTEN
IN max_length       maximum length of buffer in bytes

See MPI_COMM_FLATTEN above.

MPI_COMM_BIND(group, context, comm_new)
IN group       object handle to be bound to new communicator
IN context     context to be bound to new communicator
OUT comm_new   the new communicator.

The above function creates a new communicator object, which is associated
with the group defined by group, and the specified context.  The operation
does not require communication.  It is correct to begin using a communicator
as soon as it is defined.  It is not erroneous to invoke this function twice
in the same process with the same context.  Finally, there is no explicit
synchronization over the group.
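Putting the local and collective pieces together, the following informal
sketch (pseudocode only; it assumes group is the group associated with comm,
see MPI_COMM_GROUP below, and that ranks holds the values 0 through
size/2 - 1) builds a communicator over the first half of comm's group.  The
only collective step is the context allocation; the binding itself is local:

      mpi_comm_size(comm, size)
      mpi_local_subgroup(group, size/2, ranks, half_group)
      mpi_contexts_alloc(comm, 1, contexts)
      mpi_comm_bind(half_group, contexts(1), half_comm)

Every process of comm takes part in the allocation and so obtains the same
context value; MPI_COMM_MAKE in Section 7.3 packages this allocate-and-bind
sequence into a single call.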
MPI_COMM_UNBIND(comm)
OUT comm     the communicator to be deallocated.

This routine disassociates the group associated with comm from the context
associated with comm.  The opaque object comm is deallocated.  Both the
group and context, provided at the MPI_COMM_BIND call, remain available for
further use.  If MPI_COMM_MAKE was called in lieu of MPI_COMM_BIND, then
there is no exposed context known to the user, and this quantity is freed by
MPI_COMM_UNBIND (see below).

MPI_COMM_GROUP(comm, group)
IN comm      communicator object handle
OUT group    group object handle

Accessor that returns the group corresponding to the communicator comm.

MPI_COMM_CONTEXT(comm, context)
IN comm      communicator object handle
OUT context  context

Returns the context associated with the communicator comm.

MPI_COMM_DUP(comm, new_context, new_comm)
IN comm          communicator object handle
IN new_context   new context to use with new_comm
OUT new_comm     communicator object handle

MPI_COMM_DUP duplicates a communicator with all its cached information,
replacing just the context.  This function is essential to the support of
virtual topologies (see usage below).

7.3  Collective

MPI_COMM_MAKE(comm, group, comm_new)
IN comm        Communicator-scope in which the new communicator's context
               will be allocated.
IN group       group object handle to be bound to new communicator
OUT comm_new   the new communicator.

MPI_COMM_MAKE is equivalent to:

      MPI_CONTEXTS_ALLOC(comm, 1, context)
      MPI_COMM_BIND(group, context, comm_new)

plus, notionally, internal flags are set in the communicator, denoting that
the context was created as part of the opaque process that made the
communicator (so it can be freed by MPI_COMM_UNBIND).

8  Inter-communication Initialization & Rendezvous

MPI_PUBLISH(comm, label, persistence_flag)
IN comm              Communicator to be published under the name label.
IN label             String label describing published communicator.
IN persistence_flag  Either MPI_EPHEMERAL or MPI_PERSISTENT.

This operation results in the association of the communicator comm with the
name specified in label, with global scope.  Persistence is either ephemeral
(one subscribe causes an automatic MPI_UNPUBLISH), or persistent (an
explicit MPI_UNPUBLISH must be subsequently called) and any number of
subscriptions can occur.  This operation does not wait for subscriptions to
occur before returning.  Only one process calls this function.

Subsequent calls to MPI_PUBLISH with the same label (without an intervening
MPI_UNPUBLISH) are erroneous.

  Discussion: Should we have a permissions flag that implements access
  restrictions similar to Unix?

MPI_UNPUBLISH(label)
IN label     String label describing published communicator.

This is an operation undertaken by a single process, whose effect is to
remove the association of the communicator specified in a preceding
MPI_PUBLISH call with the name specified in label.  MPI_UNPUBLISH on an
undefined label will be ignored.

MPI_SUBSCRIBE(my_comm, label, comm)
IN my_comm   Communicator of participants in subscribe.
IN label     String label describing communicator to which we wish to
             subscribe
OUT comm     Communicator created through subscription process.

This is a collective communication in the group specified in my_comm, which
has the effect of creating in each group process a copy of the previously
published communicator associated with the name in label.  This operation
blocks until such an association is possible.

Once an MPI_PUBLISH and an MPI_SUBSCRIBE on the same label have occurred,
the subscriber my_comm has the ability to send messages to the publisher;
group members of the published communicator have the ability to receive
messages using the published communicator.  The communicators so defined may
only be used in point-to-point communication.

  Discussion: Do we want any of the following: MPI_SUBSCRIBE_NON_BLOCKING,
  MPI_SUBSCRIBE_PROBE, etc...

The symmetric case is constructed as follows: Group "A" and "B" wish to
build a symmetric inter-group communication structure.

"A" group:

      mpi_comm_rank(A_comm, rank)
      if(rank == 0) then
         mpi_publish("A_comm", MPI_EPHEMERAL)
      mpi_subscribe(B_comm_send, "B_comm")

      mpi_comm_context(A_comm, A_context)
      mpi_comm_dup(B_comm_send, A_context, B_comm_recv)

"B" group:

      mpi_comm_rank(B_comm, rank)
      if(rank == 0) then
         mpi_publish("B_comm", MPI_EPHEMERAL)
      mpi_subscribe(A_comm_send, "A_comm")

      mpi_comm_context(B_comm, B_context)
      mpi_comm_dup(A_comm_send, B_context, A_comm_recv)

For example, elements of the "B" group use A_comm_send to send to elements
of "A", which receive such messages in B_comm_recv.

Notice that the above calling sequences preserve any virtual topology
information, or other cached quantities, by using MPI_COMM_DUP, and then
changing context.

9  Cacheing in Communicators

TBD.

10  Contexts, Communicators, and "Safety"

When a caller passes a communicator (which contains a context and group) to
a callee, that communicator must be free of side effects on entry and exit
to the subprogram.  This provides the basic guarantee of safety.  The callee
has permission to do whatever communication it likes with the communicator,
and under the above guarantee knows that no other communications will
interfere.  Since we permit the creation of new communicators without
synchronization (assuming preallocated contexts), this does not impose a
significant overhead.

This form of safety is analogous to other common computer science usages,
such as passing a descriptor of an array to a library routine.  The library
routine has every right to expect such a descriptor to be valid and
modifiable.

Note that MPI_UNDEFINED is the rank for processes that are sending to the
communicator from outside the group.  This can result through the publish &
subscribe mechanism, or by virtue of the transmission of flattened
communicators (and their subsequent unflattening and use).

From owner-mpi-context@CS.UTK.EDU Thu Jun 24 00:09:45 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07328; Thu, 24 Jun 93 00:09:45 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05070; Thu, 24 Jun 93 00:09:13 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 24 Jun 1993 00:09:12 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05062; Thu, 24 Jun 93 00:09:07 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA23457; Wed, 23 Jun 93 23:07:33 CDT Date: Wed, 23 Jun 93 23:07:33 CDT From: Tony Skjellum Message-Id: <9306240407.AA23457@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: revised draft (6/24/93) Please ignore draft from earlier today, and read our new one.
- Tony, Mark, Lyndon

%!PS-Adobe-2.0 %%Creator: dvips 5.47 Copyright 1986-91 Radical Eye Software %%Title: unify2.dvi %%Pages: 12 1 %%BoundingBox: 0 0 612 792 %%EndComments
01F8F00001F0F80003F0FC0003E0FF0007E0FFE01FC0FFFFFF80E1FFFE00C03FF8001D297CA826 >I<0300060007000E000E001C001C00380018003000380070007000E0006000C0006000C000E0 01C000C0018000C0018000DF01BE00FF81FF00FFC1FF80FFC1FF80FFC1FF807FC0FF807FC0FF80 3F807F001F003E00191578A924>92 D<03FFC0000FFFF0001F81FC003FC0FE003FC07F003FC07F 003FC03F803FC03F801F803F8000003F8000003F80001FFF8001FFFF8007FE3F801FE03F803FC0 3F807F803F807F003F80FE003F80FE003F80FE003F80FE007F80FF007F807F00FFC03FC3DFFC1F FF8FFC03FE07FC1E1B7E9A21>97 D<003FF80001FFFE0007F83F000FE07F801FC07F803F807F80 3F807F807F807F807F003F00FF000000FF000000FF000000FF000000FF000000FF000000FF0000 00FF000000FF0000007F0000007F8000003F8001C03FC001C01FC003C00FE0078007F83F0001FF FC00003FF0001A1B7E9A1F>99 D<00003FF80000003FF80000003FF800000003F800000003F800 000003F800000003F800000003F800000003F800000003F800000003F800000003F800000003F8 00000003F800000003F800003FE3F80001FFFBF80003F83FF8000FE00FF8001FC007F8003F8003 F8003F8003F8007F8003F8007F0003F800FF0003F800FF0003F800FF0003F800FF0003F800FF00 03F800FF0003F800FF0003F800FF0003F800FF0003F8007F0003F8007F0003F8003F8003F8003F 8007F8001FC00FF8000FE01FF80007F07FFF8001FFFBFF80003FC3FF80212A7EA926>I<003FE0 0001FFFC0007F07E000FE03F001FC01F803F800FC03F800FC07F000FC07F0007E0FF0007E0FF00 07E0FF0007E0FFFFFFE0FFFFFFE0FF000000FF000000FF000000FF0000007F0000007F8000003F 8000E03F8001E01FC001C00FE007C003F81F8001FFFE00003FF8001B1B7E9A20>I<000FF80000 3FFE0000FF3F0001FC7F8003F87F8003F87F8007F07F8007F07F8007F03F0007F0000007F00000 07F0000007F0000007F0000007F00000FFFFC000FFFFC000FFFFC00007F0000007F0000007F000 0007F0000007F0000007F0000007F0000007F0000007F0000007F0000007F0000007F0000007F0 000007F0000007F0000007F0000007F0000007F0000007F0000007F0000007F000007FFF80007F FF80007FFF8000192A7EA915>I<00FF81F007FFF7FC0FE3FF7C1F80FCFC3F80FE7C3F007E787F 007F007F007F007F007F007F007F007F007F007F007F003F007E003F80FE001F80FC000FE3F800 1FFFF00018FF8000380000003C0000003C0000003E0000003FFFFC003FFFFF001FFFFFC00FFFFF E007FFFFF03FFFFFF07E000FF87C0001F8F80001F8F80000F8F80000F8F80000F8FC0001F87E00 03F03F0007E01FE03FC007FFFF0000FFF8001E287E9A22>II<0F801FC01FE03FE03FE03FE01FE01FC00F8000000000000000000000000000 00FFE0FFE0FFE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE0 0FE00FE00FE00FE00FE0FFFEFFFEFFFE0F2B7DAA14>I108 DII<003FE00001FFFC0003F07E00 0FC01F801F800FC03F800FE03F0007E07F0007F07F0007F07F0007F0FF0007F8FF0007F8FF0007 F8FF0007F8FF0007F8FF0007F8FF0007F8FF0007F87F0007F07F0007F03F800FE03F800FE01F80 0FC00FC01F8007F07F0001FFFC00003FE0001D1B7E9A22>II114 D<03FE701FFFF03E03F07801F07000F0F00070F00070F80070FE0000FFF000FFFF007F FFC03FFFE01FFFF007FFF800FFFC0007FC0000FCE0007CE0003CF0003CF0003CF80078FC0078FF 01F0FFFFC0E1FF00161B7E9A1B>I<00700000700000700000700000F00000F00000F00001F000 03F00003F00007F0001FFFF0FFFFF0FFFFF007F00007F00007F00007F00007F00007F00007F000 07F00007F00007F00007F00007F00007F00007F03807F03807F03807F03807F03807F03807F038 03F87001FCF000FFE0003FC015267FA51B>III120 DI<3FFFFF803FFFFF803F00FF803C00 FF003801FE007803FC007807FC007007F800700FF000701FE000001FE000003FC000007F800000 FF800000FF000001FE038003FC038003FC038007F803800FF007801FF007801FE007003FC00F00 7F801F00FF807F00FFFFFF00FFFFFF00191B7E9A1F>I E /Fl 71 123 df<003F1F8001FFFFC0 03C3F3C00783E3C00F03E3C00E01C0000E01C0000E01C0000E01C0000E01C0000E01C000FFFFFC 00FFFFFC000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01 C0000E01C0000E01C0000E01C0000E01C0000E01C0007F87FC007F87FC001A1D809C18>11 D<003F0001FF8003C3C00783C00F03C00E03C00E00000E00000E00000E00000E0000FFFFC0FFFF 
C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01 C00E01C07F87F87F87F8151D809C17>I<003FC001FFC003C3C00783C00F03C00E01C00E01C00E 01C00E01C00E01C00E01C0FFFFC0FFFFC00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E 01C00E01C00E01C00E01C00E01C00E01C00E01C07FCFF87FCFF8151D809C17>I<003F83F00001 FFDFF80003E1FC3C000781F83C000F01F03C000E01E03C000E00E000000E00E000000E00E00000 0E00E000000E00E00000FFFFFFFC00FFFFFFFC000E00E01C000E00E01C000E00E01C000E00E01C 000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E0 1C000E00E01C000E00E01C007FC7FCFF807FC7FCFF80211D809C23>I<7070F8F8FCFCFCFC7C7C 0C0C0C0C1C1C181838387070F0F060600E0D7F9C15>34 D<00F0000001F80000039C0000070C00 00070C0000070C0000070C0000071C0000071C0000073800000770000007F07FE003E07FE003C0 1F0007C00E000FC01C001FC018003DE0380078F0300070F07000F0786000F03CE000F03FC000F0 1F8000F80F0060780FC0E07C3FE1C03FF9FF800FE03F001B1D7E9C20>38 D<70F8FCFC7C0C0C1C183870F060060D7D9C0C>I<01C00380038007000E000C001C0018003800 38007000700070007000E000E000E000E000E000E000E000E000E000E000E000E000E000E00070 007000700070003800380018001C000C000E0007000380038001C00A2A7D9E10>II<70F8F8F878181818383070E060050D7D840C>44 DI<70F8F8F87005057D840C>I<07E00FF01C38381C781E700E700EF00FF00FF00FF00FF00FF0 0FF00FF00FF00FF00FF00FF00FF00F700E700E781E381C1C380FF007E0101B7E9A15>48 D<030007003F00FF00C70007000700070007000700070007000700070007000700070007000700 070007000700070007000700FFF8FFF80D1B7C9A15>I<0FE03FF878FC603EF01EF81FF80FF80F 700F000F001F001E003E003C007800F001E001C0038007000E031C0338037006FFFEFFFEFFFE10 1B7E9A15>I<0FE03FF8387C783E7C1E781E781E001E003C003C00F807F007E00078003C001E00 0F000F000F700FF80FF80FF81EF01E787C3FF80FE0101B7E9A15>I<001C00001C00003C00007C 00007C0000DC0001DC00039C00031C00071C000E1C000C1C00181C00381C00301C00601C00E01C 00FFFFC0FFFFC0001C00001C00001C00001C00001C00001C0001FFC001FFC0121B7F9A15>I<30 1C3FFC3FF83FE030003000300030003000300037E03FF83C3C381E301E000F000F000F000FF00F F00FF00FF01E703E787C3FF80FE0101B7E9A15>I<01F807FC0F8E1E1E3C1E381E781E78007000 F080F7F8FFFCFC1CF81EF80FF00FF00FF00FF00FF00F700F700F781E381E1E3C0FF807E0101B7E 9A15>I<6000007FFF807FFF807FFF80600700C00600C00E00C01C000038000030000070000060 0000E00000C00001C00001C00003C0000380000380000380000780000780000780000780000780 00078000078000078000111C7E9B15>I<07E01FF83C3C381E701E700E700E780E7C1E7F3C3FF8 1FF00FF01FFC3DFC787E703FF00FE00FE007E007E007F00E781E3C3C1FF807E0101B7E9A15>I< 07E01FF83C38781C781EF00EF00EF00FF00FF00FF00FF00FF01F781F383F3FFF1FEF010F000E00 1E781E781C783C787878F03FE01F80101B7E9A15>I<70F8F8F870000000000000000070F8F8F8 7005127D910C>I<70F8F8F870000000000000000070F8F8F878181818383070E060051A7D910C> I<00060000000F0000000F0000000F0000001F8000001F8000001F8000003FC0000033C0000033 C0000073E0000061E0000061E00000E1F00000C0F00000C0F00001C0F8000180780001FFF80003 FFFC0003003C0003003C0007003E0006001E0006001E001F001F00FFC0FFF0FFC0FFF01C1C7F9B 1F>65 DI<003FC18001FFF18003F07B800FC01F801F000F801E00 07803C0003807C0003807800038078000180F0000180F0000000F0000000F0000000F0000000F0 000000F0000000F000000078000180780001807C0001803C0003801E0003001F0007000FC00E00 03F03C0001FFF000003FC000191C7E9B1E>IIII<003FC18001FFF18003F07B800FC01F801F000F801E0007803C0003807C0003 807800038078000180F0000180F0000000F0000000F0000000F0000000F0000000F000FFF0F000 FFF078000780780007807C0007803C0007801E0007801F0007800FC00F8003F03F8001FFFB8000 3FE1801C1C7E9B21>III76 DII<003F800001FFF00003E0F80007001C000E000E001C0007003C00078038000380780003 
C0700001C0F00001E0F00001E0F00001E0F00001E0F00001E0F00001E0F00001E0F00001E07800 03C0780003C0380003803C0007801E000F000E000E0007803C0003E0F80001FFF000003F80001B 1C7E9B20>II82 D<07F1801FFD803C1F80700780700380E003 80E00180E00180F00000F80000FE00007FE0003FFC001FFE000FFF0000FF80000F800007C00003 C00001C0C001C0C001C0E001C0E00380F00780FE0F00DFFE00C7F800121C7E9B17>I<7FFFFFC0 7FFFFFC0780F03C0700F01C0600F00C0E00F00E0C00F0060C00F0060C00F0060C00F0060000F00 00000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F 0000000F0000000F0000000F0000000F0000000F000003FFFC0003FFFC001B1C7F9B1E>II< FFE01FF0FFE01FF01F0007800F0003000F800700078006000780060007C00E0003C00C0003C00C 0001E0180001E0180001E0180000F0300000F0300000F870000078600000786000007CE000003C C000003CC000001F8000001F8000001F8000000F0000000F0000000F0000000600001C1C7F9B1F >II<18183C3C383870 706060E0E0C0C0C0C0F8F8FCFCFCFC7C7C38380E0D7B9C15>92 D<1FE0003FF8003C3C003C1E00 180E00000E00001E0007FE003FFE007E0E00F80E00F80E00F00E60F00E60F81E607C7E607FFFC0 1FC78013127F9115>97 DI<07F80FFC3E3C3C3C78187800 F000F000F000F000F000F000780078063C0E3F1C0FF807F00F127F9112>I<001F80001F800003 8000038000038000038000038000038000038000038000038007F3801FFF803E1F807C07807803 80F80380F00380F00380F00380F00380F00380F00380F003807807807C0F803E1F801FFBF007E3 F0141D7F9C17>I<07E01FF83E7C781C781EF01EFFFEFFFEF000F000F000F000780078063C0E3F 1C0FF807F00F127F9112>I<00FC03FE079E071E0F1E0E000E000E000E000E000E00FFE0FFE00E 000E000E000E000E000E000E000E000E000E000E000E000E000E007FE07FE00F1D809C0D>I<07 E7C01FFFC03C3DC0781E00781E00781E00781E00781E00781E003C3C003FF80037E00070000070 00007800003FFC003FFF007FFF807807C0F003C0E001C0E001C0F003C0F807C07C0F801FFE0007 F800121B7F9115>II<3C007C007C007C003C0000000000 0000000000000000FC00FC001C001C001C001C001C001C001C001C001C001C001C001C001C001C 00FF80FF80091D7F9C0C>I<01C003E003E003E001C00000000000000000000000000FE00FE000 E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E0F0E0 F1E0F3C0FF807E000B25839C0D>IIIII<03F0000FFC001E1E00380700780780700380F0 03C0F003C0F003C0F003C0F003C0F003C07003807807803807001E1E000FFC0003F00012127F91 15>II<07F1801FF9803F1F803C0F80780780780380F00380F00380F00380F003 80F00380F00380F803807807807C0F803E1F801FFB8007E3800003800003800003800003800003 80000380001FF0001FF0141A7F9116>II<1FB07FF0F0F0E070E030F030F8007FC07FE0 1FF000F8C078C038E038F078F8F0FFF0CFC00D127F9110>I<0C000C000C000C000C001C001C00 3C00FFE0FFE01C001C001C001C001C001C001C001C001C301C301C301C301C301E700FE007C00C 1A7F9910>IIII<7F8FF07F8FF00F0F80070F00038E0001DC0001D80000F000007000 00780000F80001DC00038E00030E000707001F0780FF8FF8FF8FF81512809116>II<7FFC7FFC783C707860F061E061E063C00780078C0F0C1E0C1E1C3C187818F078FFF8FFF80E 127F9112>I E /Fm 27 91 df<001F0000003F8000007B80000071800000E1800000E1800000E1 800000C3800001C7000001CE000001DE000001FC1FF801F81FF801E0078001E0070007E006000F E00E001EF01C003C7038007878300078787000F03CE000F01FC000F01F8000F00F0060F80F80C0 7C7FE1C03FF1FF801FC07E001D1D7D9C20>38 D<7FF0FFE0FFE00C037F890E>45 D<7878F8F87005057D840C>I<0010007003F01FF01C70007000F000E000E000E000E000E001E0 01C001C001C001C001C003C0038003800380038003800780FFF8FFF80D1B7C9A15>49 D<00FE0003FF00078F800F07C00F03C00F03C00F0780000780000F80001F00003E0003FC0003F8 00001C00000E00000F00000F00000F00000F00780F00F80F00F81F00F81E00F03C00F07C007FF0 001FC000121B7D9A15>51 D<07018007FF8007FF0007FC000600000E00000C00000C00000C0000 0C00000DF8001FFE001F1E001C0F00180700000700000780000F80000F00F00F00F00F00F01F00 F01E00E03C0070F8007FF0001FC000111B7D9A15>53 D<1800003FFFC03FFFC03FFFC070038060 
0700600E00C00C00001C0000380000700000E00000C00001C0000380000380000780000700000F 00000F00000E00001E00001E00001E00001E00003E00003C00001C0000121C7B9B15>55 D<00FE0003FF0007C7800F03800E03C00E03C01E03801E03801F07801F8F000FFE000FFC0003F8 0007FE001EFE003C3F00781F00700F00F00700E00700E00700E00700F00E00701E007C7C003FF0 000FC000121B7D9A15>I<01F80007FE000F8E001E07003C07003C07807C078078078078078078 0780780F80780F80780F00781F00383F003FFF001FEF00021E00001E00001C00F03C00F03800F0 7800E0F000E3E000FF80003E0000111B7C9A15>I<00007000000070000000F0000000F0000001 F0000001F80000037800000378000006780000067800000C7800000C3C0000183C0000183C0000 303C0000303C0000603C0000601E0000FFFE0000FFFE0001801E0001801E0003001F0003000F00 07000F000F000F007FC0FFF0FFC0FFF01C1C7F9B1F>65 D<000FF030003FFC7000FC0EE003F007 E007C003E00F8001E01F0001E01E0000E03C0000C03C0000C0780000C07800000078000000F800 0000F0000000F0000000F0000000F0000000F0000380F000030078000300780007003C000E003E 001C001F003C000FC0F00003FFE00000FF00001C1C7C9B1E>67 D<0FFFFC000FFFFF8000F007C0 00F001E000F000F000F0007000F0007801F0007801E0003801E0003801E0003801E0003C01E000 3803E0003803C0007803C0007803C0007803C0007003C000F007C000E0078001E0078001C00780 03800780078007800E000F807C00FFFFF000FFFFC0001E1C7E9B20>I<0FFFFFE00FFFFFE000F0 03E000F001C000F000C000F000C000F000C001F060C001E060C001E060C001E0600001E1E00001 FFE00003FFC00003C1C00003C0C00003C0C00003C0C0C003C0C18007C001800780018007800300 078003000780070007800E000F803E00FFFFFE00FFFFFC001B1C7E9B1C>I<000FF030003FFC70 00FC0EE003F007E007C003E00F8001E01F0001E01E0000E03C0000C03C0000C0780000C0780000 0078000000F8000000F0000000F0000000F000FFF0F000FFF0F0000F80F0000F8078000F007800 0F003C000F003E001F001F001F000FC07F0003FFF60000FF82001C1C7C9B21>71 D<0FFF9FFE0FFF3FFE00F003C000F003C000F003C000F003C000F007C001F007C001E0078001E0 078001E0078001E0078001FFFF8003FFFF8003C00F0003C00F0003C00F0003C00F0003C01F0007 C01F0007801E0007801E0007801E0007801E0007803E000F803E00FFF3FFE0FFF3FFC01F1C7E9B 1F>I<0FFF800FFF8000F00000F00000F00000F00000F00001F00001E00001E00001E00001E000 01E00003E00003C00003C00003C00003C00003C00007C000078000078000078000078000078000 0F8000FFF800FFF800111C7F9B0F>I<0FFFC00FFFC000F00000F00000F00000F00000F00001F0 0001E00001E00001E00001E00001E00003E00003C00003C00003C00003C00603C00C07C00C0780 0C07801C0780180780380780780F81F8FFFFF0FFFFF0171C7E9B1A>76 D<0FFC000FFC0FFC000F FC00FC001F8000FC001F8000FC00378000DE00378000DE006F8001DE006F80019E00CF00019E00 CF00019E018F00018F018F00018F031F00038F031F00030F061E00030F061E0003078C1E000307 8C1E000307983E000707983E000607B03C000607B03C000603E03C000603E03C000603C07C001E 03C07C00FFE387FFC0FFC387FF80261C7E9B26>I<0FF80FFE0FF80FFE00FC01E000FC00C000FE 00C000DE00C000DE01C001DF01C0018F0180018F8180018781800187C1800187C3800383C38003 03E3000301E3000301F3000300F3000300FF000700FF0006007E0006007E0006003E0006003E00 06001E001E001E00FFE01C00FFC00C001F1C7E9B1F>I<0007F000003FFC0000F81E0001E00700 03800380070003C00E0001C01E0001E03C0001E03C0000E0780000E0780000F0780000E0F00001 E0F00001E0F00001E0F00001E0F00003C0F00003C0F00007807800078078000F0038001E003C00 3C001E0078000F81F00003FFC00000FE00001C1C7C9B20>I<0FFFFC000FFFFF0000F00F8000F0 038000F003C000F001C000F001C001F003C001E003C001E003C001E0038001E0078001E00F0003 E03E0003FFF80003FFE00003C0000003C0000003C0000007C00000078000000780000007800000 078000000F8000000F800000FFF80000FFF000001A1C7E9B1C>I<0FFFF8000FFFFE0000F00F80 00F0038000F003C000F001C000F001C001F003C001E003C001E003C001E0078001E00F0001E03E 0003FFF80003FFF80003C0FC0003C03C0003C03C0003C03E0007C03C0007803C0007803C000780 
3C0007803C0007803C380F803E70FFF81FF0FFF00FE01D1C7E9B1F>82 D<007F0C01FFDC03C1F8 0780780F00380E00380E00381E00381E00001F00001F80000FF8000FFF0007FFC001FFE0003FE0 0003E00001E00000E00000E06000E06000E06001E07001C0780380FE0F80FFFE00C3F800161C7E 9B17>I<1FFFFFF03FFFFFF03C0781F038078060700780606007806060078060600F8060C00F00 60C00F0060000F0000000F0000000F0000001F0000001E0000001E0000001E0000001E0000001E 0000003E0000003C0000003C0000003C0000003C0000003C0000007C00001FFFE0001FFFE0001C 1C7C9B1E>III<03FFFF8007FFFF0007E01F0007803E0007007C00060078000600F8000E 01F0000C03E0000C07C0000007C000000F8000001F0000003E0000007C0000007C000000F80C00 01F00C0003E0180003C0180007C018000F8038001F0030003E0070003E00F0007C03F000FFFFE0 00FFFFE000191C7E9B19>90 D E /Fn 29 122 df<70F8FCFC7C0C0C0C1C18183870E0E0060F7C 840E>44 D<018003800F80FF80F380038003800380038003800380038003800380038003800380 03800380038003800380038003800380038003800380038003800380FFFEFFFE0F217CA018>49 D<03F8000FFE003C3F00380F807007C06007C0E003E0F803E0F803E0F801E0F801E07003E00003 E00003C00007C00007C0000F80000F00001E00003C0000780000F00000E00001C0000380000700 600E00601C00603800E07000C0FFFFC0FFFFC0FFFFC013217EA018>I<03F8000FFE001E1F0038 0F807007C07807C07C07C07807C07807C00007C00007C0000780000F80001F00003E0003FC0003 F800001E00000F000007800007C00003E00003E07003E0F803E0F803E0F803E0F803E0E007C070 07C0780F803E1F000FFE0003F80013227EA018>I<000E00000E00001E00001E00003E00003E00 006E0000EE0000CE0001CE00018E00030E00070E00060E000E0E000C0E00180E00180E00300E00 700E00600E00E00E00FFFFF8FFFFF8000E00000E00000E00000E00000E00000E00000E0001FFF0 01FFF015217FA018>I<03F8000FFC001F1E003C07003807807803807003C0F003C0F001C0F001 C0F001E0F001E0F001E0F001E0F001E0F003E07803E07803E03C07E03E0FE01FFDE007F9E00081 E00001C00003C00003C0000380780780780700780F00781E00787C003FF8000FE00013227EA018 >57 D<000180000003C0000003C0000003C0000007E0000007E0000007E000000FF000000CF000 000CF000001CF800001878000018780000383C0000303C0000303C0000601E0000601E0000601E 0000C00F0000C00F0000C00F0001FFFF8001FFFF8001800780030003C0030003C0030003C00600 01E0060001E0060001E00E0000F01F0001F0FFC00FFFFFC00FFF20237EA225>65 D<000FF030007FFC3000FC1E7003F0077007C003F00F8001F01F0001F01F0000F03E0000F03C00 00707C0000707C0000707C000030F8000030F8000030F8000000F8000000F8000000F8000000F8 000000F8000000F8000000F80000307C0000307C0000307C0000303E0000703E0000601F0000E0 1F0000C00F8001C007C0038003F0070000FC1E00007FFC00000FF0001C247DA223>67 D<03FFF003FFF0000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F 00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F 00F80F00F80F00F80F00F81F00F01E00703E00787C003FF8000FE00014237EA119>74 D76 DI<07F0600FFE601E1FE03807E07003E07001E0E000E0E000E0E00060E000 60F00060F00000F800007C00007F00003FF0001FFE000FFF8003FFC0007FC00007E00001E00001 F00000F00000F0C00070C00070C00070E00070E000F0F000E0F801E0FC01C0FF0780C7FF00C1FC 0014247DA21B>83 D<1FF0003FFC003C3E003C0F003C0F00000700000700000F0003FF001FFF00 3F07007C0700F80700F80700F00718F00718F00F18F81F187C3FB83FF3F01FC3C015157E9418> 97 D<03FE000FFF801F07803E07803C0780780000780000F00000F00000F00000F00000F00000 F00000F000007800007800C03C01C03E01801F87800FFF0003FC0012157E9416>99 D<0000E0000FE0000FE00001E00000E00000E00000E00000E00000E00000E00000E00000E00000 E00000E003F8E00FFEE01F0FE03E03E07C01E07800E07800E0F000E0F000E0F000E0F000E0F000 E0F000E0F000E07000E07801E07801E03C03E01F0FF00FFEFE03F0FE17237EA21B>I<01FC0007 FF001F0F803E03C03C01C07801E07801E0FFFFE0FFFFE0F00000F00000F00000F00000F0000078 
00007800603C00E03E01C00F83C007FF0001FC0013157F9416>I<0E0000FE0000FE00001E0000 0E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E3F800EFFE00FE1E0 0F80F00F00700F00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E0070 0E00700E00700E0070FFE7FFFFE7FF18237FA21B>104 D<1E003E003E003E001E000000000000 00000000000000000000000E00FE00FE001E000E000E000E000E000E000E000E000E000E000E00 0E000E000E000E000E00FFC0FFC00A227FA10E>I<01C003E003E003E001C00000000000000000 000000000000000001E00FE00FE001E000E000E000E000E000E000E000E000E000E000E000E000 E000E000E000E000E000E000E000E000E000E000E0F1E0F1C0F3C0FF803E000B2C82A10F>I<0E 0000FE0000FE00001E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E 00000E0FFC0E0FFC0E07E00E07800E07000E1E000E3C000E78000EF8000FFC000FFC000F1E000E 0F000E0F800E07800E03C00E03E00E01E00E01F0FFE3FEFFE3FE17237FA21A>I<0E00FE00FE00 1E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E 000E000E000E000E000E000E000E000E000E000E00FFE0FFE00B237FA20E>I<0E3FC0FF00FEFF F3FFC0FFE0F783C01F807E01E00F003C00E00F003C00E00E003800E00E003800E00E003800E00E 003800E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800E0 0E003800E00E003800E0FFE3FF8FFEFFE3FF8FFE27157F942A>I<0E3F80FEFFE0FFE1E01F80F0 0F00700F00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E0070 0E00700E0070FFE7FFFFE7FF18157F941B>I<01FC0007FF000F07801C01C03800E07800F07000 70F00078F00078F00078F00078F00078F00078F000787000707800F03800E01C01C00F078007FF 0001FC0015157F9418>I<0E7EFEFFFFEF1F8F0F0F0F000F000E000E000E000E000E000E000E00 0E000E000E000E000E00FFF0FFF010157F9413>114 D<1FD83FF87878F038E018E018F018F800 7F807FE01FF003F8007CC03CC01CE01CE01CF03CF878FFF0CFE00E157E9413>I<060006000600 060006000E000E000E001E003E00FFF8FFF80E000E000E000E000E000E000E000E000E000E000E 0C0E0C0E0C0E0C0E0C0F1C073807F803E00E1F7F9E13>I<0E0070FE07F0FE07F01E00F00E0070 0E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00F00E00F00E01F0 0787F807FF7F01FC7F18157F941B>I121 D E /Fo 16 121 df<78FCFCFEFE7E06060606060E0C0C1C383870E0C007147A8512>44 D<00003FE0030001FFFC030007F01E07001F800787003E0001CF00FC0000EF01F800007F03F000 007F03E000003F07C000001F0FC000001F0F8000000F1F8000000F3F000000073F000000073E00 0000077E000000077E000000037E000000037C00000003FC00000000FC00000000FC00000000FC 00000000FC00000000FC00000000FC00000000FC00000000FC00000000FC00000000FC00000000 7C000000007E000000037E000000037E000000033E000000033F000000073F000000061F800000 060F8000000E0FC000000C07C000001C03E000001803F000003801F800007000FC0000E0003F00 01C0001F8007800007F01F000001FFFC0000003FE00028337CB130>67 D<00003FF001800001FF FC01800007F81F0380001FC0038380003F0001E780007C0000E78001F800007F8001F000003F80 03E000001F8007C000000F800FC000000F800F80000007801F80000007801F00000003803F0000 0003803E00000003807E00000003807E00000001807E00000001807C0000000180FC0000000000 FC0000000000FC0000000000FC0000000000FC0000000000FC0000000000FC0000000000FC0000 000000FC0000000000FC0000000000FC00000FFFFC7E00000FFFFC7E0000001FC07E0000000F80 7E0000000F803E0000000F803F0000000F801F0000000F801F8000000F800F8000000F800FC000 000F8007E000000F8003E000000F8001F000001F8001F800001F80007E00003F80003F00007780 001FC001E3800007F80FC1800001FFFF008000003FF800002E337CB134>71 D<03FF00000FFFC0001E03F0003E00F8003E007C003E003C003E003E001C001E0000001E000000 1E0000001E0000001E000003FE00007FFE0003FF1E0007F01E001FC01E003F001E007E001E007C 001E00FC001E00F8001E0CF8001E0CF8001E0CF8003E0CFC007E0C7C007E0C7E01FF1C3F87CFB8 1FFF07F003FC03C01E1F7D9E21>97 D<003FE001FFF803E03C07C03E0F003E1F003E3E003E3C00 
1C7C00007C0000780000F80000F80000F80000F80000F80000F80000F80000F80000F800007C00 007C00007C00003E00033E00071F00060F800E07C01C03F07801FFF0003F80181F7D9E1D>99 D<003F800001FFF00003E1F80007807C000F003E001E001E003E001F003C000F007C000F807C00 0F8078000F80F8000780FFFFFF80FFFFFF80F8000000F8000000F8000000F8000000F8000000F8 0000007C0000007C0000007C0000003E0001803E0003801F0003000F80070007E00E0003F83C00 00FFF800003FC000191F7E9E1D>101 D<0F001F801F801F801F800F0000000000000000000000 0000000000000000000000000780FF80FF800F8007800780078007800780078007800780078007 80078007800780078007800780078007800780078007800780078007800FC0FFF8FFF80D307EAF 12>105 D<0781FE003FC000FF87FF80FFF000FF9E0FC3C1F8000FB803E7007C0007F001EE003C 0007E001FC003E0007E000FC001E0007C000F8001E0007C000F8001E00078000F0001E00078000 F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00 078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0 001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E0007 8000F0001E000FC001F8003F00FFFC1FFF83FFF0FFFC1FFF83FFF0341F7E9E38>109 D<0781FE0000FF87FF8000FF9E0FC0000FB803E00007F001E00007E001F00007E000F00007C000 F00007C000F000078000F000078000F000078000F000078000F000078000F000078000F0000780 00F000078000F000078000F000078000F000078000F000078000F000078000F000078000F00007 8000F000078000F000078000F000078000F000078000F0000FC001F800FFFC1FFF80FFFC1FFF80 211F7E9E25>I<001FC00000FFF80001E03C0007800F000F0007801E0003C01E0003C03C0001E0 3C0001E0780000F0780000F0780000F0F80000F8F80000F8F80000F8F80000F8F80000F8F80000 F8F80000F8F80000F8780000F07C0001F03C0001E03C0001E01E0003C01E0003C00F00078007C0 1F0001F07C0000FFF800001FC0001D1F7E9E21>I<0781FE00FF87FF80FF9F0FE00FB803F007F0 01F807E000F807C0007C0780007C0780003E0780003E0780003E0780001F0780001F0780001F07 80001F0780001F0780001F0780001F0780001F0780003F0780003E0780003E0780007E07C0007C 07C000FC07E000F807F001F007B803E0079E0FC0078FFF800781FC000780000007800000078000 0007800000078000000780000007800000078000000780000007800000078000000FC00000FFFC 0000FFFC0000202D7E9E25>I<0787F0FF8FF8FFBC7C0FB87C07F07C07E07C07E00007C00007C0 0007C0000780000780000780000780000780000780000780000780000780000780000780000780 000780000780000780000780000780000780000FC000FFFE00FFFE00161F7E9E19>114 D<03FE300FFFF03E07F07801F07000F0E00070E00070E00030E00030F00030F800007F00007FF0 003FFF000FFFC003FFE0003FF00003F80000F8C0007CC0003CE0001CE0001CE0001CF0001CF800 38F80038FC00F0EF03F0C7FFC0C1FF00161F7E9E1A>I<00C00000C00000C00000C00000C00001 C00001C00001C00003C00003C00007C0000FC0001FC000FFFFE0FFFFE003C00003C00003C00003 C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003 C03003C03003C03003C03003C03003C03003C03003C07001E06001E0E000F9C000FFC0003F0014 2C7FAB19>I<078000F000FF801FF000FF801FF0000F8001F000078000F000078000F000078000 F000078000F000078000F000078000F000078000F000078000F000078000F000078000F0000780 00F000078000F000078000F000078000F000078000F000078000F000078000F000078000F00007 8001F000078001F000078001F000078003F00003C007F00003C00EF00001F03CF80000FFF8FF80 003FC0FF80211F7E9E25>I120 D E /Fp 5 85 df<00000000C00000000000E000 00000001E00000000003E00000000003E00000000007E00000000007E0000000000FE000000000 0FE0000000001FE0000000001FE00000000037E00000000067E00000000067E000000000C7E000 000000C7F00000000183F00000000183F00000000303F00000000703F00000000603F00000000C 03F00000000C03F00000001803F00000001803F00000003003F00000003003F00000006003F000 0000C003F0000000C003F00000018003F00000018003F8000003FFFFF8000003FFFFF800000600 
DRAFT

Groups, Contexts, Communicators

Lyndon Clarke, Mark Sears, Anthony Skjellum, Marc Snir

June 24, 1993


3.1  Introduction

It is highly desirable that processes executing a parallel procedure use a "virtual process name space" local to the invocation. Thus, the code of the parallel procedure will look identical, irrespective of the absolute addresses of the executing processes. It is often the case that parallel application code is built by composing several parallel modules (e.g., a numerical solver and a graphic display module). Support of a virtual name space for each module will allow the composition of modules that were developed separately, without changing all message-passing calls within each module. The set of processes that execute a parallel procedure may be fixed, or may be determined dynamically before the invocation. Thus, MPI has to provide a mechanism for dynamically creating sets of locally named processes. We always number the processes that execute a parallel procedure consecutively, starting from zero, and call this numbering rank in group. Thus, a group is an ordered set of processes, where processes are identified by their ranks when communication occurs.

Communication contexts partition the message-passing space into separate, manageable "universes." Specifically, a send made in one context cannot be received in another context. Contexts are identified in MPI using integer-valued contexts that reside within communicator objects. The context mechanism is needed to allow predictable behavior in subprograms, and to allow dynamism in message usage that cannot reasonably be anticipated or managed. Normally, a parallel procedure is written so that all messages produced during its execution are also consumed by the processes that execute the procedure. However, if one parallel procedure calls another, then it might be desirable to allow such a call to proceed while messages are pending (the messages will be consumed by the procedure after the call returns). In such a case, a new communication context is needed for the called parallel procedure, even if the transfer of control is synchronized.

The communication domain used by a parallel procedure is identified by a communicator. Communicators bring together the concepts of process group and communication context. A communicator is an explicit parameter in each point-to-point communication operation. The communicator identifies the communication context of that operation; it identifies the group of processes that can be involved in this communication; and it provides the translation from virtual process names, which are ranks within the group, into absolute addresses. Collective communication calls also take a communicator as a parameter; it is expected that parallel libraries will be built to accept a communicator as a parameter. Communicators are represented by opaque MPI objects.

MPI does not reveal absolute process names. Rather, processes are always identified by their rank inside a group. New process groups are built by subsetting and reordering processes within existing groups, as well as by "publishing and subscribing" or "flattening and unflattening." Publishing and subscribing provides a server-like mechanism to allow rendezvous between disjoint communicating processes. Flattening and unflattening allows the transmission of groups or communicators in user messages. Subsetting and reordering allows the user to construct subgroups given an existing group.


3.2  Groups

A group is an ordered set of process identifiers (henceforth, processes); process identifiers are implementation dependent; a group is an opaque object. Each process in a group is associated with an integer rank, starting from zero.

Groups are represented by opaque group objects, and hence cannot be directly transferred from one process to another.


3.3  Context

A context is the MPI mechanism for partitioning communication space. A defining property of a context is that a send made in one context cannot be received in another context. A context is an integer.


3.4  Communicators

All MPI communication functions (both point-to-point and collective) use communicators to provide a specific scope (context and group specifications) for the communication. In short, communicators bring together the concepts of group and context (furthermore, to support implementation-specific optimizations and virtual topologies, they "cache" additional information opaquely). The source and destination of a message is identified by the rank of that process within the group; no a priori membership restrictions on the process sending or receiving the message are implied. For collective communication, the communicator specifies the set of processes that participate in the collective operation. Thus, the communicator restricts the "spatial" scope of communication, and provides local process addressing.

Discussion: "Communicator" replaces the word "context" everywhere in the current pt2pt and collcomm drafts.

Communicators are represented by opaque communicator objects, and hence cannot be directly transferred from one process to another.

3.4.1  Predefined Communicators

Initial communicators, defined once MPI_INIT has been called, are as follows:

- MPI_COMM_ALL, the SPMD-like siblings of a process.
- MPI_COMM_HOST, a communicator for talking to one's HOST.
- MPI_COMM_PARENT, a communicator for talking to one's PARENT (spawner).
- MPI_COMM_SELF, a communicator for talking to one's self (useful for getting contexts for server purposes, etc.).

MPI implementations are required to provide at least some of these communicators; however, not all forms of communication make sense for all systems. Environmental inquiry will be provided to determine which of these communicators are usable in a given implementation.

Discussion: The environmental sub-committee needs to provide such inquiry functions for us.


3.5  Group Management

This section describes the manipulation of groups under various subheadings: general operations, constructors, and so on.

3.5.1  Local Operations

The following are all local (non-communicating) operations.

MPI_GROUP_SIZE(group, size)

IN  group    handle to group object.
OUT size     the integer number of processes in the group.

MPI_GROUP_RANK(group, rank)

IN  group    handle to group object.
OUT rank     the integer rank of the calling process in group, or MPI_UNDEFINED if the process is not a member.

MPI_TRANSLATE_RANKS(group_a, n, ranks_a, group_b, ranks_b)

IN  group_a  handle to group object "A"
IN  n        number of ranks in the ranks_a array
IN  ranks_a  array of zero or more valid ranks in group "A"
IN  group_b  handle to group object "B"
OUT ranks_b  array of corresponding ranks in group "B"; MPI_UNDEFINED when no correspondence exists.
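The three queries above are most easily seen together. The sketch below is illustrative only: this draft defines calling sequences but no language bindings, so the header name "mpi.h", the MPI_Group handle type, and the convention of returning OUT arguments through pointers are assumptions of the example rather than part of the proposal. It translates the caller's rank in one group into its rank in another.

    #include <stdio.h>
    #include "mpi.h"   /* assumed header exposing the draft interface */

    /* Report where the calling process sits in group "A" and what the
       corresponding rank in group "B" is, if any. */
    void show_correspondence(MPI_Group group_a, MPI_Group group_b)
    {
        int size_a, my_rank_a, ranks_a[1], ranks_b[1];

        MPI_GROUP_SIZE(group_a, &size_a);     /* processes in "A" */
        MPI_GROUP_RANK(group_a, &my_rank_a);  /* my rank in "A", or MPI_UNDEFINED */
        if (my_rank_a == MPI_UNDEFINED)
            return;                           /* caller is not a member of "A" */

        ranks_a[0] = my_rank_a;
        MPI_TRANSLATE_RANKS(group_a, 1, ranks_a, group_b, ranks_b);

        if (ranks_b[0] == MPI_UNDEFINED)
            printf("rank %d of %d in A; not a member of B\n", my_rank_a, size_a);
        else
            printf("rank %d of %d in A; rank %d in B\n", my_rank_a, size_a, ranks_b[0]);
    }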
MPI_GROUP_FLATTEN(group, max_length, buffer, actual_length)

IN  group          handle to group object
IN  max_length     maximum length of buffer in bytes
OUT buffer         byte-aligned buffer
OUT actual_length  actual byte length of buffer containing the flattened group information

If insufficient space is available in buffer (as specified by max_length), then the contents of buffer are undefined at exit. The quantity actual_length is always well defined at exit as the number of bytes needed to store the flattened group.

Though implementations may vary in how they store flattened groups, the information must be sufficient to reconstruct the group using MPI_GROUP_UNFLATTEN below. The purpose of flattening and unflattening is to allow interprocess transmission of group objects.

3.5.2  Local Group Constructors

The execution of the following operations does not require interprocess communication.

MPI_GROUP_UNFLATTEN(max_length, buffer, group)

IN  max_length  maximum length of buffer in bytes
IN  buffer      byte-aligned buffer produced by MPI_GROUP_FLATTEN
OUT group       handle to group object

See MPI_GROUP_FLATTEN above.
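As a sketch of the intended flatten/unflatten round trip, the fragment below packs a group into a byte buffer on one process and rebuilds it on another. The C binding shown (char buffer, int lengths, OUT arguments by pointer) is assumed, and the actual transmission of the buffer would use a point-to-point send and receive from the pt2pt chapter, indicated here only by comments.

    #include "mpi.h"   /* assumed header exposing the draft interface */

    #define GROUP_BUF_MAX 4096   /* illustrative buffer size */

    /* Sender side: flatten a group into a byte buffer for transmission. */
    void ship_group(MPI_Group group)
    {
        char buffer[GROUP_BUF_MAX];
        int  actual_length;

        MPI_GROUP_FLATTEN(group, GROUP_BUF_MAX, buffer, &actual_length);
        if (actual_length > GROUP_BUF_MAX)
            return;   /* buffer contents undefined; a retry needs actual_length bytes */

        /* ... send actual_length bytes of buffer to the peer process ... */
    }

    /* Receiver side: rebuild the group from the received bytes. */
    void receive_group(char *buffer, int length, MPI_Group *group)
    {
        /* ... buffer and length were received from the peer ... */
        MPI_GROUP_UNFLATTEN(length, buffer, group);
    }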
MPI_LOCAL_SUBGROUP(group, n, ranks, new_group)

IN  group      handle to group object
IN  n          number of elements in array ranks (and size of new_group)
IN  ranks      array of integer ranks in group
OUT new_group  new group derived from the above, preserving the order defined by ranks.

MPI_LOCAL_SUBGROUP_RANGES(group, n, ranges, new_group)

IN  group      handle to group object
IN  n          number of elements in array ranges
IN  ranges     a one-dimensional array of pairs of ranks (form: begin through end)
OUT new_group  new group derived from the above, preserving the order defined by ranges.

Discussion: Please propose additional subgroup functions before the second reading... Virtual topologies support?

MPI_LOCAL_GROUP_UNION(group1, group2, group_out)

MPI_LOCAL_GROUP_INTERSECT(group1, group2, group_out)

MPI_LOCAL_GROUP_DIFFERENCE(group1, group2, group_out)

IN  group1     first group object handle
IN  group2     second group object handle
OUT group_out  group object handle

The set-like operations are defined as follows:

union       all elements of the first group (group1), followed by all elements of the second group (group2) not in the first
intersect   all elements of the first group which are also in the second group
difference  all elements of the first group which are not in the second group

Note that for these operations the order of processes in the output group is determined first by order in the first group (if possible) and then by order in the second group (if necessary).

Discussion: What do people think about these local operations? More? Less? Note: these operations do not explicitly enumerate ranks, and therefore are more scalable if implemented efficiently...
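To make the local constructors concrete, the sketch below forms two subgroups of an existing group and merges them. The rank values are illustrative only (they presuppose a parent group of at least seven processes), and the C binding is assumed as in the earlier sketches.

    #include "mpi.h"   /* assumed header exposing the draft interface */

    void build_groups(MPI_Group group)
    {
        int first_four[4]  = { 0, 1, 2, 3 };
        int every_other[4] = { 0, 2, 4, 6 };
        MPI_Group g_low, g_even, g_union;

        /* purely local constructions: no communication takes place */
        MPI_LOCAL_SUBGROUP(group, 4, first_four,  &g_low);
        MPI_LOCAL_SUBGROUP(group, 4, every_other, &g_even);

        /* union rule above: ranks 0..3 of group, then the members of
           g_even not already present, i.e. ranks 4 and 6 */
        MPI_LOCAL_GROUP_UNION(g_low, g_even, &g_union);
    }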
MPI_GROUP_FREE(group)

IN group  frees the group previously defined.

This operation frees a handle group which is not currently bound to a communicator. It is erroneous to attempt to free a group currently bound to a communicator.

Discussion: The point-to-point chapter suggests that there is a single destructor for all MPI opaque objects; however, it is arguable that this specifies the implementation of MPI very strongly.

MPI_GROUP_DUP(group, new_group)

IN  group      extant group object handle
OUT new_group  new group object handle

MPI_GROUP_DUP duplicates a group with all its cached information, replacing nothing. This function is essential to the support of virtual topologies.

3.5.3  Collective Group Constructors

The execution of the following operations requires collective communication within a group.

MPI_COLL_SUBGROUP(comm, membership_key, new_group)

IN  comm            communicator object handle
IN  membership_key  (integer)
OUT new_group       new group object handle

This collective function is called by all processes in the group associated with comm. A separate, non-overlapping group of processes is formed for each distinct value of membership_key, with the processes retaining their relative order compared to the group of comm.

MPI_COLL_GROUP_PERMUTE(comm, new_rank, new_group)

IN  comm       communicator object handle
IN  new_rank   (integer)
OUT new_group  new group object handle

This collective function operates over all elements of the group of comm. A correct program specifies a distinct new_rank in each process, which defines a permutation of the original ordering.
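A typical use of the collective subgroup constructor is to split a computation into independent pieces keyed by an application-level index. The sketch below forms the row groups of a notional two-dimensional decomposition; the key computation and the C binding are assumptions of the example, and MPI_COMM_RANK is the local query defined in Section 3.7.1 below.

    #include "mpi.h"   /* assumed header exposing the draft interface */

    /* Called by every process in the group of comm. */
    void split_into_rows(MPI_Comm comm, int ncols, MPI_Group *row_group)
    {
        int rank, row_key;

        MPI_COMM_RANK(comm, &rank);   /* defined in Section 3.7.1 below */
        row_key = rank / ncols;       /* processes with equal keys join one group */

        /* collective over the group of comm; members keep their relative order */
        MPI_COLL_SUBGROUP(comm, row_key, row_group);
    }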
Discussion: Before, we had one function that implemented collective subsetting as well as permutation of ranks. We convinced ourselves that it is better to have the above two functions, each of which has clear usage and implementation. We do not believe that the performance of the two-function sequence needed to subset and permute will significantly differ from the all-encompassing function previously defined. Comments?

For instance, to provide the sorting function capability originally anticipated by Marc Snir, one could use MPI_COLL_SUBGROUP preceded by a stable sort. We therefore see the given functionality as sufficient and clearly explicable.

By the way, we have intentionally avoided making convenience functions that do analogous permutation or subsetting while creating new communicators, because the user can easily do this without additional synchronizations beyond getting the additional needed contexts. Are these so convenient that we want to add them (e.g., MPI_COLL_COMM_PERMUTE)?


3.6  Operations on Contexts

3.6.1  Local Operations

MPI_CONTEXTS_RESERVE(n, contexts)

IN n         number of contexts to reserve (resp., unreserve)
IN contexts  integer array of contexts

Reserves zero or more contexts. A reserved context will not be allocated by a subsequent call to MPI_CONTEXTS_ALLOC in the same process (see below). If one or more of the contexts had already been allocated by MPI_CONTEXTS_ALLOC or reserved by MPI_CONTEXTS_RESERVE, then this function returns an error, and no change of context reservation state shall have occurred.

MPI_CONTEXTS_FREE(n, contexts)

IN n         number of contexts to free
IN contexts  integer array of contexts

Local deallocation of contexts reserved by MPI_CONTEXTS_RESERVE or allocated by MPI_CONTEXTS_ALLOC. It is erroneous to free a context that is bound to any communicator (either locally or in another process).

3.6.2  Collective Operations

MPI_CONTEXTS_ALLOC(comm, n, contexts)

IN  comm      communicator object handle
IN  n         number of contexts to allocate
OUT contexts  integer array of contexts

Allocates an array of contexts. This collective operation is executed by all processes in the group defined by comm. The contexts that are allocated by MPI_CONTEXTS_ALLOC are unique within the group associated with comm. The array is the same on all processes that call the function (same order, same number of elements).
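The fragment below shows the expected life cycle of an allocated context: obtain it collectively over an existing communicator, use it (for example by binding it to a group with MPI_COMM_BIND, Section 3.7.2 below), and release it locally once it is no longer bound. The C binding is assumed as in the earlier sketches.

    #include "mpi.h"   /* assumed header exposing the draft interface */

    void context_lifecycle(MPI_Comm comm)
    {
        int contexts[1];

        /* collective over the group of comm; every member receives the
           same context value */
        MPI_CONTEXTS_ALLOC(comm, 1, contexts);

        /* ... bind contexts[0] to a group with MPI_COMM_BIND, communicate,
           then MPI_COMM_UNBIND (Section 3.7.2 below) ... */

        /* local deallocation; legal only while the context is not bound
           to any communicator */
        MPI_CONTEXTS_FREE(1, contexts);
    }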
3.6.2 Collective Operations

MPI_CONTEXTS_ALLOC(comm, n, contexts)
  IN comm          communicator object handle
  IN n             number of contexts to allocate
  OUT contexts     integer array of contexts
Allocates an array of contexts. This collective operation is executed by all processes in the group defined by comm. The contexts that are allocated by MPI_CONTEXTS_ALLOC are unique within the group associated with comm. The array is the same on all processes that call the function (same order, same number of elements).
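For illustration only (editor-added sketch, not draft text; C calling style and handle types are assumed): because the returned array is identical on every calling process, each process can then build communicators locally, with no further synchronization, using MPI_COMM_BIND defined below:

    int ctx[4], i;
    MPI_Group grp;                               /* assumed handle types */
    MPI_Comm  newcomm[4];

    MPI_COMM_GROUP(comm, &grp);                  /* group underlying comm        */
    MPI_CONTEXTS_ALLOC(comm, 4, ctx);            /* collective over that group   */
    for (i = 0; i < 4; i++)
        MPI_COMM_BIND(grp, ctx[i], &newcomm[i]); /* purely local                 */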
MPI_CONTEXTS_JOIN(local_comm, remote_comm_send, remote_comm_recv, n, contexts)
  IN local_comm          communicator object handle
  IN remote_comm_send    communicator object handle
  IN remote_comm_recv    communicator object handle
  IN n                   number of contexts to allocate
  OUT contexts           integer array of contexts
This is an advanced function needed to support "safe" intergroup communication. The function creates n contexts unique on the union of the underlying groups of local_comm and remote_comm_send (which is the same as remote_comm_recv's group). remote_comm_send and remote_comm_recv are created by MPI_COMM_MERGE (below).

3.7 Operations on Communicators

3.7.1 Local Communicator Operations

The following are all local (non-communicating) operations.

MPI_COMM_SIZE(comm, size)
  IN comm          handle to communicator object.
  OUT size         is the integer number of processes in the group of comm.

MPI_COMM_RANK(comm, rank)
  IN comm          handle to communicator object.
  OUT rank         is the integer rank of the calling process in the group of comm, or MPI_UNDEFINED if the process is not a member.

MPI_COMM_FLATTEN(comm, max_length, buffer, actual_length)
  IN comm             handle to communicator object.
  IN max_length       maximum length of buffer in bytes
  OUT buffer          byte-aligned buffer
  OUT actual_length   actual byte length of buffer containing flattened communicator information
If insufficient space is available in buffer (as specified by max_length), then the contents of buffer are undefined at exit. The quantity actual_length is always well-defined at exit as the number of bytes needed to store the flattened communicator.
Though implementations may vary on how they store flattened communicators, the information must be sufficient to reconstruct the communicator using MPI_COMM_UNFLATTEN below. The purpose of flattening and unflattening is to allow interprocess transmission of communicator objects.
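As an editor-added sketch (not draft text), the intended round trip might look like the following; the MPI_SEND and MPI_RECV calls are placeholders for whatever point-to-point interface the other chapters define, and the C style is assumed:

    char buf[4096];
    int  actual;
    MPI_Comm rebuilt;                             /* assumed handle type */

    /* on the sending process */
    MPI_COMM_FLATTEN(comm, sizeof(buf), buf, &actual);
    MPI_SEND(buf, actual, ...);                   /* placeholder transmission */

    /* on the receiving process */
    MPI_RECV(buf, sizeof(buf), ...);              /* placeholder transmission */
    MPI_COMM_UNFLATTEN(sizeof(buf), buf, &rebuilt);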
3.7.2 Local Constructors

MPI_COMM_UNFLATTEN(max_length, buffer, comm)
  IN max_length    maximum length of buffer in bytes
  IN buffer        byte-aligned buffer produced by MPI_COMM_FLATTEN
  OUT comm         handle to object
See MPI_COMM_FLATTEN above.

MPI_COMM_BIND(group, context, comm_new)
  IN group         object handle to be bound to new communicator
  IN context       context to be bound to new communicator
  OUT comm_new     the new communicator.
The above function creates a new communicator object, which is associated with the group defined by group, and the specified context. The operation does not require communication. It is correct to begin using a communicator as soon as it is defined. It is not erroneous to invoke this function twice in the same process with the same context. Finally, there is no explicit synchronization over the group.

MPI_COMM_UNBIND(comm)
  IN comm          the communicator to be deallocated.
This routine disassociates the group associated with comm from the context associated with comm. The opaque object comm is deallocated. Both the group and context, provided at the MPI_COMM_BIND call, remain available for further use. If MPI_COMM_MAKE (see below) was called in lieu of MPI_COMM_BIND, then there is no exposed context known to the user, and this quantity is freed by MPI_COMM_UNBIND.

MPI_COMM_GROUP(comm, group)
  IN comm          communicator object handle
  OUT group        group object handle
Accessor that returns the group corresponding to the communicator comm.

MPI_COMM_CONTEXT(comm, context)
  IN comm          communicator object handle
  OUT context      context
Returns the context associated with the communicator comm.

MPI_COMM_DUP(comm, new_context, new_comm)
  IN comm          communicator object handle
  IN new_context   new context to use with new_comm
  OUT new_comm     communicator object handle
MPI_COMM_DUP duplicates a communicator with all its cached information, replacing just the context.
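An editor-added sketch (not draft text) of the library pattern this call is aimed at: at entry a library re-binds the caller's group to a context of its own, so its traffic cannot be confused with whatever the caller still has outstanding. The variable lib_context stands for a context obtained earlier (for example by MPI_CONTEXTS_ALLOC or MPI_CONTEXTS_RESERVE at library initialisation); the C style is assumed.

    MPI_Comm private_comm;                        /* assumed handle type */

    MPI_COMM_DUP(user_comm, lib_context, &private_comm);
    /* ... all of the library's communication uses private_comm ... */
    MPI_COMM_UNBIND(private_comm);                /* group and lib_context stay usable */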
MPI_COMM_MERGE(comm, comm_remote, comm_send, comm_recv)
  IN comm          communicator object handle
  IN comm_remote   communicator object handle
  OUT comm_send    communicator object handle
  OUT comm_recv    communicator object handle
MPI_COMM_MERGE is a call used after a publish & subscribe (equivalently, flatten & unflatten) sequence to provide communicators that can be used to transmit to the remote group using "correct" rank notation. It is motivated further below.

3.7.3 Collective Communicator Constructors

MPI_COMM_MAKE(comm, group, comm_new)
  IN comm          Communicator-scope in which the new communicator's context will be allocated.
  IN group         group object handle to be bound to new communicator
  OUT comm_new     the new communicator.
MPI_COMM_MAKE is equivalent to:

    MPI_CONTEXTS_ALLOC(comm, 1, context)
    MPI_COMM_BIND(group, context, comm_new)

plus, notionally, internal flags are set in the communicator, denoting that the context was created as part of the opaque process that made the communicator (so it can be freed by MPI_COMM_UNBIND). It is not erroneous if the group is not a subset of the underlying group of comm.

3.8 Inter-communication Initialization & Rendezvous

MPI_COMM_PUBLISH(comm, label, persistence_flag)
  IN comm               Communicator to be published under the name label.
  IN label              String label describing published communicator.
  IN persistence_flag   Either MPI_EPHEMERAL or MPI_PERSISTENT.
This operation results in the association of the communicator comm with the name specified in label, with global scope. Persistence is either ephemeral (one subscribe causes an automatic MPI_COMM_UNPUBLISH) or persistent (an explicit MPI_COMM_UNPUBLISH must be subsequently called, and any number of subscriptions can occur). This operation does not wait for subscriptions to occur before returning. Only one process calls this function.
Subsequent calls to MPI_COMM_PUBLISH with the same label (without an intervening MPI_COMM_UNPUBLISH) are erroneous.

Discussion: Should we have a permissions flag that implements access restrictions similar to Unix?

MPI_COMM_UNPUBLISH(label)
  IN label         String label describing published communicator.
This is an operation undertaken by a single process, whose effect is to remove the association of the communicator specified in a preceding MPI_COMM_PUBLISH call with the name specified in label. MPI_COMM_UNPUBLISH on an undefined label will be ignored.
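For illustration only (editor-added sketch, not draft text; the communicator and label names are made up): with MPI_PERSISTENT a serving group can leave a communicator on offer for any number of subscribing groups and withdraw it explicitly later.

    /* on exactly one process of the serving group */
    MPI_COMM_PUBLISH(server_comm, "solver_service", MPI_PERSISTENT);

    /* ... other groups may now subscribe to the label "solver_service"
       as many times as they like ... */

    MPI_COMM_UNPUBLISH("solver_service");         /* explicit withdrawal */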
MPI_COMM_SUBSCRIBE(my_comm, label, comm)
  IN my_comm       Communicator of participants in subscribe.
  IN label         String label describing communicator to which we wish to subscribe
  OUT comm         Communicator created through subscription process.
This is a collective communication in the group specified in my_comm, which has the effect of creating in each group process a copy of the previously published communicator associated with the name in label. This operation blocks until such an association is possible.
Once an MPI_COMM_PUBLISH and an MPI_COMM_SUBSCRIBE on the same label have occurred, the subscriber my_comm has the ability to send messages to the publisher; group members of the published communicator have the ability to receive messages using the published communicator. The communicators so defined may only be used in point-to-point communication.

Discussion: Do we want any of the following: MPI_COMM_SUBSCRIBE_NON_BLOCKING, MPI_COMM_SUBSCRIBE_PROBE, etc...

The symmetric case is constructed as follows: group "A" and group "B" wish to build a symmetric inter-group communication structure.
"A" group:

    mpi_contexts_alloc(A_comm, 1, context1)
    mpi_comm_dup(A_comm, context1, A_comm1)
    mpi_comm_rank(A_comm, rank)
    if(rank == 0) then
       mpi_comm_publish(A_comm1, "A_comm1", MPI_EPHEMERAL)
    endif
    /* [OTHER WORK UNRELATED TO THIS OPERATION] */
    mpi_comm_subscribe(A_comm, "B_comm1", B_comm1)
    mpi_comm_merge(A_comm1, B_comm1, Send_to_B, Recv_from_B)
    mpi_comm_free(A_comm1)
    mpi_comm_free(B_comm1)

"B" group:

    mpi_contexts_alloc(B_comm, 1, context1)
    mpi_comm_dup(B_comm, context1, B_comm1)
    mpi_comm_rank(B_comm, rank)
    if(rank == 0) then
       mpi_comm_publish(B_comm1, "B_comm1", MPI_EPHEMERAL)
    endif
    /* [OTHER WORK UNRELATED TO THIS OPERATION] */
    mpi_comm_subscribe(B_comm, "A_comm1", A_comm1)
    mpi_comm_merge(B_comm1, A_comm1, Send_to_A, Recv_from_A)
    mpi_comm_free(B_comm1)
    mpi_comm_free(A_comm1)

For example, elements of the "B" group use Send_to_A to send to elements of "A", which receive such messages in Recv_from_B.
Alternatively, the following function is rather more convenient when one does not need to exploit the decoupling of publish and subscribe transactions, and is as follows:

MPI_COMM_PUBLISH_SUBSCRIBE(comm_A, label_A, label_B, send_to_B, recv_from_B)
  IN comm_A        Communicator to be duplicated and published
  IN label_A       Label of published, duplicated communicator
  IN label_B       Label of communicator to which we want to subscribe
  OUT send_to_B    Valid send communicator for talking to the other group.
  OUT recv_from_B  Valid receive communicator for talking to the other group.
This call is always done in pairs as follows: "A" group:

    MPI_COMM_PUBLISH_SUBSCRIBE(comm_A, "comm_A", "comm_B", Send_to_B, Recv_from_B)

"B" group:

    MPI_COMM_PUBLISH_SUBSCRIBE(comm_B, "comm_B", "comm_A", Send_to_A, Recv_from_A)

MPI_COMM_JOIN(local_comm, remote_comm_send, remote_comm_recv, order, joined_comm)
  IN local_comm         Communicator describing local group
  IN remote_comm_send   Communicator describing remote group
  IN remote_comm_recv   Communicator describing remote group
  IN order              If local_comm is first, then MPI_TRUE, else MPI_FALSE
  OUT joined_comm       Merged communicator
This function creates the merged communicator joined_comm given a local communicator (local_comm) and a pair of remote communicators (obtained from MPI_COMM_MERGE). The function synchronizes over the underlying groups of both communicators, and produces a new, merged communicator, whose group is the ordered union of the two underlying groups (see MPI_GROUP_UNION). The new communicator has a unique context of communication.

3.9 Cacheing

Cacheing is the process by which implementation-defined data is propagated in groups and communicators.

3.9.1 Cacheing in Groups

3.9.2 Cacheing in Communicators

TBD.
3.10 Contexts, Communicators, and "Safety"

When a caller passes a communicator (which contains a context and group) to a callee, that communicator must be free of side effects on entry and exit to the subprogram. This provides the basic guarantee of safety. The callee has permission to do whatever communication it likes with the communicator, and under the above guarantee knows that no other communications will interfere. Since we permit the creation of new communicators without synchronization (assuming preallocated contexts), this does not impose a significant overhead.
This form of safety is analogous to other common computer science usages, such as passing a descriptor of an array to a library routine. The library routine has every right to expect such a descriptor to be valid and modifiable.
Note that MPI_UNDEFINED is the rank for processes that are sending to the communicator from outside the group. This can result through the publish & subscribe mechanism, or by virtue of the transmission of flattened communicators (and their subsequent unflattening and use).

From owner-mpi-context@CS.UTK.EDU Mon Jun 28 11:46:22 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA12079; Mon, 28 Jun 93 11:46:22 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20729; Mon, 28 Jun 93 11:45:45 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 28 Jun 1993 11:45:43 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20717; Mon, 28 Jun 93 11:45:42 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA16203; Mon, 28 Jun 1993 11:46:44 -0400 Date: Mon, 28 Jun 1993 11:46:44 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306281546.AA16203@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: latest draft Since I missed last week's Dallas meeting many of the issues below concerning the June 24 draft of the groups, contexts, and communicators proposal may already have been dealt with. 1. MPI_COMM_HOST, MPI_COMM_PARENT. Since the MPI process model does not currently include the idea of a host or parent I think these should be omitted. If and when the next generation of MPI gets off the ground communication with host and parent will become an issue, but I think we should defer talking about such things until then. 2. Can anyone give me an example using MPI_TRANSLATE_RANKS? 3. Do we need flatten/unflatten routines for both groups and communicators? Am I right in assuming that flatten/unflatten routines are provided as more efficient ways of disseminating group/communicator information than publish and subscribe? 4. In the union, intersection and difference set operations, what assumptions are being made about the relationship between the ranks in the parent and child groups? In my opinion, MPI should not specify any relationship.
So if you form the union of 2 groups you just get a third group and ranks in this latter group bear no relation to those in the former 2 groups. If order is really important, perhaps we should have an MPI_GROUP_CONCATENATE routine. 5. I would prefer to use MPI_FREE rather than MPI_GROUP_FREE, in the interests of limiting the number of routines. 6. I would prefer a single routine for partitioning, permuting, and copying groups, rather than separate MPI_COLL_SUBGROUP and MPI_COLL_GROUP_PERMUTE. 7. What is MPI_CONTEXTS_RESERVE for? Any examples? 8. Am I correct in inferring from MPI_COMM_MERGE that the concept of a C2 communicator no longer exists in MPI? I guess I would have preferred all communicators to be of the form: local_group, remote_group, send_context, recv_context so that if local_group=remote_group we have an intragroup communicator. 9. The example on page 10 contains errors. Also I think all examples should be complete programs or routines. These are much easier to follow than disembodied code fragments. 10. We still need to have more example codes to justify routines related to groups and communicators. These routines are likely to be very frightening to many people. Sorry if many of these points have already been discussed, but I would appreciate it if someone would bring me up to speed. David From owner-mpi-context@CS.UTK.EDU Mon Jun 28 15:51:53 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA14926; Mon, 28 Jun 93 15:51:53 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA07935; Mon, 28 Jun 93 15:51:24 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 28 Jun 1993 15:51:23 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from ocfmail.ocf.llnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA07923; Mon, 28 Jun 93 15:51:21 -0400 Received: from [134.9.250.226] (nessett.ocf.llnl.gov) by ocfmail.ocf.llnl.gov (4.1/SMI-4.0) id AA21415; Mon, 28 Jun 93 12:52:14 PDT Message-Id: <9306281952.AA21415@ocfmail.ocf.llnl.gov> Date: Mon, 28 Jun 1993 12:54:12 -0800 To: mpi-context@cs.utk.edu From: nessett@ocfmail.ocf.llnl.gov (Dan Nessett) X-Sender: nessett@ocfmail.llnl.gov Subject: publish / subscribe Cc: nessett@ocfmail.ocf.llnl.gov I am sending this to the context mailing list, even though I am not on it, because it needs to be discussed before presentation to the full committee. I am about to leave for a month and won't be able to respond to questions. My intention is that it will spark some discussion, rather than be the ultimate solution. If you copy me on the discussion, I will be able to comment when I get back at the end of July. Cheers, Dan ----------------- It occurred to me Friday after the MPI meeting was over that the publish / subscribe mechanism was trying to solve a problem similar to that of connecting two virtual address spaces. In Unix this is achieved through the shared memory system calls, shmget, shmctl, shmat and shmdt. While much of the detailed functionality of these system calls is not pertinent, I think the concepts can help us understand how to "attach" two groups, perhaps without a common ancestor, into one. In order to discuss the similarities and differences between connecting groups and connecting virtual address spaces, let me outline a model of groups that facilitates this. Groups are a kind of virtual address space. The rank within a group specifies a message pool to which I wish to send messages and from which I wish to receive messages.
They differ from conventional virtual address spaces in that a process can possess more than one, in fact it can possess many. When I wish to send a message to another message pool, I specify a group id, a rank and a context (the first and last items being encapsulated into a communicator). The rank is a "virtual address" or, if you wish, a "virtual name" of the message pool in the scope of the group specified by the group id. The same rank can mean quite different message pools in the scope of different group ids. This is the major reason why a group is conveniently viewed as a virtual address or virtual name space. Collective communication operations do not specify a rank, since it is implicit that they wish to interact with all message pools within the group. I use the concept of message pool here, which I acknowledge is non-standard, because of the confusion that occurs when speaking of processes and interprocess communication. Processes may or may not consist of one thread of control. Thus, to speak of a process sending to or receiving from another process can become confusing, since in some contexts I wish to discuss threads sending and receiving. My solution is to view the target of communication operations as a message pool. Processes/threads carry out interprocess communication by operating on message pools. They operate on their own message pool by receiving from it and the operate on other process/thread message pools by sending to them. In a single-threaded environment, we generally use the term process and there is a one-to-one mapping between them and message pools. In a multi-threaded environment, the threads share a common message pool (as they share a common address space) and the one-to-one mapping no longer exists. Groups have several characteristics that set them appart from the traditional concept of a process virtual address space. One already has been mentioned, i.e., a process may possess more than one group, while it possesses only one memory address space (actually, this is not quite true, since a segmented virtual address space can be considered to be the union of a number of different address spaces, but that is an aside). Since processes possess only one address space, they don't have a need to create another. Also, when they wish to share memory with another process, they map part of their existing address space into that of the other process. Our model seems to be that sharing of separately created group address spaces results in the creation of a new group. Finally, when the address space of one process is mapped into that of another process, an address that maps to a common word in the shared space may be different in one process from that in another process. Groups, on the other hand, are created by processes to organize their communication. Since the pattern of process communication within an application can be quite complex, this may require the creation of several groups. However, when two processes reference the same rank within a group, they want to refer to the same message pool. This allows them to communicate rank information to each other and use it in a consistent manner. The last point has significant implications when processes wish to join two groups into one. In particular, the combined group must map ranks to the same message pool for all processes. This causes some problems when the join operation is executed. I think much of the asymmetry in the existing publish / subscribe mechanism derives from these problems. 
The remainder of this message is a proposal to modify the publish / subscribe mechanism so that it results in the creation of a group by all involved processes that consistently maps ranks to message pools. It utilizes the idea that groups are virtual address spaces and that the creation of a combined group is very similar to sharing memory space. Combining two groups is carried out in two steps. One process within each of the two groups publishes a name for the group. The publication operation conveys not only a character string name representing a published name for the group (NB: nothing prevents the publication of the same group under different names), but also the number of processes (message pools) in the group. The publish operation returns a unique publication id that is used in a subsequent subscribe operation. The process that executes the publish communicates the publication id to the other members in its group. To complete the join, each process in both groups executes a subscribe operation. The arguments to the subscribe are the name of a published group and a publication id. The name represents the "other" group, while the publication id represents the group to which the process executing the subscribe belongs. The subscribe is a barrier synch operation. When it completes it returns a new group id that represents the new joined group. The ranks in the joined group represent message pools in both the original group to which the process belonged and message pools in the "other" group. For example, consider two groups A and B with the following characteristics. Group A consists of 7 message pools and group B consists of 3 message pools. To keep the description short, I will assume that we are working in a single threaded environment and so I will use the terms message pool and process synonymously. Process 2 in group A executes a publish operation on A using the name "Big group" and receives the publication id 10. Process 1 in group B executes a publish operation on B using the name "Small group" and receives the publication id 90. Subsequently, all processes in group A execute a subscribe with the parameters "Small group" and publication id 10. Similarly, all processes in group B execute a subscribe with the parameters "Big group" and the publication id 90. After all processes execute the subscribe operation, each is returned an id of a group with 10 members, 7 of which represent the processes in group A and 3 of which represent the processes in group B. Some may be concerned about how the publish / subscribe service could be implemented and how it would interact with the send / receive operations. Of course, MPI does not describe implementation, since it is an interface definition. However, in order that people can feel comfortable with the interface operations described above, let me outline one possible implementation as an existence proof that efficiency is possible. First, note that publish / subscribe can be implemented either with a central server or as a distributed service. The central server implementation would accept publish requests, create a unique publication id and table the appropriate information in association with the publication id. When a process executes a publication request, the group id is used to compute a list of message pool identifiers that belong to the group. 
This list is sent along with the publication name to the central server, which creates an entry in its data structures, creates a unique publication id and associates the list of message pool identifiers, the name and the publication id so it can search on either the name or publication id subsequently and find the other information. It then returns the publication id to the requesting process. A subscription request requires the central server to look up the number of ranks in the group identified by the name and by the one identified by the publication id. It adds those numbers together to determine the number of message pools in the combined group, returns this number with a list of message pool identifiers that represent the members in the combined group and records the message pool of the process executing the subscribe as having completed the operation. When all processes in the combined group have executed the subscribe, the central server sends a message to each of them, releasing them from the barrier synch. Discussion of a distributed service requires the establishment of some terminology. Since the publication / subscribe service allows groups created by processes without a common ancestor to join, there must be a way for the MPI implementations of these processes to communicate. For example, two parallel programs running on the same MPP machine may not have a common ancestor, but the machine generally will allow processes running on a node to communicate with other processes on other nodes irregardless of whether they have a common ancestor. The MPI implementation uses lower level identifiers, such as node id and UNIX pid, to carry out its associated semantics. Similarly, in a distributed system, an MPI implementation will use IP addresses and TCP ports as its lower level identifiers. I will call these lower level identifiers and the communication primatives that operate on them the universe of communication. In order to control scaling problems in large systems, the publish / subscribe caller must limit the universe of communication. For example, if MPI is being used in a distributed system based on the internet as the universe of communication, a publish / subscribe activity would have to contact every process executing within it. Obviously, this is unreasonable. There could be either a environmental management call or a separate publish / subscribe call that establishes this limit. For example, the call could establish the universe of communication to be all processes within a particular partition on an MPP machine. In a distributed environment, the call could establish a particular subnet address or a set of subnet addresses to limit the universe. I will assume that there is a broadcast facility in the universe of communication. For an MPP this seems obvious. For a distributed system this is less obvious, but much work is on-going at the present time to support a scalable broadcast facility in distributed systems in order to support various forms of teleconferencing. When a process executes a publish operation, it picks a unique integer and concatenates it with its unique address within the universe of communication. This becomes the publication id. It communicates this id and the published name to the other members in the group. When a subscribe operation is executed, the process broadcasts over the universe of communications the other group's publication name, its own publication id and a list of members in its group. 
This list must be expressed using the identifiers in the universe of communication. Each process blocks within the subscribe operation until it receives a subscription message with either the publication name or its publication id. When the message contains the publication id, it must have come from some process within its own group. It marks that process off the list of processes waiting on the barrier synch and waits for another message (unless the completion criterion described below is met). When the message contains the publication name, if this is the first such message, it extracts the list of members in the other group and forms a concatenated group. Some rule must be used so that all processes choose the same ranking. This could be on the basis of numerical or lexigraphic ordering of the identifiers in the universe of communication. The process then marks off the source of the message from the list of processes waiting for the barrier synch and waits for another message. If this was not the first message with the publication name, it simply marks off the process. When all processes are marked off of the list, the subscribe completes and the call returns. One problem with this scheme is that all processes must process every subscribe even if they are not involved. To mitigate this performance hit, a combination of the central server and distributed server implementation strategies is possible. Processes can be bound to "local" central servers with which they execute the central server protocol. The central servers can then execute the distributed protocol (or some suitably modified version of it) to complete the publish / subscribe operation. This offloads interfering publish / subscribe activity onto a server with only that function. From owner-mpi-context@CS.UTK.EDU Tue Jun 29 11:24:15 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA25070; Tue, 29 Jun 93 11:24:15 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23576; Tue, 29 Jun 93 11:23:42 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 29 Jun 1993 11:23:41 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23567; Tue, 29 Jun 93 11:23:38 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA11808; Tue, 29 Jun 1993 11:24:43 -0400 Date: Tue, 29 Jun 1993 11:24:43 -0400 From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306291524.AA11808@rios2.epm.ornl.gov> To: mpi-context@cs.utk.edu Subject: Routines for communicators etc Here is what I think should form the core of the groups, contexts, and communicators proposal. I believe that anything additional needs to be justified with example codes. In fact, I think we should have example codes illustrating all of these routines. My basic idea of a communicator is that it provides for communication between a local group and a remote group. A send context is used when sending messages, and a receive context is used when receiving messages. Thus, contexts provide unidirectional communication streams. The send context for one of the 2 groups must be the receive context of the other group, and vice versa. Likewise, given 2 groups, A and B, in group A, the local and remote groups refer to A and B, respectively. In group B they refer to groups B and A, respectively. For intragroup communication, the local and remote groups are identical. 
In general, the send and receive contexts will also be identical, but this is not necessarily so. Unless otherwise stated it is assumed that groups are undecorated. Predefined Communicators ------------------------ MPI_COMM_ALL. 90% of users will probably only ever use this communicator. Thus, we don't need the mpi_init_group routine. Group Inquiry Routines ---------------------- MPI_GROUP_SIZE (group, size) IN group (handle) OUT size (integer) MPI_GROUP_RANK (group, rank) IN group (handle) OUT rank (integer) Support for Transmission of Groups ---------------------------------- MPI_GROUP_FLATTEN (group, max_length, buffer, actual_length) IN group (handle) IN max_length (integer) OUT buffer (integer array) OUT actual_length (integer) MPI_GROUP_UNFLATTEN (max_length, buffer, group) IN max_length (integer) IN buffer (integer array) OUT group (handle) Local Group Constructors ------------------------ MPI_GROUP_ENUMERATE (group, n, ranks, new_group) (New name. Used to be IN group (handle) MPI_LOCAL_SUBGROUP) IN n (integer) IN ranks (integer array) OUT new_group (handle) Collective Group Constructors ----------------------------- MPI_GROUP_PARTITION (group, key, index, new_group) IN group (handle) IN key (integer) (This routine replaces MPI_COLL_SUBGROUP and IN index (integer) MPI_COLL_GROUP_PERMUTE) OUT new_group (handle) Local Context Operations ------------------------ MPI_CONTEXT_FREE (n, contexts) IN n (integer) IN contexts (integer array) Collective Context Operations ----------------------------- MPI_CONTEXT_ALLOC (group, n, contexts) (Uses group instead of comm) IN group (handle) IN n (integer) OUT contexts (integer array) Communicator Inquiry Routines ----------------------------- MPI_COMM_LOCAL_GROUP (comm, group) (Returns local group of communicator) IN comm (handle) OUT group (handle) MPI_COMM_REMOTE_GROUP (comm, group) (Returns remote group of communicator) IN comm (handle) OUT group (handle) MPI_COMM_SEND_CONTEXT (comm, context) (Returns send context of communicator) IN comm (handle) OUT context (integer) MPI_COMM_RECV_CONTEXT (comm, context) (Returns receive context of communicator) IN comm (handle) OUT context (integer) Local Communicator Constructors ------------------------------- MPI_COMM_BIND (local_group, remote_group, send_context, recv_context, comm) IN local_group (handle) IN remote_group (handle) (This differs from original routine) IN send_context (integer) IN recv_context (integer) OUT comm (handle) MPI_COMM_MERGE (local_comm, remote_comm, comm) IN local_comm (handle) IN remote_comm (handle) OUT comm (handle) mpi_comm_merge (local_comm, remote_comm, comm) is equivalent to: mpi_comm_local_group (local_comm, local_group) mpi_comm_remote_group (remote_comm, remote_group) mpi_comm_send_context (local_comm, send_context) mpi_comm_recv_context (remote_comm, recv_context) mpi_comm_bind (local_group,remote_group,send_context,recv_context,comm) MPI_COMM_DUP (comm, new_context, new_comm) IN comm (handle) IN new_context (integer) OUT new_comm (handle) MPI_COMM_UNBIND (comm) IN comm (handle) Collective Communicator Constructors ------------------------------------ MPI_COMM_MAKE (group, comm) (This differs from original) IN group (handle) OUT comm (handle) mpi_comm_make (group, comm) is equivalent to: mpi_context_alloc (group, 1, context) mpi_comm_bind (group, group, context, context, comm) Publishing and Subscribing to Communicators ------------------------------------------- MPI_COMM_PUBLISH (comm, label, persistence) IN comm (handle) IN label (character string) IN persistence 
(integer) MPI_COMM_SUBSCRIBE (group, label, comm) IN group (handle) IN label (character string) OUT comm (handle) MPI_UNPUBLISH (label) IN label (character string) From owner-mpi-context@CS.UTK.EDU Tue Jun 29 12:14:48 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA26333; Tue, 29 Jun 93 12:14:48 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26683; Tue, 29 Jun 93 12:14:10 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 29 Jun 1993 12:14:09 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26675; Tue, 29 Jun 93 12:14:04 -0400 Received: from tycho.co.uk (tycho.meiko.co.uk) by hub.meiko.co.uk with SMTP id AA21875 (5.65c/IDA-1.4.4 for mpi-context@cs.utk.edu); Tue, 29 Jun 1993 17:15:07 +0100 Date: Tue, 29 Jun 1993 17:15:07 +0100 From: James Cownie Message-Id: <199306291615.AA21875@hub.meiko.co.uk> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA00407; Tue, 29 Jun 93 17:13:35 BST To: mpi-context@cs.utk.edu In-Reply-To: <9306291524.AA11808@rios2.epm.ornl.gov> (walker@rios2.epm.ornl.gov) Subject: Context proposals Content-Length: 4415 David Walker has stung me into putting out my ramblings on this subject too, so here they are. Before I start on the detail, some things I hope are not too contentious. What are Contexts for ? ======================= The selling point of contexts is that they allow the development of modular software by insulating library routines from the state of the communications system when they are called. I think this is the SOLE purpose of contexts. Issues with the current proposal ================================ 1) It doesn't supply the security we want. Users are allowed to arbitrarily bind context numbers which they invent to communicators in an entirely anarchic manner. There is therefore no possible security guarantee. 2) Because the user can locally construct communicators and bind contexts to them the receiver of a message where the receive has a wild-carded from field (given widespread current practice which doesn't support selection on from, this is likely to be a large proportion of receives !) has to perform a back translation from an absolute process identifier into a group rank. Unfortunately this is non-trivial to implement, and has to be done even if the message is really an inter-group communication, since there is no easy way (either at sender or receiver) of knowing this fact. 3) The intra group communicators have this funny property of being uni-directional, when inter group ones are not. This is most peculiar. (It also seems unnecessary. If the range of valid contexts really is large, then it should be possible to find one which is free at both ends. This may be expensive, but then the whole publish/subscribe mechanism already is !) 4) I have yet to see the example which implements (any of) the collective communications with the properties we desire that a) The communicator passed in need not be quiescent b) The operation is insulated from the non-quiescence c) There is low cost in operating on different communicators All of the library examples shown to date assume that the library can be statically bound to the group of processes on which it is to operate. This is certainly not the case for the collective comms routines. (Note that you need a context per group for the collective routines, otherwise overlapping groups will fail. One context for the library is insufficient). 
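(Editorial illustration, not part of Jim's message: the library pattern being debated here, in the style of the June draft, is roughly the sketch below; cached_context_for() is a made-up helper that returns a context previously set aside for this group, and the C handle types are assumed.)

    void lib_broadcast(MPI_Comm user_comm, void *buf, int len)
    {
        MPI_Group grp;
        MPI_Comm  lib_comm;
        int       ctx;

        MPI_COMM_GROUP(user_comm, &grp);
        ctx = cached_context_for(grp);        /* made-up per-group lookup */
        MPI_COMM_DUP(user_comm, ctx, &lib_comm);
        /* ... broadcast buf/len over lib_comm; that traffic is insulated
           from anything still outstanding on user_comm ... */
        MPI_COMM_UNBIND(lib_comm);
    }

The open question in the surrounding messages is how cached_context_for() gets filled in for every group, without assuming the communicator handed to the library is quiescent.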
Where do the problems come from ? ================================= 1) The insecurity arise from the ability of users to invent contexts and use them with no system intervention. This is like being allowed to access physical disk blocks while a file system is running. 2) The need to back translate node ids comes from the lack of negotiation in setting up communicators. If all the members of a communicator understand it the same, then there's no need for a translation. We hope and expect that this will be the case for communicators used for inter-group communication. (I think many many user's will be very confused if this is not the case, just imagine the confusion from the debugging messages you can generate with crossed over groups, e.g. Process 0 Process 1 group {0, 1} group {1, 0} bind the same context send to 1 receive from 1 receive from 1 send to 1 etc.) The process names can no longer be guaranteed to be anything more than totally local. The perverse thing here is that actually we could assert stronger rules about the creation of communicators which ensured that the same context was only ever bound to the same group, which should be sufficient for the common (inter-group) cases, and then relax this somewhat for the intra-group cases, which have to undergo negotiation via publish/subscribe anyway. (Of course we would lose the ability to flatten a group and distribute it through a shared file...) Anyhow I really dislike having to pay an extra cost everywhere for something I may never use. Please Please Please let's find a way to avoid the back translation on the common cases. -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Tue Jun 29 12:22:13 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA26450; Tue, 29 Jun 93 12:22:13 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27276; Tue, 29 Jun 93 12:21:32 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 29 Jun 1993 12:21:30 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27265; Tue, 29 Jun 93 12:21:22 -0400 Received: from tycho.co.uk (tycho.meiko.co.uk) by hub.meiko.co.uk with SMTP id AA21900 (5.65c/IDA-1.4.4 for mpi-context@cs.utk.edu); Tue, 29 Jun 1993 17:22:05 +0100 Date: Tue, 29 Jun 1993 17:22:05 +0100 From: James Cownie Message-Id: <199306291622.AA21900@hub.meiko.co.uk> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA00408; Tue, 29 Jun 93 17:20:35 BST To: walker@rios2.epm.ornl.gov Cc: mpi-context@cs.utk.edu In-Reply-To: <9306291524.AA11808@rios2.epm.ornl.gov> (walker@rios2.epm.ornl.gov) Subject: Re: Routines for communicators etc Content-Length: 885 > Collective Context Operations > ----------------------------- > MPI_CONTEXT_ALLOC (group, n, contexts) (Uses group instead of comm) > IN group (handle) > IN n (integer) > OUT contexts (integer array) I'm afraid it may need a communicator rather than a context, since otherwise how can it communicate to reserve the contexts ? It also probably needs the context in the communicator to be quiescent. (Actually this routine is just as hard as all the other collective routines to implement if you can't assume quiescence of the context. Let's see how we're supposed to do it (Lyndon ? 
Tony ? Mark ?)) -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Tue Jun 29 12:53:43 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA26658; Tue, 29 Jun 93 12:53:43 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29493; Tue, 29 Jun 93 12:53:06 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 29 Jun 1993 12:53:04 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29479; Tue, 29 Jun 93 12:53:02 -0400 Date: Tue, 29 Jun 93 17:53:52 BST Message-Id: <11218.9306291653@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Context proposals To: James Cownie , mpi-context@cs.utk.edu In-Reply-To: James Cownie's message of Tue, 29 Jun 1993 17:15:07 +0100 Reply-To: lyndon@epcc.ed.ac.uk Jim writes: > What are Contexts for ? > ======================= The purpose of context is to provide a mechanism which allows programmers to write software which isolates one set of communications from other sets of communications without use of message tags - to create different universes of messages, or different tag spaces, in the jargon. > The selling point of contexts is that they allow the development of > modular software by insulating library routines from the state of the > communications system when they are called. One application (important application) of context is that it allows library software to be written in such a manner that communications private to the library conflict neither with those of the library user or of other libraries. This is not the only application of context - it is perfectly reasonable and sensible for the user program to make use of contexts (even in the absence of libraries) to provide different message universes. Its probably a very sensible thing to do in complex applications. > > I think this is the SOLE purpose of contexts. > I think this is one purpose of contexts, for sure, but I can not agree that it is the only purpose of contexts. > Issues with the current proposal > ================================ > 1) It doesn't supply the security we want. > Users are allowed to arbitrarily bind context numbers which they > invent to communicators in an entirely anarchic manner. There is > therefore no possible security guarantee. Sure the current proposals give the programmer enough rope to hang himself. However there are ways of using contexts within the current proposal which certainly are safe. > 2) Because the user can locally construct communicators and bind > contexts to them the receiver of a message where the receive has a > wild-carded from field (given widespread current practice which > doesn't support selection on from, this is likely to be a large > proportion of receives !) has to perform a back translation from an > absolute process identifier into a group rank. Unfortunately this > is non-trivial to implement, and has to be done even if the message > is really an inter-group communication, since there is no easy way > (either at sender or receiver) of knowing this fact. I disagree that it is necessary to do the back-translation. I'll mail further tomorrow including an explanation of how the back-translation is avoided, I gotta go right now. 
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Jun 29 13:25:27 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA27065; Tue, 29 Jun 93 13:25:27 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01689; Tue, 29 Jun 93 13:24:52 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 29 Jun 1993 13:24:51 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01680; Tue, 29 Jun 93 13:24:44 -0400 Received: by gw1.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA15098; Tue, 29 Jun 93 17:25:44 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA02148; Tue, 29 Jun 93 11:24:00 MDT Date: Tue, 29 Jun 93 11:24:00 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9306291724.AA02148@macaw.fsl.noaa.gov> To: jim@meiko.co.uk Subject: yet more rambings... Cc: mpi-context@cs.utk.edu > > Collective Context Operations > > ----------------------------- > > MPI_CONTEXT_ALLOC (group, n, contexts) (Uses group instead of comm) > > IN group (handle) > > IN n (integer) > > OUT contexts (integer array) > > I'm afraid it may need a communicator rather than a context, since > otherwise how can it communicate to reserve the contexts ? It also > probably needs the context in the communicator to be > quiescent. (Actually this routine is just as hard as all the other > collective routines to implement if you can't assume quiescence of the > context. Let's see how we're supposed to do it (Lyndon ? Tony ? Mark > ?)) I agree. We went round and round on this for a bit when we were making up the examples. (At least I did...) What can we really say about "quiescence"? Can we guarantee that MPI_COMM_ALL is quiescent immediately after MPI_INIT() returns? I hope so! :-) Can we guarantee that a communicator is quiescent immediately after "collective" creation (via MPI_COMM_MAKE())? I think so. Can we guarantee that a communicator used in a collective communication function is quiescent immediately after the function returns? I think so, as long as the communicator was quiescent before the function was called. Is it the user's responsibility to make sure a communicator is quiescent before using it in a collective communication call? If so, will a user be forced to create lots of quiescent communicators? Is this a bad thing? I could see programs starting out like this:

    int i, numQuietComms = ;
    group allGroup;
    comm quietComms[numQuietComms];

    MPI_INIT();
    MPI_COMM_GROUP(MPI_COMM_ALL, allGroup);
    for (i=0; i < numQuietComms; i++) {
       /* Collective op */
       MPI_COMM_MAKE(MPI_COMM_ALL, allGroup, quietComms[i]);
    }
    /* etc. */

From: walker@rios2.epm.ornl.gov (David Walker) Message-Id: <9306291734.AA14952@rios2.epm.ornl.gov> To: jim@meiko.co.uk Subject: Re: context proposals Cc: mpi-context@cs.utk.edu > Date: Tue, 29 Jun 1993 17:15:07 +0100 > From: James Cownie > > David Walker has stung me into putting out my ramblings on this > subject too, so here they are. > > Before I start on the detail, some things I hope are not too > contentious. > > What are Contexts for ? > ======================= > The selling point of contexts is that they allow the development of > modular software by insulating library routines from the state of the > communications system when they are called.
The insulation is needed in at least 2 cases: (1) Asynchronous entry into a library routine (2) Synchronous entry into a library routine when messages are outstanding > I think this is the SOLE purpose of contexts. > > Issues with the current proposal > ================================ > 1) It doesn't supply the security we want. > Users are allowed to arbitrarily bind context numbers which they > invent to communicators in an entirely anarchic manner. There is > therefore no possible security guarantee. I think MPI should allow secure programs to be written. I don't think it should guarantee security. > 2) Because the user can locally construct communicators and bind > contexts to them the receiver of a message where the receive has a > wild-carded from field (given widespread current practice which > doesn't support selection on from, this is likely to be a large > proportion of receives !) has to perform a back translation from an > absolute process identifier into a group rank. Unfortunately this > is non-trivial to implement, and has to be done even if the message > is really an inter-group communication, since there is no easy way > (either at sender or receiver) of knowing this fact. I assume you're performing intergroup communication in which the rank is wildcarded on the receiver, i.e., the receiver will receive from any process in the other group. I don't see the problem here. The actual rank (relative to the send group) of the sender is part of the message envelope (according to my interpretation of Section 1.4.2). When calling MPI_QUERY to query the return status object, the rank returned is relative to the send group. Maybe I'm misunderstanding what you mean. > 3) The intra group communicators have this funny property of being > uni-directional, when inter group ones are not. This is most > peculiar. (It also seems unnecessary. If the range of valid > contexts really is large, then it should be possible to find one > which is free at both ends. This may be expensive, but then the > whole publish/subscribe mechanism already is !) In the routines that I posted today both intra and inter group communicators are unidirectional. Do you have a proposal for bidirectional intergroup communicators? > 4) I have yet to see the example which implements (any of) the > collective communications with the properties we desire that > a) The communicator passed in need not be quiescent > b) The operation is insulated from the non-quiescence > c) There is low cost in operating on different communicators > > All of the library examples shown to date assume that the library > can be statically bound to the group of processes on which it is to > operate. This is certainly not the case for the collective comms > routines. (Note that you need a context per group for the > collective routines, otherwise overlapping groups will fail. One > context for the library is insufficient). Well, my assumption has always been that whenever a new group is created a reserved system context is also created for use in collective communication. So when a collective communication routine is passed a communicator it calls MPI_COMM_DUP to create a new communicator that is the same as the original, except that it uses the reserved context. Can you give a more concrete example of what you perceive to be the problem here? > Where do the problems come from ? > ================================= > > 1) The insecurity arises from the ability of users to invent contexts > and use them with no system intervention.
This is like being > allowed to access physical disk blocks while a file system is > running. > > 2) The need to back translate node ids comes from the lack of > negotiation in setting up communicators. If all the members of a > communicator understand it the same, then there's no need for a > translation. We hope and expect that this will be the case for > communicators used for inter-group communication. (I think many > many users will be very confused if this is not the case, just > imagine the confusion from the debugging messages you can generate > with crossed over groups, e.g.
>
>    Process 0            Process 1
>    group {0, 1}         group {1, 0}
>    bind the same context
>    send to 1            receive from 1
>    receive from 1       send to 1
>
> etc.)
>
> The process names can no longer be guaranteed to be anything more > than totally local. > > The perverse thing here is that actually we could assert stronger > rules about the creation of communicators which ensured that the > same context was only ever bound to the same group, which should be > sufficient for the common (inter-group) cases, and then relax this > somewhat for the intra-group cases, which have to undergo > negotiation via publish/subscribe anyway. (Of course we would lose > the ability to flatten a group and distribute it through a shared > file...) > Are you confusing intergroup (i.e., between groups) and intragroup (i.e., within a group) communication? > Anyhow I really dislike having to pay an extra cost everywhere for > something I may never use. > > Please Please Please let's find a way to avoid the back translation on > the common cases. > > > -- Jim David From owner-mpi-context@CS.UTK.EDU Tue Jun 29 13:51:24 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA27443; Tue, 29 Jun 93 13:51:24 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03801; Tue, 29 Jun 93 13:50:37 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 29 Jun 1993 13:50:36 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03786; Tue, 29 Jun 93 13:50:34 -0400 Received: by gw1.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA15164; Tue, 29 Jun 93 17:51:38 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA02173; Tue, 29 Jun 93 11:49:54 MDT Date: Tue, 29 Jun 93 11:49:54 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9306291749.AA02173@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu Subject: Re: yet more ramblings... After looking at the program fragment I just sent out, it's obvious that I missed the "fast" way to do it...
> int i, numQuietComms = ;
> group allGroup;
> comm quietComms[numQuietComms];
>
> MPI_INIT();
> /* Local op */
> MPI_COMM_GROUP(MPI_COMM_ALL, allGroup);
>
> for (i=0; i<numQuietComms; i++)
> { /* Collective op */
>   MPI_COMM_MAKE(MPI_COMM_ALL, allGroup, quietComms[i]);
> }
>
> /* etc. */
int i, numQuietComms = ;
group allGroup;
comm quietComms[numQuietComms];
context tmpContexts[numQuietComms];

MPI_INIT();
/* Local op */
MPI_COMM_GROUP(MPI_COMM_ALL, allGroup);

/* Collective op */
MPI_CONTEXTS_ALLOC (MPI_COMM_ALL, numQuietComms, tmpContexts);

for (i=0; i<numQuietComms; i++)
From: James Cownie Message-Id: <199306301015.AA26186@hub.meiko.co.uk> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA00509; Wed, 30 Jun 93 11:14:22 BST To: mpi-context@cs.utk.edu In-Reply-To: <9306291734.AA14952@rios2.epm.ornl.gov> (walker@rios2.epm.ornl.gov) Subject: Re: context proposals Content-Length: 6504 What are contexts for ?
======================= OK, so I was a little bit restrictive in my definition of what contexts are for. I agree with Lyndon that they may be useful for structuring user code (but then maybe this is just a semantic question of what a library is...), and I agree with David that there are two different pieces of insulation needed by the library. However I haven't seen any great addition to my statement. We all agree they're to protect you from yourself (and others). Security ======== On the security side I'd like to understand the gains from throwing away enforced security. What do we lose by enforcing it ? So far (and this may be just because we haven't seen any examples...) I have seen no gain from the ability to mix contexts into groups at whim, which is what loses the security. I am open to arguments that we need this, but haven't yet understood what we gain from this power (other than the ability to shoot ourselves in the foot), and am nervous about what we lose. > I assume you're performing intergroup communication in which the > rank is wildcarded on the receiver, i.e., the receiver will receive > from any process in the other group. I don't see the problem here. Yes this (or any case where the rank is wildcarded) is the case. > The actual rank (relative to the send group) of the sender is part of the > message envelope (according to my interpretation of Section 1.4.2). > When calling MPI_QUERY to query the return status object, the rank > returned is relative to the send group. Maybe I'm misunderstanding > what you mean. Please can we clarify some terminology:
1) Send communicator group == group in communicator used by sender
2) Receive communicator group == group in communicator used by receiver
These groups are not (in general) the same, and have no relationship to each other (they may even have no intersection at all). If the resulting rank returned to the receiver is NOT relative to the receive communicator group, then
1) The group in the receive communicator is irrelevant (except when the communicator is sent through).
2) The resulting rank is useless. I cannot use it to reply to the sender. In my mind this was the main reason for getting this result.
I think 2.4.2 (on envelopes) should not be taken as gospel yet. We need to think what we want to achieve, then worry about how to do it. Directionality ============== David says : > In the routines that I posted today both intra and inter group > communicators are unidirectional. Do you have a proposal for > bidirectional intergroup communicators? I don't actually believe he means this. Surely I don't need TWO communicators just to send a message to someone else and receive a reply ? I believe
1) A communicator should always be able to be used both for sending through, and receiving from. It is a bi-directional object.
2) The context in a communicator has two functions: when sending it is added to the message; when receiving, only messages which match it can be received through this communicator.
This model (which, give or take bi-directionality, is [I believe] what is in the proposal) has the implication that a single context number has meaning at both the sender and the receiver. If I understand David's proposal, he has two contexts in the communicator, so that the context used for sending can be different from that used for receiving. This makes the context numbers more local, and less co-operation is needed to create them. (The ultimate end of this road is to have a different context number for each member of the group...)
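For what it is worth, the two models being contrasted here can be written down side by side. The declarations below are only an illustration of the discussion (the field names are invented, not taken from the draft):

/* Model 1 (the tabled proposal, read bi-directionally): ONE context.   */
/* The same value is stamped on outgoing messages and used to filter    */
/* incoming ones, so it must mean the same thing at both ends.          */
typedef struct {
    int group;          /* handle for the (single) group                */
    int context;        /* used for both send and receive               */
} comm_one_context;

/* Model 2 (David's routines, as read above): separate send and receive */
/* contexts.  Each side can choose its receive context locally, so less */
/* co-operation is needed to build the communicator, at the cost of the */
/* extra field and of each context being meaningful in one direction.   */
typedef struct {
    int group;
    int send_context;   /* attached to outgoing messages                */
    int recv_context;   /* incoming messages must carry this value      */
} comm_two_contexts;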
Collective communications ========================= > Well, my assumption has always been that whenever a new group is created > a reserved system context is also created for use in collective > communication. So when a collective communication routine is passed > a communicator it calls MPI_COMM_DUP to create a new communicator > that is the same as the original, except that it uses the reserved context. > Can you give a more concrete example of what you perceive to be the > problem here? This is certainly a possible implementation. It's just that it isn't what is in the current proposal. That is most explicit that a communicator is a group and ONE context. There has been a strong pressure to be able to let the user write routines which are similar in their behaviour to the collective routines we specify. This implies that the context mechanism should have enough power to let the user do this. If this power exists, then we should be able to write the collective routines this way, building solely on the point to point and context proposals, WITHOUT requiring additional help from the implementation of communicators and groups (i.e. without a special reserved context, unless we have a way of the user also being allowed reserved contexts. So what I want to see is how to write a library routine using the context proposal which has the following properties 1) It can be called on arbitrary groups. (You should probably read communicator for group, but I'm really interested in the group of processes, not the communicator). 2) It can be called safely independently of the outstanding message traffic and queued non-blocking operations. (i.e. wild card receives can be outstanding, no tag space is available). 3) It can be called simultaneously in overlapping groups and correctly resolve the situation. (Really we just need to see how the routine safely and cheaply establishes a unique context for itself, I'll buy that once this is done it can do the necessary comms !) David writes: > Are you confusing intergroup (i.e., between groups), and intragroup > (i.e., within a group) communication? Yes I am you are correct (David: IOU 1 Beer -- Jim). It's a long time since I did Latin. I'll re-phrase once more to make clear... I am concerned that we are introducing expense into the simple common practice case which is unnecessary for its implementation, solely to support a much less common and more intricate case. This cost arises because we cannot locally distinguish the two cases. -- Jim James Cownie Meiko Limited Meiko Inc. 
650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Wed Jun 30 09:36:08 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA05002; Wed, 30 Jun 93 09:36:08 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27780; Wed, 30 Jun 93 09:35:37 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 09:35:36 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from rios2.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27772; Wed, 30 Jun 93 09:35:35 -0400 Received: by rios2.epm.ornl.gov (AIX 3.2/UCB 5.64/4.03) id AA20050; Wed, 30 Jun 1993 09:36:29 -0400 Message-Id: <9306301336.AA20050@rios2.epm.ornl.gov> To: jim@meiko.co.uk Cc: mpi-context@CS.UTK.EDU Subject: Re: context proposals Date: Wed, 30 Jun 93 09:36:28 -0500 From: David W. Walker What are contexts for ? ======================= OK, so I think we agree what contexts are for. Security ======== The current proposal doesn't enforce security; we don't have a proposal that does enforce it. When we do then we'll be able to compare them, and see what the pros and cons are. Of course, being able to shoot oneself (and others) in the foot is a proud part of the tradition and culture of parallel programming. I think as a group we should strive to preserve our cultural heritage. Anyway, when I see a secure proposal I'll think about it. I'm still confused about Jim's comments on intergroup communication. I don't see any difficulties. An important point is that the intergroup communicator is different for the 2 groups. Suppose groups A and B each have a local context, called context_A and context_B. Then when they get together to form an intergroup communicator this is what they end up with:
Comm. in group A        Comm. in group B
group_A                 group_B            /* local group */
group_B                 group_A            /* remote group */
context_A               context_B          /* recv context */
context_B               context_A          /* send context */
When group A sends to group B, we use a rank relative to group B and context_B (i.e., we use the remote group and the send context). When group A receives from group B, again the rank is interpreted relative to group B, and context_A is used (i.e., we use the remote group and the recv context). The local group is not actually used. However, it needs to be there because somehow MPI needs to figure out the rank relative to the local group to stick on the envelope. When group B receives a message, it knows it's from process X of group A. It has no problem replying. Directionality ============== On the topic of directionality, I guess I mean that contexts are unidirectional, not communicators. Anyway, this is not really important. Since contexts are integers we can always decide to use the one with the lowest ID number for communicating in both directions. i.e., in the above example we could use the smaller of context_A and context_B as both the send and recv context. That way we end up with a communicator consisting of just a local group, a remote group, and a context. Maybe this is preferable. Collective communications ========================= Jim writes: > So what I want to see is how to write a library routine using the > context proposal which has the following properties > 1) It can be called on arbitrary groups.
(You should probably read > communicator for group, but I'm really interested in the group of > processes, not the communicator). > 2) It can be called safely independently of the outstanding message > traffic and queued non-blocking operations. (i.e. wild card receives > can be outstanding, no tag space is available). > 3) It can be called simultaneously in overlapping groups and correctly > resolve the situation. I have made a distinction between MPI's collective communication routines and collective library routines (such as matrix multiply, etc.). The collective routines in MPI need a group and a context in order to be implemented in terms of point-to-point routines. The context can either be passed into the routine in a communicator, or a reserved system context can be used. In general a library routine will be passed a group and an array of contexts. In many cases only a single context will actually be needed so again the group and context can be passed into the routine via a communicator. The context(s) that you pass a routine ensure that it can be called safely regardless of outstanding message traffic. Similarly, the use of contexts ensures that a routine can be called correctly simultaneously by overlapping groups. Note that "simultaneously" is not a correct word to use here within an SPMD process model in which different groups operate within different conditional branches. Thus:
if ( in(group_A) ) then
   call matmul (A, B, C, comm_A)
end if
if ( in(group_B) ) then
   call matmul (X, Y, Z, comm_B)
end if
this is how you would "simultaneously" call matmul in overlapping groups. If the groups were not overlapping you could use an if-elseif construct. I don't really understand Jim's concerns about the expense of implementation. Intergroup and intragroup communication can easily be distinguished by checking if the local and remote groups are identical. This could be done by setting a hidden bit in the communicator structure when it's created. David From owner-mpi-context@CS.UTK.EDU Wed Jun 30 10:07:43 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA05744; Wed, 30 Jun 93 10:07:43 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00256; Wed, 30 Jun 93 10:07:19 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 10:07:17 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00234; Wed, 30 Jun 93 10:07:08 -0400 Received: from tycho.co.uk (tycho.meiko.co.uk) by hub.meiko.co.uk with SMTP id AA27300 (5.65c/IDA-1.4.4 for mpi-context@CS.UTK.EDU); Wed, 30 Jun 1993 15:08:15 +0100 Date: Wed, 30 Jun 1993 15:08:15 +0100 From: James Cownie Message-Id: <199306301408.AA27300@hub.meiko.co.uk> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA00562; Wed, 30 Jun 93 15:06:43 BST To: mpi-context@CS.UTK.EDU In-Reply-To: <9306301336.AA20050@rios2.epm.ornl.gov> (message from David W. Walker on Wed, 30 Jun 93 09:36:28 -0500) Subject: Re: context proposals Content-Length: 4828 David writes > Anyway, when I see a secure proposal I'll think about it. I believe that Marc Snir's original proposal which bound groups and contexts tightly together provided this security. (Though of course it may have other problems !) The reason you're having difficulty with my comments about between group communicators is that you're considering your proposal, while I'm working in the light of the currently tabled context proposal as presented at the last meeting.
In that proposal a communicator explicitly has only a SINGLE group and SINGLE context. (I actually prefer your view of the world, but that's not what's been tabled at present...) Most of our other misunderstandings arise from the same problem. > Directionality > ============== > On the topic of directionality, I guess I mean that contexts are > unidirectional, not communicators. Exactly, the question is where does the context work as promised, to filter messages arriving at the sender, at the receiver, or both. This is where the current proposal makes between group communicators uni-directional; as they only have one context they have to make it meaningful only at the target of a send, and not locally meaningful for receipt of a message. > Anyway, this is not really > important. Since contexts are integers we can always decide to use the > one with the lowest ID number for communicating in both > directions. i.e., in the above example we could use the smaller of > context_A and context_B as both the send and recv context. That way we > end up with a communicator consisting of just a local group, a remote > group, and a context. Maybe this is preferable. I don't think this works, because that context number need not be available at both ends. Of course it is possible to have a liaison when creating the between group context and find a suitable single context number. This is fine, and also allows the exchange of the relevant group information. It is NOT in the current proposal. (Though presumably it is in yours, otherwise how does the communicator get both groups ?) Collective communications ========================= David writes: > I have made a distinction between MPI's collective communication > routines and collective library routines (such as matrix > multiply, etc.). I am questioning exactly this distinction. I thought we wanted to be able to write routines exactly equivalent to the MPI mandated collective routines without needing greater power than we expose to the user. > The collective routines in MPI need a group and a context in order to > be implemented in terms of point-to-point routines. The context can > either be passed into the routine in a communicator, or a reserved > system context can be used. I think they need more than one, I think they need one per group... (which is exactly the point.) > In general a library routine will be passed a group and an array of > contexts. In many cases only a single context will actually be needed so > again the group and context can be passed into the routine via a > communicator. The context(s) that you pass a routine ensure that it > can be called safely regardless of outstanding message traffic. This seems horrible. It cuts across all of the modularity we wanted to achieve. The caller now has to know for any library she calls how many contexts it needs. This must be wrong. > Note that "simultaneously" is not a correct word to use here within > an SPMD process model in which different groups operate within > different conditional branches. Thus:
>
> if ( in(group_A) ) then
>    call matmul (A, B, C, comm_A)
> end if
> if ( in(group_B) ) then
>    call matmul (X, Y, Z, comm_B)
> end if
>
> this is how you would "simultaneously" call matmul in overlapping > groups. If the groups were not overlapping you could use an if-elseif > construct. Why don't you like "simultaneously" ?
The events happen at the same time (which is what causes the problem !), the fact that they come from different lines of code does not seem relevant to me (as an implementer !) > I don't really understand Jim's concerns about the expense of > implementation. Intergroup and intragroup communication can easily be > distinguished by checking if the local and remote groups are > identical. This could be done by setting a hidden bit in the > communicator structure when its created. As above, this is a result of working on two different proposals. I understand how to do it in your world view, not in the Lyndon, Tony, Mark, ... proposal. James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Wed Jun 30 11:01:16 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA06166; Wed, 30 Jun 93 11:01:16 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03870; Wed, 30 Jun 93 11:00:20 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 11:00:19 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03857; Wed, 30 Jun 93 11:00:17 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA14076; Wed, 30 Jun 93 10:01:21 CDT Date: Wed, 30 Jun 93 10:01:21 CDT From: Tony Skjellum Message-Id: <9306301501.AA14076@Aurora.CS.MsState.Edu> To: tony@aurora.cs.msstate.edu, hender@macaw.fsl.noaa.gov Subject: Re: Context Examples Cc: mpi-context@cs.utk.edu ----- Begin Included Message ----- From hender@macaw.fsl.noaa.gov Tue Jun 29 10:22:11 1993 Date: Tue, 29 Jun 93 09:20:23 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) To: tony@aurora.cs.msstate.edu Subject: Context Examples Content-Length: 182 Tony, Would it be possible to send the context examples from the last meeting to the mpi-context mailing list? I'd do it but I don't have a copy of the slides. Thanks, Tom ----- End Included Message ----- The examples are being added to our chapter; I will then propagate chapter to reflector. Please give me a few days. - Tony From owner-mpi-context@CS.UTK.EDU Wed Jun 30 13:17:32 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA06921; Wed, 30 Jun 93 13:17:32 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13352; Wed, 30 Jun 93 13:16:35 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 13:16:33 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13328; Wed, 30 Jun 93 13:16:30 -0400 Date: Wed, 30 Jun 93 18:17:36 BST Message-Id: <12526.9306301717@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context: security issue To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Dear Jim The only way I can see of addressing the security issue which concerns you is to avoid exposing context in MPI. The particulars of how context is allocated seems to be neither here nor there. Imagine we make what appears to be a weak rule: that the user is only allowed to bind a context C to a group G in some process P if P has allocated C and P is a member of G. This cannot provide the security your desire. 
If P is also a member of some group H then P could allocate C in H and then bind C to G while the other members of H bind C to H. Imagine we make what appears to be a strong rule: that the user is only allowed to bind a context C to a group G in some process P if P has allocated C within the G (by implication P is a member of G). This makes it pointless to expose context since the only thing the user can do is bind it to the group used to allocate it. Further, if an implementation was to check that the user obeyed the rule, then it would have to remember the association between C and G so they are already bound. So I came to the conclusion that the security issue which concerns you arises directly from the exposed context. It allows the programmer to write programs which work as well as programs which do not work --- just a wee bit like C and Fortran on that score. Seriously though there is a philosophical issue whether user interfaces should or should not permit the user to write incorrect programs. I sympathise with the enforcement approach but generally prefer the permissive approach if there is a reason for it. I gather that you do not see enough of a reason for MPI to take the permissive approach in respect of context. I see this as being so that we can set up inter-group communication within the same framework as used to set up intra-group communication. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Jun 30 14:28:05 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07736; Wed, 30 Jun 93 14:28:05 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA18813; Wed, 30 Jun 93 14:27:11 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 14:27:10 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA18804; Wed, 30 Jun 93 14:27:07 -0400 Date: Wed, 30 Jun 93 19:28:14 BST Message-Id: <12605.9306301828@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context: back-translation avoided To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Dear Jim How we avoid the back-translation which you discussed, is as follows... We tended to talk so far of this communicator having two basic fields, CONTEXT and GROUP. I guess we always thought that the SOURCE field of the message envelope would be the rank of the sender in the group of the communicator. It might always have been an optimisation to store this number as a field in the communicator rather than the group. Let us make SOURCE a basic field of the communicator, along with CONTEXT and GROUP. The SOURCE field of the communicator is copied into the SOURCE field of the message envelope when the communicator is used for send. Examples: I just give a couple of examples for intra-group point-to-point, (intra-group) collective, and inter-group (point-to-point) communications. To do point-to-point intra-group communication. Process P is a member of group G and has a communicator Cp which binds a CONTEXT = C, GROUP = G, SOURCE = RANKp(G). (here I use RANKx(G) as some notation to mean the rank of process X in group G). 
Process Q is a member of F and has a communicator Cq which binds CONTEXT = C, GROUP = G, source = RANKq(G). P can send to Q the notation {Cp,RANKq(G)} and Q can receive from P with the notation {Cq,RANKp(G)}, and vica versa. I have not mentioned that the context C must be unique in P and Q -- meaning that neither P nor Q bind C to a different group in a different communicator and use that communicator for receive. This is the security issue which concerns you, which is a different issue. I will just assume sensible use of contexts. To do collective (intra-group) communication. For all processes P in group G, P has a communicator P which binds CONTEXT = C, GROUP = G, SOURCE = RANKp(G). The group G can perform collective communication with the notation {Cp}. To do (point-to-point) inter-group communication. Consider process P a member of group G and process Q a member of group H. Process P has a communicator Cp which binds CONTEXT = C, GROUP = H, SOURCE = RANKp(G). Process Q has a communicator Cq which binds CONTEXT = C, GROUP = G, SOURCE = RANKq(H). P can send to Q using the notation {Cp,RANKq(H)} and Q can receive from P using the notation {Cq,RANKp(G)}. (Aside: If we do not allocate a context which is "unique" in both P and Q then we can use two communicators, one containing a context unique in P which P uses for receive and Q uses for send, and the other containing a context unique in Q which Q uses for receive and P uses for send. In this case the value of the SOURCE field in the communicators used for receive is irrelevant.) To do point-to-point inter-group communication in weaker senses. If a process P is not a member of a group G then RANKp(G) is some constant say '-1'. Let a process perform a receive with the notation {comm, '-1'} which means any process not in GROUP(comm). Consider process P a member of group G and process Q a member of group H. Process P has a communicator Cp which binds CONTEXT = C, GROUP = H, SOURCE = '-1'. Process Q has a communicator Cq which binds CONTEXT = C, GROUP = ?, SOURCE = ? (the "?" means that the detail is irrelevant in this example). P can send to Q using the notation {Cp,RANKq(H)} and Q can receive from P using the notation {Cq, '-1'}. Rule: This is a rule for the SOURCE field which avoids the implementation having to do the expensive back-translation. It ensures that the source field of the envelope is immediately meaningful at the receiver as either a rank in the group bound with the context or '-1' meaning that the sender is not in the group. "It is erroneous for a process P to use a communicator C to send to a process Q unless either the source field of C is the rank of P within the group which Q binds with C or the source field of C is '-1'." You will notice that this is not an easy rule to enforce, i.e. it costs in the implementation of communication if you do choose to enforce it. Did I go wrong somewhere? 
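One compact way to restate the scheme above in code (again purely illustrative C with invented names, not draft syntax): the communicator carries a pre-computed SOURCE value, the send side copies it into the envelope, and the receive side compares envelope fields directly, so no translation from an absolute process identifier back to a group rank is ever required.

/* Toy model of the communicator and envelope described above. */
typedef struct {
    int context;        /* CONTEXT: the message universe                    */
    int group;          /* GROUP: handle of the group that ranks refer to   */
    int source;         /* SOURCE: my rank in that group, or -1 when the    */
                        /* sender is not a member (the weaker sense above)  */
} toy_comm;

typedef struct {
    int context;
    int source;         /* copied verbatim from the sender's communicator   */
    int tag;
} toy_envelope;

/* Sending: the envelope SOURCE comes straight from the communicator,     */
/* so the receiver never has to back-translate an absolute identifier.    */
static toy_envelope make_envelope(const toy_comm *c, int tag)
{
    toy_envelope e;
    e.context = c->context;
    e.source  = c->source;
    e.tag     = tag;
    return e;
}

/* Receiving with {comm, rank}: match on context and on rank, where rank  */
/* may be -1 to mean "any process not in GROUP(comm)" -- such senders     */
/* carry -1 in their envelope SOURCE, so plain equality is enough.        */
static int envelope_matches(const toy_comm *c, int rank, toy_envelope e)
{
    return e.context == c->context && e.source == rank;
}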
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Jun 30 14:31:11 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07759; Wed, 30 Jun 93 14:31:11 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19066; Wed, 30 Jun 93 14:30:11 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 14:30:10 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from wk107.nas.nasa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19052; Wed, 30 Jun 93 14:30:08 -0400 Received: by wk107.nas.nasa.gov (5.67-NAS-1.1/5.67-NAS-1.1(SGI)) id AA24927; Wed, 30 Jun 93 11:31:12 -0700 Date: Wed, 30 Jun 93 11:31:12 -0700 From: dcheng@nas.nasa.gov (Doreen Y. Cheng) Message-Id: <9306301831.AA24927@wk107.nas.nasa.gov> To: mpi-context@cs.utk.edu Subject: Flatten/Unflattern Would anyone in this group please tell me why we need these calls and what are their definitions (functions)? MPI_GROUP_FLATTEN (group, max_length, buffer, actual_length) MPI_GROUP_UNFLATTEN (max_length, buffer, group) MPI_COMM_FLATTEN (comm, max_length, buffer, actual_length) MPI_COMM_UNFLATTEN (max_length, buffer, comm) Thanks! Doreen From owner-mpi-context@CS.UTK.EDU Wed Jun 30 14:36:54 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07785; Wed, 30 Jun 93 14:36:54 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19709; Wed, 30 Jun 93 14:35:55 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 14:35:54 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA19699; Wed, 30 Jun 93 14:35:52 -0400 Date: Wed, 30 Jun 93 19:36:59 BST Message-Id: <12620.9306301836@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context; two typos corrected To: lyndon@epcc.ed.ac.uk, mpi-context@cs.utk.edu In-Reply-To: L J Clarke's message of Wed, 30 Jun 93 19:28:14 BST Reply-To: lyndon@epcc.ed.ac.uk "Subject: Re: mpi-context: back-translation avoided" two typos corrected. The line > member of F and has a communicator Cq which binds CONTEXT = C, GROUP = should read > member of G and has a communicator Cq which binds CONTEXT = C, GROUP = oops. The line > For all processes P in group G, P has a communicator P which binds should read > For all processes P in group G, P has a communicator Cp which binds oops**2. 
Sorry for the errors Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Jun 30 14:43:17 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07796; Wed, 30 Jun 93 14:43:17 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20348; Wed, 30 Jun 93 14:42:52 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 14:42:51 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20340; Wed, 30 Jun 93 14:42:49 -0400 Date: Wed, 30 Jun 93 19:43:57 BST Message-Id: <12632.9306301843@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Flatten/Unflattern To: dcheng@nas.nasa.gov (Doreen Y. Cheng), mpi-context@cs.utk.edu In-Reply-To: Doreen Y. Cheng's message of Wed, 30 Jun 93 11:31:12 -0700 Reply-To: lyndon@epcc.ed.ac.uk These calls are intended to be used to allow processes in different groups to establish communications with one another. The MPI_COMM_... operations in particular are used to implement the name service. The ..._FLATTEN operations write a transmittable description of the object (group or communicator) into a memory buffer. The ..._UNFLATTEN operations read such a description, creating a (possibly approximate) copy of the object. (The operations are somewhat analogous to passivation and activation in SmallTalk and Objective C.) Hope this helped! Best Wishes Lyndon
>
> Would anyone in this group please tell me why we need these calls and > what are their definitions (functions)?
>
> MPI_GROUP_FLATTEN (group, max_length, buffer, actual_length)
>
> MPI_GROUP_UNFLATTEN (max_length, buffer, group)
>
> MPI_COMM_FLATTEN (comm, max_length, buffer, actual_length)
>
> MPI_COMM_UNFLATTEN (max_length, buffer, comm)
>
>
> Thanks!
>
> Doreen
>
>
/--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Jun 30 14:49:24 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07823; Wed, 30 Jun 93 14:49:24 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20697; Wed, 30 Jun 93 14:48:50 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 14:48:49 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20687; Wed, 30 Jun 93 14:48:47 -0400 Date: Wed, 30 Jun 93 19:49:53 BST Message-Id: <12644.9306301849@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context; expense of name service To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Hi Jim You mentioned that the publish/subscribe operations - i.e. the name service for communicators - are expensive. I don't think they are that expensive really. Observe that mpi_publish and mpi_subscribe are defined such that when two groups wish to perform a transaction through the name service just one process does the publish, and since mpi_publish is collective just one process needs to access the server (and then broadcast the result).
Inference is that name server access is infrequent and you dont have to worry about the possibility that all processes hit the server at about the same time. Of course the user could write a program which kept hitting the server and this would run slow but this is the fault of the user provided the performance issues are properly documented. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Jun 30 14:52:32 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07873; Wed, 30 Jun 93 14:52:32 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21015; Wed, 30 Jun 93 14:52:09 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 14:52:08 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21006; Wed, 30 Jun 93 14:52:06 -0400 Via: uk.ac.southampton.ecs; Wed, 30 Jun 1993 19:53:03 +0100 Via: brewery.ecs.soton.ac.uk; Wed, 30 Jun 93 19:44:48 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Wed, 30 Jun 93 19:54:36 BST Date: Wed, 30 Jun 93 19:54:39 BST Message-Id: <1332.9306301854@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu Subject: Re: context proposals > From: walker@gov.ornl.epm.rios2 (David Walker) > I assume you're performing intergroup communication in which the > rank is wildcarded on the receiver, i.e., the receiver will receive > from any process in the other group. I don't see the problem here. > The actual rank (relative to the send group) of the sender is part of the > message envelope (according to my interpretation of Section 1.4.2). > When calling MPI_QUERY to query the return status object, the rank > returned is relative to the send group. Maybe I'm misunderstanding > what you mean. At last week's meeting, on the Thursday afternoon, various members of the context subcommittee attempted to give us "the big picture", since it had been said by someone that what the rest of the committee needed was to understand the mind-set of the context subcommittee. Mark Sears gave us a brief look "under the hood" of an example implementation. In the scheme he presented, the message envelope (header) consisted of source TID, destination TID, tag, and context. No ranks. 
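For comparison with the rank-carrying SOURCE field discussed elsewhere in this thread, the envelope Ian describes would look roughly like this (an illustrative declaration only; the field names are mine, not from the slides):

/* Rough shape of the envelope in the example implementation described   */
/* above: absolute task identifiers plus tag and context -- no ranks.    */
/* With this layout, a receiver that wants a group rank back from a      */
/* wild-carded receive has to translate source_tid into a rank itself.   */
typedef struct {
    int source_tid;     /* absolute identifier of the sending process   */
    int dest_tid;       /* absolute identifier of the receiving process */
    int tag;
    int context;
} tid_envelope;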
Ian -- I.Glendinning@ecs.soton.ac.uk Ian Glendinning Tel: +44 703 593368 Dept of Electronics and Computer Science Fax: +44 703 593045 University of Southampton SO9 5NH England From owner-mpi-context@CS.UTK.EDU Wed Jun 30 18:32:51 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA10996; Wed, 30 Jun 93 18:32:51 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06733; Wed, 30 Jun 93 18:32:26 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 18:32:25 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06725; Wed, 30 Jun 93 18:32:24 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA24176; Wed, 30 Jun 93 17:33:28 CDT Date: Wed, 30 Jun 93 17:33:28 CDT From: Tony Skjellum Message-Id: <9306302233.AA24176@Aurora.CS.MsState.Edu> To: dcheng@nas.nasa.gov Subject: Re: Flatten/Unflattern Cc: mpi-context@cs.utk.edu These take opaque objects and make them transmittable from one process to another. Flatten converts to shippable form, unflatten reconstructs the objects. This is discussed in the CONTEXT drafts from last week. ----- Begin Included Message ----- From owner-mpi-context@CS.UTK.EDU Wed Jun 30 13:32:13 1993 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 30 Jun 1993 14:30:10 EDT Date: Wed, 30 Jun 93 11:31:12 -0700 From: dcheng@nas.nasa.gov (Doreen Y. Cheng) To: mpi-context@cs.utk.edu Subject: Flatten/Unflattern Content-Length: 347 Would anyone in this group please tell me why we need these calls and what are their definitions (functions)? MPI_GROUP_FLATTEN (group, max_length, buffer, actual_length) MPI_GROUP_UNFLATTEN (max_length, buffer, group) MPI_COMM_FLATTEN (comm, max_length, buffer, actual_length) MPI_COMM_UNFLATTEN (max_length, buffer, comm) Thanks! Doreen ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Thu Jul 1 06:41:42 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA13329; Thu, 1 Jul 93 06:41:42 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26397; Thu, 1 Jul 93 06:41:13 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 1 Jul 1993 06:41:12 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA26380; Thu, 1 Jul 93 06:41:06 -0400 Received: from tycho.co.uk (tycho.meiko.co.uk) by hub.meiko.co.uk with SMTP id AA02500 (5.65c/IDA-1.4.4 for mpi-context@cs.utk.edu); Thu, 1 Jul 1993 11:42:14 +0100 Date: Thu, 1 Jul 1993 11:42:14 +0100 From: James Cownie Message-Id: <199307011042.AA02500@hub.meiko.co.uk> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA00797; Thu, 1 Jul 93 11:40:40 BST To: mpi-context@cs.utk.edu In-Reply-To: <12526.9306301717@subnode.epcc.ed.ac.uk> (message from L J Clarke on Wed, 30 Jun 93 18:17:36 BST) Subject: Re: mpi-context: security issue Content-Length: 2671 > So I came to the conclusion that the security issue which concerns you > arises directly from the exposed context. I agree this is where the problem arises. As soon as the user can manage contexts they don't give us security. > It allows the programmer to write programs which work as well as > programs which do not work --- just a wee bit like C and Fortran on > that score. Seriously though there is a philosophical issue whether > user interfaces should or should not permit the user to write > incorrect programs. 
I sympathise with the enforcement approach but > generally prefer the permissive approach if there is a reason for it. > I gather that you do not see enough of a reason for MPI to take the > permissive approach in respect of context. Exactly. I have no objection to languages which let you do what you want (after all I use C !), and prefer a sharp kitchen knife to cut things with. However this doesn't imply that I want to use a sharp knife to eat my meals with. Tools need to be designed with enough power to handle the application they're intended for. I think we're removing all of the safety interlocks from a dangerous tool so that we can achieve an objective that may be achievable, without doing this, by providing a specific contained technique. It's like the person who hacked a command like "talk" into an operating system by adding a system call to let any user write to arbitrary parts of another process' address space. Sure it works, but it's more than is needed and has other large implications. > I see this as being so > that we can set up inter-group communication within the same framework > as used to set up intra-group communication. Maybe, but since I need to do so many different things anyway to set up the two different classes of communicator, why not be explicit about it, and preserve some safety too. > How we avoid the back-translation which you discussed, is as follows... Yup this works fine for the sort of communicators I'd like us to be able to construct. It falls over as soon as people exploit the ability to bind the same context to different groups. Since that is allowed in the current draft this solution is incompatible with the current draft. As you know by now I'd like to ENFORCE the (reasonable) limitations by construction, in which case this solution is fine. (I'm still waiting for the collective example though !) -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Thu Jul 1 07:05:13 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA13406; Thu, 1 Jul 93 07:05:13 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28088; Thu, 1 Jul 93 07:04:51 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 1 Jul 1993 07:04:50 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28080; Thu, 1 Jul 93 07:04:46 -0400 Date: Thu, 1 Jul 93 12:05:41 BST Message-Id: <13529.9307011105@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: mpi-context: security issue To: James Cownie , mpi-context@cs.utk.edu In-Reply-To: James Cownie's message of Thu, 1 Jul 1993 11:42:14 +0100 Reply-To: lyndon@epcc.ed.ac.uk Hi Jim another day - another mpi-context email session :-) So we agree on one thing, at least, which is that the exposure of context leads to a security issue which concerns you. I am sure that we can write a draft which addresses this security concern for the case of intra-group communication and does not omit inter-group communication. Is there sufficient demand for the draft to have this property? I can see that you would like this, Jim. What about the rest of the MPI committee? PLEASE ADVISE. > > How we avoid the back-translation which you discussed, is as follows...
> Yup this works fine for the sort of communicators I'd like us to be > able to construct. It falls over as soon as people exploit the ability > to bind the same context to different groups. Since that is allowed in > the current draft this solution is incompatible with the current > draft. I guess the draft allows people to do that but the rule would imply that they did not use more than one such binding. In fact we could allow the multiple binding but the rule would be perversely complicated to state. Hmm, you're making a comment on the draft, which is fair enough. Personally, I don't see much value in being able to have the multiple bindings, so we agree somewhere around about here as well, as it happens. > As you know by now I'd like to ENFORCE the (reasonable) limitations by > construction, in which case this solution is fine. Again I am sure that we can write a draft which has this property, if this is what the majority of MPI would like to see. PLEASE ADVISE. > > (I'm still waiting for the collective example though !) > Yup, this is tricky. I am still thinking about it, and going around in circles. I see two possible different requirements:
1. A desire to allow the programmer to have MPI collective communications work in any group-context with the user allowed to have point-to-point communications dangling over the collective call.
2. A desire to allow the programmer to write libraries which enjoy the same protection from the user as the MPI collective communications.
Point 1 seems probably not too difficult to manage if we recognise that the collective communication "library" is privileged in being part of MPI, whereas the user library is not so privileged. Point 2 does seem more difficult. I think your comments have been valuable in a sort of sanity-check sense. Perhaps you would like to make a suggestion regarding the collective communications. I, for one, would very much appreciate that. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Jul 1 13:39:31 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA15819; Thu, 1 Jul 93 13:39:31 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28162; Thu, 1 Jul 93 13:38:49 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 1 Jul 1993 13:38:48 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from msr.EPM.ORNL.GOV by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28144; Thu, 1 Jul 93 13:38:46 -0400 Received: by msr.EPM.ORNL.GOV (4.1/1.34) id AA09272; Thu, 1 Jul 93 13:39:58 EDT Date: Thu, 1 Jul 93 13:39:58 EDT From: geist@msr.EPM.ORNL.GOV (Al Geist) Message-Id: <9307011739.AA09272@msr.EPM.ORNL.GOV> To: mpi-context@cs.utk.edu Subject: Re: mpi-context: security issue >I am sure that we can write a draft which has this property, if >this is what the majority of MPI would like to see. PLEASE ADVISE. I vote with Jim's suggestions.
Al Geist From owner-mpi-context@CS.UTK.EDU Thu Jul 1 14:14:13 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA15973; Thu, 1 Jul 93 14:14:13 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00890; Thu, 1 Jul 93 14:13:32 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 1 Jul 1993 14:13:31 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00882; Thu, 1 Jul 93 14:13:28 -0400 Via: uk.ac.southampton.ecs; Thu, 1 Jul 1993 19:14:35 +0100 Via: brewery.ecs.soton.ac.uk; Thu, 1 Jul 93 19:06:27 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Thu, 1 Jul 93 19:16:16 BST Date: Thu, 1 Jul 93 19:16:19 BST Message-Id: <2097.9307011816@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu Subject: Re: mpi-context: back-translation avoided > From: L J Clarke > How we avoid the back-translation which you discussed, is as follows... > > We tended to talk so far of this communicator having two basic fields, > CONTEXT and GROUP. I guess we always thought that the SOURCE field of > the message envelope would be the rank of the sender in the group of the > communicator. It might always have been an optimisation to store this > number as a field in the communicator rather than the group. Let us > make SOURCE a basic field of the communicator, along with CONTEXT and > GROUP. The SOURCE field of the communicator is copied into the SOURCE > field of the message envelope when the communicator is used for send. > > Examples: [Examples deleted] When I first read this I found it incredibly confusing. Just when I'd got used to communicators with only GROUP and CONTEXT fields... However, I think I now understand what you mean, and would just like to check that I've got hold of the right end of the stick. As far as I can tell, what you're actually saying is that what is passed in the SOURCE field of the message *envelope* should be the rank of the sender, rather than its TID, as in the example implementation of Mark Sears that I posted the other day. Is that right? I was confused by the fact that you dwelled on the optimisation of caching the SOURCE rank in the communicator. > To do point-to-point inter-group communication in weaker senses. > > If a process P is not a member of a group G then RANKp(G) is some > constant say '-1'. Let a process perform a receive with the notation > {comm, '-1'} which means any process not in GROUP(comm). And here the SOURCE field in the message is set to -1. Right? Ian From owner-mpi-context@CS.UTK.EDU Thu Jul 1 14:22:46 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA15992; Thu, 1 Jul 93 14:22:46 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01380; Thu, 1 Jul 93 14:21:37 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 1 Jul 1993 14:21:32 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA01369; Thu, 1 Jul 93 14:21:29 -0400 Date: Thu, 1 Jul 93 19:22:38 BST Message-Id: <14049.9307011822@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: mpi-context: back-translation avoided To: Ian Glendinning , mpi-context@cs.utk.edu In-Reply-To: Ian Glendinning's message of Thu, 1 Jul 93 19:16:19 BST Reply-To: lyndon@epcc.ed.ac.uk > > When I first read this I found it incredibly confusing. 
Just when I'd got > used to communicators with only GROUP and CONTEXT fields... However, I think > I now understand what you mean, and would just like to check that I've got > hold of the right end of the stick. As far as I can tell, what you're > actually saying is that what is passed in the SOURCE field of the message > *envelope* should be the rank of the sender, rather than its TID, as in the > example implementation of Mark Sears that I posted the other day. Is that > right? yes - the rank that is stored in the SOURCE field of the communicator > > If a process P is not a member of a group G then RANKp(G) is some > > constant say '-1'. Let a process perform a receive with the notation > > {comm, '-1'} which means any process not in GROUP(comm). > > And here the SOURCE field in the message is set to -1. Right? yes - because it is '-1' in the SOURCE field of the communicator. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Thu Jul 1 15:07:50 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA16124; Thu, 1 Jul 93 15:07:50 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04597; Thu, 1 Jul 93 15:06:13 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 1 Jul 1993 15:06:12 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04586; Thu, 1 Jul 93 15:06:10 -0400 Via: uk.ac.southampton.ecs; Thu, 1 Jul 1993 20:05:06 +0100 Via: brewery.ecs.soton.ac.uk; Thu, 1 Jul 93 19:56:51 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Thu, 1 Jul 93 20:06:39 BST Date: Thu, 1 Jul 93 20:06:42 BST Message-Id: <2121.9307011906@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu Subject: Why non-opaque contexts? Hi, I was just pondering on why contexts have been specified as being integers rather than being opaque objects (that can be represented as integers). Can someone enlighten me please? As far as I can see, none of the operations you can perform on them require you to know that they're integers, so why expose that fact? Perhaps it is so that people can do sneaky things with them and write their own allocators, in conjunction with MPI_CONTEXTS_RESERVE, but I'm not really sure what sort of things people would want to do. Is that the reason, and if so can someone give me a couple of examples of what people might want to do? 
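One way of reading the "write their own allocators" speculation above (nothing here is from the draft; it is just a sketch of what exposing plain integers would make possible): once a block of consecutive context values has been reserved, a program can parcel them out with ordinary arithmetic and no further calls into the message-passing system.

/* Hypothetical local pool built over a reserved block of integer        */
/* contexts [base, base+count).  Purely illustrative.                    */
typedef struct {
    int base;           /* first context in the reserved block  */
    int count;          /* how many contexts were reserved      */
    int next;           /* how many have been handed out so far */
} context_pool;

static int pool_alloc(context_pool *p)
{
    if (p->next >= p->count)
        return -1;                  /* pool exhausted              */
    return p->base + p->next++;     /* plain integer arithmetic    */
}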
Ian -- I.Glendinning@ecs.soton.ac.uk Ian Glendinning Tel: +44 703 593368 Dept of Electronics and Computer Science Fax: +44 703 593045 University of Southampton SO9 5NH England From owner-mpi-context@CS.UTK.EDU Tue Jul 6 20:07:31 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA10291; Tue, 6 Jul 93 20:07:31 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28583; Tue, 6 Jul 93 20:07:53 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 6 Jul 1993 20:07:52 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28575; Tue, 6 Jul 93 20:07:51 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA05469; Tue, 6 Jul 93 19:07:45 CDT Date: Tue, 6 Jul 93 19:07:45 CDT From: Tony Skjellum Message-Id: <9307070007.AA05469@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: context examples, etc. Please send me your address if you would like a photocopy of examples from meeting. Else, I will type them in Friday. - Tony From owner-mpi-context@CS.UTK.EDU Wed Jul 7 19:00:31 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA16429; Wed, 7 Jul 93 19:00:31 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04179; Wed, 7 Jul 93 19:00:55 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 7 Jul 1993 19:00:53 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04161; Wed, 7 Jul 93 19:00:50 -0400 Received: by gw1.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA12187; Wed, 7 Jul 93 23:00:45 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA02283; Wed, 7 Jul 93 16:59:01 MDT Date: Wed, 7 Jul 93 16:59:01 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307072259.AA02283@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu Subject: Example attempt... Hi all, At the last meeting I promised to try and find some "more real" program examples and "re-write" them using MPI contexts. Here's one... It is the first application I thought of when I first heard the "group" concept. I thought it would be "real simple" but it's gotten a bit long. At the very least I'm learning a lot about the current draft! :-) ISAR EXAMPLE This example is taken from an old ISAR (Inverse Synthetic Aperture Radar) translational motion compensation and display program that I originally wrote in Occam long, long ago. The program converts raw radar data into a human- readable sequence of animated images and displays them in "real time" on a monitor like this: +--------+ +------------+ +---------+ raw | | | | | | radar ---> | SEARCH | ---> | COMPENSATE | ---> | DISPLAY | ---> monitor data | | | (FFT's) | | | +--------+ +------------+ +---------+ The program divides naturally into three tasks: search, compensate, and display. The search task uses an iterative algorithm to determine the current motion of the target relative to the radar set (distance, velocity, and acceleration). The compensation task modifies phase in the raw data according to the current motion and uses FFT's to convert from frequency space to "image" space. The display task does various scaling, thresholding, centering, etc. and displays the resulting image on a monitor. Each time new radar data arrives, a new image is generated. 
The search task must pass current raw radar data and current motion parameters to the compensation task. The compensation task must pass the raw image to the display task. There is no feedback loop in the program, this is a pure pipeline. To be scalable, the program must be able to choose how many processes are allocated to each task at run time. The easiest way to do this is to assign a separate sub-group to each task. The sub-groups can then be populated at run time in some "sensible" way to achieve load balance. MY ATTEMPT AT AN MPI VERSION The "pseudo code" below is an attempt to set up the sub-groups and get them communicating. Processing details are mostly left hidden inside functions. I've left a few details exposed so the "flavor" of the original code is still there. Some details: 1) I have avoided using publish/subscribe because we have already seen an example of this in the last meeting. Actually, this code is simpler if MPI_COMM_PUBLISH_SUBSCRIBE() is used. If there is any interest, I'll send out the version that uses publish/subscribe later. 2) Communication in the pipeline is as simple as possible. I have not used double or triple buffering to improve performance. If I figure the "simple" example out, I'll add optimizations. 3) The compensation sub-group calls routines from a third-party FFT library. I put this detail in because I'm not quite sure how it all works yet. For now, I'm assuming that this library can behave like the MPI collective communication calls and that it requires an initialization call (INIT_FFT_LIB() in the code below). 4) When I have had a choice of MPI functions to use, I have attempted to use the "highest level" function. (It helps to look at the code when reading the following description...) The code starts with the usual #includes ( :-) ) and #defines. The include file for the FFT library is fftLib.h. Three keys are #defined for sub-group creation using MPI_COLL_SUBGROUP(). MAX_FLAT_GROUP_SIZE is a number pulled out of thin air as maximum size (in bytes) of a "flattened" group. Two structures are typedef'ed. The isarParameters structure contains all the information required by the compensation sub-group to compensate raw radar data. The most interesting fields are the motion fields (r, v, a). These are filled in for each image by the search sub-group. A decompParameters structure contains information about how data is decomposed in a sub-group. It is used to determine how data is re-organized when arrays are passed from one sub-group to the next. After MPI_INIT(), INIT_FFT_LIB(), etc., function getGroupSizes() is called to determine how many processes will be assigned to each sub-group. This function also sets the membership key for each process. MPI_COLL_SUBGROUP() is then called using the membership key to create all three sub-groups. Every process is then made aware of each sub-group. Process 0 in each sub-group sends a flattened group description to process 0 in the "ALL" group. The flattened groups are then broadcast to all processes and unflattened. Communicators for each sub-group are then created using MPI_COMM_MAKE(). Intergroup communicators for each of the communicating subgroups are then created using MPI_COMM_MERGE(). Decomposition parameters are then set up for each sub-group using function makeDecompParams(). (MPI topology functions would probably live inside makeDecompParams().) After "other" set up stuff, the main loops for each sub-group begin. 
In the main loop for the search sub-group, function addNewData() gets new raw radar data and stores it in array radarData. Function findNewMotion() determines target motion and updates the (r, v, a) fields of isarParams. Function sendCurrentData() sends radarData to the compensate sub-group (with appropriate re-organization according to data decompositions described by searchDecomp and compensateDecomp). Finally, function sendMotion broadcasts the (r, v, a) fields of isarParams to every process in the compensate sub-group. In the main loop for the compensate sub-group, function recvCurrentData() gets updated raw radar data from the search sub-group. Function recvMotion() then gets the current (r, v, a) fields of isarParams from the search sub-group. Function compensateData() uses (r, v, a) to apply phase corrections to the radar data. It then calls the FFT routines from the third-party library to create the raw image. Function sendCurrentData() passes the raw image to the display sub-group. In the main loop for the display sub-group, function recvCurrentData() gets the raw image from the compensate sub-group. Function scaleThresholdCenter() produces the human-readable image. Function displayImage stuffs the image into the monitor. SOME QUESTIONS 1) Am I doing the context/sub-group creation in the "best" way (given that I did not use publish/subscribe)? Is this even correct? 2) Assume that the third-party FFT library makes collective communication calls. Will it work as written? Do I need to pass any "extra" contexts or communicators into compensateData() to make things work? Do I really need INIT_FFT_LIB()? 3) Does MPI_COMM_MERGE() create new contexts? If not, is it possible to get confused between communications that use (for example) compensateComm and sendComm? This question comes up when I want to add the double/triple buffering. Please fire away... If there is any interest, I'll pass on the publish/subscribe version and maybe even work on a double/triple buffering version... I hope I don't find too many stupid mistakes when I read this tomorrow! :-) Tom Henderson NOAA Forecast Systems Laboratory ------------------------------ BEGIN "PSEUDO" CODE --------------------------- /* #includes */ #include /* :-) */ #include /* For FFT library. */ ... /* #defines */ /* Membership keys for sub-group creation. */ #define SEARCH_GROUP_KEY 0 #define COMPENSATE_GROUP_KEY 1 #define DISPLAY_GROUP_KEY 2 /* Buffer size for flattened groups. */ #define MAX_FLAT_GROUP_SIZE 4096 /* whatever... */ ... 
/* typedefs */
typedef struct isarParameters /* ISAR parameters */ {
  int num_bursts;   /* number of bursts (rows) in each ISAR frame */
  int num_pulses;   /* number of pulses (columns) in each ISAR frame */
  float r;          /* target range: m */
  float v;          /* target velocity: m/s */
  float a;          /* target acceleration: m/s^2 */
  float prf;        /* pulse repetition rate: Hz */
  float deltaf;     /* frequency step size: Hz */
  float f0;         /* start frequency: Hz */
} isarParameters;

typedef struct decompParameters /* data decomposition parameters */ {
  int numRows;      /* number of rows in 2D array */
  int numCols;      /* number of columns in 2D array */
  int myNumRows;    /* number of rows in my 2D "chunk" */
  int myNumCols;    /* number of columns in my 2D "chunk" */
  int myStartRow;   /* start row of my 2D "chunk" */
  int myStartCol;   /* start column of my 2D "chunk" */
  int numProcRows;  /* number of logical rows of processes */
  int numProcCols;  /* number of logical columns of processes */
  int myProcRow;    /* my logical process row */
  int myProcCol;    /* my logical process column */
} decompParameters;

main() {
  group searchGroup, compensateGroup, displayGroup, myGroup;
  comm searchComm, compensateComm, displayComm, sendComm, recvComm, tmpComm;
  float **radarData;
  char **image;
  isarParameters isarParams;
  decompParameters searchDecomp, compensateDecomp, displayDecomp;
  int membershipKey, myRank, myRankInAll, length;
  int allGroupSize, searchGroupSize, compensateGroupSize, displayGroupSize;
  int searchDone, compensateDone, displayDone;   /* main-loop exit flags */
  char myFlatGroup[MAX_FLAT_GROUP_SIZE],
       searchFlatGroup[MAX_FLAT_GROUP_SIZE],
       compensateFlatGroup[MAX_FLAT_GROUP_SIZE],
       displayFlatGroup[MAX_FLAT_GROUP_SIZE];
  comm_handle searchHandle, compensateHandle, displayHandle;
  return_handle status;

  /* Generic setup. */
  MPI_INIT();
  MPI_COMM_SIZE(MPI_COMM_ALL, &allGroupSize);
  MPI_COMM_RANK(MPI_COMM_ALL, &myRankInAll);

  /* Initialize library called inside function compensateData(). */
  INIT_FFT_LIB(...);

  /* Other setup stuff (malloc() 2D arrays, etc...) */
  ...

  /* Build sub-groups. */
  getGroupSizes(allGroupSize, &searchGroupSize, &compensateGroupSize,
                &displayGroupSize, &membershipKey, ...);
  MPI_COLL_SUBGROUP(MPI_COMM_ALL, membershipKey, myGroup);

  /* Process 0 in each sub-group sends flattened group description to process */
  /* 0 in the "ALL" group.  Flattened groups are then broadcast to all        */
  /* processes and unflattened.  Beware that process 0 in the "ALL" group may */
  /* also be process 0 in a sub-group.                                        */

  /* Process 0 in "ALL" posts receives for flattened groups. */
  if (myRankInAll == 0) {
    MPI_IRECVC(searchHandle, searchFlatGroup, MAX_FLAT_GROUP_SIZE, MPI_BYTE,
               MPI_DONTCARE, SEARCH_GROUP_KEY, MPI_COMM_ALL);
    MPI_IRECVC(compensateHandle, compensateFlatGroup, MAX_FLAT_GROUP_SIZE, MPI_BYTE,
               MPI_DONTCARE, COMPENSATE_GROUP_KEY, MPI_COMM_ALL);
    MPI_IRECVC(displayHandle, displayFlatGroup, MAX_FLAT_GROUP_SIZE, MPI_BYTE,
               MPI_DONTCARE, DISPLAY_GROUP_KEY, MPI_COMM_ALL);
  }

  /* Process 0 in sub-group flattens group and sends to process 0 in */
  /* "ALL". */
  MPI_GROUP_RANK(myGroup, &myRank);
  if (myRank == 0) {
    MPI_GROUP_FLATTEN(myGroup, MAX_FLAT_GROUP_SIZE, myFlatGroup, &length);
    if (length > MAX_FLAT_GROUP_SIZE) {
      handleError(...);
    }
    MPI_SENDC(myFlatGroup, MAX_FLAT_GROUP_SIZE, MPI_BYTE, 0, membershipKey,
              MPI_COMM_ALL);
  }

  /* Process 0 in "ALL" waits for completion of receives and then */
  /* broadcasts flat groups to all processes.                     */
  if (myRankInAll == 0) {
    MPI_WAIT(searchHandle, status);
    MPI_WAIT(compensateHandle, status);
    MPI_WAIT(displayHandle, status);
  }
  MPI_BCASTC(searchFlatGroup, MAX_FLAT_GROUP_SIZE, MPI_BYTE, MPI_COMM_ALL, 0);
  MPI_BCASTC(compensateFlatGroup, MAX_FLAT_GROUP_SIZE, MPI_BYTE, MPI_COMM_ALL, 0);
  MPI_BCASTC(displayFlatGroup, MAX_FLAT_GROUP_SIZE, MPI_BYTE, MPI_COMM_ALL, 0);

  /* Finally, unflatten all group descriptions. */
  MPI_GROUP_UNFLATTEN(MAX_FLAT_GROUP_SIZE, searchFlatGroup, searchGroup);
  MPI_GROUP_UNFLATTEN(MAX_FLAT_GROUP_SIZE, compensateFlatGroup, compensateGroup);
  MPI_GROUP_UNFLATTEN(MAX_FLAT_GROUP_SIZE, displayFlatGroup, displayGroup);

  /* Build communicators. */
  MPI_COMM_MAKE(MPI_COMM_ALL, searchGroup, searchComm);
  MPI_COMM_MAKE(MPI_COMM_ALL, compensateGroup, compensateComm);
  MPI_COMM_MAKE(MPI_COMM_ALL, displayGroup, displayComm);

  /* Build intergroup communicators. */
  if (membershipKey == SEARCH_GROUP_KEY) {
    MPI_COMM_MERGE(searchComm, compensateComm, sendComm, tmpComm);
  } else if (membershipKey == COMPENSATE_GROUP_KEY) {
    MPI_COMM_MERGE(compensateComm, searchComm, tmpComm, recvComm);
    MPI_COMM_MERGE(compensateComm, displayComm, sendComm, tmpComm);
  } else if (membershipKey == DISPLAY_GROUP_KEY) {
    MPI_COMM_MERGE(displayComm, compensateComm, tmpComm, recvComm);
  } else {
    handleError(...);
  }
  MPI_COMM_FREE(tmpComm);

  /* Data decomposition set-up code for all groups... */
  makeDecompParams(searchGroupSize, searchDecomp, ...);
  makeDecompParams(compensateGroupSize, compensateDecomp, ...);
  makeDecompParams(displayGroupSize, displayDecomp, ...);

  /* Other stuff... */
  ...

  /* MAIN LOOPS for each sub-group... */

  /* MAIN LOOP for "search" group... */
  if (membershipKey == SEARCH_GROUP_KEY) {
    searchDone = 0;
    while (searchDone == 0) {
      ...
      addNewData(searchComm, radarData, searchDecomp, ...);
      findNewMotion(searchComm, radarData, isarParams, searchDecomp, ...);
      sendCurrentData(sendComm, radarData, searchDecomp, compensateDecomp);
      sendMotion(sendComm, isarParams);
      ...
    } /* end of searchGroup while */
  } /* end of searchGroup if */

  /* MAIN LOOP for "compensate" group... */
  else if (membershipKey == COMPENSATE_GROUP_KEY) {
    compensateDone = 0;
    while (compensateDone == 0) {
      ...
      recvCurrentData(recvComm, radarData, compensateDecomp, searchDecomp);
      recvMotion(recvComm, isarParams);
      compensateData(compensateComm, radarData, isarParams, compensateDecomp, ...);
      sendCurrentData(sendComm, radarData, compensateDecomp, displayDecomp);
      ...
    } /* end of compensateGroup while */
  } /* end of compensateGroup if */

  /* MAIN LOOP for "display" group... */
  else if (membershipKey == DISPLAY_GROUP_KEY) {
    displayDone = 0;
    while (displayDone == 0) {
      ...
      recvCurrentData(recvComm, radarData, displayDecomp, compensateDecomp);
      scaleThresholdCenter(displayComm, radarData, image, displayDecomp, ...);
      displayImage(displayComm, image, displayDecomp);
      ...
    } /* end of displayGroup while */
  } /* end of displayGroup if */
  else {
    handleError(...);
  }

  /* Cleanup... */
  ...
} /* end of main() */ From owner-mpi-context@CS.UTK.EDU Sat Jul 10 17:09:06 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA14367; Sat, 10 Jul 93 17:09:06 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10682; Sat, 10 Jul 93 17:09:46 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sat, 10 Jul 1993 17:09:44 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10674; Sat, 10 Jul 93 17:09:43 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA03244; Sat, 10 Jul 93 16:09:42 CDT Date: Sat, 10 Jul 93 16:09:42 CDT From: Tony Skjellum Message-Id: <9307102109.AA03244@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: discussion of process dynamicism, publish, subscribe, etc. Dear colleagues, Adam Greenberg and I have been discussing the issue of dynamicism in the context chapter (publish, subscribe, possibility of dynamic process management outside MPI, etc). Please comment... - Tony ------------------------------------------------- From moose@Think.COM Fri Jul 9 11:11:00 1993 From: Adam Greenberg Date: Fri, 9 Jul 93 12:10:49 EDT To: tony@Aurora.CS.MsState.Edu Subject: discussion Content-Length: 972 X-Lines: 19 Status: RO Date: Tue, 6 Jul 93 19:06:11 CDT From: Tony Skjellum I would like to discuss your concerns about inter-group facilities, and get you involved in the revised context proposal, prior to next meeting. Can you summarize worries, and how we can allay them reasonably, without giving up MPMD programming capabilities. I don't think you can allay my fears. I (as TMC) don't believe that MPI should contain mechanisms for coupling disparate programs together. I distinguish this notion from MPMD in which all of a user's programs which comprise an instance or invocation of the users `system' are known a priori and information on all members of the system is available at system start up. Intersystem communication is properly an OS issue. It has already been solved by Unix. Other solutions too, we believe, will require OS support if not implementation. These OS related requirements are beyond the scope of MPI. moose From moose@Think.COM Sat Jul 10 15:09:11 1993 From: Adam Greenberg Date: Sat, 10 Jul 93 16:09:01 EDT To: tony@Aurora.CS.MsState.Edu Subject: discussion Content-Length: 193 Status: RO X-Lines: 7 Date: Sat, 10 Jul 93 15:04:11 CDT From: Tony Skjellum What about dynamic process management for a single user? What do you believe this to entail? moose From moose@Think.COM Sat Jul 10 15:56:59 1993 From: Adam Greenberg Date: Sat, 10 Jul 93 16:56:53 EDT To: tony@Aurora.CS.MsState.Edu Subject: discussion Content-Length: 1374 Status: RO X-Lines: 30 Date: Sat, 10 Jul 93 15:13:58 CDT From: Tony Skjellum Well, it used to be possible on some systems for the "HOST program" and even other node programs to spawn processes, kill processes, as needed, rather than having everything "known at the beginning." To be concrete, on the iPSC/2 and iPSC/860, it is possible to "spawn" and "kill/killcube." It would be nice if we could support, however weakly, the ability to cope with such dynamicism. It is arguable that a spawn mechanism would return a communicator for the spawned child, and the spawned child would get its MPI_COMM_PARENT set accordingly. With flatten/unflatten, the child could be told about other SPMD groups, without an explicit publish/subscribe mechanism. 
The child could be told about other groups, but it would be unable to participate in any communication unless other processes were told of its existence. This begs the issue of how to introduce (or delete) new processes to (or from) the global process pool. Will the exclusion of this functionality severely cripple many would-be MPI users? Will it cripple MPI acceptance? Will it be difficult to include at a later time - MPI2 were it to happen? My opinion is that its exclusion is not a real problem. However, I will certainly listen to arguments to the contrary. moose From owner-mpi-context@CS.UTK.EDU Sat Jul 10 18:34:36 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA14418; Sat, 10 Jul 93 18:34:36 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15565; Sat, 10 Jul 93 18:35:09 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sat, 10 Jul 1993 18:35:04 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15479; Sat, 10 Jul 93 18:34:53 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA04050; Sat, 10 Jul 93 17:34:51 CDT Date: Sat, 10 Jul 93 17:34:51 CDT From: Tony Skjellum Message-Id: <9307102234.AA04050@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: new draft of context chapter Dear colleagues, given post-meeting discussions, recommendations, and further thinking, here is the latest draft of the context chapter. Please comment. Notice that the semantics of mpi_contexts_alloc() and mpi_comm_make() have been changed. These changes have been made to allow libraries/subprograms to control safety without relying on the underlying quiescence of a passed-in communicator. As such, this change is an important step forward. We also are able to justify how MPI would implement both of these functions as statically initialized, single-invocation collective operations (that lock out other threads). Examples discussed on the Friday of the last meeting have been added to the draft, appropriately modified for the new semantics of mpi_contexts_alloc() and mpi_comm_make(). 
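To make the intent concrete, here is a rough sketch of the usage pattern this change seems aimed at: a library deriving its own communicator instead of trusting the caller's communicator to be quiescent. The argument lists below are guesses patterned on Tom Henderson's example code above, not the draft's actual signatures, and MPI_COMM_GROUP is a hypothetical accessor used only for illustration:

/* Sketch only: how a library might set up private communication state once,
   in the style of the pseudocode above.  Signatures are guessed. */
comm lib_comm;                               /* private to the library */

void lib_init(comm users_comm)
{
  group g;

  /* Both calls are collective over users_comm.  If mpi_comm_make() is a
     statically initialized, single-invocation collective as described, the
     library need not assume users_comm is quiescent at the time of the
     call. */
  MPI_COMM_GROUP(users_comm, g);             /* hypothetical accessor      */
  MPI_COMM_MAKE(users_comm, g, lib_comm);    /* same group, fresh context  */
}

/* All later library traffic uses lib_comm, so it cannot be confused with
   the caller's own pending messages on users_comm. */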
- Tony Skjellum
-------------------------------------------
[PostScript attachment: dvips output of unify4.dvi (19 pages), containing the revised context chapter draft. The raw PostScript prolog and embedded bitmap font data are omitted here.]
0C00000DF8001FFE001F1E001C0F00180700000700000780000F80000F00F00F00F00F00F01F00 F01E00E03C0070F8007FF0001FC000111B7D9A15>53 D<1800003FFFC03FFFC03FFFC070038060 0700600E00C00C00001C0000380000700000E00000C00001C0000380000380000780000700000F 00000F00000E00001E00001E00001E00001E00003E00003C00001C0000121C7B9B15>55 D<00FE0003FF0007C7800F03800E03C00E03C01E03801E03801F07801F8F000FFE000FFC0003F8 0007FE001EFE003C3F00781F00700F00F00700E00700E00700E00700F00E00701E007C7C003FF0 000FC000121B7D9A15>I<00007000000070000000F0000000F0000001F0000001F80000037800 000378000006780000067800000C7800000C3C0000183C0000183C0000303C0000303C0000603C 0000601E0000FFFE0000FFFE0001801E0001801E0003001F0003000F0007000F000F000F007FC0 FFF0FFC0FFF01C1C7F9B1F>65 D<000FF030003FFC7000FC0EE003F007E007C003E00F8001E01F 0001E01E0000E03C0000C03C0000C0780000C07800000078000000F8000000F0000000F0000000 F0000000F0000000F0000380F000030078000300780007003C000E003E001C001F003C000FC0F0 0003FFE00000FF00001C1C7C9B1E>67 D<0FFFFC000FFFFF8000F007C000F001E000F000F000F0 007000F0007801F0007801E0003801E0003801E0003801E0003C01E0003803E0003803C0007803 C0007803C0007803C0007003C000F007C000E0078001E0078001C0078003800780078007800E00 0F807C00FFFFF000FFFFC0001E1C7E9B20>I<0FFFFFE00FFFFFE000F003E000F001C000F000C0 00F000C000F000C001F060C001E060C001E060C001E0600001E1E00001FFE00003FFC00003C1C0 0003C0C00003C0C00003C0C0C003C0C18007C00180078001800780030007800300078007000780 0E000F803E00FFFFFE00FFFFFC001B1C7E9B1C>I<0FFFFFC00FFFFFC000F007C000F0038000F0 018000F0018000F0018001F0018001E0618001E0618001E0600001E0E00001E1E00003FFC00003 FFC00003C1C00003C0C00003C0C00003C0C00007C1800007800000078000000780000007800000 078000000F800000FFFC0000FFF800001A1C7E9B1B>I<000FF030003FFC7000FC0EE003F007E0 07C003E00F8001E01F0001E01E0000E03C0000C03C0000C0780000C07800000078000000F80000 00F0000000F0000000F000FFF0F000FFF0F0000F80F0000F8078000F0078000F003C000F003E00 1F001F001F000FC07F0003FFF60000FF82001C1C7C9B21>I<0FFF9FFE0FFF3FFE00F003C000F0 03C000F003C000F003C000F007C001F007C001E0078001E0078001E0078001E0078001FFFF8003 FFFF8003C00F0003C00F0003C00F0003C00F0003C01F0007C01F0007801E0007801E0007801E00 07801E0007803E000F803E00FFF3FFE0FFF3FFC01F1C7E9B1F>I<0FFF800FFF8000F00000F000 00F00000F00000F00001F00001E00001E00001E00001E00001E00003E00003C00003C00003C000 03C00003C00007C0000780000780000780000780000780000F8000FFF800FFF800111C7F9B0F> I<0FFFC00FFFC000F00000F00000F00000F00000F00001F00001E00001E00001E00001E00001E0 0003E00003C00003C00003C00003C00603C00C07C00C07800C07801C0780180780380780780F81 F8FFFFF0FFFFF0171C7E9B1A>76 D<0FFC000FFC0FFC000FFC00FC001F8000FC001F8000FC0037 8000DE00378000DE006F8001DE006F80019E00CF00019E00CF00019E018F00018F018F00018F03 1F00038F031F00030F061E00030F061E0003078C1E0003078C1E000307983E000707983E000607 B03C000607B03C000603E03C000603E03C000603C07C001E03C07C00FFE387FFC0FFC387FF8026 1C7E9B26>I<0FF80FFE0FF80FFE00FC01E000FC00C000FE00C000DE00C000DE01C001DF01C001 8F0180018F8180018781800187C1800187C3800383C3800303E3000301E3000301F3000300F300 0300FF000700FF0006007E0006007E0006003E0006003E0006001E001E001E00FFE01C00FFC00C 001F1C7E9B1F>I<0007F000003FFC0000F81E0001E0070003800380070003C00E0001C01E0001 E03C0001E03C0000E0780000E0780000F0780000E0F00001E0F00001E0F00001E0F00001E0F000 03C0F00003C0F00007807800078078000F0038001E003C003C001E0078000F81F00003FFC00000 FE00001C1C7C9B20>I<0FFFFC000FFFFF0000F00F8000F0038000F003C000F001C000F001C001 F003C001E003C001E003C001E0038001E0078001E00F0003E03E0003FFF80003FFE00003C00000 03C0000003C0000007C00000078000000780000007800000078000000F8000000F800000FFF800 
00FFF000001A1C7E9B1C>I<0FFFF8000FFFFE0000F00F8000F0038000F003C000F001C000F001 C001F003C001E003C001E003C001E0078001E00F0001E03E0003FFF80003FFF80003C0FC0003C0 3C0003C03C0003C03E0007C03C0007803C0007803C0007803C0007803C0007803C380F803E70FF F81FF0FFF00FE01D1C7E9B1F>82 D<007F0C01FFDC03C1F80780780F00380E00380E00381E0038 1E00001F00001F80000FF8000FFF0007FFC001FFE0003FE00003E00001E00000E00000E06000E0 6000E06001E07001C0780380FE0F80FFFE00C3F800161C7E9B17>I<1FFFFFF03FFFFFF03C0781 F038078060700780606007806060078060600F8060C00F0060C00F0060000F0000000F0000000F 0000001F0000001E0000001E0000001E0000001E0000001E0000003E0000003C0000003C000000 3C0000003C0000003C0000007C00001FFFE0001FFFE0001C1C7C9B1E>III<07FF8FFE07 FF8FFE007C03E0003C0380003E0300001E0600001E0E00001F1C00000F1800000FB0000007E000 0007E0000003C0000003E0000003E0000007F000000EF000000CF0000018F8000030780000707C 0000E03C0000C03E0001801E0003801F000F801F00FFE07FF0FFE0FFF01F1C7F9B1F>88 DI<03FFFF8007FFFF0007E01F0007803E0007007C00060078000600F8000E01F0000C03E000 0C07C0000007C000000F8000001F0000003E0000007C0000007C000000F80C0001F00C0003E018 0003C0180007C018000F8038001F0030003E0070003E00F0007C03F000FFFFE000FFFFE000191C 7E9B19>I E /Fn 28 122 df<70F8FCFC7C0C0C0C1C18183870E0E0060F7C840E>44 D<01F00007FC000E0E001C07003803803803807803C07001C07001C07001C0F001E0F001E0F001 E0F001E0F001E0F001E0F001E0F001E0F001E0F001E0F001E0F001E0F001E0F001E07001C07001 C07001C07803C03803803803801C07000E0E0007FC0001F00013227EA018>48 D<018003800F80FF80F38003800380038003800380038003800380038003800380038003800380 038003800380038003800380038003800380038003800380FFFEFFFE0F217CA018>I<03F8000F FE001E1F00380F807007C07807C07C07C07807C07807C00007C00007C0000780000F80001F0000 3E0003FC0003F800001E00000F000007800007C00003E00003E07003E0F803E0F803E0F803E0F8 03E0E007C07007C0780F803E1F000FFE0003F80013227EA018>51 D<03F8000FFC001F1E003C07 003807807803807003C0F003C0F001C0F001C0F001E0F001E0F001E0F001E0F001E0F003E07803 E07803E03C07E03E0FE01FFDE007F9E00081E00001C00003C00003C0000380780780780700780F 00781E00787C003FF8000FE00013227EA018>57 D<000180000003C0000003C0000003C0000007 E0000007E0000007E000000FF000000CF000000CF000001CF800001878000018780000383C0000 303C0000303C0000601E0000601E0000601E0000C00F0000C00F0000C00F0001FFFF8001FFFF80 01800780030003C0030003C0030003C0060001E0060001E0060001E00E0000F01F0001F0FFC00F FFFFC00FFF20237EA225>65 D<000FF030007FFC3000FC1E7003F0077007C003F00F8001F01F00 01F01F0000F03E0000F03C0000707C0000707C0000707C000030F8000030F8000030F8000000F8 000000F8000000F8000000F8000000F8000000F8000000F80000307C0000307C0000307C000030 3E0000703E0000601F0000E01F0000C00F8001C007C0038003F0070000FC1E00007FFC00000FF0 001C247DA223>67 D<03FFF003FFF0000F00000F00000F00000F00000F00000F00000F00000F00 000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00000F00 000F00000F00000F00F80F00F80F00F80F00F81F00F01E00703E00787C003FF8000FE00014237E A119>74 D76 DI<07F0600FFE601E1FE03807E07003E07001 E0E000E0E000E0E00060E00060F00060F00000F800007C00007F00003FF0001FFE000FFF8003FF C0007FC00007E00001E00001F00000F00000F0C00070C00070C00070E00070E000F0F000E0F801 E0FC01C0FF0780C7FF00C1FC0014247DA21B>83 D<1FF0003FFC003C3E003C0F003C0F00000700 000700000F0003FF001FFF003F07007C0700F80700F80700F00718F00718F00F18F81F187C3FB8 3FF3F01FC3C015157E9418>97 D<03FE000FFF801F07803E07803C0780780000780000F00000F0 0000F00000F00000F00000F00000F000007800007800C03C01C03E01801F87800FFF0003FC0012 157E9416>99 D<0000E0000FE0000FE00001E00000E00000E00000E00000E00000E00000E00000 
E00000E00000E00000E003F8E00FFEE01F0FE03E03E07C01E07800E07800E0F000E0F000E0F000 E0F000E0F000E0F000E0F000E07000E07801E07801E03C03E01F0FF00FFEFE03F0FE17237EA21B >I<01FC0007FF001F0F803E03C03C01C07801E07801E0FFFFE0FFFFE0F00000F00000F00000F0 0000F000007800007800603C00E03E01C00F83C007FF0001FC0013157F9416>I<0E0000FE0000 FE00001E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E3F80 0EFFE00FE1E00F80F00F00700F00700E00700E00700E00700E00700E00700E00700E00700E0070 0E00700E00700E00700E00700E0070FFE7FFFFE7FF18237FA21B>104 D<1E003E003E003E001E 00000000000000000000000000000000000E00FE00FE001E000E000E000E000E000E000E000E00 0E000E000E000E000E000E000E000E00FFC0FFC00A227FA10E>I<01C003E003E003E001C00000 000000000000000000000000000001E00FE00FE001E000E000E000E000E000E000E000E000E000 E000E000E000E000E000E000E000E000E000E000E000E000E000E0F1E0F1C0F3C0FF803E000B2C 82A10F>I<0E0000FE0000FE00001E00000E00000E00000E00000E00000E00000E00000E00000E 00000E00000E00000E0FFC0E0FFC0E07E00E07800E07000E1E000E3C000E78000EF8000FFC000F FC000F1E000E0F000E0F800E07800E03C00E03E00E01E00E01F0FFE3FEFFE3FE17237FA21A>I< 0E00FE00FE001E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E 000E000E000E000E000E000E000E000E000E000E000E000E000E00FFE0FFE00B237FA20E>I<0E 3FC0FF00FEFFF3FFC0FFE0F783C01F807E01E00F003C00E00F003C00E00E003800E00E003800E0 0E003800E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800 E00E003800E00E003800E00E003800E0FFE3FF8FFEFFE3FF8FFE27157F942A>I<0E3F80FEFFE0 FFE1E01F80F00F00700F00700E00700E00700E00700E00700E00700E00700E00700E00700E0070 0E00700E00700E00700E0070FFE7FFFFE7FF18157F941B>I<01FC0007FF000F07801C01C03800 E07800F0700070F00078F00078F00078F00078F00078F00078F000787000707800F03800E01C01 C00F078007FF0001FC0015157F9418>I<0E7EFEFFFFEF1F8F0F0F0F000F000E000E000E000E00 0E000E000E000E000E000E000E000E00FFF0FFF010157F9413>114 D<1FD83FF87878F038E018 E018F018F8007F807FE01FF003F8007CC03CC01CE01CE01CF03CF878FFF0CFE00E157E9413>I< 060006000600060006000E000E000E001E003E00FFF8FFF80E000E000E000E000E000E000E000E 000E000E000E0C0E0C0E0C0E0C0E0C0F1C073807F803E00E1F7F9E13>I<0E0070FE07F0FE07F0 1E00F00E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00F0 0E00F00E01F00787F807FF7F01FC7F18157F941B>I121 D E /Fo 16 121 df<78FCFCFEFE7E06060606060E0C0C1C383870E0C007147A85 12>44 D<00003FE0030001FFFC030007F01E07001F800787003E0001CF00FC0000EF01F800007F 03F000007F03E000003F07C000001F0FC000001F0F8000000F1F8000000F3F000000073F000000 073E000000077E000000077E000000037E000000037C00000003FC00000000FC00000000FC0000 0000FC00000000FC00000000FC00000000FC00000000FC00000000FC00000000FC00000000FC00 0000007C000000007E000000037E000000037E000000033E000000033F000000073F000000061F 800000060F8000000E0FC000000C07C000001C03E000001803F000003801F800007000FC0000E0 003F0001C0001F8007800007F01F000001FFFC0000003FE00028337CB130>67 D<00003FF001800001FFFC01800007F81F0380001FC0038380003F0001E780007C0000E78001F8 00007F8001F000003F8003E000001F8007C000000F800FC000000F800F80000007801F80000007 801F00000003803F00000003803E00000003807E00000003807E00000001807E00000001807C00 00000180FC0000000000FC0000000000FC0000000000FC0000000000FC0000000000FC00000000 00FC0000000000FC0000000000FC0000000000FC0000000000FC00000FFFFC7E00000FFFFC7E00 00001FC07E0000000F807E0000000F803E0000000F803F0000000F801F0000000F801F8000000F 800F8000000F800FC000000F8007E000000F8003E000000F8001F000001F8001F800001F80007E 00003F80003F00007780001FC001E3800007F80FC1800001FFFF008000003FF800002E337CB134 >71 
D<03FF00000FFFC0001E03F0003E00F8003E007C003E003C003E003E001C001E0000001E00 00001E0000001E0000001E000003FE00007FFE0003FF1E0007F01E001FC01E003F001E007E001E 007C001E00FC001E00F8001E0CF8001E0CF8001E0CF8003E0CFC007E0C7C007E0C7E01FF1C3F87 CFB81FFF07F003FC03C01E1F7D9E21>97 D<003FE001FFF803E03C07C03E0F003E1F003E3E003E 3C001C7C00007C0000780000F80000F80000F80000F80000F80000F80000F80000F80000F80000 7C00007C00007C00003E00033E00071F00060F800E07C01C03F07801FFF0003F80181F7D9E1D> 99 D<003F800001FFF00003E1F80007807C000F003E001E001E003E001F003C000F007C000F80 7C000F8078000F80F8000780FFFFFF80FFFFFF80F8000000F8000000F8000000F8000000F80000 00F80000007C0000007C0000007C0000003E0001803E0003801F0003000F80070007E00E0003F8 3C0000FFF800003FC000191F7E9E1D>101 D<0F001F801F801F801F800F000000000000000000 00000000000000000000000000000780FF80FF800F800780078007800780078007800780078007 800780078007800780078007800780078007800780078007800780078007800FC0FFF8FFF80D30 7EAF12>105 D<0781FE003FC000FF87FF80FFF000FF9E0FC3C1F8000FB803E7007C0007F001EE 003C0007E001FC003E0007E000FC001E0007C000F8001E0007C000F8001E00078000F0001E0007 8000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F000 1E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E000780 00F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E 00078000F0001E000FC001F8003F00FFFC1FFF83FFF0FFFC1FFF83FFF0341F7E9E38>109 D<0781FE0000FF87FF8000FF9E0FC0000FB803E00007F001E00007E001F00007E000F00007C000 F00007C000F000078000F000078000F000078000F000078000F000078000F000078000F0000780 00F000078000F000078000F000078000F000078000F000078000F000078000F000078000F00007 8000F000078000F000078000F000078000F000078000F0000FC001F800FFFC1FFF80FFFC1FFF80 211F7E9E25>I<001FC00000FFF80001E03C0007800F000F0007801E0003C01E0003C03C0001E0 3C0001E0780000F0780000F0780000F0F80000F8F80000F8F80000F8F80000F8F80000F8F80000 F8F80000F8F80000F8780000F07C0001F03C0001E03C0001E01E0003C01E0003C00F00078007C0 1F0001F07C0000FFF800001FC0001D1F7E9E21>I<0781FE00FF87FF80FF9F0FE00FB803F007F0 01F807E000F807C0007C0780007C0780003E0780003E0780003E0780001F0780001F0780001F07 80001F0780001F0780001F0780001F0780001F0780003F0780003E0780003E0780007E07C0007C 07C000FC07E000F807F001F007B803E0079E0FC0078FFF800781FC000780000007800000078000 0007800000078000000780000007800000078000000780000007800000078000000FC00000FFFC 0000FFFC0000202D7E9E25>I<0787F0FF8FF8FFBC7C0FB87C07F07C07E07C07E00007C00007C0 0007C0000780000780000780000780000780000780000780000780000780000780000780000780 000780000780000780000780000780000780000FC000FFFE00FFFE00161F7E9E19>114 D<03FE300FFFF03E07F07801F07000F0E00070E00070E00030E00030F00030F800007F00007FF0 003FFF000FFFC003FFE0003FF00003F80000F8C0007CC0003CE0001CE0001CE0001CF0001CF800 38F80038FC00F0EF03F0C7FFC0C1FF00161F7E9E1A>I<00C00000C00000C00000C00000C00001 C00001C00001C00003C00003C00007C0000FC0001FC000FFFFE0FFFFE003C00003C00003C00003 C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003 C03003C03003C03003C03003C03003C03003C03003C07001E06001E0E000F9C000FFC0003F0014 2C7FAB19>I<078000F000FF801FF000FF801FF0000F8001F000078000F000078000F000078000 F000078000F000078000F000078000F000078000F000078000F000078000F000078000F0000780 00F000078000F000078000F000078000F000078000F000078000F000078000F000078000F00007 8001F000078001F000078001F000078003F00003C007F00003C00EF00001F03CF80000FFF8FF80 003FC0FF80211F7E9E25>I120 D E /Fp 5 85 df<00000000C00000000000E000 00000001E00000000003E00000000003E00000000007E00000000007E0000000000FE000000000 
                                   DRAFT

                      Groups, Contexts, Communicators

           Lyndon Clarke, Mark Sears, Anthony Skjellum, Marc Snir

                               July 10, 1993


3.1  Introduction

It is highly desirable that processes executing a parallel procedure use a "virtual
process name space" local to the invocation.  Thus, the code of the parallel procedure
will look identical, irrespective of the absolute addresses of the executing processes.
It is often the case that parallel application code is built by composing several
parallel modules (e.g., a numerical solver, and a graphic display module).  Support of a
virtual name space for each module will allow for the composition of modules that were
developed separately without changing all message passing calls within each module.  The
set of processes that execute a parallel procedure may be fixed, or may be determined
dynamically before the invocation.  Thus, MPI has to provide a mechanism for dynamically
creating sets of locally named processes.  We always number processes that execute a
parallel procedure consecutively, starting from zero, and call this numbering rank in
group.  Thus, a group is an ordered set of processes, where processes are identified by
their ranks when communication occurs.

Communication contexts partition the message-passing space into separate, manageable
"universes."  Specifically, a send made in a context cannot be received in another
context.  Contexts are identified in MPI using integer-valued contexts that reside
within communicator objects.  The context mechanism is needed to allow predictable
behavior in subprograms, and to allow dynamism in message usage that cannot be
reasonably anticipated or managed.  Normally, a parallel procedure is written so that
all messages produced during its execution are also consumed by the processes that
execute the procedure.  However, if one parallel procedure calls another, then it might
be desirable to allow such a call to proceed while messages are pending (the messages
will be consumed by the procedure after the call returns).  In such a case, a new
communication context is needed for the called parallel procedure, even if the transfer
of control is synchronized.

The communication domain used by a parallel procedure is identified by a communicator.
Communicators bring together the concepts of process group and communication context.
A communicator is an explicit parameter in each point-to-point communication operation.
The communicator identifies the communication context of that operation; it identifies
the group of processes that can be involved in this communication; and it provides the
translation from virtual process names, which are ranks within the group, into absolute
addresses.  Collective communication calls also take a communicator as parameter; it is
expected that parallel libraries will be built to accept a communicator as parameter.
Communicators are represented by opaque MPI objects.

MPI does not reveal absolute process names.  Rather, processes are always identified by
their rank inside a group.  New process groups are built by subsetting and reordering
processes within existing groups, as well as by "publishing and subscribing" or
"flattening and unflattening."  Publishing and subscribing provides a server-like
mechanism to allow rendezvous between disjoint communicating processes.  Flatten and
unflatten allow the transmission of groups or communicators in user messages.
Subsetting and reordering allow the user to construct subgroups given an existing group.

3.2  Context

A context is the MPI mechanism for partitioning communication space.  A defining
property of a context is that a send made in a context cannot be received in another
context.  A context is an integer.

3.3  Groups

A group is an ordered set of process identifiers (henceforth processes); process
identifiers are implementation dependent; a group is an opaque object.  Each process in
a group is associated with an integer rank, starting from zero.

Groups are represented by opaque group objects, and hence cannot be directly transferred
from one process to another.

3.3.1  Predefined Groups

Initial groups defined once MPI_INIT has been called are as follows:

  - MPI_GROUP_ALL, SPMD-like siblings of a process.
  - MPI_GROUP_HOST, a group including one's self and one's HOST.
  - MPI_GROUP_PARENT, a group containing one's self and one's PARENT (spawner).
  - MPI_GROUP_SELF, a group comprising one's self.

MPI implementations are required to provide these groups; however, not all forms of
communication make sense for all systems, so not all of these groups may be relevant.
Environmental inquiry will be provided to determine which of these are usable in a given
implementation.  The analogous communicators corresponding to these groups are defined
below in section 3.4.1.

Discussion: Environmental sub-committee needs to provide such inquiry functions for us.

3.4  Communicators

All MPI communication (both point-to-point and collective) functions use communicators
to provide a specific scope (context and group specifications) for the communication.
In short, communicators bring together the concepts of group and context (furthermore,
to support implementation-specific optimizations, and virtual topologies, they "cache"
additional information opaquely).  The source and destination of a message is identified
by the rank of that process within the group; no a priori membership restrictions on the
process sending or receiving the message are implied.  For collective communication, the
communicator specifies the set of processes that participate in the collective
operation.  Thus, the communicator restricts the "spatial" scope of communication, and
provides local process addressing.

Discussion: 'Communicator' replaces the word 'context' everywhere in current pt2pt and
collcomm drafts.

Communicators are represented by opaque communicator objects, and hence cannot be
directly transferred from one process to another.

3.4.1  Predefined Communicators

Initial communicators defined once MPI_INIT has been called are as follows:

  - MPI_COMM_ALL, SPMD-like siblings of a process.
  - MPI_COMM_HOST, a communicator for talking to one's HOST.
  - MPI_COMM_PARENT, a communicator for talking to one's PARENT (spawner).
  - MPI_COMM_SELF, a communicator for talking to one's self (useful for getting contexts
    for server purposes, etc.).

MPI implementations are required to provide these communicators; however, not all forms
of communication make sense for all systems.  Environmental inquiry will be provided to
determine which of these communicators are usable in a given implementation.  The groups
corresponding to these communicators are also available as predefined quantities (see
section 3.3.1).

Discussion: Environmental sub-committee needs to provide such inquiry functions for us.

3.5  Group Management

This section describes the manipulation of groups under various subheadings: general,
constructors, and so on.

3.5.1  Local Operations

The following are all local (non-communicating) operations.

MPI_GROUP_SIZE(group, size)

  IN  group   handle to group object.
  OUT size    is the integer number of processes in the group.

MPI_GROUP_RANK(group, rank)

  IN  group   handle to group object.
  OUT rank    is the integer rank of the calling process in group, or MPI_UNDEFINED if
              the process is not a member.
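
As a concrete illustration of these two inquiries: the draft gives only
language-independent calling sequences, so the C prototypes sketched below (an opaque
handle type MPI_GROUP, integer error returns, OUT arguments passed by address) are
assumptions made for this example, not part of the proposal.

    /* A minimal sketch, assuming hypothetical C bindings of the draft calls.
     * MPI_GROUP is an assumed opaque handle type; MPI_UNDEFINED is the
     * predefined value described above.                                    */
    int report_position(MPI_GROUP group)
    {
        int size, rank;

        MPI_GROUP_SIZE(group, &size);   /* number of processes in the group */
        MPI_GROUP_RANK(group, &rank);   /* my rank, or MPI_UNDEFINED        */

        if (rank == MPI_UNDEFINED)
            return -1;                  /* calling process is not a member  */

        /* partition work as a function of (rank, size) here */
        return 0;
    }
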
MPI_TRANSLATE_RANKS(group_a, n, ranks_a, group_b, ranks_b)

  IN  group_a   handle to group object "A"
  IN  n         number of ranks in the ranks_a array
  IN  ranks_a   array of zero or more valid ranks in group "A"
  IN  group_b   handle to group object "B"
  OUT ranks_b   array of corresponding ranks in group "B", MPI_UNDEFINED when no
                correspondence exists.

MPI_GROUP_FLATTEN(group, max_length, buffer, actual_length)

  IN  group          handle to object
  IN  max_length     maximum length of buffer in bytes
  OUT buffer         byte-aligned buffer
  OUT actual_length  actual byte length of buffer containing flattened group information

If insufficient space is available in buffer (as specified by max_length), then the
contents of buffer are undefined at exit.  The quantity actual_length is always
well-defined at exit as the number of bytes needed to store the flattened group.

Though implementations may vary on how they store flattened groups, the information must
be sufficient to reconstruct the group using MPI_GROUP_UNFLATTEN below.  The purpose of
the flattening and unflattening procedures is to allow groups to be transmitted between
processes in a portable way.

3.5.2  Local Group Constructors

The execution of the following operations does not require interprocess communication.

MPI_GROUP_UNFLATTEN(max_length, buffer, group)

  IN  max_length  maximum length of buffer in bytes
  IN  buffer      byte-aligned buffer produced by MPI_GROUP_FLATTEN
  OUT group       handle to object

See also MPI_GROUP_FLATTEN above.
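
For concreteness, the flatten/unflatten pair might be composed as below.  The C
prototypes, the fixed buffer size, and the transport of the bytes (indicated only in
comments, since it would use the point-to-point calls of the companion chapter) are all
assumptions of this sketch, not part of the draft.

    /* A sketch, assuming hypothetical C bindings; GROUP_BUF_MAX is an
     * illustrative bound chosen here, not a quantity defined by the draft. */
    #define GROUP_BUF_MAX 4096

    void ship_group(MPI_GROUP group)
    {
        char buffer[GROUP_BUF_MAX];
        int  actual_length;

        MPI_GROUP_FLATTEN(group, GROUP_BUF_MAX, buffer, &actual_length);
        if (actual_length > GROUP_BUF_MAX)
            return;                 /* buffer contents are undefined here    */
        /* ... send actual_length bytes of buffer to the peer process ...    */
    }

    void accept_group(char *buffer, int length, MPI_GROUP *group)
    {
        /* buffer holds length bytes produced by MPI_GROUP_FLATTEN elsewhere */
        MPI_GROUP_UNFLATTEN(length, buffer, group);
    }
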
MPI_LOCAL_SUBGROUP(group, n, ranks, new_group)

  IN  group      handle to group object
  IN  n          number of elements in array ranks (and size of new_group)
  IN  ranks      array of integer ranks in group
  OUT new_group  new group derived from above, preserving the order defined by ranks.

MPI_LOCAL_SUBGROUP_RANGES(group, n, ranges, new_group)

  IN  group      handle to group object
  IN  n          number of elements in array ranges (and size of new_group)
  IN  ranges     a one-dimensional array of pairs of ranks (form: begin through end)
  OUT new_group  new group derived from above, preserving the order defined by ranges.

Discussion: Please propose additional subgroup functions, before the second reading...
Virtual Topologies support?

MPI_LOCAL_GROUP_UNION(group1, group2, group_out)

MPI_LOCAL_GROUP_INTERSECT(group1, group2, group_out)

MPI_LOCAL_GROUP_DIFFERENCE(group1, group2, group_out)

  IN  group1     first group object handle
  IN  group2     second group object handle
  OUT group_out  group object handle

The set-like operations are defined as follows:

  union       all elements of the first group (group1), followed by all elements of the
              second group (group2) not in the first
  intersect   all elements of the first group which are also in the second group
  difference  all elements of the first group which are not in the second group

Note that for these operations the order of processes in the output group is determined
first by order in the first group (if possible) and then by order in the second group
(if necessary).

Discussion: What do people think about these local operations?  More?  Less?  Note:
these operations do not explicitly enumerate ranks, and therefore are more scalable if
implemented efficiently...

MPI_GROUP_FREE(group)

  IN  group  frees group previously defined.

This operation frees a handle group which is not currently bound to a communicator.  It
is erroneous to attempt to free a group currently bound to a communicator.
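
A short sketch of how these local constructors might compose, again under the assumed C
bindings; the rank choices, the single (begin, end) pair, and the pass-by-value
convention for MPI_GROUP_FREE are illustrative assumptions only.

    /* A sketch, assuming hypothetical C bindings of the local constructors. */
    void build_work_groups(MPI_GROUP all)
    {
        int       masters[2] = { 0, 1 };    /* illustrative ranks            */
        int       range[2]   = { 2, 7 };    /* one "begin through end" pair  */
        MPI_GROUP g_masters, g_workers, g_union;

        MPI_LOCAL_SUBGROUP(all, 2, masters, &g_masters);
        MPI_LOCAL_SUBGROUP_RANGES(all, 1, range, &g_workers);

        /* union: masters first, then workers not already present,
         * following the ordering rule stated above                          */
        MPI_LOCAL_GROUP_UNION(g_masters, g_workers, &g_union);

        /* ... use the groups; free them while no communicator is bound ...  */
        MPI_GROUP_FREE(g_masters);
        MPI_GROUP_FREE(g_workers);
        MPI_GROUP_FREE(g_union);
    }
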
Discussion: The point-to-point chapter suggests that there is a single destructor for
all MPI opaque objects; however, it is arguable that this specifies the implementation
of MPI very strongly.

MPI_GROUP_DUP(group, new_group)

  IN  group      extant group object handle
  OUT new_group  new group object handle

MPI_GROUP_DUP duplicates a group with all its cached information, replacing nothing.
This function is essential to the support of virtual topologies.

3.5.3  Collective Group Constructors

The execution of the following operations requires collective communication within a
group.

MPI_COLL_SUBGROUP(comm, membership_key, new_group)

  IN  comm            communicator object handle
  IN  membership_key  (integer)
  OUT new_group       new group object handle

This collective function is called by all processes in the group associated with comm.
A separate, non-overlapping group of processes is formed for each distinct value of key,
with the processes retaining their relative order compared to the group of comm.

MPI_COLL_GROUP_PERMUTE(comm, new_rank, new_group)

  IN  comm       communicator object handle
  IN  new_rank   (integer)
  OUT new_group  new group object handle

This collective function operates over all elements of the group of comm.  A correct
program specifies a distinct new_rank in each process, which defines a permutation of
the original ordering.
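
As an example of the intended use, a two-dimensional solver could form row groups by
supplying the row index as the membership key; every process of comm makes the same
call.  The C bindings are assumed as before, and MPI_COMM_RANK is the communicator
inquiry defined in section 3.7.1.

    /* A sketch, assuming hypothetical C bindings.  Processes supplying the
     * same key land in the same new group, keeping their relative order
     * from the group of comm.                                              */
    void make_row_group(MPI_COMM comm, int ncols, MPI_GROUP *row_group)
    {
        int rank;

        MPI_COMM_RANK(comm, &rank);               /* rank within comm's group */
        MPI_COLL_SUBGROUP(comm, rank / ncols, row_group);   /* key = row index */
    }
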
Discussion: Before, we had one function that implemented collective subsetting, as well
as permutation of ranks.  We convinced ourselves that it is better to have the above two
functions, each of which has clear usage and implementation.  We do not believe that the
performance of the two-function sequence needed to subset and permute will significantly
differ from the all-encompassing function previously defined.  Comments?

For instance, to provide the sorting function capability originally anticipated by Marc
Snir, one could use MPI_COLL_SUBGROUP preceded by a stable sort.  We therefore see the
given functionality as sufficient and clearly explicable.

By the way, we have intentionally avoided making convenience functions that do analogous
permutation or subsetting, while creating new communicators, because the user can easily
do this without additional synchronizations beyond getting additional needed contexts.
Are these so convenient that we want to add them (e.g., MPI_COLL_COMM_PERMUTE)?

3.6  Operations on Contexts

3.6.1  Local Operations

MPI_CONTEXTS_RESERVE(n, contexts)

  IN  n         number of contexts to reserve (resp. unreserve)
  IN  contexts  integer array of contexts

Reserves zero or more contexts.  A reserved context will not be allocated by a
subsequent call to MPI_CONTEXTS_ALLOC in the same process (see below).  If one or more
contexts had already been allocated by MPI_CONTEXTS_ALLOC or reserved by
MPI_CONTEXTS_RESERVE, then this function returns an error, and no change of context
reservation state shall have occurred.

The purpose of MPI_CONTEXTS_RESERVE is to provide a "well-known context" mechanism for
those who will develop server-like programs.

MPI_CONTEXTS_FREE(n, contexts)

  IN  n         number of contexts to free
  IN  contexts  integer array of contexts

Local deallocation of contexts allocated by MPI_CONTEXTS_RESERVE or MPI_CONTEXTS_ALLOC.
It is erroneous to free a context that is bound to any communicator (either locally or
in another process).

3.6.2  Collective Operations

MPI_CONTEXTS_ALLOC(group, n, contexts)

  IN  group     group object handle representing participants
  IN  n         number of contexts to allocate
  OUT contexts  integer array of contexts

Allocates an array of contexts.  This collective operation is executed by all processes
in group.
MPI provides a static initialization of the collective operation MPI_CONTEXTS_ALLOC and
therefore has the ability to provide safe communication at any time for obtaining new
contexts.  This function will have to lock out multiple threads, so it will have
capabilities unavailable to the user program.  Contexts that are allocated by
MPI_CONTEXTS_ALLOC are unique within group.  The array is the same on all processes that
call the function (same order, same number of elements).

Discussion: MPI_CONTEXTS_ALLOC(comm, n, contexts) was the previous definition of this
function.  The revision is seen as a much more consistent, explicable, and reasonable
choice for MPI1.  It avoids the chicken-and-egg feel of MPI (i.e., use a communicator to
get context(s) or a communicator), and it means that libraries control their own fate
regarding safety.  No quiescent communicator is required in order to get new contexts.
MPI manages this call as a single-invocation library, so MPI_INIT can assign it a
context of communication statically.

MPI_CONTEXTS_JOIN(local_comm, remote_comm_send, remote_comm_recv, n, contexts)

  IN  local_comm        communicator object handle
  IN  remote_comm_send  communicator object handle
  IN  remote_comm_recv  communicator object handle
  IN  n                 number of contexts to allocate
  OUT contexts          integer array of contexts

This is an advanced function needed to support "safe" intergroup communication.  The
function creates n contexts unique on the union of the underlying groups of local_comm
and remote_comm_send (which is the same as remote_comm_recv's group).  remote_comm_send
and remote_comm_recv are created by MPI_COMM_MERGE (below).
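
A sketch of the library usage argued for in the discussion above: a library allocates
its own contexts once, collectively over the group it is handed, and caches them for
later invocations.  The C bindings, the static caching scheme, and the choice of two
contexts are assumptions of the example, not requirements of the draft.

    /* A sketch, assuming hypothetical C bindings.  Every member of `group`
     * receives the same context values, which the library keeps private.   */
    static int lib_contexts[2];
    static int lib_have_contexts = 0;

    void lib_init(MPI_GROUP group)
    {
        if (!lib_have_contexts) {
            MPI_CONTEXTS_ALLOC(group, 2, lib_contexts);   /* collective call */
            lib_have_contexts = 1;
        }
        /* lib_contexts[0] and lib_contexts[1] now name message "universes"
         * reserved for this library's traffic within group.                 */
    }
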
3.7  Operations on Communicators

3.7.1  Local Communicator Operations

The following are all local (non-communicating) operations.

MPI_COMM_SIZE(comm, size)

  IN  comm  handle to communicator object.
  OUT size  is the integer number of processes in the group of comm.

MPI_COMM_RANK(comm, rank)

  IN  comm  handle to communicator object.
  OUT rank  is the integer rank of the calling process in the group of comm, or
            MPI_UNDEFINED if the process is not a member.

MPI_COMM_FLATTEN(comm, max_length, buffer, actual_length)

  IN  comm           handle to communicator object.
  IN  max_length     maximum length of buffer in bytes
  OUT buffer         byte-aligned buffer
  OUT actual_length  actual byte length of buffer containing flattened communicator
                     information

If insufficient space is available in buffer (as specified by max_length), then the
contents of buffer are undefined at exit.  The quantity actual_length is always
well-defined at exit as the number of bytes needed to store the flattened communicator.

Though implementations may vary on how they store flattened communicators, the
information must be sufficient to reconstruct the communicator using MPI_COMM_UNFLATTEN
below.  The purpose of flattening and unflattening is to allow portable interprocess
transmission of communicator objects.

3.7.2  Local Constructors

MPI_COMM_UNFLATTEN(max_length, buffer, comm)

  IN  max_length  maximum length of buffer in bytes
  IN  buffer      byte-aligned buffer produced by MPI_COMM_FLATTEN
  OUT comm        handle to object

See MPI_COMM_FLATTEN above.

MPI_COMM_BIND(group, context, comm_new)

  IN  group     object handle to be bound to new communicator
  IN  context   context to be bound to new communicator
  OUT comm_new  the new communicator.
3.7.2 Local Constructors

MPI_COMM_UNFLATTEN(max_length, buffer, comm)

IN max_length maximum length of buffer in bytes
IN buffer byte-aligned buffer produced by MPI_COMM_FLATTEN
OUT comm handle to object

See MPI_COMM_FLATTEN above.

MPI_COMM_BIND(group, context, comm_new)

IN group object handle to be bound to new communicator
IN context context to be bound to new communicator
OUT comm_new the new communicator.

The above function creates a new communicator object, which is associated with the group defined by group and with the specified context. The operation does not require communication. It is correct to begin using a communicator as soon as it is defined. It is not erroneous to invoke this function twice in the same process with the same context. Finally, there is no explicit synchronization over the group.

MPI_COMM_UNBIND(comm)

IN comm the communicator to be deallocated.

This routine disassociates the group associated with comm from the context associated with comm. The opaque object comm is deallocated. Both the group and the context, provided at the MPI_COMM_BIND call, remain available for further use. If MPI_COMM_MAKE (see below) was called in lieu of MPI_COMM_BIND, then there is no exposed context known to the user, and this quantity is freed by MPI_COMM_UNBIND.

MPI_COMM_GROUP(comm, group)

IN comm communicator object handle
OUT group group object handle

Accessor that returns the group corresponding to the communicator comm.

MPI_COMM_CONTEXT(comm, context)

IN comm communicator object handle
OUT context context

Returns the context associated with the communicator comm.

MPI_COMM_DUP(comm, new_context, new_comm)

IN comm communicator object handle
IN new_context new context to use with new_comm
OUT new_comm communicator object handle

MPI_COMM_DUP duplicates a communicator with all its cached information, replacing just the context.

MPI_COMM_MERGE(comm, comm_remote, comm_send, comm_recv)

IN comm communicator object handle
IN comm_remote communicator object handle
OUT comm_send communicator object handle
OUT comm_recv communicator object handle

MPI_COMM_MERGE is a call used after a publish & subscribe (equivalently, flatten & unflatten) sequence to provide communicators that can be used to transmit to the remote group using "correct" rank notation. It is motivated further below.
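The life cycle implied by MPI_COMM_BIND and MPI_COMM_UNBIND can be pictured with the short fragment below. It reuses the draft's calls as described above; some_comm, group and ctx are placeholder names, and the only point being illustrated is that the group and the context survive the unbind.

    /* Sketch: a context and group outlive the communicator bound to them. */
    void *group, *comm1, *comm2;
    int ctx;

    mpi_comm_group(some_comm, &group);    /* local accessor */
    mpi_contexts_alloc(group, 1, &ctx);   /* collective over group */

    mpi_comm_bind(group, ctx, &comm1);    /* local; comm1 is usable at once */
    /* ... communication on comm1 ... */
    mpi_comm_unbind(comm1);               /* comm1 deallocated ... */
    mpi_comm_bind(group, ctx, &comm2);    /* ... but group and ctx remain valid */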
3.7.3 Collective Communicator Constructors

MPI_COMM_MAKE(sync_group, comm_group, comm_new)

IN sync_group Group over which the communicator will be defined; it also specifies the participants in this synchronizing operation.
IN comm_group Group of the new communicator; often this will be the same as sync_group, or a subset thereof.
OUT comm_new the new communicator.

MPI_COMM_MAKE is equivalent to:

    MPI_CONTEXTS_ALLOC(sync_group, 1, context)
    MPI_COMM_BIND(comm_group, context, comm_new)

plus, notionally, internal flags are set in the communicator, denoting that the context was created as part of the opaque process that made the communicator (so it can be freed by MPI_COMM_UNBIND). It is not erroneous if comm_group is not a subset of sync_group.
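Because sync_group and comm_group may differ, one common shape is "everyone synchronizes, a subset receives the communicator", as in Current Practice #3 (Section 3.11). The sketch below imitates that example; the subgroup-specification string "[1,3,5,7]" and the variable names are invented for illustration.

    /* Sketch: all of MPI_GROUP_ALL takes part in the synchronizing make,
       but only the listed subgroup is bound into the new communicator. */
    void *odd_group, *odd_comm;

    mpi_local_subgroup(MPI_GROUP_ALL, 4, "[1,3,5,7]", &odd_group); /* local */
    mpi_comm_make(MPI_GROUP_ALL, odd_group, &odd_comm);            /* collective */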
3.8 Inter-communication Initialization & Rendezvous

Discussion: Before, MPI_COMM_MAKE required a communicator instead of sync_group. This is unnecessary, because MPI_COMM_MAKE can be treated by MPI as a statically initialized library that also locks out other threads. Thus, regardless of the current state of communication in MPI, a program can initiate a loosely synchronous call to MPI_COMM_MAKE and get a quiescent communicator back. As with MPI_CONTEXTS_ALLOC, this choice gives full control to libraries concerning their safety. If they have persistent objects, they can save needed communicators for reuse on subsequent invocations of that library, or "friend functions" thereof, operating on the same persistent data.

MPI_COMM_PUBLISH(comm, label, persistence_flag)

IN comm Communicator to be published under the name label.
IN label String label describing the published communicator.
IN persistence_flag Either MPI_EPHEMERAL or MPI_PERSISTENT.

This operation results in the association of the communicator comm with the name specified in label, with global scope. Persistence is either ephemeral (one subscribe causes an automatic MPI_COMM_UNPUBLISH) or persistent (an explicit MPI_COMM_UNPUBLISH must subsequently be called, and any number of subscriptions can occur). This operation does not wait for subscriptions to occur before returning. Only one process calls this function.

Subsequent calls to MPI_COMM_PUBLISH with the same label (without an intervening MPI_COMM_UNPUBLISH) are erroneous.

Discussion: Should we have a permissions flag that implements access restrictions similar to Unix?

MPI_COMM_UNPUBLISH(label)

IN label String label describing the published communicator.

This is an operation undertaken by a single process, whose effect is to remove the association of the communicator specified in a preceding MPI_COMM_PUBLISH call with the name specified in label. MPI_COMM_UNPUBLISH on an undefined label will be ignored.

MPI_COMM_SUBSCRIBE(my_comm, label, comm)

IN my_comm Communicator of the participants in the subscribe.
IN label String label describing the communicator to which we wish to subscribe
OUT comm Communicator created through the subscription process.

This is a collective communication in the group specified in my_comm, which has the effect of creating in each group process a copy of the previously published communicator associated with the name in label. This operation blocks until such an association is possible.

Once an MPI_COMM_PUBLISH and an MPI_COMM_SUBSCRIBE on the same label have occurred, the subscriber my_comm has the ability to send messages to the publisher; group members of the published communicator have the ability to receive messages using the published communicator. The communicators so defined may only be used in point-to-point communication.

Discussion: Do we want any of the following: MPI_COMM_SUBSCRIBE_NON_BLOCKING, MPI_COMM_SUBSCRIBE_PROBE, etc.?
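In contrast to the ephemeral, pairwise rendezvous shown next, a persistent label can serve several subscriber groups before being withdrawn, under the semantics stated above. In the sketch, the label "solver_service", the rank-0 guard, and the communicator names are assumptions, not text from the draft.

    /* Sketch, publishing group: publish once, persistently. */
    if (rank == 0)
        mpi_comm_publish(server_comm, "solver_service", MPI_PERSISTENT);

    /* Any number of client groups may later do, collectively:
           mpi_comm_subscribe(client_comm, "solver_service", &to_server);   */

    /* Sketch, publishing group: an explicit unpublish is required for MPI_PERSISTENT. */
    if (rank == 0)
        mpi_comm_unpublish("solver_service");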
The symmetric case is constructed as follows: groups "A" and "B" wish to build a symmetric inter-group communication structure.

"A" group:

    mpi_comm_group(A_comm, A_group)
    mpi_contexts_alloc(A_group, 1, context1)
    mpi_comm_dup(A_comm, context1, A_comm1)
    mpi_comm_rank(A_comm, rank)
    if(rank == 0) then
       mpi_comm_publish(A_comm1, "A_comm1", MPI_EPHEMERAL)
    endif
    /* [OTHER WORK UNRELATED TO THIS OPERATION] */
    mpi_comm_subscribe(A_comm, "B_comm1", B_comm1)
    mpi_comm_merge(A_comm1, B_comm1, Send_to_B, Recv_from_B)
    mpi_comm_free(A_comm1)
    mpi_comm_free(B_comm1)

"B" group:

    mpi_comm_group(B_comm, B_group)
    mpi_contexts_alloc(B_group, 1, context1)
    mpi_comm_dup(B_comm, context1, B_comm1)
    mpi_comm_rank(B_comm, rank)
    if(rank == 0) then
       mpi_comm_publish(B_comm1, "B_comm1", MPI_EPHEMERAL)
    endif
    /* [OTHER WORK UNRELATED TO THIS OPERATION] */
    mpi_comm_subscribe(B_comm, "A_comm1", A_comm1)
    mpi_comm_merge(B_comm1, A_comm1, Send_to_A, Recv_from_A)
    mpi_comm_free(B_comm1)
    mpi_comm_free(A_comm1)

For example, elements of the "B" group use Send_to_A to send to elements of "A", which receive such messages in Recv_from_B. Alternatively, the following function is rather more convenient when one does not need to exploit the decoupling of the publish and subscribe transactions:

MPI_COMM_PUBLISH_SUBSCRIBE(comm_A, label_A, label_B, send_to_B, recv_from_B)

IN comm_A Communicator to be duplicated and published
IN label_A Label of the published, duplicated communicator
IN label_B Label of the communicator to which we want to subscribe
OUT send_to_B Valid send communicator for talking to the other group.
OUT recv_from_B Valid receive communicator for talking to the other group.

This call is always done in pairs, as follows. "A" group:

    MPI_COMM_PUBLISH_SUBSCRIBE(comm_A, "comm_A", "comm_B", Send_to_B, Recv_from_B)

"B" group:

    MPI_COMM_PUBLISH_SUBSCRIBE(comm_B, "comm_B", "comm_A", Send_to_A, Recv_from_A)
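The draft states, but does not show, the point-to-point traffic that the merged communicators carry. A minimal sketch in the style of the draft's other examples (buffer arguments elided there as "..." are elided here too) might be:

    /* Sketch, "B" group, rank 0: send to rank 0 of "A". */
    mpi_send(..., Send_to_A, 0);

    /* Sketch, "A" group, rank 0: receive that message. */
    mpi_recv(..., Recv_from_B, 0);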
MPI_COMM_JOIN(local_comm, remote_comm_send, remote_comm_recv, order, joined_comm)

IN local_comm Communicator describing the local group
IN remote_comm_send Communicator describing the remote group
IN remote_comm_recv Communicator describing the remote group
IN order If local_comm is first, then MPI_TRUE, else MPI_FALSE
OUT joined_comm Merged communicator

This function creates the merged communicator joined_comm given a local communicator (local_comm) and a pair of remote communicators (obtained from MPI_COMM_MERGE). The function synchronizes over the underlying groups of both communicators, and produces a new, merged communicator, whose group is the ordered union of the two underlying groups (see MPI_GROUP_UNION). The new communicator has a unique context of communication.
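Continuing the symmetric example, the join step might be sketched as below. Passing MPI_TRUE on the "A" side and MPI_FALSE on the "B" side, so that A's group comes first in the ordered union, is an assumption consistent with the description of order rather than text from the draft; AB_comm is a placeholder name.

    /* Sketch, "A" group: one communicator spanning A followed by B. */
    mpi_comm_join(A_comm, Send_to_B, Recv_from_B, MPI_TRUE, &AB_comm);

    /* Sketch, "B" group: the matching call, placing its group second. */
    mpi_comm_join(B_comm, Send_to_A, Recv_from_A, MPI_FALSE, &AB_comm);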
3.9 Cacheing

Cacheing is the process by which implementation-defined data is propagated in groups and communicators.

3.9.1 Cacheing in Groups

3.9.2 Cacheing in Communicators

TBD.

3.10 Formalizing the Loosely Synchronous Model (Usage, Safety)

3.10.1 Basic Statements

When a caller passes a communicator (which contains a context and group) to a callee, that communicator must be free of side effects throughout execution of the subprogram (quiescent). This provides one model in which libraries can be written and work "safely." For libraries so designated, the callee has permission to do whatever communication it likes with the communicator, and under the above guarantee knows that no other communications will interfere. Since we permit the creation of new communicators without synchronization (assuming preallocated contexts), this does not impose a significant overhead.

This form of safety is analogous to other common computer science usages, such as passing a descriptor of an array to a library routine. The library routine has every right to expect such a descriptor to be valid and modifiable.

Note that MPI_UNDEFINED is the rank for processes that are sending to the communicator from outside the group. This can result through the publish & subscribe mechanism, or by virtue of the transmission of flattened communicators (and their subsequent unflattening and use).

3.10.2 Models of Execution

We say that a parallel procedure is active at a process if the process belongs to a group that may collectively execute the procedure, and some member of that group is currently executing the procedure code. If a parallel procedure is active at a process, then this process may be receiving messages pertaining to this procedure, even if it does not currently execute the code of this procedure.

Nonreentrant parallel procedures

This covers the case where, at any point in time, at most one invocation of a parallel procedure can be active at any process. That is, concurrent invocations of the same parallel procedure may occur only within disjoint groups of processes. For example, all invocations of parallel procedures involve all processes, processes are single-threaded, and there are no recursive invocations.

In such a case, a context can be statically allocated to each procedure. The static allocation can be done in a preamble, as part of initialization code. Or, it can be done at compile/link time, if the implementation has additional mechanisms to reserve context values. Communicators to be used by the different procedures can be built in a preamble, if the executing groups are statically defined; if the executing groups change dynamically, then a new communicator has to be built whenever the executing group changes, but this new communicator can be built using the same preallocated context. If the parallel procedures can be organized into libraries, so that only one procedure of each library can be concurrently active at each processor, then it is sufficient to allocate one context per library.

Parallel procedures that are nonreentrant within each executing group

This covers the case where, at any point in time, for each process group, there can be at most one active invocation of a parallel procedure by a process member. However, it might be possible that the same procedure is concurrently invoked in two partially (or completely) overlapping groups. For example, the same collective communication function may be concurrently invoked on two partially overlapping groups.

In such a case, a context is associated with each parallel procedure and each executing group, so that overlapping execution groups have distinct communication contexts. (One does not need a different context for each group; one merely needs a "coloring" of the groups.) One can then generate the communicators for each parallel procedure when the execution groups are defined. Here, again, one needs only one context for each library, if no two procedures from the same library can be concurrently active in the same group.

Note that, for collective communication libraries, we do allow several concurrent invocations within the same group: a broadcast in a group may be started at a process before the previous broadcast in that group has ended at another process. In such a case, one cannot rely on context mechanisms to disambiguate successive invocations of the same parallel procedure within the same group: the procedure needs to be implemented so as to avoid confusion. For example, for broadcast, one may need to carry additional information in messages, such as the broadcast root, to help in such disambiguation; one also relies on preservation of message order by MPI. With such an approach, we may be gaining performance, but we lose modularity. It is not sufficient to implement the parallel procedure so that it works correctly in isolation, when invoked only once; it needs to be implemented so that any number of successive invocations will execute correctly. Of course, the same approach can be used for other parallel libraries.

Well-nested parallel procedures

Calls of parallel procedures are well nested if a new parallel procedure is always invoked in a subset of a group executing the same parallel procedure. Thus, processes that execute the same parallel procedure have the same execution stack.

In such a case, a new context needs to be dynamically allocated for each new invocation of a parallel procedure. However, a stack mechanism can be used for allocating new contexts. Thus, a possible mechanism is to allocate first a large number of contexts (up to the upper bound on the depth of nested parallel procedure calls), and then use a local stack management of these contexts on each process to create a new communicator (using MPI_COMM_MAKE) for each new invocation.

The general case

In the general case, there may be multiple concurrently active invocations of the same parallel procedure within the same group; invocations may not be well nested. A new context needs to be created for each invocation. It is the user's responsibility to make sure that, if two distinct parallel procedures are invoked concurrently on overlapping sets of processes, then context allocation or communicator creation is properly coordinated.

3.11 Motivating Examples

3.11.1 Current Practice #1

Example #1a:

    int me, size;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);
    mpi_comm_size(MPI_COMM_ALL, &size);

    printf("Process %d size %d\n", me, size);
    ...
    mpi_end();

Example #1a is a do-nothing program that initializes itself legally, refers to the "all" communicator, and prints a message. This example does not imply that MPI supports printf-like communication itself.

Example #1b:

    int me, size;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);   /* local */
    mpi_comm_size(MPI_COMM_ALL, &size); /* local */

    if((me % 2) == 0)
        mpi_send(..., MPI_COMM_ALL, ((me + 1) % size));
    else
        mpi_recv(..., MPI_COMM_ALL, ((me - 1 + size) % size));

    ...
    mpi_end();

Example #1b schematically illustrates message exchanges between "even" and "odd" processes in the "all" communicator.

3.11.2 Current Practice #2

    void *data;
    int me;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);

    if(me == 0)
    {
        /* get input, create buffer "data" */
        ...
    }

    mpi_broadcast(MPI_COMM_ALL, 0, data);

    ...
    mpi_end();

This example illustrates the use of a collective communication.

3.11.3 (Approximate) Current Practice #3

    int me;
    void *grp0, *grprem, *commslave;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);                     /* local */
    mpi_local_subgroup(MPI_GROUP_ALL, 1, "[0]", &grp0);   /* local */
    mpi_group_difference(MPI_GROUP_ALL, grp0, &grprem);   /* local */
    mpi_comm_make(MPI_GROUP_ALL, grprem, &commslave);

    if(me != 0)
    {
        /* compute on slave */
        ...
        mpi_reduce(commslave, ...);
        ...
    }
    /* zero falls through immediately to this reduce, others do later... */
    mpi_reduce(MPI_COMM_ALL, ...);

This example illustrates how a group consisting of all but the zeroth process of the "all" group is created, and then how a communicator is formed (commslave) for that new group. The new communicator is used in a collective call, and all processes execute a collective call in the MPI_COMM_ALL context. This example illustrates how the two communicators (which possess distinct contexts) protect communication. That is, communication in MPI_COMM_ALL is insulated from communication in commslave, and vice versa.

In summary, for communication with "group safety," contexts within communicators must be distinct.

3.11.4 Example #4

The following example is meant to illustrate "safety" between point-to-point and collective communication.

    #define TAG_ARBITRARY 12345
    #define SOME_COUNT    50

    int i, me;
    int *contexts;
    void *subgroup, *pt2pt_comm, *coll_comm;
    ...
    mpi_init();
    mpi_contexts_alloc(MPI_GROUP_ALL, 2, &contexts);
    mpi_local_subgroup(MPI_GROUP_ALL, 4, "[2,4,6,8]", &subgroup);  /* local */
    mpi_group_rank(subgroup, &me);                                 /* local */

    if(me != MPI_UNDEFINED)
    {
        mpi_comm_bind(subgroup, contexts[0], &pt2pt_comm);  /* local */
        mpi_comm_bind(subgroup, contexts[1], &coll_comm);   /* local */

        /* asynchronous receive: */
        mpi_irecv(..., MPI_SRC_ANY, TAG_ARBITRARY, pt2pt_comm);
    }

    for(i = 0; i < SOME_COUNT; i++)
        mpi_reduce(coll_comm, ...);

3.11.5 Library Example #1

The main program:

    int done = 0;
    user_lib_t *libh_a, *libh_b;
    void *dataset1, *dataset2;
    ...
    mpi_init();
    ...
    init_user_lib(MPI_COMM_ALL, &libh_a);
    init_user_lib(MPI_COMM_ALL, &libh_b);
    ...
    user_start_op(libh_a, dataset1);
    user_start_op(libh_a, dataset2);
    ...
    while(!done)
    {
        /* work */
        ...
        mpi_reduce(MPI_COMM_ALL, ...);
        ...
        /* see if done */
        ...
    }
    user_end_op(libh_a);
    user_end_op(libh_b);

The user library initialization code:

    void init_user_lib(void *comm, user_lib_t **handle)
    {
        user_lib_t *save;
        int context;
        void *group;

        user_lib_initsave(&save);                /* local */
        mpi_comm_group(comm, &group);
        mpi_contexts_alloc(group, 1, &context);
        mpi_comm_dup(comm, context, save -> comm);

        /* other inits */
        *handle = save;
    }

Notice that the communicator comm passed to the library is not needed to allocate new contexts.

User start-up code:

    void user_start_op(user_lib_t *handle, void *data)
    {
        user_lib_state *state;
        state = handle -> state;
        mpi_irecv(handle -> comm, ..., data, ..., &(state -> irecv_handle));
        mpi_isend(handle -> comm, ..., data, ..., &(state -> isend_handle));
    }

User clean-up code:

    void user_end_op(user_lib_t *handle)
    {
        mpi_wait(handle -> state -> isend_handle);
        mpi_wait(handle -> state -> irecv_handle);
    }

3.11.6 Library Example #2

The main program:

    int ma, mb;
    ...
    list_a := "[0,1]";
    list_b := "[0,2{,3}]";

    mpi_local_subgroup(MPI_GROUP_ALL, 2, list_a, &group_a);
    mpi_local_subgroup(MPI_GROUP_ALL, 2(3), list_b, &group_b);

    mpi_comm_make(MPI_GROUP_ALL, group_a, &comm_a);
    mpi_comm_make(MPI_GROUP_ALL, group_b, &comm_b);

    mpi_comm_rank(comm_a, &ma);
    mpi_comm_rank(comm_b, &mb);

    if(ma != MPI_UNDEFINED)
        lib_call(comm_a);
    if(mb != MPI_UNDEFINED)
    {
        lib_call(comm_b);
        lib_call(comm_b);
    }

The library:

    void lib_call(void *comm)
    {
        int me, done = 0;
        mpi_comm_rank(comm, &me);
        if(me == 0)
            while(!done)
            {
                mpi_recv(..., comm, MPI_SRC_ANY);
                ...
            }
        else
        {
            /* work */
            mpi_send(..., comm, 0);
            ....
        }
        MPI_SYNC(comm);   /* include/no safety for safety/no safety */
    }

The above example is really two examples, depending on whether or not you include rank 3 in list_b. This example illustrates that, despite contexts, subsequent calls to lib_call with the same context need not be safe from one another ("back masking"). Safety is realized if the MPI_SYNC is added. What this demonstrates is that libraries have to be written carefully, even with contexts.

Algorithms like "combine" have strong enough source selectivity so that they are inherently OK. So are multiple calls to a typical tree broadcast algorithm with the same root. However, multiple calls to a typical tree broadcast algorithm with different roots could break. Therefore, such algorithms would have to utilize the tag to keep things straight. All of the foregoing is a discussion of "collective calls" implemented with point-to-point operations. MPI implementations may or may not implement collective calls using point-to-point operations. These algorithms are used to illustrate the issues of correctness and safety, independent of how MPI implements its collective calls.

From owner-mpi-context@CS.UTK.EDU Sat Jul 10 18:36:16 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA14436; Sat, 10 Jul 93 18:36:16 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15709; Sat, 10 Jul 93 18:37:07 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sat, 10 Jul 1993 18:37:02 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15701; Sat, 10 Jul 93 18:36:59 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA04075; Sat, 10 Jul 93 17:36:58 CDT Date: Sat, 10 Jul 93 17:36:58 CDT From: Tony Skjellum Message-Id:
<9307102236.AA04075@Aurora.CS.MsState.Edu> To: otto@iliamna.cse.ogi.edu Subject: new latex of context chapter Cc: mpi-context@cs.utk.edu This is uuencoded to reduce dangers of mail transmission... - Tony begin 640 ctxt_10jul93.tex
[uuencoded attachment ctxt_10jul93.tex: LaTeX source of the 10 July 1993 context chapter draft; truncated in the archive]
From: L J Clarke Subject: Re: discussion of process dynamicism, publish, subscribe, etc. To: Tony Skjellum , mpi-context@cs.utk.edu In-Reply-To: Tony Skjellum's message of Sat, 10 Jul 93 16:09:42 CDT Reply-To: lyndon@epcc.ed.ac.uk Some comments as requested. > ------------------------------------------------- > >From moose@Think.COM Fri Jul 9 11:11:00 1993 > From: Adam Greenberg > Date: Fri, 9 Jul 93 12:10:49 EDT > To: tony@Aurora.CS.MsState.Edu > Subject: discussion > Content-Length: 972 > X-Lines: 19 > Status: RO > > Date: Tue, 6 Jul 93 19:06:11 CDT > From: Tony Skjellum > > I would like to discuss your concerns about inter-group > facilities, and get you involved in the revised context > proposal, prior to next meeting. Can you summarize > worries, and how we can allay them reasonably, without > giving up MPMD programming capabilities. > > I don't think you can allay my fears. I (as TMC) don't believe that MPI should > contain mechanisms for coupling disparate programs together. I share this belief. > I distinguish > this notion from MPMD in which all of a user's programs which comprise an > instance or invocation of the users `system' are known a priori and information > on all members of the system is available at system start up. The validity of this distinction is restricted to a world in which the MPI user cannot dynamically create processes. I guess its a bit harder to describe the distinction when the user can do this.
Nevertheless I have an intuitive understanding of the distinction which I believe is useful. By the way, I do not think that information regarding all members of the user "system" really is available at start up - to the MPI user - at least not all information on all members available to all members - and certainly relevant information which does not yet exist is not available. > Intersystem > communication is properly an OS issue. Yes. > It has already been solved by Unix. No. > Other solutions too, we believe, will require OS support if not implementation. > These OS related requirements are beyond the scope of MPI. Concur. > >From moose@Think.COM Sat Jul 10 15:09:11 1993 > From: Adam Greenberg > Date: Sat, 10 Jul 93 16:09:01 EDT > To: tony@Aurora.CS.MsState.Edu > Subject: discussion > Content-Length: 193 > Status: RO > X-Lines: 7 > > Date: Sat, 10 Jul 93 15:04:11 CDT > From: Tony Skjellum > > What about dynamic process management for a single user? > > What do you believe this to entail? > moose > > >From moose@Think.COM Sat Jul 10 15:56:59 1993 > From: Adam Greenberg > Date: Sat, 10 Jul 93 16:56:53 EDT > To: tony@Aurora.CS.MsState.Edu > Subject: discussion > Content-Length: 1374 > Status: RO > X-Lines: 30 > > Date: Sat, 10 Jul 93 15:13:58 CDT > From: Tony Skjellum > > Well, it used to be possible on some systems for the "HOST program" > and even other node programs to spawn processes, kill processes, > as needed, rather than having everything "known at the beginning." > To be concrete, on the iPSC/2 and iPSC/860, it is possible to "spawn" > and "kill/killcube." > > It would be nice if we could support, however weakly, the ability to > cope with such dynamicism. It is arguable that a spawn mechanism > would return a communicator for the spawned child, and the spawned > child would get its MPI_COMM_PARENT set accordingly. With flatten/unflatten, > the child could be told about other SPMD groups, without an explicit > publish/subscribe mechanism. > > The child could be told about other groups, but it would be unable to > participate in any communication unless other processes were told of its > existence. This begs the issue of how to introduce (or delete) new processes > to (or from) the global process pool. > > Will the exclusion of this functionality severely cripple many would-be MPI > users? Will it cripple MPI acceptance? Will it be difficult to include at a > later time - MPI2 were it to happen? > > My opinion is that its exclusion is not a real problem. However, I will > certainly listen to arguments to the contrary. I believe that exclusion will not cause a large problem in the short term but will lead to the dismissal of MPI as a standard within the not so distant future. However, it will mean even in the short term that there will be work which is prevented from use of MPI as a direct result of this exclusion. I can't claim to know what will be a good approach to dynamic process management. I have ideas, which in part seem to make sense to me, but are, to the best of my knowledge, quite untested. 
5c Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Mon Jul 12 12:40:20 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA22584; Mon, 12 Jul 93 12:40:20 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14174; Mon, 12 Jul 93 12:40:40 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 12 Jul 1993 12:40:39 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14161; Mon, 12 Jul 93 12:40:38 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA14067; Mon, 12 Jul 93 11:40:32 CDT Date: Mon, 12 Jul 93 11:40:32 CDT From: Tony Skjellum Message-Id: <9307121640.AA14067@Aurora.CS.MsState.Edu> To: Jack.Dongarra@lip.ens-lyon.fr Subject: Re: context examples, etc. Cc: mpi-context@cs.utk.edu Jack, I have recently published a new draft of the chapter, which has examples in it, superceding need to distribute examples separately :-) Regards, - Tony ----- Begin Included Message ----- From Jack.Dongarra@lip.ens-lyon.fr Mon Jul 12 03:11:30 1993 Date: Mon, 12 Jul 93 10:11:23 +0200 From: Jack.Dongarra@lip.ens-lyon.fr (Jack Dongarra) To: tony@Aurora.CS.MsState.Edu Subject: Re: context examples, etc. Content-Length: 60 Tony, Please send me a copy of the examples. Regards, Jack ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Thu Jul 15 15:07:53 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA24659; Thu, 15 Jul 93 15:07:53 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10162; Thu, 15 Jul 93 15:08:14 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 15 Jul 1993 15:08:13 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10153; Thu, 15 Jul 93 15:08:12 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA01640; Thu, 15 Jul 93 14:08:07 CDT Date: Thu, 15 Jul 93 14:08:07 CDT From: Tony Skjellum Message-Id: <9307151908.AA01640@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: responses to recent proposal I have had only two responses (Cownie, Henderson), and both were "private." Both have serious "concerns." Is there further input? - Tony From owner-mpi-context@CS.UTK.EDU Sun Jul 18 11:23:08 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA08263; Sun, 18 Jul 93 11:23:08 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03342; Sun, 18 Jul 93 11:23:56 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 18 Jul 1993 11:23:55 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03185; Sun, 18 Jul 93 11:21:08 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA19755; Sun, 18 Jul 93 10:21:07 CDT Date: Sun, 18 Jul 93 10:21:07 CDT From: Tony Skjellum Message-Id: <9307181521.AA19755@Aurora.CS.MsState.Edu> To: mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu, mpi-core@cs.utk.edu, mpi-pt2pt@cs.utk.edu Subject: All about threads Topic: Tacit Thread Safety Requirement of MPI1, Context Chapter, etc. 
Dear colleagues:

In reviewing comments about the latest context draft, it has been repeatedly told to me that we are at a crucial stage in MPI, because we have to agree on the context model, etc., as soon as possible. I concur with that assessment. In trying to find a consistent way to acquire safe communication space for groups, the issue of thread safety arises, because overlapping concurrent threads would have to work correctly. I am currently confident about the single-threaded case, and I am NOT CONFIDENT about the multi-threaded case.

Does anyone have real experience with multi-threaded message passing (has it been done in an application setting, like we assume for MPI)? I need immediate guidance (specific guidance) about what multi-threaded programming MEANS in MPI, if this was in fact a reasonable requirement for MPI in the first place, and how multi-threading impacts point-to-point and collective communication (that is, real programs with examples). For instance, do we assume named threads or unnamed threads (and would this help)? Is there an exemplar threads package ???

Here is one problem in a nutshell. We discussed "statically initialized libraries" from time to time. Well, if there are multiple overlapping threads, then one would need to have separate contexts statically initialized for each concurrent thread. Such threads have group scope. Hence, groups would have to cache contexts for each concurrent thread (notice: groups caching contexts).

I propose that we have a serious discussion on what thread safety really means for MPI1. I need for there to be well-formulated guidelines and in-depth debate immediately, so that the context committee can work effectively within these requirements, or give feedback as to why they are unreasonable. Otherwise, I/we can't really make the context chapter bullet-proof in time for the next meeting (except for the single-thread case). We have discussed how contexts provide group safety, but not temporal safety from multiple invocations of operations on a context (for which a programming paradigm must be described; e.g., synchronizing or implicitly synchronizing ... also could be called quiescent-at-exit). Now we need to have a notion of how to provide safety with multiple threads, or how to program the multi-threaded environment consistently, with interspersed MPI calls.

- - -

To summarize, I seriously propose that, in the absence of an in-depth debate and specification of what thread safety means in MPI1, we abandon this requirement altogether (analogous to the abandonment of non-blocking collective operations). If thread safety were to remain a de jure requirement of MPI1, then I ask that there be examples (analogous to or supersets of our contexts examples, pt2pt examples, and collective examples) illustrating same. If this is to be an added task of my subcommittee [which makes reasonable sense to me] then I am eager for assistance nonetheless. I would want to see what people think existing thread practice is, what the design choices are, and which we choose to support, as well. It is not obvious to me that we really know what we mean (formally, practically) by "thread safety" for SPMD/MPMD message passing applications. Recall that there are at least three kinds of threads: O/S threads, compiler threads, user threads (we seem to really mean the latter in our discussions).

Thanks + please advise soonest.

Tony Skjellum

PS References to accessible texts or papers or software (eg, portable thread packages) are acceptable forms of advice.
PPS I would like to have a new draft of the context chapter out by August 1 (with possible revisions by August 5). I am getting one extremely negative set of feedback from a single vendor representative, and one more balanced feedback (ie, only two people are communicating with me on the context chapter). I am not seeing widespread debate over the context chapter. This MUST happen now, between the meetings, since we have our best current draft available. We will not be successful if we are debating it all again at the next meeting without careful thought now (eg, on the threads issue). From owner-mpi-context@CS.UTK.EDU Sun Jul 18 11:29:55 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA08289; Sun, 18 Jul 93 11:29:55 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03760; Sun, 18 Jul 93 11:30:49 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 18 Jul 1993 11:30:49 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03600; Sun, 18 Jul 93 11:28:50 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA19772; Sun, 18 Jul 93 10:28:48 CDT Date: Sun, 18 Jul 93 10:28:48 CDT From: Tony Skjellum Message-Id: <9307181528.AA19772@Aurora.CS.MsState.Edu> To: mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu, mpi-core@cs.utk.edu, mpi-pt2pt@cs.utk.edu Subject: Heterogeneous communication proposal Dear colleagues: In order to make inter-vendor MPI implementations and cluster computing with MPI even a reasonable possibility, I suggest that we need to adopt the requirement that data formats follow IEEE Std 1596.5-1993 Data Transfer Formats Optimized for SCI. I propose that debate be started on this topic, and that a presentation be made at MPI in which the features of 1596.5-1993 are discussed and elaborated. Currently, there is little hope for standardization between vendors (or home-brew heterogeneous MPI) implementations. We recognize that XDR is inefficient, so this IEEE standard seems the logical alternative. If we say nothing, implementations will surely become incompatible. I volunteer to champion this effort, but only after the context chapter issues are resolved (so for September meeting or later). It is very important, to my mind, that we embrace other reasonable standards in creating MPI, such as this data standard. - Tony Skjellum Enclosure: From dbg@SLAC.Stanford.EDU Thu Jul 15 15:37:11 1993 Date: Thu, 15 Jul 1993 12:56:25 -0800 From: dbg@SLAC.Stanford.EDU (Dave Gustavson) Subject: SCI Data Transfer Formats standard approved To: sci_announce@hplsci.hpl.hp.com X-Envelope-To: sci_announce@hplsci.hpl.hp.com Content-Transfer-Encoding: 7BIT X-Sender: dbg@scs.slac.stanford.edu Content-Length: 4329 X-Lines: 86 Status: RO In its June 1993 meeting, the IEEE Standards Board approved: IEEE Std 1596.5-1993 Data Transfer Formats Optimized for SCI. (The approved document was Draft 1.0 8Dec92, but with significant edits to clarify the vendor-dependent formats listed in the appendix.) Congratulations to the working group, and especially to working group chairman David James! This new standard defines a set of data types and formats that will work efficiently on SCI for transferring data among heterogeneous processors in a multiprocessor SCI system. This work has attracted much interest, even beyond the SCI community. It solves a difficult problem that must be faced in heterogeneous systems. 
Over the years a great amount of effort has been invested in translating data among dissimilar computers. Computer-bus bridges have incorporated byte swappers to try to handle the big-endian/little-endian conversion. Software and hardware have been used to convert floating point formats. It was always tempting to have the hardware swap byte addresses to preserve full-bus-width integers, which seem to look the same on big- and little-endian machines, and then not swap bytes when passing character strings. But finally we understood that this problem cannot be solved by the hardware (at least until some far-future day when we all use standardized fully tagged self-describing data structures!). The magnitude of the problem became clearer during work on Futurebus+, where we had to deal with multiple bus widths and their interfaces with other standards like VME and SCI. When you observe data flowing along paths of various widths through a connected system, you see how hardware byte-swappers can arbitrarily scramble the data bytes of various number formats such as long integer or floating point. Furthermore, the scrambling may depend on the particular path used and on the state of the bridge hardware at the time the data passed through! Finally the solution became clear: first, keep the relative byte address of each component of a data item fixed as it flows through the complex system. (This is now referred to as the "address invariance" principle.) Thus, character strings arrive unchanged, but other data items may have been created with their bytes in inconvenient (but well-defined) places. Then provide the descriptive tools needed to tell the compiler what the original format of the data was. (That is what this standard does.) The compiler knows the properties of the machine for which it is compiling, and thus now has enough information to allow it to generate code to perform the needed conversions before trying to do arithmetic on the foreign data. For example, when the compiler loads a long integer into a register it may swap bytes to convert from little-endian to big-endian significance, so that the register will contain the correct arithmetic value for use in calculations. Similarly, when an arithmetic result is stored back into a structure that is declared with foreign data types the compiler ensures that the conversions are done appropriately before the data are stored. This capability is critical for work in heterogeneous multiprocessors, but it is also useful for interpreting data tapes or disk files that were written on a different machine as well. The IEEE Std 1596.5-defined descriptors include type (character, integer, floating), sizes, alignment, endian-ness, and atomic properties (can I be certain this long integer is always changed as a unit, never by a series of narrower loads and stores that might allow inconsistent data to be momentarily visible to a sharing machine). The standard also includes a C-code test suite that can be used to check the degree of compliance of a given implementation. The chairman is Dr. David V. James, MS 301-4G, Apple Computer, 20525 Mariani Avenue, Cupertino, CA 95014, 408-974-1321, fax 408-974-9793, dvj@apple.com. Again, my hearty congratulations on a job well done! David Gustavson, SCI (IEEE Std 1596-1992 Scalable Coherent Interface) chair David B. 
Gustavson phone 415/961-3539 SCI (ANSI/IEEE Std 1596 Scalable Coherent Interface) chairman SLAC Computation Research Group, Stanford University fax 415/961-3530 POB 4349, MS 88, Stanford, CA 94309 dbg@slac.stanford.edu

From owner-mpi-context@CS.UTK.EDU Sun Jul 18 14:37:53 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA08453; Sun, 18 Jul 93 14:37:53 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13031; Sun, 18 Jul 93 14:38:46 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 18 Jul 1993 14:38:46 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12892; Sun, 18 Jul 93 14:36:46 -0400 Via: uk.ac.southampton.ecs; Sun, 18 Jul 1993 19:36:23 +0100 Via: brewery.ecs.soton.ac.uk; Sun, 18 Jul 93 19:28:00 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Sun, 18 Jul 93 19:38:12 BST Date: Sun, 18 Jul 93 19:38:15 BST Message-Id: <10706.9307181838@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu, mpi-pt2pt@cs.utk.edu Subject: Re: All about threads

> From: Tony Skjellum
> Does anyone have real experience with multi-threaded message passing
> (has it been done in an application setting, like we assume for MPI)?

Yes. Most code written for transputer-based machines is multithreaded. Until quite recently (a couple of years ago) almost all the machines we had at Southampton were transputer based, so we developed quite a number of multi-threaded applications. We are not unique in this respect. Transputers have been very popular in Europe, and there are a large number of programmers on this side of the Atlantic who are used to programming parallel machines in a multi-threaded style, using synchronous unbuffered message-passing primitives.

> I need immediate guidance (specific guidance) about what multi-threaded
> programming MEANS in MPI, if this was in fact a reasonable requirement

Hmmm - I fear I'm sailing into dangerous waters here, but here goes... My understanding of what I thought was a consensus on threads is that in a multi-threaded environment, each thread can use the usual MPI primitives in the normal way, to communicate with other tasks, oblivious (almost) to any other threads within its own process. I say `almost' oblivious, because obviously if two threads each try to receive a message with the same source, tag and context, then only one of them should succeed if a matching message arrives. I would suggest that MPI stipulate that it be unspecified which thread would get the message in such a situation. As far as I can see, this is the only place where extra semantics creep in.

In particular, I don't see why contexts should introduce any further complication. A message with a particular context value should match a receive specifying the same context value (and source, tag), regardless of which thread posted it (the receive). Different contexts are distinct regardless of any considerations to do with threads. A good use for contexts in a multi-threaded environment would be to enable messages to be directed at particular threads within a process. Of course, in such a scheme the threads need somehow to find out which contexts they are supposed to be using, since contexts are allocated on a per-process (rather than per-thread) basis.
As far as I'm concerned, this is outside the scope of MPI as currently defined, and would depend on the details of the particular threads implementation. However, an example of how things might proceed would be if there was initially one thread which allocated a number of contexts, then created some threads, and passed them each some contexts to use, in a manner specific to the thread creation scheme. > for MPI in the first place, and how multi-threading impacts point-to-point > and collective communication (that is, real programs with examples). I don't think we need examples - just some words along the above lines. > For instance, do we assume named threads or unnamed threads (and would > this help). Is there an examplar threads package ??? I don't think this is an issue, since MPI will not contain any operations that explicitly affect threads. > Here is one problem in a nutshell. We discussed "statically > initialized libraries" from time to time. Well, if there are multiple > overlapping threads, then one would need to have separate contexts > statically initialized for each concurrent thread. Such threads have > group scope. Hence, groups would have to cache contexts for each > concurrent thread (notice: groups cacheing contexts). Not a problem if contexts are allocated on a per-process basis, as I presumed they would be. What do other people think about this? By the way though Tony, I never did really understand what you meant by "statically initialized libraries". I hate to show my ignorance, but this is not a term I have encountered before - would you care to explain please? > I propose that we have a serious discussion on what thread safety > really means for MPI1. I need for there to be well-formulated guidelines and Well, I just had my $0.02 worth... > To summarize, I seriously propose that in absence of an in-depth > debate and specification of what thread safety means in MPI1, that we > abandon this requirement altogether (analogous to the abandonment of Well let's have some debate then. I think it would be disastrous not to address the issue of thread safety. > and collective examples) illustrating same. If this is to be an added > task of my subcommittee [which makes reasonablee sense to me] then I > am eager for assistance nonetheless. I would want to see what people think If my world view is seen as acceptable, then there is no implied extra work for your subcommittee. Ian -- I.Glendinning@ecs.soton.ac.uk Ian Glendinning Tel: +44 703 593368 Dept of Electronics and Computer Science Fax: +44 703 593045 University of Southampton SO9 5NH England From owner-mpi-context@CS.UTK.EDU Sun Jul 18 15:34:01 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA09847; Sun, 18 Jul 93 15:34:01 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15705; Sun, 18 Jul 93 15:34:53 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 18 Jul 1993 15:34:52 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15697; Sun, 18 Jul 93 15:34:50 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA21715; Sun, 18 Jul 93 14:34:36 CDT Date: Sun, 18 Jul 93 14:34:36 CDT From: Tony Skjellum Message-Id: <9307181934.AA21715@Aurora.CS.MsState.Edu> To: jim@meiko.co.uk Subject: Re: I'm going to be out Cc: igl@ecs.soton.ac.uk, mpi-context@cs.utk.edu Jim & Ian, Thanks for your responses. 
I am trying to figure out how to deal with the creation of communicators and contexts in a way that is "thread safe." I will respond specifically to your two mails as soon as I can. For now...

In general, I may have raised a false problem (by expecting MPI calls to be able to do everything needed without "godlike" internal powers). I wanted to describe that the following two calls have specific correctness properties; that is, mpi_contexts_alloc(), mpi_comm_make() should be usable by any group, though that group does not have any safe communication context initially. I supposed (naively) that a statically allocated context for each of these libraries (with global scope) would be sufficient to allow an explanation of how MPI implements them. However, as Jim points out, this is WRONG. MPI needs some magic to make these work, which i) is thread safe, ii) disambiguates the calls made with different group entries, and iii) occurs in the multithreaded environment only. In the single-threaded environment, the overlapping processes will define the order.

Ian asks about "contexts allocated on a per process basis." I have no idea what this means. :-)

Know that I am personally in favor of having threads; I am just trying to reveal to everyone the complexity that is being incorporated by supporting multiple threads, and to make sure that we agree specifically on what that complexity shall be. Otherwise, I keep missing the mark with the semantics of the context chapter :-)

Calmly, Tony

From owner-mpi-context@CS.UTK.EDU Sun Jul 18 17:19:30 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA10768; Sun, 18 Jul 93 17:19:30 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20972; Sun, 18 Jul 93 17:20:12 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 18 Jul 1993 17:20:11 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20899; Sun, 18 Jul 93 17:17:27 -0400 Via: uk.ac.southampton.ecs; Sun, 18 Jul 1993 22:17:20 +0100 Via: brewery.ecs.soton.ac.uk; Sun, 18 Jul 93 22:08:56 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Sun, 18 Jul 93 22:19:08 BST Date: Sun, 18 Jul 93 22:19:12 BST Message-Id: <10742.9307182119@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu, mpi-pt2pt@cs.utk.edu Subject: more on threads

> From: Tony Skjellum
> Thanks for your responses. I am trying to figure out how to deal with
> the creation of communicators and contexts in a way that is "thread safe."
> I will respond specifically to your two mails as soon as I can. For now...

Hmmm, I guess my message dealt mainly with using contexts once they'd been created. Also, having thought a bit more about it, maybe a bit more needs to be said about what "thread safe" means than I suggested. Let's take mpi_contexts_alloc() as an example. How I'd expect this to be used normally would be for it to be called by one thread per process in the group doing the allocating. But what happens if, say, a different thread in one of those processes `simultaneously' makes a call to mpi_contexts_alloc()? This could happen if, for example, one of the processes in the group also belonged to another group, and wanted to allocate some contexts for use in it, using a different thread to make the call to mpi_contexts_alloc().
Presumably in such a case "thread safe" means that the behaviour of the system is as if the calls in the two threads were executed sequentially in some order. In other words they have to be treated as atomic operations. I guess that since two events are rarely simultaneous, the situation we are considering is when one thread calls mpi_contexts_alloc and begins to execute it, but then another thread also calls it, before the first invocation is complete. What is required is that there is some form of locking, so that the first invocation executes completely before the second one starts. Of course all of this does not prevent the example with the overlapping groups from being erroneous, since although the allocations are atomic, they may well occur in the wrong order. It would be up to the programmer to make sure one thread didn't make the call till the other had finished. How best to do that without introducing special thread support such as barriers to synchronize them? I'm not sure. A use for sending messages to yourself perhaps? The same issues apply to all of the collective operations, so for example I'd expect calls to mpi_barrier() to be made by one thread on each process in the group to be synchronized. > Ian asks about "contexts allocated on a per process basis." I have no > idea what this means. :-) Now I come to think of it, neither do I :-) In the light of the above discussion about making calls to mpi_contexts_alloc() an atomic operation, what I said in my previous message about contexts being passed to threads by their creators is not the only way of doing things. The threads could indeed create their own contexts, although there is that problem of interleaving... I suspect I oughtn't to mention this, but I suppose that it could be resolved if groups were objects with globally unique identifiers that were passed as part of the message header. Nope? Thought not... > Know that I am personally in favor of having threads, I am just trying I'm pleased to hear that. > to reveal to everyone the complexity that is being incorporated by > supporting the multiple threads, and that we agree specifically on what > that complexity shall be. Otherwise, I keep missing the mark with the > semantics of the context chapter :-) Yes, we should come to some agreement. As I say though, I don't think this is specifically a problem that affects the work of the context subcommittee, but rather that of point to point and collective communications. Ian From owner-mpi-context@CS.UTK.EDU Mon Jul 19 11:24:37 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA15180; Mon, 19 Jul 93 11:24:37 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27426; Mon, 19 Jul 93 11:24:42 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Jul 1993 11:24:41 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27417; Mon, 19 Jul 93 11:24:39 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA25217; Mon, 19 Jul 93 10:24:34 CDT Date: Mon, 19 Jul 93 10:24:34 CDT From: Tony Skjellum Message-Id: <9307191524.AA25217@Aurora.CS.MsState.Edu> To: jwf@lion.Parasoft.COM Subject: Re: All about threads Cc: mpi-context@cs.utk.edu Jon, my main problem is getting contexts/communicators in a multi-thread environment. I am not worried at all about things once this has been set up. 
As long as people will believe that "MPI can internally cope with multi-threading issues, both in collective operations [general] and in the collective operations needed to get contexts or communicators for groups," then I am satisfied. I would like to know how overlapping concurrent threads can be disambiguated when they both call "mpi_make_comm()" [which could be with different groups, or with the same group, in the case where both threads are over the same process group].

Thanks, Tony

From jwf@lion.Parasoft.COM Mon Jul 19 09:36:12 1993 Received: from sampson.ccsf.caltech.edu by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA24901; Mon, 19 Jul 93 09:36:10 CDT Received: from elephant (elephant.parasoft.com) by sampson.ccsf.caltech.edu with SMTP id AA03674 (5.65c/IDA-1.4.4 for tony@Aurora.CS.MsState.Edu); Mon, 19 Jul 1993 07:36:07 -0700 Received: from lion.parasoft by elephant (4.1/SMI-4.1) id AA25394; Mon, 19 Jul 93 07:34:10 PDT Received: by lion.parasoft (4.1/SMI-4.1) id AA00154; Mon, 19 Jul 93 07:38:40 PDT Date: Mon, 19 Jul 93 07:38:40 PDT From: jwf@lion.Parasoft.COM (Jon Flower) Message-Id: <9307191438.AA00154@lion.parasoft> To: tony@Aurora.CS.MsState.Edu Subject: Re: All about threads Status: R

Tony, I would suggest that you try to canvass transputer people, since multiple user-level threads were the accepted programming paradigm in Occam. We implemented multi-threading at the user level in Express via active messages -- i.e., an active message could spawn a thread which then didn't go away, leading to multiple threads at the user level. We didn't do anything very clever to make this work -- message selection at the recipient was still by (simple) message type, although we allowed an extension in which a user process could claim a contiguous set of types, and then wildcard on it. I'm not sure this latter feature was ever used. In the Express model I don't think you really need anything else.

We do "groups" by passing horrible argument lists to collective functions, so that's already thread-safe (I think). We return error codes, and store subsidiary information in a global structure. This info is then returned with a separate function call, so it could be made thread-safe, although our current implementation isn't. This should be fixed in MPI but I don't think it overlaps contexts too much.

Jon Flower

From owner-mpi-context@CS.UTK.EDU Mon Jul 19 11:46:42 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA15539; Mon, 19 Jul 93 11:46:42 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29074; Mon, 19 Jul 93 11:47:28 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Jul 1993 11:47:27 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29020; Mon, 19 Jul 93 11:46:25 -0400 Via: uk.ac.southampton.ecs; Mon, 19 Jul 1993 16:46:15 +0100 Via: brewery.ecs.soton.ac.uk; Mon, 19 Jul 93 16:37:49 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Mon, 19 Jul 93 16:48:02 BST Date: Mon, 19 Jul 93 16:48:04 BST Message-Id: <11100.9307191548@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu, mpi-pt2pt@cs.utk.edu Subject: Re: more on threads

Please excuse me, but I feel I'm going to have to indulge in a bit of commenting on my own comments ;-)

> From: Ian Glendinning
> invocation executes completely before the second one starts.
Of course all > of this does not prevent the example with the overlapping groups from being > erroneous, since although the allocations are atomic, they may well occur in > the wrong order. It would be up to the programmer to make sure one thread > didn't make the call till the other had finished. How best to do that > without introducing special thread support such as barriers to synchronize > them? I'm not sure. A use for sending messages to yourself perhaps? The It now occurs to me that the obvious method of doing this is for each thread to use a separate context to allocate the new contexts (or to perform whatever other collective operation it wants to). That is, go back to using a communicator as an argument to mpi_contexts_alloc() rather than just a group. Now there must have been good reasons for the change to just a group in the last draft, but I confess that they evade me. Could someone explain what they were please? In fact thinking about it, the whole point of having contexts in the first place is really to stop parallel threads treading on each other's toes, in the sense that message transfers invoked by `parallel procedures' (a la contexts chapter) continue to move around the system asynchronously after the procedure returns. In this sense I'm regarding the message-passing system as a thread running in parallel with the user's code here, which in fact it is. However, when we move over to user-defined threads, the same requirement exists to use contexts to protect them from each other. Ian From owner-mpi-context@CS.UTK.EDU Mon Jul 19 11:59:22 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA15652; Mon, 19 Jul 93 11:59:22 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29868; Mon, 19 Jul 93 12:00:04 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Jul 1993 12:00:03 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29830; Mon, 19 Jul 93 11:59:13 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA26672; Mon, 19 Jul 93 10:59:05 CDT Date: Mon, 19 Jul 93 10:59:05 CDT From: Tony Skjellum Message-Id: <9307191559.AA26672@Aurora.CS.MsState.Edu> To: igl@ecs.soton.ac.uk, mpi-context@cs.utk.edu, mpi-pt2pt@cs.utk.edu Subject: Re: more on threads >From owner-mpi-context@CS.UTK.EDU Mon Jul 19 10:47:49 1993 >Received: from Walt.CS.MsState.Edu by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); > id AA26451; Mon, 19 Jul 93 10:47:48 CDT >Received: from CS.UTK.EDU by Walt.CS.MsState.Edu (4.1/6.0s-FWP); > id AA15070; Mon, 19 Jul 93 10:47:47 CDT >Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) > id AA29074; Mon, 19 Jul 93 11:47:28 -0400 >X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Jul 1993 11:47:27 EDT >Errors-To: owner-mpi-context@CS.UTK.EDU >Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) > id AA29020; Mon, 19 Jul 93 11:46:25 -0400 >Via: uk.ac.southampton.ecs; Mon, 19 Jul 1993 16:46:15 +0100 >Via: brewery.ecs.soton.ac.uk; Mon, 19 Jul 93 16:37:49 BST >From: Ian Glendinning >Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; > Mon, 19 Jul 93 16:48:02 BST >Date: Mon, 19 Jul 93 16:48:04 BST >Message-Id: <11100.9307191548@holt.ecs.soton.ac.uk> >To: mpi-context@cs.utk.edu, mpi-pt2pt@cs.utk.edu >Subject: Re: more on threads >Status: R > >Please excuse me, but, I feel I'm going to have to indulge in a bit of >commenting on my 
own comments ;-) > >> From: Ian Glendinning > >> invocation executes completely before the second one starts. Of course all >> of this does not prevent the example with the overlapping groups from being >> erroneous, since although the allocations are atomic, they may well occur in >> the wrong order. It would be up to the programmer to make sure one thread >> didn't make the call till the other had finished. How best to do that >> without introducing special thread support such as barriers to synchronize >> them? I'm not sure. A use for sending messages to yourself perhaps? The > >It now occurs to me that the obvious method of doing this is for each thread >to use a separate context to allocate the new contexts (or to perform >whatever other collective operation it wants to). That is, go back to using a >communicator as an argument to mpi_contexts_alloc() rather than just a group. >Now there must have been good reasons for the change to just a group in the >last draft, but I confess that they evade me. Could someone explain what they >were please? In fact thinking about it, the whole point of having contexts in >the first place is really to stop parallel threads treading on each other's >toes, in the sense that message transfers invoked by `parallel procedures' >(a la contexts chapter) continue to move around the system asynchronously >after the procedure returns. In this sense I'm regarding the message-passing >system as a thread running in parallel with the user's code here, which in >fact it is. However, when we move over to user-defined threads, the same >requirement exists to use contexts to protect them from each other. > Ian > Quiescence as a requirement for library safety was the original mechanism we proposed. This meant that: i) no pending operations were present on a context ii) none would occur out of band while a library was using that context iii) mpi_contexts_alloc() & mpi_comm_make relied on quiescence to operate properly At the last meeting, there was pummeling regarding the quiescence requirement, in as much as, libraries could not guarantee their own safety (analogous to a stack-based language giving new, automatic variables to an invocation of a function). Changing the semantics to group instead of communicator in these two calls was meant to relieve the user of this worry. We also thereby removed the "you need a safe context to get a safe context." If multi-threading needs this rule, then we have to go back to a more restrictive model of computing for the non-threaded case; ie, that quiescence property. I believe that for the single-threaded case, the semantics currently posed are good. However, I need to understand how we manage the multi-threaded case... I need an explanation of how the following scenario will be handled (simple as possible) 1 group, 2 threads on the entire group, both threads call mpi_comm_make. Using contexts to disambiguate threads sounds good to me, off hand, but isn't that like giving threads "names." Do we want this association? Admittedly, I need to think more about Ian's comments. 
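To restate that scenario in code form, using the draft signature mpi_comm_make(sync_group, comm_group, comm_new) (the two columns just indicate two threads of one process, which MPI itself has no way to name):

   /* Every process of group g runs two threads; both threads make a
      collective call over the same group at about the same time.       */

       /* thread 1 */                     /* thread 2 */
       mpi_comm_make(g, g, &commX);       mpi_comm_make(g, g, &commY);

   /* Each call must run an internal agreement protocol over g to pick a
      fresh context, yet neither call is handed a safe context to run
      that protocol in.  Unless the implementation can pair thread 1's
      invocation in one process with thread 1's (and not thread 2's) in
      another, the internal messages of the two allocations can cross,
      and commX/commY need not come out consistent across the group.    */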
- Tony From owner-mpi-context@CS.UTK.EDU Mon Jul 19 13:11:31 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA16141; Mon, 19 Jul 93 13:11:31 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05065; Mon, 19 Jul 93 13:12:14 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 19 Jul 1993 13:12:12 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04980; Mon, 19 Jul 93 13:12:06 -0400 Via: uk.ac.southampton.ecs; Mon, 19 Jul 1993 18:11:15 +0100 Via: brewery.ecs.soton.ac.uk; Mon, 19 Jul 93 18:02:45 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Mon, 19 Jul 93 18:12:57 BST Date: Mon, 19 Jul 93 18:12:59 BST Message-Id: <11130.9307191712@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu, pt2pt@cs.utk.edu Subject: Re: more on threads > From: Tony Skjellum > At the last meeting, there was pummeling regarding the quiescence requirement, > in as much as, libraries could not guarantee their own safety (analogous > to a stack-based language giving new, automatic variables to an invocation > of a function). Changing the semantics to group instead of communicator > in these two calls was meant to relieve the user of this worry. We also > thereby removed the "you need a safe context to get a safe context." > If multi-threading needs this rule, then we have to go back to a more > restrictive model of computing for the non-threaded case; ie, that quiescence Thanks for that explanation. All is now clear... > property. I believe that for the single-threaded case, the semantics > currently posed are good. However, I need to understand how we manage > the multi-threaded case... I don't see any cleaner way than using contexts. If people are desperate to have the non-quiescent allocator, perhaps we could compromise and have one of each? People wanting to use threads could then use the one with the requirement for quiescence. > I need an explanation of how the following scenario will be handled > (simple as possible) > > 1 group, 2 threads on the entire group, both threads call mpi_comm_make. > > Using contexts to disambiguate threads sounds good to me, off hand, but > isn't that like giving threads "names." Do we want this association? Yes it is like that, and yes we do want it, if we want to make threads useable. And to do that, you've somehow got to identify them. This does it in a way that's not tied to any particular threads implementation. 
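For instance, something along these lines, using the draft routines MPI_CONTEXTS_ALLOC(group, n, contexts) and MPI_COMM_BIND(group, context, comm) (the C bindings are guesses, and the thread-creation call is pseudo-code, since MPI says nothing about threads):

   MPI_Context ctx[2];
   MPI_Comm    tcomm[2];

   /* Done once, by one thread per process, before the workers exist.   */
   MPI_CONTEXTS_ALLOC(group, 2, ctx);          /* collective over group  */
   MPI_COMM_BIND(group, ctx[0], &tcomm[0]);    /* local constructors     */
   MPI_COMM_BIND(group, ctx[1], &tcomm[1]);

   create_thread(worker, tcomm[0]);            /* pseudo-code, not MPI   */
   create_thread(worker, tcomm[1]);

   /* Inside worker(my_comm): collective calls made "simultaneously" by
      the two threads, each on its own my_comm, cannot be confused with
      one another, because each runs in its own context -- which is the
      sense in which a context acts as a thread's "name".                */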
Ian From owner-mpi-context@CS.UTK.EDU Tue Jul 20 07:32:36 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA23616; Tue, 20 Jul 93 07:32:36 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14636; Tue, 20 Jul 93 07:33:26 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 07:33:25 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14618; Tue, 20 Jul 93 07:32:52 -0400 Via: uk.ac.southampton.ecs; Tue, 20 Jul 1993 12:32:28 +0100 Via: brewery.ecs.soton.ac.uk; Tue, 20 Jul 93 12:23:58 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Tue, 20 Jul 93 12:34:13 BST Date: Tue, 20 Jul 93 12:34:15 BST Message-Id: <11655.9307201134@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu Subject: Quiescent contexts, threads, and collective communications Hi, it seems to me that the recent discussion in mpi-context about thread safety is also relevant to collective communications, because the routine mpi_contexts_alloc(), which we've been mainly discussing, is a collective operation. When used in conjunction with threads, it would seem elegant to allow a context to be passed as an argument to the routine (as part of a communicator) rather than just a group. Unfortunately this would add the requirement that the context be quiescent at the time of the call, which people do not seem to like. However, as presently defined, the collective communication routines *do* take communicators as arguments, and so surely there is also a requirement for quiescence here, which no one seems to have objected to. Am I missing something here? Ian From owner-mpi-context@CS.UTK.EDU Tue Jul 20 07:40:18 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA23661; Tue, 20 Jul 93 07:40:18 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15042; Tue, 20 Jul 93 07:41:08 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 07:41:07 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15034; Tue, 20 Jul 93 07:41:02 -0400 Date: Tue, 20 Jul 93 12:40:43 BST Message-Id: <20361.9307201140@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: discussion of process dynamicism, publish, subscribe, etc. To: lyndon@epcc.ed.ac.uk, Tony Skjellum , mpi-context@cs.utk.edu In-Reply-To: L J Clarke's message of Mon, 12 Jul 93 09:55:35 BST Reply-To: lyndon@epcc.ed.ac.uk > > ------------------------------------------------- > > >From moose@Think.COM Fri Jul 9 11:11:00 1993 > > From: Adam Greenberg > > Date: Fri, 9 Jul 93 12:10:49 EDT > > To: tony@Aurora.CS.MsState.Edu > > Subject: discussion > > Content-Length: 972 > > X-Lines: 19 > > Status: RO > > > > Date: Tue, 6 Jul 93 19:06:11 CDT > > From: Tony Skjellum > > > > I would like to discuss your concerns about inter-group > > facilities, and get you involved in the revised context > > proposal, prior to next meeting. Can you summarize > > worries, and how we can allay them reasonably, without > > giving up MPMD programming capabilities. > > > > I don't think you can allay my fears. I (as TMC) don't believe that MPI should > > contain mechanisms for coupling disparate programs together. > > I share this belief. [Lyndon] There may be a misunderstanding here. 
IF you (Adam) mean that by "disparate programs" different program texts - in other words you (as TMC) believe that MPI should only address SPMD programs - then I strongly disagree with you. IF you (Adam) mean different MPMD systems, as you describe below, then I strongly agree with you. Is this crystal clear? I hope so!

Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Tue Jul 20 10:51:16 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA25368; Tue, 20 Jul 93 10:51:16 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27518; Tue, 20 Jul 93 10:51:48 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 10:51:47 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27510; Tue, 20 Jul 93 10:51:45 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA17262; Tue, 20 Jul 93 09:51:33 CDT Date: Tue, 20 Jul 93 09:51:33 CDT From: Tony Skjellum Message-Id: <9307201451.AA17262@Aurora.CS.MsState.Edu> To: lyndon@epcc.ed.ac.uk Subject: red herrings :-) Cc: mpi-context@cs.utk.edu

Lyndon, the issues I see before us are as follows.

I. Thread dilemma for context allocation / safe communication allocation
------------------------------------------------------------------------

First, Ian's assertion that context == thread name is wrong. Each concurrent thread can have one or more contexts. The single-threaded limit can have more than one context... Contexts provide thread insulation, however, so are useful with threads.

We have been asked to adjust the context model so that libraries have control over their "safety" regardless of the state of message passing when they are invoked in a loosely synchronous SPMD sense (Pierce requirements). I modified the proposal to do so (with a reasonable change). I do not see how to implement the semantics in a thread-safe manner, and I am soliciting opinions on whether or not to go back to the previous syntax, which also required the stronger quiescence property [which Ian has noted is thread safe].

i) mpi_comm_make(group_A,...) executes in thread-parallelism with
ii) mpi_comm_make(group_B,...)

where group B and group A have overlap. mpi_comm_make() takes a group (possibly formed only locally, though in parallel). MPI would need a mechanism (an implementation-specific mechanism) to get a new context for each, while dealing with the multi-threadedness problem. Can one give an example of how to do this?

NB. The collective communication subcommittee must handle exactly this problem, for all collective operations, so I wonder what they have decided? [Note, different contexts provide thread insulation, so that is one answer].

Model A: If MPI were in control of the thread package, in that it could KNOW when a new concurrent thread were formed, and could create a new communication context for it, then we would be far along to a solution. That is, we would actually have mpi_new_thread() or the like, e.g.,

   mpi_new_thread(group, comm)
   IN  group
   OUT comm

A quiescent communicator is the output, providing bootstrap communication. The call could increase the minimum granularity of concurrent threads (significantly).
Model A (MPI has direct access to thread package)

   |------------------------------------|
   | User code + Library code           |
   |------------------------------------|
   | MPI [with hooks to thread package] |
   |------------------------------------|
   | Stipulated Thread package          |
   |------------------------------------|

Model B (MPI is just another thread-safe piece of software that has no idea multithreadedness is occurring)

   |------------------------------------|
   | User code + Library code           |
   |------------------------------------|
   | User-level Thread package          |
   |------------------------------------|
   | MPI [no hooks to thread package]   |
   |------------------------------------|

In Model B: The user has to make sure that communicators don't foul up because of the use of threads. This means that a quiescent communicator for each concurrent thread (group scope) must be created. It must be bootstrapped from MPI_COMM_ALL. In this model, one returns to using communicators to make communicators (per the last MPI meeting). We achieve thread safety, but require the quiescence property. By the way, I think model B is more likely to occur in practice.

Default action
--------------
Model B is the default action for modifying the proposal (actually it is unmodifying the proposal, to the previous meeting's state).

Desired action
--------------
We find a way to make libraries have control over their own communication safety, while permitting multi-threadedness.

II. Inter-communication
-----------------------

To achieve inter-communication, we are forced to weaken the meaning of communicator, and permit out-of-band communication. We have to add many functions. Jim, Adam, and others are against this stuff. I like having inter-group communication, personally, but I am arriving at the opinion that we cannot get it into MPI1. Therefore, I personally favor dropping it, so that the standard itself can go forward (as we have sufficient problems to describe intra-communication in the fully multi-threaded model).

Default action
--------------
Restructure the chapter to put all inter-communication stuff last, so we can get the intra-communication stuff passed on first reading. Modify semantics to promote intra-communication safety, putting greater onus on inter-communication mechanisms, rather than weakening basic mechanisms to allow "back-door" inter-communication.

Desired action
--------------
We develop better primitives for intercommunication before the August 5 deadline (cf, Henderson stuff) and get them in the chapter. There will still be resistance. Requirements include that
i) there will never be out-of-band communication
ii) only one communicator is bound to a context in a process
iii) user servers can be supported

--------------------------------------------------------------------

Please give whatever input you can.

- Tony

. . . . . . . . . . "There is no lifeguard at the gene pool." - C. H. Baldwin "In the end ... there can be only one."
- Ramirez (Sean Connery) in "Entia sunt Multiplicanda Praeter necessitatem ad Infinitum" - MPI Slogan - - - Anthony Skjellum, MSU/ERC, (601)325-8435; FAX: 325-8997; tony@cs.msstate.edu From owner-mpi-context@CS.UTK.EDU Tue Jul 20 10:54:36 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA25421; Tue, 20 Jul 93 10:54:36 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27794; Tue, 20 Jul 93 10:55:17 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 10:55:16 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27786; Tue, 20 Jul 93 10:55:15 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA17278; Tue, 20 Jul 93 09:55:01 CDT Date: Tue, 20 Jul 93 09:55:01 CDT From: Tony Skjellum Message-Id: <9307201455.AA17278@Aurora.CS.MsState.Edu> To: igl@ecs.soton.ac.uk Subject: Re: more on threads Cc: mpi-context@cs.utk.edu >From @ecs.southampton.ac.uk,@brewery.ecs.soton.ac.uk:igl@ecs.southampton.ac.uk Tue Jul 20 06:18:32 1993 >Received: from sun2.nsfnet-relay.ac.uk by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); > id AA16166; Tue, 20 Jul 93 06:18:30 CDT >Via: uk.ac.southampton.ecs; Tue, 20 Jul 1993 12:18:06 +0100 >Via: brewery.ecs.soton.ac.uk; Tue, 20 Jul 93 12:09:39 BST >From: Ian Glendinning >Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; > Tue, 20 Jul 93 12:19:54 BST >Date: Tue, 20 Jul 93 12:19:56 BST >Message-Id: <11652.9307201119@holt.ecs.soton.ac.uk> >To: tony@Aurora.CS.MsState.Edu >Subject: Re: more on threads >Status: R > >> From tony@Edu.MsState.CS.Aurora Tue Jul 20 01:56:01 1993 > >> I have to work on this. Returning to the previous model seems a >> step backwards, except for the thread issue. I want to hear ALL of >> Jim's views (and other views) before proceeding. I would like it to > >Yes, me too. > >> be possible, given that there are thread-specific calls to lock out >> other threads, for users and libraries to be able to get safe communication >> context/communicator without having one. This is a sine qua non at >> this point, for me. > >I agree it's difficult to justify not having that facility if it's possible. >When you say "given that there are thread-specific calls", I assume you mean >"If there are...". > >> I think that there needs to be some further recognition of the complexity >> that multiple threads is adding to the context chapter, and some understanding >> of how this will impact the single-thread case. I want the solution >> NOT to impact the semantics for the case where single-thread operation >> is to be used. > >I also agree that this is desirable, but unfortunately it seems to conflict >with the desire to lose the "need a clean context to get a context" >requirement. Maybe someone else can come up with a way out. However, as I >said before, I don't think this is a problem specific to the contexts >chapter - it's just that you guys have thought about it more. > Ian > If we do not return to the model where context is needed to get context, but find a way to provide context given group, and still can be thread safe, then we are OK. 
I need to see how it could be implemented, to be comfortable, and I want to spread the responsibility for such design choic to more people than myself :-) - Tony From owner-mpi-context@CS.UTK.EDU Tue Jul 20 10:56:24 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA25442; Tue, 20 Jul 93 10:56:24 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27913; Tue, 20 Jul 93 10:57:10 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 10:57:08 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA27899; Tue, 20 Jul 93 10:57:07 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA17392; Tue, 20 Jul 93 09:56:57 CDT Date: Tue, 20 Jul 93 09:56:57 CDT From: Tony Skjellum Message-Id: <9307201456.AA17392@Aurora.CS.MsState.Edu> To: igl@ecs.soton.ac.uk, mpi-collcomm@cs.utk.edu, mpi-context@cs.utk.edu Subject: Re: Quiescent contexts, threads, and collective communications >From owner-mpi-context@CS.UTK.EDU Tue Jul 20 06:33:42 1993 >Received: from Walt.CS.MsState.Edu by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); > id AA16171; Tue, 20 Jul 93 06:33:42 CDT >Received: from CS.UTK.EDU by Walt.CS.MsState.Edu (4.1/6.0s-FWP); > id AA19264; Tue, 20 Jul 93 06:33:41 CDT >Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) > id AA14636; Tue, 20 Jul 93 07:33:26 -0400 >X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 07:33:25 EDT >Errors-To: owner-mpi-context@CS.UTK.EDU >Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) > id AA14618; Tue, 20 Jul 93 07:32:52 -0400 >Via: uk.ac.southampton.ecs; Tue, 20 Jul 1993 12:32:28 +0100 >Via: brewery.ecs.soton.ac.uk; Tue, 20 Jul 93 12:23:58 BST >From: Ian Glendinning >Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; > Tue, 20 Jul 93 12:34:13 BST >Date: Tue, 20 Jul 93 12:34:15 BST >Message-Id: <11655.9307201134@holt.ecs.soton.ac.uk> >To: mpi-context@cs.utk.edu, mpi-collcomm@cs.utk.edu >Subject: Quiescent contexts, threads, and collective communications >Status: R > >Hi, > it seems to me that the recent discussion in mpi-context about thread >safety is also relevant to collective communications, because the routine >mpi_contexts_alloc(), which we've been mainly discussing, is a collective >operation. When used in conjunction with threads, it would seem elegant >to allow a context to be passed as an argument to the routine (as part of a >communicator) rather than just a group. Unfortunately this would add the >requirement that the context be quiescent at the time of the call, which >people do not seem to like. However, as presently defined, the collective >communication routines *do* take communicators as arguments, and so surely >there is also a requirement for quiescence here, which no one seems to have >objected to. Am I missing something here? > Ian > I agree with Ian. If we move back to having comm as the first argument to mpi_make_comm() and mpi_contexts_alloc(), then these are just collective calls, and all our worries about multi-threadedness appear to apply equally well to collcomm chapter. 
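Side by side, the point is just this (draft names, with the argument lists of the two draft versions):

   /* Current draft: group in, so no prior safe context is needed, but
      also no way for the implementation to separate concurrent calls
      made by different threads of the same process.                    */
   mpi_contexts_alloc(group, n, contexts);

   /* Previous draft: communicator in, so the call is an ordinary
      collective operation over comm, exactly like mpi_barrier(comm);
      whatever rule collcomm adopts for two threads of one process
      invoking a collective on the same communicator then covers
      context allocation unchanged.                                     */
   mpi_contexts_alloc(comm, n, contexts);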
-Tony From owner-mpi-context@CS.UTK.EDU Tue Jul 20 11:08:46 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA25647; Tue, 20 Jul 93 11:08:46 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28787; Tue, 20 Jul 93 11:09:40 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 11:09:39 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28779; Tue, 20 Jul 93 11:09:38 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA17571; Tue, 20 Jul 93 10:09:21 CDT Date: Tue, 20 Jul 93 10:09:21 CDT From: Tony Skjellum Message-Id: <9307201509.AA17571@Aurora.CS.MsState.Edu> To: lyndon@epcc.ed.ac.uk Subject: Re: mpi-context: multithreaded, quiescent, ... Cc: mpi-context@cs.utk.edu Lyndon, your assessment of domain requirements seem useful. I note that we must hear comments from the collcomm committee about their opinions! Perhaps our e-mails crossed atlantic and you did not see my long e-mail before you wrote yours. In any event, on item "I" we are in agreement. Concerning inter communication, we still have to think. Thanks, Tony From owner-mpi-context@CS.UTK.EDU Tue Jul 20 12:36:00 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA25992; Tue, 20 Jul 93 12:36:00 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05130; Tue, 20 Jul 93 12:36:29 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 12:36:28 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05122; Tue, 20 Jul 93 12:36:24 -0400 Date: Tue, 20 Jul 93 17:36:18 BST Message-Id: <20945.9307201636@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: more multithreaded nonquiescence To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Hi y'all On a little thought, it seems that the MPI definition required by my previous email message is fairly simple. Regarding quiescence: "MPI collective operations may contain internal communications. Such communications are isolated from those of the user. They cannot interfere with user communications and they cannot be interfered with by user communications." Regarding threads: "It is erroneous to invoke MPI collective operations within the same CONTEXT simultaneously in two or more threads of the same process." Since I am advocating that a CONTEXT be a pair of contexts (case significant) then we can IMAGINE an implementation internal macro something like "__MPI_PRIVATE_COMM(user_comm)" which evaluates to the private communicator associated with the given user communicator. The MPI collective operations routines are then implemented in terms of point-to-point routines, using the expression "__MPI_PRIVATE_COMM(user_comm)" instead of simply "user_comm" Notice that this can evaluate to an invalid communicator for communicators which are used for inter-group communications and therefore are not valid for MPI collective oeprations. Of course, these comments are intended exclusively in the context of "implementor notes", not part of the MPI standard definition. Hope this helps! 
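For instance, a broadcast written over point-to-point in that style might look like the following implementor sketch (the point-to-point calls, the internal tag and the C bindings are all assumed for illustration; only __MPI_PRIVATE_COMM itself comes from the suggestion above):

   int mpi_bcast(void *buf, int count, MPI_Datatype type,
                 int root, MPI_Comm user_comm)
   {
       /* Same group as user_comm, but the private half of the CONTEXT
          pair, so these sends and receives can never match anything
          the user posts on user_comm.                                  */
       MPI_Comm   priv = __MPI_PRIVATE_COMM(user_comm);
       MPI_Status status;
       int        rank, size, r;

       MPI_COMM_RANK(priv, &rank);
       MPI_COMM_SIZE(priv, &size);
       if (rank == root) {
           for (r = 0; r < size; r++)
               if (r != root)
                   mpi_send(buf, count, type, r, MPI_BCAST_TAG, priv);
       } else {
           mpi_recv(buf, count, type, root, MPI_BCAST_TAG, priv, &status);
       }
       return MPI_SUCCESS;
   }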
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Jul 20 14:45:17 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA26981; Tue, 20 Jul 93 14:45:17 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13789; Tue, 20 Jul 93 14:45:14 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 14:45:13 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13766; Tue, 20 Jul 93 14:45:08 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA19049; Tue, 20 Jul 93 13:44:57 CDT Date: Tue, 20 Jul 93 13:44:57 CDT From: Tony Skjellum Message-Id: <9307201844.AA19049@Aurora.CS.MsState.Edu> To: lyndon@epcc.ed.ac.uk Subject: Re: red herrings :-) Cc: mpi-context@cs.utk.edu Lyndon, your comments are great... here is stuff from hender. Please work with him and try to improve inter-comm stuff. I will do intra-comm stuff... One note on your comments... My favoring of dropping inter-comm is purely because of the political pressure to do so. Technically, I favor a superior model of message passing that includes such capabilities. I did not realize that the following has not been reflected. I will cc the reflector on this mail to be sure. Thanks, Tony ------------------------------------------------ From hender@macaw.fsl.noaa.gov Fri Jul 16 16:05:22 1993 Received: from gw1.fsl.noaa.gov by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA09815; Fri, 16 Jul 93 16:05:18 CDT Received: from macaw.fsl.noaa.gov by gw1.fsl.noaa.gov (5.65/Ultrix3.0-C) id AA10879; Fri, 16 Jul 1993 18:02:58 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA05769; Fri, 16 Jul 93 15:03:08 MDT Date: Fri, 16 Jul 93 15:03:08 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307162103.AA05769@macaw.fsl.noaa.gov> To: tony@Aurora.CS.MsState.Edu Subject: My comments on Jim's coments, etc. Cc: jim@meiko.co.uk Status: R Tony, Here are my comments on Jim's comments and a (hopefully constructive) proposal. I am attempting to take the "user's" perspective. I agree with Jim on some things, often not for the same reason... I share Jim's desire to help in a constructive way. If context doesn't happen at the next meeting, I fear for the success of MPI. Please let me know what you think of all this. I know there are things in the details of the proposal which are not quite right. Also, I realize that right now you're on the spot to produce. If there is anything you can "farm out", I'll be happy to assist... As usual, feel free to reflect, forward, etc. I've sent a copy to Jim also. Tom 1) I agree with Jim that the current proposal is too complicated. That's why I asked you lots of questions about "which function should I use?" in my comments on the draft. The "security" issue also has some validity. I don't feel quite as strongly as Jim about this. I do feel that "non-expert" users should be able to have "security" without thinking about it too much. We have run into this problem in point-to-point. We "solved" the problem by introducing an EXPLICIT multi-level organization to handle "normal" vs. "expert" users. I think this approach may solve some of our problems in the context draft. 
This means that the "easy" routines must be able to handle everything that a "normal" user would want to do. We could then vote on whether to include the "expert" routines at all... In my view, contexts and groups are "low-level" "expert-user" things. The communicator is a "high-level" thing that "normal" users should be able to deal with. "Normal" users should not have to worry about groups and contexts. I make a more detailed proposal below... 2) I confess that I don't completely follow the arguments about difficulty of the "who sent me this" inquiry. How does this work in CHIMP? Can we look at the source code (Lyndon)? As for the unique one-to-one binding of contexts (a context may be bound to one and only one group), this can be enforced for "normal users" in the multilevel scheme (see proposal below...). 3) Jim points out that inter-group communicators may be hard to use in the current proposal. I feel that some routines are "high-level" enough to make them easy to use. MPI_COMM_PUBLISH_SUBSCRIBE() and MPI_COMM_JOIN() seem fairly straightforward. Is Jim objecting to the use of a pair of communicators (Send_to_A, Recv_from_A) rather than a single one (communicate_with_A)? 4) I am less concerned than Jim with eliminating inter-group communication. I don't see a huge difference between explicit inter-group communication and Jim's "group fusion" (which is already supported in your current draft using MPI_COMM_JOIN() as I understand it...). 5) I agree that we need "fast" creation of communicators to support libraries. Library writers can use the "expert" routines to do this... 6) If Jim puts together another proposal, we MUST discuss it in detail via email well in advance of the next meeting. PROPOSAL: MULTI-LEVEL ORGANIZATION OF GROUP/CONTEXT/COMMUNICATOR ROUTINES I propose that we use a "multi-level" organization of routines to solve the complexity problems in the current context draft. This approach has been used successfully in the point-to-point draft to solve similar problems. Routines in the current proposal are re-organized into two levels: "basic", and "expert". Routines are modified/added/deleted as necessary to preserve the current functionality at the appropriate level. The basic level allows manipulation of communicators only. Contexts and groups are not visible. This level provides the "security" of system-managed contexts at the expense of optimum performance. This level is intended for use by the "typical" user. It supports "fusion" of communicators (as suggested by Jim). It does not support explicit publish/subscribe or flatten/unflatten. The expert level has all the low-level functionality required by library and server developers. Explicit context and group manipulation is allowed as well as construction/destruction of communicators via bind/unbind. This level provides the fastest possible performance at the expense of "security". This level is intended for use by the "expert" user. It supports explicit publish/subscribe and flatten/unflatten. I had a difficult time deciding what level the functionality of the local group operations (MPI_LOCAL_SUBGROUP(), MPI_LOCAL_SUBGROUP_RANGES(), MPI_LOCAL_GROUP_UNION(), MPI_LOCAL_GROUP_INTERSECT(), and MPI_LOCAL_GROUP_DIFFERENCE()) should be on. After thrashing around for a while with versions that operate on communicators, I decided to leave them in the expert level. The rationale here is that all the "local" operations exist primarily for performance reasons. They give the "typical user" a lot of rope. 
I can envision a situation where an incorrect sequence of calling various local communicator construction operations could result in two processes having the same context bound to a different group. (Yeah, I know, "the behavior of an erroneous program is undefined..." :-) I certainly could see having MPI_COMM_COLL_INTERSECT() or MPI_COMM_COLL_DIFFERENCE() at the basic level. Comments please? This organization will help us vote on the more advanced features as they will be isolated in the "expert" level. It will help answer all the "which function should I use?" questions. And, hopefully, "multiple layers" will be more palatable to the committee as a whole, since most of us have already bought the idea in point-to-point. In the current context draft, routines are organized as follows: >> GROUPS >> >> Local Operations >> MPI_GROUP_SIZE(group, size) >> MPI_GROUP_RANK(group, rank) >> MPI_TRANSLATE_RANKS (group_a, n, ranks_a, group_b, ranks_b) >> MPI_GROUP_FLATTEN(group, max_length, buffer, actual_length) >> >> Local Group Constructors >> MPI_GROUP_UNFLATTEN(max_length, buffer, group) >> MPI_LOCAL_SUBGROUP(group, n, ranks, new_group) >> MPI_LOCAL_SUBGROUP_RANGES(group, n, ranges, new_group) >> MPI_LOCAL_GROUP_UNION(group1, group2, group_out) >> MPI_LOCAL_GROUP_INTERSECT(group1, group2, group_out) >> MPI_LOCAL_GROUP_DIFFERENCE(group1, group2, group_out) >> MPI_GROUP_FREE(group) >> MPI_GROUP_DUP(group, new_group) >> >> Collective Group Constructors >> MPI_COLL_SUBGROUP(comm, membership_key, new_group) >> MPI_COLL_GROUP_PERMUTE(comm, new_rank, new_group) >> >> >> CONTEXTS >> >> Local Operations >> MPI_CONTEXTS_RESERVE(n, contexts) >> MPI_CONTEXTS_FREE(n, contexts) >> >> Collective Operations >> MPI_CONTEXTS_ALLOC(group, n, contexts) >> MPI_CONTEXTS_JOIN(local_comm, remote_comm_send, remote_comm_recv, n, >> contexts) >> >> >> COMMUNICATORS >> >> Local Communicator Operations >> MPI_COMM_SIZE(comm, size) >> MPI_COMM_RANK(comm, rank) >> MPI_COMM_FLATTEN(comm, max_length, buffer, actual_length) >> >> Local Constructors >> MPI_COMM_UNFLATTEN(max_length, buffer, comm) >> MPI_COMM_BIND(group, context, comm_new) >> MPI_COMM_UNBIND(comm) >> MPI_COMM_GROUP(comm, group) >> MPI_COMM_CONTEXT(comm, context) >> MPI_COMM_DUP(comm, new_context, new_comm) >> MPI_COMM_MERGE(comm, comm_remote, comm_send, comm_recv) >> >> Collective Communicator Constructors >> MPI_COMM_MAKE(sync_group, comm_group, comm_new) >> >> >> INTER-COMMUNICATION >> >> MPI_COMM_PUBLISH(comm, label, persistence_flag) >> MPI_COMM_UNPUBLISH(label) >> MPI_COMM_SUBSCRIBE(my_comm, label, comm) >> MPI_COMM_PUBLISH_SUBSCRIBE(comm_A, label_A, label_B, send_to_B, recv_from_B) >> MPI_COMM_JOIN(local_comm, remote_comm_send, remote_comm_recv, order, >> joined_comm) The "multi-level" organization follows. Details are included for routines that are different from the current draft. A few "discussion" points are also indicated. The basic level contains 7 routines. The expert level contains 28 routines. (The current draft has 34 so I haven't done too badly... :-) "BASIC" LEVEL Local Communicator Operations MPI_COMM_SIZE(comm, size) MPI_COMM_RANK(comm, rank) CHANGED ROUTINE (communicators instead of groups): MPI_COMM_LOCAL_TRANSLATE_RANKS (comm_a, n, ranks_a, comm_b, ranks_b) Local Destructor NEW ROUTINE: MPI_COMM_FREE(comm) IN comm communicator object handle previously defined This operation frees communicator handle comm. DISCUSS: all communicator construction at the basic level is currently done with collective operations. 
Should this be MPI_COMM_COLL_FREE()?? Collective Communicator Constructors NEW ROUTINE (replaces MPI_COMM_MAKE()): MPI_COMM_COLL_DUP(sync_comm, comm_to_dup, n, new_comms) IN sync_comm communicator specifying participants in this call IN comm_to_dup communicator object handle to be duplicated IN n number of new communicators to create OUT new_comms array of n new communicators MPI_COMM_COLL_DUP() is a collective version of MPI_COMM_LOCAL_DUP(). MPI_COMM_COLL_DUP() creates n new communicators and stores them in array new_comms. Each new communicator has a context unique among all processes in sync_comm. Each new communicator is otherwise identical to comm_to_dup. comm_to_dup will often be a subset of sync_comm. It is not erroneous if comm_to_dup is not a subset of sync_comm. (This functionality is primarily for the "expert" user.) Tony's discussion of statically initialized, single-invocation collective operations (that lock out other threads) applies to this routine when comm_to_dup is not a subset of sync_comm. I have replaced "sync_group" with "sync_comm" assuming that the context associated with sync_comm can be ignored (if the implementation requires it). Tony: will the next two routines also need to be single-invocation collective operations (with an additional "sync_comm" argument)? NEW ROUTINE (replaces MPI_COLL_SUBGROUP()): MPI_COMM_COLL_SUBSET(comm, membership_key, new_comm) Same as MPI_COLL_SUBGROUP() in the current draft except that new communicator new_comm is returned (instead of new group new_group). new_comm has a new context. NEW ROUTINE (replaces MPI_COLL_GROUP_PERMUTE()): MPI_COMM_COLL_PERMUTE(comm, new_rank, new_comm) Same as MPI_COLL_GROUP_PERMUTE() in the current draft except that new communicator new_comm is returned (instead of new group new_group). new_comm has a new context. DISCUSS: is there enough functionality for communicator construction at the basic level?? "Inter-communication" NEW ROUTINE (combines MPI_COMM_PUBLISH_SUBSCRIBE() and MPI_COMM_JOIN()): MPI_COMM_FUSE(comm_local, label_local, label_remote, order, comm_joined) IN comm_local local communicator IN label_local string label for local communicator IN label_remote string label for remote communicator IN order if comm_local is first, then MPI_TRUE, else MPI_FALSE OUT comm_joined merged communicator MPI_COMM_FUSE() (name thanks to Jim :-) is a combination of routines MPI_COMM_PUBLISH_SUBSCRIBE() and MPI_COMM_JOIN() from the current draft. This routine creates the merged communicator comm_joined given local communicator comm_local, string label label_local, string label label_remote, and int order. String label_local is used as a unique name for local_comm. String label_remote is used as a unique name for the remote communicator that will be merged with comm_local. The routine synchronizes over the processes in both communicators, and produces a new, merged communicator (comm_joined), whose processes are the ordered union of the two underlying process groups. The processes in comm_local will be first in comm_joined iff order == MPI_TRUE. Otherwise, the processes in the remote communicator will be first. The new communicator has a unique context of communication. This routine is called in pairs by all of the processes belonging to each communicator. Labels and order must correspond. For example: /* comm A call */ MPI_COMM_FUSE(comm_A, "COMM_A", "COMM_B", MPI_TRUE, comm_A_B); ... /* comm B call */ MPI_COMM_FUSE(comm_B, "COMM_B", "COMM_A", MPI_FALSE, comm_A_B); ... 
After these calls have completed, comm_A_B will consist of the processes in comm_A followed by the processes in comm_B and a new context unique across all processors in comm_A and comm_B. (I was going to call this MPI_COMM_COLL_FUSE() but MPI_COMM_FUSE() seems much more appropriate! :-) "EXPERT" LEVEL GROUPS Local Operations MPI_GROUP_SIZE(group, size) MPI_GROUP_RANK(group, rank) MPI_TRANSLATE_RANKS (group_a, n, ranks_a, group_b, ranks_b) MPI_GROUP_FLATTEN(group, max_length, buffer, actual_length) Local Group Constructors MPI_GROUP_UNFLATTEN(max_length, buffer, group) MPI_LOCAL_SUBGROUP(group, n, ranks, new_group) MPI_LOCAL_SUBGROUP_RANGES(group, n, ranges, new_group) MPI_LOCAL_GROUP_UNION(group1, group2, group_out) MPI_LOCAL_GROUP_INTERSECT(group1, group2, group_out) MPI_LOCAL_GROUP_DIFFERENCE(group1, group2, group_out) MPI_GROUP_FREE(group) MPI_GROUP_DUP(group, new_group) CONTEXTS Local Operations MPI_CONTEXTS_RESERVE(n, contexts) MPI_CONTEXTS_FREE(n, contexts) Collective Operations MPI_CONTEXTS_ALLOC(group, n, contexts) MPI_CONTEXTS_JOIN(local_comm, remote_comm_send, remote_comm_recv, n, contexts) COMMUNICATORS Local Communicator Operations MPI_COMM_FLATTEN(comm, max_length, buffer, actual_length) Local Constructors MPI_COMM_UNFLATTEN(max_length, buffer, comm) MPI_COMM_BIND(group, context, comm_new) MPI_COMM_UNBIND(comm) MPI_COMM_GROUP(comm, group) MPI_COMM_CONTEXT(comm, context) MPI_COMM_MERGE(comm, comm_remote, comm_send, comm_recv) INTER-COMMUNICATION MPI_COMM_PUBLISH(comm, label, persistence_flag) MPI_COMM_UNPUBLISH(label) MPI_COMM_SUBSCRIBE(my_comm, label, comm) MPI_COMM_PUBLISH_SUBSCRIBE(comm_A, label_A, label_B, send_to_B, recv_from_B) MPI_COMM_JOIN(local_comm, remote_comm_send, remote_comm_recv, order, joined_comm) OK, blast away!! From hender@macaw.fsl.noaa.gov Mon Jul 19 18:18:32 1993 Received: from gw1.fsl.noaa.gov by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA15435; Mon, 19 Jul 93 18:18:30 CDT Received: by gw1.fsl.noaa.gov (5.65/DEC-Ultrix/4.3) id AA21316; Mon, 19 Jul 1993 20:15:31 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA07925; Mon, 19 Jul 93 17:15:41 MDT Date: Mon, 19 Jul 93 17:15:41 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307192315.AA07925@macaw.fsl.noaa.gov> To: tony@Aurora.CS.MsState.Edu Subject: More on "multi-level" and "group fusion" Status: R Tony & Jim, I've been thinking about "group fusion"... As usual, I think best with examples. I decided to re-code the extremely popular "ISAR" example using Jim's "group fusion" and see what happens... I've restricted myself to using routines from the "basic" level of my "multi-level" proposal (which includes "fusion"). Some sections of the original example have been deleted (since you've already got them). The biggest changes have been: 1) Creation of the communicators is much simpler (expected, since this is basically a "publish-subscribe" mechanism). 2) The user functions that send data/parameters between the three process groups are now more complicated. Before, I could pass a single communicator into these functions. Each function could then find out who to communicate with directly using the single communicator. Now, I am passing the "fused" communicator, the size of the local sub-group communicator, and the membership key. The functions use the size and the key to determine which parts of the fused communicator correspond to the "local" and "remote" sub-groups. (Alternatively, the "local" communicator could be passed in instead of the size and key. 
The MPI_COMM_TRANSLATE_RANKS() routine would then be used to identify the "local" sub-group. The "remote" sub-group would be the remaining processes in the fused communicator.) My (not very well thought-out) conclusions after this exercise are: 1) The "basic" level provides enough flexibility to make this example work OK. I will attempt to re-do the examples in the current draft using the "multi-level" proposal and see what happens there... 2) Not having explicit groups and contexts running around is a definite plus. Lack of local operations doesn't hurt in this example. 3) "Fused" groups are a bit clumsy. It is intuitive (to me) to be able to send to another communicator, directly addressing members of that communicator as [0,...,(N-1)]. With a "fused" group, I need to keep track of which "half" of the group I want to communicate with. This feels a bit clumsy, but it certainly allows me to do what I want. By the way, if this multi-level idea is no good, say so and I'll attempt to help out with some other problem. I've had no feedback yet, positive or negative. Tom Henderson NOAA Forecast Systems Laboratory ------------------------------ BEGIN "PSEUDO" CODE --------------------------- /* #includes */ #include /* :-) */ #include /* For FFT library. */ /* #defines */ /* Membership keys for sub-group creation. */ #define SEARCH_GROUP_KEY 0 #define COMPENSATE_GROUP_KEY 1 #define DISPLAY_GROUP_KEY 2 /* Publication strings. */ #define SEARCH_COMM_STRING "SEARCH_COMM" #define COMPENSATE_COMM_STRING "COMPENSATE_COMM" #define DISPLAY_COMM_STRING "DISPLAY_COMM" /* typedefs */ ... main() { comm myComm, search_compensateComm, compensate_displayComm; float **radarData; char **image; isarParameters isarParams; decompParameters searchDecomp, compensateDecomp, displayDecomp; int membershipKey, myRankInAll; int allCommSize, searchCommSize, compensateCommSize, displayCommSize; /* Generic setup. */ MPI_INIT(); MPI_COMM_SIZE(MPI_COMM_ALL, &allCommSize); MPI_COMM_RANK(MPI_COMM_ALL, &myRankInAll); /* Initialize library called inside function compensateData(). */ INIT_FFT_LIB(...); /* Other setup stuff (malloc() 2D arrays, etc...) */ ... /* Build "sub-groups". */ /* "User" function... */ getCommSizes(myRankInAll, allCommSize, &searchCommSize, &compensateCommSize, &displayCommSize, &membershipKey, ...); MPI_COMM_COLL_SUBSET(MPI_COMM_ALL, membershipKey, myComm); /* Build "fused" communicators search_compensateComm and */ /* compensate_displayComm. Search group is first in */ /* search_compensateComm, compensate group is first in */ /* compensate_displayComm. */ if (membershipKey == SEARCH_GROUP_KEY) { MPI_COMM_FUSE(myComm, SEARCH_COMM_STRING, COMPENSATE_COMM_STRING, MPI_TRUE, search_compensateComm); } else if (membershipKey == COMPENSATE_GROUP_KEY) { MPI_COMM_FUSE(myComm, COMPENSATE_COMM_STRING, SEARCH_COMM_STRING, MPI_FALSE, search_compensateComm); /* Be careful to make calls in the right order! */ MPI_COMM_FUSE(myComm, COMPENSATE_COMM_STRING, DISPLAY_COMM_STRING, MPI_TRUE, compensate_displayComm); } else if (membershipKey == DISPLAY_GROUP_KEY) { MPI_COMM_FUSE(myComm, DISPLAY_COMM_STRING, COMPENSATE_COMM_STRING, MPI_FALSE, compensate_displayComm); } else { handleError(...); } /* Data decomposition set-up code for all groups... */ makeDecompParams(searchCommSize, searchDecomp, ...); makeDecompParams(compensateCommSize, compensateDecomp, ...); makeDecompParams(displayCommSize, displayDecomp, ...); /* Other stuff... */ ... /* MAIN LOOPS for each sub-group... 
*/ /* NOTE changes in arguments to user functions sendCurrentData(), */ /* sendMotion(), recvCurrentData(), and recvMotion. These functions */ /* now require the "fused" communicator (search_compensateComm or */ /* compensate_displayComm), size of the "local" sub-group communicator, */ /* and the membership key. */ /* MAIN LOOP for "search" group... */ if (membershipKey == SEARCH_GROUP_KEY) { searchDone = 0; while (searchDone == 0) { ... addNewData(myComm, radarData, searchDecomp, ...); findNewMotion(myComm, radarData, isarParams, searchDecomp, ...); sendCurrentData(search_compensateComm, searchCommSize, membershipKey, radarData, searchDecomp, compensateDecomp); sendMotion(search_compensateComm, searchCommSize, membershipKey, isarParams); ... } /* end of search group while */ } /* end of search group if */ /* MAIN LOOP for "compensate" group... */ else if (membershipKey == COMPENSATE_GROUP_KEY) { compensateDone = 0; while (compensateDone == 0) { ... recvCurrentData(search_compensateComm, compensateCommSize, membershipKey, radarData, compensateDecomp, searchDecomp); recvMotion(search_compensateComm, compensateCommSize, membershipKey, isarParams); compensateData(myComm, radarData, isarParams, compensateDecomp, ...); sendCurrentData(compensate_displayComm, compensateCommSize, membershipKey, radarData, compensateDecomp, displayDecomp); ... } /* end of compensate group while */ } /* end of compensate group if */ /* MAIN LOOP for "display" group... */ else if (membershipKey == DISPLAY_GROUP_KEY) { displayDone = 0; while (displayDone == 0) { ... recvCurrentData(compensate_displayComm, displayCommSize, membershipKey, radarData, displayDecomp, compensateDecomp); scaleThresholdCenter(myComm, radarData, image, displayDecomp, ...); displayImage(myComm, image, displayDecomp); ... } /* end of display group while */ } /* end of display group if */ else { handleError(...); } /* Cleanup... */ ... } /* end of main() */ From hender@macaw.fsl.noaa.gov Mon Jul 19 19:36:36 1993 Received: from gw1.fsl.noaa.gov by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA15476; Mon, 19 Jul 93 19:36:34 CDT Received: by gw1.fsl.noaa.gov (5.65/DEC-Ultrix/4.3) id AA22913; Mon, 19 Jul 1993 21:34:28 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA08010; Mon, 19 Jul 93 18:34:38 MDT Date: Mon, 19 Jul 93 18:34:38 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307200034.AA08010@macaw.fsl.noaa.gov> To: tony@Aurora.CS.MsState.Edu Subject: more "multi-level" "basic" examples... Cc: jim@meiko.co.uk Status: R Tony & Jim, As promised, here are the (non-trivial) examples from the current context draft recoded using the "basic" level of my "multi-level" proposal. The examples are identified by section in the current draft. Examples in sections 3.11.1 and 3.11.2 are unchanged. Some conclusions are: 1) The "basic" routines can accomplish everything in the examples (as I understand them-- Tony??). 2) Some of the "expert" routines clearly are better for getting performance out of libraries. 3) We could use a few more examples... :-) Ideas? Comments? 
Enjoy, Tom ----------------------------------------------------------------------------- The "basic" level currently contains the following EIGHT routines (yeah, I can't count :-): MPI_COMM_SIZE(comm, size) MPI_COMM_RANK(comm, rank) MPI_COMM_LOCAL_TRANSLATE_RANKS (comm_a, n, ranks_a, comm_b, ranks_b) MPI_COMM_FREE(comm) MPI_COMM_COLL_DUP(sync_comm, comm_to_dup, n, new_comms) MPI_COMM_COLL_SUBSET(comm, membership_key, new_comm) MPI_COMM_COLL_PERMUTE(comm, new_rank, new_comm) MPI_COMM_FUSE(comm_local, label_local, label_remote, order, comm_joined) (With the caveat that sync_comm might need to appear in a few more places... We'll see when I understand "single-invocation library" better.) 3.11.3 (Approximate) Current Practice #3 int me; int membershipKey; /* membership key for MPI_COMM_COLL_SUBSET() */ void *commslave; /* communicator for "slave" sub-group */ ... /* Initialize, etc. */ MPI_INIT(); MPI_COMM_RANK(MPI_COMM_ALL, &me); /* local */ /* Set up "slave" sub-group. */ if (me == 0) membershipKey = 0; /* process 0 in MPI_COMM_ALL */ else membershipKey = 1; /* "slave" processes */ MPI_COMM_COLL_SUBSET(MPI_COMM_ALL, membershipKey, &commslave); if (me != 0) { /* compute on slave */ ... MPI_REDUCE(commslave, ...); ... } /* zero falls through immediately to this reduce, others do later... */ MPI_REDUCE(MPI_COMM_ALL, ...); 3.11.4 Example #4 #define TAG_ARBITRARY 12345 #define SOME_COUNT 50 ... int me; int membershipKey; /* membership key for MPI_COMM_COLL_SUBSET() */ void *pt2pt_comm, *coll_comm; /* communicators for sub-group */ ... /* Initialize, etc. */ MPI_INIT(); MPI_COMM_RANK(MPI_COMM_ALL, &me); /* local */ /* Make communicators for sub-group. */ if ((me == 2) || (me == 4) || (me == 6) || (me == 8)) membershipKey = 0; /* processes in sub-group */ else membershipKey = 1; /* other processes */ MPI_COMM_COLL_SUBSET(MPI_COMM_ALL, membershipKey, &pt2pt_comm); if (membershipKey == 0) { MPI_COMM_COLL_DUP(pt2pt_comm, pt2pt_comm, 1, coll_comm); /* asynchronous receive: */ MPI_IRECV(..., MPI_SRC_ANY, TAG_ARBITRARY, pt2pt_comm); /* collective communication */ for (i=0; i comm); /* other inits */ ... *handle = save; } Notice that the communicator comm passed to the library IS needed to allocate new communicators. User start-up code: void user_start_op(user_lib_t *handle, void *data) { user_lib_state *state; state = handle -> state; MPI_IRECV(save -> comm, ..., data, ... &(state -> irecv_handle)); MPI_ISEND(save -> comm, ..., data, ... &(state -> isend_handle)); } User clean-up code: void user_end_op(user_lib_t *handle, void *data) { MPI_WAIT(save -> state -> isend_handle); MPI_WAIT(save -> state -> irecv_handle); } 3.11.6 Library Example #2 The main program: int ma, mb, me; int membershipKeyA, membershipKeyB; /* membership keys */ void *comm_a, comm_b; /* communicators for sub-groups */ ... /* Initialize, etc. */ MPI_INIT(); MPI_COMM_RANK(MPI_COMM_ALL, &me); /* local */ ... /* Make communicators for sub-groups. 
*/ if ((me == 0) || (me == 1)) membershipKeyA = 0; /* processes in sub-group A */ else membershipKeyA = 1; /* other processes */ MPI_COMM_COLL_SUBSET(MPI_COMM_ALL, membershipKeyA, &comm_a); if ((me == 0) || (me == 2) || (me == 3)) membershipKeyB = 0; /* processes in sub-group B */ else membershipKeyB = 1; /* other processes */ MPI_COMM_COLL_SUBSET(MPI_COMM_ALL, membershipKeyB, &comm_b); if (membershipKeyA == 0) lib_call(comm_a); if (membershipKeyB == 0) { lib_call(comm_b); lib_call(comm_b); } The library: void lib_call(void *comm) { int me, done = 0; MPI_COMM_RANK(comm, &me); if (me == 0) { while (!done) { MPI_RECV(..., comm, MPI_SRC_ANY); ... } } else { /* work */ MPI_SEND(..., comm, 0); ... } MPI_SYNC(comm); /* include/no safety for safety/no safety */ } From owner-mpi-context@CS.UTK.EDU Tue Jul 20 16:41:54 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA27841; Tue, 20 Jul 93 16:41:54 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23145; Tue, 20 Jul 93 16:45:14 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 20 Jul 1993 16:42:40 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23136; Tue, 20 Jul 93 16:42:38 -0400 Received: by gw1.fsl.noaa.gov (5.65/DEC-Ultrix/4.3) id AA01618; Tue, 20 Jul 1993 17:40:29 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA08662; Tue, 20 Jul 93 14:40:50 MDT Date: Tue, 20 Jul 93 14:40:50 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307202040.AA08662@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu Subject: one red fish? Hi all, Tony writes: > Desired action > -------------- > We find a way to make libraries have control over > their own communication safety, while permitting multi-threadedness. OK, I'll give it a shot with a possible pthreads() implementation of MPI_COMM_MAKE(). Caveat: I'm not an expert on pthreads()!! Also, I have used routines from the current draft. This is a naive implementation-- I'm just trying to show something that may work, not something that works well... :-) What do we want? An implementation of MPI_COMM_MAKE(sync_group, comm_group, comm_new) that is "safe" in a multi-threaded environment. As I understand this, it means that there must be some way of ensuring that multiple threads cannot "interfere" with each other by making "simultaneous" calls to this routine. It also means that multiple processes cannot "interfere" with each other when two or more groups make this call and the groups have overlapping membership. Finally, it is permissible for an implementation to use a "hidden" context (not accessible to the user) to accomplish all this. (If I haven't got this right, read no further...) Here's the scheme ("pseudo-code" follows): Each process maintains a "shared" variable among all threads. This shared variable is used by the threads to sequentialize access to MPI_COMM_MAKE(). Let's name the variable mpiCommMakeMutex. On entry to MPI_COMM_MAKE(sync_group, comm_group, comm_new), the calling thread (process) attempts to obtain exclusive access to mpiCommMakeMutex using pthread_mutex_lock(). If another thread is already executing MPI_COMM_MAKE(), this call will block. If not, the thread will lock mpiCommMakeMutex and proceed. The calling process then sends a message to process 0 in sync_group that contains information describing comm_group. (If the thread is in process 0, it receives these messages.)
These messages are sent using special context MPI_COMM_MAKE_CONTEXT. This special context is not accessible to the user. To disambiguate overlapping process groups that have the same "process 0", sync_group is used as message tag. (Note that I am assuming "INTEGER" handles here. If "group" is a struct then we have to use an implementation-dependent "group ID" field in this struct. I use this scheme in the example below.) Use of this tag ensures that process 0 will not attempt to receive a message on context MPI_COMM_MAKE_CONTEXT from a process that is not in sync_group until the current call to MPI_COMM_MAKE() completes. After receiving a message from every other process in sync_group, the thread in process 0 checks to see if comm_group is the same in each received message. If not, an "error" message is sent to each process in sync_group and MPI_COMM_MAKE() returns an error status. If everything is "OK", a "acknowledge" message is sent to each process in sync_group. After the "acknowledge" message, process 0 calls MPI_CONTEXTS_RESERVE(1, &newContext) and sends newContext to each process in sync_group. Each process then calls MPI_COMM_BIND(comm_group, newContext, comm_new) to create the new communicator. Each process then releases mpiCommMakeMutex using pthread_mutex_unlock() and returns from MPI_COMM_MAKE(). One note: the use of MPI_CONTEXTS_RESERVE() is a bit of a kludge. Probably, there should be some internal implementation-dependent way for processes {1, ..., (N-1)} to verify that newContext is not currently locally reserved and, if so, to reserve a context with VALUE newContext. If newContext IS currently locally reserved, then some negotiation must go on within sync_group to generate a unique context. I have NOT included this level of detail... Some implications of the example below: 1) The user is responsible for making sure that multiple threads do not accidentally call MPI_COMM_MAKE() in the "wrong" order. This is consistent with what we have said about threads elsewhere in MPI... 2) The user is responsible for making sure that calls to MPI_COMM_MAKE() by groups with common members are made in the correct order to avoid deadlock. Again, this is consistent with what's going on in collcomm. OK, will this work? If not, what's wrong with it? Can we fix it? Tom Henderson ----------------- "Implementation" of MPI_COMM_MAKE() ----------------------- /* "Implementation-dependent" part of "group" struct typedef. */ typedef struct group { ... int implementationGroupID; ... } group; /* "Global" declaration (visible to all threads in a process). */ pthread_mutex_t mpiCommMakeMutex; ... /***************************************************************************** ** function MPI_COMM_MAKE() ** ** Note that I am "assuming" an error status passed as an OUT argument... ** Error handling is oversimplified. I use only point-to-point communication ** routines. ** *****************************************************************************/ void MPI_COMM_MAKE(group sync_group, group comm_group, comm comm_new, int *status) { pthread_mutex_t mpiCommMakeMutex; /* "shared" variable */ int i, myRank, syncGroupSize, syncGroupID, commGroupID; int newContext, reqMsg, ackMsg, errFlag; receiveHandle rcvStatus; /* Lock "shared" variable (blocks until lock succeeds). */ pthread_mutex_lock(&mpiCommMakeMutex); /* Get implementation-dependent group IDs. 
*/ syncGroupID = sync_group.implementationGroupID; commGroupID = comm_group.implementationGroupID; /* Code for process 0 in sync_group and other processes in sync_group. */ MPI_GROUP_RANK(sync_group, &myRank); if (myRank == 0) { /* Receive "request" message from each process in sync_group. */ /* This message should contain commGroupID. */ MPI_GROUP_SIZE(sync_group, &syncGroupSize); errFlag = 0; for (i=1; i To: mpi-context@cs.utk.edu Subject: Re: one red fish? -- bad assumption and a "fix" Hi all, Very early this morning, it occurred to me that one of the assumptions I made in the "sample implementation" of MPI_COMM_MAKE() causes problems with local group creation operations, or (if group creation is restricted to be collective) with Jim's "group fusion" (MPI_COMM_JOIN()). The problem comes from the following part of the sample implementation: > The calling process then sends a message to process 0 in sync_group that > contains information describing comm_group. (If the thread is in process 0, > it receives these messages.) These messages are sent using special context > MPI_COMM_MAKE_CONTEXT. This special context is not accessible to the user. > To disambiguate overlapping process groups that have the same "process 0", > sync_group is used as message tag. (Note that I am assuming "INTEGER" handles > here. If "group" is a struct then we have to use an implementation-dependent > "group ID" field in this struct. I use this scheme in the example below.) > Use of this tag ensures that process 0 will not attempt to receive a message > on context MPI_COMM_MAKE_CONTEXT from a process that is not in sync_group > until the current call to MPI_COMM_MAKE() completes. The assumption here is that groups comm_group and sync_group can each be identified by an integer that is unique within sync_group. If local operations have been used to create these groups, then this cannot be guaranteed. If only collective operations have been used to create these groups (as in the "basic" level of the "multi-level" proposal) then we're OK UNLESS comm_group has been formed by "joining" two groups (Jim's "group fusion"). Here's how group fusion might screw things up even without local operations. Start with one group. Form two groups A and B using MPI_COLL_SUBGROUP(). Group A forms a "permuted" group A' using MPI_COLL_GROUP_PERMUTE(A, new_ranks, A'). Group B forms a "permuted" group B' using MPI_COLL_GROUP_PERMUTE(B, new_ranks, B'). Since group A is unaware of group B' and group B is unaware of group A', it is possible that the integer handle (or system-dependent "group ID") may be the same for groups A' and B'. Suppose processes in both groups have assigned the local permuted group to a variable named permutedGroup. If groups A and B then "join" into group A_B, what happens when the processes in group A_B call MPI_COMM_MAKE(A_B, permutedGroup, newComm, &status)? BAD THINGS most likely. How can this be fixed? 1) Send the "flattened" group comm_group in the implementation. This is probably "slow", but it works, and MPI_COMM_MAKE() is already pretty "slow". I show this "fix" below... 2) Disallow the case of sync_group != comm_group. This may be unacceptable. 3) Make MPI_COMM_JOIN() so sophisticated that it can resolve inconsistent group naming (and lots of other stuff I'm sure) during the "join" operation. This is so complicated I can't imagine how it would be done. 4) Use a "server" or some other scheme to maintain globally unique group identifiers. Definitely a bad idea... 
This still leaves the issue of local group operations being used to create sync_group unresolved. As I've said before, I'm in favor of restricting these local operations to an "expert" level (or eliminating them altogether). As long as sync_group is created by a collective operation, it's "group ID" can be unique for all member processes. I realize that all of this may be obvious to many of you. Please be patient as I catch up... :-) Tom Henderson ----------------- "Implementation" of MPI_COMM_MAKE() ----------------------- ----------------- with "flatten/unflatten" fix. ----------------------- /* New/changed code is indicated with "$$$" */ /* "Implementation-dependent" part of "group" struct typedef. */ typedef struct group { ... int implementationGroupID; ... } group; /* "Global" declaration (visible to all threads in a process). */ pthread_mutex_t mpiCommMakeMutex; ... /***************************************************************************** ** function MPI_COMM_MAKE() ** ** Note that I am "assuming" an error status passed as an OUT argument... ** Error handling is oversimplified. I use only point-to-point communication ** routines. This version has a "fix". Group comm_group is now sent ** "flattened" instead of by "group ID" in the "request" msg. ** *****************************************************************************/ void MPI_COMM_MAKE(group sync_group, group comm_group, comm comm_new, int *status) { pthread_mutex_t mpiCommMakeMutex; /* "shared" variable */ /* $$$ */ int i, j, myRank, syncGroupSize, syncGroupID, flatGroupSize, flatMsgSize; int newContext, ackMsg, errFlag, dummy1, dummy2; receiveHandle rcvStatus; /* $$$ */ char flatCommGroup[MAX_FLAT_GROUP_SIZE], flatMsg[MAX_FLAT_GROUP_SIZE]; /* Lock "shared" variable (blocks until lock succeeds). */ pthread_mutex_lock(&mpiCommMakeMutex); /* $$$ */ /* Get implementation-dependent group IDs for sync_group. */ syncGroupID = sync_group.implementationGroupID; /* $$$ */ /* Get flattened commGroup. */ MPI_GROUP_FLATTEN(comm_group, MAX_FLAT_GROUP_SIZE, flatCommGroup, &flatGroupSize); /* Code for process 0 in sync_group and other processes in sync_group. */ MPI_GROUP_RANK(sync_group, &myRank); if (myRank == 0) { /* Receive "request" message from each process in sync_group. */ /* $$$ */ /* This message should contain "flattened" comm_group. */ MPI_GROUP_SIZE(sync_group, &syncGroupSize); errFlag = 0; for (i=1; i Message-Id: <9307261756.AA08857@Aurora.CS.MsState.Edu> To: hender@macaw.fsl.noaa.gov, lyndon@epcc.ed.ac.uk Subject: Re: inter-communication Cc: mpi-context@cs.utk.edu Comment: Make context opaque Make flatten/unflatten for it Add attribute to contexts (at creation) intra-communicator only (default) inter-communication OK This adds to safety, allows one to hide the dual context nature in communicators, and should allay concerns of Jim Cownie about unsafe contexts. It euphemizes implementations, so there would be much more freedom in implementation, with the opaque context. This upsets Sandians, because it limits ability to access contexts like well-known ports. What do you think? Does it help with inter-communication? 
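One way to picture the opaque context being proposed, offered only as a sketch: the handle hides an implementation-chosen value plus the creation-time attribute, so user code can pass contexts around but can never manufacture one from an arbitrary integer. The field names and the enum below are illustrative assumptions, not wording proposed for the draft.

/* Sketch of what an opaque context might hide (illustrative only). */
typedef enum {
    MPI_CTX_INTRA_ONLY,     /* default: valid only in intra-communicators  */
    MPI_CTX_INTER_OK        /* may also be bound into inter-communicators  */
} mpi_ctx_attr;

typedef struct mpi_context_struct {
    unsigned long value;    /* chosen by the implementation, never supplied
                               by the user, so binding a literal integer
                               into a communicator is no longer expressible */
    mpi_ctx_attr  attr;     /* fixed when the context is created           */
} *mpi_context;

/* Flatten/unflatten would copy value and attr to and from a byte buffer,
   so a context can be shipped to another process without ever appearing
   to the user as a plain integer.                                         */

Behind one handle the implementation would then be free to manage intra-only and inter-capable contexts differently, which is the extra implementation freedom being claimed for the opaque form.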
- Tony From owner-mpi-context@CS.UTK.EDU Mon Jul 26 17:37:33 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA19080; Mon, 26 Jul 93 17:37:33 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA07349; Mon, 26 Jul 93 17:38:03 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 26 Jul 1993 17:38:01 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA07339; Mon, 26 Jul 93 17:38:00 -0400 Received: by gw1.fsl.noaa.gov (5.65/DEC-Ultrix/4.3) id AA24768; Mon, 26 Jul 1993 18:34:47 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA17704; Mon, 26 Jul 93 15:35:08 MDT Date: Mon, 26 Jul 93 15:35:08 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307262135.AA17704@macaw.fsl.noaa.gov> To: lyndon@epcc.ed.ac.uk, tony@aurora.cs.msstate.edu Subject: Re: inter-communication Cc: mpi-context@cs.utk.edu Tony writes: > Comment: > > Make context opaque > Make flatten/unflatten for it > Add attribute to contexts (at creation) > intra-communicator only (default) > inter-communication OK > > This adds to safety, allows one to hide the dual context nature in > communicators, and should allay concerns of Jim Cownie about unsafe > contexts. It euphemizes implementations, so there would be much > more freedom in implementation, with the opaque context. > > This upsets Sandians, because it limits ability to access contexts > like well-known ports. > > What do you think? Does it help with inter-communication? > - Tony First off, I like the idea of an opaque context. This would certainly solve the "security" problem of: MPI_COMM_BIND(groupA, 5, commA); Opaque contexts are clearly managed by MPI. It don't think opaque contexts solve the problem of binding the same context to multiple groups: MPI_CONTEXTS_RESERVE(1, &contextA); MPI_COMM_BIND(groupA, contextA, commA); MPI_COMM_BIND(groupB, contextA, commB); userStartOp(commA, ...); userStartOp(commB, ...); /* now what? */ ... userEndOp(commB, ...); userEndOp(commA, ...); ...of course, this program may be "erroneous". :-) However: what if a "bind" operation "destroys" a context object? This would eliminate the possibility of multiple bindings. An "unbind" would "re-create" the context object (in user space). I'm a bit confused by flatten/unflatten and by what it means to send a context to another process. If context registration is managed by a global context server (visible to all processes) and if context creation routines (like MPI_CONTEXTS_RESERVE()) communicate with the server to generate new contexts, then I understand what I get when I unflatten a context (because any context is globally unique). If not, how do I interpret the context I get from unflatten? If I call MPI_CONTEXTS_RESERVE() after unflattening a context, is it possible that I can get the "same" context? If not, then the context unflattening operation must update some internal table so I don't accidentally get the same context later. If I update a table, what happens if the unflattened context is already in the table? Another possibility is that the user could maintain a list of contexts in use and manage any conflicts explicitly, in which case I need a routine that will tell me if two contexts are the same. I also need a routine that will enter a context in the "local registry" by "value". Since contexts are opaque, this does not necessarily destroy "security". 
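To make the last two requests concrete, the helpers might look something like the following; both are pure invention for the sake of discussion, not routines in the current draft.

/* Hypothetical helpers implied by the two requests above. */
int MPI_CONTEXT_COMPARE(mpi_context a, mpi_context b, int *same);
     /* sets *same nonzero iff a and b denote the same context             */
int MPI_CONTEXT_REGISTER(mpi_context c);
     /* enters c in the local registry by value, so that later calls to
        MPI_CONTEXTS_RESERVE()/MPI_CONTEXTS_ALLOC() cannot return a clash   */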
These questions also apply to MPI_COMM_FLATTEN() and MPI_COMM_UNFLATTEN() in the current draft. I think I understand flattening and unflattening a group... :-) As far as an (intra-communication/inter-communication) attribute, we definitely need this. We could attach this attribute either to a communicator or to a context. Which is the "best" place? Does attaching it to a context imply that an implementation could use different methods for managing "intra-group" and "inter-group" contexts? Seems like this ought to be a good thing. I need to think about this some more... That's enough for now, my brain hurts... Tom From owner-mpi-context@CS.UTK.EDU Tue Jul 27 05:56:32 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA26578; Tue, 27 Jul 93 05:56:32 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20377; Tue, 27 Jul 93 05:56:18 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 05:56:16 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20369; Tue, 27 Jul 93 05:56:14 -0400 Date: Tue, 27 Jul 93 10:56:09 BST Message-Id: <1027.9307270956@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: inter-communication To: hender@macaw.fsl.noaa.gov (Tom Henderson) In-Reply-To: Tom Henderson's message of Mon, 26 Jul 93 15:35:08 MDT Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu If we make context an opaque object then we avoid the insecurity arising from the possibility that two different pieces of code in the same process will accidentally (or maliciously :-) choose to bind the same context number (say 5) in different communicator objects. If we make context opaque then we can make the rule that a process can only have ONE binding of a context and it is an error to attempt to bind a context which is already bound. The context object can contain a BOUND/FREE flag which becomes set to BOUND when we bind the context and is set to FREE otherwise. These ideas solve two insecurity problems, which is very good indeed. The cost is that contexts no longer conform to the wishes of Mark Sears and colleagues. There is a third problem which these ideas do not solve. Imagine two processes P and Q which are in group G, and Q is also in group H. Within group G both P and Q allocate a context C (opaque as discussed). P then binds C with G into a communicator Dp, while Q binds C with H into a communicator Dq. P then sends a message to Q using the communicator Dp. Of course we can say that the above is an erroneous program ... I would personally be happy with that situation, but I would also be concerned about the committee taking a different view. By the way, Tom, a couple of points. I think you are getting confused between MPI_CONTEXTS_RESERVE and MPI_CONTEXTS_ALLOC. Also the CONTEXT allocation is a collective within a group of processes and the CONTEXT reservation is local to a single process. There is no notion of a global context server in the draft and a CONTEXT is not necessarily a global object unless allocated within the global group of processes. I do not currently see how the suggestions which Tony makes really help with inter-communication. They don't seem to either add or remove any new possibilities with respect to inter-communication. Perhaps I miss something important?
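A minimal sketch of the one-binding rule just described, assuming the opaque context object carries the BOUND/FREE flag; the state field, the error name and the exact signatures are illustrative assumptions, not draft text.

/* Illustrative only: bind/unbind toggle a BOUND/FREE flag inside the
   opaque context, so a second binding in the same process is an error.   */
#define MPI_CTX_FREE   0
#define MPI_CTX_BOUND  1

int MPI_COMM_BIND(mpi_group g, mpi_context ctx, mpi_comm *comm_new)
{
    if (ctx->state == MPI_CTX_BOUND)
        return MPI_ERR_CONTEXT_BOUND;     /* hypothetical error code      */
    ctx->state = MPI_CTX_BOUND;
    /* ... construct *comm_new from (g, ctx) ... */
    return MPI_SUCCESS;
}

int MPI_COMM_UNBIND(mpi_comm c)
{
    /* assumes the communicator remembers the context it was built from,
       as MPI_COMM_CONTEXT() in the draft listing implies                 */
    c->context->state = MPI_CTX_FREE;     /* context becomes reusable     */
    /* ... release the communicator ... */
    return MPI_SUCCESS;
}

As noted above, this does nothing about the third problem, where two processes bind the same context to different groups; that case still has to be declared erroneous or caught by some other means.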
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Jul 27 11:25:30 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02488; Tue, 27 Jul 93 11:25:30 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10596; Tue, 27 Jul 93 11:24:58 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 11:24:57 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10587; Tue, 27 Jul 93 11:24:56 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA17387; Tue, 27 Jul 93 10:24:32 CDT Date: Tue, 27 Jul 93 10:24:32 CDT From: Tony Skjellum Message-Id: <9307271524.AA17387@Aurora.CS.MsState.Edu> To: hender@macaw.fsl.noaa.gov, lyndon@epcc.ed.ac.uk Subject: Re: inter-communication Cc: mpi-context@cs.utk.edu Lyndon, the issue is that if we followed the "old" model of intercommunication, then the context attribute would allow some communicators to be set with stronger security than others. Specifically, intra-communicators would have contexts marked as "closed" and inter-communicators would relax these semantics. Thus, the users desiring intercommunication could sacrifice the security of guaranteeing that their messages come from within a known group. The suggestion is mainly aimed at the current draft's idea of inter-communication. I have not had opportunity to study your newest stuff with Tom. I will go ahead and make contexts opaque. As I indicated, one wants to allow some association (in specific implementation) with "ports" or other things. I propose that these be implementation-specific alliances, that we do not cover in MPI. The greater good of this chapter, and of MPI, requires the additional safety posed by making contexts opaque, and as you two agree (and are the only ones giving me feedback), I will make it so. Thanks for all your efforts. I will strive to have the document worked up with your final inter-communication ideas by August 5, for early review by our colleagues at-large. - Tony ----- Begin Included Message ----- From owner-mpi-context@CS.UTK.EDU Tue Jul 27 04:56:33 1993 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 05:56:16 EDT Date: Tue, 27 Jul 93 10:56:09 BST From: L J Clarke Subject: Re: inter-communication To: hender@macaw.fsl.noaa.gov (Tom Henderson) Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Content-Length: 2256 If we make context an opaque object then we avoid the insecurity arising from the possibility that two different peices of code in the same process will accidentally (or maliciously :-) choose to bind the same context number (say 5) in different communicator objects. If we make context opaque then we can make the rule that a process can only have ONE binding of a context and it is an error to attempt to bind a context which is already bound. The context object can contain a BOUND/FREE flag which becomes set to BOUND when we bind the context and is set to FREE otherwise. These ideas solves two insecurity problems, which is very good indeed. The cost is that contexts no longer conform to the wishes of Mark Sears and colleagues. There is a third problem which these ideas do not solve. 
Imagine two processes P and Q which are in group G, and Q is also in group H. Within group G both P and Q allocate a context C (opaque as discussed). P then binds C with G into a communicator Dp, while Q binds C with H into a communicator Dq. P then sends a message to Q using the communicator Dp. Of course we can say that the above is an arroneous program ... I would personally be happy with that situation, but I would also be concerned about the committee taking a different view. By the way, Tom, a couple of points. I think you are getting confused between MPI_CONTEXTS_RESERVE and MPI_CONTEXTS_ALLOC. Also the CONTEXT allocation is a collective within a group of processes and the CONTEXT reservation is local to a single process. There is no notion of a global context server in the draft and a CONTEXT is not necessarily a global object unless allocated within the global group of processes. I do not currently see how the suggestions which Tony make really help with inter-communication. They dont seem to either add or remove any new possibilities with respect to inter-communication. Perhaps I miss something important? Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Tue Jul 27 11:48:26 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02632; Tue, 27 Jul 93 11:48:26 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12167; Tue, 27 Jul 93 11:48:02 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 11:48:01 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA12159; Tue, 27 Jul 93 11:47:50 -0400 Received: from tycho.co.uk (tycho.meiko.co.uk) by hub with SMTP id AA18514 (5.65c/IDA-1.4.4 for mpi-context@cs.utk.edu); Tue, 27 Jul 1993 16:47:47 +0100 Date: Tue, 27 Jul 1993 16:47:47 +0100 From: James Cownie Message-Id: <199307271547.AA18514@hub> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA02116; Tue, 27 Jul 93 16:47:41 BST To: mpi-context@cs.utk.edu Subject: Context and safety Content-Length: 9939 As you know I am in favour of removing the ability of users to bind arbitrary integers into communicators, since this removes any security from the communications. (and in my mind that was the sole purpose of contexts). Or phrased another way : Groups provide spatial insulation of communication. Contexts provide temporal insulation of communication. How did we get here ? ===================== Why did we introduce contexts and communicators as separate objects from groups ? I think the logic runs like this :- 1) We know that it is necessary to make the SAFE creation of context(s) within a group a collective (and possibly synchronising) operation. 2) But we want to allow the creation of communicators without synchronisation of the group. [A third possible reason has to do with establishing between-group communication, however I would like to leave that for now, since I see it as a large area of additional complexity, and I'd like to solve the "simple case" first. (I'll discuss how I'd handle this later). 
] We overcome this problem in two separate ways :- 1) We allow totally unsafe creation of contexts (by hard coding, using the phase of the moon, or whatever). 2) We amortise the cost of the synchronisation (and move it elsewhere) by using an allocation function which allocates multiple contexts simultaneously. Communicators then arise as a result of exposing the separation of the communication context and the group. They are a syntactical convenience to bind these two entities back into a single object, since they can only ever be used in a communication together. However I don't think that we need solution one (unsafe creation), and I think that if we accept only solution two (context pre-allocation) we can do it WITHOUT having to expose contexts explicitly to the user. How to do without explicit contexts. ==================================== As in Marc Snir's proposal, we do everything inside a group. A group now replaces a communicator in all communication operations. (If you prefer you could keep a group as a topological entity, or process list, and have a synchronising mpi_group_activate command which produces a communicator from a group, you can then use mpi_duplicate_communicator to get a copy of the communicator with a different context, and still separate the concept of group and communicator. For now I'll explain in terms of groups, the explanation in terms of communicators should follow trivially.) So how do we create duplicate groups safely and locally ? (You could read this as "How do we allocate contexts safely and locally ?", since that is what the implementation will actually be doing.) We introduce a VERY IMPORTANT rule which is :- "All collective operations on a group must occur in the same order on each node." (This is already necessarily true for the collective operation we have previously defined in MPI collective, since they are permitted to contain barriers, and therefore the whole code will deadlock if this rule is not obeyed.) What this rule does allow us to do, is to locally pre-allocate a set of contexts for a given group, and then manage them entirely locally, just as the user could explicitly by using mpi_contexts_alloc, BUT we don't have to expose the context numbers to the user, so safety is preserved. [This follows by induction, each node with the group in starts with the same set of contexts allocated (by definition, as the group was established or initialised collectively), and each allocation occurs in the same order, so they can allocate the same result at each node and remain in identical states] I think that this rule is unobjectionable, (as noted it is a restating of the rule we already have for collective routines) since a group exists for collective operations, and context allocation is (conceptually) a collective operation, even though we're going to GUARANTEE that in certain circumstances it is not a barrier. Cacheing ======== To simplify the user interface I would introduce cacheing of duplicate groups on the "primary" group. (If you prefer this is cacheing of communicators on the primary communicators.) In implementation terms this involves holding multiple contexts on the same group, and choosing the correct one to use based on the symbol table lookup. To allow the semantics we require, the keys to the cache lookup should be a function address and an integer By definition, the function address is sufficient on any machine to apply the function, and is therefore guaranteed unique. 
(Of course its interpretation is potentially different on different machines, but the library is machine specific, so can be expected to understand how to convert a function pointer into a useful value for hashing on, on most machines it is safe simply to cast to int !). The additional integer allows a library function safely to distinguish between different instances of itself operating on the same group. (Say if one wanted to implement non-blocking collective operations.) So suppose we have a function like this ... enum LOOKUPOP { OP_LOOKUP, /* Lookup ONLY */ OP_LOOKUPCREATE, /* Lookup, create if not present */ OP_DELETE, /* Remove the entry */ OP_LOOKUPCREATENB, /* Lookup, create if not present, BUT only if * there is a pre-allocated context */ }; MPI_GROUP mpi_lookupGroup( MPI_GROUP g, void (*key1)(), const int key2, enum LOOKUPOP op); and a way of pre-allocating contexts, say int mpi_groupsReserve( MPI_GROUP g, void (*key1)(), const int number); Which is a group synchronisation in "g", and allocates "number" duplicate groups which can only be used by quoting the key "key1". We can use these to solve various of our classic problems Tony's library example #1 Multiple instances of the same library function operating in the same group at the same time, with guaranteed non-blocking properties. The main program int main(int argc, char ** argv) { int done = 0; userLibHandle h1; userLibHandle h2; ... mpi_init(); ... /* Group, Concurrent ops in this group */ init_user_lib(MPI_COMM_ALL, 2); h1 = user_start_op(dataset1); h2 = user_start_op(dataset2); while (!done) { /* Work including reductions on COMM_ALL, comms in COMM_ALL etc */ done = /* Whatever */ ; } user_end_op(MPI_COMM_ALL, h1); user_end_op(MPI_COMM_ALL, h2); } /* The library. Doesn't handle local multi threading, could do with * mutexes around access to instanceNo. */ typedef int userLibHandle; /* Type of the data the user must give us * back, could pack a group in as well if * we preferred */ static int instanceNo = 0; /* Library initialisation code */ void init_user_lib(MPI_GROUP g, const int concurrentOps) { if (mpi_groupsReserve(g, (void(*)(void))init_user_lib, concurrentOps) != MPI_SUCCESS ) { /* Error handling */ } instanceNo = 0; } /* * If this call is allowed to synchronise in g, then we can use * LOOKUPCREATE, and should never get an error. */ userLibHandle user_start_op(MPI_GROUP g, void * data) { MPI_GROUP myGroup = mpi_lookupGroup(g, (void(*)(void))init_user_lib, ++ instanceNo, OP_LOOKUPCREATENB); if (!myGroup) { /* Error handling, insufficient pre-allocated groups */ } mpi_irecv(myGroup, ...); mpi_isend(myGroup, ...); /* Etc */ return instanceNo; } void user_end_op(MPI_GROUP g, userLibHandle key) { MPI_GROUP myGroup = mpi_lookupGroup(g, (void(*)(void))init_user_lib, key, OP_LOOKUP); if (!barrierGroup) { /* * Fatal error, attempt to end user op without starting it */ } else { /* * Wait for all communications in myGroup to complete * The proposed waitAll( GROUP ) would be useful here, * otherwise we need the return tag from startBarrier * to contain the comms handles of all the non-blocking ops we * started so that we can wait for them to complete. */ } /* Free the group we used. */ mpi_lookupGroup (void(*)(void))init_user_lib, key, OP_DELETE); } Summary ======= I believe that this proposal achieves the objectives of maintaining security allowing a simple user interface, and allowing guaranteed non-blocking operations. 
In this proposal the structure of a group (or communicator) is completely hidden from the user, and can be guaranteed secure. There are none of the problems of receiving messages in a group which is different from that in which it was sent, since the system controls the binding of contexts to groups. All of the magic is contained in the single routine mpi_groupsReserve, which can use a hidden context which is always created on the group to ensure its own safety. It is clearly possible to implement all of the collective operations using this interface, though an implementation would probably reserve some contexts in the group explicitly for these functions, rather than using the lookup mechanism. I'm out of time now, so I'll try to cover some of the missing issues tomorrow. (Between group communication, more detailed implementation thoughts). FEEDBACK PLEASE !!!!! James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Tue Jul 27 12:18:44 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02803; Tue, 27 Jul 93 12:18:44 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13980; Tue, 27 Jul 93 12:18:15 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 12:18:14 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA13972; Tue, 27 Jul 93 12:18:12 -0400 Received: by gw1.fsl.noaa.gov (5.65/DEC-Ultrix/4.3) id AA26429; Tue, 27 Jul 1993 13:16:02 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA00553; Tue, 27 Jul 93 10:16:23 MDT Date: Tue, 27 Jul 93 10:16:23 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307271616.AA00553@macaw.fsl.noaa.gov> To: lyndon@epcc.ed.ac.uk Subject: De-confused... Cc: mpi-context@cs.utk.edu Lyndon writes, > By the way, Tom, a couple of points. I think you are getting confused > between MPI_CONTEXTS_RESERVE and MPI_CONTEXTS_ALLOC. Also the CONTEXT > allocation is a collective within a group of processes and the CONTEXT > reservation is local to a single process. There is no notion of a > global context server in the draft and a CONTEXT is not necessarily a > global object unless allocated within the global group of processes. > > Best Wishes > Lyndon You are right about my confusion. Re-reading the current draft (yet again) I see that I had mis-interpreted what MPI_CONTEXTS_RESERVE() is doing. Sorry if I added confusion to the discussion... :-( So, what does MPI_CONTEXTS_RESERVE() mean if contexts are opaque? I guess I could receive a flattened context from another process, unflatten it, and then call MPI_CONTEXTS_RESERVE() to ensure that a subsequent call to MPI_CONTEXTS_ALLOC() does not give me the "same" context. If the "same" context is (somehow) already "in use", then I'll get an error from MPI_CONTEXTS_ALLOC() so I can do something about it... Does this sound right? Would it be possible to satisfy the "well known context" idea by having a static array of named contexts available for use by a server developer? Something like this: MPI_NAMED_USER_CONTEXT_0 MPI_NAMED_USER_CONTEXT_1 ... MPI_NAMED_USER_CONTEXT_63 We could certainly argue forever about "how many" of these there should be. Do we need a large number of "well known" contexts? 
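Going back to the ALLOC/RESERVE distinction for a moment, here is roughly how I now read it, as a sketch only: the argument lists are guessed (the draft's actual bindings may differ), and MPI_CONTEXT / MPI_GROUP as type names are my assumption.

/* Sketch only: argument lists guessed, type names assumed. */
void example(MPI_GROUP group, MPI_CONTEXT foreign)
{
    MPI_CONTEXT mine;

    /* Collective over "group": every process in the group makes this
     * call and all of them end up holding the same new context. */
    MPI_CONTEXTS_ALLOC(group, 1, &mine);

    /* Purely local: "foreign" is a context received (flattened) from
     * some other process.  Reserving it marks it as in use here, so a
     * later MPI_CONTEXTS_ALLOC() should not hand the "same" context
     * back, or should give an error I can act on. */
    MPI_CONTEXTS_RESERVE(1, foreign);
}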
Tom From owner-mpi-context@CS.UTK.EDU Tue Jul 27 12:25:33 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02841; Tue, 27 Jul 93 12:25:33 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14378; Tue, 27 Jul 93 12:24:54 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 12:24:50 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14339; Tue, 27 Jul 93 12:24:44 -0400 Date: Tue, 27 Jul 93 17:24:39 BST Message-Id: <1437.9307271624@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: Context and safety To: James Cownie , mpi-context@cs.utk.edu In-Reply-To: James Cownie's message of Tue, 27 Jul 1993 16:47:47 +0100 Reply-To: lyndon@epcc.ed.ac.uk Hi Jim > As you know I am in favour of removing the ability of users to bind > arbitrary integers into communicators, since this removes any security > from the communications. (and in my mind that was the sole purpose of > contexts). I understand that Tony intends to make "context" opaque, rather than just an integer. > > So how do we create duplicate groups safely and locally ? > (You could read this as "How do we allocate contexts safely and > locally ?", since that is what the implementation will actually be doing.) > > We introduce a VERY IMPORTANT rule which is :- > "All collective operations on a group must occur in the same order on > each node." I think that I understand what you mean but surely the rule, if so important, should be more accurately stated. Perhaps something like "All collective operations within a group must occur in the same order in all processes within the group." > (This is already necessarily true for the collective operation we have > previously defined in MPI collective, Yes. > since they are permitted to > contain barriers, and therefore the whole code will deadlock if this > rule is not obeyed.) I don't think this is a correct statement at all. It's got nothing to do with the barriers. Imagine that barrier were the only collective operation. It would be impossible to speak of collective operations within a group occurring in different orders in different processes of the group. The reason why the rule is nevertheless true for the collective operations defined in mpi-collcomm is that the behaviour would be undefined and reasonably unpredictable if the collective operations were not done in the same order. Sometimes there would indeed be a deadlock, unless each routine contains an initial barrier-synchronising operation that determines that the calls made by each process are compatible. > What this rule does allow us to do, is to locally > pre-allocate a set of contexts for a given group, and then manage them > entirely locally, just as the user could explicitly by using > mpi_contexts_alloc, BUT we don't have to expose the context numbers to > the user, so safety is preserved. 
> > [This follows by induction, each node with the group in starts with > the same set of contexts allocated (by definition, as the group was > established or initialised collectively), and each allocation occurs > in the same order, so they can allocate the same result at each node > and remain in identical states] > > I think that this rule is unobjectionable, (as noted it is a restating > of the rule we already have for collective routines) since a group > exists for collective operations, and context allocation is > (conceptually) a collective operation, even though we're going to > GUARANTEE that in certain circumstances it is not a barrier. Couple of points: 1) If you guarantee no barrier then you are not permitting the implementation to do any kind of error checking on the ordering of context allocation - if any were possible anyway. 2) I don't see how the GUARANTEE can be made considering that the pool of contexts allocated to the group is not inexhaustible - i.e. sometimes the implementation has to synchronise to get some more contexts. So we have a collective operation which may or may not synchronise, depending. Just like mpi-collcomm, really. > To allow the semantics we require, the keys to the cache lookup should > be > a function address > and an integer > > By definition, the function address is sufficient on any machine to > apply the function, and is therefore guaranteed unique. (Of course its > interpretation is potentially different on different machines, but the > library is machine specific, so can be expected to understand how to > convert a function pointer into a useful value for hashing on, on most > machines it is safe simply to cast to int !). I just don't get it with this function pointer stuff. If you had a heterogeneous machine then the memory map on the different architectures is arbitrarily different and the relationship between addresses such as those of pointers is also arbitrarily different. If you had a non SPMD program - i.e. different executables - then again the memory map is totally different. Do I miss something? Or are you thinking only of SPMD and heterogeneous? Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Jul 27 12:29:10 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02875; Tue, 27 Jul 93 12:29:10 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14646; Tue, 27 Jul 93 12:28:39 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 12:28:38 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14638; Tue, 27 Jul 93 12:28:35 -0400 Date: Tue, 27 Jul 93 17:28:30 BST Message-Id: <1446.9307271628@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: De-confused... To: hender@macaw.fsl.noaa.gov (Tom Henderson) In-Reply-To: Tom Henderson's message of Tue, 27 Jul 93 10:16:23 MDT Reply-To: lyndon@epcc.ed.ac.uk Cc: mpi-context@cs.utk.edu Hi Tom > > So, what does MPI_CONTEXTS_RESERVE() mean if contexts are opaque? I think it means nothing and disappears. 
> I guess I > could receive a flattened context from another process, unflatten it, and > then call MPI_CONTEXTS_RESERVE() to ensure that a subsequent call to > MPI_CONTEXTS_ALLOC() does not give me the "same" context. If the "same" > context is (somehow) already "in use", then I'll get an error from > MPI_CONTEXTS_ALLOC() so I can do something about it... Does this sound > right? Well, maybe, okay, perhaps. > Would it be possible to satisfy the "well known context" idea by having a > static array of named contexts available for use by a server developer? > Something like this: > > MPI_NAMED_USER_CONTEXT_0 > MPI_NAMED_USER_CONTEXT_1 > ... > MPI_NAMED_USER_CONTEXT_63 > > We could certainly argue forever about "how many" of these there should be. > Do we need a large number of "well known" contexts? Looks like these just bring back the original problems that opaque contexts were supposed to remove. I think the idea of well-known contexts becomes meaningless and disappears once they become opaque. Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Jul 27 12:47:54 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA03053; Tue, 27 Jul 93 12:47:54 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15825; Tue, 27 Jul 93 12:47:36 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 12:47:35 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15817; Tue, 27 Jul 93 12:47:32 -0400 Received: by gw1.fsl.noaa.gov (5.65/DEC-Ultrix/4.3) id AA26501; Tue, 27 Jul 1993 13:45:22 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA00684; Tue, 27 Jul 93 10:45:43 MDT Date: Tue, 27 Jul 93 10:45:43 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307271645.AA00684@macaw.fsl.noaa.gov> To: lyndon@epcc.ed.ac.uk Subject: Re: De-confused... Cc: mpi-context@cs.utk.edu Lyndon, > > Would it be possible to satisfy the "well known context" idea by having a > > static array of named contexts available for use by a server developer? > > Something like this: > > > > MPI_NAMED_USER_CONTEXT_0 > > MPI_NAMED_USER_CONTEXT_1 > > ... > > MPI_NAMED_USER_CONTEXT_63 > > > Looks like these just bring back the original problems that opaque > contexts were supposed to remove. I think the idea of well-known > contexts becomes meaningless and disappears once they become opaque. > > Best Wishes > Lyndon With contexts being opaque objects, we could arrange a bit more "security". Each MPI_NAMED_USER_CONTEXT would have an associated flag that "disables" the context unless the context has been explicitly reserved by using MPI_CONTEXTS_RESERVE(). For example: MPI_CONTEXTS_RESERVE(1, MPI_NAMED_USER_CONTEXT_0); I'm not sure how much extra "security" this would provide... Personally, I'd prefer to use publish/subscribe and use a "well known" character string for my servers. I'm a bit concerned that publish/subscribe won't make it though... 
Tom From owner-mpi-context@CS.UTK.EDU Tue Jul 27 19:36:26 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07239; Tue, 27 Jul 93 19:36:26 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10340; Tue, 27 Jul 93 19:35:34 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 19:35:32 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10319; Tue, 27 Jul 93 19:35:26 -0400 Received: by gw1.fsl.noaa.gov (5.65/DEC-Ultrix/4.3) id AA27273; Tue, 27 Jul 1993 20:33:16 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA01127; Tue, 27 Jul 93 17:33:37 MDT Date: Tue, 27 Jul 93 17:33:37 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307272333.AA01127@macaw.fsl.noaa.gov> To: jim@meiko.co.uk Subject: Re: Context and safety Cc: mpi-context@cs.utk.edu Jim, I think your proposal has the following main points (please correct whatever I've got wrong): 1) "Context" is not directly visible to the user. 2) Multiple contexts may be attached to a group (communicator). 3) Contexts are indirectly accessed by a key pair consisting of a function address and an integer "instance number". 4) The number of contexts attached to a group can be increased using a collective operation. (I'm not absolutely clear on this. At one point you say we should be able to "locally pre-allocate a set of contexts for a given group". Later you say that the routine that does this "is a group synchronization"-- which sounds "collective".) 5) A context can be "accessed" using an operation that is local UNLESS there are not enough contexts and OP_LOOKUPCREATE is specified (in which case the operation is collective as in #4). 6) Assumption: the "collective" operation in #4 must be called with matching parameters in all processes in the specified group. A few questions: A) Why use a function address for the first key in the key pair? It seems to me that you're using a function address only because it is something that a user cannot modify that is guaranteed to be unique within a process. It is also globally visible (within a process). Are there any other reasons for using a function pointer here? Why not use a local GET_KEY() routine that returns some opaque thing (key_t? :-) ? B) You say that mpi_groupsReserve() is a "group synchronization". I assume that if I call mpi_lookupGroup(..., OP_LOOKUPCREATE) and there are no more contexts, then mpi_groupsReserve() must be called inside mpi_lookupGroup() to allocate new contexts. This means that mpi_lookupGroup(..., OP_LOOKUPCREATE) may or may not be a collective operation. As a user, I'd like to be able to find this out before I make the call. How about a local inquiry routine that returns the number of available contexts attached to a group? C) In a heterogeneous implementation, comparison of the function pointer key in a call to mpi_groupsReserve() won't be helpful. This would also be true in a MIMD program. Specifically, the following error cannot be detected and will not deadlock a machine: MPI_COMM_RANK(MPI_COMM_ALL, &me); if (me == 0) mpi_groupsReserve(MPI_COMM_ALL, myFuncA(), 1); else mpi_groupsReserve(MPI_COMM_ALL, myFuncB(), 1); Is this worse/better than being unable to detect bindings of multiple groups to a single integer context? (By the way, bindings of multiple groups to a single OPAQUE context could be detected.) 
D) (Just a dumb question :-) What happens if I create a new group G from MPI_GROUP_ALL (using mpi_lookupGroup()), reserve a bunch of contexts for G, create a group H from G, and then "delete" G? I assume it is still safe to use group H? mpi_groupsReserve(MPI_COMM_ALL, myFuncA(), 10); G = mpi_lookupGroup(MPI_COMM_ALL, myFuncA(), 5, OP_LOOKUP); mpi_groupsReserve(G, myFuncB(), 3); H = mpi_lookupGroup(G, myFuncB(), 1, OP_LOOKUP); (void)mpi_lookupGroup(MPI_COMM_ALL, myFuncA(), 5, OP_DELETE); /* OK to use group H... */ Typos? > void user_end_op(MPI_GROUP g, userLibHandle key) > { > MPI_GROUP myGroup = mpi_lookupGroup(g, > (void(*)(void))init_user_lib, > key, > OP_LOOKUP); > > if (!barrierGroup) ^^^^^^^^^^^^ myGroup ?? > { > /* > * Fatal error, attempt to end user op without starting it > */ > } else > { > /* > * Wait for all communications in myGroup to complete > * The proposed waitAll( GROUP ) would be useful here, > * otherwise we need the return tag from startBarrier $$$ ^^^^^^^^^^^^ user_start_op() ? > * to contain the comms handles of all the non-blocking ops we > * started so that we can wait for them to complete. > */ > } > > /* Free the group we used. */ > mpi_lookupGroup (void(*)(void))init_user_lib, > key, > OP_DELETE); > } > FEEDBACK PLEASE !!!!! Hope this is useful... Tom From owner-mpi-context@CS.UTK.EDU Tue Jul 27 19:46:07 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07249; Tue, 27 Jul 93 19:46:07 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10962; Tue, 27 Jul 93 19:45:40 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 19:45:37 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA10936; Tue, 27 Jul 93 19:45:35 -0400 Received: by gw1.fsl.noaa.gov (5.65/DEC-Ultrix/4.3) id AA27293; Tue, 27 Jul 1993 20:43:24 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA01159; Tue, 27 Jul 93 17:43:45 MDT Date: Tue, 27 Jul 93 17:43:45 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307272343.AA01159@macaw.fsl.noaa.gov> To: jim@meiko.co.uk Subject: Re: Context and safety Cc: mpi-context@cs.utk.edu Jim, One other question I forgot to include... You mention waitAll(), > * The proposed waitAll( GROUP ) would be useful here How would waitAll() work with nested groups (as in my "dumb question")? mpi_groupsReserve(MPI_COMM_ALL, myFuncA(), 10); G = mpi_lookupGroup(MPI_COMM_ALL, myFuncA(), 5, OP_LOOKUP); mpi_groupsReserve(G, myFuncB(), 4); H = mpi_lookupGroup(G, myFuncB(), 2, OP_LOOKUP); waitAll(G); Would waitAll() wait for all communication in H? Or just communication in the first context of H (which I assume is context "2" in G)? No more dumb questions for the moment... 
Tom From owner-mpi-context@CS.UTK.EDU Tue Jul 27 21:14:47 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA07481; Tue, 27 Jul 93 21:14:47 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15664; Tue, 27 Jul 93 21:14:25 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 27 Jul 1993 21:14:24 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15656; Tue, 27 Jul 93 21:14:23 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA26751; Tue, 27 Jul 93 20:14:10 CDT Date: Tue, 27 Jul 93 20:14:10 CDT From: Tony Skjellum Message-Id: <9307280114.AA26751@Aurora.CS.MsState.Edu> To: jim@meiko.co.uk, hender@macaw.fsl.noaa.gov Subject: Re: Context and safety Cc: mpi-context@cs.utk.edu Hi, I have to read Jim's comments in greater detail before I respond. I guess it is somewhat late for a totally new proposal, don't you think... as we are refining the intra-comm quite well at this point. Anyway, I will incorporate those ideas that are reasonable into the proposal draft by August 5. - Tony From owner-mpi-context@CS.UTK.EDU Wed Jul 28 04:50:25 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA09147; Wed, 28 Jul 93 04:50:25 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14964; Wed, 28 Jul 93 04:49:21 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 28 Jul 1993 04:49:20 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA14947; Wed, 28 Jul 93 04:49:08 -0400 Received: from tycho.co.uk (tycho.meiko.co.uk) by hub with SMTP id AA20407 (5.65c/IDA-1.4.4 for mpi-context@cs.utk.edu); Wed, 28 Jul 1993 09:49:05 +0100 Date: Wed, 28 Jul 1993 09:49:05 +0100 From: James Cownie Message-Id: <199307280849.AA20407@hub> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA02212; Wed, 28 Jul 93 09:48:56 BST To: mpi-context@cs.utk.edu In-Reply-To: <1437.9307271624@subnode.epcc.ed.ac.uk> (message from L J Clarke on Tue, 27 Jul 93 17:24:39 BST) Subject: Re: Context and safety Content-Length: 6533 Lyndon, > I understand that Tony intends to make "context" opaque, rather than > just an integer. Unfortunately although a step in the right direction this does not entirely remove the problem, since it is still possible to bind different (intersecting) groups to the same context in different processes. This leads to all of the "Who sent me this" problems. This is not a locally checkable process, to ensure that it does not happen context allocation and binding have to be collective. It's my relaxation of the second of these rules which leads to the insecurities in my proposal. I am (of course) happier with those mainly because the rule which must be obeyed for correctness is easy to explain to the user. > I think that I understand what you mean but surely the rule, if so > important, should be more accurately stated. Perhaps something like > > "All collective operations within a group must occur in the same order > in all processes within the group." I'm happy with your restatement, I see no major increase or decrease in the clarity. Trying to reconstruct my thought processes I think I used "node" rather than "process" to attempt to avoid any lightweight process (== thread) confusions. As to the choice of "within" or "on" I really see no difference at all. 
If it makes you happier though I'll certainly take your formulation. > > (This is already necessarily true for the collective operation we have > > previously defined in MPI collective, > > Yes. > > > since they are permitted to > > contain barriers, and therefore the whole code will deadlock if this > > rule is not obeyed.) > > I dont think this is a correct statement at all. Its got nothing to do > with the barriers. Imagine that barrier were the only collective > operations. It would be impossible to speak of collective operations > within a group occuring in different orders in different processes of > the group. > > The reason why the rule is nevertheless true for the collective > operations defined in mpi-collcomm, is that the behaviour would be > undefined and reasonably unpredictable if the collective operations were > not done in the same order. > > Sometimes there would indeed be a deadlock, unless each routine contains > an intial barrier-synchronising operation that determines that the call > made by each process are compatible. > You're right, I was implicitly assuming a "barrier with key", and matching of the keys which would immediately lead to deadlock. Whatever the reason the statement is true, which is all I ask. > > What this rule does allow us to do, is to locally <========== *** SORRY *** As Tom points outs this should say COLLECTIVELY > > pre-allocate a set of contexts for a given group, and then manage them > > entirely locally, just as the user could explicitly by using > > mpi_contexts_alloc, BUT we don't have to expose the context numbers to > > the user, so safety is preserved. > > > > [This follows by induction, each node with the group in starts with > > the same set of contexts allocated (by definition, as the group was > > established or initialised collectively), and each allocation occurs > > in the same order, so they can allocate the same result at each node > > and remain in identical states] > > > > I think that this rule is unobjectionable, (as noted it is a restating > > of the rule we already have for collective routines) since a group > > exists for collective operations, and context allocation is > > (conceptually) a collective operation, even though we're going to > > GUARANTEE that in certain circumstances it is not a barrier. > > Couple of points: > > 1) If you guarantee no barrier then you are not permitting the > implementation to do any kind of error checking on the ordering of > context allocation - if any were possible anyway. True, however at least the rule one has to obey is easy to understand. In the cases where the barrier would be acceptable a check can be made in a checking version of the library at least. > 2) I dont see how the GUARANTEE can be made considering that the pool of > contexts allocated to the group is not inexhaustible - i.e. sometimes > the implementation has to synchronise to get some more contexts. So we > have a collective operation which may or may not synchronise, depending. > Just like mpi-collcomm, really. The guarantee can be made because the failure happens when the contexts are pre-allocated. Either there are enough or not AT THAT POINT. If you don't pre-allocate, then the behaviour is as you describe. (See mpi_groupsReserve, and the difference between MPI_LOOKUPCREATE and MPI_LOOKUPCREATENB). 
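To make the two cases concrete, here is a minimal sketch in terms of the mpi_groupsReserve / mpi_lookupGroup interface from my original posting (the OP_* spellings below are the enum names from that sketch; none of this is agreed MPI, and in real use the Reserve would live in a one-time init routine, as in the library example).

void example(MPI_GROUP g)
{
    MPI_GROUP dup;

    /* Collective over g, and a potential synchronisation point:
     * pre-allocate four duplicate groups (i.e. contexts) keyed by
     * this function's address. */
    mpi_groupsReserve(g, (void(*)(void))example, 4);

    /* Guaranteed local and non-blocking: succeeds only while one of
     * the pre-allocated duplicates is still free, otherwise fails
     * (cf. EWOULDBLOCK). */
    dup = mpi_lookupGroup(g, (void(*)(void))example, 1,
                          OP_LOOKUPCREATENB);
    if (!dup)
    {
        /* May synchronise: the implementation is free to go and
         * collectively allocate more contexts to refill the pool. */
        dup = mpi_lookupGroup(g, (void(*)(void))example, 1,
                              OP_LOOKUPCREATE);
    }
}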
> > To allow the semantics we require, the keys to the cache lookup should > > be > > a function address > > and an integer > > > > By definition, the function address is sufficient on any machine to > > apply the function, and is therefore guaranteed unique. (Of course its > > interpretation is potentially different on different machines, but the > > library is machine specific, so can be expected to understand how to > > convert a function pointer into a useful value for hashing on, on most > > machines it is safe simply to cast to int !). > > I just dont get it with this function pointer stuff. If you had a > heterogeneous machine then the memory map on the different architectures > is arbitrarily different and the relationship between addresses such as > those of pointers is also arbitrarily differet. If you had a non SPMD > program - i.e. different executables - then again the memory map is > totally different. > > Do I miss something? Or are you thinking only of SPMD and heterogeneous? Yes you do. The function pointer and key are not the context, they are the key to allow you to get at the context. They are ONLY meaningful entirely locally to the process. Really I'm just using a trick to allow me to get the linker to produce guaranteed unique local keys at low cost. Since these keys are entirely local to the process it makes no difference whether we are on a heterogenous machine running a non SPMD code or not. (Though as Tom points out [and I'll answer his mail next !] the inability to compare these keys globally does remove some security...) Hope this helps. -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Wed Jul 28 05:55:36 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA09440; Wed, 28 Jul 93 05:55:36 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21765; Wed, 28 Jul 93 05:53:52 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 28 Jul 1993 05:53:51 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA21756; Wed, 28 Jul 93 05:53:28 -0400 Received: from tycho.co.uk (tycho.meiko.co.uk) by hub with SMTP id AA20686 (5.65c/IDA-1.4.4 for mpi-context@cs.utk.edu); Wed, 28 Jul 1993 10:53:24 +0100 Date: Wed, 28 Jul 1993 10:53:24 +0100 From: James Cownie Message-Id: <199307280953.AA20686@hub> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA02215; Wed, 28 Jul 93 10:53:15 BST To: mpi-context@cs.utk.edu In-Reply-To: <9307272333.AA01127@macaw.fsl.noaa.gov> (hender@macaw.fsl.noaa.gov) Subject: Re: Context and safety Content-Length: 7579 Tom, > I think your proposal has the following main points (please correct whatever > I've got wrong): > > 1) "Context" is not directly visible to the user. Correct > 2) Multiple contexts may be attached to a group (communicator). Correct > 3) Contexts are indirectly accessed by a key pair consisting of a > function address and an integer "instance number". Correct > 4) The number of contexts attached to a group can be increased using a > collective operation. (I'm not absolutely clear on this. At one point > you say we should be able to "locally pre-allocate a set of contexts for > a given group". 
Later you say that the routine that does this "is a > group synchronization"-- which sounds "collective".) Correct, you have detected the crucial typo, as you say it should read COLLECTIVE not LOCAL. Score one beer ! > 5) A context can be "accessed" using an operation that is local UNLESS > there are not enough contexts and OP_LOOKUPCREATE is specified (in > which case the operation is collective as in #4). Correct > 6) Assumption: the "collective" operation in #4 must be called with > matching parameters in all processes in the specified group. Correct > A) Why use a function address for the first key in the key pair? It seems > to me that you're using a function address only because it is something > that a user cannot modify that is guaranteed to be unique within a > process. It is also globally visible (within a process). Are there any > other reasons for using a function pointer here? Why not use a local > GET_KEY() routine that returns some opaque thing (key_t? :-) ? You are right, it is guaranteed unique in a process, and visible throughout the process. It is also cheap to access. I prefer this to allocating unique keys because this is cheaper and I don't have to remember the key in a data structure. Suppose I want to implement the MPI collective operations, then I don't have any arguments passed into them to give me the key, I have to remember it locally. If I use a function pointer, then it is immediately visible without any effort on my part. > B) You say that mpi_groupsReserve() is a "group synchronization". I assume > that if I call mpi_lookupGroup(..., OP_LOOKUPCREATE) and there are no > more contexts, then mpi_groupsReserve() must be called inside > mpi_lookupGroup() to allocate new contexts. This means that > mpi_lookupGroup(..., OP_LOOKUPCREATE) may or may not be a collective > operation. As a user, I'd like to be able to find this out before I > make the call. How about a local inquiry routine that returns the > number of available contexts attached to a group? You can already achieve approximately this effect by using mpi_lookupGroup(..., OP_LOOKUPCREATENB) which will fail if the LOOKUPCREATE would have blocked (cf EWOULDBLOCK). I have no strong objection to being able to query the number of "free contexts" on the group. The only problem arises in a multi-threaded environment where some other thread could consume contexts between you inquiring and using them. I really prefer the explicit allocation which is there so that you can ensure that you won't block. > C) In a heterogeneous implementation, comparison of the function pointer > key in a call to mpi_groupsReserve() won't be helpful. This would also > be true in a MIMD program. Specifically, the following error cannot be > detected and will not deadlock a machine: > > MPI_COMM_RANK(MPI_COMM_ALL, &me); > if (me == 0) > mpi_groupsReserve(MPI_COMM_ALL, myFuncA(), 1); /* probably don't want these () ^^ */ > else > mpi_groupsReserve(MPI_COMM_ALL, myFuncB(), 1); /* probably don't want these () ^^ */ > > Is this worse/better than being unable to detect bindings of multiple > groups to a single integer context? (By the way, bindings of multiple > groups to a single OPAQUE context could be detected.) I could argue that this isn't an error, it all depends on how the new group is used. (The allocation order is clearly specified, the choice of key to use to find the context is a local decision, provided the usage is consistent this code could work !). 
[Imagine for instance a MIMD non SPMD code in which a group was constructed for some collextive operation but not all of the processes in the group contained the same set of functions. Then this would actually be useful.] I agree that it will often be a problem, however. If we do want to be able to check this, then my best suggestion is to add an additional string argument which can be checked in a debugging version of the library or ignored in a kamikaze speed first version. This is rather like the discussion about a tag in a barrier. That could also be useful to find exactly analogous bugs. We voted it out on the grounds that all it was doing was changing the behaviour of codes which were already erroneous (and therefore had unspecified behaviour). [I'd still quite like to see it there, solely for this debugging purpose, but I think that argument is lost...] > D) (Just a dumb question :-) What happens if I create a new group G from > MPI_GROUP_ALL (using mpi_lookupGroup()), reserve a bunch of contexts for > G, create a group H from G, and then "delete" G? I assume it is still > safe to use group H? > > mpi_groupsReserve(MPI_COMM_ALL, myFuncA(), 10); > G = mpi_lookupGroup(MPI_COMM_ALL, myFuncA(), 5, OP_LOOKUP); > mpi_groupsReserve(G, myFuncB(), 3); > H = mpi_lookupGroup(G, myFuncB(), 1, OP_LOOKUP); > (void)mpi_lookupGroup(MPI_COMM_ALL, myFuncA(), 5, OP_DELETE); > /* OK to use group H... */ Yes, I think this is fine. However I'd disallow use of both G and H if you deleted the MPI_COMM_ALL. > Typos? Yes, I took this example from one which was explicitly implementing the famous MPI -1 non-blocking barrier, hence barrierGroup. For comparability I adopted Tony's names but then failed to make all the changes. Sorry. (but not another beer !) > Hope this is useful... Definitely, thanks. > One other question I forgot to include... You mention waitAll(), > > > * The proposed waitAll( GROUP ) would be useful here > > How would waitAll() work with nested groups (as in my "dumb question")? > > mpi_groupsReserve(MPI_COMM_ALL, myFuncA(), 10); > G = mpi_lookupGroup(MPI_COMM_ALL, myFuncA(), 5, OP_LOOKUP); > mpi_groupsReserve(G, myFuncB(), 4); > H = mpi_lookupGroup(G, myFuncB(), 2, OP_LOOKUP); > waitAll(G); > > Would waitAll() wait for all communication in H? Or just communication in > the first context of H (which I assume is context "2" in G)? I don't think that waitAll(GROUP) exists, so this is a moot point. I'd certainly expect it only to wait on the actual group it is passed, so in this case (which I now find confusing, should the waitAll be on H ?) [attempt to get back to parity on the beers !], as written it would wait for completion of all communications on the G group. Communications in H or MPI_COMM_ALL would be irrelevant. James Cownie Meiko Limited Meiko Inc. 
650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Wed Jul 28 12:47:28 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA11731; Wed, 28 Jul 93 12:47:28 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17712; Wed, 28 Jul 93 12:46:53 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 28 Jul 1993 12:46:47 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17704; Wed, 28 Jul 93 12:46:42 -0400 Received: by gw1.fsl.noaa.gov (5.65/DEC-Ultrix/4.3) id AA28693; Wed, 28 Jul 1993 13:43:03 +0100 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA01760; Wed, 28 Jul 93 10:43:24 MDT Date: Wed, 28 Jul 93 10:43:24 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9307281643.AA01760@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu, jim@meiko.co.uk Subject: Re: Context and safety Jim, > I don't think that waitAll(GROUP) exists, so this is a moot point. I'd > certainly expect it only to wait on the actual group it is passed, so > in this case (which I now find confusing, should the waitAll be on H ?) > [attempt to get back to parity on the beers !], as written it would > wait for completion of all communications on the G group. > Communications in H or MPI_COMM_ALL would be irrelevant. > > James Cownie You're right, the waitAll() should be on H. I guess I owe you a moot beer... :-) Tom From owner-mpi-context@CS.UTK.EDU Wed Jul 28 12:47:44 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA11738; Wed, 28 Jul 93 12:47:44 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17724; Wed, 28 Jul 93 12:47:12 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 28 Jul 1993 12:47:12 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17714; Wed, 28 Jul 93 12:46:55 -0400 Received: from tycho.co.uk (tycho.meiko.co.uk) by hub with SMTP id AA22473 (5.65c/IDA-1.4.4 for mpi-context@cs.utk.edu); Wed, 28 Jul 1993 17:46:38 +0100 Date: Wed, 28 Jul 1993 17:46:38 +0100 From: James Cownie Message-Id: <199307281646.AA22473@hub> Received: by tycho.co.uk (5.0/SMI-SVR4) id AA00267; Wed, 28 Jul 93 17:46:23 BST To: tony@Aurora.CS.MsState.Edu Cc: mpi-context@cs.utk.edu In-Reply-To: <9307280114.AA26751@Aurora.CS.MsState.Edu> (message from Tony Skjellum on Tue, 27 Jul 93 20:14:10 CDT) Subject: Re: Context and safety Content-Length: 1455 Tony, > I have to read Jim's comments in greater detail before I respond. > I guess it is somewhat late for a totally new proposal, don't you > think... as we are refining the intra-comm quite well at this point. I agree it's late. I apologise. As I warned you before I left I had other unavoidable committments which ate up a week of my time away from the net. I believe that I could construct something very close to the current proposal in its treatment of groups, but containing the cacheing scheme I proposed as the way of manipulating contexts, and thus avoiding their exposure to the user. This would retain communicators as separate objects from groups, but initial communicator creation from a group would be a collective operation. 
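In outline it would look something like the sketch below, reusing the mpi_group_activate / mpi_duplicate_communicator names I floated in the original posting (a sketch of the shape only, not a worked-out binding; the MPI_COMM type name is assumed).

/* Sketch only: routine and type names are assumed, not agreed MPI. */
MPI_COMM library_init(MPI_GROUP g)
{
    /* Collective over g: binds an initial (hidden) context to the
     * group and yields a communicator. */
    MPI_COMM base = mpi_group_activate(g);

    /* Same group, fresh hidden context; under the ordering rule this
     * can normally be satisfied from the pre-allocated context pool,
     * so it need not synchronise again. */
    return mpi_duplicate_communicator(base);
}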
I will happily do this if you care to send me the latest source for the chapter. Alternatively we can try to have a straw poll to find out which style the (full ?) committee prefers. In some sense this is unfair unless I produce a full document to the same level of detail as the extant draft, since sketchy outlines can easily look more attractive than a full proposal. Tell me what to do ! (other than go jump in a lake !) -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Wed Jul 28 13:57:58 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA12303; Wed, 28 Jul 93 13:57:58 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22362; Wed, 28 Jul 93 13:56:50 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 28 Jul 1993 13:56:44 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA22354; Wed, 28 Jul 93 13:56:42 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA28736; Wed, 28 Jul 93 12:56:31 CDT Date: Wed, 28 Jul 93 12:56:31 CDT From: Tony Skjellum Message-Id: <9307281756.AA28736@Aurora.CS.MsState.Edu> To: jim@meiko.co.uk Subject: Re: Context and safety Cc: mpi-context@cs.utk.edu ----- Begin Included Message ----- From jim@meiko.co.uk Wed Jul 28 11:46:53 1993 Date: Wed, 28 Jul 1993 17:46:38 +0100 From: James Cownie To: tony@Aurora.CS.MsState.Edu Cc: mpi-context@cs.utk.edu Subject: Re: Context and safety Content-Length: 1455 Tony, > I have to read Jim's comments in greater detail before I respond. > I guess it is somewhat late for a totally new proposal, don't you > think... as we are refining the intra-comm quite well at this point. I agree it's late. I apologise. As I warned you before I left I had other unavoidable committments which ate up a week of my time away from the net. I believe that I could construct something very close to the current proposal in its treatment of groups, but containing the cacheing scheme I proposed as the way of manipulating contexts, and thus avoiding their exposure to the user. This would retain communicators as separate objects from groups, but initial communicator creation from a group would be a collective operation. I will happily do this if you care to send me the latest source for the chapter. Alternatively we can try to have a straw poll to find out which style the (full ?) committee prefers. In some sense this is unfair unless I produce a full document to the same level of detail as the extant draft, since sketchy outlines can easily look more attractive than a full proposal. Tell me what to do ! (other than go jump in a lake !) -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com ----- End Included Message ----- No, please do not jump into a lake. We want your feedback. Please let me finish the chapter by August 4 or 5, and I will have had time to digest your significant comments. After that, let us discuss and refine your comments with the sub-committee, and see where that gets us. 
If it means that there are a number of friendly and unfriendly amendments that you will make during the reading, then this a reasonable approach that still gets us reading the chapter as scheduled. I think I just sent you another message that complained about the lateness issue. Probably duplicate in my thinking, but I did not send to mpi-context by accident... the main issue there of interest is my view that contexts (as opaque objects) are safer than you asserted. IE, --------- Having opaque contexts strongly increases the safety of the system, in my opinion. It reduces the closeness to implementation that Jim discussed before. I am concentrating on intra-communication now. A context can have an attribute that says it is invalid for inter-communication. When one tries to use it in a communicator for inter communication, one fails. The problem cannot happen, if one is prohibited from binding the communicator (whether locally or globally) if it is a closed-mode context. Since one has to transmit (by flattening/unflattening) an opaque context, one presumably DOES NOT FLATTEN a closed-mode context, making it unreasonable that a non-group process would have it in the first place. --------- - Tony From owner-mpi-context@CS.UTK.EDU Wed Jul 28 14:10:43 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA12440; Wed, 28 Jul 93 14:10:43 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23111; Wed, 28 Jul 93 14:09:53 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 28 Jul 1993 14:09:48 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23103; Wed, 28 Jul 93 14:09:46 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA29899; Wed, 28 Jul 93 13:09:38 CDT Date: Wed, 28 Jul 93 13:09:38 CDT From: Tony Skjellum Message-Id: <9307281809.AA29899@Aurora.CS.MsState.Edu> To: jim@meiko.co.uk, tony@Aurora.CS.MsState.Edu Subject: Re: Context and safety Cc: mpi-context@cs.utk.edu Jim, I am happy to do straw polls only after we have a complete chapter, next week, if you have a complete chapter too. As Marc Snir might say, it only makes sense to view "complete" chapters (well, he used to say "complete proposals" but we are at the chapter stage now). - Tony From owner-mpi-context@CS.UTK.EDU Sun Aug 8 19:52:27 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA10632; Sun, 8 Aug 93 19:52:27 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08386; Sun, 8 Aug 93 19:51:08 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 8 Aug 1993 19:51:08 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08378; Sun, 8 Aug 93 19:51:07 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA11397; Sun, 8 Aug 93 18:51:01 CDT Date: Sun, 8 Aug 93 18:51:01 CDT From: Tony Skjellum Message-Id: <9308082351.AA11397@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: memento mori I am working hard to get you a revised context chapter out by tomorrow... Please be patient with us. We have fixed many problems since the last meeting, despite miniscule feedback (except from Jim Cownie). Thanks in advance for your patience and cooperation. 
I have asked David to put us on Thursday, so you can pound Tom, Lyndon, and me on Wednesday night about the chapter, and we can read it on Thursday. I think you will like what we have done, and there will be ample opportunity for straw votes :-) - Tony From owner-mpi-context@CS.UTK.EDU Mon Aug 9 01:34:04 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA10998; Mon, 9 Aug 93 01:34:04 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24924; Mon, 9 Aug 93 01:33:28 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 9 Aug 1993 01:33:25 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA24846; Mon, 9 Aug 93 01:33:12 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA13734; Mon, 9 Aug 93 00:33:10 CDT Date: Mon, 9 Aug 93 00:33:10 CDT From: Tony Skjellum Message-Id: <9308090533.AA13734@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: latest context chapter (postscript) Dear Colleagues: The following is the context postscript file, latest. We definitely will have incremental (but not major) improvements at the meeting, particularly in the form of additional examples. - Tony Skjellum PS You can send me your comments before the meeting, if you have any.
[PostScript attachment: the context chapter, unify5.dvi converted with dvips 5.47, 24 pages; the raw PostScript data is omitted here.]
7F1FF83FFDFF0FF0FF18167E951B>97 DI<01FF8007FFE01FC3F03F03F03E03F07E03F07C01E0FC0000FC0000FC0000FC 0000FC0000FC0000FC0000FC00007E00007E00003F00303F80701FE1E007FFC001FF0014167E95 19>I<0003FE000003FE0000007E0000007E0000007E0000007E0000007E0000007E0000007E00 00007E0000007E0000007E0000007E0001FE7E0007FFFE001FC3FE003F00FE003E007E007E007E 007C007E00FC007E00FC007E00FC007E00FC007E00FC007E00FC007E00FC007E00FC007E007C00 7E007E007E003E00FE003F01FE001F83FE000FFF7FC001FC7FC01A237EA21F>I<01FE0007FF80 1F87E03F03E03E01F07E00F07C00F8FC00F8FC00F8FFFFF8FFFFF8FC0000FC0000FC0000FC0000 7E00007E00003F00181F80380FE0F007FFE000FF8015167E951A>I<003F8001FFC003F7E007E7 E007E7E00FC7E00FC3C00FC0000FC0000FC0000FC0000FC0000FC000FFFC00FFFC000FC0000FC0 000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0 000FC0000FC0000FC0007FFC007FFC0013237FA211>I<1F003F803F803F803F803F801F000000 000000000000000000000000FF80FF801F801F801F801F801F801F801F801F801F801F801F801F 801F801F801F801F801F801F80FFF0FFF00C247FA30F>105 D108 DII<00FE0007FFC00F83E01E00F0 3E00F87C007C7C007C7C007CFC007EFC007EFC007EFC007EFC007EFC007EFC007E7C007C7C007C 3E00F81F01F00F83E007FFC000FE0017167E951C>I I114 D<0FFB003FFF007C1F00780700F00300F00300F80000FF0000FFF8007FFC007FFE001FFF000FFF 80007F80C00F80C00F80E00780F00780F80F00FC1F00FFFE00C7F80011167E9516>I<00C00000 C00000C00000C00001C00001C00003C00007C0000FC0001FC000FFFF00FFFF000FC0000FC0000F C0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC0000FC1800FC1800FC1800FC1800F C1800FE38007E70003FF0000FC0011207F9F16>III120 DI E /Fj 68 124 df<007E7E01FFFF07CFCF070F8F 0F0F0F0E07000E07000E07000E07000E0700FFFFF0FFFFF00E07000E07000E07000E07000E0700 0E07000E07000E07000E07000E07000E07000E07007F0FF07F0FF0181A809916>11 D<007E0001FE0007CF00070F000F0F000E0F000E00000E00000E00000E0000FFFF00FFFF000E07 000E07000E07000E07000E07000E07000E07000E07000E07000E07000E07000E07007F0FE07F0F E0131A809915>I<007E1F8001FF7FC007C7F1E00707C1E00F07C1E00E0781E00E0380000E0380 000E0380000E038000FFFFFFE0FFFFFFE00E0380E00E0380E00E0380E00E0380E00E0380E00E03 80E00E0380E00E0380E00E0380E00E0380E00E0380E00E0380E07F8FE3FC7F8FE3FC1E1A809920 >14 D33 DI39 D<01C00380070007000E001C001C00380038007000700070006000E000E0 00E000E000E000E000E000E000E000E000E000E0006000700070007000380038001C001C000E00 07000700038001C00A267E9B0F>II44 DII<07801FE03870303070386018E01CE0 1CE01CE01CE01CE01CE01CE01CE01CE01CE01CE01C60187038703838701FE007800E187E9713> 48 D<03000700FF00FF0007000700070007000700070007000700070007000700070007000700 0700070007000700FFF0FFF00C187D9713>I<1FC03FF071F8F078F83CF83CF81C701C003C003C 0038007800F000E001C0038007000E0C1C0C380C7018FFF8FFF8FFF80E187E9713>I<0FC03FE0 78F078787C7878783878007800F001F00FE00FC000F00078003C003C003CF83CF83CF83CF87870 F87FF01FC00E187E9713>I<0070007000F001F001F00370077006700E700C7018703870307060 70E070FFFFFFFF0070007000700070007007FF07FF10187F9713>I<03F00FF81E783C78387870 7870006000E180EFE0FFF0F838F038F01CE01CE01CE01CE01C701C701C70383C701FE00FC00E18 7E9713>54 D58 DI<1FC07FF07070F038F038F038F038007800F0 01E003C0038007000700060006000600060000000000000000000F000F000F000F000D1A7E9912 >63 D<000C0000001E0000001E0000001E0000003F0000003F0000003F00000077800000678000 0067800000C3C00000C3C00000C3C0000181E0000181E0000181E0000300F00003FFF00003FFF0 000600780006007800060078000E003C001E003C00FF81FFC0FF81FFC01A1A7F991D>65 DI<007F0601FFE607E0FE0F803E1E001E3C001E3C000E78000E780006F00006F0 0000F00000F00000F00000F00000F00000F000067800067800063C000E3C000C1E001C0F803807 E0F001FFE0007F80171A7E991C>I69 D72 DI<0F 
FF0FFF007800780078007800780078007800780078007800780078007800780078007800787078 F878F878F8F8F9F07FE01F80101A7F9914>I76 DII<007F000001FFC00007C1F0000F0078001E003C003C001E00 38000E0078000F0070000700F0000780F0000780F0000780F0000780F0000780F0000780F00007 80F000078078000F0078000F0038000E003C001E001E003C000F00780007C1F00001FFC000007F 0000191A7E991E>II82 D<0FC63FF6787E701EE00EE00EE006E006F000FC007F807FF03FFC0FFE 01FE003F000F000FC007C007E007E00FF00EFC3CDFFCC7F0101A7E9915>I<7FFFFF007FFFFF00 781E0F00601E0300601E0300E01E0380C01E0180C01E0180C01E0180001E0000001E0000001E00 00001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E0000001E 0000001E0000001E000003FFF00003FFF000191A7F991C>IIII89 D<1830387070E060C0E1C0C1 80C180F9F0F9F0F9F078F00C0B7B9913>92 D<18387060E0C0C0F8F8F878050B7E990B>96 D<7F80FFE0F1F0F0F0607000F01FF03FF07870F070E070E073E0F3F1F37FFE3F3C10107E8F13> II<07F81FFC3C3C783C7018E000E000E000E000E000F0007000780C3E1C1FF807 E00E107F8F11>I<007E00007E00000E00000E00000E00000E00000E00000E00000E00000E000F EE001FFE003C3E00781E00700E00E00E00E00E00E00E00E00E00E00E00E00E00700E00781E003C 3E001FFFC00FCFC0121A7F9915>I<07E01FF03C78701C701CFFFCFFFCE000E000E000F0007000 780C3E1C1FF807E00E107F8F11>I<00F803FC07BC0F3C0E3C0E000E000E000E000E00FFC0FFC0 0E000E000E000E000E000E000E000E000E000E000E000E007FE07FE00E1A80990C>I<0FDF1FFF 38777038703870387038703838703FE07FC0700070007FF83FFC7FFEF01FE00FE007E007F00F7C 3E3FFC0FF010187F8F13>II<3C003C003C003C00000000000000000000000000 FC00FC001C001C001C001C001C001C001C001C001C001C001C001C00FF80FF80091A80990A>I< 01E001E001E001E000000000000000000000000007E007E000E000E000E000E000E000E000E000 E000E000E000E000E000E000E000E000E060E0F1E0F3C0FF807F000B2183990C>IIIII<07E01FF8381C700E 6006E007E007E007E007E007E007700E700E3C3C1FF807E010107F8F13>II<07C6001FF6003E3E0078 1E00700E00F00E00E00E00E00E00E00E00E00E00F00E00700E00781E003C7E001FFE000FCE0000 0E00000E00000E00000E00000E00007FC0007FC012177F8F14>II<3FE07FE0F0E0E060E060F000 FF807FC01FE001F0C0F0E070E070F0F0FFE0DF800C107F8F0F>I<0C000C000C000C001C001C00 3C00FFC0FFC01C001C001C001C001C001C001C001C601C601C601C601EE00FC007800B177F960F >IIII<7F1FC07F1FC00F1E00071C00 03B80003B00001E00000E00000F00001F00003B800071C00061C001E0E00FF1FE0FF1FE0131080 8F14>II<7FF87FF870F860F061E063C063C007800F181E181E183C387830F870FFF0FFF00D107F8F11> II E /Fk 8 118 df<78FCFCFCFC78000000000078FCFCFCFC7806 117D900C>58 D 68 D<03FC001FFF003F1F007C1F007C1F00F80E00F80000F80000F80000F80000F80000FC0000 7C00007E01803F87801FFF0003FC0011117F9014>99 D<1E003F003F003F003F001E0000000000 000000007F007F001F001F001F001F001F001F001F001F001F001F001F001F001F00FFC0FFC00A 1B809A0C>105 D110 D<03F8000FFE003E0F803C 07807803C07803C0F803E0F803E0F803E0F803E0F803E0F803E07803C07C07C03E0F800FFE0003 F80013117F9016>I<1FF07FF07070E030F030FC00FFE07FF07FF81FFC01FCC03CE01CE01CF838 FFF8CFE00E117F9011>115 D117 D E /Fl 83 126 df<60F0F0F0F0F0F0F0F0F0F0F0F0F0F0F06000000000F0F0F0F00419779816 >33 D<6030F078F078F078F078F078F078F078F078F078E038E0380D0C7C9916>I<0387000387 000387000387000387000387007FFFC0FFFFE0FFFFE0070E00070E000F1E000F1E000E1C000E1C 000E1C00FFFFE0FFFFE07FFFC01C38001C38001C38001C38001C38001C380013197F9816>I<38 03807C0380FE0780FE0780EE0F00EE0F00EE0E00EE1E00FE1E00FE3C007C3C0038780000780000 700000F00000F00001E00001E00001C00003C00003C0000783800787C00F0FE00F0EE00E0EE01E 0EE01E0EE03C0EE03C0FE03807C038038013207F9C16>37 D<03C0000FE0001FF0001EF0001C70 001C70001C70001CF7E01DF7E01FE7E01FCF000F8F001F8E003F1E007F1E007F9C00F7FC00F3FC 00E1F800E1F9C0F0F1C0F1F9C07FFFC07FFFC01F0F0013197F9816>I<1C3C3E1E0E0E0E1E1C3C 
78F060070D799816>I<00E001E007C007800F001E003C0038007800700070007000F000E000E0 00E000E000E000E000E000F000700070007000780038003C001E000F00078007C001E000E00B21 7A9C16>II< 01C00001C00001C00001C00071C700F9CF807FFF001FFC0007F00007F0001FFC007FFF00F9CF80 71C70001C00001C00001C00001C00011127E9516>I<01C00001C00001C00001C00001C00001C0 0001C00001C000FFFF80FFFF80FFFF8001C00001C00001C00001C00001C00001C00001C00001C0 0011137E9516>I<387C7E7E3E0E1E3CFCF860070B798416>II<70F8F8F8700505788416>I<000380000380000780000780000F00000F00001E00001E00003C 00003C0000780000780000F00000F00001E00001E00003C00003C0000780000780000F00000F00 001E00001E00003C00003C0000780000780000F00000F00000E00000E0000011207E9C16>I<03 E0000FF8001FFC001E3C00380E00780F00700700700700E00380E00380E00380E00380E00380E0 0380E00380E00380F00780700700700700780F003C1E001E3C001FFC000FF80003E00011197E98 16>I<0380038007800F801F80FF80FF80F3800380038003800380038003800380038003800380 03800380038003807FFC7FFC7FFC0E197C9816>I<0FF0001FFC007FFE00783F00F00F00F00780 F00380F00380000380000380000780000700000F00001E00003C0000780000F00003E00007C000 0F00001E03803C0380FFFF80FFFF80FFFF8011197E9816>I<0FF0003FFC007FFE00781F00780F 00780700300700000F00000F00003E0007FC0007F80007FC00001E00000F000007800003806003 80F00380F00780F00F00F81F007FFE003FFC000FF00011197E9816>I<007C0000FC0000DC0001 DC00039C00039C00071C000F1C000E1C001E1C003C1C00381C00781C00F01C00FFFFE0FFFFE0FF FFE0001C00001C00001C00001C00001C0001FFC001FFC001FFC013197F9816>I<3FFE003FFE00 3FFE003800003800003800003800003800003800003800003FF0003FFC003FFE003C1F00380700 000780000380600380F00380F00780F00F00F83F007FFE003FFC000FF00011197E9816>I<00FC 0003FE000FFF001F8F003E0F003C0F00780600700000F04000F7F800FFFE00FFFE00F80F00F007 80F00780E00380F00380F00380700380780780780F003E1F001FFE000FFC0007F00011197E9816 >I<07F0001FFC003FFE007C1F00F00780E00380E00380E003807007007C1F001FFC0007F0001F FC003C1E00700700F00780E00380E00380E00380F007807007007C1F003FFE001FFC0007F00011 197E9816>56 D<70F8F8F870000000000000000070F8F8F8700512789116>58 D<387C7C7C380000000000000000387C7C7C3C1C3C38F8F0600618799116>I<000380000F8000 1F80007E0000FC0003F00007E0001F80003F0000FC0000F80000FC00003F00001F800007E00003 F00000FC00007E00001F80000F8000038011157E9616>I<7FFF00FFFF80FFFF80000000000000 000000000000000000FFFF80FFFF807FFF00110B7E9116>II<00E00001F00001F00001B00001B00003B80003B80003 B800031800071C00071C00071C00071C00071C000E0E000E0E000FFE000FFE001FFF001C07001C 07001C0700FF1FE0FF1FE0FF1FE013197F9816>65 DI<03F18007FF800FFF801F0F803C 0780780780780380700380F00000E00000E00000E00000E00000E00000E00000E00000F0000070 03807803807803803C07801F0F000FFE0007FC0003F00011197E9816>IIII<03F3000FFF001FFF003F1F003C0F00780F00780700700700F00000E00000E00000E00000 E00000E07FC0E07FC0E07FC0F00700700700780F00780F003C0F003F1F001FFF000FFF0003E700 12197E9816>III75 DIII<1FFC003FFE007FFF00780F00F00780E00380E00380E00380E003 80E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380F00780F00780780F 007FFF003FFE001FFC0011197E9816>II82 D<0FF3001FFF007FFF00781F00F0 0F00E00700E00700E00000F000007800007F80003FF8000FFC0000FE00000F0000078000038000 0380E00380E00380F00780F81F00FFFE00FFFC00CFF80011197E9816>IIIII<7F3F807F3F807F3F800E1E000E1C00073C0007380003B80003F00001F00001E0 0000E00001E00001F00003F00003B80007B800071C00071C000E0E000E0E001C0700FF1FE0FF1F E0FF1FE013197F9816>IIIIII95 D<0C1E3C7870F0E0E0E0F0F87870070D789B16>I<1FE0007FF8007FFC00783C 00301E00000E00007E000FFE003FFE007FCE00F80E00E00E00E00E00F01E00F83E007FFFE03FFF E01FC3E013127E9116>II<03FC0FFE1FFE3E1E780C7000F000E000E000E000E000F000 
70077C073E1F1FFE0FFC03F010127D9116>I<007F00007F00007F000007000007000007000007 0007E7001FFF003FFF003E3F00780F00700F00F00700E00700E00700E00700E00700F00F00F00F 00781F007E3F003FFFF01FF7F007E7F014197F9816>I<07E01FF83FFC7C3E781FF00FF007FFFF FFFFFFFFE000F000F007780F7E1F3FFE0FFC03F010127D9116>I<001F00007F8000FF8001E780 01C30001C00001C000FFFF00FFFF00FFFF0001C00001C00001C00001C00001C00001C00001C000 01C00001C00001C00001C00001C0007FFF007FFF007FFF0011197F9816>I<03E7C00FFFE01FFF E01E3CE03C1E00380E00380E00380E003C1E001E3C001FFC003FF8003BE0003800003C00001FFE 003FFF807FFFC07807C0F001E0E000E0E000E0E000E0F001E07E0FC03FFF801FFF0007FC00131C 7F9116>II<03C003C003C003C000000000000000007FC07FC07FC001C001C001C001C0 01C001C001C001C001C001C001C001C0FFFFFFFFFFFF101A7D9916>I107 DIII<03E0000FF8001FFC003C1E00780F00700700E00380E00380E00380E003 80E00380F00780700700780F003C1E001FFC000FF80003E00011127E9116>II<07E7001FF7003FFF007C3F00781F00F00F00F00F00E00700E00700E00700E00700F00F00 F00F00781F007C3F003FFF001FF7000FC700000700000700000700000700000700000700007FF0 007FF0007FF0141B7E9116>II<1FEC3FFC 7FFCF03CE01CE01CF8007FC03FF007FC003EE00EE00EF00EF83EFFFCFFF8CFF00F127D9116>I< 070000070000070000070000070000FFFF00FFFF00FFFF00070000070000070000070000070000 070000070000070100070380070380070780078F8003FF0003FE0000F80011177F9616>IIII<7F3FC07F3FC07F3FC00F1C00073C 0003B80003F00001F00000E00001E00001F00003B800073C00071C000E0E00FF3FE0FF3FE0FF3F E013127F9116>II<7FFFC07FFFC07FFFC0700F80701F00703E00007C00 00F80001F00003E00007C0000F80001F01C03E01C07C01C0FFFFC0FFFFC0FFFFC012127F9116> I<003F8000FF8001FF8001E00001C00001C00001C00001C00001C00001C00001C00001C00001C0 0003C000FF8000FF0000FF0000FF800003C00001C00001C00001C00001C00001C00001C00001C0 0001C00001C00001E00001FF8000FF80003F8011207E9C16>I125 D E /Fm 13 119 df<183878380808101020404080050C7D830D> 44 D<3078F06005047C830D>46 D<03CC063C0C3C181C3838303870387038E070E070E070E070 E0E2C0E2C0E261E462643C380F127B9115>97 D<3F00070007000E000E000E000E001C001C001C 001C0039C03E60383038307038703870387038E070E070E070E060E0E0C0C0C1C0618063003C00 0D1D7B9C13>I<01F007080C08181C3838300070007000E000E000E000E000E000E008E0106020 30C01F000E127B9113>I<01E007100C1018083810701070607F80E000E000E000E000E000E008 6010602030C01F000D127B9113>101 D<00F3018F030F06070E0E0C0E1C0E1C0E381C381C381C 381C383830383038187818F00F700070007000E000E0C0C0E1C0C3007E00101A7D9113>103 D<01800380010000000000000000000000000000001C002600470047008E008E000E001C001C00 1C0038003800710071007100720072003C00091C7C9B0D>105 D<3C3C26C2468747078E068E00 0E000E001C001C001C001C0038003800380038007000300010127C9112>114 D<01F006080C080C1C18181C001F001FC00FF007F0007800386030E030C030806060C01F000E12 7D9111>I<00C001C001C001C00380038003800380FFE00700070007000E000E000E000E001C00 1C001C001C00384038403840388019000E000B1A7D990E>I<1E0300270700470700470700870E 00870E000E0E000E0E001C1C001C1C001C1C001C1C003838803838801838801839001C5900078E 0011127C9116>I<1E06270E470E4706870287020E020E021C041C041C041C0818083808181018 200C4007800F127C9113>I E /Fn 46 123 df<00030007001E003C007800F800F001E003E007 C007C00F800F801F801F003F003F003E003E007E007E007E007C00FC00FC00FC00FC00FC00FC00 FC00FC00FC00FC00FC00FC00FC00FC007C007E007E007E003E003E003F003F001F001F800F800F 8007C007C003E001E000F000F80078003C001E00070003103C7AAC1B>40 DI<3E007F00FF80FF80FFC0FFC0FFC07FC03EC000C000C001C001800180 0380070006000E001C00380030000A157B8813>44 DI<3E007F00FF80FF80FF80FF80FF807F003E0009097B8813>I<003F800001FFF0 0007E0FC000FC07E001F803F001F803F003F001F803F001F807F001FC07F001FC07F001FC07F00 
1FC0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF 001FE0FF001FE0FF001FE0FF001FE0FF001FE0FF001FE07F001FC07F001FC07F001FC07F001FC0 3F001F803F001F801F803F001F803F000FC07E0007E0FC0001FFF000003F80001B277DA622>48 D<000700000F00007F0007FF00FFFF00FFFF00F8FF0000FF0000FF0000FF0000FF0000FF0000FF 0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF 0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF0000FF007FFFFE7FFFFE7FFF FE17277BA622>I<00FFC00007FFF8001FFFFE003F03FF007E00FF807F007FC0FF807FC0FF803F E0FF803FE0FF803FE0FF801FE07F001FE03E003FE000003FE000003FC000003FC000007F800000 7F800000FF000001FE000001FC000003F8000007E000000FC000001F8000003E00E0007C00E000 7800E000F001C001E001C0038001C007FFFFC00FFFFFC01FFFFFC03FFFFFC07FFFFF80FFFFFF80 FFFFFF80FFFFFF801B277DA622>I<007FC00003FFF00007FFFC000FC1FE001F80FF003FC0FF00 3FE07F803FE07F803FE07F803FE07F803FE07F801FC0FF800F80FF000000FF000001FE000003FC 000007F00000FFC00000FFF8000001FE000000FF0000007F8000007FC000003FC000003FE01E00 3FE07F803FE07F803FE0FFC03FE0FFC03FE0FFC03FE0FFC03FC0FFC07FC07F807F807F00FF803F 81FF001FFFFC0007FFF80000FFC0001B277DA622>I<0000070000000F0000001F0000003F0000 007F000000FF000000FF000001FF000003FF0000077F00000F7F00000E7F00001C7F0000387F00 00707F0000F07F0000E07F0001C07F0003807F0007007F000F007F000E007F001C007F0038007F 0070007F00E0007F00FFFFFFF8FFFFFFF8FFFFFFF80000FF000000FF000000FF000000FF000000 FF000000FF000000FF00007FFFF8007FFFF8007FFFF81D277EA622>I<0C0007000FC03F000FFF FE000FFFFE000FFFFC000FFFF8000FFFF0000FFFC0000FFF00000E0000000E0000000E0000000E 0000000E0000000E0000000E7FC0000FFFF8000FC1FE000E007F000C007F8000003F8000003FC0 00003FC000003FE000003FE03E003FE07F003FE0FF803FE0FF803FE0FF803FE0FF803FC0FF003F C07E007FC078007F803C00FF001F83FE000FFFFC0007FFF00000FF80001B277DA622>I<0007F8 00003FFC0000FFFE0001FE1F0007F81F000FE03F800FC07F801FC07F803F807F803F807F807F80 3F007F001E007F0000007F000000FF000000FF1FE000FF3FF800FF70FE00FFE03F00FFC03F80FF 801FC0FF801FC0FF801FC0FF001FE0FF001FE0FF001FE0FF001FE07F001FE07F001FE07F001FE0 7F001FE03F801FC03F801FC01F803F800FC03F8007F0FF0003FFFC0001FFF800007FE0001B277D A622>I<380000003E0000003FFFFFF03FFFFFF03FFFFFF03FFFFFE07FFFFFC07FFFFF807FFFFF 807FFFFF0070001E0070003C0070007800E0007000E000F000E001E0000003C0000007C0000007 8000000F8000000F0000001F0000003F0000003F0000003F0000007E0000007E000000FE000000 FE000000FE000000FE000001FE000001FE000001FE000001FE000001FE000001FE000001FE0000 01FE000000FC0000007800001C297CA822>I<007FC00001FFF80007FFFC000FC0FE000F003F00 1E001F001E001F803E000F803E000F803F000F803FC00F803FF01F803FF81F003FFE3F001FFFFE 001FFFFC000FFFF00007FFFC0003FFFE0003FFFF000FFFFF801FBFFFC03F0FFFC07E03FFE07C00 FFE0FC007FE0F8001FE0F80007E0F80007E0F80003E0F80003E0FC0003C07C0007C07E0007803F 000F801FC07F000FFFFE0007FFF80000FFC0001B277DA622>I<007FC00003FFF00007FFFC000F E0FE001FC07E003F803F007F003F807F003F80FF001FC0FF001FC0FF001FC0FF001FC0FF001FE0 FF001FE0FF001FE0FF001FE07F003FE07F003FE07F003FE03F807FE01F80FFE00FE1DFE003FF9F E000FF1FE000001FE000001FC000001FC00F001FC01F803FC03FC03F803FC03F803FC07F003FC0 7F003F80FE001F01FC001F07F8000FFFE00007FFC00001FE00001B277DA622>I<00007FF80180 0007FFFE0780001FFFFF8F80007FF80FFF8000FF8001FF8003FE00007F8007FC00003F8007F800 001F800FF000000F801FE000000F803FE0000007803FC0000007807FC0000003807FC000000380 7FC000000380FF8000000000FF8000000000FF8000000000FF8000000000FF8000000000FF8000 000000FF8000000000FF8000000000FF8000000000FF8000000000FF80000000007FC000000000 7FC0000003807FC0000003803FC0000003803FE0000003801FE0000007800FF00000070007F800 
000F0007FC00001E0003FE00003C0000FF8000F800007FF807F000001FFFFFC0000007FFFF0000 00007FF8000029297CA832>67 D69 DI<00007FF003000007FFFE0F00001FFFFF9F00007FF0 0FFF0000FF8003FF0003FE0000FF0007FC00007F000FF800003F000FF000001F001FE000001F00 3FE000000F003FC000000F007FC0000007007FC0000007007FC000000700FF8000000000FF8000 000000FF8000000000FF8000000000FF8000000000FF8000000000FF8000000000FF8000000000 FF8000000000FF8003FFFFF8FF8003FFFFF87FC003FFFFF87FC00001FF007FC00001FF003FC000 01FF003FE00001FF001FE00001FF000FF00001FF000FF80001FF0007FC0001FF0003FE0003FF00 00FF8003FF00007FF01FFF00001FFFFF3F000007FFFE1F0000007FF007002D297CA836>I73 D76 DI<0000FFE000000007FFFC0000003FC07F8000007F001FC00001 FC0007F00003F80003F80007F00001FC000FF00001FE001FE00000FF001FE00000FF003FC00000 7F803FC000007F807FC000007FC07F8000003FC07F8000003FC07F8000003FC0FF8000003FE0FF 8000003FE0FF8000003FE0FF8000003FE0FF8000003FE0FF8000003FE0FF8000003FE0FF800000 3FE0FF8000003FE0FF8000003FE07F8000003FC07FC000007FC07FC000007FC03FC000007F803F C000007F801FE00000FF001FE00000FF000FF00001FE0007F00001FC0003F80003F80001FC0007 F00000FF001FE000003FC07F8000000FFFFE00000000FFE000002B297CA834>79 D<00FF806003FFF0E00FFFFFE01FC0FFE03F001FE03E0007E07E0003E07C0003E0FC0001E0FC00 01E0FC0000E0FE0000E0FF0000E0FF800000FFF80000FFFFC0007FFFF8007FFFFE003FFFFF801F FFFFC00FFFFFC007FFFFE001FFFFF0003FFFF00003FFF800001FF800000FF8000007F8E00003F8 E00001F8E00001F8E00001F8F00001F8F00001F0F80003F0FC0003E0FF0007E0FFE01FC0FFFFFF 80E1FFFE00C03FF8001D297CA826>83 D85 D<03FFC0000FFFF0001F81FC003FC0FE003FC07F003FC07F003FC03F803FC03F801F803F800000 3F8000003F80001FFF8001FFFF8007FE3F801FE03F803FC03F807F803F807F003F80FE003F80FE 003F80FE003F80FE007F80FF007F807F00FFC03FC3DFFC1FFF8FFC03FE07FC1E1B7E9A21>97 D<003FF80001FFFE0007F83F000FE07F801FC07F803F807F803F807F807F807F807F003F00FF00 0000FF000000FF000000FF000000FF000000FF000000FF000000FF000000FF0000007F0000007F 8000003F8001C03FC001C01FC003C00FE0078007F83F0001FFFC00003FF0001A1B7E9A1F>99 D<00003FF80000003FF80000003FF800000003F800000003F800000003F800000003F800000003 F800000003F800000003F800000003F800000003F800000003F800000003F800000003F800003F E3F80001FFFBF80003F83FF8000FE00FF8001FC007F8003F8003F8003F8003F8007F8003F8007F 0003F800FF0003F800FF0003F800FF0003F800FF0003F800FF0003F800FF0003F800FF0003F800 FF0003F800FF0003F8007F0003F8007F0003F8003F8003F8003F8007F8001FC00FF8000FE01FF8 0007F07FFF8001FFFBFF80003FC3FF80212A7EA926>I<003FE00001FFFC0007F07E000FE03F00 1FC01F803F800FC03F800FC07F000FC07F0007E0FF0007E0FF0007E0FF0007E0FFFFFFE0FFFFFF E0FF000000FF000000FF000000FF0000007F0000007F8000003F8000E03F8001E01FC001C00FE0 07C003F81F8001FFFE00003FF8001B1B7E9A20>I<000FF800003FFE0000FF3F0001FC7F8003F8 7F8003F87F8007F07F8007F07F8007F03F0007F0000007F0000007F0000007F0000007F0000007 F00000FFFFC000FFFFC000FFFFC00007F0000007F0000007F0000007F0000007F0000007F00000 07F0000007F0000007F0000007F0000007F0000007F0000007F0000007F0000007F0000007F000 0007F0000007F0000007F0000007F0000007F000007FFF80007FFF80007FFF8000192A7EA915> I<00FF81F007FFF7FC0FE3FF7C1F80FCFC3F80FE7C3F007E787F007F007F007F007F007F007F00 7F007F007F007F007F003F007E003F80FE001F80FC000FE3F8001FFFF00018FF8000380000003C 0000003C0000003E0000003FFFFC003FFFFF001FFFFFC00FFFFFE007FFFFF03FFFFFF07E000FF8 7C0001F8F80001F8F80000F8F80000F8F80000F8FC0001F87E0003F03F0007E01FE03FC007FFFF 0000FFF8001E287E9A22>II<0F801FC01F E03FE03FE03FE01FE01FC00F800000000000000000000000000000FFE0FFE0FFE00FE00FE00FE0 0FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE00FE0FFFEFF FEFFFE0F2B7DAA14>I108 
DII<003FE00001FFFC0003F07E000FC01F801F800FC03F800FE03F0007 E07F0007F07F0007F07F0007F0FF0007F8FF0007F8FF0007F8FF0007F8FF0007F8FF0007F8FF00 07F8FF0007F87F0007F07F0007F03F800FE03F800FE01F800FC00FC01F8007F07F0001FFFC0000 3FE0001D1B7E9A22>II114 D<03FE701FFFF03E03F078 01F07000F0F00070F00070F80070FE0000FFF000FFFF007FFFC03FFFE01FFFF007FFF800FFFC00 07FC0000FCE0007CE0003CF0003CF0003CF80078FC0078FF01F0FFFFC0E1FF00161B7E9A1B>I< 00700000700000700000700000F00000F00000F00001F00003F00003F00007F0001FFFF0FFFFF0 FFFFF007F00007F00007F00007F00007F00007F00007F00007F00007F00007F00007F00007F000 07F00007F03807F03807F03807F03807F03807F03807F03803F87001FCF000FFE0003FC015267F A51B>III 120 DI<3FFFFF803FFFFF803F00FF803C00FF003801FE007803FC007807FC0070 07F800700FF000701FE000001FE000003FC000007F800000FF800000FF000001FE038003FC0380 03FC038007F803800FF007801FF007801FE007003FC00F007F801F00FF807F00FFFFFF00FFFFFF 00191B7E9A1F>I E /Fo 33 91 df<001C0038007000E001E001C00380070007000E000E001C00 1C003C0038003800380070007000700070006000E000E000E000E000E000E000E000E000E000E0 0060006000700070003000380018001C000E0006000E2A7C9E10>40 D<01C000C000E000700070 003800380018001C001C001C001C001C001C000C001C001C001C001C001C001C001C0018003800 38003800380070007000E000E000E001C001C00380070007000E001C0038007000E0000E2A809E 10>I<3C3C7C7C3C0C1C18383870E040060D7E840C>44 D<7FF0FFE0FFE00C037F890E>I<7878F8 F87005057D840C>I<00F80003FE00070E000E07001C07001C07803C0780380780780780780780 780780780780F00F00F00F00F00F00F00F00F00F00F00E00E01E00E01E00E01C00E01C00E03800 70780078F0003FE0001F8000111B7C9A15>48 D<0010007003F01FF01C70007000F000E000E000 E000E000E001E001C001C001C001C001C003C0038003800380038003800780FFF8FFF80D1B7C9A 15>I<00FE0003FF00078F800F07C00F03C00F03C00F0780000780000F80001F00003E0003FC00 03F800001C00000E00000F00000F00000F00000F00780F00F80F00F81F00F81E00F03C00F07C00 7FF0001FC000121B7D9A15>51 D<07018007FF8007FF0007FC000600000E00000C00000C00000C 00000C00000DF8001FFE001F1E001C0F00180700000700000780000F80000F00F00F00F00F00F0 1F00F01E00E03C0070F8007FF0001FC000111B7D9A15>53 D<1800003FFFC03FFFC03FFFC07003 80600700600E00C00C00001C0000380000700000E00000C00001C0000380000380000780000700 000F00000F00000E00001E00001E00001E00001E00003E00003C00001C0000121C7B9B15>55 D<00FE0003FF0007C7800F03800E03C00E03C01E03801E03801F07801F8F000FFE000FFC0003F8 0007FE001EFE003C3F00781F00700F00F00700E00700E00700E00700F00E00701E007C7C003FF0 000FC000121B7D9A15>I<01F80007FE000F8E001E07003C07003C07807C078078078078078078 0780780F80780F80780F00781F00383F003FFF001FEF00021E00001E00001C00F03C00F03800F0 7800E0F000E3E000FF80003E0000111B7C9A15>I<00007000000070000000F0000000F0000001 F0000001F80000037800000378000006780000067800000C7800000C3C0000183C0000183C0000 303C0000303C0000603C0000601E0000FFFE0000FFFE0001801E0001801E0003001F0003000F00 07000F000F000F007FC0FFF0FFC0FFF01C1C7F9B1F>65 D<000FF030003FFC7000FC0EE003F007 E007C003E00F8001E01F0001E01E0000E03C0000C03C0000C0780000C07800000078000000F800 0000F0000000F0000000F0000000F0000000F0000380F000030078000300780007003C000E003E 001C001F003C000FC0F00003FFE00000FF00001C1C7C9B1E>67 D<0FFFFC000FFFFF8000F007C0 00F001E000F000F000F0007000F0007801F0007801E0003801E0003801E0003801E0003C01E000 3803E0003803C0007803C0007803C0007803C0007003C000F007C000E0078001E0078001C00780 03800780078007800E000F807C00FFFFF000FFFFC0001E1C7E9B20>I<0FFFFFE00FFFFFE000F0 03E000F001C000F000C000F000C000F000C001F060C001E060C001E060C001E0600001E1E00001 FFE00003FFC00003C1C00003C0C00003C0C00003C0C0C003C0C18007C001800780018007800300 
078003000780070007800E000F803E00FFFFFE00FFFFFC001B1C7E9B1C>I<0FFFFFC00FFFFFC0 00F007C000F0038000F0018000F0018000F0018001F0018001E0618001E0618001E0600001E0E0 0001E1E00003FFC00003FFC00003C1C00003C0C00003C0C00003C0C00007C18000078000000780 00000780000007800000078000000F800000FFFC0000FFF800001A1C7E9B1B>I<000FF030003F FC7000FC0EE003F007E007C003E00F8001E01F0001E01E0000E03C0000C03C0000C0780000C078 00000078000000F8000000F0000000F0000000F000FFF0F000FFF0F0000F80F0000F8078000F00 78000F003C000F003E001F001F001F000FC07F0003FFF60000FF82001C1C7C9B21>I<0FFF9FFE 0FFF3FFE00F003C000F003C000F003C000F003C000F007C001F007C001E0078001E0078001E007 8001E0078001FFFF8003FFFF8003C00F0003C00F0003C00F0003C00F0003C01F0007C01F000780 1E0007801E0007801E0007801E0007803E000F803E00FFF3FFE0FFF3FFC01F1C7E9B1F>I<0FFF 800FFF8000F00000F00000F00000F00000F00001F00001E00001E00001E00001E00001E00003E0 0003C00003C00003C00003C00003C00007C0000780000780000780000780000780000F8000FFF8 00FFF800111C7F9B0F>I<0FFFC00FFFC000F00000F00000F00000F00000F00001F00001E00001 E00001E00001E00001E00003E00003C00003C00003C00003C00603C00C07C00C07800C07801C07 80180780380780780F81F8FFFFF0FFFFF0171C7E9B1A>76 D<0FFC000FFC0FFC000FFC00FC001F 8000FC001F8000FC00378000DE00378000DE006F8001DE006F80019E00CF00019E00CF00019E01 8F00018F018F00018F031F00038F031F00030F061E00030F061E0003078C1E0003078C1E000307 983E000707983E000607B03C000607B03C000603E03C000603E03C000603C07C001E03C07C00FF E387FFC0FFC387FF80261C7E9B26>I<0FF80FFE0FF80FFE00FC01E000FC00C000FE00C000DE00 C000DE01C001DF01C0018F0180018F8180018781800187C1800187C3800383C3800303E3000301 E3000301F3000300F3000300FF000700FF0006007E0006007E0006003E0006003E0006001E001E 001E00FFE01C00FFC00C001F1C7E9B1F>I<0007F000003FFC0000F81E0001E007000380038007 0003C00E0001C01E0001E03C0001E03C0000E0780000E0780000F0780000E0F00001E0F00001E0 F00001E0F00001E0F00003C0F00003C0F00007807800078078000F0038001E003C003C001E0078 000F81F00003FFC00000FE00001C1C7C9B20>I<0FFFFC000FFFFF0000F00F8000F0038000F003 C000F001C000F001C001F003C001E003C001E003C001E0038001E0078001E00F0003E03E0003FF F80003FFE00003C0000003C0000003C0000007C00000078000000780000007800000078000000F 8000000F800000FFF80000FFF000001A1C7E9B1C>I<0FFFF8000FFFFE0000F00F8000F0038000 F003C000F001C000F001C001F003C001E003C001E003C001E0078001E00F0001E03E0003FFF800 03FFF80003C0FC0003C03C0003C03C0003C03E0007C03C0007803C0007803C0007803C0007803C 0007803C380F803E70FFF81FF0FFF00FE01D1C7E9B1F>82 D<007F0C01FFDC03C1F80780780F00 380E00380E00381E00381E00001F00001F80000FF8000FFF0007FFC001FFE0003FE00003E00001 E00000E00000E06000E06000E06001E07001C0780380FE0F80FFFE00C3F800161C7E9B17>I<1F FFFFF03FFFFFF03C0781F038078060700780606007806060078060600F8060C00F0060C00F0060 000F0000000F0000000F0000001F0000001E0000001E0000001E0000001E0000001E0000003E00 00003C0000003C0000003C0000003C0000003C0000007C00001FFFE0001FFFE0001C1C7C9B1E> III<07FF8FFE07FF8FFE007C03E0003C0380003E0300001E0600001E0E00001F1C00000F 1800000FB0000007E0000007E0000003C0000003E0000003E0000007F000000EF000000CF00000 18F8000030780000707C0000E03C0000C03E0001801E0003801F000F801F00FFE07FF0FFE0FFF0 1F1C7F9B1F>88 DI<03FFFF8007FFFF0007E01F0007803E0007007C00060078000600F8000E 01F0000C03E0000C07C0000007C000000F8000001F0000003E0000007C0000007C000000F80C00 01F00C0003E0180003C0180007C018000F8038001F0030003E0070003E00F0007C03F000FFFFE0 00FFFFE000191C7E9B19>I E /Fp 80 125 df<003F1F8001FFFFC003C3F3C00783E3C00F03E3 C00E01C0000E01C0000E01C0000E01C0000E01C0000E01C000FFFFFC00FFFFFC000E01C0000E01 C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E 
01C0000E01C0000E01C0007F87FC007F87FC001A1D809C18>11 D<003F0001FF8003C3C00783C0 0F03C00E03C00E00000E00000E00000E00000E0000FFFFC0FFFFC00E01C00E01C00E01C00E01C0 0E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C07F87F87F87F8151D80 9C17>I<003FC001FFC003C3C00783C00F03C00E01C00E01C00E01C00E01C00E01C00E01C0FFFF C0FFFFC00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01 C00E01C00E01C07FCFF87FCFF8151D809C17>I<003F83F00001FFDFF80003E1FC3C000781F83C 000F01F03C000E01E03C000E00E000000E00E000000E00E000000E00E000000E00E00000FFFFFF FC00FFFFFFFC000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00 E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C007F C7FCFF807FC7FCFF80211D809C23>I<7070F8F8FCFCFCFC7C7C0C0C0C0C1C1C181838387070F0 F060600E0D7F9C15>34 D<00030180000301800003018000070380000603000006030000060300 000E0700000C0600000C0600000C0600000C0600001C0E00FFFFFFFCFFFFFFFC00301800003018 00003018000070380000603000006030000060300000603000FFFFFFFCFFFFFFFC01C0E0000180 C0000180C0000180C0000381C00003018000030180000301800007038000060300000603000006 0300001E257E9C23>I<70F8FCFC7C0C0C1C183870F060060D7D9C0C>39 D<01C00380038007000E000C001C001800380038007000700070007000E000E000E000E000E000 E000E000E000E000E000E000E000E000E00070007000700070003800380018001C000C000E0007 000380038001C00A2A7D9E10>II<018001C0018001806186F99F 7DBE1FF807E007E01FF87DBEF99F61860180018001C0018010127E9E15>I<70F8F8F878181818 383070E060050D7D840C>44 DI<70F8F8F87005057D840C>I<0003 0003000700060006000E000C001C0018001800380030003000700060006000E000C000C001C001 800380030003000700060006000E000C000C001C001800180038003000700060006000E000C000 C00010297E9E15>I<07E00FF01C38381C781E700E700EF00FF00FF00FF00FF00FF00FF00FF00F F00FF00FF00FF00FF00F700E700E781E381C1C380FF007E0101B7E9A15>I<030007003F00FF00 C70007000700070007000700070007000700070007000700070007000700070007000700070007 000700FFF8FFF80D1B7C9A15>I<0FE03FF878FC603EF01EF81FF80FF80F700F000F001F001E00 3E003C007800F001E001C0038007000E031C0338037006FFFEFFFEFFFE101B7E9A15>I<0FE03F F8387C783E7C1E781E781E001E003C003C00F807F007E00078003C001E000F000F000F700FF80F F80FF81EF01E787C3FF80FE0101B7E9A15>I<001C00001C00003C00007C00007C0000DC0001DC 00039C00031C00071C000E1C000C1C00181C00381C00301C00601C00E01C00FFFFC0FFFFC0001C 00001C00001C00001C00001C00001C0001FFC001FFC0121B7F9A15>I<301C3FFC3FF83FE03000 3000300030003000300037E03FF83C3C381E301E000F000F000F000FF00FF00FF00FF01E703E78 7C3FF80FE0101B7E9A15>I<01F807FC0F8E1E1E3C1E381E781E78007000F080F7F8FFFCFC1CF8 1EF80FF00FF00FF00FF00FF00F700F700F781E381E1E3C0FF807E0101B7E9A15>I<6000007FFF 807FFF807FFF80600700C00600C00E00C01C0000380000300000700000600000E00000C00001C0 0001C00003C0000380000380000380000780000780000780000780000780000780000780000780 00111C7E9B15>I<07E01FF83C3C381E701E700E700E780E7C1E7F3C3FF81FF00FF01FFC3DFC78 7E703FF00FE00FE007E007E007F00E781E3C3C1FF807E0101B7E9A15>I<07E01FF83C38781C78 1EF00EF00EF00FF00FF00FF00FF00FF01F781F383F3FFF1FEF010F000E001E781E781C783C7878 78F03FE01F80101B7E9A15>I<70F8F8F870000000000000000070F8F8F87005127D910C>I<70F8 F8F870000000000000000070F8F8F878181818383070E060051A7D910C>I61 D<00060000000F0000000F0000000F0000001F8000001F8000001F8000003F C0000033C0000033C0000073E0000061E0000061E00000E1F00000C0F00000C0F00001C0F80001 80780001FFF80003FFFC0003003C0003003C0007003E0006001E0006001E001F001F00FFC0FFF0 FFC0FFF01C1C7F9B1F>65 DI<003FC18001FFF18003F07B800FC0 1F801F000F801E0007803C0003807C0003807800038078000180F0000180F0000000F0000000F0 
000000F0000000F0000000F0000000F000000078000180780001807C0001803C0003801E000300 1F0007000FC00E0003F03C0001FFF000003FC000191C7E9B1E>IIII<003FC18001FFF18003F07B800FC01F801F000F801E0007 803C0003807C0003807800038078000180F0000180F0000000F0000000F0000000F0000000F000 0000F000FFF0F000FFF078000780780007807C0007803C0007801E0007801F0007800FC00F8003 F03F8001FFFB80003FE1801C1C7E9B21>III75 DIII<003F800001FFF00003E0F80007001C000E000E001C0007003C 00078038000380780003C0700001C0F00001E0F00001E0F00001E0F00001E0F00001E0F00001E0 F00001E0F00001E0780003C0780003C0380003803C0007801E000F000E000E0007803C0003E0F8 0001FFF000003F80001B1C7E9B20>II<003F800001FFF00003E0 F80007803C000E000E001C0007003C00078038000380780003C0780003C0F00001E0F00001E0F0 0001E0F00001E0F00001E0F00001E0F00001E0F00001E0780003C0780003C0380003803C1F0780 1C3F87000E398E0007B0FC0003F8F80001FFF000003FE020000060200000602000007060000038 E000003FC000003FC000001FC000000F001B247E9B20>II<07F1801FFD803C1F8070078070 0380E00380E00180E00180F00000F80000FE00007FE0003FFC001FFE000FFF0000FF80000F8000 07C00003C00001C0C001C0C001C0E001C0E00380F00780FE0F00DFFE00C7F800121C7E9B17>I< 7FFFFFC07FFFFFC0780F03C0700F01C0600F00C0E00F00E0C00F0060C00F0060C00F0060C00F00 60000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F 0000000F0000000F0000000F0000000F0000000F0000000F000003FFFC0003FFFC001B1C7F9B1E >IIII91 D<18183C3C383870706060E0E0C0C0C0C0F8F8FCFCFCFC7C7C38380E0D7B9C15> II<1FE0003FF8003C3C003C1E00180E00000E00001E0007FE003FFE007E0E 00F80E00F80E00F00E60F00E60F81E607C7E607FFFC01FC78013127F9115>97 DI<07F80FFC3E3C3C3C78187800F000F000F000F000F000 F000780078063C0E3F1C0FF807F00F127F9112>I<001F80001F80000380000380000380000380 00038000038000038000038000038007F3801FFF803E1F807C0780780380F80380F00380F00380 F00380F00380F00380F00380F003807807807C0F803E1F801FFBF007E3F0141D7F9C17>I<07E0 1FF83E7C781C781EF01EFFFEFFFEF000F000F000F000780078063C0E3F1C0FF807F00F127F9112 >I<00FC03FE079E071E0F1E0E000E000E000E000E000E00FFE0FFE00E000E000E000E000E000E 000E000E000E000E000E000E000E000E007FE07FE00F1D809C0D>I<07E7C01FFFC03C3DC0781E 00781E00781E00781E00781E00781E003C3C003FF80037E0007000007000007800003FFC003FFF 007FFF807807C0F003C0E001C0E001C0F003C0F807C07C0F801FFE0007F800121B7F9115>II<3C007C007C007C003C00000000000000000000000000FC00 FC001C001C001C001C001C001C001C001C001C001C001C001C001C001C00FF80FF80091D7F9C0C >I<01C003E003E003E001C00000000000000000000000000FE00FE000E000E000E000E000E000 E000E000E000E000E000E000E000E000E000E000E000E000E000E0F0E0F1E0F3C0FF807E000B25 839C0D>IIIII<03F0000FFC001E1E00380700780780700380F003C0F003C0F003C0F003 C0F003C0F003C07003807807803807001E1E000FFC0003F00012127F9115>II< 07F1801FF9803F1F803C0F80780780780380F00380F00380F00380F00380F00380F00380F80380 7807807C0F803E1F801FFB8007E380000380000380000380000380000380000380001FF0001FF0 141A7F9116>II<1FB07FF0F0F0E070E030F030F8007FC07FE01FF000F8C078C038E038 F078F8F0FFF0CFC00D127F9110>I<0C000C000C000C000C001C001C003C00FFE0FFE01C001C00 1C001C001C001C001C001C001C301C301C301C301C301E700FE007C00C1A7F9910>II II<7F8FF07F8FF00F0F80070F00038E0001DC0001D80000F00000700000780000F80001DC0003 8E00030E000707001F0780FF8FF8FF8FF81512809116>II<7FFC7FFC783C7078 60F061E061E063C00780078C0F0C1E0C1E1C3C187818F078FFF8FFF80E127F9112>III E /Fq 61 123 df<001FF3F800FFFFFE03F87F3E07E07E3E0FC07E3E0F807C1C0F807C000F807C000F80 7C000F807C000F807C00FFFFFFC0FFFFFFC00F807C000F807C000F807C000F807C000F807C000F 807C000F807C000F807C000F807C000F807C000F807C000F807C000F807C000F807C007FE1FFC0 7FE1FFC01F1D809C1C>11 
D<7C0F80FE1FC0FF1FE0FF1FE0FF1FE0FF1FE07F0FE00300600700E0 0600C00E01C01C03803C0780780F00300600130F7E9C19>34 D<7CFEFFFFFFFF7F0307060E1C3C 7830080F7D9C0D>39 D<007000E001C003C007800F000F001E001E003C003C007C007800780078 00F800F800F000F000F000F000F000F000F000F800F8007800780078007C003C003C001E001E00 0F000F00078003C001C000E000700C297D9E13>II<7CFEFFFFFFFF7F 0307060E1C3C7830080F7D860D>44 DI<0003800003800007 80000780000700000F00000F00001E00001E00001C00003C00003C0000380000780000780000F0 0000F00000E00001E00001E00001C00003C00003C0000380000780000780000F00000F00000E00 001E00001E00001C00003C00003C0000780000780000700000F00000F00000E00000E000001129 7D9E18>47 D<00600001E0000FE000FFE000F3E00003E00003E00003E00003E00003E00003E000 03E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E00003E000 03E0007FFF807FFF80111B7D9A18>49 D<07F8003FFF00787F80F81FC0FC0FC0FC0FE0FC0FE0FC 07E07807E0000FE0000FE0000FC0001F80003F00003E00007C0000F00001E00003C0600780600F 00601C00E03FFFC07FFFC0FFFFC0FFFFC0FFFFC0131B7E9A18>I<00038000000380000007C000 0007C0000007C000000FE000000FE000001FF000001BF000001BF0000031F8000031F8000061FC 000060FC0000E0FE0000C07E0000C07E0001803F0001FFFF0003FFFF8003001F8003001F800600 0FC006000FC00E000FE00C0007E0FFC07FFEFFC07FFE1F1C7E9B24>65 DI<001FF06000FFFC E003FC1FE00FE007E01FC003E01F8001E03F0000E07F0000E07F0000E07E000060FE000060FE00 0000FE000000FE000000FE000000FE000000FE000000FE0000007E0000607F0000607F0000603F 0000E01F8000C01FC001C00FE0078003FC1F0000FFFC00001FF0001B1C7D9B22>III I<001FF81800FFFE3803FC0FF807F003F80FC000F81F8000783F8000787F0000387F0000387E00 0018FE000018FE000000FE000000FE000000FE000000FE000000FE007FFFFE007FFF7E0001F87F 0001F87F0001F83F8001F81F8001F80FE001F807F003F803FE07F800FFFE78001FF818201C7D9B 26>III75 DIII<003FE00001FFFC0003F07E000FC01F801F800FC01F0007C03F0007E07F0007F0 7E0003F07E0003F0FE0003F8FE0003F8FE0003F8FE0003F8FE0003F8FE0003F8FE0003F8FE0003 F87E0003F07E0003F07F0007F03F0007E03F800FE01F800FC00FC01F8003F07E0001FFFC00003F E0001D1C7D9B24>II<003FE00001FFFC0003F07E000FC01F801F800FC01F800FC03F0007E0 7F0007F07E0003F07E0003F0FE0003F8FE0003F8FE0003F8FE0003F8FE0003F8FE0003F8FE0003 F8FE0003F87E0003F07E0003F07F0007F03F0F87E03FBFCFE01FB8EFC00FF07F8003F87E0001FF FC00003FFC0800003E1800001FF800001FF800001FF800001FF000000FF000000FE0000007C01D 247D9B24>II<07F860 1FFFE03E0FE07803E07001E0F000E0F00060F80060F80000FE0000FFF0007FFE007FFF803FFFC0 1FFFE007FFE0007FF00007F00001F00001F0C000F0C000F0E000F0E001E0F001E0FE07C0FFFF80 C3FE00141C7D9B1B>I<7FFFFFE07FFFFFE0781F81E0701F80E0601F8060E01F8070C01F8030C0 1F8030C01F8030C01F8030001F8000001F8000001F8000001F8000001F8000001F8000001F8000 001F8000001F8000001F8000001F8000001F8000001F8000001F8000001F8000001F800007FFFE 0007FFFE001C1C7E9B21>II87 D<7FFE1FFE007FFE1FFE0007F001800003F803800001FC07000000FC06000000FE0C0000007F1C 0000003F380000003FB00000001FE00000000FE00000000FE000000007F000000003F800000007 F80000000FFC0000000CFE000000187E000000387F000000703F800000601F800000C01FC00001 C00FE000018007F000030007F000FFF03FFF80FFF03FFF80211C7F9B24>II<7FFFFC7FFFFC7E01FC7803F87007F0E007 F0E00FE0C01FE0C01FC0C03F80003F80007F0000FE0000FE0001FC0001FC0003F80607F00607F0 060FE0061FE00E1FC00E3F801C3F801C7F003CFE00FCFFFFFCFFFFFC171C7D9B1D>I<0C01801E 03C03C0780380700700E00600C00E01C00C01800FE1FC0FF1FE0FF1FE0FF1FE0FF1FE07F0FE03E 07C0130F7D9C19>92 D<04000E001F003F807BC0E0E0C0600B077A9C18>94 D<0FFC003FFF003E1F803E0FC03E07C01C07C00007C003FFC01FFFC07F87C07F07C0FE07C0FC07 C0FC07C0FE0FC07E3FE03FFBF80FE1F815127F9117>97 D 
I<03FC001FFF003F1F007E1F007E1F00FC0E00FC0000FC0000FC0000FC0000FC0000FC0000FC00 007E01807F03803F87001FFE0003F80011127E9115>I<000FF0000FF00001F00001F00001F000 01F00001F00001F00001F00001F00001F007F9F01FFFF03F0FF07E03F07E01F0FC01F0FC01F0FC 01F0FC01F0FC01F0FC01F0FC01F0FC01F07C01F07E03F03F0FF01FFFFE07F1FE171D7E9C1B>I< 03FC000FFF003F0F803E07C07E03C07C03E0FC03E0FFFFE0FFFFE0FC0000FC0000FC00007C0000 7E00603F00E01FC3C00FFF8003FE0013127F9116>I<007F0001FFC003E7C007C7C00FC7C00F83 800F80000F80000F80000F80000F8000FFF800FFF8000F80000F80000F80000F80000F80000F80 000F80000F80000F80000F80000F80000F80000F80000F80007FF8007FF800121D809C0F>I<07 F9F01FFFF83E1F787C0FB87C0F807C0F807C0F807C0F807C0F803E1F003FFE0037F80070000070 00007800003FFF803FFFE01FFFF07FFFF0F801F8F000F8F00078F00078F800F87E03F03FFFE007 FF00151B7F9118>II<1E003F007F007F007F003F001E00 00000000000000000000FF00FF001F001F001F001F001F001F001F001F001F001F001F001F001F 001F00FFE0FFE00B1E7F9D0E>I<00F801FC01FC01FC01FC01FC00F80000000000000000000003 FC03FC007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C007C 707CF87CF8FCF9F87FF03F800E26839D0F>IIIII<01FC000FFF801F07C03E03E07C01F0 7C01F0FC01F8FC01F8FC01F8FC01F8FC01F8FC01F87C01F07C01F03E03E01F07C00FFF8001FC00 15127F9118>II114 D<1FF87FF87078E018E018F000FF80FFF07FF83FF80FFC007CC03CE01CE01CF878FFF8CFE00E12 7E9113>I<030003000300070007000F000F003F00FFFCFFFC1F001F001F001F001F001F001F00 1F001F001F0C1F0C1F0C1F0C1F9C0FF803F00E1A7F9913>IIIIII<3FFF803FFF803C3F80307F00707E0060FC00 61FC0063F80003F00007E1800FE1801FC1801F83803F03007F0700FE0F00FFFF00FFFF0011127F 9115>I E /Fr 32 122 df<000FE000007FF00000F8780001E0780003C0780007807800070030 0007000000070000000700000007000000070000000700000007000000FFFFF800FFFFF8000700 780007003800070038000700380007003800070038000700380007003800070038000700380007 0038000700380007003800070038000700380007003800070038007FE1FF807FE1FF80192380A2 1B>12 D<70F8FCFC7C0C0C0C1C18183870E0E0060F7C840E>44 D<018003800F80FF80F3800380 038003800380038003800380038003800380038003800380038003800380038003800380038003 8003800380038003800380FFFEFFFE0F217CA018>49 D<03F8000FFE001E1F00380F807007C078 07C07C07C07807C07807C00007C00007C0000780000F80001F00003E0003FC0003F800001E0000 0F000007800007C00003E00003E07003E0F803E0F803E0F803E0F803E0E007C07007C0780F803E 1F000FFE0003F80013227EA018>51 D<1801801E07801FFF801FFF001FFC001FF0001800001800 0018000018000018000018000019F8001FFE001F0F001E07801C03C01803C00001C00001E00001 E00001E00001E07001E0F001E0F001E0F001E0E003C0E003C0700780380F803E1F000FFC0007F0 0013227EA018>53 D<03F8000FFC001F1E003C07003807807803807003C0F003C0F001C0F001C0 F001E0F001E0F001E0F001E0F001E0F003E07803E07803E03C07E03E0FE01FFDE007F9E00081E0 0001C00003C00003C0000380780780780700780F00781E00787C003FF8000FE00013227EA018> 57 D<000180000003C0000003C0000003C0000007E0000007E0000007E000000FF000000CF000 000CF000001CF800001878000018780000383C0000303C0000303C0000601E0000601E0000601E 0000C00F0000C00F0000C00F0001FFFF8001FFFF8001800780030003C0030003C0030003C00600 01E0060001E0060001E00E0000F01F0001F0FFC00FFFFFC00FFF20237EA225>65 D<000FF030007FFC3000FC1E7003F0077007C003F00F8001F01F0001F01F0000F03E0000F03C00 00707C0000707C0000707C000030F8000030F8000030F8000000F8000000F8000000F8000000F8 000000F8000000F8000000F80000307C0000307C0000307C0000303E0000703E0000601F0000E0 1F0000C00F8001C007C0038003F0070000FC1E00007FFC00000FF0001C247DA223>67 D72 D76 DI82 D<07F0600FFE601E1FE03807E07003 E07001E0E000E0E000E0E00060E00060F00060F00000F800007C00007F00003FF0001FFE000FFF 
8003FFC0007FC00007E00001E00001F00000F00000F0C00070C00070C00070E00070E000F0F000 E0F801E0FC01C0FF0780C7FF00C1FC0014247DA21B>I<7FFFFFF87FFFFFF87C0780F870078038 6007801860078018E007801CC007800CC007800CC007800CC007800CC007800C00078000000780 000007800000078000000780000007800000078000000780000007800000078000000780000007 8000000780000007800000078000000780000007800000078000000780000007800003FFFF0003 FFFF001E227EA123>I<1FF0003FFC003C3E003C0F003C0F00000700000700000F0003FF001FFF 003F07007C0700F80700F80700F00718F00718F00F18F81F187C3FB83FF3F01FC3C015157E9418 >97 D<03FE000FFF801F07803E07803C0780780000780000F00000F00000F00000F00000F00000 F00000F000007800007800C03C01C03E01801F87800FFF0003FC0012157E9416>99 D<0000E0000FE0000FE00001E00000E00000E00000E00000E00000E00000E00000E00000E00000 E00000E003F8E00FFEE01F0FE03E03E07C01E07800E07800E0F000E0F000E0F000E0F000E0F000 E0F000E0F000E07000E07801E07801E03C03E01F0FF00FFEFE03F0FE17237EA21B>I<01FC0007 FF001F0F803E03C03C01C07801E07801E0FFFFE0FFFFE0F00000F00000F00000F00000F0000078 00007800603C00E03E01C00F83C007FF0001FC0013157F9416>I<0000F003FBF807FFB80F1F38 1E0F003C07803C07803C07803C07803C07803C07803C07801E0F001F1E001FFC001BF800180000 1800001C00001FFF000FFFC03FFFF07C01F0700078F00078E00038E00038E00038F000787800F0 3F07E01FFFC003FE0015217F9518>103 D<0E0000FE0000FE00001E00000E00000E00000E0000 0E00000E00000E00000E00000E00000E00000E00000E3F800EFFE00FE1E00F80F00F00700F0070 0E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E0070 FFE7FFFFE7FF18237FA21B>I<1E003E003E003E001E0000000000000000000000000000000000 0E00FE00FE001E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FF C0FFC00A227FA10E>I<01C003E003E003E001C00000000000000000000000000000000001E00F E00FE001E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E0 00E000E000E000E000E0F1E0F1C0F3C0FF803E000B2C82A10F>I<0E0000FE0000FE00001E0000 0E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E0FFC0E0FFC0E07E0 0E07800E07000E1E000E3C000E78000EF8000FFC000FFC000F1E000E0F000E0F800E07800E03C0 0E03E00E01E00E01F0FFE3FEFFE3FE17237FA21A>I<0E00FE00FE001E000E000E000E000E000E 000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00 0E000E000E000E000E00FFE0FFE00B237FA20E>I<0E3FC0FF00FEFFF3FFC0FFE0F783C01F807E 01E00F003C00E00F003C00E00E003800E00E003800E00E003800E00E003800E00E003800E00E00 3800E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800E00E003800E0FF E3FF8FFEFFE3FF8FFE27157F942A>I<0E3F80FEFFE0FFE1E01F80F00F00700F00700E00700E00 700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E0070FFE7FFFFE7 FF18157F941B>I<01FC0007FF000F07801C01C03800E07800F0700070F00078F00078F00078F0 0078F00078F00078F000787000707800F03800E01C01C00F078007FF0001FC0015157F9418>I< 0E7EFEFFFFEF1F8F0F0F0F000F000E000E000E000E000E000E000E000E000E000E000E000E00FF F0FFF010157F9413>114 D<1FD83FF87878F038E018E018F018F8007F807FE01FF003F8007CC0 3CC01CE01CE01CF03CF878FFF0CFE00E157E9413>I<060006000600060006000E000E000E001E 003E00FFF8FFF80E000E000E000E000E000E000E000E000E000E000E0C0E0C0E0C0E0C0E0C0F1C 073807F803E00E1F7F9E13>I<0E0070FE07F0FE07F01E00F00E00700E00700E00700E00700E00 700E00700E00700E00700E00700E00700E00700E00F00E00F00E01F00787F807FF7F01FC7F1815 7F941B>I121 D E /Fs 16 121 df<78FCFCFEFE7E06060606060E0C0C1C383870E0C007147A8512>44 D<00003FE0030001FFFC030007F01E07001F800787003E0001CF00FC0000EF01F800007F03F000 007F03E000003F07C000001F0FC000001F0F8000000F1F8000000F3F000000073F000000073E00 0000077E000000077E000000037E000000037C00000003FC00000000FC00000000FC00000000FC 
00000000FC00000000FC00000000FC00000000FC00000000FC00000000FC00000000FC00000000 7C000000007E000000037E000000037E000000033E000000033F000000073F000000061F800000 060F8000000E0FC000000C07C000001C03E000001803F000003801F800007000FC0000E0003F00 01C0001F8007800007F01F000001FFFC0000003FE00028337CB130>67 D<00003FF001800001FF FC01800007F81F0380001FC0038380003F0001E780007C0000E78001F800007F8001F000003F80 03E000001F8007C000000F800FC000000F800F80000007801F80000007801F00000003803F0000 0003803E00000003807E00000003807E00000001807E00000001807C0000000180FC0000000000 FC0000000000FC0000000000FC0000000000FC0000000000FC0000000000FC0000000000FC0000 000000FC0000000000FC0000000000FC00000FFFFC7E00000FFFFC7E0000001FC07E0000000F80 7E0000000F803E0000000F803F0000000F801F0000000F801F8000000F800F8000000F800FC000 000F8007E000000F8003E000000F8001F000001F8001F800001F80007E00003F80003F00007780 001FC001E3800007F80FC1800001FFFF008000003FF800002E337CB134>71 D<03FF00000FFFC0001E03F0003E00F8003E007C003E003C003E003E001C001E0000001E000000 1E0000001E0000001E000003FE00007FFE0003FF1E0007F01E001FC01E003F001E007E001E007C 001E00FC001E00F8001E0CF8001E0CF8001E0CF8003E0CFC007E0C7C007E0C7E01FF1C3F87CFB8 1FFF07F003FC03C01E1F7D9E21>97 D<003FE001FFF803E03C07C03E0F003E1F003E3E003E3C00 1C7C00007C0000780000F80000F80000F80000F80000F80000F80000F80000F80000F800007C00 007C00007C00003E00033E00071F00060F800E07C01C03F07801FFF0003F80181F7D9E1D>99 D<003F800001FFF00003E1F80007807C000F003E001E001E003E001F003C000F007C000F807C00 0F8078000F80F8000780FFFFFF80FFFFFF80F8000000F8000000F8000000F8000000F8000000F8 0000007C0000007C0000007C0000003E0001803E0003801F0003000F80070007E00E0003F83C00 00FFF800003FC000191F7E9E1D>101 D<0F001F801F801F801F800F0000000000000000000000 0000000000000000000000000780FF80FF800F8007800780078007800780078007800780078007 80078007800780078007800780078007800780078007800780078007800FC0FFF8FFF80D307EAF 12>105 D<0781FE003FC000FF87FF80FFF000FF9E0FC3C1F8000FB803E7007C0007F001EE003C 0007E001FC003E0007E000FC001E0007C000F8001E0007C000F8001E00078000F0001E00078000 F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00 078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0 001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E00078000F0001E0007 8000F0001E000FC001F8003F00FFFC1FFF83FFF0FFFC1FFF83FFF0341F7E9E38>109 D<0781FE0000FF87FF8000FF9E0FC0000FB803E00007F001E00007E001F00007E000F00007C000 F00007C000F000078000F000078000F000078000F000078000F000078000F000078000F0000780 00F000078000F000078000F000078000F000078000F000078000F000078000F000078000F00007 8000F000078000F000078000F000078000F000078000F0000FC001F800FFFC1FFF80FFFC1FFF80 211F7E9E25>I<001FC00000FFF80001E03C0007800F000F0007801E0003C01E0003C03C0001E0 3C0001E0780000F0780000F0780000F0F80000F8F80000F8F80000F8F80000F8F80000F8F80000 F8F80000F8F80000F8780000F07C0001F03C0001E03C0001E01E0003C01E0003C00F00078007C0 1F0001F07C0000FFF800001FC0001D1F7E9E21>I<0781FE00FF87FF80FF9F0FE00FB803F007F0 01F807E000F807C0007C0780007C0780003E0780003E0780003E0780001F0780001F0780001F07 80001F0780001F0780001F0780001F0780001F0780003F0780003E0780003E0780007E07C0007C 07C000FC07E000F807F001F007B803E0079E0FC0078FFF800781FC000780000007800000078000 0007800000078000000780000007800000078000000780000007800000078000000FC00000FFFC 0000FFFC0000202D7E9E25>I<0787F0FF8FF8FFBC7C0FB87C07F07C07E07C07E00007C00007C0 0007C0000780000780000780000780000780000780000780000780000780000780000780000780 000780000780000780000780000780000780000FC000FFFE00FFFE00161F7E9E19>114 
D<03FE300FFFF03E07F07801F07000F0E00070E00070E00030E00030F00030F800007F00007FF0 003FFF000FFFC003FFE0003FF00003F80000F8C0007CC0003CE0001CE0001CE0001CF0001CF800 38F80038FC00F0EF03F0C7FFC0C1FF00161F7E9E1A>I<00C00000C00000C00000C00000C00001 C00001C00001C00003C00003C00007C0000FC0001FC000FFFFE0FFFFE003C00003C00003C00003 C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003 C03003C03003C03003C03003C03003C03003C03003C07001E06001E0E000F9C000FFC0003F0014 2C7FAB19>I<078000F000FF801FF000FF801FF0000F8001F000078000F000078000F000078000 F000078000F000078000F000078000F000078000F000078000F000078000F000078000F0000780 00F000078000F000078000F000078000F000078000F000078000F000078000F000078000F00007 8001F000078001F000078001F000078003F00003C007F00003C00EF00001F03CF80000FFF8FF80 003FC0FF80211F7E9E25>I120 D E /Ft 8 117 df<00001FC00000FFF00001E078 0007803C000E001E001C001E0038001F0030001F0061801F0061801F00C0C01F00C0C01F00C0C0 1F00C1803E00C1803E00C3003C007E007C003C0078000000F0000001E0000003C000000F800007 FE000007FC0000001E0000000F000000078000000780000007C0000007C0000007C0000007C000 0007C000000FC03C000FC07E000FC07E000FC0FC001F80F8001F80F0001F00C0003E00E0003E00 60007C007000F8003803E0001C0FC0000FFF000003F80000203079AE25>51 D<0000007F8008000003FFE01800001FC0703800003E0018780000F8000CF00001F00007F00007 C00007F0000F800003F0001F000003E0003E000003E0007E000001E000FC000001E001F8000001 C001F8000001C003F0000001C007F0000001C007E0000001800FE0000001800FC0000001801FC0 000001801FC0000000003F80000000003F80000000003F80000000007F00000000007F00000000 007F00000000007F00000000007E0000000000FE0000000000FE0000000000FE0000000000FE00 00001800FE0000001800FE0000001800FE00000030007E00000030007E00000030007E00000060 007E000000C0003E000000C0003E00000180001F00000300001F00000600000F80000C000007C0 0018000003E00030000001F000E0000000FC07800000003FFE0000000007F80000002D3375B133 >67 D<0007E000001FF1C0003C3BE000F01FE001E00FE003E00FC007C007C0078007C00F8007C0 1F800F801F000F803F000F803F000F807E001F007E001F007E001F007E001F00FC003E00FC003E 00FC003E00FC003E04FC007C0CFC007C0C78007C0C7800FC187C01FC183C03FC183C077C301E1E 3C600FF81FC003E007801E1F799E25>97 D<0003F0001FFC007C1E00F00F03E00707C0070F8007 1F80071F00073F000E3E001C7E00787E0FF0FFFF80FFF000FC0000FC0000F80000F80000F80000 F80000F80000F80002F800037800067C000C3C00381E00F00F03C007FF0001FC00181F789E21> 101 D<001F000003FF000003FF0000003F0000003E0000003E0000003E0000003E0000007C0000 007C0000007C0000007C000000F8000000F8000000F8000000F8000001F0000001F0000001F000 0001F0FC0003E3FF0003E7078003FC07C003F803C007F003C007E003E007C003E007C003C00F80 07C00F8007C00F8007C00F8007C01F000F801F000F801F000F801F001F003E001F003E001F003E 003E003E003E027C003E067C007C067C007C067C007C0CF800780CF8007818F8007830F8003860 F0003FC060000F801F327AB125>104 D<003C01F80000FF07FC0001C78E0E0001879C0F000307 F007800307E007C00607E007C00607C007C00607C007C00C0F8007C00C0F8007C0000F8007C000 0F8007C0001F000FC0001F000FC0001F000FC0001F000FC0003E001F80003E001F80003E001F00 003E003F00007C003E00007C003E00007C007C00007C00780000FC00F80000FE01F00000FE03C0 0000FB87800001F1FF000001F0F8000001F000000001F000000003E000000003E000000003E000 000003E000000007C000000007C000000007C000000007C00000000F800000000F80000000FFFC 000000FFFC000000222D7E9E25>112 D<03C03F000FF0FF801C79C1C0187B81E0307F03E0307E 07E0607C07E0607C07C0607C0380C0F80000C0F8000000F8000000F8000001F0000001F0000001 F0000001F0000003E0000003E0000003E0000003E0000007C0000007C0000007C0000007C00000 0F8000000F8000000F8000000F8000000F0000000E0000001B1F7A9E1E>114 
D<000E00001E00003E00003E00003E00003E00007C00007C00007C00007C0000F80000F80000F8 00FFFFC0FFFFC001F00001F00001F00003E00003E00003E00003E00007C00007C00007C00007C0 000F80000F80000F80000F80001F00001F00001F00001F00803E01803E01803E03003E03003E06 003E06003E0C001E38000FF00003C000122C79AB18>116 D E end %%EndProlog %%BeginSetup %%Feature: *Resolution 300 TeXDict begin %%EndSetup %%Page: 0 1 bop 820 996 a Ft(Chapter)25 b(3)476 1088 y Fs(Groups,)d(Con)n(texts,)f(Comm)n (unicators)100 1270 y Fr(Lyndon)c(Clark)o(e,)e(T)l(om)h(Henderson,)f(Mark)h (Sears,)g(An)o(thon)o(y)g(Skjellum,)d(Marc)j(Snir,)f(Rik)h(Little\014eld)814 1391 y(August)h(5,)f(1993)p eop %%Page: 0 2 bop 884 862 a Fq(Abstract)75 953 y Fp(W)m(e)15 b(de\014ne)i(the)f(concepts)h (of)e(group,)g(con)o(text,)h(and)f(comm)o(unicator)e(here.)24 b(W)m(e)15 b(discuss)i(op)q(erations)e(for)g(ho)o(w)75 1003 y(these)c(should)e(b)q(e)h(used)h(to)e(pro)o(vide)g(safe)h(\(safer\))g(comm)o (unication)c(in)j(the)h(MPI)g(system.)16 b(W)m(e)9 b(start)h(b)o(y)f (discussing)75 1053 y(in)o(tra-comm)o(uni)o(cation)j(in)j(full)f(detail;)h (then)h(w)o(e)f(discuss)i(in)o(ter-comm)o(unicatio)o(n,)12 b(whic)o(h)k(builds)e(on)h(the)h(data)75 1103 y(structures)23 b(and)d(requiremen)o(ts)h(of)e(the)i(in)o(tra-comm)o(unicatio)o(n)d (sections.)38 b(W)m(e)20 b(follo)o(w)f(with)h(discussion)h(of)75 1153 y(formalizations)11 b(of)j(the)h(lo)q(osely)e(sync)o(hronous)i(mo)q(del) e(of)h(computing)e(\(vis)i(a)g(vis)g(message)g(passing\))g(and)g(o\013er)75 1202 y(examples.)p eop %%Page: 1 3 bop 75 -100 a Fo(3.1.)31 b(INTR)o(ODUCTION)1343 b Fp(1)75 45 y Fn(3.1)70 b(In)n(tro)r(duction)75 167 y Fp(It)11 b(is)f(highly)f(desirable) i(that)g(pro)q(cesses)i(executing)e(a)f(parallel)f(pro)q(cedure)k(use)e(a)f (\\virtual)g(pro)q(cess)i(name)d(space")75 216 y(lo)q(cal)14 b(to)h(the)g(in)o(v)o(o)q(cation.)20 b(Th)o(us,)15 b(the)h(co)q(de)g(of)e (the)i(parallel)d(pro)q(cedure)k(will)d(lo)q(ok)g(iden)o(tical,)g(irresp)q (ectiv)o(e)j(of)75 266 y(the)12 b(absolute)f(addresses)i(of)d(the)h (executing)h(pro)q(cesses.)20 b(It)11 b(is)g(often)g(the)g(case)h(that)f (parallel)f(application)f(co)q(de)j(is)75 316 y(built)f(b)o(y)g(comp)q(osing) f(sev)o(eral)i(parallel)f(mo)q(dules)f(\()p Fm(e.g.)p Fp(,)h(a)h(n)o (umerical)e(solv)o(er,)h(and)h(a)f(graphic)g(displa)o(y)g(mo)q(dule\).)75 366 y(Supp)q(ort)h(of)f(a)h(virtual)f(name)f(space)j(for)e(eac)o(h)h(mo)q (dule)f(will)f(allo)o(w)g(for)h(the)i(comp)q(osition)d(of)h(mo)q(dules)f (that)i(w)o(ere)75 416 y(dev)o(elop)q(ed)18 b(separately)h(without)e(c)o (hanging)g(all)f(message)i(passing)f(calls)h(within)f(eac)o(h)h(mo)q(dule.)28 b(The)18 b(set)h(of)75 466 y(pro)q(cesses)d(that)d(execute)i(a)e(parallel)f (pro)q(cedure)j(ma)o(y)c(b)q(e)j(\014xed,)g(or)f(ma)o(y)e(b)q(e)j(determined) f(dynamically)d(b)q(efore)75 515 y(the)19 b(in)o(v)o(o)q(cation.)31 b(Th)o(us,)20 b(MPI)f(has)g(to)f(pro)o(vide)h(a)f(mec)o(hanism)e(for)j (dynamically)c(creating)k(sets)h(of)e(lo)q(cally)75 565 y(named)h(pro)q (cesses.)37 b(W)m(e)20 b(alw)o(a)o(ys)e(n)o(um)o(b)q(er)h(pro)q(cesses)j (that)e(execute)i(a)d(parallel)f(pro)q(cedure)k(consecutiv)o(ely)m(,)75 615 y(starting)17 b(from)d(zero,)k(and)e(call)g(this)h(n)o(um)o(b)q(ering)e Fq(rank)k(in)f(group)p Fp(.)24 b(Th)o(us,)17 b(a)f Fq(group)f Fp(is)i(an)f(ordered)i(set)g(of)75 665 y(pro)q(cesses,)e(where)f(pro)q (cesses)i(are)d(iden)o(ti\014ed)g(b)o(y)g(their)g(ranks)g(when)g(comm)o (unication)d(o)q(ccurs.)158 731 y(Comm)o(uni)o(cation)f Fl(contexts)h Fp(partition)g(the)j(message-passing)e(space)h(in)o(to)f(separate,)i 
Communication contexts partition the message-passing space into separate, manageable "universes." Specifically, a send made in a context cannot be received in another context. Contexts are identified in MPI using opaque contexts that reside within communicator objects. The context mechanism is needed to allow predictable behavior in subprograms, and to allow dynamism in message usage that cannot be reasonably anticipated or managed. Normally, a parallel procedure is written so that all messages produced during its execution are also consumed by the processes that execute the procedure. However, if one parallel procedure calls another, then it might be desirable to allow such a call to proceed while messages are pending (the messages will be consumed by the procedure after the call returns). In such a case, a new communication context is needed for the called parallel procedure, even if the transfer of control is synchronized.

The communication domain used by a parallel procedure is identified by a communicator. Communicators bring together the concepts of process group and communication context. A communicator is an explicit parameter in each point-to-point communication operation. The communicator identifies the communication context of that operation; it identifies the group of processes that can be involved in this communication; and it provides the translation from virtual process names, which are ranks within the group, into absolute addresses. Collective communication calls also take a communicator as parameter; it is expected that parallel libraries will be built to accept a communicator as parameter. Communicators are represented by opaque MPI objects.

We start by discussing intra-communication in full detail; then we discuss inter-communication, which builds on the data structures and requirements of the intra-communication sections. We follow with discussion of formalizations of the loosely synchronous model of computing (vis a vis message passing) and offer examples.

3.2 Context

A context is the MPI mechanism for partitioning communication space. A defining property of a context is that a send made in a context cannot be received in another context. A context is an opaque object. Only one communicator in a process may bind the same context. Contexts have additional attributes for inter-communication, to be discussed below. For intra-communication, a context is essentially a hyper-tag needed to make a communicator safe for point-to-point and MPI-defined collective communication.

Discussion: Some implementations may make a context to be a pair of integers, each representing "hyper tags" -- one for point-to-point and one for (MPI-defined) collective operations on a communicator. By making this concept opaque, we relieve the implementor of the requirement that this is the only way to implement contexts correctly for MPI.
Discussion: Among other reasons, including to address Jim Cownie's concerns about safety and to make both point-to-point and collective communication safer on an intra-communicator, we have opted to make contexts opaque, at the expense of upsetting those who want to be able to set context values. This change is crucial to abstracting MPI from specific implementations, and forces specific implementations to provide implementation-specific functions to make the association of contexts with specific integer values.

3.3 Groups

A group is an ordered set of process identifiers (henceforth processes); process identifiers are implementation dependent; a group is an opaque object. Each process in a group is associated with an integer rank, starting from zero.

Groups are represented by opaque group objects, and hence cannot be directly transferred from one process to another.

3.3.1 Predefined Groups

Initial groups defined once MPI_INIT has been called are as follows:

  * MPI_GROUP_ALL, SPMD-like siblings of a process.
  * MPI_GROUP_HOST, A group including one's self and one's HOST.
  * MPI_GROUP_PARENT, A group containing one's self and one's PARENT (spawner).
  * MPI_GROUP_SELF, A group comprising one's self.
MPI implementations are required to provide these groups; however, not all forms of communication make sense for all systems, so not all of these groups may be relevant. Environmental inquiry will be provided to determine which of these are usable in a given implementation. The analogous communicators corresponding to these groups are defined below in section 3.4.1.

Discussion: Environmental sub-committee needs to provide such inquiry functions for us.

3.4 Communicators

All MPI communication (both point-to-point and collective) functions use communicators to provide a specific scope (context and group specifications) for the communication. In short, communicators bring together the concepts of group and context (furthermore, to support implementation-specific optimizations, and virtual topologies, they "cache" additional information opaquely). The source and destination of a message are identified by the rank of that process within the group; no a priori membership restrictions on the process sending or receiving the message are implied. For collective communication, the communicator specifies the set of processes that participate in the collective operation. Thus, the communicator restricts the "spatial" scope of communication, and provides local process addressing.

Discussion: 'Communicator' replaces the word 'context' everywhere in current pt2pt and collcomm drafts.

Communicators are represented by opaque communicator objects, and hence cannot be directly transferred from one process to another.

Raison d'etre for separate Contexts and Communicators.  Within a communicator, a context is separately broken out, rather than being inherent in the communicator, for one specific, essential purpose. We want to make it possible for libraries quickly to achieve additional safe communication space without MPI-communicator-based synchronization. The only way to do this is to provide a means to preallocate many contexts, and bind them locally, as needed. This choice weakens the overall, inherent "safety" of MPI, if programmed in this way, but provides added performance which library designers will demand.
3.4.1 Predefined Communicators

Initial communicators defined once MPI_INIT has been called are as follows:

  * MPI_COMM_ALL, SPMD-like siblings of a process.
  * MPI_COMM_HOST, A communicator for talking to one's HOST.
  * MPI_COMM_PARENT, A communicator for talking to one's PARENT (spawner).
  * MPI_COMM_SELF, A communicator for talking to one's self (useful for getting contexts for server purposes, etc.).

MPI implementations are required to provide these communicators; however, not all forms of communication make sense for all systems. Environmental inquiry will be provided to determine which of these communicators are usable in a given implementation. The groups corresponding to these communicators are also available as predefined quantities (see section 3.3.1).

Discussion: Environmental sub-committee needs to provide such inquiry functions for us.

3.5 Group Management

This section describes the manipulation of groups under various subheadings: general, constructors, and so on.

3.5.1 Local Operations

The following are all local (non-communicating) operations.

MPI_GROUP_SIZE(group, size)

IN group       handle to group object.
OUT size       is the integer number of processes in the group.

MPI_GROUP_RANK(group, rank)

IN group       handle to group object.
OUT rank       is the integer rank of the calling process in group, or MPI_UNDEFINED if the process is not a member.

MPI_TRANSLATE_RANKS(group_a, n, ranks_a, group_b, ranks_b)

IN group_a     handle to group object "A"
IN n           number of ranks in ranks_a array
IN ranks_a     array of zero or more valid ranks in group "A"
IN group_b     handle to group object "B"
OUT ranks_b    array of corresponding ranks in group "B", MPI_UNDEFINED when no correspondence exists.
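As an illustration (not part of the draft), here is a minimal C-style sketch of how the local inquiry operations above might be used together. The C prototypes, the handle type MPI_GROUP, and the group "subset" are assumptions, since this chapter does not define language bindings.

    /* Sketch only: assumed C bindings for the local group inquiries above. */
    MPI_GROUP world = MPI_GROUP_ALL;   /* predefined group, section 3.3.1       */
    MPI_GROUP subset;                  /* assume: built earlier (section 3.5.2) */
    int size, rank;
    int ranks_a[3] = {0, 1, 2};        /* ranks interpreted in 'world'          */
    int ranks_b[3];                    /* corresponding ranks in 'subset'       */

    MPI_GROUP_SIZE(world, &size);      /* number of processes in the group      */
    MPI_GROUP_RANK(world, &rank);      /* my rank, or MPI_UNDEFINED             */

    /* Where do world ranks 0, 1, 2 appear within 'subset'?  Entries with no
       correspondence are returned as MPI_UNDEFINED. */
    MPI_TRANSLATE_RANKS(world, 3, ranks_a, subset, ranks_b);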
3.5.2 Local Group Constructors

The execution of the following operations does not require interprocess communication.

MPI_LOCAL_SUBGROUP(group, n, ranks, new_group)

IN group       handle to group object
IN n           number of elements in array ranks (and size of new_group)
IN ranks       array of integer ranks in group to appear in new_group.
OUT new_group  new group derived from above, preserving the order defined by ranks.

If no ranks are specified, new_group has no members.

MPI_LOCAL_EXCL_SUBGROUP(group, n, ranks, new_group)

IN group       handle to group object
IN n           number of elements in array ranks (and size of new_group)
IN ranks       array of integer ranks in group not to appear in new_group
OUT new_group  new group derived from above, preserving the order defined by ranks.

If no ranks are specified, new_group is identical to group.

MPI_LOCAL_SUBGROUP_RANGES(group, n, ranges, new_group)

IN group       handle to group object
IN n           number of elements in array ranks (and size of new_group)
IN ranges      a one-dimensional array of integer triplets: pairs of ranks (form: beginning through end, inclusive) to be included in the output group new_group, plus a constant, stride (often 1 or -1).
OUT new_group  new group derived from above, preserving the order defined by ranges.

If any of the rank sets overlap, then the overlap is ignored. If no ranges are specified, then the output group has no members.
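As an illustration (not part of the draft), a short C-style sketch of the inclusive and exclusive subgroup constructors follows; the prototypes and the handle type are assumed.

    /* Sketch only: assumed C bindings.  Both calls are purely local. */
    MPI_GROUP all = MPI_GROUP_ALL;
    MPI_GROUP first_four, the_rest;
    int ranks[4] = {0, 1, 2, 3};

    /* Group containing ranks 0..3 of 'all', in that order. */
    MPI_LOCAL_SUBGROUP(all, 4, ranks, &first_four);

    /* Group containing every member of 'all' except ranks 0..3. */
    MPI_LOCAL_EXCL_SUBGROUP(all, 4, ranks, &the_rest);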
MPI_LOCAL_SUBGROUP_EXCL_RANGES(group, n, ranges, new_group)

IN group       handle to group object
IN n           number of elements in array ranks (and size of new_group)
IN ranges      a one-dimensional array of integer triplets, consisting of pairs of ranks (form: beginning through end, inclusive) to be excluded from the output group new_group, plus a constant stride (often 1 or -1).
OUT new_group  new group derived from above, preserving the order defined by ranges.

If any of the rank sets overlap, then the overlap is ignored. If there are no ranges specified, the output group is the same as the original group.

Discussion: Please propose additional subgroup functions, before the second reading... Virtual Topologies support?

MPI_LOCAL_GROUP_UNION(group1, group2, group_out)
MPI_LOCAL_GROUP_INTERSECT(group1, group2, group_out)
MPI_LOCAL_GROUP_DIFFERENCE(group1, group2, group_out)

IN group1      first group object handle
IN group2      second group object handle
OUT group_out  group object handle

The set-like operations are defined as follows:

union        All elements of the first group (group1), followed by all elements of the second group (group2) not in the first
intersect    all elements of the first group which are also in the second group
difference   all elements of the first group which are not in the second group

Note that for these operations the order of processes in the output group is determined first by order in the first group (if possible) and then by order in the second group (if necessary).

Discussion: What do people think about these local operations? More? Less? Note: these operations do not explicitly enumerate ranks, and therefore are more scalable if implemented efficiently...

MPI_GROUP_FREE(group)

IN group       frees group previously defined.

This operation frees a handle group which is not currently bound to a communicator. It is erroneous to attempt to free a group currently bound to a communicator.

Discussion: The point-to-point chapter suggests that there is a single destructor for all MPI opaque objects; however, it is arguable that this specifies the implementation of MPI very strongly. We politely argue against this approach.

MPI_GROUP_DUP(group, new_group)

IN group       extant group object handle
OUT new_group  new group object handle

MPI_GROUP_DUP duplicates a group with all its cached information, replacing nothing. This function is essential to the support of virtual topologies.
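To illustrate the set-like constructors and the destructor just described, here is a hedged C-style sketch (not part of the draft); the handle type and the groups "solvers" and "viewers" are assumed to exist already.

    /* Sketch only: assumed C bindings for the set-like group operations. */
    MPI_GROUP solvers, viewers;            /* assume: two groups built earlier */
    MPI_GROUP both, shared, solvers_only;

    MPI_LOCAL_GROUP_UNION(solvers, viewers, &both);
    MPI_LOCAL_GROUP_INTERSECT(solvers, viewers, &shared);
    MPI_LOCAL_GROUP_DIFFERENCE(solvers, viewers, &solvers_only);

    /* A group may be freed once it is no longer needed, provided it is not
       currently bound to a communicator. */
    MPI_GROUP_FREE(shared);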
3.5.3 Collective Group Constructors

The execution of the following operations requires collective communication within a group.

MPI_COLL_SUBGROUP(comm, key, color, new_group)

IN comm        communicator object handle
IN key         (integer)
IN color       (integer)
OUT new_group  new group object handle

This collective function is called by all processes in the group associated with comm. Color defines the particular new group to which the process belongs. Key defines the rank-order in new_group; a stable sort is used to determine rank order in new_group if the Keys are not unique.

Discussion: According to the operation of this function, the groups so created are non-overlapping. Is there a need for a more complex functionality?
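The following hedged C-style sketch (not part of the draft) shows one way MPI_COLL_SUBGROUP might be called; the prototypes are assumed, and "comm" stands for any existing intra-communicator such as MPI_COMM_ALL.

    /* Sketch only: assumed C bindings.  Every process of comm's group calls
       this collectively; processes with the same color land in the same new
       group, ordered by key (here, the old rank). */
    int rank;
    MPI_GROUP half;

    MPI_COMM_RANK(comm, &rank);                 /* defined in section 3.7.1 */
    MPI_COLL_SUBGROUP(comm, rank /* key */, rank % 2 /* color */, &half);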
3.6 Operations on Contexts

3.6.1 Local Operations

There are no local operations on contexts.

MPI_CONTEXTS_FREE(n, contexts)

IN n           number of contexts to free
IN contexts    void * array of contexts

Local deallocation of contexts allocated by MPI_CONTEXTS_ALLOC (below). It is erroneous to free a context that is bound to any communicator (either locally or in another process). This operation is local (as it must be, because it does not take a communicator in its argument list).

3.6.2 Collective Operations

MPI_CONTEXTS_ALLOC(comm, n, contexts, len)

IN comm        Communicator whose group denotes participants
IN n           number of contexts to allocate
OUT contexts   void * array of contexts
OUT len        length of contexts array

Allocates an array of opaque contexts. This collective operation is executed by all processes in the MPI_COMM_GROUP(comm). MPI provides special contextual space to its collective operations (including MPI_CONTEXTS_ALLOC) so that, despite any on-going point-to-point communication on comm, this operation can execute safely. MPI collective functions will have to lock out multiple threads, so they will evidently have capabilities unavailable to the user program.

Contexts that are allocated by MPI_CONTEXTS_ALLOC are unique within MPI_COMM_GROUP(comm). The array is the same on all processes that call the function (same order, same number of elements).
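As a hedged illustration (not part of the draft), the sketch below shows the intended pairing of the two context operations. The C prototypes and the concrete representation of the opaque context array are assumptions, and MPI_COMM_BIND refers to the binding routine mentioned alongside MPI_COMM_MAKE and MPI_COMM_UNBIND below.

    /* Sketch only: assumed C bindings.  A library preallocates contexts
       collectively, binds them to communicators as needed, and frees them
       locally once nothing is bound to them any longer. */
    void **contexts;                           /* opaque contexts (assumed form) */
    int    len;

    MPI_CONTEXTS_ALLOC(MPI_COMM_ALL, 8, &contexts, &len);

    /* ... bind individual contexts with MPI_COMM_BIND, communicate,
       then release the communicators with MPI_COMM_UNBIND ... */

    MPI_CONTEXTS_FREE(len, contexts);          /* erroneous while any is bound */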
Discussion: MPI_CONTEXTS_ALLOC(comm, n, contexts) was the previous definition of this function; then we changed to a group argument in the first slot, arguing that the communicator was unnecessary. We have changed back because the group-semantics proved not to be thread safe, so we had to retain the approach discussed at length at the previous MPI meeting. (In case you did not read the July 10 draft, you have not seen a change in this draft compared to the June 24 draft!) Now, we have stronger justification for keeping the approach discussed at the June 24 meeting. We have added the len parameter, yielding the current formulation, because contexts are now opaque, not integers.

We have to retain the chicken-and-egg aspect of MPI (i.e., use a communicator to get context(s) or a communicator), to get thread safety. Yet, we want libraries to control their own fate regarding safety, not to rely on the caller to provide a quiescent context. We achieve this by adding the quiescent property for MPI collective communication functions. We must, in fact, push this requirement to the collective chapter, but we demonstrate here why our particular collective routines need this property. In a multi-threaded environment, it is clear that each temporally overlapping call to a collective operation must be with a different communicator. If this has not been made explicit, it must be.

One can have on-going point-to-point and collective communications on a single communicator. A context is defined to be sufficiently powerful to keep both point-to-point and collective operations distinct. Hence, it is always safe to call MPI_COMM_MAKE and MPI_CONTEXTS_ALLOC, even if pending asynchronous point-to-point operations are on-going, or messages have not been received but are on the receipt queue. With these rules, no quiescent communicator is required in order to get new contexts. We have added demands on the MPI implementation while making contexts opaque to make this simpler to realize without saying how it must be done.

In summary, libraries have to get the communicator as a base argument to retain thread safety, but they can always safely get communication contexts to do further work. The concept of quiescence is banished to be a small detail of implementation, rather than a central tenet of library design. Users must still worry about temporal safety, which is not guaranteed by contexts alone (see example below in section 3.11.6).

3.7 Operations on Communicators

3.7.1 Local Communicator Operations

The following are all local (non-communicating) operations.

MPI_COMM_SIZE(comm, size)

IN comm        handle to communicator object.
OUT size       is the integer number of processes in the group of comm.

MPI_COMM_RANK(comm, rank)

IN comm        handle to communicator object.
OUT rank       is the integer rank of the calling process in the group of comm, or MPI_UNDEFINED if the process is not a member.
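A trivial hedged C-style sketch (not part of the draft) of the two local inquiries, with assumed prototypes:

    /* Sketch only: assumed C bindings. */
    int size, rank;

    MPI_COMM_SIZE(MPI_COMM_ALL, &size);   /* processes in the group of the communicator */
    MPI_COMM_RANK(MPI_COMM_ALL, &rank);   /* my rank in that group, or MPI_UNDEFINED    */

    if (rank == 0) {
        /* rank 0 is the "group leader" in the sense of section 3.8.1 */
    }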
3.7.2 Local Constructors

MPI_COMM_GROUP(comm, group)

IN comm        communicator object handle
OUT group      group object handle

Accessor that returns the group corresponding to the communicator comm.

MPI_COMM_CONTEXT(comm, context)

IN comm        communicator object handle
OUT context    context

Returns the context associated with the communicator comm.

MPI_COMM_UNBIND(comm)

IN comm        the communicator to be deallocated.

This routine disassociates the group MPI_COMM_GROUP(comm) associated with comm from the context MPI_COMM_CONTEXT(comm) associated with comm. The opaque object comm is deallocated. Both the group and context, provided at the MPI_COMM_BIND call, remain available for further use. If MPI_COMM_MAKE (see below) was called in lieu of MPI_COMM_BIND, then there is no exposed context known to the user, and this quantity is freed by MPI_COMM_UNBIND.

MPI_COMM_DUP(comm, new_context, new_comm)

IN comm         communicator object handle
IN new_context  new context to use with new_comm
OUT new_comm    communicator object handle

MPI_COMM_DUP duplicates a communicator with all its cached information, replacing just the context.
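As a hedged illustration (not part of the draft), a library might combine MPI_CONTEXTS_ALLOC with MPI_COMM_DUP to obtain a private copy of the caller's communicator. The C prototypes, handle types, the variable user_comm, and the form of the context array are all assumptions.

    /* Sketch only: assumed C bindings.  Duplicate the caller's communicator,
       substituting a freshly allocated context, so that library traffic can
       never match user traffic on the original communicator. */
    void   **contexts;
    int      len;
    MPI_COMM lib_comm;

    MPI_CONTEXTS_ALLOC(user_comm, 1, &contexts, &len);   /* collective */
    MPI_COMM_DUP(user_comm, contexts[0], &lib_comm);     /* local      */

    /* ... library communication on lib_comm ... */

    MPI_COMM_UNBIND(lib_comm);           /* assumed to apply to a DUP'd comm;
                                            group and context remain usable  */
    MPI_CONTEXTS_FREE(len, contexts);    /* now nothing is bound to them     */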
3.7.3 Collective Communicator Constructors

MPI_COMM_MAKE(sync_comm, comm_group, comm_new)

IN sync_comm   Communicator whose incorporated group (MPI_COMM_GROUP(comm)) is the group over which the new communicator comm_new will be defined, also specifying participants in this synchronizing operation.
IN comm_group  Group of the new communicator; often this will be the same as sync_group, else it must be a subset thereof.
OUT comm_new   the new communicator.

MPI_COMM_MAKE is equivalent to:

    MPI_CONTEXTS_ALLOC(sync_comm, context, 1, len)
    MPI_COMM_BIND(comm_group, context, comm_new)

plus, notionally, internal flags are set in the communicator, denoting that the context was created as part of the opaque process that made the communicator (so it can be freed by MPI_COMM_UNBIND). It is erroneous if comm_group is not a subset of sync_group.

Discussion: MPI_COMM_MAKE and MPI_CONTEXTS_ALLOC both require bootstrap via a communicator, instead of just the group of that communicator, for thread safety. We have argued this back and forth carefully, and conclude that each thread of an MPI program will have one or more contexts.
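A hedged C-style sketch (not part of the draft) of MPI_COMM_MAKE used to derive a communicator over a subset of an existing one; the prototypes and handle types are assumed.

    /* Sketch only: assumed C bindings. */
    MPI_GROUP all_group, sub_group;
    MPI_COMM  sub_comm;
    int       ranks[2] = {0, 1};

    MPI_COMM_GROUP(MPI_COMM_ALL, &all_group);             /* local accessor    */
    MPI_LOCAL_SUBGROUP(all_group, 2, ranks, &sub_group);  /* local constructor */

    /* Collective over the group of MPI_COMM_ALL: allocates one context and
       binds it to sub_group, exactly as in the expansion shown above. */
    MPI_COMM_MAKE(MPI_COMM_ALL, sub_group, &sub_comm);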
3.8 Introduction to Inter-Communication

This section introduces the concept of inter-communication and describes the portions of MPI that support it. It describes support for writing programs which contain user-level servers. It also describes a name service which simplifies writing programs containing inter-communication.

Discussion: Recommendation and plea for patience: MPI Committee takes straw poll on whether to have inter-communication or not -- as a whole -- in MPI1. The most suitable time would be after we hear the arguments about the interface at a high level, but before we review this section of the chapter.

3.8.1 Definitions of Inter-Communication and Inter-Communicators

All point-to-point communication described thus far has involved communication between processes that are members of the same group. The source process in send or the destination process in receive (the "target" process) is specified using a (communicator, rank) pair. The target process is that process with the given rank within the group of the given communicator. This type of communication is called "intra-communication" and the communicator used is called an "intra-communicator."

In modular and multi-disciplinary applications, different process groups execute different modules and processes within different modules communicate with one another in a pipeline or a more general module graph. In these applications the most natural way for a process to specify a target process is by the rank of the target process within the target group. In applications that contain internal user-level servers, each server may be a process group that provides services to one or more clients, and each client may be a process group which uses the services of one or more servers. In these applications it is again most natural to specify the target process by rank within the target group. This type of communication is called "inter-communication" and the communicator used is called an "inter-communicator."

An inter-communication operation is a point-to-point communication between processes in different groups. The group containing a process that initiates an inter-communication operation is called the "local group," that is, the sender in a send and the receiver in a receive. The group containing the target process is called the "remote group," that is, the receiver in a send and the sender in a receive. As in intra-communication, the target process is specified using a (communicator, rank) pair. Unlike intra-communication, the rank is relative to the remote group.

One additional needed concept is the "group leader." The process with rank 0 in a process group is designated "group leader." This concept is used in support of user-level servers, and elsewhere.
3.8.2 Properties of Inter-Communication and Inter-Communicators

Here is a summary of the properties of inter-communication and inter-communicators:

  * The syntax is the same for both inter- and intra-communication. The same communicator can be used for both send and receive operations.
  * A target process is addressed by its rank in the remote group.
  * Communications using an inter-communicator are guaranteed not to conflict with any communications that use a different communicator.
  * An inter-communicator cannot be used for collective communication.
  * A communicator will provide either intra- or inter-communication, never both.
  * Once constructed, the remote group of an inter-communicator may not be changed. Communication with any process outside of the remote group is not allowed.

The routine MPI_COMM_STAT() may be used to determine if a communicator is an inter- or intra-communicator. Inter-communicators can be used as arguments to some of the other communicator inquiry routines (defined above). Inter-communicators cannot be used as input to any of the local constructor routines for intra-communicators. When an inter-communicator is used as an input argument, the following table describes the behavior of the relevant MPI_COMM_* functions:

    MPI_COMM_ Function       Behavior (in Inter-Communication Mode)
    ------------------       --------------------------------------
    MPI_COMM_SIZE()          returns the size of the remote group.
    MPI_COMM_GROUP()         returns the remote group.
    MPI_COMM_RANK()          returns MPI_UNDEFINED
    MPI_COMM_CONTEXT()       erroneous
    MPI_COMM_UNBIND()        erroneous
    MPI_COMM_DUP()           erroneous
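As a hedged illustration (not part of the draft), the behavior in the table can be interpreted at run time with MPI_COMM_STAT(), which is defined later in this section; the C prototypes and the communicator comm are assumed.

    /* Sketch only: assumed C bindings. */
    int status, size;

    MPI_COMM_STAT(comm, &status);
    MPI_COMM_SIZE(comm, &size);

    if (status == MPI_INTER) {
        /* size is the size of the REMOTE group (see table above) */
    } else if (status == MPI_INTRA) {
        /* size is the size of the communicator's own group */
    }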
Construction/Destruction of Inter-Communicators

Construction of an inter-communicator requires two separate collective operations (one in the local group and one in the remote group) and a point-to-point operation between the two group leaders. These operations may be performed with explicit synchronization of the two groups by calling MPI_COMM_PEER_MAKE(). The explicit synchronization can cause deadlock in modular programs with cyclic communication graphs. So, the local and remote operations can be decoupled and the construction performed "loosely synchronously" by calling the two routines MPI_COMM_PEER_MAKE_START() and MPI_COMM_PEER_MAKE_FINISH().

Discussion: MPI_COMM_PEER_MAKE_START() and MPI_COMM_PEER_MAKE_FINISH() are both collective operations in the local group. They may leave a non-blocking send and receive active between the two calls, where the group leaders exchange local communicator information as necessary. However, they are not a non-blocking collective operation.

These routines can construct multiple inter-communicators with a single call. This improves performance by allowing amortization of the synchronization overhead.

The inter-communicator objects are destroyed in the same way as intra-communicator objects, by calling MPI_COMM_FREE().

Support for User-Level Servers

We consider that the primary feature of user-level servers that can require additional support is that the server cannot a priori know the identification of the clients, whereas the clients must a priori know the identification of the servers. In addition, a user-level server is a dedicated process group which, after some initialization, provides a given service until termination.

The support for user-level servers takes into account the prevailing view that all processes (possibly excepting a host process) are initially equivalent members of the group of all processes. This group is described by the pre-defined intra-communicator MPI_COMM_ALL. The user splits this group such that processes in each parallel server are placed within a specific sub-group. The non-server processes are placed in a group of all non-servers. Provided that the user can determine the ranks of the server group leaders (i.e., rank zero) and assign some tags for clients to send a message to the group leaders, then a group leader can at any time notify a server that it wishes to become a client.

MPI provides a routine, MPI_COMM_SPLITL(), that splits a parent group, creates sub-groups (intra-communicators) according to supplied keys, and returns the rank of each sub-group leader (relative to the parent group). This allows a process that does not know about a sub-group to contact that sub-group via the sub-group leader, using the parent communicator. The keys may be used as unique tags. This information may also be used as input to MPI_COMM_PEER_MAKE(), for example.

Name Service

MPI provides a name service to simplify construction of inter-communicators. This service allows a local process group to create an inter-communicator when the only available information about the remote group is a user-defined character string. A synchronizing version is provided by routine MPI_COMM_NAME_MAKE(). A loosely synchronous version is provided by routines MPI_COMM_NAME_MAKE_START() and MPI_COMM_NAME_MAKE_FINISH().
(leader)g(can)g(at)g(an)o(y)g(time)e(notify)h(a)h(serv)o(er)i(that)e(it)f (wishes)i(to)f(b)q(ecome)g(a)f(clien)o(t.)158 245 y(MPI)18 b(pro)o(vides)g(a)f(routine,)h Fl(MPI_COMM_SPLITL\(\))p Fp(,)d(that)j(splits) f(a)h(paren)o(t)g(group,)g(creates)h(sub-groups)75 294 y(\(in)o(tra-comm)o (uni)o(cators\))d(according)h(to)g(supplied)g(k)o(eys,)h(and)f(returns)i(the) f(rank)f(of)f(eac)o(h)i(sub-group)f(leader)75 344 y(\(relativ)o(e)i(to)f(the) h(paren)o(t)h(group\).)32 b(This)18 b(allo)o(ws)g(a)g(pro)q(cess)j(that)d(do) q(es)i(not)e(kno)o(w)g(ab)q(out)h(a)f(sub-group)h(to)75 394 y(con)o(tact)14 b(that)g(sub-group)f(via)g(the)h(sub-group)g(leader,)f(using) h(the)g(paren)o(t)g(comm)o(unicator.)h(The)f(k)o(eys)g(ma)o(y)d(b)q(e)75 444 y(used)18 b(as)f(unique)f(tags.)27 b(This)17 b(information)d(ma)o(y)h (also)h(b)q(e)h(used)h(as)f(input)g(to)f Fl(MPI_COMM_PEER_MAKE\()o(\))p Fp(,)e(for)75 494 y(example.)75 603 y Fq(Name)i(Service)75 680 y Fp(MPI)g(pro)o(vides)g(a)f(name)g(service)i(to)f(simplify)c (construction)17 b(of)e(in)o(ter-comm)o(unicators.)22 b(This)15 b(service)i(allo)o(ws)75 730 y(a)f(lo)q(cal)g(pro)q(cess)i(group)e(to)h (create)h(an)e(in)o(ter-comm)o(unicator)e(when)j(the)g(only)e(a)o(v)n (ailable)g(information)e(ab)q(out)75 779 y(the)i(remote)e(group)h(is)g(a)g (user-de\014ned)i(c)o(haracter)g(string.)i(A)d(sync)o(hronizing)f(v)o(ersion) g(is)g(pro)o(vided)g(b)o(y)g(routine)75 829 y Fl(MPI_COMM_NAME_MAK)o(E\(\))p Fp(.)f(A)d(lo)q(osely)e(sync)o(hronous)j(v)o(ersion)e(is)g(pro)o(vided)g(b)o (y)g(routines)h Fl(MPI_COMM_NAME_MAKE)o(_STAR)o(T\(\))75 879 y Fp(and)k Fl(MPI_COMM_NAME_MA)o(KE_FI)o(NISH\()o(\))p Fp(.)75 996 y Fi(3.8.3)55 b(In)n(ter-Comm)n(unication)20 b(Routines)75 1074 y Fq(Sync)o(hronous)13 b(In)o(ter-Comm)o(uni)o(cator)f(Constructors)75 1151 y Fp(Both)19 b(of)g(these)h(routines)g(allo)o(w)d(construction)j(of)f(m) o(ultiple)d(in)o(ter-comm)o(unicators.)32 b(Eac)o(h)19 b(of)f(these)j(in)o (ter-)75 1200 y(comm)o(unicators)11 b(con)o(tains)i(the)i(same)d(remote)h (group)h(and)f(di\013eren)o(t)h(in)o(ternal)f(con)o(texts.)19 b(Therefore,)c(comm)o(u-)75 1250 y(nication)e(using)g(an)o(y)g(of)g(these)h (in)o(ter-comm)o(unicators)e(will)g(not)h(in)o(terfere)h(with)f(comm)o (unication)d(using)k(an)o(y)f(of)75 1300 y(the)h(others.)158 1385 y Fq(MPI)p 257 1385 15 2 v 17 w(COMM)p 434 1385 V 18 w(PEER)p 583 1385 V 18 w(MAKE\(m)o(y)p 833 1385 V 17 w(comm,)21 b(p)q(eer)p 1101 1385 V 16 w(comm,)g(p)q(eer)p 1368 1385 V 16 w(rank,)g(tag,)g(n)o(um)p 1706 1385 V 15 w(comms,)75 1435 y(new)p 161 1435 V 17 w(comms\))75 1554 y(IN)16 b(m)o(y)p 213 1554 V 17 w(comm)37 b Fp(lo)q(cal)13 b(in)o(tra-comm)n(unicator)75 1638 y Fq(IN)j(p)q(eer)p 241 1638 V 17 w(comm)k Fp(\\paren)o(t")14 b(in)o(tra-comm)o(uni)o(cator)75 1722 y Fq(IN)i(p)q(eer)p 241 1722 V 17 w(rank)k Fp(rank)14 b(of)f(remote)h(group)f(leader)i(in)e Fl(peer)p 1031 1722 14 2 v 15 w(comm)75 1806 y Fq(IN)j(tag)37 b Fp(\\safe")13 b(tag)75 1890 y Fq(IN)j(n)o(um)p 242 1890 15 2 v 16 w(comms)k Fp(n)o(um)o(b)q(er)13 b(of)h(new)g(in)o(ter-comm)o(unicators)e(to)h(construct)75 1974 y Fq(OUT)j(new)p 283 1974 V 17 w(comms)k Fp(arra)o(y)14 b(of)f(new)h(in)o(ter-comm)o(unicators)158 2058 y(This)k(routine)h (constructs)h(an)e(arra)o(y)g(of)g(in)o(ter-comm)o(unicators)e(and)i(stores)i (it)e(in)g Fl(new_comms)p Fp(.)29 b(In)o(tra-)75 2108 y(comm)o(unicator)9 b Fl(my_comm)i Fp(describ)q(es)j(the)e(lo)q(cal)f(group.)17 b(In)o(tra-comm)o(unicator)9 b Fl(peer_comm)h Fp(describ)q(es)k(a)e(group)75 2157 y(that)21 b(con)o(tains)f(the)h(leaders)h(\()p Fm(i.e.,)f 
Fp(mem)o(b)q(ers)e(with)h(rank)h(zero\))g(of)f(b)q(oth)h(the)g(lo)q(cal)f (and)g(remote)g(groups.)75 2207 y(In)o(teger)14 b Fl(peer_rank)d Fp(is)j(the)f(rank)h(of)e(the)i(remote)f(leader)h(in)e Fl(peer_comm)p Fp(.)k(In)o(teger)e(tag)f(is)g(used)h(to)g(distinguish)75 2257 y(this)19 b(op)q(eration)g(from)e(others)j(with)f(the)g(same)g(p)q(eer.)34 b(In)o(teger)20 b Fl(num_comms)d Fp(is)i(the)h(n)o(um)o(b)q(er)e(of)g(new)i (in)o(ter-)75 2307 y(comm)o(unicators)12 b(constructed.)21 b(This)14 b(routine)g(is)g(collectiv)o(e)g(in)g(the)h(lo)q(cal)e(group)h(and) g(sync)o(hronizes)i(with)d(the)75 2357 y(remote)j(group.)27 b(Eac)o(h)17 b(of)f(the)i(in)o(ter-comm)o(unicators)c(pro)q(duced)k(pro)o (vides)f(in)o(ter-comm)o(unication)d(with)i(the)75 2407 y(remote)d(group.)158 2492 y Fq(MPI)p 257 2492 V 17 w(COMM)p 434 2492 V 18 w(NAME)p 601 2492 V 19 w(MAKE\(m)o(y)p 852 2492 V 17 w(comm,)j(name,)g(n)o(um)p 1257 2492 V 15 w(comms,)g(new)p 1528 2492 V 17 w(comms\))75 2620 y(IN)g(m)o(y)p 213 2620 V 17 w(comm)21 b Fp(lo)q(cal)13 b(in)o(tra-comm)n(unicator)75 2704 y Fq(IN)j(name)21 b Fp(\\name)12 b(kno)o(wn)h(to)h(b)q(oth)g(lo)q(cal)f(and)h(remote)f(group)h(leaders)p eop %%Page: 12 14 bop 75 -100 a Fp(12)75 45 y Fq(IN)16 b(n)o(um)p 242 45 15 2 v 16 w(comms)k Fp(n)o(um)o(b)q(er)13 b(of)h(new)g(in)o(ter-comm)o(unicators)e (to)h(construct)75 130 y Fq(OUT)j(new)p 283 130 V 17 w(comms)k Fp(Arra)o(y)14 b(of)f(new)i(in)o(ter-comm)o(unicators)158 223 y(This)e(is)g(the)g(name-serv)o(ed)g(equiv)n(alen)o(t)g(of)f Fl(MPI_COMM_PEER_MAKE)d Fp(in)k(whic)o(h)g(the)g(caller)g(need)h(only)e(kno)o (w)75 273 y(a)19 b(name)f(for)h(the)g(p)q(eer)i(connection.)34 b(The)20 b(same)e(name)g(is)h(supplied)g(b)o(y)g(b)q(oth)g(lo)q(cal)g(and)f (remote)h(groups.)75 322 y(The)g(name)e(is)h(remo)o(v)o(ed)g(from)f(the)i(in) o(ternal)f(name-serv)o(er)h(database)f(after)h(b)q(oth)g(groups)f(ha)o(v)o(e) h(completed)75 372 y Fl(MPI_COMM_NAME_MAK)o(E\(\))p Fp(.)75 482 y Fq(Lo)q(osely)c(Sync)o(hronous)e(In)o(ter-Comm)o(uni)o(cator)f (Constructors)75 560 y Fp(These)19 b(routines)f(are)g(lo)q(osely)f(sync)o (hronous)i(coun)o(terparts)g(of)e(the)h(sync)o(hronous)h(in)o(ter-comm)o (unicator)c(con-)75 609 y(struction)g(routines)f(describ)q(ed)i(ab)q(o)o(v)o (e.)158 695 y Fq(MPI)p 257 695 V 17 w(COMM)p 434 695 V 18 w(PEER)p 583 695 V 18 w(MAKE)p 750 695 V 18 w(ST)l(AR)l(T\(m)o(y)p 1008 695 V 16 w(comm,)c(p)q(eer)p 1266 695 V 16 w(comm,)g(p)q(eer)p 1524 695 V 16 w(rank,)g(tag,)g(n)o(um)p 1844 695 V 15 w(comms,)75 745 y(mak)o(e)p 187 745 V 17 w(id\))75 865 y(IN)k(m)o(y)p 213 865 V 17 w(comm)37 b Fp(lo)q(cal)13 b(in)o(tra-comm)n(unicator)75 949 y Fq(IN)j(p)q(eer)p 241 949 V 17 w(comm)k Fp(\\paren)o(t")14 b(in)o(tra-comm)o(uni)o(cator)75 1034 y Fq(IN)i(p)q(eer)p 241 1034 V 17 w(rank)k Fp(rank)14 b(of)f(remote)h(group)f(leader)i(in)e Fl(peer)p 1031 1034 14 2 v 15 w(comm)75 1118 y Fq(IN)j(tag)37 b Fp(\\safe")13 b(tag)75 1203 y Fq(IN)j(n)o(um)p 242 1203 15 2 v 16 w(comms)k Fp(n)o(um)o(b)q(er)13 b(of)h(new)g(in)o(ter-comm)o (unicators)e(to)h(construct)75 1288 y Fq(OUT)j(mak)o(e)p 309 1288 V 17 w(id)j Fp(handle)14 b(for)f Fl(MPI)p 649 1288 14 2 v 16 w(COMM)p 753 1288 V 14 w(PEER)p 855 1288 V 15 w(MAKE)p 958 1288 V 15 w(FINISH\(\))158 1372 y Fp(This)18 b(starts)h(o\013)g(a)f Fl(_PEER_MAKE)e Fp(op)q(eration,)j(returning)f(a)g(handle)h(for)f(the)g(op)q (eration)h(in)e(\\)p Fl(make_id)p Fp(.")75 1422 y(It)f(is)f(collectiv)o(e)h (in)f Fl(my_comm)p Fp(.)22 b(It)16 b(do)q(es)g(not)g(w)o(ait)f(for)g(the)h (remote)f(group)h(to)f(do)h Fl(MPI_COMM_MAKE_STA)o(RT\(\))o 
Fp(.)75 1472 y(The)g(\\)p Fl(make_id)p Fp(")e(handle)i(is)g(conceptually)g (similar)d(to)j(the)h(comm)o(uni)o(cation)c(handle)j(used)h(b)o(y)e(non-blo)q (c)o(king)75 1521 y(p)q(oin)o(t-to-p)q(oin)o(t)j(routines.)34 b(A)19 b Fl(make_id)e Fp(handle)i(is)g(constructed)i(b)o(y)e(a)g(\\)p Fl(_START)p Fp(")e(routine)i(and)g(destro)o(y)o(ed)75 1571 y(b)o(y)g(the)g(matc)o(hing)e(\\)p Fl(_FINISH)p Fp(")g(routine.)33 b(These)20 b(handles)f(are)g(not)g(v)n(alid)e(for)i(an)o(y)f(other)h(use.)34 b(It)19 b(is)f(erro-)75 1621 y(neous)i(to)f(call)f(this)i(routine)f(again)f (with)h(the)h(same)e Fl(peer_comm)p Fp(,)h Fl(peer_rank)e Fp(and)i Fl(tag)p Fp(,)h(without)e(calling)75 1671 y Fl(MPI_COMM_MAKE_FIN)o(ISH\(\))10 b Fp(to)k(\014nish)g(the)g(\014rst)h(call.)158 1757 y Fq(MPI)p 257 1757 15 2 v 17 w(COMM)p 434 1757 V 18 w(PEER)p 583 1757 V 18 w(MAKE)p 750 1757 V 18 w(FINISH\(mak)o(e)p 1063 1757 V 17 w(id,)g(new)p 1232 1757 V 17 w(comms\))75 1876 y(IN)h(mak)o(e)p 258 1876 V 17 w(id)36 b Fp(handle)14 b(from)e Fl(MPI)p 650 1876 14 2 v 15 w(COMM)p 753 1876 V 15 w(PEER)p 856 1876 V 14 w(MAKE)p 958 1876 V 15 w(START\(\))75 1961 y Fq(OUT)k(new)p 283 1961 15 2 v 17 w(comms)36 b Fp(arra)o(y)14 b(of)f(new)h(in)o(ter-comm)o (unicators)158 2045 y(This)9 b(completes)g(a)g Fl(_PEER_MAKE)e Fp(op)q(eration,)j(returning)g(an)f(arra)o(y)g(of)f Fl(num_comms)g Fp(new)i(in)o(ter-comm)o(unicators)75 2095 y(in)f Fl(new_comms)p Fp(.)14 b(\(Note)c(that)g Fl(num_comms)d Fp(w)o(as)i(sp)q(eci\014ed)i(in)e (the)h(corresp)q(onding)g(call)e(to)h Fl(MPI_COMM_PEER_MAKE_S)o(TART)o(\(\))p Fp(.\))75 2145 y(This)i(routine)g(is)g(collectiv)o(e)f(in)h(the)g Fl(my_comm)f Fp(of)g(the)h(corresp)q(onding)h(call)e(to)h Fl (MPI_COMM_PEER_MAK)o(E_STA)o(RT\(\))o Fp(.)75 2195 y(It)g(w)o(aits)f(for)h (the)g(remote)g(group)g(to)f(call)g Fl(MPI_COMM_PEER_MAKE_)o(START)o(\(\))e Fp(but)j(do)q(es)h(not)e(w)o(ait)g(for)h(the)g(remote)75 2244 y(group)j(to)f(call)h Fl(MPI_COMM_PEER_M)o(AKE_F)o(INISH)o(\(\))p Fp(.)158 2330 y Fq(MPI)p 257 2330 V 17 w(COMM)p 434 2330 V 18 w(NAME)p 601 2330 V 19 w(MAKE)p 769 2330 V 18 w(ST)l(AR)l(T\(m)o(y)p 1027 2330 V 16 w(comm,)i(name,)g(n)o(um)p 1431 2330 V 15 w(comms,)g(mak)o(e)p 1728 2330 V 17 w(id\))75 2450 y(IN)g(m)o(y)p 213 2450 V 17 w(comm)21 b Fp(lo)q(cal)13 b(in)o(tra-comm)n(unicator)75 2534 y Fq(IN)j(name)21 b Fp(\\name")12 b(kno)o(wn)h(to)h(b)q(oth)g(lo)q(cal)f(and) h(remote)f(group)h(leaders)75 2619 y Fq(IN)i(n)o(um)p 242 2619 V 16 w(comms)k Fp(n)o(um)o(b)q(er)13 b(of)h(new)g(in)o(ter-comm)o(unicators)e (to)h(construct)75 2704 y Fq(OUT)j(mak)o(e)p 309 2704 V 17 w(id)j Fp(handle)14 b(for)f(MPI)p 663 2704 13 2 v 16 w(COMM)p 817 2704 V 15 w(NAME)p 960 2704 V 16 w(MAKE)p 1105 2704 V 15 w(FINISH\(\))p eop %%Page: 13 15 bop 75 -100 a Fo(3.8.)31 b(INTR)o(ODUCTION)14 b(TO)g(INTER-COMMUNICA)m(TION) 702 b Fp(13)158 45 y Fq(MPI)p 257 45 15 2 v 17 w(COMM)p 434 45 V 18 w(NAME)p 601 45 V 19 w(MAKE)p 769 45 V 18 w(FINISH\(mak)o(e)p 1082 45 V 17 w(id,)15 b(new)p 1251 45 V 17 w(comms\))75 162 y(IN)h(mak)o(e)p 258 162 V 17 w(id)k Fp(handle)14 b(from)e(MPI)p 648 162 13 2 v 15 w(COMM)p 801 162 V 15 w(NAME)p 944 162 V 16 w(MAKE)p 1089 162 V 15 w(ST)m(AR)m(T\(\))75 244 y Fq(OUT)k(new)p 283 244 15 2 v 17 w(comms)k Fp(arra)o(y)14 b(of)f(new)h(in)o(ter-comm)o (unicators)158 325 y(These)j(are)e(the)h Fl(START/FINISH)d Fp(v)o(ersions)j(of)e Fl(MPI\\_COMM\\_NAME\\_MAK)o(E\(\))p Fp(.)19 b(They)d(ha)o(v)o(e)f(sync)o(hroniza-)75 375 y(tion)e(prop)q(erties)j (analogous)c(to)i(the)h(corresp)q(onding)f Fl(\\_PEER\\_)f Fp(routines.)75 483 y 
Communicator Status

This local routine allows the calling process to determine if a communicator is an inter-communicator or an intra-communicator.

MPI_COMM_STAT(comm, status)

IN   comm     communicator
OUT  status   integer status

This returns the status of communicator comm.  Valid status values are MPI_INTRA, MPI_INTER, MPI_INVALID.

Discussion: Should there be any other status values?

Support for User-Level Servers

This collective routine is used to make it easier for processes to form sub-groups and contact user-level servers.

MPI_COMM_SPLITL(comm, key, nkeys, leaders, sub_comm)

IN   comm      extant intra-communicator to be "split"
IN   key       key for sub-group membership
IN   nkeys     number of keys (number of sub-groups)
OUT  leaders   ranks of sub-group leaders in comm
OUT  sub_comm  intra-communicator describing sub-group of calling process

This routine splits the group described by intra-communicator comm into nkeys sub-groups.  Each calling process must specify a value of key in the range [0 ... (nkeys-1)].  Processes specifying the same key are placed in the same sub-group.  Ranks of the leaders of each sub-group (relative to comm) are returned in integer array leaders.  This routine returns a new intra-communicator, sub_comm, that describes the sub-group to which the calling process belongs.
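A sketch of one possible use (illustrative only; the choice of two sub-groups keyed on rank parity is arbitrary):

    void *comm;          /* extant intra-communicator                     */
    void *sub_comm;      /* sub-group intra-communicator for this process */
    int   leaders[2];    /* ranks, in comm, of the two sub-group leaders  */
    int   me, key;

    mpi_comm_rank(comm, &me);
    key = me % 2;                              /* keys 0 and 1            */
    mpi_comm_splitl(comm, key, 2, leaders, &sub_comm);
    /* leaders[] lets any process contact either sub-group's "server".    */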
3.8.4 Implementation Notes

Security and Performance Issues

The routines in this section do not introduce insecurity into the basic usage of MPI.  Specifically, they do not allow contexts to be bound in multiple usable communicators.

The provision of inter-communication does not adversely affect the (potential) performance of intra-communication.

"Under the Hood"

A possible implementation of a communicator contains a single group, a send context, a receive context, and a source (for the message envelope).  This structure makes intra- and inter-communicators basically the same object.

The intra-communicator has the properties that: the send-context and the receive-context are identical; the process is a member of the group; the source is the rank of the process in the group.

The inter-communicator cannot be discussed sensibly without considering processes in both the local and remote groups.  Imagine a process P in group G which has an inter-communicator Cp, and a process Q in group H which has an inter-communicator Cq.  (Note that G and H do not have to be distinct.)  The inter-communicators have the properties that: the send context of Cp is identical to the receive context of Cq, and is unique in H; the receive context of Cp is identical to the send context of Cq, and is unique in G; the group of Cp is H; the group of Cq is G; the source of Cp is the rank of P in G, which is the group of Cq; the source of Cq is the rank of Q in H, which is the group of Cp.

It is easy to see that in terms of these fields, the intra-communicator is a special case of the inter-communicator.  It has G = H and both contexts the same.  This ensures that the point-to-point communication implementation for intra-communication and inter-communication can be identical.
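The fields named above can be summarized in a small sketch of the kind of descriptor an implementation might keep (purely illustrative; the field names are not part of the proposal):

    struct possible_comm {
       void *group;          /* the group (the remote group, for an inter-comm) */
       int   send_context;   /* context stamped on outgoing envelopes           */
       int   recv_context;   /* context expected on incoming envelopes          */
       int   source;         /* source rank placed in this process' envelopes   */
    };
    /* Intra-communicator: send_context == recv_context, the process is a
       member of group, and source is its rank in group.  Inter-communicator:
       the contexts differ, group is the remote group, and source is the
       process' rank in its own (local) group.                                  */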
b(related)e(to)g(cac)o(heing.)k(They)c(are)h(all)d(pro)q(cess-lo)q(cal.)158 1939 y Fq(MPI)p 257 1939 15 2 v 17 w(A)l(TTRIBUTE)p 561 1939 V 18 w(ALLOC\(n,handle)p 942 1939 V 14 w(arra)o(y)l(,len\))75 2056 y(IN)k(n)21 b Fp(n)o(um)o(b)q(er)13 b(of)g(handles)h(to)g(allo)q(cate)75 2139 y Fq(OUT)i(handle)p 340 2139 V 15 w(arra)o(y)k Fp(p)q(oin)o(ter)14 b(to)g(arra)o(y)g(of)f(opaque)h(attribute)g(handling)f(structure)75 2221 y Fq(OUT)j(len)j Fp(length)14 b(of)f(eac)o(h)i(opaque)e(structure)75 2304 y(Allo)q(cates)18 b(a)g(new)h(attribute,)h(so)e(user)h(programs)e(and)h (functionalit)o(y)f(la)o(y)o(ered)h(on)g(top)h(of)e(MPI)i(can)f(access)75 2353 y(attribute)c(tec)o(hnology)m(.)158 2439 y Fq(MPI)p 257 2439 V 17 w(A)l(TTRIBUTE)p 561 2439 V 18 w(FREE\(handle)p 866 2439 V 15 w(arra)o(y)l(,n\))75 2556 y(IN)i(handle)p 289 2556 V 15 w(arra)o(y)21 b Fp(arra)o(y)13 b(of)h(p)q(oin)o(ters)g(to)g(opaque)g (attribute)g(handling)f(structures)75 2639 y Fq(IN)j(n)21 b Fp(n)o(um)o(b)q(er)13 b(of)g(handles)h(to)g(deallo)q(cate)p 75 2665 720 2 v 121 2692 a Fb(1)139 2704 y Fa(The)d(deletion)e(of)j (\015atten/un)o(\015at)o(ten)c(mak)o(es)i(this)h(p)q(oin)o(t)f(mo)q(ot.)p eop %%Page: 15 17 bop 75 -100 a Fo(3.9.)26 b(CA)o(CHEING)1438 b Fp(15)75 45 y(F)m(rees)15 b(attribute)f(handle.)158 130 y Fq(MPI)p 257 130 15 2 v 17 w(GET)p 376 130 V 18 w(A)l(TTRIBUTE)p 681 130 V 17 w(KEY\(k)o(eyv)m(al\))75 247 y(OUT)i(k)o(eyv)m(al)k Fp(Pro)o(vide)14 b(the)h(in)o(teger)f(k)o(ey)g(v)n (alue)f(for)h(future)g(storing.)75 328 y(Generates)h(a)f(new)g(cac)o(he)h(k)o (ey)m(.)158 413 y Fq(MPI)p 257 413 V 17 w(SET)p 365 413 V 18 w(A)l(TTRIBUTE\(handle,)10 b(k)o(eyv)m(al,)i(attribut)o(e)p 1196 413 V 14 w(v)m(al,)g(attribute)p 1484 413 V 14 w(len,)f(attribute)p 1774 413 V 14 w(destructor)p 2007 413 V 14 w(routine\))75 580 y(IN)16 b(handle)j Fp(opaque)14 b(attribute)g(handle)75 662 y Fq(IN)i(k)o(eyv)m(al)21 b Fp(The)14 b(in)o(teger)h(k)o(ey)f(v)n(alue)f(for) g(future)i(storing.)75 744 y Fq(IN)h(attribute)p 339 744 V 14 w(v)m(al)21 b Fp(attribute)14 b(v)n(alue)g(\(opaque)g(p)q(oin)o(ter\))75 826 y Fq(IN)i(attribute)p 339 826 V 14 w(len)k Fp(length)14 b(of)f(attribute)h(\(in)g(b)o(ytes\))75 908 y Fq(IN)i(attribute)p 339 908 V 14 w(destructor)p 571 908 V 15 w(routine)i Fp(What)13 b(one)h(calls)g(to)g(get)g(rid)f(of)h(this)g(attribute)g(later)75 989 y(Stores)h(attribute)f(in)g(cac)o(he)h(b)o(y)e(k)o(ey)m(.)158 1074 y Fq(MPI)p 257 1074 V 17 w(TEST)p 398 1074 V 18 w(A)l (TTRIBUTE\(handle,k)o(eyv)m(al,attrib)o(ut)o(e)p 1206 1074 V 14 w(ptr,len\))75 1191 y(IN)j(handle)j Fp(opaque)14 b(attribute)g(handle)75 1273 y Fq(IN)i(k)o(eyv)m(al)21 b Fp(The)14 b(in)o(teger)h(k)o(ey)f(v)n(alue)f (for)g(future)i(storing.)75 1355 y Fq(OUT)h(attribute)p 389 1355 V 14 w(ptr)j Fp(v)o(oid)13 b(p)q(oin)o(ter)h(to)g(attribute,)g(or)g (NULL)g(if)f(not)h(found)75 1437 y Fq(OUT)i(len)j Fp(length)14 b(in)f(b)o(ytes)i(of)e(attribute,)h(if)f(found.)75 1518 y(Retriev)o(e)h (attribute)h(from)d(cac)o(he)j(b)o(y)e(k)o(ey)m(.)158 1603 y Fq(MPI)p 257 1603 V 17 w(DELETE)p 466 1603 V 18 w(A)l(TTRIBUTE\(handle,)h (k)o(eyv)m(al\))75 1720 y(IN)i(handle)j Fp(opaque)14 b(attribute)g(handle)75 1802 y Fq(IN)i(k)o(eyv)m(al)21 b Fp(The)14 b(in)o(teger)h(k)o(ey)f(v)n(alue)f (for)g(future)i(storing.)75 1883 y(Delete)g(attribute)f(from)e(cac)o(he)j(b)o (y)f(k)o(ey)m(.)75 1990 y Fq(Example)75 2067 y Fp(Eac)o(h)g(attribute)g (consists)h(of)e(a)g(p)q(oin)o(ter)h(or)g(a)f(v)n(alue)g(of)g(the)h(same)f (size)h(as)g(a)f(p)q(oin)o(ter,)h(and)f(w)o(ould)g(t)o(ypically)f(b)q(e)75 2117 
Example

Each attribute consists of a pointer or a value of the same size as a pointer, and would typically be a reference to a larger block of storage managed by the module.  As an example, a global operation using cacheing to be more efficient for all contexts of a group after the first call might look like this:

    static int gop_key_assigned = 0;    /* 0 only on first entry */
    static int gop_key;                 /* key for this module's stuff */

    efficient_global_op (comm, ...)
    void *comm;
    {
      struct gop_stuff_type *gop_stuff;   /* whatever we need */
      void  *group = mpi_comm_group(comm);

      if (!gop_key_assigned)              /* get a key on first call ever */
      { gop_key_assigned = 1;
        if ( ! (gop_key = mpi_Get_Attribute_Key()) ) {
          mpi_abort ("Insufficient keys available");
        }
      }
      if (mpi_Test_Attribute (mpi_group_attr(group), gop_key, &gop_stuff))
      { /* This module has executed in this group before.
           We will use the cached information */
      }
      else
      { /* This is a group that we have not yet cached anything in.
           We will now do so.
        */
        gop_stuff = /* malloc a gop_stuff_type */

        /* ... fill in *gop_stuff with whatever we want ... */

        mpi_set_attribute (mpi_group_attr(group), gop_key, gop_stuff,
                           gop_stuff_destructor);
      }
      /* ... use contents of *gop_stuff to do the global op ... */
    }

    gop_stuff_destructor (gop_stuff)    /* called by MPI on group delete */
    struct gop_stuff_type *gop_stuff;
    {
      /* ... free storage pointed to by gop_stuff ... */
    }

Discussion: The cache facility could also be provided for other descriptors, but it is less clear how such provision would be useful.  It is suggested that this issue be reviewed in reference to Virtual Topologies.
3.10 Formalizing the Loosely Synchronous Model (Usage, Safety)

3.10.1 Basic Statements

When a caller passes a communicator (which contains a context and group) to a callee, that communicator must be free of side effects throughout execution of the subprogram (quiescent).  This provides one model in which libraries can be written, and work "safely."  For libraries so designated, the callee has permission to do whatever communication it likes with the communicator, and under the above guarantee knows that no other communications will interfere.  Since we permit the creation of new communicators without synchronization (assuming preallocated contexts), this does not impose a significant overhead.

This form of safety is analogous to other common computer science usages, such as passing a descriptor of an array to a library routine.  The library routine has every right to expect such a descriptor to be valid and modifiable.

3.10.2 Models of Execution

We say that a parallel procedure is active at a process if the process belongs to a group that may collectively execute the procedure, and some member of that group is currently executing the procedure code.  If a parallel procedure is active at a process, then this process may be receiving messages pertaining to this procedure, even if it does not currently execute the code of this procedure.

Nonreentrant parallel procedures

This covers the case where, at any point in time, at most one invocation of a parallel procedure can be active at any process.  That is, concurrent invocations of the same parallel procedure may occur only within disjoint groups of processes.  For example, all invocations of parallel procedures involve all processes, processes are single-threaded, and there are no recursive invocations.

In such a case, a context can be statically allocated to each procedure.  The static allocation can be done in a preamble, as part of initialization code.  Or, it can be done at compile/link time, if the implementation has additional mechanisms to reserve context values.  Communicators to be used by the different procedures can be built in a preamble, if the executing groups are statically defined; if the executing groups change dynamically, then a new communicator has to be built whenever the executing group changes, but this new communicator can be built using the same preallocated context.  If the parallel procedures can be organized into libraries, so that only one procedure of each library can be concurrently active at each processor, then it is sufficient to allocate one context per library.
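A sketch of such a preamble (illustrative only; it borrows the allocation and duplication calls that appear in Library Example #1 later in this chapter, and the output-argument convention of mpi_comm_dup is assumed):

    /* executed once, during library initialization */
    static void *lib_context;   /* the single context reserved for the library */
    static void *lib_comm;      /* communicator all library procedures use     */
    int len;

    mpi_contexts_alloc(MPI_COMM_ALL, 1, &lib_context, &len);
    mpi_comm_dup(MPI_COMM_ALL, lib_context, &lib_comm);
    /* every (nonreentrant) procedure of the library communicates only
       through lib_comm                                                        */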
Parallel procedures that are nonreentrant within each executing group

This covers the case where, at any point in time, for each process group, there can be at most one active invocation of a parallel procedure by a process member.  However, it might be possible that the same procedure is concurrently invoked in two partially (or completely) overlapping groups.  For example, the same collective communication function may be concurrently invoked on two partially overlapping groups.

In such a case, a context is associated with each parallel procedure and each executing group, so that overlapping execution groups have distinct communication contexts.  (One does not need a different context for each group; one merely needs a "coloring" of the groups, so that one can generate the communicators for each parallel procedure when the execution groups are defined.)  Here, again, one only needs one context for each library, if no two procedures from the same library can be concurrently active in the same group.

Note that, for collective communication libraries, we do allow several concurrent invocations within the same group: a broadcast in a group may be started at a process before the previous broadcast in that group has ended at another process.  In such a case, one cannot rely on context mechanisms to disambiguate successive invocations of the same parallel procedure within the same group: the procedure needs to be implemented so as to avoid confusion.  For example, for broadcast, one may need to carry additional information in messages, such as the broadcast root, to help in such disambiguation; one also relies on preservation of message order by MPI.  With such an approach, we may be gaining performance, but we lose modularity.  It is not sufficient to implement the parallel procedure so that it works correctly in isolation, when invoked only once; it needs to be implemented so that any number of successive invocations will execute correctly.  Of course, the same approach can be used for other parallel libraries.
Well-nested parallel procedures

Calls of parallel procedures are well nested if a new parallel procedure is always invoked in a subset of a group executing the same parallel procedure.  Thus, processes that execute the same parallel procedure have the same execution stack.

In such a case, a new context needs to be dynamically allocated for each new invocation of a parallel procedure.  However, a stack mechanism can be used for allocating new contexts.  Thus, a possible mechanism is to allocate first a large number of contexts (up to the upper bound on the depth of nested parallel procedure calls), and then use a local stack management of these contexts on each process to create a new communicator (using MPI_COMM_MAKE) for each new invocation.
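A sketch of that stack mechanism (illustrative only; MAX_NEST and ctx_stack are hypothetical names, and the binding call mirrors the usage in Example #4 later in this chapter rather than MPI_COMM_MAKE itself):

    #define MAX_NEST 32                 /* assumed bound on nesting depth     */
    static void *ctx_stack[MAX_NEST];   /* filled once via mpi_contexts_alloc */
    static int   ctx_top = 0;           /* index of the next unused context   */

    /* entering a nested parallel procedure whose executing group is `group`: */
    void *proc_comm;
    mpi_comm_bind(group, ctx_stack[ctx_top++], 0, &proc_comm);

    /* ... the invocation communicates only through proc_comm ... */

    /* leaving the procedure: the context returns to the stack */
    ctx_top--;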
The General case

In the general case, there may be multiple concurrently active invocations of the same parallel procedure within the same group; invocations may not be well-nested.  A new context needs to be created for each invocation.  It is the user's responsibility to make sure that, if two distinct parallel procedures are invoked concurrently on overlapping sets of processes, then context allocation or communicator creation is properly coordinated.

3.11 Motivating Examples

Discussion: The intra-communication examples were first presented at the June MPI meeting; the inter-communication routines (when added) are new.

3.11.1 Current Practice #1

Example #1a:

    int me, size;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);
    mpi_comm_size(MPI_COMM_ALL, &size);

    printf("Process %d size %d\n", me, size);
    ...
    mpi_end();

Example #1a is a do-nothing program that initializes itself legally, refers to the "all" communicator, and prints a message.  This example does not imply that MPI supports printf-like communication itself.

Example #1b:

    int me, size;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);   /* local */
    mpi_comm_size(MPI_COMM_ALL, &size); /* local */

    if((me % 2) == 0)
       mpi_send(..., MPI_COMM_ALL, ((me + 1) % size));
    else
       mpi_recv(..., MPI_COMM_ALL, ((me - 1 + size) % size));

    ...
    mpi_end();

Example #1b schematically illustrates message exchanges between "even" and "odd" processes in the "all" communicator.

3.11.2 Current Practice #2

    void *data;
    int me;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);

    if(me == 0)
    {
       /* get input, create buffer ``data'' */
       ...
    }

    mpi_broadcast(MPI_COMM_ALL, 0, data);

    ...
    mpi_end();

This example illustrates the use of a collective communication.

3.11.3 (Approximate) Current Practice #3

    int me;
    void *grp0, *grprem, *commslave;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);                      /* local */
    mpi_local_subgroup(MPI_GROUP_ALL, 1, ``[0]'', &grp0);  /* local */
    mpi_group_difference(MPI_GROUP_ALL, grp0, &grprem);    /* local */
    mpi_comm_make(MPI_COMM_ALL, grprem, &commslave);

    if(me != 0)
    {
       /* compute on slave */
       ...
       mpi_reduce(commslave, ...);
       ...
    }
    /* zero falls through immediately to this reduce, others do later... */
    mpi_reduce(MPI_COMM_ALL, ...);

This example illustrates how a group consisting of all but the zeroth process of the "all" group is created, and then how a communicator is formed (commslave) for that new group.  The new communicator is used in a collective call, and all processes execute a collective call in the MPI_COMM_ALL context.  This example illustrates how the two communicators (which possess distinct contexts) protect communication.  That is, communication in MPI_COMM_ALL is insulated from communication in commslave, and vice versa.

In summary, for communication with "group safety," contexts within communicators must be distinct.

3.11.4 Example #4

The following example is meant to illustrate "safety" between point-to-point and collective communication.  MPI guarantees that a single communicator can do safe point-to-point and collective communication.

    #define TAG_ARBITRARY 12345
    #define SOME_COUNT    50
    int me;
    int len;
    void *contexts;
    void *subgroup;

    ...
    mpi_init();
    mpi_contexts_alloc(MPI_COMM_ALL, 1, &contexts, &len);
    mpi_local_subgroup(MPI_GROUP_ALL, 4, ``[2,4,6,8]'', &subgroup);  /* local */
    mpi_group_rank(subgroup, &me);                                   /* local */

    if(me != MPI_UNDEFINED)
    {
       mpi_comm_bind(subgroup, context, 0, &the_comm);  /* local */

       /* asynchronous receive: */
       mpi_irecv(..., MPI_SRC_ANY, TAG_ARBITRARY, the_comm);
    }

    for(i = 0; i < SOME_COUNT; i++)
       mpi_reduce(the_comm, ...);

3.11.5 Library Example #1

The main program:

    int done = 0;
    user_lib_t *libh_a, *libh_b;
    void *dataset1, *dataset2;
    ...
    mpi_init();
    ...
    init_user_lib(MPI_COMM_ALL, &libh_a);
    init_user_lib(MPI_COMM_ALL, &libh_b);
    ...
    user_start_op(libh_a, dataset1);
    user_start_op(libh_a, dataset2);
    ...
    while(!done)
    {
       /* work */
       ...
       mpi_reduce(MPI_COMM_ALL, ...);
       ...
       /* see if done */
       ...
    }
    user_end_op(libh_a);
    user_end_op(libh_b);

The user library initialization code:

    void init_user_lib(void *comm, user_lib_t **handle)
    {
       user_lib_t *save;
       void *context;
       void *group;
       int len;

       user_lib_initsave(&save);  /* local */
       mpi_comm_group(comm, &group);
       mpi_contexts_alloc(comm, 1, &context, &len);
       mpi_comm_dup(comm, context, save -> comm);

       /* other inits */
       *handle = save;
    }

Notice that the communicator comm passed to the library is needed to allocate new contexts.

User start-up code:

    void user_start_op(user_lib_t *handle, void *data)
    {
       user_lib_state *state;
       state = handle -> state;
       mpi_irecv(save -> comm, ..., data, ... &(state -> irecv_handle));
       mpi_isend(save -> comm, ..., data, ... &(state -> isend_handle));
    }

User clean-up code:

    void user_end_op(user_lib_t *handle)
    {
       mpi_wait(save -> state -> isend_handle);
       mpi_wait(save -> state -> irecv_handle);
    }

3.11.6 Library Example #2

The main program:

    int ma, mb;
    ...
    list_a := ``[0,1]'';
    list_b := ``[0,2{,3}]'';

    mpi_local_subgroup(MPI_GROUP_ALL, 2, list_a, &group_a);
    mpi_local_subgroup(MPI_GROUP_ALL, 2(3), list_b, &group_b);

    mpi_comm_make(MPI_COMM_ALL, group_a, &comm_a);
    mpi_comm_make(MPI_COMM_ALL, group_b, &comm_b);

    mpi_comm_rank(comm_a, &ma);
    mpi_comm_rank(comm_b, &mb);
    if(ma != MPI_UNDEFINED)
       lib_call(comm_a);
    if(mb != MPI_UNDEFINED)
    {
       lib_call(comm_b);
       lib_call(comm_b);
    }

The library:

    void lib_call(void *comm)
    {
       int me, done = 0;
       mpi_comm_rank(comm, &me);
       if(me == 0)
          while(!done)
          {
             mpi_recv(..., comm, MPI_SRC_ANY);
             ...
          }
       else
       {
          /* work */
          mpi_send(..., comm, 0);
          ....
       }
       MPI_SYNC(comm);   /* include/no safety for safety/no safety */
    }

The above example is really two examples, depending on whether or not you include rank 3 in list_b.  This example illustrates that, despite contexts, subsequent calls to lib_call with the same context need not be safe from one another ("back masking").  Safety is realized if the MPI_SYNC is added.  What this demonstrates is that libraries have to be written carefully, even with contexts.

Algorithms like "combine" have strong enough source selectivity so that they are inherently OK.  So are multiple calls to a typical tree broadcast algorithm with the same root.  However, multiple calls to a typical tree broadcast algorithm, with different roots, could break.  Therefore, such algorithms would have to utilize the tag to keep things straight.  All of the foregoing is a discussion of "collective calls" implemented with point-to-point operations.  MPI implementations may or may not implement collective calls using point-to-point operations.  These algorithms are used to illustrate the issues of correctness and safety, independent of how MPI implements its collective calls.

3.11.7 Inter-Communication Examples

Examples of usage of all routines in this section ...

From owner-mpi-context@CS.UTK.EDU Tue Aug 10 02:03:00 1993
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA17605; Tue, 10 Aug 93 02:03:00 -0400
Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03034; Tue, 10 Aug 93 02:02:18 -0400
X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 10 Aug 1993 02:02:14 EDT
Errors-To: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03026; Tue, 10 Aug 93 02:02:04 -0400
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA27007; Tue, 10 Aug 93 01:02:01 CDT
Date: Tue, 10 Aug 93 01:02:01 CDT
From: Tony Skjellum
Message-Id: <9308100602.AA27007@Aurora.CS.MsState.Edu>
To: mpi-context@cs.utk.edu
Subject: further revised context chapter

This is the postscript (bug fixes, missing inter-communication examples added).

- Tony Skjellum

--- cut here ---
%!PS-Adobe-2.0
%%Creator: dvips 5.47 Copyright 1986-91 Radical Eye Software
%%Title: mpi-report.dvi
%%Pages: 34 1
%%BoundingBox: 0 0 612 792
%%EndComments
00F87C0000F87C0000F8F80001F0F80001F0F80001F0F80001F0F80003E0780003E0780007C07C 0007C07C000F803C000F003E001E001E003C000F00780007C1F00003FFC00000FE00001D217B9F 23>I<07FFFF0007FFFFC0003C03E0003C01F0007C00F0007800F8007800F8007800F8007800F8 007800F800F801F000F001F000F001E000F003C000F00F8000FFFE0001FFF80001E0000001E000 0001E0000001E0000001E0000003E0000003C0000003C0000003C0000003C0000003C0000007C0 00007FFC0000FFFC00001D1F7E9E1F>I<07FFFC0007FFFF00003C07C0003C03E0007C01E00078 01F0007801F0007801F0007801F0007801E000F803E000F003C000F0078000F01F0000FFFC0000 FFF00001F0780001E03C0001E03C0001E01C0001E01E0001E01E0003E03E0003C03E0003C03E00 03C03E0003C03E0603C03E0E07C03F0C7FFC1F1CFFFC1FF8000007F01F207E9E21>82 D<003F8C00FFCC01E1FC03C07C07803C07003C0F00380E00180E00180E00180E00000F00000F80 000FF00007FF0007FF8003FFC000FFE0000FE00001E00000F00000F00000F06000E06000E06000 E07000E07001C07803C0FC0780FE0F00EFFE00C3F80016217D9F19>I<1FFFFFF81FFFFFF81E03 C0F83803C0383807C0383007803870078018600780186007803860078030C00F8030000F000000 0F0000000F0000000F0000000F0000001F0000001E0000001E0000001E0000001E0000001E0000 003E0000003C0000003C0000003C0000003C0000003C0000007C00001FFFF0003FFFF0001D1F7B 9E21>II I<03FFC1FFC003FFC1FFC0003E00FC00001E007000001E006000000F00C000000F01C000000F83 800000078300000007C600000003CC00000003FC00000001F800000001F000000000F000000000 F800000001F800000003FC000000033C000000063C0000000C1E0000001C1E000000381F000000 300F000000600F800000C007800001C007C000038003C0000FC007E000FFF01FFE00FFE01FFE00 221F7F9E22>88 D<7FF803FF80FFF803FF0007C000F80007C000E00003C000C00003E001C00001 E003800001F003000000F006000000F80E000000F80C00000078180000007C300000003C700000 003E600000001EC00000001F800000000F800000000F000000000F000000000F000000000E0000 00001E000000001E000000001E000000001E000000001E000000001C000000003C00000003FFE0 000007FFE00000211F7B9E22>I<03FFFFC003FFFFC003F0078007C00F8007001F0007003E000E 007C000C007C000C00F8000C01F0001803E0000003E0000007C000000F8000001F0000001F0000 003E0000007C000000F8030000F0030001F0030003E0060007C00600078006000F800E001F001C 003E001C003C007C007C01FC00FFFFF800FFFFF8001A1F7D9E1C>I E /Fm 83 126 df<70F8F8F8F8F8F8F8F8F8F8F8F8F8F8F8F8F8000000000070F8F8F870051C779B18> 33 D<4010F078F078F078E038E038E038E038E038E038E038E038E038E0380D0E7B9C18>I<078F 00078F00078F00078F00078F00078F00078F00FFFFE0FFFFE0FFFFE07FFFE00F1E000F1E000F1E 000F1E000F1E000F1E007FFFE0FFFFE0FFFFE0FFFFE01E3C001E3C001E3C001E3C001E3C001E3C 001E3C00131C7E9B18>I<3803807C0780FE0780FE0F80EE0F00EE0F00EE1F00EE1E00EE1E00FE 3E00FE3C007C3C00387C0000780000780000F80000F00001F00001E00001E00003E00003C00003 C00007C0000783800787C00F8FE00F0FE00F0EE01F0EE01E0EE01E0EE03E0FE03C0FE03C07C01C 038013247E9F18>37 D<03C0000FF0000FF0001E78001E78001C38001C38001C78001C7BF01CF3 F01FF3F01FE7800FC7800F87000F0F001F0F003F8E007FDE00FBDE00F1FC00E1FC00E0F800E0F8 70F0FC70F9FEF07FFFF03FCFE01F03C0141C7F9B18>I<387C7E7E3E0E0E1E1E3C7CF8F0E0070E 789B18>I<007000F003F007E00F800F001E003E003C007800780070007000F000F000E000E000 E000E000E000E000F000F00070007000780078003C003E001E000F000F8007E003F000F000700C 24799F18>II<01C00001C00001C00001C000C1C180F1C780F9CF807FFF001FFC0007F00007F0 001FFC007FFF00F9CF80F1C780C1C18001C00001C00001C00001C00011147D9718>I<00F00000 F00000F00000F00000F00000F00000F00000F000FFFFE0FFFFE0FFFFE0FFFFE000F00000F00000 F00000F00000F00000F00000F00000F00013147E9718>I<3C7E7F7F7F3F0F0F3EFEF8F0080C78 8518>II<78FCFCFCFC780606778518>I<00038000 0780000780000F80000F00001F00001E00001E00003E00003C00007C0000780000780000F80000 
F00001F00001E00003E00003C00003C00007C0000780000F80000F00000F00001F00001E00003E 00003C00003C00007C0000780000F80000F00000F00000E0000011247D9F18>I<01F00007FC00 0FFE001F1F001C07003803807803C07001C07001C0E000E0E000E0E000E0E000E0E000E0E000E0 E000E0E000E0E000E0F001E07001C07001C07803C03803801C07001F1F000FFE0007FC0001F000 131C7E9B18>I<0380038007800F800F803F80FF80FB8063800380038003800380038003800380 038003800380038003800380038003800380FFFEFFFEFFFE0F1C7B9B18>I<07F8001FFE003FFF 807C0FC0F803C0F001E0F001E0F000E0F000E00000E00001E00001E00003C00003C0000780000F 80001F00003E0000FC0001F80003E00007C0000F80003F00E07E00E0FFFFE0FFFFE0FFFFE0131C 7E9B18>I<07F8001FFE007FFF807C0FC07803C07801C07801C00001C00003C00007C0000F8003 FF0003FE0003FF80000FC00003C00001E00001E00000E00000E0F000E0F001E0F001E0F003C0FC 0FC07FFF801FFE0007F800131C7E9B18>I<001F00003F00007F0000770000F70001E70001C700 03C7000787000707000E07001E07003C0700380700780700F00700FFFFF8FFFFF8FFFFF8000700 00070000070000070000070000070000FFF800FFF800FFF8151C7F9B18>I<3FFF803FFF803FFF 803800003800003800003800003800003800003800003800003BFC003FFF003FFF803E07C03803 C00001E00001E00000E06000E0F000E0F001E0F003E0F007C07C0F807FFF001FFE0007F800131C 7E9B18>I<007E0003FF8007FFC00FC3C01F03C03E03C03C03C0780000780000F00000F3F800EF FE00FFFF80FE0F80FC03C0F803E0F001E0F000E0F000E0F000E07000E07801E07803E03C07C03E 0F801FFF000FFE0003F800131C7E9B18>II<03F8000FFE 001FFF003E0F803803807001C07001C07001C07001C03803803C07801FFF0007FC000FFE001F1F 003C07807001C0F001E0E000E0E000E0E000E0E000E07001C07803C03E0F801FFF000FFE0003F8 00131C7E9B18>I<78FCFCFCFC78000000000000000078FCFCFCFC780614779318>58 D<3C7E7E7E7E3C00000000000000003C7E7E7E7E3E1E1E3CFCF8E0071A789318>I<000380000F 80001F80003F8000FE0001FC0003F8000FE0001FC0003F8000FE0000FC0000FC0000FE00003F80 001FC0000FE00003F80001FC0000FE00003F80001F80000F8000038011187D9918>I<7FFFC0FF FFE0FFFFE0FFFFE0000000000000000000000000FFFFE0FFFFE0FFFFE07FFFC0130C7E9318>I< E00000F80000FC0000FE00003F80001FC0000FE00003F80001FC0000FE00003F80001F80001F80 003F8000FE0001FC0003F8000FE0001FC0003F8000FE0000FC0000F80000E0000011187D9918> I<00700000F80000F80000D80000D80001DC0001DC0001DC00018C00038E00038E00038E00038E 000306000707000707000707000707000FFF800FFF800FFF800E03800E03801C01C01C01C0FF8F F8FF8FF8FF8FF8151C7F9B18>65 DI<01FCE007FFE00FFFE01F07 E03E03E03C01E07801E07800E07000E0F00000F00000E00000E00000E00000E00000E00000E000 00F00000F000007000E07800E07800E03C01E03E03E01F07C00FFF8007FF0001FC00131C7E9B18 >IIII<01F9C007FFC00FFFC01F0FC03E07C03C03C07803C07801C07001C0F00000F00000E00000E0 0000E00000E00000E01FF0E01FF0F01FF0F001C07003C07803C07803C03C07C03E0FC01F1FC00F FFC007FDC001F9C0141C7E9B18>III75 DIII<0FF8003FFE007FFF00780F00700700F00780E00380E003 80E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380E003 80E00380F00780700700780F007FFF003FFE000FF800111C7D9B18>II82 D<07F3801FFF803FFF807C1F80F80780F00780E00380E0 0380F00000F000007800007F00003FF0000FFE0001FF80001FC00003C00001E00001E00000E0E0 00E0E000E0E001E0F003E0FC07C0FFFF80FFFF00E7FC00131C7E9B18>III87 D<7F9FE07F9FE07F9FE00E07000F0700070E00078E00039C0003DC0001F80001F80000F00000F0 0000700000F00000F80001F80001DC00039E00038E00070F000707000E07800E03801E03C0FF8F F8FF8FF8FF8FF8151C7F9B18>II91 DII95 D<0E1E3E7C78F0F0E0E0F8FCFC7C38070E789E18>I<1FE0007FF8007FFE00 783E00780F00000F0000070001FF000FFF003FFF007F0700F80700F00700E00700E00700F00F00 F83F007FFFF03FFFF00FE1F014147D9318>II<01FE000FFF801F FF803F07807C0780780000F00000F00000E00000E00000E00000E00000F00000F000007801C07C 
03C03F07C01FFF800FFF0001FC0012147D9318>I<003F80003F80003F80000380000380000380 00038000038003F3800FFF801FFF803E1F807C0F80780780F00780F00380E00380E00380E00380 E00380F00780F00780780F80781F803E3F803FFFF80FFBF803E3F8151C7E9B18>I<03F0000FFE 001FFF003E1F807C0780780380F003C0F003C0E001C0FFFFC0FFFFC0FFFFC0F00000F000007801 C07C03C03F07C01FFF800FFF0001FC0012147D9318>I<001FC0007FE000FFE001F1E001E0C001 C00001C00001C000FFFFC0FFFFC0FFFFC001C00001C00001C00001C00001C00001C00001C00001 C00001C00001C00001C00001C00001C00001C0007FFF007FFF007FFF00131C7F9B18>I<03F1F0 0FFFF81FFFF81E1F383C0F003C0F003807003807003807003C0F003C0F001E1E001FFE003FFC00 3FF0003C00003C00001FFF003FFFC07FFFF07801F0F00078F00078E00038E00038F00078F800F8 7E03F03FFFE00FFF8003FE00151F7F9318>II<03800007C00007 C00007C000038000000000000000000000000000FFC000FFC000FFC00001C00001C00001C00001 C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C000FFFF80FFFF80FF FF80111D7C9C18>I107 DIII<01F0000F FE001FFF003E0F803803807001C07001C0E000E0E000E0E000E0E000E0E000E0F001E07001C078 03C03C07803E0F801FFF000FFE0001F00013147E9318>II<03F3800FFF801FFF803E1F807C0F80780780F00780F00780E00380E00380E00380E003 80F00780F00780780F807C0F803E1F801FFF800FFF8003F3800003800003800003800003800003 80000380000380003FF8003FF8003FF8151E7E9318>II<0FF7003FFF007FFF00F81F00F00F00E00700F00700FC00007FF000 1FFC0007FF00001F80E00780E00380F00380F80780FC0F80FFFF00FFFE00E7F80011147D9318> I<038000038000038000038000038000FFFFC0FFFFC0FFFFC00380000380000380000380000380 000380000380000380000380000380400380E00380E003C1E003E3E001FFC000FF80007E001319 7F9818>IIII<7F9FF07F9FF07F9FF0070700078E00039E0001DC0001F80000F80000 700000F00000F80001DC00039E00038E000707000F0780FF8FF8FF8FF8FF8FF815147F9318>I< FF8FF8FF8FF8FF8FF80E01C00E03800E0380070380070700070700038700038600038E0001CE00 01CE0000CC0000CC0000DC0000780000780000780000700000700000700000F00000E00079E000 7BC0007F80003F00001E0000151E7F9318>I<7FFFF07FFFF07FFFF07003E07007C0700F80001F 00003E00007C0000F80001F00003E00007C0000F80701F00703E00707C0070FFFFF0FFFFF0FFFF F014147F9318>I<0007E0003FE0007FE000FC0000F00000E00000E00000E00000E00000E00000 E00000E00000E00000E00000E00001E0007FE000FFC000FFC0007FE00001E00000E00000E00000 E00000E00000E00000E00000E00000E00000E00000E00000F00000FC00007FE0003FE00007E013 247E9F18>III E /Fn 13 119 df<1C3C3C3C3C0C1C18383070E0C0C0060E7D840E>44 D<70F8F8F0F005057B840E>46 D<00F9C003FFC0078FC00F0F801E07803C07803C078078070078 0700780F00F80F00F00E00F00E00F01E30F01E70F03C60707C6078FEE03FCFC01F078014147C93 17>97 D<07803F803F80070007000F000F000E000E001E001E001C001CF83FFC3F1E3E0E3C0F78 0F780F700F700F701FF01FE01EE01EE03CE03CE07870F071E03FC01F0010207B9F15>I<007E00 03FF0007C3800F07801E07803C07803C0200780000780000780000F80000F00000F00000F00000 F000007003007807003C3E003FFC000FE00011147C9315>I<00FE0003FF0007C7800F03801E01 803C03807C0700781F007FFC007FF000F80000F00000F00000F000007000007003007807003C3E 001FFC000FE00011147C9315>101 D<003E7000FFF001E3F003C3E00781E00F01E00F01E01E01 C01E01C01E03C03E03C03C03803C03803C07803C07803C0F001C1F001E3F000FFF0007CE00000E 00001E00001E00001C00703C00787800F8F0007FE0003F8000141D7E9315>103 D<00F000F000F000E000000000000000000000000000000F001F803BC071C063C06380E3800780 07000F000F000E001E001C701C603C6038E03DC01F800F000C1F7D9E0E>105 D<1E1F003F3F8077F1C063E3C063C3C0E783C0C783800700000700000F00000F00000E00000E00 001E00001E00001C00001C00003C00003C000038000012147D9313>114 D<01FC03FE07870F0F0E0F0E0F1E000F800FF80FFC03FC007E001E701EF01CF01CF03CF0787FF0 1FC010147D9313>I<01C003C003C003800380078007800700FFF0FFF00F000E000E001E001E00 
1C001C003C003C00380038007870786070E070C079C03F801F000C1C7C9B0F>I<0F00701F80F0 3BC0F071C0E063C0E06381E0E381E00781C00701C00703C00F03C00E03800E03800E078C0E079C 0E0F180E0F180F3FB807FBF003E1E016147D9318>I<0F01C01F83E03BC3E071C3E063C1E06380 E0E380C00780C00700C00701C00F01800E01800E01800E03000E03000E07000E0E000F1C0007F8 0003F00013147D9315>I E /Fo 46 123 df<001E003C007800F001F001E003C007C007800F80 0F001F001F001E003E003E003E007C007C007C007C007C007800F800F800F800F800F800F800F8 00F800F800F800F800F800F800F800F80078007C007C007C007C007C003E003E003E001E001F00 1F000F000F80078007C003C001E001F000F00078003C001E0F3D7CAC17>40 DI44 DII<007F000001FFC00007FFF0000FFF F8000FC1F8001F007C003F007E003E003E003C001E007C001F007C001F007C001F0078000F00F8 000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80 F8000F80F8000F80F8000F80F8000F80F8000F8078000F007C001F007C001F007C001F003E003E 003E003E003F007E001F80FC000FC1F8000FFFF80007FFF00001FFC000007F000019297EA71E> 48 D<00180000380000F80007F800FFF800FFF800F8F80000F80000F80000F80000F80000F800 00F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F800 00F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F8007FFFF0 7FFFF07FFFF014287BA71E>I<00FE0003FFC007FFE00FFFF01F03F83C00FC38007E78003E7000 3EF0001FF0001F60001F20001F00001F00001F00001F00003E00003E00007C00007C0000F80001 F00001E00003C0000780000F00001E00003C0000780000F00001E00003C0000780000F00001E00 003C00007FFFFF7FFFFF7FFFFF7FFFFF18287EA71E>I<007F000001FFC00007FFF0000FFFF800 1FC1F8003E007C003C003E0078003E0038003E0010003E0000003E0000003E0000003C0000007C 000000FC000001F8000007F00000FFE00000FFC00000FFE00000FFF0000001FC0000007C000000 3E0000001F0000001F0000000F8000000F8000000F8000000F8000000F8040000F8060001F00F0 001F00F8003F007E007E003F81FC001FFFF8000FFFF00003FFE000007F000019297EA71E>I<00 03F0000007F0000005F000000DF000000DF000001DF0000039F0000039F0000079F0000079F000 00F1F00000F1F00001E1F00003E1F00003E1F00007C1F00007C1F0000F81F0000F81F0001F01F0 001F01F0003E01F0007C01F0007C01F000F801F000FFFFFF80FFFFFF80FFFFFF80FFFFFF800001 F0000001F0000001F0000001F0000001F0000001F0000001F0000001F0000001F0000001F00019 277EA61E>I<3FFFFC3FFFFC3FFFFC3FFFFC3E00003E00003E00003E00003E00003E00003E0000 3E00003E00003E00003E3F003EFFC03FFFE03FFFF03FE1F83F807C3F003E3E003E00003E00001F 00001F00001F00001F00001F00001F00001F20001F60003E70003EF8007C7C00FC3F03F81FFFF0 0FFFE007FF8000FE0018287EA61E>I<000FF000003FFC0000FFFC0001FFFC0003F80C0007E000 000FC000000F8000001F0000001E0000003E0000003C0000007C0000007C0000007C3FE000F8FF F000F9FFF800FBFFFC00FF807E00FF003E00FE003F00FC001F00FC001F00FC000F80F8000F80F8 000F80F8000F80F8000F8078000F807C000F807C000F807C000F003E001F003E001F001F003E00 1F807C000FC1FC0007FFF80003FFF00001FFC000007F000019297EA71E>II<00 7F000001FFC00007FFF0000FFFF8001FC1FC003F007E003E003E007E003F007C001F007C001F00 7C001F007C001F007C001F003E003E003E003E001F007C000FC1F80007FFF00003FFE00003FFE0 000FFFF8001FC1FC003F007E003E003E007C001F007C001F00F8000F80F8000F80F8000F80F800 0F80F8000F80F8000F807C001F007C001F007E003F003F007E001FC1FC000FFFF80007FFF00003 FFE000007F000019297EA71E>I<007F000001FFC00003FFE0000FFFF0000FC1F8001F007C003E 007C007C003E007C001E007C001F00F8001F00F8001F00F8000F00F8000F80F8000F80F8000F80 F8000F80F8001F807C001F807C001F807E003F803E007F803F00FF801FFFEF800FFFCF8007FF8F 8003FE1F0000001F0000001F0000001E0000003E0000003E0000007C0000007C000000F8001801 F0001E07E0003FFFC0001FFF80000FFE000003F8000019297EA71E>I<0001FF00000FFFE0003F FFF8007FFFF801FE01F803F8003007E0001007C000000F8000001F8000001F0000003E0000003E 
0000007C0000007C0000007C0000007C000000F8000000F8000000F8000000F8000000F8000000 F8000000F8000000F8000000F80000007C0000007C0000007C0000007C0000003E0000003E0000 001F0000001F8000000F80000007C0000007E0000403F8001C01FE00FC007FFFFC003FFFF8000F FFE00001FF001E2B7CA926>67 D69 DI<0001FF00000FFFE0003FFFFC007FFFFE01FE01FE03F8003E07F0000C07C000000F 8000001F8000001F0000003E0000003E0000007C0000007C0000007C0000007C000000F8000000 F8000000F8000000F8000000F8000000F8000000F8001FFEF8001FFEF8001FFE7C001FFE7C0000 3E7C00003E7C00003E3E00003E3E00003E1F00003E1F80003E0F80003E07C0003E07F0003E03F8 003E01FE00FE007FFFFE003FFFFC000FFFE00001FF001F2B7CA928>I73 D76 DI<0001FC0000 000FFF8000003FFFE00000FFFFF80001FE03FC0003F800FE0007E0003F000FC0001F800F80000F 801F000007C01F000007C03E000003E03E000003E07C000001F07C000001F07C000001F0780000 00F0F8000000F8F8000000F8F8000000F8F8000000F8F8000000F8F8000000F8F8000000F8F800 0000F8F8000000F8FC000001F87C000001F07C000001F07E000003F03E000003E03E000003E01F 000007C01F80000FC00F80000F800FC0001F8007F0007F0003F800FE0001FE03FC0000FFFFF800 003FFFE000000FFF80000001FC0000252B7DA92C>79 D<007FC00001FFF80007FFFE000FFFFF00 1FC07F003F000F007E0006007C000000F8000000F8000000F8000000F8000000F8000000FC0000 007C0000007E0000007F0000003FE000001FFE00000FFFC00007FFF00001FFF800003FFC000003 FE0000007F0000001F8000001F8000000FC0000007C0000007C0000007C0000007C0000007C000 0007C000000F8060000F80F0001F00FC003F00FF80FE007FFFFC001FFFF80007FFE00000FF8000 1A2B7DA921>83 D85 D<00FE0007FF801FFFC03FFF E03E03F03801F02001F80000F80000F80000F80000F80000F8007FF807FFF81FFFF83FE0F87F00 F8FC00F8F800F8F800F8F800F8FC01F87E07F87FFFF83FFFF81FFCF80FE0F8151B7E9A1D>97 D<007F8001FFE007FFF80FFFFC1FC07C1F001C3E00087C00007C00007C0000F80000F80000F800 00F80000F80000F80000F800007C00007C00007E00003E00001F000C1FC07C0FFFFC07FFFC01FF F0007F80161B7E9A1B>99 D<00003E00003E00003E00003E00003E00003E00003E00003E00003E 00003E00003E00003E00003E00003E00FC3E03FF3E07FFFE0FFFFE1FC1FE3F007E3E003E7C003E 7C003EFC003EF8003EF8003EF8003EF8003EF8003EF8003EF8003EFC003E7C003E7C003E3E007E 3F00FE1FC1FE0FFFFE07FFBE03FF3E00FC3E17297EA81F>I<007E0003FF8007FFC00FFFE01F83 F03F00F03E00787C00787C003878003CFFFFFCFFFFFCFFFFFCFFFFFCF80000F80000F800007800 007C00007C00003E00003F000C1FC07C0FFFFC07FFFC01FFF0007F80161B7E9A1B>I<001FC000 7FC000FFC001FFC003E00003C00007C00007C00007C00007C00007C00007C00007C00007C000FF FE00FFFE00FFFE0007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007 C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007 C00012297FA812>I<00F8078003FE7FC00FFFFFC01FFFFFC01F07C0003E03E0003E03E0007C01 F0007C01F0007C01F0007C01F0007C01F0007C01F0003E03E0003E03E0001F07C0001FFFC0003F FF80003BFE000038F8000078000000780000003C0000003FFFC0003FFFF8001FFFFC001FFFFE00 3FFFFF007C007F00F8001F80F8000F80F8000F80F8000F80FC001F807E003F003F80FE003FFFFE 000FFFF80007FFF00000FF80001A287E9A1E>III108 D II<007F000001FFC00007FFF0000FFFF8001FC1FC003F007E003E003E00 7C001F007C001F0078000F00F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F 807C001F007C001F007E003F003E003E003F007E001FC1FC000FFFF80007FFF00001FFC000007F 0000191B7E9A1E>II114 D<03FC001FFF803FFFC07FFFC07C07C0F80080F80000F80000F80000FC00007F80007FF8003FFE 001FFF0007FF8000FFC0000FE00007E00003E00003E04003E0E007E0FC0FC0FFFFC07FFF801FFE 0003F800131B7E9A17>I<07C00007C00007C00007C00007C00007C00007C000FFFFC0FFFFC0FF FFC007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007 C00007C00007C00007C00007C00007C00007C04007E1C003FFE003FFE001FF8000FC0013227FA1 
16>III<7C000FC03E001F803F001F001F803E000F807C0007C0FC0003E0F800 01F1F00001FBE00000FFC000007FC000003F8000001F0000001F0000003F8000007FC00000FBC0 0000F3E00001F1F00003E0F80007C07C000F807C000F803E001F001F003E000F807E000FC0FC00 07E01B1B809A1C>120 DII E /Fp 8 117 df<00000007C0000000000FC0000000000FC0000000001FC0000000003FC0000000007F C000000000FFC000000000FFC000000001FFC000000003FFC000000007FFC00000000FFFC00000 000FFFC00000001EFFC00000003CFFC00000007CFFC0000000F8FFC0000000F0FFC0000001E0FF C0000003C0FFC0000007C0FFC000000F80FFC000000F00FFC000001E00FFC000003C00FFC00000 7C00FFC00000F800FFC00000F000FFC00001E000FFC00003C000FFC00007C000FFC0000F8000FF C0000F0000FFC0001E0000FFC0003C0000FFC0007C0000FFC000F80000FFC000FFFFFFFFFFC0FF FFFFFFFFC0FFFFFFFFFFC0FFFFFFFFFFC0000001FFC000000001FFC000000001FFC000000001FF C000000001FFC000000001FFC000000001FFC000000001FFC000000001FFC000000001FFC00000 07FFFFFFC00007FFFFFFC00007FFFFFFC00007FFFFFFC02A377DB631>52 D<0003FF800380003FFFF8078000FFFFFE0F8003FFFFFF9F8007FE00FFFF800FF8001FFF801FE0 0003FF801FC00001FF803F800000FF807F8000007F807F0000003F807F0000001F80FF0000001F 80FF0000000F80FF0000000F80FF8000000F80FF8000000780FFC000000780FFE000000780FFF8 00000000FFFF000000007FFFF00000007FFFFF8000007FFFFFFC00003FFFFFFF80001FFFFFFFE0 001FFFFFFFF0000FFFFFFFFC0007FFFFFFFE0003FFFFFFFF0000FFFFFFFF80003FFFFFFF80000F FFFFFFC00000FFFFFFC0000007FFFFE00000007FFFE000000007FFE000000001FFF0000000007F F0000000003FF0F00000003FF0F00000001FF0F00000001FF0F00000000FF0F00000000FF0F800 00000FF0F80000000FE0FC0000000FE0FC0000001FE0FE0000001FC0FF0000003FC0FFC000003F 80FFF000007F00FFFC0001FE00FFFFC00FFC00FCFFFFFFF800F83FFFFFF000F007FFFFC000E000 7FFC00002C3B7BBA37>83 D<0001FFF800000FFFFF00003FFFFFC000FFC03FE001FF003FF007FE 007FF00FFC007FF00FF8007FF01FF8007FF03FF0007FF03FF0003FE07FF0001FC07FE0000F807F E0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE0000000 FFE0000000FFE0000000FFE00000007FE00000007FF00000007FF00000003FF00000003FF80000 781FF80000780FFC0000F80FFE0001F007FF0003E003FF8007C000FFF03F80003FFFFF00000FFF FC000001FFE00025267DA52C>99 D<0001FFC000001FFFF800007FFFFE0001FFC1FF8003FF007F C007FE003FE00FFC001FF01FF8000FF01FF8000FF83FF00007F87FF00007F87FF00007FC7FE000 07FC7FE00003FCFFE00003FCFFE00003FCFFFFFFFFFCFFFFFFFFFCFFFFFFFFFCFFE0000000FFE0 000000FFE0000000FFE0000000FFE00000007FE00000007FE00000007FF00000003FF000003C3F F000003C1FF800007C1FF80000FC0FFC0001F807FE0003F003FF0007E000FFE03FC0003FFFFF80 000FFFFC000000FFE00026267DA52D>101 D<01F80007FC000FFF001FFF001FFF801FFF801FFF 801FFF801FFF801FFF801FFF000FFF0007FC0001F8000000000000000000000000000000000000 0000000000000000000000FF00FFFF00FFFF00FFFF00FFFF0007FF0003FF0003FF0003FF0003FF 0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF 0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF00FFFFF8FFFF F8FFFFF8FFFFF8153D7DBC1B>105 D<00FE007FE000FFFE03FFF800FFFE0FFFFE00FFFE1F83FF 00FFFE3E01FF0007FE7801FF8003FEF000FF8003FFE000FFC003FFC000FFC003FFC000FFC003FF 8000FFC003FF8000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FF C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF 0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FF C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC0FFFFFC3FFFFFFFFFFC3FFFFFFFFF FC3FFFFFFFFFFC3FFFFF30267CA537>110 D<0000FFC00000000FFFFC0000003FFFFF000000FF C0FFC00001FE001FE00007FC000FF80007F80007F8000FF00003FC001FF00003FE003FF00003FF 003FE00001FF007FE00001FF807FE00001FF807FE00001FF807FE00001FF80FFE00001FFC0FFE0 
0001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FF C0FFE00001FFC07FE00001FF807FE00001FF807FE00001FF803FF00003FF003FF00003FF001FF0 0003FE000FF80007FC000FF80007FC0007FC000FF80003FE001FF00000FFC0FFC000003FFFFF00 00000FFFFC00000001FFE000002A267DA531>I<00078000000780000007800000078000000780 00000F8000000F8000000F8000000F8000001F8000001F8000003F8000003F8000007F800000FF 800001FF800007FF80001FFFFFF0FFFFFFF0FFFFFFF0FFFFFFF001FF800001FF800001FF800001 FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF8000 01FF800001FF800001FF800001FF800001FF800001FF800001FF803C01FF803C01FF803C01FF80 3C01FF803C01FF803C01FF803C01FF803C00FF807800FFC078007FC0F8007FF1F0003FFFE0000F FFC00001FF001E377EB626>116 D E /Fq 1 98 df<001800001800001800003C00003C00004E 00004E00004E000087000087000187800103800103800201C00201C003FFC00400E00400E00800 700800701800703C0078FE01FF18177F961C>97 D E /Fr 10 58 df<1F003F8060C04040C060 C060C060C060C060C060C060C06060C060C03F801F000B107F8F0F>48 D<18007800F800980018 00180018001800180018001800180018001800FF80FF8009107E8F0F>I<3F00FFC0F3E0F0E0F0 E000E000E001E001C007800F001C0038607060FFC0FFC00B107F8F0F>I<1F007F8071C079C071 C003C00F800F8001C000E060E0F0E0F0E0F1C07FC03F000B107F8F0F>I<070007000F001F001B 003B0033006300E300FFE0FFE00300030003001FE01FE00B107F8F0F>I<61807F807F007C0060 0060006F807FC079E070E000E0E0E0E0E0E1C0FFC03F000B107F8F0F>I<0F801FC039C071C060 00C200FFC0FFC0E0E0C060C060C06060E071C03FC01F000B107F8F0F>I<60007FE07FE0C0E0C1 C00380070006000E000E000C001C001C001C001C001C001C000B117E900F>I<1F003F8071C060 C070C07DC03F803F807FC0E3E0C0E0C060C060F1E07FC01F000B107F8F0F>I<1F007F8071C0E0 C0C060C060C060E0E07FE07FE0086000C071C073807F803E000B107F8F0F>I E /Fs 1 59 df<70F8F8F87005057C840D>58 D E /Ft 84 125 df<001FC3F0007FEFF801F0FE 7803C0FC780780F87807007000070070000700700007007000070070000700700007007000FFFF FF80FFFFFF80070070000700700007007000070070000700700007007000070070000700700007 007000070070000700700007007000070070000700700007007000070070007FE3FF007FE3FF00 1D20809F1B>11 D<001F8000FFC001F1E003C1E00781E00701E007000007000007000007000007 0000070000FFFFE0FFFFE00700E00700E00700E00700E00700E00700E00700E00700E00700E007 00E00700E00700E00700E00700E00700E00700E07FC3FE7FC3FE1720809F19>I<001FE000FFE0 01F1E003C1E00781E00700E00700E00700E00700E00700E00700E00700E0FFFFE0FFFFE00700E0 0700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E0 0700E00700E07FE7FE7FE7FE1720809F19>I<001FC1FC00007FE7FE0001F0FF0F0003C0FC0F00 0780F80F000700F00F000700700000070070000007007000000700700000070070000007007000 00FFFFFFFF00FFFFFFFF0007007007000700700700070070070007007007000700700700070070 070007007007000700700700070070070007007007000700700700070070070007007007000700 700700070070070007007007007FE3FE3FF07FE3FE3FF02420809F26>I<7038F87CFC7EFC7E7C 3E0C060C060C061C0E180C381C7038F07860300F0E7E9F17>34 D<000300C0000300C0000300C0 000701C000060180000601800006018000060180000E0380000C0300000C0300000C0300000C03 00001C070000180600FFFFFFFEFFFFFFFE00300C0000300C0000300C0000701C00006018000060 18000060180000601800FFFFFFFEFFFFFFFE01C070000180600001806000018060000180600003 80E0000300C0000300C0000300C0000300C0000701C0000601800006018000060180001F297D9F 26>I<70F8FCFC7C0C0C0C1C183870F060060E7C9F0D>39 D<00E001C003C0038007000E000E00 1C001C003800380030007000700070006000E000E000E000E000E000E000E000E000E000E000E0 00E000E000E00060007000700070003000380038001C001C000E000E000700038003C001C000E0 0B2E7DA112>II<0006000000060000000600 
000006000000060000000600000006000000060000000600000006000000060000000600000006 00000006000000060000FFFFFFF0FFFFFFF0000600000006000000060000000600000006000000 060000000600000006000000060000000600000006000000060000000600000006000000060000 1C207D9A23>43 D<70F8FCFC7C0C0C0C1C183870F060060E7C840D>II<70F8F8F87005057C840D>I<00030003000700060006000E000C000C001C00180018003800 30003000700060006000E000C000C001C00180018001800380030003000700060006000E000C00 0C001C0018001800380030003000700060006000E000C000C000102D7DA117>I<03F0000FFC00 1E1E001C0E00380700780780700380700380700380F003C0F003C0F003C0F003C0F003C0F003C0 F003C0F003C0F003C0F003C0F003C0F003C0F003C07003807003807003807807803807001C0E00 1E1E000FFC0003F000121F7E9D17>I<018003801F80FF80E38003800380038003800380038003 800380038003800380038003800380038003800380038003800380038003800380FFFEFFFE0F1E 7C9D17>I<07F0001FFC003C3E00701F00600F80E00780F807C0F807C0F803C0F803C07007C000 07C00007C0000F80000F00001F00003E00003C0000780000F00001E0000380000700000E00C01C 00C03800C0700180FFFF80FFFF80FFFF80121E7E9D17>I<07F0001FFC003C3F00380F00780F80 780F807C0780780F80000F80000F80001F00001E00007E0003F80003F000003C00001F00000F80 000F800007C00007C07007C0F807C0F807C0F807C0F80F80E00F80701F003C3E001FFC0007F000 121F7E9D17>I<000E00000E00001E00003E00003E00006E0000EE0000CE00018E00038E00030E 00060E000E0E000C0E00180E00380E00300E00600E00E00E00FFFFF0FFFFF0000E00000E00000E 00000E00000E00000E00000E0000FFE000FFE0141E7F9D17>I<3807003FFF003FFE003FFC003F F00030000030000030000030000030000030000031F80037FC003F1E003C0F0038078038078000 03C00003C00003C00003C07003C0F003C0F003C0F003C0E00780600780700F003C3E001FFC0007 F000121F7E9D17>I<00FE0003FF0007C7800F07801E07803C07803C0780780000780000700000 F06000F3FC00F7FE00FE0F00FC0780F80780F80380F003C0F003C0F003C0F003C0F003C07003C0 7003C07803C07807803807803C0F001E1E000FFC0003F000121F7E9D17>I<6000007FFFC07FFF C07FFFC0600380C00300C00700C00E00000C00001C0000380000300000700000E00000E00000C0 0001C00001C00003C0000380000380000380000380000780000780000780000780000780000780 00078000078000121F7D9D17>I<03F0000FFC001E1E003C0F0038078070038070038070038078 03807C07807E07003F8E001FFC000FF80007F8000FFE001EFF003C3F80781F807007C0F003C0E0 03C0E001C0E001C0E001C0F003C07003807807003E1F001FFC0007F000121F7E9D17>I<03F000 0FFC001E1E003C0F00780700780780F00780F00380F00380F003C0F003C0F003C0F003C0F003C0 7007C07807C0780FC03C1FC01FFBC00FF3C00183C0000380000780000780780700780F00781E00 783C007878003FF0000FC000121F7E9D17>I<70F8F8F8700000000000000000000070F8F8F870 05147C930D>I<70F8F8F8700000000000000000000070F8F8F87818181838303070E040051D7C 930D>I<70F8F8F870000000000070707070707070707070707070F8F8F8F8F8F8F8F8F8700521 7C960D>II<07000F800F800F8007000000000000000000 000003000300070006000600060006000E000E001C003C007C007800F000F03CF03CF03CF01CF0 3878783FF01FC00E207D9615>I<0003800000038000000380000007C0000007C0000007C00000 0DE000000DE000000DE0000018F0000018F0000018F00000307800003078000030780000603C00 00603C0000603C0000E01E0000C01E0000FFFE0001FFFF0001800F0001800F0003800F80030007 8003000780070007C0070003C00F8003C0FFE03FFEFFE03FFE1F207F9F22>65 DI<001FC0C000FFF0C001F83DC007E00FC00F8007C00F0007C0 1F0003C03E0001C03C0001C07C0001C07C0000C07C0000C0F80000C0F8000000F8000000F80000 00F8000000F8000000F8000000F8000000F80000007C0000C07C0000C07C0000C03C0001C03E00 01801F0001800F0003800F80070007E00E0001F83C0000FFF800001FE0001A217D9F21>III I<001FE060007FF86001F83CE003E00FE007C007E00F8003E01F0001E03E0000E03E0000E07C00 00E07C0000607C000060F8000060F8000000F8000000F8000000F8000000F8000000F8000000F8 
007FFCF8007FFC7C0001E07C0001E07C0001E03E0001E03E0001E01F0001E00F8001E007C003E0 03E007E001FC1FE0007FFC60001FF0001E217D9F24>III<0FFFC00FFFC0003C00003C 00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C 00003C00003C00003C00003C00003C00003C00703C00F83C00F83C00F83C00F87C00E0780078F0 003FE0000FC00012207E9E17>IIIII<001F800000FFF000 01E0780007C03E000F801F000F000F001E0007803C0003C03C0003C07C0003E07C0003E0780001 E0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F07800 01E07C0003E07C0003E03C0003C03E0007C01E0007800F000F000F801F0007C03E0001F0F80000 FFF000001F80001C217D9F23>II<001F800000FFF00001E078 0007C03E000F801F000F000F001E0007803E0007C03C0003C07C0003E07C0003E0780001E0F800 01F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0F80001F0780001E07C 0003E07C0003E03C0003C03E0F07C01E1F87800F38CF000FB05F0007F07E0001F8780000FFF010 001FB010000030100000383000003C7000003FF000001FE000001FE000001FC000000F801C297D 9F23>II<07E1801FF9803C3F80780F80700780E003 80E00380E00180E00180E00180F00000F800007C00007F80003FF8003FFE000FFF0003FF00003F 80000F800003C00003C00001C0C001C0C001C0C001C0E001C0E00380F00380F80780FE0F00CFFE 00C3F80012217D9F19>I<7FFFFFE07FFFFFE0780F01E0700F00E0600F0060600F0060E00F0070 C00F0030C00F0030C00F0030C00F0030000F0000000F0000000F0000000F0000000F0000000F00 00000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F 0000000F0000000F000007FFFE0007FFFE001C1F7E9E21>IIII<7FF83FF87FF83FF807E00F8003 C00F0001E00E0001F00C0000F0180000783800007C3000003C7000003E6000001EC000000FC000 000F8000000780000007C0000007E000000DE000001DF0000018F8000038780000307C0000603C 0000E01E0000C01F0001800F0003800780078007C00FC007E0FFE01FFEFFE01FFE1F1F7F9E22> I91 D<180C3C1E381C70386030E070C060C060C060F87CFC7EFC7E 7C3E381C0F0E7B9F17>II<3FE0007FF800787C00781E00781E00 000E00000E00000E0007FE001FFE003F0E007C0E00F80E00F00E30F00E30F01E30F81E307C7F70 3FFFE01FC78014147E9317>97 D<0E0000FE0000FE00000E00000E00000E00000E00000E00000E 00000E00000E00000E00000E3F000EFFC00FC3E00F81E00F00F00E00F00E00780E00780E00780E 00780E00780E00780E00780E00780E00F00F00F00F81E00FC3C00CFF800C7F0015207F9F19>I< 03FC0FFE1E1E3C1E781E7800F000F000F000F000F000F000F000F80078007C033E071F0E0FFC03 F010147E9314>I<000380003F80003F8000038000038000038000038000038000038000038000 038000038007F3800FFF801E1F803C0780780780780380F00380F00380F00380F00380F00380F0 0380F00380F003807803807807803C0F803E1F801FFBF807E3F815207E9F19>I<03F0000FFC00 1E1E003C0F00780700780780F00780F00380FFFF80FFFF80F00000F00000F00000F80000780000 7C01803E03801F87000FFE0003F80011147F9314>I<007E00FF01EF038F078F07000700070007 00070007000700FFF0FFF007000700070007000700070007000700070007000700070007000700 070007007FF07FF01020809F0E>I<0001E007F7F00FFF703E3E703C1E00780F00780F00780F00 780F00780F00780F003C1E003E3E003FF80037F0003000003000003800003FFE001FFFC03FFFE0 7803E0F000F0E00070E00070E00070F000F07801E03E07C01FFF8003FC00141F7F9417>I<0E00 00FE0000FE00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E7F000EFF 800FC7C00F83C00F01C00F01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01 C00E01C00E01C00E01C0FFE7FCFFE7FC16207F9F19>I<1E003E003E003E001E00000000000000 0000000000000E007E007E000E000E000E000E000E000E000E000E000E000E000E000E000E000E 000E00FFC0FFC00A1F809E0C>I<00E001F001F001F000E0000000000000000000000000007007 F007F000F000700070007000700070007000700070007000700070007000700070007000700070 007000700070F0F0F0E0F1E0FFC03F000C28829E0E>I<0E0000FE0000FE00000E00000E00000E 00000E00000E00000E00000E00000E00000E00000E1FF00E1FF00E0F800E0F000E1E000E3C000E 
78000EF0000FF0000FF8000FBC000F3C000E1E000E0F000E0F000E07800E07800E03C0FFCFF8FF CFF815207F9F18>I<0E00FE00FE000E000E000E000E000E000E000E000E000E000E000E000E00 0E000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FFE0FFE00B20809F0C >I<0E3F03F000FEFFCFFC00FFC3DC3C000F81F81E000F00F00E000F00F00E000E00E00E000E00 E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E 00E00E000E00E00E000E00E00E00FFE7FE7FE0FFE7FE7FE023147F9326>I<0E7F00FEFF80FFC7 C00F83C00F01C00F01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01 C00E01C00E01C0FFE7FCFFE7FC16147F9319>I<01F80007FE001E07803C03C03801C07000E070 00E0F000F0F000F0F000F0F000F0F000F0F000F07000E07801E03801C03C03C01E078007FE0001 F80014147F9317>I<0E3F00FEFFC0FFC3E00F81E00F01F00E00F00E00F80E00780E00780E0078 0E00780E00780E00780E00F80E00F00F01F00F81E00FC7C00EFF800E7F000E00000E00000E0000 0E00000E00000E00000E0000FFE000FFE000151D7F9319>I<07F1800FF9801F1F803C0F807C07 80780380F80380F00380F00380F00380F00380F00380F00380F803807807807C07803C0F803E1F 801FFB8007E380000380000380000380000380000380000380000380003FF8003FF8151D7E9318 >I<0E7CFFFEFFDE0F9E0F1E0F000E000E000E000E000E000E000E000E000E000E000E000E00FF E0FFE00F147F9312>I<1FB07FF078F0E070E030E030F000FC007FC03FE00FF000F8C078C038E0 38E038F078F8F0FFE0CFC00D147E9312>I<06000600060006000E000E001E003E00FFF8FFF80E 000E000E000E000E000E000E000E000E000E000E180E180E180E180E180F3007F003E00D1C7F9B 12>I<0E01C0FE1FC0FE1FC00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0 0E01C00E01C00E01C00E03C00E07C00F0FC007FDFC03F9FC16147F9319>III<7FC7FC7FC7FC0F03E007 038003830001C70000EE0000EC00007800003800003C00007C0000EE0001C70001870003038007 01C01F01E0FF87FEFF87FE1714809318>II<3FFF3FFF38 1F301E703C6078607860F001E003C003C007830F031E031E073C067806F81EFFFEFFFE10147F93 14>III E /Fu 28 121 df<000FF07F00007FFFFF8001FC3FEFC003F07F8FC007E07F8FC007C07F0FC007 C07F078007C03F000007C01F000007C01F000007C01F000007C01F0000FFFFFFF800FFFFFFF800 07C01F000007C01F000007C01F000007C01F000007C01F000007C01F000007C01F000007C01F00 0007C01F000007C01F000007C01F000007C01F000007C01F000007C01F000007C01F000007C01F 00003FF8FFF0003FF8FFF0002220809F1F>11 D<7CFEFEFFFFFF7F030707060E1C3C787008107C 860F>44 DI<00700000F0000FF000FFF000F3F00003F00003 F00003F00003F00003F00003F00003F00003F00003F00003F00003F00003F00003F00003F00003 F00003F00003F00003F00003F00003F00003F00003F000FFFF80FFFF80111D7C9C1A>49 D<0001C00003C00007C00007C0000FC0001FC0003FC00077C00067C000C7C00187C00387C00707 C00E07C00C07C01807C03807C07007C0E007C0FFFFFEFFFFFE000FC0000FC0000FC0000FC0000F C0000FC001FFFE01FFFE171D7F9C1A>52 D<0000E000000000E000000001F000000001F0000000 01F000000003F800000003F800000007FC00000007FC0000000FFE0000000CFE0000000CFE0000 00187F000000187F000000307F800000303F800000703FC00000601FC00000601FC00000C01FE0 0000C00FE00001FFFFF00001FFFFF000018007F000030003F800030003F800060003FC00060001 FC000E0001FE00FFE01FFFE0FFE01FFFE0231F7E9E28>65 D<000FFC06007FFF8E01FE03DE03F8 00FE0FE0007E1FC0003E1F80001E3F80000E7F00000E7F00000E7F000006FE000006FE000000FE 000000FE000000FE000000FE000000FE000000FE000000FE0000007F0000067F0000067F000006 3F80000E1F80000C1FC0001C0FE0003803F800F001FF03E0007FFF80000FFC001F1F7D9E26>67 D<000FFC0600007FFF8E0001FE03DE0003F800FE000FE0007E001FC0003E001F80001E003F8000 0E007F00000E007F00000E007F00000600FE00000600FE00000000FE00000000FE00000000FE00 000000FE00000000FE007FFFE0FE007FFFE0FE0000FE007F0000FE007F0000FE007F0000FE003F 8000FE001F8000FE001FC000FE000FE000FE0003F801FE0001FF07FE00007FFF3E00000FFC0600 231F7D9E29>71 D80 D<001FF80000FFFF0001F81F8007E007E0 
0FC003F01F8001F81F8001F83F0000FC7F0000FE7F0000FE7E00007EFE00007FFE00007FFE0000 7FFE00007FFE00007FFE00007FFE00007FFE00007FFE00007F7E00007E7E00007E7F0000FE3F00 00FC3F87C1FC1F8FF1F80FDC3BF007F83FE001FC1F8000FFFF00001FFE0300000F0300000F8700 000FFF00000FFF000007FE000007FE000003FC000003FC000001F020287D9E27>I<07FC001FFF 803F0FC03F07C03F03E03F03E01E03E00003E001FFE00FFFE03FC3E07E03E0FC03E0F803E0F803 E0F803E0FC07E07E1FE03FFDFE0FF0FE17147F9319>97 DI<03FE000FFF801F8FC03F0FC07E0FC07C0FC0FC0780FC0000FC0000FC00 00FC0000FC0000FC0000FC00007E00007E00603F00E01FC3C00FFF8003FE0013147E9317>I<00 07F80007F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F803FCF80F FFF81F87F83F01F87E00F87C00F8FC00F8FC00F8FC00F8FC00F8FC00F8FC00F8FC00F8FC00F87C 00F87E01F83E03F81F87F80FFEFF03F8FF18207E9F1D>I<01FF0007FFC01F87E03F01F07E00F0 7E00F8FC00F8FC00F8FFFFF8FFFFF8FC0000FC0000FC00007C00007E00003E00183F00381FC0F0 07FFE001FF8015147F9318>I<03FE7C0FFFFE1F8FDE1F07DE3E03E03E03E03E03E03E03E03E03 E01F07C01F8FC01FFF801FFE001800001C00001E00001FFFC01FFFF00FFFF83FFFFC7C00FEF800 3EF0001EF0001EF0001EF8003E7C007C3F01F81FFFF003FF80171E7F931A>103 D<1E003F007F007F007F003F001E00000000000000000000000000FF00FF001F001F001F001F00 1F001F001F001F001F001F001F001F001F001F001F001F00FFE0FFE00B217EA00E>105 D<007C00FE00FE00FE00FE00FE007C00000000000000000000000001FE01FE003E003E003E003E 003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E783EFC3EFC7EFC 7CFCF87FF01FC00F2A83A010>II< FE1FE07F80FE7FF9FFE01EF0FBC3E01FC0FF03F01F807E01F01F807E01F01F007C01F01F007C01 F01F007C01F01F007C01F01F007C01F01F007C01F01F007C01F01F007C01F01F007C01F01F007C 01F01F007C01F01F007C01F0FFE3FF8FFEFFE3FF8FFE27147D932C>109 DI<01FF0007FFC01F83F03E 00F83E00F87C007C7C007CFC007EFC007EFC007EFC007EFC007EFC007E7C007C7C007C3E00F83E 00F81F83F007FFC001FF0017147F931A>II114 D<0FF63FFE781EE00EE006F006F800 FFC07FF87FFC1FFE03FF003FC00FE007E007F00FFC1EFFFCCFF010147E9315>I<018001800180 03800380038007800F803F80FFFCFFFC0F800F800F800F800F800F800F800F800F800F800F860F 860F860F860F860FCC07FC01F80F1D7F9C14>II120 D E /Fv 17 121 df<07E0001FF8003FFC007FFE 00FFFE00FFFF00FFFF00FFFF80FFFF80FFFF80FFFF80FFFF807FFF803FFF801FFF8007E7800007 80000F80000F80000F00000F00001F00001F00003E00003E00007C00007C0000F80001F80003F0 0007E0000FE0001FC0001F80001F00000E00001124788F21>44 D<000000007FFE00001E000000 0FFFFFE0003E000000FFFFFFF8007E000003FFFFFFFE00FE00001FFFFFFFFF81FE00007FFFF801 FFE7FE0000FFFF80003FFFFE0003FFFC000007FFFE0007FFF0000003FFFE001FFFE0000001FFFE 003FFF800000007FFE007FFF000000003FFE00FFFE000000001FFE01FFFC000000001FFE01FFFC 000000000FFE03FFF80000000007FE07FFF00000000007FE07FFF00000000003FE0FFFE0000000 0003FE0FFFE00000000001FE1FFFC00000000001FE1FFFC00000000000FE3FFFC00000000000FE 3FFF800000000000FE3FFF800000000000FE7FFF8000000000007E7FFF8000000000007E7FFF80 00000000007E7FFF00000000000000FFFF00000000000000FFFF00000000000000FFFF00000000 000000FFFF00000000000000FFFF00000000000000FFFF00000000000000FFFF00000000000000 FFFF00000000000000FFFF00000000000000FFFF00000000000000FFFF00000000000000FFFF00 000000000000FFFF00000000000000FFFF00000000000000FFFF000000000000007FFF00000000 0000007FFF800000000000007FFF8000000000003E7FFF8000000000003E3FFF8000000000003E 3FFF8000000000003E3FFFC000000000003E1FFFC000000000007E1FFFC000000000007E0FFFE0 00000000007C0FFFE00000000000FC07FFF00000000000FC07FFF00000000001F803FFF8000000 0001F801FFFC0000000003F801FFFC0000000007F000FFFE000000000FE0007FFF000000000FE0 003FFFC00000003FC0001FFFE00000007F800007FFF0000000FF000003FFFC000003FE000000FF 
FF80000FF80000007FFFF800FFF00000001FFFFFFFFFC000000003FFFFFFFF0000000000FFFFFF FC00000000000FFFFFE00000000000007FFE000000474979C756>67 D<000000007FFE00001E00 0000000FFFFFE0003E00000000FFFFFFF8007E00000003FFFFFFFE00FE0000001FFFFFFFFF81FE 0000007FFFF801FFE7FE000000FFFF80003FFFFE000003FFFC000007FFFE000007FFF0000003FF FE00001FFFE0000001FFFE00003FFF800000007FFE00007FFF000000003FFE0000FFFE00000000 1FFE0001FFFC000000001FFE0001FFFC000000000FFE0003FFF80000000007FE0007FFF0000000 0007FE0007FFF00000000003FE000FFFE00000000003FE000FFFE00000000001FE001FFFC00000 000001FE001FFFC00000000000FE003FFFC00000000000FE003FFF800000000000FE003FFF8000 00000000FE007FFF8000000000007E007FFF8000000000007E007FFF8000000000007E007FFF00 00000000000000FFFF0000000000000000FFFF0000000000000000FFFF0000000000000000FFFF 0000000000000000FFFF0000000000000000FFFF0000000000000000FFFF0000000000000000FF FF0000000000000000FFFF0000000000000000FFFF0000000000000000FFFF0000000000000000 FFFF0000000000000000FFFF0000000000000000FFFF0000007FFFFFFFFEFFFF0000007FFFFFFF FE7FFF8000007FFFFFFFFE7FFF8000007FFFFFFFFE7FFF8000007FFFFFFFFE7FFF8000000000FF FE003FFF8000000000FFFE003FFF8000000000FFFE003FFFC000000000FFFE001FFFC000000000 FFFE001FFFC000000000FFFE000FFFE000000000FFFE000FFFE000000000FFFE0007FFF0000000 00FFFE0007FFF000000000FFFE0003FFF800000000FFFE0001FFFC00000000FFFE0001FFFE0000 0000FFFE0000FFFE00000000FFFE00007FFF80000000FFFE00003FFFC0000001FFFE00001FFFE0 000003FFFE000007FFF8000003FFFE000003FFFE00000FFFFE000000FFFFC0001FFFFE0000007F FFFC00FFEFFE0000001FFFFFFFFFC3FE00000003FFFFFFFF00FE00000000FFFFFFFC003E000000 000FFFFFF0000E00000000007FFF000000004F4979C75D>71 D<0007FFFE000000007FFFFFE000 0001FFFFFFF8000003FFFFFFFE000007FF001FFF80000FFF8007FFC0000FFF8003FFE0000FFF80 01FFF0000FFF8000FFF8000FFF80007FF8000FFF80007FFC000FFF80007FFC0007FF00003FFC00 03FE00003FFC0001FC00003FFC00000000003FFC00000000003FFC00000000003FFC0000000000 3FFC00000007FFFFFC000000FFFFFFFC00000FFFFFFFFC00003FFFF03FFC0001FFFE003FFC0003 FFF0003FFC000FFFC0003FFC001FFF80003FFC003FFE00003FFC007FFE00003FFC007FFC00003F FC00FFF800003FFC00FFF800003FFC00FFF000003FFC00FFF000003FFC00FFF000003FFC00FFF0 00007FFC00FFF800007FFC00FFF80000FFFC007FFC0001FFFC007FFE0003EFFC003FFF000FCFFF 001FFFC07F8FFFFC0FFFFFFF07FFFC03FFFFFC03FFFC007FFFF001FFFC0007FF80007FFC362E7D AD3A>97 D<00001FFFC0000001FFFFFC000007FFFFFF00001FFFFFFF80007FFC01FFC000FFF003 FFE003FFC003FFE007FF8003FFE00FFF8003FFE00FFF0003FFE01FFE0003FFE01FFE0003FFE03F FC0001FFC03FFC0000FF807FFC00007F007FFC000000007FF800000000FFF800000000FFF80000 0000FFF800000000FFF800000000FFF800000000FFF800000000FFF800000000FFF800000000FF F800000000FFF800000000FFF800000000FFF8000000007FFC000000007FFC000000007FFC0000 00003FFC000000003FFE000000F81FFE000000F81FFF000001F80FFF000003F80FFF800003F007 FFC00007E003FFE0000FE000FFF8003FC0007FFE01FF80001FFFFFFE000007FFFFF8000001FFFF E00000001FFF00002D2E7CAD35>99 D<00000000007FC00000000000FFFFC00000000000FFFFC0 0000000000FFFFC00000000000FFFFC00000000000FFFFC0000000000003FFC0000000000001FF C0000000000001FFC0000000000001FFC0000000000001FFC0000000000001FFC0000000000001 FFC0000000000001FFC0000000000001FFC0000000000001FFC0000000000001FFC00000000000 01FFC0000000000001FFC0000000000001FFC0000000000001FFC0000000000001FFC000000000 0001FFC0000000000001FFC0000000000001FFC0000000000001FFC00000001FFE01FFC0000001 FFFFC1FFC0000007FFFFF1FFC000001FFFFFFDFFC000007FFE03FFFFC00000FFF0007FFFC00003 FFE0003FFFC00007FF80001FFFC00007FF800007FFC0000FFF000007FFC0001FFE000003FFC000 1FFE000003FFC0003FFC000003FFC0003FFC000003FFC0007FFC000003FFC0007FFC000003FFC0 
007FF8000003FFC000FFF8000003FFC000FFF8000003FFC000FFF8000003FFC000FFF8000003FF C000FFF8000003FFC000FFF8000003FFC000FFF8000003FFC000FFF8000003FFC000FFF8000003 FFC000FFF8000003FFC000FFF8000003FFC000FFF8000003FFC0007FF8000003FFC0007FF80000 03FFC0007FFC000003FFC0003FFC000003FFC0003FFC000003FFC0003FFC000003FFC0001FFE00 0007FFC0000FFE00000FFFC0000FFF00001FFFC00007FF80003FFFC00003FFC0007FFFC00001FF E000FFFFE000007FFC07FFFFFF80003FFFFFF3FFFF80000FFFFFE3FFFF800003FFFF83FFFF8000 003FF803FFFF8039487CC742>I<00001FFE00000001FFFFE0000007FFFFF800001FFFFFFE0000 7FFC07FF0000FFE001FF8001FFC000FFC003FF80003FE007FF00003FF00FFE00001FF01FFE0000 0FF81FFC00000FF83FFC00000FFC3FFC000007FC7FFC000007FC7FF8000007FC7FF8000007FE7F F8000007FEFFF8000007FEFFF8000007FEFFFFFFFFFFFEFFFFFFFFFFFEFFFFFFFFFFFEFFFFFFFF FFFEFFF800000000FFF800000000FFF800000000FFF8000000007FF8000000007FF8000000007F FC000000003FFC000000003FFC000000003FFC0000003E1FFE0000003E0FFE0000007E0FFF0000 007E07FF800000FC03FFC00001F801FFE00007F800FFF0001FF0003FFE00FFE0001FFFFFFF8000 07FFFFFE000000FFFFF80000000FFF80002F2E7DAD36>I<00FC0003FF0007FF800FFFC00FFFC0 1FFFE01FFFE01FFFE01FFFE01FFFE01FFFE00FFFC00FFFC007FF8003FF0000FC00000000000000 000000000000000000000000000000000000000000000000000000007FC0FFFFC0FFFFC0FFFFC0 FFFFC0FFFFC003FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC0 01FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC0 01FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC001FFC0FFFFFFFFFFFF FFFFFFFFFFFFFFFFFF18497CC820>105 D<007FC001FFC00000FFE00000FFFFC00FFFF80007FF FC0000FFFFC03FFFFE001FFFFF0000FFFFC0FFFFFF807FFFFFC000FFFFC1FE07FFC0FF03FFE000 FFFFC7F003FFC3F801FFE00003FFCFC001FFE7E000FFF00001FFCF8001FFE7C000FFF00001FFDF 0001FFEF8000FFF00001FFFE0000FFFF00007FF80001FFFC0000FFFE00007FF80001FFF80000FF FC00007FF80001FFF80000FFFC00007FF80001FFF00000FFF800007FF80001FFF00000FFF80000 7FF80001FFF00000FFF800007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF800 01FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE0 0000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FF F000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF00000 7FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF800 01FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE0 0000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FF F000007FF80001FFE00000FFF000007FF80001FFE00000FFF000007FF80001FFE00000FFF00000 7FF800FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF0 FFFFFFC07FFFFFE03FFFFFF0FFFFFFC07FFFFFE03FFFFFF05C2E7CAD63>109 D<007FC001FFC00000FFFFC00FFFF80000FFFFC03FFFFE0000FFFFC0FFFFFF8000FFFFC1FE07FF C000FFFFC7F003FFC00003FFCFC001FFE00001FFCF8001FFE00001FFDF0001FFE00001FFFE0000 FFF00001FFFC0000FFF00001FFF80000FFF00001FFF80000FFF00001FFF00000FFF00001FFF000 00FFF00001FFF00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE0 0000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FF E00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001 FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF000 01FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF00001FFE00000FFF0 0001FFE00000FFF00001FFE00000FFF000FFFFFFC07FFFFFE0FFFFFFC07FFFFFE0FFFFFFC07FFF FFE0FFFFFFC07FFFFFE0FFFFFFC07FFFFFE03B2E7CAD42>I<00000FFF0000000000FFFFF00000 0007FFFFFE0000001FFFFFFF8000003FFC03FFC00000FFE0007FF00001FF80001FF80003FF0000 
                                 D R A F T

            Document for a Standard Message-Passing Interface

                     Message Passing Interface Forum

                             August 10, 1993

This work was supported by ARPA and NSF under contract number ###, by the
National Science Foundation Science and Technology Center Cooperative
Agreement No. CCR-8809615, and by the Commission of the European Community
through Esprit project P6643.


Contents

4  Groups, Contexts, and Communicators
   4.1   Introduction
   4.2   Context
   4.3   Groups
         4.3.1   Predefined Groups
   4.4   Communicators
         4.4.1   Predefined Communicators
   4.5   Group Management
         4.5.1   Local Operations
         4.5.2   Local Group Constructors
         4.5.3   Collective Group Constructors
   4.6   Operations on Contexts
         4.6.1   Local Operations
         4.6.2   Collective Operations
   4.7   Operations on Communicators
         4.7.1   Local Communicator Operations
         4.7.2   Local Constructors
         4.7.3   Collective Communicator Constructors
   4.8   Introduction to Inter-Communication
         4.8.1   Definitions of Inter-Communication and Inter-Communicators
         4.8.2   Properties of Inter-Communication and Inter-Communicators
         4.8.3   Inter-Communication Routines
         4.8.4   Implementation Notes
   4.9   Cacheing
         4.9.1   Functionality
   4.10  Formalizing the Loosely Synchronous Model (Usage, Safety)
         4.10.1  Basic Statements
         4.10.2  Models of Execution
   4.11  Motivating Examples
         4.11.1  Current Practice #1
         4.11.2  Current Practice #2
         4.11.3  (Approximate) Current Practice #3
         4.11.4  Example #4
         4.11.5  Library Example #1
         4.11.6  Library Example #2
         4.11.7  Inter-Communication Examples

Abstract

The Message Passing Interface Forum (MPIF), with participation from over 40
organizations, has been meeting since January 1993 to discuss and define a
set of library interface standards for message passing.  MPIF is not
sanctioned or supported by any official standards organization.

This is a draft of what will become the Final Report, Version 1.0, of the
Message Passing Interface Forum.  This document contains all the technical
features proposed for the interface.  This copy of the draft was processed
by LaTeX on August 10, 1993.

MPIF invites comments on the technical content of MPI, as well as on the
editorial presentation in the document.  Comments received before July 1,
1993 will be considered in producing the final draft of Version 1.0 of the
Message Passing Interface Specification.

The goal of the Message Passing Interface, simply stated, is to develop a
widely used standard for writing message-passing programs.  As such the
interface should establish a practical, portable, efficient, and flexible
standard for message passing.

Section 4

Groups, Contexts, and Communicators

4.1  Introduction

We define the concepts of group, context, and communicator here.  We discuss
operations for how these should be used to provide safe (safer)
communication in the MPI system.  We start by discussing intra-communication
in full detail; then we discuss inter-communication, which builds on the
data structures and requirements of the intra-communication sections.  We
follow with discussion of formalizations of the loosely synchronous model of
computing (vis a vis message passing) and offer examples.

It is highly desirable that processes executing a parallel procedure use a
"virtual process name space" local to the invocation.  Thus, the code of the
parallel procedure will look identical, irrespective of the absolute
addresses of the executing processes.  It is often the case that parallel
application code is built by composing several parallel modules (e.g., a
numerical solver, and a graphic display module).  Support of a virtual name
space for each module will allow for the composition of modules that were
developed separately, without changing all message passing calls within each
module.  The set of processes that execute a parallel procedure may be
fixed, or may be determined dynamically before the invocation.  Thus, MPI
has to provide a mechanism for dynamically creating sets of locally named
processes.  We always number processes that execute a parallel procedure
consecutively, starting from zero, and call this numbering rank in group.
Thus, a group is an ordered set of processes, where processes are identified
by their ranks when communication occurs.

Communication contexts partition the message-passing space into separate,
manageable "universes."  Specifically, a send made in a context cannot be
received in another context.  Contexts are identified in MPI using opaque
contexts that reside within communicator objects.  The context mechanism is
needed to allow predictable behavior in subprograms, and to allow dynamism
in message usage that cannot be reasonably anticipated or managed.
Normally, a parallel procedure is written so that all messages produced
during its execution are also consumed by the processes that execute the
parallel procedure.  However, if one parallel procedure calls another, then
it might be desirable to allow such a call to proceed while messages are
pending (the messages will be consumed by the procedure after the call
returns).  In such a case, a new communication context is needed for the
called parallel procedure, even if the transfer of control is synchronized.

The communication domain used by a parallel procedure is identified by a
communicator.  Communicators bring together the concepts of process group
and communication context.  A communicator is an explicit parameter in each
point-to-point communication operation.  The communicator identifies the
communication context of that operation; it identifies the group of
processes that can be involved in this communication; and it provides the
translation from virtual process names, which are ranks within the group,
into absolute addresses.  Collective communication calls also take a
communicator as parameter; it is expected that parallel libraries will be
built to accept a communicator as parameter.  Communicators are represented
by opaque MPI objects.

4.2  Context

A context is the MPI mechanism for partitioning communication space.  A
defining property of a context is that a send made in a context cannot be
received in another context.  A context is an opaque object.  Only one
communicator in a process may be bound to a given context.  Contexts have
additional attributes for inter-communication, to be discussed below.  For
intra-communication, a context is essentially a hyper-tag needed to make a
communicator safe for point-to-point and MPI-defined collective
communication.

Discussion: Some implementations may make a context a pair of integers, each
representing a "hyper tag": one for point-to-point and one for (MPI-defined)
collective operations on a communicator.  By making this concept opaque, we
relieve the implementor of the requirement that this is the only way to
implement contexts correctly for MPI.

Discussion: Among other reasons, including to address Jim Cownie's concerns
about safety and to make both point-to-point and collective communication
safer on an intra-communicator, we have opted to make contexts opaque, at
the expense of upsetting those who want to be able to set context values.
This change is crucial to abstracting MPI from specific implementations, and
forces specific implementations to provide implementation-specific functions
to associate contexts with specific integer values.

g(comm)o(unication)h(o)q(ccurs.)166 2078 y(Comm)o(unication)23 b Fm(contexts)f Ft(partition)h(the)h(message-passing)f(space)g(in)o(to)g (separate,)h(man-)75 2135 y(ageable)f(\\univ)o(erses.")44 b(Sp)q (eci\014cally)l(,)27 b(a)c(send)g(made)g(in)h(a)f(con)o(text)f(cannot)h(b)q (e)g(receiv)o(ed)h(in)g(an-)75 2191 y(other)18 b(con)o(text.)29 b(Con)o(texts)17 b(are)i(iden)o(ti\014ed)h(in)f(MPI)g(using)g(opaque)f Fu(con)o(texts)h Ft(that)e(reside)j(within)75 2247 y Fm(communicator)e Ft(ob)s(jects.)31 b(The)20 b(con)o(text)e(mec)o(hanism)i(is)g(need)g(to)f (allo)o(w)g(predictable)i(b)q(eha)o(vior)f(in)75 2304 y(subprograms,)14 b(and)g(to)g(allo)o(w)h(dynamicism)h(in)g(message)e(usage)g(that)g(cannot)h (b)q(e)g(reasonably)g(an)o(tici-)75 2360 y(pated)g(or)e(managed.)20 b(Normally)l(,)15 b(a)f(parallel)i(pro)q(cedure)f(is)g(written)g(so)f(that)f (all)j(messages)e(pro)q(duced)75 2417 y(during)f(its)g(execution)h(are)e (also)g(consumed)h(b)o(y)g(the)f(pro)q(cesses)h(that)f(execute)h(the)f (parallel)i(pro)q(cedure.)75 2473 y(Ho)o(w)o(ev)o(er,)d(if)h(one)g(parallel)i (pro)q(cedure)f(calls)g(another,)e(then)i(it)f(migh)o(t)f(b)q(e)i(desirable)g (to)f(allo)o(w)g(suc)o(h)g(call)75 2530 y(to)17 b(pro)q(ceed)i(while)g (messages)e(are)g(p)q(ending)j(\(the)d(messages)h(will)h(b)q(e)f(consumed)h (b)o(y)e(the)h(pro)q(cedure)75 2586 y(after)d(the)g(call)h(returns\).)k(In)c (suc)o(h)g(case,)f(a)g(new)g(comm)o(unication)h(con)o(text)f(is)h(needed)g (for)f(the)g(called)75 2643 y(parallel)i(pro)q(cedure,)e(ev)o(en)h(if)g(the)f (transfer)f(of)h(con)o(trol)g(is)h(sync)o(hronized.)166 2704 y(The)e(comm)o(unication)i(domain)e(used)h(b)o(y)f(a)g(parallel)i(pro)q (cedure)g(is)e(iden)o(ti\014ed)j(b)o(y)d(a)g Fu(comm)o(uni-)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 2 5 bop 75 -100 a Ft(2)432 b Fl(SECTION)16 b(4.)35 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)75 45 y Fu(cator)p Ft(.)29 b(Comm)o(unicators)17 b(bring)h(together)f(the)h(concepts)h(of)e(pro)q(cess)h (group)g(and)g(comm)o(unication)75 102 y(con)o(text.)29 b(A)18 b(comm)o(unicator)g(is)h(an)f(explicit)j(parameter)c(in)j(eac)o(h)e(p)q(oin)o (t-to-p)q(oin)o(t)h(comm)o(unication)75 158 y(op)q(eration.)31 b(The)19 b(comm)o(unicator)f(iden)o(ti\014es)j(the)d(comm)o(unication)i(con)o (text)e(of)g(that)g(op)q(eration;)j(it)75 214 y(iden)o(ti\014es)f(the)f (group)f(of)g(pro)q(cesses)g(that)g(can)g(b)q(e)h(in)o(v)o(olv)o(ed)h(in)f (this)g(comm)o(unication;)h(and)e(it)h(pro-)75 271 y(vides)g(the)g (translation)f(from)g(virtual)h(pro)q(cess)g(names,)g(whic)o(h)g(are)f(ranks) g(within)i(the)e(group,)h(in)o(to)75 327 y(absolute)13 b(addresses.)20 b(Collectiv)o(e)14 b(comm)o(unication)g(calls)g(also)f(tak)o(e)f(a)h(comm)o (unicator)f(as)h(parameter;)75 384 y(it)k(is)h(exp)q(ected)g(that)e(parallel) j(libraries)g(will)f(b)q(e)g(built)g(to)f(accept)g(a)g(comm)o(unicator)f(as)h (parameter.)75 440 y(Comm)o(unicators)d(are)h(represen)o(ted)h(b)o(y)f 
4.4  Communicators

All MPI communication (both point-to-point and collective) functions use
communicators to provide a specific scope (context and group specifications)
for the communication.  In short, communicators bring together the concepts
of group and context (furthermore, to support implementation-specific
optimizations and virtual topologies, they "cache" additional information
opaquely).  The source and destination of a message are identified by the
rank of the process within the group; no a priori membership restrictions on
the process sending or receiving the message are implied.  For collective
communication, the communicator specifies the set of processes that
participate in the collective operation.  Thus, the communicator restricts
the "spatial" scope of communication, and provides local process addressing.

Discussion: 'Communicator' replaces the word 'context' everywhere in current
pt2pt and collcomm drafts.

Communicators are represented by opaque communicator objects, and hence
cannot be directly transferred from one process to another.

Raison d'être for separate Contexts and Communicators.  Within a
communicator, a context is separately broken out, rather than being inherent
in the communicator, for one specific, essential purpose.  We want to make
it possible for libraries quickly to achieve additional safe communication
space without MPI-communicator-based synchronization.  The only way to do
this is to provide a means to preallocate many contexts, and bind them
locally, as needed.  This choice weakens the overall, inherent "safety" of
MPI, if programmed in this way, but provides added performance which library
designers will demand.

4.4.1  Predefined Communicators

Initial communicators defined once MPI_INIT has been called are as follows:

  - MPI_COMM_ALL: SPMD-like siblings of a process.
  - MPI_COMM_HOST: a communicator for talking to one's HOST.
  - MPI_COMM_PARENT: a communicator for talking to one's PARENT (spawner).
  - MPI_COMM_SELF: a communicator for talking to one's self (useful for
    getting contexts for server purposes, etc.).

MPI implementations are required to provide these communicators; however,
not all forms of communication make sense for all systems.  Environmental
inquiry will be provided to determine which of these communicators are
usable in a given implementation.  The groups corresponding to these
communicators are also available as predefined quantities (see section
4.3.1).

Discussion: Environmental sub-committee needs to provide such inquiry
functions for us.

4.5  Group Management

This section describes the manipulation of groups under various subheadings:
general, constructors, and so on.

4.5.1  Local Operations

The following are all local (non-communicating) operations.

MPI_GROUP_SIZE(group, size)
    IN   group       handle to group object
    OUT  size        the integer number of processes in the group

MPI_GROUP_RANK(group, rank)
    IN   group       handle to group object
    OUT  rank        the integer rank of the calling process in group, or
                     MPI_UNDEFINED if the process is not a member

MPI_TRANSLATE_RANKS(group_a, n, ranks_a, group_b, ranks_b)
    IN   group_a     handle to group object "A"
    IN   n           number of ranks in the ranks_a array
    IN   ranks_a     array of zero or more valid ranks in group "A"
    IN   group_b     handle to group object "B"
    OUT  ranks_b     array of corresponding ranks in group "B";
                     MPI_UNDEFINED when no correspondence exists

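For illustration, the three accessors above can be combined to discover
where members of one group appear in another.  The draft gives only
language-independent calling sequences, so the C fragment below is a sketch:
the opaque MPI_GROUP type, the int return codes, the prototypes, and the
value of MPI_UNDEFINED are all assumptions, not part of this proposal.

    #include <stdio.h>

    /* Assumed C bindings for the draft calls above (not specified here). */
    typedef void *MPI_GROUP;
    #define MPI_UNDEFINED (-1)             /* assumed sentinel value */
    int MPI_GROUP_SIZE(MPI_GROUP group, int *size);
    int MPI_GROUP_RANK(MPI_GROUP group, int *rank);
    int MPI_TRANSLATE_RANKS(MPI_GROUP group_a, int n, int *ranks_a,
                            MPI_GROUP group_b, int *ranks_b);

    /* Report where the first few processes of group_a appear in group_b. */
    void show_correspondence(MPI_GROUP group_a, MPI_GROUP group_b)
    {
        int size, my_rank, i, n;
        int ranks_a[3] = {0, 1, 2};
        int ranks_b[3];

        MPI_GROUP_SIZE(group_a, &size);    /* number of processes in "A"  */
        MPI_GROUP_RANK(group_a, &my_rank); /* my rank, or MPI_UNDEFINED   */
        if (my_rank != MPI_UNDEFINED)
            printf("calling process is rank %d of %d in A\n", my_rank, size);

        n = (size < 3) ? size : 3;         /* pass only valid ranks of "A" */
        MPI_TRANSLATE_RANKS(group_a, n, ranks_a, group_b, ranks_b);

        for (i = 0; i < n; i++) {
            if (ranks_b[i] == MPI_UNDEFINED)
                printf("rank %d of A has no counterpart in B\n", ranks_a[i]);
            else
                printf("rank %d of A is rank %d of B\n", ranks_a[i],
                       ranks_b[i]);
        }
    }
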
4.5.2  Local Group Constructors

The execution of the following operations does not require interprocess
communication.

MPI_LOCAL_SUBGROUP(group, n, ranks, new_group)
    IN   group       handle to group object
    IN   n           number of elements in array ranks (and size of
                     new_group)
    IN   ranks       array of integer ranks in group to appear in new_group
    OUT  new_group   new group derived from above, preserving the order
                     defined by ranks

If no ranks are specified, new_group has no members.

MPI_LOCAL_EXCL_SUBGROUP(group, n, ranks, new_group)
    IN   group       handle to group object
    IN   n           number of elements in array ranks
    IN   ranks       array of integer ranks in group not to appear in
                     new_group
    OUT  new_group   new group derived from above, preserving the order
                     defined by ranks

If no ranks are specified, new_group is identical to group.

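As a sketch of the two constructors above, the following splits a group into
its even-ranked and odd-ranked members.  The C bindings are the same assumed
ones as in the previous example, and the fixed-size scratch array is also an
assumption made for brevity.

    typedef void *MPI_GROUP;               /* assumed opaque handle */
    int MPI_GROUP_SIZE(MPI_GROUP group, int *size);
    int MPI_LOCAL_SUBGROUP(MPI_GROUP group, int n, int *ranks,
                           MPI_GROUP *new_group);
    int MPI_LOCAL_EXCL_SUBGROUP(MPI_GROUP group, int n, int *ranks,
                                MPI_GROUP *new_group);

    /* Split 'group' into even-ranked and odd-ranked members.
       Assumes the group has at most 2048 members.                        */
    void split_even_odd(MPI_GROUP group, MPI_GROUP *evens, MPI_GROUP *odds)
    {
        int size, n = 0, i;
        int even_ranks[1024];

        MPI_GROUP_SIZE(group, &size);
        for (i = 0; i < size; i += 2)
            even_ranks[n++] = i;

        /* Members listed in even_ranks appear in *evens, in that order.  */
        MPI_LOCAL_SUBGROUP(group, n, even_ranks, evens);

        /* The exclusion form keeps everyone not listed, i.e. the odds.   */
        MPI_LOCAL_EXCL_SUBGROUP(group, n, even_ranks, odds);
    }
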
v 16 w(a)14 b Fj(arra)o(y)117 998 y(IN)171 b Fh(ranks)p 437 998 14 2 v 16 w(a)433 b Fj(arra)o(y)14 b(of)f(zero)i(or)f(more)f(v)n(alid)f(ranks)i(in)g(group)f(\\A")117 1077 y(IN)171 b Fh(group)p 445 1077 V 16 w(b)424 b Fj(handle)14 b(to)g(group)f(ob)r(ject)i(\\B")117 1155 y(OUT)124 b Fh(ranks)p 437 1155 V 16 w(b)432 b Fj(arra)o(y)9 b(of)g(corresp)q(onding)h(ranks)g(in)f (group)g(\\B,")18 b Ff(MPI)p 1757 1155 13 2 v 14 w(UNDEFINED)905 1212 y Fj(when)d(no)e(corresp)q(ondence)k(exists.)75 1411 y Fi(4.5.2)49 b(Lo)q(cal)18 b(Group)e(Constructo)o(rs)75 1500 y Ft(The)f(execution)i(of)d(the)i(follo)o(wing)g(op)q(erations)f(do)g(not)g (require)h(in)o(terpro)q(cess)g(comm)o(unication.)75 1605 y Fh(MPI)p 160 1605 14 2 v 16 w(LOCAL)p 318 1605 V 16 w(SUBGROUP\(group,)h(n,)e (ranks,)g(new)p 981 1605 V 17 w(group\))117 1684 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject)117 1762 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)9 b(of)f(elemen)o(ts)i(in)e (arra)o(y)h(ranks)h(\(and)f(size)h(of)18 b Ff(new)p 1802 1762 13 2 v 16 w(group)p Fj(\))117 1841 y(IN)171 b Fh(ranks)471 b Fj(arra)o(y)11 b(of)f(in)o(teger)i(ranks)f(in)f Ff(group)i Fj(to)f(app)q(ear)g(in)g Ff(new)p 1751 1841 V 16 w(group)p Fj(.)117 1919 y(OUT)124 b Fh(new)p 411 1919 14 2 v 17 w(group)372 b Fj(new)18 b(group)g(deriv)o(ed)g(from)e(ab)q(o)o(v)o(e,)i(preserving)h(the) f(order)905 1976 y(de\014ned)d(b)o(y)28 b Ff(ranks)p Fj(.)75 2102 y Ft(If)15 b(no)h(ranks)e(are)h(sp)q(eci\014ed,)i Fh(new)p 655 2102 V 17 w(group)f Ft(has)f(no)g(mem)o(b)q(ers.)75 2207 y Fh(MPI)p 160 2207 V 16 w(LOCAL)p 318 2207 V 16 w(EX)o(CL)p 444 2207 V 16 w(SUBGROUP\(group,)i(n,)e(ranks,)h(new)p 1108 2207 V 17 w(group\))117 2286 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject)117 2364 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)9 b(of)f(elemen)o(ts)i(in)e(arra)o(y)h(ranks)h(\(and)f(size)h(of)18 b Ff(new)p 1802 2364 13 2 v 16 w(group)p Fj(\))117 2443 y(IN)171 b Fh(ranks)471 b Fj(arra)o(y)9 b(of)g(in)o(teger)h(ranks)f(in)g Ff(group)h Fj(not)f(to)g(app)q(ear)h(in)f Ff(new)p 1805 2443 V 16 w(group)117 2521 y Fj(OUT)124 b Fh(new)p 411 2521 14 2 v 17 w(group)372 b Fj(new)18 b(group)g(deriv)o(ed)g(from)e(ab)q(o)o(v)o(e,)i (preserving)h(the)f(order)905 2578 y(de\014ned)d(b)o(y)28 b Ff(ranks)p Fj(.)75 2704 y Ft(If)15 b(no)h(ranks)e(are)h(sp)q(eci\014ed,)i Fh(new)p 655 2704 V 17 w(group)f Ft(is)f(iden)o(tical)i(to)e Fh(group)p Ft(.)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 6 9 bop 75 -100 a Ft(6)432 b Fl(SECTION)16 b(4.)35 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)75 45 y Fh(MPI)p 160 45 14 2 v 16 w(LOCAL)p 318 45 V 16 w(SUBGROUP)p 572 45 V 19 w(RANGES\(group,)h (n,)f(ranges,)g(new)p 1193 45 V 17 w(group\))117 175 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject)117 354 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)9 b(of)f(elemen)o(ts)i(in)e (arra)o(y)h(ranks)h(\(and)f(size)h(of)18 b Ff(new)p 1802 354 13 2 v 16 w(group)p 
Fj(\))117 534 y(IN)171 b Fh(ranges)450 b Fj(a)19 b(one-dimensional)e(arra)o(y)i(of)g(in)o(teger)g(triplets:)29 b(pairs)20 b(of)905 590 y(ranks)c(\(form:)j(b)q(eginning)c(through)g(end,)h (inclusiv)o(e\))f(to)g(b)q(e)905 647 y(included)i(in)g(the)g(output)g(group)g Ff(new)p 1529 647 V 16 w(group)p Fj(,)h(plus)e(a)h(con-)905 703 y(stan)o(t)d(stride)h(\(often)f(1)g(or)g(-1\).)117 883 y(OUT)124 b Fh(new)p 411 883 14 2 v 17 w(group)372 b Fj(new)18 b(group)g(deriv)o(ed)g(from)e(ab)q(o)o(v)o(e,)i(preserving)h(the)f(order)905 939 y(de\014ned)d(b)o(y)28 b Ff(ranges)p Fj(.)75 1116 y Ft(If)15 b(an)o(y)g(of)g(the)g(rank)g(sets)g(o)o(v)o(erlap,)f(then)i(the)f(o)o(v)o (erlap)g(is)g(ignored.)21 b(If)15 b(no)g(ranges)g(are)g(sp)q(eci\014ed,)i (then)75 1172 y(the)e(output)g(group)g(has)g(no)g(mem)o(b)q(ers.)75 1328 y Fh(MPI)p 160 1328 V 16 w(LOCAL)p 318 1328 V 16 w(SUBGROUP)p 572 1328 V 19 w(EX)o(CL)p 701 1328 V 16 w(RANGES\(group,)h(n,)f(ranges,)g (new)p 1319 1328 V 17 w(group\))117 1458 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject)117 1638 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)9 b(of)f(elemen)o(ts)i(in)e(arra)o(y)h(ranks)h (\(and)f(size)h(of)18 b Ff(new)p 1802 1638 13 2 v 16 w(group)p Fj(\))117 1817 y(IN)171 b Fh(ranges)450 b Fj(a)17 b(one-dimensional)e(arra)o (y)i(of)g(\(three)i(in)o(teger\))f(consisting)905 1874 y(of)12 b(pairs)h(of)f(ranks)i(\(form:)h(b)q(eginning)e(through)f(end,)i(inclu-)905 1930 y(siv)o(e\))d(to)g(b)q(e)h(excluded)g(from)d(the)j(output)f(group)g Ff(new)p 1751 1930 V 16 w(group)p Fj(,)905 1987 y(plus)j(a)g(constan)o(t)g (stride)h(\(often)f(1)f(or)h(-1\).)117 2166 y(OUT)124 b Fh(new)p 411 2166 14 2 v 17 w(group)372 b Fj(new)18 b(group)g(deriv)o(ed)g(from)e(ab)q (o)o(v)o(e,)i(preserving)h(the)f(order)905 2223 y(de\014ned)d(b)o(y)28 b Ff(ranges)p Fj(.)75 2399 y Ft(If)15 b(an)o(y)f(of)f(the)i(rank)f(sets)g(o)o (v)o(erlap,)g(then)h(the)f(o)o(v)o(erlap)g(is)h(ignored.)20 b(If)15 b(there)f(are)g(no)g(ranges)g(sp)q(eci\014ed,)75 2456 y(the)h(output)g(group)g(is)h(the)f(same)g(as)g(the)g(original)h(group.)166 2647 y Fk(Discussion:)e Fj(Please)f(prop)q(ose)f(additional)e(subgroup)i (functions,)g(b)q(efore)h(the)f(second)h(reading...Virtual)75 2704 y(T)m(op)q(ologies)f(supp)q(ort?)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 7 10 bop 75 -100 a Fl(4.5.)34 b(GR)o(OUP)15 b(MANA)o(GEMENT)1138 b Ft(7)75 45 y Fh(MPI)p 160 45 14 2 v 16 w(LOCAL)p 318 45 V 16 w(GROUP)p 486 45 V 18 w(UNION\(group1,)15 b(group2,)f(group)p 1088 45 V 17 w(out\))117 124 y Fj(IN)171 b Fh(group1)440 b Fj(\014rst)15 b(group)f(ob)r(ject)g(handle)117 203 y(IN)171 b Fh(group2)440 b Fj(second)15 b(group)f(ob)r(ject)h(handle)117 282 y(OUT)124 b Fh(group)p 445 282 V 16 w(out)385 b Fj(group)14 b(ob)r(ject)h(handle)75 455 y Fh(MPI)p 160 455 V 16 w(LOCAL)p 318 455 V 16 w(GROUP)p 486 455 V 18 w(INTERSECT\(group1,)g(group2,)f(group)p 1191 455 V 17 w(out\))117 
MPI_GROUP_FREE(group)
    IN   group       frees group previously defined

This operation frees a handle group which is not currently bound to a
communicator.  It is erroneous to attempt to free a group currently bound to
a communicator.

Discussion: The point-to-point chapter suggests that there is a single
destructor for all MPI opaque objects; however, it is arguable that this
specifies the implementation of MPI very strongly.  We politely argue
against this approach.

MPI_GROUP_DUP(group, new_group)
    IN   group       extant group object handle
    OUT  new_group   new group object handle

MPI_GROUP_DUP duplicates a group with all its cached information, replacing
nothing.  This function is essential to the support of virtual topologies.

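The pairing of the two calls above follows the rule that only a group not
bound to a communicator may be freed.  A minimal sketch (assumed C
bindings): a library duplicates the group it was handed, works with its
private copy, and frees the copy, which it never binds to a communicator.

    typedef void *MPI_GROUP;               /* assumed opaque handle */
    int MPI_GROUP_DUP(MPI_GROUP group, MPI_GROUP *new_group);
    int MPI_GROUP_FREE(MPI_GROUP group);

    void library_entry(MPI_GROUP callers_group)
    {
        MPI_GROUP mine;

        MPI_GROUP_DUP(callers_group, &mine);  /* private copy, cached
                                                 information intact       */

        /* ... derive subgroups, build communicators, etc. from 'mine' ... */

        MPI_GROUP_FREE(mine);                 /* legal: 'mine' is not bound
                                                 to a communicator here   */
    }
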
w(group\))117 129 y Fj(IN)171 b Fh(group)463 b Fj(extan)o(t)14 b(group)g(ob)r(ject)h(handle)117 219 y(OUT)124 b Fh(new)p 411 219 V 17 w(group)372 b Fj(new)15 b(group)e(ob)r(ject)i(handle)75 350 y Fh(MPI)p 160 350 V 16 w(GROUP)p 328 350 V 18 w(DUP)k Ft(duplicates)j(a)d(group)h(with)g(all)h(its)f(cac)o(hed)g(information,)h (replacing)g(nothing.)75 407 y(This)16 b(function)g(is)g(essen)o(tial)g(to)e (the)h(supp)q(ort)h(of)f(virtual)g(top)q(ologies.)75 569 y Fi(4.5.3)49 b(Collective)17 b(Group)f(Constructo)o(rs)75 669 y Ft(The)e(execution)h(of)f(the)g(follo)o(wing)h(op)q(erations)f(require)h (collectiv)o(e)h(comm)o(unication)f(within)g(a)f(group.)75 779 y Fh(MPI)p 160 779 V 16 w(COLL)p 288 779 V 16 w(SUBGROUP\(comm)o(,)e(k)o (ey)l(,)j(colo)o(r,)e(new)p 983 779 V 18 w(group\))117 864 y Fj(IN)171 b Fh(comm)450 b Fj(comm)o(unicator)11 b(ob)r(ject)k(handle)117 953 y(IN)171 b Fh(k)o(ey)509 b Fj(\(in)o(teger\))117 1042 y(IN)171 b Fh(colo)o(r)479 b Fj(\(in)o(teger\))117 1132 y(OUT)124 b Fh(new)p 411 1132 V 17 w(group)372 b Fj(new)15 b(group)e(ob)r(ject)i(handle) 75 1263 y Ft(This)f(collectiv)o(e)h(function)f(is)f(called)i(b)o(y)e(all)h (pro)q(cesses)g(in)f(the)h(group)f(asso)q(ciated)g(with)26 b Fh(comm)n Ft(.)35 b Fh(colo)o(r)75 1320 y Ft(de\014nes)16 b(the)g(particular)g(new)g(group)f(to)f(whic)o(h)j(the)e(pro)q(cess)h(b)q (elongs.)42 b Fh(k)o(ey)15 b Ft(de\014nes)i(the)e(rank-order)75 1376 y(in)34 b Fh(new)p 223 1376 V 17 w(group)p Ft(;)17 b(a)g(stable)g(sort)f (is)h(used)g(to)f(determine)i(rank)f(order)f(in)35 b Fh(new)p 1440 1376 V 17 w(group)16 b Ft(if)h(the)34 b Fh(k)o(eys)17 b Ft(are)75 1433 y(not)e(unique.)166 1579 y Fk(Discussion:)29 b Fj(According)22 b(to)e(the)i(op)q(eration)f(of)f(this)h(function,)h(the)f (groups)h(so)f(created)h(are)f(non-)75 1635 y(o)o(v)o(erlapping.)c(Is)d (there)h(a)f(need)h(for)e(a)h(more)f(complex)f(functionalit)o(y?)75 1902 y Fo(4.6)59 b(Op)r(erations)19 b(on)h(Contexts)75 2019 y Fi(4.6.1)49 b(Lo)q(cal)18 b(Op)q(erations)75 2118 y Ft(There)d(are)g(no)g (lo)q(cal)i(op)q(erations)e(on)g(con)o(texts.)75 2229 y Fh(MPI)p 160 2229 V 16 w(CONTEXTS)p 414 2229 V 17 w(FREE\(n,)h(contexts\))117 2313 y Fj(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)13 b(of)h(con)o(texts)h(to)e (free)117 2403 y(IN)171 b Fh(contexts)415 b Fj(v)o(oid)13 b(*)h(arra)o(y)f (of)h(con)o(texts)75 2534 y Ft(Lo)q(cal)e(deallo)q(cation)h(of)e(con)o(text)g (allo)q(cated)i(b)o(y)22 b Fh(MPI)p 995 2534 V 16 w(CONTEXTS)p 1249 2534 V 18 w(ALLOC)11 b Ft(\(b)q(elo)o(w\).)19 b(It)11 b(is)h(erroneous)75 2591 y(to)i(free)h(a)g(con)o(text)f(that)g(is)i(b)q(ound) g(to)e(an)o(y)g(comm)o(unicator)h(\(either)g(lo)q(cally)i(or)d(in)i(another)e (pro)q(cess\).)75 2647 y(This)22 b(op)q(eration)f(is)h(lo)q(cal)g(\(as)f(it)g (m)o(ust)g(b)q(e,)i(b)q(ecause)f(it)g(do)q(es)f(not)g(p)q(ose)h(a)f(comm)o (unicator)g(in)h(its)75 2704 y(argumen)o(t)14 b(list\).)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 
4.6  Operations on Contexts

4.6.1  Local Operations

There are no local operations on contexts.

MPI_CONTEXTS_FREE(n, contexts)
    IN   n           number of contexts to free
    IN   contexts    void * array of contexts

Local deallocation of contexts allocated by MPI_CONTEXTS_ALLOC (below).  It
is erroneous to free a context that is bound to any communicator (either
locally or in another process).  This operation is local (as it must be,
because it does not have a communicator in its argument list).

4.6.2  Collective Operations

MPI_CONTEXTS_ALLOC(comm, n, contexts, len)
    IN   comm        communicator whose group denotes participants
    IN   n           number of contexts to allocate
    OUT  contexts    void * array of contexts
    OUT  len         length of contexts array

Allocates an array of opaque contexts.  This collective operation is
executed by all processes in MPI_COMM_GROUP(comm).  MPI provides special
contextual space to its collective operations (including
MPI_CONTEXTS_ALLOC) so that, despite any on-going point-to-point
communication on comm, this operation can execute safely.  MPI collective
functions will have to lock out multiple threads, so the implementation will
evidently have capabilities unavailable to the user program.

Contexts that are allocated by MPI_CONTEXTS_ALLOC are unique within
MPI_COMM_GROUP(comm).  The array is the same on all processes that call the
function (same order, same number of elements).

Discussion: MPI_CONTEXTS_ALLOC(comm, n, contexts) was the previous
definition of this function; then we changed to a group argument in the
first slot, arguing that the communicator was unnecessary.  We have changed
back because the group semantics proved not to be thread safe, so we had to
retain the approach discussed at length at the previous MPI meeting.  (In
case you did not read the July 10 draft, you have not seen a change in this
draft compared to the June 24 draft!)  Now, we have stronger justification
for keeping the approach discussed at the June 24 meeting.  We have added
the len parameter, yielding the current formulation, because contexts are
now opaque, not integers.

We have to retain the chicken-and-egg aspect of MPI (i.e., use a
communicator to get context(s) or a communicator), to get thread safety.
Yet, we want libraries to control their own fate regarding safety, not to
rely on the caller to provide a quiescent context.  We achieve this by
adding the quiescent property for MPI collective communication functions.
We must, in fact, push this requirement to the collective chapter, but we
demonstrate here why our particular collective routines need this property.
In a multi-threaded environment, it is clear that each temporally
overlapping call to a collective operation must be with a different
communicator.  If this has not been made explicit, it must be.

One can have on-going point-to-point and collective communications on a
single communicator.  A context is defined to be sufficiently powerful to
keep both point-to-point and collective operations distinct.  Hence, it is
always safe to call MPI_COMM_MAKE and MPI_CONTEXTS_ALLOC, even if pending
asynchronous point-to-point operations are on-going, or messages have not
been received but are on the receipt queue.  With these rules, no quiescent
communicator is required in order to get new contexts.  We have added
demands on the MPI implementation while making contexts opaque to make this
simpler to realize without saying how it must be done.

In summary, libraries have to get the communicator as a base argument to
retain thread safety, but they can always safely get communication contexts
to do further work.  The concept of quiescence is banished to be a small
detail of implementation, rather than a central tenet of library design.
Users must still worry about temporal safety, which is not guaranteed by
contexts alone (see the example below in section 4.11.6).

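The intended usage pattern is the one motivated in the Raison d'être of
section 4.4: a library collectively preallocates a batch of contexts once,
hands them out locally with no further synchronization, and locally frees
whatever it has not bound to a communicator.  The draft leaves the exact C
form of the contexts array and of len open, so the bindings, the
caller-supplied array, and the pool helpers below are all assumptions.

    #include <stddef.h>

    typedef void *MPI_COMM;                /* assumed opaque handle */
    int MPI_CONTEXTS_ALLOC(MPI_COMM comm, int n, void **contexts, int *len);
    int MPI_CONTEXTS_FREE(int n, void **contexts);

    #define POOL_MAX 16

    static void *context_pool[POOL_MAX];   /* contexts not yet handed out */
    static int   pool_len = 0;

    /* Collective over comm's group: all members must call this together. */
    void pool_fill(MPI_COMM comm)
    {
        MPI_CONTEXTS_ALLOC(comm, POOL_MAX, context_pool, &pool_len);
    }

    /* Purely local: take one preallocated context, no synchronization.   */
    void *pool_take(void)
    {
        return (pool_len > 0) ? context_pool[--pool_len] : NULL;
    }

    /* Purely local: release the leftovers (none of these is bound to a
       communicator, so freeing them is legal).                           */
    void pool_drain(void)
    {
        MPI_CONTEXTS_FREE(pool_len, context_pool);
        pool_len = 0;
    }
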
4.7  Operations on Communicators

4.7.1  Local Communicator Operations

The following are all local (non-communicating) operations.

MPI_COMM_SIZE(comm, size)

    IN   comm           handle to communicator object
    OUT  size           the integer number of processes in the group of comm

MPI_COMM_RANK(comm, rank)

    IN   comm           handle to communicator object
    OUT  rank           the integer rank of the calling process in the group
                        of comm, or MPI_UNDEFINED if the process is not a
                        member
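A minimal C sketch of these two inquiries (the C binding, and the assumption
that an MPI header declares the routines and MPI_UNDEFINED, are not part of
this draft):

    /* Sketch only: assumed C binding. */
    #include <stdio.h>

    void report_position(void *comm)
    {
        int size, rank;

        mpi_comm_size(comm, &size);   /* processes in the group of comm    */
        mpi_comm_rank(comm, &rank);   /* this process's rank in that group,
                                         or MPI_UNDEFINED if not a member  */
        if (rank != MPI_UNDEFINED)
            printf("process %d of %d\n", rank, size);
    }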
4.7.2  Local Constructors

MPI_COMM_GROUP(comm, group)

    IN   comm           communicator object handle
    OUT  group          group object handle

Accessor that returns the group corresponding to the communicator comm.

MPI_COMM_CONTEXT(comm, context)

    IN   comm           communicator object handle
    OUT  context        context

Returns the context associated with the communicator comm.

MPI_COMM_UNBIND(comm)

    IN   comm           the communicator to be deallocated

This routine disassociates the group MPI_COMM_GROUP(comm) associated with
comm from the context MPI_COMM_CONTEXT(comm) associated with comm.  The
opaque object comm is deallocated.  Both the group and the context, provided
at the MPI_COMM_BIND call, remain available for further use.  If
MPI_COMM_MAKE (see below) was called in lieu of MPI_COMM_BIND, then there is
no exposed context known to the user, and this quantity is freed by
MPI_COMM_UNBIND.

MPI_COMM_DUP(comm, new_context, new_comm)

    IN   comm           communicator object handle
    IN   new_context    new context to use with new_comm
    OUT  new_comm       communicator object handle

MPI_COMM_DUP duplicates a communicator with all its cached information,
replacing just the context.
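To make the bind/unbind lifecycle concrete, a C sketch follows.  The binding
is assumed as before, and the exact prototype of MPI_COMM_BIND, which this
chapter only references, is taken from the equivalence shown for
MPI_COMM_MAKE in Section 4.7.3.

    /* Sketch only: assumed C binding; mpi_comm_bind follows the argument
       order (group, context, new_comm) used in Section 4.7.3.           */
    void rebind(void *comm)
    {
        void *group, *context, *comm2;

        mpi_comm_group(comm, &group);      /* group bound into comm      */
        mpi_comm_context(comm, &context);  /* context bound into comm    */

        mpi_comm_unbind(comm);             /* comm itself is deallocated */

        /* ... the group and the context remain usable, and can later be
           bound into a fresh communicator ...                           */
        mpi_comm_bind(group, context, &comm2);
    }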
4.7.3  Collective Communicator Constructors

MPI_COMM_MAKE(sync_comm, comm_group, comm_new)

    IN   sync_comm      communicator whose incorporated group
                        (MPI_COMM_GROUP(sync_comm)) is the group over which
                        the new communicator comm_new will be defined, also
                        specifying the participants in this synchronizing
                        operation
    IN   comm_group     group of the new communicator; often this will be
                        the same as sync_comm's group, else it must be a
                        subset thereof
    OUT  comm_new       the new communicator

MPI_COMM_MAKE is equivalent to:

    MPI_CONTEXTS_ALLOC(sync_comm, 1, context, len)
    MPI_COMM_BIND(comm_group, context, comm_new)

plus, notionally, internal flags are set in the communicator, denoting that
the context was created as part of the opaque process that made the
communicator (so it can be freed by MPI_COMM_UNBIND).  It is erroneous if
comm_group is not a subset of sync_comm's underlying group.

Discussion: MPI_COMM_MAKE and MPI_CONTEXTS_ALLOC both require bootstrap via
a communicator, instead of just the group of that communicator, for thread
safety.  We have argued this back and forth carefully, and conclude that each
thread of an MPI program will have one or more contexts.
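A C sketch of the usual composition: form a subset group collectively, then
give it its own context and communicator.  The binding is an assumption, as
in the earlier sketches.

    /* Sketch only: assumed C binding. */
    void make_row_communicator(void *comm, int ncols, void **row_comm)
    {
        int   rank;
        void *row_group;

        mpi_comm_rank(comm, &rank);

        /* Collectively form the subset group (Section 4.5.3) ...         */
        mpi_coll_subgroup(comm, rank % ncols, rank / ncols, &row_group);

        /* ... then give it a fresh context and a communicator.  comm acts
           as sync_comm: all of its members participate, even though each
           new communicator is defined only over one row group.           */
        mpi_comm_make(comm, row_group, row_comm);
    }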
4.8  Introduction to Inter-Communication

This section introduces the concept of inter-communication and describes the
portions of MPI that support it.  It describes support for writing programs
which contain user-level servers.  It also describes a name service which
simplifies writing programs containing inter-communication.

Discussion: Recommendation and plea for patience: the MPI Committee takes a
straw poll on whether to have inter-communication or not, as a whole, in
MPI1.  The most suitable time would be after we hear the arguments about the
interface at a high level, but before we discuss this section of the chapter.

4.8.1  Definitions of Inter-Communication and Inter-Communicators

All point-to-point communication described thus far has involved
communication between processes that are members of the same group.  The
source process in a send or the destination process in a receive (the
"target" process) is specified using a (communicator, rank) pair.  The target
process is that process with the given rank within the group of the given
communicator.  This type of communication is called "intra-communication"
and the communicator used is called an "intra-communicator."

In modular and multi-disciplinary applications, different process groups
execute different modules, and processes within different modules communicate
with one another in a pipeline or a more general module graph.  In these
applications the most natural way for a process to specify a target process
is by the rank of the target process within the target group.  In
applications that contain internal user-level servers, each server may be a
process group that provides services to one or more clients, and each client
may be a process group which uses the services of one or more servers.  In
these applications it is again most natural to specify the target process by
rank within the target group.  This type of communication is called
"inter-communication" and the communicator used is called an
"inter-communicator."

An inter-communication operation is a point-to-point communication between
processes in different groups.  The group containing a process that initiates
an inter-communication operation is called the "local group," that is, the
sender in a send and the receiver in a receive.  The group containing the
target process is called the "remote group," that is, the receiver in a send
and the sender in a receive.  As in intra-communication, the target process
is specified using a (communicator, rank) pair.  Unlike intra-communication,
the rank is relative to the remote group.

One additional needed concept is the "group leader."  The process with rank 0
in a process group is designated "group leader."  This concept is used in
support of user-level servers, and elsewhere.

4.8.2  Properties of Inter-Communication and Inter-Communicators

Here is a summary of the properties of inter-communication and
inter-communicators:

  - The syntax is the same for both inter- and intra-communication.  The
    same communicator can be used for both send and receive operations.

  - A target process is addressed by its rank in the remote group.

  - Communications using an inter-communicator are guaranteed not to
    conflict with any communications that use a different communicator.

  - An inter-communicator cannot be used for collective communication.

  - A communicator will provide either intra- or inter-communication, never
    both.

  - Once constructed, the remote group of an inter-communicator may not be
    changed.  Communication with any process outside of the remote group is
    not allowed.

The routine MPI_COMM_STAT() may be used to determine if a communicator is an
inter- or intra-communicator.  Inter-communicators can be used as arguments
to some of the other communicator inquiry routines (defined above).
Inter-communicators cannot be used as input to any of the local constructor
routines for intra-communicators.  When an inter-communicator is used as an
input argument, the following table describes the behavior of the relevant
MPI_COMM_* functions:

    MPI_COMM_ Function      Behavior (in Inter-Communication Mode)
    ------------------      --------------------------------------
    MPI_COMM_SIZE()         returns the size of the remote group
    MPI_COMM_GROUP()        returns the remote group
    MPI_COMM_RANK()         returns MPI_UNDEFINED
    MPI_COMM_CONTEXT()      erroneous
    MPI_COMM_UNBIND()       erroneous
    MPI_COMM_DUP()          erroneous

Construction/Destruction of Inter-Communicators

Construction of an inter-communicator requires two separate collective
operations (one in the local group and one in the remote group) and a
point-to-point operation between the two group leaders.  These operations may
be performed with explicit synchronization of the two groups by calling
MPI_COMM_PEER_MAKE().  The explicit synchronization can cause deadlock in
modular programs with cyclic communication graphs.  So, the local and remote
operations can be decoupled and the construction performed "loosely
synchronously" by calling the two routines MPI_COMM_PEER_MAKE_START() and
MPI_COMM_PEER_MAKE_FINISH().

Discussion: MPI_COMM_PEER_MAKE_START() and MPI_COMM_PEER_MAKE_FINISH() are
both collective operations in the local group.  They may leave a non-blocking
send and receive active between the two calls, where the group leaders
exchange local communicator information as necessary.  However, they are not
a non-blocking collective operation.

These routines can construct multiple inter-communicators with a single call.
This improves performance by allowing amortization of the synchronization
overhead.

The inter-communicator objects are destroyed in the same way as
intra-communicator objects, by calling MPI_COMM_FREE().

Support for User-Level Servers

We consider that the primary feature of user-level servers which can require
additional support is that the server cannot know the identification of the
clients a priori, whereas the clients must know the identification of the
servers a priori.  In addition, a user-level server is a dedicated process
group which, after some initialization, provides a given service until
termination.

The support for user-level servers takes into account the prevailing view
that all processes (possibly excepting a host process) are initially
equivalent members of the group of all processes.  This group is described by
the pre-defined intra-communicator MPI_COMM_ALL.  The user splits this group
such that the processes in each parallel server are placed within a specific
sub-group.  The non-server processes are placed in a group of all
non-servers.  Provided that the user can determine the ranks of the server
group leaders (i.e., rank zero) and assign some tags for clients to send a
message to the group leaders, then a group leader can at any time notify a
server that it wishes to become a client.

MPI provides a routine, MPI_COMM_SPLITL(), that splits a parent group,
creates sub-groups (intra-communicators) according to supplied keys, and
returns the rank of each sub-group leader (relative to the parent group).
This allows a process that does not know about a sub-group to contact that
sub-group via the sub-group leader, using the parent communicator.  The keys
may be used as unique tags.  This information may also be used as input to
MPI_COMM_PEER_MAKE(), for example.

Name Service

MPI provides a name service to simplify construction of inter-communicators.
This service allows a local process group to create an inter-communicator
when the only available information about the remote group is a user-defined
character string.  A synchronizing version is provided by the routine
MPI_COMM_NAME_MAKE().  A loosely synchronous version is provided by the
routines MPI_COMM_NAME_MAKE_START() and MPI_COMM_NAME_MAKE_FINISH().

4.8.3  Inter-Communication Routines

Synchronous Inter-Communicator Constructors

Both of these routines allow construction of multiple inter-communicators.
Each of these inter-communicators contains the same remote group and
different internal contexts.  Therefore, communication using any of these
inter-communicators will not interfere with communication using any of the
others.

MPI_COMM_PEER_MAKE(my_comm, peer_comm, peer_rank, tag, num_comms, new_comms)

    IN   my_comm        local intra-communicator
    IN   peer_comm      "parent" intra-communicator
    IN   peer_rank      rank of remote group leader in peer_comm
    IN   tag            "safe" tag
    IN   num_comms      number of new inter-communicators to construct
    OUT  new_comms      array of new inter-communicators

This routine constructs an array of inter-communicators and stores it in
new_comms.  Intra-communicator my_comm describes the local group.
Intra-communicator peer_comm describes a group that contains the leaders
(i.e., the members with rank zero) of both the local and remote groups.
Integer peer_rank is the rank of the remote leader in peer_comm.  Integer tag
is used to distinguish this operation from others with the same peer.
Integer num_comms is the number of new inter-communicators constructed.  This
routine is collective in the local group and synchronizes with the remote
group.  Each of the inter-communicators produced provides inter-communication
with the remote group.
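For illustration, a C sketch of a single connection through a common parent
communicator.  The binding is assumed, and local_comm, parent_comm, and the
tag value are application-chosen; none of them is fixed by this draft.

    /* Sketch only: assumed C binding.  local_comm and parent_comm are
       intra-communicators obtained earlier (e.g., from MPI_COMM_SPLITL on
       MPI_COMM_ALL); the tag value is agreed on by both sides.           */
    void connect_to_peer(void *local_comm, void *parent_comm,
                         int remote_leader, void **inter_comm)
    {
        void *new_comms[1];

        /* Collective in local_comm; synchronizes with the remote group,
           whose leader has rank remote_leader in parent_comm.            */
        mpi_comm_peer_make(local_comm, parent_comm, remote_leader,
                           /* tag */ 17, 1, new_comms);

        *inter_comm = new_comms[0];
    }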
MPI_COMM_NAME_MAKE(my_comm, name, num_comms, new_comms)

    IN   my_comm        local intra-communicator
    IN   name           name known to both local and remote group leaders
    IN   num_comms      number of new inter-communicators to construct
    OUT  new_comms      array of new inter-communicators

This is the name-served equivalent of MPI_COMM_PEER_MAKE, in which the caller
need only know a name for the peer connection.  The same name is supplied by
both the local and remote groups.  The name is removed from the internal
name-server database after both groups have completed MPI_COMM_NAME_MAKE().
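A corresponding C sketch; the name string is application-chosen, and the
binding is assumed as before.

    /* Sketch only: assumed C binding; the name string is chosen by the
       application, not by MPI.                                          */
    void connect_by_name(void *local_comm, void **inter_comm)
    {
        void *new_comms[1];

        /* Both the local and the remote group call this with the same
           name; no common parent communicator or leader rank is needed. */
        mpi_comm_name_make(local_comm, "ocean-atmosphere", 1, new_comms);

        *inter_comm = new_comms[0];
    }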
Loosely Synchronous Inter-Communicator Constructors

These routines are loosely synchronous counterparts of the synchronous
inter-communicator construction routines described above.

MPI_COMM_PEER_MAKE_START(my_comm, peer_comm, peer_rank, tag, num_comms,
                         make_id)

    IN   my_comm        local intra-communicator
    IN   peer_comm      "parent" intra-communicator
    IN   peer_rank      rank of remote group leader in peer_comm
    IN   tag            "safe" tag
    IN   num_comms      number of new inter-communicators to construct
    OUT  make_id        handle for MPI_COMM_PEER_MAKE_FINISH()

This starts off a _PEER_MAKE operation, returning a handle for the operation
in make_id.  It is collective in my_comm.  It does not wait for the remote
group to do MPI_COMM_PEER_MAKE_START().  The make_id handle is conceptually
similar to the communication handle used by the non-blocking point-to-point
routines.  A make_id handle is constructed by a _START routine and destroyed
by the matching _FINISH routine.  These handles are not valid for any other
use.  It is erroneous to call this routine again with the same peer_comm,
peer_rank, and tag without calling MPI_COMM_PEER_MAKE_FINISH() to finish the
first call.

MPI_COMM_PEER_MAKE_FINISH(make_id, new_comms)

    IN   make_id        handle from MPI_COMM_PEER_MAKE_START()
    OUT  new_comms      array of new inter-communicators

This completes a _PEER_MAKE operation, returning an array of num_comms new
inter-communicators in new_comms.  (Note that num_comms was specified in the
corresponding call to MPI_COMM_PEER_MAKE_START().)  This routine is
collective in the my_comm of the corresponding call to
MPI_COMM_PEER_MAKE_START().  It waits for the remote group to call
MPI_COMM_PEER_MAKE_START() but does not wait for the remote group to call
MPI_COMM_PEER_MAKE_FINISH().
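A C sketch of the decoupled construction, showing where independent work can
be overlapped; the binding is assumed, as before.

    /* Sketch only: assumed C binding. */
    void connect_loosely(void *local_comm, void *parent_comm,
                         int remote_leader, int tag, void **inter_comm)
    {
        void *make_id;
        void *new_comms[1];

        mpi_comm_peer_make_start(local_comm, parent_comm, remote_leader,
                                 tag, 1, &make_id);

        /* ... other local work, and communication on local_comm, may
           proceed here while the remote group catches up ...           */

        mpi_comm_peer_make_finish(make_id, new_comms); /* make_id is now
                                                          destroyed      */
        *inter_comm = new_comms[0];
    }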
MPI_COMM_NAME_MAKE_START(my_comm, name, num_comms, make_id)

    IN   my_comm        local intra-communicator
    IN   name           name known to both local and remote group leaders
    IN   num_comms      number of new inter-communicators to construct
    OUT  make_id        handle for MPI_COMM_NAME_MAKE_FINISH()

MPI_COMM_NAME_MAKE_FINISH(make_id, new_comms)

    IN   make_id        handle from MPI_COMM_NAME_MAKE_START()
    OUT  new_comms      array of new inter-communicators

These are the START/FINISH versions of MPI_COMM_NAME_MAKE().  They have
synchronization properties analogous to those of the corresponding _PEER_
routines.

Communicator Status

This local routine allows the calling process to determine if a communicator
is an inter-communicator or an intra-communicator.

MPI_COMM_STAT(comm, status)

    IN   comm           communicator
    OUT  status         integer status

This returns the status of communicator comm.  Valid status values are
MPI_INTRA, MPI_INTER, and MPI_INVALID.

Discussion: Should there be any other status values?
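A small C sketch of the inquiry (binding and header assumed):

    /* Sketch only: assumed C binding. */
    #include <stdio.h>

    void describe(void *comm)
    {
        int status;

        mpi_comm_stat(comm, &status);
        if (status == MPI_INTER)
            printf("inter-communicator: ranks address the remote group\n");
        else if (status == MPI_INTRA)
            printf("intra-communicator\n");
        else                             /* MPI_INVALID */
            printf("not a valid communicator\n");
    }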
Support for User-Level Servers

This collective routine is used to make it easier for processes to form
sub-groups and contact user-level servers.

MPI_COMM_SPLITL(comm, key, nkeys, leaders, sub_comm)

    IN   comm           extant intra-communicator to be "split"
    IN   key            key for sub-group membership
    IN   nkeys          number of keys (number of sub-groups)
    OUT  leaders        ranks of sub-group leaders in comm
    OUT  sub_comm       intra-communicator describing the sub-group of the
                        calling process

This routine splits the group described by intra-communicator comm into nkeys
sub-groups.  Each calling process must specify a value of key in the range
[0 ... (nkeys-1)].  Processes specifying the same key are placed in the same
sub-group.  The ranks of the leaders of each sub-group (relative to comm) are
returned in the integer array leaders.  This routine returns a new
intra-communicator, sub_comm, that describes the sub-group to which the
calling process belongs.
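For illustration, a C sketch that separates server processes from everybody
else and records where the server leader can be reached.  The key assignment
and the binding are assumptions, not part of this draft.

    /* Sketch only: assumed C binding.  Key 0 is used for the server
       sub-group and key 1 for all other processes; the assignment is an
       arbitrary application choice.                                      */
    void split_out_servers(void *all_comm, int i_am_server,
                           void **sub_comm, int *server_leader)
    {
        int leaders[2];                      /* one leader rank per key   */

        mpi_comm_splitl(all_comm,
                        i_am_server ? 0 : 1, /* key: which sub-group      */
                        2,                   /* nkeys                     */
                        leaders, sub_comm);

        /* Rank, in all_comm, of the server sub-group's leader; a client
           group can later pass it as peer_rank to MPI_COMM_PEER_MAKE,
           with all_comm as the peer communicator.                        */
        *server_leader = leaders[0];
    }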
4.8.4  Implementation Notes

Security and Performance Issues

The routines in this section do not introduce insecurity into the basic usage
of MPI.  Specifically, they do not allow contexts to be bound into multiple
usable communicators.

The provision of inter-communication does not adversely affect the
(potential) performance of intra-communication.

"Under the Hood"

A possible implementation of a communicator contains a single group, a send
context, a receive context, and a source (for the message envelope).  This
structure makes intra- and inter-communicators basically the same object.

The intra-communicator has the properties that: the send context and the
receive context are identical; the process is a member of the group; the
source is the rank of the process in the group.

The inter-communicator cannot be discussed sensibly without considering
processes in both the local and remote groups.  Imagine a process P in group
G which has an inter-communicator Cp, and a process Q in group H which has an
inter-communicator Cq.  (Note that G and H do not have to be distinct.)  The
inter-communicators have the properties that: the send context of Cp is
identical to the receive context of Cq, and is unique in H; the receive
context of Cp is identical to the send context of Cq, and is unique in G; the
group of Cp is H; the group of Cq is G; the source of Cp is the rank of P in
G, which is the group of Cq; the source of Cq is the rank of Q in H, which is
the group of Cp.

It is easy to see that, in terms of these fields, the intra-communicator is a
special case of the inter-communicator.  It has G = H and both contexts the
same.  This ensures that the point-to-point communication implementation for
intra-communication and inter-communication can be identical.
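Spelled out as a C structure, the representation just described might look as
follows.  The field types are assumptions (contexts are opaque in this
draft), and nothing here is mandated.

    /* Sketch only: one possible layout; not mandated by this draft. */
    struct mpi_comm_impl {
        void *group;         /* the group of the communicator: the remote
                                group for an inter-communicator, the
                                process's own group for an intra-comm    */
        void *send_context;  /* equal to recv_context for an intra-comm  */
        void *recv_context;
        int   source;        /* rank of the owning process, placed in
                                the message envelope                     */
    };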
y(this)d(facilit)o(y)i(to)d(user)h(programs)f(as)h(w)o(ell.)19 b(A)o(ttributes)11 b(are)g(lo)q(cal)h(to)e(the)h(pro)q(cess)h(and)f(are)g (not)f(included)75 337 y(if)j(the)f(descriptor)h(w)o(ere)f(someho)o(w)g(sen)o (t)g(to)f(another)h(pro)q(cess)1143 321 y Fb(1)1163 337 y Ft(.)19 b(This)13 b(facilit)o(y)h(is)e(in)o(tended)i(to)e(supp)q(ort)75 394 y(optimizations)22 b(suc)o(h)g(as)f(sa)o(ving)g(p)q(ersisten)o(t)h(comm)o (unication)g(handles)h(and)e(recording)h(top)q(ology-)75 450 y(based)c(decisions)h(b)o(y)e(adaptiv)o(e)h(algorithms.)26 b(Ho)o(w)o(ev)o(er,)17 b(attributes)g(are)h(propagated)e(in)o(ten)o(tionally) 75 507 y(b)o(y)f(sp)q(eci\014c)i(MPI)e(routines.)166 574 y(T)l(o)f (summarize,)g(cac)o(heing)h(is,)f(in)h(particular,)g(the)f(pro)q(cess)g(b)o (y)g(whic)o(h)h(implemen)o(tation-de\014ned)75 631 y(data)g(\(and)g(virtual)g (top)q(ology)g(data\))g(is)g(propagated)g(in)h(groups)f(and)g(comm)o (unicators.)166 781 y Fk(Discussion:)g Fj(A)o(ttribute)g(propagation)e(m)o (ust)g(b)q(e)h(discussed)i(carefully)m(.)75 1049 y Fi(4.9.1)49 b(F)o(unctionalit)o(y)75 1156 y Ft(MPI)15 b(pro)o(vides)h(the)f(follo)o(wing) h(services)g(related)g(to)e(cac)o(heing.)21 b(They)15 b(are)g(all)h(pro)q (cess-lo)q(cal.)75 1271 y Fh(MPI)p 160 1271 14 2 v 16 w(A)l(TTRIBUTE)p 425 1271 V 17 w(ALLOC\(n,handle)p 760 1271 V 18 w(a)o(rra)o(y)l(,len\))117 1359 y Fj(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)13 b(of)h(handles)g(to)g (allo)q(cate)117 1457 y(OUT)124 b Fh(handle)p 459 1457 V 17 w(a)o(rra)o(y)337 b Fj(p)q(oin)o(ter)10 b(to)f(arra)o(y)g(of)g(opaque)g (attribute)h(handling)e(structure)117 1554 y(OUT)124 b Fh(len)517 b Fj(length)14 b(of)f(eac)o(h)i(opaque)f(structure)75 1689 y Ft(Allo)q(cates)19 b(a)e(new)h(attribute,)g(so)f(user)h(programs)f(and)h (functionalit)o(y)h(la)o(y)o(ered)f(on)f(top)h(of)f(MPI)h(can)75 1746 y(access)d(attribute)g(tec)o(hnology)l(.)75 1861 y Fh(MPI)p 160 1861 V 16 w(A)l(TTRIBUTE)p 425 1861 V 17 w(FREE\(handle)p 691 1861 V 18 w(a)o(rra)o(y)l(,n\))117 1949 y Fj(IN)171 b Fh(handle)p 459 1949 V 17 w(a)o(rra)o(y)337 b Fj(arra)o(y)16 b(of)e(p)q(oin)o(ters)j(to)e (opaque)g(attribute)i(handling)d(struc-)905 2005 y(tures)117 2103 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)13 b(of)h(handles)g(to)g (deallo)q(cate)75 2238 y Ft(F)l(rees)h(attribute)g(handle.)75 2353 y Fh(MPI)p 160 2353 V 16 w(GET)p 264 2353 V 17 w(A)l(TTRIBUTE)p 530 2353 V 17 w(KEY\(k)o(eyval\))117 2441 y Fj(OUT)124 b Fh(k)o(eyval)455 b Fj(Pro)o(vide)14 b(the)h(in)o(teger)f(k)o(ey)g(v)n(alue)f(for)h(future)g (storing.)75 2577 y Ft(Generates)h(a)g(new)g(cac)o(he)h(k)o(ey)l(.)p 75 2661 720 2 v 127 2688 a Fr(1)144 2704 y Fa(The)d(deletion)i(of)e (\015atten/un\015atten)i(mak)o(es)f(this)f(p)q(oin)o(t)i(mo)q(ot.)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 19 22 bop 75 -100 a Fl(4.9.)29 b(CA)o(CHEING)1404 b Ft(19)75 45 y Fh(MPI)p 160 45 14 2 v 16 w(SET)p 259 45 V 17 
w(A)l(TTRIBUTE\(handle,)12 b(k)o(eyval,)f(attribute)p 993 45 V 18 w(val,)f(attribute)p 1251 45 V 18 w(len,)h(attribute)p 1510 45 V 19 w(destructo)o(r)p 1718 45 V 17 w(routine\))117 180 y Fj(IN)171 b Fh(handle)449 b Fj(opaque)14 b(attribute)g(handle)117 256 y(IN)171 b Fh(k)o(eyval)455 b Fj(The)15 b(in)o(teger)f(k)o(ey)g(v)n(alue)f(for)h(future)g(storing.)117 333 y(IN)171 b Fh(attribute)p 500 333 V 18 w(val)336 b Fj(attribute)15 b(v)n(alue)e(\(opaque)h(p)q(oin)o(ter\))117 410 y(IN)171 b Fh(attribute)p 500 410 V 18 w(len)336 b Fj(length)14 b(of)f(attribute)i(\(in) e(b)o(ytes\))117 486 y(IN)171 b Fh(attribute)p 500 486 V 18 w(destructo)o(r)p 707 486 V 17 w(routine)52 b Fj(What)14 b(one)g(calls)f(to)h (get)g(rid)g(of)f(this)h(attribute)g(later)75 612 y Ft(Stores)h(attribute)g (in)h(cac)o(he)f(b)o(y)g(k)o(ey)l(.)75 716 y Fh(MPI)p 160 716 V 16 w(TEST)p 290 716 V 16 w(A)l(TTRIBUTE\(handle,k)o(eyval,attribute)p 1000 716 V 20 w(ptr,len\))117 794 y Fj(IN)171 b Fh(handle)449 b Fj(opaque)14 b(attribute)g(handle)117 871 y(IN)171 b Fh(k)o(eyval)455 b Fj(The)15 b(in)o(teger)f(k)o(ey)g(v)n(alue)f(for)h(future)g(storing.)117 948 y(OUT)124 b Fh(attribute)p 500 948 V 18 w(ptr)335 b Fj(v)o(oid)13 b(p)q(oin)o(ter)h(to)g(attribute,)g(or)g(NULL)g(if)f(not)h(found)117 1024 y(OUT)124 b Fh(len)517 b Fj(length)14 b(in)f(b)o(ytes)i(of)e(attribute,) h(if)f(found.)75 1149 y Ft(Retriev)o(e)j(attribute)f(from)g(cac)o(he)g(b)o(y) g(k)o(ey)l(.)75 1254 y Fh(MPI)p 160 1254 V 16 w(DELETE)p 346 1254 V 16 w(A)l(TTRIBUTE\(handle,)i(k)o(eyval\))117 1332 y Fj(IN)171 b Fh(handle)449 b Fj(opaque)14 b(attribute)g(handle)117 1409 y(IN)171 b Fh(k)o(eyval)455 b Fj(The)15 b(in)o(teger)f(k)o(ey)g(v)n (alue)f(for)h(future)g(storing.)75 1534 y Ft(Delete)i(attribute)f(from)f(cac) o(he)i(b)o(y)f(k)o(ey)l(.)75 1658 y Fh(Example)75 1746 y Ft(Eac)o(h)k (attribute)g(consists)h(of)f(a)g(p)q(oin)o(ter)h(or)e(a)h(v)m(alue)i(of)e (the)g(same)g(size)i(as)e(a)g(p)q(oin)o(ter,)h(and)g(w)o(ould)75 1802 y(t)o(ypically)14 b(b)q(e)f(a)f(reference)h(to)e(a)h(larger)g(blo)q(c)o (k)h(of)f(storage)f(managed)h(b)o(y)h(the)f(mo)q(dule.)20 b(As)12 b(an)g(example,)75 1859 y(a)k(global)i(op)q(eration)e(using)i(cac)o(heing)f (to)f(b)q(e)h(more)g(e\016cien)o(t)g(for)f(all)h(con)o(texts)f(of)g(a)h (group)f(after)g(the)75 1915 y(\014rst)f(call)h(migh)o(t)f(lo)q(ok)h(lik)o(e) g(this:)147 2026 y Fm(static)23 b(int)g(gop_key_assigned)f(=)h(0;)96 b(/*)23 b(0)h(only)f(on)h(first)f(entry)g(*/)147 2083 y(static)g(int)g (gop_key;)190 b(/*)23 b(key)h(for)f(this)h(module's)e(stuff)i(*/)147 2195 y(efficient_global_op)d(\(comm,)i(...\))147 2252 y(void)g(*comm;)147 2308 y({)194 2365 y(struct)g(gop_stuff_type)f(*gop_stuff;)70 b(/*)24 b(whatever)f(we)g(need)h(*/)194 2421 y(void)47 b(*group)24 b(=)f(mpi_comm_group\(comm\);)194 2534 y(if)h(\(!gop_key_assigned\))117 b(/*)23 b(get)h(a)f(key)h(on)f(first)h(call)f(ever)g(*/)194 2591 y({)h(gop_key_assigned)e(=)h(1;)242 2647 y(if)h(\()f(!)h(\(gop_key)f(=)h (mpi_Get_Attribute_Key\(\))o(\))d(\))j({)290 2704 y(mpi_abort)e (\("Insufficient)g(keys)i(available"\);)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 
y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 20 23 bop 75 -100 a Ft(20)409 b Fl(SECTION)16 b(4.)35 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)242 45 y Fm(})194 102 y(})194 158 y(if)24 b(\(mpi_Test_Attribute)d(\(mpi_group_attr\(group\),gop_)o (key,&gop)o(_stuff\))o(\))194 214 y({)j(/*)g(This)f(module)g(has)g(executed)g (in)h(this)f(group)g(before.)314 271 y(We)g(will)h(use)f(the)g(cached)g (information)g(*/)194 327 y(})194 384 y(else)194 440 y({)h(/*)g(This)f(is)h (a)f(group)g(that)h(we)f(have)h(not)f(yet)h(cached)f(anything)f(in.)314 497 y(We)h(will)h(now)f(do)h(so.)266 553 y(*/)242 610 y(gop_stuff)f(=)g(/*)h (malloc)f(a)h(gop_stuff_type)e(*/)242 723 y(/*)i(...)f(fill)g(in)h (*gop_stuff)e(with)i(whatever)f(we)g(want)g(...)h(*/)242 835 y(mpi_set_attribute)e(\(mpi_group_attr\(group\),)e(gop_key,)j(gop_stuff,)958 892 y(gop_stuff_destructor\);)194 948 y(})194 1005 y(/*)h(...)f(use)h (contents)f(of)g(*gop_stuff)g(to)g(do)h(the)f(global)g(op)h(...)f(*/)170 1061 y(})170 1174 y(gop_stuff_destructor)f(\(gop_stuff\))70 b(/*)23 b(called)g(by)h(MPI)f(on)h(group)f(delete)g(*/)170 1231 y(struct)g(gop_stuff_type)f(*gop_stuff;)170 1287 y({)218 1344 y(/*)i(...)f(free)h(storage)e(pointed)h(to)h(by)g(gop_stuff)e(...)i(*/) 170 1400 y(})166 1614 y Fk(Discussion:)15 b Fj(The)e(cac)o(he)i(facilit)o(y)c (could)i(also)f(b)q(e)i(pro)o(vided)f(for)g(other)g(descriptors,)i(but)e(it)g (is)g(less)h(clear)75 1671 y(ho)o(w)d(suc)o(h)g(pro)o(vision)g(w)o(ould)f(b)q (e)h(useful.)18 b(It)11 b(is)g(suggested)h(that)f(this)g(issue)h(b)q(e)g (review)o(ed)g(in)e(reference)k(to)d(Virtual)75 1727 y(T)m(op)q(ologies.)75 1982 y Fo(4.10)59 b(F)n(o)n(rmalizing)21 b(the)e(Lo)r(osely)g(Synchronous)g (Mo)r(del)g(\(Usage,)g(Safet)n(y\))75 2095 y Fi(4.10.1)49 b(Basic)17 b(Statement)o(s)75 2190 y Ft(When)c(a)f(caller)i(passes)f(a)f(comm)o (unicator)h(\(whic)o(h)g(con)o(tains)g(a)f(con)o(text)g(and)h(group\))f(to)g (a)g(callee,)j(that)75 2247 y(comm)o(unicator)g(m)o(ust)h(b)q(e)g(free)g(of)f (side)i(e\013ects)e(throughout)g(execution)i(of)e(the)h(subprogram)f (\(quies-)75 2303 y(cen)o(t\).)26 b(This)18 b(pro)o(vides)g(one)f(mo)q(del)h (in)g(whic)o(h)h(libraries)g(can)e(b)q(e)h(written,)f(and)h(w)o(ork)e (\\safely)l(.")27 b(F)l(or)75 2360 y(libraries)13 b(so)f(designated,)h(the)f (callee)h(has)f(p)q(ermission)h(to)e(do)h(whatev)o(er)f(comm)o(unication)i (it)f(lik)o(es)h(with)75 2416 y(the)19 b(comm)o(unicator,)g(and)g(under)h (the)f(ab)q(o)o(v)o(e)g(guaran)o(tee)f(kno)o(ws)g(that)h(no)g(other)f(comm)o (unications)75 2473 y(will)e(in)o(terfere.)k(Since)c(w)o(e)e(p)q(ermit)h(the) f(creation)g(of)g(new)h(comm)o(unicators)f(without)g(sync)o(hronization)75 2529 y(\(assuming)h(preallo)q(cated)i(con)o(texts\),)c(this)j(do)q(es)g(not)e (imp)q(ose)i(a)f(signi\014can)o(t)h(o)o(v)o(erhead.)166 2591 y(This)k(form)e(of)h(safet)o(y)f(is)i(analogous)f(to)f(other)h(common)g (computer)g(science)h(usages,)g(suc)o(h)f(as)75 2647 y(passing)c(a)f (descriptor)h(of)f(an)h(arra)o(y)e(to)h(a)g(library)i(routine.)k(The)15 b(library)g(routine)g(has)g(ev)o(ery)f(righ)o(t)h(to)75 2704 y(exp)q(ect)h(suc)o(h)f(a)g(descriptor)h(to)f(b)q(e)g(v)m(alid)i(and)f(mo)q (di\014able.)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 
1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 21 24 bop 75 -100 a Fl(4.10.)28 b(F)o(ORMALIZING)12 b(THE)f(LOOSEL)l(Y)i(SYNCHR)o (ONOUS)g(MODEL)e(\(USA)o(GE,)f(SAFETY\))p Ft(21)75 45 y Fi(4.10.2)49 b(Mo)q(dels)18 b(of)e(Execution)75 138 y Ft(W)l(e)j(sa)o(y)f(that)f(a)i (parallel)h(pro)q(cedure)f(is)g Fn(active)g Ft(at)f(a)g(pro)q(cess)h(if)g (the)g(pro)q(cess)g(b)q(elongs)g(to)f(a)g(group)75 194 y(that)e(ma)o(y)h (collectiv)o(ely)i(execute)f(the)f(pro)q(cedure,)h(and)f(some)f(mem)o(b)q(er) h(of)g(that)f(group)h(is)g(curren)o(tly)75 251 y(executing)c(the)f(pro)q (cedure)g(co)q(de.)20 b(If)12 b(a)f(parallel)j(pro)q(cedure)e(is)h(activ)o(e) e(at)h(a)f(pro)q(cess,)h(then)g(this)h(pro)q(cess)75 307 y(ma)o(y)e(b)q(e)i (receiving)g(messages)e(p)q(ertaining)i(to)f(this)g(pro)q(cedure,)h(ev)o(en)f (if)g(it)g(do)q(es)h(not)e(curren)o(tly)h(execute)75 364 y(the)j(co)q(de)h (of)f(this)g(pro)q(cedure.)75 504 y Fh(Nonreentrant)i(pa)o(rallel)d(p)o(ro)q (cedures)75 597 y Ft(This)22 b(co)o(v)o(ers)f(the)h(case)g(where,)h(at)e(an)o (y)g(p)q(oin)o(t)h(in)h(time,)g(at)e(most)g(one)h(in)o(v)o(o)q(cation)g(of)f (a)g(parallel)75 653 y(pro)q(cedure)14 b(can)f(b)q(e)g(activ)o(e)g(at)f(an)o (y)h(pro)q(cess.)19 b(That)12 b(is,)i(concurren)o(t)e(in)o(v)o(o)q(cations)i (of)e(the)h(same)f(parallel)75 710 y(pro)q(cedure)k(ma)o(y)e(o)q(ccur)h(only) g(within)h(disjoin)o(t)f(groups)f(of)h(pro)q(cesses.)20 b(F)l(or)14 b(example,)h(all)h(in)o(v)o(o)q(cations)75 766 y(of)f(parallel)i(pro)q (cedures)f(in)o(v)o(olv)o(e)g(all)h(pro)q(cesses,)e(pro)q(cesses)h(are)f (single-threaded,)i(and)e(there)h(are)f(no)75 823 y(recursiv)o(e)h(in)o(v)o (o)q(cations.)166 883 y(In)21 b(suc)o(h)f(a)g(case,)h(a)f(con)o(text)f(can)i (b)q(e)f(statically)h(allo)q(cated)g(to)f(eac)o(h)g(pro)q(cedure.)36 b(The)20 b(static)75 939 y(allo)q(cation)h(can)e(b)q(e)h(done)g(in)h(a)e (pream)o(ble,)i(as)e(part)g(of)g(initialization)j(co)q(de.)34 b(Or,)20 b(it)g(can)f(b)q(e)i(done)75 996 y(a)d(compile/link)j(time,)e(if)g (the)g(implemen)o(tation)g(has)g(additional)h(mec)o(hanisms)f(to)e(reserv)o (e)i(con)o(text)75 1052 y(v)m(alues.)h(Comm)o(unicators)12 b(to)g(b)q(e)h(used)g(b)o(y)f(the)h(di\013eren)o(t)f(pro)q(cedures)i(can)e(b) q(e)h(build)i(in)e(a)f(pream)o(ble,)h(if)75 1108 y(the)f(executing)g(groups)f (are)h(statically)g(de\014ned;)i(if)e(the)f(executing)i(groups)e(c)o(hange)h (dynamically)l(,)i(then)75 1165 y(a)j(new)g(comm)o(unicator)f(has)h(to)f(b)q (e)i(built)g(whenev)o(er)f(the)g(executing)h(group)f(c)o(hanges,)f(but)h (this)h(new)75 1221 y(comm)o(unicator)d(can)h(b)q(e)g(built)h(using)f(the)f (same)g(preallo)q(cated)i(con)o(text.)j(If)c(the)f(parallel)i(pro)q(cedures) 75 1278 y(can)10 b(b)q(e)h(organized)g(in)o(to)f(libraries,)j(so)d(that)f (only)i(one)f(pro)q(cedure)i(of)e(eac)o(h)g(library)h(can)f(b)q(e)h (concurren)o(tly)75 1334 y(activ)o(e)k(at)g(eac)o(h)g(pro)q(cessor,)g(then)g (it)h(is)f(su\016cien)o(t)h(to)f(allo)q(cate)h(one)f(con)o(text)g(p)q(er)g (library)l(.)75 1475 y Fh(P)o(a)o(rallel)f(p)o(ro)q(cedures)i(that)g(a)o(re)f (nonreentrant)i(within)f(each)g(executing)h(group)75 1567 y 
This covers the case where, at any point in time, for each process group, there can be at most one active invocation of a parallel procedure by a process member. However, it might be possible that the same procedure is concurrently invoked in two partially (or completely) overlapping groups. For example, the same collective communication function may be concurrently invoked on two partially overlapping groups.

In such a case, a context is associated with each parallel procedure and each executing group, so that overlapping execution groups have distinct communication contexts. (One does not need a different context for each group; one merely needs a "coloring" of the groups, so that one can generate the communicators for each parallel procedure when the execution groups are defined.) Here, again, one only needs one context for each library, if no two procedures from the same library can be concurrently active in the same group.

Note that, for collective communication libraries, we do allow several concurrent invocations within the same group: a broadcast in a group may be started at a process before the previous broadcast in that group ended at another process. In such a case, one cannot rely on context mechanisms to disambiguate successive invocations of the same parallel procedure within the same group: the procedure needs to be implemented so as to avoid confusion. For example, for broadcast, one may need to carry additional information in messages, such as the broadcast root, to help in such disambiguation; one also relies on preservation of message order by MPI. With such an approach, we may be gaining performance, but we lose modularity. It is not sufficient to implement the parallel procedure so that it works correctly in isolation, when invoked only once; it needs to be implemented so that any number of successive invocations will execute correctly.
Of course, the same approach can be used for other parallel libraries.

Well-nested parallel procedures

Calls of parallel procedures are well nested if a new parallel procedure is always invoked in a subset of a group executing the same parallel procedure. Thus, processes that execute the same parallel procedure have the same execution stack.

In such a case, a new context needs to be dynamically allocated for each new invocation of a parallel procedure. However, a stack mechanism can be used for allocating new contexts. Thus, a possible mechanism is to allocate first a large number of contexts (up to the upper bound on the depth of nested parallel procedure calls), and then use a local stack management of these contexts on each process to create a new communicator (using MPI_COMM_MAKE) for each new invocation.

The General case

In the general case, there may be multiple concurrently active invocations of the same parallel procedure within the same group; invocations may not be well-nested. A new context needs to be created for each invocation. It is the user's responsibility to make sure that, if two distinct parallel procedures are invoked concurrently on overlapping sets of processes, then context allocation or communicator creation is properly coordinated.

4.11  Motivating Examples

Discussion: The intra-communication examples were first presented at the June MPI meeting; the inter-communication routines (when added) are new.

4.11.1  Current Practice #1

Example #1a:

int me, size;
...
mpi_init();
mpi_comm_rank(MPI_COMM_ALL, &me);
mpi_comm_size(MPI_COMM_ALL, &size);

printf("Process %d size %d\n", me, size);
...
mpi_end();

Example #1a is a do-nothing program that initializes itself legally, refers to the "all" communicator, and prints a message. This example does not imply that MPI supports printf-like communication itself.
Example #1b:

int me, size;
...
mpi_init();
mpi_comm_rank(MPI_COMM_ALL, &me);   /* local */
mpi_comm_size(MPI_COMM_ALL, &size); /* local */

if((me % 2) == 0)
   mpi_send(..., MPI_COMM_ALL, ((me + 1) % size));
else
   mpi_recv(..., MPI_COMM_ALL, ((me - 1 + size) % size));

...
mpi_end();

Example #1b schematically illustrates message exchanges between "even" and "odd" processes in the "all" communicator.

4.11.2  Current Practice #2

void *data;
int me;
...
mpi_init();
mpi_comm_rank(MPI_COMM_ALL, &me);

if(me == 0)
{
   /* get input, create buffer ``data'' */
   ...
}

mpi_broadcast(MPI_COMM_ALL, 0, data);

...
mpi_end();

This example illustrates the use of a collective communication.

4.11.3  (Approximate) Current Practice #3

int me;
void *grp0, *grprem, *commslave;
...
mpi_init();
mpi_comm_rank(MPI_COMM_ALL, &me);  /* local */
mpi_local_subgroup(MPI_GROUP_ALL, 1, ``[0]'', &grp0);  /* local */
mpi_group_difference(MPI_GROUP_ALL, grp0, &grprem);  /* local */
mpi_comm_make(MPI_COMM_ALL, grprem, &commslave);

if(me != 0)
{
   /* compute on slave */
   ...
   mpi_reduce(commslave, ...);
   ...
}
/* zero falls through immediately to this reduce, others do later... */
mpi_reduce(MPI_COMM_ALL, ...);

This example illustrates how a group consisting of all but the zeroth process of the "all" group is created, and then how a communicator is formed (commslave) for that new group. The new communicator is used in a collective call, and all processes execute a collective call in the MPI_COMM_ALL context. This example illustrates how the two communicators (which possess distinct contexts) protect communication. That is, communication in MPI_COMM_ALL is insulated from communication in commslave, and vice versa.

In summary, for communication with "group safety," contexts within communicators must be distinct.

4.11.4  Example #4

The following example is meant to illustrate "safety" between point-to-point and collective communication. MPI guarantees that a single communicator can do safe point-to-point and collective communication.

#define TAG_ARBITRARY 12345
#define SOME_COUNT       50
int me;
int len;
void *contexts;
void *subgroup;

...
mpi_init();
mpi_contexts_alloc(MPI_COMM_ALL, 1, &contexts, &len);
mpi_local_subgroup(MPI_GROUP_ALL, 4, ``[2,4,6,8]'', &subgroup);  /* local */
mpi_group_rank(subgroup, &me);   /* local */

if(me != MPI_UNDEFINED)
{
   mpi_comm_bind(subgroup, context, &the_comm);  /* local */

   /* asynchronous receive: */
   mpi_irecv(..., MPI_SRC_ANY, TAG_ARBITRARY, the_comm);
}

for(i = 0; i < SOME_COUNT; i++)
   mpi_reduce(the_comm, ...);

4.11.5  Library Example #1

The main program:

int done = 0;
user_lib_t *libh_a, *libh_b;
void *dataset1, *dataset2;
...
mpi_init();
...
init_user_lib(MPI_COMM_ALL, &libh_a);
init_user_lib(MPI_COMM_ALL, &libh_b);
...
user_start_op(libh_a, dataset1);
user_start_op(libh_a, dataset2);
...
while(!done)
{
   /* work */
   ...
   mpi_reduce(MPI_COMM_ALL, ...);
   ...
   /* see if done */
   ...
}
user_end_op(libh_a);
user_end_op(libh_b);

The user library initialization code:

void init_user_lib(void *comm, user_lib_t **handle)
{
   user_lib_t *save;
   void *context;
   void *group;
   int len;

   user_lib_initsave(&save);  /* local */
   mpi_comm_group(comm, &group);
   mpi_contexts_alloc(comm, 1, &context, &len);
   mpi_comm_dup(comm, context, save -> comm);

   /* other inits */
   *handle = save;
}

Notice that the communicator comm passed to the library is needed to allocate new contexts.

User start-up code:
y({)170 2534 y(int)i(me,)f(done)h(=)f(0;)170 2591 y (mpi_comm_rank\(comm,)f(&me\);)170 2647 y(if\(me)i(==)f(0\))266 2704 y(while\(!done\))1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 27 30 bop 75 -100 a Fl(4.11.)29 b(MOTIV)-5 b(A)l(TING)15 b(EXAMPLES)1056 b Ft(27)266 45 y Fm({)361 102 y(mpi_recv\(...,)22 b(comm,)i(MPI_SRC_ANY\);) 361 158 y(...)266 214 y(})170 271 y(else)170 327 y({)266 384 y(/*)f(work)h(*/)266 440 y(mpi_send\(...,)e(comm,)h(0\);)266 497 y(....)170 553 y(})170 610 y(MPI_SYNC\(comm\);)70 b(/*)24 b(include/no)e(safety)h(for)h(safety/no)e(safety)h(*/)75 666 y(})75 764 y Ft(The)16 b(ab)q(o)o(v)o(e)g(example)g(is)h(really)g(t)o(w)o(o)d (examples,)j(dep)q(ending)h(on)d(whether)i(or)e(not)g(y)o(ou)h(include)i (rank)75 820 y(3)e(in)h Fm(list)p 267 820 15 2 v 17 w(b)p Ft(.)23 b(This)17 b(example)g(illustrates)g(that,)f(despite)h(con)o(texts,)f (subsequen)o(t)h(calls)g(to)f Fm(lib)p 1766 820 V 17 w(call)75 876 y Ft(with)22 b(the)f(same)g(con)o(text)f(need)i(not)f(b)q(e)h(safe)f (from)f(one)h(another)g(\(\\bac)o(k)g(masking"\).)37 b(Safet)o(y)20 b(is)75 933 y(realized)g(if)e(the)37 b Fh(MPI)p 474 933 14 2 v 16 w(SYNC)19 b Ft(is)f(added.)30 b(What)17 b(this)i(demonstrates)e(is)i (that)e(libraries)j(ha)o(v)o(e)e(to)f(b)q(e)75 989 y(written)e(carefully)l(,) i(ev)o(en)e(with)h(con)o(texts.)166 1047 y(Algorithms)i(lik)o(e)g(\\com)o (bine")g(ha)o(v)o(e)f(strong)f(enough)i(source)g(selectivit)o(y)h(so)e(that)f (they)i(are)f(in-)75 1103 y(heren)o(tly)k(OK.)g(So)g(are)g(m)o(ultiple)h (calls)g(to)e(a)g(t)o(ypical)i(tree)f(broadcast)f(algorithm)g(with)h(the)g (same)75 1160 y(ro)q(ot.)27 b(Ho)o(w)o(ev)o(er,)17 b(m)o(ultiple)j(calls)f (to)e(a)h(t)o(ypical)g(tree)g(broadcast)f(algorithm)h({)g(with)g(di\013eren)o (t)g(ro)q(ots)75 1216 y(|)h(could)i(break.)31 b(Therefore,)19 b(suc)o(h)h(algorithms)f(w)o(ould)g(ha)o(v)o(e)g(to)f(utilize)j(the)f(tag)e (to)g(k)o(eep)i(things)75 1273 y(straigh)o(t.)f(All)c(of)g(the)f(foregoing)g (is)h(a)f(discussion)i(of)e(\\collectiv)o(e)i(calls")g(implemen)o(ted)g(with) f(p)q(oin)o(t)g(to)75 1329 y(p)q(oin)o(t)k(op)q(erations.)28 b(MPI)18 b(implemen)o(tations)h(ma)o(y)f(or)f(ma)o(y)g(not)h(implemen)o(t)h (collectiv)o(e)h(calls)f(using)75 1386 y(p)q(oin)o(t-to-p)q(oin)o(t)e(op)q (erations.)22 b(These)16 b(algorithms)g(are)f(used)i(to)e(illustrate)i(the)f (issues)h(of)e(correctness)75 1442 y(and)g(safet)o(y)l(,)g(indep)q(enden)o(t) i(of)e(ho)o(w)g(MPI)g(implemen)o(ts)h(its)g(collectiv)o(e)h(calls.)75 1569 y Fi(4.11.7)49 b(Inter-Comm)o(unication)14 b(Examples)75 1656 y Fh(Example)f(1:)19 b(Three-Group)e(\\Pip)q(eline")75 1744 y Ft(+|||+)k(+|||+)g(+|||+)g(|)f(|)f(|)h(|)f(|)h(|)f(|)g(Group)g(0)g(|)g (<|{>)h(|)f(Group)g(1)g(|)75 1801 y(<|{>)c(|)h(Group)f(2)g(|)g(|)h(|)g(|)f(|) h(|)f(|)h(+|||+)h(+|||+)g(+|||+)166 1858 y(Groups)k(0)g(and)h(1)f(comm)o (unicate.)39 b(Groups)21 b(1)g(and)h(2)f(comm)o(unicate.)39 b(Therefore,)23 
b(group)e(0)75 1914 y(requires)e(one)f(in)o(ter-comm)o (unicator,)h(group)e(1)h(requires)h(t)o(w)o(o)e(in)o(ter-comm)o(unicators,)h (and)g(group)g(2)75 1971 y(requires)d(1)f(in)o(ter-comm)o(unicator.)20 b(Note)14 b(that)g(the)h(sync)o(hronous)f(in)o(ter-comm)o(unicator)h (constructor)75 2027 y(\()p Fh(MPI)p 178 2027 V 15 w(COMM)p 335 2027 V 17 w(PEER)p 464 2027 V 17 w(MAKE\(\))p Ft(\))9 b(can)i(b)q(e)h (safely)e(used)i(here)f(since)h(there)e(is)i(no)e(cyclic)j(comm)o(unication.) 170 2139 y Fm(void)24 b(*)f(myComm;)453 b(/*)24 b(intra-communicator)d(of)j (local)f(sub-group)f(*/)170 2195 y(void)i(*)f(myFirstComm;)786 b(/*)24 b(inter-communicator)d(*/)170 2252 y(void)j(*)f(mySecondComm;)237 b(/*)24 b(second)f(inter-communicator)e(\(group)i(B)h(only\))f(*/)170 2308 y(int)h(membershipKey;)170 2365 y(int)g(subGroupLeaders[3];)170 2478 y(MPI_INIT\(\);)170 2534 y(...)170 2647 y(/*)g(User)f(code)h(must)f (generate)g(membershipKey)f(in)h(the)h(range)f([0,)h(1,)f(2])h(*/)170 2704 y(membershipKey)f(=)g(...)h(;)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 28 31 bop 75 -100 a Ft(28)414 b Fl(SECTION)16 b(4.)30 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)170 102 y Fm(/*)24 b(Build)f (intra-communicator)f(for)h(local)g(sub-group)g(and)g(get)h(group)f(leaders)g (*/)170 158 y(/*)h(of)g(each)f(sub-group)g(\(relative)f(to)i(MPI_COMM_ALL\).) 
MPI_COMM_SPLITL(MPI_COMM_ALL, membershipKey, 3, subGroupLeaders, &myComm);

/* Build inter-communicators.  Tags are hard-coded. */
if (membershipKey == 0)
{                         /* Group 0 communicates with group 1. */
   MPI_COMM_PEER_MAKE(myComm, MPI_COMM_ALL, subGroupLeaders[1], 10, 1,
                      &myFirstComm);
}
else if (membershipKey == 1)
{              /* Group 1 communicates with groups 0 and 2. */
   MPI_COMM_PEER_MAKE(myComm, MPI_COMM_ALL, subGroupLeaders[0], 10, 1,
                      &myFirstComm);
   MPI_COMM_PEER_MAKE(myComm, MPI_COMM_ALL, subGroupLeaders[2], 21, 1,
                      &mySecondComm);
}
else if (membershipKey == 2)
{                         /* Group 2 communicates with group 1. */
   MPI_COMM_PEER_MAKE(myComm, MPI_COMM_ALL, subGroupLeaders[1], 21, 1,
                      &myFirstComm);
}

Example 2: Three-Group "Ring"

+-----------------------------------------------------------+
|                                                            |
|    +---------+         +---------+         +---------+    |
|    |         |         |         |         |         |    |
+--> | Group 0 | <-----> | Group 1 | <-----> | Group 2 | <--+
     |         |         |         |         |         |
     +---------+         +---------+         +---------+

Groups 0 and 1 communicate. Groups 1 and 2 communicate. Groups 0 and 2 communicate. Therefore, each requires two inter-communicators. Note that the "loosely synchronous" inter-communicator constructor (MPI_COMM_PEER_MAKE_START() and MPI_COMM_PEER_MAKE_FINISH()) is the best choice here due to the cyclic communication.

void * myComm;            /* intra-communicator of local sub-group */
void * myFirstComm;       /* inter-communicators */
void * mySecondComm;
make_id firstMakeID, secondMakeID;       /* handles for "FINISH" */
y(/*)g(User)f(code)h(must)f(generate)g(membershipKey)f(in)h(the)h(range)f ([0,)h(1,)f(2])h(*/)170 440 y(membershipKey)f(=)g(...)h(;)170 553 y(/*)g(Build)f(intra-communicator)f(for)h(local)g(sub-group)g(and)g(get)h (group)f(leaders)g(*/)170 610 y(/*)h(of)g(each)f(sub-group)g(\(relative)f(to) i(MPI_COMM_ALL\).)e(*/)170 666 y(MPI_COMM_SPLITL\(MPI_COMM_ALL,)e (membershipKey,)i(3,)i(subGroupLeaders,)e(&myComm\);)170 779 y(/*)i(Build)f(inter-communicators.)45 b(Tags)24 b(are)f(hard-coded.)f(*/)170 835 y(if)i(\(membershipKey)e(==)i(0\))266 892 y({)525 b(/*)23 b(Group)h(0)f(communicates)f(with)i(groups)f(1)h(and)f(2.)h(*/)266 948 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[2],)g(20,)862 1005 y(1,)i(&firstMakeID\);)266 1061 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[1],)g(10,)862 1118 y(1,)i(&secondMakeID\);)266 1174 y(})170 1231 y(else)g(if)f(\(membershipKey)f(==)i(1\))266 1287 y({)525 b(/*)23 b(Group)h(1)f(communicates)f(with)i(groups)f(0)h(and)f (2.)h(*/)266 1344 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[0],)g(10,)862 1400 y(1,)i(&firstMakeID\);)266 1456 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[2],)g(21,)862 1513 y(1,)i(&secondMakeID\);)266 1569 y(})170 1626 y(else)g(if)f(\(membershipKey)f(==)i(2\))266 1682 y({)525 b(/*)23 b(Group)h(2)f(communicates)f(with)i(groups)f(0)h(and)f (1.)h(*/)266 1739 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[1],)g(21,)862 1795 y(1,)i(&firstMakeID\);)266 1852 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[0],)g(20,)862 1908 y(1,)i(&secondMakeID\);)266 1965 y(})170 2077 y(/*)g(Everyone)f(has)g(the)h(same)f("FINISH")g(code...)g (*/)170 2134 y(MPI_COMM_PEER_MAKE_FINISH\(fir)o(stMakeID)o(,)e (&myFirstComm\);)170 2190 y(MPI_COMM_PEER_MAKE_FINISH\(sec)o(ondMakeI)o(D,)g (&mySecondComm\);)75 2308 y Fh(Example)13 b(3:)19 b(Three-Group)e(\\Pip)q (eline")e(Using)h(Name)d(Service)409 2393 y Fm(+---------+)213 b(+---------+)h(+---------+)409 2450 y(|)h(|)f(|)h(|)g(|)f(|)409 2506 y(|)24 b(Group)f(0)h(|)f(<----->)g(|)h(Group)f(1)h(|)g(<----->)f(|)g (Group)h(2)f(|)409 2563 y(|)215 b(|)f(|)h(|)g(|)f(|)409 2619 y(+---------+)f(+---------+)h(+---------+)166 2704 y Ft(Groups)21 b(0)g(and)h(1)f(comm)o(unicate.)39 b(Groups)21 b(1)g(and)h(2)f(comm)o (unicate.)39 b(Therefore,)23 b(group)e(0)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 30 33 bop 75 -100 a Ft(30)414 b Fl(SECTION)16 b(4.)30 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)75 45 y Ft(requires)k(one)f(in)o (ter-comm)o(unicator,)h(group)e(1)h(requires)h(t)o(w)o(o)e(in)o(ter-comm)o (unicators,)h(and)g(group)g(2)75 102 y(requires)d(1)f(in)o(ter-comm)o (unicator.)20 b(Note)14 b(that)g(the)h(sync)o(hronous)f(in)o(ter-comm)o (unicator)h(constructor)75 158 y(\()p Fh(MPI)p 178 158 14 2 v 15 w(COMM)p 335 158 V 17 w(NAME)p 481 158 V 17 w(MAKE\(\))p 
void * myComm;            /* intra-communicator of local sub-group */
void * myFirstComm;       /* inter-communicator */
void * mySecondComm;      /* second inter-communicator (group B only) */

MPI_INIT();
...

/* User builds intra-communicator myComm describing the local sub-group */
/* using any appropriate MPI routine(s).  (For example, myComm could */
/* have been passed in as an argument to a user subroutine.) */
myComm = ... ;

/* Build inter-communicators.  Group membership conditions must be */
/* provided by the user. */
if ()
{                         /* Group 0 communicates with group 1. */
   MPI_COMM_NAME_MAKE(myComm, "Connect 10", 1, &myFirstComm);
}
else if ()
{              /* Group 1 communicates with groups 0 and 2. */
   MPI_COMM_NAME_MAKE(myComm, "Connect 10", 1, &myFirstComm);
   MPI_COMM_NAME_MAKE(myComm, "Connect 21", 1, &mySecondComm);
}
else if ()
{                         /* Group 2 communicates with group 1. */
   MPI_COMM_NAME_MAKE(myComm, "Connect 21", 1, &myFirstComm);
}

Example 4: Three-Group "Ring" Using Name Service

+-----------------------------------------------------------+
|                                                            |
|    +---------+         +---------+         +---------+    |
|    |         |         |         |         |         |    |
+--> | Group 0 | <-----> | Group 1 | <-----> | Group 2 | <--+
     |         |         |         |         |         |
     +---------+         +---------+         +---------+

Groups 0 and 1 communicate. Groups 1 and 2 communicate. Groups 0 and 2 communicate. Therefore, each requires two inter-communicators. Note that the "loosely synchronous" inter-communicator constructor (MPI_COMM_NAME_MAKE_START() and MPI_COMM_NAME_MAKE_FINISH()) is the best choice here due to the cyclic communication.
void * myComm;            /* intra-communicator of local sub-group */
void * myFirstComm;       /* inter-communicators */
void * mySecondComm;
make_id firstMakeID, secondMakeID;       /* handles for "FINISH" */

MPI_INIT();
...

/* User builds intra-communicator myComm describing the local sub-group */
/* using any appropriate MPI routine(s).  (For example, myComm could */
/* have been passed in as an argument to a user subroutine.) */
myComm = ... ;

/* Build inter-communicators.  Group membership conditions must be */
/* provided by the user. */
if ()
{              /* Group 0 communicates with groups 1 and 2. */
   MPI_COMM_NAME_MAKE_START(myComm, "Connect 20", 1, &firstMakeID);
   MPI_COMM_NAME_MAKE_START(myComm, "Connect 10", 1, &secondMakeID);
}
else if ()
{              /* Group 1 communicates with groups 0 and 2. */
   MPI_COMM_NAME_MAKE_START(myComm, "Connect 10", 1, &firstMakeID);
   MPI_COMM_NAME_MAKE_START(myComm, "Connect 21", 1, &secondMakeID);
}
else if ()
{              /* Group 2 communicates with groups 0 and 1. */
   MPI_COMM_NAME_MAKE_START(myComm, "Connect 21", 1, &firstMakeID);
   MPI_COMM_NAME_MAKE_START(myComm, "Connect 20", 1, &secondMakeID);
}

/* Everyone has the same "FINISH" code... */
MPI_COMM_NAME_MAKE_FINISH(firstMakeID, &myFirstComm);
MPI_COMM_NAME_MAKE_FINISH(secondMakeID, &mySecondComm);

From owner-mpi-context@CS.UTK.EDU Tue Aug 10 02:26:24 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA18316; Tue, 10 Aug 93 02:26:24 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04194; Tue, 10 Aug 93 02:25:54 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 10 Aug 1993 02:25:53 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from chenas.inria.fr by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA04181; Tue, 10 Aug 93 02:25:50 -0400 Received: from irgate (irgate.ifp.fr) by chenas.inria.fr (5.65c8d/92.02.29) via Fnet-EUnet id AA08128; Tue, 10 Aug 1993 08:25:44 +0200 (MET) Received: from irsun21.ifp.fr by irgate, Tue, 10 Aug 93 08:25:37 +0200 Received: by irsun21.ifp.fr, Tue, 10 Aug 93 08:25:36 +0200
Date: Tue, 10 Aug 93 08:25:36 +0200 From: stoessel@irsun21.ifp.fr (Alain Stoessel) Message-Id: <9308100625.AA16081@irsun21.ifp.fr> To: mpi-context@cs.utk.edu Subject: Great !!!!! Cc: tony@Aurora.CS.MsState.edu

Hi context subcommittee, I have just read the last draft for groups, contexts and communicators (revised version). It is the first time that I understand how I will be able to use these new features. (The examples help me a lot.) So don't rebuild it fully at the next meeting....

Alain

+-----------------------+------------------------------+
| Alain STOESSEL        | Institut Francais du Petrole |
| Tel: 33.1.47.52.71.33 | Parallel processing group    |
| Fax: 33.1.47.52.70.22 | 1-4 Av de Bois-Preau         |
|                       | 92506 RUEIL-MALMAISON        |
+-----------------------+------------------------------+
| Email: stoessel@irsun21.ifp.fr                        |
+-----------------------+------------------------------+

From owner-mpi-context@CS.UTK.EDU Tue Aug 10 02:57:32 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA18605; Tue, 10 Aug 93 02:57:32 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05815; Tue, 10 Aug 93 02:57:02 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 10 Aug 1993 02:57:02 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05807; Tue, 10 Aug 93 02:57:00 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA00257; Tue, 10 Aug 93 01:56:59 CDT Date: Tue, 10 Aug 93 01:56:59 CDT From: Tony Skjellum Message-Id: <9308100656.AA00257@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu, stoessel@irsun21.ifp.fr Subject: Re: Great !!!!!

Thanks, we will try not to rebuild it :-)

- Tony

----------------------------------------------
From owner-mpi-context@CS.UTK.EDU Fri Aug 13 18:06:22 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA04172; Fri, 13 Aug 93 18:06:22 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20982; Fri, 13 Aug 93 18:05:03 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 13 Aug 1993 18:04:58 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA20889; Fri, 13 Aug 93 18:04:37 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA09934; Fri, 13 Aug 93 17:04:34 CDT Date: Fri, 13 Aug 93 17:04:34 CDT From: Tony Skjellum Message-Id: <9308132204.AA09934@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: Interim (August 13) draft (post first reading)

Dear colleagues, We successfully made the first reading of the context chapter, with significant improvements and "tuning." Here is an intermediate draft, reflecting much of the tuning made during the first reading of the context proposal. It is still "rough" in some respects, but it is provided to you now to give you as much opportunity as possible to see what we have evolved. Over the next four weeks, we will be doing the following:

i) bullet-proofing the caching section
ii) adding further explanations to the intra-communication section
iii) leaving inter-communication as is
iv) fixing (and possibly extending) the examples

At this point we seek to clarify and polish, rather than significantly change, this chapter. Comments?
- Tony Skjellum

[PostScript attachment: mpi-report.dvi, 35 pages, dvips 5.47 output]
DI<001FF000007FFE0001FFFF0003F83F0007E00F000FC007 001F0002001F0000003E0000003C0000007C0000007800000078000000F8000000F0000000F000 0000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F80000007800000078 0000007C0000003C0000003E0000001F0000001F0000800FC0018007E0078003F81F8001FFFF00 007FFE00001FF00019257DA31F>II II<000FF800 007FFF0001FFFF8003F81F8007E007800FC003800F8001001F0000003E0000003C0000007C0000 007800000078000000F8000000F0000000F0000000F0000000F0000000F0000000F0000000F000 FFC0F000FFC0F000FFC0F80003C0780003C0780003C07C0003C03C0003C03E0003C01F0003C00F 8003C00FC003C007E003C003F81FC001FFFFC0007FFF80000FF8001A257DA321>I73 D76 DII<001FC000007FF00001FFFC0003 F07E0007C01F000F800F801F0007C01E0003C03C0001E03C0001E0780000F0780000F0780000F0 70000070F0000078F0000078F0000078F0000078F0000078F0000078F0000078F0000078F00000 78780000F0780000F0780000F07C0001F03C0001E03E0003E01E0003C01F0007C00F800F8007C0 1F0003F07E0001FFFC00007FF000001FC0001D257DA324>II82 D<01FF0007FFE00FFFF01F83F03E00F03C00707C00207800 007800007800007800007C00007C00003E00003F00001FE0001FFC0007FF0003FFC0007FE00007 E00003F00001F00000F80000F8000078000078000078400078C000F8E000F0F001F0FC03E07F07 E03FFFC00FFF8001FC0015257EA31B>I<07F01FFC3FFE3C3E301F200F000F000F000F03FF1FFF 3FFF7F0FFC0FF80FF00FF00FF81FFC3F7FFF7FFF3FCF10167E9517>97 DI<01FC0007FF001FFF803F07803E03 807C0080780000F80000F00000F00000F00000F00000F00000F00000F800007800007C00C03E01 C03F07C01FFFC007FF8001FC0012167E9516>I<0003C00003C00003C00003C00003C00003C000 03C00003C00003C00003C00003C00003C00003C003F3C00FFFC01FFFC03F0FC07E07C07C03C078 03C0F803C0F003C0F003C0F003C0F003C0F003C0F003C0F803C07803C07C07C07C07C03F1FC01F FFC00FFFC007F3C012237EA219>I<03F00007FC001FFE003E0F003C0780780380780380F001C0 FFFFC0FFFFC0FFFFC0F00000F00000F000007000007800007800003C01801F07800FFF8007FF00 01FC0012167E9516>I<007F00FF01FF03E303C007800780078007800780078007800780FFF8FF F8FFF8078007800780078007800780078007800780078007800780078007800780078007800780 0780102380A20F>I105 D108 DII<01FC0007FF000FFF801F07C03C01E07800F07800F0700070F00078F00078 F00078F00078F00078F000787800F07800F07C01F03E03E01F07C00FFF8007FF0001FC0015167F 9518>II114 D<07F81FFE3FFF7C1F7807780178007C007F003FF01FFC0FFE01FE003F001F000FC00FE0 1FFC3FFFFE7FFC0FF010167F9513>I<0F000F000F000F000F000F00FFF8FFF8FFF80F000F000F 000F000F000F000F000F000F000F000F000F000F000F000F080F9C0FFC07F803E00E1C7F9B12> III<7801F07C01E03E03C01E07C00F0780078F0007DE0003FC0001FC0000F80000 700000F80001FC0003DC00039E00078F000F07801E07801E03C03C01E07800F0F800F815168095 16>120 DI E /Fj 72 124 df<003F1F8001FFFFC003C3F3C00783E3C00F03E3C00E01C0000E01C0000E01C0 000E01C0000E01C0000E01C000FFFFFC00FFFFFC000E01C0000E01C0000E01C0000E01C0000E01 C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0000E01C0007F 87FC007F87FC001A1D809C18>11 D<003F0001FF8003C3C00783C00F03C00E03C00E00000E0000 0E00000E00000E0000FFFFC0FFFFC00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0 0E01C00E01C00E01C00E01C00E01C00E01C07F87F87F87F8151D809C17>I<003F83F00001FFDF F80003E1FC3C000781F83C000F01F03C000E01E03C000E00E000000E00E000000E00E000000E00 E000000E00E00000FFFFFFFC00FFFFFFFC000E00E01C000E00E01C000E00E01C000E00E01C000E 00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C000E00E01C00 0E00E01C000E00E01C007FC7FCFF807FC7FCFF80211D809C23>14 D<70F8F8F8F8F8F8F8F87070 7070707070707070700000000070F8F8F870051D7D9C0C>33 D<7070F8F8FCFCFCFC7C7C0C0C0C 0C1C1C181838387070F0F060600E0D7F9C15>I<70F8FCFC7C0C0C1C183870F060060D7D9C0C> 39 D<01C00380038007000E000C001C001800380038007000700070007000E000E000E000E000 
E000E000E000E000E000E000E000E000E000E00070007000700070003800380018001C000C000E 0007000380038001C00A2A7D9E10>II<018001C0018001806186 F99F7DBE1FF807E007E01FF87DBEF99F61860180018001C0018010127E9E15>I<70F8F8F87818 1818383070E060050D7D840C>44 DI<70F8F8F87005057D840C>I< 07E00FF01C38381C781E700E700EF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF00FF0 0F700E700E781E381C1C380FF007E0101B7E9A15>48 D<030007003F00FF00C700070007000700 07000700070007000700070007000700070007000700070007000700070007000700FFF8FFF80D 1B7C9A15>I<0FE03FF878FC603EF01EF81FF80FF80F700F000F001F001E003E003C007800F001 E001C0038007000E031C0338037006FFFEFFFEFFFE101B7E9A15>I<0FE03FF8387C783E7C1E78 1E781E001E003C003C00F807F007E00078003C001E000F000F000F700FF80FF80FF81EF01E787C 3FF80FE0101B7E9A15>I<001C00001C00003C00007C00007C0000DC0001DC00039C00031C0007 1C000E1C000C1C00181C00381C00301C00601C00E01C00FFFFC0FFFFC0001C00001C00001C0000 1C00001C00001C0001FFC001FFC0121B7F9A15>I<01F807FC0F8E1E1E3C1E381E781E78007000 F080F7F8FFFCFC1CF81EF80FF00FF00FF00FF00FF00F700F700F781E381E1E3C0FF807E0101B7E 9A15>54 D<70F8F8F870000000000000000070F8F8F87005127D910C>58 D<70F8F8F870000000000000000070F8F8F878181818383070E060051A7D910C>I<1FE03FF878 7CE01EF01EF01EF01E603E007C00F801F001E003C0038003000300030003000300030000000000 0000000007000F800F800F8007000F1D7E9C14>63 D<00060000000F0000000F0000000F000000 1F8000001F8000001F8000003FC0000033C0000033C0000073E0000061E0000061E00000E1F000 00C0F00000C0F00001C0F8000180780001FFF80003FFFC0003003C0003003C0007003E0006001E 0006001E001F001F00FFC0FFF0FFC0FFF01C1C7F9B1F>65 DI<00 3FC18001FFF18003F07B800FC01F801F000F801E0007803C0003807C0003807800038078000180 F0000180F0000000F0000000F0000000F0000000F0000000F0000000F000000078000180780001 807C0001803C0003801E0003001F0007000FC00E0003F03C0001FFF000003FC000191C7E9B1E> I69 DI<003FC18001FFF18003F07B 800FC01F801F000F801E0007803C0003807C0003807800038078000180F0000180F0000000F000 0000F0000000F0000000F0000000F000FFF0F000FFF078000780780007807C0007803C0007801E 0007801F0007800FC00F8003F03F8001FFFB80003FE1801C1C7E9B21>III<1FFF1FFF007800780078007800780078007800780078 0078007800780078007800780078007800780078F878F878F878F8F8F1F07FE01F80101C7F9B15 >IIIII<003F800001FFF00003E0F80007001C000E000E001C0007003C00078038 000380780003C0700001C0F00001E0F00001E0F00001E0F00001E0F00001E0F00001E0F00001E0 F00001E0780003C0780003C0380003803C0007801E000F000E000E0007803C0003E0F80001FFF0 00003F80001B1C7E9B20>II82 D<07F1801FFD803C1F807007 80700380E00380E00180E00180F00000F80000FE00007FE0003FFC001FFE000FFF0000FF80000F 800007C00003C00001C0C001C0C001C0E001C0E00380F00780FE0F00DFFE00C7F800121C7E9B17 >I<7FFFFFC07FFFFFC0780F03C0700F01C0600F00C0E00F00E0C00F0060C00F0060C00F0060C0 0F0060000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000 000F0000000F0000000F0000000F0000000F0000000F0000000F000003FFFC0003FFFC001B1C7F 9B1E>IIII 89 D<18183C3C383870706060E0E0C0C0C0C0F8F8FCFCFCFC7C7C38380E0D7B9C15>92 D<183C387060E0C0C0F8FCFC7C38060D7E9C0C>96 D<1FE0003FF8003C3C003C1E00180E00000E 00001E0007FE003FFE007E0E00F80E00F80E00F00E60F00E60F81E607C7E607FFFC01FC7801312 7F9115>II<07F80FFC3E3C3C3C78187800F000F000F000 F000F000F000780078063C0E3F1C0FF807F00F127F9112>I<001F80001F800003800003800003 8000038000038000038000038000038000038007F3801FFF803E1F807C0780780380F80380F003 80F00380F00380F00380F00380F00380F003807807807C0F803E1F801FFBF007E3F0141D7F9C17 >I<07E01FF83E7C781C781EF01EFFFEFFFEF000F000F000F000780078063C0E3F1C0FF807F00F 127F9112>I<00FC03FE079E071E0F1E0E000E000E000E000E000E00FFE0FFE00E000E000E000E 
000E000E000E000E000E000E000E000E000E000E007FE07FE00F1D809C0D>I<07E7C01FFFC03C 3DC0781E00781E00781E00781E00781E00781E003C3C003FF80037E0007000007000007800003F FC003FFF007FFF807807C0F003C0E001C0E001C0F003C0F807C07C0F801FFE0007F800121B7F91 15>II<3C007C007C007C003C0000000000000000000000 0000FC00FC001C001C001C001C001C001C001C001C001C001C001C001C001C001C00FF80FF8009 1D7F9C0C>I<01C003E003E003E001C00000000000000000000000000FE00FE000E000E000E000 E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E0F0E0F1E0F3C0FF80 7E000B25839C0D>IIIII<03F0000FFC001E1E00380700780780700380F003C0F003C0F0 03C0F003C0F003C0F003C07003807807803807001E1E000FFC0003F00012127F9115>II<07F1801FF9803F1F803C0F80780780780380F00380F00380F00380F00380F00380F003 80F803807807807C0F803E1F801FFB8007E380000380000380000380000380000380000380001F F0001FF0141A7F9116>II<1FB07FF0F0F0E070E030F030F8007FC07FE01FF000F8C078 C038E038F078F8F0FFF0CFC00D127F9110>I<0C000C000C000C000C001C001C003C00FFE0FFE0 1C001C001C001C001C001C001C001C001C301C301C301C301C301E700FE007C00C1A7F9910>I< FC1F80FC1F801C03801C03801C03801C03801C03801C03801C03801C03801C03801C03801C0380 1C07801C0F801E1F800FFFF007F3F014127F9117>III<7F8FF07F8FF00F0F80070F00038E0001DC0001D80000F00000700000780000F800 01DC00038E00030E000707001F0780FF8FF8FF8FF81512809116>II<7FFC7FFC 783C707860F061E061E063C00780078C0F0C1E0C1E1C3C187818F078FFF8FFF80E127F9112>I< FFFFF0FFFFF01402808B15>I E /Fk 8 118 df<7CFEFEFEFEFE7C000000007CFEFEFEFEFE7C07 127D910D>58 D68 D<03FC001FFF003F1F007E1F007E1F00FC0E00FC0000FC0000FC0000FC00 00FC0000FC0000FC00007E01807F03803F87001FFE0003F80011127E9115>99 D<1E003F007F007F007F003F001E0000000000000000000000FF00FF001F001F001F001F001F00 1F001F001F001F001F001F001F001F001F00FFE0FFE00B1E7F9D0E>105 D110 D<01FC000FFF801F07C03E03E07C01 F07C01F0FC01F8FC01F8FC01F8FC01F8FC01F8FC01F87C01F07C01F03E03E01F07C00FFF8001FC 0015127F9118>I<1FF87FF87078E018E018F000FF80FFF07FF83FF80FFC007CC03CE01CE01CF8 78FFF8CFE00E127E9113>115 D117 D E /Fl 35 91 df<0007000E001C0038007000E001C001C00380070007000E000E001C001C00 1C00380038003800700070007000700070006000E000E000E000E000E000E000E000E000E00060 006000700070003000380038001C001C000E0007000300102E7CA112>40 D<00C000E00070003000380018001C001C000C000E000E000E0006000600060006000600060006 000E000E000E000E000E000E001C001C001C001C0038003800380070007000E000E001C001C003 80070007000E001C0038007000E0000F2E7FA112>I<3C3E7E7E3E060E0C0C1C3870E0C0070E7D 840D>44 D<7FE07FE0FFC00B037E8A0F>I<7878F8F87005057C840D>I<007E0001FF0003C38007 01C00E01C00E01E01C00E01C01E03C01E03C01E07801E07801E07801E07801E07801E0F003C0F0 03C0F003C0F003C0F003C0F00380F00780E00780E00700E00F00E00E00701E00701C003878003F F0000FC000131F7C9D17>48 D<000C001C00FC0FFC0F3800380038003800380078007000700070 0070007000F000E000E000E000E000E001E001C001C001C001C001C003C07FFEFFFE0F1E7C9D17 >I<007F0001FFC003C3E00701F00F01F00F80F00F01F00F01F00001E00003E00003C00007C000 1F8000FE0000FC00000F000007800007C00007C00003C00007C07007C0F807C0F807C0F807C0F0 0F80E00F80F01F00787E003FF8000FE000141F7D9D17>51 D<0001C00003C00003C0000780000F 80001B80003B8000738000638000C7000187000307000707000E07000C0700180E00300E00600E 00E00E00FFFFF0FFFFF0001E00001C00001C00001C00001C00003C00003C0003FFC003FFC0141E 7D9D17>I<03807003FFF003FFE003FF8007FE000600000600000600000600000600000E00000C 7E000DFF000F87800F03C00E01C00C01C00001E00001E00001E00003E07003E0F003C0F003C0F0 07C0C00780E00F00701E00787C003FF8000FE000141F7D9D17>I<001F80007FE001F0E003C0E0 0781E00F01E00F01E01E00001C00003C00003C300079FE007BFF007F07807E0780FC0380F803C0 
F803C0F003C0F003C0F007C0F00780E00780E00780E00780F00F00701E00701E003C7C003FF000 0FC000131F7C9D17>I<3000003FFFE07FFFE07FFFC0600380E00300C00700C00E00001C000038 0000700000600000E00001C0000380000380000700000700000F00000E00001E00001E00001C00 003C00003C00003C00003C00007C0000780000780000780000131F799D17>I<003F0000FFC003 E3E00780E00700F00E00700E00700E00700E00F00F00E00F81E00FE3C007FF0003FE0001FE0007 FF000F3F801E1FC03C0FC07803C07001C0F001C0E001C0E001C0E001C0E00380F00780780F003C 3E001FFC000FE000141F7D9D17>I<007E0001FF0007C7800F03C01E01C01E01E03C01E03C01E0 3C01E07C01E07801E07801E07801E07803E07803E07807C0380FC03C1FC01FFBC00FF3C0010780 000780000700000F00E01E00F01C00F03C00E07800E1F000FFC0003F0000131F7C9D17>I<0000 180000003800000038000000780000007C000000FC000000FC000001BC000001BC0000033C0000 033E0000061E0000061E00000C1E00000C1E0000181E0000181F0000300F0000300F0000600F00 007FFF0000FFFF0000C00F0001800780018007800300078003000780060007800E0007801F0007 C0FFC07FFCFFC07FFC1E207E9F22>65 D<0003FC08001FFE18007F073800FC03F801F001F803E0 00F807C000F00F8000700F0000701F0000703E0000703E0000703E0000607C0000007C0000007C 0000007C000000FC000000F8000000F8000000F8000000F80000C0780001C0780001807C000180 7C0003803C0007003E0006001F000E000F803C0007E0F00003FFE000007F80001D217B9F21>67 D<07FFFF0007FFFFE0003C01F0003C00F8007C007C0078003C0078001E0078001E0078001E0078 001F00F8001F00F0001F00F0001F00F0001F00F0001F00F0001F01F0001E01E0003E01E0003E01 E0003E01E0003C01E0007C03E0007803C000F003C000F003C001E003C003C003C00F8007C03F00 7FFFFC00FFFFE000201F7E9E23>I<07FFFFF807FFFFF8003C00F8003C0078007C003800780038 00780038007800380078003800780C3000F8183000F0180000F0180000F0380000FFF80000FFF8 0001F0700001E0300001E0300001E0301801E0303001E0003003E0003003C0006003C0006003C0 00E003C001C003C003C007C00FC07FFFFF80FFFFFF801D1F7E9E1F>I<07FFFFF807FFFFF8003C 00F8003C0078007C00380078003800780038007800380078003800780C3000F8183000F0180000 F0180000F0380000FFF80000FFF80001F0700001E0300001E0300001E0300001E0300001E00000 03E0000003C0000003C0000003C0000003C0000003C0000007C000007FFE0000FFFE00001D1F7E 9E1E>I<0003FC04000FFF0C003F079C00FC01FC01F000FC03E0007C07C000780F8000380F8000 381F0000383E0000383E0000383E0000307C0000007C0000007C0000007C000000FC000000F800 0000F8007FFCF8007FFCF80001E0780003E07C0003E07C0003C07C0003C03E0003C03E0007C01F 0007C00FC01FC007F07D8001FFF080007FC0001E217B9F24>I<07FFC7FFC007FFC7FFC0003C00 7800003C00F800007C00F800007800F000007800F000007800F000007800F000007801F00000F8 01F00000F001E00000F001E00000F001E00000FFFFE00000FFFFE00001F003E00001E003C00001 E003C00001E003C00001E003C00001E007C00003E007C00003C007800003C007800003C0078000 03C007800003C00F800007C00F80007FFCFFF800FFF8FFF800221F7E9E22>I<07FFE007FFE000 3C00003C00007C0000780000780000780000780000780000F80000F00000F00000F00000F00000 F00001F00001E00001E00001E00001E00001E00003E00003C00003C00003C00003C00003C00007 C000FFFC00FFFC00131F7F9E10>I<07FFF00007FFF000003C0000003C0000007C000000780000 0078000000780000007800000078000000F8000000F0000000F0000000F0000000F0000000F000 0001F0000001E0000001E0000001E0018001E0018001E0030003E0030003C0030003C0070003C0 060003C00E0003C01E0007C07E007FFFFC00FFFFFC00191F7E9E1C>76 D<07FC0000FFC007FC00 01FFC0003E0001F800003E0003F800007E0003F800006E0006F000006E0006F000006E000CF000 0067000CF00000670019F00000E70019F00000C70031E00000C70031E00000C70061E00000C380 61E00000C380C3E00001C380C3E00001838183C00001838183C0000181C303C0000181C303C000 0181C607C0000381C607C0000301CC0780000301CC0780000300F80780000300F80780000700F0 0F80000F80F00F80007FF0E0FFF800FFF0E1FFF8002A1F7E9E2A>I<07FC03FFC007FE03FFC000 
3E007C00003E003800007F003800006F003000006F803000006780300000678030000067C07000 00E3C0700000C3E0600000C1E0600000C1E0600000C1F0600000C0F0E00001C0F8E000018078C0 00018078C00001807CC00001803CC00001803FC00003801FC00003001F800003001F800003000F 800003000F800007000780000F800780007FF0070000FFF0030000221F7E9E22>I<0003F80000 1FFE00003C1F0000F0078001E003C003C001E0078001E00F8000F00F0000F01F0000F01E0000F8 3E0000F83C0000F87C0000F87C0000F87C0000F87C0000F8F80001F0F80001F0F80001F0F80001 F0F80003E0780003E0780007C07C0007C07C000F803C000F003E001E001E003C000F00780007C1 F00003FFC00000FE00001D217B9F23>I<07FFFF0007FFFFC0003C03E0003C01F0007C00F00078 00F8007800F8007800F8007800F8007800F800F801F000F001F000F001E000F003C000F00F8000 FFFE0001FFF80001E0000001E0000001E0000001E0000001E0000003E0000003C0000003C00000 03C0000003C0000003C0000007C000007FFC0000FFFC00001D1F7E9E1F>I<07FFFC0007FFFF00 003C07C0003C03E0007C01E0007801F0007801F0007801F0007801F0007801E000F803E000F003 C000F0078000F01F0000FFFC0000FFF00001F0780001E03C0001E03C0001E01C0001E01E0001E0 1E0003E03E0003C03E0003C03E0003C03E0003C03E0603C03E0E07C03F0C7FFC1F1CFFFC1FF800 0007F01F207E9E21>82 D<003F8C00FFCC01E1FC03C07C07803C07003C0F00380E00180E00180E 00180E00000F00000F80000FF00007FF0007FF8003FFC000FFE0000FE00001E00000F00000F000 00F06000E06000E06000E07000E07001C07803C0FC0780FE0F00EFFE00C3F80016217D9F19>I< 1FFFFFF81FFFFFF81E03C0F83803C0383807C03830078038700780186007801860078038600780 30C00F8030000F0000000F0000000F0000000F0000000F0000001F0000001E0000001E0000001E 0000001E0000001E0000003E0000003C0000003C0000003C0000003C0000003C0000007C00001F FFF0003FFFF0001D1F7B9E21>III<03FFC1FFC003FFC1FFC0003E00FC00001E007000001E006000000F00 C000000F01C000000F83800000078300000007C600000003CC00000003FC00000001F800000001 F000000000F000000000F800000001F800000003FC000000033C000000063C0000000C1E000000 1C1E000000381F000000300F000000600F800000C007800001C007C000038003C0000FC007E000 FFF01FFE00FFE01FFE00221F7F9E22>88 D<7FF803FF80FFF803FF0007C000F80007C000E00003 C000C00003E001C00001E003800001F003000000F006000000F80E000000F80C00000078180000 007C300000003C700000003E600000001EC00000001F800000000F800000000F000000000F0000 00000F000000000E000000001E000000001E000000001E000000001E000000001E000000001C00 0000003C00000003FFE0000007FFE00000211F7B9E22>I<03FFFFC003FFFFC003F0078007C00F 8007001F0007003E000E007C000C007C000C00F8000C01F0001803E0000003E0000007C000000F 8000001F0000001F0000003E0000007C000000F8030000F0030001F0030003E0060007C0060007 8006000F800E001F001C003E001C003C007C007C01FC00FFFFF800FFFFF8001A1F7D9E1C>I E /Fm 81 126 df<70F8F8F8F8F8F8F8F8F8F8F8F8F8F8F8F8F8000000000070F8F8F870051C77 9B18>33 D<4010F078F078F078E038E038E038E038E038E038E038E038E038E0380D0E7B9C18> I<078F00078F00078F00078F00078F00078F00078F00FFFFE0FFFFE0FFFFE07FFFE00F1E000F1E 000F1E000F1E000F1E000F1E007FFFE0FFFFE0FFFFE0FFFFE01E3C001E3C001E3C001E3C001E3C 001E3C001E3C00131C7E9B18>I<3803807C0780FE0780FE0F80EE0F00EE0F00EE1F00EE1E00EE 1E00FE3E00FE3C007C3C00387C0000780000780000F80000F00001F00001E00001E00003E00003 C00003C00007C0000783800787C00F8FE00F0FE00F0EE01F0EE01E0EE01E0EE03E0FE03C0FE03C 07C01C038013247E9F18>37 D<03C0000FF0000FF0001E78001E78001C38001C38001C78001C7B F01CF3F01FF3F01FE7800FC7800F87000F0F001F0F003F8E007FDE00FBDE00F1FC00E1FC00E0F8 00E0F870F0FC70F9FEF07FFFF03FCFE01F03C0141C7F9B18>I<387C7E7E3E0E0E1E1E3C7CF8F0 E0070E789B18>I<007000F003F007E00F800F001E003E003C007800780070007000F000F000E0 00E000E000E000E000E000F000F00070007000780078003C003E001E000F000F8007E003F000F0 00700C24799F18>II<01C00001C00001C00001C000C1C180F1C780F9CF807FFF001FFC0007F0 
0007F0001FFC007FFF00F9CF80F1C780C1C18001C00001C00001C00001C00011147D9718>I<00 F00000F00000F00000F00000F00000F00000F00000F000FFFFE0FFFFE0FFFFE0FFFFE000F00000 F00000F00000F00000F00000F00000F00000F00013147E9718>I<3C7E7F7F7F3F0F0F3EFEF8F0 080C788518>II<78FCFCFCFC780606778518>I<00 0380000780000780000F80000F00001F00001E00001E00003E00003C00007C0000780000780000 F80000F00001F00001E00003E00003C00003C00007C0000780000F80000F00000F00001F00001E 00003E00003C00003C00007C0000780000F80000F00000F00000E0000011247D9F18>I<01F000 07FC000FFE001F1F001C07003803807803C07001C07001C0E000E0E000E0E000E0E000E0E000E0 E000E0E000E0E000E0E000E0F001E07001C07001C07803C03803801C07001F1F000FFE0007FC00 01F000131C7E9B18>I<0380038007800F800F803F80FF80FB8063800380038003800380038003 800380038003800380038003800380038003800380FFFEFFFEFFFE0F1C7B9B18>I<07F8001FFE 003FFF807C0FC0F803C0F001E0F001E0F000E0F000E00000E00001E00001E00003C00003C00007 80000F80001F00003E0000FC0001F80003E00007C0000F80003F00E07E00E0FFFFE0FFFFE0FFFF E0131C7E9B18>I<07F8001FFE007FFF807C0FC07803C07801C07801C00001C00003C00007C000 0F8003FF0003FE0003FF80000FC00003C00001E00001E00000E00000E0F000E0F001E0F001E0F0 03C0FC0FC07FFF801FFE0007F800131C7E9B18>I<001F00003F00007F0000770000F70001E700 01C70003C7000787000707000E07001E07003C0700380700780700F00700FFFFF8FFFFF8FFFFF8 00070000070000070000070000070000070000FFF800FFF800FFF8151C7F9B18>I<3FFF803FFF 803FFF803800003800003800003800003800003800003800003800003BFC003FFF003FFF803E07 C03803C00001E00001E00000E06000E0F000E0F001E0F003E0F007C07C0F807FFF001FFE0007F8 00131C7E9B18>I<007E0003FF8007FFC00FC3C01F03C03E03C03C03C0780000780000F00000F3 F800EFFE00FFFF80FE0F80FC03C0F803E0F001E0F000E0F000E0F000E07000E07801E07803E03C 07C03E0F801FFF000FFE0003F800131C7E9B18>I<03F8000FFE001FFF003E0F803803807001C0 7001C07001C07001C03803803C07801FFF0007FC000FFE001F1F003C07807001C0F001E0E000E0 E000E0E000E0E000E07001C07803C03E0F801FFF000FFE0003F800131C7E9B18>56 D<78FCFCFCFC78000000000000000078FCFCFCFC780614779318>58 D<3C7E7E7E7E3C00000000 000000003C7E7E7E7E3E1E1E3CFCF8E0071A789318>I<000380000F80001F80003F8000FE0001 FC0003F8000FE0001FC0003F8000FE0000FC0000FC0000FE00003F80001FC0000FE00003F80001 FC0000FE00003F80001F80000F8000038011187D9918>I<7FFFC0FFFFE0FFFFE0FFFFE0000000 000000000000000000FFFFE0FFFFE0FFFFE07FFFC0130C7E9318>II<00700000F80000F80000 D80000D80001DC0001DC0001DC00018C00038E00038E00038E00038E0003060007070007070007 07000707000FFF800FFF800FFF800E03800E03801C01C01C01C0FF8FF8FF8FF8FF8FF8151C7F9B 18>65 DI<01FCE007FFE00FFFE01F07E03E03E03C01E07801E078 00E07000E0F00000F00000E00000E00000E00000E00000E00000E00000F00000F000007000E078 00E07800E03C01E03E03E01F07C00FFF8007FF0001FC00131C7E9B18>IIII<01F9C007FFC00FFFC0 1F0FC03E07C03C03C07803C07801C07001C0F00000F00000E00000E00000E00000E00000E01FF0 E01FF0F01FF0F001C07003C07803C07803C03C07C03E0FC01F1FC00FFFC007FDC001F9C0141C7E 9B18>III75 DIII<0FF8003FFE007FFF00780F00700700F00780E00380E00380E00380E00380E00380E0 0380E00380E00380E00380E00380E00380E00380E00380E00380E00380E00380F0078070070078 0F007FFF003FFE000FF800111C7D9B18>II82 D<07F3801FFF803FFF807C1F80F80780F00780E00380E00380F00000F00000780000 7F00003FF0000FFE0001FF80001FC00003C00001E00001E00000E0E000E0E000E0E001E0F003E0 FC07C0FFFF80FFFF00E7FC00131C7E9B18>III87 D89 D91 D II95 D<0E1E3E7C78F0F0E0E0F8FCFC7C38070E78 9E18>I<1FE0007FF8007FFE00783E00780F00000F0000070001FF000FFF003FFF007F0700F807 00F00700E00700E00700F00F00F83F007FFFF03FFFF00FE1F014147D9318>II<01FE000FFF801FFF803F07807C0780780000F00000F00000E00000E00000E00000 
E00000F00000F000007801C07C03C03F07C01FFF800FFF0001FC0012147D9318>I<003F80003F 80003F8000038000038000038000038000038003F3800FFF801FFF803E1F807C0F80780780F007 80F00380E00380E00380E00380E00380F00780F00780780F80781F803E3F803FFFF80FFBF803E3 F8151C7E9B18>I<03F0000FFE001FFF003E1F807C0780780380F003C0F003C0E001C0FFFFC0FF FFC0FFFFC0F00000F000007801C07C03C03F07C01FFF800FFF0001FC0012147D9318>I<001FC0 007FE000FFE001F1E001E0C001C00001C00001C000FFFFC0FFFFC0FFFFC001C00001C00001C000 01C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C0007FFF007FFF00 7FFF00131C7F9B18>I<03F1F00FFFF81FFFF81E1F383C0F003C0F003807003807003807003C0F 003C0F001E1E001FFE003FFC003FF0003C00003C00001FFF003FFFC07FFFF07801F0F00078F000 78E00038E00038F00078F800F87E03F03FFFE00FFF8003FE00151F7F9318>II<03800007C00007C00007C000038000000000000000000000000000FFC000FFC000 FFC00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C00001C000 01C00001C000FFFF80FFFF80FFFF80111D7C9C18>I107 DIII<01F0000FFE001FFF003E0F803803807001C07001C0E000E0E000E0 E000E0E000E0E000E0F001E07001C07803C03C07803E0F801FFF000FFE0001F00013147E9318> II<03F3800FFF801FFF803E1F807C0F80780780F0 0780F00780E00380E00380E00380E00380F00780F00780780F807C0F803E1F801FFF800FFF8003 F380000380000380000380000380000380000380000380003FF8003FF8003FF8151E7E9318>I< FF87F0FF9FF8FFFFF803FC7803F07803E00003E00003C00003C000038000038000038000038000 038000038000038000038000FFFF00FFFF00FFFF0015147F9318>I<0FF7003FFF007FFF00F81F 00F00F00E00700F00700FC00007FF0001FFC0007FF00001F80E00780E00380F00380F80780FC0F 80FFFF00FFFE00E7F80011147D9318>I<038000038000038000038000038000FFFFC0FFFFC0FF FFC00380000380000380000380000380000380000380000380000380000380400380E00380E003 C1E003E3E001FFC000FF80007E0013197F9818>IIII<7F9FF07F9FF07F9FF0070700 078E00039E0001DC0001F80000F80000700000F00000F80001DC00039E00038E000707000F0780 FF8FF8FF8FF8FF8FF815147F9318>II<7FFFF07F FFF07FFFF07003E07007C0700F80001F00003E00007C0000F80001F00003E00007C0000F80701F 00703E00707C0070FFFFF0FFFFF0FFFFF014147F9318>I<0007E0003FE0007FE000FC0000F000 00E00000E00000E00000E00000E00000E00000E00000E00000E00000E00001E0007FE000FFC000 FFC0007FE00001E00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E000 00F00000FC00007FE0003FE00007E013247E9F18>III E /Fn 13 119 df<1C3C3C3C3C0C1C18383070E0C0C0060E7D840E>44 D<70F8F8F0F005057B840E>46 D<00F9C003FFC0078FC00F0F801E07803C07803C0780780700780700780F00F80F00F00E00F00E 00F01E30F01E70F03C60707C6078FEE03FCFC01F078014147C9317>97 D<07803F803F80070007 000F000F000E000E001E001E001C001CF83FFC3F1E3E0E3C0F780F780F700F700F701FF01FE01E E01EE03CE03CE07870F071E03FC01F0010207B9F15>I<007E0003FF0007C3800F07801E07803C 07803C0200780000780000780000F80000F00000F00000F00000F000007003007807003C3E003F FC000FE00011147C9315>I<00FE0003FF0007C7800F03801E01803C03807C0700781F007FFC00 7FF000F80000F00000F00000F000007000007003007807003C3E001FFC000FE00011147C9315> 101 D<003E7000FFF001E3F003C3E00781E00F01E00F01E01E01C01E01C01E03C03E03C03C0380 3C03803C07803C07803C0F001C1F001E3F000FFF0007CE00000E00001E00001E00001C00703C00 787800F8F0007FE0003F8000141D7E9315>103 D<00F000F000F000E000000000000000000000 000000000F001F803BC071C063C06380E380078007000F000F000E001E001C701C603C6038E03D C01F800F000C1F7D9E0E>105 D<1E1F003F3F8077F1C063E3C063C3C0E783C0C7838007000007 00000F00000F00000E00000E00001E00001E00001C00001C00003C00003C000038000012147D93 13>114 D<01FC03FE07870F0F0E0F0E0F1E000F800FF80FFC03FC007E001E701EF01CF01CF03C F0787FF01FC010147D9313>I<01C003C003C003800380078007800700FFF0FFF00F000E000E00 
1E001E001C001C003C003C00380038007870786070E070C079C03F801F000C1C7C9B0F>I<0F00 701F80F03BC0F071C0E063C0E06381E0E381E00781C00701C00703C00F03C00E03800E03800E07 8C0E079C0E0F180E0F180F3FB807FBF003E1E016147D9318>I<0F01C01F83E03BC3E071C3E063 C1E06380E0E380C00780C00700C00701C00F01800E01800E01800E03000E03000E07000E0E000F 1C0007F80003F00013147D9315>I E /Fo 46 123 df<001E003C007800F001F001E003C007C0 07800F800F001F001F001E003E003E003E007C007C007C007C007C007800F800F800F800F800F8 00F800F800F800F800F800F800F800F800F800F80078007C007C007C007C007C003E003E003E00 1E001F001F000F000F80078007C003C001E001F000F00078003C001E0F3D7CAC17>40 DI44 DII<007F000001FFC00007FFF0000FFF F8000FC1F8001F007C003F007E003E003E003C001E007C001F007C001F007C001F0078000F00F8 000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80 F8000F80F8000F80F8000F80F8000F80F8000F8078000F007C001F007C001F007C001F003E003E 003E003E003F007E001F80FC000FC1F8000FFFF80007FFF00001FFC000007F000019297EA71E> 48 D<00180000380000F80007F800FFF800FFF800F8F80000F80000F80000F80000F80000F800 00F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F800 00F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F80000F8007FFFF0 7FFFF07FFFF014287BA71E>I<00FE0003FFC007FFE00FFFF01F03F83C00FC38007E78003E7000 3EF0001FF0001F60001F20001F00001F00001F00001F00003E00003E00007C00007C0000F80001 F00001E00003C0000780000F00001E00003C0000780000F00001E00003C0000780000F00001E00 003C00007FFFFF7FFFFF7FFFFF7FFFFF18287EA71E>I<007F000001FFC00007FFF0000FFFF800 1FC1F8003E007C003C003E0078003E0038003E0010003E0000003E0000003E0000003C0000007C 000000FC000001F8000007F00000FFE00000FFC00000FFE00000FFF0000001FC0000007C000000 3E0000001F0000001F0000000F8000000F8000000F8000000F8000000F8040000F8060001F00F0 001F00F8003F007E007E003F81FC001FFFF8000FFFF00003FFE000007F000019297EA71E>I<00 03F0000007F0000005F000000DF000000DF000001DF0000039F0000039F0000079F0000079F000 00F1F00000F1F00001E1F00003E1F00003E1F00007C1F00007C1F0000F81F0000F81F0001F01F0 001F01F0003E01F0007C01F0007C01F000F801F000FFFFFF80FFFFFF80FFFFFF80FFFFFF800001 F0000001F0000001F0000001F0000001F0000001F0000001F0000001F0000001F0000001F00019 277EA61E>I<3FFFFC3FFFFC3FFFFC3FFFFC3E00003E00003E00003E00003E00003E00003E0000 3E00003E00003E00003E3F003EFFC03FFFE03FFFF03FE1F83F807C3F003E3E003E00003E00001F 00001F00001F00001F00001F00001F00001F20001F60003E70003EF8007C7C00FC3F03F81FFFF0 0FFFE007FF8000FE0018287EA61E>I<000FF000003FFC0000FFFC0001FFFC0003F80C0007E000 000FC000000F8000001F0000001E0000003E0000003C0000007C0000007C0000007C3FE000F8FF F000F9FFF800FBFFFC00FF807E00FF003E00FE003F00FC001F00FC001F00FC000F80F8000F80F8 000F80F8000F80F8000F8078000F807C000F807C000F807C000F003E001F003E001F001F003E00 1F807C000FC1FC0007FFF80003FFF00001FFC000007F000019297EA71E>II<00 7F000001FFC00007FFF0000FFFF8001FC1FC003F007E003E003E007E003F007C001F007C001F00 7C001F007C001F007C001F003E003E003E003E001F007C000FC1F80007FFF00003FFE00003FFE0 000FFFF8001FC1FC003F007E003E003E007C001F007C001F00F8000F80F8000F80F8000F80F800 0F80F8000F80F8000F807C001F007C001F007E003F003F007E001FC1FC000FFFF80007FFF00003 FFE000007F000019297EA71E>I<007F000001FFC00003FFE0000FFFF0000FC1F8001F007C003E 007C007C003E007C001E007C001F00F8001F00F8001F00F8000F00F8000F80F8000F80F8000F80 F8000F80F8001F807C001F807C001F807E003F803E007F803F00FF801FFFEF800FFFCF8007FF8F 8003FE1F0000001F0000001F0000001E0000003E0000003E0000007C0000007C000000F8001801 F0001E07E0003FFFC0001FFF80000FFE000003F8000019297EA71E>I<0001FF00000FFFE0003F 
FFF8007FFFF801FE01F803F8003007E0001007C000000F8000001F8000001F0000003E0000003E 0000007C0000007C0000007C0000007C000000F8000000F8000000F8000000F8000000F8000000 F8000000F8000000F8000000F80000007C0000007C0000007C0000007C0000003E0000003E0000 001F0000001F8000000F80000007C0000007E0000403F8001C01FE00FC007FFFFC003FFFF8000F FFE00001FF001E2B7CA926>67 D69 DI<0001FF00000FFFE0003FFFFC007FFFFE01FE01FE03F8003E07F0000C07C000000F 8000001F8000001F0000003E0000003E0000007C0000007C0000007C0000007C000000F8000000 F8000000F8000000F8000000F8000000F8000000F8001FFEF8001FFEF8001FFE7C001FFE7C0000 3E7C00003E7C00003E3E00003E3E00003E1F00003E1F80003E0F80003E07C0003E07F0003E03F8 003E01FE00FE007FFFFE003FFFFC000FFFE00001FF001F2B7CA928>I73 D76 DI<0001FC0000 000FFF8000003FFFE00000FFFFF80001FE03FC0003F800FE0007E0003F000FC0001F800F80000F 801F000007C01F000007C03E000003E03E000003E07C000001F07C000001F07C000001F0780000 00F0F8000000F8F8000000F8F8000000F8F8000000F8F8000000F8F8000000F8F8000000F8F800 0000F8F8000000F8FC000001F87C000001F07C000001F07E000003F03E000003E03E000003E01F 000007C01F80000FC00F80000F800FC0001F8007F0007F0003F800FE0001FE03FC0000FFFFF800 003FFFE000000FFF80000001FC0000252B7DA92C>79 D<007FC00001FFF80007FFFE000FFFFF00 1FC07F003F000F007E0006007C000000F8000000F8000000F8000000F8000000F8000000FC0000 007C0000007E0000007F0000003FE000001FFE00000FFFC00007FFF00001FFF800003FFC000003 FE0000007F0000001F8000001F8000000FC0000007C0000007C0000007C0000007C0000007C000 0007C000000F8060000F80F0001F00FC003F00FF80FE007FFFFC001FFFF80007FFE00000FF8000 1A2B7DA921>83 D85 D<00FE0007FF801FFFC03FFF E03E03F03801F02001F80000F80000F80000F80000F80000F8007FF807FFF81FFFF83FE0F87F00 F8FC00F8F800F8F800F8F800F8FC01F87E07F87FFFF83FFFF81FFCF80FE0F8151B7E9A1D>97 D<007F8001FFE007FFF80FFFFC1FC07C1F001C3E00087C00007C00007C0000F80000F80000F800 00F80000F80000F80000F800007C00007C00007E00003E00001F000C1FC07C0FFFFC07FFFC01FF F0007F80161B7E9A1B>99 D<00003E00003E00003E00003E00003E00003E00003E00003E00003E 00003E00003E00003E00003E00003E00FC3E03FF3E07FFFE0FFFFE1FC1FE3F007E3E003E7C003E 7C003EFC003EF8003EF8003EF8003EF8003EF8003EF8003EF8003EFC003E7C003E7C003E3E007E 3F00FE1FC1FE0FFFFE07FFBE03FF3E00FC3E17297EA81F>I<007E0003FF8007FFC00FFFE01F83 F03F00F03E00787C00787C003878003CFFFFFCFFFFFCFFFFFCFFFFFCF80000F80000F800007800 007C00007C00003E00003F000C1FC07C0FFFFC07FFFC01FFF0007F80161B7E9A1B>I<001FC000 7FC000FFC001FFC003E00003C00007C00007C00007C00007C00007C00007C00007C00007C000FF FE00FFFE00FFFE0007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007 C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007 C00012297FA812>I<00F8078003FE7FC00FFFFFC01FFFFFC01F07C0003E03E0003E03E0007C01 F0007C01F0007C01F0007C01F0007C01F0007C01F0003E03E0003E03E0001F07C0001FFFC0003F FF80003BFE000038F8000078000000780000003C0000003FFFC0003FFFF8001FFFFC001FFFFE00 3FFFFF007C007F00F8001F80F8000F80F8000F80F8000F80FC001F807E003F003F80FE003FFFFE 000FFFF80007FFF00000FF80001A287E9A1E>III108 D II<007F000001FFC00007FFF0000FFFF8001FC1FC003F007E003E003E00 7C001F007C001F0078000F00F8000F80F8000F80F8000F80F8000F80F8000F80F8000F80F8000F 807C001F007C001F007E003F003E003E003F007E001FC1FC000FFFF80007FFF00001FFC000007F 0000191B7E9A1E>II114 D<03FC001FFF803FFFC07FFFC07C07C0F80080F80000F80000F80000FC00007F80007FF8003FFE 001FFF0007FF8000FFC0000FE00007E00003E00003E04003E0E007E0FC0FC0FFFFC07FFF801FFE 0003F800131B7E9A17>I<07C00007C00007C00007C00007C00007C00007C000FFFFC0FFFFC0FF FFC007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007C00007 
C00007C00007C00007C00007C00007C00007C04007E1C003FFE003FFE001FF8000FC0013227FA1 16>III<7C000FC03E001F803F001F001F803E000F807C0007C0FC0003E0F800 01F1F00001FBE00000FFC000007FC000003F8000001F0000001F0000003F8000007FC00000FBC0 0000F3E00001F1F00003E0F80007C07C000F807C000F803E001F001F003E000F807E000FC0FC00 07E01B1B809A1C>120 DII E /Fp 8 117 df<0003FFC000003FFFF80000FFFFFE0001FE07FF8003F001FFC007C000FFE00FF8007F F00FFC007FF01FFC007FF81FFE007FF81FFE007FF81FFE007FF81FFE007FF81FFE007FF80FFC00 7FF80FFC007FF007F800FFF000C000FFE0000000FFE0000001FFC0000001FF80000003FF000000 07FE0000003FF800000FFFE000000FFFC000000FFFFC00000007FF00000001FFC00000007FE000 00007FF00000003FF80000003FFC0000001FFE0000001FFE0000001FFF0200001FFF1FC0001FFF 3FE0001FFF7FF0001FFFFFF8001FFFFFF8001FFFFFF8001FFFFFF8001FFEFFF8001FFEFFF8003F FE7FF0003FFC7FE0007FF83FC0007FF83FE000FFF01FFC07FFE007FFFFFF8003FFFFFE00007FFF F8000007FF800028377CB631>51 D<0003FF800380003FFFF8078000FFFFFE0F8003FFFFFF9F80 07FE00FFFF800FF8001FFF801FE00003FF801FC00001FF803F800000FF807F8000007F807F0000 003F807F0000001F80FF0000001F80FF0000000F80FF0000000F80FF8000000F80FF8000000780 FFC000000780FFE000000780FFF800000000FFFF000000007FFFF00000007FFFFF8000007FFFFF FC00003FFFFFFF80001FFFFFFFE0001FFFFFFFF0000FFFFFFFFC0007FFFFFFFE0003FFFFFFFF00 00FFFFFFFF80003FFFFFFF80000FFFFFFFC00000FFFFFFC0000007FFFFE00000007FFFE0000000 07FFE000000001FFF0000000007FF0000000003FF0F00000003FF0F00000001FF0F00000001FF0 F00000000FF0F00000000FF0F80000000FF0F80000000FE0FC0000000FE0FC0000001FE0FE0000 001FC0FF0000003FC0FFC000003F80FFF000007F00FFFC0001FE00FFFFC00FFC00FCFFFFFFF800 F83FFFFFF000F007FFFFC000E0007FFC00002C3B7BBA37>83 D<0001FFF800000FFFFF00003FFF FFC000FFC03FE001FF003FF007FE007FF00FFC007FF00FF8007FF01FF8007FF03FF0007FF03FF0 003FE07FF0001FC07FE0000F807FE0000000FFE0000000FFE0000000FFE0000000FFE0000000FF E0000000FFE0000000FFE0000000FFE0000000FFE0000000FFE00000007FE00000007FF0000000 7FF00000003FF00000003FF80000781FF80000780FFC0000F80FFE0001F007FF0003E003FF8007 C000FFF03F80003FFFFF00000FFFFC000001FFE00025267DA52C>99 D<0001FFC000001FFFF800 007FFFFE0001FFC1FF8003FF007FC007FE003FE00FFC001FF01FF8000FF01FF8000FF83FF00007 F87FF00007F87FF00007FC7FE00007FC7FE00003FCFFE00003FCFFE00003FCFFFFFFFFFCFFFFFF FFFCFFFFFFFFFCFFE0000000FFE0000000FFE0000000FFE0000000FFE00000007FE00000007FE0 0000007FF00000003FF000003C3FF000003C1FF800007C1FF80000FC0FFC0001F807FE0003F003 FF0007E000FFE03FC0003FFFFF80000FFFFC000000FFE00026267DA52D>101 D<01F80007FC000FFF001FFF001FFF801FFF801FFF801FFF801FFF801FFF801FFF000FFF0007FC 0001F80000000000000000000000000000000000000000000000000000000000FF00FFFF00FFFF 00FFFF00FFFF0007FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF 0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF0003FF 0003FF0003FF0003FF0003FF0003FF00FFFFF8FFFFF8FFFFF8FFFFF8153D7DBC1B>105 D<00FE007FE000FFFE03FFF800FFFE0FFFFE00FFFE1F83FF00FFFE3E01FF0007FE7801FF8003FE F000FF8003FFE000FFC003FFC000FFC003FFC000FFC003FF8000FFC003FF8000FFC003FF0000FF C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF 0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FF C003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF0000FFC003FF 0000FFC003FF0000FFC0FFFFFC3FFFFFFFFFFC3FFFFFFFFFFC3FFFFFFFFFFC3FFFFF30267CA537 >110 D<0000FFC00000000FFFFC0000003FFFFF000000FFC0FFC00001FE001FE00007FC000FF8 0007F80007F8000FF00003FC001FF00003FE003FF00003FF003FE00001FF007FE00001FF807FE0 0001FF807FE00001FF807FE00001FF80FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FF 
C0FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC0FFE00001FFC07FE00001FF807FE0 0001FF807FE00001FF803FF00003FF003FF00003FF001FF00003FE000FF80007FC000FF80007FC 0007FC000FF80003FE001FF00000FFC0FFC000003FFFFF0000000FFFFC00000001FFE000002A26 7DA531>I<0007800000078000000780000007800000078000000F8000000F8000000F8000000F 8000001F8000001F8000003F8000003F8000007F800000FF800001FF800007FF80001FFFFFF0FF FFFFF0FFFFFFF0FFFFFFF001FF800001FF800001FF800001FF800001FF800001FF800001FF8000 01FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF800001FF80 0001FF800001FF800001FF803C01FF803C01FF803C01FF803C01FF803C01FF803C01FF803C01FF 803C00FF807800FFC078007FC0F8007FF1F0003FFFE0000FFFC00001FF001E377EB626>116 D E /Fq 1 98 df<001800001800001800003C00003C00004E00004E00004E0000870000870001 87800103800103800201C00201C003FFC00400E00400E00800700800701800703C0078FE01FF18 177F961C>97 D E /Fr 10 58 df<1F003F8060C04040C060C060C060C060C060C060C060C060 60C060C03F801F000B107F8F0F>48 D<18007800F8009800180018001800180018001800180018 0018001800FF80FF8009107E8F0F>I<3F00FFC0F3E0F0E0F0E000E000E001E001C007800F001C 0038607060FFC0FFC00B107F8F0F>I<1F007F8071C079C071C003C00F800F8001C000E060E0F0 E0F0E0F1C07FC03F000B107F8F0F>I<070007000F001F001B003B0033006300E300FFE0FFE003 00030003001FE01FE00B107F8F0F>I<61807F807F007C00600060006F807FC079E070E000E0E0 E0E0E0E1C0FFC03F000B107F8F0F>I<0F801FC039C071C06000C200FFC0FFC0E0E0C060C060C0 6060E071C03FC01F000B107F8F0F>I<60007FE07FE0C0E0C1C00380070006000E000E000C001C 001C001C001C001C001C000B117E900F>I<1F003F8071C060C070C07DC03F803F807FC0E3E0C0 E0C060C060F1E07FC01F000B107F8F0F>I<1F007F8071C0E0C0C060C060C060E0E07FE07FE008 6000C071C073807F803E000B107F8F0F>I E /Fs 1 59 df<70F8F8F87005057C840D>58 D E /Ft 81 125 df<001FC3F0007FEFF801F0FE7803C0FC780780F87807007000070070000700 700007007000070070000700700007007000FFFFFF80FFFFFF8007007000070070000700700007 007000070070000700700007007000070070000700700007007000070070000700700007007000 0700700007007000070070007FE3FF007FE3FF001D20809F1B>11 D<001F8000FFC001F1E003C1 E00781E00701E0070000070000070000070000070000070000FFFFE0FFFFE00700E00700E00700 E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700E00700 E07FC3FE7FC3FE1720809F19>I<001FE000FFE001F1E003C1E00781E00700E00700E00700E007 00E00700E00700E00700E0FFFFE0FFFFE00700E00700E00700E00700E00700E00700E00700E007 00E00700E00700E00700E00700E00700E00700E00700E00700E07FE7FE7FE7FE1720809F19>I< 001FC1FC00007FE7FE0001F0FF0F0003C0FC0F000780F80F000700F00F00070070000007007000 000700700000070070000007007000000700700000FFFFFFFF00FFFFFFFF000700700700070070 070007007007000700700700070070070007007007000700700700070070070007007007000700 7007000700700700070070070007007007000700700700070070070007007007007FE3FE3FF07F E3FE3FF02420809F26>I<7038F87CFC7EFC7E7C3E0C060C060C061C0E180C381C7038F0786030 0F0E7E9F17>34 D<000300C0000300C0000300C0000701C0000601800006018000060180000601 80000E0380000C0300000C0300000C0300000C0300001C070000180600FFFFFFFEFFFFFFFE0030 0C0000300C0000300C0000701C0000601800006018000060180000601800FFFFFFFEFFFFFFFE01 C07000018060000180600001806000018060000380E0000300C0000300C0000300C0000300C000 0701C0000601800006018000060180001F297D9F26>I<70F8FCFC7C0C0C0C1C183870F060060E 7C9F0D>39 D<00E001C003C0038007000E000E001C001C003800380030007000700070006000E0 00E000E000E000E000E000E000E000E000E000E000E000E000E000600070007000700030003800 38001C001C000E000E000700038003C001C000E00B2E7DA112>II<70F8FCFC7C0C0C0C1C183870F060060E7C840D>44 DI<70F8F8F87005057C840D>I<00030003000700060006000E000C 
000C001C0018001800380030003000700060006000E000C000C001C00180018001800380030003 000700060006000E000C000C001C0018001800380030003000700060006000E000C000C000102D 7DA117>I<03F0000FFC001E1E001C0E00380700780780700380700380700380F003C0F003C0F0 03C0F003C0F003C0F003C0F003C0F003C0F003C0F003C0F003C0F003C0F003C070038070038070 03807807803807001C0E001E1E000FFC0003F000121F7E9D17>I<018003801F80FF80E3800380 038003800380038003800380038003800380038003800380038003800380038003800380038003 8003800380FFFEFFFE0F1E7C9D17>I<07F0001FFC003C3E00701F00600F80E00780F807C0F807 C0F803C0F803C07007C00007C00007C0000F80000F00001F00003E00003C0000780000F00001E0 000380000700000E00C01C00C03800C0700180FFFF80FFFF80FFFF80121E7E9D17>I<07F0001F FC003C3F00380F00780F80780F807C0780780F80000F80000F80001F00001E00007E0003F80003 F000003C00001F00000F80000F800007C00007C07007C0F807C0F807C0F807C0F80F80E00F8070 1F003C3E001FFC0007F000121F7E9D17>I<000E00000E00001E00003E00003E00006E0000EE00 00CE00018E00038E00030E00060E000E0E000C0E00180E00380E00300E00600E00E00E00FFFFF0 FFFFF0000E00000E00000E00000E00000E00000E00000E0000FFE000FFE0141E7F9D17>I<3807 003FFF003FFE003FFC003FF00030000030000030000030000030000030000031F80037FC003F1E 003C0F003807803807800003C00003C00003C00003C07003C0F003C0F003C0F003C0E007806007 80700F003C3E001FFC0007F000121F7E9D17>I<00FE0003FF0007C7800F07801E07803C07803C 0780780000780000700000F06000F3FC00F7FE00FE0F00FC0780F80780F80380F003C0F003C0F0 03C0F003C0F003C07003C07003C07803C07807803807803C0F001E1E000FFC0003F000121F7E9D 17>I<6000007FFFC07FFFC07FFFC0600380C00300C00700C00E00000C00001C00003800003000 00700000E00000E00000C00001C00001C00003C000038000038000038000038000078000078000 078000078000078000078000078000078000121F7D9D17>I<03F0000FFC001E1E003C0F003807 807003807003807003807803807C07807E07003F8E001FFC000FF80007F8000FFE001EFF003C3F 80781F807007C0F003C0E003C0E001C0E001C0E001C0F003C07003807807003E1F001FFC0007F0 00121F7E9D17>I<03F0000FFC001E1E003C0F00780700780780F00780F00380F00380F003C0F0 03C0F003C0F003C0F003C07007C07807C0780FC03C1FC01FFBC00FF3C00183C000038000078000 0780780700780F00781E00783C007878003FF0000FC000121F7E9D17>I<70F8F8F87000000000 00000000000070F8F8F87005147C930D>I<70F8F8F8700000000000000000000070F8F8F87818 181838303070E040051D7C930D>I61 D<0003800000038000000380000007C0000007C0000007C000000DE000000DE000000DE0000018 F0000018F0000018F00000307800003078000030780000603C0000603C0000603C0000E01E0000 C01E0000FFFE0001FFFF0001800F0001800F0003800F800300078003000780070007C0070003C0 0F8003C0FFE03FFEFFE03FFE1F207F9F22>65 DI<001FC0C000 FFF0C001F83DC007E00FC00F8007C00F0007C01F0003C03E0001C03C0001C07C0001C07C0000C0 7C0000C0F80000C0F8000000F8000000F8000000F8000000F8000000F8000000F8000000F80000 007C0000C07C0000C07C0000C03C0001C03E0001801F0001800F0003800F80070007E00E0001F8 3C0000FFF800001FE0001A217D9F21>IIII<001FE060007FF86001F83CE003E00FE007C007 E00F8003E01F0001E03E0000E03E0000E07C0000E07C0000607C000060F8000060F8000000F800 0000F8000000F8000000F8000000F8000000F8007FFCF8007FFC7C0001E07C0001E07C0001E03E 0001E03E0001E01F0001E00F8001E007C003E003E007E001FC1FE0007FFC60001FF0001E217D9F 24>III<0FFFC00FFFC0003C00003C00003C00003C00003C00003C00003C00003C0000 3C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C00003C0070 3C00F83C00F83C00F83C00F87C00E0780078F0003FE0000FC00012207E9E17>IIIII<001F800000FFF00001E0780007C03E000F801F000F000F001E000780 3C0003C03C0003C07C0003E07C0003E0780001E0F80001F0F80001F0F80001F0F80001F0F80001 F0F80001F0F80001F0F80001F0F80001F0780001E07C0003E07C0003E03C0003C03E0007C01E00 
07800F000F000F801F0007C03E0001F0F80000FFF000001F80001C217D9F23>II<001F800000FFF00001E0780007C03E000F801F000F000F001E0007803E0007 C03C0003C07C0003E07C0003E0780001E0F80001F0F80001F0F80001F0F80001F0F80001F0F800 01F0F80001F0F80001F0F80001F0780001E07C0003E07C0003E03C0003C03E0F07C01E1F87800F 38CF000FB05F0007F07E0001F8780000FFF010001FB010000030100000383000003C7000003FF0 00001FE000001FE000001FC000000F801C297D9F23>II<07E1801FF9803C3F80780F80700780E00380E00380E00180E00180E00180F00000F800007C 00007F80003FF8003FFE000FFF0003FF00003F80000F800003C00003C00001C0C001C0C001C0C0 01C0E001C0E00380F00380F80780FE0F00CFFE00C3F80012217D9F19>I<7FFFFFE07FFFFFE078 0F01E0700F00E0600F0060600F0060E00F0070C00F0030C00F0030C00F0030C00F0030000F0000 000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F0000000F00 00000F0000000F0000000F0000000F0000000F0000000F0000000F000007FFFE0007FFFE001C1F 7E9E21>IIII<7FF83FF87FF83FF807E00F8003C00F0001E00E0001F00C0000F018000078380000 7C3000003C7000003E6000001EC000000FC000000F8000000780000007C0000007E000000DE000 001DF0000018F8000038780000307C0000603C0000E01E0000C01F0001800F0003800780078007 C00FC007E0FFE01FFEFFE01FFE1F1F7F9E22>I91 D<180C3C1E381C70386030E070C060C060C060F87CFC7EFC7E7C3E381C0F0E7B9F17>II<3FE0007FF800787C00781E00781E00000E00000E00000E0007FE001FFE 003F0E007C0E00F80E00F00E30F00E30F01E30F81E307C7F703FFFE01FC78014147E9317>97 D<0E0000FE0000FE00000E00000E00000E00000E00000E00000E00000E00000E00000E00000E3F 000EFFC00FC3E00F81E00F00F00E00F00E00780E00780E00780E00780E00780E00780E00780E00 780E00F00F00F00F81E00FC3C00CFF800C7F0015207F9F19>I<03FC0FFE1E1E3C1E781E7800F0 00F000F000F000F000F000F000F80078007C033E071F0E0FFC03F010147E9314>I<000380003F 80003F8000038000038000038000038000038000038000038000038000038007F3800FFF801E1F 803C0780780780780380F00380F00380F00380F00380F00380F00380F00380F003807803807807 803C0F803E1F801FFBF807E3F815207E9F19>I<03F0000FFC001E1E003C0F00780700780780F0 0780F00380FFFF80FFFF80F00000F00000F00000F800007800007C01803E03801F87000FFE0003 F80011147F9314>I<007E00FF01EF038F078F0700070007000700070007000700FFF0FFF00700 0700070007000700070007000700070007000700070007000700070007007FF07FF01020809F0E >I<0001E007F7F00FFF703E3E703C1E00780F00780F00780F00780F00780F00780F003C1E003E 3E003FF80037F0003000003000003800003FFE001FFFC03FFFE07803E0F000F0E00070E00070E0 0070F000F07801E03E07C01FFF8003FC00141F7F9417>I<0E0000FE0000FE00000E00000E0000 0E00000E00000E00000E00000E00000E00000E00000E7F000EFF800FC7C00F83C00F01C00F01C0 0E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0FFE7FC FFE7FC16207F9F19>I<1E003E003E003E001E000000000000000000000000000E007E007E000E 000E000E000E000E000E000E000E000E000E000E000E000E000E000E00FFC0FFC00A1F809E0C> I<00E001F001F001F000E0000000000000000000000000007007F007F000F00070007000700070 0070007000700070007000700070007000700070007000700070007000700070F0F0F0E0F1E0FF C03F000C28829E0E>I<0E0000FE0000FE00000E00000E00000E00000E00000E00000E00000E00 000E00000E00000E1FF00E1FF00E0F800E0F000E1E000E3C000E78000EF0000FF0000FF8000FBC 000F3C000E1E000E0F000E0F000E07800E07800E03C0FFCFF8FFCFF815207F9F18>I<0E00FE00 FE000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E000E 000E000E000E000E000E000E000E000E00FFE0FFE00B20809F0C>I<0E3F03F000FEFFCFFC00FF C3DC3C000F81F81E000F00F00E000F00F00E000E00E00E000E00E00E000E00E00E000E00E00E00 0E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E000E00E00E 00FFE7FE7FE0FFE7FE7FE023147F9326>I<0E7F00FEFF80FFC7C00F83C00F01C00F01C00E01C0 
0E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C0FFE7FCFFE7FC 16147F9319>I<01F80007FE001E07803C03C03801C07000E07000E0F000F0F000F0F000F0F000 F0F000F0F000F07000E07801E03801C03C03C01E078007FE0001F80014147F9317>I<0E3F00FE FFC0FFC3E00F81E00F01F00E00F00E00F80E00780E00780E00780E00780E00780E00780E00F80E 00F00F01F00F81E00FC7C00EFF800E7F000E00000E00000E00000E00000E00000E00000E0000FF E000FFE000151D7F9319>I<07F1800FF9801F1F803C0F807C0780780380F80380F00380F00380 F00380F00380F00380F00380F803807807807C07803C0F803E1F801FFB8007E380000380000380 000380000380000380000380000380003FF8003FF8151D7E9318>I<0E7CFFFEFFDE0F9E0F1E0F 000E000E000E000E000E000E000E000E000E000E000E000E00FFE0FFE00F147F9312>I<1FB07F F078F0E070E030E030F000FC007FC03FE00FF000F8C078C038E038E038F078F8F0FFE0CFC00D14 7E9312>I<06000600060006000E000E001E003E00FFF8FFF80E000E000E000E000E000E000E00 0E000E000E000E180E180E180E180E180F3007F003E00D1C7F9B12>I<0E01C0FE1FC0FE1FC00E 01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E01C00E03C00E 07C00F0FC007FDFC03F9FC16147F9319>III<7FC7FC7FC7FC0F03E007038003830001C70000EE0000EC 00007800003800003C00007C0000EE0001C7000187000303800701C01F01E0FF87FEFF87FE1714 809318>II<3FFF3FFF381F301E703C6078607860F001E0 03C003C007830F031E031E073C067806F81EFFFEFFFE10147F9314>III E /Fu 28 121 df<000FF07F00007FFFFF80 01FC3FEFC003F07F8FC007E07F8FC007C07F0FC007C07F078007C03F000007C01F000007C01F00 0007C01F000007C01F0000FFFFFFF800FFFFFFF80007C01F000007C01F000007C01F000007C01F 000007C01F000007C01F000007C01F000007C01F000007C01F000007C01F000007C01F000007C0 1F000007C01F000007C01F000007C01F000007C01F00003FF8FFF0003FF8FFF0002220809F1F> 11 D<7CFEFEFFFFFF7F030707060E1C3C787008107C860F>44 DI<00700000F0000FF000FFF000F3F00003F00003F00003F00003F00003F00003F00003F000 03F00003F00003F00003F00003F00003F00003F00003F00003F00003F00003F00003F00003F000 03F00003F000FFFF80FFFF80111D7C9C1A>49 D<03FC000FFF801F1FC01F0FE03F87E03F87E03F 87E03F8FE03F8FE01F0FC0001F80003F0001FE0001FE00001F800007E00007F00003F03C03F87E 03F8FF03F8FF03F8FF03F8FF03F8FF07F07E07E03E0FE01FFF8003FE00151D7E9C1A>51 D<0000E000000000E000000001F000000001F000000001F000000003F800000003F800000007FC 00000007FC0000000FFE0000000CFE0000000CFE000000187F000000187F000000307F80000030 3F800000703FC00000601FC00000601FC00000C01FE00000C00FE00001FFFFF00001FFFFF00001 8007F000030003F800030003F800060003FC00060001FC000E0001FE00FFE01FFFE0FFE01FFFE0 231F7E9E28>65 D<000FFC06007FFF8E01FE03DE03F800FE0FE0007E1FC0003E1F80001E3F8000 0E7F00000E7F00000E7F000006FE000006FE000000FE000000FE000000FE000000FE000000FE00 0000FE000000FE0000007F0000067F0000067F0000063F80000E1F80000C1FC0001C0FE0003803 F800F001FF03E0007FFF80000FFC001F1F7D9E26>67 D<000FFC0600007FFF8E0001FE03DE0003 F800FE000FE0007E001FC0003E001F80001E003F80000E007F00000E007F00000E007F00000600 FE00000600FE00000000FE00000000FE00000000FE00000000FE00000000FE007FFFE0FE007FFF E0FE0000FE007F0000FE007F0000FE007F0000FE003F8000FE001F8000FE001FC000FE000FE000 FE0003F801FE0001FF07FE00007FFF3E00000FFC0600231F7D9E29>71 D80 D<001FF80000FFFF0001F81F8007E007E00FC003F01F8001F81F8001F83F0000FC7F 0000FE7F0000FE7E00007EFE00007FFE00007FFE00007FFE00007FFE00007FFE00007FFE00007F FE00007FFE00007F7E00007E7E00007E7F0000FE3F0000FC3F87C1FC1F8FF1F80FDC3BF007F83F E001FC1F8000FFFF00001FFE0300000F0300000F8700000FFF00000FFF000007FE000007FE0000 03FC000003FC000001F020287D9E27>I<07FC001FFF803F0FC03F07C03F03E03F03E01E03E000 03E001FFE00FFFE03FC3E07E03E0FC03E0F803E0F803E0F803E0FC07E07E1FE03FFDFE0FF0FE17 147F9319>97 DI<03FE000FFF801F 
8FC03F0FC07E0FC07C0FC0FC0780FC0000FC0000FC0000FC0000FC0000FC0000FC00007E00007E 00603F00E01FC3C00FFF8003FE0013147E9317>I<0007F80007F80000F80000F80000F80000F8 0000F80000F80000F80000F80000F80000F803FCF80FFFF81F87F83F01F87E00F87C00F8FC00F8 FC00F8FC00F8FC00F8FC00F8FC00F8FC00F8FC00F87C00F87E01F83E03F81F87F80FFEFF03F8FF 18207E9F1D>I<01FF0007FFC01F87E03F01F07E00F07E00F8FC00F8FC00F8FFFFF8FFFFF8FC00 00FC0000FC00007C00007E00003E00183F00381FC0F007FFE001FF8015147F9318>I<03FE7C0F FFFE1F8FDE1F07DE3E03E03E03E03E03E03E03E03E03E01F07C01F8FC01FFF801FFE001800001C 00001E00001FFFC01FFFF00FFFF83FFFFC7C00FEF8003EF0001EF0001EF0001EF8003E7C007C3F 01F81FFFF003FF80171E7F931A>103 D<1E003F007F007F007F003F001E000000000000000000 00000000FF00FF001F001F001F001F001F001F001F001F001F001F001F001F001F001F001F001F 00FFE0FFE00B217EA00E>105 D<007C00FE00FE00FE00FE00FE007C0000000000000000000000 0001FE01FE003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E003E 003E003E003E783EFC3EFC7EFC7CFCF87FF01FC00F2A83A010>II109 DI<01FF0007FFC0 1F83F03E00F83E00F87C007C7C007CFC007EFC007EFC007EFC007EFC007EFC007E7C007C7C007C 3E00F83E00F81F83F007FFC001FF0017147F931A>II114 D<0FF63FFE781EE00EE006 F006F800FFC07FF87FFC1FFE03FF003FC00FE007E007F00FFC1EFFFCCFF010147E9315>I<0180 0180018003800380038007800F803F80FFFCFFFC0F800F800F800F800F800F800F800F800F800F 800F860F860F860F860F860FCC07FC01F80F1D7F9C14>II120 D E /Fv 17 121 df<07E0001FF8003FFC007FFE00FFFE00FFFF00FFFF00FFFF80FFFF80FFFF80FFFF80FFFF807F FF803FFF801FFF8007E780000780000F80000F80000F00000F00001F00001F00003E00003E0000 7C00007C0000F80001F80003F00007E0000FE0001FC0001F80001F00000E00001124788F21>44 D<000000007FFE00001E0000000FFFFFE0003E000000FFFFFFF8007E000003FFFFFFFE00FE0000 1FFFFFFFFF81FE00007FFFF801FFE7FE0000FFFF80003FFFFE0003FFFC000007FFFE0007FFF000 0003FFFE001FFFE0000001FFFE003FFF800000007FFE007FFF000000003FFE00FFFE000000001F FE01FFFC000000001FFE01FFFC000000000FFE03FFF80000000007FE07FFF00000000007FE07FF F00000000003FE0FFFE00000000003FE0FFFE00000000001FE1FFFC00000000001FE1FFFC00000 000000FE3FFFC00000000000FE3FFF800000000000FE3FFF800000000000FE7FFF800000000000 7E7FFF8000000000007E7FFF8000000000007E7FFF00000000000000FFFF00000000000000FFFF 00000000000000FFFF00000000000000FFFF00000000000000FFFF00000000000000FFFF000000 00000000FFFF00000000000000FFFF00000000000000FFFF00000000000000FFFF000000000000 00FFFF00000000000000FFFF00000000000000FFFF00000000000000FFFF00000000000000FFFF 000000000000007FFF000000000000007FFF800000000000007FFF8000000000003E7FFF800000 0000003E3FFF8000000000003E3FFF8000000000003E3FFFC000000000003E1FFFC00000000000 7E1FFFC000000000007E0FFFE000000000007C0FFFE00000000000FC07FFF00000000000FC07FF F00000000001F803FFF80000000001F801FFFC0000000003F801FFFC0000000007F000FFFE0000 00000FE0007FFF000000000FE0003FFFC00000003FC0001FFFE00000007F800007FFF0000000FF 000003FFFC000003FE000000FFFF80000FF80000007FFFF800FFF00000001FFFFFFFFFC0000000 03FFFFFFFF0000000000FFFFFFFC00000000000FFFFFE00000000000007FFE000000474979C756 >67 D<000000007FFE00001E000000000FFFFFE0003E00000000FFFFFFF8007E00000003FFFFFF FE00FE0000001FFFFFFFFF81FE0000007FFFF801FFE7FE000000FFFF80003FFFFE000003FFFC00 0007FFFE000007FFF0000003FFFE00001FFFE0000001FFFE00003FFF800000007FFE00007FFF00 0000003FFE0000FFFE000000001FFE0001FFFC000000001FFE0001FFFC000000000FFE0003FFF8 0000000007FE0007FFF00000000007FE0007FFF00000000003FE000FFFE00000000003FE000FFF E00000000001FE001FFFC00000000001FE001FFFC00000000000FE003FFFC00000000000FE003F FF800000000000FE003FFF800000000000FE007FFF8000000000007E007FFF8000000000007E00 
D R A F T

Document for a Standard Message-Passing Interface

Message Passing Interface Forum

August 11, 1993

This work was supported by ARPA and NSF under contract number ###, by the
National Science Foundation Science and Technology Center Cooperative
Agreement No. CCR-8809615, and by the Commission of the European Community
through Esprit project P6643.

Contents

3  Groups, Contexts, and Communicators
   3.1  Introduction
   3.2  Context
   3.3  Groups
        3.3.1  Predefined Groups
   3.4  Communicators
        3.4.1  Predefined Communicators
   3.5  Group Management
        3.5.1  Local Operations
        3.5.2  Local Group Constructors
        3.5.3  Collective Group Constructors
   3.6  Operations on Contexts
        3.6.1  Local Operations
        3.6.2  Collective Operations
   3.7  Operations on Communicators
        3.7.1  Local Communicator Operations
        3.7.2  Local Constructors
        3.7.3  Collective Communicator Constructors
   3.8  Introduction to Inter-Communication
        3.8.1  Definitions of Inter-Communication and Inter-Communicators
        3.8.2  Properties of Inter-Communication and Inter-Communicators
        3.8.3  Inter-Communication Routines
        3.8.4  Implementation Notes
   3.9  Cacheing
        3.9.1  Functionality
   3.10 Formalizing the Loosely Synchronous Model (Usage, Safety)
        3.10.1 Basic Statements
        3.10.2 Models of Execution
   3.11 Motivating Examples
        3.11.1 Current Practice #1
        3.11.2 Current Practice #2
        3.11.3 (Approximate) Current Practice #3
        3.11.4 Example #4
        3.11.5 Library Example #1
        3.11.6 Library Example #2
        3.11.7 Inter-Communication Examples

Abstract

The Message Passing Interface Forum (MPIF), with participation from over 40
organizations, has been meeting since January 1993 to discuss and define a
set of library interface standards for message passing.  MPIF is not
sanctioned or supported by any official standards organization.

This is a draft of what will become the Final Report, Version 1.0, of the
Message Passing Interface Forum.  This document contains all the technical
features proposed for the interface.  This copy of the draft was processed
by LaTeX on August 11, 1993.

MPIF invites comments on the technical content of MPI, as well as on the
editorial presentation in the document.  Comments received before July 1,
1993 will be considered in producing the final draft of Version 1.0 of the
Message Passing Interface Specification.

The goal of the Message Passing Interface, simply stated, is to develop a
widely used standard for writing message-passing programs.  As such, the
interface should establish a practical, portable, efficient, and flexible
standard for message passing.
Section 3

Groups, Contexts, and Communicators

3.1  Introduction

We define the concepts of group, context, and communicator here.  We discuss
operations for how these should be used to provide safe (safer)
communication in the MPI system.  We start by discussing intra-communication
in full detail; then we discuss inter-communication, which builds on the
data structures and requirements of the intra-communication sections.  We
follow with discussion of formalizations of the loosely synchronous model of
computing (vis a vis message passing) and offer examples.

It is highly desirable that processes executing a parallel procedure use a
"virtual process name space" local to the invocation.  Thus, the code of the
parallel procedure will look identical, irrespective of the absolute
addresses of the executing processes.  It is often the case that parallel
application code is built by composing several parallel modules (e.g., a
numerical solver and a graphic display module).  Support of a virtual name
space for each module will allow for the composition of modules that were
developed separately, without changing all message passing calls within each
module.  The set of processes that execute a parallel procedure may be
fixed, or may be determined dynamically before the invocation.  Thus, MPI
has to provide a mechanism for dynamically creating sets of locally named
processes.  We always number the processes that execute a parallel procedure
consecutively, starting from zero, and call this numbering the rank in
group.  Thus, a group is an ordered set of processes, where processes are
identified by their ranks when communication occurs.

Communication contexts partition the message-passing space into separate,
manageable "universes."  Specifically, a send made in one context cannot be
received in another context.  Contexts are identified in MPI using opaque
contexts that reside within communicator objects.  The context mechanism is
needed to allow predictable behavior in subprograms, and to allow dynamism
in message usage that cannot be reasonably anticipated or managed.
Normally, a parallel procedure is written so that all messages produced
during its execution are also consumed by the processes that execute the
parallel procedure.  However, if one parallel procedure calls another, then
it might be desirable to allow such a call to proceed while messages are
pending (the messages will be consumed by the procedure after the call
returns).  In such a case, a new communication context is needed for the
called parallel procedure, even if the transfer of control is synchronized.
The communication domain used by a parallel procedure is identified by a
communicator.  Communicators bring together the concepts of process group
and communication context.  A communicator is an explicit parameter in each
point-to-point communication operation.  The communicator identifies the
communication context of that operation; it identifies the group of
processes that can be involved in this communication; and it provides the
translation from virtual process names, which are ranks within the group,
into absolute addresses.  Collective communication calls also take a
communicator as a parameter; it is expected that parallel libraries will be
built to accept a communicator as a parameter.  Communicators are
represented by opaque MPI objects.

3.2  Context

A context is the MPI mechanism for partitioning communication space.  A
defining property of a context is that a send made in one context cannot be
received in another context.  A context is an opaque object.  Only one
communicator in a process may bind a given context.  Contexts have
additional attributes for inter-communication, to be discussed below.  For
intra-communication, a context is essentially a hyper-tag needed to make a
communicator safe for point-to-point and MPI-defined collective
communication.

Discussion: Some implementations may make a context a pair of integers, each
representing a "hyper tag": one for point-to-point and one for (MPI-defined)
collective operations on a communicator.  By making this concept opaque, we
relieve the implementor of the requirement that this is the only way to
implement contexts correctly for MPI.

Discussion: Among other reasons, including to address Jim Cownie's concerns
about safety and to make both point-to-point and collective communication
safer on an intra-communicator, we have opted to make contexts opaque, at
the expense of upsetting those who want to be able to set context values.
This change is crucial to abstracting MPI from specific implementations, and
forces specific implementations to provide implementation-specific functions
to make the association of contexts with specific integer values.

3.3  Groups

A group is an ordered set of process identifiers (henceforth, processes);
process identifiers are implementation dependent; a group is an opaque
object.  Each process in a group is associated with an integer rank,
starting from zero.

Groups are represented by opaque group objects, and hence cannot be directly
transferred from one process to another.
b(de\014ning)i(prop-)75 1032 y(ert)o(y)c(of)h(a)f(con)o(text)h(is)g(that)f(a)h(send)g(made)g(in)h(a)e (con)o(text)h(cannot)f(b)q(e)i(receiv)o(ed)g(in)g(another)e(con)o(text.)19 b(A)75 1089 y(con)o(text)c(is)g(an)g(opaque)h(ob)s(ject.)j(Only)d(one)g(comm) o(unicator)e(in)i(a)f(pro)q(cess)h(ma)o(y)e(bind)j(the)e(same)g(con-)75 1145 y(text.)24 b(Con)o(texts)15 b(ha)o(v)o(e)i(additional)h(attributes)e (for)g(in)o(ter-comm)o(unication,)i(to)e(b)q(e)i(discussed)g(b)q(elo)o(w.)75 1201 y(F)l(or)e(in)o(tra-comm)o(unication,)g(a)g(con)o(text)g(is)h(essen)o (tially)h(a)d(h)o(yp)q(er-tag)h(needed)i(to)e(mak)o(e)f(a)h(comm)o(uni-)75 1258 y(cator)e(safe)h(for)g(p)q(oin)o(t-to-p)q(oin)o(t)h(and)f(MPI-de\014ned) i(collectiv)o(e)g(comm)o(unication.)166 1405 y Fk(Discussion:)34 b Fj(Some)12 b(implemen)o(tations)f(ma)o(y)h(mak)o(e)g(a)h(con)o(text)i(to)e (b)q(e)i(a)e(pair)g(of)g(in)o(tegers,)h(eac)o(h)h(repre-)75 1461 y(sen)o(ting)g(\\h)o(yp)q(er)h(tags")f({)g(one)h(for)e(p)q(oin)o(t-to-p) q(oin)o(t)g(and)h(one)h(for)f(\(MPI-de\014ned\))h(collectiv)o(e)g(op)q (erations)f(on)g(a)75 1517 y(comm)o(unicator.)g(By)e(making)d(this)j(concept) h(opaque,)e(w)o(e)h(reliev)o(e)g(the)h(implemen)o(tor)c(of)i(the)h (requiremen)o(t)g(that)75 1574 y(this)h(is)g(the)g(only)f(w)o(a)o(y)g(to)h (implemen)o(t)d(con)o(texts)k(correctly)g(for)f(MPI.)166 1803 y Fk(Discussion:)19 b Fj(Among)14 b(other)i(reasons,)h(including)e(to)g (address)i(Jim)e(Co)o(wnie's)g(concerns)i(ab)q(out)f(safet)o(y)75 1860 y(and)c(to)g(mak)o(e)e(b)q(oth)i(p)q(oin)o(t-to-p)q(oin)o(t)f(and)h (collectiv)o(e)g(comm)o(unicati)o(on)d(safer)k(on)e(an)h(in)o(tra-comm)o(uni) o(ncator,)e(w)o(e)75 1916 y(ha)o(v)o(e)k(opted)h(to)g(mak)o(e)e(con)o(texts)i (opaque,)f(at)h(the)g(exp)q(ense)h(of)e(upsetting)h(those)h(who)e(w)o(an)o(t) g(to)g(b)q(e)h(able)g(to)f(set)75 1973 y(con)o(text)g(v)n(alues.)j(This)d(c)o (hange)f(is)g(crucial)g(to)g(abstracting)h(MPI)f(from)f(sp)q(eci\014c)i (implemen)o(tations,)c(and)j(forces)75 2029 y(sp)q(eci\014c)19 b(implem)o(en)o(tations)c(to)i(pro)o(vide)g(implem)o(en)o(tation-sp)q (eci\014c)e(functions)j(to)f(mak)o(e)e(the)j(connotation)f(of)75 2086 y(con)o(texts)e(with)f(sp)q(eci\014c)h(in)o(teger)f(v)n(alues.)75 2354 y Fo(3.3)59 b(Groups)75 2470 y Ft(A)17 b Fu(group)g Ft(is)g(an)g (ordered)g(set)f(of)h(pro)q(cess)g(iden)o(ti\014ers)h(\(henceforth)f(pro)q (cesses\);)g(pro)q(cess)g(iden)o(ti\014ers)75 2527 y(are)h(implemen)o(tation) j(dep)q(enden)o(t;)g(a)e(group)f(is)i(an)e(opaque)h(ob)s(ject.)30 b(Eac)o(h)19 b(pro)q(cess)g(in)g(a)g(group)f(is)75 2583 y(asso)q(ciated)d (with)h(an)f(in)o(teger)g Fu(rank)p Ft(,)g(starting)f(from)h(zero.)166 2647 y(Groups)d(are)g(represen)o(ted)h(b)o(y)f(opaque)h Fu(group)h(ob)s (jects)p Ft(,)e(and)h(hence)g(cannot)g(b)q(e)g(directly)g(trans-)75 2704 y(ferred)i(from)g(one)g(pro)q(cess)h(to)e(another.)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 3 6 bop 75 
-100 a Fl(3.4.)34 b(COMMUNICA)l(TORS)1246 b Ft(3)75 45 y Fi(3.3.1)49 b(Prede\014ned)15 b(Groups)75 134 y Ft(.)20 b(Initial)d(groups)e(de\014ned)h(once)31 b Fh(MPI)p 750 134 14 2 v 16 w(INIT)14 b Ft(has)h(b)q(een)i(called)g(are)d(as)h(follo)o(ws:)143 233 y Fg(\017)23 b Fh(MPI)p 274 233 V 15 w(GROUP)p 441 233 V 18 w(ALL)p Ft(,)45 b(SPMD-lik)o(e)16 b(siblings)h(of)e(a)g(pro)q(cess.)143 332 y Fg(\017)23 b Fh(MPI)p 274 332 V 15 w(GROUP)p 441 332 V 18 w(HOST)p Ft(,)46 b(A)15 b(group)g(including)j(one's)d(self)h(and)f (one's)g(HOST)143 432 y Fg(\017)23 b Fh(MPI)p 274 432 V 15 w(GROUP)p 441 432 V 18 w(P)l(ARENT)p Ft(,)39 b(A)13 b(group)g(con)o(taining)g (one's)g(self)g(and)g(one's)g(P)l(ARENT)g(\(spa)o(wner\).)143 531 y Fg(\017)23 b Fh(MPI)p 274 531 V 15 w(GROUP)p 441 531 V 18 w(SELF)p Ft(,)45 b(A)15 b(group)g(comprising)i(one's)d(self)166 630 y(MPI)j(implemen)o(tations)h(are)f(required)h(to)e(pro)o(vide)h(these)h (groups;)f(ho)o(w)o(ev)o(er,)f(not)g(all)i(forms)e(of)75 687 y(comm)o(unication)22 b(mak)o(e)f(sense)h(for)f(all)h(systems,)g(so)g(not)f (all)h(of)f(these)h(groups)f(ma)o(y)g(b)q(e)h(relev)m(an)o(t.)75 743 y(En)o(vironmen)o(tal)17 b(inquiry)i(will)g(b)q(e)e(pro)o(vided)h(to)e (determine)j(whic)o(h)e(of)g(these)g(are)g(usable)h(in)g(a)e(giv)o(en)75 800 y(implemen)o(tation.)24 b(The)17 b(analogous)f(comm)o(unicators)f (corresp)q(onding)j(to)d(these)i(groups)f(are)g(de\014ned)75 856 y(b)q(elo)o(w)g(in)g(section)g(3.4.1.)166 997 y Fk(Discussion:)f Fj(En)o(vironmen)o(tal)d(sub-committee)h(needs)i(to)f(pro)o(vide)f(suc)o(h)i (inquiry)e(functions)h(for)f(us.)75 1231 y Fo(3.4)59 b(Communicato)n(rs)75 1335 y Ft(All)20 b(MPI)f(comm)o(unication)h(\(b)q(oth)f(p)q(oin)o(t-to-p)q (oin)o(t)h(and)f(collectiv)o(e\))i(functions)f(use)f Fu(comm)o(unica-)75 1391 y(tors)c Ft(to)g(pro)o(vide)i(a)e(sp)q(eci\014c)j(scop)q(e)e(\(con)o (text)f(and)h(group)f(sp)q(eci\014cations\))j(for)d(the)h(comm)o(unication.) 
75 1448 y(In)d(short,)g(comm)o(unicators)f(bring)h(together)f(the)h(concepts) g(of)g(group)f(and)h(con)o(text;)g(\(furthermore,)f(to)75 1504 y(supp)q(ort)18 b(implemen)o(tation-sp)q(eci\014c)j(optimizations,)d(and)g (virtual)g(top)q(ologies,)g(they)g(\\cac)o(he")f(addi-)75 1561 y(tional)f(information)g(opaquely\).)22 b(The)16 b(source)f(and)h (destination)h(of)e(a)g(message)g(is)i(iden)o(ti\014ed)g(b)o(y)f(the)75 1617 y(rank)h(of)f(that)g(pro)q(cess)h(within)h(the)f(group;)g(no)g(a)g (priori)h(mem)o(b)q(ership)g(restrictions)f(on)g(the)g(pro)q(cess)75 1674 y(sending)f(or)f(receiving)i(the)e(message)f(are)h(implied.)22 b(F)l(or)15 b(collectiv)o(e)i(comm)o(unication,)e(the)g(comm)o(uni-)75 1730 y(cator)i(sp)q(eci\014es)j(the)e(set)f(of)h(pro)q(cesses)g(that)f (participate)i(in)g(the)f(collectiv)o(e)h(op)q(eration.)29 b(Th)o(us,)18 b(the)75 1786 y(comm)o(unicator)g(restricts)h(the)f(\\spatial") h(scop)q(e)g(of)f(comm)o(unication,)i(and)f(pro)o(vides)g(lo)q(cal)h(pro)q (cess)75 1843 y(addressing.)166 1983 y Fk(Discussion:)48 b Fj(`Comm)n(unicator')14 b(replaces)k(the)g(w)o(ord)g(`con)o(text')f(ev)o (erywhere)i(in)e(curren)o(t)i(pt2pt)e(and)75 2040 y(collcomm)10 b(drafts.)166 2180 y Ft(Comm)o(unicators)g(are)h(represen)o(ted)h(b)o(y)g (opaque)f Fu(comm)o(unicator)i(ob)s(jects)p Ft(,)f(and)g(hence)g(cannot)75 2237 y(b)q(e)k(directly)g(transferred)f(from)g(one)g(pro)q(cess)g(to)g (another.)75 2365 y Fh(Raison)h(d')o(^)-21 b(etre)16 b(fo)o(r)e(sepa)o(rate)j (Contexts)g(and)f(Comm)m(unicato)o(rs)44 b Ft(Within)16 b(a)g(comm)o (unicator,)f(a)h(con)o(text)75 2421 y(is)h(separately)f(brok)o(en)h(out,)f (rather)f(than)i(b)q(eing)g(inheren)o(t)h(in)f(the)f(comm)o(unicator)g(for)g (one)g(sp)q(eci\014c,)75 2478 y(essen)o(tial)j(purp)q(ose.)27 b(W)l(e)18 b(w)o(an)o(t)f(to)g(mak)o(e)g(it)h(p)q(ossible)h(for)e(libraries)i (quic)o(kly)g(to)e(ac)o(hiev)o(e)i(additional)75 2534 y(safe)i(comm)o (unication)h(space)f(without)g(MPI-comm)o(unicator-based)h(sync)o (hronization.)38 b(The)21 b(only)75 2591 y(w)o(a)o(y)15 b(to)g(do)h(this)g (is)h(to)e(pro)o(vide)i(a)e(means)h(to)g(preallo)q(cate)g(man)o(y)g(con)o (texts,)f(and)h(bind)h(them)f(lo)q(cally)l(,)75 2647 y(as)c(needed.)21 b(This)13 b(c)o(hoice)h(w)o(eak)o(ens)e(the)h(o)o(v)o(erall,)g(inheren)o(t)h (\\safet)o(y")d(of)h(MPI,)h(if)g(programmed)f(in)i(this)75 2704 y(w)o(a)o(y)l(,)g(but)h(pro)o(vides)h(added)g(p)q(erformance)f(whic)o(h) h(library)g(designers)g(will)h(demand.)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 4 7 bop 75 -100 a Ft(4)432 b Fl(SECTION)16 b(3.)35 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)75 45 y Fi(3.4.1)49 b(Prede\014ned)15 b(Comm)o(unicato)o(r)o(s)75 186 y Ft(Initial)i(comm)o(unicators)e(de\014ned)h (once)31 b Fh(MPI)p 885 186 14 2 v 16 w(INIT)14 b Ft(has)i(b)q(een)g(called)h (are)e(as)f(follo)o(ws:)143 394 y Fg(\017)23 b Ff(MPI)p 266 394 13 2 v 14 w(COMM)p 410 394 V 15 w(ALL)p Ft(,)45 b(SPMD-lik)o(e)16 
b(siblings)i(of)c(a)h(pro)q(cess.)143 602 y Fg(\017)23 b Ff(MPI)p 266 602 V 14 w(COMM)p 410 602 V 15 w(HOST)p Ft(,)45 b(A)15 b(comm)o(unicator)g(for)g(talking)g(to)g(one's)g(HOST.)143 810 y Fg(\017)23 b Ff(MPI)p 266 810 V 14 w(COMM)p 410 810 V 15 w(P)m(ARENT)p Ft(,)44 b(A)15 b(comm)o(unicator)g(for)f(talking)i(to)f (one's)g(P)l(ARENT)g(\(spa)o(wner\).)143 1018 y Fg(\017)23 b Ff(MPI)p 266 1018 V 14 w(COMM)p 410 1018 V 15 w(SELF)p Ft(,)32 b(A)12 b(comm)o(unicator)e(for)h(talking)g(to)g(one's)g(self)g(\(useful)h (for)f(getting)g(con)o(texts)189 1074 y(for)j(serv)o(er)h(purp)q(oses,)g (etc.\).)166 1282 y(MPI)j(implemen)o(tations)h(are)f(required)h(to)f(pro)o (vide)h(these)f(comm)o(unicators;)h(ho)o(w)o(ev)o(er,)e(not)h(all)75 1339 y(forms)12 b(of)f(comm)o(unication)j(mak)o(e)d(sense)i(for)f(all)h (systems.)19 b(En)o(vironmen)o(tal)12 b(inquiry)i(will)g(b)q(e)f(pro)o(vided) 75 1395 y(to)20 b(determine)h(whic)o(h)h(of)e(these)h(comm)o(unicators)f(are) g(usable)h(in)h(a)e(giv)o(en)h(implemen)o(tation.)37 b(The)75 1451 y(groups)20 b(corresp)q(onding)h(to)f(these)g(comm)o(unicators)g(are)g (also)g(a)o(v)m(ailable)i(as)d(prede\014ned)j(quan)o(tities)75 1508 y(\(see)15 b(section)h(3.3.1\).)166 1676 y Fk(Discussion:)f Fj(En)o(vironmen)o(tal)d(sub-committee)h(needs)i(to)f(pro)o(vide)f(suc)o(h)i (inquiry)e(functions)h(for)f(us.)75 2064 y Fo(3.5)59 b(Group)20 b(Management)75 2221 y Ft(This)14 b(section)h(describ)q(es)g(the)f (manipulation)h(of)f(groups)f(under)i(v)m(arious)f(subheadings:)20 b(general,)15 b(con-)75 2278 y(structors,)f(and)h(so)g(on.)75 2562 y Fi(3.5.1)49 b(Lo)q(cal)18 b(Op)q(erations)75 2704 y Ft(The)d(follo)o(wing)h(are)f(all)h(lo)q(cal)h(\(non-comm)o(unicating\))e(op) q(erations.)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 5 8 bop 75 -100 a Fl(3.5.)34 b(GR)o(OUP)15 b(MANA)o(GEMENT)1138 b Ft(5)75 45 y Fh(MPI)p 160 45 14 2 v 16 w(GROUP)p 328 45 V 18 w(SIZE\(group,)14 b(size\))117 124 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject.)117 202 y(OUT)124 b Fh(size)503 b Fj(is)14 b(the)g(in)o(teger)h(n)o(um)o(b)q(er)e(of)g(pro)q (cesses)k(in)c(the)i(group.)75 376 y Fh(MPI)p 160 376 V 16 w(GROUP)p 328 376 V 18 w(RANK\(group,)f(rank\))117 455 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject.)117 533 y(OUT)124 b Fh(rank)488 b Fj(is)15 b(the)h(in)o(teger)g(rank)f(of)g(the)g (calling)f(pro)q(cess)j(in)e(group,)g(or)905 589 y Ff(MPI)p 982 589 13 2 v 15 w(UNDEFINED)d Fj(if)h(the)i(pro)q(cess)h(is)d(not)h(a)g (mem)o(b)q(er.)75 763 y Fh(MPI)p 160 763 14 2 v 16 w(TRANSLA)l(TE)p 432 763 V 17 w(RANKS)i(\(group)p 739 763 V 16 w(a,)f(n,)g(ranks)p 956 763 V 17 w(a,)g(group)p 1131 763 V 16 w(b,)g(ranks)p 1298 763 V 17 w(b\))117 842 y Fj(IN)171 b Fh(group)p 445 842 V 16 w(a)425 b Fj(handle)14 b(to)g(group)f(ob)r(ject)i(\\A")117 920 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)13 b(of)h(ranks)g(in)27 b Ff(ranks)p 1371 920 13 2 
v 16 w(a)14 b Fj(arra)o(y)117 998 y(IN)171 b Fh(ranks)p 437 998 14 2 v 16 w(a)433 b Fj(arra)o(y)14 b(of)f(zero)i(or)f(more)f(v)n(alid)f(ranks)i(in)g(group)f(\\A")117 1077 y(IN)171 b Fh(group)p 445 1077 V 16 w(b)424 b Fj(handle)14 b(to)g(group)f(ob)r(ject)i(\\B")117 1155 y(OUT)124 b Fh(ranks)p 437 1155 V 16 w(b)432 b Fj(arra)o(y)9 b(of)g(corresp)q(onding)h(ranks)g(in)f (group)g(\\B,")18 b Ff(MPI)p 1757 1155 13 2 v 14 w(UNDEFINED)905 1212 y Fj(when)d(no)e(corresp)q(ondence)k(exists.)75 1411 y Fi(3.5.2)49 b(Lo)q(cal)18 b(Group)e(Constructo)o(rs)75 1500 y Ft(The)f(execution)i(of)d(the)i(follo)o(wing)g(op)q(erations)f(do)g(not)g (require)h(in)o(terpro)q(cess)g(comm)o(unication.)75 1605 y Fh(MPI)p 160 1605 14 2 v 16 w(LOCAL)p 318 1605 V 16 w(SUBGROUP\(group,)h(n,)e (ranks,)g(new)p 981 1605 V 17 w(group\))117 1684 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject)117 1762 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)9 b(of)f(elemen)o(ts)i(in)e (arra)o(y)h(ranks)h(\(and)f(size)h(of)18 b Ff(new)p 1802 1762 13 2 v 16 w(group)p Fj(\))117 1841 y(IN)171 b Fh(ranks)471 b Fj(arra)o(y)11 b(of)f(in)o(teger)i(ranks)f(in)f Ff(group)i Fj(to)f(app)q(ear)g(in)g Ff(new)p 1751 1841 V 16 w(group)p Fj(.)117 1919 y(OUT)124 b Fh(new)p 411 1919 14 2 v 17 w(group)372 b Fj(new)18 b(group)g(deriv)o(ed)g(from)e(ab)q(o)o(v)o(e,)i(preserving)h(the) f(order)905 1976 y(de\014ned)d(b)o(y)28 b Ff(ranks)p Fj(.)75 2102 y Ft(If)15 b(no)h(ranks)e(are)h(sp)q(eci\014ed,)i Fh(new)p 655 2102 V 17 w(group)f Ft(has)f(no)g(mem)o(b)q(ers.)75 2207 y Fh(MPI)p 160 2207 V 16 w(LOCAL)p 318 2207 V 16 w(EX)o(CL)p 444 2207 V 16 w(SUBGROUP\(group,)i(n,)e(ranks,)h(new)p 1108 2207 V 17 w(group\))117 2286 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject)117 2364 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)9 b(of)f(elemen)o(ts)i(in)e(arra)o(y)h(ranks)h(\(and)f(size)h(of)18 b Ff(new)p 1802 2364 13 2 v 16 w(group)p Fj(\))117 2443 y(IN)171 b Fh(ranks)471 b Fj(arra)o(y)9 b(of)g(in)o(teger)h(ranks)f(in)g Ff(group)h Fj(not)f(to)g(app)q(ear)h(in)f Ff(new)p 1805 2443 V 16 w(group)117 2521 y Fj(OUT)124 b Fh(new)p 411 2521 14 2 v 17 w(group)372 b Fj(new)18 b(group)g(deriv)o(ed)g(from)e(ab)q(o)o(v)o(e,)i (preserving)h(the)f(order)905 2578 y(de\014ned)d(b)o(y)28 b Ff(ranks)p Fj(.)75 2704 y Ft(If)15 b(no)h(ranks)e(are)h(sp)q(eci\014ed,)i Fh(new)p 655 2704 V 17 w(group)f Ft(is)f(iden)o(tical)i(to)e Fh(group)p Ft(.)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 6 9 bop 75 -100 a Ft(6)432 b Fl(SECTION)16 b(3.)35 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)75 45 y Fh(MPI)p 160 45 14 2 v 16 w(LOCAL)p 318 45 V 16 w(SUBGROUP)p 572 45 V 19 w(RANGES\(group,)h (n,)f(ranges,)g(new)p 1193 45 V 17 w(group\))117 175 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject)117 354 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)9 b(of)f(elemen)o(ts)i(in)e (arra)o(y)h(ranks)h(\(and)f(size)h(of)18 b Ff(new)p 1802 354 13 2 v 16 w(group)p 
Fj(\))117 534 y(IN)171 b Fh(ranges)450 b Fj(a)19 b(one-dimensional)e(arra)o(y)i(of)g(in)o(teger)g(triplets:)29 b(pairs)20 b(of)905 590 y(ranks)c(\(form:)j(b)q(eginning)c(through)g(end,)h (inclusiv)o(e\))f(to)g(b)q(e)905 647 y(included)i(in)g(the)g(output)g(group)g Ff(new)p 1529 647 V 16 w(group)p Fj(,)h(plus)e(a)h(con-)905 703 y(stan)o(t)d(stride)h(\(often)f(1)g(or)g(-1\).)117 883 y(OUT)124 b Fh(new)p 411 883 14 2 v 17 w(group)372 b Fj(new)18 b(group)g(deriv)o(ed)g(from)e(ab)q(o)o(v)o(e,)i(preserving)h(the)f(order)905 939 y(de\014ned)d(b)o(y)28 b Ff(ranges)p Fj(.)75 1116 y Ft(If)15 b(an)o(y)g(of)g(the)g(rank)g(sets)g(o)o(v)o(erlap,)f(then)i(the)f(o)o(v)o (erlap)g(is)g(ignored.)21 b(If)15 b(no)g(ranges)g(are)g(sp)q(eci\014ed,)i (then)75 1172 y(the)e(output)g(group)g(has)g(no)g(mem)o(b)q(ers.)75 1328 y Fh(MPI)p 160 1328 V 16 w(LOCAL)p 318 1328 V 16 w(SUBGROUP)p 572 1328 V 19 w(EX)o(CL)p 701 1328 V 16 w(RANGES\(group,)h(n,)f(ranges,)g (new)p 1319 1328 V 17 w(group\))117 1458 y Fj(IN)171 b Fh(group)463 b Fj(handle)14 b(to)g(group)f(ob)r(ject)117 1638 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)9 b(of)f(elemen)o(ts)i(in)e(arra)o(y)h(ranks)h (\(and)f(size)h(of)18 b Ff(new)p 1802 1638 13 2 v 16 w(group)p Fj(\))117 1817 y(IN)171 b Fh(ranges)450 b Fj(a)17 b(one-dimensional)e(arra)o (y)i(of)g(\(three)i(in)o(teger\))f(consisting)905 1874 y(of)12 b(pairs)h(of)f(ranks)i(\(form:)h(b)q(eginning)e(through)f(end,)i(inclu-)905 1930 y(siv)o(e\))d(to)g(b)q(e)h(excluded)g(from)d(the)j(output)f(group)g Ff(new)p 1751 1930 V 16 w(group)p Fj(,)905 1987 y(plus)j(a)g(constan)o(t)g (stride)h(\(often)f(1)f(or)h(-1\).)117 2166 y(OUT)124 b Fh(new)p 411 2166 14 2 v 17 w(group)372 b Fj(new)18 b(group)g(deriv)o(ed)g(from)e(ab)q (o)o(v)o(e,)i(preserving)h(the)f(order)905 2223 y(de\014ned)d(b)o(y)28 b Ff(ranges)p Fj(.)75 2399 y Ft(If)15 b(an)o(y)f(of)f(the)i(rank)f(sets)g(o)o (v)o(erlap,)g(then)h(the)f(o)o(v)o(erlap)g(is)h(ignored.)20 b(If)15 b(there)f(are)g(no)g(ranges)g(sp)q(eci\014ed,)75 2456 y(the)h(output)g(group)g(is)h(the)f(same)g(as)g(the)g(original)h(group.)166 2647 y Fk(Discussion:)e Fj(Please)f(prop)q(ose)f(additional)e(subgroup)i (functions,)g(b)q(efore)h(the)f(second)h(reading...Virtual)75 2704 y(T)m(op)q(ologies)f(supp)q(ort?)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 7 10 bop 75 -100 a Fl(3.5.)34 b(GR)o(OUP)15 b(MANA)o(GEMENT)1138 b Ft(7)75 45 y Fh(MPI)p 160 45 14 2 v 16 w(LOCAL)p 318 45 V 16 w(GROUP)p 486 45 V 18 w(UNION\(group1,)15 b(group2,)f(group)p 1088 45 V 17 w(out\))117 124 y Fj(IN)171 b Fh(group1)440 b Fj(\014rst)15 b(group)f(ob)r(ject)g(handle)117 203 y(IN)171 b Fh(group2)440 b Fj(second)15 b(group)f(ob)r(ject)h(handle)117 282 y(OUT)124 b Fh(group)p 445 282 V 16 w(out)385 b Fj(group)14 b(ob)r(ject)h(handle)75 455 y Fh(MPI)p 160 455 V 16 w(LOCAL)p 318 455 V 16 w(GROUP)p 486 455 V 18 w(INTERSECT\(group1,)g(group2,)f(group)p 1191 455 V 17 w(out\))117 
534 y Fj(IN)171 b Fh(group1)440 b Fj(\014rst)15 b(group)f(ob)r(ject)g(handle)117 613 y(IN)171 b Fh(group2)440 b Fj(second)15 b(group)f(ob)r(ject)h(handle)117 692 y(OUT)124 b Fh(group)p 445 692 V 16 w(out)385 b Fj(group)14 b(ob)r(ject)h(handle)75 865 y Fh(MPI)p 160 865 V 16 w(LOCAL)p 318 865 V 16 w(GROUP)p 486 865 V 18 w(DIFFERENCE\(group1,)f(group2,)h(group)p 1216 865 V 16 w(out\))117 944 y Fj(IN)171 b Fh(group1)440 b Fj(\014rst)15 b(group)f(ob)r(ject)g(handle)117 1023 y(IN)171 b Fh(group2)440 b Fj(second)15 b(group)f(ob)r(ject)h(handle)117 1102 y(OUT)124 b Fh(group)p 445 1102 V 16 w(out)385 b Fj(group)14 b(ob)r(ject)h(handle)166 1228 y Ft(The)g(set-lik)o(e)i(op)q(erations)e(are)g (de\014ned)h(as)f(follo)o(ws:)75 1330 y Fu(union)24 b Ft(All)19 b(elemen)o(ts)g(of)f(the)h(\014rst)f(group)g(\()p Fh(group1)p Ft(\),)g(follo)o(w)o(ed)h(b)o(y)f(all)h(elemen)o(ts)g(of)f(second)h(group)189 1386 y(\()p Fh(group2)p Ft(\))14 b(not)h(in)h(\014rst)75 1487 y Fu(in)o(tersect)23 b Ft(all)16 b(elemen)o(ts)g(of)f(the)g(\014rst)g(group)g (whic)o(h)h(are)e(also)i(in)g(the)f(second)h(group)75 1588 y Fu(di\013erence)22 b Ft(all)17 b(elemen)o(ts)e(of)g(the)g(\014rst)g(group)g (whic)o(h)h(are)f(not)g(in)h(the)f(second)h(group)75 1689 y(Note)d(that)f (for)g(these)i(op)q(erations)f(the)g(order)g(of)f(pro)q(cesses)i(in)g(the)f (output)g(group)f(is)i(determined)g(\014rst)75 1746 y(b)o(y)f(order)f(in)i (the)f(\014rst)f(group)h(\(if)g(p)q(ossible\))h(and)f(then)g(b)o(y)g(order)f (in)i(the)f(second)g(group)g(\(if)f(necessary\).)166 1887 y Fk(Discussion:)32 b Fj(What)22 b(do)g(p)q(eople)h(think)f(ab)q(out)g(these)i (lo)q(cal)d(op)q(erations?)44 b(More?)g(Less?)g(Note:)75 1943 y(these)15 b(op)q(erations)g(do)f(not)f(explicitly)g(en)o(umerate)h(ranks,)g (and)g(therefore)i(are)e(more)f(scalable)h(if)f(implemen)o(ted)75 2000 y(e\016cien)o(tly)p Fe(:)7 b(:)g(:)75 2188 y Fh(MPI)p 160 2188 V 16 w(GROUP)p 328 2188 V 18 w(FREE\(group\))117 2267 y Fj(IN)171 b Fh(group)463 b Fj(frees)29 b Ff(group)15 b Fj(previously)e (de\014ned.)75 2393 y Ft(This)i(op)q(eration)f(frees)h(a)f(handle)29 b Fh(group)15 b Ft(whic)o(h)g(is)f(not)g(curren)o(tly)h(b)q(ound)g(to)f(a)g (comm)o(unicator.)19 b(It)14 b(is)75 2450 y(erroneous)h(to)g(attempt)f(to)g (free)i(a)f(group)f(curren)o(tly)i(b)q(ound)g(to)f(a)g(comm)o(unicator.)166 2591 y Fk(Discussion:)h Fj(The)f(p)q(oin)o(t-to-p)q(oin)o(t)e(c)o(hapter)i (suggests)h(that)e(there)i(is)e(a)g(single)g(destructor)j(for)d(all)f(MPI)75 2647 y(opaque)g(ob)r(jects;)i(ho)o(w)o(ev)o(er,)e(it)g(is)h(arguable)f(that)g (this)h(sp)q(eci\014es)h(the)f(implemen)o(tatio)o(n)d(of)i(MPI)g(v)o(ery)h (strongly)m(.)75 2704 y(W)m(e)f(p)q(olitely)g(argue)i(against)e(this)h (approac)o(h.)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 8 11 bop 75 -100 a Ft(8)432 b Fl(SECTION)16 b(3.)35 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)75 45 y Fh(MPI)p 160 45 14 2 v 16 w(GROUP)p 328 45 V 18 w(DUP\(group,)f(new)p 666 45 V 17 
MPI_GROUP_DUP(group, new_group)

    IN   group        extant group object handle
    OUT  new_group    new group object handle

MPI_GROUP_DUP duplicates a group with all its cached information, replacing
nothing.  This function is essential to the support of virtual topologies.

3.5.3  Collective Group Constructors

The execution of the following operations requires collective communication
within a group.

MPI_COLL_SUBGROUP(comm, key, color, new_group)

    IN   comm         communicator object handle
    IN   key          (integer)
    IN   color        (integer)
    OUT  new_group    new group object handle

This collective function is called by all processes in the group associated
with comm.  color defines the particular new group to which the process
belongs.  key defines the rank-order in new_group; a stable sort is used to
determine rank order in new_group if the keys are not unique.

    Discussion: According to the operation of this function, the groups so
    created are non-overlapping.  Is there a need for a more complex
    functionality?
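A sketch of the color/key mechanism just described, again with assumed C
bindings (opaque handles as void *, OUT arguments by address).  The values
used for key and color are placeholders; only the argument order
(comm, key, color, new_group) follows the definition above.

    void *new_group;
    int   my_rank;   /* this process's rank in comm's group, obtained
                        elsewhere (e.g. with MPI_COMM_RANK, Section 3.7)  */

    /* Called by every process in comm's group.  Processes with the same
       color end up in the same new group; within each new group, members
       are ordered by key (stable sort if keys are not unique).           */
    MPI_COLL_SUBGROUP(comm,
                      my_rank,        /* key:   rank order in new_group   */
                      my_rank % 2,    /* color: which new group to join   */
                      &new_group);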
3.6  Operations on Contexts

3.6.1  Local Operations

There are no local operations on contexts.

MPI_CONTEXTS_FREE(n, contexts)

    IN   n            number of contexts to free
    IN   contexts     void * array of contexts

Local deallocation of contexts allocated by MPI_CONTEXTS_ALLOC (below).  It
is erroneous to free a context that is bound to any communicator (either
locally or in another process).  This operation is local (as it must be,
because it does not have a communicator in its argument list).

3.6.2  Collective Operations

MPI_CONTEXTS_ALLOC(comm, n, contexts, len)

    IN   comm         communicator whose group denotes participants
    IN   n            number of contexts to allocate
    OUT  contexts     void * array of contexts
    OUT  len          length of contexts array

Allocates an array of opaque contexts.  This collective operation is executed
by all processes in MPI_COMM_GROUP(comm).  MPI provides special contextual
space to its collective operations (including MPI_CONTEXTS_ALLOC) so that,
despite any on-going point-to-point communication on comm, this operation can
execute safely.  MPI collective functions will have to lock out multiple
threads, so the implementation will evidently have capabilities unavailable
to the user program.

Contexts that are allocated by MPI_CONTEXTS_ALLOC are unique within
MPI_COMM_GROUP(comm).  The array is the same on all processes that call the
function (same order, same number of elements).
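The following fragment sketches the intended usage pattern: a library
collectively obtains contexts of its own from a caller's communicator and
later releases them locally.  The C bindings and argument-passing conventions
are assumptions, not prescribed by the draft; comm is the parameter name from
the definition above.

    void *contexts[2];   /* filled in by the allocation call              */
    int   len;

    /* collective over MPI_COMM_GROUP(comm); safe even with pending
       point-to-point traffic on comm                                     */
    MPI_CONTEXTS_ALLOC(comm, 2, contexts, &len);

    /* ... bind the contexts into communicators, communicate, unbind ...  */

    /* local operation; erroneous if a context is still bound anywhere    */
    MPI_CONTEXTS_FREE(2, contexts);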
    Discussion: MPI_CONTEXTS_ALLOC(comm, n, contexts) was the previous
    definition of this function; then we changed to a group argument in the
    first slot, arguing that the communicator was unnecessary.  We have
    changed back because the group semantics proved not to be thread safe,
    so we had to retain the approach discussed at length at the previous MPI
    meeting.  (In case you did not read the July 10 draft, you have not seen
    a change in this draft compared to the June 24 draft!)  Now, we have
    stronger justification for keeping the approach discussed at the June 24
    meeting.  We have added the len parameter, yielding the current
    formulation, because contexts are now opaque, not integers.

    We have to retain the chicken-and-egg aspect of MPI (i.e., use a
    communicator to get context(s) or a communicator), to get thread safety.
    Yet, we want libraries to control their own fate regarding safety, not
    to rely on the caller to provide a quiescent context.  We achieve this
    by adding the quiescent property for MPI collective communication
    functions.  We must, in fact, push this requirement to the collective
    chapter, but we demonstrate here why our particular collective routines
    need this property.  In a multi-threaded environment, it is clear that
    each temporally overlapping call to a collective operation must be with
    a different communicator.  If this has not been made explicit, it must
    be.

    One can have on-going point-to-point and collective communications on a
    single communicator.  A context is defined to be sufficiently powerful
    to keep both point-to-point and collective operations distinct.  Hence,
    it is always safe to call MPI_COMM_MAKE and MPI_CONTEXTS_ALLOC, even if
    pending asynchronous point-to-point operations are on-going, or messages
    have not been received but are on the receipt queue.  With these rules,
    no quiescent communicator is required in order to get new contexts.  We
    have added demands on the MPI implementation while making contexts
    opaque to make this simpler to realize without saying how it must be
    done.

    In summary, libraries have to get the communicator as a base argument to
    retain thread safety, but they can always safely get communication
    contexts to do further work.  The concept of quiescence is banished to
    be a small detail of implementation, rather than a central tenet of
    library design.  Users must still worry about temporal safety, which is
    not guaranteed by contexts alone (see example below in section 3.11.6).

3.7  Operations on Communicators
3.7.1  Local Communicator Operations

The following are all local (non-communicating) operations.

MPI_COMM_SIZE(comm, size)

    IN   comm         handle to communicator object
    OUT  size         the integer number of processes in the group of comm

MPI_COMM_RANK(comm, rank)

    IN   comm         handle to communicator object
    OUT  rank         the integer rank of the calling process in the group
                      of comm, or MPI_UNDEFINED if the process is not a
                      member

3.7.2  Local Constructors

MPI_COMM_GROUP(comm, group)

    IN   comm         communicator object handle
    OUT  group        group object handle

Accessor that returns the group corresponding to the communicator comm.

MPI_COMM_CONTEXT(comm, context)

    IN   comm         communicator object handle
    OUT  context      context

Returns the context associated with the communicator comm.

MPI_COMM_BIND(group, context, comm_new)

    IN   group        group object handle to be bound to the new communicator
    IN   context      context to be bound to the new communicator
    OUT  comm_new     the new communicator

The above function creates a new communicator object, which is associated
with the group defined by group, and the specified context.  The operation
does not require communication.  It is correct to begin using a communicator
as soon as it is defined.  It is not erroneous to invoke this function twice
in the same process with the same context.  Finally, there is no explicit
synchronization over the group.
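As a sketch (assumed C bindings, names not part of the draft), the local
constructors compose as follows: the group is extracted from an existing
communicator and bound, together with a previously allocated context, into a
new communicator, with no communication or synchronization.

    void *group, *new_comm;
    void *context;   /* obtained earlier, e.g. with MPI_CONTEXTS_ALLOC    */

    MPI_COMM_GROUP(comm, &group);              /* accessor, local         */
    MPI_COMM_BIND(group, context, &new_comm);  /* local; no synchronization
                                                  over the group          */

    /* new_comm may be used as soon as it is defined; binding the same
       context again in this process is not erroneous (see above)         */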
MPI_COMM_UNBIND(comm)

    IN   comm         the communicator to be deallocated

This routine disassociates the group MPI_COMM_GROUP(comm) associated with
comm from the context MPI_COMM_CONTEXT(comm) associated with comm.  The
opaque object comm is deallocated.  Both the group and the context, provided
at the MPI_COMM_BIND call, remain available for further use.  If
MPI_COMM_MAKE (see below) was called in lieu of MPI_COMM_BIND, then there is
no exposed context known to the user, and this quantity is freed by
MPI_COMM_UNBIND.

MPI_COMM_DUP(comm, new_context, new_comm)

    IN   comm         communicator object handle
    IN   new_context  new context to use with new_comm
    OUT  new_comm     communicator object handle

MPI_COMM_DUP duplicates a communicator with all its cached information,
replacing just the context.

3.7.3  Collective Communicator Constructors

MPI_COMM_MAKE(sync_comm, comm_group, comm_new)

    IN   sync_comm    communicator whose incorporated group
                      (MPI_COMM_GROUP(sync_comm)) is the group over which
                      the new communicator comm_new will be defined, also
                      specifying the participants in this synchronizing
                      operation
    IN   comm_group   group of the new communicator; often this will be the
                      same as sync_comm's group, else it must be a subset
                      thereof
    OUT  comm_new     the new communicator

MPI_COMM_MAKE is equivalent to:

    MPI_CONTEXTS_ALLOC(sync_comm, 1, context, len)
    MPI_COMM_BIND(comm_group, context, comm_new)

plus, notionally, internal flags are set in the communicator, denoting that
the context was created as part of the opaque process that made the
communicator (so it can be freed by MPI_COMM_UNBIND).  It is erroneous if
comm_group is not a subset of sync_comm's underlying group.

    Discussion: MPI_COMM_MAKE and MPI_CONTEXTS_ALLOC both require bootstrap
    via a communicator, instead of just the group of that communicator, for
    thread safety.  We have argued this back and forth carefully, and
    conclude that each thread of an MPI program will have one or more
    contexts.
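Written out as code under the same assumed C bindings, and using the
parameter names of the definition above, the stated equivalence is (ignoring
error handling and the notional internal flag that lets MPI_COMM_UNBIND free
the context):

    void *context[1], *comm_new;
    int   len;

    /* what MPI_COMM_MAKE(sync_comm, comm_group, comm_new) amounts to:    */
    MPI_CONTEXTS_ALLOC(sync_comm, 1, context, &len);   /* collective      */
    MPI_COMM_BIND(comm_group, context[0], &comm_new);  /* local           */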
3.8  Introduction to Inter-Communication

This section introduces the concept of inter-communication and describes the
portions of MPI that support it.  It describes support for writing programs
which contain user-level servers.  It also describes a name service which
simplifies writing programs containing inter-communication.

    Discussion: Recommendation and plea for patience: the MPI Committee
    takes a straw poll on whether to have inter-communication or not (as a
    whole) in MPI1.  The most suitable time would be after we hear the
    arguments about the interface at a high level, but before we take up
    this section of the chapter.

3.8.1  Definitions of Inter-Communication and Inter-Communicators

All point-to-point communication described thus far has involved
communication between processes that are members of the same group.  The
source process in a send or the destination process in a receive (the
"target" process) is specified using a (communicator, rank) pair.  The
target process is that process with the given rank within the group of the
given communicator.  This type of communication is called
"intra-communication" and the communicator used is called an
"intra-communicator."

In modular and multi-disciplinary applications, different process groups
execute different modules, and processes within different modules
communicate with one another in a pipeline or a more general module graph.
In these applications the most natural way for a process to specify a target
process is by the rank of the target process within the target group.  In
applications that contain internal user-level servers, each server may be a
process group that provides services to one or more clients, and each client
may be a process group which uses the services of one or more servers.  In
these applications it is again most natural to specify the target process by
rank within the target group.  This type of communication is called
"inter-communication" and the communicator used is called an
"inter-communicator."

An inter-communication operation is a point-to-point communication between
processes in different groups.  The group containing a process that
initiates an inter-communication operation is called the "local group," that
is, the sender in a send and the receiver in a receive.  The group
containing the target process is called the "remote group," that is, the
receiver in a send and the sender in a receive.  As in intra-communication,
the target process is specified using a (communicator, rank) pair.  Unlike
intra-communication, the rank is relative to the remote group.
b(pro)q(cess)g(with)g(rank)f(0)h(in)g(a)75 2280 y(pro)q(cess)f (group)f(is)i(designated)f(\\group)f(leader.")28 b(This)19 b(concept)f(is)g(used)h(in)f(supp)q(ort)g(of)f(user-lev)o(el)75 2337 y(serv)o(ers,)d(and)i(elsewhere.)75 2463 y Fi(3.8.2)49 b(Prop)q(erties)15 b(of)i(Inter-Comm)n(unication)d(and)j(Inter-Comm)n (unicato)o(rs)75 2550 y Ft(Here)e(is)h(a)f(summary)g(of)f(the)i(prop)q (erties)g(of)e(in)o(ter-comm)o(unication)j(and)e(in)o(ter-comm)o(unicators:) 143 2647 y Fg(\017)23 b Ft(The)16 b(syn)o(tax)f(is)i(the)f(same)g(for)f(b)q (oth)h(in)o(ter-)h(and)f(in)o(tra-comm)o(unication.)23 b(The)16 b(same)g(comm)o(u-)189 2704 y(nicator)f(can)g(b)q(e)h(used)g(for)e(b)q(oth)i (send)f(and)h(receiv)o(e)g(op)q(erations.)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 13 16 bop 75 -100 a Fl(3.8.)34 b(INTR)o(ODUCTION)16 b(TO)g(INTER-COMMUNICA)l(TION) 600 b Ft(13)143 45 y Fg(\017)23 b Ft(A)15 b(target)f(pro)q(cess)h(is)h (addressed)g(b)o(y)f(its)g(rank)g(in)h(the)f(remote)g(group.)143 154 y Fg(\017)23 b Ft(Comm)o(unications)12 b(using)i(an)e(in)o(ter-comm)o (unicator)h(are)f(guaran)o(teed)g(not)g(to)g(con\015ict)h(with)g(an)o(y)189 210 y(comm)o(unications)i(that)g(use)h(a)e(di\013eren)o(t)i(comm)o(unicator.) 
  * An inter-communicator cannot be used for collective communication.

  * A communicator will provide either intra- or inter-communication, never
    both.

  * Once constructed, the remote group of an inter-communicator may not be
    changed.  Communication with any process outside of the remote group is
    not allowed.

The routine MPI_COMM_STAT() may be used to determine if a communicator is an
inter- or intra-communicator.  Inter-communicators can be used as arguments
to some of the other communicator inquiry routines (defined above).
Inter-communicators cannot be used as input to any of the local constructor
routines for intra-communicators.  When an inter-communicator is used as an
input argument, the following table describes the behavior of the relevant
MPI_COMM_* functions:

    MPI_COMM_ Function       Behavior (in Inter-Communication Mode)
    ---------------------    ----------------------------------------
    MPI_COMM_SIZE()          returns the size of the remote group
    MPI_COMM_GROUP()         returns the remote group
    MPI_COMM_RANK()          returns MPI_UNDEFINED
    MPI_COMM_CONTEXT()       erroneous
    MPI_COMM_UNBIND()        erroneous
    MPI_COMM_DUP()           erroneous

Construction/Destruction of Inter-Communicators

Construction of an inter-communicator requires two separate collective
operations (one in the local group and one in the remote group) and a
point-to-point operation between the two group leaders.  These operations
may be performed with explicit synchronization of the two groups by calling
MPI_COMM_PEER_MAKE().  The explicit synchronization can cause deadlock in
modular programs with cyclic communication graphs.  So, the local and remote
operations can be decoupled and the
construction performed "loosely synchronously" by calling the two routines
MPI_COMM_PEER_MAKE_START() and MPI_COMM_PEER_MAKE_FINISH().

    Discussion: MPI_COMM_PEER_MAKE_START() and MPI_COMM_PEER_MAKE_FINISH()
    are both collective operations in the local group.  They may leave a
    non-blocking send and receive active between the two calls, where the
    group leaders exchange local communicator information as necessary.
    However, they are not a non-blocking collective operation.

These routines can construct multiple inter-communicators with a single
call.  This improves performance by allowing amortization of the
synchronization overhead.

The inter-communicator objects are destroyed in the same way as
intra-communicator objects, by calling MPI_COMM_FREE().

Support for User-Level Servers

We consider that the primary feature of user-level servers that can require
additional support is that the server cannot a priori know the
identification of the clients, whereas the clients must a priori know the
identification of the servers.  In addition, a user-level server is a
dedicated process group which, after some initialization, provides a given
service until termination.

The support for user-level servers takes into account the prevailing view
that all processes (possibly excepting a host process) are initially
equivalent members of the group of all processes.  This group is described
by the pre-defined intra-communicator MPI_COMM_ALL.
The user splits this group such that processes in each parallel server are
placed within a specific sub-group.  The non-server processes are placed in
a group of all non-servers.  Provided that the user can determine the ranks
of the server group leaders (i.e., rank zero) and assign some tags for
clients to send a message to the group leaders, then a group leader can at
any time notify a server that it wishes to become a client.

MPI provides a routine, MPI_COMM_SPLITL(), that splits a parent group,
creates sub-groups (intra-communicators) according to supplied keys, and
returns the rank of each sub-group leader (relative to the parent group).
This allows a process that does not know about a sub-group to contact that
sub-group via the sub-group leader, using the parent communicator.  The keys
may be used as unique tags.  This information may also be used as input to
MPI_COMM_PEER_MAKE(), for example.
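A sketch of that usage, with assumed C bindings; MPI_COMM_SPLITL itself is
defined in full under "Support for User-Level Servers" below.  The flag
i_am_a_server and the choice of exactly two sub-groups are purely
illustrative.

    int   i_am_a_server;                 /* placeholder, set by the application */
    int   key = i_am_a_server ? 0 : 1;   /* 0: server group, 1: everyone else   */
    int   leaders[2];                    /* leader ranks, relative to the parent */
    void *sub_comm;

    MPI_COMM_SPLITL(MPI_COMM_ALL, key, 2, leaders, &sub_comm);

    /* every process now knows leaders[0], the server group's leader in
       MPI_COMM_ALL, and can contact the server through it                      */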
Name Service

MPI provides a name service to simplify construction of inter-communicators.
This service allows a local process group to create an inter-communicator
when the only available information about the remote group is a user-defined
character string.  A synchronizing version is provided by the routine
MPI_COMM_NAME_MAKE().  A loosely synchronous version is provided by the
routines MPI_COMM_NAME_MAKE_START() and MPI_COMM_NAME_MAKE_FINISH().

3.8.3  Inter-Communication Routines

Synchronous Inter-Communicator Constructors

Both of these routines allow construction of multiple inter-communicators.
Each of these inter-communicators contains the same remote group and
different internal contexts.  Therefore, communication using any of these
inter-communicators will not interfere with communication using any of the
others.

MPI_COMM_PEER_MAKE(my_comm, peer_comm, peer_rank, tag, num_comms, new_comms)

    IN   my_comm      local intra-communicator
    IN   peer_comm    "parent" intra-communicator
    IN   peer_rank    rank of remote group leader in peer_comm
    IN   tag          "safe" tag
    IN   num_comms    number of new inter-communicators to construct
    OUT  new_comms    array of new inter-communicators

This routine constructs an array of inter-communicators and stores it in
new_comms.  Intra-communicator my_comm describes the local group.
Intra-communicator peer_comm describes a group that contains the leaders
(i.e., members with rank zero) of both the local and remote groups.  Integer
peer_rank is the rank of the remote leader in peer_comm.  Integer tag is
used to distinguish this operation from others with the same peer.  Integer
num_comms is the number of new inter-communicators constructed.  This
routine is collective in the local group and synchronizes with the remote
group.  Each of the inter-communicators produced provides inter-communication
with the remote group.
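A sketch of a matching pair of calls, using assumed C bindings.  The names
client_comm, server_comm, server_leader, client_leader and tag are
placeholders; in practice they could come from MPI_COMM_SPLITL as suggested
earlier.

    /* client group (every process):                                      */
    void *to_server[2];
    MPI_COMM_PEER_MAKE(client_comm,     /* my_comm                        */
                       MPI_COMM_ALL,    /* peer_comm: holds both leaders  */
                       server_leader,   /* rank of remote leader in it    */
                       tag, 2, to_server);

    /* server group (the matching, synchronizing call):                   */
    void *to_client[2];
    MPI_COMM_PEER_MAKE(server_comm, MPI_COMM_ALL, client_leader,
                       tag, 2, to_client);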
MPI_COMM_NAME_MAKE(my_comm, name, num_comms, new_comms)

    IN   my_comm      local intra-communicator
    IN   name         name known to both local and remote group leaders
    IN   num_comms    number of new inter-communicators to construct
    OUT  new_comms    array of new inter-communicators

This is the name-served equivalent of MPI_COMM_PEER_MAKE in which the caller
need only know a name for the peer connection.  The same name is supplied by
both the local and remote groups.  The name is removed from the internal
name-server database after both groups have completed MPI_COMM_NAME_MAKE().

Loosely Synchronous Inter-Communicator Constructors

These routines are loosely synchronous counterparts of the synchronous
inter-communicator construction routines described above.

MPI_COMM_PEER_MAKE_START(my_comm, peer_comm, peer_rank, tag, num_comms,
                         make_id)
Ft(This)d(starts)e(o\013)g(a)p 533 701 14 2 v 39 w Fh(PEER)p 662 701 V 17 w(MAKE)h Ft(op)q(eration,)i(returning)f(a) f(handle)h(for)f(the)g(op)q(eration)g(in)75 758 y(\\)p Fh(mak)o(e)p 203 758 V 13 w(id)p Ft(.")42 b(It)23 b(is)g(collectiv)o(e)i(in)e Fh(my)p 763 758 V 14 w(comm)m Ft(.)39 b(It)22 b(do)q(es)h(not)g(w)o(ait)f (for)g(the)h(remote)f(group)g(to)g(do)75 814 y Fh(MPI)p 160 814 V 16 w(COMM)p 318 814 V 16 w(MAKE)p 463 814 V 16 w(ST)l(ART\(\))p Ft(.)34 b(The)20 b(\\)p Fh(m)o(ak)o(e)p 926 814 V 14 w(id)p Ft(")f(handle)i(is)f(conceptually)i(similar)e(to)f(the)h(com-)75 870 y(m)o(unication)d(handle)g(used)f(b)o(y)g(non-blo)q(c)o(king)h(p)q(oin)o (t-to-p)q(oin)o(t)f(routines.)22 b(A)16 b Fh(mak)o(e)p 1538 870 V 13 w(id)g Ft(handle)h(is)g(con-)75 927 y(structed)f(b)o(y)f(a)g(\\)p 380 927 V 16 w Fh(ST)l(ART)p Ft(")i(routine)f(and)f(destro)o(y)o(ed)g(b)o(y)h (the)f(matc)o(hing)h(\\)p 1391 927 V 16 w Fh(FINISH)p Ft(")f(routine.)22 b(These)75 983 y(handles)c(are)e(not)g(v)m(alid)j(for)c(an)o(y)i(other)f (use.)24 b(It)17 b(is)g(erroneous)g(to)e(call)j(this)f(routine)g(again)g (with)g(the)75 1040 y(same)f Fh(p)q(eer)p 273 1040 V 17 w(comm)m Ft(,)c Fh(p)q(eer)p 513 1040 V 17 w(rank)k Ft(and)g Fh(tag)p Ft(,)g(without)g(calling)h Fh(MPI)p 1204 1040 V 16 w(COMM)p 1362 1040 V 16 w(MAKE)p 1507 1040 V 17 w(FINISH\(\))d Ft(to)i(\014nish)75 1096 y(the)f(\014rst)g(call.)75 1202 y Fh(MPI)p 160 1202 V 16 w(COMM)p 318 1202 V 16 w(PEER)p 446 1202 V 17 w(MAKE)p 592 1202 V 16 w(FINISH\(mak)o(e)p 869 1202 V 13 w(id,)g(new)p 1018 1202 V 18 w(comm)m(s\))117 1281 y Fj(IN)171 b Fh(mak)o(e)p 438 1281 V 13 w(id)422 b Fj(handle)14 b(from)26 b Ff(MPI)p 1228 1281 13 2 v 14 w(COMM)p 1372 1281 V 15 w(PEER)p 1491 1281 V 14 w(MAKE)p 1623 1281 V 14 w(ST)m(ART\(\))117 1360 y Fj(OUT)124 b Fh(new)p 411 1360 14 2 v 17 w(comm)m(s)345 b Fj(arra)o(y)14 b(of)f(new)h(in)o(ter-comm)o(unicators)166 1486 y Ft(This)f(completes)g(a)p 512 1486 V 29 w Fh(PEER)p 641 1486 V 17 w(MAKE)f Ft(op)q(eration,)h (returning)g(an)f(arra)o(y)f(of)h Fh(num)p 1520 1486 V 15 w(comm)m(s)e Ft(new)j(in)o(ter-)75 1542 y(comm)o(unicators)k(in)h Fh(new)p 524 1542 V 17 w(comm)m(s)p Ft(.)23 b(\(Note)17 b(that)f Fh(num)p 1027 1542 V 14 w(comm)n(s)e Ft(w)o(as)i(sp)q(eci\014ed)k(in)e(the)f(corresp)q (onding)75 1599 y(call)i(to)d Fh(MPI)p 303 1599 V 16 w(COMM)p 461 1599 V 17 w(PEER)p 590 1599 V 17 w(MAKE)p 736 1599 V 16 w(ST)l(ART\(\))p Ft(.\))26 b(This)18 b(routine)g(is)g(collectiv)o(e)i(in)e (the)f Fh(my)p 1695 1599 V 14 w(comm)11 b Ft(of)75 1655 y(the)h(corresp)q (onding)h(call)g(to)e Fh(MPI)p 656 1655 V 16 w(COMM)p 814 1655 V 16 w(PEER)p 942 1655 V 17 w(MAKE)p 1088 1655 V 16 w(ST)l(ART\(\))p Ft(.)19 b(It)12 b(w)o(aits)g(for)f(the)h(remote)f(group)75 1712 y(to)16 b(call)i Fh(MPI)p 302 1712 V 16 w(COMM)p 460 1712 V 16 w(PEER)p 588 1712 V 17 w(MAKE)p 734 1712 V 16 w(ST)l(ART\(\))f Ft(but)g(do)q(es)g(not)g(w)o(ait)f(for)g(the)h(remote)f(group)h(to)f(call)75 1768 y Fh(MPI)p 160 1768 V 16 w(COMM)p 318 1768 V 16 w(PEER)p 446 1768 V 17 w(MAKE)p 592 1768 V 16 w(FINISH\(\))p Ft(.)75 1874 y Fh(MPI)p 160 1874 V 16 w(COMM)p 318 1874 V 16 w(NAME)p 463 1874 V 17 w(MAKE)p 609 1874 V 16 w(ST)l(ART\(my)p 845 1874 V 14 w(comm)m(,)c(name,)h(num)p 1217 1874 V 14 w(comm)m(s,)f(mak)o(e)p 1493 1874 V 14 w(id\))117 1953 y Fj(IN)171 b Fh(my)p 396 1953 V 13 w(comm)377 b Fj(lo)q(cal)13 b(in)o(tra-comm)o(unicator)117 2032 y(IN)171 b Fh(name)467 b Fj(\\name")10 b(kno)o(wn)g(to)h(b)q(oth)g(lo)q (cal)g(and)g(remote)f(group)h(leaders)117 2111 y(IN)171 b Fh(num)p 422 2111 V 14 w(comm)m(s)337 b 
MPI_COMM_NAME_MAKE_START(my_comm, name, num_comms, make_id)

    IN   my_comm      local intra-communicator
    IN   name         name known to both local and remote group leaders
    IN   num_comms    number of new inter-communicators to construct
    OUT  make_id      handle for MPI_COMM_NAME_MAKE_FINISH()

MPI_COMM_NAME_MAKE_FINISH(make_id, new_comms)

    IN   make_id      handle from MPI_COMM_NAME_MAKE_START()
    OUT  new_comms    array of new inter-communicators

These are the START/FINISH versions of MPI_COMM_NAME_MAKE().  They have
synchronization properties analogous to the corresponding _PEER_ routines.

Communicator Status

This local routine allows the calling process to determine if a communicator
is an inter-communicator or an intra-communicator.

MPI_COMM_STAT(comm, status)

    IN   comm         communicator
    OUT  status       integer status

This returns the status of communicator comm.  Valid status values are
MPI_INTRA, MPI_INTER, and MPI_INVALID.

    Discussion: Should there be any other status values?

Support for User-Level Servers

This collective routine is used to make it easier for processes to form
sub-groups and contact user-level servers.

MPI_COMM_SPLITL(comm, key, nkeys, leaders, sub_comm)

    IN   comm         extant intra-communicator to be "split"
    IN   key          key for sub-group membership
    IN   nkeys        number of keys (number of sub-groups)
    OUT  leaders      ranks of sub-group leaders in comm
    OUT  sub_comm     intra-communicator describing the sub-group of the
                      calling process

This routine splits the group described by intra-communicator comm into
nkeys sub-groups.  Each calling process must specify a value of key in the
range [0 ... (nkeys-1)].  Processes specifying the same key are placed in
the same sub-group.  Ranks of the leaders of each sub-group (relative to
comm) are returned in the integer array leaders.  This routine returns a new
intra-communicator, sub_comm, that describes the sub-group to which the
calling process belongs.

3.8.4  Implementation Notes

Security and Performance Issues

The routines in this section do not introduce insecurity into the basic
usage of MPI.  Specifically, they do not allow contexts to be bound in
multiple usable communicators.

The provision of inter-communication does not adversely affect the
(potential) performance of intra-communication.

"Under the Hood"

A possible implementation of a communicator contains a single group, a send
context, a receive context, and a source (for the message envelope).  This
structure makes intra- and inter-communicators basically the same object.
b(\(Note)75 700 y(that)e(G)g(and)g(H)g(do)h(not)e(ha)o(v)o(e)h(to)g(b)q(e)h(distinct.\))30 b(The)19 b(in)o(ter-comm)o(unicators)f(ha)o(v)o(e)g(the)g(prop)q(erties)75 756 y(that:)g(the)c(send)f(con)o(text)g(of)g Fm(Cp)g Ft(is)h(iden)o(tical)h (to)e(the)g(receiv)o(e)h(con)o(text)f(of)g Fm(Cq)p Ft(,)g(and)g(is)h(unique)h (in)f(H;)f(the)75 813 y(receiv)o(e)i(con)o(text)e(of)h Fm(Cp)g Ft(is)g(iden)o(tical)i(to)d(the)h(send)h(con)o(text)e(of)h Fm(Cq)p Ft(,)f(and)i(is)f(unique)i(in)e(G;)g(the)g(group)f(of)75 869 y Fm(Cp)i Ft(is)g(H;)g(the)h(group)e(of)h Fm(Cq)g Ft(is)h(G;)e(the)h (source)g(of)g Fm(Cp)g Ft(is)h(the)f(rank)g(of)f Fu(P)h Ft(in)h(G,)f(whic)o (h)h(is)f(the)h(group)e(of)75 926 y Fm(Cq)p Ft(;)h(the)g(source)g(of)g Fm(Cq)g Ft(is)g(the)h(rank)f(of)f(Q)i(in)g(H,)f(whic)o(h)h(is)g(the)f(group)g (of)g Fm(Cp)p Ft(.)166 998 y(It)h(is)g(easy)g(to)f(see)i(that)e(in)i(terms)e (of)g(these)i(\014elds,)g(the)f(in)o(tra-comm)o(unicator)f(is)i(a)e(sp)q (ecial)j(case)75 1054 y(of)f(the)g(in)o(ter-comm)o(unicator.)26 b(It)17 b(has)h Fg(G)g Ft(=)f Fg(H)h Ft(and)f(b)q(oth)h(con)o(texts)e(the)h (same.)26 b(This)18 b(ensures)g(that)75 1110 y(the)d(p)q(oin)o(t-to-p)q(oin)o (t)h(comm)o(unication)g(implemen)o(tation)g(for)f(in)o(tra-comm)o(unication)g (and)h(in)o(ter-com-)75 1167 y(m)o(unication)g(can)f(b)q(e)h(iden)o(tical.)75 1398 y Fo(3.9)59 b(Cacheing)75 1530 y Ft(MPI)16 b(pro)o(vides)g(a)g(\\cac)o (heing")g(facilit)o(y)h(that)f(allo)o(ws)g(an)g(application)h(to)f(attac)o(h) f(arbitrary)g(pieces)i(of)75 1586 y(information,)d(called)h Fn(attributes)p Ft(,)g(to)e(con)o(text,)g(group,)g(and)i(comm)o(unicator)e (descriptors;)h(it)g(pro)o(vides)75 1643 y(this)d(facilit)o(y)i(to)d(user)h (programs)f(as)h(w)o(ell.)19 b(A)o(ttributes)11 b(are)g(lo)q(cal)h(to)e(the)h (pro)q(cess)h(and)f(are)g(not)f(included)75 1699 y(if)j(the)f(descriptor)h(w) o(ere)f(someho)o(w)g(sen)o(t)g(to)f(another)h(pro)q(cess)1143 1683 y Fb(1)1163 1699 y Ft(.)19 b(This)13 b(facilit)o(y)h(is)e(in)o(tended)i (to)e(supp)q(ort)75 1755 y(optimizations)22 b(suc)o(h)g(as)f(sa)o(ving)g(p)q (ersisten)o(t)h(comm)o(unication)g(handles)h(and)e(recording)h(top)q(ology-) 75 1812 y(based)c(decisions)h(b)o(y)e(adaptiv)o(e)h(algorithms.)26 b(Ho)o(w)o(ev)o(er,)17 b(attributes)g(are)h(propagated)e(in)o(ten)o(tionally) 75 1868 y(b)o(y)f(sp)q(eci\014c)i(MPI)e(routines.)166 1940 y(T)l(o)f(summarize,)g(cac)o(heing)h(is,)f(in)h(particular,)g(the)f(pro)q (cess)g(b)o(y)g(whic)o(h)h(implemen)o(tation-de\014ned)75 1997 y(data)g(\(and)g(virtual)g(top)q(ology)g(data\))g(is)g(propagated)g(in)h (groups)f(and)g(comm)o(unicators.)166 2151 y Fk(Discussion:)g Fj(A)o(ttribute)g(propagation)e(m)o(ust)g(b)q(e)h(discussed)i(carefully)m(.) 75 2444 y Fi(3.9.1)49 b(F)o(unctionalit)o(y)75 2560 y Ft(MPI)15 b(pro)o(vides)h(the)f(follo)o(wing)h(services)g(related)g(to)e(cac)o(heing.) 
21 b(They)15 b(are)g(all)h(pro)q(cess-lo)q(cal.)p 75 2661 720 2 v 127 2688 a Fr(1)144 2704 y Fa(The)d(deletion)i(of)e (\015atten/un\015atten)i(mak)o(es)f(this)f(p)q(oin)o(t)i(mo)q(ot.)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 19 22 bop 75 -100 a Fl(3.9.)34 b(CA)o(CHEING)1399 b Ft(19)75 45 y Fh(MPI)p 160 45 14 2 v 16 w(A)l(TTRIBUTE)p 425 45 V 17 w(ALLOC\(n,handle)p 760 45 V 18 w(a)o(rra)o(y)l(,len\))117 132 y Fj(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)13 b(of)h(handles)g(to)g(allo)q(cate)117 225 y(OUT)124 b Fh(handle)p 459 225 V 17 w(a)o(rra)o(y)337 b Fj(p)q(oin)o(ter)10 b(to)f(arra)o(y)g(of)g(opaque)g(attribute)h(handling)e (structure)117 319 y(OUT)124 b Fh(len)517 b Fj(length)14 b(of)f(eac)o(h)i (opaque)f(structure)75 453 y Ft(Allo)q(cates)19 b(a)e(new)h(attribute,)g(so)f (user)h(programs)f(and)h(functionalit)o(y)h(la)o(y)o(ered)f(on)f(top)h(of)f (MPI)h(can)75 509 y(access)d(attribute)g(tec)o(hnology)l(.)75 622 y Fh(MPI)p 160 622 V 16 w(A)l(TTRIBUTE)p 425 622 V 17 w(FREE\(handle)p 691 622 V 18 w(a)o(rra)o(y)l(,n\))117 708 y Fj(IN)171 b Fh(handle)p 459 708 V 17 w(a)o(rra)o(y)337 b Fj(arra)o(y)16 b(of)e(p)q(oin)o(ters)j(to)e (opaque)g(attribute)i(handling)d(struc-)905 765 y(tures)117 859 y(IN)171 b Fh(n)548 b Fj(n)o(um)o(b)q(er)13 b(of)h(handles)g(to)g(deallo) q(cate)75 992 y Ft(F)l(rees)h(attribute)g(handle.)75 1105 y Fh(MPI)p 160 1105 V 16 w(GET)p 264 1105 V 17 w(A)l(TTRIBUTE)p 530 1105 V 17 w(KEY\(k)o(eyval\))117 1192 y Fj(OUT)124 b Fh(k)o(eyval)455 b Fj(Pro)o(vide)14 b(the)h(in)o(teger)f(k)o(ey)g(v)n(alue)f(for)h(future)g (storing.)75 1325 y Ft(Generates)h(a)g(new)g(cac)o(he)h(k)o(ey)l(.)75 1438 y Fh(MPI)p 160 1438 V 16 w(SET)p 259 1438 V 17 w(A)l(TTRIBUTE\(handle,)c (k)o(eyval,)f(attribute)p 993 1438 V 18 w(val,)f(attribute)p 1251 1438 V 18 w(len,)h(attribute)p 1510 1438 V 19 w(destructo)o(r)p 1718 1438 V 17 w(routine\))117 1581 y Fj(IN)171 b Fh(handle)449 b Fj(opaque)14 b(attribute)g(handle)117 1675 y(IN)171 b Fh(k)o(eyval)455 b Fj(The)15 b(in)o(teger)f(k)o(ey)g(v)n(alue)f(for)h(future)g(storing.)117 1769 y(IN)171 b Fh(attribute)p 500 1769 V 18 w(val)336 b Fj(attribute)15 b(v)n(alue)e(\(opaque)h(p)q(oin)o(ter\))117 1862 y(IN)171 b Fh(attribute)p 500 1862 V 18 w(len)336 b Fj(length)14 b(of)f(attribute)i (\(in)e(b)o(ytes\))117 1956 y(IN)171 b Fh(attribute)p 500 1956 V 18 w(destructo)o(r)p 707 1956 V 17 w(routine)52 b Fj(What)14 b(one)g(calls)f(to)h(get)g(rid)g(of)f(this)h(attribute)g(later)75 2090 y Ft(Stores)h(attribute)g(in)h(cac)o(he)f(b)o(y)g(k)o(ey)l(.)75 2202 y Fh(MPI)p 160 2202 V 16 w(TEST)p 290 2202 V 16 w(A)l (TTRIBUTE\(handle,k)o(eyval,attribute)p 1000 2202 V 20 w(ptr,len\))117 2289 y Fj(IN)171 b Fh(handle)449 b Fj(opaque)14 b(attribute)g(handle)117 2383 y(IN)171 b Fh(k)o(eyval)455 b Fj(The)15 b(in)o(teger)f(k)o(ey)g(v)n (alue)f(for)h(future)g(storing.)117 2476 y(OUT)124 b Fh(attribute)p 500 2476 V 18 w(ptr)335 b 
Example

Each attribute consists of a pointer or a value of the same size as a pointer, and would typically be a reference to a larger block of storage managed by the module.  As an example, a global operation using cacheing to be more efficient for all contexts of a group after the first call might look like this:

    static int gop_key_assigned = 0;   /* 0 only on first entry */
    static int gop_key;                /* key for this module's stuff */

    efficient_global_op (comm, ...)
    void *comm;
    {
        struct gop_stuff_type *gop_stuff;   /* whatever we need */
        void  *group = mpi_comm_group(comm);

        if (!gop_key_assigned)        /* get a key on first call ever */
        {   gop_key_assigned = 1;
            if ( ! (gop_key = mpi_Get_Attribute_Key()) ) {
                mpi_abort ("Insufficient keys available");
            }
        }
        if (mpi_Test_Attribute (mpi_group_attr(group), gop_key, &gop_stuff))
        {   /* This module has executed in this group before.
               We will use the cached information */
        }
        else
        {   /* This is a group that we have not yet cached anything in.
               We will now do so.
             */
            gop_stuff = /* malloc a gop_stuff_type */

            /* ... fill in *gop_stuff with whatever we want ... */

            mpi_set_attribute (mpi_group_attr(group), gop_key, gop_stuff,
                               gop_stuff_destructor);
        }
        /* ... use contents of *gop_stuff to do the global op ... */
    }

    gop_stuff_destructor (gop_stuff)    /* called by MPI on group delete */
    struct gop_stuff_type *gop_stuff;
    {
        /* ... free storage pointed to by gop_stuff ... */
    }

Discussion: The cache facility could also be provided for other descriptors, but it is less clear how such provision would be useful.  It is suggested that this issue be reviewed in reference to Virtual Topologies.

3.10  Formalizing the Loosely Synchronous Model (Usage, Safety)

3.10.1  Basic Statements

When a caller passes a communicator (which contains a context and group) to a callee, that communicator must be free of side effects throughout execution of the subprogram (quiescent).  This provides one model in which libraries can be written, and work "safely."  For libraries so designated, the callee has permission to do whatever communication it likes with the communicator, and under the above guarantee knows that no other communications will interfere.  Since we permit the creation of new communicators without synchronization (assuming preallocated contexts), this does not impose a significant overhead.

This form of safety is analogous to other common computer science usages, such as passing a descriptor of an array to a library routine.  The library routine has every right to expect such a descriptor to be valid and modifiable.
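A minimal sketch of this contract, in the schematic style of the examples later in this chapter (the routine name, the elided arguments, and the caller's communicator are illustrative): the caller posts no traffic on the communicator for the duration of the call, so the callee may use it freely.

    /* Sketch only: the quiescence contract of 3.10.1.                      */
    /* "solve" is an illustrative library routine, not a proposed MPI call. */
    void solve(void *comm)
    {
        int me;
        mpi_comm_rank(comm, &me);
        /* The callee may do whatever point-to-point and collective         */
        /* communication it likes on comm ...                               */
        mpi_reduce(comm, ...);
        /* ... and, by the caller's guarantee, nothing else interferes.     */
    }

    /* Caller side: work_comm is quiescent while solve() executes.          */
    ...
    solve(work_comm);    /* no sends or receives pending on work_comm here  */
    ...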
3.10.2  Models of Execution

We say that a parallel procedure is active at a process if the process belongs to a group that may collectively execute the procedure, and some member of that group is currently executing the procedure code.  If a parallel procedure is active at a process, then this process may be receiving messages pertaining to this procedure, even if it does not currently execute the code of this procedure.

Nonreentrant parallel procedures

This covers the case where, at any point in time, at most one invocation of a parallel procedure can be active at any process.  That is, concurrent invocations of the same parallel procedure may occur only within disjoint groups of processes.  For example, all invocations of parallel procedures involve all processes, processes are single-threaded, and there are no recursive invocations.

In such a case, a context can be statically allocated to each procedure.  The static allocation can be done in a preamble, as part of initialization code.  Or, it can be done at compile/link time, if the implementation has additional mechanisms to reserve context values.  Communicators to be used by the different procedures can be built in a preamble, if the executing groups are statically defined; if the executing groups change dynamically, then a new communicator has to be built whenever the executing group changes, but this new communicator can be built using the same preallocated context.  If the parallel procedures can be organized into libraries, so that only one procedure of each library can be concurrently active at each processor, then it is sufficient to allocate one context per library.
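A sketch of such a preamble follows, using calls that appear in the examples later in this chapter (mpi_contexts_alloc and mpi_comm_bind).  The two library names and the choice of MPI_GROUP_ALL as the statically defined executing group are assumptions made only for the sketch.

    /* Sketch only: statically allocating one context per library in an     */
    /* initialization preamble, as described above.                         */
    void *fft_context,   *fft_comm;
    void *solve_context, *solve_comm;
    int   len;

    mpi_init();

    /* one preallocated context per library */
    mpi_contexts_alloc(MPI_COMM_ALL, 1, &fft_context,   &len);
    mpi_contexts_alloc(MPI_COMM_ALL, 1, &solve_context, &len);

    /* the executing groups are statically defined here, so the             */
    /* communicators can be built in the preamble as well                   */
    mpi_comm_bind(MPI_GROUP_ALL, fft_context,   &fft_comm);
    mpi_comm_bind(MPI_GROUP_ALL, solve_context, &solve_comm);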
Parallel procedures that are nonreentrant within each executing group

This covers the case where, at any point in time, for each process group, there can be at most one active invocation of a parallel procedure by a process member.  However, it might be possible that the same procedure is concurrently invoked in two partially (or completely) overlapping groups.  For example, the same collective communication function may be concurrently invoked on two partially overlapping groups.

In such a case, a context is associated with each parallel procedure and each executing group, so that overlapping execution groups have distinct communication contexts.  (One does not need a different context for each group; one merely needs a "coloring" of the groups, so that one can generate the communicators for each parallel procedure when the execution groups are defined.)  Here, again, one only needs one context for each library, if no two procedures from the same library can be concurrently active in the same group.

Note that, for collective communication libraries, we do allow several concurrent invocations within the same group: a broadcast in a group may be started at a process before the previous broadcast in that group ended at another process.  In such a case, one cannot rely on context mechanisms to disambiguate successive invocations of the same parallel procedure within the same group: the procedure needs to be implemented so as to avoid confusion.  For example, for broadcast, one may need to carry additional information in messages, such as the broadcast root, to help in such disambiguation; one also relies on preservation of message order by MPI.  With such an approach, we may be gaining performance, but we lose modularity.  It is not sufficient to implement the parallel procedure so that it works correctly in isolation, when invoked only once; it needs to be implemented so that any number of successive invocations will execute correctly.  Of course, the same approach can be used for other parallel libraries.
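The "coloring" idea can be made concrete with a short sketch.  Assume the user has arranged that any two overlapping executing groups carry different colors (just 0 and 1 here); binding each group's communicator to the context associated with its color then keeps overlapping invocations separate.  The calls are those used in the examples later in this chapter; my_group and my_color are illustrative, user-provided values.

    /* Sketch only: one preallocated context per color; overlapping         */
    /* executing groups are assumed to have been given distinct colors.     */
    void *context_for_color[2];
    void *my_group_comm;
    int   len;

    mpi_contexts_alloc(MPI_COMM_ALL, 1, &context_for_color[0], &len);
    mpi_contexts_alloc(MPI_COMM_ALL, 1, &context_for_color[1], &len);

    /* every member of my_group uses the same color, so all of them bind    */
    /* the same context; members of an overlapping group bind the other     */
    mpi_comm_bind(my_group, context_for_color[my_color], &my_group_comm);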
b(a)i(stac)o(k)e(mec)o(hanism)i(can)g(b)q(e)g(used)g (for)f(allo)q(cating)i(new)75 2083 y(con)o(texts.)27 b(Th)o(us,)18 b(a)g(p)q(ossible)h(mec)o(hanism)g(is)f(to)f(allo)q(cate)i(\014rst)e(a)h (large)f(n)o(um)o(b)q(er)i(of)e(con)o(text's)g(\(up)75 2140 y(to)f(the)h(upp)q(er)h(b)q(ound)g(on)f(the)g(depth)g(of)g(nested)g(parallel) h(pro)q(cedure)g(calls\),)g(and)f(then)g(use)g(a)g(lo)q(cal)75 2196 y(stac)o(k)d(managemen)o(t)g(of)g(these)h(con)o(text's)f(on)g(eac)o(h)h (pro)q(cess)g(to)f(create)h(a)f(new)h(comm)o(unicator)g(\(using)75 2253 y Fh(MPI)p 160 2253 14 2 v 16 w(COMM)p 318 2253 V 16 w(MAKE)p Ft(\))g(for)f(eac)o(h)i(new)f(in)o(v)o(o)q(cation.)75 2387 y Fh(The)h(General)f(case)75 2478 y Ft(In)22 b(the)g(general)g(case,)h(there) e(ma)o(y)g(b)q(e)h(m)o(ultiple)i(concurren)o(tly)e(activ)o(e)g(in)o(v)o(o)q (cations)g(of)f(the)g(same)75 2534 y(parallel)i(pro)q(cedure)g(within)g(the)f (same)f(group;)j(in)o(v)o(o)q(cations)e(ma)o(y)f(not)h(b)q(e)g(w)o (ell-nested.)41 b(A)22 b(new)75 2591 y(con)o(text)17 b(need)i(to)e(b)q(e)h (created)g(for)f(eac)o(h)g(in)o(v)o(o)q(cation.)28 b(It)18 b(is)g(the)g(user)g(resp)q(onsibilit)o(y)i(to)d(mak)o(e)g(sure)75 2647 y(that,)k(if)h(t)o(w)o(o)d(distinct)j(parallel)h(pro)q(cedures)e(are)g (in)o(v)o(ok)o(ed)g(concurren)o(tly)h(on)e(o)o(v)o(erlapping)i(sets)e(of)75 2704 y(pro)q(cesses,)15 b(then)h(con)o(text)e(allo)q(cation)j(or)d(comm)o (unicator)h(creation)h(is)f(prop)q(erly)h(co)q(ordinated.)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 23 26 bop 75 -100 a Fl(3.11.)29 b(MOTIV)-5 b(A)l(TING)15 b(EXAMPLES)1056 b Ft(23)75 45 y Fo(3.11)59 b(Motivating)19 b(Examples)75 230 y Fk(Discussion:)c Fj(The)e(in)o(tra-comm)o(uni)o(cation)c(examples)j(w)o (ere)i(\014rst)f(presen)o(ted)i(at)d(the)i(June)f(MPI)g(meeting;)e(the)75 287 y(in)o(ter-comm)o(unication)g(routines)j(\(when)h(added\))f(are)g(new.)75 494 y Fi(3.11.1)49 b(Current)15 b(Practice)h(#1)75 581 y Ft(Example)g(#1a:) 170 678 y Fm(int)24 b(me,)f(size;)170 734 y(...)170 791 y(mpi_init\(\);)170 847 y(mpi_comm_rank\(MPI_COMM_ALL,)e(&me\);)170 904 y (mpi_comm_size\(MPI_COMM_ALL,)g(&size\);)170 1017 y(printf\("Process)h(\045d) i(size)f(\045d\\n",)g(me,)h(size\);)170 1073 y(...)170 1130 y(mpi_end\(\);)75 1226 y Ft(Example)19 b(#1a)f(is)h(a)f(do-nothing)h(program) e(that)h(initializes)k(itself)d(legally)l(,)i(and)d(refers)g(to)g(the)h(the) 75 1282 y(\\all")13 b(comm)o(unicator,)f(and)h(prin)o(ts)f(a)g(message.)19 b(This)13 b(example)g(do)q(es)g(not)f(imply)i(that)e(MPI)g(supp)q(orts)75 1339 y(prin)o(tf-lik)o(e)17 b(comm)o(unication)f(itself.)75 1396 y(Example)g(#1b:)170 1492 y Fm(int)24 b(me,)f(size;)170 1549 y(...)170 1605 y(mpi_init\(\);)170 1662 y(mpi_comm_rank\(MPI_COMM_ALL,)e (&me\);)71 b(/*)23 b(local)g(*/)170 1718 y(mpi_comm_size\(MPI_COMM_ALL,)e (&size\);)i(/*)g(local)g(*/)170 1831 y(if\(\(me)g(\045)h(2\))g(==)f(0\))242 1888 
The General case

In the general case, there may be multiple concurrently active invocations of the same parallel procedure within the same group; invocations may not be well-nested.  A new context needs to be created for each invocation.  It is the user's responsibility to make sure that, if two distinct parallel procedures are invoked concurrently on overlapping sets of processes, then context allocation or communicator creation is properly coordinated.

3.11  Motivating Examples

Discussion: The intra-communication examples were first presented at the June MPI meeting; the inter-communication routines (when added) are new.

3.11.1  Current Practice #1

Example #1a:

    int me, size;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);
    mpi_comm_size(MPI_COMM_ALL, &size);

    printf("Process %d size %d\n", me, size);
    ...
    mpi_end();

Example #1a is a do-nothing program that initializes itself legally, refers to the "all" communicator, and prints a message.  This example does not imply that MPI supports printf-like communication itself.

Example #1b:

    int me, size;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);   /* local */
    mpi_comm_size(MPI_COMM_ALL, &size); /* local */

    if((me % 2) == 0)
        mpi_send(..., MPI_COMM_ALL, ((me + 1) % size));
    else
        mpi_recv(..., MPI_COMM_ALL, ((me - 1 + size) % size));

    ...
    mpi_end();

Example #1b schematically illustrates message exchanges between "even" and "odd" processes in the "all" communicator.

3.11.2  Current Practice #2

    void *data;
    int me;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);

    if(me == 0)
    {
        /* get input, create buffer ``data'' */
        ...
    }

    mpi_broadcast(MPI_COMM_ALL, 0, data);

    ...
    mpi_end();

This example illustrates the use of a collective communication.
3.11.3  (Approximate) Current Practice #3

    int me;
    void *grp0, *grprem, *commslave;
    ...
    mpi_init();
    mpi_comm_rank(MPI_COMM_ALL, &me);                      /* local */
    mpi_local_subgroup(MPI_GROUP_ALL, 1, ``[0]'', &grp0);  /* local */
    mpi_group_difference(MPI_GROUP_ALL, grp0, &grprem);    /* local */
    mpi_comm_make(MPI_COMM_ALL, grprem, &commslave);

    if(me != 0)
    {
        /* compute on slave */
        ...
        mpi_reduce(commslave, ...);
        ...
    }
    /* zero falls through immediately to this reduce, others do later... */
    mpi_reduce(MPI_COMM_ALL, ...);

This example illustrates how a group consisting of all but the zeroth process of the "all" group is created, and then how a communicator is formed (commslave) for that new group.  The new communicator is used in a collective call, and all processes execute a collective call in the MPI_COMM_ALL context.  This example illustrates how the two communicators (which possess distinct contexts) protect communication.  That is, communication in MPI_COMM_ALL is insulated from communication in commslave, and vice versa.

In summary, for communication with "group safety," contexts within communicators must be distinct.

3.11.4  Example #4

The following example is meant to illustrate "safety" between point-to-point and collective communication.  MPI guarantees that a single communicator can do safe point-to-point and collective communication.

    #define TAG_ARBITRARY 12345
    #define SOME_COUNT    50

    int me;
    int i;
    int len;
    void *contexts;
    void *subgroup;
    void *the_comm;

    ...
    mpi_init();
    mpi_contexts_alloc(MPI_COMM_ALL, 1, &contexts, &len);
    mpi_local_subgroup(MPI_GROUP_ALL, 4, ``[2,4,6,8]'', &subgroup);  /* local */
    mpi_group_rank(subgroup, &me);                                   /* local */

    if(me != MPI_UNDEFINED)
    {
        mpi_comm_bind(subgroup, contexts, &the_comm);   /* local */

        /* asynchronous receive: */
        mpi_irecv(..., MPI_SRC_ANY, TAG_ARBITRARY, the_comm);
    }

    for(i = 0; i < SOME_COUNT; i++)
        mpi_reduce(the_comm, ...);
3.11.5  Library Example #1

The main program:

    int done = 0;
    user_lib_t *libh_a, *libh_b;
    void *dataset1, *dataset2;
    ...
    mpi_init();
    ...
    init_user_lib(MPI_COMM_ALL, &libh_a);
    init_user_lib(MPI_COMM_ALL, &libh_b);
    ...
    user_start_op(libh_a, dataset1);
    user_start_op(libh_b, dataset2);
    ...
    while(!done)
    {
        /* work */
        ...
        mpi_reduce(MPI_COMM_ALL, ...);
        ...
        /* see if done */
        ...
    }
    user_end_op(libh_a);
    user_end_op(libh_b);

The user library initialization code:

    void init_user_lib(void *comm, user_lib_t **handle)
    {
        user_lib_t *save;
        void *context;
        void *group;
        int len;

        user_lib_initsave(&save);   /* local */
        mpi_comm_group(comm, &group);
        mpi_contexts_alloc(comm, 1, &context, &len);
        mpi_comm_dup(comm, context, save -> comm);

        /* other inits */
        *handle = save;
    }

Notice that the communicator comm passed to the library is needed to allocate new contexts.

User start-up code:

    void user_start_op(user_lib_t *handle, void *data)
    {
        user_lib_state *state;
        state = handle -> state;
        mpi_irecv(handle -> comm, ..., data, ..., &(state -> irecv_handle));
        mpi_isend(handle -> comm, ..., data, ..., &(state -> isend_handle));
    }

User clean-up code:

    void user_end_op(user_lib_t *handle)
    {
        mpi_wait(handle -> state -> isend_handle);
        mpi_wait(handle -> state -> irecv_handle);
    }
3.11.6  Library Example #2

The main program:

    int ma, mb;
    ...
    list_a := ``[0,1]'';
    list_b := ``[0,2{,3}]'';

    mpi_local_subgroup(MPI_GROUP_ALL, 2, list_a, &group_a);
    mpi_local_subgroup(MPI_GROUP_ALL, 2(3), list_b, &group_b);

    mpi_comm_make(MPI_COMM_ALL, group_a, &comm_a);
    mpi_comm_make(MPI_COMM_ALL, group_b, &comm_b);

    mpi_comm_rank(comm_a, &ma);
    mpi_comm_rank(comm_b, &mb);

    if(ma != MPI_UNDEFINED)
        lib_call(comm_a);
    if(mb != MPI_UNDEFINED)
    {
        lib_call(comm_b);
        lib_call(comm_b);
    }

The library:

    void lib_call(void *comm)
    {
        int me, done = 0;
        mpi_comm_rank(comm, &me);
        if(me == 0)
            while(!done)
            {
                mpi_recv(..., comm, MPI_SRC_ANY);
                ...
            }
        else
        {
            /* work */
            mpi_send(..., comm, 0);
            ...
        }
        MPI_SYNC(comm);    /* include or omit, for safety / no safety */
    }

The above example is really two examples, depending on whether or not you include rank 3 in list_b.  This example illustrates that, despite contexts, subsequent calls to lib_call with the same context need not be safe from one another ("back masking").  Safety is realized if the MPI_SYNC is added.  What this demonstrates is that libraries have to be written carefully, even with contexts.

Algorithms like "combine" have strong enough source selectivity so that they are inherently OK.  So are multiple calls to a typical tree broadcast algorithm with the same root.  However, multiple calls to a typical tree broadcast algorithm with different roots could break.  Therefore, such algorithms would have to utilize the tag to keep things straight.  All of the foregoing is a discussion of "collective calls" implemented with point-to-point operations.  MPI implementations may or may not implement collective calls using point-to-point operations.  These algorithms are used to illustrate the issues of correctness and safety, independent of how MPI implements its collective calls.
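One way to "utilize the tag" for the preceding point: derive the tag from an invocation counter, so that a message left over from one lib_call cannot be captured by the next call on the same communicator.  This is a sketch only; the per-process static counter (adequate when the library is used on a single communicator; a per-communicator count could instead be kept in the attribute cache of Section 3.9), the placement of the tag in the send and receive argument lists, and the routine name are all illustrative.

    /* Sketch only: tag = invocation number, so successive invocations      */
    /* cannot capture each other's messages.  The counter agrees at all     */
    /* members because each of them makes the same sequence of calls.       */
    void lib_call_tagged(void *comm)
    {
        static int invocation = 0;
        int tag = invocation++;      /* tag used by every message below     */
        int me, done = 0;

        mpi_comm_rank(comm, &me);
        if(me == 0)
            mpi_recv(..., /* tag: */ tag, comm, MPI_SRC_ANY);
        else
            mpi_send(..., /* tag: */ tag, comm, 0);

        /* no MPI_SYNC is needed here: a late message from invocation k     */
        /* carries tag k and cannot match a receive posted with tag k+1     */
    }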
b(that)g(the)h(sync)o (hronous)f(in)o(ter-comm)o(unicator)h(constructor)75 841 y(\()p Fh(MPI)p 178 841 14 2 v 15 w(COMM)p 335 841 V 17 w(PEER)p 464 841 V 17 w(MAKE\(\))p Ft(\))9 b(can)i(b)q(e)h(safely)e(used)i(here)f(since)h (there)e(is)i(no)e(cyclic)j(comm)o(unication.)170 953 y Fm(void)24 b(*)f(myComm;)453 b(/*)24 b(intra-communicator)d(of)j(local)f(sub-group)f(*/) 170 1010 y(void)i(*)f(myFirstComm;)786 b(/*)24 b(inter-communicator)d(*/)170 1066 y(void)j(*)f(mySecondComm;)237 b(/*)24 b(second)f(inter-communicator)e (\(group)i(B)h(only\))f(*/)170 1123 y(int)h(membershipKey;)170 1179 y(int)g(subGroupLeaders[3];)170 1292 y(MPI_INIT\(\);)170 1349 y(...)170 1462 y(/*)g(User)f(code)h(must)f(generate)g(membershipKey)f (in)h(the)h(range)f([0,)h(1,)f(2])h(*/)170 1518 y(membershipKey)f(=)g(...)h (;)170 1631 y(/*)g(Build)f(intra-communicator)f(for)h(local)g(sub-group)g (and)g(get)h(group)f(leaders)g(*/)170 1687 y(/*)h(of)g(each)f(sub-group)g (\(relative)f(to)i(MPI_COMM_ALL\).)e(*/)170 1744 y (MPI_COMM_SPLITL\(MPI_COMM_ALL,)e(membershipKey,)i(3,)i(subGroupLeaders,)e (&myComm\);)170 1857 y(/*)i(Build)f(inter-communicators.)45 b(Tags)24 b(are)f(hard-coded.)f(*/)170 1913 y(if)i(\(membershipKey)e(==)i (0\))266 1970 y({)692 b(/*)23 b(Group)h(0)f(communicates)g(with)g(group)g(1.) h(*/)266 2026 y(MPI_COMM_PEER_MAKE\(myComm)o(,)d(MPI_COMM_ALL,)h (subGroupLeaders[1],)g(10,)h(1,)719 2083 y(&myFirstComm\);)266 2139 y(})170 2195 y(else)h(if)f(\(membershipKey)f(==)i(1\))266 2252 y({)525 b(/*)23 b(Group)h(1)f(communicates)f(with)i(groups)f(0)h(and)f (2.)h(*/)266 2308 y(MPI_COMM_PEER_MAKE\(myComm)o(,)d(MPI_COMM_ALL,)h (subGroupLeaders[0],)g(10,)h(1,)719 2365 y(&myFirstComm\);)266 2421 y(MPI_COMM_PEER_MAKE\(myComm)o(,)e(MPI_COMM_ALL,)h(subGroupLeaders[2],)g (21,)h(1,)719 2478 y(&mySecondComm\);)266 2534 y(})170 2591 y(else)h(if)f(\(membershipKey)f(==)i(2\))266 2647 y({)692 b(/*)23 b(Group)h(2)f(communicates)g(with)g(group)g(1.)h(*/)266 2704 y(MPI_COMM_PEER_MAKE\(myComm)o(,)d(MPI_COMM_ALL,)h(subGroupLeaders[1],)g(21,) h(1,)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 29 32 bop 75 -100 a Fl(3.11.)29 b(MOTIV)-5 b(A)l(TING)15 b(EXAMPLES)1056 b Ft(29)719 45 y Fm(&myFirstComm\);)266 102 y(})75 333 y Fh(Example)13 b(2:)19 b(Three-Group)e(\\Ring")290 419 y Fm(+-----------------------)o (--------)o(-------)o(-------)o(--------)o(------+)290 476 y(|)1408 b(|)290 532 y(|)95 b(+---------+)213 b(+---------+)h(+---------+)94 b(|)290 589 y(|)h(|)215 b(|)f(|)h(|)g(|)f(|)96 b(|)290 645 y(+-->)23 b(|)h(Group)f(0)h(|)f(<----->)g(|)h(Group)f(1)h(|)g(<----->)f(|)g (Group)h(2)f(|)h(<--+)409 701 y(|)215 b(|)f(|)h(|)g(|)f(|)409 758 y(+---------+)f(+---------+)h(+---------+)166 856 y Ft(Groups)18 b(0)f(and)i(1)f(comm)o(unicate.)29 b(Groups)17 b(1)h(and)h(2)e(comm)o (unicate.)29 b(Groups)18 b(0)g(and)g(2)g(com-)75 912 y(m)o(unicate.)49 b(Therefore,)26 
b(eac)o(h)f(requires)h(t)o(w)o(o)d(in)o(ter-comm)o (unicators.)48 b(Note)24 b(that)g(the)h("lo)q(osely)75 969 y(sync)o(hronous")f(in)o(ter-comm)o(unicator)g(constructor)f(\()h Fh(MPI)p 1154 969 14 2 v 16 w(COMM)p 1312 969 V 16 w(PEER)p 1440 969 V 17 w(MAKE)p 1586 969 V 16 w(ST)l(ART\(\))h Ft(and)75 1025 y Fh(MPI)p 160 1025 V 16 w(COMM)p 318 1025 V 16 w(PEER)p 446 1025 V 17 w(MAKE)p 592 1025 V 16 w(FINISH\(\))p Ft(\))c(is)h(the)g(b)q (est)g(c)o(hoice)g(here)h(due)f(to)f(the)h(cyclic)h(comm)o(uni-)75 1082 y(cation.)170 1179 y Fm(void)h(*)f(myComm;)453 b(/*)24 b(intra-communicator)d(of)j(local)f(sub-group)f(*/)170 1236 y(void)i(*)f(myFirstComm;)762 b(/*)24 b(inter-communicators)d(*/)170 1292 y(void)j(*)f(mySecondComm;)170 1349 y(make_id)g(firstMakeID,)g (secondMakeID;)332 b(/*)23 b(handles)g(for)h("FINISH")f(*/)170 1405 y(int)h(membershipKey;)170 1462 y(int)g(subGroupLeaders[3];)170 1574 y(MPI_INIT\(\);)170 1631 y(...)170 1744 y(/*)g(User)f(code)h(must)f (generate)g(membershipKey)f(in)h(the)h(range)f([0,)h(1,)f(2])h(*/)170 1800 y(membershipKey)f(=)g(...)h(;)170 1913 y(/*)g(Build)f (intra-communicator)f(for)h(local)g(sub-group)g(and)g(get)h(group)f(leaders)g (*/)170 1970 y(/*)h(of)g(each)f(sub-group)g(\(relative)f(to)i (MPI_COMM_ALL\).)e(*/)170 2026 y(MPI_COMM_SPLITL\(MPI_COMM_ALL,)e (membershipKey,)i(3,)i(subGroupLeaders,)e(&myComm\);)170 2139 y(/*)i(Build)f(inter-communicators.)45 b(Tags)24 b(are)f(hard-coded.)f(*/)170 2195 y(if)i(\(membershipKey)e(==)i(0\))266 2252 y({)525 b(/*)23 b(Group)h(0)f(communicates)f(with)i(groups)f(1)h(and)f(2.)h(*/)266 2308 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[2],)g(20,)862 2365 y(1,)i(&firstMakeID\);)266 2421 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[1],)g(10,)862 2478 y(1,)i(&secondMakeID\);)266 2534 y(})170 2591 y(else)g(if)f(\(membershipKey)f(==)i(1\))266 2647 y({)525 b(/*)23 b(Group)h(1)f(communicates)f(with)i(groups)f(0)h(and)f (2.)h(*/)266 2704 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[0],)g(10,)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 30 33 bop 75 -100 a Ft(30)414 b Fl(SECTION)16 b(3.)30 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)862 45 y Fm(1,)24 b(&firstMakeID\);)266 102 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[2],)g(21,)862 158 y(1,)i(&secondMakeID\);)266 214 y(})170 271 y(else)g(if)f(\(membershipKey)f(==)i(2\))266 327 y({)525 b(/*)23 b(Group)h(2)f(communicates)f(with)i(groups)f(0)h(and)f (1.)h(*/)266 384 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[1],)g(21,)862 440 y(1,)i(&firstMakeID\);)266 497 y(MPI_COMM_PEER_MAKE_START\()o(myComm,)c(MPI_COMM_ALL,)i (subGroupLeaders[0],)g(20,)862 553 y(1,)i(&secondMakeID\);)266 610 y(})170 723 y(/*)g(Everyone)f(has)g(the)h(same)f("FINISH")g(code...)g(*/) 170 779 y(MPI_COMM_PEER_MAKE_FINISH\(fir)o(stMakeID)o(,)e(&myFirstComm\);)170 835 
y(MPI_COMM_PEER_MAKE_FINISH\(sec)o(ondMakeI)o(D,)g(&mySecondComm\);)75 954 y Fh(Example)13 b(3:)19 b(Three-Group)e(\\Pip)q(eline")e(Using)h(Name)d (Service)409 1040 y Fm(+---------+)213 b(+---------+)h(+---------+)409 1097 y(|)h(|)f(|)h(|)g(|)f(|)409 1153 y(|)24 b(Group)f(0)h(|)f(<----->)g(|)h (Group)f(1)h(|)g(<----->)f(|)g(Group)h(2)f(|)409 1210 y(|)215 b(|)f(|)h(|)g(|)f(|)409 1266 y(+---------+)f(+---------+)h(+---------+)166 1364 y Ft(Groups)21 b(0)g(and)h(1)f(comm)o(unicate.)39 b(Groups)21 b(1)g(and)h(2)f(comm)o(unicate.)39 b(Therefore,)23 b(group)e(0)75 1420 y(requires)e(one)f(in)o(ter-comm)o(unicator,)h(group)e(1)h(requires)h(t) o(w)o(o)e(in)o(ter-comm)o(unicators,)h(and)g(group)g(2)75 1477 y(requires)d(1)f(in)o(ter-comm)o(unicator.)20 b(Note)14 b(that)g(the)h(sync)o (hronous)f(in)o(ter-comm)o(unicator)h(constructor)75 1533 y(\()p Fh(MPI)p 178 1533 14 2 v 15 w(COMM)p 335 1533 V 17 w(NAME)p 481 1533 V 17 w(MAKE\(\))p Ft(\))f(can)j(b)q(e)f(safely)h(used)f(here)h (since)g(there)f(is)h(no)f(cyclic)h(comm)o(unica-)75 1590 y(tion.)170 1687 y Fm(void)24 b(*)f(myComm;)453 b(/*)24 b(intra-communicator)d(of)j (local)f(sub-group)f(*/)170 1744 y(void)i(*)f(myFirstComm;)786 b(/*)24 b(inter-communicator)d(*/)170 1800 y(void)j(*)f(mySecondComm;)237 b(/*)24 b(second)f(inter-communicator)e(\(group)i(B)h(only\))f(*/)170 1913 y(MPI_INIT\(\);)170 1970 y(...)170 2083 y(/*)h(User)f(builds)g (intra-communicator)f(myComm)h(describing)f(the)i(local)f(sub-group)g(*/)170 2139 y(/*)h(using)f(any)h(appropriate)e(MPI)h(routine\(s\).)47 b(\(For)23 b(example,)g(myComm)g(could)g(*/)170 2195 y(/*)h(have)f(been)h (passed)f(in)g(as)h(an)f(argument)g(to)h(a)g(user)f(subroutine.\))f(*/)170 2252 y(myComm)h(=)h(...)g(;)170 2365 y(/*)g(Build)f(inter-communicators.)45 b(Group)23 b(membership)g(conditions)f(must)i(be)f(*/)170 2421 y(/*)h(provided)f(by)g(the)h(user.)f(*/)170 2478 y(if)h(\(\))266 2534 y({)692 b(/*)23 b(Group)h(0)f(communicates)g(with)g (group)g(1.)h(*/)266 2591 y(MPI_COMM_NAME_MAKE\(myComm)o(,)d("Connect)i(10",) g(1,)h(&myFirstComm\);)266 2647 y(})170 2704 y(else)g(if)f(\(\))1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Page: 31 34 bop 75 -100 a Fl(3.11.)29 b(MOTIV)-5 b(A)l(TING)15 b(EXAMPLES)1056 b Ft(31)266 45 y Fm({)525 b(/*)23 b(Group)h(1)f(communicates)f(with)i(groups) f(0)h(and)f(2.)h(*/)266 102 y(MPI_COMM_NAME_MAKE\(myComm)o(,)d("Connect)i (10",)g(1,)h(&myFirstComm\);)266 158 y(MPI_COMM_NAME_MAKE\(myComm)o(,)d ("Connect)i(21",)g(1,)h(&mySecondComm\);)266 214 y(})170 271 y(else)g(if)f(\(\))266 327 y({)692 b(/*)23 b(Group)h(2)f(communicates)g(with)g(group)g(1.)h(*/)266 384 y(MPI_COMM_NAME_MAKE\(myComm)o(,)d("Connect)i(21",)g(1,)h (&myFirstComm\);)266 440 y(})75 559 y Fh(Example)13 b(4:)19 b(Three-Group)e(\\Ring")e(Using)g(Name)e(Service)290 645 y Fm(+-----------------------)o(--------)o(-------)o(-------)o(--------)o (------+)290 701 y(|)1408 b(|)290 758 y(|)95 
b(+---------+)213 b(+---------+)h(+---------+)94 b(|)290 814 y(|)h(|)215 b(|)f(|)h(|)g(|)f(|)96 b(|)290 871 y(+-->)23 b(|)h(Group)f(0)h(|)f(<----->)g(|)h(Group)f(1)h(|)g (<----->)f(|)g(Group)h(2)f(|)h(<--+)409 927 y(|)215 b(|)f(|)h(|)g(|)f(|)409 984 y(+---------+)f(+---------+)h(+---------+)166 1082 y Ft(Groups)18 b(0)f(and)i(1)f(comm)o(unicate.)29 b(Groups)17 b(1)h(and)h(2)e(comm)o (unicate.)29 b(Groups)18 b(0)g(and)g(2)g(com-)75 1138 y(m)o(unicate.)49 b(Therefore,)26 b(eac)o(h)f(requires)h(t)o(w)o(o)d(in)o(ter-comm)o (unicators.)48 b(Note)24 b(that)g(the)h("lo)q(osely)75 1194 y(sync)o(hronous")20 b(in)o(ter-comm)o(unicator)h(constructor)f(\()h Fh(MPI)p 1141 1194 14 2 v 15 w(COMM)p 1298 1194 V 17 w(NAME)p 1444 1194 V 17 w(MAKE)p 1590 1194 V 16 w(ST)l(ART\(\))g Ft(and)75 1251 y Fh(MPI)p 160 1251 V 16 w(COMM)p 318 1251 V 16 w(NAME)p 463 1251 V 17 w(MAKE)p 609 1251 V 16 w(FINISH\(\))p Ft(\))15 b(is)h(the)g(b)q(est)g(c)o(hoice)g(here)h(due)f(to)f(the)h(cyclic)h(comm)o (unica-)75 1307 y(tion.)170 1405 y Fm(void)24 b(*)f(myComm;)453 b(/*)24 b(intra-communicator)d(of)j(local)f(sub-group)f(*/)170 1462 y(void)i(*)f(myFirstComm;)762 b(/*)24 b(inter-communicators)d(*/)170 1518 y(void)j(*)f(mySecondComm;)170 1574 y(make_id)g(firstMakeID,)g (secondMakeID;)332 b(/*)23 b(handles)g(for)h("FINISH")f(*/)170 1687 y(MPI_INIT\(\);)170 1744 y(...)170 1857 y(/*)h(User)f(builds)g (intra-communicator)f(myComm)h(describing)f(the)i(local)f(sub-group)g(*/)170 1913 y(/*)h(using)f(any)h(appropriate)e(MPI)h(routine\(s\).)47 b(\(For)23 b(example,)g(myComm)g(could)g(*/)170 1970 y(/*)h(have)f(been)h (passed)f(in)g(as)h(an)f(argument)g(to)h(a)g(user)f(subroutine.\))f(*/)170 2026 y(myComm)h(=)h(...)g(;)170 2139 y(/*)g(Build)f(inter-communicators.)45 b(Group)23 b(membership)g(conditions)f(must)i(be)f(*/)170 2195 y(/*)h(provided)f(by)g(the)h(user.)f(*/)170 2252 y(if)h(\(\))266 2308 y({)525 b(/*)23 b(Group)h(0)f(communicates)f(with)i (groups)f(1)h(and)f(2.)h(*/)266 2365 y(MPI_COMM_NAME_MAKE_START\()o(myComm,)c ("Connect)j(20",)g(1,)h(&firstMakeID\);)266 2421 y (MPI_COMM_NAME_MAKE_START\()o(myComm,)c("Connect)j(10",)g(1,)h (&secondMakeID\);)266 2478 y(})170 2534 y(else)g(if)f(\(\))266 2591 y({)525 b(/*)23 b(Group)h(1)f(communicates)f(with)i(groups)f (0)h(and)f(2.)h(*/)266 2647 y(MPI_COMM_NAME_MAKE_START\()o(myComm,)c ("Connect)j(10",)g(1,)h(&firstMakeID\);)266 2704 y (MPI_COMM_NAME_MAKE_START\()o(myComm,)c("Connect)j(21",)g(1,)h (&secondMakeID\);)-32 46 y Fr(1)-32 103 y(2)-32 159 y(3)-32 215 y(4)-32 272 y(5)-32 328 y(6)-32 385 y(7)-32 441 y(8)-32 498 y(9)-40 554 y(10)-40 611 y(11)-40 667 y(12)-40 724 y(13)-40 780 y(14)-40 836 y(15)-40 893 y(16)-40 949 y(17)-40 1006 y(18)-40 1062 y(19)-40 1119 y(20)-40 1175 y(21)-40 1232 y(22)-40 1288 y(23)-40 1345 y(24)-40 1401 y(25)-40 1457 y(26)-40 1514 y(27)-40 1570 y(28)-40 1627 y(29)-40 1683 y(30)-40 1740 y(31)-40 1796 y(32)-40 1853 y(33)-40 1909 y(34)-40 1966 y(35)-40 2022 y(36)-40 2078 y(37)-40 2135 y(38)-40 2191 y(39)-40 2248 y(40)-40 2304 y(41)-40 2361 y(42)-40 2417 y(43)-40 2474 y(44)-40 2530 y(45)-40 2587 y(46)-40 2643 y(47)-40 2699 y(48)p eop %%Page: 32 35 bop 75 -100 a Ft(32)409 b Fl(SECTION)16 b(3.)35 b(GR)o(OUPS,)15 b(CONTEXTS,)g(AND)g(COMMUNICA)l(TORS)266 45 y Fm(})170 102 y(else)24 b(if)f(\(\))266 158 y({)525 b(/*)23 b(Group)h(2)f(communicates)f(with)i(groups)f(0)h(and)f(1.)h(*/)266 214 y(MPI_COMM_NAME_MAKE_START\()o(myComm,)c("Connect)j(21",)g(1,)h (&firstMakeID\);)266 271 y(MPI_COMM_NAME_MAKE_START\()o(myComm,)c("Connect)j (20",)g(1,)h(&secondMakeID\);)266 327 y(})170 440 
y(/*)g(Everyone)f(has)g (the)h(same)f("FINISH")g(code...)g(*/)170 497 y (MPI_COMM_NAME_MAKE_FINISH\(fir)o(stMakeID)o(,)e(&myFirstComm\);)170 553 y(MPI_COMM_NAME_MAKE_FINISH\(sec)o(ondMakeI)o(D,)g(&mySecondComm\);)1967 46 y Fr(1)1967 103 y(2)1967 159 y(3)1967 215 y(4)1967 272 y(5)1967 328 y(6)1967 385 y(7)1967 441 y(8)1967 498 y(9)1959 554 y(10)1959 611 y(11)1959 667 y(12)1959 724 y(13)1959 780 y(14)1959 836 y(15)1959 893 y(16)1959 949 y(17)1959 1006 y(18)1959 1062 y(19)1959 1119 y(20)1959 1175 y(21)1959 1232 y(22)1959 1288 y(23)1959 1345 y(24)1959 1401 y(25)1959 1457 y(26)1959 1514 y(27)1959 1570 y(28)1959 1627 y(29)1959 1683 y(30)1959 1740 y(31)1959 1796 y(32)1959 1853 y(33)1959 1909 y(34)1959 1966 y(35)1959 2022 y(36)1959 2078 y(37)1959 2135 y(38)1959 2191 y(39)1959 2248 y(40)1959 2304 y(41)1959 2361 y(42)1959 2417 y(43)1959 2474 y(44)1959 2530 y(45)1959 2587 y(46)1959 2643 y(47)1959 2699 y(48)p eop %%Trailer end userdict /end-hook known{end-hook}if %%EOF From owner-mpi-context@CS.UTK.EDU Tue Aug 17 01:06:17 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA00703; Tue, 17 Aug 93 01:06:17 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03165; Tue, 17 Aug 93 01:05:00 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 17 Aug 1993 01:04:55 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA03157; Tue, 17 Aug 93 01:04:30 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA19046; Tue, 17 Aug 93 00:04:03 CDT Date: Tue, 17 Aug 93 00:04:03 CDT From: Tony Skjellum Message-Id: <9308170504.AA19046@Aurora.CS.MsState.Edu> To: igl@ecs.soton.ac.uk Subject: Re: Interim (August 13) draft (post first reading) Cc: mpi-context@cs.utk.edu I believe this to be the postscript (uuencoded) of our best draft. - Tony begin 640 context.ps M)2%04RU!9&]B92TR+C *)25##H@," P(#8Q M,B W.3(*)25%;F1#;VUM96YTV)I;F0@9&5F?4X@+U,@+V5X8V@*;&]A9"!D968@+UA[4R!. M?4(@+U12("]TF4@;F5G(&UU;"!44B!M871R M:7@@8W5R"!D=7 @9'5P(#0@9V5T"G)O=6YD(#0@97AC:"!P M=70@9'5P(&1U<" U(&=E="!R;W5N9" U(&5X8V@@<'5T('-E=&UA=')I>'U. M("] ;&5T=&5R>R]VF4@,3 N-CDR.3$S,S@U."!. 
M?4(@+T!A,WL*+W9S:7IE(#$U+C4U,S$@3GU"("] ;&5D9V5R>R]VF4@,3,@3GU"("] ;6%N=6%L9F5E9'L*T-H87)"=6EL9&5R?4X@+T5N8V]D:6YG($E%($X@96YD M(&1U<'LO9F]O('-E=&9O;G1],@IA2!C;W!Y(&-V>"!.(&QO860@,"!N M;B!P=70@+V-TR]S9B Q($X@+V9N=')X($9-870@3B!D M9BUT86EL?0I"("]D9G-[9&EV("]S9B!8("]F;G1R>%MS9B P(# @'MC:"UD871A(&1U<"!L96YG=&@@,2!S=6(@9V5T?4(* M+V-H+6EM86=E>V-H+61A=&$@9'5P('1Y<&4@+W-T" P"F-H+7AO9F8@8V@M>6]F9B!C:"UH96EG:'0@V-C(#$@861D M($1]0B O8F]P>W5S97)D:6-T("]B;W M:&]O:PIK;F]W;GMB;W M:&]O:WUI M9B O4TD@S *-R!G971I;G1EWMG"!R=6QE M>2!F86QS92!236%T>T)$;W1]:6UA9V5M87-K(&=R97-T;W)E?7U[>V=S879E M(%12("TN,0HM+C$@5%(@RTR($U]0B O9GLM,2!-?4(@+V=[,"!-?4(@ M+VA[,2!-?4(@+VE[,B!-?4(*+VI[,R!-?4(@+VM[-"!-?4(@+W=[,"!R;6]V M971O?4(@+VQ[<" M-"!W?4(@+VU[<" M,R!W?4(@+VY[<" M,B!W?4(@+V][ M<" M,2!W"GU"("]Q>W @,2!W?4(@+W)[<" R('=]0B OW @-"!W?4(@+WA[,"!3(')M;W9E=&]]0B O>7LS(#(@R]34R!S879E($Y]0B O96]S>V-L96%R(%-3(')E2!&>"A$;RER*&-U;65N*6XH="DR M, IB*&9O2A!=6=U M2A4:&ES*62EF*$%24"EL*$$I9RAA;F0I M9RA.4T8I9RAU;F1E2EG*'1H92DQ.3(@,30S-B!Y*$YA=&EO;F%L*6@H M4V-I96YC92EF*$8I;"AO=6YD871I;VXI:2A38VEE;F-E*64H86YD*6D**%0I M;"AE8REO*&AN;VQO9WDI9BA#96XI;RAT97(I9BA#;REQ*&]P*7$H97)A=&EV M*6\H92DW-B Q-#DT"GDH06=R965M96XI;RAT*64H3F\N*3(R(&(H0T-2+3@X M,#DV,34L*60H86YD*64H8BEO*'DI92AT:&4I:"A#;VUM:7-S:6]N*64H;V8I M"FHH=&AE*68H175R;W I<2AE86XI:2A#;VUM*6\H=6YI="EN*'DI-C4T(#$U M-3(@>2AT:')O=6=H*68H17-P2!&="@S+C$I-#8*8BA);BEO*'1R;REQ*&1U8W1I;VXI,34@8B!&'0I,S(*8B!&2@S+C,N,2DU,"!B*%!R961E M7# Q-&YE9"DQ-R!B*$=R;W5P2@S+C4I-#8@8BA'2@S+C4N,2DU,"!B M*$QO*7$H8V%L*3$V(&(H3W I<2AE2@S+C4N,BDU M,"!B*$QO*7$H8V%L*3$V(&(H1W)O=7 I9BA#;VYS=')U8W1O2@S+C8I-#8@8BA/<"EQ*&5R871I;VYS*3$V"F(H;VXI M9BA#;VXI;RAT97AT2@S+C8N,2DU,"!B*$QO*7$H8V%L M*3$V(&(H3W I<2AE2@S+C8N,BDU,"!B*$-O;&QE M8W1I=BEO*&4I,3<@8BA/<"EQ*&5R871I;VYS*30S(&(@1G,H.BDR,@IB*#HI M:"@Z*68H.BEH*#HI9B@Z*62@S+C2@S+C@I-#8@ M8BA);BEO*'1R;REQ*&1U8W1I;VXI,38@8BAT;REE*$EN*6\H=&5R+4-O;6TI M;PHH=6YI8V%T:6]N*6<@1G,H.BDR,B!B*#HI9R@Z*6@H.BEF*#HI9R@Z*6@H M.BEF*#HI9R@Z*6@H.BEF*#HI9R@Z*6@H.BEF*#HI9R@Z*0IH*#HI9B@Z*6@H M.BEF*#HI9R@Z*6@H.BEF*#HI-CD@8B!&="@Q,BDR-#@@,32@S+C@N,RDU, IB*$EN*6\H=&5R+4-O;6TI;RAU;FEC871I;VXI,38@ M8BA2;W5T:6YE2@S+CDN,2DU,"!B*$8I;"AU;F-T:6]N86QI="EO*'DI M"C,R(&(@1G,H.BDR,R!B*#HI9B@Z*6FEN M9RDQ-B!B*'1H92EF*$QO*7$H;W-E;'DI:"A3>6YC*6\**&AR;VYO=7,I9RA- M;REQ*&1E;"EG*%PH57-A9V4L*64H4V%F970I;RAY7"DI:B!&2@S+C$P+C$I,C<@8BA"87-I8RDQ-B!B*%-T871E;65N*6\H=',I M,C(*8B!&2@S+C$Q+C(I,C<*8BA#=7)R96XI M;RAT*3$U(&(H4')A8W1I8V4I9R@C,BDR-R!B($9S*#HI,C(@8B@Z*62@S+C$Q+C4I,C<@8BA,:6)R87)Y*3$V(&(H M17AA;7!L92EG*",Q*3(R(&(*1G,H.BEG*#HI9R@Z*6@H.BEF*#HI:"@Z*68H M.BEG*#HI:"@Z*68H.BEG*#HI:"@Z*68H.BEG*#HI:"@Z*68H.BEG*#HI:"@Z M*68H.BD*9R@Z*6@H.BEF*#HI:"@Z*68H.BEG*#HI:"@Z*68H.BDV.2!B($9T M*#(U*3(T." R-38U('DH,RXQ,2XV*3(W"F(H3&EB2DQ-B!B*$5X86UP M;&4I9R@C,BDR,B!B($9S*#HI9R@Z*62!&2@R*2TS,@HQ-3D@>2@S*2TS,B R,34@>2@T*2TS,B R-S(@>2@U M*2TS,B S,C@@>2@V*2TS,B S.#4@>2@W*2TS,@HT-#$@>2@X*2TS,B T.3@@ M>2@Y*2TT," U-30@>2@Q,"DM-# @-C$Q('DH,3$I+30P(#8V-R!Y*#$R*2TT M, HW,C0@>2@Q,RDM-# @-S@P('DH,30I+30P(#@S-B!Y*#$U*2TT," X.3,@ M>2@Q-BDM-# @.30Y('DH,32@Q."DM-# @,3 V,B!Y*#$Y M*2TT," Q,3$Y('DH,C I+30P(#$Q-S4@>2@R,2DM-# @,3(S,@IY*#(R*2TT M," Q,C@X('DH,C,I+30P(#$S-#4@>2@R-"DM-# @,30P,2!Y*#(U*2TT," Q M-#4W('DH,C8I+30P"C$U,30@>2@R-RDM-# @,34W,"!Y*#(X*2TT," Q-C(W M('DH,CDI+30P(#$V.#,@>2@S,"DM-# @,32@S,RDM-# @,3DP.2!Y*#,T*2TT," Q.38V('DH,S4I M+30P"C(P,C(@>2@S-BDM-# @,C W."!Y*#,W*2TT," R,3,U('DH,S@I+30P M(#(Q.3$@>2@S.2DM-# @,C(T. 
From owner-mpi-context@CS.UTK.EDU Fri Aug 20 19:35:38 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA00433; Fri, 20 Aug 93 19:35:38 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02897; Fri, 20 Aug 93 19:34:57 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 20 Aug 1993 19:34:56 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from watson.ibm.com by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02889; Fri, 20 Aug 93 19:34:55 -0400 Message-Id: <9308202334.AA02889@CS.UTK.EDU> Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 7773; Fri, 20 Aug 93 19:34:53 EDT Date: Fri, 20 Aug 93 19:20:06 EDT From: "Marc Snir" X-Addr: (914) 945-3204 (862-3204) 28-226 IBM T.J. Watson Research Center P.O. Box 218 Yorktown Heights NY 10598 To: mpi-context@cs.utk.edu Subject: C12C2 Reply-To: SNIR@watson.ibm.com I suggest to add C1 to C2 and C2 to C1 communicator conversion functions. Both functions are collective.
MPI_C1toC2(comm, inwhich, newcomm), where "inwhich" is either 0 or 1, splits a group into two subgroups and creates a C2 communicator for this pair of groups. Order is preserved within each subgroup. MPI_C2toC1(comm, whichfirst, newcomm) merges the two groups and creates a new communicator for the resulting group. "whichfirst" can be either 0 or 1. All processes within one subgroup should supply 0 and all processes within the other subgroup should supply 1. In the merged group, processes that supplied 0 appear before processes that supplied 1. (Need some rule for the case where the groups are not distinct) Rationale for C2toC1: If we move to a dynamic world, new processes will not appear in the MPI_ALL group but, most likely, will appear in a group (communicator) that contains them and their parent, which is in MPI_ALL. We can create a C2 communicator for MPI_ALL and the new children and, later, merge the groups. We might have a shorthand function that does the merge in one pass (and has a syntax similar to the syntax for C2 creation: two groups, a bridge group, names of leaders, etc.) Rationale for C1toC2: If we start in a closed universe, we may use this function to create, in a convenient manner, server partitions known to everybody. From owner-mpi-context@CS.UTK.EDU Sat Aug 21 11:20:10 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA01959; Sat, 21 Aug 93 11:20:10 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28339; Sat, 21 Aug 93 11:19:32 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Sat, 21 Aug 1993 11:19:31 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA28331; Sat, 21 Aug 93 11:19:30 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA01624; Sat, 21 Aug 93 10:19:27 CDT Date: Sat, 21 Aug 93 10:19:27 CDT From: Tony Skjellum Message-Id: <9308211519.AA01624@Aurora.CS.MsState.Edu> To: SNIR@watson.ibm.com, mpi-context@cs.utk.edu Subject: Re: C12C2 Yes, please add to draft. I concur. -Tony From owner-mpi-context@CS.UTK.EDU Wed Aug 25 14:10:28 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA25941; Wed, 25 Aug 93 14:10:28 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA16146; Wed, 25 Aug 93 14:09:18 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 25 Aug 1993 14:09:16 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA16136; Wed, 25 Aug 93 14:09:15 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA16212; Wed, 25 Aug 93 13:09:13 CDT Date: Wed, 25 Aug 93 13:09:13 CDT From: Tony Skjellum Message-Id: <9308251809.AA16212@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: Cacheing ----- Begin Included Message ----- From owner-mpi-pt2pt@CS.UTK.EDU Wed Aug 25 10:09:01 1993 X-Resent-To: mpi-pt2pt@CS.UTK.EDU ; Wed, 25 Aug 1993 10:56:21 EDT Date: Wed, 25 Aug 1993 15:56:06 +0100 From: James Cownie To: mpi-pt2pt@cs.utk.edu Subject: Cacheing Content-Length: 3610 At the last meeting the context committee presented a proposal for a cacheing facility within MPI to allow the user to associate data with two classes of MPI objects (groups and communicators). This proposal was greeted with less than total enthusiasm, particularly (it appeared) because of concerns about store allocation issues.
Let me explain why I am in favour of a cacheing proposal, and why I don't believe that store allocation is a (large) issue. Recap on the extant proposal ============================ The proposal provides the ability for user code to be given unique keys which it can use to store a void * on any group or communicator. If you like, there is a symbol table associating the pair (MPI_OBJ,key) with the void * value. (A better implementation might be to have the lookup table stored on the MPI object, and actually implement it as a flat table indexed by the key, but this is not necessary for the scheme to work.) In addition it associates with each key value two functions, a destructor function called by MPI when an MPI object which has a value associated with this key is destroyed, and a replicator function called by MPI when an MPI object which has a value associated with this key is replicated (e.g. MPI_GROUP_DUP). Note that these functions are ALWAYS called synchronously with the execution of the user code, since they are called when the user is making an MPI call (MPI_xxx_DUP or MPI_xxx_FREE); they are NEVER called by the MPI system "in an interrupt routine". What does this proposal buy us ? ================================ 1) It makes it much easier to support layering of functionality on top of MPI. e.g. Topology information needs to be associated with MPI group objects, and copied or deleted as the groups are changed. User collective operations may need to have a separate communicator from that passed to them (so that they operate in their own context). They can now associate this with the communicator they are passed, AND (because of the call back) correctly free it when the parent communicator is deleted by the user code. 2) It allows the user to write collective functions which behave EXACTLY like the MPI collective functions. Without the callbacks this is impossible, as there is no point at which the user code gains control to free its data structures when MPI objects are deleted. The major gain comes from the call back functions. The cacheing is merely a convenient place to hang the callback functions, and know which ones should be called. What does it cost ? =================== I don't think the cost is large. Indeed it could be implemented using the function re-naming scam on top of existing MPI without any problem. As I point out above, all of the store allocation can be done (should be done ?) in user space on the way into or out of the system. There is no need for this to impact anything "inside the kernel" at all. The user functions are only called while we're on the user stack anyway, as a result of a user call to MPI. They're also only called at non-time critical points (how often does a good code create or replicate groups compared with how many times it communicates ?). Conclusion ========== I think it buys us a lot (in expansibility) and costs us a little. I think we should have it! -- Jim James Cownie Meiko Limited Meiko Inc.
650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Wed Aug 25 15:44:28 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA26435; Wed, 25 Aug 93 15:44:28 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23786; Wed, 25 Aug 93 15:43:41 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 25 Aug 1993 15:43:41 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA23778; Wed, 25 Aug 93 15:43:39 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA19515; Wed, 25 Aug 93 14:43:38 CDT Date: Wed, 25 Aug 93 14:43:38 CDT From: Tony Skjellum Message-Id: <9308251943.AA19515@Aurora.CS.MsState.Edu> To: mpi-context@cs.utk.edu Subject: Status of Context chapter Mark Snir has the "lock" on the whole chapter, except Cacheing, until September 6 (nominally). Rik Littlefield has lock on the rest till he's done, I hope by same date. An intermediate version will be published as soon as these two colleagues send me back their updates. Thanks for your patience. Jim, thanks for input. - Tony From owner-mpi-context@CS.UTK.EDU Tue Sep 7 05:24:56 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA00191; Tue, 7 Sep 93 05:24:56 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02796; Tue, 7 Sep 93 02:41:29 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 7 Sep 1993 02:41:27 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02788; Tue, 7 Sep 93 02:41:25 -0400 Received: from snacker.pnl.gov (130.20.186.18) by pnlg.pnl.gov; Mon, 6 Sep 93 23:39 PDT Received: by snacker.pnl.gov (4.1/SMI-4.1) id AA14464; Mon, 6 Sep 93 23:36:46 PDT Date: Mon, 6 Sep 93 23:36:46 PDT From: rj_littlefield@pnlg.pnl.gov Subject: copying ? To: mpi-context@cs.utk.edu Cc: rj_littlefield@pnlg.pnl.gov Message-Id: <9309070636.AA14464@snacker.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu I have a couple of questions about copying groups and communicators. 1. The current context chapter says that MPI_GROUP_DUP is "essential to the support of virtual topologies". I do not understand this, and the examples are of no help. (Only one example calls MPI_COMM_DUP, and no example calls MPI_GROUP_DUP.) Can someone please explain why MPI_GROUP_DUP is considered so important, and perhaps provide an illustrative example? 2. The chapter says that MPI_COMM_{,LOCAL_,COLL_}DUP "duplicates the existing communicator comm with all its cached information, replacing just the context, and creates the new communicator new_comm". Does a communicator bind a group by reference or by copy? That is, does dup'ing a communicator imply dup'ing its group? I would hope that the group is bound by reference, but it needs to be explicit one way or the other. 
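For concreteness, the two possibilities Rik is asking about can be pictured with a hypothetical descriptor layout; this is only an illustrative sketch, and none of these structure or field names come from the draft chapter.

    /* Hypothetical descriptor layouts, for illustration only. */
    typedef struct {
       int   ref_count;      /* how many communicators / user handles refer to this group */
       int   size;
       int  *ranks;
       void *attributes;     /* cached group attributes */
    } group_desc;

    typedef struct {
       group_desc *group;    /* binding BY REFERENCE: dup'ing the communicator
                                copies this pointer and bumps group->ref_count;
                                no new group descriptor is created             */
       int         context;
       void       *attributes;   /* cached communicator attributes */
    } comm_desc;

    /* Binding BY COPY would instead mean that dup'ing a communicator also
       performs an implicit group dup, producing a second group_desc (and,
       depending on the copy rules, a second set of cached group attributes). */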
--Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 From owner-mpi-context@CS.UTK.EDU Tue Sep 7 05:24:57 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA00197; Tue, 7 Sep 93 05:24:57 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02201; Tue, 7 Sep 93 02:31:31 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 7 Sep 1993 02:31:29 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA02189; Tue, 7 Sep 93 02:31:24 -0400 Received: from snacker.pnl.gov (130.20.186.18) by pnlg.pnl.gov; Mon, 6 Sep 93 23:30 PDT Received: by snacker.pnl.gov (4.1/SMI-4.1) id AA14455; Mon, 6 Sep 93 23:27:30 PDT Date: Mon, 6 Sep 93 23:27:30 PDT From: rj_littlefield@pnlg.pnl.gov Subject: DRAFT cacheing section To: mpi-context@cs.utk.edu Cc: rj_littlefield@pnlg.pnl.gov Message-Id: <9309070627.AA14455@snacker.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Folks, Here is a DRAFT rewrite of the cacheing section of the contexts chapter. I think that the content is pretty much as I wish, although there are still some typos & syntax errors in the example. One aspect that I am very concerned about is the interaction between cacheing and copying. I have taken a conservative position in this proposal. If you are a fan of copying, please check what I have done and propose enhancements. --Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 % ----------------------------------------------------------------- \section{Cacheing} MPI provides a ``cacheing'' facility that allows an application to attach arbitrary pieces of information, called {\em attributes}, to group and communicator descriptors. More precisely, the cacheing facility allows a portable library to: \begin{itemize} \item pass information between calls by associating it with an MPI descriptor, \item quickly retrieve that information, and \item be guaranteed that out of date information is never retrieved, even if the descriptor is freed and its handle reused by MPI. \end{itemize} These capabilities, in some form, are required by built-in MPI routines such as collective communication and virtual topology. Defining an interface to these capabilities as part of the MPI standard is valuable because it permits routines like collective communication and virtual topology to be implemented as portable code, and also because it makes MPI more extensible by allowing user-written routines to use standard MPI calling sequences. \subsection{Functionality} Attributes are local to the process and specific to the descriptor to which they are attached. They are not propagated by MPI from one descriptor to another except when the descriptor is exactly copied (by \func{...\_DUP}), and even then the application must give specific permission for the attribute to be copied. Attributes are scalar values, equal in size to or larger than a C-language pointer. The cacheing interface defined here represents attributes as being stored by MPI in some sort of opaque data structure called an {\em attribute set}.
Accessor functions are provided for groups and communicators, that return a handle to the associated attribute set. Further accessor functions, acting on the attribute set, are provided to: \begin{itemize} \item obtain a key value (used to identify an attribute); \item store and retrieve the value of an attribute; \item specify ``callback'' functions by which MPI informs the application when the group or communicator is destroyed (by \func{...\_FREE}) or copied (by \func{...\_DUP}). \end{itemize} \discuss{At the August MPI meeting, concern was expressed that any form of the cacheing facility might trigger unpleasant interactions between memory management in system and user spaces. As discussed on the reflectors by Jim Cownie, the cacheing and callback functions are only called synchronously, in response to explicit application request. Because of this restriction, the memory management concern is considered to be resolved.} \discuss{The choice of key values is under control of MPI. This allows MPI to optimize its implementation of attribute sets. It also avoids conflict between independent modules cacheing information on the same groups and communicators.} \discuss{At the August MPI meeting, it was noted that a much smaller interface, consisting of just a callback facility, would allow the entire cacheing facility to be implemented by portable code. Defining such a minimal interface would have the pleasant effect of making the MPI specification and implementation smaller. However, it would also imply lower efficiency. With the minimal callback interface, some form of table searching is implied by the need to handle arbitrary group and communicator handles. In contrast, the more complete interface defined here permits rapid access to attributes through the use of pointers in group and communicator descriptors (to find the attribute table) and cleverly chosen key values (to retrieve individual attributes). In light of the efficiency hit from the minimal interface, the more complete interface proposed here is felt to be superior.} \discuss{Also at the August MPI meeting, it was suggested that a somewhat larger interface would permit attribute tables to be used independently of group and communicator descriptors, and that this would be a good tradeoff because it would be a relatively cheap way of providing extra functionality. It turns out that the increase in size, to provide a fully consistent interface, is larger than was suggested at the meeting. (For example, if callbacks are supposed to be meaningful in isolation, then in addition to a mechanism for defining them, there must also be mechanisms for finding out that callbacks have been defined and for invoking them at appropriate times.) Accordingly, this proposal defines attribute tables only in connection with groups and communicators.} MPI provides the following services related to cacheing. They are all process-local. % These belong to the suggested ``larger'' interface % \begin{funcdef}{MPI\_ATTRIBUTE\_ALLOC(n,handle\_array,len)} % \funcarg {\IN}{ n}{ number of handles to allocate} % \funcarg {\OUT}{ handle\_array}{ pointer to array of opaque attribute handling structure} % \funcarg {\OUT}{ len}{ length of each opaque structure} % \end{funcdef} % Allocates a new attribute, so user programs and functionality layered % on top of MPI can access attribute technology. 
% % \begin{funcdef}{MPI\_ATTRIBUTE\_FREE(handle\_array,n)} % \funcarg {\IN}{ handle\_array}{ array of pointers to opaque attribute handling structures} % \funcarg {\IN}{ n}{ number of handles to deallocate} % \end{funcdef} % Frees attribute handle. \begin{funcdef}{MPI\_GROUP\_ATTR (group,attribute\_set)} \funcarg {\IN}{group}{handle to group descriptor} \funcarg {\OUT}{attribute\_set}{handle to attribute set} \end{funcdef} Given a group, returns that group's attribute set. \begin{funcdef}{MPI\_COMM\_ATTR (comm,attribute\_set)} \funcarg {\IN}{comm}{handle to communicator descriptor} \funcarg {\OUT}{attribute\_set}{handle to attribute set} \end{funcdef} Given a communicator, returns that communicator's attribute set. \begin{funcdef}{MPI\_GET\_ATTRIBUTE\_KEY(keyval)} \funcarg {\OUT}{ keyval}{ integer key value, to be used in subsequent attribute store and retrieve calls.} \end{funcdef} Generates a new attribute key. \begin{funcdef} {MPI\_PUT\_ATTRIBUTE\_VALUE(attribute\_set, keyval, attribute\_val)} \funcarg{\IN}{attribute\_set}{handle to attribute set} \funcarg{\IN}{ keyval} {integer key value, as returned by MPI\_GET\_ATTRIBUTE\_KEY} \funcarg{\IN}{ attribute\_val}{attribute value} \end{funcdef} Stores an attribute value for subsequent retrieval by \func{MPI\_GET\_ATTRIBUTE\_VALUE}. \begin{funcdef} {MPI\_GET\_ATTRIBUTE\_VALUE(attribute\_set,keyval,attribute\_val,found)} \funcarg{\IN}{attribute\_set}{handle to attribute set} \funcarg{\IN}{ keyval}{integer key value} \funcarg{\OUT}{ attribute\_val}{attribute value, unless found = \cconst{MPI\_NOT\_FOUND}} \funcarg{\OUT}{found}{\cconst{MPI\_FOUND} or \cconst{MPI\_NOT\_FOUND}, depending on whether the attribute value has been put into this attribute set} \end{funcdef} Retrieves attribute value by key. \begin{funcdef}{MPI\_PUT\_ATTRIBUTE\_DESTRUCTOR\_CALLBACK (attribute\_set, keyval, attribute\_destructor\_routine)} \funcarg{\IN}{attribute\_set} {handle to attribute set} \funcarg{\IN}{keyval}{key value} \funcarg{\IN}{attribute\_destructor\_routine}{routine to be called by MPI} \end{funcdef} When the group or communicator is freed, MPI calls the destructor routine for each attribute that has one defined. The calling sequence is \func{attribute\_destructor\_routine(attribute\_set,keyval,attribute\_val)}. \begin{funcdef}{MPI\_PUT\_ATTRIBUTE\_COPIER\_STRATEGY (attribute\_set, keyval, strategy, attribute\_copier\_routine)} \funcarg{\IN}{attribute\_set} {handle to attribute set} \funcarg{\IN}{keyval}{key value} \funcarg{\IN}{strategy}{one of \cconst{MPI\_ATTR\_COPY\_FREELY}, \cconst{MPI\_ATTR\_COPY\_NEVER}, or \cconst{MPI\_ATTR\_COPY\_CALLBACK}} \funcarg{\IN}{attribute\_copier\_routine}{routine to be called by MPI, if strategy = \cconst{MPI\_ATTR\_COPY\_CALLBACK}} \end{funcdef} Defines how MPI propagates an attribute when the group or communicator is copied (via \func{...\_DUP}). The default is \cconst{MPI\_ATTR\_COPY\_NEVER}, which means that MPI must not copy that attribute. For \cconst{MPI\_ATTR\_COPY\_FREELY}, MPI will copy the attribute and not inform the application. For \cconst{MPI\_ATTR\_COPY\_CALLBACK}, MPI copies the value of the attribute and then calls the copier routine as \func{attribute\_copier\_routine(attribute\_set,keyval,attribute\_val)}. The copier routine can, but does not have to, replace the copied value by doing an explicit \func{MPI\_PUT\_ATTRIBUTE\_VALUE(attribute\_set,keyval,new\_value)}. The most common use of the copy callback mechanism is expected to be to maintain reference counts on extra storage associated with the attribute.
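For illustration only, here is a minimal sketch of the two non-default copy strategies; it assumes C bindings that mirror the names above, that the variables comm, cached_size, table_ptr and table_copier already exist, and that a null routine argument is acceptable when no callback is wanted (none of these assumptions is settled by this proposal). A self-contained scalar can be copied freely, whereas a pointer to shared storage needs the callback so that a reference count can be maintained (the Example below shows such a copier routine).
\begin{verbatim}
MPI_Attribute_Set attr_set;
int size_key, table_key;

MPI_COMM_ATTR (comm, &attr_set);    /* attribute set of communicator */
MPI_GET_ATTRIBUTE_KEY (&size_key);  /* one key per attribute */
MPI_GET_ATTRIBUTE_KEY (&table_key);

/* A self-contained scalar: MPI may copy it silently on ..._DUP */
MPI_PUT_ATTRIBUTE_VALUE (attr_set, size_key, cached_size);
MPI_PUT_ATTRIBUTE_COPIER_STRATEGY
   (attr_set, size_key, MPI_ATTR_COPY_FREELY, (void *)0);

/* A pointer to shared storage: MPI calls table_copier on ..._DUP,
   so that the library can adjust its reference count */
MPI_PUT_ATTRIBUTE_VALUE (attr_set, table_key, table_ptr);
MPI_PUT_ATTRIBUTE_COPIER_STRATEGY
   (attr_set, table_key, MPI_ATTR_COPY_CALLBACK, table_copier);
\end{verbatim}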
% \begin{funcdef}{MPI\_DELETE\_ATTRIBUTE(handle, keyval)} % \funcarg{\IN}{ handle}{ opaque attribute handle} % \funcarg{\IN}{ keyval}{ The integer key value for future storing.} % \end{funcdef} % Delete attribute from cache by key. \subsubsection{Example} This example shows how to write a collective communication operation that uses cacheing to be more efficient after the first call. The coding style assumes that MPI function results return only error statuses.
\begin{verbatim}
static int gop_key_assigned = 0;   /* 0 only on first entry */
static int gop_key;                /* key for this module's stuff */

typedef struct
{
   int ref_count;   /* reference count */
   /* other stuff, whatever else we want */
} gop_stuff_type;

Efficient_Collective_Op (comm, ...)
MPI_comm comm;
{
   gop_stuff_type *gop_stuff;
   MPI_Group group;
   MPI_Attribute_Set attr_set;
   MPI_Flag foundflag;

   MPI_COMM_GROUP(comm, &group);

   if (!gop_key_assigned)   /* get a key on first call ever */
   {
      gop_key_assigned = 1;
      if (MPI_GET_ATTRIBUTE_KEY(&gop_key))   /* nonzero means error */
      {
         mpi_abort ("Insufficient keys available");
      }
   }

   MPI_GROUP_ATTR(group, &attr_set);
   MPI_GET_ATTRIBUTE_VALUE (attr_set, gop_key, &gop_stuff, &foundflag);

   if (foundflag == MPI_FOUND)
   {  /* This module has executed in this group before.
         We will use the cached information */
   }
   else
   {  /* This is a group that we have not yet cached anything in.
         We will now do so. */

      /* First, allocate storage for the stuff we want,
         and initialize the reference count */
      gop_stuff = (gop_stuff_type *) malloc (sizeof(gop_stuff_type));
      if (gop_stuff == NULL) { /* abort on out-of-memory error */ }
      gop_stuff -> ref_count = 1;

      /* Second, fill in *gop_stuff with whatever we want.
         This part isn't shown here */

      /* Third, store gop_stuff as the attribute value */
      MPI_PUT_ATTRIBUTE_VALUE (attr_set, gop_key, gop_stuff);

      /* Fourth, tell MPI that it can copy the attribute,
         but has to call us back when it does so. */
      MPI_PUT_ATTRIBUTE_COPIER_STRATEGY
         (attr_set, gop_key, MPI_ATTR_COPY_CALLBACK, gop_stuff_copier);

      /* Fifth, install the destructor routine */
      MPI_PUT_ATTRIBUTE_DESTRUCTOR_CALLBACK
         (attr_set, gop_key, gop_stuff_destructor);
   }
   /* Then, in any case, use contents of *gop_stuff
      to do the global op ... */
}

/* The following routine is called by MPI when a group is freed */
gop_stuff_destructor (attr_set, keyval, gop_stuff)
MPI_Attribute_Set attr_set;
int keyval;
gop_stuff_type *gop_stuff;
{
   if (keyval != gop_key) { /* abort -- programming error */ }

   /* The group's being freed removes one reference to this gop_stuff */
   gop_stuff -> ref_count -= 1;

   /* If no references remain, then free the storage */
   if (gop_stuff -> ref_count == 0) { free((void *)gop_stuff); }
}

/* The following routine is called by MPI when a group is copied */
gop_stuff_copier (attr_set, keyval, gop_stuff)
MPI_Attribute_Set attr_set;
int keyval;
gop_stuff_type *gop_stuff;
{
   if (keyval != gop_key) { /* abort -- programming error */ }

   /* The new group adds one reference to this gop_stuff */
   gop_stuff -> ref_count += 1;
}
\end{verbatim}
\discuss{The cache facility could also be provided for other descriptors, but it is less clear how such provision would be useful.
It is suggested that this issue be reviewed in reference to Virtual Topologies.} %---------------------------------------------------------------------- From owner-mpi-context@CS.UTK.EDU Tue Sep 7 10:25:40 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02011; Tue, 7 Sep 93 10:25:40 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08627; Tue, 7 Sep 93 09:35:30 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 7 Sep 1993 09:35:28 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from [129.215.56.21] by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08605; Tue, 7 Sep 93 09:35:25 -0400 Date: Tue, 7 Sep 93 14:34:59 BST Message-Id: <22326.9309071334@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: (unlocked) context chapter draft To: mpi-context@cs.utk.edu Reply-To: lyndon@epcc.ed.ac.uk Howdy y'all Any sign of an (unlocked) contexts chapter draft around here? I'm off-line from Friday to the next meeting, so I'd really appreciate seeing the updated chapter this week. Thanks and best wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Sep 7 10:59:45 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02266; Tue, 7 Sep 93 10:59:45 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15155; Tue, 7 Sep 93 10:59:31 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 7 Sep 1993 10:59:30 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA15132; Tue, 7 Sep 93 10:59:22 -0400 Via: uk.ac.southampton.ecs; Tue, 7 Sep 1993 15:58:29 +0100 Via: brewery.ecs.soton.ac.uk; Tue, 7 Sep 93 15:49:09 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Tue, 7 Sep 93 16:00:29 BST Date: Tue, 7 Sep 93 16:00:31 BST Message-Id: <16751.9309071500@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu Subject: Re: copying ? > From: rj_littlefield@gov.pnl.pnlg > I have a couple of questions about copying groups and communicators. > > 1. The current context chapter says that MPI_GROUP_DUP is "essential to > the support of virtual topologies". I do not understand this, and > the examples are of no help. (Only one example calls MPI_COMM_DUP, > and no example calls MPI_GROUP_DUP.) > > Can someone please explain why MPI_GROUP_DUP is considered so > important, and perhaps provide an illustrative example? I suspect, as you appear to, that it is not `essential'. Remember that the process topologies proposal, as it stands, has groups everywhere as arguments to routines, where there possibly ought to be communicators. Until the last meeting, the groups and contexts proposal was in such a state of flux, that it wasn't clear what precisely ought to be there, but then Rolf wasn't at the last meeting to advise on what changes to make, and the second reading of that chapter was a bit of a shambles. It could be that one of the COMM_DUP routines is actually what's needed now. On the other hand, if we're going to have local manipulation of groups, then I suppose you ought to be able to duplicate them. 
Ian -- I.Glendinning@ecs.soton.ac.uk Ian Glendinning Tel: +44 703 593368 Dept of Electronics and Computer Science Fax: +44 703 593045 University of Southampton SO9 5NH England From owner-mpi-context@CS.UTK.EDU Tue Sep 7 11:32:16 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02446; Tue, 7 Sep 93 11:32:16 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17750; Tue, 7 Sep 93 11:31:21 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 7 Sep 1993 11:31:19 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA17732; Tue, 7 Sep 93 11:31:11 -0400 Received: from snacker.pnl.gov (130.20.186.18) by pnlg.pnl.gov; Tue, 7 Sep 93 08:30 PDT Received: by snacker.pnl.gov (4.1/SMI-4.1) id AA15154; Tue, 7 Sep 93 08:27:36 PDT Date: Tue, 7 Sep 93 08:27:36 PDT From: rj_littlefield@pnlg.pnl.gov Subject: tex macros To: mpi-context@cs.utk.edu Cc: rj_littlefield@pnlg.pnl.gov Message-Id: <9309071527.AA15154@snacker.pnl.gov> X-Envelope-To: mpi-context@cs.utk.edu Ian Glendinning writes: > Rik, > could you please send me (or perhaps better, post to the mail list) a > copy of the various Latex command definitions used in your caching proposal. > (e.g. \funcdef) Thanks, > Ian The official macros and the tex source that I started with can be anonymous ftp'd from cse.ogi.edu as pub/otto/MPI/*.tex . My piece on cacheing fits into file 'context.tex', which requires chapter-head.tex and mpi-macs.tex . Here are chapter-head.tex and mpi-macs.tex as of Sept.3, 1993. --Rik ---------------------------------------------------------------------- rj_littlefield@pnl.gov (alias 'd39135') Rik Littlefield Tel: 509-375-3927 Pacific Northwest Lab, MS K1-87 Fax: 509-375-6631 P.O.Box 999, Richland, WA 99352 --- cut here: chapter-head.tex -------------------------- % Version as of July 30, 1993 % %chapter-head.tex %Version of July 29, 1993 - Steve Otto, Oregon Graduate Institute \documentstyle[twoside,11pt]{report} \pagestyle{plain} %\markright{ {\em Draft Document of the MPI Standard,\/ \today} } \marginparwidth 0pt \oddsidemargin=.25in \evensidemargin .25in \marginparsep 0pt \topmargin=-.5in \textwidth=6.0in \textheight=9.0in \parindent=2em \input{psfig} \include{mpi-macs} \makeindex \hyphenation{sub-script mul-ti-ple} \begin{document} \setcounter{page}{1} \pagenumbering{roman} \title{ {\em D R A F T} \\ Document for a Standard Message-Passing Interface} \author{Message Passing Interface Forum} \date{\today \\ This work was supported in part by ARPA and NSF under grant ASC-9310330, the National Science Foundation Science and Technology Center Cooperative Agreement No. CCR-8809615, and by the Commission of the European Community through Esprit project P6643. } \maketitle \hfuzz=5pt \newpage \vspace{5.0in} This is the result of a LaTeX run of a draft of a single chapter of the MPIF Final Report document. \newpage \setcounter{page}{1} \pagenumbering{arabic} \pagestyle{headings} \withlinenumbers --- cut here: mpi-macs.tex -------------------------- % Version as of July 30, 1993 % ---------------------------------------------------------------------- % mpi-macs.tex --- man page macros, % discuss, missing, mpifunc macros % % ---------------------------------------------------------------------- % a couple of commands from Marc Snir, modified S. 
Otto \newlength{\discussSpace} \setlength{\discussSpace}{.7cm} \newcommand{\discuss}[1]{\vspace{\discussSpace} {\small {\bf Discussion:} #1} \vspace{\discussSpace} } \newcommand{\missing}[1]{\vspace{\discussSpace} {\small {\bf Missing:} #1} \vspace{\discussSpace} } \newcommand{\implement}[1]{\vspace{\discussSpace} {\small {\bf Implementation note:} #1} \vspace{\discussSpace} } \newlength{\codeSpace} \setlength{\codeSpace}{.4cm} \def\mpi{{\sc mpi\ }} % MPI macro \newenvironment{funcdef}[1]{ \vspace{\codeSpace} \noindent \samepage {\func{#1}}\index{#1} \begin{list}{}{ % see pg 113 of Lamport's book \setlength{\leftmargin}{200pt} \setlength{\labelwidth}{180pt} \setlength{\labelsep}{10pt} \setlength{\itemindent}{0pt} \setlength{\itemsep}{0pt} \setlength{\topsep}{5pt} } }{\end{list} \vspace{\codeSpace}} \newenvironment{cfuncdef}[1]{ \vspace{\codeSpace} \noindent C binding: \noindent \samepage {\cfunc{#1}} \begin{list}{}{ % see pg 113 of Lamport's book \setlength{\leftmargin}{200pt} \setlength{\labelwidth}{180pt} \setlength{\labelsep}{10pt} \setlength{\itemindent}{0pt} \setlength{\itemsep}{0pt} \setlength{\topsep}{5pt} } }{\end{list} \vspace{\codeSpace}} \newenvironment{ffuncdef}[1]{ \vspace{\codeSpace} \noindent Fortran binding: \noindent \samepage {\ffunc{#1}} \begin{list}{}{ % see pg 113 of Lamport's book \setlength{\leftmargin}{200pt} \setlength{\labelwidth}{180pt} \setlength{\labelsep}{10pt} \setlength{\itemindent}{0pt} \setlength{\itemsep}{0pt} \setlength{\topsep}{5pt} } }{\end{list} \vspace{\codeSpace}} % see page 77, the TeX book. \newcommand{\funcarg}[3]{\item[\hbox to 45pt{\type{#1} \hfill} \mpiarg{#2}\hfill]{\small #3}} \newcommand{\cfuncarg}[3]{\item[\hbox to 70pt{\ctype{#1} \hfill} \carg{#2}\hfill]{\small #3}} \newcommand{\ffuncarg}[3]{\item[\hbox to 50pt{\ftype{#1} \hfill} \farg{#2}\hfill]{\small #3}} \newcommand{\funcold}[1]{\vspace{\codeSpace} {\bf #1} \vspace{\codeSpace} } \newcommand{\mpifuncold}[1]{\vspace{\codeSpace} {\bf #1} \vspace{\codeSpace} } \newcommand{\func}[1]{{\sf #1}} \newcommand{\mpifunc}[1]{{\sf #1}} \newcommand{\cfunc}[1]{{\sf #1}} \newcommand{\ffunc}[1]{{\sf #1}} \newcommand{\const}[1]{{\small\sf #1}} \newcommand{\cconst}[1]{{\sf #1}} \newcommand{\fconst}[1]{{\sf #1}} \newcommand{\constitem}[2]{\item[\const{#1}\hfill]{#2}} \newcommand{\mpiarg}[1]{{\sf #1}} \newcommand{\carg}[1]{{\sf #1}} \newcommand{\farg}[1]{{\sf #1}} \newcommand{\type}[1]{{\sf #1}} \newcommand{\ctype}[1]{{\tt #1}} \newcommand{\ftype}[1]{{\tt #1}} \newcommand{\IN}[0]{\small IN} \newcommand{\OUT}[0]{\small OUT} \newcommand{\INOUT}[0]{\small INOUT} \newenvironment{constlist}[0]{ \vspace{\codeSpace} \noindent \begin{list}{}{ % see pg 113 of Lamport's book \setlength{\leftmargin}{200pt} \setlength{\labelwidth}{190pt} \setlength{\labelsep}{10pt} \setlength{\itemindent}{10pt} \setlength{\itemsep}{-5pt} \setlength{\topsep}{-5pt} } }{\end{list} \vspace{\codeSpace}} % command to indicate changes \newcommand{\change}{\marginpar{\tiny \bf \ CHANGE}} % command to indicate topics that need be discussed \newcommand{\todiscuss}{\marginpar{\tiny \bf \ TO DISCUSS}} % some commands from Bill Gropp \def\code#1{{\tt #1}} \def\setmargin#1{\begingroup\leftmargin #1 \advance\leftmargin\labelsep \leftmargini #1 \advance\leftmargini\labelsep} \def\esetmargin{\endgroup} \def\ibamount{3.0cm\relax} \def\ibaamount{4.0cm} \def\ibdamount{4.5cm} \def\ibcamount{2.0cm} \def\ib#1{\hbox to \ibamount{#1\hfil}} \def\iba#1{\hbox to \ibaamount{#1\hfil}} \def\ibd#1{\hbox to \ibdamount{#1\hfil}} \def\ibc#1{\hbox to 
\ibcamount{#1\hfil}} % Use \code{...} for code fragments %\def\code#1{{\tt #1}} % Use \df{name} for a definition of name in the text \def\df#1{{\bf #1}} % Use \note{text} for marginal notes \def\note#1{\marginpar{\bf #1}} % % Get line numbers in the gutters. Thanks to Guy Steele and HPFF! % \makeatletter % % This is used to put line numbers on plain pages. Used in draft.tex % \def\withlinenumbers{\relax \def\@oddfoot{\hbox to 0pt{\hss\LineNumberRuler\hskip 1.5pc}\hfil}\relax \def\@evenfoot{\hfil\hbox to 0pt{\hskip 1.5pc\LineNumberRuler\hss}}} \def\LineNumberRuler{\vbox to 0pt{\vss\normalsize \baselineskip13.6pt \lineskip 1pt \normallineskip 1pt \def\baselinestretch{1}\relax \LNR{1}\LNR{2}\LNR{3}\LNR{4}\LNR{5}\LNR{6}\LNR{7}\LNR{8}\LNR{9} \LNR{10}\LNR{11}\LNR{12}\LNR{13}\LNR{14} \LNR{15}\LNR{16}\LNR{17}\LNR{18}\LNR{19} \LNR{20}\LNR{21}\LNR{22}\LNR{23}\LNR{24} \LNR{25}\LNR{26}\LNR{27}\LNR{28}\LNR{29} \LNR{30}\LNR{31}\LNR{32}\LNR{33}\LNR{34}\LNR{35} \LNR{36}\LNR{37}\LNR{38}\LNR{39} \LNR{40}\LNR{41}\LNR{42}\LNR{43}\LNR{44} \LNR{45}\LNR{46}\LNR{47}\LNR{48} \vskip 31pt}} \def\LNR#1{\hbox to 1pc{\hfil\tiny#1\hfil}} \def\ps@plainwithlinenumbers{\ps@plain\withlinenumbers} % % 1st page of a chapter has its own page style, so we have to put line % numbers in here also. % \def\chapter{\clearpage \thispagestyle{plainwithlinenumbers} \global\@topnum\z@ \@afterindentfalse \secdef\@chapter\@schapter} % % Change "Chapter" to "Section", "Appendix" to "Annex" % \def\@chapapp{Section} \def\appendix{\par \setcounter{chapter}{0} \setcounter{section}{0} \def\@chapapp{Annex} \def\thechapter{\Alph{chapter}}} \makeatother % % Also from HPFF. These look potentially useful. % \newenvironment{rationale}{\begin{list}{}{}\item[]{\it Rationale.} }{{\rm ({\it End of rationale.})} \end{list}} \newenvironment{implementors}{\begin{list}{}{}\item[]{\it Advice to implementors.} }{{\rm ({\it End of advice to implementors.})} \end{list}} \newenvironment{users}{\begin{list}{}{}\item[]{\it Advice to users.} }{{\rm ({\it End of advice to users.})} \end{list}} % % Use Sans Serif font for sections, etc. S. Otto % \makeatletter \def\section{\@startsection {section}{1}{\z@}{-3.5ex plus -1ex minus -.2ex}{2.3ex plus .2ex}{\Large\sf}} \def\subsection{\@startsection{subsection}{2}{\z@}{-3.25ex plus -1ex minus -.2ex}{1.5ex plus .2ex}{\large\sf}} \def\subsubsection{\@startsection{subsubsection}{3}{\z@}{-3.25ex plus -1ex minus -.2ex}{1.5ex plus .2ex}{\normalsize\sf}} \def\paragraph{\@startsection {paragraph}{4}{\z@}{3.25ex plus 1ex minus .2ex}{-1em}{\normalsize\sf}} \def\subparagraph{\@startsection {subparagraph}{4}{\parindent}{3.25ex plus 1ex minus .2ex}{-1em}{\normalsize\sf}} \makeatother % % An Editor's Note macro % \def\ednote#1{{\sl Editor's note: #1}} % a way to comment out large sections of text \newcommand{\commentOut}[1]{{}} % % A few commands to help in writing MPI man pages % \def\twoc#1#2{ \begin{list} {\hbox to95pt{#1\hfil}} {\setlength{\leftmargin}{120pt} \setlength{\labelwidth}{95pt} \setlength{\labelsep}{0pt} \setlength{\partopsep}{0pt} \setlength{\parskip}{0pt} \setlength{\topsep}{0pt} } \item {#2} \end{list} } \outer\long\def\onec#1{ \begin{list} {} {\setlength{\leftmargin}{25pt} \setlength{\labelwidth}{0pt} \setlength{\labelsep}{0pt} \setlength{\partopsep}{0pt} \setlength{\parskip}{0pt} \setlength{\topsep}{0pt} } \item {#1} \end{list} } \def\manhead#1{\noindent{\bf{#1}}} \makeatletter % % make our own index environment that can have a different % title than just "Index" -- S. 
Otto % \def\@index{Index} \newif\if@restonecol \def\myindex{\@restonecoltrue\if@twocolumn\@restonecolfalse\fi \columnseprule \z@ %\columnsep 35pt\twocolumn[\@makeschapterhead{Index}] %\@mkboth{INDEX}{INDEX}\thispagestyle{plain}\parindent\z@ \columnsep 35pt\twocolumn[\@makeschapterhead{\@index}] \@mkboth{\@index}{\@index}\thispagestyle{plain}\parindent\z@ \parskip\z@ plus .3pt\relax\let\item\@idxitem} \def\@idxitem{\par\hangindent 40pt} \def\subitem{\par\hangindent 40pt \hspace*{20pt}} \def\subsubitem{\par\hangindent 40pt \hspace*{30pt}} \def\endmyindex{\if@restonecol\onecolumn\else\clearpage\fi} \def\indexspace{\par \vskip 10pt plus 5pt minus 3pt\relax} \makeatother From owner-mpi-context@CS.UTK.EDU Tue Sep 7 15:22:44 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA04405; Tue, 7 Sep 93 15:22:44 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05510; Tue, 7 Sep 93 15:22:10 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 7 Sep 1993 15:22:09 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from sun2.nsfnet-relay.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA05502; Tue, 7 Sep 93 15:22:07 -0400 Via: uk.ac.southampton.ecs; Tue, 7 Sep 1993 20:15:36 +0100 Via: brewery.ecs.soton.ac.uk; Tue, 7 Sep 93 20:06:07 BST From: Ian Glendinning Received: from holt.ecs.soton.ac.uk by brewery.ecs.soton.ac.uk; Tue, 7 Sep 93 20:17:27 BST Date: Tue, 7 Sep 93 20:17:30 BST Message-Id: <16980.9309071917@holt.ecs.soton.ac.uk> To: mpi-context@cs.utk.edu Subject: Re: DRAFT cacheing section Rik Littlefield writes: > Here is a DRAFT rewrite of the cacheing section of the contexts chapter. > I think that the content is pretty much as I wish, although there are > still some typos & syntax errors in the example. I have read Rik's proposal, and it seems pretty sensible to me. In any case, regardless of the precise set of routines provided, I would like to add my voice to Rik and Jim's in support of the general principle of having caching built into the standard. Unfortunately I can't attend the next meeting, so I thought I'd take the opportunity to `bang the drum' while I can. Ian -- I.Glendinning@ecs.soton.ac.uk Ian Glendinning Tel: +44 703 593368 Dept of Electronics and Computer Science Fax: +44 703 593045 University of Southampton SO9 5NH England From owner-mpi-context@CS.UTK.EDU Wed Sep 8 10:33:27 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA09131; Wed, 8 Sep 93 10:33:27 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29633; Wed, 8 Sep 93 10:32:38 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 8 Sep 1993 10:32:37 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA29625; Wed, 8 Sep 93 10:32:36 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA28656; Wed, 8 Sep 93 09:32:25 CDT Date: Wed, 8 Sep 93 09:32:25 CDT From: Tony Skjellum Message-Id: <9309081432.AA28656@Aurora.CS.MsState.Edu> To: igl@ecs.soton.ac.uk, mpi-context@cs.utk.edu Subject: Re: copying ? I believe that virtual topology information can be safely cached with groups. MPI_GROUP_DUP would then have the power to preserve this info. 
MPI_COMM_DUP calls MPI_GROUP_DUP in a typical implementation :-) - Tony From owner-mpi-context@CS.UTK.EDU Wed Sep 8 10:48:55 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA09194; Wed, 8 Sep 93 10:48:55 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00959; Wed, 8 Sep 93 10:48:37 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 8 Sep 1993 10:48:36 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from daedalus.epcc.ed.ac.uk by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA00951; Wed, 8 Sep 93 10:48:34 -0400 Date: Wed, 8 Sep 93 15:48:26 BST Message-Id: <23736.9309081448@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: copying ? To: Tony Skjellum , igl@ecs.soton.ac.uk, mpi-context@cs.utk.edu In-Reply-To: Tony Skjellum's message of Wed, 8 Sep 93 09:32:25 CDT Reply-To: lyndon@epcc.ed.ac.uk > I believe that virtual topology information can be safely cached with groups. > MPI_GROUP_DUP would then have the power to preserve this info.
MPI_COMM_DUP > calls MPI_GROUP_DUP in a typical implementation :-) > The question on _COMM_DUP is whether the semantics are defined to be equivalent to copying the value of the group or a reference to the group, so to speak. It appears that you advocate the actual copy semantics. This is interesting because let's assume an implementation is trying not to eat memory - it will not copy the actual group mapping tables but will hold an additional reference to them. However, the copy semantics mean that there must be an independent group cache "in" the new communicator compared to the old communicator. I suppose we get a picture of the implementation of a group being a reference to a group mapping object (such object not directly visible to the user), and an actual cache object (such object is directly visible to the user). Copying a group then means creating a new cache object and taking a copy of the same group mapping object. Yeah, that works, fine. If the copy semantics are adopted then each communicator will presumably have a different group therein, and therefore a different group cache. Does this remove the need for a separate communicator cache also? Regards Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Sep 8 11:56:03 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA11238; Wed, 8 Sep 93 11:56:03 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06231; Wed, 8 Sep 93 11:55:15 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 8 Sep 1993 11:55:12 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA06207; Wed, 8 Sep 93 11:55:05 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA29170; Wed, 8 Sep 93 10:54:13 CDT Date: Wed, 8 Sep 93 10:54:13 CDT From: Tony Skjellum Message-Id: <9309081554.AA29170@Aurora.CS.MsState.Edu> To: lyndon@epcc.ed.ac.uk Subject: Re: current context draft Cc: mpi-context@cs.utk.edu Lyndon, as you wish, but I have not incorporated the changes of Rik into this as yet. Also, I have not worked on it, and Tom has not worked on it. [Tom, please give your feedback now.] - Tony From snir@watson.ibm.com Tue Sep 7 17:30:48 1993 Received: from watson.ibm.com by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA27830; Tue, 7 Sep 93 17:30:42 CDT Message-Id: <9309072230.AA27830@Aurora.CS.MsState.Edu> Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 5841; Tue, 07 Sep 93 18:30:43 EDT Date: Tue, 7 Sep 93 18:30:42 EDT From: "Marc Snir" To: tony@aurora.cs.msstate.edu Status: R \include{chapter-head} % Context chapter. % % Version of September 6, 1993 % Lyndon Clarke, Tom Henderson, Mark Sears, Anthony Skjellum, % Marc Snir, Rik Littlefield, Jim Cownie % % [now uses \funcdef, \const, \func per Steve Otto] % % Non-portable lines... \setcounter{chapter}{2} \newcommand{\dross}[1]{} % % % ******************************************************************** % ******************************************************************** % ******************************************************************** % ******************************************************************** % Rest goes to Steve Otto each time...
\chapter{Groups, Contexts, and Communicators} \label{sec:context} \label{chap:context} \section{Introduction} This section defines the concepts of group, context, and communicator and discusses operations on them. Each communication operation (point-to-point or collective) accepts as a parameter a {\bf communicator}. This parameter specifies the ``universe'' of the communication operation. The current practice in many communication libraries is that there is a unique, predefined communication universe that includes all processes available when the library is initiated; the processes are assigned consecutive ranks. Participants in a point-to-point communication are identified by their rank; a collective communication (such as broadcast) involves all processes. This practice can be followed in MPI by using the predefined communicator {\tt MPI\_COMM\_WORLD}. Users who are satisfied with this practice can plug in {\tt MPI\_COMM\_WORLD} wherever a communicator argument is required, and ignore the rest of this section. User-defined communicators allow the creation of multiple distinct ``universes'' of noninterfering communication. Such a universe may include only a subset of the processes. It is often desirable to execute a collective operation on a subset of processes, without involving in any way the remaining processes. The same holds true of user-defined parallel procedures. Quite often a parallel program is built by composing several parallel modules ({\em e.g.}, a numerical solver, and a graphic display module). It is highly desirable that processes executing a parallel procedure use a ``virtual process name space'' local to the invocation. Thus, the code of the parallel procedure will look identical, irrespective of the absolute addresses of the executing processes. This is done by using rank within the subset to address local communication. Support of a virtual name space for each module will allow for the composition of modules that were developed separately without changing all message passing calls within each module. Distinct communication universes may be needed even within the same set of processes, in order to allow predictable behavior in subprograms, and to allow dynamism in message usage that cannot be reasonably anticipated or managed. Normally, a parallel procedure is written so that all messages produced during its execution are also consumed by the processes that execute the parallel procedure. However, if one parallel procedure calls another, then it might be desirable to allow such a call to proceed while messages are pending (the messages will be consumed by the procedure after the call returns). In such a case, a new communication ``universe'' is needed for the called parallel procedure, even if the transfer of control is synchronized. Communicators bring together the concepts of process group and communication context. A {\bf group} is an ordered set of processes. Processes in a group are assigned consecutive ranks, starting from zero. A collective communication involves all processes in the group identified by the communicator parameter. It is expected that parallel libraries will be built to accept a communicator as a parameter. In a point-to-point communication, the communicator parameter provides the translation from virtual process names, which are ranks within the group, into absolute addresses. A {\bf context} is a mechanism to partition the message-passing space into separate, noninterfering universes.
Specifically, a message sent in one context can only be received in the same context. A communicator has to be defined in a consistent manner at all processes in the communicator group before it is used. To achieve this, operations that define communicators are collective: they are executed by all processes in the group that contains the new group, and all processes provide matching parameters; furthermore, all processes execute the calls in the same order (see Section~\ref{chap:coll} for a more precise definition). The definition of a communicator consists of two suboperations: the new communication group is defined, and this group is bound to a context. MPI allows groups to be defined and manipulated locally. They are represented by explicit, opaque group objects. Contexts do not have explicit representation: they manifest themselves only as a binding of a group into a communicator. Dynamic communicator creation at parallel procedure call boundaries leads to a clean, modular programming style. Yet, users may be reluctant to avail themselves of this facility if an expensive collective communication is required for each communicator creation. In order to alleviate this problem, we allow local creation of communicators. The user has to follow the same calling convention as for collective calls; there is no change in semantics and performance is gained at the expense of error checking. Additional operations are available for (pre)allocating contexts. The discussion has dealt so far with {\bf intra-communication}: communication within a group. MPI also supports {\bf inter-communication}: communication across groups. When an application is built by composing several parallel modules, it is convenient to allow one module to communicate with the other using local ranks within the second module for addressing. This is especially convenient in a client-server computing paradigm, where either the client or the server is parallel. The support of inter-communication also provides a mechanism for the extension of MPI to a dynamic model where not all processes are preallocated at initialization time. In such a situation, it becomes necessary to support communication across ``universes''. Inter-communication is supported by {\bf inter-communicators}. These bind together two groups and a context that is shared by both groups. MPI provides mechanisms for creating and manipulating inter-communicators. They are used in point-to-point communication in the same manner as regular {\bf intra-communicators}. Users who do not need inter-communication in their applications can ignore this extension. Finally, a general mechanism is available for attaching additional attributes to communicators. This additional information is used, for example, to carry information on topology (see Section~\ref{chap:topol}). This general caching mechanism can be used to build more complex, collective objects. \section{Basic Concepts} \subsection{Groups} A {\bf group} is an ordered set of process identifiers (henceforth processes); process identifiers are implementation dependent. Each process in a group is associated with an integer {\bf rank}, starting from zero. Groups are represented by opaque {\bf group objects}, and hence cannot be directly transferred from one process to another. Such a group can be thought of as an unbound communicator; a communicator is created by binding a group (or two groups, for inter-communicators) to a context. \implement{ A group may be represented by a virtual-to-real process address translation table.
Each communicator object would have a pointer to such a table. } \subsection{Contexts} A {\bf context} is the MPI mechanism for partitioning communication space. A defining property of a context is that a message sent in a context cannot be received in another context. Within a process, at most one communicator may be bound to a given context. Contexts are not explicit MPI objects; they appear only as bindings of groups into communicators. For int\-ra-com\-mun\-i\-cat\-ion, a context is essentially a tag (or tags) needed to make a communicator safe for point-to-point and MPI-defined collective communication. Contexts have additional attributes for int\-er-com\-mun\-i\-cat\-ion, to be discussed below. \implement{ A possible implementation is for a context to be a label attached to messages and matched on receive. Each intra-communicator stores the value of that label. Since collective communication traffic has to be kept separate from point-to-point traffic, a pair of labels would actually be needed, if collective communication is implemented on top of point-to-point. In inter-communication, it is more convenient to use two labels, one used by group A to send and group B to receive, and the second used by group B to send and group A to receive. Since contexts are not explicit objects, other implementations are possible. } \subsection{Communicators} Communicators bring together the concepts of group and context. Furthermore, to support implementation-specific optimizations and virtual topologies, they ``cache'' additional information opaquely. The source and destination of a message are identified by process rank within the group. For collective communication, the communicator specifies the set of processes that participate in the collective operation and their order, when significant. Thus, the communicator restricts the ``spatial'' scope of communication, and provides local process addressing. Communicators are represented by opaque {\bf communicator objects}, and hence cannot be directly transferred from one process to another. % \paragraph{Raison d'\^{e}tre for Contexts and Communicators} % Within a communicator, one or more contexts are cached. % We want to make it possible for libraries quickly to achieve % additional safe communication space without MPI-communicator-based % synchronization. The only way to do this is to provide a means to % preallocate many contexts, and bind them locally, as needed. This % choice weakens the overall, inherent ``safety'' of MPI, if programmed % in this way, but provides added performance which library designers % will demand. We are convinced that the means we have chosen provides % a good compromise between efficiency and safety. \subsection{Predefined Communicators} \label{sec:predef-comms} Initial communicators defined once \func{ MPI\_INIT} has been called are as follows: \begin{itemize} \item \const{ MPI\_COMM\_WORLD},~~~A communicator of all processes the local process can communicate with after initialization (itself included). \item \const{ MPI\_COMM\_PEER},~~~A communicator of all processes the local process can communicate with after initialization, excluding a ``host'' process (if any). In the absence of a host process this communicator will have the same group as \const{MPI\_COMM\_WORLD}, but will still contain a different context. \end{itemize} These two communicators are available on any MPI implementation. Other implementation-dependent predefined communicators may also be provided.
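As an illustration of the relationship between the two predefined communicators, the following sketch tests, on a process belonging to \const{MPI\_COMM\_PEER}, whether a host process is present; it uses the accessor \func{MPI\_COMM\_SIZE} defined later in this chapter, and the C binding names are assumptions of this illustration only.
\begin{verbatim}
int world_size, peer_size;

MPI_COMM_SIZE (MPI_COMM_WORLD, &world_size);
MPI_COMM_SIZE (MPI_COMM_PEER,  &peer_size);

if (peer_size < world_size)
{  /* a host process exists; it belongs to MPI_COMM_WORLD only */
}
else
{  /* no host: the two communicators have the same group,
      but they still carry different contexts */
}
\end{verbatim}
What this test yields on the host process itself depends on the open question about \const{MPI\_COMM\_PEER} raised in the discussion below.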
In a static MPI implementation all processes that participate in the computation are available after MPI is initialized. In such a case, \const{MPI\_COMM\_WORLD} is a communicator of all processes available for the computation; this communicator has the same value at all processes. In an implementation of MPI where processes can dynamically join an MPI execution it may be the case that a process starts an MPI computation without having access to all other processes. In such a situation, \const{MPI\_COMM\_WORLD} is a communicator of all processes the joining process can start communicating with; \const{MPI\_COMM\_WORLD} may have, at the same time, different values at different processes. MPI implementations are required to provide these two communicators. They cannot be deallocated during the computation. The groups corresponding to these communicators do not appear as pre-defined constants, but they may be accessed using \func{MPI\_COMM\_GROUP} (see below). MPI does not specify the correspondence between the process rank in \const{MPI\_COMM\_WORLD} and its (machine-dependent) absolute address. Also, MPI does not specify the function of the host process. \discuss{ A recommended implementation for dynamic process spawning is to add a function \mpifunc{MPI\_SPAWN(text, n, newcomm,...)} which spawns {\tt n} copies of {\tt text}. The parent receives in {\tt newcomm} a communicator for itself and the new children; each child is started with \const{MPI\_COMM\_WORLD} set to that same communicator, and \const{MPI\_COMM\_PEER} set to a communicator for all siblings (e.g. the parent is a ``local host''). An inter- or intra-communicator including the new children and previous processes can now be built explicitly, using the parent as a bridge. Generally, a new group of processes can merge into an existing group, provided these two groups have a nonempty intersection, or both have a nonempty intersection with a third group. I changed from ``ALL'' to ``WORLD'' in order to make more sense in the case where multiple ``worlds'' exist, e.g. dynamic process spawning. } \discuss{ In the host process, if there is one, what should \const{MPI\_COMM\_PEER} be? There are two consistent possibilities: a null communicator; a communicator in which the group contains only the host process. Is the host, if there is one, at a fixed rank in \const{MPI\_COMM\_WORLD}? Should that be zero or n-1? The first choice seems more natural, but the second choice allows slave processes to have the same rank in \const{MPI\_COMM\_WORLD} and \const{MPI\_COMM\_PEER}. } \section{Group Management} This section describes the manipulation of process groups in MPI. These operations are local and their execution does not require interprocess communication. \subsection{Group Accessors} \begin{funcdef}{MPI\_GROUP\_SIZE(group, size)} \funcarg{\IN}{ group}{ handle to group object.} \funcarg{\OUT}{ size}{ is the integer number of processes in the group.
} \end{funcdef} \begin{funcdef}{MPI\_GROUP\_RANK(group, rank)} \funcarg{\IN}{ group}{ handle to group object.} \funcarg{\OUT}{ rank}{ is the integer rank of the calling process in group, or \const{ MPI\_UNDEFINED} if the process is not a member.} \end{funcdef} \begin{funcdef}{MPI\_TRANSLATE\_RANKS (group\_a, n, ranks\_a, group\_b, ranks\_b)} \funcarg{\IN}{ group\_a}{ handle to group object ``A''} \funcarg{\IN}{ n}{ number of ranks in \mpiarg{ ranks\_a} array} \funcarg{\IN}{ ranks\_a}{ array of zero or more valid ranks in group ``A''} \funcarg{\IN}{ group\_b}{ handle to group object ``B''} \funcarg{\OUT}{ ranks\_b}{ array of corresponding ranks in group ``B,'' \const{ MPI\_UNDEFINED} when no correspondence exists.} \end{funcdef} \discuss{ {\bf (language binding issue)} We may need multiple ``{\tt MPI\_UNDEFINED}'', one for each datatype that can be undefined: undefined integer, undefined communication handle, etc. } \subsection{Group Constructors} These functions allow the construction of new groups from existing groups. These are local operations, and distinct groups may be defined at different processes; a process may also define a group that does not include it. Consistent definitions are required when groups are used as arguments in communicator building functions. MPI does not provide a mechanism to build a group from scratch, but only from other, previously defined groups. The base case for this recursion is provided by the groups associated with the initial communicators {\tt MPI\_COMM\_WORLD} and {\tt MPI\_COMM\_PEER}. \begin{funcdef}{MPI\_GROUP\_DUP(group, new\_group)} \funcarg{\IN}{ group}{ extant group object handle} \funcarg{\OUT}{ new\_group}{ new group object handle} \end{funcdef} \func{ MPI\_GROUP\_DUP} duplicates a group with all its cached information. \discuss{ Do we need caching for groups? Or is it sufficient to have it for communicators? } \implement{ Since groups cannot be modified, no copying of the group structure is actually needed; one only needs to keep count of active references. If information is cached, one may use ``copy on write'' for the cached information, if updates are infrequent. } The remaining group constructors do not carry any cached information into the newly created group. \begin{funcdef}{MPI\_GROUP\_UNION(group1, group2, group\_out)} \funcarg{\IN}{ group1}{ first group object handle} \funcarg{\IN}{ group2}{ second group object handle} \funcarg{\OUT}{ group\_out}{ group object handle} \end{funcdef} \begin{funcdef}{MPI\_GROUP\_INTERSECTION(group1, group2, group\_out)} \funcarg{\IN}{ group1}{ first group object handle} \funcarg{\IN}{ group2}{ second group object handle} \funcarg{\OUT}{ group\_out}{ group object handle} \end{funcdef} \begin{funcdef}{MPI\_GROUP\_DIFFERENCE(group1, group2, group\_out)} \funcarg{\IN}{ group1}{ first group object handle} \funcarg{\IN}{ group2}{ second group object handle} \funcarg{\OUT}{ group\_out}{ group object handle} \end{funcdef} The set-like operations are defined as follows: \begin{description} \item[union] All elements of the first group (\mpiarg{group1}), followed by all elements of the second group (\mpiarg{group2}) not in the first. \item[intersect] All elements of the first group that are also in the second group, ordered as in the first group. \item[difference] All elements of the first group that are not in the second group, ordered as in the first group.
\end{description} Note that for these operations the order of processes in the output group is determined first by order in the first group (if possible) and then by order in the second group (if necessary). Neither union nor intersection is commutative, but both are associative. \discuss { Do we know why we want these operations? } \begin{funcdef}{MPI\_GROUP\_INCL(group, n, ranks, new\_group)} \funcarg{\IN}{ group}{ handle to group object} \funcarg{\IN}{ n}{ number of elements in array ranks (and size of \mpiarg{ new\_group})} \funcarg{\IN}{ ranks}{ array of integer ranks in \mpiarg{group} to appear in \mpiarg{new\_group}.} \funcarg{\OUT}{ new\_group}{ new group derived from above, in the order defined by \mpiarg{ ranks}.} \end{funcdef} The function \func{MPI\_GROUP\_INCL} creates a group {\tt new\_group} of size {\tt n} which is a subgroup of {\tt group} such that the process with rank {\tt i} in {\tt new\_group} is the process with rank {\tt ranks[i]} in {\tt group}. Each of the {\tt n} elements of {\tt ranks} must be a valid rank in {\tt group} and all elements must be distinct, else the program is erroneous. If {\tt n=0}, then \mpiarg{new\_group} is empty. This function can be used to reorder the elements of a group. \begin{funcdef}{MPI\_GROUP\_EXCL(group, n, ranks, new\_group)} \funcarg{\IN}{ group}{ handle to group object} \funcarg{\IN}{ n}{ number of elements in array ranks} \funcarg{\IN}{ ranks}{ array of integer ranks in \mpiarg{group} not to appear in \mpiarg{new\_group}} \funcarg{\OUT}{ new\_group}{ new group derived from above, preserving the order defined by \mpiarg{ ranks}.} \end{funcdef} The function \func{MPI\_GROUP\_EXCL} creates a group {\tt new\_group} which is a subgroup of {\tt group} such that {\tt new\_group} contains all processes in {\tt group} except the {\tt n} processes with ranks {\tt ranks[i], i = 0, ..., n - 1}. The ordering of processes in {\tt new\_group} is identical to the ordering in {\tt group}. Each of the {\tt n} elements of {\tt ranks} must be a valid rank in {\tt group} and all elements must be distinct, else the program is erroneous. If {\tt n=0}, then \mpiarg{new\_group} is identical to \mpiarg{group}. \begin{funcdef}{MPI\_GROUP\_RANGE\_INCL(group, n, ranges, new\_group)} \funcarg{\IN}{ group}{ handle to group object} \funcarg{\IN}{ n}{ number of triplets in array \mpiarg{ ranges}. } \funcarg{\IN}{ ranges}{ a one-dimensional array of integer triplets, of the form (first rank, last rank, stride), to be included in the output group \mpiarg{new\_group}. } \funcarg{\OUT}{ new\_group}{ new group derived from above, in the order defined by \mpiarg{ ranges}. } \end{funcdef} If \mpiarg{ ranges} consists of the triplets \[ (first_1 , last_1, stride_1) , ..., (first_n, last_n, stride_n) \] then the new group consists of the processes in {\tt group} with ranks \[ first_1 , first_1 + stride_1 , ... , first_1 + \left\lfloor \frac{last_1 - first_1}{stride_1} \right\rfloor stride_1 , ... \] \[ first_n , first_n + stride_n , ... , first_n + \left\lfloor \frac{last_n - first_n}{stride_n} \right\rfloor stride_n . \] Each computed rank must be a valid rank in {\tt group} and all computed ranks must be distinct, else the program is erroneous. Note that we may have $first_i > last_i$, and $stride_i$ may be negative, but cannot be zero.
The functionality of this routine is specified to be equivalent to expanding the array of ranges to an array of the included ranks and passing the resulting array of ranks and other arguments to \mpifunc{MPI\_GROUP\_INCL}. A call to \mpifunc{MPI\_GROUP\_INCL} is equivalent to a call to \mpifunc{MPI\_GROUP\_RANGE\_INCL} with each rank {\tt i} in \mpiarg{ranks} replaced by the triplet {\tt (i,i,1)} in the argument {\tt ranges}. \begin{funcdef}{MPI\_GROUP\_RANGE\_EXCL(group, n, ranges, new\_group)} \funcarg{\IN}{ group}{ handle to group object} \funcarg{\IN}{ n}{ number of triplets in array \mpiarg{ ranges}. } \funcarg{\IN}{ ranges}{ a one-dimensional array of integer triplets of the form (first rank, last rank, stride), to be excluded from the output group \mpiarg{new\_group}. } \funcarg{\OUT}{ new\_group}{ new group derived from above, preserving the order in \mpiarg{ group}.} \end{funcdef} Each computed rank must be a valid rank in {\tt group} and all computed ranks must be distinct, else the program is erroneous. The functionality of this routine is specified to be equivalent to expanding the array of ranges to an array of the excluded ranks and passing the resulting array of ranks and other arguments to \mpifunc{MPI\_GROUP\_EXCL}. A call to \mpifunc{MPI\_GROUP\_EXCL} is equivalent to a call to \mpifunc{MPI\_GROUP\_RANGE\_EXCL} with each rank {\tt i} in \mpiarg{ranks} replaced by the triplet {\tt (i,i,1)} in the argument {\tt ranges}. \discuss{ Do we need all these group constructors? Do we need more? Note: the range operations do not explicitly enumerate ranks, and therefore are more scalable if implemented efficiently\ldots} \subsection{Group Destructors} \begin{funcdef}{MPI\_GROUP\_FREE(group)} \funcarg{\INOUT}{ group}{ handle to group} \end{funcdef} This operation marks a group object for deallocation. The handle {\tt group} is set to null by the call. The group object is actually deallocated only if there are no other active references to it. If a group has been duplicated, or has been bound to communicators, then the user has to free all ``copies'' of the group. When a group is freed, all information cached with it becomes unavailable. \change \discuss { I changed \func{MPI\_GROUP\_FREE} to be consistent with other free functions: FREE marks the object for deallocation; the object is deallocated only when there are no other pending references to it (reference count mechanism). } \implement{ One can keep a reference count that is incremented for each call to \func{MPI\_GROUP\_DUP}, \func{MPI\_COMM\_MAKE} and \func{MPI\_COMM\_DUP}, and decremented for each call to \func{MPI\_GROUP\_FREE} or \func{MPI\_COMM\_FREE}; the group object is deallocated when the reference count drops to zero. } \discuss{ MPI does not keep count of references to an opaque MPI object that are created by assigning to one handle the value of another, or are lost when a handle variable is deallocated (e.g., at procedure exit). In the first case, a call to \func{MPI\_GROUP\_FREE} may leave a dangling reference; in the second case an opaque MPI object may be left with no handle to it, and no possibility of deallocating it. It is the user's responsibility to avoid these situations. MPI keeps track only of changes in the number of active references to the MPI object that are due to explicit MPI calls. (May want to move this discussion to chapter 2.) } \section{Communicator Management} This section describes the manipulation of communicators in MPI.
Operations that access communicators are local and their execution does not require interprocess communication. Operations that create communicators are collective and may require interprocess communication. Mechanisms are provided that, in many cases, avoid the need for communication in these collective operations as well. \subsection{Communicator Accessors} The following are all local operations. \begin{funcdef}{MPI\_COMM\_SIZE(comm, size)} \funcarg{\IN}{ comm}{ handle to communicator object.} \funcarg{\OUT}{ size}{ is the integer number of processes in the group of \mpiarg{ comm}.} \end{funcdef} \begin{funcdef}{MPI\_COMM\_RANK(comm, rank)} \funcarg{\IN}{ comm}{ handle to communicator object.} \funcarg{\OUT}{ rank}{ is the integer rank of the calling process in group of \mpiarg{ comm}, or \const{ MPI\_UNDEFINED} if the process is not a member.} \end{funcdef} \begin{funcdef}{MPI\_COMM\_GROUP(comm, group)} \funcarg{\IN}{ comm}{ communicator object handle} \funcarg{\OUT}{ group}{ group object handle corresponding to {\tt comm}} \end{funcdef} \subsection{Communicator Constructors} These functions are collective calls that are invoked by all processes in the group associated with {\tt comm}. \discuss{ There is a chicken-and-egg aspect to MPI in that a communicator is needed to create a new communicator. The base case of this recursive process is provided by the predefined communicators {\tt MPI\_COMM\_WORLD} and {\tt MPI\_COMM\_PEER}, which are defined outside MPI. } \begin{funcdef}{MPI\_COMM\_DUP(comm, new\_comm)} \funcarg{\IN}{ comm}{ communicator object handle} \funcarg{\OUT}{ new\_comm}{ communicator object handle} \end{funcdef} Duplicates the existing communicator {\tt comm} with all its cached information. Returns in {\tt new\_comm} a new communicator with the same group, same cached information, but a new context. \implement{ One need not actually copy the group information, but only add a new reference and increment the reference count. Copy on write can be used for the cached information. } \begin{funcdef}{MPI\_COMM\_MAKE(comm, group, comm\_new)} \funcarg{\IN}{ comm}{ Communicator handle} \funcarg{\IN}{group}{ Group which is a subset of the group of {\tt comm}} \funcarg{\OUT}{ comm\_new}{ Handle to new communicator.} \end{funcdef} Creates a new communicator {\tt comm\_new} with communication group defined by {\tt group} and a new context. No cached information propagates from {\tt comm} to {\tt comm\_new}. For processes in the group of {\tt comm} which are not in {\tt group}, the function returns {\tt comm\_new} as the null communicator \const{MPI\_COMM\_NULL}. The call is erroneous if not all {\tt group} arguments have the same value, or if {\tt group} is not a subset of the group associated with {\tt comm}. Note that the call is executed by all processes in {\tt comm}, even if they do not belong to the new group. \discuss{ The need to have all processes in {\tt comm} execute the call will become clearer in the next section, where we discuss mechanisms to avoid communication in calls that create new communicators; this allows all the processes to mark the allocated context as used.
}
\begin{funcdef}{MPI\_COMM\_SPLIT(comm, color, key, comm\_new)}
\funcarg{\IN}{comm}{Handle to existing communicator}
\funcarg{\IN}{color}{control of subset assignment (integer)}
\funcarg{\IN}{key}{ control of rank assignment (integer)}
\funcarg{\OUT}{ comm\_new}{ Handle to new communicator.}
\end{funcdef}
Partition the group associated with {\tt comm} into disjoint subgroups, one for each value of {\tt color}. Each subgroup contains all processes of the same color. Within each subgroup, the processes are ranked in the order defined by the value of the parameter {\tt key}, with ties broken according to their rank in the old group. A new communicator is created for each subgroup and returned in {\tt comm\_new}. A process may supply the color value {\tt MPI\_UNDEFINED}, in which case {\tt comm\_new} is returned as the null communicator {\tt MPI\_COMM\_NULL}. This is a collective call, but each process can provide different values for {\tt color} and {\tt key}.
A call to \mpifunc{MPI\_COMM\_MAKE(comm, group, new\_comm)} is equivalent to a call to \mpifunc{MPI\_COMM\_SPLIT(comm, color, key, new\_comm)}, where all members of {\tt group} provide {\tt color = 0} and {\tt key} = rank in {\tt group}, and all processes that are not members of {\tt group} provide {\tt color = MPI\_UNDEFINED}. The function \mpifunc{MPI\_COMM\_SPLIT} allows, more generally, the original group to be partitioned into one or more subgroups, and each subgroup to be reordered.
\discuss{New Function below}
\change
\begin{funcdef}{MPI\_COMM\_MERGE(comm, leader, remote\_comm, new\_comm)}
\funcarg{\IN}{comm}{ handle to local communicator}
\funcarg{\IN}{leader}{ rank of shared process in \mpiarg{comm}}
\funcarg{\IN}{remote\_comm}{ handle to remote communicator}
\funcarg{\OUT}{new\_comm}{ handle to new communicator}
\end{funcdef}
Create a new communicator by merging the groups of two existing communicators. Unlike all previous functions, this function does not require a preexisting communicator that encompasses the newly created group. Rather, it only requires that both groups share a process, and that the local rank of this shared process is known within each group. This function is collective within each of the merged groups and within each group all processes supply the same value for {\tt leader} (different values may be supplied by the two groups). The parameter {\tt remote\_comm} is significant only at the shared process.
The group of the resulting communicator returned in {\tt new\_comm} contains all the processes in {\tt comm}, followed by all processes in {\tt remote\_comm} that are not in {\tt comm}. (Thus, the order in which the two groups are merged is determined by the order of the two arguments {\tt comm} and {\tt remote\_comm} at the leader process.)
Application: Consider a system where new processes can be created dynamically. When a process spawns a set of new processes, a communicator is created for the parent and the newly spawned children. This new group can join the old group of {\tt MPI\_COMM\_WORLD} using a call to {\tt MPI\_COMM\_MERGE} with the parent as the leader of the merge.
\implement{ This merge requires a broadcast from the shared process within each subgroup. }
\discuss{ This function bears a nonaccidental resemblance to the inter-communicator creation function \func{MPI\_INTERCOMM\_MAKE}. It was simplified to use only one leader rather than two, to use no bridge communicator, and to create only one output communicator.
Alternatives: (i) Keep this function consistent with \func{MPI\_INTERCOMM\_MAKE} (two leaders, a bridge communicator). The function above becomes a special case of this more general construction. (ii) Introduce a function that makes an inter-communicator into an intra-communicator by taking the union of both groups. One still needs a leader (or two) to break the symmetry between the two groups, and one introduces additional communication and a heavier construction. }
\subsection{Communicator Destructors}
\begin{funcdef}{MPI\_COMM\_FREE(comm)}
\funcarg{\INOUT}{ comm}{ handle to communicator to be destroyed.}
\end{funcdef}
This call marks the communicator object for deallocation. The handle is set to null. The object is actually deallocated only if there are no other active references to it. The associated group may also be deallocated, if there are no active references to it. It is the user's responsibility to free each copy of a communicator object.
\change
\discuss { Changed, here too, to a reference count mechanism }
\implement{ A reference count mechanism can be used: the reference count is incremented by each call to \func{MPI\_COMM\_DUP}, and decremented by each call to \func{MPI\_COMM\_FREE}. The object is deallocated when the count reaches zero. }
\section{Context Management}
The creation of a new communicator requires agreement among the members of the communicator group on the context associated with the new communicator. This requires collective communication within a group that contains the new communicator group, using a preexisting communicator. MPI provides a mechanism to amortize the cost of one such collective communication over the creation of multiple communicators: one collective call can be used to allocate multiple contexts, and cache them with the communicator used in the call. Subsequent calls with this same communicator as argument can use these cached contexts to generate new communicators. The effect of these subsequent calls is as if a collective agreement protocol were run by the members of the communicator group; however, no communication is required.
\subsection{Context Allocation}
\begin{funcdef}{MPI\_CONTEXTS\_ALLOC(comm, n)}
\funcarg{\IN}{ comm}{a communicator}
\funcarg{\IN}{ n}{ number of contexts to allocate}
\end{funcdef}
This function allocates {\tt n} contexts, which are unique within the group bound to the communicator {\tt comm}, and stores them in the unused context store of {\tt comm}. This is a collective function that is invoked by all processes in the group of {\tt comm} and all processes provide the same value for {\tt n}.
\subsection{Context Deallocation}
\begin{funcdef}{MPI\_CONTEXTS\_FREE(comm, n)}
\funcarg{\IN}{ comm}{ a communicator}
\funcarg{\IN}{ n}{number of contexts to free}
\end{funcdef}
This function frees (i.e., returns to the MPI system) {\tt n} contexts from the unused context store of the communicator {\tt comm}, which were previously allocated by calling \func{ MPI\_CONTEXTS\_ALLOC} (above). If fewer than {\tt n} contexts are available, then all contexts are freed. The function call is collective in {\tt comm}.
\discuss{ Do we want to return the number of contexts actually freed? }
\subsection{Context Accessor}
\begin{funcdef}{MPI\_COMM\_CONTEXTS(comm, n)}
\funcarg{\IN}{ comm}{ handle to communicator object.}
\funcarg{\OUT}{ n}{ the number of contexts available in the unused context store associated with {\tt comm}}
\end{funcdef}
This is a local call.
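For illustration, a typical use of these three routines might look as follows. The C binding shown here is only assumed for the sake of the example (it is not fixed by this draft), and the variable names are arbitrary.
\begin{verbatim}
int navail;
MPI_Comm comm_a, comm_b;

/* Collective: one agreement protocol pays for several contexts.    */
mpi_contexts_alloc(MPI_COMM_ALL, 4);

/* Local enquiry: how many unused contexts are cached with this
   communicator?                                                    */
mpi_comm_contexts(MPI_COMM_ALL, &navail);

/* These constructors may now bind cached contexts and so avoid
   further communication (see the next subsection).                 */
mpi_comm_dup(MPI_COMM_ALL, &comm_a);
mpi_comm_dup(MPI_COMM_ALL, &comm_b);

/* ... use comm_a and comm_b ... */

/* Collective: return whatever is left in the unused context store. */
mpi_comm_contexts(MPI_COMM_ALL, &navail);
mpi_contexts_free(MPI_COMM_ALL, navail);
\end{verbatim}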
\subsection{Use of Preallocated Contexts}
Allocation and deallocation of contexts do not change the semantics of communicator management functions if these are used correctly -- only their performance. If \func{MPI\_COMM\_DUP} or \func{MPI\_COMM\_MAKE} are called with parameter {\tt comm}, and an unused context is available in the context store of {\tt comm}, then the new communicator may be created with no communication, binding this unused context. If no context is available, then a global communication will occur, in order to allocate the new context. Since context allocation and deallocation are collective functions, as are the communicator allocation and deallocation functions, all processes in the group associated with {\tt comm} have the same number of unused contexts allocated to {\tt comm}; they either all proceed to communicate or all perform the allocation locally. The operation \func{MPI\_COMM\_FREE} does not require communication; the operations \func{MPI\_COMM\_SPLIT} and \func{MPI\_COMM\_MERGE} always require collective communication.
The use of preallocated contexts may affect the behavior of erroneous programs: errors that cause deadlock, or that can be easily detected, when communicator creation is a synchronizing call may go undetected when a preallocated context is used locally.
An MPI implementation may actually allocate more or fewer contexts than requested by the user in calls to \func{MPI\_CONTEXTS\_ALLOC}. At one extreme, one may use an implementation where an essentially unbounded number of contexts is preallocated whenever a group is created; the operations \func{MPI\_CONTEXTS\_ALLOC} and \func{MPI\_CONTEXTS\_FREE} are noops, and all communicator creation and destruction operations are executed locally (\func{MPI\_COMM\_SPLIT} and \func{MPI\_COMM\_MERGE} excepted) and are not synchronizing. At the other extreme, one may never preallocate contexts, in which case the operations \func{MPI\_CONTEXTS\_ALLOC} and \func{MPI\_CONTEXTS\_FREE} are again noops, and \func{MPI\_COMM\_DUP} and \func{MPI\_COMM\_MAKE} always require global communication and are synchronizing. As with all collective operations, a program should be written so that it works correctly whether the collective call is synchronizing or not. The enquiry function ??? can be used to find out which mechanism is used for context allocation. An MPI implementation may allow the user to control the context allocation mechanism, e.g., to use synchronizing communicator creation in debug mode, and nonsynchronizing communicator creation in run mode.
\change
\discuss{ I left out the communicator making functions that are mandated to be local or global, for several reasons: (i) We made a similar decision w.r.t. collective communications (we do not have a synchronizing and a nonsynchronizing mode, as was proposed by Ho). (ii) This would make implementation decisions on the number of preallocated contexts visible to the application. I feel this should be a performance issue, not a correctness one. (iii) We can always have a global switch (use / do not use synchronization or, equivalently, preallocate / do not preallocate contexts). (iv) We can use a barrier synchronization to get the effect of a synchronizing operation. }
\implement{ A communicator can be used for point-to-point communication as soon as the communicator is created.
If communicator creation is nonsynchronizing, then a process may receive a message sent with a newly allocated context before it has itself bound this context to a communicator (if the sender runs ahead of the receiver). However, since context preallocation is synchronizing, such a context must already be preallocated at the receiver. The communication subsystem should be built so that arriving messages that carry a context preallocated at the receiver are put on hold (waiting for the receiver to bind this context to a communicator and then execute a receive with this communicator). Messages arriving with a context that has not been preallocated can be rejected as erroneous. }
\discuss{ We lack a mechanism to allow a new communicator to inherit part of the pool of unused contexts of its parent communicator (e.g., an additional parameter in the communicator creating functions). Do we want one? }
\section{Inter-Communication}
This section introduces the concept of int\-er-com\-mun\-i\-cat\-ion and describes the portions of MPI that support it. It describes support for writing programs which contain user-level servers. It also describes a name service which simplifies writing programs containing int\-er-com\-mun\-i\-cat\-ion.
All point-to-point communication described thus far has involved communication between processes that are members of the same group. This type of communication is called ``int\-ra-com\-mun\-i\-cat\-ion'' and the communicator used is called an ``intra-communicator.''
In modular and multi-disciplinary applications, different process groups execute different modules, and processes within different modules communicate with one another in a pipeline or a more general module graph. In these applications the most natural way for a process to specify a target process is by the rank of the target process within the target group. In applications that contain internal user-level servers, each server may be a process group that provides services to one or more clients, and each client may be a process group which uses the services of one or more servers. In these applications it is again most natural to specify the target process by rank within the target group. This type of communication is called ``int\-er-com\-mun\-i\-cat\-ion'' and the communicator used is called an ``inter-communicator.''
An int\-er-com\-mun\-i\-cat\-ion operation is a point-to-point communication between processes in different groups. The group containing a process that initiates an int\-er-com\-mun\-i\-cat\-ion operation is called the ``local group,'' that is, the group of the sender in a send and of the receiver in a receive. The group containing the target process is called the ``remote group,'' that is, the group of the receiver in a send and of the sender in a receive. As in int\-ra-com\-mun\-i\-cat\-ion, the target process is specified using a \mpiarg{(communicator, rank)} pair. Unlike int\-ra-com\-mun\-i\-cat\-ion, the rank is relative to the remote group.
% One additional needed concept is the ``group leader.'' The process
% with rank 0 in a process group is designated ``group leader.'' This
% concept is used in support of user-level servers, and elsewhere.
Local and remote groups of an inter-communicator are not required to be disjoint. An intra-communicator can be seen as a particular case of an inter-communicator, where local and remote groups happen to be identical. The semantics of the accessor functions are defined so as to be consistent with this interpretation.
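For example, a process in a client group that holds an inter-communicator whose remote group is a server group might address the servers as sketched below. The calls are schematic, in the style of the examples later in this chapter; the communicator names {\tt server\_comm} and {\tt client\_comm}, the variable {\tt client\_rank}, and the elided arguments are illustrative only.
\begin{verbatim}
/* Client side: holds inter-communicator server_comm, whose remote
   group is the server group.  Ranks name server processes.        */
mpi_send(..., 0, server_comm);           /* to server rank 0       */
mpi_recv(..., 0, server_comm);           /* from server rank 0     */

/* Server side: holds inter-communicator client_comm, whose remote
   group is the client group.  Ranks name client processes.        */
mpi_recv(..., MPI_SRC_ANY, client_comm);
mpi_send(..., client_rank, client_comm); /* reply to that client   */
\end{verbatim}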
Here is a summary of the properties of int\-er-com\-mun\-i\-cat\-ion and inter-communicators:
\begin{itemize}
\item The syntax of point-to-point communication is the same for both inter- and int\-ra-com\-mun\-i\-cat\-ion. The same communicator can be used both for send and for receive operations.
\item A target process is addressed by its rank in the remote group, both for sends and for receives.
\item Communications using an inter-communicator are guaranteed not to conflict with any communications that use a different communicator.
\item An inter-communicator cannot be used for collective communication.
\item A communicator will provide either intra- or int\-er-com\-mun\-i\-cat\-ion, never both.
\item Once constructed, the remote group of an inter-communicator may not be changed. Communication with any process outside of the remote group is not allowed.
\end{itemize}
The routine \func{MPI\_COMM\_STAT()} may be used to determine if a communicator is an inter- or intra-communicator. Inter-communicators can be used as arguments to some of the other communicator access routines. Inter-communicators cannot be used as input to any of the constructor routines for intra-communicators or the context management routines.
\implement{ Communicators can be represented at each process by a tuple consisting of:
\begin{description}
\item[group]
\item[send\_context]
\item[receive\_context]
\item[source]
\end{description}
For inter-communicators, {\bf group} describes the remote group, and {\bf source} is the rank of the process in the local group. For intra-communicators, {\bf group} is the communicator group (remote=local), {\bf source} is the rank of the process in this group, and {\bf send\_context} and {\bf receive\_context} are identical. A group is represented by a rank-to-absolute-address translation table.
The inter-communicator cannot be discussed sensibly without considering processes in both the local and remote groups. Imagine a process {\bf P} in group $\cal P$ which has an inter-communicator {\bf Cp}, and a process {\bf Q} in group $\cal Q$ which has an inter-communicator {\bf Cq}. (Note that $\cal P$ and $\cal Q$ do not have to be distinct.) Then
\begin{itemize}
\item {\bf Cp.group} describes the group $\cal Q$ and {\bf Cq.group} describes the group $\cal P$.
\item {\bf Cp.send\_context~=~Cq.receive\_context} and this context is unique in $\cal Q$; \\ {\bf Cp.receive\_context~=~Cq.send\_context} and this context is unique in $\cal P$.
\item {\bf Cp.source} is rank of {\bf P} in $\cal P$ and {\bf Cq.source} is rank of {\bf Q} in $\cal Q$.
\end{itemize}
Assume that {\bf P} sends a message to {\bf Q} using the inter-communicator. Then {\bf P} uses the {\bf group} table to find the absolute address of {\bf Q}; {\bf source} and {\bf send\_context} are appended to the message.
Assume that {\bf Q} posts a receive with an explicit source parameter using the inter-communicator. Then {\bf Q} matches {\bf receive\_context} to the message context and the source parameter to the message source.
The same algorithm is used for intra-communicators as well. }
\subsection{Communicator Accessors}
\begin{funcdef}{MPI\_COMM\_STAT(comm, status)}
\funcarg{\IN}{ comm}{ handle to communicator}
\funcarg{\OUT}{ status}{ integer status}
\end{funcdef}
This local routine allows the calling process to determine if a communicator is an inter-communicator or an intra-communicator. It returns the status of communicator \mpiarg{comm}. Valid status values are \const{ MPI\_INTRA}, \const{ MPI\_INTER}, \const{ MPI\_INVALID}.
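For illustration, a library routine that accepts either kind of communicator might branch on the returned status as sketched below. The C binding, the function name {\tt check\_comm}, and the variable names are assumed for the example only.
\begin{verbatim}
void check_comm(void *comm)
{
   int status, remote_size, me;

   mpi_comm_stat(comm, &status);           /* local call */
   if (status == MPI_INTER) {
      /* inter-communicator: collective calls are not allowed;
         size and group accessors refer to the remote group, and
         MPI_COMM_RANK would return MPI_UNDEFINED (see the table
         below).                                                  */
      mpi_comm_size(comm, &remote_size);
   }
   else if (status == MPI_INTRA) {
      mpi_comm_rank(comm, &me);             /* rank in the group of comm */
   }
   /* status == MPI_INVALID indicates an invalid communicator handle */
}
\end{verbatim}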
When an inter-communicator is used as an input argument to the communicator accessors described under intra-communication, the following table describes the behavior.
\begin{center}
\begin{tabular}{|l|p{3.0in}|}
\hline
\multicolumn{2}{|c|}{\func{ MPI\_COMM\_} Function Behavior} \\
\multicolumn{2}{|c|}{(in Inter-Communication Mode)}\\
\hline
\hline
\func{MPI\_COMM\_SIZE()} & returns the size of the remote group. \\
\func{MPI\_COMM\_GROUP()} & returns the remote group. \\
\func{MPI\_COMM\_RANK()} & returns \const{MPI\_UNDEFINED} \\
\func{MPI\_COMM\_CONTEXTS()} & erroneous \\
\hline
\end{tabular}
\end{center}
\discuss{Should there be any other status values?}
\subsection{Intercommunicator Constructors and Destructors}
Construction of an inter-communicator requires two separate collective operations (one in the local group and one in the remote group) and a point-to-point communication between a process in the local group and a process in the remote group.
These operations may be performed with explicit synchronization of the two groups and within each group by calling \func{MPI\_INTERCOMM\_MAKE()}. The explicit synchronization can cause deadlock in modular programs with cyclic communication graphs, even if within each local group calls are executed in the same order. So, the local and remote operations can be decoupled and the construction performed ``loosely synchronously'' by calling the two routines \func{MPI\_INTERCOMM\_START()} and \func{MPI\_INTERCOMM\_FINISH()}. \func{ MPI\_INTERCOMM\_START()} and \func{ MPI\_INTERCOMM\_FINISH()} are both collective operations in their local group, but the call within each group need not be synchronized with the call in the other group.
These routines can construct multiple inter-communicators with a single call. This improves performance by allowing amortization of the synchronization overhead. The inter-communicator objects are destroyed in the same way as intra-communicator objects, by calling \func{MPI\_COMM\_FREE()}.
\begin{funcdef}{MPI\_INTERCOMM\_MAKE(local\_comm, local\_leader, bridge\_comm, remote\_leader, tag, n, new\_comms)}
\funcarg{\IN}{ local\_comm }{ handle to local intra-communicator}
\funcarg{\IN}{ local\_leader}{ rank of local group leader in \mpiarg{local\_comm}}
\funcarg{\IN}{ bridge\_comm}{ handle to bridge intra-communicator}
\funcarg{\IN}{ remote\_leader}{ rank of remote group leader in \mpiarg{ bridge\_comm}}
\funcarg{\IN}{ tag }{ ``safe'' tag}
\funcarg{\IN}{ n}{ number of new inter-communicators to construct}
\funcarg{\OUT}{ new\_comms}{ array of handles to new inter-communicators}
\end{funcdef}
This routine constructs an array of {\tt n} inter-communicators and sets the first {\tt n} entries in array {\tt new\_comms} to point to them. Intra-communicator \mpiarg{local\_comm} describes the local group. Intra-communicator \mpiarg{bridge\_comm} describes a group that contains members of both the local and remote groups, in particular the local leader and the remote leader. The local leader is identified by its rank {\tt local\_leader} in the local group. The remote leader is identified by its rank {\tt remote\_leader} in the bridge group. Integer {\tt tag} is used to distinguish this operation from others with the same bridge communicator. Integer \mpiarg{n} is the number of new inter-communicators constructed.
The routine is collective in each (local) group and synchronizes with the remote group. All calling processes in a local group provide the same value for {\tt local\_leader}, {\tt tag} and {\tt n}.
The value of {\tt bridge\_comm} and {\tt remote\_leader} is significant only at the process which is the local leader. (We thus have four possible combinations of parameter values: those provided by a nonleader in each of the two groups and those provided by each leader.)
Each of the inter-communicators produced provides int\-er-com\-mun\-i\-cat\-ion with the remote group. The two groups may intersect. The same process can be both local and remote leader.
The function \func{MPI\_INTERCOMM\_MAKE} allows two groups to be combined into an inter-communicator even if no process in one group has knowledge of the other group. All that is required is that a third group intersect both groups and that a process in the intersection be known in each group.
\implement{ The operation requires an exchange between the two leaders, followed by a broadcast in each local group with the leader at the broadcast root. }
\discuss{ Rather than allocating multiple inter-communicators, may want to allocate only one inter-communicator and a store of hidden contexts. Then {\tt MPI\_COMM\_DUP} can be used to create additional ones. This is more in line with intra-communicator functions. }
\begin{funcdef}{MPI\_INTERCOMM\_START(local\_comm, local\_leader, bridge\_comm, remote\_leader, tag, n, handle)}
\funcarg{\IN}{ local\_comm }{ local intra-communicator}
\funcarg{\IN}{ local\_leader}{ rank of local group leader in \mpiarg{local\_comm}}
\funcarg{\IN}{ bridge\_comm}{ bridge intra-communicator}
\funcarg{\IN}{ remote\_leader}{ rank of remote group leader in \mpiarg{ bridge\_comm}}
\funcarg{\IN}{ tag }{ ``safe'' tag}
\funcarg{\IN}{ n}{ number of new inter-communicators to construct}
\funcarg{\OUT}{ handle }{ handle for \func{ MPI\_INTERCOMM\_FINISH()}}
\end{funcdef}
This starts off an inter-communicator creation operation, returning a handle for the completion of the operation in \mpiarg{handle}. It is collective in \mpiarg{local\_comm}. It does not wait for the remote group to execute \func{MPI\_INTERCOMM\_START()}.
\mpiarg{handle} is conceptually similar to the communication handle used by non-blocking point-to-point routines. It is constructed by a \func{\_START} routine and destroyed by the matching \func{\_FINISH} routine. These handles are not valid for any other use. It is erroneous to call this routine again with the same \mpiarg{bridge\_comm}, \mpiarg{remote\_leader} and \mpiarg{tag} arguments without calling \func{MPI\_INTERCOMM\_FINISH()} to finish the first call.
\begin{funcdef}{MPI\_INTERCOMM\_FINISH(handle, new\_comms)}
\funcarg{\IN}{ handle }{ handle returned by \func{ MPI\_INTERCOMM\_START()}}
\funcarg{\OUT}{ new\_comms }{ array of handles to new inter-communicators}
\end{funcdef}
This completes an asynchronous inter-communicator creation operation, returning an array of handles to \mpiarg{n} new inter-communicators in \mpiarg{new\_comms}, where {\tt n} is the count that was passed to the \func{\_START} routine. This routine is collective in the group associated with communicator \mpiarg{local\_comm} of the corresponding call to \func{MPI\_INTERCOMM\_START()}. It waits for the remote group to call \func{MPI\_INTERCOMM\_START()} but does not wait for the remote group to call \func{MPI\_INTERCOMM\_FINISH()}.
Inter-communicator objects are destroyed in the same fashion as intra-communicator objects, using \func{MPI\_COMM\_FREE} described above.
\implement{ \func{MPI\_INTERCOMM\_START} initiates asynchronous sends and receives for an exchange of information between leaders.
\func{MPI\_INTERCOMM\_FINISH} completes the exchange of information and then executes a broadcast in each local group, with the local leader at the root. }
\discuss{ ({\bf Language binding}) We need yet another handle type. }
\change
\discuss{ I deleted \func{MPI\_COMM\_SPLITL}. It seems of limited usefulness (since it will be needed only once, it is acceptable to use a more awkward mechanism for generating an array of leaders, especially as any process in a group can be its leader). If we salvage this procedure, I suggest making it consistent with \func{MPI\_COMM\_SPLIT}. }
%%%%%%
\dross{
\subsection{Support for User-Level Servers}
The support for user-level servers takes into account the prevailing view that all processes (possibly excepting a host process) are initially equivalent members of the group of all processes. This group is described by the pre-defined intra-communicator \func{MPI\_COMM\_ALL}. The user splits this group such that processes in each parallel server are placed within a specific sub-group. The non-server processes are placed in a group of all non-servers. Provided that the user can determine the ranks of the server group leaders ({\em i.e.,} rank zero) and assign some tags for clients to send a message to the group leaders, then a group leader can at any time notify a server that it wishes to become a client.
MPI provides a routine, \func{MPI\_COMM\_SPLITL()}, that splits a parent group, creates sub-groups (intra-communicators) according to supplied keys, and returns the rank of each sub-group leader (relative to the parent group). This allows a process that does not know about a sub-group to contact that sub-group via the sub-group leader, using the parent communicator. The keys may be used as unique tags. This information may also be used as input to \func{MPI\_COMM\_PEER\_MAKE()}, for example.
\begin{funcdef}{MPI\_COMM\_SPLITL(comm, key, nkeys, leaders, sub\_comm)}
\funcarg{\IN}{ comm }{ extant intra-communicator to be ``split''}
\funcarg{\IN}{ key }{ key for sub-group membership}
\funcarg{\IN}{ nkeys }{ number of keys (number of sub-groups)}
\funcarg{\OUT}{ leaders }{ ranks of sub-group leaders in comm}
\funcarg{\OUT}{ sub\_comm}{ intra-communicator describing sub-group of calling process}
\end{funcdef}
This routine splits the group described by intra-communicator \mpiarg{comm} into \mpiarg{nkeys} sub-groups. Each calling process must specify a value of key in the range [0\ldots(\mpiarg{nkeys}-1)]. Processes specifying the same key are placed in the same sub-group. Ranks of the leaders of each sub-group (relative to \mpiarg{comm}) are returned in the integer array \mpiarg{leaders}. This routine returns a new intra-communicator, \mpiarg{sub\_comm}, that describes the sub-group to which the calling process belongs. }
%%%%
\subsection{Name Service}
MPI provides a name service to simplify construction of inter-communicators. This service allows a local process group to create an inter-communicator when the only available information about the remote group is a user-defined character string. A synchronizing version is provided by routine \func{MPI\_INTERCOMM\_NAME}. A loosely synchronous version is provided by routines \func{MPI\_INTERCOMM\_NAME\_START} and \func{MPI\_INTERCOMM\_NAME\_FINISH}.
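As a quick illustration of the intended usage (the routines themselves are specified below; the C binding, the membership test, and the communicator names are assumed for the example only), two groups that agree on a connection string can be joined as follows:
\begin{verbatim}
MPI_Comm new_comms[1];

/* Collective within each local group; the two groups need only agree
   on the string.  Each call blocks until the other group calls the
   name service with the same name.                                   */
if (i_am_a_client)                       /* hypothetical membership test */
   mpi_intercomm_name(client_comm, "solver service", 1, new_comms);
else
   mpi_intercomm_name(server_comm, "solver service", 1, new_comms);
\end{verbatim}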
\begin{funcdef}{MPI\_INTERCOMM\_NAME(comm, name, n, new\_comms)}
\funcarg{\IN}{ comm}{ handle to local intra-communicator}
\funcarg{\IN}{ name}{ character string }
\funcarg{\IN}{ n}{ number of new inter-communicators to construct}
\funcarg{\OUT}{ new\_comms}{ Array of handles to new inter-communicators}
\end{funcdef}
This is the name-served equivalent of \func{MPI\_INTERCOMM\_MAKE} in which the caller need only know a name for the peer connection. The call is collective within each local group and all members of the group provide the same values for {\tt name} and {\tt n}. The call blocks until another group executes a call to \func{MPI\_INTERCOMM\_NAME} or \func{MPI\_INTERCOMM\_NAME\_START} with the same {\tt name} parameter. An array of handles to {\tt n} inter-communicators for these two groups is then returned in {\tt new\_comms}.
\begin{funcdef}{MPI\_INTERCOMM\_NAME\_START(comm, name, n, handle)}
\funcarg{\IN}{ comm}{ handle to local intra-communicator}
\funcarg{\IN}{ name}{ character string }
\funcarg{\IN}{ n}{ number of new inter-communicators to construct}
\funcarg{\OUT}{ handle}{ handle for \func{MPI\_INTERCOMM\_FINISH()}}
\end{funcdef}
This is the name-served equivalent of \func{MPI\_INTERCOMM\_START}. The operation is completed by a call to \mpifunc{MPI\_INTERCOMM\_FINISH}.
\change
\discuss{ I use the same {\tt \_FINISH} routine for the name-served and the regular inter-communicator making routines. Here, too, rather than creating {\tt n} communicators, one could return {\tt n} cached contexts. }
\implement{ Assume that the ``host'' process is used as name server. A call to \func{MPI\_INTERCOMM\_NAME} can be implemented by having a selected group leader (say, the process with rank 0) send a message to the name server. When the name server has received two requests with matching names, it sends back to each leader a description of the remote group, and a ``clean'' context. The leaders next broadcast this information within each group. In the nonblocking version, the {\tt \_START} function causes the leader to send a nonblocking message to the name server and post a nonblocking receive; the {\tt \_FINISH} function causes the leaders to receive the reply from the name server and execute the local broadcast. }
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%% No changes made below this line MS
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Cacheing}
MPI provides a ``cacheing'' facility that allows an application to attach arbitrary pieces of information, called {\em attributes}, to context, group, and communicator descriptors; it provides this facility to user programs as well. Attributes are local to the process and would not be included if the descriptor were somehow sent to another process\footnote{The deletion of flatten/unflatten makes this point moot.}. This facility is intended to support optimizations such as saving persistent communication handles and recording topology-based decisions by adaptive algorithms. However, attributes are propagated intentionally by specific MPI routines. To summarize, cacheing is, in particular, the process by which implementation-defined data (and virtual topology data) is propagated in groups and communicators.
\discuss{Attribute propagation must be discussed carefully.}
\subsection{Functionality}
MPI provides the following services related to cacheing. They are all process-local.
\begin{funcdef}{MPI\_ATTRIBUTE\_ALLOC(n,handle\_array,len)} \funcarg {\IN}{ n}{ number of handles to allocate} \funcarg {\OUT}{ handle\_array}{ pointer to array of opaque attribute handling structure} \funcarg {\OUT}{ len}{ length of each opaque structure} \end{funcdef} Allocates a new attribute, so user programs and functionality layered on top of MPI can access attribute technology. \begin{funcdef}{MPI\_ATTRIBUTE\_FREE(handle\_array,n)} \funcarg {\IN}{ handle\_array}{ array of pointers to opaque attribute handling structures} \funcarg {\IN}{ n}{ number of handles to deallocate} \end{funcdef} Frees attribute handle. \begin{funcdef}{MPI\_GET\_ATTRIBUTE\_KEY(keyval)} \funcarg {\OUT}{ keyval}{ Provide the integer key value for future storing.} \end{funcdef} Generates a new cache key. \begin{funcdef}{MPI\_SET\_ATTRIBUTE(handle, keyval, attribute\_val, attribute\_len, attribute\_destructor\_routine)} \funcarg{\IN}{ handle}{ opaque attribute handle} \funcarg{\IN}{ keyval}{ The integer key value for future storing.} \funcarg{\IN}{ attribute\_val}{ attribute value (opaque pointer)} \funcarg{\IN}{ attribute\_len}{ length of attribute (in bytes)} \funcarg{\IN}{ attribute\_destructor\_routine}{ What one calls to get rid of this attribute later} \end{funcdef} Stores attribute in cache by key. \begin{funcdef}{MPI\_TEST\_ATTRIBUTE(handle,keyval,attribute\_ptr,len)} \funcarg{\IN}{ handle}{ opaque attribute handle} \funcarg{\IN}{ keyval}{ The integer key value for future storing.} \funcarg{\OUT}{ attribute\_ptr}{ void pointer to attribute, or NULL if not found} \funcarg{\OUT}{ len}{ length in bytes of attribute, if found.} \end{funcdef} Retrieve attribute from cache by key. \begin{funcdef}{MPI\_DELETE\_ATTRIBUTE(handle, keyval)} \funcarg{\IN}{ handle}{ opaque attribute handle} \funcarg{\IN}{ keyval}{ The integer key value for future storing.} \end{funcdef} Delete attribute from cache by key. \subsubsection{Example} Each attribute consists of a pointer or a value of the same size as a pointer, and would typically be a reference to a larger block of storage managed by the module. As an example, a global operation using cacheing to be more efficient for all contexts of a group after the first call might look like this: \begin{verbatim} static int gop_key_assigned = 0; /* 0 only on first entry */ static int gop_key; /* key for this module's stuff */ efficient_global_op (comm, ...) void *comm; { struct gop_stuff_type *gop_stuff; /* whatever we need */ void *group = mpi_comm_group(comm); if (!gop_key_assigned) /* get a key on first call ever */ { gop_key_assigned = 1; if ( ! (gop_key = mpi_Get_Attribute_Key()) ) { mpi_abort ("Insufficient keys available"); } } if (mpi_Test_Attribute (mpi_group_attr(group),gop_key,&gop_stuff)) { /* This module has executed in this group before. We will use the cached information */ } else { /* This is a group that we have not yet cached anything in. We will now do so. */ gop_stuff = /* malloc a gop_stuff_type */ /* ... fill in *gop_stuff with whatever we want ... */ mpi_set_attribute (mpi_group_attr(group), gop_key, gop_stuff, gop_stuff_destructor); } /* ... use contents of *gop_stuff to do the global op ... */ } gop_stuff_destructor (gop_stuff) /* called by MPI on group delete */ struct gop_stuff_type *gop_stuff; { /* ... free storage pointed to by gop_stuff ... */ } \end{verbatim} \discuss{The cache facility could also be provided for other descriptors, but it is less clear how such provision would be useful. 
It is suggested that this issue be reviewed in reference to Virtual Topologies.}
%----------------------------------------------------------------------
\section{Formalizing the Loosely Synchronous Model (Usage, Safety)}
\subsection{Basic Statements}
When a caller passes a communicator (which contains a context and group) to a callee, that communicator must be free of side effects throughout execution of the subprogram (quiescent). This provides one model in which libraries can be written, and work ``safely.'' For libraries so designated, the callee has permission to do whatever communication it likes with the communicator, and under the above guarantee knows that no other communications will interfere. Since we permit the creation of new communicators without synchronization (assuming preallocated contexts), this does not impose a significant overhead.
This form of safety is analogous to other common computer science usages, such as passing a descriptor of an array to a library routine. The library routine has every right to expect such a descriptor to be valid and modifiable.
\subsection{Models of Execution}
We say that a parallel procedure is {\em active} at a process if the process belongs to a group that may collectively execute the procedure, and some member of that group is currently executing the procedure code. If a parallel procedure is active at a process, then this process may be receiving messages pertaining to this procedure, even if it does not currently execute the code of this procedure.
\subsubsection{Nonreentrant parallel procedures}
This covers the case where, at any point in time, at most one invocation of a parallel procedure can be active at any process. That is, concurrent invocations of the same parallel procedure may occur only within disjoint groups of processes. For example, all invocations of parallel procedures involve all processes, processes are single-threaded, and there are no recursive invocations.
In such a case, a context can be statically allocated to each procedure. The static allocation can be done in a preamble, as part of initialization code. Or it can be done at compile/link time, if the implementation has additional mechanisms to reserve context values. Communicators to be used by the different procedures can be built in a preamble, if the executing groups are statically defined; if the executing groups change dynamically, then a new communicator has to be built whenever the executing group changes, but this new communicator can be built using the same preallocated context.
If the parallel procedures can be organized into libraries, so that only one procedure of each library can be concurrently active at each process, then it is sufficient to allocate one context per library.
\subsubsection{Parallel procedures that are nonreentrant within each executing group}
This covers the case where, at any point in time, for each process group, there can be at most one active invocation of a parallel procedure by a member process. However, it might be possible that the same procedure is concurrently invoked in two partially (or completely) overlapping groups. For example, the same collective communication function may be concurrently invoked on two partially overlapping groups.
In such a case, a context is associated with each parallel procedure and each executing group, so that overlapping execution groups have distinct communication contexts.
(One does not need a different context for each group; one merely needs a ``coloring'' of the groups, so that overlapping groups get distinct contexts.) One can generate the communicators for each parallel procedure when the execution groups are defined. Here, again, one needs only one context for each library, if no two procedures from the same library can be concurrently active in the same group.
Note that, for collective communication libraries, we do allow several concurrent invocations within the same group: a broadcast in a group may be started at a process before the previous broadcast in that group has ended at another process. In such a case, one cannot rely on context mechanisms to disambiguate successive invocations of the same parallel procedure within the same group: the procedure needs to be implemented so as to avoid confusion. For example, for broadcast, one may need to carry additional information in messages, such as the broadcast root, to help in such disambiguation; one also relies on preservation of message order by MPI.\@ With such an approach, we may be gaining performance, but we lose modularity. It is not sufficient to implement the parallel procedure so that it works correctly in isolation, when invoked only once; it needs to be implemented so that any number of successive invocations will execute correctly. Of course, the same approach can be used for other parallel libraries.
\subsubsection{Well-nested parallel procedures}
Calls of parallel procedures are well nested if a new parallel procedure is always invoked in a subset of a group executing the same parallel procedure. Thus, processes that execute the same parallel procedure have the same execution stack.
In such a case, a new context needs to be dynamically allocated for each new invocation of a parallel procedure. However, a stack mechanism can be used for allocating new contexts. Thus, a possible mechanism is to first allocate a large number of contexts (up to the upper bound on the depth of nested parallel procedure calls), and then use local stack management of these contexts on each process to create a new communicator (using \func{ MPI\_COMM\_MAKE}) for each new invocation.
\subsubsection{The general case}
In the general case, there may be multiple concurrently active invocations of the same parallel procedure within the same group; invocations may not be well-nested. A new context needs to be created for each invocation. It is the user's responsibility to make sure that, if two distinct parallel procedures are invoked concurrently on overlapping sets of processes, then context allocation or communicator creation is properly coordinated.
\section{Motivating Examples}
\discuss{The int\-ra-com\-mun\-i\-cat\-ion examples were first presented at the June MPI meeting; the int\-er-com\-mun\-i\-cat\-ion routines (when added) are new.}
\subsection{Current Practice \#1}
\label{context-ex1}
\noindent Example \#1a:
\begin{verbatim}
int me, size;
...
mpi_init();
mpi_comm_rank(MPI_COMM_ALL, &me);
mpi_comm_size(MPI_COMM_ALL, &size);
printf("Process %d size %d\n", me, size);
...
\end{verbatim}
Example \#1a is a do-nothing program that initializes itself legally, refers to the ``all'' communicator, and prints a message. This example does not imply that MPI supports printf-like communication itself.
\noindent Example \#1b:
\begin{verbatim}
int me, size;
...
mpi_init();
mpi_comm_rank(MPI_COMM_ALL, &me);    /* local */
mpi_comm_size(MPI_COMM_ALL, &size);  /* local */
if((me % 2) == 0)
   mpi_send(..., ((me + 1) % size), MPI_COMM_ALL);
else
   mpi_recv(..., ((me - 1 + size) % size), MPI_COMM_ALL);
...
\end{verbatim}
Example \#1b schematically illustrates message exchanges between ``even'' and ``odd'' processes in the ``all'' communicator.
\subsection{Current Practice \#2}
\label{context-ex2}
\begin{verbatim}
void *data;
int me;
int count;
...
mpi_init();
mpi_comm_rank(MPI_COMM_ALL, &me);
if(me == 0)
{
   /* get input, create buffer ``data'' */
   ...
}
mpi_broadcast(data, count, MPI_BYTE, 0, MPI_COMM_ALL);
...
\end{verbatim}
This example illustrates the use of a collective communication.
\subsection{(Approximate) Current Practice \#3}
\label{context-ex3}
\begin{verbatim}
int me, count, count2;
void *send_buf, *recv_buf, *send_buf2, *recv_buf2;
MPI_Group MPI_GROUP_ALL, grp0, grprem;
MPI_Comm commslave;
static int ranks[] = {0};
...
mpi_init();
mpi_comm_group(MPI_COMM_ALL, &MPI_GROUP_ALL);
mpi_comm_rank(MPI_COMM_ALL, &me);                    /* local */
mpi_local_subgroup(MPI_GROUP_ALL, 1, ranks, &grp0);  /* local */
mpi_group_difference(MPI_GROUP_ALL, grp0, &grprem);  /* local */
mpi_comm_make(MPI_COMM_ALL, grprem, &commslave);
if(me != 0)
{
   /* compute on slave */
   ...
   mpi_reduce(send_buf, recv_buf, count, MPI_BYTE, commslave);
   ...
}
/* zero falls through immediately to this reduce, others do later... */
mpi_reduce(send_buf2, recv_buf2, count2, MPI_BYTE, MPI_COMM_ALL);
\end{verbatim}
This example illustrates how a group consisting of all but the zeroth process of the ``all'' group is created, and then how a communicator is formed (\mpiarg{ commslave}) for that new group. The new communicator is used in a collective call, and all processes execute a collective call in the \const{ MPI\_COMM\_ALL} context. This example illustrates how the two communicators (which possess distinct contexts) protect communication. That is, communication in \const{ MPI\_COMM\_ALL} is insulated from communication in \mpiarg{ commslave}, and vice versa. In summary, for communication with ``group safety,'' contexts within communicators must be distinct.
\subsection{Example \#4}
\label{context-ex4}
The following example is meant to illustrate ``safety'' between point-to-point and collective communication. MPI guarantees that a single communicator can do safe point-to-point and collective communication.
\begin{verbatim}
#define TAG_ARBITRARY 12345
#define SOME_COUNT    50

int me, i;
MPI_Group MPI_GROUP_ALL, subgroup;
MPI_Comm the_comm;
int ranks[] = {2, 4, 6, 8};
...
mpi_init();
mpi_comm_group(MPI_COMM_ALL, &MPI_GROUP_ALL);
mpi_local_subgroup(MPI_GROUP_ALL, 4, ranks, &subgroup);  /* local */
mpi_group_rank(subgroup, &me);                           /* local */
mpi_comm_make(MPI_COMM_ALL, subgroup, &the_comm);
if(me != MPI_UNDEFINED)
{
   /* asynchronous receive: */
   mpi_irecv(handle, start, count, MPI_DOUBLE, MPI_SRC_ANY,
             TAG_ARBITRARY, the_comm);
}
for(i = 0; i < SOME_COUNT; i++)
   mpi_reduce(..., the_comm);
\end{verbatim}
\subsection{Library Example \#1}
\label{context-ex5}
The main program:
\begin{verbatim}
int done = 0;
user_lib_t *libh_a, *libh_b;
void *dataset1, *dataset2;
...
mpi_init();
...
init_user_lib(MPI_COMM_ALL, &libh_a);
init_user_lib(MPI_COMM_ALL, &libh_b);
...
user_start_op(libh_a, dataset1);
user_start_op(libh_b, dataset2);
...
while(!done)
{
   /* work */
   ...
   mpi_reduce(..., MPI_COMM_ALL);
   ...
   /* see if done */
   ...
} user_end_op(libh_a); user_end_op(libh_b); \end{verbatim} \noindent The user library initialization code: \begin{verbatim} void init_user_lib(void *comm, user_lib_t **handle) { user_lib_t *save; void *context; void *group; int len; user_lib_initsave(&save); /* local */ mpi_contexts_alloc(comm, 1); mpi_comm_dup(comm, &(save -> comm)); /* other inits */ *handle = save; } \end{verbatim} Notice that the communicator \mpiarg{ comm} passed to the library {\em is} needed to allocate new contexts. \noindent User start-up code: \begin{verbatim} void user_start_op(user_lib_t *handle, void *data) { mpi_irecv(&(handle -> irecv_handle), ..., handle -> comm); mpi_isend(&(handle -> isend_handle), ..., handle -> comm); } \end{verbatim} \noindent User clean-up code: \begin{verbatim} void user_end_op(user_lib_t *handle) { mpi_wait(handle -> isend_handle); mpi_wait(handle -> irecv_handle); } \end{verbatim} \subsection{Library Example \#2} \label{context-ex6} The main program: \begin{verbatim} int ma, mb; ... list_a := ``[0,1]''; list_b := ``[0,2{,3}]''; mpi_local_subgroup(MPI_GROUP_ALL, 2, list_a, &group_a); mpi_local_subgroup(MPI_GROUP_ALL, 2(3), list_b, &group_b); mpi_comm_make(MPI_COMM_ALL, group_a, &comm_a); mpi_comm_make(MPI_COMM_ALL, group_b, &comm_b); mpi_comm_rank(comm_a, &ma); mpi_comm_rank(comm_b, &mb); if(ma != MPI_UNDEFINED) lib_call(comm_a); if(mb != MPI_UNDEFINED) { lib_call(comm_b); lib_call(comm_b); } \end{verbatim} \noindent The library: \begin{verbatim} void lib_call(void *comm) { int me, done = 0; mpi_comm_rank(comm, &me); if(me == 0) while(!done) { mpi_recv(..., comm, MPI_SRC_ANY); ... } else { /* work */ mpi_send(..., comm, 0); .... } MPI_SYNC(comm); /* include/no safety for safety/no safety */ } \end{verbatim} The above example is really two examples, depending on whether or not you include rank 3 in {\tt list\_b}. This example illustrates that, despite contexts, subsequent calls to {\tt lib\_call} with the same context need not be safe from one another (``back masking''). Safety is realized if the \func{ MPI\_SYNC} is added. What this demonstrates is that libraries have to be written carefully, even with contexts. Algorithms like ``combine'' have strong enough source selectivity so that they are inherently OK. So are multiple calls to a typical tree broadcast algorithm with the same root. However, multiple calls to a typical tree broadcast algorithm -- with different roots --- could break. Therefore, such algorithms would have to utilize the tag to keep things straight. All of the foregoing is a discussion of ``collective calls'' implemented with point to point operations. MPI implementations may or may not implement collective calls using point-to-point operations. These algorithms are used to illustrate the issues of correctness and safety, independent of how MPI implements its collective calls. \subsection{Inter-Communication Examples} \subsubsection{Example 1: Three-Group ``Pipeline"} \label{context-ex7} \begin{verbatim} +---------+ +---------+ +---------+ | | | | | | | Group 0 | <-----> | Group 1 | <-----> | Group 2 | | | | | | | +---------+ +---------+ +---------+ \end{verbatim} Groups 0 and 1 communicate. Groups 1 and 2 communicate. Therefore, group 0 requires one inter-communicator, group 1 requires two inter-communicators, and group 2 requires 1 inter-communicator. Note that the synchronous inter-communicator constructor (\func{MPI\_COMM\_PEER\_MAKE()}) can be safely used here since there is no cyclic communication. 
\begin{verbatim} void * myComm; /* intra-communicator of local sub-group */ void * myFirstComm; /* inter-communicator */ void * mySecondComm; /* second inter-communicator (group B only) */ int membershipKey; int subGroupLeaders[3]; MPI_INIT(); ... /* User code must generate membershipKey in the range [0, 1, 2] */ membershipKey = ... ; /* Build intra-communicator for local sub-group and get group leaders */ /* of each sub-group (relative to MPI_COMM_ALL). */ MPI_COMM_SPLITL(MPI_COMM_ALL, membershipKey, 3, subGroupLeaders, &myComm); /* Build inter-communicators. Tags are hard-coded. */ if (membershipKey == 0) { /* Group 0 communicates with group 1. */ MPI_COMM_PEER_MAKE(myComm, 0, MPI_COMM_ALL, subGroupLeaders[1], 10, 1, &myFirstComm); } else if (membershipKey == 1) { /* Group 1 communicates with groups 0 and 2. */ MPI_COMM_PEER_MAKE(myComm, 0, MPI_COMM_ALL, subGroupLeaders[0], 10, 1, &myFirstComm); MPI_COMM_PEER_MAKE(myComm, 0, MPI_COMM_ALL, subGroupLeaders[2], 21, 1, &mySecondComm); } else if (membershipKey == 2) { /* Group 2 communicates with group 1. */ MPI_COMM_PEER_MAKE(myComm, 0, MPI_COMM_ALL, subGroupLeaders[1], 21, 1, &myFirstComm); } \end{verbatim} \subsubsection{Example 2: Three-Group ``Ring"} \label{context-ex8} \begin{verbatim} +-----------------------------------------------------------+ | | | +---------+ +---------+ +---------+ | | | | | | | | | +--> | Group 0 | <-----> | Group 1 | <-----> | Group 2 | <--+ | | | | | | +---------+ +---------+ +---------+ \end{verbatim} Groups 0 and 1 communicate. Groups 1 and 2 communicate. Groups 0 and 2 communicate. Therefore, each requires two inter-communicators. Note that the "loosely synchronous" inter-communicator constructor ( \func{MPI\_COMM\_PEER\_MAKE\_START()} and \func{MPI\_COMM\_PEER\_MAKE\_FINISH()}) is the best choice here due to the cyclic communication. \begin{verbatim} MPI_Comm myComm; /* intra-communicator of local sub-group */ MPI_Comm myFirstComm; /* inter-communicators */ MPI_Comm mySecondComm; make_id firstMakeID, secondMakeID; /* handles for "FINISH" */ int membershipKey; int subGroupLeaders[3]; MPI_INIT(); ... /* User code must generate membershipKey in the range [0, 1, 2] */ membershipKey = ... ; /* Build intra-communicator for local sub-group and get group leaders */ /* of each sub-group (relative to MPI_COMM_ALL). */ MPI_COMM_SPLITL(MPI_COMM_ALL, membershipKey, 3, subGroupLeaders, &myComm); /* Build inter-communicators. Tags are hard-coded. */ if (membershipKey == 0) { /* Group 0 communicates with groups 1 and 2. */ MPI_COMM_PEER_MAKE_START(myComm, 0, MPI_COMM_ALL, subGroupLeaders[2], 20, 1, &firstMakeID); MPI_COMM_PEER_MAKE_START(myComm, 0, MPI_COMM_ALL, subGroupLeaders[1], 10, 1, &secondMakeID); } else if (membershipKey == 1) { /* Group 1 communicates with groups 0 and 2. */ MPI_COMM_PEER_MAKE_START(myComm, 0, MPI_COMM_ALL, subGroupLeaders[0], 10, 1, &firstMakeID); MPI_COMM_PEER_MAKE_START(myComm, 0, MPI_COMM_ALL, subGroupLeaders[2], 21, 1, &secondMakeID); } else if (membershipKey == 2) { /* Group 2 communicates with groups 0 and 1. */ MPI_COMM_PEER_MAKE_START(myComm, 0, MPI_COMM_ALL, subGroupLeaders[1], 21, 1, &firstMakeID); MPI_COMM_PEER_MAKE_START(myComm, 0, MPI_COMM_ALL, subGroupLeaders[0], 20, 1, &secondMakeID); } /* Everyone has the same "FINISH" code... 
*/ MPI_COMM_PEER_MAKE_FINISH(firstMakeID, &myFirstComm); MPI_COMM_PEER_MAKE_FINISH(secondMakeID, &mySecondComm); \end{verbatim} \subsubsection{Example 3: Three-Group ``Pipeline" Using Name Service} \label{context-ex9} \begin{verbatim} +---------+ +---------+ +---------+ | | | | | | | Group 0 | <-----> | Group 1 | <-----> | Group 2 | | | | | | | +---------+ +---------+ +---------+ \end{verbatim} Groups 0 and 1 communicate. Groups 1 and 2 communicate. Therefore, group 0 requires one inter-communicator, group 1 requires two inter-communicators, and group 2 requires 1 inter-communicator. Note that the synchronous inter-communicator constructor (\func{MPI\_COMM\_NAME\_MAKE()}) can be safely used here since there is no cyclic communication. \begin{verbatim} MPI_Comm myComm; /* intra-communicator of local sub-group */ MPI_Comm myFirstComm; /* inter-communicator */ MPI_Comm mySecondComm; /* second inter-communicator (group B only) */ MPI_INIT(); ... /* User builds intra-communicator myComm describing the local sub-group */ /* using any appropriate MPI routine(s). (For example, myComm could */ /* have been passed in as an argument to a user subroutine.) */ myComm = ... ; /* Build inter-communicators. Group membership conditions must be */ /* provided by the user. */ if () { /* Group 0 communicates with group 1. */ MPI_COMM_NAME_MAKE(myComm, "Connect 10", 1, &myFirstComm); } else if () { /* Group 1 communicates with groups 0 and 2. */ MPI_COMM_NAME_MAKE(myComm, "Connect 10", 1, &myFirstComm); MPI_COMM_NAME_MAKE(myComm, "Connect 21", 1, &mySecondComm); } else if () { /* Group 2 communicates with group 1. */ MPI_COMM_NAME_MAKE(myComm, "Connect 21", 1, &myFirstComm); } \end{verbatim} \subsubsection{Example 4: Three-Group ``Ring" Using Name Service} \label{context-ex10} \begin{verbatim} +-----------------------------------------------------------+ | | | +---------+ +---------+ +---------+ | | | | | | | | | +--> | Group 0 | <-----> | Group 1 | <-----> | Group 2 | <--+ | | | | | | +---------+ +---------+ +---------+ \end{verbatim} Groups 0 and 1 communicate. Groups 1 and 2 communicate. Groups 0 and 2 communicate. Therefore, each requires two inter-communicators. Note that the "loosely synchronous" inter-communicator constructor ( \func{MPI\_COMM\_NAME\_MAKE\_START()} and \func{MPI\_COMM\_NAME\_MAKE\_FINISH()}) is the best choice here due to the cyclic communication. \begin{verbatim} MPI_Comm myComm; /* intra-communicator of local sub-group */ MPI_Comm myFirstComm; /* inter-communicators */ MPI_Comm mySecondComm; make_id firstMakeID, secondMakeID; /* handles for "FINISH" */ MPI_INIT(); ... /* User builds intra-communicator myComm describing the local sub-group */ /* using any appropriate MPI routine(s). (For example, myComm could */ /* have been passed in as an argument to a user subroutine.) */ myComm = ... ; /* Build inter-communicators. Group membership conditions must be */ /* provided by the user. */ if () { /* Group 0 communicates with groups 1 and 2. */ MPI_COMM_NAME_MAKE_START(myComm, "Connect 20", 1, &firstMakeID); MPI_COMM_NAME_MAKE_START(myComm, "Connect 10", 1, &secondMakeID); } else if () { /* Group 1 communicates with groups 0 and 2. */ MPI_COMM_NAME_MAKE_START(myComm, "Connect 10", 1, &firstMakeID); MPI_COMM_NAME_MAKE_START(myComm, "Connect 21", 1, &secondMakeID); } else if () { /* Group 2 communicates with groups 0 and 1. 
*/ MPI_COMM_NAME_MAKE_START(myComm, "Connect 21", 1, &firstMakeID); MPI_COMM_NAME_MAKE_START(myComm, "Connect 20", 1, &secondMakeID); } /* Everyone has the same "FINISH" code... */ MPI_COMM_NAME_MAKE_FINISH(firstMakeID, &myFirstComm); MPI_COMM_NAME_MAKE_FINISH(secondMakeID, &mySecondComm); \end{verbatim} %--------------------------------------------------------------------------- %--------------------------------------------------------------------------- %-------STOP HERE----------------------------------------------------------- %--------------------------------------------------------------------------- %--------------------------------------------------------------------------- \end{document} From lyndon@epcc.ed.ac.uk Wed Sep 8 05:54:49 1993 Received: from epcc.ed.ac.uk ([129.215.56.21]) by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA28320; Wed, 8 Sep 93 05:54:47 CDT Date: Wed, 8 Sep 93 11:54:35 BST Message-Id: <23438.9309081054@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: current context draft To: tony@aurora.cs.msstate.edu Reply-To: lyndon@epcc.ed.ac.uk Status: R Hi Tony Can you send out a copy of the current draft please. I understand that Marc has done and forwarded to you. Thanks Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Tue Sep 21 11:47:51 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA02679; Tue, 21 Sep 93 11:47:51 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08419; Tue, 21 Sep 93 11:45:38 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 21 Sep 1993 11:45:36 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930125/2.8s-UTK) id AA08404; Tue, 21 Sep 93 11:45:34 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA08316; Tue, 21 Sep 93 10:45:04 CDT Date: Tue, 21 Sep 93 10:45:04 CDT From: Tony Skjellum Message-Id: <9309211545.AA08316@Aurora.CS.MsState.Edu> To: mpi-comm@cs.utk.edu, mpi-context@cs.utk.edu, walker@rios2.EPM.ORNL.GOV Subject: MPI meeting ... Context Ladies/Gentlemen: I am coming to Dallas on Wednesday, in time for a post-dinner Context committee meeting. We are reading the chapter on Thursday. The Bristol Suites cancelled my reservation for Tuesday night, so I am coming on Wednesday; this also means I will be bringing the polished (:-)) chapter with me. Those of you who want to see it Wednesday will be able to see copies, or come to the meeting after dinner. I am sorry for the late arrival, but events have conspired against me. Please be patient. 
- Tony Skjellum From owner-mpi-context@CS.UTK.EDU Mon Sep 27 14:05:45 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA09531; Mon, 27 Sep 93 14:05:45 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930922/2.8s-UTK) id AA24911; Mon, 27 Sep 93 14:02:16 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 27 Sep 1993 14:01:31 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930922/2.8s-UTK) id AA24893; Mon, 27 Sep 93 14:01:29 -0400 Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA01930; Mon, 27 Sep 93 12:59:32 CDT Date: Mon, 27 Sep 93 12:59:32 CDT From: Tony Skjellum Message-Id: <9309271759.AA01930@Aurora.CS.MsState.Edu> To: lyndon@epcc.ed.ac.uk, snir@watson.ibm.com, tony@aurora.cs.msstate.edu, hender@macaw.fsl.noaa.gov Subject: Re: Intercommunication Issues as yet Unresolved Cc: walker@rios2.epm.ornl.gov, mpi-context@cs.utk.edu I prefer if we bounce this off the reflector, just for completeness. However, I expect that the smaller number of us will arrive at the solution. Given recent input from the reflector, I don't see it likely we will hear much from others ;-( - Tony ----- Begin Included Message ----- From hender@macaw.fsl.noaa.gov Mon Sep 27 11:30:24 1993 Date: Mon, 27 Sep 93 10:30:51 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) To: lyndon@epcc.ed.ac.uk, snir@watson.ibm.com, tony@aurora.cs.msstate.edu Subject: Re: Intercommunication Issues as yet Unresolved Cc: walker@rios2.epm.ornl.gov Content-Length: 9295 Hi all, Here's a few more thoughts on unresolved intercommunication issues. Should we be bouncing this discussion off the reflector? > The following points were raised by Marc... > . Does the "symmetric" protocol lead to more problems than our previous > asymmetric protocol? If so, should we switch back to something > like what we had before (publish/subscribe)? > > . For either the current, or an altered set of intercommunication calls, > can the sufficient conditions for non-deadlocking be described? > Can the necessary/sufficient conditions for non-deadlocking be > described? . For publish/subscribe, can we satisfy the same list of "objectives" that the current draft was designed to satisfy? These objectives came from objections to the original publish/subscribe scheme. As a reminder, here are those objectives: >> INTER-COMMUNICATION OBJECTIVES >> >> >> 1 The inter-communicator object is used for inter-group communication with >> the same point-to-point syntax as intra-communication. >> >> 2 An instance of communicator will provide either intra-group or >> inter-group and never both. >> >> 3 The same inter-communicator can be used for send and receive. >> >> 4 The inter-communicator object is invalid for MPI collective operations >> and there must be a simple (quick) way for an implementation to determine >> this. The (quick) way should be made available to the user via an >> enquiry routine. >> >> 5 Communications using an inter-communicator are resolved from >> communications using any other communicator. >> >> 6 Processes in group G address processes in group H by the ranks of >> processes in group H (and therefore vice versa) in both send and receive. >> >> 7 There is no "out-of-band" communication, i.e. if group H is bound to >> inter-communicator C then communicator C can only be used to send >> (receive) messages to (from) processes in group H. 
>> >> 8 The mechanisms for constructing inter-communicator objects must not >> introduce insecurity into the basic usage of MPI. (The mechanisms must >> not allow contexts to be bound in multiple usable inter-communicators.) >> >> 9 There is a mechanism for establishing communicator objects for >> inter-communication which does not synchronize the two groups. >> >> 10 There is support for user level servers. >> >> 11 There should be a name service however it should no be central to the >> provision of inter-group. It should be possible to implement the name >> service using the other provisions. >> >> 12 It should be possible to isolate the more advanced features of the >> context proposal from the more basic features. Users who only need basic >> features should not have to understand advanced features (or pay a >> performance penalty for existence of advanced features). >> >> 13 The provision for inter-communication should not adversely affect the >> (potential) performance of intra-communication. I think the objectives most likely to be affected by a change to publish/subscribe are 7, 8, 9, and 12. > Tom raised the following point: > . Is the intercommunication functionality of MPI different in its > deadlocking properties than any of the other aspects of MPI related > to communication order, cycles, etc, in process communication? > [I believe that Marc also raised a similar flag.] > > As I see this, we have at least to be able to state the sufficient conditions > for non-deadlocking in the current calls, or change to suitable "publish" > and "subscribe" type calls, and also state the sufficient conditions there. > I invite earnest, and specific discussion over the next week. I will also > think on this, of course, and contribute. ... > -Tony To elaborate on my (limited) understanding of deadlocking properties... The "possible implementation" of MPI_INTERCOMM_START() and MPI_INTERCOMM_FINISH() went like this: MPI_INTERCOMM_START(local_comm, local_leader, bridge_comm, remote_leader, tag, handle) Begin Duplicate local_comm using MPI_COMM_DUP(). (COLLECTIVE OPERATION) If (I am local_leader) Begin "Flatten" duplicated communicator Non-blocking send flat communicator to remote_leader using bridge_comm and tag. Non-blocking receive "remote" flat communicator from remote_leader using bridge_comm and tag. (Store communication handles in handle). End End MPI_INTERCOMM_FINISH(handle, new_comm) Begin If (local leader) Begin Complete the non blocking send and receive from the START. (Use MPI_WAITALL() with communication handles previously stored in handle). End Broadcast the remote flat communicator in local_comm. (COLLECTIVE OPERATION) Merge the local and remote flat communicators. Free memory used to store flat communicators (if necessary). End This looks like: 1) COLLECTIVE OPERATION (in local_comm) 2) NON-BLOCKING POINT-TO-POINT OPERATION ("leaders" only) 3) COLLECTIVE OPERATION (in local_comm) I'm going to try and make the case that these operations are separable: if we can describe sufficient conditions for non-deadlocking for collective operations and for non-blocking point-to-point operations then we can describe sufficient conditions for non-deadlocking for MPI_INTERCOMM_START() and MPI_INTERCOMM_FINISH(). I'll go slow because I don't understand most of this stuff as well as the rest of you. Let's see how far I get... :-) The first collective operation has the annoying property that I don't know how to describe sufficient conditions for non-deadlocking... 
If the local and remote groups do not overlap, I don't think we have a problem. If they do overlap, then it's more complicated... Suppose we have three partially overlapping groups, A, B, and C with the following properties: Some processes are members of group A only. Some processes are members of group B only. Some processes are members of group C only. Some processes are members of both groups A and B. Some processes are members of both groups A and C. Some processes are members of both groups B and C. What happens with the following codes? /* Process in the overlap of groups A and B. */ MPI_COLLECTIVE_OP(A, ...); MPI_COLLECTIVE_OP(B, ...); /* Process in the overlap of groups A and C. */ MPI_COLLECTIVE_OP(A, ...); MPI_COLLECTIVE_OP(C, ...); /* Process in the overlap of groups B and C. */ MPI_COLLECTIVE_OP(B, ...); MPI_COLLECTIVE_OP(C, ...); I think the operations complete in order A, B, C. How about this (cyclic dependency)? /* Process in the overlap of groups A and B. */ MPI_COLLECTIVE_OP(A, ...); MPI_COLLECTIVE_OP(B, ...); /* Process in the overlap of groups A and C. */ MPI_COLLECTIVE_OP(C, ...); MPI_COLLECTIVE_OP(A, ...); /* Process in the overlap of groups B and C. */ MPI_COLLECTIVE_OP(B, ...); MPI_COLLECTIVE_OP(C, ...); This looks like deadlock. None of the collective operations will complete. (Unfortunately, this could easily happen since the processes in the overlap of groups A and C may have no knowledge of the existence of group B!) I don't think we have defined sufficient conditions for non-deadlocking for collective operations with overlapping groups. Once we have defined these, I think we will be able to describe sufficient conditions for non-deadlocking for the collective operation in MPI_INTERCOMM_START() (operation 1 above). Operation 2 is two point-to-point operations (send and receive) between the "leaders" of the two groups. Given that operation 1 has completed without deadlock (because we figure out how to describe sufficient conditions for non-deadlocking for collective operations :-) the sufficient conditions for operation 2 are just the conditions for non-blocking point-to-point. I think non-blocking point-to-point will work as long as all interdependent "start" operations are called without any intervening blocking operations. Applying this to INTERCOMM_MAKE: MPI_INTERCOMM_START(a) MPI_INTERCOMM_START(b) MPI_INTERCOMM_START(c) MPI_INTERCOMM_FINISH(a) MPI_INTERCOMM_FINISH(b) MPI_INTERCOMM_FINISH(c) should be non-erroneous (but not SAFE) while MPI_INTERCOMM_START(a) MPI_INTERCOMM_FINISH(a) MPI_INTERCOMM_START(b) MPI_INTERCOMM_FINISH(b) MPI_INTERCOMM_START(c) MPI_INTERCOMM_FINISH(c) or MPI_INTERCOMM_START(a) MPI_INTERCOMM_START(b) some other collective/blocking operation MPI_INTERCOMM_START(c) MPI_INTERCOMM_FINISH(a) MPI_INTERCOMM_FINISH(b) MPI_INTERCOMM_FINISH(c) might be erroneous. It may not be possible to detect erroneousness until run time. If we believe this, then operation 3 looks just like operation 1. So, did I miss anything? Are there interactions between the three operations I haven't thought about? Any other comments? On publish/subscribe: should we be looking at the publish/subscribe mechanism from the June 23-25 meeting? Can anyone point to a mail message that contains the version of publish/subscribe we want to look at? Bye for now... 
Tom ----- End Included Message ----- From owner-mpi-context@CS.UTK.EDU Fri Oct 22 13:07:17 1993 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with SMTP (5.61+IDA+UTK-930125/2.8t-netlib) id AA01469; Fri, 22 Oct 93 13:07:17 -0400 Received: from localhost by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930922/2.8s-UTK) id AA17555; Fri, 22 Oct 93 12:55:36 -0400 X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 22 Oct 1993 12:55:33 EDT Errors-To: owner-mpi-context@CS.UTK.EDU Received: from gw1.fsl.noaa.gov by CS.UTK.EDU with SMTP (5.61+IDA+UTK-930922/2.8s-UTK) id AA17525; Fri, 22 Oct 93 12:55:31 -0400 Received: by gw1.fsl.noaa.gov (5.57/Ultrix3.0-C) id AA13627; Fri, 22 Oct 93 16:53:00 GMT Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA17176; Fri, 22 Oct 93 10:55:31 MDT Date: Fri, 22 Oct 93 10:55:31 MDT From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9310221655.AA17176@macaw.fsl.noaa.gov> To: mpi-context@cs.utk.edu, otto@merckx.cse.ogi.edu Subject: Comments on Oct. 19 draft Hi all, A few comments on the Inter-Communication section (5.5) and the Inter-Communication Examples section (5.8.7) of the October 19 draft... On page 118, line 46, the first sentence reads: "Intercommunicators can be created using the function MPI_COMM_DUP." We had a long discussion following the last meeting about how to describe rules for "safe" use of the inter-communicator construction routines. The result was a decision to drop the blocking versions: MPI_INTERCOMM_MAKE() and MPI_INTERCOMM_NAME(). It seems to me that using MPI_COMM_DUP() to duplicate an inter-communicator has the same synchronization problems as MPI_INTERCOMM_MAKE()-- both groups need to know what's going on. It's just another blocking inter-communicator construction routine. To be consistent, I think we need to delete this sentence or replace "can" with "cannot". On page 119, lines 6-12: "The explicit synchronization can cause deadlock in modular programs with cyclic communication graphs, even if: * within each local group calls are executed in the same order, * the local and remote operations are decoupled and the construction is performed "loosely synchronously" (by calling the two routines MPI_INTERCOMM_START() and MPI_INTERCOMM_FINISH())." The previous draft (9/27) read: "The explicit synchronization can cause deadlock in modular programs with cyclic communication graphs, even if within each local group calls are executed in the same order. So, the local and remote operations can be decoupled and the construction performed "loosely synchronously" by calling the two routines MPI_INTERCOMM_START() and MPI_INTERCOMM_FINISH()." I have two comments about this change: 1) In the current draft, the ONLY WAY to construct an inter-communicator is "loosely synchronously". The second bullet doesn't make sense any more. The local and remote operations are always decoupled. 2) My interpretation of the current draft is that there is no "safe" way to construct an inter-communicator. I think it may also be true that we don't know how to describe sufficient conditions for non-deadlocking for any series of collective operations on groups that overlap. If this is true, then we should make this statement a general warning about using any collective operations and not hide it here. Comments please? I don't have a good idea for fixing this text without knowing what lead to the change. What am I missing? On page 119, line 41 (typo): "MPI _arg" appears, looks like a typo. 
On page 120, lines 8-9 (typo): "This call starts off an inter-communicator creation operation, returning a handle for the completion of the operation in handle; ..." ^^^^^^ "This call starts off an inter-communicator creation operation, returning a handle for the completion of the operation in inter_request; ..." ^^^^^^^^^^^^^ On page 121, line 28 (clarification): I would like to add the following sentence: "All processes in the local and remote groups must provide the same name." This is not stated anywhere else. It used to be in an implementation note. On page 134, line 26 (typo): "group B" should be "group 1" This one's my fault, left over from several drafts ago... :-) On page 137, lines 24-40 (typos): "if (color == 0) { /* Group 0 communicates with group 1. */ MPI_Intercomm_name_start (myComm, "Connect 01", &inter_request1); MPI_Intercomm_finish (inter_request1, &myFirstComm); } else if (color == 1) { /* Group 1 communicates with groups 0 and 2. */ MPI_Intercomm_name_start (myComm, "Connect 10", &inter_request1); MPI_Intercomm_name_start (myComm, "Connect 12", &inter_request2); MPI_Intercomm_finish (inter_request1, &myFirstComm); MPI_Intercomm_finish (inter_request2, &mySecondComm); } else if (color == 2) { /* Group 2 communicates with group 1. */ MPI_Intercomm_name_start (myComm, "Connect 21", &inter_request1); MPI_Intercomm_finish (inter_request1, &myFirstComm); }" Names must match for all processes in local and remote groups. Fix is: if (color == 0) { /* Group 0 communicates with group 1. */ MPI_Intercomm_name_start (myComm, "Connect 01", &inter_request1); MPI_Intercomm_finish (inter_request1, &myFirstComm); } else if (color == 1) { /* Group 1 communicates with groups 0 and 2. */ MPI_Intercomm_name_start (myComm, "Connect 01", &inter_request1); ^^ MPI_Intercomm_name_start (myComm, "Connect 12", &inter_request2); MPI_Intercomm_finish (inter_request1, &myFirstComm); MPI_Intercomm_finish (inter_request2, &mySecondComm); } else if (color == 2) { /* Group 2 communicates with group 1. */ MPI_Intercomm_name_start (myComm, "Connect 12", &inter_request1); ^^ MPI_Intercomm_finish (inter_request1, &myFirstComm); } On page 138, lines 39-48 and page 139, lines 1-5 (typos): "if (color == 0) { /* Group 0 communicates with groups 1 and 2. */ MPI_Intercomm_name_start (myComm, "Connect 01", &inter_request1); MPI_Intercomm_name_start (myComm, "Connect 02", &inter_request2); } else if (color == 1) { /* Group 1 communicates with groups 0 and 2. */ MPI_Intercomm_name_start (myComm, "Connect 10", &inter_request1); MPI_Intercomm_name_start (myComm, "Connect 12", &inter_request2); } else if () { /* Group 2 communicates with groups 0 and 1. */ MPI_Intercomm_name_start (myComm, "Connect 20", &inter_request1); MPI_Intercomm_name_start (myComm, "Connect 21", &inter_request2); }" Names must match for all processes in local and remote groups. Also, last elseif is a dinosaur. Fixes are: if (color == 0) { /* Group 0 communicates with groups 1 and 2. */ MPI_Intercomm_name_start (myComm, "Connect 01", &inter_request1); MPI_Intercomm_name_start (myComm, "Connect 02", &inter_request2); } else if (color == 1) { /* Group 1 communicates with groups 0 and 2. */ MPI_Intercomm_name_start (myComm, "Connect 01", &inter_request1); ^^ MPI_Intercomm_name_start (myComm, "Connect 12", &inter_request2); } else if (color == 2) ^^^^^^^^^^ { /* Group 2 communicates with groups 0 and 1. 
*/ MPI_Intercomm_name_start (myComm, "Connect 02", &inter_request1); ^^ MPI_Intercomm_name_start (myComm, "Connect 12", &inter_request2); ^^ } By the way, whoever updated all these examples to the current draft, thanks! Bye for now... Tom From owner-mpi-context@CS.UTK.EDU Mon Jan 3 22:23:03 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id WAA02924; Mon, 3 Jan 1994 22:23:03 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id WAA12245; Mon, 3 Jan 1994 22:23:23 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 3 Jan 1994 22:23:22 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from watson.ibm.com by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id WAA12238; Mon, 3 Jan 1994 22:23:20 -0500 Message-Id: <199401040323.WAA12238@CS.UTK.EDU> Received: from YKTVMV by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 8271; Mon, 03 Jan 94 22:23:20 EST Date: Mon, 3 Jan 94 22:23:19 EST From: "Marc Snir" To: MPI-CONTEXT@CS.UTK.EDU Suggestion for changes in 5.5 (Intercommunicators) 5.5.2. The usual mechanism for binding client to server (e.g. via sockets) is that a server opens a connection, without specifying the remote port (client), and starts listening (receiving). The client opens the connection by specifying both local and remote ports, and starts sending messages. This protocol is asymmetric: the server need not know the name/address of the client, whereas the client provides full information on the server. Also, the server operation is nonblocking: The server may continue serving other clients while waiting for a new client to join. This mechanism cannot be supported now with intercommunicators: a receive can be posted only after the intercommunicator creation has completed (with a call to MPI_INTERCOMM_FINISH). This call is blocking. Thus, if a server is not multithreaded, its binding to a client is blocking. Also, we offer a binding mechanism where both sides need know the identity of the other (MPI_INTERCOMM_START ...FINISH) and a binding mechanism where neither side need know the identity of the other (MPI_INTERCOMM_NAME_START ...FINISH); not a mechanism where the server need not know the name of the client, but the client need know the name of the server. I believe we should offer a nonblocking binding mechanism. This is in support of current practice, and in order to reduce the chances for deadlock (in an asymmetric scenario, there can be no deadlock if clients and servers are disjoint). I suggest the following changes: 1. Change the semantics of MPI_INTERCOMM_FINISH to be nonblocking. Thus, replace MPI_INTERCOMM_FINISH with MPI_INTERCOMM_TEST(inter_request, flag, newcomm). Returns flag=true if the intercommunicator creation has completed, in which case newcomm is a handle to the new intercommunicator. MPI_INTERCOMM_START is used, as before, to start the process. 2. Allow the "remote_leader" argument in the call MPI_INTERCOMM_START to be MPI_ANY_SOURCE, even at the local_leader, where it is significant. Thus, only one of the two pairing leaders need know the identity of the other (the call is OK if both leaders provide the identity of the other, and erroneous if neither does). 3. For good measure, we can add a blocking call: MPI_INTERCOMM_MAKE(local_comm, local_leader, peer_comm, remote_leader, tag, intercomm), which blocks until the intercommunicator is created. With these changes the protocol for binding to a server is as follows: The following need to be agreed beforehand: 1. Who is the leader of the client group. 2. 
Who is the leader of the server group. 3. Which communicator (peer_comm) is used by a client leader to bind to the server leader. 4. The client leader need know "peer_comm" and the rank of the server leader in peer_comm. 5. The server leader need only know "peer_comm" When a client wants to bind with a server, it executes a call to MPI_INTERCOMM_MAKE. The client leader supplies in this call the "peer_comm" and "remote_leader" parameters that allow to communicate with the server leader. When the call returns, the client can start communicating with the server. A server that can accept new clients executes a call to MPI_INTERCOMM_START. The server leader provides the "peer_comm" parameter and passes "remote_leader = MPI_ANY_SOURCE". The server then periodically calls MPI_INTERCOMM_TEST to check if it has pending requests to bind. If it returns "flag=true" then the server starts communicating with the new client. Alternatively, in a multithreaded environment, the server spawns a thread that blocks on a call to MPI_INTERCOMM_MAKE. The call at the server leader is with parameters (...,peer_comm, MPI_ANY_SOURCE). Note that the functionality proposed here is, essentially, a superset of the functionality in the draft, so that no power is lost. Implementation: MPI_INTERCOMM_START The leader coordinates a new local context in the local group (this is a collective operation in the local group); The leader posts a nonblocking receive with communicator, source and tag as specified by the peer_comm, remote_leader and tag parameters of the call (source might be MPI_SOURCE_ANY); If remote_leader <> MPI_SOURCE_ANY, then the leader posts a nonblocking send with communicator, destination and tag as specified by the peer_comm, remote_leader, and tag parameters; the message contains the new context and the local group table. MPI_INTERCOMM_TEST The leader tests if the receive posted by _START has been satisfied (this is a call to MPI_TEST); If not, then the leader broadcasts flag=false in the group; If yes then { the leader broadcast flag=true and the received message in the group; if it did not post a send in _START then the send is posted now (to the source of the message received); the leader waits for send completion }. MPI_INTERCOMM_MAKE is MPI_INTERCOMM_START, followed by MPI_INTERCOMM_TEST, with MPI_TEST replaced by MPI_WAIT. Note: One should warn that the same communicator and tag should not be used for communication between the two leaders while an intercommunicator is being created, unless we want to preserve a separate tag space for intercommunicator operations (this is true with the old version, as well). 5.5.3 I propose to change the name service constructs for the following reasons: 1. It is an overkill to force each MPI implementation to have an (anonymous) name server: When I run with 8 processes, I may not want an additional name server process, or have one of the eight processes perturbed by an additional name serving function. 2. I may want in a large system multiple name servers, and perhaps hierarchical name servers. 3. Name servers, as currently defined, may be difficult to extent to a dynamic MPI, where processes come and go. To do so, we should have name servers that are local to a communication universe (e.g., MPI_COMM_WORLD). The mechanism provided to extend a communication universe will, ipso facto, also extend the group of processes serviced by a name server. 
Quite generally, we should want our standard "communication context" mechanism, namely communicators, to extend to name servers as well. 4. Various communicators may correspond to various communication protocols, various protection mechanisms, etc. One more reason to have the name server "local" to a communication context. 5. It seems that the functions in 5.2.2 can be used to provide the functions of a name server, as well, thus reducing the number of functions. Basically, the name server allows to bind two groups via communication to a third process. A mechanism to do so, using the previous calls is to allow the leaders of each group to communicate with a third server process, rather than with each other. I suggest the following changes: An MPI implementation may provide one or more "intercom servers". The predefined constant MPI_SERVER contains the rank in the group of MPI_COMM_WORLD of an intercom server for the group, if such is available; MPI_SERVER = MPI_UNDEFINED, if there is no such server. Additional intercom servers may be provided. In a call to MPI_INTERCOMM_MAKE or MPI_INTERCOMM_START, the parameter "peer_comm" may be the rank of an "intercom server", rather than the rank of a remote group leader (or MPI_ANY_SOURCE). If two groups call MPI_INTERCOMM_MAKE (or MPI_INTERCOMM_START followed by MPI_INTERCOMM_TEST), the leaders in each group provide the same "peer_comm", "remote_leader" and "tag" parameters, and "remote_leader" is the rank of an "intercom server", then a new intercommunicator will be created that binds at least one of these two groups (The two groups will be bound together unless there is a race with another matching call.) Thus, a call to MPI_INTERCOMM_NAME_START(comm, name, inter_request) can be replaced by a call to MPI_INTERCOMM_START(comm, 0, MPI_COMM_WORLD, MPI_SERVER, name, inter_request) ("name", which was a character string, becomes an integer tag). One could also use another local leader, or another peer communicator which is known to have a name server. Implementation: NO CHANGES are needed in the code to MPI_INTERCOMM_MAKE, MPI_INERCOMM_START, and MPI_INTERCOMM_TEST. Each server needs to know which communicator it serves. The server code is busy loop which does the following: post a (blocking) receive with source = MPI_ANY_SOURCE and tag = MPI_ANY_TAG; test tag of incoming message (messageA); if previous message (messageB) with same tag is available in local store then { exchange messages (send messageA to sender of messageB and send messageB to sender of messageA); erase messageB from store } else put messageA in store Of course, one can replace blocking communication with nonblocking communication. 
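A minimal C sketch of the matching loop described above, for illustration only: it is not code from any draft; the payload bound, the small linear store and the name intercom_server are assumptions made for the example, and error handling and the nonblocking variant are omitted.

    #include <mpi.h>
    #include <string.h>

    #define PAYLOAD_MAX 4096   /* assumed bound on (context + group table)      */
    #define STORE_MAX     64   /* assumed bound on simultaneously pending binds */

    struct pending {
        int  tag;                 /* tag the request arrived with          */
        int  source;              /* rank of the leader that sent it       */
        int  count;               /* payload size in bytes                 */
        char data[PAYLOAD_MAX];   /* opaque payload (context, group table) */
    };

    void intercom_server(MPI_Comm comm)
    {
        struct pending store[STORE_MAX];
        int nstored = 0;

        for (;;) {
            char buf[PAYLOAD_MAX];
            MPI_Status st;
            int count, i;

            /* post a blocking receive for a request from any group leader */
            MPI_Recv(buf, PAYLOAD_MAX, MPI_BYTE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     comm, &st);
            MPI_Get_count(&st, MPI_BYTE, &count);

            /* is a request with the same tag already waiting in the store? */
            for (i = 0; i < nstored && store[i].tag != st.MPI_TAG; i++)
                ;

            if (i < nstored) {
                /* yes: exchange the two payloads and erase the stored entry */
                MPI_Send(buf, count, MPI_BYTE, store[i].source, st.MPI_TAG, comm);
                MPI_Send(store[i].data, store[i].count, MPI_BYTE,
                         st.MPI_SOURCE, st.MPI_TAG, comm);
                store[i] = store[--nstored];
            } else if (nstored < STORE_MAX) {
                /* no: remember this request until its partner arrives */
                store[nstored].tag    = st.MPI_TAG;
                store[nstored].source = st.MPI_SOURCE;
                store[nstored].count  = count;
                memcpy(store[nstored].data, buf, (size_t)count);
                nstored++;
            }   /* (store overflow handling omitted in this sketch) */
        }
    }

As noted above, the pairing is racy by design: if more than two groups offer the same tag, which two end up bound depends on arrival order at the server.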
From owner-mpi-context@CS.UTK.EDU Tue Jan 4 12:28:06 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id MAA07496; Tue, 4 Jan 1994 12:28:06 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA18008; Tue, 4 Jan 1994 12:28:24 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 4 Jan 1994 12:28:22 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from macaw.fsl.noaa.gov by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA18001; Tue, 4 Jan 1994 12:28:21 -0500 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA10247; Tue, 4 Jan 94 10:28:12 MST Date: Tue, 4 Jan 94 10:28:12 MST From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9401041728.AA10247@macaw.fsl.noaa.gov> To: mpi-context@CS.UTK.EDU Subject: Re: Suggestion for changes in 5.5 (Intercommunicators) Marc, Your suggestions for changing section 5.5.2 (MPI_INTERCOMM_TEST(), etc.) make sense to me. I assume that a "high quality" implementation will be able to detect the erroneous case where "remote_leader = MPI_ANY_SOURCE" for BOTH leaders in calls to MPI_INTERCOMM_START() or MPI_INTERCOMM_MAKE() and return an appropriate error status to the user... Is there any reason why this would not be possible? (Error status might actually be returned by MPI_INTERCOMM_TEST() instead of MPI_INTERCOMM_START()...). Regarding your suggestions for changing section 5.5.3, I am concerned about removing the name server without a formal vote. Perhaps this would be a good thing to discuss and vote on at the February meeting? Tom From owner-mpi-context@CS.UTK.EDU Wed Jan 5 09:41:42 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id JAA21393; Wed, 5 Jan 1994 09:41:41 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id JAA17510; Wed, 5 Jan 1994 09:41:56 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 5 Jan 1994 09:41:54 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from epcc.ed.ac.uk by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id JAA17502; Wed, 5 Jan 1994 09:41:45 -0500 Date: Wed, 5 Jan 94 14:41:36 GMT Message-Id: <13643.9401051441@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: Re: suggestion for changes in 5.5 To: "Marc Snir" , MPI-CONTEXT@CS.UTK.EDU In-Reply-To: Marc Snir's message of Mon, 3 Jan 94 22:23:19 EST Reply-To: lyndon@epcc.ed.ac.uk Happy New Year to all! > 5.5.2. > The usual mechanism for binding client to server (e.g. via sockets) is that a > client opens a connection, without specifying the remote port (client), and > starts listening (receiving). The client opens the connection by specifying > both local and remote ports, and starts sending messages. > This protocol is asymmetric: the server need not know the name/address of the > client, whereas the client provides full information on the server. Also, the > server operation is nonblocking: The server may continue serving other clients > while waiting for a new client to join. For concreteness this is as follows. ---------------------------------------------------------------------- Client ------ int socket_fd; struct sockaddr *srvname; int namelen; /* create a socket */ socket_fd = socket(domain,type,protocol); /* connect to server "srvname, namelen" */ connect(socket_fd, srvname, namelen); /* In this case the name of the server socket "srvname, namelen" are IN arguments for connect() and the example does not deal with the whole ugly magic associated with how the client can discover them. 
*/ write(socket_fd, data, nbytes); Server ------ int listen_fd, socket_fd; struct sockaddr *srvname; int *namelen; /* create a socket */ listen_fd = socket(domain,type,protocol); /* start to listen */ listen(listen_fd, backlog); /* accept connection */ socket_fd = accept(listen_fd, cliname, namelen); /* In this case the name of the client socket "cliname, namelen" are OUT arguments for accept() and the example does not deal with the whole ugly magic associated with how the server can interpret them. */ read(socket_fd, buf, nbytes); ---------------------------------------------------------------------- There is another aspect to the socket mechanism for client-server which involves use of select(2). The server wants to wait for connections from new clients and communications from (possibly also to) existing clients. Not an option for the server to block waiting for a new client without regard for communications from existing clients since this means that existing clients dont get serviced, possibly indefinitely if there is no new client. Not an option for the server to block waiting for communications from existing clients without regard for connections from new clients since this means that new clients dont get serviced, possibly indefinitely if there are no more requests from existing clients. The server sets listen() on its dangling socket and waits in select() for either communication from an existing client or connection from a new client. In the above example there is a select() call in between the listen and the wait. The accept() call is in a program branch after the select() call etc. If the connection was ready then the server calls accept() which completes the connection to the new client. If it was a socket file descriptor ready for read then the server reads the message and does with it. I'd extend the example, and will do so if people want, but the select() call is rather messy to use. If we translate this into MPI like terms, it would be something like the inter-request handle being suitable as an argument to MPI_WAIT_ANY (etc) along with non blocking receive requests. The alternative I had thought of, which had the advantage of not mixing up communicator management and point-to-point quite so intimately, for binding of clients and servers follows. The client send a message within MPI_COMM_WORLD (or the copy thereof) to the server, such message meaning "I want to connect". Then wait for a reply from the server such reply containing the tag to use for calls to the intercommunicator constructors. Then call MPI_INTERCOMM_START with the bridge (peer) communicator as MPI_COMM_WORLD (or the copy thereof), remote leader rank is that of server, and tag is that contained in the reply message from the server. Then call MPI_INTERCOMM_FINISH. The server post nonblocking receives for each existing client in each case using the (inter)communicator which provides a connection between the server and the client. The server also keep posted one nonblocking receive from any member of MPI_COMM_WORLD (or the copy thereof). The server wait in MPI_WAIT_ANY (etc) for any of the posted receives. The server receive either a message from an existing client or this "I want to connect" message from a new client. On receipt of the "I want to conect" message the server reply to the client in MPI_COMM_WORLD (or the copy thereof) with a tag which can safely be used for the client and server to bind using the intercommunicator constructors. 
The server then call MPI_INTERCOMM_START using tag and in MPI_COMM_WORLD (or copy thereof) and remote rank is that of the client which sent the message. Then call MPI_INTERCOMM_FINISH. The MPI_INTERCOMM_START and FINISH calls in the server block only for a short time as the client is also calling the matching MPI_INTERCOMM_START and FINISH. The client is actually a group. So before the client group leader sends that first message to the server there is a synchronisation in the client group and all other members of the group wait for the leader. The reply containing safe tag can be broadcast by the leader client, and then all members of client group do the MPI_INTERCOMM_START etc. The server is actually a group. So the client send to the server group leader. After the server group leader receive the "I want to connect" message it must send "It wants to connect" down to other servers. The server group synchronises before the leader replies to the client with a safe tag for the intercommunicator constructors. Then all members of the server group do the MPI_INTERCOMM_START etc. Is there a flaw in this mechanism? If so then I would be grateful for an explanation of the flaw. If not then I dont think we need to make extensions which attempt to move toward the socket mechanism. However, if there is a preference for a socket like mechanism then I am happy to work on a proposal for such mechanism (which I believe would not make too much sense unless it contained the ability to wait for combinations of communication and connect/listen requests - thus impacting on the point-to-point chapter). > 3. For good measure, we can add a blocking call: > MPI_INTERCOMM_MAKE(local_comm, local_leader, peer_comm, remote_leader, tag, > intercomm), > which blocks until the intercommunicator is created. This suggestion is fine, in fact was originally there, and is convenient for the usage anticipated as described above. It increases the number of functions by one, i.e. epsilon which imho is negligible. > A server that can accept new clients executes a call to MPI_INTERCOMM_START. > The server leader provides the "peer_comm" parameter and passes > "remote_leader = MPI_ANY_SOURCE". > The server then periodically calls MPI_INTERCOMM_TEST to check if it has pending > requests to bind. If it returns "flag=true" then the server starts > communicating with the new client. This means however that the server cannot block waiting for communications from existing clients, since this may mean that waiting new clients always wait. So the server is forced to be a busy loop? (In the scenario I described the server is not a busy loop, unless the (low quality?) implementation of MPI_WAIT_ANY is itself a busy loop.) > 5.5.3 > > I propose to change the name service constructs for the following reasons: > > 1. It is an overkill to force each MPI implementation to have an (anonymous) > name server: > > When I run with 8 processes, I may not want an additional name > server process, or have one of the eight processes perturbed by an additional > name serving function. Sure. If you dont use a name service then you dont want resources consumed by it. So it is useful to consider how we can avoid such. One thought which comes to mind, MPI demands that the program creation environment at least allow SPMD program creation, perhaps we could make the inclusion of name server(s) a mandated run time _option_ in program creation environment (possibly also a link time option). 
This way if you dont want it you dont get it, and if you do want it you say you want it and you get it. > 2. I may want in a large system multiple name servers, and perhaps > hierarchical name servers. > 3. Name servers, as currently defined, may be difficult to extent to a dynamic > MPI, where processes come and go. To do so, we should have name servers that > are local to a communication universe (e.g., MPI_COMM_WORLD). The mechanism > provided to extend a communication universe will, ipso facto, also extend the > group of processes serviced by a name server. Quite generally, we should want > our standard "communication context" mechanism, namely communicators, to extend > to name servers as well. > 4. Various communicators may correspond to various > communication protocols, various protection mechanisms, etc. One more reason to > have the name server "local" to a communication context. This is fine so far. The name service was intended to be (in some sense) local to MPI_COMM_WORLD - i.e., it was not intended to provide intercomm services which bind processes within the group of MPI_COMM_WORLD to processes outwith that group. We could accommodate these considerations by adding a communicator argument to MPI_INTERCOMM_NAME, this communicator identifying the scope of the name. If there is no name service associated with the given scope then perhaps this is a non-fatal error condition. If we do this, then I do hope we mandate that there is a name service associated with MPI_COMM_WORLD (subject to possibilities of program configuration options perhaps?). So MPI_INTERCOMM_NAME would gain one argument and then look like

MPI_INTERCOMM_NAME(comm, scope, name, inter_request)
  IN   comm           local intra-communicator (handle)
  IN   scope          intra-communicator scope of name service
  IN   name           character string
  OUT  inter_request  handle for MPI_INTERCOMM_FINISH

> 5. It seems that the functions in 5.2.2 can be used to provide the functions of > a name server, as well, thus reducing the number of functions. It decreases the number of functions by one, i.e. epsilon which imho is negligible (as above). The detailed suggestion does have some abstract elegance, and it removes the function MPI_INTERCOMM_NAME while retaining a "name" service capability in which "name" is a non-negative integer. However, what does the proposal do for us? It removes the string name, which after all was more convenient for the user (loss), for what seems to be a very small decrease in the complexity of implementation (gain). Is it worth making this change?
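To make the comparison concrete, the two calling sequences under discussion would look roughly as follows. This is illustrative pseudo-code only, written against the proposed interfaces quoted in this thread (neither form is in the current draft as such); the scope argument, the string "Connect 01" and the tag value 99 are placeholders.

    /* Suggested scoped form: MPI_INTERCOMM_NAME gains a communicator
       argument identifying which name service resolves the string name. */
    MPI_INTERCOMM_NAME(myComm, MPI_COMM_WORLD, "Connect 01", &inter_request)
    MPI_INTERCOMM_FINISH(inter_request, &interComm)

    /* Marc's alternative: address an "intercom server" through the ordinary
       constructor, with the string name replaced by an integer tag;
       completion is then via MPI_INTERCOMM_TEST or the blocking
       MPI_INTERCOMM_MAKE. */
    MPI_INTERCOMM_START(myComm, 0, MPI_COMM_WORLD, MPI_SERVER, 99, &inter_request)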
Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Jan 5 11:12:41 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id LAA22271; Wed, 5 Jan 1994 11:12:40 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id LAA23845; Wed, 5 Jan 1994 11:13:05 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 5 Jan 1994 11:13:04 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from macaw.fsl.noaa.gov by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id LAA23838; Wed, 5 Jan 1994 11:13:02 -0500 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA13225; Wed, 5 Jan 94 09:12:53 MST Date: Wed, 5 Jan 94 09:12:53 MST From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9401051612.AA13225@macaw.fsl.noaa.gov> To: mpi-context@CS.UTK.EDU Subject: Re: suggestion for changes in 5.5 Lyndon writes: > One thought which comes to mind, MPI demands that the program creation > environment at least allow SPMD program creation, perhaps we could > make the inclusion of name server(s) a mandated run time _option_ in > program creation environment (possibly also a link time option). This > way if you dont want it you dont get it, and if you do want it you say > you want it and you get it. Would it be possible to specify this at run time in the call to MPI_INIT()? Maybe an additional argument like: C "I don't need a name service." CALL MPI_INIT(MPI_INIT_DEFAULT) ... C "I want the name service." CALL MPI_INIT(MPI_INIT_NAME_SERVICE) ... If this could work, it seems cleaner to me. > The detailed suggestion does have some abstract elegance, and it > removes the function MPI_INTERCOMM_NAME while retaining a "name" > service capability in which "name" is a non-negative integer. However, > what does the proposal do for us? It removes the string name which > after all was more convenient for the user (loss) for what seems to be > a very small decrease in the complexity of implementation (gain). Is it > worth making this change? I feel that the change is not worth it here. I'd like to see more discussion... Tom From owner-mpi-context@CS.UTK.EDU Wed Jan 5 11:23:15 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id LAA22351; Wed, 5 Jan 1994 11:23:15 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id LAA24643; Wed, 5 Jan 1994 11:23:38 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 5 Jan 1994 11:23:36 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from hub.meiko.co.uk by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id LAA24636; Wed, 5 Jan 1994 11:23:34 -0500 Received: from tycho.co.uk (tycho.meiko.co.uk) by hub.meiko.co.uk with SMTP id AA21859 (5.65c/IDA-1.4.4 for mpi-context@CS.UTK.EDU); Wed, 5 Jan 1994 16:23:31 GMT Received: by tycho.co.uk (5.0/SMI-SVR4) id AA25295; Wed, 5 Jan 1994 16:20:46 +0000 Date: Wed, 5 Jan 1994 16:20:46 +0000 From: jim@meiko.co.uk (James Cownie) Message-Id: <9401051620.AA25295@tycho.co.uk> To: mpi-context@CS.UTK.EDU In-Reply-To: <9401051612.AA13225@macaw.fsl.noaa.gov> (hender@macaw.fsl.noaa.gov) Subject: Re: suggestion for changes in 5.5 Content-Length: 924 > Would it be possible to specify this at run time in the call to MPI_INIT()? This is clearly possible. 
It has some cost, however, in that all the name service code has to be present whether or not it is going to be used. I don't have a problem with this on my machine, it can all go in a dynamic shared library and appear as needed, so it takes no space until it is used. However I think there may be other people who are concerned about executable size and don't have such a rich run time environment. In this case all that code has to be loaded (onto every node too if it's really a SPMD code !), and it begins to seem rather costly... -- Jim James Cownie Meiko Limited Meiko Inc. 650 Aztec West Reservoir Place Bristol BS12 4SD 1601 Trapelo Road England Waltham MA 02154 Phone : +44 454 616171 +1 617 890 7676 FAX : +44 454 618188 +1 617 890 5042 E-Mail: jim@meiko.co.uk or jim@meiko.com From owner-mpi-context@CS.UTK.EDU Wed Jan 5 17:05:41 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id RAA25245; Wed, 5 Jan 1994 17:05:41 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id RAA21291; Wed, 5 Jan 1994 17:05:49 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 5 Jan 1994 17:05:47 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from Phoenix.ERC.MsState.Edu by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id RAA21284; Wed, 5 Jan 1994 17:05:46 -0500 Received: from Athena.ERC.MsState.Edu by Phoenix.ERC.MsState.Edu (4.1/6.0s-FWP); id AA26604; Wed, 5 Jan 94 16:10:28 CST From: "Nathan E. Doss" Received: by Athena.ERC.MsState.Edu (4.1/6.0c-FWP); id AA13770; Wed, 5 Jan 94 16:09:00 CST Message-Id: <9401052209.AA13770@Athena.ERC.MsState.Edu> Subject: Comments on inter-communicators To: mpi-context@CS.UTK.EDU Date: Wed, 5 Jan 1994 16:08:59 -0600 (CST) X-Mailer: ELM [version 2.4 PL17] Mime-Version: 1.0 Content-Type: text/plain; charset=US-ASCII Content-Transfer-Encoding: 7bit Content-Length: 5508 >Date: Mon, 3 Jan 94 22:23:19 EST >From: "Marc Snir" >To: MPI-CONTEXT@CS.UTK.EDU >Status: RO > >Suggestion for changes in 5.5 (Intercommunicators) > >5.5.2. > >[deleted] > >I suggest the following changes: > >1. Change the semantics of MPI_INTERCOMM_FINISH to be >nonblocking. Thus, replace MPI_INTERCOMM_FINISH with >MPI_INTERCOMM_TEST(inter_request, flag, newcomm). Returns flag=true if the >intercommunicator creation has completed, in which case newcomm is a handle to >the new intercommunicator. MPI_INTERCOMM_START is used, as before, to start >the process. o When issued at the group leaders, MPI_INTERCOMM_START behaves similarly to an MPI_ISEND call performed on the peer communicator. MPI_INTERCOMM_FINISH then behaves like MPI_RECV. MPI_INTERCOMM_TEST parallels the MPI_TEST function. I would be in favor of keeping MPI_INTERCOMM_FINISH just as MPI_RECV was kept when MPI_TEST was added. o If the communicators are disjoint, then deadlock should not occur when using MPI_INTERCOMM_FINISH or MPI_INTERCOMM_TEST. If they are not disjoint, then every process in the intersection must call the two MPI_INTERCOMM_START calls first, followed by the two MPI_INTERCOMM_FINISH functions (same order). Since the MPI_INTERCOMM_TEST function implies a synchronization within the local communicator, the following code would be illegal in the intersection processes since the MPI_INTERCOMM_TEST call to complete first may not be the same for all processes in the intersection. 
MPI_INTERCOMM_START (for comm A); MPI_INTERCOMM_START (for comm B); status_A = status_B = false; while ( !status_A || !status_B ) { MPI_INTERCOMM_TEST (for comm A); MPI_INTERCOMM_TEST (for comm B); } A correct calling order would have to guarantee that the MPI_INTERCOMM_TEST calls finish in the same order in all intersecting processes. o MPI_INTERCOMM_TEST must always block when called by non-leaders since they will not know when the MPI_INTERCOMM_TEST called by the leader has completed. This can be avoided by having the leader send a message to the non-leaders notifying them that the remote communicator has made contact. >2. Allow the "remote_leader" argument in the call MPI_INTERCOMM_START to be >MPI_ANY_SOURCE, even at the local_leader, where it is significant. Thus, only >one of the two pairing leaders need know the identity of the other (the call >is OK if both leaders provide the identity of the other, and erroneous if >neither do). o Assume we want to create an inter-communicator between a server comm A (MPI_ANY_SOURCE is given to MPI_INTERCOMM_START) and a client comm B. If there is an intersection between A and B, then the two starts must be called first followed by the finish on A, then the finish on B. The finish on the server comm must always be called before the finish on the client comm or deadlock may (will) occur. o A new function should possibly be used to set up a server connection instead of allowing MPI_INTERCOMM_START to accept MPI_ANY_SOURCE since accepting MPI_ANY_SOURCE changes the communication patterns in both MPI_INTERCOMM_START and MPI_INTERCOMM_FINISH. o If MPI_ANY_SOURCE is given, the most efficient implementation may be for MPI_INTERCOMM_START to do nothing. Instead of allowing MPI_INTERCOMM_START to accept MPI_ANY_SOURCE, one alternative would be for there to be an MPI_INTERCOMM_SERVER_MAKE. The client process would call MPI_INTERCOMM_START and MPI_INTERCOMM_FINISH as normal while the server process would simply call MPI_INTERCOMM_SERVER_MAKE. To avoid deadlock in a process in the intersection the client MPI_INTERCOMM_START must be called first followed by the server's MPI_INTERCOMM_SERVER_MAKE and then the client's MPI_INTERCOMM_FINISH. >3. For good measure, we can add a blocking call: >MPI_INTERCOMM_MAKE(local_comm, local_leader, peer_comm, remote_leader, tag, > intercomm), >which blocks until the intercommunicator is created. o This call will deadlock if there are processes in the intersection of the two communicators to be connected. In the intersecting processes, the first MPI_INTERCOMM_MAKE cannot complete until the second MPI_INTERCOMM_MAKE is called. The second MPI_INTERCOMM_MAKE is not called until the first has completed. > >[deleted] > ----------------------------------------------------------------------------- >Date: Tue, 4 Jan 94 10:28:12 MST >From: hender@macaw.fsl.noaa.gov (Tom Henderson) >Message-Id: <9401041728.AA10247@macaw.fsl.noaa.gov> >To: mpi-context@CS.UTK.EDU >Subject: Re: Suggestion for changes in 5.5 (Intercommunicators) >Status: RO > >Marc, > >Your suggestions for changing section 5.5.2 (MPI_INTERCOMM_TEST(), etc.) make >sense to me. I assume that a "high quality" implementation will be able to >detect the erroneous case where "remote_leader = MPI_ANY_SOURCE" for BOTH >leaders in calls to MPI_INTERCOMM_START() or MPI_INTERCOMM_MAKE() and return >an appropriate error status to the user... Is there any reason why this would >not be possible? 
o The implementation probably could not return an error when two MPI_INTERCOM_START's had used "remote_leader = MPI_ANY_SOURCE" since it won't know if the calls are really meant for each other. >[deleted] -- Nathan Doss doss@ERC.MsState.Edu From owner-mpi-context@CS.UTK.EDU Wed Jan 5 17:37:14 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id RAA25532; Wed, 5 Jan 1994 17:37:14 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id RAA23511; Wed, 5 Jan 1994 17:37:41 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 5 Jan 1994 17:37:39 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from macaw.fsl.noaa.gov by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id RAA23503; Wed, 5 Jan 1994 17:37:38 -0500 Received: by macaw.fsl.noaa.gov (4.1/SMI-4.1) id AA13903; Wed, 5 Jan 94 15:37:30 MST Date: Wed, 5 Jan 94 15:37:30 MST From: hender@macaw.fsl.noaa.gov (Tom Henderson) Message-Id: <9401052237.AA13903@macaw.fsl.noaa.gov> To: mpi-context@CS.UTK.EDU Subject: Re: Comments on inter-communicators Cc: doss@erc.msstate.edu > >... I assume that a "high quality" implementation will be able to > >detect the erroneous case where "remote_leader = MPI_ANY_SOURCE" for BOTH > >leaders in calls to MPI_INTERCOMM_START() or MPI_INTERCOMM_MAKE() and return > >an appropriate error status to the user... Is there any reason why this would > >not be possible? > > o The implementation probably could not return an error when two > MPI_INTERCOM_START's had used "remote_leader = MPI_ANY_SOURCE" since it > won't know if the calls are really meant for each other. > > Nathan Doss doss@ERC.MsState.Edu > Nathan, I think you're right here. Unfortunately, the typical MPI response to this kind of situation has been "...the behavior of an erroneous program is undefined...". I think we're giving the user more rope here (as usual). Tom From owner-mpi-context@CS.UTK.EDU Thu Jan 6 10:52:59 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id KAA00512; Thu, 6 Jan 1994 10:52:59 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id KAA04771; Thu, 6 Jan 1994 10:52:56 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 6 Jan 1994 10:52:54 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from watson.ibm.com by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id KAA04763; Thu, 6 Jan 1994 10:52:53 -0500 Received: from WATSON by watson.ibm.com (IBM VM SMTP V2R3) with BSMTP id 3031; Thu, 06 Jan 94 10:52:52 EST Received: from YKTVMH by watson.vnet.ibm.com with "VAGENT.V1.0" id 0871; Thu, 6 Jan 1994 10:52:39 EST Received: from snir.watson.ibm.com by yktvmh.watson.ibm.com (IBM VM SMTP V2R3) with TCP; Thu, 06 Jan 94 10:52:38 EST Received: by snir.watson.ibm.com (AIX 3.2/UCB 5.64/930311) id AA20409; Thu, 6 Jan 1994 10:52:49 -0500 From: snir@watson.ibm.com (Marc Snir) Message-Id: <9401061552.AA20409@snir.watson.ibm.com> To: "Nathan E. Doss" Cc: mpi-context@CS.UTK.EDU Subject: Re: Comments on inter-communicators In-Reply-To: (Your message of Wed, 05 Jan 94 16:08:59 CST.) Date: Thu, 06 Jan 94 10:52:39 EST Date: Thu, 06 Jan 94 10:52:48 -0500 If two servers decide to start listening for requests and nobody ever ask for their services, then they wiat forever (unless one has a timeout). This is the equivalent of two groups calling MPI_INTERCOMM_START with a dontcare peer leader. 
From owner-mpi-context@CS.UTK.EDU Wed Jan 12 12:29:07 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id MAA16601; Wed, 12 Jan 1994 12:29:07 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA02470; Wed, 12 Jan 1994 12:29:51 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 12 Jan 1994 12:29:49 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from epcc.ed.ac.uk by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA02458; Wed, 12 Jan 1994 12:29:47 -0500 Date: Wed, 12 Jan 94 17:29:41 GMT Message-Id: <18863.9401121729@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: mpi-context; MPI_ATTR_KEY_NULL proposal To: mpi-context@CS.UTK.EDU Reply-To: lyndon@epcc.ed.ac.uk Dear All I like to propose a very small addition to the cache section of our chapter. Colleague here just wrote the cache calls, and suggested this. Proposal -------- Introduce the constant MPI_ATTR_KEY_NULL defined as an attriubte key value which is legal, cannot be returned by MPI_ATTR_GET_KEY, and against which no attribute may be stored. MPI_ATTR_GET_VALUE(comm, MPI_ATTR_KEY_NULL, attribute_val, found) will always return found = false (provided comm is valid). Rationale --------- This simplifies writing libraries using the cache capability, allows for better software practice by easier detection of non-initialised library and encapsulation of library initialisation code where this is shared across multiple library procedures. With this proposal we can write code like the below. before we would simply have to have the initialisation code in the top of every lib routine or call the lib init routine every time a lib routine is called. The addition to the drafts costs us next to nothing. static void lib_init(MPI_Comm); static int key = MPI_ATTR_KEY_NULL; void lib_call(MPI_Comm usr_comm, ...) { int *found; state *state; MPI_Comm lib_comm; MPI_Attr_get_value(usr_comm, key, &state, &found); if (! found) lib_init(usr_comm); lib_comm = state->comm; } lib_init(MPI_Comm usr_comm) { lib_state *state; if (key == MPI_ATTR_KEY_NULL) MPI_Attr_get_key(lib_copy, lib_delete, &key, NULL); state = lib_alloc() MPI_Comm_dup(usr_comm, &state->comm) MPI_Attr_put_value(usr_comm, key, state) } Comments? Best Wishes Lyndon /--------------------------------------------------------\ e||) | Lyndon J Clarke Edinburgh Parallel Computing Centre | e||) c||c | Tel: 031 650 5021 Email: lyndon@epcc.edinburgh.ac.uk | c||c \--------------------------------------------------------/ From owner-mpi-context@CS.UTK.EDU Wed Jan 12 12:38:53 1994 Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id MAA16680; Wed, 12 Jan 1994 12:38:53 -0500 Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA03342; Wed, 12 Jan 1994 12:39:35 -0500 X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 12 Jan 1994 12:39:34 EST Errors-to: owner-mpi-context@CS.UTK.EDU Received: from epcc.ed.ac.uk by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA03333; Wed, 12 Jan 1994 12:39:21 -0500 Date: Wed, 12 Jan 94 17:10:35 GMT Message-Id: <18849.9401121710@subnode.epcc.ed.ac.uk> From: L J Clarke Subject: problems with MPI_COMM_DUP and MPI_COMM_FREE and intercommunicator To: mpi-context@CS.UTK.EDU Reply-To: lyndon@epcc.ed.ac.uk Dear All In the draft MPI_INTERCOMM_MERGE is a collective operation in the union of the two groups associated with the intercommunicator. The draft also says that an intercommunicator can be created with MPI_COMM_DUP, and is destroyed with MPI_COMM_FREE. 
It is not stated that these are collective operations in the union of the two
groups associated with the intercommunicator, although I believe that these
are the assumed semantics.

Let us consider two groups G and H which at first make an intercommunicator
with MPI_INTERCOMM_START and MPI_INTERCOMM_FINISH.  The following pseudocode,
executed by all members of G and H, works whether the intersection of G and H
is empty or non-empty.

    IF (IN G) MPI_INTERCOMM_START(Gcomm, 0, MPI_COMM_WORLD, 1, 99, &Greq)
    IF (IN H) MPI_INTERCOMM_START(Hcomm, 0, MPI_COMM_WORLD, 0, 99, &Hreq)
    IF (IN G) MPI_INTERCOMM_FINISH(Greq, &GHcomm)
    IF (IN H) MPI_INTERCOMM_FINISH(Hreq, &HGcomm)

The important thing is that where G and H intersect, the processes in the
intersection make the calls all in the same order, as Nathan has pointed out.
Notice also that processes in the intersection end up with two
intercommunicators, GHcomm and HGcomm.

Now imagine that G and H want to create another intercommunicator by use of
MPI_COMM_DUP.

    IF (IN G) MPI_COMM_DUP(GHcomm, &GHcomm2)
    IF (IN H) MPI_COMM_DUP(HGcomm, &HGcomm2)

This looks extraordinarily bad.

Imagine that MPI_COMM_DUP is a collective operation in the union of G;
presumably the following occurs.  Processes in G and not in H complete the
textually first MPI_COMM_DUP.  Processes in H and not in G complete the
textually second MPI_COMM_DUP.  Processes in G and in H complete the textually
first MPI_COMM_DUP and are then stuck in the second MPI_COMM_DUP, i.e.
deadlock.

Imagine that MPI_COMM_DUP is defined such that none of the calls complete
until the calls for all instances of the GHcomm and HGcomm communicators are
made - it is bi-collective in G and H.  Presumably then the processes in the
intersection cannot make the textually second MPI_COMM_DUP since they are
stuck in the textually first MPI_COMM_DUP, waiting for the calls on HGcomm,
i.e. deadlock again.

The same considerations apply to MPI_COMM_FREE with an intercommunicator.
These are just the same problems as a combined MPI_INTERCOMM_START and
MPI_INTERCOMM_FINISH call MPI_INTERCOMM_MAKE, as Nathan has also pointed out.

So we have a bit of a problem here (okay, this is at least partly of my
making).  I suggest that we might overcome the problem with 4 changes as
follows ...

1) Change the name of MPI_INTERCOMM_START to MPI_INTERCOMM_MAKE.  Trivial
   change in the draft, for consistency with below.

2) Disallow MPI_COMM_DUP and MPI_COMM_FREE on intercommunicators.  Just means
   removing some (already vague) text from the draft.

3) Introduce MPI_INTERCOMM_FREE, which is a free-start operation and must be
   finished with MPI_INTERCOMM_FINISH (c.f. MPI_INTERCOMM_MAKE).  The
   implementation is simple and structurally similar to MPI_INTERCOMM_MAKE.

   MPI_INTERCOMM_FREE :-
       mark intercommunicator as being freed
       barrier in local group (can be omitted if no error tracking)
       zeroth member of local group Isends "I wish free" to zeroth member of
           remote group (this can use mpi private hidden intercommunicator
           contexts)
       zeroth member of group Irecvs from zeroth member of remote group
       done

   MPI_INTERCOMM_FINISH (for free) :-
       zeroth member of local group waits for isend and irecv to complete
       barrier in local group (can be omitted if no error tracking)
       deallocate contexts and other clean up
       done

4) Introduce MPI_INTERCOMM_DUP, which is a dup-start operation and must be
   finished with MPI_INTERCOMM_FINISH (c.f. MPI_INTERCOMM_MAKE).  The
   implementation is simple and structurally similar to MPI_INTERCOMM_MAKE.
   MPI_INTERCOMM_DUP :-
       allocate context(s) in local group (may be a barrier)
       zeroth member of local group Isends "I wish dup" and contexts to zeroth
           member of remote group (mpi private contexts as per free)
       zeroth member of group Irecvs from zeroth member of remote group
       done

   MPI_INTERCOMM_FINISH (for dup) :-
       zeroth member of local group waits for isend and irecv to complete
       bcast remote contexts in local group (may be a barrier)
       munge together groups and context to make new intercommunicator
       done

Finally there is a related issue with MPI_INTERCOMM_MERGE.  It looks like
processes in the intersection are expected to call twice.  I'm sure this is
not what Marc intended, since the above deadlock problem arises.  In order to
ensure that there are calls which cover all of the endpoints for the
intercommunicator, I'd suggest the following change to MPI_INTERCOMM_MERGE.

MPI_INTERCOMM_MERGE(gauche, droit, nouveau) - args?!  Well, okay, we are in
Nice in a few days' time :-)

    IN  gauche   an intercommunicator or MPI_COMM_NULL
    IN  droit    an intercommunicator or MPI_COMM_NULL
    OUT nouveau  the new intracommunicator

semantics by example ...

    IF (in G and not in H) MPI_INTERCOMM_MERGE(GHcomm, MPI_COMM_NULL, &new)
    IF (in H and not in G) MPI_INTERCOMM_MERGE(MPI_COMM_NULL, HGcomm, &new)
    IF (in H and in G)     MPI_INTERCOMM_MERGE(GHcomm, HGcomm, &new)

This makes a new intracommunicator from the intercommunicator pair GHcomm and
HGcomm (of the above example), where the intercommunicator in G must take the
same arg position for all members of G, and in H for all members of H, etc.
So if there is an intersection, both intercommunicators are given, and they
must be in the same order over the intersection, which determines the order
elsewhere.

In the current definition a key is used to order the two groups for the
merge-union operation.  This is now achieved by argument ordering.  It's the
same thing really.  I could have suggested a similar intercommunicator
argument pair for the _DUP and _FREE, but it seemed rather less natural in
that case.

Comments?

Best Wishes
Lyndon

/--------------------------------------------------------\
 e||) | Lyndon J Clarke   Edinburgh Parallel Computing Centre  |  e||)
 c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  |  c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Wed Jan 12 13:47:05 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id NAA17199; Wed, 12 Jan 1994 13:47:04 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id NAA08113; Wed, 12 Jan 1994 13:47:50 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Wed, 12 Jan 1994 13:47:49 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id NAA08097; Wed, 12 Jan 1994 13:47:41 -0500
Received: from snacker.pnl.gov. (130.20.186.18) by pnlg.pnl.gov; Wed, 12 Jan 94 10:40 PST
Received: by snacker.pnl.gov. (4.1/SMI-4.1) id AA14552; Wed, 12 Jan 94 10:40:41 PST
Date: Wed, 12 Jan 94 10:40:41 PST
From: Rik Littlefield
Subject: Re: mpi-context; MPI_ATTR_KEY_NULL proposal
To: lyndon@epcc.ed.ac.uk, mpi-context@CS.UTK.EDU
Cc: rj_littlefield@pnlg.pnl.gov
Message-id: <9401121840.AA14552@snacker.pnl.gov.>
X-Envelope-to: mpi-context@CS.UTK.EDU

Lyndon writes:

> Proposal
> --------
>
> Introduce the constant MPI_ATTR_KEY_NULL, defined as an attribute key
> value which is legal, cannot be returned by MPI_ATTR_GET_KEY, and
> against which no attribute may be stored.
>
> MPI_ATTR_GET_VALUE(comm, MPI_ATTR_KEY_NULL, attribute_val, found)
>
> will always return found = false (provided comm is valid).
>
> Rationale
> ---------
>
> This simplifies writing libraries using the cache capability, allows
> for better software practice by easier detection of a non-initialised
> library, and encapsulation of library initialisation code where this is
> shared across multiple library procedures.
>
> With this proposal we can write code like the below.  Before, we would
> simply have to have the initialisation code at the top of every lib
> routine, or call the lib init routine every time a lib routine is called.
> The addition to the drafts costs us next to nothing.

Cheap, works good, lasts a long time.  Do it.

--Rik

----------------------------------------------------------------------
rj_littlefield@pnl.gov  (alias 'd39135')
Rik Littlefield                  Tel: 509-375-3927
Pacific Northwest Lab, MS K1-87  Fax: 509-375-6631
P.O.Box 999, Richland, WA 99352

From owner-mpi-context@CS.UTK.EDU Fri Jan 14 12:07:18 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id MAA04071; Fri, 14 Jan 1994 12:07:18 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA00322; Fri, 14 Jan 1994 12:07:50 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 14 Jan 1994 12:07:48 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from hub by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA00304; Fri, 14 Jan 1994 12:07:42 -0500
Received: from tycho.co.uk (tycho.meiko.co.uk) by hub with SMTP id AA00739 (5.65c/IDA-1.4.4 for mpi-context@CS.UTK.EDU); Fri, 14 Jan 1994 17:07:38 GMT
Received: by tycho.co.uk (5.0/SMI-SVR4) id AA05163; Fri, 14 Jan 1994 17:04:31 +0000
Date: Fri, 14 Jan 1994 17:04:31 +0000
From: jim@meiko.co.uk (James Cownie)
Message-Id: <9401141704.AA05163@tycho.co.uk>
To: lyndon@epcc.ed.ac.uk
Cc: mpi-context@CS.UTK.EDU
In-Reply-To: <18863.9401121729@subnode.epcc.ed.ac.uk> (message from L J Clarke on Wed, 12 Jan 94 17:29:41 GMT)
Subject: Re: mpi-context; MPI_ATTR_KEY_NULL proposal
Content-Length: 2377

You propose to add a pre-defined "NULL" value for the key, which is never
found by get_attr_value.

I suppose this is OK, but it doesn't really seem necessary.  I could write
your code like this (at the cost of one extra static variable).

    static void lib_init(MPI_Comm);
    static int initialised = FALSE;
    static int key = MPI_ATTR_KEY_NULL;

    void lib_call(MPI_Comm usr_comm, ...)
    {
        int *found;
        state *state;
        MPI_Comm lib_comm;

        if (! initialised)
            lib_init(usr_comm);

        MPI_Attr_get_value(usr_comm, key, &state, &found); /* We KNOW this must succeed */

        lib_comm = state->comm;  /* And use the communicator... */
    }

    static void lib_init(MPI_Comm usr_comm)
    {
        lib_state *state;

        if (initialised)
            return;  /* Or abort ? */
        initialised = TRUE;

        MPI_Attr_get_key(lib_copy, lib_delete, &key, NULL);
        state = lib_alloc();
        MPI_Comm_dup(usr_comm, &state->comm);
        MPI_Attr_put_value(usr_comm, key, state);
    }

I find this clearer, and it requires no changes to what we have now.

Alternatively we could define a value MPI_ATTR_KEY_NULL as you require (in
fact it's probably sensible to do this).  But your code still has a BUG !

    void lib_call(MPI_Comm usr_comm, ...)
    {
        int *found;
        state *state;
        MPI_Comm lib_comm;

        MPI_Attr_get_value(usr_comm, key, &state, &found);
        if (! found)
            lib_init(usr_comm);

        /* At this point state will not be set if we executed the init path */
        lib_comm = state->comm;
    }

The correct way to write it is explicitly to test the key,

    void lib_call(MPI_Comm usr_comm, ...)
    {
        int *found;
        state *state;
        MPI_Comm lib_comm;

        if (key == MPI_ATTR_KEY_NULL)
            lib_init(usr_comm);

        MPI_Attr_get_value(usr_comm, key, &state, &found);

        /* At this point state will not be set if we executed the init path */
        lib_comm = state->comm;
    }

I think I was objecting to your example (and rationale, which I don't
believe), rather than the enhancement.  What it buys is being able to test
for initialisation of a key without having to hold that bit of information
elsewhere, that's all.  But that's probably worthwhile !

-- Jim

James Cownie
Meiko Limited              Meiko Inc.
650 Aztec West             Reservoir Place
Bristol BS12 4SD           1601 Trapelo Road
England                    Waltham MA 02154
Phone : +44 454 616171     +1 617 890 7676
FAX   : +44 454 618188     +1 617 890 5042
E-Mail: jim@meiko.co.uk or jim@meiko.com

From owner-mpi-context@CS.UTK.EDU Fri Jan 14 12:25:39 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id MAA04182; Fri, 14 Jan 1994 12:25:38 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA01872; Fri, 14 Jan 1994 12:26:05 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Fri, 14 Jan 1994 12:26:04 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from epcc.ed.ac.uk by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA01861; Fri, 14 Jan 1994 12:25:59 -0500
Date: Fri, 14 Jan 94 17:26:01 GMT
Message-Id: <20799.9401141726@subnode.epcc.ed.ac.uk>
From: L J Clarke
Subject: Re: mpi-context; MPI_ATTR_KEY_NULL proposal
To: jim@meiko.co.uk (James Cownie)
In-Reply-To: James Cownie's message of Fri, 14 Jan 1994 17:04:31 +0000
Reply-To: lyndon@epcc.ed.ac.uk
Cc: mpi-context@CS.UTK.EDU

> You propose to add a pre-defined "NULL" value for the key, which is
> never found by get_attr_value.
>
> I suppose this is OK, but it doesn't really seem necessary.
> I could write your code like this (at the cost of one extra static
> variable).

There was a small bug in the piece of code I sent.  I thought it was so
obvious I didn't want to waste bandwidth with the correction.  At least,
modulo the bug fix, the example I sent works.  Your version does NOT.  The
logic of the thing is wrong.

In your example, the first time "lib_call" is ever made at all, "initialised"
gets set to TRUE.  On a later call with the same user communicator as
argument this is fine.  On a later call which is the first with a different
user-level communicator, the example fails, because the code in lib_init()
to dup the user communicator and put the attribute is not executed
("initialised" has the value TRUE).

There are nevertheless ways of transforming the (bug-corrected) code which
also remove the explicit initialisation code from the library call and do
not need this MPI_ATTR_KEY_NULL; it's fair comment.
/--------------------------------------------------------\
 e||) | Lyndon J Clarke   Edinburgh Parallel Computing Centre  |  e||)
 c||c | Tel: 031 650 5021  Email: lyndon@epcc.edinburgh.ac.uk  |  c||c
\--------------------------------------------------------/

From owner-mpi-context@CS.UTK.EDU Thu Jan 20 05:59:09 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id FAA10609; Thu, 20 Jan 1994 05:59:08 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id FAA27901; Thu, 20 Jan 1994 05:59:59 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 20 Jan 1994 05:59:58 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from hub by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id FAA27894; Thu, 20 Jan 1994 05:59:55 -0500
Received: from tycho.co.uk (tycho.meiko.co.uk) by hub with SMTP id AA19535 (5.65c/IDA-1.4.4 for mpi-context@CS.UTK.EDU); Thu, 20 Jan 1994 10:59:53 GMT
Received: by tycho.co.uk (5.0/SMI-SVR4) id AA01029; Thu, 20 Jan 1994 10:56:09 +0000
Date: Thu, 20 Jan 1994 10:56:09 +0000
From: jim@meiko.co.uk (James Cownie)
Message-Id: <9401201056.AA01029@tycho.co.uk>
To: lyndon@epcc.ed.ac.uk
Cc: mpi-context@CS.UTK.EDU
In-Reply-To: <20799.9401141726@subnode.epcc.ed.ac.uk> (message from L J Clarke on Fri, 14 Jan 94 17:26:01 GMT)
Subject: Re: mpi-context; MPI_ATTR_KEY_NULL proposal
Content-Length: 1107

Of course you are correct; there are two distinct levels of initialisation
required, which I confused (maybe because of your bug).

1) Initialisation of the library key.  Required once for the whole library.

2) Initialisation of the library-related things associated with a particular
   communicator.  Required once each time a new communicator is passed by
   the user.

I still fail to see the benefit of looking up the undefined key value, since
this seems to me to confuse the two issues (I might even be inclined to say
that doing so is an error [cf *NULL]).  I'd definitely write it like this:

    if (lib_key == MPI_NULL_KEY)
        /* Library completely uninitialised, get the key */

    lookup(communicator, key, &found);
    if (!found)
        /* It's a communicator we haven't seen, so init our stuff */

    do everything else.

-- Jim

James Cownie
Meiko Limited              Meiko Inc.
650 Aztec West             Reservoir Place
Bristol BS12 4SD           1601 Trapelo Road
England                    Waltham MA 02154
Phone : +44 454 616171     +1 617 890 7676
FAX   : +44 454 618188     +1 617 890 5042
E-Mail: jim@meiko.co.uk or jim@meiko.com
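Consolidating the pattern this thread converges on, here is an editorial sketch (not part of the correspondence): two-level initialisation, with a sentinel key value guarding one-time library setup and the attribute lookup guarding per-communicator state.  It is written with the attribute-caching spellings of the final MPI-1 C binding (MPI_Keyval_create, MPI_Attr_get, MPI_Attr_put, MPI_KEYVAL_INVALID) as an assumed stand-in for the draft's MPI_ATTR_GET_KEY / MPI_ATTR_GET_VALUE / MPI_ATTR_PUT_VALUE / MPI_ATTR_KEY_NULL; the library-side names (lib_state, lib_call, and so on) are illustrative only.

    #include <mpi.h>
    #include <stdlib.h>

    typedef struct {
        MPI_Comm comm;               /* private duplicate of the user's communicator */
    } lib_state;

    static int lib_key = MPI_KEYVAL_INVALID;   /* level 1: one key for the whole library */

    static lib_state *lib_init(MPI_Comm usr_comm)
    {
        lib_state *state;

        if (lib_key == MPI_KEYVAL_INVALID)     /* library never initialised: create the key */
            MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, &lib_key, NULL);

        /* level 2: per-communicator state, cached on the user's communicator */
        state = malloc(sizeof(*state));
        MPI_Comm_dup(usr_comm, &state->comm);
        MPI_Attr_put(usr_comm, lib_key, state);
        return state;
    }

    void lib_call(MPI_Comm usr_comm /* , ... */)
    {
        lib_state *state;
        int found = 0;

        if (lib_key == MPI_KEYVAL_INVALID)
            state = lib_init(usr_comm);                    /* first call ever        */
        else {
            MPI_Attr_get(usr_comm, lib_key, &state, &found);
            if (!found)
                state = lib_init(usr_comm);                /* new user communicator  */
        }

        /* ... communicate on state->comm, never directly on usr_comm ... */
    }

Having lib_init return the freshly created state avoids the bug discussed above (state left unset on the initialisation path), and a second, different user communicator still gets its own private duplicate because the attribute lookup, not a global flag, decides whether per-communicator initialisation is needed.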
From owner-mpi-context@CS.UTK.EDU Thu Feb 3 16:52:48 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id QAA16347; Thu, 3 Feb 1994 16:52:47 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id QAA21903; Thu, 3 Feb 1994 16:53:04 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Thu, 3 Feb 1994 16:53:03 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from pnlg.pnl.gov by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id QAA21859; Thu, 3 Feb 1994 16:52:10 -0500
Received: from snacker.pnl.gov. (130.20.186.18) by pnlg.pnl.gov; Thu, 3 Feb 94 13:48 PST
Received: by snacker.pnl.gov. (4.1/SMI-4.1) id AA06800; Thu, 3 Feb 94 13:48:28 PST
Date: Thu, 3 Feb 94 13:48:28 PST
From: Rik Littlefield
Subject: MPI meeting
To: mpi-context@CS.UTK.EDU, mpi-core@CS.UTK.EDU
Cc: rj_littlefield@pnlg.pnl.gov
Message-id: <9402032148.AA06800@snacker.pnl.gov.>
X-Envelope-to: mpi-context@cs.utk.edu, mpi-core@cs.utk.edu

I will not be able to attend the MPI meeting in Knoxville, due to a program
review by my funding agency on the same days.  I will attempt to review
materials that come out beforehand, so please consider this a plea to get
that stuff out early.

Thanks much,
--Rik

----------------------------------------------------------------------
rj_littlefield@pnl.gov  (alias 'd39135')
Rik Littlefield                  Tel: 509-375-3927
Pacific Northwest Lab, MS K1-87  Fax: 509-375-6631
P.O.Box 999, Richland, WA 99352

From owner-mpi-context@CS.UTK.EDU Tue Feb 15 18:38:18 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id SAA01765; Tue, 15 Feb 1994 18:38:18 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id SAA20254; Tue, 15 Feb 1994 18:37:55 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 15 Feb 1994 18:37:53 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id SAA20245; Tue, 15 Feb 1994 18:37:51 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA11495; Tue, 15 Feb 94 17:35:57 CST
Date: Tue, 15 Feb 94 17:35:57 CST
From: Tony Skjellum
Message-Id: <9402152335.AA11495@Aurora.CS.MsState.Edu>
To: snir@watson.ibm.com
Subject: Chapter revisions
Cc: mpi-context@CS.UTK.EDU

We understand.  So, the copy I got from Lyndon is from you.  I did not recall
getting it from you; that's why I asked.  I will edit this week, and put it
on the reflector, say by Saturday morning.

-Tony

----- Begin Included Message -----

From snir@watson.ibm.com Tue Feb 15 17:34:03 1994
From: snir@watson.ibm.com (Marc Snir)
To: Tony Skjellum
Subject: Re: Question
Date: Tue, 15 Feb 94 18:35:52 EST
Content-Length: 230
Date: Tue, 15 Feb 94 18:35:50 -0500

No, I did not send it on the reflector -- expected you to do it as the
"official chapter editor".  It is important to put it there this week asap,
with whatever additions you want to make.

----- End Included Message -----

From owner-mpi-context@CS.UTK.EDU Sun Feb 20 19:40:21 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id TAA05789; Sun, 20 Feb 1994 19:40:21 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id TAA11928; Sun, 20 Feb 1994 19:40:08 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Sun, 20 Feb 1994 19:40:07 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id TAA11908; Sun, 20 Feb 1994 19:39:52 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA15251; Sun, 20 Feb 94 18:37:15 CST
Date: Sun, 20 Feb 94 18:37:15 CST
From: Tony Skjellum
Message-Id: <9402210037.AA15251@Aurora.CS.MsState.Edu>
To: mpi-pt2pt@CS.UTK.EDU, SNIR@watson.ibm.com, mpi-context@CS.UTK.EDU
Subject: "Major changes" at this time
Cc: frankeh@watson.ibm.com

Marc, I think it is too late to make such big changes; there is not even a
detailed proposal (though that you could surely change by Wednesday!).  As
you have taught me over the past year, by now we should only consider
air-tight recommendations backed by air-tight proposals, whereas during the
readings we started out requiring carefully thought-out and worked-out
proposals.  I am concerned that we will overlook something major.

By analogy, within the context chapter we will be restricting changes to
fixing the intercommunication section (except for minor details elsewhere),
and looking hard at some of the proposals (some of the ideas worked out in
France [eg, ideas discussed between you and Lyndon on Intercommunication]
have serious problems).
We are sticking with well-formed proposals, and trying to augment incomplete
proposals (or defeat them in advance by finding serious problems).  I also
think that we should not try to bring back ideas that were defeated (perhaps
multiple times, or in the subcommittee process) at this time.  Otherwise, we
stand a good chance of not finishing.

In short, I think we are just doing incremental fixes at this meeting.  Can't
we leave other things for the MPI follow-on?

-Tony

PS Your idea does sound reasonable, but I think that the four modes need to
be clarified better, so that the design appears more "orthogonal" to users.
Can you clarify what the total send/receive functionality would look like,
etc.

----- Begin Included Message -----

From owner-mpi-pt2pt@CS.UTK.EDU Sun Feb 20 14:21:03 1994
X-Resent-To: mpi-pt2pt@CS.UTK.EDU ; Sun, 20 Feb 1994 15:20:04 EST
Date: Sun, 20 Feb 94 14:54:53 EST
From: "Marc Snir ((914) 945-3204 (862)"
To: mpi-pt2pt@CS.UTK.EDU
Cc: frankeh@watson.ibm.com
Subject: buffering in point to point
Reply-To: SNIR@watson.ibm.com
Content-Length: 1802

Among the comments we got there were many who insisted that buffering is very
important, and some that wanted CMMD style communication.  In the
time-honored MPI tradition, we should consider appeasing both by providing
additional functions.  A possible proposal is outlined below:

Add a fourth mode in pt2pt communication.  Thus, we shall have, for blocking
sends:

1. Synchronous: returns when a matching receive has started.  The intended
   implementation is that no buffering is done.  Completion of such a send
   is nonlocal.
   > [No buffering, non-local, always succeeds, can deadlock]

2. Asynchronous: returns irrespective of the state of the receiver.  If no
   receive is posted, then the message has to be buffered.  Completion of
   such a send is local.  If the operation cannot complete (no buffer, no
   receiver) then an error occurs.
   > [Buffering (if necessary), local, fails sometimes, cannot deadlock]

3. Standard: i.e., implementation-dependent.  The message may be buffered,
   thus allowing the operation to complete, or the send may block until a
   matching receive occurs.  Completion is nonlocal.  The operation never
   fails, but lack of buffer space may cause deadlocks.
   > [Buffering (if necessary), non-local, always succeeds, can deadlock]
   >
   > "3" is either...
   > "1"
   > or "2" with retry (should we discuss semantics for retrying/timeouts?)
   >

4. Ready: a variant of (1) and/or (2), where the user promises that a receive
   is already posted -- thus, no buffering is needed.
   > [No buffering, non-local, fails if receive not posted, cannot deadlock]
   >
   > "4" is "3" + the promise of the user to have a receive posted.
   >

This extends, as usual, to nonblocking sends:

(1) a WAIT for a synchronous immediate send blocks until a receive has
    cleared the send buffer.
(2) a WAIT for an asynchronous immediate send returns after the message is
    received or buffered, and causes an error if neither can be done.
(3) a WAIT for a standard send can block if there is no posted receive or
    available buffer space, waiting for either to become available, due to
    other processes' activity.
(4) a WAIT for a ready send returns after the receive has cleared the send
    buffer.

This means 3 more functions (blocking, immediate, persistent send).
> > Please explain further.
>

Comments?

----- End Included Message -----
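For orientation, here is an editorial sketch (not part of the correspondence) of the four blocking modes in the quoted proposal, written with the spellings assumed here to correspond to them in MPI's point-to-point interface (MPI_Ssend, MPI_Bsend, MPI_Send, MPI_Rsend); the matching receives on the destination rank are omitted, and the comments restate the bracketed summaries from the message.

    #include <mpi.h>

    /* Illustrative only: send the same buffer once in each of the four modes. */
    static void four_mode_sends(int *buf, int n, int dest, int tag, MPI_Comm comm)
    {
        /* 1. Synchronous: no buffering, non-local, always succeeds, can deadlock. */
        MPI_Ssend(buf, n, MPI_INT, dest, tag, comm);

        /* 2. "Asynchronous" (buffered): local completion, buffers if necessary;
              fails with an error rather than deadlocking if insufficient buffer
              space has been attached (see MPI_Buffer_attach). */
        MPI_Bsend(buf, n, MPI_INT, dest, tag, comm);

        /* 3. Standard: implementation-dependent -- may buffer, or may block until
              a matching receive; never fails, but lack of buffer space can
              deadlock. */
        MPI_Send(buf, n, MPI_INT, dest, tag, comm);

        /* 4. Ready: the caller promises a matching receive is already posted, so
              no buffering is needed; erroneous if the promise is broken. */
        MPI_Rsend(buf, n, MPI_INT, dest, tag, comm);
    }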
From owner-mpi-context@CS.UTK.EDU Tue Feb 22 05:11:19 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id FAA16469; Tue, 22 Feb 1994 05:11:16 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id FAA04418; Tue, 22 Feb 1994 05:10:15 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 22 Feb 1994 05:10:13 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id FAA04393; Tue, 22 Feb 1994 05:10:05 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA01869; Tue, 22 Feb 94 04:10:00 CST
Date: Tue, 22 Feb 94 04:10:00 CST
From: Tony Skjellum
Message-Id: <9402221010.AA01869@Aurora.CS.MsState.Edu>
To: mpi-context@CS.UTK.EDU
Subject: latest postscript of Context chapter

This is the same as what will appear in the committee document on Wednesday.
It reflects

(0) Clarified intracommunication section (minor changes only)
(1) Improved intercommunication section (with syntax and semantics we
    understand)
(2) Slightly fixed attributes section
(3) One new intercommunication example (long)

Note: In re item (1), we have omitted all forms that implicitly required
non-blocking collective communications, since MPI specifically omits such
calls from the collective chapter, while recommending the use of
multithreading to achieve such functionality.

Many thanks to all of you for your input and help, especially Lyndon, Marc,
and Nathan.

-Tony Skjellum

begin 640 context.ps.Z
[uuencoded compressed PostScript attachment omitted]
M%"7 $C8*<*P3 \P!+@<".CS &>"PHP0) 6S235R0@"W A3OQE-;K91A V?]J MO38I!SOHR<&TS@1!>7D,-$^2-,?-5.5J"*"@94TT/5)&#RAD%K&P8QK^&Q8: M@:$20V!@81%X_,<$7@(_+MH?1V G,!%X"JP$[IA"@9O 5. D\,$(J)A1:J/N M1V!4/R-2%B$RKGCE/)!!9Q9_?XAM48XB#8P,M5"P@3,2<%4_Q.$B9":YD /# MT9"-(3,#2N<-T*$'S@/_UM)B?@LDF?N6^A(R0X@5@N.U:(NTF((G,3:1"L M03K! ZL2\([M%TP*1BD"@VEKQ>#'!P-%JX9AM9P58A>HOB<*;3(S5I"7X $O MM/!KN;INF7ARH=9TV;L24+J""RIBH=W"<.3]!4TY.-Y_Q,2W$3JX59@.]NI] M,=O!YN 6H%J""-QZ_0:7@']_1Y 9D$P2\PC2$G3Z@_O!,\9?IT#X'UQG! @7 MA _"]DZ",)[1(+P01@CO&<.:548,)X;3KK #WH]XJ%X2&2Z%Q7;36&N5J,T@ M;1@.3*U7'H.'@$<\L;,&?D:]*&&F;#R5.Z)=DZ^BIQ"OD48"VOY1#;D^?(5" M,5&2%%%#S>AH.SF;4OY1 6^"DKI)V4$B*!P4IND:A:,/!@JMKHS.'(/Y&/'8 M8%]NCBYT2=K'/)22"!6X0><"YHCIC^+6$YH:,P:C4\%JWC!K7[E")96=\G:" M2[B.E$4/A0VXN+- 32+%C'8O'KC VN2BFP+'BU*V9D6Z"R+B0S[5(N$"RSD@ M2>P1Z0:RZ)J@:1GK5Q HDY<7#>-1L.0A3@VC"(J M4&YCJT(Z\*@OJRM7J87TN\QU=PC?JTL"P='0++HP.YF=$K8(FVK8%;D:GI>M MI$A2KN'Y513B"E1$N@5I9_I%V:P'#\[0[J9/;;'N4\]CC!F3L' 8\#,TE0K0OV&;=:T$>TB)K:A28OYPNR-5<7-D46^"6]3JI%G0P[7,0@67 M0RD7TQ::R]X(M<%Z_'=P6?]EK"_QUXBF%P,8%EG1=Z\AZ>#H'%*5J*I457F, MV]*(#>((L8)XJ%I'91!/B"W$#KT&<8580AS6=:XNAAR\=\\3 E1HD(]6"=3N\ MI AJ!+5^W?71TJK][ 3B#!$/(\)#$7"""H5LO(.\6EHG$94(7O1!B%';&.#5 M#*\@IH\3)[/-X/">^U]0[L15-:2W$8U(??KG'9!MA9AA=-0U+AIW35Q39>,J M6;&9VXK*A'I.^-"7@ \(,FY94AE!BZ)SSS.99?G< 3T.@F&AA/@V+$1S2_R\ MM&"&;J,YS29140S@."C"< \=C^(FJXK'A3HI7L"T>!(N_5)*L8M'4WPICA2' M6N,5G^)*\85M4VPI/FQXBCG%IN+Y(:1X?NB!2[ATBE7%.Y^H(7LCUYJ3S*-0 M2HXS[1Q=:J!)5#"P^'CBB8U#=LX^<6+B\2;XO'=X$I,/^> 2BK1BQ$I=A4S M3TYV&3M8,6(76IQJ91;#BIW%I>)L,8XXE&HM[A9'B[W%U>($G*S85HP6L3JA M6&0^$E9%&X'S788QH[$*6=MJ##TKL95X@F? (P6>04J!]V+0%KZ8;K$OWEM( M\*1Q\N+?\%EL@;ISA;D&LH9)8DYY7H25538X$\JD)A4AP)$14M,#CR5,N>A5 M_=R(B57&*@#%L8M8;9QFC).VCJ;5 V"*U2$7\*;VE0I*@ZW8A ,"6+E&>(#9$8V'AN;C<7&V1_HV8!A%Y0<5)8&#@$:08"6!%T65OS"S!I: M RM3YSAP,;5X;_PL[AMKB['%HV*^\=^X64PXOA87CK._@V/#,>XWAMOX6*K- M6*1C35KHW>RCZ;MAHX$68>H0[H9T\;53W: '4!/T";N= KETA6%%V]AS@W4* MA%.*L[X7)>2BW3> M(_YGDBU(S1E-03JI6? 4C_6Y/E- IV]G4B,E!73Z 8-9/>'',4FN4=?$/#6R M'OPG6B611(HL>#M DTN,6[S'\-N^WQSB'2PH';<<[' 1@\D@AZ61 ^1Y[(B$=A2H',N!S@CP/)A&W.4%E#V3 C/POB(%LK $C M%CG('V02(2?%IRG/*G,6*&J EC!+H)+M?!PQLB!78UC("\"LS&M-ACP(RB"O MP1;('#+.VBQ/4./*>]T:Q"J$"#6UGD2P0QQ2Y$)>_?I^;T[CFWG1@R+ZLSP. M-#$7LH+*5;^ X&"?J%CJ&B6!AIR%RS2%B#R B8]E$8G(6C%+F0@$/C.0 JJ% M%,''0%Y/&23L>TP%641X-6\1AU82']]S1BI'W@#?SR*V'$VP*$=SJPG$[",3 MC*='_3KM5T#L A*&%"0[1/NC=62Q*QQ9D;SJK.SVB] M;OM8V^.KT;@%KEA&IO=8DCV*?9DVLJBB>]WN MD,><^49EI/S7LC-)YJE%./IGDHLZSB%9T#5-QB"+CT,Y->1K\C0%A$R$S2$/ MF_)&TA#$D2SOA-S]O1KQ@=9V\SR.7#(0O(L:(XM5YY;'N4 "A7+QV_NYD9XB M+$+)]F/'BZRFJ=M:?%8RC0A<3@T/3T0('2.((RC#6=AB29H72.C7K60R\Y1% MTMQ*Q5JGXA&UES)1IL\5?2C*F0M6X2VQY^9-OB./&2_)'^60\OBQT3%-#D8& M&3/)K4\K_.U1D"53,F M&M&GLR9T!:>KA2; 4, )0 M!((+$X&,@ K@K/ AJ!"$9[T-J8%P [:!##!5)C9 !+8"70&WP%C@H. DL N0 M"2H4I8+C@1% #P!>2 '0?%0#* I0 K@#( "" *D ,P * C0 H #8 "H +8 ME>T*( $AU@@;F $R . %]X *0 Y I@2. 7< .D -P * Z@%W9(2 &0 %$ M%E( :0 40!C@KXP"F ),"] "20$RP%X9!= $\ I($;H"+0 HP&59M#P'8 ^H M!4C+20#+,F8Y3V >N"2D *QF X8]GY2 L8P'8"V[EF<)*8 Q I MOP&H"W; MEL\ @V440&XY!=!!8%BQS ME\L 8@#70E*@O$!8YDNP %;+, $R9'B9)+!R #J !S+5 TP'@@NGP'H"YW ME^4 :P#24@@!1 MLQ=]BL3E@L%T64GP!1 L)Q=QA) $:++&@;M@']9LXP#BT##\H *0#.WTM@V=DK<"P[ 2H"((*>0 J #%!:'@.< M"PP$.&; LGFYNXPE #(7"$#+1>8C,W:9P0QB#BT;EL_+8P#_,AJ@HO!4+B\ MEH< ?X$5\XGYRPQ9C@-TEQ/,9F8H I#9#N!?QC%W<)@39,C[)C*#F+/+%N;F,H8YLUP]B"[S M!=H XP'VP)-9 9!=+@JD -0 C.8D,PJ@"$ D. 
DHF%$ % '+Q?/@LX M!6;+Y04 XTL 5D 7<"P##ZX$+ '2/_*E>\ A67]\AQ@LXQ7WB\3 MEF_,I&6:PHY L\Q7MC0#ED?+A&4F0!C@R@QJQ@*LEET+EN5NLQE@KTQE)BV3 M ;(#1H'5LJ7YPTQ:?BY#!2[-DV4M,Z(@3S!D[BZOFZ_-C6:^46@V9BV>R,H462!.3*=@ 4P-0 #L!OOBUUEF, X64RP5V!3,!8K@.@ M (X 5P*/P'L9H=-9-@E8EML F67(0&#@O:QP]A20EU7.;X"^0 H Z&QJ/A!< M!.8 [V6^,HB9KSP$N"XOG5$ ^N7;LC-& G)?!@$8 >@ *( 80"DA!5 #"*YU MEG/+V '(LF19N@ BB#G_"6Y+C&5H\Q5@NPQJSC&7"CK+%N8T,VL@N@P\V"_S ME;/,S66H ^#3WFB<$;.;F,HXYM&QI)C(S M%R[,T>6!4(,V$9,C >F!#O (Y9ZCP1("UK$!S.%X'3\HJ9S_QN4 !T MEI\ )((<,UV@LVQISBYCFPG+2(((,U\9I0!YOCH[F;<#;N=-&^V9^ Q8/C][ MEJ_*1 *O0/%Y-' B4 GDG4'-JN70 %2 #8!O9@.D M =++20%V 9\Y]GQ ,!8PEN< 8>=2PMA9]=)9O@*@ \\H"Y[LQ7)A$\ M"U;+)NA$,PH@.^!?5@GHGV//S$?RLM<9VLQ<%BUG![((\(44@$0!!1"$S@X8 MH3W/3F@<@$=@1^ =\$(WF@'+.@&6_@/.@^DQ8)A!L!WK/%.?,,B/Z#5TD>#R3"&K+\N9U,U_9>0!XQCJO MEH\++8#X\TC !3U9?A;DGY?/'^;FL^(Y,>!?Y@B,!A#/N6; ,A> "Z!?9@F< MF G+4('@,WQ!!."!7C=GE['/7H4P &_ !*U95C3SG:$&Z6;6,L'9^AP8@"^L MFT/+/ '+,B8!!$V,;BX;H\L FF@R06'"'-A2L MEH$'JV<(Z)RTA6#[S ME2W3\8G,]$J:Q2R3S@Y,!0H#^FQ.6$WGFA73]>8:02QZM1R9I@J\ MES_2O24&;]LBLAS$Q8ED@GG?O-V.:L K I[""3FAX MT&P"+^@8M-A9!I " '[5H&_0.690-*@9#]![-DE'H)'/DNEILW(YXKP&0 &\ MGNO-38($LP+ S*R 'D+;FJG0)^@B]&BYN7R%CC^S(#3.2VCI\H/:[XP"0$W# M'(8$3 :^,V0YNWP'J#7CF%O,_N@*-6&Y#TU:]@O4EI'+BN>>P7.9-PV@5CZ3 MEI'2W&84@#1Z^"P'P$+CFA?,'>JM=!F@QHP"J$OS!!0*;F?K[77ZQAR)AAH4 MH]O,J^6*PD;:OSRA1@%XF3W+Q&>^,K99 0"-QDNOET74UN?M !EZ*A!==@J@ M![@%J^4RP&5Z!O!XOA"TF=D Y06^LGD:O/R GCQWEV_,P>EU-'-Z-0TQ> .T M /;4=V9GS&@)/BV4/BYW!?#-.X7:LG]:]:Q9SBY3HAW/TV;U]),9L*Q[SC33 MJ"?0FNB[LW:9NTQ\?C=4F[<#I^5+ I':1?U:UD)GEKD"T^9O,RFZW;Q^#C5W MIS_+@NJ5@\99M QE3E&7%VS,(VJLL^_ 3FVFQC.3JF<)6.@Y=?'9_LR>3@$0 M23O+2( &]0Z:("UYE@.\ETL5G>5D >89L)RBUDN'EI$$F.?L,N"9P0Q1^#0? M&T[4ENI8=9S9L2Q%\ ODI_'*TX+]PF_ZO_R-GC[KJ O2!P$YP D "VVKSC6C MGZO4BV8SM3D:RMP;& ^, 50"D&8W@'< 3HUO1E++I5'5SN 0DZ OU=5DOO '+-V>4(,Z891]UOY@F@!][+I6AA0YM:&@UE!C'? MFKW5S>G,@9!1!A1D"3 MH/G*.^IX=9+:*25G[DUCHM_+F.8J,PJ N1!F/EIWJ-74).D?=&BY]=QO1BE( M"ZK14P'+\D?:#4"PWEP_J8'.D(/N@!C@WDQ8;E +IM'3$.C2\XZ !!U:?A/T MF^?-[.@[]"7!#MVCQDJSJP75_ \Y\X^Z%UU;]@JHI?O-(N@R,X/Y0&"U]D[7 MJ:',Q6J8 %Z9">"EKE G,SK+7.BC=*Z:(PU@EC=;EEO,'FHVM9'@SIRS_EIS MK'?3MV9BM)EY=XT>6#X+JF,%K";',HAY9HVWQCI7J7G64N?3M6A98J"-9EV_ MH \^I96\VY_C#_ MGWW2,>>S4=GYZ#R7_E]#!2K6/&O*\P!;1I"U[A70?$8"IILP*@_@PU>%!7J_7/F.:#@!O@!*!9MCQ3J6W36NKZ-?_Y7QVA3C,C MJHO6U.7G]- :4'V0?D,KG9O+3>?W3,G8%@-FKB.HV74&=_ M+O "E.QB=G^ 432K_ERHL]LY[.Q/=CA[$X'.)F7OL^$XE&QKMEHN>8'.WF8C M=(P61V=&D3K;:D82H&27LP_:H@I>-+E!G7V&= @XEF4%%&T6A)Q9M*'.'A5( MM)/9!^T*!2:[G:/.9D'$I&F1O.RS46ANHRW0?@LD\3;:!FV7-OYXHPW.1NBD M] C588%@-E]"NJ( <"SK(73:Y(8J=G,"J"T7D#.+EG3:@&B?-DC;I0UK-&KK ML]5R%.&C\^1%ITTJ:&BC .RT.FVS-"]Z))#5]DQ?M679'F>S<@D K1P5 #:P MFDC.9(+_ %OAV8P"H$_#'*0$T ,C !Q Y8P$" ) :@ 10"],CKBY0S'ED?? 
MG/7*S>4G0!5@(&"3EH#VG)\ 3@"\-A: "C %D$X/ 9X 38 F0!7 "9 $&,W1 MG:D 3P I0&,[NAP$< (0 4C+ENWF\A "9 $< (< =S.[U^B@1$ VBRP=E'C MF3_+Z6O=-00[4LVHQC-W"/P"W>H%]+D9<^TJ>#R;H9/,#.:)0'= 1H!3Z A MF2O/#NSU]6S;"YVK+A" J[4#I(7F@@O[>QVNYDS7BP+1O6?_,V^CE*+ MN(/5Q&>(=3+#U5P]2 ,$IXG,0>L\@<#Z<.W$!B]#L4O2!P(007N:9&#:B$^+ MG<7.FFP:- K !KUZMD??GT_/L6E:-86:O.TLZ$)?F^O7!V;LQ-=%P;VCP%" ,T"V[/>&4"P1-[/8WCEF+7KG<*;^LX]X/Z#+ LR X8 MI)_3S@(LM &;5LU7KE>CHU7,@&5)MXV;TBV')E.OKDLM0=[9PW&!E#NK/8LVZ+@3.&7,TTF!:L MKC7-7V@:MO4@.-UMGGB'HW/2L^B9]'@[$YW!1G#LNF'0/VZ10'U:+'"?+G+K MIV_,A&S LNEYYXV,7G*OEE$$M@)^=3A:\YQKCCW?'H;+&>I.]&L;_XQZUE+7 ML%W82VFJ@&:Z74W(5DICH[7/!^GE=8'@DL!RIFUSKK_-*.X$],0;;5VIUCSK M 3=5@2 A:&[URWP_EQXG[W.F&6?LY@9#Q 8L#L3O+GV.AE*[H=$#(6A7M_'9T0VN7D:;F@73[6MJM+JY16VH5@@D MJF?4Z&[1#2U2SJQ^%D^YP.S9L4R(SE%W ML!W74VOM +":'+VWMEMCFSW<4X4H-;.Z;%U\3A9T!,;4$>PW]&I:Z,SSCF/7 MH.G..&;I]3T:7\W +G*C-[8/[9Q9=FZM,1[ 7V\ICZ[ MJQ//QN?%LY$9<)UEQC2;HOG806?"=^H9=[W_9G7GN L$1&K*]14;S*VJ=EZL[P%;W7KN.W*1.]H\WYY#BZ)'H!/ MKKO3'>R@-0:ZNRQQEF+#P"W5I>[JM0]\_$V>)EROFV7>U6<#^*S;/7N&%%ZK ML;T&J>N>M>BZ$?U3R$P_F)?=%>LG,^79&KWPGFWO MK1/26F8!\U\Z?\U7]G2O :+.5.:@]*D9P@P(ES:3EC4,RW V.(A"SDRS+E!# MP\,/K^C?MDZZB_W%5@!(GE?6!NH3-VGYL_P%KWV/I4G-OZDG A*"&H MI>'8<.UH. :QQRYMFV7"SC>R>X\=3<<%KV=%HA3 MJJG;@&I_N"H:O:T11W=7L9G8JFB%MR;\_;P"/SM;EF$.WF?1LBM!^(T!7RV_ MQ)GA?^MHN$8:L&SL)DZWKA_AV'"P==U9A*U@SD";K;<#PG K0DQ@UQWPSDL, MO./:@>^%N,B[_]V[7CFH!J8.T.:L-'H 7$TO\"^OJV4 78 "N/!9#.T\R(+W MK+,(N^DZ-2$[@3T:*#"7O$G+*F8W-$(@!0"#YC7[FC7=M^AVM(YZ5UV/SM1T M!E+;ENK0,GW@^=R!GD7?Q(W8.7'S0L1[+SWI;H@_F;O-(^SS]1=E,DXP*, ]MJMF(+O_70+VN[0E0X-WO@## >1T>? MK='@%NNZ\R$Z(M[^9GMTEIO04(6=0$? .)Y9GC9WMR/,INMK-!*[NKS$7I&K MHB'6Y ;]MUR<_ZT<_S!/QW?5JVD(MP>C>=T#9V#/O,_-$?&7M>F:#*#V!F!# MJ1W@,W(+N!3;8YVV#H>_NNG2<6LR %E: KJU7G?W#*#35'#ELB \"LX&D$9W MIT70F7'9=GT\'AT5SR_OP4D" NGL0,7:M_QWCHP7J3G>GNF?+"_&F\VRY(B +=T+/ MECG=E67-\@R;.GYN;I7/EK7C'^]>N:Q<.FVWYI6+EL/1_O(+^;W\40 8(%A# MK.'7V.TY0*Z\[HQ9/@,D!63E[>IZ>:_\A;N'D@.,V'_DMG!8P F.48=W:@!6 >;SN+ MRJ'-=V]H-7*<1B[%=A5$Q?O0K>_1LJ#:+&W5OHIKHM'DKF_#=V?Y/ #E3E#C MN0O?4&68N14;#$XS+Y1SP9_,-W,40,X<:5XS?S)WGY?F_^IC ]OZ*L[S/F7? MI\/,@FHJ-EKZ*>[71A3$H97C"VNV-2%:8L":3C0SF!?-DF[L\XPZ LT('TS? 
MFAO34.@Z-TH!&-[?7E>OP^GELV\HN'-:^'QDGG"?PI/7$^_C=O@9)E#0LM-N MN1_9.686!'UYDMW6SISS.2[9&VW/.4F@DVW4%IW[%439CF52-N6<)"#:F%6G MLE?G)(,^H6.YE;TZ?V43JF79L'-:MIS9E@T>%PODLIO9,(#,>=E9L.3+#CO# MG'OGPFRK]EQ >!Y<.V8WLS79R'-F]B_[DXT\AV8WLU7G,13(%S7[E_TZSYYC MLW/C86?:N?=\K1#-QIV/SP<1!7*8 //\)7#.IF0'SY'GZ^R>=EO[>#X^?V>C ML]W:]7,/&CH;>CX^QV>CLZGGX_--A/S45[HTT_5Z#;!4[:^',%ND=;?AXJ**"+ MM#?: '0%NDE[HXT]'Y[7E+[JLT_=Z$WM7W:*72,<_63B-Y"EZ+O M\GS:"72,LU;;J$U#]RMX"C#9TA60=;'YK'QL)AF !+S/K@-GC%K[+2!7GFO/ MIU'B>&5PW12@LGVN/FP[ =H/=^TI@/C9M'#XAC9KLA7.H/'!N9F927!O-G+' MO'?F2G+--XMZ%.Z!QE/CF??DM^70LL0;2(TB )*[ 23BXO"^,W7[^VU\[C47 M"&;E@O0@M?\:MBT5GY(OG#_3-(7GFQ]T\\UYR_3G6;H#73+8"IPNLZ9UT5^!#\GAG3V0&=P!F@#2"=IEQ/ MQZT'CG2%LZM __U+-UJSJ\';4(37M20\1OV$'D&OTK7>Q)I[M-.<04UA_HJ' MJ:GF77-1LR2Z0X!G3GQOQN'CYO)"=999* Z.'@^ EV/556R"N*5Z%AZZEC2' MH[W-2?,G-V3< EWN=F-7H-'/RNO+-4#YZ)BU QA'C^&XQN85[F]X=AYRO MK_70B.<-=4DZ%WV;&)#_M%S->6@3M!Y BCVSUB# HZ70TN6G>$H<'QUX'JC/ MHIT"RP(9P=[ZU$R7+HO+TOG2W6EP]W'Y%3X&YTV/TQ_.L^_6^-64H=G6XUMX-[S?/@!6@[]]G[!\VE?I-[ MHRO-%N;*^..Y.PWUMEHGPV/<4(2JN5"=+CYM)C:,RMW.HH(J=B>:D&T7QSOW M!OC.D?.>-)Z9;"ZBIA)(MUOFAV=SN=I[_2T!CT"3L7_0>'+,?'"J]TW:@:WXAJ?+T[7,4_(H 4)'?EV2G@-A_+),I^;3_XOWT,C MK;4#Y^_A-FD=K3Z6-H\_G;G/>?!.- YZ_,RW!G\WK]7/#&:D^%?!(D"T9E#' MN%';3B!"^5I=.5Z@;GU?OT7YGPCL G7%'*; @/[E2YT%D7?P6$.> 6YP-&9U,PS[R6P MK9GFN'+7]>*[E][KY@)($K3,)01J-/JYUPS&KJ=[UK_1$^P^N+*Z.^W +M=O.["AY81Z0;RGO4Z^K7.*D;FN<+96X>6W@BX MGY',%W&?@A;["RVG)A(XTJ7.\6KJ=G5]^?QM%D8SF'ODA&6JMP$:[#U>$'O; MIFO7?&F0=Z$\L\YT )^/O$/K9H"S0#8<+4YM[BPOF3O5*7'(@!N[S+ZW[F ; MLH,+0N8G-YJY$=UE'HC+V8G;T66-=Q$; 'T%!VT+GOG-I0&I^0_@!Y Y_WMS MN3G7SACK>1/\C_VF7C2?PL'3L^B?>O:Y>6TQGU(3F#'/; &\X[;\;#E!CO/ MI\G.>7 A0,^@ BW_1CHCF=O>\>?; \W9ZPST=@$P!&#.0FZ<\S)\:9W06#UM MN:'-?F6H!5P36G@ H@ C4V;;K=WH"G/%F9=LV4\,,!]UE07 :S.+75[^TX<"UVIOHXS MVX?=E&JW-XA]?2UB-ZW3RAOK_W+V>NT;5Q!_9A3MVJ7FYG"0]5N]UHX3N*37 MHT?.M77UM[N=TVT6J(XWGR/0Z(9&M1K\/,U7=@\D!F#K>&5A.TP\QXQRYUR7 MPV?3- 7--(]\6GUF#@-HQZ?/R?!F.Q*Z.2$_GXR+U4,#;763M=*\M,YY#D-K MP97K#O6:NA$;^>WB?CPWN)_3GN[" 11A(C K%U;GK<'5#G*V-97]7Q[=UIC; ME9'FR78/^<,]'HZ$C@FPOMG6"H59.="?7SFN(^"4=,+V5#H?_ MS;T*(O+$^XJY.YV5/CQ#L@!>]G!GQV'8#6^!.(G:$HTBKVU[ MK+D D?=F]>1]-RU-SV@K338^X=H$Z[HY7;K:OG^G?B&:.MV@T-Q_:^J]U'CV"QRN+QTGKQP6AP@?>$4;@IC![ MHV/ALN@9NPG>]LT.[W;;WFG5$_87]:$Z!FZ;?HPSV)?P$O V^<# H;Y81W8' MMWO-U@-[=Z.]R]YV9YFKW'7@=( =MU^H:TW[CH 7OEOB?.OP>HX:,O V/S5C MFJG>_'?*^*YYK/ZRGEE7#V#AHN4[-PB\7-Y=)EW[P'L"!0+H\KR<14U8YJOO MTHGI#/',.E[B/N%8?K/WJ.'8N&?P A#\OIIW,_F7+,HQ[VHQIUJLDM\_ZM!Y)T#)3 MVV7L$P&2N#0:Y]Y9QK^'P^OJY_7,,G6=&=]+&#(SF&L*J&<_/+B:S:PKCQ"8 MJD'2M>U2MT,^SQ#\?'VU3GHO MXXWD%^N@NHW;V[U>G[]#EB/7=&9G]Q4; 4XBZ"\[UI?AO(,%.ZO[Z)[!MBOH MT&?Q\6@Y-MO\4TU5-\?+U#_QFN6X^=1:OVZYMJ7OJ!OG'6B!NFU=.*W;UJOK MY 7@7/9"/+@Z[UYA!F-?J9OM@&4/LT5^0$XT;UR+#?#,A^:Y>Y,@NBP$\+#_ MW$7R;6:<=3I\ !\:[Z2;HF?EA'G"Y$=U_W\-# MW:739?@R-;Z](+UB7D0KESG7!6J].7O@Y=8X"\&>+-G05\W84@#5[D;W*EC-OLQ?9,9W;^2S;[NY8?FB_ M!7[GO^S@.3=;+D"H!F9?LS]C9GZV60V_P MHF$.P6PR)";;+1[.QDQ\L]79Z'-RMCI[SS= U\\?M/,2[&Q(]D';?A[/=F?K MSRG92^V7@/^D[[FKW3_J$+Z?T*06VCMI$>3<\$OVK+ MLZWE4 NE-E*[$@Y%IVKWPJ_:57J_0E7;J)VE/V%K)GS:,6TONE<[C![6!CF7 MM:4$> 6T]BAAK9VAIIYSK>/:<^W1MET;KZW7]FN+G8W5?NTC & ;!2#8)FR/ MX*/BC^W$=A%@L;W9ICI+RR?;E6V_,G<9LZW9WEMWMC_;7V:_LFB;M&W:1FW3 MG*<.&'@70) [#PZ4-Y07H?W+Q'85/7PZ0]VKM\1WQJ/BZ>6']QO KSYFWJ>K MO]G>A6]S.FL^'BYU5LVCY8W5G.5-]7K:J]!<0"E,!&S2>?!#^H&9_7ZB!T$_ MS5//#&LS>W->1A!,'[3+TM_M-( &^)EY0@!I'JE'NY'70>GQ>Z =.XU]?BJS M"Z3=?WBLM(,=XX+=^D[AKUM=UA[VP/07O4,MYY;\^Z1;D3_ MFVO;)><7O6,Y[8Z/O[AGG8_2#6RC>:7=SDYW[E0#N\/BM'G6N'K=JJZW]I'W 
MW5GRRN[&,]FZUPROQV%;JHG,MNN2\P'=TAR=S[:SK]/?['$9/.*]#$TL3T8C MPH'V(>PSL^/Y+(ZOOXA[O)GKTG8H0C,]W%Y\GI"7I!G@)/(T/&I[Q:T2/[V_ ME^72#6R_^\^=A !U7T7;GY?1X@4@V.EX3!Y15[?K MGQGII6XI^/G=:I\0U\(?[5OC'/5,>S;;0-WJME.SJ+_-1H$3\^?Y?*^([G-C MFK/2^FHT]1/]E6TNT9>?YVZGL!'EQW0B/!RO+C@_2PDGTZS MK?7V]FNU>Z/9ZJUP)EO MZS_V '46/$,]W6YNMW"7HE'=#&8R0$$=T@SU+B_$GJ$NLJSH)N8G MS?G*!>XAP5L>TWQJ)C)CN'W@X.FN>@?;9KT>_WJ'[ OXS7P#<\Q](D[=9L&7 M\?7M3V9I^KMA5MV'QF 3F0OP'(';LM19F-# C\WWP8_;CO11_-6^_=V6GJIW MMUGJMFBN.$Q]#HVS]CH'\AOY(P&QP'$.6SZ_CY9;RHD-/CSVN?Y\?Z\MG^1S MZO_W?^44/#*:!0T:W[^WQR\"[OJ+ $8:NOQASN*+\MG6B_1P>06ZI8[ WS5O M\Q?5-/MY>3JZ'^\7T D,K*WY"8V"PVZ>)7]2!VETECO1$X'(PI!ZL[R^3YH[ M"2[H*X<7?22??L_(O]\?$%3T6OU5]4[J>_7^4S/]97?27&_]DW?-JY%26&# M\I/F,6MC_N((=Z*CJ5_ MTDGK7^C>^U>?Q;T'3XG#\B/.:68L](%:ZBR(!S4_S%_=O71LNT(@A8]O1DFC MO3/+A>8BM82@3,V:1NMGW[?NHF6"?O$:-#^C[J@+N(OA+OQ&/FF?5" E@)2# M "3ET/)%_D>?9."IGJ9+"9[E"@!HLZ:^4_\MYW+KQ=?<6OC)=9LYS/R\CJBS M]LW2O_>"?LA]-F[1M^K'SML-,'Y#?@U?D5^_[S@3NA?9;^VW@&E_6T[)_VQ; M\J'-=NM>-5\Y*XTD4$C/#OCMEW#-]'^;RGSP9A$/.B'/*^CN=VM[Z7CE# MVJ<.F.5)>Y\YV,U.-Z1_O0'+5.^(ND/ZE:]B1GXW!J ( 6MWM8( 9Y^97D5# MVSG/QX8<==I='[T1A])SUE7+E&F2=)T<:LZR/Z!(__(3 MKA7[IVIV_6B@WVY3UTP3^\T3,>F!-'E?V6Z#_]QGE]'Z"&AV^HE_\%[ KZP_ M"]3PZ]]6>PQ:DST\[RSKW!/,W6UY@<#ZEO 7V$R?C3)*(?@@.V^Z30WG5[37 MI57PFF;SMY<@_D[=_D#_J/7=L.A&_&.=O4X;?WV3EN?JSN[8^-D:0NYFII)W MIDW@A7 2]',^*'VV)RV'\[O*ZO2(\Q8:.!_99QJ0HE/R>0)<,]FZQ(VX']IO MPM7P$$+I/9![!IT'?_CGI_O\\GUV.H4Z6%\>;60GRFW\S/)T?&KA$OI)MM>Y0%W:'L$; MMEOXR_V,_N&[0-ULQ^5GI<'19 "HP-T?:F%7&$PXE@7;AF6*I:4=Z$PL(#O+ MO_?^!NI2\ZJ[+IT1_]GCKF_Z4N_^N+NZV0Z;!^&OOW$%Q^NTN6Z>L4PLE^W_ M[!?EF7*/NFX>'8$7D)1C^I7AD_E7PS/B@*";WY3?_HD%66G8?]DYV:PVSY8K M^(?=7_DF 3KQ[^EYVC ,3GE7-8MF/9?/XYUYW7Y]?G^/FP,Z?;V9W MT,4" /IF=@@]W0T_^64WT=/=:2/3<^MS"GK2;-5SVW-A9UETWW/-;#1TW6SE MVUM573,"0=T-'KPIEL%H"7 M?IMUF6SW?Y=^2'J9;/M_&W&??IEL_W]/>B-M)("4J4*6'HQ=%MZ M%VT,@'@)JV\W=#%T(#(;;1* ?@4^=$9M.H!"=$)M]W\U16UZ273>EQT,71?=$9M8&UK A]G9'0A9^][ M:'1M96EM:P5! UYG;G2Z>K!]?VV>9PBW^L?9G4&\F:75^ MS'OD?(MI?YI]B67/" ^!.GZ@@/Q[VG6>;?D'I6U% M !\P&Y3!\5U+WOV9S1[=)IW 7%9RH X8!4<,X8T17;>.U[G(!U G1&;MV RW Q '1_!FI[ M:^9PZ2!['7(<-!^I(''<\!NF'\^:S5^JX'L9D8 !6IO Q=F6&BF?V)FJ'_S@/1E M*P=5;+&!L +I@"Q[I8'U:69I[8#?@=YHN'P =39WVFGL:H!EWGJ:9C1YDFC< M;AMR)G>A?\E_< #':(]ZAWNL:#0%)'Z8<.IIT8&[= &"J8$I M V!*GIG:8QE$8&>"RR"X@K& C5\CGX8@;L[&X$ ?.-GF'_\<9AF4FK::A:B5^.&8%:KQGT@(\;Y5WPVQ>9SB!W7RP?SN!HWWT90-RH("W?U9_0X%0 M@HR!H8#8 =H!W %E:%=]Z CA!P4MQG_L@;YKL']5"U>"8H)6?RI_GX&&C9^=EKM!G>";']4@DZ!J(%0@6V"?&5R M@B9WK(%5@9QI G98@8)\6H'%$D)^EWYF?XB!88&\ O !;'\!?H^!(F:2@7AM M/GX9#91^E()B@FN!2P!3 &>!G7JF@8!E3@!$:J""4P"E@J* @VTI@GQEJ8*N M@L6 M8!P@;EF[7?,9D5\=8&_@'B!:(&F@7QEIX+19:F",0!9:2H T6EX9JV M*8*$9;""PX+%@FYH?8&9;?%G6"=2?X: &GA5?\-N1FX^9^%MH0$-:KUKTG05 M"@UM'G/29RAU;GYS=M5QS!_P7P^=U1H"GP9;)1SO&C/;F=QRV>&9JIYJ7A!<.V"PW8M:O1O$7YV M:N1]775/9G)]$6Z^<4-Q6GQS "]K\70T>_=JV6KK=)!E$&L"*X%HV7OA9T=\ MRWU>;ZMG:&AA?K1MM7UP::%GO&KE:-P";&F@>F=KJ&BV:;A\/WB(=HY]?6EM M:Y)]QGCA?<&,6FB>X1YX'U.:/!RFFEL:[5WU74M #EV)(*R:V]^ M='::@!9^?FT8?NEO!VH%@<5HBV\1>"UX6&=">=A\G'7V=&!J>')6 RA[HV73 M>I-YVO5;]YKPGNE9\A[=VK-!>MJ VD< 2UW!&LY;!^# MAFCT@IIH]H*S?#9_F1K4;]($/'?5=]Z JV63>0QJU&W,!3M]MFB]?/)F"7T] M@]MTT7Q[@V0 1&K/<6J ?[!_ MDQQK)VCW9GURL&;/ M9[]]2GES=MR!RX"&*%N IH"M@P)^HGT&/:4"L8.5@I""HX"W@VMI97GM'(ME MIX-9@7QTGPO +VB"D8*F;,R#U(-L?]B &(%) (AMGH%- J:!ZFF @K%N)'97 M<:V!Y5'?@AN"Z8&S@>J .6>V@8EZF'^ :SUJ2G!^;9J#O('Q@&IJY8%Y?[QH M\8,)@CEG&GAW!>V Z &?>3UX+G$0=[]X$(+_@[^!>GJ.?YQZ50"1?[<'ZX%= M>>N ! 
OM@*9F7G,I8YL86?=>782E&S=>=A_ MF6S@>>@(XGG:>=Y_HVRJ;.EYZWET :ML\G/O>70!\7FQ;/-YM6S!;/%_NVS_ M><5S_GD('0!Z\G/+<]=LSG/^?P5Z 8 '>L]LSH"_;'EZ#GK5;"EM$GH+@!AZ M[G=A9V!ZXFSI<_9Y['/_;A6 &WJU?!F X6PD>FUM2GH?@"AZ]VR@<6H*,7HF M@ 1TW&P&=#-Z67H+=&UM@7H(;3& "VTT@#TM/WIX@A-MX6Q$>CV &6T@=!UM M1( E=.%L)W07;2UZ2H L=#IM5WJ9;'UZ48"9;(%Z5(#7::("77HU 21TCX1E M>EV 0&VK?U=M:WI;:TAT GIF@$9M3GJC#4)M"'IZ>G=Z4VUD>GMZ6'19;;EN M@'I=;>!SA'IZ@&)M9&T9 ;T"T7=I=(& C'I<;J]SCWJ$@'1MDWJ_@ IOEWJK M;PY_FWJ=>IMMPWRA>HIMC&V.;:=ZE&V6;:MZF6V;;:YYGFV@;8Z HVVE;:=M MF(#W95,'4(.+@C$+40+[:&1_1(32@)*"QX,V?*AZY(.L@E IH&(%_N$G7K( M@GEF10!7 'F!?&7.@E( N(#] 3"!-WR]@+J"(84_A,>"RW]. "F%*X6R@GZ! M[VO%=\2#V(,6A:R#3821@NAYS(,6A=> VG*\@'EM4P#E!U0 XX-. .6# H*% M;8:#RW I@L1XZX-2A.6!N@HM@K"!YX (@L2!N8$+@J!H]X,>#/F#"X3+?>9]T:(4@A ^$XH"$;AD-$8851 MA/%P>@BS@<5UZV].=<*#"F]XA;V!$(3V?9X"PWI& ":$>&5.@N]?CH5B?35\ M?H(OA+%S> &5A9F!W 6,9QQ_-H0V?JDE@B6S9T-^/82Y@=)I281^)8AZI84: MA7^"!83O976"=0$H;(YEJ(.B"K.%) &^?T"!IX50A%F%RW T@3)\LVJ;@!:% M;PL%4&8YA2J%:(%N M@;."+H5Q@0X!*X*.94N%=H' @+R"TFG6A;Z".H7:A:ME+86Y=K$"/6W6@XN" MI'T$"U8$-'R&@<^#8('=@W\"3'I)A=!F1 %9_0&4@"2@.T'4H6F@$N($)A':%%X;Q<&J%>84 A/ACUVE^A?F 7W\*A@EF@X7M@,EO1@*^ M949NS79,%PVY6@N$'<6:9@3H! MA02J:9<$-X0O?+<)JV](AC*$Z C'?]1ZJ(6K99N K@*=?4F&KH57ANF#=8+N M61:%M87M>WJ"JV9(A5Z&9H-/A"^&P(4P!_T+BH(^?HR"K0D: AJ%W(,]/[P" M;X9B@OZ%#0MJUH4M9MB%>8& 92V%+X67?A>$X86Y@GIEA&6M M@#:%:(&Q@H2&ZH7-@F^!R(#N:<9^X()HA6\"7FR<=A=K9'$::P1L*7?D?>II M!69$9F8#YH)_<'=O%7CC;(QEF&4UA'9E&H+(=HAQM7;P!>IZU658: !U6VI$ M<&:%GVN 9=9WOW@B=_.#GX$B=WEQ*0#2::B&?G#+:J=[1'<$?!EH#6N*=^AK M=86J>85_GG'0"4IU* #"AJ1Y]&EN=<:&-7LF@95J%89R>[%JJ'/2=6!P"82N M?Y9V2'ZTAIIJMH;Y=AQVW'"<=.*&*&8-A*-N^8((=FYHH'B[<2.&06D)@'Q6D* M?SYIX@+O<*UY)WRG:;=^3P:S ZIE0W.TA>QHL@0J <)[*7M7:FE[ MS'65:B^'&H>G@1B&>'X,:?IXTWF^@9MKS6=U:D5["6Q@<3IU+VH\==QE;8/3 M:M1P6 /L=1II)(2.>0J$RF4Z>YAF7'Z6:"=F1&:PAN%U.G'-:2=OZ(8?>YQK M/X*M $'K($VFP/ M@(J$[P8:@(V$&GKP$'8"]!^6P%=/QL ME&Q,@*.$Z&Q9=.Y9='L'!"^ E7L*;?!S-H$^>NR'.( 3;=F'>0D\@!AM\',? M=!MM0H!+>A]M)G0&@B1M\HCYM/H#3A$9M^W]P>F> O8?9A$YM,7353VZ '8AQ@%AMV8=T M@.2$071W@#YM>8!C=&-M970D 69G[H2+>FQT](13?\9J*7&:>HF LF6+@(V M_F6^@)& DX 9)=%W&6JM;!&%R'*"@9V 089$AH>!T(-J?Z: )Q)*A7=T3 !2 MA39I/WM%>YAFI('K:FR&*88->UZ%?G^3?X%_@PFX@85_*7$9AFR%HW.MA@>" ML )=B,:!\X0;@LJ!*8$::]UUN&L5A]&&7W"J:AV"&6XM9M&!2WY4=R]U='S2 M>C5^77DM@5TDD0AR9LUVHG(%:L-XOWB"AT%GY($1A,M.5FT>AOJ ''+0=VAR M_H"2B"AY '6@?\YQZF@U?LUV7 (;@H=T;G..9P]SCW&1=CV"OH63>0<-&X*X M@V-I*H(;A$H+O&AVB#YP '5H=;@"_FT A+6'%(06A"V&Q8&GB#"&&X2X@>R" M[GZ:@U@#M'D7<<)JCF_>:J!PH&9O@C^'K6M!AUUUY'TU==YH*X(;'+MN58B2 MB(47JHAKA2.$9 *T+N^%D(4]ADV"[7M/@K0NEH53@BZ$N7^:A18#7(&7?D"! M*8-/AB!K,X?EB&B"KX7+<(&".6IHASV&@X)6B'AF5&_8!#MF9VGSB&F&"6:$ M9;V%=P#L@X>"B65.@&!_%H5W 7@!F8$;A7X%NSP.BLM_AX;>A8EEQW(RA44 XX6[@B")JV6_@LF"FX'+@L>&I&63 MABJ)AH:6AK6"U$ 42UY_T&;@=$ASXX(D;_"%\8AW93B)A8%V *B 7H'0@YX" M_7VW![T"_(46B4P *7U<;O]J5V5;? APB'E3![R(!XG#;JF(T&;L<2AIO6>Y MAAB' 6> ?]]K&6JX:/R"83N[>B%>)B'DJ?72'Q79>;31[LW?D=L%MCXF5B>R! M6P!I (6)DXGJ9VR)F85O=05J)GDM=6)K8''?;U:)F7^>B-!FU\(VZ) 0YIMV_196-WSFTM=]AU47%) M .MFRG9BB7]\T&8] '=\>&4T9CEF>WQL>TAH_SA$B"^&JWB7?&QK5 $L9ZAV M* #N;2QP>W"::CV)'WOI:,B&H76& 4\%2&9$:GEG@W>V===PIW 3 5F'$7/@ M?-"&MHG2>K1J2&BB"O)S.GM_:-MS/XF(?OT!KX@5B1&):G](AIH)3(FC>OYE M%XE1B/UF^8C\B#UHO8@,AF5G\&Q;B&"%J(F3>8)_8(C6@F*(VX@,AAD*%FUG MB/*#F7_'@72%;8C,@2J!EX@O>\^))7+.?AR"$VIW<]EFDHBA9>*!&WA MB(-X+']_B"1V@8CN:>9_A7\$@M Y6VX%:BR&&(2]B/5I723 B"1V4W? 
AL2( M8GAG@^=IZV@[>81PE6^29T1JI'O9= ]ONW$1BBN"G0% &SBJR*5H:3>06)6(A) M:E%D<(;+ QX5VAKL\S(H5B:-Z*HH9B2N)TFFI@AZ)V84JB96& MW(6(AHMJ,HF#;;]I#&;LA0!VF7+'9V*)?(JZB31K,6L<=F( $0/C<>]KN6+4;YAG]G"VALII#6AK M<'&)(H,9:D1I96\3@_QE?VF6:WF(N(D2<72)T65VB5YG>(DI 'J)JV5W;]=K M+(MS %L @XF%B=V&_8FQABQP\6](6?,6!ZHEA;+>& M2XK2>K&)T8%1A^^#T&8,<9]K)WX'> YWP(G<;F2*Q(E6BQ&&)H82AZIY8'(3 MBF2+S'YLB[V)AGO6B70 B(F-:'AJO0-^?,"#N6O>B1%WN7E5;.&);7W5 NI^ M^)N'PW9J)F$(M8B=1Z*GWL=@YKE70"=41R=HOI MB&]U5(32?U>$VGG5?UJ$K8M>9Y9X1EA*ILJH>S;.I_ M* #@;/!Y,0'RKN'*P/A+('Z&QG>NMLZ&QK>KT^'("9A!XT W0I>MV' M_W.?A(1JH80';3)ZY(N>',6V4MZ+%W1!>A1M](?DBQ]T)'1* M>AYM37HE;?V'VX2]A &(OX0+BGUM.7?\A(-M_H2&;8M\ MHGJD>@.%D6VH>@:%A64(A:YZ"X6Q>BF*M'H0A0Z!MWI??YN ?'2>@/6%97]& MB?B%2XBR@XV!98&F;96!LP807QV%MW^E;=6*_H:B@=%EGFY*:PF&,(HIAH@7 MY8! B@2$@H5?B V".HIX99R*#(9>+8N&$(9ABT.*P(B:?T:*;XA(BGQG*@%E M=LQ^Q''Q@1ENR86';]>(:@'Z9S>+VHBTB%6*$82DAW2,885MC/IG=EKV@QN" MHV4*=\YMQ ,F=:=H!0)"9MEN7HJ89\-]!7&+B!I\*GD= 7<"=W[>9FQ^IWP7 M >!E*X*GAUEI\X9SB^=P>&;= 11S+'M[:R0$(FE47A_=EK% M=WF*%83X@'N*WXR^B)2+!6K!B+1UPXA0!,6(IWPABZ)E%VS*B%-]18?V9Q&* MT(B5@Z1E/7(K@@@]6VZ0C$IKEV;W@SR*AX)!(.I_/(91@>*(HXJ4A;!LIXI% MAIF%H@)$@4QWF8&;C/&(TFMF:38!8H+OB**%7WT9C2**]8C*?TYX48'$>+J* MI&4J (2";HS&:M1LE7YDAH)\9H:5?A"):(IKAG",;F= !U5LPX7+;-I^$(G0 MBF*!;PM8@@%^:X&^@"!K6FXWC0&+:(P8B86&Y8I09D*-10";;::"'XGKBJMH M(HGPB0EZ/(#BA;^ CH:4ADN-+8EBC**"3X0L ,2",(G99>J*:8$M9NR%6V?3 M?1EJYHFO>!>"V@$"C7MMJGTP@\", 8'^<7("B8PR;+%&?P(T@!]KHG+E:61G MU6;!:69FGVMR 9MS<&=>9WQE,&>@"IMSTWU3!X1E?7A4:)]F5W+A9;) M-8LZ #H +VL&BWV-,8N>99]X9X49)2H'! ?%<,1XMV=OAJB-G(VMC9AF)&AF M9I.-I0X:?;J-6VZ\C=T'> &_C8QEG8TY;X*-A&O':,,(FHV'C48 RHT\9GN- MJ8TT@<@,Z7FB""G'3H 4=WN'GF=.5V*'_.?%-S8(MI MA6Z)XW!C@M^#5$7 C>MF1(P?;X.-FF63%X<7%TJEC5>$(P&'%[2%M(V0C>MF M,&[1C9]K96>CC78%EXU7A%8S-D4$CEQN*P#_;[2-. .@C<=H$1!*40Z.,&&5$7?B-!F%([B;/U] MQ A;;AYK]2*O?]!HDHT6AC=_K3O?B)B->SX:?9-X>V\C:9X!%8Y& M '^-KG0HCCQW*&Q!CHQEF(W+ "$\+5W1>9S>.Z(.7X[3C?T+;8X72F2.QW,7 M2A&.[X.=C9AF(XZ8>NT&80$Q:2:.^HTR:>Z!((I9A%YGO(W]12IZ2'H%COAF M@HY.CFP))8 N:(B.F7\Y?KAQ> 6@?C*.N&==CM&-H8W>;"0+DX[7C9E_>T*+ MCCR.K8TBCJ^-B(FUC0B.-G46AE@'DH[)C9E_\ '#<(".(7@RCK<'&GVA EY_ M:HZO<\6&R(Z-?.=?LFH#(O7< 1\3FSB GR#>8@)CT=V"X]C<=%\FWAW;+%GY'5R9MJ)4W?89@6!WHE- M>+MG10+BB9"+L891<36'\6]';SB'*GX7:QB#WFB 97!]47FTC?B-@8W5CDIP M]&SN67QTC&64:CQW4 NR@:&.A&4^ -UUPXVT94IP,1TKA$6/E(X<9P9T^HHE MCZ!^W6_":)R.]H[Y:79W6X;%=T>/JR=[@NZ%H'X ?[9]-7_,@)AL-P*3AXN* M_(R,A[1]P'F^B2@ ]8$;(YG"W [&:X:_9G*8/O<15YT8$"@_=NN74R:DEY7WO:9DP&KH^8 MCZ./YVAC@EUV4(><:7T.:B,18-SB!UN]HIW!4-Z]F6,?D6)(84@:PP" M[0:ZCTR)2(V&B;!O(GAMSUR!6P-;'>-=P*6BU**A&LL &D + #"@E1PB'+09A%FZH\Z=)QT M20&^CVQL9XMOC?IGUW46:C$46VY A?T!" Y?C$2%HX!&A2\!,)#\CYJ!8HQ$ MC<6.-I"9@2B*!(LKBCUHJH$HD*9I#&;&BGA__@O'C@^&68K#AC>*=XP;@F&( M47%[C "$E118BNJ!@(5D9VN(ZXR#C,UES8&3=7&(%(K0B8]X38I$=]&!%Y"5 MC'( >H6@+ 5RD(@*D)&,TV:=C'2%H(R>?R0"HXR59?YT0F:&9JF,WFB89K., M2'\R@81X%)"- 5^)1GL-:+:,K8@#<@<$!H=\;^$'9X60C+V,U&J3C;!YYG#2 M:<.,_X-O@J !,WVB?U!F)H)3ZZ* MIX_7<,>(H7&-:!("2(>>D#ITFHIQ;::(Z(PB=P5JW&>,?W"0CW_EC))_@(5\ MBH.%1(J BGQP@HKOC(2*,W=T:O.,R8B*BLN(JWC=<<)TCXJ:9I&*721V!9MH M#GO)96>*H(O:B#F&$&N->:"*"HVBBB!K3X(G;%&"1(:8A>F($HU>=",!6(*W M?Q:-8P+2:^60%8FNBK"*&0)EB"ALM8IJA@F!+68DC6N,)6\/:BB-G8^QA=1L M,83)BHQLS(-,A&P/PXH)9H."1I"6AU6$1@#3?Y:'KHN=<]=_X'/\;-SVG^MA[^+KX?T>;&'\'__C?EYZW/'B[>'RHO+<]F' MS8N]A]"+)(#3B\2'#WH)@!)Z>(1==.1S%WKP<]R+ZG/0AY&$TH?C@.5Z]FSPC6 M7)'PASJ \X='9_6'&G1 @!QM\',C=!=MW(>ZA)EL28 B%1Z4( +C'. 
Z!H/ MC$%T-G0*B%> #8@Q=!>,5VT1B$-T$X@Q=$=T%HAI@!^,:(!*='1Z:X =B'4, M/FT"B."$QS?X!U >^$,8AR;3%H- "' M@/5[2HR3@:QZW76,@$X CH \B)* AF4N@<%NMFQ#B#R&.8T7A>:(+81V*-8D5>(^%T&8#C^)G MD6YP:-MMEG0!9NMGTHD*@SUKI&7CC_R!%W$A=X]X%(]/B!ULL(_*::&/V'=;;!5IDHC5>KB/>0B1A,EYSWER "60T(GH M;?"-PG__C/4&^H;2,RKF9X M:^Z/.6@C 4N)UW['C\0&? B4>X>2;7\XD$.-8P+ +X:2=)(#BV628HMOB>!P M+77^@>J1L8D'D)=\"9!RB4R2#)"S:0Z0U':C>?9G@9##>>"(('P9D!N0QVL> MD%-SXW6+;"QNSVU3DM]ED8^XB0*-/8N;>SIT,X\Y<\I]#'*:=U-X8GNAAHAV M)8.Q;EEW&6AX ,)Y!&M%@FV(T&XW@_R(MX_M:E%]='DI=_F!&6QF:;-JAP+Z M<42">'[%?%)\WG?@B9=I-@!&9III!&M[?IB*R&I6=S9IFFH<<"-Q"W&/>6%V MXV@)DOEM?@&]:V-V37G;B?=T$E!J:]K*G(,:OMX4 '=;]-F_WS7 M RT VG$6DIAGY'R*?<]IBVA\9;T#B''!>+YOAF9.=JAK_WB':NN-KGQ09G9L M[6EX"PB-1@"]=C, NGI#;SEWUH+Z:40 O 2N!=J"16C!>C,'U8/4CU"&87\R MDW"22HBG;#>33(E& &&?X8(B1&$=F*Y<5Z%]X#C;!^&PVXXBGB,\7 /@HN% MW(W'.>^#D84_AN.(GH UC@^-Z)!"@>J(3Y/MD-IR$FG. O60,X=/DR"-5X8J M $"3+&AO9BZ-C7Q"A0<$C9)4@CV-!@0C,F*"/9-O#=*%:8$LA9:&+X7X;[.2 M)HDHB=.1/(7/@N^*83OBD:QTY9'D9S)_)F_%<0!F!W9&A R1W65A=<9Y.6KZ M=+A]4G D;%1]3WTM<+1U#X26BTR0%8_XB1M_#VIK=\J!AWU*=IQI (NTDD]O M(FB1BY:3\6\_>YJ#3G>/DRYL7W(T:;IXA&62DS9UNGUO@P%R8(F]?I)POWF0 M@])M'V8@;&-N;6O%>U9WQ';N67!GWFD);')[W6W390IO6GO0=89S.Y*:9IJ# M9FQ8:(1YZFFG<%&)96CT9E)YR7'-> !["FMJ<,X"1(N<=-6&O7CF0)(\=8ET%P&$?1L+QX[L>.&2 MQWS,=RR*I 7/=Z8-T8X;@N!G\6_6?:"/* " ;&EL#'91?4V&=FH$ F?6IKZ(\0>5]HGH@1@MH[V8G 9Z/%FHQA7:/WW'C MA%YGKG9";PU_2&=] ;YZF&49<>]M$G^S!/%R26HZ"%B79> 41L M'W\_A_B!J'4C?U)L#8-@<9YXNW04A7QFV65/ (:3M&T3B_%])6XP+5]17]==XUV!G$T>]V3!79W<#!R,I*<>L24 MJWY/>-J39&J0>-Z3!G:/A[-O1G)"<79JD7EHC]>4U6==>]J4=GX*<6T 0GEO MDHYE3G_G>%)U66E( 'ML?' ,DWZ/QW'I>LIWB'9N:!23 '&@=O.,]V9C=M]F MWGH3 G1\HW 'DJYK3775 SD!;&L5EWWPBC%I@6E-=:>,5I29:,"%-!_D M@+QVH)%4?WYFWGL4:CQW6'_L:]N"7'_^DIJ&A9"IAKT#-7AO;P=]T&C[@@)L M/8=S:-5Y\'FKBY!L0'Z;AUN$WWEAA%^$HH>6;)B,N(OYA>-_NXNL;+V+\GE4 MA">1]GE[A;IL9WHMD<:++Y',B\@B?(1ZA'Z$T8L->L*'TFS2B]6+$7H3?!1Z M!VT6>K]LC(1 D565WXON ,7I5 MD?.+56SHA^%L6I%M;3MZ8"6LA/J+8)']BV.1_XMFD0*,9H]%@ 6,2(!2>DN M"HP%<@R,9'H&B/6+QH0T;A*(&XRDE?)3/FW5A$MT:(!D M>FJ ,1TCC!^(WX0GC V,*8R2D>6$>( NC.F$9724@N%T-(R"@&QMNVS#"(=O M.HSVA)5Z>&7YA)EZ08R>>O^$B&T!A:5ZR8.DD9=M38R<;4^,*F<-A5*,MGH2 MA323^GU&B$.)R(-"E 1^I0(=A0%^L8!9:2%O0&?I?<]_OX&J;WB*F8R ?R%O M<)"X@7YW"G=2D.N5=@A]A;Z0?X6 ?Y5_G()$BEN0:8^=?WAEGW_4C+1M$6:4 M9E][?Y3K9E]]M9#(<2V&AXA^D(R()6<#+>"0;8(+C>.0[U\"4;611(2D9J\& M?AT9EKJ%T&; ?]%^X7]W96*"I&92>?>(T67.?W9_!HUH.0:16H$NEIF!X)4T M'\8",9;(@^65R8/K:L. Q6?-D:5D(P'ABM&1JV7U>WMK.Y8TB=R%;F>F!?%K M]HIA?TF6WI4V?."5@XA-EI^"9H'(<>B5XF=1<52!$83[BS2*5I#OE7)Q\968 M?_.5#7WUE7A_71SXE;J(;HV;=_R =(4A=^Y^NH:Q:U:4!(&6>P>!28W)Q'9:E H"66Y/0 M9DB!PG\%+8"6%8DGEC5N*98%B5F68GR?.VB3RHIO 3,'S8H;EB^4-)8DB9^6 M-GR@@EEI0Y:D9:""(8EWDR.)B#:8;%B-=X$YEL"">X&X9>"1Q7;O@42(5XS] M 6]M3I8!?E"6!7Z\EMN#G7KFE7)QF&4V:$'K"P4EJY_@Y8,C4&&GH LA!N6B98D WX=[):ZD:& [(-)@08& M,1TQD+Z6I.SD:X"BVP@ELF#XY"# M)1"7=(8RD(-MIY;2 2" M#(17AKF#$&P7<1P!UW'VBI6%!CUC?]" 7'T??2*3E'YPDW6&#'ZZB?V1OWD? MCW9J79/4=C&7C&C;E9:0UNHGZ.>XAG67\]=3U])8=% MD,!Y"'!B O%U7'M%BR:43'#ID_6$[),FE(*#)(N-9@B'.6%XS&XXCX-^3H.R M?"2#<7@,9EMR/6F ;-=PHWG-B2>+)X((E@V$,FH)=49FDGAK"VAIC91S;H%U M7I9*9X$E(W=::^=?/F8%E/5OUF\#:,5GKW0'>1!VM6B ;.9NA0(M (0!O&\] M PIPUF^W<"QX^H:K9^%W+6:=?*%^Q7BXC[=QB)%?:+-]Y93/;1I^@(KGE(MS M+W9P;+%UC68Z'^I,O=@QW7'\Z+<)KV'O>>#]H8FNE M:8MO.I+]B 1G6X\7<05HL8EE@D12.#L:C=^5+Y0(#J-S\)>S@V1]X'X.?I=K M\Y%Z7:I\+JV]ZAAF759:T>ZEG[(+- M92%OF&5ID)Y]HK D#MH(6^RA=*6 M"6;R@2EQWF8U?N>6%(;6ED-S.6CNECF!0(9>=#EH/H'/+HB62F>*ED>8;)/: MBM!_FG/5(IB']'.=AT25%I&? 
MAY:'&I'=?QV1M7.GA^1_K'.\B[ESOHOL?ZR'5)4ID5>5&H!9E(1,@(2$W'/K<]>+;G$_D4E?!S;Y$H;2.(P(0,C 6()8D'B':1$8PR M@#=M68 :@ ^(?9&!D="$171E@(*1J96$D1F(VIBNE6R 'HB,D2"(9X!S@+65 M7&TCB"R,YX0JB)B1GQPW:;Z5\(2%@*&1-(A^9@2%J7K2E7IM^FFHD:J1D("L MD3.+TQ_!;L!LMH)IE."((9?=?*B6A&78A:B6V64FEZHW*V_NCWR%J9C[BCB7 MHGUZ"!I]SAALF)]ZP6Y$ $T"2_7Y.9RQSLY?@960KP6XL M:DIQ Y#-:Q4!@VB%:%Q_>I/K CJ#4Y3;?6]HY@*::PQFN(/"!-)V:O%U&)+<<""4C8F?EQ]QQFB(; 5J#W:!9_27$7F2 M>.5Z-IE3 <%W^H1Y98T!1 +P9SMX[XV>B_YE(6]QC9=VP7SL@MR4^Y/,=3E] MYV^I].6I<>=V7K'AH M>)M[:HW&=])E\'[Z=@.4/6@%E'=O]FEIB@J4\(EX9=EK,23N'D68>BJ5.I@9;K5V,G8E:DX!X&I#?>V1^I*KCQAQ MFI#/?'V9U99;@96, :8K] M@W**R2)'9QJ&*89FCMF6%H(UF&N9/UF]94DCYMU M"8T5EBAW1IBN @=M]):A@!R6]Y9[@@@=F8%( FL :FI6?81NG8<72HZ"3Y9> MF*INJ84%ETF//)H!BZ>%)8X@D&6IGIOF"27ZH50 MC5< 79I^DY>&0X !CUQN&G^X?/2* '-\;J-\+XH4;RJ9))IF?5>)')I*9RV& M )!_:;A\Z72+D^YF^XRA93UR]GT@,J&.FY/VD51W]&9;:("*%G*$=7)[!6IZ M9H)NY7$SF?R7#WNFF7UNL6T[!0YS*W9*;VMPH8GGEG=Q7&Y?:).,?YKJF;27 M68OX9EIOK(:8;'::SG589?!GGW ]:&!LLVOA:8N%/75#AP]R(X.'F=-Q((?W MB8^:*I#OBIT!E'C19N:7&(]9=8Z++075 BYL27BU?)U W(*#:^9D&G/ [9K'9/V M9YJ#JI>VDYV#L9>-B2V92V>PF@N6>6:ZEWQPHVXA?J67674Q;U]VJ9!5:^)\ M0W=?=0=]L6K/"#-[!):?>"9OSY9R<;67T&:%F8QPEF7D?-*35VH19Y.9.6H" M<(EX8&J(B^R"L'DZ>\F:S$XMO\W6%YQVC8<6D 9'DY:Y::97FEDS9[R)((;BQY=W"$976@9H)IZVK-E/)Q MRI+2E.IT&9+5 X1L%)+D@LYU.8/=:2]P<7X(<-T!4Y+/:"IW0G$G>$EJ"'.5 M>,AU''[79>Z49H=M:\D%3GOAB:UU'GG"EYI[Y'%G9Y"9BVCN?FA]Y8Z>AMF, M^X$I:"UW]I3'!9!F%F[O?1EH*7[+:FAS W(SA[22A&Q9F04"QVF; 42+E&CG M 7Z#DHM7?]AK@(IQF>V7D'GRE >+&'6A9?U\ZWU0>^5N[&X% NT!TI!.>)F( M+(%):M^0!GCU;FUFMIG?C2:9VIL7< %Y+GGD;A-H!F80!>9^;HO/;?QX#I4^ MF0MY%WR%:E)X4XE% JEWOGU5F;MXC7>!FW6'+&X9=B5S'INK9>4%DF6%X1(MI=S^5:X1!E:YL%)&5;$!^L8M>A-Q_880= MD=IY@YBN;"&1YW]1E6V$4Y5OA-=LQ8MSA/UY1GI!(%N5!7J4F,Z+7Y5'9V&5 M"7ICE=2+.)&&A&B5J(1KE8N$&'IFB.ISYFQ#D>&+1I&4;.2+UX?GB]N'G(3> MA_MLM)@I;>.'MYB$E;F8A)7JAXR$[8=>D>%L$6TD=.&+PIB.E;6$X6S(F)*5 MN80&C%Z@.(P82_A-.8Q81WD1*,-FT@,LN$.FU\D3]MI)7>F-*$X9AN M>AZ,Y)C<;!N(=7K?A-V$=IR.D5AM*XR1D5QM*XRD82B(N96&>LAFX@IG;?:8 MGI'09C"1L&R0>O6$/(S&E7QMP9G[A#>(0XR@>LV5HWH"A2*728P%A:61TY4* MA;!ZUI51C ^%V96J;2Z0M)$9F:Z#X96TD6*"4P!0 $P 20!4 ..9_F7@9<:6 MF7 0 @IHP6W<<6UN=&=[:R*8R''MF4&3\W90>H7D!FN0 M\9GK@=&<93STF6UX:691I09@R9VXO1)@6EL0&3X*_-AJ6,)KVEK1( MI0(AG9F!X@7<:%^3RH0GG=^5/YH+<9:6Y9S.:F%J.6HWFGD >VNJA6^2+9T^ MFE>87YA0FDB6](EDF,1S&@)*F+^VN/;5"=4@#K:E.=59UAFF^:&6P= M'(*3&G_*9;6:GW<3 @>5F8/#;MEN$W=X:!AU^)K=;A.;2F=??@B0*V:097QP M:6C:?+]U!IID<]B)((/.F0IG00=6;5.9JV7 B3!LHV^&?N6N4_)-O=U%Q5P"T=T]Q>7"+G=>9 M56R9G8># 6DNDJ!_O)/#B:*7O'Q&AT&'3(<-F(5Y"YOV9XV=SF[6:+2,-G)4 MDB]R;V6]<(!_T9Q9<^E]=)EZ:7ARB0'#E\5GN8-$F56+AW30E,U^OY0NDA " M)(> E\9J*'>Q>)J9W9=\FBUYBXM;9W& VI?;>REQI'2+G2UFKITMET%I$P&W MFJ1^Z69JF:Z:RIRPFJ%E?F4WF]IVUW<0<%=J46[C;NM\S9S,=;Z=N7:?E$B7 MS( "#,-P7)(>A1F!1@ X@H:)EX.+@P*:X7#/?$EO2HL0G6R9FXY<;N^=9IGU M:/II7'EK?75N?G<[CXN%9'*6DV=RU6;V<==\; 6$I.)56;'G6UFY07( MFC>)[2@I;&]WD6ZEER.2_I=EAM=\D'P#F -^H IY;U6>H8!;FLBNXG+G')W$VO;@M"<2)[2G.J#-I@4FV>> MR9W]9LM\FVM9?"B!A(QF<8.0(HN=B&2)EGV+;Z>:0YXLE;"2F(LH=\%S9I/7 M:2!F;9E 9]&)VI>EGYPIH.7B_R"H0*/A;R<*P?3-&*"DVTW@CF%H7*P MFD!V$Y*:E\*#?IHV?D8**GK+GE:>HGTS!_1EQYX!BTJ=QIR0FI!S"VC=:=B2 MEFM= N5Z-X9(?_5K'78FAQ=N^G6P>>R=.W<3 MD[5R\WK#;\YU5(LD:R]I6H@DDRB5NWHO@XB7OWHH>2R3Z8(PE?%GH@J[;H=^ M8"4\>B-]VY[4<@5^$I\5B7F4XYG*G&>>$H)Y"1]U1)/EC$:3^II@EBV8TY8* M=U]HF&:^B92#. 
,-:A2!@I92D_&63WKB"B^:MW\QFB6= M-9#7;9F!1)=?DYV'O29 C4N8K7=/A#^=%7CC:D*=\654AC^?77UG(Q67" Y4 MA@%^'9^REF*:M(*V@I.*A&HAEUV=>VOLA6 E)FSICJ^4>W _>Y>7P9-1><22 M96N&<(B3F'0M?CMS5'E$>QYHCI/_=Z!U9GBUDXR4=W#=;>!T*'>9D^)JF&S[ M9;5K")KH;;R-4&Q9!-]T+7!^;:Z3>9_T["7A7A6A[M^ MD'":9J)[4FH?:8YTP)-@:X-WLGF\F;MKK91$>,F3A&O+D\I]X'1M3VM/BC5]['1=>.!JEI^Z?5"!NI,1-X3GB2E,63;)>KG[9Q MGPL[AHZ9R'F\]W[)"X@=1WG67$G[V9;VB_F:IH_WX4=H%GJ'47 M=N!X*F8MGW!R1W;/ CESV7789OQ^UI^^9;Z9EI3T3B'%C=IE]-GX;'#."4H\5@8U^&ID@:R8+F7*=+]U^3)NXD""839(6 M:+UFN)E"=Q"01'>+G7&>;)=3!UJ4>PX%H%2&%X%6F]41R6;_G]U^4I>&B#Z+ M$&M"AE:7_G& E*^3!)JD9864<0+P!8!XNI\8=9AF_'%;<6IOV9\Y:HAH?9FT M;9V4R9KXCB6@9@&Q9A9R+9[VC*N?A8_ 'JA?)AZ&8#H _D4^<6H!PE9*$ MTX>@D!F UH>LF$N169Q.D9V$28!2D7Z549& E2R N9@L@&2<7)'J>^Z'%G2+ ME?:'_HL%=/>'27K'F)&5)73,F'.LZ898#0F'.1BQAUD66 ?)S7F!2, M.W1[D:*5W)A&='^1&XQ&=&. 0H#6A')ZAY$BC$%T4W3IF.6@LY4FB)2G#:,A8 A ?J89V>B>H!FHY&RG)=M4(7-<*5D!YE_=!)0 M"IFAE*1LHY1H9VEUCV:N<<5R08:IC72:FFJ$AW!^BHAI9X1SIY^(BPENLVDY M:F&*;7G[E'UN98<6=QY_D&KZ7676GJ\:%1O?GC1<5%NYWKYAG)[ M1)2Q?>.4(:'9?09FHG9Y;LZ9$Y.U9JZ/^&UN;*EJJW=R;IN=D /A:#X%7FR, M<"1H1HN.>7( H&_!E!IJJ6?XAB^?M90BH21SU6XA@ZUMX&K1D"9^N7M2H-Z: MP)[,AHMS5VJ<=%6@_W@D;]=T5X=P=4.8Y)^7>,)WN@KD>4$ B''1>=-X+7*^ M=1QNAII,AX%W=6G-F:9W1W;]9BMST641G"EY]&Y=N=HIVD>21Y@Y-R:<-I'&_X METU_Z)1M9W%\5I1<=5ENYG_+:#)S80#I=:.A!0(2=I&:+I*P?*%YGY>T?E!H M8X.+::JATW6LH=5U4'7H>!=*:Y".H?MMA'.?F]"=(G$,<;&/HXPG K",H(T" M:SQUD6L(9B29'&O2:3.&8V^5?:N7"7%AGEAQ9W )FXEV(H^MF2>13WS];;69 M$F80FUUU>W8-I"'7$H3AF)9O9 7 : !:BU-U89OA@M]F MT:%$>\MG)IG_?L%^&GZG9_MZ9FQPA\Z9W).6>U:!Q7-K=Q^2'G[ND>9G)) ( MFL":D6JUH=.)HID-;/.16VR)B:=JH)LR=R.!-GM;AXUO$9,SHHN3J9-3=5:! MMX7R:Q*B\G6*E^24_IQ#H=F4)V]B9=I28P)Y"=L5NCHH6H1UNC6C*>\!X^VV^GR\% M>'DETW'"=))W:7DRH?Q^OY<.<\1QGFL/:N5M1 +KGIEW'W 9:$9F M:GZ$=[QX.J(IHG^!/6YGH41F"(_HH16B.:)2:WB9J(COC9EF3GGP=1.BC)_5 M?>AJGZ%&;*-OZVB&FN1\6FQE==67" Z\:+JA'*&NH6X U&@A<19W1YG>G(B# M2'8IH'&>-)FYH5P K'6MH=^7EGB^H8"3KU(I;-Q\/WO,HG:A;I](>3)V^V7S M<@APNG5SH=UEOW7>EXET;6>XGXRADF=A9FYH98DJGC%I"WU_?M1MU:'6=&1S M+H.JD]UUFH,F>$.ABVK=HO64D:$&QF4IJ%F>V=H09E< (^;](;VB-1H>J*PDWUV(H=B Z&;BXP;?T%I4V>TC)=Y M87A5HBQHD81!8R?]XY1<<"&]Z+F>].=$Y GH@]J%X.A?KBB^G#K M:,MHN)FZEZ>B< L =[HV[C ?=^:Y-_HC1Q(W$@HQN577LA@QYQJF5==<^4 M@*(8HUJ'X6@.=R:C(WE.GO-H8:*Y9^ITJZ'-HG6?FW#@DS=J,*/Q9Z$"!7(T M>^Y^'&HCHB=[SY"0=&>@"'B ?QZ?5V>^>A1JH8%9ELJ)C&DN;K!U47%5 *1K MO)NKDVQYU64K>8E[57&,=,J,FW?2= R=&V=6HVUKJG4\:P"3 I!AIYJ MC&7JDW5M>I< B1]WCI8FDH]]M6V(BQ"CDI_NG8^C9&J1H]UI=(^ G=M\$VZM M;Q&*BZ/*B MX2J.TDQ0!B7^Z:Y^CA'TQ"X1J%7NT=>@!TFI&H!F527@NE[QK MQ6>UFC9RFF:DAG=KD'F:E+B?AING:"II%Y2$FRUFI7:?9^QWL (NAU $ VF, M?7QF2B;KDM5GB&6Y9_9QH79] 5]J^)>FQ,"6WPX>9>CAJ%P?DV4 MS*+\@IARPG4=H4]G-HO#<$B)1XX";["26F_*9<=G_WAX +MI<)L*>#%NWZ*[ MHWEYX9WDHJ%^H7CN:OIT%VR'9UUU :2QGDA_,W/R<&.&&:02>W9EB7K,FIF9 M\V_F:MJ;17D.?-RC#8J%>8QU!) 
$<2]YCWD+H-%X1*-!>T5Q(@N[C\24:7$K MAQ]W;7&);M>!7&[)=BEQZ 92:\M[ZG8D>UAV[V8OE[N=&6X2C\Q^+Z+ZAI&4 MM:/3D[>*MGV.AZEQ4Z)H 6YO(G&?92N'-9!\A8YE4J-M9I!U4YAZ,0JD;G'3 M@91RAYLS?^%T1W:D=?1U?I"M==.6UJ/O<)UN<7M-;E%J:XIX:X-UT)N^9G>= MU7J89P2'@V9+?7^CC7TY>>2?NWB7E#N>;I[AG:2=9Y]X"=I^ &^+D$>DJ9K( M=FMY5Z.]H@5VRF[I:#V9=(]"@\Z9=7DZ:VR?V8SI=&EQ6X#9CD:D;W&S>\EV M?FVP?$Z@TXP*=L=HDX]\AX1X6:- A^QUHYMI=>ETTYQ]7MW]X;9Q^/GZN$&AZ>8;(@\Z% MEHMKBTF2X&FA9:>DAW2C 7D"N7YE:M&CJVBP?+^7#*&9:2Y:)1G@8>[ MH5JC>/Q^I*19=XR/ZFF$>7!OTVU(AP)U[VN# M(5X MJ)DPFS&AF'^>>RN9]6BT=4-]BIJ&>Q9I07?BDG26%(=69_1T"79/I#U\"YL( M<,$%97;=HY)X8Y@I:3Z<>H1 G"EZ-)& A)V@*6VDNC !I%D>H*<9WI?@&IZ'(R#D8NHR(*5G#9M[:"9G'R 96W)#%!QBGHUC,"5U'.BG/2$DGJEG&B%R)5 C*JM25MIRA;71MN9Q#B%,'W&5*=[>6CGG-9Y)^69XBG0%^ M"'G2=#(![YE2'6-F05J@F5 9P:*AH7- MHOZD='_K>".+&X(TH1E^RZ66:I!]6Z1UEB*-"Z"6?-QE'0)$ KYED7;%=>UP MX&HOHG6-)VH-BT^D)6&X*O=)QT M"(]]D.EI>*)3 AY.91\ M:UEN@9+2C:.+/0%-H7YE.:; AAQJ1Z7==Z9[4&81D#-VYG!#:]IH8P6%:.EW M6'ZP:*9P\'=PG_1WM7G40/T+$I*F:OVCZ)3$9_IW6'>:9>*?0&JYG_YJU0+& MHJEY8X>+9N&;.6I=B<2(-7B@?X]QQ9MS@S^'XW5BIA"C.I37<#BCYI1]@SF! M<@5< 82/ 7RT:KBD3Z4)IK)E7Z:5:N*-RF6R91\!2I3\91*>/'VQ?)!QWVL* MG!BE.*8)ID, =F>K;Z0"QXX?/%X,)IE$ M#7Z%A8>:SW[*9="F(:!?;K6FCH.WIC>2NW2[IL2B6*;<9;ZFTF_& B!B6VZ= M9="FZ#F")7=ER94JG>J=;;BIP M9*<^?D@+%@/40&*""'FA9>R)BVP]9[]EWB-,"BRGX66G;'JGWXA-IX]Y]WM$ M4H*GF8$(>7N=6:8H>^.C:J72(0:>QIIA_PG#%G>^##Z=];Q&G M*F<3IQP-!9Y)IWR%( Q,I_V)%Z89>XJ7-:;=:4JFCW"KIGB=TJ:1=O2FN@2@ MIT-F1*=K#MALH8Y>I^]IB64];@FF+@"[IQFF:8IJIJ")AWO/IONF:9GYIG:G M":"$??P&*6RU9;>27*(L9N:E%Z<:IJEY1FE9@QEY;)?:ITZ;=J3'=$RF@J;. MHG1Q?&:$90NG#7[<9;)EZ:=\HIA],Z;#;E9OCPC?B"MLIGPDE%AIRGY?B_., M=*::;^IWV:$@>^>G=Z<;@A6#":;4IRQ\R*>IB%2FZZ7L MIF>;0Z-MBL)G76B2CYUY&7EWCTT;CF7T:,X#YFIAFTEIVJ>6?'&A!)/EC5&C MLWEA>RQJV6Z'DW1X\76'$,!L?J#9\(JB?B*MRSGKMIXES<*<+H%QQ M1:;/D"20MY+MGT:9_G'MIRQH]P:-:T6FA'TTEO)SX'0K=7>9DFB4:.QMRI!J MGH%X07B(:9IO^G19HR286Z/->!-QM&W+:HN?NF@G=B>58F7 (H7JD M2F>OAF]VOWHLE/-H67\0G^V3,Y/)GF5G6Y)(B *@BI(%?IZH%8G:I)%^FG5[ M@MXCF8'.A6VC5I;*:&0\7 %ZA5"7\)GPB1F>0IY*9Y,7SG!AEBJ?X)SWF6P/ M\X?V@/F5NXB+D#4 O7A]/9N*^H*% 2@";89,"J&.)H11<2R:S:@BG3^?))WX MEM*HF8'-;YIEC'(_+<0(I)8!?IZ%:)"BA4,DW:A=F$V:E98TG0J79@#= 6< M3I,L"$=G#Y?NJ &+5YIJ<7QT8H*FJ&E_K(8]FF>,^:!NF%Z=I&5& )*!1P!$ M:AZ76GIIH2&73IUIFF>--8E>GX)'Y#<<*=F MQZ+>?/2G^J3H>*PG!7(V:&%F(7'G=(M]PJ1[I*%^IZ0"ASBI;@!(AXY]!J57 MH$:HY7A)J)V9DGBGAR^5O6LEJ2"G:68H>R9F9G@3H7ZC_J-%>YBADY?*AJZ/ M?*:(FF>I&7Y%I6EF3FI"<7>#2JCCD*5D:@H7A,,(+FW6?I21F7)<:W0!I( $D 1@"A:G6D *8_:%Y?U("80!\EKAE!:9 MI5)LEFM- $5I$(2(?H,)KJF6?KQ_>GKWI7Y)5O?WJ45PQZ>IYQJJG@ MCJ<&HY*WFB!_:7U]B*"=A]E_H(>%H("89:6"F!^1B:",H.E_\GFY.I;.8[HNUF+>@69&YH%>JJ81;D>R'OICOA\"8 MPZ#"H/*'Q*!!@,:@^X?)H,N8XW-UG-J8J:72F*NEDY'4H'F1H)7:F+.EW9A@ M@(&1WZ#CF->$Y9B(D:^5Y:"/G.J8?'J0D5MM4X#&I85ZR*7(9B:4FY%J=+^5 M,H@N /:@+F]=B$!AI=.<7D"K:&4 MJ/UO#)\_>\.4FJ.(=L0(G&DFE8 )>9=#C?.):)_39KL\]VBRE*R,%*&#=$)J M*&Y$ JB9JVO&H\UEH*/097262*@TJ:./0J3\@FB.!IY:CKL\OIRSJN"5N:@; M<5NH> 709EN;T(Z8B>2?#HJY:KP!S65'>U=@^JMZ/%:8BD M+ZG#JKVAN68=J/2BM79VFE%WZZI3-"0#; 6KJ/F@0I1@)38?$ D5B0B@19[- M9U)[[X.\G"9GE8) @Q!^&7<%<2=O M,*8QI&UK'Y+C>.B2/HOOBBZ6CW^":=T!$'\H=P!ULJ-[B#RF.:.2@X^D^8(7 M;+9G-JN#I$BI!:;OI,%Z/Y;QEV\ ^F6\ 0YF]Z?299.+"W*]ET>I&I5E:VZK MX),AAYIF/:%+G_QU#9+1FB&KS9D<:B9H@&SW=:=PC7'%HQ)GQZ/CE%EI;(WC M=>NFM&K#?#*C'&K(H6^=: "(:*)GBWNS=7 !8!%X!L*$ M==RAZ)%0I56DJFA]>?JK5FSS?O^1)JAC;WYXKZLCE=!_U$"> M=I M5Y?4GCZ)CJE_IJ2%,9 7@5.K+Y16#QRL 8M;FP:+0WTSI/FJEGG+:WB:N*;9 ME#"I?G=6@:/^JYW[R@K=I+Z9;=H&"MY1QJ-EZ MCH"R?%G?'/B>'Z9%@/SI+F> )8#<7*#$J XIU>&I7<-;,MHP(8P M!5@#(@""<8"A]F?::!(#D&\FJP9]^V^$J(IT_Y[89IEV#:5G:H]G%7E/K,V! 
MA8K*D/B,5W94K$T"D(TT:UIRRV")*)FIG#&JZZF-=-)M MDZPY :Z,-Z)9=7VLMFF9EC)RQ*+O@*B2-*R*K)^LO6N^F=.K_9M&>_-RX)I. MIC8%WX#1:LRK ):\GC*>7&G2=,B-CF6:G9:LYX*OCYJL M'Z8?;YZL6 -WCU0(A&K8@CIO;6?0CDJ2?:L(JT5YY6EH@PBL-X^-:%&I*GDR MI=ZNH^EOF&\S<&J/FS:>&=NBVKNK.=KE&JZDX&DYF[? MJJ&KX:K^:2RK^0&/AX-LSP+%@I%K"),WCV%L=6X%9G">YGITH2MF%J2_=8R: M-!^_=W.D3W'Z:75I^:O. EMYPJL/J,9UC6;S;9^"2S9\4?9JB(HM)A+JBPGI5F)2I,!L&!IQ] M%8FT@RA]DXDBGENA+69TCZ5V2&B:ELAT8V_KES2IAX!S;-EZTGP.;UEF69B? M>+>;]WHQH -HZVJH?'IV86]NHC:MMI,)<7]H':CZE%^*S*,[!39OBIY/;Q!V M5P/C:ZFF.)L_H4%JO7WI;;!^XVC@? B;0'_$JQ)FK6<<;MN4=*(ZCT5V@W6H1I[LW(=;+&9 M**2+@U%P<@#3JY9K-&LGF@5RIH^6=1ULHZS@HTE^S'Y.?G^M?P&!EV9IRH 5 MF&]M3:T4JWEM5*L\>I9S6IHVE'^&$*/$!MVDPZU2I^29[V56D/J".7\@:RJ. MT*UB@EJL:GM';&EF)V8@?<&IV:W>C^>5()BAHH^C8F;>"H))CEF2.K[)=%>3B&ZI9!D\NG?HQ+D-6M%W4D:+,&5 @G$GR%F8'N MK<9J *&5B)MW 9W[<-:M^*S0:)2L+'\5H!2NZ(]IDTF/RZ?5?@&@(:AR<>B6 M")VO:9:,(9]W 0*$^9QAA8MGWZT!K@8&-ZY.EN&#!JY B*J(7 "XK=5EU&B, MH@*KI8-\E,9G*P%O>QX,0WW_=IF?;FY!H=AF*X(P ?VMT'X K@=:-9!I(RZN MB)(0U5/ZF1>"$::S!NJ!7Y_/@(A\'7U4:":N276X@6V6M'4 M=?>LRWTEKJRLTH$C9S5^.Z[!: GSI+FK.&@_>U6N$&@U?@"N?B7)9A:"0WXN887% M;QNLN(& >YQO2*[GG[UX:H7UG%.0UI -G<.H]:V 5 $-YB#".V P8BMGMBL M='&I9^&N MYZ@_<4)PB@%"9LMRE2N,J16)[JT#D3."_(#R<[:%1X5#A".=2F>W9LX"?ZYU M 6P%'(89JS"N?0$QG0^J('V"@/^N[ZZ4K&R"UF_QK2N-\JYM>O6N&FQ%;3*6 M7I@&B?:M,)U6?%F!R(UV!1*O1(2F9=0"CWM/A$&M^ZT)B?*0FY:K9BYMC7E# MJQ6K>I[3'_*0RJWBF?!M3P"C.6:2@B4 F"FOVY[2:5"=,Z_7K=UG+J_(@TP MZCF,"6*=OV9O#7IERW+>,>N06ZA3GVF:KX*0@$\ FH(@??R6>E['EII?!Y:Z4VG&VEW&PYG':$6:ISI9>@=:4$ M>D*<7ZI[I4:V6 M(X#MBS!Z_VRDA(&5-WICG/>+HH1FG*V$:)RHNF8LI7KF+25ZZ"WE9BM2E/XRIG(%M0HS8I79TSI5(C))M_*#>I0F%?675E>*E#H6U>@F958RA90R9 M/WLZKT$ -*^*CO.6YY :F3FO,J_WK]RD7&L(E_!M0:]7KZ1E4 !%KXU\#QS! MG*"6":EY94ZO4*_X;S\M K!5KT*O6*]:KS9IN8 QA5^O1*MFG6.O9:_(D6B- MG&D5"M.J(&\8C]J7]'PMK#>K.JG6)1:PHG@IQ0:#';?6,1G"<=+"&]ZO. M>BNK*J[!:-67:@HQ:0^J?Y?#=IEJ>&7:!K2>4W?E?J^LE'4,9I=]HHLY;H>V).ZMY<-ZBO'76>-&7)ZW4JZ!FGZU7;'*K=FLRJW"NK*S&DT>; M\Z7DK--F-X>=ES6PX6_T9LV0P:,LK,=Q_79\(YU4HE[GXJ4-*L,9OAR MM)00AUU[>FZ ?WZNC7QA 0Z!.6B=IW)Q9F=RC=Z M@;Z?:W<"I=BK RW\I'QP)')7:AB;*V;(>89X'FAGHI2;@6RJK)![E*RT;!4P"";5( [0=*CEN)!JMA ML*5Q^H!IKD:GI0(/L4.%[:TFKKJ)IXGKJE0(^'\>)F*"8*_V>_AOF1H&L2.L MRZU3EX!E&G^RGAMV/G5">%:!$0@S>^=Z!6H&>5"CQVU]HZVL0 M GMKRWQ7>'*B>7E H;V)O0'%E.>KHV6>^B&Y6 9<>L'ZF_?9MK=Z;&D[-& MNX\*FAYP5*)4L7BP6 ,*BNM]B(N(HF&Q-&NB>4ZQ"7E2B_YQ:Z*4I-EK!'BF M>.1M]6M6L7< M@+#DA^F&6Q!![QH;[&K:1-[UIL6>U&;U'+#=G&[&#<[5O<88W?X:Q(+': M4">B'" ?\5ONHEHJ/EV.ZD-I+&)(J]T:!EJF9/[ MG0N;=(]'=T9H%W$7?-F9P::(I_VI]H6/?J>H, ?RI;JQ29T3L16Q:8W'IJJ: M0FPFJ82/WJL#GC>P2:8ZK6*QH9MV:RQJ,VQ+EN51"HG$K8^ VK'M'+T"!T$! 
MB^1FM+%1Q#*#JI.:Q7WO%;^2=!7*^G3NI;Z$Z?W.#?8=6;-6977"%D+9G M8I#GC<:N8'6P>[YM1ZDF9@V8@9>W!Q.?PJT;LA6)*K%$C4*%'K)LF*9_>0)L MF=EZ5(78FS:5)K)FG@R&!CWME>2<%J]*9Q(+N(%)J2]V_V_IEOZ#RI8Q:2R& M:8@D=C*RX&5-'L.0()JXKN6JHJ3!:0^G&IT1L,ZH.9\Z@9Z @ AV@ONOS872 MKB%OLP:+&/5EWJA0F N&_)_C;%BRY:CZ;4"RH8&JA=A\7K(3K^:H%:]4L@MQ M;89V6H1J.8U*A *8H8 @LKI_](4)68WOJF:=&[ KKWJR(; 5G\&M=I[,/,V[+>@&Q8WYF MB+.,!IZ= 91^6IX:H'J>)Y=:GK=_"* JD#*QTYZ9LLQK]FJWI@^B>IDOI"FN MI(M2IG2KFF]\=?Z>&'=PF72?07GQ9QD*;(\3HD=[EIIZ<11^%'LM<,I]:Y_" MG^IIS:-Q>SL!!W:OHTRFLZQSG=]HWVH2I-2Q:RS0LA^R>;+*@^6(IPE]GO6J M,+$JD/&E795+EJ1]7 A8GTZ(A[(;F?J/R"):FGEMOH ELK.NZYG-97MKS6AH M (")WYTKLFIR+;(IAH2!M*@3KJ*R-+*P@;:IO*@OF&2N28\Q 3BN7(AAA0>S M9&=OJ*6NR:A0B^V#W(O K@^="[/791V:0W0?FFB%"ZZS93>R2;(UG[=QZ(O+ MKO"6%Y;LFOISAY93LBRR3)B4>\R#^WSFKO>270'UD#Z*,[/=KDJ?TVT\L\QT M2;.<9]<#\:ZC@SB-98:WA3MM *^DE$N?:+):@DV?6)M&92:O6H$;F*26^ZC[ MLJF*,X*A EFR]:H LW^&?+*HF:1M1P!( -QQO(#:@!&9;YK?BH6C>Y.#LFJS M+(6);7&S(Y?,D0NI?ZD4@8YE#ZD1J=%E;+-( *6;UHTZB>YP$6D.JZEN(*O& MH1IWX*QCJ[JN<:Q&>U=K)6EP?F^6[FKYJR-J=:S,<'URMI[59T:+A&KUFI*' MC92,B7]\H;)ILI.HLWO8=7AE"+*99L8D>L[J) Y[X MFJUOJYV,>5*'R&V<=)NSJ)GRHN.4;(U&;I:FJ:+RH6F%:)>[:J&G#:/_L'A] M1J9 9QZS?&7LB4IOYP%(=Y=VJGF@?Q!K8Y@WHNYJR7DZH^EV!G_/LZ9JZ+&X MLWML@&7LB<-[CIU]EZ"M'6PVL/]E6 -3=E>L#Z/*:J)OG;.OAWZ@$Y%[F!.1 M*IQ)JMQ_M7-+E1Z1KG.IAU"5AYA2J@^15*K$F%:J]']9JI"86ZK>K:@GZ_RARV IX3IA_>+?JKYB[^8&'3-AZJO]H?&F&B1HJ7*F%%Z_X>G MI7&1F)70H)"J"(C'A N(V)@5C-B@9GJ7JK:EWJ =C".(NJ5I@)ZJYYBAJNB@ MS*_OF,ZO)HB5D<>ETJ_(9O1E+HC6K\VEKZHV **1UY6YG.F.NJJ@G$*(5(R] M>XYG4GV7G9UE\H;,:-<#!WLZ>]U\Q[-(H3^'^YZBG8!_U;-@L%RI M,F_A:/B,08=6;_!L**7$:I>S(JO@LH6K ')T9T=S_XG_I'1PSIIR<[:K#&;B MJEVK1WLQL.";J)U&>^AQI*&;K/5G-*R]=[-JQ;-#4]G89:D27 M(:N0LYVR8*D[L-^),ZJ=GZJRG&[AL[)H]K+_C=VDW)F-?$$'XQMA?39\(*"P MFC"L5FU:;YAFF9,IH%.BRY<:?B">\Z(LI)>(>)^^;8N+HI^4GY%H+Z!(H%.N MUF>]GQVIQ)JW<,EYHI]!>YEVY(##DQ.4!'D=;L>?DZ/@=!Z>L)^=@PE\[K#+ M??]^]&^?? )_C(],AY!S2W]Z:#-U/+60<;Z?4FI!M3QKUYE'9^J?6'OU:TX! M?&8:;%M^?IWZ:=B2XJ(+H1=N"8,=I1^M96L0>WR#BW0>H_2D+ZQO9:>KCF[\ M9GA[KW10;^I^97(M<-9]ZZQ!>X:?SJ-/BV, &:WY@O9GLZP@I0!GYK+QL%>2 M68'+3IV6)9;@I.RRC7P22_.Q+[$WE#Y^MI=W6@JP8J?,GSFE$P(#E$AS/:49 MGM*>\6_9>F.AQWP-:@)Y2I*EL>QS4H8*L(.N-8)RLB6 !@8*L%J;:+,)H%%G MM+5NAL%EO;6(DKJU(#*;@A/"_ZRQK6:M8A^UK3:M5J;\K+(IN=^!*-^A-!ZL+-TM*]^HZ>T;9,!;FJZ M7#'F:K1R3E:1]J7]H(Y]?9S%HJ6S$F#UGUV6[ M;R]I@9(9:E1O.G-@HI&D#79:F;IJ:@&PH;UE8:S=:$^*G[,];!IVEZPWDMQ] M;W-G?".2$7G3 E6420&2M%]J+68_F:>?96EZDY9F2:X-LE5Q(I11;YZ(!;9@ MF2F0LWO2:=F3,Z65M#^&^G7\?@-I1F8H<(F>(J>2>]MJ/+:\;S^V Y49;F5I M@H-%MN6>R&NW:$]SIGP5;Y.TDH_1?/:!'6S)ACVL:@$'I%8$)Y3-CD*G<:2, M:;!HPWREF4\&<)\5MHZPH*U0;[1M;Z:DLW6=RFGC?DYX YQLH/22*+;];AMQ M?)?&<+"GG:0H /ARPI<$:T9N[YU&:/AK3WB/MA%[^8BEGH-N6*XSD[&DB[;@ M:<60?J3K;S]H%*.M:Q("+0!% 0X$KFY5F5FVY7LY=<6D/W=R HQPEH,^?SMJ M[*0[K&6LK&[G;M:&+7TDF@-Y'Y-VF[-R(&;*DVQK*;5::0>:Z8TK9@AL7W@? 
MI6>AAZTN?_AT%JR5FT.C8FA@K-QH_WJA?V6HYFH1:;:,RW^B>_&4II**=>I] M=',[=NITZI>"CQRI'I(/>-JF<&Y%2K%V^FAJR/%&;XE]EX MP8$=;&)V_:RH92UL:YN-G7RV:8]':F5OGY&>Z^UM6\!AVQK,7W9:/QXLV_ <%-] M%*8:@XRP-+9> *@ MK2VE26YK=TAJ774(BL4#+6R]BN-H6[:D=>B-!'.[:S*',(&,J)T"5']1<8)E M-VYS=@V3,)+*F^Z3R'8K9E^V)G*9>#B5-:K.:\&=EGM7MV:C4*Z4?[^3RI+]M1.3\HDJA RK@*PWLC2U06I)MHV4YJ0S:EBFRX#H M#1AG/:H"M6B8'JA=KTFK:'_]L;]X;Z+/MJ-PB+27CT&B)'4=C]4"FF5HNO2IS9BQ=Z%7J0KQ]ZA:5QE>9LU(=6G(JE M(WIXE5J<[(NBA)"E>7I?G ATH*\N@**OJH1HG)FEIJ^;I8&JG:6-E:NOQ:"A MI?N'HH2PKP>,C*JTKT6TMJ\/C*VEJZ!]G#)ML:4V;9:JA)R8JL*ON*50;52T MK956M,FO>'K I/B:Y09F)I MPYO^= -QH75?K"EV)V@';HB?DZVLI""&4]JR.-H&>GCR.A M/*P3;R4$AK8D P6>M[9(I-!F" $7B]>;#'G1G'9]>[C195EIYK9K=SAR[G:9 MHMRFI+AKN&&9*0"QC9*W/@6\:>>P.W*T;!S M<)U@=INW$YZ(?0RRMHPR$[4<>Q2AHVF,=L!I*6^3 M%^MN<+C8J^F.A&XDIHFV$*=]=#)I9[8I<9)KK+;G HJC7WOHL5ML.::U?M>X MU6]E JK-8$GD2RW.+;>JX.VP+@U>(&BGF=?>%EP^JHEL)>X!)^S?::W"F_W MAEE\CYLADP2@N9;7@PEZ@:G*MZ61U0'CD-1L!* RE@&&I9%CC'D(@PD]N42$ M4YW$EL.929Z")9D:1+FA@"P#ZVI+!K=^/G[/ITRYMW].N=%E;6[>!6)L)YE: M@HU\UEM-K$>S(9; !1"$$H+H#4F0[I43FTBY1(UOJ&6Y 8LL _5I*&RX@28! M:;FGIN.X$WD>:1BL&J2?95F4#I1(GEFY$82L=A"S:(AAA5"Y9&JS!M,TK'9, MB6ZYHB4Z@AN"+)-W O>.3YY9!.2XMZN+LTZ#U*.R<#5^6+E"9GA_?!7CC+F( MVI9'GI!H?KD"CNV T9RX?+Z;NWQZGZ=\$@+Y9VJ7)6YFD'V6LJA'JF>Y0&=" M9T1HLP9(F)Z'*)UBN<9JVGZX@44 A&]$:*>F0:W@:J1T=;E-F4IOGJG!>M)O MQ7?/J#J?-[,&!".P.K-]92EVA+'M!CTM!*X?JU*)FX+2:\RY%8DVG26R"G$H M 0.@I0K>N42?H87,@OJ("G51N>"MUSCFN42$+ .ZC#")N(H*ES:=6;GQKBR" M5FUV ,&CFJF)K5RYKX,*='24 8O$N6NR/( /ES$=V;E]950 .KD?G167"+J3 MEC^Y4*]1 L.I2YJA@$:YBWQ0 *VQ-:^ %10! KQ*Z'C3T90^Z MF8%& !^Z4YU6 />OTFFNG-J <88R9@")";H>L+* ZX7UM,Z1>HV.9:Z<>6VI MD4Z)IGH9NH:G](]ZC2>Z'[J,"2*ZWZGPBTX@1+I9G4J/\G-AGR@ *KHLNJME M+KI^95=])(_[A1VO6:]X;=N%?H$ =F E$)$*?@2BW@$G=YNPXF][FC69I;BK M9=&<>WZ%I9UE2P">:/5H#KG3<8^;]W6F9J>GJ:+#?KBFLW*LN'^;WFGS4%9WZHN.AZ(JNAN#2N:Z&S?>!G<6E% MMHR4SP)=HU6QB)[-N+I\L'=;H>]]7G@HK".E)I17^KB79;J[I]*J9]?OZULJ>Q;18!%Y+$:>J/ M$X2_L1!]C8DS=L:Z>W 0DC !RY:ILY*F2)D9;J\&[KGCKA6)\;G=9=BP9IXV M:0U]U)SWK7MKFKE:N 0*30Z -;TUN@'"Y:N!UA6?SC?9G[(*]MT5YD)WSNG)Q@FKH$"JUE>;B7:6OL:W%V6KF$9<%S,6LO>Z]GG*%*I'AE<8L7 M<:&9!ZRUFH.4N89GHER)<&B/N>-NO;)CNCNLL+92;"IN&("7BLI]-IJRCX9V MAWU6NS1PY[I8A_>PN9\N:>JK.F^_:%%\E;&=K2>E#92JNAFY%8.UMR=F$&OP ME_J4<* L:;MFVK=PM;YOU7K)>3]WII_K<6VQ0:$X;R]Y:[O?>AMI^F6\@_&G MOGT,>62U57,UH*V4-+-:@R]X@+'?9XYTLWW8NB*ACYT8N3>9>FRQ9J5Q;Y:!=N>KL6G+*B8+AI9B"2\7H->#!R!I\#IN^Z>IT9GL>X MAGARN!J"RX#2%=Z#*+'&CQ:KEG/9@_FR_86*@)2(G9,]?8^N#:N6N9&A$JW% ME(UO7I:^NDJ:O:XHL?"YEF63G7!ML8FFNYNUR"+&NVVY3W'*N[:QA[?RI(1J MX&E<<>YZ3'MGHW5H+YL@GLR(N;>-@]ZZ['R!I$=[:;EQGI.YG+0>:!)G"FXH M;L2>IYY]N>NNL)K)FAF"TZ*(>)1HTIIZ>=V[9;OQB)QL!KS=M F0Z9HN=5:4 MD[E\N;9^=P*%?N*P;WFYG9EK%X]L:R&YY9XVBUIKG'L4F(2Y9:I6FRUFDUXR*(0H$=X M4IZ"?!6G>GJMA4^6!)B4@F9Y"+7:=NNJPPQ&O$R\BK>K>@RZ^;E3 MO &+1KJ0?I6!'JM(M>!U(KM3 0J[ G))KW)FU)=HA8=T.*/UC"RLDI\DNX5H MG*%) %&/>&5HC^6DL+OO>NV[S74E@?"[EV8*G"1JV:8V=O6[O9-,AX^Y:9*2 MN7:YC(N[C_V[1 *O:H=RL8GVNE.7T7]LKW^@UG__L]]Y [1*JJ"'B* DE@:T MBZ (M ^1:Z6U<["'6H .M/A_$+3> MAU&179SRA[:8 M$[BXH)6E-+2JA#:TZ&PV@-NQ@:K!H!ZX/+2UA#ZTB*I[JB2X)G1#M 6(M:^/ MJEQZ(XB2JG:1>I%!=):JVZ"8JB.(AYQD@$YTQJ]5M.2@98#HF#NX(XB1G.R8 M7+1E@$&X\IB[E35TK*J=D?2@+;9HM+B<\*^6M0NYZG-2JY]KD] <"%9;/)BNY9B 4@#9F!9*]7O**60KVUM'J4UXX\;F"S MVG(7NE&Z3 #LBG.S(XEA 4"6@K+2D5.]7+I_D]1 DY*@?DMF'[TME4MSX6A" M9W%[D'%NO*&F:[IIJY"H2?\JW(J7"TL;MA !]N#Z;Q?8!V '$XO<>2-:&&!RK,/I3.E&6S+J;N//ZTAO2ZMM66(B%B47WNHGO9K M:[S]F^1I<8--;H> ]X+=;9J]XF^D=ONJ"WF)NO2P\8R I$5GH''@=\YZDZM1 MHKRDA6QY:#:>601/L:IQ!IXDNH(E";TT@@JQEVU7O%T')[$QD.[R:@M"W 
M['$XO8IIG9]ZO5=P"(IO:SBQ!7*$IF>A<9X@KVN2!I[+3NEN0Y\8G[&*8H'J MO:& 3KRS>Z-N.+V/@Q]FW+T_<=Z]IKA-M:VWA)N)B:Z]3*S5$::L2:@[;C(! MQXZ\G 2^YJXYN0V]/2W."+>RVG)0 !6$K:CXK7AE,+P]:]R]Q 8R 8N66I[A M@UFYK8$@##.32I#.EK>HX&4HG\Z;Q*)R=U]HX)M_O!EU7[8HBR.4R;1,MIR: MS[CP@'**' U @!::09.#"%N=?XQ\N:*Y0F8"CG2%IKG9=:BYXYJ0M6UK\)U? MH1EN5Z[MFBB4!9=<"-V-=UH(L2^N(GA 9PNZ#;TJ"\5VF+7)#)1R/[E79Q*Z MW0>J-^V:FW)EE%.=T:8IO9*Z+K)>;>2< M@=R]"F]#OC5_^&_.M5&33K))?6LL/I\*L32],IHJ>O 3F8'B E*85@2B#<>[ M5K.%H2:-5;&>:/>YC)W\N="=%KY0@H2^B[X&H8Z) ML]5]47$^O RYJ&=FL1,"NI3"MY^[MJ/1L[M\+W.H:Y6^RIX&!B6/0X4;OJ>, M#+5)>_":CH_ECQ"Y!&F&:9)H$9^4@AN+WFCXILN F*23@CV: 7X)OH2QP66) M@A")D'Y3ERR%TI/ODW"I/FF2;H&F$G<[M9J3 M@H$]@?R^[EG.O0R]F[42=+:^N+(ANER\$9^+J%QNU9-SM(N0- !GA5NV%7F> MJQZ+VXGEH94$% (*O-N^G[JX@ZR;L(D5O(>\EYN@FH!_)[SZ?5*&&QQ/F-!F M;KG0MR*_DX]5:^QW8)*^<-MMQ8*FLI*:9)X,:#"_[FHBOY9K) )9>-.^!;GB MEPN4D;EU;IB#T)ITO6"'TG$=DH6;EFM\O>&C3K_/LHY_H;E1:7ZY\[*<:1XT MK;2,LFR_%8GYOH\O;Y1J//V^3)TNO/UFCV@9:'D D;Z;:Y6^,@%SBJ26Y+Y! MNUAJ6 )\C)\:@;G:G+BHT9:[J*>LP)+V@8R#:FM49MR]*K4POM&9,KX8LZ-S MJQWZF5V(&[T_+;B!0;Y)NR>]JWMFOG^0:9 LJ)^Y&IJ ?VN^A;E1 MI^C:Y; MJ(*_F'ITOMF^*:.=97B^Z7UG >VA!'DU?K5G7(\;@M>SGF79LY>,H*Z]D+J( M78B]OZIETA49GK.I0JF3C""!OV_?9MR]H65XOGAF(K\M9K:Y:VC/ D)R7F>S MJ0X&FZG/OY.9AY!;@)^BJ[_#<8:HD8?+:YV#H;3D=JB]B)[)@(^%?+Y AF* MX;Y2LM2YGXTUO8*1F8%4 HB^;'H#!B:6+YT"KU=P#VH_O5JYB&N3OGF]++XP M?)]S%8AB@I&^]+D%B:R+YWDGG$65<*]BI:5SY+=CA"^<=J]X M 3*<:H2:=>NW:X2+F#B<[[=8E8"O\;?TM\ML7:KWMW^$^;=ZI665BJ]J.;2\ M%7J!I0*X@Z51G+N\4YR4A.ULE:]*D9>OFX29KUN<\(M=G ^XDZ4NFF*<+IJ[ MH!BX-[2GKZ^$:YR>I:FOH*4';7"X-K@YN+ZEL94V;?6\XH2EJN6$1'3NH$.X;B\D MA4:X&6K2AY9SHYS=KSV,QY4JIM:E3[BKG.6O4[CGK]VE3(S?I;6<#(4"O5., MM)&J;4A^$GF6OWEK?WP65=YRHPJTS? J]<;\.O3MF,Y0Z@9[*G QF M\;H3O@*\KXJMKEY_(+Z)OR.^B[\IGXV_"W*/ORJ^",!INI2_HK>6O^!V#ZX2 M@DBK$JXAOCR^:+\^OIZ_F'^NGS#!5,!HH**_OYV$D)2#C[G;MI._I+\1E@Z! M>[Y-LDF$3X*B)4.&EX6!OO>_@[Y@P0&+)[S:KCYS9L'_OSV=0)I,LT]ID+Y MO0F)&0UMLEF!,0N$;I\<'(@]6]7)I\LZB^JKY2NER?NK[M=SJ: MO;YGFK^^@L'*OO2T1Y;OBJT)SK[@!2J_FII+_;6=+=O"^.;^4P!T<,6E.J20"PW3+>$]L9JQ3 ME*]T#9 5?&FIQ+=?>]6CV9G+3N][B:?KO:MFU&S$PW;W9M=\*)ISFNYPK7!X #3" M'G"Y;1^I!'C5;N66A6*)EE[V8H4&978>9P8FH++F$M>:1P'!YMQ>2#97.9;>2 MG);Z]T* 6$=&4 X*M(?'4 A00*EY["X@RK9MUGKF8A!+(!]68Z;Y_" MJV:(D7L =F4:G],T#GMQ><.37P 5E( !"7* "*K"D<(_>V2U"G%G PF[&GS/AB:+=P#@:D@!<()/A+K"4(YG2%5L M?0"EPG 7P#@PH4$7P#&!!JYP\)+>FYPFV:%!")R%7]? %YK"+=4!%\ R:H9 MG6[!F&8N K#=W_\ND2UFFNBR7=JR9.MF.Z8A "'# MA<*>:(?"\9R":ZV!G66KPGYO+G$(FO%T-IW-L -QF<*L:7^29&[D(1JQL+E4=9R)8E*PX;"B,+C<>NST64Q ER9VE#>:QO+WL?:X!E(0!.> _# M00 I=LW">)I+PWD 'VL(AQO#P07VPCJ9,YT!!U)TLY" P^'"7P#\9SIO%P'O M96=5!:I6;4+#IL*>:'MK66G[N=QN!8EJ:3 :FGH=O*Q9\"/PIJI Z*?D_YQ M5\.6:XR#]V9K:VY[KH]*?CZ9A[HB<3J_^;O790%FH;S\LT2JE+P[FI:\?IB8 MO'Z8FKQL#XJ@3ZJ>O%&J;813JF^$DJ"9P(Z8IKST?Q*TPW,4M!.TEYBOO)B. 
From owner-mpi-context@CS.UTK.EDU Mon Feb 28 12:15:53 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id MAA10631; Mon, 28 Feb 1994 12:15:52 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA01057; Mon, 28 Feb 1994 12:15:54 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 28 Feb 1994 12:15:53 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id MAA01049; Mon, 28 Feb 1994 12:15:51 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA05423; Mon, 28 Feb 94 11:14:50 CST
Date: Mon, 28 Feb 94 11:14:50 CST
From: Tony Skjellum
Message-Id: <9402281714.AA05423@Aurora.CS.MsState.Edu>
To: mpi-context@CS.UTK.EDU, mpi-topology@CS.UTK.EDU
Subject: cc not sent before, oops
Subject: Clarifications of MPI_MAKE_CART
Content-Length: 1124

Rolf, hello.  We need to make the following clarification on MPI_MAKE_CART and
MPI_MAKE_GRAPH.  These functions are analogous to MPI_COMM_MAKE, except that they also
attach the topology capability to the output communicator.  The issue of caching semantics
therefore arises.  I propose that caching of attributes not be carried across by these
functions, so they are symmetric with MPI_COMM_MAKE, and that this be explicitly stated in
the text of section 6.5.  I consider this to be strictly an interpretation of what we have
already agreed upon, and invite your comments.

-Tony

PS Can you remind me why we do not have a means to attach a topology to an already existing
communicator, which could be a non-communicating collective operation?  This is mainly
curiosity at this point.  I recognize that the functionality of MPI_MAKE_GRAPH/CART is
superior to that, but the functionality I mention is also without need of communication,
and analogous to adding other attributes to a communicator (that is to say, user-defined
topology functions could provide this type of capability, so why not the MPI-defined
approach?)

From owner-mpi-context@CS.UTK.EDU Mon Feb 28 14:56:14 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id OAA11948; Mon, 28 Feb 1994 14:56:13 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id OAA12937; Mon, 28 Feb 1994 14:56:24 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Mon, 28 Feb 1994 14:56:22 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from Aurora.CS.MsState.Edu by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id OAA12928; Mon, 28 Feb 1994 14:56:19 -0500
Received: by Aurora.CS.MsState.Edu (4.1/6.0s-FWP); id AA05582; Mon, 28 Feb 94 13:55:10 CST
Date: Mon, 28 Feb 94 13:55:10 CST
From: Tony Skjellum
Message-Id: <9402281955.AA05582@Aurora.CS.MsState.Edu>
To: doss@erc.msstate.edu, mpi-context@CS.UTK.EDU
Subject: Requested action: add opaque object comparisons to Context chapter

Dear Context sub-committee,

The draft of the chapter passed with minor changes, to be posted.  For now, I need to draw
your attention to additional important, but minor fixes not discussed.
These are very important for library developers, and the standard is inadequate without
them.  I encourage others to look at their opaque objects to see if similar tests are
needed elsewhere; if so, this discussion might also be appropriate in Section 2.

Because we need to review text and fixes before the MPI editor's meeting in mid to late
March, I propose discussion of the enclosed proposal until March 7, to be followed by a
vote of the subcommittee (only if there are objections to inclusion), and then by
consideration by the whole MPI committee with an e-mail vote (again, only if there are
objections to inclusion).

Regards,
Tony Skjellum

- - - - - - - -

Some significant discussion transpired on "empty" and "invalid" objects last week.  We did
not get to a crucial, related issue, however.  The following types of operations need to be
relatively cheap for libraries.

1) Comparison of two MPI_Comm opaque objects for
   i)  identity (reference copied)
   ii) congruency (closely related, but different contexts, etc.)

   This operation needs to be "fast" in good implementations so that libraries can
   determine, on the fly, whether distributed objects that carry communicators are
   "compatible" to varying degrees.  Use of attributes is insufficient because many types
   of compatible communicators do not share attributes.

2) Comparison of two MPI_Group objects for
   i)  identity (including identical ranks)
   ii) congruency

   This group operation also needs to be specifiable as "fast" in good implementations so
   that libraries can determine that distributed group operations may be optimized or
   otherwise simplified if the groups are the same or differ only by rank order.

Note: 2.ii is doable with group difference, but is recommended here for convenience, and is
possibly more optimizable in this form.  However, 2.i is not possible with group
difference; it can be achieved by using MPI_GROUP_TRANSLATE_RANKS and then studying the
translation for the identity mapping (lots of work, not particularly scalable ==> we need a
specific solution).

Note: In pointer implementations, where MPI_Comm and MPI_Group are really pointers,
equality can be determined in C; in Fortran, a handle is always a reference or index and
therefore allegedly comparable as an integer.  This is not enough, in general.
Implementations of MPI_Comm as a structure would render == comparison illegal, and
therefore a portable mechanism is needed.

This discussion is extremely closely related to the agreement to have MPI_COMM_EMPTY and
MPI_GROUP_EMPTY, which are special "empty" communicators and groups, respectively.  We also
will have MPI_COMM_INVALID and MPI_GROUP_INVALID, which are equivalent to NULL pointers in
a pointer implementation.  These were added at the Knoxville meeting, but without any
comparison mechanisms.

Proposal:
---------

1a. The following non-communicating, local operation is proposed:

    MPI_COMM_TEST_EQUAL(comm1, comm2, flag)
    IN  comm1
    IN  comm2
    OUT flag[2] (int)

    flag[0] is
      MPI_IDENT     if the communicator objects are identical in all respects (that is,
                    they would pass == equality in a pointer implementation); flag[1] is
                    undefined
      MPI_CONGRUENT if the communicator objects are dup's of each other; flag[1] is
                    undefined
      MPI_SIMILAR   if the communicator objects are related as noted in (*); for this
                    case, flag[1] is also defined, and is
                      MPI_CONGRUENT if the groups are congruent
                      MPI_SIMILAR   if the groups are similar
      MPI_FALSE     otherwise.

    Both comm1 and comm2 are intra-communicators.

(*) The issue of MPI_COMM_MAKE, MPI_COMM_SPLIT, MPI_MAKE_CART, and MPI_MAKE_GRAPH must be
addressed.  These calls do not pass attributes.  They also permit subgrouping to occur in
this collective step.  However, if the "new group" specified is the same as the original
group (for MPI_COMM_SPLIT: one color, with original or permuted ranks used), then the
implementation can recognize this and subsequently return MPI_SIMILAR, because these
communicator constructors have effectively replicated a communicator, and this information
is of value to a library.  [I have specifically asked Rolf for clarification on the
intended behavior of MPI_MAKE_CART/_GRAPH this morning, but I am already convinced that
they have the same semantics as MPI_COMM_MAKE.]  It is useful to distinguish between
congruent and similar communicators because of attributes and topologies.
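For concreteness, here is a small sketch (not part of the proposal) of the two
constructions the note above talks about.  The C binding names MPI_Comm_dup, MPI_Comm_split,
and MPI_Comm_rank are assumed, and the expected MPI_COMM_TEST_EQUAL results in the comments
follow the semantics proposed above, not any agreed text.

    #include <mpi.h>

    /* Sketch: two constructors that effectively replicate "comm".
       C binding names are assumed. */
    void make_related_comms(MPI_Comm comm, MPI_Comm *dupcomm, MPI_Comm *splitcomm)
    {
        int myrank;

        MPI_Comm_rank(comm, &myrank);

        /* A dup: same group, same rank order, new context.  Per 1a above,
           MPI_COMM_TEST_EQUAL(comm, *dupcomm, flag) would give
           flag[0] == MPI_CONGRUENT. */
        MPI_Comm_dup(comm, dupcomm);

        /* A split with one color and the original ranks as keys: the whole
           group again, in the original order, so the constructor has
           effectively replicated the communicator; per (*) above, this would
           give flag[0] == MPI_SIMILAR with flag[1] == MPI_CONGRUENT. */
        MPI_Comm_split(comm, 0, myrank, splitcomm);
    }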
1b. The following non-communicating, local operation is proposed:

    MPI_INTERCOMM_TEST_EQUAL(comm1, comm2, flag)
    IN  comm1
    IN  comm2
    OUT flag[3] (int)

    flag[0] is
      MPI_IDENT     if the communicator objects are identical in all respects (that is,
                    they would pass == equality in a pointer implementation); flag[1] and
                    flag[2] are undefined
      MPI_CONGRUENT if the communicators are related through a dup; flag[1] and flag[2]
                    are undefined
      MPI_SIMILAR   if the communicators are related, but not created through a dup; for
                    this case,
                      flag[1] is MPI_CONGRUENT if the local groups are congruent,
                                 MPI_SIMILAR   if the local groups are similar
                      flag[2] is MPI_CONGRUENT if the remote groups are congruent,
                                 MPI_SIMILAR   if the remote groups are similar
      MPI_FALSE     otherwise.

    Both comm1 and comm2 are inter-communicators.

2. The following non-communicating, local operation is proposed:

    MPI_GROUP_TEST_EQUAL(group1, group2, flag)
    IN  group1
    IN  group2
    OUT flag (int)

    flag is
      MPI_IDENT     if the group objects are identical in all respects (that is, they
                    would pass == equality in a pointer implementation)
      MPI_CONGRUENT if the group objects are dup's of each other (see ***)
                    [i.e., same members, same ranks]
      MPI_SIMILAR   if the group objects are identical except for a permutation of ranks
                    (see ***)
      MPI_FALSE     otherwise.

(***) Although there is no MPI_GROUP_DUP capability in MPI, MPI_COMM_GROUP creates a copy
of a group object, so groups can effectively be duplicated.  Other calls also can
effectively duplicate groups, perhaps identically, perhaps with rearrangement of ranks.
(PS Given the final semantics of MPI_COMM_GROUP, the re-inclusion of MPI_GROUP_DUP seems
reasonable, but can be achieved other ways.)

- - - - - - -

Relationships between parts 1 & 2:

a. MPI_IDENT for groups and communicators means that the user or a library copied a
   reference (created by MPI), e.g., when making a data structure.
   ==> Identical communicators have identical groups.

b. MPI_CONGRUENT for groups and communicators means that the system made a duplicate
   reference to the object.
   ==> Congruent communicators have congruent groups.

c. MPI_SIMILAR: Similar groups are well defined above, using MPI_GROUP_TEST_EQUAL.  Similar
   intra-communicators have either similar or congruent groups, as noted above under
   MPI_COMM_TEST_EQUAL.  Similar inter-communicators have their local and remote groups
   defined as noted above under MPI_INTERCOMM_TEST_EQUAL.

- - - - - - -

!!! Consequences of potentially omitting this functionality !!!

It is now possible, but expensive, to tell whether two groups are identical in a portable
way.  It is not possible to say anything about communicators.
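To make "possible, but expensive" concrete, here is a sketch of the portable group-identity
test a library has to write today using MPI_GROUP_TRANSLATE_RANKS, as described earlier;
the C binding names MPI_Group_size and MPI_Group_translate_ranks are assumed.

    #include <stdlib.h>
    #include <mpi.h>

    /* Sketch only: portable, but O(group size) in time and space.
       Returns 1 if g1 and g2 have the same members in the same rank
       order, 0 otherwise. */
    int groups_identical(MPI_Group g1, MPI_Group g2)
    {
        int n1, n2, i, same = 1;
        int *ranks1, *ranks2;

        MPI_Group_size(g1, &n1);
        MPI_Group_size(g2, &n2);
        if (n1 != n2)
            return 0;

        ranks1 = (int *) malloc(n1 * sizeof(int));
        ranks2 = (int *) malloc(n1 * sizeof(int));
        for (i = 0; i < n1; i++)
            ranks1[i] = i;

        /* Translate every rank of g1 into g2 and look for the identity
           mapping; any member of g1 placed elsewhere (or absent) in g2
           makes the test fail. */
        MPI_Group_translate_ranks(g1, n1, ranks1, g2, ranks2);
        for (i = 0; i < n1; i++)
            if (ranks2[i] != i) {
                same = 0;
                break;
            }

        free(ranks1);
        free(ranks2);
        return same;
    }

This is O(group size) work and allocation for every pair of objects compared, which is
exactly the overhead MPI_GROUP_TEST_EQUAL is intended to remove.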
This will limit runtime optimization of libraries that use persistent distributed objects
(like vectors and matrices) and want to compare communicators when two such objects appear
as arguments.  This kind of testing will be wanted by library developers.  Library writers
will tend to encapsulate communicators in different, incompatible ways to get this
functionality if it is not inherent in communicators.  This will limit the interoperability
of libraries.

Note: Because these relationships cross boundaries that user-defined attributes do not
follow, no adequate mechanism based on user-defined attributes has been found as an
alternative to the above functionality.

Impact on implementations/performance:

The impact on performance should be strictly positive or zero.  Groups and communicators
that are identical will be identifiable easily by implementations (O(1) time).  Groups that
are similar will be identifiable in time no worse than what an MPI_GROUP_DIFFERENCE could
do, without asking the user to deal with the empty group, etc., and possibly much better if
internal tagging of related groups is incorporated into an implementation.  This
functionality does not impact any communication functions of MPI; it is strictly
bookkeeping on processes.

Who will use this?

Library writers who are trying to reduce overheads or discover shortcuts while operating on
distributed data structures, defined in unpredictable ways by users and libraries, and
therefore mainly amenable to runtime optimization.  A good example would be redistribution
of a vector from one topology to another.  If the shape is different, but the underlying
processes are the same, the amount of communication could be significantly reduced.
Average users will not use this capability, so the exposition of MPI is not particularly
complicated by it.

From owner-mpi-context@CS.UTK.EDU Tue Mar 15 10:17:54 1994
Received: from CS.UTK.EDU by netlib2.cs.utk.edu with ESMTP (8.6.4/2.8t-netlib) id KAA27195; Tue, 15 Mar 1994 10:17:54 -0500
Received: from localhost by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id KAA29327; Tue, 15 Mar 1994 10:17:47 -0500
X-Resent-To: mpi-context@CS.UTK.EDU ; Tue, 15 Mar 1994 10:17:45 EST
Errors-to: owner-mpi-context@CS.UTK.EDU
Received: from Phoenix.ERC.MsState.Edu by CS.UTK.EDU with SMTP (8.6.4/2.8s-UTK) id KAA29320; Tue, 15 Mar 1994 10:17:43 -0500
Received: from Athena.ERC.MsState.Edu by Phoenix.ERC.MsState.Edu (4.1/6.0s-FWP); id AA19073; Tue, 15 Mar 94 09:23:16 CST
From: "Nathan E. Doss"
Received: by Athena.ERC.MsState.Edu (4.1/6.0c-FWP); id AA11408; Tue, 15 Mar 94 09:20:51 CST
Message-Id: <9403151520.AA11408@Athena.ERC.MsState.Edu>
Subject: Re: simplified proposal (fwd)
To: mpi-context@CS.UTK.EDU
Date: Tue, 15 Mar 1994 09:20:51 -0600 (CST)
X-Mailer: ELM [version 2.4 PL17]
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Forwarded message:
> From: Tony Skjellum
> To: doss@ERC.MsState.Edu
> Subject: Re: simplified proposal
>
> Please forward to reflector with statement that I agree with simplification.
> -Tony
>

These two functions are simplified versions of the ones Tony earlier proposed.

--
Nathan Doss
doss@ERC.MsState.Edu

=---=---=---=---=---=---=---=---=---=---=---=---=---=---=---=---=---=---=

Proposal:
---------
1. The following non-communicating, local operation is proposed:

   MPI_COMM_TEST_EQUAL(comm1, comm2, flag)
   IN  comm1
   IN  comm2
   OUT flag (int)

   If comm1 and comm2 are the same type of communicator (i.e., both intra-communicators or
   both inter-communicators), then flag is:

     MPI_IDENT     if the communicator objects are identical in all respects, i.e., if the
                   contexts are equal
     MPI_CONGRUENT if the groups are equal - the order and members of the groups are the
                   same.  For inter-communicators, the local and remote groups of comm1
                   must be equal to the local and remote groups of comm2, respectively.
     MPI_SIMILAR   if the group members are the same but the order is different.  For
                   inter-communicators, the local and remote groups of comm1 must be
                   similar to the local and remote groups of comm2, respectively.
     MPI_FALSE     otherwise

   If comm1 and comm2 are not the same type (i.e., one inter-communicator and one
   intra-communicator), then flag is MPI_FALSE.

2. The following non-communicating, local operation is proposed:

   MPI_GROUP_TEST_EQUAL(group1, group2, flag)
   IN  group1
   IN  group2
   OUT flag (int)

   flag is:

     MPI_IDENT   if the group members and the group order are exactly the same in both
                 groups
     MPI_SIMILAR if the group members are the same but the order is different.
     MPI_FALSE   otherwise
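For illustration, here is a sketch of the run-time shortcut a library could take with this
call, along the lines of the vector-redistribution example in the earlier message.  The C
binding name MPI_Comm_test_equal and the result values come from the proposal above; the
enum and the routine itself are hypothetical.

    #include <mpi.h>

    /* Hypothetical plans for copying one distributed vector into another. */
    enum copy_plan { COPY_LOCAL, COPY_PERMUTE, COPY_GENERAL };

    /* Pick a plan from the relationship between the source and destination
       communicators.  Assumes the proposed MPI_COMM_TEST_EQUAL with a C
       binding MPI_Comm_test_equal and the proposed result values. */
    enum copy_plan choose_copy_plan(MPI_Comm src, MPI_Comm dest)
    {
        int flag;

        MPI_Comm_test_equal(src, dest, &flag);

        if (flag == MPI_IDENT || flag == MPI_CONGRUENT)
            return COPY_LOCAL;    /* same members, same ranks: purely local copy */
        if (flag == MPI_SIMILAR)
            return COPY_PERMUTE;  /* same members, permuted ranks: exchange within
                                     the common process set only */
        return COPY_GENERAL;      /* unrelated: full redistribution required */
    }

Without such a call, the first two cases collapse into the general one, which is the loss
of run-time optimization described in the earlier message.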