ftp://hpsl.cs.umd.edu/pub/chaos_distribution/
CHAOS
runtime library for parallelizing Fortran and C programs
with irregular data access patterns
Alan Sussman / als@cs.umd.edu
The Chaos library is a set of software primitives that are designed to
efficiently handle irregular problems on distributed memory systems. It
is a superset of the Parti library. These primitives have been designed
to ease the implementation of computational problems on parallel
architecture machines by relieving users of low-level machine specific
issues. The design philosophy has been to leave the original
(sequential) source code essentially unaltered, apart from calls to
the Chaos primitives inserted at the appropriate locations. These
primitives manage the distribution of data across, and its retrieval
from, the processors' local memories.
In distributed memory systems, arrays can be partitioned among local
memories of processors. These partitioned arrays are called distributed
arrays.
Chaos provides primitives to map arrays onto processors.
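The most common such mapping is a block distribution, in which each processor owns a contiguous slice of the array. The helpers below are a minimal sketch of the arithmetic involved (hypothetical names, not the actual Chaos interface), assuming a 1-D block distribution where the first N mod P processors each hold one extra element:

```c
#include <assert.h>

/* Illustrative block-distribution helpers, not the Chaos API. */

/* Number of elements owned by processor p. */
int block_size(int N, int P, int p) {
    return N / P + (p < N % P ? 1 : 0);
}

/* First global index owned by processor p. */
int block_start(int N, int P, int p) {
    int q = N / P, r = N % P;
    return p < r ? p * (q + 1) : r * (q + 1) + (p - r) * q;
}

/* Owning processor of global index g. */
int block_owner(int N, int P, int g) {
    int q = N / P, r = N % P;
    int cut = r * (q + 1);   /* first index owned by processor r */
    return g < cut ? g / (q + 1) : r + (g - cut) / q;
}

/* Local offset of global index g on its owning processor. */
int block_local(int N, int P, int g) {
    return g - block_start(N, P, block_owner(N, P, g));
}
```

For example, with N = 10 and P = 4, processors own 3, 3, 2 and 2 elements, so global index 7 lives at local offset 1 on processor 2.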
Chaos also provides a set of primitives that carry out optimizations at
runtime to reduce both the number of messages and the volume of
interprocessor communication.
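The key idea behind these optimizations is an "inspector" pass over the indirection array before any communication happens. The sketch below (hypothetical names, not the Chaos API) shows one such inspector for an even block distribution: it de-duplicates repeated indices and groups off-processor references by owner, which is what allows the runtime to send one aggregated message per peer rather than one per element:

```c
#include <assert.h>
#include <stdlib.h>

/* Owner of global index g under an even block distribution
 * (assumes N % P == 0).  Illustrative only. */
static int owner(int g, int N, int P) { return g / (N / P); }

/* Inspector sketch: fill msgs[p] with the number of distinct remote
 * elements this processor must fetch from processor p, given the
 * global indices it will reference; returns the number of peers
 * that must be contacted. */
int build_schedule(const int *indices, int n, int me,
                   int N, int P, int *msgs) {
    char *seen = calloc(N, 1);       /* de-duplicate repeated indices */
    for (int p = 0; p < P; p++) msgs[p] = 0;
    for (int i = 0; i < n; i++) {
        int g = indices[i], o = owner(g, N, P);
        if (o != me && !seen[g]) { seen[g] = 1; msgs[o]++; }
    }
    free(seen);
    int peers = 0;
    for (int p = 0; p < P; p++) if (msgs[p] > 0) peers++;
    return peers;
}
```

A real schedule would also record which local buffer slot each fetched element lands in, so the "executor" phase can run the original loop against purely local storage.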
data partitioning; distributed memory multiprocessor; irregular
problem; runtime system
ppt-rts
http://www.cs.umd.edu/projects/hpsl/compilers/base_chaos.html
Any distributed memory system that supports message passing or
distributed shared memory
(currently implemented on Intel iPSC/860 and Paragon, IBM SP1/2,
TMC CM5, Cray T3D, Stanford DASH, network of workstations via
PVM)
ftp://hpsl.cs.umd.edu/pub/block_parti_distribution/block_parti.tar.Z
Multiblock PARTI
runtime library for parallelizing multiple
structured grid (multiblock and multigrid) applications written in
Fortran and C
Alan Sussman / als@cs.umd.edu
Multiblock PARTI is a runtime library for parallelizing multiple
structured grid (e.g., multiblock and multigrid) applications
written in Fortran and C.
Multiblock Parti is used to produce an SPMD parallel program, and
provides routines that allow an application programmer or a compiler to
 - lay out distributed data in a flexible way, to enable good load
   balancing and minimize interprocessor communication,
 - give high-level specifications for performing data movement, and
 - distribute the computation across the processors.
Two types of communication are required in multiple structured grid
applications -- inter-block communication and intra-block communication.
In the runtime system, communication is performed in two phases. First,
a subroutine is called to build a communication schedule that describes
the required data motion, and then another subroutine is called to
perform the data motion (sends and receives on a distributed memory
parallel machine) using a previously built schedule. Such an arrangement
allows a schedule to be used multiple times in an iterative algorithm.
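The two-phase pattern can be sketched as follows. The types and routine names here are hypothetical, not the actual Multiblock PARTI interface; for simplicity both arrays live in one address space, whereas on a distributed memory machine the execute step would post the sends and receives:

```c
#include <assert.h>
#include <stdlib.h>

/* A schedule records which elements move where; executing it performs
 * only the copies.  Building is the expensive analysis step, so an
 * iterative solver builds once and executes every iteration. */
typedef struct { int n, *src, *dst; } Schedule;

/* Phase 1: build a schedule moving section [s0, s0+n) onto [d0, d0+n). */
Schedule *build_section_move(int s0, int d0, int n) {
    Schedule *sc = malloc(sizeof *sc);
    sc->n = n;
    sc->src = malloc(n * sizeof(int));
    sc->dst = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) { sc->src[i] = s0 + i; sc->dst[i] = d0 + i; }
    return sc;
}

/* Phase 2: perform the data motion described by a built schedule. */
void execute_move(const Schedule *sc, const double *from, double *to) {
    for (int i = 0; i < sc->n; i++) to[sc->dst[i]] = from[sc->src[i]];
}
```

An iterative code would call build_section_move once outside its time-step loop and execute_move inside it, amortizing the analysis cost over all iterations.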
The library defines a descriptor in each processor that describes the
global structure of a distributed array and caches information about
the portion of the array local to that processor.
The library provides routines for using distributed array descriptors
for address translation and for interprocessor communication (regular
section moves and filling ghost cells) using communication schedules.
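A minimal 1-D version of such a descriptor might look like the following. This layout is an assumption for illustration, not the library's actual structure; it caches the owned global range and a ghost-cell halo so that global indices can be translated to local buffer offsets without communication:

```c
#include <assert.h>

/* Hypothetical per-processor descriptor for a 1-D distributed array. */
typedef struct {
    int glob_n;     /* global array length           */
    int lo, hi;     /* owned global range [lo, hi)   */
    int ghost;      /* ghost-cell width on each side */
} DArrayDesc;

/* Local buffer offset of global index g, or -1 if g falls outside the
 * owned block and its ghost region.  Local storage is assumed to be
 * laid out as [ghost | owned | ghost]. */
int global_to_local(const DArrayDesc *d, int g) {
    if (g < 0 || g >= d->glob_n) return -1;
    if (g < d->lo - d->ghost || g >= d->hi + d->ghost) return -1;
    return g - (d->lo - d->ghost);
}
```

Filling the ghost cells is then a regular section move: each processor receives the boundary elements of its neighbors' owned ranges into the halo slots of its local buffer.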
Any distributed memory system that supports message passing
(currently implemented on Intel iPSC/860 and Paragon, IBM SP1/2,
TMC CM5, network of workstations via PVM)
data partitioning; distributed memory multiprocessor;
multigrid problem; multiblock problem;
runtime system; communication library; distributed arrays
ppt-rts
http://www.cs.umd.edu/projects/hpsl/compilers/base_mblock.html