ScaLAPACK
- url
- http://www.netlib.org/scalapack/index.html
- abstract
-
The ScaLAPACK software library, scheduled for completion by the end of
1994, will extend the LAPACK library to run scalably on MIMD, distributed
memory, concurrent computers. For such machines, the memory hierarchy
includes the off-processor memory of other processors, in addition to the
hierarchy of registers, cache, and local memory on each processor. Like
LAPACK, the ScaLAPACK routines are based on block-partitioned
algorithms in order to minimize the frequency of data movement between
different levels of the memory hierarchy. The fundamental building blocks of
the ScaLAPACK library are distributed memory versions of the Level 2 and
Level 3 BLAS, and a set of Basic Linear Algebra Communication Subprograms
(BLACS) for communication tasks that arise frequently in parallel linear
algebra computations. In the ScaLAPACK routines, all interprocessor
communication occurs within the distributed BLAS and the BLACS, so the
source code of the top software layer of ScaLAPACK looks very similar to that
of LAPACK.
Six ScaLAPACK routines are currently available from NETLIB -- parallel
LU, QR, and Cholesky factorization routines, and parallel Hessenberg (HRD),
tridiagonal (TRD), and bidiagonal (BRD) reduction routines. The
ScaLAPACK routines are based on PB-BLAS (Parallel Blocked Basic Linear
Algebra Subprograms), which is a distributed memory version of the Level 2
and Level 3 BLAS. ScaLAPACK is currently available only for double
precision real data, but will be implemented in the near future for other data
types.
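
To make the data layout behind the distributed BLAS concrete: ScaLAPACK
maps a global matrix onto a two-dimensional process grid using a 2-D
block-cyclic distribution. The sketch below is a hypothetical
illustration, not ScaLAPACK code; the function name `owner` and the
grid sizes are invented for the example.

```python
# Hypothetical sketch (not part of ScaLAPACK): the 2-D block-cyclic
# distribution assumed by ScaLAPACK's distributed BLAS.  Each global
# matrix entry (i, j) is owned by exactly one process in a P x Q grid.

def owner(i, j, mb, nb, P, Q):
    """Return the (process-row, process-column) pair owning global
    entry (i, j) when the matrix is split into mb x nb blocks that
    are dealt out cyclically over a P x Q process grid."""
    return ((i // mb) % P, (j // nb) % Q)

# Example: 2 x 2 blocks distributed over a 2 x 3 process grid.
print(owner(0, 0, 2, 2, 2, 3))  # block (0, 0) -> process (0, 0)
print(owner(2, 5, 2, 2, 2, 3))  # block (1, 2) -> process (1, 2)
print(owner(4, 6, 2, 2, 2, 3))  # block (2, 3) wraps -> process (0, 0)
```

Because blocks wrap around the grid cyclically, no process row or
column runs out of work as a factorization sweeps across the matrix,
which is what makes the block-partitioned algorithms load-balanced.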
- contact
- scalapack@cs.utk.edu
- keywords
- parallel numerical library; linear algebra;
distributed memory multiprocessor; MIMD machine