ScaLAPACK can solve systems of linear equations, linear least squares problems, eigenvalue problems, and singular value problems. ScaLAPACK can also handle many associated computations, such as matrix factorizations and condition number estimation.

Like LAPACK, the ScaLAPACK routines are based on block-partitioned algorithms in order to minimize the frequency of data movement between different levels of the memory hierarchy. The fundamental building blocks of the ScaLAPACK library are distributed-memory versions of the Level 1, Level 2, and Level 3 BLAS, called the Parallel BLAS or PBLAS [26, 104], and a set of Basic Linear Algebra Communication Subprograms (BLACS) [54] for communication tasks that arise frequently in parallel linear algebra computations. In the ScaLAPACK routines, the majority of interprocessor communication occurs within the PBLAS, so the source code of the top software layer of ScaLAPACK looks similar to that of LAPACK.
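The distributed-memory layout underlying the PBLAS is a two-dimensional block-cyclic distribution, described in detail later in this guide. As a minimal illustration, the mapping from a global matrix index to the coordinate of the owning process can be sketched in Python (the helper names here are ours, not ScaLAPACK's; ScaLAPACK itself provides the Fortran utility INDXG2P for this computation):

```python
def owner(i, nb, p, src=0):
    """Process coordinate (row or column) owning global index i
    (0-based) under a 1-D block-cyclic distribution with block
    size nb over p processes, first block held by process src."""
    return (src + i // nb) % p

def owner2d(i, j, mb, nb, nprow, npcol):
    """2-D block-cyclic ownership: the row and column mappings
    are applied independently over an nprow-by-npcol process grid."""
    return (owner(i, mb, nprow), owner(j, nb, npcol))

# With 2x2 blocks over a 2x3 process grid, global entry (5, 3)
# lives on process (0, 1): row 5 falls in row-block 2 -> grid row 0,
# column 3 falls in column-block 1 -> grid column 1.
print(owner2d(5, 3, 2, 2, 2, 3))
```

Because each process owns many small blocks scattered across the matrix, this distribution keeps the computational load balanced even as factorizations sweep across the matrix.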

ScaLAPACK contains **driver routines** for solving standard types of problems, **computational routines** to perform a distinct computational task, and **auxiliary routines** to perform a certain subtask or common low-level computation. Each driver routine typically calls a sequence of computational routines. Taken as a whole, the computational routines can perform a wider range of tasks than are covered by the driver routines. Many of the auxiliary routines may be of use to numerical analysts or software developers, so we have documented the Fortran source for these routines with the same level of detail used for the ScaLAPACK computational routines and driver routines.

ScaLAPACK provides routines for dense and band matrices, but not for general sparse matrices. Similar functionality is provided for real and complex matrices. See Chapter 3 for a complete summary of the contents.
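Real and complex variants of a routine are distinguished by the routine's name, which follows the LAPACK naming scheme with a leading P for "parallel": a precision letter (S, D, C, or Z), a two-letter matrix-type code, and a task mnemonic, so that, for example, PDGESV is the double precision real general-matrix solver. A small Python sketch of this decoding (the table below is a partial, illustrative subset, not the full set of codes):

```python
# Illustrative decoder for ScaLAPACK routine names of the form
# P + precision + matrix type + task (partial tables for illustration).
PRECISION = {
    "S": "real",
    "D": "double precision real",
    "C": "complex",
    "Z": "double precision complex",
}
MATRIX = {
    "GE": "general",
    "GB": "general band",
    "PO": "symmetric/Hermitian positive definite",
    "SY": "symmetric",
    "HE": "Hermitian",
    "TR": "triangular",
}

def describe(name):
    """Split a ScaLAPACK routine name into (precision, matrix type, task)."""
    name = name.upper()
    assert name.startswith("P"), "ScaLAPACK names begin with P"
    return PRECISION[name[1]], MATRIX[name[2:4]], name[4:]

print(describe("PDGESV"))   # double precision real, general matrix, task SV
print(describe("PZPOTRF"))  # double precision complex, positive definite, task TRF
```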

Not all the facilities of LAPACK are covered by Release 1.5 of ScaLAPACK.

Tue May 13 09:21:01 EDT 1997