LAPACK [14] provides routines for solving systems of linear equations, least-squares problems, eigenvalue problems, and singular value problems. The associated matrix factorizations (LU, Cholesky, QR, SVD, Schur, generalized Schur) are also provided, as are related computations such as reordering of the Schur factorizations and estimating condition numbers. Dense and banded matrices are handled, but not general sparse matrices. In all areas, similar functionality is provided for real and complex matrices, in both single and double precision.
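For illustration, the following minimal Fortran program solves a small dense system Ax = b with the double-precision simple driver DGESV, which computes an LU factorization with partial pivoting and overwrites the right-hand side with the solution. The matrix and right-hand side are arbitrary example data, not taken from the package documentation.

      PROGRAM SOLVE
*     Solve A*x = b for an example 3-by-3 double-precision system
*     using the LAPACK simple driver DGESV (LU factorization with
*     partial pivoting).  The data below are made up for illustration.
      INTEGER          N, NRHS, LDA, LDB, INFO
      PARAMETER        ( N = 3, NRHS = 1, LDA = N, LDB = N )
      INTEGER          IPIV( N )
      DOUBLE PRECISION A( LDA, N ), B( LDB, NRHS )
      DATA             A / 4.0D0, 1.0D0, 0.0D0,
     $                     1.0D0, 3.0D0, 1.0D0,
     $                     0.0D0, 1.0D0, 2.0D0 /
      DATA             B / 1.0D0, 2.0D0, 3.0D0 /
*     On exit, A holds the LU factors, IPIV the pivot indices, and
*     B is overwritten by the solution vector x.
      CALL DGESV( N, NRHS, A, LDA, IPIV, B, LDB, INFO )
      IF ( INFO.EQ.0 ) THEN
         PRINT *, 'x = ', B
      ELSE
         PRINT *, 'DGESV failed: INFO = ', INFO
      END IF
      END

The same call sequence, with the leading D replaced by S, C, or Z, applies in single precision and in complex arithmetic.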
The original goal of the LAPACK project was to make the widely used EISPACK and LINPACK libraries run efficiently on shared-memory vector and parallel processors, as well as on RISC workstations. On these machines, LINPACK and EISPACK are inefficient because their memory access patterns disregard the multilayered memory hierarchies of the machines, thereby spending too much time moving data instead of doing useful floating-point operations. LAPACK addresses this problem by reorganizing the algorithms to use block matrix operations, such as matrix multiplication, in the innermost loops [14], [3]. These block operations can be optimized for each architecture to account for the memory hierarchy [2], and so provide a transportable way to achieve high efficiency on diverse modern machines. Here we use the term ``transportable'' instead of ``portable'' because, for fastest possible performance, LAPACK requires that highly optimized block matrix operations be already implemented on each machine. In other words, the correctness of the code is portable, but high performance is not, if we limit ourselves to a single Fortran source code.
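The effect of this reorganization is easiest to see in a right-looking blocked LU factorization, where nearly all of the floating-point work lands in two Level 3 BLAS calls on the trailing submatrix. The subroutine below is a simplified sketch of that scheme only; pivoting and error checks are omitted, so it is not the code of the actual LAPACK routine DGETRF, which also performs row interchanges.

      SUBROUTINE BLKLU( N, NB, A, LDA )
*     Simplified sketch of a right-looking blocked LU factorization
*     WITHOUT pivoting, for illustration only.  NB is the block size.
*     The Level 2 BLAS routine DGER updates only the narrow panel;
*     DTRSM and DGEMM, both Level 3 BLAS, do the bulk of the flops.
      INTEGER          N, NB, LDA
      DOUBLE PRECISION A( LDA, * )
      INTEGER          I, J, JB, K
      DO 30 J = 1, N, NB
         JB = MIN( NB, N-J+1 )
*        Unblocked factorization of the panel A(J:N, J:J+JB-1).
         DO 20 K = J, J+JB-1
            DO 10 I = K+1, N
               A( I, K ) = A( I, K ) / A( K, K )
   10       CONTINUE
            IF ( K.LT.J+JB-1 )
     $         CALL DGER( N-K, J+JB-1-K, -1.0D0, A( K+1, K ), 1,
     $                    A( K, K+1 ), LDA, A( K+1, K+1 ), LDA )
   20    CONTINUE
         IF ( J+JB.LE.N ) THEN
*           Compute the block row of U by a triangular solve with
*           multiple right-hand sides.
            CALL DTRSM( 'Left', 'Lower', 'No transpose', 'Unit',
     $                  JB, N-J-JB+1, 1.0D0, A( J, J ), LDA,
     $                  A( J, J+JB ), LDA )
*           Rank-JB update of the trailing submatrix: one matrix
*           multiplication per block step, rather than one vector
*           operation per column.
            CALL DGEMM( 'No transpose', 'No transpose',
     $                  N-J-JB+1, N-J-JB+1, JB, -1.0D0,
     $                  A( J+JB, J ), LDA, A( J, J+JB ), LDA,
     $                  1.0D0, A( J+JB, J+JB ), LDA )
         END IF
   30 CONTINUE
      END

Because the trailing update is a single matrix multiplication, a tuned DGEMM can reuse each block of data many times from cache or vector registers, a freedom the vector-at-a-time kernels of LINPACK and EISPACK could not offer.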
LAPACK can be regarded as a successor to LINPACK and EISPACK.
It has virtually all the capabilities of
these two packages and much more besides. LAPACK improves on LINPACK and
EISPACK in four main respects: speed, accuracy, robustness, and functionality.
While LINPACK and EISPACK are based on the vector operation kernels
of the Level 1 BLAS, LAPACK
was designed at the outset to exploit the Level 3 BLAS,
a set of specifications for Fortran subprograms that perform
various types of matrix multiplication and solve triangular
systems with multiple right-hand sides. Because of the
coarse granularity of the Level 3 BLAS operations, their use
tends to promote high efficiency on many high-performance computers,
particularly if specially coded implementations are provided by the
manufacturer.
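For concreteness, one Level 3 BLAS call such as DGEMM, which computes C := alpha*op(A)*op(B) + beta*C, performs an entire matrix-matrix multiplication, giving the implementer freedom to block, copy, and reorder the computation internally. The small program below, with arbitrary example data, forms the plain product C = A*B:

      PROGRAM GEMMEX
*     A single Level 3 BLAS call: C := ALPHA*A*B + BETA*C.
*     The 2-by-2 matrices here are made-up illustration values;
*     the result is C = ( 19 22 ; 43 50 ).
      INTEGER          M, N, K
      PARAMETER        ( M = 2, N = 2, K = 2 )
      DOUBLE PRECISION A( M, K ), B( K, N ), C( M, N )
      DATA             A / 1.0D0, 3.0D0, 2.0D0, 4.0D0 /
      DATA             B / 5.0D0, 7.0D0, 6.0D0, 8.0D0 /
      DATA             C / 4*0.0D0 /
      CALL DGEMM( 'No transpose', 'No transpose', M, N, K,
     $            1.0D0, A, M, B, K, 0.0D0, C, M )
      PRINT *, 'C = ', C
      END

A matrix multiplication performs O(n^3) operations on O(n^2) data, and it is this ratio of computation to memory traffic that a specially coded implementation can exploit.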