In this section, we briefly discuss direct methods for computing the eigenvalues of symmetric matrices that can be stored in the computer as full matrices. These direct methods are sometimes called transformation methods, since they are built up around similarity transformations.
They are implemented in LAPACK [12], ScaLAPACK [52], and the eig command in MATLAB.
All the subspace-based algorithms we discuss later in this chapter rely on dense, tridiagonal, or banded matrix routines as inner iterations to compute Ritz approximations to the eigenvalues from a subspace.
The most common direct methods have two phases: first the matrix is reduced to tridiagonal form by a sequence of orthogonal similarity transformations, and then the eigenvalues and eigenvectors of the tridiagonal matrix are computed.
The initial reduction to tridiagonal form is made by a sequence of orthogonal Householder reflections and takes $\frac{4}{3}n^3$ floating point operations, or $\frac{8}{3}n^3$ if eigenvectors are also desired. The symmetric or Hermitian driver routines of LAPACK start with this reduction: it is computed by the computational routines xSYTRD in the real case and xHETRD in the complex case (PxSYTRD and PxHETRD in ScaLAPACK). There are also implementations for packed and banded storage.
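To make the reduction concrete, the following NumPy sketch carries out the Householder tridiagonalization directly; the routine name tridiagonalize is ours, not LAPACK's. For readability each reflector is formed as a full matrix, so the sketch costs $O(n^4)$ rather than the $\frac{4}{3}n^3$ attained by xSYTRD, which applies the reflectors implicitly.

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a real symmetric matrix to tridiagonal form T = Q.T @ A @ Q."""
    A = np.asarray(A, dtype=float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        x = A[k + 1:, k]
        alpha = np.linalg.norm(x)
        if alpha == 0.0:
            continue
        if x[0] > 0:                     # choose the sign that avoids cancellation
            alpha = -alpha
        v = x.copy()
        v[0] -= alpha                    # Householder vector: maps x to alpha*e_1
        v /= np.linalg.norm(v)
        H = np.eye(n)                    # full reflector, for clarity only
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
        A = H @ A @ H                    # similarity transform (H symmetric, orthogonal)
        Q = Q @ H
    return A, Q

# quick check: eigenvalues are preserved by the similarity transformation
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A0 = (B + B.T) / 2.0
T, Q = tridiagonalize(A0)
print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A0)))  # True
```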
There is a choice of methods for finding the eigendecomposition of a tridiagonal matrix:
One choice is QR iteration, which takes $O(n^2)$ flops to find all the eigenvalues of a tridiagonal matrix and $O(n^3)$ flops if all the eigenvectors are accumulated as well. This is the algorithm underlying the MATLAB command eig.
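The toy sketch below illustrates the idea of shifted QR iteration with Wilkinson shifts and deflation; the routine name is our own, and each step uses a full QR factorization for clarity, whereas production codes such as LAPACK's xSTEQR apply the shift implicitly with Givens rotations on the two diagonals.

```python
import numpy as np

def tridiag_qr_eigvalues(T, tol=1e-12, max_iter=100):
    """Explicitly shifted QR iteration on a symmetric tridiagonal matrix."""
    T = np.asarray(T, dtype=float).copy()
    m = T.shape[0]                       # size of the active (undeflated) block
    while m > 1:
        for _ in range(max_iter):
            if abs(T[m-1, m-2]) < tol * (abs(T[m-1, m-1]) + abs(T[m-2, m-2])):
                break                    # trailing eigenvalue converged
            # Wilkinson shift: eigenvalue of the trailing 2x2 block nearest T[m-1, m-1]
            a, b, c = T[m-2, m-2], T[m-1, m-2], T[m-1, m-1]
            d = (a - c) / 2.0
            mu = c - np.sign(d if d != 0 else 1.0) * b * b / (abs(d) + np.hypot(d, b))
            Q, R = np.linalg.qr(T[:m, :m] - mu * np.eye(m))
            T[:m, :m] = R @ Q + mu * np.eye(m)   # similarity; stays tridiagonal
        m -= 1                           # deflate the converged eigenvalue
    return np.sort(np.diag(T))

# quick check against LAPACK via NumPy
T0 = np.diag(2.0 * np.ones(4)) + np.diag(-np.ones(3), 1) + np.diag(-np.ones(3), -1)
print(np.allclose(tridiag_qr_eigvalues(T0), np.linalg.eigvalsh(T0)))  # True
```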
Another choice is the divide-and-conquer algorithm (the LAPACK routine xSTEDC). A third choice is bisection, which can be used to find just a subset of the eigenvalues and needs only $O(nk)$ flops, where $k$ is the number of eigenvalues desired. Inverse iteration can then be used to find the corresponding eigenvectors. In the best case, when the eigenvalues are well separated, inverse iteration also costs only $O(nk)$ floating point operations. This is much less than either QR or divide-and-conquer, even when all eigenvalues and eigenvectors are desired ($k = n$). On the other hand, when many eigenvalues are clustered close together, Gram-Schmidt orthogonalization will be needed to make sure that we do not get several identical eigenvectors. This will add $O(nk^2)$ floating point operations to the operation count in the worst case.
The bisection method and inverse iteration are implemented in the LAPACK routines xSTEBZ and xSTEIN, and they are available as options in the driver xSYEVX. In ScaLAPACK, they are the subroutines PxSTEBZ and PxSTEIN.
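In SciPy, for example, this bisection plus inverse iteration path is reachable through scipy.linalg.eigh_tridiagonal, a wrapper around the LAPACK routines above; the snippet below, with our own choice of test matrix, computes only the $k$ smallest eigenpairs.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 1000
d = 2.0 * np.ones(n)          # diagonal of the 1-D discrete Laplacian
e = -np.ones(n - 1)           # off-diagonal
k = 5                         # number of eigenpairs wanted

# select='i' requests eigenvalue indices 0..k-1 only; lapack_driver='stebz'
# routes the computation through xSTEBZ (bisection) and xSTEIN (inverse
# iteration), mirroring the LAPACK path described above.
w, v = eigh_tridiagonal(d, e, select='i', select_range=(0, k - 1),
                        lapack_driver='stebz')

# this test matrix has the known eigenvalues 2 - 2*cos(j*pi/(n+1))
exact = 2.0 - 2.0 * np.cos(np.arange(1, k + 1) * np.pi / (n + 1))
print(np.allclose(w, exact))  # True
```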
Finally, a classical method for solving Hermitian eigenvalue problems
is the Jacobi method.
This method constructs an orthogonal transformation to diagonal form, $Q^{\ast} A Q = \Lambda$, using a sequence of plane rotations.
The Jacobi algorithm has been very popular, since it is implemented by a very simple program and gives eigenvectors that are orthogonal to working accuracy. However, it cannot compete with the QR method in terms of operation counts: Jacobi needs $O(n^3)$ multiplications for each of its $s$ sweeps ($s = 3$ to 5 usually), which is more than the $\frac{4}{3}n^3$ operations needed for tridiagonal reduction.
There is one important advantage to the Jacobi algorithm. Properly implemented, it can deliver eigenvalue approximations with a small relative error, in contrast to algorithms based on tridiagonalization, which only guarantee that the error is bounded relative to the norm of the matrix (4.3). See [124].
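A minimal NumPy sketch of the cyclic Jacobi method is given below; the routine name and the simple stopping rule are our own, and the full matrix products are for readability only (real codes update just rows and columns $p$ and $q$ of each rotation).

```python
import numpy as np

def jacobi_eig(A, tol=1e-12, max_sweeps=10):
    """Cyclic Jacobi: drive a symmetric matrix to diagonal form by plane rotations."""
    A = np.asarray(A, dtype=float).copy()
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(max_sweeps):              # a handful of sweeps usually suffice
        if np.linalg.norm(A - np.diag(np.diag(A))) < tol * np.linalg.norm(A):
            break
        for p in range(n - 1):               # one sweep visits every (p, q) pair
            for q in range(p + 1, n):
                if A[p, q] == 0.0:
                    continue
                # rotation angle chosen to zero out A[p, q]
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J              # full products for clarity only
                Q = Q @ J
    return np.diag(A), Q

# eigenvalues match LAPACK's, and Q is orthogonal to working accuracy
B = np.random.default_rng(1).standard_normal((5, 5))
A0 = (B + B.T) / 2.0
lam, Q = jacobi_eig(A0)
print(np.allclose(np.sort(lam), np.linalg.eigvalsh(A0)))  # True
print(np.allclose(Q.T @ Q, np.eye(5)))                    # True
```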