

Direct Methods

In this section, we briefly discuss direct methods for computing the eigenvalues of symmetric matrices that can be stored in the computer as full (dense) matrices. These direct methods are sometimes called transformation methods, since they are built up around similarity transformations.

They are implemented in LAPACK [12], ScaLAPACK [52], and the eig command in MATLAB.

All the subspace-based algorithms we discuss later in this chapter need dense, tridiagonal, or banded matrix routines as inner iterations to compute Ritz approximations to the eigenvalues over a subspace.

The most common direct methods have two phases:

  1. Find an orthogonal matrix $Q$ such that $Q^{\ast} A Q = T$ is a tridiagonal matrix.
  2. Compute the eigendecomposition of the tridiagonal matrix $T$.

The initial reduction to tridiagonal form is performed by a sequence of $n-2$ orthogonal Householder reflections and takes $\frac{4}{3}n^3$ floating point operations, or $\frac{8}{3}n^3$ if eigenvectors are also desired. The symmetric and Hermitian driver routines of LAPACK start with this reduction: it is computed by the computational routines xSYTRD in the real case and xHETRD in the complex case (PxSYTRD and PxHETRD in ScaLAPACK). There are also implementations for packed and banded storage.
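For illustration, both phases can be exercised from Python through SciPy's interfaces to LAPACK. The names scipy.linalg.lapack.dsytrd and scipy.linalg.eigh_tridiagonal below are SciPy wrappers around the LAPACK routines mentioned above; the fragment is a minimal sketch, not a tuned driver.

import numpy as np
from scipy.linalg import lapack, eigh_tridiagonal

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                        # symmetrize the test matrix

# Phase 1: Householder reduction to tridiagonal form (LAPACK xSYTRD).
# d is the diagonal of T, e the off-diagonal; c and tau encode Q.
c, d, e, tau, info = lapack.dsytrd(A)
assert info == 0

# Phase 2: eigendecomposition of the tridiagonal matrix T.
w = eigh_tridiagonal(d, e, eigvals_only=True)

# Similarity preserves eigenvalues: T and A have the same spectrum.
print(np.allclose(w, np.linalg.eigvalsh(A)))

In practice, high-level drivers such as scipy.linalg.eigh bundle both phases in a single call.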

There is a choice of methods for finding the eigendecomposition of a tridiagonal matrix:

QR algorithm:
This algorithm finds all the eigenvalues and, optionally, all the eigenvectors. It takes $O(n^2)$ floating point operations to find all the eigenvalues of a tridiagonal matrix; since reducing a dense matrix to tridiagonal form costs $\frac{4}{3}n^3$ floating point operations, this $O(n^2)$ is negligible for large enough $n$. Finding all the eigenvectors as well takes a little over $6n^3$ floating point operations on average. The QR algorithm is implemented in LAPACK as xSTEQR (eigenvalues and optionally eigenvectors) and xSTERF (eigenvalues only).

This is the algorithm underlying the MATLAB command eig.
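As a sketch of how these routines are reached in practice, the following fragment assumes SciPy's scipy.linalg.eigh_tridiagonal and its documented lapack_driver option: 'sterf' selects xSTERF for eigenvalues only, while 'stev' calls xSTEV, which applies xSTEQR internally when eigenvectors are requested.

import numpy as np
from scipy.linalg import eigh_tridiagonal

d = 2.0 * np.ones(5)                     # diagonal of a test matrix T
e = -1.0 * np.ones(4)                    # off-diagonal of T

# Eigenvalues only: lapack_driver='sterf' reaches xSTERF.
w = eigh_tridiagonal(d, e, eigvals_only=True, lapack_driver='sterf')

# Eigenvalues and eigenvectors: 'stev' calls xSTEV, which uses
# xSTEQR internally when eigenvectors are requested.
w2, V = eigh_tridiagonal(d, e, lapack_driver='stev')
print(np.allclose(w, w2))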

Divide-and-conquer method:
It divides the tridiagonal matrix into two halves, solves the eigenproblem for each half, and glues the solutions together by solving a special rational equation (the secular equation). It is implemented in LAPACK as xSTEVD. xSTEVD can be many times faster than xSTEQR for large matrices but needs more workspace ($2n^2$ or $3n^2$).
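A minimal sketch using SciPy: the high-level scipy.linalg.eigh does not call xSTEVD itself, but its driver='evd' option selects xSYEVD, which performs the tridiagonal reduction followed by divide-and-conquer on the tridiagonal matrix.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

# driver='evd' selects xSYEVD: tridiagonal reduction followed by
# divide-and-conquer on the tridiagonal matrix.
w, V = eigh(A, driver='evd')
print(np.allclose(A @ V, V * w))         # check A V = V diag(w)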

Bisection method and inverse iteration:
Bisection may be used to find just a subset of the eigenvalues, say those in an interval $[a,b]$. It needs only $O(nk)$ floating point operations, where $k$ is the number of eigenvalues desired, so bisection can be much faster than the QR method when $k \ll n$. It can be highly accurate, but it may also be adjusted to run faster if lower accuracy is acceptable.

Inverse iteration can then be used to find the corresponding eigenvectors. In the best case, when the eigenvalues are well separated, inverse iteration also costs only $O(nk)$ floating point operations. This is much less than either QR or divide-and-conquer, even when all eigenvalues and eigenvectors are desired ($k=n$). On the other hand, when many eigenvalues are clustered close together, Gram-Schmidt orthogonalization of the computed vectors is needed to make sure that we do not get several nearly identical eigenvectors. This adds up to $O(nk^2)$ floating point operations to the operation count in the worst case.

Bisection method and inverse iteration are implemented in the LAPACK routines xSTEBZ and xSTEIN, and they are available as options in xSYEVX. In ScaLAPACK, they are subroutines PxSTEBZ and PxSTEIN.
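The following sketch, assuming SciPy's scipy.linalg.eigh_tridiagonal, forces the bisection path with lapack_driver='stebz' to compute only the eigenpairs in a given interval; xSTEIN then supplies the eigenvectors.

import numpy as np
from scipy.linalg import eigh_tridiagonal

n = 1000
d = 2.0 * np.ones(n)                     # 1-D Laplacian; spectrum in (0, 4)
e = -1.0 * np.ones(n - 1)

# Only the eigenpairs with eigenvalues in [0, 0.01] are computed:
# bisection (xSTEBZ) locates them, inverse iteration (xSTEIN)
# supplies the corresponding eigenvectors.
w, V = eigh_tridiagonal(d, e, select='v', select_range=(0.0, 0.01),
                        lapack_driver='stebz')
print(len(w), "eigenvalues found in [0, 0.01]")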

Relatively robust representation algorithm:
This algorithm uses an LDL$^T$ factorization of a number of translates $T - sI$ of $T$, with one shift $s$ near each cluster of eigenvalues. For each translate the algorithm computes very accurate eigenpairs for the eigenvalues that are tiny relative to the shifted matrix. It is implemented in LAPACK as xSTEGR. xSTEGR is faster than all the other routines except in a few cases and uses the least workspace. See [358,128,360].
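In later LAPACK releases xSTEGR was superseded by xSTEMR, and that is the path SciPy's eigh_tridiagonal reaches with lapack_driver='stemr'; the following sketch assumes that interface.

import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(2)
n = 1000
d = rng.standard_normal(n)               # random test tridiagonal matrix
e = rng.standard_normal(n - 1)

# lapack_driver='stemr' selects the RRR/MRRR path.
w, V = eigh_tridiagonal(d, e, lapack_driver='stemr')

# Eigenvectors orthogonal to working accuracy.
print(np.allclose(V.T @ V, np.eye(n)))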

Finally, a classical method for solving Hermitian eigenvalue problems is the Jacobi method. This method constructs an orthogonal transformation to diagonal form,

\begin{displaymath}A=X\Lambda X^{\ast}\end{displaymath}

by applying a sequence of elementary orthogonal (plane) rotations, each of which reduces the sum of squares of the off-diagonal elements, until the matrix is diagonal to working accuracy.

The Jacobi algorithm has been very popular, since it can be implemented as a very simple program and delivers eigenvectors that are orthogonal to working accuracy. However, it cannot compete with the QR method in terms of operation counts: Jacobi needs about $2sn^3$ multiplications for $s$ sweeps ($s = 3$ to $5$, usually), which is more than the $\frac{4}{3}n^3$ needed for tridiagonal reduction.

There is one important advantage to the Jacobi algorithm: properly implemented, it can deliver eigenvalue approximations with a small relative error, in contrast to algorithms based on tridiagonalization, which only guarantee that the error is bounded relative to the norm of the matrix (4.3). See [124].
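To make the method concrete, here is a minimal sketch of a cyclic Jacobi iteration in Python. The function jacobi_eigh and all of its details are illustrative, not taken from any library; a production code would apply each rotation in $O(n)$ operations rather than by full matrix products, and would use thresholding.

import numpy as np

def jacobi_eigh(A, tol=1e-12, max_sweeps=20):
    A = np.array(A, dtype=float)         # work on a copy
    n = A.shape[0]
    X = np.eye(n)                        # accumulates the rotations
    for _ in range(max_sweeps):
        off = np.sqrt(np.sum(A * A) - np.sum(np.diag(A) ** 2))
        if off <= tol * np.linalg.norm(A):
            break                        # off-diagonal mass is negligible
        for p in range(n - 1):
            for q in range(p + 1, n):
                if A[p, q] == 0.0:
                    continue
                # Rotation that annihilates A[p, q]: the tangent t
                # solves t^2 + 2*theta*t - 1 = 0 (smaller root).
                theta = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
                if theta >= 0.0:
                    t = 1.0 / (theta + np.sqrt(1.0 + theta ** 2))
                else:
                    t = 1.0 / (theta - np.sqrt(1.0 + theta ** 2))
                c = 1.0 / np.sqrt(1.0 + t ** 2)
                s = t * c
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q] = s
                J[q, p] = -s
                A = J.T @ A @ J          # similarity transform
                X = X @ J
    return np.diag(A).copy(), X

# The computed eigenvalues match LAPACK's to working accuracy.
B = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.5],
              [2.0, 0.5, 1.0]])
w, X = jacobi_eigh(B)
print(np.allclose(np.sort(w), np.linalg.eigvalsh(B)))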

