
Eigenvalue Problems

Eigenvalue problems have also provided a fertile ground for the development of higher-performance algorithms. These algorithms generally consist of three phases: (1) reduction of the original dense matrix to a condensed form by orthogonal transformations, (2) solution of the condensed form, and (3) optional backtransformation of the solution of the condensed form to the solution of the original matrix. In addition to block versions of algorithms for phases 1 and 3, a number of entirely new algorithms for phase 2 have recently been discovered. In particular, Version 3.0 of LAPACK includes new block algorithms for the singular value decomposition (SVD) and the SVD-based least squares solver, as well as a prototype of a new algorithm, xSTEGR [35,87,86,36], which may be the ultimate solution for the symmetric eigenproblem and SVD on both parallel and serial machines.

The first step in solving many types of eigenvalue problems is to reduce the original matrix to a condensed form by orthogonal transformations. In the reduction to condensed forms, the unblocked algorithms all use elementary Householder matrices and have good vector performance. Block forms of these algorithms have been developed [46], but all require additional operations, and a significant proportion of the work must still be performed by Level 2 BLAS, so there is less possibility of compensating for the extra operations.
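To make the reduction phase concrete, here is a deliberately unblocked sketch in Python/NumPy of Householder reduction of a symmetric matrix to tridiagonal form. It is illustrative only: each reflector is formed and applied as an explicit matrix, which a real implementation such as xSYTRD avoids, but it shows where the elementary Householder transformations and the two-sided updates come from.

```python
import numpy as np

def tridiagonalize(A):
    """Unblocked Householder reduction of a symmetric matrix to
    tridiagonal form, T = Q^T A Q. A sketch of what xSYTRD does one
    column at a time; for clarity each reflector is applied as an
    explicit matrix, which a real implementation avoids."""
    A = A.copy().astype(float)
    n = A.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        x = A[k + 1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        vnorm = np.linalg.norm(v)
        if vnorm == 0.0:        # column already in tridiagonal form
            continue
        v /= vnorm
        H = np.eye(n)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)
        A = H @ A @ H           # two-sided symmetric update
        Q = Q @ H
    return A, Q
```

Each two-sided update touches the whole trailing submatrix, which is why the unblocked algorithm is dominated by Level 2 BLAS-like matrix-vector work.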

The algorithms concerned are:

- reduction of a symmetric matrix to tridiagonal form (xSYTRD), the first step in solving the symmetric eigenvalue problem;
- reduction of a general matrix to bidiagonal form (xGEBRD), the first step in computing the SVD;
- reduction of a general matrix to Hessenberg form (xGEHRD), the first step in solving the nonsymmetric eigenvalue problem.

Note that only in the reduction to Hessenberg form is it possible to use the block Householder representation described in subsection 3.4.2. Extra work must be performed to compute the n-by-b matrices X and Y that are required for the block updates (b is the block size), and extra workspace is needed to store them.
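The block Householder representation itself is easy to verify numerically. The following NumPy sketch builds the triangular factor T of the compact form $I - VTV^T$ column by column, using the standard forward recurrence for each new column, and checks it against the explicit product of reflectors. The Householder vectors here are random, purely for illustration; this is not LAPACK's xLARFT, only the idea behind it.

```python
import numpy as np

# Numerical check of the compact block Householder (WY) form: a product
# of b Householder matrices H_i = I - tau_i v_i v_i^T equals I - V T V^T
# with V = [v_1 ... v_b] and T a b-by-b upper triangular matrix.
rng = np.random.default_rng(5)
n, bsz = 10, 4
V = np.zeros((n, bsz))
Tmat = np.zeros((bsz, bsz))
Hprod = np.eye(n)
for i in range(bsz):
    v = np.zeros(n)
    v[i] = 1.0                                  # LAPACK convention: v_i has
    v[i + 1:] = rng.standard_normal(n - i - 1)  # unit i-th entry, zeros above
    tau = 2.0 / (v @ v)                         # makes H_i orthogonal
    Hprod = Hprod @ (np.eye(n) - tau * np.outer(v, v))
    # standard forward recurrence for the new column of T
    Tmat[:i, i] = -tau * (Tmat[:i, :i] @ (V[:, :i].T @ v))
    Tmat[i, i] = tau
    V[:, i] = v

assert np.allclose(np.eye(n) - V @ Tmat @ V.T, Hprod)
```

Once T is available, applying all b reflectors at once reduces to a few matrix-matrix products, which is the source of the Level 3 performance.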

Nevertheless, the performance gains can be worthwhile on some machines for large enough matrices, for example, on an IBM Power 3, as shown in Table 3.11.

Table 3.11: Speed in megaflops of reductions to condensed forms on an IBM Power 3
(all matrices are square of order n)

           Block     Values of n
           size     100     1000
  DSYTRD     1      356      418
            32      192      499
  DGEBRD     1      269      241
            32      179      342
  DGEHRD     1      277      250
            32      277      471

Following the reduction of a dense (or band) symmetric matrix to tridiagonal form T, we must compute the eigenvalues and (optionally) eigenvectors of T. Computing the eigenvalues of T alone (using LAPACK routine xSTERF) requires $O(n^2)$ flops, whereas the reduction routine xSYTRD does $\frac{4}{3}n^3 + O(n^2)$ flops. So eventually the cost of finding eigenvalues alone becomes small compared to the cost of the reduction. However, xSTERF does only scalar floating point operations, with no scope for the BLAS, so n may have to be large before xSYTRD is slower than xSTERF.
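A back-of-envelope version of this cost comparison is easy to write down. The $\frac{4}{3}n^3$ term is xSYTRD's reduction cost from the text; the constant in xSTERF's $O(n^2)$ term is not given there, so an arbitrary illustrative value is assumed below.

```python
# Back-of-envelope comparison of the two costs quoted above.
STERF_CONST = 9.0  # constant in xSTERF's O(n^2) term: assumed, for illustration only

def eigenvalue_cost_fraction(n):
    """Fraction of the total flops spent in the tridiagonal
    eigenvalue phase (xSTERF), versus the reduction (xSYTRD)."""
    reduce_flops = 4.0 / 3.0 * n ** 3
    sterf_flops = STERF_CONST * n ** 2
    return sterf_flops / (sterf_flops + reduce_flops)

for n in (100, 1000, 10000):
    print(n, eigenvalue_cost_fraction(n))
```

Whatever the constant, the fraction falls like $1/n$, so for large n nearly all the time goes into the reduction.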

Version 2.0 of LAPACK introduced a new algorithm, xSTEDC, for finding all eigenvalues and eigenvectors of T. The new algorithm can exploit Level 2 and 3 BLAS, whereas the previous algorithm, xSTEQR, could not. Furthermore, xSTEDC usually does many fewer flops than xSTEQR, so the speedup is compounded. Briefly, xSTEDC works as follows (for details, see [57,89]). The tridiagonal matrix T is written as

\begin{displaymath}
T = \left( \begin{array}{cc} T_1 & 0 \\ 0 & T_2 \end{array} \right) + H
\end{displaymath}

where T1 and T2 are tridiagonal, and H is a very simple rank-one matrix. Then the eigenvalues and eigenvectors of T1 and T2 are found by applying the algorithm recursively; this yields $T_1 = Q_1 \Lambda_1 Q_1^T$ and $T_2 = Q_2 \Lambda_2 Q_2^T$, where $\Lambda_i$ is a diagonal matrix of eigenvalues, and the columns of Qi are orthonormal eigenvectors. Thus

\begin{displaymath}
T = \left( \begin{array}{cc} Q_1 \Lambda_1 Q_1^T & 0 \\ 0 & Q_2 \Lambda_2 Q_2^T \end{array} \right) + H
  = \left( \begin{array}{cc} Q_1 & 0 \\ 0 & Q_2 \end{array} \right)
    \left( \left( \begin{array}{cc} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{array} \right) + H' \right)
    \left( \begin{array}{cc} Q_1^T & 0 \\ 0 & Q_2^T \end{array} \right)
\end{displaymath}

where H' is again a simple rank-one matrix. The eigenvalues and eigenvectors of $\left( \begin{array}{cc} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{array} \right) + H'$ may be found using $O(n^2)$ scalar operations, yielding
\begin{displaymath}
\left( \begin{array}{cc} \Lambda_1 & 0 \\ 0 & \Lambda_2 \end{array} \right) + H' =
\hat{Q} \Lambda \hat{Q}^T \; .
\end{displaymath}
Substituting this into the last displayed expression yields

\begin{displaymath}
T = \left( \begin{array}{cc} Q_1 & 0 \\ 0 & Q_2 \end{array} \right)
    \hat{Q} \Lambda \hat{Q}^T
    \left( \begin{array}{cc} Q_1^T & 0 \\ 0 & Q_2^T \end{array} \right)
  = Q \Lambda Q^T \; ,
\end{displaymath}

where the diagonals of $\Lambda$ are the desired eigenvalues of T, and the columns of $Q = \left( \begin{array}{cc} Q_1 & 0 \\ 0 & Q_2 \end{array} \right) \hat{Q}$ are the eigenvectors. Almost all the work is done in the two matrix multiplies of Q1 and Q2 times $\hat{Q}$, which are performed using the Level 3 BLAS.
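The splitting and reassembly can be checked numerically. The following NumPy sketch is illustrative only (the real xSTEDC works on the tridiagonal arrays directly and solves a secular equation rather than calling a dense eigensolver): it builds T from a diagonal a and off-diagonal b, splits at an arbitrary index m, and verifies both the rank-one decomposition and the reassembled eigendecomposition.

```python
import numpy as np

# Illustration of the splitting used by xSTEDC: a tridiagonal T equals
# the direct sum of two smaller tridiagonals plus a rank-one correction.
rng = np.random.default_rng(1)
n, m = 8, 4                      # split point m, chosen arbitrarily here
a = rng.standard_normal(n)       # diagonal of T
b = rng.standard_normal(n - 1)   # off-diagonal of T
T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

# T1, T2: leading/trailing blocks with the coupling entry beta
# subtracted from the two adjacent diagonal entries
beta = b[m - 1]
T1 = T[:m, :m].copy(); T1[-1, -1] -= beta
T2 = T[m:, m:].copy(); T2[0, 0] -= beta
# H = beta * v v^T with v = e_m + e_{m+1} is rank one
v = np.zeros(n); v[m - 1] = v[m] = 1.0
H = beta * np.outer(v, v)

block = np.zeros((n, n))
block[:m, :m] = T1; block[m:, m:] = T2
assert np.allclose(block + H, T)

# solve the halves, then conjugate: what remains is diagonal plus rank one
w1, Q1 = np.linalg.eigh(T1)
w2, Q2 = np.linalg.eigh(T2)
Q = np.zeros((n, n)); Q[:m, :m] = Q1; Q[m:, m:] = Q2
M = np.diag(np.concatenate([w1, w2])) + beta * np.outer(Q.T @ v, Q.T @ v)
assert np.allclose(Q @ M @ Q.T, T)
```

In the real algorithm the eigenproblem for M (diagonal plus rank one) is solved by the $O(n^2)$ secular-equation solver, and the final Level 3 work is the multiplication by the block-diagonal Q.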

The same recursive algorithm has been developed for the singular value decomposition of the bidiagonal matrix resulting from reducing a dense matrix with xGEBRD. The SVD driver using this algorithm is called xGESDD. This recursive algorithm is also used for the SVD-based linear least squares solver xGELSD; see Figure 3.3 for how much faster DGELSD is than its older counterpart DGELSS. Comparison timings of DGESVD and DGESDD can be found in Tables 3.18 and 3.19.
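From Python, NumPy's `np.linalg.lstsq` is documented (in its Notes) to call LAPACK's xGELSD, so the SVD-based least squares solver discussed here can be exercised without calling LAPACK directly:

```python
import numpy as np

# numpy.linalg.lstsq is documented to use LAPACK's gelsd driver,
# i.e. the SVD-based least squares solver discussed above.
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
x, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
# least squares optimality: the residual is orthogonal to range(A)
assert np.allclose(A.T @ (A @ x - b), 0.0, atol=1e-8)
```

The returned `sv` are the singular values of A, which gelsd computes as part of the solve.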

Version 3.0 of LAPACK introduced another new algorithm, xSTEGR, for finding all the eigenvalues and eigenvectors of a symmetric tridiagonal matrix. It is usually even faster than xSTEDC described above, and we expect it ultimately to replace all other LAPACK algorithms for the symmetric eigenvalue problem and SVD. Here is a rough description of how it works; for details see [35,87,86,36].

It is easiest to think of xSTEGR as a variation on xSTEIN, inverse iteration. If all the eigenvalues were well separated, xSTEIN would run in $O(n^2)$ time. But it is difficult for xSTEIN to compute accurate eigenvectors belonging to close eigenvalues, those that agree in four or more leading decimals with their neighbors, and xSTEIN slows down because it must reorthogonalize the corresponding eigenvectors. xSTEGR escapes this difficulty by exploiting the invariance of eigenvectors under translation (shifting).
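A minimal dense sketch of inverse iteration, the idea behind xSTEIN (illustrative only; the real routine works on the tridiagonal arrays with careful scaling and reorthogonalization):

```python
import numpy as np

def inverse_iteration(T, lam, iters=3):
    """Minimal dense inverse iteration: repeatedly solve
    (T - lam*I) z = z and normalize. Because lam is an accurate
    eigenvalue, T - lam*I is nearly singular and each solve hugely
    amplifies the wanted eigenvector direction."""
    n = T.shape[0]
    z = np.random.default_rng(3).standard_normal(n)
    for _ in range(iters):
        z = np.linalg.solve(T - lam * np.eye(n), z)
        z /= np.linalg.norm(z)
    return z
```

When eigenvalues are well separated this converges in one or two steps; when they are clustered, the computed vectors for nearby eigenvalues are nearly parallel, which is exactly the reorthogonalization problem described above.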

For each cluster $\cal C$ of close eigenvalues the algorithm chooses a shift s at one end of $\cal C$, or just outside, with the property that T - sI permits triangular factorization $LDL^T = T - sI$ such that the small shifted eigenvalues ($\lambda - s$ for $\lambda \in {\cal C}$) are determined to high relative accuracy by the entries in L and D. Note that the small shifted eigenvalues will have fewer digits in common than those in $\cal C$. The algorithm computes these small shifted eigenvalues to high relative accuracy, either by xLASQ2 or by refining earlier approximations using bisection. This means that each computed $\hat{\lambda}$ approximating a $\lambda - s$ has error bounded by $O( \varepsilon \vert \hat{\lambda} \vert )$.

The next task is to compute an eigenvector for $\lambda - s$. For each $\hat{\lambda}$ the algorithm computes, with care, an optimal twisted factorization

\begin{eqnarray*}
LDL^T - \hat{\lambda} I & = & N_r \Delta_r N_r^T , \\
\Delta_r & = & {\rm diag}(\delta_1, \ldots, \delta_n) ,
\end{eqnarray*}

obtained by implementing triangular factorization both from top down and from bottom up and joining them at a well-chosen index r. An approximate eigenvector z is obtained by solving $N_r^T z = e_r$, where $e_r$ is column r of I. It turns out that $N_r \Delta_r N_r^T z = e_r \delta_r$ and

\begin{displaymath}
\Vert ( LDL^T - \hat{\lambda} I ) z \Vert =
\vert {\rm error\ in\ } \hat{\lambda} \vert / \Vert u \Vert _{\infty} + \cdots
\end{displaymath}

where $\cdots$ indicates smaller terms, $Tu = \lambda u$, and $u^T u = 1$. From the basic gap theorem [85],

\begin{displaymath}
\vert \sin \theta (u,z) \vert \leq
O( \epsilon \vert \hat{\lambda} \vert ) /
( \Vert u \Vert _{\infty} \vert \hat{\lambda} - \hat{\mu} \vert ) + \cdots
\end{displaymath}

where $\hat{\mu}$ is $\hat{\lambda}$'s neighbor in the shifted spectrum. Given this nice bound the algorithm computes z for all $\lambda$ such that the relative gap $\vert\lambda - \mu\vert/\vert\lambda - s\vert > 10^{-3}$. These z's have an error that is $O(\epsilon)$.
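The twisted-factorization construction can be sketched for a symmetric tridiagonal matrix as follows. This NumPy version is illustrative only: the real algorithm uses differential qd recurrences on an $LDL^T$ translate to preserve high relative accuracy, whereas here $T - \hat{\lambda} I$ is factored directly in the obvious way.

```python
import numpy as np

def twisted_eigvec(a, b, lam):
    """Sketch of the twisted-factorization eigenvector computation for a
    symmetric tridiagonal matrix with diagonal a and off-diagonal b,
    given an accurate eigenvalue lam. The two factorizations are joined
    at the index r where the twist element gamma_r is smallest."""
    n = len(a)
    dp = np.empty(n)                 # pivots of the top-down factorization
    dp[0] = a[0] - lam
    for i in range(n - 1):
        dp[i + 1] = a[i + 1] - lam - b[i] ** 2 / dp[i]
    dm = np.empty(n)                 # pivots of the bottom-up factorization
    dm[n - 1] = a[n - 1] - lam
    for i in range(n - 2, -1, -1):
        dm[i] = a[i] - lam - b[i] ** 2 / dm[i + 1]
    gamma = dp + dm - (a - lam)      # twist elements
    r = int(np.argmin(np.abs(gamma)))
    z = np.zeros(n)
    z[r] = 1.0
    for i in range(r - 1, -1, -1):   # solve N_r^T z = e_r: upward sweep
        z[i] = -(b[i] / dp[i]) * z[i + 1]
    for i in range(r + 1, n):        # downward sweep
        z[i] = -(b[i - 1] / dm[i]) * z[i - 1]
    return z / np.linalg.norm(z)
```

Joining at the smallest twist element is what makes the residual of z as small as the error in $\hat{\lambda}$ allows, which is the property the gap-theorem bound above exploits.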

The procedure described above is repeated for any eigenvalues that remain without eigenvectors. It can be shown that all the computed z's are very close to eigenvectors of small relative perturbations of one global Cholesky factorization GGT of a translate $T - \sigma I$ of T. A key component of the algorithm is the use of recently discovered differential qd algorithms to ensure that the twisted factorizations described above are computed without ever forming the indicated matrix products such as LDLT [51].

For computing the eigenvalues and eigenvectors of a Hessenberg matrix (or rather, its Schur factorization), yet another flavor of block algorithm has been developed: a multishift QR iteration [9]. Whereas the traditional EISPACK routine HQR uses a double shift (and the corresponding complex routine COMQR uses a single shift), the multishift algorithm uses block shifts of higher order. It has been found that the total number of operations often decreases as the order of shift is increased until a minimum is reached, typically between 4 and 8; for higher orders the number of operations increases quite rapidly. On many machines the speed of applying the shift increases steadily with the order, so the optimum order of shift is typically in the range 8-16. Note, however, that the performance can be very sensitive to the choice of the order of shift; it also depends on the numerical properties of the matrix. Dubrulle [49] has studied the practical performance of the algorithm, while Watkins and Elsner [101] discuss its theoretical asymptotic convergence rate.
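For flavor, here is the scalar (single-shift) version of the iteration in NumPy. The multishift algorithm applies several shifts at once through a block reflector, but the basic factor-and-recombine step is the same. This sketch deliberately omits deflation and convergence tests, and uses a simple Rayleigh shift rather than the Wilkinson or multishift strategies used in practice.

```python
import numpy as np

def shifted_qr(H, iters=200):
    """Toy single-shift explicit QR iteration, the scalar analogue of the
    multishift scheme: with shift s, factor H - s*I = QR and recombine
    as RQ + s*I, an orthogonal similarity that preserves Hessenberg
    form. Illustrative only: no deflation or convergence test."""
    H = H.copy().astype(float)
    n = H.shape[0]
    I = np.eye(n)
    for _ in range(iters):
        s = H[-1, -1]                 # Rayleigh-quotient shift
        Q, R = np.linalg.qr(H - s * I)
        H = R @ Q + s * I
    return H
```

On a symmetric tridiagonal matrix (a special case of Hessenberg form) the bottom subdiagonal entry collapses rapidly toward zero, after which a real code would deflate and continue on the leading block.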

Finally, we note that research into block algorithms for symmetric and nonsymmetric eigenproblems continues [11,68], and future versions of LAPACK will be updated to contain the best algorithms available.

Susan Blackford