Acceleration.

If eigenvalues near a shift $\sigma$ are desired and a factorization $A - \sigma \, I=LU$ can be obtained cheaply (see §10.3), then one can apply the above algorithm to $\left(A - \sigma \, I\right)^{-1}$. The eigenvalues of $A$ nearest $\sigma$ map to the eigenvalues of largest magnitude of the inverse, so they converge fastest.
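As a sketch of this shift-and-invert variant (the function and parameter names are ours, not from the text; SciPy supplies the LU factorization), each iteration replaces the multiplication by $A$ with two triangular solves against the factored $A - \sigma I$:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def shift_invert_subspace_iteration(A, sigma, p, num_iters=50):
    """Subspace iteration applied to (A - sigma*I)^{-1}.

    Illustrative sketch: eigenvalues of A nearest the shift sigma
    become the largest-magnitude eigenvalues of the inverse, so the
    corresponding eigenvectors converge fastest.
    """
    n = A.shape[0]
    # Factor A - sigma*I once; each iteration then needs only
    # triangular solves, never an explicit inverse.
    lu, piv = lu_factor(A - sigma * np.eye(n))
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for _ in range(num_iters):
        Z = lu_solve((lu, piv), Q)   # Z = (A - sigma*I)^{-1} Q
        Q, _ = np.linalg.qr(Z)       # reorthonormalize the basis
    # Rayleigh-Ritz projection: eigenvalues of the small matrix
    # approximate the p eigenvalues of A closest to sigma.
    H = Q.T @ A @ Q
    return np.sort(np.linalg.eigvals(H).real)
```

For example, with $A = \mathrm{diag}(1,\dots,10)$ and $\sigma = 4.2$, the two Ritz values returned are close to $4$ and $5$, the eigenvalues nearest the shift.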

One can also use polynomial acceleration to speed up the computation, replacing the power $A^{m} $ by a polynomial $T_m [ (A - \sigma I) / \rho ] $, in which $T_m$ is the Chebyshev polynomial of the first kind of degree $m$, and $\sigma$ and $\rho$ translate and scale the part of the spectrum one wants to suppress. We assume that the eigenvalues are ordered along the real axis and that we want $p$ of them at one end. Ideally, given reasonable estimates of the unwanted eigenvalues, one takes $\sigma=(\lambda_{p+1}+\lambda_n)/2$ as the center and $\rho=(\lambda_{p+1}-\lambda_n)/2$ as the half-width of the interval containing them: $\vert T_m\vert$ is bounded by $1$ on that interval but grows rapidly outside it, so the wanted part of the spectrum is strongly amplified.
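The polynomial $T_m [ (A - \sigma I)/\rho ]$ is never formed explicitly; it is applied to a block of vectors via the Chebyshev three-term recurrence $T_{k+1}(x) = 2 x\, T_k(x) - T_{k-1}(x)$. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def chebyshev_filter(A, V, m, sigma, rho):
    """Apply T_m[(A - sigma*I)/rho] to the block V using the
    three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x).

    Illustrative sketch: eigenvalues inside the unwanted interval
    [sigma - rho, sigma + rho] map into [-1, 1], where |T_m| <= 1,
    while T_m grows rapidly outside, amplifying the wanted part
    of the spectrum.
    """
    n = A.shape[0]
    B = (A - sigma * np.eye(n)) / rho   # unwanted interval -> [-1, 1]
    T_prev = V                          # T_0(B) V = V
    T_curr = B @ V                      # T_1(B) V = B V
    for _ in range(2, m + 1):
        T_prev, T_curr = T_curr, 2.0 * (B @ T_curr) - T_prev
    return T_curr if m >= 1 else T_prev
```

In a subspace iteration, each pass applies the filter to the current orthonormal basis and reorthonormalizes; for $A = \mathrm{diag}(1,\dots,10)$ with the two largest eigenvalues wanted, the estimates $\sigma = (8+1)/2$ and $\rho = (8-1)/2$ damp the unwanted eigenvalues $1,\dots,8$ while $9$ and $10$ are amplified.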

With these enhancements, subspace iteration may be a reasonably efficient method that has the advantage of being easy to code and to understand. Some of the methods to be discussed later are often preferred, however, because they tend to find eigenvalues/eigenvectors more quickly.

Much of the material in this section is drawn from Demmel [114], Golub and Van Loan [198], and Saad [387]. For further discussion of subspace iteration, the reader is referred to Chatelin [79], Lehoucq and Scott [292], Stewart [422], and Wilkinson [457]. See also Bathe and Wilson [42] and Jennings [242] for structural engineering approaches.

Susan Blackford 2000-11-20