For computing several eigenvalues, MIC is not as good. We look at computing the five smallest eigenvalues. Again the preconditioners come from incomplete factorizations of the matrix, with no attempt to adjust for the particular eigenvalue being computed. This time we restart the Davidson methods, with maximum subspaces of dimension 20, but do not restart Lanczos (if eigenvectors are desired, Lanczos might also need to be restarted). Relatively speaking, Lanczos does better at computing several eigenvalues, because the Davidson method targets one eigenvalue at a time. IC(0) uses 159 iterations, MIC(0) requires 158, and Lanczos 172. However, it should be noted that Lanczos misses the fact that one of the small eigenvalues is double, while the Davidson approaches do compute two eigenvalues at that point.
    n | Lanczos | IC(0) | MIC(0)
   25 |      13 |    11 |     12
   49 |      25 |    14 |     15
  100 |      36 |    17 |     17
  196 |      51 |    24 |     20
  400 |      69 |    27 |     22
  784 |      99 |    40 |     26
 1600 |     137 |    49 |     30
 3136 |     182 |    64 |     34
 6400 |     287 |   101 |     40
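To make the kind of iteration being counted here concrete, the following is a minimal sketch (in Python with NumPy/SciPy) of a restarted generalized Davidson iteration for the smallest eigenvalue, in which the expansion vector at each step is simply the preconditioned residual. The model Laplacian, the SuperLU incomplete LU used as a rough stand-in for the IC(0)/MIC(0) factorizations, the restart size of 20, and all function names are illustrative assumptions, not the exact setup of the experiments reported above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def model_laplacian(m):
    """Standard 5-point Laplacian on an m-by-m grid, so n = m*m."""
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    I = sp.identity(m)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsc()

def davidson_smallest(A, M_solve, maxdim=20, tol=1e-8, maxiter=200, seed=0):
    """Approximate the smallest eigenpair of a symmetric matrix A.

    M_solve(r) applies the preconditioner (e.g. an incomplete-factorization
    solve); the subspace is restarted once it reaches dimension maxdim."""
    n = A.shape[0]
    v = np.random.default_rng(seed).standard_normal(n)
    V = (v / np.linalg.norm(v)).reshape(n, 1)
    for _ in range(maxiter):
        H = V.T @ (A @ V)                     # Rayleigh-Ritz projection
        w, S = np.linalg.eigh(H)
        theta, y = w[0], V @ S[:, 0]          # smallest Ritz pair
        r = A @ y - theta * y                 # residual
        if np.linalg.norm(r) < tol:
            break
        t = M_solve(r)                        # step (3): preconditioned residual
        if V.shape[1] >= maxdim:              # restart with the current Ritz vector
            V = y.reshape(n, 1)
        t = t - V @ (V.T @ t)                 # orthogonalize against the basis
        V = np.hstack([V, (t / np.linalg.norm(t)).reshape(n, 1)])
    return theta, y

A = model_laplacian(20)                       # n = 400
ilu = spla.spilu(A, drop_tol=1e-2)            # incomplete LU: rough stand-in for IC(0)/MIC(0)
theta, y = davidson_smallest(A, ilu.solve)
print("approximate smallest eigenvalue:", theta)
```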
For a sparse matrix such as in Example 11.2.2, more than just the number of iterations must be considered. The Lanczos algorithm can be much cheaper per iteration than the Davidson approach. This motivates the development of the preconditioned Lanczos method, which takes advantage of preconditioning while retaining much of the efficiency of the Lanczos approach; it is discussed in §11.2.6. Efficiency issues also lead to connections with the Jacobi-Davidson method. One way to reduce the number of iterations required by the Davidson method, and thus reduce the overhead of the Rayleigh-Ritz procedure, is to develop an extremely good preconditioner. An iterative method can be used to approximately solve for the new vector in step (3) of Algorithm 11.2, and this solution can be made as accurate as desired.
In the symmetric case, the conjugate gradient method can be used as the iterative method, and it can itself be preconditioned. As with the preconditioned Lanczos method, we then have an inner loop that is efficient in terms of both computation and storage requirements.
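As a rough illustration of such an inner-outer scheme, the sketch below replaces the single preconditioner solve in the earlier Davidson sketch with a few steps of (optionally preconditioned) conjugate gradients applied to the shifted system; solving that system very accurately corresponds to the "extremely good preconditioner" limit. The shift, the inner iteration limit, and the function names are assumptions made only for illustration.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def inner_cg_direction(A, theta, r, M_solve=None, inner_maxiter=10):
    """Approximately solve (A - theta*I) t = r with a few CG steps.

    Intended to be used in place of t = M_solve(r) in the Davidson sketch
    above; M_solve, if given, preconditions the inner iteration.  The small
    iteration limit keeps the solve cheap and deliberately inexact."""
    n = A.shape[0]
    shifted = A - theta * sp.identity(n, format="csc")
    M = spla.LinearOperator((n, n), matvec=M_solve) if M_solve is not None else None
    # CG follows the symmetric case discussed in the text; if A - theta*I
    # is indefinite (interior shift), MINRES would be the safer choice.
    t, _ = spla.cg(shifted, r, maxiter=inner_maxiter, M=M)
    return t
```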
There is, however, danger in developing a preconditioner that is too accurate. An exact solve of the shifted system simply reproduces the approximate eigenvector, which is already in the subspace (the residual is the shifted operator applied to the Ritz vector, so inverting that operator exactly gives the Ritz vector back). This could be dealt with by instead solving a system whose shift differs from the current Ritz value, as in the inexact Cayley transform. However, in the Jacobi-Davidson method a different approach is taken; the approximate eigenvector is deflated from the system being solved. Discussion of connections with Jacobi-Davidson is continued in the next subsection.
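The deflation idea can be sketched, in the notation of the earlier Python examples, by projecting the current Ritz vector out of the shifted operator on both sides and solving the projected system only approximately. This is an illustration of the idea, not the full Jacobi-Davidson algorithm of the next subsection; the use of MINRES and the iteration limit are arbitrary choices.

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def deflated_correction(A, theta, y, r, inner_maxiter=10):
    """Approximately solve the projected (deflated) correction equation

        (I - y y^T)(A - theta*I)(I - y y^T) t = -r,   t orthogonal to y,

    where y is the current (unit-norm) Ritz vector, so that an accurate
    solve no longer just reproduces y."""
    n = A.shape[0]
    shifted = A - theta * sp.identity(n, format="csc")

    def apply(v):
        v = v - y * (y @ v)          # project out the Ritz vector
        w = shifted @ v
        return w - y * (y @ w)       # ...and project the result as well

    op = spla.LinearOperator((n, n), matvec=apply)
    t, _ = spla.minres(op, -r, maxiter=inner_maxiter)   # a few inexact steps
    return t - y * (y @ t)           # keep the correction orthogonal to y
```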
Davidson's method can have difficulty when the starting vector or starting subspace is not very accurate. In [335] and [172], it is suggested that the shift used in the preconditioner initially be set to a fixed approximate value near the desired eigenvalues. As mentioned in §11.1, another possibility is to use a Krylov subspace method in an initial phase to generate starting vectors for the Davidson method.
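One possible realization of such a Krylov warm start, continuing the Python sketches above: a few Lanczos-type steps with full reorthogonalization, followed by a Rayleigh-Ritz extraction whose Ritz vector seeds the Davidson iteration. The step count and the function name are illustrative choices, not prescribed by the text.

```python
import numpy as np

def krylov_starting_vector(A, k=10, seed=0):
    """Run k Lanczos-like steps (full reorthogonalization, for simplicity)
    and return the Ritz vector for the smallest Ritz value, to be used as a
    starting vector for the Davidson iteration."""
    n = A.shape[0]
    v = np.random.default_rng(seed).standard_normal(n)
    V = [v / np.linalg.norm(v)]
    for _ in range(k):
        w = A @ V[-1]
        for u in V:                   # full reorthogonalization
            w = w - u * (u @ w)
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:               # Krylov space exhausted
            break
        V.append(w / nrm)
    V = np.column_stack(V)
    H = V.T @ (A @ V)                 # Rayleigh-Ritz on the Krylov subspace
    _, S = np.linalg.eigh(H)
    return V @ S[:, 0]                # Ritz vector for the smallest Ritz value
```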