It is well known that the iterative projection methods outlined in the
previous section rapidly provide approximations to well-separated
extremal eigenvalues. Quite often, however, the wanted eigenvalues
may not be well separated, or they may lie in the interior of the convex
hull of the eigenvalues. In these situations, iterative projection
methods need many steps to generate acceptable approximations
to these eigenvalues, if they converge at all. An alternative is to employ a
spectral transformation of $A$ so that these poorly separated
eigenvalues are transformed to well-separated extremal
eigenvalues. Of course, the eigenvalues of $A$ should be easily
recoverable from the eigenvalues of the transformed problem, and
the transformation should be efficiently applicable in terms of
computer time and storage.
The notion of well-separated (or nonclustered) eigenvalues can be
roughly explained as follows. A well-separated eigenvalue
$\lambda$ is such that
$|\lambda - \hat{\lambda}| \geq \tau \|A\|$ for all
$\hat{\lambda} \in \lambda(A)$ with
$\hat{\lambda} \neq \lambda$. For Hermitian matrices, we suppose that
$\tau \gg \epsilon_M$, while for non-Hermitian
matrices, $\tau \gg \sqrt{\epsilon_M}$, where $\epsilon_M$ denotes the
machine precision.
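This criterion is easy to check numerically. The sketch below, based on the separation criterion stated above, uses a hypothetical helper `well_separated` (not part of any library) that flags each eigenvalue whose distance to every other eigenvalue is at least $\tau \|A\|$:

```python
import numpy as np

# Hedged illustration of the separation criterion above: an eigenvalue is
# treated as well separated when its distance to every other eigenvalue is
# at least tau * ||A||, for a threshold tau well above machine precision.
def well_separated(eigvals, norm_A, tau):
    eigvals = np.asarray(eigvals)
    gaps = np.abs(eigvals[:, None] - eigvals[None, :])
    np.fill_diagonal(gaps, np.inf)       # ignore each eigenvalue's zero self-gap
    return gaps.min(axis=1) >= tau * norm_A

vals = np.array([1.0, 1.0 + 1e-12, 5.0, 9.0])
flags = well_separated(vals, norm_A=9.0, tau=1e-8)
print(flags)   # the clustered pair fails the test; 5.0 and 9.0 pass
```

The clustered pair near 1.0 is separated by only $10^{-12}$, far below $\tau \|A\| = 9 \times 10^{-8}$, so it is flagged as nonclustered only for the two isolated eigenvalues.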
The shift-and-invert spectral transformation (SI) is the one typically
used. If $(\lambda, x)$ is an eigenpair for
$Ax = \lambda x$ and $\sigma \neq \lambda$,
then the shift-and-invert eigenvalue
problem is
$$(A - \sigma I)^{-1} x = \nu x, \qquad \nu = \frac{1}{\lambda - \sigma}.$$
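The effect of the transformation can be verified directly: the eigenvectors are unchanged, each eigenvalue $\lambda$ of $A$ maps to $\nu = 1/(\lambda - \sigma)$, and the eigenvalue of $A$ nearest the shift becomes the extremal (largest-magnitude) eigenvalue of the transformed operator. A minimal dense check, with an arbitrarily chosen test spectrum and shift:

```python
import numpy as np

# Small symmetric matrix with known spectrum {-1, 0, 1, 2, 4} (a rotated diagonal).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
lam = np.array([-1.0, 0.0, 1.0, 2.0, 4.0])
A = Q @ np.diag(lam) @ Q.T

sigma = 0.9                      # shift chosen close to the interior eigenvalue 1
nu = np.linalg.eigvals(np.linalg.inv(A - sigma * np.eye(5)))

# The SI eigenvalues are nu = 1/(lam - sigma); the eigenvalue of A nearest the
# shift (here 1) becomes the largest-magnitude eigenvalue nu = 1/(1 - 0.9) = 10.
assert np.allclose(np.sort(1.0 / (lam - sigma)), np.sort(nu.real))
print(np.max(np.abs(nu)))
```

Note how the interior eigenvalue $1$, which is not extremal for $A$, is mapped to $\nu = 10$, well separated from the other transformed eigenvalues (all of magnitude at most about $1.1$).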
The SI is extremely effective in terms of iteration steps (that is, the dimension of the subspace) and should be used whenever possible. This is particularly the case when interior eigenvalues are sought, when the desired eigenvalues are much smaller in magnitude than the largest ones, or when the desired eigenvalues are clustered.
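In practice the transformation is usually left to the eigensolver. As one concrete illustration (assuming SciPy's ARPACK wrapper), passing `sigma` to `scipy.sparse.linalg.eigsh` activates shift-invert mode, so the interior eigenvalues nearest the shift are computed as extremal eigenvalues of the transformed operator:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Test matrix with known eigenvalues 1, 2, ..., 100 (diagonal, so the spectrum
# is explicit); in a realistic problem A would be a general sparse matrix.
A = sp.diags(np.arange(1.0, 101.0)).tocsc()

# Shift-invert mode: eigsh factors (A - sigma*I) internally and returns the
# eigenvalues of A closest to the shift sigma -- here interior eigenvalues.
vals = eigsh(A, k=3, sigma=50.2, return_eigenvectors=False)
print(np.sort(vals))   # the three eigenvalues nearest 50.2: 49, 50, 51
```

Without the shift, `eigsh` would converge first to the extremal eigenvalues near 1 and 100; with it, the interior cluster around 50 is obtained in a handful of iterations.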
The potential drawback of a spectral transformation is that linear
systems involving $A - \sigma I$ must be solved. This can
be accomplished either via a matrix factorization or with an iterative
method. A sparse direct method should be used to factor the
appropriate matrix whenever feasible. If an iterative method is
used for the linear system solves, the accuracy of the solutions
must be commensurate with the convergence tolerance used for the
eigensolver. A heuristic is that a slightly more stringent
tolerance is needed for the iterative linear system solves
(relative to the desired accuracy of the eigenvalue calculation).
See [283,291,414] for a discussion and further
references.
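When a factorization is feasible, it is computed once and reused at every iteration. A sketch of this pattern with SciPy (the matrix, shift, and problem size are illustrative) factors the shifted matrix with a sparse direct method and hands the resulting solve to ARPACK through the `OPinv` argument:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh, splu, LinearOperator

# 1-D discrete Laplacian: tridiagonal, symmetric positive definite.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()

sigma = 0.5
# Factor A - sigma*I once with a sparse direct (LU) method ...
lu = splu((A - sigma * sp.eye(n)).tocsc())
# ... and pass the solve to ARPACK as OPinv; each Lanczos step then costs
# only a pair of triangular solves instead of a fresh factorization.
OPinv = LinearOperator((n, n), matvec=lu.solve)
vals = eigsh(A, k=4, sigma=sigma, OPinv=OPinv, return_eigenvectors=False)
print(np.sort(vals))   # the four eigenvalues of A closest to 0.5
```

The same `OPinv` hook is where an iterative inner solver would be substituted when the matrix cannot be factored, with its tolerance chosen slightly tighter than the eigensolver's, as discussed above.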
It is also possible to carry out the transformation in an approximate way and to work with subspaces that no longer have a Krylov structure. This is the idea behind several algorithms, including the Jacobi-Davidson method; this and other such methods are discussed in more detail in the remainder of the book.