If we want to include restarts and deflation, then matters become more
complicated, since non-Hermitian matrices do not have orthonormal
eigensystems in general. Since we prefer to work with an orthonormal
basis for at least the test subspace, we compute Schur forms of the
reduced matrices. Instead of eigenvectors of the matrix $A$, we compute
a partial Schur form $AQ_k = Q_kR_k$, where $Q_k$ is an $n$ by $k$
orthonormal matrix and $R_k$ is $k$ by $k$ upper triangular. A scheme
that accommodates Schur forms is given in Algorithm 7.18. This scheme
includes the possibility of restart when the dimension of the current
subspace exceeds $m_{\max}$.
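Eigenpairs are recovered from a partial Schur form in the usual way: if
$(\lambda, y)$ is an eigenpair of the small matrix $R_k$, then
\[
A(Q_k y) = Q_k R_k y = \lambda\, Q_k y ,
\]
so $(\lambda, Q_k y)$ is an eigenpair of $A$.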
The scheme computes $k_{\max}$ eigenvalues close to a target $\tau$
in the complex plane. Here we are necessarily imprecise,
since the eigenvalues of a general non-Hermitian matrix admit no
natural ordering in the complex plane. Which Ritz values are delivered as close
eigenvalues depends, among other factors, on the angles between the corresponding
eigenvectors. Usually, the selection from the Ritz values is appropriate if
$\tau$ is chosen somewhere in the exterior of the distribution of
eigenvalues. If we want eigenvalues of $A$ close to some interior
point, then the scheme may be much less satisfactory, and we propose to
work in such cases with harmonic Ritz values. A scheme for interior
eigenvalues will be presented in §7.12.3.
To apply this algorithm we need to specify a starting vector $v_0$, a
tolerance $\epsilon$, a target value $\tau$, and a number $k_{\max}$
that specifies how many eigenpairs near $\tau$ should be computed. The
value of $m_{\max}$ denotes the maximum dimension of the search
subspace. If it is exceeded, then a restart takes place with a subspace
of specified dimension $m_{\min}$.
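For concreteness, these parameters could be gathered in a driver of the
following form; the function and argument names are illustrative only
and not part of the template.
\begin{verbatim}
# Hypothetical driver signature for the scheme of Algorithm 7.18;
# all names are illustrative.
def jdqr(A, v0, tau, eps, kmax, mmax, mmin):
    """Compute a partial Schur form A Q = Q R whose diagonal of R
    carries the kmax eigenvalues closest to the target tau.

    A     : (n, n) array (or operator) defining the eigenproblem
    v0    : starting vector
    eps   : residual tolerance for accepting a Schur pair
    kmax  : number of eigenpairs wanted near tau
    mmax  : maximum dimension of the search subspace
    mmin  : dimension of the subspace after a restart
    """
    ...
\end{verbatim}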
On completion, the $k_{\max}$ eigenvalues close to $\tau$
are delivered, together with the corresponding reduced Schur form $AQ = QR$,
where $Q$ is $n$ by $k_{\max}$ orthogonal
and $R$ is $k_{\max}$ by $k_{\max}$ upper triangular. The eigenvalues
are on the diagonal of $R$.
The computed Schur form satisfies
$\|Aq_j - QRe_j\|_2 \leq j\epsilon$,
where $q_j$ is the $j$th column of $Q$.
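This bound can be checked a posteriori; a minimal sketch, assuming
dense NumPy arrays A, Q, R:
\begin{verbatim}
import numpy as np

def schur_residuals(A, Q, R):
    # Column-wise residual norms ||A q_j - Q R e_j||_2 of the
    # computed partial Schur form.
    E = A @ Q - Q @ R
    return np.linalg.norm(E, axis=0)
\end{verbatim}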
We will now discuss the components of the algorithm.
Initialization phase.
The new vector $t$ is made orthogonal with respect to the current search
subspace by means of modified Gram-Schmidt. This can be replaced, for
improved numerical stability, by the template for iterated modified
Gram-Schmidt given in Algorithm 4.14.
If $m=1$, then this is an empty loop.
If the angle between the new vector $t$ and the search subspace is
very small, then the resulting vector $v_m$ has little significance
and the method practically stagnates. A random vector $t$ helps to
overcome this stagnation. A more sophisticated strategy can be found in
[190].
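A sketch of this expansion step in Python/NumPy; the constant kappa
and the stagnation threshold are assumptions, not part of the
template.
\begin{verbatim}
import numpy as np

def expand_basis(V, t, kappa=0.25, seed=None):
    # Orthogonalize t against the columns of V by modified
    # Gram-Schmidt, repeating the sweep once if heavy cancellation
    # is detected (iterated MGS, cf. Algorithm 4.14).
    norm_in = np.linalg.norm(t)
    for _ in range(2):
        for v in V.T:                   # empty loop if V has no
            t = t - (v.conj() @ t) * v  # columns yet
        if np.linalg.norm(t) > kappa * norm_in:
            break
    if np.linalg.norm(t) <= 1e-12 * norm_in:
        # t lies (almost) in the search subspace; a random vector
        # overcomes the stagnation.
        rng = np.random.default_rng(seed)
        t = rng.standard_normal(V.shape[0])
        for v in V.T:
            t = t - (v.conj() @ t) * v
    return t / np.linalg.norm(t)
\end{verbatim}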
We compute the last column and row of the dense matrix $M$
(of order $m$); $M \equiv V^\ast AV = V^\ast V^A$.
The matrix $V$ denotes the $n$ by $m$ matrix with columns $v_j$;
$V^A \equiv AV$, likewise (with columns $Av_j$).
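Since only the last column and row of $M$ change when the basis is
expanded, the update is cheap; a sketch, assuming $V$ and $V^A$ are
stored as arrays V and VA:
\begin{verbatim}
import numpy as np

def update_M(M, V, VA):
    # Extend M = V^* A V by its new last row and column only; on
    # entry M is the (m-1) by (m-1) interaction matrix.
    m = V.shape[1]
    Mnew = np.empty((m, m), dtype=complex)
    Mnew[:m-1, :m-1] = M
    Mnew[:, m-1] = V.conj().T @ VA[:, m-1]   # last column: v_i^*(A v_m)
    Mnew[m-1, :] = V[:, m-1].conj() @ VA     # last row:    v_m^*(A v_j)
    return Mnew
\end{verbatim}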
The Schur reduction $MS = ST$ for the $m$ by $m$ matrix $M$, with $S$
unitary and $T$ upper triangular, can be solved by a
standard solver for dense eigenproblems. We have chosen to compute the
standard Ritz values, which makes the algorithm suitable for computing
exterior eigenvalues of $A$ close to a specified $\tau$. If
eigenvalues in the interior part of the spectrum have to be computed,
then the usage of harmonic Ritz values is advocated; see
§7.12.3.
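In practice this dense reduction comes from LAPACK, e.g. through
SciPy; a minimal sketch with a toy stand-in for $M$:
\begin{verbatim}
import numpy as np
from scipy.linalg import schur

# Toy stand-in for the m by m interaction matrix M = V^* A V.
M = np.array([[2.0, 1.0],
              [0.5, 1.0 + 1.0j]])

# Complex Schur reduction M S = S T, with S unitary and T upper
# triangular; the Ritz values appear on the diagonal of T.
T, S = schur(M, output='complex')
ritz_values = np.diag(T)
\end{verbatim}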
In each step, the Schur form has to be sorted such that $T_{1,1}$ is
closest to $\tau$. Only if $m \geq m_{\max}$ does the sorting of the Schur
form have to be such that all of the $m_{\min}$ leading diagonal
elements of $T$ are closest to $\tau$. For ease of presentation we
sorted all diagonal elements here.
For a template of an algorithm for the sorting of a Schur form, see [448,449,33] and [171, Chap. 6B].
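SciPy's schur also accepts a sort callback that moves selected
eigenvalues to the leading diagonal positions, which realizes the
weaker requirement above (a partition rather than a full ordering);
the cutoff construction in this sketch is an assumption:
\begin{verbatim}
import numpy as np
from scipy.linalg import schur, eigvals

def sorted_schur(M, tau, mmin):
    # Move the mmin Ritz values closest to tau into the leading
    # diagonal positions of T.
    dist = np.sort(np.abs(eigvals(M) - tau))
    cutoff = dist[min(mmin, len(dist)) - 1]
    T, S, sdim = schur(M, output='complex',
                       sort=lambda x: abs(x - tau) <= cutoff)
    return T, S
\end{verbatim}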
$S$ is the $m$ by $m$ matrix with columns $s_j$, the Schur vectors of $M$.
Detection of all wanted eigenvalues cannot be guaranteed; see note (13)
for Algorithm 4.17.
If the matrix $A$ is real, then all eigenpairs are either real or appear in complex conjugate pairs. If a complex eigenpair has been detected, then its conjugate is known and can be deflated as well. This makes the algorithm more efficient.
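This rests on the elementary observation that, for real $A$, taking
complex conjugates in $Au = \theta u$ gives
\[
A\bar{u} = \bar{\theta}\,\bar{u},
\]
so the conjugate Schur pair is available without further iteration.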
After acceptance of a Ritz pair, we continue the search for the next pair, with the remaining Ritz vectors as a basis for the initial search space.
We restart as soon as the dimension of the search space for the current
Schur vector exceeds $m_{\max}$. The process is restarted with the
subspace spanned by the $m_{\min}$ Ritz vectors corresponding to the Ritz
values closest to the target value $\tau$.
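In terms of the quantities computed above, this restart is a simple
compression; a sketch in NumPy terms, with names as in the earlier
fragments:
\begin{verbatim}
def restart(V, VA, S, T, mmin):
    # Keep the mmin Schur vectors of M whose Ritz values are
    # closest to tau (S and T sorted as described above); the
    # projected matrix shrinks to the leading block of T.
    return V @ S[:, :mmin], VA @ S[:, :mmin], T[:mmin, :mmin]
\end{verbatim}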
We have collected the locked (computed) Schur vectors in $Q$, and the
matrix $\widetilde{Q} \equiv [Q, u]$ is $Q$
expanded with the current Schur vector
approximation $u$. This is done in order to obtain a more compact
formulation; the correction equation is equivalent to the one
in (4.50). The new correction $t$ has to be orthogonal to the
columns of $Q$ as well as to $u$.
Of course, the correction equation can be solved by any suitable
process, for instance, a preconditioned Krylov subspace method that is
designed to solve unsymmetric systems. Because of the occurrence of
$\widetilde{Q}$, one has to be careful with the usage of preconditioners
for the matrix $A - \theta I$. The inclusion of preconditioners can be
achieved following the same principles as for the single vector
Jacobi-Davidson algorithm; see Algorithm 4.18 for a template.
Make sure that the starting vector $t_0$ for an iterative solver
satisfies the orthogonality constraints $\widetilde{Q}^\ast t_0 = 0$.
Note that significant savings per step can be made in
Algorithm 4.18 if $K$ is kept the same for a few
Jacobi-Davidson iterations. In that case, columns of
$\widehat{Q} \equiv K^{-1}\widetilde{Q}$ can be
saved from previous steps. Also, the matrix
$\mathcal{M} \equiv \widetilde{Q}^\ast\widehat{Q}$ can be updated
from previous steps, along with its LU decomposition.
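A sketch of how these savings can be organized; Ksolve (the action of
$K^{-1}$, applied column-wise) and the array Qt holding
$\widetilde{Q}$ are placeholders:
\begin{verbatim}
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def make_projected_prec(Ksolve, Qt):
    # Cache Qhat = K^{-1} Q~ and the LU factors of M = Q~^* K^{-1} Q~;
    # both can be reused while K and Q~ stay fixed.
    Qhat = Ksolve(Qt)
    lu = lu_factor(Qt.conj().T @ Qhat)

    def apply(y):
        # yhat = K^{-1} y, followed by a skew projection so that the
        # result is orthogonal to Q~ (cf. Algorithm 4.18).
        yhat = Ksolve(y)
        return yhat - Qhat @ lu_solve(lu, Qt.conj().T @ yhat)

    return apply
\end{verbatim}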
It is not necessary to solve the correction equation very accurately. A
strategy often used for inexact Newton methods [113] also
works well here: increase the accuracy with the Jacobi-Davidson iteration
step and, for instance, solve the correction equation with a residual
reduction of $2^{-\ell}$ in the $\ell$th Jacobi-Davidson iteration
($\ell$ is reset to 0 when a Schur vector is detected).
In particular, in the first few initial steps, the approximate
eigenvalue $\theta$ may be very inaccurate, and then it does not make
sense to solve the correction equation accurately. In this stage it can
be more effective to temporarily replace $\theta$ by $\tau$, or to take
$t = -r$ for the expansion of the search subspace [335,172].
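One concrete instance of these heuristics; the floor $10^{-8}$ and the
switch after two outer steps are assumptions, not prescriptions:
\begin{verbatim}
def inner_solver_controls(ell, tau, theta):
    # Residual reduction asked from the inner solver in the ell-th
    # Jacobi-Davidson step; ell is reset to 0 after each accepted
    # Schur vector.  Early on, the target tau replaces theta.
    inner_tol = max(2.0 ** (-ell), 1e-8)
    shift = tau if ell < 2 else theta
    return inner_tol, shift
\end{verbatim}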
For deflation see §8.4.2,
with $B$ replaced by $I$
and $Z$ by $Q$.
For the full theoretical background of this method, as well as
details on the deflation technique with Schur vectors, see [172].