For restart purposes, especially if one is heading for interior eigenvalues, the harmonic Ritz vectors have been advocated for symmetric matrices [331]; see also §4.7.4.
The concept of harmonic Ritz values [349] is easily generalized for unsymmetric matrices [411]. As we have seen, the Jacobi-Davidson methods generate basis vectors $v_1, \ldots, v_{m}$ for a subspace $\mathcal{V}_{m}$. For the projection of $A$ onto this subspace we have to compute the vectors $w_j \equiv A v_j$. The inverses of the Ritz values of $A^{-1}$, with respect to the subspace spanned by the $w_j$, are called the harmonic Ritz values. The harmonic Ritz values can be computed without inverting $A$, since a harmonic Ritz pair $(\widetilde{\theta}_j^{(m)}, \widetilde{u}_j^{(m)})$ satisfies
\[
A \widetilde{u}_j^{(m)} - \widetilde{\theta}_j^{(m)} \widetilde{u}_j^{(m)} \;\perp\; \mathrm{span}\{w_1, \ldots, w_{m}\},
\qquad \widetilde{u}_j^{(m)} \in \mathcal{V}_{m},\ \widetilde{u}_j^{(m)} \neq 0 .
\]
The exterior standard Ritz values usually converge to exterior eigenvalues of $A$. Likewise, the interior harmonic Ritz values for the shifted matrix $A - \tau I$ usually converge towards eigenvalues closest to the shift $\tau$. Fortunately, the search subspaces for the shifted matrix and the unshifted matrix coincide, which facilitates the computation of harmonic Ritz pairs. For reasons of stability we construct both $V$ (with columns $v_j$) and $W$ to be orthonormal: $W$ is such that $(A - \tau I)V = W M^A$, where $M^A$ is $m$ by $m$ upper triangular.
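For concreteness, here is a minimal dense sketch of this projected computation (assuming NumPy/SciPy; the routine name harmonic_ritz and the explicit QR factorization are illustrative, not the book's template): the harmonic Ritz pairs follow from the small generalized eigenproblem $M^A s = \widetilde{\theta}\,(W^\ast V)\,s$, so $A$ is never inverted.
\begin{verbatim}
import numpy as np
from scipy.linalg import qr, eig

def harmonic_ritz(A, V, tau):
    """Harmonic Ritz approximations for A near the target tau, w.r.t. span(V)."""
    n = A.shape[0]
    # Thin QR of (A - tau I)V yields the orthonormal test basis W and the
    # upper triangular M_A with (A - tau I) V = W M_A.
    W, M_A = qr((A - tau * np.eye(n)) @ V, mode='economic')
    M = W.conj().T @ V                     # the m-by-m matrix M = W^* V
    theta, S = eig(M_A, M)                 # harmonic Ritz values of A - tau I
    return tau + theta, V @ S              # eigenvalue estimates near tau, harmonic Ritz vectors

# usage: V is any matrix with orthonormal columns spanning the search subspace
rng = np.random.default_rng(0)
n, m, tau = 100, 10, 0.5
A = rng.standard_normal((n, n))
V, _ = qr(rng.standard_normal((n, m)), mode='economic')
values, vectors = harmonic_ritz(A, V, tau)
\end{verbatim}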
In the resulting scheme we compute a (partial) Schur decomposition rather than a partial eigendecomposition. That is, we wish to compute orthonormal vectors $q_1, \ldots, q_{k}$ such that $A Q_{k} = Q_{k} R_{k}$, with $Q_{k} \equiv [q_1|\cdots|q_{k}]$, and $R_{k}$ is a $k$ by $k$ upper triangular matrix. The diagonal elements of $R_{k}$ represent eigenvalues of $A$, and the corresponding eigenvectors of $A$ can be computed from $Q_{k}$ and $R_{k}$.
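As a brief illustration of that last remark, the sketch below (dense NumPy; it assumes the diagonal entries of $R_{k}$ are distinct) recovers eigenvectors from a partial Schur form $A Q_{k} = Q_{k} R_{k}$ by back substitution in $R_{k}$.
\begin{verbatim}
import numpy as np

def eigvecs_from_partial_schur(Q, R):
    """Eigenvalues diag(R) and eigenvectors of A from a partial Schur form A Q = Q R."""
    k = R.shape[0]
    X = np.zeros((Q.shape[0], k), dtype=complex)
    for j in range(k):
        lam = R[j, j]
        y = np.zeros(k, dtype=complex)
        y[j] = 1.0
        if j > 0:
            # leading triangular system: (R[:j,:j] - lam I) y[:j] = -R[:j, j]
            y[:j] = np.linalg.solve(R[:j, :j] - lam * np.eye(j), -R[:j, j])
        x = Q @ y                         # then A x = lam x, since R y = lam y
        X[:, j] = x / np.linalg.norm(x)
    return np.diag(R), X
\end{verbatim}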
An algorithm for Jacobi-Davidson, with partial reduction to Schur form and based on harmonic Ritz values and vectors, is given in Algorithm 7.19. The algorithm includes restart and deflation techniques. It can be used for the computation of a number of eigenvalues close to the target value $\tau$.
To apply this algorithm we need to specify a starting vector $v_0$, a tolerance $\epsilon$, a target value $\tau$, and a number $k_{\max}$ that specifies how many eigenpairs near $\tau$ should be computed. The value of $m_{\max}$ denotes the maximum dimension of the search subspace. If it is exceeded, then a restart takes place with a subspace of specified dimension $m_{\min}$.
On completion the $k_{\max}$ eigenvalues close to $\tau$ are delivered, and the corresponding reduced Schur form $AQ = QR$, where $Q$ is $n$ by $k_{\max}$ orthogonal and $R$ is $k_{\max}$ by $k_{\max}$ upper triangular. The eigenvalues are on the diagonal of $R$. The computed Schur form satisfies $\|A q_{j} - Q R e_{j}\|_2 \leq j\epsilon$, where $q_{j}$ is the $j$th column of $Q$.
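The delivered Schur form can be checked directly; a small sketch in dense NumPy (check_partial_schur is a hypothetical helper, not part of the template):
\begin{verbatim}
import numpy as np

def check_partial_schur(A, Q, R, eps):
    """Columnwise test of ||A q_j - Q R e_j||_2 <= j * eps for the delivered Schur form."""
    res = A @ Q - Q @ R                               # column j is A q_j - Q R e_j
    bounds = eps * np.arange(1, R.shape[0] + 1)       # the bound grows with j
    return np.linalg.norm(res, axis=0) <= bounds      # one boolean per Schur vector
\end{verbatim}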
We will comment on some parts of the algorithm in view of our discussions in previous subsections.
Initialization phase.
The newly computed correction $t$ is made orthogonal with respect to the current search subspace spanned by $V$ by means of modified Gram-Schmidt; the result is the expansion vector $v$ for $V$. We have chosen to store also the matrix $W$; $w$ is the expansion vector for this matrix. The expansion vector $w$ for $W$ is obtained by making $(A - \tau I)v$ orthogonal with respect to the space of detected Schur vectors and with respect to the current test subspace spanned by $W$, by means of modified Gram-Schmidt. The Gram-Schmidt steps can be replaced, for improved numerical stability, by the template given in Algorithm 4.14.
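A bare-bones sketch of this expansion step (plain modified Gram-Schmidt in NumPy; the function names are illustrative, and the bookkeeping of the orthogonalization coefficients, which supply, e.g., the new column of $M^A$ in the template, is omitted):
\begin{verbatim}
import numpy as np

def mgs_orthonormalize(x, *bases):
    """Orthogonalize x against the columns of each basis (modified Gram-Schmidt), then normalize."""
    for B in bases:
        for j in range(B.shape[1]):
            x = x - (B[:, j].conj() @ x) * B[:, j]
    return x / np.linalg.norm(x)

def expand(A, tau, V, W, Q, t):
    """New columns v for V and w for W from a correction vector t."""
    v = mgs_orthonormalize(t, V)                                      # extends the search basis
    w = mgs_orthonormalize((A - tau * np.eye(A.shape[0])) @ v, Q, W)  # extends the test basis
    return v, w
\end{verbatim}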
$V$ denotes the $n$ by $m$ matrix with columns $v_1, \ldots, v_{m}$; likewise $W$ and $Q$.
The values $M_{i,j}$ represent elements of the square $m$ by $m$ matrix $M \equiv W^\ast V$. The values $M^A_{i,j}$ represent elements of the $m$ by $m$ upper triangular matrix $M^A$.
(Note that $(A - \tau I)V = Q E + W M^A$, if $E \equiv Q^\ast (A - \tau I)V$. Therefore, $(A - \tau I)V$ can be reconstructed from $W$, $M^A$, $Q$, and $E$. In particular, $(A - \tau I)u$ can be computed from these quantities. Instead of storing the $n$-dimensional matrix $(A - \tau I)V$, it suffices to store the $k_{\max}$ by $m$ matrix $E$ (of elements $E_{i,j}$, computed in (3)-(4)). This approach saves memory space. However, for avoiding instabilities, the deflation procedure needs special attention.)
At this point the QZ decomposition (generalized Schur form) of the matrix pencil $(M^A, M)$ has to be computed: $S^{L\ast} M^A S^R = S^A$ and $S^{L\ast} M S^R = S^B$, where $S^L$ and $S^R$ are unitary and $S^A$ and $S^B$ are upper triangular. This can be done with a suitable routine for dense matrix pencils from LAPACK.
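In a SciPy setting this step might look as follows (an illustrative sketch on stand-in data; LAPACK's QZ routines are called underneath):
\begin{verbatim}
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(1)
m = 5
M_A = np.triu(rng.standard_normal((m, m)))   # stands in for the upper triangular M^A
M = rng.standard_normal((m, m))              # stands in for M = W^* V

# S_A = SL^* M_A SR and S_B = SL^* M SR, with SL, SR unitary and S_A, S_B upper triangular
S_A, S_B, SL, SR = qz(M_A, M, output='complex')
harmonic_ritz_values = np.diag(S_A) / np.diag(S_B)   # generalized eigenvalues of the pencil
\end{verbatim}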
In each step, the generalized Schur form has to be sorted such that $|S^A_{1,1}/S^B_{1,1}|$ is smallest. Only if $m \geq m_{\max}$ does the sorting of the Schur form have to be such that all of the $m_{\min}$ leading diagonal elements of $S^A$ and $S^B$ represent the harmonic Ritz values closest to zero (that is, the eigenvalue approximations closest to $\tau$). For ease of presentation we sorted all diagonal elements here. For an algorithm for the sorting of a generalized Schur form, see [33,448,449] and [171, Chap. 6B].
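A sketch of such a sorting step, using SciPy's reordering routine (an assumption; the references above describe the underlying swapping algorithms) to move only the harmonic Ritz value of smallest magnitude to the leading position:
\begin{verbatim}
import numpy as np
from scipy.linalg import ordqz

def select_smallest(alpha, beta):
    """Boolean mask selecting the generalized eigenvalue alpha/beta of smallest modulus."""
    with np.errstate(divide='ignore'):
        ratios = np.abs(alpha) / np.abs(beta)   # |S_A[i,i]/S_B[i,i]|; inf if beta[i] == 0
    mask = np.zeros(ratios.shape, dtype=bool)
    mask[np.argmin(ratios)] = True
    return mask

def sort_schur_form(M_A, M):
    """Generalized Schur form of (M_A, M) with the smallest |S_A[0,0]/S_B[0,0]| in front."""
    S_A, S_B, alpha, beta, SL, SR = ordqz(M_A, M, sort=select_smallest, output='complex')
    return S_A, S_B, SL, SR
\end{verbatim}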
The $m$ by $m_{\min}$ matrix with columns $s^R_1, \ldots, s^R_{m_{\min}}$ consists of the leading $m_{\min}$ columns of $S^R$; likewise for $S^L$.
The value of $\theta$ needs some attention. We have chosen to compute the Rayleigh quotient $\theta$ (instead of the harmonic Ritz value) corresponding to the harmonic Ritz vector $u$ (see [412]). The Rayleigh quotient follows from the requirement that $(A - \tau I)u - \theta u \perp u$ instead of $(A - \tau I)u - \widetilde{\theta} u \perp W$; then $\theta = \overline{S^B_{1,1}}\, S^A_{1,1}$. $\overline{S^B_{1,1}}$ denotes the complex conjugate of $S^B_{1,1}$.
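A short verification of this expression, under the assumptions (consistent with the notation above) that $V$ and $W$ have orthonormal columns, $V$ is kept orthogonal to the detected Schur vectors, $u = V S^R e_1$, and $(A - \tau I)V = Q E + W M^A$:
\[
\theta \;=\; \frac{u^\ast (A - \tau I) u}{u^\ast u}
       \;=\; e_1^\ast (S^R)^\ast M^\ast M^A S^R e_1
       \;=\; e_1^\ast (S^B)^\ast (S^L)^\ast S^L S^A e_1
       \;=\; \overline{S^B_{1,1}}\, S^A_{1,1},
\]
since $V^\ast (A - \tau I)V = M^\ast M^A$, $M^A S^R = S^L S^A$, $M S^R = S^L S^B$, and $u^\ast u = \|S^R e_1\|_2^2 = 1$.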
The stopping criterion is to accept a Schur vector approximation as soon as the norm of the residual (for the normalized Schur vector approximation) is below $\epsilon$. This means that we accept inaccuracies in the order of $\epsilon$ in the computed eigenvalues, and inaccuracies (in angle) in the eigenvectors of $\mathcal{O}(\epsilon)$ (provided that the concerned eigenvalue is simple and well separated from the others and the left and right eigenvectors have a small angle).
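In code, the acceptance test might be sketched as follows (illustrative names; the residual is taken orthogonal to the already detected Schur vectors $Q$, consistent with the partial Schur form that is being built):
\begin{verbatim}
import numpy as np

def schur_residual_norm(A, Q, u, lam):
    """Residual norm for a normalized Schur vector approximation u with value lam."""
    r = A @ u - lam * u
    r = r - Q @ (Q.conj().T @ r)       # remove components along detected Schur vectors
    return np.linalg.norm(r)

# accept u as the next Schur vector when schur_residual_norm(A, Q, u, lam) <= eps
\end{verbatim}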
Detection of all wanted eigenvalues cannot be guaranteed; see note (13) for Algorithm 4.17.
This is a restart after acceptance of an approximate eigenpair.
At this point we have a restart when the dimension of the subspace exceeds $m_{\max}$. After a restart the Jacobi-Davidson iterations are resumed with a subspace of dimension $m_{\min}$.
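A sketch of the restart transformation under the relations assumed above (keep the $m_{\min}$ leading harmonic Ritz directions of the sorted generalized Schur form; the helper name and argument list are illustrative, not the book's template):
\begin{verbatim}
def restart(V, W, M_A, M, E, SL, SR, S_A, S_B, m_min):
    """Shrink the search and test bases to dimension m_min, preserving the stored relations."""
    V_new   = V @ SR[:, :m_min]          # new search basis, columns remain orthonormal
    W_new   = W @ SL[:, :m_min]          # new test basis, columns remain orthonormal
    M_A_new = S_A[:m_min, :m_min]        # (A - tau I) V_new = Q E_new + W_new M_A_new
    M_new   = S_B[:m_min, :m_min]        # equals W_new^* V_new
    E_new   = E @ SR[:, :m_min]          # equals Q^* (A - tau I) V_new
    return V_new, W_new, M_A_new, M_new, E_new
\end{verbatim}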
The deflation with computed eigenvectors is represented by the factors $(I - \widetilde{Q}\widetilde{Q}^\ast)$ with $\widetilde{Q} \equiv [\,Q\,|\,u\,]$. The matrix $Q$ has the converged eigenvectors as its columns. If a left preconditioner $K$ is available for the operator $A - \tau I$, then with a Krylov solver similar reductions are realizable as in the situation for exterior eigenvalues. A template for the efficient handling of the left-preconditioned operator is given in Algorithm 4.18.
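As an illustration of how the deflated operator is applied matrix-free inside a Krylov solver, here is a minimal sketch (assuming SciPy's LinearOperator and GMRES; zeta stands for the shift used in the correction equation, and the left-preconditioned handling of Algorithm 4.18 is not shown):
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def deflated_operator(A, Q_tilde, zeta):
    """Matrix-free (I - Q Q^*) (A - zeta I) (I - Q Q^*) with Q_tilde = [Q | u]."""
    n = A.shape[0]
    def matvec(x):
        y = x - Q_tilde @ (Q_tilde.conj().T @ x)     # project out detected Schur vectors and u
        y = A @ y - zeta * y                         # apply the shifted operator
        return y - Q_tilde @ (Q_tilde.conj().T @ y)  # project again
    return LinearOperator((n, n), matvec=matvec, dtype=complex)

# approximate solution of the correction equation for a residual r orthogonal to Q_tilde:
# t, info = gmres(deflated_operator(A, Q_tilde, zeta), -r)
\end{verbatim}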