We consider the application of the Jacobi-Davidson approach to the
GHEP (5.1), $Ax = \lambda Bx$, with $A$ Hermitian and $B$ Hermitian
positive definite. We can, similarly as for the
Lanczos method treated in the previous section (see also [444]),
apply the Jacobi-Davidson method to
(5.1), with a $B$-orthogonal basis $V_m$
for the search subspace; that is,
\[
V_m^\ast B V_m = I_m .
\]
The Ritz-Galerkin condition for vectors $u = V_m s$
in this subspace leads to
\[
V_m^\ast A V_m s - \theta\, V_m^\ast B V_m s = 0,
\quad\mbox{or}\quad
V_m^\ast A V_m s = \theta s .
\]
The correction equation for the eigenvector component $t$, with $u^\ast B t = 0$,
for the generalized eigenproblem can be written as
\[
\left( I - \frac{B u u^\ast}{u^\ast B u} \right)
(A - \theta B)
\left( I - \frac{u u^\ast B}{u^\ast B u} \right) t
= -r \equiv -(A - \theta B)\,u .
\]
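To make this concrete, the following minimal sketch (in Python with NumPy; the routine names `jd_step` and `b_orthonormalize` are illustrative, not part of any algorithm in this book) carries out one subspace expansion: it extracts a Ritz pair from a $B$-orthonormal basis, forms the residual, and, purely for illustration, solves the correction equation with a dense least-squares solve.

```python
import numpy as np

def b_orthonormalize(V, B, t):
    """B-orthogonalize t against the columns of V (which satisfy
    V* B V = I) and normalize t in the B-inner product."""
    for _ in range(2):                       # repeat for numerical stability
        t = t - V @ (V.conj().T @ (B @ t))
    return t / np.sqrt((t.conj() @ (B @ t)).real)

def jd_step(A, B, V):
    """One Jacobi-Davidson expansion step for A x = lambda B x,
    targeting the smallest Ritz value; V has B-orthonormal columns."""
    H = V.conj().T @ (A @ V)                 # projected problem H s = theta s
    theta_all, S = np.linalg.eigh(H)
    theta, s = theta_all[0], S[:, 0]         # Ritz pair at the lower end
    u = V @ s                                # Ritz vector, u* B u = 1
    r = A @ u - theta * (B @ u)              # residual; note u* r = 0
    Bu = B @ u
    n = len(u)
    Pl = np.eye(n) - np.outer(Bu, u.conj())  # I - B u u*  (since u* B u = 1)
    Pr = np.eye(n) - np.outer(u, Bu.conj())  # I - u u* B
    # Correction equation: Pl (A - theta B) Pr t = -r with u* B t = 0.
    # Solved densely here for illustration only; in practice one uses a
    # (preconditioned) Krylov method, as in Algorithm 5.7.
    t = np.linalg.lstsq(Pl @ (A - theta * B) @ Pr, -r, rcond=None)[0]
    t = Pr @ t                               # enforce u* B t = 0
    v = b_orthonormalize(V, B, t)            # expand the search basis
    return np.hstack([V, v[:, None]]), theta, u, r
```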
As for the standard Hermitian case, the resulting scheme can be combined
with restart and deflation. If we want to work with orthogonal operators
in the deflation, then we have to work with $B$-orthogonal matrices that
reduce the given generalized system to Schur form:
\[
A Q_k = Z_k D_k ,
\]
in which $Z_k \equiv B Q_k$
and $Q_k$ is $B$-orthogonal: $Q_k^\ast B Q_k = I_k$. The matrix $D_k$
is a diagonal matrix with the $k$ computed eigenvalues on
its diagonal; the columns of $Q_k$ are eigenvectors of the pencil $A - \lambda B$.
This leads to skew projections for the deflation with the
first $k$ eigenvectors:
\[
\left( I - Z_k Q_k^\ast \right) (A - \theta B) \left( I - Q_k Z_k^\ast \right) .
\]
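As an illustration of these skew projections, the short sketch below applies the deflated operator to a vector, assuming the $k$ detected eigenvectors are stored as the columns of `Qk` with $Q_k^\ast B Q_k = I_k$ (all names are placeholders).

```python
import numpy as np

def apply_deflated(A, B, Qk, theta, t):
    """Apply (I - Zk Qk*) (A - theta B) (I - Qk Zk*) to t,
    with Zk = B Qk as in the text (a sketch; names are placeholders)."""
    Zk = B @ Qk
    t = t - Qk @ (Zk.conj().T @ t)       # right skew projection
    y = A @ t - theta * (B @ t)
    return y - Zk @ (Qk.conj().T @ y)    # left skew projection
```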
If $B$ is not well conditioned, that is, if $B$ leads to a highly distorted inner product, then we suggest using the QZ approach with Jacobi-Davidson (see §8.4). Note, however, that the QZ approach does not exploit the symmetry of the involved matrices.
Algorithm 5.6 represents a Jacobi-Davidson template with restart and deflation for exterior eigenvalues. A template for a left-preconditioned Krylov solver is given in Algorithm 5.7.
To apply this algorithm we need to specify a tolerance $\epsilon$, a target value $\tau$, and a number $k_{\max}$ that specifies how many eigenpairs near $\tau$ should be computed. The value $m_{\max}$ denotes the maximum dimension of the search subspace. If it is exceeded, then a restart takes place with a subspace of specified dimension $m_{\min}$. We also need to give a starting vector $v_0$.
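Collected in one place, these inputs could be passed to a driver routine as follows (a purely illustrative signature; none of the names below are prescribed by Algorithm 5.6):

```python
def jd_ghep(A, B, v0, tau, eps, kmax, mmax, mmin):
    """Compute kmax eigenpairs of A x = lambda B x nearest the target
    tau, each to residual tolerance eps, starting from v0; restart at
    dimension mmin whenever the search subspace would exceed mmax."""
    assert 1 <= mmin < mmax, "restart dimension must stay below the maximum"
    assert kmax >= 1 and eps > 0.0
    ...  # expansion, extraction, restart, and deflation as in Algorithm 5.6
```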
On completion, the $k_{\max}$ largest eigenvalues are delivered when $\tau$ is chosen larger than $\lambda_{\max}(A,B)$; the $k_{\max}$ smallest eigenvalues are delivered if $\tau$ is chosen smaller than $\lambda_{\min}(A,B)$. The computed eigenpairs $(\widetilde{\lambda}_j, \widetilde{q}_j)$, $j = 1, \ldots, k_{\max}$, satisfy $\|A \widetilde{q}_j - \widetilde{\lambda}_j B \widetilde{q}_j\|_2 \leq j\epsilon$, where $\widetilde{q}_j$ denotes the $j$th column of $\widetilde{Q}$. The eigenvectors are $B$-orthogonal: $\widetilde{q}_i^\ast B \widetilde{q}_j = 0$ for $i \neq j$.
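These completion properties are easy to verify a posteriori; the sketch below (assuming the eigenvalues are in `lam` and the $B$-orthonormal eigenvectors are the columns of `Q`) checks both the residual bounds and the $B$-orthogonality.

```python
import numpy as np

def check_ghep_results(A, B, lam, Q, eps):
    """Verify the residual bounds and B-orthogonality claimed above."""
    R = A @ Q - (B @ Q) * lam            # j-th column: A q_j - lam_j B q_j
    j = np.arange(1, len(lam) + 1)
    res_ok = np.linalg.norm(R, axis=0) <= j * eps
    borth_err = np.linalg.norm(Q.conj().T @ (B @ Q) - np.eye(len(lam)))
    return res_ok.all(), borth_err
```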
Let us now discuss the different steps of Algorithm 5.6.
If $k = 0$, then this is an empty loop.
Detection of all wanted eigenvalues cannot be guaranteed; see item (13) of Algorithm 4.17.
Of course, the correction equation can be solved by any suitable process, for instance, a preconditioned Krylov subspace method that is designed to solve unsymmetric systems. However, because of the skew projections, we always need a preconditioner (which may be the identity operator if nothing else is available) that is deflated by the same skew projections, so that we obtain a mapping between $\mbox{span}(\widetilde{Z})^\perp$ (that is, the space of vectors $t$ with $\widetilde{Q}^\ast B t = 0$) and itself. Because of the occurrence of $\widetilde{Z}$ and $\widetilde{Q}$ (the matrices of detected eigenvectors, with $\widetilde{Z} = B\widetilde{Q}$), one has to be careful with the usage of preconditioners for the matrix $A - \theta B$. The inclusion of preconditioners can be done as in Algorithm 5.7. Make sure that the starting vector $t_0$ for an iterative solver satisfies the orthogonality constraints $\widetilde{Q}^\ast B t_0 = 0$. Note that significant savings per step can be made in Algorithm 5.7 if $K$ is kept the same for a (few) Jacobi-Davidson iterations. In that case, columns of $\widehat{Z} \equiv K^{-1}\widetilde{Z}$ can be saved from previous steps. Also, the matrix $\mathcal{M} \equiv \widetilde{Z}^\ast \widehat{Z}$ can be updated from previous steps, as well as its decomposition.
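One consistent way to realize such a deflated preconditioner, and the savings obtained by keeping $K$ fixed, is sketched below (Python with SciPy). The class name and the choice $\mathcal{M} = \widetilde{Z}^\ast K^{-1}\widetilde{Z}$ are our assumptions, derived from the constraint $\widetilde{Q}^\ast B t = 0$, and are not a verbatim transcription of Algorithm 5.7.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

class DeflatedPreconditioner:
    """Apply a skew-projected preconditioner for the deflated
    correction equation; with K ~ A - theta B fixed over a few
    iterations, K^{-1} Z and M can be reused across steps."""
    def __init__(self, K, Q, B):
        self.Z = B @ Q                                       # Z = B Q
        self.K_lu = lu_factor(K)                             # reused while K is fixed
        self.KinvZ = lu_solve(self.K_lu, self.Z)             # saved columns K^{-1} Z
        self.M_lu = lu_factor(self.Z.conj().T @ self.KinvZ)  # M = Z* K^{-1} Z

    def apply(self, y):
        """Return z = K^{-1}(y - Z beta) with beta chosen so that
        Z* z = 0, i.e., Q* B z = 0 (the orthogonality constraint)."""
        z = lu_solve(self.K_lu, y)
        beta = lu_solve(self.M_lu, self.Z.conj().T @ z)
        return z - self.KinvZ @ beta
```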
It is not necessary to solve the correction equation very accurately. A strategy, often used for inexact Newton methods [113], works well here also: increase the accuracy with the Jacobi-Davidson iteration step; for instance, solve the correction equation with a residual reduction of $2^{-j}$ in the $j$th Jacobi-Davidson iteration ($j$ is reset to 0 when an eigenvector is detected).
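In code, this strategy amounts to nothing more than the following (the counter bookkeeping is an assumption on how the outer loop is organized):

```python
def inner_tolerance(j):
    """Residual reduction required of the correction-equation solver at
    the j-th Jacobi-Davidson step since the last converged eigenpair;
    reset j to 0 whenever an eigenvector is detected."""
    return 2.0 ** (-j)
```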
For a full theoretical background of this method see [172]. For details on the deflation technique with eigenvectors see also §4.7.3.