The possible disadvantage of the linearized approach is the doubling of the dimension of the problem; that is, a problem with $n$-dimensional matrices $M$, $C$, and $K$ is transformed to a generalized problem with $2n$-dimensional matrices $A$ and $B$. This is avoided in the Jacobi-Davidson method to be discussed in this section. In this method, the QEP is first projected onto a low-dimensional subspace, which leads to a QEP of modest dimension. This low-dimensional projected QEP can be solved with any method of choice. Expansion of the subspace is realized by a Jacobi-Davidson correction equation. For polynomial eigenproblems this technique was first suggested and discussed in [408, Sec. 8].
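For instance, for the quadratic eigenproblem $(\lambda^2 M + \lambda C + K)x = 0$, one common companion linearization (one of several possible choices) is
\[
A z = \lambda B z, \qquad
A = \begin{bmatrix} 0 & I \\ -K & -C \end{bmatrix}, \quad
B = \begin{bmatrix} I & 0 \\ 0 & M \end{bmatrix}, \quad
z = \begin{bmatrix} x \\ \lambda x \end{bmatrix},
\]
so that the pencil $A - \lambda B$ is indeed of dimension $2n$.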
As we will see below, this method can also be applied directly to polynomial eigenproblems.
In the first part of a Jacobi-Davidson iteration step, for the polynomial eigenproblem
\[
\psi(\lambda)x = 0, \qquad \psi(\lambda) \equiv \lambda^{\ell} C_{\ell} + \cdots + \lambda C_1 + C_0, \tag{265}
\]
the projected problem
\[
V^{*}\psi(\theta)V s = 0
\]
is solved, where the columns of $V$ form an orthonormal basis of the search subspace. This low-dimensional polynomial eigenproblem yields a Ritz pair $(\theta, u)$, with $u \equiv Vs$ normalized so that $\|u\|_2 = 1$.

In the second part of the Jacobi-Davidson iteration step, the search subspace $\mathrm{span}(V)$ is expanded by a vector $t \perp u$ that solves (approximately) the correction equation
\[
\left(I - \frac{p\,u^{*}}{u^{*}p}\right)\psi(\theta)\left(I - \frac{u\,u^{*}}{u^{*}u}\right)t = -r,
\qquad \text{where } p \equiv \psi'(\theta)\,u \ \text{ and } \ r \equiv \psi(\theta)\,u.
\]
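In practice the correction equation is solved only approximately, for instance by a few steps of a preconditioned iterative method. As a reasoning aid (a standard observation for Jacobi-Davidson-type methods, not a statement taken from this section), note that if $\psi(\theta)$ is nonsingular the equation can be solved exactly: every solution $t \perp u$ has the form
\[
t = -\psi(\theta)^{-1} r + \alpha\, \psi(\theta)^{-1} p,
\qquad
\alpha = \frac{u^{*}\psi(\theta)^{-1} r}{u^{*}\psi(\theta)^{-1} p},
\]
where $\alpha$ is determined by the orthogonality constraint $u^{*}t = 0$.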
This process is repeated until an eigenpair has been detected, i.e., until the norm of the residual vector $r$ is sufficiently small. The basic form of the algorithm is presented in Algorithm 9.1. We refer to [413] for an example of a quadratic eigenvalue problem arising from an acoustic problem, which has been solved with this reduction technique for $n$ up to 250,000.
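A minimal sketch of this iteration for the quadratic case $\psi(\lambda) = \lambda^2 M + \lambda C + K$ is given below. It is an illustration under simplifying assumptions (dense matrices, exact solution of the correction equation via the formula above, target eigenvalue nearest zero), not Algorithm 9.1 itself, and all function names are ours.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig, solve

def solve_projected_qep(HM, HC, HK):
    # Solve the projected QEP (theta^2 HM + theta HC + HK) s = 0
    # via companion linearization and a dense generalized eigensolver.
    m = HM.shape[0]
    I, O = np.eye(m), np.zeros((m, m))
    A = np.block([[O, I], [-HK, -HC]])
    B = np.block([[I, O], [O, HM]])
    theta, Z = eig(A, B)
    return theta, Z[:m, :]              # Ritz values, coefficient vectors s

def jd_qep_step(M, C, K, V):
    # One Jacobi-Davidson step: project, solve the small QEP, pick the
    # Ritz value nearest the (arbitrary) target 0, and solve the
    # correction equation exactly (dense sketch only).
    HM, HC, HK = (V.conj().T @ X @ V for X in (M, C, K))
    thetas, S = solve_projected_qep(HM, HC, HK)
    j = np.argmin(np.abs(thetas))
    theta = thetas[j]
    u = V @ S[:, j]
    u /= np.linalg.norm(u)
    psi = theta**2 * M + theta * C + K   # psi(theta)
    r = psi @ u                          # residual r = psi(theta) u
    p = (2 * theta * M + C) @ u          # p = psi'(theta) u
    z1, z2 = solve(psi, r), solve(psi, p)
    alpha = (u.conj() @ z1) / (u.conj() @ z2)
    t = -z1 + alpha * z2                 # exact solution with u* t = 0
    return theta, u, r, t

def jacobi_davidson_qep(M, C, K, v0, tol=1e-8, maxit=50):
    # Repeat until the residual norm is sufficiently small.
    V = (v0 / np.linalg.norm(v0))[:, None]
    for _ in range(maxit):
        theta, u, r, t = jd_qep_step(M, C, K, V)
        if np.linalg.norm(r) < tol:
            break
        t = t - V @ (V.conj().T @ t)     # orthogonalize t against V
        V = np.column_stack([V, t / np.linalg.norm(t)])
    return theta, u
\end{verbatim}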
If the dimension of the search subspace becomes too large, i.e., if the number of columns of $V$ is equal to, say, $m_{\max}$, then the process can be continued with a search subspace of smaller dimension. Take for the new search subspace the space spanned by the $m_{\min}$ best Ritz pairs from the last step; that is, take $V_{\text{new}} = [u_1, \ldots, u_{m_{\min}}]$, where the $u_j$ are the Ritz vectors associated with the $m_{\min}$ best available Ritz values $\theta_j$. Then apply modified Gram-Schmidt to orthonormalize $V_{\text{new}}$ and restart with this matrix. Note that the new search matrix can be expressed in terms of the old search matrix as $V_{\text{new}} = VS$ for some $m_{\max} \times m_{\min}$ matrix $S$. The transformation matrix $S$ can be computed explicitly and can be used to update the auxiliary matrices $W_i$ ($= C_i V$) and $H_i$ ($= V^{*} C_i V$).
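A sketch of this restart, with modified Gram-Schmidt carried out explicitly, might look as follows; the function names, and the assumption that the auxiliary matrices are of the form $W_i = C_i V$, are ours.
\begin{verbatim}
import numpy as np

def mgs(A):
    # Modified Gram-Schmidt: A = Q R with Q orthonormal.
    m, n = A.shape
    Q = A.astype(complex).copy()
    R = np.zeros((n, n), dtype=complex)
    for j in range(n):
        R[j, j] = np.linalg.norm(Q[:, j])
        Q[:, j] /= R[j, j]
        for k in range(j + 1, n):
            R[j, k] = Q[:, j].conj() @ Q[:, k]
            Q[:, k] -= R[j, k] * Q[:, j]
    return Q, R

def restart(V, S, W_list):
    # New search matrix: first V S, then orthonormalized, giving
    # V_new = V T with T = S R^{-1}; each auxiliary matrix W_i = C_i V
    # is then updated as W_i T without touching C_i again.
    Q, R = mgs(V @ S)
    T = S @ np.linalg.inv(R)
    return Q, [W @ T for W in W_list]
\end{verbatim}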
Eigenvectors that have already been detected can be kept in the search subspace (explicit deflation) if more eigenvectors are wanted. Keeping the detected eigenvectors in the search subspace prevents the process from reconstructing known eigenvectors.
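For instance (a sketch under our own naming, not part of Algorithm 9.1), the detected eigenvectors can simply be kept as the leading columns of the basis at every restart:
\begin{verbatim}
import numpy as np

def restart_basis_with_deflation(X_locked, ritz_vectors):
    # Keep the detected eigenvectors X_locked as the leading columns;
    # orthonormalization leaves their span intact, so the search
    # subspace always contains the known eigenvectors.
    Q, _ = np.linalg.qr(np.column_stack([X_locked, ritz_vectors]))
    return Q
\end{verbatim}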
In §4.7.3 and §8.4.2 we suggested using ``implicit'' deflation, since that approach is based on Schur forms, which makes it more stable and leads to better conditioned correction equations. However, in this general polynomial setting it is not clear how to incorporate implicit deflation: the Schur form and the generalized Schur form do not extend easily to polynomial eigenproblems.