Basic Theory
As for the GHEP, we want to avoid transformation of $Ax=\lambda Bx$ to a standard eigenproblem. This would require the solution of some linear system, involving $A$ and/or $B$, per iteration step. Furthermore, for stability reasons we want to work with orthonormal transformations, and for this reason our goal is to compute Schur vectors for the pencil $\beta A-\alpha B$, rather than eigenvectors. This leads to a generalization of the Jacobi-Davidson algorithm, in which we compute a partial Schur form for the pencil. This algorithm can be interpreted as a subspace iteration variant of the QZ algorithm. A consequence of this approach is that we have to work with search and test spaces that are different.
With $\lambda=\alpha/\beta$, the generalized eigenproblem $Ax=\lambda Bx$ is equivalent to the eigenproblem
\begin{displaymath}
  (\beta A-\alpha B)\,x=0, \qquad\qquad (8.8)
\end{displaymath}
and we denote a generalized eigenvalue of the matrix pair $(A,B)$ as a pair $\langle\alpha,\beta\rangle$. This approach is preferred, because underflow or overflow for $\lambda=\alpha/\beta$ in finite precision arithmetic may occur when $\alpha$ and/or $\beta$ are zero or close to zero, in which case the pair is still well defined [328], [425, Chap. VI], [376].
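As a small numerical illustration of the pair representation (not from the text; it uses NumPy/SciPy and an arbitrary $2\times 2$ pencil), the sketch below builds a pencil with singular $B$: one eigenvalue is ``infinite'', so forming $\lambda=\alpha/\beta$ would overflow, while the pair $\langle\alpha,\beta\rangle$ itself is returned without difficulty.
\begin{verbatim}
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # singular B: one "infinite" eigenvalue

# homogeneous_eigvals=True returns the eigenvalues as pairs <alpha, beta>
w = eig(A, B, right=False, homogeneous_eigvals=True)
alpha, beta = w[0], w[1]
print(alpha)                 # both alphas are finite, well-defined numbers
print(beta)                  # one beta is (numerically) zero
# forming lambda = alpha / beta would overflow to inf for that pair
\end{verbatim}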
A partial generalized Schur form of dimension $k$ for a matrix pair $(A,B)$ is the decomposition
\begin{displaymath}
  A Q_k = Z_k R^A_k,\quad B Q_k = Z_k R^B_k, \qquad\qquad (8.9)
\end{displaymath}
where $Q_k$ and $Z_k$ are orthogonal $n$ by $k$ matrices and $R^A_k$ and $R^B_k$ are upper triangular $k$ by $k$ matrices. A column $q_i$ of $Q_k$ is referred to as a generalized Schur vector, and we refer to a pair $(\langle\alpha_i,\beta_i\rangle, q_i)$, with $\langle\alpha_i,\beta_i\rangle=\langle R^A_k(i,i),R^B_k(i,i)\rangle$, as a generalized Schur pair. It follows that if $(\langle\alpha,\beta\rangle, y)$ is a generalized eigenpair of $(R^A_k,R^B_k)$, then $(\langle\alpha,\beta\rangle, Q_k y)$ is a generalized eigenpair of $(A,B)$.
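The defining relation (8.9) and the stated eigenpair property are easy to check numerically. The following sketch (an assumed setup with random matrices; manufacturing the partial form from a full QZ decomposition via SciPy is our device, not part of the text) extracts a partial generalized Schur form of dimension $k$ and lifts an eigenpair of $(R^A_k,R^B_k)$ to one of $(A,B)$.
\begin{verbatim}
import numpy as np
from scipy.linalg import qz, eig

rng = np.random.default_rng(0)
n, k = 8, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# Complex QZ: A = Q_full @ TA @ Z_full^H,  B = Q_full @ TB @ Z_full^H
TA, TB, Q_full, Z_full = qz(A, B, output='complex')

# In the notation of (8.9): Q_k <- leading columns of Z_full,
#                           Z_k <- leading columns of Q_full.
Qk, Zk = Z_full[:, :k], Q_full[:, :k]
RA, RB = TA[:k, :k], TB[:k, :k]
assert np.allclose(A @ Qk, Zk @ RA) and np.allclose(B @ Qk, Zk @ RB)

# An eigenpair of the small pair (RA, RB) ...
w, y = eig(RA, RB, homogeneous_eigvals=True)
alpha, beta = w[0, 0], w[1, 0]
x = Qk @ y[:, 0]
# ... lifts to a generalized eigenpair <alpha, beta> of (A, B):
assert np.allclose(beta * (A @ x), alpha * (B @ x))
\end{verbatim}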
We will now describe a Jacobi-Davidson method for the generalized eigenproblem (8.8). From the relations (8.9) we conclude that $\mbox{span}(AQ_k)\subset\mbox{span}(Z_k)$ and $\mbox{span}(BQ_k)\subset\mbox{span}(Z_k)$, which suggests that we should follow a Petrov-Galerkin condition for the construction of reduced systems.
In each step the approximate eigenvector $u$ is selected from a search subspace $\mbox{span}(V)$. We require that the residual $\eta\,Au-\zeta\,Bu$ is orthogonal to some other well-chosen test subspace $\mbox{span}(W)$:
\begin{displaymath}
  \eta \, A u -\zeta \, B u \perp \mbox{span}(W). \qquad\qquad (8.10)
\end{displaymath}
Both subspaces are of the same dimension, say $j$. Equation (8.10) leads to the projected eigenproblem
\begin{displaymath}
  (\eta\, W^\ast A V-\zeta \,W^\ast B V) \,s=0. \qquad\qquad (8.11)
\end{displaymath}
The $j$ by $j$ pencil $\eta\, W^\ast AV-\zeta\, W^\ast BV$ can be reduced by the QZ algorithm (see §8.2) to generalized Schur form (note that this is a $j$-dimensional problem). This leads to orthogonal $j$ by $j$ matrices $S^R$ and $S^L$ and upper triangular $j$ by $j$ matrices $T^A$ and $T^B$, such that
\begin{displaymath}
  (S^L)^\ast ( W^\ast A V)S^R=T^A
  \quad\mbox{and}\quad
  (S^L)^\ast ( W^\ast B V)S^R=T^B. \qquad\qquad (8.12)
\end{displaymath}
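A minimal sketch of (8.11)-(8.12), assuming random search and test bases of dimension $j$: the projected pencil is formed explicitly and reduced with SciPy's QZ routine, whose factors satisfy $W^\ast AV=S^L\,T^A\,(S^R)^\ast$, i.e. exactly (8.12).
\begin{verbatim}
import numpy as np
from scipy.linalg import qz

rng = np.random.default_rng(1)
n, j = 50, 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
V, _ = np.linalg.qr(rng.standard_normal((n, j)))   # search space basis
W, _ = np.linalg.qr(rng.standard_normal((n, j)))   # test space basis

MA = W.conj().T @ A @ V        # W^* A V
MB = W.conj().T @ B @ V        # W^* B V

# QZ of the j-by-j pencil: MA = SL @ TA @ SR^H, MB = SL @ TB @ SR^H,
# so SL <-> S^L, SR <-> S^R, and TA, TB are upper triangular as in (8.12)
TA, TB, SL, SR = qz(MA, MB, output='complex')
assert np.allclose(SL.conj().T @ MA @ SR, TA)
assert np.allclose(SL.conj().T @ MB @ SR, TB)
\end{verbatim}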
The decomposition can be reordered such that the first column of $S^R$ and the $(1,1)$-entries of $T^A$ and $T^B$ represent the wanted Petrov solution [172]. With $s\equiv s^R_1\equiv S^R e_1$ and $\zeta\equiv T^A_{1,1}$, $\eta\equiv T^B_{1,1}$, the Petrov vector is defined as $u\equiv Vs=Vs^R_1$ for the associated generalized Petrov value $\langle\zeta,\eta\rangle$. In our algorithm we will construct orthogonal $n$ by $j$ matrices $V$ and $W$, so that also $\Vert u\Vert_2=\Vert s^R_1\Vert_2=1$.
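The reordering step can be sketched with SciPy's ordered QZ routine, whose sort callback is used here to move the Petrov value closest to a hypothetical target $\tau$ to the leading position; the last line checks that the extracted pair indeed satisfies the Petrov-Galerkin condition (8.10). All sizes and data are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.linalg import ordqz

rng = np.random.default_rng(1)
n, j = 50, 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
V, _ = np.linalg.qr(rng.standard_normal((n, j)))   # search space basis
W, _ = np.linalg.qr(rng.standard_normal((n, j)))   # test space basis
MA, MB = W.conj().T @ A @ V, W.conj().T @ B @ V    # projected pencil (8.11)

tau = 0.5                                          # hypothetical target

def near_target(alpha, beta):
    # select only the Petrov value closest to tau for the leading position
    lam = alpha / beta
    mask = np.zeros(lam.shape, dtype=bool)
    mask[np.argmin(np.abs(lam - tau))] = True
    return mask

TA, TB, _, _, SL, SR = ordqz(MA, MB, sort=near_target, output='complex')
u = V @ SR[:, 0]                                   # Petrov vector u = V s^R_1
zeta, eta = TA[0, 0], TB[0, 0]                     # Petrov value <zeta, eta>
r = eta * (A @ u) - zeta * (B @ u)                 # residual
print(np.linalg.norm(W.conj().T @ r))              # ~ 0: (8.10) holds
\end{verbatim}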
With the decomposition in (8.12), we construct an approximate partial generalized Schur form (cf. (8.9)): $VS^R$ approximates a $Q_k$, and $WS^L$ approximates the associated $Z_k$. Since $\mbox{span}(Z_k)$ coincides with $\mbox{span}(AQ_k)$ and with $\mbox{span}(BQ_k)$ (cf. (8.9)), this suggests to choose $W$ such that $\mbox{span}(W)$ coincides with $\mbox{span}(\nu_0\,AV+\mu_0\,BV)$ for suitably chosen weights $\nu_0$ and $\mu_0$.
With the weights $\nu_0$ and $\mu_0$ we can influence the convergence of the Petrov values. If we want eigenpair approximations for eigenvalues $\lambda$ close to a target $\tau$, then the choice $\nu_0=1/\sqrt{1+\vert\tau\vert^2}$, $\mu_0=-\tau\,\nu_0$ is very effective [172], especially if we want to compute eigenvalues in the interior of the spectrum of $A-\lambda B$. We will call the Petrov approximations for this choice the harmonic Petrov eigenpairs.
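A minimal sketch of this choice of test space, assuming a target $\tau$ and a QR factorization for the orthonormalization (both our assumptions, not prescriptions from the text):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, j = 50, 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
V, _ = np.linalg.qr(rng.standard_normal((n, j)))   # search space basis

tau = 0.5                                          # hypothetical target
nu0 = 1.0 / np.sqrt(1.0 + abs(tau) ** 2)           # harmonic weights
mu0 = -tau * nu0

# orthonormal basis of span(nu0*A*V + mu0*B*V), used as the test space W
W, _ = np.linalg.qr(nu0 * (A @ V) + mu0 * (B @ V))
\end{verbatim}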
The Jacobi-Davidson correction equation for the component $t\perp u$ for the pencil $\eta A-\zeta B$ becomes
\begin{displaymath}
  \left( I- \frac{pp^\ast}{p^\ast p}\right)
  (\eta A-\zeta B)
  \left( I-uu^\ast\right) {t}=- r, \qquad\qquad (8.13)
\end{displaymath}
with $r\equiv\eta\,Au-\zeta\,Bu$ and $p\equiv\nu_0\,Au+\mu_0\,Bu$.
It can be shown that if (8.13) is solved exactly, the convergence to the generalized eigenvalue will be quadratic; see [408, Theorem 3.2]. Usually, this correction equation is solved only approximately, for instance, with a (preconditioned) iterative solver. The obtained vector $t$ is used for the expansion of $V$, and $\nu_0\,At+\mu_0\,Bt$ is used for the expansion of $W$. For both spaces we work with orthonormal bases. Therefore, the new columns are orthonormalized with respect to the current basis by a modified Gram-Schmidt orthogonalization process (see §4.7.1).
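The sketch below strings the pieces of this subsection together for one iteration step: a harmonic test space for a hypothetical target $\tau$, extraction of the Petrov pair, an approximate solve of the correction equation (8.13) with the projected operator and a few unpreconditioned GMRES steps, and the two expansion vectors. Problem data, solver settings, and the simple reorthogonalization are illustrative assumptions; $p$ is taken as the normalized vector $\nu_0 Au+\mu_0 Bu$, as in (8.13).
\begin{verbatim}
import numpy as np
from scipy.linalg import ordqz
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(4)
n, j = 60, 6
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

tau = 0.3                                              # hypothetical target
nu0 = 1.0 / np.sqrt(1.0 + abs(tau) ** 2)               # harmonic weights
mu0 = -tau * nu0

V, _ = np.linalg.qr(rng.standard_normal((n, j)))       # search space
W, _ = np.linalg.qr(nu0 * (A @ V) + mu0 * (B @ V))     # harmonic test space

def near_target(alpha, beta):
    # move the Petrov value closest to tau to the leading position
    lam = alpha / beta
    mask = np.zeros(lam.shape, dtype=bool)
    mask[np.argmin(np.abs(lam - tau))] = True
    return mask

MA, MB = W.conj().T @ A @ V, W.conj().T @ B @ V        # projected pencil (8.11)
TA, TB, _, _, SL, SR = ordqz(MA, MB, sort=near_target, output='complex')

u = V @ SR[:, 0]                                   # Petrov vector, unit norm
zeta, eta = TA[0, 0], TB[0, 0]                     # Petrov value <zeta, eta>
p = nu0 * (A @ u) + mu0 * (B @ u)                  # left Petrov direction
p /= np.linalg.norm(p)
r = eta * (A @ u) - zeta * (B @ u)                 # residual, lies in span(W)-perp

def corr_matvec(t):
    # apply (I - p p*) (eta*A - zeta*B) (I - u u*) to t, cf. (8.13)
    t = t - u * (u.conj() @ t)
    y = eta * (A @ t) - zeta * (B @ t)
    return y - p * (p.conj() @ y)

op = LinearOperator((n, n), matvec=corr_matvec, dtype=complex)
t, info = gmres(op, -r, maxiter=20)                # approximate solve only
                                                   # info == 0: GMRES converged

# expansion vectors: orthogonalize t against V, and nu0*A*t + mu0*B*t against W
v_new = t - V @ (V.conj().T @ t)
v_new /= np.linalg.norm(v_new)
w_new = nu0 * (A @ v_new) + mu0 * (B @ v_new)
w_new -= W @ (W.conj().T @ w_new)
w_new /= np.linalg.norm(w_new)
\end{verbatim}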
It can be shown that, with the above choices for $W$ and $p$,
\begin{displaymath}
  p= W s^L_1= W S^Le_1. \qquad\qquad (8.14)
\end{displaymath}
The relation between the partial generalized Schur form for the given large problem and the complete generalized Schur form for the reduced problem (8.11) via the right vectors ($VS^R$) is similar to the relation via the left vectors ($WS^L$). This can also be exploited for restarts.