

Schur Form and Restart

If we want to include restarts and deflation, then matters become more complicated, since non-Hermitian matrices do not in general have orthonormal eigensystems. Since we prefer to work with an orthonormal basis for at least the test subspace, we compute Schur forms of the reduced matrices. Instead of eigenvectors of the matrix $A$, we compute a partial Schur form $A{Q}_{k}={Q}_{k}{R}_{k}$, where ${Q}_{k}$ is an $n$ by ${k}$ orthonormal matrix and ${R}_{k}$ is ${k}$ by ${k}$ upper triangular. A scheme that accommodates Schur forms is given in Algorithm 7.18. This scheme includes the possibility of a restart when the dimension of the current subspace exceeds ${m}_{\max}$.
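For illustration, a partial Schur form can be obtained by truncating a full Schur decomposition; the Python/NumPy sketch below (sizes and names are illustrative assumptions, not part of the template) verifies the relation $A{Q}_{k}={Q}_{k}{R}_{k}$ numerically.

\begin{verbatim}
import numpy as np
from scipy.linalg import schur

n, k = 50, 5
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))

# Full complex Schur decomposition A = Z T Z^*, T upper triangular.
T, Z = schur(A, output='complex')
Qk = Z[:, :k]        # n-by-k matrix with orthonormal columns
Rk = T[:k, :k]       # k-by-k upper triangular

# Since T is upper triangular, A Qk = Qk Rk holds up to rounding.
print(np.linalg.norm(A @ Qk - Qk @ Rk))
\end{verbatim}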

The scheme computes ${k}_{\max}$ eigenvalues close to a target $\tau$ in the complex plane. Here we are necessarily somewhat imprecise, since the eigenvalues of a general non-Hermitian matrix are not ordered in the complex plane. Which Ritz values are delivered as close eigenvalues depends, among other factors, on the angles between the corresponding eigenvectors. Usually, the selection from the Ritz values is appropriate if $\tau$ is chosen somewhere at the exterior of the distribution of eigenvalues. If we want eigenvalues of $A$ close to some interior point, then the scheme may be much less satisfactory, and we propose to work in such cases with harmonic Ritz values. A scheme for interior eigenvalues will be presented in §7.12.3.


\begin{algorithm}{Jacobi--Davidson Method for $k_{\max}$ Exterior Eigenvalues (Algorithm 7.18); steps (1)--(28).}
\end{algorithm}

To apply this algorithm we need to specify a starting vector ${v}_0$, a tolerance $\epsilon$, a target value $\tau $, and a number ${k}_{\max}$ that specifies how many eigenpairs near $\tau $ should be computed. The value of ${m}_{\max}$ denotes the maximum dimension of the search subspace. If it is exceeded, then a restart takes place with a subspace of specified dimension ${m}_{\min}$.

On completion, the $k_{\max}$ eigenvalues close to $\tau$ are delivered together with the corresponding reduced Schur form $AQ=QR$, where $Q$ is an $n$ by $k_{\max}$ orthonormal matrix and $R$ is $k_{\max}$ by $k_{\max}$ upper triangular. The eigenvalues are on the diagonal of $R$. The computed Schur form satisfies $\Vert A q_{j}- {Q} R e_{j}\Vert _2\leq j \epsilon$, where $q_j$ is the $j$th column of $Q$.
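A minimal sketch of this acceptance check, with the names $A$, $Q$, $R$, and $\epsilon$ taken as assumptions from the discussion above:

\begin{verbatim}
import numpy as np

def check_partial_schur(A, Q, R, eps):
    # Verify || A q_j - Q R e_j ||_2 <= j * eps column by column
    # (j runs from 1 to k_max; Python indexing is 0-based).
    k = Q.shape[1]
    for j in range(k):
        res = np.linalg.norm(A @ Q[:, j] - Q @ R[:, j])
        assert res <= (j + 1) * eps, f"column {j+1}: residual {res:.2e}"
\end{verbatim}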

We will now discuss the components of the algorithm.

(1)

Initialization phase.

(3)-(4)

The new vector ${t}$ is made orthogonal to the current search subspace by means of modified Gram-Schmidt. For improved numerical stability, this can be replaced by the template for iterated modified Gram-Schmidt given in Algorithm 4.14.

If ${m}=0$, then this is an empty loop.

If the angle between the new vector ${t}$ and the search subspace is very small, then the resulting vector ${v}_j$ has little significance and the method practically stagnates. A random vector ${t}$ helps to overcome this stagnation. A more sophisticated strategy can be found in [190].
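A hedged Python sketch of this expansion step; the re-orthogonalization threshold kappa and the stagnation tolerance are illustrative assumptions:

\begin{verbatim}
import numpy as np

def expand_basis(V, t, kappa=0.25, rng=np.random.default_rng()):
    # Orthogonalize t against the columns of V by modified Gram-Schmidt,
    # repeating the sweep once if severe cancellation is detected.
    norm_in = np.linalg.norm(t)
    for _ in range(2):
        for v in V.T:                      # empty loop if m = 0
            t = t - (v.conj() @ t) * v
        if np.linalg.norm(t) > kappa * norm_in:
            break
        norm_in = np.linalg.norm(t)
    if np.linalg.norm(t) < 1e-14:          # angle to span(V) too small:
        return expand_basis(V, rng.standard_normal(V.shape[0]), kappa, rng)
    return t / np.linalg.norm(t)           # new basis vector v_m
\end{verbatim}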

(6)-(8)

We compute the last column and row of the dense matrix ${M}\equiv V^\ast AV = V^\ast V^A$ (of order ${m}$), where ${V^A}\equiv A{V}$. The matrix ${V}$ denotes the $n$ by ${m}$ matrix with columns ${v}_j$; likewise for ${V^A}$.
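A sketch of this bordered update in Python, with all names (V, VA, M, v_new) as assumptions following the notation above:

\begin{verbatim}
import numpy as np

def update_projection(A, V, VA, M, v_new):
    # Expand V and V^A = A V by one column and border M = V^* A V
    # with its new last column, last row, and corner entry.
    w = A @ v_new                          # new column of V^A
    m = M.shape[0] + 1
    M_new = np.empty((m, m), dtype=complex)
    M_new[:m-1, :m-1] = M
    M_new[:m-1, m-1] = V.conj().T @ w      # last column: V^* A v_new
    M_new[m-1, :m-1] = v_new.conj() @ VA   # last row: v_new^* A V
    M_new[m-1, m-1] = v_new.conj() @ w     # corner: v_new^* A v_new
    return (np.column_stack([V, v_new]),
            np.column_stack([VA, w]), M_new)
\end{verbatim}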

(9)

The Schur reduction for the ${m}$ by ${m}$ matrix ${M}$ can be solved by a standard solver for dense eigenproblems. We have chosen to compute the standard Ritz values, which makes the algorithm suitable for computing ${k}_{\max}$ exterior eigenvalues of $A$ close to a specified $\tau $. If eigenvalues in the interior part of the spectrum have to be computed, then the usage of harmonic Ritz values is advocated; see §7.12.3.

In each step, the Schur form has to be sorted such that ${T}_{1,1}$ is closest to $\tau$. Only if ${m}\geq {m}_{\max}$ does the sorting of the Schur form have to be such that all of the ${m}_{\min}$ leading diagonal elements of ${T}$ are closest to $\tau$. For ease of presentation we have sorted all diagonal elements here.

For a template of an algorithm for the sorting of a Schur form, see [448,449,33,171, Chap. 6B].

$S$ is the $m$ by $m$ matrix with columns $s_j$.
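A sketch of this step with SciPy, assuming it suffices to bring the single Ritz value closest to $\tau$ to the leading position (the sort callable of scipy.linalg.schur only partitions the spectrum, and the matching tolerance is an assumption):

\begin{verbatim}
import numpy as np
from scipy.linalg import schur, eigvals

def sorted_schur(M, tau):
    # Move the eigenvalue of M closest to tau to T[0, 0].
    theta = eigvals(M)
    closest = theta[np.argmin(np.abs(theta - tau))]
    T, S, sdim = schur(M, output='complex',
                       sort=lambda lam: abs(lam - closest) < 1e-12)
    return T, S                            # columns of S: Schur vectors s_j
\end{verbatim}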

(11)
The stopping criterion is to accept a Schur vector approximation as soon as the norm of the residual (for the normalized Schur vector approximation) is below $\epsilon$. This means that we accept inaccuracies on the order of $\epsilon$ in the computed eigenvalues, and inaccuracies (in angle) of $O(\epsilon)$ in the Schur vectors (provided that the eigenvalue concerned is simple and well separated from the others). For a more quantitative analysis and proportionality constants, see §7.13.

Detection of all wanted eigenvalues cannot be guaranteed; see note (13) for Algorithm 4.17.

If the matrix is real, then all eigenpairs are either real or appear in complex conjugate pairs. If a complex eigenpair has been detected, then its conjugate is known and can be deflated as well. This makes the algorithm more efficient.

(15)-(16)

After acceptance of a Ritz pair, we continue the search for a next pair, with the remaining Ritz vectors as a basis for the initial search space.

(20)

We restart as soon as the dimension of the search space for the current Schur vector exceeds ${m}_{\max}$. The process is restarted with the subspace spanned by the ${m}_{\min}$ Ritz vectors corresponding to the Ritz values closest to the target value $\tau $.
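Assuming the Schur form $T = S^\ast M S$ of the reduced matrix has been sorted by distance to $\tau$ as in step (9), the restart amounts to the following sketch (names as above; not part of the template):

\begin{verbatim}
def restart(V, VA, T, S, m_min):
    # Keep the m_min Ritz vectors closest to tau. Since S is unitary and
    # M S = S T, the new projected matrix is simply T[:m_min, :m_min].
    return V @ S[:, :m_min], VA @ S[:, :m_min], T[:m_min, :m_min]
\end{verbatim}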

(27)

We have collected the locked (computed) Schur vectors in ${Q}$, and the matrix $\tilde{Q}$ is ${Q}$ expanded with the current Schur vector approximation ${u}$. This is done in order to obtain a more compact formulation; the correction equation is equivalent to the one in (4.50). The new correction ${t}$ has to be orthogonal to the columns of ${Q}$ as well as to ${u}$.

Of course, the correction equation can be solved by any suitable process, for instance, a preconditioned Krylov subspace method designed for unsymmetric systems. Because of the occurrence of $\widetilde{Q}$, one has to be careful with the use of preconditioners for the matrix $A-\theta I$. The inclusion of preconditioners can be achieved following the same principles as for the single-vector Jacobi-Davidson algorithm; see Algorithm 4.18 for a template. Make sure that the starting vector ${t}_0$ for an iterative solver satisfies the orthogonality constraints $\widetilde{Q}^\ast {t}_0=0$. Note that significant savings per step can be made in Algorithm 4.18 if ${K}$ is kept the same for a few Jacobi-Davidson iterations. In that case, columns of $\widehat{Q}$ can be saved from previous steps. Also, the matrix ${\cal M}$ can be updated from previous steps, along with its ${\cal L}{\cal U}$ decomposition.
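A hedged sketch of one such inner solve, using unpreconditioned GMRES on the projected operator (the function name and tolerance handling are assumptions; the rtol keyword is that of recent SciPy releases):

\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_correction(A, Qt, theta, r, tol=1e-2):
    # Solve (I - Qt Qt^*)(A - theta I)(I - Qt Qt^*) t = -r approximately,
    # with t orthogonal to the columns of Qt (Q expanded with u).
    n = A.shape[0]
    def op(x):
        x = x - Qt @ (Qt.conj().T @ x)     # project out span(Qt)
        x = A @ x - theta * x              # apply A - theta I
        return x - Qt @ (Qt.conj().T @ x)  # project again
    P = LinearOperator((n, n), matvec=op, dtype=complex)
    rhs = -(r - Qt @ (Qt.conj().T @ r))
    t, info = gmres(P, rhs, rtol=tol)      # x0 = 0 satisfies Qt^* t0 = 0
    return t - Qt @ (Qt.conj().T @ t)      # enforce the constraint once more
\end{verbatim}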

It is not necessary to solve the correction equation very accurately. A strategy often used for inexact Newton methods [113] also works well here: increase the accuracy with the Jacobi-Davidson iteration step and, for instance, solve the correction equation with a residual reduction of $2^{-\ell}$ in the $\ell$th Jacobi-Davidson iteration ($\ell$ is reset to 0 when a Schur vector is detected).
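Schematically, the tolerance schedule might look as follows (a sketch; the driver that supplies the acceptance flags is assumed):

\begin{verbatim}
def tolerances(accepted_flags):
    # Yield the residual-reduction target 2^(-ell) for each outer step;
    # ell is reset whenever a Schur vector was accepted in the previous step.
    ell = 0
    for accepted in accepted_flags:
        if accepted:
            ell = 0
        ell += 1
        yield 2.0 ** (-ell)

# Example: list(tolerances([False, False, True])) -> [0.5, 0.25, 0.5]
\end{verbatim}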

In the first few steps in particular, the approximate eigenvalue $\theta$ may be very inaccurate, and then it does not make sense to solve the correction equation accurately. In this stage it can be more effective to replace $\theta$ temporarily by $\tau$, or to take ${t}=-r$ for the expansion of the search subspace [335,172].

For deflation see §8.4.2, with $Z_j$ replaced by $Q_j$ and $B$ by $I$. For the full theoretical background of this method, as well as details on the deflation technique with Schur vectors, see [172].

