next up previous contents index
Next: Stability and Accuracy Assessments Up: Jacobi-Davidson Methods   G. Sleijpen Previous: Software Availability   Contents   Index

Numerical Example

We include a numerical example for testing purposes, so that potential users of the Jacobi-Davidson algorithms can verify and compare their results.

The symmetric matrix $A$ is of dimension $n=1000$. The diagonal entries are $a_{i,i} = i$, the codiagonal entries are $a_{i,i-1} = a_{i,i+1}= 0.5$, and furthermore, $a_{1000,1} = a_{1,1000} = 0.5$. All other entries are zero. This example has been taken from [88] and is discussed, in the context of the Jacobi-Davidson algorithm, in [411, p. 410].
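Since the matrix is small, it can be assembled densely for experimentation. The following NumPy sketch is our own illustration (not part of the original test software) of how the matrix described above can be built:

```python
import numpy as np

n = 1000

# Diagonal entries a_{i,i} = i (1-based indexing as in the text).
A = np.diag(np.arange(1.0, n + 1))

# Codiagonal entries a_{i,i-1} = a_{i,i+1} = 0.5.
off = 0.5 * np.ones(n - 1)
A += np.diag(off, k=1) + np.diag(off, k=-1)

# Wraparound corner entries a_{1000,1} = a_{1,1000} = 0.5.
A[0, n - 1] = A[n - 1, 0] = 0.5
```

The resulting matrix is symmetric with 3000 nonzero entries (1000 diagonal, 1998 codiagonal, and the 2 corners).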

We use Algorithm 4.17 for the computation of the ${k}_{\max}=10$ largest eigenvalues. The input parameters have been chosen as follows: the starting vector is ${v}_0=(0.01,\ldots,0.01,1)^T$, the tolerance is $\epsilon = 10^{-8}$, the subspace dimension parameters are ${m}_{\min}=10$ and ${m}_{\max}=15$, and the target value is $\tau=1001$.

Figure 4.5: Jacobi-Davidson for exterior eigenvalues with several strategies for solving the correction equation.
[Four panels: Chap3JDa11.eps, Chap3JDa12.eps, Chap3JDa21.eps, Chap3JDa22.eps]

Figure 4.6: Jacobi-Davidson for exterior eigenvalues (top) and interior eigenvalues (bottom). The correction equations have been solved with 5 steps of plain GMRES (left) and with 5 steps of preconditioned GMRES (right).
[Four panels: ..., Chap3JDb21.eps, Chap3JDb22.eps]

We show the norm of the residual vector as a function of the iteration number in Figure 4.5. Each time the norm drops below $\epsilon$, an eigenvalue has been determined to within this precision, and the iteration is continued, with deflation, for the next eigenvalue. The four panels represent, in lexicographic order, the following situations:

  1. Top left: This shows what happens when ${t}$, in item (32) of Algorithm 4.17, is simply taken as ${t}=-r$. In exact arithmetic, this should deliver the same Ritz values as the Arnoldi algorithm (assuming for Arnoldi a similar restart strategy as in Algorithm 4.17).

  2. Top right: Here we have applied a simple preconditioner ${K}=\mbox{diag}(A)$, as in Algorithm 4.18, without further subspace acceleration; that is, we stop after step (d${}^{\,\prime}$). This is equivalent to the method currently in use among quantum chemists [344].

  3. Bottom left: This gives the iteration results for the case where the correction equation has been solved with 5 steps of MINRES, without preconditioning (Algorithm 4.17 with ${K}=I$).

  4. Bottom right: Here we have used preconditioning as in Algorithm 4.17, with ${K}=\mbox{diag}(A)$ and $5$ steps of GMRES (note that it would have been more efficient to use MINRES, but this requires two-sided preconditioning, for which we did not supply the algorithm).
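The first variant above (expansion with ${t}=-r$) can be mocked up in a few lines. The following self-contained NumPy sketch is our own simplified illustration of that strategy for the largest eigenvalue, using the matrix and starting vector of this example; unlike Algorithm 4.17, it omits the ${m}_{\min}/{m}_{\max}$ restart strategy, deflation, and any correction-equation solve, and the name `jd_largest` is our own:

```python
import numpy as np

# Test matrix from the example (assembled densely for simplicity).
n = 1000
A = np.diag(np.arange(1.0, n + 1))
A += np.diag(0.5 * np.ones(n - 1), 1) + np.diag(0.5 * np.ones(n - 1), -1)
A[0, n - 1] = A[n - 1, 0] = 0.5

def jd_largest(A, v0, tol=1e-8, maxit=100):
    """Davidson-type subspace iteration for the largest eigenvalue,
    expanding the search space with t = -r (no correction-equation
    solve).  Simplified sketch: no restarts, no deflation."""
    V = (v0 / np.linalg.norm(v0))[:, None]   # orthonormal search space
    W = A @ V                                # W = A V, kept up to date
    for _ in range(maxit):
        H = V.T @ W                          # projected matrix
        theta_all, S = np.linalg.eigh(H)     # Rayleigh-Ritz step
        theta = theta_all[-1]                # largest Ritz value
        s = S[:, -1]
        u = V @ s                            # Ritz vector
        r = W @ s - theta * u                # residual A u - theta u
        if np.linalg.norm(r) < tol:
            break
        t = -r                               # expansion vector (variant 1)
        t -= V @ (V.T @ t)                   # orthogonalize against V
        t -= V @ (V.T @ t)                   # reorthogonalize for stability
        t /= np.linalg.norm(t)
        V = np.hstack([V, t[:, None]])
        W = np.hstack([W, (A @ t)[:, None]])
    return theta, u, np.linalg.norm(r)

v0 = np.r_[0.01 * np.ones(n - 1), 1.0]       # starting vector from the text
theta, u, resnorm = jd_largest(A, v0)
```

For symmetric $A$, the returned Ritz value satisfies two guaranteed bounds: it never exceeds the largest eigenvalue, and its distance to the nearest eigenvalue is at most the residual norm.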

In Figure 4.6, we give the convergence history for interior eigenvalues, as obtained with Algorithm 4.17 (top parts) and with Algorithm 4.19 (bottom parts), with the following input specifications: ${v}_0=(0.01,\ldots,0.01,1)^T$, $\tau=900.5$, $\epsilon = 10^{-8}$, ${k}_{\max}=10$, ${m}_{\min}=5$, and ${m}_{\max}=10$. Again, each time a curve drops below $\epsilon$, an approximate eigenvalue has converged to within that tolerance. For all figures, we used 5 steps of GMRES to solve the correction equation in (32). For the left figures, we did not use preconditioning; for the right figures, we preconditioned GMRES with ${K}=\mbox{diag}(A)-\tau I$, as in Algorithm 4.18.
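To make the preconditioned setting concrete, the sketch below solves the projected correction equation $(I-uu^T)(A-\theta I)(I-uu^T)\,{t}=-r$ with one cycle of 5 GMRES steps, preconditioned by ${K}=\mbox{diag}(A)-\tau I$. This is our own illustration, not the book's software: the trial vector $u$ (here simply the normalized starting vector) is an assumed choice, and applying the diagonal preconditioner with a plain orthogonal projection is a simplification of the preconditioning in Algorithm 4.18.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Test matrix from the example.
n = 1000
A = np.diag(np.arange(1.0, n + 1))
A += np.diag(0.5 * np.ones(n - 1), 1) + np.diag(0.5 * np.ones(n - 1), -1)
A[0, n - 1] = A[n - 1, 0] = 0.5

tau = 900.5                                  # target value from the text

# Assumed trial vector: the normalized starting vector of the text.
u = np.r_[0.01 * np.ones(n - 1), 1.0]
u /= np.linalg.norm(u)
theta = u @ A @ u                            # Rayleigh quotient
r = A @ u - theta * u                        # residual, orthogonal to u

def project(x):
    """Orthogonal projection I - u u^T."""
    return x - u * (u @ x)

def op_mv(x):
    """Projected operator (I - u u^T)(A - theta I)(I - u u^T)."""
    y = project(x)
    return project(A @ y - theta * y)

# Preconditioner K = diag(A) - tau I; entries i - tau are never zero here.
kdiag = np.diag(A) - tau
def prec_mv(x):
    """Simplified preconditioner: projected diagonal scaling."""
    return project(x / kdiag)

op = LinearOperator((n, n), matvec=op_mv, dtype=float)
prec = LinearOperator((n, n), matvec=prec_mv, dtype=float)

# One restart cycle of (at most) 5 inner GMRES steps.
t, info = gmres(op, -r, M=prec, restart=5, maxiter=1, atol=0.0)
```

Because the operator, the preconditioner, and the right-hand side all map into the orthogonal complement of $u$, the computed correction ${t}$ stays orthogonal to $u$, so it genuinely expands the search space.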


Susan Blackford 2000-11-20