
Preconditioning.

Preconditioning for an iterative solver like GMRES or CGS, applied to the correction equation (4.50), is only slightly more complicated than in the single-vector case (cf. §4.7.1). Suppose that we have a left preconditioner ${K}$ available for the operator $A-\theta_j^{(m)}I$. Let $\widetilde{Q}$ denote the matrix $\widetilde{X}_{k-1}$ expanded with ${u}_j^{(m)}$ as its $k$th column. The preconditioner ${K}$ then has to be restricted to the subspace orthogonal to $\widetilde{Q}$ as well, which means that we effectively have to work with

\begin{displaymath}\widetilde{K} \equiv (I-\widetilde{Q}\widetilde{Q}^\ast){K}
(I-\widetilde{Q}\widetilde{Q}^\ast) .\end{displaymath}

As in the single-vector case, this can be realized in a surprisingly efficient way.
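As a first illustration (a minimal NumPy sketch, not part of the original text; the helper name `project_out` is ours), note that the projector $(I-\widetilde{Q}\widetilde{Q}^\ast)$ never needs to be formed as an $n\times n$ matrix: applying it to a vector costs only two thin matrix-vector products with $\widetilde{Q}$:

```python
import numpy as np

def project_out(Q, x):
    # Apply (I - Q Q^*) to x without forming the n-by-n projector.
    # Q (n-by-k, orthonormal columns) plays the role of Q~ in the text.
    return x - Q @ (Q.conj().T @ x)
```

The result is orthogonal to the columns of `Q`, and applying the projector twice changes nothing, as the idempotence of $(I-\widetilde{Q}\widetilde{Q}^\ast)$ requires.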

Assume that we use a Krylov solver with initial guess ${t}_0=0$ and with left preconditioning for the approximate solution of the correction equation (4.50). Since the initial guess lies in the subspace orthogonal to $\widetilde{Q}$, all iteration vectors of the Krylov solver remain in that subspace. In that subspace we have to compute the vector ${z} \equiv
\widetilde{K}^{-1}\widetilde{A}{v}$ for a vector ${v}$ supplied by the Krylov solver, where

\begin{displaymath}\widetilde{A}\equiv
(I-\widetilde{Q}\widetilde{Q}^\ast)(A-\theta_j^{(m)} I)
(I-\widetilde{Q}\widetilde{Q}^\ast).\end{displaymath}

This is done in two steps. First we compute

\begin{displaymath}
\widetilde{A}{v} = (I-\widetilde{Q}\widetilde{Q}^\ast)(A-\theta_j^{(m)} I)
(I-\widetilde{Q}\widetilde{Q}^\ast){v} =
(I-\widetilde{Q}\widetilde{Q}^\ast){y}
\end{displaymath}

with $y \equiv (A-\theta_j^{(m)} I){v}$ since $\widetilde{Q}^\ast {v}=0$. Then, with left-preconditioning we solve ${z}\perp \widetilde{Q}$ from

\begin{displaymath}\widetilde{K} {z}=(I-\widetilde{Q}\widetilde{Q}^\ast){y} .\end{displaymath}

Since $\widetilde{Q}^\ast {z}=0$, it follows that ${z}$ satisfies ${K}{z}={y}-\widetilde{Q}\vec{\alpha}$ for some vector $\vec{\alpha}$, or ${z}={K}^{-1}{y} - {K}^{-1}\widetilde{Q}\vec{\alpha}$. The condition $\widetilde{Q}^\ast {z}=0$ leads to

\begin{displaymath}\vec{\alpha}=(\widetilde{Q}^\ast {K}^{-1}\widetilde{Q})^{-1}
\widetilde{Q}^\ast {K}^{-1}{y} .\end{displaymath}

The vector $\widehat{y}\equiv {K}^{-1}{y}$ is obtained by solving ${K}\widehat{y}={y}$ and, likewise, ${\widehat{Q}}\equiv
{K}^{-1}\widetilde{Q}$ by solving ${K}{\widehat{Q}}=\widetilde{Q}$. Note that the latter set of equations has to be solved only once per iteration process for equation (4.50), so that effectively ${i}_S+{k}$ operations with the preconditioner are required for ${i}_S$ iterations of the linear solver. Note also that a matrix-vector multiplication with the left-preconditioned operator, in an iteration of the Krylov solver, requires only one operation with $\widetilde{Q}^\ast$ and one with ${K}^{-1}\widetilde{Q}$, instead of four applications of the projector $(I-\widetilde{Q}\widetilde{Q}^\ast)$. This is worked out in the solution template given in Algorithm 4.18. Note that obvious savings can be realized if the operator ${K}$ is kept the same for a number of successive eigenvalue computations (for details, see [412]).


Susan Blackford 2000-11-20