
Theory

The unpreconditioned conjugate gradient method constructs the $i$th iterate $x^{(i)}$ as an element of $x^{(0)} + \mathrm{span}\{r^{(0)}, Ar^{(0)}, \ldots, A^{i-1}r^{(0)}\}$ so that $(x^{(i)} - \hat{x})^T A (x^{(i)} - \hat{x})$ is minimized, where $\hat{x}$ is the exact solution of $Ax = b$. This minimum is guaranteed to exist in general only if $A$ is symmetric positive definite. The preconditioned version of the method uses a different subspace for constructing the iterates, but it satisfies the same minimization property, although over this different subspace. It requires in addition that the preconditioner $M$ is symmetric and positive definite.
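The quantity being minimized is, up to a constant, the familiar quadratic functional that CG descends; a short standard derivation (spelled out here for completeness, not taken from this section) makes the equivalence explicit:

% A-norm of the error versus the CG quadratic functional;
% here \hat{x} = A^{-1}b and A is symmetric positive definite.
\begin{align*}
\tfrac{1}{2}(x - \hat{x})^T A (x - \hat{x})
  &= \tfrac{1}{2} x^T A x - x^T A \hat{x} + \tfrac{1}{2} \hat{x}^T A \hat{x} \\
  &= \bigl( \tfrac{1}{2} x^T A x - b^T x \bigr) + \tfrac{1}{2} b^T A^{-1} b ,
\end{align*}
% The last term is a constant independent of x, so minimizing the
% A-norm of the error over the shifted Krylov subspace is the same as
% minimizing \phi(x) = (1/2) x^T A x - b^T x over that subspace.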

The above minimization of the error is equivalent to the residuals $r^{(i)} = b - Ax^{(i)}$ being $M^{-1}$-orthogonal (that is, $r^{(i)^T} M^{-1} r^{(j)} = 0$ if $i \neq j$). Since for symmetric $A$ an orthogonal basis for the Krylov subspace $\mathrm{span}\{r^{(0)}, Ar^{(0)}, \ldots, A^{i-1}r^{(0)}\}$ can be constructed with only three-term recurrences, such a recurrence also suffices for generating the residuals. In the Conjugate Gradient method two coupled two-term recurrences are used: one that updates the residual using a search direction vector, and one that updates the search direction with a newly computed residual. This makes the Conjugate Gradient Method quite attractive computationally.
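The following minimal Python/NumPy sketch of the unpreconditioned method (an illustration under the assumptions stated in the comments, not code from this book) shows the two coupled two-term recurrences side by side:

import numpy as np

def cg(A, b, x0=None, tol=1e-10, maxiter=200):
    # Unpreconditioned conjugate gradient; assumes A is symmetric
    # positive definite and supports the @ matrix-vector product.
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                    # initial residual
    p = r.copy()                     # initial search direction
    rho = r @ r
    for _ in range(maxiter):
        q = A @ p
        alpha = rho / (p @ q)        # step length along p
        x = x + alpha * p
        r = r - alpha * q            # recurrence 1: residual updated via search direction
        rho_new = r @ r
        if np.sqrt(rho_new) < tol:
            break
        beta = rho_new / rho
        p = r + beta * p             # recurrence 2: search direction updated via new residual
        rho = rho_new
    return x

Note that each iteration touches $A$ only through one matrix-vector product, and the short recurrences require storing just a handful of vectors; this is what makes the method computationally attractive.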

There is a close relationship between the Conjugate Gradient method and the Lanczos method for determining eigensystems, since both are based on the construction of an orthogonal basis for the Krylov subspace. In fact, the coefficients computed during the CG iteration can be used to reconstruct the Lanczos process, and vice versa; see Paige and Saunders [164] and Golub and Van Loan [108, §10.2.6]. This relationship can be exploited to obtain relevant information about the eigensystem of the (preconditioned) matrix $A$; see §.
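As a concrete illustration of this reconstruction (the formulas are the standard ones given, e.g., in Golub and Van Loan; the code, and the assumption that the CG loop above is modified to record each alpha and beta, are hypothetical additions), the CG coefficients determine a tridiagonal Lanczos matrix whose eigenvalues, the Ritz values, approximate eigenvalues of the (preconditioned) matrix:

import numpy as np

def lanczos_tridiag_from_cg(alphas, betas):
    # Assemble the Lanczos tridiagonal matrix from the CG step lengths
    # alpha_j and direction-update coefficients beta_j; requires
    # len(betas) >= len(alphas) - 1.
    m = len(alphas)
    T = np.zeros((m, m))
    for j in range(m):
        T[j, j] = 1.0 / alphas[j]
        if j > 0:
            T[j, j] += betas[j - 1] / alphas[j - 1]  # coupling with previous step
        if j < m - 1:
            off = np.sqrt(betas[j]) / alphas[j]      # symmetric off-diagonal entry
            T[j, j + 1] = off
            T[j + 1, j] = off
    return T

The extreme eigenvalues of this matrix (computable, for instance, with np.linalg.eigvalsh) typically converge quickly to the extreme eigenvalues of $A$, which is one way the relationship yields the spectral information mentioned above.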