Consider again the linear equations in (). If
we proceed as with the Jacobi method, but now assume
that the equations are examined one at a time in sequence, and that
previously computed results are used as soon as they are available, we
obtain the Gauss-Seidel method:
\[
x_i^{(k)} = \Bigl( b_i - \sum_{j<i} a_{ij}\, x_j^{(k)} - \sum_{j>i} a_{ij}\, x_j^{(k-1)} \Bigr) \Big/ a_{ii}, \qquad i = 1, 2, \ldots, n.
\]
Two important facts about the Gauss-Seidel method should be noted.
First, the computations in () appear
to be serial. Since each component of the new iterate depends upon
all previously computed components, the updates cannot be done
simultaneously as in the Jacobi method. Second, the new iterate
depends upon the order in which the equations are examined.
The Gauss-Seidel method is sometimes called the method of
successive displacements to indicate the dependence of the
iterates on the ordering. If this ordering is changed, the components of the new iterate (and not just their order) will
also change.
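To make the order-dependence concrete, the following sketch (a hypothetical example, not from the text) performs one Gauss-Seidel sweep over a 2x2 system with the equations examined in two different orders; the resulting iterates differ in value, not merely in the order of their components:

```python
# Hypothetical 2x2 example: one Gauss-Seidel sweep, with the equations
# examined in the order given by `order`.
def gs_sweep(A, b, x, order):
    x = list(x)                     # work on a copy of the current iterate
    for i in order:
        # Components updated earlier in this sweep are used as soon as
        # they are available.
        s = sum(A[i][j] * x[j] for j in range(len(b)) if j != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0],
     [1.0, 2.0]]    # 2*x1 + x2 = 3 and x1 + 2*x2 = 3; solution (1, 1)
b = [3.0, 3.0]

print(gs_sweep(A, b, [0.0, 0.0], order=[0, 1]))  # [1.5, 0.75]
print(gs_sweep(A, b, [0.0, 0.0], order=[1, 0]))  # [0.75, 1.5]
```

Starting from the same zero vector, the two orderings produce iterates (1.5, 0.75) and (0.75, 1.5): different vectors, not permutations of each other.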
These two points are important because if the matrix A is sparse, the
dependency of each component of the new iterate on previous components
is not absolute. The presence of zeros in the matrix may remove the
influence of some of the previous components. Using a judicious
ordering of the equations, it may be possible to reduce such
dependence, thus restoring the ability to make updates to groups of
components in parallel. However, reordering the equations can affect
the rate at which the Gauss-Seidel method
converges. A poor choice of ordering can degrade the rate of
convergence; a good choice can enhance the rate of convergence. For a
practical discussion of this tradeoff (parallelism versus convergence
rate) and some standard reorderings, the reader is referred to Chapter and §.
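As an illustration of how a judicious ordering can expose parallelism, here is a sketch (the tridiagonal system and the red-black coloring are assumptions chosen for illustration, not taken from the referenced chapter) of a red-black Gauss-Seidel sweep for the system -x[i-1] + 2*x[i] - x[i+1] = b[i] with zero boundary values; unknowns of the same color do not couple to one another, so each half-sweep could in principle be updated in parallel:

```python
# Hypothetical red-black Gauss-Seidel sweep for the tridiagonal system
#     -x[i-1] + 2*x[i] - x[i+1] = b[i]
# with zero boundary values. Within one color, no unknown depends on
# another of the same color, so each inner loop is parallelizable.
def red_black_sweep(x, b):
    x = list(x)
    n = len(x)
    for color in (0, 1):                 # 0 = "red" (even), 1 = "black" (odd)
        for i in range(color, n, 2):     # independent updates within a color
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i < n - 1 else 0.0
            x[i] = (b[i] + left + right) / 2.0
    return x
```

Sweeping the red points first and the black points second is itself an ordering choice; as the text notes, it can change the convergence rate relative to the natural ordering.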
In matrix terms, the definition of the
Gauss-Seidel method
in () can be expressed as
\[
x^{(k)} = (D - L)^{-1} \bigl( U x^{(k-1)} + b \bigr).
\]
As before, D, -L, and -U
represent the diagonal, lower-triangular,
and upper-triangular parts of
A, respectively.
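As a consistency check, one can verify numerically that the matrix form agrees with the componentwise sweep. The sketch below assumes the splitting A = D - L - U (D diagonal, with -L and -U the strictly lower- and upper-triangular parts of A); the matrix and starting iterate are arbitrary examples:

```python
# Sketch of a numerical consistency check, assuming the splitting
# A = D - L - U (D diagonal, -L strictly lower, -U strictly upper).
import numpy as np

A = np.array([[4.0, -1.0],
              [-2.0, 5.0]])       # arbitrary example matrix
b = np.array([1.0, 2.0])
x_old = np.array([0.3, 0.7])      # arbitrary current iterate

# Matrix form: solve (D - L) x_new = U x_old + b
D = np.diag(np.diag(A))
L = -np.tril(A, -1)               # strictly lower part of A equals -L
U = -np.triu(A, 1)                # strictly upper part of A equals -U
x_mat = np.linalg.solve(D - L, U @ x_old + b)

# Componentwise sweep: new components used as soon as they are available
x = x_old.copy()
for i in range(len(b)):
    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
    x[i] = (b[i] - s) / A[i, i]

print(np.allclose(x, x_mat))      # True
```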
The pseudocode for the Gauss-Seidel algorithm is given in Figure .
Figure: The Gauss-Seidel Method
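The figure's pseudocode is not reproduced here; the following is a minimal sketch of the method as described above (the stopping test, tolerance, and iteration limit are assumptions):

```python
# A minimal sketch of the Gauss-Seidel method; the stopping criterion
# (maximum componentwise change below tol) and the defaults are assumptions.
def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        change = 0.0
        for i in range(n):
            # Components x[j] with j < i already hold values from the
            # current sweep; they are used as soon as they are available.
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            xi = (b[i] - s) / A[i][i]
            change = max(change, abs(xi - x[i]))
            x[i] = xi
        if change < tol:
            break
    return x

x = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
# x approximates the exact solution (1/11, 7/11)
```

Because the inner loop overwrites x[i] in place, no second vector is needed for the new iterate, in contrast to the Jacobi method.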