The modern treatment of iterative methods dates back to the
relaxation method of Southwell [193].
This was the precursor to the SOR method, though the order in which
approximations to the unknowns were relaxed varied during the
computation. Specifically, the next unknown was chosen based upon
estimates of the location of the largest error in the current
approximation. Because of this, Southwell's relaxation method was
considered impractical for automated computing. It is interesting to
note that the introduction of multiple-instruction, multiple
data-stream (MIMD) parallel computers has rekindled an interest in
so-called *asynchronous*, or *chaotic*, iterative methods (see Chazan and
Miranker [54], Baudet [30], and
Elkin [92]), which are closely related to Southwell's
original relaxation method. In chaotic methods, the order of
relaxation is unconstrained, thereby eliminating costly
synchronization of the processors, though the effect on convergence is
difficult to predict.
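Southwell's selection rule can be made concrete in a few lines: at each step, relax the single unknown whose equation has the largest residual. The sketch below is illustrative only; the function name, stopping rule, and test matrix are assumptions, not taken from Southwell's work.

```python
import numpy as np

def southwell_relaxation(A, b, x0, tol=1e-10, max_steps=10000):
    """Southwell-style relaxation (illustrative sketch): at each step,
    relax the unknown whose equation has the largest residual."""
    x = x0.astype(float).copy()
    r = b - A @ x                      # residual vector
    for _ in range(max_steps):
        i = np.argmax(np.abs(r))       # equation with largest error estimate
        if abs(r[i]) < tol:
            break
        dx = r[i] / A[i, i]            # solve the i-th equation for x_i
        x[i] += dx
        r -= dx * A[:, i]              # cheap residual update
    return x

# Example: a small diagonally dominant system (assumed for illustration)
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [ 0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x = southwell_relaxation(A, b, np.zeros(3))
```

For a serial computer the bookkeeping of locating the largest residual at every step is exactly the overhead that made the method unattractive for automated computing; a chaotic parallel variant simply drops that global selection and lets each processor relax whichever components it owns.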

The notion of accelerating the convergence of an iterative method by
extrapolation predates the development of SOR. Indeed, Southwell used
overrelaxation to accelerate the convergence of
his original relaxation method. More
recently, the *ad hoc SOR* method, in
which a different relaxation factor is used for updating each
variable, has given impressive results for some classes of
problems (see Ehrlich [83]).
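Using a different relaxation factor per variable amounts to replacing the scalar factor in SOR with a vector. A minimal sketch follows; the particular factors chosen are arbitrary assumptions for illustration and do not reproduce Ehrlich's scheme for selecting them.

```python
import numpy as np

def adhoc_sor(A, b, omega, x0, sweeps=100):
    """Gauss-Seidel sweeps with a separate relaxation factor omega[i]
    for each unknown (sketch; the factor choice is problem dependent)."""
    x = x0.astype(float).copy()
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            sigma = A[i, :] @ x - A[i, i] * x[i]      # off-diagonal terms
            x_gs = (b[i] - sigma) / A[i, i]           # Gauss-Seidel value
            x[i] = (1 - omega[i]) * x[i] + omega[i] * x_gs
    return x

A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [ 0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
omega = np.array([1.1, 1.2, 1.1])   # arbitrary per-variable factors
x = adhoc_sor(A, b, omega, np.zeros(3))
```

Setting all entries of `omega` equal recovers ordinary SOR, and setting them to one recovers Gauss-Seidel.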

The three main references for the theory of stationary iterative methods are Varga [211], Young [217] and Hageman and Young [120]. The reader is referred to these books (and the references therein) for further details concerning the methods described in this section.
