Parallelism



Vector computer: a computer that can process consecutive identical operations (typically additions or multiplications) several times faster than intermixed operations of different types. Processing identical operations this way is called `pipelining' the operations.

Parallel computer: a computer with multiple independent processing units. If the processors have immediate access to the same memory, the memory is said to be shared; if processors have private memory that is not immediately visible to other processors, the memory is said to be distributed. In that case, processors communicate by message passing.
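
To make the pipelining notion concrete, the following loop (a minimal sketch of our own in C, not code from the book) performs the same multiply-add on every iteration, which is exactly the pattern a vector computer can pipeline:

    /* Vector update y <- alpha*x + y.  Each iteration performs the
       same multiply-add on consecutive elements, so a vector computer
       can stream ("pipeline") the loop through its floating-point
       units; intermixing different operation types would break the
       pipeline. */
    void axpy(int n, double alpha, const double *x, double *y)
    {
        for (int i = 0; i < n; i++)
            y[i] = alpha * x[i] + y[i];
    }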

In this section we discuss aspects of parallelism in the iterative methods treated in this book.

Since the iterative methods share most of their computational kernels, we discuss these independently of the method. The basic time-consuming kernels of iterative schemes are:

- inner products;
- vector updates;
- matrix-vector products;
- preconditioner solves.

We will examine each of these in turn, and conclude the section with two particular issues: computational wavefronts in the SOR method, and block operations in the GMRES method.
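
For concreteness, here are minimal C sketches (ours, for illustration only, not the book's code) of two of these kernels: the inner product, and the matrix-vector product for a matrix in compressed row storage (CRS). The val, col_ind, and row_ptr arrays follow the CRS layout used elsewhere in this book, shifted to C's 0-based indexing:

    /* Inner product s = x^T y.  The running sum makes this a
       reduction; on a distributed-memory machine the partial sums
       computed by each processor must be combined by message
       passing. */
    double dot(int n, const double *x, const double *y)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += x[i] * y[i];
        return s;
    }

    /* Matrix-vector product y = A*x with A in compressed row
       storage: val holds the nonzeros row by row, col_ind their
       column indices, and row_ptr[i]..row_ptr[i+1]-1 indexes the
       entries of row i. */
    void crs_matvec(int n, const double *val, const int *col_ind,
                    const int *row_ptr, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {
            double s = 0.0;
            for (int j = row_ptr[i]; j < row_ptr[i + 1]; j++)
                s += val[j] * x[col_ind[j]];
            y[i] = s;
        }
    }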




