Parallelism

**This needs major updating, probably incorporation
into earlier sections on data structures and matrix-vector
multiplication. Suggestions welcome!**

This section is really incomplete. When I think of all the bells and whistles for parallel vector and matrix layouts, and the associated inner product, vector update, and matrix-vector algorithms in, say, PETSc, or even just my CS267 notes, I wonder how much we can or should really say about this. Given the sheer amount of material available, we can at best summarize and point to references (better ones than Jack's). I invite suggestions.

In this section we discuss aspects of parallelism in the iterative methods treated in this book. Since the iterative methods share most of their computational kernels, we discuss these kernels independently of the particular method. The basic time-consuming kernels of iterative schemes are:

- inner products;
- vector updates;
- matrix-vector products, e.g., $Ap^{(i)}$ (for some methods also $A^{T}p^{(i)}$);
- preconditioner solves, i.e., solving $Mz = r$.
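To make the four kernels concrete, here is a minimal sequential sketch in plain Python; the function names are illustrative, and the Jacobi (diagonal) preconditioner is chosen only as the simplest example of an $Mz = r$ solve. The comments note what each kernel would require in a distributed setting: the inner product needs a global reduction, the vector update is purely local, and the matrix-vector product requires communication of off-process vector entries.

```python
def inner_product(x, y):
    # sum_i x_i * y_i; in parallel: a local partial sum on each
    # process followed by a global reduction (e.g., all-reduce)
    return sum(xi * yi for xi, yi in zip(x, y))

def vector_update(y, alpha, x):
    # y <- y + alpha * x; purely local, no communication needed
    return [yi + alpha * xi for xi, yi in zip(x, y)]

def matvec(A, x):
    # dense row-wise matrix-vector product A x; in parallel each
    # process computes the rows it owns, after gathering the
    # entries of x held by other processes
    return [inner_product(row, x) for row in A]

def precond_solve(diag, r):
    # solve M z = r for the simplest choice M = diag(A)
    # (Jacobi preconditioner); local and embarrassingly parallel
    return [ri / di for ri, di in zip(r, diag)]
```

Of the four, the inner product is the problematic kernel on a distributed-memory machine: it is the only one whose result every process needs, so it forces a global synchronization point on each iteration.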