13.5.1 Degrees of Difficulty

It is easy to define various ``degrees of difficulty'' in parallel processing. One such taxonomy might be as follows:

  1. Extremely difficult (Asynchronous)

    In this category fall the complex, asynchronous, real-time applications. A good example of such a beast is ``parallel chess'' [Felten:88h] of Section 14.3, where AI heuristics must be combined with real-time constraints to solve the ill-posed problem of searching the ``tree'' of potential moves.

  2. Complex (Compound Metaproblems)  

    In this area one might put the very large applications of fairly straightforward science. Often, algorithms must be carefully constructed, but the greatest problems are the large scale of the overall system and the fact that different ``modules'' must be integrated into the complete system. An example might be the SDI simulation ``Sim88'' and its successors [Meier:90a] described in Section 18.3. The parallel processing issues in such a code require careful thought but pose no insurmountable problems.

  3. Hard (Loosely Synchronous)

    Problems such as large-scale fluid dynamics or oceanography  [Keppenne:90b] mentioned in Section 13.3 often have complex physics but fairly straightforward and well-known numerical methods. In these cases, the majority of the work involved in parallelization comes from analysis of the individual algorithms, which can then often be parallelized separately. Each submodule is then a simpler, tractable problem that often has a ``well-known'' parallel implementation.

  4. Straightforward but Tedious (Synchronous)

    The simplest class of ``interesting'' parallel programs is that of partial differential equations [Brooks:82b], [Fox:88a] and the applications of Chapters 4 and 6. In these cases the parallel processing issues are essentially trivial, but a successful implementation of the algorithm still requires some care to get the details correct.

  5. Trivial

    The last class of problems is those with ``embarrassing parallelism,'' such as in Chapter 7: essentially uncoupled loop iterations or functional units. In these cases, the parallel processing issues are again trivial, but the code still requires care if it is to work correctly in all cases. Both this case and the synchronous case above are sketched in the code following this list.

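The contrast between the two simplest categories is easiest to see in code. The following sketch, written in C with made-up array sizes and function names, shows a serial loop from each class: in the embarrassingly parallel case every iteration is independent and can be distributed with no communication, while in the synchronous case each update reads its neighbours, so a block decomposition must exchange edge values on every sweep.

    #define N 1024

    /* Category 5, "embarrassing parallelism": every iteration is
       independent, so the iterations can simply be divided among the
       processors with no communication at all. */
    void scale_samples(double x[N], double a)
    {
        for (int i = 0; i < N; i++)
            x[i] = a * x[i];            /* no cross-iteration dependence */
    }

    /* Category 4, "synchronous": a one-dimensional Jacobi-style update.
       Each new value depends on the old values of its neighbours, so the
       loop can be split into blocks, but the block edges must be
       communicated between processors on every sweep. */
    void jacobi_sweep(const double u[N], double u_new[N])
    {
        for (int i = 1; i < N - 1; i++)
            u_new[i] = 0.5 * (u[i - 1] + u[i + 1]);
    }
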
The ``bottom line'' from this type of analysis is that all but the hardest cases pose parallelization problems which are, at least conceptually, straightforward. Unfortunately, the actual practice of turning such concepts into working code is never trivial and rarely easy; at best, it is an error-prone and time-consuming task.
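
To make the ``error-prone and time-consuming'' part concrete, the sketch below shows the kind of bookkeeping a programmer must add by hand to run the Jacobi-style sweep above on several processors. It uses generic MPI message passing purely for illustration; it is not the code ASPAR itself would produce, and error handling, convergence testing, and unequal block sizes are all omitted.

    #include <mpi.h>

    /* A minimal hand-parallelized sketch of the Jacobi-style sweep above.
       Each of the nprocs processors owns nlocal interior points; u[0] and
       u[nlocal + 1] are "halo" cells holding the neighbours' edge values. */
    void parallel_jacobi_sweep(double *u, double *u_new, int nlocal,
                               int rank, int nprocs)
    {
        int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < nprocs - 1) ? rank + 1 : MPI_PROC_NULL;
        MPI_Status status;

        /* exchange edge values with both neighbours */
        MPI_Sendrecv(&u[1], 1, MPI_DOUBLE, left, 0,
                     &u[nlocal + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, &status);
        MPI_Sendrecv(&u[nlocal], 1, MPI_DOUBLE, right, 1,
                     &u[0], 1, MPI_DOUBLE, left, 1,
                     MPI_COMM_WORLD, &status);

        /* the "real" computation is one line; everything else is overhead */
        for (int i = 1; i <= nlocal; i++)
            u_new[i] = 0.5 * (u[i - 1] + u[i + 1]);
    }

Getting the halo indices, message tags, and boundary processors right is exactly the kind of mechanical detail that a parallelizing tool can take over.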

This is the basic reason for ASPAR's existence. Experience has taught us that the complexities of parallel processing are due not to any inherent problems but to the fact that human beings and parallel computers don't speak the same language. While a human can usually explain a parallel algorithm on a piece of paper with great ease, converting that picture to functional code is often a significant task. It is our hope that the bulk of this work can be automated by ``parallelizing'' technologies such as ASPAR. In particular, we believe (and our results so far bear out this belief) that the parallelization of problems in all the previous categories, except possibly category (1), can be completely or at least substantially automated.


