It is easy to define various ``degrees of difficulty'' in parallel processing. One such taxonomy might be as follows:
(1) In this category fall the complex, asynchronous, real-time applications. A good example of such a beast is ``parallel chess'' [Felten:88h] of Section 14.3, where AI heuristics must be combined with real-time constraints to solve the ill-posed problem of searching the ``tree'' of potential moves.
(2) In this area one might put the very large applications of fairly straightforward science. Often, algorithms must be carefully constructed, but the greatest problems are the large scale of the overall system and the fact that different ``modules'' must be integrated into the complete system. An example might be the SDI simulation ``Sim88'' and its successors [Meier:90a] described in Section 18.3. The parallel processing issues in such a code require careful thought but pose no insurmountable problems.
(3) Problems such as large-scale fluid dynamics or oceanography [Keppenne:90b], mentioned in Section 13.3, often have complex physics but fairly straightforward and well-known numerical methods. In these cases, most of the work of parallelization lies in analyzing the individual algorithms, which can then often be parallelized separately. Each submodule is then a simpler, tractable problem which often has a ``well-known'' parallel implementation.
(4) The simplest class of ``interesting'' parallel programs consists of solvers for partial differential equations [Brooks:82b], [Fox:88a] and the applications of Chapters 4 and 6. In these cases the parallel processing issues are essentially trivial, but a successful implementation of the algorithm still requires some care to get the details correct (the first sketch following this list illustrates the kind of detail involved).
(5) The last class comprises problems with ``embarrassing parallelism'', such as those of Chapter 7: essentially uncoupled loop iterations or functional units. In these cases, the parallel processing issues are again trivial, but the code still requires care if it is to work correctly in all cases (see the second sketch below).
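To make the ``details'' of category (4) concrete, the following sketch shows one Jacobi sweep of a one-dimensional Laplace solver with the domain split across processes. It is only an illustration, not code from the applications cited above: it uses the MPI message-passing interface rather than the systems of the time, and the grid size, data, and process layout are invented. The point is the ghost-cell (``halo'') exchange, whose boundary cases are exactly the kind of detail that must be got right.

\begin{verbatim}
/* Illustrative sketch (not code from the cited applications):
 * one Jacobi sweep of a 1-D Laplace solver, domain split across
 * processes.  The halo exchange is the part whose details --
 * the first and last process have only one neighbour -- must be
 * got exactly right. */
#include <mpi.h>

#define NLOC 100                  /* interior points per process */

int main(int argc, char **argv)
{
    double u[NLOC + 2], unew[NLOC + 2];   /* +2 ghost cells */
    int rank, nproc, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    for (i = 0; i < NLOC + 2; i++)        /* arbitrary initial data */
        u[i] = (double) (rank * NLOC + i);

    /* MPI_PROC_NULL turns the sends and receives at the physical
     * boundaries into harmless no-ops. */
    int left  = (rank > 0)         ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < nproc - 1) ? rank + 1 : MPI_PROC_NULL;

    MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  0,
                 &u[NLOC + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[NLOC],     1, MPI_DOUBLE, right, 1,
                 &u[0],        1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    for (i = 1; i <= NLOC; i++)           /* local update */
        unew[i] = 0.5 * (u[i - 1] + u[i + 1]);
    /* unew now holds the updated interior values */

    MPI_Finalize();
    return 0;
}
\end{verbatim}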
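Category (5) can be illustrated even more simply. The sketch below (again an MPI-based illustration with invented names, not code from Chapter 7) deals N completely independent iterations out cyclically to the processes and combines the results with a single global sum; the cyclic distribution handles the small but easily botched detail that N need not divide evenly by the number of processes.

\begin{verbatim}
/* Illustrative sketch of "embarrassing parallelism": N
 * independent iterations dealt out cyclically, then one global
 * sum.  The cyclic loop works for any N and any process count. */
#include <mpi.h>
#include <stdio.h>

static double work(int i)        /* any independent computation */
{
    return (double) i * i;
}

int main(int argc, char **argv)
{
    const int N = 1000;
    int rank, nproc, i;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    for (i = rank; i < N; i += nproc)    /* my share of the loop */
        local += work(i);

    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum = %f\n", total);

    MPI_Finalize();
    return 0;
}
\end{verbatim}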
The ``bottom line'' from this type of analysis is that all but the hardest cases pose parallelization problems which are, at least conceptually, straightforward. Unfortunately, the actual practice of turning such concepts into working code is never trivial and rarely easy; at best, it is an error-prone and time-consuming task.
This is the basic reason for ASPAR's existence. Experience has taught us that the complexities of parallel processing are due not to any inherent problems but to the fact that human beings and parallel computers don't speak the same language. While a human can usually explain a parallel algorithm on a piece of paper with great ease, converting that picture to functional code is often a significant task. It is our hope that the bulk of this work can be automated by ``parallelizing'' technologies such as ASPAR. In particular, we believe (and our results so far bear out this belief) that the parallelization of problems in all of the above categories, except possibly category (1), can be either completely or substantially automated.
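As a concrete, if hypothetical, illustration of the task, consider the sequential loop below; the variable names are invented and this is not actual ASPAR input or output. A human sees at a glance that each iteration is independent; a parallelizing tool must prove that there is no loop-carried dependence and then generate, by itself, the data decomposition and communication that would otherwise be written by hand.

\begin{verbatim}
/* Hypothetical sequential input for a parallelizer (invented
 * names; NOT actual ASPAR input or output).  Each iteration
 * writes a[i] from b[i] and c[i] only, so there is no
 * loop-carried dependence and the loop may be distributed. */
#include <stdio.h>

#define N 8

int main(void)
{
    double a[N], b[N], c[N], alpha = 2.0;
    int i;

    for (i = 0; i < N; i++) { b[i] = i; c[i] = N - i; }

    for (i = 0; i < N; i++)      /* independent iterations */
        a[i] = b[i] + alpha * c[i];

    for (i = 0; i < N; i++)
        printf("a[%d] = %g\n", i, a[i]);
    return 0;
}
\end{verbatim}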