
13.5.6 Dynamic Data Distribution

The three decomposition methods already described all suffer from the same defect: except in points of detail, they are fixed during the compile-time ``parallelization'' of the original program. Thus, while particular decisions such as ``which column to send to which other processor'' may be deferred to the run time support, the overall strategy is determined by static analysis of the sequential source code. ASPAR's method is entirely different.

Instead of enforcing global decomposition rules based on static evaluation of the code, ASPAR leaves all the decisions about global decomposition to the run time system and offers only hints as to possible optimizations, whenever they can safely be determined from static analysis. As a result, ASPAR's view of the previously troublesome code would be something along the lines of

 
C-  I need B and C to be distributed in increasing order.
      DO 10 I=1,100
        A(I) = B(I) + C(I)
 10   CONTINUE

C- I need B to increase and C to decrease.
      DO 20 I=1,100
        D(I) = B(I) + C(100-I)
 20   CONTINUE

where the ``comments'' correspond to ASPAR's hints to the run time support.
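To make the hint mechanism concrete, the following sketch shows one way the generated code might look. The subroutine RTHINT and the constants INCR and DECR are purely illustrative names, not part of the interface described in the text; the point is only that each hint becomes a call telling the run time support how an array is about to be used, so that the data can be redistributed before the loop if necessary.

C     Hypothetical sketch: RTHINT, INCR, and DECR stand in for whatever
C     interface the real run time support provides.
      PROGRAM SKETCH
      INTEGER INCR, DECR, I
      PARAMETER (INCR=1, DECR=2)
      REAL A(100), B(100), C(100), D(100)

C-    Hint: B and C both wanted in increasing order.
      CALL RTHINT(B, INCR)
      CALL RTHINT(C, INCR)
      DO 10 I = 1, 100
         A(I) = B(I) + C(I)
 10   CONTINUE

C-    Hint: B wanted increasing, C decreasing.
      CALL RTHINT(B, INCR)
      CALL RTHINT(C, DECR)
      DO 20 I = 1, 100
         D(I) = B(I) + C(100-I)
 20   CONTINUE
      END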

The advantages of such an approach are extraordinary. Instead of being stymied by complex, dynamically changing decomposition strategies, ASPAR proceeds irrespective of these, merely expecting that the run time support will be smart enough to provide whatever data will be required for a particular operation.
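One way to picture that expectation, purely as a sketch and not as ASPAR's actual generated code: each processor runs over only the iterations it owns and asks the run time support for any operand it does not hold locally. The names NLOCAL, LTOG, and RTGET below are hypothetical placeholders for that support.

C     Hypothetical run-time view of loop 20 above:
C     NLOCAL - number of iterations owned by this processor
C     LTOG   - maps a local iteration number to its global index
C     RTGET  - returns an array element, fetching it from the owning
C              processor when it is not stored locally
      INTEGER NLOCAL, LTOG(100), IL, IG
      REAL B(100), C(100), D(100), RTGET
      EXTERNAL RTGET

      DO 30 IL = 1, NLOCAL
         IG = LTOG(IL)
         D(IL) = RTGET(B, IG) + RTGET(C, 100-IG)
 30   CONTINUE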

As a result of this simplification in philosophy, ASPAR is able, with no user intervention, to parallelize practically all of any application that can be parallelized at all.


