Application scientists are often reluctant, or unable, to create from scratch large-scale application programs that achieve the best possible performance on different advanced computer architectures. Using code from generic mathematical software libraries or from application-specific libraries may be useful for prototyping, but programs constructed from such codes may not achieve sufficiently high performance for large-scale problems. Generic library software falls short on parallel architectures because operations such as data mapping and partitioning must be finely tuned to a particular architecture to achieve maximum performance.
One approach to producing high-performance linear algebra software while preserving transportability and reusability is illustrated by the LAPACK and ScaLAPACK packages available from Netlib. The algorithms are structured so that key matrix computations are carried out by calls to the Basic Linear Algebra Subprograms (BLAS). In addition, ScaLAPACK performs common parallel communication tasks through calls to the BLACS (Basic Linear Algebra Communication Subprograms). The BLAS and BLACS may then be implemented and finely tuned by vendors of advanced-architecture machines. The result is a common application-level programming interface provided by the LAPACK and ScaLAPACK routines, with the resulting application programs transportable across any architecture on which the underlying support routines have been implemented. A similar approach of using lower-level parallel implementations of common computational and communication operations may work for other application domains, such as partial differential equations or combinatorial graph algorithms.
Work is needed to develop, evaluate, consolidate, and standardize system software that allows easier and more efficient use of parallel architectures. Such software should include notations for data mapping, parallel compilers, parallel programming environments, and visualization and debugging tools. Application-specific languages, such as FIDIL for computational fluid dynamics, will also be useful.
Although most available mathematical software is written in Fortran, some users may be more comfortable and productive with an object-oriented language such as C++. LAPACK++, available from Netlib, is an object-oriented C++ extension to the Fortran LAPACK library for numerical linear algebra. ScaLAPACK++, currently under development, is an object-oriented C++ library for implementing linear algebra computations on distributed memory parallel computers. LPARX, developed by researchers at the University of California at San Diego and available from Netlib, is a C++ class library that provides run-time support for dynamic, non-uniform scientific calculations running on MIMD distributed memory architectures.
Software engineering models for developing library software for high-performance computing, and for writing application programs that use this software, should be developed, described, and illustrated with examples. Hypertext is a good medium for making this information available. Development of these models should be an ongoing process, with feedback from users guiding their evolution.