
Evaluation of PTLIB

 

Our evaluation of PTLIB software covered both parallel debuggers and performance analyzers. The evaluation criteria are described in detail below; they have been refined and expanded to a level of detail that allows them to serve as an evaluation checklist.

Performance
Includes accuracy, efficiency, and scalability.
Accuracy
A performance monitoring tool is accurate if it does not significantly perturb the behavior and timing of the program it monitors; a simple way to quantify this probe effect is sketched after this group of criteria.

Efficiency
The software runs fast enough that slow response does not render it ineffective as a tool.

Scalability
A parallel tool is scalable if its overhead grows in a reasonable manner as system and problem sizes increase; in some cases, even linear growth may be unacceptable.
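
As an illustration of the accuracy criterion, here is a minimal sketch, not taken from the paper, of how a reviewer might estimate a tool's probe effect: time a kernel with and without a per-event monitoring hook and report the relative overhead. The kernel, the hook, and the measurement scheme are hypothetical.

    /* Minimal probe-effect sketch; the kernel, the hook, and the
     * reporting are illustrative, not from any PTLIB tool. */
    #include <stdio.h>
    #include <time.h>

    #define N 10000000L

    static volatile long events = 0;

    /* Hypothetical monitoring hook: counts events as a tracer might. */
    static void monitor_hook(void) { events++; }

    static double run_kernel(int instrumented)
    {
        clock_t start = clock();
        double sum = 0.0;
        for (long i = 0; i < N; i++) {
            sum += (double)i * 0.5;
            if (instrumented)
                monitor_hook();      /* probe inserted per iteration */
        }
        if (sum < 0.0)               /* keep the loop from being optimized away */
            printf("%f\n", sum);
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        double base = run_kernel(0); /* uninstrumented baseline */
        double inst = run_kernel(1); /* instrumented run        */
        printf("baseline %.3fs  instrumented %.3fs  overhead %.1f%%\n",
               base, inst, (inst - base) / base * 100.0);
        return 0;
    }

Repeating such a measurement at several system and problem sizes gives a rough handle on the scalability criterion as well.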

Capabilities
The tool has adequate functionality to effectively accomplish its intended tasks.

Versatility
Includes heterogeneity, interoperability, portability, and extensibility.
Heterogeneity
A heterogeneous tool can be invoked on, and have its components run simultaneously across, all platforms in a heterogeneous system.

Interoperability
A parallel tool is interoperable if its design is based on open interfaces and if it conforms to applicable standards.

Portability
A parallel tool is portable if it works on different parallel platforms and if platform dependencies have been isolated to specific parts of the code; see the sketch following this group.

Extensibility
A performance analysis tool is extensible if new analysis methods and views can be added easily.
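
To illustrate the portability criterion, here is a minimal sketch, assuming a C code base, of isolating platform dependencies in a single module behind a small internal interface. The function name and the platform macros tested are illustrative choices, not part of any PTLIB tool.

    /* Hypothetical platform-isolation module: only this file is
     * conditionally compiled; the rest of the tool calls the
     * portable interface and never tests platform macros itself. */
    #include <stdio.h>

    const char *platform_name(void);  /* portable interface */

    #if defined(__sun)
    const char *platform_name(void) { return "Solaris"; }
    #elif defined(_AIX)
    const char *platform_name(void) { return "AIX"; }
    #else
    const char *platform_name(void) { return "generic UNIX"; }
    #endif

    int main(void)
    {
        printf("running on %s\n", platform_name());
        return 0;
    }

Confining such conditional code to one place means that a port to a new platform touches only that module.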

Maturity
Includes robustness, level of support, and size of user base.
Robustness
A parallel tool is robust if it handles error conditions without crashing, reporting them and recovering from them appropriately; a minimal example follows this group.

Level of support
The degree to which the tool is actively maintained and its developers respond to problem reports and user questions.

Size of user base
The extent to which the tool is in active use; a large user base suggests a mature, well-exercised tool.
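
As a minimal illustration of the robustness criterion, the sketch below reports a failed file open and continues with a safe default rather than crashing; the file name and the recovery policy are hypothetical.

    /* Report-and-recover sketch: a hypothetical trace file fails to
     * open, and the tool degrades gracefully instead of crashing. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    int main(void)
    {
        FILE *fp = fopen("trace.dat", "r");  /* hypothetical input */
        if (fp == NULL) {
            fprintf(stderr,
                    "warning: cannot open trace.dat: %s; "
                    "continuing without trace data\n",
                    strerror(errno));
            return 0;                        /* recover with a safe default */
        }
        /* ... process trace data ... */
        fclose(fp);
        return 0;
    }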

Ease of use
The software has an understandable user interface and is easy to use for a typical NHSE user.

Most of the software characteristics described above require qualitative rather than quantitative assessment. For this reason, reviewers assign each reviewed package a numerical score on each characteristic.
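
As one hypothetical way such per-characteristic scores might be combined, the sketch below computes a weighted overall score. The 0-10 scale, the weights, and the sample values are assumptions for illustration; the paper does not specify how, or whether, scores are aggregated.

    /* Weighted-score sketch; scale, weights, and values are
     * hypothetical, not taken from the PTLIB evaluations. */
    #include <stdio.h>

    #define NCRIT 5

    int main(void)
    {
        const char  *criterion[NCRIT] = { "Performance", "Capabilities",
                                          "Versatility", "Maturity",
                                          "Ease of use" };
        const double weight[NCRIT] = { 0.25, 0.25, 0.20, 0.15, 0.15 };
        const double score[NCRIT]  = { 8.0, 7.0, 6.0, 9.0, 7.5 };  /* 0-10 */
        double total = 0.0;

        for (int i = 0; i < NCRIT; i++) {
            total += weight[i] * score[i];
            printf("%-12s  weight %.2f  score %4.1f\n",
                   criterion[i], weight[i], score[i]);
        }
        printf("weighted overall score: %.2f / 10\n", total);
        return 0;
    }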

Currently, over 20 parallel debuggers and performance analyzers have been evaluated according to the above criteria. These packages include AIMS, DAQV, LCB, MQM, NTV, Pablo, Pangaea, Paradyn, ParaGraph, ParaVision, PGPVM, PVaniM, TotalView, Upshot, VAMPIR, VT, Xmdb, XMPI, and XPVM. We have solicited author feedback on these evaluations, and the initial evaluations have been updated based on the feedback received. Web access to the evaluations is available through the NHSE homepage at http://www.nhse.org/.


