
Approach

 

Our approach differs in several respects from traditional presentations of comparative evaluations.

We decided that users would benefit most if we concentrated our evaluations on the software with the broadest applicability. For this reason we have focused on the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and on HPC-Netlib, a high-performance branch of the Netlib mathematical software repository. Many of the packages selected for evaluation were drawn from the collection of software already available through Netlib and the NHSE. We also solicited other promising packages not then available from our repositories.

Our first step in designing a systematic, well-defined set of evaluation criteria was to establish a high-level set of criteria that can be refined as needed for particular domains. Our starting point for this high-level set was the set of software requirements described in the Baseline Development Environment [cite Pancake]. The criteria are tailored to a particular domain by those doing the evaluations and by others with expertise in that domain. We expect the evaluation criteria for a given domain to evolve over time as we take advantage of author and user feedback, and as new evaluation resources, such as new tools and problem sets, become available.
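
To make the refinement step concrete, the following minimal sketch shows one way a domain-specific set of criteria could be derived from a high-level set. The criterion names and the Python representation are our own illustrative assumptions, not the actual NHSE or Baseline Development Environment criteria.

  # Hypothetical sketch: refining high-level evaluation criteria into
  # domain-specific sub-criteria.  The criterion names below are
  # illustrative only.

  HIGH_LEVEL_CRITERIA = ["functionality", "usability", "robustness", "portability"]

  def refine_for_domain(refinements):
      """Expand each high-level criterion into the sub-criteria chosen by
      reviewers and domain experts; unrefined criteria are kept as-is."""
      return {c: refinements.get(c, [c]) for c in HIGH_LEVEL_CRITERIA}

  # Example: a refinement a reviewer might choose for parallel tools (PTLIB).
  ptlib_criteria = refine_for_domain({
      "usability": ["installation effort", "documentation quality"],
      "portability": ["supported platforms", "message-passing libraries supported"],
  })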

The NHSE software evaluation process consists of the following steps.

  1. Reviewers and other domain experts refine the high-level evaluation criteria for the domain.
  2. We select software packages within the domain and assign each, for evaluation, to an NHSE project member knowledgeable in the field.
  3. The reviewer evaluates the software package systematically, typically using a well-defined checklist of evaluation criteria. Characteristics requiring a qualitative assessment receive reviewer-assigned scores; characteristics more appropriately measured quantitatively are reported directly as quantitative results. (A sketch of such a checklist record appears after this list.)
  4. We solicit feedback from the package author, giving the author the opportunity to make corrections, additions, or comments on the evaluation. In effect, we ask the author to review our review.
  5. We make the review and the author's feedback available via the Web.
  6. We add to the evaluation and author feedback any comments users wish to submit through the NHSE Web pages.
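
As noted in step 3, the minimal sketch below shows one way a completed evaluation might record both reviewer-assigned scores and directly measured results, along with the author feedback and user comments from steps 4 and 6. The field names and the 1-to-5 scale are illustrative assumptions, not the actual NHSE review format.

  # Hypothetical sketch of a completed evaluation record.
  from dataclasses import dataclass, field

  @dataclass
  class Evaluation:
      package: str
      reviewer: str
      # Reviewer-assigned scores for characteristics requiring a
      # qualitative assessment (step 3), e.g. on a 1-to-5 scale.
      qualitative_scores: dict = field(default_factory=dict)
      # Characteristics measured directly, reported as quantitative results.
      quantitative_results: dict = field(default_factory=dict)
      author_feedback: str = ""                          # step 4
      user_comments: list = field(default_factory=list)  # step 6

  # Example usage with made-up values.
  review = Evaluation(
      package="ExampleTool",
      reviewer="NHSE project member",
      qualitative_scores={"documentation quality": 4},
      quantitative_results={"installation time (minutes)": 25},
  )
  review.author_feedback = "Corrected the list of supported platforms."
  review.user_comments.append("Installed without problems at our site.")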

