Our approach differs in several respects from traditional
comparative evaluations.
- Comparative evaluations
currently available are typically done by an author of one of the
packages and can be subject to bias and possible inconsistencies
across evaluations. By performing our evaluations as consistently
and objectively as possible, we should be able to avoid even the
appearance of bias.
- We incorporate feedback from the
package authors and users into our evaluations. This ensures that
the evaluations are both fair and up to date.
- Our evaluations are not static. As additional information
is gathered, either through our author/user feedback mechanism
or through enhancements to our evaluation procedures, we will
update the evaluations.
- The collection of evaluations will be easily accessible
at a centralized location via the Web. Users can do side-by-side
comparisons according to selected characteristics.
We decided that users would benefit most if we concentrated our
evaluations on the software with broadest applicability. For this
reason we have focused our evaluations on the Parallel Tools Library
(PTLIB), a new software repository for parallel systems software and
tools, and HPC-Netlib, a high-performance branch of the Netlib
mathematical software repository. Many packages selected for evaluation
were drawn from the collection of software already available through
Netlib and the NHSE. We also solicited other promising packages not then
available from our repositories.
Our first step in designing systematic, well-defined evaluation criteria
was to use a high-level set of criteria that can be refined as needed
for particular domains. Our starting point for establishing the high-level
set of criteria was to build on the software requirements described in
the Baseline Development Environment [cite Pancake]. The criteria were
appropriately tailored to a particular domain by those doing the evaluations
and by others with expertise in the domain. We expect that the evaluation
criteria for a given domain will evolve over time as we take advantage of
author and user feedback, and as new evaluation resources such as new tools
and problem sets become available.
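To make the refinement step concrete, here is a minimal Python sketch of how a high-level criterion set might be specialized for a particular domain. The criteria names and the refine_for_domain helper are hypothetical illustrations only, not part of the Baseline Development Environment or of any NHSE tooling.

# Hypothetical high-level criteria, each with illustrative sub-criteria.
HIGH_LEVEL_CRITERIA = {
    "usability": ["quality of documentation", "ease of installation"],
    "robustness": ["error handling", "stability"],
    "portability": ["supported platforms"],
}

def refine_for_domain(high_level, domain_refinements):
    # Copy the high-level checklist, then append domain-specific sub-criteria.
    checklist = {criterion: list(items) for criterion, items in high_level.items()}
    for criterion, extras in domain_refinements.items():
        checklist.setdefault(criterion, []).extend(extras)
    return checklist

# Hypothetical refinement for a parallel-tools domain such as PTLIB.
ptlib_checklist = refine_for_domain(
    HIGH_LEVEL_CRITERIA,
    {"usability": ["integration with message-passing programs"],
     "robustness": ["behavior on heterogeneous clusters"]},
)

The refined checklist is what a reviewer would work through for packages in that domain, and it can be revised as author and user feedback accumulates.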
The NHSE software evaluation process consists of
the following steps.
- Reviewers and other domain experts refine
the high-level evaluation criteria for the domain.
- We select software packages within this domain and assign
each to an NHSE project member knowledgeable in the field for evaluation.
- The reviewer evaluates the software package systematically,
typically using a well-defined checklist of evaluation
criteria. Results are reviewer-assigned scores for characteristics
requiring a qualitative assessment. Characteristics more appropriately
measured quantitatively are reported directly as quantitative results
(a minimal sketch of such a record follows this list).
- We solicit feedback from the package author, giving the author the
opportunity to make corrections, additions, or comments on the
evaluation. In effect, we ask the author to review our review.
- We make the review and the author's feedback available via the Web.
- We add to the evaluation and author feedback any comments users
wish to submit through the NHSE Web pages.
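The structure of an individual evaluation record is not prescribed above, so the following Python sketch is an illustration only; the Evaluation fields, the side_by_side helper, and the sample packages ToolA and ToolB are all hypothetical and do not reflect the NHSE's actual data format.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Evaluation:
    package: str
    domain: str
    scores: Dict[str, int]           # reviewer-assigned scores for qualitative criteria
    measurements: Dict[str, float]   # quantitative characteristics reported directly
    author_feedback: str = ""        # the package author's response to the review
    user_comments: List[str] = field(default_factory=list)  # comments submitted via the Web

def side_by_side(evaluations: List[Evaluation], characteristics: List[str]):
    # Build a package-by-characteristic table of selected qualitative scores,
    # the kind of comparison a user could request from the central Web collection.
    return {e.package: {c: e.scores.get(c) for c in characteristics}
            for e in evaluations}

reviews = [
    Evaluation("ToolA", "parallel debuggers",
               {"usability": 4, "robustness": 3}, {"overhead_pct": 7.5}),
    Evaluation("ToolB", "parallel debuggers",
               {"usability": 3, "robustness": 5}, {"overhead_pct": 2.1}),
]
comparison = side_by_side(reviews, ["usability", "robustness"])

Keeping the qualitative scores, quantitative measurements, and feedback in one record is what allows an evaluation to be updated in place as new information arrives, rather than being reissued as a static review.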