Next: References Up: The TOP25 Supercomputer Previous: List of the

Background Information about some of the TOP25 sites

Where available, a short summary of the mission and environment of the first fifteen of the TOP25 sites is given below. Unless noted otherwise, this information was gathered from the sites' home pages on the World Wide Web (WWW). With the availability of Mosaic, many supercomputer sites have created such home pages. To facilitate browsing the net, URLs for the site home pages are given.

National Aerospace Laboratory Numerical Wind Tunnel, Tokyo, Japan - 129.3 Gflop/s

The following information is from [6] and [8]:

``The 140 processor NWT represents, on paper, one of the world's most powerful computers. Each processor has a peak performance of 1.7 Gflop/s, allowing a potential peak performance of 236 Gflop/s. NWT is situated in a room that also contains a Fujitsu VP2600, itself a powerful 5 Gflop/s (peak) supercomputer, which functions as a front end for the NWT."

``NAL was established in 1955 to promote aeronautical engineering technology and belongs to the Science and Technology Agency. ... The Numerical Wind Tunnel (NWT) is a parallel computer system of distributed-memory architecture composed of vector processors. NWT consists of 140 Processing Elements (PE), two Control Processors (CP), and a crossbar network. Each PE is itself a vector supercomputer similar to the VP400, with 256 MBytes of memory and a peak performance of 1.7 Gflop/s; it contains a Vector Unit, a Scalar Unit, and a Data Mover that communicates with the other PEs. A PE is 50% faster than the standard VP400 and has the same amount of memory. Each CP has 128 MB of memory; the CPs manage NWT and communicate with the VP2600 through the SSU, but do not execute the real computation of the CFD codes. The crossbar network provides 421 MByte/s x 2 x 142 of bandwidth between the processors. In total, NWT has a peak performance of 236 Gflop/s and 35 GB of main memory.

The specific feature of NWT is that it pursues only the efficiency and performance of CFD codes; the architecture is designed to achieve maximum efficiency for this target alone. Each PE can execute a large-scale CFD computation with little data exchange with the other PEs. Commercial MPP-type machines are deficient in this respect, as each processor's workload is of small granularity, and they could not be focused on CFD alone even if they became available early this year. Only a special-order development can realize this kind of machine."
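The quoted hardware figures can be cross-checked with a little arithmetic. The following sketch uses only the numbers from the quotation above and assumes 1 GB = 1024 MB; note that 140 x 1.7 Gflop/s slightly exceeds the quoted 236 Gflop/s total, which suggests the per-PE figure of 1.7 Gflop/s is itself rounded (236/140 is about 1.69 Gflop/s).

```python
# Sanity check of the NWT figures quoted above.
# All constants come from the quoted text; 1 GB = 1024 MB is assumed.

N_PE = 140          # vector Processing Elements
PEAK_PER_PE = 1.7   # Gflop/s peak per PE (rounded figure from the text)
MEM_PER_PE = 256    # MBytes of memory per PE

# 140 x 1.7 Gflop/s gives 238 Gflop/s, close to the quoted 236 Gflop/s.
total_peak = round(N_PE * PEAK_PER_PE)
print(total_peak, "Gflop/s aggregate peak")

# 140 x 256 MB = 35840 MB = 35 GB, matching the quoted main memory size.
total_mem_gb = N_PE * MEM_PER_PE / 1024
print(total_mem_gb, "GB aggregate memory")

# Crossbar: 421 MByte/s x 2 (bidirectional) x 142 ports (140 PE + 2 CP).
xbar_aggregate_mb_s = 421 * 2 * 142
print(xbar_aggregate_mb_s, "MByte/s aggregate crossbar bandwidth")
```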


Los Alamos National Laboratory - 86.3 Gflop/s

The following information is from the home page of the Advanced Computing Laboratory (ACL)
(URL http://www.acl.lanl.gov/Home.html).
General information about Los Alamos National Laboratory is available under URL http://www.lanl.gov/welcome.html.

``In December 1991, Los Alamos National Laboratory was named one of two national high performance computing research center (HPCRC) sites by the Department of Energy's (DOE) HPCC Program. The Advanced Computing Laboratory (ACL) at LANL is the foundation upon which this center is being built. The goal of the ACL as an HPCRC is to promote technology transfer in advanced computing to industry, academia, and other national laboratories through the operation of an experimental computational laboratory. The ACL will support an advanced environment for computational scientists working together in interdisciplinary teams to solve today's Grand Challenges, forming the building blocks of tomorrow's computing environment and educating others in the tools of the trade. Together, the Grand Challenge problems, the GRAnd Challenge Computing Environment (GRACCE), and the ACL form the nucleus of the HPCRC."


U.S. Government, Classified - 54.8 Gflop/s

Although in general nothing is known about this center, this classified installation is reportedly located in Texas. It contains at least four Cray C90s, which are networked in a unique way for solving a single computational task.


Minnesota Supercomputer Center - 45.0 Gflop/s

Resources at the University of Minnesota include supercomputers at the Minnesota Supercomputer Center and at the Army High Performance Computing Research Center (AHPCRC). ``AHPCRC is a university-led research and educational consortium. Consortium members include the University of Minnesota as prime contractor and Howard, Jackson State, and Purdue Universities. The AHPCRC is funded by the Army Research Office's Division of Mathematical and Computer Sciences. The AHPCRC mission is to advance the state of the art in heterogeneous and networked high performance computing, to educate Army researchers and the next generation of engineers and scientists in new techniques in high performance computing, and to promote technology transfer and encourage joint research and development projects which include both university and Army researchers (URL http://www.arc.umn.edu/html/ahpcrc.html)."


NCSA, Univ. of Illinois - 44.5 Gflop/s

``Established in February 1985 with a National Science Foundation (NSF) grant, the National Center for Supercomputing Applications (NCSA) opened to the national research community in January 1986. The state of Illinois, the University of Illinois at Urbana-Champaign (UIUC), corporate partners, and other federal agencies supply additional funding. The center has provided high-performance computing and communications (HPCC) resources for over 6,000 users at more than 380 universities and corporations (URL http://www.ncsa.uiuc.edu/General/NCSAHome.html)."


NASA Ames Research Center - 39.2 Gflop/s

``The Numerical Aerodynamic Simulation (NAS) program was started in 1984 to support the NASA charter to maintain 'the role of the United States as a leader in aeronautical technology'. The NAS vision is to provide the nation's aerospace research and development community, by the year 2000, with a high-performance, operational computing system capable of simulating an entire aerospace vehicle system within a computing time of one to several hours. The major objectives of the NAS program are:

  1. Act as pathfinder in advanced, large-scale computational capability through systematic incorporation of state-of-the-art improvements in computer hardware and software technologies.

  2. Provide a national computational capability, available to NASA, DoD, industry, other government agencies and universities, as a necessary element in ensuring continuing leadership in computational fluid dynamics and related computational aerospace disciplines.

  3. Provide a strong research tool for the Office of Aeronautics."
For more details see URL http://www.nas.nasa.gov/home.html.


National Security Agency - 30.4 Gflop/s

``The goal of NSA's HPCC Program is to accelerate the development and application of the highest performance computing and communications technologies to meet national security requirements and to contribute to collective progress in the Federal HPCC Program. In support of this goal, NSA develops algorithms and architectural simulators and testbeds that contribute to a balanced environment of workstations, vector supercomputers, massively parallel computer architectures, and high speed networks. It sponsors and participates in basic and applied research and development of gigabit networking technology. NSA develops network security and information security techniques and testbeds appropriate for high speed in-house networking and for interconnection with public networks. It also develops software and hardware technology for highly parallel architectures scalable to sustained teraops performance, and it investigates or develops new technologies in materials science, superconductivity, ultra-high-speed switching and interconnection techniques, networking, and mass storage systems fundamental to increased performance objectives of high performance computing programs
(URL http://www.hpcc.gov/blue94/section.4.6.html)."


Cray Research - 29.9 Gflop/s

The following general information is from a Cray Research press release: ``Founded on April 6, 1972, Cray Research Inc. continues its mission to lead in the development and marketing of supercomputers.

As of June 30, 1993, the company employed 4,978 people. Its principal manufacturing and development facilities are in Chippewa Falls, Wisconsin, with additional operations in San Diego, California, and Beaverton, Oregon. Software development, marketing support and corporate headquarters are located in the St. Paul/Minneapolis area. The company maintains sales and support offices throughout the United States and in 21 other countries.

The company continues to target at least 15 percent of its revenue toward research and development. This helps ensure that Cray Research is on the cutting edge of advanced scientific computing."


NEC Fuchu Plant - 24.1 Gflop/s


Lawrence Livermore National Laboratory - 22.4 Gflop/s

``Lawrence Livermore National Laboratory (LLNL), managed by the University of California for the U.S. Department of Energy, has achieved 40 years of excellence in creating and applying science and technology to meet vital national needs. LLNL is a major national resource whose mission areas have been expanded by the federal government to address a broad spectrum of evolving national needs, including national security, energy, the environment, health and biomedicine, economic competitiveness, and science and math education (URL http://www.llnl.gov/)." There are two major computing facilities at LLNL: Livermore Computing and the National Energy Research Supercomputer Center (NERSC).

``The Livermore Computing organization has been a leader in the development of High Performance computing since the Lawrence Livermore National Laboratory (LLNL) was founded in 1952. During the Cold War, Livermore Computing concentrated on supporting the development of our nation's nuclear deterrent, designing early supercomputers, like the LARC (Livermore Advanced Research Computer), and developing the first time-sharing system in the 1960s and 1970s, large archival storage systems in the 1970s and 1980s, and distributed systems in the 1980s and 1990s (URL http://www.llnl.gov/liv_comp/lc.html)."

``The National Energy Research Supercomputer Center (NERSC), located at the Lawrence Livermore National Laboratory (LLNL), is the principal supplier of production high-performance computing and networking services to the nationwide energy research community. The programs directly supported by NERSC in the Department of Energy's Office of Energy Research include the Office of Scientific Computing, Fusion Energy, Basic Energy Sciences, High Energy and Nuclear Physics, and Health and Environmental Research. Scientists funded by these programs access the supercomputing resources at NERSC via the Energy Sciences network (ESnet) (URL http://www.nersc.gov/)."


Oak Ridge National Laboratory - 21.6 Gflop/s

``The Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL) provides state-of-the-art resources for Grand Challenge computing and serves as a centerpiece for computational science at ORNL. The CCS is the ORNL focal point for the U.S. High Performance Computing and Communications (HPCC) Program, with responsibilities also extending to educational initiatives. The CCS is home to one of the two High Performance Computing Research Centers established by the United States Department of Energy (DOE). The CCS is a focus for collaborations with academia, industry, and other federal laboratories. Through these efforts, the Center extends its impact far beyond the boundaries of the Oak Ridge campus. The most extensive of these cooperative ventures is the Partnership in Computational Science (PICS) Consortium supported by the DOE's Office of Scientific Computing (OSC). The three national laboratories and six universities of the PICS collaborate on a variety of Grand Challenges and other research projects. In addition to ORNL and PICS, the CCS provides computational resources to the DOE's Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) program, as an essential component of its global climate modeling program, and to a number of other Grand Challenge programs (Quantum Chromodynamics, Quantum Structure of Matter, Numerical Tokamak, Computational Biology, Computational Chemistry). Resources are also provided to a number of other organizations or programs involved in systems evaluation (e.g., NASA, ARPA, other universities and federal labs) to assist scientists and engineers in evaluating current architectures and systems with an eye to superior capabilities in the future (URL http://gopher.ccs.ornl.gov/HomePage.html)."


Sandia National Labs, Albuquerque - 20.6 Gflop/s

``The Department of Energy's Sandia National Laboratories is one of the nation's largest and most diverse research and development facilities. It employs more than 8,000 people at two locations in New Mexico and California. One of Sandia's strengths is in computational and experimental mechanics where several advanced code development efforts are in progress. These codes are run on state-of-the-art vector and massively parallel computer systems at Sandia. They support internal customers with analysis capabilities and the codes are also distributed to external customers. In addition, Sandia makes use of commercial and externally developed codes when applicable." The highly parallel supercomputers are located at Sandia's Massively Parallel Computing Research Laboratory (MPCRL) in Albuquerque, NM
(URL http://tesuque.cs.sandia.gov/MPCRL/MPCRL_home_page.html).


Atmospheric Env. Serv., Dorval, Canada - 20.0 Gflop/s


Calif. Institute of Technology - 18.6 Gflop/s

``The Caltech Concurrent Supercomputing Facilities (CCSF), located on the campus of the California Institute of Technology, supports and maintains a variety of massively parallel supercomputers for the Concurrent Supercomputing Consortium (CSCC). The CSCC is an alliance of twelve institutions - universities, research laboratories, government agencies, and industry - that pool their resources to gain access to unique computational facilities and to exchange technical information, share expertise, and collaborate on high-performance computing issues.

Major scientific applications that are being pursued include: computationally intensive astronomical data analysis (such as searches for binary radio pulsars and gamma-ray pulsars); computational neuroscience (using simulations to explore nervous system function and to develop a version of the GENESIS neural network simulation package that takes advantage of parallel computers); volumetric data rendering of medical instrument data; electron-molecule collisions in low-temperature plasmas; calculation of chemical reaction cross sections and rate constants from first quantum principles; 3-D simulations of mantle convection and the Earth's True Polar Wander; calculation of the nucleon-nucleon interaction; and computational fluid dynamics, including the numerical simulation of turbulent and unsteady separated fluid flows. In addition, Caltech researchers are developing numerical algorithms as well as new programming languages such as PCN (a CRPC collaboration) and CC++ for use on parallel computers (URL http://ccsf.caltech.edu)."


US Naval Research Laboratory - 17.5 Gflop/s

``NRL is the Navy's corporate research and development laboratory, created in 1923 by Congress for the Department of the Navy on the advice of Thomas Edison. The Laboratory has over 4000 personnel (over 1500 full-time scientists, engineers and SES employees - more than half of these PhDs, currently including a Nobel Laureate), who address basic research issues concerning the Navy's environment of sea, sky, and space. Investigations have ranged widely from monitoring the sun's behavior, to analyzing marine atmospheric conditions, to measuring parameters of the deep oceans, to exploring the outermost regions of space. Detection and communication capabilities have benefited by research that has exploited new portions of the electromagnetic spectrum, extended ranges to outer space, and provided means of transferring information reliably and securely, even through massive jamming. Submarine habitability, lubricants, shipbuilding, aircraft materials, and fire fighting along with the study of sound in the sea and the advancement of radar technology have been steadfast concerns. New and emerging areas include the study of biological and chemical processes and nanoelectronics.

The Center for Computational Sciences (CCS) is a newly formed NRL organization within the Information Technology Division. The mission of the CCS is to provide the Navy community with access to state-of-the-art high performance computing and communications (HPCC) capabilities and to provide production computational services. Contained within the CCS is the Navy's cutting-edge research in massively parallel processing (MPP) and high speed wide area networking (URL http://www.cmf.nrl.navy.mil/home.html)."


Thinking Machines Corp. - 16.1 Gflop/s


DOE/Bettis Atomic Power Laboratory - 15.8 Gflop/s


DOE/Knolls Atomic Power Laboratory - 15.8 Gflop/s


UCSD/San Diego Supercomputer Center - 15.4 Gflop/s


Hitachi Ltd. - 15.2 Gflop/s


U.S. Government, classified - 15.1 Gflop/s


Pittsburgh Supercomputing Center - 14.7 Gflop/s


US Naval Ocean. Commd, Bay St. Louis - 14.3 Gflop/s


Japan Atomic Energy Research - 13.8 Gflop/s


DOD/CEWES Vicksburg, Miss. - 13.7 Gflop/s


ECMWF, Reading, UK - 13.7 Gflop/s


KIST/System Engineering Res. Inst., Korea - 13.7 Gflop/s






top500@rz.uni-mannheim.de
Fri Jun 3 12:02:18 MDT 1994