There was remarkable progress during the 1980s in most areas related to
high-performance computing in general and parallel computing in
particular. Substantial numbers of people now use
parallel computers to get real application work done, in addition to
many people who have developed and are developing new algorithms, new
operating systems, new languages, and new programming paradigms and
software tools for massively parallel and other high-performance
computer systems. It was during this decade, especially its second
half, that there was a rapid transition toward identifying
high-performance computing strongly with massively parallel computing.
In the early part of the decade, only large, vector-oriented systems
were used for high-performance computing. By the end of the decade,
while most such work was still being done on vector systems, some of
the leading-edge work was already being done on parallel systems. This
included work at universities and research laboratories, as well as in
industrial applications. By the end of the decade, oil companies,
brokerage companies on Wall Street, and database users were all taking
advantage of parallelism in addition to the traditional scientific and
engineering fields. The CP efforts played an important role in
advancing parallel hardware, software, and applications. As this
chapter indicates, many other projects contributed to this advance as well.
A frustrating pattern of neglect persists in certain areas of parallel computer system design, including the ratio of internal computational speed to input/output speed, and the speed of communication between processors in distributed-memory systems. Latency for both I/O and interprocessor communication is still very high. Compilers are often still crude. Operating systems still lack stability and even the most fundamental system-management tools. Nevertheless, much progress was made.
By the end of the 1980s, parallel computer systems had indeed achieved higher speeds than any sequential computer, and had done so for a few real applications. In a few cases, the parallel systems even proved to be cheaper, that is, more cost-effective, than sequential computers of equivalent power. This was despite a truly dramatic increase in the performance of sequential microprocessors, especially floating-point units, in the late 1980s. So both key objectives of parallel computing, the highest achievable speed and more cost-effective performance, were achieved and demonstrated in the 1980s.