next up previous contents index
Next: 20 Computational Science Up: 19 Parallel Computing in Previous: 19.1 Motivation

19.2 Examples of Industrial Applications

In the following, we refer to the numerical label (item number) in the first column of Table 19.1.

Items 1, 4, 14, 15, and 16 are typical of major signal processing and feature identification problems in defense systems. Currently, special-purpose hardware, typically with built-in parallelism, is used for such problems. We can expect that the use of commercial parallel architectures will aid the software development process and enhance reuse. Parallel computing in acoustic beam forming (item 1) should allow adaptive on-line signal processing to maximize the signal-to-noise ratio dynamically as a function of angle and time. Currently, the Intel iWarp is being used, although SIMD architectures would be effective in this and most low-level signal processing problems. A SIMD initial processor would be augmented with a MIMD machine to carry out the higher level vision functions. Currently, JointStars (item 4) uses a VAX for the final tracking stage of its airborne synthetic aperture radar system. This was used very successfully in the Gulf War. However, parallel computing could enhance the performance of JointStars and allow it to track many moving targets; one may remember the difficulties in following the movement of SCUD launchers in the Gulf War. As shown in Chapter 18, we already know good MIMD algorithms for multitarget tracking [Gottschalk:88a;90b].
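Beam forming of this kind parallelizes naturally over steering angles, since each angle is processed independently. As an illustrative sketch (not the iWarp implementation), a delay-and-sum beamformer for a linear array might look like the following; the array geometry, sampling rate, and sound speed are made-up parameters:

```python
import numpy as np

def delay_and_sum(signals, sensor_positions, angles, speed=1500.0, fs=1000.0):
    """Delay-and-sum beamformer for a linear sensor array.

    signals:          (n_sensors, n_samples) array of sampled time series
    sensor_positions: (n_sensors,) positions along the array axis, in metres
    angles:           candidate steering angles, in radians
    Returns the beam power for each steering angle; the loop over angles
    is the embarrassingly parallel dimension.
    """
    n_sensors, n_samples = signals.shape
    powers = []
    for theta in angles:                      # each angle is independent
        delays = sensor_positions * np.sin(theta) / speed   # seconds
        shifts = np.round(delays * fs).astype(int)          # whole samples
        summed = np.zeros(n_samples)
        for s in range(n_sensors):
            # undo the propagation delay so a wave from angle theta
            # adds coherently across sensors
            summed += np.roll(signals[s], -shifts[s])
        powers.append(np.mean(summed ** 2))
    return np.array(powers)
```

A plane wave arriving from the true bearing adds coherently only at the matching steering angle, so the power peaks there; sweeping angles per time block is what an adaptive on-line system would repeat continuously.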

We can expect the Defense Department to reduce its purchases of new planes, tanks, and ships. However, we see a significant opportunity to integrate new high-performance computer systems into existing systems at all levels of defense. This includes both avionics and mission control in existing aircraft and the hierarchy of control centers within the armed services. High-performance computing can be used both in the delivered systems and, perhaps even more importantly, in the simulation of their performance.

Modelling of the ocean environment (item 2) is a large-scale partial differential equation problem which can determine dynamically the acoustic environment in which sonar signals propagate. Large-scale (teraflop) machines would allow real-time simulation in a submarine and lead to dramatic improvements in detection efficiency.
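The computational core of such a problem can be illustrated with a toy kernel. The sketch below, a hypothetical leapfrog update for the two-dimensional acoustic wave equation, shows the local stencil structure that makes these simulations a natural fit for domain decomposition on a parallel machine; the grid size, time step, and boundary treatment are illustrative assumptions:

```python
import numpy as np

def wave_step(u_prev, u_curr, c, dt, dx):
    """One leapfrog time step of the 2-D acoustic wave equation
    u_tt = c^2 (u_xx + u_yy), with fixed (zero) boundary values.

    In a parallel code the grid would be block-decomposed across
    processors, each exchanging one row/column of "ghost" cells with
    its neighbours per step -- communication proportional to the
    surface, computation to the volume.
    """
    lap = np.zeros_like(u_curr)
    lap[1:-1, 1:-1] = (u_curr[2:, 1:-1] + u_curr[:-2, 1:-1] +
                       u_curr[1:-1, 2:] + u_curr[1:-1, :-2] -
                       4.0 * u_curr[1:-1, 1:-1]) / dx ** 2
    # standard second-order-in-time update
    return 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap
```

Stability requires the usual CFL condition (here, c·dt/dx ≤ 1/√2); real ocean-acoustics codes add depth-dependent sound speed and absorbing boundaries, but the parallel structure is the same.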

Computational fluid dynamics, structural analysis, and electromagnetic simulation (item 3) are a major emphasis in the national high-performance computing initiative, especially within NASA and DOE. Such problems are described in Chapter 12. However, the industries that can use this application are typically facing major cutbacks, and the integration of new technology faces major hurdles. How do you use parallelism when the corporation would like to shut down its current supercomputer center and, further, has a hiring freeze preventing personnel trained in this area from entering the company? We are collaborating with NASA in helping industry with a new consortium in which several companies are banding together to accelerate the integration of parallelism into their working environment in the area of multidisciplinary design for electromagnetics, fluids, and structures. An interesting initial concept was a consortium project to develop a nonproprietary software suite of generic applications which would be modified by each company for its particular needs. One company would optimize the CFD code for a new commercial transport, another for aircraft engine design, another for automobile air drag simulation, another for automobile fan design, another for minimizing noise in air conditioners (item 7) or for more efficient exhaust pumps (item 6). The electromagnetic simulation could be optimized either for stealth aircraft or for the simulation of the electromagnetic properties of a new high-frequency printed circuit board. In the latter case, we use simulation to identify problems which would otherwise require time-consuming fabrication cycles. Thus, parallel computing can accelerate the introduction of products to market and so give a competitive edge to the corporations using it.

Power utilities (item 9) have several interesting applications of high-performance computing, including nuclear power safety simulation and gas and electric transmission problems. Here the huge dollar value of power implies that small percentage savings can warrant large high-performance computing systems. There are many electrical transmission problems suitable for high-performance computing which are built around sparse matrix operations. For Niagara Mohawk, a New York utility, the matrix has about 4000 rows (and columns) with approximately 12 nonzero elements in each row (column). We are now designing a parallel transient stability analysis system. This would share some of the features described for DASSL (Section 9.6). Niagara Mohawk's problem (matrix size) can only use a modest (perhaps 16-node) parallel system. However, one could use large teraflop machines (10,000 nodes?) to simulate larger areas, such as the sharing of power over a national grid.
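The sparse matrix operations mentioned here can be made concrete. The following sketch (a hand-rolled illustration, not the utility's code) performs a matrix-vector product in compressed sparse row (CSR) storage, the natural representation when each of roughly 4000 rows carries only about 12 nonzeros; since rows are independent, they can be divided among processors:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Compute y = A @ x for a matrix stored in CSR form.

    data:    nonzero values, row by row
    indices: column index of each nonzero
    indptr:  indptr[i]:indptr[i+1] delimits row i's nonzeros

    Each row's dot product is independent, so rows can be distributed
    across processors; for a power-grid matrix, a processor needs only
    the entries of x touched by its local rows.
    """
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):                          # parallelizable loop
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y
```

With ~12 nonzeros per row, storage and work per matvec are about 12n rather than n², which is why even the modest 4000-row problem is dominated by sparse, not dense, arithmetic.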

In a completely different area, the MONY Insurance Company (item 10) spends $70 million a year on data processing, largely on COBOL applications, where they have some 15 million lines of code and a multi-year backlog. They see no immediate need for high-performance computing, but surely a more productive software environment would be of great value! Similarly, Empire Blue Cross/Blue Shield (item 11) processes 6.5 million medical insurance transactions every day. Their IBM 3090-400 handles this even with automatic optical scanning of all documents. Massively parallel systems could only be relevant if one could develop a new approach, perhaps with parallel computers examining the database with an expert system or neural network to identify anomalous situations. The state and federal governments are burdened by the major cost of Medicaid, and small improvements would have great value.

The major computing problem for Wall Street (items 12, 13) is currently centered on the large databases. SIAC runs the day-to-day operation of the New York and American Stock Exchanges. Two acres of Tandem computers (about 300 machines) handle the calls from brokers to traders on the floor. The traders already use an ``embarrassingly parallel'' decomposition, with some 2000 stocks of the New York Stock Exchange distributed over about 500 personal computers, roughly one PC per trader. For SIAC, the major problem is reliability and network management, with essentially no down time ``allowed.'' High-performance computers could perhaps be used as part of a heterogeneous network management system to simulate potential bottlenecks and strategies to deal with faults. The brokerages already use parallel computers for economic modelling [Mills:92a;92b], [Zenios:91b]. This is obviously glamorous, with the integration of sophisticated optimization methods very promising.
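The trading-floor decomposition is a textbook instance of the embarrassingly parallel pattern: partition the work, and each worker proceeds with no communication at all. A minimal sketch (with hypothetical symbol names) of dividing roughly 2000 stocks among some 500 workers:

```python
def partition(stocks, n_workers):
    """Block-partition a list of stock symbols across workers.

    As on the trading floor, each worker then handles its slice
    completely independently -- the "embarrassingly parallel" pattern,
    with no inter-worker communication. When len(stocks) does not
    divide evenly, the first few workers take one extra symbol.
    """
    k, r = divmod(len(stocks), n_workers)
    slices, start = [], 0
    for w in range(n_workers):
        size = k + (1 if w < r else 0)   # spread the remainder
        slices.append(stocks[start:start + size])
        start += size
    return slices
```

With 2000 stocks and 500 workers, each worker gets exactly four symbols; speedup is essentially perfect because, unlike the grid and matrix problems above, there is no surface-to-volume communication cost at all.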

As our final example (item 17), we have the entertainment and education industries. Here high-performance computing is linked to areas such as multimedia and virtual reality, with high bandwidth and sophisticated visualization and delivery systems. Some applications can be viewed as the civilian versions of military flight simulators, with commercial general-purpose parallel computers replacing the special-purpose hardware now used. Parallelism will appear at the low end with future extensions of Nintendo-like systems; at a medium scale for computer-generated stages in a future theater; and at the high end with parallel supercomputers controlling simulations in tomorrow's theme parks. Here, a summer CP project led by Alex Ho with three undergraduates may prove to be pioneering [Ho:89b], [Ho:90b]. They developed a parallel video game, Asteroids, on the nCUBE-1 and transputer systems [Fox:88v]. This game is a space war in a three-dimensional toroidal space with spacecraft, missiles, and rocks obeying some approximation of the laws of physics. We see this as a foretaste of a massively parallel supergame accessed by our children from throughout the globe over high-speed lines and consumer virtual reality interfaces. A parallel machine is a natural choice to support the realism and good graphics of the virtual worlds that would be demanded by the Nintendo generation. We note that, even today, the market for Nintendo and Sega video entertainment systems is an order of magnitude larger than that for supercomputers. High-performance computers should also appear in all major sports stadiums to perform image analysis as a training aid for coaches or to provide new views for cable TV audiences. We can imagine sensors and tracking systems developed for the Strategic Defense Initiative being adapted to track players on a football field rather than a missile launch from an unfriendly country. Many might consider this appropriate, with American football being as aggressive as many military battles!

Otis (item 5) is another example of information processing, discussed generally in Section 19.1. They are interested in setting up a database of elevator monitoring data which can be analyzed for indicators of future equipment problems. This would lead to improved reliability, an area where Otis and Japanese companies compete. In this way, high-performance computing can lead to competitive advantage in the ``global economic war.''






Guy Robinson
Wed Mar 1 10:19:35 EST 1995