A broad class of neural network algorithms [Grossberg:88a], [Hopfield:82a], [Kohonen:84a], [Rumelhart:86a] can be implemented in terms of a suitable set of data-parallel operators [Fox:88g], [Nelson:89a]. The rapid prototyping capabilities of MOVIE, combined with the field algebra model, offer a convenient and portable environment for experimentation and development in neural network research. In fact, the need for such tools, integrated with HPC support, was one of the original arguments driving the MOVIE project. We plan to continue our previous work on parallel neural network algorithms [Fox:88e], [Ho:88c], [Nelson:89a], now supported by rapid prototyping and visualization tools.
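As a hedged illustration of this implementation style (a NumPy sketch, not the MOVIE field-algebra operators themselves, which are not detailed here), a feed-forward network layer can be written entirely in terms of data-parallel array operations, with no explicit loop over neurons:

    # Minimal sketch: one layer of a feed-forward network expressed purely as
    # data-parallel array operators (matrix product, elementwise add,
    # elementwise nonlinearity).  Names and shapes are illustrative only.
    import numpy as np

    def dense_layer(x, weights, bias):
        """Data-parallel forward pass: all output units are computed at once."""
        return np.tanh(x @ weights + bias)      # elementwise nonlinearity

    # Toy usage: a batch of 4 input vectors through a 3 -> 2 layer.
    rng = np.random.default_rng(0)
    x = rng.standard_normal((4, 3))             # batch of input "fields"
    w = rng.standard_normal((3, 2))
    b = np.zeros(2)
    print(dense_layer(x, w, b).shape)           # (4, 2)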
Within CNP, we also plan to continue our exploration of methods in computational neurobiology [Furmanski:87a], [Nelson:89a]. We want to couple MOVIE with popular neural network simulation systems such as Aspirin from MITRE or Genesis from Caltech, and to provide MOVIE-based HPC support for the neuroscience community. Another attractive application area for neural networks is load balancing for the MIMD-parallel and distributed versions of the system. We plan to extend our previous algorithms for neural net-based static load balancing [Fox:88e] to the present, more dynamic MOVIE model, and to construct ``neural routing'' techniques for MovieScript threads.
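The sketch below gives a minimal illustration of the kind of neural-net technique involved in static load balancing: mean-field annealing used to bisect a task graph across two processors. The graph, penalty weight, and cooling schedule are assumptions made for the example; this is not the algorithm of [Fox:88e].

    # Hedged sketch of neural-network-style static load balancing by
    # mean-field annealing graph bisection.
    import numpy as np

    def mean_field_bisection(adj, balance=0.5, t_start=2.0, t_end=0.05, steps=200):
        """Assign each task (graph node) to one of two processors.

        adj     : symmetric adjacency matrix (communication between tasks)
        balance : strength of the equal-load penalty term
        Returns an array of +/-1 labels, one per task.
        """
        n = adj.shape[0]
        rng = np.random.default_rng(1)
        v = rng.uniform(-0.1, 0.1, n)           # "soft spins", one neuron per task
        for t in np.geomspace(t_start, t_end, steps):
            # Local field: connected tasks attract each other to the same side,
            # while the balance term pushes the overall spin sum toward zero.
            field = adj @ v - balance * (v.sum() - v)
            v = np.tanh(field / t)              # annealed mean-field update
        return np.where(v >= 0, 1, -1)

    # Toy usage: two 3-node cliques joined by a single edge split cleanly in two.
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1.0
    print(mean_field_bisection(A))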
This class of neural net applications can be viewed as an instance of a broader domain referred to as physical computation, illustrated in Chapter 11: the use of methods and intuitions from physics to develop new algorithms for hard problems in combinatorial optimization [Fox:88kk;88tt;88uu;90nn], [Koller:89b]. We also plan to continue this promising research path.
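As a hedged example of physical computation in this sense, the sketch below applies simulated annealing, a thermodynamics-inspired heuristic, to a toy travelling-salesman instance; the cities and cooling schedule are made up for the illustration and are not drawn from the cited work.

    # Simulated annealing for a tiny travelling-salesman problem:
    # a physics-inspired heuristic for combinatorial optimization.
    import math, random

    def tour_length(tour, cities):
        return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def anneal(cities, t_start=1.0, t_end=1e-3, steps=20000, seed=0):
        rng = random.Random(seed)
        tour = list(range(len(cities)))
        best = tour_length(tour, cities)
        for k in range(steps):
            t = t_start * (t_end / t_start) ** (k / steps)   # geometric cooling
            i, j = sorted(rng.sample(range(len(cities)), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt reversal
            delta = tour_length(cand, cities) - tour_length(tour, cities)
            # Metropolis rule: accept improvements always, uphill moves sometimes.
            if delta < 0 or rng.random() < math.exp(-delta / t):
                tour = cand
                best = min(best, tour_length(tour, cities))
        return tour, best

    cities = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
    print(anneal(cities))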