This problem provided the initial motivation for developing MOVIE. Vision involves diverse computational modules, ranging from massively parallel algorithms for image processing to symbolic AI techniques, coupled in real time via feedforward and feedback pathways. Consequently, the corresponding software environment needs to support both regular data-parallel computing and irregular, dynamic processing, all embedded in a uniform high-level programming model with consistent data structures and a consistent communication model between individual modules. Furmanski started the vision research within the Computation and Neural Systems (CNS) program at Caltech and then continued experiments with various image-processing and early/medium vision algorithms (Sections 6.5, 6.6, 6.7, 9.9) in the Terrain Map Understanding project (Section 17.3). The most recent framework is the new Computational Neuroscience Program (CNP) at Syracuse University, where various elements of our previous work on vision algorithms and software support can be augmented by new ideas from biological vision and possibly integrated into a more complete machine vision system. We also plan to couple some aspects of the vision research with the design and development work on virtual reality environments.