Optimization calculations are a staple of numerical computation. Industry uses them in planning, design, marketing, and distribution. Modeling almost always involves at least one optimization stage, and since the point of modeling is to control or improve the process being modeled, a second optimization stage is often required. The CRPC Parallel Optimization group is developing application-motivated parallel algorithms for optimization calculations on parallel machines.
The overall goal of the Parallel Optimization project is to learn to use parallel computer systems to solve optimization problems of interest to industry and academia in which a lack of computing speed or power is the major bottleneck. Major emphasis is on linear and integer programming and on multidisciplinary design optimization (MDO).
John Dennis' research is on practical methods for optimization, with particular interest in parallel methods for nonlinear optimization of systems described by coupled nonlinear simulations. He is founder and editor-in-chief of the SIAM Journal on Optimization, an advisory editor of Mathematics of Operations Research, and a past co-editor of Mathematical Programming. He has served as chair of the SIAM Activity Group on Optimization and has served two terms on the SIAM Council. He has done extensive consulting for United States industry, has been a Fulbright Lecturer to Argentina, and has published his algorithm research in several widely disseminated software journals. Dennis has given many short courses and featured addresses at international conferences. He has directed 27 Ph.D. theses, and his students hold positions in industry, government, and academic departments of business, mathematics, computer science, and applied mathematics. His textbook, co-authored with R.B. Schnabel, was published by Prentice-Hall and in Russian translation by Mir.
Christian Bischof devised the concept of incremental condition estimation, which enabled novel approaches to the computation of rank-revealing factorizations. He also was one of the developers of the public-domain LAPACK linear algebra library. As part of his CRPC-supported work, he is one of the initiators of the ADIFOR project for a portable Fortran automatic differentiation system. Bischof was the first recipient of the Wilkinson Fellowship in Computational Mathematics, awarded by Argonne National Laboratory in 1988. He is the author of more than 50 articles and technical reports.
The CRPC effort in MDO involves collaborative research with Boeing and with MADIC, a consortium of major aerospace and automobile manufacturers. The major result to date has been a modeling framework for MDO that has led to a reformulation suitable for a parallel solution on a heterogeneous network of parallel supercomputers and workstations.
These benefits have been demonstrated on trajectory optimization programs (Boeing), optimal design of membrane filtration processes (Rice University), three-dimensional Navier-Stokes codes for transonic flow (NASA), car engine lubrication simulations (General Motors), and biomechanical models of complex human organs (National Institute of Standards and Technology).
Robert Bixby, Professor of Computational and Applied Mathematics at Rice University, is interested in linear, integer, and combinatorial optimization problems, with particular emphasis on the solution of large-scale real-world models. Current projects include the study of cutting-plane techniques, both sequential and parallel, for the Traveling Salesman Problem; parallel mixed integer programming; and methods for combining simplex and interior-point approaches in the solution of large-scale linear programs. He is editor-in-chief of Mathematical Programming.
In system modeling applications, the system's future behavior may be predicted by using past observational data of the system's states to determine its characterizing parameters. This first optimization stage is called the inverse problem solution or parameter identification stage. Once this is done, emphasis shifts to the determination of the design or control parameters that optimize some system performance index.
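As a concrete illustration of this two-stage pattern, the sketch below first identifies model parameters from (synthetic) past observations by least squares, then optimizes a control parameter against a performance index. The model form, data, and performance index are hypothetical and assume Python with NumPy and SciPy; this is not code from any CRPC application.

```python
# Illustrative two-stage sketch (not CRPC code): identify model parameters
# from past observations, then optimize a control parameter. The model form,
# synthetic data, and performance index are all hypothetical.
import numpy as np
from scipy.optimize import least_squares, minimize_scalar

def model(params, t):
    # Hypothetical first-order response: y(t) = a * (1 - exp(-b * t))
    a, b = params
    return a * (1.0 - np.exp(-b * t))

# Stage 1 (inverse problem / parameter identification): fit a and b to
# synthetic "past observations" of the system's state.
t_obs = np.linspace(0.0, 5.0, 20)
rng = np.random.default_rng(0)
y_obs = model([2.0, 0.7], t_obs) + 0.05 * rng.standard_normal(t_obs.size)
fit = least_squares(lambda p: model(p, t_obs) - y_obs, x0=[1.0, 1.0])
a_hat, b_hat = fit.x

# Stage 2 (design/control optimization): with the identified parameters
# fixed, pick the control u that optimizes a performance index.
def negative_performance(u):
    return -(model([a_hat, b_hat], u) - 0.1 * u**2)

best = minimize_scalar(negative_performance, bounds=(0.0, 5.0), method="bounded")
print(f"identified a={a_hat:.2f}, b={b_hat:.2f}; optimal control u={best.x:.2f}")
```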
The group has applied newly developed codes to determine 4,100 spline coefficients for hydraulic conductivity from pressure data. This effort involved using the Intel Delta to solve a 96,000-variable nonlinear programming problem with 92,000 nonlinear constraints.
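At toy scale, such a problem has the general shape sketched below: a data-misfit objective minimized subject to nonlinear constraints, here posed with SciPy's trust-region constrained solver. The variables, constraint form, and dimensions are placeholders; the actual Delta runs involved tens of thousands of variables and constraints and custom parallel codes.

```python
# Toy sketch of a nonlinearly constrained problem of this general form.
# The objective stands in for a pressure-data misfit and the constraint
# form is hypothetical; the real runs were ~96,000 variables on the Delta.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

n = 10                                   # toy dimension
target = np.linspace(1.0, 2.0, n)        # placeholder for observed data fit

def misfit(x):
    return np.sum((x - target) ** 2)     # data-misfit objective

def coupling(x):
    return x[:-1] * x[1:]                # nonlinear coupling between neighbors

nlc = NonlinearConstraint(coupling, lb=0.5, ub=5.0)
res = minimize(misfit, x0=np.ones(n), method="trust-constr", constraints=[nlc])
print(res.x)
```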
Cutting-plane methods embedded in branch-and-bound techniques have been applied with particular success to the Traveling Salesman Problem (TSP). TSP instances have been solved on five to seven processors of an Intel iPSC/860 hypercube at speeds matching a single-processor CRAY Y-MP, and the group is testing the approach on the Intel Delta as well. Working with researchers at the Center for Discrete Mathematics and Theoretical Computer Science, another NSF Science and Technology Center, CRPC researchers are restructuring the parallel TSP code to incorporate the lessons learned in solving a world-record 3,038-city problem on a network of heterogeneous workstations. The same collaborators are also building a mixed integer-linear programming code that exploits the TSP work and other major advances in pure integer programming made during the past ten years.
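For readers unfamiliar with the overall scheme, the following sketch shows a minimal branch-and-bound skeleton over LP relaxations for a tiny 0-1 knapsack-style problem. The data are made up and the example assumes Python with SciPy; the CRPC TSP codes additionally embed problem-specific cutting planes (such as subtour-elimination constraints) at each node and run the tree search in parallel, none of which is shown here.

```python
# Minimal branch-and-bound sketch over LP relaxations for a tiny 0-1
# knapsack-style problem (hypothetical data). Cutting planes and parallel
# tree search, as used in the CRPC TSP codes, are omitted.
import numpy as np
from scipy.optimize import linprog

values = np.array([10.0, 13.0, 7.0, 8.0])
weights = np.array([5.0, 7.0, 4.0, 3.0])
capacity = 10.0
n = len(values)

best_val, best_x = -np.inf, None
stack = [dict()]                          # each node fixes some variables to 0 or 1

while stack:
    fixed = stack.pop()
    bounds = [fixed.get(j, (0, 1)) for j in range(n)]
    # linprog minimizes, so negate the objective to maximize total value
    lp = linprog(-values, A_ub=[weights], b_ub=[capacity],
                 bounds=bounds, method="highs")
    if not lp.success:
        continue                          # infeasible node: prune
    if -lp.fun <= best_val + 1e-9:
        continue                          # LP bound no better than incumbent: prune
    x = lp.x
    frac = [j for j in range(n) if abs(x[j] - round(x[j])) > 1e-6]
    if not frac:
        best_val, best_x = -lp.fun, np.round(x)   # integral solution: new incumbent
        continue
    j = frac[0]                           # branch on a fractional variable
    stack.append({**fixed, j: (0, 0)})
    stack.append({**fixed, j: (1, 1)})

print("best value:", best_val, "selection:", best_x)
```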
Jorge Moré is interested in the development of algorithms and software for optimization problems and their performance on vector and distributed-memory architectures. He played the lead role in the development of MINPACK-1, a collection of high-quality optimization subroutines distributed worldwide to more than 500 sites. He is currently working on an expanded version of the MINPACK package, with a focus on large-scale optimization. Moré is on the editorial boards of Numerische Mathematik and the SIAM Journal on Optimization, and has served on the editorial boards of the SIAM Journal on Numerical Analysis and the SIAM Journal on Scientific and Statistical Computing. He recently finished a book with Stephen Wright, Optimization Software Guide, published by SIAM.