The Flow in Porous Media Parallel Project (FPMPP) group is pursuing several objectives: to develop accurate and efficient parallel algorithms for reservoir simulation; to develop an understanding of parallel scaling issues in reservoir simulation; to investigate problems in porting reservoir simulators to parallel computers; to develop techniques for conditional simulation on parallel machines and for the parallel simulation of contaminant remediation; and to perform basic research on various aspects of flow in porous media. These objectives are essential for predicting the response of reservoirs to complicated processes and for understanding, designing, and testing economically feasible recovery or decontamination strategies.
Mary Wheeler's research interests include the numerical solution of systems of partial differential equations, with applications to flow in porous media, and parallel computation. Her numerical work includes the formulation, analysis, and implementation of finite-difference/finite-element discretization schemes for nonlinear coupled partial differential equations, as well as domain decomposition iterative solution methods. Her applications include reservoir engineering and contaminant transport in groundwater. Current work has emphasized mixed finite-element methods for modeling reactive multiphase flow and transport in heterogeneous porous media, with the goal of simulating these systems on parallel computing platforms.
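For orientation, the single-phase Darcy flow model underlying such simulators can be written in mixed velocity-pressure form; the statement below is a standard textbook formulation included only for illustration, not a description of the group's own codes. With permeability K, viscosity mu, source term q, and homogeneous pressure boundary conditions, one seeks the Darcy velocity u and pressure p satisfying

    u = -\frac{K}{\mu}\,\nabla p, \qquad \nabla\cdot u = q \quad \text{in } \Omega,

and the mixed weak form seeks (u, p) in H(div, Omega) x L^2(Omega) such that

    \int_\Omega \mu K^{-1} u\cdot v\,dx \;-\; \int_\Omega p\,\nabla\cdot v\,dx \;=\; 0
        \qquad \text{for all } v \in H(\mathrm{div},\Omega),
    \int_\Omega (\nabla\cdot u)\,w\,dx \;=\; \int_\Omega q\,w\,dx
        \qquad \text{for all } w \in L^2(\Omega).

Approximating u and p in separate finite-element spaces (Raviart-Thomas spaces are a common choice) produces the large saddle-point systems that mixed methods and domain decomposition solvers must handle, and that motivate parallel solution strategies of the kind pursued here.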
Specific work within the FPMPP includes:
The goal of this project is the improvement of reflection seismic data processing through the development of new algorithms and the use of parallel computation. Reflection seismology provides the most detailed picture of the earth's structure available for petroleum exploration and production. It is the primary tool used by geophysicists to locate and map likely oil and gas prospects, and it is of increasing importance in advanced production techniques. Seismic crews generate acoustic energy with explosions or other sources, record the echoes from underground formations, and collect the records of many such "shots." Surveys are carried out at sea from ships towing cables full of microphones, as well as in deserts, on mountains, and in swamps all over the world. The petroleum industry spends several billion dollars annually on the worldwide application of this technology. Texas and the contiguous Gulf of Mexico form the most thoroughly seismically surveyed territory in the world.
The current focus of this project is the development of a feasible plan for accurate seismic inversion on a small industrial scale. This investigation will include estimating seismic wave velocity directly from waveform data and extracting detailed estimates of local parameter fluctuations in the earth model. In this project, a model is regarded as successful if it is physically sensible and, when used in a detailed simulation of seismic wave propagation, approximately reproduces the recorded data. The production of such models is very much an open research problem, and it is the first step toward extracting the maximum amount of information from seismic data. Even small problems of this type involve large-scale computations that exhibit intrinsic parallelism at many levels. Thus, parallel computation is an essential tool and an integral part of this project.
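In schematic terms, and with notation introduced here purely for illustration, the waveform-fitting problem just described is a nonlinear least-squares problem:

    \min_{m}\; J[m] \;=\; \tfrac{1}{2}\,\bigl\| F[m] - d \bigr\|^{2},

where m collects the earth-model parameters (wave velocities and local parameter fluctuations), d is the recorded seismic data, and F[m] is the data predicted by simulating wave propagation through the model m. Each evaluation of F[m] requires solving a wave equation for every shot in the survey, which accounts for both the scale of the computation and the natural parallelism noted above.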
The improved estimation of seismic wave velocities is key to the formation of earth models. Over the past several years, group researchers have developed an approach to velocity estimation that overcomes theoretical obstacles fatal to other approaches. This approach is called differential semblance optimization (DSO). DSO is one of a limited number of working techniques for extracting velocities and other earth features directly from seismic data, with minimal human intervention. It has been applied successfully both in synthetic model studies and in field data trials. A number of other research groups around the world have developed similar ideas, some arrived at independently and some inspired by the work at Rice University. The technological potential of DSO is also being demonstrated through The Rice Inversion Project, an industrial research consortium that has tested DSO implementations in selected field data situations.
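In rough schematic form (again with illustrative notation, not the group's precise formulation), DSO replaces the raw data misfit with a differential semblance functional. A trial velocity model v is used to form an image u[v](x, h) for each value of an offset or shot parameter h, and the objective penalizes the variation of that image across h:

    J_{\mathrm{DS}}[v] \;=\; \tfrac{1}{2} \int \Bigl| \frac{\partial u[v]}{\partial h}(x,h) \Bigr|^{2} \,dx\,dh .

When the velocity model is consistent with the data, images formed from different offsets agree and the penalty is small; because a functional of this kind varies smoothly with v, it can be minimized by gradient methods, which is the sense in which DSO avoids obstacles that defeat direct least-squares velocity estimation.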
Along with developing a theoretical understanding of this approach, researchers from this group have produced a prototype computer implementation that runs efficiently on a variety of platforms (from UNIX workstations to massively parallel supercomputers) with uniform interfaces.
Several important problems arise in petroleum resources management that are naturally posed as optimization problems. For instance, suppose an enhanced oil recovery (EOR) strategy is being designed for an oil reservoir. In this EOR strategy, CO2 might be injected into the reservoir at some wells to drive the oil toward other wells for extraction. A natural question arises: what is the best EOR strategy that can be designed? The meaning of "best" varies from situation to situation; roughly speaking, the best EOR strategy is the one that maximizes the amount of oil recovered subject to limits on cost and perhaps other economic and technological constraints. A similar optimization problem arises in reservoir modeling. Generally, there is incomplete information about physical properties such as the variation of rock permeability within the reservoir, yet such knowledge is crucial for accurately modeling the reservoir's behavior. The question then becomes: what is the best estimate of such reservoir properties?
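Both questions can be stated schematically as constrained optimization problems; the symbols below are introduced only for illustration. The EOR design problem, for example, takes the form

    \max_{q(\cdot)} \;\int_{0}^{T} \bigl(\text{oil production rate under } q\bigr)\,dt
    \quad \text{subject to} \quad
    R(y, q) = 0, \qquad \mathrm{cost}(q) \le C_{\max}, \qquad q_{\min} \le q \le q_{\max},

where q(t) collects the injection and extraction rates at the wells, y is the reservoir state, and R(y, q) = 0 stands for the reservoir flow equations tying the two together. The property-estimation problem has the same structure with the roles reversed: the unknown is a parameter field such as the permeability, and the objective measures the mismatch between observed well data and the data a reservoir simulation predicts for that parameter field.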
The solution of these problems in petroleum resources management is the work of the Reservoir Characterization group of the Geosciences Parallel Computation Project. The group's goal is to combine the developments in numerical optimization with the power of parallel computation. Researchers are developing computational methods that will improve upon the time-consuming optimization methods currently used in the oil industry.
The optimization problems that arise in oil reservoir management have enormous computational requirements. For instance, in determining the best EOR strategy, various sets of injection and extraction rates at the wells are examined to find the set that produces the best results. This requires repeatedly simulating the oil reservoir's response to the different injection and extraction rates. Only since the early 1980s has research in optimization yielded efficient methods for handling such large problems. At the same time, the development of parallel computation has provided machines on which these large-scale problems can be solved quickly and efficiently.
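As a minimal sketch of the parallelism involved, the fragment below evaluates a set of candidate injection/extraction schedules concurrently and keeps the best one. The simulate_recovery function is a hypothetical stand-in for a full reservoir simulation; it is not taken from any FPMPP code, and a toy response surface is used only so the example runs.

    from multiprocessing import Pool

    def simulate_recovery(rates):
        """Hypothetical placeholder for one reservoir simulation run.

        In practice this would launch a full simulation of the reservoir's
        response to the given injection/extraction rates and return the
        amount of oil recovered.
        """
        injection, extraction = rates
        # Toy concave response surface, used only to make the sketch runnable.
        return injection * extraction - 0.1 * (injection**2 + extraction**2)

    def best_schedule(candidates, workers=4):
        """Simulate each candidate rate schedule in parallel; return the best."""
        with Pool(workers) as pool:
            recoveries = pool.map(simulate_recovery, candidates)
        i_best = max(range(len(candidates)), key=recoveries.__getitem__)
        return candidates[i_best], recoveries[i_best]

    if __name__ == "__main__":
        rates = (1.0, 2.0, 3.0)
        candidates = [(inj, ext) for inj in rates for ext in rates]
        schedule, recovery = best_schedule(candidates)
        print("best schedule:", schedule, "simulated recovery:", recovery)

A production-scale study would replace the toy response with simulator runs distributed across the nodes of a parallel machine, and the exhaustive search over candidates with one of the large-scale optimization methods mentioned above.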