**Today's Topics:**

- David Young's Recovery
- E-Letter on Systems, Control, and Signal Processing
- Review of the Seventh Parallel Circus
- Finite Element Mesh Generator in MATLAB
- Large Dense Linear System Survey Report
- Fellowship in Computational Sciences at Sandia, Albuquerque
- PICL, a Portable Instrumented Communication Library
- Symposium on Parallel Optimization 2
- New Additions to NA List

From: Gene H. Golub <golub@na-net.stanford.edu>

Date: Sat, 21 Apr 1990 15:35:12 PDT

I spoke to David Young at his home on Friday. David recently had a

serious operation. His voice was very strong and he seemed very

cheerful and optimistic.

I'm sure he would be pleased to hear from you.

Here is his address and phone numbers.

Prof. David M. Young, Jr.

Ashbel Smith Professor, Math & Comp Cntr

young@cs.utexas.edu

Center for Numerical Analysis

RLM 13.150

The University of Texas at Austin

Austin, Texas 78712

Office Phone: (512) 471-1242

Home Phone: (512) 452-2966

Gene

------------------------------

From: Gene H. Golub <golub@na-net.stanford.edu>

Date: Sat, 21 Apr 1990 17:09:25 PDT

For those of you interested in System Science, there is a very nice

newsletter sent on a regular basis. It is known as

E-LETTER on Systems, Control, and Signal Processing.

The editors are

Bradley W. Dickinson

bradley@princeton.edu or bradley@pucc.bitnet

Eduardo D. Sontag

sontag@hilbert.rutgers.edu or sontag@pisces.bitnet.

A message to one of the editors will get you on the distribution list.

Gene Golub

------------------------------

From: Steven Kratzer <kratzer@super.org>

Date: Wed, 18 Apr 90 10:10:21 EDT

Review of the Seventh Parallel Circus

by Steven Kratzer

Supercomputing Research Center, Bowie, MD

The Seventh Parallel Circus was held on Friday and Saturday, March 30

and 31, at Stanford University. The Parallel Circus is an informal

gathering of researchers interested in parallel processing for

numerical computing. This event is held twice a year; past Circuses

were held at various locations in the Eastern US, but this time the

beautiful (and relatively warm) Stanford campus was chosen. The participants

represented a variety of campuses, companies and organizations from

throughout North America and Scandinavia.

The meeting was organized by Gene Golub of Stanford, and Steve

Hammond and Rob Schreiber of RIACS. About sixty people attended,

and there were around 20 talks (nominally 20 minutes each, with

plenty of time for discussions). The talks covered a wide spectrum,

from mathematics and graph theory to chip design,

and many of them contained brand-new

results not yet revealed to the world. Brief descriptions of (hopefully)

all of the talks follow.

A. Gerasoulis described a general method for partitioning task

graphs and mapping tasks to processors. C. Kuzmaul of MasPar Corp.

discussed related issues, but from a different viewpoint.

Specific performance results for the MasPar

machine (running dense LU factorization) were provided by Rob Schreiber.

Methods for sparse matrix factorization on another "massively parallel"

machine (the Connection Machine) were presented by J. Gilbert (Xerox

PARC) and S. Kratzer (Supercomputing Research Center).

O. McBryan of Colorado Univ. presented performance measurements

for the Evans & Sutherland ES-1 and the Myrias SPS-2.

Several talks dealt with parallel methods for solving PDEs.

S. McCormick (Univ. of Colorado) described an adaptive,

multilevel discretization method. S. Bowa also discussed

nonuniform grid generation, and focused on domain decomposition

techniques. P. Frederickson of RIACS

explained how to parallelize the multigrid solver, giving

a "superconvergent" algorithm that runs on the

Connection Machine. M. London of Myrias Research Corp.

described his implementations of direct and iterative PDE

solvers, which were developed for oil reservoir simulation.

D. Kincaid (U. Texas) described

ITPACKV, a new sparse-matrix software package for vector computers.

D. Bailey of NASA/Ames presented the "fractional Fourier transform,"

which generalizes the DFT to arbitrary resolution in frequency space,

and discussed parallel computations as well as applications for it.
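The exact formulation from the talk is not reproduced in this digest, but the fractional Fourier transform is commonly written as G_k = sum_j x_j exp(-2*pi*i*j*k*alpha), which reduces to the ordinary DFT when alpha = 1/m and samples the spectrum at finer spacing for other alpha. A minimal direct O(m^2) sketch of that sum (function name is mine):

```python
import cmath

def frft(x, alpha):
    """Direct O(m^2) fractional Fourier transform:
    G_k = sum_j x_j * exp(-2*pi*i*j*k*alpha).
    With alpha = 1/m this is the ordinary DFT; other values of
    alpha evaluate the spectrum at arbitrary frequency spacing."""
    m = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k * alpha)
                for j in range(m))
            for k in range(m)]
```

A fast algorithm (the point of the talk) would evaluate the same sum via FFT-based convolution rather than directly.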

O. Egecioglu (UC Santa Barbara) obtained good parallel speedup in computing

coefficients for rational interpolation.

F. Luk of Cornell showed how "algorithm-based fault tolerance" allows

us to check the output of a systolic array for arithmetic glitches;

he had been inspired by a previous Parallel Circus to work on this topic.
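As a rough illustration of the checksum idea behind algorithm-based fault tolerance (the function names and tolerance here are my own, not Luk's formulation): append a row of column sums to one factor of a matrix product, and the last row of the result must equal the column sums of the product, so an arithmetic glitch becomes detectable.

```python
def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

def checksum_matmul(A, B):
    """Multiply with an appended checksum row on A; verify that the
    product's last row matches the column sums of A@B.  A mismatch
    flags a fault in the arithmetic."""
    Ac = A + [[sum(col) for col in zip(*A)]]   # A augmented with checksum row
    Cc = matmul(Ac, B)
    C, check_row = Cc[:-1], Cc[-1]
    col_sums = [sum(col) for col in zip(*C)]
    ok = all(abs(s - c) < 1e-9 for s, c in zip(col_sums, check_row))
    return C, ok
```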

Several talks dealt with interesting applications for numerical computations.

J. Cavallaro of Rice Univ. is developing a systolic array for computing

the SVD, which will be used for robot control.

S. Barnard of SRI discussed the stereo image matching problem,

and a method named

Cyclops for solving it. D. Foulser of Yale gave an introduction to the

Human Genome Project, which endeavors to unravel the structure of human

DNA, and explained his algorithm for performing DNA sequence matching.

O. Tjorbjornsen of Univ. of Trondheim, Norway, described

a new hardware/software system (using 80186s in a hypercube) for

database processing.

Tony Chan of UCLA discussed the philosophy of the physics of

parallel machines.

J. Barlow (Penn State) described an incremental condition-number

estimator for sparse matrices.

Aside from the talks, which were very informal, the Circus provided the

opportunity for researchers to chat casually about the great issues of our time.

A reception at Gene Golub's house was a very enjoyable chance to

mingle, and the banquet at the Grand China restaurant was quite a treat!

The next Parallel Circus will take place Oct. 26 and 27, 1990,

in Toronto. Santa Barbara, CA was mentioned as a possible site for next Spring.

------------------------------

From: Jeffrey Kantor <jeff@iuvax.cs.indiana.edu>

Date: 22 Apr 90 15:57:14 GMT

I've been putting together a set of MATLAB tools for constructing a mesh

of triangular finite elements with linear shape functions. It's primarily

intended for teaching. But some of the folks around here seem to think

it's pretty snappy.

The basic idea is that you can sit with your mouse and put together

a 2-D triangular mesh in a few minutes, accurately placing nodes and

wiring them together as elements. An existing mesh can be edited with the

mouse. The 'snap-to' grid can be used to zoom in on any portion of the

mesh. All of this is done in pure MATLAB, so that it should :-) be portable.

I'm doing this on a Mac, and testing on SparcStations.
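Not part of Kantor's toolbox, but for readers unfamiliar with linear shape functions: on a triangle the three P1 shape functions have constant gradients, which is exactly the quantity an element-assembly routine needs from each mesh triangle. A small sketch of that computation (names are illustrative):

```python
def p1_gradients(p1, p2, p3):
    """Gradients of the three linear (P1) shape functions on the
    triangle with vertices p1, p2, p3, plus its signed area.
    For linear shape functions the gradients are constant:
    grad N_i = ((y_j - y_k), (x_k - x_j)) / (2*Area)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # 2*Area
    grads = [((y2 - y3) / area2, (x3 - x2) / area2),
             ((y3 - y1) / area2, (x1 - x3) / area2),
             ((y1 - y2) / area2, (x2 - x1) / area2)]
    return grads, area2 / 2.0
```

On the unit right triangle this recovers the gradients of N1 = 1-x-y, N2 = x, N3 = y.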

Before I put too much more time into it, though, I would like to know if there

are any other such MATLAB toolboxes out there. I don't want to reinvent

the wheel.

And I would also like to know if there is any general interest in such a

toolbox. Any input would be very much appreciated.

Jeff Kantor

Notre Dame

------------------------------

From: Alan Edelman <alan@math.mit.edu>

Date: Mon, 16 Apr 90 11:45:35 EDT

LARGE DENSE LINEAR SYSTEM SURVEY REPORT

Some months ago, I asked in the NA digest who was solving large

dense systems of equations, why they were being solved, how big n is,

and for comments about accuracy. In fact, I only received ten

nearly complete responses and several other partial responses,

but all were quite interesting. Here I will report what

I have learned.

There are clearly more people involved in solving large dense

systems than those who generously took the time to respond to me,

and I will mention some references given to me, which I myself have not

had the time to pursue. Furthermore, I suspect there may be uses in

economics, theoretical computer science, etc. but my inquiry never reached

people in these areas. Lastly, in this rapidly changing field there may

have been many new developments since the time of my survey. In other

words, there is a lot of room for a more in-depth study than is given here.

Most responders are solving boundary integral equations arising

from elliptic PDEs in 3-space. Various buzzwords used are

"boundary element methods," "method of moments," and "panel methods."

One specific application area mentioned a few times was radar cross

section analysis.

All but two of the main responses were from the US. The other two were

from Sweden and Switzerland. Some responders were at universities,

some at computer vendors, and others in the airline industry.

Clearly different people have different computing resources available.

I was not sure whether it was appropriate to preserve anonymity or

credit the respective researchers, and chose to err on the safe side

by not mentioning any particular names, companies, or institutions.

Only five responders said they are actually solving the equations

now. The responses were

1) 1000

2) 1026 but I know of people who have gone over 5000

3) the current upper limit in a reasonable length of time seems to be

somewhere between 20,000 and 40,000

4) 5000 (7000 is the probable maximum our system accepts)

5) 3000.

Thus the biggest number that I am aware of is 40,000.
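For perspective on these sizes (my own back-of-the-envelope arithmetic, not from the survey): a dense n-by-n system in 64-bit precision needs 8*n^2 bytes just to store the matrix, and LU factorization costs roughly (2/3)*n^3 operations.

```python
def dense_cost(n, bytes_per_word=8):
    """Storage (bytes) and approximate LU flop count for a dense
    n-by-n system: 8*n**2 bytes in 64-bit precision, ~(2/3)*n**3
    floating-point operations."""
    return bytes_per_word * n * n, (2 * n**3) // 3

mem, flops = dense_cost(40_000)   # n = 40,000: 12.8 GB of matrix storage alone
```

At n = 40,000 the matrix alone occupies 12.8 GB, which makes clear why the reported upper limits cluster where they do.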

(Numbers 1 through 5 are from the same authors above, respectively.)

Given faster computers and more memory, people would like to:

1) increase the size as much as possible

2) go higher than 1026 some day

3) have n as large as we can get it

4) (there is no need for more than 7000 for our present needs)

5) go higher

6) solve when n=20,000 to 100,000, and probably more for some applications

7) solve when n=20,000

8) solve a Helmholtz equation for n=100,000

9) solve for n=10,000

10) solve n=1,000,000.

Comments about accuracy (no particular order):

1) The desired accuracy is not clear. In the engineering community it

is probably 2-3 digits. I want more in order to better understand the

numerical method.

2) The methods are stable.

3) Most people, at least in the aerospace industry, use 64-bit precision,

... I have observed that there is a significant increase in usable

resolution over doing the same problems ... with 32-bit math.

By and large, we don't know how good our answers are. They seem to

be good enough for what we're doing, and certainly better than the

traditional methods of antenna engineering.

I think that our answers here ... are probably as good as you can get

with 64-bit machines ...

4) The kind of accuracy seems to be not very critical, ... for most practical

purposes, single precision arithmetic and up to five or four digits of

accuracy is sufficient.

5) The accuracy of the equation solver seldom causes any trouble.

There were almost no responses to my question about the condition

number of the problem.

General references listed were

1) "A survey of boundary integral equation methods for the numerical

solution of Laplace's Equation in three dimensions" by K.E. Atkinson

at the University of Iowa (paper)

2) "Field Computation by Moment Methods" by Roger Harrington

at Syracuse University. (book) (the "Bible" used by engineers)

3) Conference proceedings: "Topics in Boundary Element

Research" edited by C. Brebbia.

Other than the integral equation methods, some approximation

theorists are interested in large dense systems, but I am not aware

of anyone who is actually solving them today; this is sometimes

due to fears of highly ill-conditioned problems. Also

one person mentioned a large linear programming problem that was

800 by 12 million, where the 800x800 normal equations are formed and

solved.
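The 800-by-12-million case works because the normal-equations matrix A*A^T is only 800-by-800 no matter how many columns A has. A toy sketch of forming that small system (tiny dimensions stand in for 800 and 12 million; names are mine):

```python
def normal_equations(A, b):
    """Form the square system (A A^T) y = A b from a short, wide A.
    A A^T has dimension rows(A) x rows(A), independent of the
    (possibly huge) number of columns of A, so only the small
    system ever needs to be stored and factored."""
    m, n = len(A), len(A[0])
    AAT = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(m)]
           for i in range(m)]
    Ab = [sum(A[i][k] * b[k] for k in range(n)) for i in range(m)]
    return AAT, Ab
```

In practice the wide matrix would be streamed column by column so that it never resides in memory at once.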

The bottom line, so far as I can tell from the survey, is that

all large dense linear systems being solved today more or less come

from the same types of methods. The largest system that has been solved has

n=40,000. No one mentioned anything precise about the accuracy they

were obtaining now or the conditioning of the problem, but most users

seemed more or less satisfied anyway.

*** I would like to thank everybody for their interesting responses.

If the information given here is incomplete, then at least this note

could be a first step towards more complete information.

*** I would like to continue to keep track of the current record

of the biggest dense system ever solved; if anyone knows or hears

of n greater than 40,000 being solved, please let me know.

I will be happy to forward the largest to NANET, anonymously if requested.

Of course, any report of a large n should at least mention something about

accuracy.

Thanks again

Alan Edelman

CERFACS

Toulouse, France

na.edelman

------------------------------

From: Richard C. Allen <rcallen@cs.sandia.gov>

Date: Mon, 16 Apr 90 07:16:08 MDT

RESEARCH FELLOWSHIP IN COMPUTATIONAL SCIENCES

Mathematics and Computational Science Department

Sandia National Laboratories

Albuquerque, New Mexico

Sandia National Laboratories invites applications and

nominations of outstanding young scientists for its 1990 Research

Fellowship in Computational Sciences.

The Sandia Research Fellowship will provide an exceptional

opportunity for young scientists who are performing leading-edge

research in the computational sciences. Sandia's Mathematics and

Computational Science Department maintains strong research

programs in theoretical computer science, analytical and

computational mathematics, computational physics and engineering,

advanced computational approaches for parallel computers,

graphics, and architectures and languages. Sandia provides a

unique parallel computing environment, including a 1024-processor

NCUBE 3200 hypercube, a 1024-processor NCUBE 6400 hypercube, a

Connection Machine-2, and several large Cray supercomputers. The

successful candidate must be a U.S. Citizen, must have earned a

recent doctorate in the sciences and should have made strong

contributions to numerical computation or computer science.

The fellowship appointment is for a period of one year, and

may be renewed for a second year. It includes a highly

competitive salary, moving expenses, and a generous professional

travel allowance. Applications from qualified candidates, or

nominations for the Fellowship, should be addressed to Robert H.

Banks, Division 3531-29, Albuquerque, NM 87185. Applications

should include a resume, a statement of research goals, and the

names of three references. The closing date for applications is

May 31, 1990. The position will commence during 1990.

EQUAL OPPORTUNITY EMPLOYER M/F/V/H

U.S. CITIZENSHIP IS REQUIRED

------------------------------

From: Pat Worley <worley@yhesun.EPM.ORNL.GOV>

Date: Tue, 17 Apr 90 14:38:29 EDT

PICL, a portable instrumented communication library for multiprocessors,

is now available from netlib. PICL is a subroutine library that

implements a generic message-passing interface on a variety of

multiprocessors. Programs written using PICL routines instead of the

native commands for interprocessor communication are portable in the

sense that the source can be compiled on any machine on which the library

has been implemented. Correct execution is also a function of the

parameter values passed to the routines, but standard error trapping

is used to inform the user when a parameter value is not legal on a

particular machine. Programs written using PICL routines will also

produce timestamped trace data on interprocessor communication,

processor busy/idle times, and simple user-defined events if a few

additional statements are added to the source code. A separate facility

called ParaGraph can be used to view the trace data graphically.
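PICL's actual calling sequences are given in the user documentation bundled with the shar files; purely to illustrate the instrumentation idea (hypothetical names, not PICL's API), here is a sketch of a wrapper that records a timestamped trace record for each communication event, which a separate viewer could later replay:

```python
import time

class TracedChannel:
    """Hypothetical sketch (NOT PICL's real interface) of instrumented
    message passing: every send/receive appends a timestamped trace
    record, decoupled from the machine-dependent transport layer."""
    def __init__(self, node):
        self.node = node
        self.trace = []            # (timestamp, event, peer, nbytes)

    def _log(self, event, peer, nbytes):
        self.trace.append((time.time(), event, peer, nbytes))

    def send(self, dest, msg, transport=lambda d, m: None):
        self._log("send", dest, len(msg))
        transport(dest, msg)       # actual delivery is machine-dependent

    def recv(self, src, msg):
        self._log("recv", src, len(msg))
        return msg
```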

The PICL source is currently written in C, but Fortran-to-C interface

routines are supplied on those machines where that is feasible.

To create PICL, you need picl.shar, port.shar, and the appropriate

machine-dependent code. Unshar all three in the same (empty) directory.

A README file describing how to create the library is bundled with the

machine-dependent shar file.

The picl subdirectory on netlib currently contains the following shar files:

picl.shar low-level PICL routines

port.shar high-level PICL routines

ipsc2.shar machine-dependent routines for the iPSC/2, including

FORTRAN-to-C interface routines

ipsc860.shar machine-dependent routines for the iPSC/860, including

FORTRAN-to-C interface routines

ncube.shar machine-dependent routines for the NCUBE/3200, but

without any FORTRAN-to-C interface routines

documentation.shar LaTeX source of the working copy of the user

documentation. This will be issued as an ORNL

technical report.

Preliminary versions of PICL for the iPSC/1, the Cogent, the Symult S2010,

the Cosmic Environment, Linda, and Unix System V are also available.

Contact worley@msr.epm.ornl.gov for more information on these

implementations.

------------------------------

From: Robert Meyer <rrm@cs.wisc.edu>

Date: Tue, 17 Apr 90 15:31:22 -0500

SYMPOSIUM ON PARALLEL OPTIMIZATION 2

23 - 25 July 1990

Center for Parallel Optimization

Computer Sciences Department

University of Wisconsin

Madison, Wisconsin 53706

A 3-day symposium of invited presentations on

state-of-the-art algorithms and theory for the parallel solution

of optimization and related problems will be held at University

of Wisconsin at Madison with support from the AFOSR and in

cooperation with SIAM. (The SIAM National Meeting will be taking

place in Chicago the preceding week.) Emphasis will be on algorithms

implementable on parallel and vector architectures. Refereed

proceedings of the Symposium are planned as a special issue of

the new SIAM Journal on Optimization. Speakers include the following:

R. S. Barr, Southern Methodist University, Dallas

D. E. Brown, University of Virginia, Charlottesville

T. L. Cannon, Digital Equipment Corporation, Fairfax

R. De Leone, University of Wisconsin, Madison

J. E. Dennis, Rice University, Houston

L. C. W. Dixon, Hatfield Polytechnic, Hatfield

M. C. Ferris, University of Wisconsin, Madison

J. J. Grefenstette, Naval Research Laboratory, Washington

H. Muhlenbein, Gesellschaft fur Mathematik und Datenverarbeitung, S.Augustin

S. G. Nash, George Mason University, Fairfax

A. S. Nemirovsky, USSR Academy of Sciences, Moscow

Yu. E. Nestorov, USSR Academy of Sciences, Moscow

J. M. Ortega, University of Virginia, Charlottesville

K. Ritter, Technical University of Munich, Munich

J. B. Rosen, University of Minnesota, Minneapolis

R. Rushmeier, Rice University, Houston

A. Sameh, University of Illinois, Urbana

A. Sofer, George Mason University, Fairfax

P. Tseng, MIT, Cambridge

D. Van Gucht, Indiana University, Bloomington

L. T. Watson, VPI, Blacksburg

S. J. Wright, North Carolina State University, Raleigh

S. Zenios, University of Pennsylvania, Philadelphia

Although the symposium will consist of invited talks as

indicated above, registration (early registration by May 30: $50)

is open to all persons wishing to attend. A registration form and

information on lodging is deposited in netlib and may be obtained via

email to netlib (mail netlib@research.att.com)

with the request:

send SPO from meetings

For information beyond that in netlib, contact the SPO2 Secretary,

Laura Cuccia, or one of the organizers, O. L. Mangasarian or R. R. Meyer,

at the above address. Secretary: (608)262-0017,

email: laura@cs.wisc.edu, FAX (608)262-9777.

------------------------------

From: Gene H. Golub <golub@na-net.stanford.edu>

Date: Tue, 17 Apr 1990 23:01:29 PDT

I've added quite a few names in the last few months. Here are the additions

and changes. I'm sorry to say we have no way to delete names now (!); we can

only add names.

Gene

IMACS: beauwens@BBRNSF11.bitnet

aboba: Bernard.Aboba@bmug.fidonet.org

academia_sinica: bmadis%beijing@ira.uka.de

agui: AGUI@EMDCCI11.BITNET

alefeld: AE02%DKAUNI2.bitnet@forsythe.stanford.edu

allen: rcallen@cs.sandia.gov

avila: avila@sjsumcs.mathcs.sjsu.edu

awatson: MA05@primea.dundee.ac.uk

axelsson: axelsson@scri1.scri.fsu.edu

babaoglu: numerica%dm.unibo.it@Forsythe.Stanford.EDU

baden: baden@csam.lbl.gov

batterson: sb@mathcs.emory.edu

beauwens: ulbg005@BBRNSF11.bitnet

bjorck: akbjo@math.liu.se

bjorksten: horus!alpha!jimb@isl.Stanford.EDU

block: IEBLOCK@wharton.upenn.edu

bogle: ucecb01@euclid.ucl.ac.uk

booth: A.S.Booth@durham.ac.uk

bratvold: bratvold@norunit.bitnet

brodzik: D5B5DA13079FC04C9E@vms.cis.pitt.edu

brueggemann: 211045%DHHDKRZ5.BITNET@Forsythe.Stanford.EDU

bstewart: billy@cscadm.ncsu.edu

butcher: butcher@maths.aukuni.ac.nz

cai: zcai@bnlux0.bnl.gov

callahan: ERVIN%morekypr.BITNET@Forsythe.Stanford.EDU

canning: fxc@risc.com

castro: castro@kodak.com

cerlpdr: CERLPDR%TECHNION.BITNET@Forsythe.Stanford.EDU

chun: chunj@ra.crd.ge.com

dee: SEODP%HDEDH1.BITNET@Forsythe.Stanford.edu

degroen: tena2!pieter@relay.EU.net

denham: zydeco@mathworks.com

deuflhard: deuflhard@sc.zib-berlin.dbp.de

dfausett: dfausett@zach.fit.edu

dhough: hough@ips.ethz.ch

doi: doi@IBL.CL.nec.co.jp

donato: donato@math.ucla.edu

egecioglu: omer@cs.ucsb.edu

einfeld: Einfeld@sc.zib-berlin.dbp.de

elliott: mt27amg@vm.tcs.tulane.edu

eydeland: alex@smectos.gang.umass.edu

fausett: dfausett@zach.fit.edu

fhanson: u12688@uicvm.cc.uic.edu

fletcher: fletcher@mcs.dund.ac.uk

fletcher: r.fletcher@primea.dundee.ac.uk

flores: JFLORES@CHARLIE.USD.EDU

garratt: tjg%maths.bath.ac.uk@NSFnet-Relay.AC.UK

geller: a84687%tansei.cc.u-tokyo.ac.jp

gersztenkorn: zaxg04@gpsb.trc.amoco.com

goldberg: goldberg@parc.xerox.com

goldfarb: goldfarb@cunixd.cc.columbia.edu

grace: andy@mathworks.com

greenberg: greenber@titan1.math.umbc.edu

greenstadt: greensta@sjsumcs.SJSU.edu

griewank: griewank@antares.mcs.anl.gov

gropp: gropp@antares.mcs.anl.gov

grote: grote@CGEUGE11.bitnet

haber: hendrix@Sun.COM

hanson: hanson%imsl.uucp@uunet.uu.net

harvard: na-list@mgh.harvard.edu

hasegawa: hasegawa@fuis.fuis.fukui-u.ac.jp

hasson: mhasson@copper.colorado.edu

heinreichsberger: heinreich@EVEV88.una.at

herbin: raphaele@masg1.epfl.ch

hmarshall: idaho@caen.engin.umich.edu

hodel: S_HODEL@ducvax.auburn.edu

hull: tehull@na.toronto.edu

ikebe: ikebe@gama.is.tsukuba.ac.jp

iserles: ai%camnum@atmos-dynamics.damtp.cambridge.ac.uk

jea: fjut006%twnmoe10.bitnet@forsythe.stanford.edu

jeltsch: jeltsch@math.ethz.ch

jennings: munnari!madvax.maths.uwa.oz.au!les@uunet.UU.NET

jimack: MTHPJK@VAXA.Heriot-Watt.Ac.UK

jjuang: JJUANG%TWNCTU01.BITNET@Forsythe.Stanford.EDU

jmarshall: mv10801@uc.msc.umn.edu

jmorris: jmorris@mcs.dund.ac.uk

jouvelot: jouvelot@ensmp.fr

kahaner: kahaner@xroads.cc.u-tokyo.ac.jp

kamath: kamath@vino.enet.dec.com

kanada: kanada@tansei.cc.u-tokyo.ac.jp

kearfott: rbk@usl.edu

kershaw: maa013@central1.lancaster.ac.uk

koshy: KOSHY@msg.UCSF.EDU

kriegsmann: G_KRIEGSMANN@nuacc.acns.nwu.edu

kstewart: stewart@sdsu.edu

kuo: cckuo%portia.usc.edu@usc.edu

lai: chenlai%mtha.usc.edu@usc.edu

levesley: mty012@cck.coventry.ac.uk

leyk: leyk@macomb.tn.cornell.edu

lfausett: lfausett@zach.fit.edu

liebling: LIEBLING@ELMA.EPFL.CH

lindquist: lindquis@ams.sunysb.edu

little: jnl@mathworks.com

lwatson: na.watson@na-net.stanford.edu

maechler: maechler@stat.washington.edu

math_sinica: math@twnas886.bitnet

mathias: mathias@patience.stanford.edu

mcelwee: mcelwee@cup.portal.com

mcenery: SCMP6012%IRUCCVAX.UCC.IE@Forsythe.Stanford.edu

meir: AMEIR@ducvax.auburn.edu

mit: NUMANAL@NERUS.PFC.MIT.EDU

mitchell: wmitchell@atl.ge.com

nanderson: anderson@vino.enet.dec.com

niethammer: AF01@DKAUNI2.bitnet

orel: bojan.orel%uni-lj.ac.mail.yu@relay.cs.net

osborne: mro250@csc2.anu.OZ.au

oser: jpbm@athena.umd.edu

osterman: osterman@cmcl2.NYU.EDU

otoole: jotoole@relay.nswc.navy.mil

overton: overton@cs.nyu.edu

papamichael: Nicholas.Papamichael%brunel.ac.uk@NSFnet-Relay.AC.UK

parady: garth!apd!bodo@apple.com

patricio: FCMTJJJ@civc2.rccn.pt

pchin: pchin@watfun.waterloo.edu

pernice: usimap@sneffels.utah.edu

peterson: tep@mssun7.msi.cornell.edu

petiton: petiton@cs.yale.edu

pitsianis: schin@ibm.com

ptang: tang@mcs.anl.gov

ringertz: ht_rzu@ffa1.sunet.se

roberson: kyle%phoebus.hydro.pnl.gov@pnlg.pnl.gov

robin: robinf@etca.etca.fr

rothblum: rothblum@cancer.bitnet

rump: rump@tuhhco.rz.tu-harburg.de

rwright: wright@uvm.edu

saad: saad@hydra.riacs.edu

saied: saied@cs.uiuc.edu

sanz-serna: sanzserna@cpd.uva.es

saylor: saylor@inf.ethz.ch

scales: zjas23@trc.amoco.com

schlick: SCHLICK@ACF1.NYU.EDU

schumitzky: aschumitzky@gamera.usc.edu

seneta: seneta_e@maths.su.oz.au

shirakawa: shirakaw@ka.tsukuba.ac.jp

shoaff: wds@zach.fit.edu

shure: loren@mathworks.com

sli: sli@cs.purdue.edu

smale: smale@cartan.berkeley.edu

spaeht: 040624%DOLUNI1.BITNET@Forsythe.Stanford.edu

springer: springer-07@DCFCM5.DAS.NET

stenger: stenger@cs.utah.edu

tanabe: tanabe@sun312.ism.ac.jp

tanner: NA%AECLCR.BITNET@Forsythe.Stanford.EDU

tli: li@nsf1.mth.msu.edu

tuminaro: tuminaro@cs.sandia.gov

werner: or470@dbnuor1.bitnet

widlund: widlund@math.berkeley.edu

wimp: wimpjet@DUVM.bitnet

woltring: ELERCAMA@HEITUE5.bitnet

yun: dyun@wiliki.eng.hawaii.edu

zarantonello: sergioz@fai.fai.com

zha: zha@na-net.stanford.edu

zhang: zhang@umiacs.UMD.EDU

------------------------------

End of NA Digest

**************************
