NA Digest Sunday, May 12, 1991 Volume 91 : Issue 19

Today's Editor: Cleve Moler

Today's Topics:

     Hamilton-Jacobi Equation
     Looking for Gear's Method for MATLAB
     High Dimensional Integration
     Mathieu Functions
     SIAM Supercomputer Newsletter
     MGNet (Multigrid Network) Announcement
     Summary of Sparse Least Squares Responses

-------------------------------------------------------


From: Michael Cohen <mike@park.bu.edu>
Date: Fri, 10 May 91 12:03:32 -0400
Subject: Hamilton-Jacobi Equation

I would appreciate it if someone could send me some standard references
on numerically solving the nonlinear Hamilton-Jacobi equation, or some
pointers to recent surveys in the literature.
-mike

Michael Cohen
Center for Adaptive Systems
Boston University
111 Cummington Street, RM 242
Boston, Mass 02215
Phone: 617-353-7857
Email: mike@bucasb.bu.edu


------------------------------

From: Robert Kaminsky <rkaminsk@risky.ecs.umass.edu>
Date: 11 May 91 03:13:28 GMT
Subject: Looking for Gear's Method for MATLAB

I'm looking for Gear's method (or other robust differential equation
solver) for MATLAB (i.e. an M-file). Any help would be appreciated.
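
For reference, a minimal illustrative sketch of what a Gear-type (BDF)
solve of a stiff problem looks like; Python with SciPy is assumed here
purely as a stand-in, since this is not the requested M-file:

from scipy.integrate import solve_ivp

MU = 1000.0  # stiffness parameter for the Van der Pol oscillator

def vdp(t, y):
    # y0' = y1;  y1' = MU*(1 - y0^2)*y1 - y0  (stiff for large MU)
    return [y[1], MU * (1.0 - y[0] ** 2) * y[1] - y[0]]

# method="BDF" selects a backward-differentiation (Gear-type) formula
sol = solve_ivp(vdp, (0.0, 3000.0), [2.0, 0.0], method="BDF", rtol=1e-6)
print(sol.t.size, "steps; final state", sol.y[:, -1])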

Thanks...

Bob Kaminsky
University of Massachusetts, Amherst


------------------------------

From: Kosmo D. Tatalias <TATALIAS@A.ISI.EDU>
Date: Fri 3 May 91 09:54:48-EDT
Subject: High Dimensional Integration

I would like to know if anyone has developed routines for high
dimensional integration based on either of the following papers:

P. Zinterhof, "Gratis Lattice Points for Multi-Dimensional Integration,"
Computing, Vol. 38 (1987), 347-353.

Yosihiko Ogata, "A Monte Carlo Method for High Dimensional Integration,"
Num. Math., Vol. 55 (1989), 137-157.
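
For what it is worth, a minimal sketch of an equal-weight rule built on
such a point set; Python with NumPy is assumed, and the generator
theta_j = exp(1/j) is only my recollection of Zinterhof's "gratis"
construction, so check the paper before relying on it:

import numpy as np

def gratis_points(n, dim):
    # x_k = frac(k * theta), with theta_j = exp(1/j) -- assumed generator
    theta = np.exp(1.0 / np.arange(1, dim + 1))
    k = np.arange(1, n + 1)[:, None].astype(float)
    return np.modf(k * theta)[0]        # fractional parts, shape (n, dim)

def qmc_integrate(f, n, dim):
    # Equal-weight quadrature over [0,1]^dim using the point set above.
    return float(np.mean(f(gratis_points(n, dim))))

# Example: the integral of prod_j 2*x_j over [0,1]^10 is exactly 1.
print(qmc_integrate(lambda x: np.prod(2.0 * x, axis=1), 100000, 10))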

Kosmo Tatalias (tatalias@a.isi.edu)
Government Systems Incorporated
on-site at Center for Night Vision and Electro-Optics, Ft. Belvoir, VA
(703) 664-4913


------------------------------

From: Mark Coffey <S1.MWC@ISUMVS.IASTATE.EDU>
Date: Fri, 10 May 91 23:08:12 CDT
Subject: Mathieu Functions

I require software for the numerical evaluation of Mathieu
functions of order zero. This application arises from the
solution of the London equation in elliptic coordinates.
Any software, references, or other advice would be appreciated.
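
For reference, a minimal sketch of one route I am aware of: SciPy's
Mathieu routines (assumed available; its mathieu_modcem1/2 routines
cover the radial, modified functions that arise in elliptic
coordinates):

import numpy as np
from scipy.special import mathieu_a, mathieu_cem

q = 2.5                             # Mathieu parameter
print("a_0(q) =", mathieu_a(0, q))  # characteristic value for ce_0

# mathieu_cem takes its angle in degrees and returns (value, derivative)
x_deg = np.linspace(0.0, 180.0, 7)
ce0, dce0 = mathieu_cem(0, q, x_deg)
print(ce0)
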
Mark Coffey
Dept. of Physics
Iowa State University
515-294-6023
e-mail: s1.mwc@isumvs.iastate.edu


------------------------------

From: Anthony Skjellum <tony@helios.llnl.gov>
Date: Sun, 5 May 91 17:06:39 PDT
Subject: SIAM Supercomputer Newsletter

SIAM Activities Group on Supercomputing Newsletter: Upcoming Deadline

The deadline for news submissions and short articles of general interest
for the next SIAM/SIAG Supercomputing Newsletter is June 15. Please
send submissions electronically to tony@helios.llnl.gov (Anthony Skjellum),
or e-mail the same address for more information. Items of interest
include announcements of major massively parallel and vector software
packages to be made freely available, discussion of benchmarks,
conference announcements and calls for papers, and so on. Reviews of
recent conferences, workshops, etc., are also welcome.


------------------------------

From: Craig C. Douglas <douglas-craig@CS.YALE.EDU>
Date: Mon, 06 May 91 10:29:06 -0400
Subject: MGNet (Multigrid Network) Announcement

There is now a mailing list for multigrid issues. To get on the mailing list,
send electronic mail to

mgnet-requests@cs.yale.edu

with your name and Internet (preferably) or Bitnet address included. A more
detailed note will be sent back to you.

Also, please indicate whether you are interested in the multigrid bakeoff
and want to receive its messages daily. Unless you are really interested,
please wait until the messages make the weekly digests.


To post a message to the mgnet digest, send mail to

mgnet@cs.yale.edu

and I will put it in the next issue. Issues will typically be sent over
weekends, so getting a message in by Thursday evening is a big help to me.


To see what is stored here,

ftp casper.cs.yale.edu
<anonymous login>
cd mgnet
dir

Then look in the README.mgnet and INDEX.mgnet files. You can change
directory to whatever you like. Software packages have their own
directories, as does old mail.


Craig Douglas

Internet: douglas-craig@cs.yale.edu or bells@watson.ibm.com
Bitnet: bells at yktvmv


------------------------------

From: Tim Monks <tim@resmel.bhp.com.au>
Date: Sun, 12 May 91 20:12:03 EDT
Subject: Summary of Sparse Least Squares Responses

Ages ago I posted the following request to this mailing list:

tm- Are there any routines available that will solve least
tm- squares problems for rectangular sparse matrices ? I am
tm- trying to solve a regularized least squares problem for
tm- Au = b, where A is non-square, large (typically 16384 x 4096),
tm- and sparse. I am thinking that methods based on QR or SVD
tm- factorizations would be appropriate, as A is not well enough
tm- conditioned to solve the normal equations with LU
tm- decomposition...

I was pleasantly surprised by the volume of replies I received. I would
like to thank all those who took the time and effort to answer, and to
apologise for not getting replies out sooner. (A problem with
industrial research: he who pays the piper calls the tune; in other
words, you do what you must!) I have been working sporadically on this
problem for a while, and I shall summarize my current thinking below.
In it, I shall excerpt sections from the suggestions I received from
the following people:

Fernando Alvarado alvarado@eceserv0.ece.wisc.edu
Nelson Beebe beebe@math.utah.edu
Michael Berry berry@cis.uab.edu
Iain Duff isd@ib.rl.ac.uk
Tim Eakin tim@ccwf.cc.utexas.edu
John Gilbert gilbert@parc.xerox.com
Gene Golub golub@cholesky.stanford.edu
Chris Johnson crayjohn@cc.utah.edu
Linda Kaufman lck@research.att.com
Rob MacLeod macleod@vissgi.cvrti.utah.edu
Cleve Moler mathworks%moler@apple.com
Esmond Ng esmond@msr.epm.ornl.gov
Dan Pierce dpierce@jugeliang.boeing.com
Torgeir Rusten torgeir@ifi.uio.no
Michael Saunders mike@sol-michael.stanford.edu
Rudi Vankemmel vankemme@cs.kuleuven.ac.be


LSQR - Algorithm 583, ACM TOMS (& NAG routine F04QAF)
=====================================================

Many people (NB, GG, LK, CM, TR & MS) recommended this routine. It is
particularly convenient because you only have to supply a subroutine
that forms the matrix-vector product (the vector coming from LSQR);
matrix storage, or generation on the fly, is up to the user. I have had
a moderate degree of success with it, particularly on scaled-down
problems. However, I ran into convergence problems, with critical
slowing down as I increased the size of the matrices. As Michael
Saunders says:

ms- ...It is easy to use and requires very little storage. It may prove
ms- to be slow unless your problem is well-conditioned or has clustered
ms- singular values. You might be able to speed things up if there is
ms- obvious structure that you can use to cook up a preconditioner.
ms- e.g. if A has a significant block-diagonal part.

In my problem, A is rather randomly structured and singular, so I have
to damp heavily to get a solution. I have tried playing with
preconditioners and the ridge parameter, but because the solution was
strongly dependent on these, I decided that the most rigorous approach
was to re-formulate the geometry to make A better behaved - I suppose
this should have been obvious from the start.
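
To make the "matrix-vector product only" interface concrete, here is a
minimal sketch; it uses SciPy's port of the Paige & Saunders LSQR (an
assumption on my part - any LSQR implementation looks much the same),
with damp playing the role of the ridge parameter:

import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(0)
m, n = 200, 50
D = rng.standard_normal((m, n))    # stand-in for an implicitly known A

def matvec(v):                     # v -> A v (generate A on the fly here)
    return D @ v

def rmatvec(v):                    # v -> A' v
    return D.T @ v

A = LinearOperator((m, n), matvec=matvec, rmatvec=rmatvec)
b = rng.standard_normal(m)
x, istop, itn = lsqr(A, b, damp=1e-3)[:3]
print("stop flag", istop, "in", itn, "iterations;",
      "residual", np.linalg.norm(D @ x - b))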

I have had greater success solving my problem in a multigrid setting,
using a straightforward application of an FMV routine with relaxation
by LSQR. This required a simple modification to LSQR: the
initialisation u0 = 0 was replaced with u0 = the solution interpolated
from the coarser grid. With this modification, far fewer iterations
were required to reach the same type of convergence.
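
If you would rather not modify the solver's source, the same warm start
can be had from outside: solve for a correction du with A du = b - A u0
and set u = u0 + du. A minimal sketch (SciPy's lsqr assumed; note that
with damp > 0 the ridge term then penalises the correction rather than
u itself, which is not quite the same problem):

import numpy as np
from scipy.sparse.linalg import lsqr

def lsqr_warm_start(A, b, u0, **kwargs):
    # Relax from u0 (e.g. an interpolated coarse-grid solution) by
    # solving for the correction du in A du = b - A u0.
    du = lsqr(A, b - A @ u0, **kwargs)[0]
    return u0 + du

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 30))
u_true = rng.standard_normal(30)
b = A @ u_true
u0 = u_true + 0.1 * rng.standard_normal(30)  # pretend coarse-grid guess
u = lsqr_warm_start(A, b, u0)
print("residual", np.linalg.norm(A @ u - b))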

Other notes:

GG says that he has an elegant solution to the regularized problem with
a quadratic constraint; I have not chased this up.

LSQR is available through netlib, and the original references are:

%A Paige, C.C.
%A Saunders, M.A.
%D 1982
%T LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
%J ACM Transactions on Mathematical Software
%V 8
%N 1
%P 43-71

%A Paige, C.C.
%A Saunders, M.A.
%D 1982
%T Algorithm 583: LSQR: Sparse Linear Equations and Least Squares Problems
%J ACM Transactions on Mathematical Software
%V 8
%N 2
%P 195-209


Alternatives I haven't tried
============================

There were a lot of suggestions for software I haven't got (yet),
including SPARSPAK, HSL and MATLAB. I can't recommend anything, so I
shall just paraphrase the information I received.

Many people recommended direct QR methods. I have not tried any of
these, mainly because previous benchmarks on my machine (Silicon
Graphics 240GTX, 64M memory) showed that calculating my matrix on the
fly was better than sparse storage.

Although I am aware of the computational disadvantages of SVD
approaches to least squares compared with QR methods, I have looked at
the singular values of some of my matrices to get some idea of the
conditioning I should expect. As I don't have any sparse SVD codes, I
used a dense matrix routine on sizes my machine could handle (maximum
size 1024x1024, using KDSVDC from the CLASSPACK library, equivalent to
LINPACK's DSVDC optimised for the Silicon Graphics machine by Kuck &
Associates, kai@kai.com). I found, for instance, that for the
1024x1024 matrix the estimated rank was 811, and the significant
singular values ranged over (1e-5, 1.0], with ~60% clustered in the
decade (0.001, 0.01). I would be interested to learn whether people
consider such a range a disparate or a clustered distribution. The
indications were that as I increased the size of A, the range of the
singular values increased and their clustering decreased.
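
The inspection itself is easy to reproduce; a minimal sketch (NumPy
assumed in place of KDSVDC, and a synthetic rank-deficient matrix in
place of my data):

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((1024, 512)) @ rng.standard_normal((512, 1024))

s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
s = s / s[0]                            # scale so the largest is 1.0
rank_est = int(np.sum(s > 1e-5))        # count "significant" sv's
in_decade = np.sum((s > 1e-3) & (s <= 1e-2))
print("estimated rank:", rank_est,
      " fraction in (0.001, 0.01]:", in_decade / rank_est)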

Here, then, are the packages that were mentioned.

QR factorization using SPARSPAK
===============================

SPARSPAK is a collection of routines for solving sparse systems of
linear equations. It is divided into two portions: SPARSPAK-A deals
with sparse symmetric positive definite systems, and SPARSPAK-B handles
sparse linear least squares problems, including linear equality
constraints. For least squares you need both A and B.


The algorithm is essentially that of George and Heath, which reduces A
to triangular form one row at a time by using Givens rotations, after
performing a symbolic factorization to get a static data structure in
which R will appear. The columns of A can be preordered (e.g. by
minimum degree on A'*A) to make R sparser. Dense rows of A, which would
cause R to be full, can be withheld from the factorization and the final
solution updated to incorporate them at the end. The matrix is assumed
to be available row by row. Only the upper triangular factor is
maintained; the Givens rotations are not saved. This means the
right-hand side has to be available when the matrix is provided to the
package.
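
A dense toy version of the row-by-row reduction may make the idea
concrete (illustration only; the whole point of SPARSPAK-B is the
symbolic, sparse handling of R, which is not shown here):

import numpy as np

def merge_row(R, d, w, beta):
    # Rotate incoming row w (rhs entry beta) into triangular R; each
    # rotation is applied to the rhs as well and is then discarded.
    n = R.shape[0]
    for j in range(n):
        if w[j] != 0.0:
            r = np.hypot(R[j, j], w[j])
            c, s = R[j, j] / r, w[j] / r
            Rj = R[j, j:].copy()
            R[j, j:] = c * Rj + s * w[j:]
            w[j:] = -s * Rj + c * w[j:]
            d[j], beta = c * d[j] + s * beta, -s * d[j] + c * beta

rng = np.random.default_rng(3)
m, n = 8, 4
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
R, d = np.zeros((n, n)), np.zeros(n)
for i in range(m):                      # rows arrive one at a time
    merge_row(R, d, A[i].copy(), b[i])
x = np.linalg.solve(R, d)               # back-substitution: LS solution
print(np.allclose(A.T @ (A @ x), A.T @ b))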

The original reference is

%A George, Alan
%A Heath, Michael T.
%D 1980
%T Solution of sparse linear least squares problems using Givens rotations
%J Linear Algebra and its Applications
%V 34
%P 69-83

A more recent paper detailing some of the extensions to this algorithm
as well as some alternatives is

%A Heath, Michael T.
%D 1984
%T Numerical methods for large sparse linear least squares problems
%J SIAM Journal on Scientific and Statistical Computing
%V 5
%P 497-513

There's been quite a bit of literature on direct methods for sparse
least squares in the past 10 years, much of it by Bjorck, George, Heath,
and Ng.


SPARSPAK is licensed and distributed by the University of Waterloo,
Canada. Detailed information can be obtained from

Mr. Peter Sprung
Department of Computing Services
University of Waterloo
Waterloo, Ontario N2L 3G1
CANADA

(519) 885-1211 ext 3239

RM mentioned a license cost of $350 p.a. (per machine?)


Multifrontal QR
===============

Dan Pierce from Boeing Computing Services writes:
dp- I have spent the last couple of years working on a sparse QR
dp- factorization / least squares solution module based on the
dp- multifrontal paradigm (first proposed by Joseph Liu). It
dp- uses Householder transformations, which operate on locally dense
dp- submatrices. As a result it has been quite fast on CRAY machines.
dp- In fact we also have an experimental version which is capable of
dp- revealing the rank, for rank-deficient problems. (Dan mentions this
dp- should be faster than Givens rotation approaches.)


Iain Duff also mentions that CERFACS and Ake Bjorck are working on a
multifrontal QR code, which is still at an experimental stage.

SPARSPAK (C?) will also adopt a multifrontal paradigm soon, but still
using Givens rotations (primarily because of the advantage they felt
was possible in a parallel implementation).


Harwell Subroutine Library
==========================

Harwell has a code that is based on writing the least squares problem in
the augmented matrix form:

|I A| |r| |b|
| T | | | = | |
|A 0| |u| |0|

and then using MA27 to solve the symmetric indefinite linear system.
The MA27 code is not perfect on this kind of structure and they are
currently working on a new routine (MA47) to cater explicitly for this
kind of matrix.

HSL also has MA45 to solve the normal equations, provided stability is
not a problem.



MATLAB
======

Cleve Moler writes that the next release of MATLAB, due out this
summer, will have sparse least squares. It solves overdetermined least
squares problems by forming the augmented system involving the matrix:

|cI A| |r| |b|
| T | | | = | |
|A 0| |u| |0|

where c is an estimate of the smallest singular value of A. The sparse
augmented system is solved using minimum degree ordering and sparse
Gaussian elimination.
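
A minimal sketch of this augmented-system route (covering both the
Harwell and MATLAB descriptions above); SciPy's sparse LU is assumed
here in place of MA27 or MATLAB's minimum-degree elimination, and c is
a crude stand-in for a smallest-singular-value estimate:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve, norm as spnorm

rng = np.random.default_rng(4)
m, n = 300, 80
A = sp.random(m, n, density=0.05, random_state=rng, format="csr")
b = rng.standard_normal(m)

c = 1e-3 * spnorm(A)        # crude stand-in for the smallest-sv estimate
M = sp.bmat([[c * sp.eye(m), A],
             [A.T, None]], format="csc")
sol = spsolve(M, np.concatenate([b, np.zeros(n)]))
u = sol[m:]                 # least squares solution; sol[:m] is (b - A u)/c
print("normal-equation residual", np.linalg.norm(A.T @ (A @ u - b)))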



SVD packages
============

Mike Berry has some codes for this:

mb- I have developed several codes/methods for the sparse SVD: Lanczos
mb- (single-vector or block), subspace iteration, or trace minimization.
mb- I have used them for sparse SVD problems in information retrieval
mb- and seismic tomography... I have the codes written in Fortran-77
mb- and I'm hoping to start on a C version soon. I typically compute
mb- the 100-200 largest s-values and s-vectors for matrices having up to
mb- 30,000 rows and 20,000 columns. I have also looked at computing a
mb- few of the smallest s-values of large sparse matrices... I have run
mb- the codes on Cray-2, Cray Y-MP, Sun-4 workstations, Alliant FX/80,
mb- and Convex C-1.... All the methods are iterative and hence only
mb- require the user to supply kernels for multiplication by A and/or A'
mb- (A-transpose). I currently assume the sparse matrix is stored in
mb- Harwell/Boeing format but this is not really a requirement - just
mb- the standard I adopted. Optimal use of the methods depends on your
mb- constraints (CPU time, Memory, parameter estimates).
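
A minimal sketch of the same kind of computation with an off-the-shelf
routine (SciPy's svds, a Lanczos-type code, standing in for Mike's
Fortran-77 packages); only products with A and A' are used internally:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

rng = np.random.default_rng(5)
A = sp.random(30000, 20000, density=1e-4, random_state=rng, format="csr")

U, s, Vt = svds(A, k=10)            # ten largest singular triplets
print(np.sort(s)[::-1])             # svds returns them in ascending order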

Rudi Vankemmel writes:

rv- Normally we solve our quite well-conditioned system using a CGS
rv- routine with LU preconditioning, but this breaks down if the matrix
rv- becomes singular. So we are looking for the same kind of solver. We
rv- did have a look at the same packages you did, but the problem with
rv- these is always the lack of sparse storage. We thought about
rv- rewriting SVD for sparse storage, but the following problem arises:
rv- a sparse SVD would be interesting IF you can keep the same sparsity
rv- pattern for the decomposed arrays as for the original matrix A (the
rv- same is done with an incomplete LU decomposition). Otherwise you may
rv- as well give up sparse storage. Now this seems to be a problem: due
rv- to this incompleteness (certainly in LU decomposition) errors are
rv- introduced, and the algorithm even breaks down if the matrix is not
rv- diagonally dominant. We discussed the technique to use in SVD (this
rv- would be ISVD) with people working on this (e.g. Sabine Van Huffel
rv- from K.U.Leuven, Belgium, who posted the TLS software to netlib),
rv- and they state that these errors will affect the total decomposition
rv- and the answer. This is probably also the reason why the user of
rv- Sparse 1.3 can give a threshold parameter to allow for extension of
rv- the sparsity pattern in the LU decomposition, needing of course much
rv- more storage.

Chris Johnson says there is a (sparse?) SVD package from the University
of Tennessee:

cj- I do have a new SVD solver from the Univ. of Tennessee, which uses
cj- a modified Lanczos algorithm. The problem is, it accurately
cj- computes the largest and smallest singular values but doesn't do
cj- too well for the middle values. The author says he is working on
cj- it. It's a package called SVDPACK, and I can send it to you if you
cj- want to take a look at it.

I haven't got any more info on it.



Miscellaneous
=============

Fernando Alvarado writes:

fa- You may want to look at the Spring 90 ORSA Journal on Computing,
fa- where I describe SMMS, the Sparse Matrix Manipulation System, a
fa- collection of directly executable sparse matrix commands. SMMS
fa- includes routines for sparse Orthogonal Factorization, including
fa- many concepts not really described in the literature.

fa- However, my own preference for "state of the art" is the augmented
fa- matrix method, as Iain Duff mentioned to you. My impression is that
fa- the MA28 version is too general. Bill Tinney (the "father" of
fa- sparsity, at least in power system circles, with work dating to '62)
fa- and I recently published a paper on "Augmented Matrix Methods" for
fa- solving the sparse least squares problem (August issue of the IEEE
fa- Transactions on Power Systems) that you may want to glance at, and
fa- we have recently finished a book chapter describing some more ideas.

fa- My own view is that the blocking methods are THE way to go for
fa- sparse least squares problems: they are computationally superior to
fa- Orthogonal Factorization, and the sparsity preservation is much
fa- greater. The condition number is not quite as good, but it is not
fa- bad either, unless the original problem is ill-posed. They can
fa- handle equality constraints very easily.


Tim Eakin writes:

te- I think Prof. Owe Axelsson, Afdeling Wiskunde, Katholieke
te- Universiteit Nijmegen, in the Netherlands, has some. You might
te- check with him. Also, check out his article and the references
te- therein.

%A Axelsson, O.
%D 1988
%T Block preconditioning and domain decomposition methods
%J Journal of Computational and Applied Mathematics
%V 24
%P 1-18

Conclusions
===========

I have had some degree of success in solving my least squares problem
with LSQR, particularly when I put it into a multigrid setting.

Having now got a much better handle on the type of system I am dealing
with than when I first posed the question, I think the greatest
advantage would be gained by re-formulating the problem! I think that
the last word in all this turgid drivel should go to Linda Kaufman,
who gives quite excellent advice:

lk- Do you know whether your R from a QR factorization of your matrix
lk- will fill in? If so, you are facing 8 million words right there,
lk- and if this is too large for your system, then a direct method is
lk- out. The advantage of using my generalized Householder algorithm
lk- (see LAA 1987) is that the sparsity of the bottom portion of the
lk- matrix is preserved, i.e. there is no fill-in. Unfortunately, in
lk- your case that only covers 3/4 of your matrix, and the top 1/4
lk- filling in could kill you.
lk-
lk- In general I have found that looking at the zero structure of a
lk- matrix only gives you one handle on the problem. Often the problem
lk- has some underlying algebraic structure that can be used. For
lk- example, the matrix might consist of repeated submatrices or scalar
lk- multiples of repeated matrices. Thus a QR decomposition of one part
lk- of the problem might be used on another part as well. This has been
lk- a recurring theme as I have tackled queueing problems, pde
lk- problems, imaging problems, and underwater sound problems. Parts of
lk- the problem might be randomly sparse, but as a whole there is also
lk- some general structure. Unfortunately, there is no canned software
lk- that can capture this combination of structured randomness.

Dr Tim Monks
Image Processing & Data Analysis Group | (direct) (+61-3)566-7448
BHP Research - Melbourne Laboratories | (switch) (+61-3)560-7066
245 Wellington Rd, Mulgrave, 3170, | (fax) (+61-3)561-6709
AUSTRALIA | (internet) tim@resmel.bhp.com.au

------------------------------

End of NA Digest

**************************