The cheapest matrix operation is multiplication, by $A$, $A^T A$, or $A A^T$. Note that $A^T A x$ is computed as $A^T (A x)$, i.e., as one multiplication by $A$ and then one by $A^T$. There are two reasons not to form $A^T A$ itself: First, $A^T A$ can be much denser than $A$ and so more expensive to multiply by. Indeed, $A^T A$ can be dense even when $A$ has only one nonzero row.
Second, it can cost as much to form $A^T A$ as $n$ products $A^T (A x)$, which is often more than enough to compute the desired singular triplets.
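Both points are easy to see in a small numerical sketch (a hypothetical matrix, not from the text; NumPy stands in for a sparse package): one dense row of $A$ makes $A^T A$ completely dense, while the matrix-free product $A^T (A x)$ only ever touches the nonzeros of $A$.

```python
import numpy as np

# Hypothetical 100-by-50 matrix A: sparse except for one dense row.
# A^T A is then completely dense, but x -> A^T (A x) never needs it.
m, n = 100, 50
A = np.zeros((m, n))
A[0, :] = 1.0                                 # the single dense row
A[np.arange(1, n + 1), np.arange(n)] = 2.0    # a sparse diagonal band

AtA = A.T @ A
print(np.count_nonzero(A))     # 100 nonzeros in A      (2n)
print(np.count_nonzero(AtA))   # 2500 nonzeros in A^T A (n^2: fully dense)

# Matrix-free product: two multiplications using only A's nonzeros.
x = np.random.rand(n)
assert np.allclose(A.T @ (A @ x), AtA @ x)
```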
Similar comments apply to $A A^T$, except that since $A A^T$ is $m$-by-$m$ and $m \geq n$, $A A^T$ can be (arbitrarily) larger than $A^T A$.
Also, storage and manipulation of $n$-vectors can be (arbitrarily) cheaper than $m$-vectors. So in general it is better to use $A^T A$ instead of $A A^T$, and recover the left singular vectors $u_i$ from the eigenvectors $v_i$ of $A^T A$ (right singular vectors of $A$) by $u_i = A v_i / \sigma_i$. (But see the comments on shift-and-invert below for a case when $A A^T$ is much better than $A^T A$.)
One multiplication by $H(A)$ costs the same as one multiplication by either $A^T A$ or $A A^T$.
Now consider shift-and-invert, which requires computing an $LU$ or $LDL^T$ factorization of either $A^T A - \tau I$, $A A^T - \tau I$, or $H(A) - \tau I$. Here $\tau$ is a shift, or approximate eigenvalue of the matrix from which it is being subtracted.
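A minimal sketch of the idea (illustrative only; a dense solve stands in for the sparse factorization): inverse iteration with $A^T A - \tau I$ converges to the eigenpair of $A^T A$ whose eigenvalue $\sigma_i^2$ is closest to the shift $\tau$.

```python
import numpy as np

# Sketch of shift-and-invert on A^T A: repeatedly solving with
# (A^T A - tau*I) amplifies the eigenvector whose eigenvalue sigma_i^2
# lies nearest the shift tau.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 8))
sigmas = np.linalg.svd(A, compute_uv=False)   # for choosing a test shift
tau = sigmas[3] ** 2 + 1e-3                   # shift near sigma_4^2

M = A.T @ A - tau * np.eye(8)                 # would be factored once, sparsely
v = rng.standard_normal(8)
for _ in range(30):                           # inverse iteration
    v = np.linalg.solve(M, v)
    v /= np.linalg.norm(v)
sigma_found = np.sqrt(v @ (A.T @ (A @ v)))    # Rayleigh quotient gives sigma^2
assert np.isclose(sigma_found, sigmas[3])
```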
The cost of these factorizations depends strongly on the sparsity structure of the matrix. Depending on the dimensions and sparsity structure of $A$, it may be much cheaper to factorize one of $A^T A - \tau I$, $A A^T - \tau I$, or $H(A) - \tau I$ instead of the others. Here are some examples:
The above examples are chosen to represent extremes, where the behaviors of $A^T A$, $A A^T$, and $H(A)$ are all as different as possible. It is also assumed that the matrices being factored are well ordered; i.e., the rows and columns are ordered to minimize fill and operations during factorization. For the above examples, symmetric minimum-degree ordering was used.
In general, it is possible to compute the ordering and estimate the work and space required to factor $A^T A - \tau I$, $A A^T - \tau I$, and $H(A) - \tau I$ by a symbolic factorization, much more cheaply than by actually performing the factorizations themselves. See §10.3 for details.
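The flavor of a symbolic factorization can be sketched in a few lines (a deliberately simplified illustration, not the algorithm of §10.3): working only on the sparsity pattern, one can predict the fill of a Cholesky-like factor, and see how strongly the ordering matters, e.g. for an arrowhead matrix.

```python
# Simplified symbolic Cholesky: predict the factor's nonzero pattern from
# the matrix's pattern alone, with no floating-point work.
def symbolic_fill(pattern):
    """pattern[j] = set of row indices i > j with entry (i, j) nonzero."""
    n = len(pattern)
    lower = [set(s) for s in pattern]
    for j in range(n):
        rows = sorted(lower[j])
        if rows:
            p = rows[0]                              # parent column
            lower[p] |= {i for i in rows if i != p}  # fill passed to parent
    return lower

n = 6
# Arrowhead matrix with the dense row/column first: eliminating column 0
# first fills the factor completely ...
arrow = [set(range(1, n))] + [set() for _ in range(n - 1)]
print(sum(len(s) for s in symbolic_fill(arrow)))       # 15: fully dense

# ... but ordering the dense row/column last produces no fill at all.
arrow_last = [{n - 1} for _ in range(n - 1)] + [set()]
print(sum(len(s) for s in symbolic_fill(arrow_last)))  # 5: no fill
```

This is exactly why the well-ordered assumption above matters: the same matrix costs $O(n^3)$ or $O(n)$ to factor depending on the ordering.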