Since the eigenvectors of $A^TA$ are the right singular vectors,
and the eigenvectors of $AA^T$ are the left singular vectors,
it might seem natural to use $A^TA$ and $AA^T$ to find the
right and left singular vectors, respectively. But as we pointed
out earlier, left (or right) singular vectors for nonzero singular
values can be recovered from right (or left) singular vectors by
multiplying them by $A$ (or $A^T$), so either $A^TA$ or $AA^T$
can be used, whichever is cheaper.
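As a small illustration (a numpy sketch with arbitrary dimensions, not an algorithm prescribed by the text), the following computes the right singular vectors as eigenvectors of $A^TA$ and then recovers the left singular vectors for the nonzero singular values as $u_i = Av_i/\sigma_i$:

```python
# A numpy sketch (dimensions arbitrary): right singular vectors from the
# eigenvectors of A^T A, then left singular vectors u_i = A v_i / sigma_i.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))

lam, V = np.linalg.eigh(A.T @ A)          # eigenvalues ascending, V orthonormal
idx = np.argsort(lam)[::-1]               # reorder so singular values descend
sigma = np.sqrt(np.maximum(lam[idx], 0))  # singular values are sqrt(eigenvalues)
V = V[:, idx]

U = (A @ V) / sigma                       # one left singular vector per nonzero sigma

print(np.allclose(A, U @ np.diag(sigma) @ V.T))   # True: this is a thin SVD of A
```

Here the Gram matrix $A^TA$ is only $5 \times 5$, while $AA^T$ would be $8 \times 8$; this size difference is the sense in which one of the two choices can be cheaper.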
The eigenvalues of $A^TA$ and $AA^T$ are the squares of the
singular values of $A$.
In contrast, the eigenvalues of
$H(A) \equiv \begin{bmatrix} 0 & A^T \\ A & 0 \end{bmatrix}$
are the positive and negative singular values of $A$ (as well as
$m-n$ additional zero eigenvalues when $A$ is $m$-by-$n$ with
$m > n$). Since the speed with which all algorithms
converge depends on the distribution and location of the
eigenvalues, there are significant differences between
$A^TA$ and $AA^T$ on the one hand, and $H(A)$ on the other hand.
$A^TA$ and $AA^T$ are appropriate for computing the largest singular values,
since squaring keeps the largest singular values largest, and many of
the algorithms of Chapter 4 converge most easily to the
largest eigenvalues. Indeed, squaring increases any gaps between
the largest singular values and the rest of the spectrum,
accelerating convergence.
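A small numerical sketch (the matrix and its spectrum are contrived for illustration, and power iteration stands in for the methods of Chapter 4) of how squaring widens the gap: with $\sigma_1 = 3$ and $\sigma_2 = 1$, power iteration applied to $A^TA$ converges at the squared rate $(\sigma_2/\sigma_1)^2 = 1/9$ per step instead of $\sigma_2/\sigma_1 = 1/3$:

```python
# A numpy sketch of gap amplification: build A with known singular values
# sigma_1 = 3 and sigma_2 = 1, then run power iteration on A^T A.
import numpy as np

rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.standard_normal((30, 30)))
V, _ = np.linalg.qr(rng.standard_normal((30, 30)))
sig = np.concatenate([[3.0], np.linspace(0.1, 1.0, 29)])
A = U @ np.diag(sig) @ V.T               # A with known singular values

v = rng.standard_normal(30)
for k in range(20):
    v = A.T @ (A @ v)                    # one power step on A^T A
    v /= np.linalg.norm(v)
print(abs(np.linalg.norm(A @ v) - 3.0))  # ~1e-16: converged to sigma_1 = 3
```

After 20 steps the error is of order $(1/9)^{20}$, already below machine precision, whereas a rate of $1/3$ would leave roughly ten decimal digits still to gain.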
On the other hand, the smallest singular values are squared too.
In the most extreme case, a singular value near the square root
of machine precision $\sqrt{\varepsilon}$ turns into one near
$\varepsilon$ after squaring, so its value may be entirely
obscured by rounding errors.
Also, small singular values that are clustered appear even more
clustered after squaring. In other words, getting the smallest
singular values can be challenging.
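The following sketch (with a contrived test matrix) shows the loss in practice: a singular value near $\sqrt{\varepsilon}$ is computed accurately by an SVD of $A$, but is reduced to roundoff-level noise when recovered from the eigenvalues of $A^TA$:

```python
# Sketch of the accuracy loss: a singular value near sqrt(machine epsilon)
# is squared down to machine-epsilon size in A^T A, where roundoff swamps it.
import numpy as np

eps = np.finfo(float).eps                # about 2.2e-16
sigmas = np.array([1.0, np.sqrt(eps)])   # second singular value ~ 1.5e-8

rng = np.random.default_rng(2)
Q1, _ = np.linalg.qr(rng.standard_normal((6, 2)))
Q2, _ = np.linalg.qr(rng.standard_normal((2, 2)))
A = Q1 @ np.diag(sigmas) @ Q2.T          # A with known singular values

# Via the SVD of A: both singular values come out accurately.
print(np.linalg.svd(A, compute_uv=False))

# Via eigenvalues of A^T A: the small eigenvalue sits at roundoff level,
# so the recovered square root has few (if any) correct digits.
lam = np.linalg.eigvalsh(A.T @ A)
print(np.sqrt(np.maximum(lam, 0)))
```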
$H(A)$ does not square singular values, so the problems just mentioned
do not arise. On the other hand, since the eigenvalues of $H(A)$
are plus and minus the singular values of $A$, tiny singular values
of $A$ lie near the middle of the spectrum of $H(A)$. These interior
eigenvalues are the hardest to compute and generally require
shift-and-invert or Jacobi-Davidson to find.
In summary, the most accurate (but most expensive) way to find the
smallest singular values is to use shift-and-invert on $H(A)$.
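As a rough sketch of that approach (using scipy's eigsh, whose sigma parameter enables shift-and-invert; the test matrix is arbitrary), one can form $H(A)$ explicitly and ask for the eigenvalues nearest zero:

```python
# A sketch of shift-and-invert on H(A) = [[0, A^T], [A, 0]] using scipy's
# ARPACK wrapper; the matrix here is random and purely illustrative.
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(3)
n = 40
A = rng.standard_normal((n, n))   # square A keeps H(A) nonsingular,
                                  # so the shift sigma = 0 can be factored

H = np.block([[np.zeros((n, n)), A.T],
              [A, np.zeros((n, n))]])   # symmetric, eigenvalues +/- sigma_i

# Shift-and-invert about zero: the iteration works with (H - 0 I)^{-1},
# so the eigenvalues of H nearest the shift converge first and fastest.
vals = eigsh(H, k=4, sigma=0.0, which='LM', return_eigenvectors=False)
print(np.sort(np.abs(vals)))                    # two smallest sigmas, each twice
print(np.linalg.svd(A, compute_uv=False)[-2:])  # agrees with a direct SVD
```

Each small singular value appears twice in the output, once as $+\sigma_i$ and once as $-\sigma_i$, reflecting the paired spectrum of $H(A)$.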