As described in §2.4.7, there are several problems closely related to the SVD whose solutions we describe here. This list is not exhaustive, because low-rank approximations to matrices (for which the "optimal" one is supplied by the truncated SVD) are very widespread in applications. Depending on the application, it may be appropriate to use an SVD algorithm described here or another low-rank approximation.

- The *quotient or generalized SVD* of $A$ and $B$ is defined as follows. Suppose first that $A$ is $m$ by $n$ and $B$ is $n$ by $n$ and nonsingular. Let the SVD of $AB^{-1}$ be $U \Sigma V^T$. Then we may also write two equivalent decompositions of $A$ and $B$ as $A = U \Sigma_A X$ and $B = V \Sigma_B X$, where $X$ is $n$ by $n$ and nonsingular, $\Sigma_A$ is $m$ by $n$, and $\Sigma_B$ is $n$ by $n$, with $\Sigma_A^T \Sigma_A + \Sigma_B^T \Sigma_B = I$. Then $\Sigma = \Sigma_A \Sigma_B^{-1}$. Writing $\Sigma_A = \mathrm{diag}(\alpha_1,\dots,\alpha_n)$ and $\Sigma_B = \mathrm{diag}(\beta_1,\dots,\beta_n)$, the values $\sigma_i = \alpha_i / \beta_i$ are called the *generalized singular values* of $A$ and $B$. Software using transformation methods (i.e., suitable for dense matrices) is available in LAPACK driver routine `xGGSVD`, for the more general case of $B$ being singular or $p$ by $n$. It is also available as the MATLAB command `gsvd`. No ScaLAPACK software is available. The advantage of using these algorithms is accuracy only; that is, when $B$ is ill-conditioned they can be much more accurate than simply forming $AB^{-1}$ and computing its SVD. But they are also rather slower, so if $B$ is well-conditioned, it is better to just form $AB^{-1}$ and compute its SVD. When $A$ and $B$ are large and sparse and $B$ is square, nonsingular, and can be factored, we recommend applying the SVD algorithms discussed above that require only multiplication by the product $AB^{-1}$ and its transpose.

  If $B$ is not square, we recommend finding the eigenvalues and eigenvectors of the generalized Hermitian eigenvalue problem $A^T A x = \lambda B^T B x$, using techniques from Chapter 5. This is because the eigendecomposition of this pencil follows from $A^T A = X^T \Sigma_A^T \Sigma_A X$ and $B^T B = X^T \Sigma_B^T \Sigma_B X$, so that its eigenvalues are the squares $\sigma_i^2$ of the generalized singular values.
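As an illustrative sketch of the equivalence just described (random test matrices; this is not the `xGGSVD` algorithm, which avoids forming $AB^{-1}$ for accuracy reasons), the generalized singular values can be obtained either from the SVD of $AB^{-1}$ or from the eigenvalues of the pencil $(A^T A, B^T B)$:

```python
# Sketch: two ways to get the generalized singular values of (A, B)
# when B is square and nonsingular; matrices are made up for illustration.
import numpy as np
from scipy.linalg import eigh, solve, svdvals

rng = np.random.default_rng(0)
m, n = 6, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, n)) + 3 * np.eye(n)   # keep B well-conditioned

# 1) Dense route: form A B^{-1} (via a solve) and take its singular values.
sigma = svdvals(solve(B.T, A.T).T)                # singular values of A B^{-1}

# 2) Eigenvalue route: the pencil (A^T A, B^T B) has eigenvalues sigma_i^2.
lam = eigh(A.T @ A, B.T @ B, eigvals_only=True)   # ascending order
sigma_eig = np.sqrt(np.maximum(lam, 0.0))[::-1]   # descending, like svdvals

print(np.allclose(sigma, sigma_eig))
```

As the text warns, route 1 is accurate only because this $B$ is well-conditioned; for ill-conditioned $B$ the transformation-based routines are preferable.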

- Yet *more generalized SVDs* can be defined for $A^T B$ instead of $AB^{-1}$ (the *product SVD*), or indeed arbitrary products of the form $A_1^{\pm 1} A_2^{\pm 1} \cdots A_k^{\pm 1}$. No specialized software for these cases is available in LAPACK, ScaLAPACK, or MATLAB. So in the dense case one would form these products explicitly (although algorithms have been proposed that avoid this [54,3,53]), and in the sparse case one would multiply by them without forming them.

- Finding the $x$ that minimizes $\|Ax - b\|_2$ is the
*linear least squares problem*. Suppose $A$ is $m$ by $n$ with $m \geq n$; then we say that the least squares problem is *overdetermined*. If $\Sigma$ is nonsingular and $A = U \Sigma V^T$ is the thin SVD of $A$, then the solution is $x = V \Sigma^{-1} U^T b$. It is generally cheaper to either solve the normal equations $A^T A x = A^T b$ or use the QR decomposition $A = QR$ to compute $x$, but when $A$ is badly conditioned, i.e., $\sigma_{\max}(A)/\sigma_{\min}(A) \gg 1$, the SVD can be much more accurate. In particular, when $A$ is very ill-conditioned one typically uses the truncated SVD of $A$ instead of the thin SVD: $x = \sum_{\sigma_i > \mathrm{tol}} \frac{u_i^T b}{\sigma_i} v_i$.

  It is not necessary to compute the thin or truncated SVD of $A$ explicitly to solve the least squares problem. For dense problems, LAPACK driver routine `xGELSD` is the method of choice, using divide-and-conquer to solve the least squares problem quickly without forming an explicit thin SVD. In ScaLAPACK there is only a routine for computing a thin SVD, `PxGESVD`, which could then be used to solve the least squares problem. For sparse problems, we can use an approximate factorization $A \approx U_k \Sigma_k V_k^T$ derived from equations (6.7) or (6.8) to get $x \approx V_k \Sigma_k^{-1} U_k^T b$.
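A minimal NumPy sketch of the least squares formulas above, on made-up random data (`xGELSD` reaches the same solution without ever forming the SVD explicitly):

```python
# Sketch: least squares via the thin SVD and via a truncated SVD.
import numpy as np

rng = np.random.default_rng(1)
m, n = 8, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Thin SVD A = U Sigma V^T (U is m-by-n here); solution x = V Sigma^{-1} U^T b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)

# Agrees with a standard least squares solver on this well-conditioned A.
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x, x_ref))

# Truncated SVD: keep only the terms with sigma_i above a tolerance. For a
# well-conditioned A every sigma_i survives and x is unchanged; for an
# ill-conditioned A the dropped terms regularize the solution.
tol = 1e-10 * s[0]
k = int(np.sum(s > tol))
x_trunc = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

The truncated sum is exactly the formula $x = \sum_{\sigma_i > \mathrm{tol}} (u_i^T b / \sigma_i)\, v_i$, written with matrix slices instead of an explicit loop.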