Here we elaborate on how condition numbers are used
to get error bounds. For simplicity, suppose we
are just trying to evaluate a scalar function $f(x)$,
using an algorithm $\mathrm{alg}(x)$, and we want to bound
the error $|\mathrm{alg}(x) - f(x)|$. Further, suppose that we can show
that
\[
  \mathrm{alg}(x) = f(x + \delta)
\]
for some ``small'' $\delta$.
If the algorithm enjoys this property, that it
gets exactly the right answer $f(x+\delta)$ for a
slightly perturbed argument $x+\delta$, then the
algorithm is called numerically stable,
or just stable for short. In this case
we can estimate the error as follows:
\[
  |\mathrm{alg}(x) - f(x)| = |f(x+\delta) - f(x)| \approx |f'(x)| \cdot |\delta| .
\]
In other words, the error is approximately bounded by the condition number
$|f'(x)|$ times the backward error $|\delta|$.
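To see the estimate in action, here is a minimal Python sketch (our own illustration; the particular function $f$ and the perturbation $\delta$ are arbitrary choices) that compares the actual error $|\mathrm{alg}(x) - f(x)|$ with the first-order estimate $|f'(x)|\,|\delta|$, where by construction $\mathrm{alg}(x) = f(x+\delta)$:
\begin{verbatim}
# Minimal illustration of the first-order error estimate.
# We pretend alg(x) = f(x + delta) for a known small delta and
# compare |alg(x) - f(x)| with |f'(x)| * |delta|.
import math

def f(x):
    return math.exp(math.sin(x))                  # sample scalar function f

def fprime(x):
    return math.cos(x) * math.exp(math.sin(x))    # its derivative

x = 1.2
delta = 1e-8                        # a "small" backward error, chosen by hand
alg_x = f(x + delta)                # algorithm = exact f at perturbed argument

actual_error = abs(alg_x - f(x))
estimate     = abs(fprime(x)) * abs(delta)   # condition number times backward error

print(f"actual error    = {actual_error:.3e}")
print(f"estimated bound = {estimate:.3e}")
\end{verbatim}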
The same approach applies to getting error bounds for eigenproblems. In this case $x$ would be a matrix $A$, $x+\delta$ would be a perturbed matrix $A+E$, and $f$ could be an eigenvalue, a set of eigenvalues, a set of eigenvalue/eigenvector pairs, or other quantities. Furthermore, $|\delta|$ would be replaced by a norm of the matrix $E$, such as $\|E\|_2$.
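To give one concrete instance (a standard first-order perturbation result, stated here only for illustration): if $\lambda$ is a simple eigenvalue of $A$ with unit right and left eigenvectors $x$ and $y$, then
\[
  |\lambda(A+E) - \lambda(A)| \;\le\; \frac{\|E\|_2}{|y^* x|} + O(\|E\|_2^2),
\]
so the condition number is $1/|y^* x|$ and the backward error is $\|E\|_2$.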
So to get an error bound, we need both a condition number and a bound on the backward error $E$. There are two ways to get a bound on $\|E\|$. The first way is to prove that the algorithm always has a small backward error $E$. For the transformation methods discussed in this book, this is the case, with the norm of the backward error matrix $E$ roughly bounded by machine precision $\epsilon$ times the norm of the input matrix $A$, i.e., $\|E\| = O(\epsilon)\,\|A\|$. This kind of guaranteed stability does not hold for iterative methods.
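As a rough numerical check of this kind of bound (a sketch of our own; the use of NumPy's LAPACK-based routine \texttt{eigh} on a random Hermitian matrix is an illustrative assumption, not a method singled out by the text), one can compare the residual of a computed eigendecomposition with machine precision:
\begin{verbatim}
# Rough check of backward stability for a transformation-type eigensolver.
# We use numpy.linalg.eigh on a random Hermitian matrix (our own choice).
import numpy as np

rng = np.random.default_rng(0)
n = 200
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (B + B.conj().T) / 2                 # random Hermitian test matrix

w, Q = np.linalg.eigh(A)                 # A Q = Q diag(w) in exact arithmetic

residual      = np.linalg.norm(A @ Q - Q * w)        # ||A Q - Q Lambda||_F
orthogonality = np.linalg.norm(Q.conj().T @ Q - np.eye(n))
eps = np.finfo(A.dtype).eps

print("||AQ - Q Lambda|| / ||A|| =", residual / np.linalg.norm(A))
print("||Q*Q - I||               =", orthogonality)
print("machine precision eps     =", eps)
# Both quantities should be a modest multiple of eps, consistent with a
# backward error of size O(eps) ||A||.
\end{verbatim}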
For iterative methods, the story is more complicated. Here is a sketch, with the details left to Chapters 4 through 9. Suppose $\tilde\lambda$ and the unit vector $\tilde x$ form an approximate eigenvalue/eigenvector pair, that is, that the residual vector $r = A\tilde x - \tilde\lambda \tilde x$ is small. Then it is easy to show that a smallest matrix $E$ such that $(A+E)\tilde x = \tilde\lambda \tilde x$, i.e., such that $\tilde\lambda$ and $\tilde x$ are an exact eigenvalue/eigenvector pair of $A+E$, is $E = -r\tilde x^*$, with norm $\|E\|_2 = \|r\|_2$. So to bound the backward error, all we have to do is compute the residual 2-norm $\|r\|_2 = \|A\tilde x - \tilde\lambda\tilde x\|_2$. Thus bounding the backward error for a single eigenvalue/eigenvector pair is very easy. The story is more complicated for a set of eigenvalue/eigenvector pairs $(\tilde\lambda_i, \tilde x_i)$, $i = 1,\dots,m$. Even if each residual $\|A\tilde x_i - \tilde\lambda_i \tilde x_i\|_2$ is small, that does not mean that the $(\tilde\lambda_i, \tilde x_i)$ are approximations of different eigenpairs (some may approximate the same true eigenpair), or that the eigenvectors are orthogonal when they are supposed to be (when $A$ is Hermitian), or any other desirable property. Some iterative methods are more effective than others at guaranteeing such properties (for example, see the implicitly restarted Lanczos and Arnoldi methods described in §4.5 and §7.6).
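The following Python sketch (our own illustration; the test matrix and the crude power-method iteration used to produce an approximate pair are arbitrary choices) carries out exactly this construction: it forms $r$ and $E = -r\tilde x^*$ and verifies that $(A+E)\tilde x = \tilde\lambda\tilde x$ with $\|E\|_2 = \|r\|_2$:
\begin{verbatim}
# Single-eigenpair backward error: given (lam, x) with ||x||_2 = 1, form
# r = A x - lam x and E = -r x^*; then (A + E) x = lam x exactly and
# ||E||_2 = ||r||_2.  The matrix and the approximate pair (a few power
# iterations) are arbitrary choices for this illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 100
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric test matrix

x = rng.standard_normal(n)             # crude approximate eigenvector
for _ in range(20):                    # a few power-method steps
    x = A @ x
    x /= np.linalg.norm(x)
lam = x @ (A @ x)                      # Rayleigh quotient (||x||_2 = 1)

r = A @ x - lam * x                    # residual vector
E = -np.outer(r, x)                    # backward error matrix E = -r x^*

print("||r||_2            =", np.linalg.norm(r))
print("||E||_2            =", np.linalg.norm(E, 2))
print("||(A+E)x - lam x|| =", np.linalg.norm((A + E) @ x - lam * x))
\end{verbatim}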
The reader is referred to the textbooks by Golub and Van Loan [198] and Demmel [114] for an introductory treatment of numerical stability. The books by Higham [228] and Stewart and Sun [425] are excellent sources for the advanced study of numerical stability and conditioning for solving linear systems of equations and eigenvalue problems, respectively.