The error bounds discussed in this section are subject to floating point errors, most of which are innocuous, but which deserve some discussion.
The infinity norm $\|x\|_\infty = \max_j |x_j|$ requires the fewest floating point operations to compute, and cannot overflow or cause other exceptions if the $x_j$ are themselves finite. On the other hand, computing $\|x\|_2 = (\sum_j |x_j|^2)^{1/2}$ in the most straightforward manner can easily overflow or lose accuracy to underflow even when the true result is far from either the overflow or underflow thresholds. For this reason, a careful implementation for computing $\|x\|_2$ without this danger is available (subroutine snrm2 in the BLAS [72] [144]), but it is more expensive than computing $\|x\|_\infty$.
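To make the hazard concrete, here is a minimal sketch of the scaling idea behind such careful implementations: divide by the largest magnitude before squaring, so no intermediate can overflow or be squared into the underflow range. The Python/NumPy setting and the function name are assumptions for illustration only; this is not the actual snrm2 code.

```python
import numpy as np

def nrm2_scaled(x):
    # Sketch of the scaling trick used by careful two-norm routines
    # such as BLAS snrm2 (illustrative only, not the reference code).
    amax = np.max(np.abs(x))
    if amax == 0.0:
        return 0.0
    # Entries of x/amax lie in [-1, 1], so their squares cannot overflow,
    # and small entries are not squared into the underflow range.
    return amax * np.sqrt(np.sum((x / amax) ** 2))

x = np.full(3, 1e200)                  # true 2-norm is sqrt(3)*1e200, well below overflow
with np.errstate(over="ignore"):
    naive = np.sqrt(np.sum(x ** 2))    # (1e200)^2 overflows to inf
print(naive)                           # inf
print(nrm2_scaled(x))                  # approximately 1.732e+200
```

The extra pass over the data to find the maximum is one reason the safe computation costs more than $\|x\|_\infty$ alone.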
Now consider computing the residual $r^{(i)} = A x^{(i)} - b$ by forming the matrix-vector product $A x^{(i)}$ and then subtracting $b$, all in floating point arithmetic with relative precision $\varepsilon$. A standard error analysis shows that the error $\delta r^{(i)}$ in the computed $r^{(i)}$ is bounded by $\|\delta r^{(i)}\| \leq O(\varepsilon) (\|A\| \cdot \|x^{(i)}\| + \|b\|)$, where $O(\varepsilon)$ is typically bounded by $n \varepsilon$, and usually closer to $\sqrt{n}\,\varepsilon$. This is why one should not choose $\mathtt{stop\_tol} \leq \varepsilon$ in Criterion 1, and why Criterion 2 may not be satisfied by any method.
This uncertainty in the value of $r^{(i)}$ induces an uncertainty in the error $e^{(i)} = A^{-1} r^{(i)}$ of at most $O(\varepsilon) \, \|A^{-1}\| \, (\|A\| \cdot \|x^{(i)}\| + \|b\|)$.
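The effect is easy to observe. The following sketch (Python/NumPy; the random test problem and variable names are invented for the example) forms the residual in single precision, recomputes it with double precision accumulation as a reference, and compares the difference against the $\sqrt{n}\,\varepsilon$ and $n\varepsilon$ multiples of $\|A\| \cdot \|x^{(i)}\| + \|b\|$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n)).astype(np.float32)
b = rng.standard_normal(n).astype(np.float32)
xc = np.linalg.solve(A, b)            # "computed solution" x^(i), in float32

r = A @ xc - b                        # residual formed in working precision
r_ref = (A.astype(np.float64) @ xc.astype(np.float64)
         - b.astype(np.float64))      # same residual accumulated in float64
dr = np.abs(r.astype(np.float64) - r_ref)    # rounding error delta r

eps = float(np.finfo(np.float32).eps)        # relative precision of the arithmetic
scale = float(np.linalg.norm(A, np.inf) * np.linalg.norm(xc, np.inf)
              + np.linalg.norm(b, np.inf))
print(f"max |delta r|     = {dr.max():.2e}")
print(f"sqrt(n)*eps*scale = {np.sqrt(n) * eps * scale:.2e}")
print(f"n*eps*scale       = {n * eps * scale:.2e}")
```

On typical runs the observed error stays below the $n \varepsilon$ scaling and near or below the $\sqrt{n}\,\varepsilon$ one, which is the behavior the bound above predicts; no choice of method can drive the computed residual reliably below this level.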
A more refined bound is that the error $(\delta r^{(i)})_j$ in the $j$th component of $r^{(i)}$ is bounded by $O(\varepsilon)$ times the $j$th component of $|A| \cdot |x^{(i)}| + |b|$, or more tersely $|\delta r^{(i)}| \leq O(\varepsilon) \, (|A| \cdot |x^{(i)}| + |b|)$.
This means the uncertainty in $e^{(i)}$ is really bounded by $O(\varepsilon) \cdot \| \, |A^{-1}| \cdot (|A| \cdot |x^{(i)}| + |b|) \, \|$.
This last quantity can be estimated inexpensively provided solving systems with $A$ and $A^T$ as coefficient matrices is inexpensive (see the last paragraph of §).
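One such estimation scheme, sketched below under stated assumptions, is Hager's 1-norm estimator: for $g \geq 0$ and $G = \mathrm{diag}(g)$ one has $\| \, |A^{-1}| \, g \, \|_\infty = \|A^{-1} G\|_\infty = \|G A^{-T}\|_1$, and the latter can be estimated from a few products with $G A^{-T}$ and its transpose, i.e. a few solves with $A$ and $A^T$. The Python/NumPy code, the function name, and the dense solves are illustrative assumptions; a practical code would reuse a single factorization of $A$ for all the solves.

```python
import numpy as np

def est_uncertainty(A, g, itmax=5):
    """Estimate || |inv(A)| * g ||_inf for g >= 0 with Hager's 1-norm
    estimator, using || |inv(A)| g ||_inf = || G inv(A)^T ||_1, G = diag(g).
    Each sweep costs one solve with A and one with A^T (sketch only)."""
    n = len(g)
    def B(v):                              # B v  with  B = G inv(A)^T
        return g * np.linalg.solve(A.T, v)
    def BT(v):                             # B^T v = inv(A) G v
        return np.linalg.solve(A, g * v)

    x = np.full(n, 1.0 / n)
    est = 0.0
    for _ in range(itmax):
        y = B(x)
        est = np.abs(y).sum()              # current estimate of ||B||_1
        xi = np.where(y >= 0, 1.0, -1.0)   # subgradient of ||B x||_1
        z = BT(xi)
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:          # local maximum: estimate settled
            break
        x = np.zeros(n)
        x[j] = 1.0                         # restart from most promising vertex
    return est

# Tiny check against the O(n^3) exact value (hypothetical test problem).
rng = np.random.default_rng(0)
n = 100
A = rng.standard_normal((n, n))
xc = rng.standard_normal(n)
b = A @ xc
g = np.abs(A) @ np.abs(xc) + np.abs(b)     # componentwise bound |A||x| + |b|
eps = np.finfo(float).eps
print("estimate:", eps * est_uncertainty(A, g))
print("exact:   ", eps * np.max(np.abs(np.linalg.inv(A)) @ g))
```

The estimator returns a lower bound on the norm that is almost always within a small factor of the true value, at a cost of a handful of solves rather than the $O(n^3)$ work of forming $|A^{-1}|$ explicitly.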
Both these bounds can be severe overestimates of the uncertainty in $e^{(i)}$, but examples exist where they are attainable.