The fact that the transformed matrix is Hessenberg depends upon the term y_j^* H_j q_j vanishing in the expression above, so there may be numerical difficulty when the first component of y is small.
To be specific, the defining relations imply that this term vanishes in exact arithmetic. In finite precision, however, the computed quantity is nonzero in general. The error will be on the order of the machine precision relative to the quantities involved, but it is not guaranteed to be small enough to preserve the Hessenberg structure numerically.
The remedy is to introduce a step-by-step acceptable perturbation and rescaling of the vector to simultaneously force the required conditions.
The basic idea is as follows. If, at the jth step, the computed quantity y_j^* H_j q_j is not sufficiently small, then it is adjusted to be small enough by scaling the vector with a number σ_j and the offending component with a number γ_j. With this rescaling applied just prior to the next step of the computation, the offending term is reduced to an acceptable level. Certainly, the vector should not be altered in norm by this scaling, and this requirement must therefore be enforced.
This gives the following system of equations to determine σ_j and γ_j:

    (σ_j γ_j)^2 + (1 − σ_j^2) η^2 = 1,
    y_j^* H_j q_j + σ_j γ_j β_{j+1} = ε_M β_{j+1}.
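The precise coupling between the two scale factors depends on quantities not reproduced here, but the norm-preservation constraint by itself is easy to sketch. The NumPy fragment below (the names sigma, gamma, and the split index j are illustrative assumptions, not the book's notation) shrinks part of a unit vector by sigma and compensates the remaining part by the gamma that restores unit norm:

```python
import numpy as np

# Hypothetical sketch of the norm-preserving rescaling idea: shrink the
# offending part of a unit vector y by a factor sigma, and compensate the
# leading part by a factor gamma chosen so that the rescaled vector still
# has unit norm. In the actual scheme, sigma and gamma are coupled through
# the term y^* H q as well; here only the norm condition is enforced.

def rescale(y, j, sigma):
    """Scale y[j:] by sigma and y[:j] by the gamma that restores unit norm."""
    y = np.asarray(y, dtype=float)
    head, tail = y[:j], y[j:]
    tail_scaled = sigma * tail
    # Unit norm requires gamma^2 * ||head||^2 + sigma^2 * ||tail||^2 = 1.
    gamma = np.sqrt((1.0 - tail_scaled @ tail_scaled) / (head @ head))
    return np.concatenate([gamma * head, tail_scaled]), gamma

rng = np.random.default_rng(0)
y = rng.standard_normal(8)
y /= np.linalg.norm(y)

y_new, gamma = rescale(y, j=3, sigma=0.5)
# The rescaled vector remains a unit vector.
print(np.linalg.norm(y_new))
```

In the scheme described above, the second condition would additionally pin down the pair of factors so that the perturbed term is driven to the machine-precision level, not the norm condition alone.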
If the required adjustment is on the order of the machine precision, then the scaling may be absorbed into the computation without alteration of the previously computed quantities and also without affecting the numerical orthogonality of the basis vectors.
When this modification is made at the jth step, it turns out that none of the previously computed quantities need to be altered. After step j, the vector is simply rescaled in the subsequent steps, and the defining formulas are invariant with respect to this scaling.
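The invariance claim has the following flavor: a quantity defined homogeneously in the rescaled vector and then normalized does not change when that vector is multiplied by a positive factor. A small NumPy check, with a generic Gram–Schmidt-style update standing in for the actual recurrence (the map and names here are assumptions, not the book's formulas):

```python
import numpy as np

# Illustrative check of the scale-invariance claim: a new direction computed
# from q by a homogeneous formula (one orthogonalization step against an
# existing unit vector u) and then normalized is unchanged when q is rescaled
# by a positive factor sigma.

def next_direction(A, q, u):
    """Orthogonalize A @ q against u and normalize the result."""
    w = A @ q
    w -= (u @ w) * u
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
q = rng.standard_normal(n)
u = rng.standard_normal(n)
u /= np.linalg.norm(u)

sigma = 0.37  # arbitrary positive scale factor
d1 = next_direction(A, q, u)
d2 = next_direction(A, sigma * q, u)
print(np.max(np.abs(d1 - d2)))  # agreement to roundoff
```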
For complete details, one should consult [420]. The code shown in Algorithm 7.9 implements this scheme to compute an acceptable transformation.
In practice, this transformation may be computed and applied in place, without storing it explicitly. However, that implementation is quite subtle, and the construction of the transformation is obscured by the details required to avoid creating additional storage.
This code departs slightly from the above description, since the vector is rescaled to have unit norm only after all of the columns have been determined.
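Deferring the normalization in this way is possible because the per-step factors compose: accumulating them and normalizing once gives the same unit vector as rescaling at every step. A minimal NumPy sketch of that equivalence (the per-step factors are arbitrary stand-ins for the actual scale factors of the text):

```python
import numpy as np

# Illustrative sketch of deferred rescaling: instead of renormalizing the
# working vector at every step, accumulate the scale factors and apply a
# single normalization after all steps.

rng = np.random.default_rng(2)
y = rng.standard_normal(5)
factors = [0.9, 1.3, 0.75, 1.1]  # hypothetical per-step scale factors

# Eager variant: rescale the vector at every step, normalize at the end.
y_eager = y.copy()
for s in factors:
    y_eager *= s
y_eager /= np.linalg.norm(y_eager)

# Deferred variant: track the product of the factors, normalize once.
scale = np.prod(factors)
y_deferred = (scale * y) / np.linalg.norm(scale * y)

print(np.max(np.abs(y_eager - y_deferred)))  # agreement to roundoff
```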
There are several implementation issues.