
#### Lanczos Algorithm with Shift-and-Invert.

In many practical contexts a shift-and-invert variant is natural. It corresponds to the choice

$$C = (A - \sigma B)^{-1} B. \tag{80}$$

It gives eigenvalues $\lambda$ close to a chosen shift point $\sigma$, and we get convergence after a much smaller number of steps $j$. Even if the systems with the shifted matrix $A - \sigma B$ are more cumbersome to solve than those with $B$ needed in the direct variant, the smaller number of steps needed will most often compensate for this.
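As a small numerical sketch (the matrices, shift, and variable names below are our own illustration, not part of the text): the transformation maps the eigenvalue of the pencil $(A, B)$ nearest the shift to the largest-magnitude eigenvalue of $C$, which is exactly the kind of eigenvalue Lanczos finds first.

```python
# Spectral transformation demo: eigenvalues theta of C = (A - sigma B)^{-1} B
# satisfy theta = 1/(lambda - sigma), so lambda near sigma gives |theta| large.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n)); A = (A + A.T) / 2            # symmetric A
Q = rng.standard_normal((n, n)); B = Q @ Q.T + n * np.eye(n)  # s.p.d. B

sigma = 0.4
C = np.linalg.solve(A - sigma * B, B)      # dense stand-in for the SI operator
theta = np.linalg.eigvals(C).real          # real, since C is B-symmetric

lam = eigh(A, B, eigvals_only=True)        # reference eigenvalues of (A, B)
nearest = lam[np.argmin(np.abs(lam - sigma))]
recovered = sigma + 1.0 / theta[np.argmax(np.abs(theta))]
assert np.isclose(recovered, nearest, atol=1e-8)
```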

The basic recursion (5.9) is replaced by

$$(A - \sigma B)^{-1} B\, V_j = V_j T_j + \beta_j v_{j+1} e_j^T, \tag{81}$$

now with a $B$-orthogonal (5.10) basis $V_j$ and a $B^{-1}$-orthogonal auxiliary basis $W_j = B V_j$, for which $W_j^T B^{-1} W_j = I_j$. The two bases are biorthogonal, $W_j^T V_j = I_j$; compare the nonsymmetric two-sided Lanczos as described in §7.8.

The matrix $T_j$ is symmetric tridiagonal,

$$T_j = \begin{pmatrix} \alpha_1 & \beta_1 & & \\ \beta_1 & \alpha_2 & \ddots & \\ & \ddots & \ddots & \beta_{j-1} \\ & & \beta_{j-1} & \alpha_j \end{pmatrix}. \tag{82}$$

An eigensolution of $T_j$, $T_j s_i^{(j)} = s_i^{(j)} \theta_i^{(j)}$, is used to get an approximate eigenvalue

$$\lambda_i^{(j)} = \sigma + \frac{1}{\theta_i^{(j)}} \tag{83}$$

and an approximate eigenvector

$$x_i^{(j)} = V_j s_i^{(j)}. \tag{84}$$

Its residual is $r_i^{(j)} = A x_i^{(j)} - \lambda_i^{(j)} B x_i^{(j)}$, and its norm is small whenever the quantity

$$\beta_{ji} = \left|\frac{\beta_j\, s_{j,i}^{(j)}}{\theta_i^{(j)}}\right| \tag{85}$$

is small. This approximation $\lambda_i^{(j)}$ is not a standard but a harmonic Ritz value of the original eigenproblem (5.1); see the discussion in §3.2.

As in the standard case, we can monitor the estimate $\beta_{ji}$ and flag the eigenvalue as converged when it gets small, without actually performing the matrix-vector multiplication (5.20). We save this operation until the step when the estimate indicates convergence.
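The back-mapping and the residual estimate can be sketched as follows. This is our own toy setup; note that we include the factor $\|(A-\sigma B)v_{j+1}\|$, which follows from multiplying the recursion by $A-\sigma B$ and makes the estimate agree with the true residual norm up to roundoff.

```python
# Eigenpairs of the tridiagonal T_j are mapped back via lambda = sigma + 1/theta
# and x = V_j s; the cheap residual estimate is checked against the true one.
import numpy as np
from scipy.linalg import eigh_tridiagonal

rng = np.random.default_rng(2)
n, j = 20, 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
Q = rng.standard_normal((n, n)); B = Q @ Q.T + n * np.eye(n)
sigma = 0.2
Cinv = np.linalg.inv(A - sigma * B)

V = np.zeros((n, j + 1)); alpha, beta = np.zeros(j), np.zeros(j)
r = rng.standard_normal(n); V[:, 0] = r / np.sqrt(r @ B @ r)
for k in range(j):
    r = Cinv @ (B @ V[:, k])
    alpha[k] = r @ B @ V[:, k]
    r -= V[:, k] * alpha[k]
    if k > 0:
        r -= V[:, k - 1] * beta[k - 1]
    beta[k] = np.sqrt(r @ B @ r)
    V[:, k + 1] = r / beta[k]

theta, S = eigh_tridiagonal(alpha, beta[:-1])   # T_j s = s theta
i = np.argmax(np.abs(theta))                    # Ritz value closest to sigma
lam = sigma + 1.0 / theta[i]                    # approximate eigenvalue
x = V[:, :j] @ S[:, i]                          # approximate eigenvector

resid = np.linalg.norm(A @ x - lam * (B @ x))
# Identity: r_i = -(beta_j s_{j,i}/theta_i) (A - sigma B) v_{j+1}, so:
est = abs(beta[-1] * S[-1, i] / theta[i]) * np.linalg.norm((A - sigma * B) @ V[:, j])
assert np.isclose(resid, est, rtol=1e-4, atol=1e-8)
```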

We get the following algorithm. Only a few steps differ from the previous direct iteration algorithm, so in actual implementations one can use the same program for both computations. Let us comment on the steps that differ:

(8)
Now we expect convergence for a rather small number of vectors $j$, and it is advisable to use full reorthogonalization, even if selective reorthogonalization gives results that are accurate enough. Apply the Gram-Schmidt orthogonalization process until $r$ is $B$-orthogonal to the basis $V_j$. We need one extra matrix-vector multiplication with $B$ for each reorthogonalization, and we have to use the classical Gram-Schmidt orthogonalization process, since modified Gram-Schmidt would need one multiplication with $B$ for each basis vector.
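A sketch of such a reorthogonalization pass in the $B$ inner product (an illustrative helper of our own, not the book's code):

```python
# Classical Gram-Schmidt in the B inner product: each pass costs exactly one
# extra matrix-vector multiplication with B, however many basis vectors exist.
import numpy as np

def b_orthogonalize(r, V, B, passes=2):
    """Make r B-orthogonal to the columns of V (classical Gram-Schmidt)."""
    for _ in range(passes):
        coeffs = V.T @ (B @ r)   # all projection coefficients from ONE product with B
        r = r - V @ coeffs
    return r

rng = np.random.default_rng(3)
n, j = 9, 4
Q = rng.standard_normal((n, n)); B = Q @ Q.T + n * np.eye(n)

V = np.zeros((n, j))                      # build a B-orthonormal basis for the demo
for k in range(j):
    v = b_orthogonalize(rng.standard_normal(n), V[:, :k], B)
    V[:, k] = v / np.sqrt(v @ B @ v)

r = b_orthogonalize(rng.standard_normal(n), V, B)
assert np.linalg.norm(V.T @ B @ r) < 1e-10 * np.linalg.norm(r)
```

With modified Gram-Schmidt the projections are interleaved, which would require a fresh product with $B$ per basis vector; that is the design reason for the classical variant here.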

We may save and use the auxiliary basis $W_j = B V_j$ to save these extra operations.

(14)
Now the basis $V_j$ is used to get the eigenvector (5.20) for each eigenvalue that is flagged as converged.

When we run a shift-and-invert operator (5.16), we factor

$$P(A - \sigma B)P^T = LDL^T \tag{86}$$

for an appropriate sparsity-preserving permutation $P$ once in the beginning, using sparse Gaussian elimination.

During the actual iteration, we use the factors $L$ and $D$, computing

$$r = P^T L^{-T} D^{-1} L^{-1} P\, B v_j \tag{87}$$

in step (4) of Algorithm 5.5.

If there are eigenvalues on both sides of the shift $\sigma$, the matrix $A - \sigma B$ is indefinite, and we cannot use a simple Cholesky decomposition but have to make a symmetric indefinite factorization, e.g., MA47 of Duff and Reid. See §10.3. This type of algorithm makes $D$ block diagonal with one-by-one and two-by-two blocks. It is easy to determine the inertia of such a matrix, and this gives information on exactly how many eigenvalues there are on each side of the shift $\sigma$. We may determine the number of eigenvalues in an interval by performing two factorizations with $\sigma$ at either end point. This is used to make sure that all eigenvalues in the interval have been computed in the software based on  and .
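The inertia-counting idea can be sketched as follows (dense `scipy.linalg.ldl` stands in for a sparse symmetric indefinite code such as MA47; the helper name and matrices are our own):

```python
# The inertia of A - sigma*B counts eigenvalues of the pencil (A, B)
# on each side of sigma; two factorizations count an interval.
import numpy as np
from scipy.linalg import ldl, eigh

def count_below(A, B, sigma):
    """Number of eigenvalues of A x = lambda B x below sigma (B s.p.d.)."""
    _, d, _ = ldl(A - sigma * B)
    # d is block diagonal with 1x1 and 2x2 blocks; by Sylvester's law of
    # inertia its negative eigenvalues count those of A - sigma*B.
    return int(np.sum(np.linalg.eigvalsh(d) < 0))

rng = np.random.default_rng(5)
n = 10
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
Q = rng.standard_normal((n, n)); B = Q @ Q.T + n * np.eye(n)

lam = eigh(A, B, eigvals_only=True)        # reference spectrum of the pencil
a, b = -0.5, 0.5                           # interval end points
in_interval = count_below(A, B, b) - count_below(A, B, a)
assert in_interval == int(np.sum((lam > a) & (lam < b)))
```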

We note that Algorithm 5.5 does not need any factorization of the matrix $B$ and can be applied also when $B$ is singular. This is not an uncommon case in practical situations, but special precautions have to be taken to prevent components of the null space of $B$ from entering the computation; see  or .
Susan Blackford 2000-11-20