

Rayleigh Quotient Iteration

A natural extension of inverse iteration is to vary the shift at each step. It turns out that the best shift that can be derived from an eigenvector approximation $x \not =
0$ is the Rayleigh quotient of $x$, namely, $\rho(x)
\equiv x^{\ast} \, A \, x / x^{\ast} \, x$.

In general, Rayleigh quotient iteration (RQI) needs fewer iterations to find an eigenvalue than inverse iteration with a constant shift; it ultimately converges cubically, while inverse iteration converges only linearly. However, it is not obvious how to choose the starting vector $y$ so that RQI converges to a particular eigenvalue/eigenvector pair. For example, RQI can converge to an eigenvalue that is not the one closest to the starting Rayleigh quotient $\rho(y)$, and to an eigenvector that is not the one closest to the starting vector $y$. Furthermore, there is the tiny but nasty possibility that it may not converge to an eigenvalue/eigenvector pair at all. RQI is also more expensive than inverse iteration, since it requires a factorization of $A - \rho_k I$ at every iteration, and this matrix becomes singular as $\rho_k$ approaches an eigenvalue. Hence RQI is practical only if such factorizations can be obtained cheaply at every iteration. See Parlett [353] for more details.


\begin{algorithm}{RQI for HEP}
{
\begin{tabbing}
(nr)ss\=ijkl\=bbb\=ccc\=ddd\= \kill
(1) \> start with a unit vector $v = y / \Vert y \Vert_2$ \\
(2) \> for $k = 1, 2, \ldots$, until convergence \\
(3) \> \> $\rho_k = v^{\ast} A v$ \\
(4) \> \> if $A - \rho_k I$ is singular, solve $(A - \rho_k I)\, v = 0$ for $v \not = 0$ and stop \\
(5) \> \> solve $(A - \rho_k I)\, w = v$ for $w$ \\
(6) \> \> $v = w / \Vert w \Vert_2$ \\
(7) \> end for \\
(8) \> accept $\lambda=\rho_k$\ for most recent $k$\ and $x=v$\end{tabbing}}
\end{algorithm}
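The algorithm above can be sketched in a few lines of NumPy. This is a minimal illustration, not a robust implementation: the function name `rqi`, the tolerance, and the simple residual-based stopping test are choices made here, and the singularity of $A - \rho_k I$ is detected only through the solver raising an exception.

```python
import numpy as np

def rqi(A, y, tol=1e-12, maxit=50):
    """Rayleigh quotient iteration for a Hermitian matrix A,
    starting from the vector y.  Returns an approximate
    eigenpair (rho, v)."""
    v = y / np.linalg.norm(y)
    rho = (v.conj() @ A @ v).real          # Rayleigh quotient of the start vector
    for _ in range(maxit):
        try:
            # Solve (A - rho*I) w = v; a new factorization every iteration.
            w = np.linalg.solve(A - rho * np.eye(len(A)), v)
        except np.linalg.LinAlgError:
            # A - rho*I is (numerically) singular: rho is an eigenvalue.
            break
        v = w / np.linalg.norm(w)          # normalize the new iterate
        rho = (v.conj() @ A @ v).real      # update the shift
        if np.linalg.norm(A @ v - rho * v) <= tol * np.linalg.norm(A, 1):
            break                          # residual small: accept (rho, v)
    return rho, v
```

The cubic convergence is easy to observe in practice: the residual norm roughly cubes at each step once the iterate is close to an eigenvector, so a handful of iterations usually suffices.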


Susan Blackford 2000-11-20