

Inverse Iteration

The drawbacks of the power method can be partially overcome by applying it to $(A - \sigma I)^{-1}$ instead of $A$, where the scalar $\sigma$ is called a shift. Since the eigenvalues of $(A - \sigma I)^{-1}$ are $1/(\lambda_i - \sigma)$, the method converges to the eigenvalue of $A$ closest to $\sigma$, rather than just to $\lambda_{\max}$. This method is called inverse iteration, or the inverse power method.


\begin{algorithm}{Inverse Iteration for HEP
}
{
\begin{tabbing}
(nr)ss\=ijkl\=bbbb\kill
(1) \> start with an initial vector $z$ \\
(2) \> for $i=1,2,\ldots$ \\
(3) \> \> $v = z / \Vert z \Vert_2$ \\
(4) \> \> solve $\left(A-\sigma I\right) y = v$ for $y$ \\
(5) \> \> $\theta = v^{\ast} y$ \\
(6) \> \> $z = y$ \\
(7) \> until convergence ($\Vert y - \theta v \Vert_2 \le \epsilon \, \vert\theta\vert$) \\
(8) \> accept $\lambda=\sigma+1/\theta$\ and $x=y/\theta$
\end{tabbing}}
\end{algorithm}

Note that there is no need to compute the inverse $\left(A-\sigma I\right)^{-1}$ explicitly at step (4). It suffices to solve the linear system $\left(A-\sigma I\right) y = v$ for $y$ efficiently, which formally gives $y = \left(A-\sigma I\right)^{-1} \, v$.
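In practice one can factor $A-\sigma I$ once and reuse the factorization to solve the system at every iteration. A minimal sketch using SciPy's LU routines (the matrix and shift below are made-up illustrative values):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical 3x3 example: factor (A - sigma*I) once, then reuse the
# factorization to solve (A - sigma*I) y = v at each iteration.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
sigma = 1.9                                 # shift near a desired eigenvalue
lu, piv = lu_factor(A - sigma * np.eye(3))  # O(n^3) work, done once

v = np.ones(3)
y = lu_solve((lu, piv), v)                  # O(n^2) work per solve
# y now equals (A - sigma*I)^{-1} v, without ever forming the inverse
```

The same `(lu, piv)` pair serves every subsequent right-hand side, so only the first iteration pays the cost of the factorization.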

As in the power method, we should expect the sequence of vectors $v$ generated by inverse iteration to become increasingly parallel to the eigenvector corresponding to the eigenvalue of $(A - \sigma I)^{-1}$ that is largest in magnitude. More precisely, assume that $\lambda_j$ and $x_j$ are an eigenvalue and eigenvector pair of $A$ such that $\vert\lambda_j - \sigma \vert^{-1}$ is the largest eigenvalue of $(A - \sigma I)^{-1}$ in magnitude; that is, $\lambda_j$ is the eigenvalue of $A$ closest to $\sigma$. Inverse iteration converges provided the starting vector $z$ is not perpendicular to $x_j$. The convergence rate is $\left\vert(\lambda_j - \sigma )/(\lambda_k
- \sigma )\right\vert$, where $\lambda_k$ is an eigenvalue of $A$ such that $\vert\lambda_k - \sigma \vert^{-1}$ is the second largest eigenvalue of $(A - \sigma I)^{-1}$ in magnitude.
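This rate can be observed numerically. The following sketch (the diagonal test matrix, shift, and iteration count are arbitrary choices for illustration) measures the per-step decay of the eigenvector error and compares it to $\vert(\lambda_j-\sigma)/(\lambda_k-\sigma)\vert$:

```python
import numpy as np

# Illustrative check of the convergence rate on a small diagonal matrix.
lams = np.array([1.0, 2.0, 5.0])   # eigenvalues of A
A = np.diag(lams)
sigma = 2.1                        # closest eigenvalue: lambda_j = 2

d = np.sort(np.abs(lams - sigma))
rate = d[0] / d[1]                 # |lambda_j - sigma| / |lambda_k - sigma| ~ 0.09

# Inverse iteration; track the error against the unit eigenvector of lambda_j.
x_true = np.array([0.0, 1.0, 0.0])
z = np.ones(3)
errs = []
for _ in range(6):
    v = z / np.linalg.norm(z)
    z = np.linalg.solve(A - sigma * np.eye(3), v)
    w = z / np.linalg.norm(z)
    # fix the sign of w before comparing, since eigenvectors are
    # determined only up to a scalar factor
    errs.append(np.linalg.norm(np.sign(w[1]) * w - x_true))
# Successive errors shrink by roughly `rate` per step.
```
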

One advantage of inverse iteration over the power method is the ability to converge to any desired eigenvalue, namely the one nearest the shift $\sigma$. By choosing $\sigma$ very close to a desired eigenvalue, inverse iteration converges very quickly. The method is therefore particularly effective when we have a good approximation to an eigenvalue and want only that eigenvalue and its corresponding eigenvector. However, inverse iteration requires a factorization of the matrix $A-\sigma I$, making it less attractive when this factorization is expensive.
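Putting the pieces together, the algorithm above can be sketched as follows (the function name, tolerances, and test matrix are ad hoc choices, not part of the original text):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration(A, sigma, z, tol=1e-10, maxit=100):
    """Inverse iteration for a Hermitian matrix A (illustrative sketch).

    Returns (lam, x), where lam approximates the eigenvalue of A
    nearest the shift sigma and x the corresponding eigenvector.
    """
    n = A.shape[0]
    lu, piv = lu_factor(A - sigma * np.eye(n))  # factor once, reuse below
    for _ in range(maxit):
        v = z / np.linalg.norm(z)               # normalize current iterate
        y = lu_solve((lu, piv), v)              # step (4): solve (A - sigma I) y = v
        theta = np.vdot(v, y)                   # Rayleigh quotient of the inverse
        if np.linalg.norm(y - theta * v) <= tol * abs(theta):
            return sigma + 1.0 / theta, y / theta   # accept lambda and x
        z = y
    raise RuntimeError("inverse iteration did not converge")

# Ad hoc symmetric test matrix; its eigenvalues are 3 - sqrt(3), 3, 3 + sqrt(3).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, x = inverse_iteration(A, sigma=1.2, z=np.ones(3))
# lam approximates 3 - sqrt(3), the eigenvalue nearest the shift 1.2
```

Because $\theta \approx 1/(\lambda_j - \sigma)$ at convergence, the accepted value $\sigma + 1/\theta$ recovers the eigenvalue of $A$ itself, not of the inverse.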


Susan Blackford 2000-11-20