

Jacobi-Davidson Method

A possible disadvantage of the linearization approach is that it doubles the dimension of the problem: a problem with $n$-dimensional matrices $M$, $C$, and $K$ is transformed into a generalized problem with $2n$-dimensional matrices $A$ and $B$. This is avoided in the Jacobi-Davidson method discussed in this section. In this method, the QEP is first projected onto a low-dimensional subspace, which leads to a QEP of modest dimension. This low-dimensional projected QEP can be solved with any method of choice. Expansion of the subspace is realized via a Jacobi-Davidson correction equation. For polynomial eigenproblems this technique was first suggested and discussed in [408, sec. 8].

As we will see below, this method can also be applied directly to polynomial eigenproblems

\begin{displaymath}
(\lambda^\ell C_\ell+\cdots +\lambda C_1 +C_0 ) x=0,
\end{displaymath} (263)

where the $C_i$, $i = 0, 1, \ldots, \ell$, are given $n \times n$ matrices; no transformation to a generalized ``linear'' eigenvalue problem is required. For simplicity, we will present only the case $\ell = 2$.

In the first part of a Jacobi-Davidson iteration step, for solving the polynomial eigenproblem

\begin{displaymath}
\Psi(\lambda)x =0
\end{displaymath} (264)

where
\begin{displaymath}
\Psi(\lambda)\equiv \lambda^2 C_2+\lambda C_1+C_0,
\end{displaymath} (265)

the projected polynomial problem
\begin{displaymath}
(\theta^2 V^\ast C_2 V +\theta V^\ast C_1 V + V^\ast C_0V){s}=0
\end{displaymath} (266)

is solved. The columns ${v}_i$ of the $n\times {m}$ matrix ${V}$ form a basis for the search subspace. For stability reasons, the columns are constructed to be orthonormal. The projected problem is a quadratic eigenproblem of the same form as the original one, but of much smaller dimension, typically ${m}\ll n$. We will assume that the solution vectors ${s}$ are normalized, $\Vert{s}\Vert _2 = 1$. First, a Ritz value $\theta $ with the desired properties, such as the largest real part or closest to some target $\tau $, is selected, together with the associated eigenvector ${s}$. Then the Ritz vector ${u}\equiv V {s}$ and the residual $r\equiv \Psi(\theta){u}$ are computed. For expansion of the search space the vector ${p}$,

\begin{displaymath}
{p} \equiv \Psi'(\theta){u},
\end{displaymath}

with

\begin{displaymath}
\Psi'(\theta)= 2 \theta C_2+C_1
\end{displaymath}

is also computed.
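For illustration, the following sketch carries out this first part for given coefficient matrices: it forms the projected problem (266), solves it via a companion-type linearization of the small $m$-dimensional problem, and returns the Ritz value closest to a target $\tau$ together with $u$, $r$, and $p$. The routine name and the use of scipy.linalg.eig for the small dense problem are illustrative choices, not part of the method itself.

\begin{verbatim}
import numpy as np
from scipy.linalg import eig

def extract_ritz_pair(V, C2, C1, C0, target):
    """First part of a Jacobi-Davidson step for the QEP (illustrative sketch).

    V : n x m matrix with orthonormal columns spanning the search subspace.
    Returns (theta, u, r, p) for the Ritz value closest to `target`.
    """
    # Projected m x m coefficient matrices of the small QEP (266).
    M2 = V.conj().T @ (C2 @ V)
    M1 = V.conj().T @ (C1 @ V)
    M0 = V.conj().T @ (C0 @ V)
    m = V.shape[1]

    # Solve the projected QEP by companion linearization:
    # with y = [s; theta*s],  A y = theta B y.
    A = np.block([[np.zeros((m, m)), np.eye(m)], [-M0, -M1]])
    B = np.block([[np.eye(m), np.zeros((m, m))],
                  [np.zeros((m, m)), M2]])
    theta_all, S = eig(A, B)

    # Select the (finite) Ritz value closest to the target.
    finite = np.isfinite(theta_all)
    idx = np.flatnonzero(finite)
    k = idx[np.argmin(np.abs(theta_all[finite] - target))]
    theta = theta_all[k]
    s = S[:m, k]
    s = s / np.linalg.norm(s)                  # normalize ||s||_2 = 1

    u = V @ s                                  # Ritz vector u = V s
    r = (theta**2 * C2 + theta * C1 + C0) @ u  # residual r = Psi(theta) u
    p = (2 * theta * C2 + C1) @ u              # p = Psi'(theta) u
    return theta, u, r, p
\end{verbatim}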

In the second part of the Jacobi-Davidson iteration step, the search subspace $\mbox{span}({V})$ is expanded by a vector ${t}\perp {u}$ that solves (approximately) the correction equation

\begin{displaymath}
\left(I-\frac{{p}\, {u}^\ast}{{u}^\ast {p}}\right)
\Psi(\theta) \left(I-{u}\, {u}^\ast\right){t}=-r.
\end{displaymath} (267)

The next column of ${V}$ is obtained by orthonormalizing the approximate solution against the previously computed columns ${v}_1, \ldots, {v}_{m}$.
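A possible realization of this second part is sketched below. For clarity the projected operator in (267) is formed as a dense matrix and the correction equation is solved in a least-squares sense; in practice, for large $n$, one would instead apply a (preconditioned) iterative solver such as GMRES and solve (267) only approximately. The routine name is ours.

\begin{verbatim}
import numpy as np

def expand_search_space(V, theta, u, r, p, C2, C1, C0):
    """Second part of a Jacobi-Davidson step (illustrative sketch):
    solve the correction equation (267) and append the result to V."""
    n = u.shape[0]
    Psi_theta = theta**2 * C2 + theta * C1 + C0

    # Projectors appearing in the correction equation (267).
    Pl = np.eye(n) - np.outer(p, u.conj()) / (u.conj() @ p)
    Pr = np.eye(n) - np.outer(u, u.conj())

    # Dense illustration: least-squares solve of the projected system;
    # t only needs to be an approximate solution.
    t, *_ = np.linalg.lstsq(Pl @ Psi_theta @ Pr, -r, rcond=None)
    t = Pr @ t                                 # enforce t orthogonal to u

    # Orthonormalize t against v_1, ..., v_m (modified Gram-Schmidt).
    for j in range(V.shape[1]):
        t = t - (V[:, j].conj() @ t) * V[:, j]
    t = t / np.linalg.norm(t)
    return np.column_stack([V, t])             # next column appended to V
\end{verbatim}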

This process is repeated until an eigenpair $(\lambda,x)$ has been detected, i.e., until the residual vector $r$ is sufficiently small. The basic form of the algorithm is presented in Algorithm 9.1. We refer to [413] for an example of a quadratic eigenvalue problem, arising from an acoustic problem, that has been solved with this technique for $n$ up to 250,000.


\begin{algorithm}{Jacobi--Davidson Method for QEP}
{[The tabulated steps of the algorithm are not reproduced here; its final step is {\rm (16)} expand ${V}=[{V},{v}]$.]}
\end{algorithm}
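As a complement to Algorithm 9.1, the following minimal driver assembles the two sketches above into one basic iteration; it assumes the illustrative routines extract_ritz_pair and expand_search_space from the preceding sketches, and it omits restarts and deflation.

\begin{verbatim}
import numpy as np

def jacobi_davidson_qep(C2, C1, C0, target, tol=1e-8, max_iter=50, seed=0):
    """Basic Jacobi-Davidson iteration for the QEP (sketch; no restarts)."""
    n = C0.shape[0]
    rng = np.random.default_rng(seed)

    # Start with a single random, normalized search direction.
    v = rng.standard_normal(n)
    V = (v / np.linalg.norm(v)).reshape(n, 1)

    for _ in range(max_iter):
        # First part: projected QEP and Ritz pair closest to the target.
        theta, u, r, p = extract_ritz_pair(V, C2, C1, C0, target)
        if np.linalg.norm(r) < tol:
            return theta, u               # accepted eigenpair (lambda, x)
        # Second part: correction equation and expansion of V.
        V = expand_search_space(V, theta, u, r, p, C2, C1, C0)

    raise RuntimeError("no convergence within max_iter iterations")
\end{verbatim}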

If the dimension of the search subspace becomes too large, say when the number of columns of $V$ reaches ${m}={m}_{\max}$, then the process can be continued with a search subspace of smaller dimension. Take for the new search subspace the space spanned by the best ${m}_{\min}$ Ritz pairs from the last step; that is, take $V=[u_1,\ldots,u_{{m}_{\min}}]$, where $u_1,\ldots, u_{{m}_{\min}}$ are the Ritz vectors associated with the best available ${m}_{\min}$ Ritz values $\theta_1,\ldots,\theta_{{m}_{\min}}$. Then apply modified Gram-Schmidt to orthonormalize $V$ and restart with this matrix. Note that the new search matrix $V_{{m}_{\min}}$ can be expressed in terms of the old search matrix $V_{{m}_{\max}}$ as $V_{{m}_{\min}}=V_{{m}_{\max}} T$ for some ${m}_{\max}\times {m}_{\min}$ matrix $T$. The transformation matrix $T$ can be computed explicitly and can be used to update the auxiliary matrices $W_i$ (replaced by $W_i T$) and $M_i$ (replaced by $T^\ast M_i T$).
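A restart of this kind might be implemented as follows. Here we assume, for the purpose of the sketch, that the stored auxiliary matrices are $W_i = C_i V$ and $M_i = V^\ast C_i V$, so that the updates $W_i T$ and $T^\ast M_i T$ mentioned above apply; the routine name and the use of a QR factorization for the orthonormalization are ours.

\begin{verbatim}
import numpy as np
from scipy.linalg import eig, qr

def restart(V, W, M, target, m_min):
    """Shrink the search space to the m_min best Ritz directions (sketch).

    V : n x m_max search matrix with orthonormal columns,
    W : [W_0, W_1, W_2] with W_i = C_i @ V,
    M : [M_0, M_1, M_2] with M_i = V* @ C_i @ V.
    """
    m = V.shape[1]
    M0, M1, M2 = M

    # Ritz values/vectors of the projected QEP via companion linearization.
    A = np.block([[np.zeros((m, m)), np.eye(m)], [-M0, -M1]])
    B = np.block([[np.eye(m), np.zeros((m, m))],
                  [np.zeros((m, m)), M2]])
    theta, S = eig(A, B)

    # Keep the m_min Ritz values closest to the target ("best" ones).
    keep = np.argsort(np.abs(theta - target))[:m_min]

    # T maps the old basis to the new one; orthonormalize its columns.
    T, _ = qr(S[:m, keep], mode='economic')

    V_new = V @ T                               # new search matrix V_{m_min}
    W_new = [Wi @ T for Wi in W]                # W_i <- W_i T
    M_new = [T.conj().T @ Mi @ T for Mi in M]   # M_i <- T* M_i T
    return V_new, W_new, M_new
\end{verbatim}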

Eigenvectors that have already been detected can be kept in the search subspace (explicit deflation) if more eigenvectors are wanted. Keeping the detected eigenvectors in the search subspace prevents the process from reconstructing them.

In §4.7.3 and §8.4.2, we suggested using ``implicit'' deflation, since that approach is based on Schur forms, which is more stable and leads to better-conditioned correction equations. However, in this general polynomial setting it is not clear how to incorporate implicit deflation: the Schur form and the generalized Schur form do not generalize easily.

