Similar to the single-vector case, this can be realized in a surprisingly efficient way.

Assume that we use a Krylov solver with initial guess $t_0 = 0$ and with
left-preconditioning for the approximate solution of the correction
equation (4.50). Since the starting vector is in the subspace
orthogonal to $\widetilde{Q}$, all iteration vectors for the Krylov
solver will be in that space. In that subspace we have to compute the
vector
\[
  z \equiv \widetilde{K}^{-1} \widetilde{A}\, v
\]
for a vector $v$ supplied by the Krylov solver, and
\[
  \widetilde{K} \equiv (I - \widetilde{Q}\widetilde{Q}^{\ast})\, K\, (I - \widetilde{Q}\widetilde{Q}^{\ast}),
  \qquad
  \widetilde{A} \equiv (I - \widetilde{Q}\widetilde{Q}^{\ast})\, (A - \theta I)\, (I - \widetilde{Q}\widetilde{Q}^{\ast}).
\]
This is done in two steps. First we compute
\[
  y = \widetilde{A}\, v = (I - \widetilde{Q}\widetilde{Q}^{\ast})\, (A - \theta I)\, v,
\]
with $(I - \widetilde{Q}\widetilde{Q}^{\ast})\, v = v$ since $\widetilde{Q}^{\ast} v = 0$. Then, with left-preconditioning, we solve $z \perp \widetilde{Q}$ from
\[
  \widetilde{K}\, z = y.
\]
Since
$\widetilde{K} z = (I - \widetilde{Q}\widetilde{Q}^{\ast})\, K z$, it follows that $z$ satisfies
$K z = y - \widetilde{Q}\,\vec{\alpha}$ for some vector $\vec{\alpha}$, or
$z = K^{-1} y - K^{-1}\widetilde{Q}\,\vec{\alpha}$.
The condition
$\widetilde{Q}^{\ast} z = 0$ leads to
\[
  \vec{\alpha} = \left(\widetilde{Q}^{\ast} K^{-1} \widetilde{Q}\right)^{-1} \widetilde{Q}^{\ast} K^{-1} y .
\]
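This choice of $\vec{\alpha}$ can be checked numerically. The following is a minimal dense NumPy sketch (all names illustrative; `K` plays the role of the preconditioner and `Q` that of the orthonormal matrix $\widetilde{Q}$): it verifies that $z = K^{-1} y - K^{-1}\widetilde{Q}\,\vec{\alpha}$ is orthogonal to $\widetilde{Q}$ and solves the projected system.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3

# A generic well-conditioned "preconditioner" K and an orthonormal Q (for Q-tilde).
K = np.eye(n) + 0.1 * rng.standard_normal((n, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))

# A right-hand side y in the subspace orthogonal to Q.
y = rng.standard_normal(n)
y -= Q @ (Q.conj().T @ y)

# alpha = (Q* K^{-1} Q)^{-1} Q* K^{-1} y,  z = K^{-1} y - K^{-1} Q alpha.
Kinv_y = np.linalg.solve(K, y)
Kinv_Q = np.linalg.solve(K, Q)
alpha = np.linalg.solve(Q.conj().T @ Kinv_Q, Q.conj().T @ Kinv_y)
z = Kinv_y - Kinv_Q @ alpha

# z is orthogonal to Q ...
assert np.linalg.norm(Q.conj().T @ z) < 1e-10
# ... and satisfies (I - Q Q*) K (I - Q Q*) z = y.
P = np.eye(n) - Q @ Q.conj().T
assert np.linalg.norm(P @ K @ P @ z - y) < 1e-8
print("checks passed")
```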
The vector $\widehat{y} \equiv K^{-1} y$ is solved from $K \widehat{y} = y$ and, likewise, $\widehat{Q} \equiv K^{-1}\widetilde{Q}$ is solved from $K \widehat{Q} = \widetilde{Q}$. Note that the last set of equations has to be solved only once in an iteration process for equation (4.50), so that effectively $s+1$ operations with the preconditioner are required for $s$ iterations of the linear solver. Note also that a matrix-vector multiplication with the left-preconditioned operator, in an iteration of the Krylov solver, requires only one operation with $K^{-1}$ and $\widehat{Q}$, instead of the four actions of the projector operator $(I - \widetilde{Q}\widetilde{Q}^{\ast})$. This has been worked out in the solution template given in Algorithm 4.18. Note that obvious savings can be realized if the preconditioner $K$ is kept the same for a number of successive eigenvalue computations (for details, see [412]).
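The bookkeeping described above can be sketched as follows. This is a minimal dense NumPy illustration, not the book's template (Algorithm 4.18): `theta`, `K`, and `Qt` stand in for $\theta$, the preconditioner, and $\widetilde{Q}$, and all names are illustrative. $K^{-1}\widetilde{Q}$ and $\widetilde{Q}^{\ast} K^{-1}\widetilde{Q}$ are formed once per correction equation; each Krylov iteration then costs one multiplication with $A - \theta I$ and one solve with $K$.

```python
import numpy as np

def make_preconditioned_op(A, theta, K, Qt):
    """Return v -> z, the action of the left-preconditioned, deflated
    operator for the correction equation, with one K-solve per call.
    Qt is assumed to have orthonormal columns and v to be orthogonal to Qt."""
    # Done once per correction equation: Qhat = K^{-1} Qt and M = Qt* Qhat.
    Qhat = np.linalg.solve(K, Qt)
    M = Qt.conj().T @ Qhat

    def apply(v):
        w = A @ v - theta * v            # w = (A - theta I) v
        what = np.linalg.solve(K, w)     # the single K-solve per iteration
        # Subtract the component in range(Qhat) so that z is orthogonal to Qt.
        return what - Qhat @ np.linalg.solve(M, Qt.conj().T @ what)

    return apply

# Check against the explicitly projected operators on random data.
rng = np.random.default_rng(1)
n, k = 40, 2
A = rng.standard_normal((n, n))
K = np.eye(n) + 0.1 * rng.standard_normal((n, n))
Qt, _ = np.linalg.qr(rng.standard_normal((n, k)))
theta = 0.3

P = np.eye(n) - Qt @ Qt.conj().T                 # projector I - Qt Qt*
Atil = P @ (A - theta * np.eye(n)) @ P           # explicitly deflated A - theta I

op = make_preconditioned_op(A, theta, K, Qt)
v = P @ rng.standard_normal(n)                   # a vector orthogonal to Qt
z = op(v)

# z is orthogonal to Qt and satisfies (I - Qt Qt*) K (I - Qt Qt*) z = Atil v.
assert np.linalg.norm(Qt.conj().T @ z) < 1e-10
assert np.linalg.norm(P @ K @ P @ z - Atil @ v) < 1e-8
print("operator check passed")
```

The point of the sketch is the cost pattern: the projections with $(I - \widetilde{Q}\widetilde{Q}^{\ast})$ never appear inside `apply`; they are replaced by one correction with the precomputed $\widehat{Q}$.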