There are two different ways to get a set of linearly independent eigenvectors for a multiple eigenvalue. The first is to restart and run the operator projected onto the subspace orthogonal to the converged eigenvector(s), so that all converged eigendirections have been projected away. This corresponds to orthogonalizing the vector in step (9) of Algorithm 4.6 against all converged eigenvectors. This procedure is repeated as long as new vectors converge; see, e.g., [318]. There is a second, more radical way to deal with possibly multiple eigenvalues: the block Lanczos method. It starts with several, say p, starting directions, forming a block X, and lets the operator A act on all of X at every step, computing a new orthogonal block each time. The matrix T will then be block tridiagonal, or more properly a band matrix; see the description in §4.6.
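The block recurrence described above can be sketched as follows. This is a minimal NumPy illustration, not the book's Algorithm 4.6: the function name, the use of full reorthogonalization, and the QR factorization for producing each new orthogonal block are choices made here for the sketch, and breakdown (a rank-deficient block) is not handled.

```python
import numpy as np

def block_lanczos(A, X0, num_steps):
    # Sketch of block Lanczos with full reorthogonalization (assumed here
    # for numerical stability; a selective scheme could be used instead).
    # A: symmetric (n, n) array; X0: (n, p) block of starting directions.
    # Returns the basis blocks, the p-by-p diagonal blocks M_j, and the
    # off-diagonal blocks B_j of the resulting band matrix T.
    n, p = X0.shape
    X, _ = np.linalg.qr(X0)             # orthonormalize the starting block
    blocks = [X]
    X_prev = np.zeros((n, p))
    B_prev = np.zeros((p, p))
    Ms, Bs = [], []
    for _ in range(num_steps):
        W = A @ X - X_prev @ B_prev.T   # A acts on the whole block at once
        M = X.T @ W                     # diagonal block of T
        W = W - X @ M
        for Q in blocks:                # reorthogonalize against all blocks
            W = W - Q @ (Q.T @ W)
        X_next, B = np.linalg.qr(W)     # new orthogonal block; B goes into T
        Ms.append(M)
        Bs.append(B)
        blocks.append(X_next)
        X_prev, B_prev, X = X, B, X_next
    return blocks, Ms, Bs
```

Since each B_j is upper triangular, assembling the M_j on the diagonal and the B_j on the off-diagonals gives a band matrix with half-bandwidth p, consistent with the block tridiagonal structure mentioned above.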
In the block Lanczos method, the convergence is governed by the separation of the desired eigenvalue from the eigenvalues p positions away in the spectrum, say λ_{i+p}, if we assume the eigenvalues sorted from smallest to largest. However, after j block steps the degree of the underlying polynomial is j, not jp, as it would be if we ran single-vector Lanczos for the same number of basis vectors, and this will slow down the convergence in the general case. For example, with block size p = 2, a basis of 20 vectors corresponds to polynomials of degree 10, whereas single-vector Lanczos would reach degree 20 with the same basis size.
Regardless of the above objection, block Lanczos is advantageous for efficiency reasons whenever computing AX for an n × p block X is much cheaper than computing Ax for p vectors x, one after the other. This is the case on most machines with cache memories, and when the matrices are so large that they need to be kept in secondary storage.
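The efficiency argument can be made concrete with a small NumPy sketch (the dense matrix and the sizes below are illustrative only): one matrix-matrix product reads A once for all p vectors, whereas p separate matrix-vector products read A p times, and the two computations produce the same result.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 8
A = rng.standard_normal((n, n))   # stands in for the (possibly huge) operator
X = rng.standard_normal((n, p))   # a block of p vectors

# One matrix-matrix product (BLAS 3): A is traversed once for all p vectors.
Y_block = A @ X

# p separate matrix-vector products (BLAS 2): A is traversed p times.
Y_onebyone = np.column_stack([A @ X[:, k] for k in range(p)])

assert np.allclose(Y_block, Y_onebyone)
```

On cache-based machines, and a fortiori when A resides in secondary storage, the single pass over A in the blocked form is what makes block Lanczos pay off despite the lower polynomial degree per basis vector.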