Block methods are used for two main reasons. The first is the reliable determination of multiple and/or clustered eigenvalues. The second concerns computational efficiency: in many instances, the cost of multiplying a matrix by a small block of vectors is commensurate with that of a single matrix-vector product, since the matrix entries need to be accessed only once per block. On the other hand, block methods have two major drawbacks: the (not insignificant) added complexity of the software implementation and the comparative lack of theoretical understanding. The question of how to select the block size also remains open.
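To make the efficiency claim concrete, the following Python timing sketch (an illustration of ours, not taken from any particular library) compares p separate sparse matrix-vector products against one blocked product A @ V. In the blocked case the matrix is streamed through memory only once, so the two timings are often of the same order; the exact ratio depends on the matrix, its storage format, and the hardware.

```python
import time

import numpy as np
import scipy.sparse as sp

n, p = 100_000, 8                      # illustrative sizes (assumptions)
A = sp.random(n, n, density=1e-4, format="csr", random_state=0)
V = np.random.default_rng(0).standard_normal((n, p))

t0 = time.perf_counter()
for j in range(p):                     # p separate matrix-vector products
    _ = A @ V[:, j]
t1 = time.perf_counter()
_ = A @ V                              # one blocked product: A is read once
t2 = time.perf_counter()

print(f"{p} single matvecs: {t1 - t0:.3f}s; one block product: {t2 - t1:.3f}s")
```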
Although an unblocked method coupled with a deflation strategy (such as in §7.6) may be used to compute multiple and/or clustered eigenvalues, it can prove inefficient for some eigenvalue problems because of the cost of building the underlying subspace. Moreover, a relatively small convergence tolerance is required to reliably resolve nearby eigenvalues; many problems do not demand this much accuracy, and such a stringent criterion can lead to unnecessary computation.
The simplest block method, subspace iteration, has already been discussed in §7.4; there the block size simply equals the subspace dimension. Just as Arnoldi methods compute a basis for a sequence of power iterates (the members of the Krylov subspace), block Arnoldi methods string together a sequence of subspace iterates.
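As a concrete illustration of this idea, here is a minimal NumPy sketch of a block Arnoldi recurrence using classical block Gram-Schmidt. The function block_arnoldi and its interface are our own illustrative choices, not a reference implementation, and the sketch assumes the QR factorization of each remainder block has full rank (i.e., no breakdown occurs).

```python
import numpy as np

def block_arnoldi(A, V1, m):
    """Run m steps of block Arnoldi: build orthonormal blocks V_1, ..., V_{m+1}
    spanning the block Krylov space span{V1, A V1, ..., A^m V1}, together with
    the band Hessenberg matrix H satisfying A V_j = sum_i V_i H_{ij}."""
    n, p = V1.shape
    Vs = [V1]                                   # V1 must have orthonormal columns
    H = np.zeros(((m + 1) * p, m * p))
    for j in range(m):
        W = A @ Vs[j]
        # Block Gram-Schmidt: orthogonalize W against all previous blocks.
        for i in range(j + 1):
            Hij = Vs[i].T @ W
            H[i * p:(i + 1) * p, j * p:(j + 1) * p] = Hij
            W = W - Vs[i] @ Hij
        # QR of the remainder yields the next block and the subdiagonal of H;
        # we assume W has full column rank (no breakdown).
        Q, R = np.linalg.qr(W)
        H[(j + 1) * p:(j + 2) * p, j * p:(j + 1) * p] = R
        Vs.append(Q)
    return Vs, H

# Hypothetical usage: Ritz values of a random symmetric test matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
A = (A + A.T) / 2
V1, _ = np.linalg.qr(rng.standard_normal((100, 4)))   # block size p = 4
Vs, H = block_arnoldi(A, V1, m=5)
ritz = np.linalg.eigvals(H[:20, :20])                 # leading m*p square part
```

Ritz approximations to the eigenvalues of A are then obtained from the leading mp-by-mp square part of the band Hessenberg matrix H, exactly as in the unblocked case.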