This implicit scheme costs $p$ rather than the $k+p$ matrix-vector products the explicit scheme would require to apply the same degree-$p$ polynomial and rebuild the $(k+p)$-step Arnoldi factorization. When the operation $\mathbf{w} \leftarrow A\mathbf{v}$ is relatively expensive, this savings can be considerable. However, there are tradeoffs. It is impossible to predict optimal values for $k$ and $p$ in general. The distribution of the spectrum of $A$ has a great deal to do with convergence behavior. Two matrices with precisely the same sparsity pattern could exhibit completely different convergence characteristics for a given choice of $k$ and $p$. Therefore, it would be misleading to draw any conclusions based solely on operation counts per iteration. Nevertheless, it can be useful to have a notion of the major costs of an iteration to guide initial choices.
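To make the savings concrete, here is a small sketch (the helper name and the illustrative values of $k$, $p$, and the cycle count are our own) comparing the matrix-vector products consumed per restart cycle by the two schemes:

```python
# Matrix-vector products per restart cycle: implicit restarting applies
# the degree-p filter polynomial with p products, while explicit
# restarting would need k + p products to rebuild the (k+p)-step
# Arnoldi factorization from scratch.

def matvecs_per_cycle(k, p, implicit=True):
    """Count matrix-vector products for one restart cycle."""
    return p if implicit else k + p

k, p = 10, 10          # illustrative choices of wanted values and shifts
cycles = 50            # illustrative number of restart cycles
implicit_total = cycles * matvecs_per_cycle(k, p, implicit=True)
explicit_total = cycles * matvecs_per_cycle(k, p, implicit=False)
print(implicit_total, explicit_total)  # 500 vs. 1000 products
```

When each product with $A$ is expensive, halving the product count per cycle, as in this configuration, dominates every other cost in the iteration.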
In the following, we assume that the cost of a matrix-vector product is $cn$. For a sparse matrix, one could think of $c$ as twice the average number of nonzero entries per row of $A$. For a dense matrix, $c = 2n$, and for an FFT matrix, $c = 5\log_2 n$. See §10.2. For fixed values of $k$ and $p$, the cost (in floating point operations) for one iteration of IRAM is approximately
$$
cnp \;+\; 2\bigl(4np(k+p)\bigr) \;+\; 4nk(k+p),
$$
where the first term accounts for the $p$ matrix-vector products, the second for the Gram-Schmidt orthogonalization of the new Arnoldi vectors, and the third for applying the implicit QR update to the Arnoldi basis.
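As a rough planning aid, one might encode such a flop estimate as a function of $n$, $k$, $p$, and $c$. The model below is a sketch under our own accounting assumptions, not an exact formula: $cnp$ for the $p$ matrix-vector products, an orthogonalization term carrying the worst-case factor of 2 for reorthogonalization corrections, and a term for applying the implicit shifts to the basis.

```python
import math

def iram_iteration_flops(n, k, p, c, reorth_factor=2.0):
    """Rough flop estimate for one IRAM iteration (a sketch, not exact).

    n: matrix dimension; k: number of wanted values; p: number of shifts;
    c: cost factor so that one matrix-vector product costs c*n flops;
    reorth_factor: 2.0 in the worst case, closer to 1.5 in practice.
    """
    matvec = c * n * p                           # p products at c*n flops each
    orth = reorth_factor * 4 * n * p * (k + p)   # Gram-Schmidt + corrections
    restart = 4 * n * k * (k + p)                # implicit QR update of basis
    return matvec + orth + restart

# Compare operator classes at the same n, k, p.
n, k, p = 10_000, 10, 10
sparse_c = 2 * 15             # ~15 nonzeros per row of A -> c = 30
dense_c = 2 * n               # dense matrix: c = 2n
fft_c = 5 * math.log2(n)      # FFT-like operator: c = 5 log2(n)
for label, c in [("sparse", sparse_c), ("dense", dense_c), ("fft", fft_c)]:
    print(f"{label:>6}: ~{iram_iteration_flops(n, k, p, c):.3e} flops")
```

For the sparse and FFT cases the restart overhead is comparable to the matrix-vector work, which is exactly the regime the next paragraph warns about.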
Note that the factor of 2 in the second term is due to a worst-case scenario with respect to orthogonalization corrections. This factor will usually be around 1.5 in practice. If the matrix-vector product is cheap (i.e., $c$ is quite small), then the cost of implicit restarting is significant and alternatives should be considered. One of these might be to use a polynomial spectral transformation. This would involve operating with a suitably constructed polynomial in $A$ in place of $A$. An example would be a Chebyshev polynomial designed to damp the unwanted portion of the spectrum. The action would, of course, be applied as a succession of matrix-vector products involving $A$.
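A minimal sketch of such a Chebyshev transformation, assuming a symmetric $A$ whose unwanted eigenvalues lie in a known interval $[a, b]$ (the function name and the small diagonal test matrix are our own illustrations): the interval is mapped to $[-1, 1]$, where $|T_m| \le 1$, so eigenvalues outside it are strongly amplified by the three-term recurrence, at a cost of one product with $A$ per degree.

```python
import numpy as np

def chebyshev_filter(A, v, degree, a, b):
    """Apply a Chebyshev polynomial in A to v via the three-term recurrence.

    T_degree is mapped so the unwanted interval [a, b] lands on [-1, 1],
    where |T_degree| <= 1; eigenvalues outside [a, b] are amplified, so
    the corresponding eigenvector components of v are enhanced.
    Each recurrence step costs one matrix-vector product with A.
    """
    e = (b - a) / 2.0          # half-width of the damped interval
    center = (a + b) / 2.0     # its midpoint
    t_prev = v
    t_cur = (A @ v - center * v) / e
    for _ in range(2, degree + 1):
        t_next = (2.0 / e) * (A @ t_cur - center * t_cur) - t_prev
        t_prev, t_cur = t_cur, t_next
    return t_cur

# Illustration: damp [0, 1] so the eigenvector for the isolated
# eigenvalue 2.0 dominates the filtered vector.
rng = np.random.default_rng(0)
A = np.diag(np.linspace(0.0, 1.0, 50))
A[-1, -1] = 2.0                               # one wanted eigenvalue outside [0, 1]
v = rng.standard_normal(50)
w = chebyshev_filter(A, v, degree=20, a=0.0, b=1.0)
w /= np.linalg.norm(w)                        # nearly the dominant eigenvector
```

Starting the Arnoldi process from such a filtered vector plays the same role as the implicit filter, trading restart overhead for extra products with $A$, which is attractive precisely when $c$ is small.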