The book is divided into 11 chapters. We refer the reader to the list of symbols and acronyms; we will use the notation listed there freely throughout the text.
Chapter 1 motivates and outlines the rest of the book.
Chapter 2 gives a tour of all the eigenvalue problems discussed in the book. If you are a first-time reader, read this chapter! It gives basic terminology and definitions used to describe eigenvalue problems, and classifies all eigenvalue problems into six types. This is the top of the decision tree. Chapters 4 through 9 give details for each of the six types.
Chapter 3 summarizes the two mathematical principles used by most algorithms for large eigenvalue problems: projection onto subspaces and spectral transformations.
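Both principles can be illustrated in a few lines. The sketch below (NumPy, with a hypothetical random Hermitian test matrix) builds an orthonormal basis for a Krylov subspace and solves the small projected problem — the Rayleigh–Ritz procedure; the comments note where a spectral transformation such as shift-and-invert would slot in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Hermitian test matrix.
n, m = 50, 10
A = rng.standard_normal((n, n))
A = (A + A.T) / 2

# Projection onto a subspace (Rayleigh-Ritz): build an orthonormal
# basis V for the Krylov subspace span{v, A v, A^2 v, ...} and solve
# the small m-by-m projected problem (V^T A V) y = theta * y.
# A spectral transformation would replace the products with A below
# by solves with (A - sigma*I), steering convergence toward
# eigenvalues near the shift sigma.
V = np.empty((n, m))
v = rng.standard_normal(n)
V[:, 0] = v / np.linalg.norm(v)
for j in range(1, m):
    w = A @ V[:, j - 1]
    w -= V[:, :j] @ (V[:, :j].T @ w)   # Gram-Schmidt orthogonalization
    w -= V[:, :j] @ (V[:, :j].T @ w)   # second pass for stability
    V[:, j] = w / np.linalg.norm(w)

theta = np.linalg.eigvalsh(V.T @ A @ V)   # Ritz values (ascending)
lam = np.linalg.eigvalsh(A)               # exact eigenvalues, for comparison
print(abs(theta[-1] - lam[-1]))
```

Even with a subspace of dimension 10 out of 50, the extreme Ritz values are good approximations to the extreme eigenvalues, which is the behavior the Krylov methods of Chapters 4 through 8 exploit.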
Chapter 4 covers the Hermitian eigenvalue problem, $Ax = \lambda x$. Here $A$ is a Hermitian matrix (if $A$ is real, this means $A$ is symmetric). Chapters 4 through 9 all begin with advice on how to choose an algorithm and a brief discussion of transformation methods, which are the standard methods for small- to medium-sized dense matrices. Then there are more detailed templates for each major iterative method. For the Hermitian eigenvalue problem, which is the best understood case, these include the power method, inverse iteration, Rayleigh quotient iteration, subspace iteration, the Lanczos method, the implicitly restarted Lanczos method, the band Lanczos method, and the Jacobi-Davidson method. Each chapter ends with a summary of stability and accuracy assessment.
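The two simplest of these templates fit in a few lines each. The sketch below (NumPy, with a small hypothetical symmetric matrix whose eigenvalues are $3 \pm \sqrt{3}$ and $3$) runs the power method, which converges to the eigenvalue largest in magnitude, and inverse iteration, which converges to the eigenvalue closest to a chosen shift.

```python
import numpy as np

# Hypothetical symmetric test matrix; eigenvalues are 3 - sqrt(3), 3, 3 + sqrt(3).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# Power method: repeated multiplication by A converges to the
# eigenvector of the eigenvalue largest in magnitude.
x = np.ones(3)
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)
lam_max = x @ A @ x          # Rayleigh quotient estimate of the eigenvalue

# Inverse iteration: applying (A - sigma*I)^{-1} instead amplifies
# the component belonging to the eigenvalue closest to the shift sigma.
sigma = 1.0
y = np.ones(3)
for _ in range(200):
    y = np.linalg.solve(A - sigma * np.eye(3), y)
    y /= np.linalg.norm(y)
lam_near = y @ A @ y

print(lam_max, lam_near)
```

Rayleigh quotient iteration, the Lanczos method, and the other templates in the chapter can be read as successive refinements of these two ideas.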
Chapter 5 covers generalized Hermitian eigenvalue problems $Ax = \lambda Bx$, where $A$ and $B$ are Hermitian and $B$ is also positive definite. The methods discussed are analogs of those discussed in Chapter 4.
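One standard way this reduction to the Hermitian case works, sketched below in NumPy with a small hypothetical pair, is through the Cholesky factorization $B = LL^*$: the pair $(A, B)$ then has the same eigenvalues as the single Hermitian matrix $L^{-1}AL^{-*}$.

```python
import numpy as np

# Hypothetical Hermitian pair (A, B) with B positive definite.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[2.0, 0.0], [0.0, 1.0]])

# With B = L L^T (Cholesky), A x = lambda B x is equivalent to the
# standard Hermitian problem (L^{-1} A L^{-T}) y = lambda y, y = L^T x.
L = np.linalg.cholesky(B)
M = np.linalg.solve(L, A)        # L^{-1} A
C = np.linalg.solve(L, M.T).T    # L^{-1} A L^{-T}

lam, Y = np.linalg.eigh(C)       # standard Hermitian eigensolve
X = np.linalg.solve(L.T, Y)      # recover x = L^{-T} y

# Check the residual A x - lambda B x for each recovered eigenpair.
res = np.linalg.norm(A @ X - (B @ X) * lam)
print(lam, res)
```

The iterative analogs in the chapter avoid forming $L^{-1}AL^{-*}$ explicitly, but rest on the same equivalence.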
Chapter 6 covers the singular value decomposition $A = U\Sigma V^*$. The chapter emphasizes methods for large sparse matrices that compute just selected parts of this decomposition. The methods discussed are analogs of those discussed in Chapter 4.
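For a small dense example the full decomposition can be computed directly; the sketch below (NumPy, hypothetical random matrix) also checks the identity that connects the SVD to Chapter 4: the singular values of $A$ are the square roots of the eigenvalues of the Hermitian matrix $A^*A$, which is one route by which iterative methods reach selected singular triplets.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))   # hypothetical rectangular test matrix

# Full decomposition A = U Sigma V^*. For large sparse A one would
# instead compute only a few of the largest or smallest singular
# triplets iteratively, as the chapter's templates describe.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The squared singular values equal the eigenvalues of the Hermitian
# matrix A^T A, so Hermitian eigensolvers apply to SVD problems.
lam = np.linalg.eigvalsh(A.T @ A)
print(np.sort(s**2), np.sort(lam))
```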
Chapter 7 covers the non-Hermitian eigenvalue problem $Ax = \lambda x$, where $A$ is a general square matrix. At the heart of the chapter are sections on Arnoldi's method, the non-Hermitian Lanczos method, and the Jacobi-Davidson method, including implicitly restarted, block, and band variants.
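To give a feel for the Krylov machinery underlying Arnoldi's method, here is a minimal Arnoldi iteration in NumPy on a hypothetical random matrix: it builds an orthonormal basis $V$ and a small upper Hessenberg matrix $H$ satisfying $AV_m = V_{m+1}\bar{H}_m$, whose eigenvalues (Ritz values) approximate eigenvalues of $A$. The chapter's templates add the restarting, blocking, and deflation this bare sketch omits.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 8
A = rng.standard_normal((n, n))   # hypothetical general (non-Hermitian) matrix

# Arnoldi iteration: orthonormal Krylov basis V and upper Hessenberg H.
V = np.zeros((n, m + 1))
H = np.zeros((m + 1, m))
v = rng.standard_normal(n)
V[:, 0] = v / np.linalg.norm(v)
for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):          # modified Gram-Schmidt
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]

# Ritz values: eigenvalues of the leading m-by-m block of H.
ritz = np.linalg.eigvals(H[:m, :m])
print(ritz)
```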
Chapter 8 covers the generalized non-Hermitian eigenvalue problem, $Ax = \lambda Bx$. This includes both the regular case, where $A$ and $B$ are $n$ by $n$ and have $n$ eigenvalues that are continuous functions of the entries of $A$ and $B$, and the singular case, where there may be fewer than $n$ eigenvalues, discontinuous eigenvalues, or (when $A$ and $B$ are not square) no eigenvalues at all. In the regular case, we discuss oblique projection Jacobi-Davidson, rational Krylov, and a special variant of the Lanczos method for symmetric indefinite pairs.
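The distinction matters computationally. As a small NumPy sketch (hypothetical matrices) shows, a regular pair with invertible $B$ can in principle be reduced to a standard eigenvalue problem; the comments note why this reduction breaks down when $B$ is singular, which is why robust library routines work with the pair $(A, B)$ directly, e.g. via the QZ algorithm.

```python
import numpy as np

# A regular pair (A, B): B is invertible, so A x = lambda B x reduces
# to the standard problem (B^{-1} A) x = lambda x with n eigenvalues.
A = np.array([[2.0, 1.0], [0.0, 6.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
lam = np.linalg.eigvals(np.linalg.solve(B, A))
print(np.sort(lam.real))

# When B is singular this reduction is impossible: a regular pair
# with det(B) = 0 has infinite eigenvalues, and a singular pair may
# have eigenvalues that are not well defined at all. Library routines
# therefore factor the pair (A, B) jointly instead of forming B^{-1} A.
```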
Chapter 9 covers two kinds of nonlinear eigenvalue problems. First, we discuss polynomial eigenvalue problems, which are defined by three or more matrices. Second, we discuss nonlinear eigenvalue problems with orthogonality constraints.
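For the polynomial case, the standard device is linearization: a quadratic problem $(\lambda^2 M + \lambda C + K)x = 0$, defined by the three matrices $M$, $C$, and $K$, converts into a generalized eigenvalue problem of twice the size, which the methods of Chapter 8 can then handle. A minimal NumPy sketch with hypothetical diagonal test matrices:

```python
import numpy as np

# Hypothetical quadratic eigenvalue problem (lambda^2 M + lambda C + K) x = 0.
# A companion linearization turns it into a generalized problem:
#   [ 0   I ]           [ I  0 ]
#   [-K  -C ] z = lambda [ 0  M ] z,   with z = [x; lambda*x].
M = np.eye(2)
C = np.array([[3.0, 0.0], [0.0, 5.0]])
K = np.array([[2.0, 0.0], [0.0, 6.0]])

n = M.shape[0]
Z = np.zeros((n, n))
Ablk = np.block([[Z, np.eye(n)], [-K, -C]])
Bblk = np.block([[np.eye(n), Z], [Z, M]])

# With M invertible we may reduce to a standard eigenproblem of size 2n;
# the 2n eigenvalues are the roots of the two scalar quadratics here.
lam = np.linalg.eigvals(np.linalg.solve(Bblk, Ablk))
print(np.sort(lam.real))
```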
Chapter 10 describes common issues of sparse matrix representation and computation, both sequentially and in parallel, shared by all algorithms. This includes a list of standard sparse matrix formats, algorithms for fast matrix-vector multiplication (the ``BLAS''), a survey of direct linear solvers, a survey of iterative linear solvers, and a brief discussion of how parallelism may be exploited.
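As a concrete example of one standard format, the sketch below (NumPy, hypothetical small matrix) stores a matrix in compressed sparse row (CSR) form and performs the matrix-vector product that iterative eigensolvers rely on, touching only the nonzero entries.

```python
import numpy as np

# Compressed sparse row (CSR) storage: the nonzero values, their
# column indices, and row pointers. Dense matrix for reference:
#   [ 4 0 0 ]
#   [ 0 5 2 ]
#   [ 1 0 3 ]
data   = np.array([4.0, 5.0, 2.0, 1.0, 3.0])
colind = np.array([0, 1, 2, 0, 2])
rowptr = np.array([0, 1, 3, 5])   # row i occupies data[rowptr[i]:rowptr[i+1]]

def csr_matvec(data, colind, rowptr, x):
    """y = A @ x using CSR storage; only the nonzeros are visited."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        lo, hi = rowptr[i], rowptr[i + 1]
        y[i] = data[lo:hi] @ x[colind[lo:hi]]
    return y

x = np.array([1.0, 2.0, 3.0])
print(csr_matvec(data, colind, rowptr, x))   # matches the dense product A @ x
```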
Chapter 11 focuses on inexact methods and preconditioning techniques that are the subject of current research. Many iterative methods depend in part on preconditioning to improve performance and ensure fast convergence.
Inevitably, a large amount of important material had to be excluded from this book. In the appendix, we list some of the subjects not covered, giving references for the reader who is interested in pursuing a particular subject in depth.