Iterative Methods for Linear Systems
Iterative methods generate a sequence of approximations $x^{(k)}$ converging to the solution of $Ax = b$. They are essential for large sparse systems, where direct methods are too expensive in memory or computation.
Classical Iterative Methods
Write $A = D - L - U$, where $D$ is the diagonal of $A$, $L$ is strictly lower triangular, and $U$ is strictly upper triangular. The Jacobi iteration is $x^{(k+1)} = D^{-1}(L + U)x^{(k)} + D^{-1}b$, i.e., $x_i^{(k+1)} = \frac{1}{a_{ii}}\bigl(b_i - \sum_{j \neq i} a_{ij} x_j^{(k)}\bigr)$. The Gauss-Seidel iteration uses the latest values: $x_i^{(k+1)} = \frac{1}{a_{ii}}\bigl(b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)}\bigr)$.
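The two componentwise updates above can be sketched in NumPy; the test system here is a made-up strictly diagonally dominant matrix, not one from the text:

```python
import numpy as np

def jacobi(A, b, x0, iters=200):
    """Jacobi: every component is updated from the previous iterate."""
    D = np.diag(A)            # diagonal entries a_ii
    R = A - np.diagflat(D)    # off-diagonal part of A
    x = x0.copy()
    for _ in range(iters):
        x = (b - R @ x) / D   # x^{k+1} = D^{-1}(b - (A - D) x^k)
    return x

def gauss_seidel(A, b, x0, iters=200):
    """Gauss-Seidel: each component uses the freshest available values."""
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

# Hypothetical strictly diagonally dominant test system
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = gauss_seidel(A, b, np.zeros(3))
```

Note that Jacobi updates all components from the old iterate (and so parallelizes trivially), while Gauss-Seidel sweeps through components in order.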
The SOR method with relaxation parameter $\omega$ is: $x_i^{(k+1)} = (1-\omega)\,x_i^{(k)} + \frac{\omega}{a_{ii}}\bigl(b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)}\bigr)$. For $\omega = 1$, SOR reduces to Gauss-Seidel. The iteration matrix is $L_\omega = (D - \omega L)^{-1}\bigl[(1-\omega)D + \omega U\bigr]$. Kahan's theorem states that SOR converges only if $0 < \omega < 2$.
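A minimal sketch of the SOR sweep, blending the Gauss-Seidel update with the old value by $\omega$; the matrix and the choice $\omega = 1.2$ are illustrative assumptions:

```python
import numpy as np

def sor(A, b, omega, x0, iters=200):
    """One SOR method: Gauss-Seidel update relaxed by omega.
    omega = 1 recovers Gauss-Seidel; convergence requires 0 < omega < 2."""
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            gs = (b[i] - s) / A[i, i]           # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * gs
    return x

# Hypothetical SPD test system with a mild over-relaxation
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor(A, b, 1.2, np.zeros(3))
```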
Convergence Theory
An iterative method $x^{(k+1)} = Tx^{(k)} + c$ converges for all initial guesses if and only if the spectral radius $\rho(T) < 1$. The asymptotic convergence rate is $-\log_{10}\rho(T)$: roughly $1/(-\log_{10}\rho(T))$ iterations reduce the error by a factor of 10. For strictly diagonally dominant matrices ($|a_{ii}| > \sum_{j \neq i} |a_{ij}|$ for all $i$), both Jacobi and Gauss-Seidel converge. For SPD matrices, Gauss-Seidel always converges.
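The spectral-radius criterion can be checked numerically. This sketch forms the Jacobi iteration matrix $T = I - D^{-1}A$ for a small made-up diagonally dominant matrix (an assumption, not an example from the text) and estimates the iterations needed per decimal digit:

```python
import numpy as np

# Hypothetical strictly diagonally dominant test matrix
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
D_inv = np.diag(1.0 / np.diag(A))
T = np.eye(3) - D_inv @ A                  # Jacobi iteration matrix D^{-1}(L+U)
rho = max(abs(np.linalg.eigvals(T)))       # spectral radius rho(T)
iters_per_digit = 1.0 / -np.log10(rho)     # iterations per factor-of-10 error drop
```

Here $\rho(T) \approx 0.354 < 1$, so Jacobi converges, gaining a decimal digit roughly every 2.2 iterations.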
For the discrete Laplacian on an $n \times n$ grid with mesh size $h = 1/(n+1)$, the Jacobi spectral radius is $\rho(T_J) = \cos(\pi h) \approx 1 - \pi^2 h^2/2$. The optimal SOR parameter is $\omega^* = \frac{2}{1 + \sin(\pi h)} \approx 2 - 2\pi h$ for small $h$, giving $\rho(L_{\omega^*}) = \omega^* - 1 \approx 1 - 2\pi h$. This reduces the iteration count from $O(n^2)$ (Jacobi/Gauss-Seidel) to $O(n)$.
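These formulas can be verified numerically on the one-dimensional analogue of the model problem, the tridiagonal $(-1, 2, -1)$ Laplacian, which has the same Jacobi spectral radius $\cos(\pi h)$; using the 1-D matrix here is a simplifying assumption:

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)                                 # mesh size
# 1-D discrete Laplacian: tridiagonal (-1, 2, -1)
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
T = np.eye(n) - np.diag(1.0 / np.diag(A)) @ A     # Jacobi iteration matrix
rho_jacobi = max(abs(np.linalg.eigvals(T)))       # numerically cos(pi*h)
omega_opt = 2.0 / (1.0 + np.sin(np.pi * h))       # optimal SOR parameter
```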
Krylov Subspace Methods
For SPD $A$, the conjugate gradient (CG) method generates iterates $x_k$ that minimize the error in the $A$-norm, $\|x - x_k\|_A$, over the shifted Krylov subspace $x_0 + \mathcal{K}_k(A, r_0)$, where $\mathcal{K}_k(A, r_0) = \operatorname{span}\{r_0, Ar_0, \dots, A^{k-1}r_0\}$. The algorithm requires only one matrix-vector product, two inner products, and three vector updates per iteration. CG converges in at most $n$ iterations in exact arithmetic, and satisfies $\|x - x_k\|_A \le 2\left(\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}\right)^{k} \|x - x_0\|_A$, where $\kappa = \kappa_2(A)$ is the condition number.
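A self-contained sketch of standard CG, showing the per-iteration cost described above (one matvec, two inner products, three vector updates); the 1-D Laplacian test problem is an illustrative assumption:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, maxiter=None):
    """CG for SPD A: one matvec, two inner products, three vector updates per step."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x                    # residual r_k = b - A x_k
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(maxiter if maxiter is not None else n):
        Ap = A @ p                   # the single matrix-vector product
        alpha = rs / (p @ Ap)        # exact line search in the A-norm
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # next A-conjugate direction
        rs = rs_new
    return x

# Hypothetical SPD test problem: 1-D discrete Laplacian
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b, maxiter=100)
```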
Preconditioning replaces $Ax = b$ with $M^{-1}Ax = M^{-1}b$, where $M \approx A$ is easy to invert. Preconditioned CG converges at a rate governed by $\kappa(M^{-1}A)$ rather than $\kappa(A)$. Common preconditioners include: incomplete Cholesky (IC(0)), algebraic multigrid (AMG), and domain decomposition methods. For well-designed preconditioners, the iteration count becomes independent of problem size.
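A sketch of preconditioned CG, where the preconditioner enters only through an application of $M^{-1}$ to the residual. The simplest choice shown here, the diagonal (Jacobi) preconditioner $M = \operatorname{diag}(A)$, is an illustrative assumption rather than one of the stronger preconditioners named above:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=None):
    """Preconditioned CG; M_inv(r) applies M^{-1} to a vector."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    z = M_inv(r)                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter if maxiter is not None else n):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # direction conjugate w.r.t. M^{-1}A
        rz = rz_new
    return x

# Jacobi (diagonal) preconditioner: M = diag(A)
n = 30
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d, maxiter=200)
```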