Numerical Stability
Numerical stability characterizes how errors propagate through an algorithm. A numerically stable algorithm produces results with errors commensurate with the precision of the input data and the conditioning of the problem.
An algorithm is:
- Stable: Forward error is bounded by $C\,\kappa\,u$, where $C$ is a modest constant, $\kappa$ is the problem's condition number, and $u$ is the unit roundoff
- Backward stable: The computed result is the exact solution to a problem with input data perturbed by $O(u)$ in a relative sense
- Unstable: Errors grow unboundedly with problem size or precision requirements
Backward stability is a stronger condition than forward stability and generally preferred in numerical linear algebra.
The distinction between stability types becomes clear in matrix computations. For solving $Ax = b$, Gaussian elimination with partial pivoting is backward stable: the computed solution $\hat{x}$ satisfies $(A + \Delta A)\hat{x} = b$, where $\|\Delta A\| \le c_n\,\rho\,u\,\|A\|$ for a low-degree polynomial $c_n$ in the dimension $n$ and growth factor $\rho$. This implies forward stability, since the forward error is bounded by $\|\hat{x} - x\| / \|x\| \lesssim \kappa(A)\,\|\Delta A\| / \|A\|$.
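The gap between a tiny residual (backward stability) and a possibly large solution error (ill-conditioning) can be seen numerically. The sketch below, a hypothetical illustration using a pure-Python Gaussian elimination and the classically ill-conditioned Hilbert matrix, builds a right-hand side whose underlying exact solution is all ones (computed in rational arithmetic, then rounded once); the residual comes out near machine precision while the error in the solution is many orders of magnitude larger:

```python
from fractions import Fraction

def hilbert(n):
    # Hilbert matrix: a classic ill-conditioned test matrix.
    return [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]

def solve_gepp(A, b):
    # Gaussian elimination with partial pivoting, on copies of A and b.
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

n = 10
A = hilbert(n)
# Right-hand side chosen so the exact solution of the exact (rational)
# system is all ones; each entry is rounded to a float exactly once.
b = [float(sum(Fraction(1, i + j + 1) for j in range(n))) for i in range(n)]
x = solve_gepp(A, b)

residual = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i]) for i in range(n))
forward_err = max(abs(xi - 1.0) for xi in x)
print(f"residual:      {residual:.2e}")   # tiny: backward stability
print(f"forward error: {forward_err:.2e}")  # far larger: ill-conditioning
```

The residual is small because the computed $\hat{x}$ exactly solves a nearby system; the forward error is large because the Hilbert matrix's condition number amplifies that small backward perturbation.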
Consider evaluating the polynomial $p(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$.
Naive method (power computation): requires computing $x^i$ separately for each term, introducing $O(n^2)$ operations and potentially large rounding errors.
Horner's method (nested multiplication): rewrites the polynomial as $p(x) = (\cdots(a_n x + a_{n-1})x + \cdots + a_1)x + a_0$ and requires only $n$ multiplications and $n$ additions. This method is backward stable: the computed result equals $\hat{p}(x)$, where $\hat{p}$ has coefficients within a relative distance of roughly $2nu$ of the original coefficients.
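The two evaluation schemes can be sketched in a few lines of Python (a minimal illustration; the function names are my own):

```python
def horner(coeffs, x):
    # coeffs[i] multiplies x**i; one multiply and one add per coefficient.
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

def naive_eval(coeffs, x):
    # Recomputes each power x**i from scratch: roughly O(n^2) multiplications
    # when powers are formed by repeated multiplication.
    return sum(c * x**i for i, c in enumerate(coeffs))

coeffs = [5.0, -3.0, 2.0]       # p(x) = 5 - 3x + 2x^2
print(horner(coeffs, 2.0))      # 7.0
print(naive_eval(coeffs, 2.0))  # 7.0
```

Both give the same answer here; the difference is that Horner's loop does the minimum amount of arithmetic, which is exactly what makes its backward-error analysis so clean.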
Growth Factors: In Gaussian elimination, the growth factor measures how large matrix elements can become during factorization: $\rho = \max_{i,j,k} |a_{ij}^{(k)}| / \max_{i,j} |a_{ij}|$, where the $a_{ij}^{(k)}$ are the intermediate values after step $k$. Partial pivoting guarantees $\rho \le 2^{n-1}$, though in practice $\rho$ is typically modest. Complete pivoting ensures a much smaller, subexponential bound on $\rho$.
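The worst-case bound $\rho = 2^{n-1}$ is actually attained by a well-known family of matrices (ones on the diagonal, $-1$ below it, ones in the last column), for which partial pivoting performs no row swaps while the last column doubles at every elimination step. A small pure-Python sketch, under that construction:

```python
def growth_factor(A):
    # Run Gaussian elimination with partial pivoting, tracking the
    # largest intermediate element relative to the largest original one.
    n = len(A)
    A = [row[:] for row in A]
    initial_max = max(abs(a) for row in A for a in row)
    peak = initial_max
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
        peak = max(peak, max(abs(a) for row in A for a in row))
    return peak / initial_max

def worst_case(n):
    # 1 on the diagonal, -1 below it, 1 in the last column:
    # the classic worst case for partial pivoting.
    return [[1.0 if j == i or j == n - 1 else (-1.0 if j < i else 0.0)
             for j in range(n)] for i in range(n)]

print(growth_factor(worst_case(10)))  # 512.0 == 2**(10 - 1)
```

Such matrices are contrived; the empirical observation that $\rho$ stays small on matrices arising in practice is why partial pivoting remains the default.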
Achieving numerical stability often requires algorithmic modifications. For example, computing the sample variance naively as $s^2 = \frac{1}{n-1}\bigl(\sum_{i=1}^n x_i^2 - n\bar{x}^2\bigr)$ can suffer catastrophic cancellation when the variance is small relative to the mean. The two-pass algorithm first computes the mean $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$, then computes $s^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})^2$, avoiding the cancellation.
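The cancellation is easy to trigger. In this sketch (function names are my own), three values with a tiny spread around a huge mean have true sample variance exactly 1.0; the one-pass formula subtracts two nearly equal $10^{18}$-sized quantities and loses everything, while the two-pass formula is exact:

```python
def variance_naive(xs):
    # One-pass textbook formula: subtracts two huge, nearly equal sums,
    # cancelling catastrophically when the variance is tiny vs. the mean.
    n = len(xs)
    mean = sum(xs) / n
    return (sum(x * x for x in xs) - n * mean * mean) / (n - 1)

def variance_two_pass(xs):
    # First pass: mean. Second pass: sum of squared deviations.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

# Tiny spread around a huge mean: true sample variance is 1.0.
data = [1e9 - 1.0, 1e9, 1e9 + 1.0]
print(variance_naive(data), variance_two_pass(data))  # → 0.0 1.0
```

The naive formula returns 0.0 here because the squares round to multiples of 128 near $10^{18}$, wiping out the information that carried the variance.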
Compensated summation algorithms like Kahan summation track and correct rounding errors explicitly. When summing $n$ numbers, ordinary recursive summation accumulates relative error $O(nu)$, while Kahan summation achieves $O(u)$ (up to a negligible $O(nu^2)$ term) independent of $n$, approaching backward stability at the cost of a few additional operations per term.
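A minimal sketch of Kahan summation, with a deliberately extreme input: one large value followed by a million tiny ones. Ordinary summation rounds every tiny addend away entirely; the compensation term accumulates them:

```python
def kahan_sum(xs):
    # Compensated summation: carry the rounding error of each addition
    # forward in a separate correction term c.
    s = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in xs:
        y = x - c          # apply the correction to the incoming term
        t = s + y          # s is big, y is small: low bits of y are lost
        c = (t - s) - y    # algebraically zero; recovers what was lost
        s = t
    return s

# One large term, then a million terms too small to register individually.
xs = [1.0] + [1e-16] * 1_000_000   # true sum: 1.0 + 1e-10
print(sum(xs))        # 1.0 -- every tiny term is rounded away
print(kahan_sum(xs))  # ≈ 1.0000000001 -- compensation recovers them
```

Python's standard library also offers `math.fsum`, which produces a correctly rounded sum and is usually the right tool when this level of accuracy matters.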
Stability analysis guides algorithm selection: for least-squares problems, QR factorization is more stable than forming the normal equations; for eigenvalue problems, the QR algorithm is preferred over power iteration when multiple eigenvalues are needed. These choices reflect a deep understanding of error propagation in finite-precision arithmetic.