
Numerical Stability

Numerical stability characterizes how errors propagate through an algorithm. A numerically stable algorithm produces results with errors commensurate with the precision of the input data and the conditioning of the problem.

Definition: Stability Classifications

An algorithm is:

  1. Stable: Forward error is bounded by $C \cdot \epsilon_{\text{mach}} \cdot \kappa$, where $C$ is a modest constant and $\kappa$ is the problem's condition number
  2. Backward stable: The computed result is the exact solution to a problem with input data perturbed by $O(\epsilon_{\text{mach}})$
  3. Unstable: Errors grow unboundedly with problem size or precision requirements

Backward stability is a stronger condition than forward stability and is generally preferred in numerical linear algebra.

The distinction between stability types becomes clear in matrix computations. For solving $Ax = b$, Gaussian elimination with partial pivoting is backward stable: the computed solution $\tilde{x}$ satisfies $(A + \Delta A)\tilde{x} = b$ where $\|\Delta A\| = O(\epsilon_{\text{mach}})\|A\|$. This implies forward stability, since the forward error is then bounded by $\kappa(A) \cdot \epsilon_{\text{mach}}$ up to a modest constant.
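As a concrete (if toy-scale) illustration, the sketch below implements elimination with partial pivoting in plain Python and checks the residual $\|A\tilde{x} - b\|$, which a backward stable solver keeps near machine precision. The function name `solve_gepp` and the test matrix are ours, not from the text.

```python
def solve_gepp(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix [A | b]
    for k in range(n):
        # Partial pivoting: bring the largest |entry| in column k to row k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]          # multiplier, |m| <= 1 by pivoting
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        s = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = solve_gepp(A, b)
# Residual in the max norm; tiny residual <=> small backward error here.
residual = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i])
               for i in range(3))
```

For this well-conditioned system the exact solution is $(2/9,\,1/9,\,13/9)$, and the residual comes out at the level of $\epsilon_{\text{mach}}$.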

Example: Evaluating Polynomials

Consider evaluating $p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$.

Naive method (power computation): $p(x) = \sum_{i=0}^{n} a_i x^i$ requires computing $x^i$ for each term, taking $O(n^2)$ operations and potentially accumulating large errors.

Horner's method (nested multiplication): $p(x) = a_0 + x(a_1 + x(a_2 + \cdots + x(a_{n-1} + x a_n)\cdots))$ requires only $n$ multiplications and $n$ additions. This method is backward stable: the computed result equals $\tilde{p}(x)$ where $\tilde{p}$ has coefficients within relative perturbations of size $O(n\,\epsilon_{\text{mach}})$ of the original coefficients.
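A minimal sketch of both schemes; the function names `naive` and `horner` are ours. Using $(x-1)^6$ makes the values easy to check by hand.

```python
def naive(coeffs, x):
    """Power form: sum a_i * x**i, computing each power separately."""
    return sum(a * x**i for i, a in enumerate(coeffs))

def horner(coeffs, x):
    """Nested multiplication: n multiplies and n adds, processed
    from the leading coefficient a_n down to a_0."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# Coefficients a_0..a_6 of (x - 1)^6 expanded.
coeffs = [1, -6, 15, -20, 15, -6, 1]
```

At $x = 2$ both methods return $(2-1)^6 = 1$ exactly; the stability difference shows up for large degrees and ill-scaled coefficients, where Horner's tighter error bound matters.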

Remark

Growth Factors: In Gaussian elimination, the growth factor $g$ measures how large matrix elements can become during factorization: $g = \frac{\max_{i,j,k} |a_{ij}^{(k)}|}{\max_{i,j} |a_{ij}|}$, where the $a_{ij}^{(k)}$ are intermediate values. Partial pivoting guarantees $g \leq 2^{n-1}$, though in practice $g$ is typically modest. Complete pivoting ensures $g \leq n^{1/2} (2 \cdot 3^{1/2} \cdot 4^{1/3} \cdots n^{1/(n-1)})^{1/2}$.
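The bound $g \leq 2^{n-1}$ is attained by a classic worst-case matrix: 1 on the diagonal, $-1$ below it, and 1 in the last column, whose final column doubles at every elimination step. A sketch (our helper name `growth_factor`, assuming the definition above):

```python
def growth_factor(A):
    """Growth factor of Gaussian elimination with partial pivoting:
    max over all intermediate |a_ij^(k)| divided by max original |a_ij|."""
    n = len(A)
    M = [row[:] for row in A]
    a_max = max(abs(M[i][j]) for i in range(n) for j in range(n))
    g_max = a_max
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= m * M[k][j]
        g_max = max(g_max, max(abs(M[i][j])
                               for i in range(n) for j in range(n)))
    return g_max / a_max

n = 10
# Worst-case matrix: diagonal 1, strictly lower triangle -1, last column 1.
W = [[1.0 if i == j or j == n - 1 else (-1.0 if i > j else 0.0)
     for j in range(n)] for i in range(n)]
```

For this $10 \times 10$ matrix the computed growth factor is exactly $2^{9} = 512$, matching the theoretical bound.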

Achieving numerical stability often requires algorithmic modifications. For example, computing the sample variance naively as $\frac{1}{n}\sum x_i^2 - \left(\frac{1}{n}\sum x_i\right)^2$ can suffer catastrophic cancellation when the variance is small relative to the mean. The two-pass algorithm first computes the mean $\bar{x} = \frac{1}{n}\sum x_i$, then computes $\frac{1}{n}\sum (x_i - \bar{x})^2$, avoiding cancellation.
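The failure mode is easy to trigger with data whose mean is large compared to its spread. A sketch of both formulas (function names ours):

```python
def var_naive(xs):
    """One-pass formula E[x^2] - (E[x])^2: subtracts two nearly
    equal large quantities, so leading digits cancel."""
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def var_two_pass(xs):
    """Two-pass formula: compute the mean first, then average the
    squared deviations, which are small and cancellation-free."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

# Large mean (1e8), tiny spread: true population variance is 2/3.
data = [1e8, 1e8 + 1.0, 1e8 + 2.0]
```

In double precision the two-pass result is correct to full accuracy, while the naive formula loses essentially all significant digits (the subtraction happens at a scale where the spacing between adjacent doubles exceeds the true variance).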

Compensated summation algorithms like Kahan summation track and correct rounding errors explicitly. When summing $n$ numbers, ordinary summation accumulates relative error $O(n\,\epsilon_{\text{mach}})$, while Kahan summation achieves $O(\epsilon_{\text{mach}})$ independent of $n$, approaching backward stability at the cost of additional operations.
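A sketch of the Kahan loop, compared against `math.fsum` as a correctly rounded reference (the variable names are ours):

```python
import math

def kahan_sum(xs):
    """Compensated summation: c holds the low-order bits lost when
    each term is added, and feeds them back into the next step."""
    total = 0.0
    c = 0.0                    # running compensation
    for x in xs:
        y = x - c              # corrected next term
        t = total + y          # low-order bits of y are lost here...
        c = (t - total) - y    # ...and recovered algebraically
        total = t
    return total

xs = [0.1] * 1_000_000         # 0.1 is not exactly representable
exact = math.fsum(xs)          # correctly rounded reference sum
```

Plain `sum(xs)` drifts from the reference by roughly $10^{-6}$ here, while the compensated result agrees with it to machine precision.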

Stability analysis guides algorithm selection: for least-squares problems, QR factorization is more stable than the normal equations, which square the condition number; for eigenvalue problems, the QR algorithm is preferred over power iteration when more than one eigenvalue is needed. These choices reflect a deep understanding of error propagation in finite precision arithmetic.