Rounding Errors and Error Analysis
Numerical computation in finite precision arithmetic accumulates errors through successive operations. Analyzing these errors is fundamental to assessing algorithm reliability and numerical stability.
Let $x$ be the true value and $\hat{x}$ be an approximation. The absolute error is:
$$E_{\text{abs}} = |x - \hat{x}|$$
The relative error is:
$$E_{\text{rel}} = \frac{|x - \hat{x}|}{|x|},$$
provided $x \neq 0$. In floating point arithmetic, relative error is typically the more meaningful measure.
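A minimal sketch of these two error measures in Python (the helper names and the $22/7$ example are illustrative, not part of the text above):

```python
import math

def abs_error(x, xhat):
    """Absolute error |x - xhat|."""
    return abs(x - xhat)

def rel_error(x, xhat):
    """Relative error |x - xhat| / |x|; requires x != 0."""
    return abs(x - xhat) / abs(x)

# Illustration: 22/7 as an approximation to pi.
print(abs_error(math.pi, 22 / 7))  # ~1.26e-3
print(rel_error(math.pi, 22 / 7))  # ~4.02e-4
```

Note that the relative error is meaningful at any scale, while the absolute error only makes sense relative to the magnitude of $x$.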
The fundamental model of floating point arithmetic assumes that each elementary operation satisfies:
$$\mathrm{fl}(a \operatorname{op} b) = (a \operatorname{op} b)(1 + \delta),$$
where $|\delta| \le u$ and $u$ is the unit roundoff. This leads to forward and backward error analysis frameworks.
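The model can be observed directly in IEEE double precision, where $u = \varepsilon/2 \approx 1.1 \times 10^{-16}$; a small sketch (the bound of $4u$ below is a loose illustrative choice):

```python
import sys

u = sys.float_info.epsilon / 2  # unit roundoff for IEEE double, ~1.1e-16

# The model fl(a op b) = (a op b)(1 + delta), |delta| <= u, in action:
# 0.1 + 0.2 is not exactly 0.3, but the discrepancy is a few roundoffs
# (the inputs 0.1 and 0.2 also carry representation error).
s = 0.1 + 0.2
print(s == 0.3)                     # False
print(abs(s - 0.3) <= 4 * u * 0.3)  # True: within a few units of roundoff
```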
Consider an algorithm computing $y = f(x)$ that produces $\hat{y}$:
- Forward error: $|y - \hat{y}|$ measures the error in the output
- Backward error: the smallest $|\Delta x|$ such that $\hat{y} = f(x + \Delta x)$ exactly
An algorithm is backward stable if the computed solution is the exact solution to a slightly perturbed problem, i.e., $\hat{y} = f(x + \Delta x)$ where $|\Delta x|/|x| = O(u)$.
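For square root, the backward perturbation can be recovered in closed form: if $\hat{y} \approx \sqrt{x}$, then $\hat{y}$ is the exact square root of $x + \Delta x$ with $\Delta x = \hat{y}^2 - x$. A sketch of this diagnostic (the final squaring itself rounds, so this only estimates the backward error):

```python
import math
import sys

u = sys.float_info.epsilon / 2  # unit roundoff, ~1.1e-16

x = 2.0
yhat = math.sqrt(x)      # computed result
dx = yhat * yhat - x     # backward perturbation: yhat = sqrt(x + dx) exactly
print(abs(dx) / abs(x))  # a small multiple of u, consistent with backward stability
```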
Computing $\sqrt{x+1} - \sqrt{x}$ for large $x$ suffers from catastrophic cancellation. Direct evaluation subtracts two nearly equal numbers, losing precision.
Rationalizing the numerator,
$$\sqrt{x+1} - \sqrt{x} = \frac{1}{\sqrt{x+1} + \sqrt{x}},$$
avoids cancellation and is numerically stable.
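A side-by-side sketch of the two forms (the test value $x = 10^{12}$ is an illustrative choice):

```python
import math

def naive(x):
    """Direct form sqrt(x+1) - sqrt(x): cancels badly for large x."""
    return math.sqrt(x + 1) - math.sqrt(x)

def stable(x):
    """Rationalized form 1 / (sqrt(x+1) + sqrt(x)): no cancellation."""
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e12
print(stable(x))  # ~5e-7, accurate to nearly full precision
print(naive(x))   # same magnitude, but only the leading digits are correct
```

At $x = 10^{12}$ both operands of the naive subtraction are near $10^6$, so the difference inherits the absolute rounding error of numbers that size, wiping out most of the significant digits of a result near $5 \times 10^{-7}$.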
Condition Number: For a problem $f: x \mapsto f(x)$, the relative condition number is:
$$\kappa(x) = \lim_{\varepsilon \to 0} \, \sup_{|\Delta x| \le \varepsilon |x|} \frac{|f(x + \Delta x) - f(x)| / |f(x)|}{|\Delta x| / |x|}$$
For differentiable $f$, this is approximately $\kappa(x) = \left| \frac{x f'(x)}{f(x)} \right|$. A problem is ill-conditioned if $\kappa \gg 1$, meaning small input perturbations cause large output changes regardless of the algorithm used.
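The differentiable-case formula is easy to evaluate directly; a sketch with two illustrative problems (the helper `cond` is hypothetical):

```python
import math

def cond(f, fprime, x):
    """Relative condition number |x f'(x) / f(x)|; assumes f differentiable, f(x) != 0."""
    return abs(x * fprime(x) / f(x))

# sqrt is well-conditioned everywhere: kappa = 1/2.
print(cond(math.sqrt, lambda t: 0.5 / math.sqrt(t), 100.0))  # 0.5

# Subtraction of nearby values is ill-conditioned: f(x) = x - 1 near x = 1
# has kappa = |x / (x - 1)|, which blows up as x -> 1.
print(cond(lambda t: t - 1.0, lambda t: 1.0, 1.0001))        # ~1e4
```

This makes the cancellation example above quantitative: the subtraction itself is the ill-conditioned step, independent of how it is implemented.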
Error accumulation follows different patterns depending on whether errors compound additively or multiplicatively. For a sequence of $n$ operations, accumulated relative error typically grows as $O(nu)$ under favorable conditions, but can be exponential for unstable algorithms. This distinction separates stable algorithms from unstable ones.
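The additive growth pattern can be seen in summation: naive left-to-right summation accumulates error roughly like $nu$, while Kahan compensated summation keeps it at $O(u)$ independent of $n$. A sketch (the sum of a million copies of $0.1$ is an illustrative workload, checked against `math.fsum`):

```python
import math

def naive_sum(xs):
    """Left-to-right summation: rounding error grows roughly with n."""
    s = 0.0
    for v in xs:
        s += v
    return s

def kahan_sum(xs):
    """Compensated summation: carries the low-order bits lost at each add."""
    s, c = 0.0, 0.0
    for v in xs:
        y = v - c        # apply the running correction
        t = s + y        # lossy addition
        c = (t - s) - y  # recover the part of y that was lost
        s = t
    return s

xs = [0.1] * 1_000_000
ref = math.fsum(xs)  # correctly rounded reference sum
print(abs(naive_sum(xs) - ref))  # noticeable accumulated error
print(abs(kahan_sum(xs) - ref))  # at most an ulp or two of the result
```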
Understanding the interplay between problem conditioning and algorithmic stability is essential: even a stable algorithm applied to an ill-conditioned problem will produce inaccurate results, while an unstable algorithm applied to a well-conditioned problem may still fail.