Fundamental Theorem of Rounding
The fundamental theorem of rounding establishes the basic error bound for representing real numbers in floating point arithmetic, forming the foundation for all subsequent error analysis.
Let $F$ be a floating point system with unit roundoff $u$ and round-to-nearest rounding. For any real number $x$ within the normalized range of $F$, the floating point representation satisfies $\mathrm{fl}(x) = x(1 + \delta)$, where $|\delta| \le u$.
Furthermore, for any basic arithmetic operation $\mathrm{op} \in \{+, -, \times, /\}$: $\mathrm{fl}(x \ \mathrm{op}\ y) = (x \ \mathrm{op}\ y)(1 + \delta)$, where $|\delta| \le u$, assuming no overflow or underflow occurs.
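The rounded-operation model can be checked directly in Python, a minimal sketch in which the sample values $x = 0.1$, $y = 0.3$ are illustrative choices, not taken from the theorem: the `fractions` module supplies the exact real sum of the two stored doubles, against which the single rounded addition is compared.

```python
from fractions import Fraction

# Unit roundoff for IEEE 754 double precision: u = 2^-53.
u = 2.0 ** -53

# fl(x + y) = (x + y)(1 + delta) with |delta| <= u.
x, y = 0.1, 0.3                     # sample doubles (already-rounded literals)
computed = x + y                    # one rounded addition
exact = Fraction(x) + Fraction(y)   # exact real sum of the stored values
delta = abs(Fraction(computed) - exact) / exact
assert delta <= u                   # the theorem's bound holds
```

Note that the comparison is against the exact sum of the *stored* values of `x` and `y`; the decimal literals `0.1` and `0.3` have already been rounded once when parsed, which is itself an instance of the representation bound.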
This theorem provides the standard model for floating point arithmetic analysis. The bound $|\delta| \le u$ reflects round-to-nearest rounding; directed rounding modes (round toward zero, round toward $\pm\infty$) satisfy only the weaker bound $|\delta| \le 2u$.
The multiplicative error model is crucial for error analysis because it naturally leads to relative error bounds. In contrast, an additive error model would require tracking absolute errors, which is less informative for numbers of vastly different magnitudes.
The theorem's power lies in its composability. For a sequence of $n$ operations, if operation $i$ introduces error $(1 + \delta_i)$ with $|\delta_i| \le u$, the accumulated error is: $\prod_{i=1}^{n}(1 + \delta_i) = 1 + \theta_n$, where first-order analysis gives $|\theta_n| \lesssim nu$ for small $nu$.
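The first-order claim is easy to check numerically. In this sketch, $n = 1000$ is an arbitrary illustrative choice; the worst case takes every $\delta_i = +u$, and the product is evaluated in log space because $1 + 2^{-53}$ is not itself representable as a double (it would round back to $1.0$).

```python
import math

# First-order bound: prod_{i=1}^n (1 + delta_i) = 1 + theta_n, |theta_n| <~ n*u.
u = 2.0 ** -53
n = 1000
# Worst case with every delta_i = +u, computed as expm1(n * log1p(u))
# to sidestep the fact that 1 + u is not representable as a double.
theta_n = math.expm1(n * math.log1p(u))
# theta_n agrees with n*u up to O((n*u)^2) correction terms.
assert abs(theta_n - n * u) <= n * u * 1e-10
```

The deviation from $nu$ is of order $(nu)^2/2 \approx 6 \times 10^{-27}$ here, far below the first-order term $nu \approx 10^{-13}$.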
Computing the inner product $s = \sum_{i=1}^{n} x_i y_i$ in floating point:
- Each product $\mathrm{fl}(x_i y_i) = x_i y_i (1 + \delta_i)$ has relative error at most $u$
- Summing $n$ terms accumulates error proportional to $n$
- Total relative error: bounded to first order by $nu$ (in the absence of cancellation)
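The accumulation behavior above can be observed by comparing naive left-to-right summation against `math.fsum`, which returns the correctly rounded sum. A sketch with $n = 10^6$ random positive terms (illustrative data, no cancellation):

```python
import math
import random

random.seed(0)
n = 10 ** 6
xs = [random.random() for _ in range(n)]   # positive terms: no cancellation

naive = 0.0
for v in xs:
    naive += v                  # each += incurs one rounding error

exact = math.fsum(xs)           # correctly rounded sum for comparison
rel_err = abs(naive - exact) / exact
u = 2.0 ** -53
# The observed error sits inside the worst-case bound of roughly n*u.
assert rel_err <= n * u
```

In practice the observed error grows more like $\sqrt{n}\,u$ for random rounding errors, so the measured value is usually far below the worst-case $nu$.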
For IEEE 754 double precision with $u = 2^{-53} \approx 1.1 \times 10^{-16}$, a computation with $n = 10^6$ terms gives error $nu \approx 10^{-10}$, losing about 6 decimal digits from machine precision.
The theorem assumes correct rounding, which IEEE 754 guarantees for basic operations. Extended precision arithmetic and fused multiply-add (FMA) operations can achieve tighter bounds, with FMA computing $\mathrm{fl}(xy + z)$ with only a single rounding error rather than two.
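The one-rounding-versus-two distinction can be simulated with exact rational arithmetic rather than a hardware FMA instruction. In this sketch, the inputs are chosen (illustratively) so that $xy - 1$ suffers catastrophic cancellation: two roundings destroy the answer entirely, while a single rounding, as FMA performs, preserves it exactly.

```python
from fractions import Fraction

def round_once(q: Fraction) -> float:
    """Round an exact rational to the nearest double (a single rounding)."""
    return float(q)             # float(Fraction) is correctly rounded

e = 2.0 ** -30
x, y, z = 1.0 + e, 1.0 - e, -1.0    # exact x*y = 1 - 2^-60

# Two roundings: round the product, then round the sum.
two_step = round_once(Fraction(round_once(Fraction(x) * Fraction(y))) + Fraction(z))
# One rounding, as FMA performs: exact x*y + z, rounded once.
fused = round_once(Fraction(x) * Fraction(y) + Fraction(z))

assert two_step == 0.0          # the product rounds to 1.0; all information is lost
assert fused == -(2.0 ** -60)   # the fused result is exact
```

Here $xy = 1 - 2^{-60}$ lies within half an ulp of $1.0$, so rounding the product first leaves nothing for the subtraction to recover.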
Understanding this theorem enables rigorous error analysis: backward error analysis proves algorithms produce results as if computed exactly with slightly perturbed inputs, quantified by small multiples of $u$. Forward error analysis uses condition numbers to bound output error given input perturbations. Together, these techniques form the theoretical foundation of numerical stability analysis.
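The forward-error side can be illustrated with the condition number of summation, $\kappa = \sum_i |x_i| / |\sum_i x_i|$. In this sketch (the data is an illustrative worst case, not from the text), heavy cancellation makes $\kappa \approx 2 \times 10^{16}$, so the bound $\kappa u$ permits total loss of accuracy, and that is exactly what naive summation delivers:

```python
# Condition number of summation: kappa = sum(|x_i|) / |sum(x_i)|.
# With heavy cancellation kappa is huge, and the forward error bound
# kappa * u allows total loss of accuracy -- which is what occurs here.
xs = [1.0, 1e16, -1e16]             # exact sum is 1.0
kappa = sum(abs(v) for v in xs) / abs(1.0)

naive = (xs[0] + xs[1]) + xs[2]     # 1.0 is absorbed into 1e16; result is 0.0
rel_forward_err = abs(naive - 1.0) / 1.0
u = 2.0 ** -53
assert naive == 0.0                  # 100% relative error...
assert rel_forward_err <= kappa * u  # ...yet fully consistent with the bound
```

The computation is backward stable (the result is the exact sum of inputs perturbed by at most a few ulps), so the entire forward error is attributable to the problem's ill-conditioning, not to the algorithm.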