
Rounding Errors and Error Analysis

Numerical computation in finite precision arithmetic accumulates errors through successive operations. Analyzing these errors is fundamental to assessing algorithm reliability and numerical stability.

Definition: Absolute and Relative Error

Let $x$ be the true value and $\tilde{x}$ an approximation. The absolute error is $E_{\text{abs}} = |x - \tilde{x}|$.

The relative error is $E_{\text{rel}} = \frac{|x - \tilde{x}|}{|x|}$, provided $x \neq 0$. In floating-point arithmetic, relative error is typically the more meaningful measure.
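The two definitions above are a one-liner each; this minimal Python sketch (the function names `abs_error` and `rel_error` are illustrative, not from the text) shows why relative error is the more informative of the two:

```python
import math

def abs_error(x, x_approx):
    """Absolute error |x - x~|."""
    return abs(x - x_approx)

def rel_error(x, x_approx):
    """Relative error |x - x~| / |x|, defined for x != 0."""
    return abs(x - x_approx) / abs(x)

# Approximating pi by 22/7: the absolute error looks small,
# but the relative error is what tells us we have only
# about three correct significant digits.
x, x_approx = math.pi, 22 / 7
print(abs_error(x, x_approx))  # ~1.26e-3
print(rel_error(x, x_approx))  # ~4.02e-4
```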

The fundamental model of floating-point arithmetic assumes that each elementary operation $\circ \in \{+, -, \times, \div\}$ satisfies $\text{fl}(x \circ y) = (x \circ y)(1 + \delta)$, where $|\delta| \leq \epsilon_{\text{mach}}$. This leads to the forward and backward error analysis frameworks.
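The bound $|\delta| \leq \epsilon_{\text{mach}}$ can be verified directly for a single operation. This sketch (assuming IEEE 754 double precision, where $\epsilon_{\text{mach}} = 2^{-52}$) uses exact rational arithmetic to measure the actual $\delta$ incurred by one addition:

```python
import sys
from fractions import Fraction

eps_mach = sys.float_info.epsilon  # 2**-52 for IEEE 754 binary64

x, y = 0.1, 0.2
computed = x + y                              # fl(x + y)
exact = Fraction(x) + Fraction(y)             # exact sum of the stored values
delta = (Fraction(computed) - exact) / exact  # fl(x+y) = (x+y)(1 + delta)
assert delta != 0                             # a rounding error did occur...
assert abs(delta) <= eps_mach                 # ...but the model's bound holds
```

`Fraction(0.1)` constructs the exact binary value actually stored for `0.1`, so `delta` here is the true rounding error of the addition, not an artifact of decimal printing.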

Definition: Forward and Backward Error

Consider an algorithm computing $y = f(x)$ that produces $\tilde{y}$:

  • Forward error: $\|y - \tilde{y}\|$ measures the error in the output
  • Backward error: $\|\tilde{x} - x\|$, where $\tilde{y} = f(\tilde{x})$ exactly

An algorithm is backward stable if the computed solution $\tilde{y}$ is the exact solution to a slightly perturbed problem, i.e., $\tilde{y} = f(x + \Delta x)$ with $\|\Delta x\| = O(\epsilon_{\text{mach}})\|x\|$.
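A single addition already illustrates the backward-error viewpoint: since $\text{fl}(a+b) = (a+b)(1+\delta) = a(1+\delta) + b(1+\delta)$, the computed sum is the exact sum of inputs each perturbed by relative amount $\delta$. A sketch, again using exact rationals to exhibit the perturbed inputs:

```python
import sys
from fractions import Fraction

a, b = 0.1, 0.2
exact = Fraction(a) + Fraction(b)
delta = (Fraction(a + b) - exact) / exact  # fl(a+b) = (a+b)(1 + delta)

a_pert = Fraction(a) * (1 + delta)  # perturbed input a~ = a(1 + delta)
b_pert = Fraction(b) * (1 + delta)  # perturbed input b~ = b(1 + delta)

# The computed result is the EXACT answer to a nearby problem...
assert a_pert + b_pert == Fraction(a + b)
# ...and the input perturbation is tiny: O(eps_mach) * |input|.
assert abs(delta) <= sys.float_info.epsilon
```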

Example: Catastrophic Cancellation

Computing $f(x) = \sqrt{x+1} - \sqrt{x}$ for large $x$ suffers from catastrophic cancellation: direct evaluation of $\text{fl}(\sqrt{x+1} - \sqrt{x})$ subtracts two nearly equal numbers, losing precision.

Rationalizing the numerator, $f(x) = \frac{(\sqrt{x+1} - \sqrt{x})(\sqrt{x+1} + \sqrt{x})}{\sqrt{x+1} + \sqrt{x}} = \frac{1}{\sqrt{x+1} + \sqrt{x}}$, avoids the cancellation and is numerically stable.
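The effect is easy to observe numerically. A minimal sketch comparing the two algebraically equivalent forms at $x = 10^{15}$, where both square roots agree to about eight significant digits:

```python
import math

def f_naive(x):
    # Direct form: subtracts two nearly equal numbers for large x.
    return math.sqrt(x + 1) - math.sqrt(x)

def f_stable(x):
    # Rationalized form: no subtraction of nearby values.
    return 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e15
# The true value is about 1.58e-8.  The stable form is accurate to
# nearly full precision; the naive form is off by several percent,
# because the subtraction cancels the leading ~8 digits.
print(f_naive(x))
print(f_stable(x))
```

At this magnitude one unit in the last place of $\sqrt{x}$ is already a few percent of the difference being computed, so the naive form cannot do better than a handful of correct digits.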

Remark

Condition Number: For a problem $y = f(x)$, the relative condition number is
$$\kappa = \lim_{\epsilon \to 0} \sup_{\|\Delta x\| \leq \epsilon\|x\|} \frac{\|f(x+\Delta x) - f(x)\| / \|f(x)\|}{\|\Delta x\| / \|x\|}$$

For differentiable $f$, this is approximately $\kappa \approx \|f'(x)\|\,\|x\| / \|f(x)\|$. A problem is ill-conditioned if $\kappa \gg 1$, meaning small input perturbations cause large output changes regardless of the algorithm used.
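The differentiable-case formula is directly computable. A sketch (the helper `cond` is illustrative) contrasting a well-conditioned problem, $\sqrt{x}$ with $\kappa = 1/2$ everywhere, against the classic ill-conditioned one, subtracting a nearby constant:

```python
import math

def cond(f, fprime, x):
    """Relative condition number: kappa ~= |f'(x)| |x| / |f(x)|."""
    return abs(fprime(x)) * abs(x) / abs(f(x))

# sqrt is well-conditioned: kappa = (1/(2*sqrt(x))) * x / sqrt(x) = 1/2.
print(cond(math.sqrt, lambda x: 0.5 / math.sqrt(x), 1e10))  # 0.5

# f(x) = x - 1 near x = 1 is ill-conditioned: kappa = |x| / |x - 1|
# blows up as x approaches 1, no matter how the subtraction is done.
print(cond(lambda x: x - 1.0, lambda x: 1.0, 1.0 + 1e-8))  # ~1e8
```

This is the quantitative content of catastrophic cancellation: the subtraction itself is exact in floating point, but the *problem* of subtracting nearly equal numbers has a huge $\kappa$.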

Error accumulation follows different patterns depending on whether errors compound additively or multiplicatively. For a sequence of $n$ operations, the accumulated relative error typically grows as $O(n\epsilon_{\text{mach}})$ under favorable conditions, but can grow exponentially for unstable algorithms; this growth rate is precisely what separates stable algorithms from unstable ones.
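The $O(n\epsilon_{\text{mach}})$ regime can be checked empirically for the favorable case of summing same-sign terms left to right. A sketch (the helper name is illustrative) that measures the true relative error of repeated accumulation against an exact rational reference:

```python
from fractions import Fraction

def lr_sum_rel_error(n):
    """Relative error of left-to-right summation of n copies of 0.1."""
    computed = 0.0
    for _ in range(n):
        computed += 0.1                  # one rounding error per addition
    exact = Fraction(0.1) * n            # exact sum of the stored value
    return abs(Fraction(computed) - exact) / exact

eps_mach = 2.0 ** -52
for n in (10, 1000, 100000):
    err = float(lr_sum_rel_error(n))
    # Accumulated relative error stays within the O(n * eps_mach) bound.
    assert err <= n * eps_mach
    print(n, err)
```

In practice the observed error often grows more slowly than the worst-case bound (rounding errors partially cancel), but the linear-in-$n$ bound is what the stability analysis guarantees.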

Understanding the interplay between problem conditioning and algorithmic stability is essential: even a stable algorithm applied to an ill-conditioned problem will produce inaccurate results, while an unstable algorithm applied to a well-conditioned problem may still fail.