Interpolation Error Theorem
The interpolation error theorem quantifies how well a polynomial approximates a function, revealing the dependence on data spacing and function smoothness.
Let $f \in C^{n+1}[a,b]$ and let $p_n$ be the polynomial of degree at most $n$ interpolating $f$ at the $n+1$ distinct nodes $x_0, x_1, \dots, x_n \in [a,b]$. Then for any $x \in [a,b]$, there exists $\xi \in (a,b)$ such that:

$$f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)$$

Consequently:

$$\max_{x \in [a,b]} |f(x) - p_n(x)| \le \frac{M_{n+1}}{(n+1)!} \max_{x \in [a,b]} |\omega(x)|$$

where $M_{n+1} = \max_{\xi \in [a,b]} |f^{(n+1)}(\xi)|$ and $\omega(x) = \prod_{i=0}^{n} (x - x_i)$.
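As a sanity check, the bound can be verified numerically. The sketch below (assuming NumPy; the test function $f = \sin$ and degree $n = 4$ are illustrative choices, not from the text) interpolates $\sin$ on $[0, \pi]$ and compares the observed error against $M_{n+1} \max|\omega(x)| / (n+1)!$, with $M_{n+1} = 1$ since every derivative of $\sin$ is bounded by 1:

```python
# Sanity check of |f(x) - p_n(x)| <= M_{n+1}/(n+1)! * max|omega(x)|
# for f = sin on [0, pi], where M_{n+1} = 1 (all derivatives of sin
# are bounded by 1). n = 4 is an illustrative choice.
import math
import numpy as np

n = 4
nodes = np.linspace(0.0, math.pi, n + 1)
coeffs = np.polyfit(nodes, np.sin(nodes), n)   # degree-n interpolant

xs = np.linspace(0.0, math.pi, 2001)           # fine evaluation grid
err = np.max(np.abs(np.sin(xs) - np.polyval(coeffs, xs)))

# omega(x) = prod_i (x - x_i), evaluated on the same grid
omega = np.prod(xs[:, None] - nodes[None, :], axis=1)
bound = np.max(np.abs(omega)) / math.factorial(n + 1)

print(f"observed max error = {err:.2e}, theoretical bound = {bound:.2e}")
```

The observed error should sit below the bound, which is typically not tight because $f^{(n+1)}(\xi)$ rarely attains its maximum where $|\omega(x)|$ does.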
This theorem has a form similar to Taylor's theorem, but the error depends on the node product $\omega(x) = \prod_{i=0}^{n} (x - x_i)$ rather than the single power $(x - a)^{n+1}$. The choice of interpolation nodes therefore significantly affects $\max |\omega(x)|$.
On $[-1, 1]$ with $n + 1$ nodes:
- Uniform spacing: $\max |\omega(x)| \approx \frac{n!}{4} \left(\frac{2}{n}\right)^{n+1}$, which exceeds the Chebyshev value by a factor that grows unboundedly with $n$
- Chebyshev nodes: $\max |\omega(x)| = 2^{-n}$ (exponentially decaying)
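These growth rates are easy to observe directly. A short sketch (assuming NumPy; the grid resolution and degrees are arbitrary) evaluates $\max|\omega(x)|$ on a fine grid for both node families:

```python
# Compare max|omega(x)| on [-1, 1] for uniform vs Chebyshev nodes.
# The Chebyshev points used here are the roots of T_{n+1}, for which
# the monic node polynomial satisfies max|omega| = 2^{-n} exactly.
import numpy as np

def max_omega(nodes, m=5001):
    xs = np.linspace(-1.0, 1.0, m)
    return np.max(np.abs(np.prod(xs[:, None] - nodes[None, :], axis=1)))

for n in (5, 10, 20):
    uniform = np.linspace(-1.0, 1.0, n + 1)
    cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * n + 2))
    print(f"n={n:2d}  uniform: {max_omega(uniform):.2e}  "
          f"Chebyshev: {max_omega(cheb):.2e}  (2^-n = {2.0**-n:.2e})")
```

The Chebyshev column should track $2^{-n}$ closely, while the uniform column pulls further away as $n$ grows.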
For the Runge function $f(x) = \frac{1}{1 + 25x^2}$ on $[-1, 1]$ with $n = 10$:
- Uniform: max error $\approx 1.9$
- Chebyshev: max error $\approx 0.11$

The order-of-magnitude improvement demonstrates the value of optimal node placement.
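The comparison can be reproduced in a few lines (a sketch assuming NumPy; `np.polyfit` is used for the interpolant, which is numerically adequate at degree 10):

```python
# Uniform vs Chebyshev interpolation of the Runge function at degree n = 10,
# measuring the maximum error on a fine grid over [-1, 1].
import numpy as np

def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)

n = 10
xs = np.linspace(-1.0, 1.0, 4001)

def max_err(nodes):
    coeffs = np.polyfit(nodes, f(nodes), n)    # degree-n interpolant
    return np.max(np.abs(f(xs) - np.polyval(coeffs, xs)))

uniform = np.linspace(-1.0, 1.0, n + 1)
cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * n + 2))
print(f"uniform max error:   {max_err(uniform):.3f}")
print(f"Chebyshev max error: {max_err(cheb):.3f}")
```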
Implications:
- If $f$ is smooth ($f^{(n+1)}$ bounded), the error decreases as $O(h^{n+1})$, where $h$ is the maximum node spacing
- For analytic functions with bounded derivatives, Chebyshev interpolation converges exponentially
- For functions with singularities, convergence can be arbitrarily slow regardless of node choice
The theorem explains the Runge phenomenon: for $f(x) = \frac{1}{1 + 25x^2}$, the derivative bound $M_{n+1}$ grows roughly like $5^{n+1}(n+1)!$, so with equally spaced nodes the factor $M_{n+1}/(n+1)!$ grows faster than $\max|\omega(x)|$ shrinks near the endpoints, causing divergence there.
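The divergence is visible numerically: with equally spaced nodes, the maximum error for the Runge function grows as the degree increases (sketch assuming NumPy; the degrees tested are arbitrary):

```python
# Runge phenomenon: with equally spaced nodes, the max interpolation error
# for f(x) = 1/(1 + 25x^2) grows as the degree n increases.
import numpy as np

def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)

xs = np.linspace(-1.0, 1.0, 4001)
errs = []
for n in (4, 8, 12, 16):
    nodes = np.linspace(-1.0, 1.0, n + 1)
    coeffs = np.polyfit(nodes, f(nodes), n)
    errs.append(np.max(np.abs(f(xs) - np.polyval(coeffs, xs))))
    print(f"n={n:2d}  max error = {errs[-1]:.3f}")
```

The largest errors occur near $x = \pm 1$, where $|\omega(x)|$ for equispaced nodes is largest.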
The error formula extends to piecewise polynomial approximation. For cubic splines with knot spacing $h$, the error is $O(h^4)$ for $C^4$ functions, and increasing the number of knots causes no Runge-type divergence. This makes splines more practical than global polynomial interpolation for large datasets.
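A quick experiment illustrates the $O(h^4)$ behavior (assuming NumPy and SciPy's `CubicSpline`, whose default not-a-knot boundary conditions preserve fourth-order accuracy): halving $h$ should cut the error by roughly $2^4 = 16$.

```python
# Cubic-spline error for the Runge function shrinks like O(h^4):
# halving the knot spacing h should cut the error by roughly 16x.
import numpy as np
from scipy.interpolate import CubicSpline

def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)

xs = np.linspace(-1.0, 1.0, 4001)
errs = []
for m in (10, 20, 40, 80):                 # number of subintervals, h = 2/m
    knots = np.linspace(-1.0, 1.0, m + 1)
    s = CubicSpline(knots, f(knots))       # default not-a-knot boundaries
    errs.append(np.max(np.abs(f(xs) - s(xs))))
    print(f"h = {2.0/m:.4f}  max error = {errs[-1]:.2e}")
```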
Understanding interpolation error guides numerical method design: finite difference formulas derive from polynomial interpolation, inheriting the same error behavior. Higher-order methods require more function evaluations but achieve faster convergence, balancing computational cost against accuracy requirements.
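For example, the central difference $(f(x+h) - f(x-h))/(2h)$ is the exact derivative of the quadratic interpolant through $x - h$, $x$, $x + h$, and its $O(h^2)$ error follows from the interpolation error theorem. A minimal check (pure standard library; the point $x = 1$ and step sizes are illustrative):

```python
# Central difference as the derivative of a local quadratic interpolant:
# the error inherits O(h^2) behavior, so halving h quarters the error.
import math

x = 1.0
errs = []
for h in (0.1, 0.05, 0.025):
    approx = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
    errs.append(abs(approx - math.cos(x)))
    print(f"h = {h:.3f}  error = {errs[-1]:.2e}")
```

Successive error ratios should be close to 4, confirming second-order convergence.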