Adaptive and Multidimensional Quadrature
Adaptive quadrature automatically refines the mesh where the integrand is difficult, achieving a prescribed tolerance with minimal function evaluations. Multidimensional integration (cubature) presents the "curse of dimensionality" and requires specialized techniques such as Monte Carlo methods and sparse grids.
Adaptive Quadrature
Adaptive quadrature estimates the integral on an interval using two approximations of different accuracy, then uses their difference to estimate the local error. If the estimated error exceeds the tolerance, the interval is bisected and the procedure applied recursively. For adaptive Simpson: let $S_1 = S(a,b)$ be Simpson's rule on $[a,b]$ and $S_2 = S(a,m) + S(m,b)$ where $m = (a+b)/2$. The error estimate is $E \approx (S_2 - S_1)/15$, using Richardson extrapolation with the $O(h^4)$ error of Simpson's rule (the factor $15 = 2^4 - 1$).
Consider $\int_0^1 \sqrt{x}\,dx = 2/3$. The integrand has $f'(x) = 1/(2\sqrt{x}) \to \infty$ as $x \to 0^+$, so fixed-step quadrature converges slowly. Adaptive quadrature automatically clusters subintervals near $x = 0$: for a tight tolerance, adaptive Simpson might use 50+ subintervals near $x = 0$ but only a few near $x = 1$, requiring roughly 200 function evaluations vs. thousands for uniform Simpson.
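A minimal sketch of the recursion described above, applied to the $\sqrt{x}$ example (the function and variable names here are illustrative, not from a particular library):

```python
import math

def adaptive_simpson(f, a, b, tol):
    """Adaptive Simpson: bisect until the estimate |S2 - S1| / 15 < tol."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, s1, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)    # Simpson on [a, m]
        right = simpson(fm, frm, fb, m, b)   # Simpson on [m, b]
        s2 = left + right
        err = (s2 - s1) / 15.0               # Richardson error estimate
        if abs(err) < tol:
            return s2 + err                  # extrapolated value
        # Split the tolerance between the two halves and recurse.
        return (recurse(a, m, fa, flm, fm, left, tol / 2) +
                recurse(m, b, fm, frm, fb, right, tol / 2))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

# sqrt(x) has an unbounded derivative at x = 0; count the evaluations.
calls = [0]
def f(x):
    calls[0] += 1
    return math.sqrt(x)

approx = adaptive_simpson(f, 0.0, 1.0, 1e-6)
print(approx, calls[0])   # approx is close to the exact value 2/3
```

Instrumenting `f` this way makes the clustering visible: almost all evaluations land in the leftmost subintervals, where the recursion goes deepest.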
Romberg Integration
Romberg integration applies Richardson extrapolation systematically to the composite trapezoidal rule. Let $T(h)$ denote the trapezoidal approximation with step size $h$. Since $T(h) = I + c_1 h^2 + c_2 h^4 + c_3 h^6 + \cdots$ (Euler-Maclaurin expansion), successive elimination of the error terms yields $R_{j,k} = R_{j,k-1} + \frac{R_{j,k-1} - R_{j-1,k-1}}{4^k - 1}$, where $R_{j,0} = T(h_j)$ with $h_j = (b-a)/2^j$. The entry $R_{j,k}$ has error $O(h_j^{2k+2})$.
The Romberg method builds a triangular tableau: the first column contains trapezoidal approximations with halved step sizes, subsequent columns improve accuracy by two orders each. For smooth (analytic) integrands, the diagonal entries converge superlinearly. The method requires only function evaluations at equally spaced points and is easily implemented. However, for functions with limited smoothness, the extrapolation gains are lost and adaptive methods are preferable.
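The tableau can be built in a few lines; this sketch (names are illustrative) reuses previous trapezoidal sums so that halving the step only costs the new midpoint evaluations:

```python
import math

def romberg(f, a, b, levels):
    """Romberg tableau: column 0 holds trapezoidal values T(h_j) with
    h_j = (b - a) / 2**j; each later column applies
    R[j][k] = R[j][k-1] + (R[j][k-1] - R[j-1][k-1]) / (4**k - 1),
    removing one h^2 power of error per column."""
    R = [[0.0] * (levels + 1) for _ in range(levels + 1)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for j in range(1, levels + 1):
        h *= 0.5
        # Reuse the coarser trapezoid: only new midpoints are evaluated.
        midsum = sum(f(a + (2 * i - 1) * h) for i in range(1, 2 ** (j - 1) + 1))
        R[j][0] = 0.5 * R[j - 1][0] + h * midsum
        for k in range(1, j + 1):
            R[j][k] = R[j][k - 1] + (R[j][k - 1] - R[j - 1][k - 1]) / (4 ** k - 1)
    return R[levels][levels]

# Smooth (analytic) integrand: the integral of e^x on [0, 1] is e - 1,
# and the diagonal entry at level 5 is already accurate to near machine precision.
approx = romberg(math.exp, 0.0, 1.0, 5)
print(approx)
```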
Multidimensional Integration
Cubature refers to numerical integration in $\mathbb{R}^d$ for $d \geq 2$. Tensor-product rules apply a 1D quadrature along each axis: for $n$ points per dimension, $n^d$ evaluations are needed (the curse of dimensionality). Monte Carlo integration approximates $\int_{[0,1]^d} f(x)\,dx \approx \frac{1}{N}\sum_{i=1}^{N} f(x_i)$, where the $x_i$ are uniform random points. The error is $O(N^{-1/2})$ regardless of dimension, making Monte Carlo competitive once $d$ is moderately large.
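A minimal Monte Carlo sketch over the unit cube (the test integrand and names are illustrative assumptions, not from the text):

```python
import math
import random

def monte_carlo(f, d, n, rng):
    """Plain Monte Carlo on [0,1]^d: average of f at n uniform random
    points; the statistical error decays like n^{-1/2} in any dimension."""
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        total += f(x)
    return total / n

# Example in d = 6: f(x) = prod x_i integrates to (1/2)^6 = 1/64 over the cube.
rng = random.Random(0)          # seeded for reproducibility
f = lambda x: math.prod(x)
est = monte_carlo(f, 6, 100_000, rng)
print(est)                      # close to 1/64 = 0.015625
```

A tensor-product rule with even 10 points per axis would need $10^6$ evaluations here, and the gap widens rapidly with $d$.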
Sparse grids mitigate the curse of dimensionality for smooth integrands. Instead of the full tensor product with $n^d$ points, the Smolyak algorithm selects a subset of grid points that achieves accuracy $O(n^{-r}(\log n)^{(d-1)(r+1)})$ for functions with bounded mixed derivatives of order $r$, using only $O(n(\log n)^{d-1})$ points instead of $n^d$. For $d = 10$ and $n = 10$: the tensor product needs $10^{10}$ points, while the sparse grid needs only on the order of $10^4$.
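The gap between the two point counts is easy to make concrete. The sketch below compares $n^d$ with the Smolyak-type growth $n(\log n)^{d-1}$; note the latter is only an asymptotic order (constants and the log base are omitted), not an exact grid size:

```python
import math

d, n = 10, 10
tensor = n ** d                          # full tensor-product count: n^d
sparse = n * math.log(n) ** (d - 1)      # sparse-grid order: n (log n)^(d-1)
print(f"tensor product: {tensor:.2e} points, sparse-grid order: {sparse:.2e}")
```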
Quasi-Monte Carlo (QMC) replaces random points with low-discrepancy sequences (Halton, Sobol, Niederreiter) that fill the space more uniformly. For functions of bounded variation in the sense of Hardy and Krause, the Koksma-Hlawka inequality gives error $O(N^{-1}(\log N)^d)$, which is asymptotically better than Monte Carlo's $O(N^{-1/2})$ for fixed dimension $d$.
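The Halton sequence is simple enough to generate by hand via the radical inverse. This sketch (function names are illustrative) integrates a smooth test function in $d = 2$; the observed QMC error is typically far below the $O(N^{-1/2})$ Monte Carlo level at the same $N$:

```python
import math

def van_der_corput(i, base):
    """Radical inverse of integer i in the given base: one Halton coordinate."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, rem = divmod(i, base)
        denom *= base
        x += rem / denom
    return x

def halton(n, d, primes=(2, 3, 5, 7, 11, 13)):
    """First n points of the d-dimensional Halton sequence (d <= 6 here),
    using one prime base per coordinate."""
    return [[van_der_corput(i, primes[k]) for k in range(d)]
            for i in range(1, n + 1)]

# Smooth integrand f(x, y) = exp(x + y) on [0,1]^2; exact value is (e - 1)^2.
exact = (math.e - 1.0) ** 2
pts = halton(4096, 2)
qmc = sum(math.exp(x + y) for x, y in pts) / len(pts)
print(abs(qmc - exact))
```

The same quadrature-by-averaging structure as Monte Carlo applies; only the point set changes, which is why QMC is often a drop-in replacement.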