Adaptive and Multidimensional Quadrature

Adaptive quadrature automatically refines the mesh where the integrand is difficult, achieving a prescribed tolerance with minimal function evaluations. Multidimensional integration (cubature) presents the "curse of dimensionality" and requires specialized techniques such as Monte Carlo methods and sparse grids.


Adaptive Quadrature

Definition 4.7 (Adaptive Simpson's Rule)

Adaptive quadrature estimates the integral on an interval $[a, b]$ using two approximations of different accuracy, then uses their difference to estimate the local error. If the estimated error exceeds the tolerance, the interval is bisected and the procedure is applied recursively. For adaptive Simpson: let $S(a, b)$ be Simpson's rule on $[a, b]$ and $S_2(a, b) = S(a, m) + S(m, b)$, where $m = (a+b)/2$. The error estimate is $|S_2 - S|/15$, obtained by Richardson extrapolation from the $O(h^4)$ error of Simpson's rule.

Example (Adaptive Quadrature for Singular Integrands)

Consider $\int_0^1 \sqrt{x}\, dx = 2/3$. The integrand $f(x) = \sqrt{x}$ has $f'(x) = 1/(2\sqrt{x}) \to \infty$ as $x \to 0^+$, so fixed-step quadrature converges slowly. Adaptive quadrature automatically clusters subintervals near $x = 0$: for tolerance $10^{-8}$, adaptive Simpson might use 50+ subintervals near $x = 0$ but only a few near $x = 1$, requiring roughly 200 function evaluations vs. thousands for uniform Simpson.
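The recursion in Definition 4.7 can be sketched in a few lines. The implementation below is a minimal illustration (the name `adaptive_simpson` and the depth cap are my own choices, not from the text): it reuses function values across levels, bisects when $|S_2 - S| > 15\,\mathrm{tol}$, and adds the $(S_2 - S)/15$ correction on accepted intervals.

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8, max_depth=50):
    """Adaptive Simpson's rule with the |S2 - S|/15 error estimate."""
    def simpson(fa, fm, fb, a, b):
        # Simpson's rule on [a, b] given endpoint and midpoint values
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol, depth):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        # Richardson: the O(h^4) Simpson error shrinks ~16x on bisection,
        # so (S2 - S)/15 estimates the error of S2.
        if depth <= 0 or abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, tol / 2.0, depth - 1)
                + recurse(m, b, fm, frm, fb, right, tol / 2.0, depth - 1))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    whole = simpson(fa, fm, fb, a, b)
    return recurse(a, b, fa, fm, fb, whole, tol, max_depth)

# The singular integrand from the example: exact value 2/3
val = adaptive_simpson(math.sqrt, 0.0, 1.0, tol=1e-8)
```

Halving `tol` at each bisection keeps the accumulated error over all accepted subintervals within the global tolerance; the depth cap guards against runaway recursion at non-integrable singularities.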


Romberg Integration

Definition 4.8 (Romberg Integration)

Romberg integration applies Richardson extrapolation systematically to the composite trapezoidal rule. Let $T(h)$ denote the trapezoidal approximation with step size $h$. Since $T(h) = I + c_1 h^2 + c_2 h^4 + \cdots$ (Euler-Maclaurin expansion), successive elimination yields $T_{j,k} = \frac{4^k T_{j,k-1} - T_{j-1,k-1}}{4^k - 1}$, where $T_{j,0} = T(h_j)$ with $h_j = (b-a)/2^j$. The entry $T_{j,k}$ has error $O(h_j^{2k+2})$.

Remark (Romberg Tableau)

The Romberg method builds a triangular tableau: the first column contains trapezoidal approximations with halved step sizes, subsequent columns improve accuracy by two orders each. For smooth (analytic) integrands, the diagonal entries Tj,jT_{j,j} converge superlinearly. The method requires only function evaluations at equally spaced points and is easily implemented. However, for functions with limited smoothness, the extrapolation gains are lost and adaptive methods are preferable.
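The tableau is straightforward to build: each new row halves the step, reuses all previous function evaluations, and extrapolates across the row with the formula from Definition 4.8. A minimal sketch (the name `romberg` and the `levels` parameter are mine):

```python
import math

def romberg(f, a, b, levels=5):
    """Build the Romberg tableau T[j][k] and return the diagonal entry."""
    T = [[0.0] * (levels + 1) for _ in range(levels + 1)]
    h = b - a
    T[0][0] = 0.5 * h * (f(a) + f(b))  # trapezoid with one panel
    for j in range(1, levels + 1):
        h *= 0.5  # h_j = (b - a) / 2^j
        # Refined trapezoid: reuse the old sum, add only the new midpoints
        # (odd multiples of h_j), so each row costs 2^(j-1) evaluations.
        new = sum(f(a + (2 * i - 1) * h) for i in range(1, 2 ** (j - 1) + 1))
        T[j][0] = 0.5 * T[j - 1][0] + h * new
        for k in range(1, j + 1):
            # Richardson extrapolation: eliminate the h^(2k) error term
            T[j][k] = (4 ** k * T[j][k - 1] - T[j - 1][k - 1]) / (4 ** k - 1)
    return T[levels][levels]

# Smooth test integrand: exact value of the integral is 2
approx = romberg(math.sin, 0.0, math.pi, levels=5)
```

With only 33 function evaluations, the diagonal entry is accurate to near machine precision for this analytic integrand, illustrating the superlinear convergence noted above.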


Multidimensional Integration

Definition 4.9 (Cubature and Monte Carlo Methods)

Cubature refers to numerical integration in $\mathbb{R}^d$ for $d \geq 2$. Tensor product rules apply 1D quadrature along each axis: for $n$ points per dimension, $n^d$ evaluations are needed (curse of dimensionality). Monte Carlo integration approximates $\int_\Omega f(\mathbf{x})\, d\mathbf{x} \approx \frac{|\Omega|}{N}\sum_{i=1}^N f(\mathbf{x}_i)$, where the $\mathbf{x}_i$ are uniform random points in $\Omega$. The error is $O(N^{-1/2})$ regardless of dimension, making Monte Carlo competitive for $d \gg 1$.
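The Monte Carlo estimator is a direct transcription of the formula above. A minimal sketch for a box domain (the name `monte_carlo` and the fixed seed are assumptions for reproducibility, not from the text):

```python
import random

def monte_carlo(f, lo, hi, n=100_000, seed=0):
    """Estimate the integral of f over the box [lo_i, hi_i]^d
    as |Omega|/N * sum of f at N uniform random points."""
    rng = random.Random(seed)
    vol = 1.0
    for l, h in zip(lo, hi):
        vol *= (h - l)  # |Omega| for a box domain
    total = 0.0
    for _ in range(n):
        x = [rng.uniform(l, h) for l, h in zip(lo, hi)]
        total += f(x)
    return vol * total / n

# d = 6 example: integral of sum(x_i^2) over [0,1]^6 is d/3 = 2 exactly
est = monte_carlo(lambda x: sum(xi * xi for xi in x),
                  [0.0] * 6, [1.0] * 6)
```

The $O(N^{-1/2})$ error means each extra digit of accuracy costs a factor of 100 in samples, but the cost is the same whether $d = 6$ or $d = 600$.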

Example (Sparse Grids: Smolyak Construction)

Sparse grids mitigate the curse of dimensionality for smooth integrands. Instead of the full tensor product $\bigotimes_{j=1}^d Q^{(n)}$, the Smolyak algorithm selects a subset of grid points that achieves accuracy $O(N^{-r} (\log N)^{(d-1)(r+1)})$ for $f \in C^r$, using only $N = O(n (\log n)^{d-1})$ points instead of $n^d$. For $d = 10$ and $n = 10$: the tensor product needs $10^{10}$ points, while the sparse grid needs $\sim 10^5$.

Remark (Quasi-Monte Carlo Methods)

Quasi-Monte Carlo (QMC) replaces random points with low-discrepancy sequences (Halton, Sobol, Niederreiter) that fill the space more uniformly. For functions of bounded variation in the sense of Hardy and Krause, the Koksma-Hlawka inequality gives error $O((\log N)^d / N)$, which is asymptotically better than Monte Carlo's $O(N^{-1/2})$ for fixed dimension $d$.
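The Halton sequence is simple to generate by hand: coordinate $j$ of point $i$ is the radical inverse of $i$ in the $j$-th prime base. A minimal sketch (the helper names `van_der_corput` and `halton` are mine, and the test integrand matches the Monte Carlo example, not the text):

```python
def van_der_corput(i, base):
    """Radical inverse of integer i in the given base: reflect the
    base-b digits of i about the radix point."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, rem = divmod(i, base)
        denom *= base
        x += rem / denom
    return x

def halton(n, d):
    """First n points of the d-dimensional Halton sequence,
    using the first d primes as bases (supports d <= 10 here)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:d]
    return [[van_der_corput(i, p) for p in primes] for i in range(1, n + 1)]

# QMC estimate over [0,1]^6 (|Omega| = 1): plain average over Halton points.
# Exact value of the integral of sum(x_i^2) is d/3 = 2.
f = lambda x: sum(xi * xi for xi in x)
pts = halton(4096, 6)
qmc_est = sum(f(x) for x in pts) / len(pts)
```

Because the bases are pairwise coprime, the coordinates fill $[0,1]^d$ without the clumps and gaps of pseudorandom points, which is where the improved $O((\log N)^d / N)$ rate comes from.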