
Theorem 7.1 (Dahlquist Equivalence Theorem)

A linear multistep method $\sum_{j=0}^{k} \alpha_j y_{n+k-j} = h \sum_{j=0}^{k} \beta_j f(t_{n+k-j}, y_{n+k-j})$ (with $\alpha_0 = 1$) is convergent (i.e., $\max_n |y(t_n) - y_n| \to 0$ as $h \to 0$ for every sufficiently smooth IVP, provided the starting values converge to $y(t_0)$) if and only if it satisfies both:

  1. Consistency: $\rho(1) = 0$ and $\rho'(1) = \sigma(1)$, where $\rho(\zeta) = \sum_{j=0}^{k} \alpha_j \zeta^{k-j}$ and $\sigma(\zeta) = \sum_{j=0}^{k} \beta_j \zeta^{k-j}$.
  2. Zero-stability: all roots of $\rho(\zeta) = 0$ satisfy $|\zeta| \leq 1$, and the roots with $|\zeta| = 1$ are simple.

Moreover, if the method has order $p$ (i.e., is consistent to order $p$) and is zero-stable, then the global error is $O(h^p)$.
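Both conditions are mechanical to check from the coefficients alone. As an illustration (the function name and tolerances below are invented for this sketch), the following Python snippet tests consistency and the root condition for a method given by its $\alpha_j$ and $\beta_j$:

```python
import numpy as np

def is_convergent(alpha, beta, tol=1e-10):
    """Dahlquist check: consistency + zero-stability from the coefficients.

    alpha = [alpha_0, ..., alpha_k] with alpha_0 multiplying y_{n+k},
    so rho(z) = sum_j alpha[j] * z**(k-j); beta/sigma laid out the same way.
    """
    rho = np.polynomial.Polynomial(alpha[::-1])
    sigma = np.polynomial.Polynomial(beta[::-1])
    drho = rho.deriv()
    consistent = abs(rho(1.0)) < tol and abs(drho(1.0) - sigma(1.0)) < tol
    roots = rho.roots()
    in_disk = all(abs(r) <= 1 + tol for r in roots)
    # roots on the unit circle must be simple, i.e. rho'(r) != 0 there
    simple = all(abs(drho(r)) > tol for r in roots if abs(abs(r) - 1) < 1e-8)
    return consistent and in_disk and simple

# AB2 (rho = z^2 - z, sigma = (3z - 1)/2): convergent
print(is_convergent([1, -1, 0], [0, 1.5, -0.5]))   # True
# rho = z^2 + 4z - 5: consistent, but the root z = -5 breaks zero-stability
print(is_convergent([1, 4, -5], [0, 4, 2]))        # False
```

The second method is the classic cautionary example: it has order 3, higher than any convergent explicit two-step method of its family, yet it is useless because $\rho$ has a root outside the unit disk.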


Proof

Necessity of consistency. If the method is convergent, it must reproduce the simplest solutions exactly in the limit. For $y' = 0$, $y \equiv C$: the recurrence gives $\sum_j \alpha_j C = 0$, so $\rho(1) = \sum_j \alpha_j = 0$. For $y' = 1$, $y(t) = t + C$: $\sum_j \alpha_j t_{n+k-j} = h \sum_j \beta_j$; substituting $t_{n+k-j} = t_n + (k-j)h$ and using $\rho(1) = 0$, the left side reduces to $h \sum_j \alpha_j (k-j) = h\rho'(1)$, giving $\rho'(1) = \sigma(1)$.

Necessity of zero-stability. Consider $y' = 0$, $y(0) = 0$. The multistep recurrence becomes the homogeneous equation $\sum_j \alpha_j y_{n+k-j} = 0$. Its general solution is a linear combination of $\zeta_i^n$ (for simple roots $\zeta_i$ of $\rho$) and $n^m \zeta_i^n$ (for roots of multiplicity $m+1$). If some root has $|\zeta_i| > 1$, the solution grows exponentially even when zero initial data is perturbed only by rounding. If $|\zeta_i| = 1$ with multiplicity greater than one, $n^m \zeta_i^n$ grows polynomially. Since the number of steps $n \sim T/h \to \infty$ as $h \to 0$, either case prevents convergence.
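This failure is easy to observe numerically. The sketch below (illustrative; the perturbation size is chosen to mimic rounding) runs the consistent but non-zero-stable method $y_{n+2} = -4y_{n+1} + 5y_n + h(4f_{n+1} + 2f_n)$, whose $\rho(\zeta) = \zeta^2 + 4\zeta - 5$ has the root $\zeta = -5$, on $y' = 0$:

```python
# For y' = 0 every f-term vanishes, leaving the pure recurrence
# y_{n+2} = -4 y_{n+1} + 5 y_n, with characteristic roots 1 and -5.
y = [0.0, 1e-16]   # exact start would be y_1 = 0; 1e-16 mimics a rounding error
for n in range(30):
    y.append(-4 * y[-1] + 5 * y[-2])
print(abs(y[-1]))   # huge: the perturbation is amplified like 5^n
```

After 30 steps the $10^{-16}$ perturbation has been amplified by roughly $5^{30}$, swamping the true solution $y \equiv 0$ entirely.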

Sufficiency (outline). Let $e_n = y(t_n) - y_n$ be the global error. Subtracting the numerical scheme from the exact relation satisfied by $y(t_n)$ yields $\sum_j \alpha_j e_{n+k-j} = h \sum_j \beta_j \left[ f(t_{n+k-j}, y(t_{n+k-j})) - f(t_{n+k-j}, y_{n+k-j}) \right] + h^{p+1} \tau_n$, where $\tau_n = O(1)$ is the local truncation error.

By the Lipschitz condition $|f(t,u) - f(t,v)| \leq L|u - v|$ (via the mean value theorem in the second argument), $\sum_j \alpha_j e_{n+k-j} = h \sum_j \beta_j G_{n+k-j} e_{n+k-j} + O(h^{p+1})$, where $|G_j| \leq L$.

This is a perturbed linear recurrence. Zero-stability ensures that the homogeneous recurrence $\sum_j \alpha_j e_{n+k-j} = 0$ has uniformly bounded solutions. Perturbation theory for difference equations (a discrete Gronwall inequality) then gives $\max_n |e_n| \leq C \left( \max_{j < k} |e_j| + h^p \right)$, where $C$ depends on $L$, $T = t_{\text{final}} - t_0$, and the method coefficients.

If the starting values satisfy $|e_j| = O(h^p)$ for $j = 0, \ldots, k-1$ (e.g., computed by a one-step method of order $p$), then $\max_n |e_n| = O(h^p)$. $\square$
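The dependence on starting values can be seen numerically. In the hedged sketch below (test problem and step counts chosen for illustration), AB2 on $y' = -y$ with an exact starting value $y_1$ shows second-order convergence (error ratio $\approx 4$ when $h$ is halved), while the crude choice $y_1 = y_0$, an $O(h)$ starting error, drags the global error down to first order (ratio $\approx 2$):

```python
import numpy as np

def ab2_final_error(N, exact_start=True):
    """AB2 on y' = -y, y(0) = 1 over [0, 1]; returns |y_N - e^{-1}|."""
    h = 1.0 / N
    f = lambda v: -v
    y = np.empty(N + 1)
    y[0] = 1.0
    y[1] = np.exp(-h) if exact_start else y[0]   # exact vs O(h) starting value
    for n in range(1, N):
        y[n + 1] = y[n] + h / 2 * (3 * f(y[n]) - f(y[n - 1]))
    return abs(y[N] - np.exp(-1.0))

for start in (True, False):
    ratio = ab2_final_error(100, start) / ab2_final_error(200, start)
    print(round(ratio, 2))   # ~4 with exact start, ~2 with crude start
```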


Example (Verifying Convergence of AB2)

AB2: $y_{n+1} = y_n + \frac{h}{2}(3f_n - f_{n-1})$. Here $\rho(\zeta) = \zeta^2 - \zeta$ and $\sigma(\zeta) = \frac{1}{2}(3\zeta - 1)$. Check: $\rho(1) = 0$ (consistent) and $\rho'(1) = 1 = \sigma(1)$. The roots of $\rho$ are $\zeta = 0, 1$: both satisfy $|\zeta| \leq 1$ and are simple, so AB2 is zero-stable and therefore convergent. Order: $\rho(e^h) - h\sigma(e^h) = O(h^3)$, confirming second order.
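The order claim can also be checked numerically: if $\rho(e^h) - h\sigma(e^h) = O(h^3)$, halving $h$ should shrink the residual by about $2^3 = 8$. A minimal sketch (the step sizes are chosen for illustration):

```python
import math

def residual(h):
    # rho(e^h) - h*sigma(e^h) for AB2: (e^h)^2 - e^h - (h/2)(3 e^h - 1)
    z = math.exp(h)
    return z**2 - z - h / 2 * (3 * z - 1)

print(residual(1e-2) / residual(5e-3))   # ~8, confirming an O(h^3) residual
```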

Remark (Root Condition vs. Stability)

Zero-stability (the root condition) concerns only the limit $h = 0$. For finite $h$, the characteristic roots of the perturbed recurrence must also be bounded, which leads to the concept of absolute stability. The interplay between zero-stability (ensuring convergence) and absolute stability (ensuring practical usability for stiff problems) is the central tension in multistep method design.
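For example, the edge of a method's absolute-stability region can be traced with the standard boundary-locus technique, evaluating $h\lambda = \rho(\zeta)/\sigma(\zeta)$ on $|\zeta| = 1$. The sketch below applies it to AB2 (the technique is standard but not derived in this text):

```python
import numpy as np

def ab2_locus(theta):
    # Boundary locus for AB2: h*lambda = rho(z)/sigma(z) with z = e^{i*theta}
    z = np.exp(1j * theta)
    return (z**2 - z) / (1.5 * z - 0.5)

print(abs(ab2_locus(0.0)))     # 0: rho(1) = 0, so the origin is on the boundary
print(ab2_locus(np.pi).real)   # -1: AB2's real stability interval is (-1, 0)
```

Sweeping `theta` over $[0, 2\pi)$ traces the full boundary, making the smallness of AB2's stability region, and hence its unsuitability for stiff problems, immediately visible.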