
Martingale Convergence Theorems

The martingale convergence theorems are fundamental results stating that, under appropriate conditions, martingales converge almost surely to a limit. These theorems underpin the theory of stochastic processes, providing conditions for long-run convergence of random sequences.


Doob's martingale convergence theorem

Theorem 3.1 (Doob's martingale convergence theorem)

Let $(M_n)_{n \geq 0}$ be a submartingale (or supermartingale) with $\sup_n \mathbb{E}[|M_n|] < \infty$. Then there exists a random variable $M_\infty$ with $\mathbb{E}[|M_\infty|] < \infty$ such that:

$$M_n \to M_\infty \quad \text{almost surely as } n \to \infty.$$

For martingales: if $(M_n)$ is a martingale with $\sup_n \mathbb{E}[M_n^2] < \infty$ (bounded in $L^2$), then $M_n \to M_\infty$ a.s.

Intuition: a martingale whose expected absolute value stays uniformly bounded cannot "wander off" indefinitely; it must settle to a limit. The uniform bound on $\mathbb{E}[|M_n|]$ (or on the second moment) prevents the process from escaping to infinity, and the upcrossing inequality below rules out perpetual oscillation.
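A quick way to see the theorem in action is the product martingale $M_n = \prod_{i \leq n} 2U_i$ with $U_i$ i.i.d. Uniform$(0,1)$: since $\mathbb{E}[2U_i] = 1$, it is a nonnegative martingale with $\mathbb{E}[M_n] = 1$, so Doob's theorem applies, and in fact $M_n \to 0$ a.s. because $\mathbb{E}[\log(2U)] = \log 2 - 1 < 0$. The sketch below (plain Python; the helper name is invented for the demo) simulates a few paths:

```python
import random

def product_martingale_path(n_steps, rng):
    """M_n = prod_{i<=n} 2*U_i with U_i ~ Uniform(0,1).
    E[2U] = 1, so (M_n) is a nonnegative martingale with E[M_n] = 1."""
    m = 1.0
    path = [m]
    for _ in range(n_steps):
        m *= 2.0 * rng.random()
        path.append(m)
    return path

rng = random.Random(0)
finals = [product_martingale_path(1000, rng)[-1] for _ in range(200)]
# E[|M_n|] = 1 for every n, so Doob's theorem gives a.s. convergence.
# Here M_n -> 0 a.s. (E[log(2U)] = log 2 - 1 < 0), even though E[M_n] = 1
# for all n -- an a.s. limit without L1 convergence (the family is not UI).
print(max(finals))
```

The same example foreshadows the uniform integrability discussion below: almost sure convergence holds, yet $\mathbb{E}[M_n] = 1 \neq 0 = \mathbb{E}[M_\infty]$.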

Example (Pólya urn)

An urn initially contains 1 red ball and 1 blue ball. At each step, draw a ball uniformly at random, then return it along with one additional ball of the same color. Let $R_n$ be the number of red balls after $n$ draws and $X_n = R_n/(n+2)$ the proportion of red balls.

Then $(X_n)$ is a martingale: $\mathbb{E}[X_{n+1} \mid \mathcal{F}_n] = X_n$. Moreover, $0 \leq X_n \leq 1$, so $\mathbb{E}[X_n^2] \leq 1$ for all $n$. By Doob's theorem, $X_n \to X_\infty$ a.s. for some random variable $X_\infty$.

Remarkably, $X_\infty \sim \text{Uniform}(0, 1)$: the limiting proportion of red balls is uniformly distributed on $[0, 1]$. This is a classic example of exchangeability and the de Finetti theorem.
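This limiting behavior is easy to probe numerically (for this urn, $R_n$ is in fact exactly uniform on $\{1, \ldots, n+1\}$). The following sketch (plain Python; the helper name is made up) simulates many urns and checks that the final proportions look uniform on $[0, 1]$:

```python
import random

def polya_final_fraction(n_draws, rng):
    """Simulate one Polya urn starting with 1 red and 1 blue ball;
    return the proportion of red balls after n_draws draws."""
    red, total = 1, 2
    for _ in range(n_draws):
        if rng.random() < red / total:   # draw red with prob R_n / (n + 2)
            red += 1
        total += 1
    return red / total

rng = random.Random(42)
finals = [polya_final_fraction(500, rng) for _ in range(2000)]
mean = sum(finals) / len(finals)
below_quarter = sum(f < 0.25 for f in finals) / len(finals)
# Uniform(0,1) limit: mean near 0.5, about a quarter of the mass below 0.25.
print(round(mean, 2), round(below_quarter, 2))
```

Each individual path $X_n$ converges (to a different random limit per urn), while across urns the limits spread uniformly over $[0, 1]$.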


$L^2$ convergence

Theorem 3.2 ($L^2$ martingale convergence)

If $(M_n)$ is a martingale with $\sup_n \mathbb{E}[M_n^2] < \infty$, then:

  1. $M_n \to M_\infty$ almost surely.
  2. $M_n \to M_\infty$ in $L^2$: $\mathbb{E}[(M_n - M_\infty)^2] \to 0$.
  3. $\mathbb{E}[M_\infty^2] \leq \liminf_{n \to \infty} \mathbb{E}[M_n^2]$.

In general, $L^2$ convergence neither implies nor follows from almost sure convergence, but for $L^2$-bounded martingales both hold simultaneously. The $L^2$ statement is the more quantitative one: it gives convergence of second moments and lets us pass limits through expectations.

Example (Simple random walk up to a stopping time)

Let $X_n$ be a simple random walk and $\tau = \inf\{n : X_n \notin (-a, a)\}$ the exit time from $(-a, a)$. Define $M_n = X_{n \wedge \tau}$ (the stopped process). Then:

  • $(M_n)$ is a martingale (stopping preserves the martingale property).
  • $|M_n| \leq a$ for all $n$, so $\mathbb{E}[M_n^2] \leq a^2$.

By the convergence theorem, $M_n \to M_\infty$ a.s. and in $L^2$. Since $\tau < \infty$ a.s., $M_\infty = X_\tau \in \{-a, a\}$. Passing the limit through the expectation gives $\mathbb{E}[X_\tau] = \mathbb{E}[M_0] = 0$, so for a walk started at $0$, $\mathbb{P}(X_\tau = a) = \mathbb{P}(X_\tau = -a) = 1/2$.
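Both conclusions, that the walk exits at $\pm a$ and that it does so with equal probabilities, are easy to check by simulation. A minimal sketch (Python; the helper name is hypothetical):

```python
import random

def stopped_walk(a, rng):
    """Run a simple symmetric random walk from 0 until it exits (-a, a);
    return the stopped value X_tau."""
    x = 0
    while -a < x < a:
        x += 1 if rng.random() < 0.5 else -1
    return x

rng = random.Random(7)
a = 5
hits = [stopped_walk(a, rng) for _ in range(2000)]
# M_infty = X_tau always lands on the boundary {-a, a} ...
assert all(h in (-a, a) for h in hits)
# ... and E[M_infty] = E[M_0] = 0 forces P(X_tau = a) = 1/2.
frac_up = sum(h == a for h in hits) / len(hits)
print(round(frac_up, 2))
```

This is the simplest instance of the gambler's ruin calculation, recovered here purely from martingale convergence plus boundedness.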


Uniform integrability

Definition 3.1 (Uniformly integrable)

A family of random variables $\{X_\alpha\}_{\alpha \in A}$ is uniformly integrable (UI) if:

$$\lim_{K \to \infty} \sup_{\alpha \in A} \mathbb{E}[|X_\alpha| \mathbf{1}_{|X_\alpha| > K}] = 0.$$

Equivalently, $\sup_\alpha \mathbb{E}[|X_\alpha|] < \infty$ and for any $\varepsilon > 0$ there exists $\delta > 0$ such that $\mathbb{P}(A) < \delta$ implies $\mathbb{E}[|X_\alpha| \mathbf{1}_A] < \varepsilon$ for all $\alpha$.

Theorem 3.3 ($L^1$ convergence)

If $(M_n)$ is a martingale and the family $\{M_n\}$ is uniformly integrable, then:

  1. $M_n \to M_\infty$ almost surely.
  2. $M_n \to M_\infty$ in $L^1$: $\mathbb{E}[|M_n - M_\infty|] \to 0$.
  3. $\mathbb{E}[M_\infty] = \mathbb{E}[M_0]$.

Uniform integrability is the key to $L^1$ convergence. It ensures that the "tails" of the distribution do not contribute significantly, allowing us to pass limits through expectations.

Example (Bounded martingales are UI)

If $|M_n| \leq C$ a.s. for all $n$, then $\{M_n\}$ is uniformly integrable (trivially: $\mathbb{E}[|M_n| \mathbf{1}_{|M_n| > K}] = 0$ for $K \geq C$). Hence bounded martingales converge in $L^1$.
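Conversely, without UI the $L^1$ conclusion can fail. A standard counterexample is the "double-or-nothing" martingale $M_0 = 1$, $M_{n+1} = 2M_n$ or $0$ with probability $1/2$ each: then $M_n \in \{0, 2^n\}$ with $\mathbb{P}(M_n = 2^n) = 2^{-n}$, so $M_n \to 0$ a.s. while $\mathbb{E}[M_n] = 1$ for all $n$. The snippet below (a sketch; the helper name is invented) computes the tail expectations of Definition 3.1 exactly and shows the UI condition fails:

```python
def tail_expectation(n, K):
    """E[M_n * 1{M_n > K}] for the double-or-nothing martingale:
    M_n equals 2**n with probability 2**-n and 0 otherwise, so the
    tail expectation is (2**n) * (2**-n) = 1 whenever 2**n > K."""
    return 1.0 if 2 ** n > K else 0.0

# For every cutoff K, sup_n E[M_n 1{M_n > K}] = 1: the family is NOT UI,
# matching the failure of Theorem 3.3 here (M_n -> 0 a.s. but E[M_n] = 1).
for K in (10, 1000, 10 ** 6):
    sup_tail = max(tail_expectation(n, K) for n in range(60))
    print(K, sup_tail)
```

The supremum over $n$ never decays as $K \to \infty$, which is exactly the negation of the UI definition.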


Doob's upcrossing inequality

Theorem 3.4 (Upcrossing inequality)

Let $(X_n)$ be a submartingale and $U_n(a, b)$ the number of upcrossings of the interval $[a, b]$ by $X_0, X_1, \ldots, X_n$ (i.e., the number of times the process crosses from below $a$ to above $b$). Then:

$$\mathbb{E}[U_n(a, b)] \leq \frac{\mathbb{E}[(X_n - a)^+]}{b - a}.$$

The upcrossing inequality is the key tool in proving Doob's convergence theorem. If $\mathbb{E}[(X_n - a)^+]$ is uniformly bounded, the expected number of upcrossings of every interval $[a, b]$ is finite, so $X_n$ cannot oscillate between two levels indefinitely; it must converge.
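The inequality can be checked empirically. The sketch below (Python; the upcrossing counter is a direct implementation of the definition) simulates simple random walk paths, a martingale and hence in particular a submartingale, and compares sample averages of the two sides:

```python
import random

def upcrossings(path, a, b):
    """Count completed upcrossings of [a, b]: moves from a value <= a
    to a later value >= b."""
    count, below = 0, path[0] <= a
    for x in path[1:]:
        if below and x >= b:
            count += 1
            below = False
        elif not below and x <= a:
            below = True
    return count

rng = random.Random(1)
a, b, n = 0, 5, 400
lhs_samples, rhs_samples = [], []
for _ in range(3000):
    path = [0]
    for _ in range(n):
        path.append(path[-1] + (1 if rng.random() < 0.5 else -1))
    lhs_samples.append(upcrossings(path, a, b))          # U_n(a, b)
    rhs_samples.append(max(path[-1] - a, 0) / (b - a))   # (X_n - a)^+ / (b - a)
lhs = sum(lhs_samples) / len(lhs_samples)
rhs = sum(rhs_samples) / len(rhs_samples)
print(round(lhs, 2), round(rhs, 2))
```

Up to Monte Carlo noise, the empirical mean upcrossing count stays below the empirical bound, as Theorem 3.4 guarantees in expectation.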


Backwards martingales

Definition 3.2 (Backwards martingale)

A sequence $(M_n)_{n \leq 0}$ indexed by the nonpositive integers is a backwards martingale if:

$$\mathbb{E}[M_n \mid M_{n-1}, M_{n-2}, \ldots] = M_{n-1}.$$

Equivalently, setting $\mathcal{F}_n = \sigma(M_n, M_{n-1}, \ldots)$, the tower property gives $M_n = \mathbb{E}[M_0 \mid \mathcal{F}_n]$ for every $n \leq 0$; the limit of interest is taken as $n \to -\infty$.

Theorem 3.5 (Backwards martingale convergence)

Every backwards martingale converges almost surely and in $L^1$: there exists $M_{-\infty}$ such that $M_n \to M_{-\infty}$ a.s. and in $L^1$ as $n \to -\infty$.

No integrability condition beyond the definition is needed for backwards martingales; they always converge. The reason is that each $M_n = \mathbb{E}[M_0 \mid \mathcal{F}_n]$, and a family of conditional expectations of a single integrable random variable is automatically uniformly integrable.

Example (Exchangeable sequences and de Finetti)

Let $X_1, X_2, \ldots$ be an exchangeable sequence of $\{0, 1\}$-valued random variables. Define $M_n = \mathbb{E}[X_1 \mid X_n, X_{n+1}, \ldots]$. As $n$ decreases we condition on more information, so after the reindexing $n \mapsto -n$ this is exactly a backwards martingale. By the backwards convergence theorem, $M_n$ converges a.s. and in $L^1$ as $n \to \infty$.

The de Finetti theorem identifies the limit: $M_n \to \theta$ for some random variable $\theta$, and conditionally on $\theta$ the $X_i$ are i.i.d. Bernoulli$(\theta)$. This is the foundation of Bayesian statistics: exchangeable sequences arise as mixtures of i.i.d. sequences.
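The de Finetti picture is easy to simulate in the other direction: draw $\theta$ from a prior, generate i.i.d. Bernoulli$(\theta)$ variables (an exchangeable sequence), and watch the running mean, which is the classical backwards martingale $S_n/n$ under reindexing, settle near $\theta$. A minimal sketch in Python, with an invented helper name:

```python
import random

def exchangeable_run(n, rng):
    """Generate an exchangeable Bernoulli sequence the de Finetti way:
    draw theta ~ Uniform(0, 1), then X_i i.i.d. Bernoulli(theta)."""
    theta = rng.random()
    xs = [1 if rng.random() < theta else 0 for _ in range(n)]
    return theta, xs

rng = random.Random(2024)
# The running mean S_n / n converges a.s. to the mixing variable theta,
# an instance of backwards martingale convergence.
worst = 0.0
for _ in range(20):
    theta, xs = exchangeable_run(20000, rng)
    worst = max(worst, abs(sum(xs) / len(xs) - theta))
print(worst)
```

Each run has its own $\theta$, and the empirical frequency recovers it, which is the mixture-of-i.i.d. structure the theorem describes.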


Summary

Martingale convergence theorems provide conditions for almost sure and $L^p$ convergence:

  • Doob's theorem: if $\sup_n \mathbb{E}[|M_n|] < \infty$, then $M_n \to M_\infty$ a.s.
  • $L^2$ convergence: if $\sup_n \mathbb{E}[M_n^2] < \infty$, then $M_n \to M_\infty$ a.s. and in $L^2$.
  • $L^1$ convergence: if $\{M_n\}$ is uniformly integrable, then $M_n \to M_\infty$ a.s. and in $L^1$.
  • Backwards martingales: always converge, a.s. and in $L^1$ (no extra integrability needed).

These results are central to probability theory, with applications to ergodic theory, statistical inference, and stochastic analysis.