
Stopping Times

A stopping time is a random time at which we can decide to "stop" observing a stochastic process based only on information available up to that time. Stopping times are fundamental to the optional stopping theorem, optimal stopping problems, and the study of martingales.


Definition

Definition 3.1 (Stopping time)

Let $(\mathcal{F}_n)$ be a filtration. A random variable $\tau: \Omega \to \{0, 1, 2, \ldots\} \cup \{\infty\}$ is a stopping time (or optional time) if for each $n \geq 0$:

$$\{\tau = n\} \in \mathcal{F}_n.$$

Equivalently, $\{\tau \leq n\} \in \mathcal{F}_n$ for all $n$.
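The equivalence is a one-line check: each event is built from the other by operations that stay inside $\mathcal{F}_n$ (using $\mathcal{F}_{n-1} \subseteq \mathcal{F}_n$):

$$\{\tau \leq n\} = \bigcup_{k=0}^{n} \{\tau = k\}, \qquad \{\tau = n\} = \{\tau \leq n\} \setminus \{\tau \leq n-1\}.$$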

Intuition: The decision to stop at time $n$ can be made using only information available up to time $n$. We cannot use "future knowledge" to decide when to stop.

Example (First passage time)

Let $(X_n)$ be a stochastic process and $A$ a set. The first passage time (or hitting time) is:

$$\tau_A = \inf\{n \geq 0 : X_n \in A\}.$$

Then $\{\tau_A = n\}$ occurs iff $X_0, \ldots, X_{n-1} \notin A$ and $X_n \in A$, which depends only on $X_0, \ldots, X_n$. So $\tau_A$ is a stopping time (with respect to the natural filtration of $(X_n)$).
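Hitting times are easy to simulate precisely because the stopping decision at each step uses only the path so far. A minimal sketch for a simple symmetric random walk; the target set $\{3\}$ and the step cap are illustrative choices, not from the text:

```python
import random

def hitting_time(A, max_steps=10_000, seed=None):
    """First passage time of a simple symmetric random walk (started at 0) into the set A.

    Returns None if A is not entered within max_steps, standing in for tau = infinity.
    """
    rng = random.Random(seed)
    x = 0
    for n in range(max_steps + 1):
        if x in A:
            return n  # the decision to stop uses only X_0, ..., X_n
        x += rng.choice((-1, 1))
    return None

tau = hitting_time({3}, seed=0)  # first time the walk reaches level 3
```

Note that the loop never looks ahead: the walk is extended one step at a time, and the stop/continue decision at step $n$ is a function of the path up to $n$ only.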

Example (Last visit is NOT a stopping time)

Let $\tau = \sup\{n : X_n = 0\}$ be the last time the process visits 0. To check whether $\tau = n$, we need to know that $X_k \neq 0$ for all $k > n$; this requires knowledge of the future. So $\tau$ is not a stopping time.


Examples

Example (Fixed time)

$\tau = n_0$ (a deterministic constant) is a stopping time:

$$\{\tau = n\} = \begin{cases} \Omega & \text{if } n = n_0, \\ \varnothing & \text{otherwise}, \end{cases}$$

which lies in $\mathcal{F}_n$ in both cases.

Example (First return to the origin)

For a random walk $(X_n)$, the first return time is $\tau = \inf\{n \geq 1 : X_n = 0\}$. This is a stopping time.

Example (Time of the maximum before time $N$)

Let $\tau = \arg\max_{0 \leq k \leq N} X_k$ be the time at which the maximum over $[0, N]$ is achieved. This is not a stopping time in general, because determining $\tau$ requires comparing $X_n$ to all future values $X_{n+1}, \ldots, X_N$.


Properties

Theorem 3.1 (Algebra of stopping times)

If $\sigma$ and $\tau$ are stopping times, then so are:

  1. $\min(\sigma, \tau)$.
  2. $\max(\sigma, \tau)$.
  3. $\sigma + \tau$ (on $\{\sigma < \infty, \tau < \infty\}$).
  4. Any fixed time $n$.
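The first two closure properties reduce to set identities:

$$\{\min(\sigma, \tau) \leq n\} = \{\sigma \leq n\} \cup \{\tau \leq n\} \in \mathcal{F}_n, \qquad \{\max(\sigma, \tau) \leq n\} = \{\sigma \leq n\} \cap \{\tau \leq n\} \in \mathcal{F}_n.$$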

Definition 3.2 (Stopped process)

For a process $(X_n)$ and a stopping time $\tau$, the stopped process is:

$$X_n^\tau = X_{\min(n, \tau)}.$$

If $(X_n)$ is a martingale and $\tau$ is a stopping time, then $(X_n^\tau)$ is also a martingale. In particular, $\mathbb{E}[X_n^\tau] = \mathbb{E}[X_0]$ for every fixed $n$.
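The stopped-martingale property can be checked by Monte Carlo. Below, a symmetric random walk is stopped on first exit from the interval $(-2, 3)$; the levels, horizon, and trial count are illustrative choices, not from the text:

```python
import random

def stopped_value(n, a=-2, b=3, rng=random):
    """Return X_{min(n, tau)} for a symmetric random walk started at 0,
    where tau is the first exit time from the open interval (a, b)."""
    x = 0
    for _ in range(n):
        if x <= a or x >= b:  # tau has occurred: the path is frozen from now on
            break
        x += rng.choice((-1, 1))
    return x

rng = random.Random(42)
trials = 20_000
mean = sum(stopped_value(10, rng=rng) for _ in range(trials)) / trials
# The stopped process is a martingale, so E[X_n^tau] = E[X_0] = 0;
# the sample mean should be close to 0 up to Monte Carlo error.
```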


Optional stopping theorem

Theorem 3.2 (Optional stopping theorem)

Let $(M_n)$ be a martingale and $\tau$ a stopping time. If any of the following conditions holds:

  1. $\tau$ is bounded: $\tau \leq N$ a.s. for some constant $N$.
  2. $\mathbb{E}[\tau] < \infty$ and $\sup_n |M_{n+1} - M_n| < C$ a.s. (bounded increments).
  3. $\mathbb{E}[\tau] < \infty$ and $\mathbb{E}[\sup_n |M_n|] < \infty$.

Then $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$.

The theorem fails without additional conditions, as the doubling strategy example shows.

Example (Symmetric random walk hitting time)

Let $(X_n)$ be a symmetric random walk starting at 0, and $\tau = \inf\{n : X_n = 1\}$ the first time the walk reaches 1. Then $\mathbb{P}(\tau < \infty) = 1$ by recurrence, but $\mathbb{E}[\tau] = \infty$.

Optional stopping applied to the bounded stopping time $\tau \wedge N = \min(\tau, N)$ (condition 1 of the theorem) gives, for every $N$,

$$\mathbb{E}[X_{\tau \wedge N}] = 0.$$

But $X_\tau = 1$ a.s., so $\mathbb{E}[X_\tau] = 1 \neq 0$: the limit cannot be passed inside the expectation as $N \to \infty$. Dominated convergence does not apply, because although the increments are bounded, $X_{\tau \wedge N}$ itself is not uniformly bounded below. Indeed, none of the conditions of Theorem 3.2 holds here: $\tau$ is unbounded and $\mathbb{E}[\tau] = \infty$. This example shows that the integrability conditions in the optional stopping theorem cannot be dropped.
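A short simulation makes the failure concrete: the truncated averages sit near 0 for a fixed horizon $N$, even though the paths that have already stopped all sit at 1. The horizon $N = 100$ and the trial count are illustrative choices:

```python
import random

def truncated_value(N, rng):
    """Return X_{min(tau, N)} for a symmetric random walk started at 0,
    where tau is the first hitting time of level 1."""
    x = 0
    for _ in range(N):
        if x == 1:
            return 1
        x += rng.choice((-1, 1))
    return x

rng = random.Random(7)
trials = 20_000
values = [truncated_value(100, rng) for _ in range(trials)]
mean_truncated = sum(values) / trials                 # near 0: E[X_{tau ^ N}] = 0
frac_stopped = sum(v == 1 for v in values) / trials   # most paths have already hit 1
# The rare paths that have not yet hit 1 sit far below 0 and exactly
# balance the mass at 1, keeping the truncated expectation at 0.
```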


Wald's identity

Theorem 3.3 (Wald's identity)

Let $\xi_1, \xi_2, \ldots$ be i.i.d. with $\mathbb{E}[\xi_i] = \mu$ and $S_n = \sum_{i=1}^n \xi_i$. If $\tau$ is a stopping time with $\mathbb{E}[\tau] < \infty$ and $\mathbb{E}[|\xi_1|] < \infty$, then:

$$\mathbb{E}[S_\tau] = \mu\, \mathbb{E}[\tau].$$

Wald's identity is a powerful tool for computing expectations of sums over random indices. It follows from the optional stopping theorem applied to the martingale $M_n = S_n - n\mu$.
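A numerical sanity check of Wald's identity, with $\xi_i$ uniform on $\{1, 2, 3\}$ (so $\mu = 2$) and $\tau$ the first time the partial sum reaches a level; the level 50 and the trial count are illustrative choices:

```python
import random

def wald_sample(level, rng):
    """Return (S_tau, tau), where tau = inf{n : S_n >= level} and the xi_i are
    i.i.d. uniform on {1, 2, 3}.  tau is a stopping time: whether the level has
    been reached by step n is decided from S_1, ..., S_n alone."""
    s = n = 0
    while s < level:
        s += rng.choice((1, 2, 3))
        n += 1
    return s, n

rng = random.Random(1)
trials = 20_000
samples = [wald_sample(50, rng) for _ in range(trials)]
lhs = sum(s for s, _ in samples) / trials      # estimates E[S_tau]
rhs = 2 * sum(n for _, n in samples) / trials  # mu * E[tau] with mu = 2
# Wald's identity predicts lhs and rhs agree up to Monte Carlo error.
```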

Example (Coupon collector)

A collector seeks to collect all $n$ types of coupons; each draw is uniform over the $n$ types, independent of the others. Let $\tau$ be the number of coupons collected until all $n$ types are obtained. Decomposing $\tau$ into the waiting times between successive new types (the $k$-th new type arrives after a geometric number of draws with success probability $(n-k+1)/n$) and summing expectations gives $\mathbb{E}[\tau] = n H_n \sim n \log n$, where $H_n = \sum_{k=1}^n 1/k$ is the $n$-th harmonic number.
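A simulation comparing the empirical mean of $\tau$ with the prediction $n H_n$; the choice $n = 20$ and the trial count are illustrative:

```python
import random

def collect_all(n, rng):
    """Number of uniform draws from {0, ..., n-1} needed to see every type at least once."""
    seen = set()
    draws = 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(3)
n, trials = 20, 5_000
empirical = sum(collect_all(n, rng) for _ in range(trials)) / trials
predicted = n * sum(1 / k for k in range(1, n + 1))  # n * H_n, about 72 for n = 20
# empirical and predicted should agree up to Monte Carlo error.
```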


Summary

Stopping times formalize the notion of "when to stop" based on observable information:

  • Definition: $\{\tau = n\} \in \mathcal{F}_n$ (the decision depends only on the past).
  • Optional stopping: Under appropriate conditions, $\mathbb{E}[M_\tau] = \mathbb{E}[M_0]$ for martingales.
  • Wald's identity: $\mathbb{E}[S_\tau] = \mu\,\mathbb{E}[\tau]$ for i.i.d. sums.
  • Applications: Optimal stopping, gambling strategies, sequential analysis.

Stopping times are central to the theory of martingales and stochastic optimization.