
Recurrence and Transience

A fundamental question for Markov chains: starting from a state i, will the chain return to i infinitely often, or will it eventually leave i forever? This dichotomy leads to the classification of states as recurrent or transient.


First return times

Definition 1.1 (First return time)

For a state i \in S, the first return time to i is

\tau_i = \inf\{n \geq 1 : X_n = i\}.

The return probability is

f_{ii} = \mathbb{P}(\tau_i < \infty \mid X_0 = i).

Intuitively, f_{ii} is the probability that the chain ever returns to i after starting from i.

Definition 1.2 (Recurrent and transient states)

A state i is recurrent if f_{ii} = 1, i.e., starting from i, the chain returns to i with probability 1.

A state i is transient if f_{ii} < 1, i.e., there is positive probability that the chain never returns to i.

Remark (Number of visits)

Let N_i = \sum_{n=0}^\infty \mathbf{1}_{X_n = i} be the total number of visits to state i. Then:

  • i is recurrent iff \mathbb{P}(N_i = \infty \mid X_0 = i) = 1.
  • i is transient iff \mathbb{E}[N_i \mid X_0 = i] < \infty.

For transient states, the chain visits i only finitely many times (a.s.).
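For a transient state, successive returns happen independently with probability f_{ii}, so the number of visits (counting time 0) is geometric with \mathbb{E}[N_i \mid X_0 = i] = 1/(1 - f_{ii}). A minimal simulation sketch, using a hypothetical chain in which state 0 is revisited with probability 1/2 per step and otherwise left for an absorbing trap (so f_{00} = 1/2):

```python
import random

def visits_to_zero(p_return=0.5, rng=random):
    """Count visits to state 0 for a toy transient chain: from 0 the
    chain returns to 0 w.p. p_return, else jumps to an absorbing trap."""
    visits = 1  # X_0 = 0 counts as a visit
    while rng.random() < p_return:
        visits += 1
    return visits

random.seed(0)
n_trials = 100_000
mean_visits = sum(visits_to_zero() for _ in range(n_trials)) / n_trials
print(mean_visits)  # ≈ 1/(1 - f_00) = 1/(1 - 0.5) = 2
```

The empirical mean matches 1/(1 - f_{00}) = 2, as the remark predicts.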


Characterization via transition probabilities

Theorem 1.1 (Recurrence via infinite series)

A state i is recurrent if and only if

\sum_{n=0}^\infty p_{ii}^{(n)} = \infty.

Otherwise, i is transient.

Proof sketch: The expected number of visits to i starting from i is

\mathbb{E}[N_i \mid X_0 = i] = \sum_{n=0}^\infty \mathbb{P}(X_n = i \mid X_0 = i) = \sum_{n=0}^\infty p_{ii}^{(n)}.

For recurrent states, the chain returns infinitely often, so \mathbb{E}[N_i \mid X_0 = i] = \infty. For transient states, \mathbb{E}[N_i \mid X_0 = i] = 1/(1 - f_{ii}) < \infty.

Example (Simple random walk on \mathbb{Z})

For the simple symmetric random walk on \mathbb{Z} (with p = 1/2), Stirling's approximation gives

p_{00}^{(2n)} \sim \frac{1}{\sqrt{\pi n}}.

Hence,

\sum_{n=0}^\infty p_{00}^{(2n)} \sim \sum_{n=1}^\infty \frac{1}{\sqrt{\pi n}} = \infty.

The origin is recurrent, and by translation invariance, every state is recurrent. The walk on \mathbb{Z} is recurrent.
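A quick numerical check of the asymptotics: the exact 2n-step return probability is p_{00}^{(2n)} = \binom{2n}{n} 4^{-n}, which the sketch below compares against 1/\sqrt{\pi n}:

```python
import math

def p00(n):
    """Exact 2n-step return probability C(2n, n) / 4^n
    of the simple symmetric walk on Z."""
    return math.comb(2 * n, n) / 4 ** n

for n in (10, 100, 1000):
    approx = 1 / math.sqrt(math.pi * n)
    print(n, p00(n), approx, p00(n) / approx)  # ratio tends to 1
```

The ratio approaches 1 as n grows, confirming the Stirling estimate.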

For p \neq 1/2 (asymmetric walk), the steps have nonzero mean, so by the law of large numbers the walk drifts to +\infty or -\infty and is transient.

Example (Random walk on \mathbb{Z}^d)

The simple symmetric random walk on \mathbb{Z}^d is:

  • Recurrent for d = 1, 2.
  • Transient for d \geq 3.

This is Pólya's theorem (1921). In higher dimensions, the walk has "more room to escape" and does not return with probability 1.
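A Monte Carlo sketch of this contrast (the step budget and trial counts are illustrative; a finite horizon can only approximate the event \tau_0 < \infty, so the d = 1, 2 estimates undershoot 1, while the d = 3 estimate is close to Pólya's return probability \approx 0.34):

```python
import random

def returns_to_origin(d, max_steps, rng):
    """Run a simple symmetric walk on Z^d; True if it revisits the
    origin within max_steps (a finite-horizon proxy for recurrence)."""
    pos = [0] * d
    for _ in range(max_steps):
        axis = rng.randrange(d)
        pos[axis] += rng.choice((-1, 1))
        if all(x == 0 for x in pos):
            return True
    return False

rng = random.Random(1)
est = {}
for d in (1, 2, 3):
    trials = 1000
    est[d] = sum(returns_to_origin(d, 2000, rng) for _ in range(trials)) / trials
    print(d, est[d])  # estimated return probability within 2000 steps
```

The estimates decrease with dimension, reflecting the "more room to escape" heuristic.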


Positive and null recurrence

Definition 1.3 (Mean return time)

For a recurrent state i, the mean return time is

\mu_i = \mathbb{E}[\tau_i \mid X_0 = i].

State i is positive recurrent if \mu_i < \infty and null recurrent if \mu_i = \infty.

Positive recurrence means the chain returns to i quickly on average; null recurrence means returns occur infinitely often but with arbitrarily long gaps.

Example (Null recurrence in dimensions 1 and 2)

  • Dimension 1: The symmetric random walk on \mathbb{Z} is recurrent, but \mathbb{E}[\tau_0 \mid X_0 = 0] = \infty. Hence, it is null recurrent.
  • Dimension 2: Similarly, the symmetric random walk on \mathbb{Z}^2 is null recurrent.
  • Dimension \geq 3: The walk is transient, so the question of positive/null recurrence does not arise.

Remark (Positive recurrence and stationary distributions)

For an irreducible Markov chain:

  • The chain is positive recurrent iff there exists a stationary distribution \pi.
  • If positive recurrent, \pi_i = 1/\mu_i.

Irreducible chains that are null recurrent or transient have no stationary distribution.
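The identity \pi_i = 1/\mu_i can be checked by hand on a two-state chain (the transition probabilities a, b below are illustrative):

```python
# Two-state chain: 0 -> 1 w.p. a, 1 -> 0 w.p. b (illustrative values).
a, b = 0.3, 0.7

# Stationary distribution, from solving pi P = pi by hand:
pi0 = b / (a + b)

# Mean return time to 0 by first-step analysis: from 0, return
# immediately w.p. 1-a; otherwise move to 1, where the waiting time
# to jump back to 0 is geometric with mean 1/b.
mu0 = (1 - a) * 1 + a * (1 + 1 / b)

print(pi0, 1 / mu0)  # the two numbers agree: pi_0 = 1/mu_0
```

Here \mu_0 = 1 + a/b = (a+b)/b, so 1/\mu_0 = b/(a+b) = \pi_0 exactly.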


Class properties

Theorem 1.2 (Recurrence is a class property)

In an irreducible Markov chain, either all states are recurrent or all states are transient. Similarly, in a recurrent irreducible chain, either all states are positive recurrent or all are null recurrent.

This follows from the fact that in an irreducible chain all states communicate, and recurrence/transience and positive/null recurrence are shared by states that communicate.

Example (Finite state space)

Every irreducible Markov chain on a finite state space is positive recurrent. (If all states were transient, each state would be visited only finitely often; but with finitely many states, the chain must visit some state infinitely often, a contradiction.)

Example (Galton-Watson branching process)

A Galton-Watson branching process with offspring distribution \{p_k\} can be viewed as a Markov chain on S = \{0, 1, 2, \ldots\} (the population size). State 0 is absorbing: p_{00} = 1. If the mean offspring number \mu = \sum_k k p_k \leq 1 (excluding the trivial case p_1 = 1), the process dies out (reaches 0) with probability 1. If \mu > 1, there is positive probability of survival forever, so the states \geq 1 are transient.
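The extinction probability is the smallest fixed point of the offspring probability generating function f, obtained by iterating q \mapsto f(q) from q = 0. A sketch assuming a Poisson offspring law (the means 0.8 and 1.5 are illustrative choices, subcritical and supercritical respectively):

```python
import math

def extinction_prob(mean, iters=10_000):
    """Smallest fixed point of the Poisson(mean) offspring pgf
    f(s) = exp(mean * (s - 1)), via q <- f(q) starting from q = 0."""
    q = 0.0
    for _ in range(iters):
        q = math.exp(mean * (q - 1))
    return q

print(extinction_prob(0.8))  # subcritical: extinction certain (-> 1.0)
print(extinction_prob(1.5))  # supercritical: extinction prob < 1
```

For mean 1.5 the iteration converges to q \approx 0.417, the unique root of q = e^{1.5(q-1)} in [0, 1).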


Hitting probabilities and absorption

Definition 1.4 (Hitting probability)

For states i, j \in S, the hitting probability is

h_{ij} = \mathbb{P}(\tau_j < \infty \mid X_0 = i),

the probability of ever reaching j starting from i.

Theorem 1.3 (Hitting probabilities satisfy a system of equations)

The hitting probabilities h_{ij} satisfy

h_{ij} = \begin{cases} 1 & \text{if } i = j, \\ \sum_{k \in S} p_{ik} h_{kj} & \text{if } i \neq j. \end{cases}

This is a linear system in the unknowns h_{ij} (for fixed j). On an infinite state space the system may have several non-negative solutions; the hitting probabilities are the minimal one.
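One way to solve the system numerically: iterate h \leftarrow Ph with the boundary value pinned to 1, starting from h = 0; this converges to the minimal non-negative solution. A sketch on a hypothetical four-state chain (states 0 and 1 transient, state 2 the target, state 3 a trap; the matrix entries are made up for illustration):

```python
# Transition matrix of a hypothetical chain: 2 = target, 3 = trap.
P = [
    [0.5, 0.5, 0.0, 0.0],
    [0.3, 0.0, 0.4, 0.3],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]
target = 2

# Iterate h <- P h with h[target] pinned to 1; starting from h = 0
# this converges to the minimal non-negative solution of the system.
h = [0.0, 0.0, 1.0, 0.0]
for _ in range(1000):
    h = [sum(P[i][k] * h[k] for k in range(4)) for i in range(4)]
    h[target] = 1.0

print(h)  # h[0] = h[1] = 4/7, h[3] = 0
```

Solving the two equations h_0 = 0.5 h_0 + 0.5 h_1 and h_1 = 0.3 h_0 + 0.4 by hand gives h_0 = h_1 = 4/7, matching the iteration.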

Example (Gambler's ruin)

A gambler starts with k dollars and plays a game: win 1 with probability p, lose 1 with probability q = 1 - p. The game stops when the wealth reaches 0 (ruin) or N (target). Let h_k be the probability of reaching N before 0. Then:

h_k = p h_{k+1} + q h_{k-1}, \quad h_0 = 0, \; h_N = 1.

Solving this recurrence relation:

  • If p = 1/2: h_k = k/N.
  • If p \neq 1/2: h_k = \frac{1 - (q/p)^k}{1 - (q/p)^N}.

As N \to \infty, if p > 1/2, h_k \to 1 - (q/p)^k > 0: with positive probability the gambler is never ruined and the wealth grows without bound. If p < 1/2, h_k \to 0: ruin is certain.
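A sanity check of the closed form against simulation (the parameters k = 3, N = 10, p = 0.45 are illustrative):

```python
import random

def ruin_exact(k, N, p):
    """Closed-form probability of reaching N before 0 from wealth k."""
    if p == 0.5:
        return k / N
    r = (1 - p) / p  # the ratio q/p
    return (1 - r ** k) / (1 - r ** N)

def ruin_sim(k, N, p, trials, rng):
    """Monte Carlo estimate of the same probability."""
    wins = 0
    for _ in range(trials):
        x = k
        while 0 < x < N:
            x += 1 if rng.random() < p else -1
        wins += (x == N)
    return wins / trials

k, N, p = 3, 10, 0.45
exact = ruin_exact(k, N, p)
approx = ruin_sim(k, N, p, 20_000, random.Random(2))
print(exact, approx)  # the two values should be close
```

For p = 1/2 the same function returns k/N, e.g. ruin_exact(3, 10, 0.5) = 0.3.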


Strong Markov property and renewal theory

The strong Markov property (restarting at a stopping time) allows us to decompose the chain's trajectory into i.i.d. excursions. This leads to a connection with renewal theory:

  • Let T_1, T_2, \ldots be the successive return times to state i.
  • Then the gaps \{T_n - T_{n-1}\} are i.i.d. with the distribution of \tau_i.
  • For positive recurrent states, the number of visits to i up to time n is approximately n / \mu_i.
Remark (Ergodic theorem for Markov chains)

For an irreducible, aperiodic, positive recurrent chain with stationary distribution \pi:

\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \mathbf{1}_{X_k = j} = \pi_j \quad \text{a.s.}

This is the Markov chain analogue of the strong law of large numbers. The long-run proportion of time spent in state j equals \pi_j = 1/\mu_j.
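A simulation sketch of the time-average statement, again on the two-state chain 0 \to 1 w.p. a, 1 \to 0 w.p. b, whose stationary distribution is \pi_0 = b/(a+b) (the values of a and b are illustrative):

```python
import random

a, b = 0.2, 0.3       # illustrative transition probabilities
pi0 = b / (a + b)     # stationary probability of state 0 (= 0.6)

rng = random.Random(3)
x, visits0, n = 0, 0, 200_000
for _ in range(n):
    visits0 += (x == 0)
    if x == 0:
        x = 1 if rng.random() < a else 0
    else:
        x = 0 if rng.random() < b else 1

print(visits0 / n, pi0)  # empirical fraction of time in state 0 vs pi_0
```

The empirical occupation fraction converges to \pi_0 as n grows, as the ergodic theorem asserts.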


Summary

Recurrence and transience govern the long-run behavior of Markov chains:

  • Recurrent states: Visited infinitely often (a.s.).
  • Transient states: Visited only finitely many times (a.s.).
  • Positive recurrence: Recurrent with finite mean return time; admits a stationary distribution.
  • Null recurrence: Recurrent but mean return time is infinite; no stationary distribution.

Examples:

  • Symmetric random walk on \mathbb{Z}: null recurrent.
  • Symmetric random walk on \mathbb{Z}^2: null recurrent.
  • Symmetric random walk on \mathbb{Z}^d (d \geq 3): transient.
  • Any irreducible chain on a finite state space: positive recurrent.