Recurrence and Transience
A fundamental question for Markov chains: starting from a state $x$, will the chain return to $x$ infinitely often, or will it eventually leave $x$ forever? This dichotomy leads to the classification of states as recurrent or transient.
First return times
For a state $x$, the first return time to $x$ is
$$\tau_x = \inf\{n \ge 1 : X_n = x\},$$
with $\tau_x = \infty$ if the chain never returns. The return probability is
$$f_x = \mathbb{P}_x(\tau_x < \infty).$$
Intuitively, $f_x$ is the probability that the chain ever returns to $x$ after starting from $x$.
A state $x$ is recurrent if $f_x = 1$, i.e., starting from $x$, the chain returns to $x$ with probability 1.
A state $x$ is transient if $f_x < 1$, i.e., there is positive probability the chain never returns to $x$.
Let $N_x = \sum_{n=1}^{\infty} \mathbf{1}\{X_n = x\}$ be the total number of visits to state $x$. Then:
- $x$ is recurrent iff $\mathbb{P}_x(N_x = \infty) = 1$.
- $x$ is transient iff $\mathbb{P}_x(N_x < \infty) = 1$.
For transient states, the chain visits $x$ only finitely many times (a.s.).
Characterization via transition probabilities
A state $x$ is recurrent if and only if
$$\sum_{n=1}^{\infty} p^{(n)}_{xx} = \infty.$$
Otherwise, $x$ is transient.
Proof sketch: The expected number of visits to $x$ starting from $x$ is
$$\mathbb{E}_x[N_x] = \sum_{n=1}^{\infty} p^{(n)}_{xx}.$$
For recurrent states, the chain returns infinitely often, so $\mathbb{E}_x[N_x] = \infty$. For transient states, $N_x$ is geometric with success probability $1 - f_x$, so $\mathbb{E}_x[N_x] = f_x/(1 - f_x) < \infty$.
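The divergence criterion can be checked numerically for small chains. As a sketch (the transition matrices below are illustrative choices, not from the text): for a chain with an absorbing state, the partial sums of $p^{(n)}_{xx}$ at a transient state converge, while for a finite irreducible chain (recurrent) they grow without bound.

```python
import numpy as np

def partial_sum(P, x, N):
    """Compute sum_{n=1}^{N} p^{(n)}_{xx} by repeated matrix multiplication."""
    total, Pn = 0.0, np.eye(len(P))
    for _ in range(N):
        Pn = Pn @ P          # Pn now holds P^n
        total += Pn[x, x]
    return total

# Hypothetical chain with absorbing state 2: states 0 and 1 are transient.
P_transient = np.array([[0.5, 0.3, 0.2],
                        [0.4, 0.4, 0.2],
                        [0.0, 0.0, 1.0]])

# Irreducible 2-state chain: finite and irreducible, hence recurrent.
P_recurrent = np.array([[0.7, 0.3],
                        [0.4, 0.6]])

s200 = partial_sum(P_transient, 0, 200)
s400 = partial_sum(P_transient, 0, 400)
r200 = partial_sum(P_recurrent, 0, 200)
r400 = partial_sum(P_recurrent, 0, 400)
print(s200, s400)   # converges: the two values are essentially equal
print(r200, r400)   # diverges: roughly doubles with the horizon
```

Doubling the horizon leaves the transient sum unchanged (it has a finite limit, namely $\mathbb{E}_x[N_x]$) but roughly doubles the recurrent sum.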
For the simple symmetric random walk on $\mathbb{Z}$ (with $p = 1/2$), Stirling's approximation gives
$$p^{(2n)}_{00} = \binom{2n}{n} 2^{-2n} \sim \frac{1}{\sqrt{\pi n}}.$$
Hence,
$$\sum_{n=1}^{\infty} p^{(2n)}_{00} = \infty.$$
The origin is recurrent, and by translation invariance, every state is recurrent. The walk on $\mathbb{Z}$ is recurrent.
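As a quick sanity check (a minimal sketch, not part of the original derivation), the exact return probabilities $p^{(2n)}_{00} = \binom{2n}{n}2^{-2n}$ can be compared against the asymptotic $1/\sqrt{\pi n}$:

```python
from math import comb, pi, sqrt

# Exact probability of being back at the origin at time 2n,
# versus the Stirling asymptotic 1/sqrt(pi*n).
ratios = {}
for n in (10, 100, 1000):
    exact = comb(2 * n, n) / 4**n
    approx = 1 / sqrt(pi * n)
    ratios[n] = exact / approx
    print(n, exact, approx)

# The ratio tends to 1 as n grows; since sum 1/sqrt(pi*n) diverges,
# so does sum p^{(2n)}_{00}, confirming recurrence of the origin.
```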
For $p \neq 1/2$ (asymmetric walk), the walk drifts to $+\infty$ or $-\infty$ and is transient.
The simple symmetric random walk on $\mathbb{Z}^d$ is:
- Recurrent for $d = 1, 2$.
- Transient for $d \ge 3$.
This is Pólya's theorem (1921). In higher dimensions, the walk has "more room to escape" and does not return with probability 1.
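Pólya's theorem can be illustrated (not proved) by simulation: the fraction of walks that revisit the origin within a fixed horizon decreases with dimension and, for $d = 3$, stays well below 1 (the eventual return probability in three dimensions is about 0.34). The horizon and sample size below are arbitrary choices.

```python
import random

def returns_to_origin(d, steps, rng):
    """True if a simple symmetric walk on Z^d revisits 0 within `steps` steps."""
    pos = [0] * d
    for _ in range(steps):
        axis = rng.randrange(d)            # pick a coordinate uniformly
        pos[axis] += rng.choice((-1, 1))   # step +1 or -1 along it
        if all(c == 0 for c in pos):
            return True
    return False

rng = random.Random(0)
frac = {}
for d in (1, 2, 3):
    trials = 500
    frac[d] = sum(returns_to_origin(d, 2000, rng) for _ in range(trials)) / trials
print(frac)   # return fraction drops with dimension; d=3 well below 1
```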
Positive and null recurrence
For a recurrent state $x$, the mean return time is
$$m_x = \mathbb{E}_x[\tau_x].$$
State $x$ is positive recurrent if $m_x < \infty$ and null recurrent if $m_x = \infty$.
Positive recurrence means the chain returns to $x$ quickly on average; null recurrence means returns occur infinitely often but with arbitrarily long gaps.
- Dimension 1: The symmetric random walk on $\mathbb{Z}$ is recurrent, but $\mathbb{E}_0[\tau_0] = \infty$. Hence, it is null recurrent.
- Dimension 2: Similarly, the symmetric random walk on $\mathbb{Z}^2$ is null recurrent.
- Dimension $d \ge 3$: The walk is transient, so the question of positive/null recurrence does not arise.
For an irreducible Markov chain:
- The chain is positive recurrent iff there exists a stationary distribution $\pi$.
- If positive recurrent, $\pi(x) = 1/m_x$ for every state $x$.
Null recurrent and transient chains have no stationary distribution.
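The identity $\pi(x) = 1/m_x$ can be verified exactly for a small chain by computing $\pi$ from $\pi P = \pi$ and the mean return times by first-step analysis (the $3 \times 3$ transition matrix below is an arbitrary example):

```python
import numpy as np

# Hypothetical irreducible 3-state chain.
P = np.array([[0.2, 0.5, 0.3],
              [0.6, 0.1, 0.3],
              [0.3, 0.3, 0.4]])
n = len(P)

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Mean return time m_x: with k_z = expected steps to hit x from z != x,
# first-step analysis gives (I - Q) k = 1, then m_x = 1 + sum_z p_{xz} k_z.
recip = np.empty(n)
for x in range(n):
    others = [z for z in range(n) if z != x]
    Q = P[np.ix_(others, others)]
    k = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    recip[x] = 1.0 / (1.0 + P[x, others] @ k)

print(pi)      # stationary distribution
print(recip)   # 1/m_x for each state x -- matches pi
```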
Class properties
In an irreducible Markov chain, either all states are recurrent or all states are transient. Similarly, either all recurrent states are positive recurrent, or all are null recurrent.
This follows from the fact that in an irreducible chain all states communicate, and recurrence/transience and positive/null recurrence are class properties: if one state in a communicating class has the property, so does every other state in the class.
Every irreducible Markov chain on a finite state space is positive recurrent. (If all states were transient, each would be visited only finitely often, yet the chain must occupy some state at every time; with finitely many states, at least one is visited infinitely often, a contradiction. Finiteness of the state space also rules out null recurrence.)
A Galton-Watson branching process with offspring distribution $(p_k)_{k \ge 0}$ can be viewed as a Markov chain on $\{0, 1, 2, \ldots\}$ (population size). State 0 is absorbing: $p_{00} = 1$. If the mean offspring number satisfies $\mu \le 1$ (and $p_1 < 1$), the process dies out (reaches 0) with probability 1. If $\mu > 1$, there is positive probability of survival forever, so the nonzero states are transient.
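A quick Monte Carlo illustration of the sub/supercritical dichotomy; the Poisson offspring law, sample sizes, and population cap below are convenience choices not fixed by the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def extinct_fraction(mu, runs=400, max_gen=60, cap=10_000):
    """Fraction of Galton-Watson runs (Poisson(mu) offspring) that hit 0."""
    extinct = 0
    for _ in range(runs):
        z = 1
        for _ in range(max_gen):
            if z == 0 or z > cap:   # a population this large survives (a.s.)
                break
            # next generation: total offspring of the z current individuals
            z = int(rng.poisson(mu, size=z).sum())
        extinct += (z == 0)
    return extinct / runs

sub = extinct_fraction(0.8)   # subcritical: dies out with probability 1
sup = extinct_fraction(1.6)   # supercritical: positive survival probability
print(sub, sup)
```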
Hitting probabilities and absorption
For states $x, y$, the hitting probability is
$$h_{xy} = \mathbb{P}_x(T_y < \infty), \qquad T_y = \inf\{n \ge 0 : X_n = y\},$$
the probability of ever reaching $y$ starting from $x$.
The hitting probabilities satisfy
$$h_{yy} = 1, \qquad h_{xy} = \sum_{z} p_{xz}\, h_{zy} \quad (x \neq y).$$
This is a linear system in the unknowns $h_{xy}$ (for fixed $y$).
A gambler starts with $k$ dollars and plays a game: win \$1 with probability $p$, lose \$1 with probability $q = 1 - p$. The game stops when wealth reaches $0$ (ruin) or $N$ (target). Let $h_k$ be the probability of reaching $N$ before $0$. Then:
$$h_k = p\, h_{k+1} + q\, h_{k-1}, \qquad h_0 = 0, \quad h_N = 1.$$
Solving this recurrence relation:
- If $p = 1/2$: $h_k = k/N$.
- If $p \neq 1/2$: $h_k = \dfrac{1 - (q/p)^k}{1 - (q/p)^N}$.
As $N \to \infty$: if $p > 1/2$, then $h_k \to 1 - (q/p)^k > 0$ (positive probability of never being ruined). If $p \le 1/2$, then $h_k \to 0$ (ruin is certain).
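The closed form can be checked against a direct numerical solve of the boundary-value recurrence (here with illustrative values $p = 0.6$, $N = 10$):

```python
import numpy as np

# Gambler's ruin: solve h_k = p h_{k+1} + q h_{k-1} with h_0 = 0, h_N = 1
# as a linear system, and compare with the closed form.
p, q, N = 0.6, 0.4, 10
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0                      # boundary condition h_0 = 0
A[N, N] = 1.0; b[N] = 1.0          # boundary condition h_N = 1
for k in range(1, N):
    A[k, k] = 1.0                  # interior: h_k - p h_{k+1} - q h_{k-1} = 0
    A[k, k + 1] = -p
    A[k, k - 1] = -q
h = np.linalg.solve(A, b)

r = q / p
closed = (1 - r**np.arange(N + 1)) / (1 - r**N)
print(np.allclose(h, closed))   # the two solutions agree
```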
Strong Markov property and renewal theory
The strong Markov property (restarting at a stopping time) allows us to decompose the chain's trajectory into i.i.d. excursions. This leads to a connection with renewal theory:
- Let $T_1 < T_2 < \cdots$ be the successive return times to state $x$.
- Then the gaps $T_1, T_2 - T_1, T_3 - T_2, \ldots$ are i.i.d. with the distribution of $\tau_x$ under $\mathbb{P}_x$.
- The number of visits to $x$ up to time $n$ is approximately $n/m_x$ for positive recurrent states.
For an irreducible, aperiodic, positive recurrent chain with stationary distribution $\pi$:
$$\frac{1}{n} \sum_{k=1}^{n} \mathbf{1}\{X_k = x\} \xrightarrow{\text{a.s.}} \pi(x).$$
This is the Markov chain analogue of the strong law of large numbers. The long-run proportion of time in state $x$ equals $\pi(x)$.
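The ergodic theorem can be illustrated by simulating a small chain and comparing empirical occupation frequencies with $\pi$ (the transition matrix is an arbitrary example):

```python
import random
import numpy as np

# Hypothetical irreducible, aperiodic 3-state chain.
P = [[0.2, 0.5, 0.3],
     [0.6, 0.1, 0.3],
     [0.3, 0.3, 0.4]]
rng = random.Random(42)

n_steps, state = 200_000, 0
counts = [0, 0, 0]
for _ in range(n_steps):
    state = rng.choices((0, 1, 2), weights=P[state])[0]
    counts[state] += 1
freq = [c / n_steps for c in counts]   # empirical long-run proportions

# Exact stationary distribution for comparison: pi P = pi, sum(pi) = 1.
A = np.vstack([np.array(P).T - np.eye(3), np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.append(np.zeros(3), 1.0), rcond=None)
print(freq)
print(pi)   # close to freq for large n
```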
Summary
Recurrence and transience govern the long-run behavior of Markov chains:
- Recurrent states: Visited infinitely often (a.s.).
- Transient states: Visited only finitely many times (a.s.).
- Positive recurrence: Recurrent with finite mean return time; admits a stationary distribution.
- Null recurrence: Recurrent but mean return time is infinite; no stationary distribution.
Examples:
- Symmetric random walk on $\mathbb{Z}$: null recurrent.
- Symmetric random walk on $\mathbb{Z}^2$: null recurrent.
- Symmetric random walk on $\mathbb{Z}^d$ ($d \ge 3$): transient.
- Any irreducible chain on a finite state space: positive recurrent.