The Markov Property
The Markov property is the fundamental assumption that the future evolution of a stochastic process depends only on its present state, not on its past history. This "memoryless" property makes Markov chains tractable for analysis while remaining rich enough to model many real-world phenomena.
Discrete-time Markov chains
A discrete-time stochastic process $(X_n)_{n \ge 0}$ taking values in a countable state space $S$ is a Markov chain if for all $n \ge 0$ and all states $i_0, \dots, i_{n-1}, i, j \in S$:
$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i),$$
whenever the conditional probabilities are well-defined.
The right-hand side is called the transition probability from state $i$ to state $j$.
A Markov chain is time-homogeneous (or stationary) if the transition probabilities do not depend on time: $P(X_{n+1} = j \mid X_n = i) = P(X_1 = j \mid X_0 = i)$ for all $n \ge 0$. Unless stated otherwise, we work with time-homogeneous chains and denote
$$p_{ij} = P(X_{n+1} = j \mid X_n = i).$$
The matrix $P = (p_{ij})_{i,j \in S}$ is called the transition matrix.
On $S = \mathbb{Z}$, let $X_0 = 0$ and
$$X_{n+1} = X_n + \xi_{n+1},$$
where $\xi_1, \xi_2, \dots$ are i.i.d. with $P(\xi_k = +1) = p$ and $P(\xi_k = -1) = 1 - p$. Then $(X_n)$ is a Markov chain with transition probabilities:
$$p_{i,i+1} = p, \qquad p_{i,i-1} = 1 - p, \qquad p_{ij} = 0 \text{ otherwise}.$$
This is the simple random walk on $\mathbb{Z}$.
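The walk just described is easy to simulate. The following minimal Python sketch (the function name `simple_random_walk` is illustrative) draws the i.i.d. $\pm 1$ increments and accumulates them:

```python
import random

def simple_random_walk(n_steps, p=0.5, seed=0):
    """Simulate the simple random walk on the integers:
    start at 0, step +1 with probability p and -1 otherwise."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        x += 1 if rng.random() < p else -1  # i.i.d. increment xi_{n+1}
        path.append(x)
    return path

path = simple_random_walk(1000)
```

Every consecutive pair in the returned path differs by exactly 1, reflecting that the only nonzero transition probabilities are to the two neighboring states.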
Let $S = \{0, 1\}$ with transition matrix
$$P = \begin{pmatrix} 1 - \alpha & \alpha \\ \beta & 1 - \beta \end{pmatrix}, \qquad \alpha, \beta \in [0, 1].$$
If $\alpha, \beta \in (0, 1)$, the chain moves between states with positive probability. The stationary distribution (if it exists) satisfies $\pi P = \pi$, giving
$$\pi = \left( \frac{\beta}{\alpha + \beta}, \frac{\alpha}{\alpha + \beta} \right).$$
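The closed-form stationary distribution $\pi = (\beta, \alpha)/(\alpha + \beta)$ can be sanity-checked numerically. A minimal sketch (helper names are illustrative, values $\alpha = 0.3$, $\beta = 0.4$ chosen arbitrarily):

```python
def two_state_stationary(alpha, beta):
    """Closed-form stationary distribution pi = (beta, alpha) / (alpha + beta)."""
    s = alpha + beta
    return (beta / s, alpha / s)

def one_step(pi, alpha, beta):
    """Return pi @ P for P = [[1 - alpha, alpha], [beta, 1 - beta]]."""
    p0, p1 = pi
    return (p0 * (1 - alpha) + p1 * beta, p0 * alpha + p1 * (1 - beta))

pi = two_state_stationary(0.3, 0.4)
after = one_step(pi, 0.3, 0.4)
# pi P equals pi up to floating-point rounding
```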
The Chapman-Kolmogorov equation
The $n$-step transition probability is
$$p_{ij}^{(n)} = P(X_n = j \mid X_0 = i),$$
the probability of being in state $j$ exactly $n$ steps after starting in state $i$.
For all states $i, j \in S$ and all integers $m, n \ge 0$:
$$p_{ij}^{(m+n)} = \sum_{k \in S} p_{ik}^{(m)} p_{kj}^{(n)}.$$
In matrix form, $P^{(m+n)} = P^{(m)} P^{(n)}$, i.e., $P^{(n)} = P^n$.
The proof follows from the law of total probability and the Markov property: to go from $i$ to $j$ in $m + n$ steps, the chain must pass through some intermediate state $k$ at time $m$.
For the two-state chain with
$$P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$
(that is, $\alpha = 0.3$, $\beta = 0.4$), we compute
$$P^2 = \begin{pmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{pmatrix}.$$
Thus, $p_{00}^{(2)} = 0.61$: starting in state 0, the probability of being in state 0 after 2 steps is 0.61.
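The two-step computation can be reproduced in a few lines of Python. The matrix below, with $\alpha = 0.3$ and $\beta = 0.4$, is one choice consistent with $p_{00}^{(2)} = 0.61$:

```python
def mat_mul(A, B):
    """Multiply square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Illustrative two-state transition matrix (alpha = 0.3, beta = 0.4).
P = [[0.7, 0.3],
     [0.4, 0.6]]
P2 = mat_mul(P, P)  # Chapman-Kolmogorov: P^(2) = P * P
# P2[0][0] = 0.7 * 0.7 + 0.3 * 0.4 = 0.61
```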
Strong Markov property
A random variable $T$ taking values in $\{0, 1, 2, \dots\} \cup \{\infty\}$ is a stopping time with respect to $(X_n)$ if the event $\{T = n\}$ depends only on $X_0, X_1, \dots, X_n$ for each $n \ge 0$.
Intuitively, the decision to stop at time $n$ can be made using only information available up to time $n$.
For a set $A \subseteq S$, the hitting time (or first passage time) is
$$T_A = \inf\{ n \ge 0 : X_n \in A \},$$
with the convention $\inf \emptyset = \infty$.
Then $T_A$ is a stopping time: $\{T_A = n\}$ occurs iff $X_n \in A$ and $X_k \notin A$ for all $k < n$, which depends only on $X_0, \dots, X_n$.
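The stopping-time property has a concrete computational meaning: the hitting time can be read off a sample path without ever looking ahead. A minimal sketch (the helper `hitting_time` is illustrative):

```python
def hitting_time(path, A):
    """First passage time T_A = inf{n >= 0 : X_n in A}.
    None stands in for infinity (the set is never hit on this path)."""
    for n, x in enumerate(path):
        if x in A:  # decided from X_0..X_n only: no look-ahead
            return n
    return None

# A path that first enters A = {2} at time 2, and one that never enters {5}.
```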
Let $T$ be a stopping time with $P(T < \infty) > 0$. Then, conditionally on $\{T < \infty\}$ and $X_T = i$, the process $(X_{T+n})_{n \ge 0}$ is a Markov chain with transition matrix $P$ starting from $i$, independent of $X_0, X_1, \dots, X_T$.
The strong Markov property says that "restarting" the chain at a random (stopping) time preserves the Markov property. This is crucial for analyzing recurrence and transience.
Using the strong Markov property, we can decompose long-run behavior into i.i.d. excursions between visits to a fixed state. This leads to the classification of states into recurrent and transient states (see Theorem 1.3 and related content).
Irreducibility and aperiodicity
A Markov chain is irreducible if for every pair of states $i, j \in S$, there exists $n \ge 0$ such that $p_{ij}^{(n)} > 0$. In other words, every state is accessible from every other state.
The period of a state $i$ is
$$d(i) = \gcd\{ n \ge 1 : p_{ii}^{(n)} > 0 \}.$$
State $i$ is aperiodic if $d(i) = 1$. A chain is aperiodic if all its states are aperiodic.
The simple symmetric random walk on $\mathbb{Z}$ has period 2: starting from the origin, $X_n$ is even if $n$ is even and odd if $n$ is odd. Hence $p_{00}^{(n)} = 0$ for all odd $n$.
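The parity argument can be checked by simulation: along any sample path of the symmetric walk, $X_n$ and $n$ always share the same parity, so a return to 0 at an odd time is impossible. A minimal sketch:

```python
import random

rng = random.Random(42)
x = 0
parity_ok = True
for n in range(1, 500):
    x += 1 if rng.random() < 0.5 else -1
    parity_ok = parity_ok and (x % 2 == n % 2)  # X_n = n (mod 2)
# parity_ok holds on every sample path, not just this seeded one:
# each step changes X_n by exactly 1, flipping its parity in lockstep with n.
```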
For an irreducible, aperiodic Markov chain on a finite state space, the distribution of $X_n$ converges to a unique stationary distribution $\pi$ as $n \to \infty$, regardless of the initial state $X_0$. This is the content of the convergence theorem (see Theorem 1.4).
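For the two-state chain this convergence is easy to observe numerically: repeated multiplication flattens the rows of $P^n$ toward the stationary distribution $(\beta, \alpha)/(\alpha + \beta)$. A sketch with illustrative values $\alpha = 0.3$, $\beta = 0.4$:

```python
def mat_mul(A, B):
    """Multiply square matrices stored as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.7, 0.3],
     [0.4, 0.6]]   # alpha = 0.3, beta = 0.4 (illustrative)
Pn = P
for _ in range(60):
    Pn = mat_mul(Pn, P)  # Pn is now P^61
# Both rows approach pi = (4/7, 3/7): the starting state is forgotten.
```

The convergence is geometric here because the second eigenvalue of $P$ is $1 - \alpha - \beta = 0.3$, so the rows agree to many decimal places after a few dozen steps.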
Summary
The Markov property provides a mathematically tractable framework for modeling random evolution:
- Memorylessness: Future depends only on present, not past.
- Chapman-Kolmogorov: $n$-step transitions multiply as matrices, $P^{(n)} = P^n$.
- Strong Markov property: Restarting at a stopping time preserves the Markov property.
- Irreducibility and aperiodicity: Ensure long-run convergence to equilibrium.
These foundational concepts underpin the entire theory of Markov chains, from finite-state models to general state space processes.