Module 1: Concepts of Random Walks, Markov Chains, Markov Processes
  Lecture 4: Markov Process
 

Theorem 1.9 [Ergodic theorem]

For every finite, irreducible, aperiodic Markov chain with transition probability matrix $P$, we have $\lim_{n \to \infty} p_{ij}^{(n)} = v_j$ for all $i$, where $v_j > 0$ and $\sum_j v_j = 1$, and also remember this limiting distribution is equal to the stationary distribution of the Markov chain.
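Before going to the proof, the statement of the theorem can be seen numerically: for a small irreducible, aperiodic chain, every row of $P^n$ converges to the same vector as $n$ grows, so the limit does not depend on the starting state. A minimal sketch, using a made-up 2-state transition matrix (my own example, not from the lecture):

```python
# Repeatedly multiply a made-up 2-state transition matrix by itself and
# observe that both rows of P^n converge to the same limiting vector.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.9, 0.1],
     [0.5, 0.5]]

# Raise P to a large power by repeated multiplication.
Pn = P
for _ in range(100):
    Pn = mat_mul(Pn, P)

# Both rows of P^n now agree to high precision: the limit does not
# depend on the starting state i.
print(Pn[0])  # approximately [5/6, 1/6]
print(Pn[1])  # approximately [5/6, 1/6]
```

For this matrix the balance equations give the stationary distribution $(5/6, 1/6)$, and that is exactly what every row of $P^n$ approaches.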

Proof of Theorem 1.9 [Ergodic theorem]

Now for a Markov chain in which the states are aperiodic and persistent non-null, for every pair of states $(i, j)$ we have $\lim_{n \to \infty} p_{ij}^{(n)} = f_{ij}/\mu_j$, where $f_{ij}$ is the probability of ever reaching $j$ starting from $i$ and $\mu_j$ is the mean recurrence time of $j$. As the chain is irreducible and $j$ is persistent, $f_{ij} = 1$, which means that $\lim_{n \to \infty} p_{ij}^{(n)} = 1/\mu_j$ and is independent of $i$. Also the sums of rows should add up to 1 (in the limiting sense), i.e., $\sum_j p_{ij}^{(n)} = 1$ for every $n$, where we set $v_j = \lim_{n \to \infty} p_{ij}^{(n)} = 1/\mu_j$, such that $v_j > 0$ and $\sum_j v_j \le 1$.
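The relation between the limiting probability of a state and its mean recurrence time, $v_j = 1/\mu_j$, can be checked by simulation. A minimal sketch, assuming a made-up 2-state chain whose stationary probability of state 1 is $1/6$ (so its mean recurrence time should be 6); this is my own illustration, not part of the proof:

```python
import random

# Estimate the mean recurrence time mu_1 of state 1 by simulating the
# chain and recording the gaps between successive visits to state 1.

random.seed(0)

P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state):
    """Draw the next state according to row P[state]."""
    return 0 if random.random() < P[state][0] else 1

state = 1
return_times = []
t = 0
for _ in range(200_000):
    state = step(state)
    t += 1
    if state == 1:
        return_times.append(t)  # time since the last visit to state 1
        t = 0

mu_1 = sum(return_times) / len(return_times)
print(mu_1)      # close to 6, the mean recurrence time of state 1
print(1 / mu_1)  # close to 1/6, the limiting probability v_1
```

The empirical mean return time comes out near 6, matching $v_1 = 1/\mu_1 = 1/6$.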

Moreover it is known that

(i) $\sum_j p_{ij}^{(n)} = 1$ for every $n$, and

(ii) $p_{ij}^{(n+1)} = \sum_k p_{ik}^{(n)} \, p_{kj}$ (the Chapman–Kolmogorov equations)

are both true, hence letting $n \to \infty$ we can write (ii) using (i) as



$$v_j \;\ge\; \sum_k v_k \, p_{kj}.$$

This we get from Fatou's Lemma, which says that for a sequence of non-negative measurable functions, the integral (sum) of the limit infimum of the functions is less than or equal to the limit infimum of the integral (sum) of the functions, i.e.,
$$\int \liminf_{n \to \infty} f_n \, d\mu \;\le\; \liminf_{n \to \infty} \int f_n \, d\mu.$$
A classical example for which the inequality is strict is the "moving bump" sequence, e.g. $f_n(k) = \mathbf{1}\{k = n\}$ on the natural numbers, or $f_n = \mathbf{1}_{[n,\,n+1)}$ on the real line: each $f_n$ sums (integrates) to 1, while the pointwise limit infimum is identically 0. Just for your convenience, the reverse inequality, with limit supremum in place of limit infimum and $\ge$ in place of $\le$, holds when the sequence is dominated by an integrable function $g$, i.e., $f_n \le g$ for all $n$; this is the reverse of Fatou's Lemma.
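The strictness of Fatou's inequality is easy to see in the discrete (counting-measure) setting. A minimal sketch, assuming the moving-bump sequence $f_n(k) = \mathbf{1}\{k = n\}$ on a truncated index set (my own illustration):

```python
# Moving bump on the natural numbers: f_n(k) = 1 if k == n else 0.
# Every f_n sums to 1, but for each fixed k the values f_n(k) are
# eventually 0, so the pointwise liminf is identically 0.

N = 50  # truncate the index set {0, 1, ..., N-1} for the demonstration

def f(n, k):
    """Moving indicator bump: unit mass sitting at position n."""
    return 1 if k == n else 0

# sum over k of f_n(k) equals 1 for every n, so liminf_n of the sums is 1.
sums = [sum(f(n, k) for k in range(N)) for n in range(N)]

# Pointwise: for fixed k, f_n(k) = 0 once n > k, so the liminf is 0.
pointwise_liminf = [min(f(n, k) for n in range(k + 1, N))
                    for k in range(N - 1)]

print(min(sums))              # 1 : liminf of the sums
print(sum(pointwise_liminf))  # 0 : sum of the pointwise liminf
```

Here $\sum_k \liminf_n f_n(k) = 0$ while $\liminf_n \sum_k f_n(k) = 1$, so Fatou's inequality $0 \le 1$ holds strictly.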

Thus we have
$$v_j \;\ge\; \sum_k v_k \, p_{kj} \quad \text{for all } j.$$

Now suppose the strict inequality held for some $j$, i.e., $v_j > \sum_k v_k \, p_{kj}$. Summing both sides over $j$ and interchanging the order of summation would give
$$\sum_j v_j \;>\; \sum_j \sum_k v_k \, p_{kj} \;=\; \sum_k v_k \sum_j p_{kj} \;=\; \sum_k v_k,$$
since each row of the transition matrix sums to 1. But ask yourself: is that possible? The two sides are the very same sum $\sum_j v_j$, so a strict inequality cannot hold!

Hence equality has to hold, i.e., $v_j = \sum_k v_k \, p_{kj}$ for every $j$. Remember, the distribution which we get in the limiting case is therefore the stationary distribution.
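The conclusion of the proof, that the limiting vector satisfies the stationarity equations, can be verified numerically. A minimal sketch, using a made-up 3-state transition matrix (my own example): we take a row of a high power of $P$ as the limiting vector and check that multiplying it by $P$ leaves it unchanged.

```python
# Take the limiting row of P^n for a made-up 3-state chain and verify
# the stationarity equations: (pi P)_j == pi_j for every j.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]

Pn = P
for _ in range(200):
    Pn = mat_mul(Pn, P)

pi = Pn[0]  # any row of the limit; they all agree

# Check stationarity: pi P should reproduce pi component-wise.
pi_P = [sum(pi[k] * P[k][j] for k in range(3)) for j in range(3)]
print(pi)
print(pi_P)  # same vector as pi, confirming pi = pi P
```

The printed vectors coincide (to numerical precision), confirming that the limiting distribution is the stationary distribution.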

Remember that if a Markov chain is irreducible and aperiodic, and if there exists a unique stationary distribution $\{v_j\}$ for the Markov chain, then the chain is ergodic and $\lim_{n \to \infty} p_{ij}^{(n)} = v_j$ for all $i$ and $j$.