# Chapman-Kolmogorov Equations

Stochastic processes and Markov chains are introduced in a previous post, and the post preceding this one gives a first look at transition probabilities, which are an integral part of the theory of Markov chains. This post shows how to calculate the $n$-step transition probabilities, and discusses and derives the Chapman-Kolmogorov equations.

Introductory Example

Suppose $\left\{X_n: n=0,1,2,\cdots \right\}$ is a Markov chain with the transition probability matrix $\mathbf{P}$. The entries of the matrix are the one-step transition probabilities $P_{ij}$. The number $P_{ij}$ is the probability that the Markov chain will move to state $j$ at time $m+1$ given that it is in state $i$ at time $m$, independent of where the chain was prior to time $m$. Thus $P_{ij}$ can be expressed as the following conditional probability.

$(1) \ \ \ \ \ P_{ij}=P[X_{m+1}=j \lvert X_m=i]$

Thus the future state only depends on the period immediately preceding it. This is called the Markov property. Also note that $P_{ij}$ is independent of the time period $m$. Any Markov chain with this property is called a time-homogeneous Markov chain or stationary Markov chain.

The focus of this post is to show how to calculate the probability that the Markov chain will move to state $j$ at time $n+m$ given that it is in state $i$ at time $m$. This probability is denoted by $P_{ij}^n$. In other words, $P_{ij}^n$ is the probability that the chain will be in state $j$ after the Markov chain goes through $n$ more periods given that it is in state $i$ in the current period. The probability $P_{ij}^n$, as a conditional probability, can be notated as follows:

$(2) \ \ \ \ \ P_{ij}^n=P[X_{m+n}=j \lvert X_m=i]$

In (2), the $n$ in the $n$-step transition probabilities satisfies $n \ge 0$. Note that when $n=0$, $P_{ij}^0=1$ for $i=j$ and $P_{ij}^0=0$ for $i \ne j$ (the 0-step transition matrix is the identity matrix). Including the case $n=0$ makes the Chapman-Kolmogorov equations below easier to state.

Before discussing the general method, we use examples to illustrate how to compute 2-step and 3-step transition probabilities. Consider a Markov chain with the following transition probability matrix.

$\mathbf{P} = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.6 & 0.2 & 0.2 \cr 1 & 0.3 & 0.5 & 0.2 \cr 2 & 0.4 & 0.3 & 0.3 \cr } \qquad$

Calculate the two-step transition probabilities $P_{02}^2$, $P_{12}^2$ and $P_{22}^2$. Then calculate the three-step transition probability $P_{02}^3$ using the two-step transition probabilities.

First, let’s handle $P_{02}^2$. We can condition on the first step. To go from state 0 to state 2 in two steps, the chain must first move to an interim state and then from that state move to state 2.

$\displaystyle \begin{aligned}P_{02}^2&=\sum \limits_{k=0}^2 P_{0k} \ P_{k2} \\&=P_{00} \ P_{02}+P_{01} \ P_{12}+P_{02} \ P_{22} \\&=0.6 \times 0.2+0.2 \times 0.2+0.2 \times 0.3 \\&=0.22 \end{aligned}$

Note that the above calculation lists out the three possible paths to go from state 0 to state 2 in two steps: from state 0 to state 0 and then to state 2, from state 0 to state 1 and then to state 2, and from state 0 to state 2 and then to state 2. Looking more closely, the calculation is the first row of $\mathbf{P}$ (the row corresponding to state 0) multiplying the third column of $\mathbf{P}$ (the column corresponding to state 2).

$P_{02}^2= \left[\begin{array}{ccc} 0.6 & 0.2 & 0.2 \\ \end{array}\right] \left[\begin{array}{c} 0.2 \\ 0.2 \\ 0.3 \end{array}\right]= \left[\begin{array}{c} 0.22 \end{array}\right]$

By the same idea, the following gives the other two 2-step transition probabilities.

$\displaystyle \begin{aligned}P_{12}^2&=\sum \limits_{k=0}^2 P_{1k} \ P_{k2} \\&=P_{10} \ P_{02}+P_{11} \ P_{12}+P_{12} \ P_{22} \\&=0.3 \times 0.2+0.5 \times 0.2+0.2 \times 0.3 \\&=0.22 \end{aligned}$

$\displaystyle \begin{aligned}P_{22}^2&=\sum \limits_{k=0}^2 P_{2k} \ P_{k2} \\&=P_{20} \ P_{02}+P_{21} \ P_{12}+P_{22} \ P_{22} \\&=0.4 \times 0.2+0.3 \times 0.2+0.3 \times 0.3 \\&=0.23 \end{aligned}$

As discussed, the above two calculations can be viewed as the sum over all the possible paths to go from the beginning state to the end state (conditioning on the interim state in the middle) or as a row in the transition probability matrix $\mathbf{P}$ multiplying a column in $\mathbf{P}$. The following shows all three calculations in terms of matrix calculation.

$P_{02}^2= \left[\begin{array}{ccc} 0.6 & 0.2 & 0.2 \\ \end{array}\right] \left[\begin{array}{c} 0.2 \\ 0.2 \\ 0.3 \end{array}\right]= \left[\begin{array}{c} 0.22 \end{array}\right]$

$P_{12}^2= \left[\begin{array}{ccc} 0.3 & 0.5 & 0.2 \\ \end{array}\right] \left[\begin{array}{c} 0.2 \\ 0.2 \\ 0.3 \end{array}\right]= \left[\begin{array}{c} 0.22 \end{array}\right]$

$P_{22}^2= \left[\begin{array}{ccc} 0.4 & 0.3 & 0.3 \\ \end{array}\right] \left[\begin{array}{c} 0.2 \\ 0.2 \\ 0.3 \end{array}\right]= \left[\begin{array}{c} 0.23 \end{array}\right]$

The view of matrix calculation will be crucial in understanding the Chapman-Kolmogorov equations discussed below. To conclude the example, consider the 3-step transition probability $P_{02}^3$. We can again condition on the first step. The chain goes from state 0 to an interim state (3 possibilities) and then goes from that state to state 2 in two steps.

$\displaystyle \begin{aligned} P_{02}^3&=\sum \limits_{k=0}^2 P_{0k} \ P_{k2}^2 \\&=P_{00} \ P_{02}^2+P_{01} \ P_{12}^2+P_{02} \ P_{22}^2 \\&=0.6 \times 0.22+0.2 \times 0.22+0.2 \times 0.23 \\&=0.222 \end{aligned}$

The example shows that the calculation of a 3-step transition probability is based on 2-step transition probabilities. Thus longer-length transition probabilities can be built up from shorter-length transition probabilities. However, it pays to set up a more general framework before carrying out more calculations. The view of matrix calculation demonstrated above will help in understanding this general framework.
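The step-by-step calculations in the example can be checked with a short script. This is an illustrative sketch (the helper name `two_step` is my own choice, not from the post), using the transition matrix of the introductory example:

```python
import numpy as np

# Transition matrix from the introductory example.
P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

def two_step(P, i, j):
    # Condition on the interim state k reached after the first step.
    return sum(P[i, k] * P[k, j] for k in range(P.shape[0]))

print(round(two_step(P, 0, 2), 4))  # 0.22
print(round(two_step(P, 1, 2), 4))  # 0.22
print(round(two_step(P, 2, 2), 4))  # 0.23

# The 3-step probability built from the 2-step probabilities.
p3_02 = sum(P[0, k] * two_step(P, k, 2) for k in range(3))
print(round(p3_02, 4))  # 0.222
```

The explicit sum over the interim state $k$ mirrors the conditioning argument in the text; the matrix formulation below does the same arithmetic all at once.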

Chapman-Kolmogorov Equations

The examples indicate that finding $n$-step transition probabilities involves matrix calculation. Let $\mathbf{Q}_n$ be the $n$-step transition probability matrix. The goal now is to have a systematic way to compute the entries in the matrix $\mathbf{Q}_n$. The computation is based on the Chapman-Kolmogorov equations. These equations are:

$\displaystyle (3) \ \ \ \ \ P_{ij}^{n+m}=\sum \limits_{k=0}^\infty P_{ik}^n \times P_{kj}^m$

for $n, m \ge 0$ and for all $i,j$. For a finite-state Markov chain, the summation in (3) does not go up to infinity and would have an upper limit. The number $P_{ij}^{n+m}$ is the probability that the chain will be in state $j$ after taking $n+m$ steps if it is currently in state $i$. The following gives the derivation of (3).

$\displaystyle \begin{aligned} P_{ij}^{n+m}&=P[X_{n+m}=j \lvert X_0=i] \\&=\sum \limits_{k=0}^\infty P[X_{n+m}=j, X_n=k \lvert X_0=i]\\&=\sum \limits_{k=0}^\infty \frac{P[X_{n+m}=j, X_n=k, X_0=i]}{P[X_0=i]} \\&=\sum \limits_{k=0}^\infty \frac{P[X_n=k,X_0=i]}{P[X_0=i]} \times \frac{P[X_{n+m}=j, X_n=k, X_0=i]}{P[X_n=k,X_0=i]} \\&=\sum \limits_{k=0}^\infty P[X_n=k \lvert X_0=i] \times P[X_{n+m}=j \lvert X_n=k, X_0=i] \\&=\sum \limits_{k=0}^\infty P[X_n=k \lvert X_0=i] \times P[X_{n+m}=j \lvert X_n=k] \ * \\&=\sum \limits_{k=0}^\infty P_{ik}^n \times P_{kj}^m \end{aligned}$

Here’s the idea behind the derivation. The path from state $i$ to state $j$ in $n+m$ steps can be broken down into two paths: one from state $i$ to an intermediate state $k$ in the first $n$ transitions, and the other from state $k$ to state $j$ in the remaining $m$ transitions. Summing over all intermediate states $k$ yields the probability that the process will go from state $i$ to state $j$ in $n+m$ transitions. Note that the step marked with * uses the Markov property: given $X_n=k$, the future of the chain does not depend on $X_0$.
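The derivation can also be sanity-checked by simulation. The sketch below is my own illustration (not from the post): it estimates $P[X_3=2 \lvert X_0=0]$ empirically for the chain of the introductory example and compares the estimate with the right-hand side of (3) taken with $n=1$ and $m=2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix from the introductory example.
P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

def run_chain(start, steps):
    # Simulate the chain for the given number of steps and return the end state.
    state = start
    for _ in range(steps):
        state = rng.choice(3, p=P[state])
    return state

trials = 20_000
estimate = sum(run_chain(0, 3) == 2 for _ in range(trials)) / trials

# Right-hand side of (3) with i=0, j=2, n=1, m=2.
P2 = P @ P
exact = sum(P[0, k] * P2[k, 2] for k in range(3))

print(round(exact, 4))  # 0.222
print(estimate)         # close to the exact value, up to simulation noise
```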

The entries in the matrix $\mathbf{Q}_n$ can then be computed by (3). There is an interesting and useful interpretation of (3). The following is another way to state the Chapman-Kolmogorov equations:

$(4) \ \ \ \ \ \mathbf{Q}_{n+m}=\mathbf{Q}_n \times \mathbf{Q}_m$.

A typical element of the matrix $\mathbf{Q}_n$ is $P_{ik}^n$ and a typical element of the matrix $\mathbf{Q}_m$ is $P_{kj}^m$. Note that $P_{ik}^n$ with $k$ varying is a row in $\mathbf{Q}_n$ (the row corresponding to the state $i$) and that $P_{kj}^m$ with $k$ varying is a column in $\mathbf{Q}_m$ (the column corresponding to the state $j$).

$\left[\begin{array}{cccccc} P_{i0}^n & P_{i1}^n & \cdots & P_{ik}^n & \cdots & P_{iw}^n \\ \end{array}\right] \left[\begin{array}{c} P_{0j}^m \\ \text{ } \\ P_{1j}^m \\ \vdots \\ P_{kj}^m \\ \vdots \\ P_{wj}^m \end{array}\right]$

The product of the above row and column is the transition probability $P_{ij}^{n+m}$.

Powers of the One-Step Transition Probability Matrix

Let $\mathbf{P}$ be the one-step transition probability matrix of a Markov chain. Let $\mathbf{Q}_n$ be the $n$-step transition probability matrix, which can be computed by using the Chapman-Kolmogorov equations. We now derive another important fact.

The $n$-step transition probability matrix $\mathbf{Q}_n$ is obtained by multiplying the one-step transition probability matrix $\mathbf{P}$ by itself $n$ times, i.e. $\mathbf{Q}_n$ is the $n$th power of $\mathbf{P}$.

This fact is important for the calculation of transition probabilities. Compute $\mathbf{P}^n$, the $n$th power of $\mathbf{P}$ (in terms of matrix calculation). Then $P_{ij}^n$ is simply the entry in $\mathbf{P}^n$ in the row that corresponds to state $i$ and the column that corresponds to state $j$. If the matrix size is large and/or $n$ is large, the matrix multiplication can be done using software or an online matrix calculator.

Of course, the above fact is also important in Markov chain theory in general. Almost all mathematical properties of Markov chains are at root based on this basic fact.

We can establish this basic fact using an induction argument. Clearly $\mathbf{Q}_1=\mathbf{P}^1=\mathbf{P}$. Suppose that the fact is true for $n-1$. Based on (4), $\mathbf{Q}_{n}=\mathbf{Q}_{n-1} \times \mathbf{Q}_1$. Continuing the derivation: $\mathbf{Q}_{n}=\mathbf{Q}_{n-1} \times \mathbf{Q}_1=\mathbf{P}^{n-1} \times \mathbf{P}=\mathbf{P}^n$.
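In code, the fact $\mathbf{Q}_n=\mathbf{P}^n$ means the $n$-step probabilities come from a matrix power. A minimal sketch using the example chain from earlier in the post (numpy's `matrix_power` performs the repeated multiplication):

```python
import numpy as np

# Transition matrix from the introductory example.
P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

P3 = np.linalg.matrix_power(P, 3)   # the 3-step transition matrix Q_3
print(np.round(P3, 4))

# Every power of P is still a stochastic matrix: each row sums to 1.
print(np.round(P3.sum(axis=1), 4))  # [1. 1. 1.]
```

The $(0,2)$ entry of the printed matrix is the probability $P_{02}^3=0.222$ computed by hand in the introductory example.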

The Row Vector and the Column Vector

As demonstrated in the preceding section, $\mathbf{P}^n$, the $n$th power of $\mathbf{P}$, is precisely the $n$-step transition probability matrix. Let’s examine the matrix $\mathbf{P}^n$ more closely. Suppose that the Markov chain has a state space $\left\{0,1,2,\cdots,w \right\}$. The following shows the matrix $\mathbf{P}^n$ with special focus on a generic row and a generic column.

$\mathbf{P}^n= \left[\begin{array}{cccccc} P_{0,0}^n & P_{0,1}^n & \cdots & P_{0,k}^n & \cdots & P_{0,w}^n \\ P_{1,0}^n & P_{1,1}^n & \cdots & P_{1,k}^n & \cdots & P_{1,w}^n \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ P_{i,0}^n & P_{i,1}^n & \cdots & P_{i,k}^n & \cdots & P_{i,w}^n \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ P_{w,0}^n & P_{w,1}^n & \cdots & P_{w,k}^n & \cdots & P_{w,w}^n \\ \end{array}\right]$

Now look at the row corresponding to the state $i$ and call it $R_i$. Also look at the column corresponding to the state $k$ and call it $C_k$. They are separated from the matrix $\mathbf{P}^n$ below.

$R_i=\left[\begin{array}{cccccc} P_{i,0}^n & P_{i,1}^n & \cdots & P_{i,k}^n & \cdots & P_{i,w}^n \\ \end{array}\right] \ \ \ \ \ \text{row vector}$

$C_k=\left[\begin{array}{c} P_{0,k}^n \\ \text{ } \\ P_{1,k}^n \\ \vdots \\ P_{i,k}^n \\ \vdots \\ P_{w,k}^n \end{array}\right] \ \ \ \ \ \text{column vector}$

The row vector $R_i$ is a conditional distribution. It gives the probabilities of the process transitioning (after $n$ transitions) into one of the states given the process starts in state $i$. If it is certain that the process begins in state $i$, then $R_i$ gives the probability function for the random variable $X_n$ since $P[X_n=k]=P_{i,k}^n$.

On the other hand, if the initial state is not certain, the column vector $C_k$ gives the probabilities of the process transitioning (after $n$ transitions) into state $k$ from each of the possible initial states. Weighting these conditional probabilities by the initial distribution gives the unconditional probability of the process being in state $k$ at time $n$. The following gives the probability distribution for $X_n$ (the state of the Markov chain at time $n$).

$\displaystyle (5) \ \ \ \ \ P[X_n=k]=\sum \limits_{i=0}^\infty P_{i,k}^n \times P[X_0=i]$

The calculation in (5) can be viewed as matrix calculation. If we put the probabilities $P[X_0=i]$ in a row vector, then the product of this row vector with the column vector $C_k$ would be $P[X_n=k]$. The following is (5) in matrix multiplication.

$(6) \ \ \ \ \ P[X_n=k]=\left[\begin{array}{cccccc} P(X_0=0) & P(X_0=1) & \cdots & P(X_0=j) & \cdots & P(X_0=w) \\ \end{array}\right] \left[\begin{array}{c} P_{0k}^n \\ \text{ } \\ P_{1k}^n \\ \vdots \\ P_{jk}^n \\ \vdots \\ P_{wk}^n \end{array}\right]$
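Equations (5) and (6) translate directly into code: the unconditional distribution of $X_n$ is an initial-distribution row vector multiplied by $\mathbf{P}^n$. A sketch with the chain from the introductory example; the initial distribution here is a hypothetical choice for illustration:

```python
import numpy as np

# Transition matrix from the introductory example.
P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])

mu0 = np.array([0.5, 0.5, 0.0])  # hypothetical initial distribution P[X_0 = i]
n = 3
# Row vector of P[X_n = k], as in (6); entry k is mu0 times column k of P^n.
mun = mu0 @ np.linalg.matrix_power(P, n)
print(np.round(mun, 4))  # ≈ [0.4585, 0.3195, 0.222]
```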

Even though the matrix calculation for $P_{ij}^n$ should be done using software, it pays to understand the orientation of the matrix $\mathbf{P}^n$. As shown in the preceding section, a row of $\mathbf{P}^n$ is the conditional distribution of $X_n \lvert X_0$ (the state of the process at time $n$ given the initial state), while a column of $\mathbf{P}^n$ gives the probabilities of the process landing in a fixed state $k$ from each of the possible initial states.

The Chapman-Kolmogorov equations in (3) tell us that an entry in the matrix $\mathbf{P}^{n+m}$ is simply the product of a row in $\mathbf{P}^n$ and a column in $\mathbf{P}^m$. This observation makes it possible to focus on just the transition probability asked for in a given problem rather than calculating the entire matrix. For example, to calculate an entry $P_{ij}^2$ of the matrix $\mathbf{P}^2$, multiply a row of $\mathbf{P}$ (the row corresponding to state $i$) and a column of $\mathbf{P}$ (the column corresponding to state $j$). There is no need to calculate the entire matrix $\mathbf{P}^2$ if the goal is just one entry of $\mathbf{P}^2$.

To calculate an entry of $\mathbf{P}^n$, there is considerable flexibility in how to split the multiplication. For example, multiply a row of $\mathbf{P}$ by a column of $\mathbf{P}^{n-1}$, or multiply a row of $\mathbf{P}^{n-1}$ by a column of $\mathbf{P}$, or multiply a row of $\mathbf{P}^2$ by a column of $\mathbf{P}^{n-2}$.

If a row of transition probabilities multiplies a transition matrix, the result is a row in a higher transition probability matrix. For example, if a row in $\mathbf{P}$ multiplies the matrix $\mathbf{P}^n$, the result is a row in $\mathbf{P}^{n+1}$. More specifically, the first row in $\mathbf{P}$ multiplying the matrix $\mathbf{P}$ gives the first row of $\mathbf{P}^2$. The first row of $\mathbf{P}$ multiplying the matrix $\mathbf{P}^2$ gives the first row in $\mathbf{P}^3$, etc.

On the other hand, when a transition probability matrix multiplies a column of transition probabilities, the result is a column in a higher transition probability matrix. For example, when the matrix $\mathbf{P}^n$ multiplies a column in $\mathbf{P}$, the result is the corresponding column in $\mathbf{P}^{n+1}$. More specifically, when the matrix $\mathbf{P}$ multiplies the first column of the matrix $\mathbf{P}$, the result is the first column of $\mathbf{P}^2$. When the matrix $\mathbf{P}^2$ multiplies the first column of the matrix $\mathbf{P}$, the result is the first column of $\mathbf{P}^3$, and so on.
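These row and column shortcuts can be verified directly. A sketch using the example chain from earlier in the post, comparing the shortcut results against the full matrix power:

```python
import numpy as np

# Transition matrix from the introductory example.
P = np.array([[0.6, 0.2, 0.2],
              [0.3, 0.5, 0.2],
              [0.4, 0.3, 0.3]])
P2 = P @ P

row0_of_P3 = P[0] @ P2       # row 0 of P times P^2 gives row 0 of P^3
col2_of_P3 = P2 @ P[:, 2]    # P^2 times column 2 of P gives column 2 of P^3

P3 = np.linalg.matrix_power(P, 3)
print(np.allclose(row0_of_P3, P3[0]))     # True
print(np.allclose(col2_of_P3, P3[:, 2]))  # True
```

Neither shortcut requires computing all nine entries of $\mathbf{P}^3$, which is the point of the discussion above.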

Examples

We now give some examples on how to calculate transition probabilities.

Example 1
Consider a Markov chain with the following transition probability matrix.

$\mathbf{P} = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.6 & 0.2 & 0.2 \cr 1 & 0.3 & 0.5 & 0.2 \cr 2 & 0.4 & 0.3 & 0.3 \cr } \qquad$

Determine $P[X_3=1 \lvert X_0=0]$ and $P[X_5=2 \lvert X_2=0]$.

Of course, the most straightforward way is to calculate $\mathbf{P}^3$. Then $P[X_3=1 \lvert X_0=0]=P_{01}^3$ is the entry of $\mathbf{P}^3$ in the row for state 0 and the column for state 1, and, by time-homogeneity, $P[X_5=2 \lvert X_2=0]=P_{02}^3$ is the entry in the row for state 0 and the column for state 2. In fact, it is good practice to use software or an online matrix calculator for this task. Doing so produces the following matrices.

$\mathbf{P}^2 = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.50 & 0.28 & 0.22 \cr 1 & 0.41 & 0.37 & 0.22 \cr 2 & 0.45 & 0.32 & 0.23 \cr } \qquad \ \ \ \ \ \ \mathbf{P}^3 = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.472 & 0.306 & 0.222 \cr 1 & 0.445 & 0.333 & 0.222 \cr 2 & 0.458 & 0.319 & 0.223 \cr } \qquad$

From the matrix $\mathbf{P}^3$, we see that $P_{01}^3=0.306$ and $P_{02}^3=0.222$. Note that $P_{02}^3=0.222$ is also calculated in the introductory example earlier.

Example 2
For the transition matrix $\mathbf{P}$ in Example 1, use the row and column multiplications discussed in the preceding sections to compute the first row and the first column of the transition matrix $\mathbf{P}^4$.

$\left[\begin{array}{ccc} 0.6 & 0.2 & 0.2 \\ \end{array}\right] \left[\begin{array}{ccc} 0.472 & 0.306 & 0.222 \\ 0.445 & 0.333 & 0.222 \\ 0.458 & 0.319 & 0.223 \end{array}\right] = \left[\begin{array}{ccc} 0.4638 & 0.314 & 0.2222 \end{array}\right]= \left[\begin{array}{ccc} P_{00}^4 & P_{01}^4 & P_{02}^4 \end{array}\right]$

$\left[\begin{array}{ccc} 0.472 & 0.306 & 0.222 \\ 0.445 & 0.333 & 0.222 \\ 0.458 & 0.319 & 0.223 \end{array}\right] \left[\begin{array}{c} 0.6 \\ 0.3 \\ 0.4 \end{array}\right] = \left[\begin{array}{c} 0.4638 \\ 0.4557 \\ 0.4597 \end{array}\right]= \left[\begin{array}{c} P_{00}^4 \\ P_{10}^4 \\ P_{20}^4 \end{array}\right]$

As discussed earlier, the first row of $\mathbf{P}^4$ is obtained by multiplying the first row of $\mathbf{P}$ with the matrix $\mathbf{P}^3$. The first column of $\mathbf{P}^4$ is obtained by multiplying the matrix $\mathbf{P}^3$ with the first column of the matrix $\mathbf{P}$.

Example 3
A particle moves among states 0, 1 and 2 according to a Markov process with the following transition probability matrix.

$\mathbf{P} = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.6 & 0.3 & 0.1 \cr 1 & 0.3 & 0.3 & 0.4 \cr 2 & 0.3 & 0.2 & 0.5 \cr } \qquad$

Let $X_n$ be the position of the particle after the $n$th move. Suppose that at the beginning, the particle is in state 1. Determine the probability $P[X_2=k]$ where $k=0,1,2$.

Since the initial state of the process is certain, $P[X_2=k]=P[X_2=k \lvert X_0=1]=P_{1k}^2$. Thus the problem is to find the second row of $\mathbf{P}^2$.

$\left[\begin{array}{ccc} 0.3 & 0.3 & 0.4 \\ \end{array}\right] \left[\begin{array}{ccc} 0.6 & 0.3 & 0.1 \\ 0.3 & 0.3 & 0.4 \\ 0.3 & 0.2 & 0.5 \end{array}\right] = \left[\begin{array}{ccc} 0.39 & 0.26 & 0.35 \end{array}\right]= \left[\begin{array}{ccc} P_{10}^2 & P_{11}^2 & P_{12}^2 \end{array}\right]$

Example 4
Consider a Markov chain with the following transition probability matrix.

$\mathbf{P} = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.3 & 0.2 & 0.5 \cr 1 & 0.1 & 0.3 & 0.6 \cr 2 & 0.4 & 0.3 & 0.3 \cr } \qquad$

Suppose that initially the process is equally likely to be in state 0 or state 1. Determine $P[X_3=0]$ and $P[X_3=1]$.

To find $P[X_3=0]$, we need the first column of $\mathbf{P}^3$. To find $P[X_3=1]$, we need the second column of $\mathbf{P}^3$. Using an online calculator produces $\mathbf{P}^3$.

$\mathbf{P}^3 = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.288 & 0.269 & 0.443 \cr 1 & 0.283 & 0.270 & 0.447 \cr 2 & 0.295 & 0.273 & 0.432 \cr } \qquad$

The following calculation gives the answers.

$P[X_3=0]=\left[\begin{array}{ccc} 0.5 & 0.5 & 0 \\ \end{array}\right] \left[\begin{array}{c} 0.288 \\ 0.283 \\ 0.295 \end{array}\right]=0.2855$

$P[X_3=1]=\left[\begin{array}{ccc} 0.5 & 0.5 & 0 \\ \end{array}\right] \left[\begin{array}{c} 0.269 \\ 0.270 \\ 0.273 \end{array}\right]=0.2695$

$P[X_3=2]=1-P[X_3=0]-P[X_3=1]=0.445$

Example 5
Consider a Markov chain with the following transition probability matrix.

$\mathbf{P} = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.6 & 0.2 & 0.2 \cr 1 & 0.2 & 0.2 & 0.6 \cr 2 & 0 & 0 & 1 \cr } \qquad$

At time 0, the Markov process is in state 0. Let $T=\text{min}\left\{n \ge 0:X_n=2 \right\}$. Note that state 2 is called an absorbing state. The Markov process will eventually reach and be absorbed in state 2 (it stays there forever whenever the process reaches state 2). Thus $T$ is the first time period in which the process reaches state 2. Suppose that the Markov process is being observed and that absorption has not taken place. Then we would be interested in this conditional probability: the probability that the process is in state 0 or state 1 given that the process has not been absorbed. Determine $P[X_4=0 \lvert T>4]$.

The key idea here is that for the event $T>4$ to happen, $X_4=0$ or $X_4=1$. Thus

$P[T>4]=P[X_4=0 \text{ or } X_4=1]=P[X_4=0]+P[X_4=1]=P_{00}^4+P_{01}^4$.

Note that since the initial state is certain to be state 0, $P[X_4=0]=P_{00}^4$ and $P[X_4=1]=P_{01}^4$. Using an online calculator gives the following matrix.

$\mathbf{P}^4 = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.1856 & 0.0768 & 0.7376 \cr 1 & 0.0768 & 0.032 & 0.8912 \cr 2 & 0 & 0 & 1 \cr } \qquad$

The following gives the desired result.

$\displaystyle P[X_4=0 \lvert T>4]=\frac{P_{00}^4}{P_{00}^4+P_{01}^4}=\frac{0.1856}{0.1856+0.0768}=0.7073$

Thus if absorption has not taken place by time 4, the process is in state 0 about 71% of the time.
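The computation in Example 5 can be reproduced numerically. This sketch follows the argument in the text: since state 2 is absorbing and the chain starts in state 0, $P[T>4]=P_{00}^4+P_{01}^4$.

```python
import numpy as np

# Transition matrix from Example 5; state 2 is absorbing.
P = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.2, 0.6],
              [0.0, 0.0, 1.0]])

P4 = np.linalg.matrix_power(P, 4)
p_not_absorbed = P4[0, 0] + P4[0, 1]   # P[T > 4] = P_00^4 + P_01^4
answer = P4[0, 0] / p_not_absorbed     # P[X_4 = 0 | T > 4]
print(round(answer, 4))  # 0.7073
```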

Example 6
Continue with the Markov chain in Example 5. Determine $P[T=4]$.

Note that $P[T=4]=P[T>3]-P[T>4]$. The probability $P[T>4]$ is already determined using $\mathbf{P}^4$. To determine $P[T>3]$, we need the top row of $\mathbf{P}^3$.

$\mathbf{P}^3 = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.272 & 0.112 & 0.616 \cr 1 & 0.112 & 0.048 & 0.84 \cr 2 & 0 & 0 & 1 \cr } \qquad$

Thus $P[T>3]=P_{00}^3+P_{01}^3=0.272+0.112=0.384$ and $P[T=4]=0.384-0.2624=0.1216$. Thus absorption takes place in the 4th period about 12% of the time.
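Example 6 as a computation, reusing the absorbing chain of Example 5 and the identity $P[T=4]=P[T>3]-P[T>4]$:

```python
import numpy as np

# Transition matrix from Example 5; state 2 is absorbing.
P = np.array([[0.6, 0.2, 0.2],
              [0.2, 0.2, 0.6],
              [0.0, 0.0, 1.0]])

P3 = np.linalg.matrix_power(P, 3)
P4 = np.linalg.matrix_power(P, 4)
t_gt_3 = P3[0, 0] + P3[0, 1]   # P[T > 3] = 0.384
t_gt_4 = P4[0, 0] + P4[0, 1]   # P[T > 4] = 0.2624
print(round(t_gt_3 - t_gt_4, 4))  # P[T = 4] = 0.1216
```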

Exercises

We give a few practice problems to reinforce the concepts and calculations discussed here. Further practice problems can be found in a companion blog.

Exercise 1
Consider a Markov chain with the following transition probability matrix.

$\mathbf{P} = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.1 & 0.2 & 0.7 \cr 1 & 0.2 & 0.2 & 0.6 \cr 2 & 0.6 & 0.1 & 0.3 \cr } \qquad$

Determine these conditional probabilities.

• $P[X_3=2 \lvert X_1=1]$
• $P[X_3=2 \lvert X_0=1]$
• $P[X_4=2 \lvert X_0=1]$

Exercise 2
A particle moves among states 1, 2 and 3 according to a Markov process with the following transition probability matrix.

$\mathbf{P} = \bordermatrix{ & 1 & 2 & 3 \cr 1 & 0 & 0.6 & 0.4 \cr 2 & 0.6 & 0 & 0.4 \cr 3 & 0.6 & 0.4 & 0 \cr } \qquad$

Let $X_n$ be the position of the particle after the $n$th move. Determine the probability $P[X_3=1 \lvert X_0=1]$ and $P[X_4=1 \lvert X_0=1]$.

Exercise 3
Consider a Markov chain with the following transition probability matrix.

$\mathbf{P} = \bordermatrix{ & 0 & 1 & 2 \cr 0 & 0.1 & 0.2 & 0.7 \cr 1 & 0.9 & 0.1 & 0 \cr 2 & 0.1 & 0.8 & 0.1 \cr } \qquad$

Suppose that initially the process is equally likely to be in state 0 or state 1 or state 2. Determine the distribution for $X_3$.

Exercise 4
Continue with Example 5 and Example 6. Work these two examples assuming that the Markov chain starts in state 1 initially.

$\text{ }$

$\text{ }$

$\text{ }$

Answers to Exercises

Exercise 1

• $P[X_3=2 \lvert X_1=1]=P_{12}^2=0.44$
• $P[X_3=2 \lvert X_0=1]=P_{12}^3=0.51$
• $P[X_4=2 \lvert X_0=1]=P_{12}^4=0.4804$

Exercise 2

• $P[X_3=1 \lvert X_0=1]=P_{11}^3=0.24$
• $P[X_4=1 \lvert X_0=1]=P_{11}^4=0.456$

Exercise 3

• $\displaystyle P[X_3=0]=\frac{1.076}{3}=0.3587$
• $\displaystyle P[X_3=1]=\frac{1.013}{3}=0.3377$
• $\displaystyle P[X_3=2]=\frac{0.911}{3}=0.3037$

Exercise 4

• $\displaystyle P[X_4=0 \lvert T>4]=\frac{P_{10}^4}{P_{10}^4+P_{11}^4}=\frac{0.0768}{0.0768+0.032}=0.70588$
• $P[T=4]=P[T>3]-P[T>4]=0.16-0.1088=0.0512$

$\text{ }$

$\text{ }$

$\text{ }$

$\copyright$ 2017 – Dan Ma