A beginning look at transition probabilities

The previous post introduces the notion of Markov chains, more specifically discrete Markov chains. This and the next post focus on calculating transition probabilities.

Suppose that \left\{X_n: n \ge 0 \right\} is a Markov chain with transition probability matrix \mathbf{P}. Elements of \mathbf{P} are the one-step transition probabilities P_{ij}. As the Markov process moves through the states over time, the probabilities in the matrix \mathbf{P} show how likely the process is to transition to state j in the next time period if the process is currently in state i. Elements of \mathbf{P} are conditional probabilities:

    (1) \ \ \ \ \ P_{ij}=P[X_{n+1}=j \lvert X_n=i]

The transition probabilities described in (1) are independent of the time period n. These are called stationary transition probabilities. Markov chains with stationary transition probabilities are called stationary Markov chains or time-homogeneous Markov chains. One immediate goal is to derive the n-step transition probabilities P_{ij}^n from the one-step transition probabilities P_{ij}; that derivation is taken up in the next post. This post focuses on basic calculations with P_{ij}.

Examples of Conditional Probabilities

We use examples to demonstrate conditional probabilities that can be calculated from the one-step transition probabilities P_{ij}. The calculation is based on the following transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 \cr        0 & 0.6 & 0.2 & 0.2   \cr        1 & 0.3 & 0.5 & 0.2  \cr        2 & 0.4 & 0.3 & 0.3   \cr           } \qquad
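
As a quick check, \mathbf{P} can be encoded as a two-dimensional array. The following is a minimal Python/NumPy sketch (the variable names are our own, not part of the post); it verifies that each row of \mathbf{P}, being a conditional distribution, sums to 1, and shows how an entry P_{ij} is looked up.

    import numpy as np

    # The 3-state transition probability matrix above, states 0, 1, 2.
    P = np.array([[0.6, 0.2, 0.2],   # transitions out of state 0
                  [0.3, 0.5, 0.2],   # transitions out of state 1
                  [0.4, 0.3, 0.3]])  # transitions out of state 2

    # Each row is a conditional distribution, so every row sums to 1.
    assert np.allclose(P.sum(axis=1), 1.0)

    print(P[0, 1])  # P_01 = P[X_{n+1}=1 | X_n=0] = 0.2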

Example 1
Determine the following conditional probabilities:

  • P[X_1=0, X_2=1 \lvert X_0=0]
  • P[X_2=0, X_3=1 \lvert X_1=0]
  • P[X_1=2, X_2=1,X_3=0 \lvert X_0=1]

Note that the first two are identical since the transition probabilities are stationary. To calculate the first and third probabilities, it is a matter of linking up the one-step transition probabilities from one state to the next.

    \displaystyle \begin{aligned} P[X_1=0, X_2=1 \lvert X_0=0]&=P[X_1=0 \lvert X_0=0] \times P[X_2=1 \lvert X_1=0] \\&=P_{00} \times P_{01} \\&=0.6 \times 0.2=0.12 \end{aligned}

    \displaystyle P[X_2=0, X_3=1 \lvert X_1=0]=P[X_1=0, X_2=1 \lvert X_0=0]=0.12

    \displaystyle \begin{aligned} P[X_1=2, X_2=1,X_3=0 \lvert X_0=1]&=P[X_1=2 \lvert X_0=1] \times P[X_2=1 \lvert X_1=2] \times P[X_3=0 \lvert X_2=1] \\&=P_{12} \times P_{21} \times P_{10} \\&=0.2 \times 0.3 \times 0.3=0.018 \end{aligned}
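
These chained products are easy to verify in code. A minimal sketch, assuming the NumPy encoding of \mathbf{P} used above:

    import numpy as np

    P = np.array([[0.6, 0.2, 0.2],
                  [0.3, 0.5, 0.2],
                  [0.4, 0.3, 0.3]])

    # P[X_1=0, X_2=1 | X_0=0] = P_00 * P_01
    print(P[0, 0] * P[0, 1])            # 0.12

    # P[X_1=2, X_2=1, X_3=0 | X_0=1] = P_12 * P_21 * P_10
    print(P[1, 2] * P[2, 1] * P[1, 0])  # 0.018 (up to floating-point rounding)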

Example 2
Example 1 shows that the probability of a chain moving along a path, conditional on a starting state, is simply the product of the transition probabilities on that path. We now calculate the unconditional probability of a path. For this to be possible, we need to assume a distribution for the initial state.

Assume that P[X_0=0]=0.4, P[X_0=1]=0.3 and P[X_0=2]=0.3. Determine the following unconditional probabilities:

  • P[X_0=0,X_1=0,X_2=1]
  • P[X_1=0,X_2=0,X_3=1]
  • P[X_0=1,X_1=2, X_2=1,X_3=0]

To calculate these unconditional probabilities, we multiply the transition probabilities on the path along with the probability of the first state in the path.

    \displaystyle \begin{aligned} P[X_0=0,X_1=0,X_2=1]&=P[X_0=0] \times P[X_1=0 \lvert X_0=0] \times P[X_2=1 \lvert X_1=0] \\&=P[X_0=0] \times P_{00} \times P_{01} \\&=0.4 \times 0.6 \times 0.2=0.048 \end{aligned}

To calculate the second probability, we need to first compute P[X_1=0]. This can be done by conditioning on the initial state X_0.

    \displaystyle \begin{aligned} P[X_1=0]&=P[X_0=0] \times P_{00}+ P[X_0=1] \times P_{10} +P[X_0=2] \times P_{20} \\&=0.4 \times 0.6 +0.3 \times 0.3+0.3 \times 0.4=0.45 \end{aligned}

    \displaystyle \begin{aligned}  P[X_1=0,X_2=0,X_3=1]&=P[X_1=0] \times P[X_2=0 \lvert X_1=0] \times  P[X_3=1 \lvert X_2=0]\\&=P[X_1=0] \times P_{00} \times P_{01} \\&=0.45 \times 0.6 \times 0.2 \\&=0.054  \end{aligned}

The following gives the third probability.

    \displaystyle \begin{aligned} P[X_0=1,X_1=2, X_2=1,X_3=0]&=P[X_0=1] \times P[X_1=2 \lvert X_0=1] \times P[X_2=1 \lvert X_1=2] \times P[X_3=0 \lvert X_2=1] \\&=P[X_0=1] \times P_{12} \times P_{21} \times P_{10} \\&=0.3 \times 0.2 \times 0.3 \times 0.3 \\&=0.0054 \end{aligned}

Note that each unconditional probability in this example is obtained by multiplying the corresponding conditional probability in Example 1 with the probability of the conditioned event.
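
The three unconditional probabilities can also be checked with a short computation. In the sketch below (Python/NumPy again), init is our name for the assumed distribution of the initial state X_0.

    import numpy as np

    P = np.array([[0.6, 0.2, 0.2],
                  [0.3, 0.5, 0.2],
                  [0.4, 0.3, 0.3]])
    init = np.array([0.4, 0.3, 0.3])  # P[X_0=0], P[X_0=1], P[X_0=2]

    # P[X_0=0, X_1=0, X_2=1] = P[X_0=0] * P_00 * P_01
    print(init[0] * P[0, 0] * P[0, 1])            # 0.048

    # P[X_1=0] by conditioning on X_0, then the rest of the path
    p_x1_0 = init @ P[:, 0]                       # 0.45
    print(p_x1_0 * P[0, 0] * P[0, 1])             # 0.054

    # P[X_0=1, X_1=2, X_2=1, X_3=0] = P[X_0=1] * P_12 * P_21 * P_10
    print(init[1] * P[1, 2] * P[2, 1] * P[1, 0])  # 0.0054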

Example 3
Sometimes we know for certain that the Markov chain starts in a given state. Then the unconditional probability of a path is simply the conditional probability of that path (conditioned on the initial state). Determine the three probabilities in Example 2, given that the chain starts in state 0 for the first two probabilities and in state 1 for the third. The probabilities are:

  • P[X_0=0,X_1=0,X_2=1]
  • P[X_1=0,X_2=0,X_3=1]
  • P[X_0=1,X_1=2, X_2=1,X_3=0]

For the first probability, P[X_0=0,X_1=0,X_2=1]=P[X_0=0] \times P[X_1=0,X_2=1 \lvert X_0=0]. With P[X_0=0]=1, P[X_0=0,X_1=0,X_2=1]=P[X_1=0,X_2=1 \lvert X_0=0]=0.12, the answer from Example 1.

For the second probability, we need to first calculate P[X_1=0]. Since P[X_0=0]=1, P[X_1=0]=P[X_0=0] \times P_{00}=0.6. Thus P[X_1=0,X_2=0,X_3=1]=P[X_1=0] \times P_{00} \times P_{01}=0.6 \times 0.6 \times 0.2=0.072, which is 0.6 times the corresponding conditional probability in Example 1.

Similarly, the third probability is

    \displaystyle \begin{aligned} P[X_0=1,X_1=2, X_2=1,X_3=0]&=P[X_1=2, X_2=1,X_3=0 \lvert X_0=1] \\& =0.018\end{aligned}

Conditional Probabilities of Paths in Markov Chains

The conditional probabilities in the above examples are not difficult to compute as long as the basic idea of conditional probability and the Markov property are understood. First, conditional probability. The idea is that the probability of the intersection of two events can be obtained by conditioning on one of the events. For example, given two events A and B, we have

    (2a) \ \ \ \ \ P[A \cap B]=P[A] \times P[B \lvert A]

    \displaystyle (2b) \ \ \ \ \ P[B \lvert A]=\frac{P[A \cap B]}{P[A]}

The probability of the intersection of two events is the product of the conditional probability of one event given the other and the probability of the conditioning event. Relation (2b) is the probability of the event B given that event A has occurred. Of course, the conditioning event must have positive probability, i.e. P[A]>0.

Suppose that the event A is X=x, the discrete random variable X taking on the value x and that the event B is Y=y. Then we have the following relations.

    (3a) \ \ \ \ \ P[X=x,Y=y]=P[X=x] \times P[Y=y \lvert X=x]

    \displaystyle (3b) \ \ \ \ \ P[Y=y \lvert X=x]=\frac{P[X=x,Y=y]}{P[X=x]}

We can also write the conditional probability of X=x conditioning on the event Y=y. Suppose we have a path in a Markov chain, X_0=x_0,X_1=x_1,X_2=x_2,\cdots,X_n=x_n. This is the event that the chain starts in state x_0 initially, then moves to state x_1 at time 1 and so on, and at time n the chain is in state x_n. Conditioning on the initial state, consider the following conditional probability.

    \displaystyle (4) \ \ \ \ \ P[X_1=x_1,X_2=x_2,\cdots,X_n=x_n \lvert X_0=x_0]=\frac{P[X_0=x_0,X_1=x_1,X_2=x_2,\cdots,X_n=x_n]}{P[X_0=x_0]}

Relation (4) can be further simplified since the random variables X_0,X_1,X_2,\cdots,X_n satisfy the Markov property.

    \displaystyle \begin{aligned} (5) \ \ \ \ \ P[X_1=x_1,X_2=x_2,\cdots,X_n=x_n \lvert X_0=x_0]&=P[X_1=x_1 \lvert X_0=x_0] \times P[X_2=x_2 \lvert X_1=x_1] \\& \ \ \ \times \cdots \times P[X_n=x_n \lvert X_{n-1}=x_{n-1}] \\&=P_{x_0,x_1} \times P_{x_1,x_2} \times \cdots \times P_{x_{n-1},x_n}  \end{aligned}

Relation (5) says that the probability of a path in a Markov chain conditioning on the initial state is simply the product of the transition probabilities along the path. This is derived using the Markov property. Relation (5) can be established by an induction argument on the subscript n, i.e. the length of a path.

Clearly P[X_1=j \lvert X_0=i]=P_{ij}. This is the fundamental assumption in Markov chains. Now consider P[X_1=j,X_2=k \lvert X_0=i].

    \displaystyle \begin{aligned} P[X_1=j,X_2=k \lvert X_0=i]&=\frac{P[X_0=i,X_1=j,X_2=k]}{P[X_0=i]} \\&=\frac{P[X_0=i,X_1=j,X_2=k]}{P[X_0=i,X_1=j]} \times \frac{P[X_0=i,X_1=j]}{P[X_0=i]} \\&=P[X_2=k \lvert X_0=i,X_1=j] \times P[X_1=j \lvert X_0=i] \\&=P[X_2=k \lvert X_1=j] \times P[X_1=j \lvert X_0=i] \\&=P_{jk} \times P_{ij} \\&=P_{ij} \times P_{jk} \end{aligned}

Note that P[X_2=k \lvert X_0=i,X_1=j]=P[X_2=k \lvert X_1=j] in the above derivation. This is the Markov property in action, since the probability of the future state depends only on the state immediately preceding it. Now that relation (5) is true for a path of length 2, the following shows how it is derived for length 3.

    \displaystyle \begin{aligned} P[X_1=j,X_2=k,X_3=l \lvert X_0=i]&=\frac{P[X_0=i,X_1=j,X_2=k,X_3=l]}{P[X_0=i]} \\&=\frac{P[X_0=i,X_1=j,X_2=k,X_3=l]}{P[X_0=i,X_1=j,X_2=k]} \times \frac{P[X_0=i,X_1=j,X_2=k]}{P[X_0=i]} \\&=P[X_3=l \lvert X_0=i,X_1=j,X_2=k] \times P[X_1=j,X_2=k \lvert X_0=i]  \\&=P[X_3=l \lvert X_2=k] \times P[X_1=j,X_2=k \lvert X_0=i] \\&=P_{ij} \times P_{jk} \times P_{kl} \end{aligned}

Note that the Markov property is used in going from the third line to the fourth line, and relation (5) for length 2 is used at the end. The induction continues in this manner, so relation (5) holds for paths of arbitrary length.

The conditional probability in (4) by itself is nothing special. With the additional assumption of the Markov property, it unravels into a “chain” of one-step transition probabilities.

Using the idea of (3a), the unconditional probability of a path is:

    \displaystyle \begin{aligned} (6) \ \ \ \ \ P[X_0=x_0,X_1=x_1,X_2=x_2,\cdots,X_n=x_n]&=P[X_0=x_0] \times P_{x_0,x_1} \\& \ \ \times P_{x_1,x_2} \times \cdots \times P_{x_{n-1},x_n}   \end{aligned}

Relation (6) says that the unconditional probability of a path in a Markov chain is simply the conditional probability of the path conditioning on the initial state times the probability of the initial state.
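
Relations (5) and (6) amount to one small routine: multiply the one-step transition probabilities along the path and, for an unconditional probability, weight by the probability of the first state. A minimal Python sketch of this idea (the function name path_probability is our own):

    import numpy as np

    def path_probability(P, path, init=None):
        # Relation (5): product of one-step transition probabilities
        # along the path, conditional on the first state in the path.
        # Relation (6): if init (the distribution of the first state)
        # is supplied, weight by the probability of that first state.
        prob = 1.0 if init is None else init[path[0]]
        for i, j in zip(path, path[1:]):
            prob *= P[i, j]
        return prob

    P = np.array([[0.6, 0.2, 0.2],
                  [0.3, 0.5, 0.2],
                  [0.4, 0.3, 0.3]])
    init = np.array([0.4, 0.3, 0.3])

    print(path_probability(P, [1, 2, 1, 0]))        # (5): 0.018, as in Example 1
    print(path_probability(P, [1, 2, 1, 0], init))  # (6): 0.0054, as in Example 2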

More Examples

The earlier examples demonstrate relation (5) and relation (6). We now work a few more examples.

Example 4
There is a slightly different way to look at the unconditional probability P[X_1=0,X_2=0,X_3=1] in Example 2. The first state in this path is X_1=0. We can condition on the initial state X_0.

    \displaystyle P[X_1=0,X_2=0,X_3=1]=\sum \limits_{k=0}^2 P[X_0=k,X_1=0,X_2=0,X_3=1]

Each unconditional probability inside the summation can then be computed according to the idea in (6). Note that Example 2 gives the distribution of the initial state.

    \displaystyle \begin{aligned} P[X_1=0,X_2=0,X_3=1]&=P[X_0=0,X_1=0,X_2=0,X_3=1]+ \\& \ \ P[X_0=1,X_1=0,X_2=0,X_3=1]+P[X_0=2,X_1=0,X_2=0,X_3=1] \\&=P[X_0=0] \times P_{00} \times P_{00} \times P_{01} +\\&\ \ P[X_0=1] \times P_{10} \times P_{00} \times P_{01}+P[X_0=2] \times P_{20} \times P_{00} \times P_{01} \\&=0.4 \times 0.6 \times 0.6 \times 0.2+\\&\ \ 0.3 \times 0.3 \times 0.6 \times 0.2 + 0.3 \times 0.4 \times 0.6 \times 0.2\\&=0.054 \end{aligned}

In calculating probabilities in a Markov chain, conditioning on the first step is a useful technique that will be used often in the subsequent posts.
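
First-step conditioning is easy to mirror in code as well. A minimal sketch (same NumPy encoding as before) that reproduces the sum above:

    import numpy as np

    P = np.array([[0.6, 0.2, 0.2],
                  [0.3, 0.5, 0.2],
                  [0.4, 0.3, 0.3]])
    init = np.array([0.4, 0.3, 0.3])

    # P[X_1=0, X_2=0, X_3=1]: sum over the unknown initial state X_0 = k
    total = sum(init[k] * P[k, 0] * P[0, 0] * P[0, 1] for k in range(3))
    print(total)  # 0.054, matching the direct calculation in Example 2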

Example 5
Consider a Markov chain with the following transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 \cr        0 & 0.5 & 0.2 & 0.2 & 0.1 \cr        1 & 0.3 & 0.3 & 0.2 & 0.2 \cr        2 & 0.3 & 0.2 & 0.2 & 0.3 \cr        3 & 0 & 0 & 0 & 1   } \qquad

State 3 is called an absorbing state since P_{33}=1. In other words, when the chain reaches state 3, it will never leave. Suppose that initially the Markov chain is in state 0 50% of the time, in state 1 30% of the time and in state 2 20% of the time. Calculate the probability P[X_1=0,X_2=1,X_3=0,X_4=2].

We can condition on the initial state as in Example 4.

    \displaystyle \begin{aligned} P[X_1=0,X_2=1,X_3=0,X_4=2]&=\sum \limits_{k=0}^2 P[X_0=k,X_1=0,X_2=1,X_3=0,X_4=2] \\&=P[X_0=0,X_1=0,X_2=1,X_3=0,X_4=2]+ \\&\ \ P[X_0=1,X_1=0,X_2=1,X_3=0,X_4=2]+\\&\ \ P[X_0=2,X_1=0,X_2=1,X_3=0,X_4=2] \\&=P[X_0=0] \times P_{00} \times P_{01} \times P_{10} \times P_{02}+ \\& \ \ P[X_0=1] \times P_{10} \times P_{01} \times P_{10} \times P_{02}+\\&\ \ P[X_0=2] \times P_{20} \times P_{01} \times P_{10} \times P_{02} \\&=0.0048\end{aligned}
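
A quick numerical check of this sum, in the same style as the earlier sketches (init again denotes our encoding of the initial distribution):

    import numpy as np

    P = np.array([[0.5, 0.2, 0.2, 0.1],
                  [0.3, 0.3, 0.2, 0.2],
                  [0.3, 0.2, 0.2, 0.3],
                  [0.0, 0.0, 0.0, 1.0]])  # state 3 is absorbing
    init = np.array([0.5, 0.3, 0.2, 0.0])

    # Condition on X_0 = k and sum, as in Example 4.
    total = sum(init[k] * P[k, 0] * P[0, 1] * P[1, 0] * P[0, 2]
                for k in range(4))
    print(total)  # 0.0048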

Example 6
This is Example 1 in this previous post.

When a binary digit, 0 or 1, is transmitted through a communication system, it passes through several stages. At each stage, there is a probability p that the digit is transmitted in error. Let X_n be the digit that is the output at the nth stage. Then \left\{X_n: n \ge 0 \right\} is a two-state Markov chain with the following transition probability matrix.

    \mathbf{P} = \bordermatrix{ & 0 & 1 \cr 0 & 1-p & p \cr 1 & p & 1-p \cr }

Determine the following:

  • If the digit to be sent is 0, determine the probability that no error occurs up to the second stage.
  • If the digit to be sent is 0, determine the probability that a correct message is received at stage 2 through stage 4.

The first probability is P[X_1=0,X_2=0 \lvert X_0=0]. This is P_{00} \times P_{00}=(1-p)^2. To calculate the second probability, we can condition on the state at time 1. We first calculate P[X_1=0] and P[X_1=1]. Since we know for certain that the initial state is 0, P[X_1=0]=P_{00}=1-p and P[X_1=1]=P_{01}=p.

    \displaystyle \begin{aligned} P[X_2=0,X_3=0,X_4=0]&=P[X_1=0,X_2=0,X_3=0,X_4=0]+P[X_1=1,X_2=0,X_3=0,X_4=0] \\&=P[X_2=0,X_3=0,X_4=0 \lvert X_1=0] \times P[X_1=0]+ \\&\ \ P[X_2=0,X_3=0,X_4=0 \lvert X_1=1] \times P[X_1=1] \\&=P_{00} \times P_{00} \times P_{00} \times (1-p)+ \\&\ \ P_{10} \times P_{00} \times P_{00} \times p\\&=(1-p)^4+p^2 \times (1-p)^2 \end{aligned}
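
Since the answers here are symbolic in p, a computer algebra system is a convenient check. A minimal SymPy sketch (our choice of tool, not the post's) confirming both results:

    import sympy as sp

    p = sp.symbols('p')
    P = sp.Matrix([[1 - p, p],
                   [p, 1 - p]])

    # P[X_1=0, X_2=0 | X_0=0] = P_00 * P_00 = (1-p)^2
    print(sp.factor(P[0, 0] * P[0, 0]))  # (p - 1)**2, i.e. (1-p)^2

    # P[X_2=0, X_3=0, X_4=0], conditioning on the state at time 1
    expr = P[0, 0] * P[0, 0]**3 + P[0, 1] * P[1, 0] * P[0, 0]**2
    print(sp.expand(expr - ((1 - p)**4 + p**2 * (1 - p)**2)))  # 0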

Remarks

This post involves only basic calculations with the one-step transition probabilities, i.e. the elements of the transition probability matrix of a Markov chain. The n-step transition probabilities are key to the development of the theory of Markov chains. The next post begins the discussion of the Chapman-Kolmogorov equations.

Practice problems are in this companion blog.


\copyright 2017 – Dan Ma
