Introducing Markov Chains

This post continues the discussion of the previous two posts – one looking at random walks informally through graphs and one defining random walks more formally – and introduces the notion of Markov chains.

Stochastic Process

A stochastic process is a collection of random variables \left\{X_t: t \in T \right\}. Each element of the collection is a random variable X_t. The index t is often regarded as time, and we refer to X_t as the state of the process at time t. Thus a stochastic process is a family of random variables that describes the evolution through time of some physical process.

The set T is called the index set of the stochastic process. When T is a countable set, e.g. T=\left\{0,1,2,3,\cdots \right\}, the stochastic process is said to be a discrete-time process. When the index set T is an interval of the real number line, the stochastic process is said to be a continuous-time process. For example, \left\{X_n: n=0,1,2,\cdots \right\} is a discrete-time stochastic process indexed by the non-negative integers whereas \left\{X_t: t \ge 0 \right\} is a continuous-time stochastic process indexed by the non-negative real numbers. The state space of a stochastic process is defined as the set of all possible values that can be taken by the random variables X_t.

In the present discussion, we focus on discrete-time stochastic processes \left\{X_n: n=0,1,2,\cdots \right\} where the state space S is finite or countably infinite. When the state space is finite, the process is said to be a finite-state stochastic process.

We also assume that the state space S consists of integers. Thus S is either \mathbb{N}=\left\{0,1,2,3,\cdots \right\} (the natural numbers) or \mathbb{Z}=\left\{\cdots,-3,-2,-1,0,1,2,3,\cdots \right\} (the integers), or some subset of \mathbb{N} or \mathbb{Z}.

If we know nothing about X_n other than the fact that they are random variables, then there is not much we can say about the stochastic process \left\{X_n: n=0,1,2,\cdots \right\}. To make the discussion more meaningful, it is necessary to impose some additional structure on these random variables.

The simplest structure we can assume is that the random variables X_n are independent. This would be a good model for phenomena that can be regarded as random experiments in which the future states of the process are independent of past and present states. However, in many situations the assumption of independence is unjustified: in many stochastic processes that arise in practice, past and present states can influence the future states.
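
To make the contrast concrete, here is a minimal sketch in Python (with numpy; the code below is illustrative, not part of the formal development) that simulates both kinds of processes: one whose states are independent draws, and a simple random walk whose next state depends on the present state.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Independent process: each X_n is a fresh draw, unaffected by history.
    iid_process = rng.integers(low=0, high=10, size=20)

    # Dependent process (a simple random walk): X_{n+1} equals X_n plus or
    # minus 1, so the present state directly influences the future state.
    steps = rng.choice([-1, 1], size=20)
    walk = np.concatenate(([0], np.cumsum(steps)))

    print(iid_process)
    print(walk)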

Markov Chains

We consider stochastic processes that possess the following property: the future state of the process, conditional on both past and present states, depends only upon the present state and not on the past states leading up to the present state. This property is called the Markov property, and any stochastic process that possesses it is called a Markov chain. More specifically, given the present state X_n and the past states X_{n-1},X_{n-2},\cdots,X_2,X_1,X_0, the conditional distribution of the future state X_{n+1} depends only on the present state X_n. Formally, the process must satisfy the following requirement:

    \displaystyle (1) \ \ \ \ P[X_{n+1}=j \lvert X_n=i,X_{n-1}=x_{n-1},\cdots,X_0=x_0]=P[X_{n+1}=j \lvert X_n=i]

for all possible states x_0,x_1, \cdots,x_{n-1},i,j and all non-negative integers n. Thus under the Markov property, the conditional distribution of the future state X_{n+1} depends only on the present state X_n; the states preceding the present state have no influence on the future state.

The conditional probabilities P[X_{n+1}=j \lvert X_n=i] in (1) are called transition probabilities of the Markov chain. In the present discussion, we consider only Markov chains in which the transition probabilities P[X_{n+1}=j \lvert X_n=i] are independent of the current period n. Such Markov chains are called time-homogeneous Markov chains or stationary Markov chains. The additional assumption of time-homogeneity means that the probability of transitioning from state i into state j is identical regardless of where the process is in the time scale (at the beginning of the process or later on).

With the transition probability P[X_{n+1}=j \lvert X_n=i] independent of n, we can denote this probability by the states i and j only:

    \displaystyle  (2) \ \ \ \ P_{ij}=P[X_{n+1}=j \lvert X_n=i]

The probability P_{ij} represents the probability that the process, starting in state i, will enter into state j in the next period. Since the process must transition into one of the states in the state space in the next time period, P_{ij}, with i fixed and j varying, must represent a probability distribution. Thus P_{i0}+  P_{i1}+\cdots=1. Here we assume that the state space is the set of all non-negative integers.
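
For a finite-state chain, this row condition is easy to check numerically. The following is a small Python sketch (the function name is_stochastic is an illustrative label) that verifies that every row of a candidate transition matrix is a probability distribution.

    import numpy as np

    def is_stochastic(P, tol=1e-12):
        # Each row must consist of non-negative entries summing to 1.
        P = np.asarray(P, dtype=float)
        return bool(np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0, atol=tol))

    # A small two-state example.
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    print(is_stochastic(P))   # True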

The probabilities P_{ij} are called one-step transition probabilities since they give the probabilities of transitioning from one state to another state in one period of time. The one-step transition probabilities are usually expressed in a matrix or a transition diagram. The following shows what a transition probability matrix looks like.

    \mathbf{P}= \left[\begin{array}{ccccc}      P_{0,0} & P_{0,1} & P_{0,2} & \cdots & P_{0,m} \\      P_{1,0} & P_{1,1} & P_{1,2} & \cdots & P_{1,m} \\      P_{2,0} & P_{2,1} & P_{2,2} & \cdots & P_{2,m} \\      \vdots & \vdots & \vdots & \vdots & \vdots \\  P_{m,0} & P_{m,1} & P_{m,2} & \cdots & P_{m,m}    \end{array}\right]

The above transition matrix is for a finite-state Markov chain with state space \left\{0,1,\cdots,m \right\}. The probabilities in each row sum to 1. If the process is currently in state i, the row indexed by i (in this case the (i+1)st row of the matrix) gives the probabilities of transitioning into each of the states. If the state space is countably infinite, then each row would have infinitely many terms and there would be infinitely many rows.

In many situations, it is helpful to have a column label and row label to specify the states in the Markov chain. For example, a particle moves through the states 0, 1, 2, 3 and 4 according to a Markov chain described by the following transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 & 4 \cr        0 & 0 & 0.6 & 0.2  & 0.1  & 0.1 \cr        1 & 0.5 & 0 & 0.2  & 0.2  & 0.1 \cr        2 & 0.4 & 0.2 & 0  & 0.1  & 0.3 \cr        3 & 0.2 & 0.3 & 0.2  & 0  & 0.3 \cr        4 & 0.6 & 0.2 & 0.1  & 0.1  & 0 \cr   } \qquad

In the above matrix, P_{31}=0.3, which means that when the Markov process is in state 3, it moves to state 1 with probability 0.3. As discussed earlier, the probabilities in a given row sum to 1, which makes sense since the probabilities in a row describe all possible moves when the process is in a given state.

When it is helpful to do so, the transition probability matrices will have a row label and a column label. If the row label and column label are not given, we assume that the rows and columns are listed in ascending order (recall that the states are integers). The transition probability matrices can be displayed using square brackets or parentheses.
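
To connect the matrix to the process it describes, the following sketch (Python with numpy; the helper simulate is an illustrative name) encodes the matrix above and simulates a path of the particle, drawing each next state from the row of \mathbf{P} indexed by the current state.

    import numpy as np

    # The transition probability matrix of the particle example above,
    # with rows and columns listed in ascending order of the states 0 to 4.
    P = np.array([
        [0.0, 0.6, 0.2, 0.1, 0.1],
        [0.5, 0.0, 0.2, 0.2, 0.1],
        [0.4, 0.2, 0.0, 0.1, 0.3],
        [0.2, 0.3, 0.2, 0.0, 0.3],
        [0.6, 0.2, 0.1, 0.1, 0.0],
    ])

    rng = np.random.default_rng(seed=1)

    def simulate(P, start, n_steps):
        # At each step, draw the next state from the row of P
        # indexed by the current state.
        path = [start]
        for _ in range(n_steps):
            path.append(rng.choice(len(P), p=P[path[-1]]))
        return path

    print(simulate(P, start=3, n_steps=10))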

Examples

The key to working with Markov chains is the transition probabilities P_{ij}. These probabilities allow us to describe the random transitions through the states over time. The remainder of this post gives several examples of Markov chains, focusing on transition probability matrices.

Example 1
When a binary digit, 0 or 1, is transmitted through a communication system, it passes through several stages. At each stage, there is a probability p that the digit is transmitted in error. Let X_n be the digit that is the output at the nth stage. Then \left\{X_n: n \ge 0 \right\} is a two-state Markov chain with the following transition probability matrix.

    \mathbf{P} = \bordermatrix{ & 0 & 1 \cr 0 & 1-p & p \cr 1 & p & 1-p \cr }

Some typical problems might be:

  • If the digit to be sent is 0, determine the probability that no error occurs up to the second stage.
  • If the digit to be sent is 0, determine the probability that a correct message is received at stage 2.
  • If the digit to be sent is 0, determine the probability that a correct message is received at stage n where n \ge 1 (a computational sketch of this question follows the list).
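
The last question can be explored numerically by raising \mathbf{P} to the nth power (n-step transition probabilities are taken up in later posts). The sketch below uses an illustrative value of p; the (0,0) entry of \mathbf{P}^n is the probability that a transmitted 0 is received correctly at stage n, and for this symmetric two-state chain it agrees with the closed form (1+(1-2p)^n)/2.

    import numpy as np

    p = 0.1   # per-stage error probability (illustrative value)
    P = np.array([[1 - p, p],
                  [p, 1 - p]])

    n = 2
    Pn = np.linalg.matrix_power(P, n)

    # Probability that a 0 sent at stage 0 is received correctly at stage n.
    print(Pn[0, 0])
    # Closed form for this symmetric two-state chain.
    print((1 + (1 - 2 * p) ** n) / 2)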

Example 2
Let’s consider the example of gambler’s ruin with finitely many states. Suppose that a gambler makes a series of one-unit bets against the house. For the gambler, the probabilities of winning and losing each bet are p and 1-p, respectively. Whenever the capital reaches zero, the gambler is in ruin and his capital remains zero thereafter. On the other hand, if the capital of the gambler increases to m, a large level of capital, the gambler quits playing. Let X_n be the capital of the gambler after the nth bet. Then \left\{ X_n: n=0,1,2,\cdots \right\} is a random walk, as discussed in the previous post. Since the future capital X_{n+1} depends only on the current level of capital X_n (up one unit or down one unit), this is also a Markov chain. The following is the transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & \cdots & m \cr        0 & 1 & 0 & 0 & 0 & 0 & \cdots & 0 \cr        1 & 1-p & 0 & p & 0 & 0 & \cdots & 0 \cr        2 & 0 & 1-p & 0 & p & 0 &\cdots & 0 \cr        3 & 0 & 0 & 1-p & 0 & p &\cdots & 0 \cr        \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \cr        m & 0 & 0 & 0 & 0 & 0 &\cdots & 1    } \qquad

The above is an (m+1) \times (m+1) matrix. The states of the process are 0,1,2,\cdots,m. The states 0 and m are absorbing states since the process stays there and never leaves once it arrives at either of these two states. Note that P_{00}=1 and P_{mm}=1. State 0 is the state of ruin. State m would mean the gambler becomes fabulously rich (if m is large).

For the rows in between, the transition probabilities are P_{i,i-1}=1-p and P_{i,i+1}=p. In other words, from state i, the process goes up to state i+1 with probability p and down to state i-1 with probability 1-p. More specifically, if m=5, the following is the transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & 5 \cr        0 & 1 & 0 & 0  & 0  & 0 & 0 \cr        1 & 1-p & 0 & p  & 0  & 0 & 0 \cr        2 & 0 & 1-p & 0  & p  & 0 & 0 \cr        3 & 0 & 0 & 1-p  & 0  & p & 0 \cr        4 & 0 & 0 & 0  & 1-p  & 0 & p \cr        5 & 0 & 0 & 0  & 0  & 0 & 1   } \qquad
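
For a general m, the transition matrix can be built programmatically. Here is a sketch (Python with numpy; the function name is an illustrative choice) that constructs the (m+1) \times (m+1) matrix with absorbing states 0 and m.

    import numpy as np

    def gamblers_ruin_matrix(m, p):
        # Absorbing states at 0 and m; from any interior state i the chain
        # moves to i+1 with probability p and to i-1 with probability 1-p.
        P = np.zeros((m + 1, m + 1))
        P[0, 0] = 1.0
        P[m, m] = 1.0
        for i in range(1, m):
            P[i, i - 1] = 1 - p
            P[i, i + 1] = p
        return P

    print(gamblers_ruin_matrix(5, 0.5))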

Assume that the gambler has initial capital d, which is positive but small in comparison to m. For example, d=10 and m=1000. Some interesting questions are:

  • If the starting capital is 10, what is the probability that the gambler’s capital will be 5 after 8 plays of the game?
  • What is the probability of ruin for the gambler, i.e. reaching state 0? (A simulation sketch for this question follows the list.)
  • What is the mean time until reaching one of the absorbing states (ruin or fabulously rich)?
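
The question of ruin can be explored by simulation before any theory is developed. The sketch below (the number of trials and the seed are arbitrary illustrative choices) plays the game repeatedly until the capital is absorbed at 0 or m and records how often ruin occurs.

    import numpy as np

    rng = np.random.default_rng(seed=2)

    def ruin_probability(d, m, p, n_trials=20_000):
        # Play until the capital is absorbed at 0 (ruin) or at m, and
        # estimate the ruin probability by the fraction of ruined trials.
        ruined = 0
        for _ in range(n_trials):
            capital = d
            while 0 < capital < m:
                capital += 1 if rng.random() < p else -1
            ruined += capital == 0
        return ruined / n_trials

    # With p = 0.5, d = 5 and m = 10, symmetry suggests an answer near 0.5.
    print(ruin_probability(d=5, m=10, p=0.5))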

Example 3 (Restless Mouse)
A maze has eight areas, numbered 1 through 8; the layout of the maze, and which areas connect to which, can be read from the transition matrix below.

The mouse moves through the areas in the maze at random. That is, if an area has exits to w areas, the mouse moves to each of these w areas with probability 1/w. Let X_n be the area the mouse is located in after the nth move. Then \left\{X_n: n \ge 0 \right\} is a Markov chain with the following transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \cr        1 & 0 & 1/2 & 0  & 1/2  & 0 & 0 & 0 & 0 \cr        2 & 1/2 & 0 & 1/2  & 0  & 0 & 0 & 0 & 0 \cr        3 & 0 & 1/2 & 0  & 0  & 1/2 & 0 & 0 & 0 \cr        4 & 1/3 & 0 & 0  & 0  & 1/3 & 1/3 & 0 & 0 \cr        5 & 0 & 0 & 1/3  & 1/3  & 0 & 0 & 0 & 1/3 \cr        6 & 0 & 0 & 0  & 1/2  & 0 & 0 & 1/2 & 0 \cr        7 & 0 & 0 & 0  & 0  & 0 & 1/2 & 0 & 1/2 \cr        8 & 0 & 0 & 0  & 0  & 1/2 & 0 & 1/2 & 0   } \qquad

Here are some interesting questions about this Markov chain. Suppose that area 8 contains food and area 6 has a device for an electric shock.

  • Given that the mouse is placed in area 1 at the beginning, what is the probability that the mouse will reach area 8 in 4 moves, and more generally in n moves?
  • Given that the mouse is placed in area 1 at the beginning, what is the mean number of moves for the mouse to reach area 8 or area 6?
  • Given that the mouse is placed in area 1 at the beginning, what is the probability that the mouse will reach the food area before the electric shock area? (A simulation sketch for this question follows the list.)
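
The last question can be estimated by straightforward simulation. Here is a sketch (Python with numpy; zero-based indices 0 through 7 stand in for areas 1 through 8) that walks the mouse through the maze until it reaches the food area or the shock area.

    import numpy as np

    # The maze transition matrix; index 0 corresponds to area 1, and so on.
    P = np.array([
        [0,   1/2, 0,   1/2, 0,   0,   0,   0  ],
        [1/2, 0,   1/2, 0,   0,   0,   0,   0  ],
        [0,   1/2, 0,   0,   1/2, 0,   0,   0  ],
        [1/3, 0,   0,   0,   1/3, 1/3, 0,   0  ],
        [0,   0,   1/3, 1/3, 0,   0,   0,   1/3],
        [0,   0,   0,   1/2, 0,   0,   1/2, 0  ],
        [0,   0,   0,   0,   0,   1/2, 0,   1/2],
        [0,   0,   0,   0,   1/2, 0,   1/2, 0  ],
    ])

    rng = np.random.default_rng(seed=3)

    def food_before_shock(start=0, food=7, shock=5, n_trials=20_000):
        # Walk the chain until it hits the food area or the shock area,
        # and estimate the probability that food is reached first.
        hits = 0
        for _ in range(n_trials):
            state = start
            while state not in (food, shock):
                state = rng.choice(8, p=P[state])
            hits += state == food
        return hits / n_trials

    print(food_before_shock())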

Next Steps

The concept of Markov chains has been introduced and illustrated with examples of random walks (in earlier posts) and other examples in this post. All the properties of Markov chains (in this case time-homogeneous Markov chains) are derived from the transition probability matrix. One of the most urgent tasks is to develop a way to calculate the transition probabilities. Recall that the elements of the transition probability matrix \mathbf{P} are the one-step transition probabilities P_{ij}, which give the probability of going from state i into state j. The natural next step is to calculate the n-step transition probabilities P^n_{ij}, the probability that the process will be in state j in the nth period given that the process is initially in state i. The next post is a beginning look at transition probabilities. The post after that covers the Chapman-Kolmogorov equations, a systematic way of calculating transition probabilities using matrix multiplication.
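
As a preview of that machinery, the following sketch computes 4-step transition probabilities for the 5-state particle chain shown earlier, using repeated matrix multiplication via numpy.linalg.matrix_power.

    import numpy as np

    # The 5-state particle chain from earlier in the post.
    P = np.array([
        [0.0, 0.6, 0.2, 0.1, 0.1],
        [0.5, 0.0, 0.2, 0.2, 0.1],
        [0.4, 0.2, 0.0, 0.1, 0.3],
        [0.2, 0.3, 0.2, 0.0, 0.3],
        [0.6, 0.2, 0.1, 0.1, 0.0],
    ])

    # The (i, j) entry of P^n is the probability of moving from state i
    # to state j in exactly n steps.
    P4 = np.linalg.matrix_power(P, 4)
    print(P4[0, 2])   # from state 0 to state 2 in 4 steps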


© 2017 – Dan Ma
