# A first look at applications of Markov chains

This post presents several interesting examples of Markov chain models. Previous posts on Markov chains include an introduction and two posts on working with transition probabilities (here and here).

Examples

The following gives several examples of Markov chains, discussed at an introductory level. They will be further developed in subsequent posts.

Example 1 – The Ehrenfest Chain

The Ehrenfest chain, named for the physicist Paul Ehrenfest, is a simple, discrete model for the exchange of gas molecules contained in a volume divided into two regions A and B by a permeable membrane. Suppose that the gas has $N$ molecules. At each period of time, a molecule is chosen at random from the set of $N$ molecules and moved from the region that it is in to the other region.

Figure 1 – The Ehrenfest Chain

The model has a simple mathematical description using balls and urns. Suppose that two urns, labeled A and B, contain $N$ balls, labeled $1, 2, 3, \cdots, N$. At each step (i.e. at each discrete time unit), an integer is selected at random from $1, 2, 3, \cdots, N$ independent of the past selections. Then the ball labeled by the selected integer is removed from its urn and placed in the other urn. The procedure is repeated indefinitely.

The state of the system at time $n=0,1,2,\cdots$ is the number of balls in urn A and is denoted by $X_n$. Then $\left\{X_n: n=0,1,2,\cdots \right\}$ is a Markov chain on the state space $\left\{0,1,\cdots,N \right\}$.

When the process is in state 0 (urn A is empty), the next state is 1. When the process is in state $N$ (urn A is full), the next state is $N-1$. When the process is in state $i$ where $0<i<N$, the next state of the process is either $i-1$ or $i+1$. The following gives the one-step transition probabilities:

$\displaystyle P_{ij} = \left\{ \begin{array}{ll} \displaystyle 1 &\ \ \ \ \ \ i=0 \text{ and } j=1 \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ i=N \text{ and } j=N-1 \\ \text{ } & \text{ } \\ \displaystyle \frac{i}{N} &\ \ \ \ \ \ 0<i<N \text{ and } j=i-1 \\ \text{ } & \text{ } \\ \displaystyle 1-\frac{i}{N} &\ \ \ \ \ \ 0<i<N \text{ and } j=i+1 \\ \text{ } & \text{ } \\ 0 &\ \ \ \ \ \ \text{otherwise} \end{array} \right.$

When urn A has $i$ balls, there is a probability of $\frac{i}{N}$ that the randomly selected ball is from urn A, in which case urn A loses a ball. On the other hand, there is a probability of $1-\frac{i}{N}$ that the randomly selected ball is from urn B, in which case urn A gains a ball. As an illustration, the following gives the transition probability matrix for $N=5$ balls.

$\mathbf{P} = \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & 5 \cr 0 & 0 & 1 & 0 & 0 & 0 & 0 \cr 1 & \frac{1}{5} & 0 & \frac{4}{5} & 0 & 0 & 0 \cr 2 & 0 & \frac{2}{5} & 0 & \frac{3}{5} & 0 & 0 \cr 3 & 0 & 0 & \frac{3}{5} & 0 & \frac{2}{5} & 0 \cr 4 & 0 & 0 & 0 & \frac{4}{5} & 0 & \frac{1}{5} \cr 5 & 0 & 0 & 0 & 0 & 1 & 0 \cr } \qquad$
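The matrix above can be generated directly from the formula for $P_{ij}$. The following is a minimal Python sketch (the function name is my own choosing) that builds the Ehrenfest transition matrix with exact fractions:

```python
from fractions import Fraction

def ehrenfest_matrix(N):
    """Transition matrix of the Ehrenfest chain on states 0, 1, ..., N."""
    P = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        if i > 0:
            P[i][i - 1] = Fraction(i, N)      # selected ball is in urn A
        if i < N:
            P[i][i + 1] = Fraction(N - i, N)  # selected ball is in urn B
    return P

P = ehrenfest_matrix(5)
assert all(sum(row) == 1 for row in P)  # each row is a probability distribution
assert P[1][0] == Fraction(1, 5) and P[1][2] == Fraction(4, 5)
```

Using `Fraction` keeps the entries exact, so they can be compared directly against the matrix displayed above.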

An important problem in the Ehrenfest chain is the long-term, or equilibrium, distribution of $X_n$, the number of balls in urn A. Another interesting problem concerns the changes in the composition of the two regions over time. For example, if all molecules are in one region at the beginning, how long on average will it be before each region has half of the molecules? The theory of Markov chains provides a good method for answering such questions.
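Before the theory is developed, questions like the one above can be explored by simulation. The following is a Monte Carlo sketch (function and parameter names are my own) that estimates the expected number of steps for the chain, started with all balls in urn A, to first reach the middle state:

```python
import random

def time_to_half(N, trials=5000, seed=1):
    """Estimate the expected number of steps for the Ehrenfest chain,
    started in state N (urn A full), to first reach state ceil(N/2)."""
    rng = random.Random(seed)
    target = (N + 1) // 2
    total = 0
    for _ in range(trials):
        x, steps = N, 0
        while x != target:
            # with probability x/N the selected ball is in urn A (move down)
            x += -1 if rng.random() < x / N else 1
            steps += 1
        total += steps
    return total / trials

time_to_half(6)  # Monte Carlo estimate for N = 6
```

This only gives a numerical estimate; the exact answer comes from the mean hitting time machinery developed in later posts.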

Example 2 – Random Walk

The simplest random walk is a Markov chain $\left\{X_n: n=0,1,2,\cdots \right\}$ such that each state is the result of a random one-unit up or down move from the previous state.

Figure 2 – Five Simulations of a Random Walk

In the random walk in Figure 2, each state is one unit above or below the preceding state with equal probability. Instead of a random one-unit up or down move, let’s give a more general description of a random walk. The moves in the random walk are determined by a predetermined discrete distribution. Let $Y_1, Y_2, Y_3, \cdots$ be integer-valued random variables that are independent and identically distributed. Let $P[Y=y]$ be the common probability function. Let $X_0$ be an integer-valued random variable that is independent of the random variables $Y_n$. The random variable $X_0$ will be the initial position of the random walk. The random variables $Y_n$ are the increments (they are the amounts added to the stochastic process as time increases).

For $n \ge 1$, define $X_n=X_0+Y_1+\cdots+Y_{n-1}+Y_n$. Then $\left\{X_n: n=0,1,2,\cdots \right\}$ is called a random walk. It is a Markov chain with the state space the set of all integers. Wherever the process is at any given time, the next step is always a move whose distance and direction are dictated by the independent random variables $Y_n$. If the random variables $Y_n$ take the values $1$ and $-1$ with $P(Y_n=1)=p$ and $P(Y_n=-1)=1-p$, then we recover the simple random walk shown in Figure 2.

To transition from state $i$ to state $j$, the increment $Y_n$ must be $j-i$. Thus the one-step transition probabilities $P_{ij}$ are defined by $P_{ij}=P[Y=j-i]$. Recall that $P[Y=y]$ is the common probability function for the sequence $Y_1, Y_2, Y_3 \cdots$. For a more detailed description, see this previous post.
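The description above translates into a few lines of code. The following Python sketch (names are my own) simulates the simple random walk with $\pm 1$ increments, of the kind shown in Figure 2:

```python
import random

def simulate_walk(steps, p=0.5, x0=0, seed=42):
    """Simple random walk: each increment is +1 with probability p,
    and -1 with probability 1 - p, independent of the past."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(steps):
        path.append(path[-1] + (1 if rng.random() < p else -1))
    return path

path = simulate_walk(100)
# consecutive states always differ by exactly one unit
assert all(abs(b - a) == 1 for a, b in zip(path, path[1:]))
```

A general random walk only requires replacing the $\pm 1$ draw with a draw from the common distribution $P[Y=y]$.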

Example 3 – Gambler’s Ruin

A gambler’s ruin is a simple random walk that can take on only finitely many states. At each time period, the state is one-unit higher or lower than the preceding state as a result of a random bet. More specifically, suppose that a gambler starts out with $d$ units in capital (in dollars or other monetary units) and makes a series of one-unit bets against the house. For the gambler, the probabilities of winning and losing each bet are $p$ and $1-p$, respectively. Whenever the capital reaches zero, the gambler is in ruin and his capital remains zero thereafter. On the other hand, if the capital of the gambler increases to $m$ where $d<m$, then the gambler quits playing. Let $X_n$ be the capital of the gambler after the $n$th bet. Then $\left\{X_n: n=0,1,2,3,\cdots \right\}$ is a random walk. The starting state is $X_0=d$. The states 0 and $m$ are absorbing states. The following gives the transition probabilities.

$\displaystyle P_{ij} = \left\{ \begin{array}{ll} \displaystyle p &\ \ \ \ \ \ j=i+1,i \ne 0, i \ne m \\ \text{ } & \text{ } \\ \displaystyle 1-p &\ \ \ \ \ \ j=i-1,i \ne 0, i \ne m \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ i=0,j=0 \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ i=m,j=m \\ \text{ } & \text{ } \\ 0 &\ \ \ \ \ \ \text{otherwise} \end{array} \right.$

Whenever the gambler reaches the absorbing state of 0, the gambler is in ruin. If the process reaches the absorbing state of $m$, the gambler strikes gold. If the initial capital of the gambler is small relative to the casino, there is a virtually 100% chance that the gambler will be in ruin even if each bet has even odds (see here for the calculation). One interesting question: on average how long does it take for the gambler to be in ruin? Such questions will be answered after the necessary tools are built in subsequent posts.
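The probability of ruin can be estimated by simulation and checked against the known closed form. For even-odds bets ($p=\frac{1}{2}$) the exact ruin probability starting from $d$ with target $m$ is $1-\frac{d}{m}$. A Monte Carlo sketch (function names are my own):

```python
import random

def ruin_probability(d, m, p, trials=20000, seed=7):
    """Monte Carlo estimate of the probability that a gambler starting
    with d units is absorbed at 0 before reaching m (win prob p per bet)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        x = d
        while 0 < x < m:
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / trials

# for even-odds bets the exact ruin probability is 1 - d/m = 0.7 here
est = ruin_probability(3, 10, 0.5)
assert abs(est - 0.7) < 0.03
```

The tolerance in the assertion is generous because the estimate is random; with 20,000 trials the standard error is well under 0.01.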

An alternative interpretation of the gambler’s ruin random walk is this. Assume that two players make a series of one-unit bets against each other and that the total capital between the two players is $m$ units with the first player having $d$ units in capital. Further suppose that the first player has probability $p$ of winning a bet and the second player has probability $1-p$ of winning a bet. Then the two gamblers play until one of them goes broke (is in ruin). Let $X_n$ be the capital of the first player after the $n$th play. Then $\left\{X_n: n=0,1,2,3,\cdots \right\}$ is a Markov chain that is a gambler’s ruin. The following graph shows five simulations of a gambler’s ruin from this previous post.

Figure 3 – Five Simulations of a Gambler’s Ruin

Example 4 – Discrete Birth and Death Chain

In a birth and death chain, the process moves from the current state $i$ to $i+1$ (a birth), to $i-1$ (a death), or stays at $i$. The state space is either $\left\{0,1,2,3,\cdots \right\}$ or $\left\{0,1,2,3,\cdots, m \right\}$. The following gives the one-step transition probabilities.

$\displaystyle P_{ij} = \left\{ \begin{array}{ll} \displaystyle q_i &\ \ \ \ \ \ j=i-1 \\ \text{ } & \text{ } \\ \displaystyle r_i &\ \ \ \ \ \ j=i \\ \text{ } & \text{ } \\ \displaystyle p_i &\ \ \ \ \ \ j=i+1 \\ \text{ } & \text{ } \\ 0 &\ \ \ \ \ \ \text{otherwise} \end{array} \right.$

where for each state $i$, $q_i$ (the probability of down), $r_i$ (the probability of same) and $p_i$ (the probability of up) are non-negative real numbers such that $q_i+r_i+p_i=1$. The following gives the transition probability matrix for an illustrative case of $m=5$.

$\mathbf{P} = \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & 5 \cr 0 & r_0 & p_0 & 0 & 0 & 0 & 0 \cr 1 & q_1 & r_1 & p_1 & 0 & 0 & 0 \cr 2 & 0 & q_2 & r_2 & p_2 & 0 & 0 \cr 3 & 0 & 0 & q_3 & r_3 & p_3 & 0 \cr 4 & 0 & 0 & 0 & q_4 & r_4 & p_4 \cr 5 & 0 & 0 & 0 & 0 & q_5 & r_5 \cr } \qquad$

In the above matrix, $q_0=0$ and $p_5=0$. If $r_0=1$, then state 0 is absorbing. If $p_0=1$, then state 0 is reflecting. On the other hand, if $r_5=1$, then state 5 is absorbing. If $q_5=1$, then state 5 is reflecting.

The Ehrenfest chain is an example of a birth and death chain (with the boundary states being reflecting). When the probability of down $q_i$, the probability of same $r_i$ and the probability of up $p_i$ do not vary according to the current state (i.e. they are constant), the resulting birth and death chain is a random walk. Thus the gambler’s ruin chain is also an example of a birth and death chain. The birth and death chain is a suitable model for applications in which the state of the process is the population of a living system.
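The tridiagonal structure of the matrix above is easy to capture in code. The following Python sketch (names are my own) builds a birth and death transition matrix from the up/down/stay probabilities, and then recovers the Ehrenfest chain with $N=5$ as the special case where every $r_i=0$:

```python
def birth_death_matrix(p, q, r):
    """Tridiagonal transition matrix of a birth and death chain.
    p[i], q[i], r[i] are the up, down, stay probabilities of state i;
    the boundary conditions require q[0] == 0 and p[-1] == 0."""
    m = len(p)
    P = [[0.0] * m for _ in range(m)]
    for i in range(m):
        assert abs(p[i] + q[i] + r[i] - 1) < 1e-12  # row must sum to 1
        if i > 0:
            P[i][i - 1] = q[i]
        P[i][i] = r[i]
        if i < m - 1:
            P[i][i + 1] = p[i]
    return P

# the Ehrenfest chain with N = 5 as a birth and death chain (all r_i = 0)
N = 5
P = birth_death_matrix(
    p=[(N - i) / N for i in range(N + 1)],
    q=[i / N for i in range(N + 1)],
    r=[0.0] * (N + 1),
)
assert P[2][3] == 0.6 and P[2][1] == 0.4  # matches the N = 5 matrix above
```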

Example 5 – Discrete Queueing Markov Chain

Though continuous-time queueing models are much more realistic, a simple discrete-time queueing model is introduced here in order to illustrate applications of discrete Markov chains. Suppose that time is measured in a convenient time interval such as one minute or some appropriate group of minutes. The model assumes that during one period of time, only one customer is served if at least one customer is present. The model also assumes that a random number of new customers arrive at any given time period and that the numbers of customers arriving in the time periods form an independent identically distributed sequence of random variables.

More specifically, let $Y_1,Y_2,Y_3,\cdots$ be independent and identically distributed random variables that take on non-negative integer values. Let $P[Y=k]$, $k=0,1,2,\cdots$, be the common probability function. The random variable $Y_n$ is the number of customers that arrive into the system during the $n$th period. Let $X_0$ be the number of customers present initially in the system. For $n \ge 1$, let $X_n$ be the number of customers in the system during the $n$th period. If the current state is $X_n$, then the next state is:

$\displaystyle X_{n+1} = \left\{ \begin{array}{ll} \displaystyle Y_{n+1} &\ \ \ \ \ \ X_n=0 \\ \text{ } & \text{ } \\ \displaystyle (X_n-1)+Y_{n+1} &\ \ \ \ \ \ X_n>0 \\ \end{array} \right.$

where $n \ge 0$. In essence, the number of customers in the system at any given time period is the number of new customers arriving plus the customers already in the system less one (if the system is not empty). If the system is empty, the number of customers is simply the number of new customers. It follows that $\left\{X_n: n \ge 0 \right\}$ is a Markov chain with the state space $\left\{0,1,2,\cdots \right\}$, the set of all non-negative integers. The one-step transition probabilities $P_{ij}$ are given by:

$\displaystyle P_{ij} = \left\{ \begin{array}{ll} \displaystyle P[Y=j] &\ \ \ \ \ \ i=0 \text{ and } j \in \left\{0,1,2,\cdots \right\} \\ \text{ } & \text{ } \\ \displaystyle P[Y=j-(i-1)] &\ \ \ \ \ \ i>0 \text{ and } j \in \left\{i-1,i,i+1,\cdots \right\} \\ \end{array} \right.$

where $P[Y=k]$ is the common probability function for the independent numbers of arrivals of customers $Y_1,Y_2,Y_3,\cdots$. If the $Y_n$ have a common binomial distribution with parameters 3 and $p=0.5$, then the following gives the transition probability matrix.

$\displaystyle \mathbf{P} = \bordermatrix{ & 0 & \text{ } & 1 & \text{ } & 2 & \text{ } & 3 & \text{ } & 4 & \text{ } & 5 & \text{ } & 6 & \text{ } & 7 & \text{ } & 8 & \cdots \cr 0 & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \cdots \cr 1 & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \cdots \cr 2 & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \cdots \cr 3 & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \cdots \cr 4 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0 & \cdots \cr 5 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \cdots \cr 6 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \cdots \cr 7 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \cdots \cr 8 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \cdots \cr \vdots & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } \cr } \qquad$
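Each row of the matrix above follows directly from the formula for $P_{ij}$. The following Python sketch (helper names are my own) computes one row for the binomial(3, 0.5) arrival distribution and matches it against the display:

```python
from math import comb

def arrival_pmf(k, n=3, p=0.5):
    """Binomial(3, 0.5) probability of k arrivals in one period."""
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

def queue_transition(i, j):
    """P_ij for the discrete queueing chain: one customer is served per
    period whenever the system is nonempty."""
    if i == 0:
        return arrival_pmf(j)              # empty system: next state = arrivals
    return arrival_pmf(j - (i - 1)) if j >= i - 1 else 0.0

# row for state 2: must match 0, 0.125, 0.375, 0.375, 0.125, 0, ... above
row2 = [queue_transition(2, j) for j in range(6)]
assert row2 == [0.0, 0.125, 0.375, 0.375, 0.125, 0.0]
```

The binomial probabilities here are dyadic, so the comparison is exact in floating point.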

If the customer arrivals have a common Poisson distribution with parameter $\lambda=0.2$ (per period), then the following gives the transition probabilities.

$\displaystyle P_{ij} = \left\{ \begin{array}{ll} \displaystyle \frac{e^{-0.2} \ 0.2^j}{j!} &\ \ \ \ \ \ i=0 \text{ and } j \in \left\{0,1,2,\cdots \right\} \\ \text{ } & \text{ } \\ \displaystyle \frac{e^{-0.2} \ 0.2^{j-(i-1)}}{[j-(i-1)]!} &\ \ \ \ \ \ i>0 \text{ and } j \in \left\{i-1,i,i+1,\cdots \right\} \\ \end{array} \right.$

Example 6 – Discrete Branching Chain

Suppose that we have a system of particles that can generate new particles of the same type, e.g. particles such as neutrons and bacteria.

In such a system, we assume that a particle is replaced by a random number of new particles (regarded as offspring of the original particle). Furthermore, we assume that each particle generates offspring independently and that offspring generation follows the same distribution for every particle. Let $Y$ be a random variable with this common offspring distribution. Let $P[Y=k]$ be the common probability function of the number of offspring of a particle.

For $n \ge 0$, let $X_n$ be the number of particles in the $n$th generation in the system. It follows from the assumptions that, given $X_n=i$,

$\displaystyle X_{n+1}=\sum \limits_{k=1}^{i} Y_k$

where $Y_1,Y_2, \cdots, Y_i$ are independent random variables with common probability function $P[Y=k]$. Then $\left\{X_n: n \ge 0 \right\}$ is a Markov chain with the state space the set of all non-negative integers. The one-step transition probabilities $P_{ij}$ are

$\displaystyle P_{ij}=P \biggl[\sum \limits_{k=1}^{i} Y_k = j \biggr]$

In other words, the transition probability $P_{ij}$ is the probability that the independent sum $Y_1+\cdots+Y_i$ takes on the value of the next state $j$.

Note that state 0 is an absorbing state, which means extinction. An interesting problem in branching chains is the computation of the probability of extinction. One approach is to compute the probability of extinction for a branching chain with a single particle, i.e. the probability $\rho$ of starting in state 1 and being absorbed into state 0. The probability of extinction for a branching chain starting with $k$ particles is then $\rho^k$, since the particles are assumed to generate new particles independently.
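A standard way to compute $\rho$ numerically uses the probability generating function $g(s)=E[s^Y]$ of the offspring distribution: $\rho$ is the smallest non-negative fixed point of $g$, and iterating $s \mapsto g(s)$ from $s=0$ converges to it. A Python sketch (names are my own), using an example offspring distribution chosen so that $\rho$ can be checked by hand:

```python
def extinction_probability(pmf, tol=1e-12, max_iter=10000):
    """Smallest fixed point of the offspring pgf g(s) = sum_k pmf[k] * s^k,
    found by iterating s <- g(s) starting from s = 0."""
    def g(s):
        return sum(prob * s**k for k, prob in enumerate(pmf))
    s = 0.0
    for _ in range(max_iter):
        s_next = g(s)
        if abs(s_next - s) < tol:
            return s_next
        s = s_next
    return s

# offspring distribution P[Y=0]=0.25, P[Y=1]=0.25, P[Y=2]=0.5:
# rho solves rho = 0.25 + 0.25*rho + 0.5*rho^2, whose smallest root is 1/2
rho = extinction_probability([0.25, 0.25, 0.5])
assert abs(rho - 0.5) < 1e-6
```

Since the mean offspring number here is $1.25 > 1$, extinction is not certain and $\rho < 1$, consistent with the computed value.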


$\copyright$ 2017 – Dan Ma