This post presents several interesting examples of Markov chain models. Previous posts on Markov chains include an introduction and two posts on working with transition probabilities (here and here).

**Examples**

The following gives several examples of Markov chains, discussed at an introductory level. They will be further developed in subsequent posts.

*Example 1 – The Ehrenfest Chain*

The Ehrenfest chain, named for the physicist Paul Ehrenfest, is a simple, discrete model for the exchange of gas molecules contained in a volume divided into two regions A and B by a permeable membrane. Suppose that the gas has $N$ molecules. At each period of time, a molecule is chosen at random from the $N$ molecules and moved from the region that it is in to the other region.

The model has a simple mathematical description using balls and urns. Suppose that two urns, labeled A and B, together contain $N$ balls, labeled $1, 2, \ldots, N$. At each step (i.e. at each discrete time unit), an integer is selected at random from $\{1, 2, \ldots, N\}$, independently of the past selections. Then the ball labeled by the selected integer is removed from its urn and placed in the other urn. The procedure is repeated indefinitely.

The state of the system at time $n$ is the number of balls in urn A and is denoted by $X_n$. Then $\{X_n: n=0,1,2,\ldots\}$ is a Markov chain on the state space $\{0,1,2,\ldots,N\}$.

When the process is in state 0 (urn A is empty), the next state is 1. When the process is in state $N$ (urn A is full), the next state is $N-1$. When the process is in state $i$ where $0<i<N$, the next state of the process is either $i-1$ or $i+1$. The following gives the one-step transition probabilities:

$$P(0,1)=1 \ \ \ \ \ P(N,N-1)=1$$

$$P(i,i-1)=\frac{i}{N} \ \ \ \ \ P(i,i+1)=\frac{N-i}{N} \ \ \ \ \ 0<i<N$$

When urn A has $i$ balls, there is a probability of $\frac{i}{N}$ that the randomly selected ball is from urn A, in which case urn A loses a ball. On the other hand, there is a probability of $\frac{N-i}{N}$ that the randomly selected ball is from urn B, in which case urn A gains a ball. As an illustration, the following gives the transition probability matrix for $N=4$ balls (the states are $0,1,2,3,4$).

$$P = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ \frac{1}{4} & 0 & \frac{3}{4} & 0 & 0 \\ 0 & \frac{2}{4} & 0 & \frac{2}{4} & 0 \\ 0 & 0 & \frac{3}{4} & 0 & \frac{1}{4} \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}$$
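The transition matrix can also be generated programmatically. The following is a minimal sketch (the function name `ehrenfest_matrix` and the choice $N=4$ are illustrative, not from the original post):

```python
import numpy as np

def ehrenfest_matrix(N):
    """Build the (N+1) x (N+1) transition matrix of the Ehrenfest chain.

    State i is the number of balls in urn A.  From state i the chain
    moves down with probability i/N and up with probability (N-i)/N.
    """
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i > 0:
            P[i, i - 1] = i / N        # the selected ball is in urn A
        if i < N:
            P[i, i + 1] = (N - i) / N  # the selected ball is in urn B
    return P

P = ehrenfest_matrix(4)
print(P)
```

Each row sums to 1, and the two boundary rows move deterministically toward the interior.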

An important problem in the Ehrenfest chain is the long-term, or equilibrium, distribution of $X_n$, the number of balls in urn A. Another interesting problem concerns the changes in the composition of the two regions over time. For example, if all molecules are in one region at the beginning, how long on average will it be before each region has half of the molecules? The theory of Markov chains provides a good method for answering such questions.
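Before the exact tools are developed, the second question can be explored by simulation. The following sketch (the function name and trial count are illustrative assumptions) estimates the mean number of steps for the chain to go from state $N$ (one region holds everything) to state $N/2$:

```python
import random

def time_to_half(N, trials=10000, rng=random.Random(0)):
    """Estimate the mean number of steps for the Ehrenfest chain to go
    from state N (urn A full) to state N/2 (half the balls in each urn)."""
    assert N % 2 == 0
    total = 0
    for _ in range(trials):
        state, steps = N, 0
        while state != N // 2:
            # move down with probability state/N, otherwise move up
            state += -1 if rng.random() < state / N else 1
            steps += 1
        total += steps
    return total / trials

print(time_to_half(10))
```

For $N=10$ the estimate comes out to roughly 9 steps, reflecting the strong drift of the chain toward the balanced state.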

*Example 2 – Random Walk*

The simplest random walk is a Markov chain such that each state is the result of a random one-unit up or down move from the previous state.

In the random walk in Figure 1, each state is one unit above or below the preceding state with equal probability. Instead of a random one-unit up or down move, let's give a more general description of a random walk in which the moves are determined by a predetermined discrete distribution. Let $Y_1, Y_2, Y_3, \ldots$ be integer-valued random variables that are independent and identically distributed. Let $P[Y=y]$ be the common probability function. Let $X_0$ be an integer-valued random variable that is independent of the random variables $Y_n$. The random variable $X_0$ will be the initial position of the random walk. The random variables $Y_n$ are the increments (they are the amounts added to the stochastic process as time increases).

For $n \ge 1$, define $X_n = X_0 + Y_1 + \cdots + Y_n$. Then $\{X_n: n=0,1,2,\ldots\}$ is called a random walk. It is a Markov chain whose state space is the set of all integers. Wherever the process is at any given time, the next step in the process is always a walk in a distance and direction that is dictated by the independent random variables $Y_n$. If the random variables $Y_n$ are independent with $P[Y_n=1]=0.5$ and $P[Y_n=-1]=0.5$, then we revert back to the simple random walk described in Figure 1.

To transition from state $i$ to state $j$, the increment must be $j-i$. Thus the one-step transition probabilities are defined by $P(i,j)=P[Y=j-i]$. Recall that $P[Y=y]$ is the common probability function for the sequence $Y_1, Y_2, Y_3, \ldots$. For a more detailed description, see this previous post.
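A general random walk of this kind is easy to simulate. This sketch (function name and parameter values are illustrative) draws i.i.d. increments from a given discrete distribution and accumulates them:

```python
import random

def random_walk(x0, increments, probs, n_steps, seed=0):
    """Simulate X_n = X_0 + Y_1 + ... + Y_n, where the increments Y_i
    are i.i.d. with P[Y = increments[k]] = probs[k]."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.choices(increments, probs)[0])
    return path

# the simple symmetric random walk: one-unit moves up or down, each w.p. 0.5
path = random_walk(0, [-1, 1], [0.5, 0.5], 10)
print(path)
```

Passing a different increment distribution (e.g. `[-1, 0, 2]` with suitable probabilities) gives the more general walk described above.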

*Example 3 – Gambler’s Ruin*

A gambler’s ruin is a simple random walk that can take on only finitely many states. At each time period, the state is one unit higher or lower than the preceding state as a result of a random bet. More specifically, suppose that a gambler starts out with $d$ units in capital (in dollars or other monetary units) and makes a series of one-unit bets against the house. For the gambler, the probabilities of winning and losing each bet are $p$ and $1-p$, respectively. Whenever the capital reaches zero, the gambler is in ruin and his capital remains zero thereafter. On the other hand, if the capital of the gambler increases to $m$ where $d<m$, then the gambler quits playing. Let $X_n$ be the capital of the gambler after the $n$th bet. Then $\{X_n\}$ is a random walk. The starting state is $X_0=d$. The states 0 and $m$ are absorbing states. The following gives the transition probabilities:

$$P(i,i+1)=p \ \ \ \ \ P(i,i-1)=1-p \ \ \ \ \ 0<i<m$$

$$P(0,0)=1 \ \ \ \ \ P(m,m)=1$$

Whenever the gambler reaches the absorbing state 0, the gambler is in ruin. If the process reaches the absorbing state $m$, the gambler strikes gold. If the initial capital of the gambler is small relative to that of the casino, there is a virtually 100% chance that the gambler will be in ruin even if each bet has even odds (see here for the calculation). One interesting question: on average, how long does it take for the gambler to be in ruin? Such questions will be answered after the necessary tools are built in subsequent posts.
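The ruin probability itself is easy to estimate by Monte Carlo. A minimal sketch (function name and the values $d=2$, $m=10$ are illustrative choices):

```python
import random

def ruin_probability(d, m, p, trials=20000, seed=0):
    """Estimate the probability that a gambler starting with d units is
    ruined (hits 0) before reaching m units, winning each bet w.p. p."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        capital = d
        while 0 < capital < m:
            capital += 1 if rng.random() < p else -1
        ruined += capital == 0
    return ruined / trials

# with even odds the exact ruin probability is 1 - d/m = 0.8 here
print(ruin_probability(d=2, m=10, p=0.5))
```

The estimate should land close to the known closed-form answer $1 - d/m$ for the fair-bet case, which is a useful sanity check on the simulation.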

An alternative interpretation of the gambler’s ruin random walk is this. Assume that two players make a series of one-unit bets against each other and that the total capital between the two players is $m$ units, with the first player having $d$ units in capital. Further suppose that the first player has probability $p$ of winning a bet and the second player has probability $1-p$ of winning a bet. Then the two gamblers play until one of them goes broke (is in ruin). Let $X_n$ be the capital of the first player after the $n$th play. Then $\{X_n\}$ is a Markov chain that is a gambler’s ruin. The following graph shows five simulations of a gambler’s ruin from this previous post.

*Example 4 – Discrete Birth and Death Chain*

In a birth and death chain, the current state $i$ transitions to $i+1$ (a birth), to $i-1$ (a death), or stays at $i$. The state space is either $\{0,1,2,3,\ldots\}$ or $\{0,1,2,\ldots,m\}$. The following gives the one-step transition probabilities:

$$P(i,i-1)=q_i \ \ \ \ \ P(i,i)=r_i \ \ \ \ \ P(i,i+1)=p_i$$

where for each state $i$, $q_i$ (the probability of down), $r_i$ (the probability of same) and $p_i$ (the probability of up) are non-negative real numbers such that $q_i+r_i+p_i=1$. The following gives the transition probability matrix for an illustrative case of $m=5$.

$$P = \begin{pmatrix} r_0 & p_0 & 0 & 0 & 0 & 0 \\ q_1 & r_1 & p_1 & 0 & 0 & 0 \\ 0 & q_2 & r_2 & p_2 & 0 & 0 \\ 0 & 0 & q_3 & r_3 & p_3 & 0 \\ 0 & 0 & 0 & q_4 & r_4 & p_4 \\ 0 & 0 & 0 & 0 & q_5 & r_5 \end{pmatrix}$$

In the above matrix, $q_0=0$ and $p_5=0$. If $r_0=1$, then state 0 is absorbing. If $p_0=1$, then state 0 is reflecting. On the other hand, if $r_5=1$, then state 5 is absorbing. If $q_5=1$, then state 5 is reflecting.
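A birth and death transition matrix can be assembled directly from the three probability sequences. The following sketch uses hypothetical values for a six-state chain with reflecting boundaries (the function name and the specific probabilities are illustrative):

```python
import numpy as np

def birth_death_matrix(p, r, q):
    """Build the transition matrix of a finite birth and death chain from
    the up (p), same (r) and down (q) probabilities of each state."""
    m = len(p)  # number of states
    assert all(abs(pi + ri + qi - 1) < 1e-12 for pi, ri, qi in zip(p, r, q))
    P = np.zeros((m, m))
    for i in range(m):
        P[i, i] = r[i]
        if i > 0:
            P[i, i - 1] = q[i]
        if i + 1 < m:
            P[i, i + 1] = p[i]
    return P

# hypothetical six-state chain, states 0..5, both boundaries reflecting
p = [1.0, 0.4, 0.4, 0.4, 0.4, 0.0]   # up probabilities   (p_5 = 0)
r = [0.0, 0.3, 0.3, 0.3, 0.3, 0.0]   # stay probabilities
q = [0.0, 0.3, 0.3, 0.3, 0.3, 1.0]   # down probabilities (q_0 = 0)
P = birth_death_matrix(p, r, q)
print(P)
```

Setting `r[0] = 1` (with `p[0] = 0`) instead would make state 0 absorbing, matching the distinction between absorbing and reflecting boundaries described above.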

The Ehrenfest chain is an example of a birth and death chain (with the boundary states being reflecting). When the probability of down $q_i$, the probability of same $r_i$ and the probability of up $p_i$ do not vary according to the current state (i.e. they are constant), the resulting birth and death chain is a random walk. Thus the gambler’s ruin chain is also an example of a birth and death chain. The birth and death chain is a suitable model for applications in which the state of the process is the population of a living system.

*Example 5 – Discrete Queueing Markov Chain*

Though continuous-time queueing models are much more realistic, a simple discrete-time queueing model is introduced here in order to illustrate applications of discrete Markov chains. Suppose that time is measured in a convenient time interval such as one minute or some appropriate group of minutes. The model assumes that during one period of time, only one customer is served if at least one customer is present. The model also assumes that a random number of new customers arrive at any given time period and that the numbers of customers arriving in the time periods form an independent identically distributed sequence of random variables.

More specifically, let $Y_1, Y_2, Y_3, \ldots$ be independent and identically distributed random variables that take on non-negative integers. Let $P[Y=k]=a_k$, $k=0,1,2,\ldots$, be the common probability function. The random variable $Y_n$ is the number of customers that arrive into the system during the $n$th period. Let $X_0$ be the number of customers present initially in the system. For $n \ge 1$, let $X_n$ be the number of customers in the system during the $n$th period. If the current state is $X_n$, then the next state is:

$$X_{n+1} = (X_n - 1)^+ + Y_{n+1}$$

where $x^+ = \max(x, 0)$. In essence, the number of customers in the system at any given time period is the number of new customers arriving plus the customers already in the system, less one (if the system is not empty). If the system is empty, the number of customers is simply the number of new customers. It follows that $\{X_n\}$ is a Markov chain with the state space $\{0,1,2,\ldots\}$, the set of all non-negative integers. The one-step transition probabilities are given by:

$$P(0,j)=a_j \ \ \ \ \ j \ge 0$$

$$P(i,j)=a_{j-i+1} \ \ \ \ \ i \ge 1, \ j \ge i-1$$

where $a_k=P[Y=k]$ is the common probability function for the independent numbers of arrivals of customers $Y_1, Y_2, Y_3, \ldots$. If the $Y_n$ have a common binomial distribution with parameters 3 and $p$, then $a_k=\binom{3}{k} p^k (1-p)^{3-k}$ for $k=0,1,2,3$, and the transition probability matrix begins as follows:

$$P = \begin{pmatrix} a_0 & a_1 & a_2 & a_3 & 0 & 0 & \cdots \\ a_0 & a_1 & a_2 & a_3 & 0 & 0 & \cdots \\ 0 & a_0 & a_1 & a_2 & a_3 & 0 & \cdots \\ 0 & 0 & a_0 & a_1 & a_2 & a_3 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
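The rows of this (infinite) matrix can be computed one at a time. A sketch, assuming binomial arrivals with parameters 3 and 0.5 (the value 0.5 and the helper name are illustrative assumptions):

```python
from math import comb

def queue_transition_row(i, a):
    """One row of the queueing chain's transition probabilities:
    from state i, the next state is (i - 1)^+ + Y, with P[Y = k] = a[k]."""
    base = max(i - 1, 0)  # customers remaining after one service completion
    return {base + k: a_k for k, a_k in enumerate(a)}

# assumed arrival distribution: binomial with parameters 3 and p = 0.5
p = 0.5
a = [comb(3, k) * p**k * (1 - p)**(3 - k) for k in range(4)]  # [1/8, 3/8, 3/8, 1/8]

for i in range(4):
    print(i, queue_transition_row(i, a))
```

Note that rows 0 and 1 are identical, as in the matrix above: an empty system and a system with one customer both end the period holding exactly the new arrivals.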

If the customer arrivals have a common Poisson distribution with parameter $\lambda$ (per period), then the transition probabilities are obtained from the probability function $a_k=e^{-\lambda} \frac{\lambda^k}{k!}$, $k=0,1,2,\ldots$.

*Example 6 – Discrete Branching Chain*

Suppose that we have a system of particles that can generate new particles of the same type, e.g. neutrons or bacteria.

In such a system, we assume that a particle is replaced by a random number of new particles (regarded as offspring of the original particle). Furthermore, we assume that each particle generates offspring independently and that offspring generation follows the same distribution for every particle. Let $Y$ be the random variable with the common distribution for offspring generation across all particles in the system. Let $P[Y=k]=a_k$, $k=0,1,2,\ldots$, be the common probability function of the number of offspring of a particle.

For $n \ge 0$, let $X_n$ be the number of particles in the $n$th generation in the system. It follows from the assumptions that

$$X_{n+1}=Y_1+Y_2+\cdots+Y_{X_n}$$

(with $X_{n+1}=0$ whenever $X_n=0$),

where $Y_1, Y_2, Y_3, \ldots$ are independent random variables with the common probability function $P[Y=k]=a_k$. Then $\{X_n\}$ is a Markov chain with the state space the set of all non-negative integers. The one-step transition probabilities are

$$P(i,j)=P[Y_1+Y_2+\cdots+Y_i=j]$$

In other words, the transition probability $P(i,j)$ is defined by the probability that the independent sum $Y_1+Y_2+\cdots+Y_i$ takes on the next state $j$.

Note that state 0 is an absorbing state, which means extinction. An interesting problem in branching chains is the computation of the probability of extinction. One approach is to compute the probability of extinction $\rho$ for a branching chain with a single particle, i.e. the probability of starting in state 1 and being absorbed into state 0. The probability of extinction for a branching chain starting with $k$ particles is then $\rho^k$, since the particles are assumed to generate new particles independently.
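One standard way to compute the single-particle extinction probability is as the smallest fixed point in $[0,1]$ of the offspring probability generating function, found by iterating the function from 0. A sketch under that approach (the function name and the example offspring distribution are illustrative):

```python
def extinction_probability(a, tol=1e-12, max_iter=10000):
    """Extinction probability of a branching chain started from one
    particle: the smallest root in [0, 1] of s = phi(s), where phi is
    the offspring probability generating function.  Iterating phi from
    s = 0 converges to this smallest root."""
    def phi(s):
        return sum(a_k * s**k for k, a_k in enumerate(a))

    s = 0.0
    for _ in range(max_iter):
        s_next = phi(s)
        if abs(s_next - s) < tol:
            return s_next
        s = s_next
    return s

# hypothetical offspring distribution: 0, 1 or 2 offspring w.p. 1/4, 1/4, 1/2
rho = extinction_probability([0.25, 0.25, 0.5])
print(rho)  # mean offspring is 1.25 > 1, so extinction is not certain
```

For this example, solving $0.25 + 0.25s + 0.5s^2 = s$ by hand gives roots $s = 0.5$ and $s = 1$, so the extinction probability is $\rho = 0.5$; starting from $k$ particles it would be $0.5^k$.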

2017 – Dan Ma