A first look at applications of Markov chains

This post presents several interesting examples of Markov chain models. Previous posts on Markov chains are: an introduction and two posts on how to work with transition probabilities (here and here).

Examples

The following gives several examples of Markov chains, discussed at an introductory level. They will be further developed in subsequent posts.

Example 1 – The Ehrenfest Chain

The Ehrenfest chain, named for the physicist Paul Ehrenfest, is a simple, discrete model for the exchange of gas molecules contained in a volume divided into two regions A and B by a permeable membrane. Suppose that the gas has N molecules. At each period of time, a molecule is chosen at random from the set of N molecules and moved from the region that it is in to the other region.

Figure 1 – The Ehrenfest Chain

The model has a simple mathematical description using balls and urns. Suppose that two urns, labeled A and B, together contain N balls, labeled 1, 2, 3, \cdots, N. At each step (i.e. at each discrete time unit), an integer is selected at random from 1, 2, 3, \cdots, N, independently of past selections. Then the ball labeled by the selected integer is removed from its urn and placed in the other urn. The procedure is repeated indefinitely.

The state of the system at time n=0,1,2,\cdots is the number of balls in urn A and is denoted by X_n. Then \left\{X_n: n=0,1,2,\cdots \right\} is a Markov chain on the state space \left\{0,1,\cdots,N \right\}.

When the process is in state 0 (urn A is empty), the next state is 1. When the process is in state N (urn A is full), the next state is N-1. When the process is in state i where 0<i<N, the next state of the process is either i-1 or i+1. The following gives the one-step transition probabilities:

    \displaystyle  P_{ij} = \left\{ \begin{array}{ll}           \displaystyle  1 &\ \ \ \ \ \ i=0 \text{ and } j=1 \\            \text{ } & \text{ } \\           \displaystyle  1 &\ \ \ \ \ \ i=N \text{ and } j=N-1 \\            \text{ } & \text{ } \\          \displaystyle  \frac{i}{N} &\ \ \ \ \ \ 0<i<N \text{ and } j=i-1 \\           \text{ } & \text{ } \\          \displaystyle  1-\frac{i}{N} &\ \ \ \ \ \ 0<i<N \text{ and } j=i+1 \\           \text{ } & \text{ } \\           0 &\ \ \ \ \ \ \text{otherwise}           \end{array} \right.

When urn A has i balls, there is a probability of \frac{i}{N} that the randomly selected ball is from urn A, in which case urn A loses a ball. On the other hand, there is a probability of 1-\frac{i}{N} that the randomly selected ball is from urn B, in which case urn A gains a ball. As an illustration, the following gives the transition probability matrix for N=5 balls.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & 5 \cr        0 & 0 & 1 & 0 & 0 & 0 & 0 \cr        1 & \frac{1}{5} & 0 & \frac{4}{5} & 0 & 0 & 0 \cr        2 & 0 & \frac{2}{5} & 0 & \frac{3}{5} & 0 & 0 \cr        3 & 0 & 0 & \frac{3}{5} & 0 & \frac{2}{5} & 0 \cr        4 & 0 & 0 & 0 & \frac{4}{5} & 0 & \frac{1}{5} \cr        5 & 0 & 0 & 0 & 0 & 1 & 0 \cr   } \qquad
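As a sketch (the function name is my own, not from the post), the transition matrix for any N can be generated directly from the rule above; note that the boundary rows for states 0 and N fall out of the same formulas.

```python
import numpy as np

def ehrenfest_matrix(N):
    """Transition matrix of the Ehrenfest chain on states 0, 1, ..., N,
    built from the transition rule described above (illustrative sketch)."""
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i > 0:
            P[i, i - 1] = i / N      # the selected ball is in urn A, so A loses a ball
        if i < N:
            P[i, i + 1] = 1 - i / N  # the selected ball is in urn B, so A gains a ball
    return P

P = ehrenfest_matrix(5)
print(P[1])  # row for state 1: probability 1/5 to state 0, 4/5 to state 2
```

Row 1 of the output matches row 1 of the matrix above, and each row sums to 1 as a transition matrix must.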

An important problem in the Ehrenfest chain is the long-term, or equilibrium, distribution of X_n, the number of balls in urn A. Another interesting problem concerns the changes in the composition of the two regions over time. For example, if all molecules are in one region at the beginning, how long on average will it be before each region has half of the molecules? The theory of Markov chains provides a good method for answering such questions.

Example 2 – Random Walk

The simplest random walk is a Markov chain \left\{X_n: n=0,1,2,\cdots \right\} such that each state is the result of a random one-unit up or down move from the previous state.

Figure 2 – Five Simulations of a Random Walk

In the random walk in Figure 2, each state is one unit above or below the preceding state with equal probability. Instead of a random one-unit up or down move, let’s give a more general description of a random walk in which the moves are determined by a given discrete distribution. Let Y_1, Y_2, Y_3, \cdots be integer-valued random variables that are independent and identically distributed. Let P[Y=y] be the common probability function. Let X_0 be an integer-valued random variable, independent of the random variables Y_n, that serves as the initial position of the random walk. The random variables Y_n are the increments (the amounts added to the stochastic process as time increases).

For n \ge 1, define X_n=X_0+Y_1+\cdots+Y_{n-1}+Y_n. Then \left\{X_n: n=0,1,2,\cdots \right\} is called a random walk. It is a Markov chain whose state space is the set of all integers. Wherever the process is at any given time, the next step is always a move whose distance and direction are dictated by the independent random variables Y_n. If the random variables Y_n take only the values 1 and -1, with P(Y_n=1)=p and P(Y_n=-1)=1-p, then we recover the simple random walk described in Figure 2.

To transition from state i to state j, the increment Y_n must be j-i. Thus the one-step transition probabilities P_{ij} are defined by P_{ij}=P[Y=j-i]. Recall that P[Y=y] is the common probability function for the sequence Y_1, Y_2, Y_3 \cdots. For a more detailed description, see this previous post.
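As an illustrative sketch (not from the original post), the simple ±1 random walk can be simulated in a few lines:

```python
import random

def simple_random_walk(x0, n, p=0.5):
    """Simulate n steps of a simple random walk started at x0, where each
    increment is +1 with probability p and -1 otherwise (illustrative sketch)."""
    path = [x0]
    for _ in range(n):
        increment = 1 if random.random() < p else -1
        path.append(path[-1] + increment)
    return path

random.seed(1)
print(simple_random_walk(0, 10))
```

Replacing the ±1 increment with a draw from any integer-valued distribution P[Y=y] gives the general random walk described above.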

Example 3 – Gambler’s Ruin

A gambler’s ruin is a simple random walk that can take on only finitely many states. At each time period, the state is one unit higher or lower than the preceding state as a result of a random bet. More specifically, suppose that a gambler starts out with d units in capital (in dollars or other monetary units) and makes a series of one-unit bets against the house. For the gambler, the probabilities of winning and losing each bet are p and 1-p, respectively. Whenever the capital reaches zero, the gambler is in ruin and his capital remains zero thereafter. On the other hand, if the capital of the gambler increases to m where d<m, then the gambler quits playing. Let X_n be the capital of the gambler after the nth bet. Then \left\{X_n: n=0,1,2,3,\cdots \right\} is a random walk. The starting state is X_0=d. The states 0 and m are absorbing states. The following gives the transition probabilities.

    \displaystyle  P_{ij} = \left\{ \begin{array}{ll}           \displaystyle  p &\ \ \ \ \ \ j=i+1,i \ne 0, i \ne m \\            \text{ } & \text{ } \\          \displaystyle  1-p &\ \ \ \ \ \ j=i-1,i \ne 0, i \ne m \\           \text{ } & \text{ } \\           \displaystyle  1 &\ \ \ \ \ \ i=0,j=0 \\           \text{ } & \text{ } \\           \displaystyle  1 &\ \ \ \ \ \ i=m,j=m \\           \text{ } & \text{ } \\           0 &\ \ \ \ \ \ \text{otherwise}           \end{array} \right.

Whenever the gambler reaches the absorbing state of 0, the gambler is in ruin. If the process reaches the absorbing state of m, the gambler strikes gold. If the initial capital of the gambler is small relative to the casino, there is a virtually 100% chance that the gambler will be in ruin even if each bet has even odds (see here for the calculation). One interesting question: on average, how long does it take for the gambler to be in ruin? Such questions will be answered after the necessary tools are built in subsequent posts.

An alternative interpretation of the gambler’s ruin random walk is this. Assume that two players make a series of one-unit bets against each other and that the total capital between the two players is m units with the first player having d units in capital. Further suppose that the first player has probability p of winning a bet and the second player has probability 1-p of winning a bet. Then the two gamblers play until one of them goes broke (is in ruin). Let X_n be the capital of the first player after the nth play. Then \left\{X_n: n=0,1,2,3,\cdots \right\} is a Markov chain that is a gambler’s ruin. The following graph shows five simulations of a gambler’s ruin from this previous post.

Figure 3 – Five Simulations of a Gambler’s Ruin
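The ruin probability can also be estimated by direct simulation. A Monte Carlo sketch (function name is my own); for even odds (p = 1/2) the exact ruin probability starting from d units with target m is the well-known 1 - d/m, which the estimate should approach:

```python
import random

def ruin_probability(d, m, p=0.5, trials=100_000):
    """Estimate by simulation the probability that a gambler starting
    with d units goes broke before reaching m units (illustrative sketch)."""
    ruined = 0
    for _ in range(trials):
        x = d
        while 0 < x < m:
            x += 1 if random.random() < p else -1  # one-unit bet
        ruined += (x == 0)
    return ruined / trials

random.seed(0)
print(ruin_probability(3, 10))  # should be close to 1 - 3/10 = 0.7 for even odds
```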

Example 4 – Discrete Birth and Death Chain

In a birth and death chain, the process moves from the current state i to i+1 (a birth), to i-1 (a death), or stays at i. The state space is either \left\{0,1,2,3,\cdots \right\} or \left\{0,1,2,3,\cdots, m \right\}. The following gives the one-step transition probabilities.

    \displaystyle  P_{ij} = \left\{ \begin{array}{ll}           \displaystyle  q_i &\ \ \ \ \ \ j=i-1 \\            \text{ } & \text{ } \\          \displaystyle  r_i &\ \ \ \ \ \ j=i \\           \text{ } & \text{ } \\           \displaystyle  p_i &\ \ \ \ \ \ j=i+1 \\           \text{ } & \text{ } \\                    0 &\ \ \ \ \ \ \text{otherwise}           \end{array} \right.

where for each state i, q_i (the probability of down), r_i (the probability of same) and p_i (the probability of up) are non-negative real numbers such that q_i+r_i+p_i=1. The following gives the transition probability matrix for an illustrative case of m=5.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & 5 \cr        0 & r_0 & p_0 & 0 & 0 & 0 & 0 \cr        1 & q_1 & r_1 & p_1 & 0 & 0 & 0 \cr        2 & 0 & q_2 & r_2 & p_2 & 0 & 0 \cr        3 & 0 & 0 & q_3 & r_3 & p_3 & 0 \cr        4 & 0 & 0 & 0 & q_4 & r_4 & p_4 \cr        5 & 0 & 0 & 0 & 0 & q_5 & r_5 \cr   } \qquad

In the above matrix, q_0=0 and p_5=0. If r_0=1, then state 0 is absorbing. If p_0=1, then state 0 is reflecting. On the other hand, if r_5=1, then state 5 is absorbing. If q_5=1, then state 5 is reflecting.

The Ehrenfest chain is an example of a birth and death chain (with the boundary states being reflecting). When the probability of down q_i, the probability of same r_i and the probability of up p_i do not vary with the current state (i.e. they are constant), the resulting birth and death chain is a random walk. Thus the gambler’s ruin chain is also an example of a birth and death chain. The birth and death chain is a suitable model for applications in which the state of the process is the population of a living system.
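Since the Ehrenfest chain is the special case q_i = i/N, r_i = 0, p_i = 1 - i/N, a short sketch (names are illustrative) can build any finite birth and death matrix and recover the Ehrenfest matrix above:

```python
import numpy as np

def birth_death_matrix(p, q, r):
    """Tridiagonal transition matrix from the up/down/stay probabilities
    p_i, q_i, r_i for states 0, 1, ..., m (illustrative sketch).
    Requires q[i] + r[i] + p[i] == 1 for each state i, with q[0] = p[m] = 0."""
    m = len(p) - 1
    P = np.zeros((m + 1, m + 1))
    for i in range(m + 1):
        if i > 0:
            P[i, i - 1] = q[i]  # a death
        P[i, i] = r[i]          # no change
        if i < m:
            P[i, i + 1] = p[i]  # a birth
    return P

# The Ehrenfest chain with N = 5, expressed as a birth and death chain
N = 5
P = birth_death_matrix(p=[1 - i / N for i in range(N + 1)],
                       q=[i / N for i in range(N + 1)],
                       r=[0] * (N + 1))
print(P[2])  # matches row 2 of the Ehrenfest matrix: 0, 2/5, 0, 3/5, 0, 0
```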

Example 5 – Discrete Queueing Markov Chain

Though continuous-time queueing models are much more realistic, a simple discrete-time queueing model is introduced here in order to illustrate applications of discrete Markov chains. Suppose that time is measured in a convenient time interval such as one minute or some appropriate group of minutes. The model assumes that during one period of time, only one customer is served if at least one customer is present. The model also assumes that a random number of new customers arrive at any given time period and that the numbers of customers arriving in the time periods form an independent identically distributed sequence of random variables.

More specifically, let Y_1,Y_2,Y_3,\cdots be independent and identically distributed random variables that take on non-negative integer values. Let P[Y=k], k=0,1,2,\cdots, be the common probability function. The random variable Y_n is the number of customers that arrive into the system during the nth period. Let X_0 be the number of customers present in the system initially. For n \ge 1, let X_n be the number of customers in the system during the nth period. If the current state is X_n, then the next state is:

    \displaystyle  X_{n+1} = \left\{ \begin{array}{ll}           \displaystyle  Y_{n+1} &\ \ \ \ \ \ X_n=0 \\            \text{ } & \text{ } \\          \displaystyle  (X_n-1)+Y_{n+1} &\ \ \ \ \ \ X_n>0 \\                      \end{array} \right.

where n \ge 0. In essence, the number of customers in the system at any given time period is the number of new customers arriving plus the customers already in the system less one (if the system is not empty). If the system is empty, the number of customers is simply the number of new customers. It follows that \left\{X_n: n \ge 0 \right\} is a Markov chain with the state space \left\{0,1,2,\cdots \right\}, the set of all non-negative integers. The one-step transition probabilities P_{ij} are given by:

    \displaystyle  P_{ij} = \left\{ \begin{array}{ll}           \displaystyle  P[Y=j] &\ \ \ \ \ \ i=0 \text{ and } j \in \left\{0,1,2,\cdots \right\} \\            \text{ } & \text{ } \\          \displaystyle  P[Y=j-(i-1)] &\ \ \ \ \ \ i>0 \text{ and } j \in \left\{i-1,i,i+1,\cdots \right\} \\                      \end{array} \right.

where P[Y=k] is the common probability function for the independent numbers of arrivals of customers Y_1,Y_2,Y_3,\cdots. If the Y_n have a common binomial distribution with parameters 3 and p=0.5, then the following gives the transition probability matrix.

    \displaystyle \mathbf{P} =       \bordermatrix{ & 0 & \text{ } & 1 & \text{ } & 2 & \text{ } & 3 & \text{ } & 4 & \text{ } & 5 & \text{ } & 6 & \text{ } & 7 & \text{ } & 8   & \cdots \cr        0 & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 &  \cdots \cr                1 & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 &    \cdots \cr          2 & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0    & \cdots \cr          3 & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0   & \cdots \cr          4 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0 & \text{ } & 0    & \cdots \cr          5 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125 & \text{ } & 0  & \cdots \cr          6 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375 & \text{ } & 0.125   & \cdots \cr         7 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375 & \text{ } & 0.375    & \cdots \cr        8 & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0 & \text{ } & 0.125 & \text{ } & 0.375    & \cdots \cr       \vdots & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots & \text{ } & \vdots    & \text{ } \cr       } \qquad
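Each row of this matrix is just the binomial(3, 0.5) probability function shifted by the number of customers carried over from the previous period. A sketch (function names are my own) reproduces any row:

```python
from math import comb

def arrival_pmf(k, n=3, p=0.5):
    """Binomial(3, 0.5) probability of k arrivals in one period."""
    return comb(n, k) * p**k * (1 - p)**(n - k) if 0 <= k <= n else 0.0

def queue_row(i, num_cols=9):
    """Row i of the queueing chain's transition matrix, truncated to num_cols columns."""
    if i == 0:
        return [arrival_pmf(j) for j in range(num_cols)]          # P_{0j} = P[Y = j]
    return [arrival_pmf(j - (i - 1)) for j in range(num_cols)]    # P_{ij} = P[Y = j - (i-1)]

print(queue_row(2))  # matches row 2 above: 0, 0.125, 0.375, 0.375, 0.125, 0, ...
```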

If the customer arrivals have a common Poisson distribution with parameter \lambda=0.2 (per period), then the following gives the transition probabilities.

    \displaystyle  P_{ij} = \left\{ \begin{array}{ll}           \displaystyle  \frac{e^{-0.2} \ 0.2^j}{j!} &\ \ \ \ \ \ i=0 \text{ and } j \in \left\{0,1,2,\cdots \right\} \\            \text{ } & \text{ } \\          \displaystyle  \frac{e^{-0.2} \ 0.2^{j-(i-1)}}{[j-(i-1)]!} &\ \ \ \ \ \ i>0 \text{ and } j \in \left\{i-1,i,i+1,\cdots \right\} \\                      \end{array} \right.

Example 6 – Discrete Branching Chain

Suppose that we have a system of particles that can generate new particles of the same type, e.g. neutrons or bacteria.

In such a system, we assume that a particle is replaced by a random number of new particles (regarded as offspring of the original particle). Furthermore, we assume that each particle generates offspring independently of the others and according to the same distribution. Let Y be a random variable having this common offspring distribution, and let P[Y=k] be its probability function, i.e. the probability that a particle has k offspring.

For n \ge 0, let X_n be the number of particles in the nth generation of the system. It follows from the assumptions that, given X_n=i,

    \displaystyle X_{n+1}=\sum \limits_{k=1}^{i} Y_k

where Y_1,Y_2, \cdots, Y_i are independent random variables with common probability function P[Y=k]. Then \left\{X_n: n \ge 0 \right\} is a Markov chain with the state space the set of all non-negative integers. The one-step transition probabilities P_{ij} are

    \displaystyle P_{ij}=P \biggl[\sum \limits_{k=1}^{i} Y_k = j \biggr]

In other words, the transition probability P_{ij} is the probability that the independent sum Y_1+\cdots+Y_i takes on the value j, the next state.

Note that state 0 is an absorbing state, which means extinction. An interesting problem in branching chains is the computation of the probability of extinction. One approach is to compute the probability of extinction for a branching chain with a single particle, i.e. the probability \rho of starting in state 1 and being absorbed into state 0. The probability of extinction for a branching chain starting with k particles is then \rho^k, since the particles are assumed to generate new particles independently.
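A standard way to compute \rho (not carried out in this post) uses the probability generating function \varphi(s) of the offspring distribution: \rho is the smallest non-negative root of \varphi(s)=s, and iterating s \mapsto \varphi(s) from s=0 converges to it. A sketch with a hypothetical offspring distribution:

```python
def extinction_prob(pmf, iterations=200):
    """Smallest non-negative fixed point of the offspring pgf
    phi(s) = sum_k pmf[k] * s**k, found by iterating s -> phi(s) from s = 0."""
    s = 0.0
    for _ in range(iterations):
        s = sum(prob * s**k for k, prob in enumerate(pmf))
    return s

# Hypothetical offspring distribution: 0, 1 or 2 offspring with probabilities 1/4, 1/4, 1/2
rho = extinction_prob([0.25, 0.25, 0.5])
print(rho)  # approaches 0.5, the smaller root of 0.25 + 0.25 s + 0.5 s^2 = s
```

Starting with k particles, the extinction probability is then rho**k, as noted above.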


\copyright 2017 – Dan Ma


A beginning look at transition probabilities

The previous post introduces the notion of Markov chains, more specifically discrete Markov chains. This and the next post focus on calculating transition probabilities.

Suppose that \left\{X_n: n \ge 0 \right\} is a Markov chain with transition probability matrix \mathbf{P}. Elements of \mathbf{P} are the one-step transition probabilities P_{ij}. As the Markov process moves through the states over time, the probabilities in the matrix \mathbf{P} show how likely the process is to transition to state j in the next time period if it is currently in state i. Elements of \mathbf{P} are conditional probabilities:

    (1) \ \ \ \ \ P_{ij}=P[X_{n+1}=j \lvert X_n=i]

The transition probabilities described in (1) are independent of the time period n. These are called stationary transition probabilities. Markov chains with stationary transition probabilities are called stationary Markov chains or time-homogeneous Markov chains. One immediate goal is to develop the n-step transition probabilities P_{ij}^n from the one-step transition probabilities P_{ij}; this is done in the next post. This post focuses on basic calculations with P_{ij}.

Examples of Conditional Probabilities

We use examples to demonstrate conditional probabilities that can be calculated from the one-step transition probabilities P_{ij}. The calculation is based on the following transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 \cr        0 & 0.6 & 0.2 & 0.2   \cr        1 & 0.3 & 0.5 & 0.2  \cr        2 & 0.4 & 0.3 & 0.3   \cr           } \qquad

Example 1
Determine the following conditional probabilities:

  • P[X_1=0, X_2=1 \lvert X_0=0]
  • P[X_2=0, X_3=1 \lvert X_1=0]
  • P[X_1=2, X_2=1,X_3=0 \lvert X_0=1]

Note that the first two are identical since the transition probabilities are stationary. To calculate the first and third probabilities, it is a matter of linking up the one-step transition probabilities from one state to the next.

    \displaystyle \begin{aligned} P[X_1=0, X_2=1 \lvert X_0=0]&=P[X_1=0 \lvert X_0=0] \times P[X_2=1 \lvert X_1=0] \\&=P_{00} \times P_{01} \\&=0.6 \times 0.2=0.12 \end{aligned}

    \displaystyle P[X_2=0, X_3=1 \lvert X_1=0]=P[X_1=0, X_2=1 \lvert X_0=0]=0.12

    \displaystyle \begin{aligned} P[X_1=2, X_2=1,X_3=0 \lvert X_0=1]&=P[X_1=2 \lvert X_0=1] \times P[X_2=1 \lvert X_1=2] \times P[X_3=0 \lvert X_2=1] \\&=P_{12} \times P_{21} \times P_{10} \\&=0.2 \times 0.3 \times 0.3=0.018 \end{aligned}
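These products of matrix entries are easy to check mechanically. A small sketch (function name is my own) using the matrix above:

```python
P = [[0.6, 0.2, 0.2],
     [0.3, 0.5, 0.2],
     [0.4, 0.3, 0.3]]

def conditional_path_prob(path, P):
    """P[X_1 = path[1], ..., X_n = path[n] | X_0 = path[0]]: the product
    of the one-step transition probabilities along the path."""
    prob = 1.0
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]
    return prob

print(conditional_path_prob([0, 0, 1], P))     # 0.6 * 0.2 = 0.12
print(conditional_path_prob([1, 2, 1, 0], P))  # 0.2 * 0.3 * 0.3 = 0.018
```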

Example 2
Example 1 shows that the probability of a chain moving along a path, conditional on a starting state, is simply the product of the transition probabilities on that path. We now calculate the unconditional probability of a path. For this to be possible, we need to assume a distribution for the initial state.

Assume that P[X_0=0]=0.4, P[X_0=1]=0.3 and P[X_0=2]=0.3. Determine the following unconditional probabilities:

  • P[X_0=0,X_1=0,X_2=1]
  • P[X_1=0,X_2=0,X_3=1]
  • P[X_0=1,X_1=2, X_2=1,X_3=0]

To calculate these unconditional probabilities, we multiply the transition probabilities on the path along with the probability of the first state in the path.

    \displaystyle \begin{aligned} P[X_0=0,X_1=0,X_2=1]&=P[X_0=0] \times P[X_1=0 \lvert X_0=0] \times P[X_2=1 \lvert X_1=0] \\&=P[X_0=0] \times P_{00} \times P_{01} \\&=0.4 \times 0.6 \times 0.2=0.048 \end{aligned}

To calculate the second probability, we need to first compute P[X_1=0]. This can be done by conditioning on the initial state X_0.

    \displaystyle \begin{aligned} P[X_1=0]&=P[X_0=0] \times P_{00}+ P[X_0=1] \times P_{10} +P[X_0=2] \times P_{20} \\&=0.4 \times 0.6 +0.3 \times 0.3+0.3 \times 0.4=0.45 \end{aligned}

    \displaystyle \begin{aligned}  P[X_1=0,X_2=0,X_3=1]&=P[X_1=0] \times P[X_2=0 \lvert X_1=0] \times  P[X_3=1 \lvert X_2=0]\\&=P[X_1=0] \times P_{00} \times P_{01} \\&=0.45 \times 0.6 \times 0.2 \\&=0.054  \end{aligned}

The following gives the third probability.

    \displaystyle \begin{aligned} P[X_0=1,X_1=2, X_2=1,X_3=0]&=P[X_0=1] \times P[X_1=2 \lvert X_0=1] \times P[X_2=1 \lvert X_1=2] \times P[X_3=0 \lvert X_2=1] \\&=P[X_0=1] \times P_{12} \times P_{21} \times P_{10} \\&=0.3 \times 0.2 \times 0.3 \times 0.3 \\&=0.0054 \end{aligned}

Note that each unconditional probability in this example is obtained by multiplying the corresponding conditional probability in Example 1 with the probability of the conditioned event.
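The pattern noted here (initial probability times the transition probabilities along the path) can be checked with a short sketch (function name is my own):

```python
P = [[0.6, 0.2, 0.2],
     [0.3, 0.5, 0.2],
     [0.4, 0.3, 0.3]]
init = [0.4, 0.3, 0.3]  # P[X_0 = 0], P[X_0 = 1], P[X_0 = 2]

def path_prob(path, P, init):
    """P[X_0 = path[0], ..., X_n = path[n]]: the probability of the
    initial state times the transition probabilities along the path."""
    prob = init[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]
    return prob

print(path_prob([0, 0, 1], P, init))  # 0.4 * 0.6 * 0.2 = 0.048

# P[X_1 = 0] by conditioning on the initial state X_0
print(sum(init[k] * P[k][0] for k in range(3)))  # 0.45
```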

Example 3
Sometimes we know for certain that the Markov chain starts in a given state. Then the unconditional probability of a path is simply the conditional probability of that path (conditioned on the initial state). Determine the three probabilities in Example 2 given that the chain starts in state 0 in the first two probabilities and the chain starts in state 1 in the third probability. The probabilities are:

  • P[X_0=0,X_1=0,X_2=1]
  • P[X_1=0,X_2=0,X_3=1]
  • P[X_0=1,X_1=2, X_2=1,X_3=0]

For the first probability, P[X_0=0,X_1=0,X_2=1]=P[X_0=0] \times P[X_1=0,X_2=1 \lvert X_0=0]. With P[X_0=0]=1, P[X_0=0,X_1=0,X_2=1]=P[X_1=0,X_2=1 \lvert X_0=0]=0.12, the answer from Example 1.

For the second probability, we need to first calculate P[X_1=0]. Since P[X_0=0]=1, P[X_1=0]=P[X_0=0] \times P_{00}=0.6. Thus P[X_1=0,X_2=0,X_3=1]=P[X_1=0] \times P_{00} \times P_{01}=0.6 \times 0.6 \times 0.2=0.072, which is 0.6 times the corresponding conditional probability in Example 1.

Similarly, the third probability is

    \displaystyle \begin{aligned} P[X_0=1,X_1=2, X_2=1,X_3=0]&=P[X_1=2, X_2=1,X_3=0 \lvert X_0=1] \\& =0.018\end{aligned}

Conditional Probabilities of Paths in Markov Chains

The conditional probabilities in the above examples are not difficult to compute as long as the basic idea of conditional probability and the Markov property are understood. First, consider conditional probability. The idea is that the probability of the intersection of two events can be obtained by conditioning on one of the events. For example, given two events A and B, we have

    (2a) \ \ \ \ \ P[A \cap B]=P[A] \times P[B \lvert A]

    \displaystyle (2b) \ \ \ \ \ P[B \lvert A]=\frac{P[A \cap B]}{P[A]}

The probability of the intersection of two events is the product of a conditional probability of one event given another event times the probability of the event being conditioned. Relation (2b) is the probability of the event B given that event A has occurred. Of course, the conditioned event has to have positive probability, i.e. P[A]>0.

Suppose that the event A is X=x, the discrete random variable X taking on the value x and that the event B is Y=y. Then we have the following relations.

    (3a) \ \ \ \ \ P[X=x,Y=y]=P[X=x] \times P[Y=y \lvert X=x]

    \displaystyle (3b) \ \ \ \ \ P[Y=y \lvert X=x]=\frac{P[X=x,Y=y]}{P[X=x]}

We can also write the conditional probability of X=x conditioning on the event Y=y. Suppose we have a path in a Markov chain, X_0=x_0,X_1=x_1,X_2=x_2,\cdots,X_n=x_n. This is the event that the chain starts in state x_0 initially, then moves to state x_1 at time 1 and so on, until at time n the chain is in state x_n. Conditioning on the initial state, consider the following conditional probability.

    \displaystyle (4) \ \ \ \ \ P[X_1=x_1,X_2=x_2,\cdots,X_n=x_n \lvert X_0=x_0]=\frac{P[X_0=x_0,X_1=x_1,X_2=x_2,\cdots,X_n=x_n]}{P[X_0=x_0]}

Relation (4) can be further simplified since the random variables X_0,X_1,X_2,\cdots,X_n satisfy the Markov property.

    \displaystyle \begin{aligned} (5) \ \ \ \ \ P[X_1=x_1,X_2=x_2,\cdots,X_n=x_n \lvert X_0=x_0]&=P[X_1=x_1 \lvert X_0=x_0] \times P[X_2=x_2 \lvert X_1=x_1] \\& \ \ \ \times \cdots \times P[X_n=x_n \lvert X_{n-1}=x_{n-1}] \\&=P_{x_0,x_1} \times P_{x_1,x_2} \times \cdots \times P_{x_{n-1},x_n}  \end{aligned}

Relation (5) says that the probability of a path in a Markov chain conditioning on the initial state is simply the product of the transition probabilities along the path. This is derived using the Markov property. Relation (5) can be established by an induction argument on the subscript n, i.e. the length of a path.

Clearly P[X_1=j \lvert X_0=i]=P_{ij}. This is the fundamental assumption in Markov chains. Now consider P[X_1=j,X_2=k \lvert X_0=i].

    \displaystyle \begin{aligned} P[X_1=j,X_2=k \lvert X_0=i]&=\frac{P[X_0=i,X_1=j,X_2=k]}{P[X_0=i]} \\&=\frac{P[X_0=i,X_1=j,X_2=k]}{P[X_0=i,X_1=j]} \times \frac{P[X_0=i,X_1=j]}{P[X_0=i]} \\&=P[X_2=k \lvert X_0=i,X_1=j] \times P[X_1=j \lvert X_0=i] \\&=P[X_2=k \lvert X_1=j] \times P[X_1=j \lvert X_0=i] \\&=P_{jk} \times P_{ij} \\&=P_{ij} \times P_{jk} \end{aligned}

Note that P[X_2=k \lvert X_0=i,X_1=j]=P[X_2=k \lvert X_1=j] in the above derivation. This is the Markov property in action, since the probability of the future state depends only on the state immediately preceding it. Now that relation (5) is true for a path of length 2, the following shows how it is derived for length 3.

    \displaystyle \begin{aligned} P[X_1=j,X_2=k,X_3=l \lvert X_0=i]&=\frac{P[X_0=i,X_1=j,X_2=k,X_3=l]}{P[X_0=i]} \\&=\frac{P[X_0=i,X_1=j,X_2=k,X_3=l]}{P[X_0=i,X_1=j,X_2=k]} \times \frac{P[X_0=i,X_1=j,X_2=k]}{P[X_0=i]} \\&=P[X_3=l \lvert X_0=i,X_1=j,X_2=k] \times P[X_1=j,X_2=k \lvert X_0=i]  \\&=P[X_3=l \lvert X_2=k] \times P[X_1=j,X_2=k \lvert X_0=i] \\&=P_{ij} \times P_{jk} \times P_{kl} \end{aligned}

Note that going from the third line to the fourth line the Markov property is used. Relation (5) for length 2 is used at the end. The inductive proof keeps going and relation (5) is true for any path of arbitrary length.

The conditional probability in (4) by itself may not be special. With the additional assumption of the Markov property, the conditional probability becomes a “chain” of one-step transition probabilities.

Using the idea of (3a), the unconditional probability of a path is:

    \displaystyle \begin{aligned} (6) \ \ \ \ \ P[X_0=x_0,X_1=x_1,X_2=x_2,\cdots,X_n=x_n]&=P[X_0=x_0] \times P_{x_0,x_1} \\& \ \ \times P_{x_1,x_2} \times \cdots \times P_{x_{n-1},x_n}   \end{aligned}

Relation (6) says that the unconditional probability of a path in a Markov chain is simply the conditional probability of the path conditioning on the initial state times the probability of the initial state.

More Examples

The earlier examples demonstrate relation (5) and relation (6). We now work a few more examples.

Example 4
There is a slightly different way to look at the unconditional probability P[X_1=0,X_2=0,X_3=1] in Example 2. The first state in this path is X_1=0. We can condition on the initial state X_0.

    \displaystyle P[X_1=0,X_2=0,X_3=1]=\sum \limits_{k=0}^2 P[X_0=k,X_1=0,X_2=0,X_3=1]

Then each unconditional probability within the summation sign can be computed according to the idea in (6). Note that Example 2 gives the distribution of the initial state.

    \displaystyle \begin{aligned} P[X_1=0,X_2=0,X_3=1]&=P[X_0=0,X_1=0,X_2=0,X_3=1]+ \\& \ \ P[X_0=1,X_1=0,X_2=0,X_3=1]+P[X_0=2,X_1=0,X_2=0,X_3=1] \\&=P[X_0=0] \times P_{00} \times P_{00} \times P_{01} +\\&\ \ P[X_0=1] \times P_{10} \times P_{00} \times P_{01}+P[X_0=2] \times P_{20} \times P_{00} \times P_{01} \\&=0.4 \times 0.6 \times 0.6 \times 0.2+\\&\ \ 0.3 \times 0.3 \times 0.6 \times 0.2 + 0.3 \times 0.4 \times 0.6 \times 0.2\\&=0.054 \end{aligned}

In calculating probabilities in a Markov chain, conditioning on the first step is a useful technique that will be used often in the subsequent posts.

Example 5
Consider a Markov chain with the following transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 \cr        0 & 0.5 & 0.2 & 0.2 & 0.1 \cr        1 & 0.3 & 0.3 & 0.2 & 0.2 \cr        2 & 0.3 & 0.2 & 0.2 & 0.3 \cr        3 & 0 & 0 & 0 & 1   } \qquad

State 3 is called an absorbing state since P_{33}=1. In other words, when the chain reaches state 3, it never leaves. Suppose that initially the Markov chain is in state 0 with probability 0.5, in state 1 with probability 0.3 and in state 2 with probability 0.2. Calculate the probability P[X_1=0,X_2=1,X_3=0,X_4=2].

We can condition on the initial state as in Example 4.

    \displaystyle \begin{aligned} P[X_1=0,X_2=1,X_3=0,X_4=2]&=\sum \limits_{k=0}^2 P[X_0=k,X_1=0,X_2=1,X_3=0,X_4=2] \\&=P[X_0=0,X_1=0,X_2=1,X_3=0,X_4=2]+ \\&\ \ P[X_0=1,X_1=0,X_2=1,X_3=0,X_4=2]+\\&\ \ P[X_0=2,X_1=0,X_2=1,X_3=0,X_4=2] \\&=P[X_0=0] \times P_{00} \times P_{01} \times P_{10} \times P_{02}+ \\& \ \ P[X_0=1] \times P_{10} \times P_{01} \times P_{10} \times P_{02}+\\&\ \ P[X_0=2] \times P_{20} \times P_{01} \times P_{10} \times P_{02} \\&=0.0048\end{aligned}
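The sum over initial states can be verified with a short loop (a sketch using the matrix and initial distribution above):

```python
P = [[0.5, 0.2, 0.2, 0.1],
     [0.3, 0.3, 0.2, 0.2],
     [0.3, 0.2, 0.2, 0.3],
     [0.0, 0.0, 0.0, 1.0]]
init = [0.5, 0.3, 0.2]  # P[X_0 = 0], P[X_0 = 1], P[X_0 = 2]

total = 0.0
for k in range(3):
    path = [k, 0, 1, 0, 2]  # X_0 = k, X_1 = 0, X_2 = 1, X_3 = 0, X_4 = 2
    prob = init[k]
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]  # multiply the transition probabilities along the path
    total += prob

print(round(total, 4))  # 0.0048
```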

Example 6
This is Example 1 in this previous post.

When a binary digit, 0 or 1, is transmitted through a communication system, it passes through several stages. At each stage, there is a probability p that the digit is transmitted in error. Let X_n be the digit that is the output at the nth stage. Then \left\{X_n: n \ge 0 \right\} is a two-state Markov chain with the following transition probability matrix.

    \mathbf{P} = \bordermatrix{     & 0 &  \text{ } & 1 \cr   0 & 1-p & \text{ } & p \cr  \text{ } & \text{ } & \text{ } &\text{ } \cr   1 & p & \text{ } & 1-p \cr  }

Determine the following:

  • If the digit to be sent is 0, determine the probability that no error occurs up to the second stage.
  • If the digit to be sent is 0, determine the probability that a correct message is received at stage 2 through stage 4.

The first probability is P[X_1=0,X_2=0 \lvert X_0=0]. This is P_{00} \times P_{00}=(1-p)^2. To calculate the second probability, we can condition on the state at time 1. We first calculate P[X_1=0] and P[X_1=1]. Since we know for certain that the initial state is 0, P[X_1=0]=P_{00}=1-p and P[X_1=1]=P_{01}=p.

    \displaystyle \begin{aligned} P[X_2=0,X_3=0,X_4=0]&=P[X_1=0,X_2=0,X_3=0,X_4=0]+P[X_1=1,X_2=0,X_3=0,X_4=0] \\&=P[X_2=0,X_3=0,X_4=0 \lvert X_1=0] \times P[X_1=0]+ \\&\ \ P[X_2=0,X_3=0,X_4=0 \lvert X_1=1] \times P[X_1=1] \\&=P_{00} \times P_{00} \times P_{00} \times (1-p)+ \\&\ \ P_{10} \times P_{00} \times P_{00} \times p\\&=(1-p)^4+p^2 \times (1-p)^2 \end{aligned}
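Both answers can be checked numerically, for instance with a hypothetical error rate p = 0.1 (the function name is my own):

```python
def channel_probs(p):
    """The two probabilities of Example 6 as functions of the per-stage error rate p."""
    P = [[1 - p, p],
         [p, 1 - p]]
    # No error up to stage 2, starting from 0: P00 * P00
    no_error = P[0][0] ** 2
    # Correct digit at stages 2 through 4, conditioning on the state at time 1
    correct = (1 - p) * P[0][0] ** 3 + p * P[1][0] * P[0][0] ** 2
    return no_error, correct

no_error, correct = channel_probs(0.1)
print(no_error)  # (1 - 0.1)^2 = 0.81
print(correct)   # 0.9^4 + 0.1^2 * 0.9^2 = 0.6642
```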

Remarks

The calculation in this post involves very basic calculation with one-step transition probabilities, i.e. the elements of the transition probability matrix given in a Markov chain. The n-step transition probabilities are key to the development of the theory of Markov chains. The next post begins the discussion of Chapman-Kolmogorov equations.

Practice problems are in this companion blog.


\copyright 2017 – Dan Ma

Introducing Markov Chains

This post continues the discussion of the previous two posts – the previous post that looks at random walks informally through graphs and the previous post that defines random walks more formally. This post introduces the notion of Markov chains.

Stochastic Process

A stochastic process is a collection of random variables \left\{X_t: t \in T \right\}. Each element of the collection is a random variable X_t. The index t is often regarded as time, so we refer to X_t as the state of the process at time t. A stochastic process is thus a family of random variables that describes the evolution through time of some physical process.

The set T is called the index set of the stochastic process. When T is a countable set, e.g. T=\left\{0,1,2,3,\cdots \right\}, the stochastic process is said to be a discrete-time process. When the index set T is an interval of the real number line, the stochastic process is said to be a continuous-time process. For example, \left\{X_n: n=0,1,2,\cdots \right\} is a discrete-time stochastic process indexed by the non-negative integers whereas \left\{X_t: t \ge 0 \right\} is a continuous-time stochastic process indexed by the non-negative real numbers. The state space of a stochastic process is defined as the set of all possible values that can be taken by the random variables X_t.

In the present discussion, we focus on discrete-time stochastic processes \left\{X_n: n=0,1,2,\cdots \right\} where the state space S is finite or countably infinite. When the state space is finite, the process is said to be a finite-state stochastic process.

We also assume that the state space S consists of integers. Thus S is either \mathbb{N}=\left\{0,1,2,3,\cdots \right\} (the natural numbers) or \mathbb{Z}=\left\{\cdots,-3,-2,-1,0,1,2,3,\cdots \right\} (the integers), or some subset of \mathbb{N} or \mathbb{Z}.

If we know nothing about X_n other than the fact that they are random variables, then there is not much we can say about the stochastic process \left\{X_n: n=0,1,2,\cdots \right\}. To make the discussion more meaningful, it is necessary to impose some additional structure on these random variables.

The simplest structure we can assume is that the random variables X_n are independent. This would be a good model for phenomena that can be regarded as random experiments in which the future states of the process are independent of past and present states. However, in many situations the assumption of independence is unjustified. In many stochastic processes that arise in practice, past and present states can influence the future states.

Markov Chains

We consider stochastic processes that possess the following property: the future state of the process (conditional on both past and present states) depends only upon the present state, not on the past states leading up to the present state. This property is called the Markov property. Any stochastic process that possesses the Markov property is called a Markov chain. More specifically, to satisfy the Markov property, given the present state X_n and the past states X_{n-1},X_{n-2},\cdots,X_2,X_1,X_0, the conditional distribution of the future state X_{n+1} must depend only on the present state X_n. In symbols, the process must satisfy the following requirement:

    \displaystyle (1) \ \ \ \ P[X_{n+1}=j \lvert X_n=i,X_{n-1}=x_{n-1},\cdots,X_0=x_0]=P[X_{n+1}=j \lvert X_n=i]

for all possible states x_0,x_1, \cdots,x_{n-1},i,j and all non-negative integers n. Thus under the Markov property, the conditional distribution of the future state X_{n+1} depends only on the present state X_n; the states preceding the present state have no influence on the future state.

The conditional probabilities P[X_{n+1}=j \lvert X_n=i] in (1) are called transition probabilities of the Markov chain. In the present discussion, we consider only Markov chains in which the transition probabilities P[X_{n+1}=j \lvert X_n=i] are independent of the current period n. Such Markov chains are called time-homogeneous Markov chains or stationary Markov chains. The additional assumption of time-homogeneity means that the probability of transitioning into state j from state i is identical regardless of where the process is in the time scale (at the beginning of the process or later on).

With the transition probability P[X_{n+1}=j \lvert X_n=i] independent of n, we can denote this probability by the states i and j only:

    \displaystyle  (2) \ \ \ \ P_{ij}=P[X_{n+1}=j \lvert X_n=i]

The probability P_{ij} represents the probability that the process, starting in state i, will enter into state j in the next period. Since the process must transition into one of the states in the state space in the next time period, P_{ij}, with i fixed and j varying, must represent a probability distribution. Thus P_{i0}+  P_{i1}+\cdots=1. Here we assume that the state space is the set of all non-negative integers.

The probabilities P_{ij} are called one-step transition probabilities since they give the probabilities of transitioning from one state to another state in one period of time. The one-step transition probabilities are usually expressed in a matrix or a transition diagram. The following shows what a transition probability matrix looks like.

    \mathbf{P}= \left[\begin{array}{ccccc}      P_{0,0} & P_{0,1} & P_{0,2} & \cdots & P_{0,m} \\      P_{1,0} & P_{1,1} & P_{1,2} & \cdots & P_{1,m} \\      P_{2,0} & P_{2,1} & P_{2,2} & \cdots & P_{2,m} \\      \vdots & \vdots & \vdots & \vdots & \vdots \\  P_{m,0} & P_{m,1} & P_{m,2} & \cdots & P_{m,m} \\    \end{array}\right]

The above transition matrix is for a finite-state Markov chain with state space being \left\{0,1,\cdots,m \right\}. The probabilities in each row sum to 1. If the process is currently in state i, the row with the first index as i (in this case the (i+1)st row) will give the probabilities of transitioning into all the states. If the state space is countably infinite, then each row would have infinitely many terms and there would be infinitely many rows.

In many situations, it is helpful to have a column label and row label to specify the states in the Markov chain. For example, a particle moves through the states 0, 1, 2, 3 and 4 according to a Markov chain described by the following transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 & 4 \cr        0 & 0 & 0.6 & 0.2  & 0.1  & 0.1 \cr        1 & 0.5 & 0 & 0.2  & 0.2  & 0.1 \cr        2 & 0.4 & 0.2 & 0  & 0.1  & 0.3 \cr        3 & 0.2 & 0.3 & 0.2  & 0  & 0.3 \cr        4 & 0.6 & 0.2 & 0.1  & 0.1  & 0 \cr   } \qquad

In the above matrix, P_{31}=0.3, which means that when the Markov process is in state 3, it moves to state 1 with probability 0.3. As discussed earlier, the probabilities in a given row sum to 1, which makes sense since the probabilities in a row describe all possible moves when the process is in a given state.

When it is helpful to do so, the transition probability matrices will have a row label and a column label. If the row label and column label are not given, we assume that the rows and columns are listed in ascending order (recall that the states are integers). The transition probability matrices can be displayed using square brackets or parentheses.
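As a small illustration of these conventions, the 5-state matrix above can be stored as a list of rows in Python and checked to be a valid transition matrix (each row sums to 1):

```python
# The transition matrix for the particle example above; row i and
# column j correspond to states i and j (listed in ascending order).
P = [
    [0.0, 0.6, 0.2, 0.1, 0.1],
    [0.5, 0.0, 0.2, 0.2, 0.1],
    [0.4, 0.2, 0.0, 0.1, 0.3],
    [0.2, 0.3, 0.2, 0.0, 0.3],
    [0.6, 0.2, 0.1, 0.1, 0.0],
]

# Each row must be a probability distribution over the next state.
for i, row in enumerate(P):
    assert abs(sum(row) - 1.0) < 1e-9, f"row {i} does not sum to 1"

# P[3][1] is the probability of moving from state 3 to state 1.
print(P[3][1])  # 0.3
```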

Examples

The key to working with Markov chains is the transition probabilities P_{ij}. These probabilities allow us to describe the random transitions through the states over time. The remainder of this post gives several examples of Markov chains focusing on transition probability matrices.

Example 1
When a binary digit, 0 or 1, is transmitted through a communication system, it passes through several stages. At each stage, there is a probability p that the digit is transmitted in error. Let X_n be the digit that is the output at the nth stage. Then \left\{X_n: n \ge 0 \right\} is a two-state Markov chain with the following transition probability matrix.

    \mathbf{P} = \bordermatrix{     & 0 &  \text{ } & 1 \cr   0 & 1-p & \text{ } & p \cr  \text{ } & \text{ } & \text{ } &\text{ } \cr   1 & p & \text{ } & 1-p \cr  }

Some typical problems might be:

  • If the digit to be sent is 0, determine the probability that no error occurs up to the second stage.
  • If the digit to be sent is 0, determine the probability that a correct message is received at stage 2.
  • If the digit to be sent is 0, determine the probability that a correct message is received at stage n where n \ge 1.
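The third question can already be answered for this chain by raising \mathbf{P} to the nth power. For a symmetric two-state chain, the n-step probability P^n_{00} has the well-known closed form 1/2 + (1/2)(1-2p)^n (obtained, e.g., by diagonalizing \mathbf{P}). The following sketch, with a hypothetical error probability, checks repeated matrix multiplication against this closed form:

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(P, n):
    """n-th power of P by repeated multiplication (n >= 1)."""
    result = P
    for _ in range(n - 1):
        result = mat_mult(result, P)
    return result

p = 0.2  # hypothetical error probability
P = [[1 - p, p], [p, 1 - p]]

# Probability that the digit at stage n is correct, given X_0 = 0.
for n in range(1, 6):
    Pn = mat_power(P, n)
    closed = 0.5 + 0.5 * (1 - 2 * p)**n
    assert abs(Pn[0][0] - closed) < 1e-9
```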

Example 2
Let’s consider the example of gambler’s ruin with finitely many states. Suppose that a gambler makes a series of one-unit bets against the house. For the gambler, the probabilities of winning and losing each bet are p and 1-p, respectively. Whenever the capital reaches zero, the gambler is in ruin and his capital remains zero thereafter. On the other hand, if the capital of the gambler increases to m, a large level of capital, the gambler quits playing. Let X_n be the capital of the gambler after the nth bet. Then \left\{ X_n: n=0,1,2,\cdots \right\} is a random walk, as discussed in the previous post. Since the future capital X_{n+1} depends only on the current level of capital X_n (up one unit or down one unit), this is also a Markov chain. The following is the transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & \cdots & m \cr        0 & 1 & 0 & 0 & 0 & 0 & \cdots & 0 \cr        1 & 1-p & 0 & p & 0 & 0 & \cdots & 0 \cr        2 & 0 & 1-p & 0 & p & 0 &\cdots & 0 \cr        3 & 0 & 0 & 1-p & 0 & p &\cdots & 0 \cr        \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \cr        m & 0 & 0 & 0 & 0 & 0 &\cdots & 1    } \qquad

The above is an (m+1) \times (m+1) matrix. The states of the process are 0,1,2,\cdots,m. The states 0 and m are absorbing states since the process stays there once it arrives at either of these two states. Note that P_{00}=1 and P_{mm}=1. State 0 would be the state of ruin. State m would mean the gambler becomes fabulously rich (if m is large).

For the rows in between, the transition probabilities are P_{i,i-1}=1-p and P_{i,i+1}=p. In other words, from state i, the process would go to state i+1 with probability p and would go down to i-1 with probability 1-p. More specifically, if m=5, the following is the transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 0 & 1 & 2 & 3 & 4 & 5 \cr        0 & 1 & 0 & 0  & 0  & 0 & 0 \cr        1 & 1-p & 0 & p  & 0  & 0 & 0 \cr        2 & 0 & 1-p & 0  & p  & 0 & 0 \cr        3 & 0 & 0 & 1-p  & 0  & p & 0 \cr        4 & 0 & 0 & 0  & 1-p  & 0 & p \cr        5 & 0 & 0 & 0  & 0  & 0 & 1   } \qquad

Assume that the gambler has initial capital d, which is positive but small in comparison to m. For example, d=10 and m=1000. Some interesting questions are:

  • If the starting capital is 10, what is the probability that the gambler’s capital will be 5 after 8 plays of the game?
  • What is the probability of ruin for the gambler, i.e. reaching state 0?
  • What is the mean time until reaching the absorbing states (ruin or fabulously rich)?
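Interestingly, the answer to the first question is 0: each bet changes the capital by exactly one unit, so after 8 plays the capital must differ from 10 by an even number, and a net change of -5 is impossible (the absorbing states are not 5 either). The following sketch, assuming a small hypothetical target m=20 and even odds to keep the matrix manageable, confirms this with the 8-step matrix power:

```python
def ruin_matrix(m, p):
    """(m+1)x(m+1) transition matrix for gambler's ruin with target m."""
    P = [[0.0] * (m + 1) for _ in range(m + 1)]
    P[0][0] = 1.0          # ruin is absorbing
    P[m][m] = 1.0          # target is absorbing
    for i in range(1, m):
        P[i][i + 1] = p        # win one unit
        P[i][i - 1] = 1 - p    # lose one unit
    return P

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

m, p = 20, 0.5             # hypothetical small target, even odds
P = ruin_matrix(m, p)
P8 = P
for _ in range(7):
    P8 = mat_mult(P8, P)

# Starting capital 10: probability of holding 5 units after 8 plays.
# Parity makes this impossible, so the entry is exactly zero.
print(P8[10][5])  # 0.0
```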

Example 3 (Restless Mouse)
A maze has eight areas as shown below.

The rat moves through the areas in the maze at random. That is, if an area has exits to w areas, the rat moves to each of these w areas with probability 1/w. Let X_n be the area the rat is located in after the nth move. Then \left\{X_n: n \ge 0 \right\} is a Markov chain with the following transition probability matrix.

    \mathbf{P} =       \bordermatrix{ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \cr        1 & 0 & 1/2 & 0  & 1/2  & 0 & 0 & 0 & 0 \cr        2 & 1/2 & 0 & 1/2  & 0  & 0 & 0 & 0 & 0 \cr        3 & 0 & 1/2 & 0  & 0  & 1/2 & 0 & 0 & 0 \cr        4 & 1/3 & 0 & 0  & 0  & 1/3 & 1/3 & 0 & 0 \cr        5 & 0 & 0 & 1/3  & 1/3  & 0 & 0 & 0 & 1/3 \cr        6 & 0 & 0 & 0  & 1/2  & 0 & 0 & 1/2 & 0 \cr        7 & 0 & 0 & 0  & 0  & 0 & 1/2 & 0 & 1/2 \cr        8 & 0 & 0 & 0  & 0  & 1/2 & 0 & 1/2 & 0   } \qquad

Here are some interesting questions about this Markov chain. Suppose that area 8 contains food and area 6 has a device for an electric shock.

  • Given that the rat is placed in area 1 at the beginning, what is the probability that the rat will reach area 8 in 4 moves, in general in n moves?
  • Given that the rat is placed in area 1 at the beginning, what is the mean number of moves for the rat to reach area 8 or area 6?
  • Given that the rat is placed in area 1 at the beginning, what is the probability that the rat will reach the food area before the electric shock area?
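The probability in the first question can be computed by raising the transition matrix to the 4th power. A sketch using exact fractions, so that no rounding is involved:

```python
from fractions import Fraction as F

# Transition matrix of the maze chain; rows/columns are areas 1..8,
# so entry P[i][j] is the probability of moving from area i+1 to area j+1.
h, t, z = F(1, 2), F(1, 3), F(0)
P = [
    [z, h, z, h, z, z, z, z],
    [h, z, h, z, z, z, z, z],
    [z, h, z, z, h, z, z, z],
    [t, z, z, z, t, t, z, z],
    [z, z, t, t, z, z, z, t],
    [z, z, z, h, z, z, h, z],
    [z, z, z, z, z, h, z, h],
    [z, z, z, z, h, z, h, z],
]

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Four-step transition probabilities: P^4.
P4 = P
for _ in range(3):
    P4 = mat_mult(P4, P)

# Starting in area 1, probability of being in area 8 after 4 moves.
# Only two 4-step paths exist: 1-2-3-5-8 and 1-4-6-7-8, each 1/24.
print(P4[0][7])  # 1/12
```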

Next Steps

The concept of Markov chains has been introduced and illustrated with examples of random walks (in earlier posts) and other examples in this post. All the properties of Markov chains (in this case time-homogeneous Markov chains) are derived from the transition probability matrix. One of the most urgent tasks is to develop a way to calculate the transition probabilities. Recall that the elements of the transition probability matrix \mathbf{P} are the one-step transition probabilities P_{ij}, which give the probability of going from state i into state j. The natural next step is to calculate the n-step transition probabilities P^n_{ij}, the probability that the process will be in state j in the nth period given that the process is initially in state i. The next post is a beginning look at transition probabilities. The post that follows is on the Chapman-Kolmogorov equations, a systematic way of calculating transition probabilities using matrix multiplication.



More on random walks

This post continues the discussion of the previous post on random walks. The previous post looks at random walks graphically. This post introduces the notion of random walks more formally.

Consider \left\{X_n: n=0,1,2,\cdots \right\}, a sequence of random variables indexed by the non-negative integers. By definition this is a stochastic process, to be precise a discrete-time stochastic process since the random variables are indexed by integers. The integers in the subscripts can be interpreted as time. Thus the random variables X_n describe the states of some process over time. In the stochastic processes discussed in the previous post, the future state X_{n+1} is obtained by adding one (with probability p) or subtracting one (with probability 1-p) from the current state X_n. So a given state of the process X_n is one higher or lower than the state in the previous period. The random walks in the previous post go up or down with equal chance, i.e. p=0.5. The following graph is from the previous post, which shows 5 simulations of such random walks.

Figure 1 – Examples of Random Walks in One Dimension

Random Walks – A General Description

We now give a more general description of the random walk. Instead of a random one-unit up or down move, the moves in the random walk are determined by a predetermined discrete distribution. Let Y_1,Y_2,Y_3,\cdots be integer-valued random variables that are independent and identically distributed. Let P[Y=y] be the common probability function. Let X_0 be an integer-valued random variable that is independent of the random variables Y_n. The random variable X_0 will be the initial position of the random walk. The random variables Y_n are the increments (they are the amounts added to the stochastic process as time increases).

For n \ge 1, define X_n=X_0+Y_1+\cdots+Y_{n-1}+Y_n. Then \left\{X_n: n=0,1,2,\cdots \right\} is called a random walk. Wherever the process is at any given time, the next step is always a move whose distance and direction are dictated by the independent random variables Y_n. For convenience, we call the random variables Y_n the increments of the random walk since they tell the process how far (and in what direction) to walk in the next step.

The state space is the set of all the values that can be taken on by the random variables X_n. In this case, the state space is either the set of all integers \mathbb{Z}=\left\{\cdots,-3,-2,-1,0,1,2,3,\cdots \right\} or the set of all non-negative integers \mathbb{N}=\left\{0,1,2,3,\cdots \right\} or some appropriate subset of \mathbb{Z} or \mathbb{N}.

Given the current state of the process X_n, the next state X_{n+1} is obtained by adding the random value Y_{n+1} to the current state X_n. If the random variables Y_n are independent Bernoulli variables with P(Y_n=1)=p and P(Y_n=-1)=1-p, then each state X_n is the result of an up or down move by one unit from the previous state. The examples in the previous post are based on this type of Bernoulli one-unit up and down move. In the general definition, the future state X_{n+1}, instead of being adjusted by a uniform up or down adjustment, is adjusted by the random variable Y_{n+1}.

One important task is to establish the probabilities of the transition from one state to another. For example, if the process is currently in state i, what is the probability that it will be in state j in the next period? More specifically, given that the random walk is in state i, i.e. X_n=i, what is the probability that the process is in state j in the next period, i.e. X_{n+1}=j? Such probabilities are called one-step transition probabilities.

The probabilities that we wish to obtain are P[X_{n+1}=j \lvert X_n=i] for n \ge 0. Recall that P[Y=y] is the common probability function for the random variables Y_n. Given states i,j, define P_{ij}=P[Y=j-i].

When X_n=i, in order for X_{n+1}=j to happen, the increment must be Y_{n+1}=j-i. Note that P[Y_{n+1}=j-i]=P[Y=j-i]=P_{ij}. Observe that this probability is independent of the time period n since the Y_n have a common distribution Y. Thus it is appropriate to use P_{ij} to express this probability, i.e. there is no need to include the time n in the subscript of P_{ij}. For all n \ge 0,

    P[X_{n+1}=j \lvert X_n=i]=P[Y_{n+1}=j-i]=P[Y=j-i]=P_{ij}

The number P_{ij} is the probability of the process transitioning from state i to state j in one step, which is evaluated based on the common probability function of the increments Y_n.
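In code, the one-step transition probabilities of a random walk come directly from the common increment distribution via P_{ij}=P[Y=j-i]. A minimal sketch, with a hypothetical three-point distribution for Y:

```python
def transition_prob(increment_dist, i, j):
    """P_ij = P[Y = j - i]; zero for increments outside the support."""
    return increment_dist.get(j - i, 0.0)

# Hypothetical increment distribution: up 1 w.p. 0.3, down 1 w.p. 0.5,
# stay put w.p. 0.2, stored as a dictionary {y: P[Y = y]}.
Y = {1: 0.3, -1: 0.5, 0: 0.2}

print(transition_prob(Y, 4, 5))   # 0.3  (increment +1)
print(transition_prob(Y, 4, 3))   # 0.5  (increment -1)
print(transition_prob(Y, 4, 7))   # 0.0  (increment +3, not in the support)
```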

Now that we know the probability of the process going from one state to another state in one step, other probability questions are: what is the probability of a path and what is the probability of going from state i to state j in n steps? The probabilities in the second question are called n-step transition probabilities. The probability of a path is discussed here. The n-step transition probabilities are discussed here.

The first question has straightforward answers. For example, if the process starts in state i_0 at time 0, moves to state i_1 at time 1 and so on until reaching state i_n at time n, then the following is the probability of this path:

    \displaystyle \begin{aligned} P[X_0=i_0,X_1=i_1,\cdots,X_n=i_n]&=P[X_0=i_0,Y_1=i_1-i_0,\cdots,Y_n=i_n-i_{n-1}] \\&=P[X_0=i_0] \times P[Y_1=i_1-i_0] \times \cdots \times P[Y_n=i_n-i_{n-1}] \\&=P[X_0=i_0] \times P_{i_0,i_1} \times \cdots \times P_{i_{n-1},i_n}  \end{aligned}

    \displaystyle \begin{aligned} P[X_1=i_1,\cdots,X_n=i_n \lvert X_0=i_0]&=\frac{P[X_0=i_0,X_1=i_1,\cdots,X_n=i_n]}{P[X_0=i_0]} \\&=P_{i_0,i_1} \times \cdots \times P_{i_{n-1},i_n}  \end{aligned}

Conditional on the initial state X_0=i_0, the probability of the process going through the path X_1=i_1,\cdots,X_n=i_n is simply the product of the one-step transition probabilities.

If the path is observed to start at a time period other than time 0, conditional on the first state in the path, the probability of the process going through the path would be:

    \displaystyle P[X_{k+1}=i_1,\cdots,X_{k+n}=i_n \lvert X_k=i_0]=P_{i_0,i_1} \times \cdots \times P_{i_{n-1},i_n}

Note that the transition probabilities P_{ij} and the probabilities of paths are independent of the current period n. Random walks with such property are called time-homogeneous or stationary. Thus the probability of transitioning into state j from state i or the probability of transitioning through a path is identical regardless of where the process is in the time scale (at the beginning in the process or later in the process).
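Conditional on the starting state, the probability of a path is just the product of one-step transition probabilities, and in a random walk each factor is P[Y = j-i]. A minimal sketch for the \pm 1 walk of Figure 1 with p=0.5:

```python
def path_probability(increment_dist, path):
    """Probability of the path i_0 -> i_1 -> ... -> i_n conditional on
    X_0 = i_0: the product of one-step probabilities P[Y = j - i]."""
    prob = 1.0
    for i, j in zip(path, path[1:]):
        prob *= increment_dist.get(j - i, 0.0)
    return prob

# Simple +/-1 walk with p = 0.5 (the walks in Figure 1).
Y = {1: 0.5, -1: 0.5}

# Path 0 -> 1 -> 0 -> 1 -> 2: four steps, each with probability 1/2.
print(path_probability(Y, [0, 1, 0, 1, 2]))  # 0.0625
```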

Specific Examples

As a first special case of the random walks discussed above, let the increments Y_n be defined by a Bernoulli random variable with P[Y=1]=p and P[Y=-1]=1-p. Then the resulting random walk is a series of up and down moves as shown in Figure 1 above. The one-step transition probabilities are:

    \displaystyle  P_{ij} = \left\{ \begin{array}{ll}           \displaystyle  p &\ \ \ \ \ \ j=i+1 \\            \text{ } & \text{ } \\          \displaystyle  1-p &\ \ \ \ \ \ j=i-1 \\           \text{ } & \text{ } \\           0 &\ \ \ \ \ \ \text{otherwise}           \end{array} \right.

If the increments Y_n are defined by the distribution where P[Y=1]=p, P[Y=-1]=q and P[Y=0]=1-p-q, then the following gives the transition probabilities:

    \displaystyle  P_{ij} = \left\{ \begin{array}{ll}           \displaystyle  p &\ \ \ \ \ \ j=i+1 \\            \text{ } & \text{ } \\          \displaystyle  q &\ \ \ \ \ \ j=i-1 \\           \text{ } & \text{ } \\          \displaystyle  1-p-q &\ \ \ \ \ \ j=i \\           \text{ } & \text{ } \\           0 &\ \ \ \ \ \ \text{otherwise}           \end{array} \right.

This random walk has three moves at any given state: an up move by one unit, a down move by one unit and a possibility that the process stays at the same state. Of course, the possibilities for modeling are limitless. We can use other distributions for the common distribution of the increments Y_n, e.g. binomial or Poisson, preferably shifted so that both up moves and down moves are possible.

Finite-State Random Walks

The random walks discussed above potentially have an infinite state space, i.e. the process can potentially reach far from the initial state without bounds. Depending on the common distribution of the increments Y_1,\cdots,Y_n,\cdots, the process can go up indefinitely to +\infty or go down to -\infty. The random walks in Figure 1 have increments 1 and -1 and can reach states that are arbitrarily high or low, though it would take a great many steps to reach such high levels.

Any random walk with unbounded state space can be modified to have a finite state space. Pick two states that serve as boundaries, one lower and one upper. The states in between the boundary states are called interior states. Whenever the process reaches the upper boundary or the lower boundary, the process either stays there without moving any further or transitions back into an interior state. The transition probabilities in the interior states remain the same as discussed above. If a boundary state is such that the process always stays there after reaching it, it is called an absorbing state.

A handy example of a finite-state random walk is the gambler’s ruin. Suppose that a gambler starts out with d units in capital (in dollars or other monetary units) and makes a series of one-unit bets against the house. For the gambler, the probabilities of winning and losing each bet are p and 1-p, respectively. Whenever the capital reaches zero, the gambler is in ruin and his capital remains zero thereafter. On the other hand, if the capital of the gambler increases to m where d<m, then the gambler quits playing. Let X_n be the capital of the gambler after the nth bet. Then \left\{X_n: n=0,1,2,3,\cdots \right\} is a random walk. The starting state is X_0=d. The states 0 and m are absorbing states. The following gives the transition probabilities.

    \displaystyle  P_{ij} = \left\{ \begin{array}{ll}           \displaystyle  p &\ \ \ \ \ \ j=i+1,i \ne 0, i \ne m \\            \text{ } & \text{ } \\          \displaystyle  1-p &\ \ \ \ \ \ j=i-1,i \ne 0, i \ne m \\           \text{ } & \text{ } \\           \displaystyle  1 &\ \ \ \ \ \ i=0,j=0 \\           \text{ } & \text{ } \\           \displaystyle  1 &\ \ \ \ \ \ i=m,j=m \\           \text{ } & \text{ } \\           0 &\ \ \ \ \ \ \text{otherwise}           \end{array} \right.

Whenever the gambler reaches the absorbing state of 0, the gambler is in ruin. If the process reaches the absorbing state of m, the gambler strikes gold. If the initial capital of the gambler is small relative to the casino, there is a virtually 100% chance that the gambler will be in ruin even if each bet has even odds (see here for the calculation). One interesting question: on average how long does it take for the gambler to be in ruin? Such questions will be answered after the necessary tools are built in subsequent posts.
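The mean-time question can already be explored by simulation. A sketch with hypothetical parameters (d=10, m=20, even bets); for a fair game the exact ruin probability is 1-d/m and the exact mean number of bets until absorption is d(m-d), standard facts the simulation should approximate:

```python
import random

def play_until_absorbed(d, m, p, rng):
    """Simulate one gambler's-ruin walk; return (final state, number of bets)."""
    capital, bets = d, 0
    while 0 < capital < m:
        capital += 1 if rng.random() < p else -1
        bets += 1
    return capital, bets

rng = random.Random(0)          # fixed seed for reproducibility
d, m, p, trials = 10, 20, 0.5, 20000

ruins = 0
total_bets = 0
for _ in range(trials):
    final, bets = play_until_absorbed(d, m, p, rng)
    ruins += (final == 0)
    total_bets += bets

# For a fair game: exact ruin probability 1 - d/m = 0.5,
# exact mean time to absorption d*(m - d) = 100.
print(ruins / trials)        # approximately 0.5
print(total_bets / trials)   # approximately 100
```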

Remarks

A random walk is a stochastic process \left\{X_n: n=0,1,2,\cdots \right\} such that at each time point, the process walks a certain distance that is dictated by an independent and identically distributed sequence of random variables. In such a stochastic process, the future state depends only on the preceding state and not on the past states before the preceding state. For example, in the gambler’s ruin, the gambler’s capital at any given point depends only on his capital in the previous play.

In a stochastic process \left\{X_n: n=0,1,2,\cdots \right\}, the property that the future state X_{n+1} depends only on the preceding state X_n and not on the past states X_0,X_1,\cdots,X_{n-1} is called the Markov property. Any stochastic process with the Markov property is called a Markov chain. A random walk is a Markov chain. There are Markov chains that are not random walks. The notion of Markov chains is introduced in this post.


A walk down Random Lane

A stochastic process is a sequence of random variables \left\{X_n: n=0,1,2,3,\cdots \right\}. Since the random variables are indexed by the integers, this is a discrete-time stochastic process. The integers in the subscripts can also be interpreted as time. Suppose X_0 is the random value taken on by the process at time 0. Then X_1 is the random value taken on by the process at time 1 and so on. The notion of stochastic processes will be examined in more detail in subsequent posts. This post looks at plots of some simple random walks, a particular type of stochastic process. The next post gives a more formal definition of random walks.

Random Walk

One type of stochastic processes is a random walk, which can be regarded as a sequence of random steps (up or down for example) on some mathematical spaces such as the integers. For example, assume that X_0 is an integer (the initial position of the walk) and further assume that it is equally likely that X_n is either X_{n-1}-1 or X_{n-1}+1. Then \left\{X_n: n=0,1,2,3,\cdots \right\} is an example of a random walk. A colorful way to describe this random walk is that a drunkard takes steps along a path at random by taking one step to the left or one step to the right with equal chance. For convenience, we can assume that the initial position is X_0=0 (the origin). To demonstrate the random nature of the walk, we simulate such a random walk five times using random numbers generated by the function =Rand() in Excel.

Figure 1 – Examples of Random Walks in One Dimension

All 5 examples of the walk start with X_0=0. One hundred steps are shown for each random walk. Each random move is basically the result of a coin toss. If the toss is a head, the step is to the right, which means adding one to the previous position (in the graph the path goes up by one step). If the coin toss is a tail, the step is to the left, which means subtracting one from the previous position (the graph goes down one step).

It is clear that the positions of the drunkard are initially close to the origin but as time goes by the drunkard may be far from the origin (either to the left or to the right). At the end of the 100 moves, the drunkard ends up about 10 to 12 steps from the origin, on one side or the other.

For each random walk in Figure 1, the random moves to the left or to the right (up or down in the plot) are generated using the function =Rand() in Excel. The following are the first 10 random values for the random walk in red in Figure 1.

    0.762417629
    0.475581918
    0.916720504
    0.455643747
    0.918072861

    0.141459931
    0.871570535
    0.813575225
    0.092492375
    0.823638012

If a random value is less than 0.5, we take that as a tail and if it is more than 0.5, we take that as a head. With X_0=0, the next 10 positions are 1, 0, 1, 0, 1, 0, 1, 2, 1, 2.
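The conversion from the ten random values to positions takes only a few lines of Python, reproducing the positions listed above:

```python
# The ten Excel =Rand() values listed above. A value above 0.5 is a
# head (step +1); a value below 0.5 is a tail (step -1).
values = [0.762417629, 0.475581918, 0.916720504, 0.455643747, 0.918072861,
          0.141459931, 0.871570535, 0.813575225, 0.092492375, 0.823638012]

position = 0                      # X_0 = 0
positions = []
for v in values:
    position += 1 if v > 0.5 else -1
    positions.append(position)

print(positions)  # [1, 0, 1, 0, 1, 0, 1, 2, 1, 2]
```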

The random walks in Figure 1 have no restriction. The random variables X_n can take on both positive and negative integers without bounds. The set of all values that can be taken by the random variables in the random walk is called the state space. So in these examples, the state space is \mathbb{Z}=\left\{\cdots,-3,-2,-1,0,1,2,3,\cdots \right\}, the set of all integers.

The random walks in Figure 1 have finite versions, i.e. versions whose state space is a finite set. For example, suppose the path the drunkard is on has barriers at both ends (10 steps away from the origin at either end). Whenever the drunkard reaches 10 or -10, he stops. For example, his house is at step 10 and a bar is at step -10. Whenever he reaches the house or the bar, he stays there. The following is a graph of the same random walks with the 10-step limitations.

Figure 2 – Examples of Random Walks in One Dimension with Absorbing States

Note that when the process reaches 10 or -10, it stays there from that point on. Such states are called absorbing states. The random walks in Figure 2 have two absorbing states 10 and -10. With minor adjustments, these random walks have a gambling interpretation.

Gambler’s Ruin

Suppose that a gambler starts out with d units in capital (in dollars or other monetary units) and makes a series of one-unit bets against the house. For the gambler, the probabilities of winning and losing each bet are p and 1-p, respectively. Whenever the capital reaches zero, the gambler is in ruin and his capital remains zero thereafter. On the other hand, if the capital of the gambler increases to m where d<m, then the gambler quits playing. Let X_n be the capital of the gambler after the nth bet. Then \left\{X_n: n=0,1,2,3,\cdots \right\} is a random walk. The starting state is X_0=d. The states 0 and m are absorbing states.

Note the similarity between the random walk that describes the gambler’s ruin and the random walks in Figure 2. In gambler’s ruin, the capital X_n is increased or decreased by one unit at random (probability p for an increase and probability 1-p for a decrease). Whenever the process reaches 0, it stays at 0 thereafter (the gambler is in ruin and he stops playing). Similarly, whenever the process reaches m, the process remains at m (the gambler reaches his goals and stops playing). The following is a graph of five simulations (100 plays each) with starting capital 10 units (d=10) and maximum target 100 units (m=100).

Figure 3 – Gambler’s Ruin – 5 Simulations, 100 Plays Each

Note the similarity in shapes between Figure 2 and Figure 3. The 100 random moves in Figure 3 are obtained from the same random numbers behind the random walks in Figure 2. Two of the simulations end in ruin (the purple at the 36th play and the black at the 38th play) fairly early in the process. At the end of 100 plays, the other three simulations are not yet in ruin, even though the random walk in green appears to be close to ruin. Let’s extend the random walks up to 500 plays of the gambling game.

Figure 4 – Gambler’s Ruin – 5 Simulations, 500 Plays Each

In all 500 plays of the game, none of the random walks has reached the maximum target of 100 units. In fact, the gambler is in ruin in four of the simulations: the purple and the black mentioned earlier, along with the green and the red. The yellow random walk would likely end in ruin too if the game continued.

An alternative interpretation of the gambler’s ruin random walk is this. Assume that two players make a series of one-unit bets against each other and that the total capital between the two players is m units with the first player having d units in capital. Further suppose that the first player has probability p of winning a bet and the second player has probability 1-p of winning a bet. Then the two gamblers play until one of them goes broke (is in ruin). Let X_n be the capital of the first player after the nth play. Then \left\{X_n: n=0,1,2,3,\cdots \right\} is the same random walk demonstrated in Figure 3 and Figure 4.

In Figures 3 and 4, the initial capital of the first player is 10 units and the total capital of the two players is 100 units. The probability of the first player winning all 100 units is 10/100 = 0.10 (see this post on gambler’s ruin). In other words, there is a 90% chance of the first player going broke! Whenever the gambler’s capital is small and is dwarfed by the other player (e.g. a casino), the probability of ruin for the gambler is all but certain. The 90% chance of ruin is based on the bets being even bets, which is not the case in the casino. Thus ruin in casino games is even more of a certainty.
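The 10/100 figure comes from the standard gambler's-ruin formula: with even bets, the probability of reaching m before 0 starting from d is d/m, and for p \ne 1/2 it is (1-r^d)/(1-r^m) where r=(1-p)/p. A sketch (the p=0.47 case below is a hypothetical illustration of unfavorable odds):

```python
def win_probability(d, m, p):
    """Probability the gambler reaches m before 0, starting from d.
    Standard gambler's-ruin formula with r = (1-p)/p."""
    if p == 0.5:
        return d / m
    r = (1 - p) / p
    return (1 - r**d) / (1 - r**m)

# The setting in Figures 3 and 4: d = 10, m = 100, even bets.
print(win_probability(10, 100, 0.5))    # 0.1

# With the odds slightly against the gambler, ruin is near certain.
print(1 - win_probability(10, 100, 0.47))  # very close to 1
```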

Markov Chains

For a stochastic process \left\{X_n: n=0,1,2,3,\cdots \right\} in general, there is no additional assumption on the random variables X_n. We can assume that the X_n are independent random variables. However, the independence assumption may not be realistic. In many applications, the future states may be dependent on the past and current states. Note that in the random walk examples discussed in this post, the future state X_{n+1} is dependent on the preceding state X_{n}. In the gambler’s ruin example, the future capital X_{n+1} is always one unit above or below the capital of the previous period. In the drunkard’s random walk, the future position is always one step to the left or to the right of the previous position. The positions preceding the immediately preceding period do not influence the future state.

The examples discussed in this post are excellent beginning examples to illustrate the stochastic processes \left\{X_n: n=0,1,2,3,\cdots \right\} in which the future state X_{n+1} depends only on the preceding state X_n and in which the past states X_0,X_1,\cdots,X_{n-1} do not influence the future states. This property is called the Markov property. Any stochastic process with the Markov property is called a Markov chain. A random walk is an example of a Markov chain. Random walks are discussed further in the next post. The notion of Markov chains is introduced in this post.
