This post discusses the technique of collapsing a set of states into an absorbing state in order to make certain Markov chain problems easier. We motivate the technique with the following examples. The method discussed here is to raise the new transition probability matrix (the one with the collapsed absorbing state) to an appropriate power (the Chapman-Kolmogorov equations are discussed here). Other applications of Markov chains are discussed here. Markov chains are applied to the occupancy problem in this previous post.

*Example 1*

On any given day, the weather condition in a city is sunny, cloudy (but not rainy), or rainy. The weather condition on a given day is identical to that of the previous day with probability 0.5. If the weather condition changes, it is equally likely to be either of the other two conditions. Suppose that it is sunny today in this city. Find the probability that none of the next five days will be rainy.

*Example 2*

A fair coin is flipped repeatedly until a run of 4 consecutive heads appears. We want to know how many flips are required to achieve this goal. Specifically, we have the following questions.

- Find the probability that it takes at most 9 flips to get a run of 4 consecutive heads.
- Find the probability that it takes exactly 9 flips to get a run of 4 consecutive heads.

*Example 3*

A maze is divided into 9 areas as shown below.

A mouse is placed in area 1 initially. Suppose that areas 3, 7 and 9 contain food and that the other areas in the maze do not contain food. Further suppose that the mouse moves through the areas in the maze at random. That is, whenever the mouse is in an area that has $k$ exits, the next move is to each of these $k$ adjacent areas with probability $1/k$.

- Find the probability that after 5 moves, the mouse is one area away from a food source.
- Find the probability that the mouse has not found food in the first 7 moves and is in area 6 or area 8 after the 7th move.
- Find the probability that it takes at most 8 moves for the mouse to find food.
- Find the probability that it takes exactly 8 moves for the mouse to find food.

**Working Example 1**

We work Example 1 to illustrate the method. The following is the transition probability matrix $\mathbf{P}$ for Example 1, where state 0 = sunny weather, state 1 = cloudy weather and state 2 = rainy weather.

$$\mathbf{P} = \begin{bmatrix} 0.5 & 0.25 & 0.25 \\ 0.25 & 0.5 & 0.25 \\ 0.25 & 0.25 & 0.5 \end{bmatrix}$$

Since it is currently a sunny day, $X_0 = 0$. The probability of not having a rainy day in the next 5 days is the probability $P(X_1 \ne 2, X_2 \ne 2, \ldots, X_5 \ne 2 \mid X_0 = 0)$, i.e. the probability that $X_n = 0$ or $X_n = 1$ for the next 5 periods given that the current state is 0. Because the range of movement is restricted, using the Chapman-Kolmogorov equations, i.e. finding the 5th power of the matrix $\mathbf{P}$, does not work. For example, though $P_{01}^5$ is the probability that the chain ends up in state 1 at time 5 given that it is in state 0 at time 0, this probability includes the possibility of a rainy day (state 2) between time 0 and time 5. The technique is to make state 2 (the state that is not to be visited in the next 5 days) an absorbing state. Doing so produces a new transition probability matrix.

The matrix $\mathbf{Q}$ is obtained by making state 2 in the matrix $\mathbf{P}$ an absorbing state (i.e. the entry in the row for state 2 and in the column for state 2 has probability 1).

$$\mathbf{Q} = \begin{bmatrix} 0.5 & 0.25 & 0.25 \\ 0.25 & 0.5 & 0.25 \\ 0 & 0 & 1 \end{bmatrix}$$

It is a valid transition probability matrix since each row sums to 1, so it describes a Markov chain. How does the new Markov chain relate to the original Markov chain? The following is the 5th power of $\mathbf{Q}$.

$$\mathbf{Q}^5 = \begin{bmatrix} 0.119141 & 0.118164 & 0.762695 \\ 0.118164 & 0.119141 & 0.762695 \\ 0 & 0 & 1 \end{bmatrix}$$

The transition probability $Q_{02}^5 = 0.762695$ is the probability of transitioning in the new process from state 0 at time 0 to state 2 at time 5 (the 5th day). Starting at state 0, there is a 76.27% chance that the new process is in state 2 at time 5. This means that the original process reaches state 2 at time 5 or prior to time 5. The complement of this probability is $1 - 0.762695 = 0.237305$. So there is a 23.73% chance that the original process never reaches state 2 at any time in the first 5 periods. If the weather is indeed governed by the chain described by $\mathbf{P}$, it is likely (a 76.27% chance) that there will be a rainy day at some point within the next 5 periods, given that today is sunny. In fact, even if today is cloudy, there is still a 76.27% chance that there will be a rainy day within the next 5 days.
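As a quick numerical check of this calculation, here is a short NumPy sketch of my own (not part of the original post); the matrix entries follow directly from the description of the weather chain:

```python
import numpy as np

# Collapsed matrix for Example 1: state 0 = sunny, 1 = cloudy, 2 = rainy,
# with the rainy state made absorbing.
Q = np.array([
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.00, 0.00, 1.00],  # state 2 is absorbing
])

Q5 = np.linalg.matrix_power(Q, 5)

absorbed = Q5[0, 2]            # rainy day at or before day 5, starting sunny
print(round(absorbed, 4))      # 0.7627
print(round(1 - absorbed, 4))  # 0.2373, i.e. no rainy day in the next 5 days
```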

**Discussion**

Before working Example 2, let’s discuss the idea of collapsing states into an absorbing state. Consider a Markov chain with transition probability matrix $\mathbf{P}$ and state space $S$. Let $A \subset S$ be a set of states. Suppose that the states in $A$ are either desirable states to enter or negative situations to be avoided. In the case of $A$ being desirable or attractive, we wish to find the probability of entering a state in $A$ after a number of transitions. In the case of $A$ being a set of negative situations, we wish to find the probability of not entering any state in $A$ after a number of transitions.

The key to finding these probabilities is to collapse the states in $A$ into a single state called $a$, an integer that is different from all states not in $A$. For convenience, we let $a$ be the smallest integer that is greater than all states not in $A$. For example, if $S = \{0, 1, 2, 3, 4, 5\}$ and $A = \{4, 5\}$, then $a = 4$.

Now define a new Markov chain by letting all states not in $A$ remain the same and by collapsing all the states in $A$ into the new state $a$. The new state $a$ is made an absorbing state. The state space of the new Markov chain consists of the original states not in $A$ along with the new state $a$. Let $\mathbf{Q}$ be the transition probability matrix of the new Markov chain. The transition probabilities in $\mathbf{Q}$ are defined as follows:

$$Q_{ij} = P_{ij} \quad \text{for } i \notin A, \ j \notin A$$

$$Q_{ia} = \sum_{j \in A} P_{ij} \quad \text{for } i \notin A$$

$$Q_{aa} = 1$$

The transition probability matrix $\mathbf{Q}$ is derived from, and is closely related to, the matrix $\mathbf{P}$. For states not in $A$, the transition probabilities are the same in both matrices (the first condition). The second condition says that $Q_{ia}$ is the sum of all $P_{ij}$ where $j \in A$. Thus $Q_{ia}$ is the probability of the original process entering one of the states in $A$ in one step when starting from a state $i \notin A$. Since $a$ is an absorbing state, $Q_{aa} = 1$.
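The construction of $\mathbf{Q}$ can be sketched as a short function. This is my own sketch, not the post's; the name `collapse` and the convention of listing the absorbing state last are assumptions:

```python
import numpy as np

def collapse(P, A):
    """Collapse the states in A into a single absorbing state.

    P is an n x n transition probability matrix (a NumPy array) and A is
    a set of state indices. The states outside A keep their original
    order; the collapsed absorbing state is appended as the last
    row/column.
    """
    keep = [i for i in range(P.shape[0]) if i not in A]
    m = len(keep)
    Q = np.zeros((m + 1, m + 1))
    for r, i in enumerate(keep):
        for c, j in enumerate(keep):
            Q[r, c] = P[i, j]              # transitions outside A are unchanged
        Q[r, m] = sum(P[i, j] for j in A)  # all one-step mass into A goes to the new state
    Q[m, m] = 1.0                          # the collapsed state is absorbing
    return Q

# The weather chain of Example 1, collapsing the rainy state {2}:
P = np.array([
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.25, 0.25, 0.50],
])
Q = collapse(P, {2})
```

Applied to the weather chain, this reproduces the collapsed matrix used in working Example 1.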

To calculate the two probabilities indicated earlier concerning the process for $\mathbf{P}$, compute powers of the matrix $\mathbf{Q}$. In the new Markov chain described by $\mathbf{Q}$, there are two kinds of states: the states $i \notin A$ and the collapsed state $a$. Whenever the process is in a state $i \notin A$, the process is not yet absorbed (into the state $a$). When the process starts at a state not in $A$ (i.e. not yet absorbed), the process will eventually be absorbed (after some number of transitions).

Consider $Q_{ia}^n$ where $i \notin A$. This is the probability that the process starts in state $i$ (not yet absorbed) and is absorbed at or before time $n$. This is the connection between the two processes: being absorbed into state $a$ in the new process by time $n$ means that the original process enters a state in $A$ in some period prior to time $n$ or at time $n$. We have the following summary.

Consider a Markov process with transition probability matrix $\mathbf{P}$. Collapse $A$, a set of states, into an absorbing state called $a$ in the manner described above. Let $\mathbf{Q}$ be the resulting transition probability matrix of the new Markov chain. For $i \notin A$, $Q_{ia}^n$ is the probability of absorption by time $n$ given that the process starts out not being absorbed, i.e. the probability that the original process enters a state in $A$ at time $n$ or prior to time $n$ given that initially the process is in state $i$. Equivalently, $Q_{ia}^n = P(X_k \in A \text{ for some } k \le n \mid X_0 = i)$.

The above observation gives a way to interpret the complement $1 - Q_{ia}^n$. It is the probability of the original process never entering any state in $A$ during the time periods $1, 2, \ldots, n$ (given that the process starts at state $i$). This is the answer to Example 1. We have the following summary.

The complement of $Q_{ia}^n$ is $1 - Q_{ia}^n$. It is the probability that the original process never enters any of the states in $A$ during the time periods $1, 2, \ldots, n$ (given that the process starts at state $i$). Equivalently, $1 - Q_{ia}^n$ can be stated as $P(X_k \notin A \text{ for all } k \le n \mid X_0 = i)$. Note that the probability $1 - Q_{ia}^n$ is also the sum of the other entries in row $i$ of $\mathbf{Q}^n$, since each row sums to 1. Specifically, for $i \notin A$,

$$1 - Q_{ia}^n = \sum_{j \notin A} Q_{ij}^n.$$

The probability $Q_{ia}^n$ is the probability of absorption on or prior to time $n$. Another probability of interest is the probability of absorption exactly at time $n$.

As described above, let $\mathbf{Q}$ be the resulting transition probability matrix of the new Markov chain after collapsing some states into an absorbing state. For $i \notin A$, $Q_{ia}^n - Q_{ia}^{n-1}$ is the probability of absorption at time $n$ given that the process starts out not being absorbed, i.e. the probability that the original process first enters a state in $A$ at time $n$ given that initially the process is in state $i$. Equivalently, $Q_{ia}^n - Q_{ia}^{n-1} = P(X_n \in A, X_k \notin A \text{ for all } k < n \mid X_0 = i)$.
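Both probabilities can be checked numerically on the weather chain of Example 1 (a sketch of my own, with $n = 5$ and the rainy state playing the role of $A$):

```python
import numpy as np

# Collapsed weather matrix from Example 1 (state 2 = rainy, absorbing).
Q = np.array([
    [0.50, 0.25, 0.25],
    [0.25, 0.50, 0.25],
    [0.00, 0.00, 1.00],
])

n = 5
Qn = np.linalg.matrix_power(Q, n)
Qn1 = np.linalg.matrix_power(Q, n - 1)

at_or_before = Qn[0, 2]                  # absorption at or before time n
exactly_at = at_or_before - Qn1[0, 2]    # absorption exactly at time n
never = Qn[0, 0] + Qn[0, 1]              # row sum over non-absorbing states

# exactly_at equals 0.25 * 0.75**4: four non-rainy days, then rain on day 5.
print(exactly_at)
```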

In a nutshell, collapsing states into an absorbing state informs on two probabilities concerning the original process: the probability of entering $A$ by time $n$, and the probability of first entering $A$ exactly at time $n$. To illustrate this, consider the following example.

*Example 0*

Consider a Markov chain with the following transition probability matrix.

Now collapse state 4 and state 5 into a new state called 4. The following is the resulting matrix $\mathbf{Q}$.

The following is the 6th power of $\mathbf{Q}$.

As a result, $Q_{04}^6$ is the probability that the new process will be in state 4 at time 6 (given that the process starts at state 0). This means that it is the probability that the original process reaches state 4 or state 5 by time 6 (at or before time 6) given that the original process starts in state 0. Note that regardless of the initial state, the probability of being absorbed into state 4 or state 5 by time 6 is roughly the same (70% to 75%).

On the other hand, $Q_{04}^5$ can be read off the 5th power of $\mathbf{Q}$, and $Q_{04}^6 - Q_{04}^5 = 0.069$. Thus there is a 6.9% chance that absorption takes place exactly at time 6.

**More Examples**

We now work Example 2 and Example 3.

*Example 2* (statement of the example given above)

Define a Markov chain with states 0, 1, 2, 3 and 4, with state $i$ (for $i = 0, 1, 2, 3$) meaning that there is currently a run of $i$ consecutive heads (the last $i$ flips are heads) and with state 4 meaning that a run of 4 consecutive heads has already occurred (the last 4 or more flips are heads). Thus state 4 is an absorbing state. The following is the transition probability matrix $\mathbf{Q}$.

$$\mathbf{Q} = \begin{bmatrix} 0.5 & 0.5 & 0 & 0 & 0 \\ 0.5 & 0 & 0.5 & 0 & 0 \\ 0.5 & 0 & 0 & 0.5 & 0 \\ 0.5 & 0 & 0 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

Here’s how $\mathbf{Q}$ is derived. For example, if there is currently a run of 2 heads (the third row), the next state is either 0 (the next flip is a tail) or 3 (the next flip is a head), each with probability 0.5. To find the two probabilities in question, raise $\mathbf{Q}$ to the 8th and 9th powers.

The starting state is 0 since the coin flips are counted from the very first one. With $Q_{04}^9 = 0.216797$, we know that the probability of taking 9 flips or fewer to get 4 consecutive heads is 0.216797.

With $Q_{04}^9 - Q_{04}^8 = 0.216797 - 0.1875 = 0.029297$, there is roughly a 3% chance that the run of 4 consecutive heads is achieved exactly at the 9th flip.
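The matrix powers in Example 2 can be verified with a short NumPy sketch (my own check, not part of the original post):

```python
import numpy as np

# States 0-3: a current run of that many consecutive heads; state 4: done.
Q = np.zeros((5, 5))
for i in range(4):
    Q[i, 0] = 0.5      # a tail resets the run
    Q[i, i + 1] = 0.5  # a head extends the run
Q[4, 4] = 1.0          # a run of 4 heads is absorbing

Q8 = np.linalg.matrix_power(Q, 8)
Q9 = np.linalg.matrix_power(Q, 9)

print(round(Q9[0, 4], 6))             # 0.216797 (run achieved within 9 flips)
print(round(Q9[0, 4] - Q8[0, 4], 6))  # 0.029297 (achieved exactly at flip 9)
```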

*Example 3* (statement of the example given above)

Let $X_n$ be the area the mouse is in after making $n$ moves. Then $\{X_n\}$ is a Markov chain. The following is the transition probability matrix $\mathbf{P}$. To help read the matrix, the diagram of the maze is repeated.

States 3, 7 and 9 are desirable states since these are the areas with food. Let $A = \{3, 7, 9\}$. We then collapse $A$ into a new state called 9. The state 9 in the new process is an absorbing state; entering it means that the original process has reached a state with food (state 3, 7 or 9). The following is the transition probability matrix $\mathbf{Q}$ for the new (collapsed) process.

The state 9 in the matrix $\mathbf{Q}$ refers to the original process reaching state 3, 7 or 9. All the questions in the example can then be answered by raising $\mathbf{Q}$ to an appropriate power, in particular the 5th, 7th and 8th powers.

The statements for the probabilities to be sought are repeated here.

- Find the probability that after 5 moves, the mouse is one area away from a food source.
- Find the probability that the mouse has not found food in the first 7 moves and is in area 6 or area 8 after the 7th move.
- Find the probability that it takes at most 8 moves for the mouse to find food.
- Find the probability that it takes exactly 8 moves for the mouse to find food.

To obtain the answers, it is a matter of picking out the appropriate entries from these matrices.

Being one area away from a food source means being in area 2, 4, 6 or 8. Thus the first probability is $Q_{12}^5 + Q_{14}^5 + Q_{16}^5 + Q_{18}^5$. The probability of being in area 6 or 8 after 7 moves is $Q_{16}^7 + Q_{18}^7$. The probability that it takes 8 moves or fewer to reach food is $Q_{19}^8$. The probability of reaching food in exactly 8 moves is $Q_{19}^8 - Q_{19}^7$.

With the matrices $\mathbf{Q}^5$, $\mathbf{Q}^7$ and $\mathbf{Q}^8$, we can also find the probability of reaching food from other starting positions. Just from the diagram, it is clear that the likelihood of reaching food is greater if the initial position is area 8 or area 6. Between these two areas, it is easier to reach food from area 8 since area 8 is adjacent to two food sources. This is also confirmed by the calculation: it is over 90% certain that the mouse will find food within 8 moves if it is placed in area 8 initially.

2017 – Dan Ma
