The preceding two posts are devoted to solving the problem of determining mean time to absorption and the problem of determining the probability of absorption (using first step analysis here and using fundamental matrix here). The Markov chains in these problems are called absorbing Markov chains. This post summarizes the properties of such chains.
What are Absorbing Markov Chains?
A state in a Markov chain is said to be an absorbing state if the process will never leave that state once it is entered. That is, if the state $a$ is an absorbing state, then $P_{aa}=1$. A Markov chain is said to be an absorbing Markov chain if it has at least one absorbing state and if any state in the chain, with a positive probability, can reach an absorbing state after a number of steps. The following transition probability matrix represents an absorbing Markov chain.

$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0.5 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \ \ \ \ \ \ \ \ \ \ (1)$$

(rows and columns indexed by the states 0, 1, 2, 3, 4 and 5)
The above matrix can be interpreted as a small-scale random walk or a small-scale gambler’s ruin. It has two absorbing states, 0 and 5. Once the process reaches either of these states, it never leaves. The other states (1, 2, 3 and 4) can reach an absorbing state in a finite number of steps with a positive probability. For example, state 1 can reach state 0 in one step with probability 0.5. State 1 can reach state 5 in four steps with probability 1/16.
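These claims can be checked mechanically. The following is a quick sketch in Python, assuming matrix (1) is the symmetric absorbing walk on states 0 through 5 just described; exact fraction arithmetic confirms the one-step and four-step probabilities.

```python
from fractions import Fraction as F

# Matrix (1): a symmetric random walk on states 0..5 with p = 1/2;
# states 0 and 5 are absorbing.
half = F(1, 2)
P = [
    [F(1), F(0), F(0), F(0), F(0), F(0)],   # state 0: absorbing
    [half, F(0), half, F(0), F(0), F(0)],   # state 1
    [F(0), half, F(0), half, F(0), F(0)],   # state 2
    [F(0), F(0), half, F(0), half, F(0)],   # state 3
    [F(0), F(0), F(0), half, F(0), half],   # state 4
    [F(0), F(0), F(0), F(0), F(0), F(1)],   # state 5: absorbing
]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = mat_mul(P, P)
P4 = mat_mul(P2, P2)     # four-step transition probabilities
print(P[1][0])           # state 1 -> state 0 in one step
print(P4[1][5])          # state 1 -> state 5 in four steps
```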
As we will see below, any nonabsorbing state in an absorbing Markov chain will eventually reach an absorbing state with probability 1. Thus any nonabsorbing state in an absorbing Markov chain is a transient state. That is, when starting in a nonabsorbing state, the process will only spend a finite amount of time in nonabsorbing states.
In order to be an absorbing Markov chain, it is not sufficient for a Markov chain to have an absorbing state. It must also have the property that all nonabsorbing states must eventually reach an absorbing state. The following is an example of a chain that is not an absorbing Markov chain.

$$P = \begin{pmatrix} 0.5 & 0.5 & 0 & 0 & 0 & 0 \\ 0.5 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0.5 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \ \ \ \ \ \ \ \ \ \ (2)$$

(rows and columns indexed by the states 0, 1, 2, 3, 4 and 5)
Even though the above chain has one absorbing state (state 5), it is not an absorbing chain. Note that the states 0, 1 and 2 can never reach state 5.
Reaching Absorbing States with Certainty
Now we discuss several properties of such chains. The first one is that it does not matter what state the process is in initially; eventually the process reaches an absorbing state, i.e. the process will be absorbed at some point in time. Suppose that the absorbing Markov chain has $t$ nonabsorbing states and $k$ absorbing states. The transition probability matrix $P$ can be written in the following format

$$P = \begin{pmatrix} Q & R \\ \mathbf{0} & I \end{pmatrix} \ \ \ \ \ \ \ \ \ \ (3)$$
where $Q$ is a $t \times t$ matrix representing the transition probabilities from the nonabsorbing states into the nonabsorbing states and $R$ is a $t \times k$ matrix representing the transition probabilities from the nonabsorbing states into the absorbing states. Furthermore, $\mathbf{0}$ is a $k \times t$ matrix consisting of zeros in its entries and $I$ is the $k \times k$ identity matrix. The following is the matrix in (1) arranged in the new format.

$$P = \begin{pmatrix} 0 & 0.5 & 0 & 0 & 0.5 & 0 \\ 0.5 & 0 & 0.5 & 0 & 0 & 0 \\ 0 & 0.5 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0.5 & 0 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \ \ \ \ \ \ \ \ \ \ (4)$$

(rows and columns indexed by the states in the order 1, 2, 3, 4, 0 and 5)
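The rearrangement can be carried out mechanically. The following Python sketch (again assuming matrix (1) is the symmetric absorbing walk described above) lists the transient states first and reads off the blocks $Q$ and $R$ directly.

```python
from fractions import Fraction as F

# Decomposition (3) for matrix (1): order the transient states 1, 2, 3, 4
# first and the absorbing states 0, 5 last, then slice out Q and R.
half = F(1, 2)
P = {  # nonzero one-step transition probabilities of matrix (1)
    (0, 0): F(1), (5, 5): F(1),
    (1, 0): half, (1, 2): half,
    (2, 1): half, (2, 3): half,
    (3, 2): half, (3, 4): half,
    (4, 3): half, (4, 5): half,
}
transient = [1, 2, 3, 4]
absorbing = [0, 5]

# Q: transient -> transient block, R: transient -> absorbing block
Q = [[P.get((i, j), F(0)) for j in transient] for i in transient]
R = [[P.get((i, a), F(0)) for a in absorbing] for i in transient]

# each row of the rearranged matrix [Q | R] still sums to 1
row_sums = [sum(Q[r]) + sum(R[r]) for r in range(len(transient))]
print(Q)
print(R)
print(row_sums)
```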
Decomposing $P$ into the form in (3) is a useful technique. It makes certain calculations simple to perform. We will get to that in a minute. First we prove an important fact showing that the nonabsorbing states in an absorbing chain are indeed transient.
Theorem 1 

In an absorbing Markov chain, the probability that the process will reach an absorbing state is 1. In other words, the probability of being in a nonabsorbing state approaches zero as the number of steps $n$ goes to infinity, i.e. $P_{ij}^n \rightarrow 0$ as $n \rightarrow \infty$ for any nonabsorbing states $i$ and $j$. 
Here’s a proof. For each nonabsorbing state $i$, let $m_i$ be the minimum number of steps to transition from state $i$ to an absorbing state. For example, in the matrix (1), the minimum number of steps to go from state 1 to an absorbing state (state 0) is 1, a transition that occurs with probability 0.5. In the same example, the minimum number of steps to go from state 3 to an absorbing state (state 5) is 2, a transition that occurs with probability 0.25.
For each nonabsorbing state $i$, let $p_i$ be the probability that, starting from state $i$, the process will take more than $m_i$ steps to reach an absorbing state. The probability $p_i$ must be less than 1. If $p_i = 1$, the process for certain will not reach an absorbing state in $m_i$ steps, contradicting the definition of $m_i$. Thus $p_i < 1$.
Let $m$ be the maximum of all the $m_i$ and let $p$ be the maximum of all the $p_i$. Taking the maximum is possible here since the Markov chain has a finite state space. Note that $p < 1$. The probability of the process taking more than $m$ steps to reach an absorbing state is less than or equal to $p$. This follows from the definitions of $m_i$ and $p_i$.
Furthermore, the probability of the process taking more than $2m$ steps to reach an absorbing state is less than or equal to $p^2$. This follows from the fact that the Markov chain is memoryless. If the process is still not in an absorbing state after taking $m$ steps, the probability of the process taking $m$ or more additional steps to reach an absorbing state is again less than or equal to $p$. Multiplying the two probabilities gives $p^2$. By inductive reasoning, the probability of the process taking more than $nm$ steps to reach an absorbing state is less than or equal to $p^n$. Since $p < 1$, the numbers $p^n$ approach zero as $n$ goes to infinity.
It follows that as the integer $n$ increases without bound, the probability that the process will take more than $nm$ steps to reach an absorbing state approaches zero. Turning it around, the probability that the process will reach an absorbing state in $nm$ or fewer steps approaches 1. As a result, for any nonabsorbing states $i$ and $j$, the transition probabilities $P_{ij}^n$ approach zero as $n$ goes to infinity. Thus $P_{ij}^n \rightarrow 0$ as $n \rightarrow \infty$.
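The conclusion of Theorem 1 can also be illustrated numerically. The following Python sketch raises the matrix $Q$ of matrix (1) (assuming the symmetric walk described earlier) to a high power and shows that its entries are already negligible, consistent with $P_{ij}^n \rightarrow 0$.

```python
# Q for matrix (1): the transient-to-transient block for states 1, 2, 3, 4.
# Floats suffice for this illustration.
Q = [
    [0.0, 0.5, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.5, 0.0],
]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Qn = [row[:] for row in Q]
for _ in range(63):          # compute Q^64
    Qn = mat_mul(Qn, Q)
max_entry = max(max(row) for row in Qn)
print(max_entry)             # every entry of Q^64 is already tiny
```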
The above theorem shows that in an absorbing Markov chain, the process will eventually land in an absorbing state (it cannot cycle among the nonabsorbing states indefinitely). Thus in an absorbing Markov chain, the time spent in the nonabsorbing states is finite. As a result, nonabsorbing states in an absorbing Markov chain are called transient states. Since the time spent in transient states is finite, one of the problems of interest is the mean time spent in transient states. Of course, the answer depends on the initial state.
More Properties of Absorbing Markov Chains
Another important tool is the fundamental matrix, which can be computed from the transition probability matrix of an absorbing chain. Given the transition probability matrix $P$, decompose $P$ according to (3). Recall that $Q$ consists of the transition probabilities from the transient states to the transient states. Derive $I - Q$ where $I$ is the identity matrix of the same dimension as $Q$. The following three theorems capture more important properties of absorbing Markov chains.
Theorem 2 

In an absorbing Markov chain with transition probability matrix $P$, the matrix $I - Q$ has an inverse $N$, i.e. $N = (I - Q)^{-1}$. The matrix $N$ has the following properties:

- $N = (I - Q)^{-1} = I + Q + Q^2 + Q^3 + \cdots$
- The entry $N_{ij}$ of $N$ is the expected number of periods the process spends in the transient state $j$ given that the process starts in the transient state $i$.

For an absorbing Markov chain with transition probability matrix $P$, the matrix $N = (I - Q)^{-1}$ is called the fundamental matrix of the absorbing Markov chain. A proof of Theorem 2 is sketched out in the preceding post. So we will not repeat it here.
Once the fundamental matrix is obtained by taking the inverse of $I - Q$, we can use it to solve two problems, stated in the following two theorems. The ideas of a proof are also sketched out in the preceding post.
Theorem 3 – mean time to absorption 

In an absorbing Markov chain with transition probability matrix $P$, consider the fundamental matrix $N = (I - Q)^{-1}$. For each transient state $i$, the sum of the entries of the row of $N$ corresponding to state $i$ is the mean time to absorption, i.e. the expected number of steps until the process reaches an absorbing state, given that the process starts in state $i$. In matrix notation, the mean times to absorption are given by the vector $N \mathbf{1}$, where $\mathbf{1}$ is a column vector whose entries are all 1.

Theorem 4 – probabilities of absorption 

In an absorbing Markov chain with transition probability matrix $P$, consider the fundamental matrix $N = (I - Q)^{-1}$ and the matrix $R$ in the decomposition (3). The entry of the product $B = N R$ in the row corresponding to the transient state $i$ and the column corresponding to the absorbing state $k$ is the probability that the process, starting in state $i$, is eventually absorbed into state $k$.
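Theorems 2, 3 and 4 can be checked together on matrix (1). The following Python sketch (assuming the symmetric absorbing walk described earlier; the Gauss-Jordan routine is an illustrative helper, not from the original posts) computes $N = (I - Q)^{-1}$ exactly, then forms the row sums and the product $NR$.

```python
from fractions import Fraction as F

def inverse(M):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [F(int(i == j)) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

half = F(1, 2)
Q = [  # transient states 1, 2, 3, 4 of matrix (1)
    [F(0), half, F(0), F(0)],
    [half, F(0), half, F(0)],
    [F(0), half, F(0), half],
    [F(0), F(0), half, F(0)],
]
R = [  # columns: absorbing states 0 and 5
    [half, F(0)],
    [F(0), F(0)],
    [F(0), F(0)],
    [F(0), half],
]

I_minus_Q = [[F(int(i == j)) - Q[i][j] for j in range(4)] for i in range(4)]
N = inverse(I_minus_Q)

t = [sum(row) for row in N]                       # mean times to absorption
B = [[sum(N[i][k] * R[k][j] for k in range(4))    # absorption probabilities
      for j in range(2)] for i in range(4)]
print(t)
print(B)
```

The row sums reproduce the classic gambler's-ruin mean durations, and each row of $B$ sums to 1 since absorption is certain.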

Examples
Examples are available in the preceding posts (here and here). We work three examples.
Example 1
An urn initially contains 7 blue balls and 5 red balls. One ball is selected at random one at a time from the urn. If the selected ball is not red, the ball is put back into the urn. If the selected ball is red, it is removed from the urn. The process continues until all red balls are removed. Determine the expected number of selections in order to have all red balls removed.
Let $X_n$ be the number of red balls in the urn after the $n$th selection. The states are 0, 1, 2, 3, 4 and 5. The initial state is $X_0 = 5$. The goal is to reach the absorbing state of 0 (all red balls are removed). When the urn contains $i$ red balls, it contains $7 + i$ balls in total, so a red ball is selected with probability $i/(7+i)$. The following is the transition probability matrix.

$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 1/8 & 7/8 & 0 & 0 & 0 & 0 \\ 0 & 2/9 & 7/9 & 0 & 0 & 0 \\ 0 & 0 & 3/10 & 7/10 & 0 & 0 \\ 0 & 0 & 0 & 4/11 & 7/11 & 0 \\ 0 & 0 & 0 & 0 & 5/12 & 7/12 \end{pmatrix}$$

(rows and columns indexed by the states 0, 1, 2, 3, 4 and 5)
There is only one absorbing state, the state 0. The states 1, 2, 3, 4 and 5 are transient states. Identify the matrix $Q$, which consists of the transition probabilities from the transient states into the transient states. Derive the matrix $I - Q$ where $I$ is a 5 x 5 identity matrix. The following is the inverse of $I - Q$.

$$N = (I - Q)^{-1} = \begin{pmatrix} 8 & 0 & 0 & 0 & 0 \\ 8 & 9/2 & 0 & 0 & 0 \\ 8 & 9/2 & 10/3 & 0 & 0 \\ 8 & 9/2 & 10/3 & 11/4 & 0 \\ 8 & 9/2 & 10/3 & 11/4 & 12/5 \end{pmatrix}$$

(rows and columns indexed by the transient states 1, 2, 3, 4 and 5)
The 5th row of $N$ gives the expected number of selections spent in each of the transient states when the process starts in state 5. The sum of the entries in the 5th row is 1259/60 = 20.9833. Thus it takes about 21 selections on average to remove all red balls from the urn.
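The computation in this example can be reproduced in Python. The following sketch builds $Q$ from the selection probabilities described above, inverts $I - Q$ with a small Gauss-Jordan helper (an illustrative routine, not from the original posts), and sums the 5th row.

```python
from fractions import Fraction as F

def inverse(M):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [F(int(i == j)) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

states = [1, 2, 3, 4, 5]               # number of red balls remaining
Q = [[F(0)] * 5 for _ in states]
for idx, i in enumerate(states):
    Q[idx][idx] = F(7, 7 + i)          # draw a blue ball: stay at i
    if i > 1:
        Q[idx][idx - 1] = F(i, 7 + i)  # draw a red ball: move to i - 1
# from state 1, drawing the red ball (probability 1/8) absorbs at state 0

I_minus_Q = [[F(int(i == j)) - Q[i][j] for j in range(5)] for i in range(5)]
N = inverse(I_minus_Q)
mean_time = sum(N[4])                  # 5th row: start with all 5 red balls
print(mean_time)
```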
Example 2
An urn initially contains 7 blue balls and 5 red balls. One ball is selected at random one at a time from the urn. If the selected ball is not red, the ball is put back into the urn. If the selected ball is red, it is removed from the urn and then a blue ball is put into the urn. The process continues until all red balls are removed. Determine the expected number of selections in order to have all red balls removed. Compare the expected value with the expected value in Example 1. Give an explanation for the difference.
As in Example 1, let $X_n$ be the number of red balls in the urn after the $n$th selection. The states are 0, 1, 2, 3, 4 and 5. The initial state is $X_0 = 5$. The goal is to reach the absorbing state of 0 (all red balls are removed). When the urn contains $i$ red balls, a red ball is selected with probability $i/12$. The following is the transition probability matrix.

$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 1/12 & 11/12 & 0 & 0 & 0 & 0 \\ 0 & 1/6 & 5/6 & 0 & 0 & 0 \\ 0 & 0 & 1/4 & 3/4 & 0 & 0 \\ 0 & 0 & 0 & 1/3 & 2/3 & 0 \\ 0 & 0 & 0 & 0 & 5/12 & 7/12 \end{pmatrix}$$

(rows and columns indexed by the states 0, 1, 2, 3, 4 and 5)
One big difference from Example 1 is that each red ball selected is replaced by a blue ball. Thus the urn always has 12 balls, whereas in Example 1 the number of balls is dwindling. As before, identify $Q$ and derive $I - Q$. The following is the inverse of $I - Q$.

$$N = (I - Q)^{-1} = \begin{pmatrix} 12 & 0 & 0 & 0 & 0 \\ 12 & 6 & 0 & 0 & 0 \\ 12 & 6 & 4 & 0 & 0 \\ 12 & 6 & 4 & 3 & 0 \\ 12 & 6 & 4 & 3 & 12/5 \end{pmatrix}$$

(rows and columns indexed by the transient states 1, 2, 3, 4 and 5)
Since the initial state is 5, we focus on the 5th row of $N$. The sum of that row is 137/5 = 27.4. Thus it takes on average 27.4 selections to remove all red balls. Note that the process takes longer than in Example 1. By replacing each selected red ball with a blue ball, it becomes harder to remove red balls, especially when the process is in the lower states.
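The answer can also be checked with first step analysis (the approach mentioned at the top of this post). Conditioning on the first selection gives $t_i = 1 + \frac{12-i}{12} t_i + \frac{i}{12} t_{i-1}$, which simplifies to $t_i = \frac{12}{i} + t_{i-1}$. A short Python sketch of this recursion:

```python
from fractions import Fraction as F

# First step analysis for Example 2: t_i = 12/i + t_{i-1}, with t_0 = 0.
t = F(0)                  # t_0: no red balls left, absorption already occurred
for i in range(1, 6):
    t = F(12, i) + t      # mean selections to clear i red balls
print(t)                  # mean number of selections starting from state 5
print(float(t))
```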
Example 3
An urn initially contains 4 blue balls and 2 red balls. One ball is selected at random one at a time from the urn. Each selected ball is replaced by a ball of the opposite color. The process continues until all balls in the urn are of the same color.
- Determine the probability that the process ends with the urn containing only red balls.
- Determine the expected number of selections in order for the urn to consist of balls of the same color.
Let $X_n$ be the number of red balls in the urn after the $n$th selection. There are 7 states in this chain – 0, 1, 2, 3, 4, 5 and 6. The initial state is $X_0 = 2$. The process will eventually reach either state 0 or state 6. We wish to find the probability of being absorbed into state 6. States 0 and 6 are the absorbing states. States 1, 2, 3, 4, and 5 are transient states. When the urn contains $i$ red balls, a red ball is selected with probability $i/6$ (and is replaced by a blue ball, moving the process to state $i - 1$), while a blue ball is selected with probability $(6-i)/6$ (moving the process to state $i + 1$). The following is the transition probability matrix.

$$P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1/6 & 0 & 5/6 & 0 & 0 & 0 & 0 \\ 0 & 1/3 & 0 & 2/3 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 2/3 & 0 & 1/3 & 0 \\ 0 & 0 & 0 & 0 & 5/6 & 0 & 1/6 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$

(rows and columns indexed by the states 0, 1, 2, 3, 4, 5 and 6)
As before, identify $Q$ and derive $I - Q$. The following is the inverse of $I - Q$.

$$N = (I - Q)^{-1} = \begin{pmatrix} 48/13 & 105/13 & 10 & 90/13 & 30/13 \\ 42/13 & 126/13 & 12 & 108/13 & 36/13 \\ 3 & 9 & 13 & 9 & 3 \\ 36/13 & 108/13 & 12 & 126/13 & 42/13 \\ 30/13 & 90/13 & 10 & 105/13 & 48/13 \end{pmatrix}$$

(rows and columns indexed by the transient states 1, 2, 3, 4 and 5)
To answer the probability question, multiply the matrices $N$ and $R$. The following is the result.

$$NR = \begin{pmatrix} 8/13 & 5/13 \\ 7/13 & 6/13 \\ 1/2 & 1/2 \\ 6/13 & 7/13 \\ 5/13 & 8/13 \end{pmatrix}$$

(rows indexed by the transient states 1, 2, 3, 4 and 5; columns indexed by the absorbing states 0 and 6)
With the initial state being 2, the probability of being absorbed into state 6 (all red balls) is 6/13 = 0.4615. The expected time to absorption (all balls of the same color) is the sum of the second row of $N$, which is 36. Thus on average it will take 36 selections for the urn to consist of balls of the same color.
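Both answers in this example can be reproduced in Python. The following sketch builds $Q$ and $R$ from the selection probabilities described above, computes $N = (I - Q)^{-1}$ exactly (the Gauss-Jordan helper is an illustrative routine, not from the original posts), and reads off the absorption probability and the mean absorption time.

```python
from fractions import Fraction as F

def inverse(M):
    """Invert a square matrix of Fractions by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [F(int(i == j)) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

states = [1, 2, 3, 4, 5]              # number of red balls in the urn
Q = [[F(0)] * 5 for _ in states]
R = [[F(0), F(0)] for _ in states]    # columns: absorbing states 0 and 6
for idx, i in enumerate(states):
    down, up = F(i, 6), F(6 - i, 6)   # red drawn -> i-1, blue drawn -> i+1
    if i > 1:
        Q[idx][idx - 1] = down
    else:
        R[idx][0] = down              # state 1 -> absorbing state 0
    if i < 5:
        Q[idx][idx + 1] = up
    else:
        R[idx][1] = up                # state 5 -> absorbing state 6

I_minus_Q = [[F(int(i == j)) - Q[i][j] for j in range(5)] for i in range(5)]
N = inverse(I_minus_Q)
B = [[sum(N[i][k] * R[k][j] for k in range(5)) for j in range(2)]
     for i in range(5)]

print(B[1][1])     # probability of ending with all red, starting from state 2
print(sum(N[1]))   # mean number of selections starting from state 2
```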
Practice Problems
Practice problems to reinforce the concepts discussed here are available in a companion blog. There are two problem sets. The first one is here and the second one is here.
2018 – Dan Ma