A walk down Random Lane

A stochastic process is a sequence of random variables \left\{X_n: n=0,1,2,3,\cdots \right\}. Since the random variables are indexed by the integers, this is a discrete stochastic process. The integer subscripts can also be interpreted as time: X_0 is the random value taken on by the process at time 0, X_1 is the random value taken on at time 1, and so on. The notion of stochastic processes will be examined in more detail in subsequent posts. This post looks at plots of some simple random walks, a particular type of stochastic process. The next post gives a more formal definition of random walks.

Random Walk

One type of stochastic process is the random walk, which can be regarded as a sequence of random steps (up or down, for example) on some mathematical space such as the integers. For example, assume that X_0 is an integer (the initial position of the walk) and that X_n is equally likely to be either X_{n-1}-1 or X_{n-1}+1. Then \left\{X_n: n=0,1,2,3,\cdots \right\} is an example of a random walk. A colorful way to describe this random walk is that a drunkard walks along a path at random, taking one step to the left or one step to the right with equal probability. For convenience, we assume that the initial position is X_0=0 (the origin). To demonstrate the random nature of the walk, we simulate such a random walk five times using random numbers generated by the function =Rand() in Excel.

Figure 1 – Examples of Random Walks in One Dimension

All five random walks start with X_0=0. One hundred steps are shown for each walk. Each random move is basically the result of a coin toss. If the toss is a head, the step is to the right, which means adding one to the previous position (in the graph the path goes up by one step). If the toss is a tail, the step is to the left, which means subtracting one from the previous position (the graph goes down one step).
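For readers who prefer code to a spreadsheet, the following is a minimal Python sketch of the same simulation. Here random.random() plays the role of Excel's =Rand(), and the function name simple_random_walk is chosen just for illustration.

    import random

    def simple_random_walk(num_steps):
        """Simple symmetric random walk starting at the origin.

        Each move is +1 (head) or -1 (tail) with equal probability.
        """
        positions = [0]
        for _ in range(num_steps):
            step = 1 if random.random() > 0.5 else -1
            positions.append(positions[-1] + step)
        return positions

    # Five 100-step walks, as in Figure 1.
    for walk in (simple_random_walk(100) for _ in range(5)):
        print(walk[-1])  # ending position of each walk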

It is clear that the positions of the drunkard are initially close to the origin, but as time goes by the drunkard may wander far from the origin (either to the left or to the right). At the end of the 100 moves in these simulations, the drunkard ends up roughly 10 to 12 steps from the origin, on one side or the other.

For each random walk in Figure 1, the random moves to the left or to the right (up or down in the plot) are generated using the function =Rand() in Excel. The following are the first 10 random values for the random walk in red in Figure 1.

    0.762417629
    0.475581918
    0.916720504
    0.455643747
    0.918072861
    0.141459931
    0.871570535
    0.813575225
    0.092492375
    0.823638012

If a random value is less than 0.5, we take it as a tail; if it is more than 0.5, we take it as a head. With X_0=0, the next 10 positions are 1, 0, 1, 0, 1, 0, 1, 2, 1, 2.
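As a quick check of this rule, the following sketch replays the ten values listed above and reproduces the positions stated in the text.

    # Replaying the ten =Rand() values listed above.
    values = [0.762417629, 0.475581918, 0.916720504, 0.455643747, 0.918072861,
              0.141459931, 0.871570535, 0.813575225, 0.092492375, 0.823638012]

    position = 0  # X_0 = 0
    positions = []
    for v in values:
        position += 1 if v > 0.5 else -1  # more than 0.5 is a head (+1), less is a tail (-1)
        positions.append(position)

    print(positions)  # [1, 0, 1, 0, 1, 0, 1, 2, 1, 2]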

The random walks in Figure 1 have no restriction. The random variables X_n can take on both positive and negative integers without bounds. The set of all values that can be taken by the random variables in the random walk is called the state space. So in these examples, the state space is \mathbb{Z}=\left\{\cdots,-3,-2,-1,0,1,2,3,\cdots \right\}, the set of all integers.

The random walks in Figure 1 also have finite versions, i.e. versions whose state space is a finite set. For example, suppose the path the drunkard is on has barriers at both ends, 10 steps away from the origin on either side. Whenever the drunkard reaches 10 or -10, he stops. For example, his house is at step 10 and a bar is at step -10; whenever he reaches the house or the bar, he stays there. The following is a graph of the same random walks with the 10-step limitations.

Figure 2 – Examples of Random Walks in One Dimension with Absorbing States

Note that when the process reaches 10 or -10, it stays there from that point on. Such states are called absorbing states. The random walks in Figure 2 have two absorbing states 10 and -10. With minor adjustments, these random walks have a gambling interpretation.
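The finite version of the walk is a small change to the earlier sketch: once the walk hits a barrier, it stays there from then on. The function name drunkard_walk is again just for illustration.

    import random

    def drunkard_walk(num_steps, barrier=10):
        """Symmetric random walk starting at the origin, with absorbing
        states at -barrier (the bar) and +barrier (the house)."""
        positions = [0]
        for _ in range(num_steps):
            current = positions[-1]
            if abs(current) == barrier:  # absorbing state: stay put from now on
                positions.append(current)
            else:
                positions.append(current + (1 if random.random() > 0.5 else -1))
        return positions

    # Five 100-step walks, as in Figure 2.
    print([drunkard_walk(100)[-1] for _ in range(5)])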

Gambler’s Ruin

Suppose that a gambler starts out with d units in capital (in dollars or other monetary units) and makes a series of one-unit bets against the house. For the gambler, the probabilities of winning and losing each bet are p and 1-p, respectively. Whenever the capital reaches zero, the gambler is in ruin and his capital remains zero thereafter. On the other hand, if the capital of the gambler increases to m where d<m, then the gambler quits playing. Let X_n be the capital of the gambler after the nth bet. Then \left\{X_n: n=0,1,2,3,\cdots \right\} is a random walk. The starting state is X_0=d. The states 0 and m are absorbing states.
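The following is a minimal sketch of this chain in Python, with the winning probability p, starting capital d, and target m as parameters (gamblers_ruin is an illustrative name, not a standard routine).

    import random

    def gamblers_ruin(p, d, m, num_plays):
        """Capital of the gambler over a series of one-unit bets.

        Each bet is won with probability p; states 0 (ruin) and m (target)
        are absorbing."""
        capital = [d]
        for _ in range(num_plays):
            current = capital[-1]
            if current in (0, m):  # ruin or target reached: no more bets
                capital.append(current)
            else:
                capital.append(current + (1 if random.random() < p else -1))
        return capital

    # Even bets with starting capital d=10 and target m=100.
    print(gamblers_ruin(p=0.5, d=10, m=100, num_plays=500)[-1])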

Note the similarity between the random walk that describes the gambler’s ruin and the random walks in Figure 2. In gambler’s ruin, the capital X_n is increased or decreased by one unit at random (probability p for an increase and probability 1-p for a decrease). Whenever the process reaches 0, it stays at 0 thereafter (the gambler is in ruin and stops playing). Similarly, whenever the process reaches m, it remains at m (the gambler reaches his goal and stops playing). The following is a graph of five simulations (100 plays each) with starting capital 10 units (d=10) and maximum target 100 units (m=100).

Figure 3 – Gambler’s Ruin – 5 Simulations, 100 Plays Each

Note the similarity in shapes between Figure 2 and Figure 3. The 100 random moves in Figure 3 are obtained from the same random numbers behind the random walks in Figure 2. Two of the simulations end in ruin fairly early in the process (the purple at the 36th play and the black at the 38th play). At the end of 100 plays, the other three simulations are not yet in ruin, though the random walk in green appears close to ruin. Let’s extend the random walks to 500 plays of the gambling game.

Figure 4 – Gambler’s Ruin – 5 Simulations, 500 Plays Each

In 500 plays of the game, none of the random walks reaches the maximum target of 100 units. In fact, the gambler is in ruin in four of the simulations: the purple and the black mentioned earlier, along with the green and the red. The yellow random walk would likely end in ruin too if the game continued.

An alternative interpretation of the gambler’s ruin random walk is this. Assume that two players make a series of one-unit bets against each other and that the total capital between the two players is m units with the first player having d units in capital. Further suppose that the first player has probability p of winning a bet and the second player has probability 1-p of winning a bet. Then the two gamblers play until one of them goes broke (is in ruin). Let X_n be the capital of the first player after the nth play. Then \left\{X_n: n=0,1,2,3,\cdots \right\} is the same random walk demonstrated in Figure 3 and Figure 4.

In Figures 3 and 4, the initial capital of the first player is 10 units and the total capital of the two players is 100 units. The probability of the first player winning all 100 units is 10/100 = 0.10 (see this post on gambler’s ruin). In other words, there is a 90% chance that the first player goes broke! Whenever the gambler’s capital is small and is dwarfed by that of the other player (e.g. a casino), ruin for the gambler is all but certain. Moreover, the 90% chance of ruin is based on even bets, which is not the case in a casino. Thus ruin in casino games is even more of a certainty.
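The 10% figure is an instance of the classical gambler’s ruin formula (derived in the post referenced above): with even bets the probability of reaching m before ruin is d/m, and with win probability p \ne 1/2 it is (1-r^d)/(1-r^m) where r=(1-p)/p. The sketch below computes both cases; prob_reach_target is an illustrative name.

    def prob_reach_target(p, d, m):
        """Probability that a player starting with d units reaches m units
        before going broke, by the classical gambler's ruin formula."""
        if p == 0.5:
            return d / m
        r = (1 - p) / p
        return (1 - r**d) / (1 - r**m)

    print(prob_reach_target(0.5, 10, 100))   # 0.1, so a 90% chance of ruin
    print(prob_reach_target(0.49, 10, 100))  # under 0.01: a slight house edge makes ruin near certain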

Markov Chains

For a stochastic process \left\{X_n: n=0,1,2,3,\cdots \right\} in general, no additional assumption is placed on the random variables X_n. One possibility is to assume that the X_n are independent random variables. However, the independence assumption may not be realistic. In many applications, the future states depend on the past and current states. Note that in the random walk examples discussed in this post, the future state X_{n+1} depends on the preceding state X_n. In the gambler’s ruin example, the future capital is always one unit above or below the capital of the previous period. In the drunkard’s random walk, the future position is always one step to the left or to the right of the previous position. The positions prior to the immediately preceding period do not influence the future state.

The examples discussed in this post are excellent beginning examples of stochastic processes \left\{X_n: n=0,1,2,3,\cdots \right\} in which the future state X_{n+1} depends only on the preceding state X_n and in which the past states X_0,X_1,\cdots,X_{n-1} do not influence the future states. This property is called the Markov property. Any stochastic process with the Markov property is called a Markov chain. A random walk is an example of a Markov chain. Random walks are discussed further in the next post, and the notion of Markov chains is introduced in a subsequent post.
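To make the Markov property concrete, here is a small sketch of the one-step transition probabilities for the gambler’s ruin chain (transition_prob is a hypothetical helper name). The probability of the next state depends only on the current state i, not on the path taken to reach it.

    def transition_prob(i, j, p, m):
        """One-step transition probability P(X_{n+1} = j | X_n = i) for the
        gambler's ruin chain with absorbing states 0 and m."""
        if i in (0, m):  # absorbing states stay put with probability 1
            return 1.0 if j == i else 0.0
        if j == i + 1:
            return p
        if j == i - 1:
            return 1.0 - p
        return 0.0

    print(transition_prob(5, 6, p=0.5, m=10))  # 0.5
    print(transition_prob(0, 0, p=0.5, m=10))  # 1.0, since ruin is absorbing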

© 2017 – Dan Ma
