# More on random walks

This post continues the discussion of random walks from the previous post. The previous post looked at random walks graphically; this post introduces the notion of random walks more formally.

Consider $\left\{X_n: n=0,1,2,\cdots \right\}$, a sequence of random variables indexed by the non-negative integers. By definition this is a stochastic process, to be precise a discrete-time stochastic process since the random variables are indexed by integers. The integers in the subscripts can be interpreted as time. Thus the random variables $X_n$ describe the states of some process over time. In the stochastic processes discussed in the previous post, the future state $X_{n+1}$ of the process is obtained by adding one (with probability $p$) to or subtracting one (with probability $1-p$) from the current state $X_n$. So a given state of the process $X_n$ is one higher or lower than the state in the previous period. The random walks in the previous post go up or down with equal chance, i.e. $p=0.5$. The following graph, taken from the previous post, shows 5 simulations of such random walks.

Figure 1 – Examples of Random Walks in One Dimension

## Random Walks – A General Description

We now give a more general description of the random walk. Instead of a random one-unit up or down move, the moves in the random walk are determined by a predetermined discrete distribution. Let $Y_1,Y_2,Y_3,\cdots$ be integer-valued random variables that are independent and identically distributed. Let $P[Y=y]$ be the common probability function. Let $X_0$ be an integer-valued random variable that is independent of the random variables $Y_n$. The random variable $X_0$ will be the initial position of the random walk. The random variables $Y_n$ are the increments (they are the amounts added to the stochastic process as time increases).

For $n \ge 1$, define $X_n=X_0+Y_1+\cdots+Y_{n-1}+Y_n$. Then $\left\{X_n: n=0,1,2,\cdots \right\}$ is called a random walk. Wherever the process is at any given time, the next step is a walk of a distance and in a direction dictated by the corresponding increment: $Y_{n+1}$ tells the process how far (and in which direction) to walk in the next step.
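The definition $X_n=X_0+Y_1+\cdots+Y_n$ translates directly into a simulation. The following is a minimal sketch in Python (the function name and the list-based representation of the increment distribution are illustrative choices, not from the original post):

```python
import random

def random_walk(x0, increment_values, increment_probs, n_steps, seed=None):
    """Simulate X_0, X_1, ..., X_n where X_n = X_0 + Y_1 + ... + Y_n
    and the increments Y_k are drawn IID from the given distribution."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        y = rng.choices(increment_values, weights=increment_probs)[0]
        path.append(path[-1] + y)
    return path

# A walk of the kind shown in Figure 1: one-unit up/down moves with p = 0.5
path = random_walk(x0=0, increment_values=[1, -1],
                   increment_probs=[0.5, 0.5], n_steps=10, seed=1)
```

Running the function repeatedly with different seeds produces simulated paths like those in Figure 1.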

The state space is the set of all the values that can be taken on by the random variables $X_n$. In this case, the state space is either the set of all integers $\mathbb{Z}=\left\{\cdots,-3,-2,-1,0,1,2,3,\cdots \right\}$ or the set of all non-negative integers $\mathbb{N}=\left\{0,1,2,3,\cdots \right\}$ or some appropriate subset of $\mathbb{Z}$ or $\mathbb{N}$.

Given the current state of the process $X_n$, the next state $X_{n+1}$ is obtained by adding the random value $Y_{n+1}$ to the current state $X_n$. If the random variables $Y_n$ are independent Bernoulli variables with $P(Y_n=1)=p$ and $P(Y_n=-1)=1-p$, then each state $X_n$ is the result of an up or down move by one unit from the previous state. The examples in the previous post are based on this type of Bernoulli one-unit up and down move. In the general definition, the future state $X_{n+1}$, instead of being adjusted by a one-unit up or down move, is adjusted by the random variable $Y_{n+1}$.

One important task is to establish the probabilities of the transition from one state to another. For example, if the process is currently in state $i$, what is the probability that it will be in state $j$ in the next period? More specifically, given that the random walk is in state $i$, i.e. $X_n=i$, what is the probability that the process is in state $j$ in the next period, i.e. $X_{n+1}=j$? Such probabilities are called one-step transition probabilities.

The probabilities that we wish to obtain are $P[X_{n+1}=j \lvert X_n=i]$ for $n \ge 0$. Recall that $P[Y=y]$ is the common probability function for the random variables $Y_n$. Given states $i,j$, define $P_{ij}=P[Y=j-i]$.

When $X_n=i$, in order for $X_{n+1}=j$ to happen, the increment must be $Y_{n+1}=j-i$. Note that $P[Y_{n+1}=j-i]=P[Y=j-i]=P_{ij}$. Observe that this probability is independent of the time period $n$ since the $Y_n$ have a common distribution $Y$. Thus it is appropriate to use $P_{ij}$ to express this probability, i.e. there is no need to include the time $n$ in the subscript of $P_{ij}$. For all $n \ge 0$,

$P[X_{n+1}=j \lvert X_n=i]=P[Y_{n+1}=j-i]=P[Y=j-i]=P_{ij}$

The number $P_{ij}$ is the probability of the process transitioning from state $i$ to state $j$ in one step, which is evaluated based on the common probability function of the increments $Y_n$.

Now that we know the probability of the process going from one state to another state in one step, other probability questions are: what is the probability of a path and what is the probability of going from state $i$ to state $j$ in $n$ steps? The probabilities in the second question are called $n$-step transition probabilities. The probability of a path is discussed below. The $n$-step transition probabilities are discussed in a subsequent post.

The first question has a straightforward answer. For example, if the process starts in state $i_0$ at time 0, moves to state $i_1$ at time 1 and so on until reaching state $i_n$ at time $n$, then the following is the probability of this path:

$\displaystyle \begin{aligned} P[X_0=i_0,X_1=i_1,\cdots,X_n=i_n]&=P[X_0=i_0,Y_1=i_1-i_0,\cdots,Y_n=i_n-i_{n-1}] \\&=P[X_0=i_0] \times P[Y_1=i_1-i_0] \times \cdots \times P[Y_n=i_n-i_{n-1}] \\&=P[X_0=i_0] \times P_{i_0,i_1} \times \cdots \times P_{i_{n-1},i_n} \end{aligned}$

$\displaystyle \begin{aligned} P[X_1=i_1,\cdots,X_n=i_n \lvert X_0=i_0]&=\frac{P[X_0=i_0,X_1=i_1,\cdots,X_n=i_n]}{P[X_0=i_0]} \\&=P_{i_0,i_1} \times \cdots \times P_{i_{n-1},i_n} \end{aligned}$

Conditional on the initial state $X_0=i_0$, the probability of the process going through the path $X_1=i_1,\cdots,X_n=i_n$ is simply the product of the one-step transition probabilities.

If the path is observed to start at a time period other than time 0, conditional on the first state in the path, the probability of the process going through the path would be:

$\displaystyle P[X_{k+1}=i_1,\cdots,X_{k+n}=i_n \lvert X_k=i_0]=P_{i_0,i_1} \times \cdots \times P_{i_{n-1},i_n}$

Note that the transition probabilities $P_{ij}$ and the probabilities of paths are independent of the current period $n$. Random walks with this property are called time-homogeneous or stationary. Thus the probability of transitioning into state $j$ from state $i$, or the probability of transitioning through a path, is identical regardless of where the process is in the time scale (at the beginning of the process or later in the process).
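The path probability formula above is just a running product of one-step transition probabilities. A hedged sketch in Python, again assuming the increment distribution $P[Y=y]$ is given as a dictionary (the function name is illustrative):

```python
def path_probability(increment_pmf, states):
    """P[X_1 = i_1, ..., X_n = i_n | X_0 = i_0]: the product of the
    one-step transition probabilities P_{i_{k-1}, i_k} = P[Y = i_k - i_{k-1}],
    given the path of states [i_0, i_1, ..., i_n]."""
    prob = 1.0
    for prev, curr in zip(states, states[1:]):
        prob *= increment_pmf.get(curr - prev, 0.0)
    return prob

# Symmetric one-unit walk: the path 0 -> 1 -> 2 -> 1 has probability (1/2)^3
pmf = {1: 0.5, -1: 0.5}
prob = path_probability(pmf, [0, 1, 2, 1])  # (1/2)^3 = 0.125
```

By time-homogeneity, the same function applies whether the path starts at time 0 or at any later time $k$.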

## Specific Examples

As a special case of the random walks discussed above, let the increments $Y_n$ be defined by a Bernoulli random variable with $P[Y=1]=p$ and $P[Y=-1]=1-p$. Then the resulting random walk is a series of up and down moves as shown in Figure 1 above. The one-step transition probabilities are:

$\displaystyle P_{ij} = \left\{ \begin{array}{ll} \displaystyle p &\ \ \ \ \ \ j=i+1 \\ \text{ } & \text{ } \\ \displaystyle 1-p &\ \ \ \ \ \ j=i-1 \\ \text{ } & \text{ } \\ 0 &\ \ \ \ \ \ \text{otherwise} \end{array} \right.$

If the increments $Y_n$ are defined by the distribution where $P[Y=1]=p$, $P[Y=-1]=q$ and $P[Y=0]=1-p-q$, then the following gives the transition probabilities:

$\displaystyle P_{ij} = \left\{ \begin{array}{ll} \displaystyle p &\ \ \ \ \ \ j=i+1 \\ \text{ } & \text{ } \\ \displaystyle q &\ \ \ \ \ \ j=i-1 \\ \text{ } & \text{ } \\ \displaystyle 1-p-q &\ \ \ \ \ \ j=i \\ \text{ } & \text{ } \\ 0 &\ \ \ \ \ \ \text{otherwise} \end{array} \right.$

This random walk has three possible moves at any given state: an up move by one unit, a down move by one unit, and the possibility that the process stays at the same state. Of course, the possibilities for modeling are limitless. We can use other distributions for the common distribution of the increments $Y_n$, e.g. the binomial distribution or the Poisson distribution, preferably shifted in order to account for both up moves and down moves.
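To illustrate the shifting idea mentioned above, a binomial variable (which is non-negative) can be recentered so that its support includes negative values, making down moves possible. A minimal sketch, with an illustrative helper name:

```python
from math import comb

def shifted_binomial_pmf(n, p, shift):
    """Increment distribution Y = B - shift, where B ~ Binomial(n, p).
    Shifting lets the increments take negative values (down moves)."""
    return {k - shift: comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(n + 1)}

# Y = B - 1 with B ~ Binomial(2, 0.5): P[Y=-1] = 1/4, P[Y=0] = 1/2, P[Y=1] = 1/4
pmf = shifted_binomial_pmf(2, 0.5, 1)
```

This particular choice reproduces a three-move walk of the kind just described, with $p=q=1/4$.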

## Finite-State Random Walks

The random walks discussed above potentially have an infinite state space, i.e. the process can potentially reach far from the initial state without bounds. Depending on the common distribution of the increments $Y_1,\cdots,Y_n,\cdots$, the process can go up indefinitely to $+\infty$ or go down to $-\infty$. The random walks in Figure 1 have increments 1 and -1 and can reach states that are arbitrarily high or low, though it would take a great many steps to reach such high levels.

Any random walk with unbounded state space can be modified to have a finite state space. Pick two states that serve as boundaries, one lower and one upper. The states in between the boundary states are called interior states. Whenever the process reaches the upper boundary or the lower boundary, it either stays there without moving any further or transitions back into an interior state. The transition probabilities in the interior states remain the same as discussed above. If a boundary state is such that the process always stays there after reaching it, it is called an absorbing state.

A handy example of a finite-state random walk is the gambler’s ruin. Suppose that a gambler starts out with $d$ units in capital (in dollars or other monetary units) and makes a series of one-unit bets against the house. For the gambler, the probabilities of winning and losing each bet are $p$ and $1-p$, respectively. Whenever the capital reaches zero, the gambler is in ruin and his capital remains zero thereafter. On the other hand, if the capital of the gambler increases to $m$ where $d<m$, then the gambler quits playing. Let $X_n$ be the capital of the gambler after the $n$th bet. Then $\left\{X_n: n=0,1,2,3,\cdots \right\}$ is a random walk. The starting state is $X_0=d$. The states $0$ and $m$ are absorbing states. The following gives the transition probabilities.

$\displaystyle P_{ij} = \left\{ \begin{array}{ll} \displaystyle p &\ \ \ \ \ \ j=i+1,i \ne 0, i \ne m \\ \text{ } & \text{ } \\ \displaystyle 1-p &\ \ \ \ \ \ j=i-1,i \ne 0, i \ne m \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ i=0,j=0 \\ \text{ } & \text{ } \\ \displaystyle 1 &\ \ \ \ \ \ i=m,j=m \\ \text{ } & \text{ } \\ 0 &\ \ \ \ \ \ \text{otherwise} \end{array} \right.$

Whenever the gambler reaches the absorbing state of 0, the gambler is in ruin. If the process reaches the absorbing state of $m$, the gambler strikes gold. If the initial capital of the gambler is small relative to the casino, there is a virtually 100% chance that the gambler will be in ruin even if each bet has even odds (see here for the calculation). One interesting question: on average how long does it take for the gambler to be in ruin? Such questions will be answered after the necessary tools are built in subsequent posts.
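The gambler's ruin chain is easy to simulate, and simulation offers a quick check on the claims above: with even odds, the exact ruin probability starting from $d$ with target $m$ is $1-d/m$ (a standard result, not derived in this post). A minimal sketch with an illustrative function name:

```python
import random

def gamblers_ruin(d, m, p, seed=None):
    """Play one-unit bets starting with capital d, winning each bet with
    probability p. Return True if the gambler is ruined (capital hits 0)
    before reaching the target capital m. Both 0 and m are absorbing."""
    rng = random.Random(seed)
    capital = d
    while 0 < capital < m:
        capital += 1 if rng.random() < p else -1
    return capital == 0

# Estimate the ruin probability for d = 10, m = 100 with even odds (p = 0.5);
# the exact value is 1 - d/m = 0.9
runs = 2000
ruins = sum(gamblers_ruin(10, 100, 0.5, seed=k) for k in range(runs))
ruin_estimate = ruins / runs
```

Even with a fair game, the estimate comes out close to 0.9: the gambler with a small bankroll relative to the target is almost surely ruined.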

## Remarks

A random walk is a stochastic process $\left\{X_n: n=0,1,2,\cdots \right\}$ such that at each time point, the process walks a certain distance that is dictated by an independent and identically distributed sequence of random variables. In such a stochastic process, the future state depends only on the preceding state and not on the past states before the preceding state. For example, in the gambler’s ruin, the gambler’s capital at any given point depends only on his capital in the previous play.

In a stochastic process $\left\{X_n: n=0,1,2,\cdots \right\}$, the property that the future state $X_{n+1}$ depends only on the preceding state $X_n$ and not on the past states $X_0,X_1,\cdots,X_{n-1}$ is called the Markov property. Any stochastic process with the Markov property is called a Markov chain. A random walk is a Markov chain. There are Markov chains that are not random walks. The notion of Markov chains is introduced in this post.

$\copyright$ 2017 – Dan Ma
