IB Markov Chains

0 Introduction

So far, in IA Probability, we have always dealt with one random variable, or

numerous independent variables, and we were able to handle them. However, in

real life, things often are dependent, and things become much more difficult.

In general, there are many ways in which variables can be dependent. Their

dependence can be very complicated, or very simple. If we are just told two

variables are dependent, we have no idea what we can do with them.

This is similar to our study of functions. We can develop theories about

continuous functions, increasing functions, or differentiable functions, but if we

are just given a random function without assuming anything about it, there

really isn't much we can do.

Hence, in this course, we are just going to study a particular kind of dependent

variables, known as Markov chains. In fact, in IA Probability, we have already

encountered some of these. One prominent example is the random walk, in

which the next position depends on the previous position. This gives us some

dependent random variables, but they are dependent in a very simple way.
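The random walk mentioned above can be sketched in a few lines. This is a minimal illustration (the function name and parameters are chosen here for illustration, not taken from the course): at each step the walker moves up with probability p and down otherwise, so the next position depends only on the current one.

```python
import random

def simple_random_walk(steps, p=0.5, start=0):
    """Simulate a simple random walk on the integers.

    At each step, move +1 with probability p, else -1. Note that the
    next position is computed from the current position alone -- the
    earlier history of the walk is never consulted.
    """
    position = start
    path = [position]
    for _ in range(steps):
        position += 1 if random.random() < p else -1
        path.append(position)
    return path

random.seed(1)
path = simple_random_walk(10)
```

Each consecutive pair of positions differs by exactly 1, which is what makes this model so simple: the dependence between variables is of a very restricted form.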

In reality, a random walk is too simple a model to describe the world. We

need something more general, and these are Markov chains. By definition,

these are sequences of random variables that satisfy the Markov assumption. This assumption,

intuitively, says that the future depends only upon the current state, and not

how we got to the current state. It turns out that just given this assumption,

we can prove a lot about these chains.
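To make the Markov assumption concrete, here is a minimal sketch of a hypothetical two-state chain (the states "sunny"/"rainy" and the transition probabilities are invented for illustration, not from the course). The next state is sampled using only the current state; the path taken to reach it plays no role.

```python
import random

# Hypothetical transition probabilities: from each state, the list of
# (next state, probability) pairs. Rows sum to 1.
P = {
    "sunny": [("sunny", 0.9), ("rainy", 0.1)],
    "rainy": [("sunny", 0.5), ("rainy", 0.5)],
}

def next_state(current):
    """Sample the next state given only the current state.

    This function never sees the chain's history -- that is exactly
    the Markov assumption.
    """
    u, acc = random.random(), 0.0
    for state, prob in P[current]:
        acc += prob
        if u < acc:
            return state
    return state  # guard against floating-point rounding

random.seed(0)
chain = ["sunny"]
for _ in range(5):
    chain.append(next_state(chain[-1]))
```

The point of the definition is that this single table of transition probabilities determines the whole behaviour of the chain, which is why so much can be proved from the assumption alone.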