III Advanced Probability



0 Introduction
In some other places in the world, this course might be known as “Stochastic
Processes”. In addition to doing probability, a new component studied in the
course is time. We are going to study how things change over time.
In the first half of the course, we will focus on discrete time. A familiar
example is the simple random walk: we start at a point on a grid, and at
each time step, we jump to a neighbouring grid point at random. This gives a
sequence of random variables indexed by discrete time steps, which are related
to each other in interesting ways. In particular, we will consider martingales,
which enjoy some really nice convergence and “stopping” properties.
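The simple random walk described above can be sketched in a few lines of Python (a minimal illustration on the integers, one dimension of the grid; the function name is just for this sketch):

```python
import random

def simple_random_walk(steps, start=0):
    """Simulate a simple random walk on the integers: at each time
    step, jump one unit left or right with equal probability."""
    path = [start]
    for _ in range(steps):
        path.append(path[-1] + random.choice([-1, 1]))
    return path

# Each entry of `path` is one random variable in the sequence,
# indexed by discrete time; consecutive entries differ by exactly 1.
walk = simple_random_walk(10)
print(walk)
```

The dependence between the variables is visible here: each position is the previous one plus an independent step, which is exactly the structure martingale arguments exploit.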
In the second half of the course, we will look at continuous time. There
is a fundamental difference between the two, in that there is a nice topology
on the interval. This allows us to say things like we want our trajectories to
be continuous. On the other hand, this can cause some headaches because R
is uncountable. We will spend a lot of time thinking about Brownian motion,
whose discovery is often attributed to Robert Brown. We can think of this as the
limit as we take finer and finer steps in a random walk. It turns out this has a
very rich structure, and will tell us something about Laplace’s equation as well.
Apart from stochastic processes themselves, there are two main objects that
appear in this course. The first is the conditional expectation. Recall that if
we have a random variable X, we can obtain a number E[X], the expectation of
X. We can think of this as integrating out all the randomness of the system,
and just remembering the average. Conditional expectation will be a subtle
modification of this construction, where we don’t actually get a number, but
another random variable. The idea behind this is that we want to integrate out
some of the randomness in our random variable, but keep the rest.
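A small discrete example may make this concrete (this example, two fair dice, is an illustration and not from the notes): let X be the total of two dice and Y the first die. Then E[X | Y] is a random variable, namely a function of Y, taking the value y + 3.5 when Y = y. We have integrated out the second die but kept the randomness of the first.

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair dice. X = total, Y = first die.
omega = list(product(range(1, 7), repeat=2))

def cond_exp_given_first(y):
    """E[X | Y = y]: average the total over outcomes where the
    first die shows y. Exact arithmetic via Fraction."""
    vals = [a + b for (a, b) in omega if a == y]
    return Fraction(sum(vals), len(vals))

# E[X | Y] is the random variable y -> y + 7/2.
for y in range(1, 7):
    print(y, cond_exp_given_first(y))
```

Averaging this random variable over Y recovers the plain expectation E[X] = 7, an instance of the tower property that will recur throughout the course.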
Another main object is the stopping time. For example, if we have a production
line that produces a random number of outputs at each time step, then we can
ask how long it takes to produce a fixed number of goods. This is a nice random
time, which we call a stopping time. The niceness follows from the fact that
when the time comes, we know it. An example of a random time that is not nice
is the last day it rains in Cambridge in a particular month, since on that day,
we don’t necessarily know that it is in fact the last day.
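The production-line example can be sketched directly (the output distribution, 0 to 3 goods per step, is an arbitrary choice for illustration): the random time T is the first step at which cumulative output reaches the target, and whether T has occurred by step n depends only on the outputs observed up to step n.

```python
import random

def first_time_to_produce(target, rng=None):
    """Stopping time T: the first step at which cumulative output
    reaches `target`. Deciding whether T <= n needs only the outputs
    up to time n, which is what makes T a stopping time."""
    rng = rng or random.Random(1)
    total, t = 0, 0
    while total < target:
        total += rng.randint(0, 3)  # random output this step
        t += 1
    return t, total

t, total = first_time_to_produce(20)
print(t, total)
```

By contrast, the "last rainy day" of a month cannot be written this way: deciding it requires looking at the whole month, i.e. at the future.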
At the end of the course, we will say a little bit about large deviations.