Part III — Advanced Probability
Based on lectures by M. Lis
Notes taken by Dexter Chua
Michaelmas 2017
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
The aim of the course is to introduce students to advanced topics in modern probability
theory. The emphasis is on tools required in the rigorous analysis of stochastic
processes, such as Brownian motion, and in applications where probability theory plays
an important role.
Review of measure and integration:
sigma-algebras, measures and filtrations;
integrals and expectation; convergence theorems; product measures, independence and
Fubini’s theorem.
Conditional expectation:
Discrete case, Gaussian case, conditional density functions;
existence and uniqueness; basic properties.
Martingales:
Martingales and submartingales in discrete time; optional stopping;
Doob’s inequalities, upcrossings, martingale convergence theorems; applications of
martingale techniques.
Stochastic processes in continuous time:
Kolmogorov’s criterion, regularization
of paths; martingales in continuous time.
Weak convergence:
Definitions and characterizations; convergence in distribution,
tightness, Prokhorov’s theorem; characteristic functions, Lévy’s continuity theorem.
Sums of independent random variables:
Strong laws of large numbers; central
limit theorem; Cramér’s theory of large deviations.
Brownian motion:
Wiener’s existence theorem, scaling and symmetry properties;
martingales associated with Brownian motion, the strong Markov property, hitting
times; properties of sample paths, recurrence and transience; Brownian motion and the
Dirichlet problem; Donsker’s invariance principle.
Poisson random measures: Construction and properties; integrals.
Lévy processes: Lévy–Khinchin theorem.
Pre-requisites
A basic familiarity with measure theory and the measure-theoretic formulation of
probability theory is very helpful. These foundational topics will be reviewed at the
beginning of the course, but students unfamiliar with them are expected to consult the
literature (for instance, Williams’ book) to strengthen their understanding.
Contents
0 Introduction
1 Some measure theory
1.1 Review of measure theory
1.2 Conditional expectation
2 Martingales in discrete time
2.1 Filtrations and martingales
2.2 Stopping time and optional stopping
2.3 Martingale convergence theorems
2.4 Applications of martingales
3 Continuous time stochastic processes
4 Weak convergence of measures
5 Brownian motion
5.1 Basic properties of Brownian motion
5.2 Harmonic functions and Brownian motion
5.3 Transience and recurrence
5.4 Donsker’s invariance principle
6 Large deviations
0 Introduction
In some other places in the world, this course might be known as “Stochastic
Processes”. In addition to doing probability, a new component studied in the
course is time. We are going to study how things change over time.
In the first half of the course, we will focus on discrete time. A familiar
example is the simple random walk — we start at a point on a grid, and at
each time step, we jump to a neighbouring grid point randomly. This gives a
sequence of random variables indexed by discrete time steps, which are related to
each other in interesting ways. In particular, we will consider martingales, which
enjoy some really nice convergence and “stopping” properties.
In the second half of the course, we will look at continuous time. There
is a fundamental difference between the two, in that there is a nice topology
on the interval. This allows us to say things like we want our trajectories to
be continuous. On the other hand, this can cause some headaches because $\mathbb{R}$
is uncountable. We will spend a lot of time thinking about Brownian motion,
whose discovery is often attributed to Robert Brown. We can think of this as the
limit as we take finer and finer steps in a random walk. It turns out this has a
very rich structure, and will tell us something about Laplace’s equation as well.
Apart from stochastic processes themselves, there are two main objects that
appear in this course. The first is the conditional expectation. Recall that if we
have a random variable $X$, we can obtain a number $E[X]$, the expectation of $X$.
We can think of this as integrating out all the randomness of the system,
and just remembering the average. Conditional expectation will be some subtle
modification of this construction, where we don't actually get a number, but
another random variable. The idea behind this is that we want to integrate out
some of the randomness in our random variable, but keep the rest.
Another main object is the stopping time. For example, if we have a production
line that produces a random number of outputs at each time step, then we can ask how
much time it takes to produce a fixed number of goods. This is a nice random
time, which we call a stopping time. The niceness follows from the fact that
when the time comes, we know it. An example that is not nice is, for example,
the last day it rains in Cambridge in a particular month, since on that last day,
we don’t necessarily know that it is in fact the last day.
At the end of the course, we will say a little bit about large deviations.
1 Some measure theory
1.1 Review of measure theory
To make the course as self-contained as possible, we shall begin with some review
of measure theory. On the other hand, anyone who doesn't already know measure
theory is advised to learn it properly before starting this course.
Definition ($\sigma$-algebra). Let $E$ be a set. A subset $\mathcal{E}$ of the power set $\mathcal{P}(E)$ is
called a $\sigma$-algebra (or $\sigma$-field) if
(i) $\emptyset \in \mathcal{E}$;
(ii) if $A \in \mathcal{E}$, then $A^C = E \setminus A \in \mathcal{E}$;
(iii) if $A_1, A_2, \ldots \in \mathcal{E}$, then $\bigcup_{n=1}^\infty A_n \in \mathcal{E}$.
Definition (Measurable space). A measurable space is a set with a σ-algebra.
Definition (Borel $\sigma$-algebra). Let $E$ be a topological space with topology $\mathcal{T}$.
Then the Borel $\sigma$-algebra $\mathcal{B}(E)$ on $E$ is the $\sigma$-algebra generated by $\mathcal{T}$, i.e. the
smallest $\sigma$-algebra containing $\mathcal{T}$.
We are often going to look at $\mathcal{B}(\mathbb{R})$, and we will just write $\mathcal{B}$ for it.
Definition (Measure). A function $\mu : \mathcal{E} \to [0, \infty]$ is a measure if
– $\mu(\emptyset) = 0$;
– if $A_1, A_2, \ldots \in \mathcal{E}$ are disjoint, then
$$\mu\left(\bigcup_{i=1}^\infty A_i\right) = \sum_{i=1}^\infty \mu(A_i).$$
Definition (Measure space). A measure space is a measurable space equipped with a measure.
Definition (Measurable function). Let $(E_1, \mathcal{E}_1)$ and $(E_2, \mathcal{E}_2)$ be measurable
spaces. Then $f : E_1 \to E_2$ is said to be measurable if $A \in \mathcal{E}_2$ implies
$f^{-1}(A) \in \mathcal{E}_1$.
This is similar to the definition of a continuous function.
Notation. For $(E, \mathcal{E})$ a measurable space, we write $m\mathcal{E}$ for the set of measurable
functions $E \to \mathbb{R}$.
We write $m\mathcal{E}^+$ for the set of positive measurable functions, which are allowed to
take the value $\infty$.
Note that we do not allow the values $\pm\infty$ in the first case.
Theorem. Let $(E, \mathcal{E}, \mu)$ be a measure space. Then there exists a unique function
$\tilde\mu : m\mathcal{E}^+ \to [0, \infty]$ satisfying
– $\tilde\mu(1_A) = \mu(A)$, where $1_A$ is the indicator function of $A$;
– linearity: $\tilde\mu(\alpha f + \beta g) = \alpha\tilde\mu(f) + \beta\tilde\mu(g)$ if $\alpha, \beta \in \mathbb{R}_{\geq 0}$ and $f, g \in m\mathcal{E}^+$;
– monotone convergence: if $f_1, f_2, \ldots \in m\mathcal{E}^+$ are such that $f_n \nearrow f \in m\mathcal{E}^+$
pointwise a.e. as $n \to \infty$, then
$$\lim_{n\to\infty}\tilde\mu(f_n) = \tilde\mu(f).$$
We call $\tilde\mu$ the integral with respect to $\mu$, and we will simply write it as $\mu$ from now on.
Definition (Simple function). A function $f$ is simple if there exist $\alpha_n \in \mathbb{R}_{\geq 0}$
and $A_n \in \mathcal{E}$ for $1 \leq n \leq k$ such that
$$f = \sum_{n=1}^k \alpha_n 1_{A_n}.$$
From the first two properties of the integral, we see that
$$\mu(f) = \sum_{n=1}^k \alpha_n \mu(A_n).$$
One convenient observation is that a function is simple iff it takes on only finitely
many values. We then see that if $f \in m\mathcal{E}^+$, then
$$f_n = 2^{-n}\lfloor 2^n f\rfloor \wedge n$$
is a sequence of simple functions approximating $f$ from below. Thus, given
monotone convergence, this shows that
$$\mu(f) = \lim \mu(f_n),$$
and this proves the uniqueness part of the theorem.
Recall that
Definition (Almost everywhere). We say $f = g$ almost everywhere if
$$\mu(\{x \in E : f(x) \neq g(x)\}) = 0.$$
We say $f$ is a version of $g$.
Example. Let $\ell_n = 1_{[n, n+1]}$. Then $\mu(\ell_n) = 1$ for all $n$, but also $\ell_n \to 0$
pointwise and $\mu(0) = 0$. So the “monotone” part of monotone convergence is important.
So if the sequence is not monotone, then the integral need not preserve limits,
but it turns out we still have an inequality.
Lemma (Fatou's lemma). Let $f_n \in m\mathcal{E}^+$. Then
$$\mu\left(\liminf_n f_n\right) \leq \liminf_n \mu(f_n).$$
Proof. Apply monotone convergence to the sequence $g_n = \inf_{m \geq n} f_m$.
Of course, it would be useful to extend integration to functions that are not
necessarily positive.
Definition (Integrable function). We say a function $f \in m\mathcal{E}$ is integrable if
$\mu(|f|) < \infty$. We write $L^1(E)$ (or just $L^1$) for the space of integrable functions.
We extend $\mu$ to $L^1$ by
$$\mu(f) = \mu(f^+) - \mu(f^-),$$
where $f^\pm = (\pm f) \vee 0$.
If we want to be explicit about the measure and the $\sigma$-algebra, we can write
$L^1(E, \mathcal{E}, \mu)$.
Theorem (Dominated convergence theorem). If $f_n \in m\mathcal{E}$ and $f_n \to f$ a.e., and
there exists $g \in L^1$ such that $|f_n| \leq g$ a.e. for all $n$, then
$$\mu(f) = \lim \mu(f_n).$$
Proof. Apply Fatou's lemma to $g - f_n$ and $g + f_n$.
Definition (Product $\sigma$-algebra). Let $(E_1, \mathcal{E}_1)$ and $(E_2, \mathcal{E}_2)$ be measurable spaces.
Then the product $\sigma$-algebra $\mathcal{E}_1 \otimes \mathcal{E}_2$ is the smallest $\sigma$-algebra on $E_1 \times E_2$ containing
all sets of the form $A_1 \times A_2$, where $A_i \in \mathcal{E}_i$.
Theorem. If $(E_1, \mathcal{E}_1, \mu_1)$ and $(E_2, \mathcal{E}_2, \mu_2)$ are $\sigma$-finite measure spaces, then there
exists a unique measure $\mu$ on $\mathcal{E}_1 \otimes \mathcal{E}_2$ satisfying
$$\mu(A_1 \times A_2) = \mu_1(A_1)\mu_2(A_2)$$
for all $A_i \in \mathcal{E}_i$.
This is called the product measure.
Theorem (Fubini's/Tonelli's theorem). If $f = f(x_1, x_2) \in m\mathcal{E}^+$ with
$\mathcal{E} = \mathcal{E}_1 \otimes \mathcal{E}_2$, then the functions
$$x_1 \mapsto \int f(x_1, x_2)\,d\mu_2(x_2) \in m\mathcal{E}_1^+, \qquad x_2 \mapsto \int f(x_1, x_2)\,d\mu_1(x_1) \in m\mathcal{E}_2^+$$
are measurable, and
$$\int_E f\,d\mu = \int_{E_1}\left(\int_{E_2} f(x_1, x_2)\,d\mu_2(x_2)\right)d\mu_1(x_1) = \int_{E_2}\left(\int_{E_1} f(x_1, x_2)\,d\mu_1(x_1)\right)d\mu_2(x_2).$$
1.2 Conditional expectation
In this course, conditional expectation is going to play an important role, and
it is worth spending some time developing the theory. We are going to focus
on probability theory, which, mathematically, just means we assume $\mu(E) = 1$.
Practically, it is common to change notation to $E = \Omega$, $\mathcal{E} = \mathcal{F}$, $\mu = P$ and
$\int \cdot\,d\mu = E$. Measurable functions will be written as $X, Y, Z$, and will be called
random variables. Elements of $\mathcal{F}$ will be called events. An element $\omega \in \Omega$ will
be called a realization.
There are many ways we can think about conditional expectations. The first
one is how most of us first encountered conditional probability.
Suppose $B \in \mathcal{F}$, with $P(B) > 0$. Then the conditional probability of the
event $A$ given $B$ is
$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$
This should be interpreted as the probability that $A$ happened, given that $B$
happened. Since we assume $B$ happened, we ought to restrict to the subset of the
probability space where $B$ in fact happened. To make this a probability space,
we scale the probability measure by $P(B)$. Then given any event $A$, we take the
probability of $A \cap B$ under this probability measure, which is the formula given.
More generally, if $X$ is a random variable, the conditional expectation of $X$
given $B$ is just the expectation under this new probability measure,
$$E[X \mid B] = \frac{E[X 1_B]}{P(B)}.$$
We probably already know this from high school, and we are probably not quite
excited by it. One natural generalization would be to allow $B$ to vary.
Let $G_1, G_2, \ldots \in \mathcal{F}$ be disjoint events such that $\bigcup_n G_n = \Omega$. Let
$$\mathcal{G} = \sigma(G_1, G_2, \ldots) = \left\{\bigcup_{n \in I} G_n : I \subseteq \mathbb{N}\right\}.$$
Let $X \in L^1$. We then define
$$Y = \sum_{n=1}^\infty E(X \mid G_n)1_{G_n}.$$
Let's think about what this is saying. Suppose a random outcome $\omega$ happens.
To compute $Y$, we figure out which of the $G_n$ our $\omega$ belongs to. Say $\omega \in G_k$.
Then $Y$ returns the expected value of $X$ given that we live in $G_k$. In
this process, we have forgotten the exact value of $\omega$. All that matters is which
$G_n$ the outcome belongs to. We can “visually” think of the $G_n$ as cutting up
the sample space $\Omega$ into compartments. We then average out $X$ in each of these
compartments to obtain $Y$. This is what we are going to call the conditional
expectation of $X$ given $\mathcal{G}$, written $E(X \mid \mathcal{G})$.
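This averaging construction is concrete enough to experiment with numerically. The following is a minimal Python sketch, not from the lectures: the choices $\Omega = [0,1)$ with the uniform measure, the partition of $\Omega$ into four quarters, and $X(\omega) = \sin(2\pi\omega)$ are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch: Omega = [0, 1) approximated by N uniform samples,
# partition G_0, ..., G_3 = the four quarters of [0, 1).
N = 10**6
omega = rng.random(N)                 # realizations omega
X = np.sin(2 * np.pi * omega)         # an arbitrary random variable X(omega)

compartment = np.floor(4 * omega).astype(int)      # n such that omega is in G_n
means = np.array([X[compartment == n].mean() for n in range(4)])
Y = means[compartment]                # Y(omega) = E(X | G_n) on G_n

# Check the characterizing property E[Y 1_A] = E[X 1_A] for A = G_2:
A = compartment == 2
print((Y * A).mean(), (X * A).mean())              # approximately equal

The two printed numbers agree up to Monte Carlo error, which is exactly the second property in the lemma below, tested on one generator of $\mathcal{G}$.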
Ultimately, the characterizing property of Y is the following lemma:
Lemma. The conditional expectation $Y = E(X \mid \mathcal{G})$ satisfies the following
properties:
– $Y$ is $\mathcal{G}$-measurable;
– $Y \in L^1$, and
$$E[Y 1_A] = E[X 1_A]$$
for all $A \in \mathcal{G}$.
Proof. It is clear that $Y$ is $\mathcal{G}$-measurable. To show it is in $L^1$, we compute
$$E|Y| = E\left|\sum_{n=1}^\infty E(X \mid G_n)1_{G_n}\right| \leq E\sum_{n=1}^\infty E(|X| \mid G_n)1_{G_n} = \sum_{n=1}^\infty E\big(E(|X| \mid G_n)1_{G_n}\big) = \sum_{n=1}^\infty E\big(|X|1_{G_n}\big) = E\sum_{n=1}^\infty |X|1_{G_n} = E|X| < \infty,$$
where we used monotone convergence twice to swap the expectation and the sum.
The final part is also clear, since we can explicitly enumerate the elements of
$\mathcal{G}$ and check that they all satisfy the last property.
It turns out that for any $\sigma$-subalgebra $\mathcal{G} \subseteq \mathcal{F}$, we can construct the conditional
expectation $E(X \mid \mathcal{G})$, and it is uniquely characterized by the above two properties.
Theorem (Existence and uniqueness of conditional expectation). Let $X \in L^1$,
and let $\mathcal{G} \subseteq \mathcal{F}$ be a $\sigma$-subalgebra. Then there exists a random variable $Y$ such that
– $Y$ is $\mathcal{G}$-measurable;
– $Y \in L^1$, and $E[X1_A] = E[Y1_A]$ for all $A \in \mathcal{G}$.
Moreover, if $Y'$ is another random variable satisfying these conditions, then
$Y' = Y$ almost surely.
We call $Y$ a (version of) the conditional expectation of $X$ given $\mathcal{G}$.
We will write the conditional expectation as $E(X \mid \mathcal{G})$, and if $X = 1_A$, we will
write $P(A \mid \mathcal{G}) = E(1_A \mid \mathcal{G})$.
Recall also that if $Z$ is a random variable, then $\sigma(Z) = \{Z^{-1}(B) : B \in \mathcal{B}\}$.
In this case, we will write $E(X \mid Z) = E(X \mid \sigma(Z))$.
By, say, bounded convergence, it follows from the second condition that
$$E[XZ] = E[YZ]$$
for all bounded $\mathcal{G}$-measurable random variables $Z$.
Proof. We first consider the case where $X \in L^2(\Omega, \mathcal{F}, P)$. We know from
functional analysis that for any $\mathcal{G} \subseteq \mathcal{F}$, the space $L^2(\mathcal{G})$ is a Hilbert space with
inner product
$$\langle X, Y\rangle = E[XY].$$
In particular, $L^2(\mathcal{G})$ is a closed subspace of $L^2(\mathcal{F})$. We can then define $Y$
to be the orthogonal projection of $X$ onto $L^2(\mathcal{G})$. It is immediate that $Y$ is
$\mathcal{G}$-measurable. For the second part, we use that $X - Y$ is orthogonal to $L^2(\mathcal{G})$,
since that's what orthogonal projection is supposed to do. So
$$E[(X - Y)Z] = 0$$
for all $Z \in L^2(\mathcal{G})$. In particular, since the measure space is finite, the indicator
function of any measurable subset is in $L^2$. So we are done.
We next consider the case where $X \in m\mathcal{E}^+$. We define
$$X_n = X \wedge n.$$
We want to use monotone convergence to obtain our result. To do so, we need
the following result:
Claim. If $(X, Y)$ and $(X', Y')$ satisfy the conditions of the theorem, and $X' \geq X$
a.s., then $Y' \geq Y$ a.s.
Proof. Define the event $A = \{Y' \leq Y\} \in \mathcal{G}$, and consider the random variable
$Z = (Y - Y')1_A$. Then $Z \geq 0$. We have
$$E[Y'1_A] = E[X'1_A] \geq E[X1_A] = E[Y1_A].$$
So it follows that $E[(Y - Y')1_A] \leq 0$, and in fact $E[Z] = 0$. Since $Z \geq 0$, this
forces $Z = 0$ a.s., so $Y' \geq Y$ a.s.
We can now define $Y_n = E(X_n \mid \mathcal{G})$, and by the claim we may pick versions so
that $(Y_n)$ is increasing. We then take $Y_\infty = \lim Y_n$. Then $Y_\infty$ is certainly
$\mathcal{G}$-measurable, and by monotone convergence, if $A \in \mathcal{G}$, then
$$E[X1_A] = \lim E[X_n 1_A] = \lim E[Y_n 1_A] = E[Y_\infty 1_A].$$
Now if $EX < \infty$, then $EY_\infty = EX < \infty$. So we know $Y_\infty$ is finite a.s., and we
can define $Y = Y_\infty 1_{Y_\infty < \infty}$.
Finally, we treat arbitrary $X \in L^1$. We can write $X = X^+ - X^-$, and
then define $Y^\pm = E(X^\pm \mid \mathcal{G})$, and take $Y = Y^+ - Y^-$.
Uniqueness is then clear from the claim.
Lemma. If $Y$ is $\sigma(Z)$-measurable, then there exists a Borel-measurable
$h : \mathbb{R} \to \mathbb{R}$ such that $Y = h(Z)$. In particular,
$$E(X \mid Z) = h(Z) \text{ a.s.}$$
for some Borel-measurable $h : \mathbb{R} \to \mathbb{R}$.
We can then define $E(X \mid Z = z) = h(z)$. The point of doing this is that we
want to allow for the case where in fact we have $P(Z = z) = 0$, in which case
our original definition does not make sense.
Exercise. Consider $X \in L^1$ and $Z : \Omega \to \mathbb{N}$ discrete. Compute $E(X \mid Z)$ and
compare our different definitions of conditional expectation.
Example. Let $(U, V) \in \mathbb{R}^2$ have density $f_{U,V}(u, v)$, so that for any $B_1, B_2 \in \mathcal{B}$,
we have
$$P(U \in B_1, V \in B_2) = \int_{B_1}\left(\int_{B_2} f_{U,V}(u, v)\,dv\right)du.$$
We want to compute $E(h(V) \mid U)$, where $h : \mathbb{R} \to \mathbb{R}$ is Borel measurable. We
can define
$$f_U(u) = \int_{\mathbb{R}} f_{U,V}(u, v)\,dv,$$
and we define the conditional density of $V$ given $U$ by
$$f_{V|U}(v \mid u) = \frac{f_{U,V}(u, v)}{f_U(u)}.$$
We define
$$g(u) = \int h(v) f_{V|U}(v \mid u)\,dv.$$
We claim that $E(h(V) \mid U)$ is just $g(U)$.
To check this, we show that it satisfies the two characterizing conditions. It is clear
that it is $\sigma(U)$-measurable. To check the second condition, fix an $A \in \sigma(U)$.
Then $A = \{U \in B\}$ for some $B \in \mathcal{B}$. Then
$$E(h(V)1_A) = \iint h(v)1_{u\in B} f_{U,V}(u, v)\,du\,dv = \iint h(v)1_{u\in B} f_{V|U}(v \mid u)f_U(u)\,du\,dv = \int g(u)1_{u\in B} f_U(u)\,du = E(g(U)1_A),$$
as desired.
The point of this example is that to compute conditional expectations, we
use our intuition to guess what the conditional expectation should be, and then
check that it satisfies the two uniquely characterizing properties.
Example. Suppose $(X, W)$ is Gaussian, i.e. for every linear function $\varphi : \mathbb{R}^2 \to \mathbb{R}$,
the quantity $\varphi(X, W)$ is Gaussian.
One nice property of Gaussians is that lack of correlation implies independence.
We want to compute $E(X \mid W)$. Note that if $Y$ is such that $EX = EY$,
$X - Y$ is independent of $W$, and $Y$ is $\sigma(W)$-measurable, then $Y = E(X \mid W)$, since
$$E[(X - Y)1_A] = 0$$
for all $A \in \sigma(W)$.
The guess is that we want $Y$ to be a Gaussian variable. We put $Y = aW + b$.
Then $EX = EY$ implies we must have
$$aEW + b = EX. \tag{$*$}$$
The independence part requires $\mathrm{cov}(X - Y, W) = 0$. Since covariance is linear,
we have
$$0 = \mathrm{cov}(X - Y, W) = \mathrm{cov}(X, W) - \mathrm{cov}(aW + b, W) = \mathrm{cov}(X, W) - a\,\mathrm{cov}(W, W).$$
Recalling that $\mathrm{cov}(W, W) = \mathrm{var}(W)$, we need
$$a = \frac{\mathrm{cov}(X, W)}{\mathrm{var}(W)}.$$
This then allows us to use $(*)$ to compute $b$ as well. This is how we compute the
conditional expectation of Gaussians.
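The formula is easy to sanity-check by simulation. Below is a hedged Python sketch (not from the lectures; the mean vector, covariance matrix, and conditioning window are arbitrary illustrative choices): we compare $aW + b$ with a direct Monte Carlo estimate of $E(X \mid W = w_0)$.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative jointly Gaussian pair (X, W).
n = 10**6
cov = np.array([[2.0, 0.8],
                [0.8, 1.0]])
X, W = rng.multivariate_normal([1.0, -0.5], cov, size=n).T

a = np.cov(X, W)[0, 1] / np.var(W)    # a = cov(X, W) / var(W)
b = X.mean() - a * W.mean()           # b from (*)

w0 = 0.3                              # condition on W near w0 (arbitrary)
window = np.abs(W - w0) < 0.01
print(X[window].mean(), a * w0 + b)   # the two should roughly agree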
We note some immediate properties of conditional expectation. As usual,
all (in)equality and convergence statements are to be taken with the quantifier
“almost surely”.
Proposition.
(i) $E(X \mid \mathcal{G}) = X$ iff $X$ is $\mathcal{G}$-measurable.
(ii) $E(E(X \mid \mathcal{G})) = EX$.
(iii) If $X \geq 0$ a.s., then $E(X \mid \mathcal{G}) \geq 0$.
(iv) If $X$ and $\mathcal{G}$ are independent, then $E(X \mid \mathcal{G}) = E[X]$.
(v) If $\alpha, \beta \in \mathbb{R}$ and $X_1, X_2 \in L^1$, then
$$E(\alpha X_1 + \beta X_2 \mid \mathcal{G}) = \alpha E(X_1 \mid \mathcal{G}) + \beta E(X_2 \mid \mathcal{G}).$$
(vi) Monotone convergence: if $X_n \nearrow X$, then
$$E(X_n \mid \mathcal{G}) \nearrow E(X \mid \mathcal{G}).$$
(vii) Fatou's lemma: if $X_n$ are non-negative measurable, then
$$E\left(\liminf_{n\to\infty} X_n \,\Big|\, \mathcal{G}\right) \leq \liminf_{n\to\infty} E(X_n \mid \mathcal{G}).$$
(viii) Dominated convergence theorem: if $X_n \to X$ and there is $Y \in L^1$ such that
$Y \geq |X_n|$ for all $n$, then
$$E(X_n \mid \mathcal{G}) \to E(X \mid \mathcal{G}).$$
(ix) Jensen's inequality: if $c : \mathbb{R} \to \mathbb{R}$ is convex, then
$$E(c(X) \mid \mathcal{G}) \geq c(E(X \mid \mathcal{G})).$$
(x) Tower property: if $\mathcal{H} \subseteq \mathcal{G}$, then
$$E(E(X \mid \mathcal{G}) \mid \mathcal{H}) = E(X \mid \mathcal{H}).$$
(xi) For $p \geq 1$, $\|E(X \mid \mathcal{G})\|_p \leq \|X\|_p$.
(xii) If $Z$ is bounded and $\mathcal{G}$-measurable, then
$$E(ZX \mid \mathcal{G}) = Z\,E(X \mid \mathcal{G}).$$
(xiii) Let $X \in L^1$ and $\mathcal{G}, \mathcal{H} \subseteq \mathcal{F}$. Assume that $\sigma(X, \mathcal{G})$ is independent of $\mathcal{H}$.
Then
$$E(X \mid \mathcal{G}) = E(X \mid \sigma(\mathcal{G}, \mathcal{H})).$$
Proof.
(i) Clear.
(ii) Take $A = \Omega$.
(iii) Shown in the proof of existence.
(iv) Clear by the property of the expected value of independent variables.
(v) Clear, since the RHS satisfies the unique characterizing property of the LHS.
(vi) Clear from the construction.
(vii) Same as the unconditional proof, using the previous property.
(viii) Same as the unconditional proof, using the previous property.
(ix) Same as the unconditional proof.
(x) The LHS satisfies the characterizing property of the RHS.
(xi) Using the convexity of $|x|^p$, Jensen's inequality tells us
$$\|E(X \mid \mathcal{G})\|_p^p = E|E(X \mid \mathcal{G})|^p \leq E(E(|X|^p \mid \mathcal{G})) = E|X|^p = \|X\|_p^p.$$
(xii) Suppose first that $Z = 1_B$ with $B \in \mathcal{G}$. Then for any $A \in \mathcal{G}$,
$$E(Z\,E(X \mid \mathcal{G})1_A) = E(E(X \mid \mathcal{G})1_{A\cap B}) = E(X1_{A\cap B}) = E(ZX1_A).$$
So the result holds for indicators. Linearity then implies the result for $Z$ simple, and
we finish by applying our favourite convergence theorems.
(xiii) Take $B \in \mathcal{H}$ and $A \in \mathcal{G}$. Then
$$E(E(X \mid \sigma(\mathcal{G}, \mathcal{H}))1_{A\cap B}) = E(X1_{A\cap B}) = E(X1_A)P(B) = E(E(X \mid \mathcal{G})1_A)P(B) = E(E(X \mid \mathcal{G})1_{A\cap B}).$$
If instead of $A \cap B$ we had an arbitrary $\sigma(\mathcal{G}, \mathcal{H})$-measurable set, then we would
be done. But we are fine, since the sets of the form $A \cap B$ with $A \in \mathcal{G}$, $B \in \mathcal{H}$
form a generating $\pi$-system for $\sigma(\mathcal{G}, \mathcal{H})$.
We shall end with the following key lemma. We will later use it to show that
many of our martingales are uniformly integrable.
Lemma. If $X \in L^1$, then the family of random variables $Y_{\mathcal{G}} = E(X \mid \mathcal{G})$, ranging
over all $\sigma$-subalgebras $\mathcal{G} \subseteq \mathcal{F}$, is uniformly integrable.
In other words, for all $\varepsilon > 0$, there exists $\lambda > 0$ such that
$$E(|Y_{\mathcal{G}}|1_{|Y_{\mathcal{G}}| > \lambda}) < \varepsilon$$
for all $\mathcal{G}$.
Proof. Fix $\varepsilon > 0$. Then there exists $\delta > 0$ such that $E|X|1_A < \varepsilon$ for any $A$ with
$P(A) < \delta$.
Take $Y = E(X \mid \mathcal{G})$. Then by (conditional) Jensen, we know
$$|Y| \leq E(|X| \mid \mathcal{G}).$$
In particular, we have $E|Y| \leq E|X|$. By Markov's inequality, we have
$$P(|Y| \geq \lambda) \leq \frac{E|Y|}{\lambda} \leq \frac{E|X|}{\lambda}.$$
So take $\lambda$ such that $E|X|/\lambda < \delta$. Then
$$E(|Y|1_{|Y|\geq\lambda}) \leq E(E(|X| \mid \mathcal{G})1_{|Y|\geq\lambda}) = E(|X|1_{|Y|\geq\lambda}) < \varepsilon,$$
using that $1_{|Y|\geq\lambda}$ is a $\mathcal{G}$-measurable function.
2 Martingales in discrete time
2.1 Filtrations and martingales
We would like to model some random variable that “evolves with time”. For
example, in a simple random walk, $X_n$ could be our position at time $n$.
To do so, we would like to have some $\sigma$-algebras $\mathcal{F}_n$ that tell us the “information
we have at time $n$”. This structure is known as a filtration.
Definition (Filtration). A filtration is a sequence of $\sigma$-algebras $(\mathcal{F}_n)_{n\geq0}$ such
that $\mathcal{F} \supseteq \mathcal{F}_{n+1} \supseteq \mathcal{F}_n$ for all $n$. We define $\mathcal{F}_\infty = \sigma(\mathcal{F}_0, \mathcal{F}_1, \ldots) \subseteq \mathcal{F}$.
We will from now on assume $(\Omega, \mathcal{F}, P)$ is equipped with a filtration $(\mathcal{F}_n)_{n\geq0}$.
Definition (Stochastic process in discrete time). A stochastic process (in discrete
time) is a sequence of random variables $(X_n)_{n\geq0}$.
This is a very general definition, and in most cases we would want $X_n$ to
interact nicely with our filtration.
Definition (Natural filtration). The natural filtration of $(X_n)_{n\geq0}$ is given by
$$\mathcal{F}_n^X = \sigma(X_1, \ldots, X_n).$$
Definition (Adapted process). We say that $(X_n)_{n\geq0}$ is adapted (to $(\mathcal{F}_n)_{n\geq0}$)
if $X_n$ is $\mathcal{F}_n$-measurable for all $n \geq 0$. Equivalently, if $\mathcal{F}_n^X \subseteq \mathcal{F}_n$.
Definition (Integrable process). A process $(X_n)_{n\geq0}$ is integrable if $X_n \in L^1$ for
all $n \geq 0$.
We can now write down the definition of a martingale.
Definition (Martingale). An integrable adapted process $(X_n)_{n\geq0}$ is a martingale
if for all $n \geq m$, we have
$$E(X_n \mid \mathcal{F}_m) = X_m.$$
We say it is a super-martingale if
$$E(X_n \mid \mathcal{F}_m) \leq X_m,$$
and a sub-martingale if
$$E(X_n \mid \mathcal{F}_m) \geq X_m.$$
Note that it is enough to take $m = n - 1$ for all $n \geq 1$, using the tower
property.
The idea of a martingale is that we cannot predict whether $X_n$ will go up
or go down in the future even if we have all the information up to the present.
For example, if $X_n$ denotes the wealth of a gambler in a gambling game, then in
some sense $(X_n)_{n\geq0}$ being a martingale means the game is “fair” (in the sense
of a fair die).
Note that $(X_n)_{n\geq0}$ is a super-martingale iff $(-X_n)_{n\geq0}$ is a sub-martingale,
and if $(X_n)_{n\geq0}$ is a martingale, then it is both a super-martingale and a sub-martingale.
Often, what these extra notions buy us is that we can formulate
our results for super-martingales (or sub-martingales), and then by applying the
result to both $(X_n)_{n\geq0}$ and $(-X_n)_{n\geq0}$, we obtain the desired, stronger result
for martingales.
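The martingale property of the simple random walk is easy to probe numerically. A hedged Python sketch (not from the lectures): we cannot test $E(X_n \mid \mathcal{F}_m) = X_m$ directly, but an equivalent consequence is $E[(X_n - X_m)g(X_0, \ldots, X_m)] = 0$ for bounded functions $g$ of the past; the choice $g = \mathrm{sign}(X_m)$ below is arbitrary.

import numpy as np

rng = np.random.default_rng(2)

# Simple random walk: X_n = sum of n iid +-1 steps (an illustrative martingale).
paths = rng.choice([-1, 1], size=(10**6, 20)).cumsum(axis=1)
m, n = 9, 19
g = np.sign(paths[:, m])               # a bounded function of the past
print(np.mean((paths[:, n] - paths[:, m]) * g))   # approximately 0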
2.2 Stopping time and optional stopping
The optional stopping theorem says the definition of a martingale in fact implies
an a priori much stronger property. To formulate the optional stopping theorem,
we need the notion of a stopping time.
Definition (Stopping time). A stopping time is a random variable $T : \Omega \to \mathbb{N}_{\geq0} \cup \{\infty\}$
such that
$$\{T \leq n\} \in \mathcal{F}_n$$
for all $n \geq 0$.
This means that at time $n$, if we want to know whether $T$ has occurred, we can
determine it using the information we have at time $n$.
Note that $T$ is a stopping time iff $\{T = n\} \in \mathcal{F}_n$ for all $n$: if $T$ is a
stopping time, then
$$\{T = n\} = \{T \leq n\} \setminus \{T \leq n - 1\},$$
and $\{T \leq n-1\} \in \mathcal{F}_{n-1} \subseteq \mathcal{F}_n$. Conversely,
$$\{T \leq n\} = \bigcup_{k=0}^n \{T = k\} \in \mathcal{F}_n.$$
This will not be true in the continuous case.
Example. If $B \in \mathcal{B}(\mathbb{R})$, then we can define
$$T = \inf\{n : X_n \in B\}.$$
Then this is a stopping time. On the other hand,
$$T = \sup\{n : X_n \in B\}$$
is not a stopping time (in general).
Given a stopping time, we can make the following definition:
Definition ($X_T$). For a stopping time $T$, we define the random variable $X_T$ by
$$X_T(\omega) = X_{T(\omega)}(\omega)$$
on $\{T < \infty\}$, and $0$ otherwise.
Later, for suitable martingales, we will see that the limit $X_\infty = \lim_{n\to\infty} X_n$
makes sense. In that case, we define $X_T(\omega)$ to be $X_\infty(\omega)$ if $T(\omega) = \infty$.
Similarly, we can define
Definition (Stopped process). The stopped process is defined by
$$(X_n^T)_{n\geq0} = (X_{T(\omega)\wedge n}(\omega))_{n\geq0}.$$
This says we stop evolving the random variable $X$ once $T$ has occurred.
We would like to say that $X_T$ is “$\mathcal{F}_T$-measurable”, i.e. to compute $X_T$, we
only need to know the information up to time $T$. After some thought, we see
that the following is the correct definition of $\mathcal{F}_T$:
Definition ($\mathcal{F}_T$). For a stopping time $T$, define
$$\mathcal{F}_T = \{A \in \mathcal{F}_\infty : A \cap \{T \leq n\} \in \mathcal{F}_n \text{ for all } n\}.$$
This is easily seen to be a $\sigma$-algebra.
Example. If $T \equiv n$ is constant, then $\mathcal{F}_T = \mathcal{F}_n$.
There are some fairly immediate properties of these objects, whose proofs are
left as an exercise for the reader:
Proposition.
(i) If $T, S, (T_n)_{n\geq0}$ are all stopping times, then so are
$$T \vee S, \quad T \wedge S, \quad \sup_n T_n, \quad \inf_n T_n, \quad \limsup_n T_n, \quad \liminf_n T_n.$$
(ii) $\mathcal{F}_T$ is a $\sigma$-algebra.
(iii) If $S \leq T$, then $\mathcal{F}_S \subseteq \mathcal{F}_T$.
(iv) $X_T 1_{T<\infty}$ is $\mathcal{F}_T$-measurable.
(v) If $(X_n)$ is an adapted process, then so is $(X_n^T)_{n\geq0}$ for any stopping time $T$.
(vi) If $(X_n)$ is an integrable process, then so is $(X_n^T)_{n\geq0}$ for any stopping time $T$.
We now come to the fundamental property of martingales.
Theorem (Optional stopping theorem). Let $(X_n)_{n\geq0}$ be a super-martingale
and $S \leq T$ bounded stopping times. Then
$$EX_T \leq EX_S.$$
Proof. Follows from the next theorem.
What does this theorem mean? If $X$ is a martingale, then it is both a
super-martingale and a sub-martingale. So we can apply this to both $X$ and
$-X$, and so we have
$$E(X_T) = E(X_S).$$
In particular, since $0$ is a stopping time, we see that
$$EX_T = EX_0$$
for any bounded stopping time $T$.
Recall that martingales are supposed to model fair games. If we again think
of $X_n$ as the wealth at time $n$, and $T$ as the time we stop gambling, then this
says no matter how we choose $T$, as long as it is bounded, the expected wealth
at the end is the same as what we started with.
Example. Consider a simple random walk with $X_0 = 0$ (so $EX_0 = 0$; such a walk
hits $1$ almost surely), and the stopping time
$$T = \inf\{n : X_n = 1\}.$$
Then clearly we have $EX_T = 1 \neq EX_0$. So this tells us
$T$ is not a bounded stopping time!
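This is easy to watch numerically. A hedged Python sketch (not from the lectures; the truncation level $N$ and the number of paths are arbitrary choices): optional stopping does apply to the bounded time $T \wedge N$, so $EX_{T\wedge N} = 0$ for every fixed $N$, even though $X_T = 1$ on $\{T < \infty\}$.

import numpy as np

rng = np.random.default_rng(3)

# Illustrative sketch: 5000 random-walk paths of N = 2000 steps each.
steps = rng.choice([-1, 1], size=(5000, 2000))
walk = steps.cumsum(axis=1)
hit = walk.max(axis=1) >= 1                    # did the path reach 1 in time?
first = np.where(hit, (walk >= 1).argmax(axis=1), walk.shape[1] - 1)
stopped = walk[np.arange(len(walk)), first]    # X_{T ^ N} for each path
print(stopped.mean())                          # approximately 0, not 1

The rare paths that never reach $1$ within $N$ steps sit at large negative values, exactly balancing the many paths stopped at $+1$.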
Theorem. The following are equivalent:
(i) $(X_n)_{n\geq0}$ is a super-martingale.
(ii) For any bounded stopping time $T$ and any stopping time $S$,
$$E(X_T \mid \mathcal{F}_S) \leq X_{S\wedge T}.$$
(iii) $(X_n^T)_{n\geq0}$ is a super-martingale for any stopping time $T$.
(iv) For bounded stopping times $S, T$ such that $S \leq T$, we have
$$EX_T \leq EX_S.$$
In particular, (iv) implies (i).
Proof.
– (ii) ⇒ (iii): Consider $(X_n^{T'})_{n\geq0}$ for a stopping time $T'$. To check that this is a
super-martingale, we need to prove that whenever $m \leq n$,
$$E(X_{n\wedge T'} \mid \mathcal{F}_m) \leq X_{m\wedge T'}.$$
But this follows from (ii) above by taking $S = m$ and $T = T' \wedge n$.
– (ii) ⇒ (iv): Clear by the tower law.
– (iii) ⇒ (i): Take $T = \infty$.
– (i) ⇒ (ii): Assume $T \leq n$. Then
$$X_T = X_{S\wedge T} + \sum_{S\leq k<T}(X_{k+1} - X_k) = X_{S\wedge T} + \sum_{k=0}^n (X_{k+1} - X_k)1_{S\leq k<T}. \tag{$*$}$$
Now note that $\{S \leq k < T\} = \{S \leq k\} \cap \{T \leq k\}^c \in \mathcal{F}_k$. Let $A \in \mathcal{F}_S$.
Then $A \cap \{S \leq k\} \in \mathcal{F}_k$ by definition of $\mathcal{F}_S$. So $A \cap \{S \leq k < T\} \in \mathcal{F}_k$.
Multiplying $(*)$ by $1_A$ and taking expectations, we have
$$E(X_T1_A) = E(X_{S\wedge T}1_A) + \sum_{k=0}^n E\big((X_{k+1} - X_k)1_{A\cap\{S\leq k<T\}}\big).$$
But for all $k$, we know
$$E\big((X_{k+1} - X_k)1_{A\cap\{S\leq k<T\}}\big) \leq 0,$$
since $X$ is a super-martingale and $A \cap \{S \leq k < T\} \in \mathcal{F}_k$. So it follows that for
all $A \in \mathcal{F}_S$, we have
$$E(X_T1_A) \leq E(X_{S\wedge T}1_A).$$
But since $X_{S\wedge T}$ is $\mathcal{F}_{S\wedge T}$-measurable, it is in particular $\mathcal{F}_S$-measurable. So
it follows that for all $A \in \mathcal{F}_S$, we have
$$E(E(X_T \mid \mathcal{F}_S)1_A) \leq E(X_{S\wedge T}1_A).$$
So the result follows.
– (iv) ⇒ (i): Fix $m \leq n$ and $A \in \mathcal{F}_m$. Take
$$T = m1_A + n1_{A^C}.$$
One then manually checks that this is a stopping time. Now note that
$$X_T = X_m1_A + X_n1_{A^C}.$$
So we have
$$0 \geq E(X_n) - E(X_T) = E(X_n) - E(X_n1_{A^c}) - E(X_m1_A) = E(X_n1_A) - E(X_m1_A).$$
So $E(X_n1_A) \leq E(X_m1_A)$ for all $A \in \mathcal{F}_m$, which is precisely $E(X_n \mid \mathcal{F}_m) \leq X_m$.
2.3 Martingale convergence theorems
One particularly nice property of martingales is that they have nice convergence
properties. We shall begin by proving a pointwise version of martingale convergence.
Theorem (Almost sure martingale convergence theorem). Suppose $(X_n)_{n\geq0}$
is a super-martingale that is bounded in $L^1$, i.e. $\sup_n E|X_n| < \infty$. Then there
exists an $\mathcal{F}_\infty$-measurable $X_\infty \in L^1$ such that
$$X_n \to X_\infty \text{ a.s. as } n \to \infty.$$
To begin, we need a convenient characterization of when a sequence converges.
Definition (Upcrossing). Let $(x_n)$ be a sequence and $(a, b)$ an interval. An
upcrossing of $(a, b)$ by $(x_n)$ is a pair of indices $j < k$ such that $x_j \leq a$ and
$x_k \geq b$. We define
$$U_n[a, b, (x_n)] = \text{the maximal number of disjoint upcrossings with indices in } \{1, \ldots, n\},$$
$$U[a, b, (x_n)] = \lim_{n\to\infty} U_n[a, b, (x_n)].$$
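The counting in this definition is a simple scan. A hedged Python sketch of the procedure (not from the notes; the test sequence is an arbitrary example):

def upcrossings(xs, a, b):
    """Number of disjoint upcrossings of (a, b) completed by the sequence xs.

    Sketch of the definition above: look for a term <= a, then for a
    subsequent term >= b, and count each completed pair once.
    """
    count, below = 0, False
    for x in xs:
        if not below and x <= a:
            below = True           # found the start x_j <= a
        elif below and x >= b:
            below = False          # found x_k >= b: upcrossing completed
            count += 1
    return count

# The sequence below upcrosses (0, 1) twice:
print(upcrossings([-1, 2, 0, 0.5, 3], a=0, b=1))   # 2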
We can then make the following elementary observation:
Lemma. Let $(x_n)_{n\geq0}$ be a sequence of numbers. Then $x_n$ converges in $\mathbb{R}$ if and
only if
(i) $\liminf |x_n| < \infty$;
(ii) for all $a, b \in \mathbb{Q}$ with $a < b$, we have $U[a, b, (x_n)] < \infty$.
For our martingales, since they are bounded in $L^1$, Fatou's lemma tells us
$E\liminf|X_n| < \infty$. So $\liminf|X_n| < \infty$ almost surely. Thus, it remains to show
that for any fixed $a < b \in \mathbb{Q}$, we have $P(U[a, b, (X_n)] = \infty) = 0$. This is a
consequence of Doob's upcrossing lemma.
Lemma (Doob's upcrossing lemma). If $(X_n)$ is a super-martingale, then
$$(b - a)\,E(U_n[a, b, (X_n)]) \leq E(X_n - a)^-.$$
Proof. We define stopping times $S_k, T_k$ as follows:
– $T_0 = 0$;
– $S_{k+1} = \inf\{n : X_n \leq a, n \geq T_k\}$;
– $T_{k+1} = \inf\{n : X_n \geq b, n \geq S_{k+1}\}$.
Given an $n$, we want to count the number of upcrossings before $n$. Writing $U_n$
for $U_n[a, b, (X_n)]$, there are two cases to distinguish: either all upcrossings are
completed and the path has not dipped below $a$ again before time $n$, or time $n$
falls in the middle of an attempted upcrossing, i.e. $S_{U_n+1} \leq n < T_{U_n+1}$.
Now consider the sum
$$\sum_{k=1}^n\big(X_{T_k\wedge n} - X_{S_k\wedge n}\big).$$
In the first case, this is equal to
$$\sum_{k=1}^{U_n}(X_{T_k} - X_{S_k}) + \sum_{k=U_n+1}^n(X_n - X_n) \geq (b - a)U_n.$$
In the second case, it is equal to
$$\sum_{k=1}^{U_n}(X_{T_k} - X_{S_k}) + (X_n - X_{S_{U_n+1}}) + \sum_{k=U_n+2}^n(X_n - X_n) \geq (b - a)U_n + (X_n - X_{S_{U_n+1}}).$$
Thus, in general, we have
$$\sum_{k=1}^n\big(X_{T_k\wedge n} - X_{S_k\wedge n}\big) \geq (b - a)U_n + (X_n - X_{S_{U_n+1}\wedge n}).$$
Since $S_k \wedge n \leq T_k \wedge n$ are bounded stopping times and $X$ is a super-martingale,
each term on the LHS has non-positive expectation by optional stopping, and thus
$$0 \geq (b - a)EU_n + E(X_n - X_{S_{U_n+1}\wedge n}).$$
Then observe that
$$X_n - X_{S_{U_n+1}\wedge n} \geq -(X_n - a)^-,$$
since either $S_{U_n+1} > n$, in which case the left-hand side is $0$, or $X_{S_{U_n+1}} \leq a$, in
which case it is at least $X_n - a$. Rearranging gives the result.
The almost-sure martingale convergence theorem is very nice, but often it is
not good enough. For example, we might want convergence in $L^p$ instead. The
following example shows this isn't always possible:
Example. Suppose $(\rho_n)_{n\geq0}$ is a sequence of iid random variables with
$$P(\rho_n = 0) = \tfrac12 = P(\rho_n = 2).$$
Let
$$X_n = \prod_{k=0}^n \rho_k.$$
Then this is a martingale, and $EX_n = 1$. On the other hand, $X_n \to 0$ almost
surely, since eventually some $\rho_k$ vanishes. So $\|X_n - X_\infty\|_1$ does not converge to $0$.
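A hedged Python sketch of this example (sample sizes and horizons are arbitrary illustrative choices):

import numpy as np

rng = np.random.default_rng(4)

# X_n = rho_0 * ... * rho_n with P(rho = 0) = P(rho = 2) = 1/2.
rho = rng.choice([0, 2], size=(10**5, 30))
X = rho.cumprod(axis=1)
print(X[:, 9].mean())            # approximately 1 (E X_n = 1 for every n)
print((X[:, 29] > 0).mean())     # approximately 0: almost every path has died

For larger $n$ the empirical mean also drops to $0$: the mass of $EX_n = 1$ is carried by a surviving event of probability $2^{-n}$ that the simulation essentially never sees, which is precisely the failure of uniform integrability.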
For $p > 1$, if we want convergence in $L^p$, it is not surprising that we at least
need the sequence to be bounded in $L^p$. We will see that this is in fact sufficient.
For $p = 1$, however, we need a bit more than being bounded in $L^1$. We will need
uniform integrability.
To prove this, we need to establish some inequalities.
Lemma (Maximal inequality). Let $(X_n)$ be a sub-martingale that is non-negative,
or a martingale. Define
$$X_n^* = \sup_{k\leq n}|X_k|, \qquad X^* = \lim_{n\to\infty}X_n^*.$$
If $\lambda \geq 0$, then
$$\lambda\,P(X_n^* \geq \lambda) \leq E[|X_n|1_{X_n^*\geq\lambda}].$$
In particular, we have
$$\lambda\,P(X_n^* \geq \lambda) \leq E[|X_n|].$$
Markov's inequality says almost the same thing, but has $E[|X_n^*|]$ instead of
$E[|X_n|]$. So this is a stronger inequality.
Proof. If $X_n$ is a martingale, then $|X_n|$ is a sub-martingale. So it suffices to
consider the case of a non-negative sub-martingale. We define the stopping time
$$T = \inf\{n : X_n \geq \lambda\}.$$
By optional stopping,
$$EX_n \geq EX_{T\wedge n} = EX_T1_{T\leq n} + EX_n1_{T>n} \geq \lambda P(T \leq n) + EX_n1_{T>n} = \lambda P(X_n^* \geq \lambda) + EX_n1_{T>n}.$$
Rearranging, we get
$$\lambda P(X_n^* \geq \lambda) \leq EX_n - EX_n1_{T>n} = EX_n1_{T\leq n} = EX_n1_{X_n^*\geq\lambda}.$$
Lemma (Doob's $L^p$ inequality). For $p > 1$, we have
$$\|X_n^*\|_p \leq \frac{p}{p-1}\|X_n\|_p$$
for all $n$.
Proof. Let $k > 0$, and consider
$$\|X_n^* \wedge k\|_p^p = E|X_n^* \wedge k|^p.$$
We use the fact that
$$x^p = \int_0^x ps^{p-1}\,ds.$$
So we have
$$\|X_n^* \wedge k\|_p^p = E\int_0^{X_n^*\wedge k}px^{p-1}\,dx = E\int_0^k px^{p-1}1_{X_n^*\geq x}\,dx = \int_0^k px^{p-1}P(X_n^* \geq x)\,dx \quad\text{(Fubini)}$$
$$\leq \int_0^k px^{p-2}E\big[|X_n|1_{X_n^*\geq x}\big]\,dx \quad\text{(maximal inequality)} = E\left[|X_n|\int_0^k px^{p-2}1_{X_n^*\geq x}\,dx\right] \quad\text{(Fubini)}$$
$$= \frac{p}{p-1}E\big[|X_n|(X_n^*\wedge k)^{p-1}\big] \leq \frac{p}{p-1}\|X_n\|_p\big(E(X_n^*\wedge k)^p\big)^{\frac{p-1}{p}} \quad\text{(Hölder)} = \frac{p}{p-1}\|X_n\|_p\,\|X_n^*\wedge k\|_p^{p-1}.$$
Now divide by $\|X_n^*\wedge k\|_p^{p-1}$ and take the limit $k \to \infty$ (monotone convergence).
Theorem ($L^p$ martingale convergence theorem). Let $(X_n)_{n\geq0}$ be a martingale,
and $p > 1$. Then the following are equivalent:
(i) $(X_n)_{n\geq0}$ is bounded in $L^p$, i.e. $\sup_n E|X_n|^p < \infty$.
(ii) $(X_n)_{n\geq0}$ converges as $n \to \infty$ to a random variable $X_\infty \in L^p$ almost surely
and in $L^p$.
(iii) There exists a random variable $Z \in L^p$ such that
$$X_n = E(Z \mid \mathcal{F}_n).$$
Moreover, in (iii), we always have $X_\infty = E(Z \mid \mathcal{F}_\infty)$.
This gives a bijection between martingales bounded in $L^p$ and $L^p(\mathcal{F}_\infty)$,
sending $(X_n)_{n\geq0} \mapsto X_\infty$.
Proof.
– (i) ⇒ (ii): If $(X_n)_{n\geq0}$ is bounded in $L^p$, then it is bounded in $L^1$. So by
the almost sure martingale convergence theorem, we know $(X_n)_{n\geq0}$ converges
almost surely to some $X_\infty$. By Fatou's lemma, we have $X_\infty \in L^p$.
Now by monotone convergence and Doob's $L^p$ inequality, we have
$$\|X^*\|_p = \lim_n\|X_n^*\|_p \leq \frac{p}{p-1}\sup_n\|X_n\|_p < \infty.$$
By the triangle inequality, we have
$$|X_n - X_\infty| \leq 2X^* \text{ a.s.}$$
So by dominated convergence, we know that $X_n \to X_\infty$ in $L^p$.
– (ii) ⇒ (iii): Take $Z = X_\infty$. We want to prove that
$$X_m = E(X_\infty \mid \mathcal{F}_m).$$
To do so, we show that $\|X_m - E(X_\infty \mid \mathcal{F}_m)\|_p = 0$. For $n \geq m$, this is equal to
$$\|E(X_n \mid \mathcal{F}_m) - E(X_\infty \mid \mathcal{F}_m)\|_p = \|E(X_n - X_\infty \mid \mathcal{F}_m)\|_p \leq \|X_n - X_\infty\|_p \to 0$$
as $n \to \infty$, where the inequality uses (conditional) Jensen. But the left-hand side
does not depend on $n$. So it must be $0$, and we are done.
– (iii) ⇒ (i): Since conditional expectation decreases $L^p$ norms, we have
$\|X_n\|_p \leq \|Z\|_p$, so $(X_n)_{n\geq0}$ is $L^p$-bounded.
To show the “moreover” part, note that $\bigcup_{n\geq0}\mathcal{F}_n$ is a $\pi$-system that
generates $\mathcal{F}_\infty$. So it is enough to prove that
$$EX_\infty1_A = E(E(Z \mid \mathcal{F}_\infty)1_A)$$
for $A$ in this $\pi$-system. But if $A \in \mathcal{F}_N$, then
$$EX_\infty1_A = \lim_{n\to\infty}EX_n1_A = \lim_{n\to\infty}E(E(Z \mid \mathcal{F}_n)1_A) = \lim_{n\to\infty}E(E(Z \mid \mathcal{F}_\infty)1_A),$$
where the last step relies on the fact that $1_A$ is $\mathcal{F}_n$-measurable for $n \geq N$.
We finally finish off the $p = 1$ case with the additional uniform integrability
condition.
Theorem (Convergence in $L^1$). Let $(X_n)_{n\geq0}$ be a martingale. Then the following
are equivalent:
(i) $(X_n)_{n\geq0}$ is uniformly integrable.
(ii) $(X_n)_{n\geq0}$ converges almost surely and in $L^1$.
(iii) There exists $Z \in L^1$ such that $X_n = E(Z \mid \mathcal{F}_n)$ almost surely.
Moreover, $X_\infty = E(Z \mid \mathcal{F}_\infty)$.
The proof is very similar to the $L^p$ case.
Proof.
– (i) ⇒ (ii): Let $(X_n)_{n\geq0}$ be uniformly integrable. Then $(X_n)_{n\geq0}$ is bounded
in $L^1$. So $(X_n)_{n\geq0}$ converges to some $X_\infty$ almost surely. Then by measure
theory, uniform integrability implies that in fact $X_n \to X_\infty$ in $L^1$.
– (ii) ⇒ (iii): Same as the $L^p$ case.
– (iii) ⇒ (i): For any $Z \in L^1$, the collection $E(Z \mid \mathcal{G})$ ranging over all
$\sigma$-subalgebras $\mathcal{G}$ is uniformly integrable, by the key lemma of the previous section.
Thus, there is a bijection between uniformly integrable martingales and $L^1(\mathcal{F}_\infty)$.
We now revisit optional stopping for uniformly integrable martingales. Recall
that in the statement of optional stopping, we needed our stopping times to be
bounded. It turns out if we require our martingales to be uniformly integrable,
then we can drop this requirement.
Theorem. If $(X_n)_{n\geq0}$ is a uniformly integrable martingale, and $S, T$ are arbitrary
stopping times, then $E(X_T \mid \mathcal{F}_S) = X_{S\wedge T}$. In particular, $EX_T = EX_0$.
Note that we are now allowing arbitrary stopping times, so $T$ may be infinite
with non-zero probability. Hence we define
$$X_T = \sum_{n=0}^\infty X_n1_{T=n} + X_\infty1_{T=\infty}.$$
Proof. By optional stopping, for every $n$, we know that
$$E(X_{T\wedge n} \mid \mathcal{F}_S) = X_{S\wedge T\wedge n}.$$
We want to be able to take the limit as $n \to \infty$. To do so, we need to show
that things are uniformly integrable. First, we apply optional stopping to write
$X_{T\wedge n}$ as
$$X_{T\wedge n} = E(X_n \mid \mathcal{F}_{T\wedge n}) = E(E(X_\infty \mid \mathcal{F}_n) \mid \mathcal{F}_{T\wedge n}) = E(X_\infty \mid \mathcal{F}_{T\wedge n}).$$
So we know $(X_{T\wedge n})_{n\geq0}$ is uniformly integrable, and hence $X_{T\wedge n} \to X_T$ almost
surely and in $L^1$.
To understand $E(X_{T\wedge n} \mid \mathcal{F}_S)$, we note that
$$\|E(X_{T\wedge n} - X_T \mid \mathcal{F}_S)\|_1 \leq \|X_{T\wedge n} - X_T\|_1 \to 0 \text{ as } n \to \infty.$$
So it follows that $E(X_{T\wedge n} \mid \mathcal{F}_S) \to E(X_T \mid \mathcal{F}_S)$ as $n \to \infty$.
2.4 Applications of martingales
Having developed the theory, let us move on to some applications. Before we do
that, we need the notion of a backwards martingale.
Definition (Backwards filtration). A backwards filtration on a measurable space
$(E, \mathcal{E})$ is a sequence of $\sigma$-algebras $\hat{\mathcal{F}}_n \subseteq \mathcal{E}$ such that $\hat{\mathcal{F}}_{n+1} \subseteq \hat{\mathcal{F}}_n$. We define
$$\hat{\mathcal{F}}_\infty = \bigcap_{n\geq0}\hat{\mathcal{F}}_n.$$
Theorem. Let $Y \in L^1$, and let $\hat{\mathcal{F}}_n$ be a backwards filtration. Then
$$E(Y \mid \hat{\mathcal{F}}_n) \to E(Y \mid \hat{\mathcal{F}}_\infty)$$
almost surely and in $L^1$.
A process of this form is known as a backwards martingale.
Proof. We first show that $E(Y \mid \hat{\mathcal{F}}_n)$ converges. We then show that what it
converges to is indeed $E(Y \mid \hat{\mathcal{F}}_\infty)$.
We write
$$X_n = E(Y \mid \hat{\mathcal{F}}_n).$$
Observe that for all $n \geq 0$, the process $(X_{n-k})_{0\leq k\leq n}$ is a martingale by the tower
property, and so is $(-X_{n-k})_{0\leq k\leq n}$. Now notice that for all $a < b$, the number
of upcrossings of $[a, b]$ by $(X_k)_{0\leq k\leq n}$ is equal to the number of upcrossings of
$[-b, -a]$ by $(-X_{n-k})_{0\leq k\leq n}$.
Using the same arguments as for ordinary martingales, we conclude that $X_n \to X_\infty$
almost surely and in $L^1$ for some $X_\infty$.
To see that $X_\infty = E(Y \mid \hat{\mathcal{F}}_\infty)$, we notice that $X_\infty$ is $\hat{\mathcal{F}}_\infty$-measurable. So it is
enough to prove that
$$EX_\infty1_A = E(E(Y \mid \hat{\mathcal{F}}_\infty)1_A)$$
for all $A \in \hat{\mathcal{F}}_\infty$. Indeed, since $A \in \hat{\mathcal{F}}_n$ for every $n$, we have
$$EX_\infty1_A = \lim_{n\to\infty}EX_n1_A = \lim_{n\to\infty}E(E(Y \mid \hat{\mathcal{F}}_n)1_A) = E(Y1_A) = E(E(Y \mid \hat{\mathcal{F}}_\infty)1_A).$$
Theorem (Kolmogorov 0-1 law). Let $(X_n)_{n\geq0}$ be independent random variables,
and let
$$\hat{\mathcal{F}}_n = \sigma(X_{n+1}, X_{n+2}, \ldots).$$
Then the tail $\sigma$-algebra $\hat{\mathcal{F}}_\infty$ is trivial, i.e. $P(A) \in \{0, 1\}$ for all $A \in \hat{\mathcal{F}}_\infty$.
Proof. Let $\mathcal{F}_n = \sigma(X_1, \ldots, X_n)$. Then $\mathcal{F}_n$ and $\hat{\mathcal{F}}_n$ are independent. Then for
all $A \in \hat{\mathcal{F}}_\infty$, we have
$$E(1_A \mid \mathcal{F}_n) = P(A).$$
But the LHS is a martingale. So it converges almost surely and in $L^1$ to
$E(1_A \mid \mathcal{F}_\infty)$. But $1_A$ is $\mathcal{F}_\infty$-measurable, since $\hat{\mathcal{F}}_\infty \subseteq \mathcal{F}_\infty$. So the limit is just
$1_A$. So $1_A = P(A)$ almost surely, and we are done.
Theorem (Strong law of large numbers). Let $(X_n)_{n\geq1}$ be iid random variables
in $L^1$, with $EX_1 = \mu$. Define
$$S_n = \sum_{i=1}^n X_i.$$
Then
$$\frac{S_n}{n} \to \mu \text{ as } n \to \infty$$
almost surely and in $L^1$.
Proof. We have
$$S_n = E(S_n \mid S_n) = \sum_{i=1}^n E(X_i \mid S_n) = nE(X_1 \mid S_n),$$
using symmetry for the last equality. So the problem is equivalent to showing that
$E(X_1 \mid S_n) \to \mu$ as $n \to \infty$. This seems like something we can tackle with our
existing technology, except that the $S_n$ do not form a filtration.
Thus, define a backwards filtration
$$\hat{\mathcal{F}}_n = \sigma(S_n, S_{n+1}, S_{n+2}, \ldots) = \sigma(S_n, X_{n+1}, X_{n+2}, \ldots) = \sigma(S_n, \tau_n),$$
where $\tau_n = \sigma(X_{n+1}, X_{n+2}, \ldots)$. We now use the property of conditional
expectation that we have not used so far: adding independent information to a
conditional expectation doesn't change the result. Since $\tau_n$ is independent of
$\sigma(X_1, S_n)$, we know
$$\frac{S_n}{n} = E(X_1 \mid S_n) = E(X_1 \mid \hat{\mathcal{F}}_n).$$
Thus, by backwards martingale convergence, we know
$$\frac{S_n}{n} \to E(X_1 \mid \hat{\mathcal{F}}_\infty)$$
almost surely and in $L^1$. But by the Kolmogorov 0-1 law, we know $\hat{\mathcal{F}}_\infty$ is trivial.
So $E(X_1 \mid \hat{\mathcal{F}}_\infty)$ is almost surely constant, and the constant has to be
$$E(E(X_1 \mid \hat{\mathcal{F}}_\infty)) = E(X_1) = \mu.$$
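A hedged Python sketch of the statement along a single realization (the Exp(1) distribution, with $\mu = 1$, is an arbitrary illustrative choice):

import numpy as np

rng = np.random.default_rng(5)

# One realization of S_n / n for iid samples with mean mu = 1.
X = rng.exponential(1.0, size=10**6)
S_over_n = X.cumsum() / np.arange(1, len(X) + 1)
for n in (10**2, 10**4, 10**6):
    print(n, S_over_n[n - 1])        # approaches 1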
Recall that if $(E, \mathcal{E}, \mu)$ is a measure space and $f \in m\mathcal{E}^+$, then
$$\nu(A) = \mu(f1_A)$$
is a measure on $\mathcal{E}$. We say $f$ is a density of $\nu$ with respect to $\mu$.
We can ask an “inverse” question: given two different measures on $\mathcal{E}$, when
is it the case that one is given by a density with respect to the other?
A first observation is that if $\nu(A) = \mu(f1_A)$, then whenever $\mu(A) = 0$, we
must have $\nu(A) = 0$. However, this is not sufficient. For example, let $\mu$ be the
counting measure on $\mathbb{R}$, and $\nu$ the Lebesgue measure. Then our condition is
satisfied. However, if $\nu$ were given by a density $f$ with respect to $\mu$, we would have
$$0 = \nu(\{x\}) = \mu(f1_{\{x\}}) = f(x).$$
So $f \equiv 0$, but taking $f \equiv 0$ clearly doesn't give the Lebesgue measure.
The problem with this is that $\mu$ is not a $\sigma$-finite measure.
Theorem (Radon–Nikodym). Let $(\Omega, \mathcal{F})$ be a measurable space, and $\mathbb{Q}$ and $\mathbb{P}$
be two probability measures on $(\Omega, \mathcal{F})$. Then the following are equivalent:
(i) $\mathbb{Q}$ is absolutely continuous with respect to $\mathbb{P}$, i.e. for any $A \in \mathcal{F}$, if $\mathbb{P}(A) = 0$,
then $\mathbb{Q}(A) = 0$.
(ii) For any $\varepsilon > 0$, there exists $\delta > 0$ such that for all $A \in \mathcal{F}$, if $\mathbb{P}(A) \leq \delta$, then
$\mathbb{Q}(A) \leq \varepsilon$.
(iii) There exists a random variable $X \geq 0$ such that
$$\mathbb{Q}(A) = E_{\mathbb{P}}(X1_A).$$
In this case, $X$ is called the Radon–Nikodym derivative of $\mathbb{Q}$ with respect
to $\mathbb{P}$, and we write $X = \frac{d\mathbb{Q}}{d\mathbb{P}}$.
Note that this theorem works for all finite measures by scaling, and thus for
$\sigma$-finite measures by partitioning $\Omega$ into sets of finite measure.
Proof. We shall only treat the case where $\mathcal{F}$ is countably generated, i.e.
$\mathcal{F} = \sigma(F_1, F_2, \ldots)$ for some sets $F_i$. For example, the Borel $\sigma$-algebra of any
second-countable topological space is countably generated.
– (iii) ⇒ (i): Clear.
– (ii) ⇒ (iii): Define the filtration
$$\mathcal{F}_n = \sigma(F_1, F_2, \ldots, F_n).$$
Since $\mathcal{F}_n$ is finite, we can write it as
$$\mathcal{F}_n = \sigma(A_{n,1}, \ldots, A_{n,m_n}),$$
where each $A_{n,i}$ is an atom, i.e. if $B \subsetneq A_{n,i}$ and $B \in \mathcal{F}_n$, then $B = \emptyset$. We
define
$$X_n = \sum_{i=1}^{m_n}\frac{\mathbb{Q}(A_{n,i})}{\mathbb{P}(A_{n,i})}1_{A_{n,i}},$$
where we skip over the terms with $\mathbb{P}(A_{n,i}) = 0$. Note that this is exactly
designed so that for any $A \in \mathcal{F}_n$, we have
$$E_{\mathbb{P}}(X_n1_A) = E_{\mathbb{P}}\left(\sum_{A_{n,i}\subseteq A}\frac{\mathbb{Q}(A_{n,i})}{\mathbb{P}(A_{n,i})}1_{A_{n,i}}\right) = \mathbb{Q}(A).$$
Thus, if $A \in \mathcal{F}_n \subseteq \mathcal{F}_{n+1}$, we have
$$EX_{n+1}1_A = \mathbb{Q}(A) = EX_n1_A.$$
So we know that
$$E(X_{n+1} \mid \mathcal{F}_n) = X_n.$$
It is also immediate that $(X_n)_{n\geq0}$ is adapted. So it is a martingale.
We next show that $(X_n)_{n\geq0}$ is uniformly integrable. Given $\varepsilon > 0$, pick $\delta > 0$
as in (ii). By Markov's inequality, we have
$$\mathbb{P}(X_n \geq \lambda) \leq \frac{EX_n}{\lambda} = \frac1\lambda \leq \delta$$
for $\lambda$ large enough. Then, since $\{X_n \geq \lambda\} \in \mathcal{F}_n$,
$$E(X_n1_{X_n\geq\lambda}) = \mathbb{Q}(X_n \geq \lambda) \leq \varepsilon.$$
So we have shown uniform integrability, and so we know $X_n \to X$ almost
surely and in $L^1$ for some $X$. Then for all $A \in \bigcup_{n\geq0}\mathcal{F}_n$, we have
$$\mathbb{Q}(A) = \lim_{n\to\infty}EX_n1_A = EX1_A.$$
So $\mathbb{Q}(-)$ and $EX1_{(-)}$ agree on $\bigcup_{n\geq0}\mathcal{F}_n$, which is a generating $\pi$-system
for $\mathcal{F}$, so they must be the same.
– (i) ⇒ (ii): Suppose not. Then there exist some $\varepsilon > 0$ and some
$A_1, A_2, \ldots \in \mathcal{F}$ such that
$$\mathbb{Q}(A_n) \geq \varepsilon, \qquad \mathbb{P}(A_n) \leq \frac{1}{2^n}.$$
Since $\sum_n\mathbb{P}(A_n)$ is finite, by Borel–Cantelli, we know
$$\mathbb{P}\left(\limsup A_n\right) = 0.$$
On the other hand, by, say, dominated convergence, we have
$$\mathbb{Q}\left(\limsup A_n\right) = \mathbb{Q}\left(\bigcap_{n=1}^\infty\bigcup_{m=n}^\infty A_m\right) = \lim_{k\to\infty}\mathbb{Q}\left(\bigcup_{m=k}^\infty A_m\right) \geq \varepsilon.$$
This is a contradiction.
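The atom-by-atom construction in (ii) ⇒ (iii) can be carried out concretely. A hedged Python sketch (not from the lectures; $\Omega = [0, 1)$, $\mathbb{P}$ = Lebesgue measure, and the target density $3x^2$, so that $\mathbb{Q}([0, x]) = x^3$, are illustrative choices), with $\mathcal{F}_n$ generated by the $2^n$ dyadic atoms:

import numpy as np

def X_n(x, n):
    # X_n = Q(atom)/P(atom) on the dyadic atom containing x.
    k = np.floor(x * 2**n)
    left, right = k / 2**n, (k + 1) / 2**n
    Q = right**3 - left**3       # Q(atom), since Q([0, x]) = x^3
    P = right - left             # P(atom) = 2^{-n}
    return Q / P

xs = np.array([0.2, 0.5, 0.9])
for n in (2, 6, 12):
    print(n, X_n(xs, n))
print("3x^2 =", 3 * xs**2)       # the limit: the density dQ/dP

As $n$ grows, the martingale of atom ratios converges to the Radon–Nikodym derivative, exactly as in the proof.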
Finally, we end the part on discrete time processes by relating what we have
done to Markov chains.
Let's first recall what Markov chains are. Let $E$ be a countable space, and $\mu$
a measure on $E$. We write $\mu_x = \mu(\{x\})$, and, regarding $\mu$ as a row vector and $f$
as a column vector, $\mu(f) = \mu \cdot f$.
Definition (Transition matrix). A transition matrix is a matrix $P = (p_{xy})_{x,y\in E}$
such that each $p_x = (p_{xy})_{y\in E}$ is a probability measure on $E$.
Definition (Markov chain). An adapted process $(X_n)$ is called a Markov chain
if for any $n$ and $A \in \mathcal{F}_n$ such that $A \subseteq \{X_n = x\}$ and $P(A) > 0$, we have
$$P(X_{n+1} = y \mid A) = p_{xy}.$$
Definition (Harmonic function). A function $f : E \to \mathbb{R}$ is harmonic if $Pf = f$.
In other words, for any $x$, we have
$$\sum_y p_{xy}f(y) = f(x).$$
We then observe that
Proposition. If $f$ is harmonic and bounded, and $(X_n)_{n\geq0}$ is Markov, then
$(f(X_n))_{n\geq0}$ is a martingale.
Example. Let $(X_n)_{n\geq0}$ be iid $\mathbb{Z}$-valued random variables in $L^1$, with $E[X_i] = 0$.
Then
$$S_n = X_0 + \cdots + X_n$$
is a martingale and a Markov chain.
However, if $Z$ is a $\mathbb{Z}$-valued random variable, consider the process $(ZS_n)_{n\geq0}$
with the enlarged filtration $\sigma(\mathcal{F}_n, Z)$. Then this is a martingale but not a Markov
chain.
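Harmonicity is a purely linear-algebraic condition, so it is easy to verify on a finite chain. A hedged Python sketch (the gambler's-ruin chain on $\{0, \ldots, 5\}$ and $f(x) = x$ are illustrative choices, not from the lectures):

import numpy as np

# Gambler's-ruin chain: absorbing at 0 and N, fair coin in between.
N = 5
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0                  # absorbing boundary states
for x in range(1, N):
    P[x, x - 1] = P[x, x + 1] = 0.5      # fair step in the interior
f = np.arange(N + 1, dtype=float)        # candidate harmonic function f(x) = x
print(np.allclose(P @ f, f))             # True: P f = f, so f(X_n) is a martingale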
3 Continuous time stochastic processes
In the remainder of the course, we shall study continuous time processes. When
doing so, we have to be rather careful, since our processes are now indexed by
an uncountable set, while measure theory tends to only like countable things.
Ultimately, we would like to study Brownian motion, but we first develop some
general theory of continuous time processes.
Definition (Continuous time stochastic process). A continuous time stochastic
process is a family of random variables $(X_t)_{t\geq0}$ (or $(X_t)_{t\in[a,b]}$).
In the discrete case, if $T$ is a random variable taking values in $\{0, 1, 2, \ldots\}$,
then it makes sense to look at the new random variable $X_T$, since this is just
$$X_T = \sum_{n=0}^\infty X_n1_{T=n}.$$
This is obviously measurable, since it is a limit of measurable functions.
However, this is not necessarily the case if we have continuous time, unless
we assume some regularity conditions on our process. In some sense, we want
$X_t$ to depend “continuously” or at least “measurably” on $t$.
To make sense of $X_T$, it would be enough to require that the map
$$\varphi : (\omega, t) \mapsto X_t(\omega)$$
is measurable when we put the product $\sigma$-algebra on the domain. In this case,
$X_T(\omega) = \varphi(\omega, T(\omega))$ is measurable. In this formulation, we see why we didn't
have this problem with discrete time: the $\sigma$-algebra on $\mathbb{N}$ is just $\mathcal{P}(\mathbb{N})$, and so
all sets are measurable. This is not true for $\mathcal{B}([0, \infty))$.
However, being able to talk about $X_T$ is not the only thing we want. Often,
the following definitions are useful:
Definition (Cadlag function). We say a function $x : [0, \infty) \to \mathbb{R}$ is cadlag if
for all $t$,
$$\lim_{s\to t^+}x_s = x_t, \qquad \lim_{s\to t^-}x_s \text{ exists}.$$
The name cadlag (or càdlàg) comes from the French term continue à droite,
limite à gauche, meaning “right-continuous with left limits”.
Definition (Continuous/Cadlag stochastic process). We say a stochastic process
is continuous (resp. cadlag) if for any $\omega \in \Omega$, the map $t \mapsto X_t(\omega)$ is continuous
(resp. cadlag).
Notation. We write $C([0, \infty), \mathbb{R})$ for the space of all continuous functions
$[0, \infty) \to \mathbb{R}$, and $D([0, \infty), \mathbb{R})$ for the space of all cadlag functions.
We endow these spaces with the $\sigma$-algebra generated by the coordinate functions
$$(x_t)_{t\geq0} \mapsto x_s.$$
Then a continuous (or cadlag) process is a random variable taking values in
$C([0, \infty), \mathbb{R})$ (or $D([0, \infty), \mathbb{R})$).
Definition (Finite-dimensional distribution). A finite-dimensional distribution
of $(X_t)_{t\geq0}$ is a measure on $\mathbb{R}^n$ of the form
$$\mu_{t_1,\ldots,t_n}(A) = P((X_{t_1}, \ldots, X_{t_n}) \in A)$$
for all $A \in \mathcal{B}(\mathbb{R}^n)$, for some $0 \leq t_1 < t_2 < \cdots < t_n$.
The important observation is that if we know all finite-dimensional distributions,
then we know the law of $X$, since the cylinder sets form a $\pi$-system
generating the $\sigma$-algebra.
If we know, a priori, that $(X_t)_{t\geq0}$ is a continuous process, then for any dense
set $I \subseteq [0, \infty)$, knowing $(X_t)_{t\geq0}$ is the same as knowing $(X_t)_{t\in I}$. Conversely, if
we are given some random variables $(X_t)_{t\in I}$, can we extend this to a continuous
process $(X_t)_{t\geq0}$? The answer is, of course, “not always”, but it turns out we can
if we assume some Hölder conditions.
Theorem (Kolmogorov's criterion). Let $(\rho_t)_{t\in I}$ be random variables, where
$I \subseteq [0, 1]$ is dense. Assume that for some $p > 1$ and $\beta > \frac1p$, we have
$$\|\rho_t - \rho_s\|_p \leq C|t - s|^\beta \text{ for all } t, s \in I. \tag{$*$}$$
Then there exists a continuous process $(X_t)_{t\in[0,1]}$ such that for all $t \in I$,
$$X_t = \rho_t \text{ almost surely},$$
and moreover for any $\alpha \in [0, \beta - \frac1p)$, there exists a random variable $K_\alpha \in L^p$
such that
$$|X_s - X_t| \leq K_\alpha|s - t|^\alpha \text{ for all } s, t \in [0, 1].$$
Before we begin, we make the following definition:
Definition (Dyadic numbers). We define
$$D_n = \left\{s \in [0, 1] : s = \frac{k}{2^n} \text{ for some } k \in \mathbb{Z}\right\}, \qquad D = \bigcup_{n\geq0}D_n.$$
Observe that $D \subseteq [0, 1]$ is a dense subset. Topologically, this is just like any
other dense subset. However, it is convenient to use $D$ instead of an arbitrary
dense subset when writing down formulas.
Proof. First note that we may assume $D \subseteq I$. Indeed, for $t \in D \setminus I$, we can define
$\rho_t$ by taking the limit of $\rho_s$ in $L^p$ as $s \to t$ along $I$, since $L^p$ is complete. The
inequality $(*)$ is preserved by limits, so we may work with $I \cup D$ instead.
By assumption, $(\rho_t)_{t\in I}$ is Hölder in $L^p$. We claim that it is almost surely
pointwise Hölder along $D$.
Claim. There exists a random variable $K_\alpha \in L^p$ such that
$$|\rho_s - \rho_t| \leq K_\alpha|s - t|^\alpha \text{ for all } s, t \in D.$$
Moreover, $K_\alpha$ is increasing in $\alpha$.
Given the claim, we can simply set
$$X_t(\omega) = \begin{cases}\lim_{q\to t, q\in D}\rho_q(\omega) & \text{if } K_\alpha(\omega) < \infty \text{ for all } \alpha \in [0, \beta - \frac1p)\\ 0 & \text{otherwise}\end{cases}.$$
Then this is a continuous process, and it satisfies the desired properties.
To construct such a $K_\alpha$, observe that given any $s, t \in D$ with $s < t$, we can pick $m \geq 0$
such that
$$2^{-(m+1)} < t - s \leq 2^{-m}.$$
Then we can pick $u = \frac{k}{2^{m+1}}$ such that $s < u < t$. Thus, we have
$$u - s < 2^{-m}, \qquad t - u < 2^{-m}.$$
Therefore, by binary expansion, we can write
$$u - s = \sum_{i\geq m+1}\frac{x_i}{2^i}, \qquad t - u = \sum_{i\geq m+1}\frac{y_i}{2^i}$$
for some $x_i, y_i \in \{0, 1\}$. Thus, writing
$$K_n = \sup_{t\in D_n}|\rho_{t+2^{-n}} - \rho_t|,$$
we can bound, by telescoping along the partial sums of these expansions,
$$|\rho_s - \rho_t| \leq 2\sum_{n=m+1}^\infty K_n,$$
and thus, since $|s - t| > 2^{-(m+1)}$,
$$\frac{|\rho_s - \rho_t|}{|s - t|^\alpha} \leq 2\sum_{n=m+1}^\infty 2^{(m+1)\alpha}K_n \leq 2\sum_{n=m+1}^\infty 2^{(n+1)\alpha}K_n.$$
Thus, we can define
$$K_\alpha = 2\sum_{n\geq0}2^{(n+1)\alpha}K_n.$$
We only have to check that this is in $L^p$, and this is not hard. We first get
$$EK_n^p \leq \sum_{t\in D_n}E|\rho_{t+2^{-n}} - \rho_t|^p \leq C^p2^n\cdot2^{-n\beta p} = C^p2^{n(1-p\beta)}.$$
Then we have
$$\|K_\alpha\|_p \leq 2\sum_{n\geq0}2^{(n+1)\alpha}\|K_n\|_p \leq 2^{1+\alpha}C\sum_{n\geq0}2^{n(\alpha+\frac1p-\beta)} < \infty,$$
since $\alpha + \frac1p - \beta < 0$.
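As a preview of how the criterion is used for Brownian motion: an increment $B_t - B_s$ is $N(0, |t - s|)$, so $\|B_t - B_s\|_p = c_p|t - s|^{1/2}$, i.e. $\beta = \frac12$ for every $p$, and the criterion (needing $\beta > \frac1p$, i.e. $p > 2$) gives Hölder exponents up to $\frac12 - \frac1p$. A hedged Python sketch of this scaling (the values of $p$ and the gaps are arbitrary choices):

import numpy as np

rng = np.random.default_rng(7)

p = 4.0
for gap in (0.1, 0.01, 0.001):
    Z = rng.normal(0.0, np.sqrt(gap), size=10**6)   # samples of B_t - B_s
    Lp = np.mean(np.abs(Z)**p)**(1 / p)             # empirical L^p norm
    print(gap, Lp / gap**0.5)    # roughly constant: the ratio c_p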
We will later use this to construct Brownian motion. For now, we shall
develop what we know about discrete time processes for continuous time ones.
Fortunately, a lot of the proofs are either the same as the discrete time ones, or
can be reduced to the discrete time version. So not much work has to be done!
Definition (Continuous time filtration). A continuous-time filtration is a family
of $\sigma$-algebras $(\mathcal{F}_t)_{t\geq0}$ such that $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ if $s \leq t$. Define
$\mathcal{F}_\infty = \sigma(\mathcal{F}_t : t \geq 0)$.
Definition (Stopping time). A random variable $T : \Omega \to [0, \infty]$ is a stopping
time if $\{T \leq t\} \in \mathcal{F}_t$ for all $t \geq 0$.
Proposition. Let $(X_t)_{t\geq0}$ be a cadlag adapted process and $S, T$ stopping times.
Then
(i) $S \wedge T$ is a stopping time.
(ii) If $S \leq T$, then $\mathcal{F}_S \subseteq \mathcal{F}_T$.
(iii) $X_T1_{T<\infty}$ is $\mathcal{F}_T$-measurable.
(iv) $(X_t^T)_{t\geq0} = (X_{T\wedge t})_{t\geq0}$ is adapted.
We only prove (iii). The first two are the same as the discrete case, and the
proof of (iv) is similar to that of (iii).
To prove this, we need a quick lemma, whose proof is a simple exercise.
Lemma. A random variable $Z$ is $\mathcal{F}_T$-measurable iff $Z1_{\{T\leq t\}}$ is $\mathcal{F}_t$-measurable
for all $t \geq 0$.
Proof of (iii) of proposition. We need to prove that $X_T1_{\{T\leq t\}}$ is $\mathcal{F}_t$-measurable
for all $t \geq 0$.
We write
$$X_T1_{T\leq t} = X_T1_{T<t} + X_t1_{T=t}.$$
We know the second term is measurable. So it suffices to show that $X_T1_{T<t}$ is
$\mathcal{F}_t$-measurable.
Define $T_n = 2^{-n}\lceil2^nT\rceil$. This is a stopping time taking values in $D_n$, and we
always have $T_n \geq T$. Since $(X_t)_{t\geq0}$ is cadlag, we know
$$X_T1_{T<t} = \lim_{n\to\infty}X_{T_n\wedge t}1_{T<t}.$$
Now $T_n \wedge t$ can take only countably (and in fact only finitely) many values, so
we can write
$$X_{T_n\wedge t}1_{T<t} = \sum_{q\in D_n, q<t}X_q1_{T_n=q}1_{T<t} + X_t1_{T<t\leq T_n},$$
and this is $\mathcal{F}_t$-measurable. So we are done.
In the continuous case, stopping times are a bit more subtle. A natural
source of stopping times is given by hitting times.
Definition (Hitting time). Let $A \in \mathcal{B}(\mathbb{R})$. Then the hitting time of $A$ is
$$T_A = \inf\{t \geq 0 : X_t \in A\}.$$
This is not always a stopping time. For example, consider the process $X_t$
such that with probability $\frac12$ it is given by $X_t = t$, and with probability $\frac12$ it is
given by
$$X_t = \begin{cases}t & t \leq 1\\ 2 - t & t > 1\end{cases}.$$
Take $A = (1, \infty)$. Then $T_A = 1$ in the first case, and $T_A = \infty$ in the second case.
But $\{T_A \leq 1\} \not\in \mathcal{F}_1$, as at time $1$, we don't know if we are going up or down.
The problem is that $A$ is not closed.
Proposition. Let $A \subseteq \mathbb{R}$ be a closed set and $(X_t)_{t\geq0}$ be continuous. Then $T_A$
is a stopping time.
Proof. Observe that $d(X_q, A)$ is a continuous function in $q$. So we have
$$\{T_A \leq t\} = \left\{\inf_{q\in\mathbb{Q}, q<t}d(X_q, A) = 0\right\}.$$
Motivated by our previous non-example of a hitting time, we define
Definition (Right-continuous filtration). Given a continuous filtration $(\mathcal{F}_t)_{t\geq0}$,
we define
$$\mathcal{F}_t^+ = \bigcap_{s>t}\mathcal{F}_s \supseteq \mathcal{F}_t.$$
We say $(\mathcal{F}_t)_{t\geq0}$ is right continuous if $\mathcal{F}_t = \mathcal{F}_t^+$.
Often, we want to modify our events by things of measure zero. While this
doesn't really affect anything, it could potentially get us out of $\mathcal{F}_t$. It does no
harm to enlarge all $\mathcal{F}_t$ to include events of measure zero.
Definition (Usual conditions). Let $\mathcal{N} = \{A \in \mathcal{F}_\infty : P(A) \in \{0, 1\}\}$. We say
that $(\mathcal{F}_t)_{t\geq0}$ satisfies the usual conditions if it is right continuous and $\mathcal{N} \subseteq \mathcal{F}_0$.
Proposition. Let $(X_t)_{t\geq0}$ be an adapted process (to $(\mathcal{F}_t)_{t\geq0}$) that is cadlag,
and let $A$ be an open set. Then $T_A$ is a stopping time with respect to $(\mathcal{F}_t^+)_{t\geq0}$.
Proof. Since $(X_t)_{t\geq0}$ is cadlag and $A$ is open, we have
$$\{T_A < t\} = \bigcup_{q<t, q\in\mathbb{Q}}\{X_q \in A\} \in \mathcal{F}_t.$$
Then
$$\{T_A \leq t\} = \bigcap_{n\geq1}\left\{T_A < t + \frac1n\right\} \in \mathcal{F}_t^+.$$
Definition (Continuous time martingale). An adapted process $(X_t)_{t\geq0}$ is called
a martingale iff
$$E(X_t \mid \mathcal{F}_s) = X_s$$
for all $t \geq s$, and similarly for super-martingales and sub-martingales.
Note that if $t_1 \leq t_2 \leq \cdots$, then
$$\tilde X_n = X_{t_n}$$
is a discrete time martingale. Similarly, if $t_1 \geq t_2 \geq \cdots$, then
$$\hat X_n = X_{t_n}$$
defines a discrete time backwards martingale. Using this observation, we can
now prove what we already know in the discrete case.
Theorem (Optional stopping theorem). Let $(X_t)_{t\geq0}$ be an adapted cadlag
process in $L^1$. Then the following are equivalent:
(i) For any bounded stopping time $T$ and any stopping time $S$, we have $X_T \in L^1$
and
$$E(X_T \mid \mathcal{F}_S) = X_{T\wedge S}.$$
(ii) For any stopping time $T$, $(X_t^T)_{t\geq0} = (X_{T\wedge t})_{t\geq0}$ is a martingale.
(iii) For any bounded stopping time $T$, $X_T \in L^1$ and $EX_T = EX_0$.
Proof. Each of (i)–(iii) implies that $(X_t)$ is a martingale, by taking deterministic
stopping times. Conversely, we show that a cadlag martingale satisfies (i); the
remaining implications then follow from the discrete case similarly.
Since $T$ is bounded, assume $T \leq t$, and we may wlog assume $t \in \mathbb{N}$. Let
$$T_n = 2^{-n}\lceil2^nT\rceil, \qquad S_n = 2^{-n}\lceil2^nS\rceil.$$
We have $T_n \searrow T$ as $n \to \infty$, and so $X_{T_n} \to X_T$ as $n \to \infty$ by right-continuity.
Since $T_n \leq t + 1$, by restricting our process to the grid $D_n \cap [0, t+1]$, discrete
time optional stopping implies
$$E(X_{t+1} \mid \mathcal{F}_{T_n}) = X_{T_n}.$$
In particular, $(X_{T_n})$ is uniformly integrable. So it converges in $L^1$. This implies
$X_T \in L^1$.
To show that $E(X_T \mid \mathcal{F}_S) = X_{T\wedge S}$, we need to show that for any $A \in \mathcal{F}_S$, we
have
$$EX_T1_A = EX_{S\wedge T}1_A.$$
Since $\mathcal{F}_S \subseteq \mathcal{F}_{S_n}$, discrete time optional stopping gives $E(X_{T_n} \mid \mathcal{F}_{S_n}) = X_{T_n\wedge S_n}$,
and hence
$$EX_{T_n}1_A = EX_{S_n\wedge T_n}1_A.$$
Taking the limit $n \to \infty$ (using the uniform integrability above and right-continuity)
gives the desired result.
Theorem. Let $(X_t)_{t\geq0}$ be a cadlag super-martingale bounded in $L^1$. Then it
converges almost surely as $t \to \infty$ to a random variable $X_\infty \in L^1$.
Proof. Let $U_s[a, b, (x_t)_{t\geq0}]$ be the number of upcrossings of $[a, b]$ by $(x_t)_{t\geq0}$
up to time $s$, and
$$U_\infty[a, b, (x_t)_{t\geq0}] = \lim_{s\to\infty}U_s[a, b, (x_t)_{t\geq0}].$$
Then for all $s \geq 0$, we have, by right-continuity,
$$U_s[a, b, (x_t)_{t\geq0}] = \lim_{n\to\infty}U_s[a, b, (x_t)_{t\in D_n}].$$
By monotone convergence and Doob's upcrossing lemma, we have
$$EU_s[a, b, (X_t)_{t\geq0}] = \lim_{n\to\infty}EU_s[a, b, (X_t)_{t\in D_n}] \leq \frac{E(X_s - a)^-}{b - a} \leq \frac{E|X_s| + |a|}{b - a}.$$
We are then done by taking the supremum over $s$, and we finish the argument as
in the discrete case.
This shows we have pointwise convergence in $\mathbb{R} \cup \{\pm\infty\}$, and by Fatou's
lemma, we know that
$$E|X_\infty| = E\liminf_{t_n\to\infty}|X_{t_n}| \leq \liminf_{t_n\to\infty}E|X_{t_n}| < \infty.$$
So $X_\infty$ is finite almost surely.
We shall now state without proof some results we already know for the
discrete case. The proofs are straightforward generalizations of the discrete
versions.
Lemma (Maximal inequality). Let $(X_t)_{t\geq0}$ be a cadlag martingale or a non-negative
sub-martingale. Then for all $t \geq 0$, $\lambda \geq 0$, we have
$$\lambda P(X_t^* \geq \lambda) \leq E|X_t|.$$
Lemma (Doob's $L^p$ inequality). Let $(X_t)_{t\geq0}$ be as above. Then
$$\|X_t^*\|_p \leq \frac{p}{p-1}\|X_t\|_p.$$
Definition (Version). We say a process $(Y_t)_{t\geq0}$ is a version of $(X_t)_{t\geq0}$ if for all
$t$, $P(Y_t = X_t) = 1$.
Note that this is not the same as saying $P(\forall t : Y_t = X_t) = 1$.
Example. Take $X_t \equiv 0$ for all $t$, and let $U$ be a uniform random variable on
$[0, 1]$. Define
$$Y_t = \begin{cases}1 & t = U\\ 0 & \text{otherwise}\end{cases}.$$
Then for all $t$, we have $X_t = Y_t$ almost surely. So $(Y_t)$ is a version of $(X_t)$.
However, $X_t$ is continuous but $Y_t$ is not.
Theorem (Regularization of martingales). Let $(X_t)_{t\geq0}$ be a martingale with
respect to $(\mathcal{F}_t)$, and suppose $(\mathcal{F}_t)$ satisfies the usual conditions. Then there exists
a version $(\tilde X_t)$ of $(X_t)$ which is cadlag.
Proof. For all $M > 0$, define
$$\Omega_0^M = \left\{\sup_{q\in D\cap[0,M]}|X_q| < \infty\right\} \cap \bigcap_{a<b\in\mathbb{Q}}\left\{U_M[a, b, (X_t)_{t\in D\cap[0,M]}] < \infty\right\}.$$
Then we see that $P(\Omega_0^M) = 1$ by the maximal inequality and Doob's upcrossing
lemma. Now define
$$\tilde X_t = \lim_{s\geq t, s\to t, s\in D}X_s1_{\Omega_0^t}.$$
Then this is $\mathcal{F}_t$-measurable because $(\mathcal{F}_t)$ satisfies the usual conditions.
Take a sequence $t_n \searrow t$. Then $(X_{t_n})$ is a backwards martingale. So it
converges almost surely and in $L^1$ to $\tilde X_t$. But we can write
$$X_t = E(X_{t_n} \mid \mathcal{F}_t).$$
Since $X_{t_n} \to \tilde X_t$ in $L^1$, and $\tilde X_t$ is $\mathcal{F}_t$-measurable, we know $X_t = \tilde X_t$ almost
surely.
The fact that it is cadlag is an exercise.
Theorem ($L^p$ convergence of martingales). Let $(X_t)_{t\geq0}$ be a cadlag martingale,
and let $p > 1$. Then the following are equivalent:
(i) $(X_t)_{t\geq0}$ is bounded in $L^p$.
(ii) $(X_t)_{t\geq0}$ converges almost surely and in $L^p$.
(iii) There exists $Z \in L^p$ such that $X_t = E(Z \mid \mathcal{F}_t)$ almost surely.
Theorem ($L^1$ convergence of martingales). Let $(X_t)_{t\ge0}$ be a cadlag martingale. Then the following are equivalent:
(i) $(X_t)_{t\ge0}$ is uniformly integrable.
(ii) $(X_t)_{t\ge0}$ converges almost surely and in $L^1$ to $X_\infty$.
(iii) There exists $Z\in L^1$ such that $\mathbb{E}(Z\mid\mathcal{F}_t) = X_t$ almost surely.
Theorem (Optional stopping theorem). Let $(X_t)_{t\ge0}$ be a uniformly integrable martingale, and let $S, T$ be any stopping times. Then
$$\mathbb{E}(X_T\mid\mathcal{F}_S) = X_{S\wedge T}.$$
4 Weak convergence of measures
Often, we may want to consider random variables defined on different spaces.
Since we cannot directly compare them, a sensible approach would be to use
them to push our measure forward to R, and compare them on R.
Definition (Law). Let $X$ be a random variable on $(\Omega, \mathcal{F}, \mathbb{P})$. The law of $X$ is the probability measure $\mu$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ defined by
$$\mu(A) = \mathbb{P}(X^{-1}(A)).$$
Example. For $x\in\mathbb{R}$, we have the Dirac $\delta$ measure
$$\delta_x(A) = \mathbf{1}_{\{x\in A\}}.$$
This is the law of a random variable that constantly takes the value $x$.
Now if we have a sequence $x_n\to x$, then we would like to say $\delta_{x_n}\to\delta_x$. In what sense is this true? Suppose $f$ is continuous. Then
$$\int f\,\mathrm{d}\delta_{x_n} = f(x_n)\to f(x) = \int f\,\mathrm{d}\delta_x.$$
So we do have some sort of convergence if we pair it with a continuous function.
Definition (Weak convergence). Let $(\mu_n)_{n\ge0}$, $\mu$ be probability measures on a metric space $(M, d)$ with the Borel $\sigma$-algebra. We say that $\mu_n \Rightarrow \mu$, or $\mu_n$ converges weakly to $\mu$, if
$$\mu_n(f)\to\mu(f)$$
for all $f$ bounded and continuous.

If $(X_n)_{n\ge0}$ are random variables, then we say $(X_n)$ converges in distribution if $\mu_{X_n}$ converges weakly.
Note that in general, weak convergence does not say anything about how
measures of subsets behave.
Example. If $x_n\to x$, then $\delta_{x_n}\to\delta_x$ weakly. However, if $x_n\ne x$ for all $n$, then $\delta_{x_n}(\{x\}) = 0$ but $\delta_x(\{x\}) = 1$. So
$$\delta_{x_n}(\{x\})\not\to\delta_x(\{x\}).$$
Example. Pick $M = [0, 1]$. Let
$$\mu_n = \frac{1}{n}\sum_{k=1}^n\delta_{k/n}.$$
Then
$$\mu_n(f) = \frac{1}{n}\sum_{k=1}^n f\Big(\frac{k}{n}\Big),$$
which is a Riemann sum of $f$. So $\mu_n$ converges weakly to the Lebesgue measure on $[0, 1]$.
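This example is easy to test numerically. The sketch below (an illustration added here, not from the lectures; it assumes numpy and uses hypothetical names) evaluates $\mu_n(f)$ for $f(x) = x^2$ and watches the Riemann sums approach $\int_0^1 x^2\,\mathrm{d}x = \frac{1}{3}$:

```python
# Illustration: mu_n(f) is a Riemann sum, so it tends to the Lebesgue
# integral of f over [0, 1] for every bounded continuous f.
import numpy as np

def mu_n(f, n):
    """Integrate f against mu_n = (1/n) sum_{k=1}^n delta_{k/n}."""
    k = np.arange(1, n + 1)
    return float(np.mean(f(k / n)))

f = lambda x: x ** 2
for n in (10, 100, 10_000):
    print(n, mu_n(f, n))   # approaches 1/3
```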
Proposition. Let $(\mu_n)_{n\ge0}$, $\mu$ be as above. Then the following are equivalent:
(i) $(\mu_n)_{n\ge0}$ converges weakly to $\mu$.
(ii) For all open $G$, we have
$$\liminf_{n\to\infty}\mu_n(G)\ge\mu(G).$$
(iii) For all closed $A$, we have
$$\limsup_{n\to\infty}\mu_n(A)\le\mu(A).$$
(iv) For all $A$ such that $\mu(\partial A) = 0$, we have
$$\lim_{n\to\infty}\mu_n(A) = \mu(A).$$
(v) (when $M = \mathbb{R}$) $F_{\mu_n}(x)\to F_\mu(x)$ for all $x$ at which $F_\mu$ is continuous, where $F_\mu$ is the distribution function of $\mu$, defined by $F_\mu(x) = \mu((-\infty, x])$.
Proof.
– (i) ⇒ (ii): The idea is to approximate the open set by continuous functions. Let $A$ be open, so $A^c$ is closed, and we can define
$$f_N(x) = 1\wedge(N\cdot\mathrm{dist}(x, A^c)).$$
This has the property that for all $N > 0$, we have
$$f_N\le\mathbf{1}_A,$$
and moreover $f_N\nearrow\mathbf{1}_A$ as $N\to\infty$. Now by the definition of weak convergence,
$$\liminf_{n\to\infty}\mu_n(A)\ge\liminf_{n\to\infty}\mu_n(f_N) = \mu(f_N)\to\mu(A)\text{ as }N\to\infty.$$
– (ii) ⇔ (iii): Take complements.
– (ii) and (iii) ⇒ (iv): Take $A$ such that $\mu(\partial A) = 0$. Then
$$\mu(A) = \mu(\mathring{A}) = \mu(\bar{A}).$$
So we know that
$$\liminf_{n\to\infty}\mu_n(A)\ge\liminf_{n\to\infty}\mu_n(\mathring{A})\ge\mu(\mathring{A}) = \mu(A).$$
Similarly, we find that
$$\mu(A)\ge\limsup_{n\to\infty}\mu_n(A).$$
So we are done.
– (iv) ⇒ (i): Wlog assume $f\ge0$ (in general, add a constant to $f$). We have
$$\mu(f) = \int_M f(x)\,\mathrm{d}\mu(x) = \int_M\int_0^\infty\mathbf{1}_{f(x)\ge t}\,\mathrm{d}t\,\mathrm{d}\mu(x) = \int_0^\infty\mu(\{f\ge t\})\,\mathrm{d}t.$$
Since $f$ is continuous, $\partial\{f\ge t\}\subseteq\{f = t\}$. Now there can be only countably many $t$'s such that $\mu(\{f = t\}) > 0$. So replacing $\mu$ by $\lim_{n\to\infty}\mu_n$ only changes the integrand at countably many places, hence doesn't affect the integral. So we conclude using the bounded convergence theorem.
– (iv) ⇒ (v): Assume $t$ is a continuity point of $F_\mu$. Then we have
$$\mu(\partial(-\infty, t]) = \mu(\{t\}) = F_\mu(t) - F_\mu(t^-) = 0.$$
So $\mu_n((-\infty, t])\to\mu((-\infty, t])$, and we are done.
– (v) ⇒ (ii): If $A = (a, b)$, then
$$\mu_n(A)\ge F_{\mu_n}(b') - F_{\mu_n}(a')$$
for any $a\le a'\le b'\le b$ with $a', b'$ continuity points of $F_\mu$. So we know that
$$\liminf_{n\to\infty}\mu_n(A)\ge F_\mu(b') - F_\mu(a') = \mu((a', b']).$$
By taking the supremum over all such $a', b'$, we find that
$$\liminf_{n\to\infty}\mu_n(A)\ge\mu(A).$$
The same then holds for a general open set, since it is a countable disjoint union of open intervals.
Definition (Tight probability measures). A sequence of probability measures $(\mu_n)_{n\ge0}$ on a metric space $(M, d)$ is tight if for all $\varepsilon > 0$, there exists a compact $K\subseteq M$ such that
$$\sup_n\mu_n(M\setminus K)\le\varepsilon.$$
Note that this is always satisfied for compact metric spaces.
Theorem (Prokhorov's theorem). If $(\mu_n)_{n\ge0}$ is a tight sequence of probability measures, then there is a subsequence $(\mu_{n_k})_{k\ge0}$ and a probability measure $\mu$ such that $\mu_{n_k}\Rightarrow\mu$.
To see how this can fail without the tightness assumption, suppose we define measures $\mu_n$ on $\mathbb{R}$ by
$$\mu_n(A) = \tilde{\mu}(A\cap[n, n+1]),$$
where $\tilde{\mu}$ is the Lebesgue measure. Then for any bounded set $S$, we have $\lim_{n\to\infty}\mu_n(S) = 0$. Thus, if the weak limit existed, it must be everywhere zero, but this does not give a probability measure.
We shall prove this only in the case $M = \mathbb{R}$. It is not difficult to construct a candidate for what the weak limit should be. Simply use Bolzano–Weierstrass to pick a subsequence of the measures such that the distribution functions converge on the rationals. Then the limit would essentially be what we want. We then apply tightness to show that this is a genuine distribution.
Proof. Take $Q\subseteq\mathbb{R}$ dense and countable, and let $x_1, x_2, \ldots$ be an enumeration of $Q$. Define $F_n = F_{\mu_n}$. By Bolzano–Weierstrass, and some fiddling around with sequences, we can find a subsequence $(F_{n_k})$ such that
$$F_{n_k}(x_i)\to y_i\equiv F(x_i)$$
as $k\to\infty$, for each fixed $x_i$.

Since $F$ is non-decreasing on $Q$, it has left and right limits everywhere. We extend $F$ to $\mathbb{R}$ by taking right limits. This implies $F$ is cadlag.
Take $x$ a continuity point of $F$. Then for each $\varepsilon > 0$, there exist rationals $s < x < t$ such that
$$|F(s) - F(t)| < \frac{\varepsilon}{2}.$$
Take $n$ large enough (along the subsequence) such that $|F_n(s) - F(s)| < \frac{\varepsilon}{4}$, and the same for $t$. Then by monotonicity of $F$ and $F_n$, we have
$$|F_n(x) - F(x)|\le|F(s) - F(t)| + |F_n(s) - F(s)| + |F_n(t) - F(t)|\le\varepsilon.$$
It remains to show that $F(x)\to1$ as $x\to\infty$ and $F(x)\to0$ as $x\to-\infty$. By tightness, for all $\varepsilon > 0$, there exists $N > 0$ such that
$$\mu_n((-\infty, -N])\le\varepsilon,\quad\mu_n((N, \infty))\le\varepsilon.$$
This then implies what we want.
We shall end the chapter with an alternative characterization of weak convergence, using characteristic functions.
Definition (Characteristic function). Let $X$ be a random variable taking values in $\mathbb{R}^d$. The characteristic function of $X$ is the function $\mathbb{R}^d\to\mathbb{C}$ defined by
$$\varphi_X(t) = \mathbb{E}e^{i\langle t, X\rangle} = \int_{\mathbb{R}^d}e^{i\langle t, x\rangle}\,\mathrm{d}\mu_X(x).$$
Note that $\varphi_X$ is continuous by bounded convergence, and $\varphi_X(0) = 1$.
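As a quick numerical sanity check (an illustration added here, assuming numpy, not from the lectures), the empirical characteristic function of an iid $N(0,1)$ sample approaches the exact Gaussian one, $e^{-t^2/2}$:

```python
# Illustration: empirical characteristic function of an N(0,1) sample
# versus the exact value exp(-t^2 / 2).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
for t in (0.0, 0.5, 1.0, 2.0):
    phi_hat = np.mean(np.exp(1j * t * x))   # estimates E e^{itX}
    print(t, phi_hat.real.round(4), round(np.exp(-t ** 2 / 2), 4))
```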
Proposition. If $\varphi_X = \varphi_Y$, then $\mu_X = \mu_Y$.
Theorem (Lévy's convergence theorem). Let $(X_n)_{n\ge0}$, $X$ be random variables taking values in $\mathbb{R}^d$. Then the following are equivalent:
(i) $\mu_{X_n}\Rightarrow\mu_X$ as $n\to\infty$.
(ii) $\varphi_{X_n}\to\varphi_X$ pointwise.
We will in fact prove a stronger theorem.
Theorem (Lévy). Let $(X_n)_{n\ge0}$ be as above, and suppose $\varphi_{X_n}(t)\to\psi(t)$ for all $t$. Suppose $\psi$ is continuous at $0$ and $\psi(0) = 1$. Then there exists a random variable $X$ such that $\varphi_X = \psi$ and $\mu_{X_n}\Rightarrow\mu_X$ as $n\to\infty$.
We will only prove the case d = 1. We first need the following lemma:
Lemma. Let $X$ be a real random variable. Then for all $\lambda > 0$,
$$\mu_X(\{x : |x|\ge\lambda\})\le C\lambda\int_0^{1/\lambda}(1 - \operatorname{Re}\varphi_X(t))\,\mathrm{d}t,$$
where $C = (1 - \sin 1)^{-1}$.
Proof. For $M\ge1$, we have
$$\int_0^M(1 - \cos t)\,\mathrm{d}t = M - \sin M\ge M(1 - \sin 1).$$
By setting $M = \frac{|X|}{\lambda}$ (on the event $|X|\ge\lambda$), we have
$$\mathbf{1}_{|X|\ge\lambda}\le\frac{C\lambda}{|X|}\int_0^{|X|/\lambda}(1 - \cos t)\,\mathrm{d}t.$$
By a change of variables $t\mapsto Xt$, we have
$$\mathbf{1}_{|X|\ge\lambda}\le C\lambda\int_0^{1/\lambda}(1 - \cos Xt)\,\mathrm{d}t.$$
Taking expectations and using the fact that $\operatorname{Re}\varphi_X(t) = \mathbb{E}\cos(Xt)$ gives the result.
We can now prove L´evy’s theorem.
Proof of theorem. It is clear that weak convergence implies convergence of characteristic functions, since $x\mapsto e^{i\langle t, x\rangle}$ has bounded continuous real and imaginary parts.

Now observe that $\mu_n\Rightarrow\mu$ if and only if from every subsequence $(n_k)_{k\ge0}$, we can choose a further subsequence $(n_{k_\ell})$ such that $\mu_{n_{k_\ell}}\Rightarrow\mu$ as $\ell\to\infty$. Indeed, the forward direction is clear. Conversely, suppose $\mu_n\not\Rightarrow\mu$ but the subsequence property holds. Then we can choose a bounded continuous function $f$ such that $\mu_n(f)\not\to\mu(f)$, and hence a subsequence $(n_k)_{k\ge0}$ and $\varepsilon > 0$ such that $|\mu_{n_k}(f) - \mu(f)| > \varepsilon$ for all $k$. Then no further subsequence can converge weakly to $\mu$, a contradiction.
Thus, to prove the backward direction of the theorem, we need to prove the existence of subsequential weak limits (uniqueness of the limit follows from the convergence of characteristic functions). It is enough to prove tightness of the whole sequence.

Since $\psi$ is continuous at $0$ with $\psi(0) = 1$, we can choose $\lambda$ so large that
$$C\lambda\int_0^{1/\lambda}(1 - \operatorname{Re}\psi(t))\,\mathrm{d}t < \frac{\varepsilon}{2}.$$
By bounded convergence, since $\varphi_{X_n}\to\psi$ pointwise, we then have
$$C\lambda\int_0^{1/\lambda}(1 - \operatorname{Re}\varphi_{X_n}(t))\,\mathrm{d}t\le\varepsilon$$
for all large $n$, and hence (after possibly enlarging $\lambda$) for all $n$. Thus, by our previous lemma, we know $(\mu_{X_n})_{n\ge0}$ is tight. So we are done.
5 Brownian motion
Finally, we can begin studying Brownian motion. Brownian motion was first observed by the botanist Robert Brown in 1827, when he looked at the random movement of pollen grains in water. In 1905, Albert Einstein provided the first mathematical description of this behaviour. In 1923, Norbert Wiener provided the first rigorous construction of Brownian motion.
5.1 Basic properties of Brownian motion
Definition (Brownian motion). A continuous process $(B_t)_{t\ge0}$ taking values in $\mathbb{R}^d$ is called a Brownian motion in $\mathbb{R}^d$ started at $x\in\mathbb{R}^d$ if
(i) $B_0 = x$ almost surely.
(ii) For all $s < t$, the increment $B_t - B_s\sim N(0, (t-s)I)$.
(iii) Increments are independent. More precisely, for all $t_1 < t_2 < \cdots < t_k$, the random variables
$$B_{t_1},\ B_{t_2} - B_{t_1},\ \ldots,\ B_{t_k} - B_{t_{k-1}}$$
are independent.

If $B_0 = 0$, then we call it a standard Brownian motion.
We always assume our Brownian motion is standard.
Theorem (Wiener's theorem). There exists a Brownian motion on some probability space.
Proof. We first prove existence on $[0, 1]$ and in $d = 1$. We wish to apply Kolmogorov's criterion.

Recall that $D_n$ are the dyadic numbers. Let $(Z_d)_{d\in D}$ be iid $N(0, 1)$ random variables on some probability space. We will define a process on $D_n$ inductively on $n$ with the required properties. We wlog assume $x = 0$.

In step 0, we put
$$B_0 = 0,\quad B_1 = Z_1.$$
Assume that we have already constructed $(B_d)_{d\in D_{n-1}}$ satisfying the properties. Take $d\in D_n\setminus D_{n-1}$, and set $d^\pm = d\pm2^{-n}$. These are the two consecutive numbers in $D_{n-1}$ such that $d^- < d < d^+$. Define
$$B_d = \frac{B_{d^+} + B_{d^-}}{2} + \frac{1}{2^{(n+1)/2}}Z_d.$$
The condition (i) is trivially satisfied. We now have to check the other two conditions.

Consider
$$B_{d^+} - B_d = \frac{B_{d^+} - B_{d^-}}{2} - \frac{1}{2^{(n+1)/2}}Z_d,\qquad
B_d - B_{d^-} = \underbrace{\frac{B_{d^+} - B_{d^-}}{2}}_{N} + \underbrace{\frac{1}{2^{(n+1)/2}}Z_d}_{N'}.$$
Notice that $N$ and $N'$ are normal with variance
$$\operatorname{var}(N') = \operatorname{var}(N) = \frac{1}{2^{n+1}}.$$
In particular, we have
$$\operatorname{cov}(N - N', N + N') = \operatorname{var}(N) - \operatorname{var}(N') = 0.$$
So $B_{d^+} - B_d$ and $B_d - B_{d^-}$ are independent.
Now note that the vector of increments of $(B_d)_{d\in D_n}$ between consecutive numbers in $D_n$ is Gaussian, since after dotting with any vector, we obtain a linear combination of independent Gaussians. Thus, to prove independence, it suffices to prove that the pairwise correlations vanish.

We already proved this for the increments between $B_d$ and $B_{d^\pm}$, and this is the only tricky case, since these both involve the same $Z_d$. The other cases are straightforward, and are left as an exercise for the reader.
Inductively, we can construct $(B_d)_{d\in D}$ satisfying (i), (ii) and (iii). Note that for all $s, t\in D$, we have
$$\mathbb{E}|B_t - B_s|^p = |t - s|^{p/2}\,\mathbb{E}|N|^p$$
for $N\sim N(0, 1)$. Since $\mathbb{E}|N|^p < \infty$ for all $p$, by Kolmogorov's criterion, we can extend $(B_d)_{d\in D}$ to $(B_t)_{t\in[0,1]}$. In fact, this is $\alpha$-Hölder continuous for all $\alpha < \frac{1}{2}$.
Since this is a continuous process and satisfies the desired properties on
a dense set, it remains to show that the properties are preserved by taking
continuous limits.
Take $0\le t_1 < t_2 < \cdots < t_m\le1$, and $0\le t_1^n < t_2^n < \cdots < t_m^n\le1$ such that $t_i^n\in D_n$ and $t_i^n\to t_i$ as $n\to\infty$, for $i = 1, \ldots, m$.

We now apply Lévy's convergence theorem. Recall that if $X$ is a random variable in $\mathbb{R}^d$ and $X\sim N(0, \Sigma)$, then
$$\varphi_X(u) = \exp\Big(-\frac{1}{2}u^T\Sigma u\Big).$$
Since the $t_i^n$ are dyadic, we have
$$\varphi_{(B_{t_2^n} - B_{t_1^n},\ldots,B_{t_m^n} - B_{t_{m-1}^n})}(u) = \exp\Big(-\frac{1}{2}u^T\Sigma u\Big) = \exp\Big(-\frac{1}{2}\sum_{i=1}^{m-1}(t_{i+1}^n - t_i^n)u_i^2\Big).$$
We know this converges, as $n\to\infty$, to $\exp\big(-\frac{1}{2}\sum_{i=1}^{m-1}(t_{i+1} - t_i)u_i^2\big)$; on the other hand, since $(B_t)_{t\in[0,1]}$ is continuous, the increments converge almost surely. By Lévy's convergence theorem, the law of $(B_{t_2} - B_{t_1}, B_{t_3} - B_{t_2}, \ldots, B_{t_m} - B_{t_{m-1}})$ is Gaussian with the right covariance. This implies that (ii) and (iii) hold on $[0, 1]$.
To extend the time to $[0, \infty)$, we define independent Brownian motions $(B^i_t)_{t\in[0,1]}$, $i\in\mathbb{N}$, and set
$$B_t = \sum_{i=0}^{\lfloor t\rfloor-1}B^i_1 + B^{\lfloor t\rfloor}_{t-\lfloor t\rfloor}.$$
To extend to $\mathbb{R}^d$, take the product of $d$ many independent one-dimensional Brownian motions.
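The inductive step of the proof translates directly into a simulation. The sketch below (an illustration added here, assuming numpy; the function name is hypothetical) builds $(B_d)_{d\in D_n}$ on $[0, 1]$ by midpoint refinement, exactly as in the construction above:

```python
# Sketch of the dyadic construction: start with B_0 = 0, B_1 = Z_1, then
# refine midpoints with independent Gaussians of variance 2^{-(n+1)}.
import numpy as np

rng = np.random.default_rng(1)

def dyadic_brownian(levels):
    B = np.array([0.0, rng.standard_normal()])    # step 0 on D_0 = {0, 1}
    for n in range(1, levels + 1):
        mid = (B[:-1] + B[1:]) / 2                # (B_{d^-} + B_{d^+}) / 2
        mid += rng.standard_normal(mid.size) / 2 ** ((n + 1) / 2)
        out = np.empty(2 * B.size - 1)
        out[0::2], out[1::2] = B, mid             # interleave old and new points
        B = out
    return np.linspace(0, 1, B.size), B

t, B = dyadic_brownian(12)        # B sampled on D_12 (4097 points)
incs = np.diff(B)
print(incs.var() * incs.size)     # sum of squared increments, roughly 1
```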
Lemma. Brownian motion is a Gaussian process, i.e. for any $0\le t_1 < t_2 < \cdots < t_m$, the vector $(B_{t_1}, B_{t_2}, \ldots, B_{t_m})$ is Gaussian, with covariance
$$\operatorname{cov}(B_s, B_t) = s\wedge t.$$
Proof. We know $(B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_m} - B_{t_{m-1}})$ is Gaussian. Thus, $(B_{t_1}, \ldots, B_{t_m})$ is the image of this vector under a linear isomorphism, so it is Gaussian. To compute the covariance, for $s\le t$, we have
$$\operatorname{cov}(B_s, B_t) = \mathbb{E}B_sB_t = \mathbb{E}B_sB_t - \mathbb{E}B_s^2 + \mathbb{E}B_s^2 = \mathbb{E}B_s(B_t - B_s) + \mathbb{E}B_s^2 = s.$$
Proposition (Invariance properties). Let $(B_t)_{t\ge0}$ be a standard Brownian motion in $\mathbb{R}^d$.
(i) If $U$ is an orthogonal matrix, then $(UB_t)_{t\ge0}$ is a standard Brownian motion.
(ii) Brownian scaling: if $a > 0$, then $(a^{-1/2}B_{at})_{t\ge0}$ is a standard Brownian motion. This is known as a random fractal property.
(iii) (Simple) Markov property: for all $s\ge0$, the process $(B_{t+s} - B_s)_{t\ge0}$ is a standard Brownian motion, independent of $\mathcal{F}^B_s$.
(iv) Time inversion: define a process
$$X_t = \begin{cases}0 & t = 0\\ tB_{1/t} & t > 0\end{cases}.$$
Then $(X_t)_{t\ge0}$ is a standard Brownian motion.
Proof. Only (iv) requires proof. It is enough to prove that $X_t$ is continuous and has the right finite-dimensional distributions. We have
$$(X_{t_1}, \ldots, X_{t_m}) = (t_1B_{1/t_1}, \ldots, t_mB_{1/t_m}).$$
The right-hand side is the image of $(B_{1/t_1}, \ldots, B_{1/t_m})$ under a linear isomorphism. So it is Gaussian. If $s\le t$, then the covariance is
$$\operatorname{cov}(sB_{1/s}, tB_{1/t}) = st\operatorname{cov}(B_{1/s}, B_{1/t}) = st\Big(\frac{1}{s}\wedge\frac{1}{t}\Big) = s = s\wedge t.$$
Continuity is obvious for $t > 0$. To prove continuity at $0$, note that we already proved that $(X_q)_{q>0, q\in\mathbb{Q}}$ has the same law (as a process) as Brownian motion. By continuity of $X_t$ for positive $t$, we have
$$\mathbb{P}\Big(\lim_{q\in\mathbb{Q}^+, q\to0}X_q = 0\Big) = \mathbb{P}\Big(\lim_{q\in\mathbb{Q}^+, q\to0}B_q = 0\Big) = 1$$
by continuity of $B$ at $0$.
Using the natural filtration, we have

Theorem. For all $s\ge0$, the process $(B_{t+s} - B_s)_{t\ge0}$ is independent of $\mathcal{F}^+_s$.
Proof. Take a sequence $s_n\to s$ such that $s_n > s$ for all $n$. By continuity,
$$B_{t+s} - B_s = \lim_{n\to\infty}(B_{t+s_n} - B_{s_n})$$
almost surely. Now each $B_{t+s_n} - B_{s_n}$ is independent of $\mathcal{F}^+_s$, and hence so is the limit.
Theorem (Blumenthal's 0-1 law). The $\sigma$-algebra $\mathcal{F}^+_0$ is trivial, i.e. if $A\in\mathcal{F}^+_0$, then $\mathbb{P}(A)\in\{0, 1\}$.

Proof. Apply our previous theorem with $s = 0$: the process $(B_t)_{t\ge0}$ is independent of $\mathcal{F}^+_0$. But if $A\in\mathcal{F}^+_0$, then also $A\in\sigma(B_s : s\ge0)$. So $A$ is independent of itself, and $\mathbb{P}(A) = \mathbb{P}(A)^2$.
Proposition.
(i) If $d = 1$, then
$$1 = \mathbb{P}(\inf\{t\ge0 : B_t > 0\} = 0) = \mathbb{P}(\inf\{t\ge0 : B_t < 0\} = 0) = \mathbb{P}(\inf\{t > 0 : B_t = 0\} = 0).$$
(ii) For any $d\ge1$, we have
$$\lim_{t\to\infty}\frac{B_t}{t} = 0$$
almost surely.
(iii) If we define
$$S_t = \sup_{0\le s\le t}B_s,\quad I_t = \inf_{0\le s\le t}B_s,$$
then $S_\infty = \infty$ and $I_\infty = -\infty$ almost surely.
(iv) If $A\subseteq\mathbb{R}^d$ is open, the cone of $A$ is $C_A = \{tx : x\in A, t > 0\}$. Then $\inf\{t\ge0 : B_t\in C_A\} = 0$ almost surely.
Thus, Brownian motion is pretty chaotic.
Proof.
(i) It suffices to prove the first equality. Note that the event $\{\inf\{t\ge0 : B_t > 0\} = 0\}$ lies in $\mathcal{F}^+_0$, hence is trivial. Moreover, for any finite $t$, the probability that $B_t > 0$ is $\frac{1}{2}$. Then take a sequence $t_n\to0$ and apply Fatou to conclude that the probability is positive, hence equal to $1$.
(ii) Follows from the previous one, since $tB_{1/t}$ is a Brownian motion.
(iii) By scale invariance, $S_\infty$ and $a^{-1/2}S_\infty$ have the same distribution for all $a > 0$, so $S_\infty\in\{0, \infty\}$ almost surely, and (i) rules out $0$.
(iv) Same as (i).
Theorem (Strong Markov property). Let $(B_t)_{t\ge0}$ be a standard Brownian motion in $\mathbb{R}^d$, and let $T$ be an almost-surely finite stopping time with respect to $(\mathcal{F}^+_t)_{t\ge0}$. Then
$$\tilde{B}_t = B_{T+t} - B_T$$
is a standard Brownian motion with respect to $(\mathcal{F}^+_{T+t})_{t\ge0}$ that is independent of $\mathcal{F}^+_T$.
Proof. Let $T_n = 2^{-n}\lceil 2^nT\rceil$. We first prove the statement for $T_n$. We let
$$B^{(k)}_t = B_{t+k/2^n} - B_{k/2^n}.$$
This is a standard Brownian motion independent of $\mathcal{F}^+_{k/2^n}$ by the simple Markov property. Let
$$B^*_t = B_{t+T_n} - B_{T_n}.$$
Let $\mathcal{A}$ be the $\sigma$-algebra on $C = C([0,\infty), \mathbb{R}^d)$, and take $A\in\mathcal{A}$ and $E\in\mathcal{F}^+_{T_n}$. The claim that $B^*$ is a standard Brownian motion independent of $E$ can be concisely captured in the equality
$$\mathbb{P}(\{B^*\in A\}\cap E) = \mathbb{P}(\{B\in A\})\,\mathbb{P}(E).\tag{$\dagger$}$$
Taking $E = \Omega$ tells us $B^*$ and $B$ have the same law, and then taking general $E$ tells us $B^*$ is independent of $\mathcal{F}^+_{T_n}$.

It is a straightforward computation to prove $(\dagger)$. Indeed, we have
$$\mathbb{P}(\{B^*\in A\}\cap E) = \sum_{k=0}^\infty\mathbb{P}\Big(\{B^{(k)}\in A\}\cap E\cap\Big\{T_n = \frac{k}{2^n}\Big\}\Big).$$
Since $E\in\mathcal{F}^+_{T_n}$, we know $E\cap\{T_n = k/2^n\}\in\mathcal{F}^+_{k/2^n}$. So by the simple Markov property, this is equal to
$$\sum_{k=0}^\infty\mathbb{P}(\{B^{(k)}\in A\})\,\mathbb{P}\Big(E\cap\Big\{T_n = \frac{k}{2^n}\Big\}\Big).$$
But each $B^{(k)}$ is a standard Brownian motion. So this is equal to
$$\sum_{k=0}^\infty\mathbb{P}(\{B\in A\})\,\mathbb{P}\Big(E\cap\Big\{T_n = \frac{k}{2^n}\Big\}\Big) = \mathbb{P}(\{B\in A\})\,\mathbb{P}(E).$$
So we are done.

Now as $n\to\infty$, the increments of $B^*$ (defined with $T_n$) converge almost surely to the increments of $\tilde{B}$, since $B$ is continuous and $T_n\searrow T$ almost surely. But the $B^*$ all have the same distribution, and almost sure convergence implies convergence in distribution. So $\tilde{B}$ is a standard Brownian motion. Being independent of $\mathcal{F}^+_T$ is clear.
We know that we can reset our process any time we like, and we also know
that we have a bunch of invariance properties. We can combine these to prove
some nice results.
Theorem (Reflection principle). Let $(B_t)_{t\ge0}$ and $T$ be as above. Then the reflected process $(\tilde{B}_t)_{t\ge0}$ defined by
$$\tilde{B}_t = B_t\mathbf{1}_{t<T} + (2B_T - B_t)\mathbf{1}_{t\ge T}$$
is a standard Brownian motion.
Of course, the fact that we are reflecting is not important. We can apply any
operation that preserves the law. This theorem is “obvious”, but we can be a
bit more careful in writing down a proof.
Proof. By the strong Markov property, we know
$$B^T_t = B_{T+t} - B_T\quad\text{and}\quad -B^T_t$$
are both standard Brownian motions independent of $\mathcal{F}^+_T$. This implies that the pairs of random variables
$$P^1 = ((B_t)_{0\le t\le T}, (B^T_t)_{t\ge0}),\quad P^2 = ((B_t)_{0\le t\le T}, (-B^T_t)_{t\ge0})$$
taking values in $C\times C$ have the same law on $C\times C$ with the product $\sigma$-algebra.

Define the concatenation map $\psi_T : C\times C\to C$ by
$$\psi_T(X, Y)_t = X_t\mathbf{1}_{t<T} + (X_T + Y_{t-T})\mathbf{1}_{t\ge T}.$$
Assuming $Y_0 = 0$, the resulting process is continuous.

Notice that $\psi_T$ is a measurable map, which we can prove by approximating $T$ by discrete stopping times. We then conclude that $\psi_T(P^1)$ has the same law as $\psi_T(P^2)$, i.e. $\tilde{B}$ has the same law as $B$.
Corollary. Let $(B_t)_{t\ge0}$ be a standard Brownian motion in $d = 1$. Let $b > 0$ and $a\le b$. Let
$$S_t = \sup_{0\le s\le t}B_s.$$
Then
$$\mathbb{P}(S_t\ge b, B_t\le a) = \mathbb{P}(B_t\ge2b - a).$$
Proof. Consider the stopping time $T$ given by the first hitting time of $b$. Since $S_\infty = \infty$, we know $T$ is finite almost surely. Let $(\tilde{B}_t)_{t\ge0}$ be the corresponding reflected process. Then
$$\{S_t\ge b, B_t\le a\} = \{\tilde{B}_t\ge2b - a\},$$
and $\tilde{B}$ is a standard Brownian motion by the reflection principle.
Corollary. The law of $S_t$ is equal to the law of $|B_t|$.

Proof. Apply the previous corollary with $b = a$ to get
$$\begin{aligned}
\mathbb{P}(S_t\ge a) &= \mathbb{P}(S_t\ge a, B_t < a) + \mathbb{P}(S_t\ge a, B_t\ge a)\\
&= \mathbb{P}(B_t\ge a) + \mathbb{P}(B_t\ge a)\\
&= \mathbb{P}(B_t\le -a) + \mathbb{P}(B_t\ge a)\\
&= \mathbb{P}(|B_t|\ge a).
\end{aligned}$$
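This identity is easy to test by simulation. The sketch below (an illustration added here, assuming numpy; the discrete time grid slightly underestimates the running maximum) compares the tails of $S_t$ and $|B_t|$:

```python
# Check that S_t and |B_t| have (approximately) the same law,
# as predicted by the reflection principle.
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, t = 20_000, 1_000, 1.0
dB = np.sqrt(t / n_steps) * rng.standard_normal((n_paths, n_steps))
B = np.cumsum(dB, axis=1)
S = np.maximum(B.max(axis=1), 0.0)   # running maximum over [0, t]
for a in (0.5, 1.0, 1.5):
    print(a, (S >= a).mean(), (np.abs(B[:, -1]) >= a).mean())
```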
Proposition. Let $d = 1$ and $(B_t)_{t\ge0}$ be a standard Brownian motion. Then the following processes are $(\mathcal{F}^+_t)_{t\ge0}$-martingales:
(i) $(B_t)_{t\ge0}$;
(ii) $(B_t^2 - t)_{t\ge0}$;
(iii) $\big(\exp\big(uB_t - \frac{u^2t}{2}\big)\big)_{t\ge0}$ for $u\in\mathbb{R}$.
Proof.
(i) Using the fact that $B_t - B_s$ is independent of $\mathcal{F}^+_s$, we know
$$\mathbb{E}(B_t - B_s\mid\mathcal{F}^+_s) = \mathbb{E}(B_t - B_s) = 0.$$
(ii) Writing $B_t^2 = (B_t - B_s)^2 - B_s^2 + 2B_tB_s$, we have
$$\mathbb{E}(B_t^2 - t\mid\mathcal{F}^+_s) = \mathbb{E}((B_t - B_s)^2\mid\mathcal{F}^+_s) - \mathbb{E}(B_s^2\mid\mathcal{F}^+_s) + 2\mathbb{E}(B_tB_s\mid\mathcal{F}^+_s) - t.$$
We know $B_t - B_s$ is independent of $\mathcal{F}^+_s$, so the first term is equal to $\operatorname{var}(B_t - B_s) = t - s$, and we can simplify to get
$$(t - s) - B_s^2 + 2B_s^2 - t = B_s^2 - s.$$
(iii) Similar.
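For (iii), the constant-expectation property of the exponential martingale can also be checked numerically (an illustration added here, assuming numpy):

```python
# Check E exp(u B_t - u^2 t / 2) = 1 for the exponential martingale.
import numpy as np

rng = np.random.default_rng(3)
u, t = 1.5, 2.0
B_t = np.sqrt(t) * rng.standard_normal(500_000)
print(np.exp(u * B_t - u ** 2 * t / 2).mean())   # close to 1
```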
5.2 Harmonic functions and Brownian motion
Recall that a Markov chain plus a harmonic function gave us a martingale. We
shall derive similar results here.
Definition (Domain). A domain is an open connected set $D\subseteq\mathbb{R}^d$.

Definition (Harmonic function). A function $u : D\to\mathbb{R}$ is called harmonic if it is twice continuously differentiable and
$$\Delta u = \sum_{i=1}^d\frac{\partial^2u}{\partial x_i^2} = 0.$$
There is also an alternative characterization of harmonic functions that
involves integrals instead of derivatives.
Lemma. Let $u : D\to\mathbb{R}$ be measurable and locally bounded. Then the following are equivalent:
(i) $u$ is twice continuously differentiable and $\Delta u = 0$.
(ii) For any $x\in D$ and $r > 0$ such that $B(x, r)\subseteq D$, we have
$$u(x) = \frac{1}{\operatorname{Vol}(B(x, r))}\int_{B(x,r)}u(y)\,\mathrm{d}y.$$
(iii) For any $x\in D$ and $r > 0$ such that $B(x, r)\subseteq D$, we have
$$u(x) = \frac{1}{\operatorname{Area}(\partial B(x, r))}\int_{\partial B(x,r)}u(y)\,\mathrm{d}y.$$
The latter two properties are known as the mean value property.

Proof. IA Vector Calculus.
Theorem. Let $(B_t)_{t\ge0}$ be a standard Brownian motion in $\mathbb{R}^d$, and let $u : \mathbb{R}^d\to\mathbb{R}$ be harmonic and such that
$$\mathbb{E}|u(x + B_t)| < \infty$$
for any $x\in\mathbb{R}^d$ and $t\ge0$. Then the process $(u(B_t))_{t\ge0}$ is a martingale with respect to $(\mathcal{F}^+_t)_{t\ge0}$.
To prove this, we need to prove a side lemma:
Lemma. Let $X$ and $Y$ be random variables in $\mathbb{R}^d$, with $X$ being $\mathcal{G}$-measurable and $Y$ independent of $\mathcal{G}$. If $f : \mathbb{R}^d\times\mathbb{R}^d\to\mathbb{R}$ is such that $f(X, Y)$ is integrable, then
$$\mathbb{E}(f(X, Y)\mid\mathcal{G}) = \mathbb{E}f(z, Y)\big|_{z=X}.$$

Proof. Use Fubini and the fact that $\mu_{(X,Y)} = \mu_X\otimes\mu_Y$.
Observe that if $\mu$ is a probability measure on $\mathbb{R}^d$ whose density with respect to the Lebesgue measure depends only on $|x|$, then for $u$ harmonic, the mean value property implies
$$u(x) = \int_{\mathbb{R}^d}u(x + y)\,\mathrm{d}\mu(y).$$
Proof of theorem. Let $t\ge s$. Then
$$\mathbb{E}(u(B_t)\mid\mathcal{F}^+_s) = \mathbb{E}(u(B_s + (B_t - B_s))\mid\mathcal{F}^+_s) = \mathbb{E}(u(z + B_t - B_s))\big|_{z=B_s} = u(z)\big|_{z=B_s} = u(B_s),$$
where the second equality uses the lemma, and the third uses the observation above with $\mu$ the law of $B_t - B_s$.
In fact, the following more general result is true:
Theorem. Let $f : \mathbb{R}^d\to\mathbb{R}$ be twice continuously differentiable with bounded derivatives. Then the process $(X_t)_{t\ge0}$ defined by
$$X_t = f(B_t) - \frac{1}{2}\int_0^t\Delta f(B_s)\,\mathrm{d}s$$
is a martingale with respect to $(\mathcal{F}^+_t)_{t\ge0}$.
We shall not prove this, but we can justify it as follows: suppose we have a sequence of independent random variables $X_1, X_2, \ldots$, with
$$\mathbb{P}(X_i = \pm1) = \frac{1}{2}.$$
Let $S_n = X_1 + \cdots + X_n$. Then
$$\mathbb{E}(f(S_{n+1})\mid S_1, \ldots, S_n) - f(S_n) = \frac{1}{2}\big(f(S_n - 1) + f(S_n + 1) - 2f(S_n)\big)\equiv\frac{1}{2}\tilde{\Delta}f(S_n),$$
and we see that $\tilde{\Delta}$ is the discretized second derivative. So
$$f(S_n) - \frac{1}{2}\sum_{i=0}^{n-1}\tilde{\Delta}f(S_i)$$
is a martingale.
Now the mean value property of a harmonic function $u$ says that if we draw a sphere $B$ centered at $x$, then $u(x)$ is the average value of $u$ on $B$. More generally, if we have a surface $S$ containing $x$, is it true that $u(x)$ is the average value of $u$ on $S$ in some sense?

Remarkably, the answer is yes, and the precise result is given by Brownian motion. Let $(X_t)_{t\ge0}$ be a Brownian motion started at $x$, and let $T$ be the first hitting time of $S$. Then, under certain technical conditions, we have
$$u(x) = \mathbb{E}_xu(X_T).$$
In fact, given some function $\varphi$ defined on the boundary of $D$, we can set
$$u(x) = \mathbb{E}_x\varphi(X_T),$$
and this gives us the (unique) solution to Laplace's equation with the boundary condition given by $\varphi$.
It is in fact not hard to show that the resulting $u$ is harmonic in $D$, since it is almost immediate by construction that $u(x)$ is the average of $u$ on a small sphere around $x$. The hard part is to show that $u$ is in fact continuous at the boundary, so that it is a genuine solution to the boundary value problem. This is where the technical condition comes in.
First, we quickly establish that solutions to Laplace’s equation are unique.
Proposition (Maximum principle). Let $u : \bar{D}\to\mathbb{R}$ be continuous and harmonic in $D$. Then
(i) If $u$ attains its maximum inside $D$, then $u$ is constant.
(ii) If $D$ is bounded, then the maximum of $u$ in $\bar{D}$ is attained at $\partial D$.

Thus, a harmonic function has no interior maximum unless it is constant.
Proof. Follows from the mean value property of harmonic functions.
Corollary. If $u$ and $u'$ solve $\Delta u = \Delta u' = 0$, and $u$ and $u'$ agree on $\partial D$, then $u = u'$.

Proof. $u - u'$ is also harmonic, and so attains its maximum at the boundary, where it is $0$. Similarly, the minimum is attained at the boundary.
The technical condition we impose on D is the following:
Definition (Poincaré cone condition). We say a domain $D$ satisfies the Poincaré cone condition if for any $x\in\partial D$, there is an open cone $C$ based at $x$ such that
$$C\cap D\cap B(x, \delta) = \emptyset$$
for some $\delta > 0$.
Example. If $D = \mathbb{R}^2\setminus(\{0\}\times\mathbb{R}_{\ge0})$, then $D$ does not satisfy the Poincaré cone condition at $0$, since near $0$ the complement of $D$ is a ray, which contains no open cone.
And the technical lemma is as follows:
Lemma. Let $C$ be an open cone in $\mathbb{R}^d$ based at $0$. Then there exists $0\le a < 1$ such that if $|x|\le\frac{1}{2^k}$, then
$$\mathbb{P}_x(T_{\partial B(0,1)} < T_C)\le a^k.$$
Proof. Pick
$$a = \sup_{|x|\le\frac{1}{2}}\mathbb{P}_x(T_{\partial B(0,1)} < T_C) < 1.$$
We then apply the strong Markov property and the fact that Brownian motion is scale invariant. We reason as follows: if we start with $|x|\le2^{-k}$, we may or may not hit $\partial B(0, 2^{-k+1})$ before hitting $C$. If we don't, then we are happy. If we do, then by scaling, starting from $\partial B(0, 2^{-k+1})$, the probability of hitting $\partial B(0, 2^{-k+2})$ before hitting the cone is again at most $a$, and so on. Each successive sphere is reached before the cone with probability at most $a$, so by induction, the probability of hitting $\partial B(0, 1)$ before hitting the cone is at most $a^k$.
The ultimate theorem is then
Theorem. Let $D$ be a bounded domain satisfying the Poincaré cone condition, and let $\varphi : \partial D\to\mathbb{R}$ be continuous. Let
$$T_{\partial D} = \inf\{t\ge0 : B_t\in\partial D\}.$$
This is an almost surely finite stopping time. Then the function $u : \bar{D}\to\mathbb{R}$ defined by
$$u(x) = \mathbb{E}_x(\varphi(B_{T_{\partial D}})),$$
where $\mathbb{E}_x$ is the expectation when we start at $x$, is the unique continuous function such that $u(x) = \varphi(x)$ for $x\in\partial D$, and $\Delta u = 0$ for $x\in D$.
Proof. Let $\tau = T_{\partial B(x,\delta)}$ for $\delta$ small. Then by the strong Markov property and the tower law, we have
$$u(x) = \mathbb{E}_x(u(B_\tau)),$$
and $B_\tau$ is uniformly distributed over $\partial B(x, \delta)$. So we know $u$ is harmonic in the interior of $D$, and in particular is continuous in the interior. It is also clear that $u|_{\partial D} = \varphi$. So it remains to show that $u$ is continuous up to $\bar{D}$.
So let $x\in\partial D$. Since $\varphi$ is continuous, for every $\varepsilon > 0$, there is $\delta > 0$ such that if $y\in\partial D$ and $|y - x| < \delta$, then $|\varphi(y) - \varphi(x)|\le\varepsilon$.

Take $z\in\bar{D}$ such that $|z - x|\le\frac{\delta}{2}$. Suppose we start our Brownian motion at $z$. If we hit the boundary before we leave the ball $B(x, \delta)$, then we are in good shape. If not, then we are sad. But if the second case has small probability, then since $\varphi$ is bounded, we can still be fine.

Pick a cone $C$ as in the definition of the Poincaré cone condition, and assume we picked $\delta$ small enough that $C\cap B(x, \delta)\cap D = \emptyset$. Then we have
$$\begin{aligned}
|u(z) - \varphi(x)| &= |\mathbb{E}_z(\varphi(B_{T_{\partial D}})) - \varphi(x)|\\
&\le\mathbb{E}_z|\varphi(B_{T_{\partial D}}) - \varphi(x)|\\
&\le\varepsilon\,\mathbb{P}_z(T_{\partial D} < T_{\partial B(x,\delta)}) + 2\|\varphi\|_\infty\,\mathbb{P}_z(T_{\partial D} > T_{\partial B(x,\delta)})\\
&\le\varepsilon + 2\|\varphi\|_\infty\,\mathbb{P}_z(T_{\partial B(x,\delta)}\le T_C),
\end{aligned}$$
and we know the second term $\to0$ as $z\to x$ by the lemma.
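The representation $u(x) = \mathbb{E}_x\varphi(B_{T_{\partial D}})$ suggests a Monte Carlo solver for the Dirichlet problem. Below is a crude sketch (an illustration added here, assuming numpy; the Euler time step and the projection of the exit point onto the boundary both introduce a small bias) for the unit disc, tested on boundary data $\varphi(y) = y_1$, whose harmonic extension is $u(x) = x_1$:

```python
# Monte Carlo Dirichlet solver on the unit disc: run Brownian motion from x
# until it exits, then average phi at the (approximate) exit point.
import numpy as np

rng = np.random.default_rng(4)

def dirichlet_mc(x, phi, n_paths=1_000, dt=1e-3):
    total = 0.0
    for _ in range(n_paths):
        z = np.array(x, dtype=float)
        while z @ z < 1.0:
            z += np.sqrt(dt) * rng.standard_normal(2)
        total += phi(z / np.linalg.norm(z))   # project exit point to the circle
    return total / n_paths

phi = lambda y: y[0]
print(dirichlet_mc((0.3, 0.4), phi))          # exact answer is 0.3
```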
5.3 Transience and recurrence
Theorem. Let $(B_t)_{t\ge0}$ be a Brownian motion in $\mathbb{R}^d$.
– If $d = 1$, then $(B_t)_{t\ge0}$ is point recurrent, i.e. for each $x, z\in\mathbb{R}$, the set $\{t\ge0 : B_t = z\}$ is unbounded $\mathbb{P}_x$-almost surely.
– If $d = 2$, then $(B_t)_{t\ge0}$ is neighbourhood recurrent, i.e. for each $x\in\mathbb{R}^2$ and open $U\subseteq\mathbb{R}^2$, the set $\{t\ge0 : B_t\in U\}$ is unbounded $\mathbb{P}_x$-almost surely. However, the process does not visit points: for all $x, z\in\mathbb{R}^2$, we have
$$\mathbb{P}_x(B_t = z\text{ for some }t > 0) = 0.$$
– If $d\ge3$, then $(B_t)_{t\ge0}$ is transient, i.e. $|B_t|\to\infty$ as $t\to\infty$, $\mathbb{P}_x$-almost surely.
Proof.
– The case $d = 1$ is trivial, since $\inf_{t\ge0}B_t = -\infty$ and $\sup_{t\ge0}B_t = \infty$ almost surely, and $(B_t)_{t\ge0}$ is continuous.
– For $d = 2$, it is enough to prove the claim for $x = 0$. Let $0 < \varepsilon < R < \infty$ and $\varphi\in C^2_b(\mathbb{R}^2)$ be such that $\varphi(x) = \log|x|$ for $\varepsilon\le|x|\le R$. It is an easy exercise to check that $\log|x|$ is harmonic inside the annulus. By the theorem we didn't prove, we know
$$M_t = \varphi(B_t) - \frac{1}{2}\int_0^t\Delta\varphi(B_s)\,\mathrm{d}s$$
is a martingale. For $\lambda\ge0$, let
$$S_\lambda = \inf\{t\ge0 : |B_t| = \lambda\}.$$
If $\varepsilon\le|x|\le R$, then $H = S_\varepsilon\wedge S_R$ is $\mathbb{P}_x$-almost surely finite, and the stopped process $(M_{t\wedge H})$ is a bounded martingale. By optional stopping, we have
$$\mathbb{E}_x(\log|B_H|) = \log|x|.$$
But the LHS is
$$\log\varepsilon\,\mathbb{P}_x(S_\varepsilon < S_R) + \log R\,\mathbb{P}_x(S_R < S_\varepsilon).$$
So we find that
$$\mathbb{P}_x(S_\varepsilon < S_R) = \frac{\log R - \log|x|}{\log R - \log\varepsilon}.\tag{$*$}$$
Note that if we let $R\to\infty$, then $S_R\to\infty$ almost surely. Using $(*)$, this implies $\mathbb{P}_x(S_\varepsilon < \infty) = 1$, and this does not depend on $x$. So we are done.

To prove that $(B_t)_{t\ge0}$ does not visit points, let $\varepsilon\to0$ in $(*)$ and then $R\to\infty$, for $x\ne z = 0$.
– It is enough to consider the case $d = 3$. As before, let $\varphi\in C^2_b(\mathbb{R}^3)$ be such that
$$\varphi(x) = \frac{1}{|x|}$$
for $\varepsilon\le|x|\le R$. Then $\Delta\varphi(x) = 0$ for $\varepsilon\le|x|\le R$. As before, we get
$$\mathbb{P}_x(S_\varepsilon < S_R) = \frac{|x|^{-1} - R^{-1}}{\varepsilon^{-1} - R^{-1}}.$$
As $R\to\infty$, we have
$$\mathbb{P}_x(S_\varepsilon < \infty) = \frac{\varepsilon}{|x|}.$$
Now let
$$A_n = \{|B_t|\ge n\text{ for all }t\ge S_{n^3}\}.$$
Then, applying the above with $\varepsilon = n$ and starting from radius $n^3$,
$$\mathbb{P}_0(A_n^c) = \frac{n}{n^3} = \frac{1}{n^2}.$$
So by Borel–Cantelli, we know only finitely many of the $A_n^c$ occur almost surely. So all but finitely many of the $A_n$ hold, which guarantees $|B_t|\to\infty$.
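The exit probability $(|x|^{-1} - R^{-1})/(\varepsilon^{-1} - R^{-1})$ used above is easy to test by simulation. A vectorized sketch (an illustration added here, assuming numpy; the time step introduces a small overshoot bias) in $d = 3$:

```python
# Check P_x(S_eps < S_R) for 3d Brownian motion started at |x| = 1,
# with eps = 0.5 and R = 2; the exact value is 1/3.
import numpy as np

rng = np.random.default_rng(5)
eps, R, dt, n = 0.5, 2.0, 1e-3, 5_000
z = np.zeros((n, 3)); z[:, 0] = 1.0            # start at radius 1
active = np.ones(n, dtype=bool)
while active.any():
    z[active] += np.sqrt(dt) * rng.standard_normal((int(active.sum()), 3))
    r = np.linalg.norm(z, axis=1)
    active &= (eps < r) & (r < R)
exact = (1 / 1.0 - 1 / R) / (1 / eps - 1 / R)
print((np.linalg.norm(z, axis=1) <= eps).mean(), exact)
```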
5.4 Donsker’s invariance principle
To end our discussion on Brownian motion, we provide an alternative construction of Brownian motion, given by Donsker's invariance principle. Suppose we run any simple random walk on $\mathbb{Z}^d$. We can think of this as a very coarse approximation of a Brownian motion. As we zoom out, the step sizes in the simple random walk look smaller and smaller, and if we zoom out sufficiently much, then we might expect that the result looks like Brownian motion, and indeed it converges to Brownian motion in the limit.
Theorem (Donsker's invariance principle). Let $(X_n)_{n\ge1}$ be iid random variables with mean $0$ and variance $1$, and set $S_n = X_1 + \cdots + X_n$. Define, for real $t\ge0$,
$$S_t = (1 - \{t\})S_{\lfloor t\rfloor} + \{t\}S_{\lfloor t\rfloor+1},$$
where $\{t\} = t - \lfloor t\rfloor$. Define
$$(S^{[N]}_t)_{t\in[0,1]} = (N^{-1/2}S_{tN})_{t\in[0,1]}.$$
Then $(S^{[N]}_t)_{t\in[0,1]}$ converges in distribution to the law of standard Brownian motion on $[0, 1]$.
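A sketch of the rescaling (an illustration added here, assuming numpy; the $\pm1$ steps are just one choice of mean-$0$, variance-$1$ increments):

```python
# Build the rescaled walk S_t^{[N]} = N^{-1/2} S_{tN}, linearly interpolated,
# and check that its time-1 marginal is approximately N(0, 1).
import numpy as np

rng = np.random.default_rng(6)

def rescaled_walk(N, grid=1_000):
    X = rng.choice([-1.0, 1.0], size=N)           # mean 0, variance 1
    S = np.concatenate(([0.0], np.cumsum(X)))     # S_0, ..., S_N
    t = np.linspace(0.0, 1.0, grid)
    return t, np.interp(t * N, np.arange(N + 1), S) / np.sqrt(N)

samples = np.array([rescaled_walk(1_000)[1][-1] for _ in range(2_000)])
print(samples.mean(), samples.var())              # close to 0 and 1
```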
The reader might wonder why we didn’t construct our Brownian motion
this way instead of using Wiener’s theorem. The answer is that our proof of
Donsker’s invariance principle relies on the existence of a Brownian motion! The
relevance is the following theorem:
Theorem (Skorokhod embedding theorem). Let $\mu$ be a probability measure on $\mathbb{R}$ with mean $0$ and variance $\sigma^2$. Then there exists a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with a filtration $(\mathcal{F}_t)_{t\ge0}$ on which there is a standard Brownian motion $(B_t)_{t\ge0}$ and a sequence of stopping times $(T_n)_{n\ge0}$ such that, setting $S_n = B_{T_n}$,
(i) $(T_n)$ is a random walk with steps of mean $\sigma^2$;
(ii) $(S_n)$ is a random walk with step distribution $\mu$.
So in some sense, Brownian motion contains all random walks with finite
variance.
The only stopping times we know about are the hitting times of some value. However, if we take $T_n$ to be the hitting time of some fixed value, then $B_{T_n}$ would be a pretty poor attempt at constructing a random walk. Thus, we may try to come up with the following strategy: construct a probability space with a Brownian motion $(B_t)_{t\ge0}$, and an independent iid sequence $(X_n)_{n\in\mathbb{N}}$ of random variables with distribution $\mu$. We then take $T_n$ to be the first hitting time of $X_1 + \cdots + X_n$. Then setting $S_n = B_{T_n}$, property (ii) is satisfied by definition. However, (i) will not be satisfied in general. In fact, for any $y\ne0$, the expected first hitting time of $y$ is infinite! The problem is that if, say, $y > 0$, and we accidentally stray off to the negative side, then it could take a long time to return.

The solution is to "split" $\mu$ into two parts, and construct two random variables $(X, Y)\in[0, \infty)^2$ such that if $T$ is the first hitting time of $\{-X, Y\}$, then $B_T$ has law $\mu$.
Since we are interested in the stopping times $T_{-x}$ and $T_y$, the following computation will come in handy:

Lemma. Let $x, y > 0$. Then
$$\mathbb{P}_0(T_{-x} < T_y) = \frac{y}{x + y},\quad\mathbb{E}_0[T_{-x}\wedge T_y] = xy.$$

Proof sketch. Use optional stopping with $(B_t)_{t\ge0}$ and $(B_t^2 - t)_{t\ge0}$.
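The same two identities hold for the simple random walk with integer barriers (the classical gambler's ruin computation, by the same discrete optional stopping argument), which makes them easy to test. A sketch (an illustration added here, assuming numpy):

```python
# Check P(T_{-x} < T_y) = y/(x+y) and E[T_{-x} ^ T_y] = x*y
# for a simple random walk started at 0.
import numpy as np

rng = np.random.default_rng(7)
x, y, trials = 2, 3, 20_000
hits, steps = 0, 0
for _ in range(trials):
    s, n = 0, 0
    while -x < s < y:
        s += rng.choice((-1, 1))
        n += 1
    hits += (s == -x)
    steps += n
print(hits / trials, y / (x + y))   # both near 0.6
print(steps / trials, x * y)        # both near 6
```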
Proof of Skorokhod embedding theorem. Define Borel measures $\mu_\pm$ on $[0, \infty)$ by
$$\mu_\pm(A) = \mu(\pm A).$$
Note that these are not probability measures, but we can define a probability measure $\nu$ on $[0, \infty)^2$ by
$$\mathrm{d}\nu(x, y) = C(x + y)\,\mathrm{d}\mu_-(x)\,\mathrm{d}\mu_+(y)$$
for some normalizing constant $C$ (this is possible since $\mu$ is integrable). This $(x + y)$ is the same $(x + y)$ appearing in the denominator of $\mathbb{P}_0(T_{-x} < T_y) = \frac{y}{x+y}$. Then we claim that any $(X, Y)$ with this distribution will do the job.
We first figure out the value of $C$. Note that since $\mu$ has mean $0$, we have
$$C\int_0^\infty x\,\mathrm{d}\mu_-(x) = C\int_0^\infty y\,\mathrm{d}\mu_+(y).$$
Thus, we have
$$\begin{aligned}
1 &= \int C(x + y)\,\mathrm{d}\mu_-(x)\,\mathrm{d}\mu_+(y)\\
&= C\int x\,\mathrm{d}\mu_-(x)\int\mathrm{d}\mu_+(y) + C\int y\,\mathrm{d}\mu_+(y)\int\mathrm{d}\mu_-(x)\\
&= C\int x\,\mathrm{d}\mu_-(x)\Big(\int\mathrm{d}\mu_+(y) + \int\mathrm{d}\mu_-(x)\Big)\\
&= C\int x\,\mathrm{d}\mu_-(x) = C\int y\,\mathrm{d}\mu_+(y).
\end{aligned}$$
We now set up our notation. Take a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with a standard Brownian motion $(B_t)_{t\ge0}$ and a sequence $((X_n, Y_n))_{n\ge1}$, iid with distribution $\nu$ and independent of $(B_t)_{t\ge0}$. Define
$$\mathcal{F}_0 = \sigma((X_n, Y_n),\ n = 1, 2, \ldots),\quad\mathcal{F}_t = \sigma(\mathcal{F}_0, \mathcal{F}^B_t).$$
Define a sequence of stopping times
$$T_0 = 0,\quad T_{n+1} = \inf\{t\ge T_n : B_t - B_{T_n}\in\{-X_{n+1}, Y_{n+1}\}\}.$$
By the strong Markov property, it suffices to prove that things work in the case $n = 1$. So for convenience, let $T = T_1$, $X = X_1$, $Y = Y_1$.

To simplify notation, let $\tau : C([0, \infty), \mathbb{R})\times[0, \infty)^2\to[0, \infty)$ be given by
$$\tau(\omega, x, y) = \inf\{t\ge0 : \omega(t)\in\{-x, y\}\}.$$
Then we have
$$T = \tau((B_t)_{t\ge0}, X, Y).$$
To check that (ii) holds, take a Borel set $A\subseteq[0, \infty)$. Then
$$\mathbb{P}(B_T\in A) = \int_{[0,\infty)^2}\int_{C([0,\infty),\mathbb{R})}\mathbf{1}_{\omega(\tau(\omega,x,y))\in A}\,\mathrm{d}\mu_B(\omega)\,\mathrm{d}\nu(x, y).$$
Using the first part of the previous computation (for fixed $(x, y)$, the inner integral is $\frac{x}{x+y}\mathbf{1}_{y\in A}$), this is given by
$$\int_{[0,\infty)^2}\frac{x}{x + y}\mathbf{1}_{y\in A}\,C(x + y)\,\mathrm{d}\mu_-(x)\,\mathrm{d}\mu_+(y) = \mu_+(A).$$
We can prove a similar result for $A\subseteq(-\infty, 0)$. So $B_T$ has the right law.
To see that $T$ is also well-behaved, we compute
$$\begin{aligned}
\mathbb{E}T &= \int_{[0,\infty)^2}\int_{C([0,\infty),\mathbb{R})}\tau(\omega, x, y)\,\mathrm{d}\mu_B(\omega)\,\mathrm{d}\nu(x, y)\\
&= \int_{[0,\infty)^2}xy\,\mathrm{d}\nu(x, y)\\
&= C\int_{[0,\infty)^2}(x^2y + xy^2)\,\mathrm{d}\mu_-(x)\,\mathrm{d}\mu_+(y)\\
&= \int_{[0,\infty)}x^2\,\mathrm{d}\mu_-(x) + \int_{[0,\infty)}y^2\,\mathrm{d}\mu_+(y)\\
&= \sigma^2.
\end{aligned}$$
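For a two-point target measure the embedding can be checked directly, since $\nu$ degenerates to a single point mass. A sketch (an illustration added here, assuming numpy; the Euler steps introduce a small overshoot bias):

```python
# Embed mu with mu(-2) = 1/3, mu(1) = 2/3 (mean 0, variance 2). Here nu is
# the point mass at (x, y) = (2, 1), so T is the first exit time of (-2, 1).
import numpy as np

rng = np.random.default_rng(8)
dt, n = 1e-3, 20_000
b = np.zeros(n); T = np.zeros(n)
active = np.ones(n, dtype=bool)
while active.any():
    b[active] += np.sqrt(dt) * rng.standard_normal(int(active.sum()))
    T[active] += dt
    active &= (-2 < b) & (b < 1)
print((b <= -2).mean(), 1 / 3)   # P(B_T = -2) = y / (x + y)
print(T.mean(), 2.0)             # E T = sigma^2
```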
The idea of the proof of Donsker's invariance principle is that in the limit of large $N$, the $T_n$ are roughly regularly spaced, by the law of large numbers, so this allows us to reverse the above and use the random walk to approximate the Brownian motion.
Proof of Donsker’s invariance principle.
Let (
B
t
)
t≥0
be a standard Brownian
motion. Then by Brownian scaling,
(B
(N)
t
)
t≥0
= (N
1/2
B
t/N
)
t≥0
is a standard Brownian motion.
For every
N >
0, we let (
T
(N)
n
)
n≥0
be a sequence of stopping times as in the
embedding theorem for B
(N)
. We then set
S
(N)
n
= B
(N)
T
(N)
n
.
For t not an integer, define S
(N)
t
by linear interpolation. Observe that
((T
(N)
n
)
n≥0
, S
(N)
t
) ∼ ((T
(1)
n
)
n≥0
, S
(1)
t
).
We define
$$\tilde{S}^{(N)}_t = N^{-1/2}S^{(N)}_{tN},\quad\tilde{T}^{(N)}_n = \frac{T^{(N)}_n}{N}.$$
Note that if $t = \frac{n}{N}$, then
$$\tilde{S}^{(N)}_{n/N} = N^{-1/2}S^{(N)}_n = N^{-1/2}B^{(N)}_{T^{(N)}_n} = B_{T^{(N)}_n/N} = B_{\tilde{T}^{(N)}_n}.\tag{$*$}$$
Note that $(\tilde{S}^{(N)}_t)_{t\in[0,1]}\sim(S^{[N]}_t)_{t\in[0,1]}$. We will prove convergence in probability, i.e. for any $\delta > 0$,
$$\mathbb{P}\Big(\sup_{0\le t\le1}|\tilde{S}^{(N)}_t - B_t| > \delta\Big) = \mathbb{P}(\|\tilde{S}^{(N)} - B\|_\infty > \delta)\to0\text{ as }N\to\infty.$$
We already know that $\tilde{S}^{(N)}$ and $B$ agree at some times, but the times for $\tilde{S}^{(N)}$ are deterministic while those for $B$ are random. So what we want to apply is the law of large numbers. By the strong law of large numbers (the $T^{(1)}_n$ form a random walk with steps of mean $\sigma^2 = 1$),
$$\frac{1}{n}|T^{(1)}_n - n|\to0\text{ as }n\to\infty.$$
This implies that
$$\sup_{1\le n\le N}\frac{1}{N}|T^{(1)}_n - n|\to0\text{ as }N\to\infty.$$
Since $(T^{(1)}_n)_{n\ge0}\sim(T^{(N)}_n)_{n\ge0}$, it follows that for any $\delta > 0$,
$$\mathbb{P}\Big(\sup_{1\le n\le N}\Big|\frac{T^{(N)}_n}{N} - \frac{n}{N}\Big|\ge\delta\Big)\to0\text{ as }N\to\infty.$$
Using $(*)$ and continuity, for any $t\in\big[\frac{n}{N}, \frac{n+1}{N}\big]$, there exists $u\in[\tilde{T}^{(N)}_n, \tilde{T}^{(N)}_{n+1}]$ such that $\tilde{S}^{(N)}_t = B_u$, since $\tilde{S}^{(N)}$ linearly interpolates between $B_{\tilde{T}^{(N)}_n}$ and $B_{\tilde{T}^{(N)}_{n+1}}$ and $B$ is continuous. Note that if the times are approximated well up to $\delta$, then $|t - u|\le\delta + \frac{1}{N}$.
Hence we have
$$\{\|\tilde{S}^{(N)} - B\|_\infty > \varepsilon\}\subseteq\Big\{\Big|\tilde{T}^{(N)}_n - \frac{n}{N}\Big| > \delta\text{ for some }n\le N\Big\}\cup\Big\{|B_t - B_u| > \varepsilon\text{ for some }t\in[0, 1],\ |t - u| < \delta + \frac{1}{N}\Big\}.$$
The probability of the first event $\to0$ as $N\to\infty$. For the second, we observe that $(B_t)_{t\in[0,1]}$ has uniformly continuous paths, so for $\varepsilon > 0$, we can find $\delta > 0$ such that the probability of the second event is less than $\varepsilon$ whenever $N > \frac{1}{\delta}$ (exercise!).

So $\tilde{S}^{(N)}\to B$ uniformly in probability, and hence $S^{[N]}$ converges uniformly in distribution to standard Brownian motion.
6 Large deviations
So far, we have been interested in the average or “typical” behaviour of our
processes. Now, we are interested in “extreme cases”, i.e. events with small
probability. In general, our objective is to show that these probabilities tend to
zero very quickly.
Let $(X_n)_{n\ge1}$ be a sequence of iid integrable random variables in $\mathbb{R}$ with mean $\mathbb{E}X_1 = \bar{x}$ and finite variance $\sigma^2$. We let
$$S_n = X_1 + \cdots + X_n.$$
By the central limit theorem, we have
$$\mathbb{P}(S_n\ge n\bar{x} + \sqrt{n}\sigma a)\to\mathbb{P}(Z\ge a)\text{ as }n\to\infty,$$
where $Z\sim N(0, 1)$. This implies
$$\mathbb{P}(S_n\ge an)\to0$$
for any $a > \bar{x}$. The question is then how fast this goes to zero.
There is a very useful lemma in the theory of sequences that tells us this vanishes exponentially quickly with $n$. Note that
$$\mathbb{P}(S_{m+n}\ge a(m + n))\ge\mathbb{P}(S_m\ge am)\,\mathbb{P}(S_n\ge an),$$
since $\{S_m\ge am\}$ and $\{S_{m+n} - S_m\ge an\}$ are independent and together imply $\{S_{m+n}\ge a(m+n)\}$. So the sequence $\mathbb{P}(S_n\ge an)$ is super-multiplicative, and thus the sequence
$$b_n = -\log\mathbb{P}(S_n\ge an)$$
is sub-additive.
Lemma (Fekete). If $(b_n)$ is a non-negative sub-additive sequence, then $\lim_n\frac{b_n}{n}$ exists.
This implies the rate of decrease is exponential. Can we do better than that,
and point out exactly the rate of decrease?
For $\lambda\ge0$, consider the moment generating function
$$M(\lambda) = \mathbb{E}e^{\lambda X_1}.$$
We set $\psi(\lambda) = \log M(\lambda)$, and the Legendre transform of $\psi$ is
$$\psi^*(a) = \sup_{\lambda\ge0}(a\lambda - \psi(\lambda)).$$
Note that these things may be infinite.
Theorem (Cramér's theorem). For $a > \bar{x}$, we have
$$\lim_{n\to\infty}\frac{1}{n}\log\mathbb{P}(S_n\ge an) = -\psi^*(a).$$
Note that we always have
$$\psi^*(a) = \sup_{\lambda\ge0}(a\lambda - \psi(\lambda))\ge-\psi(0) = 0.$$
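For $X_1\sim N(0, 1)$ everything is explicit: $\psi(\lambda) = \lambda^2/2$, so $\psi^*(a) = a^2/2$, and $\mathbb{P}(S_n\ge an) = \mathbb{P}(Z\ge a\sqrt{n})$ can be evaluated exactly, so the convergence of the rate can be observed directly (an illustration added here, standard library only):

```python
# Cramer's theorem for X ~ N(0,1): psi*(a) = a^2 / 2, and
# P(S_n >= a n) = P(Z >= a sqrt(n)) is exact via erfc.
import math

a = 0.5
for n in (10, 100, 1_000, 2_000):
    p = 0.5 * math.erfc(a * math.sqrt(n / 2))   # P(Z >= a sqrt(n))
    print(n, math.log(p) / n, -a ** 2 / 2)      # rate tends to -0.125
```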
Proof. We first prove the upper bound. For any $\lambda\ge0$, Markov tells us
$$\mathbb{P}(S_n\ge an) = \mathbb{P}(e^{\lambda S_n}\ge e^{\lambda an})\le e^{-\lambda an}\,\mathbb{E}e^{\lambda S_n} = e^{-\lambda an}\prod_{i=1}^n\mathbb{E}e^{\lambda X_i} = e^{-\lambda an}M(\lambda)^n = e^{-n(\lambda a - \psi(\lambda))}.$$
Since $\lambda$ was arbitrary, we can pick $\lambda$ to maximize $\lambda a - \psi(\lambda)$, and so by the definition of $\psi^*(a)$, we have $\mathbb{P}(S_n\ge an)\le e^{-n\psi^*(a)}$. So it follows that
$$\limsup_{n\to\infty}\frac{1}{n}\log\mathbb{P}(S_n\ge an)\le-\psi^*(a).$$
The lower bound is a bit more involved. One checks that by translating the $X_i$ by $a$ (i.e. replacing $X_i$ with $X_i - a$), we may assume $a = 0$, and in particular $\bar{x} < 0$. So we want to prove that
$$\liminf_{n\to\infty}\frac{1}{n}\log\mathbb{P}(S_n\ge0)\ge\inf_{\lambda\ge0}\psi(\lambda).$$
We consider cases:
– If $\mathbb{P}(X_1\le0) = 1$, then
$$\mathbb{P}(S_n\ge0) = \mathbb{P}(X_i = 0\text{ for }i = 1, \ldots, n) = \mathbb{P}(X_1 = 0)^n.$$
So in fact
$$\liminf_{n\to\infty}\frac{1}{n}\log\mathbb{P}(S_n\ge0) = \log\mathbb{P}(X_1 = 0).$$
But by monotone convergence, since $X_1\le0$,
$$\mathbb{P}(X_1 = 0) = \lim_{\lambda\to\infty}\mathbb{E}e^{\lambda X_1} = \inf_{\lambda\ge0}M(\lambda).$$
So we are done.
– Consider the case $\mathbb{P}(X_1 > 0) > 0$, but $\mathbb{P}(X_1\in[-K, K]) = 1$ for some $K$. The idea is to modify $X_1$ so that it has mean $0$. For $\mu = \mu_{X_1}$, we define a new distribution by the density
$$\frac{\mathrm{d}\mu_\theta}{\mathrm{d}\mu}(x) = \frac{e^{\theta x}}{M(\theta)}.$$
We define
$$g(\theta) = \int x\,\mathrm{d}\mu_\theta(x).$$
We claim that $g$ is continuous for $\theta\ge0$. Indeed, by definition,
$$g(\theta) = \frac{\int xe^{\theta x}\,\mathrm{d}\mu(x)}{\int e^{\theta x}\,\mathrm{d}\mu(x)},$$
and both the numerator and denominator are continuous in $\theta$ by dominated convergence (using $\mathbb{P}(|X_1|\le K) = 1$).
Now observe that $g(0) = \bar{x} < 0$, and
$$\limsup_{\theta\to\infty}g(\theta) > 0,$$
since $\mathbb{P}(X_1 > 0) > 0$. So by the intermediate value theorem, we can find some $\theta_0$ such that $g(\theta_0) = 0$.

Define $\mu^{\theta_0}_n$ to be the law of the sum of $n$ iid random variables with law $\mu_{\theta_0}$. We have
$$\mathbb{P}(S_n\ge0)\ge\mathbb{P}(S_n\in[0, \varepsilon n])\ge\mathbb{E}\big[e^{\theta_0(S_n - \varepsilon n)}\mathbf{1}_{S_n\in[0,\varepsilon n]}\big],$$
using the fact that on the event $S_n\in[0, \varepsilon n]$, we have $e^{\theta_0(S_n - \varepsilon n)}\le1$. By the definition of $\mu_{\theta_0}$, the right-hand side equals
$$M(\theta_0)^ne^{-\theta_0\varepsilon n}\,\mu^{\theta_0}_n(\{S_n\in[0, \varepsilon n]\}).$$
By the central limit theorem, for each fixed $\varepsilon$, we know
$$\mu^{\theta_0}_n(\{S_n\in[0, \varepsilon n]\})\to\frac{1}{2}\text{ as }n\to\infty,$$
since the tilted steps have mean $0$. So we can write
$$\liminf_{n\to\infty}\frac{1}{n}\log\mathbb{P}(S_n\ge0)\ge\psi(\theta_0) - \theta_0\varepsilon.$$
Then take the limit $\varepsilon\to0$ to conclude the result, noting that $\psi(\theta_0)\ge\inf_{\lambda\ge0}\psi(\lambda)$.
– Finally, we drop the finiteness assumption, and only assume $\mathbb{P}(X_1 > 0) > 0$. We define $\nu$ to be the law of $X_1$ conditioned on the event $\{|X_1|\le K\}$, and let $\nu_n$ be the law of the sum of $n$ iid random variables with law $\nu$. Define
$$\psi_K(\lambda) = \log\int_{-K}^Ke^{\lambda x}\,\mathrm{d}\mu(x),\qquad
\psi_\nu(\lambda) = \log\int_{-\infty}^\infty e^{\lambda x}\,\mathrm{d}\nu(x) = \psi_K(\lambda) - \log\mu(\{|X_1|\le K\}).$$
Note that for $K$ large enough, $\int x\,\mathrm{d}\nu(x) < 0$. So we can use the previous case. By the definition of $\nu$, we have
$$\mu_n([0, \infty))\ge\nu_n([0, \infty))\,\mu(\{|X_1|\le K\})^n,$$
where $\mu_n$ denotes the law of $S_n$. So we have
$$\begin{aligned}
\liminf_{n\to\infty}\frac{1}{n}\log\mu_n([0, \infty)) &\ge\log\mu(\{|X_1|\le K\}) + \liminf_{n\to\infty}\frac{1}{n}\log\nu_n([0, \infty))\\
&\ge\log\mu(\{|X_1|\le K\}) + \inf_\lambda\psi_\nu(\lambda)\\
&= \inf_\lambda\psi_K(\lambda).
\end{aligned}$$
Since $\psi_K$ increases as $K$ increases to infinity, $\inf_\lambda\psi_K(\lambda)$ increases to some limit $J$, and we have
$$\liminf_{n\to\infty}\frac{1}{n}\log\mu_n([0, \infty))\ge J.\tag{$\dagger$}$$
Since the $\psi_K(\lambda)$ are continuous in $\lambda$, the sets $\{\lambda : \psi_K(\lambda)\le J\}$ are non-empty, compact and nested as $K$ increases. By Cantor's intersection theorem, we can find
$$\lambda_0\in\bigcap_K\{\lambda : \psi_K(\lambda)\le J\}.$$
By monotone convergence, $\psi_K(\lambda_0)\to\psi(\lambda_0)$, so the right-hand side of $(\dagger)$ satisfies
$$J\ge\sup_K\psi_K(\lambda_0) = \psi(\lambda_0)\ge\inf_\lambda\psi(\lambda),$$
which is what we wanted.