3 Continuous time stochastic processes
In the remainder of the course, we shall study continuous time processes. When
doing so, we have to be rather careful, since our processes are indexed by
an uncountable set, when measure theory tends to only like countable things.
Ultimately, we would like to study Brownian motion, but we first develop some
general theory of continuous time processes.
Definition (Continuous time stochastic process). A continuous time stochastic process is a family of random variables $(X_t)_{t \geq 0}$ (or $(X_t)_{t \in [a, b]}$).
In the discrete case, if $T$ is a random variable taking values in $\{0, 1, 2, \ldots\}$, then it makes sense to look at the new random variable $X_T$, since this is just
\[ X_T = \sum_{n=0}^{\infty} X_n \mathbf{1}_{T = n}. \]
This is obviously measurable, since it is a limit of measurable functions.
However, this is not necessarily the case if we have continuous time, unless
we assume some regularity conditions on our process. In some sense, we want
$X_t$ to depend “continuously” or at least “measurably” on $t$.
To make sense of $X_T$, it would be enough to require that the map
\[ \varphi : (\omega, t) \mapsto X_t(\omega) \]
is measurable when we put the product $\sigma$-algebra on the domain. In this case, $X_T(\omega) = \varphi(\omega, T(\omega))$ is measurable. In this formulation, we see why we didn't have this problem with discrete time: the $\sigma$-algebra on $\mathbb{N}$ is just $\mathcal{P}(\mathbb{N})$, and so all sets are measurable. This is not true for $\mathcal{B}([0, \infty))$.
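As an illustration of what we can already do in the discrete case, here is a minimal Python sketch (the random-walk example and the cap on $T$ are hypothetical choices, not from the course): it evaluates $X_T(\omega) = \sum_n X_n(\omega) \mathbf{1}_{T(\omega) = n}$, i.e. $\varphi(\omega, T(\omega))$, along sample paths.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 1000 sample paths of a simple random walk X_0, ..., X_N,
# and the stopping time T = first n with X_n >= 2 (capped at N so T is finite).
N = 50
steps = rng.choice([-1, 1], size=(1000, N))
X = np.concatenate([np.zeros((1000, 1)), np.cumsum(steps, axis=1)], axis=1)

hit = X >= 2
T = np.where(hit.any(axis=1), hit.argmax(axis=1), N)   # T(omega)

# X_T(omega) = sum_n X_n(omega) 1_{T(omega) = n}, i.e. phi(omega, T(omega)):
X_T = X[np.arange(X.shape[0]), T]
print(X_T[:10])
```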
However, being able to talk about $X_T$ is not the only thing we want. Often, the following definitions are useful:
Definition (Cadlag function). We say a function $x : [0, \infty) \to \mathbb{R}$ is cadlag if for all $t$,
\[ \lim_{s \to t^+} x_s = x_t, \quad \lim_{s \to t^-} x_s \text{ exists}. \]
The name cadlag (or càdlàg) comes from the French term continue à droite, limite à gauche, meaning “right-continuous with left limits”.
Definition (Continuous/Cadlag stochastic process). We say a stochastic process is continuous (resp. cadlag) if for any $\omega \in \Omega$, the map $t \mapsto X_t(\omega)$ is continuous (resp. cadlag).
Notation. We write $C([0, \infty), \mathbb{R})$ for the space of all continuous functions $[0, \infty) \to \mathbb{R}$, and $D([0, \infty), \mathbb{R})$ for the space of all cadlag functions.
We endow these spaces with a $\sigma$-algebra generated by the coordinate functions
\[ (x_t)_{t \geq 0} \mapsto x_s. \]
Then a continuous (or cadlag) process is a random variable taking values in $C([0, \infty), \mathbb{R})$ (or $D([0, \infty), \mathbb{R})$).
Definition (Finite-dimensional distribution). A finite-dimensional distribution of $(X_t)_{t \geq 0}$ is a measure on $\mathbb{R}^n$ of the form
\[ \mu_{t_1, \ldots, t_n}(A) = \mathbb{P}((X_{t_1}, \ldots, X_{t_n}) \in A) \]
for all $A \in \mathcal{B}(\mathbb{R}^n)$, for some $0 \leq t_1 < t_2 < \cdots < t_n$.
The important observation is that if we know all finite-dimensional distributions, then we know the law of $X$, since the cylinder sets form a $\pi$-system generating the $\sigma$-algebra.
If we know, a priori, that $(X_t)_{t \geq 0}$ is a continuous process, then for any dense set $I \subseteq [0, \infty)$, knowing $(X_t)_{t \geq 0}$ is the same as knowing $(X_t)_{t \in I}$. Conversely, if we are given some random variables $(X_t)_{t \in I}$, can we extend this to a continuous process $(X_t)_{t \geq 0}$? The answer is, of course, “not always”, but it turns out we can if we assume some Hölder conditions.
Theorem (Kolmogorov's criterion). Let $(\rho_t)_{t \in I}$ be random variables, where $I \subseteq [0, 1]$ is dense. Assume that for some $p > 1$ and $\beta > \frac{1}{p}$, we have
\[ \|\rho_t - \rho_s\|_p \leq C |t - s|^\beta \quad \text{for all } t, s \in I. \tag{$*$} \]
Then there exists a continuous process $(X_t)_{t \in [0, 1]}$ such that for all $t \in I$,
\[ X_t = \rho_t \quad \text{almost surely}, \]
and moreover for any $\alpha \in [0, \beta - \frac{1}{p})$, there exists a random variable $K_\alpha \in L^p$ such that
\[ |X_s - X_t| \leq K_\alpha |s - t|^\alpha \quad \text{for all } s, t \in [0, 1]. \]
Before we begin, we make the following definition:
Definition (Dyadic numbers). We define
\[ D_n = \left\{ s \in [0, 1] : s = \frac{k}{2^n} \text{ for some } k \in \mathbb{Z} \right\}, \quad D = \bigcup_{n \geq 0} D_n. \]
Observe that $D \subseteq [0, 1]$ is a dense subset. Topologically, this is just like any other dense subset. However, it is convenient to use $D$ instead of an arbitrary subset when writing down formulas.
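As a small illustrative aside (not part of the proof), the following Python sketch enumerates the levels $D_n$ and checks the elementary fact used repeatedly below: for $s < t$ in $[0, 1]$ there is some $m \geq 0$ with $2^{-(m+1)} < t - s \leq 2^{-m}$.

```python
from fractions import Fraction
import math

def dyadic_level(n):
    """D_n = { k / 2^n : k = 0, 1, ..., 2^n }, the level-n dyadics in [0, 1]."""
    return {Fraction(k, 2 ** n) for k in range(2 ** n + 1)}

# The levels are nested, and their union D is dense in [0, 1].
assert dyadic_level(2) <= dyadic_level(3)

# The elementary fact used in the proof: for s < t there is m >= 0 with
# 2^{-(m+1)} < t - s <= 2^{-m}.
s, t = Fraction(3, 16), Fraction(11, 16)
m = math.floor(-math.log2(t - s))
assert Fraction(1, 2 ** (m + 1)) < t - s <= Fraction(1, 2 ** m)
print(sorted(dyadic_level(2)), m)
```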
Proof. First note that we may assume $D \subseteq I$. Indeed, for $t \in D$, we can define $\rho_t$ by taking the limit of $\rho_s$ in $L^p$, since $L^p$ is complete. The equation $(*)$ is preserved by limits, so we may work on $I \cup D$ instead.
By assumption, $(\rho_t)_{t \in I}$ is Hölder in $L^p$. We claim that it is almost surely pointwise Hölder.
Claim. There exists a random variable $K_\alpha \in L^p$ such that
\[ |\rho_s - \rho_t| \leq K_\alpha |s - t|^\alpha \quad \text{for all } s, t \in D. \]
Moreover, $K_\alpha$ is increasing in $\alpha$.
Given the claim, we can simply set
\[ X_t(\omega) = \begin{cases} \lim_{q \to t,\, q \in D} \rho_q(\omega) & K_\alpha < \infty \text{ for all } \alpha \in [0, \beta - \frac{1}{p}) \\ 0 & \text{otherwise}. \end{cases} \]
Then this is a continuous process, and satisfies the desired properties.
To construct such a $K_\alpha$, observe that given any $s, t \in D$, we can pick $m \geq 0$ such that
\[ 2^{-(m+1)} < t - s \leq 2^{-m}. \]
Then we can pick $u = \frac{k}{2^{m+1}}$ such that $s < u < t$. Thus, we have
\[ u - s < 2^{-m}, \quad t - u < 2^{-m}. \]
Therefore, by binary expansion, we can write
\[ u - s = \sum_{i \geq m+1} \frac{x_i}{2^i}, \quad t - u = \sum_{i \geq m+1} \frac{y_i}{2^i}, \]
for some $x_i, y_i \in \{0, 1\}$. Thus, writing
\[ K_n = \sup_{t \in D_n} |\rho_{t + 2^{-n}} - \rho_t|, \]
we can bound
\[ |\rho_s - \rho_t| \leq 2 \sum_{n = m+1}^{\infty} K_n, \]
and thus
\[ \frac{|\rho_s - \rho_t|}{|s - t|^\alpha} \leq 2 \sum_{n = m+1}^{\infty} 2^{(m+1)\alpha} K_n \leq 2 \sum_{n = m+1}^{\infty} 2^{(n+1)\alpha} K_n. \]
Thus, we can define
\[ K_\alpha = 2 \sum_{n \geq 0} 2^{(n+1)\alpha} K_n. \]
We only have to check that this is in $L^p$, and this is not hard. We first get
\[ \mathbb{E} K_n^p \leq \sum_{t \in D_n} \mathbb{E} |\rho_{t + 2^{-n}} - \rho_t|^p \leq C^p 2^{-n\beta p} \cdot 2^n = C^p 2^{-n(\beta p - 1)}. \]
Then we have
\[ \|K_\alpha\|_p \leq 2 \sum_{n \geq 0} 2^{(n+1)\alpha} \|K_n\|_p \leq 2C \sum_{n \geq 0} 2^{(n+1)\alpha} 2^{-n(\beta - \frac{1}{p})} < \infty, \]
since $\alpha < \beta - \frac{1}{p}$.
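As a sanity check of the hypothesis $(*)$, here is a short Monte Carlo sketch. It assumes Gaussian increments $\rho_t - \rho_s \sim N(0, |t - s|)$, as for Brownian motion; this is an illustrative assumption, not part of the theorem. Then $\|\rho_t - \rho_s\|_p = c_p |t - s|^{1/2}$, so $(*)$ holds with $\beta = \frac{1}{2}$, and the criterion applies whenever $p > 2$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustration only: rho_t - rho_s ~ N(0, |t - s|), as for Brownian increments.
# Then ||rho_t - rho_s||_p = c_p |t - s|^{1/2}, i.e. (*) with beta = 1/2, which
# beats 1/p whenever p > 2, giving Hoelder exponents alpha < 1/2 - 1/p.
p = 4.0
for gap in [0.5, 0.1, 0.01]:
    increments = rng.normal(0.0, np.sqrt(gap), size=200_000)
    lp_norm = np.mean(np.abs(increments) ** p) ** (1 / p)
    # the ratio below should be roughly constant (= c_p) across gaps
    print(f"|t-s| = {gap:5.2f}   ||rho_t - rho_s||_p / |t-s|^0.5 = "
          f"{lp_norm / np.sqrt(gap):.3f}")
```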
We will later use this to construct Brownian motion. For now, we shall
develop what we know about discrete time processes for continuous time ones.
Fortunately, a lot of the proofs are either the same as the discrete time ones, or
can be reduced to the discrete time version. So not much work has to be done!
Definition (Continuous time filtration). A continuous-time filtration is a family of $\sigma$-algebras $(\mathcal{F}_t)_{t \geq 0}$ such that $\mathcal{F}_s \subseteq \mathcal{F}_t \subseteq \mathcal{F}$ if $s \leq t$. Define $\mathcal{F}_\infty = \sigma(\mathcal{F}_t : t \geq 0)$.
Definition (Stopping time). A random variable $T : \Omega \to [0, \infty]$ is a stopping time if $\{T \leq t\} \in \mathcal{F}_t$ for all $t \geq 0$.
Proposition. Let $(X_t)_{t \geq 0}$ be a cadlag adapted process and $S, T$ stopping times. Then
(i) $S \wedge T$ is a stopping time.
(ii) If $S \leq T$, then $\mathcal{F}_S \subseteq \mathcal{F}_T$.
(iii) $X_T \mathbf{1}_{T < \infty}$ is $\mathcal{F}_T$-measurable.
(iv) $(X_t^T)_{t \geq 0} = (X_{T \wedge t})_{t \geq 0}$ is adapted.
We only prove (iii). The first two are the same as the discrete case, and the
proof of (iv) is similar to that of (iii).
To prove this, we need a quick lemma, whose proof is a simple exercise.
Lemma. A random variable $Z$ is $\mathcal{F}_T$-measurable iff $Z \mathbf{1}_{\{T \leq t\}}$ is $\mathcal{F}_t$-measurable for all $t \geq 0$.
Proof of (iii) of proposition. We need to prove that $X_T \mathbf{1}_{\{T \leq t\}}$ is $\mathcal{F}_t$-measurable for all $t \geq 0$.

We write
\[ X_T \mathbf{1}_{T \leq t} = X_T \mathbf{1}_{T < t} + X_t \mathbf{1}_{T = t}. \]
We know the second term is measurable. So it suffices to show that $X_T \mathbf{1}_{T < t}$ is $\mathcal{F}_t$-measurable.

Define $T_n = 2^{-n} \lceil 2^n T \rceil$. This is a stopping time, since we always have $T_n \geq T$.
Since $(X_t)_{t \geq 0}$ is cadlag, we know
\[ X_T \mathbf{1}_{T < t} = \lim_{n \to \infty} X_{T_n \wedge t} \mathbf{1}_{T < t}. \]
Now $T_n \wedge t$ can take only countably (and in fact only finitely) many values, so we can write
\[ X_{T_n \wedge t} = \sum_{q \in D_n,\, q < t} X_q \mathbf{1}_{T_n = q} + X_t \mathbf{1}_{T < t < T_n}, \]
and this is $\mathcal{F}_t$-measurable. So we are done.
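The dyadic approximation $T_n = 2^{-n} \lceil 2^n T \rceil$ used above is easy to visualise numerically; the following Python sketch (with an arbitrary sample value of $T$) checks that $T_n \geq T$ and that $T_n$ decreases to $T$.

```python
import math

def dyadic_round_up(T, n):
    """T_n = 2^{-n} * ceil(2^n * T): the smallest level-n dyadic >= T."""
    return math.ceil(T * 2 ** n) / 2 ** n

T = 0.7310987
T_n = [dyadic_round_up(T, n) for n in range(1, 9)]
print(T_n)                                            # decreases towards T
assert all(a >= T for a in T_n)                       # T_n >= T
assert all(a >= b for a, b in zip(T_n, T_n[1:]))      # T_1 >= T_2 >= ...
```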
In the continuous case, stopping times are a bit more subtle. A natural
source of stopping times is given by hitting times.
Definition (Hitting time). Let $A \in \mathcal{B}(\mathbb{R})$. Then the hitting time of $A$ is
\[ T_A = \inf \{ t \geq 0 : X_t \in A \}. \]
This is not always a stopping time. For example, consider the process $X_t$ such that with probability $\frac{1}{2}$, it is given by $X_t = t$, and with probability $\frac{1}{2}$, it is given by
\[ X_t = \begin{cases} t & t \leq 1 \\ 2 - t & t > 1. \end{cases} \]
Take $A = (1, \infty)$. Then $T_A = 1$ in the first case, and $T_A = \infty$ in the second case. But $\{T_A \leq 1\} \notin \mathcal{F}_1$, as at time 1, we don't know if we are going up or down.

The problem is that $A$ is not closed.
Proposition. Let $A \subseteq \mathbb{R}$ be a closed set and $(X_t)_{t \geq 0}$ be continuous. Then $T_A$ is a stopping time.
Proof. Observe that $d(X_q, A)$ is a continuous function in $q$. So we have
\[ \{T_A \leq t\} = \left\{ \inf_{q \in \mathbb{Q},\, q < t} d(X_q, A) = 0 \right\}. \]
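To see the idea of the proof concretely, here is a Python sketch on a discretised path (Gaussian increments on a fine grid stand in for a continuous path, and the set $A = [1, \infty)$ is an arbitrary closed example): the hitting time is read off as the first grid time at which $d(X_q, A)$ vanishes.

```python
import numpy as np

rng = np.random.default_rng(2)

# A discretised stand-in for a continuous path (Gaussian increments on a fine
# grid), and the closed set A = [1, infinity), for which d(x, A) = max(1 - x, 0).
dt = 1e-4
t_grid = np.arange(0.0, 5.0, dt)
path = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=t_grid.size))

dist_to_A = np.maximum(1.0 - path, 0.0)

# On the grid, {T_A <= t} is detected exactly as in the proof: the infimum of
# d(X_q, A) over grid times q <= t is zero.
hits = np.flatnonzero(dist_to_A == 0.0)
T_A = t_grid[hits[0]] if hits.size else np.inf
print("approximate hitting time of A = [1, inf):", T_A)
```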
Motivated by our previous non-example of a hitting time, we define
Definition (Right-continuous filtration). Given a continuous-time filtration $(\mathcal{F}_t)_{t \geq 0}$, we define
\[ \mathcal{F}_t^+ = \bigcap_{s > t} \mathcal{F}_s \supseteq \mathcal{F}_t. \]
We say $(\mathcal{F}_t)_{t \geq 0}$ is right continuous if $\mathcal{F}_t = \mathcal{F}_t^+$.
Often, we want to modify our events by things of measure zero. While this doesn't really affect anything, it could potentially get us out of $\mathcal{F}_t$. It does no harm to enlarge all $\mathcal{F}_t$ to include events of measure zero.
Definition (Usual conditions). Let $\mathcal{N} = \{A \in \mathcal{F}_\infty : \mathbb{P}(A) \in \{0, 1\}\}$. We say that $(\mathcal{F}_t)_{t \geq 0}$ satisfies the usual conditions if it is right continuous and $\mathcal{N} \subseteq \mathcal{F}_0$.
Proposition. Let $(X_t)_{t \geq 0}$ be an adapted process (to $(\mathcal{F}_t)_{t \geq 0}$) that is cadlag, and let $A$ be an open set. Then $T_A$ is a stopping time with respect to $\mathcal{F}_t^+$.
Proof. Since $(X_t)_{t \geq 0}$ is cadlag and $A$ is open, we have
\[ \{T_A < t\} = \bigcup_{q < t,\, q \in \mathbb{Q}} \{X_q \in A\} \in \mathcal{F}_t. \]
Then
\[ \{T_A \leq t\} = \bigcap_{n \geq 1} \left\{ T_A < t + \frac{1}{n} \right\} \in \mathcal{F}_t^+. \]
Definition (Continuous time martingale). An adapted process $(X_t)_{t \geq 0}$ is called a martingale iff
\[ \mathbb{E}(X_t \mid \mathcal{F}_s) = X_s \]
for all $t \geq s$, and similarly for super-martingales and sub-martingales.
Note that if $t_1 \leq t_2 \leq \cdots$, then
\[ \tilde{X}_n = X_{t_n} \]
is a discrete time martingale. Similarly, if $t_1 \geq t_2 \geq \cdots$, then
\[ \hat{X}_n = X_{t_n} \]
defines a discrete time backwards martingale. Using this observation, we can now prove what we already know in the discrete case.
Theorem (Optional stopping theorem). Let $(X_t)_{t \geq 0}$ be an adapted cadlag process in $L^1$. Then the following are equivalent:
(i) For any bounded stopping time $T$ and any stopping time $S$, we have $X_T \in L^1$ and
\[ \mathbb{E}(X_T \mid \mathcal{F}_S) = X_{T \wedge S}. \]
(ii) For any stopping time $T$, $(X_t^T)_{t \geq 0} = (X_{T \wedge t})_{t \geq 0}$ is a martingale.
(iii) For any bounded stopping time $T$, $X_T \in L^1$ and $\mathbb{E} X_T = \mathbb{E} X_0$.
Proof. We show that (ii) $\Rightarrow$ (i), and the rest follows from the discrete case similarly.
Since $T$ is bounded, assume $T \leq t$, and we may wlog assume $t \in \mathbb{N}$. Let
\[ T_n = 2^{-n} \lceil 2^n T \rceil, \quad S_n = 2^{-n} \lceil 2^n S \rceil. \]
We have $T_n \searrow T$ as $n \to \infty$, and so $X_{T_n} \to X_T$ as $n \to \infty$.
Since $T_n \leq t + 1$, by restricting our sequence to $D_n$, discrete time optional stopping implies
\[ \mathbb{E}(X_{t+1} \mid \mathcal{F}_{T_n}) = X_{T_n}. \]
In particular, $X_{T_n}$ is uniformly integrable. So it converges in $L^1$. This implies $X_T \in L^1$.
To show that $\mathbb{E}(X_T \mid \mathcal{F}_S) = X_{T \wedge S}$, we need to show that for any $A \in \mathcal{F}_S$, we have
\[ \mathbb{E} X_T \mathbf{1}_A = \mathbb{E} X_{S \wedge T} \mathbf{1}_A. \]
Since $\mathcal{F}_S \subseteq \mathcal{F}_{S_n}$, we already know that
\[ \mathbb{E} X_{T_n} \mathbf{1}_A = \mathbb{E} X_{S_n \wedge T_n} \mathbf{1}_A \]
by discrete time optional stopping, since $\mathbb{E}(X_{T_n} \mid \mathcal{F}_{S_n}) = X_{T_n \wedge S_n}$. So taking the limit $n \to \infty$ gives the desired result.
Theorem. Let $(X_t)_{t \geq 0}$ be a super-martingale bounded in $L^1$. Then it converges almost surely as $t \to \infty$ to a random variable $X_\infty \in L^1$.
Proof. Define $U_s[a, b, (x_t)_{t \geq 0}]$ to be the number of upcrossings of $[a, b]$ by $(x_t)_{t \geq 0}$ up to time $s$, and
\[ U[a, b, (x_t)_{t \geq 0}] = \lim_{s \to \infty} U_s[a, b, (x_t)_{t \geq 0}]. \]
Then for all $s \geq 0$, we have
\[ U_s[a, b, (x_t)_{t \geq 0}] = \lim_{n \to \infty} U_s[a, b, (x_t)_{t \in D_n}]. \]
By monotone convergence and Doob's upcrossing lemma, we have
\[ \mathbb{E} U_s[a, b, (X_t)_{t \geq 0}] = \lim_{n \to \infty} \mathbb{E} U_s[a, b, (X_t)_{t \in D_n}] \leq \frac{\mathbb{E}(X_s - a)^-}{b - a} \leq \frac{\mathbb{E}|X_s| + a}{b - a}. \]
We are then done by taking the supremum over $s$ and finishing the argument as in the discrete case.
This shows we have pointwise convergence in $\mathbb{R} \cup \{\pm\infty\}$, and by Fatou's lemma, we know that
\[ \mathbb{E}|X_\infty| = \mathbb{E} \liminf_{t_n \to \infty} |X_{t_n}| \leq \liminf_{t_n \to \infty} \mathbb{E}|X_{t_n}| < \infty. \]
So $X_\infty$ is finite almost surely.
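The upcrossing counts $U_s[a, b, \cdot]$ appearing in the proof are easy to compute on a discretised path. The following Python sketch (using a simple random walk as an arbitrary example) just implements the definition: an upcrossing is a passage from below $a$ to above $b$.

```python
import numpy as np

def upcrossings(path, a, b):
    """Count the upcrossings of [a, b] completed by a discretised path."""
    count, below = 0, False
    for x in path:
        if not below and x <= a:
            below = True          # the path has gone (weakly) below a ...
        elif below and x >= b:
            count += 1            # ... and then back above b: one upcrossing
            below = False
    return count

rng = np.random.default_rng(3)
walk = np.cumsum(rng.choice([-1.0, 1.0], size=10_000))
print(upcrossings(walk, a=-2.0, b=2.0))
```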
We shall now state without proof some results we already know for the
discrete case. The proofs are straightforward generalizations of the discrete
version.
Lemma (Maximal inequality). Let $(X_t)_{t \geq 0}$ be a cadlag martingale or a non-negative sub-martingale, and let $X_t^* = \sup_{s \leq t} |X_s|$. Then for all $t \geq 0$, $\lambda \geq 0$, we have
\[ \lambda \mathbb{P}(X_t^* \geq \lambda) \leq \mathbb{E}|X_t|. \]
Lemma (Doob’s L
p
inequality). Let (X
t
)
t0
be as above. Then
kX
t
k
p
p
p 1
kX
t
k
p
.
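As a quick numerical illustration of Doob's $L^p$ inequality (assuming the running-supremum form $X_t^* = \sup_{s \leq t} |X_s|$ stated above), here is a Monte Carlo sketch on a simple $\pm 1$ random walk martingale with $p = 2$.

```python
import numpy as np

rng = np.random.default_rng(4)

# A simple +/-1 random walk martingale, p = 2 (so p/(p-1) = 2); purely a sketch.
p = 2.0
n_paths, n_steps = 20_000, 200
walk = np.cumsum(rng.choice([-1.0, 1.0], size=(n_paths, n_steps)), axis=1)

X_t = walk[:, -1]                          # value at the terminal time
X_star = np.max(np.abs(walk), axis=1)      # running supremum X_t^*

lhs = np.mean(X_star ** p) ** (1 / p)
rhs = (p / (p - 1)) * np.mean(np.abs(X_t) ** p) ** (1 / p)
print(f"||X_t^*||_p = {lhs:.2f}  <=  (p/(p-1)) ||X_t||_p = {rhs:.2f}")
```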
Definition (Version). We say a process $(Y_t)_{t \geq 0}$ is a version of $(X_t)_{t \geq 0}$ if for all $t$, $\mathbb{P}(Y_t = X_t) = 1$.
Note that this is not the same as saying $\mathbb{P}(\forall t : Y_t = X_t) = 1$.
Example. Take $X_t = 0$ for all $t$, and take $U$ to be a uniform random variable on $[0, 1]$. Define
\[ Y_t = \begin{cases} 1 & t = U \\ 0 & \text{otherwise}. \end{cases} \]
Then for all $t$, we have $X_t = Y_t$ almost surely. So $(Y_t)$ is a version of $(X_t)$. However, $X_t$ is continuous but $Y_t$ is not.
Theorem (Regularization of martingales). Let $(X_t)_{t \geq 0}$ be a martingale with respect to $(\mathcal{F}_t)$, and suppose $(\mathcal{F}_t)$ satisfies the usual conditions. Then there exists a version $(\tilde{X}_t)$ of $(X_t)$ which is cadlag.
Proof. For all $M > 0$, define
\[ \Omega_0^M = \left\{ \sup_{q \in D \cap [0, M]} |X_q| < \infty \right\} \cap \bigcap_{a < b \in \mathbb{Q}} \left\{ U_M[a, b, (X_t)_{t \in D \cap [0, M]}] < \infty \right\}. \]
Then we see that $\mathbb{P}(\Omega_0^M) = 1$ by Doob's upcrossing lemma. Now define
\[ \tilde{X}_t = \lim_{s \searrow t,\, s \in D} X_s \, \mathbf{1}_{\Omega_0^t}. \]
Then this is $\mathcal{F}_t$-measurable because $(\mathcal{F}_t)$ satisfies the usual conditions.
Take a sequence $t_n \searrow t$. Then $(X_{t_n})$ is a backwards martingale. So it converges almost surely and in $L^1$ to $\tilde{X}_t$. But we can write
\[ X_t = \mathbb{E}(X_{t_n} \mid \mathcal{F}_t). \]
Since $X_{t_n} \to \tilde{X}_t$ in $L^1$, and $\tilde{X}_t$ is $\mathcal{F}_t$-measurable, we know $X_t = \tilde{X}_t$ almost surely.
The fact that it is cadlag is an exercise.
Theorem ($L^p$ convergence of martingales). Let $p > 1$ and let $(X_t)_{t \geq 0}$ be a cadlag martingale. Then the following are equivalent:
(i) $(X_t)_{t \geq 0}$ is bounded in $L^p$.
(ii) $(X_t)_{t \geq 0}$ converges almost surely and in $L^p$.
(iii) There exists $Z \in L^p$ such that $X_t = \mathbb{E}(Z \mid \mathcal{F}_t)$ almost surely.
Theorem ($L^1$ convergence of martingales). Let $(X_t)_{t \geq 0}$ be a cadlag martingale. Then the following are equivalent:
(i) $(X_t)_{t \geq 0}$ is uniformly integrable.
(ii) $(X_t)_{t \geq 0}$ converges almost surely and in $L^1$ to $X_\infty$.
(iii) There exists $Z \in L^1$ such that $\mathbb{E}(Z \mid \mathcal{F}_t) = X_t$ almost surely.
Theorem (Optional stopping theorem). Let $(X_t)_{t \geq 0}$ be a uniformly integrable martingale, and let $S, T$ be any stopping times. Then
\[ \mathbb{E}(X_T \mid \mathcal{F}_S) = X_{S \wedge T}. \]