2 Semi-martingales

III Stochastic Calculus and Applications



2.4 Quadratic variation
Physicists are used to dropping all terms above first order. It turns out that
Brownian motion, and continuous local martingales in general, oscillate so wildly
that second-order terms become important. We first make the following definition:
Definition (Uniformly on compact sets in probability). For a sequence of
processes $(X^n)$ and a process $X$, we say that $X^n \to X$ u.c.p. iff
$$\mathbb{P}\left( \sup_{s \in [0, t]} |X^n_s - X_s| > \varepsilon \right) \to 0 \text{ as } n \to \infty \quad \text{for all } t > 0,\ \varepsilon > 0.$$
Theorem. Let $M$ be a continuous local martingale with $M_0 = 0$. Then there
exists a unique (up to indistinguishability) continuous adapted increasing process
$(\langle M \rangle_t)_{t \geq 0}$ such that $\langle M \rangle_0 = 0$ and
$M_t^2 - \langle M \rangle_t$ is a continuous local martingale. Moreover,
$$\langle M \rangle_t = \lim_{n \to \infty} \langle M \rangle_t^{(n)}, \qquad \langle M \rangle_t^{(n)} = \sum_{i=1}^{\lceil 2^n t \rceil} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2,$$
where the limit is taken u.c.p.
Definition (Quadratic variation). $\langle M \rangle$ is called the quadratic variation of $M$.
It is probably more useful to understand $\langle M \rangle_t$ in terms of the explicit formula,
and the fact that $M_t^2 - \langle M \rangle_t$ is a continuous local martingale is a convenient
property.
Example. Let $B$ be a standard Brownian motion. Then $B_t^2 - t$ is a martingale.
Thus, $\langle B \rangle_t = t$.
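To see the dyadic approximation at work, here is a short numerical sketch (illustrative only, not part of the notes): we sample a Brownian path on the grid $2^{-n}\mathbb{Z} \cap [0, 1]$ and check that $\langle B \rangle^{(n)}_1$, the sum of squared increments, is close to $\langle B \rangle_1 = 1$.

```python
import random

# Illustrative sketch: approximate the dyadic quadratic variation
# <B>^(n)_t of a simulated Brownian path and compare with <B>_t = t.

random.seed(0)

n = 16                  # dyadic level: grid spacing 2^-n
t = 1.0
dt = 2.0 ** -n
steps = int(2 ** n * t)

# Each increment B_{i 2^-n} - B_{(i-1) 2^-n} is N(0, 2^-n), so
# <B>^(n)_t is the sum of their squares over the grid.
qv = sum(random.gauss(0.0, dt ** 0.5) ** 2 for _ in range(steps))

print(round(qv, 3))  # close to t = 1
```

The fluctuation of $\langle B \rangle^{(n)}_1$ around $1$ has standard deviation $\sqrt{2 \cdot 2^{-n}}$, so larger $n$ gives a visibly tighter approximation.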
The proof is long and mechanical, but not hard. All the magic happened
when we used the magical Doob's inequality to show that $\mathcal{M}^2_c$ and $\mathcal{M}^2$ are
Hilbert spaces.
Proof. To show uniqueness, we use that finite variation and local martingale
are incompatible. Suppose $(A_t)$ and $(\tilde{A}_t)$ both satisfy the conditions for $\langle M \rangle$. Then
$A_t - \tilde{A}_t = (M_t^2 - \tilde{A}_t) - (M_t^2 - A_t)$ is a continuous adapted local martingale starting
at 0. Moreover, both $A_t$ and $\tilde{A}_t$ are increasing, hence $A - \tilde{A}$ has finite variation. So
$A - \tilde{A} = 0$ almost surely.
To show existence, we need to show that the limit exists and has the right
property. We do this in steps.
Claim. The result holds if $M$ is in fact bounded.
Suppose $|M(\omega, t)| \leq C$ for all $(\omega, t)$. Then $M \in \mathcal{M}^2_c$. Fix $T > 0$ deterministic.
Let
$$X^n_t = \sum_{i=1}^{\lceil 2^n T \rceil} M_{(i-1) 2^{-n}} (M_{i 2^{-n} \wedge t} - M_{(i-1) 2^{-n} \wedge t}).$$
This is defined so that
$$\langle M \rangle^{(n)}_{k 2^{-n}} = M^2_{k 2^{-n}} - 2 X^n_{k 2^{-n}}.$$
This reduces the study of $\langle M \rangle^{(n)}$ to that of $X^n_{k 2^{-n}}$.
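The identity above is just summation by parts: for any real sequence with $m_0 = 0$, one has $\sum_{i \le k} (m_i - m_{i-1})^2 = m_k^2 - 2 \sum_{i \le k} m_{i-1}(m_i - m_{i-1})$, as expanding $(m_i - m_{i-1})^2 = m_i^2 - m_{i-1}^2 - 2 m_{i-1}(m_i - m_{i-1})$ and telescoping shows. A quick numerical sketch (illustrative only, with an arbitrary random sequence standing in for the sampled path):

```python
import random

# Illustrative check of the summation-by-parts identity behind X^n:
#   sum_i (m_i - m_{i-1})^2 = m_k^2 - 2 * sum_i m_{i-1} * (m_i - m_{i-1}),
# valid for any real sequence with m_0 = 0.

random.seed(1)
m = [0.0] + [random.uniform(-1, 1) for _ in range(50)]  # m_0 = 0

qv = sum((m[i] - m[i - 1]) ** 2 for i in range(1, len(m)))
x = sum(m[i - 1] * (m[i] - m[i - 1]) for i in range(1, len(m)))

assert abs(qv - (m[-1] ** 2 - 2 * x)) < 1e-12
print("identity verified")
```

Nothing probabilistic is involved yet; the martingale structure only enters when we study the process $X^n$ itself.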
We check that $(X^n_t)$ is a Cauchy sequence in $\mathcal{M}^2_c$. The fact that it is a
martingale is an immediate computation. To show it is Cauchy, for $n \geq m$, we
calculate
$$X^n - X^m = \sum_{i=1}^{\lceil 2^n T \rceil} (M_{(i-1) 2^{-n}} - M_{\lfloor (i-1) 2^{m-n} \rfloor 2^{-m}}) (M_{i 2^{-n}} - M_{(i-1) 2^{-n}}).$$
We now take the expectation of the square to get
$$\begin{aligned}
\mathbb{E}(X^n - X^m)^2
&= \mathbb{E} \sum_{i=1}^{\lceil 2^n T \rceil} (M_{(i-1) 2^{-n}} - M_{\lfloor (i-1) 2^{m-n} \rfloor 2^{-m}})^2 (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \\
&\le \mathbb{E}\left( \sup_{|s-t| \le 2^{-m}} |M_t - M_s|^2 \sum_{i=1}^{\lceil 2^n T \rceil} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \right) \\
&= \mathbb{E}\left( \sup_{|s-t| \le 2^{-m}} |M_t - M_s|^2 \, \langle M \rangle^{(n)}_T \right) \\
&\le \mathbb{E}\left( \sup_{|s-t| \le 2^{-m}} |M_t - M_s|^4 \right)^{1/2} \mathbb{E}\left( (\langle M \rangle^{(n)}_T)^2 \right)^{1/2},
\end{aligned}$$
where the first equality uses the orthogonality of martingale increments and the last step is Cauchy--Schwarz.
We shall show that the second factor is bounded, while the first factor tends to
zero as $m \to \infty$. Neither is surprising: the first factor vanishing in
the limit corresponds to $M$ being continuous, and the second is bounded
since $M$ itself is bounded.
To show that the first factor tends to zero, we note that we have
$$|M_t - M_s|^4 \le 16 C^4,$$
and moreover
$$\sup_{|s-t| \le 2^{-m}} |M_t - M_s| \to 0 \text{ as } m \to \infty$$
by uniform continuity on $[0, T]$. So we are done by the dominated convergence theorem.
To show the second factor is bounded, we compute (writing $N = \lceil 2^n T \rceil$)
$$\begin{aligned}
\mathbb{E}\left( (\langle M \rangle^{(n)}_T)^2 \right)
&= \mathbb{E}\left( \sum_{i=1}^{N} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \right)^2 \\
&= \sum_{i=1}^{N} \mathbb{E} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^4
+ 2 \sum_{i=1}^{N} \mathbb{E}\left( (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \sum_{k=i+1}^{N} (M_{k 2^{-n}} - M_{(k-1) 2^{-n}})^2 \right).
\end{aligned}$$
We use the martingale property and orthogonal increments to rearrange the
off-diagonal term as
$$\mathbb{E}\left( (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 (M_{N 2^{-n}} - M_{i 2^{-n}})^2 \right).$$
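In more detail (a step the notes leave implicit), the rearrangement follows from the tower property: conditioning on $\mathcal{F}_{i 2^{-n}}$, the cross terms in $(M_{N 2^{-n}} - M_{i 2^{-n}})^2 = \big( \sum_{k=i+1}^{N} (M_{k 2^{-n}} - M_{(k-1) 2^{-n}}) \big)^2$ have vanishing conditional expectation, so

```latex
\begin{aligned}
\mathbb{E}\Big[ (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \sum_{k=i+1}^{N} (M_{k 2^{-n}} - M_{(k-1) 2^{-n}})^2 \Big]
&= \mathbb{E}\Big[ (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \,
   \mathbb{E}\big[ (M_{N 2^{-n}} - M_{i 2^{-n}})^2 \,\big|\, \mathcal{F}_{i 2^{-n}} \big] \Big] \\
&= \mathbb{E}\big[ (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 (M_{N 2^{-n}} - M_{i 2^{-n}})^2 \big].
\end{aligned}
```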
Taking some sups, we get
$$\mathbb{E}\left( (\langle M \rangle^{(n)}_T)^2 \right)
\le 12 C^2 \, \mathbb{E}\left( \sum_{i=1}^{N} (M_{i 2^{-n}} - M_{(i-1) 2^{-n}})^2 \right)
= 12 C^2 \, \mathbb{E} (M_{N 2^{-n}} - M_0)^2
\le 12 C^2 \cdot 4 C^2.$$
So done.
So we now have $X^n \to X$ in $\mathcal{M}^2_c$ for some $X \in \mathcal{M}^2_c$. In particular, we have
$$\sup_t |X^n_t - X_t| \to 0 \text{ in } L^2.$$
So we know that
$$\sup_t |X^n_t - X_t| \to 0$$
almost surely along a subsequence $\Lambda$.
Let $N$ be the event on which this convergence fails. We define
$$A^{(T)}_t = \begin{cases} M_t^2 - 2 X_t & \omega \not\in N \\ 0 & \omega \in N. \end{cases}$$
Then $A^{(T)}$ is continuous and adapted since $M$ and $X$ are, and $(M^2_{t \wedge T} - A^{(T)}_{t \wedge T})_t$ is a
martingale since $X$ is. Finally, $A^{(T)}$ is increasing, since $M^2_t - 2 X^n_t$ is increasing
on $2^{-n} \mathbb{Z} \cap [0, T]$ and the limit is uniform. So this $A^{(T)}$ basically satisfies all the
properties we want $\langle M \rangle_t$ to satisfy, except that it is only defined up to the fixed time $T$.
We next observe that for any $T \geq 1$, $A^{(T)}_{t \wedge T} = A^{(T+1)}_{t \wedge T}$ for all $t$ almost surely.
This essentially follows from the same uniqueness argument as we had at the
beginning of the proof. Thus, there is a process $(\langle M \rangle_t)_{t \geq 0}$ such that
$$\langle M \rangle_t = A^{(T)}_t$$
for all $t \in [0, T]$ and $T \in \mathbb{N}$, almost surely. Then this is the desired process. So
we have constructed $\langle M \rangle$ in the case where $M$ is bounded.
Claim. $\langle M \rangle^{(n)} \to \langle M \rangle$ u.c.p.
Recall that
$$\langle M \rangle^{(n)}_t = M^2_{2^{-n} \lfloor 2^n t \rfloor} - 2 X^n_{2^{-n} \lfloor 2^n t \rfloor}.$$
We also know that
$$\sup_{t \le T} |X^n_t - X_t| \to 0$$
in $L^2$, hence also in probability. So we have
$$|\langle M \rangle_t - \langle M \rangle^{(n)}_t|
\le \sup_{t \le T} |M^2_{2^{-n} \lfloor 2^n t \rfloor} - M^2_t|
+ 2 \sup_{t \le T} |X^n_{2^{-n} \lfloor 2^n t \rfloor} - X_{2^{-n} \lfloor 2^n t \rfloor}|
+ 2 \sup_{t \le T} |X_{2^{-n} \lfloor 2^n t \rfloor} - X_t|.$$
The first and last terms tend to $0$ in probability since $M$ and $X$ are uniformly
continuous on $[0, T]$. The second term converges to zero by our previous assertion.
So we are done.
Claim. The theorem holds for any continuous local martingale $M$.
We let $T_n = \inf\{t \geq 0 : |M_t| \geq n\}$. Then $(T_n)$ reduces $M$, and $M^{T_n}$ is a
bounded continuous martingale. We set
$$A^n = \langle M^{T_n} \rangle.$$
Then $(A^n_t)$ and $(A^{n+1}_{t \wedge T_n})$ are indistinguishable for $t < T_n$ by the uniqueness argument.
Thus there is a process $\langle M \rangle$ such that $\langle M \rangle_{t \wedge T_n}$ and $A^n_t$ are indistinguishable
for all $n$. Clearly, $\langle M \rangle$ is increasing since the $A^n$ are, and $M^2_{t \wedge T_n} - \langle M \rangle_{t \wedge T_n}$ is a
martingale for every $n$, so $M^2_t - \langle M \rangle_t$ is a continuous local martingale.
Claim. $\langle M \rangle^{(n)} \to \langle M \rangle$ u.c.p.
We have seen that
$$\langle M^{T_k} \rangle^{(n)} \to \langle M^{T_k} \rangle \text{ u.c.p.}$$
for every $k$. So
$$\mathbb{P}\left( \sup_{t \le T} |\langle M \rangle^{(n)}_t - \langle M \rangle_t| > \varepsilon \right)
\le \mathbb{P}(T_k < T)
+ \mathbb{P}\left( \sup_{t \le T} |\langle M^{T_k} \rangle^{(n)}_t - \langle M^{T_k} \rangle_t| > \varepsilon \right).$$
So we can first pick $k$ large enough that the first term is small, then pick $n$
large enough that the second is small.
There are a few easy consequences of this theorem.
Fact. Let $M$ be a continuous local martingale, and let $T$ be a stopping time.
Then almost surely for all $t \geq 0$,
$$\langle M^T \rangle_t = \langle M \rangle_{t \wedge T}.$$
Proof. Since $M_t^2 - \langle M \rangle_t$ is a continuous local martingale, so is
$M^2_{t \wedge T} - \langle M \rangle_{t \wedge T} = (M^T)^2_t - \langle M \rangle_{t \wedge T}$. So we are done by uniqueness.
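A numerical sketch of this fact (illustrative only, and using a deterministic stopping time $T$ for simplicity): the dyadic quadratic variation of a Brownian path frozen at time $T$ levels off at $t \wedge T$.

```python
import random

# Illustrative sketch: for a Brownian path stopped at a deterministic
# time T, the dyadic quadratic variation up to time t approximates
# <B^T>_t = t ∧ T, since increments after T vanish.

random.seed(2)
n = 14                       # grid spacing 2^-n
dt = 2.0 ** -n
t_stop, t = 0.5, 1.0

b, path = 0.0, [0.0]
for i in range(1, int(2 ** n * t) + 1):
    if i * dt <= t_stop:     # freeze the path after t_stop
        b += random.gauss(0.0, dt ** 0.5)
    path.append(b)

qv = sum((path[i] - path[i - 1]) ** 2 for i in range(1, len(path)))
print(round(qv, 3))  # close to t ∧ T = 0.5
```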
Fact. Let $M$ be a continuous local martingale with $M_0 = 0$. Then $M = 0$ iff
$\langle M \rangle = 0$.
Proof. If $M = 0$, then $\langle M \rangle = 0$. Conversely, if $\langle M \rangle = 0$, then $M^2$ is a non-negative
continuous local martingale, hence a supermartingale. Thus $\mathbb{E} M_t^2 \leq \mathbb{E} M_0^2 = 0$.
Proposition. Let $M \in \mathcal{M}^2_c$. Then $M^2 - \langle M \rangle$ is a uniformly integrable martingale,
and
$$\|M - M_0\|_{\mathcal{M}^2} = (\mathbb{E} \langle M \rangle_\infty)^{1/2}.$$
Proof. We will show that $\langle M \rangle_\infty \in L^1$. This then implies
$$|M_t^2 - \langle M \rangle_t| \le \sup_{t \geq 0} M_t^2 + \langle M \rangle_\infty.$$
Then the right-hand side is in $L^1$. Since $M^2 - \langle M \rangle$ is a local martingale, this
implies that it is in fact a uniformly integrable martingale.
To show $\langle M \rangle_\infty \in L^1$, we let
$$S_n = \inf\{t \geq 0 : \langle M \rangle_t \geq n\}.$$
Then $S_n \nearrow \infty$, each $S_n$ is a stopping time, and moreover $\langle M \rangle_{t \wedge S_n} \leq n$. So we have
$$|M^2_{t \wedge S_n} - \langle M \rangle_{t \wedge S_n}| \le n + \sup_{t \geq 0} M_t^2,$$
and the second term is in $L^1$. So $M^2_{t \wedge S_n} - \langle M \rangle_{t \wedge S_n}$ is a true martingale.
So
$$\mathbb{E} M^2_{t \wedge S_n} - \mathbb{E} M_0^2 = \mathbb{E} \langle M \rangle_{t \wedge S_n}.$$
Taking the limit $t \to \infty$, we know $\mathbb{E} M^2_{t \wedge S_n} \to \mathbb{E} M^2_{S_n}$ by dominated convergence.
Since $\langle M \rangle_{t \wedge S_n}$ is increasing, we also have $\mathbb{E} \langle M \rangle_{t \wedge S_n} \to \mathbb{E} \langle M \rangle_{S_n}$ by monotone
convergence. We can take $n \to \infty$, and by the same justification, we have
$$\mathbb{E} \langle M \rangle_\infty = \mathbb{E} M_\infty^2 - \mathbb{E} M_0^2 = \mathbb{E} (M_\infty - M_0)^2 < \infty.$$