4 Stochastic differential equations

III Stochastic Calculus and Applications



4.1 Existence and uniqueness of solutions
After all this work, we can return to the problem we described in the introduction.
We wanted to make sense of equations of the form
\[
  \dot{x}(t) = F(x(t)) + \eta(t),
\]
where $\eta(t)$ is Gaussian white noise. We can now interpret this equation as saying
\[
  \mathrm{d}X_t = F(X_t)\,\mathrm{d}t + \mathrm{d}B_t,
\]
or equivalently, in integral form,
\[
  X_t - X_0 = \int_0^t F(X_s)\,\mathrm{d}s + B_t.
\]
In general, we can make the following definition:
Definition (Stochastic differential equation). Let $d, m \in \mathbb{N}$, and let $b \colon \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}^d$ and $\sigma \colon \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$ be locally bounded (and measurable). A solution to the stochastic differential equation $E(\sigma, b)$ given by
\[
  \mathrm{d}X_t = b(t, X_t)\,\mathrm{d}t + \sigma(t, X_t)\,\mathrm{d}B_t
\]
consists of
(i) a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ obeying the usual conditions;
(ii) an $m$-dimensional Brownian motion $B$ with $B_0 = 0$; and
(iii) an $(\mathcal{F}_t)$-adapted continuous process $X$ with values in $\mathbb{R}^d$ such that
\[
  X_t = X_0 + \int_0^t \sigma(s, X_s)\,\mathrm{d}B_s + \int_0^t b(s, X_s)\,\mathrm{d}s.
\]
If $X_0 = x \in \mathbb{R}^d$, then we say $X$ is a (weak) solution to $E_x(\sigma, b)$. It is a strong solution if it is adapted with respect to the canonical filtration of $B$.
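The integral equation in (iii) also suggests how solutions are approximated numerically: discretise time and accumulate the two integrals increment by increment. Below is a minimal Euler–Maruyama sketch for the one-dimensional case; the particular coefficients, step count and seed are illustrative assumptions, not part of the notes.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T, n, rng):
    """Discretise X_t = X_0 + int_0^t sigma(s, X_s) dB_s + int_0^t b(s, X_s) ds
    on [0, T] with n steps (case d = m = 1)."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        t = i * dt
        x[i + 1] = x[i] + b(t, x[i]) * dt + sigma(t, x[i]) * dB
    return x

# Ornstein-Uhlenbeck-type example: b(t, x) = -x, sigma(t, x) = 1,
# both Lipschitz in x, as required by the existence theorem below.
rng = np.random.default_rng(0)
path = euler_maruyama(lambda t, x: -x, lambda t, x: 1.0,
                      x0=1.0, T=1.0, n=1000, rng=rng)
```

This scheme is adapted by construction: each step uses only the path up to the current time and the next Brownian increment.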
Our goal is to prove existence and uniqueness of solutions to a general class
of SDEs. Before that, we must pin down what uniqueness of solutions means,
and there is more than one reasonable notion:
Definition (Uniqueness of solutions). For the stochastic differential equation $E(\sigma, b)$, we say there is
- uniqueness in law if for every $x \in \mathbb{R}^d$, all solutions to $E_x(\sigma, b)$ have the same distribution;
- pathwise uniqueness if, when $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ and $B$ are fixed, any two solutions $X, X'$ with $X_0 = X'_0$ are indistinguishable.
These two notions are not equivalent, as the following example shows:
Example (Tanaka). Consider the stochastic differential equation
\[
  \mathrm{d}X_t = \operatorname{sgn}(X_t)\,\mathrm{d}B_t, \quad X_0 = x,
\]
where
\[
  \operatorname{sgn}(x) =
  \begin{cases}
    +1 & x > 0\\
    -1 & x \le 0
  \end{cases}.
\]
This has a weak solution which is unique in law, but pathwise uniqueness fails.
To see the existence of solutions, let $X$ be a one-dimensional Brownian motion with $X_0 = x$, and set
\[
  B_t = \int_0^t \operatorname{sgn}(X_s)\,\mathrm{d}X_s,
\]
which is well-defined because $\operatorname{sgn}(X_s)$ is previsible and left-continuous. Then we have
\[
  x + \int_0^t \operatorname{sgn}(X_s)\,\mathrm{d}B_s = x + \int_0^t \operatorname{sgn}(X_s)^2\,\mathrm{d}X_s = x + X_t - X_0 = X_t.
\]
So it remains to show that $B$ is a Brownian motion. We already know that $B$ is
a continuous local martingale, so by Lévy's characterization, it suffices to show
its quadratic variation is $t$. We simply compute
\[
  \langle B, B \rangle_t = \int_0^t \mathrm{d}\langle X, X \rangle_s = t.
\]
So there is weak existence. The same argument shows that any solution is a
Brownian motion, so we have uniqueness in law.
Finally, observe that if $x = 0$ and $X$ is a solution, then $-X$ is also a solution
with the same Brownian motion. Indeed,
\[
  -X_t = -\int_0^t \operatorname{sgn}(X_s)\,\mathrm{d}B_s = \int_0^t \operatorname{sgn}(-X_s)\,\mathrm{d}B_s + 2\int_0^t \mathbf{1}_{X_s = 0}\,\mathrm{d}B_s,
\]
where the second term vanishes, since it is a continuous local martingale with
quadratic variation $\int_0^t \mathbf{1}_{X_s = 0}\,\mathrm{d}s = 0$. So pathwise uniqueness does not hold.
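Because $\operatorname{sgn}(X_s)^2 = 1$ pointwise, the algebra in the existence argument (build $B$ from $X$, then integrate $\operatorname{sgn}(X)$ against $B$ to recover $X$) survives discretisation exactly. A small numerical sketch of the construction; the step size and seed are illustrative assumptions:

```python
import numpy as np

def sgn(x):
    # Convention from the example: sgn(x) = +1 for x > 0 and -1 for x <= 0.
    return np.where(x > 0, 1.0, -1.0)

rng = np.random.default_rng(1)
n, dt, x0 = 10_000, 1e-4, 0.5
dX = rng.normal(0.0, np.sqrt(dt), size=n)        # increments of the BM X
X = x0 + np.concatenate([[0.0], np.cumsum(dX)])  # X_0 = x0

# B_t = int_0^t sgn(X_s) dX_s as a left-endpoint sum ...
dB = sgn(X[:-1]) * dX
# ... and x0 + int_0^t sgn(X_s) dB_s recovers X exactly, since sgn^2 = 1.
X_rebuilt = x0 + np.concatenate([[0.0], np.cumsum(sgn(X[:-1]) * dB)])
```

Note also that $\mathrm{d}B^2 = \mathrm{d}X^2$ step by step, which is the discrete shadow of the quadratic-variation computation above.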
In the other direction, however, it turns out pathwise uniqueness implies
uniqueness in law.
Theorem (Yamada–Watanabe). Assume weak existence and pathwise uniqueness hold. Then
(i) uniqueness in law holds;
(ii) for every $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ and $B$ and any $x \in \mathbb{R}^d$, there is a unique strong solution to $E_x(\sigma, b)$.
We will not prove this, since we will not actually need it.
The key theorem we are now heading for is the existence and uniqueness
of solutions to SDEs under reasonable conditions. As in the case of ODEs, we
need a Lipschitz condition:
Definition (Lipschitz coefficients). The coefficients $b \colon \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}^d$, $\sigma \colon \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$ are Lipschitz in $x$ if there exists a constant $K > 0$ such that for all $t \ge 0$ and $x, y \in \mathbb{R}^d$, we have
\begin{align*}
  |b(t, x) - b(t, y)| &\le K|x - y|\\
  |\sigma(t, x) - \sigma(t, y)| &\le K|x - y|
\end{align*}
Theorem. Assume $b, \sigma$ are Lipschitz in $x$. Then there is pathwise uniqueness
for $E(\sigma, b)$, and for every $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ satisfying the usual conditions and
every $(\mathcal{F}_t)$-Brownian motion $B$, for every $x \in \mathbb{R}^d$, there exists a unique strong
solution to $E_x(\sigma, b)$.
Proof. To simplify notation, we assume m = d = 1.
We first prove pathwise uniqueness. Suppose $X, X'$ are two solutions with
$X_0 = X'_0$. We will show that $\mathbb{E}[(X_t - X'_t)^2] = 0$. We will actually put some
bounds to control our variables. Define the stopping time
\[
  S = \inf\{t \ge 0 : |X_t| \ge n \text{ or } |X'_t| \ge n\}.
\]
By continuity, $S \to \infty$ as $n \to \infty$. We also fix a deterministic time $T > 0$. Then
whenever $t \in [0, T]$, we can bound, using the identity $(a + b)^2 \le 2a^2 + 2b^2$,
\[
  \mathbb{E}\big((X_{t \wedge S} - X'_{t \wedge S})^2\big) \le 2\,\mathbb{E}\left(\int_0^{t \wedge S} (\sigma(s, X_s) - \sigma(s, X'_s))\,\mathrm{d}B_s\right)^2 + 2\,\mathbb{E}\left(\int_0^{t \wedge S} (b(s, X_s) - b(s, X'_s))\,\mathrm{d}s\right)^2.
\]
We can apply the Lipschitz bound to the second term immediately, while we can
simplify the first term using the (corollary of the) Itô isometry
\[
  \mathbb{E}\left(\int_0^{t \wedge S} (\sigma(s, X_s) - \sigma(s, X'_s))\,\mathrm{d}B_s\right)^2 = \mathbb{E}\left(\int_0^{t \wedge S} (\sigma(s, X_s) - \sigma(s, X'_s))^2\,\mathrm{d}s\right).
\]
So using the Lipschitz bound, we have
\[
  \mathbb{E}\big((X_{t \wedge S} - X'_{t \wedge S})^2\big) \le 2K^2(1 + T)\,\mathbb{E}\left(\int_0^{t \wedge S} |X_s - X'_s|^2\,\mathrm{d}s\right) \le 2K^2(1 + T) \int_0^t \mathbb{E}\big(|X_{s \wedge S} - X'_{s \wedge S}|^2\big)\,\mathrm{d}s.
\]
We now use Grönwall's lemma:
Lemma. Let $h(t)$ be a function such that
\[
  h(t) \le c \int_0^t h(s)\,\mathrm{d}s
\]
for some constant $c$. Then
\[
  h(t) \le h(0)e^{ct}.
\]
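The lemma follows from an integrating-factor argument, assuming $h$ is nonnegative and integrable; the notes omit the proof, so here is a sketch. Set $H(t) = \int_0^t h(s)\,\mathrm{d}s$, so that $H'(t) = h(t) \le c\,H(t)$ and $H(0) = 0$. Then

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\bigl(e^{-ct}H(t)\bigr)
  = e^{-ct}\bigl(H'(t) - c\,H(t)\bigr) \le 0,
\qquad\text{so}\qquad
e^{-ct}H(t) \le H(0) = 0.
```

Since $h \ge 0$ gives $H \ge 0$, this forces $H \equiv 0$, and hence $h(t) \le c\,H(t) = 0 \le h(0)e^{ct}$.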
Applying this to
\[
  h(t) = \mathbb{E}\big((X_{t \wedge S} - X'_{t \wedge S})^2\big),
\]
we deduce that $h(t) \le h(0)e^{ct} = 0$. So we know that
\[
  \mathbb{E}\big(|X_{t \wedge S} - X'_{t \wedge S}|^2\big) = 0
\]
for every $t \in [0, T]$. Taking $n \to \infty$ and $T \to \infty$ gives pathwise uniqueness.
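The Grönwall estimate in this proof is really a stability statement: with the same Brownian path and Lipschitz coefficients, nearby starting points yield nearby solutions, with the gap controlled by a factor of the form $e^{Ct}$. A discrete sketch of this; the coefficients, step size and seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, T, eps = 2000, 1.0, 1e-6
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)  # one shared Brownian path

def euler(x0):
    # Euler scheme for dX = -X dt + 0.5 X dB (both coefficients Lipschitz in x).
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        x[i + 1] = x[i] - x[i] * dt + 0.5 * x[i] * dB[i]
    return x

# Perturb the initial condition by eps and reuse the SAME increments dB;
# the sup-gap between the two paths stays of order eps.
gap = np.max(np.abs(euler(1.0) - euler(1.0 + eps)))
```

With independent Brownian paths, by contrast, the two solutions would separate immediately; the shared driving noise is what pathwise uniqueness is about.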
We next prove existence of solutions. We fix $(\Omega, \mathcal{F}, (\mathcal{F}_t), \mathbb{P})$ and $B$, and define
\[
  F(X)_t = X_0 + \int_0^t \sigma(s, X_s)\,\mathrm{d}B_s + \int_0^t b(s, X_s)\,\mathrm{d}s.
\]
Then $X$ is a solution to $E_x(\sigma, b)$ iff $F(X) = X$ and $X_0 = x$. To find a fixed point,
we use Picard iteration. We fix $T > 0$, and define the $T$-norm of a continuous
adapted process $X$ as
\[
  \|X\|_T = \mathbb{E}\left(\sup_{t \le T} |X_t|^2\right)^{1/2}.
\]
In particular, if $X$ is a martingale, then this is the same as the norm on the
space of $L^2$-bounded martingales by Doob's inequality. Then
\[
  \mathcal{B} = \{X : \Omega \times [0, T] \to \mathbb{R} : \|X\|_T < \infty\}
\]
is a Banach space.
Claim. $\|F(0)\|_T < \infty$, and
\[
  \|F(X) - F(Y)\|_T^2 \le (2T + 8)K^2 \int_0^T \|X - Y\|_t^2\,\mathrm{d}t.
\]
We first see how this claim implies the theorem. First observe that the claim
implies $F$ indeed maps $\mathcal{B}$ into itself. We can then define a sequence of processes
$X^i$ by
\[
  X^0_t = x, \quad X^{i+1} = F(X^i).
\]
Then we have
\[
  \|X^{i+1} - X^i\|_T^2 \le C_T \int_0^T \|X^i - X^{i-1}\|_t^2\,\mathrm{d}t \le \cdots \le \|X^1 - X^0\|_T^2\,\frac{C_T^i T^i}{i!}.
\]
So we find that
\[
  \sum_{i=1}^\infty \|X^i - X^{i-1}\|_T^2 < \infty
\]
for all $T$. So $X^i$ converges to some $X$ almost surely and uniformly on $[0, T]$, and
$F(X) = X$. We then take $T \to \infty$ and we are done.
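The Picard scheme in this proof can be watched converging on a fixed discrete Brownian path: iterate the map $F$, with both integrals taken as left-endpoint sums, and record the sup-distance between successive iterates, which shrinks factorially fast in line with the bound above. The coefficients, grid and seed below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T, x0 = 1000, 1.0, 1.0
dt = T / n
t = np.linspace(0.0, T, n + 1)
dB = rng.normal(0.0, np.sqrt(dt), size=n)  # one fixed Brownian path

b = lambda s, x: -x            # Lipschitz in x
sigma = lambda s, x: 0.3 * x   # Lipschitz in x

def F(X):
    # Discrete fixed-point map: x0 + int sigma(s, X_s) dB_s + int b(s, X_s) ds.
    incr = sigma(t[:-1], X[:-1]) * dB + b(t[:-1], X[:-1]) * dt
    return x0 + np.concatenate([[0.0], np.cumsum(incr)])

X = np.full(n + 1, x0)         # the constant process X^0 = x
diffs = []
for _ in range(20):
    X_next = F(X)
    diffs.append(np.max(np.abs(X_next - X)))  # pathwise stand-in for the T-norm
    X = X_next
```

On a single path the sup-difference is only a stand-in for the $T$-norm, which involves an expectation, but the rapid decay of `diffs` is already visible.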
To prove the claim, we write
\[
  \|F(0)\|_T \le |X_0| + \int_0^T |b(s, 0)|\,\mathrm{d}s + \left\|\int_0^t \sigma(s, 0)\,\mathrm{d}B_s\right\|_T.
\]
The first two terms are constant, and we can bound the last by Doob's inequality
and the Itô isometry:
\[
  \left\|\int_0^t \sigma(s, 0)\,\mathrm{d}B_s\right\|_T \le 2\,\mathbb{E}\left(\left(\int_0^T \sigma(s, 0)\,\mathrm{d}B_s\right)^2\right)^{1/2} = 2\left(\int_0^T \sigma(s, 0)^2\,\mathrm{d}s\right)^{1/2}.
\]
To prove the second part, we use
\[
  \|F(X) - F(Y)\|_T^2 \le 2\,\mathbb{E}\left(\sup_{t \le T}\left|\int_0^t (b(s, X_s) - b(s, Y_s))\,\mathrm{d}s\right|^2\right) + 2\,\mathbb{E}\left(\sup_{t \le T}\left|\int_0^t (\sigma(s, X_s) - \sigma(s, Y_s))\,\mathrm{d}B_s\right|^2\right).
\]
We can bound the first term with Cauchy–Schwarz by
\[
  T\,\mathbb{E}\left(\int_0^T |b(s, X_s) - b(s, Y_s)|^2\,\mathrm{d}s\right) \le TK^2 \int_0^T \|X - Y\|_t^2\,\mathrm{d}t,
\]
and the second term with Doob's inequality by
\[
  4\,\mathbb{E}\left(\int_0^T |\sigma(s, X_s) - \sigma(s, Y_s)|^2\,\mathrm{d}s\right) \le 4K^2 \int_0^T \|X - Y\|_t^2\,\mathrm{d}t.
\]