4 Stochastic differential equations

III Stochastic Calculus and Applications



4.3 Representations of solutions to PDEs
Recall that in Advanced Probability, we learnt that we can represent the
solution to Laplace's equation via Brownian motion, namely if $D$ is a suitably
nice domain and $g: \partial D \to \mathbb{R}$ is a function, then the solution
to Laplace's equation on $D$ with boundary conditions $g$ is given by
\[
  u(x) = \mathbb{E}_x[g(B_T)],
\]
where $T$ is the first hitting time of the boundary $\partial D$.
A similar statement we can make is that if we want to solve the heat equation
\[
  \frac{\partial u}{\partial t} = \nabla^2 u
\]
with initial condition $u(x, 0) = u_0(x)$, then we can write the solution as
\[
  u(x, t) = \mathbb{E}_x[u_0(\sqrt{2} B_t)].
\]
This is just a fancy way to say that the Green's function for the heat equation
is a Gaussian, but is a good way to think about it nevertheless.
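As a quick sanity check of this representation (a small illustration, not part of the notes): for $u_0(x) = x^2$, the heat equation $\partial_t u = \nabla^2 u$ has explicit solution $u(x, t) = x^2 + 2t$, and a Monte Carlo estimate of $\mathbb{E}_x[u_0(\sqrt{2} B_t)]$ should reproduce it.

```python
import numpy as np

rng = np.random.default_rng(0)

def heat_solution_mc(u0, x, t, n_samples=200_000):
    """Estimate u(x, t) = E_x[u0(sqrt(2) B_t)] by Monte Carlo.

    B_t ~ N(0, t), so sqrt(2) B_t ~ N(0, 2t).
    """
    B_t = rng.normal(0.0, np.sqrt(t), size=n_samples)
    return np.mean(u0(x + np.sqrt(2) * B_t))

# u0(x) = x^2: the exact solution of u_t = u_xx is u(x, t) = x^2 + 2t.
x, t = 1.0, 0.5
estimate = heat_solution_mc(lambda y: y**2, x, t)
exact = x**2 + 2 * t  # = 2.0
```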
In general, we would like to associate PDEs to certain stochastic processes.
Recall that a stochastic differential equation is generally of the form
\[
  dX_t = b(X_t)\, dt + \sigma(X_t)\, dB_t
\]
for some $b: \mathbb{R}^d \to \mathbb{R}^d$ and
$\sigma: \mathbb{R}^d \to \mathbb{R}^{d \times m}$ which are measurable and
locally bounded. Here we assume these functions do not have time dependence.
We can then associate to this a differential operator $L$ defined by
\[
  L = \frac{1}{2} \sum_{i,j} a_{ij} \partial_i \partial_j + \sum_i b_i \partial_i,
\]
where $a = \sigma \sigma^T$.
Example. If $b = 0$ and $\sigma = \sqrt{2} I$, then $L = \Delta$ is the
standard Laplacian.
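One can check this example numerically (a minimal sketch of my own, not from the notes): apply $L$ via finite differences for a given $\sigma$ and $b$, and verify that with $\sigma = \sqrt{2} I$, $b = 0$ it acts as the Laplacian, e.g. $\Delta(x^2 + y^2) = 4$ everywhere.

```python
import numpy as np

def generator_apply(f, x, sigma, b, h=1e-4):
    """Numerically apply L = (1/2) sum_ij a_ij d_i d_j + sum_i b_i d_i,
    where a = sigma sigma^T, using central finite differences."""
    d = len(x)
    a = sigma @ sigma.T
    total = 0.0
    # Second-order part: (1/2) a_ij d_i d_j f.
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
            d2 = (f(x + ei + ej) - f(x + ei - ej)
                  - f(x - ei + ej) + f(x - ei - ej)) / (4 * h**2)
            total += 0.5 * a[i, j] * d2
    # First-order (drift) part: b_i d_i f.
    for i in range(d):
        ei = np.eye(d)[i] * h
        total += b[i] * (f(x + ei) - f(x - ei)) / (2 * h)
    return total

# With sigma = sqrt(2) I and b = 0, L is the Laplacian: for
# f(x, y) = x^2 + y^2 we have Delta f = 4 at every point.
f = lambda p: p[0]**2 + p[1]**2
val = generator_apply(f, np.array([0.3, -1.2]),
                      sigma=np.sqrt(2) * np.eye(2), b=np.zeros(2))
```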
The basic computation is the following result, which is a standard application
of the Itô formula:
Proposition. Let $x \in \mathbb{R}^d$, and $X$ a solution to $E_x(\sigma, b)$.
Then for every $f: \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}$ that is
$C^1$ in $\mathbb{R}_+$ and $C^2$ in $\mathbb{R}^d$, the process
\[
  M_t^f = f(t, X_t) - f(0, X_0) - \int_0^t \left( \frac{\partial}{\partial s} + L \right) f(s, X_s)\, ds
\]
is a continuous local martingale.
We first apply this to the Dirichlet–Poisson problem, which is essentially to
solve $Lu = -f$. To be precise, let $U \subseteq \mathbb{R}^d$ be non-empty,
bounded and open; $f \in C_b(U)$ and $g \in C_b(\partial U)$. We then want to
find a $u \in C^2(\bar{U}) = C^2(U) \cap C(\bar{U})$ such that
\[
  Lu(x) = -f(x) \quad \text{for } x \in U,
\]
\[
  u(x) = g(x) \quad \text{for } x \in \partial U.
\]
If $f = 0$, this is called the Dirichlet problem; if $g = 0$, this is called
the Poisson problem.
We will have to impose the following technical condition on a:
Definition (Uniformly elliptic). We say $a: \bar{U} \to \mathbb{R}^{d \times d}$
is uniformly elliptic if there is a constant $c > 0$ such that for all
$\xi \in \mathbb{R}^d$ and $x \in \bar{U}$, we have
\[
  \xi^T a(x) \xi \geq c |\xi|^2.
\]
If $a$ is symmetric (which it is in our case), this is the same as asking for
the smallest eigenvalue of $a$ to be bounded away from 0.
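For a single symmetric matrix this equivalence is easy to check numerically (a small illustration of my own): the infimum of $\xi^T a \xi / |\xi|^2$ over $\xi \neq 0$ is exactly the smallest eigenvalue of $a$.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed symmetric positive definite matrix playing the role of a(x).
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # eigenvalues 1 and 3

# The best possible ellipticity constant c is the smallest eigenvalue.
c = np.linalg.eigvalsh(a).min()

# xi^T a xi >= c |xi|^2 for every direction xi, with near-equality
# along the eigenvector of the smallest eigenvalue.
ratios = []
for _ in range(1000):
    xi = rng.normal(size=2)
    ratios.append(xi @ a @ xi / (xi @ xi))
min_ratio = min(ratios)
```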
It would be very nice if we could write down a solution to the Dirichlet–Poisson
problem using a solution to $E_x(\sigma, b)$, and then simply check that it
works. We can indeed do that, but it takes a bit more time than we have.
Instead, we shall prove a slightly weaker result: if we happen to have a
solution, it must be given by our formula involving the SDE. So we first note
the following theorem without proof:
Theorem. Assume $U$ has a smooth boundary (or satisfies the exterior cone
condition), $a, b$ are Hölder continuous and $a$ is uniformly elliptic. Then
for every Hölder continuous $f: \bar{U} \to \mathbb{R}$ and any continuous
$g: \partial U \to \mathbb{R}$, the Dirichlet–Poisson problem has a solution.
The main theorem is the following:
Theorem. Let $\sigma$ and $b$ be bounded measurable and $\sigma\sigma^T$
uniformly elliptic, $U \subseteq \mathbb{R}^d$ as above. Let $u$ be a solution
to the Dirichlet–Poisson problem and $X$ a solution to $E_x(\sigma, b)$ for
some $x \in \mathbb{R}^d$. Define the stopping time
\[
  T_U = \inf\{t \geq 0 : X_t \not\in U\}.
\]
Then $\mathbb{E} T_U < \infty$ and
\[
  u(x) = \mathbb{E}_x\left( g(X_{T_U}) + \int_0^{T_U} f(X_s)\, ds \right).
\]
In particular, the solution to the PDE is unique.
Proof. Our previous proposition applies to functions defined on all of
$\mathbb{R}^d$, while $u$ is just defined on $U$. So we set
\[
  U_n = \left\{ x \in U : \operatorname{dist}(x, \partial U) > \frac{1}{n} \right\}, \quad
  T_n = \inf\{t \geq 0 : X_t \not\in U_n\},
\]
and pick $u_n \in C_b^2(\mathbb{R}^d)$ such that $u|_{U_n} = u_n|_{U_n}$.
Recalling our previous notation, let
\[
  M_t^n = (M^{u_n})_t^{T_n} = u_n(X_{t \wedge T_n}) - u_n(X_0) - \int_0^{t \wedge T_n} L u_n(X_s)\, ds.
\]
Then this is a continuous local martingale by the proposition, and it is
bounded, hence a true martingale. Thus for $x \in U$ and $n$ large enough, the
martingale property implies
\[
  u(x) = u_n(x) = \mathbb{E}\left( u(X_{t \wedge T_n}) - \int_0^{t \wedge T_n} Lu(X_s)\, ds \right)
  = \mathbb{E}\left( u(X_{t \wedge T_n}) + \int_0^{t \wedge T_n} f(X_s)\, ds \right).
\]
We would be done if we could take $n \to \infty$. To do so, we first show that
$\mathbb{E}[T_U] < \infty$. Note that this does not depend on $f$ and $g$. So
we can take $f = 1$ and $g = 0$, and let $v$ be a solution, so that $Lv = -1$.
Then we have
\[
  \mathbb{E}(t \wedge T_n) = \mathbb{E}\left( -\int_0^{t \wedge T_n} Lv(X_s)\, ds \right)
  = v(x) - \mathbb{E}(v(X_{t \wedge T_n})).
\]
Since $v$ is bounded, by dominated/monotone convergence, we can take the limit
to get $\mathbb{E}(T_U) < \infty$.
Thus, we know that $t \wedge T_n \to T_U$ as $t \to \infty$ and $n \to \infty$.
Since
\[
  \mathbb{E}\left( \int_0^{T_U} |f(X_s)|\, ds \right) \leq \|f\|_\infty\, \mathbb{E}[T_U] < \infty,
\]
the dominated convergence theorem tells us
\[
  \mathbb{E}\left( \int_0^{t \wedge T_n} f(X_s)\, ds \right) \to \mathbb{E}\left( \int_0^{T_U} f(X_s)\, ds \right).
\]
Since $u$ is continuous on $\bar{U}$, we also have
\[
  \mathbb{E}(u(X_{t \wedge T_n})) \to \mathbb{E}(u(X_{T_U})) = \mathbb{E}(g(X_{T_U})).
\]
Taking these limits in the formula above gives the claimed expression for $u(x)$.
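This representation lends itself directly to Monte Carlo. A minimal sketch under assumptions of my own choosing: take $d = 1$, $U = (0, 1)$, $\sigma = 1$, $b = 0$ (so $L = \frac{1}{2} \frac{d^2}{dx^2}$), $f = 0$, and $g(0) = 0$, $g(1) = 1$. The Dirichlet problem then has explicit solution $u(x) = x$, and $\mathbb{E}_x[g(X_{T_U})]$ is just the probability that Brownian motion started at $x$ hits $1$ before $0$.

```python
import numpy as np

rng = np.random.default_rng(2)

def dirichlet_mc(x, n_paths=5000, dt=1e-3):
    """Estimate u(x) = E_x[g(B_{T_U})] on U = (0, 1) with g(0) = 0,
    g(1) = 1, by running Euler paths of dX = dB until they exit U."""
    hits_right = 0
    for _ in range(n_paths):
        pos = x
        while 0.0 < pos < 1.0:
            pos += np.sqrt(dt) * rng.standard_normal()
        if pos >= 1.0:
            hits_right += 1
    return hits_right / n_paths

# Exact answer for the Dirichlet problem above: u(0.3) = 0.3.
estimate = dirichlet_mc(0.3)
```

The time-discretisation introduces a small overshoot bias at the boundary, so the estimate is only accurate to a few percent at this step size.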
We can use SDEs to solve the Cauchy problem for parabolic equations as well,
just like the heat equation. The problem is as follows: for
$f \in C_b^2(\mathbb{R}^d)$, we want to find
$u: \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}$ that is $C^1$ in
$\mathbb{R}_+$ and $C^2$ in $\mathbb{R}^d$ such that
\[
  \frac{\partial u}{\partial t} = Lu \quad \text{on } \mathbb{R}_+ \times \mathbb{R}^d,
\]
\[
  u(0, \cdot) = f \quad \text{on } \mathbb{R}^d.
\]
Again we will need the following theorem:
Theorem. For every $f \in C_b^2(\mathbb{R}^d)$, there exists a solution to the
Cauchy problem.
Theorem. Let $u$ be a solution to the Cauchy problem. Let $X$ be a solution to
$E_x(\sigma, b)$ for $x \in \mathbb{R}^d$ and $0 \leq s \leq t$. Then
\[
  \mathbb{E}_x(f(X_t) \mid \mathcal{F}_s) = u(t - s, X_s).
\]
In particular,
\[
  u(t, x) = \mathbb{E}_x(f(X_t)).
\]
In particular, this implies $X_t$ is a continuous Markov process.
Proof. The martingale has $\frac{\partial}{\partial t} + L$, but the heat
equation has $\frac{\partial}{\partial t} - L$. So we set
$g(s, x) = u(t - s, x)$. Then
\[
  \left( \frac{\partial}{\partial s} + L \right) g(s, x) = -\frac{\partial u}{\partial t}(t - s, x) + Lu(t - s, x) = 0.
\]
So $g(s, X_s) - g(0, X_0)$ is a martingale (boundedness is an exercise), and
hence
\[
  u(t - s, X_s) = g(s, X_s) = \mathbb{E}(g(t, X_t) \mid \mathcal{F}_s) = \mathbb{E}(u(0, X_t) \mid \mathcal{F}_s) = \mathbb{E}(f(X_t) \mid \mathcal{F}_s).
\]
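To see the representation $u(t, x) = \mathbb{E}_x(f(X_t))$ in action for an SDE with drift, here is a Monte Carlo sketch with an example of my own choosing: for the Ornstein–Uhlenbeck process $dX_t = -X_t\, dt + dB_t$ and $f(x) = x$, the mean $\mathbb{E}_x[X_t] = x e^{-t}$ is known in closed form, so it doubles as the solution of the corresponding parabolic equation.

```python
import numpy as np

rng = np.random.default_rng(3)

def cauchy_mc(f, x, t, n_paths=50_000, n_steps=500):
    """Estimate u(t, x) = E_x[f(X_t)] with Euler-Maruyama paths of the
    Ornstein-Uhlenbeck SDE dX = -X dt + dB (sigma = 1, b(x) = -x)."""
    dt = t / n_steps
    X = np.full(n_paths, float(x))
    for _ in range(n_steps):
        X += -X * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.mean(f(X))

x, t = 1.0, 1.0
estimate = cauchy_mc(lambda y: y, x, t)
exact = x * np.exp(-t)  # E_x[X_t] = x e^{-t} for the OU process
```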
There is a generalization, the Feynman–Kac formula.
Theorem (Feynman–Kac formula). Let $f \in C_b^2(\mathbb{R}^d)$ and
$V \in C_b(\mathbb{R}^d)$, and suppose that
$u: \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{R}$ satisfies
\[
  \frac{\partial u}{\partial t} = Lu + Vu \quad \text{on } \mathbb{R}_+ \times \mathbb{R}^d,
\]
\[
  u(0, \cdot) = f \quad \text{on } \mathbb{R}^d,
\]
where $Vu = V(x)u(x)$ is given by multiplication. Then for all $t > 0$,
$x \in \mathbb{R}^d$ and $X$ a solution to $E_x(\sigma, b)$, we have
\[
  u(t, x) = \mathbb{E}_x\left( f(X_t) \exp\left( \int_0^t V(X_s)\, ds \right) \right).
\]
If $L$ is the Laplacian, then this is the Schrödinger equation, which is why
Feynman was thinking about this.
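The Feynman–Kac expectation is also straightforward to simulate. A minimal sketch under assumptions of my own: take $L = \frac{1}{2} \frac{d^2}{dx^2}$ ($\sigma = 1$, $b = 0$) and a constant potential $V \equiv \frac{1}{2}$, so the formula collapses to $e^{t/2}\, \mathbb{E}[f(x + B_t)]$, which for $f(x) = x^2$ equals $e^{t/2}(x^2 + t)$. The code accumulates $\int_0^t V(X_s)\, ds$ by a Riemann sum along each Euler path, so the same routine works for non-constant $V$ (the constant choice just makes the answer checkable).

```python
import numpy as np

rng = np.random.default_rng(4)

def feynman_kac_mc(f, V, x, t, n_paths=100_000, n_steps=200):
    """Estimate u(t, x) = E_x[f(X_t) exp(int_0^t V(X_s) ds)] for
    dX = dB (so L = (1/2) d^2/dx^2), using Euler paths and a Riemann
    sum for the integral of V along each path."""
    dt = t / n_steps
    X = np.full(n_paths, float(x))
    intV = np.zeros(n_paths)
    for _ in range(n_steps):
        intV += V(X) * dt
        X += np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.mean(f(X) * np.exp(intV))

# Constant potential V = 1/2 and f(x) = x^2: the formula reduces to
# e^{t/2} (x^2 + t), which we can check directly.
x, t = 1.0, 1.0
estimate = feynman_kac_mc(lambda y: y**2,
                          lambda y: 0.5 * np.ones_like(y), x, t)
exact = np.exp(0.5 * t) * (x**2 + t)
```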