III Schramm--Loewner Evolutions

7 The Gaussian free field
We end by discussing the Gaussian free field, which we can think of as a two-dimensional analogue of Brownian motion, i.e. a random surface. We will show that the level curves of the Gaussian free field are $\mathrm{SLE}_4$'s.

To define the Gaussian free field, we have to do some analysis.
Notation.
– $C^\infty$ is the space of infinitely differentiable functions on $\mathbb{C}$.
– $C_0^\infty$ is the space of functions in $C^\infty$ with compact support.
– If $D$ is a domain, $C_0^\infty(D)$ is the space of functions in $C_0^\infty$ supported in $D$.
Definition (Dirichlet inner product). Let $f, g \in C_0^\infty$. The Dirichlet inner product of $f, g$ is
\[
  (f, g)_\nabla = \frac{1}{2\pi} \int \nabla f(x) \cdot \nabla g(x) \,\mathrm{d}x.
\]
This defines an inner product on $C_0^\infty$. If $D \subseteq \mathbb{C}$ is a non-trivial simply-connected domain, i.e. not $\emptyset$ or $\mathbb{C}$, we can define
Definition ($H_0^1(D)$). We write $H_0^1(D)$ for the Hilbert space completion of $C_0^\infty(D)$ with respect to $(\cdot, \cdot)_\nabla$.
Elements of $H_0^1(D)$ can be thought of as functions well-defined up to a null set. These functions need not be continuous (i.e. need not have continuous representatives), and in particular need not be genuinely differentiable, but they have "weak derivatives".

We will need the following key properties of $H_0^1(D)$:
Proposition.
(i) Conformal invariance: Suppose $\varphi : D \to \tilde{D}$ is a conformal transformation, and $f, g \in C_0^\infty(D)$. Then
\[
  (f, g)_\nabla = (f \circ \varphi^{-1}, g \circ \varphi^{-1})_\nabla.
\]
In other words, the Dirichlet inner product is conformally invariant, and the map $\varphi_* : H_0^1(D) \to H_0^1(\tilde{D})$ given by $f \mapsto f \circ \varphi^{-1}$ is an isomorphism of Hilbert spaces.
(ii) Inclusion: Suppose $U \subseteq D$ is open. If $f \in C_0^\infty(U)$, then $f \in C_0^\infty(D)$. Therefore the inclusion map $i : H_0^1(U) \to H_0^1(D)$ is well-defined and identifies $H_0^1(U)$ with a subspace of $H_0^1(D)$. We write the image as $H_{\mathrm{supp}}(U)$.
(iii) Orthogonal decomposition: If $U \subseteq D$, let
\[
  H_{\mathrm{harm}}(U) = \{f \in H_0^1(D) : f \text{ is harmonic on } U\}.
\]
Then
\[
  H_0^1(D) = H_{\mathrm{supp}}(U) \oplus H_{\mathrm{harm}}(U)
\]
is an orthogonal decomposition of $H_0^1(D)$. This is going to translate to a Markov property of the Gaussian free field.
Proof. Conformal invariance is a routine calculation, and inclusion does not require proof. To prove orthogonality, suppose $f \in H_{\mathrm{supp}}(U)$ and $g \in H_{\mathrm{harm}}(U)$. Then
\[
  (f, g)_\nabla = \frac{1}{2\pi} \int \nabla f(x) \cdot \nabla g(x) \,\mathrm{d}x = -\frac{1}{2\pi} \int f(x) \Delta g(x) \,\mathrm{d}x = 0,
\]
since $f$ is supported in $U$ and $\Delta g$ is supported outside of $U$.
To prove that the two subspaces span, suppose $f \in H_0^1(D)$, and let $f_0$ be the orthogonal projection of $f$ onto $H_{\mathrm{supp}}(U)$. Let $g_0 = f - f_0$. We want to show that $g_0$ is harmonic on $U$. It would be a straightforward manipulation if we could take $\Delta$, but there is no guarantee that $f_0$ is smooth.

We shall show that $g_0$ is weakly harmonic, and then it is a standard analysis result (which is also on the example sheet) that $g_0$ is in fact genuinely harmonic.
Suppose $\varphi \in C_0^\infty(U)$. Then since $g_0 \perp H_{\mathrm{supp}}(U)$, we have
\[
  0 = (g_0, \varphi)_\nabla = \frac{1}{2\pi} \int \nabla g_0(x) \cdot \nabla \varphi(x) \,\mathrm{d}x = -\frac{1}{2\pi} \int g_0(x) \Delta \varphi(x) \,\mathrm{d}x.
\]
This implies $g_0$ is $C^\infty$ on $U$ and harmonic.
We will define the Gaussian free field to be a Gaussian random variable taking values in $H_0^1(D)$. To make this precise, we first understand how Gaussian random variables on $\mathbb{R}^n$ work.
Observe that if $\alpha_1, \ldots, \alpha_n$ are iid $N(0, 1)$ random variables, and $e_1, \ldots, e_n$ is the standard basis of $\mathbb{R}^n$, then
\[
  h = \alpha_1 e_1 + \cdots + \alpha_n e_n
\]
is a standard $n$-dimensional Gaussian random variable.
If $x = \sum x_j e_j \in \mathbb{R}^n$, then
\[
  (h, x) = \sum_{j=1}^n \alpha_j x_j \sim N(0, \|x\|^2).
\]
Moreover, if $x, y \in \mathbb{R}^n$, then $(h, x)$ and $(h, y)$ are jointly Gaussian with covariance $(x, y)$.
Thus, we can associate with $h$ a family of Gaussian random variables $(h, x)$ indexed by $x \in \mathbb{R}^n$, with mean zero and covariance given by the inner product on $\mathbb{R}^n$. This is an example of a Gaussian Hilbert space.
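The finite-dimensional picture is easy to simulate. The following sketch (my own illustration, not part of the notes) samples $h = \sum \alpha_j e_j$ and checks empirically that $(h, x) \sim N(0, \|x\|^2)$ and that $(h, x)$, $(h, y)$ have covariance $(x, y)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 3, 200_000

# h = alpha_1 e_1 + ... + alpha_n e_n: each row is one sample of h.
alpha = rng.standard_normal((samples, n))

x = np.array([1.0, 2.0, -1.0])
y = np.array([0.5, -1.0, 2.0])

hx = alpha @ x  # (h, x) for each sample
hy = alpha @ y  # (h, y) for each sample

# Var (h, x) should be close to ||x||^2 = 6,
# and cov((h, x), (h, y)) close to (x, y) = -3.5.
print(np.var(hx), x @ x)
print(np.mean(hx * hy), x @ y)
```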
We now just do the same for the infinite-dimensional vector space $H_0^1(D)$. One can show that this is separable, and so we can pick an orthonormal basis $(f_n)$. Then the Gaussian free field $h$ on $D$ is defined by
\[
  h = \sum_{j=1}^\infty \alpha_j f_j,
\]
where the $\alpha_j$'s are iid $N(0, 1)$ random variables. Thus, if $f \in H_0^1(D)$, then
\[
  (h, f)_\nabla \sim N(0, \|f\|_\nabla^2).
\]
More generally, if $f, g \in H_0^1(D)$, then $(h, f)_\nabla$ and $(h, g)_\nabla$ are jointly Gaussian with covariance $(f, g)_\nabla$. Thus, the Gaussian free field is a family of Gaussian random variables $(h, f)_\nabla$ indexed by $f \in H_0^1(D)$ with mean zero and covariance $(\cdot, \cdot)_\nabla$.
We can't actually quite make this definition, because the sum $h = \sum_{j=1}^\infty \alpha_j f_j$ does not converge in $H_0^1(D)$. So $h$ is not really a function, but a distribution. However, this difference usually does not matter.
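One can see the divergence concretely. In the sketch below (my own illustration; the sine eigenbasis of the Laplacian on the unit square, normalized in the Dirichlet inner product, is an assumed choice of orthonormal basis), the variance of the truncated field at a fixed interior point grows without bound (logarithmically in the cutoff), which is why $h$ is only a distribution:

```python
import math

def var_at_center(N):
    """Variance of the truncated field at (1/2, 1/2), using the basis
    e_{jk}(x, y) = 2 sin(j pi x) sin(k pi y) * sqrt(2 / (pi (j^2 + k^2))),
    orthonormal for (f, g) = (1/2pi) int grad f . grad g on the unit square."""
    total = 0.0
    for j in range(1, N + 1):
        for k in range(1, N + 1):
            s = 2 * math.sin(j * math.pi / 2) * math.sin(k * math.pi / 2)
            total += s * s * 2 / (math.pi * (j * j + k * k))
    return total

vals = {N: var_at_center(N) for N in (8, 32, 128, 512)}
for N, v in vals.items():
    print(N, round(v, 3))  # grows roughly like log N
```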
We can translate the properties of $H_0^1(D)$ into analogous properties of the Gaussian free field.
Proposition.
(i) If $\varphi : D \to \tilde{D}$ is a conformal transformation and $h$ is a Gaussian free field on $D$, then $h \circ \varphi^{-1}$ is a Gaussian free field on $\tilde{D}$.
(ii) Markov property: If $U \subseteq D$ is open, then we can write $h = h_1 + h_2$ with $h_1$ and $h_2$ independent, where $h_1$ is a Gaussian free field on $U$ and $h_2$ is harmonic on $U$.
Proof.
(i) Clear.
(ii) Take $h_1$ to be the projection onto $H_{\mathrm{supp}}(U)$. This works since we can take the orthonormal basis $(f_n)$ to be the union of an orthonormal basis of $H_{\mathrm{supp}}(U)$ and an orthonormal basis of $H_{\mathrm{harm}}(U)$.
Often, we would rather think about the $L^2$ inner product of $h$ with something else. Observe that integration by parts tells us
\[
  (h, f)_\nabla = -\frac{1}{2\pi} (h, \Delta f)_{L^2}.
\]
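As a quick numerical sanity check of this identity (my own sketch, using finite differences on the unit square for a smooth function vanishing on the boundary), both sides can be computed directly; for $f = \sin(\pi x)\sin(\pi y)$ they should both equal $\pi/4$:

```python
import numpy as np

n = 201
xs = np.linspace(0.0, 1.0, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")

# A smooth function vanishing on the boundary of the unit square.
f = np.sin(np.pi * X) * np.sin(np.pi * Y)

# Gradient (central differences) and 5-point Laplacian.
fx, fy = np.gradient(f, h, h)
lap = np.zeros_like(f)
lap[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:]
                   + f[1:-1, :-2] - 4 * f[1:-1, 1:-1]) / h**2

# Trapezoid weights for the double integral.
w = np.ones(n)
w[0] = w[-1] = 0.5
W = np.outer(w, w) * h * h

dirichlet = ((fx * fx + fy * fy) * W).sum() / (2 * np.pi)   # (f, f)_nabla
l2_form = -((f * lap) * W).sum() / (2 * np.pi)              # -(f, Lap f)_{L^2} / 2pi
print(dirichlet, l2_form, np.pi / 4)  # all close to pi/4
```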
Thus, we would be happy if we can invert $\Delta$, which we can by IB methods. Recall that
\[
  \Delta(-\log |x - y|) = -2\pi \delta(y - x),
\]
where $\Delta$ acts on $x$, and so $-\log |x - y|$ is a Green's function for $\Delta$. Given a domain $D$, we wish to obtain a version of the Green's function that vanishes on the boundary. To do so, we solve $\Delta \tilde{G}_x = 0$ on $D$ with the boundary conditions
\[
  \tilde{G}_x(y) = -\log |x - y| \text{ if } y \in \partial D.
\]
We can then set
\[
  G(x, y) = -\log |x - y| - \tilde{G}_x(y).
\]
With this definition, we can define
\[
  \Delta^{-1} \varphi(x) = -\frac{1}{2\pi} \int G(x, y) \varphi(y) \,\mathrm{d}y.
\]
Then $\Delta \Delta^{-1} \varphi(x) = \varphi(x)$, and so
\[
  (h, \varphi) \equiv (h, \varphi)_{L^2} = -2\pi (h, \Delta^{-1} \varphi)_\nabla.
\]
Then $(h, \varphi)$ is a mean-zero Gaussian with variance
\begin{align*}
  (2\pi)^2 \|\Delta^{-1}\varphi\|_\nabla^2 &= (2\pi)^2 (\Delta^{-1}\varphi, \Delta^{-1}\varphi)_\nabla \\
  &= -2\pi (\Delta^{-1}\varphi, \Delta\Delta^{-1}\varphi)_{L^2} \\
  &= (-2\pi \Delta^{-1}\varphi, \varphi)_{L^2} \\
  &= \iint \varphi(x) G(x, y) \varphi(y) \,\mathrm{d}x\, \mathrm{d}y.
\end{align*}
More generally, if $\varphi, \psi \in C_0^\infty(D)$, then
\[
  \mathrm{cov}((h, \varphi), (h, \psi)) = \iint \varphi(x) G(x, y) \psi(y) \,\mathrm{d}x\, \mathrm{d}y.
\]
On the upper half plane, we have a very explicit Green's function
\[
  G(x, y) = G_{\mathbb{H}}(x, y) = -\log |x - y| + \log |x - \bar{y}|.
\]
It is not hard to show that the Green's function is in fact conformally invariant:

Proposition. Let $D, \tilde{D}$ be domains in $\mathbb{C}$ and $\varphi : D \to \tilde{D}$ a conformal transformation. Then $G_D(x, y) = G_{\tilde{D}}(\varphi(x), \varphi(y))$.

One way to prove this is to use the conformal invariance of Brownian motion, but a direct computation suffices as well.
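For the upper half plane the direct computation is easy to check numerically: a Möbius map $\varphi(z) = (az + b)/(cz + d)$ with real coefficients and $ad - bc > 0$ maps $\mathbb{H}$ to itself and should leave $G_{\mathbb{H}}$ unchanged. A small sketch (my own illustration, with arbitrarily chosen points and coefficients):

```python
import math

def G_H(x, y):
    """Green's function of the upper half-plane."""
    return -math.log(abs(x - y)) + math.log(abs(x - y.conjugate()))

def mobius(z, a, b, c, d):
    return (a * z + b) / (c * z + d)

x, y = 0.3 + 1.1j, -0.7 + 0.4j
a, b, c, d = 2.0, 1.0, 1.0, 3.0  # real, ad - bc = 5 > 0, so H -> H

lhs = G_H(x, y)
rhs = G_H(mobius(x, a, b, c, d), mobius(y, a, b, c, d))
print(lhs, rhs)  # the two values agree
```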
Schramm and Sheffield showed that the level sets of $h$, i.e. $\{x : h(x) = 0\}$, are $\mathrm{SLE}_4$'s. It takes some work to make this precise, since $h$ is not a genuine function, but the formal statement is as follows:
Theorem (Schramm–Sheffield). Let $\lambda = \frac{\pi}{2}$. Let $\gamma \sim \mathrm{SLE}_4$ in $\mathbb{H}$ from $0$ to $\infty$. Let $(g_t)$ be its Loewner evolution with driving function $U_t = \sqrt{\kappa} B_t = 2B_t$, and set $f_t = g_t - U_t$. Fix $W \subseteq \mathbb{H}$ open and let
\[
  \tau = \inf\{t \geq 0 : \gamma(t) \in W\}.
\]
Let $h$ be a Gaussian free field on $\mathbb{H}$, and let 𝒽 be the unique bounded harmonic function on $\mathbb{H}$ with boundary values $\lambda$ on $\mathbb{R}_{>0}$ and $-\lambda$ on $\mathbb{R}_{<0}$. Explicitly, it is given by
\[
  𝒽 = \lambda - \frac{2\lambda}{\pi} \arg(\cdot).
\]
Then
\[
  h + 𝒽 \overset{d}{=} (h + 𝒽) \circ f_{t \wedge \tau},
\]
where both sides are restricted to $W$.
A few words should be said about why we should think of this as saying the level curves of $h$ are $\mathrm{SLE}_4$'s. First, observe that since 𝒽 is harmonic, adding 𝒽 to $h$ doesn't change how $h$ pairs with other functions in $H_0^1(\mathbb{H})$. All it does is change the boundary conditions of $h$. So we should think of $h + 𝒽$ as a version of the Gaussian free field with boundary conditions
\[
  (h + 𝒽)(x) = \mathrm{sgn}(x) \lambda.
\]
It is comforting to know that the boundary conditions of $h$ are well-defined as an element of $L^2$.
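The claimed boundary behaviour of 𝒽 is easy to check numerically (my own sketch): approaching $\mathbb{R}_{>0}$ from inside $\mathbb{H}$ gives $\arg z \to 0$, so $𝒽 \to \lambda$; approaching $\mathbb{R}_{<0}$ gives $\arg z \to \pi$, so $𝒽 \to -\lambda$; and 𝒽 satisfies the mean value property, as it must since $\arg z = \operatorname{Im} \log z$ is harmonic:

```python
import cmath
import math

lam = math.pi / 2  # the value of lambda in the theorem

def frak_h(z):
    """The harmonic function lambda - (2 lambda / pi) arg(z) on H."""
    return lam - (2 * lam / math.pi) * cmath.phase(z)

# Approaching the boundary from inside H:
right = frak_h(1.0 + 1e-9j)    # near R_{>0}: approximately +lambda
left = frak_h(-1.0 + 1e-9j)    # near R_{<0}: approximately -lambda
print(right, left)

# Mean value property over a small circle around an interior point:
z0 = 0.5 + 1.0j
avg = sum(frak_h(z0 + 0.1 * cmath.exp(2j * math.pi * k / 1000))
          for k in range(1000)) / 1000
print(avg, frak_h(z0))  # these agree since frak_h is harmonic
```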
Similarly, $(h + 𝒽) \circ f_{t \wedge \tau}$ is a Gaussian free field with the boundary condition that it takes value $\lambda$ to the right of $\gamma \sim \mathrm{SLE}_4$ and $-\lambda$ to the left of it. Thus, we can think of it as taking value $0$ along $\gamma$. The theorem says this has the same distribution as just $h + 𝒽$. So we interpret this as saying the Gaussian free field has the same distribution as a Gaussian free field forced to vanish along an $\mathrm{SLE}_4$.

What we stated here is a simplified version of the theorem by Schramm and Sheffield, because we are not going to deal with what happens when $\gamma$ hits $W$.

The proof is not difficult. The difficult part is realizing that this is the thing we want to prove.
Proof. We want to show that if $\varphi \in C_0^\infty(W)$, then
\[
  ((h + 𝒽) \circ f_{t \wedge \tau}, \varphi) \overset{d}{=} (h + 𝒽, \varphi).
\]
In other words, writing
\[
  m_t(\varphi) = (𝒽 \circ f_t, \varphi), \quad
  \sigma_t^2(\varphi) = \iint \varphi(x)\, G_{\mathbb{H}}(f_t(x), f_t(y))\, \varphi(y) \,\mathrm{d}x\, \mathrm{d}y,
\]
we want to show that
\[
  ((h + 𝒽) \circ f_{t \wedge \tau}, \varphi) \sim N(m_0(\varphi), \sigma_0^2(\varphi)).
\]
This is the same as proving that
\[
  \mathbb{E}\left[ e^{i\theta ((h + 𝒽) \circ f_{t \wedge \tau}, \varphi)} \right]
  = \exp\left( i\theta m_0(\varphi) - \frac{\theta^2}{2} \sigma_0^2(\varphi) \right).
\]
Let $\mathcal{F}_t = \sigma(U_s : s \leq t)$ be the filtration of $U_t$. Then
\begin{align*}
  \mathbb{E}\left[ e^{i\theta ((h + 𝒽) \circ f_{t \wedge \tau}, \varphi)} \mid \mathcal{F}_{t \wedge \tau} \right]
  &= \mathbb{E}\left[ e^{i\theta (h \circ f_{t \wedge \tau}, \varphi)} \mid \mathcal{F}_{t \wedge \tau} \right] e^{i\theta m_{t \wedge \tau}(\varphi)} \\
  &= \exp\left( i\theta m_{t \wedge \tau}(\varphi) - \frac{\theta^2}{2} \sigma_{t \wedge \tau}^2(\varphi) \right).
\end{align*}
If we knew that
\[
  \exp\left( i\theta m_t(\varphi) - \frac{\theta^2}{2} \sigma_t^2(\varphi) \right)
\]
is a martingale, then taking the expectation of the above equation yields the desired result.
Note that this looks exactly like the form of an exponential martingale, which in particular is a martingale. So it suffices to show that $m_t(\varphi)$ is a martingale with
\[
  [m_\cdot(\varphi)]_t = \sigma_0^2(\varphi) - \sigma_t^2(\varphi).
\]
To check that $m_t(\varphi)$ is a martingale, we expand it as
\[
  𝒽 \circ f_t(z) = \lambda - \frac{2\lambda}{\pi} \arg(f_t(z))
  = \lambda - \frac{2\lambda}{\pi} \operatorname{Im}(\log(g_t(z) - U_t)).
\]
So it suffices to check that $\log(g_t(z) - U_t)$ is a martingale. We apply Itô's formula to get
\[
  \mathrm{d} \log(g_t(z) - U_t)
  = \frac{1}{g_t(z) - U_t} \cdot \frac{2}{g_t(z) - U_t} \,\mathrm{d}t
  - \frac{1}{g_t(z) - U_t} \,\mathrm{d}U_t
  - \frac{\kappa/2}{(g_t(z) - U_t)^2} \,\mathrm{d}t,
\]
and so this is a continuous local martingale when $\kappa = 4$. Since $m_t(\varphi)$ is bounded, it is a genuine martingale.
We then compute the derivative of the quadratic variation:
\[
  \mathrm{d}[m_\cdot(\varphi)]_t
  = \iint \varphi(x) \operatorname{Im}\left( \frac{2}{g_t(x) - U_t} \right)
    \operatorname{Im}\left( \frac{2}{g_t(y) - U_t} \right) \varphi(y)
    \,\mathrm{d}x\, \mathrm{d}y\, \mathrm{d}t.
\]
To finish the proof, we need to show that $\mathrm{d}\sigma_t^2(\varphi)$ takes the same form. Recall that the Green's function can be written as
\[
  G_{\mathbb{H}}(x, y) = -\log|x - y| + \log|x - \bar{y}|
  = -\operatorname{Re}(\log(x - y) - \log(x - \bar{y})).
\]
Since $f_t = g_t - U_t$, we have $f_t(x) - f_t(y) = g_t(x) - g_t(y)$, and so we can compute
\[
  \mathrm{d} \log(g_t(x) - g_t(y))
  = \frac{1}{g_t(x) - g_t(y)} \left( \frac{2}{g_t(x) - U_t} - \frac{2}{g_t(y) - U_t} \right) \mathrm{d}t
  = \frac{-2}{(g_t(x) - U_t)(g_t(y) - U_t)} \,\mathrm{d}t.
\]
Similarly, we have
\[
  \mathrm{d} \log(g_t(x) - \overline{g_t(y)})
  = \frac{-2}{(g_t(x) - U_t)(\overline{g_t(y)} - U_t)} \,\mathrm{d}t.
\]
So we have
\[
  \mathrm{d} G_t(x, y)
  = -\operatorname{Im}\left( \frac{2}{g_t(x) - U_t} \right)
     \operatorname{Im}\left( \frac{2}{g_t(y) - U_t} \right) \mathrm{d}t,
\]
where $G_t(x, y) = G_{\mathbb{H}}(f_t(x), f_t(y))$. This is exactly what we wanted it to be.
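The last step is a pointwise identity for complex numbers: writing $a = 2/(g_t(x) - U_t)$ and $b = 2/(g_t(y) - U_t)$, the two drifts above combine to $-\operatorname{Re}(-ab/2 + a\bar{b}/2) = -\operatorname{Im}(a)\operatorname{Im}(b)$. A quick numerical check of this algebra (my own sketch, with random points in $\mathbb{H}$ and a random real $U$):

```python
import random

random.seed(1)

def rand_H():
    """A random point in the upper half-plane, bounded away from R."""
    return complex(random.uniform(-2, 2), random.uniform(0.1, 2))

vals = []
for _ in range(5):
    gx, gy = rand_H(), rand_H()
    U = random.uniform(-2, 2)  # the driving function is real
    a = 2 / (gx - U)
    b = 2 / (gy - U)
    # Drifts of log(g_t(x) - g_t(y)) and log(g_t(x) - conj(g_t(y))):
    d1 = -2 / ((gx - U) * (gy - U))
    d2 = -2 / ((gx - U) * (gy.conjugate() - U))
    dG = -(d1 - d2).real  # drift of G_t(x, y) = -Re(log(..) - log(..))
    vals.append((dG, -a.imag * b.imag))

for u, v in vals:
    print(u, v)  # the two expressions agree
```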