4 The variational method

III Theoretical Physics of Soft Condensed Matter



4.1 The variational method
The variational method is a method to estimate the partition function
\[
  e^{-\beta F_{\mathrm{TOT}}} = \int e^{-\beta F[\phi]}\; \mathcal{D}[\phi]
\]
when $F$ is not Gaussian. To simplify notation, we will set $\beta = 1$. We then want to estimate
\[
  e^{-F_{\mathrm{TOT}}} = \int e^{-F[\phi]}\; \mathcal{D}[\phi].
\]
It is common to make a notational change, where we write $F_{\mathrm{TOT}}$ as $F$, and $F[\phi]$ as $H[\phi]$ instead, called the effective Hamiltonian. In this notation, we write
\[
  e^{-F} = \int e^{-H[\phi]}\; \mathcal{D}[\phi].
\]
The idea of the variational method is to find some upper bounds on $F$ in terms of path integrals we can do, and then take the best upper bound as our approximation to $F$.
Thus, we introduce a trial Hamiltonian $H_0[\phi]$, and similarly define
\[
  e^{-F_0} = \int e^{-H_0[\phi]}\; \mathcal{D}[\phi].
\]
We can then write
\[
  e^{-F} = \frac{e^{-F_0}}{\int e^{-H_0}\; \mathcal{D}[\phi]} \int e^{-H_0} e^{-(H - H_0)}\; \mathcal{D}[\phi] = e^{-F_0} \left\langle e^{-(H - H_0)} \right\rangle_0,
\]
where the subscript 0 denotes the average over the trial distribution. Taking the
logarithm, we end up with
\[
  F = F_0 - \log \left\langle e^{-(H - H_0)} \right\rangle_0.
\]
So far, everything is exact. It would be nice if we could move the logarithm inside the expectation to cancel out the exponential. While the result won't be exactly equal, $\log$ is concave, i.e.
\[
  \log(\alpha A + (1 - \alpha) B) \geq \alpha \log A + (1 - \alpha) \log B,
\]
so Jensen's inequality tells us
\[
  \log \langle Y \rangle_0 \geq \langle \log Y \rangle_0.
\]
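As a quick numerical sanity check of this concavity statement (a sketch using a hypothetical two-point distribution, not taken from the notes): let $Y$ take the value $A$ with probability $\alpha$ and $B$ with probability $1 - \alpha$, and compare the two sides directly.

```python
import math

# Two-point distribution: Y takes value a with probability p, b with 1 - p.
# Concavity of log gives  log <Y>  >=  <log Y>.
a, b, p = 2.0, 5.0, 0.3
lhs = math.log(p * a + (1 - p) * b)            # log of the mean
rhs = p * math.log(a) + (1 - p) * math.log(b)  # mean of the log
print(lhs >= rhs)  # True
```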
Applying this to our situation gives us an inequality
\[
  F \leq F_0 - \langle H_0 - H \rangle_0 = F_0 - \langle H_0 \rangle_0 + \langle H \rangle_0 = -S_0 + \langle H \rangle_0,
\]
where $S_0 = \langle H_0 \rangle_0 - F_0$ is the entropy of the trial distribution (recall $\beta = 1$).
This is the Feynman–Bogoliubov inequality.
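To see the inequality at work, here is a minimal numerical sketch in which the path integral is replaced by an ordinary one-variable integral; the model $H(\phi) = \phi^4$, the Gaussian trial $H_0(\phi) = \phi^2 / 2\sigma^2$, and the width $\sigma = 0.6$ are illustrative choices, not from the notes. The Gaussian averages $\langle \phi^2 \rangle_0 = \sigma^2$ and $\langle \phi^4 \rangle_0 = 3\sigma^4$ are used in closed form.

```python
import math

# Toy check of F <= F_0 - <H_0>_0 + <H>_0 with a single "mode" phi
# (an ordinary integral standing in for the path integral).
# Model: H(phi) = phi^4; trial: H_0(phi) = phi^2 / (2 sigma^2).

def integrate(f, a=-6.0, b=6.0, n=20000):
    """Simple trapezoidal quadrature on [a, b]."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Exact free energy: e^{-F} = integral of e^{-phi^4}.
F = -math.log(integrate(lambda x: math.exp(-x**4)))

sigma = 0.6  # any trial width gives a valid upper bound
F0 = -math.log(sigma * math.sqrt(2 * math.pi))  # Gaussian free energy
avg_H = 3 * sigma**4   # <phi^4>_0 for a Gaussian of variance sigma^2
avg_H0 = 0.5           # <phi^2>_0 / (2 sigma^2) = 1/2
bound = F0 - avg_H0 + avg_H

print(bound >= F)  # True: the variational bound lies above the exact F
```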
To use this, we have to choose a trial Hamiltonian $H_0$ simple enough that we can actually do the calculations (i.e. Gaussian), but which includes variational parameters. We then minimize the quantity $F_0 - \langle H_0 \rangle_0 + \langle H \rangle_0$ over our variational parameters, and this gives us an upper bound on $F$. We then take this to be our best estimate of $F$. If we are brave, we can take this minimizing $H_0$ as an approximation of $H$, at least for some purposes.
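As a worked illustration of this minimization (again a toy one-variable stand-in for the path integral, with the illustrative choices $H(\phi) = \phi^4$ and Gaussian trial $H_0 = \phi^2/2\sigma^2$): using $\langle \phi^2 \rangle_0 = \sigma^2$ and $\langle \phi^4 \rangle_0 = 3\sigma^4$, the bound evaluates to $-\log(\sigma \sqrt{2\pi}) - \tfrac{1}{2} + 3\sigma^4$, which we can minimize over the variational parameter $\sigma$ numerically.

```python
import math

def bound(sigma):
    # Variational bound F_0 - <H_0>_0 + <H>_0 for H = phi^4 with
    # trial H_0 = phi^2 / (2 sigma^2); the Gaussian averages give
    # -log(sigma sqrt(2 pi)) - 1/2 + 3 sigma^4 in closed form.
    return -math.log(sigma * math.sqrt(2 * math.pi)) - 0.5 + 3 * sigma**4

# Crude grid search over the variational parameter sigma.
sigmas = [0.30 + 0.001 * i for i in range(700)]
best = min(sigmas, key=bound)

# The analytic minimum is at sigma = 12**(-1/4) ~ 0.537.
print(round(best, 3), round(bound(best), 3))  # prints 0.537 -0.548
```

Setting the derivative to zero gives $-1/\sigma + 12\sigma^3 = 0$, i.e. $\sigma = 12^{-1/4}$, which the grid search reproduces.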