1 QFT in zero dimensions




1.1 Free theories
We consider the simplest possible QFTs. These QFTs are free, so the action $S(\phi)$ is at most quadratic. Classically, this implies that the equations of motion are linear, so solutions superpose and the particles do not interact.
Let $\phi\colon \{\mathrm{pt}\} \to \mathbb{R}^n$ be a field with coordinates $\phi^a$, and define
\[
  S(\phi) = \frac{1}{2} M(\phi, \phi) = \frac{1}{2} M_{ab}\, \phi^a \phi^b,
\]
where $M\colon \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ is a positive-definite symmetric bilinear form, i.e.\ the matrix $M_{ab}$ is symmetric and positive definite. Then the partition function $Z(M)$ is just a Gaussian integral:
\[
  Z(M) = \int_{\mathbb{R}^n} \mathrm{d}^n\phi\; \mathrm{e}^{-\frac{1}{2\hbar} M(\phi, \phi)} = \frac{(2\pi\hbar)^{n/2}}{\sqrt{\det M}}.
\]
Indeed, to compute this integral, since $M$ is symmetric, there exists an orthogonal transformation $O\colon \mathbb{R}^n \to \mathbb{R}^n$ that diagonalizes it. The measure $\mathrm{d}^n\phi$ is invariant under orthogonal transformations. So in terms of the eigenvectors of $M$, this just reduces to a product of $n$ 1D Gaussian integrals of this type, and this is a standard integral:
\[
  \int \mathrm{d}\chi\; \mathrm{e}^{-m\chi^2/2\hbar} = \sqrt{\frac{2\pi\hbar}{m}}.
\]
In our case, $m > 0$ runs over all eigenvalues of $M$, and the product of the eigenvalues is exactly $\det M$.
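As a quick sanity check, here is a minimal numerical sketch (in Python, with an assumed $2 \times 2$ positive-definite $M$ and $\hbar = 1$; the example values are ours, not fixed by the text) comparing the Gaussian integral computed by direct quadrature with the closed form $(2\pi\hbar)^{n/2}/\sqrt{\det M}$.

```python
# A minimal check (assumed example values: n = 2, hbar = 1) that the Gaussian
# integral over R^2 agrees with the closed form (2*pi*hbar)^(n/2) / sqrt(det M).
import numpy as np
from scipy import integrate

hbar = 1.0
M = np.array([[3.0, 1.0],
              [1.0, 2.0]])          # symmetric, positive definite (example)

def integrand(y, x):
    phi = np.array([x, y])
    return np.exp(-phi @ M @ phi / (2.0 * hbar))

numeric, _ = integrate.dblquad(integrand, -np.inf, np.inf,
                               lambda x: -np.inf, lambda x: np.inf)
closed_form = (2.0 * np.pi * hbar) ** (M.shape[0] / 2) / np.sqrt(np.linalg.det(M))
print(numeric, closed_form)          # the two numbers should agree
```

The two printed values should agree to within the quadrature tolerance.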
A small generalization is useful. We let
\[
  S(\phi) = \frac{1}{2} M(\phi, \phi) + J(\phi),
\]
where $J\colon \mathbb{R}^n \to \mathbb{R}$ is some linear map (we can think of $J$ as a (co)vector, and also write $J(\phi) = J \cdot \phi$). $J$ is a source in the classical case. Then in this theory, we have
\[
  Z(M, J) = \int_{\mathbb{R}^n} \mathrm{d}^n\phi\, \exp\left(-\frac{1}{\hbar}\left(\frac{1}{2} M(\phi, \phi) + J(\phi)\right)\right).
\]
To do this integral, we complete the square by letting $\tilde\phi = \phi + M^{-1}J$. In other words,
\[
  \tilde\phi^a = \phi^a + (M^{-1})^{ab} J_b.
\]
The inverse exists because $M$ is assumed to be positive definite. We can now complete the square to find
\begin{align*}
  Z(M, J) &= \int_{\mathbb{R}^n} \mathrm{d}^n\tilde\phi\, \exp\left(-\frac{1}{2\hbar} M(\tilde\phi, \tilde\phi) + \frac{1}{2\hbar} M^{-1}(J, J)\right)\\
  &= \exp\left(\frac{1}{2\hbar} M^{-1}(J, J)\right) \int_{\mathbb{R}^n} \mathrm{d}^n\tilde\phi\, \exp\left(-\frac{1}{2\hbar} M(\tilde\phi, \tilde\phi)\right)\\
  &= \exp\left(\frac{1}{2\hbar} M^{-1}(J, J)\right) \frac{(2\pi\hbar)^{n/2}}{\sqrt{\det M}}.
\end{align*}
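The same kind of check works with a source. The sketch below (again Python, with assumed example values for $M$, $J$ and $\hbar$) compares the integral defining $Z(M, J)$ with the completed-square result.

```python
# A check of the completed-square formula (assumed example values for M, J, hbar):
# Z(M, J) = exp(M^{-1}(J, J) / (2 hbar)) * (2*pi*hbar)^(n/2) / sqrt(det M).
import numpy as np
from scipy import integrate

hbar = 1.0
M = np.array([[3.0, 1.0],
              [1.0, 2.0]])
J = np.array([0.5, -0.3])

def integrand(y, x):
    phi = np.array([x, y])
    return np.exp(-(0.5 * phi @ M @ phi + J @ phi) / hbar)

numeric, _ = integrate.dblquad(integrand, -np.inf, np.inf,
                               lambda x: -np.inf, lambda x: np.inf)
Minv = np.linalg.inv(M)
closed_form = (np.exp(J @ Minv @ J / (2.0 * hbar))
               * (2.0 * np.pi * hbar) ** (len(J) / 2) / np.sqrt(np.linalg.det(M)))
print(numeric, closed_form)          # should agree
```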
In the long run, we really don’t care about the case with a source. However, we
will use this general case to compute some correlation functions.
We return to the case without a source, and let $P\colon \mathbb{R}^n \to \mathbb{R}$ be a polynomial. We want to compute
\[
  \langle P(\phi)\rangle = \frac{1}{Z(M)} \int_{\mathbb{R}^n} \mathrm{d}^n\phi\; P(\phi) \exp\left(-\frac{1}{2\hbar} M(\phi, \phi)\right).
\]
By linearity, it suffices to consider the case where $P$ is just a monomial, so
\[
  P(\phi) = \prod_{i=1}^m \ell_i(\phi),
\]
for linear maps $\ell_i\colon \mathbb{R}^n \to \mathbb{R}$. Now if $m$ is odd, then clearly $\langle P(\phi)\rangle = 0$, since this is an integral of an odd function. When $m = 2k$, then we have
\[
  \langle P(\phi)\rangle = \frac{1}{Z(M)} \int \mathrm{d}^n\phi\; (\ell_1 \cdot \phi) \cdots (\ell_{2k} \cdot \phi) \exp\left(-\frac{1}{2\hbar} M(\phi, \phi) - \frac{J \cdot \phi}{\hbar}\right).
\]
Here we are eventually going to set $J = 0$, but for the time being, we will be silly and put the source there. The relevance is that we can then think of our factors $\ell_i \cdot \phi$ as derivatives with respect to $J$:
\[
  \langle P(\phi)\rangle = \frac{(-\hbar)^{2k}}{Z(M)} \int \mathrm{d}^n\phi\; \prod_{i=1}^{2k}\left(\ell_i \cdot \frac{\partial}{\partial J}\right) \exp\left(-\frac{1}{2\hbar} M(\phi, \phi) - \frac{J \cdot \phi}{\hbar}\right).
\]
Since the integral is absolutely convergent, we can move the derivative out of
the integral, and get
\begin{align*}
  \langle P(\phi)\rangle &= \frac{(-\hbar)^{2k}}{Z(M)} \prod_{i=1}^{2k}\left(\ell_i \cdot \frac{\partial}{\partial J}\right) \int \mathrm{d}^n\phi\, \exp\left(-\frac{1}{2\hbar} M(\phi, \phi) - \frac{J \cdot \phi}{\hbar}\right)\\
  &= \hbar^{2k} \prod_{i=1}^{2k}\left(\ell_i \cdot \frac{\partial}{\partial J}\right) \exp\left(\frac{1}{2\hbar} M^{-1}(J, J)\right).
\end{align*}
When each derivative $\ell_i \cdot \frac{\partial}{\partial J}$ acts on the exponential, we obtain a factor of
\[
  \frac{1}{\hbar} M^{-1}(J, \ell_i)
\]
in front. At the end, we are going to set $J = 0$. So we get a non-zero contribution if and only if exactly half (i.e.\ $k$) of the derivatives act on the exponential, and the other $k$ act on the factors in front to get rid of the $J$'s.
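To see the source trick in action in the simplest case $k = 1$, here is a small symbolic sketch (Python with sympy; the $2 \times 2$ matrix $M$ and the covectors $\ell_1, \ell_2$ are assumed example values). It applies the two derivatives $\ell_i \cdot \partial/\partial J$ to $\exp\!\big(M^{-1}(J, J)/2\hbar\big)$, sets $J = 0$, and checks that $\hbar^2$ times the result equals $\hbar\, M^{-1}(\ell_1, \ell_2)$.

```python
import sympy as sp

hbar = sp.symbols('hbar', positive=True)
J1, J2 = sp.symbols('J1 J2')
J = sp.Matrix([J1, J2])

M = sp.Matrix([[3, 1], [1, 2]])   # assumed positive-definite symmetric example
Minv = M.inv()

l1 = sp.Matrix([1, 0])            # assumed example covectors ell_1, ell_2
l2 = sp.Matrix([2, -1])

# exp(M^{-1}(J, J) / (2 hbar))
F = sp.exp((J.T * Minv * J)[0] / (2 * hbar))

# act with ell_1 . d/dJ, then ell_2 . d/dJ
D1 = sum(l1[a] * sp.diff(F, J[a]) for a in range(2))
D2 = sum(l2[a] * sp.diff(D1, J[a]) for a in range(2))

# hbar^2 * (result at J = 0) should equal hbar * M^{-1}(ell_1, ell_2)
two_point = sp.simplify(hbar**2 * D2.subs({J1: 0, J2: 0}))
print(sp.simplify(two_point - hbar * (l1.T * Minv * l2)[0]))   # expect 0
```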
We let $\sigma$ denote a (complete) pairing of the set $\{1, \ldots, 2k\}$, and $\Pi_{2k}$ be the set of all such pairings. For example, if we have $k = 2$, then the possible pairings are $\{(1, 2), (3, 4)\}$, $\{(1, 3), (2, 4)\}$ and $\{(1, 4), (2, 3)\}$.
In general, we have
\[
  |\Pi_{2k}| = \frac{(2k)!}{2^k\, k!},
\]
and we have
Theorem (Wick's theorem). For a monomial
\[
  P(\phi) = \prod_{i=1}^{2k} \ell_i(\phi),
\]
we have
\[
  \langle P(\phi)\rangle = \hbar^k \sum_{\sigma \in \Pi_{2k}} \prod_{i \in \{1, \ldots, 2k\}/\sigma} M^{-1}(\ell_i, \ell_{\sigma(i)}),
\]
where the $\{1, \ldots, 2k\}/\sigma$ says we include each pair $\{i, \sigma(i)\}$ only once in the product, rather than once for $(i, \sigma(i))$ and again for $(\sigma(i), i)$.
This is in fact the version of Wick's theorem for this 0d QFT, and $M^{-1}$ plays the role of the propagator.
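The set $\Pi_{2k}$ is easy to generate: pair the smallest remaining element with each possible partner in turn and recurse. The short sketch below (Python; the helper name `pairings` is ours, purely for illustration) enumerates the complete pairings of $\{1, \ldots, 2k\}$ and checks the count $(2k)!/(2^k k!)$ quoted above.

```python
# Enumerate the complete pairings of {1, ..., 2k} and check |Pi_{2k}| = (2k)!/(2^k k!).
from math import factorial

def pairings(elements):
    """Yield every complete pairing of `elements` as a list of 2-tuples."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            yield [(first, partner)] + sub

for k in range(1, 5):
    count = sum(1 for _ in pairings(list(range(1, 2 * k + 1))))
    print(k, count, factorial(2 * k) // (2 ** k * factorial(k)))   # counts agree
```

For $k = 2$ this reproduces the three pairings listed above.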
For example, we have
\[
  \langle \ell_1(\phi)\, \ell_2(\phi)\rangle = \hbar\, M^{-1}(\ell_1, \ell_2).
\]
We can represent this by a diagram: a single propagator line, labelled $M^{-1}$, joining the external vertices $1$ and $2$.
Similarly, we have
\[
  \langle \ell_1(\phi) \cdots \ell_4(\phi)\rangle = \hbar^2 \big( M^{-1}(\ell_1, \ell_2)\, M^{-1}(\ell_3, \ell_4) + M^{-1}(\ell_1, \ell_3)\, M^{-1}(\ell_2, \ell_4) + M^{-1}(\ell_1, \ell_4)\, M^{-1}(\ell_2, \ell_3) \big).
\]
Note that we have now reduced the problem of computing an integral for the correlation function to the purely combinatorial problem of counting the number of ways to pair things up.
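As a concrete illustration of that reduction, the following sketch (Python with numpy; $M$, $\hbar$ and the covectors $\ell_i$ are assumed example values) evaluates $\langle \ell_1(\phi) \cdots \ell_4(\phi)\rangle$ by summing the products of $M^{-1}$ over the three pairings, and compares this with a Monte Carlo estimate of the Gaussian average, using the fact that $\phi$ is a Gaussian vector with mean zero and covariance $\hbar M^{-1}$.

```python
# Evaluate <l_1(phi) l_2(phi) l_3(phi) l_4(phi)> combinatorially via the three
# pairings, and compare with a Monte Carlo estimate of the Gaussian average,
# using that phi is Gaussian with mean 0 and covariance hbar * M^{-1}.
# (M, hbar and the covectors below are assumed example values.)
import numpy as np

rng = np.random.default_rng(0)
hbar = 1.0
M = np.array([[3.0, 1.0],
              [1.0, 2.0]])
Minv = np.linalg.inv(M)
ls = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
      np.array([1.0, 1.0]), np.array([2.0, -1.0])]

pairings = [((0, 1), (2, 3)), ((0, 2), (1, 3)), ((0, 3), (1, 2))]
wick = sum(hbar**2 * (ls[a] @ Minv @ ls[b]) * (ls[c] @ Minv @ ls[d])
           for (a, b), (c, d) in pairings)

phi = rng.multivariate_normal(np.zeros(2), hbar * Minv, size=1_000_000)
monte_carlo = np.mean(np.prod([phi @ l for l in ls], axis=0))
print(wick, monte_carlo)             # should agree up to sampling error
```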