5 Differential forms and de Rham cohomology
III Differential Geometry
5.1 Differential forms
We are now going to restrict our focus to a special kind of tensors, known as differential forms. Recall that in $\mathbb{R}^n$ (as a vector space), an alternating $n$-linear map tells us the signed volume of the parallelepiped spanned by $n$ vectors. In general, a differential $p$-form is an alternating $p$-linear map on the tangent space at each point, so it tells us the volume of an "infinitesimal $p$-dimensional parallelepiped".
In fact, we will later see that on an (oriented) $p$-dimensional manifold, we can integrate a $p$-form on the manifold to obtain the "volume" of the manifold.
Definition (Differential form). We write
$$\Omega^p(M) = C^\infty(M, \Lambda^p T^*M) = \{p\text{-forms on } M\}.$$
An element of $\Omega^p(M)$ is known as a differential $p$-form.
In particular, we have
$$\Omega^0(M) = C^\infty(M, \mathbb{R}).$$
In local coordinates $x_1, \cdots, x_n$ on $U$ we can write $\omega \in \Omega^p(M)$ as
$$\omega = \sum_{i_1 < \cdots < i_p} \omega_{i_1, \ldots, i_p}\, dx_{i_1} \wedge \cdots \wedge dx_{i_p}$$
for some smooth functions $\omega_{i_1, \ldots, i_p}$.
We are usually lazy and just write
$$\omega = \sum_I \omega_I\, dx_I.$$
Example. A 0-form is a smooth function.
Example. A 1-form is a section of $T^*M$. If $\omega \in \Omega^1(M)$ and $X \in \mathrm{Vect}(M)$, then $\omega(X) \in C^\infty(M, \mathbb{R})$.
For example, if $f$ is a smooth function on $M$, then $df \in \Omega^1(M)$ with
$$df(X) = X(f)$$
for all $X \in \mathrm{Vect}(M)$.
Locally, we can write
$$df = \sum_{i=1}^n a_i\, dx_i.$$
To work out what the $a_i$'s are, we just hit this with $\frac{\partial}{\partial x_j}$. So we have
$$a_j = df\left(\frac{\partial}{\partial x_j}\right) = \frac{\partial f}{\partial x_j}.$$
So we have
$$df = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\, dx_i.$$
This is essentially just the gradient of a function!
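As a sanity check, this coefficient computation can be reproduced symbolically. A minimal sketch with sympy, where the choice of $f$ is purely illustrative:

```python
# Sketch: the coefficients a_j = df(∂/∂x_j) = ∂f/∂x_j, computed with sympy.
# The choice of f below is purely illustrative.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 * sp.sin(x2) + sp.exp(x3)

# Coefficients of df in the basis dx_1, dx_2, dx_3:
coeffs = [sp.diff(f, v) for v in (x1, x2, x3)]
print(coeffs)  # [2*x1*sin(x2), x1**2*cos(x2), exp(x3)]
```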
Example. If $\dim M = n$ and $\omega \in \Omega^n(M)$, then locally we can write
$$\omega = g\, dx_1 \wedge \cdots \wedge dx_n$$
for some smooth function $g$. This is an alternating form that assigns a real number to $n$ tangent vectors. So it measures volume!
If $y_1, \cdots, y_n$ are any other coordinates, then
$$dx_i = \sum_j \frac{\partial x_i}{\partial y_j}\, dy_j.$$
So we have
$$\omega = g \det\left(\frac{\partial x_i}{\partial y_j}\right)_{i,j} dy_1 \wedge \cdots \wedge dy_n.$$
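For a concrete instance of this change-of-variables formula, take polar coordinates on the plane (a choice made here purely for illustration): with $x = r\cos\theta$, $y = r\sin\theta$, the determinant works out to $r$, recovering $dx \wedge dy = r\, dr \wedge d\theta$. A sketch:

```python
# Sketch: verify det(∂x_i/∂y_j) for the polar-coordinate change of variables.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = r * sp.cos(th)
y = r * sp.sin(th)

# Jacobian matrix (∂x_i/∂y_j), with new coordinates (y_1, y_2) = (r, θ):
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
               [sp.diff(y, r), sp.diff(y, th)]])
print(sp.simplify(J.det()))  # r, i.e. dx ∧ dy = r dr ∧ dθ
```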
Now a motivating question is this: given an $\omega \in \Omega^1(M)$, can we find some $f \in \Omega^0(M)$ such that $\omega = df$?
More concretely, let $U \subseteq \mathbb{R}^2$ be open, and let $x, y$ be the coordinates. Let
$$\omega = a\, dx + b\, dy.$$
If we have $\omega = df$ for some $f$, then we have
$$a = \frac{\partial f}{\partial x}, \quad b = \frac{\partial f}{\partial y}.$$
So the symmetry of partial derivatives tells us that
$$\frac{\partial a}{\partial y} = \frac{\partial b}{\partial x}. \tag{$*$}$$
So this equation $(*)$ is a necessary condition to solve $\omega = df$. Is it sufficient?
To begin with, we want to find a better way to express $(*)$ without resorting to local coordinates, and it turns out this construction will be very useful later on.
Theorem (Exterior derivative). There exists a unique linear map
$$d = d_{M,p}: \Omega^p(M) \to \Omega^{p+1}(M)$$
such that

(i) On $\Omega^0(M)$ this is as previously defined, i.e.
$$df(X) = X(f) \text{ for all } X \in \mathrm{Vect}(M).$$

(ii) We have
$$d \circ d = 0: \Omega^p(M) \to \Omega^{p+2}(M).$$

(iii) It satisfies the Leibniz rule
$$d(\omega \wedge \sigma) = d\omega \wedge \sigma + (-1)^p\, \omega \wedge d\sigma.$$

It follows from these assumptions that

(iv) $d$ acts locally, i.e. if $\omega, \omega' \in \Omega^p(M)$ satisfy $\omega|_U = \omega'|_U$ for some $U \subseteq M$ open, then $d\omega|_U = d\omega'|_U$.

(v) We have
$$d(\omega|_U) = (d\omega)|_U$$
for all $U \subseteq M$ open.
What do the three rules tell us? The first rule tells us this is a generalization
of what we previously had. The second rule will turn out to be a fancy way of
saying partial derivatives commute. The final Leibniz rule tells us this d is some
sort of derivative.
Example. If we have
$$\omega = a\, dx + b\, dy,$$
then we have
$$\begin{aligned}
d\omega &= da \wedge dx + a\, d(dx) + db \wedge dy + b\, d(dy)\\
&= da \wedge dx + db \wedge dy\\
&= \left(\frac{\partial a}{\partial x}\, dx + \frac{\partial a}{\partial y}\, dy\right) \wedge dx + \left(\frac{\partial b}{\partial x}\, dx + \frac{\partial b}{\partial y}\, dy\right) \wedge dy\\
&= \left(\frac{\partial b}{\partial x} - \frac{\partial a}{\partial y}\right) dx \wedge dy.
\end{aligned}$$
So the condition $(*)$ says $d\omega = 0$.
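To see the condition in action, here is a small sketch with sympy; the 1-form $\omega = 2xy\,dx + x^2\,dy$ is our own example, chosen because it is in fact exact:

```python
# Sketch: check dω = 0 for ω = 2xy dx + x² dy, and exhibit f with ω = df.
import sympy as sp

x, y = sp.symbols('x y')
a, b = 2*x*y, x**2  # our illustrative choice of ω = a dx + b dy

# Coefficient of dx ∧ dy in dω:
coeff = sp.diff(b, x) - sp.diff(a, y)
print(coeff)  # 0, so dω = 0

# Indeed ω = df for f = x²y:
f = x**2 * y
print(sp.diff(f, x) == a, sp.diff(f, y) == b)  # True True
```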
We now rephrase our motivating question: if $\omega \in \Omega^1(M)$ satisfies $d\omega = 0$, can we find some $f \in \Omega^0(M)$ such that $\omega = df$? Now this has the obvious generalization: given any $p$-form $\omega$, if $d\omega = 0$, can we find some $\sigma$ such that $\omega = d\sigma$?
Example. In $\mathbb{R}^3$, we have coordinates $x, y, z$. We have seen that for $f \in \Omega^0(\mathbb{R}^3)$, we have
$$df = \frac{\partial f}{\partial x}\, dx + \frac{\partial f}{\partial y}\, dy + \frac{\partial f}{\partial z}\, dz.$$
Now if
$$\omega = P\, dx + Q\, dy + R\, dz \in \Omega^1(\mathbb{R}^3),$$
then we have
$$\begin{aligned}
d(P\, dx) &= dP \wedge dx + P\, d(dx)\\
&= \left(\frac{\partial P}{\partial x}\, dx + \frac{\partial P}{\partial y}\, dy + \frac{\partial P}{\partial z}\, dz\right) \wedge dx\\
&= -\frac{\partial P}{\partial y}\, dx \wedge dy - \frac{\partial P}{\partial z}\, dx \wedge dz.
\end{aligned}$$
So we have
$$d\omega = \left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) dx \wedge dy + \left(\frac{\partial R}{\partial x} - \frac{\partial P}{\partial z}\right) dx \wedge dz + \left(\frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z}\right) dy \wedge dz.$$
This is just the curl! So $d^2 = 0$ just says that $\operatorname{curl} \circ \operatorname{grad} = 0$.
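The identity $\operatorname{curl} \circ \operatorname{grad} = 0$ can be checked symbolically for an arbitrary smooth $f$; sympy treats mixed partials of an undefined function as commuting, which is exactly the assumption in play here. A sketch:

```python
# Sketch: for ω = df, the three coefficients of dω vanish identically.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)  # arbitrary smooth function

P, Q, R = sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)  # ω = df

# Coefficients of dω, read off from the formula above:
c_xy = sp.diff(Q, x) - sp.diff(P, y)
c_xz = sp.diff(R, x) - sp.diff(P, z)
c_yz = sp.diff(R, y) - sp.diff(Q, z)
print(c_xy, c_xz, c_yz)  # 0 0 0: curl ∘ grad = 0
```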
Proof. The above computations suggest that in local coordinates, the axioms already tell us completely how $d$ works. So we just work locally and see that they match up globally.
Suppose $M$ is covered by a single chart with coordinates $x_1, \cdots, x_n$. We define $d: \Omega^0(M) \to \Omega^1(M)$ as required by (i). For $p > 0$, we define
$$d\left(\sum_{i_1 < \cdots < i_p} \omega_{i_1, \ldots, i_p}\, dx_{i_1} \wedge \cdots \wedge dx_{i_p}\right) = \sum d\omega_{i_1, \ldots, i_p} \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_p}.$$
Then (i) is clear. For (iii), we suppose
$$\omega = f\, dx_I \in \Omega^p(M), \quad \sigma = g\, dx_J \in \Omega^q(M).$$
We then have
$$\begin{aligned}
d(\omega \wedge \sigma) &= d(fg\, dx_I \wedge dx_J)\\
&= d(fg) \wedge dx_I \wedge dx_J\\
&= g\, df \wedge dx_I \wedge dx_J + f\, dg \wedge dx_I \wedge dx_J\\
&= g\, df \wedge dx_I \wedge dx_J + f(-1)^p\, dx_I \wedge (dg \wedge dx_J)\\
&= (d\omega) \wedge \sigma + (-1)^p\, \omega \wedge d\sigma.
\end{aligned}$$
So done. Finally, for (ii), if $f \in \Omega^0(M)$, then
$$d^2 f = d\left(\sum_i \frac{\partial f}{\partial x_i}\, dx_i\right) = \sum_{i,j} \frac{\partial^2 f}{\partial x_i\, \partial x_j}\, dx_j \wedge dx_i = 0,$$
since partial derivatives commute, while $dx_j \wedge dx_i$ is antisymmetric in $i$ and $j$. Then for general forms, we have
$$d^2 \omega = d^2\left(\sum \omega_I\, dx_I\right) = d\left(\sum d\omega_I \wedge dx_I\right) = d\left(\sum d\omega_I \wedge dx_{i_1} \wedge \cdots \wedge dx_{i_p}\right) = 0,$$
using the Leibniz rule. So this works.
Certainly this has the extra properties. To claim uniqueness, if $\partial: \Omega^p(M) \to \Omega^{p+1}(M)$ satisfies the above properties, then
$$\partial \omega = \partial\left(\sum \omega_I\, dx_I\right) = \sum \left(\partial \omega_I \wedge dx_I + \omega_I\, \partial(dx_I)\right) = \sum d\omega_I \wedge dx_I,$$
using the fact that $\partial = d$ on $\Omega^0(M)$ and induction.
Finally, if $M$ is covered by charts, we can define $d: \Omega^p(M) \to \Omega^{p+1}(M)$ by defining it to be the $d$ above on any single chart. Then uniqueness implies this is well-defined. This gives existence of $d$, but doesn't immediately give uniqueness, since we only proved local uniqueness.
So suppose $\partial: \Omega^p(M) \to \Omega^{p+1}(M)$ again satisfies the three properties. We claim that $\partial$ is local. We let $\omega, \omega' \in \Omega^p(M)$ be such that $\omega|_U = \omega'|_U$ for some $U \subseteq M$ open. Let $x \in U$, and pick a bump function $\chi \in C^\infty(M)$ such that $\chi \equiv 1$ on some neighbourhood $W$ of $x$, and $\operatorname{supp}(\chi) \subseteq U$. Then we have
$$\chi \cdot (\omega - \omega') = 0.$$
We then apply $\partial$ to get
$$0 = \partial(\chi \cdot (\omega - \omega')) = d\chi \wedge (\omega - \omega') + \chi(\partial \omega - \partial \omega').$$
But $\chi \equiv 1$ on $W$. So $d\chi$ vanishes on $W$. So we must have
$$\partial \omega|_W - \partial \omega'|_W = 0.$$
So $\partial \omega = \partial \omega'$ on $W$.
Finally, to show that $\partial = d$, if $\omega \in \Omega^p(M)$, we take the same $\chi$ as before. Then on $W$, we have
$$\partial \omega = \partial\left(\chi \sum \omega_I\, dx_I\right) = \partial \chi \wedge \sum \omega_I\, dx_I + \chi \sum \partial \omega_I \wedge dx_I = \chi \sum d\omega_I \wedge dx_I = d\omega,$$
since $\partial \chi = d\chi$ vanishes on $W$. Since $x$ was arbitrary, we have $\partial = d$. So we get uniqueness.
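The bump function used in the proof can be built explicitly in one variable from $t \mapsto e^{-1/t}$; on a manifold one would compose such a function with a chart. The specific cutoff radii below (1 and 2) are an arbitrary choice for illustration:

```python
# Sketch: a smooth bump χ with χ ≡ 1 on [-1, 1] and supp(χ) ⊆ [-2, 2].
import math

def phi(t: float) -> float:
    # Smooth on R, vanishes identically for t <= 0.
    return math.exp(-1.0 / t) if t > 0 else 0.0

def chi(t: float) -> float:
    # The denominator is always positive, so chi is smooth everywhere.
    s = abs(t)
    return phi(2 - s) / (phi(2 - s) + phi(s - 1))

print(chi(0.5), chi(1.0), chi(2.5))  # 1.0 1.0 0.0
```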
One useful example of a differential form is a symplectic form.
Definition (Non-degenerate form). A 2-form $\omega \in \Omega^2(M)$ is non-degenerate if, at each point $p$,
$$\omega(X_p, Y_p) = 0 \text{ for all } Y_p \in T_p M \text{ implies } X_p = 0.$$
As in the case of an inner product, such an $\omega$ gives us an isomorphism $T_p M \to T_p^* M$ by
$$\alpha(X_p)(Y_p) = \omega(X_p, Y_p).$$
Definition (Symplectic form). A symplectic form is a non-degenerate 2-form $\omega$ such that $d\omega = 0$.
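The standard example is $\omega = \sum_i dx_i \wedge dy_i$ on $\mathbb{R}^{2n}$: its coefficients are constant, so $d\omega = 0$, and non-degeneracy amounts to its coefficient matrix being invertible. A sketch checking this for $n = 2$:

```python
# Sketch: the standard symplectic form on R⁴ has invertible coefficient matrix.
import sympy as sp

n = 2
block = sp.Matrix([[0, 1], [-1, 0]])  # ω(∂x_i, ∂y_i) = 1 = -ω(∂y_i, ∂x_i)
Omega = sp.diag(*([block] * n))       # basis ordered (x1, y1, x2, y2)

print(Omega.det())        # 1, nonzero, so ω is non-degenerate
print(Omega.T == -Omega)  # True: the matrix of a 2-form is antisymmetric
```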
Why did we work with covectors rather than vectors when defining differential forms? It happens that differential forms have nicer properties. If we have some $F \in C^\infty(M, N)$ and $g \in \Omega^0(N) = C^\infty(N, \mathbb{R})$, then we can form the pullback
$$F^* g = g \circ F \in \Omega^0(M).$$
More generally, for $x \in M$, we have a map
$$DF|_x: T_x M \to T_{F(x)} N.$$
This does not allow us to push forward a vector field on $M$ to a vector field on $N$, as the map $F$ might not be injective. However, we can use its dual
$$(DF|_x)^*: T^*_{F(x)} N \to T^*_x M$$
to pull forms back.
Definition (Pullback of differential form). Let $\omega \in \Omega^p(N)$ and $F \in C^\infty(M, N)$. We define the pullback of $\omega$ along $F$ to be
$$F^* \omega|_x = \Lambda^p (DF|_x)^*\, (\omega|_{F(x)}).$$
In other words, for $v_1, \cdots, v_p \in T_x M$, we have
$$(F^* \omega|_x)(v_1, \cdots, v_p) = \omega|_{F(x)}(DF|_x(v_1), \cdots, DF|_x(v_p)).$$
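To make the definition concrete, here is a sketch for a 2-form pulled back along a map $F: \mathbb{R}^2 \to \mathbb{R}^2$ (the map is our own illustrative choice). Evaluating $F^*(du \wedge dv)$ on the standard basis, via the defining formula, recovers the Jacobian determinant of $F$:

```python
# Sketch: (F*(du ∧ dv))(e1, e2) = (du ∧ dv)(DF e1, DF e2) = det(DF).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
F = sp.Matrix([x1**2 + x2, x1 * x2])  # illustrative smooth map R² → R²
DF = F.jacobian([x1, x2])

e1, e2 = sp.Matrix([1, 0]), sp.Matrix([0, 1])
v, w = DF * e1, DF * e2
pullback_val = v[0] * w[1] - v[1] * w[0]  # (du ∧ dv)(v, w)

print(sp.simplify(pullback_val - DF.det()))  # 0
```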
Lemma. Let $F \in C^\infty(M, N)$, and let $F^*$ be the associated pullback map. Then

(i) $F^*$ is a linear map $\Omega^p(N) \to \Omega^p(M)$.

(ii) $F^*(\omega \wedge \sigma) = F^* \omega \wedge F^* \sigma$.

(iii) If $G \in C^\infty(N, P)$, then $(G \circ F)^* = F^* \circ G^*$.

(iv) We have $d F^* = F^* d$.
Proof. All but (iv) are clear. We first check that (iv) holds for 0-forms. If $g \in \Omega^0(N)$, then we have
$$\begin{aligned}
(F^* dg)|_x(v) &= dg|_{F(x)}(DF|_x(v))\\
&= DF|_x(v)(g)\\
&= v(g \circ F)\\
&= d(g \circ F)(v)\\
&= d(F^* g)(v).
\end{aligned}$$
So we are done.
Then the general result follows from (i) and (ii). Indeed, in local coordinates $y_1, \cdots, y_n$, if
$$\omega = \sum \omega_{i_1, \ldots, i_p}\, dy_{i_1} \wedge \cdots \wedge dy_{i_p},$$
then we have
$$F^* \omega = \sum (F^* \omega_{i_1, \ldots, i_p})\,(F^* dy_{i_1} \wedge \cdots \wedge F^* dy_{i_p}).$$
Then we have
$$d F^* \omega = F^* d\omega = \sum (F^* d\omega_{i_1, \ldots, i_p}) \wedge (F^* dy_{i_1} \wedge \cdots \wedge F^* dy_{i_p}).$$
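The 0-form step of the proof, $F^* dg = d(F^* g)$, is just the chain rule, and can be checked symbolically; the map $F$ and the function $g$ below are arbitrary illustrative choices:

```python
# Sketch: compare d(g ∘ F) with the pulled-back coefficients of dg.
import sympy as sp

x1, x2, u, v = sp.symbols('x1 x2 u v')
Fu, Fv = x1 + x2**2, sp.sin(x1)  # components of an illustrative F: R² → R²
g = u**2 * v                     # an illustrative 0-form on the target

# d(F*g): differentiate g ∘ F directly.
gF = g.subs({u: Fu, v: Fv})
d_pullback = [sp.diff(gF, s) for s in (x1, x2)]

# F*(dg): pull back the coefficients of dg and apply the chain rule.
gu, gv = sp.diff(g, u), sp.diff(g, v)
pullback_d = [gu.subs({u: Fu, v: Fv}) * sp.diff(Fu, s)
              + gv.subs({u: Fu, v: Fv}) * sp.diff(Fv, s)
              for s in (x1, x2)]

print([sp.simplify(a - b) for a, b in zip(d_pullback, pullback_d)])  # [0, 0]
```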