Part III — Riemannian Geometry
Based on lectures by A. G. Kovalev
Notes taken by Dexter Chua
Lent 2017
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
This course is a possible natural sequel of the course Differential Geometry offered in
Michaelmas Term. We shall explore various techniques and results revealing intricate
and subtle relations between Riemannian metrics, curvature and topology. I hope to
cover much of the following:
A closer look at geodesics and curvature. Brief review from the Differential Geometry
course. Geodesic coordinates and Gauss' lemma. Jacobi fields, completeness and
the Hopf–Rinow theorem. Variations of energy, Bonnet–Myers diameter theorem and
Synge’s theorem.
Hodge theory and Riemannian holonomy. The Hodge star and Laplace–Beltrami
operator. The Hodge decomposition theorem (with the 'geometry part' of the proof).
Bochner–Weitzenböck formulae. Holonomy groups. Interplays with curvature and de
Rham cohomology.
Ricci curvature. Fundamental groups and Ricci curvature. The Cheeger–Gromoll
splitting theorem.
Pre-requisites
Manifolds, differential forms, vector fields. Basic concepts of Riemannian geometry
(curvature, geodesics etc.) and Lie groups. The course Differential Geometry offered in
Michaelmas Term is the ideal pre-requisite.
Contents
1 Basics of Riemannian manifolds
2 Riemann curvature
3 Geodesics
3.1 Definitions and basic properties
3.2 Jacobi fields
3.3 Further properties of geodesics
3.4 Completeness and the Hopf–Rinow theorem
3.5 Variations of arc length and energy
3.6 Applications
4 Hodge theory on Riemannian manifolds
4.1 Hodge star and operators
4.2 Hodge decomposition theorem
4.3 Divergence
4.4 Introduction to Bochner’s method
5 Riemannian holonomy groups
6 The Cheeger–Gromoll splitting theorem
1 Basics of Riemannian manifolds
Before we do anything, we lay out our conventions. Given a choice of local coordinates $\{x^k\}$, the coefficients $X^k$ for a vector field $X$ are defined by

$$X = \sum X^k \frac{\partial}{\partial x^k}.$$
In general, for a tensor field $X \in TM^{\otimes q} \otimes T^*M^{\otimes p}$, we write

$$X = \sum X^{k_1 \ldots k_q}_{\ell_1 \ldots \ell_p}\, \frac{\partial}{\partial x^{k_1}} \otimes \cdots \otimes \frac{\partial}{\partial x^{k_q}} \otimes dx^{\ell_1} \otimes \cdots \otimes dx^{\ell_p},$$

and we often leave out the $\otimes$ signs.
For the sake of sanity, we will often use implicit summation convention, i.e. whenever we write something of the form

$$X_{ijk} Y^{i\ell jk},$$

we mean

$$\sum_{i,j,k} X_{ijk} Y^{i\ell jk}.$$
We will use upper indices to denote contravariant components, and lower
indices for covariant components, as we have done above. Thus, we always sum
an upper index with a lower index, as this corresponds to applying a covector to
a vector.
We will index the basis elements oppositely, e.g. we write $dx^k$ instead of $dx_k$ for a basis element of $T^*M$, so that the indices in expressions of the form $A_k\, dx^k$ seem to match up. Whenever we do not follow this convention, we will write out summations explicitly.
We will also adopt the shorthands

$$\partial_k = \frac{\partial}{\partial x^k}, \qquad \nabla_k = \nabla_{\partial_k}.$$
With these conventions out of the way, we begin with a very brief summary
of some topics in the Michaelmas Differential Geometry course, starting from
the definition of a Riemannian metric.
Definition (Riemannian metric). Let $M$ be a smooth manifold. A Riemannian metric $g$ on $M$ is an inner product on the tangent bundle $TM$ varying smoothly with the fibers. Formally, this is a global section of $T^*M \otimes T^*M$ that is fiberwise symmetric and positive definite.
The pair $(M, g)$ is called a Riemannian manifold.
On every coordinate neighbourhood with coordinates $x = (x^1, \cdots, x^n)$, we can write

$$g = \sum_{i,j=1}^n g_{ij}(x)\, dx^i\, dx^j,$$

where the coefficients

$$g_{ij} = g\left(\frac{\partial}{\partial x^i}, \frac{\partial}{\partial x^j}\right)$$

are $C^\infty$ functions.
Example. The manifold $\mathbb{R}^k$ has a canonical metric given by the Euclidean metric. In the usual coordinates, $g$ is given by $g_{ij} = \delta_{ij}$.
Does every manifold admit a metric? Recall
Theorem (Whitney embedding theorem). Every smooth manifold $M$ admits an embedding into $\mathbb{R}^k$ for some $k$. In other words, $M$ is diffeomorphic to a submanifold of $\mathbb{R}^k$. In fact, we can pick $k$ such that $k \leq 2 \dim M$.
Using such an embedding, we can induce a Riemannian metric on $M$ by restricting the inner product from Euclidean space, since we have inclusions $T_pM \to T_p\mathbb{R}^k \cong \mathbb{R}^k$.
More generally,
Lemma. Let $(N, h)$ be a Riemannian manifold, and let $F: M \to N$ be an immersion. Then the pullback $g = F^*h$ defines a metric on $M$.
The condition of immersion is required for the pullback to be non-degenerate.
In Differential Geometry, if we do not have metrics, then we tend to consider
diffeomorphic spaces as being the same. With metrics, the natural notion of
isomorphism is
Definition (Isometry). Let $(M, g)$ and $(N, h)$ be Riemannian manifolds. We say $f: M \to N$ is an isometry if it is a diffeomorphism and $f^*h = g$. In other words, for any $p \in M$ and $u, v \in T_pM$, we need

$$h\big((df)_p u, (df)_p v\big) = g(u, v).$$
Example. Let $G$ be a Lie group. Then for any $x \in G$, we have translation maps $L_x, R_x: G \to G$ given by

$$L_x(y) = xy, \qquad R_x(y) = yx.$$

These maps are in fact diffeomorphisms of $G$.
We already know that $G$ admits a Riemannian metric, but we might want to ask something stronger — does there exist a left-invariant metric? In other words, is there a metric such that each $L_x$ is an isometry?
Recall the following definition:
Definition (Left-invariant vector field). Let $G$ be a Lie group, and $X$ a vector field. Then $X$ is left invariant if for any $x \in G$, we have $d(L_x)X = X$.
We had a rather general technique for producing left-invariant vector fields. Given a Lie group $G$, we can define the Lie algebra $\mathfrak{g} = T_eG$. Then we can produce left-invariant vector fields by picking some $X_e \in \mathfrak{g}$, and then setting

$$X_a = d(L_a)X_e.$$

The resulting vector field is indeed smooth, as shown in the differential geometry course.
Similarly, to construct a left-invariant metric, we can just pick a metric at the identity and then propagate it around using left-translation. More explicitly, given any inner product $\langle \cdot, \cdot \rangle$ on $T_eG$, we can define $g$ by

$$g(u, v) = \langle (dL_{x^{-1}})_x u, (dL_{x^{-1}})_x v \rangle$$

for all $x \in G$ and $u, v \in T_xG$. The argument for smoothness is similar to that for vector fields.
Of course, everything works when we replace "left" with "right". A Riemannian
metric is said to be bi-invariant if it is both left- and right-invariant.
These are harder to find, but it is a fact that every compact Lie group admits a
bi-invariant metric. The basic idea of the proof is to start from a left-invariant
metric, then integrate the metric along right translations of all group elements.
Here compactness is necessary for the result to be finite.
We will later see that we cannot drop the compactness condition. There are
non-compact Lie groups that do not admit bi-invariant metrics, such as $\mathrm{SL}(2, \mathbb{R})$.
Recall that in order to differentiate vectors, or even tensors on a manifold,
we needed a connection on the tangent bundle. There is a natural choice for the
connection when we are given a Riemannian metric.
Definition (Levi-Civita connection). Let $(M, g)$ be a Riemannian manifold. The Levi-Civita connection is the unique connection $\nabla: \Omega^0_M(TM) \to \Omega^1_M(TM)$ on $M$ satisfying
(i) Compatibility with metric:

$$Z g(X, Y) = g(\nabla_Z X, Y) + g(X, \nabla_Z Y),$$

(ii) Symmetry/torsion-free:

$$\nabla_X Y - \nabla_Y X = [X, Y].$$
Definition (Christoffel symbols). In local coordinates, the Christoffel symbols are defined by

$$\nabla_{\partial_j} \frac{\partial}{\partial x^k} = \Gamma^i_{jk} \frac{\partial}{\partial x^i}.$$
With a bit more imagination on what the symbols mean, we can write the first property as

$$d(g(X, Y)) = g(\nabla X, Y) + g(X, \nabla Y),$$

while the second property can be expressed in coordinate representation by

$$\Gamma^i_{jk} = \Gamma^i_{kj}.$$
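In fact, the two properties determine $\nabla$ explicitly. Permuting the compatibility identity cyclically in $X, Y, Z$ and using the torsion-free property yields the Koszul formula, and hence an explicit formula for the Christoffel symbols in terms of the metric. This standard derivation is not spelled out in the notes, but is worth recording:

```latex
% Koszul formula: for vector fields X, Y, Z,
2\, g(\nabla_X Y, Z) = X g(Y,Z) + Y g(X,Z) - Z g(X,Y)
    + g([X,Y], Z) - g([X,Z], Y) - g([Y,Z], X).
% Taking X = \partial_j, Y = \partial_k, Z = \partial_\ell (all brackets vanish):
\Gamma^i_{jk} = \tfrac{1}{2}\, g^{i\ell}\left(
    \partial_j g_{k\ell} + \partial_k g_{j\ell} - \partial_\ell g_{jk} \right),
% where g^{i\ell} denotes the inverse of the matrix (g_{i\ell}).
```

In particular, the symmetry $\Gamma^i_{jk} = \Gamma^i_{kj}$ is visible directly from this formula.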
The connection was defined on $TM$, but in fact, the connection allows us to differentiate many more things, and not just tangent vectors.
Firstly, the connection $\nabla$ induces a unique covariant derivative on $T^*M$, also denoted $\nabla$, defined uniquely by the relation

$$X \langle \alpha, Y \rangle = \langle \nabla_X \alpha, Y \rangle + \langle \alpha, \nabla_X Y \rangle$$

for any $X, Y \in \operatorname{Vect}(M)$ and $\alpha \in \Omega^1(M)$.
To extend this to a connection $\nabla$ on tensor bundles $T^{q,p} \equiv (TM)^{\otimes q} \otimes (T^*M)^{\otimes p}$ for any $p, q \geq 0$, we note the following general construction:
In general, suppose we have vector bundles $E$ and $F$, and $s_1 \in \Gamma(E)$ and $s_2 \in \Gamma(F)$. If we have connections $\nabla^E$ and $\nabla^F$ on $E$ and $F$ respectively, then we can define

$$\nabla^{E \otimes F}(s_1 \otimes s_2) = (\nabla^E s_1) \otimes s_2 + s_1 \otimes (\nabla^F s_2).$$
Since we already have a connection on $TM$ and $T^*M$, this allows us to extend the connection to all tensor bundles.
Given this machinery, recall that the Riemannian metric is formally a section $g \in \Gamma(T^*M \otimes T^*M)$. Then the compatibility with the metric can be written in the following even more compact form:

$$\nabla g = 0.$$
2 Riemann curvature
With all those definitions out of the way, we now start by studying the notion
of curvature. The definition of the curvature tensor might not seem intuitive
at first, but motivation was somewhat given in the III Differential Geometry
course, and we will not repeat that.
Definition (Curvature). Let $(M, g)$ be a Riemannian manifold with Levi-Civita connection $\nabla$. The curvature 2-form is the section

$$R = -\nabla \circ \nabla \in \Gamma\Big(\textstyle\bigwedge^2 T^*M \otimes T^*M \otimes TM\Big) \subseteq \Gamma(T^{1,3}M).$$

This can be thought of as a 2-form with values in $T^*M \otimes TM = \operatorname{End}(TM)$. Given any $X, Y \in \operatorname{Vect}(M)$, we have

$$R(X, Y) \in \Gamma(\operatorname{End} TM).$$
The following formula is a straightforward but crucial computation:
Proposition.

$$R(X, Y) = \nabla_{[X,Y]} - [\nabla_X, \nabla_Y].$$
In local coordinates, we can write

$$R = \big(R^i_{j,k\ell}\, dx^k\, dx^\ell\big)_{i,j = 1, \ldots, \dim M} \in \Omega^2_M(\operatorname{End}(TM)).$$

Then we have

$$\big(R(X, Y)\big)^i_j = R^i_{j,k\ell}\, X^k Y^\ell.$$

The comma between $j$ and $k$ is purely for artistic reasons.
It is often slightly convenient to consider a different form of the Riemann curvature tensor. Instead of having a tensor of type $(1, 3)$, we have one of type $(0, 4)$ given by

$$R(X, Y, Z, T) = g(R(X, Y)Z, T)$$

for $X, Y, Z, T \in T_pM$. In local coordinates, we write this as

$$R_{ij,k\ell} = g_{iq} R^q_{j,k\ell}.$$
The first thing we want to prove is that $R_{ij,k\ell}$ enjoys some symmetries we might not expect:
Proposition.
(i) $R_{ij,k\ell} = -R_{ij,\ell k} = -R_{ji,k\ell}$.
(ii) The first Bianchi identity:

$$R^i_{j,k\ell} + R^i_{k,\ell j} + R^i_{\ell,jk} = 0.$$

(iii) $R_{ij,k\ell} = R_{k\ell,ij}$.
Note that the first Bianchi identity can also be written for the $(0, 4)$ tensor as

$$R_{ij,k\ell} + R_{ik,\ell j} + R_{i\ell,jk} = 0.$$
Proof.
(i) The first equality is obvious as coefficients of a 2-form. For the second equality, we begin with the compatibility of the connection with the metric:

$$\frac{\partial g_{ij}}{\partial x^k} = g(\nabla_k \partial_i, \partial_j) + g(\partial_i, \nabla_k \partial_j).$$

We take a partial derivative, say with respect to $x^\ell$, to obtain

$$\frac{\partial^2 g_{ij}}{\partial x^\ell \partial x^k} = g(\nabla_\ell \nabla_k \partial_i, \partial_j) + g(\nabla_k \partial_i, \nabla_\ell \partial_j) + g(\nabla_\ell \partial_i, \nabla_k \partial_j) + g(\partial_i, \nabla_\ell \nabla_k \partial_j).$$

Then we know

$$0 = \frac{\partial^2 g_{ij}}{\partial x^\ell \partial x^k} - \frac{\partial^2 g_{ij}}{\partial x^k \partial x^\ell} = g([\nabla_\ell, \nabla_k] \partial_i, \partial_j) + g(\partial_i, [\nabla_\ell, \nabla_k] \partial_j).$$

But we know

$$R(\partial_k, \partial_\ell) = \nabla_{[\partial_k, \partial_\ell]} - [\nabla_k, \nabla_\ell] = -[\nabla_k, \nabla_\ell].$$

Writing $R_{k\ell} = R(\partial_k, \partial_\ell)$, we have

$$0 = g(R_{k\ell} \partial_i, \partial_j) + g(\partial_i, R_{k\ell} \partial_j) = R_{ji,k\ell} + R_{ij,k\ell}.$$

So we are done.
(ii) Recall

$$R^i_{j,k\ell} = (R_{k\ell} \partial_j)^i = ([\nabla_\ell, \nabla_k] \partial_j)^i.$$

So we have

$$R^i_{j,k\ell} + R^i_{k,\ell j} + R^i_{\ell,jk} = \big[(\nabla_\ell \nabla_k \partial_j - \nabla_k \nabla_\ell \partial_j) + (\nabla_j \nabla_\ell \partial_k - \nabla_\ell \nabla_j \partial_k) + (\nabla_k \nabla_j \partial_\ell - \nabla_j \nabla_k \partial_\ell)\big]^i.$$
We claim that

$$\nabla_\ell \nabla_k \partial_j - \nabla_\ell \nabla_j \partial_k = 0.$$

Indeed, by definition, we have

$$(\nabla_k \partial_j)^q = \Gamma^q_{kj} = \Gamma^q_{jk} = (\nabla_j \partial_k)^q.$$

The other terms cancel similarly, and we get 0 as promised.
(iii) Consider the following octahedron (drawn as a figure in the original notes), whose six vertices are

$$R_{ik,\ell j} = R_{ki,j\ell}, \qquad R_{i\ell,jk} = R_{\ell i,kj}, \qquad R_{j\ell,ki} = R_{\ell j,ik},$$

$$R_{jk,i\ell} = R_{kj,\ell i}, \qquad R_{ij,k\ell} = R_{ji,\ell k}, \qquad R_{k\ell,ij} = R_{\ell k,ji}.$$

The equalities at each vertex are given by (i). By the first Bianchi identity, for each greyed triangle, the sum of the three vertices is zero.
Now looking at the upper half of the octahedron, adding the two greyed triangles shows us that the sum of the vertices in the horizontal square is $(-2) R_{ij,k\ell}$. Looking at the bottom half, we find that the sum of the vertices in the horizontal square is $(-2) R_{k\ell,ij}$. So we must have

$$R_{ij,k\ell} = R_{k\ell,ij}.$$
What exactly are the properties of the Levi-Civita connection that make these equalities work? The first equality of (i) did not require anything. The second equality of (i) required the compatibility with the metric, and (ii) required the symmetry property. The last one required both properties.
Note that we can express the last property as saying that $R_{ij,k\ell}$ is a symmetric bilinear form on $\bigwedge^2 T^*_pM$.
Sectional curvature
The full curvature tensor is rather scary. So it is convenient to obtain some simpler quantities from it. Recall that if we had tangent vectors $X, Y$, then we can form

$$|X \wedge Y| = \sqrt{g(X, X)\, g(Y, Y) - g(X, Y)^2},$$

which is the area of the parallelogram spanned by $X$ and $Y$. We now define

$$K(X, Y) = \frac{R(X, Y, X, Y)}{|X \wedge Y|^2}.$$
Note that this is invariant under (non-zero) scaling of $X$ or $Y$, and is symmetric in $X$ and $Y$. Finally, it is also invariant under the transformation $(X, Y) \mapsto (X + \lambda Y, Y)$.
But it is an easy linear algebra fact that these transformations generate all isomorphisms from a two-dimensional vector space to itself. So $K(X, Y)$ depends only on the 2-plane spanned by $X$ and $Y$. So we have in fact defined a function on the Grassmannian of 2-planes, $K: \operatorname{Gr}(2, T_pM) \to \mathbb{R}$. This is called the sectional curvature (of $g$).
It turns out the sectional curvature determines the Riemann curvature tensor
completely!
Lemma. Let $V$ be a real vector space of dimension $\geq 2$. Suppose $R', R'': V^{\otimes 4} \to \mathbb{R}$ are both linear in each factor, and satisfy the symmetries we found for the Riemann curvature tensor. We define $K', K'': \operatorname{Gr}(2, V) \to \mathbb{R}$ as in the sectional curvature. If $K' = K''$, then $R' = R''$.
This is really just linear algebra.
Proof. For any $X, Y, Z \in V$, we know

$$R'(X + Z, Y, X + Z, Y) = R''(X + Z, Y, X + Z, Y).$$

Using linearity of $R'$ and $R''$, and cancelling equal terms on both sides, we find

$$R'(Z, Y, X, Y) + R'(X, Y, Z, Y) = R''(Z, Y, X, Y) + R''(X, Y, Z, Y).$$

Now using the symmetry property of $R'$ and $R''$, this implies

$$R'(X, Y, Z, Y) = R''(X, Y, Z, Y).$$

Similarly, we replace $Y$ with $Y + T$, and then we get

$$R'(X, Y, Z, T) + R'(X, T, Z, Y) = R''(X, Y, Z, T) + R''(X, T, Z, Y).$$

We then rearrange and use the symmetries to get

$$R'(X, Y, Z, T) - R''(X, Y, Z, T) = R'(Y, Z, X, T) - R''(Y, Z, X, T).$$

We notice this equation says $R'(X, Y, Z, T) - R''(X, Y, Z, T)$ is invariant under the cyclic permutation $X \to Y \to Z \to X$. So by the first Bianchi identity, we have

$$3\big(R'(X, Y, Z, T) - R''(X, Y, Z, T)\big) = 0.$$

So we must have $R' = R''$.
Corollary. Let $(M, g)$ be a manifold such that for all $p$, the function $K_p: \operatorname{Gr}(2, T_pM) \to \mathbb{R}$ is a constant map. Let

$$R^0_p(X, Y, Z, T) = g_p(X, Z)\, g_p(Y, T) - g_p(X, T)\, g_p(Y, Z).$$

Then

$$R_p = K_p R^0_p.$$

Here $K_p$ is just a real number, since it is constant. Moreover, $K_p$ is a smooth function of $p$.
Equivalently, in local coordinates, if the metric at a point is $\delta_{ij}$, then we have

$$R_{ij,ij} = -R_{ij,ji} = K_p,$$

and all other entries are zero.
Of course, the converse also holds.
Proof. We apply the previous lemma as follows: we define $R' = K_p R^0_p$ and $R'' = R_p$. It is a straightforward inspection to see that this $R'$ does follow the symmetry properties of $R_p$, and that they define the same sectional curvature. So $R'' = R'$. We know $K_p$ is smooth in $p$ as both $g$ and $R$ are smooth.
We can further show that if $\dim M > 2$, then $K_p$ is in fact independent of $p$ under the hypothesis of this corollary, and the proof requires a second Bianchi identity. This can be found on the first example sheet.
Other curvatures
There are other quantities we can extract out of the curvature, which will later
be useful.
Definition (Ricci curvature). The Ricci curvature of $g$ at $p \in M$ is

$$\operatorname{Ric}_p(X, Y) = \operatorname{tr}\big(v \mapsto R_p(X, v)Y\big).$$

In terms of coordinates, we have

$$\operatorname{Ric}_{ij} = R^q_{i,jq} = g^{pq} R_{pi,jq},$$

where $g^{pq}$ denotes the inverse of $g$.
This $\operatorname{Ric}$ is a symmetric bilinear form on $T_pM$. This can be determined by the quadratic form

$$\operatorname{Ric}(X) = \frac{1}{n-1} \operatorname{Ric}_p(X, X).$$

The coefficient $\frac{1}{n-1}$ is just a convention.
There are still two indices we can contract, and we can define
Definition (Scalar curvature). The scalar curvature of $g$ is the trace of $\operatorname{Ric}$ with respect to $g$. Explicitly, this is defined by

$$s = g^{ij} \operatorname{Ric}_{ij} = g^{ij} R^q_{i,jq} = R^{qi}{}_{iq}.$$

Sometimes a convention is to define the scalar curvature as $\frac{s}{n(n-1)}$ instead.
In the case of a constant sectional curvature tensor, we have

$$\operatorname{Ric}_p = (n-1) K_p\, g_p,$$

and

$$s(p) = n(n-1) K_p.$$
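For instance, on the round unit sphere $S^n$ the sectional curvature is constantly $1$ (a standard computation, not carried out in the notes), so the formulae above specialize to:

```latex
K \equiv 1, \qquad \operatorname{Ric} = (n-1)\, g, \qquad s = n(n-1).
% e.g. for S^2 this gives \operatorname{Ric} = g and s = 2,
% recovering the Gaussian curvature 1 of the unit sphere.
```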
Low dimensions
If $n = 2$, i.e. we have surfaces, then the Riemannian metric $g$ is also known as the first fundamental form, and it is usually written as

$$g = E\, du^2 + 2F\, du\, dv + G\, dv^2.$$

Up to the symmetries, the only non-zero component of the curvature tensor is $R_{12,12}$, and using the definition of the scalar curvature, we find

$$R_{12,12} = \tfrac{1}{2} s (EG - F^2).$$
Thus $s/2$ is also the sectional curvature (there can only be one plane in the tangent space, so the sectional curvature is just a number). One can further check that

$$\frac{s}{2} = K = \frac{LN - M^2}{EG - F^2},$$

the Gaussian curvature. Thus, the full curvature tensor is determined by the Gaussian curvature. Also, $R_{12,12}$ is the determinant of the second fundamental form.
If n = 3, one can check that R(g) is determined by the Ricci curvature.
3 Geodesics
3.1 Definitions and basic properties
We will eventually want to talk about geodesics. However, the setup we need to
write down the definition of geodesics can be done in a much more general way,
and we will do that.
The general setting is that we have a vector bundle π : E → M.
Definition (Lift). Let $\pi: E \to M$ be a vector bundle with typical fiber $V$. Consider a curve $\gamma: (-\varepsilon, \varepsilon) \to M$. A lift of $\gamma$ is a map $\gamma_E: (-\varepsilon, \varepsilon) \to E$ such that $\pi \circ \gamma_E = \gamma$.
For $p \in M$, we write $E_p = \pi^{-1}(\{p\}) \cong V$ for the fiber above $p$. We can think of $E_p$ as the space of some "information" at $p$. For example, if $E = TM$, then the "information" is a tangent vector at $p$. In physics, the manifold $M$ might represent our universe, and a point in $E_p$ might be the value of the electromagnetic field at $p$.
Thus, given a path $\gamma$ in $M$, a lift corresponds to providing that piece of "information" at each point along the curve. For example, if $E = TM$, then we can canonically produce a lift of $\gamma$, given by taking the derivative of $\gamma$ at each point.
Locally, suppose we are in some coordinate neighbourhood $U \subseteq M$ such that $E$ is trivial on $U$. After picking a trivialization, we can write our lift as

$$\gamma_E(t) = (\gamma(t), a(t))$$

for some function $a: (-\varepsilon, \varepsilon) \to V$.
One thing we would want to do with such lifts is to differentiate them, and see how they change along the curve. When we have a section of $E$ on the whole of $M$ (or even just an open neighbourhood), rather than just a lift along a curve, the connection provides exactly the information needed to do so. It is not immediately obvious that the connection also allows us to differentiate lifts along curves, but it does.
Proposition. Let $\gamma: (-\varepsilon, \varepsilon) \to M$ be a curve. Then there is a uniquely determined operation $\frac{\nabla}{dt}$ from the space of all lifts of $\gamma$ to itself, satisfying the following conditions:
(i) For any $c, d \in \mathbb{R}$ and lifts $\tilde{\gamma}_E, \gamma_E$ of $\gamma$, we have

$$\frac{\nabla}{dt}(c \gamma_E + d \tilde{\gamma}_E) = c\, \frac{\nabla \gamma_E}{dt} + d\, \frac{\nabla \tilde{\gamma}_E}{dt}.$$

(ii) For any lift $\gamma_E$ of $\gamma$ and function $f: (-\varepsilon, \varepsilon) \to \mathbb{R}$, we have

$$\frac{\nabla}{dt}(f \gamma_E) = \frac{df}{dt}\, \gamma_E + f\, \frac{\nabla \gamma_E}{dt}.$$

(iii) If there is a local section $s$ of $E$ and a local vector field $V$ on $M$ such that

$$\gamma_E(t) = s(\gamma(t)), \qquad \dot{\gamma}(t) = V(\gamma(t)),$$

then we have

$$\frac{\nabla \gamma_E}{dt} = (\nabla_V s) \circ \gamma.$$

Locally, this is given by

$$\left(\frac{\nabla \gamma_E}{dt}\right)^i = \dot{a}^i + \Gamma^i_{jk}\, a^j \dot{x}^k.$$
The proof is straightforward — one just checks that the local formula works,
and the three properties force the operation to be locally given by that formula.
Definition (Covariant derivative). The uniquely defined operation in the proposition above is called the covariant derivative.
In some sense, lifts that have vanishing covariant derivative are “constant”
along the map.
Definition (Horizontal lift). Let $\nabla$ be a connection on $E$ with $\Gamma^i_{jk}(x)$ the coefficients in a local trivialization. We say a lift $\gamma_E$ is horizontal if

$$\frac{\nabla \gamma_E}{dt} = 0.$$

Since this is a linear first-order ODE, we know that for a fixed $\gamma$, given any initial $a(0) \in E_{\gamma(0)}$, there is a unique way to obtain a horizontal lift.
Definition (Parallel transport). Let $\gamma: [0, 1] \to M$ be a curve in $M$. Given any $a_0 \in E_{\gamma(0)}$, the unique horizontal lift of $\gamma$ with $\gamma_E(0) = (\gamma(0), a_0)$ is called the parallel transport of $a_0$ along $\gamma$. We sometimes also call $\gamma_E(1)$ the parallel transport.
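As a concrete illustration (not from the lectures), take $E = TM$ with the Levi-Civita connection on the round sphere $S^2$, and let $\gamma$ be a circle of constant colatitude $\theta_0$. The horizontal lift equation $\dot{a}^i + \Gamma^i_{jk} a^j \dot{x}^k = 0$ can be solved explicitly, exhibiting the familiar holonomy of parallel transport:

```latex
% Metric on S^2 in spherical coordinates (\theta, \varphi):
g = d\theta^2 + \sin^2\theta \, d\varphi^2, \qquad
\Gamma^\theta_{\varphi\varphi} = -\sin\theta\cos\theta, \quad
\Gamma^\varphi_{\theta\varphi} = \Gamma^\varphi_{\varphi\theta} = \cot\theta.
% Along \gamma(t) = (\theta_0, t), a horizontal lift
% a(t) = a^\theta \partial_\theta + a^\varphi \partial_\varphi satisfies
\dot{a}^\theta = \sin\theta_0 \cos\theta_0 \, a^\varphi, \qquad
\dot{a}^\varphi = -\cot\theta_0 \, a^\theta,
% so in an orthonormal frame the vector rotates at rate \cos\theta_0:
% after a full loop, parallel transport rotates T_{\gamma(0)}S^2
% by the angle 2\pi\cos\theta_0.
```

In particular, parallel transport around a closed loop need not be the identity; this failure is measured by the curvature, and is the starting point for the holonomy groups studied later in the course.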
Of course, we want to use this general theory to talk about the case where $M$ is a Riemannian manifold, $E = TM$ and $\nabla$ is the Levi-Civita connection of $g$. In this case, each curve $\gamma(t)$ has a canonical lift independent of the metric or connection, given simply by taking the derivative $\dot{\gamma}(t)$.
Definition (Geodesic). A curve $\gamma(t)$ on a Riemannian manifold $(M, g)$ is called a geodesic curve if its canonical lift is horizontal with respect to the Levi-Civita connection. In other words, we need

$$\frac{\nabla \dot{\gamma}}{dt} = 0.$$

In local coordinates, we write this condition as

$$\ddot{x}^i + \Gamma^i_{jk}\, \dot{x}^j \dot{x}^k = 0.$$
This time, we obtain a second-order ODE. So a geodesic is uniquely specified by the initial conditions $p = x(0)$ and $a = \dot{x}(0)$. We will denote the resulting geodesic as $\gamma_p(t, a)$, where $t$ is the time coordinate as usual.
Since we have a non-linear ODE, existence is no longer guaranteed for all time, but just for some interval $(-\varepsilon, \varepsilon)$. Of course, we still have uniqueness of solutions.
We now want to prove things about geodesics. To do so, we will need to apply some properties of the covariant derivative we just defined. Since we are lazy, we would like to reuse results we already know about the covariant derivative for vector fields. The trick is to notice that locally, we can always extend $\dot{\gamma}$ to a vector field.
Indeed, we work in some coordinate chart around $\gamma(0)$, and we wlog assume

$$\dot{\gamma}(0) = \frac{\partial}{\partial x^1}.$$
By the inverse function theorem, we note that $x^1(t)$ is invertible near 0, and we can write $t = t(x^1)$ for small $x^1$. Then in this neighbourhood of 0, we can view $x^k$ as a function of $x^1$ instead of $t$. Then we can define the vector field

$$\underline{\dot{\gamma}}(x^1, \cdots, x^k) = \dot{\gamma}(x^1, x^2(x^1), \cdots, x^k(x^1)).$$

By construction, this agrees with $\dot{\gamma}$ along the curve.
Using this notation, the geodesic equation can be written as

$$\nabla_{\underline{\dot{\gamma}}}\, \underline{\dot{\gamma}}\,\big|_{\gamma(t)} = 0,$$

where the $\nabla$ now refers to the covariant derivative of vector fields, i.e. the connection itself.
Using this, a lot of the desired properties of geodesics immediately follow from well-known properties of the covariant derivative. For example,
Proposition. If $\gamma$ is a geodesic, then $|\dot{\gamma}(t)|_g$ is constant.
Proof. We use the extension $\underline{\dot{\gamma}}$ around $p = \gamma(0)$, and stop writing the underlines. Then we have

$$\dot{\gamma}\big(g(\dot{\gamma}, \dot{\gamma})\big) = g(\nabla_{\dot{\gamma}} \dot{\gamma}, \dot{\gamma}) + g(\dot{\gamma}, \nabla_{\dot{\gamma}} \dot{\gamma}) = 0,$$

which is valid at each $q = \gamma(t)$ on the curve. But at each $q$, we have

$$\dot{\gamma}\big(g(\dot{\gamma}, \dot{\gamma})\big) = \dot{x}^k \frac{\partial}{\partial x^k} g(\dot{\gamma}, \dot{\gamma}) = \frac{d}{dt} |\dot{\gamma}(t)|^2_g$$

by the chain rule. So we are done.
At this point, it might be healthy to look at some examples of geodesics.
Example. In $\mathbb{R}^n$ with the Euclidean metric, we have $\Gamma^i_{jk} = 0$. So the geodesic equation is

$$\ddot{x}^k = 0.$$

So the geodesics are just straight lines.
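As another worked example (not from the lectures), consider the hyperbolic upper half-plane $H = \{(x, y) : y > 0\}$ with metric $g = (dx^2 + dy^2)/y^2$. Computing the Christoffel symbols from the metric and writing out the geodesic equations gives:

```latex
% Nonzero Christoffel symbols of g = (dx^2 + dy^2)/y^2:
\Gamma^x_{xy} = \Gamma^x_{yx} = -\frac{1}{y}, \qquad
\Gamma^y_{xx} = \frac{1}{y}, \qquad \Gamma^y_{yy} = -\frac{1}{y}.
% Geodesic equations \ddot{x}^i + \Gamma^i_{jk}\dot{x}^j\dot{x}^k = 0:
\ddot{x} - \frac{2}{y}\,\dot{x}\dot{y} = 0, \qquad
\ddot{y} + \frac{1}{y}\left(\dot{x}^2 - \dot{y}^2\right) = 0.
% Solutions: vertical half-lines, and semicircles meeting \{y = 0\} orthogonally.
```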
Example. On a sphere $S^n$ with the usual metric induced by the standard embedding $S^n \hookrightarrow \mathbb{R}^{n+1}$, the geodesics are great circles.
To see this, we may wlog $p = e_0$ and $a = e_1$, for a standard basis $\{e_i\}$ of $\mathbb{R}^{n+1}$. We can look at the map

$$\varphi: (x_0, \cdots, x_n) \mapsto (x_0, x_1, -x_2, \cdots, -x_n),$$

and it is clearly an isometry of the sphere. Therefore it preserves the Riemannian metric, and hence sends geodesics to geodesics. Since it also preserves $p$ and $a$, we know $\varphi(\gamma) = \gamma$ by uniqueness. So $\gamma$ must be contained in the great circle lying on the plane spanned by $e_0$ and $e_1$.
Lemma. Let $p \in M$, and $a \in T_pM$. As before, let $\gamma_p(t, a)$ be the geodesic with $\gamma(0) = p$ and $\dot{\gamma}(0) = a$. Then

$$\gamma_p(\lambda t, a) = \gamma_p(t, \lambda a),$$

and in particular this is a geodesic.
Proof. We apply the chain rule to get

$$\frac{d}{dt} \gamma(\lambda t, a) = \lambda \dot{\gamma}(\lambda t, a), \qquad \frac{d^2}{dt^2} \gamma(\lambda t, a) = \lambda^2 \ddot{\gamma}(\lambda t, a).$$

So $\gamma(\lambda t, a)$ satisfies the geodesic equations, and has initial velocity $\lambda a$. Then we are done by uniqueness of ODE solutions.
Thus, instead of considering $\gamma_p(t, a)$ for arbitrary $t$ and $a$, we can just fix $t = 1$, and look at the different values of $\gamma_p(1, a)$. By ODE theorems, we know this depends smoothly on $a$, and is defined on some open neighbourhood of $0 \in T_pM$.
Definition (Exponential map). Let $(M, g)$ be a Riemannian manifold, and $p \in M$. We define $\exp_p$ by

$$\exp_p(a) = \gamma(1, a) \in M$$

for $a \in T_pM$ whenever this is defined.
We know this function has domain at least some open ball around $0 \in T_pM$, and is smooth. Also, by construction, we have $\exp_p(0) = p$.
In fact, the exponential map gives us a chart around
p
locally, known as
geodesic local coordinates. To do so, it suffices to note the following rather trivial
proposition.
Proposition. We have

$$(d \exp_p)_0 = \operatorname{id}_{T_pM},$$

where we identify $T_0(T_pM) \cong T_pM$ in the natural way.
All this is saying is if you go in the direction of $a \in T_pM$, then you go in the direction of $a$.
Proof.

$$(d \exp_p)_0(v) = \left.\frac{d}{dt}\right|_{t=0} \exp_p(tv) = \left.\frac{d}{dt}\right|_{t=0} \gamma(1, tv) = \left.\frac{d}{dt}\right|_{t=0} \gamma(t, v) = v.$$
Corollary. $\exp_p$ maps an open ball $B(0, \delta) \subseteq T_pM$ to $U \subseteq M$ diffeomorphically for some $\delta > 0$.
Proof. By the inverse mapping theorem.
This tells us the inverse of the exponential map gives us a chart of $M$ around $p$. These coordinates are often known as geodesic local coordinates.
In these coordinates, the geodesics from $p$ have the very simple form

$$\gamma(t, a) = ta$$

for all $a \in T_pM$ and $t$ sufficiently small that this makes sense.
Corollary. For any point $p \in M$, there exists a local coordinate chart around $p$ such that
– The coordinates of $p$ are $(0, \cdots, 0)$.
– In local coordinates, the metric at $p$ is $g_{ij}(p) = \delta_{ij}$.
– We have $\Gamma^i_{jk}(p) = 0$.
Coordinates satisfying these properties are known as normal coordinates.
Proof. The geodesic local coordinates satisfy these properties, after identifying $T_pM$ isometrically with $(\mathbb{R}^n, \mathrm{eucl})$. For the last property, we note that the geodesic equations are given by

$$\ddot{x}^i + \Gamma^i_{jk}\, \dot{x}^k \dot{x}^j = 0.$$

But geodesics through the origin are given by straight lines. So we must have $\Gamma^i_{jk} = 0$.
Such coordinates will be useful later on for explicit calculations, since when-
ever we want to verify a coordinate-independent equation (which is essentially
all equations we care about), we can check it at each point, and then use normal
coordinates at that point to simplify calculations.
We again identify $(T_pN, g(p)) \cong (\mathbb{R}^n, \mathrm{eucl})$, and then we have a map

$$(r, v) \in (0, \delta) \times S^{n-1} \mapsto \exp_p(rv) \in M^n.$$

This chart is known as geodesic polar coordinates. For each fixed $r$, the image of this map is called a geodesic sphere of geodesic radius $r$, written $\Sigma_r$. This is an embedded submanifold of $M$.
Note that in geodesic local coordinates, the metric at $0 \in T_pN$ is given by the Euclidean metric. However, the metric at other points can be complicated. Fortunately, Gauss' lemma says it is not too complicated.
Theorem (Gauss' lemma). The geodesic spheres are perpendicular to their radii. More precisely, $\gamma_p(t, a)$ meets every $\Sigma_r$ orthogonally, whenever this makes sense. Thus we can write the metric in geodesic polars as

$$g = dr^2 + h(r, v),$$

where for each $r$, we have

$$h(r, v) = g|_{\Sigma_r}.$$

In matrix form, we have

$$g = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & h & \\ 0 & & & \end{pmatrix}.$$
The proof is not hard, but it involves a few subtle points.
Proof. We work in geodesic coordinates. It is clear that $g(\partial_r, \partial_r) = 1$.
Consider an arbitrary vector field $X = X(v)$ on $S^{n-1}$. This induces a vector field on some neighbourhood $B(0, \delta) \subseteq T_pM$ by

$$\tilde{X}(rv) = X(v).$$

Pick a direction $v \in T_pM$, and consider the unit speed geodesic $\gamma$ in the direction of $v$. We define

$$G(r) = g(\tilde{X}(rv), \dot{\gamma}(r)) = g(\tilde{X}, \dot{\gamma}(r)).$$

We begin by noticing that

$$\nabla_{\partial_r} \tilde{X} - \nabla_{\tilde{X}} \partial_r = [\partial_r, \tilde{X}] = 0.$$

Also, we have

$$\frac{d}{dr} G(r) = g(\nabla_{\dot{\gamma}} \tilde{X}, \dot{\gamma}) + g(\tilde{X}, \nabla_{\dot{\gamma}} \dot{\gamma}).$$

We know the second term vanishes, since $\gamma$ is a geodesic. Noting that $\dot{\gamma} = \frac{\partial}{\partial r}$, we know the first term is equal to

$$g(\nabla_{\tilde{X}} \partial_r, \partial_r) = \frac{1}{2}\Big(g(\nabla_{\tilde{X}} \partial_r, \partial_r) + g(\partial_r, \nabla_{\tilde{X}} \partial_r)\Big) = \frac{1}{2} \tilde{X}\big(g(\partial_r, \partial_r)\big) = 0,$$

since we know that $g(\partial_r, \partial_r) = 1$ constantly.
Thus, we know $G(r)$ is constant. But $G(0) = 0$ since the metric at 0 is the Euclidean metric. So $G$ vanishes everywhere, and so $\partial_r$ is perpendicular to $\Sigma_r$.
Corollary. Let $a, w \in T_pM$. Then

$$g\big((d \exp_p)_a a, (d \exp_p)_a w\big) = g(a, w)$$

whenever $a$ lives in the domain of the geodesic local neighbourhood.
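For example (a computation not carried out in the notes), on the round sphere $S^2$ with $p$ the north pole, geodesic polar coordinates are the usual spherical coordinates, and the metric takes exactly the form promised by Gauss' lemma:

```latex
g = dr^2 + \sin^2 r \, d\theta^2, \qquad h(r, \theta) = \sin^2 r \, d\theta^2.
% Compare: Euclidean \mathbb{R}^2 has g = dr^2 + r^2 \, d\theta^2, and
% hyperbolic space has g = dr^2 + \sinh^2 r \, d\theta^2. In each case
% there is no dr\,d\theta cross term, as Gauss' lemma requires.
```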
3.2 Jacobi fields
Fix a Riemannian manifold $M$. Let's imagine that we have a "manifold" of all smooth curves on $M$. Then this "manifold" has a "tangent space". Morally, given a curve $\gamma$, a "tangent vector" at $\gamma$ in the space of curves should correspond to providing a tangent vector (in $M$) at each point along $\gamma$.
Since we are interested in the geodesics only, we consider the "submanifold" of geodesic curves. What are the corresponding "tangent vectors" living in this "submanifold"?
In rather more concrete terms, suppose $f_s(t) = f(t, s)$ is a family of geodesics in $M$ indexed by $s \in (-\varepsilon, \varepsilon)$. What do we know about $\left.\frac{\partial f}{\partial s}\right|_{s=0}$, a vector field along $f_0$?
We begin by considering such families that fix the starting point $f(0, s)$, and then derive some properties of $\frac{\partial f}{\partial s}$ in these special cases. We will then define a Jacobi field to be any vector field along a curve that satisfies these properties. We will then prove that these are exactly the variations of geodesics.
Suppose $f(t, s)$ is a family of geodesics such that $f(0, s) = p$ for all $s$. Then in geodesic local coordinates, each geodesic in the family is a straight ray from the origin. For a fixed $p$, such a family is uniquely determined by a function

$$a(s): (-\varepsilon, \varepsilon) \to T_pM$$

such that

$$f(t, s) = \exp_p(t\, a(s)).$$

The initial conditions of this variation can be given by $a(0) = a$ and

$$\dot{a}(0) = w \in T_a(T_pM) \cong T_pM.$$
We would like to know the "variation field" of $\gamma(t) = f(t, 0) = \gamma_p(t, a)$ this induces. In other words, we want to find $\frac{\partial f}{\partial s}(t, 0)$. This is not hard. It is just given by

$$\frac{\partial f}{\partial s}(t, 0) = (d \exp_p)_{ta}(tw).$$
As before, to prove something about $f$, we want to make good use of the properties of $\nabla$. Locally, we extend the vectors $\frac{\partial f}{\partial s}$ and $\frac{\partial f}{\partial t}$ to vector fields $\partial_t$ and $\partial_s$. Then in this set up, we have

$$\dot{\gamma} = \frac{\partial f}{\partial t} = \partial_t.$$

Note that in $\frac{\partial f}{\partial t}$, we are differentiating $f$ with respect to $t$, whereas the $\partial_t$ on the far right is just a formal expression.
By the geodesic equation, we have

$$0 = \frac{\nabla}{dt} \dot{\gamma} = \nabla_{\partial_t} \partial_t.$$

Therefore, using the definition of the curvature tensor $R$, we obtain

$$0 = \nabla_{\partial_s} \nabla_{\partial_t} \partial_t = \nabla_{\partial_t} \nabla_{\partial_s} \partial_t - R(\partial_s, \partial_t) \partial_t = \nabla_{\partial_t} \nabla_{\partial_s} \partial_t + R(\partial_t, \partial_s) \partial_t.$$

We let this act on the function $f$. So we get

$$0 = \frac{\nabla}{dt} \frac{\nabla}{ds} \frac{\partial f}{\partial t} + R(\partial_t, \partial_s) \frac{\partial f}{\partial t}.$$
We write

$$J(t) = \frac{\partial f}{\partial s}(t, 0),$$

which is a vector field along the geodesic $\gamma$. Using the fact that

$$\frac{\nabla}{ds} \frac{\partial f}{\partial t} = \frac{\nabla}{dt} \frac{\partial f}{\partial s},$$

we find that $J$ must satisfy the ordinary differential equation

$$\frac{\nabla^2}{dt^2} J + R(\dot{\gamma}, J) \dot{\gamma} = 0.$$

This is a linear second-order ordinary differential equation.
Definition (Jacobi field). Let $\gamma: [0, L] \to M$ be a geodesic. A Jacobi field is a vector field $J$ along $\gamma$ that is a solution of the Jacobi equation on $[0, L]$:

$$\frac{\nabla^2}{dt^2} J + R(\dot{\gamma}, J) \dot{\gamma} = 0. \tag{$\dagger$}$$

We now embark on a rather technical journey to prove results about Jacobi fields. Observe that $\dot{\gamma}(t)$ and $t \dot{\gamma}(t)$ both satisfy this equation, rather trivially.
Theorem. Let $\gamma: [0, L] \to M$ be a geodesic in a Riemannian manifold $(M, g)$. Then
(i) For any $u, v \in T_{\gamma(0)}M$, there is a unique Jacobi field $J$ along $\gamma$ with

$$J(0) = u, \qquad \frac{\nabla J}{dt}(0) = v.$$

If

$$J(0) = 0, \qquad \frac{\nabla J}{dt}(0) = k \dot{\gamma}(0),$$

then $J(t) = kt \dot{\gamma}(t)$. Moreover, if both $J(0)$ and $\frac{\nabla J}{dt}(0)$ are orthogonal to $\dot{\gamma}(0)$, then $J(t)$ is perpendicular to $\dot{\gamma}(t)$ for all $t \in [0, L]$.
In particular, the vector space of all Jacobi fields along $\gamma$ has dimension $2n$, where $n = \dim M$.
The subspace of those Jacobi fields pointwise perpendicular to $\dot{\gamma}(t)$ has dimension $2(n-1)$.
(ii) $J(t)$ is independent of the parametrization of $\dot{\gamma}(t)$. Explicitly, if $\tilde{\gamma}(t) = \gamma(\lambda t)$, then $\tilde{J}$, with the same initial conditions as $J$, is given by

$$\tilde{J}(\tilde{\gamma}(t)) = J(\gamma(\lambda t)).$$

This is the kind of theorem whose statement is longer than the proof.
Proof.

(i) Pick an orthonormal basis e_1, ..., e_n of T_pM, where p = γ(0), and let
X_i(t) be the parallel transport of e_i along γ. Since parallel transport via
the Levi-Civita connection preserves the inner product, the X_i(t) remain an
orthonormal basis. We take e_1 to be parallel to ˙γ(0). By definition, we
have

  X_i(0) = e_i,  ∇X_i/dt = 0.

Now we can write

  J = Σ_{i=1}^n y_i X_i.

Then taking g(X_i, ·) of (†), we find that

  ÿ_i + Σ_{j=2}^n R(˙γ, X_j, ˙γ, X_i) y_j = 0.

Then the claims of the theorem follow from the standard existence and
uniqueness of solutions of differential equations.

In particular, for the orthogonality part, we know that J(0) and ∇J/dt (0)
being perpendicular to ˙γ is equivalent to y_1(0) = ẏ_1(0) = 0, and then
Jacobi's equation gives ÿ_1(t) = 0.

(ii) This follows from uniqueness.
Our discussion of Jacobi fields so far has been rather theoretical. Now that
we have an explicit equation for the Jacobi field, we can actually produce some
of them. We will look at the case where we have constant sectional curvature.
Example. Suppose the sectional curvature is constantly K ∈ ℝ, with
dim M ≥ 3. We wlog |˙γ| = 1. We let J along γ be a Jacobi field, normal to
˙γ. Then for any vector field T along γ, we have

  ⟨R(˙γ, J)˙γ, T⟩ = K(g(˙γ, ˙γ)g(J, T) − g(˙γ, J)g(˙γ, T)) = Kg(J, T).

Since this is true for all T, we know

  R(˙γ, J)˙γ = KJ.

Then the Jacobi equation becomes

  ∇²J/dt² + KJ = 0.

So we can immediately write down a collection of solutions

  J(t) = (sin(t√K)/√K) X_i(t)     if K > 0,
  J(t) = t X_i(t)                 if K = 0,
  J(t) = (sinh(t√(−K))/√(−K)) X_i(t)  if K < 0,

for i = 2, ..., n, and these have initial conditions

  J(0) = 0,  ∇J/dt (0) = e_i.

Note that these Jacobi fields vanish at t = 0.
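Since the constant-curvature Jacobi equation is just the scalar ODE
ÿ + Ky = 0 in each parallel frame component, the closed forms above can be
cross-checked numerically. This is an illustrative sketch only; the function
name jacobi_norm and the RK4 integrator are ours, not part of the notes.

```python
import math

def jacobi_norm(K, t_end, n=10000):
    """Integrate y'' + K*y = 0 with y(0) = 0, y'(0) = 1 by classical RK4;
    y is the coefficient of the parallel field X_i in the constant-curvature
    example, so y(t) should match sin(t√K)/√K, t, or sinh(t√-K)/√-K."""
    h = t_end / n
    y, v = 0.0, 1.0                      # y and its derivative y'
    f = lambda y, v: (v, -K * y)         # first-order system (y, v)' = (v, -Ky)
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

print(jacobi_norm(1.0, 1.0))    # ≈ sin(1) ≈ 0.84147  (K > 0)
print(jacobi_norm(0.0, 1.0))    # = 1                  (K = 0)
print(jacobi_norm(-1.0, 1.0))   # ≈ sinh(1) ≈ 1.17520 (K < 0)
```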
We can now deliver our promise, proving that Jacobi fields are precisely the
variations of geodesics.
Proposition. Let γ: [a, b] → M be a geodesic, and f(t, s) a variation of
γ(t) = f(t, 0) such that f(t, s) = γ_s(t) is a geodesic for all |s| small.
Then

  J(t) = ∂f/∂s (t, 0)

is a Jacobi field along γ.

Conversely, every Jacobi field along γ can be obtained this way for an
appropriate function f.
Proof. The first part is exactly the computation we had at the beginning of
the section, but for the benefit of the reader, we will reproduce it again.
We have

  ∇²J/dt² = ∇_t ∇_t ∂f/∂s = ∇_t ∇_s ∂f/∂t = ∇_s ∇_t ∂f/∂t − R(∂_t, ∂_s)˙γ_s.

We notice that the first term vanishes, because ∇_t ∂f/∂t = 0 by the
definition of a geodesic. So we find

  ∇²J/dt² = −R(˙γ, J)˙γ,

which is the Jacobi equation.
The converse requires a bit more work. We will write J′(0) for the covariant
derivative of J along γ. Given a Jacobi field J along a geodesic γ(t) for
t ∈ [0, L], we let γ̃ be another geodesic such that

  γ̃(0) = γ(0),  ˙γ̃(0) = J(0).

We take parallel vector fields X_0, X_1 along γ̃ such that

  X_0(0) = ˙γ(0),  X_1(0) = J′(0).

We put X(s) = X_0(s) + sX_1(s), and we put

  f(t, s) = exp_{γ̃(s)}(tX(s)).

In local coordinates, for each fixed s, we find

  f(t, s) = γ̃(s) + tX(s) + O(t²)

as t → 0. Then we define

  γ_s(t) = f(t, s)

whenever this makes sense. This depends smoothly on s, and the previous
arguments say we get a Jacobi field

  Ĵ(t) = ∂f/∂s (t, 0).

We now want to check that Ĵ = J; then we are done. To do so, we have to
check the initial conditions. We have

  Ĵ(0) = ∂f/∂s (0, 0) = dγ̃/ds (0) = J(0),

and also

  Ĵ′(0) = ∇/dt ∂f/∂s (0, 0) = ∇/ds ∂f/∂t (0, 0) = ∇X/ds (0) = X_1(0) = J′(0).

So we have Ĵ = J.
Corollary. Every Jacobi field J along a geodesic γ with J(0) = 0 is given by

  J(t) = (d exp_p)_{t˙γ(0)}(tJ′(0))

for all t ∈ [0, L].

This is just a reiteration of the fact that if we pull back to the geodesic
local coordinates, then the variation must take this form. But this corollary
is stronger, in the sense that it holds even when we leave the geodesic local
coordinates (i.e. when exp_p no longer gives a chart).
Proof. Write ˙γ(0) = a and J′(0) = w. By the above, we can construct the
variation by

  f(t, s) = exp_p(t(a + sw)).

Then

  (d exp_p)_{t(a+sw)}(tw) = ∂f/∂s (t, s),

which is just an application of the chain rule. Putting s = 0 gives the
result.
It can be shown that in the situation of the corollary, if a ⊥ w and
|a| = |w| = 1, then

  |J(t)| = t − (1/3!) K(σ) t³ + o(t³)

as t → 0, where σ is the plane spanned by a and w.
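This expansion can be sanity-checked numerically on the round 2-sphere,
where K(σ) = 1 for every plane and, by the earlier example, the normal
Jacobi field with J(0) = 0 and |J′(0)| = 1 has |J(t)| = sin t. This is a
sketch of ours, not part of the notes' formal development.

```python
import math

# On S^2, |J(t)| = sin t, so the claimed expansion
# |J(t)| = t - K(σ) t^3/3! + o(t^3) says that
# sin t - (t - t^3/6) should shrink like t^5.
K = 1.0
for t in (0.1, 0.05, 0.025):
    exact = math.sin(t)              # |J(t)| on the round sphere
    cubic = t - K * t**3 / 6.0       # third-order approximation
    print(t, exact - cubic)          # remainder, of order t^5/120
```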
3.3 Further properties of geodesics
We can now use Jacobi fields to prove interesting things. We now revisit the
Gauss lemma, and deduce a stronger version.
Lemma (Gauss’ lemma). Let a, w ∈ T_pM, and let

  γ = γ_p(t, a) = exp_p(ta)

be a geodesic. Then

  g_{γ(t)}((d exp_p)_{ta} a, (d exp_p)_{ta} w) = g_{γ(0)}(a, w).

In particular, γ is orthogonal to exp_p({v ∈ T_pM : |v| = r}). Note that the
latter need not be a submanifold.

This is an improvement of the previous version, which required us to live in
the geodesic local coordinates.
Proof. We fix any r > 0, and consider the Jacobi field J satisfying

  J(0) = 0,  J′(0) = w/r.

Then by the corollary, we know the Jacobi field is

  J(t) = (d exp_p)_{ta}(tw/r).

We may write

  w/r = λa + u,

with a ⊥ u. Then since Jacobi fields depend linearly on the initial
conditions, we write

  J(t) = λt˙γ(t) + J_n(t)

for a Jacobi field J_n that is a normal vector field along γ. So we have

  g(J(r), ˙γ(r)) = λr|˙γ(r)|².

But we also have

  g(w, a) = g(λra + ru, a) = λr|a|² = λr|˙γ(0)|² = λr|˙γ(r)|².

So g(J(r), ˙γ(r)) = g(w, a). Now we use the fact that

  J(r) = (d exp_p)_{ra}(w),  ˙γ(r) = (d exp_p)_{ra}(a),

and we are done.
Corollary (Local minimizing of length). Let a ∈ T_pM. We define ϕ(t) = ta,
and let ψ(t) be a piecewise C¹ curve in T_pM for t ∈ [0, 1] such that

  ψ(0) = 0,  ψ(1) = a.

Then

  length(exp_p ∘ ψ) ≥ length(exp_p ∘ ϕ) = |a|.
It is important to interpret this corollary precisely. It only applies to
curves with the same end point in T_pM. If we have two curves in T_pM whose
end points have the same image in M, then the result need not hold (the
torus would be a counterexample).
Proof. We may of course assume that ψ never hits 0 again after t = 0. We
write

  ψ(t) = ρ(t)u(t),

where ρ(t) ≥ 0 and |u(t)| = 1. Then

  ψ′ = ρ′u + ρu′.

Then using the extended Gauss lemma, and the general fact that if u(t) is a
unit vector for all t, then u · u′ = ½(u · u)′ = 0, we have

  |d/dt (exp_p ∘ ψ)(t)|² = |(d exp_p)_{ψ(t)} ψ′(t)|²
    = ρ′(t)² + 2g(ρ′(t)u(t), ρ(t)u′(t)) + ρ(t)² |(d exp_p)_{ψ(t)} u′(t)|²
    = ρ′(t)² + ρ(t)² |(d exp_p)_{ψ(t)} u′(t)|²
    ≥ ρ′(t)².

Thus we have

  length(exp_p ∘ ψ) ≥ ∫₀¹ ρ′(t) dt = ρ(1) − ρ(0) = |a|.
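On the unit sphere the exponential map has the closed form
exp_p(v) = cos|v|·p + sin|v|·v/|v|, so the corollary can be observed
numerically: the image of the radial path has length |a|, while any
competing path in T_pM from 0 to a maps to a longer curve. A minimal
sketch; all function names here (exp_p, curve_length) are ours.

```python
import math

def exp_p(v):
    """Exponential map of the unit 2-sphere at p = (0, 0, 1), with
    T_pM identified with R^2 spanned by e1, e2."""
    r = math.hypot(v[0], v[1])
    if r < 1e-15:
        return (0.0, 0.0, 1.0)
    s = math.sin(r) / r
    return (s * v[0], s * v[1], math.cos(r))

def curve_length(psi, n=4000):
    """Polygonal approximation to length(exp_p ∘ psi), psi : [0,1] -> T_pM."""
    total = 0.0
    prev = exp_p(psi(0.0))
    for i in range(1, n + 1):
        cur = exp_p(psi(i / n))
        total += math.dist(prev, cur)
        prev = cur
    return total

a = (1.0, 0.0)                                   # |a| = 1 < π
radial = lambda t: (t * a[0], t * a[1])          # ϕ(t) = ta
wiggly = lambda t: (t * a[0], 0.3 * math.sin(math.pi * t))  # same endpoints

print(curve_length(radial))   # ≈ |a| = 1
print(curve_length(wiggly))   # strictly larger, as the corollary predicts
```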
Notation. We write Ω(p, q) for the set of all piecewise C¹ curves from p to
q.

We now wish to define a metric on M, in the sense of metric spaces.

Definition (Distance). Suppose M is connected, which is the same as it being
path connected. Let p, q ∈ M. We define

  d(p, q) = inf_{ξ∈Ω(p,q)} length(ξ).

To see this is indeed a metric, note that all the axioms of a metric space
are obvious, apart from the requirement that d(p, q) > 0 when p ≠ q.
Theorem. Let p ∈ M, and let ε be such that exp_p|_{B(0,ε)} is a
diffeomorphism onto its image, and let U be the image. Then

– For any q ∈ U, there is a unique geodesic γ ∈ Ω(p, q) with ℓ(γ) < ε.
Moreover, ℓ(γ) = d(p, q), and γ is the unique curve that satisfies this
property.

– For any point q ∈ M with d(p, q) < ε, we have q ∈ U.

– If q ∈ M is any point, and γ ∈ Ω(p, q) has ℓ(γ) = d(p, q) < ε, then γ is a
geodesic.
Proof. Let q = exp_p(a). Then the path γ(t) = exp_p(ta) is a geodesic from p
to q of length |a| = r < ε. This is clearly the only such geodesic, since
exp_p|_{B(0,ε)} is a diffeomorphism.

Given any other path γ̃ ∈ Ω(p, q), we want to show ℓ(γ̃) ≥ ℓ(γ). We let

  τ = sup{t ∈ [0, 1] : γ̃([0, t]) ⊆ exp_p(B(0, r))}.

Note that if τ ≠ 1, then we must have γ̃(τ) ∈ Σ_r, the geodesic sphere of
radius r, otherwise we can continue extending. On the other hand, if τ = 1,
then we certainly have γ̃(τ) ∈ Σ_r, since γ̃(τ) = q. Then by local
minimizing of length, we have

  ℓ(γ̃) ≥ ℓ(γ̃|_{[0,τ]}) ≥ r.

Note that we can always lift γ̃|_{[0,τ]} to a curve from 0 to a in T_pM,
since exp_p is a diffeomorphism in B(0, ε).
By looking at the proof of the local minimizing of length, and using the
same notation, we know that we have equality iff τ = 1 and

  ρ(t)² |(d exp_p)_{ψ(t)} u′(t)|² = 0

for all t. Since d exp_p is regular, this requires u′(t) = 0 for all t
(since ρ(t) ≠ 0 when t ≠ 0, or else we can remove the loop to get a shorter
curve). This implies γ̃ lifts to a straight line in T_pM, i.e. is a
geodesic.
Now given any q ∈ M with r = d(p, q) < ε, we pick r′ ∈ [r, ε) and a path
γ ∈ Ω(p, q) such that ℓ(γ) = r′. We again let

  τ = sup{t ∈ [0, 1] : γ([0, t]) ⊆ exp_p(B(0, r′))}.

If τ ≠ 1, then we must have γ(τ) ∈ Σ_{r′}, but lifting to T_pM, this
contradicts the local minimizing of length.
The last part is an immediate consequence of the previous two.
Corollary. The distance d on a Riemannian manifold is a metric, and induces
the same topology on M as the C^∞ structure.
Definition (Minimal geodesic). A minimal geodesic is a curve γ: [0, 1] → M
such that

  d(γ(0), γ(1)) = ℓ(γ).
One would certainly want a minimal geodesic to be an actual geodesic. This
is an easy consequence of what we’ve got so far, using the observation that a
sub-curve of a minimizing geodesic is still minimizing.
Corollary. Let γ: [0, 1] → M be a piecewise C¹ minimal geodesic with
constant speed. Then γ is in fact a geodesic, and is in particular C^∞.

Proof. We wlog γ is unit speed. Let t ∈ [0, 1], and pick ε > 0 such that
exp_p|_{B(0,ε)} is a diffeomorphism. Then by the theorem, γ|_{[t, t+½ε]} is
a geodesic. So γ is C^∞ on (t, t + ½ε), and satisfies the geodesic equations
there.

Since we can pick ε continuously with respect to t by ODE theorems, any
t ∈ (0, 1) lies in one such neighbourhood. So γ is a geodesic.
While it is not true that geodesics are always minimal geodesics, this is locally
true:
Corollary. Let γ: [0, 1] → M be a C² curve with |˙γ| constant. Then γ is a
geodesic iff it is locally a minimal geodesic, i.e. for any t ∈ [0, 1),
there exists δ > 0 such that

  d(γ(t), γ(t + δ)) = ℓ(γ|_{[t,t+δ]}).

Proof. This is just carefully applying the previous theorem without getting
confused.

To prove ⇒, suppose γ is a geodesic, and t ∈ [0, 1). We wlog γ is unit
speed. Then pick U and ε as in the previous theorem, and pick δ = ½ε. Then
γ|_{[t,t+δ]} is a geodesic of length < ε between γ(t) and γ(t + δ), and
hence must have minimal length.

To prove the converse, we note that for each t, the hypothesis tells us
γ|_{[t,t+δ]} is a minimizing geodesic, and hence a geodesic, by the previous
corollary. By continuity, γ must satisfy the geodesic equation at t. Since t
is arbitrary, γ is a geodesic.
There is another sense in which geodesics are locally length minimizing.
Instead of chopping up a path, we can say it is minimal “locally” in the space
Ω(p, q). To do so, we need to give Ω(p, q) a topology, and we pick the topology
of uniform convergence.
Theorem. Let γ(t) = exp_p(ta) be a geodesic, for t ∈ [0, 1], and let
q = γ(1). Assume ta is a regular point for exp_p for all t ∈ [0, 1]. Then
there exists a neighbourhood of γ in Ω(p, q) such that for all ψ in this
neighbourhood, ℓ(ψ) ≥ ℓ(γ), with equality iff ψ = γ up to reparametrization.
Before we prove the result, we first look at why the two conditions are
necessary. To see the necessity of ta being regular, we can consider the
sphere and two antipodal points p and q. Then while the geodesic between
them does minimize distance, it does not do so strictly.

We also do not guarantee global minimization of length. For example, we can
consider the torus

  Tⁿ = ℝⁿ/ℤⁿ.

This has a flat metric from ℝⁿ, and the derivative of the exponential map is
the “identity” on ℝⁿ at all points. So the geodesics are the straight lines
in ℝⁿ. Now consider any two p, q ∈ Tⁿ; then there are infinitely many
geodesics joining them, but typically, only one of them is the shortest.
Proof. The idea of the proof is that if ψ is any curve close to γ, then we
can use the regularity condition to lift the curve back up to T_pM, and then
apply our previous result.

Write ϕ(t) = ta ∈ T_pM. Then by the regularity assumption, for all
t ∈ [0, 1], we know exp_p is a diffeomorphism of some neighbourhood W(t) of
ϕ(t) = ta ∈ T_pM onto its image. By compactness, we can cover [0, 1] by
finitely many such neighbourhoods, say W(t_1), ..., W(t_k). We write
W_i = W(t_i), and we wlog assume

  0 = t_0 < t_1 < ··· < t_k = 1.

By cutting things up, we may assume

  γ([t_i, t_{i+1}]) ⊆ exp_p(W_i).

We let

  U = ⋃ exp_p(W_i).

Again by compactness, there is some ε > 0 such that for all
t ∈ [t_i, t_{i+1}], we have B(γ(t), ε) ⊆ exp_p(W_i).

Now consider any curve ψ of uniform distance less than ε away from γ. Then
ψ([t_i, t_{i+1}]) ⊆ exp_p(W_i). So we can lift it up to T_pM, and the end
point of the lift is a. So we are done by local minimization of length.

Note that the tricky part of doing the proof is to make sure the lift of ψ
has the same end point as γ in T_pM, which is why we needed to do it
neighbourhood by neighbourhood.
3.4 Completeness and the Hopf–Rinow theorem
There are some natural questions we can ask about geodesics. For example, we
might want to know if geodesics can be extended to exist for all time. We might
also be interested if distances can always be realized by geodesics. It turns out
these questions have the same answer.
Definition (Geodesically complete). We say a manifold (M, g) is geodesically
complete if each geodesic extends for all time. In other words, for all
p ∈ M, exp_p is defined on all of T_pM.
Example. The upper half plane

  H² = {(x, y) : y > 0}

under the induced Euclidean metric is not geodesically complete. However, H²
and ℝ² are diffeomorphic, but ℝ² is geodesically complete.
The first theorem we will prove is the following:
Theorem. Let (M, g) be geodesically complete. Then any two points can be
connected by a minimal geodesic.

In fact, we will prove something stronger: let p ∈ M, and suppose exp_p is
defined on all of T_pM. Then for all q ∈ M, there is a minimal geodesic
between them.
To prove this, we need a lemma.

Lemma. Let p, q ∈ M. Let

  S_δ = {x ∈ M : d(x, p) = δ}.

Then for all sufficiently small δ, there exists p₀ ∈ S_δ such that

  d(p, p₀) + d(p₀, q) = d(p, q).
Proof. For δ > 0 small, we know S_δ = Σ_δ is a geodesic sphere about p, and
is compact. Moreover, d(·, q) is a continuous function. So there exists some
p₀ ∈ Σ_δ that minimizes d(·, q).

Consider an arbitrary γ ∈ Ω(p, q). For the sake of sanity, we assume
δ < d(p, q). Then there is some t such that γ(t) ∈ Σ_δ, and

  ℓ(γ) ≥ d(p, γ(t)) + d(γ(t), q) ≥ d(p, p₀) + d(p₀, q).

So we know

  d(p, q) ≥ d(p, p₀) + d(p₀, q).

The triangle inequality gives the opposite direction. So we must have
equality.
We can now prove the theorem.

Proof of theorem. We know exp_p is defined on all of T_pM. Let q ∈ M. We
want a minimal geodesic in Ω(p, q). By the lemma, there is some δ > 0 and p₀
such that

  d(p, p₀) = δ,  d(p, p₀) + d(p₀, q) = d(p, q).

Also, there is some v ∈ T_pM such that exp_p v = p₀. We let

  γ_p(t) = exp_p(t v/|v|).

We let

  I = {t ∈ ℝ : d(q, γ_p(t)) + t = d(p, q)}.

Then we know

(i) δ ∈ I;

(ii) I is closed by continuity.

Let

  T = sup(I ∩ [0, d(p, q)]).

Since I is closed, this is in fact a maximum. So T ∈ I. We claim that
T = d(p, q). If so, then γ_p ∈ Ω(p, q) is the desired minimal geodesic, and
we are done.
Suppose this were not true. Then T < d(p, q). We apply the lemma to
p̃ = γ_p(T), with q as before. Then we can find ε > 0 and some p₁ ∈ M with
the property that

  d(p₁, q) = d(γ_p(T), q) − d(γ_p(T), p₁)
           = d(γ_p(T), q) − ε
           = d(p, q) − T − ε.

Hence we have

  d(p, p₁) ≥ d(p, q) − d(q, p₁) = T + ε.

Let γ₁ be the radial (hence minimal) geodesic from γ_p(T) to p₁. Now we know

  ℓ(γ_p|_{[0,T]}) + ℓ(γ₁) = T + ε.

So γ_p|_{[0,T]} concatenated with γ₁ is a length-minimizing curve from p to
p₁, and is hence a geodesic. So in fact p₁ lies on γ_p, say p₁ = γ_p(T + s)
for some s. Then T + s ∈ I, which is a contradiction. So we must have
T = d(p, q), and hence

  d(q, γ_p(T)) + T = d(p, q),

so d(q, γ_p(T)) = 0, i.e. q = γ_p(T).
Corollary (Hopf–Rinow theorem). For a connected Riemannian manifold (M, g),
the following are equivalent:

(i) (M, g) is geodesically complete.

(ii) For all p ∈ M, exp_p is defined on all of T_pM.

(iii) For some p ∈ M, exp_p is defined on all of T_pM.

(iv) Every closed and bounded subset of (M, d) is compact.

(v) (M, d) is complete as a metric space.

Proof. (i) and (ii) are equivalent by definition. (ii) ⇒ (iii) is clear, and
we proved (iii) ⇒ (i).
– (iii) ⇒ (iv): Let K ⊆ M be closed and bounded. Then by boundedness, K is
contained in exp_p(B(0, R)) for some R > 0. Let K′ be the pre-image of K
under exp_p inside the closed ball of radius R. Then it is a closed and
bounded subset of ℝⁿ, hence compact. Then K is the continuous image of a
compact set, hence compact.

– (iv) ⇒ (v): This is a general topological fact.

– (v) ⇒ (i): Let γ(t): I → M be a geodesic, where I ⊆ ℝ. We wlog |˙γ| ≡ 1.
Suppose I ≠ ℝ. We wlog sup I = a < ∞. Then lim_{t→a} γ(t) exists by
completeness, and hence γ(a) exists. Since geodesics are locally defined
near a, we can pick a geodesic in the direction of lim_{t→a} γ′(t). So we
can extend γ further, which is a contradiction.
3.5 Variations of arc length and energy
This section is mostly a huge computation. As we previously saw, geodesics are
locally length-minimizing, and we shall see that another quantity, namely the
energy is also a useful thing to consider, as minimizing the energy also forces
the parametrization to be constant speed.
To make good use of these properties of geodesics, it is helpful to compute
explicitly expressions for how length and energy change along variations. The
computations are largely uninteresting, but it will pay off.
Definition (Energy). The energy function E: Ω(p, q) → ℝ is given by

  E(γ) = ½ ∫₀^T |˙γ|² dt,

where γ: [0, T] → M.
Recall that Ω(p, q) is defined as the space of piecewise C¹ curves. Often,
we will make the simplifying assumption that all curves are in fact C¹. It
doesn’t really matter.
Note that the length of a curve is independent of parametrization. Thus, if
we are interested in critical points, then the critical points cannot
possibly be isolated, as we can just re-parametrize to get a nearby path
with the same length. On the other hand, the energy E does depend on
parametrization. This does have isolated critical points, which is
technically very convenient.
Proposition. Let γ₀: [0, T] → M be a path from p to q such that for all
γ ∈ Ω(p, q) with γ: [0, T] → M, we have E(γ) ≥ E(γ₀). Then γ₀ must be a
geodesic.

Recall that we already had such a result for length instead of energy. The
proof is just an application of Cauchy–Schwarz.
Proof. By the Cauchy–Schwarz inequality, we have

  T ∫₀^T |˙γ|² dt ≥ ( ∫₀^T |˙γ(t)| dt )²

with equality iff |˙γ| is constant. In other words,

  E(γ) ≥ ℓ(γ)²/(2T).

So we know that if γ₀ minimizes energy, then it must be constant speed. Now
given any γ, if we just care about its length, then we may wlog it is
constant speed, and then

  ℓ(γ) = √(2E(γ)T) ≥ √(2E(γ₀)T) = ℓ(γ₀).

So γ₀ minimizes length, and thus γ₀ is a geodesic.
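The inequality E(γ) ≥ ℓ(γ)²/(2T), with equality exactly for constant-speed
curves, is easy to observe numerically for plane curves. A minimal sketch;
the helper length_and_energy is ours, not from the notes.

```python
import math

def length_and_energy(gamma, T=1.0, n=20000):
    """Approximate ℓ(γ) = ∫|γ'| dt and E(γ) = ½∫|γ'|² dt by finite
    differences, for a plane curve gamma : [0, T] -> R²."""
    h = T / n
    ell = E = 0.0
    for i in range(n):
        p, q = gamma(i * h), gamma((i + 1) * h)
        speed = math.dist(p, q) / h          # |γ'| on this sub-interval
        ell += speed * h
        E += 0.5 * speed * speed * h
    return ell, E

# Non-constant speed: strict inequality E > ℓ²/(2T).
ell, E = length_and_energy(lambda t: (t, t * t))
print(E > ell**2 / 2)                 # True

# Constant speed (a straight line): equality E = ℓ²/(2T).
ell2, E2 = length_and_energy(lambda t: (t, 2 * t))
print(abs(E2 - ell2**2 / 2) < 1e-6)   # True
```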
We shall consider smooth variations H(t, s) of γ₀(t) = H(t, 0). We require
that H: [0, T] × (−ε, ε) → M is smooth. Since we are mostly just interested
in what happens “near” s = 0, it is often convenient to just consider the
corresponding vector field along γ:

  Y(t) = ∂H/∂s |_{s=0} = (dH)_{(t,0)} ∂/∂s.

Conversely, given any such vector field Y, we can generate a variation H
that gives rise to Y. For example, we can put

  H(t, s) = exp_{γ₀(t)}(sY(t)),

which is valid on some neighbourhood of [0, T] × {0}. If Y(0) = 0 = Y(T),
then we can choose H fixing the end-points of γ₀.
Theorem (First variation formula).

(i) For any variation H of γ, we have

  d/ds E(γ_s)|_{s=0} = g(Y(t), ˙γ(t))|₀^T − ∫₀^T g(Y(t), ∇/dt ˙γ(t)) dt. (∗)

(ii) The critical points, i.e. the γ such that

  d/ds E(γ_s)|_{s=0} = 0

for all (end-point fixing) variations H of γ, are geodesics.

(iii) If |˙γ_s(t)| is constant for each fixed s ∈ (−ε, ε), and |˙γ(t)| ≡ 1,
then

  d/ds E(γ_s)|_{s=0} = d/ds ℓ(γ_s)|_{s=0}.

(iv) If γ is a critical point of the length, then it must be a
reparametrization of a geodesic.
This is just some calculation.

Proof. We will assume that we can treat ∂/∂s and ∂/∂t as vector fields on an
embedded submanifold, even though H is not necessarily a local embedding.
The result can be proved without this assumption, but that requires more
technical work.

(i) We have

  ½ ∂/∂s g(˙γ_s(t), ˙γ_s(t)) = g(∇/ds ˙γ_s(t), ˙γ_s(t))
    = g(∇/dt ∂H/∂s (t, s), ∂H/∂t (t, s))
    = ∂/∂t g(∂H/∂s, ∂H/∂t) − g(∂H/∂s, ∇/dt ∂H/∂t).

Comparing with what we want to prove, we see that we get what we want by
integrating ∫₀^T dt, then putting s = 0, and then noting that

  ∂H/∂s |_{s=0} = Y,  ∂H/∂t |_{s=0} = ˙γ.
(ii) If γ is a geodesic, then

  ∇/dt ˙γ(t) = 0.

So the integral on the right hand side of (∗) vanishes. Also, we have
Y(0) = 0 = Y(T). So the right hand side vanishes.

Conversely, suppose γ is a critical point for E. Then choose H with

  Y(t) = f(t) ∇/dt ˙γ(t)

for some f ∈ C^∞[0, T] such that f(0) = f(T) = 0. Then we know

  ∫₀^T f(t) |∇/dt ˙γ(t)|² dt = 0,

and this is true for all such f. So we know ∇/dt ˙γ = 0.
(iii) This is evident from the previous proposition. Indeed, we fix [0, T];
then for all H, we have

  E(γ_s) = ℓ(γ_s)²/(2T),

and so

  d/ds E(γ_s)|_{s=0} = (1/T) ℓ(γ_s) d/ds ℓ(γ_s)|_{s=0},

and when s = 0, the curve is parametrized by arc-length, so ℓ(γ_s) = T.

(iv) By reparametrization, we may wlog |˙γ| ≡ 1. Then γ is a critical point
for ℓ, hence for E, hence a geodesic.
Often, we are interested in more than just whether the curve is a critical
point. We want to know if it maximizes or minimizes energy. Then we need
more than the “first derivative”. We need the “second derivative” as well.
Theorem (Second variation formula). Let γ(t): [0, T] → M be a geodesic with
|˙γ| = 1. Let H(t, s) be a variation of γ. Let

  Y(t, s) = ∂H/∂s (t, s) = (dH)_{(t,s)} ∂/∂s.

Then

(i) We have

  d²/ds² E(γ_s)|_{s=0} = g(∇Y/ds (t, 0), ˙γ)|₀^T
      + ∫₀^T ( |Y′|² − R(Y, ˙γ, Y, ˙γ) ) dt.

(ii) Also

  d²/ds² ℓ(γ_s)|_{s=0} = g(∇Y/ds (t, 0), ˙γ(t))|₀^T
      + ∫₀^T ( |Y′|² − R(Y, ˙γ, Y, ˙γ) − g(˙γ, Y′)² ) dt,

where R is the (4, 0) curvature tensor, and

  Y′(t) = ∇Y/dt (t, 0).

Putting

  Y_n = Y − g(Y, ˙γ)˙γ

for the normal component of Y, we can write this as

  d²/ds² ℓ(γ_s)|_{s=0} = g(∇Y_n/ds (t, 0), ˙γ(t))|₀^T
      + ∫₀^T ( |Y_n′|² − R(Y_n, ˙γ, Y_n, ˙γ) ) dt.

Note that if we have fixed end points, then the first terms in the variation
formulae vanish.
Proof. We use

  d/ds E(γ_s) = g(Y(t, s), ˙γ_s(t))|_{t=0}^{t=T}
      − ∫₀^T g(Y(t, s), ∇/dt ˙γ_s(t)) dt.

Taking the derivative with respect to s again gives

  d²/ds² E(γ_s) = g(∇Y/ds, ˙γ_s)|_{t=0}^{t=T} + g(Y, ∇/ds ˙γ_s)|_{t=0}^{t=T}
      − ∫₀^T ( g(∇Y/ds, ∇/dt ˙γ_s) + g(Y, ∇/ds ∇/dt ˙γ_s) ) dt.

We now use that

  ∇/ds ∇/dt ˙γ_s(t) = ∇/dt ∇/ds ˙γ_s(t) + R(∂H/∂s, ∂H/∂t)˙γ_s
                    = (∇/dt)² Y(t, s) + R(∂H/∂s, ∂H/∂t)˙γ_s.

We now set s = 0. Then ∇/dt ˙γ = 0, and the above gives

  d²/ds² E(γ_s)|_{s=0} = g(∇Y/ds, ˙γ)|₀^T + g(Y, ∇˙γ/ds)|₀^T
      − ∫₀^T ( g(Y, (∇/dt)² Y) + R(˙γ, Y, ˙γ, Y) ) dt.

Applying integration by parts, we can write

  −∫₀^T g(Y, (∇/dt)² Y) dt = −g(Y, ∇Y/dt)|₀^T + ∫₀^T |∇Y/dt|² dt.

Finally, noting that

  ∇/ds ˙γ_s = ∇/dt Y(t, s),

so that the two boundary terms involving Y cancel, we find that

  d²/ds² E(γ_s)|_{s=0} = g(∇Y/ds, ˙γ)|₀^T
      + ∫₀^T ( |Y′|² − R(Y, ˙γ, Y, ˙γ) ) dt.
It remains to prove the second variation of length. We first differentiate

  d/ds ℓ(γ_s) = ∫₀^T 1/(2√(g(˙γ_s, ˙γ_s))) ∂/∂s g(˙γ_s, ˙γ_s) dt.

Then the second derivative gives

  d²/ds² ℓ(γ_s)|_{s=0} = ∫₀^T [ ½ ∂²/∂s² g(˙γ_s, ˙γ_s)|_{s=0}
      − ¼ ( ∂/∂s g(˙γ_s, ˙γ_s) )²|_{s=0} ] dt,

where we used the fact that g(˙γ, ˙γ) = 1.

We notice that the first term can be identified with the derivative of the
energy function. So we have

  d²/ds² ℓ(γ_s)|_{s=0} = d²/ds² E(γ_s)|_{s=0}
      − ∫₀^T g(˙γ_s, ∇/ds ˙γ_s)²|_{s=0} dt.

So the second part follows from the first.
3.6 Applications
This finally puts us in a position to prove something more interesting.
Synge’s theorem
We are first going to prove the following remarkable result relating curvature
and topology:
Theorem (Synge’s theorem). Every compact orientable Riemannian manifold
(M, g) such that dim M is even and K(g) > 0 for all planes at all p ∈ M is
simply connected.
We can see that these conditions are indeed necessary. For example, we can
consider RP² = S²/±1 with the metric induced from S². This is compact with
positive sectional curvature, but it is not orientable, and indeed it is not
simply connected.

Similarly, if we take RP³, then this has odd dimension, and the theorem
breaks.

Finally, we do need strict inequality: e.g. the flat torus is not simply
connected.
We first prove a technical lemma.

Lemma. Let M be a compact manifold, and [α] a non-trivial homotopy class of
closed curves in M. Then there is a closed minimal geodesic in [α].

Proof. Since M is compact, we can pick some ε > 0 such that for all p ∈ M,
the map exp_p|_{B(0,ε)} is a diffeomorphism.

Let ℓ₀ = inf_{γ∈[α]} ℓ(γ). We know that ℓ₀ > 0, for otherwise there exists a
γ with ℓ(γ) < ε. Then γ is contained in some geodesic coordinate
neighbourhood, but then α is contractible. So ℓ₀ must be positive.

Then we can find a sequence γ_n ∈ [α] with γ_n: [0, 1] → M, |˙γ_n| constant,
such that

  lim_{n→∞} ℓ(γ_n) = ℓ₀.

Choose

  0 = t_0 < t_1 < ··· < t_k = 1

such that t_{i+1} − t_i < ε/2. It follows that

  d(γ_n(t_i), γ_n(t_{i+1})) < ε

for all n sufficiently large and all i. Then again, we can replace
γ_n|_{[t_i, t_{i+1}]} by a radial geodesic without affecting the limit
lim ℓ(γ_n).
Then we exploit the compactness of M (and of the unit sphere) again, and
pass to a subsequence of {γ_n} so that γ_n(t_i), ˙γ_n(t_i) are all
convergent for every fixed i as n → ∞. Then the curves converge to some

  γ_n → γ̂ ∈ [α],

given by joining the limits lim_{n→∞} γ_n(t_i). Then we know that the
lengths converge as well, and so we know γ̂ is minimal among curves in [α].
So γ̂ is locally minimal, hence a geodesic. So we can take γ = γ̂, and we
are done.
Proof of Synge’s theorem. Suppose M satisfies the hypotheses, but
π₁(M) ≠ {1}. So there is a closed path α with [α] ≠ 1, i.e. it cannot be
contracted to a point. By the lemma, we pick a representative γ of [α] that
is a closed, minimal geodesic.

We now prove the theorem. We may wlog assume |˙γ| = 1, with t ranging in
[0, T]. Consider a vector field X(t) for 0 ≤ t ≤ T along γ(t) such that

  ∇X/dt = 0,  g(X(0), ˙γ(0)) = 0.

Note that since γ is a geodesic, we know

  g(X(t), ˙γ(t)) = 0

for all t ∈ [0, T], as parallel transport preserves the inner product. So
X(T) ⊥ ˙γ(T) = ˙γ(0), since we have a closed curve.
We consider the map P that sends X(0) ↦ X(T). This is a linear isometry of
(˙γ(0))^⊥ with itself that preserves orientation. So we can think of P as an
element

  P ∈ SO(2n − 1),

where dim M = 2n. It is an easy linear algebra exercise to show that every
element of SO(2n − 1) must have an eigenvector of eigenvalue 1. So we can
find v ∈ T_pM such that v ⊥ ˙γ(0) and P(v) = v. We take X(0) = v. Then we
have X(T) = v.
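The linear algebra exercise (in odd dimensions, det(P − I) =
det(P) det(I − Pᵀ) = (−1)^{2n−1} det(P − I), forcing det(P − I) = 0, so P
fixes a vector) can be checked numerically. A sketch using NumPy; the helper
name random_special_orthogonal is ours, not from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_special_orthogonal(m):
    """A random element of SO(m), built from the QR factorization of a
    Gaussian matrix, with a column flip to force determinant +1."""
    q, r = np.linalg.qr(rng.standard_normal((m, m)))
    q = q * np.sign(np.diag(r))        # fix the sign ambiguity of QR
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]             # land in SO(m), not just O(m)
    return q

# For Synge's theorem, dim M = 2n gives P ∈ SO(2n - 1), an odd dimension;
# e.g. dim M = 4 gives SO(3).  Each sample should have eigenvalue 1.
for _ in range(5):
    P = random_special_orthogonal(3)
    eigvals = np.linalg.eigvals(P)
    print(np.min(np.abs(eigvals - 1.0)))   # ≈ 0: eigenvalue 1 is present
```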
Consider now a variation H(t, s) inducing this X(t). We may assume |˙γ_s| is
constant. Then

  d/ds ℓ(γ_s)|_{s=0} = 0,

as γ is minimal. Moreover, since it is a minimum, the second derivative must
be positive, or at least non-negative. Is this actually the case?

We look at the second variation formula of length. Using the fact that the
loop is closed and X is parallel (so the boundary terms and the |X′|² term
vanish), the formula reduces to

  d²/ds² ℓ(γ_s)|_{s=0} = −∫₀^T R(X, ˙γ, X, ˙γ) dt.

But we assumed the sectional curvature is positive. So the second variation
is negative! This is a contradiction.
Conjugate points
Recall that when a geodesic starts moving, for a short period of time, it is
length-minimizing. However, in general, if we keep on moving for a long time,
then we cease to be minimizing. It is useful to characterize when this happens.
As before, for a vector field J along a curve γ(t), we will write

  J′ = ∇J/dt.

Definition (Conjugate points). Let γ(t) be a geodesic. Then

  p = γ(α),  q = γ(β)

are conjugate points if there exists some non-trivial Jacobi field J along γ
such that J(α) = 0 = J(β).

It is easy to see that this does not depend on the parametrization of the
curve, because Jacobi fields do not.
Proposition.

(i) If γ(t) = exp_p(ta), and q = exp_p(βa) is conjugate to p, then q is a
singular value of exp_p.

(ii) Let J be as in the definition. Then J must be pointwise normal to ˙γ.
Proof.

(i) We wlog [α, β] = [0, 1], so J(0) = 0 = J(1). We set a = ˙γ(0) and
w = J′(0). Note that a, w are both non-zero, as Jacobi fields are determined
by their initial conditions. Then q = exp_p(a).

We have shown earlier that if J(0) = 0, then

  J(t) = (d exp_p)_{ta}(tw)

for all 0 ≤ t ≤ 1. So it follows that (d exp_p)_a(w) = J(1) = 0. So
(d exp_p)_a has non-trivial kernel, and hence isn’t surjective.

(ii) We claim that any Jacobi field J along a geodesic γ satisfies

  g(J(t), ˙γ(t)) = g(J′(0), ˙γ(0)) t + g(J(0), ˙γ(0)).

To prove this, we note that by the definition of geodesics and Jacobi
fields, we have

  d/dt g(J′, ˙γ) = g(J″, ˙γ) = −R(˙γ, J, ˙γ, ˙γ) = 0

by the symmetries of R. So we have

  d/dt g(J, ˙γ) = g(J′(t), ˙γ(t)) = g(J′(0), ˙γ(0)).

Now integrating gives the desired result.

This result tells us g(J(t), ˙γ(t)) is a linear function of t. But we have

  g(J(0), ˙γ(0)) = g(J(1), ˙γ(1)) = 0.

So we know g(J(t), ˙γ(t)) is constantly zero.
From the proof, we see that for any Jacobi field with J(0) = 0, we have

  g(J′(0), ˙γ(0)) = 0 ⟺ g(J(t), ˙γ(t)) = 0 for all t.

This implies that the dimension of the space of normal Jacobi fields along γ
satisfying J(0) = 0 is dim M − 1.
Example. Consider M = S² ⊆ ℝ³ with the round metric, i.e. the “obvious”
metric induced from ℝ³. We claim that N = (0, 0, 1) and S = (0, 0, −1) are
conjugate points.

To construct a Jacobi field, instead of trying to mess with the Jacobi
equation, we construct a variation by geodesics. We let

  f(t, s) = (cos s sin t, sin s sin t, cos t).

We see that when s = 0, this is the great circle in the (x, z)-plane. Then
we have a Jacobi field

  J(t) = ∂f/∂s |_{s=0} = (0, sin t, 0).

This is then a Jacobi field that vanishes at N and S.

When we are at the conjugate point, there are many adjacent curves whose
length is equal to ours. If we extend our geodesic beyond the conjugate
point, then it is no longer even locally minimal: we can push the geodesic
slightly over and the length will be shorter. On the other hand, we proved
that up to the conjugate point, the geodesic is always locally minimal.
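The claimed variation field can be recovered by differentiating f
numerically in s, which checks that it vanishes at both poles. A sketch;
the function names f and J are taken from the example, the finite-difference
helper is ours.

```python
import math

def f(t, s):
    """The family of great circles through N = (0,0,1) and S = (0,0,-1)."""
    return (math.cos(s) * math.sin(t),
            math.sin(s) * math.sin(t),
            math.cos(t))

def J(t, h=1e-6):
    """Central finite-difference approximation of ∂f/∂s at s = 0."""
    a, b = f(t, h), f(t, -h)
    return tuple((x - y) / (2 * h) for x, y in zip(a, b))

for t in (0.0, math.pi / 2, math.pi):
    print(J(t))   # ≈ (0, sin t, 0): vanishes at t = 0 (N) and t = π (S)
```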
It turns out this phenomenon is generic:

Theorem. Let γ: [0, 1] → M be a geodesic with γ(0) = p and γ(1) = q, such
that p is conjugate to some γ(t₀) for some t₀ ∈ (0, 1). Then there is a
piecewise smooth variation f(t, s) with f(t, 0) = γ(t) such that

  f(0, s) = p,  f(1, s) = q

and ℓ(f(·, s)) < ℓ(γ) whenever s ≠ 0 is small.
The proof is a generalization of the example we had above. We know that up
to the conjugate point, we have a Jacobi field that allows us to vary the
geodesic without increasing the length. We can then give it a slight “kick”
and then the length will decrease.
Proof. By the hypothesis, there is a Jacobi field J(t) defined on t ∈ [0, 1]
and a t₀ ∈ (0, 1) such that

  J(t) ⊥ ˙γ(t)

for all t, with J(0) = J(t₀) = 0 and J ≢ 0. Then J′(t₀) ≠ 0.

We define a parallel vector field Z₁ along γ by Z₁(t₀) = −J′(t₀). We pick
θ ∈ C^∞[0, 1] such that θ(0) = θ(1) = 0 and θ(t₀) = 1.

Finally, we define

  Z = θZ₁,

and for α ∈ ℝ, we define

  Y_α(t) = J(t) + αZ(t)  for 0 ≤ t ≤ t₀,
  Y_α(t) = αZ(t)         for t₀ ≤ t ≤ 1.

We notice that this is not smooth at t₀, but is just continuous. We will
postpone the choice of α to a later time.
We know $Y_\alpha(t)$ arises from a piecewise $C^\infty$ variation of $\gamma$, say $H_\alpha(t, s)$. The technical claim is that the second variation of length corresponding to $Y_\alpha(t)$ is negative for some $\alpha$.
We denote by $I(X, Y)_T$ the symmetric bilinear form that gives rise to the second variation of length with fixed end points. If we make the additional assumption that $X, Y$ are normal along $\gamma$, then the formula simplifies, and reduces to
$$I(X, Y)_T = \int_0^T \big(g(X', Y') - R(X, \dot\gamma, Y, \dot\gamma)\big)\, \mathrm{d}t.$$
Then for $H_\alpha(t, s)$, we have
$$\left.\frac{\mathrm{d}^2}{\mathrm{d}s^2}\ell(\gamma_s)\right|_{s=0} = I_1 + I_2 + I_3,$$
where
$$I_1 = I(J, J)_{t_0},\quad I_2 = 2\alpha I(J, Z)_{t_0},\quad I_3 = \alpha^2 I(Z, Z)_1.$$
We look at each term separately.
We first claim that $I_1 = 0$. We note that
$$\frac{\mathrm{d}}{\mathrm{d}t} g(J, J') = g(J', J') + g(J, J''),$$
and $g(J, J'')$ added to the curvature term vanishes by the Jacobi equation. Then by integrating by parts and applying the boundary conditions, we see that $I_1$ vanishes.
Also, by integrating by parts, we find
$$I_2 = 2\alpha\, g(Z, J')\big|_0^{t_0}.$$
Whence
$$\left.\frac{\mathrm{d}^2}{\mathrm{d}s^2}\ell(\gamma_s)\right|_{s=0} = -2\alpha |J'(t_0)|^2 + \alpha^2 I(Z, Z)_1.$$
Now if $\alpha > 0$ is small enough, then the linear term dominates, and this is negative. Since the first variation vanishes ($\gamma$ is a geodesic), we know this is a local maximum of length along the variation.
Note that we made a compromise in the theorem by allowing a piecewise $C^\infty$ variation instead of a smooth one, but of course, we can fix this by making a smooth approximation.
Bonnet–Myers diameter theorem
We are going to see yet another application of our previous hard work, which may also be seen as an interplay between curvature and topology. In case it isn’t clear, all our manifolds are connected.
Definition (Diameter). The diameter of a Riemannian manifold $(M, g)$ is
$$\operatorname{diam}(M, g) = \sup_{p, q \in M} d(p, q).$$
Of course, this definition is valid for any metric space.
Example. Consider the sphere
$$S^{n-1}(r) = \{x \in \mathbb{R}^n : |x| = r\},$$
with the induced “round” metric. Then
$$\operatorname{diam}(S^{n-1}(r)) = \pi r.$$
It is an exercise to check that $K \equiv \frac{1}{r^2}$.
We will also need the following notation:
Notation. Let $h, \hat{h}$ be two symmetric bilinear forms on a real vector space. We say $h \geq \hat{h}$ if $h - \hat{h}$ is non-negative definite.
If $h, \hat{h} \in \Gamma(S^2 T^*M)$ are fields of symmetric bilinear forms, we write $h \geq \hat{h}$ if $h_p \geq \hat{h}_p$ for all $p \in M$.
The following will also be useful:
Definition (Riemannian covering map). Let $(M, g)$ and $(\tilde{M}, \tilde{g})$ be two Riemannian manifolds, and $f : \tilde{M} \to M$ be a smooth covering map. We say $f$ is a Riemannian covering map if it is a local isometry, i.e. $f^* g = \tilde{g}$. We say $\tilde{M}$ is a Riemannian cover of $M$.
Recall that if $f$ is in fact a universal cover, i.e. $\tilde{M}$ is simply connected, then we can (non-canonically) identify $\pi_1(M)$ with $f^{-1}(p)$ for any point $p \in M$.
Theorem (Bonnet–Myers diameter theorem). Let $(M, g)$ be a complete $n$-dimensional manifold with
$$\operatorname{Ric}(g) \geq \frac{n-1}{r^2} g,$$
where $r > 0$ is some positive number. Then
$$\operatorname{diam}(M, g) \leq \operatorname{diam} S^n(r) = \pi r.$$
In particular, $M$ is compact and $\pi_1(M)$ is finite.
Proof. Consider any $L < \operatorname{diam}(M, g)$. Then by definition (and Hopf–Rinow), we can find $p, q \in M$ such that $d(p, q) = L$, and a minimal geodesic $\gamma \in \Omega(p, q)$ with $\ell(\gamma) = d(p, q)$. We parametrize $\gamma : [0, L] \to M$ so that $|\dot\gamma| = 1$.
Now consider any vector field $Y$ along $\gamma$ such that $Y(p) = 0 = Y(q)$. Since $\gamma$ is a minimal geodesic, it is a critical point of $\ell$, and the second variation $I(Y, Y)_{[0, L]}$ is non-negative (recall that the second variation has fixed end points).
We extend $\dot\gamma(0)$ to an orthonormal basis of $T_p M$, say $\dot\gamma(0) = e_1, e_2, \cdots, e_n$. We further let $X_i$ be the unique parallel vector field along $\gamma$ such that
$$X_i' = 0,\quad X_i(0) = e_i.$$
In particular, $X_1(t) = \dot\gamma(t)$.
For $i = 2, \cdots, n$, we put
$$Y_i(t) = \sin\left(\frac{\pi t}{L}\right) X_i(t).$$
Then after integrating by parts, we find that we have
$$I(Y_i, Y_i)_{[0, L]} = -\int_0^L g(Y_i'' + R(\dot\gamma, Y_i)\dot\gamma, Y_i)\, \mathrm{d}t.$$
Using the fact that $X_i$ is parallel, this can be written as
$$= \int_0^L \sin^2\left(\frac{\pi t}{L}\right)\left(\frac{\pi^2}{L^2} - R(\dot\gamma, X_i, \dot\gamma, X_i)\right) \mathrm{d}t,$$
and since $\gamma$ is length minimizing, we know this is $\geq 0$.
We note that we have $R(\dot\gamma, X_1, \dot\gamma, X_1) = 0$. So we have
$$\sum_{i=2}^n R(\dot\gamma, X_i, \dot\gamma, X_i) = \operatorname{Ric}(\dot\gamma, \dot\gamma).$$
So we know
$$\sum_{i=2}^n I(Y_i, Y_i) = \int_0^L \sin^2\left(\frac{\pi t}{L}\right)\left((n - 1)\frac{\pi^2}{L^2} - \operatorname{Ric}(\dot\gamma, \dot\gamma)\right) \mathrm{d}t \geq 0.$$
We also know that
$$\operatorname{Ric}(\dot\gamma, \dot\gamma) \geq \frac{n-1}{r^2}$$
by hypothesis. So this implies
$$\frac{\pi^2}{L^2} \geq \frac{1}{r^2}.$$
This tells us that
$$L \leq \pi r.$$
Since $L$ is any number less than $\operatorname{diam}(M, g)$, it follows that
$$\operatorname{diam}(M, g) \leq \pi r.$$
Since $M$ is known to be complete, by the Hopf–Rinow theorem, any closed bounded subset is compact. But $M$ itself is closed and bounded! So $M$ is compact.
To understand the fundamental group, we simply have to consider a universal Riemannian cover $f : \tilde{M} \to M$. We know such a topological universal covering space must exist by general existence theorems. We can then pull back the differential structure and metric along $f$, since $f$ is a local homeomorphism. So this gives a universal Riemannian cover of $M$. But the hypothesis of the theorem is local, so it is also satisfied for $\tilde{M}$. So $\tilde{M}$ is also compact. Since $f^{-1}(p)$ is a closed discrete subset of a compact space, it is finite, and we are done.
It is an easy exercise to show that the hypothesis on the Ricci curvature
cannot be weakened to just saying that the Ricci curvature is positive definite.
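The borderline case of the inequality in the proof can be illustrated numerically. Under the extremal hypothesis $\operatorname{Ric}(\dot\gamma, \dot\gamma) = (n-1)/r^2$, the total index form computed in the proof is positive for $L < \pi r$, vanishes exactly at $L = \pi r$, and would go negative beyond it; the function name `total_index` below is ours, a sketch rather than anything from the notes.

```python
import numpy as np

# The quantity controlling the Bonnet-Myers proof, in the extremal case
# Ric(gamma', gamma') = (n-1)/r^2:
#   int_0^L sin^2(pi t / L) * (n-1) * (pi^2/L^2 - 1/r^2) dt.
def total_index(L, r, n, N=100000):
    t = np.linspace(0, L, N)
    integrand = np.sin(np.pi * t / L) ** 2 * (n - 1) * (np.pi**2 / L**2 - 1 / r**2)
    return integrand.sum() * (L / N)   # simple Riemann sum

r, n = 2.0, 3
assert total_index(0.5 * np.pi * r, r, n) > 0      # L < pi r: no contradiction yet
assert abs(total_index(np.pi * r, r, n)) < 1e-9    # L = pi r: exactly borderline
assert total_index(1.1 * np.pi * r, r, n) < 0      # L > pi r: the contradiction
print("total index form changes sign at L = pi * r")
```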
Hadamard–Cartan theorem
To prove the next result, we need to talk a bit more about coverings.
Proposition. Let $(M, g)$ and $(N, h)$ be Riemannian manifolds, and suppose $M$ is complete. Suppose there is a smooth surjection $f : M \to N$ that is a local diffeomorphism. Moreover, suppose that for any $p \in M$ and $v \in T_p M$, we have $|\mathrm{d}f_p(v)|_h \geq |v|$. Then $f$ is a covering map.
Proof. By general topology, it suffices to prove that for any smooth curve $\gamma : [0, 1] \to N$, and any $q \in M$ such that $f(q) = \gamma(0)$, there exists a lift of $\gamma$ starting from $q$, i.e. a curve $\tilde\gamma : [0, 1] \to M$ with $f \circ \tilde\gamma = \gamma$ and $\tilde\gamma(0) = q$.
From the hypothesis, we know that $\tilde\gamma$ exists on $[0, \varepsilon_0]$ for some “small” $\varepsilon_0 > 0$. We let
$$I = \{0 \leq \varepsilon \leq 1 : \tilde\gamma \text{ exists on } [0, \varepsilon]\}.$$
We immediately see this is non-empty, since it contains $\varepsilon_0$. Moreover, it is not difficult to see that $I$ is open in $[0, 1]$, because $f$ is a local diffeomorphism. So it suffices to show that $I$ is closed.
We let $\{t_n\}_{n=1}^\infty \subseteq I$ be such that $t_{n+1} > t_n$ for all $n$, and
$$\lim_{n \to \infty} t_n = \varepsilon_1.$$
Using Hopf–Rinow, either $\{\tilde\gamma(t_n)\}$ is contained in some compact $K$, or it is unbounded. We claim that unboundedness is impossible. We have
$$\ell(\gamma) \geq \ell(\gamma|_{[0, t_n]}) = \int_0^{t_n} |\dot\gamma|\, \mathrm{d}t = \int_0^{t_n} |\mathrm{d}f_{\tilde\gamma(t)} \dot{\tilde\gamma}(t)|\, \mathrm{d}t \geq \int_0^{t_n} |\dot{\tilde\gamma}|\, \mathrm{d}t = \ell(\tilde\gamma|_{[0, t_n]}) \geq d(\tilde\gamma(0), \tilde\gamma(t_n)).$$
So we know this is bounded. So by compactness, we can find some $x$ such that $\tilde\gamma(t_{n_\ell}) \to x$ as $\ell \to \infty$. There exists an open set $x \in V \subseteq M$ such that $f|_V$ is a diffeomorphism.
Since there are extensions of $\tilde\gamma$ to each $t_n$, eventually we get an extension to within $V$, and then we can just lift directly, and extend it to $\varepsilon_1$. So $\varepsilon_1 \in I$. So we are done.
Corollary. Let $f : M \to N$ be a local isometry onto $N$, and $M$ be complete. Then $f$ is a covering map.
Note that since $N$ is (assumed to be) connected, we know $f$ is necessarily surjective. To see this, note that the completeness of $M$ implies completeness of $f(M)$, hence $f(M)$ is closed in $N$, and since $f$ is a local isometry, we know $f$ is in particular open. So the image is open and closed, hence $f(M) = N$.
For a change, our next result will assume a negative curvature, instead of a
positive one!
Theorem (Hadamard–Cartan theorem). Let $(M^n, g)$ be a complete Riemannian manifold such that the sectional curvature is always non-positive. Then for every point $p \in M$, the map $\exp_p : T_p M \to M$ is a covering map. In particular, if $\pi_1(M) = 0$, then $M$ is diffeomorphic to $\mathbb{R}^n$.
We will need one more technical lemma.
Lemma. Let $\gamma(t)$ be a geodesic on $(M, g)$ such that $K \leq 0$ along $\gamma$. Then $\gamma$ has no conjugate points.
Proof. We write $\gamma(0) = p$. Let $J(t)$ be a Jacobi field along $\gamma$ with $J(0) = 0$. We claim that if $J$ is not identically zero, then $J$ does not vanish anywhere else.
We consider the function
$$f(t) = g(J(t), J(t)) = |J(t)|^2.$$
Then $f(0) = f'(0) = 0$. Consider
$$\tfrac{1}{2} f''(t) = g(J''(t), J(t)) + g(J'(t), J'(t)) = g(J', J') - R(\dot\gamma, J, \dot\gamma, J) \geq 0.$$
So $f$ is a convex function, and so we are done.
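The convexity argument can be seen concretely in the constant-curvature model cases. A normal Jacobi field with $J(0) = 0$, $|J'(0)| = 1$ has $|J(t)| = t$ when $K = 0$ and $|J(t)| = \sinh t$ when $K = -1$ (assumed model facts, not from the notes); the sketch below checks that $f = |J|^2$ is convex and $J$ never re-vanishes, while $\sin t$ (the $K = 1$ case) does vanish again at $\pi$.

```python
import numpy as np

# Model Jacobi fields with J(0) = 0, |J'(0)| = 1:
#   K = 0:  J(t) = t,        K = -1: J(t) = sinh(t),   K = 1: J(t) = sin(t).
t = np.linspace(0, 5, 2001)

for J in (t, np.sinh(t)):                    # the K <= 0 model cases
    f = J ** 2
    fpp = np.gradient(np.gradient(f, t), t)  # numerical second derivative
    assert np.all(fpp[2:-2] >= -1e-6)        # f'' >= 0: f = |J|^2 is convex
    assert np.all(J[1:] > 0)                 # no second zero: no conjugate point

# Contrast with K = 1: sin(t) re-vanishes at t = pi, a conjugate point.
assert abs(np.sin(np.pi)) < 1e-12
print("|J|^2 convex and nonvanishing for K <= 0; sin re-vanishes at pi for K = 1")
```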
We can now prove the theorem.
Proof of theorem. By the lemma, we know there are no conjugate points. So we know $\exp_p$ is regular everywhere, hence a local diffeomorphism by the inverse function theorem. We can use this fact to pull back a metric from $M$ to $T_p M$ such that $\exp_p$ is a local isometry. Since this is a local isometry, we know geodesics are preserved. So geodesics originating from the origin in $T_p M$ are straight lines, and the speeds of the geodesics under the two metrics are the same. So we know $T_p M$ is complete under this metric. Also, by Hopf–Rinow, $\exp_p$ is surjective. So by the previous corollary, $\exp_p$ is a covering map, and we are done.
4 Hodge theory on Riemannian manifolds
4.1 Hodge star and operators
Throughout this chapter, we will assume our manifolds are oriented, and write $n$ for the dimension. We will write $\varepsilon \in \Omega^n(M)$ for a non-vanishing form defining the orientation.
Given a coordinate patch $U \subseteq M$, we can use Gram–Schmidt to obtain a positively-oriented orthonormal frame $e_1, \cdots, e_n$. This allows us to dualize and obtain a basis $\omega_1, \cdots, \omega_n \in \Omega^1(U)$, defined by
$$\omega_i(e_j) = \delta_{ij}.$$
Since these are linearly independent, we can multiply all of them together to obtain a non-zero $n$-form
$$\omega_1 \wedge \cdots \wedge \omega_n = a\varepsilon,$$
for some $a \in C^\infty(U)$, $a > 0$. We can do this for any coordinate patch, and the resulting $n$-forms agree on intersections. Indeed, given any other choice $\omega_1', \cdots, \omega_n'$, they must be related to the original $\omega_1, \cdots, \omega_n$ by an element $\Phi \in \mathrm{SO}(n)$. Then by linear algebra, we have
$$\omega_1' \wedge \cdots \wedge \omega_n' = \det(\Phi)\, \omega_1 \wedge \cdots \wedge \omega_n = \omega_1 \wedge \cdots \wedge \omega_n.$$
So we can patch all these forms together to get a global $n$-form $\omega_g \in \Omega^n(M)$ that gives the same orientation. This is a canonical such $n$-form, depending only on $g$ and the orientation chosen. This is called the (Riemannian) volume form of $(M, g)$.
Recall that the $\omega_i$ are orthonormal with respect to the natural dual inner product on $T^*M$. In general, $g$ induces an inner product on $\Lambda^p T^*M$ for all $p = 0, 1, \cdots, n$, which is still denoted $g$. One way to describe this is to give an orthonormal basis on each fiber, given by
$$\{\omega_{i_1} \wedge \cdots \wedge \omega_{i_p} : 1 \leq i_1 < \cdots < i_p \leq n\}.$$
From this point of view, the volume form becomes a form of “unit length”.
We now come to the central definition of Hodge theory.
Definition (Hodge star). The Hodge star operator on $(M^n, g)$ is the linear map
$$\star : \Lambda^p(T_x^* M) \to \Lambda^{n-p}(T_x^* M)$$
satisfying the property that for all $\alpha, \beta \in \Lambda^p(T_x^* M)$, we have
$$\alpha \wedge \star\beta = \langle \alpha, \beta\rangle_g\, \omega_g.$$
Since $g$ is non-degenerate, it follows that this is well-defined.
How do we actually compute this? Since we have vector spaces, it is natural to consider what happens in a basis.
Proposition. Suppose $\omega_1, \cdots, \omega_n$ is a positively-oriented orthonormal basis of $T_x^* M$. Then
$$\star(\omega_1 \wedge \cdots \wedge \omega_p) = \omega_{p+1} \wedge \cdots \wedge \omega_n.$$
We can check this by checking against all basis vectors of $\Lambda^p T_x^* M$, and the result drops out immediately. Since we can always relabel the numbers, this already tells us how to compute the Hodge star of all other basis elements.
We can apply the Hodge star twice, which gives us a linear endomorphism $\star\star : \Lambda^p T_x^* M \to \Lambda^p T_x^* M$. From the above, it follows that
Proposition. The double Hodge star $\star\star : \Lambda^p(T_x^* M) \to \Lambda^p(T_x^* M)$ is equal to $(-1)^{p(n-p)}$.
In particular,
$$\star 1 = \omega_g,\quad \star\omega_g = 1.$$
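On basis elements, the Hodge star is pure combinatorics: $\star$ sends $\omega_{i_1} \wedge \cdots \wedge \omega_{i_p}$ to its complementary wedge, up to the sign of the permutation $(I, I^c)$ of $(1, \cdots, n)$. The following sketch implements this (our own minimal model of the star on an orthonormal coframe, not code from the notes) and verifies the double-star sign $(-1)^{p(n-p)}$ exhaustively for a small $n$.

```python
from itertools import combinations

def perm_sign(perm):
    # Sign of a permutation given as a sequence of distinct integers.
    sign = 1
    perm = list(perm)
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def star(indices, n):
    # Hodge star of the basis p-form w_{i1} ^ ... ^ w_{ip} (indices sorted):
    # star(w_I) = sign(I, I^c) * w_{I^c} in a positively-oriented orthonormal coframe.
    comp = tuple(i for i in range(1, n + 1) if i not in indices)
    return perm_sign(tuple(indices) + comp), comp

n = 5
for p in range(n + 1):
    for I in combinations(range(1, n + 1), p):
        s1, J = star(I, n)
        s2, K = star(J, n)
        # Applying star twice returns w_I with the sign (-1)^{p(n-p)}.
        assert K == I and s1 * s2 == (-1) ** (p * (n - p))
print("star(star(w_I)) = (-1)^{p(n-p)} w_I verified for n =", n)
```

For instance, with $n = 3$ this reproduces $\star(\mathrm{d}x^1 \wedge \mathrm{d}x^2) = \mathrm{d}x^3$ with sign $+1$.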
Using the Hodge star, we can define a differential operator:
Definition (Co-differential ($\delta$)). We define $\delta : \Omega^p(M) \to \Omega^{p-1}(M)$ for $0 \leq p \leq \dim M$ by
$$\delta = \begin{cases} (-1)^{n(p+1)+1} \star\mathrm{d}\star & p \neq 0\\ 0 & p = 0 \end{cases}.$$
This is (sometimes) called the co-differential.
The funny powers of $(-1)$ are chosen so that our future results work well.
We further define
Definition (Laplace–Beltrami operator $\Delta$). The Laplace–Beltrami operator is
$$\Delta = \mathrm{d}\delta + \delta\mathrm{d} : \Omega^p(M) \to \Omega^p(M).$$
This is also known as the (Hodge) Laplacian.
We quickly note that
Proposition. $\star\Delta = \Delta\star$.
Consider the special case of $(M, g) = (\mathbb{R}^n, \mathrm{eucl})$, and $p = 0$. Then a straightforward calculation shows that
$$\Delta f = -\frac{\partial^2 f}{\partial x_1^2} - \cdots - \frac{\partial^2 f}{\partial x_n^2}$$
for each $f \in C^\infty(\mathbb{R}^n) = \Omega^0(\mathbb{R}^n)$. This is just the usual Laplacian, except there is a negative sign. This is there for a good reason, but we shall not go into that.
More generally, for a metric $g = g_{ij}\, \mathrm{d}x^i\, \mathrm{d}x^j$ on $\mathbb{R}^n$ (or alternatively a coordinate patch on any Riemannian manifold), we have
$$\omega_g = \sqrt{|g|}\, \mathrm{d}x^1 \wedge \cdots \wedge \mathrm{d}x^n,$$
where $|g|$ is the determinant of $g$. Then we have
$$\Delta_g f = -\frac{1}{\sqrt{|g|}} \partial_j\left(\sqrt{|g|}\, g^{ij} \partial_i f\right) = -g^{ij} \partial_i \partial_j f + \text{lower order terms}.$$
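The coordinate formula can be sanity-checked symbolically. Taking polar coordinates on $\mathbb{R}^2$, where $g = \mathrm{d}r^2 + r^2\,\mathrm{d}\theta^2$ (a standard example we assume here), the formula should reproduce the familiar polar Laplacian, with the minus sign of this chapter's convention:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
# Metric of R^2 in polar coordinates: g = dr^2 + r^2 dtheta^2.
g = sp.Matrix([[1, 0], [0, r**2]])
ginv, detg = g.inv(), g.det()
coords = [r, th]
f = sp.Function('f')(r, th)

# Delta_g f = -(1/sqrt|g|) d_j( sqrt|g| g^{ij} d_i f )
lap = 0
for j in range(2):
    inner = sum(ginv[i, j] * sp.diff(f, coords[i]) for i in range(2))
    lap += sp.diff(sp.sqrt(detg) * inner, coords[j])
lap = sp.simplify(-lap / sp.sqrt(detg))

# The familiar polar-coordinate Laplacian, negated to match our convention.
expected = -(sp.diff(f, r, 2) + sp.diff(f, r) / r + sp.diff(f, th, 2) / r**2)
assert sp.simplify(lap - expected) == 0
print(lap)
```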
How can we think about this co-differential $\delta$? One way to understand it is that it is the “adjoint” of $\mathrm{d}$.
Proposition. $\delta$ is the formal adjoint of $\mathrm{d}$. Explicitly, for any compactly supported $\alpha \in \Omega^{p-1}(M)$ and $\beta \in \Omega^p(M)$, we have
$$\int_M \langle \mathrm{d}\alpha, \beta\rangle_g\, \omega_g = \int_M \langle \alpha, \delta\beta\rangle_g\, \omega_g.$$
We just say it is a formal adjoint, rather than a genuine adjoint, because there is no obvious Banach space structure on $\Omega^p(M)$, and we don’t want to go into that. However, we can still define
Definition ($L^2$ inner product). For $\xi, \eta \in \Omega^p(M)$, we define the $L^2$ inner product by
$$\langle\langle \xi, \eta\rangle\rangle_g = \int_M \langle \xi, \eta\rangle_g\, \omega_g.$$
Note that this may not be well-defined if the space is not compact.
Under this notation, we can write the proposition as
$$\langle\langle \mathrm{d}\alpha, \beta\rangle\rangle_g = \langle\langle \alpha, \delta\beta\rangle\rangle_g.$$
Thus, we also say $\delta$ is the $L^2$ adjoint.
To prove this, we need to recall Stokes’ theorem. Since we don’t care about manifolds with boundary in this course, we just have
$$\int_M \mathrm{d}\omega = 0$$
for all forms $\omega$.
Proof. We have
$$\begin{aligned}
0 &= \int_M \mathrm{d}(\alpha \wedge \star\beta)\\
&= \int_M \mathrm{d}\alpha \wedge \star\beta + (-1)^{p-1}\int_M \alpha \wedge \mathrm{d}{\star}\beta\\
&= \int_M \langle \mathrm{d}\alpha, \beta\rangle_g\, \omega_g + (-1)^{p-1}(-1)^{(n-p+1)(p-1)} \int_M \alpha \wedge \star\star\mathrm{d}{\star}\beta\\
&= \int_M \langle \mathrm{d}\alpha, \beta\rangle_g\, \omega_g + (-1)^{(n-p)(p-1)} \int_M \alpha \wedge \star\star\mathrm{d}{\star}\beta\\
&= \int_M \langle \mathrm{d}\alpha, \beta\rangle_g\, \omega_g - \int_M \alpha \wedge \star\delta\beta\\
&= \int_M \langle \mathrm{d}\alpha, \beta\rangle_g\, \omega_g - \int_M \langle \alpha, \delta\beta\rangle_g\, \omega_g.
\end{aligned}$$
This result explains the funny signs we gave $\delta$.
Corollary. $\Delta$ is formally self-adjoint.
Similar to what we did in, say, IB Methods, we can define
Definition (Harmonic forms). A harmonic form is a $p$-form $\omega$ such that $\Delta\omega = 0$. We write
$$\mathcal{H}^p = \{\alpha \in \Omega^p(M) : \Delta\alpha = 0\}.$$
We have a further corollary of the proposition.
Corollary. Let $M$ be compact. Then
$$\Delta\alpha = 0 \Leftrightarrow \mathrm{d}\alpha = 0 \text{ and } \delta\alpha = 0.$$
We say $\alpha$ is closed and co-closed.
Proof. $\Leftarrow$ is clear. For $\Rightarrow$, suppose $\Delta\alpha = 0$. Then we have
$$0 = \langle\langle \alpha, \Delta\alpha\rangle\rangle = \langle\langle \alpha, \mathrm{d}\delta\alpha + \delta\mathrm{d}\alpha\rangle\rangle = \|\delta\alpha\|_g^2 + \|\mathrm{d}\alpha\|_g^2.$$
Since the $L^2$ norm is non-degenerate, it follows that $\delta\alpha = \mathrm{d}\alpha = 0$.
In particular, in degree 0, co-closed is automatic. Then for all $f \in C^\infty(M)$, we have
$$\Delta f = 0 \Leftrightarrow \mathrm{d}f = 0.$$
In other words, harmonic functions on a compact manifold must be constant. This is a good way to demonstrate that the compactness hypothesis is required, as there are many non-trivial harmonic functions on $\mathbb{R}^n$, e.g. $x$.
Some of these things simplify if we know about the parity of our manifold. If $\dim M = n = 2m$, then $\star\star = (-1)^p$, and
$$\delta = -\star\mathrm{d}\star$$
whenever $p \neq 0$. In particular, this applies to complex manifolds, say $\mathbb{C}^n \cong \mathbb{R}^{2n}$, with the Hermitian metric. This is to be continued in sheet 3.
4.2 Hodge decomposition theorem
We now work towards proving the Hodge decomposition theorem. This is a very important and far-reaching result.
Theorem (Hodge decomposition theorem). Let $(M, g)$ be a compact oriented Riemannian manifold. Then
– For all $p = 0, \cdots, \dim M$, we have $\dim \mathcal{H}^p < \infty$.
– We have
$$\Omega^p(M) = \mathcal{H}^p \oplus \Delta\Omega^p(M).$$
Moreover, the direct sum is orthogonal with respect to the $L^2$ inner product. We also formally set $\Omega^{-1}(M) = 0$.
As before, the compactness of $M$ is essential, and cannot be dropped.
Corollary. We have orthogonal decompositions
$$\Omega^p(M) = \mathcal{H}^p \oplus \mathrm{d}\delta\Omega^p(M) \oplus \delta\mathrm{d}\Omega^p(M) = \mathcal{H}^p \oplus \mathrm{d}\Omega^{p-1}(M) \oplus \delta\Omega^{p+1}(M).$$
Proof. Note that for any $\alpha, \beta$, we have
$$\langle\langle \mathrm{d}\delta\alpha, \delta\mathrm{d}\beta\rangle\rangle_g = \langle\langle \mathrm{d}\mathrm{d}\delta\alpha, \mathrm{d}\beta\rangle\rangle_g = 0.$$
So
$$\mathrm{d}\delta\Omega^p(M) \oplus \delta\mathrm{d}\Omega^p(M)$$
is an orthogonal direct sum that clearly contains $\Delta\Omega^p(M)$. But each component is also orthogonal to harmonic forms, because harmonic forms are closed and co-closed. So the first decomposition follows.
To obtain the final decomposition, we simply note that
$$\mathrm{d}\Omega^{p-1}(M) = \mathrm{d}(\mathcal{H}^{p-1} \oplus \Delta\Omega^{p-1}(M)) = \mathrm{d}(\delta\mathrm{d}\Omega^{p-1}(M)) \subseteq \mathrm{d}\delta\Omega^p(M).$$
On the other hand, we certainly have the other inclusion. So the two terms are equal. The other term follows similarly.
This theorem has a rather remarkable corollary.
Corollary. Let $(M, g)$ be a compact oriented Riemannian manifold. Then for all $a \in H^p_{\mathrm{dR}}(M)$, there is a unique $\alpha \in \mathcal{H}^p$ such that $[\alpha] = a$. In other words, the obvious map
$$\mathcal{H}^p \to H^p_{\mathrm{dR}}(M)$$
is an isomorphism.
This is remarkable. On the left hand side, we have $\mathcal{H}^p$, which is a completely analytic object, defined by the Laplacian. On the other hand, the right hand side involves the de Rham cohomology, which is topological, and in fact a homotopy invariant.
Proof. To see uniqueness, suppose $\alpha_1, \alpha_2 \in \mathcal{H}^p$ are such that $[\alpha_1] = [\alpha_2] \in H^p_{\mathrm{dR}}(M)$. Then
$$\alpha_1 - \alpha_2 = \mathrm{d}\beta$$
for some $\beta$. But the left hand side and right hand side live in different parts of the Hodge decomposition. So they must be individually zero. Alternatively, we can compute
$$\|\mathrm{d}\beta\|_g^2 = \langle\langle \mathrm{d}\beta, \alpha_1 - \alpha_2\rangle\rangle_g = \langle\langle \beta, \delta\alpha_1 - \delta\alpha_2\rangle\rangle_g = 0,$$
since harmonic forms are co-closed.
To prove existence, let $\alpha \in \Omega^p(M)$ be such that $\mathrm{d}\alpha = 0$. We write
$$\alpha = \alpha_1 + \mathrm{d}\alpha_2 + \delta\alpha_3 \in \mathcal{H}^p \oplus \mathrm{d}\Omega^{p-1}(M) \oplus \delta\Omega^{p+1}(M).$$
Applying $\mathrm{d}$ gives us
$$0 = \mathrm{d}\alpha_1 + \mathrm{d}^2\alpha_2 + \mathrm{d}\delta\alpha_3.$$
We know $\mathrm{d}\alpha_1 = 0$ since $\alpha_1$ is harmonic, and $\mathrm{d}^2 = 0$. So we must have $\mathrm{d}\delta\alpha_3 = 0$. So
$$\langle\langle \delta\alpha_3, \delta\alpha_3\rangle\rangle_g = \langle\langle \alpha_3, \mathrm{d}\delta\alpha_3\rangle\rangle_g = 0.$$
So $\delta\alpha_3 = 0$. So $[\alpha] = [\alpha_1]$, and $\alpha$ has a representative in $\mathcal{H}^p$.
We can also heuristically justify why this is true. Suppose we are given some de Rham cohomology class $a \in H^p_{\mathrm{dR}}(M)$. We consider
$$B_a = \{\xi \in \Omega^p(M) : \mathrm{d}\xi = 0, [\xi] = a\}.$$
This is an infinite-dimensional affine space.
We now ask ourselves — which $\alpha \in B_a$ minimizes the $L^2$ norm? We consider the function $F : B_a \to \mathbb{R}$ given by $F(\alpha) = \|\alpha\|^2$. Any minimizing $\alpha$ is an extremum. So for any $\beta \in \Omega^{p-1}(M)$, we have
$$\left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0} F(\alpha + t\,\mathrm{d}\beta) = 0.$$
In other words, we have
$$0 = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\left(\|\alpha\|^2 + 2t\langle\langle \alpha, \mathrm{d}\beta\rangle\rangle_g + t^2\|\mathrm{d}\beta\|^2\right) = 2\langle\langle \alpha, \mathrm{d}\beta\rangle\rangle_g.$$
This is the same as saying
$$\langle\langle \delta\alpha, \beta\rangle\rangle_g = 0.$$
So this implies $\delta\alpha = 0$. But $\mathrm{d}\alpha = 0$ by assumption. So we find that $\alpha \in \mathcal{H}^p$. So the result is at least believable.
The proof of the Hodge decomposition theorem involves some analysis, which we are not bothered to do. Instead, we will just quote the appropriate results. For convenience, we will use $\langle \cdot, \cdot\rangle$ for the $L^2$ inner product, and then $\|\cdot\|$ is the $L^2$ norm.
The first theorem we quote is the following:
Theorem (Compactness theorem). If a sequence $\alpha_n \in \Omega^p(M)$ satisfies $\|\alpha_n\| < C$ and $\|\Delta\alpha_n\| < C$ for all $n$, then $\alpha_n$ contains a Cauchy subsequence.
This is almost like saying $\Omega^p(M)$ is compact, but it isn’t, since it is not complete. So the best thing we can say is that the subsequence is Cauchy.
Corollary. $\mathcal{H}^p$ is finite-dimensional.
Proof. Suppose not. Then by Gram–Schmidt, we can find an infinite orthonormal sequence $e_n$ such that $\|e_n\| = 1$ and $\|\Delta e_n\| = 0$, and this certainly does not have a Cauchy subsequence.
A large part of the proof is trying to solve the PDE
$$\Delta\omega = \alpha,$$
which we will need in order to carry out the decomposition. In analysis, one useful idea is the notion of weak solutions. We notice that if $\omega$ is a solution, then for any $\varphi \in \Omega^p(M)$, we have
$$\langle \omega, \Delta\varphi\rangle = \langle \Delta\omega, \varphi\rangle = \langle \alpha, \varphi\rangle,$$
using that $\Delta$ is self-adjoint. In other words, the linear functional $\ell = \langle \omega, \cdot\rangle : \Omega^p(M) \to \mathbb{R}$ satisfies
$$\ell(\Delta\varphi) = \langle \alpha, \varphi\rangle.$$
Conversely, if $\ell = \langle \omega, \cdot\rangle$ satisfies this equation, then $\omega$ must be a solution, since for any $\beta$, we have
$$\langle \Delta\omega, \beta\rangle = \langle \omega, \Delta\beta\rangle = \langle \alpha, \beta\rangle.$$
Definition (Weak solution). A weak solution to the equation $\Delta\omega = \alpha$ is a linear functional $\ell : \Omega^p(M) \to \mathbb{R}$ such that
(i) $\ell(\Delta\varphi) = \langle \alpha, \varphi\rangle$ for all $\varphi \in \Omega^p(M)$;
(ii) $\ell$ is bounded, i.e. there is some $C$ such that $|\ell(\beta)| < C\|\beta\|$ for all $\beta$.
Now given a weak solution, we want to obtain a genuine solution. If $\Omega^p(M)$ were a Hilbert space, then we would be immediately done by the Riesz representation theorem, but it isn’t. Thus, we need a theorem that gives us what we want.
Theorem (Regularity theorem). Every weak solution $\ell$ of $\Delta\omega = \alpha$ is of the form
$$\ell(\beta) = \langle \omega, \beta\rangle$$
for some $\omega \in \Omega^p(M)$.
Thus, we have reduced the problem to finding weak solutions. There is one final piece of analysis we need to quote. The definition of a weak solution only cares about what $\ell$ does on $\Delta\Omega^p(M)$. And it is easy to define what $\ell$ should do on $\Delta\Omega^p(M)$ — we simply define
$$\ell(\Delta\eta) = \langle \eta, \alpha\rangle.$$
Of course, for this to work, it must be well-defined, but this is not necessarily the case in general. We also have to check it is bounded. But suppose this worked. Then the remaining job is to extend this to a bounded functional on all of $\Omega^p(M)$ in whatever way we like. This relies on the following (relatively easy) theorem from analysis:
Theorem (Hahn–Banach theorem). Let $L$ be a normed vector space, and $L_0$ be a subspace. We let $f : L_0 \to \mathbb{R}$ be a bounded linear functional. Then $f$ extends to a bounded linear functional $L \to \mathbb{R}$ with the same bound.
We can now begin the proof.
Proof of Hodge decomposition theorem. Since $\mathcal{H}^p$ is finite-dimensional, by basic linear algebra, we can decompose
$$\Omega^p(M) = \mathcal{H}^p \oplus (\mathcal{H}^p)^\perp.$$
Crucially, we know $(\mathcal{H}^p)^\perp$ is a closed subspace. What we want to show is that
$$(\mathcal{H}^p)^\perp = \Delta\Omega^p(M).$$
One inclusion is easy. Suppose $\alpha \in \mathcal{H}^p$ and $\beta \in \Omega^p(M)$. Then we have
$$\langle \alpha, \Delta\beta\rangle = \langle \Delta\alpha, \beta\rangle = 0.$$
So we know that
$$\Delta\Omega^p(M) \subseteq (\mathcal{H}^p)^\perp.$$
The other direction is the hard part. Suppose $\alpha \in (\mathcal{H}^p)^\perp$. We may assume $\alpha$ is non-zero. Since our PDE is a linear one, we may wlog $\|\alpha\| = 1$.
By the regularity theorem, it suffices to prove that $\Delta\omega = \alpha$ has a weak solution. We define $\ell : \Delta\Omega^p(M) \to \mathbb{R}$ as follows: for each $\eta \in \Omega^p(M)$, we put
$$\ell(\Delta\eta) = \langle \eta, \alpha\rangle.$$
We check this is well-defined. Suppose $\Delta\eta = \Delta\xi$. Then $\eta - \xi \in \mathcal{H}^p$, and we have
$$\langle \eta, \alpha\rangle - \langle \xi, \alpha\rangle = \langle \eta - \xi, \alpha\rangle = 0,$$
since $\alpha \in (\mathcal{H}^p)^\perp$.
We next want to show the boundedness property. We claim that there exists a positive $C > 0$ such that
$$\ell(\Delta\eta) \leq C\|\Delta\eta\|$$
for all $\eta \in \Omega^p(M)$. To see this, we first note that by Cauchy–Schwarz, we have
$$|\langle \alpha, \eta\rangle| \leq \|\alpha\| \cdot \|\eta\| = \|\eta\|.$$
So it suffices to show that there is a $C > 0$ such that
$$\|\eta\| \leq C\|\Delta\eta\|$$
for every $\eta \in (\mathcal{H}^p)^\perp$; this restriction is harmless, since replacing $\eta$ by its component in $(\mathcal{H}^p)^\perp$ changes neither $\Delta\eta$ nor $\langle \eta, \alpha\rangle$.
Suppose not. Then we can find a sequence $\eta_k \in (\mathcal{H}^p)^\perp$ such that $\|\eta_k\| = 1$ and $\|\Delta\eta_k\| \to 0$.
But then $\|\Delta\eta_k\|$ is certainly bounded. So by the compactness theorem, we may wlog $\eta_k$ is Cauchy. Then for any $\psi \in \Omega^p(M)$, the sequence $\langle \psi, \eta_k\rangle$ is Cauchy, by Cauchy–Schwarz, hence convergent.
We define $a : \Omega^p(M) \to \mathbb{R}$ by
$$a(\psi) = \lim_{k \to \infty} \langle \psi, \eta_k\rangle.$$
Then we have
$$a(\Delta\psi) = \lim_{k \to \infty} \langle \eta_k, \Delta\psi\rangle = \lim_{k \to \infty} \langle \Delta\eta_k, \psi\rangle = 0.$$
So we know that $a$ is a weak solution of $\Delta\xi = 0$. By the regularity theorem again, we have
$$a(\psi) = \langle \xi, \psi\rangle$$
for some $\xi \in \Omega^p(M)$. Then $\xi \in \mathcal{H}^p$.
We claim that $\eta_k \to \xi$. Let $\varepsilon > 0$, and pick $N$ such that $n, m > N$ implies $\|\eta_n - \eta_m\| < \varepsilon$. Then
$$\|\eta_n - \xi\|^2 = \langle \eta_n - \xi, \eta_n - \xi\rangle \leq |\langle \eta_m - \xi, \eta_n - \xi\rangle| + \varepsilon\|\eta_n - \xi\|.$$
Taking the limit as $m \to \infty$, the first term vanishes, and this tells us $\|\eta_n - \xi\| \leq \varepsilon$. So $\eta_n \to \xi$.
But this is bad. Since $\eta_k \in (\mathcal{H}^p)^\perp$, and $(\mathcal{H}^p)^\perp$ is closed, we know $\xi \in (\mathcal{H}^p)^\perp$. But we also showed $\xi \in \mathcal{H}^p$. So $\xi = 0$. But we also know $\|\xi\| = \lim \|\eta_k\| = 1$, which is a contradiction. So $\ell$ is bounded.
We then extend $\ell$ to a bounded linear functional on all of $\Omega^p(M)$ by Hahn–Banach. Then we are done.
That was a correct proof, but we just pulled a bunch of theorems out of nowhere, and non-analysts might not be sufficiently convinced. We now look at an explicit example, namely the torus, and sketch a direct proof of Hodge decomposition. In this case, what we needed for the proof reduces to the fact that Fourier series and Green’s functions work, which is IB Methods.
Consider $M = T^n = \mathbb{R}^n/(2\pi\mathbb{Z})^n$, the $n$-torus with the flat metric. This has local coordinates $(x_1, \cdots, x_n)$, induced from the Euclidean space. This is convenient because $\Lambda^p T^*M$ is trivialized by $\{\mathrm{d}x_{i_1} \wedge \cdots \wedge \mathrm{d}x_{i_p}\}$. Moreover, the Laplacian is just given by
$$\Delta(\alpha\, \mathrm{d}x_{i_1} \wedge \cdots \wedge \mathrm{d}x_{i_p}) = -\sum_{i=1}^n \frac{\partial^2\alpha}{\partial x_i^2}\, \mathrm{d}x_{i_1} \wedge \cdots \wedge \mathrm{d}x_{i_p}.$$
So to do Hodge decomposition, it suffices to consider the case $p = 0$, and we are just looking at functions in $C^\infty(T^n)$, namely the $2\pi$-periodic functions on $\mathbb{R}^n$.
Here we will quote the fact that Fourier series work.
Fact. Let $\varphi \in C^\infty(T^n)$. Then it can be (uniquely) represented by a convergent Fourier series
$$\varphi(x) = \sum_{k \in \mathbb{Z}^n} \varphi_k e^{ik\cdot x},$$
where $k$ and $x$ are vectors, and $k \cdot x$ is the standard inner product, and this is uniformly convergent in all derivatives. In fact, $\varphi_k$ can be given by
$$\varphi_k = \frac{1}{(2\pi)^n} \int_{T^n} \varphi(x) e^{-ik\cdot x}\, \mathrm{d}x.$$
Consider the inner product
$$\langle \varphi, \psi\rangle = (2\pi)^n \sum \bar\varphi_k \psi_k$$
on $\ell^2$, and define the subspace
$$H^\infty = \left\{(\varphi_k) \in \ell^2 : \varphi_k = o(|k|^m) \text{ for all } m \in \mathbb{Z}\right\}.$$
Then the map
$$\mathcal{F} : C^\infty(T^n) \to \ell^2,\quad \varphi \mapsto (\varphi_k)$$
is an isometric bijection onto $H^\infty$.
So we have reduced our problem of working with functions on a torus to working with these infinite series. This makes our calculations rather more explicit.
The key property is that the Laplacian is given by
$$\mathcal{F}(\Delta\varphi) = (|k|^2 \varphi_k),$$
with a positive sign, since $\Delta = -\sum \partial_i^2$ in our convention. In some sense, $\mathcal{F}$ “diagonalizes” the Laplacian. It is now clear that
$$\mathcal{H}^0 = \{\varphi \in C^\infty(T^n) : \varphi_k = 0 \text{ for all } k \neq 0\},\quad (\mathcal{H}^0)^\perp = \{\varphi \in C^\infty(T^n) : \varphi_0 = 0\}.$$
Moreover, since we can divide by $|k|^2$ whenever $k$ is non-zero, it follows that
$$(\mathcal{H}^0)^\perp = \Delta C^\infty(T^n).$$
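The division-by-$|k|^2$ argument is exactly how one solves $\Delta\omega = \alpha$ in practice. The following sketch carries it out on $T^1$ with a discrete Fourier transform (an illustration under the assumption that $\alpha$ has zero mean, i.e. lies in $(\mathcal{H}^0)^\perp$; the test function is ours):

```python
import numpy as np

# Solve Delta omega = alpha on T^1 = R/2piZ via Fourier series, with the sign
# convention Delta = -d^2/dx^2 used in these notes.
N = 256
x = 2 * np.pi * np.arange(N) / N
alpha = np.cos(3 * x) + 0.5 * np.sin(7 * x)   # zero-mean test function

k = np.fft.fftfreq(N, d=1.0 / N)              # integer frequencies
alpha_k = np.fft.fft(alpha)
assert abs(alpha_k[0]) < 1e-8                 # zero mean: the solvability condition

omega_k = np.zeros_like(alpha_k)
nz = k != 0
omega_k[nz] = alpha_k[nz] / k[nz] ** 2        # divide by |k|^2 on nonzero modes
omega = np.real(np.fft.ifft(omega_k))

# Check Delta omega = -omega'' equals alpha, via spectral differentiation.
lap_omega = np.real(np.fft.ifft(k ** 2 * np.fft.fft(omega)))
assert np.max(np.abs(lap_omega - alpha)) < 1e-8
print("solved Delta omega = alpha on T^1; max residual:",
      np.max(np.abs(lap_omega - alpha)))
```

Note that the constant mode is exactly the harmonic part: it is killed by $\Delta$, which is why $\alpha$ must have no component in $\mathcal{H}^0$.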
4.3 Divergence
In ordinary multi-variable calculus, we have the notion of the divergence. This makes sense in general as well. Given any $X \in \mathrm{Vect}(M)$, we have
$$\nabla X \in \Gamma(TM \otimes T^*M) = \Gamma(\mathrm{End}\, TM).$$
Now we can take the trace of this, because the trace doesn’t depend on the choice of basis.
Definition (Divergence). The divergence of a vector field $X \in \mathrm{Vect}(M)$ is
$$\operatorname{div} X = \operatorname{tr}(\nabla X).$$
It is not hard to see that this extends the familiar definition of the divergence. Indeed, by definition of trace, for any local orthonormal frame field $\{e_i\}$, we have
$$\operatorname{div} X = \sum_{i=1}^n g(\nabla_{e_i} X, e_i).$$
It is straightforward to prove from the definition that
Proposition.
$$\operatorname{div}(fX) = \operatorname{tr}(\nabla(fX)) = f\operatorname{div} X + \langle \mathrm{d}f, X\rangle.$$
The key result about divergence is the following:
Theorem. Let $\theta \in \Omega^1(M)$, and let $X_\theta \in \mathrm{Vect}(M)$ be such that $\langle \theta, V\rangle = g(X_\theta, V)$ for all $V \in TM$. Then
$$\delta\theta = -\operatorname{div} X_\theta.$$
So the divergence isn’t actually a new operator. However, we have some rather more explicit formulas for the divergence, and this helps us understand $\delta$ better.
To prove this, we need a series of lemmas.
Lemma. In local coordinates, for any $p$-form $\psi$, we have
$$\mathrm{d}\psi = \sum_{k=1}^n \mathrm{d}x^k \wedge \nabla_k \psi.$$
Proof. We fix a point $x \in M$, and we may wlog work in normal coordinates at $x$. By linearity and change of coordinates, we wlog
$$\psi = f\, \mathrm{d}x^1 \wedge \cdots \wedge \mathrm{d}x^p.$$
Now the left hand side is just
$$\mathrm{d}\psi = \sum_{k=p+1}^n \frac{\partial f}{\partial x^k}\, \mathrm{d}x^k \wedge \mathrm{d}x^1 \wedge \cdots \wedge \mathrm{d}x^p.$$
But this is also what the RHS is, because $\nabla_k = \partial_k$ at $x$.
We also need the following definition, which is useful in its own right.
Definition (Interior product). Let $X \in \mathrm{Vect}(M)$. We define the interior product $i(X) : \Omega^p(M) \to \Omega^{p-1}(M)$ by
$$(i(X)\psi)(Y_1, \cdots, Y_{p-1}) = \psi(X, Y_1, \cdots, Y_{p-1}).$$
This is sometimes written as $i(X)\psi = X \lrcorner \psi$.
Lemma. We have
$$(\operatorname{div} X)\, \omega_g = \mathrm{d}(i(X)\, \omega_g)$$
for all $X \in \mathrm{Vect}(M)$.
Proof. By unwrapping the definition of $i(X)$, we see that
$$\nabla_Y(i(X)\psi) = i(\nabla_Y X)\psi + i(X)\nabla_Y \psi.$$
From example sheet 3, we know that $\nabla\omega_g = 0$. So it follows that
$$\nabla_Y(i(X)\, \omega_g) = i(\nabla_Y X)\, \omega_g.$$
Therefore we obtain
$$\begin{aligned}
\mathrm{d}(i(X)\omega_g) &= \sum_{k=1}^n \mathrm{d}x^k \wedge \nabla_k(i(X)\omega_g)\\
&= \sum_{k=1}^n \mathrm{d}x^k \wedge i(\nabla_k X)\omega_g\\
&= \sum_{k=1}^n \mathrm{d}x^k \wedge i(\nabla_k X)\left(\sqrt{|g|}\, \mathrm{d}x^1 \wedge \cdots \wedge \mathrm{d}x^n\right)\\
&= \mathrm{d}x^k(\nabla_k X)\, \omega_g\\
&= (\operatorname{div} X)\, \omega_g.
\end{aligned}$$
Note that this requires us to think carefully about how wedge products work ($i(X)(\alpha \wedge \beta)$ is not just $\alpha(X)\beta$, or else $\alpha \wedge \beta$ would not be anti-symmetric).
Corollary (Divergence theorem). For any vector field $X$, we have
$$\int_M \operatorname{div}(X)\, \omega_g = \int_M \mathrm{d}(i(X)\, \omega_g) = 0.$$
We can now prove the theorem.
Theorem. Let $\theta \in \Omega^1(M)$, and let $X_\theta \in \mathrm{Vect}(M)$ be such that $\langle \theta, V\rangle = g(X_\theta, V)$ for all $V \in TM$. Then
$$\delta\theta = -\operatorname{div} X_\theta.$$
Proof. By the formal adjoint property of $\delta$, we know that for any $f \in C^\infty(M)$, we have
$$\int_M g(\mathrm{d}f, \theta)\, \omega_g = \int_M f\delta\theta\, \omega_g.$$
So we want to show that
$$\int_M g(\mathrm{d}f, \theta)\, \omega_g = -\int_M f\operatorname{div} X_\theta\, \omega_g.$$
But by the product rule, we have
$$\int_M \operatorname{div}(fX_\theta)\, \omega_g = \int_M g(\mathrm{d}f, \theta)\, \omega_g + \int_M f\operatorname{div} X_\theta\, \omega_g.$$
So the result follows by the divergence theorem.
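The sign conventions here can be sanity-checked in flat space. For $p = 1$, $n = 3$ the definition gives $\delta = (-1)^{3(1+1)+1}\star\mathrm{d}\star = -\star\mathrm{d}\star$; the sketch below (our own worked example, with a hypothetical 1-form $\theta$) traces this through the standard $\mathbb{R}^3$ Hodge star and compares against $-\operatorname{div} X_\theta$.

```python
import sympy as sp

# Sanity check of delta(theta) = -div(X_theta) on flat R^3.
x, y, z = sp.symbols('x y z')
P, Q, R = x**2 * y, sp.sin(z) * x, x + y * z    # theta = P dx + Q dy + R dz

# Hodge star in the orthonormal coframe on R^3:
#   *(dx) = dy^dz, *(dy) = dz^dx, *(dz) = dx^dy,
# so *theta = P dy^dz + Q dz^dx + R dx^dy, hence
#   d(*theta) = (P_x + Q_y + R_z) dx^dy^dz  and  *(d*theta) = P_x + Q_y + R_z.
star_d_star = sp.diff(P, x) + sp.diff(Q, y) + sp.diff(R, z)
delta_theta = -star_d_star                      # delta = -*d* for p = 1, n = 3

# X_theta is the metric dual vector field (P, Q, R); in flat coordinates the
# covariant derivative is the ordinary one, so div X_theta = P_x + Q_y + R_z.
div_X = sp.diff(P, x) + sp.diff(Q, y) + sp.diff(R, z)
assert sp.simplify(delta_theta + div_X) == 0
print(delta_theta)
```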
We can now use this to produce some really explicit formulae for what $\delta$ is, which will be very useful next section.
Corollary. If $\theta$ is a 1-form, and $\{e_k\}$ is a local orthonormal frame field, then
$$\delta\theta = -\sum_{k=1}^n i(e_k)\nabla_{e_k}\theta = -\sum_{k=1}^n \langle \nabla_{e_k}\theta, e_k\rangle.$$
Proof. We note that
$$\begin{aligned}
e_i\langle \theta, e_i\rangle &= \langle \nabla_{e_i}\theta, e_i\rangle + \langle \theta, \nabla_{e_i} e_i\rangle,\\
e_i\, g(X_\theta, e_i) &= g(\nabla_{e_i} X_\theta, e_i) + g(X_\theta, \nabla_{e_i} e_i).
\end{aligned}$$
By definition of $X_\theta$, this implies that
$$\langle \nabla_{e_i}\theta, e_i\rangle = g(\nabla_{e_i} X_\theta, e_i).$$
So we obtain
$$\delta\theta = -\operatorname{div} X_\theta = -\sum_{i=1}^n g(\nabla_{e_i} X_\theta, e_i) = -\sum_{i=1}^n \langle \nabla_{e_i}\theta, e_i\rangle.$$
We will assume a version for 2-forms (the general result is again on the third example sheet):
Proposition. If $\beta \in \Omega^2(M)$, then
$$(\delta\beta)(Y) = -\sum_{k=1}^n (\nabla_{e_k}\beta)(e_k, Y).$$
In other words,
$$\delta\beta = -\sum_{k=1}^n i(e_k)(\nabla_{e_k}\beta).$$
4.4 Introduction to Bochner’s method
How can we apply the Hodge decomposition theorem? The Hodge decomposition theorem tells us the de Rham cohomology group is the kernel of the Laplace–Beltrami operator $\Delta$. So if we want to show, say, $H^1_{\mathrm{dR}}(M) = 0$, then we want to show that $\Delta\alpha \neq 0$ for all non-zero $\alpha \in \Omega^1(M)$. The strategy is to show that
$$\langle\langle \alpha, \Delta\alpha\rangle\rangle \neq 0$$
for all $\alpha \neq 0$. Then we have shown that $H^1_{\mathrm{dR}}(M) = 0$. In fact, we will show that this inner product is positive. To do so, the general strategy is to introduce an operator $T$ with adjoint $T^*$, and then write
$$\Delta = T^*T + C$$
for some operator $C$. We will choose $T$ cleverly such that $C$ is very simple.
Now if we can find a manifold such that $C$ is always positive, then since
$$\langle\langle T^*T\alpha, \alpha\rangle\rangle = \langle\langle T\alpha, T\alpha\rangle\rangle \geq 0,$$
it follows that $\Delta$ is always positive, and so $H^1_{\mathrm{dR}}(M) = 0$.
Our choice of $T$ will be the covariant derivative $\nabla$ itself. We can formulate this more generally. Suppose we have the following data:
– A Riemannian manifold $M$.
– A vector bundle $E \to M$.
– An inner product $h$ on $E$.
– A connection $\nabla = \nabla^E : \Omega^0(E) \to \Omega^1(E)$ on $E$.
We are eventually going to take $E = T^*M$, but we can still proceed in the general setting for a while.
The formal adjoint $(\nabla^E)^* : \Omega^1(E) \to \Omega^0(E)$ is defined by the relation
$$\int_M \langle \nabla\alpha, \beta\rangle_{E,g}\, \omega_g = \int_M \langle \alpha, \nabla^*\beta\rangle_E\, \omega_g$$
for all $\alpha \in \Omega^0(E)$ and $\beta \in \Omega^1(E)$. Since $h$ is non-degenerate, this defines $\nabla^*$ uniquely.
Definition (Covariant Laplacian). The covariant Laplacian is
$$\nabla^*\nabla : \Gamma(E) \to \Gamma(E).$$
We are now going to focus on the case $E = T^*M$. It is helpful to have an explicit formula for $\nabla^*$, which we shall leave as an exercise.
As mentioned, the objective is to understand $\Delta - \nabla^*\nabla$. The theorem is that this difference is given by the Ricci curvature.
This can’t be quite right, because the Ricci curvature is a bilinear form on $TM$, but $\Delta - \nabla^*\nabla$ is a linear endomorphism $\Omega^1(M) \to \Omega^1(M)$. Thus, we need to define an alternative version of the Ricci curvature by “raising indices”. In coordinates, we consider $g^{jk}\operatorname{Ric}_{ik}$ instead.
We can also define this $\operatorname{Ric}$ without resorting to coordinates. Recall that given an $\alpha \in \Omega^1(M)$, we defined $X_\alpha \in \mathrm{Vect}(M)$ to be the unique field such that
$$\alpha(Z) = g(X_\alpha, Z)$$
for all $Z \in \mathrm{Vect}(M)$. Then given $\alpha \in \Omega^1(M)$, we define $\operatorname{Ric}(\alpha) \in \Omega^1(M)$ by
$$\operatorname{Ric}(\alpha)(X) = \operatorname{Ric}(X, X_\alpha).$$
With this notation, the theorem is
Theorem (Bochner–Weitzenböck formula). On an oriented Riemannian manifold, we have
$$\Delta = \nabla^*\nabla + \operatorname{Ric}.$$
Before we move on to the proof of this formula, we first give an application.
Corollary. Let $(M, g)$ be a compact connected oriented manifold. Then
– If $\operatorname{Ric}(g) > 0$ at each point, then $H^1_{\mathrm{dR}}(M) = 0$.
– If $\operatorname{Ric}(g) \geq 0$ at each point, then $b^1(M) = \dim H^1_{\mathrm{dR}}(M) \leq n$.
– If $\operatorname{Ric}(g) \geq 0$ at each point, and $b^1(M) = n$, then $g$ is flat.
Proof. By Bochner–Weitzenböck, we have
$$\langle\langle \Delta\alpha, \alpha\rangle\rangle = \langle\langle \nabla^*\nabla\alpha, \alpha\rangle\rangle + \int_M \operatorname{Ric}(\alpha, \alpha)\, \omega_g = \|\nabla\alpha\|_2^2 + \int_M \operatorname{Ric}(\alpha, \alpha)\, \omega_g.$$
– Suppose $\operatorname{Ric} > 0$. If $\alpha \neq 0$, then the RHS is strictly positive. So the left-hand side is non-zero. So $\Delta\alpha \neq 0$. So $\mathcal{H}^1 \cong H^1_{\mathrm{dR}}(M) = 0$.
– Suppose $\alpha$ is such that $\Delta\alpha = 0$. Then the above formula forces $\nabla\alpha = 0$. So if we know $\alpha(x)$ for some fixed $x \in M$, then we know the value of $\alpha$ everywhere by parallel transport. Thus $\alpha$ is determined by the initial condition $\alpha(x)$. Thus there are $\leq n = \dim T_x^* M$ linearly independent such $\alpha$.
– If $b^1(M) = n$, then we can pick a basis $\alpha_1, \cdots, \alpha_n$ of $\mathcal{H}^1$. Then as above, these are parallel 1-forms. Then we can pick a dual basis $X_1, \cdots, X_n \in \mathrm{Vect}(M)$. We claim they are also parallel, i.e. $\nabla X_i = 0$. To prove this, we note that
$$\langle \alpha_j, \nabla X_i\rangle + \langle \nabla\alpha_j, X_i\rangle = \nabla\langle \alpha_j, X_i\rangle.$$
But $\langle \alpha_j, X_i\rangle$ is constantly 0 or 1 depending on $i$ and $j$, so the RHS vanishes. Similarly, the second term on the left vanishes. Since the $\alpha_j$ span, we know we must have $\nabla X_i = 0$.
Now we have

    R(X_i, X_j)X_k = (∇_{[X_i,X_j]} − [∇_{X_i}, ∇_{X_j}])X_k = 0.

Since this is valid for all i, j, k, we know R vanishes at each point. So we
are done.
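As a sanity check (my example, not from the lectures), the flat torus shows the second and third parts of the corollary are sharp.

```latex
% On T^n = R^n/Z^n with the flat metric, the coordinate 1-forms
% dx^1, \dots, dx^n descend to parallel (hence harmonic) 1-forms spanning
% H^1_{dR}, so
b_1(T^n) = \dim H^1_{\mathrm{dR}}(T^n) = n ,
% while Ric \equiv 0 and g is flat, as the third part predicts.
```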
Bochner–Weitzenböck can be exploited in a number of similar situations.

In the third part of the theorem, we haven't actually proved the optimal
statement. We can deduce more than the flatness of the metric, but this requires
some slightly more advanced topology. We will provide a sketch proof of the
theorem, making certain assertions about topology.
Proposition. In the case of (iii), M is in fact isometric to a flat torus.
Proof sketch. We fix p ∈ M and consider the map M → R^n given by

    x ↦ (∫_p^x α_i)_{i=1,...,n} ∈ R^n,

where the α_i are as in the previous proof. The integral is taken along any path
from p to x, so a priori this is not well-defined. But by Stokes' theorem, and
the fact that dα_i = 0, the integral only depends on the homotopy class of the
path.
In fact, the ambiguity in ∫_p^x α_i depends only on the class of the loop in
H_1(M), which is finitely generated. Thus, ∫_p^x α_i descends to a well-defined
map to S^1 = R/λ_iZ for some λ_i ≠ 0. Therefore we obtain a map
M → (S^1)^n = T^n. Moreover, a bit of inspection shows this is a local
diffeomorphism. But since the spaces involved are compact, it follows by some
topology arguments that it must be a covering map. But again by compactness,
this is a finite covering map. So M must be a torus. So we are done.
We only proved this for 1-forms, but it is in fact valid for forms of any
degree. To do so, we consider E = Λ^p T∗M, and then we have a map

    ∇ : Ω^0_M(E) → Ω^1_M(E),

and this has a formal adjoint

    ∇∗ : Ω^1_M(E) → Ω^0_M(E).

Now if α ∈ Ω^p(M), then it can be shown that

    ∆α = ∇∗∇α + R(α),

where R is a linear map Ω^p(M) → Ω^p(M) depending on the curvature. Then
by the same proof, it follows that if R > 0 at all points, then H^k(M) = 0 for
all k = 1, ..., n − 1.

If R ≥ 0 only, which in particular is the case if the space is flat, then we
have

    b_k(M) ≤ (n choose k) = dim Λ^k T∗M,

and moreover ∆α = 0 iff ∇α = 0.
Proof of Bochner–Weitzenböck

We now move on to actually prove Bochner–Weitzenböck. We first produce an
explicit formula for ∇∗, and hence ∇∗∇.
Proposition. Let e_1, ..., e_n be an orthonormal frame field, and β ∈ Ω^1(T∗M).
Then we have

    ∇∗β = −∑_{i=1}^n i(e_i)∇_{e_i}β.
Proof. Let α ∈ Ω^0(T∗M). Then by definition, we have

    ⟨∇α, β⟩ = ∑_{i=1}^n ⟨∇_{e_i}α, β(e_i)⟩.

Consider the 1-form given by

    θ(Y) = ⟨α, β(Y)⟩.

Then we have

    div X_θ = ∑_{i=1}^n ⟨∇_{e_i}X_θ, e_i⟩
            = ∑_{i=1}^n (∇_{e_i}⟨X_θ, e_i⟩ − ⟨X_θ, ∇_{e_i}e_i⟩)
            = ∑_{i=1}^n (∇_{e_i}⟨α, β(e_i)⟩ − ⟨α, β(∇_{e_i}e_i)⟩)
            = ∑_{i=1}^n (⟨∇_{e_i}α, β(e_i)⟩ + ⟨α, ∇_{e_i}(β(e_i))⟩ − ⟨α, β(∇_{e_i}e_i)⟩)
            = ∑_{i=1}^n (⟨∇_{e_i}α, β(e_i)⟩ + ⟨α, (∇_{e_i}β)(e_i)⟩).

Integrating and applying the divergence theorem, we get

    ∫_M ⟨∇α, β⟩ ω_g = −∫_M ∑_{i=1}^n ⟨α, (∇_{e_i}β)(e_i)⟩ ω_g.

So the result follows.
Corollary. For a local orthonormal frame field e_1, ..., e_n, we have

    ∇∗∇α = −∑_{i=1}^n ∇_{e_i}∇_{e_i}α.
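For instance (an illustration of the corollary on flat R^n, not from the lectures): taking e_i = ∂/∂x_i, which is orthonormal with ∇e_i = 0 everywhere, the covariant Laplacian acts componentwise.

```latex
% For \alpha = \sum_j \alpha_j \, dx^j on flat R^n,
\nabla^*\nabla\alpha
  = -\sum_{i=1}^n \nabla_{\partial_i}\nabla_{\partial_i}\alpha
  = -\sum_{j} \Big( \sum_{i=1}^n \partial_i^2 \alpha_j \Big)\, dx^j ,
% and since Ric = 0, Bochner--Weitzenböck says \Delta acts componentwise too.
```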
We next want to figure out more explicit expressions for dδ and δd. To make
our lives much easier, we will pick a normal frame field:

Definition (Normal frame field). A local orthonormal frame field {e_k} is
normal at p if further

    ∇e_k|_p = 0

for all k.
It is a fact that normal frame fields exist. From now on, we will fix a point
p ∈ M, and assume that {e_k} is a normal orthonormal frame field at p. Thus,
the formulae we derive are only valid at p, but this is fine, because p was
arbitrary.
The first term dδ is relatively straightforward.

Lemma. Let α ∈ Ω^1(M), X ∈ Vect(M). Then

    ⟨dδα, X⟩ = −∑_{i=1}^n ⟨∇_X∇_{e_i}α, e_i⟩.
Proof.

    ⟨dδα, X⟩ = X(δα) = −∑_{i=1}^n X⟨∇_{e_i}α, e_i⟩ = −∑_{i=1}^n ⟨∇_X∇_{e_i}α, e_i⟩.
This takes care of one half of ∆. For the other half, we need a bit more work.
Recall that we previously found a formula for δ. We now re-express that formula
in terms of this local orthonormal frame field.
Lemma. For any 2-form β, we have

    (δβ)(X) = ∑_{k=1}^n (−e_k(β(e_k, X)) + β(e_k, ∇_{e_k}X)).
Proof.

    (δβ)(X) = −∑_{k=1}^n (∇_{e_k}β)(e_k, X)
            = ∑_{k=1}^n (−e_k(β(e_k, X)) + β(∇_{e_k}e_k, X) + β(e_k, ∇_{e_k}X))
            = ∑_{k=1}^n (−e_k(β(e_k, X)) + β(e_k, ∇_{e_k}X)).
Since we want to understand δdα for α a 1-form, we want to find a decent
formula for dα.

Lemma. For any 1-form α and vector fields X, Y, we have

    dα(X, Y) = ⟨∇_X α, Y⟩ − ⟨∇_Y α, X⟩.
Proof. Since the connection is torsion-free, we have

    [X, Y] = ∇_X Y − ∇_Y X.

So we obtain

    dα(X, Y) = X⟨α, Y⟩ − Y⟨α, X⟩ − ⟨α, [X, Y]⟩ = ⟨∇_X α, Y⟩ − ⟨∇_Y α, X⟩.
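A one-line illustration of the lemma (my example, on flat R²):

```latex
% Take \alpha = x \, dy, X = \partial_x, Y = \partial_y on R^2. Then
d\alpha = dx \wedge dy , \qquad d\alpha(X, Y) = 1 ,
% while \nabla_X \alpha = dy and \nabla_Y \alpha = 0 for the flat connection, so
\langle \nabla_X \alpha, Y \rangle - \langle \nabla_Y \alpha, X \rangle = 1 - 0 = 1 .
```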
Finally, we can put these together to get

Lemma. For any 1-form α and vector field X, we have

    ⟨δdα, X⟩ = −∑_{k=1}^n ⟨∇_{e_k}∇_{e_k}α, X⟩ + ∑_{k=1}^n ⟨∇_{e_k}∇_X α, e_k⟩
               − ∑_{k=1}^n ⟨∇_{∇_{e_k}X}α, e_k⟩.
Proof.

    ⟨δdα, X⟩ = ∑_{k=1}^n [−e_k(dα(e_k, X)) + dα(e_k, ∇_{e_k}X)]
             = ∑_{k=1}^n [−e_k(⟨∇_{e_k}α, X⟩ − ⟨∇_X α, e_k⟩)
                          + ⟨∇_{e_k}α, ∇_{e_k}X⟩ − ⟨∇_{∇_{e_k}X}α, e_k⟩]
             = ∑_{k=1}^n [−⟨∇_{e_k}∇_{e_k}α, X⟩ − ⟨∇_{e_k}α, ∇_{e_k}X⟩ + ⟨∇_{e_k}∇_X α, e_k⟩
                          + ⟨∇_{e_k}α, ∇_{e_k}X⟩ − ⟨∇_{∇_{e_k}X}α, e_k⟩]
             = −∑_{k=1}^n ⟨∇_{e_k}∇_{e_k}α, X⟩ + ∑_{k=1}^n ⟨∇_{e_k}∇_X α, e_k⟩
               − ∑_{k=1}^n ⟨∇_{∇_{e_k}X}α, e_k⟩.
What does this get us? The first term on the right is exactly the ∇∗∇ term
we wanted. If we add ⟨dδα, X⟩ to this, then we get

    ∑_{k=1}^n ⟨([∇_{e_k}, ∇_X] − ∇_{∇_{e_k}X})α, e_k⟩.

We notice that, at p,

    [e_k, X] = ∇_{e_k}X − ∇_X e_k = ∇_{e_k}X.

So we can alternatively write the above as

    ∑_{k=1}^n ⟨([∇_{e_k}, ∇_X] − ∇_{[e_k,X]})α, e_k⟩.

The differential operator on the left looks just like the Ricci curvature. Recall
that

    R(X, Y) = ∇_{[X,Y]} − [∇_X, ∇_Y].
Lemma (Ricci identity). Let M be any Riemannian manifold, X, Y, Z ∈
Vect(M) and α ∈ Ω^1(M). Then

    ⟨([∇_X, ∇_Y] − ∇_{[X,Y]})α, Z⟩ = ⟨α, R(X, Y)Z⟩.
Proof. We note that

    ⟨∇_{[X,Y]}α, Z⟩ + ⟨α, ∇_{[X,Y]}Z⟩ = [X, Y]⟨α, Z⟩
                                     = ⟨[∇_X, ∇_Y]α, Z⟩ + ⟨α, [∇_X, ∇_Y]Z⟩.

The second equality follows from writing [X, Y] = XY − YX. We then rearrange
and use that R(X, Y) = ∇_{[X,Y]} − [∇_X, ∇_Y].
Corollary. For any 1-form α and vector field X, we have

    ⟨∆α, X⟩ = ⟨∇∗∇α, X⟩ + Ric(α)(X).

This is the theorem we wanted.

Proof. We have found that

    ⟨∆α, X⟩ = ⟨∇∗∇α, X⟩ + ∑_{k=1}^n ⟨α, R(e_k, X)e_k⟩.

We have

    ∑_{k=1}^n ⟨α, R(e_k, X)e_k⟩ = ∑_{k=1}^n g(X_α, R(e_k, X)e_k) = Ric(X_α, X) = Ric(α)(X).

So we are done.
5 Riemannian holonomy groups

Again let M be a Riemannian manifold, which is always assumed to be connected.
Let x ∈ M, and consider a path γ ∈ Ω(x, y), γ : [0, 1] → M. At the very
beginning of the course, we saw that γ gives us a parallel transport from T_xM
to T_yM. Explicitly, given any X_0 ∈ T_xM, there exists a unique vector field
X along γ with

    ∇X/dt = 0,    X(0) = X_0.
Definition (Holonomy transformation). The holonomy transformation P(γ)
sends X_0 ∈ T_xM to X(1) ∈ T_yM.

We know that this map is invertible, and preserves the inner product. In
particular, if x = y, then P(γ) ∈ O(T_xM) ≅ O(n).
Definition (Holonomy group). The holonomy group of M at x ∈ M is

    Hol_x(M) = {P(γ) : γ ∈ Ω(x, x)} ⊆ O(T_xM).

The group operation is given by composition of linear maps, which corresponds
to composition of paths.
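To get a feel for the definition, here is the classical example of the round 2-sphere (quoted without proof; it is not computed in these notes).

```latex
% On the unit round S^2, parallel transport around a simple loop \gamma
% bounding a region \Omega rotates T_x S^2 by the angle
\theta(\gamma) = \int_\Omega K \, \omega_g = \mathrm{Area}(\Omega)
% (Gauss--Bonnet, with K \equiv 1). Every angle is realized by some loop, so
% Hol_x(S^2) = SO(T_x S^2) \cong SO(2).
```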
We note that this group doesn't really depend on the point x. Given any
other y ∈ M, we can pick a path β ∈ Ω(x, y). Writing P_β instead of P(β), we
have a map

    Hol_x(M) → Hol_y(M),    P_γ ↦ P_β ∘ P_γ ∘ P_β^{−1} ∈ Hol_y(M).
So we see that Hol_x(M) and Hol_y(M) are isomorphic. In fact, after picking an
isomorphism O(T_xM) ≅ O(T_yM) ≅ O(n), these subgroups are conjugate as
subgroups of O(n). We denote this conjugacy class by Hol(M).

Note that depending on what we want to emphasize, we write Hol(M, g), or
even Hol(g) instead.

Really, Hol(M) is a representation (up to conjugacy) induced by the standard
representation of O(n) on R^n.
Proposition. If M is simply connected, then Hol_x(M) is path connected.

Proof. Hol_x(M) is the image of Ω(x, x) in O(n) under the map P, and this map
is continuous from the standard theory of ODEs. Simply connected means
Ω(x, x) is path connected. So Hol_x(M) is path connected.
It is convenient to consider the restricted holonomy group.

Definition (Restricted holonomy group). We define

    Hol⁰_x(M) = {P(γ) : γ ∈ Ω(x, x) nullhomotopic}.

As before, this group is, up to conjugacy, independent of the choice of the
point in the manifold. We write this group as Hol⁰(M).

Of course, Hol⁰(M) ⊆ Hol(M), and if π_1(M) = 0, then they are equal.

Corollary. Hol⁰(M) ⊆ SO(n).

Proof. Hol⁰(M) is connected, and thus lies in the connected component of the
identity in O(n).
Note that there is a natural action of Hol_x(M) and Hol⁰_x(M) on T∗_xM, on
Λ^p T∗_xM for all p, and more generally on tensor products of T_xM.
Fact.

– Hol⁰(M) is the connected component of Hol(M) containing the identity
  element.

– Hol⁰(M) is a Lie subgroup of SO(n), i.e. it is a subgroup and an immersed
  submanifold. Thus, the Lie algebra of Hol⁰(M) is a Lie subalgebra of
  so(n), which is just the skew-symmetric n × n matrices.

This is a consequence of Yamabe's theorem, which says that a path-connected
subgroup of a Lie group is a Lie subgroup.

We will not prove these.
Proposition (Fundamental principle of Riemannian holonomy). Let (M, g) be
a Riemannian manifold, and fix p, q ∈ Z^+ and x ∈ M. Then the following are
equivalent:

(i) There exists a (p, q)-tensor field α on M such that ∇α = 0.

(ii) There exists an element α_0 ∈ (T_xM)^⊗p ⊗ (T∗_xM)^⊗q such that α_0 is
invariant under the action of Hol_x(M).
Proof. To simplify notation, we consider only the case p = 0. The general case
works exactly the same way, with worse notation. For α ∈ (T∗_xM)^⊗q, we have

    (∇_X α)(X_1, ..., X_q) = X(α(X_1, ..., X_q)) − ∑_{i=1}^q α(X_1, ..., ∇_X X_i, ..., X_q).
Now let γ : [0, 1] → M be a loop at x. We choose vector fields X_i along γ
for i = 1, ..., q such that

    ∇X_i/dt = 0.

We write X_i(γ(0)) = X_i^0. Now if ∇α = 0, then this tells us

    (∇α/dt)(X_1, ..., X_q) = 0.

By our choice of X_i, we know that α(X_1, ..., X_q) is constant along γ. So we
know

    α(X_1^0, ..., X_q^0) = α(P_γ(X_1^0), ..., P_γ(X_q^0)).

So α is invariant under Hol_x(M). Then we can take α_0 = α_x.
Conversely, if we have such an α_0, then we can use parallel transport to
transfer it to everywhere in the manifold. Given any y ∈ M, we define α_y by

    α_y(X_1, ..., X_q) = α_0(P_γ(X_1), ..., P_γ(X_q)),

where γ is any path from y to x. This does not depend on the choice of γ
precisely because α_0 is invariant under Hol_x(M).

It remains to check that α is C^∞ with ∇α = 0, which is an easy exercise.
Example. Let M be oriented. Then we have a volume form ω_g. Since ∇ω_g = 0,
we can take α = ω_g. Here p = 0 and q = n. Also, its stabilizer is H = SO(n).
So we know Hol(M) ⊆ SO(n) if (and only if) M is oriented.

The "only if" part is not difficult, because we can use parallel transport to
transfer an orientation at a particular point to everywhere.
Example. Let x ∈ M, and suppose

    Hol_x(M) ⊆ U(n) = {g ∈ SO(2n) : gJ_0g^{−1} = J_0},

where

    J_0 = (  0  I )
          ( −I  0 ).

By looking at α_0 = J_0, we obtain α = J ∈ Γ(End TM) with ∇J = 0 and
J² = −1. This is a well-known standard object in complex geometry, and such
a J is an instance of an almost complex structure on M.
Example. Recall (from the theorem on applications of Bochner–Weitzenböck)
that a Riemannian manifold (M, g) is flat (i.e. R(g) ≡ 0) iff around each point
x ∈ M, there is a basis of parallel vector fields. So we find that (M, g) is
flat iff Hol⁰(M, g) = {id}.

It is essential that we use Hol⁰(M, g) rather than the full Hol(M, g). For
example, we can take the Klein bottle K with the flat metric. Then parallel
transport along a suitable closed loop γ has

    P_γ = ( 1   0 )
          ( 0  −1 ).

In fact, we can check that Hol(K) = Z_2. Note that here K is non-orientable.
Since we know that Hol(M) is a Lie group, we can talk about its Lie algebra.

Definition (Holonomy algebra). The holonomy algebra hol(M) is the Lie algebra
of Hol(M).

Thus hol(M) ≤ so(n) up to conjugation.
Now consider some open coordinate neighbourhood U ⊆ M with coordinates
x_1, ..., x_n. As before, we write

    ∂_i = ∂/∂x_i,    ∇_i = ∇_{∂_i}.

The curvature may also be written in terms of coordinates, R = (R^i_{j,kℓ}),
and we also have

    R(∂_k, ∂_ℓ) = −[∇_k, ∇_ℓ].
Thus, hol(M) contains

    (d/dt)|_{t=0} P(γ_t),

where γ_t is the loop traversing the boundary of the coordinate square of side
length √t in the (x_k, x_ℓ)-plane. By a direct computation, we find

    P(γ_t) = I + λtR(∂_k, ∂_ℓ) + o(t).

Here λ ∈ R is some non-zero absolute constant that doesn't depend on anything
(except convention).
Differentiating this with respect to t and taking the limit t → 0, we deduce
that for p ∈ U, we have

    R_p = (R^i_{j,kℓ})_p ∈ Λ² T∗_pM ⊗ hol_p(M),

where we think of hol_p(M) ⊆ End T_pM. Recall we also had the R_{ij,kℓ} version,
and because of the nice symmetry properties of R, we know

    (R_{ij,kℓ})_p ∈ S² hol_p(M) ⊆ Λ² T∗_pM ⊗ Λ² T∗_pM.

Strictly speaking, we should write

    (R^i_j{}^k_ℓ)_p ∈ S² hol_p(M),

but we can use the metric to go between T_pM and T∗_pM.
So far, what we have been doing is rather tautological. But it turns out this
allows us to decompose the de Rham cohomology groups.

In general, consider an arbitrary Lie subgroup G ⊆ GL_n(R). There is a
standard representation ρ of GL_n(R) on R^n, which restricts to a representation
(ρ, R^n) of G. This induces a representation (ρ^k, Λ^k(R^n)∗) of G.

This representation is in general not irreducible. We decompose it into
irreducible components (ρ^k_i, W^k_i), so that

    Λ^k(R^n)∗ = ⊕_i W^k_i.
We can do this for bundles instead of just vector spaces. Consider a manifold M
with a G-structure, i.e. there is an atlas of coordinate neighbourhoods where
the transition maps satisfy

    (∂x_α/∂x'_β)|_p ∈ G

for all p. Then we can use this to split our bundle of k-forms into well-defined
vector sub-bundles with typical fibers W^k_i:

    Λ^k T∗M = ⊕_i Λ^k_i.
We can furthermore say that every G-equivariant linear map φ : W^k_i → W^ℓ_j
induces a morphism of vector bundles Φ : Λ^k_i → Λ^ℓ_j.

Now suppose further that Hol(M) ≤ G ≤ O(n). Then we can show that
parallel transport preserves this decomposition into sub-bundles. So ∇ restricts
to a well-defined connection on each Λ^k_i.
Thus, if ξ ∈ Γ(Λ^k_i), then ∇ξ ∈ Γ(T∗M ⊗ Λ^k_i), and then we have
∇∗∇ξ ∈ Γ(Λ^k_i).

But we know the covariant Laplacian is related to Laplace–Beltrami via the
curvature. We only do this for 1-forms for convenience of notation. If
ξ ∈ Ω^1(M), then we have

    ∆ξ = ∇∗∇ξ + Ric(ξ).

We can check that Ric also preserves these subbundles. Then it follows that
∆ : Γ(Λ^1_j) → Γ(Λ^1_j) is well-defined.
Thus, we deduce

Theorem. Let (M, g) be a connected and oriented Riemannian manifold, and
consider the decomposition of the bundle of k-forms into irreducible
representations of the holonomy group,

    Λ^k T∗M = ⊕_i Λ^k_i.

In other words, each fiber (Λ^k_i)_x ⊆ Λ^k T∗_xM is an irreducible representation
of Hol_x(g). Then

(i) For all α ∈ Ω^k_i(M) ≡ Γ(Λ^k_i), we have ∆α ∈ Ω^k_i(M).

(ii) If M is compact, then we have a decomposition

    H^k_dR(M) = ⊕_i H^k_{i,dR}(M),

where

    H^k_{i,dR}(M) = {[α] : α ∈ Ω^k_i(M), ∆α = 0}.

The dimensions of these groups are known as the refined Betti numbers.

We have only proved this for k = 1, but the same proof technique can be
used to do it for arbitrary k.
Our treatment is rather abstract so far. But for example, if we are dealing
with complex manifolds, then we know that Hol(M) ≤ U(n). So this allows us
to have a canonical refinement of the de Rham cohomology, and this is known
as the Lefschetz decomposition.
6 The Cheeger–Gromoll splitting theorem

We will talk about the Cheeger–Gromoll splitting theorem. This is a hard
theorem, so we will not prove it. However, we will state it, and discuss a bit
about it. To state the theorem, we need some preparation.

Definition (Ray). Let (M, g) be a Riemannian manifold. A ray is a map
r : [0, ∞) → M such that r is a geodesic, and minimizes the distance between
any two points on the curve.
Definition (Line). A line is a map ℓ : R → M such that ℓ is a geodesic, and
minimizes the distance between any two points on the curve.
We have seen from the first example sheet that if M is a complete unbounded
manifold, then M has a ray from each point, i.e. for all x ∈ M, there exists a
ray r such that r(0) = x.
Definition (Connected at infinity). A complete manifold is said to be connected
at infinity if for every compact set K ⊆ M, there exists a compact C ⊇ K such
that for every two points p, q ∈ M \ C, there exists a path γ ∈ Ω(p, q) such
that γ(t) ∈ M \ K for all t.

We say M is disconnected at infinity if it is not connected at infinity.

Note that if M is disconnected at infinity, then it must be unbounded, and
in particular non-compact.
Lemma. If M is disconnected at infinity, then M contains a line.

Proof. Note that M is unbounded. Since M is disconnected at infinity, we can
find a compact subset K ⊆ M and sequences p_m, q_m → ∞ as m → ∞ (to make
this precise, we can pick some fixed point x, and then require d(x, p_m),
d(x, q_m) → ∞) such that every γ_m ∈ Ω(p_m, q_m) passes through K.

In particular, we pick γ_m to be a minimal geodesic from p_m to q_m
parametrized by arc-length. Then γ_m passes through K. By reparametrization,
we may assume γ_m(0) ∈ K.

Since K is compact, we can pass to a subsequence, and wlog γ_m(0) → x ∈ K
and γ̇_m(0) → a ∈ T_xM (suitably interpreted).

Then we claim the geodesic ℓ = γ_{x,a} is the desired line. To see this, since
solutions to ODEs depend smoothly on initial conditions, we can write the line
as

    ℓ(t) = lim_{m→∞} γ_m(t).

Then we know

    d(ℓ(s), ℓ(t)) = lim_{m→∞} d(γ_m(s), γ_m(t)) = |s − t|.

So we are done.
Let's look at some examples.

Example. The elliptic paraboloid

    {z = x² + y²} ⊆ R³

with the induced metric does not contain a line. To prove this, we can show
that any geodesic that is not a meridian must intersect itself.

Example. Any complete metric g on S^{n−1} × R contains a line, since such a
space is disconnected at ∞.
Theorem (Cheeger–Gromoll line-splitting theorem (1971)). If (M, g) is a
complete connected Riemannian manifold containing a line, and has Ric(g) ≥ 0
at each point, then M is isometric to a Riemannian product (N × R, g_0 + dt²)
for some metric g_0 on N.
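The curvature hypothesis cannot be dropped. A standard non-example (mine, for illustration): the hyperbolic plane is full of lines but does not split.

```latex
% The hyperbolic plane H^2 = \{(x, y) \in R^2 : y > 0\} with metric
g = \frac{dx^2 + dy^2}{y^2}
% has every complete geodesic as a line, but Ric(g) = -g < 0. Indeed H^2 is
% not isometric to any (N \times R, g_0 + dt^2): with \dim N = 1 such a
% product is flat, while H^2 has curvature -1.
```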
We will not prove this, but just see what the theorem can be good for.

First of all, we can slightly improve the statement. After applying the
theorem, we can check again if N contains a line or not. We can keep on
proceeding and splitting off lines. Then we get

Corollary. Let (M, g) be a complete connected Riemannian manifold with
Ric(g) ≥ 0. Then it is isometric to X × R^q for some q ∈ N and Riemannian
manifold X, where X is complete and does not contain any lines.
Note that if X is zero-dimensional, since we assume all our manifolds are
connected, then this implies M is flat. If dim X = 1, then X ≅ S¹ (it can't be
a line, because a line contains a line). So again M is flat.

Now suppose that in fact Ric(g) = 0. Then it is not difficult to see from the
definition of the Ricci curvature that Ric(X) = 0 as well. If we know
dim X ≤ 3, then M has to be flat, since in dimensions ≤ 3, the Ricci curvature
determines the full curvature tensor.
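The claim that the Ricci curvature determines the full curvature in low dimensions can be made explicit. The following standard formulae are quoted for reference (they are not derived in these notes).

```latex
% In dimension 3, the Weyl tensor vanishes, and with s = g^{ij} R_{ij},
R_{ijk\ell} = g_{ik}R_{j\ell} - g_{i\ell}R_{jk} + g_{j\ell}R_{ik} - g_{jk}R_{i\ell}
            - \tfrac{s}{2}\left( g_{ik}g_{j\ell} - g_{i\ell}g_{jk} \right) ,
% so Ric = 0 forces R = 0. In dimension 2, even the scalar curvature suffices:
R_{ijk\ell} = \tfrac{s}{2}\left( g_{ik}g_{j\ell} - g_{i\ell}g_{jk} \right) .
```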
We can say a bit more if we assume more about the manifold. Recall (from
the example sheets) that a manifold is homogeneous if the group of isometries
acts transitively. In other words, for any p, q ∈ M, there exists an isometry
φ : M → M such that φ(p) = q. This in particular implies the metric is
complete.

It is not difficult to see that if M is homogeneous, then so is X. In this case,
X must be compact. Suppose not. Then X is unbounded. We will obtain a line
on X.
By assumption, for all n = 1, 2, ..., we can find p_n, q_n with d(p_n, q_n) ≥ 2n.
Since X is complete, we can find a minimal geodesic γ_n connecting these two
points, parametrized by arc length. By homogeneity, we may assume that the
midpoint γ_n(0) is at a fixed point x_0. By passing to a subsequence, we wlog
assume γ̇_n(0) converges to some a ∈ T_{x_0}X. Then we use a as an initial
condition for our geodesic, and this will be a line.
A similar argument gives

Lemma. Let (M, g) be a compact Riemannian manifold, and suppose its
universal Riemannian cover (M̃, g̃) is non-compact. Then (M̃, g̃) contains a
line.

Proof. We first find a compact K ⊆ M̃ such that π(K) = M. Since M̃ must be
complete, it is unbounded. Choose p_n, q_n, γ_n as before. Then we can apply
deck transformations so that the midpoint lies inside K, and then use
compactness of K to find a subsequence so that the midpoints converge.
We do more applications.

Corollary. Let (M, g) be a compact, connected manifold with Ric(g) ≥ 0. Then

– The universal Riemannian cover is isometric to a Riemannian product
  X × R^q, with X compact, π_1(X) = 1 and Ric(g_X) ≥ 0.

– If there is some p ∈ M such that Ric(g)_p > 0, then π_1(M) is finite.

– Denote by I(M̃) the group of isometries M̃ → M̃. Then I(M̃) = I(X) ×
  E(R^q), where E(R^q) is the group of rigid Euclidean motions,

      y ↦ Ay + b

  with b ∈ R^q and A ∈ O(q).

– If M̃ is homogeneous, then so is X.
Proof.

– This is direct from Cheeger–Gromoll and the previous lemma.

– If there is a point with strictly positive Ricci curvature, then the same is
  true for the universal cover. So we cannot have any non-trivial splitting.
  So by the previous part, M̃ must be compact. By standard topology,
  |π_1(M)| = |π^{−1}({p})|, which is finite since M̃ is compact.

– We use the fact that E(R^q) = I(R^q). Pick a g ∈ I(M̃). Then we know
  g takes lines to lines. Now use that all lines in M̃ = X × R^q are of the
  form {p} × R with p ∈ X and R ⊆ R^q an affine line. Then

      g({p} × R) = {p'} × R'

  for some p' ∈ X and possibly some other affine line R' ⊆ R^q. By taking
  unions, we deduce that g({p} × R^q) = {p'} × R^q. We write h(p) = p'.
  Then h ∈ I(X).

  Now for any X × {a} with a ∈ R^q, we have X × {a} ⊥ {p} × R^q for all
  p ∈ X. So we must have

      g(X × {a}) = X × {b}

  for some b ∈ R^q. We write e(a) = b. Then

      g(p, a) = (h(p), e(a)).

  Since the metrics of X and R^q are decoupled, it follows that h and e must
  separately be isometries.
We can look at more examples.

Proposition. Consider S^n × R for n = 2 or 3. Then this does not admit any
Ricci-flat metric.

Proof. Note that S^n × R is disconnected at ∞. So any complete metric on it
contains a line. Then by Cheeger–Gromoll, an R factor splits off. So we obtain
Ric = 0 on the S^n factor. Since we are in n = 2, 3, we know S^n would be flat,
as the Ricci curvature determines the full curvature. So S^n would be the
quotient of R^n by a discrete group, and in particular π_1(S^n) ≠ 1. This is a
contradiction.
Let G be a Lie group with a bi-invariant metric g. Suppose the center Z(G)
is finite. Then the center of the Lie algebra g is trivial (since it is the Lie
algebra of G/Z(G), which has trivial center). From sheet 2, we then find that
Ric(g) > 0, which implies π_1(G) is finite. The converse is also true, but is
harder. This is done on Q11 of sheet 3 — if π_1(G) is finite, then Z(G) is
finite.