2.2 Singular (co)homology
The idea now is that given any space $X$, we construct a chain complex $C_\bullet(X)$, and for any map $f: X \to Y$, we construct a chain map $f_\bullet: C_\bullet(X) \to C_\bullet(Y)$. We then take the homology of these, so that we get some homology groups $H_*(X)$ for each space $X$, and a map $f_*: H_*(X) \to H_*(Y)$ for each map $f: X \to Y$.
There are many ways we can construct a chain complex $C_\bullet(X)$ from a space $X$. The way we are going to define the chain complex is via singular homology. The advantage of this definition is that it is obviously a property just of the space $X$ itself, whereas in other definitions, we need to pick, say, a triangulation of the space, and then work hard to show that the homology does not depend on the choice of the triangulation.
The disadvantage of singular homology is that the chain complexes $C_\bullet(X)$ will be huge (and also a bit scary). Except for the case of a point (or perhaps a few points), it is impossible to actually write down what $C_\bullet(X)$ looks like and use that to compute the homology groups. Instead, we are going to play around with the definition and prove some useful results that help us compute the homology groups indirectly.
Later, for a particular class of spaces known as CW complexes, we will come up with a different homology theory, where the $C_\bullet(X)$ are actually nice and small, so that we can use it to compute the homology groups directly. We will prove that this is equivalent to singular homology, so that it also provides an alternative method of computing homology groups.
Everything we do can be dualized to talk about cohomology instead. Most of
the time, we will just write down the result for singular homology, but analogous
results hold for singular cohomology as well. However, later on, we will see that
there are operations we can perform on cohomology groups only, which makes
them better, and the interaction between homology and cohomology will become
interesting when we talk about manifolds at the end of the course.
We start with definitions.
Definition (Standard $n$-simplex). The standard $n$-simplex is
\[
\Delta^n = \left\{ (t_0, \cdots, t_n) \in \mathbb{R}^{n+1} : t_i \geq 0,\ \sum t_i = 1 \right\}.
\]
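For example, in low dimensions, $\Delta^0$ is a single point, $\Delta^1$ is the line segment in $\mathbb{R}^2$ joining $(1, 0)$ and $(0, 1)$, and $\Delta^2$ is the solid triangle in $\mathbb{R}^3$ with vertices $(1, 0, 0)$, $(0, 1, 0)$ and $(0, 0, 1)$.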
We notice that $\Delta^n$ has $n + 1$ “faces”.
Definition (Face of standard simplex). The $i$th face of $\Delta^n$ is
\[
\Delta^n_i = \{(t_0, \cdots, t_n) \in \Delta^n : t_i = 0\}.
\]
Example. The faces of $\Delta^1$ are labelled as follows:

[figure: $\Delta^1$ drawn in the $(t_0, t_1)$-plane, with the face $\Delta^1_1$ at the vertex where $t_1 = 0$ and the face $\Delta^1_0$ at the vertex where $t_0 = 0$]
We see that the $i$th face of $\Delta^n$ looks like the standard $(n - 1)$-simplex. Of course, it is just homeomorphic to it, with the map given by
\[
\begin{aligned}
\delta_i : \Delta^{n-1} &\to \Delta^n\\
(t_0, \cdots, t_{n-1}) &\mapsto (t_0, \cdots, t_{i-1}, 0, t_i, \cdots, t_{n-1}).
\end{aligned}
\]
This is a homeomorphism onto $\Delta^n_i$. We will make use of these maps to define our chain complex.
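For example, taking $n = 2$ and $i = 1$, the map $\delta_1: \Delta^1 \to \Delta^2$ sends $(t_0, t_1) \mapsto (t_0, 0, t_1)$, and its image is exactly the face $\Delta^2_1 = \{(t_0, t_1, t_2) \in \Delta^2 : t_1 = 0\}$.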
The idea is that we will look at subspaces of $X$ that “look like” these standard $n$-simplices.
Definition (Singular $n$-simplex). Let $X$ be a space. Then a singular $n$-simplex in $X$ is a map $\sigma: \Delta^n \to X$.
Example. The inclusion of the standard $n$-simplex into $\mathbb{R}^{n+1}$ is a singular simplex, but so is the constant map to any point in $X$. So the singular $n$-simplices can be stupid.
Given a space $X$, we obtain a set of singular $n$-simplices. To talk about homology, we need to have an abelian group. We do the least imaginative thing ever:
Definition (Singular chain complex). We let $C_n(X)$ be the free abelian group on the set of singular $n$-simplices in $X$. More explicitly, we have
\[
C_n(X) = \left\{ \sum n_\sigma \sigma : \sigma: \Delta^n \to X,\ n_\sigma \in \mathbb{Z},\ \text{only finitely many } n_\sigma \text{ non-zero} \right\}.
\]
We define $d_n: C_n(X) \to C_{n-1}(X)$ by
\[
\sigma \mapsto \sum_{i=0}^{n} (-1)^i\, \sigma \circ \delta_i,
\]
and then extending linearly.
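To spell this out in low dimensions: for a $1$-simplex $\sigma: \Delta^1 \to X$ we get
\[
d_1(\sigma) = \sigma \circ \delta_0 - \sigma \circ \delta_1,
\]
the difference of its two endpoints (regarded as $0$-simplices), and for a $2$-simplex $\sigma: \Delta^2 \to X$ we get
\[
d_2(\sigma) = \sigma \circ \delta_0 - \sigma \circ \delta_1 + \sigma \circ \delta_2,
\]
the alternating sum of its three edges.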
Note that it is essential that we insert those funny negative signs. Indeed, if the signs weren't there, then all terms in $d(d(\sigma))$ would have positive coefficients, and there is no hope that they all cancel. Intuitively, we can think of these signs as specifying the “orientation” of the faces. For example, if we have a line, then after taking $d$, one vertex would have a positive sign, and the other would have a negative sign.
Now we actually check that this indeed gives us a chain complex. The key
tool is the following unexciting result:
Lemma. If $i < j$, then $\delta_j \circ \delta_i = \delta_i \circ \delta_{j-1} : \Delta^{n-2} \to \Delta^n$.
Proof. Both send $(t_0, \cdots, t_{n-2})$ to
\[
(t_0, \cdots, t_{i-1}, 0, t_i, \cdots, t_{j-2}, 0, t_{j-1}, \cdots, t_{n-2}).
\]
Corollary. The homomorphism $d_{n-1} \circ d_n : C_n(X) \to C_{n-2}(X)$ vanishes.
Proof. It suffices to check this on each basis element $\sigma: \Delta^n \to X$. We have
\[
d_{n-1} \circ d_n(\sigma) = \sum_{i=0}^{n-1} (-1)^i \sum_{j=0}^{n} (-1)^j\, \sigma \circ \delta_j \circ \delta_i.
\]
We use the previous lemma to split the sum up into $i < j$ and $i \geq j$:
\[
\begin{aligned}
&= \sum_{i < j} (-1)^{i+j}\, \sigma \circ \delta_j \circ \delta_i + \sum_{i \geq j} (-1)^{i+j}\, \sigma \circ \delta_j \circ \delta_i\\
&= \sum_{i < j} (-1)^{i+j}\, \sigma \circ \delta_i \circ \delta_{j-1} + \sum_{i \geq j} (-1)^{i+j}\, \sigma \circ \delta_j \circ \delta_i\\
&= \sum_{i \geq j} (-1)^{i+j+1}\, \sigma \circ \delta_j \circ \delta_i + \sum_{i \geq j} (-1)^{i+j}\, \sigma \circ \delta_j \circ \delta_i\\
&= 0,
\end{aligned}
\]
where in the third line we reindexed the first sum by replacing $j$ with $j + 1$ and then swapping the names of the two summation indices.
So the data $d_n: C_n(X) \to C_{n-1}(X)$ is indeed a chain complex. The only thing we can do to a chain complex is to take its homology!
Definition (Singular homology). The singular homology of a space $X$ is the homology of the chain complex $C_\bullet(X)$:
\[
H_i(X) = H_i(C_\bullet(X), d_\bullet) = \frac{\ker(d_i: C_i(X) \to C_{i-1}(X))}{\operatorname{im}(d_{i+1}: C_{i+1}(X) \to C_i(X))}.
\]
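As with any chain complex, elements of $\ker(d_i)$ are called cycles and elements of $\operatorname{im}(d_{i+1})$ are called boundaries, so the homology measures cycles modulo boundaries.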
We will also talk about the “dual” version of this:
Definition (Singular cohomology). We define the dual cochain complex by
\[
C^n(X) = \operatorname{Hom}(C_n(X), \mathbb{Z}).
\]
We let
\[
d^n : C^n(X) \to C^{n+1}(X)
\]
be the adjoint to $d_{n+1}$, i.e.
\[
(\varphi: C_n(X) \to \mathbb{Z}) \mapsto (\varphi \circ d_{n+1}: C_{n+1}(X) \to \mathbb{Z}).
\]
We observe that
\[
0 \to C^0(X) \xrightarrow{\ d^0\ } C^1(X) \to \cdots
\]
is indeed a cochain complex, since
\[
d^{n+1}(d^n(\varphi)) = \varphi \circ d_{n+1} \circ d_{n+2} = \varphi \circ 0 = 0.
\]
The singular cohomology of $X$ is the cohomology of this cochain complex, i.e.
\[
H^i(X) = H^i(C^\bullet(X), d^\bullet) = \frac{\ker(d^i: C^i(X) \to C^{i+1}(X))}{\operatorname{im}(d^{i-1}: C^{i-1}(X) \to C^i(X))}.
\]
Note that in general, it is not true that $H^n(X) = \operatorname{Hom}(H_n(X), \mathbb{Z})$. Thus, dualizing and taking homology do not commute with each other. However, we will later come up with a relation between the two objects.
The next thing to show is that maps of spaces induce chain maps, hence
maps between homology groups.
Proposition. If $f: X \to Y$ is a continuous map of topological spaces, then the maps
\[
\begin{aligned}
f_n : C_n(X) &\to C_n(Y)\\
(\sigma: \Delta^n \to X) &\mapsto (f \circ \sigma: \Delta^n \to Y)
\end{aligned}
\]
give a chain map. This induces a map on the homology (and cohomology).
Proof. To see that the $f_n$ and $d_n$ commute, we just notice that $f_n$ acts by composing on the left, and $d_n$ acts by composing on the right, and these two operations commute by the associativity of functional composition.
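Spelt out on a basis element $\sigma: \Delta^n \to X$, this is just the computation
\[
d_n(f_n(\sigma)) = \sum_{i=0}^{n} (-1)^i\, (f \circ \sigma) \circ \delta_i = f_{n-1}\left( \sum_{i=0}^{n} (-1)^i\, \sigma \circ \delta_i \right) = f_{n-1}(d_n(\sigma)).
\]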
Now if in addition we have a map $g: Y \to Z$, then we obtain two maps $H_n(X) \to H_n(Z)$, namely $(g \circ f)_*$ and the composite $g_* \circ f_*$ through $H_n(Y)$. By direct inspection of the formula, we see that these agree. In equation form, we have
\[
(g \circ f)_* = g_* \circ f_*.
\]
Moreover, we trivially have
\[
(\mathrm{id}_X)_* = \mathrm{id}_{H_n(X)} : H_n(X) \to H_n(X).
\]
Thus we deduce that
Proposition. If $f: X \to Y$ is a homeomorphism, then $f_*: H_n(X) \to H_n(Y)$ is an isomorphism of abelian groups.
Proof. If $g: Y \to X$ is an inverse to $f$, then $g_*$ is an inverse to $f_*$, as $f_* \circ g_* = (f \circ g)_* = (\mathrm{id})_* = \mathrm{id}$, and similarly the other way round.
If one is taking category theory, what we have shown is that $H_*$ is a functor, and the above proposition is just the usual proof that functors preserve isomorphisms.
This is not too exciting. We will later show that homotopy equivalences
induce isomorphisms of homology groups, which is much harder.
Again, we can dualize this to talk about cohomology. Applying $\operatorname{Hom}(\,\cdot\,, \mathbb{Z})$ to $f_\bullet: C_\bullet(X) \to C_\bullet(Y)$ gives homomorphisms $f^n: C^n(Y) \to C^n(X)$ by mapping
\[
(\varphi: C_n(Y) \to \mathbb{Z}) \mapsto (\varphi \circ f_n: C_n(X) \to \mathbb{Z}).
\]
Note that this map goes the other way! Again, this is a cochain map, and induces maps $f^*: H^n(Y) \to H^n(X)$.
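To check the cochain map condition, note that for $\varphi \in C^n(Y)$ we have
\[
d^n(f^n \varphi) = \varphi \circ f_n \circ d_{n+1} = \varphi \circ d_{n+1} \circ f_{n+1} = f^{n+1}(d^n \varphi),
\]
where the middle equality is just the statement that $f_\bullet$ is a chain map (the two maps written $d_{n+1}$ are the boundary maps of $X$ and $Y$ respectively).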
How should we think about singular homology? There are two objects involved: cycles and boundaries. We will see that cycles in some sense “detect holes”, and quotienting out by boundaries will help us identify cycles that detect the same hole.
Example. We work with the space that looks like this:

[figure: a plane region with two holes]
Suppose we have a single $1$-simplex $\sigma: \Delta^1 \to X$. Then its boundary is $d_1(\sigma) = \sigma(1) - \sigma(0)$. This is in general non-zero, unless we have $\sigma(0) = \sigma(1)$. In other words, this has to be a loop!

[figure: a loop $\sigma$ going around one of the holes]

So the $1$-cycles represented by just one element are exactly the loops.
How about more complicated cycles? We could have, say, four $1$-simplices $\sigma_1, \sigma_2, \sigma_3, \sigma_4$.

[figure: four $1$-simplices $\sigma_1, \sigma_2, \sigma_3, \sigma_4$ joined end to end around a hole]
In this case we have
\[
\begin{aligned}
\sigma_1(1) &= \sigma_2(0)\\
\sigma_2(1) &= \sigma_3(0)\\
\sigma_3(1) &= \sigma_4(0)\\
\sigma_4(1) &= \sigma_1(0).
\end{aligned}
\]
Thus, we have $\sigma_1 + \sigma_2 + \sigma_3 + \sigma_4 \in C_1(X)$. We can think of these cycles as detecting the holes by surrounding them.
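Indeed, this chain is a cycle: by the relations above,
\[
d_1(\sigma_1 + \sigma_2 + \sigma_3 + \sigma_4) = \sum_{k=1}^{4} \bigl( \sigma_k(1) - \sigma_k(0) \bigr) = 0,
\]
since each endpoint appears once with a $+$ sign and once with a $-$ sign.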
However, there are some stupid cycles:

[figure: a small loop bounding a disc, away from the holes]

These cycles don't really surround anything. The solution is that these cycles are actually boundaries, as it is the boundary of the $2$-simplex that fills the region bounded by the loop (details omitted).
Similarly, the boundaries also allow us to identify two cycles that surround the same hole, so that we don't double count:

[figure: two loops going around the same hole]

This time, the difference between the two cycles is the boundary of the $2$-simplex given by the region in between the two loops.
Of course, some work has to be done to actually find a 2-simplex whose
boundary is the difference of the two loops above, and in fact we will have to
write the region as the sum of multiple 2-simplices for this to work. However,
this is just to provide intuition, not a formal proof.
We now do some actual computations. If we want to compute the homology groups directly, we need to know what the $C_\bullet(X)$ look like. In general, this is intractable, unless we have the very simple case of a point:
Example. Consider the one-point space $\mathrm{pt} = \{*\}$. We claim that
\[
H_n(\mathrm{pt}) = H^n(\mathrm{pt}) =
\begin{cases}
\mathbb{Z} & n = 0\\
0 & n > 0
\end{cases}.
\]
To see this, we note that there is always a single singular $n$-simplex $\sigma_n: \Delta^n \to \mathrm{pt}$. So $C_n(\mathrm{pt}) = \mathbb{Z}$, and is generated by $\sigma_n$. Now note that
\[
d_n(\sigma_n) = \sum_{i=0}^{n} (-1)^i\, \sigma_n \circ \delta_i =
\begin{cases}
\sigma_{n-1} & n \text{ even}\\
0 & n \text{ odd}
\end{cases}.
\]
So the singular chain complex looks like
\[
\cdots \longrightarrow \mathbb{Z} \xrightarrow{\ 1\ } \mathbb{Z} \xrightarrow{\ 0\ } \mathbb{Z} \xrightarrow{\ 1\ } \mathbb{Z} \xrightarrow{\ 0\ } \mathbb{Z} \longrightarrow 0.
\]
The homology groups then follow from direct computation. The cohomology groups are similar.
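Spelling this out: for odd $n > 0$ we have $d_n = 0$ and $d_{n+1} = 1$, so $H_n(\mathrm{pt}) = \ker(d_n)/\operatorname{im}(d_{n+1}) = \mathbb{Z}/\mathbb{Z} = 0$; for even $n > 0$ we have $d_n = 1$, so $\ker(d_n) = 0$ and $H_n(\mathrm{pt}) = 0$; and finally $H_0(\mathrm{pt}) = \ker(d_0)/\operatorname{im}(d_1) = \mathbb{Z}/0 \cong \mathbb{Z}$.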
This result is absolutely unexciting, and it is also almost the only space
whose homology groups we can find at this point. However, we are still capable
of proving the following general result:
Example. If $X = \coprod_{\alpha \in I} X_\alpha$ is a disjoint union of path components, then each singular simplex must lie in some $X_\alpha$. This implies that
\[
H_n(X) \cong \bigoplus_{\alpha \in I} H_n(X_\alpha).
\]
Now we know how to compute, say, the homology of three points. How
exciting.
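Concretely, if $X$ is a disjoint union of three points, then combining this with the computation for a point gives $H_0(X) \cong \mathbb{Z}^3$ and $H_n(X) = 0$ for $n > 0$.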
Lemma. If $X$ is path-connected and non-empty, then $H_0(X) \cong \mathbb{Z}$.
Proof. Define a homomorphism $\varepsilon: C_0(X) \to \mathbb{Z}$ given by
\[
\sum n_\sigma \sigma \mapsto \sum n_\sigma.
\]
Then this is surjective. We claim that the composition
\[
C_1(X) \xrightarrow{\ d\ } C_0(X) \xrightarrow{\ \varepsilon\ } \mathbb{Z}
\]
is zero. Indeed, each simplex has two ends, and a $\sigma: \Delta^1 \to X$ is mapped to $\sigma \circ \delta_0 - \sigma \circ \delta_1$, which is mapped by $\varepsilon$ to $1 - 1 = 0$.
Thus, we know that $\varepsilon(\sigma) = \varepsilon(\sigma + d\tau)$ for any $\sigma \in C_0(X)$ and $\tau \in C_1(X)$. So we obtain a well-defined map $\varepsilon: H_0(X) \to \mathbb{Z}$ mapping $[x] \mapsto \varepsilon(x)$, and this is surjective as $X$ is non-empty.
So far, this is true for a general space. Now we use the path-connectedness condition to show that this map is indeed injective. Suppose $\sum n_\sigma \sigma \in C_0(X)$ lies in $\ker \varepsilon$. We choose an $x_0 \in X$. As $X$ is path-connected, for each $\sigma: \Delta^0 \to X$ we can choose a path $\tau_\sigma: \Delta^1 \to X$ with $\tau_\sigma \circ \delta_0 = \sigma$ and $\tau_\sigma \circ \delta_1 = x_0$.
Given these $1$-simplices, we can form a $1$-chain $\sum n_\sigma \tau_\sigma \in C_1(X)$, and
\[
d_1\left( \sum n_\sigma \tau_\sigma \right) = \sum n_\sigma (\sigma - x_0) = \sum n_\sigma \cdot \sigma - \left( \sum n_\sigma \right) x_0.
\]
Now we use the fact that $\sum n_\sigma = 0$. So $\sum n_\sigma \cdot \sigma$ is a boundary. So it is zero in $H_0(X)$.
Combining with the coproduct formula, we have

Proposition. For any space $X$, $H_0(X)$ is the free abelian group generated by the path components of $X$.
These are pretty much the things we can do by hand. To do more things, we
need to use some tools.