7.4 Mayer-Vietoris sequence
The Mayer-Vietoris theorem is exactly like the Seifert-van Kampen theorem
for fundamental groups, which tells us what happens when we glue two spaces
together.
Suppose we have a space K = M ∪ N.

[Figure: two overlapping subcomplexes M and N]

We will learn how to compute the homology of the union M ∪ N in terms of those of M, N and M ∩ N.
Recall that to state the Seifert-van Kampen theorem, we needed to learn
some new group-theoretic notions, such as free products with amalgamation.
The situation is somewhat similar here. We will need to learn some algebra in
order to state the Mayer-Vietoris theorem. The objects we need are known as
exact sequences.
Definition (Exact sequence). A pair of homomorphisms of abelian groups

    A --f--> B --g--> C

is exact (at B) if

    im f = ker g.
A collection of homomorphisms

    ··· --f_{i−1}--> A_i --f_i--> A_{i+1} --f_{i+1}--> A_{i+2} --f_{i+2}--> ···

is exact at A_i if

    ker f_i = im f_{i−1}.

We say it is exact if it is exact at every A_i.
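For example, the sequence

    Z --×2--> Z --mod 2--> Z/2

is exact at the middle Z, since the image of multiplication by 2 is the set of even integers, which is exactly the kernel of reduction mod 2.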
Recall that we have seen something similar before. When we defined chain complexes, we only required d² = 0, i.e. im d ⊆ ker d. Here we are requiring equality, which is something even better.
Algebraically, we can think of an exact sequence as a chain complex with trivial homology groups. Alternatively, we can see the homology groups as measuring the failure of a sequence to be exact.
There is a particular type of exact sequence that is important.
Definition (Short exact sequence). A short exact sequence is an exact sequence of the form

    0 --> A --f--> B --g--> C --> 0

What does this mean?
- The kernel of f is equal to the image of the zero map, i.e. {0}. So f is injective.
- The image of g is the kernel of the zero map, which is everything. So g is surjective.
- im f = ker g.
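For example, for any n ≥ 1,

    0 --> Z --×n--> Z --mod n--> Z/n --> 0

is a short exact sequence: multiplication by n is injective, reduction mod n is surjective, and the image of the former, nZ, is exactly the kernel of the latter.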
Since we like chain complexes, we can produce short exact sequences of chain
complexes.
Definition (Short exact sequence of chain complexes). A short exact sequence of chain complexes is a pair of chain maps i_• and j_•

    0 --> A_• --i_•--> B_• --j_•--> C_• --> 0

such that for each k,

    0 --> A_k --i_k--> B_k --j_k--> C_k --> 0

is exact.
Note that by requiring the maps to be chain maps, we imply that i_k and j_k commute with the boundary maps of the chain complexes.
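Explicitly, writing d for the boundary maps of all three chain complexes, this means

    d ∘ i_k = i_{k−1} ∘ d    and    d ∘ j_k = j_{k−1} ∘ d

for every k.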
The reason why we care about short exact sequences of chain complexes
(despite them having such a long name) is the following result:
Theorem (Snake lemma). If we have a short exact sequence of complexes

    0 --> A_• --i_•--> B_• --j_•--> C_• --> 0

then a miracle happens to their homology groups. In particular, there is a long exact sequence (i.e. an exact sequence that is not short)

    ··· --> H_n(A) --i_*--> H_n(B) --j_*--> H_n(C)
        --∂_*--> H_{n−1}(A) --i_*--> H_{n−1}(B) --j_*--> H_{n−1}(C) --> ···

where i_* and j_* are induced by i_• and j_•, and ∂_* is a map we will define in the proof.
Having an exact sequence is good, since if we know most of the terms in an exact sequence, we can figure out the remaining ones. To do this, we also need to understand the maps i_*, j_* and ∂_*, but we don't need to understand all of them, since we can deduce some of them from exactness. Yet we still need to know some of them, and since they are defined in the proof, you need to remember the proof.
Note, however, if we replace Z in the definition of chain groups by a field (e.g. Q), then all the groups become vector spaces. Then everything boils down to the rank-nullity theorem. Of course, this does not get us the right answer in exams, since we want to have homology groups over Z, and not Q, but this helps us to understand the exact sequences somewhat. If, at any point, homology groups confuse you, then you can try to work with homology groups over Q and get a feel for what homology groups are like, since this is easier.
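For instance, if 0 --> U --> V --> W --> 0 is a short exact sequence of finite-dimensional vector spaces, then rank-nullity immediately gives dim V = dim U + dim W. More generally, for an exact sequence 0 --> V_1 --> V_2 --> ··· --> V_k --> 0 of finite-dimensional vector spaces, the alternating sum of the dimensions vanishes.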
We are first going to apply this result to obtain the Mayer-Vietoris theorem.
Theorem (Mayer-Vietoris theorem). Let K, L, M, N be simplicial complexes with K = M ∪ N and L = M ∩ N. We have the following inclusion maps:

    L --i--> M
    |j       |k
    v        v
    N --ℓ--> K

Then there exists some natural homomorphism ∂_*: H_n(K) → H_{n−1}(L) that gives the following long exact sequence:

    ··· --> H_n(L) --i_*+j_*--> H_n(M) ⊕ H_n(N) --k_*−ℓ_*--> H_n(K)
        --∂_*--> H_{n−1}(L) --i_*+j_*--> H_{n−1}(M) ⊕ H_{n−1}(N) --k_*−ℓ_*--> H_{n−1}(K) --> ···
        ··· --> H_0(M) ⊕ H_0(N) --k_*−ℓ_*--> H_0(K) --> 0

Here A ⊕ B is the direct sum of the two (abelian) groups, which may also be known as the Cartesian product.
Note that unlike the Seifert-van Kampen theorem, this does not require the intersection L = M ∩ N to be (path) connected. This restriction was needed for Seifert-van Kampen since the fundamental group is unable to see things outside the path component of the basepoint, and hence it does not deal with non-path-connected spaces well. However, homology groups don't have these problems.
Proof. All we have to do is to produce a short exact sequence of complexes. We have

    0 --> C_n(L) --i_n+j_n--> C_n(M) ⊕ C_n(N) --k_n−ℓ_n--> C_n(K) --> 0

Here i_n + j_n: C_n(L) → C_n(M) ⊕ C_n(N) is the map x ↦ (x, x), while k_n − ℓ_n: C_n(M) ⊕ C_n(N) → C_n(K) is the map (a, b) ↦ a − b (after applying the appropriate inclusion maps).
It is easy to see that this is a short exact sequence of chain complexes. The image of i_n + j_n is the set of all elements of the form (x, x), and this is exactly the kernel of k_n − ℓ_n. It is also easy to see that i_n + j_n is injective and k_n − ℓ_n is surjective (every simplex of K = M ∪ N lies in M or in N). The Mayer-Vietoris sequence then follows by applying the snake lemma to this short exact sequence.
At first sight, the Mayer-Vietoris theorem might look a bit scary to use, since it involves all homology groups of all orders at once. However, this is often a good thing, since we can use it to deduce the higher homology groups from the lower ones.
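For example, let K be a simplicial circle, written as the union of two arcs M and N whose intersection L = M ∩ N consists of two points (we use here the standard fact that points and arcs have H_0 ≅ Z and vanishing higher homology). For n ≥ 2, the portion H_n(M) ⊕ H_n(N) → H_n(K) → H_{n−1}(L) of the sequence reads 0 → H_n(K) → 0, so H_n(K) = 0. In degree 1, the sequence gives

    0 --> H_1(K) --∂_*--> H_0(L) --i_*+j_*--> H_0(M) ⊕ H_0(N) --k_*−ℓ_*--> H_0(K) --> 0,

i.e. 0 → H_1(K) → Z² → Z² → Z → 0. The middle map sends each of the two generating points of L to a generator in each factor, so its kernel is the copy of Z generated by the difference of the two points. By exactness, ∂_* maps H_1(K) isomorphically onto this kernel, so H_1(K) ≅ Z.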
Yet, to properly apply the Mayer-Vietoris sequence, we need to understand the map ∂_*. To do so, we need to prove the snake lemma.
Theorem (Snake lemma). If we have a short exact sequence of complexes

    0 --> A_• --i_•--> B_• --j_•--> C_• --> 0

then there is a long exact sequence

    ··· --> H_n(A) --i_*--> H_n(B) --j_*--> H_n(C)
        --∂_*--> H_{n−1}(A) --i_*--> H_{n−1}(B) --j_*--> H_{n−1}(C) --> ···

where i_* and j_* are induced by i_• and j_•, and ∂_* is a map we will define in the proof.
The method of proving this is sometimes known as “diagram chasing”, where
we just “chase” around commutative diagrams to find the elements we need.
The idea of the proof is as follows: in the short exact sequence, we can think of A as a subgroup of B, and C as the quotient B/A, by the first isomorphism theorem. So any element of C can be represented by an element of B. We apply the boundary map to this representative, and then exactness shows that this must come from some element of A. We then check carefully that this is well-defined, i.e. does not depend on the representatives chosen.
Proof. The proof of this is in general not hard. It just involves a lot of checking of the details, such as making sure the maps are well-defined, are actually homomorphisms, that the sequence is exact at all the places, etc. The only important and non-trivial part is just the construction of the map ∂_*.
First we look at the following commutative diagram:

    0 --> A_n     --i_n-->     B_n     --j_n-->     C_n --> 0
          |d_n                 |d_n                 |d_n
          v                    v                    v
    0 --> A_{n−1} --i_{n−1}--> B_{n−1} --j_{n−1}--> C_{n−1} --> 0
To construct ∂_*: H_n(C) → H_{n−1}(A), let [x] ∈ H_n(C) be a class represented by x ∈ Z_n(C). We need to find a cycle z ∈ A_{n−1}. By exactness, we know the map j_n: B_n → C_n is surjective. So there is a y ∈ B_n such that j_n(y) = x. Since our target is A_{n−1}, we want to move down to the next level. So consider d_n(y) ∈ B_{n−1}. We would be done if d_n(y) is in the image of i_{n−1}. By exactness, this is equivalent to saying d_n(y) is in the kernel of j_{n−1}. Since the diagram is commutative, we know

    j_{n−1} ∘ d_n(y) = d_n ∘ j_n(y) = d_n(x) = 0,

using the fact that x is a cycle. So d_n(y) ∈ ker j_{n−1} = im i_{n−1}. Moreover, by exactness again, i_{n−1} is injective. So there is a unique z ∈ A_{n−1} such that i_{n−1}(z) = d_n(y). We have now produced our z.
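In symbols, the candidate map is

    ∂_*[x] = [i_{n−1}^{−1}(d_n(j_n^{−1}(x)))],

where j_n^{−1}(x) denotes any choice of preimage of x under j_n, and i_{n−1}^{−1} makes sense because i_{n−1} is injective.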
We are not done. We have ∂_*[x] = [z] as our candidate definition, but we need to check many things:
(i) We need to make sure ∂_* is indeed a homomorphism.
(ii) We need d_{n−1}(z) = 0 so that [z] ∈ H_{n−1}(A).
(iii) We need to check that [z] is well-defined, i.e. it does not depend on our choice of y and x for the homology class [x].
(iv) We need to check the exactness of the resulting sequence.
We now check them one by one:
(i) Since everything involved in defining ∂_* is a homomorphism, it follows that ∂_* is also a homomorphism.
(ii) We check d_{n−1}(z) = 0. To do so, we need to add an additional layer.

    0 --> A_n     --i_n-->     B_n     --j_n-->     C_n --> 0
          |d_n                 |d_n                 |d_n
          v                    v                    v
    0 --> A_{n−1} --i_{n−1}--> B_{n−1} --j_{n−1}--> C_{n−1} --> 0
          |d_{n−1}             |d_{n−1}             |d_{n−1}
          v                    v                    v
    0 --> A_{n−2} --i_{n−2}--> B_{n−2} --j_{n−2}--> C_{n−2} --> 0

We want to check that d_{n−1}(z) = 0. We will use the commutativity of the diagram. In particular, we know

    i_{n−2} ∘ d_{n−1}(z) = d_{n−1} ∘ i_{n−1}(z) = d_{n−1} ∘ d_n(y) = 0.

By exactness at A_{n−2}, we know i_{n−2} is injective. So we must have d_{n−1}(z) = 0.
(iii) (a) First, in the proof, suppose we picked a different y′ such that j_n(y′) = j_n(y) = x. Then j_n(y′ − y) = 0. So y′ − y ∈ ker j_n = im i_n. Let a ∈ A_n be such that i_n(a) = y′ − y. Then

    d_n(y′) = d_n(y′ − y) + d_n(y)
            = d_n ∘ i_n(a) + d_n(y)
            = i_{n−1} ∘ d_n(a) + d_n(y).

Hence when we pull back d_n(y′) and d_n(y) to A_{n−1}, the results differ by the boundary d_n(a), and hence produce the same homology class.
(b) Suppose [x′] = [x]. We want to show that ∂_*[x] = ∂_*[x′]. This time, we add a layer above.

    0 --> A_{n+1} --i_{n+1}--> B_{n+1} --j_{n+1}--> C_{n+1} --> 0
          |d_{n+1}             |d_{n+1}             |d_{n+1}
          v                    v                    v
    0 --> A_n     --i_n-->     B_n     --j_n-->     C_n --> 0
          |d_n                 |d_n                 |d_n
          v                    v                    v
    0 --> A_{n−1} --i_{n−1}--> B_{n−1} --j_{n−1}--> C_{n−1} --> 0

By definition, since [x′] = [x], there is some c ∈ C_{n+1} such that

    x′ = x + d_{n+1}(c).

By surjectivity of j_{n+1}, we can write c = j_{n+1}(b) for some b ∈ B_{n+1}. By commutativity of the squares, we know

    x′ = x + j_n ∘ d_{n+1}(b).

The next step of the proof is to find some y such that j_n(y) = x. Then

    j_n(y + d_{n+1}(b)) = x′.

So the corresponding y′ is y′ = y + d_{n+1}(b). Since d_n ∘ d_{n+1} = 0, we get d_n(y) = d_n(y′), and hence ∂_*[x] = ∂_*[x′].
(iv)
This is yet another standard diagram chasing argument. When reading
this, it is helpful to look at a diagram and see how the elements are chased
along. It is even more beneficial to attempt to prove this yourself.
(a) im i_* ⊆ ker j_*: This follows from the assumption that j_n ∘ i_n = 0.
(b) ker j_* ⊆ im i_*: Let [b] ∈ H_n(B). Suppose j_*([b]) = 0. Then there is some c ∈ C_{n+1} such that j_n(b) = d_{n+1}(c). By surjectivity of j_{n+1}, there is some b′ ∈ B_{n+1} such that j_{n+1}(b′) = c. By commutativity, we know j_n(b) = j_n ∘ d_{n+1}(b′), i.e.

    j_n(b − d_{n+1}(b′)) = 0.

By exactness of the sequence, we know there is some a ∈ A_n such that

    i_n(a) = b − d_{n+1}(b′).

Moreover,

    i_{n−1} ∘ d_n(a) = d_n ∘ i_n(a) = d_n(b − d_{n+1}(b′)) = 0,

using the fact that b is a cycle. Since i_{n−1} is injective, it follows that d_n(a) = 0. So [a] ∈ H_n(A). Then

    i_*([a]) = [b] − [d_{n+1}(b′)] = [b].

So [b] ∈ im i_*.
(c) im j_* ⊆ ker ∂_*: Let [b] ∈ H_n(B). To compute ∂_*(j_*([b])), we first pull back j_n(b) to b ∈ B_n. Then we compute d_n(b) and then pull it back to A_{n−1}. However, we know d_n(b) = 0 since b is a cycle. So ∂_*(j_*([b])) = 0, i.e. ∂_* ∘ j_* = 0.
(d) ker ∂_* ⊆ im j_*: Let [c] ∈ H_n(C) and suppose ∂_*([c]) = 0. Let b ∈ B_n be such that j_n(b) = c, and a ∈ A_{n−1} such that i_{n−1}(a) = d_n(b). By assumption, ∂_*([c]) = [a] = 0. So we know a is a boundary, say a = d_n(a′) for some a′ ∈ A_n. Then by commutativity we know d_n(b) = d_n ∘ i_n(a′). In other words,

    d_n(b − i_n(a′)) = 0.

So [b − i_n(a′)] ∈ H_n(B). Moreover,

    j_*([b − i_n(a′)]) = [j_n(b) − j_n ∘ i_n(a′)] = [c].

So [c] ∈ im j_*.
(e) im ∂_* ⊆ ker i_*: Let [c] ∈ H_n(C). Let b ∈ B_n be such that j_n(b) = c, and a ∈ A_{n−1} be such that i_{n−1}(a) = d_n(b). Then ∂_*([c]) = [a]. Then

    i_*([a]) = [i_{n−1}(a)] = [d_n(b)] = 0.

So i_* ∘ ∂_* = 0.
(f) ker i_* ⊆ im ∂_*: Let [a] ∈ H_n(A) and suppose i_*([a]) = 0. So we can find some b ∈ B_{n+1} such that i_n(a) = d_{n+1}(b). Let c = j_{n+1}(b). Then

    d_{n+1}(c) = d_{n+1} ∘ j_{n+1}(b) = j_n ∘ d_{n+1}(b) = j_n ∘ i_n(a) = 0.

So [c] ∈ H_{n+1}(C). Then [a] = ∂_*([c]) by definition of ∂_*. So [a] ∈ im ∂_*.
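Finally, it is worth unwinding this construction for the Mayer-Vietoris sequence: given a cycle x on K = M ∪ N, surjectivity of k_n − ℓ_n lets us write x = a − b for some chain a on M and some chain b on N; since x is a cycle, d_n(a) = d_n(b) is a chain lying on L = M ∩ N, and it is itself a cycle, so ∂_*[x] = [d_n(a)] ∈ H_{n−1}(L).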