Part III Symmetries, Fields and Particles
Based on lectures by N. Dorey
Notes taken by Dexter Chua
Michaelmas 2016
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
This course introduces the theory of Lie groups and Lie algebras and their applications
to high energy physics. The course begins with a brief overview of the role of symmetry
in physics. After reviewing basic notions of group theory we define a Lie group as a
manifold with a compatible group structure. We give the abstract definition of a Lie
algebra and show that every Lie group has an associated Lie algebra corresponding to
the tangent space at the identity element. Examples arising from groups of orthogonal
and unitary matrices are discussed. The case of SU(2), the group of rotations in three
dimensions is studied in detail. We then study the representations of Lie groups and
Lie algebras. We discuss reducibility and classify the finite dimensional, irreducible
representations of SU(2) and introduce the tensor product of representations. The
next part of the course develops the theory of complex simple Lie algebras. We
define the Killing form on a Lie algebra. We introduce the Cartan-Weyl basis and
discuss the properties of roots and weights of a Lie algebra. We cover the Cartan
classification of simple Lie algebras in detail. We describe the finite dimensional,
irreducible representations of simple Lie algebras, illustrating the general theory for the
Lie algebra of SU(3). The last part of the course discusses some physical applications.
After a general discussion of symmetry in quantum mechanical systems, we review the
approximate SU(3) global symmetry of the strong interactions and its consequences
for the observed spectrum of hadrons. We introduce gauge symmetry and construct a
gauge-invariant Lagrangian for Yang-Mills theory coupled to matter. The course ends
with a brief introduction to the Standard Model of particle physics.
Pre-requisites
Basic finite group theory, including subgroups and orbits. Special relativity and
quantum theory, including orbital angular momentum theory and Pauli spin matrices.
Basic ideas about manifolds, including coordinates, dimension, tangent spaces.
Contents
1 Introduction
2 Lie groups
2.1 Definitions
2.2 Matrix Lie groups
2.3 Properties of Lie groups
3 Lie algebras
3.1 Lie algebras
3.2 Differentiation
3.3 Lie algebras from Lie groups
3.4 The exponential map
4 Representations of Lie algebras
4.1 Representations of Lie groups and algebras
4.2 Complexification and correspondence of representations
4.3 Representations of su(2)
4.4 New representations from old
4.5 Decomposition of tensor product of su(2) representations
5 Cartan classification
5.1 The Killing form
5.2 The Cartan basis
5.3 Things are real
5.4 A real subalgebra
5.5 Simple roots
5.6 The classification
5.7 Reconstruction
6 Representation of Lie algebras
6.1 Weights
6.2 Root and weight lattices
6.3 Classification of representations
6.4 Decomposition of tensor products
7 Gauge theories
7.1 Electromagnetism and U(1) gauge symmetry
7.2 General case
8 Lie groups in nature
8.1 Spacetime symmetry
8.2 Possible extensions
8.3 Internal symmetries and the eightfold way
1 Introduction
In this course, we are, unsurprisingly, going to talk about symmetries. Unlike
what the course name suggests, there will be relatively little discussion of fields
or particles.
So what is a symmetry? There are many possible definitions, but we are
going to pick one that is relevant to physics.
Definition (Symmetry). A symmetry of a physical system is a transformation
of the dynamical variables which leaves the physical laws invariant.
Symmetries are very important. As Noether's theorem tells us, every continuous symmetry gives rise to a conserved current. But there is more to symmetry than
this. It seems that the whole physical universe is governed by symmetries. We
believe that the forces in the universe are given by gauge fields, and a gauge
field is uniquely specified by the gauge symmetry group it works with.
In general, the collection of all symmetries will form a group.
Definition (Group). A group is a set G of elements with a multiplication rule, obeying the axioms
(i) For all g_1, g_2 ∈ G, we have g_1 g_2 ∈ G. (closure)
(ii) There is a (necessarily unique) element e ∈ G such that for all g ∈ G, we have eg = ge = g. (identity)
(iii) For every g ∈ G, there exists some (necessarily unique) g⁻¹ ∈ G such that gg⁻¹ = g⁻¹g = e. (inverse)
(iv) For every g_1, g_2, g_3 ∈ G, we have g_1(g_2 g_3) = (g_1 g_2)g_3. (associativity)
Physically, these mean
(i) The composition of two symmetries is also a symmetry.
(ii) “Doing nothing” is a symmetry.
(iii) A symmetry can be “undone”.
(iv) Composing functions is always associative.
Note that the set of elements G may be finite or infinite.
Definition (Commutative/abelian group). A group is abelian or commutative if g_1 g_2 = g_2 g_1 for all g_1, g_2 ∈ G. A group is non-abelian if it is not abelian.
In this course, we are going to focus on smooth symmetries. These are
symmetries that “vary smoothly” with some parameters. One familiar example
is rotation, which can be described by the rotation axis and the angle of rotation.
These are the symmetries that can go into Noether’s theorem. These smooth
symmetries form a special kind of group, known as a Lie group.
One nice thing about these smooth symmetries is that they can be studied by
looking at the “infinitesimal” symmetries. They form a vector space, known as
a Lie algebra. This is a much simpler mathematical structure, and by reducing
the study of a Lie group to its Lie algebra, we make our lives much easier. In
particular, one thing we can do is to classify all (simple) Lie algebras. It turns
out this isn’t too hard, as the notion of (simple) Lie algebra is very restrictive.
After understanding Lie algebras very well, we will move on to study gauge
theories. These are theories obtained when we require that our theories obey
some sort of symmetry condition, and then magically all the interactions come
up automatically.
2 Lie groups
2.1 Definitions
A Lie group is a group and a manifold, where the group operations define smooth
maps. We already know what a group is, so let's go into a bit more detail about
what a manifold is.
The canonical example of a manifold one might want to keep in mind is the sphere S^2. Having lived on Earth, we all know that near any point on Earth, S^2 looks like R^2. But we also know that S^2 is not R^2. We cannot specify a point on the surface of the Earth uniquely by two numbers. Longitude/latitude might be a good start, but adding 2π to the longitude would give you the same point, and things also break down at the poles.
Of course, this will not stop us from producing a map of Cambridge. One
can reasonably come up with consistent coordinates just for Cambridge itself.
So what is true about S^2 is that near each point, we can come up with some coordinate system, but we cannot do so for the whole space itself.
Definition (Smooth map). We say a map f : R^n → R^m is smooth if all partial
derivatives of all orders exist.
Definition (Manifold). A manifold (of dimension n) is a set M together with the following data:
(i) A collection U_α of subsets of M whose union is M;
(ii) A collection of bijections ϕ_α : U_α → V_α, where V_α is an open subset of R^n. These are known as charts.
The charts have to satisfy the following compatibility condition: for all α, β, we have ϕ_α(U_α ∩ U_β) open in R^n, and the transition function
ϕ_α ∘ ϕ_β⁻¹ : ϕ_β(U_α ∩ U_β) → ϕ_α(U_α ∩ U_β)
is smooth.
We write (M, ϕ_α) for the manifold if we need to specify the charts explicitly.
(Figure: two overlapping charts U_α and U_β, with chart maps ϕ_α, ϕ_β and the transition function ϕ_α ∘ ϕ_β⁻¹ between their images.)
Definition (Smooth map). Let M be a manifold. Then a map f : M → R is smooth if it is smooth in each coordinate chart. Explicitly, for each chart (U_α, ϕ_α), the composition f ∘ ϕ_α⁻¹ : ϕ_α(U_α) → R is smooth (in the usual sense).
(Figure: a point p ∈ U ⊆ M sent by the chart ϕ to ϕ(p), with f ∘ ϕ⁻¹ the corresponding map to R.)
More generally, if M, N are manifolds, we say f : M → N is smooth if for any chart (U, ϕ) of M and any chart (V, ξ) of N, the composition ξ ∘ f ∘ ϕ⁻¹ : ϕ(U) → ξ(V) is smooth.
(Figure: the composition ξ ∘ f ∘ ϕ⁻¹ built from the charts ϕ, ξ and the map f.)
Finally, we note that if M, N are manifolds of dimensions m and n respectively, then M × N is a manifold of dimension m + n, where charts (U, ϕ : U → R^m), (V, ξ : V → R^n) of M and N respectively give us a chart ϕ × ξ : U × V → R^m × R^n = R^{m+n} of M × N.
All those definitions were made so that we can define what a Lie group is:
Definition (Lie group). A Lie group is a group G whose underlying set is given a manifold structure, and such that the multiplication map m : G × G → G and inverse map i : G → G are smooth maps. We sometimes write M(G) for the underlying manifold of G.
Example. The unit 2-sphere
S^2 = {(x, y, z) ∈ R^3 : x^2 + y^2 + z^2 = 1}
is a manifold. Indeed, we can construct a coordinate patch near N = (0, 0, 1). Near this point, we have z = √(1 − x^2 − y^2). This works, since near the north pole, the z-coordinate is always positive. In this case, x and y are good coordinates near the north pole.
However, it is a fact that the 2-sphere S^2 has no Lie group structure.
In general, most of our manifolds will be given by subsets of Euclidean space
specified by certain equations, but note that not all subsets given by equations
are manifolds! It is possible that they have some singularities.
Definition (Dimension of Lie group). The dimension of a Lie group G is the dimension of the underlying manifold.
Definition (Subgroup). A subgroup H of G is a subset of G that is also a group under the same operations. We write H ≤ G if H is a subgroup of G.
The really interesting thing is when the subgroup is also a manifold!
Definition (Lie subgroup). A subgroup is a Lie subgroup if it is also a manifold (under the induced smooth structure).
We now look at some examples.
Example. Let G = (R^D, +) be the D-dimensional Euclidean space with addition as the group operation. The inverse of a vector x is −x, and the identity is 0. This is obviously locally homeomorphic to R^D, since it is R^D, and addition and negation are obviously smooth.
This is a rather boring example, since R^D is a rather trivial manifold, and the operation is commutative. The largest source of examples we have will be matrix Lie groups.
2.2 Matrix Lie groups
We write Mat_n(F) for the set of n × n matrices with entries in a field F (usually R or C). Matrix multiplication is certainly associative, and has an identity, namely the identity matrix I. However, it doesn't always have inverses, since not all matrices are invertible! So this is not a group (instead, we call it a monoid). Thus, we are led to consider the general linear group:
Definition (General linear group). The general linear group is
GL(n, F) = {M ∈ Mat_n(F) : det M ≠ 0}.
This is closed under multiplication since the determinant is multiplicative,
and matrices with non-zero determinant are invertible.
Definition (Special linear group). The special linear group is
SL(n, F) = {M ∈ Mat_n(F) : det M = 1} ⊆ GL(n, F).
While these are obviously groups, less obviously, these are in fact Lie groups!
In the remainder of the section, we are going to casually claim that all our favorite
matrix groups are Lie groups. Proving this takes some work and machinery,
and we will not bother ourselves with that too much. However, we can do it
explicitly for certain special cases:
Example. Explicitly, we can write
\[ \mathrm{SL}(2, \mathbb{R}) = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} : a, b, c, d \in \mathbb{R},\ ad - bc = 1 \right\}. \]
The identity is the matrix with a = d = 1 and b = c = 0. For a ≠ 0, we have
d = (1 + bc)/a.
This gives us a coordinate patch for all points where a ≠ 0 in terms of b, c, a, which, in particular, contains the identity I. By considering the case where b ≠ 0, we obtain a separate coordinate chart, and these together cover all of SL(2, R), as a matrix in SL(2, R) cannot have a = b = 0.
Thus, we see that SL(2, R) has dimension 3.
In general, by a similar counting argument, we have
dim(SL(n, R)) = n^2 − 1,   dim(SL(n, C)) = 2n^2 − 2,
dim(GL(n, R)) = n^2,   dim(GL(n, C)) = 2n^2.
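As a quick sanity check of the coordinate patch above, here is a small numerical sketch of my own (using numpy, with arbitrarily chosen sample values; it is an illustration, not part of the notes): pick a ≠ 0, b, c at random, solve for d, and confirm the resulting matrix has unit determinant.

    import numpy as np

    rng = np.random.default_rng(0)
    for _ in range(5):
        a, b, c = rng.normal(size=3)
        if abs(a) < 1e-8:          # stay inside the chart where a != 0
            continue
        d = (1 + b * c) / a        # the coordinate patch: d is determined by (a, b, c)
        M = np.array([[a, b], [c, d]])
        assert abs(np.linalg.det(M) - 1) < 1e-10   # M lies in SL(2, R)
    print("All sampled matrices have det = 1")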
Most of the time, we are interested in matrix Lie groups, which will be subgroups
of GL(n, R).
Subgroups of GL(n, R)
Lemma. The general linear group
GL(n, R) = {M ∈ Mat_n(R) : det M ≠ 0}
and the orthogonal group
O(n) = {M ∈ GL(n, R) : M^T M = I}
are Lie groups.
Note that we write O(n) instead of O(n, R), since orthogonal matrices make sense only when talking about real matrices.
The orthogonal matrices are those that preserve the lengths of vectors. Indeed, for v ∈ R^n, we have
|Mv|^2 = v^T M^T M v = v^T v = |v|^2.
We notice something interesting. If M ∈ O(n), we have
1 = det(I) = det(M^T M) = det(M)^2.
So det(M) = ±1. Now det is a continuous function, and it is easy to see that det takes both values ±1. So O(n) has (at least) two connected components. Only one of these pieces contains the identity, namely the piece with det M = 1. We might expect this to be a group in its own right, and indeed it is, because det is multiplicative.
Lemma. The special orthogonal group
SO(n) = {M ∈ O(n) : det M = 1}
is a Lie group.
Given a frame {v_1, ..., v_n} in R^n (i.e. an ordered basis), any orthogonal matrix M ∈ O(n) acts on it to give another frame
v_a ∈ R^n ↦ v'_a = M v_a ∈ R^n.
Definition (Volume element). Given a frame {v_1, ..., v_n} in R^n, the volume element is
Ω = ε_{i_1 ... i_n} v_1^{i_1} v_2^{i_2} ··· v_n^{i_n}.
By direct computation, we see that an orthogonal matrix preserves the sign of the volume element iff its determinant is +1, i.e. M ∈ SO(n).
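This is easy to check numerically. The ε-contraction of a frame is just the determinant of the matrix whose rows are the frame vectors, so acting with M rescales Ω by det M. The following brute-force numpy sketch (my own illustration, not from the notes) makes this explicit:

    import numpy as np
    from itertools import permutations

    def volume_element(frame):
        """Omega = eps_{i1...in} v_1^{i1} ... v_n^{in}, computed by brute force."""
        n = len(frame)
        omega = 0.0
        for perm in permutations(range(n)):
            sign = np.linalg.det(np.eye(n)[list(perm)])   # sign of the permutation = Levi-Civita symbol
            omega += sign * np.prod([frame[a][perm[a]] for a in range(n)])
        return omega

    rng = np.random.default_rng(1)
    frame = [rng.normal(size=3) for _ in range(3)]
    M = np.linalg.qr(rng.normal(size=(3, 3)))[0]          # a random orthogonal matrix
    new_frame = [M @ v for v in frame]
    # Omega transforms by det M, so it keeps its sign iff det M = +1
    print(volume_element(new_frame) / volume_element(frame), np.linalg.det(M))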
We now want to find a more explicit description of these Lie groups, at
least in low dimensions. Often, it is helpful to classify these matrices by their
eigenvalues:
Definition (Eigenvalue). A complex number λ is an eigenvalue of M ∈ Mat_n(F) if there is some (possibly complex) vector v_λ ≠ 0 such that
M v_λ = λ v_λ.
Theorem. Let M be a real matrix. Then λ is an eigenvalue iff λ* is an eigenvalue. Moreover, if M is orthogonal, then |λ|^2 = 1.
Proof. Suppose M v_λ = λ v_λ. Then applying the complex conjugate gives
M v_λ* = λ* v_λ*.
Now suppose M is orthogonal. Then M v_λ = λ v_λ for some non-zero v_λ. We take the norm to obtain |M v_λ| = |λ||v_λ|. Using the fact that |v_λ| = |M v_λ|, we have |λ| = 1. So done.
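A quick numerical illustration of the theorem (my own sketch with numpy, not in the notes): the eigenvalues of a random orthogonal matrix come in conjugate pairs and all lie on the unit circle.

    import numpy as np

    rng = np.random.default_rng(2)
    M = np.linalg.qr(rng.normal(size=(4, 4)))[0]       # random orthogonal matrix
    eigvals = np.linalg.eigvals(M)
    print(np.round(eigvals, 3))                        # conjugate pairs (and possibly +-1)
    print(np.allclose(np.abs(eigvals), 1.0))           # all have |lambda| = 1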
Example. Let M ∈ SO(2). Since det M = 1, the eigenvalues must be of the form λ = e^{iθ}, e^{−iθ}. In this case, we have
\[ M = M(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \]
where θ is the rotation angle, valued in S^1. Here we have
M(θ_1)M(θ_2) = M(θ_2)M(θ_1) = M(θ_1 + θ_2).
So we have M(SO(2)) = S^1.
Example. Consider G = SO(3). Suppose M ∈ SO(3). Since det M = +1, and the eigenvalues have to come in complex conjugate pairs, we know one of them must be 1. Then the other two must be of the form e^{iθ}, e^{−iθ}, where θ ∈ S^1.
We pick a normalized eigenvector n for λ = 1. Then M n = n, and n · n = 1. This is known as the axis of rotation. Similarly, θ is the angle of rotation. We write M(n, θ) for this matrix, and it turns out this is
M(n, θ)_{ij} = cos θ δ_{ij} + (1 − cos θ) n_i n_j − sin θ ε_{ijk} n_k.
Note that this does not uniquely specify a matrix. We have
M(−n, 2π − θ) = M(n, θ).
Thus, to uniquely specify a matrix, we need to restrict the range of θ to 0 ≤ θ ≤ π, with the further identification that
(n, π) ∼ (−n, π).
Also note that M(n, 0) = I for any n.
Given such a matrix, we can produce a vector w = θn. Then w lies in the region
B^3 = {w ∈ R^3 : ‖w‖ ≤ π} ⊆ R^3.
This has boundary
∂B^3 = {w ∈ R^3 : ‖w‖ = π} ≅ S^2.
Now we identify antipodal points on ∂B^3. Then each vector in the resulting space corresponds to exactly one element of SO(3).
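The formula for M(n, θ) is easy to check numerically. Here is a small numpy sketch of my own (an illustration, not part of the notes) that builds M(n, θ) and verifies it is a special orthogonal matrix fixing the axis n:

    import numpy as np

    def rotation(n, theta):
        """M(n, theta)_ij = cos(theta) d_ij + (1 - cos(theta)) n_i n_j - sin(theta) eps_ijk n_k."""
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        eps = np.zeros((3, 3, 3))
        eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
        eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
        return (np.cos(theta) * np.eye(3)
                + (1 - np.cos(theta)) * np.outer(n, n)
                - np.sin(theta) * np.einsum('ijk,k->ij', eps, n))

    axis = np.array([1.0, 2.0, 2.0]) / 3.0
    M = rotation(axis, 0.7)
    print(np.allclose(M.T @ M, np.eye(3)), np.isclose(np.linalg.det(M), 1))  # M is in SO(3)
    print(np.allclose(M @ axis, axis))                                        # M n = n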
Subgroups of GL(n, C)
We can similarly consider subgroups of GL(n, C). Common examples include:
Definition (Unitary group). The unitary group is defined by
U(n) = {U ∈ GL(n, C) : U†U = I}.
These are important in physics, because unitary matrices are exactly those that preserve the norms of vectors, namely ‖v‖ = ‖Uv‖ for all v.
Again, if U†U = I, then |det(U)|^2 = 1. So det U = e^{iδ} for some δ ∈ R. Unlike the real case, the determinant can now take a continuous range of values, and this no longer disconnects the group. In fact, U(n) is indeed connected.
Definition (Special unitary group). The special unitary group is defined by
SU(n) = {U ∈ U(n) : det U = 1}.
It is an easy exercise to show that
dim[U(n)] = 2n^2 − n^2 = n^2.
For SU(n), the determinant condition imposes an additional single constraint, so we have
dim[SU(n)] = n^2 − 1.
Example. Consider the group G = U(1). This is given by
U(1) = {z ∈ C : |z| = 1}.
Therefore we have M[U(1)] = S^1.
However, we also know another Lie group with underlying manifold S^1, namely SO(2). So are they “the same”?
2.3 Properties of Lie groups
The first thing we want to consider is when two Lie groups are “the same”. We
take the obvious definition of isomorphism.
Definition (Homomorphism of Lie groups). Let G, H be Lie groups. A map J : G → H is a homomorphism if it is smooth and for all g_1, g_2 ∈ G, we have
J(g_1 g_2) = J(g_1)J(g_2).
(the second condition says it is a homomorphism of groups)
Definition (Isomorphic Lie groups). An isomorphism of Lie groups is a bijective homomorphism whose inverse is also a homomorphism. Two Lie groups are isomorphic if there is an isomorphism between them.
Example. We define the map J : U(1) → SO(2) by
\[ J(e^{i\theta}) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \in \mathrm{SO}(2). \]
This is easily seen to be a homomorphism, and we can construct an inverse
similarly.
Exercise. Show that M(SU(2)) ≅ S^3.
We now look at some words that describe manifolds. Usually, manifolds that
satisfy these properties are considered nice.
The first notion is the idea of compactness. The actual definition is a bit
weird and takes time to get used to, but there is an equivalent characterization
if the manifold is a subset of R^n.
Definition (Compact). A manifold (or topological space) X is compact if every open cover of X has a finite subcover.
If the manifold is a subspace of some R^n, then it is compact iff it is closed and bounded.
Example. The sphere S^2 is compact, but the hyperboloid given by x^2 − y^2 − z^2 = 1 (as a subset of R^3) is not.
Example. The orthogonal groups are compact. Recall that the definition of an orthogonal matrix requires M^T M = I. Since this is given by a polynomial equation in the entries, it is closed. It is also bounded, since each entry of M has magnitude at most 1.
Similarly, the special orthogonal groups are compact.
Example. Sometimes we want to study more exciting spaces such as Minkowski spaces. Let n = p + q, and consider the matrices that preserve the metric on R^n of signature (p, q), namely
O(p, q) = {M ∈ GL(n, R) : M^T η M = η},
where
\[ \eta = \begin{pmatrix} I_p & 0 \\ 0 & -I_q \end{pmatrix}. \]
For p, q both non-zero, this group is non-compact. For example, if we take SO(1, 1), then the matrices are all of the form
\[ M = \begin{pmatrix} \cosh\theta & \sinh\theta \\ \sinh\theta & \cosh\theta \end{pmatrix}, \]
where θ ∈ R. So this space is homeomorphic to R, which is not compact.
Another common example of a non-compact group is the Lorentz group
O(3, 1).
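As a quick illustrative check (a numpy sketch of my own, not from the notes), a boost matrix of the above form preserves η, yet its entries grow without bound, so SO(1, 1) cannot be closed and bounded:

    import numpy as np

    eta = np.diag([1.0, -1.0])
    for theta in [0.5, 5.0, 50.0]:
        M = np.array([[np.cosh(theta), np.sinh(theta)],
                      [np.sinh(theta), np.cosh(theta)]])
        assert np.allclose(M.T @ eta @ M, eta)   # M preserves the signature (1,1) metric
        print(theta, M.max())                    # entries blow up as theta grows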
Another important property is simply-connectedness.
Definition (Simply connected). A manifold M is simply connected if it is connected (there is a path between any two points), and every loop l : S^1 → M can be contracted to a point. Equivalently, any two paths between any two points can be continuously deformed into each other.
Example. The circle S^1 is not simply connected, as the loop that goes around the circle once cannot be continuously deformed to the loop that does nothing (this is non-trivial to prove).
Example. The 2-sphere S^2 is simply connected, but the torus is not. SO(3) is also not simply connected. Working in the ball picture above, we can define a loop by
l(θ) = θn for θ ∈ [0, π), and l(θ) = (θ − 2π)n for θ ∈ [π, 2π).
This is indeed a valid path because we identify antipodal points.
The failure of simply-connectedness is measured by the fundamental group.
Definition (Fundamental group/First homotopy group). Let M be a manifold, and x_0 ∈ M be a preferred point. We define π_1(M) to be the set of equivalence classes of loops starting and ending at x_0, where two loops are considered equivalent if they can be continuously deformed into each other.
This has a group structure, with the identity given by the “loop” that stays at x_0 all the time, and composition given by doing one loop after the other.
Example. π_1(S^2) = {0} and π_1(T^2) = Z × Z. We also have π_1(SO(3)) = Z/2Z.
We will not prove these results.
3 Lie algebras
It turns out that in general, the study of a Lie group can be greatly simplified by
studying its associated Lie algebra, which can be thought of as an infinitesimal
neighbourhood of the identity in the Lie group.
To get to that stage, we need to develop some theory of Lie algebra, and also
of differentiation.
3.1 Lie algebras
We begin with a rather formal and weird definition of a Lie algebra.
Definition (Lie algebra). A Lie algebra g is a vector space (over R or C) with a bracket
[·, ·] : g × g → g
satisfying
(i) [X, Y] = −[Y, X] for all X, Y ∈ g (antisymmetry)
(ii) [αX + βY, Z] = α[X, Z] + β[Y, Z] for all X, Y, Z ∈ g and α, β ∈ F ((bi)linearity)
(iii) [X, [Y, Z]] + [Y, [Z, X]] + [Z, [X, Y]] = 0 for all X, Y, Z ∈ g. (Jacobi identity)
Note that linearity in the second argument follows from linearity in the first
argument and antisymmetry.
Some (annoying) pure mathematicians will complain that we should state antisymmetry as [X, X] = 0 instead, which is a stronger condition if we are working over a field of characteristic 2, but I do not care about such fields.
There isn’t much one can say to motivate the Jacobi identity. It is a property
that our naturally-occurring Lie algebras have, and turns out to be useful when
we want to prove things about Lie algebras.
Example. Suppose we have a vector space V with an associative product (e.g. a space of matrices with matrix multiplication). We can then turn V into a Lie algebra by defining
[X, Y] = XY − YX.
We can then prove the axioms by writing out the expressions.
Definition (Dimension of Lie algebra). The dimension of a Lie algebra is the dimension of the underlying vector space.
Given a finite-dimensional Lie algebra, we can pick a basis B for g:
B = {T^a : a = 1, ..., dim g}.
Then any X ∈ g can be written as
X = X_a T^a = Σ_{a=1}^{n} X_a T^a,
where X_a ∈ F and n = dim g.
By linearity, the bracket of elements X, Y ∈ g can be computed via
[X, Y] = X_a Y_b [T^a, T^b].
In other words, the whole structure of the Lie algebra can be given by the bracket of basis vectors. We know that [T^a, T^b] is again an element of g. So we can write
[T^a, T^b] = f^{ab}_c T^c,
where f^{ab}_c ∈ F are the structure constants.
Definition (Structure constants). Given a Lie algebra g with a basis B = {T^a}, the structure constants f^{ab}_c are given by
[T^a, T^b] = f^{ab}_c T^c.
By the antisymmetry of the bracket, we know
Proposition. f^{ba}_c = −f^{ab}_c.
By writing out the Jacobi identity, we obtain
Proposition. f^{ab}_c f^{cd}_e + f^{da}_c f^{cb}_e + f^{bd}_c f^{ca}_e = 0.
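For a concrete instance we can take f^{ab}_c = ε_{abc} (which we will meet below for su(2)) and check both identities numerically. This is an illustrative numpy sketch of my own, not part of the notes:

    import numpy as np

    # structure constants f[a, b, c] = f^{ab}_c; here the su(2) example f^{ab}_c = eps_abc
    f = np.zeros((3, 3, 3))
    f[0, 1, 2] = f[1, 2, 0] = f[2, 0, 1] = 1
    f[0, 2, 1] = f[2, 1, 0] = f[1, 0, 2] = -1

    # antisymmetry: f^{ba}_c = -f^{ab}_c
    print(np.allclose(f.transpose(1, 0, 2), -f))

    # Jacobi: f^{ab}_c f^{cd}_e + f^{da}_c f^{cb}_e + f^{bd}_c f^{ca}_e = 0
    jac = (np.einsum('abc,cde->abde', f, f)
           + np.einsum('dac,cbe->abde', f, f)
           + np.einsum('bdc,cae->abde', f, f))
    print(np.allclose(jac, 0))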
As before, we would like to know when two Lie algebras are the same.
Definition (Homomorphism of Lie algebras). A homomorphism of Lie algebras g, h is a linear map f : g → h such that
[f(X), f(Y)] = f([X, Y]).
Definition (Isomorphism of Lie algebras). An isomorphism of Lie algebras is a homomorphism with an inverse that is also a homomorphism. Two Lie algebras are isomorphic if there is an isomorphism between them.
Similar to how we can have a subgroup, we can also have a subalgebra h of g.
Definition (Subalgebra). A subalgebra of a Lie algebra g is a vector subspace that is also a Lie algebra under the bracket.
Recall that in group theory, we have a stronger notion of a normal subgroup,
which are subgroups invariant under conjugation. There is an analogous notion
for subalgebras.
Definition (Ideal). An ideal of a Lie algebra g is a subalgebra h such that [X, Y] ∈ h for all X ∈ g and Y ∈ h.
Example. Every Lie algebra g has two trivial ideals h = {0} and h = g.
Definition (Derived algebra). The derived algebra of a Lie algebra g is
i = [g, g] = span_F {[X, Y] : X, Y ∈ g},
where F = R or C depending on the underlying field.
It is clear that this is an ideal. Note that this may or may not be trivial.
Definition (Center of Lie algebra). The center of a Lie algebra g is given by
ξ(g) = {X ∈ g : [X, Y] = 0 for all Y ∈ g}.
This is an ideal, by the Jacobi identity.
Definition (Abelian Lie algebra). A Lie algebra g is abelian if [X, Y] = 0 for all X, Y ∈ g. Equivalently, if ξ(g) = g.
Definition (Simple Lie algebra). A simple Lie algebra is a Lie algebra g that is non-abelian and possesses no non-trivial ideals.
If g is simple, then since the center is always an ideal, and it is not g since g is not abelian, we must have ξ(g) = {0}. On the other hand, the derived algebra is also an ideal, and is non-zero since g is not abelian. So we must have i(g) = g.
We will later see that these are the Lie algebras on which we can define a non-
degenerate invariant inner product. In fact, there is a more general class, known
as the semi-simple Lie algebras, that are exactly those for which non-degenerate
invariant inner products can exist.
These are important in physics because, as we will later see, to define the
Lagrangian of a gauge theory, we need to have a non-degenerate invariant inner
product on the Lie algebra. In other words, we need a (semi-)simple Lie algebra.
3.2 Differentiation
We are eventually going to get a Lie algebra from a Lie group. This is obtained
by looking at the tangent vectors at the identity. When we have homomorphisms
f
:
G H
of Lie groups, they are in particular smooth, and taking the derivative
will give us a map from tangent vectors in
G
to tangent vectors in
H
, which
in turn restricts to a map of their Lie algebras. So we need to understand how
differentiation works.
Before that, we need to understand how tangent vectors work. This is
completely general and can be done for manifolds which are not necessarily Lie
groups. Let M be a smooth manifold of dimension D and p ∈ M a point. We want to formulate a notion of a “tangent vector” at the point p. We know how we can do this if the space is R^n: a tangent vector is just any vector in R^n. By definition of a manifold, near a point p, the manifold looks just like R^n. So we can just pretend it is R^n, and use tangent vectors in R^n.
However, this definition of a tangent vector requires us to pick a particular
coordinate chart. It would be nice to have a more “intrinsic” notion of vectors.
Recall that in R^n, if we have a function f : R^n → R and a tangent vector v at p, then we can ask for the directional derivative of f along v. We have a correspondence
v ↔ ∂_v.
This directional derivative takes in a function and returns its derivative at a point, and is sort-of an “intrinsic” notion. Thus, instead of talking about v, we will talk about the associated directional derivative ∂_v.
It turns out the characterizing property of this directional derivative is the product rule:
∂_v(fg) = f(p) ∂_v g + g(p) ∂_v f.
So a “directional derivative” is a linear map from the space of smooth functions M → R to R that satisfies the Leibniz rule.
Definition (Tangent vector). Let M be a manifold and write C^∞(M) for the vector space of smooth functions on M. For p ∈ M, a tangent vector is a linear map v : C^∞(M) → R such that for any f, g ∈ C^∞(M), we have
v(fg) = f(p)v(g) + v(f)g(p).
It is clear that this forms a vector space, and we write T_p M for the vector space of tangent vectors at p.
Now of course one would be worried that this definition is too inclusive, in
that we might have included things that are not genuinely directional derivatives.
Fortunately, this is not the case, as the following proposition tells us.
In the case where M is a submanifold of R^n, we can identify the tangent space with an actual linear subspace of R^n. This is easily visualized when M is a surface in R^3, where the tangent vectors consist of the vectors in R^3 “parallel to” the surface at the point. In general, a “direction” in M is also a “direction” in R^n, and tangent vectors of R^n can be easily identified with elements of R^n in the usual way.
This will be useful when we study matrix Lie groups, because this means the
tangent space will consist of matrices again.
Proposition. Let M be a manifold with local coordinates {x^i}_{i=1,...,D} for some region U ⊆ M containing p. Then T_p M has basis
{∂/∂x^j}_{j=1,...,D}.
In particular, dim T_p M = dim M.
This result on the dimension is extremely useful. Usually, we can manage to
find a bunch of things that we know lie in the tangent space, and to show that
we have found all of them, we simply count the dimensions.
One way we can obtain tangent vectors is by differentiating a curve.
Definition (Smooth curve). A smooth curve is a smooth map γ : R → M. More generally, a curve is a C^1 function R → M.
Since we only want the first derivative, being C^1 is good enough.
There are two ways we can try to define the derivative of the curve at time t = 0 ∈ R. Using the definition of a tangent vector, to specify γ̇(0) is to tell how we can differentiate a function f : M → R at p = γ(0) in the direction of γ̇(0). This is easy. We define
γ̇(0)(f) = d/dt f(γ(t))|_{t=0} ∈ R.
If this seems too abstract, we can also do it in local coordinates.
We introduce some coordinates {x^i} near p ∈ M. We then refer to γ by coordinates (at least near p), by
γ : t ∈ R ↦ {x^i(t) ∈ R : i = 1, ..., D}.
By the smoothness condition, we know x^i(t) is differentiable, with x^i(0) = 0. Then the tangent vector of the curve γ at p is
v_γ = ẋ^i(0) ∂/∂x^i ∈ T_p(M),   where ẋ^i(t) = dx^i/dt.
It follows from the chain rule that this is exactly the same thing as what we described before.
More generally, we can define the derivative of a map between manifolds.
Definition (Derivative). Let f : M → N be a map between manifolds. The derivative of f at p ∈ M is the linear map
Df_p : T_p M → T_{f(p)} N
given by the formula
(Df_p)(v)(g) = v(g ∘ f)
for v ∈ T_p M and g ∈ C^∞(N).
This will be useful later when we want to get a map of Lie algebras from a
map of Lie groups.
3.3 Lie algebras from Lie groups
We now try to get a Lie algebra from a Lie group G, by considering T_e(G).
Theorem. The tangent space of a Lie group G at the identity naturally admits a Lie bracket
[·, ·] : T_e G × T_e G → T_e G
such that
L(G) = (T_e G, [·, ·])
is a Lie algebra.
Definition (Lie algebra of a Lie group). Let G be a Lie group. The Lie algebra of G, written L(G) or g, is the tangent space T_e G under the natural Lie bracket.
The general convention is that if the name of a Lie group is in upper case
letters, then the corresponding Lie algebra is the same name with lower case
letters in fraktur font. For example, the Lie algebra of SO(n) is so(n).
Proof. We will only prove it for the case of a matrix Lie group G ⊆ Mat_n(F). Then T_I G can be naturally identified as a subspace of Mat_n(F). There is then an obvious candidate for the Lie bracket: the actual commutator
[X, Y] = XY − YX.
The basic axioms of a Lie algebra can be easily (but painfully) checked.
However, we are not yet done. We have to check that if we take the bracket of two elements in T_I(G), then it still stays within T_I(G). This will be done by producing a curve in G whose derivative at 0 is the commutator [X, Y].
In general, let γ be a smooth curve in G with γ(0) = I. Then we can Taylor expand
γ(t) = I + γ̇(0) t + ½ γ̈(0) t^2 + O(t^3).
Now given X, Y ∈ T_e G, we take curves γ_1, γ_2 such that γ̇_1(0) = X and γ̇_2(0) = Y. Consider the curve given by
γ(t) = γ_1⁻¹(t) γ_2⁻¹(t) γ_1(t) γ_2(t) ∈ G.
We can Taylor expand this to find that
γ(t) = I + [X, Y] t^2 + O(t^3).
This isn't too helpful, as [X, Y] is not the coefficient of t. We now do the slightly dodgy step, where we consider the curve
γ̃(t) = γ(√t) = I + [X, Y] t + O(t^{3/2}).
Now this is only defined for t ≥ 0, but it is good enough, and we see that its derivative at t = 0 is [X, Y]. So the commutator is in T_I(G). So we know that L(G) is a Lie algebra.
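We can see this Taylor expansion concretely with matrices. The following sketch (an illustration of my own using numpy/scipy, not from the notes) builds γ(t) = γ_1⁻¹ γ_2⁻¹ γ_1 γ_2 from two one-parameter curves and checks that (γ(t) − I)/t^2 approaches the commutator [X, Y] as t → 0:

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0., 1.], [-1., 0.]])      # two tangent vectors at the identity
    Y = np.array([[0., 1.], [1., 0.]])
    comm = X @ Y - Y @ X

    for t in [1e-1, 1e-2, 1e-3]:
        g1, g2 = expm(t * X), expm(t * Y)    # curves with gamma_i(0) = I and derivatives X, Y
        gamma = np.linalg.inv(g1) @ np.linalg.inv(g2) @ g1 @ g2
        print(t, np.max(np.abs((gamma - np.eye(2)) / t**2 - comm)))  # error -> 0 as t -> 0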
Example. Let G = GL(n, F), where F = R or C. Then L(GL(n, F)) = gl(n, F) = Mat_n(F), because we know it must be a subspace of Mat_n(F) whose dimension equals that of Mat_n(F) itself, hence all of Mat_n(F).
More generally, for a vector space V, say of dimension n, we can consider the group of invertible linear maps V → V, written GL(V). By picking a basis of V, we can construct an isomorphism GL(V) ≅ GL(n, F), and this gives us a smooth structure on GL(V) (this does not depend on the basis chosen). Then the Lie algebra gl(V) is the collection of all linear maps V → V.
Example. If G = SO(2), then the curves are of the form
\[ g(t) = M(\theta(t)) = \begin{pmatrix} \cos\theta(t) & -\sin\theta(t) \\ \sin\theta(t) & \cos\theta(t) \end{pmatrix} \in \mathrm{SO}(2). \]
So we have
\[ \dot{g}(0) = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \dot{\theta}(0). \]
Since the Lie algebra has dimension 1, these are all the matrices in the Lie algebra. So the Lie algebra is given by
\[ \mathfrak{so}(2) = \left\{ \begin{pmatrix} 0 & -c \\ c & 0 \end{pmatrix} : c \in \mathbb{R} \right\}. \]
Example. More generally, suppose G = SO(n), and we have a path R(t) ∈ SO(n). By definition, we have
R^T(t) R(t) = I.
Differentiating gives
Ṙ^T(t) R(t) + R^T(t) Ṙ(t) = 0
for all t ∈ R. Evaluating at t = 0, and noting that R(0) = I, we have
X^T + X = 0,
where X = Ṙ(0) is a tangent vector. There are no further constraints from demanding that det R = +1, since this is obeyed anyway for any matrix in O(n) near I.
By dimension counting, we know the antisymmetric matrices are exactly the matrices in L(O(n)) or L(SO(n)). So we have
o(n) = so(n) = {X ∈ Mat_n(R) : X^T = −X}.
Example. Consider G = SU(n). Suppose we have a path U(t) ∈ SU(n), with U(0) = I. Then we have
U†(t) U(t) = I.
Then again by differentiation, we obtain
Z† + Z = 0,
where Z = U̇(0) ∈ su(n). So we must have
su(n) ⊆ {Z ∈ Mat_n(C) : Z† = −Z}.
What does the condition det U(t) = 1 give us? We can do a Taylor expansion by
det U(t) = 1 + tr Z · t + O(t^2).
So requiring that det U(t) = 1 gives the condition
tr Z = 0.
By dimension counting, we know traceless anti-Hermitian matrices are all the elements in the Lie algebra. So we have
su(n) = {Z ∈ Mat_n(C) : Z† = −Z, tr Z = 0}.
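Anticipating the exponential map of Section 3.4, here is a quick numerical sanity check (my own sketch using numpy/scipy, not from the notes): exponentiating a random traceless anti-Hermitian matrix produces a special unitary matrix.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    Z = A - A.conj().T                       # anti-Hermitian: Z^dagger = -Z
    Z -= np.trace(Z) / 3 * np.eye(3)         # make it traceless, so Z is in su(3)
    U = expm(Z)
    print(np.allclose(U.conj().T @ U, np.eye(3)))   # U is unitary
    print(np.isclose(np.linalg.det(U), 1))          # det U = 1, so U is in SU(3)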
Example. We look at SU(2) in detail. We know that su(2) is the 2 × 2 traceless anti-Hermitian matrices.
These are given by multiples of the Pauli matrices σ_j, for j = 1, 2, 3, satisfying
σ_i σ_j = δ_ij I + i ε_ijk σ_k.
They can be written explicitly as
\[ \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \]
One can check manually that the generators for the Lie algebra are given by
T^a = −½ i σ_a.
Indeed, each T^a is in su(2), and they are independent. Since we know dim su(2) = 3, these must generate everything.
We have
[T^a, T^b] = −¼ [σ_a, σ_b] = −½ i ε_abc σ_c = f^{ab}_c T^c,
where the structure constants are
f^{ab}_c = ε_abc.
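These relations are easy to verify numerically; here is a small numpy sketch (my own illustration, not part of the notes):

    import numpy as np

    sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]]),
             np.array([[1, 0], [0, -1]], dtype=complex)]
    T = [-0.5j * s for s in sigma]           # generators T^a = -(i/2) sigma_a

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

    # check [T^a, T^b] = eps_abc T^c
    for a in range(3):
        for b in range(3):
            lhs = T[a] @ T[b] - T[b] @ T[a]
            rhs = sum(eps[a, b, c] * T[c] for c in range(3))
            assert np.allclose(lhs, rhs)
    print("su(2) structure constants are eps_abc")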
Example. Take G = SO(3). Then so(3) is the space of 3 × 3 real antisymmetric matrices, which one can manually check are generated by
\[ \tilde{T}^1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, \quad \tilde{T}^2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad \tilde{T}^3 = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \]
We then have
(T̃^a)_{bc} = −ε_abc.
Then the structure constants are given by
[T̃^a, T̃^b] = f^{ab}_c T̃^c,
where
f^{ab}_c = ε_abc.
Note that the structure constants are the same! Since the structure constants
completely determine the brackets of the Lie algebra, if the structure constants
are the same, then the Lie algebras are isomorphic. Of course, the structure
constants depend on which basis we choose. So the real statement is that if
there are some bases in which the structure constants are equal, then the Lie
algebras are isomorphic.
So we get that so(3) ≅ su(2), but SO(3) is not isomorphic to SU(2). Indeed, the underlying manifold of SU(2) is the 3-sphere, but the underlying manifold of SO(3) has a fancier construction. They are not even topologically homeomorphic, since SU(2) is simply connected, but SO(3) is not. More precisely, we have
π_1(SO(3)) = Z/2Z,   π_1(SU(2)) = {0}.
So we see that we don't have a perfect correspondence between Lie algebras and Lie groups. However, usually, two Lie groups with the same Lie algebra have some covering relation. For example, in this case we have
SO(3) = SU(2)/(Z/2Z),
where
Z/2Z = {I, −I} ⊆ SU(2)
is the center of SU(2).
We can explicitly construct this correspondence as follows. We define the map d : SU(2) → SO(3) by
d(A)_{ij} = ½ tr(σ_i A σ_j A†),
and one can check that d(A) ∈ SO(3). This is globally a 2-to-1 map. It is easy to see that
d(A) = d(−A),
and conversely if d(A) = d(B), then A = ±B. By the first isomorphism theorem, this gives an isomorphism
SO(3) = SU(2)/(Z/2Z),
where Z/2Z = {I, −I} is the center of SU(2).
Geometrically, we know M(SU(2)) ≅ S^3. Then the manifold of SO(3) is obtained by identifying antipodal points of S^3.
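Here is a small numerical illustration of the 2-to-1 map (a numpy sketch of my own, using the formula above with an arbitrarily chosen element of SU(2)): d(A) is special orthogonal and d(−A) = d(A).

    import numpy as np

    sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
             np.array([[0, -1j], [1j, 0]]),
             np.array([[1, 0], [0, -1]], dtype=complex)]

    def d(A):
        """d(A)_ij = 1/2 tr(sigma_i A sigma_j A^dagger)."""
        return np.array([[0.5 * np.trace(sigma[i] @ A @ sigma[j] @ A.conj().T).real
                          for j in range(3)] for i in range(3)])

    theta, n = 0.9, np.array([1.0, 2.0, 2.0]) / 3.0
    A = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * sum(n[k] * sigma[k] for k in range(3))
    R = d(A)
    print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1))  # d(A) is in SO(3)
    print(np.allclose(d(-A), R))                                             # d(-A) = d(A)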
3.4 The exponential map
So far, we have been talking about vectors in the tangent space of the identity
e
. It turns out that the group structure means this tells us about the tangent
space of all points. To see this, we make the following definition:
Definition (Left and right translation). For each h ∈ G, we define the left and right translation maps
L_h : G → G,  g ↦ hg,
R_h : G → G,  g ↦ gh.
These maps are bijections, and in fact diffeomorphisms (i.e. smooth maps with smooth inverses), because they have smooth inverses L_{h⁻¹} and R_{h⁻¹} respectively.
In general, there is no reason to prefer left translation over right, or vice
versa. By convention, we will mostly talk about left translation, but the results
work equally well for right translation.
Then since L_h(e) = h, the derivative D_e L_h gives us a linear isomorphism between T_e G and T_h G, with inverse given by D_h L_{h⁻¹}. Then in particular, if we are given a single tangent vector X ∈ L(G), we obtain a tangent vector at all points in G, i.e. a vector field.
Definition (Vector field). A vector field V on G specifies a tangent vector
V(g) ∈ T_g G
at each point g ∈ G. Suppose we can pick coordinates {x^i} on some subset of G, and write
v(g) = v^i(g) ∂/∂x^i ∈ T_g G.
The vector field is smooth if the v^i(g) ∈ R are differentiable for any coordinate chart.
As promised, given any X ∈ T_e G, we can define a vector field by using ℒ_g := D_e L_g to move this to all places in the world. More precisely, we can define
V(g) = ℒ_g(X).
This has the interesting property that if X is non-zero, then V is non-zero everywhere, because ℒ_g is a linear isomorphism. So we found that
Proposition. Let G be a Lie group of dimension > 0. Then G has a nowhere-vanishing vector field.
This might seem like a useless thing to know, but it tells us that certain
manifolds cannot be made into Lie groups.
Theorem (Poincaré–Hopf theorem). Let M be a compact manifold. If M has non-zero Euler characteristic, then any vector field on M has a zero.
The Poincaré–Hopf theorem actually tells us how we can count the actual
number of zeroes, but we will not go into that. We will neither prove this, nor
use it for anything useful. But in particular, it has the following immediate
corollary:
Theorem (Hairy ball theorem). Any smooth vector field on S^2 has a zero. More generally, any smooth vector field on S^{2n} has a zero.
Thus, it follows that S^{2n} can never be a Lie group. In fact, the full statement of the Poincaré–Hopf theorem implies that if we have a compact connected Lie group of dimension 2, then it must be the torus! (We have long classified the list of all possible compact 2-dimensional manifolds, and we can check that only the torus works.)
That was all just for our own amusement. We go back to serious work.
What does this left-translation map look like when we have a matrix Lie group G ⊆ Mat_n(F)? For all h ∈ G and X ∈ L(G), we can represent X as a matrix in Mat_n(F). We then have a concrete representation of the left translation by
ℒ_h X = hX ∈ T_h G.
Indeed, h acts on G by left multiplication, which is a linear map if we view G as a subset of the vector space Mat_n(F). Then we just note that the derivative of any linear map is “itself”, and the result follows.
Now we may ask ourselves the question: given a tangent vector X ∈ T_e G, can we find a path that “always” points in the direction X? More concretely, we want to find a path γ : R → G such that
γ̇(t) = ℒ_{γ(t)} X.
Here we are using ℒ_{γ(t)} to identify T_{γ(t)} G with T_e G = L(G). In the case of a matrix Lie group, this just says that
γ̇(t) = γ(t) X.
We also specify the boundary condition γ(0) = e.
Now this is just an ODE, and by general theory of ODE’s, we know a solution
always exists and is unique. Even better, in the case of a matrix Lie group, there
is a concrete construction of this curve.
Definition (Exponential). Let M ∈ Mat_n(F) be a matrix. The exponential is defined by
exp(M) = Σ_{ℓ=0}^{∞} M^ℓ/ℓ! ∈ Mat_n(F).
The convergence properties of this series are very good, just like our usual exponential.
Theorem. For any matrix Lie group G, the map exp restricts to a map L(G) → G.
Proof. We will not prove this, but on the first example sheet, we will prove this manually for G = SU(n).
We now let
g(t) = exp(tX).
We claim that this is the curve we were looking for.
To check that it satisfies the desired properties, we simply have to compute
g(0) = exp(0) = I,
and also
dg(t)/dt = Σ_{ℓ=1}^{∞} t^{ℓ−1} X^ℓ/(ℓ−1)! = exp(tX) X = g(t) X.
So we are done.
We now consider the set
S_X = {exp(tX) : t ∈ R}.
This is an abelian Lie subgroup of G with multiplication given by
exp(tX) exp(sX) = exp((t + s)X)
by the usual proof. These are known as one-parameter subgroups.
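The whole story can be tested directly with scipy's matrix exponential (an illustrative sketch of my own, not part of the notes): exponentiating an antisymmetric matrix gives a special orthogonal matrix, and exp(tX)exp(sX) = exp((t+s)X) along a one-parameter subgroup.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(4)
    A = rng.normal(size=(3, 3))
    X = A - A.T                              # X is in so(3): antisymmetric

    R = expm(X)
    print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1))   # exp(X) is in SO(3)

    t, s = 0.3, 1.7
    print(np.allclose(expm(t * X) @ expm(s * X), expm((t + s) * X)))          # one-parameter subgroup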
Unfortunately, it is not true in general that
exp(X) exp(Y) = exp(X + Y),
since the usual proof assumes that X and Y commute. Instead, what we've got is the Baker–Campbell–Hausdorff formula.
Theorem (Baker–Campbell–Hausdorff formula). We have
exp(X) exp(Y) = exp( X + Y + ½[X, Y] + (1/12)([X, [X, Y]] − [Y, [X, Y]]) + ··· ).
It is possible to find the general formula for all the terms, but it is messy.
We will not prove this.
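The truncation shown above is easy to test numerically for small matrices. Here is an illustrative numpy/scipy sketch of my own (not from the notes) comparing exp(X)exp(Y) with the exponential of the series truncated at third order; the mismatch shrinks roughly like the fourth power of the overall scale.

    import numpy as np
    from scipy.linalg import expm

    def bch3(X, Y):
        """X + Y + 1/2 [X,Y] + 1/12 ([X,[X,Y]] - [Y,[X,Y]]): the BCH series to third order."""
        c = lambda A, B: A @ B - B @ A
        return X + Y + 0.5 * c(X, Y) + (c(X, c(X, Y)) - c(Y, c(X, Y))) / 12.0

    rng = np.random.default_rng(5)
    A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
    for scale in [0.5, 0.25, 0.125]:
        X, Y = scale * A, scale * B
        err = np.max(np.abs(expm(X) @ expm(Y) - expm(bch3(X, Y))))
        print(scale, err)     # error decreases roughly like scale**4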
By the inverse function theorem, we know the map exp is locally bijective. So we know L(G) completely determines G in some neighbourhood of e. However, exp is not globally bijective. Indeed, we already know that the Lie algebra doesn't completely determine the Lie group, as SO(3) and SU(2) have the same Lie algebra but are different Lie groups.
In general, exp can fail to be bijective in two ways. If G is not connected, then exp cannot be surjective, since by continuity, the image of exp must be connected.
Example. Consider the groups O(n) and SO(n). Then the Lie algebra of O(n) is
o(n) = {X ∈ Mat_n(R) : X + X^T = 0}.
So if X ∈ o(n), then tr X = 0. Then we have
det(exp(X)) = exp(tr X) = exp(0) = +1.
So any matrix in the image of exp has determinant +1, and hence can only lie inside SO(n). It turns out that the image of exp is indeed SO(n).
More generally, we have
Proposition. Let G be a Lie group, and g be its Lie algebra. Then the image of g under exp lies in the connected component of e (and for the compact groups we mostly care about, it is in fact the whole connected component).
On the other hand, exp can also fail to be injective. This happens when G has a U(1) subgroup.
Example. Let G = U(1). Then
u(1) = {ix : x ∈ R}.
We then have
exp(ix) = e^{ix}.
This is certainly not injective. In particular, we have
exp(ix) = exp(i(x + 2π))
for any x.
4 Representations of Lie algebras
4.1 Representations of Lie groups and algebras
So far, we have just talked about Lie groups and Lie algebras abstractly. But we
know these groups don’t just sit there doing nothing. They act on things. For
example, the group GL(n, R) acts on the vector space R^n in the obvious way. In general, the action of a group on a vector space is known as a representation.
Definition (Representation of group). Let G be a group and V be a (finite-dimensional) vector space over a field F. A representation of G on V is given by specifying invertible linear maps D(g) : V → V (i.e. D(g) ∈ GL(V)) for each g ∈ G such that
D(gh) = D(g)D(h)
for all g, h ∈ G. In the case where G is a Lie group and F = R or C, we require that the map D : G → GL(V) is smooth.
The space V is known as the representation space, and we often write the representation as the pair (V, D).
Here if we pick a basis {e_1, ..., e_n} for V, then we can identify GL(V) with GL(n, F), and this obtains a canonical smooth structure when F = R or C. This smooth structure does not depend on the basis chosen.
In general, the map D need not be injective or surjective.
Proposition. Let D be a representation of a group G. Then D(e) = I and
D(g⁻¹) = D(g)⁻¹.
Proof. We have
D(e) = D(ee) = D(e)D(e).
Since D(e) is invertible, multiplying by the inverse gives
D(e) = I.
Similarly, we have
D(g)D(g⁻¹) = D(gg⁻¹) = D(e) = I.
So it follows that D(g)⁻¹ = D(g⁻¹).
Now why do we care about representations? Mathematically, we can learn
a lot about a group in terms of its possible representations. However, from a
physical point of view, knowing about representations of a group is also very
important. When studying field theory, our fields take values in a fixed vector space V, and when we change coordinates, our field will “transform accordingly”
according to some rules. For example, we say scalar fields “don’t transform”,
but, say, the electromagnetic field tensor transforms “as a 2-tensor”.
We can describe the spacetime symmetries by a group G, so that specifying a “change of coordinates” is equivalent to giving an element of G. For example, in special relativity, changing coordinates corresponds to giving an element of the Lorentz group O(3, 1).
Now if we want to say how our objects in V transform when we change coordinates, this is exactly the same as specifying a representation of G on V!
So understanding what representations are available lets us know what kinds of
fields we can have.
The problem, however, is that representations of Lie groups are very hard.
Lie groups are very big geometric structures with a lot of internal complexity.
So instead, we might try to find representations of their Lie algebras instead.
Definition (Representation of Lie algebra). Let g be a Lie algebra. A representation ρ of g on a vector space V is a collection of linear maps
ρ(X) ∈ gl(V)
for each X ∈ g, i.e. ρ(X) : V → V is a linear map, not necessarily invertible. These are required to satisfy the conditions
[ρ(X_1), ρ(X_2)] = ρ([X_1, X_2])
and
ρ(αX_1 + βX_2) = αρ(X_1) + βρ(X_2).
The vector space V is known as the representation space. Similarly, we often write the representation as (V, ρ).
Note that it is possible to talk about a complex representation of a real Lie algebra, because any complex Lie algebra (namely gl(V) for a complex vector space V) can be thought of as a real Lie algebra by “forgetting” that we can multiply by complex numbers, and indeed this is often what we care about.
Definition (Dimension of representation). The dimension of a representation is the dimension of the representation space.
We will later see that a representation of a Lie group gives rise to a representation of the Lie algebra. The representation is not too hard to obtain: if we have a representation D : G → GL(V) of the Lie group, taking the derivative of this map gives us the confusingly-denoted D_e D : T_e G → T_I(GL(V)), which is a map g → gl(V). To check that this is indeed a representation, we will have to see that it respects the Lie bracket. We will do this later when we study the relation between representations of Lie groups and Lie algebras.
Before we do that, we look at some important examples of representations of
Lie algebras.
Definition (Trivial representation). Let g be a Lie algebra of dimension D. The trivial representation is the representation d_0 : g → F given by d_0(X) = 0 for all X ∈ g. This has dimension 1.
Definition (Fundamental representation). Let g = L(G) for G ⊆ Mat_n(F). The fundamental representation is given by d_f : g → Mat_n(F) given by
d_f(X) = X.
This has dim(d_f) = n.
Definition (Adjoint representation). All Lie algebras come with an adjoint representation d_Adj of dimension dim(g) = D. This is given by mapping X ∈ g to the linear map
ad_X : g → g,  Y ↦ [X, Y].
By linearity of the bracket, this is indeed a linear map g → gl(g).
There is a better way of thinking about this. Suppose our Lie algebra g comes from a Lie group G. Writing Aut(G) for all the isomorphisms G → G, we know there is a homomorphism
Φ : G → Aut(G),  g ↦ Φ_g,
given by conjugation:
Φ_g(x) = gxg⁻¹.
Now by taking the derivative, we can turn each Φ_g into a linear isomorphism g → g, i.e. an element of GL(g). So we found ourselves a homomorphism
Ad : G → GL(g),
which is a representation of the Lie group G! It is an exercise to show that the corresponding representation of the Lie algebra g is indeed the adjoint representation.
Thus, if we view conjugation as a natural action of a group on itself, then the adjoint representation is the natural representation of g over itself.
Proposition. The adjoint representation is a representation.
Proof. Since the bracket is linear in both components, we know the adjoint representation is a linear map g → gl(g). It remains to show that
[ad_X, ad_Y] = ad_{[X,Y]}.
But the Jacobi identity says
[ad_X, ad_Y](Z) = [X, [Y, Z]] − [Y, [X, Z]] = [[X, Y], Z] = ad_{[X,Y]}(Z).
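Concretely, in a basis {T^a} the adjoint representation is built out of the structure constants, with matrix elements (ad_{T^a})_{cb} = f^{ab}_c. Here is a small numpy sketch (my own illustration, not from the notes) building these matrices for su(2), where f^{ab}_c = ε_{abc}, checking that they satisfy the same bracket relations, and observing that they coincide with the so(3) generators T̃^a met earlier:

    import numpy as np

    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

    # adjoint matrices: (ad_a)_{c b} = f^{ab}_c = eps_{abc}
    ad = [np.array([[eps[a, b, c] for b in range(3)] for c in range(3)]) for a in range(3)]

    # they represent the algebra: [ad_a, ad_b] = eps_abc ad_c
    for a in range(3):
        for b in range(3):
            lhs = ad[a] @ ad[b] - ad[b] @ ad[a]
            rhs = sum(eps[a, b, c] * ad[c] for c in range(3))
            assert np.allclose(lhs, rhs)

    # compare with the so(3) generators: (T~^a)_{bc} = -eps_{abc}
    Ttilde = [np.array([[-eps[a, b, c] for c in range(3)] for b in range(3)]) for a in range(3)]
    print(all(np.array_equal(ad[a], Ttilde[a]) for a in range(3)))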
We will eventually want to find all representations of a Lie algebra. To do
so, we need the notion of when two representations are “the same”.
Again, we start with the definition of a homomorphism.
Definition (Homomorphism of representations). Let (V_1, ρ_1), (V_2, ρ_2) be representations of g. A homomorphism f : (V_1, ρ_1) → (V_2, ρ_2) is a linear map f : V_1 → V_2 such that for all X ∈ g, we have
f(ρ_1(X)(v)) = ρ_2(X)(f(v))
for all v ∈ V_1. Alternatively, we can write this as
f ∘ ρ_1 = ρ_2 ∘ f.
In other words, the following diagram commutes for all X ∈ g:

    V_1 --f--> V_2
     |          |
   ρ_1(X)     ρ_2(X)
     v          v
    V_1 --f--> V_2
Then we can define
Definition (Isomorphism of representations). Two g-vector spaces V_1, V_2 are isomorphic if there is an invertible homomorphism f : V_1 → V_2.
In particular, isomorphic representations have the same dimension.
If we pick a basis for V_1 and V_2, and write the matrices for the representations as R_1(X) and R_2(X), then they are isomorphic if there exists a non-singular matrix S such that
R_2(X) = S R_1(X) S⁻¹
for all X ∈ g.
We are going to look at special representations that are “indecomposable”.
Definition (Invariant subspace). Let ρ be a representation of a Lie algebra g with representation space V. An invariant subspace is a subspace U ⊆ V such that
ρ(X) u ∈ U
for all X ∈ g and u ∈ U.
The trivial invariant subspaces are U = {0} and V.
Definition (Irreducible representation). An irreducible representation is a representation with no non-trivial invariant subspaces. They are referred to as irreps.
4.2 Complexification and correspondence of representations
So far, we have two things: Lie algebras and Lie groups. Ultimately, the thing
we are interested in is the Lie group, but we hope to simplify the study of a
Lie group by looking at the Lie algebra instead. So we want to understand
how representations of Lie groups correspond to the representations of their Lie
algebras.
If we have a representation D : G → GL(V) of a Lie group G, then taking the derivative at the identity gives us a linear map ρ : T_e G → T_I GL(V), i.e. a map ρ : g → gl(V). To show this is a representation, we need to show that it preserves the Lie bracket.
Lemma. Given a representation D : G → GL(V), the induced representation ρ : g → gl(V) is a Lie algebra representation.
Proof. We will again only prove this in the case of a matrix Lie group, so that we can use the construction we had for the Lie bracket.
We have to check that the bracket is preserved. We take curves γ_1, γ_2 : R → G passing through I at 0 such that γ̇_i(0) = X_i for i = 1, 2. We write
γ(t) = γ_1⁻¹(t) γ_2⁻¹(t) γ_1(t) γ_2(t) ∈ G.   (∗)
We can again Taylor expand this to obtain
γ(t) = I + t^2 [X_1, X_2] + O(t^3).
Essentially by the definition of the derivative, applying D to this gives
D(γ(t)) = I + t^2 ρ([X_1, X_2]) + O(t^3).
On the other hand, we can apply D to (∗) before Taylor expanding. We get
D(γ) = D(γ_1⁻¹) D(γ_2⁻¹) D(γ_1) D(γ_2).
So as before, since
D(γ_i(t)) = I + t ρ(X_i) + O(t^2),
it follows that
D(γ)(t) = I + t^2 [ρ(X_1), ρ(X_2)] + O(t^3).
So we must have
ρ([X_1, X_2]) = [ρ(X_1), ρ(X_2)].
How about the other way round? We know that if ρ : g → gl(V) is induced by D : G → GL(V), then we have
D(exp(X)) = I + ρ(X) + O