15 Representations of compact groups

15.1 Representations of SU(2)
In the rest of the time, we would like to talk about G = SU(2).
Recall that
\[
  \mathrm{SU}(2) = \{A \in \mathrm{GL}_2(\mathbb{C}) : A^\dagger A = I,\ \det A = 1\}.
\]
Note that if
\[
  A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SU}(2),
\]
then since $\det A = 1$, we have
\[
  A^{-1} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
\]
So we need $d = \bar{a}$ and $c = -\bar{b}$. Moreover, we have $a\bar{a} + b\bar{b} = 1$.
Hence we can write $\mathrm{SU}(2)$ explicitly as
\[
  G = \left\{ \begin{pmatrix} a & b \\ -\bar{b} & \bar{a} \end{pmatrix} : a, b \in \mathbb{C},\ |a|^2 + |b|^2 = 1 \right\}.
\]
Topologically, we know $G \cong S^3 \subseteq \mathbb{C}^2 \cong \mathbb{R}^4$.
Instead of thinking about $\mathbb{C}^2$ in the usual vector space way, we can think of it as the subalgebra
\[
  \mathbb{H} = \left\{ \begin{pmatrix} z & w \\ -\bar{w} & \bar{z} \end{pmatrix} : w, z \in \mathbb{C} \right\} \leq M_2(\mathbb{C}).
\]
This is known as Hamilton's quaternion algebra. Then $\mathbb{H}$ is a $4$-dimensional Euclidean space (two components from $z$ and two components from $w$), with a norm on $\mathbb{H}$ given by
\[
  \|A\|^2 = \det A.
\]
We now see that $\mathrm{SU}(2) \subseteq \mathbb{H}$ is exactly the unit sphere $\|A\|^2 = 1$ in $\mathbb{H}$. If $A \in \mathrm{SU}(2)$ and $x \in \mathbb{H}$, then $\|Ax\| = \|x\|$ since $\|A\| = 1$. So elements of $G$ act as isometries on the space.
After normalization (by $\frac{1}{2\pi^2}$), the usual integration of functions on $S^3$ defines a Haar measure on $G$. It is an exercise on the last example sheet to write this out explicitly.
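A small aside, not from the lectures: uniform points on $S^3$ (for example, normalized Gaussian $4$-vectors) sample exactly this Haar measure, so its translation invariance can be checked numerically. Here is a Python sketch doing so; the helper names (`random_su2`, `average`) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2(size):
    """Uniform points on S^3, packaged as SU(2) matrices [[a, b], [-conj(b), conj(a)]]."""
    v = rng.normal(size=(size, 4))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    a = v[:, 0] + 1j * v[:, 1]
    b = v[:, 2] + 1j * v[:, 3]
    return np.stack([np.stack([a, b], axis=-1),
                     np.stack([-np.conj(b), np.conj(a)], axis=-1)], axis=-2)

def average(f, samples):
    """Monte Carlo average of f over the given group elements."""
    return np.mean([f(g) for g in samples])

def test_function(g):
    """An arbitrary continuous test function on G."""
    return np.real(np.trace(g)) ** 2

gs = random_su2(100000)
h = random_su2(1)[0]   # a fixed group element
# Left invariance of the Haar measure: both averages should agree (up to sampling error).
print(average(test_function, gs), average(test_function, h @ gs))
```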
We now look at conjugacy in $G$. We let
\[
  T = \left\{ \begin{pmatrix} a & 0 \\ 0 & \bar{a} \end{pmatrix} : a \in \mathbb{C},\ |a|^2 = 1 \right\} \cong S^1.
\]
This is a maximal torus in $G$, and it plays a fundamental role, since we happen to know about $S^1$. We also have a favorite element
\[
  s = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \in \mathrm{SU}(2).
\]
We now prove some easy linear algebra results about SU(2).
Lemma (SU(2)-conjugacy classes).
(i) Let $t \in T$. Then $sts^{-1} = t^{-1}$.
(ii) $s^2 = -I \in Z(\mathrm{SU}(2))$.
(iii) The normalizer
\[
  N_G(T) = T \cup sT = \left\{ \begin{pmatrix} a & 0 \\ 0 & \bar{a} \end{pmatrix}, \begin{pmatrix} 0 & a \\ -\bar{a} & 0 \end{pmatrix} : a \in \mathbb{C},\ |a| = 1 \right\}.
\]
(iv) Every conjugacy class $\mathcal{C}$ of $\mathrm{SU}(2)$ contains an element of $T$, i.e. $\mathcal{C} \cap T \neq \emptyset$.
(v) In fact, $\mathcal{C} \cap T = \{t, t^{-1}\}$ for some $t \in T$, and $t = t^{-1}$ if and only if $t = \pm I$, in which case $\mathcal{C} = \{t\}$.
(vi) There is a bijection
\[
  \{\text{conjugacy classes in } \mathrm{SU}(2)\} \longleftrightarrow [-1, 1],
\]
given by $A \mapsto \frac{1}{2} \operatorname{tr} A$.
We can see that if
\[
  A = \begin{pmatrix} \lambda & 0 \\ 0 & \bar{\lambda} \end{pmatrix},
\]
then
\[
  \frac{1}{2} \operatorname{tr} A = \frac{1}{2}(\lambda + \bar{\lambda}) = \operatorname{Re}(\lambda).
\]
Note that what (iv) and (v) really say is that matrices in $\mathrm{SU}(2)$ are diagonalizable (by conjugation within $\mathrm{SU}(2)$).
Proof.
(i) Write it out.
(ii) Write it out.
(iii) Direct verification.
(iv) It is well-known from linear algebra that every unitary matrix $X$ has an orthonormal basis of eigenvectors, and hence is conjugate in $\mathrm{U}(2)$ to one in $T$, say
\[
  QXQ^{-1} \in T.
\]
We now want to force $Q$ into $\mathrm{SU}(2)$, i.e. make $Q$ have determinant $1$. We put $\delta = \det Q$. Since $Q$ is unitary, i.e. $QQ^\dagger = I$, we know $|\delta| = 1$. So we let $\varepsilon$ be a square root of $\delta$, and define
\[
  Q_1 = \varepsilon^{-1} Q.
\]
Then $\det Q_1 = \varepsilon^{-2} \det Q = 1$, so $Q_1 \in \mathrm{SU}(2)$, and we have
\[
  Q_1 X Q_1^{-1} \in T.
\]
(v) We let $g \in G$, and suppose $g \in \mathcal{C}$. If $g = \pm I$, then $\mathcal{C} \cap T = \{g\}$. Otherwise, $g$ has two distinct eigenvalues $\lambda, \lambda^{-1}$. Note that the two eigenvalues must be inverses of each other, since $g \in \mathrm{SU}(2)$ has determinant $1$. Then we know
\[
  \mathcal{C} = \left\{ h \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix} h^{-1} : h \in G \right\}.
\]
Thus we find
\[
  \mathcal{C} \cap T = \left\{ \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix}, \begin{pmatrix} \lambda^{-1} & 0 \\ 0 & \lambda \end{pmatrix} \right\}.
\]
This is true since eigenvalues are preserved by conjugation, so if $\begin{pmatrix} \mu & 0 \\ 0 & \mu^{-1} \end{pmatrix} \in \mathcal{C} \cap T$, then $\{\mu, \mu^{-1}\} = \{\lambda, \lambda^{-1}\}$. Also, we can get the second matrix from the first by conjugating with $s$.
(vi) Consider the map
\[
  \tfrac{1}{2} \operatorname{tr} : \{\text{conjugacy classes}\} \to [-1, 1].
\]
By (v), matrices are conjugate in $G$ iff they have the same set of eigenvalues. Now
\[
  \frac{1}{2} \operatorname{tr} \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix} = \frac{1}{2}(\lambda + \bar{\lambda}) = \operatorname{Re}(\lambda) = \cos\theta,
\]
where $\lambda = e^{i\theta}$. Hence the map is a surjection onto $[-1, 1]$.
Now we have to show it is injective. This is also easy. If $g$ and $g'$ have the same image, i.e.
\[
  \frac{1}{2} \operatorname{tr} g = \frac{1}{2} \operatorname{tr} g',
\]
then $g$ and $g'$ have the same characteristic polynomial, namely
\[
  x^2 - (\operatorname{tr} g) x + 1.
\]
Hence they have the same eigenvalues, and so by (v) they are conjugate in $G$.
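As a sanity check, not part of the notes, the lemma is easy to test numerically. The sketch below (with ad hoc helper code) verifies (i), carries out the determinant trick from (iv) to conjugate a random element into $T$, and confirms that the half-trace is $\operatorname{Re}(\lambda)$.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random SU(2) element via a uniform point on S^3.
v = rng.normal(size=4)
v /= np.linalg.norm(v)
a, b = v[0] + 1j * v[1], v[2] + 1j * v[3]
A = np.array([[a, b], [-np.conj(b), np.conj(a)]])

s = np.array([[0, 1], [-1, 0]])
t = np.diag([np.exp(0.7j), np.exp(-0.7j)])

# (i) conjugation by s inverts elements of the torus T
assert np.allclose(s @ t @ np.linalg.inv(s), np.linalg.inv(t))

# (iv) diagonalize A by a matrix forced to have determinant 1
w, Q = np.linalg.eig(A)               # A is unitary, so its eigenvectors can be taken orthonormal
Q, _ = np.linalg.qr(Q)                # orthonormalize the eigenvector basis
Q1 = Q / np.sqrt(np.linalg.det(Q))    # divide by a square root of det Q
assert np.allclose(np.linalg.det(Q1), 1)
D = np.linalg.inv(Q1) @ A @ Q1        # conjugate of A lying in T
assert np.allclose(D, np.diag(np.diag(D)))

# (vi) the half-trace is Re(lambda)
assert np.isclose(0.5 * np.real(np.trace(A)), np.real(w[0]))
```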
We now write, for $t \in [-1, 1]$,
\[
  \mathcal{C}_t = \left\{ g \in G : \tfrac{1}{2} \operatorname{tr} g = t \right\}.
\]
In particular, we have
\[
  \mathcal{C}_1 = \{I\}, \quad \mathcal{C}_{-1} = \{-I\}.
\]
Proposition. For $t \in (-1, 1)$, the class $\mathcal{C}_t \cong S^2$ as topological spaces.
This is not really needed, but is a nice thing to know.
Proof. Exercise!
We now move on to classify the irreducible representations of SU(2).
We let $V_n$ be the complex space of all homogeneous polynomials of degree $n$ in the variables $x, y$, i.e.
\[
  V_n = \{r_0 x^n + r_1 x^{n - 1} y + \cdots + r_n y^n : r_0, \cdots, r_n \in \mathbb{C}\}.
\]
This is an $(n + 1)$-dimensional complex vector space with basis $x^n, x^{n - 1} y, \cdots, y^n$.
We want to get an action of $\mathrm{SU}(2)$ on $V_n$. It is easier to think about the action of $\mathrm{GL}_2(\mathbb{C})$ in general, and then restrict to $\mathrm{SU}(2)$. We define the action of $\mathrm{GL}_2(\mathbb{C})$ on $V_n$ by
\[
  \rho_n : \mathrm{GL}_2(\mathbb{C}) \to \mathrm{GL}(V_n)
\]
given by the rule
\[
  \rho_n \begin{pmatrix} a & b \\ c & d \end{pmatrix} f(x, y) = f(ax + cy, bx + dy).
\]
In other words, we have
\[
  \rho_n \begin{pmatrix} a & b \\ c & d \end{pmatrix} f \begin{pmatrix} x & y \end{pmatrix} = f\left( \begin{pmatrix} x & y \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \right),
\]
where the multiplication inside $f$ is matrix multiplication.
Example. When $n = 0$, then $V_0 \cong \mathbb{C}$, and $\rho_0$ is the trivial representation.
When $n = 1$, this is the natural two-dimensional representation, and $\rho_1 \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ has matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ with respect to the standard basis $\{x, y\}$ of $V_1 = \mathbb{C}^2$.
More interestingly, when $n = 2$, we know $\rho_2 \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ has matrix
\[
  \begin{pmatrix}
    a^2 & ab & b^2 \\
    2ac & ad + bc & 2bd \\
    c^2 & cd & d^2
  \end{pmatrix}
\]
with respect to the basis $x^2, xy, y^2$ of $V_2 \cong \mathbb{C}^3$. We obtain the matrix by computing, e.g.
\[
  \rho_2(g)(x^2) = (ax + cy)^2 = a^2 x^2 + 2ac\, xy + c^2 y^2.
\]
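This computation is easy to automate. Here is a small sympy sketch (the function `rho_matrix` is invented for illustration) that builds the matrix of $\rho_n(g)$ in the monomial basis straight from the rule $f(x, y) \mapsto f(ax + cy, bx + dy)$, and reproduces the $\rho_2$ matrix above.

```python
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d')

def rho_matrix(n, g):
    """Matrix of rho_n(g) on V_n in the basis x^n, x^(n-1) y, ..., y^n,
    where rho_n(g) f(x, y) = f(a x + c y, b x + d y) for g = [[a, b], [c, d]]."""
    (A, B), (C, D) = g
    basis = [x**(n - j) * y**j for j in range(n + 1)]
    cols = []
    for f in basis:
        img = sp.expand(f.subs({x: A*x + C*y, y: B*x + D*y}, simultaneous=True))
        poly = sp.Poly(img, x, y)
        cols.append([poly.coeff_monomial(x**(n - i) * y**i) for i in range(n + 1)])
    return sp.Matrix(cols).T   # columns are the images of the basis monomials

print(rho_matrix(2, ((a, b), (c, d))))
# Matrix([[a**2, a*b, b**2], [2*a*c, a*d + b*c, 2*b*d], [c**2, c*d, d**2]])
```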
Now we know $\mathrm{SU}(2) \leq \mathrm{GL}_2(\mathbb{C})$. So we can view $V_n$ as a representation of $\mathrm{SU}(2)$ by restriction.
Now we’ve got some representations. The claim is that these are all the
irreducibles. Before we prove that, we look at the character of these things.
Lemma. A continuous class function $f : G \to \mathbb{C}$ is determined by its restriction to $T$, and $f|_T$ is even, i.e.
\[
  f \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix} = f \begin{pmatrix} \lambda^{-1} & 0 \\ 0 & \lambda \end{pmatrix}.
\]
Proof. Each conjugacy class in $\mathrm{SU}(2)$ meets $T$. So a class function is determined by its restriction to $T$. Evenness follows from the fact that the two elements above are conjugate (by $s$).
In particular, the character of a representation $(\rho, V)$ restricts to an even function $\chi_\rho : S^1 \to \mathbb{C}$.
Lemma. If $\chi$ is the character of a representation of $\mathrm{SU}(2)$, then its restriction $\chi|_T$ is a Laurent polynomial, i.e. a finite $\mathbb{N}$-linear combination of the functions
\[
  \begin{pmatrix} \lambda & 0 \\ 0 & \lambda^{-1} \end{pmatrix} \mapsto \lambda^n
\]
for $n \in \mathbb{Z}$.
Proof. If $V$ is a representation of $\mathrm{SU}(2)$, then $\operatorname{Res}^{\mathrm{SU}(2)}_T V$ is a representation of $T$, and its character $\operatorname{Res}^{\mathrm{SU}(2)}_T \chi$ is the restriction of $\chi_V$ to $T$. But every representation of $T$ has its character of the given form. So done.
Notation. We write $\mathbb{N}[z, z^{-1}]$ for the set of all Laurent polynomials with coefficients in $\mathbb{N}$, i.e.
\[
  \mathbb{N}[z, z^{-1}] = \left\{ \sum_{n \in \mathbb{Z}} a_n z^n : a_n \in \mathbb{N}, \text{ only finitely many } a_n \text{ non-zero} \right\}.
\]
We further write
\[
  \mathbb{N}[z, z^{-1}]^{\mathrm{ev}} = \{f \in \mathbb{N}[z, z^{-1}] : f(z) = f(z^{-1})\}.
\]
Then by our lemmas, for every continuous representation $V$ of $\mathrm{SU}(2)$, the character $\chi_V$ lies in $\mathbb{N}[z, z^{-1}]^{\mathrm{ev}}$ (identifying it with its restriction to $T$).
We now actually calculate the character $\chi_n$ of $(\rho_n, V_n)$ as a representation of $\mathrm{SU}(2)$. We have
\[
  \chi_{V_n}(g) = \operatorname{tr} \rho_n(g), \quad \text{where} \quad g = \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix} \in T.
\]
Then we have
\[
  \rho_n \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix} (x^i y^j) = (zx)^i (z^{-1} y)^j = z^{i - j} x^i y^j.
\]
So each $x^i y^j$ is an eigenvector of $\rho_n(g)$, with eigenvalue $z^{i - j}$. So we know $\rho_n(g)$ has matrix
\[
  \begin{pmatrix}
    z^n & & & \\
    & z^{n - 2} & & \\
    & & \ddots & \\
    & & & z^{-n}
  \end{pmatrix}
\]
with respect to the standard basis. Hence the character is just
\[
  \chi_n \begin{pmatrix} z & 0 \\ 0 & z^{-1} \end{pmatrix} = z^n + z^{n - 2} + \cdots + z^{-n} = \frac{z^{n + 1} - z^{-(n + 1)}}{z - z^{-1}},
\]
where the last expression is valid unless $z = \pm 1$.
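As a quick check, here is a short sympy verification (not from the notes) that the closed form agrees with the sum of powers for small $n$.

```python
import sympy as sp

z = sp.symbols('z')

def chi(n):
    """chi_n restricted to the torus: z^n + z^(n-2) + ... + z^(-n)."""
    return sum(z**(n - 2*j) for j in range(n + 1))

for n in range(6):
    closed_form = (z**(n + 1) - z**(-(n + 1))) / (z - 1/z)
    assert sp.simplify(chi(n) - closed_form) == 0
print("closed form for chi_n verified for n < 6")
```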
We can now state the result we are aiming for:
Theorem. The representations $\rho_n : \mathrm{SU}(2) \to \mathrm{GL}(V_n)$ of dimension $n + 1$ are irreducible for $n \in \mathbb{Z}_{\geq 0}$.
Again, we get a complete set (completeness will be proven later): a complete list of all irreducible representations, given in a really nice form. This is spectacular.
Proof. Let $0 \neq W \leq V_n$ be a $G$-invariant subspace, i.e. a subrepresentation of $V_n$. We will show that $W = V_n$.
All we know about $W$ is that it is non-zero. So we take some non-zero vector of $W$.
Claim. Let
\[
  0 \neq w = \sum_{j = 0}^{n} r_j x^{n - j} y^j \in W.
\]
Since this is non-zero, there is some $i$ such that $r_i \neq 0$. The claim is that $x^{n - i} y^i \in W$.
We argue by induction on the number of non-zero coefficients $r_j$. If there is only one non-zero coefficient, then we are already done, as $w$ is a non-zero scalar multiple of $x^{n - i} y^i$.
So assume there is more than one, and choose one $i$ such that $r_i \neq 0$. We pick $z \in S^1$ with $z^n, z^{n - 2}, \cdots, z^{2 - n}, z^{-n}$ all distinct in $\mathbb{C}$. Now
\[
  \rho_n \begin{pmatrix} z & \\ & z^{-1} \end{pmatrix} w = \sum_j r_j z^{n - 2j} x^{n - j} y^j \in W.
\]
Subtracting $z^{n - 2i}$ times $w$, we find
\[
  \rho_n \begin{pmatrix} z & \\ & z^{-1} \end{pmatrix} w - z^{n - 2i} w = \sum_j r_j (z^{n - 2j} - z^{n - 2i}) x^{n - j} y^j \in W.
\]
We now look at the coefficient
\[
  r_j (z^{n - 2j} - z^{n - 2i}).
\]
This is non-zero if and only if $r_j$ is non-zero and $j \neq i$. So the new element of $W$ has strictly fewer non-zero coefficients than $w$, and we can use this to remove any one non-zero coefficient. Thus by induction, we get
\[
  x^{n - j} y^j \in W
\]
for all $j$ such that $r_j \neq 0$.
This gives us one basis vector inside $W$, and we need to get the rest.
Claim. $W = V_n$.
We now know that $x^{n - i} y^i \in W$ for some $i$. We consider
\[
  \rho_n \left( \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix} \right) x^{n - i} y^i = \frac{1}{2^{n/2}} (x + y)^{n - i} (-x + y)^i \in W.
\]
It is clear that the coefficient of $x^n$ is non-zero. So we can use the claim to deduce $x^n \in W$.
Finally, for general $a, b \neq 0$ with $|a|^2 + |b|^2 = 1$, we apply
\[
  \rho_n \begin{pmatrix} a & -\bar{b} \\ b & \bar{a} \end{pmatrix} x^n = (ax + by)^n \in W,
\]
and the coefficient of every monomial is non-zero. So, by the claim, all the basis vectors lie in $W$. So $W = V_n$.
This proof is surprisingly elementary. It does not use any technology at all.
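Before the alternative argument below, here is one more numerical sanity check, again not from the lectures: by Schur's lemma, $V_n$ is irreducible exactly when the only matrices commuting with every $\rho_n(g)$ are scalars. The sketch (helper names invented) builds $\rho_2(g)$ for a few random $g \in \mathrm{SU}(2)$, using the explicit matrix from the earlier example, and checks that the common commutant is one-dimensional.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_su2():
    v = rng.normal(size=4)
    v /= np.linalg.norm(v)
    return v[0] + 1j * v[1], v[2] + 1j * v[3]

def rho2(a, b):
    """rho_2 on the basis x^2, xy, y^2, using the explicit matrix computed above."""
    c, d = -np.conj(b), np.conj(a)
    return np.array([[a * a,     a * b,         b * b],
                     [2 * a * c, a * d + b * c, 2 * b * d],
                     [c * c,     c * d,         d * d]])

# Solve rho2(g) X = X rho2(g) for several random g by vectorizing X (row-major),
# so X -> R X - X R becomes the matrix kron(R, I) - kron(I, R^T).
rows = []
for _ in range(5):
    R = rho2(*random_su2())
    rows.append(np.kron(R, np.eye(3)) - np.kron(np.eye(3), R.T))
sing = np.linalg.svd(np.vstack(rows), compute_uv=False)
print("commutant dimension:", int(np.sum(sing < 1e-10)))   # 1, so rho_2 is irreducible
```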
Alternatively, to prove this, we can identify
\[
  \mathcal{C}_{\cos\theta} = \left\{ A \in G : \tfrac{1}{2} \operatorname{tr} A = \cos\theta \right\}
\]
with the $2$-sphere
\[
  \{|\operatorname{Im} a|^2 + |b|^2 = \sin^2\theta\}
\]
of radius $\sin\theta$. So if $f$ is a class function on $G$, then $f$ is constant on each class $\mathcal{C}_{\cos\theta}$.
It turns out we get
\[
  \int_G f(g) \,\mathrm{d}g = \frac{1}{2\pi^2} \int_0^{2\pi} \frac{1}{2} f \begin{pmatrix} e^{i\theta} & \\ & e^{-i\theta} \end{pmatrix} 4\pi \sin^2\theta \,\mathrm{d}\theta.
\]
This is the Weyl integration formula, which computes integrals against the Haar measure of $\mathrm{SU}(2)$. Then if $\chi_n$ is the character of $V_n$, we can use this to show that $\langle \chi_n, \chi_n \rangle = 1$. Hence $V_n$ is irreducible. We will not go into the details of this computation.
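The orthonormality claim is easy to check numerically. The following sketch (not from the notes; function names are ad hoc) evaluates $\langle \chi_m, \chi_n \rangle$ with the Weyl integration formula by direct quadrature and recovers $\delta_{mn}$.

```python
import numpy as np

def chi(n, theta):
    """chi_n at diag(e^{i theta}, e^{-i theta})."""
    z = np.exp(1j * theta)
    return sum(z**(n - 2*j) for j in range(n + 1))

def inner(m, n, samples=20000):
    """<chi_m, chi_n> via the Weyl integration formula, by a Riemann sum over [0, 2*pi]."""
    theta = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    weight = (1 / (2 * np.pi**2)) * 0.5 * 4 * np.pi * np.sin(theta)**2
    integrand = chi(m, theta) * np.conj(chi(n, theta)) * weight
    return float(np.real(np.mean(integrand) * 2 * np.pi))

print([[round(inner(m, n), 3) for n in range(4)] for m in range(4)])
# approximately the 4x4 identity matrix, i.e. <chi_m, chi_n> = delta_{mn}
```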
Theorem. Every finite-dimensional continuous irreducible representation of $G$ is one of the $\rho_n : G \to \mathrm{GL}(V_n)$ as defined above.
Proof. Assume $\rho_V : G \to \mathrm{GL}(V)$ is an irreducible representation affording a character $\chi_V \in \mathbb{N}[z, z^{-1}]^{\mathrm{ev}}$. We will show that $\chi_V = \chi_n$ for some $n$. Now we see that
\[
  \chi_0 = 1, \quad \chi_1 = z + z^{-1}, \quad \chi_2 = z^2 + 1 + z^{-2}, \quad \ldots
\]
form a basis of $\mathbb{Q}[z, z^{-1}]^{\mathrm{ev}}$, which is an infinite-dimensional vector space over $\mathbb{Q}$. Hence we can write
\[
  \chi_V = \sum_n a_n \chi_n,
\]
a finite sum with finitely many $a_n \neq 0$. Note that it is possible that $a_n \in \mathbb{Q}$. So we clear denominators, and move the summands with negative coefficients to the left hand side. So we get
\[
  m \chi_V + \sum_{i \in I} m_i \chi_i = \sum_{j \in J} n_j \chi_j,
\]
with $I, J$ disjoint finite subsets of $\mathbb{N}$, and $m, m_i, n_j \in \mathbb{N}$.
We know the left- and right-hand sides are characters of representations of $G$. So we get
\[
  m V \oplus \bigoplus_{i \in I} m_i V_i \cong \bigoplus_{j \in J} n_j V_j.
\]
Since $V$ is irreducible and the decomposition into irreducibles is unique, we must have $V \cong V_n$ for some $n \in J$.
We've got a complete list of irreducible representations of $\mathrm{SU}(2)$. So we can look at what happens when we take tensor products.
We know that for $V, W$ representations of $\mathrm{SU}(2)$, if
\[
  \operatorname{Res}^{\mathrm{SU}(2)}_T V \cong \operatorname{Res}^{\mathrm{SU}(2)}_T W,
\]
then in fact $V \cong W$.
This gives us the following result:
Proposition. Let $G = \mathrm{SU}(2)$ or $G = S^1$, and let $V, W$ be representations of $G$. Then
\[
  \chi_{V \otimes W} = \chi_V \chi_W.
\]
Proof. By the previous remark, it is enough to consider the case $G = S^1$. Suppose $V$ and $W$ have eigenbases $e_1, \cdots, e_n$ and $f_1, \cdots, f_m$ respectively, such that
\[
  \rho(z) e_i = z^{n_i} e_i, \quad \rho(z) f_j = z^{m_j} f_j
\]
for each $i, j$. Then
\[
  \rho(z)(e_i \otimes f_j) = z^{n_i + m_j}\, e_i \otimes f_j.
\]
Thus the character is
\[
  \chi_{V \otimes W}(z) = \sum_{i, j} z^{n_i + m_j} = \left( \sum_i z^{n_i} \right) \left( \sum_j z^{m_j} \right) = \chi_V(z) \chi_W(z).
\]
Example. We have
\[
  \chi_{V_1 \otimes V_1}(z) = (z + z^{-1})^2 = z^2 + 2 + z^{-2} = \chi_{V_2}(z) + \chi_{V_0}(z).
\]
So we have
\[
  V_1 \otimes V_1 \cong V_2 \oplus V_0.
\]
Similarly, we can compute
\[
  \chi_{V_1 \otimes V_2}(z) = (z^2 + 1 + z^{-2})(z + z^{-1}) = z^3 + 2z + 2z^{-1} + z^{-3} = \chi_{V_3}(z) + \chi_{V_1}(z).
\]
So we get
\[
  V_1 \otimes V_2 \cong V_3 \oplus V_1.
\]
Proposition (Clebsch–Gordan rule). For $n, m \in \mathbb{N}$, we have
\[
  V_n \otimes V_m \cong V_{n + m} \oplus V_{n + m - 2} \oplus \cdots \oplus V_{|n - m| + 2} \oplus V_{|n - m|}.
\]
Proof. We just check this works for characters. Without loss of generality, we assume $n \geq m$. We can compute
\[
  (\chi_n \chi_m)(z) = \frac{z^{n + 1} - z^{-n - 1}}{z - z^{-1}} \left( z^m + z^{m - 2} + \cdots + z^{-m} \right) = \sum_{j = 0}^{m} \frac{z^{n + m + 1 - 2j} - z^{2j - n - m - 1}}{z - z^{-1}} = \sum_{j = 0}^{m} \chi_{n + m - 2j}(z),
\]
where for the second equality we reindex $j \mapsto m - j$ in the terms coming from $-z^{-n - 1}$. Note that the condition $n \geq m$ ensures there are no cancellations in the sum.
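Finally, the Clebsch–Gordan identity at the level of characters can be confirmed with sympy for small $n, m$; the helpers below are again invented for illustration.

```python
import sympy as sp

z = sp.symbols('z')

def chi(n):
    """chi_n on the torus: z^n + z^(n-2) + ... + z^(-n)."""
    return sum(z**(n - 2*j) for j in range(n + 1))

def clebsch_gordan(n, m):
    """Character of V_{n+m} + V_{n+m-2} + ... + V_{|n-m|}."""
    return sum(chi(k) for k in range(abs(n - m), n + m + 1, 2))

for n in range(5):
    for m in range(5):
        assert sp.simplify(chi(n) * chi(m) - clebsch_gordan(n, m)) == 0
print("Clebsch-Gordan character identity verified for n, m < 5")
```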