9 Dual spaces and tensor products of representations
II Representation Theory
9.2 Tensor products
The next idea we will tackle is the concept of tensor products. We first introduce
it as a linear algebra concept, and then later see how representations fit into this
framework.
There are many ways we can define the tensor product. The definition we
will take here is a rather hands-on construction of the space, which involves
picking a basis. We will later describe some other ways to define the tensor
product.
Definition (Tensor product). Let V, W be vector spaces over F. Suppose dim V = m and dim W = n. We fix a basis v_1, ···, v_m of V and a basis w_1, ···, w_n of W. The tensor product space V ⊗ W = V ⊗_F W is an nm-dimensional vector space (over F) with basis given by formal symbols
{v_i ⊗ w_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n}.
Thus
V ⊗ W = {∑ λ_{ij} v_i ⊗ w_j : λ_{ij} ∈ F},
with the “obvious” addition and scalar multiplication.
If
v = ∑ α_i v_i ∈ V,  w = ∑ β_j w_j ∈ W,
we define
v ⊗ w = ∑_{i,j} α_i β_j (v_i ⊗ w_j).
Note that not all elements of V ⊗ W are of this form. Some are genuine combinations. For example,
v_1 ⊗ w_1 + v_2 ⊗ w_2
cannot be written as a tensor product of an element in V and another in W.
We can imagine our formula for the tensor of two elements as writing
(∑ α_i v_i) ⊗ (∑ β_j w_j),
and then expanding this by letting ⊗ distribute over +.
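This expansion can be sketched in code: in the basis {v_i ⊗ w_j}, the coefficient matrix of a pure tensor v ⊗ w is the outer product of the coordinate vectors of v and w, and an element of V ⊗ W is a pure tensor exactly when its coefficient matrix has rank at most 1 (which is why v_1 ⊗ w_1 + v_2 ⊗ w_2 is not one). A minimal pure-Python sketch; the helper names are illustrative, not from the text.

```python
def pure_tensor(alpha, beta):
    """Coordinates of v ⊗ w in the basis {v_i ⊗ w_j}: the outer
    product of the coordinate vectors alpha (of v) and beta (of w)."""
    return [[a * b for b in beta] for a in alpha]

def is_pure(coords):
    """A coefficient matrix comes from a pure tensor iff it has rank ≤ 1;
    here we just check that all 2×2 minors vanish."""
    m, n = len(coords), len(coords[0])
    return all(coords[i][j] * coords[k] [l] == coords[i][l] * coords[k][j]
               for i in range(m) for k in range(i + 1, m)
               for j in range(n) for l in range(j + 1, n))

# v = 2 v_1 + 3 v_2,  w = 1 w_1 + 4 w_2
coords = pure_tensor([2, 3], [1, 4])
print(coords)                     # [[2, 8], [3, 12]]
print(is_pure(coords))            # True
print(is_pure([[1, 0], [0, 1]]))  # False: v_1 ⊗ w_1 + v_2 ⊗ w_2
```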
Lemma.
(i) For v ∈ V, w ∈ W and λ ∈ F, we have
(λv) ⊗ w = λ(v ⊗ w) = v ⊗ (λw).
(ii) If x, x_1, x_2 ∈ V and y, y_1, y_2 ∈ W, then
(x_1 + x_2) ⊗ y = (x_1 ⊗ y) + (x_2 ⊗ y)
x ⊗ (y_1 + y_2) = (x ⊗ y_1) + (x ⊗ y_2).
Proof.
(i) Let v = ∑ α_i v_i and w = ∑ β_j w_j. Then
(λv) ⊗ w = ∑_{i,j} (λα_i)β_j v_i ⊗ w_j
λ(v ⊗ w) = λ ∑_{i,j} α_i β_j v_i ⊗ w_j
v ⊗ (λw) = ∑_{i,j} α_i (λβ_j) v_i ⊗ w_j,
and these three things are obviously all equal.
(ii) Similar nonsense.
We can define the tensor product using these properties. We consider the space of formal linear combinations of symbols v ⊗ w for all v ∈ V, w ∈ W, and then quotient out by the relations we had above. It can be shown that this produces the same space as the one constructed above.
Alternatively, we can define the tensor product using its universal property,
which says for any
U
, a bilinear map from
V ×W
to
U
is “naturally” equivalent
to a linear map
V ⊗W → U
. This intuitively makes sense, since the distributivity
and linearity properties of
⊗
we showed above are exactly the properties required
for a bilinear map if we replace the ⊗ with a “,”.
It turns out this uniquely defines
V ⊗W
, as long as we provide a sufficiently
precise definition of “naturally”. We can do it concretely as follows — in our
explicit construction, we have a canonical bilinear map given by
φ : V × W → V ⊗ W
(v, w) ↦ v ⊗ w.
Given a linear map
V ⊗W → U
, we can compose it with
φ
:
V ×W → V ⊗ W
in order to get a bilinear map
V ×W → U
. Then universality says every bilinear
map V × W → U arises this way, uniquely.
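This correspondence can be made concrete in coordinates. A bilinear map B : V × W → F is determined by its values B(v_i, w_j); the corresponding linear map V ⊗ W → F just reads these values off against the coefficient matrix of an element of V ⊗ W. A hedged pure-Python sketch (the function names are illustrative):

```python
def bilinear(B, alpha, beta):
    """Evaluate the bilinear map with values B[i][j] = B(v_i, w_j)
    on the pair (v, w) with coordinate vectors alpha, beta."""
    return sum(a * B[i][j] * b
               for i, a in enumerate(alpha)
               for j, b in enumerate(beta))

def linear_on_tensor(B, coords):
    """Evaluate the induced linear map V ⊗ W → F on an element given
    by its coefficient matrix coords[i][j] in the basis {v_i ⊗ w_j}."""
    return sum(coords[i][j] * B[i][j]
               for i in range(len(B)) for j in range(len(B[0])))

B = [[1, 2], [3, 4]]
alpha, beta = [2, 3], [1, 4]
outer = [[a * b for b in beta] for a in alpha]  # coordinates of v ⊗ w
# Composing the linear map with φ(v, w) = v ⊗ w recovers the bilinear map:
print(bilinear(B, alpha, beta) == linear_on_tensor(B, outer))  # True
```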
In general, we say an equivalence between bilinear maps
V × W → U
and
linear maps
V ⊗ W → U
is “natural” if it is mediated by such a universal
map. Then we say a vector space
X
is the tensor product of
V
and
W
if there
is a natural equivalence between bilinear maps
V × W → U
and linear maps
X → U .
We will stick with our definition for concreteness, but we prove basis-
independence here so that we feel happier:
Lemma. Let {e_1, ···, e_m} be any other basis of V, and {f_1, ···, f_n} be another basis of W. Then
{e_i ⊗ f_j : 1 ≤ i ≤ m, 1 ≤ j ≤ n}
is a basis of V ⊗ W.
Proof. Writing
v_k = ∑ α_{ik} e_i,  w_ℓ = ∑ β_{jℓ} f_j,
we have
v_k ⊗ w_ℓ = ∑ α_{ik} β_{jℓ} e_i ⊗ f_j.
Therefore {e_i ⊗ f_j} spans V ⊗ W. Moreover, there are nm of these, which is exactly dim V ⊗ W. Therefore they form a basis of V ⊗ W.
That’s enough of linear algebra. We shall start doing some representation
theory.
Proposition. Let ρ : G → GL(V) and ρ′ : G → GL(V′) be representations of G. We define
ρ ⊗ ρ′ : G → GL(V ⊗ V′)
by
(ρ ⊗ ρ′)(g) : ∑ λ_{ij} v_i ⊗ w_j ↦ ∑ λ_{ij} (ρ(g)v_i) ⊗ (ρ′(g)w_j).
Then ρ ⊗ ρ′ is a representation of G, with character
χ_{ρ⊗ρ′}(g) = χ_ρ(g) χ_{ρ′}(g)
for all g ∈ G.
As promised, the product of two characters of G is also a character of G.
Just before we prove this, recall we showed in sheet 1 that if ρ is irreducible and ρ′ is irreducible of degree 1, then ρ′ρ = ρ ⊗ ρ′ is irreducible. However, if ρ′ is not of degree 1, then this is almost always false (or else we could produce arbitrarily many irreducible representations).
Proof. It is clear that (ρ ⊗ ρ′)(g) ∈ GL(V ⊗ V′) for all g ∈ G. So ρ ⊗ ρ′ is a homomorphism G → GL(V ⊗ V′).
To check the character is indeed as stated, let g ∈ G. Let v_1, ···, v_m be a basis of V of eigenvectors of ρ(g), and let w_1, ···, w_n be a basis of V′ of eigenvectors of ρ′(g), say
ρ(g)v_i = λ_i v_i,  ρ′(g)w_j = μ_j w_j.
Then
(ρ ⊗ ρ′)(g)(v_i ⊗ w_j) = ρ(g)v_i ⊗ ρ′(g)w_j = λ_i v_i ⊗ μ_j w_j = (λ_i μ_j)(v_i ⊗ w_j).
So
χ_{ρ⊗ρ′}(g) = ∑_{i,j} λ_i μ_j = (∑_i λ_i)(∑_j μ_j) = χ_ρ(g) χ_{ρ′}(g).
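In matrix terms, (ρ ⊗ ρ′)(g) acts on the basis {v_i ⊗ w_j} by the Kronecker product of the matrices of ρ(g) and ρ′(g), and the trace of a Kronecker product is the product of the traces. A quick numeric sanity check of χ_{ρ⊗ρ′}(g) = χ_ρ(g)χ_{ρ′}(g) in pure Python; the two sample matrices below are arbitrary invertible examples, not drawn from any particular group.

```python
def kron(A, B):
    """Kronecker product: the matrix of (ρ ⊗ ρ′)(g) in the basis
    {v_i ⊗ w_j}, given the matrices A of ρ(g) and B of ρ′(g)."""
    p, q = len(B), len(B[0])
    return [[A[i // p][k // q] * B[i % p][k % q]
             for k in range(len(A[0]) * q)]
            for i in range(len(A) * p)]

def trace(M):
    """The character value: the trace of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

R  = [[2, 1], [0, 3]]   # stand-in for the matrix of ρ(g)
Rp = [[1, 2], [0, 4]]   # stand-in for the matrix of ρ′(g)
print(trace(kron(R, Rp)) == trace(R) * trace(Rp))  # True (both are 25)
```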