3 Duality

IB Linear Algebra



3.2 Dual maps
Since linear algebra is the study of vector spaces and linear maps between them,
after dualizing vector spaces, we should be able to dualize linear maps as well.
If we have a map α : V → W, then after dualizing, the map will go the other
direction, i.e. α∗ : W∗ → V∗. This is a characteristic common to most dualization
processes in mathematics.
Definition (Dual map). Let V, W be vector spaces over F and α : V → W a linear
map, i.e. α ∈ L(V, W). The dual map to α, written α∗ : W∗ → V∗, is given by
θ ↦ θ ∘ α.

Since the composite of linear maps is linear, α∗(θ) ∈ V∗. So this is a genuine
map.
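The definition can be sketched directly as precomposition. This is a minimal illustration, not from the notes: vectors are tuples, maps and functionals are plain Python functions, and the particular α and θ below are made up for the example.

```python
# The dual map as precomposition: alpha* sends a functional theta on W
# to the functional theta ∘ alpha on V.

def dual_map(alpha):
    """Return alpha*: theta ↦ theta ∘ alpha."""
    return lambda theta: (lambda v: theta(alpha(v)))

# Illustrative alpha : R^2 → R^2, (x, y) ↦ (x + y, y)
alpha = lambda v: (v[0] + v[1], v[1])
# An illustrative functional theta on W = R^2: theta(w) = 2*w[0] - w[1]
theta = lambda w: 2 * w[0] - w[1]

alpha_star = dual_map(alpha)
# By definition, (alpha* theta)(v) = theta(alpha(v))
assert alpha_star(theta)((3, 4)) == theta(alpha((3, 4)))  # both equal 10
```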
Proposition. Let α ∈ L(V, W) be a linear map. Then α∗ ∈ L(W∗, V∗) is a
linear map.

This is not the same as what we remarked at the end of the definition of
the dual map. What we remarked was that given any θ, α∗(θ) is a linear map.
What we want to show here is that α∗ itself, as a map W∗ → V∗, is linear.
Proof. Let λ, µ ∈ F and θ₁, θ₂ ∈ W∗. We want to show

    α∗(λθ₁ + µθ₂) = λα∗(θ₁) + µα∗(θ₂).

To show this, we show that for every v ∈ V, the left and right give the same
result. We have

    α∗(λθ₁ + µθ₂)(v) = (λθ₁ + µθ₂)(α(v))
                     = λθ₁(α(v)) + µθ₂(α(v))
                     = (λα∗(θ₁) + µα∗(θ₂))(v).

So α∗ ∈ L(W∗, V∗).
What happens to the matrices when we take the dual map? The answer is
that we get the transpose.
Proposition. Let V, W be finite-dimensional vector spaces over F and
α : V → W a linear map. Let (e₁, ···, e_n) be a basis for V and (f₁, ···, f_m) a basis
for W; let (ε₁, ···, ε_n) and (η₁, ···, η_m) be the corresponding dual bases.

Suppose α is represented by A with respect to (e_i) and (f_i) for V and W.
Then α∗ is represented by Aᵀ with respect to the corresponding dual bases.
Proof. We are given that

    α(e_i) = Σ_{k=1}^m A_{ki} f_k.

We must compute α∗(η_i). To do so, we evaluate it at e_j. We have

    α∗(η_i)(e_j) = η_i(α(e_j)) = η_i( Σ_{k=1}^m A_{kj} f_k ) = Σ_{k=1}^m A_{kj} δ_{ik} = A_{ij}.

We can also write this as

    α∗(η_i)(e_j) = Σ_{k=1}^n A_{ik} ε_k(e_j).

Since this is true for all j, we have

    α∗(η_i) = Σ_{k=1}^n A_{ik} ε_k = Σ_{k=1}^n (Aᵀ)_{ki} ε_k.

So done.
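The proposition can be checked numerically. In the sketch below (an illustration with standard bases, not from the notes), a functional θ is stored by its coordinate vector t in the dual basis, so that θ(w) = t · w; the coordinates of α∗(θ) are then its values on the basis vectors e_j, which come out as Aᵀt.

```python
import numpy as np

# alpha : F^4 -> F^3 represented by A with respect to the standard bases.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 4))
t = rng.integers(-5, 5, size=3)   # coordinates of theta in the dual basis (eta_i)

# j-th coordinate of alpha*(theta) in (eps_i) is theta(alpha(e_j)) = t . A[:, j],
# which is exactly the j-th entry of A^T t.
coords = np.array([t @ A[:, j] for j in range(4)])
assert np.array_equal(coords, A.T @ t)
```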
Note that if α : U → V and β : V → W, and θ ∈ W∗, then

    (βα)∗(θ) = θβα = α∗(θβ) = α∗(β∗(θ)).

So we have (βα)∗ = α∗β∗. This is obviously true for the finite-dimensional case,
since that's how transposes of matrices work.

Similarly, if α, β : U → V, then (λα + µβ)∗ = λα∗ + µβ∗.
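In matrix form these two facts are (BA)ᵀ = AᵀBᵀ and linearity of the transpose. A quick sketch with random matrices (the shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))   # alpha : U -> V
B = rng.standard_normal((2, 4))   # beta  : V -> W

# (beta alpha)* = alpha* beta*  ~  (BA)^T = A^T B^T
assert np.allclose((B @ A).T, A.T @ B.T)

# (lam*alpha + mu*beta)* = lam*alpha* + mu*beta*  ~  linearity of transpose
lam, mu = 2.0, -3.0
C = rng.standard_normal((4, 3))
assert np.allclose((lam * A + mu * C).T, lam * A.T + mu * C.T)
```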
What happens when we change basis? If B = Q⁻¹AP for some invertible P
and Q, then

    Bᵀ = (Q⁻¹AP)ᵀ = Pᵀ Aᵀ (Q⁻¹)ᵀ = ((P⁻¹)ᵀ)⁻¹ Aᵀ (Q⁻¹)ᵀ.

So in the dual space, we conjugate by the dual of the change-of-basis matrices.
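The computation above can be verified numerically. The sketch below uses random matrices (invertible with probability 1) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))   # change of basis, a.s. invertible
Q = rng.standard_normal((3, 3))

# If B = Q^{-1} A P, then B^T = ((P^{-1})^T)^{-1} A^T (Q^{-1})^T:
B = np.linalg.inv(Q) @ A @ P
rhs = np.linalg.inv(np.linalg.inv(P).T) @ A.T @ np.linalg.inv(Q).T
assert np.allclose(B.T, rhs)
```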
As we said, we can use dualization to translate problems about a vector space
to its dual. The following lemma gives us some good tools to do so:
Lemma. Let α ∈ L(V, W) with V, W finite-dimensional vector spaces over F.
Then

(i) ker α∗ = (im α)⁰.

(ii) r(α∗) = r(α) (which is another proof that row rank is equal to column
rank).

(iii) im α∗ = (ker α)⁰.

At first sight, (i) and (iii) look quite similar. However, (i) is almost trivial to
prove, but (iii) is rather hard.
Proof.

(i) If θ ∈ W∗, then

    θ ∈ ker α∗ ⇔ α∗(θ) = 0
              ⇔ (∀v ∈ V) θα(v) = 0
              ⇔ (∀w ∈ im α) θ(w) = 0
              ⇔ θ ∈ (im α)⁰.

(ii) As im α ≤ W, we've seen that

    dim im α + dim(im α)⁰ = dim W.

Using (i), we see

    n(α∗) = dim(im α)⁰.

So

    r(α) + n(α∗) = dim W = dim W∗.

By the rank-nullity theorem, we have r(α) = r(α∗).

(iii) The proof in (i) doesn't quite work here. We can only show that one
includes the other. To draw the conclusion, we will show that the two
spaces have the same dimension, and hence must be equal.

Let θ ∈ im α∗. Then θ = φα for some φ ∈ W∗. If v ∈ ker α, then

    θ(v) = φ(α(v)) = φ(0) = 0.

So im α∗ ⊆ (ker α)⁰.

But we know

    dim(ker α)⁰ + dim ker α = dim V,

so we have

    dim(ker α)⁰ = dim V − n(α) = r(α) = r(α∗) = dim im α∗.

Hence we must have im α∗ = (ker α)⁰.
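Part (ii) in matrix form says that row rank equals column rank, i.e. rank(A) = rank(Aᵀ). A quick check on a deliberately rank-deficient matrix (the matrix is illustrative):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],   # = 2 * row 1, so A has rank 2
              [0., 1., 1.]])

assert np.linalg.matrix_rank(A) == 2
# r(alpha) = r(alpha*): rank is unchanged by transposing
assert np.linalg.matrix_rank(A.T) == np.linalg.matrix_rank(A)
```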
Not only do we want to get from V to V∗, we want to get back from V∗ to V.
We can take the dual of V∗ to get a V∗∗. We already know that V∗∗ is isomorphic
to V, since V∗ is isomorphic to V already. However, the isomorphisms between
V and V∗ are not "natural". To define such an isomorphism, we needed to pick
a basis for V and consider a dual basis. If we picked a different basis, we would
get a different isomorphism. There is no natural, canonical, uniquely-defined
isomorphism between V and V∗.

However, this is not the case when we want to construct an isomorphism
V → V∗∗. The construction of this isomorphism is obvious once we think hard
about what V∗∗ actually means. Unwrapping the definition, we know V∗∗ = L(V∗, F).
Our isomorphism has to produce something in V∗∗ given any v ∈ V. This is
equivalent to saying: given any v ∈ V and a function θ ∈ V∗, produce something
in F.

This is easy: by definition θ ∈ V∗ is just a linear map V → F. So given v
and θ, we just return θ(v). We now just have to show that this is linear and is
bijective.
Lemma. Let V be a vector space over F. Then there is a linear map
ev : V → (V∗)∗ given by

    ev(v)(θ) = θ(v).

We call this the evaluation map.

We call this a "canonical" map since this does not require picking a particular
basis of the vector spaces. It is in some sense a "natural" map.
Proof. We first show that ev(v) ∈ V∗∗ for all v ∈ V, i.e. ev(v) is linear for any
v. For any λ, µ ∈ F, θ₁, θ₂ ∈ V∗ and v ∈ V, we have

    ev(v)(λθ₁ + µθ₂) = (λθ₁ + µθ₂)(v)
                     = λθ₁(v) + µθ₂(v)
                     = λ ev(v)(θ₁) + µ ev(v)(θ₂).

So done. Now we show that ev itself is linear. Let λ, µ ∈ F and v₁, v₂ ∈ V. We
want to show

    ev(λv₁ + µv₂) = λ ev(v₁) + µ ev(v₂).

To show these are equal, pick θ ∈ V∗. Then

    ev(λv₁ + µv₂)(θ) = θ(λv₁ + µv₂)
                     = λθ(v₁) + µθ(v₂)
                     = λ ev(v₁)(θ) + µ ev(v₂)(θ)
                     = (λ ev(v₁) + µ ev(v₂))(θ).

So done.
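The evaluation map is naturally expressed as a higher-order function: ev(v) consumes a functional and returns its value at v. The sketch below is illustrative (vectors as tuples, functionals as Python functions, names made up):

```python
# The evaluation map: ev(v) is the element of V** that sends theta to theta(v).
def ev(v):
    return lambda theta: theta(v)

theta1 = lambda v: v[0]           # first coordinate functional
theta2 = lambda v: v[0] + v[1]

v = (2, 5)
assert ev(v)(theta1) == 2
assert ev(v)(theta2) == 7

# ev(v) is linear in theta: check on lam*theta1 + mu*theta2
lam, mu = 3, -1
combo = lambda w: lam * theta1(w) + mu * theta2(w)
assert ev(v)(combo) == lam * ev(v)(theta1) + mu * ev(v)(theta2)
```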
In the special case where V is finite-dimensional, this is an isomorphism.
Lemma. If V is finite-dimensional, then ev : V → V∗∗ is an isomorphism.
This is very false for infinite dimensional spaces. In fact, this is true only
for finite-dimensional vector spaces (assuming the axiom of choice), and some
(weird) people use this as the definition of finite-dimensional vector spaces.
Proof. We first show it is injective. Suppose ev(v) = 0 for some v ∈ V. Then
θ(v) = ev(v)(θ) = 0 for all θ ∈ V∗. So dim⟨v⟩⁰ = dim V∗ = dim V. So
dim⟨v⟩ = 0. So v = 0. So ev is injective. Since V and V∗∗ have the same
dimension, this is also surjective. So done.
From now on, we will just pretend that V and V∗∗ are the same thing, at
least when V is finite-dimensional.
Note that this lemma does not just say that V is isomorphic to V∗∗ (we
already know that since they have the same dimension). This says there is a
completely canonical way to choose the isomorphism.
In general, if V is infinite-dimensional, then ev is injective, but not surjective.
So we can think of V as a subspace of V∗∗ in a canonical way.
Lemma. Let V, W be finite-dimensional vector spaces over F, after identifying
(V and V∗∗) and (W and W∗∗) by the evaluation map. Then we have

(i) If U ≤ V, then U⁰⁰ = U.

(ii) If α ∈ L(V, W), then α∗∗ = α.
Proof.

(i) Let u ∈ U. Then u(θ) = θ(u) = 0 for all θ ∈ U⁰. So u annihilates
everything in U⁰. So u ∈ U⁰⁰. So U ⊆ U⁰⁰. We also know that

    dim U = dim V − dim U⁰ = dim V − (dim V − dim U⁰⁰) = dim U⁰⁰.

So we must have U = U⁰⁰.
(ii) The proof of this is basically "the transpose of the transpose is the
original matrix". The only work we have to do is to show that the dual of
the dual basis is the original basis.

Let (e₁, ···, e_n) be a basis for V and (f₁, ···, f_m) be a basis for W, and let
(ε₁, ···, ε_n) and (η₁, ···, η_m) be the corresponding dual bases. We know
that

    e_i(ε_j) = δ_ij = ε_j(e_i),   f_i(η_j) = δ_ij = η_j(f_i).

So (e₁, ···, e_n) is dual to (ε₁, ···, ε_n), and similarly for f and η.

If α is represented by A, then α∗ is represented by Aᵀ. So α∗∗ is represented
by (Aᵀ)ᵀ = A. So done.
Proposition. Let V be a finite-dimensional vector space over F and U₁, U₂
subspaces of V. Then we have

(i) (U₁ + U₂)⁰ = U₁⁰ ∩ U₂⁰

(ii) (U₁ ∩ U₂)⁰ = U₁⁰ + U₂⁰
Proof.

(i) Suppose θ ∈ V∗. Then

    θ ∈ (U₁ + U₂)⁰ ⇔ θ(u₁ + u₂) = 0 for all u_i ∈ U_i
                   ⇔ θ(u) = 0 for all u ∈ U₁ ∪ U₂
                   ⇔ θ ∈ U₁⁰ ∩ U₂⁰.
(ii) We have

    (U₁ ∩ U₂)⁰ = ((U₁⁰)⁰ ∩ (U₂⁰)⁰)⁰ = (U₁⁰ + U₂⁰)⁰⁰ = U₁⁰ + U₂⁰.

So done.
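Part (ii) can be sanity-checked numerically in coordinates. In the sketch below (all setup illustrative, not from the notes), a functional is identified with its coordinate row vector, so the annihilator of U = span(columns of M) is the null space of Mᵀ; the dimensions of both sides of (ii) are then compared.

```python
import numpy as np

def null_basis(M):
    """Columns spanning the null space of M, computed via the SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int((s > 1e-10).sum())
    return vt[rank:].T

rng = np.random.default_rng(3)
n = 5
M1 = rng.standard_normal((n, 2))                          # U1 = span of 2 vectors
M2 = np.hstack([M1[:, :1], rng.standard_normal((n, 2))])  # U2 shares a line with U1

# dim(U1 ∩ U2) via dim U1 + dim U2 - dim(U1 + U2):
dim_U1 = np.linalg.matrix_rank(M1)
dim_U2 = np.linalg.matrix_rank(M2)
dim_cap = dim_U1 + dim_U2 - np.linalg.matrix_rank(np.hstack([M1, M2]))

# dim(U1^0 + U2^0), from bases of the two annihilators (null spaces of Mi^T):
N1, N2 = null_basis(M1.T), null_basis(M2.T)
dim_annih_sum = np.linalg.matrix_rank(np.hstack([N1, N2]))

# (U1 ∩ U2)^0 = U1^0 + U2^0, and dim(U^0) = dim V - dim U, so:
assert dim_annih_sum == n - dim_cap
```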