3 Duality

3.1 Dual space
To specify a subspace of $\mathbb{F}^n$, we can write down linear equations that its elements satisfy. For example, if we have the subspace $U = \left\langle \begin{pmatrix} 1\\2\\1 \end{pmatrix} \right\rangle \leq \mathbb{F}^3$, we can specify this by saying $\begin{pmatrix} x_1\\x_2\\x_3 \end{pmatrix} \in U$ if and only if
\[
  x_1 - x_3 = 0, \qquad 2x_1 - x_2 = 0.
\]
However, characterizing a space in terms of equations involves picking some particular equations out of the many possibilities, and in general we do not like making arbitrary choices. The solution is to consider all possible such equations at once. We will show that these form a subspace of a suitable vector space.
We can interpret these equations in terms of linear maps $\mathbb{F}^n \to \mathbb{F}$. For example, $x_1 - x_3 = 0$ if and only if $\begin{pmatrix} x_1\\x_2\\x_3 \end{pmatrix} \in \ker \theta$, where $\theta: \mathbb{F}^3 \to \mathbb{F}$ is defined by
\[
  \begin{pmatrix} x_1\\x_2\\x_3 \end{pmatrix} \mapsto x_1 - x_3.
\]
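As a quick sanity check, we can verify this numerically: writing the two functionals above as the rows of a matrix, their common kernel is exactly $U$. A minimal sketch, assuming numpy and scipy are available:

    import numpy as np
    from scipy.linalg import null_space

    # The functionals x |-> x_1 - x_3 and x |-> 2 x_1 - x_2, one per row.
    A = np.array([[1.0, 0.0, -1.0],
                  [2.0, -1.0, 0.0]])

    u = np.array([1.0, 2.0, 1.0])  # the spanning vector of U
    print(A @ u)                   # [0. 0.]: u satisfies both equations

    K = null_space(A)              # basis for the common kernel of the functionals
    print(K / K[0])                # rescaled: proportional to (1, 2, 1), so the kernel is U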
This works well with the vector space operations. If $\theta_1, \theta_2 : \mathbb{F}^n \to \mathbb{F}$ vanish on some subspace of $\mathbb{F}^n$, and $\lambda, \mu \in \mathbb{F}$, then $\lambda\theta_1 + \mu\theta_2$ also vanishes on the subspace. So the set of all maps $\mathbb{F}^n \to \mathbb{F}$ that vanish on $U$ forms a vector space.
To formalize this notion, we introduce dual spaces.
Definition (Dual space). Let $V$ be a vector space over $\mathbb{F}$. The dual of $V$ is defined as
\[
  V^* = \mathcal{L}(V, \mathbb{F}) = \{\theta : V \to \mathbb{F} : \theta \text{ linear}\}.
\]
Elements of $V^*$ are called linear functionals or linear forms.
By convention, we use Roman letters for elements in $V$, and Greek letters for elements in $V^*$.
Example.
- If $V = \mathbb{R}^3$ and $\theta : V \to \mathbb{R}$ sends $\begin{pmatrix} x_1\\x_2\\x_3 \end{pmatrix} \mapsto x_1 - x_3$, then $\theta \in V^*$.
- Let $V = \mathbb{F}^X$. Then for any fixed $x \in X$, the evaluation map $\theta : V \to \mathbb{F}$ defined by $f \mapsto f(x)$ is in $V^*$.
- Let $V = C([0, 1], \mathbb{R})$. Then $f \mapsto \int_0^1 f(t)\,\mathrm{d}t$ is in $V^*$.
- The trace $\operatorname{tr} : M_n(\mathbb{F}) \to \mathbb{F}$ defined by $A \mapsto \sum_{i=1}^n A_{ii}$ is in $M_n(\mathbb{F})^*$.
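For instance, linearity of the trace is easy to check numerically; a minimal sketch, assuming numpy is available and using randomly chosen matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.random((3, 3)), rng.random((3, 3))
    lam, mu = 2.0, -3.0

    # tr is a linear functional: tr(lam*A + mu*B) = lam*tr(A) + mu*tr(B).
    print(np.isclose(np.trace(lam * A + mu * B),
                     lam * np.trace(A) + mu * np.trace(B)))  # True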
It turns out it is rather easy to describe what the dual space looks like, at least in the case where $V$ is finite-dimensional.
Lemma. If $V$ is a finite-dimensional vector space over $\mathbb{F}$ with basis $(e_1, \cdots, e_n)$, then there is a basis $(\varepsilon_1, \cdots, \varepsilon_n)$ for $V^*$ (called the dual basis to $(e_1, \cdots, e_n)$) such that
\[
  \varepsilon_i(e_j) = \delta_{ij}.
\]
Proof. Since linear maps are characterized by their values on a basis, there exist unique choices for $\varepsilon_1, \cdots, \varepsilon_n \in V^*$ with $\varepsilon_i(e_j) = \delta_{ij}$. Now we show that $(\varepsilon_1, \cdots, \varepsilon_n)$ is a basis. Suppose $\theta \in V^*$. We show that we can write it uniquely as a combination of $\varepsilon_1, \cdots, \varepsilon_n$. We have $\theta = \sum_{i=1}^n \lambda_i \varepsilon_i$ if and only if $\theta(e_j) = \sum_{i=1}^n \lambda_i \varepsilon_i(e_j)$ for all $j$, if and only if $\lambda_j = \theta(e_j)$. So we have uniqueness and existence.
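Concretely, if we collect a basis of $\mathbb{F}^n$ as the columns of an invertible matrix $B$, the dual basis consists of the rows of $B^{-1}$, since $(B^{-1}B)_{ij} = \delta_{ij}$. A small numpy sketch with one (arbitrarily chosen) basis of $\mathbb{R}^3$:

    import numpy as np

    B = np.array([[1.0, 1.0, 0.0],   # basis e_1, e_2, e_3 of R^3, one per column
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])

    E = np.linalg.inv(B)                  # row i of E represents eps_i
    print(np.allclose(E @ B, np.eye(3)))  # eps_i(e_j) = delta_ij

    theta = np.array([4.0, -1.0, 2.0])    # any functional, as a row vector
    lam = theta @ B                       # lambda_j = theta(e_j), as in the proof
    print(np.allclose(lam @ E, theta))    # theta = sum_j lambda_j eps_j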
Corollary. If $V$ is finite-dimensional, then $\dim V = \dim V^*$.
When $V$ is not finite-dimensional, this need not be true. However, we know that the dimension of $V^*$ is at least as big as that of $V$, since the above gives a set of $\dim V$ many independent vectors in $V^*$. In fact for any infinite-dimensional vector space, $\dim V^*$ is strictly larger than $\dim V$, if we manage to define dimensions for infinite-dimensional vector spaces.
It helps to come up with a more concrete picture of what dual spaces look like. Consider the vector space $\mathbb{F}^n$, where we treat each element as a column vector (with respect to the standard basis). Then we can regard elements of $V^*$ as just row vectors $(a_1, \cdots, a_n) = \sum_{j=1}^n a_j \varepsilon_j$ with respect to the dual basis. We have
\[
  \left(\sum_j a_j \varepsilon_j\right)\left(\sum_i x_i e_i\right) = \sum_{i,j} a_j x_i \delta_{ij} = \sum_{i=1}^n a_i x_i = \begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.
\]
This is exactly what we want.
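In this picture, applying a functional to a vector is just the row-times-column product; a tiny illustrative check (values chosen arbitrarily):

    import numpy as np

    a = np.array([3.0, 0.0, -2.0])  # the functional 3 eps_1 - 2 eps_3, as a row
    x = np.array([1.0, 5.0, 4.0])   # the vector e_1 + 5 e_2 + 4 e_3, as a column
    print(a @ x)                    # sum_i a_i x_i = 3 - 8 = -5.0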
Now what happens when we change basis? How will the dual basis change?
Proposition. Let $V$ be a finite-dimensional vector space over $\mathbb{F}$ with bases $(e_1, \cdots, e_n)$ and $(f_1, \cdots, f_n)$, and let $P$ be the change of basis matrix, so that
\[
  f_i = \sum_{k=1}^n P_{ki} e_k.
\]
Let $(\varepsilon_1, \cdots, \varepsilon_n)$ and $(\eta_1, \cdots, \eta_n)$ be the corresponding dual bases, so that
\[
  \varepsilon_i(e_j) = \delta_{ij} = \eta_i(f_j).
\]
Then the change of basis matrix from $(\varepsilon_1, \cdots, \varepsilon_n)$ to $(\eta_1, \cdots, \eta_n)$ is $(P^{-1})^T$, i.e.
\[
  \varepsilon_i = \sum_{\ell=1}^n P^T_{\ell i}\, \eta_\ell.
\]
Proof. For convenience, write $Q = P^{-1}$, so that
\[
  e_j = \sum_{k=1}^n Q_{kj} f_k.
\]
So we can compute
\[
  \left(\sum_{\ell=1}^n P_{i\ell}\, \eta_\ell\right)(e_j) = \left(\sum_{\ell=1}^n P_{i\ell}\, \eta_\ell\right)\left(\sum_{k=1}^n Q_{kj} f_k\right) = \sum_{k,\ell} P_{i\ell}\, \delta_{\ell k}\, Q_{kj} = \sum_{\ell} P_{i\ell}\, Q_{\ell j} = [PQ]_{ij} = \delta_{ij}.
\]
So $\varepsilon_i = \sum_{\ell=1}^n P^T_{\ell i}\, \eta_\ell$.
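In the row-vector picture from before, this transformation rule says that the matrix of dual basis rows for $(e_i)$ equals $P$ times the matrix of dual basis rows for $(f_i)$. A quick numerical sanity check, assuming numpy and using random (hence almost surely invertible) matrices:

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.random((3, 3))        # old basis (e_i), one vector per column
    P = rng.random((3, 3))        # change of basis: f_i = sum_k P_ki e_k
    F = B @ P                     # new basis (f_i) as columns

    E = np.linalg.inv(B)          # rows of E represent eps_1, ..., eps_n
    H = np.linalg.inv(F)          # rows of H represent eta_1, ..., eta_n

    # eps_i = sum_l P^T_{li} eta_l says that row i of E is row i of P @ H:
    print(np.allclose(E, P @ H))  # True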
Now we'll return to our original motivation, and think about how we can define subspaces of $V^*$ in terms of subspaces of $V$, and vice versa.
Definition (Annihilator). Let $U \leq V$. Then the annihilator of $U$ is
\[
  U^0 = \{\theta \in V^* : \theta(u) = 0\ \forall u \in U\}.
\]
If $W \leq V^*$, then the annihilator of $W$ is
\[
  W^0 = \{v \in V : \theta(v) = 0\ \forall \theta \in W\}.
\]
One might object that $W^0$ should be a subset of $V^{**}$ and not $V$. We will later show that there is a canonical isomorphism between $V^{**}$ and $V$, and this will all make sense.
Example. Consider $\mathbb{R}^3$ with standard basis $(e_1, e_2, e_3)$, and $(\mathbb{R}^3)^*$ with dual basis $(\varepsilon_1, \varepsilon_2, \varepsilon_3)$. If $U = \langle e_1 + 2e_2 + e_3 \rangle$ and $W = \langle \varepsilon_1 - \varepsilon_3, 2\varepsilon_1 - \varepsilon_2 \rangle$, then $U^0 = W$ and $W^0 = U$.
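In the row-vector picture this is easy to confirm numerically; a sketch assuming numpy and scipy:

    import numpy as np
    from scipy.linalg import null_space

    u = np.array([1.0, 2.0, 1.0])    # e_1 + 2 e_2 + e_3, spanning U

    W = np.array([[1.0, 0.0, -1.0],  # eps_1 - eps_3, as a row vector
                  [2.0, -1.0, 0.0]]) # 2 eps_1 - eps_2

    print(W @ u)                     # [0. 0.]: both functionals kill U, so W <= U^0
    K = null_space(W)                # W^0, computed as the common kernel
    print(K / K[0])                  # proportional to (1, 2, 1): W^0 = U

Since $\dim W = 2 = \dim U^0$ by the proposition below, the inclusion $W \leq U^0$ is in fact an equality.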
We see that the dimensions of $U$ and $U^0$ add up to three, which is the dimension of $\mathbb{R}^3$. This is typical.
Proposition. Let $V$ be a finite-dimensional vector space over $\mathbb{F}$ and $U$ a subspace. Then
\[
  \dim U + \dim U^0 = \dim V.
\]
We are going to prove this in many ways.
Proof. Let $(e_1, \cdots, e_k)$ be a basis for $U$ and extend it to a basis $(e_1, \cdots, e_n)$ for $V$. Consider the dual basis for $V^*$, say $(\varepsilon_1, \cdots, \varepsilon_n)$. Then we will show that
\[
  U^0 = \langle \varepsilon_{k+1}, \cdots, \varepsilon_n \rangle.
\]
So $\dim U^0 = n - k$ as required. This is easy to prove: if $j > k$, then $\varepsilon_j(e_i) = 0$ for all $i \leq k$. So $\varepsilon_{k+1}, \cdots, \varepsilon_n \in U^0$. On the other hand, suppose $\theta \in U^0$. Then we can write
\[
  \theta = \sum_{j=1}^n \lambda_j \varepsilon_j.
\]
But then $0 = \theta(e_i) = \lambda_i$ for $i \leq k$, so $\theta \in \langle \varepsilon_{k+1}, \cdots, \varepsilon_n \rangle$. So done.
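The formula is also easy to test numerically: representing $U$ by a matrix whose rows span it, $U^0$ corresponds to the null space of that matrix. A sketch assuming numpy and scipy, with a random subspace:

    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(2)
    n, k = 5, 2
    U = rng.random((k, n))            # rows span an (almost surely) k-dim subspace of R^n

    dim_U  = np.linalg.matrix_rank(U)
    dim_U0 = null_space(U).shape[1]   # theta in U^0  <=>  U @ theta = 0
    print(dim_U + dim_U0 == n)        # True: dim U + dim U^0 = dim V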
Proof. Consider the restriction map $V^* \to U^*$, given by $\theta \mapsto \theta|_U$. This is obviously linear. Since every linear map $U \to \mathbb{F}$ can be extended to a linear map $V \to \mathbb{F}$, this is a surjection. Moreover, the kernel is $U^0$. So by the rank-nullity theorem,
\[
  \dim V^* = \dim U^0 + \dim U^*.
\]
Since $\dim V^* = \dim V$ and $\dim U^* = \dim U$, we're done.
Proof. We can show that $U^0 \simeq (V/U)^*$, and then deduce the result. Details are left as an exercise.