4 Bilinear forms I
So far, we have been looking at linear things only. This can get quite boring.
For a change, we look at bilinear maps instead. In this chapter, we will look at
bilinear forms in general. It turns out there isn’t much we can say about them,
and hence this chapter is rather short. Later, in Chapter 7, we will study some
special kinds of bilinear forms which are more interesting.
Definition (Bilinear form). Let $V, W$ be vector spaces over $\mathbb{F}$. Then a function $\phi: V \times W \to \mathbb{F}$ is a bilinear form if it is linear in each variable, i.e. for each $v \in V$, $\phi(v, \cdot): W \to \mathbb{F}$ is linear; and for each $w \in W$, $\phi(\cdot, w): V \to \mathbb{F}$ is linear.
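Note that a bilinear form is not a linear map on $V \oplus W$: for instance, scaling both arguments scales the value quadratically, since
\[
\phi(\lambda v, \lambda w) = \lambda\, \phi(v, \lambda w) = \lambda^2\, \phi(v, w).
\]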
Example. The map defined by
\[
V \times V^* \to \mathbb{F}, \qquad (v, \theta) \mapsto \theta(v) = \operatorname{ev}(v)(\theta)
\]
is a bilinear form.
Example. Let $V = W = \mathbb{F}^n$. Then the function
\[
\phi(v, w) = \sum_{i=1}^n v_i w_i
\]
is bilinear.
Example. If $V = W = C([0, 1], \mathbb{R})$, then
\[
(f, g) \mapsto \int_0^1 fg \,\mathrm{d}t
\]
is a bilinear form.
Example. Let $A \in \operatorname{Mat}_{m,n}(\mathbb{F})$. Then
\[
\phi: \mathbb{F}^m \times \mathbb{F}^n \to \mathbb{F}, \qquad (v, w) \mapsto v^T A w
\]
is bilinear. Note that the (real) dot product is the special case of this, where $n = m$ and $A = I$.
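For instance, taking $m = n = 2$ and, say, $A = \begin{pmatrix} 1 & 2\\ 0 & 1 \end{pmatrix}$, we get
\[
\phi(v, w) = v^T A w = v_1 w_1 + 2 v_1 w_2 + v_2 w_2,
\]
which is visibly linear in $v$ for fixed $w$, and linear in $w$ for fixed $v$.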
In fact, this is the most general form of bilinear forms on finite-dimensional
vector spaces.
Definition (Matrix representing bilinear form). Let $(e_1, \cdots, e_n)$ be a basis for $V$ and $(f_1, \cdots, f_m)$ be a basis for $W$, and let $\psi: V \times W \to \mathbb{F}$ be a bilinear form. Then the matrix $A$ representing $\psi$ with respect to these bases is defined to be
\[
A_{ij} = \psi(e_i, f_j).
\]
Note that if $v = \sum \lambda_i e_i$ and $w = \sum \mu_j f_j$, then by linearity, we get
\begin{align*}
\psi(v, w) &= \psi\left(\sum_i \lambda_i e_i, w\right)\\
&= \sum_i \lambda_i\, \psi(e_i, w)\\
&= \sum_i \lambda_i\, \psi\left(e_i, \sum_j \mu_j f_j\right)\\
&= \sum_{i,j} \lambda_i \mu_j\, \psi(e_i, f_j)\\
&= \lambda^T A \mu.
\end{align*}
So $\psi$ is determined by $A$.
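For instance, take $\psi: \mathbb{F}^2 \times \mathbb{F}^2 \to \mathbb{F}$ given by $\psi(v, w) = v_1 w_2$. With respect to the standard basis,
\[
A = \begin{pmatrix} \psi(e_1, e_1) & \psi(e_1, e_2)\\ \psi(e_2, e_1) & \psi(e_2, e_2) \end{pmatrix} = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix},
\]
and indeed $\lambda^T A \mu = \lambda_1 \mu_2$ recovers $\psi$.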
We have identified linear maps with matrices, and we have identified bilinear forms with matrices. However, you shouldn't think linear maps are bilinear forms. They are, obviously, two different things. In fact, the matrices representing linear maps and those representing bilinear forms transform differently when we change basis.
Proposition. Suppose $(e_1, \cdots, e_n)$ and $(v_1, \cdots, v_n)$ are bases for $V$ such that
\[
v_i = \sum_k P_{ki} e_k \quad \text{for all } i = 1, \cdots, n,
\]
and $(f_1, \cdots, f_m)$ and $(w_1, \cdots, w_m)$ are bases for $W$ such that
\[
w_j = \sum_\ell Q_{\ell j} f_\ell \quad \text{for all } j = 1, \cdots, m.
\]
Let $\psi: V \times W \to \mathbb{F}$ be a bilinear form represented by $A$ with respect to $(e_1, \cdots, e_n)$ and $(f_1, \cdots, f_m)$, and by $B$ with respect to the bases $(v_1, \cdots, v_n)$ and $(w_1, \cdots, w_m)$. Then
\[
B = P^T A Q.
\]
The difference from the transformation law for linear maps is that this time we are taking transposes, not inverses.
Proof. We have
\begin{align*}
B_{ij} &= \psi(v_i, w_j)\\
&= \psi\left(\sum_k P_{ki} e_k, \sum_\ell Q_{\ell j} f_\ell\right)\\
&= \sum_{k, \ell} P_{ki} Q_{\ell j}\, \psi(e_k, f_\ell)\\
&= \sum_{k, \ell} P^T_{ik} A_{k\ell} Q_{\ell j}\\
&= (P^T A Q)_{ij}.
\end{align*}
Note that while the transformation laws for bilinear forms and linear maps are different, we still get that two matrices represent the same bilinear form with respect to different bases if and only if they are equivalent, since if $B = P^{-1} A Q$, then $B = ((P^{-1})^T)^T A Q$.
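As a quick sanity check of the transformation law, let $\psi$ be the standard dot product on $\mathbb{R}^2$, so that $A = I$ with respect to the standard basis. Take $v_1 = e_1$ and $v_2 = e_1 + e_2$ as the new basis for both copies of $\mathbb{R}^2$, so that
\[
P = Q = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}, \qquad B = P^T A Q = \begin{pmatrix} 1 & 0\\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1\\ 1 & 2 \end{pmatrix}.
\]
This agrees with computing $B_{ij} = \psi(v_i, v_j)$ directly: $v_1 \cdot v_1 = 1$, $v_1 \cdot v_2 = 1$ and $v_2 \cdot v_2 = 2$.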
If we are given a bilinear form $\psi: V \times W \to \mathbb{F}$, we immediately get two linear maps:
\[
\psi_L: V \to W^*, \qquad \psi_R: W \to V^*,
\]
defined by $\psi_L(v) = \psi(v, \cdot)$ and $\psi_R(w) = \psi(\cdot, w)$.
For example, if $\psi: V \times V^* \to \mathbb{F}$ is defined by $(v, \theta) \mapsto \theta(v)$, then $\psi_L: V \to V^{**}$ is the evaluation map. On the other hand, $\psi_R: V^* \to V^*$ is the identity map.
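Indeed, spelling out the second claim: for any $\theta \in V^*$ we have
\[
\psi_R(\theta)(v) = \psi(v, \theta) = \theta(v) \quad \text{for all } v \in V,
\]
so $\psi_R(\theta) = \theta$.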
Lemma. Let $(\varepsilon_1, \cdots, \varepsilon_n)$ be the basis for $V^*$ dual to $(e_1, \cdots, e_n)$ of $V$, and $(\eta_1, \cdots, \eta_m)$ be the basis for $W^*$ dual to $(f_1, \cdots, f_m)$ of $W$.
If $A$ represents $\psi$ with respect to $(e_1, \cdots, e_n)$ and $(f_1, \cdots, f_m)$, then $A$ also represents $\psi_R$ with respect to $(f_1, \cdots, f_m)$ and $(\varepsilon_1, \cdots, \varepsilon_n)$; and $A^T$ represents $\psi_L$ with respect to $(e_1, \cdots, e_n)$ and $(\eta_1, \cdots, \eta_m)$.
Proof. We just have to compute
\[
\psi_L(e_i)(f_j) = A_{ij} = \sum_\ell A_{i\ell}\, \eta_\ell(f_j).
\]
So we get
\[
\psi_L(e_i) = \sum_\ell A^T_{\ell i}\, \eta_\ell.
\]
So $A^T$ represents $\psi_L$.
We also have
\[
\psi_R(f_j)(e_i) = A_{ij}.
\]
So
\[
\psi_R(f_j) = \sum_k A_{kj}\, \varepsilon_k,
\]
i.e. $A$ represents $\psi_R$.
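Concretely, for the form $\phi(v, w) = v^T A w$ on $\mathbb{F}^m \times \mathbb{F}^n$ from the earlier example, identifying $(\mathbb{F}^m)^*$ and $(\mathbb{F}^n)^*$ with $\mathbb{F}^m$ and $\mathbb{F}^n$ via the dual bases, the lemma says
\[
\psi_R(w) = Aw \qquad \text{and} \qquad \psi_L(v) = A^T v,
\]
since $\phi(v, w) = v^T(Aw) = (A^T v)^T w$.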
Definition (Left and right kernel). The kernel of $\psi_L$ is the left kernel of $\psi$, while the kernel of $\psi_R$ is the right kernel of $\psi$.
Then by definition, $v$ is in the left kernel if $\psi(v, w) = 0$ for all $w \in W$.

More generally, if $T \subseteq V$, then we write
\[
T^\perp = \{w \in W : \psi(t, w) = 0 \text{ for all } t \in T\}.
\]
Similarly, if $U \subseteq W$, then we write
\[
{}^\perp U = \{v \in V : \psi(v, u) = 0 \text{ for all } u \in U\}.
\]
In particular, $V^\perp = \ker \psi_R$ and ${}^\perp W = \ker \psi_L$.
If we have a non-trivial left (or right) kernel, then in some sense, some
elements in V (or W ) are “useless”, and we don’t like these.
Definition (Non-degenerate bilinear form). $\psi$ is non-degenerate if the left and right kernels are both trivial. We say $\psi$ is degenerate otherwise.
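For instance, $\psi(v, w) = v_1 w_1$ on $\mathbb{F}^2 \times \mathbb{F}^2$ is degenerate: $\psi(v, w) = 0$ for all $w$ if and only if $v_1 = 0$, so the left kernel is the span of $e_2$, and by symmetry so is the right kernel.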
Definition (Rank of bilinear form). If $\psi: V \times W \to \mathbb{F}$ is a bilinear form on finite-dimensional vector spaces $V$ and $W$, then the rank of $\psi$ is the rank of any matrix representing $\psi$. This is well-defined since $r(P^T A Q) = r(A)$ if $P$ and $Q$ are invertible.

Alternatively, it is the rank of $\psi_L$ (or $\psi_R$), since these are represented by $A^T$ and $A$ respectively.
Lemma. Let $V$ and $W$ be finite-dimensional vector spaces over $\mathbb{F}$ with bases $(e_1, \cdots, e_n)$ and $(f_1, \cdots, f_m)$ respectively.
Let $\psi: V \times W \to \mathbb{F}$ be a bilinear form represented by $A$ with respect to these bases. Then $\psi$ is non-degenerate if and only if $A$ is (square and) invertible. In particular, $V$ and $W$ have the same dimension.
We can understand this as saying that if there are too many things in $V$ (or $W$), then some of them are bound to be useless.
Proof. By the lemma above, $\psi_R$ and $\psi_L$ are represented by $A$ and $A^T$ respectively. So they both have trivial kernel if and only if $n(A) = n(A^T) = 0$, i.e. $r(A) = \dim W$ and $r(A^T) = \dim V$. Since $r(A) = r(A^T)$, we need $\dim V = \dim W$ and $A$ to have full rank, i.e. the corresponding linear map is bijective. So done.
Example. The map
\[
\mathbb{F}^2 \times \mathbb{F}^2 \to \mathbb{F}, \qquad \left(\begin{pmatrix} a\\ c \end{pmatrix}, \begin{pmatrix} b\\ d \end{pmatrix}\right) \mapsto ad - bc
\]
is a bilinear form. This, obviously, corresponds to the determinant of a 2-by-2 matrix. We have $\psi(v, w) = -\psi(w, v)$ for all $v, w \in \mathbb{F}^2$.
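In particular, with respect to the standard basis this form is represented by
\[
A = \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix},
\]
which is invertible, so by the lemma above the determinant form is non-degenerate.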