8 Inner product spaces

IB Linear Algebra



8.4 Spectral theory
We are going to classify matrices in inner product spaces. Recall that for general vector spaces, what we effectively did was to find the orbits of the conjugation action of GL(V) on Mat_n(F). If we have inner product spaces, we will want to look at the action of O(V) or U(V) on Mat_n(F). In more human language, instead of allowing arbitrary basis transformations, we only allow transformations between orthonormal bases.
We are not going to classify all endomorphisms, but just self-adjoint and
orthogonal/unitary ones.
Lemma. Let V be a finite-dimensional inner product space, and α ∈ End(V) self-adjoint. Then

(i) α has a real eigenvalue, and all eigenvalues of α are real.

(ii) Eigenvectors of α with distinct eigenvalues are orthogonal.
Proof. We are going to do the real and complex cases separately.

(i) Suppose first that V is a complex inner product space. Then by the fundamental theorem of algebra, α has an eigenvalue, say λ. We pick v ∈ V \ {0} such that αv = λv. Then

    λ̄(v, v) = (λv, v) = (αv, v) = (v, αv) = (v, λv) = λ(v, v).

Since v ≠ 0, we know (v, v) ≠ 0. So λ = λ̄, i.e. λ is real.
For the real case, we pretend we are in the complex case. Let e_1, ..., e_n be an orthonormal basis for V. Then α is represented by a symmetric matrix A (with respect to this basis). Since real symmetric matrices are Hermitian when viewed as complex matrices, A also defines a self-adjoint endomorphism of C^n. By the complex case, A has real eigenvalues only. But the eigenvalues of A are the eigenvalues of α, and M_A(t) = M_α(t). So done.
Alternatively, we can prove this without reducing to the complex case. We know every irreducible factor of M_α(t) in R[t] must have degree 1 or 2, since the roots are either real or come in complex conjugate pairs. Suppose f(t) were an irreducible factor of degree 2. Then

    (M_α/f)(α) ≠ 0,

since M_α/f has degree less than the minimal polynomial. So there is some v ∈ V such that

    w = (M_α/f)(α)(v) ≠ 0.

Then f(α)(w) = M_α(α)(v) = 0. Let U = ⟨w, α(w)⟩. Then this is an α-invariant subspace of V, since f has degree 2.
Now α|_U ∈ End(U) is self-adjoint. So if (e_1, e_2) is an orthonormal basis of U, then α|_U is represented by a real symmetric matrix, say

    (a  b)
    (b  c).

But then χ_{α|_U}(t) = (t − a)(t − c) − b², whose discriminant is (a − c)² + 4b² ≥ 0, so it has real roots. This is a contradiction, since M_{α|_U} = f, but f is irreducible.
(ii) Now suppose αv = λv, αw = µw and λ ≠ µ. We need to show (v, w) = 0. We know

    (αv, w) = (v, αw)

by definition. Since λ and µ are real by (i), this gives

    λ(v, w) = µ(v, w).

Since λ ≠ µ, we must have (v, w) = 0.
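As a quick numerical sanity check (not part of the original notes), both parts of the lemma can be observed with numpy; the Hermitian matrix below is an arbitrary example. We use the general eigensolver np.linalg.eig rather than the Hermitian-aware np.linalg.eigh, so the reality of the eigenvalues and the orthogonality of the eigenvectors are genuinely observed rather than enforced:

    import numpy as np

    # An arbitrary Hermitian example: A equals its conjugate transpose.
    A = np.array([[2.0, 1 - 1j],
                  [1 + 1j, 3.0]])
    assert np.allclose(A, A.conj().T)

    evals, evecs = np.linalg.eig(A)
    print(evals)                              # imaginary parts ~ 0 (here: 1 and 4)
    print(np.vdot(evecs[:, 0], evecs[:, 1]))  # ~ 0: eigenvectors are orthogonal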
Theorem. Let V be a finite-dimensional inner product space, and α ∈ End(V) self-adjoint. Then V has an orthonormal basis of eigenvectors of α.
Proof. By the previous lemma, α has a real eigenvalue, say λ. Then we can find an eigenvector v ∈ V \ {0} such that αv = λv.

Let U = ⟨v⟩⊥. Then we can write

    V = ⟨v⟩ ⊕ U.

We now want to prove that α sends U into U. Suppose u ∈ U. Then

    (v, α(u)) = (αv, u) = λ(v, u) = 0.

So α(u) ∈ ⟨v⟩⊥ = U. So α|_U ∈ End(U) and is self-adjoint.

By induction on dim V, U has an orthonormal basis (v_2, ..., v_n) of α-eigenvectors. Now let v_1 = v/∥v∥. Then (v_1, v_2, ..., v_n) is an orthonormal basis of eigenvectors for α.
Corollary. Let V be a finite-dimensional inner product space and α ∈ End(V) self-adjoint. Then V is the orthogonal (internal) direct sum of its α-eigenspaces.
Corollary. Let A ∈ Mat_n(R) be symmetric. Then there exists an orthogonal matrix P such that P^T AP = P^{-1} AP is diagonal.
Proof. Let (·, ·) be the standard inner product on R^n. Then A is self-adjoint as an endomorphism of R^n. So R^n has an orthonormal basis of eigenvectors for A, say (v_1, ..., v_n). Taking P = (v_1 v_2 ··· v_n) gives the result.
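This corollary is exactly what np.linalg.eigh computes numerically; a minimal sketch, where the symmetric matrix A is an arbitrary example:

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0],
                  [0.0, 1.0, 2.0]])          # symmetric

    evals, P = np.linalg.eigh(A)             # columns of P: orthonormal eigenvectors
    assert np.allclose(P.T @ P, np.eye(3))   # P is orthogonal
    print(np.round(P.T @ A @ P, 10))         # diagonal, with evals on the diagonal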
Corollary. Let V be a finite-dimensional real inner product space and ψ: V × V → R a symmetric bilinear form. Then there exists an orthonormal basis (v_1, ..., v_n) for V with respect to which ψ is represented by a diagonal matrix.
Proof. Let (u_1, ..., u_n) be any orthonormal basis for V. Then ψ is represented by a symmetric matrix A. Then there exists an orthogonal matrix P such that P^T AP is diagonal. Now let v_i = Σ_k P_{ki} u_k. Then (v_1, ..., v_n) is an orthonormal basis, since

    (v_i, v_j) = (Σ_k P_{ki} u_k, Σ_ℓ P_{ℓj} u_ℓ)
               = Σ_{k,ℓ} [P^T]_{ik} (u_k, u_ℓ) P_{ℓj}
               = [P^T P]_{ij}
               = δ_{ij}.

Also, ψ is represented by P^T AP with respect to (v_1, ..., v_n).
Note that the diagonal values of P^T AP are just the eigenvalues of A. So the signature of ψ is just the number of positive eigenvalues of A minus the number of negative eigenvalues of A.
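In code, this gives a direct way to read off the signature; a short sketch, assuming ψ is represented (in some orthonormal basis) by the example matrix A below:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 1.0]])               # represents a symmetric bilinear form
    evals = np.linalg.eigvalsh(A)            # eigenvalues: [-1, 3]
    signature = int(np.sum(evals > 0) - np.sum(evals < 0))
    print(signature)                         # 0: one positive, one negative eigenvalue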
Corollary. Let V be a finite-dimensional real vector space and φ, ψ symmetric bilinear forms on V such that φ is positive-definite. Then we can find a basis (v_1, ..., v_n) for V such that both φ and ψ are represented by diagonal matrices with respect to this basis.
Proof. We use φ to define an inner product. Choose an orthonormal basis (v_1, ..., v_n) for V (equipped with φ) with respect to which ψ is diagonal. Then φ is represented by I with respect to this basis, since φ(v_i, v_j) = δ_{ij}. So done.
Corollary. If A, B ∈ Mat_n(R) are symmetric and A is positive definite (i.e. v^T Av > 0 for all v ∈ R^n \ {0}), then there exists an invertible matrix Q such that Q^T AQ and Q^T BQ are both diagonal.
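A sketch of how such a Q can be computed in practice, mirroring the proof above: the Cholesky factor of A plays the role of a φ-orthonormal basis. The matrices A and B are arbitrary examples; all functions used are standard numpy.

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
    B = np.array([[0.0, 1.0], [1.0, 0.0]])   # symmetric

    L = np.linalg.cholesky(A)                # A = L L^T
    Linv = np.linalg.inv(L)
    C = Linv @ B @ Linv.T                    # still symmetric
    w, S = np.linalg.eigh(C)                 # S orthogonal, S^T C S = diag(w)
    Q = Linv.T @ S

    print(np.round(Q.T @ A @ Q, 10))         # identity
    print(np.round(Q.T @ B @ Q, 10))         # diag(w)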
We can deduce similar results for complex finite-dimensional vector spaces,
with the same proofs. In particular,
Proposition.

(i) If A ∈ Mat_n(C) is Hermitian, then there exists a unitary matrix U ∈ Mat_n(C) such that U^{-1} AU = U^* AU is diagonal.

(ii) If ψ is a Hermitian form on a finite-dimensional complex inner product space V, then there is an orthonormal basis for V diagonalizing ψ.

(iii) If φ, ψ are Hermitian forms on a finite-dimensional complex vector space and φ is positive definite, then there exists a basis for which φ and ψ are diagonalized.
(iv) Let A, B ∈ Mat_n(C) be Hermitian, and A positive definite (i.e. v^* Av > 0 for v ∈ C^n \ {0}). Then there exists some invertible Q such that Q^* AQ and Q^* BQ are diagonal.
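For (i), the same numpy call as in the real case applies: np.linalg.eigh accepts Hermitian matrices and returns a unitary matrix of eigenvectors. The matrix A here is again an arbitrary example:

    import numpy as np

    A = np.array([[1.0, 1j], [-1j, 1.0]])          # Hermitian
    evals, U = np.linalg.eigh(A)
    assert np.allclose(U.conj().T @ U, np.eye(2))  # U is unitary
    print(np.round(U.conj().T @ A @ U, 10))        # diagonal, real entries (0 and 2)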
That’s all for self-adjoint matrices. How about unitary matrices?
Theorem. Let V be a finite-dimensional complex inner product space and α ∈ U(V) be unitary. Then V has an orthonormal basis of α-eigenvectors.
Proof. By the fundamental theorem of algebra, there exists v ∈ V \ {0} and λ ∈ C such that αv = λv. Now consider W = ⟨v⟩⊥. Then

    V = W ⊕ ⟨v⟩.

We want to show that α restricts to a (unitary) endomorphism of W. Let w ∈ W. We need to show α(w) is orthogonal to v. We have

    (αw, v) = (w, α^{-1}v) = (w, λ^{-1}v) = 0.
So α(w) ∈ W, and α|_W ∈ End(W). Also, α|_W is unitary since α is. So by induction on dim V, W has an orthonormal basis of α-eigenvectors. If we add v/∥v∥ to this basis, we get an orthonormal basis of V itself comprised of α-eigenvectors.
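A quick numerical illustration: the rotation matrix below, viewed as an element of U(2) (it is real orthogonal, hence unitary over C), has eigenvalues on the unit circle and orthogonal eigenvectors. The angle 0.7 is an arbitrary choice.

    import numpy as np

    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # orthogonal, hence unitary

    evals, evecs = np.linalg.eig(R)
    print(evals)                              # exp(±i*theta)
    print(np.abs(evals))                      # both 1: eigenvalues on the unit circle
    print(np.vdot(evecs[:, 0], evecs[:, 1]))  # ~ 0: orthogonal eigenvectors

Note the eigenvalues are genuinely complex, which is why the same matrix resists diagonalization over R (see the remark at the end of this section).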
This theorem and the analogous one for self-adjoint endomorphisms have a common generalization, at least for complex inner product spaces. The key fact that leads to the existence of an orthonormal basis of eigenvectors is that α and α^* commute. This is clearly a necessary condition: if α is diagonal with respect to some orthonormal basis, then α^* is diagonal with respect to the same basis (since its matrix is just the conjugate transpose), and hence they commute. It turns out this is also a sufficient condition, as you will show in example sheet 4.
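To see the commuting condition at work, here is a matrix that is neither self-adjoint nor unitary but is normal, and which nevertheless has an orthogonal eigenbasis; the matrix is an arbitrary example:

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [-1.0, 1.0]])
    # Normal: A commutes with its conjugate transpose (both products are 2I).
    assert np.allclose(A @ A.conj().T, A.conj().T @ A)

    evals, evecs = np.linalg.eig(A)
    print(evals)                              # 1 ± 1j: neither real nor of modulus 1
    print(np.vdot(evecs[:, 0], evecs[:, 1]))  # ~ 0: eigenvectors are orthogonal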
However, we cannot generalize this in the real orthogonal case. For example,

    (cos θ  −sin θ)
    (sin θ   cos θ)  ∈ O(R^2)

cannot be diagonalized (if θ ∉ πZ). However, in example sheet 4, you will find a classification of O(V), and you will see that the above counterexample is the worst that can happen in some sense.