3 Linear maps

IA Vectors and Matrices

3.5 Determinants
Consider a linear map α : R^3 → R^3. The standard basis e_1, e_2, e_3 is mapped to e_1', e_2', e_3' with e_i' = Ae_i. Thus the unit cube formed by e_1, e_2, e_3 is mapped to the parallelepiped with volume

[e_1', e_2', e_3'] = ε_{ijk} (e_1')_i (e_2')_j (e_3')_k
                   = ε_{ijk} A_{iℓ} (e_1)_ℓ A_{jm} (e_2)_m A_{kn} (e_3)_n    (with (e_1)_ℓ = δ_{1ℓ}, (e_2)_m = δ_{2m}, (e_3)_n = δ_{3n})
                   = ε_{ijk} A_{i1} A_{j2} A_{k3}

We call this the determinant and write as

det(A) = | A_{11}  A_{12}  A_{13} |
         | A_{21}  A_{22}  A_{23} |
         | A_{31}  A_{32}  A_{33} |
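As a quick numerical sanity check (our own illustration, not part of the notes; NumPy and the particular random matrix are arbitrary choices), we can confirm that the signed volume of the image parallelepiped, computed as a scalar triple product, agrees with det(A):

```python
import numpy as np

# The unit cube spanned by e1, e2, e3 maps to the parallelepiped spanned by
# e_i' = A e_i, whose signed volume [e1', e2', e3'] should equal det(A).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

e1p, e2p, e3p = (A @ e for e in np.eye(3))   # e_i' = A e_i (the columns of A)
volume = np.dot(e1p, np.cross(e2p, e3p))     # [e1', e2', e3']

assert np.isclose(volume, np.linalg.det(A))
```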
3.5.1 Permutations
To define the determinant for square matrices of arbitrary size, we first have to
consider permutations.
Definition (Permutation). A permutation of a set S is a bijection σ : S → S.
Notation. Consider the set S_n of all permutations of 1, 2, 3, ··· , n. S_n contains n! elements. Consider ρ ∈ S_n with i ↦ ρ(i). We write

ρ = ( 1     2    ···   n
     ρ(1)  ρ(2)  ··· ρ(n) ).
Definition (Fixed point). A fixed point of ρ is a k such that ρ(k) = k. e.g. in

( 1 2 3 4
  4 1 3 2 ),

3 is the fixed point. By convention, we can omit the fixed point and write

( 1 2 4
  4 1 2 ).
Definition (Disjoint permutation). Two permutations are disjoint if numbers moved by one are fixed by the other, and vice versa. e.g.

( 1 2 4 5 6 )   ( 2 6 ) ( 1 4 5 )
( 5 6 1 4 2 ) = ( 6 2 ) ( 5 1 4 ),

and the two cycles on the right hand side are disjoint. Disjoint permutations commute, but in general non-disjoint permutations do not.
Definition (Transposition and k-cycle).

( 2 6 )
( 6 2 )

is a 2-cycle or a transposition, and we can simply write (2 6).

( 1 4 5 )
( 5 1 4 )

is a 3-cycle, and we can simply write (1 5 4). (1 is mapped to 5; 5 is mapped to 4; 4 is mapped to 1.)
Proposition. Any q-cycle can be written as a product of 2-cycles.
Proof. (1 2 3 ··· n) = (1 2)(2 3)(3 4) ··· (n − 1  n).
Definition (Sign of permutation). The sign of a permutation ρ is ε(ρ) = (−1)^r, where r is the number of 2-cycles when ρ is written as a product of 2-cycles. If ε(ρ) = +1, it is an even permutation. Otherwise, it is an odd permutation. Note that ε(ρσ) = ε(ρ)ε(σ) and ε(ρ^{−1}) = ε(ρ).
The proof that this is well-defined can be found in IA Groups.
Definition (Levi-Civita symbol). The Levi-Civita symbol is defined by

ε_{j_1 j_2 ··· j_n} = { +1  if j_1 j_2 j_3 ··· j_n is an even permutation of 1, 2, ···, n
                      { −1  if it is an odd permutation
                      {  0  if any 2 of them are equal

Clearly, ε_{ρ(1)ρ(2)···ρ(n)} = ε(ρ).
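The sign is easy to compute by counting transpositions. The sketch below (our own illustration, not from the notes) sorts a permutation with swaps and checks that ε is multiplicative on S_4:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation of (0, ..., n-1): count the transpositions
    needed to sort it back to the identity; the sign is (-1)^(#swaps)."""
    p = list(perm)
    swaps = 0
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            swaps += 1
    return (-1) ** swaps

# eps(rho . sigma) = eps(rho) eps(sigma) for every pair in S_4
for rho in permutations(range(4)):
    for sig in permutations(range(4)):
        comp = tuple(rho[sig[i]] for i in range(4))   # composition rho . sigma
        assert sign(comp) == sign(rho) * sign(sig)
```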
Definition (Determinant). The determinant of an n × n matrix A is defined as

det(A) = Σ_{σ∈S_n} ε(σ) A_{σ(1)1} A_{σ(2)2} ··· A_{σ(n)n},

or equivalently,

det(A) = ε_{j_1 j_2 ··· j_n} A_{j_1 1} A_{j_2 2} ··· A_{j_n n}.
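The definition can be implemented directly as a sum over all n! permutations; this is hopeless for large n but makes a good sanity check against a library routine (our own sketch, not part of the notes):

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation of (0, ..., n-1) by counting sorting swaps."""
    p = list(perm)
    swaps = 0
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            swaps += 1
    return (-1) ** swaps

def det_by_definition(A):
    """det(A) = sum over sigma of eps(sigma) A[sigma(1),1] ... A[sigma(n),n]."""
    n = len(A)
    return sum(sign(s) * np.prod([A[s[j], j] for j in range(n)])
               for s in permutations(range(n)))

A = np.array([[2., 4., 2.], [3., 2., 1.], [2., 0., 1.]])  # example used later
assert np.isclose(det_by_definition(A), np.linalg.det(A))
```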
Proposition.

| a  b |
| c  d | = ad − bc.
3.5.2 Properties of determinants
Proposition. det(A) = det(A^T).
Proof. Take a single term A_{σ(1)1} A_{σ(2)2} ··· A_{σ(n)n} and let ρ be another permutation in S_n. We have

A_{σ(1)1} A_{σ(2)2} ··· A_{σ(n)n} = A_{σ(ρ(1))ρ(1)} A_{σ(ρ(2))ρ(2)} ··· A_{σ(ρ(n))ρ(n)},

since the right hand side is just a re-ordering of the multiplication. Choose ρ = σ^{−1} and note that ε(σ) = ε(ρ). Then

det(A) = Σ_{ρ∈S_n} ε(ρ) A_{1ρ(1)} A_{2ρ(2)} ··· A_{nρ(n)} = det(A^T).
Proposition. If matrix B is formed by multiplying every element in a single row of A by a scalar λ, then det(B) = λ det(A). Consequently, det(λA) = λ^n det(A).
Proof. Each term in the sum contains exactly one factor from the scaled row, so each term, and hence the whole sum, is multiplied by λ. Since λA is obtained by scaling all n rows by λ, it follows that det(λA) = λ^n det(A).
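A numerical check of both statements (our own illustration; the matrix, row index, and λ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
lam = 2.5

# Scaling a single row multiplies the determinant by lambda...
B = A.copy()
B[2] *= lam
assert np.isclose(np.linalg.det(B), lam * np.linalg.det(A))

# ...so scaling all n rows gives det(lambda A) = lambda^n det(A).
assert np.isclose(np.linalg.det(lam * A), lam ** 4 * np.linalg.det(A))
```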
Proposition.
If 2 rows (or 2 columns) of
A
are identical, the determinant is 0.
Proof. wlog, suppose columns 1 and 2 are the same. Then

det(A) = Σ_{σ∈S_n} ε(σ) A_{σ(1)1} A_{σ(2)2} ··· A_{σ(n)n}.

Now write an arbitrary σ in the form σ = ρ(1 2). Then ε(σ) = ε(ρ)ε((1 2)) = −ε(ρ). So

det(A) = −Σ_{ρ∈S_n} ε(ρ) A_{ρ(2)1} A_{ρ(1)2} A_{ρ(3)3} ··· A_{ρ(n)n}.

But columns 1 and 2 are identical, so A_{ρ(2)1} = A_{ρ(2)2} and A_{ρ(1)2} = A_{ρ(1)1}. So det(A) = −det(A) and det(A) = 0.
Proposition.
If 2 rows or 2 columns of a matrix are linearly dependent, then
the determinant is zero.
Proof. Suppose in A, (column r) + λ(column s) = 0. Define

B_{ij} = { A_{ij}             j ≠ r
         { A_{ij} + λA_{is}   j = r.

Then det(B) = det(A) + λ det(matrix with column r = column s) = det(A). But the rth column of B is all zeroes. So each term in the sum contains one zero factor, and det(A) = det(B) = 0.
Even if we don’t have linearly dependent rows or columns, we can still run the exact same proof as above, and still get that det(B) = det(A). Linear dependence is only required to show that det(B) = 0. So in general, we can add a linear multiple of a column (or row) onto another column (or row) without changing the determinant.
Proposition. Given a matrix A, if B is a matrix obtained by adding a multiple of a column (or row) of A to another column (or row) of A, then det A = det B.
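This invariance is easy to confirm numerically (our own sketch; the matrix, columns, and multiplier are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

# Adding 7 * (column 3) to column 0 leaves the determinant unchanged.
B = A.copy()
B[:, 0] += 7.0 * A[:, 3]
assert np.isclose(np.linalg.det(B), np.linalg.det(A))
```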
Corollary.
Swapping two rows or columns of a matrix negates the determinant.
Proof. We do the column case only. Let A = (a_1 ··· a_i ··· a_j ··· a_n). Then

det(a_1 ··· a_i ··· a_j ··· a_n) = det(a_1 ··· a_i + a_j ··· a_j ··· a_n)
                                 = det(a_1 ··· a_i + a_j ··· a_j − (a_i + a_j) ··· a_n)
                                 = det(a_1 ··· a_i + a_j ··· −a_i ··· a_n)
                                 = det(a_1 ··· a_j ··· −a_i ··· a_n)
                                 = −det(a_1 ··· a_j ··· a_i ··· a_n)

Alternatively, we can prove this from the definition directly, using the fact that the sign of a transposition is −1 (and that the sign is multiplicative).
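A one-line numerical check of the corollary (our own illustration; the matrix and column pair are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

# Swapping columns 1 and 3 (one transposition) negates the determinant.
B = A[:, [0, 3, 2, 1]]
assert np.isclose(np.linalg.det(B), -np.linalg.det(A))
```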
Proposition. det(AB) = det(A) det(B).
Proof. First note that

Σ_σ ε(σ) A_{σ(1)ρ(1)} ··· A_{σ(n)ρ(n)} = ε(ρ) det(A),

i.e. swapping columns (or rows) an even/odd number of times gives a factor ±1 respectively. We can prove this by writing σ = µρ.
Now

det AB = Σ_σ ε(σ) (AB)_{σ(1)1} (AB)_{σ(2)2} ··· (AB)_{σ(n)n}
       = Σ_σ ε(σ) Σ_{k_1, k_2, ···, k_n} A_{σ(1)k_1} B_{k_1 1} ··· A_{σ(n)k_n} B_{k_n n}
       = Σ_{k_1, ···, k_n} B_{k_1 1} ··· B_{k_n n} [ Σ_σ ε(σ) A_{σ(1)k_1} A_{σ(2)k_2} ··· A_{σ(n)k_n} ]

where we call the bracketed inner sum S.
Now consider the many different S’s. If in S, two of k_1, ···, k_n are equal, then S is the determinant of a matrix with two columns the same, i.e. S = 0. So we only have to consider the sum over distinct k_i’s. Thus the k_i’s are a permutation of 1, ···, n, say k_i = ρ(i). Then we can write
det AB = Σ_ρ B_{ρ(1)1} ··· B_{ρ(n)n} Σ_σ ε(σ) A_{σ(1)ρ(1)} ··· A_{σ(n)ρ(n)}
       = Σ_ρ B_{ρ(1)1} ··· B_{ρ(n)n} (ε(ρ) det A)
       = det A Σ_ρ ε(ρ) B_{ρ(1)1} ··· B_{ρ(n)n}
       = det A det B
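The product rule is also easy to spot-check numerically (our own sketch; the matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
```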
Corollary. If A is orthogonal, det A = ±1.
Proof.

AA^T = I
det AA^T = det I
det A det A^T = 1
(det A)^2 = 1
det A = ±1
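Both cases occur, as a rotation and a reflection illustrate (our own examples; the angle and axis are arbitrary):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])   # rotation about the z-axis
F = np.diag([1.0, 1.0, -1.0])                        # reflection in the xy-plane

for Q, expected in [(R, 1.0), (F, -1.0)]:
    assert np.allclose(Q @ Q.T, np.eye(3))           # Q is orthogonal
    assert np.isclose(np.linalg.det(Q), expected)    # det = +1 or -1
```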
Corollary. If U is unitary, |det U| = 1.
Proof. We have det(U^†) = det(Ū^T) = det(U)^∗. Since UU^† = I, we have det(U) det(U)^∗ = 1, i.e. |det U|^2 = 1.
Proposition. In R^3, orthogonal matrices represent either a rotation (det = 1) or a reflection (det = −1).
3.5.3 Minors and Cofactors
Definition (Minor and cofactor). For an n × n matrix A, define A^{ij} to be the (n − 1) × (n − 1) matrix in which row i and column j of A have been removed. The minor of the ijth element of A is

M_{ij} = det A^{ij}.

The cofactor of the ijth element of A is

Δ_{ij} = (−1)^{i+j} M_{ij}.
Notation. We use a bar to denote a symbol which has been missed out of a natural sequence.

Example. 1, 2, 3, 5 = 1, 2, 3, \bar{4}, 5.
The significance of these definitions is that we can use them to provide a
systematic way of evaluating determinants. We will also use them to find inverses
of matrices.
Theorem (Laplace expansion formula). For any particular fixed i,

det A = Σ_{j=1}^{n} A_{ji} Δ_{ji}.
Proof.

det A = Σ_{j_i=1}^{n} A_{j_i i} Σ_{j_1, ···, \bar{j_i}, ···, j_n} ε_{j_1 j_2 ··· j_n} A_{j_1 1} A_{j_2 2} ··· \bar{A_{j_i i}} ··· A_{j_n n},

where the barred factor has been pulled out to the front and is omitted from the product. Let σ ∈ S_n be the permutation which moves j_i to the ith position, and leaves everything else in its natural order, i.e.

σ = ( 1 ···  i    i + 1  i + 2  ···  j_i − 1  j_i      j_i + 1  ···  n
      1 ···  j_i  i      i + 1  ···  j_i − 2  j_i − 1  j_i + 1  ···  n )

if j_i > i, and similarly for other cases. To perform this permutation, |i − j_i| transpositions are made. So ε(σ) = (−1)^{i − j_i}.

Now consider the permutation ρ ∈ S_n

ρ = ( 1    ···  \bar{j_i}  ···  n
      j_1  ···  \bar{j_i}  ···  j_n ).

The composition ρσ reorders (1, ···, n) to (j_1, j_2, ···, j_n). So ε(ρσ) = ε_{j_1 ··· j_n} = ε(ρ)ε(σ) = (−1)^{i − j_i} ε_{j_1 ··· \bar{j_i} ··· j_n}. Hence the original equation becomes

det A = Σ_{j_i=1}^{n} A_{j_i i} Σ_{j_1 ··· \bar{j_i} ··· j_n} (−1)^{i − j_i} ε_{j_1 ··· \bar{j_i} ··· j_n} A_{j_1 1} ··· \bar{A_{j_i i}} ··· A_{j_n n}
      = Σ_{j_i=1}^{n} A_{j_i i} (−1)^{i − j_i} M_{j_i i}
      = Σ_{j_i=1}^{n} A_{j_i i} Δ_{j_i i}      (since (−1)^{i − j_i} = (−1)^{i + j_i})
      = Σ_{j=1}^{n} A_{ji} Δ_{ji}
Example. Let

det A = | 2  4  2 |
        | 3  2  1 |
        | 2  0  1 |.

We can pick the first row and have

det A = 2 | 2  1 | − 4 | 3  1 | + 2 | 3  2 |
          | 0  1 |     | 2  1 |     | 2  0 |
      = 2(2 − 0) − 4(3 − 2) + 2(0 − 4)
      = −8.
Alternatively, we can pick the second column and have

det A = −4 | 3  1 | + 2 | 2  2 | − 0 | 2  2 |
           | 2  1 |     | 2  1 |     | 3  1 |
      = −4(3 − 2) + 2(2 − 4) − 0
      = −8.
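The expansion lends itself to a short recursive implementation. The sketch below (our own illustration, not from the notes) expands along a row; since det A = det A^T, this gives the same value as the column expansion in the theorem:

```python
import numpy as np

def laplace_det(A, i=0):
    """Determinant by Laplace expansion along row i (0-indexed):
    det A = sum_j A[i, j] * (-1)^(i+j) * M_ij, with M_ij the minor."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)  # drop row i, col j
        total += A[i, j] * (-1) ** (i + j) * laplace_det(minor)
    return total

A = np.array([[2., 4., 2.], [3., 2., 1.], [2., 0., 1.]])
assert np.isclose(laplace_det(A), -8.0)          # matches the example above
```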
In practical terms, we use a combination of properties of determinants with
a sensible choice of i to evaluate det(A).
Example. Consider

| 1  a  a^2 |
| 1  b  b^2 |
| 1  c  c^2 |.

Row 1 − row 2 gives

| 0  a − b  a^2 − b^2 |             | 0  1  a + b |
| 1  b      b^2       | = (a − b)   | 1  b  b^2   |.
| 1  c      c^2       |             | 1  c  c^2   |

Do row 2 − row 3. We obtain

(a − b)(b − c) | 0  1  a + b |
               | 0  1  b + c |.
               | 1  c  c^2   |

Row 1 − row 2 gives

(a − b)(b − c)(a − c) | 0  0  1     |
                      | 0  1  b + c | = (a − b)(b − c)(c − a),
                      | 1  c  c^2   |

since the remaining determinant evaluates to −1 (expand along the first row).
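A numerical check of this Vandermonde-type identity (our own sketch; the values of a, b, c are arbitrary):

```python
import numpy as np

a, b, c = 2.0, 5.0, -1.0
V = np.array([[1, a, a**2],
              [1, b, b**2],
              [1, c, c**2]])

# det V = (a - b)(b - c)(c - a)
assert np.isclose(np.linalg.det(V), (a - b) * (b - c) * (c - a))
```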