8.3 A = LU for special A
There is one thing we haven't figured out yet. Can we just look at the matrix $A$ and determine whether it has an LU factorization (one that does not involve permutations)? To answer this, we start with some definitions.
Definition (Leading principal submatrix). The leading principal submatrices $A_k \in \mathbb{R}^{k \times k}$ for $k = 1, \ldots, n$ of $A \in \mathbb{R}^{n \times n}$ are given by
\[
  (A_k)_{ij} = A_{ij}, \quad i, j = 1, \ldots, k.
\]
In other words, we take the first $k$ rows and columns of $A$.
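For example (with an arbitrary illustrative matrix),
\[
  A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}
  \quad\Longrightarrow\quad
  A_2 = \begin{pmatrix} 1 & 2 \\ 4 & 5 \end{pmatrix}.
\]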
Theorem. A sufficient condition for both the existence and uniqueness of the factorization $A = LU$ is that $\det(A_k) \neq 0$ for $k = 1, \ldots, n - 1$.
Note that we don't need $A$ itself to be non-singular. This condition is equivalent to the restriction $A_{kk}^{(k-1)} \neq 0$ for $k = 1, \ldots, n - 1$. Also, this is a sufficient condition, not a necessary one.
Proof. Straightforward induction.
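To see the criterion in action, here is a minimal NumPy sketch (the function name lu_no_pivot and the tolerance are my own choices, not from the notes). It runs Gaussian elimination without row exchanges; whenever $\det(A_k) \neq 0$ for $k = 1, \ldots, n - 1$, no zero pivot is ever encountered.

import numpy as np

def lu_no_pivot(A, tol=1e-12):
    """LU factorization without row exchanges.

    Returns None if a pivot vanishes, which the theorem rules out
    whenever det(A_k) != 0 for k = 1, ..., n-1.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L, U = np.eye(n), A.copy()
    for k in range(n - 1):
        if abs(U[k, k]) < tol:            # the pivot A^(k-1)_kk
            return None
        mult = U[k+1:, k] / U[k, k]       # multipliers below the pivot
        L[k+1:, k] = mult
        U[k+1:, :] -= np.outer(mult, U[k, :])
    return L, U

A = np.array([[2.0, 1.0], [4.0, 3.0]])    # det(A_1) = 2, so LU exists
L, U = lu_no_pivot(A)
print(np.allclose(L @ U, A))              # True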
We extend this result a bit:
Theorem. If $\det(A_k) \neq 0$ for all $k = 1, \ldots, n$, then $A \in \mathbb{R}^{n \times n}$ has a unique factorization of the form
\[
  A = LD\hat{U},
\]
where $D$ is a non-singular diagonal matrix, and both $L$ and $\hat{U}$ are unit triangular ($L$ lower, $\hat{U}$ upper).
Proof. From the previous theorem, $A = LU$ exists. Since $A$ is non-singular, $U$ is non-singular. So we can write this as
\[
  U = D\hat{U},
\]
where $D$ consists of the diagonal entries of $U$ and $\hat{U} = D^{-1}U$ is unit upper triangular.
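For example, with the $2 \times 2$ matrix from the sketch above,
\[
  \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}
  = \underbrace{\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix}}_{L}
    \underbrace{\begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}}_{D}
    \underbrace{\begin{pmatrix} 1 & \frac{1}{2} \\ 0 & 1 \end{pmatrix}}_{\hat{U}}.
\]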
This is not hard, but is rather convenient. In particular, it allows us to
factorize symmetric matrices in a nice way.
Theorem. Let $A \in \mathbb{R}^{n \times n}$ be symmetric with $\det(A_k) \neq 0$ for all $k = 1, \ldots, n$ (so in particular $A$ is non-singular). Then there is a unique “symmetric” factorization
\[
  A = LDL^T,
\]
with $L$ unit lower triangular and $D$ diagonal and non-singular.
Proof. From the previous theorem, we can factorize $A$ uniquely as
\[
  A = LD\hat{U}.
\]
We take the transpose to obtain
\[
  A = A^T = \hat{U}^T D L^T.
\]
This is a factorization of the form “unit lower”-“diagonal”-“unit upper”. By uniqueness, we must have $\hat{U} = L^T$. So done.
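In practice one computes this factorization directly by symmetric Gaussian elimination. Here is a minimal NumPy sketch (the function name ldl and the tolerance are my own choices, not from the notes); by exploiting symmetry it does roughly half the work of a general LU factorization.

import numpy as np

def ldl(A, tol=1e-12):
    """Factor symmetric A as L @ np.diag(d) @ L.T with L unit lower
    triangular; returns None if a pivot vanishes."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L, d = np.eye(n), np.zeros(n)
    for k in range(n):
        d[k] = A[k, k] - L[k, :k] ** 2 @ d[:k]   # pivot left in column k
        if abs(d[k]) < tol:
            return None
        for i in range(k + 1, n):
            L[i, k] = (A[i, k] - (L[i, :k] * L[k, :k]) @ d[:k]) / d[k]
    return L, d

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L, d = ldl(A)
print(np.allclose(L @ np.diag(d) @ L.T, A))      # True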
We have just looked at one special type of matrix, the symmetric matrices. We now look at another type:
Definition (Positive definite matrix). A matrix $A \in \mathbb{R}^{n \times n}$ is positive-definite if
\[
  x^T A x > 0 \quad \text{for all } x \in \mathbb{R}^n,\ x \neq 0.
\]
Theorem. Let $A \in \mathbb{R}^{n \times n}$ be a positive-definite matrix. Then $\det(A_k) \neq 0$ for all $k = 1, \ldots, n$.
Proof. First consider $k = n$. To show $A$ is non-singular, it suffices to show that $Ax = 0$ implies $x = 0$. But we can multiply the equation by $x^T$ to obtain $x^T A x = 0$. By positive-definiteness, we must have $x = 0$. So done.

Now suppose $A_k y = 0$ for $k < n$ and $y \in \mathbb{R}^k$. Then $y^T A_k y = 0$. We invent a new $x \in \mathbb{R}^n$ by taking $y$ and padding it with zeros. Since only the first $k$ entries of $x$ are non-zero, we get $x^T A x = y^T A_k y = 0$. By positive-definiteness, we know $x = 0$. Then in particular $y = 0$.
We are going to use this to prove the most important result in this section, which concerns matrices that are both symmetric and positive-definite.
Theorem. A symmetric matrix $A \in \mathbb{R}^{n \times n}$ is positive-definite iff we can factor it as
\[
  A = LDL^T,
\]
where $L$ is unit lower triangular, $D$ is diagonal and $D_{kk} > 0$.
Proof. First suppose such a factorization exists. Then
\[
  x^T A x = x^T LDL^T x = (L^T x)^T D (L^T x).
\]
We let $y = L^T x$. Note that $y = 0$ if and only if $x = 0$, since $L$ is invertible. So
\[
  x^T A x = y^T D y = \sum_{k=1}^n y_k^2 D_{kk} > 0
\]
if $y \neq 0$.
Now if $A$ is positive-definite, it has an LU factorization, and since $A$ is symmetric, we can write it as
\[
  A = LDL^T,
\]
where $L$ is unit lower triangular and $D$ is diagonal. Now we have to show $D_{kk} > 0$. We define $y_k$ such that $L^T y_k = e_k$, which exists since $L$ is invertible. Then clearly $y_k \neq 0$. Then we have
\[
  D_{kk} = e_k^T D e_k = y_k^T LDL^T y_k = y_k^T A y_k > 0.
\]
So done.
This is a practical check for a symmetric $A$ being positive-definite: we can perform this LU factorization, and then check whether the diagonal has positive entries.
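Using the ldl sketch from above (again my own helper, not code from the notes), the check is immediate:

def is_positive_definite(A):
    """Symmetric A is positive-definite iff every D_kk > 0 in A = L D L^T."""
    out = ldl(A)
    return out is not None and bool(np.all(out[1] > 0))

print(is_positive_definite(np.array([[4.0, 2.0], [2.0, 3.0]])))  # True
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False

Note that if ldl fails with a zero pivot, returning False is still correct: by the previous theorem, a positive-definite matrix has all leading principal minors non-zero.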
Definition (Cholesky factorization). The Cholesky factorization of a symmetric positive-definite matrix $A$ is a factorization of the form
\[
  A = LDL^T,
\]
with $L$ unit lower triangular and $D$ a positive-definite diagonal matrix.
There is another way of doing this. We let $D^{1/2}$ be the “square root” of $D$, obtained by taking the positive square root of the diagonal entries of $D$. Then we have
\[
  A = LDL^T = LD^{1/2}D^{1/2}L^T = (LD^{1/2})(LD^{1/2})^T = GG^T,
\]
where $G$ is lower triangular with $G_{kk} > 0$. This is another way of presenting this result.
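Continuing the earlier sketch (assuming the ldl helper defined above), the Cholesky factor is just $G = LD^{1/2}$, and we can compare it against NumPy's built-in routine:

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L, d = ldl(A)                                  # succeeds: A is positive-definite
G = L * np.sqrt(d)                             # G = L D^{1/2}: column k scaled by sqrt(d_k)
print(np.allclose(G @ G.T, A))                 # True
print(np.allclose(G, np.linalg.cholesky(A)))   # True: the factor with G_kk > 0 is unique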
Finally, we look at the LU factorization of band matrices.
Definition (Band matrix). A band matrix of band width $r$ is a matrix $A$ such that $A_{ij} \neq 0$ implies $|i - j| \leq r$.
For example, a band matrix of band width 0 is a diagonal matrix; a band
matrix of band width 1 is a tridiagonal matrix.
The result is:

Proposition. If a band matrix $A$ has band width $r$ and an LU factorization $A = LU$, then $L$ and $U$ are both band matrices of width $r$.
Proof. Straightforward verification.
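A quick numerical illustration (my own example, reusing the lu_no_pivot sketch from earlier): factorizing a tridiagonal matrix, i.e. band width $r = 1$, both factors come out tridiagonal as well.

A = np.diag([4.0] * 5) + np.diag([1.0] * 4, 1) + np.diag([1.0] * 4, -1)
L, U = lu_no_pivot(A)                     # tridiagonal and diagonally dominant
i, j = np.indices(A.shape)
outside = np.abs(i - j) > 1               # positions outside the band
print(np.all(L[outside] == 0), np.all(U[outside] == 0))   # True True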