2 Linear maps

IB Linear Algebra

2.1 Definitions and examples

Definition (Linear map). Let U, V be vector spaces over F. Then α : U → V is a linear map if

(i) α(u_1 + u_2) = α(u_1) + α(u_2) for all u_1, u_2 ∈ U;

(ii) α(λu) = λα(u) for all λ ∈ F, u ∈ U.

We write L(U, V) for the set of linear maps U → V.

There are a few things we should take note of:

– If we are lazy, we can combine the two requirements into the single requirement that

    α(λu_1 + µu_2) = λα(u_1) + µα(u_2).

– It is easy to see that if α is linear, then it is a group homomorphism (if we view vector spaces as groups). In particular, α(0) = 0.

– If we want to stress the field F, we say that α is F-linear. For example, complex conjugation is a map C → C that is R-linear but not C-linear.

Example.

(i) Let A be an n × m matrix with coefficients in F. We will write A ∈ M_{n,m}(F). Then α : F^m → F^n defined by v ↦ Av is linear.

Recall matrix multiplication is defined by: if A_{ij} is the ijth coefficient of A, then the ith coefficient of Av is ∑_j A_{ij}v_j. So we have

    α(λu + µv)_i = ∑_{j=1}^m A_{ij}(λu + µv)_j = λ ∑_{j=1}^m A_{ij}u_j + µ ∑_{j=1}^m A_{ij}v_j = λα(u)_i + µα(v)_i.

So α is linear.

(ii) Let X be a set and g ∈ F^X. Then we define m_g : F^X → F^X by m_g(f)(x) = g(x)f(x). Then m_g is linear. For example, f(x) ↦ 2x^2 f(x) is linear.

(iii) Integration I : (C([a, b]), R) → (C([a, b]), R) defined by f ↦ ∫_a^x f(t) dt is linear.

(iv) Differentiation D : (C^∞([a, b]), R) → (C^∞([a, b]), R) by f ↦ f′ is linear.

(v) If α, β ∈ L(U, V), then α + β defined by (α + β)(u) = α(u) + β(u) is linear. Also, if λ ∈ F, then λα defined by (λα)(u) = λ(α(u)) is also linear. In this way, L(U, V) is also a vector space over F.

(vi) Composition of linear maps is linear. Using this, we can show that many things are linear, like differentiating twice, or adding and then multiplying linear maps.
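The matrix example (i) can be checked numerically. The following is a minimal sketch using numpy; the matrix A and the vectors u, v, and scalars λ, µ are arbitrary illustrative choices, not anything fixed by the notes.

```python
import numpy as np

# An arbitrary 2x3 real matrix, giving a map alpha : R^3 -> R^2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

def alpha(v):
    """The linear map v |-> Av from example (i)."""
    return A @ v

u = np.array([1.0, -1.0, 2.0])
v = np.array([0.5, 3.0, -2.0])
lam, mu = 2.0, -3.0

# The combined linearity requirement: alpha(lam*u + mu*v) = lam*alpha(u) + mu*alpha(v).
lhs = alpha(lam * u + mu * v)
rhs = lam * alpha(u) + mu * alpha(v)
assert np.allclose(lhs, rhs)

# A linear map is a group homomorphism, so alpha(0) = 0.
assert np.allclose(alpha(np.zeros(3)), np.zeros(2))
```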

Just like everything else, we want to define isomorphisms.

Definition (Isomorphism). We say a linear map α : U → V is an isomorphism if there is some β : V → U (also linear) such that α ∘ β = id_V and β ∘ α = id_U.

If there exists an isomorphism U → V, we say U and V are isomorphic, and write U ≅ V.

Lemma. If U and V are vector spaces over F and α : U → V is a linear map, then α is an isomorphism iff α is bijective.

Proof. If α is an isomorphism, then it is clearly bijective since it has an inverse function.

Suppose α is a linear bijection. Then as a function, it has an inverse β : V → U. We want to show that this is linear. Let v_1, v_2 ∈ V and λ, µ ∈ F. We have

    αβ(λv_1 + µv_2) = λv_1 + µv_2 = λαβ(v_1) + µαβ(v_2) = α(λβ(v_1) + µβ(v_2)).

Since α is injective, we have

    β(λv_1 + µv_2) = λβ(v_1) + µβ(v_2).

So β is linear.
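For matrix maps this lemma can be seen concretely: if α is v ↦ Av with A invertible, the inverse function is v ↦ A⁻¹v, which is again linear. A small numpy sketch (the matrix and test vectors are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # invertible: det = 1
A_inv = np.linalg.inv(A)

def alpha(v):
    return A @ v

def beta(v):
    return A_inv @ v         # the inverse function of alpha

v1 = np.array([1.0, 2.0])
v2 = np.array([-3.0, 0.5])
lam, mu = 4.0, -1.5

# beta undoes alpha ...
assert np.allclose(beta(alpha(v1)), v1)
# ... and is itself linear, as the lemma proves in general.
assert np.allclose(beta(lam * v1 + mu * v2), lam * beta(v1) + mu * beta(v2))
```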

Definition (Image and kernel). Let α : U → V be a linear map. Then the image of α is

    im α = {α(u) : u ∈ U}.

The kernel of α is

    ker α = {u : α(u) = 0}.

It is easy to show that these are subspaces of V and U respectively.

Example.

(i) Let A ∈ M_{m,n}(F) and α : F^n → F^m be the linear map v ↦ Av. Then the system of linear equations

    ∑_{j=1}^n A_{ij}x_j = b_i,    1 ≤ i ≤ m,

has a solution iff (b_1, ···, b_m) ∈ im α.

The kernel of α contains all solutions to ∑_j A_{ij}x_j = 0.

(ii) Let β : C^∞(R, R) → C^∞(R, R) be the map that sends f to

    β(f)(t) = f″(t) + p(t)f′(t) + q(t)f(t)

for some fixed p, q ∈ C^∞(R, R).

Then if y(t) ∈ im β, there is a solution (in C^∞(R, R)) to the differential equation

    f″(t) + p(t)f′(t) + q(t)f(t) = y(t).

Similarly, ker β contains the solutions to the homogeneous differential equation

    f″(t) + p(t)f′(t) + q(t)f(t) = 0.
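The linear-systems example (i) above can be sketched numerically: membership of b in im α is exactly solvability of Ax = b, and ker α collects the solutions of Ax = 0. The matrix and vectors below are arbitrary illustrative choices.

```python
import numpy as np

# alpha : R^3 -> R^2, v |-> Av; here A has rank 2, so im(alpha) is all of R^2.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

b = np.array([3.0, 5.0])
# Least squares finds some x; a zero residual certifies that b lies in im(alpha).
x, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x, b)           # so the system Ax = b has a solution

# A kernel element: (1, 1, -1) solves Ax = 0.
k = np.array([1.0, 1.0, -1.0])
assert np.allclose(A @ k, np.zeros(2))
```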

If two vector spaces are isomorphic, then it is not too surprising that they have the same dimension, since isomorphic spaces are “the same”. Indeed this is what we are going to show.

Proposition. Let α : U → V be an F-linear map. Then

(i) If α is injective and S ⊆ U is linearly independent, then α(S) is linearly independent in V.

(ii) If α is surjective and S ⊆ U spans U, then α(S) spans V .

(iii) If α is an isomorphism and S ⊆ U is a basis, then α(S) is a basis for V .

Here (iii) immediately shows that two isomorphic spaces have the same dimension.

Proof.

(i) We prove the contrapositive. Suppose that α is injective and α(S) is linearly dependent. So there are s_0, ···, s_n ∈ S distinct and λ_1, ···, λ_n ∈ F not all zero such that

    α(s_0) = ∑_{i=1}^n λ_i α(s_i) = α(∑_{i=1}^n λ_i s_i).

Since α is injective, we must have

    s_0 = ∑_{i=1}^n λ_i s_i.

This is a non-trivial relation among the s_i in U. So S is linearly dependent.

(ii) Suppose α is surjective and S spans U. Pick v ∈ V. Then there is some u ∈ U such that α(u) = v. Since S spans U, there are some s_1, ···, s_n ∈ S and λ_1, ···, λ_n ∈ F such that

    u = ∑_{i=1}^n λ_i s_i.

Then

    v = α(u) = ∑_{i=1}^n λ_i α(s_i).

So α(S) spans V.

(iii) Follows immediately from (i) and (ii).
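Part (i) can be illustrated numerically for matrix maps: an injective map (full column rank, hence trivial kernel) sends independent vectors to independent vectors, which we can detect via the rank of the image vectors. The matrices below are arbitrary illustrative choices.

```python
import numpy as np

# An injective map R^2 -> R^3: full column rank 2 means trivial kernel.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 2

# A linearly independent set S in R^2, stored as the columns of a matrix.
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])
assert np.linalg.matrix_rank(S) == 2

# Its image alpha(S) is still linearly independent in R^3.
imS = A @ S
assert np.linalg.matrix_rank(imS) == 2
```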

Corollary. If U and V are finite-dimensional vector spaces over F and α : U → V is an isomorphism, then dim U = dim V.

Note that we restrict it to finite-dimensional spaces since we’ve only shown that dimensions are well-defined for finite-dimensional spaces. Otherwise, the proof works just fine for infinite-dimensional spaces.

Proof. Let S be a basis for U. Then α(S) is a basis for V. Since α is injective, |S| = |α(S)|. So done.

How about the other way round? If two vector spaces have the same dimension, are they necessarily isomorphic? The answer is yes, at least for finite-dimensional ones.

However, we will not just prove that they are isomorphic. We will show that they are isomorphic in many ways.

Proposition. Suppose V is an F-vector space of dimension n < ∞. Then, writing e_1, ···, e_n for the standard basis of F^n, there is a bijection

    Φ : {isomorphisms F^n → V} → {(ordered) bases (v_1, ···, v_n) for V},

defined by

    α ↦ (α(e_1), ···, α(e_n)).

Proof. We first make sure this is indeed a function: if α is an isomorphism, then from our previous proposition, we know that it sends a basis to a basis. So (α(e_1), ···, α(e_n)) is indeed a basis for V.

We now have to prove surjectivity and injectivity.

Suppose α, β : F^n → V are isomorphisms such that Φ(α) = Φ(β). In other words, α(e_i) = β(e_i) for all i. We want to show that α = β. For any x = (x_1, ···, x_n) ∈ F^n, we have

    α(x) = α(∑_{i=1}^n x_i e_i) = ∑ x_i α(e_i) = ∑ x_i β(e_i) = β(x).

Hence α = β.

Next, suppose that (v_1, ···, v_n) is an ordered basis for V. Then define

    α(x_1, ···, x_n) = ∑ x_i v_i.

It is easy to check that this is well-defined and linear. We also know that α is injective since (v_1, ···, v_n) is linearly independent: if ∑ x_i v_i = ∑ y_i v_i, then x_i = y_i for all i. Also, α is surjective since (v_1, ···, v_n) spans V. So α is an isomorphism, and by construction Φ(α) = (v_1, ···, v_n).
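The construction in this proof can be sketched for V = R^n: an ordered basis (v_1, ···, v_n) determines the isomorphism α(x) = ∑ x_i v_i, i.e. the matrix with the v_i as columns, and Φ(α) = (α(e_1), ···, α(e_n)) recovers exactly that basis. The basis vectors below are arbitrary illustrative choices.

```python
import numpy as np

# Take V = R^2 with an ordered basis (v1, v2): any two independent vectors.
v1 = np.array([1.0, 2.0])
v2 = np.array([3.0, 4.0])
B = np.column_stack([v1, v2])          # matrix of alpha: alpha(x) = sum x_i v_i
assert np.linalg.matrix_rank(B) == 2   # independence makes alpha an isomorphism

def alpha(x):
    return B @ x

e1, e2 = np.eye(2)                     # standard basis of R^2
# Phi(alpha) = (alpha(e1), alpha(e2)) recovers the ordered basis we started with.
assert np.allclose(alpha(e1), v1)
assert np.allclose(alpha(e2), v2)
```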