Part II — Linear Analysis
Based on lectures by J. W. Luk
Notes taken by Dexter Chua
Michaelmas 2015
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
Part IB Linear Algebra, Analysis II and Metric and Topological Spaces are essential.
Normed and Banach spaces. Linear mappings, continuity, boundedness, and norms.
Finite-dimensional normed spaces. [4]
The Baire category theorem. The principle of uniform boundedness, the closed graph
theorem and the inversion theorem; other applications. [5]
The normality of compact Hausdorff spaces. Urysohn's lemma and Tietze's extension theorem. Spaces of continuous functions. The Stone–Weierstrass theorem and applications. Equicontinuity: the Ascoli–Arzelà theorem. [5]
Inner product spaces and Hilbert spaces; examples and elementary properties. Orthonormal systems, and the orthogonalization process. Bessel's inequality, the Parseval equation, and the Riesz–Fischer theorem. Duality; the self-duality of Hilbert space. [5]
Bounded linear operators, invariant subspaces, eigenvectors; the spectrum and resolvent set. Compact operators on Hilbert space; discreteness of spectrum. Spectral theorem for compact Hermitian operators. [5]
Contents
0 Introduction
1 Normed vector spaces
1.1 Bounded linear maps
1.2 Dual spaces
1.3 Adjoint
1.4 The double dual
1.5 Isomorphism
1.6 Finite-dimensional normed vector spaces
1.7 Hahn–Banach Theorem
2 Baire category theorem
2.1 The Baire category theorem
2.2 Some applications
3 The topology of C(K)
3.1 Normality of compact Hausdorff spaces
3.2 Tietze-Urysohn extension theorem
3.3 Arzelà–Ascoli theorem
3.4 Stone–Weierstrass theorem
4 Hilbert spaces
4.1 Inner product spaces
4.2 Riesz representation theorem
4.3 Orthonormal systems and basis
4.4 The isomorphism with ℓ^2
4.5 Operators
4.6 Self-adjoint operators
0 Introduction
In IB Linear Algebra, we studied vector spaces in general. Most of the time, we
concentrated on finite-dimensional vector spaces, since these are easy to reason
about. For example, we know that every finite-dimensional vector space (by
definition) has a basis. Using the basis, we can represent vectors and linear maps
concretely as column vectors (in F^n) and matrices.
However, in real life, often we have to work with infinite-dimensional vector
spaces instead. For example, we might want to consider the vector space of
all continuous (real) functions, or the vector space of infinite sequences. It is
difficult to analyse these spaces using the tools from IB Linear Algebra, since
many of those assume the vector space is finite-dimensional. Moreover, in these
cases, we often are not interested in the vector space structure itself. It’s just
that the objects we are interested in happen to have a vector space structure.
Instead, we want to look at notions like continuity and convergence. We want to
do analysis on vector spaces. These are not something the vector space structure
itself provides.
In this course, we are going to give our vector spaces some additional structure.
For the first half of the course, we will grant our vector space a norm. This
allows us to assign a “length” to each vector. With this, we can easily define
convergence and continuity. It turns out this allows us to understand a lot about,
say, function spaces and sequence spaces.
In the second half, we will grant a stronger notion, called the inner product.
Among many things, this allows us to define orthogonality of the elements of a
vector space, which is something we are familiar with from, say, IB Methods.
Most of the time, we will be focusing on infinite-dimensional vector spaces,
since finite-dimensional spaces are boring. In fact, we have a section dedicated
to proving that finite-dimensional vector spaces are boring. In particular, they
are all isomorphic to R^n, and most of our theorems can be proved trivially for
finite-dimensional spaces using what we already know from IB Linear Algebra.
So we will not care much about them.
1 Normed vector spaces
In IB Linear Algebra, we have studied vector spaces in quite a lot of detail.
However, just knowing something is a vector space usually isn’t too helpful.
Often, we would want the vector space to have some additional structure. The
first structure we will study is a norm.
Definition (Normed vector space). A normed vector space is a pair (V, ‖·‖), where V is a vector space over a field F and ‖·‖ : V → R is a function, known as the norm, satisfying
(i) ‖v‖ ≥ 0 for all v ∈ V, with equality iff v = 0.
(ii) ‖λv‖ = |λ|‖v‖ for all λ ∈ F, v ∈ V.
(iii) ‖v + w‖ ≤ ‖v‖ + ‖w‖ for all v, w ∈ V.
Intuitively, we think of ‖v‖ as the "length" or "magnitude" of the vector.
Example. Let V be a finite-dimensional vector space, and {e_1, ···, e_n} a basis. Then, for any v = Σ_{i=1}^n v_i e_i, we can define a norm as
‖v‖ = (Σ_{i=1}^n v_i^2)^{1/2}.
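For instance, in R^2 with the standard basis, v = (3, 4) has ‖v‖ = √(9 + 16) = 5, and the triangle inequality can be checked directly: with w = (1, 0), we get ‖v + w‖ = √32 ≈ 5.66 ≤ ‖v‖ + ‖w‖ = 6.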
If we are given a norm on a vector space V, we immediately obtain two more structures on V for free, namely a metric and a topology.
Recall from IB Metric and Topological Spaces that (V, d) is a metric space if the metric d : V × V → R satisfies
(i) d(x, y) ≥ 0 for all x, y ∈ V, with equality iff x = y.
(ii) d(x, y) = d(y, x) for all x, y ∈ V .
(iii) d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ V .
Also, a topological space is a set V together with a topology (a collection of open subsets) such that
(i) ∅ and V are open subsets.
(ii) The union of open subsets is open.
(iii) The finite intersection of open subsets is open.
As we have seen in IB Metric and Topological Spaces, a norm on a vector space induces a metric by d(v, w) = ‖v − w‖. This metric in turn defines a topology on V where the open sets are given by "U ⊆ V is open iff for any x ∈ U, there exists some ε > 0 such that B(x, ε) = {y ∈ V : d(x, y) < ε} ⊆ U".
This induced topology is not just a random topology on the vector space. It has the nice property that the vector space operations behave well under it.
Proposition. Addition + : V × V → V and scalar multiplication · : F × V → V are continuous with respect to the topology induced by the norm (and the usual product topology).
Proof. Let U be open in V. We want to show that (+)^{-1}(U) is open. Let (v_1, v_2) ∈ (+)^{-1}(U), i.e. v_1 + v_2 ∈ U. Since v_1 + v_2 ∈ U, there exists ε > 0 such that B(v_1 + v_2, ε) ⊆ U. By the triangle inequality, we know that B(v_1, ε/2) + B(v_2, ε/2) ⊆ U. Hence we have (v_1, v_2) ∈ B(v_1, ε/2) × B(v_2, ε/2) ⊆ (+)^{-1}(U). So (+)^{-1}(U) is open.
Scalar multiplication can be done in a very similar way.
This motivates the following definition — we can do without the norm, and
just require a topology in which addition and scalar multiplication are continuous.
Definition (Topological vector space). A topological vector space (V, U) is a vector space V together with a topology U such that addition and scalar multiplication are continuous maps, and moreover singleton points {v} are closed sets.
The requirement that points are closed is just a rather technical requirement
needed in certain proofs. We should, however, not pay too much attention to
this when trying to understand it intuitively.
A natural question to ask is: when is a topological vector space normable, i.e. given a topological vector space, can we find a norm that induces the topology?
To answer this question, we will first need a few definitions.
Definition (Absolute convexity). Let V be a vector space. Then C ⊆ V is absolutely convex (or balanced convex) if for any λ, µ ∈ F such that |λ| + |µ| ≤ 1, we have λC + µC ⊆ C. In other words, if c_1, c_2 ∈ C, then λc_1 + µc_2 ∈ C.
Proposition. If (V, ‖·‖) is a normed vector space, then B(t) = B(0, t) = {v : ‖v‖ < t} is absolutely convex.
Proof. If c_1, c_2 ∈ B(t) and |λ| + |µ| ≤ 1, then by the triangle inequality ‖λc_1 + µc_2‖ ≤ |λ|‖c_1‖ + |µ|‖c_2‖ ≤ (|λ| + |µ|) max(‖c_1‖, ‖c_2‖) < t.
Definition (Bounded subset). Let V be a topological vector space. Then B ⊆ V is bounded if for every open neighbourhood U ⊆ V of 0, there is some s > 0 such that B ⊆ tU for all t > s.
At first sight, this might seem like a rather weird definition. Intuitively, it just means that B is bounded if, whenever we take any open neighbourhood U of 0, by enlarging it by a scalar multiple, we can make it fully contain B.
Example. B(t) in a normed vector space is bounded.
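Indeed, given any open neighbourhood U of 0, pick ε > 0 such that B(ε) ⊆ U. Then for every r > t/ε we have B(t) ⊆ B(rε) = rB(ε) ⊆ rU, so B(t) is bounded.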
Proposition. A topological vector space (V, U) is normable if and only if there exists an absolutely convex, bounded open neighbourhood of 0.
Proof. One direction is obvious — if V is normable, then B(t) is an absolutely convex, bounded open neighbourhood of 0.
The other direction is not too difficult either. We define the Minkowski functional µ_C : V → R by
µ_C(v) = inf{t > 0 : v ∈ tC},
where C is our absolutely convex, bounded open neighbourhood of 0.
Note that by definition, for any t < µ_C(v), we have v ∉ tC. On the other hand, by absolute convexity, for any t > µ_C(v), we have v ∈ tC.
We now show that this is a norm on V :
(i) If v = 0, then v ∈ tC for every t > 0 (since 0 ∈ C by absolute convexity), so µ_C(0) = 0. On the other hand, suppose v ≠ 0. Since a singleton point is closed, U = V \ {v} is an open neighbourhood of 0. Hence, since C is bounded, there is some t such that C ⊆ tU, i.e. (1/t)C ⊆ U. Hence v ∉ (1/t)C. So µ_C(v) ≥ 1/t > 0. So µ_C(v) = 0 iff v = 0.
(ii) For λ ≠ 0, absolute convexity gives λv ∈ tC iff v ∈ (t/|λ|)C, so
µ_C(λv) = inf{t > 0 : λv ∈ tC} = |λ| inf{t > 0 : v ∈ tC} = |λ|µ_C(v).
(iii) We want to show that
µ_C(v + w) ≤ µ_C(v) + µ_C(w).
This is equivalent to showing that
inf{t > 0 : v + w ∈ tC} ≤ inf{t > 0 : v ∈ tC} + inf{r > 0 : w ∈ rC}.
This is, in turn, equivalent to proving that if v ∈ tC and w ∈ rC, then (v + w) ∈ (t + r)C.
Let v' = v/t and w' = w/r. Then we want to show that if v' ∈ C and w' ∈ C, then (1/(t + r))(tv' + rw') ∈ C. This is exactly what is required by convexity.
So done.
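Example. If V is itself a normed vector space and we take C = B(1), then v ∈ tC iff ‖v‖ < t, so µ_C(v) = inf{t > 0 : ‖v‖ < t} = ‖v‖. In other words, the Minkowski functional of the open unit ball recovers the norm we started with.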
In fact, the condition of absolute convexity can be replaced by convexity, where "convex" means that for every t ∈ [0, 1], we have tC + (1 − t)C ⊆ C. This is since for every convex bounded C, we can always find an absolutely convex bounded C̃ ⊆ C, which is not hard to prove.
Among all normed spaces, some are particularly nice, known as Banach
spaces.
Definition (Banach space). A normed vector space is a Banach space if it is complete as a metric space, i.e. every Cauchy sequence converges.
Example.
(i) A finite-dimensional vector space (which is isomorphic to F^n for some n) is Banach.
(ii) Let X be a compact Hausdorff space. Then let
B(X) = {f : X → R such that f is bounded}.
This is obviously a vector space, and we can define the norm by ‖f‖ = sup_{x∈X} |f(x)|. It is easy to show that this is a norm. It is less trivial to show that this is a Banach space.
Let {f_n} ⊆ B(X) be a Cauchy sequence. Then for any x, the sequence {f_n(x)} ⊆ R is also Cauchy. So we can define f(x) = lim_{n→∞} f_n(x).
To show that f_n → f, let ε > 0. By the definition of {f_n} being Cauchy, there is some N such that for any n, m > N and any fixed x, we have |f_n(x) − f_m(x)| < ε. Take the limit as m → ∞. Then f_m(x) → f(x). So |f_n(x) − f(x)| ≤ ε. In particular, |f(x)| ≤ ‖f_n‖ + ε for all x (for any fixed n > N), so f is bounded and f ∈ B(X). Moreover, since the bound ε holds for all x, for any n > N we must have ‖f_n − f‖ ≤ ε. So f_n → f.
(iii) Define X as before, and let
C(X) = {f : X → R such that f is continuous}.
Since any continuous function on a compact space is bounded, we have C(X) ⊆ B(X). We define the norm as before.
Since C(X) ⊆ B(X), to show that C(X) is Banach, it suffices to show that C(X) ⊆ B(X) is closed, i.e. if f_n → f with f_n ∈ C(X), then f ∈ C(X), i.e. the uniform limit of continuous functions is continuous. The proof can be found in IB Analysis II.
(iv) For 1 ≤ p < ∞, define
L̂^p([0, 1]) = {f : [0, 1] → R such that f is continuous},
with the norm ‖·‖_{L̂^p} given by
‖f‖_{L̂^p} = (∫_0^1 |f|^p dx)^{1/p}.
It is easy to show that L̂^p is indeed a vector space, and we now check that this is a norm.
(a) ‖f‖_{L̂^p} ≥ 0 is obvious. Also, suppose that ‖f‖_{L̂^p} = 0. Then we must have f = 0. Otherwise, if f ≠ 0, say |f(x)| = ε > 0 for some x, then by continuity there is some δ > 0 such that |f(y)| ≥ ε/2 for all y ∈ (x − δ, x + δ). Hence
‖f‖_{L̂^p} = (∫_0^1 |f|^p dx)^{1/p} ≥ (2δ (ε/2)^p)^{1/p} > 0.
(b) ‖λf‖ = |λ|‖f‖ is obvious.
(c) The triangle inequality is exactly what the Minkowski inequality says, which is on the example sheet.
It turns out that L̂^p is not a Banach space. We could brute-force a hard proof here, but we will later develop some tools that allow us to prove this much more easily.
Hence, we define L^p([0, 1]) to be the completion of L̂^p([0, 1]). In IID Probability and Measure, we will show that L^p([0, 1]) is in fact the space
L^p([0, 1]) = {f : [0, 1] → R such that ∫_0^1 |f|^p dx < ∞}/∼,
where the integral is the Lebesgue integral, and we are quotienting by the relation f ∼ g if f = g Lebesgue-almost everywhere. You will understand what these terms mean in the IID Probability and Measure course.
(v) ℓ^p spaces: for p ∈ [1, ∞), define
ℓ^p(F) = {(x_1, x_2, ···) : x_i ∈ F, Σ_{i=1}^∞ |x_i|^p < ∞},
with the norm
‖x‖_{ℓ^p} = (Σ_{i=1}^∞ |x_i|^p)^{1/p}.
It should be easy to check that this is a normed vector space. Moreover, this is a Banach space. The proof is on the example sheet.
(vi) ℓ^∞ space: we define
ℓ^∞ = {(x_1, x_2, ···) : x_i ∈ F, sup_{i∈N} |x_i| < ∞},
with norm
‖x‖_{ℓ^∞} = sup_{i∈N} |x_i|.
Again, this is a Banach space.
(vii) Let B = B(1) be the open unit ball in R^n. Define C(B) to be the set of continuous functions f : B → R. Note that unlike in our previous example, these functions need not be bounded. So our previous norm cannot be applied. However, we can still define a topology as follows:
Let {K_i}_{i=1}^∞ be a sequence of compact subsets of B such that K_i ⊆ K_{i+1} and ∪_{i=1}^∞ K_i = B. We define the basis to include
{f ∈ C(B) : sup_{x∈K_i} |f(x)| < 1/m}
for each m, i = 1, 2, ···, as well as the translations of these sets.
This weird basis is chosen such that f_n → f in this topology iff f_n → f uniformly on every compact set. It can be shown that this topology is not normable.
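Returning to the ℓ^p spaces in (v), a concrete illustration: the sequence x with x_i = 1/i lies in ℓ^p for every p > 1, since Σ i^{-p} converges, but not in ℓ^1, since the harmonic series diverges. In fact ℓ^p ⊆ ℓ^q whenever p ≤ q, so these spaces grow with p, and they all sit inside ℓ^∞.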
1.1 Bounded linear maps
With vector spaces, we studied linear maps. These are maps that respect the
linear structure of a vector space. With normed vector spaces, the right kind of
maps to study is the bounded linear maps.
Definition (Bounded linear map). T : X → Y is a bounded linear map if there is a constant C > 0 such that ‖Tx‖_Y ≤ C‖x‖_X for all x ∈ X. We write B(X, Y) for the set of bounded linear maps from X to Y.
This is equivalent to saying T(B_X(1)) ⊆ B_Y(C) for some C > 0. It is also equivalent to saying that T(B) is bounded for every bounded subset B of X. Note that this final characterization is also valid when we just have a topological vector space.
How does boundedness relate to the topological structure of the vector spaces?
It turns out that boundedness is the same as continuity, which is another reason
why we like bounded linear maps.
Proposition. Let X, Y be normed vector spaces, and T : X → Y a linear map. Then the following are equivalent:
(i) T is continuous.
(ii) T is continuous at 0.
(iii) T is bounded.
Proof. (i) ⇒ (ii) is obvious.
(ii) ⇒ (iii): Consider B_Y(1) ⊆ Y, the open unit ball. Since T is continuous at 0, T^{-1}(B_Y(1)) ⊆ X is open. Hence there exists ε > 0 such that B_X(ε) ⊆ T^{-1}(B_Y(1)). So T(B_X(ε)) ⊆ B_Y(1). So T(B_X(1)) ⊆ B_Y(1/ε). So T is bounded.
(iii) ⇒ (i): Let ε > 0. Then ‖Tx_1 − Tx_2‖_Y = ‖T(x_1 − x_2)‖_Y ≤ C‖x_1 − x_2‖_X. This is less than ε if ‖x_1 − x_2‖_X < C^{-1}ε. So done.
Using the obvious operations, B(X, Y) can be made a vector space. What about a norm?
Definition (Norm on B(X, Y)). Let T : X → Y be a bounded linear map. Define ‖T‖_{B(X,Y)} by
‖T‖_{B(X,Y)} = sup_{‖x‖≤1} ‖Tx‖_Y.
Alternatively, this is the minimum C such that ‖Tx‖_Y ≤ C‖x‖_X for all x. In particular, we have
‖Tx‖_Y ≤ ‖T‖_{B(X,Y)} ‖x‖_X.
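Example. On ℓ^p, the right shift S(x_1, x_2, ···) = (0, x_1, x_2, ···) is linear and satisfies ‖Sx‖_{ℓ^p} = ‖x‖_{ℓ^p}, so S is bounded with ‖S‖_{B(ℓ^p, ℓ^p)} = 1. By contrast, the linear map (x_1, x_2, x_3, ···) ↦ (x_1, 2x_2, 3x_3, ···), defined on the subspace of ℓ^1 consisting of finitely supported sequences, is not bounded: it sends the sequence with a single 1 in the i-th entry (which has norm 1) to a sequence of norm i.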
1.2 Dual spaces
We will frequently be interested in one particular case of B(X, Y ).
Definition (Dual space). Let V be a normed vector space. The dual space is V* = B(V, F). We call the elements of V* functionals. The algebraic dual of V is V′ = L(V, F), where we do not require boundedness.
One particularly nice property of the dual is that V* is always a Banach space.
Proposition. Let V be a normed vector space. Then V* is a Banach space.
Proof. Suppose {T_i} ⊆ V* is a Cauchy sequence. We define T as follows: for any v ∈ V, {T_i(v)} ⊆ F is a Cauchy sequence. Since F is complete (it is either R or C), we can define T : V → F by
T(v) = lim_{n→∞} T_n(v).
Our objective is to show that T_i → T. The first step is to show that we indeed have T ∈ V*, i.e. T is a bounded map.
Let ‖v‖ ≤ 1. Pick ε = 1. Then there is some N such that for all i > N, we have |T_i(v) − T(v)| < 1. Then we have
|T(v)| ≤ |T_i(v) − T(v)| + |T_i(v)| < 1 + ‖T_i‖_{V*} ‖v‖_V ≤ 1 + ‖T_i‖_{V*} ≤ 1 + sup_i ‖T_i‖_{V*}.
Since {T_i} is Cauchy, sup_i ‖T_i‖_{V*} is bounded. Since this bound does not depend on v, we get that T is bounded.
Now we want to show that ‖T_i − T‖_{V*} → 0 as i → ∞.
For arbitrary ε > 0, there is some N such that for all i, j > N, we have ‖T_i − T_j‖_{V*} < ε. In particular, for any v such that ‖v‖ ≤ 1, we have |T_i(v) − T_j(v)| < ε. Taking the limit as j → ∞, we obtain |T_i(v) − T(v)| ≤ ε. Since this is true for any v with ‖v‖ ≤ 1, we have ‖T_i − T‖_{V*} ≤ ε for all i > N. So T_i → T.
Exercise: in general, for X, Y normed vector spaces, what condition on X and Y guarantees that B(X, Y) is a Banach space?
1.3 Adjoint
The idea of the adjoint is: given T ∈ B(X, Y), produce a "dual map", or an adjoint, T* ∈ B(Y*, X*).
There is really only one (non-trivial) natural way of doing this. First we can think about what T* should do. It takes in something from Y* and produces something in X*. By the definition of the dual space, this is equivalent to taking in a function g : Y → F and returning a function T*(g) : X → F.
To produce this T*(g), the only things we have on our hands to use are T : X → Y and g : Y → F. Thus the only option we have is to define T*(g) as the composition g ◦ T, i.e. T*(g)(x) = g(T(x)) (we also have a silly option of producing the zero map regardless of input, but this is silly). Indeed, this is the definition of the adjoint.
Definition (Adjoint). Let X, Y be normed vector spaces. Given T ∈ B(X, Y), we define the adjoint of T, denoted T*, as the map T* ∈ B(Y*, X*) given by
T*(g)(x) = g(T(x))
for x ∈ X, g ∈ Y*. Alternatively, we can write
T*(g) = g ◦ T.
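Example. If T : F^n → F^m is given by a matrix (T_{ji}), so that (Tx)_j = Σ_i T_{ji} x_i, and we write a functional g ∈ (F^m)* as g(y) = Σ_j g_j y_j, then T*(g)(x) = g(Tx) = Σ_i (Σ_j T_{ji} g_j) x_i. So under these identifications the adjoint is given by the transposed matrix.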
It is easy to show that our T* is indeed linear. We now show it is bounded.
Proposition. T* is bounded.
Proof. We want to show that ‖T*‖_{B(Y*, X*)} is finite. For simplicity of notation, the suprema are taken over non-zero elements of the spaces. We have
‖T*‖_{B(Y*,X*)} = sup_{g∈Y*} ‖T*(g)‖_{X*}/‖g‖_{Y*}
= sup_{g∈Y*} sup_{x∈X} |T*(g)(x)|/(‖g‖_{Y*}‖x‖_X)
= sup_{g∈Y*} sup_{x∈X} |g(Tx)|/(‖g‖_{Y*}‖x‖_X)
≤ sup_{g∈Y*} sup_{x∈X} ‖g‖_{Y*}‖Tx‖_Y/(‖g‖_{Y*}‖x‖_X)
≤ sup_{x∈X} ‖T‖_{B(X,Y)}‖x‖_X/‖x‖_X
= ‖T‖_{B(X,Y)}.
So it is finite.
1.4 The double dual
Definition (Double dual). Let V be a normed vector space. Define V** = (V*)*.
We want to define a map φ : V → V**. Again, we can reason about what we expect this function to do. It takes in a v ∈ V, and produces a φ(v) ∈ V**. Expanding the definition, this gives φ(v) : V* → F. Hence this φ(v) takes in a g ∈ V*, and returns a φ(v)(g) ∈ F.
This is easy. Since g ∈ V*, we know that g is a function g : V → F. Given this function g and a v ∈ V, it is easy to produce a φ(v)(g) ∈ F. Just apply g to v:
φ(v)(g) = g(v).
Proposition. Let φ : V → V** be defined by φ(v)(g) = g(v). Then φ is a bounded linear map and ‖φ‖_{B(V,V**)} ≤ 1.
Proof. Again, we take the suprema over non-zero elements. We have
‖φ‖_{B(V,V**)} = sup_{v∈V} ‖φ(v)‖_{V**}/‖v‖_V
= sup_{v∈V} sup_{g∈V*} |φ(v)(g)|/(‖v‖_V ‖g‖_{V*})
= sup_{v∈V} sup_{g∈V*} |g(v)|/(‖v‖_V ‖g‖_{V*})
≤ 1.
In fact, we will later show that ‖φ‖_{B(V,V**)} = 1.
1.5 Isomorphism
So far, we have discussed a lot about bounded linear maps, which are “morphisms”
between normed vector spaces. It is thus natural to come up with the notion of
isomorphism.
Definition (Isomorphism). Let X, Y be normed vector spaces. Then T : X → Y is an isomorphism if it is a bounded linear map with a bounded linear inverse (i.e. it is a homeomorphism).
We say X and Y are isomorphic if there is an isomorphism T : X → Y.
We say that T : X → Y is an isometric isomorphism if T is an isomorphism and ‖Tx‖_Y = ‖x‖_X for all x ∈ X.
X and Y are isometrically isomorphic if there is an isometric isomorphism between them.
Example. Consider a finite-dimensional space F^n with the standard basis {e_1, ···, e_n}. For any v = Σ v_i e_i, the norm is defined by
‖v‖ = (Σ |v_i|^2)^{1/2}.
Then any g ∈ V* is determined by g(e_i) for i = 1, ···, n. We want to show that there are no restrictions on what the g(e_i) can be, i.e. whatever values we assign to them, g will still be bounded. We have
‖g‖_{V*} = sup_{v∈V} |g(v)|/‖v‖
≤ sup_{v∈V} (Σ |v_i||g(e_i)|)/(Σ |v_i|^2)^{1/2}
≤ sup_{v∈V} C (Σ |v_i|^2)^{1/2} sup_i |g(e_i)|/(Σ |v_i|^2)^{1/2}
= C sup_i |g(e_i)|
for some constant C, where the second inequality is due to the Cauchy–Schwarz inequality. The supremum sup_i |g(e_i)| is finite since F^n is finite-dimensional.
Since g is uniquely determined by the list of values (g(e_1), g(e_2), ···, g(e_n)), the dual V* has dimension n. Therefore, V* is isomorphic to F^n. By the same line of argument, V** is isomorphic to F^n.
In fact, we can show that φ : V → V** given by φ(v)(g) = g(v) is an isometric isomorphism (this is not true for general normed vector spaces. Just pick V to be incomplete; then V and V** cannot be isomorphic since V** is complete).
Example. Consider ℓ^p for p ∈ [1, ∞). What is (ℓ^p)*?
Suppose q is the conjugate exponent of p, i.e.
1/q + 1/p = 1
(if p = 1, define q = ∞). It is easy to see that ℓ^q ⊆ (ℓ^p)* by the following: suppose (x_1, x_2, ···) ∈ ℓ^p and (y_1, y_2, ···) ∈ ℓ^q. Define
y(x) = Σ_{i=1}^∞ x_i y_i.
We will show that y defined this way is a bounded linear map. Linearity is easy to see, and boundedness comes from the fact that
‖y‖_{(ℓ^p)*} = sup_x |y(x)|/‖x‖_{ℓ^p} = sup_x |Σ x_i y_i|/‖x‖_{ℓ^p} ≤ sup_x ‖x‖_{ℓ^p}‖y‖_{ℓ^q}/‖x‖_{ℓ^p} = ‖y‖_{ℓ^q},
by Hölder's inequality. So every (y_i) ∈ ℓ^q determines a bounded linear map. In fact, we can show that (ℓ^p)* is isomorphic to ℓ^q.
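For instance, when p = 2 the conjugate exponent is q = 2, so ℓ^2 is isomorphic to its own dual. When p = 1 we get q = ∞, and indeed (ℓ^1)* is isomorphic to ℓ^∞; the dual of ℓ^∞, however, is strictly larger than ℓ^1, which is one reason p = ∞ is excluded from this discussion.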
1.6 Finite-dimensional normed vector spaces
We are now going to look at a special case of normed vector spaces, where the
vector space is finite dimensional.
It turns out that finite-dimensional vector spaces are rather boring. In
particular, we have
(i) All norms are equivalent.
(ii) The closed unit ball is compact.
(iii) They are Banach spaces.
(iv) All linear maps whose domain is finite dimensional are bounded.
These are what we are going to show in this section.
First of all, we need to say what we mean when we say all norms are "equivalent".
Definition (Equivalent norms). Let V be a vector space, and ‖·‖_1, ‖·‖_2 be norms on V. We say that these are equivalent if there exists a constant C > 0 such that for any v ∈ V, we have
C^{-1}‖v‖_2 ≤ ‖v‖_1 ≤ C‖v‖_2.
It is an exercise to show that equivalent norms induce the same topology,
and hence agree on continuity and convergence. Also, equivalence of norms is an
equivalence relation (as the name suggests).
Now let V be an n-dimensional vector space with basis {e_1, ···, e_n}. We can define the ℓ^n_p norm by
‖v‖_{ℓ^n_p} = (Σ_{i=1}^n |v_i|^p)^{1/p},
where v = Σ_{i=1}^n v_i e_i.
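For example, the norms ‖·‖_{ℓ^n_1} and ‖·‖_{ℓ^n_2} are equivalent with constant C = √n: by Cauchy–Schwarz, ‖v‖_{ℓ^n_1} = Σ|v_i| ≤ √n (Σ|v_i|^2)^{1/2} = √n ‖v‖_{ℓ^n_2}, while ‖v‖_{ℓ^n_2} ≤ ‖v‖_{ℓ^n_1} since (Σ|v_i|)^2 ≥ Σ|v_i|^2. The point of the next proposition is that such constants always exist, for any pair of norms.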
Proposition. Let V be an n-dimensional vector space. Then all norms on V are equivalent to the norm ‖·‖_{ℓ^n_1}.
Corollary. All norms on a finite-dimensional vector space are equivalent.
Proof. Let ‖·‖ be a norm on V.
Let v = (v_1, ···, v_n) = Σ v_i e_i ∈ V. Then we have
‖v‖ = ‖Σ v_i e_i‖ ≤ Σ_{i=1}^n |v_i|‖e_i‖ ≤ (sup_i ‖e_i‖) Σ_{i=1}^n |v_i| ≤ c‖v‖_{ℓ^n_1},
where c = sup_i ‖e_i‖ < ∞ since we are taking a finite supremum.
For the other way round, let S_1 = {v ∈ V : ‖v‖_{ℓ^n_1} = 1}. We will show the following two results:
(i) ‖·‖ : (S_1, ‖·‖_{ℓ^n_1}) → R is continuous.
(ii) S_1 is a compact set.
We first see why this gives what we want. We know that for any continuous map from a compact set to R, the image is bounded and the infimum is achieved. So there is some v* ∈ S_1 such that
‖v*‖ = inf_{v∈S_1} ‖v‖.
Since v* ≠ 0, there is some c_0 > 0 such that ‖v‖ ≥ c_0 for all v ∈ S_1.
Now take an arbitrary non-zero v ∈ V. Since v/‖v‖_{ℓ^n_1} ∈ S_1, we know that
‖v/‖v‖_{ℓ^n_1}‖ ≥ c_0,
which is to say that
‖v‖ ≥ c_0 ‖v‖_{ℓ^n_1}.
Since we have found c, c_0 > 0 such that
c_0‖v‖_{ℓ^n_1} ≤ ‖v‖ ≤ c‖v‖_{ℓ^n_1},
we can now let C = max(c, 1/c_0) > 0. Then
C^{-1}‖v‖_{ℓ^n_1} ≤ ‖v‖ ≤ C‖v‖_{ℓ^n_1}.
So the norms are equivalent. Now we can start to prove (i) and (ii).
First, let v, w ∈ V. We have
|‖v‖ − ‖w‖| ≤ ‖v − w‖ ≤ c‖v − w‖_{ℓ^n_1}.
Hence when v is close to w in the ℓ^n_1 norm, ‖v‖ is close to ‖w‖. So ‖·‖ is continuous, proving (i).
To show (ii), it suffices to show that the unit ball B = {v ∈ V : ‖v‖_{ℓ^n_1} ≤ 1} is compact, since S_1 is a closed subset of B. We will do so by showing it is sequentially compact.
Let {v^(k)}_{k=1}^∞ be a sequence in B. Write
v^(k) = Σ_{i=1}^n λ_i^(k) e_i.
Since v^(k) ∈ B, we have
Σ_{i=1}^n |λ_i^(k)| ≤ 1.
Consider the sequence λ_1^(k), which is a sequence in F. We know that |λ_1^(k)| ≤ 1. So by Bolzano–Weierstrass, there is a convergent subsequence λ_1^(k_{j_1}).
Now look at λ_2^(k_{j_1}). Since this is bounded, there is a further convergent subsequence λ_2^(k_{j_2}).
Iterate this for all n coordinates to obtain a subsequence k_{j_n} such that λ_i^(k_{j_n}) converges for every i. Then v^(k_{j_n}) is a convergent subsequence.
Proposition. Let V be a finite-dimensional normed vector space. Then the closed unit ball
B̄(1) = {v ∈ V : ‖v‖ ≤ 1}
is compact.
Proof. This follows from the proof above.
Proposition. Let V be a finite-dimensional normed vector space. Then V is a Banach space.
Proof. Let {v_i} ⊆ V be a Cauchy sequence. Since {v_i} is Cauchy, it is bounded, i.e. {v_i} ⊆ B̄(R) for some R > 0. By the above, B̄(R) is compact. So {v_i} has a convergent subsequence v_{i_k} → v. Since {v_i} is Cauchy, we must have v_i → v. So {v_i} converges.
Proposition. Let V, W be normed vector spaces, with V finite-dimensional. Also, let T : V → W be a linear map. Then T is bounded.
Proof. Recall the discussion last time about V* for finite-dimensional V. We will do a similar proof.
Note that since V is finite-dimensional, im T is finite-dimensional. So wlog W is finite-dimensional. Since all norms are equivalent, it suffices to consider the case where the vector spaces carry the ℓ^n_1 and ℓ^m_1 norms. Then T can be represented by a matrix (T_{ij}) such that
T(x_1, ···, x_n) = (Σ_i T_{1i}x_i, ···, Σ_i T_{mi}x_i).
We can bound this by
‖T(x_1, ···, x_n)‖ ≤ Σ_{j=1}^m Σ_{i=1}^n |T_{ji}||x_i| ≤ m (sup_{i,j} |T_{ij}|) Σ_{i=1}^n |x_i| ≤ C‖x‖_{ℓ^n_1}
for some C > 0, since we are taking the supremum over a finite set. This implies that ‖T‖_{B(ℓ^n_1, ℓ^m_1)} ≤ C.
There is another way to prove this statement.
Proof. (alternative) Let T : V → W be a linear map. We define a norm on V by ‖v‖′ = ‖v‖_V + ‖Tv‖_W. It is easy to show that this is a norm.
Since V is finite-dimensional, all norms on it are equivalent. So there is a constant C > 0 such that for all v, we have
‖v‖′ ≤ C‖v‖_V.
In particular, we have ‖Tv‖_W ≤ C‖v‖_V. So done.
Among all these properties, compactness of B̄(1) characterizes finite-dimensionality.
Proposition. Let V be a normed vector space. Suppose that the closed unit ball B̄(1) is compact. Then V is finite-dimensional.
Proof. Consider the following open cover of B̄(1):
B̄(1) ⊆ ∪_{y∈B̄(1)} B(y, 1/2).
Since B̄(1) is compact, this has a finite subcover. So there are some y_1, ···, y_n such that
B̄(1) ⊆ ∪_{i=1}^n B(y_i, 1/2).
Now let Y = span{y_1, ···, y_n}, which is a finite-dimensional subspace of V. We want to show that in fact we have Y = V.
Clearly, by definition of Y, the unit ball satisfies
B(1) ⊆ Y + B(1/2),
i.e. for every v ∈ B(1), there are some y ∈ Y and w ∈ B(1/2) such that v = y + w. Multiplying everything by 1/2 (and using (1/2)Y = Y), we get
B(1/2) ⊆ Y + B(1/4).
Hence we also have
B(1) ⊆ Y + B(1/4),
since Y + Y = Y. By induction, for every n, we have
B(1) ⊆ Y + B(1/2^n).
As a consequence, B(1) ⊆ Ȳ.
Since Y is finite-dimensional, we know that Y is complete. So Y is a closed subspace of V, i.e. Ȳ = Y. So in fact
B(1) ⊆ Y.
Since every element in V can be rescaled to an element of B(1), we know that V = Y. Hence V is finite-dimensional.
This concludes our discussion of finite-dimensional vector spaces. We'll end with an example showing that these results fail for infinite-dimensional vector spaces.
Example. Consider ℓ^1, and let e_i = (0, 0, ···, 0, 1, 0, ···) be the sequence which is 1 in the i-th entry and 0 elsewhere.
Note that if i ≠ j, then
‖e_i − e_j‖ = 2.
Since e_i ∈ B̄(1), we see that B̄(1) cannot be covered by finitely many open balls of radius 1/2, since each such ball can contain at most one of the e_i.
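In particular, the sequence {e_i} has no convergent subsequence (any two distinct terms are at distance 2 apart), so B̄(1) is not sequentially compact either.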
1.7 Hahn–Banach Theorem
Let V be a real normed vector space. What can we say about V* = B(V, R)? For instance, if V is non-trivial, must V* be non-trivial?
The main goal of this section is to prove the Hahn–Banach theorem (surprise), which allows us to produce a lot of elements in V*. Moreover, it doesn't just tell us that V* is non-empty (this is rather dull), but provides a tool to craft (or at least prove the existence of) elements of V* that satisfy some property we want.
Proposition. Let V be a real normed vector space, and let W ⊆ V have co-dimension 1. Assume we have the following two items:
– p : V → R (not necessarily linear), which is positive homogeneous, i.e.
p(λv) = λp(v)
for all v ∈ V, λ > 0, and subadditive, i.e.
p(v_1 + v_2) ≤ p(v_1) + p(v_2)
for all v_1, v_2 ∈ V. We can think of it as something like a norm, but more general.
– f : W → R a linear map such that f(w) ≤ p(w) for all w ∈ W.
Then there exists a linear extension f̃ : V → R such that f̃|_W = f and f̃(v) ≤ p(v) for all v ∈ V.
Why do we want this weird theorem? Our objective is to find something in V*. This theorem tells us that to find a bounded linear map on V, we just need something on W bounded by a norm-like object, and then we can extend it to V.
Proof. Let v_0 ∈ V \ W. Since W has co-dimension 1, every element v ∈ V can be written uniquely as v = w + av_0 for some w ∈ W, a ∈ R. Therefore it suffices to define f̃(v_0) and then extend linearly to V.
The condition we want to meet is
f̃(w + av_0) ≤ p(w + av_0)    (∗)
for all w ∈ W, a ∈ R. If a = 0, then this is satisfied since f̃ restricts to f on W.
If a > 0, then (∗) is equivalent to
f̃(w) + a f̃(v_0) ≤ p(w + av_0).
We can divide by a to obtain
f̃(a^{-1}w) + f̃(v_0) ≤ p(a^{-1}w + v_0).
We let w' = a^{-1}w. So we can write this as
f̃(v_0) ≤ p(w' + v_0) − f(w')
for all w' ∈ W.
If a < 0, then (∗) is equivalent to
f̃(w) + a f̃(v_0) ≤ p(w + av_0).
We now divide by a and flip the sign of the inequality. So we have
f̃(a^{-1}w) + f̃(v_0) ≥ −(−a^{-1})p(w + av_0) = −p(−a^{-1}w − v_0),
using positive homogeneity with the factor −a^{-1} > 0. In other words, we want
f̃(v_0) ≥ −p(−a^{-1}w − v_0) − f(a^{-1}w).
We let w' = −a^{-1}w. Then we are left with
f̃(v_0) ≥ −p(w' − v_0) + f(w')
for all w' ∈ W.
Hence we are done if we can define f̃(v_0) satisfying these two conditions. This is possible if and only if
−p(w_1 − v_0) + f(w_1) ≤ p(w_2 + v_0) − f(w_2)
for all w_1, w_2 ∈ W. This holds since
f(w_1) + f(w_2) = f(w_1 + w_2) ≤ p(w_1 + w_2) = p(w_1 − v_0 + w_2 + v_0) ≤ p(w_1 − v_0) + p(w_2 + v_0).
So the result follows.
The goal is to “iterate” this to get a similar result without the co-dimension
1 assumption. While we can do this directly finitely many times, this isn't helpful (since we already know a lot about finite-dimensional normed spaces). To
perform an “infinite iteration”, we need the mysterious result known as Zorn’s
lemma.
Digression on Zorn’s lemma
We first need a few definitions before we can come to Zorn’s lemma.
Definition (Partial order). A relation ≤ on a set X is a partial order if it satisfies
(i) x ≤ x (reflexivity)
(ii) x ≤ y and y ≤ x implies x = y (antisymmetry)
(iii) x ≤ y and y ≤ z implies x ≤ z (transitivity)
Definition (Total order). Let (S, ≤) be a partial order. T ⊆ S is totally ordered if for all x, y ∈ T, either x ≤ y or y ≤ x, i.e. every two things are related.
Definition (Upper bound). Let (S, ≤) be a partial order and S′ ⊆ S a subset. We say b ∈ S is an upper bound of this subset if x ≤ b for all x ∈ S′.
Definition (Maximal element). Let (S, ≤) be a partial order. Then m ∈ S is a maximal element if x ≥ m implies x = m.
The glorious Zorn’s lemma tells us that:
Lemma (Zorn's lemma). Let (S, ≤) be a non-empty partially ordered set such that every totally-ordered subset S′ has an upper bound in S. Then S has a maximal element.
We will not give a proof of this lemma here, but we can explain why it should be true.
We start by picking one element x_0 in S. If it is maximal, then we are done. Otherwise, there is some x_1 > x_0. If this is not maximal, then pick x_2 > x_1. We do this to infinity "and beyond" — after picking infinitely many x_i, if we have not yet reached a maximal element, we take an upper bound of this set, and call it x_ω. If this is not maximal, we can continue picking a larger element.
We can do this forever, but if this process never stops, even after infinite time, we would have picked out more elements than there are in S, which is clearly nonsense. Of course, this is hardly a formal proof. The proper proof can be found in the IID Logic and Set Theory course.
Back to vector spaces
The Hahn–Banach theorem is just our previous proposition without the constraint
that W has co-dimension 1.
Theorem (Hahn–Banach theorem*). Let V be a real normed vector space, and W ⊆ V a subspace. Assume we have the following two items:
– p : V → R (not necessarily linear), which is positive homogeneous and subadditive;
– f : W → R a linear map such that f(w) ≤ p(w) for all w ∈ W.
Then there exists a linear extension f̃ : V → R such that f̃|_W = f and f̃(v) ≤ p(v) for all v ∈ V.
Proof. Let S be the set of all pairs (Ṽ, f̃) such that
(i) W ⊆ Ṽ ⊆ V, where Ṽ is a subspace;
(ii) f̃ : Ṽ → R is linear;
(iii) f̃|_W = f;
(iv) f̃(ṽ) ≤ p(ṽ) for all ṽ ∈ Ṽ.
We introduce a partial order ≤ on S by (Ṽ_1, f̃_1) ≤ (Ṽ_2, f̃_2) if Ṽ_1 ⊆ Ṽ_2 and f̃_2|_{Ṽ_1} = f̃_1. It is easy to see that this is indeed a partial order.
We now check that this satisfies the assumptions of Zorn's lemma. Let {(Ṽ_α, f̃_α)}_{α∈A} ⊆ S be a totally ordered subset. Define (Ṽ, f̃) by
Ṽ = ∪_{α∈A} Ṽ_α,   f̃(x) = f̃_α(x) for x ∈ Ṽ_α.
This is well-defined because {(Ṽ_α, f̃_α)}_{α∈A} is totally ordered: if x ∈ Ṽ_{α_1} and x ∈ Ṽ_{α_2}, wlog assume (Ṽ_{α_1}, f̃_{α_1}) ≤ (Ṽ_{α_2}, f̃_{α_2}). Then f̃_{α_2}|_{Ṽ_{α_1}} = f̃_{α_1}, so f̃_{α_1}(x) = f̃_{α_2}(x).
It should be clear that (Ṽ, f̃) ∈ S and that (Ṽ, f̃) is indeed an upper bound of {(Ṽ_α, f̃_α)}_{α∈A}. So the conditions of Zorn's lemma are satisfied.
Hence by Zorn's lemma, there is a maximal element (W̃, f̃) ∈ S. Then by definition, f̃ is linear, restricts to f on W, and is bounded by p. We now show that W̃ = V.
Suppose not. Then there is some v_0 ∈ V \ W̃. Define Ṽ = span{W̃, v_0}. Now W̃ is a co-dimension 1 subspace of Ṽ. By our previous result, there is some linear g : Ṽ → R such that g|_{W̃} = f̃ and g(v) ≤ p(v) for all v ∈ Ṽ.
Hence we have (Ṽ, g) ∈ S but (W̃, f̃) < (Ṽ, g). This contradicts the maximality of (W̃, f̃).
There is a particularly important special case of this, which is also sometimes known as the Hahn–Banach theorem.
Corollary (Hahn–Banach theorem 2.0). Let W ⊆ V be real normed vector spaces. Given f ∈ W*, there exists an f̃ ∈ V* such that f̃|_W = f and ‖f̃‖_{V*} = ‖f‖_{W*}.
Proof. Use the Hahn–Banach theorem with p(x) = ‖f‖_{W*}‖x‖_V for all x ∈ V. Positive homogeneity and subadditivity follow directly from the axioms of the norm. Then by definition f(w) ≤ p(w) for all w ∈ W. So the Hahn–Banach theorem says that there is a linear f̃ : V → R such that f̃|_W = f and
f̃(v) ≤ p(v) = ‖f‖_{W*}‖v‖_V.
Now notice that
f̃(v) ≤ ‖f‖_{W*}‖v‖_V   and   −f̃(v) = f̃(−v) ≤ ‖f‖_{W*}‖v‖_V
together imply that |f̃(v)| ≤ ‖f‖_{W*}‖v‖_V for all v ∈ V, i.e. ‖f̃‖_{V*} ≤ ‖f‖_{W*}.
On the other hand, we have (again taking the supremum over non-zero elements)
‖f̃‖_{V*} = sup_{v∈V} |f̃(v)|/‖v‖_V ≥ sup_{w∈W} |f(w)|/‖w‖_W = ‖f‖_{W*}.
So indeed we have ‖f̃‖_{V*} = ‖f‖_{W*}.
We’ll have some quick corollaries of these theorems.
Proposition. Let V be a real normed vector space. For every v ∈ V \ {0}, there is some f_v ∈ V* such that f_v(v) = ‖v‖_V and ‖f_v‖_{V*} = 1.
Proof. Apply the Hahn–Banach theorem (2.0) with W = span{v} and f_v^0 : W → R determined by f_v^0(v) = ‖v‖_V. Then ‖f_v^0‖_{W*} = 1, and the resulting extension f_v has the required properties.
Corollary. Let V be a real normed vector space. Then v = 0 if and only if f(v) = 0 for all f ∈ V*.
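Indeed, if v ≠ 0, the previous proposition gives some f_v ∈ V* with f_v(v) = ‖v‖_V ≠ 0.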
Corollary. Let V be a non-trivial real normed vector space, and v, w ∈ V with v ≠ w. Then there is some f ∈ V* such that f(v) ≠ f(w).
Corollary. If V is a non-trivial real normed vector space, then V* is non-trivial.
We now want to restrict the discussion to double duals. We define φ : V → V** as before by φ(v)(f) = f(v) for v ∈ V, f ∈ V*.
Proposition. The map φ : V → V** is an isometry, i.e. ‖φ(v)‖_{V**} = ‖v‖_V for all v ∈ V.
Proof. We have previously shown that
‖φ‖_{B(V,V**)} ≤ 1.
It thus suffices to show that ‖φ(v)‖_{V**} ≥ ‖v‖_V for every v. We may assume v ≠ 0, since for v = 0 the equality is trivial. We have
‖φ(v)‖_{V**} = sup_{f∈V*} |φ(v)(f)|/‖f‖_{V*} ≥ |φ(v)(f_v)|/‖f_v‖_{V*} = |f_v(v)| = ‖v‖_V,
where f_v is the functional we have previously constructed, with f_v(v) = ‖v‖_V and ‖f_v‖_{V*} = 1.
So done.
In particular, φ is injective, and one can view φ as an isometric embedding of V into V**.
Definition (Reflexive). We say V is reflexive if φ(V) = V**.
Note that any reflexive space is Banach, since V**, being the dual of V*, is complete, and V is isometric to it.
You might have heard that for any infinite-dimensional vector space V, the dual of V is always strictly larger than V. This does not prevent an infinite-dimensional vector space from being reflexive. When we said the dual of V is always strictly larger than V, we were referring to the algebraic dual, i.e. the set of all linear maps from V to F. In the definition of reflexive (and everywhere else where we mention "dual" in this course), we mean the continuous dual, where we look at the set of all bounded linear maps from V to F. It is indeed possible for the continuous dual to be isomorphic to the original space, even for infinite-dimensional spaces, as we will see later.
Example. Finite-dimensional normed vector spaces are reflexive. Also, ℓ^p is reflexive for p ∈ (1, ∞).
Recall that given T ∈ B(V, W), we defined T* ∈ B(W*, V*) by
T*(f)(v) = f(Tv)
for v ∈ V, f ∈ W*. We have previously shown that
‖T*‖_{B(W*,V*)} ≤ ‖T‖_{B(V,W)}.
We will now show that in fact equality holds.
Proposition. ‖T*‖_{B(W*,V*)} = ‖T‖_{B(V,W)}.
Proof. We have already shown that ‖T*‖_{B(W*,V*)} ≤ ‖T‖_{B(V,W)}.
For the other inequality, first let ε > 0. Since
‖T‖_{B(V,W)} = sup_{v∈V} ‖Tv‖_W/‖v‖_V
by definition, there is some v ∈ V such that ‖Tv‖_W ≥ ‖T‖_{B(V,W)}‖v‖_V − ε. wlog assume ‖v‖_V = 1. So
‖Tv‖_W ≥ ‖T‖_{B(V,W)} − ε.
Therefore, writing f_{Tv} for the functional constructed above (with f_{Tv}(Tv) = ‖Tv‖_W and ‖f_{Tv}‖_{W*} = 1), we get that
‖T*‖_{B(W*,V*)} = sup_{f∈W*} ‖T*(f)‖_{V*}/‖f‖_{W*}
≥ ‖T*(f_{Tv})‖_{V*}
≥ |T*(f_{Tv})(v)|
= |f_{Tv}(Tv)|
= ‖Tv‖_W
≥ ‖T‖_{B(V,W)} − ε,
where we used the fact that ‖f_{Tv}‖_{W*} and ‖v‖_V are both 1. Since ε is arbitrary, we are done.
2 Baire category theorem
2.1 The Baire category theorem
When we first write the Baire category theorem down, it might seem a bit
pointless. However, it turns out to be a really useful result, and we will be able
to prove surprisingly many results from it.
In fact, the Baire category theorem itself does not involve normed vector spaces. It is a statement about complete metric spaces. However, most of the applications we have here are about normed vector spaces.
To specify the theorem, we will need some terminology.
Definition (Nowhere dense set). Let X be a topological space. A subset E ⊆ X is nowhere dense if Ē has empty interior.
Usually, we will pick