Part IB Analysis II
Based on lectures by N. Wickramasekera
Notes taken by Dexter Chua
Michaelmas 2015
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
Uniform convergence
The general principle of uniform convergence. A uniform limit of continuous functions
is continuous. Uniform convergence and termwise integration and differentiation of
series of real-valued functions. Local uniform convergence of power series. [3]
Uniform continuity and integration
Continuous functions on closed bounded intervals are uniformly continuous. Review of
basic facts on Riemann integration (from Analysis I). Informal discussion of integration
of complex-valued and $\mathbb{R}^n$-valued functions of one variable; proof that
$\left\| \int_a^b f(x)\,dx \right\| \le \int_a^b \|f(x)\|\,dx$. [2]
$\mathbb{R}^n$ as a normed space
Definition of a normed space. Examples, including the Euclidean norm on $\mathbb{R}^n$ and the
uniform norm on $C[a, b]$. Lipschitz mappings and Lipschitz equivalence of norms. The
Bolzano-Weierstrass theorem in $\mathbb{R}^n$. Completeness. Open and closed sets. Continuity
for functions between normed spaces. A continuous function on a closed bounded
set in $\mathbb{R}^n$ is uniformly continuous and has closed bounded image. All norms on a
finite-dimensional space are Lipschitz equivalent. [5]
Differentiation from $\mathbb{R}^m$ to $\mathbb{R}^n$
Definition of derivative as a linear map; elementary properties, the chain rule. Partial
derivatives; continuous partial derivatives imply differentiability. Higher-order derivatives;
symmetry of mixed partial derivatives (assumed continuous). Taylor's theorem.
The mean value inequality. Path-connectedness for subsets of $\mathbb{R}^n$; a function having
zero derivative on a path-connected open subset is constant. [6]
Metric spaces
Definition and examples. *Metrics used in Geometry*. Limits, continuity, balls,
neighbourhoods, open and closed sets. [4]
The Contraction Mapping Theorem
The contraction mapping theorem. Applications including the inverse function theorem
(proof of continuity of inverse function, statement of differentiability). Picard’s solution
of differential equations. [4]
Contents
0 Introduction
1 Uniform convergence
2 Series of functions
2.1 Convergence of series
2.2 Power series
3 Uniform continuity and integration
3.1 Uniform continuity
3.2 Applications to Riemann integrability
3.3 Non-examinable fun*
4 $\mathbb{R}^n$ as a normed space
4.1 Normed spaces
4.2 Cauchy sequences and completeness
4.3 Sequential compactness
4.4 Mappings between normed spaces
5 Metric spaces
5.1 Preliminary definitions
5.2 Topology of metric spaces
5.3 Cauchy sequences and completeness
5.4 Compactness
5.5 Continuous functions
5.6 The contraction mapping theorem
6 Differentiation from $\mathbb{R}^m$ to $\mathbb{R}^n$
6.1 Differentiation from $\mathbb{R}^m$ to $\mathbb{R}^n$
6.2 The operator norm
6.3 Mean value inequalities
6.4 Inverse function theorem
6.5 2nd order derivatives
0 Introduction
Analysis II is, unsurprisingly, a continuation of IA Analysis I. The key idea in
the course is to generalize what we did in Analysis I. The first thing we studied
in Analysis I was the convergence of sequences of numbers. Here, we would like
to study what it means for a sequence of functions to converge (this is technically
a generalization of what we did before, since a sequence of numbers is just a
sequence of functions $f_n : \{0\} \to \mathbb{R}$, but this is not necessarily a helpful way
to think about it). It turns out this is non-trivial, and there are many ways
in which we can define the convergence of functions, and different notions are
useful in different circumstances.
The next thing is the idea of uniform continuity. This is a stronger notion
than just continuity. Despite being stronger, we will prove an important theorem
saying any continuous function on $[0, 1]$ (and in general on a closed, bounded subset
of $\mathbb{R}$) is uniformly continuous. This does not mean that uniform continuity is a
useless notion, even if we are just looking at functions on $[0, 1]$. The definition
of uniform continuity is much stronger than just continuity, so we now know
continuous functions on $[0, 1]$ are really nice, and this allows us to prove many
things with ease.
We can also generalize in other directions. Instead of looking at functions, we
might want to define convergence for arbitrary sets. Of course, if we are given a
set of, say, apples, oranges and pears, we cannot define convergence in a natural
way. Instead, we need to give the set some additional structure, such as a norm
or metric. We can then define convergence in a very general setting.
Finally, we will extend the notion of differentiation from functions $\mathbb{R} \to \mathbb{R}$
to general vector functions $\mathbb{R}^n \to \mathbb{R}^m$. This might sound easy: we have been
doing this in IA Vector Calculus all the time. We just need to formalize it a bit,
just like what we did in IA Analysis I, right? It turns out differentiation from
$\mathbb{R}^n$ to $\mathbb{R}^m$ is much more subtle, and we have to be really careful when we do so, and it
takes quite a long while before we can prove that, say,
\[ f(x, y, z) = x^2 e^{3z} \sin(2xy) \]
is differentiable.
1 Uniform convergence
In IA Analysis I, we understood what it means for a sequence of real numbers
to converge. Suppose instead we have a sequence of functions. In general, let $E$
be any set (not necessarily a subset of $\mathbb{R}$), and $f_n : E \to \mathbb{R}$ for $n = 1, 2, \cdots$ be
a sequence of functions. What does it mean for $f_n$ to converge to some other
function $f : E \to \mathbb{R}$?
We want this notion of convergence to have properties similar to that of
convergence of numbers. For example, a constant sequence $f_n = f$ has to
converge to $f$, and convergence should not be affected if we change finitely many
terms. It should also act nicely with products and sums.
An obvious first attempt would be to define it in terms of the convergence of
numbers.
Definition (Pointwise convergence). The sequence $f_n$ converges pointwise to $f$ if
\[ f(x) = \lim_{n \to \infty} f_n(x) \]
for all $x$.
This is an easy definition that is simple to check, and has the usual properties
of convergence. However, there is a problem. Ideally, we want to deduce
properties of $f$ from properties of $f_n$. For example, it would be great if continuity
of all $f_n$ implied continuity of $f$, and similarly for integrability and values of
derivatives and integrals. However, it turns out we cannot. The notion of
pointwise convergence is too weak. We will look at many examples where $f$ fails
to preserve the properties of the $f_n$.
Example. Let $f_n : [-1, 1] \to \mathbb{R}$ be defined by $f_n(x) = x^{1/(2n+1)}$. These are all
continuous, but the pointwise limit function is
\[ f_n(x) \to f(x) = \begin{cases} 1 & 0 < x \le 1 \\ 0 & x = 0 \\ -1 & -1 \le x < 0 \end{cases}, \]
which is not continuous. (graph omitted)
Less excitingly, we can let $f_n$ be the piecewise linear function which is $-1$ on
$[-1, -\frac{1}{n}]$, $1$ on $[\frac{1}{n}, 1]$, and linear in between (graph omitted), which
converges to the same function as above.
Example. Let $f_n : [0, 1] \to \mathbb{R}$ be the piecewise linear function formed by joining
$(0, 0)$, $(\frac{1}{n}, n)$, $(\frac{2}{n}, 0)$ and $(1, 0)$. (graph omitted)
The pointwise limit of this function is $f_n(x) \to f(x) = 0$. However, we have
\[ \int_0^1 f_n(x)\,dx = 1 \text{ for all } n; \qquad \int_0^1 f(x)\,dx = 0. \]
So the limit of the integral is not the integral of the limit.
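The moving-spike example can also be checked numerically. This is an illustrative sketch, not part of the notes: the function `f` below encodes the piecewise linear spike, and `integral` is a simple midpoint rule.

```python
# Numerical sketch of the moving-spike example: f_n joins (0,0), (1/n, n),
# (2/n, 0) and (1,0).  Each f_n encloses a triangle of area 1, yet
# f_n(x) -> 0 for every fixed x.
def f(n, x):
    if x <= 1 / n:
        return n * n * x             # rising edge: (0,0) to (1/n, n)
    elif x <= 2 / n:
        return n * n * (2 / n - x)   # falling edge: (1/n, n) to (2/n, 0)
    else:
        return 0.0                   # flat zero on [2/n, 1]

def integral(n, steps=50000):
    # midpoint rule on [0, 1]
    h = 1 / steps
    return sum(f(n, (k + 0.5) * h) for k in range(steps)) * h

areas = [integral(n) for n in (2, 5, 10)]         # each close to 1
pointwise = [f(n, 0.3) for n in (10, 100, 1000)]  # 0 once 2/n < 0.3
```

The integrals stay at 1 while the values at any fixed $x$ eventually vanish, which is exactly the failure of exchanging limit and integral.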
Example. Let $f_n : [0, 1] \to \mathbb{R}$ be defined as
\[ f_n(x) = \begin{cases} 1 & n!\,x \in \mathbb{Z} \\ 0 & \text{otherwise} \end{cases} \]
Since $f_n$ has finitely many discontinuities, it is Riemann integrable. However,
the limit is
\[ f_n(x) \to f(x) = \begin{cases} 1 & x \in \mathbb{Q} \\ 0 & x \notin \mathbb{Q} \end{cases} \]
which is not integrable. So integrability of a function is not preserved by pointwise
limits.
This suggests that we need a stronger notion of convergence. Of course, we
don't want this notion to be too strong. For example, we could define $f_n \to f$
to mean "$f_n = f$ for all sufficiently large $n$", and then any property common to the $f_n$
is obviously inherited by the limit. However, this is clearly silly, since only the
most trivial sequences would converge.
Hence we want to find a middle ground between the two cases: a notion of
convergence that is sufficiently strong to preserve most interesting properties,
without being too trivial. To do so, we can examine what went wrong in the
examples above. In the last example, even though our sequence $f_n$ does indeed
tend pointwise to $f$, different points converge at different rates to $f$. For
example, at $x = 1$, we already have $f_1(1) = f(1) = 1$. However, at $x = (100!)^{-1}$,
$f_{99}(x) = 0$ while $f(x) = 1$. No matter how large $n$ is, we can still find some $x$
where $f_n(x)$ differs a lot from $f(x)$. In other words, if we are given pointwise
convergence, there is no guarantee that for very large $n$, $f_n$ will "look like" $f$,
since there might be some points for which $f_n$ has not started to move towards $f$.
Hence, what we need is for $f_n$ to converge to $f$ at the same pace. This is
known as uniform convergence.
Definition (Uniform convergence). A sequence of functions $f_n : E \to \mathbb{R}$ converges uniformly to $f$ if
\[ (\forall \varepsilon > 0)(\exists N)(\forall x)(\forall n > N)\ |f_n(x) - f(x)| < \varepsilon. \]
Alternatively, we can say
\[ (\forall \varepsilon > 0)(\exists N)(\forall n > N)\ \sup_{x \in E} |f_n(x) - f(x)| < \varepsilon. \]
Note that, similar to pointwise convergence, the definition does not require $E$
to be a subset of $\mathbb{R}$. It could as well be the set $\{\text{Winnie}, \text{Piglet}, \text{Tigger}\}$. However,
many of our theorems about uniform convergence will require $E$ to be a subset
of $\mathbb{R}$, or else we cannot sensibly integrate or differentiate our function.
We can compare this definition with the definition of pointwise convergence:
\[ (\forall \varepsilon > 0)(\forall x)(\exists N)(\forall n > N)\ |f_n(x) - f(x)| < \varepsilon. \]
The only difference is in where the $(\forall x)$ sits, and this is what makes all the
difference. Uniform convergence requires that there is an $N$ that works for every $x$,
while pointwise convergence just requires that for each $x$, we can find an $N$
that works.
It should be clear from the definition that if $f_n \to f$ uniformly, then $f_n \to f$
pointwise. We will show that the converse is false:
Example. Again consider our first example, where $f_n : [-1, 1] \to \mathbb{R}$ is defined
by $f_n(x) = x^{1/(2n+1)}$. If the uniform limit existed, then it must be given by
\[ f_n(x) \to f(x) = \begin{cases} 1 & 0 < x \le 1 \\ 0 & x = 0 \\ -1 & -1 \le x < 0 \end{cases}, \]
since uniform convergence implies pointwise convergence.
We will show that we don't have uniform convergence. Pick $\varepsilon = \frac{1}{4}$. Then
for each $n$, the point $x = 2^{-(2n+1)}$ will have $f_n(x) = \frac{1}{2}$, $f(x) = 1$. So there is some $x$ such
that $|f_n(x) - f(x)| > \varepsilon$. So $f_n \not\to f$ uniformly.
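This lower bound on the sup distance can be seen numerically. A sketch (illustrative, not from the notes): the witness point $x = 2^{-(2n+1)}$ is evaluated directly.

```python
# Sketch: for f_n(x) = x^(1/(2n+1)), the point x = 2^-(2n+1) always
# satisfies f_n(x) = 1/2, while the pointwise limit there is f(x) = 1.
# So sup|f_n - f| >= 1/2 for every n: convergence is not uniform.
gaps = []
for n in (1, 5, 50):
    x = 2.0 ** -(2 * n + 1)
    fn = x ** (1.0 / (2 * n + 1))   # equals 1/2 by construction
    gaps.append(abs(fn - 1.0))      # distance to the limit value f(x) = 1
```

The gap stays pinned at $\frac{1}{2}$ no matter how large $n$ gets.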
Example. Let $f_n : \mathbb{R} \to \mathbb{R}$ be defined by $f_n(x) = \frac{x}{n}$. Then $f_n(x) \to f(x) = 0$
pointwise. However, this convergence is not uniform in $\mathbb{R}$, since $|f_n(x) - f(x)| = \frac{|x|}{n}$,
and this can be arbitrarily large for any $n$.
However, if we restrict $f_n$ to a bounded domain, then the convergence is
uniform. Let the domain be $[-a, a]$ for some positive, finite $a$. Then
\[ \sup |f_n(x) - f(x)| = \sup \frac{|x|}{n} \le \frac{a}{n}. \]
So given $\varepsilon$, pick $N$ such that $N > \frac{a}{\varepsilon}$, and we are done.
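A quick numerical sketch of the bounded-domain case (illustrative choices of $a$ and grid, not from the notes):

```python
# Sketch: f_n(x) = x/n.  On a bounded interval [-a, a] the sup of |f_n|
# is a/n -> 0, so the convergence is uniform there; on all of R the sup
# would be infinite.
a = 3.0
grid = [-a + 2 * a * k / 1000 for k in range(1001)]   # samples of [-a, a]
sups = [max(abs(x) / n for x in grid) for n in (1, 10, 100)]
# sups shrink like a/n, matching the bound in the text
```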
Recall that for sequences of real numbers, we have both convergence and
Cauchy convergence, which we proved to be equivalent. Clearly pointwise
convergence and pointwise Cauchy convergence of functions are then also equivalent. We
will now look into the case of uniform convergence.
Definition (Uniformly Cauchy sequence). A sequence $f_n : E \to \mathbb{R}$ of functions
is uniformly Cauchy if
\[ (\forall \varepsilon > 0)(\exists N)(\forall m, n > N)\ \sup_{x \in E} |f_n(x) - f_m(x)| < \varepsilon. \]
Our first theorem will be that uniform Cauchy convergence and uniform
convergence are equivalent.
Theorem. Let $f_n : E \to \mathbb{R}$ be a sequence of functions. Then $(f_n)$ converges
uniformly if and only if $(f_n)$ is uniformly Cauchy.
Proof.
First suppose that $f_n \to f$ uniformly. Given $\varepsilon$, we know that there is
some $N$ such that
\[ (\forall n > N)\ \sup_{x \in E} |f_n(x) - f(x)| < \frac{\varepsilon}{2}. \]
Then if $n, m > N$, $x \in E$, we have
\[ |f_n(x) - f_m(x)| \le |f_n(x) - f(x)| + |f_m(x) - f(x)| < \varepsilon. \]
So done.
Now suppose $(f_n)$ is uniformly Cauchy. Then $(f_n(x))$ is Cauchy for all $x$. So
it converges. Let
\[ f(x) = \lim_{n \to \infty} f_n(x). \]
We want to show that $f_n \to f$ uniformly. Given $\varepsilon > 0$, choose $N$ such that
whenever $n, m > N$, $x \in E$, we have $|f_n(x) - f_m(x)| < \frac{\varepsilon}{2}$. Letting $m \to \infty$,
$f_m(x) \to f(x)$. So we have $|f_n(x) - f(x)| \le \frac{\varepsilon}{2} < \varepsilon$. So done.
This is an important result. If we are given a concrete sequence of functions,
then the usual way to show it converges is to compute the pointwise limit and
then prove that the convergence is uniform. However, if we are dealing with
sequences of functions in general, this is less likely to work. Instead, it is often
much easier to show that a sequence of functions is uniformly convergent by
showing it is uniformly Cauchy.
We now move on to show that uniform convergence tends to preserve properties of functions.
Theorem (Uniform convergence and continuity). Let $E \subseteq \mathbb{R}$, $x \in E$ and
$f_n, f : E \to \mathbb{R}$. Suppose $f_n \to f$ uniformly, and the $f_n$ are continuous at $x$ for all $n$.
Then $f$ is also continuous at $x$.
In particular, if the $f_n$ are continuous everywhere, then $f$ is continuous everywhere.
This can be concisely phrased as “the uniform limit of continuous functions
is continuous”.
Proof. Let $\varepsilon > 0$. Choose $N$ such that for all $n \ge N$, we have
\[ \sup_{y \in E} |f_n(y) - f(y)| < \varepsilon. \]
Since $f_N$ is continuous at $x$, there is some $\delta$ such that
\[ |x - y| < \delta \Rightarrow |f_N(x) - f_N(y)| < \varepsilon. \]
Then for each $y$ such that $|x - y| < \delta$, we have
\[ |f(x) - f(y)| \le |f(x) - f_N(x)| + |f_N(x) - f_N(y)| + |f_N(y) - f(y)| < 3\varepsilon. \]
Theorem (Uniform convergence and integrals). Let $f_n, f : [a, b] \to \mathbb{R}$ be
Riemann integrable, with $f_n \to f$ uniformly. Then
\[ \int_a^b f_n(t)\,dt \to \int_a^b f(t)\,dt. \]
Proof. We have
\[ \left| \int_a^b f_n(t)\,dt - \int_a^b f(t)\,dt \right| = \left| \int_a^b (f_n(t) - f(t))\,dt \right| \le \int_a^b |f_n(t) - f(t)|\,dt \le \sup_{t \in [a,b]} |f_n(t) - f(t)| \cdot (b - a) \to 0 \text{ as } n \to \infty. \]
This is really the easy part. What we would also want to prove is that if each $f_n$
is integrable and $f_n \to f$ uniformly, then $f$ is integrable. This is indeed true, but we
will not prove it yet. We will come to this later on, at the part where we talk a
lot about integrability.
So far so good. However, the relationship between uniform convergence and
differentiability is more subtle. The uniform limit of differentiable functions
need not be differentiable. Even if it were, the limit of the derivative is not
necessarily the same as the derivative of the limit, even if we just want pointwise
convergence of the derivative.
Example. Let $f_n, f : [-1, 1] \to \mathbb{R}$ be defined by
\[ f_n(x) = |x|^{1 + 1/n}, \qquad f(x) = |x|. \]
Then $f_n \to f$ uniformly (exercise).
Each $f_n$ is differentiable: this is obvious at $x \ne 0$, and at $x = 0$ the
derivative is
\[ f_n'(0) = \lim_{x \to 0} \frac{f_n(x) - f_n(0)}{x} = \lim_{x \to 0} \operatorname{sgn}(x)\,|x|^{1/n} = 0. \]
However, the limit $f$ is not differentiable at $x = 0$.
Example. Let
\[ f_n(x) = \frac{\sin nx}{\sqrt{n}} \]
for all $x \in \mathbb{R}$. Then
\[ \sup_{x \in \mathbb{R}} |f_n(x)| \le \frac{1}{\sqrt{n}} \to 0. \]
So $f_n \to f = 0$ uniformly in $\mathbb{R}$. However, the derivative is
\[ f_n'(x) = \sqrt{n} \cos nx, \]
which does not converge to $f' = 0$, e.g. at $x = 0$.
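A small numerical sketch of this example (illustrative, not from the notes): the uniform bound on $f_n$ shrinks while the derivative at $0$ grows.

```python
import math

# Sketch: f_n(x) = sin(nx)/sqrt(n) converges to 0 uniformly, since
# sup|f_n| <= 1/sqrt(n), but the derivatives f_n'(x) = sqrt(n)*cos(nx)
# blow up, e.g. at x = 0.
ns = (1, 100, 10000)
sup_f = [1 / math.sqrt(n) for n in ns]                   # -> 0
deriv_at_0 = [math.sqrt(n) * math.cos(0.0) for n in ns]  # sqrt(n) -> infinity
```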
Hence, for differentiability to play nice, we need a condition even stronger
than uniform convergence.
Theorem. Let $f_n : [a, b] \to \mathbb{R}$ be a sequence of functions differentiable on $[a, b]$
(at the end points $a$, $b$, this means that the one-sided derivatives exist). Suppose
the following holds:
(i) For some $c \in [a, b]$, $f_n(c)$ converges.
(ii) The sequence of derivatives $(f_n')$ converges uniformly on $[a, b]$.
Then $(f_n)$ converges uniformly on $[a, b]$, and if $f = \lim f_n$, then $f$ is differentiable
with derivative $f'(x) = \lim f_n'(x)$.
Note that we do not assume that the $f_n'$ are continuous or even Riemann integrable.
If they are, then the proof is much easier!
Proof.
If we are given a specific sequence of functions and are asked to prove
that they converge uniformly, we usually take the pointwise limit and show that
the convergence is uniform. However, given a general sequence, this is usually not
helpful. Instead, we can use the Cauchy criterion by showing that the sequence
is uniformly Cauchy.
We want to find an $N$ such that $n, m > N$ implies $\sup |f_n - f_m| < \varepsilon$. We
want to relate this to the derivatives. We might want to use the fundamental
theorem of calculus for this. However, we don't know that the derivative is
integrable! So instead, we go for the mean value theorem.
Fix $x \in [a, b]$. We apply the mean value theorem to $f_n - f_m$ to get
\[ (f_n - f_m)(x) - (f_n - f_m)(c) = (x - c)(f_n' - f_m')(t) \]
for some $t$ between $x$ and $c$.
Taking the supremum and rearranging terms, we obtain
\[ \sup_{x \in [a,b]} |f_n(x) - f_m(x)| \le |f_n(c) - f_m(c)| + (b - a) \sup_{t \in [a,b]} |f_n'(t) - f_m'(t)|. \]
So given any $\varepsilon$, since $(f_n')$ and $(f_n(c))$ converge and are hence Cauchy, there is some
$N$ such that for any $n, m \ge N$,
\[ \sup_{t \in [a,b]} |f_n'(t) - f_m'(t)| < \varepsilon, \qquad |f_n(c) - f_m(c)| < \varepsilon. \]
Hence we obtain
\[ n, m \ge N \Rightarrow \sup_{x \in [a,b]} |f_n(x) - f_m(x)| < (1 + b - a)\varepsilon. \]
So by the Cauchy criterion, we know that $f_n$ converges uniformly. Let $f = \lim f_n$.
Now we have to check differentiability. Let $f_n' \to h$. For any fixed $y \in [a, b]$,
define
\[ g_n(x) = \begin{cases} \dfrac{f_n(x) - f_n(y)}{x - y} & x \ne y \\ f_n'(y) & x = y \end{cases} \]
Then by definition, $f_n$ is differentiable at $y$ iff $g_n$ is continuous at $y$. Also, define
\[ g(x) = \begin{cases} \dfrac{f(x) - f(y)}{x - y} & x \ne y \\ h(y) & x = y \end{cases} \]
Then $f$ is differentiable with derivative $h$ at $y$ iff $g$ is continuous at $y$. However,
we know that $g_n \to g$ pointwise on $[a, b]$, and we know that the $g_n$ are all continuous.
To conclude that $g$ is continuous, we have to show that the convergence is
uniform. To show that $g_n$ converges uniformly, we rely on the Cauchy criterion
and the mean value theorem.
For $x \ne y$, we know that
\[ g_n(x) - g_m(x) = \frac{(f_n - f_m)(x) - (f_n - f_m)(y)}{x - y} = (f_n' - f_m')(t) \]
for some $t$ between $x$ and $y$. This also holds for $x = y$, since
$g_n(y) - g_m(y) = f_n'(y) - f_m'(y)$ by definition.
Let $\varepsilon > 0$. Since $(f_n')$ converges uniformly, there is some $N$ such that for all
$x \ne y$, $n, m > N$, we have
\[ |g_n(x) - g_m(x)| \le \sup |f_n' - f_m'| < \varepsilon. \]
So
\[ n, m \ge N \Rightarrow \sup_{[a,b]} |g_n - g_m| \le \varepsilon, \]
i.e. $g_n$ converges uniformly. Hence the limit function $g$ is continuous, in particular
at $x = y$. So $f$ is differentiable at $y$ and $f'(y) = h(y) = \lim f_n'(y)$.
If we assume additionally that the $f_n'$ are continuous, then there is an easy proof
of this theorem. By the fundamental theorem of calculus, we have
\[ f_n(x) = f_n(c) + \int_c^x f_n'(t)\,dt. \quad (*) \]
Then we get that
\[ \sup_{[a,b]} |f_n(x) - f_m(x)| \le |f_n(c) - f_m(c)| + \sup_{x \in [a,b]} \left| \int_c^x (f_n'(t) - f_m'(t))\,dt \right| \le |f_n(c) - f_m(c)| + (b - a) \sup_{t \in [a,b]} |f_n'(t) - f_m'(t)| < \varepsilon \]
for sufficiently large $n, m > N$.
So by the Cauchy criterion, $f_n \to f$ uniformly for some function $f : [a, b] \to \mathbb{R}$.
Since the $f_n'$ are continuous, $h = \lim_{n \to \infty} f_n'$ is continuous and hence integrable.
Taking the limit of $(*)$, we get
\[ f(x) = f(c) + \int_c^x h(t)\,dt. \]
Then the fundamental theorem of calculus says that $f$ is differentiable and
$f'(x) = h(x) = \lim f_n'(x)$. So done.
Finally, we have a small proposition that can come handy.
Proposition.
(i) Let $f_n, g_n : E \to \mathbb{R}$ be sequences with $f_n \to f$ and $g_n \to g$ uniformly on $E$.
Then for any $a, b \in \mathbb{R}$, $af_n + bg_n \to af + bg$ uniformly.
(ii) Let $f_n \to f$ uniformly, and let $g : E \to \mathbb{R}$ be bounded. Then $gf_n : E \to \mathbb{R}$
converges uniformly to $gf$.
Proof.
(i) Easy exercise.
(ii) Say $|g(x)| < M$ for all $x \in E$. Then
\[ |(gf_n)(x) - (gf)(x)| \le M |f_n(x) - f(x)|. \]
So
\[ \sup_E |gf_n - gf| \le M \sup_E |f_n - f| \to 0. \]
Note that (ii) is false without assuming boundedness. An easy example is to
take $f_n = \frac{1}{n}$, the constant function on $\mathbb{R}$, and $g(x) = x$. Then $f_n \to 0$ uniformly, but
$(gf_n)(x) = \frac{x}{n}$ does not converge uniformly to 0.
2 Series of functions
2.1 Convergence of series
Recall that in Analysis I, we studied the convergence of a series of numbers.
Here we will look at a series of functions. The definitions are almost exactly the
same.
Definition (Convergence of series). Let $g_n : E \to \mathbb{R}$ be a sequence of functions.
Then we say the series $\sum_{n=1}^\infty g_n$ converges at a point $x \in E$ if the sequence of
partial sums
\[ f_n = \sum_{j=1}^n g_j \]
converges at $x$. The series converges uniformly if $f_n$ converges uniformly.
Definition (Absolute convergence). $\sum g_n$ converges absolutely at a point $x \in E$
if $\sum |g_n|$ converges at $x$. $\sum g_n$ converges absolutely uniformly if $\sum |g_n|$ converges
uniformly.
Proposition. Let $g_n : E \to \mathbb{R}$. If $\sum g_n$ converges absolutely uniformly, then
$\sum g_n$ converges uniformly.
Proof.
Again, we don't have a candidate for the limit. So we use the Cauchy
criterion.
Let $f_n = \sum_{j=1}^n g_j$ and $h_n(x) = \sum_{j=1}^n |g_j(x)|$ be the partial sums. Then for $n > m$,
we have
\[ |f_n(x) - f_m(x)| = \left| \sum_{j=m+1}^n g_j(x) \right| \le \sum_{j=m+1}^n |g_j(x)| = |h_n(x) - h_m(x)|. \]
By hypothesis, we have
\[ \sup_{x \in E} |h_n(x) - h_m(x)| \to 0 \text{ as } n, m \to \infty. \]
So we get
\[ \sup_{x \in E} |f_n(x) - f_m(x)| \to 0 \text{ as } n, m \to \infty. \]
So the result follows from the Cauchy criterion.
It is important to remember that uniform convergence plus absolute pointwise
convergence does not imply absolute uniform convergence.
Example. Consider the series
\[ \sum_{n=1}^\infty \frac{(-1)^n}{n} x^n. \]
This converges absolutely for every $x \in [0, 1)$ since it is bounded by the geometric
series. In fact, it converges uniformly on $[0, 1)$ (see example sheet). However,
it does not converge absolutely uniformly on $[0, 1)$.
We can consider the difference in partial sums
\[ \sum_{j=m}^n \left| \frac{(-1)^j}{j} x^j \right| = \sum_{j=m}^n \frac{1}{j} |x|^j \ge \left( \frac{1}{m} + \frac{1}{m + 1} + \cdots + \frac{1}{n} \right) |x|^n. \]
For each $N$, we can make this difference large enough by picking a really large
$n$ (the harmonic series diverges), and then making $x$ close enough to 1. So the supremum is unbounded.
Theorem (Weierstrass M-test). Let $g_n : E \to \mathbb{R}$ be a sequence of functions.
Suppose there is some sequence $M_n$ such that for all $n$, we have
\[ \sup_{x \in E} |g_n(x)| \le M_n. \]
If $\sum M_n$ converges, then $\sum g_n$ converges absolutely uniformly.
This is in fact a very easy result, and we could as well reproduce the proof
whenever we need it. However, this pattern of proving absolute uniform convergence
is so common that we prove it as a test.
Proof. Let $f_n = \sum_{j=1}^n |g_j|$ be the partial sums. Then for $n > m$, we have
\[ |f_n(x) - f_m(x)| = \sum_{j=m+1}^n |g_j(x)| \le \sum_{j=m+1}^n M_j. \]
Taking the supremum, we have
\[ \sup_x |f_n(x) - f_m(x)| \le \sum_{j=m+1}^n M_j \to 0 \text{ as } n, m \to \infty. \]
So done by the Cauchy criterion.
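The M-test pattern can be illustrated numerically. This is a sketch with an illustrative series (not from the notes): $g_n(x) = \sin(nx)/n^2$ has $\sup |g_n| \le 1/n^2 =: M_n$, and $\sum M_n$ converges.

```python
import math

# Sketch applying the M-test: g_n(x) = sin(nx)/n^2 satisfies
# sup|g_n| <= 1/n^2, and sum 1/n^2 converges, so the series converges
# absolutely uniformly on R.  The tail of sum M_n dominates the tail of
# the series uniformly over a sample grid.
grid = [k * 0.01 for k in range(629)]          # samples of [0, 2*pi]

def tail_sup(m, n):
    # sup over the grid of |sum_{j=m}^{n} sin(jx)/j^2|
    return max(abs(sum(math.sin(j * x) / j**2 for j in range(m, n + 1)))
               for x in grid)

M_tail = sum(1 / j**2 for j in range(100, 201))   # M-test bound for the tail
t = tail_sup(100, 200)                            # always <= M_tail
```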
2.2 Power series
A particularly interesting kind of series is a power series. We have already met
these in IA Analysis I and proved some results about them. However, our results
were pointwise results, discussing how $\sum c_n(x - a)^n$ behaves at a particular point $x$.
Here we will quickly look into how the power series behaves as a function of $x$.
In particular, we want to know whether it converges absolutely uniformly.
Theorem. Let $\sum_{n=0}^\infty c_n(x - a)^n$ be a real power series. Then there exists a unique
number $R \in [0, +\infty]$ (called the radius of convergence) such that
(i) If $|x - a| < R$, then $\sum c_n(x - a)^n$ converges absolutely.
(ii) If $|x - a| > R$, then $\sum c_n(x - a)^n$ diverges.
(iii) If $R > 0$ and $0 < r < R$, then $\sum c_n(x - a)^n$ converges absolutely uniformly
on $[a - r, a + r]$.
We say that the sum converges locally absolutely uniformly inside the circle
of convergence, i.e. for every point $y \in (a - R, a + R)$, there is some open
interval around $y$ on which the sum converges absolutely uniformly.
These results hold for complex power series as well, but for concreteness we will
just do it for real series.
Note that the first two statements are things we already know from IA
Analysis I, and we are not going to prove them.
Proof. See IA Analysis I for (i) and (ii).
For (iii), note that from (i), taking $x = a + r$, we know that $\sum |c_n| r^n$ is
convergent. But we know that if $x \in [a - r, a + r]$, then
\[ |c_n(x - a)^n| \le |c_n| r^n. \]
So the result follows from the Weierstrass M-test by taking $M_n = |c_n| r^n$.
Note that uniform convergence need not hold on the entire interval of convergence.
Example. Consider $\sum x^n$. This converges for $x \in (-1, 1)$, but uniform convergence fails on $(-1, 1)$, since the tail
\[ \sum_{j=m}^n x^j = x^m \sum_{j=0}^{n-m} x^j \to \frac{x^m}{1 - x} \text{ as } n \to \infty. \]
This is not uniformly small, since we can make it large by picking $x$ close to 1.
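A numerical sketch of this tail behaviour (illustrative parameters, not from the notes): on $[-r, r]$ with $r < 1$ the tail is at most $r^m/(1-r)$, but near $x = 1$ it stays large.

```python
# Sketch: tails of the geometric series sum x^j.  On [-0.5, 0.5] the tail
# from j = 10 is tiny (at most 0.5**10 / 0.5), but at x = 0.999 the same
# tail is still huge -- so convergence is uniform on [-r, r] with r < 1,
# yet not uniform on all of (-1, 1).
def tail(m, x, n=2000):
    return sum(x**j for j in range(m, n + 1))

on_half = tail(10, 0.5)        # well under 0.002
near_one = tail(10, 0.999)     # still in the hundreds despite m = 10
```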
Theorem (Termwise differentiation of power series). Suppose $\sum c_n(x - a)^n$ is
a real power series with radius of convergence $R > 0$. Then
(i) The "derived series"
\[ \sum_{n=1}^\infty n c_n (x - a)^{n-1} \]
has radius of convergence $R$.
(ii) The function defined by $f(x) = \sum c_n(x - a)^n$, $x \in (a - R, a + R)$, is
differentiable with derivative $f'(x) = \sum n c_n (x - a)^{n-1}$ within the (open)
circle of convergence.
Proof.
(i) Let $R_1$ be the radius of convergence of the derived series. We know
\[ |c_n(x - a)^n| = |c_n| |x - a|^{n-1} |x - a| \le |n c_n (x - a)^{n-1}| |x - a|. \]
Hence if the derived series $\sum n c_n (x - a)^{n-1}$ converges absolutely for some
$x$, then so does $\sum c_n(x - a)^n$. So $R_1 \le R$.
Suppose that the inequality is strict, i.e. $R_1 < R$. Then there are $r_1, r$
such that $R_1 < r_1 < r < R$, where $\sum n |c_n| r_1^{n-1}$ diverges while $\sum |c_n| r^n$
converges. But this cannot be true, since $n |c_n| r_1^{n-1} \le |c_n| r^n$ for sufficiently
large $n$. So we must have $R_1 = R$.
(ii) Let $f_n(x) = \sum_{j=0}^n c_j(x - a)^j$. Then $f_n'(x) = \sum_{j=1}^n j c_j (x - a)^{j-1}$. We want
to use the result that the derivative of the limit is the limit of the derivatives. This
requires that $f_n$ converges at a point, and that $f_n'$ converges uniformly.
The first is obviously true, and we know that $f_n'$ converges uniformly on
$[a - r, a + r]$ for any $r < R$ (by (i) and part (iii) of the previous theorem). So for each $x_0$, there is some interval containing
$x_0$ on which $f_n'$ is uniformly convergent. So on this interval, we know that
\[ f(x) = \lim_{n \to \infty} f_n(x) \]
is differentiable with
\[ f'(x) = \lim_{n \to \infty} f_n'(x) = \sum_{j=1}^\infty j c_j (x - a)^{j-1}. \]
In particular,
\[ f'(x_0) = \sum_{j=1}^\infty j c_j (x_0 - a)^{j-1}. \]
Since this is true for all $x_0$, the result follows.
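Termwise differentiation can be checked on a familiar series. This sketch uses $c_j = 1/j!$ (the exponential series, an illustrative choice not taken from the notes), for which $R = \infty$ and the derived series is the same series shifted by one index.

```python
import math

# Sketch of termwise differentiation: with c_j = 1/j!, both the partial
# sums f_n and the derived partial sums f_n' approximate exp(x), since
# the derivative of exp is exp.
def partial(x, n):
    return sum(x**j / math.factorial(j) for j in range(n + 1))

def derived(x, n):
    # sum_{j=1}^{n} j * c_j * x^(j-1), which telescopes to the same series
    return sum(j * x**(j - 1) / math.factorial(j) for j in range(1, n + 1))

x = 1.3
err_f = abs(partial(x, 30) - math.exp(x))
err_fprime = abs(derived(x, 30) - math.exp(x))
```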
3 Uniform continuity and integration
3.1 Uniform continuity
Recall that we had a rather weak notion of convergence, known as pointwise
convergence, and then promoted it to uniform convergence. The process of this
promotion was to replace the condition "for each $x$, we can find an $N$" with "we can
find an $N$ that works for each $x$". We are going to do the same for continuity to
obtain uniform continuity.
Definition (Uniform continuity). Let $E \subseteq \mathbb{R}$ and $f : E \to \mathbb{R}$. We say that $f$ is
uniformly continuous on $E$ if
\[ (\forall \varepsilon > 0)(\exists \delta > 0)(\forall x)(\forall y)\ |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon. \]
Compare this to the definition of continuity:
\[ (\forall \varepsilon > 0)(\forall x)(\exists \delta > 0)(\forall y)\ |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon. \]
Again, we have shifted the $(\forall x)$ out of the $(\exists \delta)$ quantifier. The difference is
that in regular continuity, $\delta$ can depend on our choice of $x$, but in uniform
continuity, it depends only on $\varepsilon$. Again, clearly a uniformly continuous function
is continuous.
In general, the converse is not true, as we will soon see in two examples.
However, the converse is true in a lot of cases.
Theorem. Any continuous function on a closed, bounded interval is uniformly
continuous.
Proof.
We are going to prove this by contradiction. Suppose $f : [a, b] \to \mathbb{R}$ is not
uniformly continuous. Since $f$ is not uniformly continuous, there is some $\varepsilon > 0$
such that for each $\delta = \frac{1}{n}$, there are some $x_n, y_n$ such that $|x_n - y_n| < \frac{1}{n}$ but
\[ |f(x_n) - f(y_n)| > \varepsilon. \]
Since we are on a closed, bounded interval, by Bolzano-Weierstrass, $(x_n)$ has a
convergent subsequence $(x_{n_i}) \to x$. Then we also have $y_{n_i} \to x$. So by continuity,
we must have $f(x_{n_i}) \to f(x)$ and $f(y_{n_i}) \to f(x)$. But $|f(x_{n_i}) - f(y_{n_i})| > \varepsilon$ for
all $n_i$. This is a contradiction.
Note that we proved this in the special case where the domain is $[a, b]$ and
the codomain is $\mathbb{R}$. In fact, $[a, b]$ can be replaced by any compact metric space, and $\mathbb{R}$
by any metric space. This is since all we need is for Bolzano-Weierstrass to hold
in the domain, i.e. the domain is sequentially compact (ignore this comment if
you have not taken IB Metric and Topological Spaces).
Instead of a contradiction, we can also give a direct proof of this statement,
using the Heine-Borel theorem, which says that $[0, 1]$ is compact.
While this is a nice theorem, in general a continuous function need not be
uniformly continuous.
Example. Consider $f : (0, 1] \to \mathbb{R}$ given by $f(x) = \frac{1}{x}$. This is not uniformly
continuous, since when we get very close to 0, a small change in $x$ produces a
large change in $\frac{1}{x}$.
In particular, for any $\delta < 1$, take $x < \delta$ and $y = \frac{x}{2}$. Then $|x - y| = \frac{x}{2} < \delta$ but
\[ |f(x) - f(y)| = \frac{1}{x} > 1. \]
In this example, the function is unbounded. However, even bounded functions
can fail to be uniformly continuous.
Example. Let $f : (0, 1] \to \mathbb{R}$, $f(x) = \sin \frac{1}{x}$. We let
\[ x_n = \frac{1}{2n\pi}, \qquad y_n = \frac{1}{(2n + \frac{1}{2})\pi}. \]
Then we have
\[ |f(x_n) - f(y_n)| = |0 - 1| = 1, \]
while
\[ |x_n - y_n| = \frac{1}{2\pi n(4n + 1)} \to 0. \]
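These witness pairs can be tabulated numerically. A sketch (illustrative, not from the notes):

```python
import math

# Sketch: sin(1/x) on (0, 1] is bounded but not uniformly continuous.
# The pairs x_n = 1/(2*n*pi), y_n = 1/((2*n + 0.5)*pi) get arbitrarily
# close while f(x_n) = 0 and f(y_n) = 1 stay one apart.
def f(x):
    return math.sin(1 / x)

seps, jumps = [], []
for n in (1, 10, 100):
    xn = 1 / (2 * n * math.pi)
    yn = 1 / ((2 * n + 0.5) * math.pi)
    seps.append(abs(xn - yn))          # shrinks to 0
    jumps.append(abs(f(xn) - f(yn)))   # stays near 1
```

No single $\delta$ can serve every $x$: the separations vanish while the function gap does not.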
3.2 Applications to Riemann integrability
We can apply the idea of uniform continuity to Riemann integration.
We first quickly recap and summarize the things we know about Riemann integrals
from IA Analysis I. Let $f : [a, b] \to \mathbb{R}$ be a bounded function, say $m \le f(x) \le M$ for
all $x \in [a, b]$. Consider a partition of $[a, b]$
\[ P = \{a_0, a_1, \cdots, a_n\}, \]
i.e. $a = a_0 < a_1 < a_2 < \cdots < a_n = b$. The upper sum is defined as
\[ U(P, f) = \sum_{j=0}^{n-1} (a_{j+1} - a_j) \sup_{[a_j, a_{j+1}]} f. \]
Similarly, the lower sum is defined as
\[ L(P, f) = \sum_{j=0}^{n-1} (a_{j+1} - a_j) \inf_{[a_j, a_{j+1}]} f. \]
It is clear from the definition that
\[ m(b - a) \le L(P, f) \le U(P, f) \le M(b - a). \]
Also, if a partition $P'$ is a refinement of a partition $P$, i.e. it contains all the
points of $P$ and possibly more, then
\[ L(P, f) \le L(P', f) \le U(P', f) \le U(P, f). \]
It thus follows that if $P_1$ and $P_2$ are arbitrary partitions, then
\[ L(P_1, f) \le U(P_2, f), \]
which we can show by finding a partition that is simultaneously a refinement of
$P_1$ and $P_2$. The importance of this result is that
\[ \sup_P L(P, f) \le \inf_P U(P, f), \]
where we take extrema over all partitions $P$. We define these to be the upper
and lower integrals
\[ I^*(f) = \inf_P U(P, f), \qquad I_*(f) = \sup_P L(P, f). \]
So we know that
\[ m(b - a) \le I_*(f) \le I^*(f) \le M(b - a). \]
Now given any $\varepsilon > 0$, by definition of the infimum, there is a partition $P_1$ such
that
\[ U(P_1, f) < I^*(f) + \frac{\varepsilon}{2}. \]
Similarly, there is a partition $P_2$ such that
\[ L(P_2, f) > I_*(f) - \frac{\varepsilon}{2}. \]
So if we let $P = P_1 \cup P_2$, then $P$ is a refinement of both $P_1$ and $P_2$. So
\[ U(P, f) < I^*(f) + \frac{\varepsilon}{2} \quad \text{and} \quad L(P, f) > I_*(f) - \frac{\varepsilon}{2}. \]
Combining these, we know that
\[ 0 \le I^*(f) - I_*(f) \le U(P, f) - L(P, f) < I^*(f) - I_*(f) + \varepsilon. \]
We now define what it means to be Riemann integrable.
Definition (Riemann integrability). A bounded function $f : [a, b] \to \mathbb{R}$ is
Riemann integrable on $[a, b]$ if $I^*(f) = I_*(f)$. We write
\[ \int_a^b f(x)\,dx = I^*(f) = I_*(f). \]
Then the Riemann criterion says
Theorem (Riemann criterion for integrability). A bounded function $f : [a, b] \to \mathbb{R}$
is Riemann integrable if and only if for every $\varepsilon > 0$, there is a partition $P$ such
that
\[ U(P, f) - L(P, f) < \varepsilon. \]
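The Riemann criterion is easy to see in action on a concrete function. A sketch (the choice $f(x) = x^2$ on $[0,1]$ with a uniform partition is illustrative, not from the notes):

```python
# Sketch of the Riemann criterion for f(x) = x^2 on [0, 1] with a uniform
# partition.  f is increasing, so on each cell the sup is attained at the
# right endpoint and the inf at the left; U - L telescopes to
# (f(1) - f(0))/n, which can be made smaller than any epsilon.
def sums(n):
    f = lambda x: x * x
    pts = [k / n for k in range(n + 1)]
    U = sum((pts[j + 1] - pts[j]) * f(pts[j + 1]) for j in range(n))
    L = sum((pts[j + 1] - pts[j]) * f(pts[j]) for j in range(n))
    return U, L

U, L = sums(1000)
gap = U - L        # about 1/1000, and L <= 1/3 <= U
```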
That’s the end of our recap. Now we have a new theorem.
Theorem. If $f : [a, b] \to [A, B]$ is integrable and $g : [A, B] \to \mathbb{R}$ is continuous,
then $g \circ f : [a, b] \to \mathbb{R}$ is integrable.
Proof.
Let $\varepsilon > 0$. Since $g$ is continuous on a closed bounded interval, $g$ is uniformly continuous. So we can find
$\delta = \delta(\varepsilon) > 0$ such that for any $x, y \in [A, B]$, if $|x - y| < \delta$, then $|g(x) - g(y)| < \varepsilon$.
Since $f$ is integrable, for arbitrary $\varepsilon'$, we can find a partition $P = \{a = a_0 < a_1 < \cdots < a_n = b\}$ such that
\[ U(P, f) - L(P, f) = \sum_{j=0}^{n-1} (a_{j+1} - a_j) \Big( \sup_{I_j} f - \inf_{I_j} f \Big) < \varepsilon', \quad (*) \]
where $I_j = [a_j, a_{j+1}]$.
Our objective is to make $U(P, g \circ f) - L(P, g \circ f)$ small. By uniform continuity
of $g$, if $\sup_{I_j} f - \inf_{I_j} f$ is less than $\delta$, then $\sup_{I_j} g \circ f - \inf_{I_j} g \circ f$ will be less
than $\varepsilon$. We like these sorts of intervals. So we let
\[ J = \Big\{ j : \sup_{I_j} f - \inf_{I_j} f < \delta \Big\}. \]
We now show properly that these intervals are indeed "nice". For any $j \in J$, for
all $x, y \in I_j$, we must have
\[ |f(x) - f(y)| \le \sup_{z_1, z_2 \in I_j} (f(z_1) - f(z_2)) = \sup_{I_j} f - \inf_{I_j} f < \delta. \]
Hence, for each $j \in J$ and all $x, y \in I_j$, we know that
\[ |g \circ f(x) - g \circ f(y)| < \varepsilon. \]
Hence, we must have
\[ \sup_{x, y \in I_j} \big( g \circ f(x) - g \circ f(y) \big) \le \varepsilon. \]
So
\[ \sup_{I_j} g \circ f - \inf_{I_j} g \circ f \le \varepsilon. \]
Hence we know that
\[ U(P, g \circ f) - L(P, g \circ f) = \sum_{j=0}^{n-1} (a_{j+1} - a_j) \Big( \sup_{I_j} g \circ f - \inf_{I_j} g \circ f \Big) \]
\[ = \sum_{j \in J} (a_{j+1} - a_j) \Big( \sup_{I_j} g \circ f - \inf_{I_j} g \circ f \Big) + \sum_{j \notin J} (a_{j+1} - a_j) \Big( \sup_{I_j} g \circ f - \inf_{I_j} g \circ f \Big) \]
\[ \le \varepsilon(b - a) + 2 \sup_{[A,B]} |g| \sum_{j \notin J} (a_{j+1} - a_j). \]
Hence, it suffices to make $\sum_{j \notin J} (a_{j+1} - a_j)$ small. From $(*)$, we know that we
must have
\[ \sum_{j \notin J} (a_{j+1} - a_j) < \frac{\varepsilon'}{\delta}, \]
or else $U(P, f) - L(P, f) \ge \varepsilon'$. So we can bound
\[ U(P, g \circ f) - L(P, g \circ f) \le \varepsilon(b - a) + 2 \sup_{[A,B]} |g| \cdot \frac{\varepsilon'}{\delta}. \]
So if we are given an $\varepsilon$ at the beginning, we can get a $\delta$ by uniform continuity.
Afterwards, we pick $\varepsilon'$ such that $\varepsilon' = \varepsilon\delta$. Then we have shown that given any $\varepsilon$,
there exists a partition such that
\[ U(P, g \circ f) - L(P, g \circ f) \le \Big( (b - a) + 2 \sup_{[A,B]} |g| \Big) \varepsilon. \]
Then the claim follows from the Riemann criterion.
As an immediate consequence, we know that any continuous function is
integrable, since we can just let $f$ be the identity function, which we can easily
show to be integrable.
Corollary. A continuous function $g : [a, b] \to \mathbb{R}$ is integrable.
Theorem. Let $f_n : [a, b] \to \mathbb{R}$ be bounded and integrable for all $n$. Then if
$(f_n)$ converges uniformly to a function $f : [a, b] \to \mathbb{R}$, then $f$ is bounded and
integrable.
Proof. Let
$$c_n = \sup_{[a,b]} |f_n - f|.$$
Then uniform convergence says that $c_n \to 0$. By definition, for each $x$, we have
$$f_n(x) - c_n \leq f(x) \leq f_n(x) + c_n.$$
Since $f_n$ is bounded, this implies that $f$ is bounded by $\sup |f_n| + c_n$. Also, for any $x, y \in [a, b]$, we know
$$f(x) - f(y) \leq (f_n(x) - f_n(y)) + 2c_n.$$
Hence for any partition $P$,
$$U(P, f) - L(P, f) \leq U(P, f_n) - L(P, f_n) + 2(b - a)c_n.$$
So given $\varepsilon > 0$, first choose $n$ such that $2(b - a)c_n < \frac{\varepsilon}{2}$. Then choose $P$ such that
$$U(P, f_n) - L(P, f_n) < \frac{\varepsilon}{2}.$$
Then for this partition, $U(P, f) - L(P, f) < \varepsilon$.
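The same quantity $c_n$ also controls the integrals themselves, since $\left|\int f - \int f_n\right| \leq \int |f - f_n| \leq (b - a)c_n$. A minimal numerical sketch (the specific functions and the midpoint Riemann sum are illustrative choices, not from the course):

```python
import math

def riemann(f, a, b, m=10000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / m
    return sum(f(a + (i + 0.5) * h) for i in range(m)) * h

# f_n -> f uniformly on [0, 1], with c_n = sup |f_n - f| <= 1/n.
f = lambda x: x * x
def f_n(n):
    return lambda x: x * x + math.cos(x) / n

a, b = 0.0, 1.0
for n in (10, 100, 1000):
    c_n = 1.0 / n
    gap = abs(riemann(f_n(n), a, b) - riemann(f, a, b))
    # |integral of f_n - integral of f| <= (b - a) * c_n
    assert gap <= (b - a) * c_n
```
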
Most of the theory of Riemann integration extends to vector-valued or
complex-valued functions (of a single real variable).
Definition (Riemann integrability of vector-valued function). Let $\mathbf{f} : [a, b] \to \mathbb{R}^n$ be a vector-valued function. Write
$$\mathbf{f}(x) = (f_1(x), f_2(x), \cdots, f_n(x))$$
for all $x \in [a, b]$. Then $\mathbf{f}$ is Riemann integrable iff $f_j : [a, b] \to \mathbb{R}$ is integrable for all $j$. The integral is defined as
$$\int_a^b \mathbf{f}(x)\; \mathrm{d}x = \left(\int_a^b f_1(x)\; \mathrm{d}x, \cdots, \int_a^b f_n(x)\; \mathrm{d}x\right) \in \mathbb{R}^n.$$
It is easy to see that most basic properties of integrals of real functions extend
to the vector-valued case. A less obvious fact is the following.
Proposition. If $\mathbf{f} : [a, b] \to \mathbb{R}^n$ is integrable, then the function $\|\mathbf{f}\| : [a, b] \to \mathbb{R}$ defined by
$$\|\mathbf{f}\|(x) = \|\mathbf{f}(x)\| = \sqrt{\sum_{j=1}^n f_j^2(x)}$$
is integrable, and
$$\left\|\int_a^b \mathbf{f}(x)\; \mathrm{d}x\right\| \leq \int_a^b \|\mathbf{f}\|(x)\; \mathrm{d}x.$$
This is a rather random result, but we include it here because it will be
helpful at some point in time.
Proof. The integrability of $\|\mathbf{f}\|$ is clear since squaring and taking square roots are continuous, and a finite sum of integrable functions is integrable. To show the inequality, we let
$$\mathbf{v} = (v_1, \cdots, v_n) = \int_a^b \mathbf{f}(x)\; \mathrm{d}x.$$
Then by definition,
$$v_j = \int_a^b f_j(x)\; \mathrm{d}x.$$
If $\mathbf{v} = \mathbf{0}$, then we are done. Otherwise, we have
$$\begin{aligned}
\|\mathbf{v}\|^2 &= \sum_{j=1}^n v_j^2\\
&= \sum_{j=1}^n v_j \int_a^b f_j(x)\; \mathrm{d}x\\
&= \int_a^b \sum_{j=1}^n (v_j f_j(x))\; \mathrm{d}x\\
&= \int_a^b \mathbf{v} \cdot \mathbf{f}(x)\; \mathrm{d}x.
\end{aligned}$$
Using the Cauchy-Schwarz inequality, we get
$$\|\mathbf{v}\|^2 \leq \int_a^b \|\mathbf{v}\| \|\mathbf{f}(x)\|\; \mathrm{d}x = \|\mathbf{v}\| \int_a^b \|\mathbf{f}\|\; \mathrm{d}x.$$
Divide by $\|\mathbf{v}\|$ and we are done.
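As a quick numerical sanity check of this inequality (the function is an illustrative choice): take $\mathbf{f}(x) = (\cos x, \sin x)$ on $[0, \pi]$, so $\|\mathbf{f}(x)\| = 1$ everywhere, $\int \mathbf{f} = (0, 2)$ has norm $2$, and $\int \|\mathbf{f}\| = \pi$.

```python
import math

def riemann(f, a, b, m=20000):
    """Midpoint Riemann sum for f over [a, b]."""
    h = (b - a) / m
    return sum(f(a + (i + 0.5) * h) for i in range(m)) * h

a, b = 0.0, math.pi
v1 = riemann(math.cos, a, b)  # first component of the vector integral (about 0)
v2 = riemann(math.sin, a, b)  # second component (about 2)

norm_of_integral = math.hypot(v1, v2)            # norm of the integral
integral_of_norm = riemann(lambda x: 1.0, a, b)  # integral of the norm (= pi)

assert norm_of_integral <= integral_of_norm      # 2 <= pi, as the proposition predicts
```
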
3.3 Non-examinable fun*
Since there is time left in the lecture, we’ll write down a really remarkable result.
Theorem (Weierstrass Approximation Theorem*). If $f : [0, 1] \to \mathbb{R}$ is continuous, then there exists a sequence of polynomials $(p_n)$ such that $p_n \to f$ uniformly. In fact, the sequence can be given by
$$p_n(x) = \sum_{k=0}^n f\left(\frac{k}{n}\right) \binom{n}{k} x^k (1 - x)^{n-k}.$$
These are known as Bernstein polynomials.
Of course, there are many different sequences of polynomials converging uniformly to $f$. Apart from the silly examples like adding $\frac{1}{n}$ to each $p_n$, there can also be vastly different ways of constructing such polynomial sequences.
Proof. For convenience, let
$$p_{n,k}(x) = \binom{n}{k} x^k (1 - x)^{n-k}.$$
First we need a few facts about these functions. Clearly, $p_{n,k}(x) \geq 0$ for all $x \in [0, 1]$. Also, by the binomial theorem,
$$\sum_{k=0}^n \binom{n}{k} x^k y^{n-k} = (x + y)^n.$$
So we get
$$\sum_{k=0}^n p_{n,k}(x) = 1.$$
Differentiating the binomial theorem with respect to $x$ and putting $y = 1 - x$ gives
$$\sum_{k=0}^n \binom{n}{k} k x^{k-1} (1 - x)^{n-k} = n.$$
We multiply by $x$ to obtain
$$\sum_{k=0}^n \binom{n}{k} k x^k (1 - x)^{n-k} = nx.$$
In other words,
$$\sum_{k=0}^n k p_{n,k}(x) = nx.$$
Differentiating once more gives
$$\sum_{k=0}^n k(k - 1) p_{n,k}(x) = n(n - 1)x^2.$$
Adding these two results gives
$$\sum_{k=0}^n k^2 p_{n,k}(x) = n^2 x^2 + nx(1 - x).$$
We will write our results in a rather weird way:
$$\sum_{k=0}^n (nx - k)^2 p_{n,k}(x) = n^2 x^2 - 2nx \cdot nx + n^2 x^2 + nx(1 - x) = nx(1 - x). \tag{$\dagger$}$$
This is what we really need.
Now given $\varepsilon$, since $f$ is continuous, $f$ is uniformly continuous. So pick $\delta$ such that $|f(x) - f(y)| < \varepsilon$ whenever $|x - y| < \delta$.
Since $\sum p_{n,k}(x) = 1$, we have $f(x) = \sum p_{n,k}(x) f(x)$. Now for each fixed $x$, we can write
$$\begin{aligned}
|p_n(x) - f(x)| &= \left|\sum_{k=0}^n \left(f\left(\frac{k}{n}\right) - f(x)\right) p_{n,k}(x)\right|\\
&\leq \sum_{k=0}^n \left|f\left(\frac{k}{n}\right) - f(x)\right| p_{n,k}(x)\\
&= \sum_{k : |x - k/n| < \delta} \left|f\left(\frac{k}{n}\right) - f(x)\right| p_{n,k}(x) + \sum_{k : |x - k/n| \geq \delta} \left|f\left(\frac{k}{n}\right) - f(x)\right| p_{n,k}(x)\\
&\leq \varepsilon \sum_{k=0}^n p_{n,k}(x) + 2 \sup_{[0,1]} |f| \sum_{k : |x - k/n| \geq \delta} p_{n,k}(x)\\
&\leq \varepsilon + 2 \sup_{[0,1]} |f| \cdot \frac{1}{\delta^2} \sum_{k : |x - k/n| \geq \delta} \left(x - \frac{k}{n}\right)^2 p_{n,k}(x)\\
&\leq \varepsilon + 2 \sup_{[0,1]} |f| \cdot \frac{1}{\delta^2} \sum_{k=0}^n \left(x - \frac{k}{n}\right)^2 p_{n,k}(x)\\
&= \varepsilon + \frac{2 \sup |f|}{\delta^2 n^2} \, nx(1 - x)\\
&\leq \varepsilon + \frac{2 \sup |f|}{\delta^2 n}.
\end{aligned}$$
Hence given any $\varepsilon$ and $\delta$, we can pick $n$ sufficiently large that $|p_n(x) - f(x)| < 2\varepsilon$. This $n$ is picked independently of $x$. So done.
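The Bernstein construction is easy to test numerically. A small sketch (the target function $|x - \frac{1}{2}|$ is an illustrative choice; for it, convergence is slow, roughly like $1/\sqrt{n}$):

```python
import math

def bernstein(f, n, x):
    """Evaluate p_n(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)."""
    return sum(f(k / n) * math.comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda t: abs(t - 0.5)  # continuous, but not differentiable at 1/2
grid = [i / 200 for i in range(201)]

def uniform_error(n):
    """Approximate sup-norm distance between p_n and f on a fine grid."""
    return max(abs(bernstein(f, n, x) - f(x)) for x in grid)

# The uniform error shrinks as n grows.
assert uniform_error(64) < uniform_error(16) < uniform_error(4)
```
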
Unrelatedly, we might be interested in the question: when is a function Riemann integrable? A possible answer is if it satisfies the Riemann integrability criterion, but this is not really helpful. We know that a function is integrable if it is continuous. But it need not be. It could be discontinuous at finitely many points and still be integrable. If it has countably many discontinuities, then we can still integrate it. How many points of discontinuity can we accommodate if we want to keep integrability?
To answer this question, we have Lebesgue's theorem. To state this theorem, we need the following definition:
Definition (Lebesgue measure zero*). A subset $A \subseteq \mathbb{R}$ is said to have (Lebesgue) measure zero if for any $\varepsilon > 0$, there exists a countable (possibly finite) collection of open intervals $I_j$ such that
$$A \subseteq \bigcup_{j=1}^\infty I_j,$$
and
$$\sum_{j=1}^\infty |I_j| < \varepsilon.$$
Here $|I_j|$ is defined as the length of the interval, not the cardinality (obviously).
This is a way to characterize “small” sets, and in general a rather good way.
This will be studied in depth in the IID Probability and Measure course.
Example.
- The empty set has measure zero.
- Any finite set has measure zero.
- Any countable set has measure zero. If $A = \{a_0, a_1, \cdots\}$, take
  $$I_j = \left(a_j - \frac{\varepsilon}{2^{j+2}}, a_j + \frac{\varepsilon}{2^{j+2}}\right).$$
  Then $A$ is contained in the union and the sum of lengths is $\varepsilon$.
- A countable union of sets of measure zero has measure zero, using a similar proof strategy as above.
- Any (non-trivial) interval does not have measure zero.
- The Cantor set, despite being uncountable, has measure zero. The Cantor set is constructed as follows: start with $C_0 = [0, 1]$. Remove the middle third $\left(\frac{1}{3}, \frac{2}{3}\right)$ to obtain $C_1 = \left[0, \frac{1}{3}\right] \cup \left[\frac{2}{3}, 1\right]$. Remove the middle third of each segment to obtain
  $$C_2 = \left[0, \tfrac{1}{9}\right] \cup \left[\tfrac{2}{9}, \tfrac{3}{9}\right] \cup \left[\tfrac{6}{9}, \tfrac{7}{9}\right] \cup \left[\tfrac{8}{9}, 1\right].$$
  Continue iteratively by removing the middle thirds of each part. Define
  $$C = \bigcap_{n=0}^\infty C_n,$$
  which is the Cantor set. Since each $C_n$ consists of $2^n$ disjoint closed intervals of length $1/3^n$, the total length of the segments of $C_n$ is $\left(\frac{2}{3}\right)^n \to 0$. So we can cover $C$ by an arbitrarily small union of intervals. Hence the Cantor set has measure zero.
  It is slightly trickier to show that $C$ is uncountable, and to save time, we are not doing it now.
Using this definition, we can have the following theorem:
Theorem (Lebesgue's theorem on the Riemann integral*). Let $f : [a, b] \to \mathbb{R}$ be a bounded function, and let $D_f$ be the set of points of discontinuity of $f$. Then $f$ is Riemann integrable if and only if $D_f$ has measure zero.
Using this result, a lot of our earlier theorems follow easily. Apart from the easy ones, like the sum and product of integrable functions being integrable, we can also easily show that the composition of a continuous function with an integrable function is integrable, since composing with a continuous function will not introduce more discontinuities.
Similarly, we can show that the uniform limit of integrable functions is integrable, since the set of discontinuities of the uniform limit is contained in the (countable) union of the sets of discontinuities of the functions in the sequence.
The proof is left as an exercise for the reader, in the example sheet.
4 $\mathbb{R}^n$ as a normed space
4.1 Normed spaces
Our objective is to extend most of the notions we had about functions of a single variable $f : \mathbb{R} \to \mathbb{R}$ to functions of multiple variables $f : \mathbb{R}^n \to \mathbb{R}$. More generally, we want to study functions $f : \Omega \to \mathbb{R}^m$, where $\Omega \subseteq \mathbb{R}^n$. We wish to define analytic notions such as continuity, differentiability and even integrability (even though we are not doing integrability in this course).
In order to do this, we need more structure on $\mathbb{R}^n$. We already know that $\mathbb{R}^n$ is a vector space, which means that we can add, subtract and multiply by scalars. But to do analysis, we need something to replace our notion of $|x - y|$ in $\mathbb{R}$. This is known as a norm.
It is useful to define and study this structure in an abstract setting, as opposed to thinking about $\mathbb{R}^n$ specifically. This leads to the general notion of normed spaces.
Definition (Normed space). Let $V$ be a real vector space. A norm on $V$ is a function $\|\cdot\| : V \to \mathbb{R}$ satisfying
(i) $\|x\| \geq 0$ with equality iff $x = 0$ (non-negativity)
(ii) $\|\lambda x\| = |\lambda| \|x\|$ (linearity in scalar multiplication)
(iii) $\|x + y\| \leq \|x\| + \|y\|$ (triangle inequality)
A normed space is a pair $(V, \|\cdot\|)$. If the norm is understood, we just say $V$ is a normed space. We do have to be slightly careful, since there can be multiple norms on a vector space.
Intuitively, $\|x\|$ is the length or magnitude of $x$.
Example. We will first look at finite-dimensional spaces. This is typically $\mathbb{R}^n$ with different norms.
- Consider $\mathbb{R}^n$ with the Euclidean norm
  $$\|x\|_2 = \left(\sum x_i^2\right)^{1/2}.$$
  This is also known as the usual norm. It is easy to check that this is a norm, apart from the triangle inequality. So we'll just do this. We have
  $$\begin{aligned}
  \|x + y\|^2 &= \sum_{i=1}^n (x_i + y_i)^2\\
  &= \|x\|^2 + \|y\|^2 + 2\sum x_i y_i\\
  &\leq \|x\|^2 + \|y\|^2 + 2\|x\|\|y\|\\
  &= (\|x\| + \|y\|)^2,
  \end{aligned}$$
  where we used the Cauchy-Schwarz inequality. So done.
- We can have the following norm on $\mathbb{R}^n$:
  $$\|x\|_1 = \sum |x_i|.$$
  It is easy to check that this is a norm.
- We can also have the following norm on $\mathbb{R}^n$:
  $$\|x\|_\infty = \max\{|x_i| : 1 \leq i \leq n\}.$$
  It is also easy to check that this is a norm.
- In general, we can define the $p$-norm (for $p \geq 1$) by
  $$\|x\|_p = \left(\sum |x_i|^p\right)^{1/p}.$$
  It is, however, not trivial to check the triangle inequality, and we will not do this.
We can show that as $p \to \infty$, $\|x\|_p \to \|x\|_\infty$, which justifies our notation above.
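This limit is easy to observe numerically; a quick sketch (the vector is an arbitrary illustrative choice):

```python
x = [3.0, -4.0, 1.5, 2.0]

def p_norm(v, p):
    """||v||_p = (sum |v_i|^p)^(1/p)."""
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

sup_norm = max(abs(t) for t in x)  # ||x||_inf = 4

for p in (1, 2, 4, 8, 16, 32, 64):
    assert p_norm(x, p) >= sup_norm  # ||x||_p >= ||x||_inf for every p >= 1

# ...and the p-norms approach the sup norm as p grows:
assert abs(p_norm(x, 64) - sup_norm) < 1e-3
```
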
We also have some infinite-dimensional examples. Often, we can just extend our notions on $\mathbb{R}^n$ to infinite sequences with some care. We write $\mathbb{R}^{\mathbb{N}}$ for the set of all infinite real sequences $(x_k)$. This is a vector space with termwise addition and scalar multiplication.
- Define
  $$\ell^1 = \Big\{(x_k) \in \mathbb{R}^{\mathbb{N}} : \sum |x_k| < \infty\Big\}.$$
  This is a linear subspace of $\mathbb{R}^{\mathbb{N}}$. We define the norm by
  $$\|(x_k)\|_{\ell^1} = \|(x_k)\|_1 = \sum |x_k|.$$
- Similarly, we can define $\ell^2$ by
  $$\ell^2 = \Big\{(x_k) \in \mathbb{R}^{\mathbb{N}} : \sum x_k^2 < \infty\Big\}.$$
  The norm is defined by
  $$\|(x_k)\|_{\ell^2} = \|(x_k)\|_2 = \left(\sum x_k^2\right)^{1/2}.$$
  We can also write this as
  $$\|(x_k)\|_2 = \lim_{n \to \infty} \|(x_1, \cdots, x_n)\|_2.$$
  So the triangle inequality for the Euclidean norm implies the triangle inequality for $\ell^2$.
- In general, for $p \geq 1$, we can define
  $$\ell^p = \Big\{(x_k) \in \mathbb{R}^{\mathbb{N}} : \sum |x_k|^p < \infty\Big\}$$
  with the norm
  $$\|(x_k)\|_{\ell^p} = \|(x_k)\|_p = \left(\sum |x_k|^p\right)^{1/p}.$$
- Finally, we have $\ell^\infty$, where
  $$\ell^\infty = \{(x_k) \in \mathbb{R}^{\mathbb{N}} : \sup |x_k| < \infty\},$$
  with the norm
  $$\|(x_k)\|_{\ell^\infty} = \|(x_k)\|_\infty = \sup |x_k|.$$
Finally, we can have examples where we look at function spaces, usually $C([a, b])$, the set of continuous real functions on $[a, b]$.
- We can define the $L^1$ norm by
  $$\|f\|_{L^1} = \|f\|_1 = \int_a^b |f|\; \mathrm{d}x.$$
- We can define $L^2$ similarly by
  $$\|f\|_{L^2} = \|f\|_2 = \left(\int_a^b f^2\; \mathrm{d}x\right)^{1/2}.$$
- In general, we can define $L^p$ for $p \geq 1$ by
  $$\|f\|_{L^p} = \|f\|_p = \left(\int_a^b |f|^p\; \mathrm{d}x\right)^{1/p}.$$
- Finally, we have $L^\infty$ by
  $$\|f\|_{L^\infty} = \|f\|_\infty = \sup |f|.$$
  This is also called the uniform norm, or the supremum norm.
Later, when we define convergence for a general normed space, we will show that convergence under the uniform norm is equivalent to uniform convergence.
To show that $\|\cdot\|_{L^2}$ is actually a norm, we can use the Cauchy-Schwarz inequality for integrals.
Lemma (Cauchy-Schwarz inequality (for integrals)). If $f, g \in C([a, b])$ with $f, g \geq 0$, then
$$\int_a^b fg\; \mathrm{d}x \leq \left(\int_a^b f^2\; \mathrm{d}x\right)^{1/2} \left(\int_a^b g^2\; \mathrm{d}x\right)^{1/2}.$$
Proof. If $\int_a^b f^2\; \mathrm{d}x = 0$, then $f = 0$ (since $f$ is continuous). So the inequality holds trivially.
Otherwise, let $A^2 = \int_a^b f^2\; \mathrm{d}x \neq 0$ and $B^2 = \int_a^b g^2\; \mathrm{d}x$. Consider the function
$$\phi(t) = \int_a^b (g - tf)^2\; \mathrm{d}x \geq 0$$
for every $t$. We can expand this as
$$\phi(t) = t^2 A^2 - 2t \int_a^b gf\; \mathrm{d}x + B^2.$$
The condition for a quadratic in $t$ to be non-negative is exactly
$$\left(\int_a^b gf\; \mathrm{d}x\right)^2 - A^2 B^2 \leq 0.$$
So done.
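A quick numerical check of the lemma ($f$ and $g$ here are illustrative choices of continuous nonnegative functions):

```python
import math

def riemann(f, a, b, m=50000):
    """Midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / m
    return sum(f(a + (i + 0.5) * h) for i in range(m)) * h

a, b = 0.0, 1.0
f = lambda x: math.exp(x)  # continuous and nonnegative on [0, 1]
g = lambda x: x * x

lhs = riemann(lambda x: f(x) * g(x), a, b)
rhs = math.sqrt(riemann(lambda x: f(x) ** 2, a, b)) * \
      math.sqrt(riemann(lambda x: g(x) ** 2, a, b))

assert lhs <= rhs  # integral of fg <= (integral of f^2)^(1/2) (integral of g^2)^(1/2)
```
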
Note that the way we defined $L^p$ is rather unsatisfactory. To define the $\ell^p$ spaces, we first have the norm defined as a sum, and then take $\ell^p$ to be the set of all sequences for which the sum converges. However, to define the $L^p$ space, we restrict ourselves to $C([0, 1])$, and then define the norm. Can we just define, say, $L^1$ to be the set of all functions such that $\int_0^1 |f|\; \mathrm{d}x$ exists? We could, but then the "norm" would no longer be a norm, since if we take
$$f(x) = \begin{cases} 1 & x = 0.5\\ 0 & x \neq 0.5 \end{cases},$$
then $f$ is integrable with integral $0$, but is not identically zero. So we cannot expand our vector space to be too large. To define $L^p$ properly, we need some more sophisticated notions such as Lebesgue integrability and other fancy stuff, which will be done in the IID Probability and Measure course.
We have just defined many norms on the same space $\mathbb{R}^n$. These norms are clearly not the same, in the sense that for many $x$, $\|x\|_1$ and $\|x\|_2$ have different values. However, it turns out the norms are all "equivalent" in some sense. This intuitively means the norms are "not too different" from each other, and give rise to the same notions of, say, convergence and completeness.
A precise definition of equivalence is as follows:
Definition (Lipschitz equivalence of norms). Let $V$ be a (real) vector space. Two norms $\|\cdot\|$, $\|\cdot\|'$ on $V$ are Lipschitz equivalent if there are real constants $0 < a < b$ such that
$$a\|x\| \leq \|x\|' \leq b\|x\|$$
for all $x \in V$.
It is easy to show this is indeed an equivalence relation on the set of all norms
on V .
We will show that if two norms are equivalent, the “topological” properties of
the space do not depend on which norm we choose. For example, the norms will
agree on which sequences are convergent and which functions are continuous.
It is possible to reformulate the notion of equivalence in a more geometric
way. To do so, we need some notation:
Definition (Open ball). Let $(V, \|\cdot\|)$ be a normed space, $a \in V$, $r > 0$. The open ball centered at $a$ with radius $r$ is
$$B_r(a) = \{x \in V : \|x - a\| < r\}.$$
Then the requirement that $a\|x\| \leq \|x\|' \leq b\|x\|$ for all $x \in V$ is equivalent to saying
$$B_{1/b}(0) \subseteq B_1'(0) \subseteq B_{1/a}(0),$$
where $B_r$ denotes the ball with respect to $\|\cdot\|$, while $B_r'$ denotes the ball with respect to $\|\cdot\|'$. The actual proof of equivalence is on the second example sheet.
Example. Consider $\mathbb{R}^2$. Then the norms $\|\cdot\|_\infty$ and $\|\cdot\|_2$ are equivalent. This is easy to see using the ball picture (omitted here), where the blue balls are taken with respect to $\|\cdot\|_\infty$ and the red one with respect to $\|\cdot\|_2$.
In general, we can consider $\mathbb{R}^n$, again with $\|\cdot\|_2$ and $\|\cdot\|_\infty$. We have
$$\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n}\, \|x\|_\infty.$$
These are easy to check manually. However, later we will show that in fact,
any two norms on a finite-dimensional vector space are Lipschitz equivalent.
Hence it is more interesting to look at infinite dimensional cases.
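Before turning to infinite dimensions, the finite-dimensional sandwich $\|x\|_\infty \leq \|x\|_2 \leq \sqrt{n}\,\|x\|_\infty$ above can be confirmed numerically; a small sketch with random vectors (the dimension and ranges are arbitrary):

```python
import math
import random

random.seed(0)
n = 10
for _ in range(1000):
    x = [random.uniform(-5.0, 5.0) for _ in range(n)]
    inf_norm = max(abs(t) for t in x)
    two_norm = math.sqrt(sum(t * t for t in x))
    # ||x||_inf <= ||x||_2 <= sqrt(n) * ||x||_inf
    assert inf_norm <= two_norm <= math.sqrt(n) * inf_norm
```
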
Example. Let $V = C([0, 1])$ with the norms
$$\|f\|_1 = \int_0^1 |f|\; \mathrm{d}x, \quad \|f\|_\infty = \sup_{[0,1]} |f|.$$
We clearly have the bound
$$\|f\|_1 \leq \|f\|_\infty.$$
However, there is no constant $b$ such that
$$\|f\|_\infty \leq b\|f\|_1$$
for all $f$. This is easy to show by constructing a sequence of piecewise linear "tent" functions $f_n$, each of height $1$ and supported on an interval of width $\frac{2}{n}$. Then $\|f_n\|_\infty = 1$ but $\|f_n\|_1 = \frac{1}{n} \to 0$.
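The tent construction can be made concrete; a small numerical sketch (the exact tent shape below is an illustrative choice):

```python
def tent(n):
    """Piecewise-linear tent: height 1 at x = 1/n, supported on (0, 2/n)."""
    def f(x):
        return max(0.0, 1.0 - abs(n * x - 1.0))
    return f

def riemann(f, m=100000):
    """Midpoint Riemann sum of f over [0, 1]."""
    h = 1.0 / m
    return sum(f((i + 0.5) * h) for i in range(m)) * h

for n in (5, 50, 500):
    f = tent(n)
    sup = max(f(i / 10000) for i in range(10001))
    assert abs(sup - 1.0) < 1e-12            # ||f_n||_inf = 1 for every n
    assert abs(riemann(f) - 1.0 / n) < 1e-6  # ||f_n||_1 = 1/n -> 0
```
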
Example. Similarly, consider the space $\ell^2 = \big\{(x_n) : \sum x_n^2 < \infty\big\}$ under the regular $\ell^2$ norm and the $\ell^\infty$ norm. We have
$$\|(x_k)\|_\infty \leq \|(x_k)\|_2,$$
but there is no $b$ such that
$$\|(x_k)\|_2 \leq b\|(x_k)\|_\infty.$$
For example, we can consider the sequence $x^{(n)} = (1, 1, \cdots, 1, 0, 0, \cdots)$, where the first $n$ terms are $1$.
So far in all our examples, out of the two inequalities, one holds and one does
not. Is it possible for both inequalities to not hold? The answer is yes. This is
an exercise on the second example sheet as well.
This is all we are going to say about Lipschitz equivalence. We are now going
to define convergence, and study the consequences of Lipschitz equivalence to
convergence.
Definition (Bounded subset). Let $(V, \|\cdot\|)$ be a normed space. A subset $E \subseteq V$ is bounded if there is some $R > 0$ such that $E \subseteq B_R(0)$.
Definition (Convergence of sequence). Let $(V, \|\cdot\|)$ be a normed space. A sequence $(x_k)$ in $V$ converges to $x \in V$ if $\|x_k - x\| \to 0$ (as a sequence in $\mathbb{R}$), i.e.
$$(\forall \varepsilon > 0)(\exists N)(\forall k \geq N)\; \|x_k - x\| < \varepsilon.$$
These two definitions, obviously, depend on the chosen norm, not just the vector space $V$. However, if two norms are equivalent, then they agree on what is bounded and what converges.
Proposition. If $\|\cdot\|$ and $\|\cdot\|'$ are Lipschitz equivalent norms on a vector space $V$, then
(i) A subset $E \subseteq V$ is bounded with respect to $\|\cdot\|$ if and only if it is bounded with respect to $\|\cdot\|'$.
(ii) A sequence $(x_k)$ converges to $x$ with respect to $\|\cdot\|$ if and only if it converges to $x$ with respect to $\|\cdot\|'$.
Proof.
(i) This is direct from the definition of equivalence.
(ii) Say we have $a, b$ such that $a\|y\| \leq \|y\|' \leq b\|y\|$ for all $y$. So
$$a\|x_k - x\| \leq \|x_k - x\|' \leq b\|x_k - x\|.$$
So $\|x_k - x\| \to 0$ if and only if $\|x_k - x\|' \to 0$. So done.
What if the norms are not equivalent? It is not surprising that there are
some sequences that converge with respect to one norm but not another. More
surprisingly, it is possible that a sequence converges to different limits under
different norms. This is, again, on the second example sheet.
We have some easy facts about convergence:
Proposition. Let $(V, \|\cdot\|)$ be a normed space. Then
(i) If $x_k \to x$ and $x_k \to y$, then $x = y$.
(ii) If $x_k \to x$, then $ax_k \to ax$.
(iii) If $x_k \to x$ and $y_k \to y$, then $x_k + y_k \to x + y$.
Proof.
(i) $\|x - y\| \leq \|x - x_k\| + \|x_k - y\| \to 0$. So $\|x - y\| = 0$. So $x = y$.
(ii) $\|ax_k - ax\| = |a| \|x_k - x\| \to 0$.
(iii) $\|(x_k + y_k) - (x + y)\| \leq \|x_k - x\| + \|y_k - y\| \to 0$.
Proposition. Convergence in $\mathbb{R}^n$ (with respect to, say, the Euclidean norm) is equivalent to coordinate-wise convergence, i.e. $x^{(k)} \to x$ if and only if $x_j^{(k)} \to x_j$ for all $j$.
Proof. Fix $\varepsilon > 0$. Suppose $x^{(k)} \to x$. Then there is some $N$ such that for any $k \geq N$, we have
$$\|x^{(k)} - x\|_2^2 = \sum_{j=1}^n (x_j^{(k)} - x_j)^2 < \varepsilon^2.$$
Hence $|x_j^{(k)} - x_j| < \varepsilon$ for all $j$ and all $k \geq N$.
On the other hand, for any fixed $j$, there is some $N_j$ such that $k \geq N_j$ implies $|x_j^{(k)} - x_j| < \frac{\varepsilon}{\sqrt{n}}$. So if $k \geq \max\{N_j : j = 1, \cdots, n\}$, then
$$\|x^{(k)} - x\|_2 = \left(\sum_{j=1}^n (x_j^{(k)} - x_j)^2\right)^{1/2} < \varepsilon.$$
So done.
Another space we would like to understand is the space of continuous functions.
It should be clear that uniform convergence is the same as convergence under the
uniform norm, hence the name. However, there is no norm such that convergence
under the norm is equivalent to pointwise convergence, i.e. pointwise convergence
is not normable. In fact, it is not even metrizable. However, we will not prove
this.
We'll now generalize the Bolzano-Weierstrass theorem to $\mathbb{R}^n$.
Theorem (Bolzano-Weierstrass theorem in $\mathbb{R}^n$). Any bounded sequence in $\mathbb{R}^n$ (with, say, the Euclidean norm) has a convergent subsequence.
Proof. We induct on $n$. The $n = 1$ case is the usual Bolzano-Weierstrass on the real line, which was proved in IA Analysis I.
Assume the theorem holds in $\mathbb{R}^{n-1}$, and let $x^{(k)} = (x_1^{(k)}, \cdots, x_n^{(k)})$ be a bounded sequence in $\mathbb{R}^n$. Then let $y^{(k)} = (x_1^{(k)}, \cdots, x_{n-1}^{(k)})$. Since for any $k$, we know that
$$\|y^{(k)}\|^2 + |x_n^{(k)}|^2 = \|x^{(k)}\|^2,$$
it follows that both $(y^{(k)})$ and $(x_n^{(k)})$ are bounded. So by the induction hypothesis, there is a subsequence $(k_j)$ of $(k)$ and some $y \in \mathbb{R}^{n-1}$ such that $y^{(k_j)} \to y$. Also, by Bolzano-Weierstrass in $\mathbb{R}$, there is a further subsequence of $(x_n^{(k_j)})$ that converges to, say, $y_n \in \mathbb{R}$. Then along this further subsequence we know that
$$x^{(k_j)} \to (y, y_n).$$
So done.
Note that this is generally not true for normed spaces. Finite-dimensionality
is important for both of these results.
Example. Consider $(\ell^\infty, \|\cdot\|_\infty)$. We let $e^{(k)}_j = \delta_{jk}$, the sequence with $1$ in the $k$th component and $0$ in the other components. Then $e^{(k)}_j \to 0$ for each fixed $j$, and hence $e^{(k)}$ converges componentwise to the zero element $\mathbf{0} = (0, 0, \cdots)$.
However, $e^{(k)}$ does not converge to the zero element, since $\|e^{(k)} - \mathbf{0}\|_\infty = 1$ for all $k$. Also, this sequence is bounded but does not have a convergent subsequence, for the same reason.
We know that all finite-dimensional vector spaces are isomorphic to $\mathbb{R}^n$ as vector spaces for some $n$, and we will later show that all norms on finite-dimensional spaces are equivalent. This means every finite-dimensional normed space satisfies the Bolzano-Weierstrass property. Is the converse true? If a normed vector space satisfies the Bolzano-Weierstrass property, must it be finite-dimensional? The answer is yes, and the proof is in the example sheet.
Example. Let $C([0, 1])$ have the $\|\cdot\|_{L^2}$ norm. Consider $f_n(x) = \sin 2^n \pi x$. We know that
$$\|f_n\|_{L^2}^2 = \int_0^1 |f_n|^2\; \mathrm{d}x = \frac{1}{2}.$$
So the sequence is bounded. However, it doesn't have a convergent subsequence. If it did, say $f_{n_j} \to f$ in $L^2$, then we must have
$$\|f_{n_j} - f_{n_{j+1}}\|_2 \to 0.$$
However, by direct calculation, we know that
$$\|f_{n_j} - f_{n_{j+1}}\|_2^2 = \int_0^1 (\sin 2^{n_j} \pi x - \sin 2^{n_{j+1}} \pi x)^2\; \mathrm{d}x = 1.$$
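These $L^2$ computations are easy to verify numerically; a sketch using the concrete exponents $n_j = 3$ and $n_{j+1} = 5$ (arbitrary distinct choices):

```python
import math

def riemann(f, m=100000):
    """Midpoint Riemann sum of f over [0, 1]."""
    h = 1.0 / m
    return sum(f((i + 0.5) * h) for i in range(m)) * h

def f(n):
    return lambda x: math.sin(2 ** n * math.pi * x)

# ||f_n||_{L^2}^2 = 1/2 for every n
assert abs(riemann(lambda x: f(3)(x) ** 2) - 0.5) < 1e-6
# ||f_m - f_n||_{L^2}^2 = 1 for m != n: the cross term integrates to 0
assert abs(riemann(lambda x: (f(3)(x) - f(5)(x)) ** 2) - 1.0) < 1e-6
```
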
Note that the same argument also shows that the sequence $(\sin 2^n \pi x)$ has no subsequence that converges pointwise on $[0, 1]$. To see this, we need the result that if $(f_j)$ is a sequence in $C([0, 1])$ that is uniformly bounded with $f_j \to f$ pointwise, then $f_j$ converges to $f$ under the $L^2$ norm. However, we will not be able to prove this (in a nice way) without Lebesgue integration from IID Probability and Measure.
4.2 Cauchy sequences and completeness
Definition (Cauchy sequence). Let $(V, \|\cdot\|)$ be a normed space. A sequence $(x^{(k)})$ in $V$ is a Cauchy sequence if
$$(\forall \varepsilon > 0)(\exists N)(\forall n, m \geq N)\; \|x^{(n)} - x^{(m)}\| < \varepsilon.$$
Definition (Complete normed space). A normed space $(V, \|\cdot\|)$ is complete if every Cauchy sequence converges to an element in $V$.
We’ll start with some easy facts about Cauchy sequences and complete spaces.
Proposition. Any convergent sequence is Cauchy.
Proof. If $x_k \to x$, then
$$\|x_k - x_\ell\| \leq \|x_k - x\| + \|x_\ell - x\| \to 0 \quad \text{as } k, \ell \to \infty.$$
Proposition. A Cauchy sequence is bounded.
Proof. By definition, there is some $N$ such that for all $n \geq N$, we have $\|x_N - x_n\| < 1$. So $\|x_n\| < 1 + \|x_N\|$ for $n \geq N$. So, for all $n$,
$$\|x_n\| \leq \max\{\|x_1\|, \cdots, \|x_{N-1}\|, 1 + \|x_N\|\}.$$
Proposition. If a Cauchy sequence has a subsequence converging to an element
x, then the whole sequence converges to x.
Proof. Suppose $x_{k_j} \to x$. Since $(x_k)$ is Cauchy, given $\varepsilon > 0$, we can choose an $N$ such that $\|x_n - x_m\| < \frac{\varepsilon}{2}$ for all $n, m \geq N$. We can also choose $j_0$ such that $k_{j_0} \geq N$ and $\|x_{k_{j_0}} - x\| < \frac{\varepsilon}{2}$. Then for any $n \geq N$, we have
$$\|x_n - x\| \leq \|x_n - x_{k_{j_0}}\| + \|x - x_{k_{j_0}}\| < \varepsilon.$$
Proposition. If $\|\cdot\|$ is Lipschitz equivalent to $\|\cdot\|'$ on $V$, then $(x_k)$ is Cauchy with respect to $\|\cdot\|$ if and only if $(x_k)$ is Cauchy with respect to $\|\cdot\|'$. Also, $(V, \|\cdot\|)$ is complete if and only if $(V, \|\cdot\|')$ is complete.
Proof. This follows directly from definition.
Theorem. $\mathbb{R}^n$ (with the Euclidean norm, say) is complete.
Proof. The important thing is to know this is true for $n = 1$, which we have proved in Analysis I.
If $(x^{(k)})$ is Cauchy in $\mathbb{R}^n$, then $(x_j^{(k)})$ is a Cauchy sequence of real numbers for each $j \in \{1, \cdots, n\}$. By the completeness of the reals, we know that $x_j^{(k)} \to x_j \in \mathbb{R}$ for some $x_j$. So $x^{(k)} \to x = (x_1, \cdots, x_n)$, since convergence in $\mathbb{R}^n$ is equivalent to componentwise convergence.
Note that the spaces $\ell^1$, $\ell^2$, $\ell^\infty$ are all complete with respect to the standard norms. Also, $C([0, 1])$ is complete with respect to $\|\cdot\|_\infty$, since a uniformly Cauchy sequence converges uniformly, and the uniform limit of continuous functions is continuous. However, $C([0, 1])$ with the $L^1$ or $L^2$ norm is not complete (see example sheet).
The incompleteness of $L^1$ tells us that $C([0, 1])$ is not large enough to be complete under the $L^1$ or $L^2$ norm. In fact, the space of Riemann integrable functions, say $\mathcal{R}([0, 1])$, is the natural space for the $L^1$ norm, and of course contains $C([0, 1])$. As we have previously mentioned, this time $\mathcal{R}([0, 1])$ is too large for $\|\cdot\|_1$ to be a norm, since $\int_0^1 |f|\; \mathrm{d}x = 0$ does not imply $f = 0$. This is a problem we can solve. We just have to take the equivalence classes of Riemann integrable functions, where $f$ and $g$ are equivalent if $\int_0^1 |f - g|\; \mathrm{d}x = 0$. But still, $L^1$ is not complete on $\mathcal{R}([0, 1])/{\sim}$. This is a serious problem with the Riemann integral. It eventually led to the Lebesgue integral, which generalizes the Riemann integral, and gives a complete normed space.
Note that when we quotient $\mathcal{R}([0, 1])$ by the equivalence relation $f \sim g$ if $\int_0^1 |f - g|\; \mathrm{d}x = 0$, we are not losing too much information about our functions. We know that for the integral to be zero, $f - g$ cannot be non-zero at a point of continuity. Hence $f$ and $g$ agree at all points of continuity. We also know that by Lebesgue's theorem, the set of points of discontinuity has Lebesgue measure zero. So they disagree on at most a set of Lebesgue measure zero.
Example. Let
$$V = \{(x_n) \in \mathbb{R}^{\mathbb{N}} : x_j = 0 \text{ for all but finitely many } j\}.$$
Take the supremum norm $\|\cdot\|_\infty$ on $V$. This is a subspace of $\ell^\infty$ (and is sometimes denoted $\ell_0$). Then $(V, \|\cdot\|_\infty)$ is not complete. We define
$$x^{(k)} = \left(1, \frac{1}{2}, \frac{1}{3}, \cdots, \frac{1}{k}, 0, 0, \cdots\right)$$
for $k = 1, 2, 3, \cdots$. Then this is Cauchy, since
$$\|x^{(k)} - x^{(\ell)}\|_\infty = \frac{1}{\min\{\ell, k\} + 1} \to 0,$$
but it is not convergent in $V$. If it actually converged to some $x$, then we would need $x_j^{(k)} \to x_j$. So we must have $x_j = \frac{1}{j}$, but this sequence is not in $V$.
We will later show that this is because
V
is not closed, after we define what
it means to be closed.
Definition (Open set). Let $(V, \|\cdot\|)$ be a normed space. A subset $E \subseteq V$ is open in $V$ if for any $y \in E$, there is some $r > 0$ such that
$$B_r(y) = \{x \in V : \|x - y\| < r\} \subseteq E.$$
We first check that the open ball is open.
Proposition. $B_r(y) \subseteq V$ is an open subset for all $r > 0$, $y \in V$.
Proof. Let $x \in B_r(y)$. Let $\rho = r - \|x - y\| > 0$. Then $B_\rho(x) \subseteq B_r(y)$.
Definition (Limit point). Let $(V, \|\cdot\|)$ be a normed space, $E \subseteq V$. A point $y \in V$ is a limit point of $E$ if there is a sequence $(x_k)$ in $E$ with $x_k \neq y$ for all $k$ and $x_k \to y$.
(Some people allow $x_k = y$, but we will use this definition in this course.)
Example. Let $V = \mathbb{R}$, $E = (0, 1)$. Then $0, 1$ are limit points of $E$. The set of all limit points is $[0, 1]$.
If $E' = (0, 1) \cup \{2\}$, then the set of limit points of $E'$ is still $[0, 1]$.
There is a nice result characterizing whether a set contains all its limit points.
Proposition. Let $E \subseteq V$. Then $E$ contains all of its limit points if and only if $V \setminus E$ is open in $V$.
Using this proposition, we define the following:
Definition (Closed set). Let $(V, \|\cdot\|)$ be a normed space. Then $E \subseteq V$ is closed if $V \setminus E$ is open, i.e. $E$ contains all its limit points.
Note that a set can be both closed and open, or neither closed nor open.
Before we prove the proposition, we first have a lemma:
Lemma. Let $(V, \|\cdot\|)$ be a normed space, $E$ any subset of $V$. Then a point $y \in V$ is a limit point of $E$ if and only if
$$(B_r(y) \setminus \{y\}) \cap E \neq \emptyset$$
for every $r > 0$.
Proof. ($\Rightarrow$) If $y$ is a limit point of $E$, then there exists a sequence $(x_k) \subseteq E$ with $x_k \neq y$ for all $k$ and $x_k \to y$. Then for every $r$, for sufficiently large $k$, we have $x_k \in B_r(y)$. Since $x_k \neq y$ and $x_k \in E$, the result follows.
($\Leftarrow$) For each $k$, let $r = \frac{1}{k}$. By assumption, we have some $x_k \in (B_{1/k}(y) \setminus \{y\}) \cap E$. Then $x_k \to y$, $x_k \neq y$ and $x_k \in E$. So $y$ is a limit point of $E$.
Now we can prove our proposition.
Proposition. Let $E \subseteq V$. Then $E$ contains all of its limit points if and only if $V \setminus E$ is open in $V$.
Proof. ($\Rightarrow$) Suppose $E$ contains all its limit points. To show $V \setminus E$ is open, we let $y \in V \setminus E$. So $y$ is not a limit point of $E$. So for some $r$, we have $(B_r(y) \setminus \{y\}) \cap E = \emptyset$. Hence it follows that $B_r(y) \subseteq V \setminus E$ (since $y \notin E$).
($\Leftarrow$) Suppose $V \setminus E$ is open. Let $y \in V \setminus E$. Since $V \setminus E$ is open, there is some $r$ such that $B_r(y) \subseteq V \setminus E$. By the lemma, $y$ is not a limit point of $E$. So all limit points of $E$ are in $E$.
4.3 Sequential compactness
In general, there are two different notions of compactness: "sequential compactness" and just "compactness". However, in normed spaces (and metric spaces, as we will later encounter), these two notions are equivalent. So we will be lazy and just say "compactness" as opposed to "sequential compactness".
Definition ((Sequentially) compact set). Let $V$ be a normed vector space. A subset $K \subseteq V$ is said to be compact (or sequentially compact) if every sequence in $K$ has a subsequence that converges to a point in $K$.
There are things we can immediately deduce about compact sets:
Theorem. Let $(V, \|\cdot\|)$ be a normed vector space, $K \subseteq V$ a subset. Then
(i) If $K$ is compact, then $K$ is closed and bounded.
(ii) If $V$ is $\mathbb{R}^n$ (with, say, the Euclidean norm), then if $K$ is closed and bounded, then $K$ is compact.
Proof.
(i) Let $K$ be compact. Boundedness is easy: if $K$ were unbounded, then we could generate a sequence $(x_k)$ such that $\|x_k\| \to \infty$. Then this cannot have a convergent subsequence, since any subsequence will also be unbounded, and convergent sequences are bounded. So $K$ must be bounded.
To show $K$ is closed, let $y$ be a limit point of $K$. Then there is some sequence $(y_k) \subseteq K$ such that $y_k \to y$. Then by compactness, there is a subsequence of $(y_k)$ converging to some point in $K$. But any subsequence must converge to $y$. So $y \in K$.
(ii) Let $K$ be closed and bounded. Let $(x_k)$ be a sequence in $K$. Since $V = \mathbb{R}^n$ and $K$ is bounded, $(x_k)$ is a bounded sequence in $\mathbb{R}^n$. So by Bolzano-Weierstrass, this has a convergent subsequence $(x_{k_j})$. By closedness of $K$, we know that the limit is in $K$. So $K$ is compact.
4.4 Mappings between normed spaces
We are now going to look at functions between normed spaces, and see if they
are continuous.
Let $(V, \|\cdot\|)$ and $(V', \|\cdot\|')$ be normed spaces, let $E \subseteq V$ be a subset, and let $f : E \to V'$ be a mapping (which is just a function, although we reserve the terminology "function" or "functional" for the case $V' = \mathbb{R}$).
Definition (Continuity of mapping). Let $y \in E$. We say $f : E \to V'$ is continuous at $y$ if for all $\varepsilon > 0$, there is $\delta > 0$ such that the following holds:
$$(\forall x \in E)\; \|x - y\|_V < \delta \Rightarrow \|f(x) - f(y)\|_{V'} < \varepsilon.$$
Note that "$x \in E$ and $\|x - y\| < \delta$" is equivalent to saying $x \in B_\delta(y) \cap E$. Similarly, "$\|f(x) - f(y)\| < \varepsilon$" is equivalent to $f(x) \in B_\varepsilon(f(y))$, in other words $x \in f^{-1}(B_\varepsilon(f(y)))$. So we can rewrite the statement as: there is some $\delta > 0$ such that
$$E \cap B_\delta(y) \subseteq f^{-1}(B_\varepsilon(f(y))).$$
We can use this to provide an alternative characterization of continuity.
Theorem. Let $(V, \|\cdot\|)$ and $(V', \|\cdot\|')$ be normed spaces, $E \subseteq V$, $f : E \to V'$. Then $f$ is continuous at $y \in E$ if and only if for any sequence $y_k \to y$ in $E$, we have $f(y_k) \to f(y)$.
Proof. ($\Rightarrow$) Suppose $f$ is continuous at $y \in E$, and that $y_k \to y$. Given $\varepsilon > 0$, by continuity, there is some $\delta > 0$ such that
$$B_\delta(y) \cap E \subseteq f^{-1}(B_\varepsilon(f(y))).$$
For sufficiently large $k$, $y_k \in B_\delta(y) \cap E$. So $f(y_k) \in B_\varepsilon(f(y))$, or equivalently,
$$\|f(y_k) - f(y)\| < \varepsilon.$$
So done.
($\Leftarrow$) If $f$ is not continuous at $y$, then there is some $\varepsilon > 0$ such that for any $k$, we have
$$B_{1/k}(y) \cap E \not\subseteq f^{-1}(B_\varepsilon(f(y))).$$
Choose $y_k \in (B_{1/k}(y) \cap E) \setminus f^{-1}(B_\varepsilon(f(y)))$. Then $y_k \to y$, $y_k \in E$, but $\|f(y_k) - f(y)\| \geq \varepsilon$, contrary to the hypothesis.
Definition (Continuous function). $f : E \to V'$ is continuous if $f$ is continuous at every point $y \in E$.
Theorem. Let $(V, \|\cdot\|)$ and $(V', \|\cdot\|')$ be normed spaces, $K$ a compact subset of $V$, and $f : V \to V'$ a continuous function. Then
(i) $f(K)$ is compact in $V'$.
(ii) $f(K)$ is closed and bounded.
(iii) If $V' = \mathbb{R}$, then the function attains its supremum and infimum, i.e. there are some $y_1, y_2 \in K$ such that
$$f(y_1) = \sup\{f(y) : y \in K\}, \quad f(y_2) = \inf\{f(y) : y \in K\}.$$
Proof.
(i) Let $(x_k)$ be a sequence in $f(K)$ with $x_k = f(y_k)$ for some $y_k \in K$. By compactness of $K$, there is a subsequence $(y_{k_j})$ such that $y_{k_j} \to y \in K$. By the previous theorem, we know that $f(y_{k_j}) \to f(y)$. So $x_{k_j} \to f(y) \in f(K)$. So $f(K)$ is compact.
(ii) This follows directly from (i), since every compact set is closed and bounded.
(iii) If $F$ is any bounded subset of $\mathbb{R}$, then either $\sup F \in F$ or $\sup F$ is a limit point of $F$ (or both), by the definition of the supremum. If $F$ is closed and bounded, then any limit point must be in $F$. So $\sup F \in F$. Applying this fact to $F = f(K)$ gives the desired result, and similarly for the infimum.
Finally, we will end the chapter by proving that any two norms on a finite
dimensional space are Lipschitz equivalent. The key lemma is the following:
Lemma. Let $V$ be an $n$-dimensional vector space with a basis $\{v_1, \cdots, v_n\}$. Then for any $x \in V$, write $x = \sum_{j=1}^n x_j v_j$, with $x_j \in \mathbb{R}$. We define the Euclidean norm by
$$\|x\|_2 = \left(\sum x_j^2\right)^{1/2}.$$
Then this is a norm, and $S = \{x \in V : \|x\|_2 = 1\}$ is compact in $(V, \|\cdot\|_2)$.
After we show this, we can easily show that every other norm is equivalent to this norm.
This is not hard to prove, since we know that the unit sphere in $\mathbb{R}^n$ is compact, and we can just pass everything on to $\mathbb{R}^n$.
Proof. $\|\cdot\|_2$ is well-defined since $x_1, \cdots, x_n$ are uniquely determined by $x$ (by (a certain) definition of basis). It is easy to check that $\|\cdot\|_2$ is a norm.
Given a sequence $(x^{(k)})$ in $S$, write $x^{(k)} = \sum_{j=1}^n x_j^{(k)} v_j$, and define the following sequence in $\mathbb{R}^n$:
$$\tilde{x}^{(k)} = (x_1^{(k)}, \cdots, x_n^{(k)}) \in \tilde{S} = \{\tilde{x} \in \mathbb{R}^n : \|\tilde{x}\|_{\mathrm{Euclid}} = 1\}.$$
As $\tilde{S}$ is closed and bounded in $\mathbb{R}^n$ under the Euclidean norm, it is compact. Hence there exists a subsequence $(\tilde{x}^{(k_j)})$ and some $\tilde{x} \in \tilde{S}$ such that
$$\|\tilde{x}^{(k_j)} - \tilde{x}\|_{\mathrm{Euclid}} \to 0.$$
This says that $x = \sum_{j=1}^n \tilde{x}_j v_j \in S$ and $\|x^{(k_j)} - x\|_2 \to 0$. So done.
Theorem. Any two norms on a finite-dimensional vector space are Lipschitz equivalent.
The idea is to pick a basis, and prove that any norm is equivalent to $\|\cdot\|_2$. To show that an arbitrary norm $\|\cdot\|$ is equivalent to $\|\cdot\|_2$, we have to show that for any $x$, we have
$$a\|x\|_2 \leq \|x\| \leq b\|x\|_2.$$
We can divide by $\|x\|_2$ and obtain an equivalent requirement:
$$a \leq \left\|\frac{x}{\|x\|_2}\right\| \leq b.$$
We know that any $x/\|x\|_2$ lies in the unit sphere $S = \{x \in V : \|x\|_2 = 1\}$. So we want to show that the image of $S$ under $\|\cdot\|$ is bounded (and bounded away from $0$). But we know that $S$ is compact. So it suffices to show that $\|\cdot\|$ is continuous.
Proof. Fix a basis {v_1, ··· , v_n} for V, and define ∥·∥_2 as in the lemma above.
Then ∥·∥_2 is a norm on V, and S = {x ∈ V : ∥x∥_2 = 1}, the unit sphere, is
compact by the above.
To show that any two norms are equivalent, it suffices to show that if ∥·∥ is
any other norm, then it is equivalent to ∥·∥_2, since equivalence is transitive.
For any

x = Σ_{j=1}^n x_j v_j,

we have

∥x∥ = ∥Σ_{j=1}^n x_j v_j∥ ≤ Σ |x_j| ∥v_j∥ ≤ ∥x∥_2 (Σ_{j=1}^n ∥v_j∥^2)^{1/2}

by the Cauchy-Schwarz inequality. So ∥x∥ ≤ b∥x∥_2 for b = (Σ ∥v_j∥^2)^{1/2}.
To find a such that ∥x∥ ≥ a∥x∥_2, consider ∥·∥ : (S, ∥·∥_2) → R. By the above,
we know that

∥x∥ − ∥y∥ ≤ ∥x − y∥ ≤ b∥x − y∥_2.

By the triangle inequality, we know that |∥x∥ − ∥y∥| ≤ ∥x − y∥. So when x is
close to y under ∥·∥_2, then ∥x∥ and ∥y∥ are close. So ∥·∥ : (S, ∥·∥_2) → R
is continuous. So there is some x_0 ∈ S such that ∥x_0∥ = inf_{x∈S} ∥x∥ = a, say.
Since x_0 ≠ 0, we know that a = ∥x_0∥ > 0. So ∥x∥ ≥ a∥x∥_2 for all x ∈ V.
The key to the proof is the compactness of the unit sphere of (V, ∥·∥_2).
On the other hand, compactness of the unit sphere also characterizes finite
dimensionality. As you will show in the example sheets, if the unit sphere of a
space is compact, then the space must be finite-dimensional.
Corollary. Let (V, ∥·∥) be a finite-dimensional normed space.
(i) The Bolzano-Weierstrass theorem holds for V, i.e. any bounded sequence
in V has a convergent subsequence.
(ii) A subset of V is compact if and only if it is closed and bounded.
Proof.
If a subset is bounded in one norm, then it is bounded in any Lipschitz
equivalent norm. Similarly, if it converges to x in one norm, then it converges to
x in any Lipschitz equivalent norm.
Since these results hold for the Euclidean norm ∥·∥_2, it follows that they
hold for arbitrary finite-dimensional normed spaces.
Corollary. Any finite-dimensional normed vector space (V, · ) is complete.
Proof. This is true since if a space is complete in one norm, then it is complete
in any Lipschitz equivalent norm, and we know that R^n under the Euclidean
norm is complete.
5 Metric spaces
We would like to extend our notions such as convergence, open and closed
subsets, compact subsets and continuity from normed spaces to more general
sets. Recall that when we defined these notions, we didn’t really use the vector
space structure of a normed vector space much. Moreover, we mostly defined
these things in terms of convergence of sequences. For example, a space is closed
if it contains all its limits, and a space is open if its complement is closed.
So what do we actually need in order to define convergence, and hence all
the notions we've been using? Recall we defined x_k → x to mean ∥x_k − x∥ → 0
as a sequence in R. What is ∥x_k − x∥ really about? It is measuring the distance
between x_k and x. So what we really need is a measure of distance.
To do so, we can define a distance function d : V × V → R by d(x, y) = ∥x − y∥.
Then we can define x_k → x to mean d(x_k, x) → 0.
Hence, given any function d : V × V → R, we can define a notion of
"convergence" as above. However, we want this to be well-behaved. In particular,
we would want the limits of sequences to be unique, and any constant sequence
x_k = x should converge to x.
We will come up with some restrictions on what d can be based on these
requirements.
We can look at our proof of uniqueness of limits (for normed spaces), and
see what properties of d we used. Recall that to prove the uniqueness of limits,
we first assume that x_k → x and x_k → y. Then we noticed

∥x − y∥ ≤ ∥x − x_k∥ + ∥x_k − y∥ → 0,

and hence ∥x − y∥ = 0. So x = y. We can reformulate this argument in terms of
d. We first start with

d(x, y) ≤ d(x, x_k) + d(x_k, y).

To obtain this inequality, we are relying on the triangle inequality. So we would
want d to satisfy the triangle inequality.
After obtaining this, we know that d(x_k, y) → 0, since this is just the
definition of convergence. However, we do not immediately know d(x, x_k) → 0,
since we are given a fact about d(x_k, x), not d(x, x_k). Hence we need the property
that d(x_k, x) = d(x, x_k). This is symmetry.
Combining these, we know that

d(x, y) ≤ 0.

From this, we want to say that in fact, d(x, y) = 0, and thus x = y. Hence
we need the property that d(x, y) ≥ 0 for all x, y, and that d(x, y) = 0 implies
x = y.
Finally, to show that a constant sequence has a limit, suppose x_k = x for all
k ∈ N. Then we know that d(x, x_k) = d(x, x) should tend to 0. So we must have
d(x, x) = 0 for all x.
We will use these properties to define metric spaces.
5.1 Preliminary definitions
Definition (Metric space). Let X be any set. A metric on X is a function
d : X × X → R that satisfies
– d(x, y) ≥ 0 with equality iff x = y (non-negativity)
– d(x, y) = d(y, x) (symmetry)
– d(x, y) ≤ d(x, z) + d(z, y) (triangle inequality)
The pair (X, d) is called a metric space.
We have seen that we can define convergence in terms of a metric. Hence,
we can also define open subsets, closed subsets, compact spaces, continuous
functions etc. for metric spaces, in a manner consistent with what we had for
normed spaces. Moreover, we will show that many of our theorems for normed
spaces are also valid in metric spaces.
Example.
(i) R^n with the Euclidean metric is a metric space, where the metric is defined
by

d(x, y) = ∥x − y∥ = √(Σ (x_j − y_j)^2).

(ii) More generally, if (V, ∥·∥) is a normed space, then d(x, y) = ∥x − y∥
defines a metric on V.
(iii) Discrete metric: let X be any set, and define

d(x, y) = 0 if x = y, and d(x, y) = 1 if x ≠ y.

(iv) Given a metric space (X, d), we define

g(x, y) = min{1, d(x, y)}.

Then this is a metric on X. Similarly,

h(x, y) = d(x, y)/(1 + d(x, y))

is also a metric on X. In both cases, we obtain a bounded metric.
The axioms are easily shown to be satisfied, apart from the triangle
inequality. So let's check the triangle inequality for h. We'll use a general
fact that for numbers a, c ≥ 0 and b, d > 0, we have

a/b ≤ c/d implies a/(a + b) ≤ c/(c + d).

Based on this fact, we can start with

d(x, y) ≤ d(x, z) + d(z, y).

Then we obtain

d(x, y)/(1 + d(x, y)) ≤ (d(x, z) + d(z, y))/(1 + d(x, z) + d(z, y))
= d(x, z)/(1 + d(x, z) + d(z, y)) + d(z, y)/(1 + d(x, z) + d(z, y))
≤ d(x, z)/(1 + d(x, z)) + d(z, y)/(1 + d(z, y)).

So done.
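The algebra above can be double-checked numerically. A small sketch (illustrative only), testing the axioms for h built from the usual metric on R at random points:

```python
import random

def h(d):
    """The transformed distance d -> d/(1 + d) from the example."""
    return d / (1 + d)

dist = lambda a, b: abs(a - b)  # base metric on R

random.seed(1)
for _ in range(1000):
    x, y, z = (random.uniform(-10, 10) for _ in range(3))
    # triangle inequality survives the transform (proved above); symmetry is inherited
    assert h(dist(x, y)) <= h(dist(x, z)) + h(dist(z, y)) + 1e-12
    assert h(dist(x, y)) == h(dist(y, x))
    assert h(dist(x, y)) < 1  # the new metric is bounded
print("all checks passed")
```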
We can also extend the notion of Lipschitz equivalence to metric spaces.
Definition (Lipschitz equivalent metrics). Metrics d, d′ on a set X are said to
be Lipschitz equivalent if there are (positive) constants A, B such that

A d(x, y) ≤ d′(x, y) ≤ B d(x, y)

for all x, y ∈ X.
Clearly, any Lipschitz equivalent norms give Lipschitz equivalent metrics. Any
metric coming from a norm on R^n is thus Lipschitz equivalent to the Euclidean
metric. We will later show that two Lipschitz equivalent metrics induce the same
topology. In some sense, Lipschitz equivalent metrics are indistinguishable.
Definition (Metric subspace). Given a metric space (X, d) and a subset Y ⊆ X,
the restriction d|_{Y×Y} : Y × Y → R is a metric on Y. This is called the induced
metric or subspace metric.
Note that unlike vector subspaces, we do not require our subsets to have any
structure. We can take any subset of X and get a metric subspace.
Example. Any subspace of R^n is a metric space with the Euclidean metric.
Definition (Convergence). Let (X, d) be a metric space. A sequence (x_n) in X is
said to converge to x if d(x_n, x) → 0 as a real sequence. In other words,

(∀ε > 0)(∃K)(∀k > K) d(x_k, x) < ε.

Alternatively, this says that given any ε, for sufficiently large k, we get
x_k ∈ B_ε(x).
Again, B_r(a) is the open ball centered at a with radius r, defined as

B_r(a) = {x ∈ X : d(x, a) < r}.
Proposition. The limit of a convergent sequence is unique.
Proof. Same as that of normed spaces.
Note that notions such as convergence, open and closed subsets and continuity
of mappings all make sense in an even more general setting called topological
spaces. However, in this setting, limits of convergent sequences can fail to be
unique. We will not worry ourselves about these since we will just focus on
metric spaces.
5.2 Topology of metric spaces
We will define open subsets of a metric space in exactly the same way as we did
for normed spaces.
Definition (Open subset). Let (X, d) be a metric space. A subset U ⊆ X is
open if for every y ∈ U, there is some r > 0 such that B_r(y) ⊆ U.
This means we can write any open U as a union of open balls:

U = ⋃_{y∈U} B_{r(y)}(y)

for appropriate choices of r(y) for every y.
It is easy to check that every open ball B_r(y) is an open set. The proof is
exactly the same as what we had for normed spaces.
Note that two different metrics d, d′ on the same set X may give rise to the
same collection of open subsets.
Example. Lipschitz equivalent metrics give rise to the same collection of open
sets, i.e. if d, d′ are Lipschitz equivalent, then a subset U ⊆ X is open with
respect to d if and only if it is open with respect to d′. The proof is left as an
easy exercise.
The converse, however, is not necessarily true.
Example. Let X = R, d(x, y) = |x − y| and d′(x, y) = min{1, |x − y|}. It is
easy to check that these are not Lipschitz equivalent, but they induce the same
collection of open subsets.
Definition (Topology). Let (X, d) be a metric space. The topology on (X, d) is
the collection of open subsets of X. We say it is the topology induced by the
metric.
Definition (Topological notion). A notion or property is said to be a topological
notion or property if it only depends on the topology, and not the metric.
We will introduce a useful terminology before we go on:
Definition (Neighbourhood). Given a metric space X and a point x ∈ X, a
neighbourhood of x is an open set containing x.
Some people do not require the set to be open. Instead, they require a
neighbourhood to be a set that contains an open subset that contains x, but
this is more complicated, and we may as well work with open subsets directly.
Clearly, being a neighbourhood is a topological property.
Proposition. Let (X, d) be a metric space. Then x_k → x if and only if for every
neighbourhood V of x, there exists some K such that x_k ∈ V for all k ≥ K.
Hence convergence is a topological notion.
Proof. (⇒) Suppose x_k → x, and let V be any neighbourhood of x. Since V
is open, by definition, there exists some ε such that B_ε(x) ⊆ V. By definition
of convergence, there is some K such that x_k ∈ B_ε(x) for k ≥ K. So x_k ∈ V
whenever k ≥ K.
(⇐) Since every open ball is a neighbourhood, this direction follows directly
from definition.
Theorem. Let (X, d) be a metric space. Then
(i) The union of any collection of open sets is open
(ii) The intersection of finitely many open sets is open.
(iii) ∅ and X are open.
Proof.
(i) Let U = ⋃_α V_α, where each V_α is open. If x ∈ U, then x ∈ V_α for
some α. Since V_α is open, there exists δ > 0 such that B_δ(x) ⊆ V_α. So
B_δ(x) ⊆ ⋃_α V_α = U. So U is open.
(ii) Let U = ⋂_{i=1}^n V_i, where each V_i is open. If x ∈ U, then x ∈ V_i for all
i = 1, ··· , n. So there are δ_i > 0 with B_{δ_i}(x) ⊆ V_i. Take δ = min{δ_1, ··· , δ_n}.
Then B_δ(x) ⊆ V_i for all i. So B_δ(x) ⊆ U. So U is open.
(iii) ∅ satisfies the definition of an open subset vacuously. X is open since for
any x, B_1(x) ⊆ X.
This theorem is not important in this course. However, this will be a key
defining property we will use when we define topological spaces in IB Metric
and Topological Spaces.
We can now define closed subsets and characterize them using open subsets,
in exactly the same way as for normed spaces.
Definition (Limit point). Let (X, d) be a metric space and E ⊆ X. A point
y ∈ X is a limit point of E if there exists a sequence x_k ∈ E, x_k ≠ y, such that
x_k → y.
Definition (Closed subset). A subset E ⊆ X is closed if E contains all its limit
points.
Proposition. A subset is closed if and only if its complement is open.
Proof. Exactly the same as that of normed spaces. It is useful to observe that
y ∈ X is a limit point of E if and only if (B_r(y) \ {y}) ∩ E ≠ ∅ for all r > 0.
We can write down an analogous theorem for closed sets:
Theorem. Let (X, d) be a metric space. Then
(i) The intersection of any collection of closed sets is closed
(ii) The union of finitely many closed sets is closed.
(iii) ∅ and X are closed.
Proof. By taking complements of the result for open subsets.
Proposition. Let (X, d) be a metric space and x ∈ X. Then the singleton {x}
is a closed subset, and hence any finite subset is closed.
Proof. Let y ∈ X \ {x}. Then d(x, y) > 0, and B_{d(y,x)}(y) ⊆ X \ {x}. So
X \ {x} is open. So {x} is closed.
Alternatively, since {x} has no limit points, it contains all its limit points
vacuously. So it is closed.
5.3 Cauchy sequences and completeness
Definition (Cauchy sequence). Let (X, d) be a metric space. A sequence (x_n)
in X is Cauchy if

(∀ε > 0)(∃N)(∀n, m ≥ N) d(x_n, x_m) < ε.
Proposition. Let (X, d) be a metric space. Then
(i) Any convergent sequence is Cauchy.
(ii)
If a Cauchy sequence has a convergent subsequence, then the original
sequence converges to the same limit.
Proof.
(i) If x_k → x, then

d(x_m, x_n) ≤ d(x_m, x) + d(x_n, x) → 0

as m, n → ∞.
(ii) Suppose x_{k_j} → x. Since (x_k) is Cauchy, given ε > 0, we can choose an
N such that d(x_n, x_m) < ε/2 for all n, m ≥ N. We can also choose j_0 such
that k_{j_0} ≥ N and d(x_{k_{j_0}}, x) < ε/2. Then for any n ≥ N, we have

d(x_n, x) ≤ d(x_n, x_{k_{j_0}}) + d(x, x_{k_{j_0}}) < ε.
Definition (Complete metric space). A metric space (X, d) is complete if all
Cauchy sequences converge to a point in X.
Example. Let X = R^n with the Euclidean metric. Then X is complete.
It is easy to produce incomplete metric spaces. Since arbitrary subsets of
metric spaces are subspaces, we can just remove some random elements to make
it incomplete.
Example. Let X = (0, 1) ⊆ R with the Euclidean metric. Then this is
incomplete, since (1/k) is Cauchy but has no limit in X.
Similarly, X = R \ {0} is incomplete. Note, however, that it is possible to
construct a metric d′ on X = R \ {0} such that d′ induces the same topology on
X, but makes X complete. This shows that completeness is not a topological
property. The actual construction is left as an exercise on the example sheet.
Example. We can create an easy example of an incomplete metric on R^n. We
start by defining h : R^n → R^n by

h(x) = x/(1 + ∥x∥),

where ∥·∥ is the Euclidean norm. We can check that this is injective: if
h(x) = h(y), taking the norm gives

∥x∥/(1 + ∥x∥) = ∥y∥/(1 + ∥y∥).

Since t ↦ t/(1 + t) is strictly increasing, we must have ∥x∥ = ∥y∥. Then
h(x) = h(y) gives x/(1 + ∥x∥) = y/(1 + ∥x∥). So h(x) = h(y) implies x = y.
Now we define

d(x, y) = ∥h(x) − h(y)∥.

It is an easy check that this is a metric on R^n.
In fact, we can show that h : R^n → B_1(0), and h is a homeomorphism (i.e.
continuous bijection with continuous inverse) between R^n and the unit ball
B_1(0), both with the Euclidean metric.
To show that this metric is incomplete, we can consider the sequence x_k =
(k − 1)e_1, where e_1 = (1, 0, 0, ··· , 0) is the usual basis vector. Then (x_k) is
Cauchy in (R^n, d). To show this, first note that

h(x_k) = (1 − 1/k) e_1.

Hence we have

d(x_n, x_m) = ∥h(x_n) − h(x_m)∥ = |1/n − 1/m| → 0.

So it is Cauchy. To show it does not converge in (R^n, d), suppose d(x_k, x) → 0
for some x. Then since

d(x_k, x) = ∥h(x_k) − h(x)∥ ≥ |∥h(x_k)∥ − ∥h(x)∥|,

we must have

∥h(x)∥ = lim_{k→∞} ∥h(x_k)∥ = 1.

However, there is no element with ∥h(x)∥ = 1.
What is happening in this example is that we are pulling the whole of R^n
into the unit ball. Then under this metric, a sequence that "goes to infinity" in
the usual norm will be Cauchy in this metric, but we have nothing at infinity
for it to converge to.
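We can watch this happen numerically. The sketch below (illustrative only, in R^2) implements h and the metric d, and shows the d-distances within the sequence x_k = (k − 1)e_1 shrinking even though the sequence escapes to infinity:

```python
import math

def h(x):
    """h(x) = x/(1 + ||x||), pulling R^n into the open unit ball."""
    n = math.sqrt(sum(t * t for t in x))
    return [t / (1 + n) for t in x]

def d(x, y):
    """The incomplete metric d(x, y) = ||h(x) - h(y)||."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h(x), h(y))))

x = lambda k: [k - 1.0, 0.0]  # x_k = (k - 1) e_1 in R^2

# d(x_n, x_m) = |1/n - 1/m|, so the sequence is Cauchy in d ...
print(d(x(100), x(200)))  # ~0.005
# ... even though x_k escapes every Euclidean ball, and ||h(x_k)|| -> 1,
# a value h never attains, so there is no limit in (R^2, d).
```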
Suppose we have a complete metric space (X, d). We know that we can
form arbitrary subspaces by taking subsets of X. When will such a subspace be
complete? Clearly it has to be closed, since it has to include all its limit points.
It turns out closedness is a sufficient condition.
Theorem. Let (X, d) be a metric space, Y ⊆ X any subset. Then
(i) If (Y, d|_{Y×Y}) is complete, then Y is closed in X.
(ii) If (X, d) is complete, then (Y, d|_{Y×Y}) is complete if and only if Y is closed.
Proof.
(i) Let x ∈ X be a limit point of Y. Then there is some sequence x_k → x,
where each x_k ∈ Y. Since (x_k) is convergent, it is a Cauchy sequence.
Hence it is Cauchy in Y. By completeness of Y, (x_k) has to converge to
some point in Y. By uniqueness of limits, this limit must be x. So x ∈ Y.
So Y contains all its limit points.
(ii) We have just shown that if Y is complete, then it is closed. Now suppose
Y is closed. Let (x_k) be a Cauchy sequence in Y. Then (x_k) is Cauchy in
X. Since X is complete, x_k → x for some x ∈ X. Since x is a limit point
of Y, we must have x ∈ Y. So (x_k) converges in Y.
5.4 Compactness
Definition ((Sequential) compactness). A metric space (X, d) is (sequentially)
compact if every sequence in X has a convergent subsequence.
A subset K ⊆ X is said to be compact if (K, d|_{K×K}) is compact. In other
words, K is compact if every sequence in K has a subsequence that converges to
some point in K.
Note that when we say every sequence has a convergent subsequence, we
do not require it to be bounded. This is unlike the statement of the Bolzano-
Weierstrass theorem. In particular, R is not compact.
It follows from definition that compactness is a topological property, since it
is defined in terms of convergence, and convergence is defined in terms of open
sets.
The following theorem relates completeness with compactness.
Theorem. All compact spaces are complete and bounded.
Note that X is bounded iff X ⊆ B_r(x_0) for some r ∈ R and x_0 ∈ X (or X is
empty).
Proof. Let (X, d) be a compact metric space. Let (x_k) be Cauchy in X. By
compactness, it has some convergent subsequence, say x_{k_j} → x. So x_k → x. So
it is complete.
If (X, d) is not bounded, by definition, for any x_0, there is a sequence (x_k)
such that d(x_k, x_0) > k for every k. But then (x_k) cannot have a convergent
subsequence. Otherwise, if x_{k_j} → x, then

d(x_{k_j}, x_0) ≤ d(x_{k_j}, x) + d(x, x_0)

would be bounded, which is a contradiction.
This implies that if (X, d) is a metric space and E ⊆ X, and E is compact,
then E is bounded, i.e. E ⊆ B_R(x_0) for some x_0 ∈ X, R > 0, and E with the
subspace metric is complete. Hence E is closed as a subset of X.
The converse is not true. For example, recall that if we have an infinite-dimensional
normed vector space, then the closed unit sphere is complete and bounded, but
not compact. Alternatively, we can take X = R with the metric d(x, y) =
min{1, |x − y|}. This is clearly bounded (by 1), and it is easy to check that it is
complete. However, it is not compact, since the sequence x_k = k has no
convergent subsequence.
However, we can strengthen the condition of boundedness to total bound-
edness, and get the equivalence between “completeness and total boundedness”
and compactness.
Definition (Totally bounded*). A metric space (X, d) is said to be totally
bounded if for all ε > 0, there is an integer N ∈ N and points x_1, ··· , x_N ∈ X
such that

X = ⋃_{i=1}^N B_ε(x_i).
It is easy to check that being totally bounded implies being bounded. We
then have the following strengthening of the previous theorem.
Theorem (non-examinable). Let (X, d) be a metric space. Then X is compact
if and only if X is complete and totally bounded.
Proof. (⇐) Let X be complete and totally bounded, and let (y_i) be a sequence
in X. For every j ∈ N, there exists a finite set of points E_j such that every point
is within 1/j of one of these points.
Now since E_1 is finite, there is some x_1 ∈ E_1 such that there are infinitely
many y_i's in B(x_1, 1). Pick the first y_i in B(x_1, 1) and call it y_{i_1}.
Now there is some x_2 ∈ E_2 such that there are infinitely many y_i's in
B(x_1, 1) ∩ B(x_2, 1/2). Pick the one with smallest value of i > i_1, and call this
y_{i_2}. Continue till infinity.
This procedure gives a sequence x_j ∈ E_j and a subsequence (y_{i_k}), and also

y_{i_n} ∈ ⋂_{j=1}^n B(x_j, 1/j).

It is easy to see that (y_{i_n}) is Cauchy, since if m > n, then d(y_{i_m}, y_{i_n}) < 2/n. By
completeness of X, this subsequence converges.
(⇒) Compactness implying completeness is proved above. Suppose X is not
totally bounded. We show it is not compact by constructing a sequence with no
Cauchy subsequence.
Suppose ε is such that there is no finite set of points x_1, ··· , x_N with

X = ⋃_{i=1}^N B_ε(x_i).

We will construct our sequence iteratively.
Start by picking an arbitrary y_1. Pick y_2 such that d(y_1, y_2) ≥ ε. This exists,
or else B_ε(y_1) covers all of X.
Now given y_1, ··· , y_n such that d(y_i, y_j) ≥ ε for all i, j = 1, ··· , n, i ≠ j, we
pick y_{n+1} such that d(y_{n+1}, y_j) ≥ ε for all j = 1, ··· , n. Again, this exists, or
else ⋃_{i=1}^n B_ε(y_i) covers X. Then clearly the sequence (y_n) is not Cauchy. So
done.
(
) Compactness implying completeness is proved above. Suppose
X
is not
totally bounded. We show it is not compact by constructing a sequence with no
Cauchy subsequence.
Suppose ε is such that there is no finite set of points x
1
, ··· , x
N
with
X =
N
[
i=1
B
ε
(x
i
).
We will construct our sequence iteratively.
Start by picking an arbitrary
y
1
. Pick
y
2
such that
d
(
y
1
, y
2
)
ε
. This exists
or else B
ε
(y
1
) covers all of X.
Now given
y
1
, ··· , y
n
such that
d
(
y
i
, y
j
)
ε
for all
i, j
= 1
, ··· , n
,
i
=
j
, we
pick
y
n+1
such that
d
(
y
n+1
, y
j
)
ε
for all
j
= 1
, ··· , n
. Again, this exists, or
else
S
n
i=1
B
ε
(
y
i
) covers
X
. Then clearly the sequence (
y
n
) is not Cauchy. So
done.
In IID Linear Analysis, we will prove the Arzelà-Ascoli theorem that
characterizes the compact subsets of the space C([a, b]) in a very concrete way,
which is in some sense a strengthening of this result.
5.5 Continuous functions
We are going to look at continuous mappings between metric spaces.
Definition (Continuity). Let (X, d) and (X′, d′) be metric spaces. A function
f : X → X′ is continuous at y ∈ X if

(∀ε > 0)(∃δ > 0)(∀x) d(x, y) < δ ⇒ d′(f(x), f(y)) < ε.

This is true if and only if for every ε > 0, there is some δ > 0 such that

B_δ(y) ⊆ f^{−1}(B_ε(f(y))).

f is continuous if f is continuous at each y ∈ X.
Definition (Uniform continuity). f is uniformly continuous on X if

(∀ε > 0)(∃δ > 0)(∀x, y ∈ X) d(x, y) < δ ⇒ d′(f(x), f(y)) < ε.

This is true if and only if for all ε, there is some δ such that for all y, we have

B_δ(y) ⊆ f^{−1}(B_ε(f(y))).
Definition (Lipschitz function and Lipschitz constant). f is said to be Lipschitz
on X if there is some K ∈ [0, ∞) such that for all x, y ∈ X,

d′(f(x), f(y)) ≤ K d(x, y).

Any such K is called a Lipschitz constant.
It is easy to show that

Lipschitz ⇒ uniform continuity ⇒ continuity.

We have seen many examples showing that continuity does not imply uniform
continuity. To show that uniform continuity does not imply Lipschitz, take
X = X′ = R. We define the metrics as

d(x, y) = min{1, |x − y|}, d′(x, y) = |x − y|.
Now consider the function f : (X, d) → (X′, d′) defined by f(x) = x. We can
then check that this is uniformly continuous but not Lipschitz.
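Numerically, the failure of the Lipschitz condition shows up as an unbounded ratio d′(f(x), f(y))/d(x, y). A quick sketch (illustrative only):

```python
d  = lambda x, y: min(1.0, abs(x - y))  # truncated metric on X = R
dp = lambda x, y: abs(x - y)            # usual metric on X' = R; f is the identity

# Lipschitz would need dp(x, y) <= K * d(x, y) for one fixed K, but the ratio is unbounded:
for gap in [2, 10, 1000]:
    print(dp(0, gap) / d(0, gap))  # equals gap, since d(0, gap) = 1

# Uniform continuity holds: taking delta = min(1, eps), d(x, y) < delta forces
# |x - y| = d(x, y) < eps.
```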
Note that the statement that metrics d and d′ are Lipschitz equivalent is
equivalent to saying that the two identity maps i : (X, d) → (X, d′) and
i : (X, d′) → (X, d) are Lipschitz, hence the name.
Note also that the metric itself is a Lipschitz map for any metric. Here
we are viewing the metric as a function d : X × X → R, with the metric on
X × X defined as

d̃((x_1, y_1), (x_2, y_2)) = d(x_1, x_2) + d(y_1, y_2).

This is a consequence of the triangle inequality, since

d(x_1, y_1) ≤ d(x_1, x_2) + d(x_2, y_2) + d(y_2, y_1).

Moving the middle term to the left gives

d(x_1, y_1) − d(x_2, y_2) ≤ d̃((x_1, y_1), (x_2, y_2)).

Swapping the roles of the two pairs, we can put in the absolute value to obtain

|d(x_1, y_1) − d(x_2, y_2)| ≤ d̃((x_1, y_1), (x_2, y_2)).
Recall that at the very beginning, we proved that a continuous map from a
closed, bounded interval is automatically uniformly continuous. This is true
whenever the domain is compact.
Theorem. Let (X, d) be a compact metric space, and let (X′, d′) be any metric
space. If f : X → X′ is continuous, then f is uniformly continuous.
This is exactly the same proof as what we had for the [0, 1] case.
Proof. We are going to prove this by contradiction. Suppose f : X → X′ is not
uniformly continuous. Then there is some ε > 0 such that for all δ = 1/n, there
are some x_n, y_n such that d(x_n, y_n) < 1/n but d′(f(x_n), f(y_n)) > ε.
By compactness of X, (x_n) has a convergent subsequence (x_{n_i}) → x. Then
we also have y_{n_i} → x. So by continuity, we must have f(x_{n_i}) → f(x) and
f(y_{n_i}) → f(x). But d′(f(x_{n_i}), f(y_{n_i})) > ε for all n_i. This is a contradiction.
In the proof, we have secretly used (part of) the following characterization of
continuity:
Theorem. Let (X, d) and (X′, d′) be metric spaces, and f : X → X′. Then the
following are equivalent:
(i) f is continuous at y.
(ii) f(x_k) → f(y) for every sequence (x_k) in X with x_k → y.
(iii) For every neighbourhood V of f(y), there is a neighbourhood U of y such
that U ⊆ f^{−1}(V).
Note that the definition of continuity says something like (iii), but with open
balls instead of open sets. So this should not be surprising.
Proof.
(i) ⇔ (ii): The argument for this is the same as for normed spaces.
(i) ⇒ (iii): Let V be a neighbourhood of f(y). Then by definition there is
ε > 0 such that B_ε(f(y)) ⊆ V. By continuity of f, there is some δ such that

B_δ(y) ⊆ f^{−1}(B_ε(f(y))) ⊆ f^{−1}(V).

Set U = B_δ(y) and done.
(iii) ⇒ (i): for any ε, use the hypothesis with V = B_ε(f(y)) to get a
neighbourhood U of y such that

U ⊆ f^{−1}(V) = f^{−1}(B_ε(f(y))).

Since U is open, there is some δ such that B_δ(y) ⊆ U. So we get

B_δ(y) ⊆ f^{−1}(B_ε(f(y))).

So we get continuity.
Corollary. A function f : (X, d) → (X′, d′) is continuous if and only if f^{−1}(V)
is open in X whenever V is open in X′.
Proof.
Follows directly from the equivalence of (i) and (iii) in the theorem
above.
5.6 The contraction mapping theorem
If you have already taken IB Metric and Topological Spaces, then you were
probably bored by the above sections, since you’ve already met them all. Finally,
we get to something new. This section is comprised of just two theorems. The
first is the contraction mapping theorem, and we will use it to prove the
Picard-Lindelöf existence theorem. Later, we will prove the inverse function theorem
using the contraction mapping theorem. All of these are really powerful and
important theorems in analysis. They have many more applications and useful
corollaries, but we do not have time to get into those.
Definition (Contraction mapping). Let (X, d) be a metric space. A mapping
f : X → X is a contraction if there exists some λ with 0 ≤ λ < 1 such that

d(f(x), f(y)) ≤ λd(x, y)

for all x, y ∈ X.
Note that a contraction mapping is by definition Lipschitz and hence
(uniformly) continuous.
Theorem (Contraction mapping theorem). Let X be a (non-empty) complete
metric space. If f : X → X is a contraction, then f has a unique fixed point,
i.e. there is a unique x such that f(x) = x.
Moreover, if f : X → X is a function such that f^(m) : X → X (i.e. f
composed with itself m times) is a contraction for some m, then f has a unique
fixed point.
We can see finding fixed points as the process of solving equations. One
important application we will have is to use this to solve differential equations.
Note that the theorem is false if we drop the completeness assumption. For
example, f : (0, 1) → (0, 1) defined by f(x) = x/2 is clearly a contraction with
no fixed point. The theorem is also false if we drop the assumption λ < 1. In
fact, it is not enough to assume d(f(x), f(y)) < d(x, y) for all x ≠ y. A
counterexample is to be found on example sheet 3.
Proof. We first focus on the case where f itself is a contraction.
Uniqueness is straightforward. By assumption, there is some 0 ≤ λ < 1 such
that

d(f(x), f(y)) ≤ λd(x, y)

for all x, y ∈ X. If x and y are both fixed points, then this says

d(x, y) = d(f(x), f(y)) ≤ λd(x, y).

This is possible only if d(x, y) = 0, i.e. x = y.
To prove existence, the idea is to pick a point x_0 and keep applying f. Let
x_0 ∈ X. We define the sequence (x_n) inductively by

x_{n+1} = f(x_n).

We first show that this is Cauchy. For any n ≥ 1, we can compute

d(x_{n+1}, x_n) = d(f(x_n), f(x_{n−1})) ≤ λd(x_n, x_{n−1}) ≤ λ^n d(x_1, x_0).

Since this is true for any n, for m > n, we have

d(x_m, x_n) ≤ d(x_m, x_{m−1}) + d(x_{m−1}, x_{m−2}) + ··· + d(x_{n+1}, x_n)
= Σ_{j=n}^{m−1} d(x_{j+1}, x_j)
≤ Σ_{j=n}^{m−1} λ^j d(x_1, x_0)
≤ d(x_1, x_0) Σ_{j=n}^∞ λ^j
= λ^n/(1 − λ) · d(x_1, x_0).

Note that we have again used the property that λ < 1.
This implies d(x_m, x_n) → 0 as m, n → ∞. So this sequence is Cauchy. By
the completeness of X, there exists some x ∈ X such that x_n → x. Since f
is a contraction, it is continuous. So f(x_n) → f(x). However, by definition
f(x_n) = x_{n+1}. So taking the limit on both sides, we get f(x) = x. So x is a
fixed point.
Now suppose that f^(m) is a contraction for some m. Hence by the first part,
there is a unique x ∈ X such that f^(m)(x) = x. But then

f^(m)(f(x)) = f^(m+1)(x) = f(f^(m)(x)) = f(x).

So f(x) is also a fixed point of f^(m). By uniqueness of fixed points, we must
have f(x) = x. Since any fixed point of f is clearly a fixed point of f^(m) as well,
it follows that x is the unique fixed point of f.
Based on the proof of the theorem, we have the following error estimate in
the contraction mapping theorem: for x_0 ∈ X and x_n = f(x_{n−1}), we showed
that for m > n, we have

d(x_m, x_n) ≤ λ^n/(1 − λ) · d(x_1, x_0).

If x_n → x, taking the limit of the above bound as m → ∞ gives

d(x, x_n) ≤ λ^n/(1 − λ) · d(x_1, x_0).

This is valid for all n.
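The iteration and the a priori error estimate translate directly into an algorithm. A sketch (assuming f = cos on X = [0, 1], where cos maps [0, 1] into itself and |cos′| ≤ sin 1 < 1, so λ = sin 1 works):

```python
import math

def fixed_point(f, x0, lam, tol=1e-10):
    """Iterate x_{n+1} = f(x_n), stopping once the a priori bound
    d(x, x_n) <= lam^n / (1 - lam) * d(x_1, x_0) guarantees accuracy tol."""
    x_prev, x = x0, f(x0)
    d1 = abs(x - x_prev)
    n = 1
    while lam ** n / (1 - lam) * d1 > tol:
        x_prev, x = x, f(x)
        n += 1
    return x, n

# cos is a contraction on [0, 1] with lam = sin(1) ~ 0.84.
x, n = fixed_point(math.cos, 0.5, math.sin(1))
print(x)                     # the unique fixed point, ~0.739085
print(abs(math.cos(x) - x))  # essentially zero
```

Because the bound is known before computing, it tells us in advance how many iterations suffice; the actual error is usually far smaller than the estimate.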
We are now going to use this to obtain the Picard-Lindelöf existence theorem
for ordinary differential equations. The objective is as follows. Suppose we are
given a function

F = (F_1, F_2, ··· , F_n) : R × R^n → R^n.

We interpret the R as time and the R^n as space.
Given t_0 ∈ R and x_0 ∈ R^n, we want to know when we can find a solution to
the ODE

df/dt = F(t, f(t))

subject to f(t_0) = x_0. We would like this solution to be valid (at least) for all t
in some interval I containing t_0.
More explicitly, we want to understand when there will be some ε > 0
and a differentiable function f = (f_1, ··· , f_n) : (t_0 − ε, t_0 + ε) → R^n (i.e.
f_j : (t_0 − ε, t_0 + ε) → R is differentiable for all j) satisfying

df_j/dt = F_j(t, f_1(t), ··· , f_n(t))

such that f_j(t_0) = x_0^(j) for all j = 1, ..., n and t ∈ (t_0 − ε, t_0 + ε).
We can imagine this scenario as a particle moving in R^n, passing through x_0
at time t_0. We then ask if there is a trajectory f(t) such that the velocity of the
particle at any time t is given by F(t, f(t)).
This is a complicated system, since it is a coupled system of many variables.
Explicit solutions are usually impossible, but in certain cases, we can prove the
existence of a solution. Of course, solutions need not exist for arbitrary
F
. For
example, there will be no solution if
F
is everywhere discontinuous, since any
derivative is continuous in a dense set of points. The Picard-Lindel¨of existence
theorem gives us sufficient conditions for a unique solution to exists.
We will need the following notation.
Notation. For x_0 ∈ R^n and R > 0, we let

B_R(x_0) = {x ∈ R^n : ∥x − x_0∥_2 ≤ R}.
Then the theorem says

Theorem (Picard-Lindelöf existence theorem). Let x_0 ∈ R^n, R > 0, a < b,
t_0 ∈ [a, b]. Let F : [a, b] × B_R(x_0) → R^n be a continuous function satisfying

∥F(t, x) − F(t, y)∥_2 ≤ κ∥x − y∥_2

for some fixed κ > 0 and all t ∈ [a, b], x, y ∈ B_R(x_0). In other words,
F(t, ·) : R^n → R^n is Lipschitz on B_R(x_0) with the same Lipschitz constant for
every t. Then
(i) There exists an ε > 0 and a unique differentiable function
f : [t_0 − ε, t_0 + ε] ∩ [a, b] → R^n such that

df/dt = F(t, f(t)) (∗)

and f(t_0) = x_0.
(ii) If

sup_{[a,b] × B_R(x_0)} ∥F∥_2 ≤ R/(b − a),

then there exists a unique differentiable function f : [a, b] → R^n that satisfies
the differential equation and boundary condition above.
Even n = 1 is an important, special, non-trivial case. Even if we have only
one dimension, explicit solutions may be very difficult to find, if not impossible.
For example,

df/dt = f^2 + sin f + e^f

would be almost impossible to solve explicitly. However, the theorem tells us
there will be a solution, at least locally.
Note that any differentiable f satisfying the differential equation is
automatically continuously differentiable, since the derivative is F(t, f(t)),
which is continuous.
Before we prove the theorem, we first show the requirements are indeed
necessary. We first look at the ε in (i). Without the additional requirement in
(ii), there might not exist a solution globally on [a, b]. For example, we can
consider the n = 1 case, where we want to solve

df/dt = f^2,

with boundary condition f(0) = 1. Our F(t, f) = f^2 is a nice, uniformly
Lipschitz function on any [0, b] × B_R(1) = [0, b] × [1 − R, 1 + R]. However, we
will shortly see that there is no global solution.
If we assume $f \ne 0$, then for all $t \in [0, b]$, the equation is equivalent to
\[
  \frac{\mathrm{d}}{\mathrm{d}t}(t + f^{-1}) = 0.
\]
So we need $t + f^{-1}$ to be constant. The initial condition tells us this constant is $1$. So we have
\[
  f(t) = \frac{1}{1 - t}.
\]
Hence the solution on $[0, 1)$ is $\frac{1}{1-t}$. Any solution on $[0, b]$ must agree with this on $[0, 1)$. So if $b \ge 1$, then there is no solution on $[0, b]$.
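As a quick numerical sanity check (a sketch, not part of the lectures; the helper names `f` and `dfdt` are ours), we can verify that $f(t) = \frac{1}{1-t}$ does solve $f' = f^2$ with $f(0) = 1$, and watch it blow up as $t \to 1$:

```python
# Sketch: check numerically that f(t) = 1/(1 - t) solves df/dt = f^2
# with f(0) = 1, and that it leaves every ball B_R(1) as t -> 1.

def f(t):
    return 1.0 / (1.0 - t)

def dfdt(t, h=1e-6):
    # central-difference approximation to f'(t)
    return (f(t + h) - f(t - h)) / (2 * h)

# f'(t) should agree with f(t)^2 throughout [0, 1)
for t in [0.0, 0.25, 0.5, 0.9]:
    assert abs(dfdt(t) - f(t) ** 2) < 1e-3 * f(t) ** 2

print(f(0.999))  # large: the solution escapes to infinity near t = 1
```

This is why only a local solution on $[t_0 - \varepsilon, t_0 + \varepsilon]$ can be guaranteed without the extra bound in (ii).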
The Lipschitz condition is also necessary to guarantee uniqueness. Without this condition, existence of a solution is still guaranteed (but by another theorem, the Cauchy–Peano theorem), and we could have many different solutions. For example, we can consider the differential equation
\[
  \frac{\mathrm{d}f}{\mathrm{d}t} = \sqrt{|f|}
\]
with $f(0) = 0$. Here $F(t, x) = \sqrt{|x|}$ is not Lipschitz near $x = 0$. It is easy to see that $f = 0$ and $f(t) = \frac{1}{4}t^2$ are both solutions. In fact, for any $\alpha \in [0, b]$, the function
\[
  f_\alpha(t) =
  \begin{cases}
    0 & 0 \le t \le \alpha\\
    \frac{1}{4}(t - \alpha)^2 & \alpha \le t \le b
  \end{cases}
\]
is also a solution. So we have an infinite number of solutions.
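We can check the non-uniqueness numerically (a sketch, not from the notes; the value $\alpha = 0.3$ is an arbitrary choice of the parameter from the family $f_\alpha$ above):

```python
import math

# Sketch: verify that two genuinely different functions both solve
# df/dt = sqrt(|f|) with f(0) = 0.

alpha = 0.3  # arbitrary member of the family f_alpha

def f_alpha(t):
    return 0.0 if t <= alpha else 0.25 * (t - alpha) ** 2

def deriv(g, t, h=1e-6):
    # central-difference derivative of g at t
    return (g(t + h) - g(t - h)) / (2 * h)

for t in [0.1, 0.5, 0.9]:  # avoid t = alpha, where only one-sided info exists
    # f_alpha satisfies the ODE ...
    assert abs(deriv(f_alpha, t) - math.sqrt(abs(f_alpha(t)))) < 1e-5
    # ... and so does the zero function
    assert abs(deriv(lambda s: 0.0, t)) < 1e-12
```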
We are now going to use the contraction mapping theorem to prove this. In
general, this is a very useful idea. It is in fact possible to use other fixed point
theorems to show the existence of solutions to partial differential equations. This
is much more difficult, but has many far-reaching important applications to
theoretical physics and geometry, say. For these, see Part III courses.
Proof. First, note that (ii) implies (i). We know that
\[
  \sup_{[a,b] \times \bar{B}_R(x_0)} \|F\|_2
\]
is bounded since $F$ is a continuous function on a compact domain. So we can pick an $\varepsilon$ such that
\[
  2\varepsilon \le \frac{R}{\sup_{[a,b] \times \bar{B}_R(x_0)} \|F\|_2}.
\]
Then writing $[t_0 - \varepsilon, t_0 + \varepsilon] \cap [a, b] = [a_1, b_1]$, we have
\[
  \sup_{[a_1,b_1] \times \bar{B}_R(x_0)} \|F\|_2 \le \sup_{[a,b] \times \bar{B}_R(x_0)} \|F\|_2 \le \frac{R}{2\varepsilon} \le \frac{R}{b_1 - a_1}.
\]
So (ii) implies there is a solution on $[t_0 - \varepsilon, t_0 + \varepsilon] \cap [a, b]$. Hence it suffices to prove (ii).
To apply the contraction mapping theorem, we need to convert this into a fixed point problem. The key is to reformulate the problem as an integral equation. We know that a differentiable $f : [a, b] \to \mathbb{R}^n$ satisfies the differential equation $(*)$ if and only if $f : [a, b] \to \bar{B}_R(x_0)$ is continuous and satisfies
\[
  f(t) = x_0 + \int_{t_0}^t F(s, f(s))\;\mathrm{d}s
\]
by the fundamental theorem of calculus. Note that we don't require $f$ to be differentiable, since if a continuous $f$ satisfies this equation, it is automatically differentiable by the fundamental theorem of calculus. This is very helpful, since we can work with the much larger vector space of continuous functions, where it is easier to find a solution.
We let $X = C([a, b], \bar{B}_R(x_0))$. We equip $X$ with the supremum metric
\[
  \|g - h\| = \sup_{t \in [a,b]} \|g(t) - h(t)\|_2.
\]
We see that $X$ is a closed subset of the complete metric space $C([a, b], \mathbb{R}^n)$ (again taken with the supremum metric). So $X$ is complete. For every $g \in X$, we define a function $Tg : [a, b] \to \mathbb{R}^n$ by
\[
  (Tg)(t) = x_0 + \int_{t_0}^t F(s, g(s))\;\mathrm{d}s.
\]
Our differential equation is thus
\[
  f = Tf.
\]
So we first want to show that $T$ actually maps $X$ to $X$, i.e. $Tg \in X$ whenever $g \in X$, and then prove it is a contraction map.
We have
\begin{align*}
  \|Tg(t) - x_0\|_2 &= \left\|\int_{t_0}^t F(s, g(s))\;\mathrm{d}s\right\|_2\\
  &\le \left|\int_{t_0}^t \|F(s, g(s))\|_2\;\mathrm{d}s\right|\\
  &\le \sup_{[a,b] \times \bar{B}_R(x_0)} \|F\|_2 \cdot |b - a|\\
  &\le R.
\end{align*}
Hence we know that $Tg(t) \in \bar{B}_R(x_0)$. So $Tg \in X$.
Next, we need to show this is a contraction. However, it turns out $T$ need not be a contraction. Instead, what we have is that for $g_1, g_2 \in X$,
\begin{align*}
  \|Tg_1(t) - Tg_2(t)\|_2 &= \left\|\int_{t_0}^t F(s, g_1(s)) - F(s, g_2(s))\;\mathrm{d}s\right\|_2\\
  &\le \left|\int_{t_0}^t \|F(s, g_1(s)) - F(s, g_2(s))\|_2\;\mathrm{d}s\right|\\
  &\le \kappa(b - a)\|g_1 - g_2\|
\end{align*}
by the Lipschitz condition on $F$. If we indeed have
\[
  \kappa(b - a) < 1, \tag{$\dagger$}
\]
then the contraction mapping theorem gives an $f \in X$ such that
\[
  Tf = f,
\]
i.e.
\[
  f(t) = x_0 + \int_{t_0}^t F(s, f(s))\;\mathrm{d}s.
\]
However, we do not necessarily have $(\dagger)$. There are many ways we can solve this problem. Here, we can solve it by finding an $m$ such that $T^{(m)} = T \circ T \circ \cdots \circ T : X \to X$ is a contraction map. We will in fact show that this map satisfies the bound
\[
  \sup_{t \in [a,b]} \|T^{(m)}g_1(t) - T^{(m)}g_2(t)\| \le \frac{(b - a)^m \kappa^m}{m!} \sup_{t \in [a,b]} \|g_1(t) - g_2(t)\|. \tag{$\ddagger$}
\]
The key is the $m!$, since this grows much faster than any exponential. Given this bound, we know that for sufficiently large $m$, we have
\[
  \frac{(b - a)^m \kappa^m}{m!} < 1,
\]
i.e. $T^{(m)}$ is a contraction. So by the contraction mapping theorem, the result holds.
So it only remains to prove the bound. To prove this, we prove instead the pointwise bound: for any $t \in [a, b]$, we have
\[
  \|T^{(m)}g_1(t) - T^{(m)}g_2(t)\|_2 \le \frac{|t - t_0|^m \kappa^m}{m!} \sup_{s \in [t_0, t]} \|g_1(s) - g_2(s)\|.
\]
From this, taking the supremum on the left, we obtain the bound $(\ddagger)$.
To prove this pointwise bound, we induct on $m$. We wlog assume $t > t_0$. We know that for every $m$, the difference is given by
\begin{align*}
  \|T^{(m)}g_1(t) - T^{(m)}g_2(t)\|_2 &= \left\|\int_{t_0}^t F(s, T^{(m-1)}g_1(s)) - F(s, T^{(m-1)}g_2(s))\;\mathrm{d}s\right\|_2\\
  &\le \kappa \int_{t_0}^t \|T^{(m-1)}g_1(s) - T^{(m-1)}g_2(s)\|_2\;\mathrm{d}s.
\end{align*}
This is true for all $m$. If $m = 1$, then this gives
\[
  \|Tg_1(t) - Tg_2(t)\| \le \kappa(t - t_0) \sup_{[t_0, t]} \|g_1 - g_2\|_2.
\]
So the base case is done.
For $m \ge 2$, assume by induction the bound holds with $m - 1$ in place of $m$. Then the bounds give
\begin{align*}
  \|T^{(m)}g_1(t) - T^{(m)}g_2(t)\| &\le \kappa \int_{t_0}^t \frac{\kappa^{m-1}(s - t_0)^{m-1}}{(m - 1)!} \sup_{[t_0, s]} \|g_1 - g_2\|_2\;\mathrm{d}s\\
  &\le \frac{\kappa^m}{(m - 1)!} \sup_{[t_0, t]} \|g_1 - g_2\|_2 \int_{t_0}^t (s - t_0)^{m-1}\;\mathrm{d}s\\
  &= \frac{\kappa^m (t - t_0)^m}{m!} \sup_{[t_0, t]} \|g_1 - g_2\|_2.
\end{align*}
So done.
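The iteration in the proof can be watched in action. The following is a numerical sketch (not from the notes): we apply $T$ repeatedly to a starting guess for the equation $f' = f$, $f(0) = 1$ on $[0, 1]$, whose solution is $e^t$. Functions are represented by their values on a grid, and the exact integral in $T$ is replaced by a cumulative trapezium rule; the iterates are essentially the partial Taylor sums of $\exp$.

```python
import math

# Sketch of the Picard iteration f_{m+1} = T f_m for df/dt = f, f(0) = 1.
N = 1000
dt = 1.0 / N

def T(g):
    # (Tg)(t) = 1 + integral_0^t g(s) ds, via a cumulative trapezium rule
    out = [1.0]
    integral = 0.0
    for i in range(1, N + 1):
        integral += 0.5 * (g[i - 1] + g[i]) * dt
        out.append(1.0 + integral)
    return out

g = [1.0] * (N + 1)      # start from the constant function g = 1
for _ in range(20):      # each application of T adds one more Taylor term
    g = T(g)

assert abs(g[-1] - math.e) < 1e-4   # g(1) converges to e
```

The rapid convergence reflects the $\kappa^m (b-a)^m / m!$ factor in the bound $(\ddagger)$ above: the factorial eventually beats any exponential.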
Note that to get the factor of $m!$, we had to actually perform the integral, instead of just bounding $(s - t_0)^{m-1}$ by $(t - t_0)^{m-1}$. In general, this is a good strategy if we want tight bounds. Instead of bounding
\[
  \left|\int_a^b f(x)\;\mathrm{d}x\right| \le (b - a)\sup|f(x)|,
\]
we write $f(x) = g(x)h(x)$, where $h(x)$ is something easily integrable. Then we can have a bound
\[
  \left|\int_a^b f(x)\;\mathrm{d}x\right| \le \sup|g(x)| \int_a^b |h(x)|\;\mathrm{d}x.
\]
6 Differentiation from $\mathbb{R}^m$ to $\mathbb{R}^n$

6.1 Differentiation from $\mathbb{R}^m$ to $\mathbb{R}^n$
We are now going to investigate differentiation of functions $f : \mathbb{R}^n \to \mathbb{R}^m$. The hard part is to first come up with a sensible definition of what this means. There is no obvious way to generalize what we had for real functions. After defining it, we will need to do some hard work to come up with easy ways to check if functions are differentiable. Then we can use it to prove some useful results like the mean value inequality. We will always use the usual Euclidean norm.

To define differentiation in $\mathbb{R}^n$, we first need a definition of the limit.
Definition (Limit of function). Let $E \subseteq \mathbb{R}^n$ and $f : E \to \mathbb{R}^m$. Let $a \in \mathbb{R}^n$ be a limit point of $E$, and let $b \in \mathbb{R}^m$. We say
\[
  \lim_{x \to a} f(x) = b
\]
if for every $\varepsilon > 0$, there is some $\delta > 0$ such that
\[
  (x \in E)\; 0 < \|x - a\| < \delta \Rightarrow \|f(x) - b\| < \varepsilon.
\]
As in the case of $\mathbb{R}$ in IA Analysis I, we do not impose any requirements on $f$ when $x = a$. In particular, we don't assume that $a$ is in the domain $E$.
We would like a definition of differentiation for functions $f : \mathbb{R}^n \to \mathbb{R}$ (or more generally $f : \mathbb{R}^n \to \mathbb{R}^m$) that directly extends the familiar definition on the real line. Recall that if $f : (b, c) \to \mathbb{R}$ and $a \in (b, c)$, we say $f$ is differentiable if the limit
\[
  Df(a) = f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h} \tag{$*$}
\]
exists (as a real number). This cannot be extended to higher dimensions directly, since $h$ would become a vector in $\mathbb{R}^n$, and it is not clear what we mean by dividing by a vector. We might try dividing by $\|h\|$ instead, i.e. require that
\[
  \lim_{h \to 0} \frac{f(a + h) - f(a)}{\|h\|}
\]
exists. However, this is clearly wrong, since in the case of $n = 1$, this reduces to the existence of the limit
\[
  \lim_{h \to 0} \frac{f(a + h) - f(a)}{|h|},
\]
which almost never exists, e.g. when $f(x) = x$. It is also possible that this exists while the genuine derivative does not, e.g. when $f(x) = |x|$ at $x = 0$. So this is clearly wrong.
Now we are a bit stuck. We need to divide by something, and that thing better be a scalar. $\|h\|$ is not exactly what we want. What should we do? The idea is to move $f'(a)$ to the other side of the equation, and $(*)$ becomes
\[
  \lim_{h \to 0} \frac{f(a + h) - f(a) - f'(a)h}{h} = 0.
\]
Now if we replace the $h$ in the denominator by $|h|$, nothing changes. So this is equivalent to
\[
  \lim_{h \to 0} \frac{f(a + h) - f(a) - f'(a)h}{|h|} = 0.
\]
In other words, the function $f$ is differentiable if there is some $A$ such that
\[
  \lim_{h \to 0} \frac{f(a + h) - f(a) - Ah}{|h|} = 0,
\]
and we call $A$ the derivative.
We are now in a good shape to generalize. Note that if $f : \mathbb{R}^n \to \mathbb{R}$ is a real-valued function, then $f(a + h) - f(a)$ is a scalar, but $h$ is a vector. So $A$ is not just a number, but a (row) vector. In general, if our function $f : \mathbb{R}^n \to \mathbb{R}^m$ is vector-valued, then our $A$ should be an $m \times n$ matrix. Alternatively, $A$ is a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$.
Definition (Differentiation in $\mathbb{R}^n$). Let $U \subseteq \mathbb{R}^n$ be open, $f : U \to \mathbb{R}^m$. We say $f$ is differentiable at a point $a \in U$ if there exists a linear map $A : \mathbb{R}^n \to \mathbb{R}^m$ such that
\[
  \lim_{h \to 0} \frac{\|f(a + h) - f(a) - Ah\|}{\|h\|} = 0.
\]
We call $A$ the derivative of $f$ at $a$. We write the derivative as $Df(a)$.

This is equivalent to saying
\[
  \lim_{x \to a} \frac{\|f(x) - f(a) - A(x - a)\|}{\|x - a\|} = 0.
\]
Note that this is completely consistent with our usual definition in the case where $n = m = 1$, as we have discussed above, since a linear transformation $\alpha : \mathbb{R} \to \mathbb{R}$ is just given by $\alpha(h) = Ah$ for some real $A \in \mathbb{R}$.
One might instead attempt to define differentiability as follows: for any $f : \mathbb{R}^n \to \mathbb{R}$, we say $f$ is differentiable at $x$ if $f$ is differentiable when restricted to any line passing through $x$. However, this is a weaker notion, and we will later see that if we define differentiability this way, then differentiability will no longer imply continuity, which is bad.
Having defined differentiation, we want to show that the derivative is unique.

Proposition (Uniqueness of derivative). Derivatives are unique.

Proof. Suppose $A, B : \mathbb{R}^n \to \mathbb{R}^m$ both satisfy the conditions
\[
  \lim_{h \to 0} \frac{\|f(a + h) - f(a) - Ah\|}{\|h\|} = 0, \quad
  \lim_{h \to 0} \frac{\|f(a + h) - f(a) - Bh\|}{\|h\|} = 0.
\]
By the triangle inequality, we get
\[
  \|(B - A)h\| \le \|f(a + h) - f(a) - Ah\| + \|f(a + h) - f(a) - Bh\|.
\]
So
\[
  \frac{\|(B - A)h\|}{\|h\|} \to 0
\]
as $h \to 0$. Fix any $u \ne 0$ and set $h = tu$ to get
\[
  \frac{\|(B - A)(tu)\|}{\|tu\|} \to 0
\]
as $t \to 0$. Since $B - A$ is linear, we know
\[
  \frac{\|(B - A)(tu)\|}{\|tu\|} = \frac{\|(B - A)u\|}{\|u\|}.
\]
So $(B - A)u = 0$ for all $u \in \mathbb{R}^n$. So $B = A$.
Notation. We write $L(\mathbb{R}^n; \mathbb{R}^m)$ for the space of linear maps $A : \mathbb{R}^n \to \mathbb{R}^m$. So $Df(a) \in L(\mathbb{R}^n; \mathbb{R}^m)$.
To avoid having to write limits and divisions all over the place, we have the following convenient notation:

Notation (Little $o$ notation). For any function $\alpha : B_r(0) \subseteq \mathbb{R}^n \to \mathbb{R}^m$, write
\[
  \alpha(h) = o(h)
\]
if
\[
  \frac{\|\alpha(h)\|}{\|h\|} \to 0 \text{ as } h \to 0.
\]
In other words, $\alpha \to 0$ faster than $\|h\|$ as $h \to 0$.

Note that officially, $\alpha(h) = o(h)$ as a whole is a piece of notation, and does not represent equality.

Then the condition for differentiability can be written as: $f : U \to \mathbb{R}^m$ is differentiable at $a \in U$ if there is some $A$ with
\[
  f(a + h) - f(a) - Ah = o(h).
\]
Alternatively,
\[
  f(a + h) = f(a) + Ah + o(h).
\]
Note that we require the domain $U$ of $f$ to be open, so that for each $a \in U$, there is a small ball around $a$ on which $f$ is defined, and hence $f(a + h)$ is defined for sufficiently small $h$. We could relax this condition and consider "one-sided" derivatives instead, but we will not look into these in this course.

We can interpret the definition of differentiability as saying that we can find a "good" linear approximation (technically, it is affine, not linear) to the function $f$ near $a$.
While the definition of the derivative is good, it is purely existential. This is unlike the definition of differentiability of real functions, where we are asked to compute an explicit limit: if the limit exists, that's the derivative; if not, the function is not differentiable. In the higher-dimensional world, this is not the case. We have completely no idea where to find the derivative, even if we know it exists. So we would like an explicit formula for it.
The idea is to look at specific "directions" instead of finding the general derivative. As always, let $f : U \to \mathbb{R}^m$ be differentiable at $a \in U$. Fix some $u \in \mathbb{R}^n$ and take $h = tu$ (with $t \in \mathbb{R}$). Assuming $u \ne 0$, differentiability tells us
\[
  \lim_{t \to 0} \frac{\|f(a + tu) - f(a) - Df(a)(tu)\|}{\|tu\|} = 0.
\]
This is equivalent to saying
\[
  \lim_{t \to 0} \frac{\|f(a + tu) - f(a) - tDf(a)u\|}{|t|\|u\|} = 0.
\]
Since $u$ is fixed, this in turn is equivalent to
\[
  \lim_{t \to 0} \frac{f(a + tu) - f(a) - tDf(a)u}{t} = 0.
\]
This, finally, is equal to
\[
  Df(a)u = \lim_{t \to 0} \frac{f(a + tu) - f(a)}{t}.
\]
We derived this assuming $u \ne 0$, but it is trivially true for $u = 0$. So it is valid for all $u$.
This is of the same form as the usual derivative, and it is usually not too difficult to compute this limit. Note, however, that this says that if the derivative exists, then the limit above is related to the derivative as above. Even if the limit exists for all $u$, we still cannot conclude that the derivative exists.

Regardless, even if the derivative does not exist, this limit is still often a useful notion.

Definition (Directional derivative). We write
\[
  D_u f(a) = \lim_{t \to 0} \frac{f(a + tu) - f(a)}{t}
\]
whenever this limit exists. We call $D_u f(a)$ the directional derivative of $f$ at $a \in U$ in the direction of $u \in \mathbb{R}^n$.

By definition, we have
\[
  D_u f(a) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0} f(a + tu).
\]
Often, it is convenient to focus on the special cases where $u = e_j$, a member of the standard basis for $\mathbb{R}^n$. This is known as the partial derivative. By convention, this is defined for real-valued functions only, but the same definition works for any $\mathbb{R}^m$-valued function.

Definition (Partial derivative). The $j$th partial derivative of $f : U \to \mathbb{R}$ at $a \in U$ is
\[
  D_{e_j} f(a) = \lim_{t \to 0} \frac{f(a + te_j) - f(a)}{t},
\]
when the limit exists. We often write this as
\[
  D_{e_j} f(a) = D_j f(a) = \frac{\partial f}{\partial x_j}.
\]
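In practice these limits are easy to approximate numerically. The following sketch (not part of the lectures; the function $f(x, y) = x^2 y + \sin y$ is an arbitrary smooth example of ours) approximates directional and partial derivatives by the symmetric difference quotient $\big(f(a + tu) - f(a - tu)\big)/2t$:

```python
import math

# Sketch: numerical directional derivatives D_u f(a); the partials are
# the special cases u = e_1, e_2.

def f(x, y):
    return x ** 2 * y + math.sin(y)

def directional(a, u, t=1e-6):
    ax, ay = a
    ux, uy = u
    return (f(ax + t * ux, ay + t * uy) - f(ax - t * ux, ay - t * uy)) / (2 * t)

a = (1.0, 2.0)
d1 = directional(a, (1.0, 0.0))   # df/dx = 2xy       = 4 at (1, 2)
d2 = directional(a, (0.0, 1.0))   # df/dy = x^2+cos y = 1 + cos 2 at (1, 2)

assert abs(d1 - 4.0) < 1e-5
assert abs(d2 - (1.0 + math.cos(2.0))) < 1e-5
```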
Note that these definitions do not require differentiability of f at a. We
will see some examples shortly. Before that, we first establish some elementary
properties of differentiable functions.
Proposition. Let $U \subseteq \mathbb{R}^n$ be open, $a \in U$.

(i) If $f : U \to \mathbb{R}^m$ is differentiable at $a$, then $f$ is continuous at $a$.

(ii) If we write $f = (f_1, f_2, \cdots, f_m) : U \to \mathbb{R}^m$, where each $f_i : U \to \mathbb{R}$, then $f$ is differentiable at $a$ if and only if each $f_j$ is differentiable at $a$ for each $j$.

(iii) If $f, g : U \to \mathbb{R}^m$ are both differentiable at $a$, then $\lambda f + \mu g$ is differentiable at $a$ with
\[
  D(\lambda f + \mu g)(a) = \lambda Df(a) + \mu Dg(a).
\]

(iv) If $A : \mathbb{R}^n \to \mathbb{R}^m$ is a linear map, then $A$ is differentiable for any $a \in \mathbb{R}^n$ with
\[
  DA(a) = A.
\]

(v) If $f$ is differentiable at $a$, then the directional derivative $D_u f(a)$ exists for all $u \in \mathbb{R}^n$, and in fact
\[
  D_u f(a) = Df(a)u.
\]

(vi) If $f$ is differentiable at $a$, then all partial derivatives $D_j f_i(a)$ exist for $j = 1, \cdots, n$; $i = 1, \cdots, m$, and are given by
\[
  D_j f_i(a) = Df_i(a)e_j.
\]

(vii) Let $A = (A_{ij})$ be the matrix representing $Df(a)$ with respect to the standard bases for $\mathbb{R}^n$ and $\mathbb{R}^m$, i.e. for any $h \in \mathbb{R}^n$,
\[
  Df(a)h = Ah.
\]
Then $A$ is given by
\[
  A_{ij} = \langle Df(a)e_j, b_i\rangle = D_j f_i(a),
\]
where $\{e_1, \cdots, e_n\}$ is the standard basis for $\mathbb{R}^n$, and $\{b_1, \cdots, b_m\}$ is the standard basis for $\mathbb{R}^m$.
The second property is useful, since instead of considering arbitrary $\mathbb{R}^m$-valued functions, we can just look at real-valued functions.
Proof.

(i) By definition, if $f$ is differentiable, then as $h \to 0$, we know
\[
  \|f(a + h) - f(a) - Df(a)h\| \to 0.
\]
Since $Df(a)h \to 0$ as well, we must have $f(a + h) \to f(a)$.

(ii) Exercise on example sheet 4.

(iii) We just have to check this directly. We have
\[
  \frac{(\lambda f + \mu g)(a + h) - (\lambda f + \mu g)(a) - (\lambda Df(a) + \mu Dg(a))h}{\|h\|}
  = \lambda \frac{f(a + h) - f(a) - Df(a)h}{\|h\|} + \mu \frac{g(a + h) - g(a) - Dg(a)h}{\|h\|},
\]
which tends to $0$ as $h \to 0$. So done.

(iv) Since $A$ is linear, we always have $A(a + h) - A(a) - Ah = 0$ for all $h$.

(v) We've proved this in the previous discussion.

(vi) We've proved this in the previous discussion.

(vii) This follows from the general result for linear maps: for any linear map represented by $(A_{ij})_{m \times n}$, we have
\[
  A_{ij} = \langle Ae_j, b_i\rangle.
\]
Apply this with $A = Df(a)$ and note that for any $h \in \mathbb{R}^n$,
\[
  Df(a)h = (Df_1(a)h, \cdots, Df_m(a)h).
\]
So done.
The above says differentiability at a point implies the existence of all directional derivatives, which in turn implies the existence of all partial derivatives. The converse implication does not hold in either case.
Example. Let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by
\[
  f(x, y) =
  \begin{cases}
    0 & xy = 0\\
    1 & xy \ne 0
  \end{cases}
\]
Then the partial derivatives are
\[
  \frac{\partial f}{\partial x}(0, 0) = \frac{\partial f}{\partial y}(0, 0) = 0.
\]
In other directions, say $u = (1, 1)$, we have
\[
  \frac{f(0 + tu) - f(0)}{t} = \frac{1}{t},
\]
which diverges as $t \to 0$. So the directional derivative does not exist.
Example. Let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by
\[
  f(x, y) =
  \begin{cases}
    \frac{x^3}{y} & y \ne 0\\
    0 & y = 0
  \end{cases}
\]
Then for $u = (u_1, u_2) \ne 0$ and $t \ne 0$, we can compute
\[
  \frac{f(0 + tu) - f(0)}{t} =
  \begin{cases}
    \frac{tu_1^3}{u_2} & u_2 \ne 0\\
    0 & u_2 = 0
  \end{cases}
\]
So
\[
  D_u f(0) = \lim_{t \to 0} \frac{f(0 + tu) - f(0)}{t} = 0,
\]
and the directional derivative exists. However, the function is not differentiable at $0$, since it is not even continuous at $0$, as
\[
  f(\delta, \delta^4) = \frac{1}{\delta}
\]
diverges as $\delta \to 0$.
Example. Let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by
\[
  f(x, y) =
  \begin{cases}
    \frac{x^3}{x^2 + y^2} & (x, y) \ne (0, 0)\\
    0 & (x, y) = (0, 0)
  \end{cases}
\]
It is clear that $f$ is continuous at points other than $0$, and $f$ is also continuous at $0$ since $|f(x, y)| \le |x|$. We can compute the partial derivatives as
\[
  \frac{\partial f}{\partial x}(0, 0) = 1, \quad \frac{\partial f}{\partial y}(0, 0) = 0.
\]
In fact, we can compute the difference quotient in the direction $u = (u_1, u_2) \ne 0$ to be
\[
  \frac{f(0 + tu) - f(0)}{t} = \frac{u_1^3}{u_1^2 + u_2^2}.
\]
So we have
\[
  D_u f(0) = \frac{u_1^3}{u_1^2 + u_2^2}.
\]
We can now immediately conclude that $f$ is not differentiable at $0$, since if it were, then we would have
\[
  D_u f(0) = Df(0)u,
\]
which should be a linear expression in $u$, but this is not.

Alternatively, if $f$ were differentiable, then we would have
\[
  Df(0)h =
  \begin{pmatrix}
    1 & 0
  \end{pmatrix}
  \begin{pmatrix}
    h_1\\ h_2
  \end{pmatrix} = h_1.
\]
However, we have
\[
  \frac{\|f(0 + h) - f(0) - Df(0)h\|}{\|h\|}
  = \frac{\left|\frac{h_1^3}{h_1^2 + h_2^2} - h_1\right|}{\sqrt{h_1^2 + h_2^2}}
  = \frac{|h_1| h_2^2}{\left(\sqrt{h_1^2 + h_2^2}\right)^3},
\]
which does not tend to $0$ as $h \to 0$. For example, if $h = (t, t)$, this quotient is $\frac{1}{2^{3/2}}$ for $t \ne 0$.
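This failure is easy to observe numerically. The following sketch (ours, not from the notes) evaluates the error quotient above along $h = (t, t)$ and checks that it stays at $2^{-3/2}$ instead of tending to $0$:

```python
import math

# Sketch: the error quotient ||f(h) - f(0) - Df(0)h|| / ||h|| for
# f(x, y) = x^3 / (x^2 + y^2), with candidate derivative Df(0)h = h_1.

def f(x, y):
    return 0.0 if (x, y) == (0.0, 0.0) else x ** 3 / (x ** 2 + y ** 2)

def quotient(h1, h2):
    return abs(f(h1, h2) - h1) / math.hypot(h1, h2)

for t in [1e-1, 1e-3, 1e-6]:
    # along the diagonal, the quotient is exactly 1/2^(3/2), not o(1)
    assert abs(quotient(t, t) - 2 ** -1.5) < 1e-12
```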
To decide if a function is differentiable, the first step would be to compute the partial derivatives. If they don't exist, then we immediately know the function is not differentiable. However, if they do, then we have a candidate for what the derivative is, and we plug it into the definition to check if it actually is the derivative.

This is a cumbersome thing to do. While the existence of partial derivatives does not imply differentiability in general, we can get differentiability if we add some slight extra conditions.
Theorem. Let $U \subseteq \mathbb{R}^n$ be open, $f : U \to \mathbb{R}^m$. Let $a \in U$. Suppose there exists some open ball $B_r(a) \subseteq U$ such that

(i) $D_j f_i(x)$ exists for every $x \in B_r(a)$ and $1 \le i \le m$, $1 \le j \le n$;

(ii) $D_j f_i$ are continuous at $a$ for all $1 \le i \le m$, $1 \le j \le n$.

Then $f$ is differentiable at $a$.
Proof. It suffices to prove the result for $m = 1$, by the long proposition. For each $h = (h_1, \cdots, h_n) \in \mathbb{R}^n$, we have
\[
  f(a + h) - f(a) = \sum_{j=1}^n \big(f(a + h_1 e_1 + \cdots + h_j e_j) - f(a + h_1 e_1 + \cdots + h_{j-1} e_{j-1})\big).
\]
Now for convenience, we can write
\[
  h^{(j)} = h_1 e_1 + \cdots + h_j e_j = (h_1, \cdots, h_j, 0, \cdots, 0).
\]
Then we have
\begin{align*}
  f(a + h) - f(a) &= \sum_{j=1}^n \big(f(a + h^{(j)}) - f(a + h^{(j-1)})\big)\\
  &= \sum_{j=1}^n \big(f(a + h^{(j-1)} + h_j e_j) - f(a + h^{(j-1)})\big).
\end{align*}
Note that in each term, we are just moving along the coordinate axes. Since the partial derivatives exist, the mean value theorem of single-variable calculus applied to
\[
  g(t) = f(a + h^{(j-1)} + te_j)
\]
on the interval $t \in [0, h_j]$ allows us to write this as
\begin{align*}
  f(a + h) - f(a) &= \sum_{j=1}^n h_j D_j f(a + h^{(j-1)} + \theta_j h_j e_j)\\
  &= \sum_{j=1}^n h_j D_j f(a) + \sum_{j=1}^n h_j \big(D_j f(a + h^{(j-1)} + \theta_j h_j e_j) - D_j f(a)\big)
\end{align*}
for some $\theta_j \in (0, 1)$.
Note that $D_j f(a + h^{(j-1)} + \theta_j h_j e_j) - D_j f(a) \to 0$ as $h \to 0$, since the partial derivatives are continuous at $a$. So the second term is $o(h)$. So $f$ is differentiable at $a$ with
\[
  Df(a)h = \sum_{j=1}^n D_j f(a) h_j.
\]
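The formula $Df(a)h = \sum_j D_j f(a) h_j$ can be tested numerically. The following is a sketch of ours (the smooth example $f(x, y) = e^x \sin y$ is an arbitrary choice, not from the notes): we build the gradient from numerical partials and check that the remainder $f(a + h) - f(a) - Df(a)h$ is much smaller than $\|h\|$, as the $o(h)$ condition demands.

```python
import math

# Sketch: verify Df(a)h = sum_j D_j f(a) h_j for a smooth function.

def f(x, y):
    return math.exp(x) * math.sin(y)

a = (0.5, 1.0)

def partial(j, t=1e-6):
    e = [(t, 0.0), (0.0, t)][j]
    return (f(a[0] + e[0], a[1] + e[1]) - f(a[0] - e[0], a[1] - e[1])) / (2 * t)

grad = [partial(0), partial(1)]          # numerical D_1 f(a), D_2 f(a)

h = (1e-4, -2e-4)
linear_part = grad[0] * h[0] + grad[1] * h[1]
actual = f(a[0] + h[0], a[1] + h[1]) - f(a[0], a[1])

# the remainder is o(||h||): much smaller than ||h|| itself
assert abs(actual - linear_part) < 1e-3 * math.hypot(*h)
```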
This is a very useful result. For example, we can now immediately conclude that the function
\[
  \begin{pmatrix} x\\ y\\ z \end{pmatrix} \mapsto
  \begin{pmatrix} 3x^2 + 4\sin y + e^{6z}\\ xyze^{14x} \end{pmatrix}
\]
is differentiable everywhere, since it has continuous partial derivatives. This is much better than messing with the definition itself.
6.2 The operator norm

So far, we have only looked at derivatives at a single point. We haven't discussed much about the derivative on, say, a neighbourhood or the whole space. We might want to ask if the derivative is continuous or bounded. However, this is not straightforward, since the derivative is a linear map, and we need to define these notions for functions whose values are linear maps. In particular, we want to understand the map $Df : B_r(a) \to L(\mathbb{R}^n; \mathbb{R}^m)$ given by $x \mapsto Df(x)$. To do so, we need a metric on the space $L(\mathbb{R}^n; \mathbb{R}^m)$. In fact, we will use a norm.
Let $L = L(\mathbb{R}^n; \mathbb{R}^m)$. This is a vector space over $\mathbb{R}$, with addition and scalar multiplication defined pointwise. In fact, $L$ is a subspace of $C(\mathbb{R}^n, \mathbb{R}^m)$. To prove this, we have to prove that all linear maps are continuous. Let $\{e_1, \cdots, e_n\}$ be the standard basis for $\mathbb{R}^n$. For
\[
  x = \sum_{j=1}^n x_j e_j
\]
and $A \in L$, we have
\[
  A(x) = \sum_{j=1}^n x_j Ae_j.
\]
By Cauchy–Schwarz, we know
\[
  \|A(x)\| \le \sum_{j=1}^n |x_j| \|A(e_j)\| \le \|x\| \sqrt{\sum_{j=1}^n \|A(e_j)\|^2}.
\]
So we see $A$ is Lipschitz, and is hence continuous. Alternatively, this follows from the fact that linear maps are differentiable and hence continuous.
We can use this fact to define the norm of linear maps. Since $L$ is finite-dimensional (it is isomorphic to the space of real $m \times n$ matrices as vector spaces, and hence has dimension $mn$), it really doesn't matter which norm we pick, as they are all Lipschitz equivalent, but a convenient choice is the sup norm, or the operator norm.

Definition (Operator norm). The operator norm on $L = L(\mathbb{R}^n; \mathbb{R}^m)$ is defined by
\[
  \|A\| = \sup_{x \in \mathbb{R}^n : \|x\| = 1} \|Ax\|.
\]
Proposition.

(i) $\|A\| < \infty$ for all $A \in L$.

(ii) $\|\cdot\|$ is indeed a norm on $L$.

(iii)
\[
  \|A\| = \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\|Ax\|}{\|x\|}.
\]

(iv) $\|Ax\| \le \|A\|\|x\|$ for all $x \in \mathbb{R}^n$.

(v) Let $A \in L(\mathbb{R}^n; \mathbb{R}^m)$ and $B \in L(\mathbb{R}^m; \mathbb{R}^p)$. Then $BA = B \circ A \in L(\mathbb{R}^n; \mathbb{R}^p)$ and
\[
  \|BA\| \le \|B\|\|A\|.
\]
Proof.

(i) This is since $A$ is continuous and $\{x \in \mathbb{R}^n : \|x\| = 1\}$ is compact.

(ii) The only non-trivial part is the triangle inequality. We have
\begin{align*}
  \|A + B\| &= \sup_{\|x\| = 1} \|Ax + Bx\|\\
  &\le \sup_{\|x\| = 1} (\|Ax\| + \|Bx\|)\\
  &\le \sup_{\|x\| = 1} \|Ax\| + \sup_{\|x\| = 1} \|Bx\|\\
  &= \|A\| + \|B\|.
\end{align*}

(iii) This follows from linearity of $A$, and the fact that for any $x \in \mathbb{R}^n$, we have
\[
  \left\|\frac{x}{\|x\|}\right\| = 1.
\]

(iv) Immediate from above.

(v)
\[
  \|BA\| = \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\|BAx\|}{\|x\|} \le \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\|B\|\|Ax\|}{\|x\|} = \|B\|\|A\|.
\]
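Since the operator norm is a supremum over the (compact) unit sphere, we can approximate it by sampling. The following sketch (ours, not from the notes) does this for a $2 \times 2$ matrix and compares against the exact value, which is the largest singular value; for $2 \times 2$ matrices this has a closed form via the eigenvalues of $A^{\mathsf{T}}A$.

```python
import math

# Sketch: estimate ||A|| = sup_{||x||=1} ||Ax|| by sampling the unit circle.

A = [[3.0, 1.0], [0.0, 2.0]]   # arbitrary example matrix

def apply(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

def op_norm_estimate(M, samples=100000):
    best = 0.0
    for i in range(samples):
        theta = 2 * math.pi * i / samples
        x = (math.cos(theta), math.sin(theta))
        best = max(best, math.hypot(*apply(M, x)))
    return best

def op_norm_exact(M):
    # largest singular value of a 2x2 matrix, in closed form
    (a, b), (c, d) = M
    s = a * a + b * b + c * c + d * d      # trace of M^T M
    det = a * d - b * c
    return math.sqrt((s + math.sqrt(s * s - 4 * det * det)) / 2)

est, exact = op_norm_estimate(A), op_norm_exact(A)
assert est <= exact + 1e-9   # sampled values never exceed the supremum
assert exact - est < 1e-3    # and the fine grid gets close to it
```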
For certain easy cases, we have a straightforward expression for the operator norm.

Proposition.

(i) If $A \in L(\mathbb{R}, \mathbb{R}^m)$, then $A$ can be written as $Ax = xa$ for some $a \in \mathbb{R}^m$. Moreover, $\|A\| = \|a\|$, where the second norm is the Euclidean norm in $\mathbb{R}^m$.

(ii) If $A \in L(\mathbb{R}^n, \mathbb{R})$, then $Ax = x \cdot a$ for some fixed $a \in \mathbb{R}^n$. Again, $\|A\| = \|a\|$.

Proof.

(i) Set $A(1) = a$. Then by linearity, we get $Ax = xA(1) = xa$. Then we have
\[
  \|Ax\| = |x|\|a\|.
\]
So we have
\[
  \frac{\|Ax\|}{|x|} = \|a\|.
\]

(ii) Exercise on example sheet 4.
Theorem (Chain rule). Let $U \subseteq \mathbb{R}^n$ be open, $a \in U$, $f : U \to \mathbb{R}^m$ differentiable at $a$. Moreover, $V \subseteq \mathbb{R}^m$ is open with $f(U) \subseteq V$, and $g : V \to \mathbb{R}^p$ is differentiable at $f(a)$. Then $g \circ f : U \to \mathbb{R}^p$ is differentiable at $a$, with derivative
\[
  D(g \circ f)(a) = Dg(f(a)) \circ Df(a).
\]

Proof. The proof is very easy if we use the little $o$ notation. Let $A = Df(a)$ and $B = Dg(f(a))$. By differentiability of $f$ and $g$, we know
\begin{align*}
  f(a + h) &= f(a) + Ah + o(h),\\
  g(f(a) + k) &= g(f(a)) + Bk + o(k).
\end{align*}
Now we have
\begin{align*}
  g \circ f(a + h) &= g(f(a) + \underbrace{Ah + o(h)}_{k})\\
  &= g(f(a)) + B(Ah + o(h)) + o(Ah + o(h))\\
  &= g \circ f(a) + BAh + B(o(h)) + o(Ah + o(h)).
\end{align*}
We just have to show that the last two terms are $o(h)$, but this is true since $B$ and $A$ are bounded. By boundedness,
\[
  \|B(o(h))\| \le \|B\|\|o(h)\|.
\]
So $B(o(h)) = o(h)$. Similarly,
\[
  \|Ah + o(h)\| \le \|A\|\|h\| + \|o(h)\| \le (\|A\| + 1)\|h\|
\]
for sufficiently small $h$. So $o(Ah + o(h))$ is in fact $o(h)$ as well. Hence
\[
  g \circ f(a + h) = g \circ f(a) + BAh + o(h).
\]
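The matrix form of the chain rule, the Jacobian of the composition equalling the product of the Jacobians, is easy to check numerically. The following sketch (ours; $f$ and $g$ below are arbitrary smooth maps $\mathbb{R}^2 \to \mathbb{R}^2$, not from the notes) compares the two sides using central-difference Jacobians:

```python
import math

# Sketch: check D(g o f)(a) = Dg(f(a)) Df(a) via numerical Jacobians.

def f(x):
    return [x[0] * x[1], math.sin(x[0])]

def g(y):
    return [y[0] + y[1] ** 2, math.exp(y[0])]

def jacobian(F, a, t=1e-6):
    # central-difference 2x2 Jacobian of F at a
    J = []
    for i in range(2):
        row = []
        for j in range(2):
            ap, am = list(a), list(a)
            ap[j] += t
            am[j] -= t
            row.append((F(ap)[i] - F(am)[i]) / (2 * t))
        J.append(row)
    return J

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [0.7, -0.3]
lhs = jacobian(lambda x: g(f(x)), a)              # D(g o f)(a)
rhs = matmul(jacobian(g, f(a)), jacobian(f, a))   # Dg(f(a)) Df(a)
for i in range(2):
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-6
```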
6.3 Mean value inequalities

So far, we have just looked at cases where we assume the function is differentiable at a point. We are now going to assume the function is differentiable in a region, and see what happens to the derivative.

Recall the mean value theorem from single-variable calculus: if $f : [a, b] \to \mathbb{R}$ is continuous on $[a, b]$ and differentiable on $(a, b)$, then
\[
  f(b) - f(a) = f'(c)(b - a)
\]
for some $c \in (a, b)$. This is our favorite theorem, and we have used it many times in IA Analysis. Here we have an exact equality. However, in general, for vector-valued functions, i.e. if we are mapping to $\mathbb{R}^m$, this is no longer true. Instead, we only have an inequality.

We first prove it for the case when the domain is a subset of $\mathbb{R}$, and then reduce the general case to this special case.
Theorem. Let $f : [a, b] \to \mathbb{R}^m$ be continuous on $[a, b]$ and differentiable on $(a, b)$. Suppose we can find some $M$ such that for all $t \in (a, b)$, we have $\|Df(t)\| \le M$. Then
\[
  \|f(b) - f(a)\| \le M(b - a).
\]

Proof. Let $v = f(b) - f(a)$. We define
\[
  g(t) = v \cdot f(t) = \sum_{i=1}^m v_i f_i(t).
\]
Since each $f_i$ is differentiable, $g$ is continuous on $[a, b]$ and differentiable on $(a, b)$ with
\[
  g'(t) = \sum v_i f_i'(t).
\]
Hence, we know
\[
  |g'(t)| \le \sum_{i=1}^m |v_i||f_i'(t)| \le \|v\| \left(\sum_{i=1}^m f_i'(t)^2\right)^{1/2} = \|v\|\|Df(t)\| \le M\|v\|.
\]
We now apply the mean value theorem to $g$ to get
\[
  g(b) - g(a) = g'(t)(b - a)
\]
for some $t \in (a, b)$. By definition of $g$, we get
\[
  v \cdot (f(b) - f(a)) = g'(t)(b - a).
\]
By definition of $v$, we have
\[
  \|f(b) - f(a)\|^2 = |g'(t)(b - a)| \le (b - a)M\|f(b) - f(a)\|.
\]
If $f(b) = f(a)$, then there is nothing to prove. Otherwise, divide by $\|f(b) - f(a)\|$ and we are done.
We now apply this to prove the general version.

Theorem (Mean value inequality). Let $a \in \mathbb{R}^n$ and $f : B_r(a) \to \mathbb{R}^m$ be differentiable on $B_r(a)$ with $\|Df(x)\| \le M$ for all $x \in B_r(a)$. Then
\[
  \|f(b_1) - f(b_2)\| \le M\|b_1 - b_2\|
\]
for any $b_1, b_2 \in B_r(a)$.
Proof. We will reduce this to the previous theorem.

Fix $b_1, b_2 \in B_r(a)$. Note that
\[
  tb_1 + (1 - t)b_2 \in B_r(a)
\]
for all $t \in [0, 1]$. Now consider $g : [0, 1] \to \mathbb{R}^m$ defined by
\[
  g(t) = f(tb_1 + (1 - t)b_2).
\]
By the chain rule, $g$ is differentiable with
\[
  g'(t) = Dg(t) = \big(Df(tb_1 + (1 - t)b_2)\big)(b_1 - b_2).
\]
Therefore
\[
  \|Dg(t)\| \le \|Df(tb_1 + (1 - t)b_2)\|\|b_1 - b_2\| \le M\|b_1 - b_2\|.
\]
Now we can apply the previous theorem, and get
\[
  \|f(b_1) - f(b_2)\| = \|g(1) - g(0)\| \le M\|b_1 - b_2\|.
\]
Note that here we worked in a ball. In general, we could have worked in any convex set, since all we need is for $tb_1 + (1 - t)b_2$ to be inside the domain.
But with this, we have the following easy corollary.

Corollary. Let $f : B_r(a) \subseteq \mathbb{R}^n \to \mathbb{R}^m$ have $Df(x) = 0$ for all $x \in B_r(a)$. Then $f$ is constant.

Proof. Apply the mean value inequality with $M = 0$.
We would like to extend this corollary. Does this corollary extend to differentiable maps $f$ with $Df = 0$ defined on any open set $U \subseteq \mathbb{R}^n$?

The answer is clearly no. Even for functions $f : \mathbb{R} \to \mathbb{R}$, this is not true, since we can take two disjoint intervals $[1, 2] \cup [3, 4]$, and define $f(t)$ to be $1$ on $[1, 2]$ and $2$ on $[3, 4]$. Then $Df = 0$ but $f$ is not constant; $f$ is just locally constant on each interval.

The problem with this is that the sets are disconnected. We cannot connect points in $[1, 2]$ and points in $[3, 4]$ with a line. If we can do so, then we would be able to show that $f$ is constant.
Definition (Path-connected subset). A subset $E \subseteq \mathbb{R}^n$ is path-connected if for any $a, b \in E$, there is a continuous map $\gamma : [0, 1] \to E$ such that
\[
  \gamma(0) = a, \quad \gamma(1) = b.
\]

Theorem. Let $U \subseteq \mathbb{R}^n$ be open and path-connected. Then for any differentiable $f : U \to \mathbb{R}^m$, if $Df(x) = 0$ for all $x \in U$, then $f$ is constant on $U$.
A naive attempt would be to replace the $tb_1 + (1 - t)b_2$ in the proof of the mean value inequality with a path $\gamma(t)$. However, this is not a correct proof, since it assumes $\gamma$ is differentiable. So this doesn't work, and we have to think some more.
Proof. We are going to use the fact that $f$ is locally constant. wlog, assume $m = 1$. Given any $a, b \in U$, we show that $f(a) = f(b)$. Let $\gamma : [0, 1] \to U$ be a (continuous) path from $a$ to $b$. For any $s \in (0, 1)$, there exists some $\varepsilon$ such that $B_\varepsilon(\gamma(s)) \subseteq U$, since $U$ is open. By continuity of $\gamma$, there is a $\delta$ such that
\[
  (s - \delta, s + \delta) \subseteq [0, 1] \text{ with } \gamma((s - \delta, s + \delta)) \subseteq B_\varepsilon(\gamma(s)) \subseteq U.
\]
Since $f$ is constant on $B_\varepsilon(\gamma(s))$ by the previous corollary, we know that $g(t) = f \circ \gamma(t)$ is constant on $(s - \delta, s + \delta)$. In particular, $g$ is differentiable at $s$ with derivative $0$. This is true for all $s$. So the map $g : [0, 1] \to \mathbb{R}$ has zero derivative on $(0, 1)$ and is continuous on $[0, 1]$. So $g$ is constant. So $g(0) = g(1)$, i.e. $f(a) = f(b)$.

If $\gamma$ were differentiable, then this would be much easier, since we could show $g' = 0$ by the chain rule:
\[
  g'(t) = Df(\gamma(t))\gamma'(t).
\]
6.4 Inverse function theorem

Now, we get to the inverse function theorem. This is one of the most important theorems of the course. It has many interesting and important consequences, but we will not have time to get to these.

Before we can state the inverse function theorem, we need a definition.

Definition ($C^1$ function). Let $U \subseteq \mathbb{R}^n$ be open. We say $f : U \to \mathbb{R}^m$ is $C^1$ on $U$ if $f$ is differentiable at each $x \in U$ and
\[
  Df : U \to L(\mathbb{R}^n, \mathbb{R}^m)
\]
is continuous.

We write $C^1(U)$ or $C^1(U; \mathbb{R}^m)$ for the set of all $C^1$ maps from $U$ to $\mathbb{R}^m$.
First we get a convenient alternative characterization of $C^1$.

Proposition. Let $U \subseteq \mathbb{R}^n$ be open. Then $f = (f_1, \cdots, f_m) : U \to \mathbb{R}^m$ is $C^1$ on $U$ if and only if the partial derivatives $D_j f_i(x)$ exist for all $x \in U$, $1 \le i \le m$, $1 \le j \le n$, and $D_j f_i : U \to \mathbb{R}$ are continuous.
Proof. ($\Rightarrow$) Differentiability of $f$ at $x$ implies $D_j f_i(x)$ exists and is given by
\[
  D_j f_i(x) = \langle Df(x)e_j, b_i\rangle,
\]
where $\{e_1, \cdots, e_n\}$ and $\{b_1, \cdots, b_m\}$ are the standard bases for $\mathbb{R}^n$ and $\mathbb{R}^m$. So we know
\[
  |D_j f_i(x) - D_j f_i(y)| = |\langle (Df(x) - Df(y))e_j, b_i\rangle| \le \|Df(x) - Df(y)\|,
\]
since $e_j$ and $b_i$ are unit vectors. Hence if $Df$ is continuous, so is $D_j f_i$.
($\Leftarrow$) Since the partials exist and are continuous, by our previous theorem, we know that the derivative $Df$ exists. To show $Df : U \to L(\mathbb{R}^n; \mathbb{R}^m)$ is continuous, note the following general fact: for any linear map $A \in L(\mathbb{R}^n; \mathbb{R}^m)$ represented by $(a_{ij})$, so that the $i$th component of $Ah$ is $\sum_j a_{ij}h_j$, and any $x = (x_1, \cdots, x_n)$, we have
\[
  \|Ax\|^2 = \sum_{i=1}^m \left(\sum_{j=1}^n a_{ij}x_j\right)^2.
\]
By Cauchy–Schwarz, this is at most
\[
  \sum_{i=1}^m \left(\sum_{j=1}^n a_{ij}^2\right)\left(\sum_{j=1}^n x_j^2\right) = \|x\|^2 \sum_{i=1}^m \sum_{j=1}^n a_{ij}^2.
\]
Dividing by $\|x\|^2$, we know
\[
  \|A\| \le \sqrt{\sum\sum a_{ij}^2}.
\]
Applying this to $A = Df(x) - Df(y)$, we get
\[
  \|Df(x) - Df(y)\| \le \sqrt{\sum\sum (D_j f_i(x) - D_j f_i(y))^2}.
\]
So if all $D_j f_i$ are continuous, then so is $Df$.
If we do not wish to go through all that algebra to show the inequality
\[
  \|A\| \le \sqrt{\sum\sum a_{ij}^2},
\]
we can instead note that $\sqrt{\sum\sum a_{ij}^2}$ is a norm on $L(\mathbb{R}^n, \mathbb{R}^m)$, since it is just the Euclidean norm if we treat the matrix as a vector written in a funny way. So by the equivalence of norms on finite-dimensional vector spaces, there is some $C$ such that
\[
  \|A\| \le C\sqrt{\sum\sum a_{ij}^2},
\]
and then the result follows.
Finally, we can get to the inverse function theorem.

Theorem (Inverse function theorem). Let $U \subseteq \mathbb{R}^n$ be open, and $f : U \to \mathbb{R}^n$ be a $C^1$ map. Let $a \in U$, and suppose that $Df(a)$ is invertible as a linear map $\mathbb{R}^n \to \mathbb{R}^n$. Then there exist open sets $V, W \subseteq \mathbb{R}^n$ with $a \in V$, $f(a) \in W$ and $V \subseteq U$ such that
\[
  f|_V : V \to W
\]
is a bijection. Moreover, the inverse map $f|_V^{-1} : W \to V$ is also $C^1$.
We have a fancy name for these functions.

Definition (Diffeomorphism). Let $U, U' \subseteq \mathbb{R}^n$ be open. A map $g : U \to U'$ is a diffeomorphism if it is $C^1$ with a $C^1$ inverse.

Note that different people have different definitions for the word "diffeomorphism". Some require it to be merely differentiable, while others require it to be infinitely differentiable. We will stick with this definition.

Then the inverse function theorem says: if $f$ is $C^1$ and $Df(a)$ is invertible, then $f$ is a local diffeomorphism at $a$.
Before we prove this, we look at the simple case where $n = 1$. Suppose $f'(a) \ne 0$. Then there exists a $\delta$ such that $f'(t) > 0$ or $f'(t) < 0$ on $t \in (a - \delta, a + \delta)$. So $f|_{(a - \delta, a + \delta)}$ is monotone and hence invertible. This is a triviality. However, this is not a triviality even for $n = 2$.
Proof. By replacing $f$ with $(Df(a))^{-1} \circ f$ (or by rotating our heads and stretching it a bit), we can assume $Df(a) = I$, the identity map. By continuity of $Df$, there exists some $r > 0$ such that
\[
  \|Df(x) - I\| < \frac{1}{2}
\]
for all $x \in B_r(a)$. By shrinking $r$ sufficiently, we can assume $B_r(a) \subseteq U$. Let
\[
  W = B_{r/2}(f(a)), \quad V = f^{-1}(W) \cap B_r(a).
\]
That was just our setup. There are three steps to actually proving the theorem.

Claim. $V$ is open, and $f|_V : V \to W$ is a bijection.

Since $f$ is continuous, $f^{-1}(W)$ is open. So $V$ is open. To show $f|_V : V \to W$ is a bijection, we have to show that for each $y \in W$, there is a unique $x \in V$ such that $f(x) = y$. We are going to use the contraction mapping theorem to prove this. This statement is equivalent to proving that for each $y \in W$, the map $T(x) = x - f(x) + y$ has a unique fixed point $x \in V$.

Let $h(x) = x - f(x)$. Then note that
\[
  Dh(x) = I - Df(x).
\]
So by our choice of $r$, for every $x \in B_r(a)$, we must have
\[
  \|Dh(x)\| \le \frac{1}{2}.
\]
Then for any $x_1, x_2 \in B_r(a)$, we can use the mean value inequality to estimate
\[
  \|h(x_1) - h(x_2)\| \le \frac{1}{2}\|x_1 - x_2\|.
\]
Hence we know
\[
  \|T(x_1) - T(x_2)\| = \|h(x_1) - h(x_2)\| \le \frac{1}{2}\|x_1 - x_2\|.
\]
Finally, to apply the contraction mapping theorem, we need to pick the right domain for $T$, namely $\bar{B}_r(a)$. For any $x \in \bar{B}_r(a)$, we have
\begin{align*}
  \|T(x) - a\| &= \|x - f(x) + y - a\| \\
  &= \|x - f(x) - (a - f(a)) + y - f(a)\| \\
  &\leq \|h(x) - h(a)\| + \|y - f(a)\| \\
  &\leq \frac{1}{2}\|x - a\| + \|y - f(a)\| \\
  &< \frac{r}{2} + \frac{r}{2} = r.
\end{align*}
So $T : \bar{B}_r(a) \to B_r(a) \subseteq \bar{B}_r(a)$. Since $\bar{B}_r(a)$ is complete, $T$ has a unique fixed point $x \in \bar{B}_r(a)$, i.e. $T(x) = x$. Finally, we need to show $x \in B_r(a)$, since this is where we want to find our fixed point. But this is true, since $T(x) \in B_r(a)$ by the above. So we must have $x \in B_r(a)$. Also, since $f(x) = y$, we know $x \in f^{-1}(W)$. So $x \in V$.

So we have shown that for each $y \in W$, there is a unique $x \in V$ such that $f(x) = y$. So $f|_V : V \to W$ is a bijection.
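The fixed-point argument above is fully constructive: given $y$, iterating $T(x) = x - f(x) + y$ converges to the unique preimage. As a quick numerical sketch (the specific map $f$ below is my own example with $f(0) = 0$ and $Df(0) = I$, not from the lectures):

```python
def f(x):
    # Example C^1 map on R^2 with f(0) = 0 and Df(0) = I:
    # the identity plus a small quadratic perturbation (my own choice).
    return [x[0] + 0.05 * x[1] ** 2, x[1] + 0.05 * x[0] ** 2]

def T(x, y):
    # The contraction from the proof: T(x) = x - f(x) + y.
    fx = f(x)
    return [x[0] - fx[0] + y[0], x[1] - fx[1] + y[1]]

def local_inverse(y, iterations=50):
    # Iterate T starting at a = 0; the contraction mapping theorem
    # guarantees convergence to the unique fixed point x with f(x) = y.
    x = [0.0, 0.0]
    for _ in range(iterations):
        x = T(x, y)
    return x

y = [0.05, -0.03]
x = local_inverse(y)
fx = f(x)
print(max(abs(fx[0] - y[0]), abs(fx[1] - y[1])))  # essentially 0
```

Each iteration contracts the error by at least the factor $\frac{1}{2}$ from the proof (here in fact by much more), so 50 iterations are far more than enough.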
We have done the hard work now. It remains to show that $f|_V$ is invertible with a $C^1$ inverse.
Claim. The inverse map $g = f|_V^{-1} : W \to V$ is Lipschitz (and hence continuous). In fact, we have
\[
  \|g(y_1) - g(y_2)\| \leq 2\|y_1 - y_2\|.
\]
For any $x_1, x_2 \in V$, by the triangle inequality, we know
\[
  \|x_1 - x_2\| - \|f(x_1) - f(x_2)\| \leq \|(x_1 - f(x_1)) - (x_2 - f(x_2))\| = \|h(x_1) - h(x_2)\| \leq \frac{1}{2}\|x_1 - x_2\|.
\]
Hence, we get
\[
  \|x_1 - x_2\| \leq 2\|f(x_1) - f(x_2)\|.
\]
Apply this to $x_1 = g(y_1)$ and $x_2 = g(y_2)$, and note that $f(g(y_j)) = y_j$, to get the desired result.
Claim. $g$ is in fact $C^1$, and moreover, for all $y \in W$,
\[
  Dg(y) = (Df(g(y)))^{-1}. \tag{$*$}
\]
Note that if $g$ were differentiable, then its derivative must be given by $(*)$, since by definition, we know
\[
  f(g(y)) = y,
\]
and hence the chain rule gives
\[
  Df(g(y)) \cdot Dg(y) = I.
\]
Also, we immediately know $Dg$ is continuous, since it is the composition of continuous functions (by Cramer's rule, the entries of the inverse of a matrix are polynomial expressions in its entries divided by the determinant). So we only need to check that $(Df(g(y)))^{-1}$ satisfies the definition of the derivative.
First we check that $Df(x)$ is indeed invertible for every $x \in B_r(a)$. We use the fact that
\[
  \|Df(x) - I\| \leq \frac{1}{2}.
\]
If $Df(x)v = 0$, then we have
\[
  \|v\| = \|Df(x)v - v\| \leq \|Df(x) - I\|\|v\| \leq \frac{1}{2}\|v\|.
\]
So we must have $\|v\| = 0$, i.e. $v = 0$. So $\ker Df(x) = \{0\}$. So $(Df(g(y)))^{-1}$ exists.
Let $x \in V$ be fixed, and $y = f(x)$. Let $k$ be small and
\[
  h = g(y + k) - g(y).
\]
In other words,
\[
  f(x + h) - f(x) = k.
\]
Since $g$ is invertible, whenever $k \neq 0$, we have $h \neq 0$. Since $g$ is continuous, as $k \to 0$, $h \to 0$ as well.
We have
\begin{align*}
  \frac{\|g(y + k) - g(y) - Df(g(y))^{-1} k\|}{\|k\|}
  &= \frac{\|h - Df(g(y))^{-1} k\|}{\|k\|} \\
  &= \frac{\|Df(x)^{-1}(Df(x)h - k)\|}{\|k\|} \\
  &= \frac{\|Df(x)^{-1}(f(x + h) - f(x) - Df(x)h)\|}{\|k\|} \\
  &\leq \|Df(x)^{-1}\| \cdot \frac{\|f(x + h) - f(x) - Df(x)h\|}{\|h\|} \cdot \frac{\|h\|}{\|k\|} \\
  &= \|Df(x)^{-1}\| \cdot \frac{\|f(x + h) - f(x) - Df(x)h\|}{\|h\|} \cdot \frac{\|g(y + k) - g(y)\|}{\|(y + k) - y\|}.
\end{align*}
As $k \to 0$, $h \to 0$. The first factor $\|Df(x)^{-1}\|$ is fixed; the second factor tends to $0$ as $h \to 0$; the third factor is bounded by $2$. So the whole thing tends to $0$. So done.
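The formula $Dg(y) = (Df(g(y)))^{-1}$ lends itself to a numerical sanity check. In the sketch below (my own construction, not from the lectures), we take $f(x, y) = (e^x \cos y, e^x \sin y)$, whose local inverse near a point with $y \in (-\pi, \pi)$ and first coordinate positive is explicitly $g(u, v) = (\frac{1}{2}\log(u^2 + v^2), \operatorname{atan2}(v, u))$, and compare a finite-difference Jacobian of $g$ with the inverse Jacobian of $f$:

```python
import math

def f(p):
    x, y = p
    return (math.exp(x) * math.cos(y), math.exp(x) * math.sin(y))

def g(q):
    # Explicit local inverse of f near our chosen base point.
    u, v = q
    return (0.5 * math.log(u * u + v * v), math.atan2(v, u))

def jacobian(func, p, eps=1e-6):
    # Central finite-difference 2x2 Jacobian of func at p.
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        q_plus = list(p); q_plus[j] += eps
        q_minus = list(p); q_minus[j] -= eps
        fp, fm = func(q_plus), func(q_minus)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

def inv2(J):
    # Inverse of a 2x2 matrix via the adjugate formula.
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [[J[1][1] / det, -J[0][1] / det],
            [-J[1][0] / det, J[0][0] / det]]

a = (0.3, 0.7)
y = f(a)
Dg = jacobian(g, list(y))
Df_inv = inv2(jacobian(f, list(a)))
err = max(abs(Dg[i][j] - Df_inv[i][j]) for i in range(2) for j in range(2))
print(err < 1e-4)  # True: Dg(y) agrees with Df(g(y))^{-1}
```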
Note that in the case where $n = 1$, if $f : (a, b) \to \mathbb{R}$ is $C^1$ with $f'(x) \neq 0$ for every $x$, then $f$ is monotone on the whole domain $(a, b)$, and hence $f : (a, b) \to f((a, b))$ is a bijection. In higher dimensions, this is not true. Even if we know that $Df(x)$ is invertible for all $x \in U$, we cannot say $f|_U$ is a bijection. We still only know there is a local inverse.
Example. Let $U = \mathbb{R}^2$, and $f : \mathbb{R}^2 \to \mathbb{R}^2$ be given by
\[
  f(x, y) = \begin{pmatrix} e^x \cos y \\ e^x \sin y \end{pmatrix}.
\]
Then we can directly compute
\[
  Df(x, y) = \begin{pmatrix} e^x \cos y & -e^x \sin y \\ e^x \sin y & e^x \cos y \end{pmatrix}.
\]
Then we have
\[
  \det(Df(x, y)) = e^{2x} \neq 0
\]
for all $(x, y) \in \mathbb{R}^2$. However, by periodicity, we have
\[
  f(x, y + 2\pi) = f(x, y)
\]
for all $(x, y)$. So $f$ is not injective on $\mathbb{R}^2$.
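A short numerical check of this example (the code is my own illustration): distinct points related by a shift of $2\pi$ in $y$ map to the same value, even though the Jacobian determinant $e^{2x}$ never vanishes.

```python
import math

def f(x, y):
    # The example map: f(x, y) = (e^x cos y, e^x sin y).
    return (math.exp(x) * math.cos(y), math.exp(x) * math.sin(y))

# f is 2*pi-periodic in y, so it is not injective on R^2:
p = f(0.5, 1.0)
q = f(0.5, 1.0 + 2 * math.pi)
print(abs(p[0] - q[0]) < 1e-9 and abs(p[1] - q[1]) < 1e-9)  # True

# Yet det Df(x, y) = e^{2x} is positive everywhere, e.g. at x = 0.5:
print(math.exp(2 * 0.5) > 0)  # True
```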
One major application of the inverse function theorem is to prove the implicit
function theorem. We will not go into details here, but an example of the theorem
can be found on example sheet 4.
6.5 2nd order derivatives
We've done so much work to understand first derivatives. For real functions, we can immediately know a lot about higher derivatives, since the derivative is just a normal real function again. Here, it is slightly more complicated, since the derivative is a linear operator. However, this is not really a problem, since the space of linear operators is just yet another vector space, so we can essentially use the same definition.
Definition (2nd derivative). Let $U \subseteq \mathbb{R}^n$ be open, and $f : U \to \mathbb{R}^m$ be differentiable. Then $Df : U \to L(\mathbb{R}^n; \mathbb{R}^m)$. We say $Df$ is differentiable at $a \in U$ if there exists $A \in L(\mathbb{R}^n; L(\mathbb{R}^n; \mathbb{R}^m))$ such that
\[
  \lim_{h \to 0} \frac{1}{\|h\|}(Df(a + h) - Df(a) - Ah) = 0.
\]
For this to make sense, we would need to put a norm on $L(\mathbb{R}^n; \mathbb{R}^m)$ (e.g. the operator norm), but $A$, if it exists, is independent of the choice of the norm, since all norms on a finite-dimensional space are equivalent.
This is, in fact, the same definition as our usual differentiability, since $L(\mathbb{R}^n; \mathbb{R}^m)$ is just a finite-dimensional space, and is isomorphic to $\mathbb{R}^{nm}$. So $Df$ is differentiable if and only if $Df : U \to \mathbb{R}^{nm}$ is differentiable, with $A \in L(\mathbb{R}^n; \mathbb{R}^{nm})$.
This allows us to recycle our previous theorems about differentiability. In particular, we know differentiability of $Df$ is implied by the existence of the partial derivatives $D_i(D_j f_k)$ in a neighbourhood of $a$, and their continuity at $a$, for all $k = 1, \cdots, m$ and $i, j = 1, \cdots, n$.
Notation. Write
\[
  D_{ij} f(a) = D_i(D_j f)(a) = \frac{\partial^2}{\partial x_i \partial x_j} f(a).
\]
Let's now go back to the initial definition, and try to interpret it. By linear algebra, in general, a linear map $\phi : \mathbb{R}^n \to L(\mathbb{R}^n; \mathbb{R}^m)$ induces a bilinear map $\Phi : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^m$ by
\[
  \Phi(u, v) = \phi(u)(v) \in \mathbb{R}^m.
\]
In particular, we know
\begin{align*}
  \Phi(au + bv, w) &= a\Phi(u, w) + b\Phi(v, w),\\
  \Phi(u, av + bw) &= a\Phi(u, v) + b\Phi(u, w).
\end{align*}
Conversely, if $\Phi : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^m$ is bilinear, then $\phi : \mathbb{R}^n \to L(\mathbb{R}^n; \mathbb{R}^m)$ defined by
\[
  \phi(u) = (v \mapsto \Phi(u, v))
\]
is linear. These are clearly inverse operations to each other. So there is a one-to-one correspondence between bilinear maps $\Phi : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^m$ and linear maps $\phi : \mathbb{R}^n \to L(\mathbb{R}^n; \mathbb{R}^m)$.
In other words, instead of treating our second derivative as a weird linear map in $L(\mathbb{R}^n; L(\mathbb{R}^n; \mathbb{R}^m))$, we can view it as a bilinear map $\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^m$.
Notation. We define $D^2 f(a) : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^m$ by
\[
  D^2 f(a)(u, v) = D(Df)(a)(u)(v).
\]
We know $D^2 f(a)$ is a bilinear map.
In coordinates, if
\[
  u = \sum_{j=1}^n u_j e_j, \quad v = \sum_{j=1}^n v_j e_j,
\]
where $\{e_1, \cdots, e_n\}$ is the standard basis for $\mathbb{R}^n$, then using bilinearity, we have
\[
  D^2 f(a)(u, v) = \sum_{i=1}^n \sum_{j=1}^n D^2 f(a)(e_i, e_j) u_i v_j.
\]
This is very similar to the case of first derivatives, where the derivative can be completely specified by the values it takes on the basis vectors.
In the definition of the second derivative, we can again take $h = te_i$. Then we have
\[
  \lim_{t \to 0} \frac{Df(a + te_i) - Df(a) - tD(Df)(a)(e_i)}{t} = 0.
\]
Note that the whole thing at the top is a linear map in $L(\mathbb{R}^n; \mathbb{R}^m)$. We can let the whole thing act on $e_j$, and obtain
\[
  \lim_{t \to 0} \frac{Df(a + te_i)(e_j) - Df(a)(e_j) - tD(Df)(a)(e_i)(e_j)}{t} = 0
\]
for all $i, j = 1, \cdots, n$. Taking the $D^2 f(a)(e_i, e_j)$ to the other side, we know
\begin{align*}
  D^2 f(a)(e_i, e_j) &= \lim_{t \to 0} \frac{Df(a + te_i)(e_j) - Df(a)(e_j)}{t} \\
  &= \lim_{t \to 0} \frac{D_{e_j} f(a + te_i) - D_{e_j} f(a)}{t} \\
  &= D_{e_i} D_{e_j} f(a).
\end{align*}
In other words, we have
\[
  D^2 f(a)(e_i, e_j) = \sum_{k=1}^m D_{ij} f_k(a) b_k,
\]
where $\{b_1, \cdots, b_m\}$ is the standard basis for $\mathbb{R}^m$. So we have
\[
  D^2 f(a)(u, v) = \sum_{i,j=1}^n \sum_{k=1}^m D_{ij} f_k(a) u_i v_j b_k.
\]
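In the scalar case $m = 1$, this formula says $D^2 f(a)(u, v) = \sum_{i,j} D_{ij} f(a) u_i v_j$, i.e. the second derivative is the bilinear form $u^\top H v$ of the Hessian matrix $H$. A small finite-difference sketch (the function and points are my own example, not from the lectures):

```python
import math

def f(x, y):
    # Example scalar function (my own choice).
    return math.sin(x) * math.exp(y)

def second_partial(func, a, i, j, eps=1e-4):
    # Central finite-difference approximation to D_i D_j func at a.
    def shift(p, k, d):
        q = list(p); q[k] += d; return q
    p = list(a)
    return (func(*shift(shift(p, i, eps), j, eps))
            - func(*shift(shift(p, i, eps), j, -eps))
            - func(*shift(shift(p, i, -eps), j, eps))
            + func(*shift(shift(p, i, -eps), j, -eps))) / (4 * eps * eps)

def d2f(a, u, v):
    # D^2 f(a)(u, v) = sum_{i,j} D_ij f(a) u_i v_j.
    return sum(second_partial(f, a, i, j) * u[i] * v[j]
               for i in range(2) for j in range(2))

a = (0.4, -0.2)
u, v = (1.0, 2.0), (-1.0, 0.5)
# Exact values: D11 f = -sin(x)e^y, D12 f = D21 f = cos(x)e^y, D22 f = sin(x)e^y.
exact = (-math.sin(a[0]) * math.exp(a[1]) * u[0] * v[0]
         + math.cos(a[0]) * math.exp(a[1]) * (u[0] * v[1] + u[1] * v[0])
         + math.sin(a[0]) * math.exp(a[1]) * u[1] * v[1])
print(abs(d2f(a, u, v) - exact) < 1e-5)  # True
```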
We have been very careful to keep the right order of the partial derivatives.
However, in most cases we care about, it doesn’t matter.
Theorem (Symmetry of mixed partials). Let $U \subseteq \mathbb{R}^n$ be open, $f : U \to \mathbb{R}^m$, $a \in U$, and $\rho > 0$ such that $B_\rho(a) \subseteq U$. Let $i, j \in \{1, \cdots, n\}$ be fixed and suppose that $D_i D_j f(x)$ and $D_j D_i f(x)$ exist for all $x \in B_\rho(a)$ and are continuous at $a$. Then in fact
\[
  D_i D_j f(a) = D_j D_i f(a).
\]
The proof is quite short, when we know what to do.
Proof. wlog, assume $m = 1$. If $i = j$, then there is nothing to prove. So assume $i \neq j$. Let
\[
  g_{ij}(t) = f(a + te_i + te_j) - f(a + te_i) - f(a + te_j) + f(a).
\]
Then for each fixed $t$, define $\phi : [0, 1] \to \mathbb{R}$ by
\[
  \phi(s) = f(a + ste_i + te_j) - f(a + ste_i).
\]
Then we get
\[
  g_{ij}(t) = \phi(1) - \phi(0).
\]
By the mean value theorem and the chain rule, there is some $\theta \in (0, 1)$ such that
\[
  g_{ij}(t) = \phi'(\theta) = t\big(D_i f(a + \theta te_i + te_j) - D_i f(a + \theta te_i)\big).
\]
Now apply the mean value theorem to the function
\[
  s \mapsto D_i f(a + \theta te_i + ste_j);
\]
there is some $\eta \in (0, 1)$ such that
\[
  g_{ij}(t) = t^2 D_j D_i f(a + \theta te_i + \eta te_j).
\]
We can do the same for $g_{ji}$, and find some $\tilde{\theta}, \tilde{\eta}$ such that
\[
  g_{ji}(t) = t^2 D_i D_j f(a + \tilde{\theta} te_i + \tilde{\eta} te_j).
\]
Since $g_{ij} = g_{ji}$, we get
\[
  t^2 D_j D_i f(a + \theta te_i + \eta te_j) = t^2 D_i D_j f(a + \tilde{\theta} te_i + \tilde{\eta} te_j).
\]
Divide by $t^2$, and take the limit as $t \to 0$. By continuity of the partial derivatives, we get
\[
  D_j D_i f(a) = D_i D_j f(a).
\]
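The quantity $g_{ij}(t)/t^2$ appearing in this proof is itself a finite-difference approximation of the mixed partial, so we can watch it converge to the common value $D_i D_j f(a) = D_j D_i f(a)$. A small numerical sketch (the function $f$ is my own example, not from the lectures):

```python
import math

def f(x, y):
    # Example function with continuous second partials (my own choice).
    return x * math.exp(x * y)

def g_quotient(a, t):
    # g_ij(t) / t^2 from the proof, with e_i, e_j the coordinate directions:
    # [f(a + t e_i + t e_j) - f(a + t e_i) - f(a + t e_j) + f(a)] / t^2.
    x, y = a
    return (f(x + t, y + t) - f(x + t, y) - f(x, y + t) + f(x, y)) / (t * t)

a = (0.3, 0.5)
# Exact mixed partial: for f = x e^{xy}, D_2 f = x^2 e^{xy},
# so D_1 D_2 f = (2x + x^2 y) e^{xy}  (and D_2 D_1 f agrees).
exact = (2 * a[0] + a[0] ** 2 * a[1]) * math.exp(a[0] * a[1])
for t in (1e-1, 1e-2, 1e-3):
    print(abs(g_quotient(a, t) - exact))  # shrinks as t -> 0
```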
This is nice. Whenever the second derivatives are continuous, the order does
not matter. We can alternatively state this result as follows:
Proposition. If $f : U \to \mathbb{R}^m$ is differentiable in $U$ such that the $D_i D_j f(x)$ exist in a neighbourhood of $a \in U$ and are continuous at $a$, then $Df$ is differentiable at $a$, and
\[
  D^2 f(a)(u, v) = \sum_j \sum_i D_i D_j f(a) u_i v_j
\]
is a symmetric bilinear form.
Proof.
This follows from the fact that continuity of second partials implies
differentiability, and the symmetry of mixed partials.
Finally, we conclude with a version of Taylor’s theorem for multivariable
functions.
Theorem (Second-order Taylor's theorem). Let $f : U \to \mathbb{R}$ be $C^2$, i.e. the $D_i D_j f(x)$ are continuous for all $x \in U$. Let $a \in U$ and $B_r(a) \subseteq U$. Then
\[
  f(a + h) = f(a) + Df(a)h + \frac{1}{2} D^2 f(a)(h, h) + E(h),
\]
where $E(h) = o(\|h\|^2)$.
Proof. Consider the function
\[
  g(t) = f(a + th).
\]
Then the assumptions tell us $g$ is twice differentiable. By the 1D Taylor's theorem, we know
\[
  g(1) = g(0) + g'(0) + \frac{1}{2} g''(s)
\]
for some $s \in [0, 1]$.
In other words,
\begin{align*}
  f(a + h) &= f(a) + Df(a)h + \frac{1}{2} D^2 f(a + sh)(h, h) \\
  &= f(a) + Df(a)h + \frac{1}{2} D^2 f(a)(h, h) + E(h),
\end{align*}
where
\[
  E(h) = \frac{1}{2}\big(D^2 f(a + sh)(h, h) - D^2 f(a)(h, h)\big).
\]
By definition of the operator norm, we get
\[
  |E(h)| \leq \frac{1}{2} \|D^2 f(a + sh) - D^2 f(a)\| \|h\|^2.
\]
By continuity of the second derivative, as $h \to 0$, we get
\[
  \|D^2 f(a + sh) - D^2 f(a)\| \to 0.
\]
So $E(h) = o(\|h\|^2)$. So done.
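To see the estimate $E(h) = o(\|h\|^2)$ in action, we can compute the ratio $|E(h)|/\|h\|^2$ numerically and watch it shrink. A short sketch with an explicit $C^2$ function (my own example, not from the lectures):

```python
import math

def f(x, y):
    # Example C^2 function (my own choice).
    return math.exp(x) * math.cos(y)

a = (0.2, 0.3)

def taylor2(h):
    # f(a) + Df(a)h + (1/2) D^2 f(a)(h, h), using the exact derivatives
    # of f = e^x cos y: f_x = e^x cos y, f_y = -e^x sin y,
    # f_xx = e^x cos y, f_xy = -e^x sin y, f_yy = -e^x cos y.
    x, y = a
    ex, c, s = math.exp(x), math.cos(y), math.sin(y)
    lin = ex * c * h[0] - ex * s * h[1]
    quad = ex * c * h[0] ** 2 - 2 * ex * s * h[0] * h[1] - ex * c * h[1] ** 2
    return ex * c + lin + 0.5 * quad

def ratio(t):
    # |E(h)| / ||h||^2 for h = (t, 2t), with the Euclidean norm squared.
    h = (t, 2 * t)
    E = f(a[0] + h[0], a[1] + h[1]) - taylor2(h)
    return abs(E) / (h[0] ** 2 + h[1] ** 2)

for t in (1e-1, 1e-2, 1e-3):
    print(ratio(t))  # tends to 0 as t -> 0, confirming E(h) = o(||h||^2)
```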