Part IB Complex Methods
Based on lectures by R. E. Hunt
Notes taken by Dexter Chua
Lent 2016
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
Analytic functions
Definition of an analytic function. Cauchy-Riemann equations. Analytic functions as
conformal mappings; examples. Application to the solutions of Laplace’s equation in
various domains. Discussion of log z and z^a. [5]
Contour integration and Cauchy’s Theorem
[Proofs of theorems in this section will not be examined in this course.]
Contours, contour integrals. Cauchy’s theorem and Cauchy’s integral formula. Taylor
and Laurent series. Zeros, poles and essential singularities. [3]
Residue calculus
Residue theorem, calculus of residues. Jordan’s lemma. Evaluation of definite integrals
by contour integration. [4]
Fourier and Laplace transforms
Laplace transform: definition and basic properties; inversion theorem (proof not
required); convolution theorem. Examples of inversion of Fourier and Laplace transforms
by contour integration. Applications to differential equations. [4]
Contents
0 Introduction
1 Analytic functions
1.1 The complex plane and the Riemann sphere
1.2 Complex differentiation
1.3 Harmonic functions
1.4 Multi-valued functions
1.5 Möbius maps
1.6 Conformal maps
1.7 Solving Laplace’s equation using conformal maps
2 Contour integration and Cauchy’s theorem
2.1 Contour and integrals
2.2 Cauchy’s theorem
2.3 Contour deformation
2.4 Cauchy’s integral formula
3 Laurent series and singularities
3.1 Taylor and Laurent series
3.2 Zeros
3.3 Classification of singularities
3.4 Residues
4 The calculus of residues
4.1 The residue theorem
4.2 Applications of the residue theorem
4.3 Further applications of the residue theorem using rectangular contours
4.4 Jordan’s lemma
5 Transform theory
5.1 Fourier transforms
5.2 Laplace transform
5.3 Elementary properties of the Laplace transform
5.4 The inverse Laplace transform
5.5 Solution of differential equations using the Laplace transform
5.6 The convolution theorem for Laplace transforms
0 Introduction
In Part IA, we learnt quite a lot about differentiating and integrating real
functions. Differentiation was fine, but integration was tedious. Integrals were
very difficult to evaluate.
In this course, we will study differentiating and integrating complex functions.
Here differentiation is nice, and integration is easy. We will show that complex
differentiable functions satisfy many things we hoped were true: a complex
differentiable function is automatically infinitely differentiable. Moreover, an
everywhere differentiable function must be constant if it is bounded.
On the integration side, we will show that integrals of complex functions can
be performed by computing things known as residues, which are much easier
to compute. We are not actually interested in performing complex integrals.
Instead, we will take some difficult real integrals, and pretend they are complex
ones.
This is a methods course. By this, we mean we will not focus too much on
proofs. We will at best just skim over the proofs. Instead, we focus on doing
things. We will not waste time proving things people have proved 300 years ago.
If you like proofs, you can go to the IB Complex Analysis course, or look them
up in relevant books.
1 Analytic functions
1.1 The complex plane and the Riemann sphere
We begin with a review of complex numbers. Any complex number z ∈ C can
be written in the form x + iy, where x = Re z, y = Im z are real numbers. We
can also write it as re^{iθ}, where
Definition (Modulus and argument). The modulus and argument of a complex
number z = x + iy are given by
r = |z| = √(x² + y²),  θ = arg z,
where x = r cos θ, y = r sin θ.
The argument is defined only up to multiples of 2π. So we define the following:
Definition (Principal value of argument). The principal value of the argument
is the value of θ in the range (−π, π].
We might be tempted to write down the formula
θ = tan⁻¹(y/x),
but this does not always give the right answer: it is correct only if x > 0. If
x ≤ 0, then it might be out by ±π (e.g. consider z = 1 + i and z = −1 − i).
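As a quick numerical illustration (Python is not part of the course, but the standard library function math.atan2 computes exactly this quadrant-aware arctangent): the points z = 1 + i and z = −1 − i have the same ratio y/x, so the naive formula cannot distinguish them, while atan2 inspects the signs of x and y separately and returns the principal value in (−π, π].

```python
import cmath
import math

z1 = complex(1, 1)    # first quadrant: arg z1 = pi/4
z2 = complex(-1, -1)  # third quadrant: arg z2 = -3*pi/4

# The naive formula gives the same answer for both points, since y/x is equal:
naive1 = math.atan(z1.imag / z1.real)
naive2 = math.atan(z2.imag / z2.real)

# atan2 takes the signs of x and y separately, landing in the right quadrant;
# cmath.phase computes the same principal value in (-pi, pi]:
arg1 = math.atan2(z1.imag, z1.real)
arg2 = math.atan2(z2.imag, z2.real)
```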
Definition (Open set). An open set D is one which does not include its boundary.
More technically, D ⊆ C is open if for all z₀ ∈ D, there is some δ > 0 such that
the disc |z − z₀| < δ is contained in D.
Definition (Neighbourhood). A neighbourhood of a point z ∈ C is an open set
containing z.
The extended complex plane
Often, the complex plane C itself is not enough. We want to consider the point
∞ as well. This forms the extended complex plane.
Definition (The extended complex plane). The extended complex plane is
C* = C ∪ {∞}. We can reach the “point at infinity” by going off in any direction
in the plane, and all are equivalent. In particular, there is no concept of −∞.
All infinities are the same. Operations with ∞ are done in the obvious way.
Sometimes, we do write down things like −∞. This does not refer to a
different point. Instead, this indicates a limiting process: we mean we are
approaching this infinity from the direction of the negative real axis. However,
we still end up in the same place.
Conceptually, we can visualize this using the Riemann sphere, which is a
sphere resting on the complex plane with its “South Pole” S at z = 0.
(diagram: the Riemann sphere, with “South Pole” S resting on the plane at z = 0, “North Pole” N, and a point P on the sphere corresponding to z)
For any point z ∈ C, we draw the line through the “North Pole” N of the sphere
and z, and note where this line intersects the sphere. This specifies an equivalent
point P on the sphere. Then ∞ is equivalent to the North Pole of the sphere
itself. So the extended complex plane is mapped bijectively to the sphere.
This is a useful way to visualize things, but is not as useful when we actually
want to do computations. To investigate properties of ∞, we use the substitution
ζ = 1/z. A function f(z) is said to have a particular property at ∞ if f(1/ζ) has
that same property at ζ = 0. This vague notion will be made precise when we
have specific examples to play with.
1.2 Complex differentiation
Recall the definition of differentiation for a real function f(x):
f′(x) = lim_{δx→0} (f(x + δx) − f(x))/δx.
It is implicit that the limit must be the same whichever direction we approach
from. For example, consider |x| at x = 0. If we approach from the right, i.e.
δx → 0⁺, then the limit is +1, whereas from the left, i.e. δx → 0⁻, the limit is
−1. Because these limits are different, we say that |x| is not differentiable at
the origin.
This is obvious and we already know that, but for complex differentiation,
this issue is much more important, since there are many more directions. We
now extend the definition of differentiation to complex numbers:
Definition (Complex differentiable function). A complex function
f : C → C is differentiable at z if
f′(z) = lim_{δz→0} (f(z + δz) − f(z))/δz
exists (and is therefore independent of the direction of approach; but now
there are infinitely many possible directions).
This is the same definition as that for a real function. Often, we are not
interested in functions that are differentiable at a point, since this might allow
some rather exotic functions we do not want to consider. Instead, we want the
function to be differentiable near the point.
Definition (Analytic function). We say f is analytic at a point z if there
exists a neighbourhood of z throughout which f′ exists. The terms regular and
holomorphic are also used.
Definition (Entire function). A complex function is entire if it is analytic
throughout C.
The property of analyticity is in fact a surprisingly strong one! For example,
two consequences are:
(i) If a function is analytic, then it is differentiable infinitely many times. This
is very very false for real functions. There are real functions differentiable
N times, but no more (e.g. by taking a non-differentiable function and
integrating it N times).
(ii) A bounded entire function must be a constant.
There are many more interesting properties, but these are sufficient to show us
that complex differentiation is very different from real differentiation.
The Cauchy-Riemann equations
We already know well how to differentiate real functions. Can we use this to
determine whether certain complex functions are differentiable? For example, is
the function f(x + iy) = cos x + i sin y differentiable? In general, given a complex
function
f(z) = u(x, y) + iv(x, y),
where z = x + iy and u, v are real functions, is there an easy criterion to determine
whether f is differentiable?
We suppose that f is differentiable at z. We may take δz in any direction
we like. First, we take it to be real, with δz = δx. Then
f′(z) = lim_{δx→0} (f(z + δx) − f(z))/δx
      = lim_{δx→0} (u(x + δx, y) + iv(x + δx, y) − (u(x, y) + iv(x, y)))/δx
      = ∂u/∂x + i ∂v/∂x.
What this says is something entirely obvious: since we are allowed to take the
limit in any direction, we can take it in the x direction, and we get the
corresponding partial derivative. This is a completely uninteresting point. Instead,
let’s do the really fascinating thing of taking the limit in the y direction!
Let δz = iδy. Then we can compute
f′(z) = lim_{δy→0} (f(z + iδy) − f(z))/(iδy)
      = lim_{δy→0} (u(x, y + δy) + iv(x, y + δy) − (u(x, y) + iv(x, y)))/(iδy)
      = ∂v/∂y − i ∂u/∂y.
By the definition of differentiability, the two results for f′(z) must agree! So we
must have
∂u/∂x + i ∂v/∂x = ∂v/∂y − i ∂u/∂y.
Taking the real and imaginary components, we get
Proposition (Cauchy-Riemann equations). If f = u + iv is differentiable, then
∂u/∂x = ∂v/∂y,  ∂u/∂y = −∂v/∂x.
Is the converse true? If these equations hold, does it follow that f is
differentiable? This is not always true. This holds only if u and v themselves are
differentiable, which is a stronger condition than the partial derivatives existing, as
you may have learnt from IB Analysis II. In particular, this holds if the partial
derivatives u_x, u_y, v_x, v_y are continuous (which implies differentiability). So
Proposition. Given a complex function f = u + iv, if u and v are real differen-
tiable at a point z and
∂u/∂x = ∂v/∂y,  ∂u/∂y = −∂v/∂x,
then f is differentiable at z.
We will not prove this; proofs are for IB Complex Analysis.
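Though we will not prove it, we can at least sanity-check the Cauchy-Riemann equations numerically (a quick Python sketch, not a course technique): central finite differences estimate the four partial derivatives of f = u + iv, and we verify that CR holds for the analytic function e^z but fails for Re z.

```python
import cmath

def partials(f, x, y, h=1e-6):
    """Central-difference estimates of u_x, u_y, v_x, v_y for f = u + iv."""
    fx = (f(complex(x + h, y)) - f(complex(x - h, y))) / (2 * h)
    fy = (f(complex(x, y + h)) - f(complex(x, y - h))) / (2 * h)
    return fx.real, fy.real, fx.imag, fy.imag  # u_x, u_y, v_x, v_y

def satisfies_cr(f, x, y, tol=1e-6):
    """Check u_x = v_y and u_y = -v_x at the point (x, y)."""
    ux, uy, vx, vy = partials(f, x, y)
    return abs(ux - vy) < tol and abs(uy + vx) < tol

exp_ok = satisfies_cr(cmath.exp, 0.3, -0.7)                     # analytic: holds
re_ok = satisfies_cr(lambda z: complex(z.real, 0), 0.3, -0.7)   # Re z: fails
```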
Examples of analytic functions
Example.
(i) f(z) = z is entire, i.e. differentiable everywhere. Here u = x, v = y. Then
the Cauchy-Riemann equations are satisfied everywhere, since
∂u/∂x = ∂v/∂y = 1,  ∂u/∂y = −∂v/∂x = 0,
and these are clearly continuous. Alternatively, we can prove this directly
from the definition.
(ii) f(z) = e^z = e^x(cos y + i sin y) is entire since
∂u/∂x = e^x cos y = ∂v/∂y,  ∂u/∂y = −e^x sin y = −∂v/∂x.
The derivative is
f′(z) = ∂u/∂x + i ∂v/∂x = e^x cos y + ie^x sin y = e^z,
as expected.
(iii) f(z) = z^n for n ∈ N is entire. This is less straightforward to check. Writing
z = r(cos θ + i sin θ), we obtain
u = r^n cos nθ,  v = r^n sin nθ.
We can check the Cauchy-Riemann equations using the chain rule, writing
r = √(x² + y²) and tan θ = y/x. This takes quite a while, and it’s not worth
your time. But if you really do so, you will find the derivative to be nz^{n−1}.
(iv) Any rational function, i.e. f(z) = P(z)/Q(z) where P, Q are polynomials, is
analytic except at points where Q(z) = 0 (where it is not even defined).
For instance,
f(z) = z/(z² + 1)
is analytic except at ±i.
(v) Many standard functions can be extended naturally to complex functions
and obey the usual rules for their derivatives. For example,
sin z = (e^{iz} − e^{−iz})/(2i)
is differentiable with derivative cos z = (e^{iz} + e^{−iz})/2. We
can also write
sin z = sin(x + iy)
      = sin x cos iy + cos x sin iy
      = sin x cosh y + i cos x sinh y,
which is sometimes convenient.
Similarly cos z, sinh z, cosh z etc. differentiate to what we expect them
to differentiate to.
log z = log |z| + i arg z has derivative 1/z.
The product rule, quotient rule and chain rule hold in exactly the
same way, which allows us to prove (iii) and (iv) easily.
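Both the identity sin z = sin x cosh y + i cos x sinh y and the derivative of sin can be spot-checked numerically with Python's cmath module (a sanity check, not an exam technique; the sample point is arbitrary):

```python
import cmath
import math

z = complex(0.8, -1.3)
x, y = z.real, z.imag

# sin z = sin x cosh y + i cos x sinh y
lhs = cmath.sin(z)
rhs = complex(math.sin(x) * math.cosh(y), math.cos(x) * math.sinh(y))

# d/dz sin z = cos z, estimated by a central difference
h = 1e-6
deriv = (cmath.sin(z + h) - cmath.sin(z - h)) / (2 * h)
```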
Examples of non-analytic functions
Example.
(i) Let f(z) = Re z. This has u = x, v = 0. But
∂u/∂x = 1 ≠ 0 = ∂v/∂y.
So Re z is nowhere analytic.
(ii) Consider f(z) = |z|. This has u = √(x² + y²), v = 0. This is thus nowhere
analytic.
(iii) The complex conjugate f(z) = z̄ = z* = x − iy has u = x, v = −y. So the
Cauchy-Riemann equations don’t hold. Hence this is nowhere analytic.
We could have deduced (ii) from this: if |z| were analytic, then so would
|z|², and hence z̄ = |z|²/z also has to be analytic, which is not true.
(iv) We have to be a bit more careful with f(z) = |z|² = x² + y². The
Cauchy-Riemann equations are satisfied only at the origin. So f is only
differentiable at z = 0. However, it is not analytic, since there is no
neighbourhood of 0 throughout which f is differentiable.
1.3 Harmonic functions
This is the last easy section of the course.
Definition (Harmonic conjugates). Two functions u, v satisfying the Cauchy-
Riemann equations are called harmonic conjugates.
If we know one, then we can find the other up to a constant. For example, if
u(x, y) = x² − y², then v must satisfy
∂v/∂y = ∂u/∂x = 2x.
So we must have v = 2xy + g(x) for some function g(x). The other Cauchy-
Riemann equation gives
−2y = ∂u/∂y = −∂v/∂x = −2y − g′(x).
This tells us g′(x) = 0. So g must be a genuine constant, say α. The corresponding
analytic function whose real part is u is therefore
f(z) = x² − y² + 2ixy + iα = (x + iy)² + iα = z² + iα.
Note that in an exam, if we were asked to find the analytic function f with real
part u (where u is given), then we must express it in terms of z, and not x and
y, or else it is not clear this is indeed analytic.
On the other hand, if we are given that f(z) = u + iv is analytic, then we
can compute
∂²u/∂x² = ∂/∂x(∂u/∂x) = ∂/∂x(∂v/∂y) = ∂/∂y(∂v/∂x) = ∂/∂y(−∂u/∂y) = −∂²u/∂y².
So u satisfies Laplace’s equation in two dimensions, i.e.
∇²u = ∂²u/∂x² + ∂²u/∂y² = 0.
Similarly, so does v.
Definition (Harmonic function). A function satisfying Laplace’s equation
in an open set is said to be harmonic.
Thus we have shown the following:
Proposition. The real and imaginary parts of any analytic function are har-
monic.
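A numerical sanity check of this proposition (a Python sketch, not part of the course): the five-point finite-difference Laplacian of u = Re e^z = e^x cos y is approximately zero, while that of the non-harmonic function x² + y² is exactly 4.

```python
import math

def laplacian(g, x, y, h=1e-3):
    """Five-point finite-difference approximation to the Laplacian of g."""
    return (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h)
            - 4 * g(x, y)) / h**2

u = lambda x, y: math.exp(x) * math.cos(y)  # Re e^z: harmonic
w = lambda x, y: x * x + y * y              # |z|^2: not harmonic

lap_u = laplacian(u, 0.4, -0.2)
lap_w = laplacian(w, 0.4, -0.2)
```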
1.4 Multi-valued functions
For z = re^{iθ}, we define log z = log r + iθ. There are infinitely many values of
log z, for every choice of θ. For example,
log i = πi/2 or 5πi/2 or −3πi/2 or ⋯.
This is fine, right? Functions can be multi-valued. Nothing’s wrong.
Well, when we write down an expression, it’d better be well-defined. So we
really should find some way to deal with this.
This section is really more subtle than it sounds. It turns out it is non-trivial
to deal with these multi-valued functions. We can’t just, say, randomly require
θ to be in, say, (0, 2π], or else we will have some continuity problems, as
we will later see.
Branch points
Consider the three curves shown in the diagram.
(diagram: three closed curves; C₁ and C₂ do not enclose the origin, while C₃ encircles it)
In C₁, we could always choose θ to be in the range (0, π/2), and then log z
would be continuous and single-valued going round C₁.
On C₂, we could choose θ ∈ (π/2, 3π/2), and log z would again be continuous
and single-valued.
However, this doesn’t work for C₃. Since this encircles the origin, there is no
such choice. Whatever we do, log z cannot be made continuous and single-valued
around C₃. It must either “jump” somewhere, or the value has to increase by
2πi every time we go round the circle, i.e. the function is multi-valued.
We now define what a branch point is. In this case, it is the origin, since
that is where all our problems occur.
Definition (Branch point). A branch point of a function is a point which is
impossible to encircle with a curve on which the function is both continuous and
single-valued. The function is said to have a branch point singularity there.
Example.
(i) log(z − a) has a branch point at z = a.
(ii) log((z − 1)/(z + 1)) = log(z − 1) − log(z + 1) has two branch points, at ±1.
(iii) z^α = r^α e^{iαθ} has a branch point at the origin as well, for α ∉ Z: consider
a circle of radius r₀ centered at 0, and wlog suppose we start at θ = 0 and
go once round anticlockwise. Just as before, θ must vary continuously to
ensure continuity of e^{iαθ}. So as we get back almost to where we started,
θ will approach 2π, and there will be a jump in θ from 2π back to 0. So
there will be a jump in z^α from r₀^α e^{2πiα} to r₀^α. So z^α is not continuous if
e^{2πiα} ≠ 1, i.e. if α is not an integer.
(iv) log z also has a branch point at ∞. Recall that to investigate the properties
of a function f(z) at infinity, we investigate the property of f(1/z) at zero.
If ζ = 1/z, then log z = −log ζ, which has a branch point at ζ = 0. Similarly,
z^α has a branch point at ∞ for α ∉ Z.
(v) The function log((z − 1)/(z + 1)) does not have a branch point at infinity, since if
ζ = 1/z, then
log((z − 1)/(z + 1)) = log((1 − ζ)/(1 + ζ)).
For ζ close to zero, (1 − ζ)/(1 + ζ) remains close to 1, and therefore well away from
the branch point of log at the origin. So we can encircle ζ = 0 without
log((1 − ζ)/(1 + ζ)) being discontinuous.
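Returning to example (iii): Python's principal branch of z**α puts its cut along the negative real axis, so the jump by the factor e^{2πiα} is easy to observe numerically. A quick illustration (α = 1/3 is an arbitrary non-integer choice):

```python
import cmath

alpha = 1 / 3   # any non-integer exponent exhibits the jump
eps = 1e-9
x = -2.0        # a point on the negative real axis (the principal cut)

above = complex(x, eps) ** alpha    # just above the cut
below = complex(x, -eps) ** alpha   # just below the cut
ratio = above / below               # tends to e^{2*pi*i*alpha}
```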
So we’ve identified the points where the functions have problems. How do
we deal with these problems?
Branch cuts
If we wish to make log z continuous and single-valued, therefore, we must stop
any curve from encircling the origin. We do this by introducing a branch cut
from −∞ on the real axis to the origin. No curve is allowed to cross this cut.
(diagram: the complex plane with a wavy cut along the negative real axis; z makes an angle θ with the positive real axis)
Once we’ve decided where our branch cut is, we can use it to fix on values of
θ lying in the range (−π, π], and we have defined a branch of log z. This branch
is single-valued and continuous on any curve C that does not cross the cut.
This branch is in fact analytic everywhere, with (d/dz) log z = 1/z, except on the
non-positive real axis, where it is not even continuous.
Note that a branch cut is the squiggly line, while a branch is a particular
choice of the value of log z.
The cut described above is the canonical (i.e. standard) branch cut for log z.
The resulting value of log z is called the principal value of the logarithm.
What are the values of log z just above and just below the branch cut?
Consider a point on the negative real axis, z = x < 0. Just above the cut, at
z = x + i0⁺, we have θ = π. So log z = log |x| + iπ. Just below it, at z = x + i0⁻,
we have log z = log |x| − iπ. Hence we have a discontinuity of 2πi.
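This 2πi jump is easy to observe with Python's cmath.log, which computes exactly this principal branch:

```python
import cmath

eps = 1e-9
x = -2.0  # a point on the branch cut (the negative real axis)

above = cmath.log(complex(x, eps))    # theta close to pi
below = cmath.log(complex(x, -eps))   # theta close to -pi
jump = above - below                  # tends to 2*pi*i
```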
We have picked an arbitrary branch cut and branch. We can pick other
branch cuts or branches. Even with the same branch cut, we can still have a
different branch: we can instead require θ to fall in (π, 3π]. Of course, we can
also pick other branch cuts, e.g. the non-negative imaginary axis. Any cut that
stops curves wrapping around the branch point will do.
Here we can choose θ ∈ (−3π/2, π/2]. We can also pick a branch cut like this:
(diagram: a wavy cut running from the origin out to infinity)
The exact choice of θ is more difficult to write down, but this is an equally valid
cut, since it stops curves from encircling the origin.
Exactly the same considerations (and possible branch cuts) apply for z^α (for
α ∉ Z).
In practice, whenever a problem requires the use of a branch, it is important
to specify it clearly. This can be done in two ways:
(i) Define the function and parameter range explicitly, e.g.
log z = log |z| + i arg z,  arg z ∈ (−π, π].
(ii) Specify the location of the branch cut and give the value of the required
branch at a single point not on the cut. The values everywhere else are
then defined uniquely by continuity. For example, we have log z with a
branch cut along the non-positive real axis and log 1 = 0. Of course, we could have defined
log 1 = 2πi as well, and this would correspond to picking arg z ∈ (π, 3π].
Either way can be used, but it must be done properly.
Riemann surfaces*
Instead of this brutal way of introducing a cut and forbidding crossing, Riemann
imagined different branches as separate copies of C, all stacked on top of each
other but each one joined to the next at the branch cut. This structure is a
Riemann surface.
(diagram: several copies of C stacked on top of one another, joined along the branch cut)
The idea is that traditionally, we are not allowed to cross branch cuts. Here,
when we cross a branch cut, we will move to a different copy of C, and this
corresponds to a different branch of our function.
We will not say any more about this; there is a whole Part II course
devoted to these, uncreatively named IID Riemann Surfaces.
Multiple branch cuts
When there is more than one branch point, we may need more than one branch
cut. For
f(z) = (z(z − 1))^{1/3},
there are two branch points, at 0 and 1. So we need two branch cuts. A possibility
is shown below. Then no curve can wrap around either 0 or 1.
(diagram: one cut ending at 0 and another ending at 1; z is at distance r and angle θ from 0, and at distance r₁ and angle θ₁ from 1)
For any z, we write z = re^{iθ} and z − 1 = r₁e^{iθ₁} with θ ∈ (−π, π] and θ₁ ∈ [0, 2π),
and define
f(z) = ∛(rr₁) e^{i(θ + θ₁)/3}.
This is continuous so long as we don’t cross either branch cut. This is all nice
and simple.
However, sometimes we need fewer branch cuts than we might think. Con-
sider instead the function
f(z) = log((z − 1)/(z + 1)).
Writing z + 1 = re^{iθ} and z − 1 = r₁e^{iθ₁}, we can write this as
f(z) = log(z − 1) − log(z + 1) = log(r₁/r) + i(θ₁ − θ).
This has branch points at ±1. We can, of course, pick our branch cuts as above.
However, notice that these two cuts also make it impossible for z to “wind
around ∞” (e.g. moving around a circle of arbitrarily large radius). Yet ∞ is not
a branch point, and we don’t have to make this unnecessary restriction. Instead,
we can use the following branch cut:
(diagram: a single cut along the segment from −1 to 1; z is at distance r and angle θ from −1, and at distance r₁ and angle θ₁ from 1)
Drawing this branch cut is not hard. However, picking the values of θ, θ₁ is
more tricky. What we really want to pick is θ, θ₁ ∈ [0, 2π). This might not look
intuitive at first, but we will shortly see why this is the right choice.
Suppose that we are unlawful and cross the branch cut. Then the value of θ
jumps across the cut, while the value of θ₁ varies smoothly. So the value of
f(z) jumps. This is expected, since we have a branch cut there. If we
pass through the negative real axis on the left of the branch cut, then nothing
happens, since θ = θ₁ = π are not at a point of discontinuity.
The interesting part is when we pass through the positive real axis on the
right of the branch cut. When we do this, both θ and θ₁ jump by 2π. However, this
does not induce a discontinuity in f(z), since f(z) depends on the difference
θ₁ − θ, which has not experienced a jump.
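Conveniently, the principal branch computed by cmath.log realises exactly this single cut: (z − 1)/(z + 1) is a negative real number precisely when z lies in (−1, 1), so the composite function jumps across the segment but is continuous across the real axis beyond ±1. A quick numerical sketch (not part of the course):

```python
import cmath

def f(z):
    # principal branch of log((z-1)/(z+1)); its effective cut is (-1, 1)
    return cmath.log((z - 1) / (z + 1))

eps = 1e-9

# Continuous when crossing the real axis to the right of +1 ...
gap_right = f(complex(3.0, eps)) - f(complex(3.0, -eps))
# ... but a jump of 2*pi*i when crossing the cut between -1 and 1:
gap_cut = f(complex(0.0, eps)) - f(complex(0.0, -eps))
```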
1.5 Möbius maps
We are now going to consider a special class of maps, namely the Möbius maps, as
defined in IA Groups. While these maps have many many different applications,
the most important thing we are going to use them for is to define some nice
conformal mappings in the next section.
We know from general theory that the Möbius map
z ↦ w = (az + b)/(cz + d)
with ad − bc ≠ 0 is analytic except at z = −d/c. It is useful to consider it as a
map from C* → C* = C ∪ {∞}, with
−d/c ↦ ∞,  ∞ ↦ a/c.
It is then a bijective map between C* and itself, with the inverse being
w ↦ (−dw + b)/(cw − a),
another Möbius map. These are all analytic everywhere when considered as
maps C* → C*.
Definition (Circline). A circline is either a circle or a line.
The key property of Möbius maps is the following:
Proposition. Möbius maps take circlines to circlines.
Note that if we start with a circle, we might get a circle or a line; if we start
with a line, we might get a circle or a line.
Proof. Any circline can be expressed as a circle of Apollonius,
|z − z₁| = λ|z − z₂|,
where z₁, z₂ ∈ C and λ ∈ R⁺.
This was proved in the first example sheet of IA Vectors and Matrices. The
case λ = 1 corresponds to a line, while λ ≠ 1 corresponds to a circle. Substituting
z in terms of w, we get
|(−dw + b)/(cw − a) − z₁| = λ |(−dw + b)/(cw − a) − z₂|.
Rearranging this gives
|(cz₁ + d)w − (az₁ + b)| = λ |(cz₂ + d)w − (az₂ + b)|. (∗)
A bit more rearranging gives
|w − (az₁ + b)/(cz₁ + d)| = λ |(cz₂ + d)/(cz₁ + d)| |w − (az₂ + b)/(cz₂ + d)|.
This is another circle of Apollonius.
Note that the proof fails if either cz₁ + d = 0 or cz₂ + d = 0, but then (∗)
trivially represents a circle.
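The key identity (∗) can be tested numerically: sample points z on a circle of Apollonius, map them by w = (az + b)/(cz + d), and check that the images satisfy the transformed relation. A sketch with arbitrarily chosen parameters (the centre/radius formulas below follow by expanding |z − z₁|² = λ²|z − z₂|²):

```python
import cmath

# Arbitrarily chosen Mobius map (ad - bc != 0) and Apollonius data
a, b, c, d = 2 + 1j, -1 + 0j, 1 + 0j, 3 - 2j
z1, z2, lam = 1 + 1j, -2 + 0j, 2.0

# Centre and radius of the circle |z - z1| = lam*|z - z2| (lam != 1)
c0 = (z1 - lam**2 * z2) / (1 - lam**2)
r = (abs(c0)**2 - (abs(z1)**2 - lam**2 * abs(z2)**2) / (1 - lam**2)) ** 0.5

ok = True
for k in range(12):
    z = c0 + r * cmath.exp(2j * cmath.pi * k / 12)
    # sanity check: z really lies on the Apollonius circle
    ok = ok and abs(abs(z - z1) - lam * abs(z - z2)) < 1e-9
    w = (a * z + b) / (c * z + d)
    # the identity (*) from the proof
    lhs = abs((c * z1 + d) * w - (a * z1 + b))
    rhs = lam * abs((c * z2 + d) * w - (a * z2 + b))
    ok = ok and abs(lhs - rhs) < 1e-9
```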
Geometrically, it is clear that choosing three distinct points in C* uniquely
specifies a circline (if one of the points is ∞, then we have specified the straight
line through the other two points).
Also,
Proposition. Given six points α, β, γ, α′, β′, γ′ ∈ C*, we can find a Möbius
map which sends α ↦ α′, β ↦ β′ and γ ↦ γ′.
Proof. Define the Möbius map
f₁(z) = ((β − γ)/(β − α)) ((z − α)/(z − γ)).
By direct inspection, this sends α ↦ 0, β ↦ 1 and γ ↦ ∞. Again, we let
f₂(z) = ((β′ − γ′)/(β′ − α′)) ((z − α′)/(z − γ′)).
This clearly sends α′ ↦ 0, β′ ↦ 1 and γ′ ↦ ∞. Then f₂⁻¹ ∘ f₁ is the required
mapping. It is a Möbius map since Möbius maps form a group.
We can therefore find a Möbius map taking any given circline to
any other, which is convenient.
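The proof above is constructive and easy to turn into code. A minimal sketch (assuming none of the six points, nor the intermediate images, is ∞; the sample points are arbitrary): build f₁ and f₂ as in the proof, and invert f₂ explicitly by solving t = f₂(w) for w, which is a linear equation.

```python
def f_std(z, a, b, g):
    """The standard map sending a -> 0, b -> 1, g -> infinity."""
    return (b - g) / (b - a) * (z - a) / (z - g)

def mobius(a, b, g, a2, b2, g2, z):
    """Mobius map sending a -> a2, b -> b2, g -> g2, evaluated at z
    (assumes all points are finite and z is not sent to infinity)."""
    t = f_std(z, a, b, g)
    # invert f2: t = k*(w - a2)/(w - g2) with k = (b2 - g2)/(b2 - a2),
    # a linear equation in w
    k = (b2 - g2) / (b2 - a2)
    return (t * g2 - k * a2) / (t - k)

# arbitrary sample points
alpha, beta, gamma = 0j, 1 + 0j, 1j
alpha2, beta2, gamma2 = 2 + 0j, 3 + 0j, -1j
wa = mobius(alpha, beta, gamma, alpha2, beta2, gamma2, alpha)
wb = mobius(alpha, beta, gamma, alpha2, beta2, gamma2, beta)
```

(Checking γ ↦ γ′ directly would require handling the intermediate value ∞, so the test below checks the two finite-image points and a generic fourth point via f₂(w) = f₁(z).)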
1.6 Conformal maps
Sometimes, we might be asked to solve a problem on some complicated subspace
U ⊆ C. For example, we might need to solve Laplace’s equation subject to
some boundary conditions. In such cases, it is often convenient to transform
our space U into some nicer space V, such as the open disk. To do so, we will
need a complex function f that sends U to V. For this function to preserve our
properties, such that the solution on V can be transferred back to a solution on
U, we would of course want f to be differentiable. Moreover, we would like it to
have non-vanishing derivative, so that it is at least locally invertible.
Definition (Conformal map). A conformal map f : U → V, where U, V are
open subsets of C, is one which is analytic with non-zero derivative.
In reality, we would often want the map to be a bijection. We sometimes call
these conformal equivalences.
Unfortunately, after many hundred years, we still haven’t managed to agree
on what being conformal means. An alternative definition is that a conformal
map is one that preserves the angle (in both magnitude and orientation) between
intersecting curves.
We shall show that our definition implies this is true; the converse is also
true, but the proof is omitted. So the two definitions are equivalent.
Proposition. A conformal map preserves the angles between intersecting curves.
Proof. Suppose z₁(t) is a curve in C, parameterised by t ∈ R, which passes
through a point z₀ when t = t₁. Suppose that its tangent there, z₁′(t₁), has a
well-defined direction, i.e. is non-zero, and that the curve makes an angle
φ = arg z₁′(t₁) to the x-axis at z₀.
Consider the image of the curve, Z₁(t) = f(z₁(t)). Its tangent direction at
t = t₁ is
Z₁′(t₁) = z₁′(t₁) f′(z₁(t₁)) = z₁′(t₁) f′(z₀),
and therefore makes an angle with the x-axis of
arg(Z₁′(t₁)) = arg(z₁′(t₁) f′(z₀)) = φ + arg f′(z₀),
noting that arg f′(z₀) exists since f is conformal, and hence f′(z₀) ≠ 0.
In other words, the tangent direction has been rotated by arg f′(z₀), and this
is independent of the curve we started with. Now if z₂(t) is another curve passing
through z₀, then its tangent direction will also be rotated by arg f′(z₀). The
result then follows.
Often, the easiest way to find the image set of a conformal map acting on a
set U is first to find the image of its boundary ∂U, which will form the boundary
∂V of V; but, since this does not reveal which side of ∂V V lies on, take a point
of your choice within U, whose image will lie within V.
Example.
(i) The map z ↦ az + b, where a, b ∈ C and a ≠ 0, is a conformal map. It
rotates by arg a, enlarges by |a|, and translates by b. This is conformal
everywhere.
(ii) The map f(z) = z² is a conformal map from
U = {z : 0 < |z| < 1, 0 < arg z < π/2}
to
V = {w : 0 < |w| < 1, 0 < arg w < π}.
(diagram: the quarter-disc U mapped by f to the half-disc V)
Note that the right angles between the boundary curves at z = 1 and i are
preserved, because f is conformal there; but the right angle at z = 0 is not
preserved, because f is not conformal there (f′(0) = 0). Fortunately, this
does not matter, because U is an open set and does not contain 0.
(iii) How could we conformally map the left-hand half-plane
U = {z : Re z < 0}
to the wedge
V = {w : −π/4 < arg w < π/4}?
(diagram: the half-plane U and the wedge V)
We need to halve the angle. We saw that z ↦ z² doubles the angle, so we
might try z^{1/2}, for which we need to choose a branch. The branch cut must
not lie in U, since z^{1/2} is not analytic on the branch cut. In particular, the
principal branch does not work.
So we choose a cut along the negative imaginary axis, and the function is
defined by re^{iθ} ↦ √r e^{iθ/2}, where θ ∈ (−π/2, 3π/2]. This produces the wedge
{z′ : π/4 < arg z′ < 3π/4}. This isn’t exactly the wedge we want. So we need
to rotate it through −π/2. The final map is therefore
f(z) = −iz^{1/2}.
(iv) e^z takes rectangles conformally to sectors of annuli:
(diagram: the rectangle U given by x₁ < x < x₂, y₁ < y < y₂ maps to the annular sector V given by e^{x₁} < |w| < e^{x₂}, y₁ < arg w < y₂)
With an appropriate choice of branch, log z does the reverse.
(v) Möbius maps (which are conformal equivalences except at the point that is
sent to ∞) are very useful in taking circles, or parts of them, to straight
lines, or vice versa.
Consider f(z) = (z − 1)/(z + 1) acting on the unit disk U = {z : |z| < 1}. The
boundary of U is a circle. The three points −1, i and +1 lie on this circle,
and are mapped to ∞, i and 0 respectively.
Since Möbius maps take circlines to circlines, the image of ∂U is the
imaginary axis. Since f(0) = −1, we see that the image of U is the
left-hand half plane.
(diagram: the unit disc U mapped to the left-hand half plane V)
We can derive this alternatively by noting
w = (z − 1)/(z + 1) ⇔ z = −(w + 1)/(w − 1).
So
|z| < 1 ⇔ |w + 1| < |w − 1|,
i.e. w is closer to −1 than it is to +1, which describes precisely the left-hand
half plane.
In fact, this particular map f(z) = (z − 1)/(z + 1) can be deployed more generally on
quadrants, because it permutes 8 divisions of the complex plane as follows:
(diagram: the complex plane divided into eight numbered regions)
The map sends 1 ↦ 2 ↦ 3 ↦ 4 ↦ 1 and 5 ↦ 6 ↦ 7 ↦ 8 ↦ 5. In
particular, this agrees with what we had above: it sends the complete
circle to the left-hand half plane.
(vi) Consider the map f(z) = 1/z. This is just another Möbius map! Hence
everything we know about Möbius maps applies to it. In particular, it is
useful for acting on vertical and horizontal lines. Details are left for the
first example sheet.
In practice, complicated conformal maps are usually built up from individual
building blocks, each a simple conformal map. The required map is the
composition of these. For this to work, we have to note that the composition of
conformal maps is conformal, by the chain rule.
Example. Suppose we want to map the upper half-disc |z| < 1, Im z > 0 to the
full disc |z| < 1. We might want to just do z ↦ z². However, this does not work,
since the image does not include the non-negative real axis, e.g. the point z = 1/2.
Instead, we need to do something more complicated. We will do this in several steps:
(i) We apply f₁(z) = (z − 1)/(z + 1) to take the half-disc to the second quadrant.
(ii) We now recall that f₁ also takes the right-hand half plane to the disc. So we
square and rotate to get the right-hand half plane: we apply f₂(z) = iz².
(iii) We apply f₃(z) = f₁(z) again to obtain the disc.
The desired conformal map is then f₃ ∘ f₂ ∘ f₁, i.e. first z ↦ (z − 1)/(z + 1),
then z ↦ iz², then z ↦ (z − 1)/(z + 1). You can, theoretically, expand
this out and get an explicit expression, but that would be a waste of time.
1.7 Solving Laplace’s equation using conformal maps
As we have mentioned, conformal maps are useful for transferring problems from
a complicated domain to a simple domain. For example, we can use them to solve
Laplace’s equation, since solutions to Laplace’s equation are given by real and
imaginary parts of holomorphic functions.
More concretely, the following algorithm can be used to solve Laplace’s
equation ∇²φ(x, y) = 0 on a tricky domain U ⊆ R² with given Dirichlet
boundary conditions on ∂U. We now pretend R² is actually C, and identify
subsets of R² with subsets of C in the obvious manner.
(i) Find a conformal map f : U → V, where U is now considered a subset of
C, and V is a “nice” domain of our choice. Our aim is to find a harmonic
function Φ in V that satisfies the same boundary conditions as φ.
(ii) Map the boundary conditions on ∂U directly to the equivalent points on ∂V.
(iii) Now solve ∇²Φ = 0 in V with the new boundary conditions.
(iv) The required harmonic function φ in U is then given by
φ(x, y) = Φ(Re f(x + iy), Im f(x + iy)).
To prove this works, we can take $\nabla^2$ of this expression, write $f = u + iv$, use the Cauchy-Riemann equations, and expand the mess.
Alternatively, we perform magic. Note that since $\Phi$ is harmonic, it is the real part of some complex analytic function $F(z) = \Phi(x, y) + i\Psi(x, y)$, where $z = x + iy$. Now $F(f(z))$ is analytic, as it is a composition of analytic functions. So its real part, which is $\Phi(\operatorname{Re} f, \operatorname{Im} f)$, is harmonic.
Let’s do an example. In this case, you might be able to solve this directly
just by looking at it, using what you’ve learnt from IB Methods. However, we
will do it with this complex methods magic.
Example. We want to find a bounded solution of $\nabla^2 \varphi = 0$ on the first quadrant of $\mathbb{R}^2$ subject to $\varphi(x, 0) = 0$ and $\varphi(0, y) = 1$ when $x, y > 0$.
This is a bit silly, since our $U$ is supposed to be a nasty region, but our $U$ is actually quite nice. Nevertheless, we still do this since this is a good example.
We choose $f(z) = \log z$, which maps $U$ to the strip $0 < \operatorname{Im} z < \frac{\pi}{2}$.
Recall that we said $\log$ maps an annulus to a rectangle. This is indeed the case here: $U$ is an annulus with zero inner radius and infinite outer radius; $V$ is an infinitely long rectangle.
We must now solve $\nabla^2 \Phi = 0$ in $V$ subject to
\[ \Phi(x, 0) = 0, \qquad \Phi\left(x, \frac{\pi}{2}\right) = 1 \]
for all $x \in \mathbb{R}$. Note that we have these boundary conditions since $f(z)$ takes the positive real axis of $\partial U$ to the line $\operatorname{Im} z = 0$, and the positive imaginary axis to the line $\operatorname{Im} z = \frac{\pi}{2}$.
By inspection, the solution is
\[ \Phi(x, y) = \frac{2}{\pi} y. \]
Hence,
\[ \varphi(x, y) = \Phi(\operatorname{Re} \log z, \operatorname{Im} \log z) = \frac{2}{\pi} \operatorname{Im} \log z = \frac{2}{\pi} \tan^{-1} \frac{y}{x}. \]
Notice this is just $\frac{2}{\pi}$ times the argument $\theta$.
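As a quick numerical sanity check (a sketch, not part of the course), we can verify that this candidate solution is harmonic with a five-point finite-difference stencil, and that it matches the two boundary conditions:

```python
import math

def phi(x, y):
    # Candidate solution on the first quadrant: (2/pi) * arg(x + iy)
    return (2 / math.pi) * math.atan2(y, x)

def laplacian(f, x, y, h=1e-4):
    # Five-point stencil; this should (approximately) vanish for a harmonic function
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

assert abs(laplacian(phi, 1.3, 0.7)) < 1e-4
assert abs(phi(2.0, 1e-12)) < 1e-6          # phi(x, 0) = 0 on the positive real axis
assert abs(phi(1e-12, 2.0) - 1.0) < 1e-6    # phi(0, y) = 1 on the positive imaginary axis
```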
2 Contour integration and Cauchy’s theorem
In the remainder of the course, we will spend all our time studying integration of complex functions, and see what we can do with it. At first, you might think this is just an obvious generalization of integration of real functions. This is not true. Complex integrals have many, many nice properties, and it turns out there are some really convenient tricks for evaluating complex integrals. In fact, we will learn how to evaluate certain real integrals by pretending they are complex.
2.1 Contours and integrals
With real functions, we can just integrate a function, say, from 0 to 1, since there
is just one possible way we can get from 0 to 1 along the real line. However, in
the complex plane, there are many paths we can take to get from a point to
another. Integrating along different paths may produce different results. So we
have to carefully specify our path of integration.
Definition (Curve). A curve $\gamma(t)$ is a (continuous) map $\gamma : [0, 1] \to \mathbb{C}$.
Definition (Closed curve). A closed curve is a curve γ such that γ(0) = γ(1).
Definition
(Simple curve)
.
A simple curve is one which does not intersect itself,
except at t = 0, 1 in the case of a closed curve.
Definition (Contour). A contour is a piecewise smooth curve.
Everything we do is going to be about contours. We shall, in an abuse of notation, often use the symbol $\gamma$ to denote both the map and its image, namely the actual curve in $\mathbb{C}$ traversed in a particular direction.
Notation. The contour $-\gamma$ is the contour $\gamma$ traversed in the opposite direction. Formally, we say
\[ (-\gamma)(t) = \gamma(1 - t). \]
Given two contours $\gamma_1$ and $\gamma_2$ with $\gamma_1(1) = \gamma_2(0)$, $\gamma_1 + \gamma_2$ denotes the two contours joined end-to-end. Formally,
\[ (\gamma_1 + \gamma_2)(t) = \begin{cases} \gamma_1(2t) & t < \frac{1}{2} \\ \gamma_2(2t - 1) & t \geq \frac{1}{2} \end{cases}. \]
Definition (Contour integral). The contour integral $\int_\gamma f(z)\,dz$ is defined to be the usual real integral
\[ \int_\gamma f(z)\,dz = \int_0^1 f(\gamma(t)) \gamma'(t)\,dt. \]
Alternatively, and equivalently, dissect $[0, 1]$ into $0 = t_0 < t_1 < \cdots < t_N = 1$, and let $z_n = \gamma(t_n)$ for $n = 0, \cdots, N$. We define
\[ \delta t_n = t_{n+1} - t_n, \qquad \delta z_n = z_{n+1} - z_n. \]
Then
\[ \int_\gamma f(z)\,dz = \lim_{\Delta \to 0} \sum_{n=0}^{N-1} f(z_n) \delta z_n, \]
where
\[ \Delta = \max_{n = 0, \cdots, N-1} \delta t_n, \]
and as $\Delta \to 0$, $N \to \infty$.
All this says is that the integral is what we expect it to be: an infinite sum. The result of a contour integral between two points in $\mathbb{C}$ may depend on the choice of contour.
Example. Consider
\[ I_1 = \int_{\gamma_1} \frac{dz}{z}, \qquad I_2 = \int_{\gamma_2} \frac{dz}{z}, \]
where $\gamma_1$ is the unit semicircle above the real axis and $\gamma_2$ is the one below it. In both cases, we integrate from $z = -1$ to $+1$ around a unit circle: $\gamma_1$ above, $\gamma_2$ below the real axis. Substitute $z = e^{i\theta}$, $dz = ie^{i\theta}\,d\theta$. Then we get
\[ I_1 = \int_\pi^0 \frac{ie^{i\theta}\,d\theta}{e^{i\theta}} = -i\pi, \qquad I_2 = \int_{-\pi}^0 \frac{ie^{i\theta}\,d\theta}{e^{i\theta}} = i\pi. \]
So they can in fact differ.
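The two values can be checked numerically with a Riemann sum over the parametrisations above (a sketch, not part of the notes):

```python
import cmath, math

def contour_integral(f, gamma, dgamma, n=20000):
    # Midpoint Riemann-sum approximation of the integral of f(z) dz along gamma(t), t in [0, 1]
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        total += f(gamma(t)) * dgamma(t) / n
    return total

# gamma_1: from -1 to +1 along the upper unit semicircle (theta running from pi down to 0)
g1 = lambda t: cmath.exp(1j * math.pi * (1 - t))
dg1 = lambda t: -1j * math.pi * cmath.exp(1j * math.pi * (1 - t))
# gamma_2: from -1 to +1 along the lower unit semicircle (theta running from -pi up to 0)
g2 = lambda t: cmath.exp(1j * math.pi * (t - 1))
dg2 = lambda t: 1j * math.pi * cmath.exp(1j * math.pi * (t - 1))

I1 = contour_integral(lambda z: 1 / z, g1, dg1)
I2 = contour_integral(lambda z: 1 / z, g2, dg2)
assert abs(I1 - (-1j * math.pi)) < 1e-6
assert abs(I2 - 1j * math.pi) < 1e-6
```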
Elementary properties of the integral
Contour integrals behave as we would expect them to.
Proposition.
(i) We write $\gamma_1 + \gamma_2$ for the path obtained by joining $\gamma_1$ and $\gamma_2$. We have
\[ \int_{\gamma_1 + \gamma_2} f(z)\,dz = \int_{\gamma_1} f(z)\,dz + \int_{\gamma_2} f(z)\,dz. \]
Compare this with the equivalent result on the real line:
\[ \int_a^c f(x)\,dx = \int_a^b f(x)\,dx + \int_b^c f(x)\,dx. \]
(ii) Recall $-\gamma$ is the path obtained from reversing $\gamma$. Then we have
\[ \int_{-\gamma} f(z)\,dz = -\int_\gamma f(z)\,dz. \]
Compare this with the real result
\[ \int_b^a f(x)\,dx = -\int_a^b f(x)\,dx. \]
(iii) If $\gamma$ is a contour from $a$ to $b$ in $\mathbb{C}$, then
\[ \int_\gamma f'(z)\,dz = f(b) - f(a). \]
This looks innocuous. This is just the fundamental theorem of calculus. However, there is some subtlety. This requires $f$ to be differentiable at every point on $\gamma$. In particular, $\gamma$ must not cross a branch cut. For example, our previous example had $\log z$ as the antiderivative of $\frac{1}{z}$. However, this does not imply the integrals along different paths are the same, since we need to pick different branches of $\log$ for different paths, and things become messy.
(iv) Integration by substitution and by parts work exactly as for integrals on the real line.
(v) If $\gamma$ has length $L$ and $|f(z)|$ is bounded by $M$ on $\gamma$, then
\[ \left| \int_\gamma f(z)\,dz \right| \leq LM. \]
This is since
\[ \left| \int_\gamma f(z)\,dz \right| \leq \int_\gamma |f(z)|\,|dz| \leq M \int_\gamma |dz| = ML. \]
We will be using this result a lot later on.
We will be using this result a lot later on.
We will not prove these. Again, if you like proofs, go to IB Complex Analysis.
Integrals on closed contours
If $\gamma$ is a closed contour, then it doesn't matter where we start from on $\gamma$; $\oint_\gamma f(z)\,dz$ means the same thing in any case, so long as we go all the way round ($\oint$ denotes an integral around a closed contour).
The usual direction of traversal is anticlockwise (the "positive sense"). If we traverse $\gamma$ in a negative sense (clockwise), then we get negative the previous result. More technically, the positive sense is the direction that keeps the interior of the contour on the left. This "more technical" definition might seem pointless: if you can't tell what anticlockwise is, then you probably can't tell which is the left. However, when we deal with more complicated structures in the future, it turns out it is easier to define what is "on the left" than "anticlockwise".
Simply connected domain
Definition (Simply connected domain). A domain $D$ (an open subset of $\mathbb{C}$) is simply connected if it is connected and every closed curve in $D$ encloses only points which are also in $D$.
In other words, it does not have holes. For example, this is not simply-
connected:
These "holes" need not be big holes like this, but just individual points at which a function under consideration is singular.
2.2 Cauchy’s theorem
We now come to the highlight of the course: Cauchy's theorem. Most of the things we do will be based upon this single important result.
Theorem (Cauchy's theorem). If $f(z)$ is analytic in a simply-connected domain $D$, then for every simple closed contour $\gamma$ in $D$, we have
\[ \oint_\gamma f(z)\,dz = 0. \]
This is quite a powerful statement, and will allow us to do a lot! On the other
hand, this tells us functions that are analytic everywhere are not too interesting.
Instead, we will later look at functions like $\frac{1}{z}$ that have singularities.
Proof.
(non-examinable) The proof of this remarkable theorem is simple (with a
catch), and follows from the Cauchy-Riemann equations and Green’s theorem.
Recall that Green's theorem says
\[ \oint_{\partial S} (P\,dx + Q\,dy) = \iint_S \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dx\,dy. \]
Let $u, v$ be the real and imaginary parts of $f$. Then
\[ \begin{aligned}
\oint_\gamma f(z)\,dz &= \oint_\gamma (u + iv)(dx + i\,dy) \\
&= \oint_\gamma (u\,dx - v\,dy) + i \oint_\gamma (v\,dx + u\,dy) \\
&= \iint_S \left( -\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right) dx\,dy + i \iint_S \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) dx\,dy.
\end{aligned} \]
But both integrands vanish by the Cauchy-Riemann equations, since $f$ is differentiable throughout $S$. So the result follows.
Actually, this proof requires $u$ and $v$ to have continuous partial derivatives in $S$; otherwise Green's theorem does not apply. We shall see later that in fact $f$ is differentiable infinitely many times, so $u$ and $v$ do indeed have continuous partial derivatives. However, our proof of that will utilize Cauchy's theorem! So we are trapped.
Thus a completely different proof (and a very elegant one!) is required if we
do not wish to make assumptions about
u
and
v
. However, we shall not worry
about this in this course since it is easy to verify that the functions we use do
have continuous partial derivatives. And we are not doing Complex Analysis.
2.3 Contour deformation
One useful consequence of Cauchy’s theorem is that we can freely deform contours
along regions where f is defined without changing the value of the integral.
Proposition. Suppose that $\gamma_1$ and $\gamma_2$ are contours from $a$ to $b$, and that $f$ is analytic on the contours and between the contours. Then
\[ \int_{\gamma_1} f(z)\,dz = \int_{\gamma_2} f(z)\,dz. \]
Proof. Suppose first that $\gamma_1$ and $\gamma_2$ do not cross. Then $\gamma_1 - \gamma_2$ is a simple closed contour. So
\[ \oint_{\gamma_1 - \gamma_2} f(z)\,dz = 0 \]
by Cauchy's theorem. Then the result follows.
If $\gamma_1$ and $\gamma_2$ do cross, then dissect them at each crossing point, and apply the previous result to each section.
So we conclude that if $f$ has no singularities, then $\int_a^b f(z)\,dz$ does not depend on the chosen contour.
This result of path independence, and indeed Cauchy's theorem itself, becomes less surprising if we think of $\int f(z)\,dz$ as a path integral in $\mathbb{R}^2$, because
\[ f(z)\,dz = (u + iv)(dx + i\,dy) = (u + iv)\,dx + (-v + iu)\,dy \]
is an exact differential, since
\[ \frac{\partial}{\partial y}(u + iv) = \frac{\partial}{\partial x}(-v + iu) \]
from the Cauchy-Riemann equations.
The same idea of "moving the contour" applies to closed contours. Suppose that $\gamma_1$ is a closed contour that can be continuously deformed to another one, $\gamma_2$, inside it; and suppose $f$ has no singularities in the region between them. We can instead consider the single closed contour $\gamma$ obtained by joining $\gamma_1$ (traversed anticlockwise) to $\gamma_2$ (traversed clockwise) by a pair of nearby "cross-cuts". By Cauchy's theorem, we know $\oint_\gamma f(z)\,dz = 0$, since $f(z)$ is analytic throughout the region enclosed by $\gamma$. Now we let the distance between the two cross-cuts tend to zero: those contributions cancel and, in the limit, we have
\[ \oint_{\gamma_1 - \gamma_2} f(z)\,dz = 0. \]
Hence we know
\[ \oint_{\gamma_1} f(z)\,dz = \oint_{\gamma_2} f(z)\,dz. \]
2.4 Cauchy’s integral formula
Theorem (Cauchy's integral formula). Suppose that $f(z)$ is analytic in a domain $D$ and that $z_0 \in D$. Then
\[ f(z_0) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z - z_0}\,dz \]
for any simple closed contour $\gamma$ in $D$ encircling $z_0$ anticlockwise.
This result is going to be very important in a brief moment, for proving one
thing. Afterwards, it will be mostly useless.
Proof. (non-examinable) We let $\gamma_\varepsilon$ be a circle of radius $\varepsilon$ about $z_0$, within $\gamma$. Since $\frac{f(z)}{z - z_0}$ is analytic except when $z = z_0$, we know
\[ \oint_\gamma \frac{f(z)}{z - z_0}\,dz = \oint_{\gamma_\varepsilon} \frac{f(z)}{z - z_0}\,dz. \]
We now evaluate the right integral directly. Substituting $z = z_0 + \varepsilon e^{i\theta}$, we get
\[ \begin{aligned}
\oint_{\gamma_\varepsilon} \frac{f(z)}{z - z_0}\,dz &= \int_0^{2\pi} \frac{f(z_0 + \varepsilon e^{i\theta})}{\varepsilon e^{i\theta}} i\varepsilon e^{i\theta}\,d\theta \\
&= i \int_0^{2\pi} \left( f(z_0) + O(\varepsilon) \right) d\theta \\
&\to 2\pi i f(z_0)
\end{aligned} \]
as we take the limit $\varepsilon \to 0$. The result then follows.
So, if we know $f$ on $\gamma$, then we know it at all points within $\gamma$. While this seems magical, it is less surprising if we look at it in another way. We can write $f = u + iv$, where $u$ and $v$ are harmonic functions, i.e. they satisfy Laplace's equation. Then if we know the values of $u$ and $v$ on $\gamma$, then what we essentially have is Laplace's equation with Dirichlet boundary conditions! Then the fact that this tells us everything about $f$ within the boundary is just the statement that Laplace's equation with Dirichlet boundary conditions has a unique solution!
The difference between this and what we've got in IA Vector Calculus is that Cauchy's integral formula gives an explicit formula for the value of $f(z_0)$, while in IA Vector Calculus, we just know there is one solution, whatever that might be.
Note that this does not hold if $z_0$ does not lie on or inside $\gamma$, since Cauchy's theorem just gives
\[ \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{z - z_0}\,dz = 0. \]
Now, we can differentiate Cauchy's integral formula with respect to $z_0$, and obtain
\[ f'(z_0) = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{(z - z_0)^2}\,dz. \]
We have just taken the differentiation inside the integral sign. This is valid since (it's Complex Methods and we don't care) the integrand, both before and after, is a continuous function of both $z$ and $z_0$.
We see that the integrand is still differentiable. So we can differentiate it again, and obtain
\[ f^{(n)}(z_0) = \frac{n!}{2\pi i} \oint_\gamma \frac{f(z)}{(z - z_0)^{n+1}}\,dz. \]
Hence at any point $z_0$ where $f$ is analytic, all its derivatives exist, and we have just found a formula for them. So it is differentiable infinitely many times as advertised.
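This derivative formula can be illustrated numerically (a sketch, not part of the notes): discretise the contour integral over a circle about $z_0$, and note that for $f = \exp$ every derivative at $z_0$ equals $e^{z_0}$.

```python
import cmath, math

def cauchy_derivative(f, z0, n, radius=1.0, samples=4000):
    # f^(n)(z0) = n!/(2*pi*i) * (integral of f(z)/(z - z0)^(n+1) dz) over a circle about z0
    total = 0.0
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        z = z0 + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * (2 * math.pi / samples)
        total += f(z) / (z - z0) ** (n + 1) * dz
    return math.factorial(n) * total / (2j * math.pi)

z0 = 0.3 + 0.2j
for n in range(4):
    assert abs(cauchy_derivative(cmath.exp, z0, n) - cmath.exp(z0)) < 1e-8
```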
A classic example of Cauchy’s integral formula is Liouville’s theorem.
Theorem (Liouville’s theorem*). Any bounded entire function is a constant.
Proof. (non-examinable) Suppose that $|f(z)| \leq M$ for all $z$, and consider a circle of radius $r$ centered at an arbitrary point $z_0 \in \mathbb{C}$. Then
\[ f'(z_0) = \frac{1}{2\pi i} \oint_{|z - z_0| = r} \frac{f(z)}{(z - z_0)^2}\,dz. \]
Hence we know
\[ |f'(z_0)| \leq \frac{1}{2\pi} \cdot 2\pi r \cdot \frac{M}{r^2} = \frac{M}{r} \to 0 \]
as $r \to \infty$. So $f'(z_0) = 0$ for all $z_0 \in \mathbb{C}$. So $f$ is constant.
3 Laurent series and singularities
3.1 Taylor and Laurent series
If $f$ is analytic at $z_0$, then it has a Taylor series
\[ f(z) = \sum_{n=0}^\infty a_n (z - z_0)^n \]
in a neighbourhood of $z_0$. We will prove this as a special case of the coming proposition. Exactly which neighbourhood it applies in depends on the function. Of course, we know the coefficients are given by
\[ a_n = \frac{f^{(n)}(z_0)}{n!}, \]
but this doesn't matter. All the standard Taylor series from real analysis apply in $\mathbb{C}$ as well. For example,
\[ e^z = \sum_{n=0}^\infty \frac{z^n}{n!}, \]
and this converges for all $z$. Also, we have
\[ (1 - z)^{-1} = \sum_{n=0}^\infty z^n. \]
This converges for $|z| < 1$.
But if $f$ has a singularity at $z_0$, we cannot expect such a Taylor series, since it would imply $f$ is non-singular at $z_0$. However, it turns out we can get a series expansion if we allow ourselves to have negative powers of $z - z_0$.
Proposition (Laurent series). If $f$ is analytic in an annulus $R_1 < |z - z_0| < R_2$, then it has a Laurent series
\[ f(z) = \sum_{n=-\infty}^\infty a_n (z - z_0)^n. \]
This is convergent within the annulus. Moreover, the convergence is uniform within compact subsets of the annulus.
Proof. (non-examinable) We wlog $z_0 = 0$. Given a $z$ in the annulus, we pick $r_1, r_2$ such that
\[ R_1 < r_1 < |z| < r_2 < R_2, \]
and we let $\gamma_1$ and $\gamma_2$ be the contours $|z| = r_1$, $|z| = r_2$ traversed anticlockwise respectively. We choose $\gamma$ to be the closed contour that traverses $\gamma_2$ anticlockwise and $\gamma_1$ clockwise, joined by two cross-cuts, so that $z$ lies inside $\gamma$.
We now apply Cauchy's integral formula (after a change of notation):
\[ f(z) = \frac{1}{2\pi i} \oint_\gamma \frac{f(\zeta)}{\zeta - z}\,d\zeta. \]
We let the distance between the cross-cuts tend to zero. Then we get
\[ f(z) = \frac{1}{2\pi i} \oint_{\gamma_2} \frac{f(\zeta)}{\zeta - z}\,d\zeta - \frac{1}{2\pi i} \oint_{\gamma_1} \frac{f(\zeta)}{\zeta - z}\,d\zeta. \]
We have to subtract the second integral because it is traversed in the opposite direction. We do the integrals one by one. We have
\[ \oint_{\gamma_1} \frac{f(\zeta)}{\zeta - z}\,d\zeta = -\frac{1}{z} \oint_{\gamma_1} \frac{f(\zeta)}{1 - \frac{\zeta}{z}}\,d\zeta. \]
Taking the Taylor series of $\frac{1}{1 - \frac{\zeta}{z}}$, which is valid since $|\zeta| = r_1 < |z|$ on $\gamma_1$, we obtain
\[ \begin{aligned}
&= -\frac{1}{z} \oint_{\gamma_1} f(\zeta) \sum_{m=0}^\infty \left( \frac{\zeta}{z} \right)^m d\zeta \\
&= -\sum_{m=0}^\infty z^{-m-1} \oint_{\gamma_1} f(\zeta) \zeta^m\,d\zeta.
\end{aligned} \]
This is valid since $\oint \sum = \sum \oint$ by uniform convergence. So we can deduce
\[ -\frac{1}{2\pi i} \oint_{\gamma_1} \frac{f(\zeta)}{\zeta - z}\,d\zeta = \sum_{n=-\infty}^{-1} a_n z^n, \]
where
\[ a_n = \frac{1}{2\pi i} \oint_{\gamma_1} f(\zeta) \zeta^{-n-1}\,d\zeta \]
for $n < 0$.
Similarly, we obtain
\[ \frac{1}{2\pi i} \oint_{\gamma_2} \frac{f(\zeta)}{\zeta - z}\,d\zeta = \sum_{n=0}^\infty a_n z^n \]
for the same definition of $a_n$, except $n \geq 0$, by expanding
\[ \frac{1}{\zeta - z} = \frac{1}{\zeta} \cdot \frac{1}{1 - \frac{z}{\zeta}} = \sum_{n=0}^\infty \frac{z^n}{\zeta^{n+1}}. \]
This is again valid since $|\zeta| = r_2 > |z|$ on $\gamma_2$. Putting these results together, we obtain the Laurent series. The general result then follows by translating the origin by $z_0$.
We will not prove uniform convergence; for that, go to IB Complex Analysis.
It can be shown that Laurent series are unique. Then not only is there a unique Laurent series for each annulus, but if we pick two different annuli on which $f$ is analytic (assuming they overlap), they must have the same coefficients. This is since we can restrict the two series to their common intersection, and then uniqueness requires the coefficients to be the same.
Note that the Taylor series is just a special case of Laurent series, and if $f$ is holomorphic at $z_0$, then our explicit formula for the coefficients plus Cauchy's theorem tells us we have $a_n = 0$ for $n < 0$.
Note, however, that we needed the Taylor series of $\frac{1}{1 - z}$ in order to prove Taylor's theorem.
Example. Consider $\frac{e^z}{z^3}$. What is its Laurent series about $z_0 = 0$? We already have a Taylor series for $e^z$, and all we need to do is to divide it by $z^3$. So
\[ \frac{e^z}{z^3} = \sum_{n=0}^\infty \frac{z^{n-3}}{n!} = \sum_{n=-3}^\infty \frac{z^n}{(n + 3)!}. \]
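These coefficients can also be recovered numerically from the coefficient formula $a_n = \frac{1}{2\pi i} \oint f(\zeta) \zeta^{-n-1}\,d\zeta$ in the proof above, discretising the circle integral (a sketch, not part of the notes):

```python
import cmath, math

def laurent_coeff(f, n, z0=0.0, radius=1.0, samples=4096):
    # a_n = 1/(2*pi*i) * (integral of f(zeta) (zeta - z0)^(-n-1) dzeta) over a circle in the annulus
    total = 0.0
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        zeta = z0 + radius * cmath.exp(1j * theta)
        dzeta = 1j * radius * cmath.exp(1j * theta) * (2 * math.pi / samples)
        total += f(zeta) * (zeta - z0) ** (-n - 1) * dzeta
    return total / (2j * math.pi)

f = lambda z: cmath.exp(z) / z**3
# Expected coefficients a_n = 1/(n+3)! for n >= -3
for n, expected in [(-3, 1.0), (-2, 1.0), (-1, 0.5), (0, 1 / 6), (1, 1 / 24)]:
    assert abs(laurent_coeff(f, n) - expected) < 1e-10
```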
Example. What is the Laurent series for $e^{1/z}$ about $z_0 = 0$? Again, we just use the Taylor series for $e^z$. We have
\[ e^{1/z} = 1 + \frac{1}{z} + \frac{1}{2!} \frac{1}{z^2} + \frac{1}{3!} \frac{1}{z^3} + \cdots. \]
So the coefficients are $a_n = \frac{1}{(-n)!}$ for $n \leq 0$.
Example. This is a little bit more tricky. Consider
\[ f(z) = \frac{1}{z - a}, \]
where $a \in \mathbb{C}$. Then $f$ is analytic in $|z| < |a|$. So it has a Taylor series about $z_0 = 0$ given by
\[ \frac{1}{z - a} = -\frac{1}{a} \left( 1 - \frac{z}{a} \right)^{-1} = -\sum_{n=0}^\infty \frac{1}{a^{n+1}} z^n. \]
What about in $|z| > |a|$? This is an annulus, that goes from $|a|$ to infinity. So it has a Laurent series. We can find it by
\[ \frac{1}{z - a} = \frac{1}{z} \left( 1 - \frac{a}{z} \right)^{-1} = \sum_{m=0}^\infty \frac{a^m}{z^{m+1}} = \sum_{n=-\infty}^{-1} a^{-n-1} z^n. \]
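Both expansions can be checked numerically by truncating the series at a point inside and a point outside the circle $|z| = |a|$ (a sketch, with an arbitrarily chosen $a$):

```python
import cmath

a = 0.5 + 0.5j

def taylor_inside(z, terms=60):
    # 1/(z - a) = -sum_{n>=0} z^n / a^(n+1), valid for |z| < |a|
    return -sum(z**n / a**(n + 1) for n in range(terms))

def laurent_outside(z, terms=60):
    # 1/(z - a) = sum_{m>=0} a^m / z^(m+1), valid for |z| > |a|
    return sum(a**m / z**(m + 1) for m in range(terms))

z_in = 0.3 + 0.1j     # |z_in| < |a|
z_out = 2.0 - 1.0j    # |z_out| > |a|
assert abs(taylor_inside(z_in) - 1 / (z_in - a)) < 1e-10
assert abs(laurent_outside(z_out) - 1 / (z_out - a)) < 1e-10
```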
Example. Now consider
\[ f(z) = \frac{e^z}{z^2 - 1}. \]
This has a singularity at $z_0 = 1$ but is analytic in an annulus $0 < |z - z_0| < 2$ (the 2 comes from the other singularity at $z = -1$). How do we find its Laurent series? This is a standard trick that turns out to be useful: we write everything in terms of $\zeta = z - z_0$. So
\[ \begin{aligned}
f(z) &= \frac{e^\zeta e^{z_0}}{\zeta(\zeta + 2)} = \frac{e^{z_0}}{2\zeta} e^\zeta \left( 1 + \frac{1}{2}\zeta \right)^{-1} \\
&= \frac{e}{2\zeta} \left( 1 + \zeta + \frac{1}{2!}\zeta^2 + \cdots \right) \left( 1 - \frac{1}{2}\zeta + \cdots \right) \\
&= \frac{e}{2\zeta} \left( 1 + \frac{1}{2}\zeta + \cdots \right) \\
&= \frac{e}{2} \left( \frac{1}{z - z_0} + \frac{1}{2} + \cdots \right).
\end{aligned} \]
This is now a Laurent series, with $a_{-1} = \frac{1}{2}e$, $a_0 = \frac{1}{4}e$ etc.
Example. This doesn't seem to work for $f(z) = z^{1/2}$. The reason is that the required branch cut of $z^{1/2}$ would pass through any annulus about $z = 0$. So we cannot find an annulus on which $f$ is analytic.
The radius of convergence of a Laurent series is at least the size of the annulus, as that's how we have constructed it. So how large is it?
It turns out the radius of convergence of a Laurent series is always the distance from $z_0$ to the closest singularity of $f(z)$, for we may always choose $R_2$ to be that distance, and obtain a Laurent series valid all the way to $R_2$ (not inclusive). This Laurent series must be the same series as we would have obtained with a smaller radius, because Laurent series are unique.
While this sounds just like some technicalities, this is actually quite useful.
When deriving a Laurent series, we can make any assumptions we like about
z z
0
being small. Then even if we have derived a Laurent series for a really
small neighbourhood, we automatically know the series is valid up to the other
point where f is singular.
3.2 Zeros
Recall that for a polynomial $p(z)$, we can talk about the order of its zero at $z = a$ by looking at the largest power of $(z - a)$ dividing $p$. A priori, it is not clear how we can do this for general functions. However, given that everything is a Taylor series, we know how to do this for holomorphic functions.
Definition (Zeros). The zeros of an analytic function $f(z)$ are the points $z_0$ where $f(z_0) = 0$. A zero is of order $N$ if in its Taylor series $\sum_{n=0}^\infty a_n (z - z_0)^n$, the first non-zero coefficient is $a_N$.
Alternatively, it is of order $N$ if $0 = f(z_0) = f'(z_0) = \cdots = f^{(N-1)}(z_0)$, but $f^{(N)}(z_0) \neq 0$.
Definition (Simple zero). A zero of order one is called a simple zero.
Example. $z^3 + iz^2 + z + i = (z - i)(z + i)^2$ has a simple zero at $z = i$ and a zero of order 2 at $z = -i$.
Example. $\sinh z$ has zeros where $\frac{1}{2}(e^z - e^{-z}) = 0$, i.e. $e^{2z} = 1$, i.e. $z = n\pi i$, where $n \in \mathbb{Z}$. The zeros are all simple, since $\cosh n\pi i = \cos n\pi \neq 0$.
Example. Since $\sinh z$ has a simple zero at $z = \pi i$, we know $\sinh^3 z$ has a zero of order 3 there. This is since the first term of the Taylor series of $\sinh z$ about $z = \pi i$ has order 1, and hence the first term of the Taylor series of $\sinh^3 z$ has order 3.
We can also find the Taylor series about $\pi i$ by writing $\zeta = z - \pi i$:
\[ \begin{aligned}
\sinh^3 z &= [\sinh(\zeta + \pi i)]^3 = [-\sinh \zeta]^3 \\
&= -\left( \zeta + \frac{1}{3!}\zeta^3 + \cdots \right)^3 \\
&= -\zeta^3 - \frac{1}{2}\zeta^5 - \cdots \\
&= -(z - \pi i)^3 - \frac{1}{2}(z - \pi i)^5 - \cdots.
\end{aligned} \]
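The leading behaviour can be sanity-checked numerically (a sketch, not part of the notes): since $\sinh(\pi i + \zeta) = -\sinh \zeta$, the ratio $\sinh^3 z / (z - \pi i)^3$ should tend to $-1$ as $z \to \pi i$, with an error of order $|z - \pi i|^2$.

```python
import cmath, math

z0 = math.pi * 1j   # sinh has a simple zero here, so sinh^3 has a zero of order 3

for h in [1e-2, 1e-3]:
    zeta = h * cmath.exp(0.7j)                       # approach z0 along an arbitrary direction
    ratio = cmath.sinh(z0 + zeta) ** 3 / zeta**3     # should be -1 + O(h^2)
    assert abs(ratio - (-1)) < h**2
```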
3.3 Classification of singularities
The previous section was rather boring — you’ve probably seen all of that before.
It is just there as a buildup for our study of singularities. These are in some
sense the “opposites” of zeros.
Definition (Isolated singularity). Suppose that $f$ has a singularity at $z = z_0$. If there is a neighbourhood of $z_0$ within which $f$ is analytic, except at $z_0$ itself, then $f$ has an isolated singularity at $z_0$. If there is no such neighbourhood, then $f$ has an essential (non-isolated) singularity at $z_0$.
Example. $\operatorname{cosech} z$ has isolated singularities at $z = n\pi i$, $n \in \mathbb{Z}$, since $\sinh$ has zeros at these points.
Example. $\operatorname{cosech} \frac{1}{z}$ has isolated singularities at $z = \frac{1}{n\pi i}$, with $n \neq 0$, and an essential non-isolated singularity at $z = 0$ (since there are other arbitrarily close singularities).
Example. $\operatorname{cosech} z$ also has an essential non-isolated singularity at $z = \infty$, since $\operatorname{cosech} \frac{1}{z}$ has an essential non-isolated singularity at $z = 0$.
Example. log z
has a non-isolated singularity at
z
= 0, because it is not analytic
at any point on the branch cut. This is normally referred to as a branch point
singularity.
If $f$ has an isolated singularity at $z_0$, we can find an annulus $0 < |z - z_0| < r$ within which $f$ is analytic, and it therefore has a Laurent series. This gives us a way to classify singularities:
(i) Check for a branch point singularity.
(ii) Check for an essential (non-isolated) singularity.
(iii) Otherwise, consider the coefficients of the Laurent series $\sum_{n=-\infty}^\infty a_n (z - z_0)^n$:
(a) If $a_n = 0$ for all $n < 0$, then $f$ has a removable singularity at $z_0$.
(b) If there is a $N > 0$ such that $a_n = 0$ for all $n < -N$ but $a_{-N} \neq 0$, then $f$ has a pole of order $N$ at $z_0$ (for $N = 1, 2, \cdots$, this is also called a simple pole, double pole etc.).
(c) If there does not exist such an $N$, then $f$ has an essential isolated singularity.
A removable singularity (one with Laurent series $a_0 + a_1(z - z_0) + \cdots$) is so called because we can remove the singularity by redefining $f(z_0) = a_0 = \lim_{z \to z_0} f(z)$; then $f$ will become analytic at $z_0$.
Let’s look at some examples. In fact, we have 10 examples here.
Example.
(i) $\frac{1}{z - i}$ has a simple pole at $z = i$. This is since its Laurent series is, err, $\frac{1}{z - i}$.
(ii) $\frac{\cos z}{z}$ has a singularity at the origin. This has Laurent series
\[ \frac{\cos z}{z} = z^{-1} - \frac{1}{2}z + \frac{1}{24}z^3 - \cdots, \]
and hence it has a simple pole.
(iii) Consider $\frac{z^2}{(z - 1)^3 (z - i)^2}$. This has a double pole at $z = i$ and a triple pole at $z = 1$. To show formally that, for instance, there is a double pole at $z = i$, notice first that $\frac{z^2}{(z - 1)^3}$ is analytic at $z = i$. So it has a Taylor series, say,
\[ b_0 + b_1(z - i) + b_2(z - i)^2 + \cdots \]
for some $b_n$. Moreover, since $\frac{z^2}{(z - 1)^3}$ is non-zero at $z = i$, we have $b_0 \neq 0$. Hence
\[ \frac{z^2}{(z - 1)^3 (z - i)^2} = \frac{b_0}{(z - i)^2} + \frac{b_1}{z - i} + b_2 + \cdots. \]
So this has a double pole at $z = i$.
(iv) If $g(z)$ has a zero of order $N$ at $z = z_0$, then $\frac{1}{g(z)}$ has a pole of order $N$ there, and vice versa. Hence $\cot z$ has a simple pole at the origin, because $\tan z$ has a simple zero there. To prove the general statement, write
\[ g(z) = (z - z_0)^N G(z) \]
for some $G$ with $G(z_0) \neq 0$. Then $\frac{1}{G(z)}$ has a Taylor series about $z_0$, and then the result follows.
(v) $z^2$ has a double pole at infinity, since $\frac{1}{\zeta^2}$ has a double pole at $\zeta = 0$.
(vi) $e^{1/z}$ has an essential isolated singularity at $z = 0$ because all the $a_n$'s are non-zero for $n \leq 0$.
(vii) $\sin \frac{1}{z}$ also has an essential isolated singularity at $z = 0$ because (using the standard Taylor series for $\sin$) there are non-zero $a_n$'s for infinitely many negative $n$.
(viii) $f(z) = \frac{e^z - 1}{z}$ has a removable singularity at $z = 0$, because its Laurent series is
\[ f(z) = 1 + \frac{1}{2!}z + \frac{1}{3!}z^2 + \cdots. \]
By defining $f(0) = 1$, we would remove the singularity and obtain an entire function.
(ix) $f(z) = \frac{\sin z}{z}$ is not defined at $z = 0$, but has a removable singularity there; remove it by setting $f(0) = 1$.
(x) A rational function $f(z) = \frac{P(z)}{Q(z)}$ (where $P, Q$ are polynomials) has a singularity at any point $z_0$ where $Q$ has a zero. Assuming $Q$ has a simple zero, if $P(z_0) = 0$ as well, then the singularity is removable by redefining
\[ f(z_0) = \frac{P'(z_0)}{Q'(z_0)} \]
(by L'Hôpital's rule).
Near an essential isolated singularity of a function $f(z)$, it can be shown that $f$ takes all possible complex values (except at most one) in any neighbourhood, however small. For example, $e^{1/z}$ takes all values except zero. We will not prove this. Even in IB Complex Analysis.
3.4 Residues
So far, we've mostly been making lots of definitions. We haven't actually used them to do anything useful. We are almost there. It turns out we can easily evaluate integrals of analytic functions by looking at their Laurent series. Moreover, we don't need the whole Laurent series. We just need one of the coefficients.
Definition (Residue). The residue of a function $f$ at an isolated singularity $z_0$ is the coefficient $a_{-1}$ in its Laurent expansion about $z_0$. There is no standard notation, but we shall denote the residue by $\operatorname*{res}_{z = z_0} f(z)$.
Proposition. At a simple pole, the residue is given by
\[ \operatorname*{res}_{z = z_0} f(z) = \lim_{z \to z_0} (z - z_0) f(z). \]
Proof. We can simply expand the right hand side to obtain
\[ \begin{aligned}
\lim_{z \to z_0} (z - z_0) \left( \frac{a_{-1}}{z - z_0} + a_0 + a_1(z - z_0) + \cdots \right) &= \lim_{z \to z_0} \left( a_{-1} + a_0(z - z_0) + \cdots \right) \\
&= a_{-1},
\end{aligned} \]
as required.
How about for more complicated poles? More generally, at a pole of order
N, the formula is a bit messier.
Proposition. At a pole of order $N$, the residue is given by
\[ \operatorname*{res}_{z = z_0} f(z) = \lim_{z \to z_0} \frac{1}{(N - 1)!} \frac{d^{N-1}}{dz^{N-1}} \left[ (z - z_0)^N f(z) \right]. \]
This can be proved in a similar manner (see example sheet 2).
In practice, a variety of techniques can be used to evaluate residues; no single technique is optimal for all situations.
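The order-$N$ formula is easy to apply with a computer algebra system; here is a sympy sketch (not part of the notes), applied to $\frac{e^z}{z^3}$ (pole of order 3 at 0) and $\frac{e^z}{z^2 - 1}$ (simple pole at 1):

```python
from sympy import symbols, exp, diff, limit, factorial, Rational

z = symbols('z')

def residue_at_pole(f, z0, N):
    # res = lim_{z -> z0} 1/(N-1)! * d^(N-1)/dz^(N-1) [ (z - z0)^N * f ]
    return limit(diff((z - z0) ** N * f, z, N - 1) / factorial(N - 1), z, z0)

assert residue_at_pole(exp(z) / z**3, 0, 3) == Rational(1, 2)
assert residue_at_pole(exp(z) / (z**2 - 1), 1, 1) == exp(1) / 2
```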
Example. Consider $f(z) = \frac{e^z}{z^3}$. We can find the residue by directly computing the Laurent series about $z = 0$:
\[ \frac{e^z}{z^3} = z^{-3} + z^{-2} + \frac{1}{2}z^{-1} + \frac{1}{3!} + \cdots. \]
Hence the residue is $\frac{1}{2}$.
Alternatively, we can use the fact that $f$ has a pole of order 3 at $z = 0$. So we can use the formula to obtain
\[ \operatorname*{res}_{z=0} f(z) = \lim_{z \to 0} \frac{1}{2!} \frac{d^2}{dz^2} (z^3 f(z)) = \lim_{z \to 0} \frac{1}{2} \frac{d^2}{dz^2} e^z = \frac{1}{2}. \]
Example. Consider
\[ g(z) = \frac{e^z}{z^2 - 1}. \]
This has a simple pole at $z = 1$. Recall we have found its Laurent series at $z = 1$ to be
\[ \frac{e^z}{z^2 - 1} = \frac{e}{2} \left( \frac{1}{z - 1} + \frac{1}{2} + \cdots \right). \]
So the residue is $\frac{e}{2}$.
Alternatively, we can use our magic formula to obtain
\[ \operatorname*{res}_{z=1} g(z) = \lim_{z \to 1} \frac{(z - 1) e^z}{z^2 - 1} = \lim_{z \to 1} \frac{e^z}{z + 1} = \frac{e}{2}. \]
Example. Consider $h(z) = (z^8 - w^8)^{-1}$, for any complex constant $w$. We know this has 8 simple poles at $z = w e^{n\pi i/4}$ for $n = 0, \cdots, 7$. What is the residue at $z = w$?
We can try to compute this directly by
\[ \begin{aligned}
\operatorname*{res}_{z=w} h(z) &= \lim_{z \to w} \frac{z - w}{(z - w)(z - w e^{\pi i/4}) \cdots (z - w e^{7\pi i/4})} \\
&= \frac{1}{(w - w e^{\pi i/4}) \cdots (w - w e^{7\pi i/4})} \\
&= \frac{1}{w^7} \cdot \frac{1}{(1 - e^{\pi i/4}) \cdots (1 - e^{7\pi i/4})}.
\end{aligned} \]
Now we are quite stuck. We don't know what to do with this. We can think really hard about complex numbers and figure out what it should be, but this is difficult. What we should do is to apply L'Hôpital's rule and obtain
\[ \operatorname*{res}_{z=w} h(z) = \lim_{z \to w} \frac{z - w}{z^8 - w^8} = \lim_{z \to w} \frac{1}{8z^7} = \frac{1}{8w^7}. \]
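The L'Hôpital value can be sanity-checked numerically by evaluating $\frac{z - w}{z^8 - w^8}$ close to $w$, for an arbitrarily chosen $w$ (a sketch, not part of the notes):

```python
import cmath

w = 1.1 - 0.4j   # arbitrary non-zero choice of w for the check

# L'Hopital: res = lim_{z -> w} (z - w)/(z^8 - w^8) = 1/(8 w^7)
for h in [1e-5, 1e-6]:
    z = w + h * cmath.exp(0.3j)       # approach w along an arbitrary direction
    approx = (z - w) / (z**8 - w**8)
    assert abs(approx - 1 / (8 * w**7)) < 1e-3
```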
Example. Consider the function $(\sinh \pi z)^{-1}$. This has a simple pole at $z = ni$ for all integers $n$ (because the zeros of $\sinh z$ are at $n\pi i$ and are simple). Again, we can compute this by finding the Laurent expansion. However, it turns out it is easier to use our magic formula together with L'Hôpital's rule. We have
\[ \lim_{z \to ni} \frac{z - ni}{\sinh \pi z} = \lim_{z \to ni} \frac{1}{\pi \cosh \pi z} = \frac{1}{\pi \cosh n\pi i} = \frac{1}{\pi \cos n\pi} = \frac{(-1)^n}{\pi}. \]
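Numerically (a sketch, not part of the notes), the limit $(z - ni)/\sinh \pi z$ indeed approaches $(-1)^n/\pi$ for several values of $n$:

```python
import cmath, math

# res_{z = ni} 1/sinh(pi z) = 1/(pi cosh(pi n i)) = (-1)^n / pi
for n in range(-2, 3):
    z0 = n * 1j
    z = z0 + 1e-6 * cmath.exp(0.4j)    # a point close to the pole
    approx = (z - z0) / cmath.sinh(math.pi * z)
    assert abs(approx - (-1) ** n / math.pi) < 1e-4
```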
Example. Consider the function $(\sinh^3 z)^{-1}$. This time, we find the residue by looking at the Laurent series. We first look at $\sinh^3 z$. This has a zero of order 3 at $z = \pi i$. Its Taylor series is
\[ \sinh^3 z = -(z - \pi i)^3 - \frac{1}{2}(z - \pi i)^5 - \cdots. \]
Therefore
\[ \begin{aligned}
\frac{1}{\sinh^3 z} &= -(z - \pi i)^{-3} \left( 1 + \frac{1}{2}(z - \pi i)^2 + \cdots \right)^{-1} \\
&= -(z - \pi i)^{-3} \left( 1 - \frac{1}{2}(z - \pi i)^2 + \cdots \right) \\
&= -(z - \pi i)^{-3} + \frac{1}{2}(z - \pi i)^{-1} + \cdots.
\end{aligned} \]
Therefore the residue is $\frac{1}{2}$.
So we can compute residues. But this seems a bit odd: why are we interested in the coefficient $a_{-1}$? Out of the doubly infinite set of coefficients, why $a_{-1}$?
The purpose of this definition is to aid in evaluating integrals $\oint f(z)\,dz$, where $f$ is a function analytic within the anticlockwise simple closed contour $\gamma$, except for an isolated singularity $z_0$. We let $\gamma_r$ be a circle of radius $r$ centered on $z_0$, lying within $\gamma$.
Now $f$ has some Laurent series expansion $\sum_{n=-\infty}^\infty a_n (z - z_0)^n$ about $z_0$. Recall that we are allowed to deform contours along regions where $f$ is analytic. So
\[ \oint_\gamma f(z)\,dz = \oint_{\gamma_r} f(z)\,dz = \oint_{\gamma_r} \sum_{n=-\infty}^\infty a_n (z - z_0)^n\,dz. \]
By uniform convergence, we can swap the integral and sum to obtain
\[ = \sum_{n=-\infty}^\infty a_n \oint_{\gamma_r} (z - z_0)^n\,dz. \]
We know how to integrate around the circle:
\[ \begin{aligned}
\oint_{\gamma_r} (z - z_0)^n\,dz &= \int_0^{2\pi} r^n e^{in\theta}\, ire^{i\theta}\,d\theta \\
&= ir^{n+1} \int_0^{2\pi} e^{i(n+1)\theta}\,d\theta \\
&= \begin{cases} 2\pi i & n = -1 \\ \left[ \frac{r^{n+1}}{n+1} e^{i(n+1)\theta} \right]_0^{2\pi} = 0 & n \neq -1 \end{cases}.
\end{aligned} \]
Hence we have
\[ \oint_\gamma f(z)\,dz = 2\pi i a_{-1} = 2\pi i \operatorname*{res}_{z=z_0} f(z). \]
Theorem. Let $\gamma$ be an anticlockwise simple closed contour, and let $f$ be analytic within $\gamma$ except for an isolated singularity $z_0$. Then
\[ \oint_\gamma f(z)\,dz = 2\pi i a_{-1} = 2\pi i \operatorname*{res}_{z=z_0} f(z). \]
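This theorem can be illustrated numerically: discretise $\oint$ over a small circle about the singularity and compare with the residues computed earlier (a sketch, not part of the notes).

```python
import cmath, math

def residue(f, z0, radius=0.5, samples=4096):
    # res_{z = z0} f = (1/(2*pi*i)) * (integral of f(z) dz) over a small circle about z0
    total = 0.0
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        z = z0 + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * (2 * math.pi / samples)
        total += f(z) * dz
    return total / (2j * math.pi)

# e^z / z^3: pole of order 3 at 0, residue 1/2
assert abs(residue(lambda z: cmath.exp(z) / z**3, 0.0) - 0.5) < 1e-10
# cos(z) / z: simple pole at 0, residue 1
assert abs(residue(lambda z: cmath.cos(z) / z, 0.0) - 1.0) < 1e-10
```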
4 The calculus of residues
Nowadays, we use “calculus” to mean differentiation and integration. However,
historically, the word “calculus” just means doing calculations. The word “calcu-
lus” in the calculus of residues does not refer to differentiation and integration,
even though in this case they are related, but this is just a coincidence.
4.1 The residue theorem
We are now going to massively generalize the last result we had in the previous chapter. We are going to consider a function $f$ with many singularities, and obtain an analogous formula.
Theorem (Residue theorem). Suppose $f$ is analytic in a simply-connected region except at a finite number of isolated singularities $z_1, \cdots, z_n$, and that a simple closed contour $\gamma$ encircles the singularities anticlockwise. Then
\[ \oint_\gamma f(z)\,dz = 2\pi i \sum_{k=1}^n \operatorname*{res}_{z=z_k} f(z). \]
Note that we have already proved the case
n
= 1 in the previous section. To
prove this, we just need a difficult drawing.
Proof. Consider the curve $\hat{\gamma}$, consisting of small clockwise circles $\gamma_1, \cdots, \gamma_n$ around each singularity; cross cuts, which cancel in the limit as they approach each other, in pairs; and the large outer curve (which is the same as $\gamma$ in the limit).
Note that $\hat{\gamma}$ encircles no singularities. So $\oint_{\hat{\gamma}} f(z)\,dz = 0$ by Cauchy's theorem. So in the limit when the cross cuts cancel, we have
\[ \oint_\gamma f(z)\,dz + \sum_{k=1}^n \oint_{\gamma_k} f(z)\,dz = \oint_{\hat{\gamma}} f(z)\,dz = 0. \]
But from what we did in the previous section, we know
\[ \oint_{\gamma_k} f(z)\,dz = -2\pi i \operatorname*{res}_{z=z_k} f(z), \]
since $\gamma_k$ encircles only one singularity, and we get a negative sign since $\gamma_k$ is a clockwise contour. Then the result follows.
This is the key practical result of this course. We are going to be using this
very extensively to compute integrals.
4.2 Applications of the residue theorem
We now use the residue theorem to evaluate lots of real integrals.
Example. We shall evaluate
\[ I = \int_0^\infty \frac{dx}{1 + x^2}, \]
which we can already do by trigonometric substitution. While it is silly to integrate this with the residue theorem here, since integrating directly is much easier, this technique is a much more general method, and can be used to integrate many other things. On the other hand, our standard tricks easily become useless when we change the integrand a bit, and we need to find a completely different method.
Consider
\[ \oint_\gamma \frac{dz}{1 + z^2}, \]
where $\gamma$ is the contour: from $-R$ to $R$ along the real axis ($\gamma_0$), then returning to $-R$ via a semicircle of radius $R$ in the upper half plane ($\gamma_R$). This is known as "closing in the upper half-plane".
Now we have
\[ \frac{1}{1 + z^2} = \frac{1}{(z + i)(z - i)}. \]
So the only singularity enclosed by $\gamma$ is a simple pole at $z = i$, where the residue is
\[ \lim_{z \to i} \frac{1}{z + i} = \frac{1}{2i}. \]
Hence
\[ \int_{\gamma_0} \frac{dz}{1 + z^2} + \int_{\gamma_R} \frac{dz}{1 + z^2} = \oint_\gamma \frac{dz}{1 + z^2} = 2\pi i \cdot \frac{1}{2i} = \pi. \]
Let's now look at the terms individually. We know
\[ \int_{\gamma_0} \frac{dz}{1 + z^2} = \int_{-R}^R \frac{dx}{1 + x^2} \to 2I \]
as $R \to \infty$. Also,
\[ \int_{\gamma_R} \frac{dz}{1 + z^2} \to 0 \]
as $R \to \infty$ (see below). So we obtain in the limit
\[ 2I + 0 = \pi. \]
So
\[ I = \frac{\pi}{2}. \]
Finally, we need to show that the integral about $\gamma_R$ vanishes as $R \to \infty$. This is usually a bit tricky. We can use a formal or informal argument. We first do it formally: by the triangle inequality, we know
\[ |1 + z^2| \geq |1 - |z|^2|. \]
On $\gamma_R$, we know $|z| = R$. So for large $R$, we get
\[ |1 + z^2| \geq |1 - R^2| = R^2 - 1. \]
Hence
\[ \frac{1}{|1 + z^2|} \leq \frac{1}{R^2 - 1}. \]
Thus we can bound the integral by
\[ \left| \int_{\gamma_R} \frac{dz}{1 + z^2} \right| \leq \pi R \cdot \frac{1}{R^2 - 1} \to 0 \]
as $R \to \infty$.
We can also do this informally, by writing
Z
γ
R
dz
1 + z
2
πR sup
zγ
R
1
1 + z
2
= πR · O(R
2
) = O(R
1
) 0.
This example is not in itself impressive, but the method adapts easily to more difficult integrals. For example, the same argument would allow us to integrate $\frac{1}{1 + x^8}$ with ease.

Note that we could have "closed in the lower half-plane" instead.

[Figure: the contour from $-R$ to $R$ closed by a semicircle in the lower half-plane, enclosing the pole at $-i$ and traversed clockwise]

Most of the argument would be unchanged; the residue would now be
$$\operatorname*{res}_{z = -i} \frac{1}{1 + z^2} = -\frac{1}{2i},$$
but the contour is now traversed clockwise. So we collect another minus sign, and obtain the same result.
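As a quick numerical sanity check (my addition, not part of the lectures; this assumes NumPy and SciPy are available), we can compare the residue-theorem answer with direct quadrature. The closed form for $\int_0^\infty \mathrm{d}x/(1+x^8)$ used below is the standard result $\int_0^\infty \mathrm{d}x/(1+x^n) = \frac{\pi}{n}\csc\frac{\pi}{n}$, which the same contour argument yields.

```python
import numpy as np
from scipy.integrate import quad

# I = integral of 1/(1+x^2) over (0, infinity), by numerical quadrature
I, _ = quad(lambda x: 1.0 / (1.0 + x**2), 0, np.inf)

# the same closing-the-contour argument gives (pi/8)/sin(pi/8) for 1/(1+x^8)
I8, _ = quad(lambda x: 1.0 / (1.0 + x**8), 0, np.inf)

assert abs(I - np.pi / 2) < 1e-8
assert abs(I8 - (np.pi / 8) / np.sin(np.pi / 8)) < 1e-8
```

Both agree with the contour-integration answers to quadrature accuracy.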
Let’s do more examples.
Example. To find the integral
$$I = \int_0^\infty \frac{\mathrm{d}x}{(x^2 + a^2)^2},$$
where $a > 0$ is a real constant, consider the contour integral
$$\oint_\gamma \frac{\mathrm{d}z}{(z^2 + a^2)^2},$$
where $\gamma$ is exactly as above. The only singularity within $\gamma$ is a pole of order 2 at $z = ia$, at which the residue is
$$\lim_{z \to ia} \frac{\mathrm{d}}{\mathrm{d}z}\frac{1}{(z + ia)^2} = \lim_{z \to ia} \frac{-2}{(z + ia)^3} = \frac{-2}{-8ia^3} = \frac{1}{4ia^3}.$$
We also have to do the integral along $\gamma_R$. This still vanishes as $R \to \infty$, since
$$\left|\int_{\gamma_R} \frac{\mathrm{d}z}{(z^2 + a^2)^2}\right| \leq \pi R \cdot O(R^{-4}) = O(R^{-3}) \to 0.$$
Therefore
$$2I = 2\pi i \cdot \frac{1}{4ia^3}.$$
So
$$I = \frac{\pi}{4a^3}.$$
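A numerical check of this answer for a few values of $a$ (my addition; assumes SciPy):

```python
import numpy as np
from scipy.integrate import quad

def check(a):
    # integral of 1/(x^2+a^2)^2 over (0, infinity) should equal pi/(4 a^3)
    I, _ = quad(lambda x: 1.0 / (x**2 + a**2)**2, 0, np.inf)
    assert abs(I - np.pi / (4 * a**3)) < 1e-8

for a in (0.5, 1.0, 3.0):
    check(a)
```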
Example. Consider
$$I = \int_0^\infty \frac{\mathrm{d}x}{1 + x^4}.$$
We use the same contour again. There are simple poles of $\frac{1}{1 + z^4}$ at
$$e^{\pi i/4},\quad e^{3\pi i/4},\quad e^{-\pi i/4},\quad e^{-3\pi i/4},$$
but only the first two are enclosed by the contour.

[Figure: the semicircular contour $\gamma_0 + \gamma_R$, enclosing the two poles in the upper half-plane]

The residues at these two poles are $-\frac{1}{4}e^{\pi i/4}$ and $+\frac{1}{4}e^{-\pi i/4}$ respectively. Hence
$$2I = 2\pi i\left(-\frac{1}{4}e^{\pi i/4} + \frac{1}{4}e^{-\pi i/4}\right).$$
Working out some algebra, we find
$$I = \frac{\pi}{2\sqrt{2}}.$$
Example. There is another way of doing the previous integral. Instead, we use the quarter-circle contour shown:

[Figure: quarter-circle contour consisting of $\gamma_0$ along the real axis from $0$ to $R$, the arc $\gamma_1$ from $R$ to $iR$, and $\gamma_2$ down the imaginary axis from $iR$ to $0$, enclosing only the pole at $e^{\pi i/4}$]

In words, $\gamma$ consists of the real axis from $0$ to $R$ ($\gamma_0$); the arc of circle from $R$ to $iR$ ($\gamma_1$); and the imaginary axis from $iR$ to $0$ ($\gamma_2$).

Now
$$\int_{\gamma_0} \frac{\mathrm{d}z}{1 + z^4} \to I \quad\text{as } R \to \infty,$$
and along $\gamma_2$ we substitute $z = iy$ to obtain
$$\int_{\gamma_2} \frac{\mathrm{d}z}{1 + z^4} = \int_R^0 \frac{i\,\mathrm{d}y}{1 + y^4} \to -iI \quad\text{as } R \to \infty.$$
Finally, the integral over $\gamma_1$ vanishes as before. We enclose only one pole, which makes the calculation a bit easier than what we did last time. In the limit, we get
$$I - iI = 2\pi i\left(-\frac{1}{4}e^{\pi i/4}\right),$$
and we again obtain
$$I = \frac{\pi}{2\sqrt{2}}.$$
Example. We now look at trigonometric integrals of the form
$$\int_0^{2\pi} f(\sin\theta, \cos\theta)\,\mathrm{d}\theta.$$
We substitute
$$z = e^{i\theta},\quad \cos\theta = \frac{1}{2}\left(z + z^{-1}\right),\quad \sin\theta = \frac{1}{2i}\left(z - z^{-1}\right).$$
We then end up with a closed contour integral.
For example, consider the integral
$$I = \int_0^{2\pi} \frac{\mathrm{d}\theta}{a + \cos\theta},$$
where $a > 1$. We substitute $z = e^{i\theta}$, so that $\mathrm{d}z = iz\,\mathrm{d}\theta$ and $\cos\theta = \frac{1}{2}(z + z^{-1})$. As $\theta$ increases from $0$ to $2\pi$, $z$ moves round the unit circle $\gamma$ in the complex plane. Hence
$$I = \oint_\gamma \frac{(iz)^{-1}\,\mathrm{d}z}{a + \frac{1}{2}(z + z^{-1})} = -2i\oint_\gamma \frac{\mathrm{d}z}{z^2 + 2az + 1}.$$

[Figure: the unit circle $\gamma$, with the pole $z_+$ inside and $z_-$ outside]

We now solve the quadratic to obtain the poles, which happen to be
$$z_\pm = -a \pm \sqrt{a^2 - 1}.$$
With some careful thinking, we realize $z_+$ is inside the circle, while $z_-$ is outside. To find the residue, we notice the integrand is equal to
$$\frac{1}{(z - z_+)(z - z_-)}.$$
So the residue at $z = z_+$ is
$$\frac{1}{z_+ - z_-} = \frac{1}{2\sqrt{a^2 - 1}}.$$
Hence
$$I = -2i \cdot \frac{2\pi i}{2\sqrt{a^2 - 1}} = \frac{2\pi}{\sqrt{a^2 - 1}}.$$
Example. Integrating a function with a branch cut is a bit more tricky, and requires a "keyhole contour". Suppose we want to integrate
$$I = \int_0^\infty \frac{x^\alpha}{1 + \sqrt{2}x + x^2}\,\mathrm{d}x,$$
with $-1 < \alpha < 1$ so that the integral converges. We need a branch cut for $z^\alpha$. We take our branch cut to be along the positive real axis, and define
$$z^\alpha = r^\alpha e^{i\alpha\theta},$$
where $z = re^{i\theta}$ and $0 \leq \theta < 2\pi$. We use the following keyhole contour:

[Figure: keyhole contour made of the large circle $C_R$, the small circle $C_\varepsilon$, and two lines just above and below the positive real axis; the poles lie at $e^{3\pi i/4}$ and $e^{5\pi i/4}$]

This consists of a large circle $C_R$ of radius $R$, a small circle $C_\varepsilon$ of radius $\varepsilon$, and the two lines just above and below the branch cut. The idea is to take the limits $\varepsilon \to 0$ and $R \to \infty$ simultaneously.
We have four integrals to work out. The first is
$$\int_{C_R} \frac{z^\alpha}{1 + \sqrt{2}z + z^2}\,\mathrm{d}z = O(R^{\alpha - 2}) \cdot 2\pi R = O(R^{\alpha - 1}) \to 0$$
as $R \to \infty$. To obtain the contribution from $C_\varepsilon$, we substitute $z = \varepsilon e^{i\theta}$, and obtain
$$\int_{2\pi}^0 \frac{\varepsilon^\alpha e^{i\alpha\theta}}{1 + \sqrt{2}\varepsilon e^{i\theta} + \varepsilon^2 e^{2i\theta}}\, i\varepsilon e^{i\theta}\,\mathrm{d}\theta = O(\varepsilon^{\alpha + 1}) \to 0.$$
Finally, we look at the integrals above and below the branch cut. The contribution from just above the branch cut is
$$\int_\varepsilon^R \frac{x^\alpha}{1 + \sqrt{2}x + x^2}\,\mathrm{d}x \to I.$$
Similarly, the integral just below is
$$\int_R^\varepsilon \frac{x^\alpha e^{2\alpha\pi i}}{1 + \sqrt{2}x + x^2}\,\mathrm{d}x \to -e^{2\alpha\pi i}I.$$
So we get
$$\oint_\gamma \frac{z^\alpha}{1 + \sqrt{2}z + z^2}\,\mathrm{d}z \to \left(1 - e^{2\alpha\pi i}\right)I.$$
All that remains is to compute the residues. We write the integrand as
$$\frac{z^\alpha}{(z - e^{3\pi i/4})(z - e^{5\pi i/4})}.$$
So the poles are at $z_0 = e^{3\pi i/4}$ and $z_1 = e^{5\pi i/4}$. The residues are
$$\frac{e^{3\alpha\pi i/4}}{\sqrt{2}i},\qquad -\frac{e^{5\alpha\pi i/4}}{\sqrt{2}i}$$
respectively. Hence we know
$$\left(1 - e^{2\alpha\pi i}\right)I = 2\pi i\left(\frac{e^{3\alpha\pi i/4}}{\sqrt{2}i} - \frac{e^{5\alpha\pi i/4}}{\sqrt{2}i}\right).$$
In other words, we get
$$e^{\alpha\pi i}\left(e^{\alpha\pi i} - e^{-\alpha\pi i}\right)I = \sqrt{2}\pi e^{\alpha\pi i}\left(e^{\alpha\pi i/4} - e^{-\alpha\pi i/4}\right).$$
Thus we have
$$I = \sqrt{2}\pi\frac{\sin(\alpha\pi/4)}{\sin(\alpha\pi)}.$$
Note that we labeled the poles as $e^{3\pi i/4}$ and $e^{5\pi i/4}$. The second point is the same point as $e^{-3\pi i/4}$, but it would be wrong to label it like that. We decided at the beginning to pick the branch with $0 \leq \theta < 2\pi$, and $-\frac{3\pi}{4}$ is not in that range. If we wrote it as $e^{-3\pi i/4}$ instead, we might have got the residue as $-e^{-3\alpha\pi i/4}/(\sqrt{2}i)$, which is not the same as $-e^{5\alpha\pi i/4}/(\sqrt{2}i)$.

Note that the need for a branch cut does not mean we must use a keyhole contour. Sometimes, we can get away with the branch cut and contour chosen as follows:

[Figure: a semicircular contour from $-R$ to $R$, indented at the origin by a small semicircle of radius $\varepsilon$, with a pole marked at $i$; the branch cut is chosen so as to avoid the contour]
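The keyhole answer can also be sanity-checked numerically (my addition; assumes SciPy). Splitting the range at $1$ keeps the quadrature of the slowly decaying tail separate from the possible endpoint singularity at $0$.

```python
import numpy as np
from scipy.integrate import quad

def check(alpha):
    f = lambda x: x**alpha / (1 + np.sqrt(2) * x + x**2)
    # split at 1 so quad treats the tail on (1, infinity) separately
    I = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]
    expected = np.sqrt(2) * np.pi * np.sin(alpha * np.pi / 4) / np.sin(alpha * np.pi)
    assert abs(I - expected) < 1e-6

for alpha in (0.2, 0.5, 0.8):
    check(alpha)
```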
4.3 Further applications of the residue theorem using rectangular contours

Example. We want to calculate
$$I = \int_{-\infty}^\infty \frac{e^{\alpha x}}{\cosh x}\,\mathrm{d}x,$$
where $\alpha \in (-1, 1)$. We notice the integrand has singularities at $z = \left(n + \frac{1}{2}\right)\pi i$ for all $n \in \mathbb{Z}$. So if we used our good, old semicircular contours, then we would run into infinitely many singularities as $R \to \infty$.

Instead, we abuse the periodic nature of $\cosh$, and consider the following rectangular contour:

[Figure: rectangular contour with $\gamma_0$ along the real axis from $-R$ to $R$, vertical sides $\gamma_R^+$ and $\gamma_R^-$, and $\gamma_1$ along $\operatorname{Im} z = \pi$ traversed from $R + \pi i$ to $-R + \pi i$, enclosing only the pole at $\frac{\pi i}{2}$]
We see that
$$\int_{\gamma_0} \frac{e^{\alpha z}}{\cosh z}\,\mathrm{d}z \to I \quad\text{as } R \to \infty.$$
Also, using $\cosh(x + \pi i) = -\cosh x$ and the direction of travel,
$$\int_{\gamma_1} \frac{e^{\alpha z}}{\cosh z}\,\mathrm{d}z = \int_R^{-R} \frac{e^{\alpha(x + \pi i)}}{\cosh(x + \pi i)}\,\mathrm{d}x = e^{\alpha\pi i}\int_{-R}^R \frac{e^{\alpha x}}{\cosh x}\,\mathrm{d}x \to e^{\alpha\pi i}I.$$
On $\gamma_R^+$, we have
$$\cosh z = \cosh(R + iy) = \cosh R\cos y + i\sinh R\sin y.$$
So
$$|\cosh z| = \sqrt{\cosh^2 R\cos^2 y + \sinh^2 R\sin^2 y}.$$
We now use the formula $\cosh^2 R = 1 + \sinh^2 R$, and magically obtain
$$|\cosh z| = \sqrt{\cos^2 y + \sinh^2 R} \geq \sinh R.$$
Hence
$$\left|\frac{e^{\alpha z}}{\cosh z}\right| \leq \frac{\left|e^{\alpha R}e^{\alpha iy}\right|}{\sinh R} = \frac{e^{\alpha R}}{\sinh R} = O\left(e^{(\alpha - 1)R}\right) \to 0.$$
Hence as $R \to \infty$,
$$\int_{\gamma_R^+} \frac{e^{\alpha z}}{\cosh z}\,\mathrm{d}z \to 0.$$
Similarly, the integral along $\gamma_R^-$ vanishes.
Finally, we need to find the residue. The only singularity inside the contour is at $\frac{\pi i}{2}$, where the residue is
$$\frac{e^{\alpha\pi i/2}}{\sinh\frac{\pi i}{2}} = \frac{e^{\alpha\pi i/2}}{i} = -ie^{\alpha\pi i/2}.$$
Hence we get
$$I\left(1 + e^{\alpha\pi i}\right) = 2\pi i\left(-ie^{\alpha\pi i/2}\right) = 2\pi e^{\alpha\pi i/2}.$$
So we find
$$I = \frac{2\pi e^{\alpha\pi i/2}}{1 + e^{\alpha\pi i}} = \frac{\pi}{\cos(\alpha\pi/2)}.$$
These rectangular contours tend to be useful for trigonometric and hyperbolic functions.
Example. Consider the integral
$$\oint_\gamma \frac{\cot \pi z}{z^2}\,\mathrm{d}z,$$
where $\gamma$ is the square contour shown, with corners at $\left(N + \frac{1}{2}\right)(\pm 1 \pm i)$, where $N$ is a large integer. This choice avoids hitting the singularities.

[Figure: square contour with corners at $\pm\left(N + \frac{1}{2}\right) \pm \left(N + \frac{1}{2}\right)i$, enclosing the poles at the integers $-N, \ldots, N$]
There are simple poles at $z = n \in \mathbb{Z} \setminus \{0\}$, with residues $\frac{1}{n^2\pi}$, and a triple pole at $z = 0$ with residue $-\frac{\pi}{3}$ (from the Laurent series of $\frac{\cot\pi z}{z^2}$ at the origin). It turns out the integrals along the sides all vanish as $N \to \infty$ (see below). So we know
$$2\pi i\left(2\sum_{n=1}^N \frac{1}{n^2\pi} - \frac{\pi}{3}\right) \to 0$$
as $N \to \infty$. In other words,
$$\sum_{n=1}^\infty \frac{1}{n^2} = \frac{\pi^2}{6}.$$
This is probably the most complicated and inefficient way of computing this series. However, notice we can easily modify this to find the sum of $\frac{1}{n^3}$, or $\frac{1}{n^4}$, or any complicated sum we can think of.
Hence all that remains is to show that the integrals along the sides vanish. On the right-hand side, we can write $z = N + \frac{1}{2} + iy$. Then
$$|\cot \pi z| = \left|\cot \pi\left(N + \tfrac{1}{2} + iy\right)\right| = |\tan i\pi y| = |\tanh \pi y| \leq 1.$$
So $\cot \pi z$ is bounded on the vertical sides. Since we are integrating $\frac{\cot \pi z}{z^2}$, the integral vanishes as $N \to \infty$.

Along the top, we get $z = x + \left(N + \frac{1}{2}\right)i$. This gives
$$|\cot \pi z| = \sqrt{\frac{\cosh^2\left(N + \frac{1}{2}\right)\pi - \sin^2 \pi x}{\sinh^2\left(N + \frac{1}{2}\right)\pi + \sin^2 \pi x}} \leq \coth\left(N + \tfrac{1}{2}\right)\pi \leq \coth\frac{\pi}{2}.$$
So again $\cot \pi z$ is bounded on the top side. So again, the integral vanishes as $N \to \infty$.
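The Basel-sum conclusion can be illustrated numerically (my addition): the partial sums of $\sum 1/n^2$ approach $\pi^2/6$ with a tail of size about $1/N$.

```python
import numpy as np

# partial sum of 1/n^2 up to N; the tail is roughly 1/N
N = 10**6
partial = np.sum(1.0 / np.arange(1, N + 1)**2)
assert abs(partial - np.pi**2 / 6) < 2e-6
```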
4.4 Jordan's lemma

So far, we have bounded the semicircular integral in a rather crude way, with
$$\left|\int_{\gamma_R} f(z)e^{i\lambda z}\,\mathrm{d}z\right| \leq \pi R \sup_{|z| = R}|f(z)|.$$
This works for most cases. However, sometimes we need a more subtle argument.

Lemma (Jordan's lemma). Suppose that $f$ is an analytic function, except for a finite number of singularities, and that $f(z) \to 0$ as $|z| \to \infty$.

[Figure: the semicircles $\gamma_R$ (upper half-plane) and $\gamma_R'$ (lower half-plane) over the interval $[-R, R]$, with the singularities of $f$ marked]

Then for any real constant $\lambda > 0$, we have
$$\int_{\gamma_R} f(z)e^{i\lambda z}\,\mathrm{d}z \to 0$$
as $R \to \infty$, where $\gamma_R$ is the semicircle of radius $R$ in the upper half-plane.

For $\lambda < 0$, the same conclusion holds for the semicircle $\gamma_R'$ in the lower half-plane.

Such integrals arise frequently in Fourier transforms, as we shall see in the next chapter.
How can we prove this result? The result is easy to show if in fact $f(z) = o(|z|^{-1})$ as $|z| \to \infty$, i.e.
$$|f(z)||z| \to 0 \quad\text{as } |z| \to \infty,$$
since $\left|e^{i\lambda z}\right| = e^{-\lambda\operatorname{Im} z} \leq 1$ on $\gamma_R$. So
$$\left|\int_{\gamma_R} f(z)e^{i\lambda z}\,\mathrm{d}z\right| \leq \pi R \cdot o(R^{-1}) = o(1) \to 0.$$
But for functions decaying less rapidly than $o(|z|^{-1})$ (e.g. if $f(z) = \frac{1}{z}$), we need to prove Jordan's lemma, which extends the result to any function $f(z)$ that tends to zero at infinity.
Proof. The proof relies on the fact that for $\theta \in \left[0, \frac{\pi}{2}\right]$, we have
$$\sin\theta \geq \frac{2\theta}{\pi}.$$
So we get
$$\begin{aligned}
\left|\int_{\gamma_R} f(z)e^{i\lambda z}\,\mathrm{d}z\right| &= \left|\int_0^\pi f\left(Re^{i\theta}\right)e^{i\lambda Re^{i\theta}}iRe^{i\theta}\,\mathrm{d}\theta\right|\\
&\leq R\int_0^\pi \left|f\left(Re^{i\theta}\right)\right|\left|e^{i\lambda Re^{i\theta}}\right|\,\mathrm{d}\theta\\
&\leq 2R\sup_{z \in \gamma_R}|f(z)|\int_0^{\pi/2} e^{-\lambda R\sin\theta}\,\mathrm{d}\theta\\
&\leq 2R\sup_{z \in \gamma_R}|f(z)|\int_0^{\pi/2} e^{-2\lambda R\theta/\pi}\,\mathrm{d}\theta\\
&= \frac{\pi}{\lambda}\left(1 - e^{-\lambda R}\right)\sup_{z \in \gamma_R}|f(z)| \to 0,
\end{aligned}$$
as required. The same argument works for $\gamma_R'$ when $\lambda < 0$.

Note that in most cases, we don't actually need Jordan's lemma; we can just bound the integral by $\pi R \sup_{|z| = R}|f(z)|$.
Example. Suppose we want to compute
$$I = \int_0^\infty \frac{\cos\alpha x}{1 + x^2}\,\mathrm{d}x,$$
where $\alpha$ is a positive real constant. We consider the usual semicircular contour:

[Figure: the contour $\gamma = \gamma_0 + \gamma_R$, enclosing the pole at $i$]

We compute
$$\operatorname{Re}\int_\gamma \frac{e^{i\alpha z}}{1 + z^2}\,\mathrm{d}z.$$
Along $\gamma_0$, we obtain $2I$ as $R \to \infty$. On $\gamma_R$, we do not actually need Jordan's lemma to show that we obtain zero in the limit, but we can still use it to save ink. So
$$I = \frac{1}{2}\operatorname{Re}\left(2\pi i\operatorname*{res}_{z = i}\frac{e^{i\alpha z}}{1 + z^2}\right) = \frac{1}{2}\operatorname{Re}\left(2\pi i\frac{e^{-\alpha}}{2i}\right) = \frac{1}{2}\pi e^{-\alpha}.$$
Note that taking the real part does not do anything here, since the result of the integral is completely real. This is because the imaginary part of $\int_\gamma \frac{e^{i\alpha z}}{1 + z^2}\,\mathrm{d}z$ comes from integrating $\frac{\sin\alpha x}{1 + x^2}$, which is odd and so vanishes.

Note that if we had attempted to integrate $\int_\gamma \frac{\cos\alpha z}{1 + z^2}\,\mathrm{d}z$ directly, we would have found that $\int_{\gamma_R} \not\to 0$. In fact, $\cos\alpha z$ is unbounded at $\infty$.
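A numerical check of this Fourier-type integral (my addition; assumes SciPy). SciPy's `quad` has a dedicated `weight='cos'` mode for oscillatory integrands on a half-infinite range, which is much more reliable than naive quadrature here.

```python
import numpy as np
from scipy.integrate import quad

def check(alpha):
    # integral of cos(alpha x)/(1+x^2) over (0, infinity) should be (pi/2) e^{-alpha}
    I, _ = quad(lambda x: 1.0 / (1 + x**2), 0, np.inf, weight='cos', wvar=alpha)
    assert abs(I - 0.5 * np.pi * np.exp(-alpha)) < 1e-8

for alpha in (0.5, 1.0, 2.0):
    check(alpha)
```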
Example. We want to find
$$I = \int_{-\infty}^\infty \frac{\sin x}{x}\,\mathrm{d}x.$$
This time, we do require Jordan's lemma. Here we have an extra complication: while $\frac{\sin x}{x}$ is well-behaved at the origin, to perform the contour integral, we need to work with $\frac{e^{iz}}{z}$ instead, which is singular at the origin.

Instead, we split the integral in half, and write
$$\int_{-\infty}^\infty = \lim_{\substack{\varepsilon \to 0\\ R \to \infty}}\left(\int_{-R}^{-\varepsilon} \frac{\sin x}{x}\,\mathrm{d}x + \int_\varepsilon^R \frac{\sin x}{x}\,\mathrm{d}x\right) = \operatorname{Im}\lim_{\substack{\varepsilon \to 0\\ R \to \infty}}\left(\int_{-R}^{-\varepsilon} \frac{e^{ix}}{x}\,\mathrm{d}x + \int_\varepsilon^R \frac{e^{ix}}{x}\,\mathrm{d}x\right).$$
We now let $C$ be the contour from $-R$ to $-\varepsilon$, then round a semicircle $C_\varepsilon$ to $\varepsilon$, then to $R$, then returning via a semicircle $C_R$ of radius $R$. Then $C$ misses all our singularities.

[Figure: the indented contour $C$, running along the real axis from $-R$ to $-\varepsilon$, over the small semicircle $C_\varepsilon$ above the origin, from $\varepsilon$ to $R$, and back via the large semicircle $C_R$]
Since $C$ misses all singularities, we must have
$$\int_{-R}^{-\varepsilon} \frac{e^{ix}}{x}\,\mathrm{d}x + \int_\varepsilon^R \frac{e^{ix}}{x}\,\mathrm{d}x = -\int_{C_\varepsilon} \frac{e^{iz}}{z}\,\mathrm{d}z - \int_{C_R} \frac{e^{iz}}{z}\,\mathrm{d}z.$$
By Jordan's lemma, the integral around $C_R$ vanishes as $R \to \infty$. On $C_\varepsilon$, we substitute $z = \varepsilon e^{i\theta}$ and $e^{iz} = 1 + O(\varepsilon)$ to obtain
$$\int_{C_\varepsilon} \frac{e^{iz}}{z}\,\mathrm{d}z = \int_\pi^0 \frac{1 + O(\varepsilon)}{\varepsilon e^{i\theta}}\, i\varepsilon e^{i\theta}\,\mathrm{d}\theta = -i\pi + O(\varepsilon).$$
Hence, in the limit $\varepsilon \to 0$ and $R \to \infty$, we get
$$\int_{-\infty}^\infty \frac{\sin x}{x}\,\mathrm{d}x = \operatorname{Im}(i\pi) = \pi.$$
Similarly, we can compute
$$\int_{-\infty}^\infty \frac{\sin^2 x}{x^2}\,\mathrm{d}x = \pi.$$
Alternatively, we can notice that $\frac{\sin z}{z}$ has a removable singularity at the origin. Removing the singularity, the integrand is completely analytic. Therefore the original integral is equivalent to the integral along any path from $-\infty$ to $\infty$ that avoids the origin, say one with a small semicircular indentation. We can then write $\sin z = \frac{e^{iz} - e^{-iz}}{2i}$, and apply our standard techniques and Jordan's lemma to the two terms separately.
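A numerical check of the Dirichlet integral (my addition; assumes SciPy). Rather than integrating the oscillatory tail directly, we use the sine integral $\mathrm{Si}(X) = \int_0^X \frac{\sin x}{x}\,\mathrm{d}x$, available as `scipy.special.sici`, which tends to $\pi/2$.

```python
import numpy as np
from scipy.special import sici

# Si(X) -> pi/2 as X -> infinity, so the full integral over the real line is pi
si_large, _ci = sici(1e8)
assert abs(2 * si_large - np.pi) < 1e-6
```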
5 Transform theory
We are now going to consider two different types of “transforms”. The first
is the Fourier transform, which we already met in IB Methods. The second is
the Laplace transform. While the formula of a Laplace transform is completely
real, the inverse Laplace transform involves a more complicated contour integral,
which is why we did not do it in IB Methods. In either case, the new tool of
contour integrals allows us to compute more transforms.
Apart from that, the two transforms are pretty similar, and most properties
of the Fourier transform also carry over to the Laplace transform.
5.1 Fourier transforms

Definition (Fourier transform). The Fourier transform of a function $f(x)$ that decays sufficiently as $|x| \to \infty$ is defined as
$$\tilde{f}(k) = \int_{-\infty}^\infty f(x)e^{-ikx}\,\mathrm{d}x,$$
and the inverse transform is
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty \tilde{f}(k)e^{ikx}\,\mathrm{d}k.$$
It is common for the terms $e^{-ikx}$ and $e^{ikx}$ to be swapped around in these definitions. They might even be swapped around by the same author in the same paper: for some reason, if we have a function of two variables, then it is traditional to transform one variable with $e^{-ikx}$ and the other with $e^{ikx}$, just to confuse people. More rarely, factors of $2\pi$ or $\sqrt{2\pi}$ are rearranged.

Traditionally, if $f$ is a function of position $x$, then the transform variable is called $k$; while if $f$ is a function of time $t$, then it is called $\omega$. You don't have to stick to this notation, if you like being confusing.
In fact, a more precise version of the inverse transform is
$$\frac{1}{2}\left(f(x^+) + f(x^-)\right) = \frac{1}{2\pi}\,\mathrm{PV}\int_{-\infty}^\infty \tilde{f}(k)e^{ikx}\,\mathrm{d}k.$$
The left-hand side indicates that at a discontinuity, the inverse Fourier transform gives the average value. The right-hand side shows that only the Cauchy principal value of the integral (denoted $\mathrm{PV}\int$, $\mathrm{P}\int$, or an integral sign with a dash through it) is required, i.e. the limit
$$\lim_{R \to \infty}\int_{-R}^R \tilde{f}(k)e^{ikx}\,\mathrm{d}k,$$
rather than
$$\lim_{\substack{R \to \infty\\ S \to -\infty}}\int_S^R \tilde{f}(k)e^{ikx}\,\mathrm{d}k.$$
Several functions have PV integrals, but not normal ones. For example,
$$\mathrm{PV}\int_{-\infty}^\infty \frac{x}{1 + x^2}\,\mathrm{d}x = 0,$$
since the integrand is odd, but
$$\int_{-\infty}^\infty \frac{x}{1 + x^2}\,\mathrm{d}x$$
diverges at both $-\infty$ and $\infty$. So the normal proper integral does not exist.

So for the inverse Fourier transform, we only have to care about the Cauchy principal value. This is convenient, because that's how we compute contour integrals all the time!
Notation. The Fourier transform can also be denoted by $\tilde{f} = \mathcal{F}(f)$ or $\tilde{f}(k) = \mathcal{F}(f)(k)$. In a slight abuse of notation, we often write $\tilde{f}(k) = \mathcal{F}(f(x))$, but this is not correct notation, since $\mathcal{F}$ takes a function as its argument, not a function evaluated at a particular point.

Note that in the Tripos exam, you are expected to know all the properties of the Fourier transform you have learned from IB Methods.
We now calculate some Fourier transforms using the calculus of residues.

Example. Consider $f(x) = e^{-x^2/2}$. Then
$$\tilde{f}(k) = \int_{-\infty}^\infty e^{-x^2/2}e^{-ikx}\,\mathrm{d}x = \int_{-\infty}^\infty e^{-(x + ik)^2/2}e^{-k^2/2}\,\mathrm{d}x = e^{-k^2/2}\int_{-\infty + ik}^{\infty + ik} e^{-z^2/2}\,\mathrm{d}z.$$
We create a rectangular contour that looks like this:

[Figure: rectangular contour with $\gamma_0$ along $\operatorname{Im} z = k$, $\gamma_1$ along the real axis traversed in the reverse direction, and vertical sides $\gamma_R^\pm$ at $\operatorname{Re} z = \pm R$]

The integral we want is the integral along $\gamma_0$ as shown, in the limit as $R \to \infty$. We can show that $\int_{\gamma_R^+} \to 0$ and $\int_{\gamma_R^-} \to 0$. Then we notice there are no singularities inside the contour. So
$$\int_{\gamma_0} e^{-z^2/2}\,\mathrm{d}z = \int_{\gamma_1} e^{-z^2/2}\,\mathrm{d}z$$
in the limit. Since $\gamma_1$ is traversed in the reverse direction, we have
$$\tilde{f}(k) = e^{-k^2/2}\int_{-\infty}^\infty e^{-z^2/2}\,\mathrm{d}z = \sqrt{2\pi}e^{-k^2/2},$$
using a standard result from real analysis.
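Since the Gaussian is even, its transform is real, and we can verify the result by integrating $e^{-x^2/2}\cos kx$ numerically (my addition; assumes SciPy):

```python
import numpy as np
from scipy.integrate import quad

def ft(k):
    # real part of the Fourier transform of exp(-x^2/2); the imaginary part vanishes
    re, _ = quad(lambda x: np.exp(-x**2 / 2) * np.cos(k * x), -np.inf, np.inf)
    return re

for k in (0.0, 1.0, 2.5):
    assert abs(ft(k) - np.sqrt(2 * np.pi) * np.exp(-k**2 / 2)) < 1e-8
```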
When inverting Fourier transforms, we generally use a semicircular contour (in the upper half-plane if $x > 0$, lower if $x < 0$), and apply Jordan's lemma.

Example. Consider the real function
$$f(x) = \begin{cases} 0 & x < 0\\ e^{-ax} & x > 0 \end{cases},$$
where $a > 0$ is a real constant. The Fourier transform of $f$ is
$$\tilde{f}(k) = \int_{-\infty}^\infty f(x)e^{-ikx}\,\mathrm{d}x = \int_0^\infty e^{-ax - ikx}\,\mathrm{d}x = \frac{-1}{a + ik}\left[e^{-ax - ikx}\right]_0^\infty = \frac{1}{a + ik}.$$
We shall compute the inverse Fourier transform by evaluating
$$\frac{1}{2\pi}\int_{-\infty}^\infty \tilde{f}(k)e^{ikx}\,\mathrm{d}k.$$
In the complex plane, we let $\gamma_0$ be the contour from $-R$ to $R$ along the real axis; $\gamma_R$ the semicircle of radius $R$ in the upper half-plane, and $\gamma_R'$ the semicircle in the lower half-plane. We let $\gamma = \gamma_0 + \gamma_R$ and $\gamma' = \gamma_0 + \gamma_R'$.

[Figure: the contours $\gamma_0$, $\gamma_R$ and $\gamma_R'$, with the only pole at $k = ia$ in the upper half-plane]
We see $\tilde{f}(k)$ has only one pole, at $k = ia$, which is a simple pole. So we get
$$\oint_\gamma \tilde{f}(k)e^{ikx}\,\mathrm{d}k = 2\pi i\operatorname*{res}_{k = ia}\frac{e^{ikx}}{i(k - ia)} = 2\pi e^{-ax},$$
while
$$\oint_{\gamma'} \tilde{f}(k)e^{ikx}\,\mathrm{d}k = 0.$$
Now if $x > 0$, applying Jordan's lemma (with $\lambda = x$) to $\gamma_R$ shows that $\int_{\gamma_R}\tilde{f}(k)e^{ikx}\,\mathrm{d}k \to 0$ as $R \to \infty$. Hence we get
$$\frac{1}{2\pi}\int_{-\infty}^\infty \tilde{f}(k)e^{ikx}\,\mathrm{d}k = \frac{1}{2\pi}\lim_{R \to \infty}\int_{\gamma_0}\tilde{f}(k)e^{ikx}\,\mathrm{d}k = \frac{1}{2\pi}\lim_{R \to \infty}\left(\oint_\gamma \tilde{f}(k)e^{ikx}\,\mathrm{d}k - \int_{\gamma_R}\tilde{f}(k)e^{ikx}\,\mathrm{d}k\right) = e^{-ax}.$$
For $x < 0$, we have to close in the lower half-plane. Since there are no singularities there, we get
$$\frac{1}{2\pi}\int_{-\infty}^\infty \tilde{f}(k)e^{ikx}\,\mathrm{d}k = 0.$$
Combining these results, we obtain
$$\frac{1}{2\pi}\int_{-\infty}^\infty \tilde{f}(k)e^{ikx}\,\mathrm{d}k = \begin{cases} 0 & x < 0\\ e^{-ax} & x > 0 \end{cases},$$
as expected.
We've already done Fourier transforms in IB Methods, so we will not spend any more time on them. We move on to the new, exciting topic of Laplace transforms.
5.2 Laplace transform

The Fourier transform is a powerful tool for solving differential equations and investigating physical systems, but it has two key restrictions:

(i) Many functions of interest grow exponentially (e.g. $e^x$), and so do not have Fourier transforms;

(ii) There is no way of incorporating initial or boundary conditions in the transform variable. When used to solve an ODE, the Fourier transform merely gives a particular integral: there are no arbitrary constants produced by the method.

So for solving differential equations, the Fourier transform is pretty limited. Coming to our rescue is the Laplace transform, which gets around these two restrictions. However, we have to pay the price with a different restriction: it is only defined for functions $f(t)$ which vanish for $t < 0$ (by convention).

From now on, we shall make this assumption, so that if we refer to the function $f(t) = e^t$ for instance, we really mean $f(t) = e^t H(t)$, where $H(t)$ is the Heaviside step function,
$$H(t) = \begin{cases} 1 & t > 0\\ 0 & t < 0 \end{cases}.$$
Definition (Laplace transform). The Laplace transform of a function $f(t)$ such that $f(t) = 0$ for $t < 0$ is defined by
$$\hat{f}(p) = \int_0^\infty f(t)e^{-pt}\,\mathrm{d}t.$$
This exists for functions that grow no more than exponentially fast.

There is no standard notation for the Laplace transform.

Notation. We sometimes write
$$\hat{f} = \mathcal{L}(f), \quad\text{or}\quad \hat{f}(p) = \mathcal{L}(f(t)).$$
The variable $p$ is also not standard. Sometimes, $s$ is used instead.

Many functions (e.g. $t$ and $e^t$) which do not have Fourier transforms do have Laplace transforms.

Note that $\hat{f}(p) = \tilde{f}(-ip)$, where $\tilde{f}$ is the Fourier transform, provided that both transforms exist.
Example. Let's do some exciting examples.

(i) $\displaystyle \mathcal{L}(1) = \int_0^\infty e^{-pt}\,\mathrm{d}t = \frac{1}{p}$.

(ii) Integrating by parts, we find $\displaystyle \mathcal{L}(t) = \frac{1}{p^2}$.

(iii) $\displaystyle \mathcal{L}\left(e^{\lambda t}\right) = \int_0^\infty e^{(\lambda - p)t}\,\mathrm{d}t = \frac{1}{p - \lambda}$.

(iv) We have
$$\mathcal{L}(\sin t) = \mathcal{L}\left(\frac{1}{2i}\left(e^{it} - e^{-it}\right)\right) = \frac{1}{2i}\left(\frac{1}{p - i} - \frac{1}{p + i}\right) = \frac{1}{p^2 + 1}.$$
Note that the integral only converges if $\operatorname{Re} p$ is sufficiently large. For example, in (iii) we require $\operatorname{Re} p > \operatorname{Re}\lambda$. However, once we have calculated $\hat{f}$ in this domain, we can consider it to exist everywhere in the complete $p$-plane, except at singularities (such as at $p = \lambda$ in this example). This process of extending a complex function initially defined in some part of the plane to a larger part is known as analytic continuation.

So far, we haven't done anything interesting with the Laplace transform, and this is going to continue in the next section!
5.3 Elementary properties of the Laplace transform

We will come up with seven elementary properties of the Laplace transform. The first four are easily proved by direct substitution.

Proposition.

(i) Linearity:
$$\mathcal{L}(\alpha f + \beta g) = \alpha\mathcal{L}(f) + \beta\mathcal{L}(g).$$

(ii) Translation:
$$\mathcal{L}(f(t - t_0)H(t - t_0)) = e^{-pt_0}\hat{f}(p).$$

(iii) Scaling:
$$\mathcal{L}(f(\lambda t)) = \frac{1}{\lambda}\hat{f}\left(\frac{p}{\lambda}\right),$$
where we require $\lambda > 0$ so that $f(\lambda t)$ vanishes for $t < 0$.

(iv) Shifting:
$$\mathcal{L}\left(e^{p_0 t}f(t)\right) = \hat{f}(p - p_0).$$
(v) Transform of a derivative:
$$\mathcal{L}(f'(t)) = p\hat{f}(p) - f(0).$$
Repeating the process,
$$\mathcal{L}(f''(t)) = p\mathcal{L}(f'(t)) - f'(0) = p^2\hat{f}(p) - pf(0) - f'(0),$$
and so on. This is the key fact for solving ODEs using Laplace transforms.

(vi) Derivative of a transform:
$$\hat{f}'(p) = \mathcal{L}(-tf(t)).$$
Of course, the point of this is not that we know what the derivative of $\hat{f}$ is. It is that we know how to find the Laplace transform of $tf(t)$! For example, this lets us find the transform of $t^2$ with ease.

In general,
$$\hat{f}^{(n)}(p) = \mathcal{L}((-t)^n f(t)).$$

(vii) Asymptotic limits:
$$p\hat{f}(p) \to \begin{cases} f(0) & \text{as } p \to \infty\\ f(\infty) & \text{as } p \to 0 \end{cases},$$
where the second case requires $f$ to have a limit at $\infty$.
Proof.

(v) We have
$$\int_0^\infty f'(t)e^{-pt}\,\mathrm{d}t = \left[f(t)e^{-pt}\right]_0^\infty + p\int_0^\infty f(t)e^{-pt}\,\mathrm{d}t = p\hat{f}(p) - f(0).$$

(vi) We have
$$\hat{f}(p) = \int_0^\infty f(t)e^{-pt}\,\mathrm{d}t.$$
Differentiating with respect to $p$, we have
$$\hat{f}'(p) = -\int_0^\infty tf(t)e^{-pt}\,\mathrm{d}t.$$

(vii) Using (v), we know
$$p\hat{f}(p) = f(0) + \int_0^\infty f'(t)e^{-pt}\,\mathrm{d}t.$$
As $p \to \infty$, we know $e^{-pt} \to 0$ for all $t > 0$. So $p\hat{f}(p) \to f(0)$. This proof looks dodgy, but is actually valid, since $f'$ grows no more than exponentially fast.

Similarly, as $p \to 0$, we have $e^{-pt} \to 1$. So
$$p\hat{f}(p) \to f(0) + \int_0^\infty f'(t)\,\mathrm{d}t = f(\infty).$$
Example. We can compute
$$\mathcal{L}(t\sin t) = -\frac{\mathrm{d}}{\mathrm{d}p}\mathcal{L}(\sin t) = -\frac{\mathrm{d}}{\mathrm{d}p}\frac{1}{p^2 + 1} = \frac{2p}{(p^2 + 1)^2}.$$
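Properties (v) and (vi) can also be checked against numerical quadrature (my addition; assumes SciPy):

```python
import numpy as np
from scipy.integrate import quad

def laplace(f, p):
    val, _ = quad(lambda t: f(t) * np.exp(-p * t), 0, np.inf)
    return val

p = 1.5
# derivative-of-transform rule: L(t sin t) = 2p/(p^2+1)^2
assert abs(laplace(lambda t: t * np.sin(t), p) - 2 * p / (p**2 + 1)**2) < 1e-8
# transform-of-derivative rule for f = sin: L(cos t) = p L(sin t) - sin 0 = p/(p^2+1)
assert abs(laplace(np.cos, p) - p / (p**2 + 1)) < 1e-8
```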
5.4 The inverse Laplace transform

It's no good trying to find the Laplace transforms of functions if we can't invert them. Given $\hat{f}(p)$, we can calculate $f(t)$ using the Bromwich inversion formula.

Proposition. The inverse Laplace transform is given by
$$f(t) = \frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty}\hat{f}(p)e^{pt}\,\mathrm{d}p,$$
where $c$ is a real constant such that the Bromwich inversion contour $\gamma$ given by $\operatorname{Re} p = c$ lies to the right of all the singularities of $\hat{f}(p)$.
Proof. Since $f$ has a Laplace transform, it grows no more than exponentially. So we can find a $c \in \mathbb{R}$ such that
$$g(t) = f(t)e^{-ct}$$
decays at infinity (and is zero for $t < 0$, of course). So $g$ has a Fourier transform, and
$$\tilde{g}(\omega) = \int_{-\infty}^\infty f(t)e^{-ct}e^{-i\omega t}\,\mathrm{d}t = \hat{f}(c + i\omega).$$
Then we use the Fourier inversion formula to obtain
$$g(t) = \frac{1}{2\pi}\int_{-\infty}^\infty \hat{f}(c + i\omega)e^{i\omega t}\,\mathrm{d}\omega.$$
So we make the substitution $p = c + i\omega$, and thus obtain
$$f(t)e^{-ct} = \frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty}\hat{f}(p)e^{(p - c)t}\,\mathrm{d}p.$$
Multiplying both sides by $e^{ct}$, we get the result we were looking for (the requirement that $c$ lies to the right of all singularities is to fix the "constant of integration", so that $f(t) = 0$ for all $t < 0$, as we will soon see).
In most cases, we don't have to use the full inversion formula. Instead, we use the following special case:

Proposition. In the case that $\hat{f}(p)$ has only a finite number of isolated singularities $p_k$ for $k = 1, \cdots, n$, and $\hat{f}(p) \to 0$ as $|p| \to \infty$, we have
$$f(t) = \sum_{k=1}^n \operatorname*{res}_{p = p_k}\left(\hat{f}(p)e^{pt}\right)$$
for $t > 0$, and $f(t)$ vanishes for $t < 0$.
Proof. We first do the case $t < 0$. Consider the contour $\gamma_0 + \gamma_R'$ as shown, which encloses no singularities.

[Figure: the Bromwich contour $\gamma_0$ from $c - iR$ to $c + iR$, closed to the right by the semicircle $\gamma_R'$; all singularities lie to the left of the line $\operatorname{Re} p = c$]

Now if $\hat{f}(p) = o(|p|^{-1})$ as $|p| \to \infty$, then
$$\left|\int_{\gamma_R'}\hat{f}(p)e^{pt}\,\mathrm{d}p\right| \leq \pi Re^{ct}\sup_{p \in \gamma_R'}\left|\hat{f}(p)\right| \to 0 \quad\text{as } R \to \infty.$$
Here we used the fact that $|e^{pt}| \leq e^{ct}$, which follows from $\operatorname{Re}(pt) \leq ct$, noting that $t < 0$.

If $\hat{f}$ decays less rapidly at infinity, but still tends to zero there, the same result holds, but we need to use a slight modification of Jordan's lemma. So in either case, the integral
$$\int_{\gamma_R'}\hat{f}(p)e^{pt}\,\mathrm{d}p \to 0 \quad\text{as } R \to \infty.$$
Thus, we know $\int_{\gamma_0} \to \int_\gamma$. Hence, by Cauchy's theorem, we know $f(t) = 0$ for $t < 0$. This is in agreement with the requirement that functions with a Laplace transform vanish for $t < 0$.

Here we see why $\gamma$ must lie to the right of all singularities. If not, then the contour would encircle some singularities, and the integral would no longer be zero.
When $t > 0$, we close the contour to the left.

[Figure: the contour $\gamma_0$ closed to the left by the semicircle $\gamma_R$, enclosing all the singularities of $\hat{f}$]

This time, our contour does enclose some singularities. Since there are only finitely many singularities, we enclose all of them for sufficiently large $R$. Once again, we get $\int_{\gamma_R} \to 0$ as $R \to \infty$. Thus, by the residue theorem, we know
$$\int_\gamma \hat{f}(p)e^{pt}\,\mathrm{d}p = \lim_{R \to \infty}\int_{\gamma_0}\hat{f}(p)e^{pt}\,\mathrm{d}p = 2\pi i\sum_{k=1}^n \operatorname*{res}_{p = p_k}\left(\hat{f}(p)e^{pt}\right).$$
So from the Bromwich inversion formula,
$$f(t) = \sum_{k=1}^n \operatorname*{res}_{p = p_k}\left(\hat{f}(p)e^{pt}\right),$$
as required.
Example. We know
$$\hat{f}(p) = \frac{1}{p - 1}$$
has a pole at $p = 1$. So we must use $c > 1$. We have $\hat{f}(p) \to 0$ as $|p| \to \infty$, so Jordan's lemma applies as above. Hence $f(t) = 0$ for $t < 0$, and for $t > 0$ we have
$$f(t) = \operatorname*{res}_{p = 1}\frac{e^{pt}}{p - 1} = e^t.$$
This agrees with what we computed before.
Example. Consider $\hat{f}(p) = p^{-n}$. This has a pole of order $n$ at $p = 0$. So we pick $c > 0$. Then for $t > 0$, we have
$$f(t) = \operatorname*{res}_{p = 0}\frac{e^{pt}}{p^n} = \lim_{p \to 0}\frac{1}{(n - 1)!}\frac{\mathrm{d}^{n-1}}{\mathrm{d}p^{n-1}}e^{pt} = \frac{t^{n-1}}{(n - 1)!}.$$
This again agrees with what we computed before.
Example. In the case where
$$\hat{f}(p) = \frac{e^{-p}}{p},$$
we cannot use the standard result about residues, since $\hat{f}(p)$ does not vanish as $|p| \to \infty$ (in the left half-plane). But we can use the original Bromwich inversion formula to get
$$f(t) = \frac{1}{2\pi i}\int_\gamma \frac{e^{-p}}{p}e^{pt}\,\mathrm{d}p = \frac{1}{2\pi i}\int_\gamma \frac{1}{p}e^{pt'}\,\mathrm{d}p,$$
where $t' = t - 1$. Now we can close to the right when $t' < 0$, and to the left when $t' > 0$, picking up the residue from the pole at $p = 0$. Then we get
$$f(t) = \begin{cases} 0 & t' < 0\\ 1 & t' > 0 \end{cases} = \begin{cases} 0 & t < 1\\ 1 & t > 1 \end{cases} = H(t - 1).$$
This again agrees with what we've got before.
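As a forward check (my addition; assumes SciPy), the Laplace transform of $H(t - 1)$ is indeed $e^{-p}/p$, since the integrand vanishes for $t < 1$:

```python
import numpy as np
from scipy.integrate import quad

def laplace_heaviside_shift(p):
    # L(H(t-1)) = integral of e^{-pt} over (1, infinity) = e^{-p}/p
    val, _ = quad(lambda t: np.exp(-p * t), 1, np.inf)
    return val

for p in (0.5, 1.0, 3.0):
    assert abs(laplace_heaviside_shift(p) - np.exp(-p) / p) < 1e-8
```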
Example. If $\hat{f}(p)$ has a branch point (at $p = 0$, say), then we must use a Bromwich keyhole contour as shown.

[Figure: the Bromwich contour deformed around a branch cut along the negative real axis, with a small circle around the branch point at $p = 0$]
5.5 Solution of differential equations using the Laplace transform

The Laplace transform converts ODEs to algebraic equations, and PDEs to ODEs. We will illustrate this by an example.

Example. Consider the differential equation
$$t\ddot{y} - t\dot{y} + y = 0,$$
with $y(0) = 2$ and $\dot{y}(0) = -1$. Note that
$$\mathcal{L}(t\dot{y}) = -\frac{\mathrm{d}}{\mathrm{d}p}\mathcal{L}(\dot{y}) = -\frac{\mathrm{d}}{\mathrm{d}p}\left(p\hat{y} - y(0)\right) = -p\hat{y}' - \hat{y}.$$
Similarly, we find
$$\mathcal{L}(t\ddot{y}) = -p^2\hat{y}' - 2p\hat{y} + y(0).$$
Substituting and rearranging, we obtain
$$p\hat{y}' + 2\hat{y} = \frac{2}{p},$$
which is a simpler differential equation. We can solve this using an integrating factor to obtain
$$\hat{y} = \frac{2}{p} + \frac{A}{p^2},$$
where $A$ is an arbitrary constant. Hence we have
$$y = 2 + At,$$
and $A = -1$ from the initial conditions.
Example. A system of ODEs can always be written in the form
$$\dot{\mathbf{x}} = M\mathbf{x},\quad \mathbf{x}(0) = \mathbf{x}_0,$$
where $\mathbf{x} \in \mathbb{R}^n$ and $M$ is an $n \times n$ matrix. Taking the Laplace transform, we obtain
$$p\hat{\mathbf{x}} - \mathbf{x}_0 = M\hat{\mathbf{x}}.$$
So we get
$$\hat{\mathbf{x}} = (pI - M)^{-1}\mathbf{x}_0.$$
This has singularities when $p$ is equal to an eigenvalue of $M$.
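We can illustrate the resolvent formula numerically (my addition; assumes SciPy): the solution of $\dot{\mathbf{x}} = M\mathbf{x}$ is $\mathbf{x}(t) = e^{Mt}\mathbf{x}_0$, and its componentwise Laplace transform should match $(pI - M)^{-1}\mathbf{x}_0$. The particular matrix below is just an illustrative choice with eigenvalues $-1$ and $-2$, so the transform exists for $\operatorname{Re} p > -1$.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

M = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])
p = 1.0

# resolvent formula: xhat = (pI - M)^{-1} x0
xhat_formula = np.linalg.solve(p * np.eye(2) - M, x0)

# componentwise numerical Laplace transform of x(t) = expm(Mt) x0
# (truncate at t = 50; the solution has decayed to nothing by then)
xhat_numeric = np.array([
    quad(lambda t, i=i: (expm(M * t) @ x0)[i] * np.exp(-p * t), 0, 50)[0]
    for i in range(2)
])

assert np.allclose(xhat_formula, xhat_numeric, atol=1e-6)
```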
5.6 The convolution theorem for Laplace transforms

Finally, we recall from IB Methods that the Fourier transform turns convolutions into products, and vice versa. We will now prove an analogous result for Laplace transforms.

Definition (Convolution). The convolution of two functions $f$ and $g$ is defined as
$$(f * g)(t) = \int_{-\infty}^\infty f(t - t')g(t')\,\mathrm{d}t'.$$
When $f$ and $g$ vanish for negative $t$, this simplifies to
$$(f * g)(t) = \int_0^t f(t - t')g(t')\,\mathrm{d}t'.$$

Theorem (Convolution theorem). The Laplace transform of a convolution is given by
$$\mathcal{L}(f * g)(p) = \hat{f}(p)\hat{g}(p).$$
Proof. We have
$$\mathcal{L}(f * g)(p) = \int_0^\infty\left(\int_0^t f(t - t')g(t')\,\mathrm{d}t'\right)e^{-pt}\,\mathrm{d}t.$$
We change the order of integration in the $(t, t')$ plane, and adjust the limits accordingly:
$$\mathcal{L}(f * g)(p) = \int_0^\infty\left(\int_{t'}^\infty f(t - t')g(t')e^{-pt}\,\mathrm{d}t\right)\mathrm{d}t'.$$
We substitute $u = t - t'$ to get
$$\mathcal{L}(f * g)(p) = \int_0^\infty\left(\int_0^\infty f(u)g(t')e^{-pu}e^{-pt'}\,\mathrm{d}u\right)\mathrm{d}t' = \left(\int_0^\infty f(u)e^{-pu}\,\mathrm{d}u\right)\left(\int_0^\infty g(t')e^{-pt'}\,\mathrm{d}t'\right) = \hat{f}(p)\hat{g}(p).$$
Note the limits of integration are correct, since both orders of integration describe the same region $t \geq t' \geq 0$.