Part IA Differential Equations
Based on lectures by M. G. Worster
Notes taken by Dexter Chua
Michaelmas 2014
These notes are not endorsed by the lecturers, and I have modified them (often
significantly) after lectures. They are nowhere near accurate representations of what
was actually lectured, and in particular, all errors are almost surely mine.
Basic calculus
Informal treatment of differentiation as a limit, the chain rule, Leibniz's rule, Taylor
series, informal treatment of O and o notation and l'Hôpital's rule; integration as an
area, fundamental theorem of calculus, integration by substitution and parts. [3]
Informal treatment of partial derivatives, geometrical interpretation, statement (only)
of symmetry of mixed partial derivatives, chain rule, implicit differentiation. Informal
treatment of differentials, including exact differentials. Differentiation of an integral
with respect to a parameter. [2]
First-order linear differential equations
Equations with constant coefficients: exponential growth, comparison with discrete
equations, series solution; modelling examples including radioactive decay.
Equations with non-constant coefficients: solution by integrating factor. [2]
Nonlinear first-order equations
Separable equations. Exact equations. Sketching solution trajectories. Equilibrium
solutions, stability by perturbation; examples, including logistic equation and chemical
kinetics. Discrete equations: equilibrium solutions, stability; examples including the
logistic map. [4]
Higher-order linear differential equations
Complementary function and particular integral, linear independence, Wronskian (for
second-order equations), Abel’s theorem. Equations with constant coefficients and
examples including radioactive sequences, comparison in simple cases with difference
equations, reduction of order, resonance, transients, damping. Homogeneous equations.
Response to step and impulse function inputs; introduction to the notions of the Heav-
iside step-function and the Dirac delta-function. Series solutions including statement
only of the need for the logarithmic solution. [8]
Multivariate functions: applications
Directional derivatives and the gradient vector. Statement of Taylor series for functions
on $\mathbb{R}^n$. Local extrema of real functions, classification using the Hessian matrix. Coupled
first order systems: equivalence to single higher order equations; solution by matrix
methods. Non-degenerate phase portraits local to equilibrium points; stability.
Simple examples of first- and second-order partial differential equations, solution of
the wave equation in the form $f(x + ct) + g(x - ct)$. [5]
Contents
0 Introduction
1 Differentiation
1.1 Differentiation
1.2 Small o and big O notations
1.3 Methods of differentiation
1.4 Taylor’s theorem
1.5 L’Hopital’s rule
2 Integration
2.1 Integration
2.2 Methods of integration
3 Partial differentiation
3.1 Partial differentiation
3.2 Chain rule
3.3 Implicit differentiation
3.4 Differentiation of an integral wrt parameter in the integrand
4 First-order differential equations
4.1 The exponential function
4.2 Homogeneous linear ordinary differential equations
4.3 Forced (inhomogeneous) equations
4.3.1 Constant forcing
4.3.2 Eigenfunction forcing
4.4 Non-constant coefficients
4.5 Non-linear equations
4.5.1 Separable equations
4.5.2 Exact equations
4.6 Solution curves (trajectories)
4.7 Fixed (equilibrium) points and stability
4.7.1 Perturbation analysis
4.7.2 Autonomous systems
4.7.3 Logistic Equation
4.8 Discrete equations (Difference equations)
5 Second-order differential equations
5.1 Constant coefficients
5.1.1 Complementary functions
5.1.2 Second complementary function
5.1.3 Phase space
5.2 Particular integrals
5.2.1 Guessing
5.2.2 Resonance
5.2.3 Variation of parameters
5.3 Linear equidimensional equations
5.4 Difference equations
5.5 Transients and damping
5.6 Impulses and point forces
5.6.1 Dirac delta function
5.7 Heaviside step function
6 Series solutions
7 Directional derivative
7.1 Gradient vector
7.2 Stationary points
7.3 Taylor series for multi-variable functions
7.4 Classification of stationary points
7.5 Contours of f(x, y)
8 Systems of differential equations
8.1 Linear equations
8.2 Nonlinear dynamical systems
9 Partial differential equations (PDEs)
9.1 First-order wave equation
9.2 Second-order wave equation
9.3 The diffusion equation
0 Introduction
In this course, it is assumed that students already know how to do calculus.
While we will define all of calculus from scratch, it is there mostly to introduce
the big and small
o
notation which will be used extensively in this and future
courses (as well as for the sake of completeness). It is impossible for a person
who hasn’t seen calculus before to learn calculus from those few pages.
Calculus is often used to model physical systems. For example, if we know
that the force $F = m\ddot{x}$ on a particle at any time $t$ is given by $t^2 - 1$, then we
can write this as
\[ m\ddot{x} = t^2 - 1. \]
We can easily integrate this twice with respect to $t$, and find the position $x$ as a
function of time.
However, often the rules governing a physical system are not like this. Instead,
the force on the particle is more likely to depend on the position of the particle,
instead of what time it is. Hence the actual equation of motion might be
\[ m\ddot{x} = x^2 - 1. \]
This is an example of a differential equation. We are given an equation that
a function $x$ obeys, often involving derivatives of $x$, and we have to find all
functions that satisfy this equation (of course, the first equation is also a
differential equation, but a rather boring one).
A closely related notion is difference equations. These are discrete analogues
of differential equations. A famous example is the Fibonacci sequence, which
satisfies
\[ F_{n+2} - F_{n+1} - F_n = 0. \]
This specifies a relationship between terms in a sequence $(F_n)$, and we want to
find an explicit solution to this equation.
In this course, we will develop numerous techniques to solve different differ-
ential equations and difference equations. Often, this involves guessing of some
sort.
1 Differentiation
We will first quickly go through basic notions of differentiation and integration.
You should already be familiar with these from A levels or equivalent.
1.1 Differentiation
Definition (Derivative of function). The derivative of a function $f(x)$ with
respect to $x$, interpreted as the rate of change of $f(x)$ with $x$, is
\[ \frac{\mathrm{d}f}{\mathrm{d}x} = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}. \]
A function $f(x)$ is differentiable at $x$ if the limit exists (i.e. the left-hand and
right-hand limits are equal).
Example. $f(x) = |x|$ is not differentiable at $x = 0$ since
\[ \lim_{h \to 0^+} \frac{|h| - |0|}{h} = 1 \quad \text{and} \quad \lim_{h \to 0^-} \frac{|h| - |0|}{h} = -1. \]
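This one-sided behaviour is easy to see numerically. Below is a small sketch (my illustration, not part of the notes): the right and left difference quotients of $|x|$ at $x = 0$ settle at $1$ and $-1$ respectively, so the two-sided limit, and hence the derivative, does not exist there.

```python
# Illustration: one-sided difference quotients of f(x) = |x| at x = 0.
# The right quotient stays at 1.0 and the left quotient at -1.0, so the
# two-sided limit defining the derivative does not exist.

def one_sided_quotients(f, x, h):
    """Return the right and left difference quotients of f at x."""
    return (f(x + h) - f(x)) / h, (f(x - h) - f(x)) / (-h)

for h in [1e-1, 1e-3, 1e-6]:
    right, left = one_sided_quotients(abs, 0.0, h)
    print(h, right, left)  # right is 1.0, left is -1.0 for every h
```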
Notation. We write $\frac{\mathrm{d}f}{\mathrm{d}x} = f'(x) = \frac{\mathrm{d}}{\mathrm{d}x} f(x)$. Also,
\[ \frac{\mathrm{d}}{\mathrm{d}x}\left(\frac{\mathrm{d}}{\mathrm{d}x} f(x)\right) = \frac{\mathrm{d}^2}{\mathrm{d}x^2} f(x) = f''(x). \]
Note that the notation $f'$ represents the derivative with respect to the
argument. For example, $f'(2x) = \frac{\mathrm{d}f}{\mathrm{d}(2x)}$.
1.2 Small o and big O notations
Definition (O and o notations).
(i) "$f(x) = o(g(x))$ as $x \to x_0$" if $\lim_{x \to x_0} \frac{f(x)}{g(x)} = 0$. Intuitively, $f(x)$ is much
smaller than $g(x)$.
(ii) "$f(x) = O(g(x))$ as $x \to x_0$" if $\frac{f(x)}{g(x)}$ is bounded as $x \to x_0$. Intuitively,
$f(x)$ is about as big as $g(x)$.
Note that for $f(x) = O(g(x))$ to be true, $\lim_{x \to x_0} \frac{f(x)}{g(x)}$ need not exist.
Usually, $x_0$ is either 0 or infinity. Clearly, $f(x) = o(g(x))$ implies
$f(x) = O(g(x))$.
Note that this is an abuse of notation. We are not really saying that $f(x)$
is "equal" to $o(g(x))$, since $o(g(x))$ itself is not a function. Instead, $o(g(x))$
represents a class of functions (namely functions that satisfy the property above),
and we are saying that $f$ is in this class. Technically, a better notation might
be $f(x) \in o(g(x))$, but in practice, writing $f(x) = o(g(x))$ is more common and
more convenient.
Example.
– $x = o(\sqrt{x})$ as $x \to 0$ and $\sqrt{x} = o(x)$ as $x \to \infty$.
– $\sin 2x = O(x)$ as $x \to 0$ since $\sin \theta \approx \theta$ for small $\theta$.
– $\sin 2x = O(1)$ as $x \to \infty$ even though the limit does not exist.
This notation will frequently be used in calculus. For example, if we want to
ignore all terms second order in $x$ in an expression, we can write out the first
order terms and then append $+ O(x^2)$. In particular, we can use it to characterize
derivatives in a different way.
Proposition.
\[ f(x_0 + h) = f(x_0) + f'(x_0) h + o(h). \]
Proof. We have
\[ f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} + \frac{o(h)}{h} \]
by the definition of the derivative and the small $o$ notation. The result follows.
1.3 Methods of differentiation
Theorem (Chain rule). Given $f(x) = F(g(x))$, then
\[ \frac{\mathrm{d}f}{\mathrm{d}x} = \frac{\mathrm{d}F}{\mathrm{d}g} \frac{\mathrm{d}g}{\mathrm{d}x}. \]
Proof. Assuming that $\frac{\mathrm{d}g}{\mathrm{d}x}$ exists and is therefore finite, we have
\begin{align*}
\frac{\mathrm{d}f}{\mathrm{d}x} &= \lim_{h \to 0} \frac{F(g(x + h)) - F(g(x))}{h} \\
&= \lim_{h \to 0} \frac{F[g(x) + h g'(x) + o(h)] - F(g(x))}{h} \\
&= \lim_{h \to 0} \frac{F(g(x)) + (h g'(x) + o(h)) F'(g(x)) + o(h g'(x) + o(h)) - F(g(x))}{h} \\
&= \lim_{h \to 0} \left( g'(x) F'(g(x)) + \frac{o(h)}{h} \right) \\
&= g'(x) F'(g(x)) = \frac{\mathrm{d}F}{\mathrm{d}g} \frac{\mathrm{d}g}{\mathrm{d}x}.
\end{align*}
Theorem (Product Rule). Given $f(x) = u(x) v(x)$, then
\[ f'(x) = u'(x) v(x) + u(x) v'(x). \]
Theorem (Leibniz's Rule). Given $f = uv$, then
\[ f^{(n)}(x) = \sum_{r=0}^{n} \binom{n}{r} u^{(r)} v^{(n-r)}, \]
where $f^{(n)}$ is the $n$-th derivative of $f$.
1.4 Taylor’s theorem
Theorem (Taylor's Theorem). For $n$-times differentiable $f$, we have
\[ f(x + h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(x) + \cdots + \frac{h^n}{n!} f^{(n)}(x) + E_n, \]
where $E_n = o(h^n)$ as $h \to 0$. If $f^{(n+1)}$ exists, then $E_n = O(h^{n+1})$.
Note that this only gives a local approximation around $x$. It does not
necessarily tell us anything about values of $f$ far from $x$ (though sometimes it does).
An alternative form of the sum above is:
\[ f(x) = f(x_0) + (x - x_0) f'(x_0) + \cdots + \frac{(x - x_0)^n}{n!} f^{(n)}(x_0) + E_n. \]
When the limit as $n \to \infty$ is taken, the Taylor series of $f(x)$ about the point
$x = x_0$ is obtained.
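The error estimate $E_n = O(h^{n+1})$ can be observed numerically. The following sketch (my illustration, using $f(x) = e^x$ about $x = 0$) shows that the truncation error divided by $h^{n+1}$ stays roughly constant as $h$ shrinks.

```python
# Illustration: for f(x) = e^x about x = 0, the error after truncating the
# Taylor expansion at the h^n term behaves like O(h^(n+1)).
import math

def taylor_exp(h, n):
    """Partial sum sum_{k=0}^{n} h^k / k!, approximating e^h."""
    return sum(h**k / math.factorial(k) for k in range(n + 1))

n = 3
for h in [0.1, 0.01]:
    err = abs(math.exp(h) - taylor_exp(h, n))
    print(h, err / h**(n + 1))  # roughly constant (about 1/4!) as h shrinks
```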
1.5 L'Hôpital's rule
Theorem (L'Hôpital's Rule). Let $f(x)$ and $g(x)$ be differentiable at $x_0$, and
$\lim_{x \to x_0} f(x) = \lim_{x \to x_0} g(x) = 0$. Then
\[ \lim_{x \to x_0} \frac{f(x)}{g(x)} = \lim_{x \to x_0} \frac{f'(x)}{g'(x)}. \]
Proof. From Taylor's Theorem, we have $f(x) = f(x_0) + (x - x_0) f'(x_0) + o(x - x_0)$, and similarly for $g(x)$. Thus
\begin{align*}
\lim_{x \to x_0} \frac{f(x)}{g(x)} &= \lim_{x \to x_0} \frac{f(x_0) + (x - x_0) f'(x_0) + o(x - x_0)}{g(x_0) + (x - x_0) g'(x_0) + o(x - x_0)} \\
&= \lim_{x \to x_0} \frac{f'(x_0) + \frac{o(x - x_0)}{x - x_0}}{g'(x_0) + \frac{o(x - x_0)}{x - x_0}} \\
&= \lim_{x \to x_0} \frac{f'(x)}{g'(x)}.
\end{align*}
2 Integration
2.1 Integration
Definition (Integral). An integral is the limit of a sum, e.g.
\[ \int_a^b f(x)\, \mathrm{d}x = \lim_{\Delta x \to 0} \sum_{n=0}^{N} f(x_n)\, \Delta x. \]
For example, we can take $\Delta x = \frac{b - a}{N}$ and $x_n = a + n\,\Delta x$. Note that an integral
need not be defined with this particular $\Delta x$ and $x_n$. The term "integral" simply
refers to any limit of a sum (the usual integrals we use are a special kind known
as the Riemann integral, which we will study formally in Analysis I). Pictorially, we
have:
[Figure: the area under the graph of $f$ from $a$ to $b$, divided into strips at $x_1, x_2, x_3, \ldots, x_n, x_{n+1}, \ldots, b$.]
The area under the graph from $x_n$ to $x_{n+1}$ is $f(x_n)\,\Delta x + O(\Delta x^2)$. Provided
that $f$ is differentiable, the total area under the graph from $a$ to $b$ is
\[ \lim_{N \to \infty} \sum_{n=0}^{N-1} \left( f(x_n)\, \Delta x \right) + N \cdot O(\Delta x^2) = \lim_{N \to \infty} \sum_{n=0}^{N-1} \left( f(x_n)\, \Delta x \right) + O(\Delta x) = \int_a^b f(x)\, \mathrm{d}x. \]
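The $O(\Delta x)$ error of this sum is visible in practice. Here is a small sketch (my illustration, with $f(x) = x^2$ on $[0, 1]$, whose integral is $1/3$): the Riemann sum with $x_n = a + n\,\Delta x$ converges to the integral, with an error shrinking like $1/N$.

```python
# Illustration: the Riemann sum with left endpoints x_n = a + n*dx converges
# to the integral of f(x) = x^2 on [0, 1] (exact value 1/3) with O(dx) error.

def riemann_sum(f, a, b, N):
    dx = (b - a) / N
    return sum(f(a + n * dx) * dx for n in range(N))

exact = 1 / 3
for N in [10, 100, 1000]:
    approx = riemann_sum(lambda x: x * x, 0.0, 1.0, N)
    print(N, approx, exact - approx)  # error shrinks roughly like 1/N
```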
Theorem (Fundamental Theorem of Calculus). Let $F(x) = \int_a^x f(t)\, \mathrm{d}t$. Then
$F'(x) = f(x)$.
Proof.
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x} F(x) &= \lim_{h \to 0} \frac{1}{h} \left[ \int_a^{x+h} f(t)\, \mathrm{d}t - \int_a^x f(t)\, \mathrm{d}t \right] \\
&= \lim_{h \to 0} \frac{1}{h} \int_x^{x+h} f(t)\, \mathrm{d}t \\
&= \lim_{h \to 0} \frac{1}{h} \left[ f(x) h + O(h^2) \right] \\
&= f(x).
\end{align*}
Similarly, we have
\[ \frac{\mathrm{d}}{\mathrm{d}x} \int_x^b f(t)\, \mathrm{d}t = -f(x) \]
and
\[ \frac{\mathrm{d}}{\mathrm{d}x} \int_a^{g(x)} f(t)\, \mathrm{d}t = f(g(x))\, g'(x). \]
Notation. We write $\int f(x)\, \mathrm{d}x = \int^x f(t)\, \mathrm{d}t$, where the unspecified lower limit
gives rise to the constant of integration.
2.2 Methods of integration
Integration is substantially harder than differentiation. Given an expression,
we can usually differentiate it with ease using the product rule and chain rule
combined with a few standard derivatives. However, many seemingly innocent
expressions can turn out to be surprisingly difficult, if not impossible, to integrate.
Hence we have to come up with a lot of tricks to do integrals.
Example (Integration by substitution). Consider $\int \frac{1 - 2x}{\sqrt{x - x^2}}\, \mathrm{d}x$. Write $u = x - x^2$
and $\mathrm{d}u = (1 - 2x)\, \mathrm{d}x$. Then the integral becomes
\[ \int \frac{\mathrm{d}u}{\sqrt{u}} = 2\sqrt{u} + C = 2\sqrt{x - x^2} + C. \]
Trigonometric substitution can be performed with reference to the following
table: if the function in the middle column is found in the integrand, perform
the substitution in the right column and simplify using the identity in the left
column:

Useful identity                                 Part of integrand    Substitution
$\cos^2\theta + \sin^2\theta = 1$               $\sqrt{1 - x^2}$     $x = \sin\theta$
$1 + \tan^2\theta = \sec^2\theta$               $1 + x^2$            $x = \tan\theta$
$\cosh^2 u - \sinh^2 u = 1$                     $\sqrt{x^2 - 1}$     $x = \cosh u$
$\cosh^2 u - \sinh^2 u = 1$                     $1 + x^2$            $x = \sinh u$
$1 - \tanh^2 u = \operatorname{sech}^2 u$       $1 - x^2$            $x = \tanh u$
Example. Consider $\int \sqrt{2x - x^2}\, \mathrm{d}x = \int \sqrt{1 - (x - 1)^2}\, \mathrm{d}x$. Let $x - 1 = \sin\theta$
and thus $\mathrm{d}x = \cos\theta\, \mathrm{d}\theta$. The expression becomes
\begin{align*}
\int \cos^2\theta\, \mathrm{d}\theta &= \int \frac{\cos 2\theta + 1}{2}\, \mathrm{d}\theta \\
&= \frac{1}{4} \sin 2\theta + \frac{1}{2}\theta + C \\
&= \frac{1}{2} \sin^{-1}(x - 1) + \frac{1}{2}(x - 1)\sqrt{2x - x^2} + C.
\end{align*}
Theorem (Integration by parts).
\[ \int u v'\, \mathrm{d}x = uv - \int v u'\, \mathrm{d}x. \]
Proof. From the product rule, we have $(uv)' = u v' + u' v$. Integrating the whole
expression and rearranging gives the formula above.
Example. Consider $\int_0^\infty x e^{-x}\, \mathrm{d}x$. Let $u = x$ and $v' = e^{-x}$. Then $u' = 1$ and
$v = -e^{-x}$. We have
\begin{align*}
\int_0^\infty x e^{-x}\, \mathrm{d}x &= \left[ -x e^{-x} \right]_0^\infty + \int_0^\infty e^{-x}\, \mathrm{d}x \\
&= 0 + \left[ -e^{-x} \right]_0^\infty \\
&= 1.
\end{align*}
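The value $1$ obtained by parts can be confirmed by quadrature. Below is a sketch (my illustration; the cutoff $50$ and Simpson's rule are my choices, the tail beyond the cutoff being negligible):

```python
# Illustration: integrating x*e^(-x) numerically on [0, 50] with composite
# Simpson's rule recovers the value 1 found by integration by parts.
import math

def simpson(f, a, b, N):
    """Composite Simpson's rule with N (even) subintervals."""
    h = (b - a) / N
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, N // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, N // 2))
    return s * h / 3

value = simpson(lambda x: x * math.exp(-x), 0.0, 50.0, 1000)
print(value)  # close to 1
```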
Example (Integration by parts). Consider $\int \log x\, \mathrm{d}x$. Let $u = \log x$ and $v' = 1$.
Then $u' = \frac{1}{x}$ and $v = x$. So we have
\begin{align*}
\int \log x\, \mathrm{d}x &= x \log x - \int \mathrm{d}x \\
&= x \log x - x + C.
\end{align*}
3 Partial differentiation
3.1 Partial differentiation
So far, we have only considered functions of one variable. If we have a function
of multiple variables, say $f(x, y)$, we can either differentiate it with respect to $x$
or with respect to $y$.
Definition (Partial derivative). Given a function of several variables $f(x, y)$,
the partial derivative of $f$ with respect to $x$ is the rate of change of $f$ as $x$ varies,
keeping $y$ constant. It is given by
\[ \left. \frac{\partial f}{\partial x} \right|_y = \lim_{\delta x \to 0} \frac{f(x + \delta x, y) - f(x, y)}{\delta x}. \]
Example. Consider $f(x, y) = x^2 + y^3 + e^{xy^2}$. Computing the partial derivative
is equivalent to computing the regular derivative with the other variables treated
as constants, e.g.
\[ \left. \frac{\partial f}{\partial x} \right|_y = 2x + y^2 e^{xy^2}. \]
Second and mixed partial derivatives can also be computed:
\[ \frac{\partial^2 f}{\partial x^2} = 2 + y^4 e^{xy^2}, \qquad \frac{\partial^2 f}{\partial y\, \partial x} = \frac{\partial}{\partial y}\left( \frac{\partial f}{\partial x} \right) = 2y e^{xy^2} + 2xy^3 e^{xy^2}. \]
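Partial derivatives computed this way can be checked against finite differences. A small sketch (my illustration; the sample point is arbitrary): a central difference in $x$, holding $y$ fixed, agrees closely with the analytic $\partial f/\partial x$.

```python
# Illustration: finite-difference check of df/dx at constant y for
# f(x, y) = x^2 + y^3 + e^(x y^2).
import math

def f(x, y):
    return x**2 + y**3 + math.exp(x * y**2)

def fx(x, y):
    """Analytic partial derivative df/dx holding y constant."""
    return 2 * x + y**2 * math.exp(x * y**2)

x, y, d = 0.7, 0.3, 1e-6
numeric = (f(x + d, y) - f(x - d, y)) / (2 * d)  # central difference in x
print(numeric, fx(x, y))  # the two agree closely
```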
It is often cumbersome to write out the variables that are kept constant. If
all other variables are being held constant, we simply don't write them out, and
just say $\frac{\partial f}{\partial x}$.
Another convenient notation is
Notation.
\[ f_x = \frac{\partial f}{\partial x}, \qquad f_{xy} = \frac{\partial^2 f}{\partial y\, \partial x}. \]
It is important to know how the order works in the second case. The left
hand side should be interpreted as $(f_x)_y$. So we first differentiate with respect
to $x$, and then $y$. However, in most cases this is not important, since we have
Theorem. If $f$ has continuous second partial derivatives, then $f_{xy} = f_{yx}$.
We will not prove this statement and just assume it to be true (since this is
an applied course).
3.2 Chain rule
Consider an arbitrary displacement in any direction $(x, y) \to (x + \delta x, y + \delta y)$.
We have
\begin{align*}
\delta f &= f(x + \delta x, y + \delta y) - f(x, y) \\
&= f(x + \delta x, y + \delta y) - f(x + \delta x, y) + f(x + \delta x, y) - f(x, y) \\
&= f_y(x + \delta x, y)\, \delta y + o(\delta y) + f_x(x, y)\, \delta x + o(\delta x) \\
&= (f_y(x, y) + o(1))\, \delta y + o(\delta y) + f_x(x, y)\, \delta x + o(\delta x),
\end{align*}
so
\[ \delta f = \frac{\partial f}{\partial x}\, \delta x + \frac{\partial f}{\partial y}\, \delta y + o(\delta x, \delta y). \]
Taking the limit as $\delta x, \delta y \to 0$, we have
Theorem (Chain rule for partial derivatives).
\[ \mathrm{d}f = \frac{\partial f}{\partial x}\, \mathrm{d}x + \frac{\partial f}{\partial y}\, \mathrm{d}y. \]
Given this form, we can integrate the differentials to obtain the integral form:
\[ \int \mathrm{d}f = \int \frac{\partial f}{\partial x}\, \mathrm{d}x + \int \frac{\partial f}{\partial y}\, \mathrm{d}y, \]
or divide by another small quantity, e.g. to find the slope along the path
$(x(t), y(t))$, we can divide by $\mathrm{d}t$ to obtain
\[ \frac{\mathrm{d}f}{\mathrm{d}t} = \frac{\partial f}{\partial x} \frac{\mathrm{d}x}{\mathrm{d}t} + \frac{\partial f}{\partial y} \frac{\mathrm{d}y}{\mathrm{d}t}. \]
If we pick the parameter $t$ to be the arclength $s$, we have
\[ \frac{\mathrm{d}f}{\mathrm{d}s} = \frac{\partial f}{\partial x} \frac{\mathrm{d}x}{\mathrm{d}s} + \frac{\partial f}{\partial y} \frac{\mathrm{d}y}{\mathrm{d}s} = \left( \frac{\mathrm{d}x}{\mathrm{d}s}, \frac{\mathrm{d}y}{\mathrm{d}s} \right) \cdot \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y} \right) = \hat{\mathbf{s}} \cdot \nabla f, \]
which is known as the directional derivative (cf. Chapter 7).
Alternatively, the path may also be given by $y = y(x)$. So $f = f(x, y(x))$.
Then the slope along the path is
\[ \frac{\mathrm{d}f}{\mathrm{d}x} = \left. \frac{\partial f}{\partial x} \right|_y + \frac{\partial f}{\partial y} \frac{\mathrm{d}y}{\mathrm{d}x}. \]
The chain rule can also be used for the change of independent variables, e.g.
change to polar coordinates $x = x(r, \theta)$, $y = y(r, \theta)$. Then
\[ \left. \frac{\partial f}{\partial \theta} \right|_r = \left. \frac{\partial f}{\partial x} \right|_y \left. \frac{\partial x}{\partial \theta} \right|_r + \left. \frac{\partial f}{\partial y} \right|_x \left. \frac{\partial y}{\partial \theta} \right|_r. \]
3.3 Implicit differentiation
Consider the contour surface of a function $F(x, y, z)$ given by $F(x, y, z) = \text{const}$.
This implicitly defines $z = z(x, y)$. e.g. if $F(x, y, z) = xy^2 + yz^2 + z^5 x = 5$,
then we can have $x = \frac{5 - yz^2}{y^2 + z^5}$. Even though $z(x, y)$ cannot be found explicitly
(it involves solving a quintic equation), the derivatives of $z(x, y)$ can still be found
by differentiating $F(x, y, z) = \text{const}$ w.r.t. $x$ holding $y$ constant. e.g.
\[ \frac{\partial}{\partial x}(xy^2 + yz^2 + z^5 x) = 0 \implies y^2 + 2yz \frac{\partial z}{\partial x} + z^5 + 5z^4 x \frac{\partial z}{\partial x} = 0, \]
so
\[ \frac{\partial z}{\partial x} = -\frac{y^2 + z^5}{2yz + 5z^4 x}. \]
In general, we can derive the following formula:
Theorem (Multi-variable implicit differentiation). Given an equation
\[ F(x, y, z) = c \]
for some constant $c$, we have
\[ \left. \frac{\partial z}{\partial x} \right|_y = -\frac{\partial F / \partial x}{\partial F / \partial z}. \]
Proof.
\[ \mathrm{d}F = \frac{\partial F}{\partial x}\, \mathrm{d}x + \frac{\partial F}{\partial y}\, \mathrm{d}y + \frac{\partial F}{\partial z}\, \mathrm{d}z \]
\begin{align*}
\left. \frac{\partial F}{\partial x} \right|_y &= \frac{\partial F}{\partial x} \left. \frac{\partial x}{\partial x} \right|_y + \frac{\partial F}{\partial y} \left. \frac{\partial y}{\partial x} \right|_y + \frac{\partial F}{\partial z} \left. \frac{\partial z}{\partial x} \right|_y = 0 \\
\frac{\partial F}{\partial x} + \frac{\partial F}{\partial z} \left. \frac{\partial z}{\partial x} \right|_y &= 0 \\
\left. \frac{\partial z}{\partial x} \right|_y &= -\frac{\partial F / \partial x}{\partial F / \partial z}.
\end{align*}
3.4 Differentiation of an integral wrt parameter in the integrand
Consider a family of functions $f(x, c)$. Define $I(b, c) = \int_0^b f(x, c)\, \mathrm{d}x$. Then by
the fundamental theorem of calculus, we have $\frac{\partial I}{\partial b} = f(b, c)$. On the other hand,
we have
\begin{align*}
\frac{\partial I}{\partial c} &= \lim_{\delta c \to 0} \frac{1}{\delta c} \left[ \int_0^b f(x, c + \delta c)\, \mathrm{d}x - \int_0^b f(x, c)\, \mathrm{d}x \right] \\
&= \lim_{\delta c \to 0} \int_0^b \frac{f(x, c + \delta c) - f(x, c)}{\delta c}\, \mathrm{d}x \\
&= \int_0^b \lim_{\delta c \to 0} \frac{f(x, c + \delta c) - f(x, c)}{\delta c}\, \mathrm{d}x \\
&= \int_0^b \frac{\partial f}{\partial c}\, \mathrm{d}x.
\end{align*}
In general, if $I(b(x), c(x)) = \int_0^{b(x)} f(y, c(x))\, \mathrm{d}y$, then by the chain rule, we have
\[ \frac{\mathrm{d}I}{\mathrm{d}x} = \frac{\partial I}{\partial b} \frac{\mathrm{d}b}{\mathrm{d}x} + \frac{\partial I}{\partial c} \frac{\mathrm{d}c}{\mathrm{d}x} = f(b, c)\, b'(x) + c'(x) \int_0^b \frac{\partial f}{\partial c}\, \mathrm{d}y. \]
So we obtain
Theorem (Differentiation under the integral sign).
\[ \frac{\mathrm{d}}{\mathrm{d}x} \int_0^{b(x)} f(y, c(x))\, \mathrm{d}y = f(b, c)\, b'(x) + c'(x) \int_0^b \frac{\partial f}{\partial c}\, \mathrm{d}y. \]
This is sometimes a useful technique that allows us to perform certain
otherwise difficult integrals, as you will see in the example sheets. However, here
we will only have time for a boring example.
Example. Let $I = \int_0^1 e^{-\lambda x^2}\, \mathrm{d}x$. Then
\[ \frac{\mathrm{d}I}{\mathrm{d}\lambda} = -\int_0^1 x^2 e^{-\lambda x^2}\, \mathrm{d}x. \]
If instead $I = \int_0^\lambda e^{-\lambda x^2}\, \mathrm{d}x$, then
\[ \frac{\mathrm{d}I}{\mathrm{d}\lambda} = e^{-\lambda^3} - \int_0^\lambda x^2 e^{-\lambda x^2}\, \mathrm{d}x. \]
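The second formula can be checked numerically. The sketch below (my illustration; the value $\lambda = 1.3$ and Simpson's rule are arbitrary choices) compares a finite-difference derivative of $I(\lambda) = \int_0^\lambda e^{-\lambda x^2}\, \mathrm{d}x$ against the formula $e^{-\lambda^3} - \int_0^\lambda x^2 e^{-\lambda x^2}\, \mathrm{d}x$.

```python
# Illustration: check dI/dlambda for I(lambda) = int_0^lambda e^(-lambda x^2) dx
# against the differentiation-under-the-integral-sign formula.
import math

def simpson(f, a, b, N=2000):
    """Composite Simpson's rule with N (even) subintervals."""
    h = (b - a) / N
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, N // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, N // 2))
    return s * h / 3

def I(lam):
    return simpson(lambda x: math.exp(-lam * x**2), 0.0, lam)

lam, d = 1.3, 1e-5
numeric = (I(lam + d) - I(lam - d)) / (2 * d)      # finite-difference dI/dlambda
formula = math.exp(-lam**3) - simpson(lambda x: x**2 * math.exp(-lam * x**2), 0.0, lam)
print(numeric, formula)  # the two agree
```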
4 First-order differential equations
A differential equation is an equation that involves derivatives, such as
$x^2 \frac{\mathrm{d}y}{\mathrm{d}x} + 2y = 0$. Unlike regular equations, where the solution is a number, the
solution to a differential equation is a function that satisfies the equation. In this
chapter, we will look at first-order differential equations, where only first
derivatives are involved.
4.1 The exponential function
Often, the solutions to differential equations involve the exponential function.
Consider a function $f(x) = a^x$, where $a > 0$ is constant.
[Figure: graph of $y = a^x$, passing through $(0, 1)$.]
The derivative of this function is given by
\[ \frac{\mathrm{d}f}{\mathrm{d}x} = \lim_{h \to 0} \frac{a^{x+h} - a^x}{h} = a^x \lim_{h \to 0} \frac{a^h - 1}{h} = \lambda a^x = \lambda f(x), \]
where $\lambda = \lim_{h \to 0} \frac{a^h - 1}{h} = f'(0) = \text{const}$. So the derivative of an exponential
function is a multiple of the original function. In particular,
Definition (Exponential function). $\exp(x) = e^x$ is the unique function $f$
satisfying $f'(x) = f(x)$ and $f(0) = 1$.
We write the inverse function as $\ln x$ or $\log x$.
Then if $y = a^x = e^{x \ln a}$, then $y' = e^{x \ln a} \ln a = a^x \ln a$. So $\lambda = \ln a$.
Using this property, we find that the value of $e$ is given by
\[ e = \lim_{k \to \infty} \left( 1 + \frac{1}{k} \right)^k \approx 2.718281828\ldots. \]
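This limit converges slowly but visibly. A quick sketch (my illustration):

```python
# Illustration: (1 + 1/k)^k creeps up towards e = 2.718281828... as k grows.
import math

for k in [10, 1000, 100000]:
    print(k, (1 + 1 / k)**k)  # increasing towards math.e

approx = (1 + 1 / 10**7)**(10**7)
print(approx, math.e)  # agree to several decimal places
```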
The importance of the exponential function lies in its being an eigenfunction of
the differential operator.
Definition (Eigenfunction). An eigenfunction under the differential operator
is a function whose functional form is unchanged by the operator. Only its
magnitude is changed, i.e.
\[ \frac{\mathrm{d}f}{\mathrm{d}x} = \lambda f. \]
Example. $e^{mx}$ is an eigenfunction since $\frac{\mathrm{d}}{\mathrm{d}x} e^{mx} = m e^{mx}$.
4.2 Homogeneous linear ordinary differential equations
Before we start, we will first define the terms used in the title. These are useful
criteria for categorizing differential equations.
Definition (Linear differential equation). A differential equation is linear if the
dependent variable ($y$, $y'$, $y''$ etc.) appears only linearly.
Definition (Homogeneous differential equation). A differential equation is
homogeneous if y = 0 is a solution.
Definition (Differential equation with constant coefficients). A differential
equation has constant coefficients if the independent variable
x
does not appear
explicitly.
Definition (First-order differential equation). A differential equation is first-
order if only first derivatives are involved.
Theorem. Any linear, homogeneous, ordinary differential equation with constant
coefficients has solutions of the form $e^{mx}$.
This theorem is evident when we consider an example.
Example. Given $5\frac{\mathrm{d}y}{\mathrm{d}x} - 3y = 0$, try $y = e^{mx}$. Then $\frac{\mathrm{d}y}{\mathrm{d}x} = m e^{mx}$, and
\[ 5m e^{mx} - 3e^{mx} = 0. \]
Since this must hold for all values of $x$, there exists some value $x$ for which
$e^{mx} \neq 0$ and we can divide by $e^{mx}$ (note that in this case this justification is not
necessary, because $e^{mx}$ is never equal to 0. However, we should justify as above
if we are dividing by, say, $x^m$). Thus $5m - 3 = 0$ and $m = 3/5$. So $y = e^{3x/5}$ is a
solution.
Because the equation is linear and homogeneous, any multiple of a solution
is also a solution. Therefore $y = A e^{3x/5}$ is a solution for any value of $A$.
But is this the most general form of the solution? One can show that an
$n$th-order linear differential equation has $n$ and only $n$ independent solutions. So
$y = A e^{3x/5}$ is indeed the most general solution.
We can determine $A$ by applying a given boundary condition.
Discrete equations
Suppose that we are given the equation $5y' - 3y = 0$ with boundary condition
$y = y_0$ at $x = 0$. This gives a unique function $y$. We can approximate this by
considering discrete steps of length $h$ between $x_n$ and $x_{n+1}$. (Using the simple
Euler numerical scheme,) we can approximate the equation by
\[ 5\, \frac{y_{n+1} - y_n}{h} - 3y_n \approx 0. \]
Rearranging the terms, we have $y_{n+1} \approx \left(1 + \frac{3}{5}h\right) y_n$. Applying the relation
successively, we have
\begin{align*}
y_n &= \left(1 + \frac{3}{5}h\right) y_{n-1} \\
&= \left(1 + \frac{3}{5}h\right)^2 y_{n-2} \\
&= \left(1 + \frac{3}{5}h\right)^n y_0.
\end{align*}
For each given value of $x$, choose $h = x/n$, so
\[ y_n = y_0 \left( 1 + \frac{3}{5} \frac{x}{n} \right)^n. \]
Taking the limit as $n \to \infty$, we have
\[ y(x) = \lim_{n \to \infty} y_0 \left( 1 + \frac{3x/5}{n} \right)^n = y_0 e^{3x/5}, \]
in agreement with the solution of the differential equation.
[Figure: the solution of the differential equation, together with the discrete approximation at $x_0, x_1, x_2, x_3$.]
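The discrete scheme above can be run directly. Here is a short sketch (the values of $y_0$ and $x$ are my own choices): iterating $y_{n+1} = (1 + \tfrac{3}{5}h) y_n$ approaches $y_0 e^{3x/5}$ as the number of steps grows.

```python
# Illustration: the simple Euler scheme for 5y' - 3y = 0 converges to the
# exact solution y0 * e^(3x/5) as the step size h = x/n shrinks.
import math

def euler(y0, x, n):
    """Apply y_{n+1} = (1 + 3h/5) y_n for n steps of size h = x/n."""
    h = x / n
    y = y0
    for _ in range(n):
        y *= 1 + 3 * h / 5
    return y

y0, x = 1.0, 2.0
exact = y0 * math.exp(3 * x / 5)
for n in [10, 100, 10000]:
    print(n, euler(y0, x, n), exact)  # approaches the exact value
```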
Series solution
We can also try to find a solution in the form of a Taylor series $y = \sum_{n=0}^\infty a_n x^n$.
We have $y' = \sum a_n n x^{n-1}$. Substituting these into our equation:
\begin{align*}
5y' - 3y &= 0 \\
5(xy') - 3x(y) &= 0 \\
\sum a_n (5n - 3x) x^n &= 0.
\end{align*}
Consider the coefficient of $x^n$: $5n a_n - 3a_{n-1} = 0$. Since this holds for all values
of $n$, when $n = 0$, we get $0 \cdot a_0 = 0$. This tells us that $a_0$ is arbitrary. If $n > 0$, then
\[ a_n = \frac{3}{5n} a_{n-1} = \frac{3^2}{5^2} \frac{1}{n(n-1)} a_{n-2} = \cdots = \left( \frac{3}{5} \right)^n \frac{1}{n!} a_0. \]
Therefore we have
\[ y = a_0 \sum_{n=0}^\infty \left( \frac{3x}{5} \right)^n \frac{1}{n!} \quad \left[ = a_0 e^{3x/5} \right]. \]
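The recurrence can be coded directly. A quick sketch (my illustration, with $a_0 = 1$ and an arbitrary sample point): summing $a_n x^n$ with $a_n = \frac{3}{5n} a_{n-1}$ reproduces $e^{3x/5}$.

```python
# Illustration: summing the series with a_n = (3 / 5n) a_{n-1}, a_0 = 1,
# reproduces the closed form e^(3x/5).
import math

def series_solution(x, a0=1.0, terms=30):
    total, a = a0, a0
    for n in range(1, terms):
        a *= 3 / (5 * n)          # the recurrence a_n = (3 / 5n) a_{n-1}
        total += a * x**n
    return total

x = 1.5
print(series_solution(x), math.exp(3 * x / 5))  # essentially equal
```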
4.3 Forced (inhomogeneous) equations
Recall that a homogeneous equation is an equation like $5y' - 3y = 0$, with no
$x$ or constant terms floating around. A forced, or inhomogeneous, equation is
one that is not homogeneous. For example, $5y' - 3y = 10$ or $5y' - 3y = x^2$ are
forced equations. We call the "extra" terms 10 and $x^2$ the forcing terms.
4.3.1 Constant forcing
Example. Consider $5y' - 3y = 10$. We can spot that there is an equilibrium
(constant) solution $y = y_p = -\frac{10}{3}$ with $y_p' = 0$.
The particular solution $y_p$ is a solution of the ODE. Now suppose the general
solution is $y = y_p + y_c$. We see that $5y_c' - 3y_c = 0$. So $y_c$ satisfies the homogeneous
equation we already solved, and
\[ y = -\frac{10}{3} + A e^{3x/5}. \]
Note that any boundary conditions to determine $A$ must be applied to the full
solution $y$ and not the complementary function $y_c$.
This is the general method of solving forced equations. We first find one
particular solution to the problem, often via educated guesses. Then we solve
the homogeneous equation to get a general complementary solution. Then the
general solution to the full equation is the sum of the particular solution and
the general complementary solution.
4.3.2 Eigenfunction forcing
This is the case when the forcing term is an eigenfunction of the differential
operator.
Example. In a radioactive rock, isotope A decays into isotope B at a rate
proportional to the number
a
of remaining nuclei A, and B also decays at a rate
proportional to the number b of remaining nuclei B. Determine b(t).
We have
\[ \frac{\mathrm{d}a}{\mathrm{d}t} = -k_a a, \qquad \frac{\mathrm{d}b}{\mathrm{d}t} = k_a a - k_b b. \]
Solving the first equation, we obtain $a = a_0 e^{-k_a t}$. Then we have
\[ \frac{\mathrm{d}b}{\mathrm{d}t} + k_b b = k_a a_0 e^{-k_a t}. \]
We usually put the variables involving $b$ on the left hand side and the others on
the right. The right-hand term $k_a a_0 e^{-k_a t}$ is the forcing term.
Note that the forcing term is an eigenfunction of the differential operator
on the LHS. So that suggests that we can try a particular integral $b_p = C e^{-k_a t}$.
Substituting it in, we obtain
\[ -k_a C + k_b C = k_a a_0 \implies C = \frac{k_a}{k_b - k_a} a_0. \]
Then write $b = b_p + b_c$. We get $b_c' + k_b b_c = 0$ and $b_c = D e^{-k_b t}$. All together, we
have the general solution
\[ b = \frac{k_a}{k_b - k_a} a_0 e^{-k_a t} + D e^{-k_b t}. \]
Assume the following boundary condition: $b = 0$ when $t = 0$, in which case we
can find
\[ b = \frac{k_a}{k_b - k_a} a_0 \left( e^{-k_a t} - e^{-k_b t} \right). \]
[Figure: $a$ and $b$ plotted against time.]
The isotope ratio is
\[ \frac{b}{a} = \frac{k_a}{k_b - k_a} \left[ 1 - e^{-(k_b - k_a)t} \right]. \]
So given the current ratio $b/a$, with laboratory determined rates $k_a$ and $k_b$, we
can determine the value of $t$, i.e. the age of the rock.
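The solution for $b(t)$ can be verified numerically. A sketch (my illustration; the values of $k_a$, $k_b$, $a_0$ are made up): $b(t)$ satisfies both the boundary condition $b(0) = 0$ and the forced equation $\dot{b} + k_b b = k_a a_0 e^{-k_a t}$.

```python
# Illustration: check that b(t) = (k_a / (k_b - k_a)) a0 (e^(-k_a t) - e^(-k_b t))
# satisfies db/dt + k_b b = k_a a0 e^(-k_a t), with b(0) = 0. The constants
# k_a, k_b, a0 below are arbitrary illustrative values.
import math

ka, kb, a0 = 0.5, 1.2, 10.0

def b(t):
    return ka / (kb - ka) * a0 * (math.exp(-ka * t) - math.exp(-kb * t))

t, d = 2.0, 1e-6
dbdt = (b(t + d) - b(t - d)) / (2 * d)   # numerical derivative of b
lhs = dbdt + kb * b(t)
rhs = ka * a0 * math.exp(-ka * t)
print(b(0.0), lhs, rhs)  # b(0) = 0 and lhs matches rhs
```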
4.4 Non-constant coefficients
Consider the general form of equation
\[ a(x) y' + b(x) y = c(x). \]
Divide by $a(x)$ to get the standard form
\[ y' + p(x) y = f(x). \]
We solve this by multiplying by an integrating factor $\mu(x)$ to obtain
\[ (\mu) y' + (\mu p) y = \mu f. \]
We want to choose a $\mu$ such that the left hand side is equal to $(\mu y)'$. By the
product rule, we want $\mu p = \mu'$, i.e.
\begin{align*}
p &= \frac{1}{\mu} \frac{\mathrm{d}\mu}{\mathrm{d}x} \\
\int p\, \mathrm{d}x &= \int \frac{1}{\mu} \frac{\mathrm{d}\mu}{\mathrm{d}x}\, \mathrm{d}x = \int \frac{1}{\mu}\, \mathrm{d}\mu = \ln \mu\ (+C) \\
\mu &= \exp\left( \int p\, \mathrm{d}x \right).
\end{align*}
Then by construction, we have $(\mu y)' = \mu f$ and thus
\[ y = \frac{\int \mu f\, \mathrm{d}x}{\mu}, \quad \text{where } \mu = \exp\left( \int p\, \mathrm{d}x \right). \]
Example. Consider $xy' + (1 - x)y = 1$. To obtain it in standard form, we have
$y' + \frac{1 - x}{x} y = \frac{1}{x}$. We have $\mu = \exp\left( \int \left( \frac{1}{x} - 1 \right) \mathrm{d}x \right) = e^{\ln x - x} = x e^{-x}$. Then
\[ y = \frac{\int x e^{-x} \cdot \frac{1}{x}\, \mathrm{d}x}{x e^{-x}} = \frac{-e^{-x} + C}{x e^{-x}} = -\frac{1}{x} + \frac{C}{x} e^x. \]
Suppose that we have the boundary condition that $y$ is finite at $x = 0$. Since we have
$y = \frac{C e^x - 1}{x}$, we have to ensure that $C e^x - 1 \to 0$ as $x \to 0$. Thus $C = 1$, and by
L'Hôpital's rule, $y \to 1$ as $x \to 0$.
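The resulting solution is easy to spot-check. A sketch (my illustration; the sample point is arbitrary): with $C = 1$, i.e. $y = (e^x - 1)/x$, the residual of $x y' + (1 - x) y - 1$ vanishes.

```python
# Illustration: verify that y = (e^x - 1)/x solves x y' + (1 - x) y = 1.
import math

def y(x):
    return (math.exp(x) - 1) / x

x, d = 0.8, 1e-6
yp = (y(x + d) - y(x - d)) / (2 * d)       # numerical y'
residual = x * yp + (1 - x) * y(x) - 1
print(y(x), residual)  # the residual is essentially zero
```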
4.5 Non-linear equations
In general, a first-order equation has the form
\[ Q(x, y) \frac{\mathrm{d}y}{\mathrm{d}x} + P(x, y) = 0 \]
(this is not exactly the most general form, since theoretically we can have powers
of $y'$ or even other complicated functions of $y'$, but we will not consider that
case here).
4.5.1 Separable equations
Definition (Separable equation). A first-order differential equation is separable
if it can be manipulated into the following form:
\[ q(y)\, \mathrm{d}y = p(x)\, \mathrm{d}x, \]
in which case the solution can be found by integration:
\[ \int q(y)\, \mathrm{d}y = \int p(x)\, \mathrm{d}x. \]
This is the easy kind.
Example.
\begin{align*}
(x^2 y - 3y) \frac{\mathrm{d}y}{\mathrm{d}x} - 2xy^2 &= 4x \\
\frac{\mathrm{d}y}{\mathrm{d}x} &= \frac{4x + 2xy^2}{x^2 y - 3y} = \frac{2x(2 + y^2)}{y(x^2 - 3)} \\
\frac{y}{2 + y^2}\, \mathrm{d}y &= \frac{2x}{x^2 - 3}\, \mathrm{d}x \\
\int \frac{y}{2 + y^2}\, \mathrm{d}y &= \int \frac{2x}{x^2 - 3}\, \mathrm{d}x \\
\frac{1}{2} \ln(2 + y^2) &= \ln(x^2 - 3) + C \\
\ln \sqrt{2 + y^2} &= \ln A(x^2 - 3) \\
\sqrt{y^2 + 2} &= A(x^2 - 3).
\end{align*}
4.5.2 Exact equations
Definition (Exact equation). $Q(x, y) \frac{\mathrm{d}y}{\mathrm{d}x} + P(x, y) = 0$ is an exact equation iff
the differential form $Q(x, y)\, \mathrm{d}y + P(x, y)\, \mathrm{d}x$ is exact, i.e. there exists a function
$f(x, y)$ for which
\[ \mathrm{d}f = Q(x, y)\, \mathrm{d}y + P(x, y)\, \mathrm{d}x. \]
If $P(x, y)\, \mathrm{d}x + Q(x, y)\, \mathrm{d}y$ is an exact differential of $f$, then $\mathrm{d}f = P(x, y)\, \mathrm{d}x + Q(x, y)\, \mathrm{d}y$. But by the chain rule, $\mathrm{d}f = \frac{\partial f}{\partial x}\, \mathrm{d}x + \frac{\partial f}{\partial y}\, \mathrm{d}y$, and this equality holds
for any displacements $\mathrm{d}x$, $\mathrm{d}y$. So
\[ \frac{\partial f}{\partial x} = P, \qquad \frac{\partial f}{\partial y} = Q. \]
From this we have
\[ \frac{\partial^2 f}{\partial y\, \partial x} = \frac{\partial P}{\partial y}, \qquad \frac{\partial^2 f}{\partial x\, \partial y} = \frac{\partial Q}{\partial x}. \]
We know that the two mixed 2nd derivatives are equal. So
\[ \frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}. \]
The converse is not necessarily true. Even if this equation holds, the differential
need not be exact. However, it is true if the domain is simply-connected.
Definition (Simply-connected domain). A domain
D
is simply-connected if it
is connected and any closed curve in
D
can be shrunk to a point in
D
without
leaving D.
Example. A disc in 2D is simply-connected. A disc with a “hole” in the middle
is not simply-connected because a loop around the hole cannot be shrunk into a
point. Similarly, a sphere in 3D is simply-connected but a torus is not.
Theorem. If $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x}$ throughout a simply-connected domain $D$, then
$P\, \mathrm{d}x + Q\, \mathrm{d}y$ is an exact differential of a single-valued function in $D$.
If the equation is exact, then the solution is simply $f = \text{constant}$, and we
can find $f$ by integrating $\frac{\partial f}{\partial x} = P$ and $\frac{\partial f}{\partial y} = Q$.
Example.
\[ 6y(y - x) \frac{\mathrm{d}y}{\mathrm{d}x} + (2x - 3y^2) = 0. \]
We have
\[ P = 2x - 3y^2, \qquad Q = 6y(y - x). \]
Then $\frac{\partial P}{\partial y} = \frac{\partial Q}{\partial x} = -6y$. So the differential form is exact. We now have
\[ \frac{\partial f}{\partial x} = 2x - 3y^2, \qquad \frac{\partial f}{\partial y} = 6y^2 - 6xy. \]
Integrating the first equation, we have
\[ f = x^2 - 3xy^2 + h(y). \]
Note that since it was a partial derivative w.r.t. $x$ holding $y$ constant, the
"constant" term can be any function of $y$. Differentiating the derived $f$ w.r.t. $y$,
we have
\[ \frac{\partial f}{\partial y} = -6xy + h'(y). \]
Thus $h'(y) = 6y^2$ and $h(y) = 2y^3 + C$, and
\[ f = x^2 - 3xy^2 + 2y^3 + C. \]
Since the original equation was $\mathrm{d}f = 0$, we have $f = \text{constant}$. Thus the final
solution is
\[ x^2 - 3xy^2 + 2y^3 = C. \]
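Both the exactness condition and the recovered $f$ can be checked with finite differences. A sketch (my illustration; the sample point is arbitrary):

```python
# Illustration: for the example above, check numerically that dP/dy = dQ/dx
# and that f = x^2 - 3xy^2 + 2y^3 has partial derivatives P and Q.

def P(x, y): return 2 * x - 3 * y**2
def Q(x, y): return 6 * y * (y - x)
def f(x, y): return x**2 - 3 * x * y**2 + 2 * y**3

x0, y0, d = 1.1, 0.6, 1e-6
Py = (P(x0, y0 + d) - P(x0, y0 - d)) / (2 * d)   # dP/dy
Qx = (Q(x0 + d, y0) - Q(x0 - d, y0)) / (2 * d)   # dQ/dx
fx = (f(x0 + d, y0) - f(x0 - d, y0)) / (2 * d)   # df/dx
fy = (f(x0, y0 + d) - f(x0, y0 - d)) / (2 * d)   # df/dy
print(Py, Qx)                            # both equal -6y
print(fx - P(x0, y0), fy - Q(x0, y0))    # both essentially zero
```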
4.6 Solution curves (trajectories)
Example. Consider the first-order equation
\[ \frac{\mathrm{d}y}{\mathrm{d}t} = t(1 - y^2). \]
We can solve it to obtain
\begin{align*}
\frac{\mathrm{d}y}{1 - y^2} &= t\, \mathrm{d}t \\
\frac{1}{2} \ln \frac{1 + y}{1 - y} &= \frac{1}{2} t^2 + C \\
\frac{1 + y}{1 - y} &= A e^{t^2} \\
y &= \frac{A - e^{-t^2}}{A + e^{-t^2}}.
\end{align*}
We can plot the solution for different values of $A$ and obtain the following graph:
[Figure: solution curves in the $(t, y)$ plane, with horizontal lines at $y = \pm 1$.]
But can we understand the nature of the family of solutions without solving
the equation? Can we sketch the graph without solving it?
We can spot that $y = \pm 1$ are two constant solutions, and we can plot them
first. We also note (and mark on our graph) that $y' = 0$ at $t = 0$ for any $y$.
Then notice that for $t > 0$, $y' > 0$ if $-1 < y < 1$. Otherwise, $y' < 0$.
Now we can find isoclines, which are curves along which $\frac{\mathrm{d}y}{\mathrm{d}t}$ (i.e. $f$) is constant:
$t(1 - y^2) = D$ for some constant $D$. Then $y^2 = 1 - D/t$. After marking a few
isoclines, we can sketch the approximate form of our solution:
[Figure: sketch of the solution curves obtained from the isoclines, with $y = \pm 1$ marked.]
In general, we sketch the graph of a differential equation
\[ \frac{\mathrm{d}y}{\mathrm{d}t} = f(t, y) \]
by locating constant solutions (and determining stability: see below) and iso-
clines.
4.7 Fixed (equilibrium) points and stability
Definition (Equilibrium/fixed point). An equilibrium point or a fixed point of
a differential equation is a constant solution $y = c$. This corresponds to $\frac{\mathrm{d}y}{\mathrm{d}t} = 0$
for all $t$.
These are usually the easiest solutions to find, since we are usually given an
expression for $\frac{\mathrm{d}y}{\mathrm{d}t}$ and we can simply equate it to zero.
Definition (Stability of fixed point). An equilibrium is stable if whenever $y$
is deviated slightly from the constant solution $y = c$, $y \to c$ as $t \to \infty$. An
equilibrium is unstable if the deviation grows as $t \to \infty$.
The objective of this section is to study how to find equilibrium points and
their stability.
Example. Referring to the differential equation above ($\dot{y} = t(1 - y^2)$), we see
that the solutions converge towards $y = 1$, and this is a stable fixed point. They
diverge from $y = -1$, and this is an unstable fixed point.
4.7.1 Perturbation analysis
Perturbation analysis is used to determine stability. Suppose $y = a$ is a fixed
point of $\frac{\mathrm{d}y}{\mathrm{d}t} = f(y, t)$, so $f(a, t) = 0$. Write $y = a + \varepsilon(t)$, where $\varepsilon(t)$ is a small
perturbation from $y = a$. We will later assume that $\varepsilon$ is arbitrarily small. Putting
this into the differential equation, we have
\begin{align*}
\frac{\mathrm{d}\varepsilon}{\mathrm{d}t} = \frac{\mathrm{d}y}{\mathrm{d}t} &= f(a + \varepsilon, t) \\
&= f(a, t) + \varepsilon \frac{\partial f}{\partial y}(a, t) + O(\varepsilon^2) \\
&= \varepsilon \frac{\partial f}{\partial y}(a, t) + O(\varepsilon^2).
\end{align*}
Note that this is a Taylor expansion valid when $\varepsilon \ll 1$. Thus $O(\varepsilon^2)$ can be
neglected and
\[ \frac{\mathrm{d}\varepsilon}{\mathrm{d}t} = \varepsilon \frac{\partial f}{\partial y}. \]
We can use this to study how the perturbation grows with time.
This approximation is called a linearization of the differential equation.
Example. Using the example $\dot{y} = t(1 - y^2)$ above, we have
\[ \frac{\partial f}{\partial y} = -2yt = \begin{cases} -2t & \text{at } y = 1 \\ 2t & \text{at } y = -1 \end{cases}. \]
At $y = 1$, $\dot{\varepsilon} = -2t\varepsilon$ and $\varepsilon = \varepsilon_0 e^{-t^2}$. Since $\varepsilon \to 0$ as $t \to \infty$, $y \to 1$. So $y = 1$ is a
stable fixed point.
On the other hand, if we consider $y = -1$, then $\dot{\varepsilon} = 2t\varepsilon$ and $\varepsilon = \varepsilon_0 e^{t^2}$. Since
$\varepsilon \to \infty$ as $t \to \infty$, $y = -1$ is unstable.
Technically "$\varepsilon \to \infty$" is not a correct statement, since the approximation used
is only valid for small $\varepsilon$. But we can be sure that the perturbation grows (even
if not to infinity) as $t$ increases.
4.7.2 Autonomous systems
Often, the mechanics of a system does not change with time. So $\dot{y}$ is only a
function of $y$ itself. We call this an autonomous system.
Definition (Autonomous system). An autonomous system is a system in the
form $\dot{y} = f(y)$, where the derivative is only (explicitly) dependent on $y$.
Near a fixed point $y = a$, where $f(a) = 0$, write $y = a + \varepsilon(t)$. Then
$\dot{\varepsilon} = \varepsilon \frac{\mathrm{d}f}{\mathrm{d}y}(a) = k\varepsilon$ for some constant $k$. Then $\varepsilon = \varepsilon_0 e^{kt}$. The stability of the
system then depends solely on the sign of $k$.
Example. Consider a chemical reaction NaOH + HCl $\to$ H$_2$O + NaCl. We have
$$\begin{array}{lcccc} & \text{NaOH} & \text{HCl} & \text{H}_2\text{O} & \text{NaCl}\\ \text{Number of molecules} & a & b & c & c\\ \text{Initial number of molecules} & a_0 & b_0 & 0 & 0 \end{array}$$
If the reaction is in dilute solution, then the reaction rate is proportional to $ab$. Thus
$$\frac{dc}{dt} = \lambda ab = \lambda(a_0 - c)(b_0 - c) = f(c).$$
We can plot $\frac{dc}{dt}$ as a function of $c$, and wlog $a_0 < b_0$.

[Graph of $\dot c$ against $c$: a parabola with zeros at $c = a_0$ and $c = b_0$.]

We can also plot a phase portrait, which is a plot of the dependent variable only, where arrows show the evolution with time.

[Phase portrait on the $c$-axis: arrows point towards $c = a_0$ from both sides, and away from $c = b_0$.]

We can see that the fixed point $c = a_0$ is stable while $c = b_0$ is unstable.
We can also solve the equation explicitly to obtain
$$c = \frac{a_0 b_0\left[1 - e^{-(b_0 - a_0)\lambda t}\right]}{b_0 - a_0 e^{-\lambda(b_0 - a_0)t}}.$$
Of course, this example makes no sense physically whatsoever because it assumes that we can have negative values of $a$ and $b$. Clearly in reality we cannot have, say, $-1$ mol of NaOH, and only solutions for $c \leq a_0$ are physically attainable, and in this case, any solution will tend towards $c = a_0$.
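The closed form can be cross-checked against a direct numerical integration. In the sketch below (not lectured material) the rate constant and initial amounts are made-up illustrative values.

```python
import math

# dc/dt = lam*(a0 - c)*(b0 - c) with c(0) = 0, compared with the closed form.
lam, a0, b0 = 0.7, 1.0, 2.0

def c_exact(t):
    e = math.exp(-(b0 - a0) * lam * t)
    return a0 * b0 * (1 - e) / (b0 - a0 * e)

c, t, h = 0.0, 0.0, 1e-4
for _ in range(50000):                 # forward Euler up to t = 5
    c += h * lam * (a0 - c) * (b0 - c)
    t += h

assert abs(c - c_exact(5.0)) < 1e-3    # Euler agrees with the closed form
assert abs(c_exact(50.0) - a0) < 1e-6  # long-time limit is the stable point c = a0
```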
4.7.3 Logistic Equation
The logistic equation is a simple model of population dynamics. Suppose we have a population of size $y$. It has a birth rate $\alpha y$ and a death rate $\beta y$. With this model, we obtain
$$\frac{dy}{dt} = (\alpha - \beta)y \Rightarrow y = y_0 e^{(\alpha - \beta)t}.$$
Our population increases or decreases exponentially depending on whether the birth rate exceeds the death rate or vice versa.

However, in reality, there is fighting for limited resources. The probability of some piece of food (resource) being found is $\propto y$. The probability of the same piece of food being found by two individuals is $\propto y^2$. If food is scarce, they fight (to the death), so the death rate due to fighting (competing) is $\gamma y^2$ for some $\gamma$. So
$$\frac{dy}{dt} = (\alpha - \beta)y - \gamma y^2$$
$$\frac{dy}{dt} = ry\left(1 - \frac{y}{Y}\right),$$
where $r = \alpha - \beta$ and $Y = r/\gamma$. This is the differential logistic equation. Note that it is separable and can be solved explicitly.

However, we find the phase portrait instead. If $r = \alpha - \beta > 0$, then the graph is a quadratic parabola, and we see that $Y$ is a stable fixed point.

[Graph of $f(y) = ry(1 - y/Y)$ against $y$: a downward parabola with zeros at $0$ and $Y$, and the corresponding phase portrait with arrows pointing towards $Y$.]

Now when the population is small, we have
$$\dot y \simeq ry.$$
So the population grows exponentially. Eventually, the stable equilibrium $Y$ is reached.
4.8 Discrete equations (Difference equations)
Since differential equations are approximated numerically by computers with
discrete equations, it is important to study the behaviour of discrete equations,
and their difference with continuous counterparts.
In the logistic equation, the evolution of species may occur discretely (e.g.
births in spring, deaths in winter), and we can consider the population at certain
time intervals (e.g. consider the population at the end of each month). We might
have a model in the form
x
n+1
= λx
n
(1 x
n
).
This can be derived from the continuous equation with a discrete approximation
$$\frac{y_{n+1} - y_n}{\Delta t} = ry_n\left(1 - \frac{y_n}{Y}\right)$$
$$y_{n+1} = y_n + r\Delta t\, y_n\left(1 - \frac{y_n}{Y}\right) = (1 + r\Delta t)y_n - \frac{r\Delta t}{Y}y_n^2 = (1 + r\Delta t)y_n\left[1 - \frac{r\Delta t}{1 + r\Delta t}\frac{y_n}{Y}\right].$$
Write
$$\lambda = 1 + r\Delta t,\qquad x_n = \frac{r\Delta t}{1 + r\Delta t}\frac{y_n}{Y},$$
then
$$x_{n+1} = \lambda x_n(1 - x_n).$$
This is the discrete logistic equation or logistic map.
If $\lambda < 1$, then deaths exceed births and the population decays to zero.

[Cobweb plot of $x_{n+1} = \lambda x_n(1 - x_n)$ against the line $x_{n+1} = x_n$: starting from $x_0$, the iterates step down towards the origin.]

We see that $x = 0$ is a fixed point.
In general, to find fixed points, we solve $x_{n+1} = x_n$, i.e. $f(x_n) = x_n$. For the logistic map, we have
$$\lambda x_n(1 - x_n) = x_n$$
$$x_n[1 - \lambda(1 - x_n)] = 0$$
$$x_n = 0 \quad\text{or}\quad x_n = 1 - \frac{1}{\lambda}.$$
When $1 < \lambda < 2$, we have

[Cobweb plot: the iterates from $x_0$ converge monotonically to the intersection with the line $x_{n+1} = x_n$.]

We see that $x_n = 0$ is an unstable fixed point and $x_n = 1 - \frac{1}{\lambda}$ is a stable fixed point.
When $2 < \lambda < 3$, we have

[Cobweb plot: the iterates spiral into the fixed point.]

There is an oscillatory convergence to $x_n = 1 - \frac{1}{\lambda}$.
When $\lambda > 3$, we have a limit cycle, in which $x_n$ oscillates between 2 values, i.e. $x_{n+2} = x_n$. When $\lambda = 1 + \sqrt 6 \approx 3.449$, we have a 4-cycle, and so on.

[Cobweb plot: the iterates settle onto a closed rectangle, oscillating between two values.]

We have the following plot of the stable solutions for different values of $\lambda$ (plotted as $r$ on the horizontal axis).
Credits: Wikimedia Commons: Jordan Pierce - Public Domain CC0 1.0
Note that the fixed point still exists for $\lambda > 3$, but it is no longer stable. Similarly, the 2-cycles still exist after $\lambda = 1 + \sqrt 6$, but they are not stable.
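These regimes are easy to verify empirically. The following sketch (not lectured material; the $\lambda$ values and tolerances are illustrative) iterates the logistic map past its transient and inspects the resulting cycle.

```python
# Iterate x_{n+1} = lam*x_n*(1 - x_n) past the transient, then inspect the cycle.

def orbit(lam, x0=0.4, burn=10000, keep=8):
    x = x0
    for _ in range(burn):
        x = lam * x * (1 - x)
    out = []
    for _ in range(keep):
        x = lam * x * (1 - x)
        out.append(x)
    return out

# 1 < lam < 3: converges to the fixed point 1 - 1/lam.
o = orbit(2.5)
assert all(abs(x - (1 - 1/2.5)) < 1e-9 for x in o)

# 3 < lam < 1 + sqrt(6): a 2-cycle, x_{n+2} = x_n but x_{n+1} != x_n.
o = orbit(3.2)
assert abs(o[0] - o[2]) < 1e-9 and abs(o[0] - o[1]) > 1e-3
```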
5 Second-order differential equations
We now move on to second-order differential equations. While we will only look
at second-order equations, most of the methods in this section apply to higher
order differential equations as well.
5.1 Constant coefficients
The general form of an equation with constant coefficients is
ay
00
+ by
0
+ cy = f(x).
We solve this in two steps:
(i)
Find the complementary functions which satisfy the homogeneous equation
ay
00
+ by
0
+ cy = 0.
(ii) Find a particular solution that satisfies the full equation.
5.1.1 Complementary functions
Recall that $e^{\lambda x}$ is an eigenfunction of the differential operator $\frac{d}{dx}$. Hence it is also an eigenfunction of the second derivative $\frac{d^2}{dx^2} = \frac{d}{dx}\left(\frac{d}{dx}\right)$.
If the complementary function has the form $y_c = e^{\lambda x}$, then $y_c' = \lambda e^{\lambda x}$ and $y_c'' = \lambda^2 e^{\lambda x}$. Substituting into the differential equation gives

Definition (Characteristic equation). The characteristic equation of a (second-order) differential equation $ay'' + by' + cy = 0$ is
$$a\lambda^2 + b\lambda + c = 0.$$
In this case there are two solutions to the characteristic equation, giving (in principle) two complementary functions $y_1 = e^{\lambda_1 x}$ and $y_2 = e^{\lambda_2 x}$.
If $\lambda_1$ and $\lambda_2$ are distinct, then $y_1$ and $y_2$ are linearly independent and complete: they form a basis of the solution space. The (most) general complementary function is
$$y_c = Ae^{\lambda_1 x} + Be^{\lambda_2 x}.$$
Example. $y'' - 5y' + 6y = 0$. Try $y = e^{\lambda x}$. The characteristic equation is $\lambda^2 - 5\lambda + 6 = 0$. Then $\lambda = 2$ or $3$. So the general solution is $y = Ae^{2x} + Be^{3x}$.
Note that $A$ and $B$ can be complex constants.
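The recipe above is mechanical, so we can script it. The helper name `char_roots` below is our own invention (not from the notes); it solves the characteristic equation, and we then spot-check that $e^{2x}$ really solves the example.

```python
import cmath

# Roots of the characteristic equation a*lam^2 + b*lam + c = 0 give the
# complementary functions e^(lam1*x), e^(lam2*x) (distinct-root case).
def char_roots(a, b, c):
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# y'' - 5y' + 6y = 0 has lam = 2, 3 as in the example above.
l1, l2 = char_roots(1, -5, 6)
assert {round(l1.real), round(l2.real)} == {2, 3}

# Check e^(2x) really solves it: (4 - 10 + 6) e^(2x) = 0.
x = 0.7
y, yp, ypp = cmath.exp(2*x), 2*cmath.exp(2*x), 4*cmath.exp(2*x)
assert abs(ypp - 5*yp + 6*y) < 1e-12
```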
Example (Simple harmonic motion). $y'' + 4y = 0$. Try $y = e^{\lambda x}$. The characteristic equation is $\lambda^2 + 4 = 0$, with solutions $\lambda = \pm 2i$. Then our general solution is $y = Ae^{2ix} + Be^{-2ix}$. However, if this is in a case of simple harmonic motion in physics, we want the function to be real (or look real). We can write
$$y = A(\cos 2x + i\sin 2x) + B(\cos 2x - i\sin 2x) = (A + B)\cos 2x + i(A - B)\sin 2x = \alpha\cos 2x + \beta\sin 2x,$$
where $\alpha = A + B$ and $\beta = i(A - B)$, and $\alpha$ and $\beta$ are independent constants.
In effect, we have changed the basis from $\{e^{2ix}, e^{-2ix}\}$ to $\{\cos 2x, \sin 2x\}$.
Example (Degeneracy). $y'' - 4y' + 4y = 0$.
Try $y = e^{\lambda x}$. We have $\lambda^2 - 4\lambda + 4 = 0$ and $(\lambda - 2)^2 = 0$. So $\lambda = 2$ or $2$. But $e^{2x}$ and $e^{2x}$ are clearly not linearly independent. We have only managed to find one basis function of the solution space, but a second order equation has a 2-dimensional solution space. We need to find a second solution.
We can perform detuning. We can separate the two functions found above from each other by considering $y'' - 4y' + (4 - \varepsilon^2)y = 0$. This turns into the equation we want to solve as $\varepsilon \to 0$. Try $y = e^{\lambda x}$. We obtain $\lambda^2 - 4\lambda + 4 - \varepsilon^2 = 0$.
The two roots are $\lambda = 2 \pm \varepsilon$. Then
$$y = Ae^{(2+\varepsilon)x} + Be^{(2-\varepsilon)x} = e^{2x}\left[Ae^{\varepsilon x} + Be^{-\varepsilon x}\right].$$
Taking the limit as $\varepsilon \to 0$, we use the Taylor expansion of $e^{\pm\varepsilon x}$ to obtain
$$y = e^{2x}\left[(A + B) + \varepsilon x(A - B) + O(A\varepsilon^2, B\varepsilon^2)\right].$$
We let $(A + B) = \alpha$ and $\varepsilon(A - B) = \beta$. This is perfectly valid for any non-zero $\varepsilon$. Then $A = \frac{1}{2}\left(\alpha + \frac{\beta}{\varepsilon}\right)$ and $B = \frac{1}{2}\left(\alpha - \frac{\beta}{\varepsilon}\right)$. So we have
$$y = e^{2x}\left[\alpha + \beta x + O(A\varepsilon^2, B\varepsilon^2)\right].$$
We know for any $\varepsilon$, we have a solution of this form. Now we turn the procedure around. We fix some $\alpha$ and $\beta$. Then given any $\varepsilon$, we can find some constants $A$, $B$ (depending on $\varepsilon$) such that the above holds. As we decrease the size of $\varepsilon$, we have $A, B = O\left(\frac{1}{\varepsilon}\right)$. So $O(A\varepsilon^2) = O(B\varepsilon^2) = O(\varepsilon)$. So our solution becomes
$$y = e^{2x}[\alpha + \beta x + O(\varepsilon)] \to e^{2x}(\alpha + \beta x).$$
In this way, we have derived two separate basis functions. In general, if $y_1(x)$ is a degenerate complementary function of a linear differential equation with constant coefficients, then $y_2(x) = xy_1(x)$ is an independent complementary function.
5.1.2 Second complementary function
In general (i.e. if we don't have constant coefficients), we can find a second complementary function associated with a degenerate solution of the homogeneous equation by looking for a solution in the form $y_2(x) = v(x)y_1(x)$, where $y_1(x)$ is the degenerate solution we found.
Example. Consider $y'' - 4y' + 4y = 0$. We have $y_1 = e^{2x}$. We try $y_2 = ve^{2x}$. Then
$$y_2' = (v' + 2v)e^{2x}$$
$$y_2'' = (v'' + 4v' + 4v)e^{2x}.$$
Substituting into the original equation gives
$$(v'' + 4v' + 4v) - 4(v' + 2v) + 4v = 0.$$
Simplifying, this tells us $v'' = 0$, which forces $v$ to be a linear function of $x$. So $y_2 = (Ax + B)e^{2x}$ for some $A, B \in \mathbb{R}$.
5.1.3 Phase space
If we are given a general $n$th order differential equation of the form
$$a_n(x)y^{(n)} + a_{n-1}(x)y^{(n-1)} + \cdots + a_1(x)y' + a_0(x)y = f(x),$$
and we have a solution $y$, then we can plot a graph of $y$ versus $x$, and see how $y$ evolves with $x$.
However, one problem is that for such an equation, the solution is not just determined by the initial condition $y(x_0)$, but also $y'(x_0)$, $y''(x_0)$ etc. So if we just have a snapshot of the value of $y$ at a particular point $x_0$, we have completely no idea how it would evolve in the future.

So how much information do we actually need? At any point $x_0$, if we are given the first $n - 1$ derivatives, i.e. $y(x_0), y'(x_0), \cdots, y^{(n-1)}(x_0)$, we can then get the $n$th derivative and also any higher derivatives from the differential equation. This means that we know the Taylor series of $y$ about $x_0$, and it follows that the solution is uniquely determined by these conditions (note that it takes considerably more extra work to actually prove rigorously that the solution is uniquely determined by these initial conditions, but it can be done for sufficiently sensible $f$, as will be done in IB Analysis II).
Thus we are led to consider the solution vector
$$\mathbf{Y}(x) = (y(x), y'(x), \cdots, y^{(n-1)}(x)).$$
We say such a vector lies in the phase space, which is an $n$-dimensional space. So for each $x$, we thus get a point $\mathbf{Y}(x)$ lying in the $n$-dimensional space. Moreover, given any point in the phase space, if we view it as the initial conditions for our differential equation, we get a unique trajectory in the phase space.
Example. Consider $y'' + 4y = 0$. Suppose we have an initial condition of $y_1(0) = 1, y_1'(0) = 0$. Then we can solve the equation to obtain $y_1(x) = \cos 2x$. Thus the initial solution vector is $\mathbf{Y}_1(0) = (1, 0)$, and the trajectory as $x$ varies is given by $\mathbf{Y}_1(x) = (\cos 2x, -2\sin 2x)$. Thus as $x$ changes, we trace out an ellipse in the clockwise direction:

[Phase-space plot: the trajectory $\mathbf{Y}_1(x)$ traces an ellipse in the $(y, y')$ plane, traversed clockwise.]

Another possible initial condition is $y_2(0) = 0, y_2'(0) = 2$. In this case, we obtain the solution $y(x) = \sin 2x$, with a solution vector $\mathbf{Y}_2(x) = (\sin 2x, 2\cos 2x)$.
Note that as vectors, our two initial conditions $(1, 0)$ and $(0, 2)$ are independent. Moreover, as $x$ changes, the two solution vectors $\mathbf{Y}_1(x), \mathbf{Y}_2(x)$ remain independent. This is an important observation that allows the method of variation of parameters later on.

In general, for a 2nd order equation, the phase space is a 2-dimensional space, and we can take the two complementary functions $\mathbf{Y}_1$ and $\mathbf{Y}_2$ as basis vectors for the phase space at each particular value of $x$. Of course, we need the two solutions to be linearly independent.
Definition (Wronskian). Given a differential equation with solutions $y_1, y_2$, the Wronskian is the determinant
$$W(x) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix}.$$
Definition (Independent solutions). Two solutions $y_1(x)$ and $y_2(x)$ are independent solutions of the differential equation if and only if $\mathbf{Y}_1$ and $\mathbf{Y}_2$ are linearly independent as vectors in the phase space for some $x$, i.e. iff the Wronskian is non-zero for some $x$.

In our example, we have $W(x) = 2\cos^2 2x + 2\sin^2 2x = 2 \neq 0$ for all $x$.
Example. In our earlier example, $y_1 = e^{2x}$ and $y_2 = xe^{2x}$. We have
$$W = \begin{vmatrix} e^{2x} & xe^{2x} \\ 2e^{2x} & e^{2x} + 2xe^{2x} \end{vmatrix} = e^{4x}(1 + 2x - 2x) = e^{4x} \neq 0.$$
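Abel's theorem (below) can be spot-checked on this example, where the equation $y'' - 4y' + 4y = 0$ has $p = -4$, so the Wronskian should equal $W_0 e^{4x}$. A small numerical sketch (not lectured material; the sample points are arbitrary):

```python
import math

# Wronskian of y1 = e^(2x), y2 = x e^(2x), which solve y'' - 4y' + 4y = 0.
# With p = -4, Abel gives W(x) = W0 * exp(-∫p dx) = e^(4x)  (W0 = 1 here).
def W(x):
    y1, y1p = math.exp(2*x), 2*math.exp(2*x)
    y2 = x * math.exp(2*x)
    y2p = math.exp(2*x) + 2*x*math.exp(2*x)
    return y1 * y2p - y2 * y1p

for x in (0.0, 0.5, 1.3):
    assert abs(W(x) - math.exp(4*x)) < 1e-9 * math.exp(4*x)
```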
In both cases, the Wronskian is never zero. Is it possible that it is zero for
some x while non-zero for others? The answer is no.
Theorem (Abel's Theorem). Given an equation $y'' + p(x)y' + q(x)y = 0$, either $W = 0$ for all $x$, or $W \neq 0$ for all $x$, i.e. if two solutions are independent for some particular $x$, then they are independent for all $x$.
Proof. If $y_1$ and $y_2$ are both solutions, then
$$y_2(y_1'' + py_1' + qy_1) = 0$$
$$y_1(y_2'' + py_2' + qy_2) = 0$$
Subtracting the two equations, we have
$$y_1y_2'' - y_2y_1'' + p(y_1y_2' - y_2y_1') = 0.$$
Note that $W = y_1y_2' - y_2y_1'$ and $W' = y_1y_2'' + y_1'y_2' - (y_2'y_1' + y_2y_1'') = y_1y_2'' - y_2y_1''$. So
$$W' + p(x)W = 0$$
$$W(x) = W_0 e^{-\int p\,dx},$$
where $W_0$ = const. Since the exponential function is never zero, either $W_0 = 0$, in which case $W = 0$, or $W_0 \neq 0$ and $W \neq 0$ for any value of $x$.
In general, any linear $n$th-order homogeneous differential equation can be written in the form $\mathbf{Y}' + A\mathbf{Y} = 0$, a system of first-order equations. It can then be shown that $W' + \operatorname{tr}(A)W = 0$, and $W = W_0 e^{-\int \operatorname{tr} A\,dx}$. So Abel's theorem holds.
5.2 Particular integrals
We now consider equations of the form $ay'' + by' + cy = f(x)$. We will come up with several ways to find a particular integral.
5.2.1 Guessing
If the forcing terms are simple, we can easily “guess” the form of the particular
integral, as we’ve previously done for first-order equations.
$$\begin{array}{ll} f(x) & y_p(x)\\ e^{mx} & Ae^{mx}\\ \sin kx,\ \cos kx & A\sin kx + B\cos kx\\ \text{polynomial } p_n(x) & q_n(x) = a_nx^n + \cdots + a_1x + a_0 \end{array}$$
It is important to remember that the equation is linear, so we can superpose
solutions and consider each forcing term separately.
Example. Consider $y'' - 5y' + 6y = 2x + e^{4x}$. To obtain the forcing term $2x$, we need a first order polynomial $ax + b$, and to get $e^{4x}$ we need $ce^{4x}$. Thus we can guess
$$y_p = ax + b + ce^{4x}$$
$$y_p' = a + 4ce^{4x}$$
$$y_p'' = 16ce^{4x}$$
Substituting in, we get
$$16ce^{4x} - 5(a + 4ce^{4x}) + 6(ax + b + ce^{4x}) = 2x + e^{4x}.$$
Comparing coefficients of similar functions, we have
$$16c - 20c + 6c = 1 \Rightarrow c = \frac{1}{2}$$
$$6a = 2 \Rightarrow a = \frac{1}{3}$$
$$-5a + 6b = 0 \Rightarrow b = \frac{5}{18}$$
Since the complementary function is $y_c = Ae^{3x} + Be^{2x}$, the general solution is
$$y = Ae^{3x} + Be^{2x} + \frac{1}{2}e^{4x} + \frac{1}{3}x + \frac{5}{18}.$$
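We can verify the guessed particular integral mechanically (a sketch, not lectured material; the derivatives in the code are the hand-computed ones from above, and the sample points are arbitrary):

```python
import math

# Check y_p = x/3 + 5/18 + e^(4x)/2 satisfies y'' - 5y' + 6y = 2x + e^(4x).
def residual(x):
    y   = x/3 + 5/18 + math.exp(4*x)/2
    yp  = 1/3 + 2*math.exp(4*x)
    ypp = 8*math.exp(4*x)
    return ypp - 5*yp + 6*y - (2*x + math.exp(4*x))

for x in (-1.0, 0.0, 0.8):
    assert abs(residual(x)) < 1e-9
```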
Note that any boundary condition to determine $A$ and $B$ must be applied to the full solution, not the complementary function.
5.2.2 Resonance
Consider $\ddot y + \omega_0^2 y = \sin\omega_0 t$. The complementary solution is $y_c = A\sin\omega_0 t + B\cos\omega_0 t$. We notice that the forcing is itself a complementary function. So if we guess a particular integral $y_p = C\sin\omega_0 t + D\cos\omega_0 t$, we'll simply find $\ddot y_p + \omega_0^2 y_p = 0$, so we can't balance the forcing.
This is an example of a simple harmonic oscillator being forced at its natural
frequency.
We can detune our forcing away from the natural frequency, and consider $\ddot y + \omega_0^2 y = \sin\omega t$ with $\omega \neq \omega_0$. Try
$$y_p = C(\sin\omega t - \sin\omega_0 t).$$
We have
$$\ddot y_p = C(-\omega^2\sin\omega t + \omega_0^2\sin\omega_0 t).$$
Substituting into the differential equation, we have $C(\omega_0^2 - \omega^2) = 1$. Then
$$y_p = \frac{\sin\omega t - \sin\omega_0 t}{\omega_0^2 - \omega^2}.$$
We can simplify this to
$$y_p = \frac{2}{\omega_0^2 - \omega^2}\cos\left(\frac{\omega_0 + \omega}{2}t\right)\sin\left(\frac{\omega - \omega_0}{2}t\right).$$
We let $\omega_0 - \omega = \Delta\omega$. Then
$$y_p = \frac{-2}{(2\omega + \Delta\omega)\Delta\omega}\cos\left(\left(\omega + \frac{\Delta\omega}{2}\right)t\right)\sin\left(\frac{\Delta\omega}{2}t\right).$$
[Plot of $y_p$ against $t$: a fast $\cos$ oscillation inside the slowly varying envelope $\pm\sin\left(\frac{\Delta\omega}{2}t\right)$, which has width $O(1/\Delta\omega)$.]
This oscillation in the amplitude of the $\cos$ wave is known as beating. This happens when the forcing frequency is close to the natural frequency. The wavelength of the $\sin$ function has order $O\left(\frac{1}{\Delta\omega}\right)$ and $\cos$ has wavelength $O\left(\frac{1}{\omega_0}\right)$.
As $\Delta\omega \to 0$, the wavelength of the beating envelope $\to \infty$ and we just have the initial linear growth.
Mathematically, since $\sin\theta \approx \theta$ as $\theta \to 0$, as $\Delta\omega \to 0$, we have
$$y_p \to \frac{-t}{2\omega_0}\cos\omega_0 t.$$
In general, if the forcing is a linear combination of complementary functions, then the particular integral is proportional to $t$ (the independent variable) times the non-resonant guess.
5.2.3 Variation of parameters
So far, we have been finding particular integrals by guessing. Here we are going
to come up with a method that can systematically help us find the particular
integral. Of course, this is substantially more complicated than guessing. So if
the form of the particular integral is obvious, we should go for guessing.
Suppose we have a second order differential equation
$$y'' + p(x)y' + q(x)y = f(x).$$
We then know any particular solution vector can be written in the form
$$\mathbf{Y}(x) = \begin{pmatrix} y(x)\\ y'(x)\end{pmatrix},$$
and our job is to find one solution vector that satisfies the equation. We
presuppose such a solution actually exists, and we will try to find out what it is.
The trick is to pick a convenient basis for this space. Let $y_1(x)$ and $y_2(x)$ be linearly independent complementary functions of the ODE. Then for each $x$, the solution vectors $\mathbf{Y}_1(x) = (y_1(x), y_1'(x))$ and $\mathbf{Y}_2(x) = (y_2(x), y_2'(x))$ form a basis of the solution space. So for each particular $x$, we can find some constants $u, v$ (depending on $x$) such that the following equality holds:
$$\mathbf{Y}(x) = u(x)\mathbf{Y}_1(x) + v(x)\mathbf{Y}_2(x),$$
since $\mathbf{Y}_1$ and $\mathbf{Y}_2$ are a basis.
Component-wise, we have
$$y_p = uy_1 + vy_2 \tag{a}$$
$$y_p' = uy_1' + vy_2' \tag{b}$$
Differentiating the second equation, we obtain
$$y_p'' = (uy_1'' + u'y_1') + (vy_2'' + v'y_2') \tag{c}$$
If we consider (c) + $p$(b) + $q$(a), we have $y_1'u' + y_2'v' = f$.
Now note that we derived the equation of $y_p'$ from the vector equation. This must be equal to what we get if we differentiate (a). By (a)$' -$ (b), we obtain $y_1u' + y_2v' = 0$. Now we have two simultaneous equations for $u'$ and $v'$, which we should be able to solve.
We can, for example, write them in matrix form as
$$\begin{pmatrix} y_1 & y_2\\ y_1' & y_2'\end{pmatrix}\begin{pmatrix}u'\\v'\end{pmatrix} = \begin{pmatrix}0\\f\end{pmatrix}$$
Inverting the left matrix, we have
$$\begin{pmatrix}u'\\v'\end{pmatrix} = \frac{1}{W}\begin{pmatrix}y_2' & -y_2\\ -y_1' & y_1\end{pmatrix}\begin{pmatrix}0\\f\end{pmatrix}$$
So $u' = -\frac{y_2}{W}f$ and $v' = \frac{y_1}{W}f$.
Example. $y'' + 4y = \sin 2x$. We know that $y_1 = \sin 2x$ and $y_2 = \cos 2x$, and $W = -2$. We write
$$y_p = u\sin 2x + v\cos 2x.$$
Using the formulae above, we obtain
$$u' = \frac{\cos 2x\sin 2x}{2} = \frac{\sin 4x}{4},\qquad v' = -\frac{\sin^2 2x}{2} = \frac{\cos 4x - 1}{4}.$$
So
$$u = -\frac{\cos 4x}{16},\qquad v = \frac{\sin 4x}{16} - \frac{x}{4}.$$
Therefore
$$y_p = \frac{1}{16}(-\cos 4x\sin 2x + \sin 4x\cos 2x) - \frac{x}{4}\cos 2x = \frac{1}{16}\sin 2x - \frac{x}{4}\cos 2x.$$
Note that $-\frac{1}{4}x\cos 2x$ is what we found previously by detuning, and $\frac{1}{16}\sin 2x$ is a complementary function, so the results agree.
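This result can be checked mechanically. In the sketch below (not lectured material) the derivatives of $y_p$ are computed by hand and the residual of $y'' + 4y = \sin 2x$ is evaluated at arbitrary sample points:

```python
import math

# Check y_p = sin(2x)/16 - (x/4)cos(2x) satisfies y'' + 4y = sin(2x).
# By hand: y_p'' = (3/4)sin(2x) + x cos(2x).
def residual(x):
    ypp = 0.75*math.sin(2*x) + x*math.cos(2*x)
    y   = math.sin(2*x)/16 - (x/4)*math.cos(2*x)
    return ypp + 4*y - math.sin(2*x)

for x in (0.0, 1.0, 2.5):
    assert abs(residual(x)) < 1e-12
```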
It is generally not a good idea to remember the exact formula for the results
we’ve obtained above. Instead, whenever faced with such questions, you should
be able to re-derive the results instead.
5.3 Linear equidimensional equations
Equidimensional equations are often called homogeneous equations, but this is
confusing as it has the same name as those with no forcing term. So we prefer
this name instead.
Definition (Equidimensional equation). An equation is equidimensional if it has the form
$$ax^2y'' + bxy' + cy = f(x),$$
where $a, b, c$ are constants.

To understand the name "equidimensional", suppose we are doing physics and variables have dimensions. Say $y$ has dimensions $L$ and $x$ has dimensions $T$. Then $y'$ has dimensions $LT^{-1}$ and $y''$ has dimensions $LT^{-2}$. So all terms $x^2y''$, $xy'$ and $y$ have the same dimensions.
Solving by eigenfunctions
Note that $y = x^k$ is an eigenfunction of $x\frac{d}{dx}$. We can try an eigenfunction $y = x^k$.
We have $y' = kx^{k-1}$ and thus $xy' = kx^k = ky$; and $y'' = k(k - 1)x^{k-2}$ and $x^2y'' = k(k - 1)x^k$.
Substituting in, we have
$$ak(k - 1) + bk + c = 0,$$
which we can solve, in general, to give two roots $k_1$ and $k_2$, and
$$y_c = Ax^{k_1} + Bx^{k_2}.$$
Solving by substitution
Alternatively, we can make a substitution $z = \ln x$. Then we can show that
$$a\frac{d^2y}{dz^2} + (b - a)\frac{dy}{dz} + cy = f(e^z).$$
This turns an equidimensional equation into an equation with constant coefficients. The characteristic equation for solutions in the form $y = e^{\lambda z}$ is of form $a\lambda^2 + (b - a)\lambda + c = 0$, which we can rearrange to become $a\lambda(\lambda - 1) + b\lambda + c = 0$. So $\lambda = k_1, k_2$.
Then the complementary function is $y_c = Ae^{k_1z} + Be^{k_2z} = Ax^{k_1} + Bx^{k_2}$.
Degenerate solutions
If the roots of the characteristic equation are equal, then $y_c = \{e^{kz}, ze^{kz}\} = \{x^k, x^k\ln x\}$. Similarly, if there is a resonant forcing proportional to $x^{k_1}$ (or $x^{k_2}$), then there is a particular integral of the form $x^{k_1}\ln x$.
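The exponent recipe can be scripted. In the sketch below (not lectured material) the helper name `exponents` and the test equation $x^2y'' - 2y = 0$, with roots $k = 2, -1$, are our own illustrative choices:

```python
import math

# For a x^2 y'' + b x y' + c y = 0, the exponents k solve a k(k-1) + b k + c = 0,
# i.e. a k^2 + (b - a) k + c = 0 (distinct real roots assumed in this sketch).
def exponents(a, b, c):
    disc = math.sqrt((b - a)**2 - 4*a*c)
    return ((a - b) + disc) / (2*a), ((a - b) - disc) / (2*a)

# x^2 y'' - 2y = 0: k(k-1) - 2 = 0 gives k = 2, -1.
k1, k2 = exponents(1, 0, -2)
assert {k1, k2} == {2.0, -1.0}

# Check both solutions at x = 3: y = x^2 has y'' = 2; y = 1/x has y'' = 2/x^3.
x = 3.0
assert x*x * 2 - 2 * x**2 == 0
assert abs(x*x * (2/x**3) - 2/x) < 1e-12
```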
These results can be easily obtained by considering the substitution method
of solving, and then applying our results from homogeneous linear equations
with constant coefficients.
5.4 Difference equations
Consider an equation of the form
$$ay_{n+2} + by_{n+1} + cy_n = f_n.$$
We can solve in a similar way to differential equations, by exploiting linearity and eigenfunctions.
We can think of the difference operator $D[y_n] = y_{n+1}$. This has an eigenfunction $y_n = k^n$. We have $D[y_n] = D[k^n] = k^{n+1} = k\cdot k^n = ky_n$.
To solve the difference equation, we first look for complementary functions satisfying
$$ay_{n+2} + by_{n+1} + cy_n = 0.$$
We try $y_n = k^n$ to obtain
$$ak^{n+2} + bk^{n+1} + ck^n = 0$$
$$ak^2 + bk + c = 0,$$
from which we can determine $k$. So the general complementary function is $y_n^c = Ak_1^n + Bk_2^n$ if $k_1 \neq k_2$. If they are equal, then $y_n^c = (A + Bn)k^n$.
To find the particular integral, we guess.
$$\begin{array}{ll} f_n & y_n^p\\ k^n & Ak^n \text{ if } k \neq k_1, k_2\\ k_1^n & nk_1^n\\ n^p & An^p + Bn^{p-1} + \cdots + Cn + D \end{array}$$
Example (Fibonacci sequence). The Fibonacci sequence is defined by
$$y_n = y_{n-1} + y_{n-2}$$
with $y_0 = y_1 = 1$. We can write this as
$$y_{n+2} - y_{n+1} - y_n = 0.$$
We try $y_n = k^n$. Then
$$k^2 - k - 1 = 0$$
$$k = \frac{1 \pm \sqrt 5}{2}.$$
We write $k = \varphi_1, \varphi_2$. Then $y_n = A\varphi_1^n + B\varphi_2^n$. Our initial conditions give
$$A + B = 1$$
$$A\varphi_1 + B\varphi_2 = 1.$$
We get $A = \frac{\varphi_1}{\sqrt 5}$ and $B = -\frac{\varphi_2}{\sqrt 5}$. So
$$y_n = \frac{\varphi_1^{n+1} - \varphi_2^{n+1}}{\sqrt 5} = \frac{\varphi_1^{n+1} - \left(-\frac{1}{\varphi_1}\right)^{n+1}}{\sqrt 5}.$$
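The closed form can be checked against direct iteration (a sketch, not lectured material):

```python
import math

# Compare y_n = (phi1^(n+1) - phi2^(n+1)) / sqrt(5) with direct iteration
# of y_n = y_{n-1} + y_{n-2}, y_0 = y_1 = 1.
phi1 = (1 + math.sqrt(5)) / 2
phi2 = (1 - math.sqrt(5)) / 2

def binet(n):
    return (phi1**(n+1) - phi2**(n+1)) / math.sqrt(5)

ys = [1, 1]
for n in range(2, 20):
    ys.append(ys[-1] + ys[-2])

assert all(round(binet(n)) == ys[n] for n in range(20))
```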
5.5 Transients and damping
In many physical systems, there is some sort of restoring force and some damping, e.g. a car suspension system.
Consider a car of mass $M$ with a vertical force $F(t)$ acting on it (e.g. a mouse jumping on the car). We can consider the wheels to be springs ($F = -kx$) with a "shock absorber" ($F = -l\dot x$). Then the equation of motion can be given by
$$M\ddot x = F(t) - kx - l\dot x.$$
So we have
$$\ddot x + \frac{l}{M}\dot x + \frac{k}{M}x = \frac{1}{M}F(t).$$
Note that if we don't have the damping and the forcing, we end up with simple harmonic motion of angular frequency $\sqrt{k/M}$. Write $t = \tau\sqrt{M/k}$, where $\tau$ is dimensionless. The timescale $\sqrt{M/k}$ is proportional to the period of the undamped, unforced system (or 1 over its natural frequency). Then we obtain
$$\ddot x + 2\kappa\dot x + x = f(\tau),$$
where $\dot x$ now means $\frac{dx}{d\tau}$, $\kappa = \frac{l}{2\sqrt{kM}}$ and $f = \frac{F}{k}$.
By this substitution, we are now left with only one parameter $\kappa$ instead of the original three ($M$, $l$, $k$).
We will consider different possible cases.
Free (natural) response $f = 0$
$$\ddot x + 2\kappa\dot x + x = 0.$$
We try $x = e^{\lambda\tau}$:
$$\lambda^2 + 2\kappa\lambda + 1 = 0$$
$$\lambda = -\kappa \pm \sqrt{\kappa^2 - 1} = -\lambda_1, -\lambda_2,$$
where $\lambda_1$ and $\lambda_2$ have positive real parts.
Underdamping
If $\kappa < 1$, we have
$$x = e^{-\kappa\tau}\left(A\sin\sqrt{1 - \kappa^2}\,\tau + B\cos\sqrt{1 - \kappa^2}\,\tau\right).$$
The period is $\frac{2\pi}{\sqrt{1 - \kappa^2}}$ and its amplitude decays in a characteristic time of $O\left(\frac{1}{\kappa}\right)$.
Note that the damping increases the period. As $\kappa \to 1$, the oscillation period $\to \infty$.
Critical damping
If $\kappa = 1$, then $x = (A + B\tau)e^{-\kappa\tau}$.
The rise time and decay time are both $O\left(\frac{1}{\kappa}\right) = O(1)$. So the dimensional rise and decay times are $O(\sqrt{M/k})$.
Overdamping
If $\kappa > 1$, then $x = Ae^{-\lambda_1\tau} + Be^{-\lambda_2\tau}$ with $\lambda_1 < \lambda_2$. Then the decay time is $O(1/\lambda_1)$ and the rise time is $O(1/\lambda_2)$.
Note that in all cases, it is possible to get a large initial increase in amplitude.
Forcing
In a forced system, the complementary functions typically determine the short-
time transient response, while the particular integral determines the long-time
(asymptotic) response. For example, if
f
(
τ
) =
sin τ
, then we can guess
x
p
=
C sin τ + D cos τ . In this case, it turns out that x
p
=
1
2κ
cos τ.
The general solution is thus
x
=
Ae
λ
1
τ
+
Be
λτ
1
2κ
cos τ
1
2κ
cos τ
as
τ since Re(λ
1,2
) > 0.
It is important to note that the forcing response is out of phase with the
forcing.
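The asymptotic behaviour can be confirmed numerically. The sketch below (not lectured material) uses a semi-implicit Euler scheme and the illustrative value $\kappa = 0.5$; both choices are ours, not from the notes.

```python
import math

# Long-time response of x'' + 2k x' + x = sin(t): the transient decays and
# x approaches the particular integral -cos(t)/(2k).
k = 0.5
x, v, t, h = 0.0, 0.0, 0.0, 1e-3
for _ in range(100000):                 # integrate up to t = 100
    v += h * (math.sin(t) - 2*k*v - x)  # update velocity first...
    x += h * v                          # ...then position (semi-implicit Euler)
    t += h

assert abs(x - (-math.cos(t) / (2*k))) < 1e-2
```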
5.6 Impulses and point forces
5.6.1 Dirac delta function
Consider a ball bouncing on the ground. When the ball hits the ground at some time $T$, it experiences a force from the ground for some short period of time. The force on the ball exerted by the ground $F(t)$ is 0 for most of the time, except during the short period $(T - \varepsilon, T + \varepsilon)$.
Often we don't know (or we don't wish to know) the details of $F(t)$, but we can note that it only acts for a short time of $O(\varepsilon)$ that is much shorter than the overall time $O(t_2 - t_1)$ of the system. It is convenient mathematically to imagine the force acting instantaneously at time $t = T$, i.e. consider the limit $\varepsilon \to 0$.
Newton's second law gives $m\ddot x = F(t) - mg$. While we cannot solve it, we can integrate the equation from $T - \varepsilon$ to $T + \varepsilon$. So
$$\int_{T-\varepsilon}^{T+\varepsilon} m\frac{d^2x}{dt^2}\,dt = \int_{T-\varepsilon}^{T+\varepsilon} F(t)\,dt - \int_{T-\varepsilon}^{T+\varepsilon} mg\,dt$$
$$\left[m\frac{dx}{dt}\right]_{T-\varepsilon}^{T+\varepsilon} = I - 2\varepsilon mg$$
$$\Delta p = I - O(\varepsilon),$$
where $\Delta p$ is the change in momentum and the impulse $I = \int_{T-\varepsilon}^{T+\varepsilon} F(t)\,dt$ is the area under the force curve. Note that the impulse $I$ is the only property of $F$ that influences the macroscopic behaviour of the system. If the contact time $2\varepsilon$ is small, we'll neglect it and write
$$\Delta p = I.$$
Assuming that $F$ only acts on a negligible amount of time $\varepsilon$, all that matters to us is its integral $I$, i.e. the area under the force curve.
wlog, assume $T = 0$ for easier mathematical treatment. We can consider a family of functions $D(t;\varepsilon)$ such that
$$\lim_{\varepsilon\to0} D(t;\varepsilon) = 0\ \text{for all}\ t \neq 0;$$
$$\lim_{\varepsilon\to0}\int_{-\infty}^{\infty} D(t;\varepsilon)\,dt = 1.$$
So we can replace the force in our example by $ID(t;\varepsilon)$, and then take the limit as $\varepsilon \to 0$.
For example, we can choose
$$D(t;\varepsilon) = \frac{1}{\varepsilon\sqrt\pi}e^{-t^2/\varepsilon^2}.$$

[Plot of $D(t;\varepsilon)$ against $t$ for $\varepsilon = 1$ and $\varepsilon = 0.5$: a Gaussian that becomes taller and narrower as $\varepsilon$ decreases.]

This has height $O(1/\varepsilon)$ and width $O(\varepsilon)$.
It can be checked that this satisfies the properties listed above. Note that as $\varepsilon \to 0$, $D(0;\varepsilon) \to \infty$. Therefore $\lim_{\varepsilon\to0} D(0;\varepsilon)$ does not exist.
Definition (Dirac delta function). The Dirac delta function is defined by
$$\delta(x) = \lim_{\varepsilon\to0} D(x;\varepsilon)$$
on the understanding that we can only use its integral properties. For example, when we write
$$\int_{-\infty}^{\infty} g(x)\delta(x)\,dx,$$
we actually mean
$$\lim_{\varepsilon\to0}\int_{-\infty}^{\infty} g(x)D(x;\varepsilon)\,dx.$$
In fact, this is equal to $g(0)$.
More generally, $\int_a^b g(x)\delta(x - c)\,dx = g(c)$ if $c \in (a, b)$ and 0 otherwise, provided $g$ is continuous at $x = c$.
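The sampling property can be checked numerically with the Gaussian family above (a sketch, not lectured material; the grid parameters and the choice $g = \cos$ are illustrative):

```python
import math

# ∫ g(x) D(x; eps) dx -> g(0) as eps -> 0, with the Gaussian family
# D(x; eps) = exp(-x^2/eps^2) / (eps*sqrt(pi)), via a Riemann sum.
def delta_integral(g, eps, n=200001, half_width=10.0):
    h = 2 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        x = -half_width + i * h
        total += g(x) * math.exp(-(x/eps)**2) / (eps * math.sqrt(math.pi)) * h
    return total

g = math.cos                      # g(0) = 1
for eps in (0.5, 0.1, 0.02):
    assert abs(delta_integral(g, eps) - 1.0) < eps**2
```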
This gives a convenient way of representing and making calculations involving impulsive or point forces. For example, in the previous example, we can write
$$m\ddot x = -mg + I\delta(t - T).$$
Example. $y'' - y = 3\delta\left(x - \frac\pi2\right)$ with $y = 0$ at $x = 0, \pi$. Note that our function $y$ is split into two parts by $x = \frac\pi2$.
First consider the region $0 \leq x < \frac\pi2$. Here the delta function is 0, and we have $y'' - y = 0$ and $y = 0$ at $x = 0$. Then $y = Ce^x + De^{-x} = A\sinh x + B\cosh x$, and we obtain $B = 0$ from the boundary condition. In the region $\frac\pi2 < x \leq \pi$, we again obtain $y = C\sinh(\pi - x) + D\cosh(\pi - x)$ and (from the boundary condition) $D = 0$.
When $x = \frac\pi2$, first insist that $y$ is continuous at $x = \frac\pi2$. So $A = C$. Then note that we have to solve
$$y'' - y = 3\delta\left(x - \frac\pi2\right).$$
But remember that the delta function makes sense only in an integral. So we integrate both sides from $\frac\pi2^-$ to $\frac\pi2^+$. Then we obtain
$$[y']_{\pi/2^-}^{\pi/2^+} - \int_{\pi/2^-}^{\pi/2^+} y\,dx = 3.$$
Since we assume that $y$ is well behaved, the second integral is 0. So we are left with
$$[y']_{\pi/2^-}^{\pi/2^+} = 3.$$
So we have
$$-C\cosh\frac\pi2 - A\cosh\frac\pi2 = 3$$
$$A = C = -\frac{3}{2\cosh\frac\pi2}.$$
Then we have
$$y = \begin{cases} -\dfrac{3\sinh x}{2\cosh\frac\pi2} & 0 \leq x < \frac\pi2\\[2ex] -\dfrac{3\sinh(\pi - x)}{2\cosh\frac\pi2} & \frac\pi2 < x \leq \pi \end{cases}$$
Note that at $x = \frac\pi2$, our final function has continuous $y$, discontinuous $y'$ and infinite $y''$. In general, differentiating a function makes it less continuous. This is why we insisted at first that $y$ has to be continuous. Otherwise, $y'$ would look like a delta function, and $y''$ would be something completely unrecognizable.
Hence the discontinuity is always addressed by the highest order derivative, since differentiation increases the discontinuity.
5.7 Heaviside step function
Definition (Heaviside step function). Define the Heaviside step function as:
$$H(x) = \int_{-\infty}^x \delta(t)\,dt.$$
We have
$$H(x) = \begin{cases} 0 & x < 0\\ 1 & x > 0\\ \text{undefined} & x = 0 \end{cases}$$
By the fundamental theorem of calculus,
$$\frac{dH}{dx} = \delta(x).$$
But remember that these functions and relationships can only be used inside integrals.
6 Series solutions
Often, it is difficult to solve a differential equation directly. However, we can
attempt to find a Taylor series for the solution.
We will consider equations of the form
$$p(x)y'' + q(x)y' + r(x)y = 0.$$
Definition (Ordinary and singular points). The point $x = x_0$ is an ordinary point of the differential equation if $\frac{q}{p}$ and $\frac{r}{p}$ have Taylor series about $x_0$ (i.e. are "analytic", cf. Complex Analysis). Otherwise, $x_0$ is a singular point.
If $x_0$ is a singular point but the equation can be written as
$$P(x)(x - x_0)^2y'' + Q(x)(x - x_0)y' + R(x)y = 0,$$
where $\frac{Q}{P}$ and $\frac{R}{P}$ have Taylor series about $x_0$, then $x_0$ is a regular singular point.
Example.
(i) $(1 - x^2)y'' - 2xy' + 2y = 0$. $x = 0$ is an ordinary point. However, $x = \pm1$ are (regular) singular points since $p(\pm1) = 0$.
(ii) $\sin x\, y'' + \cos x\, y' + 2y = 0$. $x = n\pi$ are regular singular points, while all others are ordinary.
(iii) $(1 + \sqrt x)y'' - 2xy' + 2y = 0$. $x = 0$ is an irregular singular point because $\sqrt x$ is not differentiable at $x = 0$.
It is possible to show that if $x_0$ is an ordinary point, then the equation is guaranteed to have two linearly independent solutions of the form
$$y = \sum_{n=0}^\infty a_n(x - x_0)^n,$$
i.e. Taylor series about $x_0$. The solution must be convergent in some neighbourhood of $x_0$.
If $x_0$ is a regular singular point, then there is at least one solution of the form
$$y = \sum_{n=0}^\infty a_n(x - x_0)^{n+\sigma}$$
with $a_0 \neq 0$ (to ensure $\sigma$ is unique). The index $\sigma$ can be any complex number. This is called a Frobenius series.
Alternatively, it can be nice to think of the Frobenius series as
$$y = (x - x_0)^\sigma\sum_{n=0}^\infty a_n(x - x_0)^n = (x - x_0)^\sigma f(x),$$
where $f(x)$ is analytic and has a Taylor series.
We will not prove these results, but merely apply them.
Ordinary points
Example. Consider $(1 - x^2)y'' - 2xy' + 2y = 0$. Find a series solution about $x = 0$ (which is an ordinary point).
We try $y = \sum_{n=0}^\infty a_nx^n$. First, we write the equation in the form of an equidimensional equation with polynomial coefficients by multiplying both sides by $x^2$. This little trick will make subsequent calculations slightly nicer. We obtain
$$(1 - x^2)(x^2y'') - 2x^2(xy') + 2x^2y = 0$$
$$\sum a_n\left[(1 - x^2)n(n - 1) - 2x^2n + 2x^2\right]x^n = 0$$
$$\sum a_n\left[n(n - 1) - (n^2 + n - 2)x^2\right]x^n = 0.$$
We look at the coefficient of $x^n$ and obtain the following general recurrence relation:
$$n(n - 1)a_n - \left[(n - 2)^2 + (n - 2) - 2\right]a_{n-2} = 0$$
$$n(n - 1)a_n = (n^2 - 3n)a_{n-2}.$$
Here we do not divide by anything since they might be zero.
First consider the case $n = 0$. The left hand side gives $0\cdot a_0 = 0$ (the right hand side is 0 since $a_{n-2} = 0$). So any value of $a_0$ satisfies the recurrence relationship, and it can take any arbitrary value. This corresponds to a constant of integration. Similarly, by considering $n = 1$, $a_1$ is arbitrary.
For $n > 1$, $n$ and $n - 1$ are non-zero. So we have
$$a_n = \frac{n - 3}{n - 1}a_{n-2}.$$
In this case (but generally not), we can further simplify it to obtain:
$$a_n = \frac{n - 3}{n - 1}\cdot\frac{n - 5}{n - 3}a_{n-4} = \frac{n - 5}{n - 1}a_{n-4}.$$
So
$$a_{2k} = -\frac{1}{2k - 1}a_0,\qquad a_{2k+1} = 0\ \text{for}\ k \geq 1.$$
So we obtain
$$y = a_0\left[1 - x^2 - \frac{x^4}{3} - \frac{x^6}{5} - \cdots\right] + a_1x = a_0\left[1 - \frac{x}{2}\ln\frac{1 + x}{1 - x}\right] + a_1x.$$
Notice the logarithmic behaviour near $x = \pm1$, which are regular singular points.
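We can confirm the recurrence against the closed form numerically (a sketch, not lectured material; the number of terms and the sample points are illustrative):

```python
import math

# Sum the series generated by a_n = (n-3)/(n-1) * a_{n-2} with a_0 = 1, a_1 = 0,
# and compare with the closed form 1 - (x/2) ln((1+x)/(1-x)).
def series(x, terms=200):
    total, a = 1.0, 1.0          # the n = 0 term
    for n in range(2, 2*terms, 2):
        a *= (n - 3) / (n - 1)
        total += a * x**n
    return total

for x in (0.1, 0.3, 0.5):
    exact = 1 - (x/2) * math.log((1+x)/(1-x))
    assert abs(series(x) - exact) < 1e-10
```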
Regular singular points
Example. Consider $4xy'' + 2(1 - x^2)y' - xy = 0$. Note that $x = 0$ is a singular point. However, if we multiply throughout by $x$ to obtain an equidimensional equation, we obtain
$$4(x^2y'') + 2(1 - x^2)(xy') - x^2y = 0.$$
Since $\frac{Q}{P} = \frac{1 - x^2}{2}$ and $\frac{R}{P} = -\frac{x^2}{4}$ both have Taylor series, $x = 0$ is a regular singular point. Try
$$y = \sum_{n=0}^\infty a_nx^{n+\sigma}$$
with $a_0 \neq 0$. Substituting in, we have
$$\sum a_nx^{n+\sigma}\left[4(n + \sigma)(n + \sigma - 1) + 2(1 - x^2)(n + \sigma) - x^2\right] = 0.$$
By considering the coefficient of $x^{n+\sigma}$, we obtain the general recurrence relation
$$[4(n + \sigma)(n + \sigma - 1) + 2(n + \sigma)]a_n - [2(n - 2 + \sigma) + 1]a_{n-2} = 0.$$
Simplifying the equation gives
$$2(n + \sigma)(2n + 2\sigma - 1)a_n = (2n + 2\sigma - 3)a_{n-2}.$$
The $n = 0$ case gives the indicial equation for the index $\sigma$:
$$2\sigma(2\sigma - 1)a_0 = 0.$$
Since $a_0 \neq 0$, we must have $\sigma = 0$ or $\frac12$. The $\sigma = 0$ solution corresponds to an analytic ("Taylor series") solution, while $\sigma = \frac12$ corresponds to a non-analytic one.
When $\sigma = 0$, the recurrence relation becomes
$$2n(2n - 1)a_n = (2n - 3)a_{n-2}.$$
When $n = 0$, this gives $0\cdot a_0 = 0$. So $a_0$ is arbitrary. For $n > 0$, we can divide and obtain
$$a_n = \frac{2n - 3}{2n(2n - 1)}a_{n-2}.$$
We can see that $a_1 = 0$, and so are subsequent odd terms.
If $n = 2k$, i.e. $n$ is even, then
$$a_{2k} = \frac{4k - 3}{4k(4k - 1)}a_{2k-2}$$
$$y = a_0\left[1 + \frac{1}{4\cdot3}x^2 + \frac{5}{8\cdot7\cdot4\cdot3}x^4 + \cdots\right].$$
Note that we have only found one solution in this case.
Now when $\sigma = \frac{1}{2}$, we obtain
$$(2n+1)(2n)a_n = (2n-2)a_{n-2}.$$
When $n = 0$, we obtain $0 \cdot a_0 = 0$, so $a_0$ is arbitrary. To avoid confusion with the $a_0$ above, call it $b_0$ instead. When $n = 1$, we obtain $6a_1 = 0$, so $a_1 = 0$, and so are all subsequent odd terms. For even $n$,
$$a_n = \frac{n-1}{n(2n+1)}a_{n-2}.$$
So
$$y = b_0 x^{1/2}\left[1 + \frac{1}{2\cdot 5}x^2 + \frac{3}{2\cdot 5\cdot 4\cdot 9}x^4 + \cdots\right].$$
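We can verify both sets of printed coefficients in exact arithmetic (a sanity check that is not part of the notes; the function names are ours):

```python
from fractions import Fraction

# Quick exact check, not from the notes, of the two recurrences above.
def coeffs_sigma0(kmax):
    """a_n from 2n(2n-1) a_n = (2n-3) a_{n-2}, a_0 = 1, even n only."""
    a = {0: Fraction(1)}
    for n in range(2, 2 * kmax + 1, 2):
        a[n] = Fraction(2 * n - 3, 2 * n * (2 * n - 1)) * a[n - 2]
    return a

def coeffs_sigma_half(kmax):
    """a_n from (2n+1)(2n) a_n = (2n-2) a_{n-2}, a_0 = 1, even n only."""
    a = {0: Fraction(1)}
    for n in range(2, 2 * kmax + 1, 2):
        a[n] = Fraction(2 * n - 2, (2 * n + 1) * 2 * n) * a[n - 2]
    return a

assert coeffs_sigma0(2)[2] == Fraction(1, 4 * 3)
assert coeffs_sigma0(2)[4] == Fraction(5, 8 * 7 * 4 * 3)
assert coeffs_sigma_half(2)[2] == Fraction(1, 2 * 5)
assert coeffs_sigma_half(2)[4] == Fraction(3, 2 * 5 * 4 * 9)
```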
Resonance of solutions
Note that the indicial equation has two roots $\sigma_1, \sigma_2$. Consider the two different cases:

(i) If $\sigma_2 - \sigma_1$ is not an integer, then there are two linearly independent Frobenius solutions
$$y = \left[(x-x_0)^{\sigma_1}\sum_{n=0}^\infty a_n(x-x_0)^n\right] + \left[(x-x_0)^{\sigma_2}\sum_{n=0}^\infty b_n(x-x_0)^n\right].$$
As $x \to x_0$, $y \sim (x-x_0)^{\sigma_1}$, where $\mathrm{Re}(\sigma_1) \leq \mathrm{Re}(\sigma_2)$.
(ii) If $\sigma_2 - \sigma_1$ is an integer (including when they are equal), there is one solution of the form
$$y_1 = (x-x_0)^{\sigma_2}\sum_{n=0}^\infty a_n(x-x_0)^n$$
with $\sigma_2 \geq \sigma_1$. In this case, $\sigma = \sigma_1$ will not give a valid solution, as we will later see. Instead, the other solution is (usually) of the form
$$y_2 = \ln(x-x_0)\,y_1 + \sum_{n=0}^\infty b_n(x-x_0)^{n+\sigma_1}.$$
This form arises from resonance between the two solutions. But if the resonance somehow avoids itself, we can possibly end up with two regular Frobenius series solutions. We can substitute this form of solution into the differential equation to determine $b_n$.
Example. Consider $x^2y'' - xy = 0$. $x = 0$ is a regular singular point, and the equation is already in equidimensional form: $(x^2y'') - x(y) = 0$. Try
$$y = \sum_{n=0}^\infty a_n x^{n+\sigma}, \quad a_0 \neq 0.$$
We obtain
$$\sum a_n x^{n+\sigma}\left[(n+\sigma)(n+\sigma-1) - x\right] = 0.$$
The general recurrence relation is
$$(n+\sigma)(n+\sigma-1)a_n = a_{n-1}.$$
$n = 0$ gives the indicial equation
$$\sigma(\sigma - 1) = 0.$$
Then $\sigma = 0, 1$. We are guaranteed to have a solution with $\sigma = 1$. When $\sigma = 1$, the recurrence relation becomes
$$(n+1)na_n = a_{n-1}.$$
When $n = 0$, $0 \cdot a_0 = 0$, so $a_0$ is arbitrary. When $n > 0$, we obtain
$$a_n = \frac{1}{n(n+1)}a_{n-1} = \frac{1}{(n+1)(n!)^2}a_0.$$
So
$$y_1 = a_0 x\left[1 + \frac{x}{2} + \frac{x^2}{12} + \frac{x^3}{144} + \cdots\right].$$
When $\sigma = 0$, we obtain
$$n(n-1)a_n = a_{n-1}.$$
When $n = 0$, $0 \cdot a_0 = 0$ and $a_0$ is arbitrary. When $n = 1$, $0 \cdot a_1 = a_0$. However, $a_0 \neq 0$ by our initial constraint. Contradiction. So there is no solution of this form. (If we ignore the constraint that $a_0 \neq 0$, then we are forced to take $a_0 = 0$ and $a_1$ is arbitrary. But this gives exactly the same solution we found previously with $\sigma = 1$.)

The other solution is thus of the form
$$y_2 = y_1\ln x + \sum_{n=0}^\infty b_n x^n.$$
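We can confirm the closed form $a_n = a_0/((n+1)(n!)^2)$ for the $\sigma = 1$ solution by iterating the recurrence in exact arithmetic (a check that is not part of the notes; the helper name is ours):

```python
from fractions import Fraction
from math import factorial

# Check, not from the notes, that (n+1) n a_n = a_{n-1} with a_0 = 1
# gives a_n = 1 / ((n+1) (n!)^2).
def coeff(n):
    """a_n computed by iterating the sigma = 1 recurrence from a_0 = 1."""
    a = Fraction(1)
    for k in range(1, n + 1):
        a /= k * (k + 1)
    return a

for n in range(8):
    assert coeff(n) == Fraction(1, (n + 1) * factorial(n) ** 2)
```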
7 Directional derivative
7.1 Gradient vector
Consider a function $f(x, y)$ and a displacement $d\mathbf{s} = (dx, dy)$. The change in $f(x, y)$ during that displacement is
$$df = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy.$$
We can also write this as
$$df = (dx, dy)\cdot\left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right) = d\mathbf{s}\cdot\nabla f,$$
where
$$\nabla f = \operatorname{grad} f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right)$$
are the Cartesian components of the gradient of $f$.

We write $d\mathbf{s} = \hat{\mathbf{s}}\,ds$, where $|\hat{\mathbf{s}}| = 1$. Then

Definition (Directional derivative). The directional derivative of $f$ in the direction of $\hat{\mathbf{s}}$ is
$$\frac{df}{ds} = \hat{\mathbf{s}}\cdot\nabla f.$$

Definition (Gradient vector). The gradient vector $\nabla f$ is defined as the vector that satisfies
$$\frac{df}{ds} = \hat{\mathbf{s}}\cdot\nabla f.$$
Officially, we take this to be the definition of $\nabla f$. Then
$$\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right)$$
is a theorem that can be proved from this definition.
We know that the directional derivative is given by
$$\frac{df}{ds} = \hat{\mathbf{s}}\cdot\nabla f = |\nabla f|\cos\theta,$$
where $\theta$ is the angle between the displacement and $\nabla f$. Then when $\cos\theta$ is maximized, $\frac{df}{ds} = |\nabla f|$. So we know that

(i) $\nabla f$ has magnitude equal to the maximum rate of change of $f(x, y)$ in the $xy$-plane.

(ii) It has the direction in which $f$ increases most rapidly.

(iii) If $d\mathbf{s}$ is a displacement along a contour of $f$ (i.e. along a line on which $f$ is constant), then $\frac{df}{ds} = 0$, so $\hat{\mathbf{s}}\cdot\nabla f = 0$, i.e. $\nabla f$ is orthogonal to the contour.
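These three properties are easy to see numerically. The following sketch (not from the notes; the example function $f(x, y) = x^2 + 3y^2$ and the helper names are ours) checks that the directional derivative equals $|\nabla f|$ along the gradient and vanishes along the contour direction:

```python
import math

# Numerical illustration, not from the notes, using f(x, y) = x^2 + 3y^2.
def grad_f(x, y, h=1e-6):
    """Central-difference estimate of (f_x, f_y); exact gradient is (2x, 6y)."""
    f = lambda x, y: x**2 + 3 * y**2
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

gx, gy = grad_f(1.0, 2.0)           # exact gradient here is (2, 12)
norm = math.hypot(gx, gy)
along = (gx / norm, gy / norm)      # unit vector along grad f
tangent = (-gy / norm, gx / norm)   # unit vector along the contour

df_along = along[0] * gx + along[1] * gy        # should equal |grad f|
df_tangent = tangent[0] * gx + tangent[1] * gy  # should vanish

assert abs(df_along - norm) < 1e-6
assert abs(df_tangent) < 1e-6
```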
7.2 Stationary points
There is always (at least) one direction in which $\frac{df}{ds} = 0$, namely the direction parallel to the contour of $f$. However, local maxima and minima have $\frac{df}{ds} = 0$ for all directions, i.e. $\hat{\mathbf{s}}\cdot\nabla f = 0$ for all $\hat{\mathbf{s}}$, i.e. $\nabla f = \mathbf{0}$. Then we know that
$$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} = 0.$$
However, apart from maxima and minima, in 3 dimensions we also have saddle points. In general, we define a saddle point to be a point at which $\nabla f = \mathbf{0}$ but which is not a maximum or minimum.

When we plot contours of functions, the contours near maxima and minima are locally elliptical, while near saddle points they are locally hyperbolic. Also, contour lines cross at, and only at, saddle points.
7.3 Taylor series for multi-variable functions
Suppose we have a function $f(x, y)$ and a point $\mathbf{x}_0$. Now consider a finite displacement $\delta s$ along a straight line in the $x, y$-plane. Then
$$\delta s\frac{d}{ds} = \delta\mathbf{s}\cdot\nabla.$$
The Taylor series along the line is
$$f(s) = f(s_0 + \delta s) = f(s_0) + \delta s\frac{df}{ds} + \frac{1}{2}(\delta s)^2\frac{d^2f}{ds^2} + \cdots = f(s_0) + \delta\mathbf{s}\cdot\nabla f + \frac{1}{2}\delta s^2(\hat{\mathbf{s}}\cdot\nabla)(\hat{\mathbf{s}}\cdot\nabla)f + \cdots$$
We get
$$\delta\mathbf{s}\cdot\nabla f = \delta x\frac{\partial f}{\partial x} + \delta y\frac{\partial f}{\partial y} = (x - x_0)\frac{\partial f}{\partial x} + (y - y_0)\frac{\partial f}{\partial y}$$
and
$$\delta s^2(\hat{\mathbf{s}}\cdot\nabla)(\hat{\mathbf{s}}\cdot\nabla)f = (\delta\mathbf{s}\cdot\nabla)(\delta\mathbf{s}\cdot\nabla)f = \left(\delta x\frac{\partial}{\partial x} + \delta y\frac{\partial}{\partial y}\right)\left(\delta x\frac{\partial f}{\partial x} + \delta y\frac{\partial f}{\partial y}\right)$$
$$= \delta x^2\frac{\partial^2 f}{\partial x^2} + 2\,\delta x\,\delta y\frac{\partial^2 f}{\partial x\partial y} + \delta y^2\frac{\partial^2 f}{\partial y^2} = \begin{pmatrix}\delta x & \delta y\end{pmatrix}\begin{pmatrix}f_{xx} & f_{xy}\\ f_{yx} & f_{yy}\end{pmatrix}\begin{pmatrix}\delta x\\ \delta y\end{pmatrix}.$$

Definition (Hessian matrix). The Hessian matrix is the matrix
$$\nabla\nabla f = \begin{pmatrix}f_{xx} & f_{xy}\\ f_{yx} & f_{yy}\end{pmatrix}.$$
In conclusion, we have
$$f(x, y) = f(x_0, y_0) + (x - x_0)f_x + (y - y_0)f_y + \frac{1}{2}\left[(x - x_0)^2 f_{xx} + 2(x - x_0)(y - y_0)f_{xy} + (y - y_0)^2 f_{yy}\right] + \cdots$$
In general, the coordinate-free form is
$$f(\mathbf{x}) = f(\mathbf{x}_0) + \delta\mathbf{x}\cdot\nabla f(\mathbf{x}_0) + \frac{1}{2}\delta\mathbf{x}\cdot\nabla\nabla f\cdot\delta\mathbf{x},$$
where the dot in the last term represents a matrix product. Alternatively, in terms of the gradient operator (and real dot products), we have
$$f(\mathbf{x}) = f(\mathbf{x}_0) + \delta\mathbf{x}\cdot\nabla f(\mathbf{x}_0) + \frac{1}{2}\left[\nabla(\nabla f\cdot\delta\mathbf{x})\right]\cdot\delta\mathbf{x}.$$
7.4 Classification of stationary points
At a stationary point $\mathbf{x}_0$, we know that $\nabla f(\mathbf{x}_0) = \mathbf{0}$. So at a point near the stationary point,
$$f(\mathbf{x}) \approx f(\mathbf{x}_0) + \frac{1}{2}\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x},$$
where $H = \nabla\nabla f(\mathbf{x}_0)$ is the Hessian matrix.

At a minimum, every point near $\mathbf{x}_0$ has $f(\mathbf{x}) > f(\mathbf{x}_0)$, i.e. $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x} > 0$ for all $\delta\mathbf{x}$. We say $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x}$ is positive definite.

Similarly, at a maximum, $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x} < 0$ for all $\delta\mathbf{x}$. We say $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x}$ is negative definite.

At a saddle, $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x}$ is indefinite, i.e. it can be positive, negative or zero depending on the direction.

This, so far, is not helpful, since we do not have an easy way to know what sign $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x}$ could be. Now note that $H = \nabla\nabla f$ is symmetric (because $f_{xy} = f_{yx}$). So $H$ can be diagonalized (cf. Vectors and Matrices). With respect to the axes in which $H$ is diagonal (the principal axes), we have
$$\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x} = \begin{pmatrix}\delta x & \delta y & \cdots & \delta z\end{pmatrix}\begin{pmatrix}\lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n\end{pmatrix}\begin{pmatrix}\delta x\\ \delta y\\ \vdots\\ \delta z\end{pmatrix} = \lambda_1(\delta x)^2 + \lambda_2(\delta y)^2 + \cdots + \lambda_n(\delta z)^2,$$
where $\lambda_1, \lambda_2, \cdots, \lambda_n$ are the eigenvalues of $H$.
So for $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x}$ to be positive definite, we need $\lambda_i > 0$ for all $i$. Similarly, it is negative definite iff $\lambda_i < 0$ for all $i$. If the eigenvalues have mixed signs, then it is a saddle point. Finally, if there is at least one zero eigenvalue, then we need further analysis to determine the nature of the stationary point.

Apart from finding eigenvalues, another way to determine the definiteness is using the signature.
Definition (Signature of Hessian matrix). The signature of $H$ is the pattern of the signs of the subdeterminants:
$$\underbrace{f_{xx}}_{|H_1|},\qquad \underbrace{\begin{vmatrix}f_{xx} & f_{xy}\\ f_{yx} & f_{yy}\end{vmatrix}}_{|H_2|},\qquad \cdots,\qquad \underbrace{\begin{vmatrix}f_{xx} & f_{xy} & \cdots & f_{xz}\\ f_{yx} & f_{yy} & \cdots & f_{yz}\\ \vdots & \vdots & \ddots & \vdots\\ f_{zx} & f_{zy} & \cdots & f_{zz}\end{vmatrix}}_{|H_n| = |H|}$$
Proposition. $H$ is positive definite if and only if the signature is $+, +, \cdots, +$. $H$ is negative definite if and only if the signature is $-, +, \cdots, (-1)^n$. Otherwise, $H$ is indefinite.
7.5 Contours of f(x, y)
Consider $H$ in 2 dimensions, with axes in which $H$ is diagonal, so
$$H = \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix}.$$
Write $\mathbf{x} - \mathbf{x}_0 = (X, Y)$. Then near $\mathbf{x}_0$, $f = $ constant requires $\delta\mathbf{x}\cdot H\cdot\delta\mathbf{x} = $ constant, i.e.
$$\lambda_1 X^2 + \lambda_2 Y^2 = \text{constant}.$$
At a maximum or minimum, $\lambda_1$ and $\lambda_2$ have the same sign, so these contours are locally ellipses. At a saddle point, they have different signs and the contours are locally hyperbolae.
Example. Find and classify the stationary points of $f(x, y) = 4x^3 - 12xy + y^2 + 10y + 6$. We have
$$f_x = 12x^2 - 12y, \quad f_y = -12x + 2y + 10, \quad f_{xx} = 24x, \quad f_{xy} = -12, \quad f_{yy} = 2.$$
At stationary points, $f_x = f_y = 0$. So we have
$$12x^2 - 12y = 0, \qquad -12x + 2y + 10 = 0.$$
The first equation gives $y = x^2$. Substituting into the second equation, we obtain $x = 1, 5$ and $y = 1, 25$ respectively. So the stationary points are $(1, 1)$ and $(5, 25)$.

To classify them, first consider $(1, 1)$. Our Hessian matrix is
$$H = \begin{pmatrix}24 & -12\\ -12 & 2\end{pmatrix}.$$
Our signature is $|H_1| = 24$ and $|H_2| = 48 - 144 = -96$. Since we have a $+, -$ signature, this is an indefinite case and it is a saddle point.

At $(5, 25)$,
$$H = \begin{pmatrix}120 & -12\\ -12 & 2\end{pmatrix},$$
so $|H_1| = 120$ and $|H_2| = 240 - 144 = 96$. Since the signature is $+, +$, it is a minimum.

To draw the contours, we draw what the contours look like near the stationary points, and then try to join them together, noting that contours cross only at saddles.
(Contour sketch omitted: elliptical contours around the minimum at $(5, 25)$ and hyperbolic contours crossing at the saddle $(1, 1)$.)
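The signature test from the example is easy to mechanize. The following check (not part of the notes; the function names are ours) recomputes the subdeterminants at both stationary points:

```python
# Check, not from the notes, of the classification via the signature of the
# Hessian of f(x, y) = 4x^3 - 12xy + y^2 + 10y + 6.
def hessian(x, y):
    # f_xx = 24x, f_xy = f_yx = -12, f_yy = 2
    return [[24 * x, -12], [-12, 2]]

def signature(H):
    """Signs of the leading subdeterminants |H_1|, |H_2| (True means positive)."""
    h1 = H[0][0]
    h2 = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return (h1 > 0, h2 > 0)

assert signature(hessian(1, 1)) == (True, False)   # +, -  : saddle
assert signature(hessian(5, 25)) == (True, True)   # +, +  : minimum
```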
8 Systems of differential equations
8.1 Linear equations
Consider two dependent variables $y_1(t), y_2(t)$ related by
$$\dot y_1 = ay_1 + by_2 + f_1(t)$$
$$\dot y_2 = cy_1 + dy_2 + f_2(t).$$
We can write this in vector notation as
$$\begin{pmatrix}\dot y_1\\ \dot y_2\end{pmatrix} = \begin{pmatrix}a & b\\ c & d\end{pmatrix}\begin{pmatrix}y_1\\ y_2\end{pmatrix} + \begin{pmatrix}f_1\\ f_2\end{pmatrix},$$
or $\dot{\mathbf{Y}} = M\mathbf{Y} + \mathbf{F}$. We can convert this to a higher-order equation:
$$\ddot y_1 = a\dot y_1 + b\dot y_2 + \dot f_1 = a\dot y_1 + b(cy_1 + dy_2 + f_2) + \dot f_1 = a\dot y_1 + bcy_1 + d(\dot y_1 - ay_1 - f_1) + bf_2 + \dot f_1,$$
so
$$\ddot y_1 - (a + d)\dot y_1 + (ad - bc)y_1 = bf_2 - df_1 + \dot f_1,$$
and we know how to solve this. However, this actually complicates the equation.
So what we usually do is the other way round: if we have a higher-order equation, we can change it to a system of first-order equations. If $\ddot y + a\dot y + by = f$, write $y_1 = y$ and $y_2 = \dot y$, and let $\mathbf{Y} = \begin{pmatrix}y\\ \dot y\end{pmatrix}$. Our system of equations becomes
$$\dot y_1 = y_2$$
$$\dot y_2 = f - ay_2 - by_1,$$
or
$$\dot{\mathbf{Y}} = \begin{pmatrix}0 & 1\\ -b & -a\end{pmatrix}\begin{pmatrix}y_1\\ y_2\end{pmatrix} + \begin{pmatrix}0\\ f\end{pmatrix}.$$
Now consider the general equation
$$\dot{\mathbf{Y}} = M\mathbf{Y} + \mathbf{F}, \quad\text{i.e.}\quad \dot{\mathbf{Y}} - M\mathbf{Y} = \mathbf{F}.$$
We first look for a complementary solution $\mathbf{Y}_c = \mathbf{v}e^{\lambda t}$, where $\mathbf{v}$ is a constant vector. So we get
$$\lambda\mathbf{v} - M\mathbf{v} = \mathbf{0},$$
which we can write as
$$M\mathbf{v} = \lambda\mathbf{v}.$$
So $\lambda$ is an eigenvalue of $M$ and $\mathbf{v}$ is the corresponding eigenvector. We can solve this by solving the characteristic equation $\det(M - \lambda I) = 0$, and then, for each $\lambda$, finding the corresponding $\mathbf{v}$.
Example. Consider
$$\dot{\mathbf{Y}} = \begin{pmatrix}-4 & 24\\ 1 & -2\end{pmatrix}\mathbf{Y} + \begin{pmatrix}4\\ 1\end{pmatrix}e^t.$$
The characteristic equation of $M$ is
$$\begin{vmatrix}-4-\lambda & 24\\ 1 & -2-\lambda\end{vmatrix} = 0,$$
which gives $(\lambda + 8)(\lambda - 2) = 0$, so $\lambda = 2, -8$.

When $\lambda = 2$, $\mathbf{v}$ satisfies
$$\begin{pmatrix}-6 & 24\\ 1 & -4\end{pmatrix}\begin{pmatrix}v_1\\ v_2\end{pmatrix} = \mathbf{0},$$
and we obtain $\mathbf{v}_1 = \begin{pmatrix}4\\ 1\end{pmatrix}$.

When $\lambda = -8$, we have
$$\begin{pmatrix}4 & 24\\ 1 & 6\end{pmatrix}\begin{pmatrix}v_1\\ v_2\end{pmatrix} = \mathbf{0},$$
and $\mathbf{v}_2 = \begin{pmatrix}-6\\ 1\end{pmatrix}$. So the complementary solution is
$$\mathbf{Y} = A\begin{pmatrix}4\\ 1\end{pmatrix}e^{2t} + B\begin{pmatrix}-6\\ 1\end{pmatrix}e^{-8t}.$$
To plot the phase-space trajectories, we first consider the cases where $\mathbf{Y}$ is an eigenvector. If $\mathbf{Y}$ is a multiple of $\begin{pmatrix}4\\ 1\end{pmatrix}$, then it will keep moving outwards in the same direction. Similarly, if it is a multiple of $\begin{pmatrix}-6\\ 1\end{pmatrix}$, it will move towards the origin. We can then add more (curved) trajectories based on these two.

(Phase portraits omitted: straight-line trajectories along $\mathbf{v}_1 = (4, 1)$ pointing outwards and along $\mathbf{v}_2 = (-6, 1)$ pointing inwards, with curved trajectories interpolating between them.)
To find the particular integral, we try $\mathbf{Y}_p = \mathbf{u}e^t$. Then
$$\mathbf{u}e^t - \begin{pmatrix}-4 & 24\\ 1 & -2\end{pmatrix}\mathbf{u}e^t = \begin{pmatrix}4\\ 1\end{pmatrix}e^t, \quad\text{i.e.}\quad \begin{pmatrix}5 & -24\\ -1 & 3\end{pmatrix}\begin{pmatrix}u_1\\ u_2\end{pmatrix} = \begin{pmatrix}4\\ 1\end{pmatrix}.$$
Inverting the matrix (which has determinant $-9$) gives
$$\begin{pmatrix}u_1\\ u_2\end{pmatrix} = -\frac{1}{9}\begin{pmatrix}3 & 24\\ 1 & 5\end{pmatrix}\begin{pmatrix}4\\ 1\end{pmatrix} = -\begin{pmatrix}4\\ 1\end{pmatrix}.$$
So the general solution is
$$\mathbf{Y} = A\begin{pmatrix}4\\ 1\end{pmatrix}e^{2t} + B\begin{pmatrix}-6\\ 1\end{pmatrix}e^{-8t} - \begin{pmatrix}4\\ 1\end{pmatrix}e^t.$$
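We can cross-check this general solution by numerical integration (a check that is not part of the notes; the RK4 integrator and its names are ours). Taking $A = B = 1$ fixes the initial condition $\mathbf{Y}(0) = (-6, 1)$:

```python
import math

# Cross-check, not from the notes: integrate Ydot = M Y + (4, 1) e^t with RK4
# and compare against the analytic solution with A = B = 1.
M = [[-4.0, 24.0], [1.0, -2.0]]

def rhs(t, y):
    f = math.exp(t)
    return [M[0][0]*y[0] + M[0][1]*y[1] + 4*f,
            M[1][0]*y[0] + M[1][1]*y[1] + 1*f]

def rk4(y, t0, t1, n=2000):
    """Classical fourth-order Runge-Kutta from t0 to t1 in n steps."""
    h, t = (t1 - t0) / n, t0
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h/2, [y[i] + h/2*k1[i] for i in range(2)])
        k3 = rhs(t + h/2, [y[i] + h/2*k2[i] for i in range(2)])
        k4 = rhs(t + h, [y[i] + h*k3[i] for i in range(2)])
        y = [y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        t += h
    return y

def exact(t):
    # A (4,1) e^{2t} + B (-6,1) e^{-8t} - (4,1) e^t with A = B = 1
    return [4*math.exp(2*t) - 6*math.exp(-8*t) - 4*math.exp(t),
            math.exp(2*t) + math.exp(-8*t) - math.exp(t)]

y = rk4(exact(0.0), 0.0, 0.5)
assert all(abs(y[i] - exact(0.5)[i]) < 1e-6 for i in range(2))
```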
In general, there are three possible cases of $\dot{\mathbf{Y}} = M\mathbf{Y}$, corresponding to three different possibilities for the eigenvalues of $M$:

(i) If $\lambda_1, \lambda_2$ are real with opposite signs ($\lambda_1\lambda_2 < 0$), wlog $\lambda_1 > 0$, then there is a saddle, as in the example above.

(ii) If $\lambda_1, \lambda_2$ are real with $\lambda_1\lambda_2 > 0$, wlog $|\lambda_1| \geq |\lambda_2|$, then we have a node. (Phase portrait omitted: trajectories along and between the eigenvector directions $\mathbf{v}_1$ and $\mathbf{v}_2$.) If both $\lambda_1, \lambda_2 < 0$, the arrows point towards the intersection and we say there is a stable node. If both are positive, they point outwards and there is an unstable node.

(iii) If $\lambda_1, \lambda_2$ are complex conjugates, then we obtain a spiral. If $\mathrm{Re}(\lambda_{1,2}) < 0$, it spirals inwards; if $\mathrm{Re}(\lambda_{1,2}) > 0$, it spirals outwards. If $\mathrm{Re}(\lambda_{1,2}) = 0$, we have ellipses with common centres instead of spirals. We can determine whether the spiral is positive (counterclockwise) or negative (clockwise) by considering the eigenvectors. An example of this is given below.
8.2 Nonlinear dynamical systems
Consider the second-order autonomous system (i.e. $t$ does not explicitly appear in the forcing terms on the right)
$$\dot x = f(x, y)$$
$$\dot y = g(x, y).$$
It can be difficult to solve the equations, but we can learn a lot about phase-space
trajectories of these solutions by studying the equilibria and their stability.
Definition (Equilibrium point). An equilibrium point is a point $\mathbf{x}_0 = (x_0, y_0)$ at which $\dot x = \dot y = 0$.
Clearly this occurs when $f(x_0, y_0) = g(x_0, y_0) = 0$. We solve these simultaneously for $x_0, y_0$.
To determine the stability, write $x = x_0 + \xi$, $y = y_0 + \eta$. Then
$$\dot\xi = f(x_0 + \xi, y_0 + \eta) = f(x_0, y_0) + \xi\frac{\partial f}{\partial x}(\mathbf{x}_0) + \eta\frac{\partial f}{\partial y}(\mathbf{x}_0) + O(\xi^2, \eta^2).$$
So if $\xi, \eta \ll 1$,
$$\begin{pmatrix}\dot\xi\\ \dot\eta\end{pmatrix} = \begin{pmatrix}f_x & f_y\\ g_x & g_y\end{pmatrix}\begin{pmatrix}\xi\\ \eta\end{pmatrix}.$$
This is a linear system, and we can determine its character from the eigensolutions.
Example (Population dynamics - predator-prey system). Suppose that there are $x$ prey and $y$ predators. Then we have the following for the prey:
$$\dot x = \underbrace{\alpha x}_{\text{births $-$ deaths}} - \underbrace{\beta x^2}_{\text{natural competition}} - \underbrace{\gamma xy}_{\text{killed by predators}},$$
and the following for the predators:
$$\dot y = \underbrace{\varepsilon xy}_{\text{birth/survival rate}} - \underbrace{\delta y}_{\text{natural death rate}}.$$
For example, let
$$\dot x = 8x - 2x^2 - 2xy$$
$$\dot y = xy - y.$$
We find the fixed points: $x(8 - 2x - 2y) = 0$ gives $x = 0$ or $y = 4 - x$. We also want $y(x - 1) = 0$, which gives $y = 0$ or $x = 1$. So the fixed points are $(0, 0)$, $(4, 0)$ and $(1, 3)$.
Near $(0, 0)$, we have
$$\begin{pmatrix}\dot\xi\\ \dot\eta\end{pmatrix} = \begin{pmatrix}8 & 0\\ 0 & -1\end{pmatrix}\begin{pmatrix}\xi\\ \eta\end{pmatrix}.$$
We clearly have eigenvalues $8, -1$ with the standard basis vectors as eigenvectors.
Near $(4, 0)$, we have $x = 4 + \xi$, $y = \eta$. Instead of expanding partial derivatives, we can obtain the linearization from the equations directly:
$$\dot\xi = (4 + \xi)(8 - 8 - 2\xi - 2\eta) = -8\xi - 8\eta - 2\xi^2 - 2\xi\eta$$
$$\dot\eta = \eta(4 + \xi - 1) = 3\eta + \xi\eta.$$
Ignoring the second-order terms, we have
$$\begin{pmatrix}\dot\xi\\ \dot\eta\end{pmatrix} = \begin{pmatrix}-8 & -8\\ 0 & 3\end{pmatrix}\begin{pmatrix}\xi\\ \eta\end{pmatrix}.$$
The eigenvalues are $-8$ and $3$, with associated eigenvectors $(1, 0)$ and $(8, -11)$.
Near $(1, 3)$, we have $x = 1 + \xi$, $y = 3 + \eta$. So
$$\dot\xi = (1 + \xi)(8 - 2 - 2\xi - 6 - 2\eta) \approx -2\xi - 2\eta$$
$$\dot\eta = (3 + \eta)(1 + \xi - 1) \approx 3\xi.$$
So
$$\begin{pmatrix}\dot\xi\\ \dot\eta\end{pmatrix} = \begin{pmatrix}-2 & -2\\ 3 & 0\end{pmatrix}\begin{pmatrix}\xi\\ \eta\end{pmatrix}.$$
The eigenvalues are $-1 \pm i\sqrt 5$. Since they are complex with negative real part, this is a stable spiral.

We can determine the chirality of the spiral by considering what happens to a small perturbation to the right, $(\xi, 0)$ with $\xi > 0$. We have
$$\begin{pmatrix}\dot\xi\\ \dot\eta\end{pmatrix} = \begin{pmatrix}-2\xi\\ 3\xi\end{pmatrix}.$$
So $\mathbf{x}$ will head to the top-left, and the spiral is counterclockwise ("positive").
We can patch the three equilibrium points together to obtain the full phase portrait. (Phase portrait omitted: trajectories in the region $0 \leq x, y \leq 5$ spiralling into $(1, 3)$.) We see that $(1, 3)$ is a stable solution which almost all solutions spiral towards.
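The eigenvalues of the linearization at $(1, 3)$ can be checked directly from the trace and determinant of the $2\times 2$ matrix (a quick check, not part of the notes):

```python
import cmath

# Check, not from the notes: the linearization [[-2, -2], [3, 0]] at (1, 3)
# should have eigenvalues -1 +/- i sqrt(5).
a, b, c, d = -2.0, -2.0, 3.0, 0.0
tr, det = a + d, a * d - b * c          # trace -2, determinant 6
disc = cmath.sqrt(tr * tr - 4 * det)    # sqrt(4 - 24) = sqrt(-20)
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2

assert abs(lam1 - complex(-1, 5 ** 0.5)) < 1e-12
assert abs(lam2 - complex(-1, -(5 ** 0.5))) < 1e-12
assert lam1.real < 0  # negative real part: stable spiral
```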
9 Partial differential equations (PDEs)
9.1 First-order wave equation
Consider the equation of the form
$$\frac{\partial y}{\partial t} = c\frac{\partial y}{\partial x},$$
with $c$ a constant and $y$ a function of $x$ and $t$. This is known as the (first-order) wave equation. We will later see that solutions correspond to waves travelling in one direction. We write this as
$$\frac{\partial y}{\partial t} - c\frac{\partial y}{\partial x} = 0.$$
Recall that along a path $x = x(t)$, so that $y = y(x(t), t)$,
$$\frac{dy}{dt} = \frac{\partial y}{\partial x}\frac{dx}{dt} + \frac{\partial y}{\partial t} = \frac{dx}{dt}\frac{\partial y}{\partial x} + c\frac{\partial y}{\partial x}$$
by the chain rule. Now we choose a path along which
$$\frac{dx}{dt} = -c. \quad (1)$$
Along such paths,
$$\frac{dy}{dt} = 0. \quad (2)$$
So we have replaced the original partial differential equation with a pair of ordinary differential equations.
Each path that satisfies (1) can be described by $x = x_0 - ct$, where $x_0$ is a constant. We can write this as $x_0 = x + ct$. From (2), $y$ is constant along each of these paths. So suppose, for each $x_0$, the value of $y$ along the path $x_0 = x + ct$ is given by $f(x_0)$, where $f$ is an arbitrary function. Then the solution to the wave equation is
$$y = f(x + ct).$$
By differentiating this directly, we can easily check that every function of this form is a solution to the wave equation.
The contours of $y$ look rather boring. (Figure omitted: the contours are the straight lines $x + ct = \text{constant}$ in the $(x, t)$-plane.) Note that as we move up the time axis, we are simply taking the $t = 0$ solution and translating it to the left.
The paths we've identified are called the "characteristics" of the wave equation. In this particular example, $y$ is constant along the characteristics (because the equation is unforced).

We usually also have initial conditions, e.g.
$$y(x, 0) = x^2 - 3.$$
Since we know that $y = f(x + ct)$ and $f(x) = x^2 - 3$, $y$ must be given by
$$y = (x + ct)^2 - 3.$$
We can plot the $xy$ curve for different values of $t$. (Figure omitted.) We see that each solution is just a translation of the $t = 0$ version.
We can also solve forced equations, such as
$$\frac{\partial y}{\partial t} + 5\frac{\partial y}{\partial x} = e^{-t}, \quad y(x, 0) = e^{-x^2}.$$
Along each path $x_0 = x - 5t$, we have $\frac{dy}{dt} = e^{-t}$. So $y = f(x_0) - e^{-t}$ for some function $f$. Using the initial condition provided: at $t = 0$, $y = f(x_0) - 1$ and $x = x_0$. So $f(x_0) - 1 = e^{-x_0^2}$, i.e. $f(x_0) = 1 + e^{-x_0^2}$. So
$$y = 1 + e^{-(x - 5t)^2} - e^{-t}.$$
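We can verify this solution by finite differences (a numerical check, not part of the notes):

```python
import math

# Finite-difference check, not from the notes, that
# y = 1 + exp(-(x - 5t)^2) - exp(-t) solves y_t + 5 y_x = exp(-t)
# with y(x, 0) = exp(-x^2).
def y(x, t):
    return 1 + math.exp(-(x - 5 * t) ** 2) - math.exp(-t)

x, t, h = 0.7, 0.3, 1e-5
y_t = (y(x, t + h) - y(x, t - h)) / (2 * h)  # central difference in t
y_x = (y(x + h, t) - y(x - h, t)) / (2 * h)  # central difference in x

assert abs(y_t + 5 * y_x - math.exp(-t)) < 1e-7
assert abs(y(x, 0) - math.exp(-x ** 2)) < 1e-12
```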
9.2 Second-order wave equation
We consider equations in the following form:
$$\frac{\partial^2 y}{\partial t^2} = c^2\frac{\partial^2 y}{\partial x^2}.$$
This is the second-order wave equation, often known as the "hyperbolic equation" because its form resembles that of a hyperbola (which has the form $x^2 - b^2y^2 = 0$). However, the differential equation has no connection with hyperbolae whatsoever.
This equation models an actual wave in one dimension. Consider a horizontal string along the $x$-axis, and let $y(x, t)$ be the vertical displacement of the string at the point $x$ at time $t$.

Suppose that $\rho(x)$ is the mass per unit length of the string. Then the restoring force on the string
$$ma = \rho\frac{\partial^2 y}{\partial t^2}$$
is proportional to the second derivative $\frac{\partial^2 y}{\partial x^2}$, so we obtain this wave equation.

(Why is it proportional to the second derivative? It certainly cannot be proportional to $y$, because we get no force if we just move the whole string upwards. It also cannot be proportional to $\partial y/\partial x$: if we have a straight slope, then the force pulling upwards is the same as the force pulling downwards, and we should have no net force. We have a force only if the string is curved, and curvature is measured by the second derivative.)
To solve the equation, suppose that $c$ is constant. Then we can write
$$\frac{\partial^2 y}{\partial t^2} - c^2\frac{\partial^2 y}{\partial x^2} = 0 \quad\Leftrightarrow\quad \left(\frac{\partial}{\partial t} + c\frac{\partial}{\partial x}\right)\left(\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right)y = 0.$$
If $y = f(x + ct)$, then the second operator annihilates it, since $\left(\frac{\partial}{\partial t} - c\frac{\partial}{\partial x}\right)f(x + ct) = cf' - cf' = 0$, and applying the first operator to $0$ still gives $0$. So $y = f(x + ct)$ is a solution.
Since the operators are commutative, $y = g(x - ct)$ is also a solution. Since the equation is linear, the general solution is
$$y = f(x + ct) + g(x - ct).$$
This shows that the solution is a superposition of waves travelling to the left and waves travelling to the right.

We can show that this is indeed the most general solution by substituting $\xi = x + ct$ and $\eta = x - ct$. We can show, using the chain rule, that
$$y_{tt} - c^2y_{xx} = -4c^2y_{\xi\eta} = 0.$$
Integrating twice gives $y = f(\xi) + g(\eta)$.
How many boundary conditions do we need for a unique solution? In ODEs, we simply count the order of the equation. In PDEs, we have to count over all variables. In this case, we need 2 boundary conditions and 2 initial conditions. For example, we can have:

Initial conditions: at $t = 0$,
$$y = \frac{1}{1 + x^2}, \qquad \frac{\partial y}{\partial t} = 0.$$
Boundary conditions: $y \to 0$ as $x \to \pm\infty$.

We know that the solution has the form
$$y = f(x + ct) + g(x - ct).$$
The first initial condition gives
$$f(x) + g(x) = \frac{1}{1 + x^2}. \quad (1)$$
The second initial condition gives
$$\frac{\partial y}{\partial t} = cf'(x) - cg'(x) = 0. \quad (2)$$
From (2), we know that $f' = g'$, so $f$ and $g$ differ by a constant. wlog, we can assume that they are in fact equal, since if we had, say, $f = g + 2$, then we could let $y = (f(x + ct) - 1) + (g(x - ct) + 1)$ instead, and the new $f$ and $g$ are equal.

From (1), we must have
$$f(x) = g(x) = \frac{1}{2(1 + x^2)}.$$
So, overall,
$$y = \frac{1}{2}\left[\frac{1}{1 + (x + ct)^2} + \frac{1}{1 + (x - ct)^2}\right],$$
where we substituted $x + ct$ and $x - ct$ for $x$ in $f$ and $g$ respectively.
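This d'Alembert-type solution can be verified numerically by finite differences (a check that is not part of the notes; the choice $c = 2$ is arbitrary):

```python
# Finite-difference check, not from the notes, of
# y = (1/2)[1/(1 + (x + ct)^2) + 1/(1 + (x - ct)^2)] with c = 2.
c = 2.0

def y(x, t):
    return 0.5 * (1 / (1 + (x + c * t) ** 2) + 1 / (1 + (x - c * t) ** 2))

x, t, h = 0.4, 0.6, 1e-4
y_tt = (y(x, t + h) - 2 * y(x, t) + y(x, t - h)) / h ** 2
y_xx = (y(x + h, t) - 2 * y(x, t) + y(x - h, t)) / h ** 2
y_t0 = (y(x, h) - y(x, -h)) / (2 * h)  # time derivative at t = 0

assert abs(y_tt - c ** 2 * y_xx) < 1e-5   # wave equation holds
assert abs(y(x, 0) - 1 / (1 + x ** 2)) < 1e-12  # first initial condition
assert abs(y_t0) < 1e-10                  # second initial condition
```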
9.3 The diffusion equation
Heat conduction in a solid in one dimension is modelled by the diffusion equation
$$\frac{\partial T}{\partial t} = \kappa\frac{\partial^2 T}{\partial x^2}.$$
This is known as a parabolic PDE (parabolic because it resembles $y = ax^2$). Here $T(x, t)$ is the temperature and the constant $\kappa$ is the diffusivity.
Example. Consider an infinitely long bar heated at one end ($x = 0$). Note that in general the "velocity" $\partial T/\partial t$ is proportional to the curvature $\partial^2 T/\partial x^2$, while in the wave equation it is the "acceleration" that is proportional to the curvature. As a result, instead of being oscillatory, the diffusion equation is dissipative, and all unevenness simply decays away.

Suppose
$$T(x, 0) = 0, \qquad T(0, t) = H(t) = \begin{cases}0 & t < 0\\ 1 & t > 0\end{cases}, \qquad T(x, t) \to 0 \text{ as } x \to \infty.$$
In words, this says that the rod is initially cool (at temperature 0), and then one end is heated up after $t = 0$.
There is a similarity solution of the diffusion equation, valid on an infinite domain (or our semi-infinite domain), in which
$$T(x, t) = \theta(\eta), \qquad \eta = \frac{x}{2\sqrt{\kappa t}}.$$
Applying the chain rule, we have
$$\frac{\partial T}{\partial x} = \frac{d\theta}{d\eta}\frac{\partial\eta}{\partial x} = \frac{1}{2\sqrt{\kappa t}}\theta'(\eta),$$
$$\frac{\partial^2 T}{\partial x^2} = \frac{1}{2\sqrt{\kappa t}}\frac{d\theta'}{d\eta}\frac{\partial\eta}{\partial x} = \frac{1}{4\kappa t}\theta''(\eta),$$
$$\frac{\partial T}{\partial t} = \frac{d\theta}{d\eta}\frac{\partial\eta}{\partial t} = -\frac{1}{2}\frac{x}{2\sqrt{\kappa}}t^{-3/2}\theta'(\eta) = -\frac{\eta}{2t}\theta'(\eta).$$
Putting this into the diffusion equation yields
$$-\frac{\eta}{2t}\theta' = \kappa\frac{1}{4\kappa t}\theta'' \quad\Longrightarrow\quad \theta'' + 2\eta\theta' = 0.$$
This is an ordinary differential equation for $\theta(\eta)$. It can be seen as a first-order equation for $\theta'$ with non-constant coefficients. Use the integrating factor
$$\mu = \exp\left(\int 2\eta\,d\eta\right) = e^{\eta^2}.$$
So
$$(e^{\eta^2}\theta')' = 0 \quad\Longrightarrow\quad \theta' = Ae^{-\eta^2} \quad\Longrightarrow\quad \theta = A\int_0^\eta e^{-u^2}\,du + B = \alpha\operatorname{erf}(\eta) + B,$$
where $\operatorname{erf}(\eta) = \frac{2}{\sqrt\pi}\int_0^\eta e^{-u^2}\,du$ (familiar from statistics), and $\operatorname{erf}(\eta) \to 1$ as $\eta \to \infty$.
Now look at the boundary and initial conditions (recall $\eta = x/(2\sqrt{\kappa t})$) and express them in terms of $\eta$. As $x \to 0$, we have $\eta \to 0$, so $\theta = 1$ at $\eta = 0$. Also, if $x \to \infty$ or $t \to 0^+$, then $\eta \to \infty$, so $\theta \to 0$ as $\eta \to \infty$.

So $\theta(0) = 1$ gives $B = 1$. Colloquially, $\theta(\infty) = 0$ gives $\alpha = -1$. So $\theta = 1 - \operatorname{erf}(\eta)$. This is also sometimes written as $\operatorname{erfc}(\eta)$, the complementary error function of $\eta$. So
$$T = \operatorname{erfc}\left(\frac{x}{2\sqrt{\kappa t}}\right).$$
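The similarity solution can be checked against the PDE directly, since `math.erfc` is available in Python's standard library (a numerical check, not part of the notes; the value $\kappa = 0.5$ is an arbitrary choice):

```python
import math

# Finite-difference check, not from the notes, that
# T = erfc(x / (2 sqrt(kappa t))) satisfies T_t = kappa T_xx, with kappa = 0.5.
kappa = 0.5

def T(x, t):
    return math.erfc(x / (2 * math.sqrt(kappa * t)))

x, t, h = 1.0, 2.0, 1e-4
T_t = (T(x, t + h) - T(x, t - h)) / (2 * h)            # central difference in t
T_xx = (T(x + h, t) - 2 * T(x, t) + T(x - h, t)) / h ** 2  # second difference in x

assert abs(T_t - kappa * T_xx) < 1e-6
```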
In general, at any particular fixed time $t_0$, $T(x)$ decays monotonically from $1$ at $x = 0$ towards $0$ (figure omitted), with decay length $O(\sqrt{\kappa t})$. So if we actually have a finite bar of length $L$, we can treat it as infinite as long as $\sqrt{\kappa t} \ll L$, i.e. $t \ll L^2/\kappa$.