5 Ordinary differential equations

IB Numerical Analysis



5.2 One-step methods
There are many ways we can classify numerical methods. One important classification is one-step versus multi-step methods. In one-step methods, the value of $y_{n+1}$ depends only on the previous iteration $t_n$ and $y_n$. In multi-step methods, we are allowed to look back further in time and use further results.
Definition ((Explicit) one-step method). A numerical method is (explicit) one-step if $y_{n+1}$ depends only on $t_n$ and $y_n$, i.e.
$$y_{n+1} = \phi_h(t_n, y_n)$$
for some function $\phi_h: \mathbb{R} \times \mathbb{R}^N \to \mathbb{R}^N$.
We will later see what “explicit” means.
The simplest one-step method one can imagine is Euler’s method.
Definition (Euler's method). Euler's method uses the formula
$$y_{n+1} = y_n + hf(t_n, y_n).$$
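As a concrete illustration (not part of the notes), here is a minimal sketch of Euler's method in Python. The test problem $y' = y$, $y(0) = 1$ is my own choice, picked because its exact solution $e^t$ is known.

```python
def euler(f, t0, y0, h, n_steps):
    """Euler's method: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    ys = [y0]
    for _ in range(n_steps):
        y = y + h * f(t, y)  # one explicit Euler step
        t = t + h
        ys.append(y)
    return ys

# Example: y' = y, y(0) = 1, integrated to t = 1; exact answer is e.
approx = euler(lambda t, y: y, 0.0, 1.0, 0.01, 100)
```

With $h = 0.01$ the final value is about $2.7048$, roughly $0.013$ away from $e \approx 2.71828$, consistent with an $O(h)$ global error.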
We want to show that this method “converges”. First of all, we need to make
precise the notion of “convergence”. The Lipschitz condition means there is a
unique solution to the differential equation. So we would want the numerical
solution to be able to approximate the actual solution to arbitrary accuracy as
long as we take a small enough h.
Definition (Convergence of numerical method). For each $h > 0$, we can produce a sequence of discrete values $y_n$ for $n = 0, \cdots, [T/h]$, where $[T/h]$ is the integer part of $T/h$. A method converges if, as $h \to 0$ and $nh \to t$ (hence $n \to \infty$), we get
$$y_n \to y(t),$$
where $y$ is the true solution to the differential equation. Moreover, we require the convergence to be uniform in $t$.
We now prove that Euler’s method converges. We will only do this properly for
Euler’s method, since the algebra quickly becomes tedious and incomprehensible.
However, the proof strategy is sufficiently general that it can be adapted to most
other methods.
Theorem (Convergence of Euler's method).
(i) For all $t \in [0, T]$, we have
$$\lim_{\substack{h \to 0\\ nh \to t}} y_n - y(t) = 0.$$
(ii) Let $\lambda$ be the Lipschitz constant of $f$. Then there exists a $c \geq 0$ such that
$$\|e_n\| \leq ch \frac{e^{\lambda T} - 1}{\lambda}$$
for all $0 \leq n \leq [T/h]$, where $e_n = y_n - y(t_n)$.
Note that the bound in the second part is uniform. So this immediately gives
the first part of the theorem.
Proof.
There are two parts to proving this. We first look at the local truncation
error. This is the error we would get at each step assuming we got the previous
steps right. More precisely, we write
$$y(t_{n+1}) = y(t_n) + hf(t_n, y(t_n)) + R_n,$$
and $R_n$ is the local truncation error. For Euler's method, it is easy to get $R_n$, since $f(t_n, y(t_n)) = y'(t_n)$ by definition. So this is just the Taylor series expansion of $y$. We can write $R_n$ as the integral remainder of the Taylor series,
$$R_n = \int_{t_n}^{t_{n+1}} (t_{n+1} - \theta) y''(\theta)\, \mathrm{d}\theta.$$
By some careful analysis, we get
$$\|R_n\|_\infty \leq ch^2,$$
where $c = \frac{1}{2} \|y''\|_\infty$.
This is the easy part, and tends to go rather smoothly even for more complicated
methods.
Once we have bounded the local truncation error, we patch them together to
get the actual error. We can write
$$\begin{aligned}
e_{n+1} &= y_{n+1} - y(t_{n+1})\\
&= \big(y_n + hf(t_n, y_n)\big) - \big(y(t_n) + hf(t_n, y(t_n)) + R_n\big)\\
&= (y_n - y(t_n)) + h\big(f(t_n, y_n) - f(t_n, y(t_n))\big) - R_n.
\end{aligned}$$
Taking the infinity norm, we get
$$\begin{aligned}
\|e_{n+1}\| &\leq \|y_n - y(t_n)\| + h\|f(t_n, y_n) - f(t_n, y(t_n))\| + \|R_n\|\\
&\leq \|e_n\| + \lambda h\|e_n\| + ch^2\\
&= (1 + \lambda h)\|e_n\| + ch^2.
\end{aligned}$$
This is valid for all $n \geq 0$. We also know $\|e_0\| = 0$. Doing some algebra, we get
$$\|e_n\| \leq ch^2 \sum_{j=0}^{n-1} (1 + \lambda h)^j \leq \frac{ch}{\lambda}\big((1 + \lambda h)^n - 1\big).$$
Finally, we have
$$1 + \lambda h \leq e^{\lambda h},$$
since $1 + \lambda h$ is the first two terms of the Taylor series of $e^{\lambda h}$, and the other terms are positive. So
$$(1 + \lambda h)^n \leq e^{\lambda hn} \leq e^{\lambda T}.$$
So we obtain the bound
$$\|e_n\| \leq ch\, \frac{e^{\lambda T} - 1}{\lambda}.$$
Then this tends to 0 as we take $h \to 0$. So the method converges.
This works as long as $\lambda \neq 0$. However, $\lambda = 0$ is the easy case, since it is just integration. We can either check this case directly, or use the fact that
$$\frac{e^{\lambda T} - 1}{\lambda} \to T \quad \text{as } \lambda \to 0.$$
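The $O(h)$ bound can be seen in action numerically: halving $h$ should roughly halve the global error at a fixed time $T$. A quick sketch (the test problem $y' = y$ on $[0, 1]$ is my own choice, not from the notes):

```python
import math

def euler_final(f, y0, T, n):
    """Run Euler's method with h = T/n and return y_n, approximating y(T)."""
    h = T / n
    t, y = 0.0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# Global error at T = 1 for y' = y, y(0) = 1 (exact answer e).
errors = [abs(euler_final(lambda t, y: y, 1.0, 1.0, n) - math.e)
          for n in (100, 200, 400)]
# Consecutive error ratios: each should be close to 2, i.e. error ~ h.
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```

Each ratio comes out near 2, matching the first-order bound $\|e_n\| \leq ch(e^{\lambda T} - 1)/\lambda$.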
The same proof strategy works for most numerical methods, but the algebra
will be much messier. We will not do those in full. We, however, take note of
some useful terminology:
Definition (Local truncation error). For a general (multi-step) numerical method
$$y_{n+1} = \phi(t_n, y_0, y_1, \cdots, y_n),$$
the local truncation error is
$$\eta_{n+1} = y(t_{n+1}) - \phi(t_n, y(t_0), y(t_1), \cdots, y(t_n)).$$
This is the error we would make at the $(n + 1)$th step if we had accurate values for the first $n$ steps.
For Euler’s method, the local truncation error is just the Taylor series
remainder term.
Definition (Order). The order of a numerical method is the largest $p \geq 1$ such that $\eta_{n+1} = O(h^{p+1})$.
The Euler method has order 1. Notice that this is one less than the power of the local truncation error, since when we look at the global error, we drop a power, and only have $e_n \sim h$.
Let’s try to get a little bit beyond Euler’s method.
Definition ($\theta$-method). For $\theta \in [0, 1]$, the $\theta$-method is
$$y_{n+1} = y_n + h\big(\theta f(t_n, y_n) + (1 - \theta) f(t_{n+1}, y_{n+1})\big).$$
If we put $\theta = 1$, then we just get Euler's method. The other two most common choices of $\theta$ are $\theta = 0$ (backward Euler) and $\theta = \frac{1}{2}$ (trapezoidal rule).
Note that for $\theta \neq 1$, we get an implicit method, since $y_{n+1}$ does not just appear on the left-hand side of the equality: our formula for $y_{n+1}$ involves $y_{n+1}$ itself! This means, in general, unlike the Euler method, we can't just write down the value of $y_{n+1}$ given the value of $y_n$. Instead, we have to treat the formula as $N$ (in general) non-linear equations, and solve them to find $y_{n+1}$!
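For instance, one step of backward Euler ($\theta = 0$) requires solving $y_{n+1} = y_n + hf(t_{n+1}, y_{n+1})$ for $y_{n+1}$. A minimal sketch of one way to do this (my own illustration, for the scalar case) is fixed-point iteration, which converges when $\lambda h < 1$; in practice Newton's method is the more robust choice.

```python
def backward_euler_step(f, t_next, y_n, h, tol=1e-12, max_iter=100):
    """Solve y = y_n + h * f(t_next, y) for y by fixed-point iteration.

    The iteration is a contraction (hence converges) when h times the
    Lipschitz constant of f is less than 1.
    """
    y = y_n  # initial guess: the previous value
    for _ in range(max_iter):
        y_new = y_n + h * f(t_next, y)
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# One step for y' = -5y from y(0) = 1 with h = 0.1.
# The implicit equation y1 = 1 - 0.5*y1 has exact solution 2/3.
y1 = backward_euler_step(lambda t, y: -5.0 * y, 0.1, 1.0, 0.1)
```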
In the past, people did not like to use this, because they didn't have computers, or computers were too slow. It is tedious to have to solve these equations in every step of the method. Nowadays, these are becoming more and more popular because it is getting easier to solve equations, and $\theta$-methods have some huge theoretical advantages (which we do not have time to get into).
We now look at the error of the $\theta$-method. We have
$$\eta_{n+1} = y(t_{n+1}) - y(t_n) - h\big(\theta y'(t_n) + (1 - \theta) y'(t_{n+1})\big).$$
We expand all terms about $t_n$ with Taylor's series to obtain
$$\eta_{n+1} = \left(\theta - \frac{1}{2}\right) h^2 y''(t_n) + \left(\frac{1}{2}\theta - \frac{1}{3}\right) h^3 y'''(t_n) + O(h^4).$$
We see that $\theta = \frac{1}{2}$ gives us an order 2 method. Otherwise, we get an order 1 method.
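These orders can be checked numerically. A sketch under my own test setup ($y' = -y$ on $[0, 1]$, implicit part handled by fixed-point iteration): halving $h$ should divide the error by about 2 for Euler ($\theta = 1$) and by about 4 for the trapezoidal rule ($\theta = \frac{1}{2}$).

```python
import math

def theta_method(theta, f, y0, T, n):
    """Integrate y' = f(t, y) on [0, T] with n steps of the theta-method."""
    h = T / n
    t, y = 0.0, y0
    for _ in range(n):
        # Solve y_next = y + h*(theta*f(t, y) + (1-theta)*f(t+h, y_next))
        # by fixed-point iteration (a contraction for small h).
        y_next = y
        for _ in range(100):
            y_next = y + h * (theta * f(t, y) + (1 - theta) * f(t + h, y_next))
        y = y_next
        t += h
    return y

f = lambda t, y: -y  # exact solution: y(t) = e^{-t}
err = lambda theta, n: abs(theta_method(theta, f, 1.0, 1.0, n) - math.exp(-1.0))

euler_ratio = err(1.0, 100) / err(1.0, 200)  # ~2, i.e. order 1
trap_ratio = err(0.5, 100) / err(0.5, 200)   # ~4, i.e. order 2
```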