5 Ordinary differential equations

IB Numerical Analysis



5.3 Multi-step methods
We can try to make our methods more efficient by making use of previous values of $y_n$ instead of just the most recent one. One common example is the AB2 method:
Definition (2-step Adams-Bashforth method). The 2-step Adams-Bashforth (AB2) method has
$$y_{n+2} = y_{n+1} + \frac{1}{2}h\left(3f(t_{n+1}, y_{n+1}) - f(t_n, y_n)\right).$$
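To see the method in action, here is a minimal Python sketch (the function `ab2` and its interface are ours, not from the notes) that integrates $y' = f(t, y)$ with AB2. Note that the first step $y_1$ has to be produced by a one-step method; this bootstrapping issue is discussed below.

```python
import numpy as np

def ab2(f, t0, y0, h, n_steps):
    """Integrate y' = f(t, y) with the 2-step Adams-Bashforth method."""
    t = t0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    y[1] = y[0] + h * f(t[0], y[0])  # bootstrap y_1 with one Euler step
    for n in range(n_steps - 1):
        # y_{n+2} = y_{n+1} + (h/2) (3 f(t_{n+1}, y_{n+1}) - f(t_n, y_n))
        y[n + 2] = y[n + 1] + h / 2 * (3 * f(t[n + 1], y[n + 1])
                                       - f(t[n], y[n]))
    return t, y

# Example: y' = -y, y(0) = 1, exact solution e^{-t}
t, y = ab2(lambda t, y: -y, 0.0, 1.0, h=0.01, n_steps=100)
print(abs(y[-1] - np.exp(-1)))  # small; the global error is O(h^2)
```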
The AB2 method is a particular example of the general class of Adams-Bashforth methods. In general, a multi-step method is defined as follows:
Definition (Multi-step method). An $s$-step numerical method is given by
$$\sum_{\ell=0}^s \rho_\ell y_{n+\ell} = h \sum_{\ell=0}^s \sigma_\ell f(t_{n+\ell}, y_{n+\ell}).$$
This formula is used to find the value of $y_{n+s}$ given the others.
One point to note is that we get the same method if we multiply all the constants $\rho_\ell, \sigma_\ell$ by a non-zero constant. By convention, we normalize this by setting $\rho_s = 1$. Then we can alternatively write this as
$$y_{n+s} = h \sum_{\ell=0}^{s} \sigma_\ell f(t_{n+\ell}, y_{n+\ell}) - \sum_{\ell=0}^{s-1} \rho_\ell y_{n+\ell}.$$
This method is an implicit method if $\sigma_s \neq 0$. Otherwise, it is explicit.
Note that this method is linear in the sense that the coefficients $\rho_\ell$ and $\sigma_\ell$ appear linearly, outside the $f$s. Later we will see more complicated numerical methods where these coefficients appear inside the arguments of $f$.
For multi-step methods, we have a slight problem to solve. In a one-step method, we are given $y_0$, and this allows us to immediately apply the one-step method to get higher values of $y_n$. However, for an $s$-step method, we need to use some other (possibly one-step) method to obtain $y_1, \cdots, y_{s-1}$ before we can get started.
Fortunately, we only need to apply the one-step method a fixed, small number of times, even as $h \to 0$. So the accuracy of the one-step method at the start does not matter too much.
We now study the properties of a general multi-step method. The first thing
we can talk about is the order:
Theorem. An $s$-step method has order $p$ ($p \geq 1$) if and only if
$$\sum_{\ell=0}^s \rho_\ell = 0$$
and
$$\sum_{\ell=0}^s \rho_\ell \ell^k = k \sum_{\ell=0}^s \sigma_\ell \ell^{k-1}$$
for $k = 1, \cdots, p$, where $0^0 = 1$.
This is just a rather technical result coming directly from the definition.
Proof. The local truncation error is
$$\sum_{\ell=0}^s \rho_\ell y(t_{n+\ell}) - h \sum_{\ell=0}^s \sigma_\ell y'(t_{n+\ell}).$$
We now expand the $y$ and $y'$ about $t_n$, and obtain
$$\left(\sum_{\ell=0}^s \rho_\ell\right) y(t_n) + \sum_{k=1}^\infty \frac{h^k}{k!} \left(\sum_{\ell=0}^s \rho_\ell \ell^k - k \sum_{\ell=0}^s \sigma_\ell \ell^{k-1}\right) y^{(k)}(t_n).$$
This is $O(h^{p+1})$ under the given conditions.
Example (AB2). In the two-step Adams-Bashforth method, we see that the conditions hold for $p = 2$ but not $p = 3$. So the order is 2.
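The order conditions are easy to check mechanically. Here is a small sketch in exact rational arithmetic (ours, not from the notes); conveniently, Python's convention `0**0 == 1` matches the one in the theorem.

```python
from fractions import Fraction

# AB2: rho(w) = w^2 - w, sigma(w) = (3/2)w - 1/2, so
# (rho_0, rho_1, rho_2) = (0, -1, 1) and (sigma_0, sigma_1, sigma_2) = (-1/2, 3/2, 0)
rho = [Fraction(0), Fraction(-1), Fraction(1)]
sigma = [Fraction(-1, 2), Fraction(3, 2), Fraction(0)]

def order_condition(k):
    # sum_l rho_l l^k == k sum_l sigma_l l^(k-1); Python uses 0**0 == 1
    lhs = sum(r * l**k for l, r in enumerate(rho))
    rhs = k * sum(s * l**(k - 1) for l, s in enumerate(sigma))
    return lhs == rhs

print(sum(rho) == 0)                            # True
print([order_condition(k) for k in (1, 2, 3)])  # [True, True, False], so order 2
```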
Instead of dealing with the coefficients $\rho_\ell$ and $\sigma_\ell$ directly, it is often convenient to pack them together into two polynomials associated with the numerical method. Unsurprisingly, they are
$$\rho(w) = \sum_{\ell=0}^s \rho_\ell w^\ell, \quad \sigma(w) = \sum_{\ell=0}^s \sigma_\ell w^\ell.$$
We can then use this to restate the above theorem.
Theorem. A multi-step method has order $p$ (with $p \geq 1$) if and only if
$$\rho(e^x) - x\sigma(e^x) = O(x^{p+1})$$
as $x \to 0$.
Proof. We expand
$$\rho(e^x) - x\sigma(e^x) = \sum_{\ell=0}^s \rho_\ell e^{\ell x} - x \sum_{\ell=0}^s \sigma_\ell e^{\ell x}.$$
We now expand the $e^{\ell x}$ in Taylor series about $x = 0$. This comes out as
$$\sum_{\ell=0}^s \rho_\ell + \sum_{k=1}^\infty \frac{1}{k!}\left(\sum_{\ell=0}^s \rho_\ell \ell^k - k\sum_{\ell=0}^s \sigma_\ell \ell^{k-1}\right) x^k.$$
So the result follows.
Note that $\sum_{\ell=0}^s \rho_\ell = 0$, which is the condition required for the method to even have an order at all, can be expressed as $\rho(1) = 0$.
Example (AB2). In the two-step Adams-Bashforth method, we get
$$\rho(w) = w^2 - w, \quad \sigma(w) = \frac{3}{2}w - \frac{1}{2}.$$
We can immediately check that $\rho(1) = 0$. We also have
$$\rho(e^x) - x\sigma(e^x) = \frac{5}{12}x^3 + O(x^4).$$
So the order is 2.
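Such expansions are conveniently checked with a computer algebra system. A quick sympy sketch (ours, not from the notes):

```python
import sympy as sp

x = sp.symbols('x')
w = sp.exp(x)
rho = w**2 - w                                     # rho(e^x)
sigma = sp.Rational(3, 2) * w - sp.Rational(1, 2)  # sigma(e^x)

print(sp.series(rho - x * sigma, x, 0, 5))
# 5*x**3/12 + 3*x**4/8 + O(x**5): leading term is x^3, so the order is 2
```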
We've sorted out the order of multi-step methods. The next thing to check is convergence. This is where the difference between one-step and multi-step methods comes in. For one-step methods, we only needed the order to understand convergence. It is a fact that a one-step method converges whenever it has order $p \geq 1$. For multi-step methods, we need an extra condition.
Definition (Root condition). We say $\rho(w)$ satisfies the root condition if all its zeros are bounded by 1 in size, i.e. all roots $w$ satisfy $|w| \leq 1$. Moreover any zero with $|w| = 1$ must be simple.
We can imagine this as saying that large roots are bad: they cannot get past 1, and we cannot have too many with modulus 1.
We saw that any sensible multi-step method must have $\rho(1) = 0$. So in particular, 1 must be a simple zero.
Theorem (Dahlquist equivalence theorem). A multi-step method is convergent if and only if
(i) The order $p$ is at least 1; and
(ii) The root condition holds.
The proof is too difficult to include in this course, or even the Part II version.
This is only done in Part III.
Example (AB2). Again consider the two-step Adams-Bashforth method. We have seen that it has order $p = 2 \geq 1$. So we need to check the root condition. We have $\rho(w) = w^2 - w = w(w - 1)$, whose roots are 0 and 1, both simple. So it satisfies the root condition.
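The root condition is also easy to test numerically. A sketch (the helper `root_condition` is ours) using numpy's polynomial root finder:

```python
import numpy as np

def root_condition(rho_coeffs, tol=1e-9):
    # rho_coeffs lists the coefficients of rho, highest degree first,
    # as numpy.roots expects
    roots = np.roots(rho_coeffs)
    for r in roots:
        if abs(r) > 1 + tol:
            return False  # a root lies outside the unit disc
        if abs(abs(r) - 1) <= tol:
            # a root on the unit circle must be simple
            if np.sum(np.abs(roots - r) <= tol) > 1:
                return False
    return True

print(root_condition([1, -1, 0]))  # AB2: rho(w) = w^2 - w, prints True
```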
Let's now come up with a sensible strategy for constructing convergent $s$-step methods:
(i) Choose a $\rho$ so that $\rho(1) = 0$ and the root condition holds.
(ii) Choose $\sigma$ to maximize the order, i.e.
$$\sigma(w) = \frac{\rho(w)}{\log w} + \begin{cases} O(|w - 1|^{s+1}) & \text{if implicit}\\ O(|w - 1|^{s}) & \text{if explicit}\end{cases}$$
We have the two different conditions since for implicit methods, we have one more coefficient to fiddle with, so we can get a higher order.
Where does the $\frac{1}{\log w}$ come from? We try to substitute $w = e^x$ (noting that $e^x - 1 \sim x$). Then the formula says
$$\sigma(e^x) = \frac{1}{x}\rho(e^x) + \begin{cases} O(x^{s+1}) & \text{if implicit}\\ O(x^{s}) & \text{if explicit}\end{cases}.$$
Rearranging gives
$$\rho(e^x) - x\sigma(e^x) = \begin{cases} O(x^{s+2}) & \text{if implicit}\\ O(x^{s+1}) & \text{if explicit}\end{cases},$$
which is our order condition. So given any $\rho$, there is only one sensible way to pick $\sigma$. So the key is in picking a good enough $\rho$.
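This recipe is mechanical enough to automate. A sympy sketch (the helper `sigma_from_rho` is ours, not a library routine) that truncates the expansion of $\rho(w)/\log w$ about $w = 1$:

```python
import sympy as sp

w = sp.symbols('w')

def sigma_from_rho(rho, s, implicit=True):
    # keep s+1 terms of the (w - 1)-expansion if implicit, s terms if explicit
    n_terms = s + 1 if implicit else s
    ser = sp.series(rho / sp.log(w), w, 1, n_terms).removeO()
    return sp.expand(ser)

# Implicit two-step Adams method from rho(w) = w(w - 1):
print(sigma_from_rho(w * (w - 1), s=2))  # 5*w**2/12 + 2*w/3 - 1/12
```

Applied to $\rho(w) = w(w - 1)$ with $s = 2$, this reproduces the two-step Adams-Moulton coefficients derived in the example below.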
The root condition is “best” satisfied if $\rho(w) = w^{s-1}(w - 1)$, i.e. all but one of the roots are 0. Then we have
$$y_{n+s} - y_{n+s-1} = h \sum_{\ell=0}^s \sigma_\ell f(t_{n+\ell}, y_{n+\ell}),$$
where $\sigma$ is chosen to maximize the order.
Definition (Adams method). An Adams method is a multi-step numerical method with $\rho(w) = w^{s-1}(w - 1)$.
These can be either explicit or implicit. In different cases, we get different names.
Definition (Adams-Bashforth method). An Adams-Bashforth method is an explicit Adams method.
Definition (Adams-Moulton method). An Adams-Moulton method is an implicit Adams method.
Example. We look at the two-step third-order Adams-Moulton method. This is given by
$$y_{n+2} - y_{n+1} = h\left(-\frac{1}{12}f(t_n, y_n) + \frac{2}{3}f(t_{n+1}, y_{n+1}) + \frac{5}{12}f(t_{n+2}, y_{n+2})\right).$$
Where do these coefficients come from? We have to first expand our $\frac{\rho(w)}{\log w}$ about $w = 1$:
$$\frac{\rho(w)}{\log w} = \frac{w(w - 1)}{\log w} = 1 + \frac{3}{2}(w - 1) + \frac{5}{12}(w - 1)^2 + O(|w - 1|^3).$$
These aren't yet our coefficients of $\sigma$, since what we need to do is to rearrange the first three terms to be expressed in terms of $w$. So we have
$$\frac{\rho(w)}{\log w} = -\frac{1}{12} + \frac{2}{3}w + \frac{5}{12}w^2 + O(|w - 1|^3).$$
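We can sanity-check the claimed third order by expanding $\rho(e^x) - x\sigma(e^x)$ with these coefficients (a sympy sketch, ours):

```python
import sympy as sp

x = sp.symbols('x')
w = sp.exp(x)
rho = w * (w - 1)
sigma = (-sp.Rational(1, 12) + sp.Rational(2, 3) * w
         + sp.Rational(5, 12) * w**2)

print(sp.series(rho - x * sigma, x, 0, 5))
# -x**4/24 + O(x**5): the error is O(x^4), so the order is indeed 3
```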
Another important class of multi-step methods is constructed in the opposite way: we choose a particular $\sigma$, and then find the most optimal $\rho$.
Definition (Backward differentiation method). A backward differentiation method has $\sigma(w) = \sigma_s w^s$, i.e.
$$\sum_{\ell=0}^s \rho_\ell y_{n+\ell} = \sigma_s h f(t_{n+s}, y_{n+s}).$$
This is a generalization of the one-step backwards Euler method.
Given this $\sigma$, we need to choose the appropriate $\rho$. Fortunately, this can be done quite easily.
Lemma. An $s$-step backward differentiation method of order $s$ is obtained by choosing
$$\rho(w) = \sigma_s \sum_{\ell=1}^s \frac{1}{\ell} w^{s-\ell}(w - 1)^\ell,$$
with $\sigma_s$ chosen such that $\rho_s = 1$, namely
$$\sigma_s = \left(\sum_{\ell=1}^s \frac{1}{\ell}\right)^{-1}.$$
Proof. We need to construct $\rho$ so that
$$\rho(w) = \sigma_s w^s \log w + O(|w - 1|^{s+1}).$$
This is easy, if we write
$$\log w = -\log \frac{1}{w} = -\log\left(1 - \frac{w - 1}{w}\right) = \sum_{\ell=1}^\infty \frac{1}{\ell}\left(\frac{w - 1}{w}\right)^\ell.$$
Multiplying by $\sigma_s w^s$ gives the desired result.
For this method to be convergent, we need to make sure it does satisfy the root condition. It turns out the root condition is satisfied only for $s \leq 6$. This is not obvious at first sight, but we can certainly verify this manually.
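That manual verification can be delegated to a short script. A sketch (ours) that builds $\rho$ from the lemma above and tests whether all roots lie in the closed unit disc (omitting the simplicity check on the unit circle for brevity):

```python
import numpy as np
import sympy as sp

w = sp.symbols('w')

def bdf_rho(s):
    # rho(w) = sigma_s * sum_{l=1}^{s} (1/l) w^(s-l) (w-1)^l, per the lemma
    sigma_s = 1 / sum(sp.Rational(1, l) for l in range(1, s + 1))
    rho = sigma_s * sum(sp.Rational(1, l) * w**(s - l) * (w - 1)**l
                        for l in range(1, s + 1))
    return sp.Poly(sp.expand(rho), w)

for s in range(1, 8):
    coeffs = [float(c) for c in bdf_rho(s).all_coeffs()]
    roots = np.roots(coeffs)
    print(s, all(abs(r) <= 1 + 1e-9 for r in roots))
# prints True for s = 1, ..., 6 and False for s = 7
```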