5.4 Runge-Kutta methods
Finally, we look at Runge-Kutta methods. These methods are very complicated, and are rather tedious to analyse. They were largely ignored for a long time, until more powerful computers came along and made them much more practical. They are used quite a lot nowadays since they have many nice properties.
Runge-Kutta methods can be motivated by Gaussian quadrature, but we
will not go into that connection here. Instead, we’ll go straight in and work with
the method.
Definition (Runge-Kutta method). General (implicit) $\nu$-stage Runge-Kutta (RK) methods have the form
\[
  y_{n+1} = y_n + h\sum_{\ell=1}^{\nu} b_\ell k_\ell,
\]
where
\[
  k_\ell = f\left(t_n + c_\ell h,\ y_n + h\sum_{j=1}^{\nu} a_{\ell j} k_j\right) \quad \text{for } \ell = 1, \dots, \nu.
\]
There are a lot of parameters we have to choose. We need to pick
\[
  \{b_\ell\}_{\ell=1}^{\nu}, \quad \{c_\ell\}_{\ell=1}^{\nu}, \quad \{a_{\ell j}\}_{\ell, j=1}^{\nu}.
\]
Note that in general, $\{k_\ell\}_{\ell=1}^{\nu}$ have to be solved for, since they are defined in terms of one another. However, for certain choices of parameters, we can make this an explicit method. This makes it easier to compute, but we lose some accuracy and flexibility.
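As a minimal sketch of what "solving for the $k_\ell$" involves, the stage equations can be attacked by plain fixed-point iteration; the helper below and its tolerance are illustrative assumptions only, and Newton-type iterations are usually preferred in practice, especially for stiff problems.

```python
import numpy as np

def rk_step_implicit(f, t_n, y_n, h, A, b, c, tol=1e-12, max_iter=100):
    """One step of a general (implicit) nu-stage Runge-Kutta method.

    The stages satisfy k_l = f(t_n + c_l*h, y_n + h * sum_j A[l, j]*k_j),
    so they are defined in terms of one another; here we solve for them
    by simple fixed-point iteration.
    """
    A, b, c = (np.asarray(m, dtype=float) for m in (A, b, c))
    nu = len(b)
    k = np.array([f(t_n, y_n) for _ in range(nu)], dtype=float)  # initial guess
    for _ in range(max_iter):
        k_new = np.array([f(t_n + c[l] * h, y_n + h * (A[l] @ k)) for l in range(nu)],
                         dtype=float)
        if np.max(np.abs(k_new - k)) < tol:
            k = k_new
            break
        k = k_new
    return y_n + h * (b @ k)

# Usage: the implicit midpoint rule (nu = 1, a_11 = 1/2, b_1 = 1, c_1 = 1/2) on y' = -y.
y1 = rk_step_implicit(lambda t, y: -y, 0.0, 1.0, 0.1, A=[[0.5]], b=[1.0], c=[0.5])
```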
Unlike all the other methods we've seen so far, the parameters appear inside $f$. They appear non-linearly inside the function. This makes the method much more complicated and difficult to analyse using Taylor series. Yet, once we manage to do this properly, these have lots of nice properties. Unfortunately, we will not have time to go into what these properties actually are.
Notice this is a one-step method. So once we get order $p \geq 1$, we will have convergence. So what conditions do we need for a decent order?
This is in general very complicated. However, we can quickly obtain some necessary conditions. We can consider the case where $f$ is a constant. Then $k_\ell$ is always that constant. So we must have
\[
  \sum_{\ell=1}^{\nu} b_\ell = 1.
\]
It turns out we also need, for ` = 1, ··· , ν,
c
`
=
ν
X
j=1
a
`j
.
While these are necessary conditions, they are not sufficient. We need other conditions as well, which we shall get to later. It is a fact that the best possible order of a $\nu$-stage Runge-Kutta method is $2\nu$.
To describe a Runge-Kutta method, a standard notation is to put the coefficients in the Butcher table:
\[
  \begin{array}{c|ccc}
    c_1    & a_{11}    & \cdots & a_{1\nu}\\
    \vdots & \vdots    & \ddots & \vdots\\
    c_\nu  & a_{\nu 1} & \cdots & a_{\nu\nu}\\
    \hline
           & b_1       & \cdots & b_\nu
  \end{array}
\]
We sometimes more concisely write it as
\[
  \begin{array}{c|c}
    \mathbf{c} & A\\
    \hline
               & \mathbf{b}^T
  \end{array}
\]
This table allows for a general implicit method. Initially, explicit methods came out first, since they are much easier to compute. In this case, the matrix $A$ is strictly lower triangular, i.e. $a_{\ell j} = 0$ whenever $\ell \leq j$.
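As a small illustration (the helper below is a sketch, not part of the original notes), a Butcher table can be stored as the triple $(A, \mathbf{b}, \mathbf{c})$, and the two necessary conditions above, together with explicitness, checked mechanically. The tableau used here is the classical 4-stage one that appears in the next example.

```python
import numpy as np

def check_butcher(A, b, c, tol=1e-12):
    """Check the necessary order conditions and explicitness of a Butcher table."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    consistent = abs(b.sum() - 1.0) < tol                   # sum_l b_l = 1
    row_sums_ok = np.allclose(A.sum(axis=1), c, atol=tol)   # c_l = sum_j a_{lj}
    explicit = np.allclose(np.triu(A), 0.0, atol=tol)       # a_{lj} = 0 whenever l <= j
    return consistent, row_sums_ok, explicit

# The classical 4-stage tableau (see the example below).
A = [[0,   0,   0, 0],
     [1/2, 0,   0, 0],
     [0,   1/2, 0, 0],
     [0,   0,   1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 1/2, 1/2, 1]
print(check_butcher(A, b, c))  # (True, True, True)
```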
Example. The most famous explicit Runge-Kutta method is the 4-stage 4th order one, often called the classical Runge-Kutta method. The formula can be given explicitly by
\[
  y_{n+1} = y_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4),
\]
where
\begin{align*}
  k_1 &= f(t_n, y_n)\\
  k_2 &= f\left(t_n + \tfrac{1}{2}h,\ y_n + \tfrac{1}{2}hk_1\right)\\
  k_3 &= f\left(t_n + \tfrac{1}{2}h,\ y_n + \tfrac{1}{2}hk_2\right)\\
  k_4 &= f(t_n + h,\ y_n + hk_3).
\end{align*}
We see that this is an explicit method. We don’t need to solve any equations.
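As a usage sketch (illustrative, not part of the original notes), one step of the classical method can be coded directly, and running it on $y' = y$ shows the expected accuracy.

```python
import math

def rk4_step(f, t_n, y_n, h):
    """One step of the classical 4-stage, 4th order Runge-Kutta method."""
    k1 = f(t_n, y_n)
    k2 = f(t_n + h / 2, y_n + h / 2 * k1)
    k3 = f(t_n + h / 2, y_n + h / 2 * k2)
    k4 = f(t_n + h, y_n + h * k3)
    return y_n + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = y, y(0) = 1 up to t = 1 and compare with e.
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: y, t, y, h)
    t += h
print(y, math.e)  # agrees to about 6 significant figures for h = 0.1
```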
Choosing the parameters for the Runge-Kutta method to maximize order is hard. Consider the simplest case, the 2-stage explicit method. The general formula is
\[
  y_{n+1} = y_n + h(b_1 k_1 + b_2 k_2),
\]
where
\begin{align*}
  k_1 &= f(t_n, y_n)\\
  k_2 &= f(t_n + c_2 h,\ y_n + c_2 h k_1).
\end{align*}
To analyse this, we insert the true solution into the method. First, we need to
insert the true solution of the ODE into the k’s. We get
\begin{align*}
  k_1 &= y'(t_n)\\
  k_2 &= f(t_n + c_2 h,\ y(t_n) + c_2 h y'(t_n))\\
      &= y'(t_n) + c_2 h\left(f_t(t_n, y(t_n)) + f_y(t_n, y(t_n))\, y'(t_n)\right) + O(h^2).
\end{align*}
Fortunately, we notice that the thing inside the brackets is just $y''(t_n)$, since by the chain rule $y''(t) = \frac{\mathrm{d}}{\mathrm{d}t} f(t, y(t)) = f_t + f_y y'$. So this is
\[
  = y'(t_n) + c_2 h y''(t_n) + O(h^2).
\]
Hence, using the Taylor expansion $y(t_{n+1}) = y(t_n) + h y'(t_n) + \frac{h^2}{2} y''(t_n) + O(h^3)$, the local truncation error of the Runge-Kutta method is
\begin{align*}
  y(t_{n+1}) - y(t_n) - h(b_1 k_1 + b_2 k_2)
  = (1 - b_1 - b_2) h y'(t_n) + \left(\frac{1}{2} - b_2 c_2\right) h^2 y''(t_n) + O(h^3).
\end{align*}
Now we see why Runge-Kutta methods are hard to analyse. The coefficients
appear non-linearly in this expression. It is still solvable in this case, in the
obvious way, but for higher stage methods, this becomes much more complicated.
In this case, we have a 1-parameter family of order 2 methods, satisfying
\[
  b_1 + b_2 = 1, \quad b_2 c_2 = \frac{1}{2}.
\]
It is easy to check using the simple equation $y' = \lambda y$ that it is not possible to get a higher order method. So as long as our choice of $b_1$, $b_2$ and $c_2$ satisfies these equations, we get a decent order 2 method.
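As a minimal sketch (illustrative only, not part of the original notes), the whole 1-parameter family can be coded at once: pick $c_2 \neq 0$, then the order 2 conditions force $b_2 = 1/(2c_2)$ and $b_1 = 1 - b_2$. The choices $c_2 = 1$ and $c_2 = \frac{1}{2}$ give the well-known Heun and midpoint methods respectively.

```python
import math

def rk2_step(f, t_n, y_n, h, c2):
    """One step of the 2-stage explicit RK method with free parameter c2.

    The order 2 conditions b1 + b2 = 1, b2*c2 = 1/2 determine b1 and b2.
    c2 = 1 gives Heun's method; c2 = 1/2 gives the midpoint method.
    """
    b2 = 1.0 / (2.0 * c2)
    b1 = 1.0 - b2
    k1 = f(t_n, y_n)
    k2 = f(t_n + c2 * h, y_n + c2 * h * k1)
    return y_n + h * (b1 * k1 + b2 * k2)

# Order 2 in action: halving h cuts the error at t = 1 for y' = y by roughly 4.
for h in (0.1, 0.05):
    t, y = 0.0, 1.0
    while t < 1.0 - 1e-12:
        y = rk2_step(lambda t, y: y, t, y, h, c2=1.0)
        t += h
    print(h, abs(y - math.e))
```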
As we have seen, Runge-Kutta methods are really complicated, even in the simplest case. However, they have so many good properties that they have become very popular nowadays.