5 Ordinary differential equations

IB Numerical Analysis

5.1 Introduction

Our next big goal is to solve ordinary differential equations numerically. We will focus on differential equations of the form
\[
  y'(t) = f(t, y(t))
\]
for $0 \leq t \leq T$, with initial condition $y(0) = y_0$.

The data we are provided with are the function $f : \mathbb{R} \times \mathbb{R}^N \to \mathbb{R}^N$, the ending time $T > 0$, and the initial condition $y_0 \in \mathbb{R}^N$. What we seek is the function $y : [0, T] \to \mathbb{R}^N$.

When solving the differential equation numerically, our goal is to make our numerical solution as close to the true solution as possible. This makes sense only if a “true” solution actually exists and is unique. From IB Analysis II, we know a unique solution to the ODE exists if $f$ is Lipschitz.

Definition (Lipschitz function). A function $f : \mathbb{R} \times \mathbb{R}^N \to \mathbb{R}^N$ is Lipschitz with Lipschitz constant $\lambda \geq 0$ if
\[
  \|f(t, x) - f(t, \hat{x})\| \leq \lambda \|x - \hat{x}\|
\]
for all $t \in [0, T]$ and $x, \hat{x} \in \mathbb{R}^N$.

A function is Lipschitz if it is Lipschitz with Lipschitz constant $\lambda$ for some $\lambda$.

It doesn’t really matter which norm we pick; a different choice will just change the value of $\lambda$. What matters is that such a $\lambda$ exists.
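As a quick illustration (an added example, not from the original notes, taking $N = 1$): by the mean value theorem, any $f$ whose derivative in $x$ is bounded is Lipschitz, e.g.

```latex
% f(t, x) = \sin x is Lipschitz with \lambda = 1, since
|\sin x - \sin \hat{x}| \leq |x - \hat{x}|
\quad \text{for all } x, \hat{x} \in \mathbb{R}.
% In contrast, f(t, x) = x^2 is not Lipschitz on all of \mathbb{R}, since
% |x^2 - \hat{x}^2| = |x + \hat{x}|\,|x - \hat{x}| and |x + \hat{x}| is unbounded.
```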

A special case is when $\lambda = 0$, i.e. $f$ does not depend on $x$. In this case, this is just an integration problem, and is usually easy. This is a convenient test case — if our numerical approximation does not even work for these easy problems, then it’s pretty useless.
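To spell out why $\lambda = 0$ reduces to integration (a small added step, not in the original): when $f$ depends only on $t$, the fundamental theorem of calculus gives the solution directly, so the problem is just quadrature:

```latex
y(t) = y_0 + \int_0^t f(s) \,\mathrm{d}s.
```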

Being Lipschitz is sufficient for existence and uniqueness of a solution to the differential equation, and hence we can ask if our numerical solution converges to this unique solution. An extra assumption we will often make is that $f$ can be expanded in a Taylor series to as many degrees as we want, since this is convenient for our analysis.

What exactly does a numerical solution to the ODE consist of? We first choose a small time step $h > 0$, and then construct approximations
\[
  y_n \approx y(t_n), \quad n = 1, 2, \cdots,
\]
with $t_n = nh$. In particular, $t_n - t_{n-1} = h$ is always constant. In practice, we don’t fix the step size $t_n - t_{n-1}$, and allow it to vary in each step. However, this makes the analysis much more complicated, and we will not consider varying time steps in this course.

If we make $h$ smaller, then we will (probably) make better approximations. However, this is more computationally demanding. So we want to study the behaviour of numerical methods in order to figure out what $h$ we should pick.
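As a minimal sketch of this setup (an illustration, not part of the notes: it assumes the simplest one-step scheme, the forward Euler method $y_{n+1} = y_n + h f(t_n, y_n)$, which is not defined in this section), here is the grid $t_n = nh$ applied to the test problem $y' = y$, $y(0) = 1$, whose exact solution is $y(t) = e^t$. Shrinking $h$ indeed improves the approximation at $t = T$:

```python
import math

def euler(f, y0, T, h):
    """Forward Euler: y_{n+1} = y_n + h * f(t_n, y_n) on the grid t_n = n*h.

    Assumes T is an integer multiple of h. Returns [y_0, y_1, ..., y_N].
    """
    n_steps = round(T / h)
    ys = [y0]
    y = y0
    for n in range(n_steps):
        t = n * h                 # current grid point t_n
        y = y + h * f(t, y)       # one Euler step to t_{n+1}
        ys.append(y)
    return ys

# Test problem y' = y, y(0) = 1, with exact solution y(1) = e.
f = lambda t, y: y
exact = math.e

err_coarse = abs(euler(f, 1.0, 1.0, 0.10)[-1] - exact)  # h = 0.10
err_fine = abs(euler(f, 1.0, 1.0, 0.05)[-1] - exact)    # h = 0.05
print(err_coarse, err_fine)  # halving h roughly halves the error
```

The printed errors shrink roughly in proportion to $h$, which is the kind of behaviour the analysis in this chapter aims to quantify.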