6 Stiff equations

IB Numerical Analysis

6.2 Linear stability

This is the easy part of the course, where we just consider the problem y' = λy. No matter how complicated our numerical method is, when applied to this problem, it usually becomes really simple.

Definition (Linear stability domain). If we apply a numerical method to

    y'(t) = λy(t)

with y(0) = 1, λ ∈ C, then its linear stability domain is

    D = {z = hλ : lim_{n→∞} y_n = 0}.

Example. Consider the Euler method. The discrete solution is

    y_n = (1 + hλ)^n.

Thus we get

    D = {z ∈ C : |1 + z| < 1}.

We can visualize this on the complex plane as the open unit disc centred at −1.
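To see the boundary of D in action, here is a small numerical check (not part of the notes; the values of λ and h are illustrative) iterating the Euler recurrence y_{n+1} = (1 + hλ)y_n:

```python
# Euler applied to y' = λy gives y_{n+1} = (1 + hλ) y_n, i.e. y_n = (1 + hλ)^n.
# With λ = -5 the true solution decays, but Euler only decays when |1 + hλ| < 1.

def euler_iterates(lam, h, steps):
    """Return |y_n| after `steps` Euler steps, starting from y_0 = 1."""
    y = 1.0
    for _ in range(steps):
        y *= (1 + h * lam)
    return abs(y)

lam = -5.0
print(euler_iterates(lam, 0.1, 100))  # hλ = -0.5, inside D: decays towards 0
print(euler_iterates(lam, 0.5, 100))  # hλ = -2.5, outside D: blows up
```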

Example. Consider instead the backward Euler method. This method is implicit, but since our problem is simple, we can find what the nth step is. It turns out it is

    y_n = (1 − hλ)^{−n}.

Then we get

    D = {z ∈ C : |1 − z| > 1}.

We can visualize this as the region outside the closed unit disc centred at 1.
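A quick numerical check (not part of the notes; the values of λ and h are illustrative) confirms that the backward Euler iterates decay for a decaying problem no matter how large the step size is:

```python
# Backward Euler on y' = λy: y_{n+1} = y_n + hλ y_{n+1}, so y_n = (1 - hλ)^{-n}.
# For Re(λ) < 0 we have |1 - hλ| > 1 for every h > 0, so y_n → 0 however
# large the step size is.

def backward_euler_iterates(lam, h, steps):
    """Return |y_n| after `steps` backward Euler steps, starting from y_0 = 1."""
    return abs((1 - h * lam) ** (-steps))

for h in (0.1, 1.0, 100.0):   # even an absurdly large step still decays
    print(h, backward_euler_iterates(-5.0, h, 50))
```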

We make the following definition:

Definition (A-stability). A numerical method is A-stable if

    C⁻ = {z ∈ C : Re(z) < 0} ⊆ D.

In particular, for Re(λ) < 0, A-stability means that y_n will tend to 0 regardless of how large h is.

Hence the backward Euler method is A-stable, but the Euler method is not.

A-stability is a very strong requirement, and hard to achieve. In particular, Dahlquist proved that no multi-step method of order p ≥ 3 is A-stable. Moreover, no explicit Runge–Kutta method can be A-stable.

So let’s look at some other implicit methods.

Example (Trapezoidal rule). Again consider y'(t) = λy, with the trapezoidal rule. Then we can find

    y_n = ((1 + hλ/2)/(1 − hλ/2))^n.

So the linear stability domain is

    D = {z ∈ C : |(2 + z)/(2 − z)| < 1}.

What this says is that z has to be closer to −2 than to 2. In other words, D is exactly C⁻.
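This geometric claim is easy to verify numerically (a sketch of my own; the sample grid below is arbitrary, chosen to avoid the pole at z = 2 and the boundary Re(z) = 0):

```python
# For the trapezoidal rule, r(z) = (2 + z)/(2 - z), and |r(z)| < 1 should hold
# exactly when z is closer to -2 than to 2, i.e. when Re(z) < 0.
# (Indeed |2 + z|^2 - |2 - z|^2 = 8 Re(z), so the signs match exactly.)

def in_trapezoidal_domain(z):
    return abs((2 + z) / (2 - z)) < 1

points = [complex(x, y) for x in (-3, -1, -0.5, 0.5, 1, 3)
                        for y in (-2, -0.5, 0.5, 2)]
assert all(in_trapezoidal_domain(z) == (z.real < 0) for z in points)
print("D matches the left half-plane on all sample points")
```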

When testing a numerical method for A-stability in general, complex analysis is helpful. Usually, when applying a numerical method to the problem y' = λy, we get

    y_n = [r(hλ)]^n,

where r is some rational function. So

    D = {z ∈ C : |r(z)| < 1}.
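The three methods seen so far fit this framework, with r(z) = 1 + z, 1/(1 − z) and (2 + z)/(2 − z) respectively, and we can compare their stability domains directly by sampling |r(z)| (a sketch; the method labels and the test point are my own choices):

```python
# Stability functions r(z) for the three methods above; in each case
# D = {z : |r(z)| < 1}.

methods = {
    "euler":          lambda z: 1 + z,             # y_n = (1 + hλ)^n
    "backward_euler": lambda z: 1 / (1 - z),       # y_n = (1 - hλ)^{-n}
    "trapezoidal":    lambda z: (2 + z) / (2 - z),
}

def in_domain(name, z):
    return abs(methods[name](z)) < 1

z = -3 + 0j   # a point well inside the left half-plane
print({name: in_domain(name, z) for name in methods})
# Euler excludes this point; the two A-stable methods include it.
```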

We want to know if D contains the left half-plane. For more complicated expressions of r, like the case of the trapezoidal rule, this is not so obvious. Fortunately, we have the maximum principle:

Theorem (Maximum principle). Let g be analytic and non-constant in an open set Ω ⊆ C. Then |g| has no maximum in Ω.

Since |g| needs to have a maximum in the closure of Ω, the maximum must occur on the boundary. So to show |g| ≤ 1 on the region Ω, we only need to show the inequality holds on the boundary ∂Ω.

We try Ω = C⁻. The trick is to first check that g is analytic in Ω, and then check what happens at the boundary. This technique is made clear in the following example:

Example. Consider

    r(z) = (6 − 2z)/(6 − 4z + z²).

This is still pretty simple, but can illustrate how we can use the maximum principle.

We first check if it is analytic. This certainly has some poles, but they are 2 ± √2 i, and are in the right half-plane. So this is analytic in C⁻.

Next, what happens at the boundary of the left half-plane? Firstly, as |z| → ∞, we find r(z) → 0, since we have a z² in the denominator. The next part is checking when z is on the imaginary axis, say z = it with t ∈ R. Then we can check by some messy algebra that |r(it)| ≤ 1 for t ∈ R. Therefore, by the maximum principle, we must have |r(z)| ≤ 1 for all z ∈ C⁻.
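The "messy algebra" step on the imaginary axis can also be checked numerically (a sketch of my own; the sampling range is arbitrary):

```python
# Numerical check of the boundary step in the maximum principle argument:
# r(z) = (6 - 2z)/(6 - 4z + z^2) should satisfy |r(it)| <= 1 for all real t.
# (The algebra: |6 - 2it|^2 = 36 + 4t^2, while
#  |6 - t^2 - 4it|^2 = (6 - t^2)^2 + 16t^2 = 36 + 4t^2 + t^4 >= 36 + 4t^2.)

def r(z):
    return (6 - 2 * z) / (6 - 4 * z + z * z)

ts = [k / 10 for k in range(-500, 501)]      # sample t in [-50, 50]
max_on_axis = max(abs(r(1j * t)) for t in ts)
print(max_on_axis)  # → 1.0, attained at t = 0
```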