1 Polynomial interpolation

IB Numerical Analysis



1.5 Error bounds for polynomial interpolation
The actual error bound is not too difficult to obtain either. It turns out the error $e_n = f - p_n$ is "like the next term in Newton's formula". This vague sentence is made precise in the following theorem:
Theorem. Assume $\{x_i\}_{i=0}^n \subseteq [a, b]$ and $f \in C[a, b]$. Let $\bar x \in [a, b]$ be a non-interpolation point. Then
\[ e_n(\bar x) = f[x_0, x_1, \cdots, x_n, \bar x]\, \omega(\bar x), \]
where
\[ \omega(x) = \prod_{i=0}^{n} (x - x_i). \]
Note that we forbid the case where $\bar x$ is an interpolation point, since it is not clear what the expression $f[x_0, x_1, \cdots, x_n, \bar x]$ means. However, if $\bar x$ is an interpolation point, then both $e_n(\bar x)$ and $\omega(\bar x)$ are zero, so there isn't much to say.
Proof. We think of $\bar x = x_{n+1}$ as a new interpolation point, so that
\[ p_{n+1}(x) - p_n(x) = f[x_0, \cdots, x_n, \bar x]\, \omega(x) \]
for all $x \in \mathbb{R}$. In particular, putting $x = \bar x$, we have $p_{n+1}(\bar x) = f(\bar x)$ since $\bar x$ is an interpolation point of $p_{n+1}$, and we get the result.
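To see the formula in action, here is a minimal numerical sketch (assuming NumPy is available; the choices of $f$ and the nodes are purely illustrative) comparing $e_n(\bar x)$ against $f[x_0, \cdots, x_n, \bar x]\,\omega(\bar x)$:

```python
import numpy as np

def divided_difference(xs, f):
    """Compute f[x_0, ..., x_m] via the standard recursive table."""
    table = [f(x) for x in xs]
    for level in range(1, len(xs)):
        table = [(table[i + 1] - table[i]) / (xs[i + level] - xs[i])
                 for i in range(len(table) - 1)]
    return table[0]

def newton_interpolant(xs, f):
    """Return p_n in Newton form as a callable."""
    coeffs = [divided_difference(xs[:k + 1], f) for k in range(len(xs))]
    def p(x):
        result, prod = 0.0, 1.0
        for c, xi in zip(coeffs, xs):
            result += c * prod
            prod *= (x - xi)
        return result
    return p

f = np.sin                      # illustrative choice of f
xs = [0.0, 0.4, 0.9, 1.5]       # interpolation points x_0, ..., x_n
xbar = 0.7                      # a non-interpolation point

p = newton_interpolant(xs, f)
omega = np.prod([xbar - xi for xi in xs])
lhs = f(xbar) - p(xbar)                           # e_n(xbar)
rhs = divided_difference(xs + [xbar], f) * omega  # f[x_0,...,x_n,xbar] * omega(xbar)
print(lhs, rhs)                 # the two values agree up to rounding error
```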
Combining the two results, we find
Theorem. If in addition $f \in C^{n+1}[a, b]$, then for each $x \in [a, b]$, we can find $\xi_x \in (a, b)$ such that
\[ e_n(x) = \frac{1}{(n + 1)!} f^{(n+1)}(\xi_x)\, \omega(x). \]
Proof. The statement is trivial if $x$ is an interpolation point: pick an arbitrary $\xi_x$, and both sides are zero. Otherwise, this follows directly from the last two theorems.
This is an exact result, which is not too useful, since there is no easy constructive way of finding what $\xi_x$ should be. Instead, we usually go for a bound. We introduce the max norm
\[ \|g\|_\infty = \max_{t \in [a, b]} |g(t)|. \]
This gives the more useful bound
Corollary. For all $x \in [a, b]$, we have
\[ |f(x) - p_n(x)| \leq \frac{1}{(n + 1)!} \|f^{(n+1)}\|_\infty\, |\omega(x)|. \]
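As a quick sanity check, here is a sketch under the same illustrative choices as before ($f = \sin$ on $[0, 1.5]$, so $\|f^{(n+1)}\|_\infty \leq 1$), verifying the bound pointwise:

```python
import numpy as np
from math import factorial

xs = np.array([0.0, 0.4, 0.9, 1.5])          # n = 3
n = len(xs) - 1
coeffs = np.polyfit(xs, np.sin(xs), n)       # degree-n interpolant of sin

ts = np.linspace(0.0, 1.5, 1001)
error = np.abs(np.sin(ts) - np.polyval(coeffs, ts))
omega = np.abs(np.prod([ts - xi for xi in xs], axis=0))
bound = omega / factorial(n + 1)             # since |sin^(n+1)| <= 1
assert np.all(error <= bound + 1e-12)        # the bound holds pointwise
```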
Assuming our function $f$ is fixed, this error bound depends only on $\omega(x)$, which depends on our choice of interpolation points. So can we minimize $\omega(x)$ in some sense by picking some clever interpolation points $\Delta = \{x_i\}_{i=0}^n$? Here we will have $n$ fixed, so instead we use $\Delta$ as the subscript. We can write our bound as
\[ \|f - p_\Delta\|_\infty \leq \frac{1}{(n + 1)!} \|f^{(n+1)}\|_\infty \|\omega_\Delta\|_\infty. \]
So the objective is to find a $\Delta$ that minimizes $\|\omega_\Delta\|_\infty$.
For the moment, we focus on the special case where the interval is $[-1, 1]$. The general solution can be obtained by an easy change of variable.
For some magical reasons that hopefully will become clear soon, the optimal choice of $\Delta$ comes from the Chebyshev polynomials.
Definition (Chebyshev polynomial). The Chebyshev polynomial of degree $n$ on $[-1, 1]$ is defined by
\[ T_n(x) = \cos(n\theta), \]
where $x = \cos\theta$ with $\theta \in [0, \pi]$.
So given an $x$, we find the unique $\theta$ that satisfies $x = \cos\theta$, and then find $\cos(n\theta)$. This is in fact a polynomial in disguise, since from trigonometric identities, we know $\cos(n\theta)$ can be expanded as a polynomial in $\cos\theta$ up to degree $n$.
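For instance (a small illustrative check), $\cos(3\theta) = 4\cos^3\theta - 3\cos\theta$, so $T_3(x) = 4x^3 - 3x$; the trigonometric definition can be evaluated directly:

```python
import numpy as np

def T(n, x):
    """Chebyshev polynomial via the defining formula T_n(x) = cos(n * arccos x)."""
    return np.cos(n * np.arccos(x))

xs = np.linspace(-1.0, 1.0, 7)
print(np.allclose(T(3, xs), 4 * xs**3 - 3 * xs))   # True: T_3(x) = 4x^3 - 3x
```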
Two key properties of $T_n$ on $[-1, 1]$, both checked numerically in the sketch below, are

(i) The maximum absolute value is attained at
\[ X_k = \cos\frac{\pi k}{n}, \quad k = 0, \cdots, n, \]
with $T_n(X_k) = (-1)^k$.

(ii) $T_n$ has $n$ distinct zeros at
\[ x_k = \cos\left(\frac{2k - 1}{2n}\,\pi\right), \quad k = 1, \cdots, n. \]
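Here is the promised numerical check (a sketch for an arbitrary illustrative choice of $n$):

```python
import numpy as np

n = 5
T = lambda x: np.cos(n * np.arccos(x))

# (i) extrema: T_n(X_k) = (-1)^k at X_k = cos(pi k / n)
X = np.cos(np.pi * np.arange(n + 1) / n)
print(np.allclose(T(X), (-1.0) ** np.arange(n + 1)))   # True

# (ii) zeros: T_n(x_k) = 0 at x_k = cos((2k - 1) pi / (2n)), k = 1..n
k = np.arange(1, n + 1)
x = np.cos((2 * k - 1) * np.pi / (2 * n))
print(np.allclose(T(x), 0.0))                           # True
```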
When plotted out, the polynomials look like this:
[Figure: plot of $T_4(x)$ on $[-1, 1]$; the graph oscillates between $-1$ and $1$.]
All that really matters about the Chebyshev polynomials is that the maximum is attained at $n + 1$ distinct points with alternating sign. The exact form of the polynomial is not really important.
Notice there is an intentional clash between the use of $x_k$ as the zeros and $x_k$ as the interpolation points: we will show these are indeed the optimal interpolation points.
We first prove a convenient recurrence relation for the Chebyshev polynomials:
Lemma (3-term recurrence relation). The Chebyshev polynomials satisfy the recurrence relation
\[ T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x) \]
with initial conditions
\[ T_0(x) = 1, \quad T_1(x) = x. \]
Proof. This is just the product-to-sum identity
\[ \cos((n + 1)\theta) + \cos((n - 1)\theta) = 2 \cos\theta \cos(n\theta), \]
rewritten via $x = \cos\theta$ and $T_m(x) = \cos(m\theta)$.
This recurrence relation can be useful for many things, but for our purposes, we only use it to show that the leading coefficient of $T_n$ is $2^{n-1}$ (for $n \geq 1$).
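A minimal sketch of this computation, building the coefficient vectors with the recurrence (coefficients stored highest power first, as NumPy's poly routines expect):

```python
import numpy as np

def chebyshev_coeffs(n):
    """Coefficients of T_n (highest power first) via T_{n+1} = 2x T_n - T_{n-1}."""
    prev, curr = np.array([1.0]), np.array([1.0, 0.0])   # T_0 = 1, T_1 = x
    if n == 0:
        return prev
    for _ in range(n - 1):
        nxt = 2.0 * np.append(curr, 0.0)   # multiply curr by 2x (shift and scale)
        nxt[-len(prev):] -= prev           # then subtract prev
        prev, curr = curr, nxt
    return curr

for n in range(1, 8):
    assert chebyshev_coeffs(n)[0] == 2.0 ** (n - 1)   # leading coefficient 2^{n-1}
```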
Theorem (Minimal property for $n \geq 1$). On $[-1, 1]$, among all polynomials $p \in P_n[x]$ with leading coefficient 1, the rescaled Chebyshev polynomial $\frac{1}{2^{n-1}} T_n$ minimizes $\|p\|_\infty$. Thus, the minimum value is $\frac{1}{2^{n-1}}$.
Proof. We proceed by contradiction. Suppose there is a polynomial $q_n \in P_n$ with leading coefficient 1 such that $\|q_n\|_\infty < \frac{1}{2^{n-1}}$. Define a new polynomial
\[ r = \frac{1}{2^{n-1}} T_n - q_n. \]
This is, by assumption, non-zero.
Since both polynomials have leading coefficient 1, the difference must have degree at most $n - 1$, i.e. $r \in P_{n-1}[x]$. Since $\frac{1}{2^{n-1}} T_n(X_k) = \pm\frac{1}{2^{n-1}}$, and $|q_n(X_k)| < \frac{1}{2^{n-1}}$ by assumption, $r$ alternates in sign between these $n + 1$ points. But then by the intermediate value theorem, $r$ has to have at least $n$ zeros. This is a contradiction, since a non-zero polynomial of degree at most $n - 1$ can have at most $n - 1$ zeros.
Corollary. Consider
\[ \omega_\Delta = \prod_{i=0}^{n} (x - x_i) \in P_{n+1}[x] \]
for any distinct points $\Delta = \{x_i\}_{i=0}^n \subseteq [-1, 1]$. Then
\[ \min_\Delta \|\omega_\Delta\|_\infty = \frac{1}{2^n}. \]
This minimum is achieved by picking the interpolation points to be the zeros of $T_{n+1}$, namely
\[ x_k = \cos\left(\frac{2k + 1}{2n + 2}\,\pi\right), \quad k = 0, \cdots, n. \]
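A sketch comparing $\|\omega_\Delta\|_\infty$ for equally spaced points against the Chebyshev zeros (the choice of $n$ and the grid resolution are illustrative); the Chebyshev value matches $2^{-n}$:

```python
import numpy as np

n = 6
ts = np.linspace(-1.0, 1.0, 100001)   # dense grid to approximate the sup norm

def omega_sup(nodes):
    return np.max(np.abs(np.prod([ts - x for x in nodes], axis=0)))

equi = np.linspace(-1.0, 1.0, n + 1)
cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * n + 2))

print(omega_sup(equi))                  # noticeably larger
print(omega_sup(cheb), 2.0 ** (-n))     # approximately equal: 1/2^n
```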
Theorem. For $f \in C^{n+1}[-1, 1]$, the Chebyshev choice of interpolation points gives
\[ \|f - p_n\|_\infty \leq \frac{1}{2^n} \frac{1}{(n + 1)!} \|f^{(n+1)}\|_\infty. \]
Suppose $f$ has as many continuous derivatives as we want. Then as we increase $n$, what happens to the error bounds? The bound involves dividing by an exponential and a factorial, so as long as the higher derivatives of $f$ don't blow up too badly, the error will in general tend to zero as $n \to \infty$, which makes sense.
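To illustrate, here is a sketch with $f = \exp$ on $[-1, 1]$ (an illustrative choice, whose derivatives are all bounded by $e$ there, so the bound decays like $e / (2^n (n + 1)!)$):

```python
import numpy as np
from math import factorial, e

ts = np.linspace(-1.0, 1.0, 2001)
for n in range(2, 11, 2):
    nodes = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * n + 2))
    p = np.polyfit(nodes, np.exp(nodes), n)          # Chebyshev interpolant
    err = np.max(np.abs(np.exp(ts) - np.polyval(p, ts)))
    print(n, err, e / (2 ** n * factorial(n + 1)))   # error and bound both -> 0
```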
The last two results can be easily generalized to arbitrary intervals $[a, b]$, and this is left as an exercise for the reader.
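As a hint of the shape of the change of variable (a sketch, not a full solution of the exercise), the affine map $x = \frac{a+b}{2} + \frac{b-a}{2} t$ carries $t \in [-1, 1]$ to $x \in [a, b]$, so the Chebyshev interpolation points on $[a, b]$ become:

```python
import numpy as np

def chebyshev_points(n, a, b):
    """Zeros of T_{n+1} mapped from [-1, 1] to [a, b] by an affine change of variable."""
    t = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * n + 2))
    return 0.5 * (a + b) + 0.5 * (b - a) * t

print(chebyshev_points(4, 0.0, 2.0))
```

Under this map each factor $(x - x_i)$ scales by $\frac{b-a}{2}$, so the minimal value of $\|\omega_\Delta\|_\infty$ on $[a, b]$ becomes $2\left(\frac{b-a}{4}\right)^{n+1}$.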