1 Polynomial interpolation

1.3 The Newton formula
The idea of Newton's formula is as follows: for $k = 0, \cdots, n$, we write $p_k \in P_k[x]$ for the polynomial that satisfies
\[
  p_k(x_i) = f_i \quad \text{for } i = 0, \cdots, k.
\]
This is the unique polynomial of degree at most $k$ that satisfies the first $k + 1$ conditions, whose existence (and uniqueness) is guaranteed by the previous section. Then we can write
\[
  p(x) = p_n(x) = p_0(x) + (p_1(x) - p_0(x)) + \cdots + (p_n(x) - p_{n-1}(x)).
\]
Hence we are done if we have an efficient way of finding the differences $p_k - p_{k-1}$.
We know that $p_k$ and $p_{k-1}$ agree on $x_0, \cdots, x_{k-1}$. So $p_k - p_{k-1}$ evaluates to $0$ at those points, and we must have
\[
  p_k(x) - p_{k-1}(x) = A_k \prod_{i=0}^{k-1} (x - x_i),
\]
for some $A_k$ yet to be found out. Then we can write
\[
  p(x) = p_n(x) = A_0 + \sum_{k=1}^{n} A_k \prod_{i=0}^{k-1} (x - x_i).
\]
This formula has the advantage that it is built up gradually from the interpolation points one-by-one. If we stop the sum at any point, we have obtained the polynomial that interpolates the data for the first $k$ points (for some $k$). Conversely, if we have a new data point, we just need to add a new term, instead of re-computing everything.
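For instance, for $n = 2$ the Newton form reads
\[
  p(x) = A_0 + A_1 (x - x_0) + A_2 (x - x_0)(x - x_1),
\]
and truncating the sum after the $A_1$ term leaves exactly the linear interpolant $p_1$ through $(x_0, f_0)$ and $(x_1, f_1)$.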
All that remains is to find the coefficients $A_k$. For $k = 0$, we know $A_0$ is the unique constant polynomial that interpolates the point at $x_0$, i.e. $A_0 = f_0$.
For the others, we note that in the formula for $p_k - p_{k-1}$, the coefficient of $x^k$ is $A_k$. But $p_{k-1}(x)$ has no degree-$k$ term. So $A_k$ must be the leading coefficient of $p_k$. So we have reduced our problem to finding the leading coefficients of $p_k$.
The solution to this is known as the Newton divided differences. We first invent a new notation:
\[
  A_k = f[x_0, \cdots, x_k].
\]
Note that these coefficients depend only on the first $k + 1$ interpolation points $x_0, \cdots, x_k$.
Moreover, since the labelling of the points $x_0, \cdots, x_k$ is arbitrary, we don't have to start with $x_0$. In general, the coefficient
\[
  f[x_j, \cdots, x_k]
\]
is the leading coefficient of the unique $q \in P_{k-j}[x]$ such that $q(x_i) = f_i$ for $i = j, \cdots, k$.
While we do not have an explicit formula for these coefficients, we can come up with a recurrence relation for them.
Theorem (Recurrence relation for Newton divided differences). For $0 \leq j < k \leq n$, we have
\[
  f[x_j, \cdots, x_k] = \frac{f[x_{j+1}, \cdots, x_k] - f[x_j, \cdots, x_{k-1}]}{x_k - x_j}.
\]
Proof. The key to proving this is to relate the interpolating polynomials. Let $q_0, q_1 \in P_{k-j-1}[x]$ and $q_2 \in P_{k-j}[x]$ satisfy
\begin{align*}
  q_0(x_i) &= f_i & i &= j, \cdots, k - 1\\
  q_1(x_i) &= f_i & i &= j + 1, \cdots, k\\
  q_2(x_i) &= f_i & i &= j, \cdots, k
\end{align*}
We now claim that
\[
  q_2(x) = \frac{x - x_j}{x_k - x_j}\, q_1(x) + \frac{x_k - x}{x_k - x_j}\, q_0(x).
\]
We can check directly that the expression on the right correctly interpolates the points $x_i$ for $i = j, \cdots, k$. By uniqueness, the two expressions agree. Since $f[x_j, \cdots, x_k]$, $f[x_{j+1}, \cdots, x_k]$ and $f[x_j, \cdots, x_{k-1}]$ are the leading coefficients of $q_2, q_1, q_0$ respectively, comparing the coefficients of $x^{k-j}$ on both sides gives exactly the claimed recurrence.
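As a sanity check, in the simplest case $j = 0$, $k = 1$ the recurrence gives
\[
  f[x_0, x_1] = \frac{f[x_1] - f[x_0]}{x_1 - x_0} = \frac{f_1 - f_0}{x_1 - x_0},
\]
the slope of the chord through the first two data points (here $f[x_i] = f_i$ is the degree-zero case of the notation).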
Thus the famous Newton divided difference table can be constructed:

  x_i   f_i        f[·, ·]       f[·, ·, ·]         ···   f[·, ··· , ·]
  x_0   f[x_0]
                   f[x_0, x_1]
  x_1   f[x_1]                   f[x_0, x_1, x_2]
                   f[x_1, x_2]
  x_2   f[x_2]                   f[x_1, x_2, x_3]   ···   f[x_0, x_1, ··· , x_n]
                   f[x_2, x_3]
  x_3   f[x_3]                   f[x_2, x_3, x_4]
   .       .           .              .
   .       .           .              .
   .       .           .
  x_n   f[x_n]
From the first $n$ columns, we can find the $(n + 1)$th column using the recurrence relation above. The values of $A_k$ can then be found along the top diagonal, and this is all we really need. However, to compute this diagonal, we will need to compute everything in the table.
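As a concrete illustration (a sketch, not part of the notes), the table can be filled in column by column with the recurrence, overwriting a single array; the function name newton_divided_differences and the in-place bookkeeping are choices of this sketch.

    def newton_divided_differences(xs, fs):
        """Return the top-diagonal coefficients A_0, ..., A_n,
        i.e. f[x_0], f[x_0, x_1], ..., f[x_0, ..., x_n]."""
        n = len(xs) - 1
        table = list(fs)              # table[k] holds f[x_k] to start with
        coeffs = [table[0]]           # A_0 = f[x_0]
        for j in range(1, n + 1):
            # After this pass, table[k] holds f[x_{k-j}, ..., x_k] for k >= j.
            # Updating from the bottom up keeps the previous column's entries intact.
            for k in range(n, j - 1, -1):
                table[k] = (table[k] - table[k - 1]) / (xs[k] - xs[k - j])
            coeffs.append(table[j])   # top-diagonal entry f[x_0, ..., x_j]
        return coeffs

For example, with xs = [0, 1, 2] and fs = [1, 3, 7] this returns [1, 2, 1], corresponding to $p(x) = 1 + 2x + x(x - 1) = x^2 + x + 1$.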
In practice, we often need not find the actual interpolating polynomial. If we just want to evaluate $p(\hat{x})$ at some new point $\hat{x}$ using the divided difference table, we can use Horner's scheme, given by
    S <- f[x_0, ..., x_n]
    for k = n - 1, ..., 0
        S <- (\hat{x} - x_k) S + f[x_0, ..., x_k]
    end
This only takes $O(n)$ operations.
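A direct Python transcription might look as follows (a sketch; coeffs is assumed to hold $A_0, \cdots, A_n$, for instance as returned by newton_divided_differences above):

    def newton_horner_eval(coeffs, xs, x_hat):
        """Evaluate the Newton form at x_hat by Horner's scheme in O(n) operations."""
        n = len(coeffs) - 1
        s = coeffs[n]                      # S <- f[x_0, ..., x_n]
        for k in range(n - 1, -1, -1):
            s = (x_hat - xs[k]) * s + coeffs[k]
        return s

Continuing the small example, newton_horner_eval([1, 2, 1], [0, 1, 2], 3) returns 13, which agrees with $3^2 + 3 + 1$.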
If an extra data point $\{x_{n+1}, f_{n+1}\}$ is added, then we only have to compute an extra diagonal $f[x_k, \cdots, x_{n+1}]$ for $k = n, \cdots, 0$ in the divided difference table to obtain the new coefficient, and the old results can be reused. This requires $O(n)$ operations. This is less straightforward for Lagrange's method.
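One way to organise this incremental update (again a sketch with its own bookkeeping, not prescribed by the notes) is to keep the trailing diagonal $f[x_k, \cdots, x_n]$, $k = n, \cdots, 0$, of the table alongside the coefficients:

    def add_point(xs, coeffs, last_diag, x_new, f_new):
        """Append one data point in O(n), reusing the old trailing diagonal.

        last_diag[i] is assumed to hold f[x_{n-i}, ..., x_n] for the current
        last point x_n, and coeffs holds A_0, ..., A_n; xs and coeffs are
        updated in place, and the new trailing diagonal is returned."""
        new_diag = [f_new]                         # f[x_{n+1}]
        for i in range(1, len(last_diag) + 1):
            numerator = new_diag[i - 1] - last_diag[i - 1]
            denominator = x_new - xs[len(xs) - i]  # x_{n+1} - x_{n+1-i}
            new_diag.append(numerator / denominator)
        xs.append(x_new)
        coeffs.append(new_diag[-1])                # A_{n+1} = f[x_0, ..., x_{n+1}]
        return new_diag

Starting from a single point, one would take coeffs = [f_0] and last_diag = [f_0], and feed each subsequent point through add_point.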