# 3.1. Polynomial Collocation (Interpolation/Extrapolation)

**References:**

- Section 3.1 *Data and Interpolating Functions* in [Sauer, 2022].
- Section 3.1 *Interpolation and the Lagrange Polynomial* in [Burden *et al.*, 2016].
- Section 4.1 in [Chenney and Kincaid, 2012].

## 3.1.1. Introduction

Numerical methods for dealing with functions require describing them, at least approximately, using a finite list of numbers, and the most basic approach is to approximate by a polynomial. (Other important choices are rational functions and “trigonometric polynomials”: sums of multiples of sines and cosines.) Such polynomials can then be used to approximate derivatives and integrals.

The simplest idea for approximating \(f(x)\) on domain \([a, b]\) is to start with a finite collection of **node** values \(x_i \in [a, b]\), \(0 \leq i \leq n\) and then seek a polynomial \(p\) which **collocates** with \(f\) at those values: \(p(x_i) = f(x_i)\) for \(0 \leq i \leq n\).
Actually, we can put the function aside for now, and simply seek a polynomial that passes through a list of points \((x_i, y_i)\); later we will achieve collocation with \(f\) by choosing \(y_i = f(x_i)\).

In fact there are infinitely many such polynomials: given one, add to it any polynomial with zeros at all of the \(n+1\) nodes.
So to make the problem well-posed, we seek the collocating polynomial of *lowest degree*.

**Theorem** (Existence and uniqueness of a collocating polynomial)

Given \(n+1\) distinct values \(x_i\), \(0 \leq i \leq n\), and corresponding \(y\)-values \(y_i\), there is a unique polynomial \(P\) of degree at most \(n\) with \(P(x_i) = y_i\) for \(0 \leq i \leq n\).

(*Note:* although the degree is typically \(n\), it can be less; as an extreme example, if all \(y_i\) are equal to \(c\), then \(P(x)\) is that constant \(c\).)

Historically there are several methods for finding \(P_n\) and proving its uniqueness, in particular, the *divided difference* method introduced by Newton and the *Lagrange polynomial* method.
However, for our purposes, and for most modern needs, a different method is easiest, and it also introduces a strategy that will be of repeated use later in this course: the **Method of Undetermined Coefficients** or **MUC**.

In general, this method starts by assuming that the function wanted is a sum of unknown multiples of a collection of known functions.
Here, \(P(x) = c_n x^n + c_{n-1} x^{n-1} + \cdots+ c_1 x + c_0 = \sum_{j=0}^n c_j x^j\).

(*Note:* any of the \(c_i\) could be zero, including \(c_n\), in which case the degree is less than \(n\).)

The unknown factors \(c_0, \dots, c_n\) are the *undetermined coefficients.*

Next one states the problem as a system of equations for these undetermined coefficients, and solves them.

Here, we have \(n+1\) conditions to be met:

\[ P(x_i) = \sum_{j=0}^n c_j x_i^j = y_i, \quad 0 \leq i \leq n. \]

This is a system of \(n+1\) simultaneous linear equations in \(n+1\) unknowns, so the question of existence and uniqueness is exactly the question of whether the corresponding matrix is non-singular,
and so is equivalent to the case of all \(y_i = 0\) having *only* the solution with all
\(c_i = 0\).

Back in terms of polynomials, this is the claim that the only polynomial of degree at most \(n\) with distinct zeros \(x_0 \dots x_n\) is the zero function. And this is true, because any non-trivial polynomial with those \(n+1\) distinct roots is of degree at least \(n+1\), so the only “degree n” polynomial fitting this data is \(P(x) \equiv 0\). The theorem is proven.

The proof of this theorem is completely constructive; it gives the only numerical method we need, and it is the one implemented in Numpy through the pair of functions `numpy.polyfit` and `numpy.polyval`.
(Aside: here as in many places, Numpy mimics the names and functionality of corresponding Matlab tools.)
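For instance, assuming standard NumPy, the pair can be used as follows; note that `numpy.polyfit` returns coefficients from the *highest* degree down, the reverse of the ordering \(c_0, \dots, c_n\) used in this section.

```python
import numpy as np

# Fit the quadratic q(x) = 1 + 2x + 3x^2 through three of its points.
x = np.array([0.0, 1.0, 2.0])
y = 1 + 2*x + 3*x**2

# numpy.polyfit returns coefficients highest degree first: here about [3, 2, 1].
coeffs = np.polyfit(x, y, deg=len(x) - 1)
print(coeffs)

# numpy.polyval evaluates using that same highest-first ordering: q(1.5) = 10.75.
print(np.polyval(coeffs, 1.5))
```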

Briefly, the algorithm is this (indexing from 0!):

1. Create the \((n+1) \times (n+1)\) matrix \(V\) with elements

   \(v_{i,j} = x_i^j, \quad 0 \leq i, j \leq n,\)

   and the \(n+1\)-element column vector \(y\) with elements \(y_i\) as above.

2. Solve \(V c = y\) for the vector of coefficients \(c_j\) as above.

I use the name \(V\) because this is called the **Vandermonde matrix.**
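For comparison, NumPy can build this matrix directly: `numpy.vander` with `increasing=True` produces exactly \(v_{i,j} = x_i^j\), and `numpy.linalg.solve` then handles \(Vc = y\). A sketch, using a polynomial whose coefficients we know in advance:

```python
import numpy as np

# Nodes and y-values from the known polynomial 4 + 7x - 2x^2 - 5x^3 + 3x^4.
xnodes = np.array([1.0, 2.0, 7.0, 5.0, 4.0])
ynodes = 4 + 7*xnodes - 2*xnodes**2 - 5*xnodes**3 + 3*xnodes**4

# increasing=True orders columns x^0, x^1, ..., x^n, matching v_{i,j} = x_i^j.
V = np.vander(xnodes, increasing=True)
c = np.linalg.solve(V, ynodes)
print(c)  # approximately [ 4.  7. -2. -5.  3.]
```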

```
from numpy import array, linspace, zeros, zeros_like, exp
from matplotlib.pyplot import figure, plot, title, grid, legend
```

As usual, I concoct a first example with known correct answer, by using a polynomial as \(f\):

\[ f(x) = 4 + 7x - 2x^2 - 5x^3 + 3x^4, \]

using the nodes \(x_0 = 1\), \(x_1 = 2\), \(x_2 = 7\), \(x_3 = 5\) and \(x_4 = 4\). (They do not need to be in order.)

```
def f(x):
    return 4 + 7*x - 2*x**2 - 5*x**3 + 3*x**4
```

```
xnodes = array([1., 2., 7., 5., 4.]) # They do not need to be in order
nnodes = len(xnodes)
n = nnodes-1
print(f"The x nodes 'x_i' are {xnodes}")
ynodes = zeros_like(xnodes)
for i in range(nnodes):
    ynodes[i] = f(xnodes[i])
print(f"The y values at the nodes are {ynodes}")
```

```
The x nodes 'x_i' are [1. 2. 7. 5. 4.]
The y values at the nodes are [ 7. 18. 5443. 1239. 448.]
```

The Vandermonde matrix:

```
V = zeros([nnodes, nnodes])
for i in range(nnodes):
    for j in range(nnodes):
        V[i,j] = xnodes[i]**j
```

Solve, using our functions from earlier sections, gathered in the module `numericalMethods`:

```
from numericalMethods import rowReduce, backwardSubstitution
```

```
(U, z) = rowReduce(V, ynodes)
c = backwardSubstitution(U, z)
print(f"The coefficients of P are {c}")
```

```
The coefficients of P are [ 4. 7. -2. -5. 3.]
```

We can also check the resulting values of the polynomial:

```
P = c[0] + c[1]*xnodes + c[2]*xnodes**2 + c[3]*xnodes**3 + c[4]*xnodes**4
```

```
for (x, y, P_i) in zip(xnodes, ynodes, P):
    print(f"P({x}) should be {y}; it is {P_i}")
```

```
P(1.0) should be 7.0; it is 7.0
P(2.0) should be 18.0; it is 18.0
P(7.0) should be 5443.0; it is 5443.0
P(5.0) should be 1239.0; it is 1239.0
P(4.0) should be 448.0; it is 448.0
```

## 3.1.2. Functions for computing the coefficients and evaluating the polynomials

We will use this procedure several times, so it is time to put it into functions, and to add a pretty printer for polynomials.

```
def fitPolynomial(x, y):
    """Compute the coefficients c_i of the polynomial of lowest degree that collocates the points (x[i], y[i]).
    These are returned in an array c of the same length as x and y, even if the degree is less than the nominal degree n = len(x)-1,
    in which case the array has some trailing zeros.
    The polynomial is thus P(x) = c[0] + c[1] x + ... + c[n] x^n.
    """
    nnodes = len(x)
    V = zeros([nnodes, nnodes])
    for i in range(nnodes):
        for j in range(nnodes):
            V[i,j] = x[i]**j
    (U, z) = rowReduce(V, y)
    c = backwardSubstitution(U, z)
    return c
```

```
def evaluatePolynomial(x, coeffs):
    # Evaluate the polynomial with coefficients in coeffs at the points in x.
    npoints = len(x)
    ncoeffs = len(coeffs)
    n = ncoeffs - 1
    powers = linspace(0, n, n+1)
    y = zeros_like(x)
    for i in range(npoints):
        y[i] = sum(coeffs * x[i]**powers)
    return y
```
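As an aside, polynomial evaluation is more often done with *Horner's rule*, which nests the multiplications instead of computing each power separately; a sketch of an alternative evaluator (the name `evaluatePolynomialHorner` is mine, not part of the module):

```python
from numpy import array, zeros_like

def evaluatePolynomialHorner(x, coeffs):
    """Evaluate c[0] + c[1]x + ... + c[n]x^n by Horner's rule:
    p(x) = c[0] + x*(c[1] + x*(c[2] + ...)),
    using n multiplications per point and no explicit powers."""
    y = zeros_like(x)
    for c in reversed(coeffs):
        y = y * x + c
    return y

# 1 + 2x + 3x^2 at x = 0, 1, 2 gives [1, 6, 17].
print(evaluatePolynomialHorner(array([0.0, 1.0, 2.0]), [1.0, 2.0, 3.0]))
```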

```
def showPolynomial(c):
    print("P(x) = ", end="")
    n = len(c)-1
    print(f"{c[0]:.4}", end="")
    if n > 0:
        coeff = c[1]
        if coeff > 0:
            print(f" + {coeff:.4}x", end="")
        elif coeff < 0:
            print(f" - {-coeff:.4}x", end="")
    if n > 1:
        for j in range(2, len(c)):
            coeff = c[j]
            if coeff > 0:
                print(f" + {coeff:.4}x^{j}", end="")
            elif coeff < 0:
                print(f" - {-coeff:.4}x^{j}", end="")
    print()
```

While debugging, redo the first example:

```
c_new = fitPolynomial(xnodes, ynodes)
print(c_new)
```

```
[ 4. 7. -2. -5. 3.]
```

```
P_i_new = evaluatePolynomial(xnodes, c_new)
print(P_i_new)
```

```
[ 7. 18. 5443. 1239. 448.]
```

```
print(f"The values of P(x_i) are {P_i_new}")
```

```
The values of P(x_i) are [ 7. 18. 5443. 1239. 448.]
```

```
showPolynomial(c_new)
```

```
P(x) = 4.0 + 7.0x - 2.0x^2 - 5.0x^3 + 3.0x^4
```

**Example** (\(f(x)\) not a polynomial of degree \(\leq n\))

Make an exact fit impossible by using the same function but with only four nodes, reducing the degree of the interpolating \(P\) to three: \(x_0 = -2\), \(x_1 = 0\), \(x_2 = 1\) and \(x_3 = 2\).

```
# Reduce the degree of $P$ to at most 3:
n = 3
xnodes = array([-2.0, 0., 1.0, 2.])
ynodes = zeros_like(xnodes)
for i in range(len(xnodes)):
    ynodes[i] = f(xnodes[i])
print(f"n is now {n}, the nodes are now {xnodes}, with f(x_i) values {ynodes}")
c = fitPolynomial(xnodes, ynodes)
print(f"The coefficients of P are now {c}")
showPolynomial(c)
print(f"The values of P at the nodes are now {evaluatePolynomial(xnodes, c)}")
```

```
n is now 3, the nodes are now [-2. 0. 1. 2.], with f(x_i) values [70. 4. 7. 18.]
The coefficients of P are now [ 4. -5. 10. -2.]
P(x) = 4.0 - 5.0x + 10.0x^2 - 2.0x^3
The values of P at the nodes are now [70. 4. 7. 18.]
```

There are several ways to assess the accuracy of this fit; we start graphically, and later consider the maximum and root-mean-square (RMS) errors.

```
x = linspace(min(xnodes), max(xnodes)) # defaulting to 50 points, for graphing
```

```
figure(figsize=[12,6])
plot(x, f(x), label="y=f(x)")
plot(xnodes, ynodes, "*", label="nodes")
P_n_x = evaluatePolynomial(x, c)
plot(x, P_n_x, label="y = P_n(x)")
legend()
grid(True);
```

```
P_error = f(x) - P_n_x
figure(figsize=[12,6])
title("Error in P_n(x)")
plot(x, P_error)
plot(xnodes, zeros_like(xnodes), "*")
grid(True);
```
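The maximum and root-mean-square errors mentioned above can be computed along these lines (a self-contained sketch, re-fitting the cubic with `numpy.polyfit` rather than the module functions used in this section):

```python
import numpy as np

def f(x):
    return 4 + 7*x - 2*x**2 - 5*x**3 + 3*x**4

xnodes = np.array([-2.0, 0.0, 1.0, 2.0])
coeffs = np.polyfit(xnodes, f(xnodes), deg=3)  # highest-degree-first ordering

# Sample the error on a fine grid over the interval spanned by the nodes.
x = np.linspace(xnodes.min(), xnodes.max(), 50)
error = f(x) - np.polyval(coeffs, x)

max_error = np.max(np.abs(error))      # worst-case ("infinity norm") error
rms_error = np.sqrt(np.mean(error**2)) # root-mean-square error
print(f"max error {max_error:.4}, RMS error {rms_error:.4}")
```

The RMS error is never larger than the maximum error, and for this quartic \(f\) fit by a cubic, both are strictly positive.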

**Example** (\(f(x)\) not a polynomial at all)

\(g(x) = e^x\) with four nodes, equally spaced from \(-1\) to \(1\).

```
def g(x): return exp(x)
a_g = -1.0
b_g = 1.0
```

```
n = 3
xnodes_g = linspace(a_g, b_g, n+1)
ynodes_g = zeros_like(xnodes_g)
for i in range(len(xnodes_g)):
    ynodes_g[i] = g(xnodes_g[i])
print(f"{n=}")
print(f"node x values {xnodes_g}")
print(f"node y values {ynodes_g}")
c_g = fitPolynomial(xnodes_g, ynodes_g)
print(f"The coefficients of P are {c_g}")
showPolynomial(c_g)
P_values = evaluatePolynomial(xnodes_g, c_g)
print(f"The values of P(x_i) are {P_values}")
```

```
n=3
node x values [-1. -0.33333333 0.33333333 1. ]
node y values [0.36787944 0.71653131 1.39561243 2.71828183]
The coefficients of P are [0.99519577 0.99904923 0.54788486 0.17615196]
P(x) = 0.9952 + 0.999x + 0.5479x^2 + 0.1762x^3
The values of P(x_i) are [0.36787944 0.71653131 1.39561243 2.71828183]
```

Again, we first assess the accuracy of this fit graphically.

```
x_g = linspace(a_g - 0.25, b_g + 0.25) # Go a bit beyond the nodes in each direction
```

```
figure(figsize=[14,10])
title("With $g(x) = e^x$")
plot(x_g, g(x_g), label="y = $g(x)$")
plot(xnodes_g, ynodes_g, "*", label="nodes")
P_g = evaluatePolynomial(x_g, c_g)
plot(x_g, P_g, label=f"y = $P_{n}(x)$")
legend()
grid(True);
```

```
P_error = g(x_g) - P_g
figure(figsize=[14,10])
title(f"Error in $P_{n}(x)$ for $g(x) = e^x$")
plot(x_g, P_error)
plot(xnodes_g, zeros_like(xnodes_g), "*")
grid(True);
```

This work is licensed under Creative Commons Attribution-ShareAlike 4.0 International