# 18. Polynomial Collocation (Interpolation/Extrapolation) and Approximation

**References:**

- Section 3.1 *Data and Interpolating Functions* of Sauer
- Section 3.1 *Interpolation and the Lagrange Polynomial* of Burden & Faires
- Section 4.1 of Chenney & Kincaid

## 18.1. Introduction

Numerical methods for dealing with functions require describing them, at least approximately, using a finite list of numbers, and the most basic approach is to approximate by a polynomial. (Other important choices are rational functions and “trigonometric polynomials”: sums of multiples of sines and cosines.) Such polynomials can then be used to approximate derivatives and integrals.

The simplest idea for approximating \(f(x)\) on domain \([a, b]\) is to start with a finite collection of **node** values \(x_i \in [a, b]\), \(0 \leq i \leq n\) and then seek a polynomial \(p\) which **collocates** with \(f\) at those values: \(p(x_i) = f(x_i)\) for \(0 \leq i \leq n\).
Actually, we can put the function aside for now, and simply seek a polynomial that passes through a list of points \((x_i, y_i)\); later we will achieve collocation with \(f\) by choosing \(y_i = f(x_i)\).

In fact there are infinitely many such polynomials: given one, add to it any polynomial with zeros at all of the \(n+1\) nodes.
So to make the problem well-posed, we seek the collocating polynomial of *lowest degree*.
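
To make this concrete, here is a small sketch (with made-up points, not from the examples below) checking that adding a multiple of \((x-x_0)(x-x_1)\cdots(x-x_n)\) to one interpolant produces another:

```julia
# Sketch with made-up data: P passes through (0,1), (1,2), (2,5);
# adding a multiple of (x - x_0)(x - x_1)(x - x_2) gives another,
# higher-degree, polynomial through the same three points.
x_nodes = [0.0, 1.0, 2.0]
P(x) = 1 + x^2
Q(x) = P(x) + 3 * prod(x .- x_nodes)   # the added cubic vanishes at every node
println(all(P(xi) == Q(xi) for xi in x_nodes))  # true: P and Q agree at the nodes
```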

**Theorem**
Given \(n+1\) distinct values \(x_i\), \(0 \leq i \leq n\), and corresponding \(y\)-values \(y_i\),
there is a unique polynomial \(P\) of degree at most \(n\) with \(P(x_i) = y_i\) for \(0 \leq i \leq n\).

(*Note:* although the degree is typically \(n\), it can be less; as an extreme example, if all \(y_i\) are equal to \(c\), then \(P(x)\) is that constant \(c\).)

Historically there are several methods for finding \(P_n\) and proving its uniqueness, in particular, the *divided difference* method introduced by Newton and the *Lagrange polynomial* method.
However for our purposes, and for most modern needs, a different method is easiest, and it also introduces a strategy that will be of repeated use later in this course: the **Method of Undetermined Coefficients** or **MUC**.

In general, this method starts by assuming that the function wanted is a sum of unknown multiples of a collection of known functions.
Here, \(P(x) = c_0 + c_1 x + \cdots + c_{n-1} x^{n-1} + c_n x^n = \sum_{j=0}^n c_j x^j\).

(*Note:* any of the \(c_i\) could be zero, including \(c_n\), in which case the degree is less than \(n\).)

The unknown factors \(c_0, \dots, c_n\) are the *undetermined coefficients*.

Next one states the problem as a system of equations for these undetermined coefficients, and solves them.

Here, we have \(n+1\) conditions to be met:

\[
P(x_i) = c_0 + c_1 x_i + \cdots + c_n x_i^n = y_i, \quad 0 \leq i \leq n.
\]
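
In matrix form, these collocation conditions are a linear system:

\[
\begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^n \\
1 & x_1 & x_1^2 & \cdots & x_1^n \\
\vdots & & & & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^n
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{bmatrix}
=
\begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{bmatrix}.
\]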

This is a system of \(n+1\) simultaneous linear equations in \(n+1\) unknowns, so the question of existence and uniqueness is exactly the question of whether the corresponding matrix is nonsingular,
which in turn is equivalent to the homogeneous case of all \(y_i = 0\) having *only* the trivial solution with all
\(c_i = 0\).

Back in terms of polynomials, this is the claim that the only polynomial of degree at most \(n\) with zeros \(x_0, \dots, x_n\) is the zero polynomial. And this is true, because any non-trivial polynomial with those \(n+1\) distinct roots has degree at least \(n+1\), so the only polynomial of degree at most \(n\) fitting this data is \(P(x) \equiv 0\). The theorem is proven.

The proof of this theorem is completely constructive; it gives the only numerical method we need, and it is the one implemented in Numpy through the pair of functions `numpy.polyfit` and `numpy.polyval`.
(Aside: here as in many places, Numpy mimics the names and functionality of corresponding Matlab tools.)

Briefly, the algorithm is this (indexing from 0!):

Create the \((n+1) \times (n+1)\) matrix \(V\) with elements

\[
v_{i,j} = x_i^j, \quad 0 \leq i \leq n, \; 0 \leq j \leq n,
\]

and the \(n+1\)-element column vector \(y\) with elements \(y_i\) as above.

Solve \(V c = y\) for the vector of coefficients \(c_j\) as above.

I use the name \(V\) because this is called the **Vandermonde matrix.**
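
As a sketch of this algorithm in Julia (using the built-in `\` solver rather than the `rowReduce` and `backwardSubstitution` functions used in the examples below; the nodes and values are those that reappear in Example 2):

```julia
# Sketch of the Vandermonde approach, solved with Julia's built-in `\`
# instead of this course's rowReduce/backwardSubstitution.
x_nodes = [1.0, 2.0, 3.0, 4.0]
y_nodes = [6.0, 2.0, 34.0, 192.0]
n = length(x_nodes) - 1                      # the degree
V = [x_nodes[i+1]^j for i in 0:n, j in 0:n]  # V[i+1, j+1] = x_i^j
c = V \ y_nodes                              # solve V c = y
P(x) = sum(c[j+1] * x^j for j in 0:n)
println(c)                                   # ≈ [-44.0, 107.0, -72.0, 15.0]
```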

```
# Adding the package PyPlot only needs to be done "once per computer", so I have commented it out
#using Pkg
#Pkg.add("PyPlot")
```

```
# Enable graphics; PyPlot is an interface to the Python package matplotlib.pyplot
using PyPlot
```

```
n_plot_points = 50;
```

```
# Enable LaTeX math formatting in text strings, e.g. L"y=x^2"
#using LaTeXStrings
```

## 18.2. Example 1

As usual, I concoct a first example with known correct answer, by using a polynomial as \(f\).

```
function f(x)
    return 4 + 7x - 2x^2 - 5x^3 + 2x^4
end;
```

```
x_nodes = [1., 2., 0., 3.3, 4.] # They do not need to be in order
println("The x nodes 'x_i' are $(x_nodes)")
n_nodes = length(x_nodes)
degree = n_nodes - 1
# TO DO: creation of y_nodes can probably be vectorized, but for now:
y_nodes = zero(x_nodes) # Zero vector matching x_nodes; a shell to be filled next
for i in 1:n_nodes
    y_nodes[i] = f(x_nodes[i])
end
println("The y values at the nodes are $(y_nodes)")
```

```
The x nodes 'x_i' are [1.0, 2.0, 0.0, 3.3, 4.0]
The y values at the nodes are [6.0, 2.0, 4.0, 62.81919999999994, 192.0]
```
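
As an aside on the `TO DO` comment in the code above: in Julia the loop filling `y_nodes` can indeed be vectorized with the broadcast "dot" syntax, applying `f` elementwise:

```julia
f(x) = 4 + 7x - 2x^2 - 5x^3 + 2x^4   # the same test polynomial as above
x_nodes = [1.0, 2.0, 0.0, 3.3, 4.0]
y_nodes = f.(x_nodes)                 # broadcast: no explicit loop needed
println(y_nodes)                      # matches the loop's output above
```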

```
V = zeros(n_nodes, n_nodes)
for i in 0:degree
    for j in 0:degree
        V[i+1,j+1] = x_nodes[i+1]^j # Shift the array indices up by one, since they start at 1, not 0.
    end
end
```

Solve, using the functions seen in earlier sections and gathered in the module `NumericalMethodsAndAnalysis`:

```
# Tell Julia where I keep my modules; for now in a "sister" folder to where the notebooks are:
push!(LOAD_PATH, "../modules");
import NumericalMethodsAndAnalysis: rowReduce, backwardSubstitution
```

```
# c = backwardSubstitution(rowReduce(V, y_nodes)) # Not quite working
(U, rhs) = rowReduce(V, y_nodes)
c_4 = backwardSubstitution(U, rhs)
println("The coefficients of P are $(c_4)")
```

```
The coefficients of P are [3.999999999999999, 6.999999999999986, -1.9999999999999734, -5.000000000000013, 2.0000000000000018]
```

These are correct!

### 18.2.1. Aside on presentation: a pretty-printer for the polynomials

```
function display_P(c)
    # Note that the function `print` does not end by going to a new line, unlike `println`;
    # this is useful when you want to create a line of output piece-by-piece.
    degree = length(c) - 1
    print("P_$(degree)(x) = ")
    print(c[1])
    if degree > 0
        print(" + $(c[2])x")
    end
    if degree > 1
        for j in 2:degree
            print(" + $(c[j+1])x^$(j)")
        end
    end
end;
```

```
display_P(c_4)
```

```
P_4(x) = 3.999999999999999 + 6.999999999999986x + -1.9999999999999734x^2 + -5.000000000000013x^3 + 2.0000000000000018x^4
```

## 18.3. Example 2: \(f(x)\) not a polynomial of degree \(\leq n\)

Reduce the degree of the interpolating \(P\) to three by using only four nodes:

```
x_nodes = [1., 2., 3., 4.];
```

The rest is done as above:

```
println("The x nodes 'x_i' are $(x_nodes)")
n_nodes = length(x_nodes)
degree = n_nodes - 1
y_nodes = zero(x_nodes)
for i in 1:n_nodes
    y_nodes[i] = f(x_nodes[i])
end
println("The y values at the nodes are $(y_nodes)")
V = zeros(n_nodes, n_nodes)
for i in 0:degree
    for j in 0:degree
        V[i+1,j+1] = x_nodes[i+1]^j
    end
end
(U, rhs) = rowReduce(V, y_nodes)
c_3 = backwardSubstitution(U, rhs)
#println("The coefficients of P_3 are $(c_3)")
display_P(c_3)
```

```
The x nodes 'x_i' are [1.0, 2.0, 3.0, 4.0]
The y values at the nodes are [6.0, 2.0, 34.0, 192.0]
P_3(x) = -44.0 + 107.0x + -72.0x^2 + 15.0x^3
```

There are several ways to assess the accuracy of this fit; we start graphically, and later consider the maximum and root-mean-square (RMS) errors.

```
x_plot = range(start=0.9, stop=4.1, length=n_plot_points) # for graphing: go a bit beyond the nodes
f_plot = zeros(n_plot_points)
P_3_plot = zeros(n_plot_points)
for i in 1:n_plot_points
    f_plot[i] = f(x_plot[i])
    P_3_plot[i] = c_3[1] + c_3[2]*x_plot[i] + c_3[3]*x_plot[i]^2 + c_3[4]*x_plot[i]^3
end
```

```
figure(figsize=[12,6])
plot(x_plot, f_plot, label="f(x)")
plot(x_nodes, y_nodes, "*", label="nodes")
plot(x_plot, P_3_plot, label="y = P_3(x)")
legend()
grid(true)
```

```
P_3_error = f_plot - P_3_plot
figure(figsize=[12,6])
title("Error in P_3(x)")
plot(x_plot, P_3_error)
plot(x_nodes, zero(x_nodes), "*")
grid(true)
```
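
The error curve above can also be summarized by single numbers. A sketch of the maximum and root-mean-square (RMS) errors, using made-up sample values in place of the `P_3_error` vector:

```julia
# Sketch: max-norm and RMS error of a vector of sampled errors.
# The values here are illustrative; in the notebook one would apply
# the same formulas to P_3_error.
errors = [0.1, -0.3, 0.2, -0.05]
max_error = maximum(abs.(errors))
rms_error = sqrt(sum(errors .^ 2) / length(errors))
println("max error = $(max_error), RMS error = $(rms_error)")
```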

## 18.4. Example 3: \(f(x)\) not a polynomial at all

```
function g(x)
    return exp(x)
end
a_g = -1;
b_g = 1;
```

```
degree = 3
n_nodes = degree + 1
x_g_nodes = range(a_g, b_g, length=n_nodes)
g_nodes = zero(x_g_nodes)
for i in 1:n_nodes
    g_nodes[i] = g(x_g_nodes[i])
end
println("degree=$(degree)")
println("node x values $(x_g_nodes)")
println("node y values $(g_nodes)")
V = zeros(n_nodes, n_nodes)
for i in 0:degree
    for j in 0:degree
        V[i+1,j+1] = x_g_nodes[i+1]^j
    end
end
(U, rhs) = rowReduce(V, g_nodes)
c_g = backwardSubstitution(U, rhs)
#println("The coefficients of P_$(degree) are $(c_g)")
display_P(c_g)
```

```
degree=3
node x values -1.0:0.6666666666666666:1.0
node y values [0.36787944117144233, 0.7165313105737893, 1.3956124250860895, 2.718281828459045]
P_3(x) = 0.9951957719567764 + 0.9990492315340318x + 0.5478848628584674x^2 + 0.17615196210976966x^3
```

```
x_g_plot = range(a_g - 0.2, b_g + 0.2, length=n_plot_points) # Go a bit beyond the nodes in each direction
g_plot = zeros(n_plot_points)
P_g_plot = zeros(n_plot_points)
for i in 1:n_plot_points
    g_plot[i] = g(x_g_plot[i])
    P_g_plot[i] = c_g[1]
    for j in 1:degree
        P_g_plot[i] += c_g[j+1]*x_g_plot[i]^j
    end
end
```
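
The nested loop above evaluates the polynomial by computing each power of \(x\) separately; a standard refinement (not used in this notebook, shown here as a sketch) is Horner's rule, which needs only one multiplication and one addition per coefficient:

```julia
# Horner's rule: P(x) = c[1] + x*(c[2] + x*(c[3] + ...)),
# with coefficients ordered from the constant term up, as in this section.
function horner(c, x)
    p = c[end]
    for j in length(c)-1:-1:1
        p = c[j] + x * p
    end
    return p
end

c_example = [-44.0, 107.0, -72.0, 15.0]   # the P_3 found in Example 2
println(horner(c_example, 2.0))           # P_3(2) = 2.0
```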

```
figure(figsize=[12,6])
#title(L"\text{With } y = e^x")
plot(x_g_plot, g_plot, label=L"y = g(x) = e^x")
plot(x_g_nodes, g_nodes, "*", label="nodes")
plot(x_g_plot, P_g_plot, label="y = P(x)")
legend()
grid(true)
```

```
P_g_error = g_plot - P_g_plot
figure(figsize=[12,6])
title(L"Error in $P_n(x)$ for $g(x) = e^x$")
plot(x_g_plot, P_g_error)
plot(x_g_nodes, zero(x_g_nodes), "*")
grid(true)
```
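
As a numeric companion to the error plot, here is a self-contained sketch that recomputes the cubic fit to \(e^x\) (with the built-in `\` solver rather than the module functions) and estimates the maximum error on \([-1, 1]\):

```julia
# Recompute the degree-3 collocation of exp at equally spaced nodes in [-1, 1],
# then sample the error finely; the maximum is small but clearly nonzero.
n = 3
x_nodes = collect(range(-1.0, 1.0, length=n+1))
V = [x_nodes[i+1]^j for i in 0:n, j in 0:n]
c = V \ exp.(x_nodes)
P(x) = sum(c[j+1] * x^j for j in 0:n)
x_fine = range(-1.0, 1.0, length=201)
max_err = maximum(abs.(exp.(x_fine) .- P.(x_fine)))
println("max error on [-1,1] ≈ $(max_err)")
```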

This work is licensed under Creative Commons Attribution-ShareAlike 4.0 International