In the previous discussion, we described a perspective on interpolation, numerical differentiation, and quadrature that utilised linear algebra. We framed the interpolation problem as follows: given points $x_0, \dots, x_n$ and a point $x$, determine a set of coefficients $c_0(x), \dots, c_n(x)$ such that
$$p(x) = c_0(x)\,p(x_0) + c_1(x)\,p(x_1) + \cdots + c_n(x)\,p(x_n)$$
for all polynomials $p$ of degree at most $n$.
We did this by analysing the dual space of $P_n$, the space of all such polynomials; in essence, the claim is that the linear functional “evaluation at $x$” lies in the linear span of $\{\mathrm{ev}_{x_0}, \dots, \mathrm{ev}_{x_n}\}$.
We then picked a basis $\{p_0, \dots, p_n\}$ of $P_n$ to establish the matrix equation
$$\begin{pmatrix} c_0(x) \\ \vdots \\ c_n(x) \end{pmatrix} = \begin{pmatrix} p_0(x_0) & \cdots & p_0(x_n) \\ \vdots & & \vdots \\ p_n(x_0) & \cdots & p_n(x_n) \end{pmatrix}^{-1} \begin{pmatrix} p_0(x) \\ \vdots \\ p_n(x) \end{pmatrix}.$$
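To make this concrete, here is a minimal numerical sketch of the display above (the monomial basis $p_k(t) = t^k$ and all names below are my own choices for illustration):

```python
import numpy as np

def cardinal_coefficients(xs, x):
    """Solve the display above for c_0(x), ..., c_n(x), using p_k(t) = t^k."""
    n = len(xs) - 1
    A = np.array([[xj ** k for xj in xs] for k in range(n + 1)])  # A[k, j] = p_k(x_j)
    rhs = np.array([x ** k for k in range(n + 1)])                # (p_0(x), ..., p_n(x))
    return np.linalg.solve(A, rhs)

# Sanity check: the coefficients reproduce p(x) from samples of any cubic p.
xs = [0.0, 1.0, 2.0, 3.0]
c = cardinal_coefficients(xs, 1.5)
p = lambda t: 2 * t**3 - t + 5
print(np.dot(c, [p(xj) for xj in xs]), p(1.5))  # the two values agree
```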
Then, given data values $y_0, \dots, y_n$, the value at $x$ of the polynomial interpolating $(x_0, y_0), \dots, (x_n, y_n)$ is given by
$$\begin{pmatrix} c_0(x) & \cdots & c_n(x) \end{pmatrix} \begin{pmatrix} y_0 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} p_0(x) & \cdots & p_n(x) \end{pmatrix} \begin{pmatrix} p_0(x_0) & \cdots & p_n(x_0) \\ \vdots & & \vdots \\ p_0(x_n) & \cdots & p_n(x_n) \end{pmatrix}^{-1} \begin{pmatrix} y_0 \\ \vdots \\ y_n \end{pmatrix}.$$
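Continuing the sketch, the interpolated value is just the dot product of those coefficients with the data; equivalently, one can solve with the transposed matrix against $(y_0, \dots, y_n)^\top$ first, exactly as the right-hand side above is grouped:

```python
ys = [1.0, 0.0, 2.0, -1.0]
x = 1.5
value = np.dot(cardinal_coefficients(xs, x), ys)

# Same number, grouped as in the display: the p(x)-row times solve(A^T, y).
n = len(xs) - 1
A = np.array([[xj ** k for xj in xs] for k in range(n + 1)])
px = np.array([x ** k for k in range(n + 1)])
assert np.isclose(value, px @ np.linalg.solve(A.T, ys))
```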
Note that, compared with the previous display, the square matrix has been transposed. When $p_k(x) = x^k$ is our basis, Neville’s method factors the product of the first two terms into a chain of simple matrices, and we described how the intermediate factors provide more information about the lower-order interpolating polynomials.
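For reference, a standard in-place rendering of Neville’s recursion looks like the sketch below (my own formulation, under the same assumptions as before); after the $k$-th pass, `T[i]` holds the value at $x$ of the degree-$k$ polynomial through $(x_{i-k}, y_{i-k}), \dots, (x_i, y_i)$, which is exactly the lower-order information referred to above:

```python
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial at x by Neville's recursion."""
    T = list(ys)  # T[i] starts as the degree-0 interpolant through (x_i, y_i)
    n = len(xs) - 1
    for k in range(1, n + 1):
        for i in range(n, k - 1, -1):  # go downward so T[i-1] is still the old value
            T[i] = ((x - xs[i - k]) * T[i] - (x - xs[i]) * T[i - 1]) / (xs[i] - xs[i - k])
    return T[n]

assert np.isclose(neville(xs, ys, 1.5), value)  # matches the linear-algebra route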
I gave an incorrect characterisation of the divided differencing method. One can certainly view it as selecting a different dual basis, but I think it’s more accurate to describe it as a smarter choice of $\{p_0, \dots, p_n\}$. Rather than using the standard monomials, one takes $p_k(x) = \prod_{j=0}^{k-1} (x - x_j)$ as the basis. This makes the matrix in the first display upper triangular, so the transposed matrix multiplying $(y_0 \cdots y_n)^\top$ is lower triangular, and the method of divided differences is just forward substitution applied to that system!
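To see the triangularity in action, here is a sketch of divided differences as forward substitution (again with names of my own choosing); each row of the lower-triangular system $V d = y$, with $V_{ik} = p_k(x_i)$, is solved in turn:

```python
def divided_differences(xs, ys):
    """Solve V d = y by forward substitution, where V[i, k] = prod_{j<k}(x_i - x_j).

    V is lower triangular because p_k(x_i) = 0 whenever i < k, so each d[i]
    follows from the earlier ones; the d[i] are the divided differences.
    """
    n = len(xs) - 1
    d = [0.0] * (n + 1)
    for i in range(n + 1):
        acc, basis = ys[i], 1.0  # basis tracks p_k(x_i), starting from p_0 = 1
        for k in range(i):
            acc -= d[k] * basis
            basis *= xs[i] - xs[k]  # advance to p_{k+1}(x_i)
        d[i] = acc / basis          # divide by the diagonal entry p_i(x_i)
    return d

# The resulting Newton form agrees with Neville's method:
d = divided_differences(xs, ys)
newton = d[-1]
for k in range(len(d) - 2, -1, -1):  # Horner-style nesting of the Newton basis
    newton = d[k] + (1.5 - xs[k]) * newton
assert np.isclose(newton, neville(xs, ys, 1.5))
```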