
6. A Note about Divided Differences


In the previous discussion, we described a perspective on interpolation, numerical differentiation, and quadrature that utilised linear algebra. We framed the interpolation problem as follows: given points $x_0, \ldots, x_n$ and a point $x$, determine a set of coefficients $c_0(x), \ldots, c_n(x)$ such that $$p(x) = c_0(x) p\left( x_0 \right) + c_1(x) p\left( x_1 \right) + \cdots + c_n(x) p\left( x_n \right)$$ for all polynomials $p$ of degree at most $n$.
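For instance, when $n = 1$, requiring that both $p(t) = 1$ and $p(t) = t$ be reproduced forces $c_0(x) + c_1(x) = 1$ and $c_0(x) x_0 + c_1(x) x_1 = x$, so that $$c_0(x) = \frac{x_1 - x}{x_1 - x_0}, \qquad c_1(x) = \frac{x - x_0}{x_1 - x_0},$$ the familiar weights for linear interpolation between $x_0$ and $x_1$.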

We did this by analysing the dual space of $P_n$, the space of all such polynomials, and in essence, we are saying that the linear functional “evaluation at $x$” lies in the linear span of $\left\lbrace \operatorname{ev}_{x_0}, \ldots, \operatorname{ev}_{x_n} \right\rbrace$.

We then picked a basis $\left\lbrace p_0, \ldots, p_n \right\rbrace$ of $P_n$ to establish a matrix equation $$\begin{pmatrix} c_0(x) \\ \vdots \\ c_n(x) \end{pmatrix} = \begin{pmatrix} p_0\left( x_0 \right) & \cdots & p_0\left( x_n \right) \\ \vdots & & \vdots \\ p_n\left( x_0 \right) & \cdots & p_n\left( x_n \right) \end{pmatrix}^{-1} \begin{pmatrix} p_0(x) \\ \vdots \\ p_n(x) \end{pmatrix}.$$
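To make this concrete, here is a minimal Python sketch of the computation above, assuming the monomial basis $p_k(t) = t^k$ (the function name `interpolation_weights` is my own, not anything from the course):

```python
import numpy as np

def interpolation_weights(xs, x):
    """Solve the linear system above for c_0(x), ..., c_n(x),
    using the monomial basis p_k(t) = t^k."""
    n = len(xs) - 1
    # A[k, i] = p_k(x_i) = x_i ** k
    A = np.array([[xi ** k for xi in xs] for k in range(n + 1)], dtype=float)
    # right-hand side: p_k(x) = x ** k
    b = np.array([x ** k for k in range(n + 1)], dtype=float)
    return np.linalg.solve(A, b)

# The weights always sum to 1, since p(t) = 1 must be reproduced exactly.
c = interpolation_weights([0.0, 1.0, 2.0], 0.5)
print(c, c.sum())
```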

Then, given data points $y_0, \ldots, y_n$, the interpolation of the data points $\left( x_0, y_0 \right), \ldots, \left( x_n, y_n \right)$ at $x$ is given by $$\begin{pmatrix} c_0(x) & \cdots & c_n(x) \end{pmatrix} \begin{pmatrix} y_0 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} p_0(x) & \cdots & p_n(x) \end{pmatrix} \begin{pmatrix} p_0\left( x_0 \right) & \cdots & p_n\left( x_0 \right) \\ \vdots & & \vdots \\ p_0\left( x_n \right) & \cdots & p_n\left( x_n \right) \end{pmatrix}^{-1} \begin{pmatrix} y_0 \\ \vdots \\ y_n \end{pmatrix}.$$
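Continuing the same sketch, the value of the interpolant at $x$ can be computed directly from this transposed system, by first solving against $\begin{pmatrix} y_0 & \cdots & y_n \end{pmatrix}^\top$ and then dotting with the row vector of basis values:

```python
import numpy as np

def interpolate(xs, ys, x):
    """Evaluate the interpolant of (x_0, y_0), ..., (x_n, y_n) at x."""
    n = len(xs) - 1
    # M[i, k] = p_k(x_i) = x_i ** k, the transpose of the matrix A above
    M = np.array([[xi ** k for k in range(n + 1)] for xi in xs], dtype=float)
    v = np.array([x ** k for k in range(n + 1)], dtype=float)
    a = np.linalg.solve(M, np.array(ys, dtype=float))  # solve M a = y
    return v @ a  # p(x) = (p_0(x), ..., p_n(x)) M^{-1} y

# Data from y = t^2 + 1; the interpolant reproduces it, so this prints 1.25.
print(interpolate([0.0, 1.0, 2.0], [1.0, 2.0, 5.0], 0.5))
```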

Note that the square matrix got transposed. When $p_k(x) = x^k$ is our basis, Neville’s method factors the product of the first two terms into a chain of simple matrices, and we described how the intermediate factors provide more information about lower-order interpolating polynomials.
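For reference, here is a sketch of Neville’s method in the same hypothetical Python style; each pass of the outer loop corresponds to applying one of those simple factors, overwriting the current interpolated values with interpolants of one degree higher:

```python
def neville(xs, ys, x):
    """Evaluate the interpolant of (x_0, y_0), ..., (x_n, y_n) at x by Neville's method."""
    p = [float(y) for y in ys]  # p[i] starts as the degree-0 interpolant through (x_i, y_i)
    n = len(xs) - 1
    for m in range(1, n + 1):
        # after this pass, p[i] interpolates (x_i, y_i), ..., (x_{i+m}, y_{i+m}) at x
        for i in range(n - m + 1):
            p[i] = ((x - xs[i + m]) * p[i] + (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + m])
    return p[0]

print(neville([0.0, 1.0, 2.0], [1.0, 2.0, 5.0], 0.5))  # 1.25 again
```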

I gave an incorrect characterisation of the divided differencing method. One can certainly view it as selecting a different dual basis, but I think it’s more accurate to describe it as a smarter choice of $\left\lbrace p_0, \ldots, p_n \right\rbrace$. Rather than using the standard monomial basis, one defines $$p_k(x) = \prod_{j=0}^{k-1} \left( x - x_j \right)$$ as the basis. This makes the square matrix in the first equation upper triangular, so its transpose in the second equation is lower triangular, and the method of divided differences is just applying forward substitution to solve for its product with $\begin{pmatrix} y_0 & \cdots & y_n \end{pmatrix}^\top$!
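As an illustration (with my own function name, and exploiting the fact that the entries $p_k\left( x_i \right)$ of the triangular matrix can be accumulated as running products), the forward substitution looks like this; the outputs are exactly the Newton coefficients $f\left[ x_0 \right], f\left[ x_0, x_1 \right], \ldots, f\left[ x_0, \ldots, x_n \right]$ of the divided difference table:

```python
def newton_coefficients(xs, ys):
    """Solve the lower triangular system M d = y by forward substitution,
    where M[i, k] = p_k(x_i) = prod_{j < k} (x_i - x_j)."""
    n = len(xs) - 1
    d = [0.0] * (n + 1)
    for i in range(n + 1):
        s = float(ys[i])
        basis = 1.0  # running value of p_k(x_i), starting from p_0(x_i) = 1
        for k in range(i):
            s -= basis * d[k]       # subtract the known terms M[i, k] d[k]
            basis *= xs[i] - xs[k]  # advance to p_{k+1}(x_i)
        d[i] = s / basis  # basis is now the diagonal entry p_i(x_i)
    return d

# Data from y = t^2 + 1: the Newton coefficients are [1.0, 1.0, 1.0].
print(newton_coefficients([0.0, 1.0, 2.0], [1.0, 2.0, 5.0]))
```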