Divided differences
{{Short description|Algorithm for computing polynomial coefficients}}
In mathematics, divided differences is an algorithm historically used for computing tables of logarithms and trigonometric functions.{{Citation needed |date=October 2017}} Charles Babbage's difference engine, an early mechanical calculator, was designed to use this algorithm in its operation.{{cite book |last1=Isaacson |first1=Walter |title=The Innovators |date=2014 |publisher=Simon & Schuster |isbn=978-1-4767-0869-0 |page=20 }}
Divided differences is a recursive division process. Given a sequence of data points (x_0, y_0), \ldots, (x_n, y_n), the method calculates the coefficients of the interpolation polynomial of these points in the Newton form.
It is sometimes denoted by a delta with a bar, \bar{\Delta}.
Definition
Given n + 1 data points
(x_0, y_0), \ldots, (x_n, y_n),
where the x_k are assumed to be pairwise distinct, the forward divided differences are defined as:
\begin{align}
\mathopen[y_k] &:= y_k, && k \in \{ 0,\ldots,n\} \\
\mathopen[y_k,\ldots,y_{k+j}] &:= \frac{[y_{k+1},\ldots , y_{k+j}] - [y_{k},\ldots , y_{k+j-1}]}{x_{k+j}-x_k}, && k\in\{0,\ldots,n-j\},\ j\in\{1,\ldots,n\}.
\end{align}
To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of j above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by the difference of the corresponding x-values:
\begin{matrix}
x_0 & y_0 = [y_0] & & & \\
& & [y_0,y_1] & & \\
x_1 & y_1 = [y_1] & & [y_0,y_1,y_2] & \\
& & [y_1,y_2] & & [y_0,y_1,y_2,y_3]\\
x_2 & y_2 = [y_2] & & [y_1,y_2,y_3] & \\
& & [y_2,y_3] & & \\
x_3 & y_3 = [y_3] & & & \\
\end{matrix}
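This recursion translates directly into code. The following is a minimal sketch in Python (the function name and the column-wise layout of the returned table are illustrative choices, not from any particular library):

<syntaxhighlight lang="python">
def divided_differences(xs, ys):
    """Return the triangular table of forward divided differences.

    table[j][k] holds [y_k, ..., y_{k+j}]; in particular, table[j][0]
    is [y_0, ..., y_j], a coefficient of the Newton interpolation polynomial.
    """
    n = len(xs) - 1
    table = [list(ys)]  # column j = 0: [y_k] = y_k
    for j in range(1, n + 1):
        prev = table[j - 1]
        col = [(prev[k + 1] - prev[k]) / (xs[k + j] - xs[k])
               for k in range(n - j + 1)]
        table.append(col)
    return table


# The top diagonal [y_0], [y_0,y_1], [y_0,y_1,y_2] for the nodes
# 0, 1, 2 with values 1, 3, 7 comes out as 1, 2, 1.
print([col[0] for col in divided_differences([0.0, 1.0, 2.0], [1.0, 3.0, 7.0])])
</syntaxhighlight>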
= Notation =
Note that the divided difference [y_k,\ldots,y_{k+j}] depends on the values x_k,\ldots,x_{k+j} and y_k,\ldots,y_{k+j}, but the notation hides the dependency on the x-values. If the data points are given by a function f,
(x_0, y_0), \ldots, (x_n, y_n) = (x_0, f(x_0)), \ldots, (x_n, f(x_n)),
one sometimes writes the divided difference in the notation
f[x_k,\ldots,x_{k+j}] \ \stackrel{\text{def}}= \ [f(x_k),\ldots,f(x_{k+j})] = [y_k,\ldots,y_{k+j}].
Other notations for the divided difference of the function f on the nodes x_0, \ldots, x_n are:
\mathopen[x_0,\ldots,x_n;f] = D[x_0,\ldots,x_n]f.
Example
Divided differences for k = 0 and the first few values of j:
\begin{align}
\mathopen[y_0] &= y_0 \\
\mathopen[y_0,y_1] &= \frac{y_1-y_0}{x_1-x_0} \\
\mathopen[y_0,y_1,y_2]
&= \frac{\mathopen[y_1,y_2]-\mathopen[y_0,y_1]}{x_2-x_0}
= \frac{\frac{y_2-y_1}{x_2-x_1}-\frac{y_1-y_0}{x_1-x_0}}{x_2-x_0}
= \frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)}-\frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)}
\\
\mathopen[y_0,y_1,y_2,y_3] &= \frac{\mathopen[y_1,y_2,y_3]-\mathopen[y_0,y_1,y_2]}{x_3-x_0}
\end{align}
Thus, the table corresponding to these terms, up to the first two difference columns, has the following form:
\begin{matrix}
x_0 & y_{0} & & \\
& & {y_{1}-y_{0}\over x_1 - x_0} & \\
x_1 & y_{1} & & {{y_{2}-y_{1}\over x_2 - x_1}-{y_{1}-y_{0}\over x_1 - x_0} \over x_2 - x_0} \\
& & {y_{2}-y_{1}\over x_2 - x_1} & \\
x_2 & y_{2} & & \vdots \\
& & \vdots & \\
\vdots & & & \vdots \\
& & \vdots & \\
x_n & y_{n} & & \\
\end{matrix}
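As a small worked check (numbers chosen here for illustration, not from the source): for the points (0, 1), (1, 3), (2, 7) the table gives [y_0,y_1] = (3-1)/(1-0) = 2, [y_1,y_2] = (7-3)/(2-1) = 4 and [y_0,y_1,y_2] = (4-2)/(2-0) = 1, and the resulting Newton polynomial 1 + 2x + x(x-1) = x^2 + x + 1 indeed passes through all three points.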
Properties
- Linearity:
\begin{align}
(f+g)[x_0,\dots,x_n] &= f[x_0,\dots,x_n] + g[x_0,\dots,x_n] \\
(\lambda\cdot f)[x_0,\dots,x_n] &= \lambda\cdot f[x_0,\dots,x_n]
\end{align}
- Leibniz rule:
(f\cdot g)[x_0,\dots,x_n] = f[x_0]\cdot g[x_0,\dots,x_n] + f[x_0,x_1]\cdot g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]\cdot g[x_n] = \sum_{r=0}^{n} f[x_0,\ldots,x_r]\cdot g[x_r,\ldots,x_n]
- Divided differences are symmetric: If \sigma\colon \{0,\dots,n\} \to \{0,\dots,n\} is a permutation then
f[x_0, \dots, x_n] = f[x_{\sigma(0)}, \dots, x_{\sigma(n)}].
- Polynomial interpolation in the Newton form: if P is a polynomial function of degree \leq n, and P[x_0,\dots,x_k] denotes its divided differences, then
P(x) = P[x_0] + P[x_0,x_1](x-x_0) + P[x_0,x_1,x_2](x-x_0)(x-x_1) + \cdots + P[x_0,\ldots,x_n](x-x_0)(x-x_1)\cdots(x-x_{n-1}).
- If p is a polynomial function of degree < n, then
p[x_0, \dots, x_n] = 0.
- Mean value theorem for divided differences: if f is n times differentiable, then
f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!}
for a number \xi in the open interval determined by the smallest and largest of the x_k's.
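The Newton-form property is what makes the scheme useful in practice: the top diagonal of the table yields the coefficients of the interpolation polynomial directly. A minimal Python sketch of this (function names are illustrative; the coefficient loop overwrites the y-values in place):

<syntaxhighlight lang="python">
def newton_coefficients(xs, ys):
    # After step j, coef[i] holds [y_{i-j}, ..., y_i] for i >= j, so the
    # final array is [y_0], [y_0,y_1], ..., [y_0,...,y_n].
    coef = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef


def newton_eval(xs, coef, t):
    # Horner-like evaluation of
    # P(t) = c_0 + c_1 (t - x_0) + ... + c_n (t - x_0) ... (t - x_{n-1}).
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (t - xs[i]) + coef[i]
    return result


xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]
coef = newton_coefficients(xs, ys)             # -> [1.0, 2.0, 1.0]
print([newton_eval(xs, coef, t) for t in xs])  # reproduces ys
</syntaxhighlight>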
Matrix form
The divided difference scheme can be put into an upper triangular matrix:
T_f(x_0,\dots,x_n) =
\begin{pmatrix}
f[x_0] & f[x_0,x_1] & f[x_0,x_1,x_2] & \ldots & f[x_0,\dots,x_n] \\
0 & f[x_1] & f[x_1,x_2] & \ldots & f[x_1,\dots,x_n] \\
0 & 0 & f[x_2] & \ldots & f[x_2,\dots,x_n] \\
\vdots & \vdots & & \ddots & \vdots \\
0 & 0 & 0 & \ldots & f[x_n]
\end{pmatrix}.
Then the following holds:
- T_{f+g}(x) = T_f(x) + T_g(x)
- T_{\lambda f}(x) = \lambda T_f(x), if \lambda is a scalar
- T_{f\cdot g}(x) = T_f(x) \cdot T_g(x) {{pb}} This follows from the Leibniz rule. It means that multiplication of such matrices is commutative. Summarised, the matrices of divided difference schemes with respect to the same set of nodes x form a commutative ring.
- Since T_f(x) is a triangular matrix, its eigenvalues are obviously f(x_0), \dots, f(x_n).
- Let \delta_\xi be a Kronecker delta-like function, that is
\delta_\xi(t) = \begin{cases}
1 &: t=\xi , \\
0 &: \mbox{else}.
\end{cases}
Obviously f\cdot\delta_\xi = f(\xi)\cdot\delta_\xi, so each \delta_{x_i} is an eigenfunction of pointwise multiplication by f, and
T_f(x) \cdot T_{\delta_{x_i}}(x) = f(x_i) \cdot T_{\delta_{x_i}}(x);
the columns of the rank-one matrix T_{\delta_{x_i}}(x) are eigenvectors of T_f(x). Collecting the i-th column of each T_{\delta_{x_i}}(x) yields a matrix of eigenvectors U(x). Example:
U(x_0,x_1,x_2,x_3) =
\begin{pmatrix}
1 & \frac{1}{(x_1-x_0)} & \frac{1}{(x_2-x_0) (x_2-x_1)} & \frac{1}{(x_3-x_0) (x_3-x_1) (x_3-x_2)} \\
0 & 1 & \frac{1}{(x_2-x_1)} & \frac{1}{(x_3-x_1) (x_3-x_2)} \\
0 & 0 & 1 & \frac{1}{(x_3-x_2)} \\
0 & 0 & 0 & 1
\end{pmatrix}
The diagonalization of T_f(x) can then be written as
U(x) \cdot \operatorname{diag}(f(x_0),\dots,f(x_n)) = T_f(x) \cdot U(x).
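These identities are easy to probe numerically. A sketch, assuming NumPy and using the recursive definition to fill the matrix entry by entry (the helper name t_matrix is illustrative):

<syntaxhighlight lang="python">
import numpy as np

def t_matrix(f, xs):
    # Entry (i, j) of the upper triangular matrix is f[x_i, ..., x_j].
    n = len(xs)
    T = np.zeros((n, n))
    for i in range(n):
        T[i, i] = f(xs[i])
    for width in range(1, n):
        for i in range(n - width):
            j = i + width
            T[i, j] = (T[i + 1, j] - T[i, j - 1]) / (xs[j] - xs[i])
    return T


xs = np.array([0.0, 0.5, 1.5, 2.0])
f = lambda t: t**2
g = lambda t: np.sin(t)
Tf, Tg = t_matrix(f, xs), t_matrix(g, xs)
Tfg = t_matrix(lambda t: f(t) * g(t), xs)

# Leibniz rule in matrix form: T_{f.g} = T_f T_g, and the factors commute.
print(np.allclose(Tfg, Tf @ Tg), np.allclose(Tf @ Tg, Tg @ Tf))
</syntaxhighlight>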
= Polynomials and power series =
The matrix
J =
\begin{pmatrix}
x_0 & 1 & 0 & 0 & \cdots & 0 \\
0 & x_1 & 1 & 0 & \cdots & 0 \\
0 & 0 & x_2 & 1 & & 0 \\
\vdots & \vdots & & \ddots & \ddots & \\
0 & 0 & 0 & 0 & \; \ddots & 1\\
0 & 0 & 0 & 0 & & x_n
\end{pmatrix}
contains the divided difference scheme for the identity function with respect to the nodes x_0,\dots,x_n, so J^m contains the divided difference scheme for the m-th power function. Consequently, you can obtain the divided differences for a polynomial function p by applying p to the matrix J: if
p(\xi) = a_0 + a_1\cdot\xi + \dots + a_m\cdot\xi^m
and
p(J) = a_0\cdot I + a_1\cdot J + \dots + a_m\cdot J^m,
then
T_p(x) = p(J).
This is known as Opitz' formula.{{cite journal |last=de Boor |first=Carl |title=Divided Differences |journal=Surv. Approx. Theory |volume=1 |year=2005 |pages=46–69 |url=http://www.emis.de/journals/SAT/papers/2/}}{{cite journal |last=Opitz |first=G. |title=Steigungsmatrizen |journal=Z. Angew. Math. Mech. |volume=44 |year=1964 |pages=T52–T54}}
Now consider increasing the degree of p to infinity, i.e. turning the Taylor polynomial into a Taylor series. Let f be a function which corresponds to a power series. You can compute the divided difference scheme for f by applying the power series to the matrix J: if
f(\xi) = \sum_{k=0}^\infty a_k \xi^k
and
f(J) = \sum_{k=0}^\infty a_k J^k,
then
T_f(x) = f(J).
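Opitz' formula can likewise be checked numerically. A sketch assuming NumPy (dd_top_row recomputes the divided differences by the recursive definition for comparison; both names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def dd_top_row(xs, ys):
    # In-place computation of [y_0], [y_0,y_1], ..., [y_0,...,y_n].
    c = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return np.array(c)


xs = np.array([0.0, 1.0, 2.0, 4.0])

# J: nodes on the diagonal, ones on the first superdiagonal.
J = np.diag(xs) + np.diag(np.ones(len(xs) - 1), k=1)

# p(xi) = 2 + 3 xi + xi^3, applied to J (identity matrix for the constant term).
p = lambda t: 2 + 3 * t + t**3
pJ = 2 * np.eye(len(xs)) + 3 * J + np.linalg.matrix_power(J, 3)

# Opitz' formula: the first row of p(J) is p[x_0], p[x_0,x_1], ..., p[x_0,...,x_n].
print(np.allclose(pJ[0], dd_top_row(xs, p(xs))))  # -> True
</syntaxhighlight>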
Alternative characterizations
= Expanded form =
\begin{align}
f[x_0] &= f(x_0) \\
f[x_0,x_1] &= \frac{f(x_0)}{(x_0-x_1)} + \frac{f(x_1)}{(x_1-x_0)} \\
f[x_0,x_1,x_2] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)} + \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)} \\
f[x_0,x_1,x_2,x_3] &= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)\cdot(x_0-x_3)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)\cdot(x_1-x_3)} +\\
&\quad\quad \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)\cdot(x_2-x_3)} + \frac{f(x_3)}{(x_3-x_0)\cdot(x_3-x_1)\cdot(x_3-x_2)} \\
f[x_0,\dots,x_n] &=
\sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k\in\{0,\dots,n\}\setminus\{j\}} (x_j-x_k)}
\end{align}
With the help of the polynomial function \omega(\xi) = (\xi-x_0)(\xi-x_1)\cdots(\xi-x_n) this can be written as
f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega'(x_j)}.
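The expanded form gives a second, recursion-free way to compute the same number, which is convenient as a cross-check. A minimal sketch (illustrative function name; uses math.prod from Python 3.8+):

<syntaxhighlight lang="python">
import math

def dd_expanded(xs, ys):
    # f[x_0,...,x_n] = sum_j f(x_j) / prod_{k != j} (x_j - x_k)
    n = len(xs)
    return sum(
        ys[j] / math.prod(xs[j] - xs[k] for k in range(n) if k != j)
        for j in range(n)
    )


# Same value as the recursive definition: [y_0,y_1,y_2] = 1 for the
# nodes 0, 1, 2 with values 1, 3, 7.
print(dd_expanded([0.0, 1.0, 2.0], [1.0, 3.0, 7.0]))
</syntaxhighlight>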
= Peano form =
If x_0 < x_1 < \cdots < x_n and n \ge 1, the divided differences can be expressed as
f[x_0,\ldots,x_n] = \frac{1}{(n-1)!} \int_{x_0}^{x_n} f^{(n)}(t)\,B_{n-1}(t) \, dt,
where f^{(n)} is the n-th derivative of the function f and B_{n-1} is a certain B-spline of degree n-1 for the data points x_0,\dots,x_n, normalized so that
\int_{x_0}^{x_n} B_{n-1}(t) \, dt = \tfrac{1}{n}.
This is a consequence of the Peano kernel theorem; it is called the Peano form of the divided differences, and B_{n-1} is the Peano kernel for the divided differences.
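For n = 1, the Peano form reduces to the fundamental theorem of calculus (a worked instance added here for illustration): B_0 is the constant 1/(x_1-x_0) on [x_0, x_1], so
f[x_0,x_1] = \frac{1}{0!}\int_{x_0}^{x_1} f'(t)\,\frac{1}{x_1-x_0}\, dt = \frac{f(x_1)-f(x_0)}{x_1-x_0},
which matches the recursive definition.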
= Forward and backward differences =
{{details|Finite difference}}
When the data points are equidistantly distributed, we get the special case called forward differences. They are easier to calculate than the more general divided differences.
Given n+1 data points
(x_0, y_0), \ldots, (x_n, y_n)
with
x_k = x_0 + kh, \qquad k=0,\ldots,n, \text{ for some fixed } h>0,
the forward differences are defined as
\begin{align}
\Delta^{(0)} y_k &:= y_k,\qquad k=0,\ldots,n \\
\Delta^{(j)}y_k &:= \Delta^{(j-1)}y_{k+1} - \Delta^{(j-1)}y_k,\qquad k=0,\ldots,n-j,\ j=1,\dots,n,
\end{align}
whereas the backward differences are defined as:
\begin{align}
\nabla^{(0)} y_k &:= y_k,\qquad k=0,\ldots,n \\
\nabla^{(j)}y_k &:= \nabla^{(j-1)}y_{k} - \nabla^{(j-1)}y_{k-1},\qquad k=j,\ldots,n,\ j=1,\dots,n.
\end{align}
Thus the forward difference table is written as:
\begin{matrix}
y_0 & & & \\
& \Delta y_0 & & \\
y_1 & & \Delta^2 y_0 & \\
& \Delta y_1 & & \Delta^3 y_0\\
y_2 & & \Delta^2 y_1 & \\
& \Delta y_2 & & \\
y_3 & & & \\
\end{matrix}
whereas the backward difference table is written as:
\begin{matrix}
y_0 & & & \\
& \nabla y_1 & & \\
y_1 & & \nabla^2 y_2 & \\
& \nabla y_2 & & \nabla^3 y_3\\
y_2 & & \nabla^2 y_3 & \\
& \nabla y_3 & & \\
y_3 & & & \\
\end{matrix}
The relationship between divided differences and forward differences is{{cite book|last1=Burden|first1=Richard L.| last2=Faires|first2=J. Douglas | title=Numerical Analysis |url=https://archive.org/details/numericalanalysi00rlbu |url-access=limited | date=2011|page=[https://archive.org/details/numericalanalysi00rlbu/page/n146 129]|publisher=Cengage Learning | isbn=9780538733519| edition=9th}}
[y_k, y_{k+1}, \ldots , y_{k+j}] = \frac{1}{j!\,h^j}\Delta^{(j)}y_k,
whereas for backward differences:
[y_k, y_{k-1}, \ldots , y_{k-j}] = \frac{1}{j!\,h^j}\nabla^{(j)}y_k.
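This relation is straightforward to verify numerically. A sketch assuming NumPy, on an equidistant grid (names illustrative):

<syntaxhighlight lang="python">
import math
import numpy as np

xs = np.arange(5, dtype=float) * 0.5   # equidistant nodes, h = 0.5
h = xs[1] - xs[0]
ys = np.exp(xs)

# Forward differences via repeated np.diff: fd[j][0] = Delta^(j) y_0.
fd = [ys.copy()]
for _ in range(len(xs) - 1):
    fd.append(np.diff(fd[-1]))

# Divided differences [y_0, ..., y_j] by the recursive definition.
c = list(ys)
for j in range(1, len(xs)):
    for i in range(len(xs) - 1, j - 1, -1):
        c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])

# [y_0, ..., y_j] = Delta^(j) y_0 / (j! h^j) for every j.
for j in range(len(xs)):
    assert math.isclose(c[j], fd[j][0] / (math.factorial(j) * h**j))
print("relation verified")
</syntaxhighlight>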
References
{{Reflist|1}}
- {{cite book|author=Louis Melville Milne-Thomson|author-link=L. M. Milne-Thomson| title=The Calculus of Finite Differences | year=2000|orig-year=1933| publisher=American Mathematical Soc.| isbn=978-0-8218-2107-7|at=Chapter 1: Divided Differences}}
- {{cite book|author1=Myron B. Allen|author2=Eli L. Isaacson| title=Numerical Analysis for Applied Science | year=1998|publisher=John Wiley & Sons|isbn=978-1-118-03027-1| at=Appendix A}}
- {{cite book|author=Ron Goldman|title=Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling|year=2002| publisher=Morgan Kaufmann| isbn=978-0-08-051547-2| at=Chapter 4: Newton Interpolation and Difference Triangles}}
External links
- An [http://code.henning-thielemann.de/htam/src/Numerics/Interpolation/DividedDifference.hs implementation] in Haskell.