Least-squares function approximation

{{Short description|Mathematical method}}

In mathematics, least squares function approximation applies the principle of least squares to function approximation, by means of a weighted sum of other functions. The best approximation can be defined as that which minimizes the difference between the original function and the approximation; for a least-squares approach the quality of the approximation is measured in terms of the squared differences between the two.

==Functional analysis==

{{See also|Fourier series|Generalized Fourier series}}

A generalization to approximation of a data set is the approximation of a function by a sum of other functions, usually an orthogonal set:

{{cite book |title=Applied analysis |author=Cornelius Lanczos |pages =212–213 |isbn=0-486-65656-X |publisher=Dover Publications |year=1988 |edition=Reprint of 1956 Prentice–Hall |url=https://books.google.com/books?id=6E85hExIqHYC&pg=PA212}}

:<math>f(x) \approx f_n(x) = a_1 \phi_1(x) + a_2 \phi_2(x) + \cdots + a_n \phi_n(x),</math>

with the set of functions <math>\{\phi_j(x)\}</math> an orthonormal set over the interval of interest, {{nowrap|say [a, b]}}: see also Fejér's theorem. The coefficients <math>\{a_j\}</math> are selected to make the magnitude of the difference <math>\|f - f_n\|^2</math> as small as possible. For example, the magnitude, or norm, of a function <math>g(x)</math> over the {{nowrap|interval [a, b]}} can be defined by:

{{cite book |title=Fourier analysis and its application |page =69 |chapter=Equation 3.14 |author=Gerald B Folland |chapter-url=https://books.google.com/books?id=ix2iCQ-o9x4C&pg=PA69 |isbn=978-0-8218-4790-9 |publisher=American Mathematical Society Bookstore |year=2009 |edition=Reprint of Wadsworth and Brooks/Cole 1992}}

:<math>\|g\| = \left(\int_a^b g^*(x)g(x) \, dx \right)^{1/2}</math>

where the '*' denotes complex conjugate in the case of complex functions. The extension of Pythagoras' theorem in this manner leads to function spaces and the notion of Lebesgue measure, an idea of "space" more general than the original basis of Euclidean geometry. The <math>\{\phi_j(x)\}</math> satisfy orthonormality relations:

{{cite book |title=Fourier Analysis and Its Applications|page =69 |first1=Gerald B | last1= Folland|url=https://books.google.com/books?id=ix2iCQ-o9x4C&pg=PA69 |isbn=978-0-8218-4790-9 |year=2009 |publisher=American Mathematical Society}}

:<math>\int_a^b \phi_i^*(x)\phi_j(x) \, dx = \delta_{ij},</math>

where <math>\delta_{ij}</math> is the Kronecker delta. Substituting function <math>f_n</math> into these equations then leads to the ''n''-dimensional Pythagorean theorem:

{{cite book |title=Statistical methods: the geometric approach |author= David J. Saville, Graham R. Wood |chapter=§2.5 Sum of squares |page=30 |chapter-url=https://books.google.com/books?id=8ummgMVRev0C&pg=PA30 |isbn=0-387-97517-9 |year=1991 |edition=3rd |publisher=Springer}}

:<math>\|f_n\|^2 = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2.</math>

The coefficients <math>\{a_j\}</math> making <math>\|f - f_n\|^2</math> as small as possible are found to be:

:<math>a_j = \int_a^b \phi_j^*(x) f(x) \, dx.</math>
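As a numerical sketch of this projection formula (the function f(x) = x, the interval [-&pi;, &pi;], and the orthonormal sine basis below are illustrative choices, not taken from the text), the coefficients can be approximated with a trapezoidal rule:

```python
import numpy as np

# Sketch: evaluating a_j = integral of phi_j(x) * f(x) dx numerically for
# f(x) = x on [-pi, pi], using the orthonormal basis phi_j(x) = sin(j x)/sqrt(pi).
# These are example choices; the formula applies to any orthonormal set.

a, b = -np.pi, np.pi
x = np.linspace(a, b, 20001)
dx = x[1] - x[0]

def integral(y):
    # trapezoidal rule on the uniform grid
    return dx * (y[:-1] + y[1:]).sum() / 2.0

def phi(j):
    # orthonormal basis on [-pi, pi]: integral of phi_i * phi_j dx = delta_ij
    return np.sin(j * x) / np.sqrt(np.pi)

f = x  # the function being approximated

# projection coefficients a_j (real-valued case, so no complex conjugate needed)
coeffs = [integral(phi(j) * f) for j in range(1, 6)]

# partial sum f_n = sum of a_j * phi_j
f_n = sum(c * phi(j) for j, c in enumerate(coeffs, start=1))

# squared error ||f - f_n||^2, which shrinks as more terms are added
err_sq = integral((f - f_n) ** 2)
```

For this particular f, the exact first coefficient is 2&radic;&pi;, which the numerical value reproduces closely.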

The generalization of the ''n''-dimensional Pythagorean theorem to infinite-dimensional real inner product spaces is known as Parseval's identity or Parseval's equation.

{{cite book |title=Fourier Analysis and Its Applications |page =77 |chapter=Equation 3.22 |author=Gerald B Folland |chapter-url=https://books.google.com/books?id=ix2iCQ-o9x4C&pg=PA77 |isbn=978-0-8218-4790-9 |date=2009-01-13 |publisher =American Mathematical Soc. }}

Particular examples of such a representation of a function are the Fourier series and the generalized Fourier series.
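Parseval's identity can be checked concretely for a function whose generalized Fourier coefficients are known in closed form; the choice f(x) = x on [-&pi;, &pi;] with an orthonormal sine basis below is illustrative, not from the text:

```python
import numpy as np

# Illustrative check of Parseval's identity ||f||^2 = sum of |a_j|^2 for
# f(x) = x on [-pi, pi] with the orthonormal basis phi_j(x) = sin(j x)/sqrt(pi).
# For this f the coefficients a_j = 2 sqrt(pi) (-1)^(j+1) / j are known exactly.

norm_f_sq = 2 * np.pi ** 3 / 3   # ||f||^2 = integral of x^2 over [-pi, pi]

def partial_parseval(n):
    # sum of |a_j|^2 over the first n terms
    a = np.array([2 * np.sqrt(np.pi) * (-1) ** (j + 1) / j
                  for j in range(1, n + 1)])
    return (a ** 2).sum()

# the partial sums increase toward ||f||^2 as n grows
ratio = partial_parseval(1000) / norm_f_sq
```

The partial sums approach ||f||&sup2; from below, as the n-dimensional Pythagorean theorem for each truncation suggests.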

==Further discussion==

===Using linear algebra===

It follows that one can find a "best" approximation of a continuous function <math>f</math> on <math>[a,b]</math> by a function <math>g \in W</math>, where <math>W</math> is a subspace of <math>C[a,b]</math>, by minimizing the area between the two functions:

:<math>\text{Area} = \int_a^b \left\vert f(x) - g(x)\right\vert \, dx,</math>

all within the subspace <math>W</math>. Because integrands involving an absolute value are often difficult to evaluate, one can instead define

:<math>\int_a^b [f(x) - g(x)]^2 \, dx</math>

as an adequate criterion for obtaining the least squares approximation, function <math>g</math>, of <math>f</math> with respect to the inner product space <math>W</math>.
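The squared-difference criterion can be evaluated directly; in the sketch below, f(x) = |x| on [-1, 1] and the subspace of constant functions are illustrative choices, not part of the text:

```python
import numpy as np

# Sketch: evaluating the least-squares criterion for candidate approximations.
# f(x) = |x| on [-1, 1] and constant candidates g(x) = c are example choices;
# the best constant in the least-squares sense is the mean value of f.

x = np.linspace(-1.0, 1.0, 10001)
dx = x[1] - x[0]

def integral(y):
    # trapezoidal rule on the uniform grid
    return dx * (y[:-1] + y[1:]).sum() / 2.0

f = np.abs(x)

def squared_error(c):
    # the criterion from the text: integral of (f(x) - g(x))^2 dx, with g = c
    return integral((f - c) ** 2)

# best constant: integral of f over [-1, 1] divided by the interval length 2
mean_f = integral(f) / 2.0
```

For this f the minimizing constant is 1/2, and any other constant gives a strictly larger value of the criterion.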

As such, <math>\lVert f-g \rVert^2</math>, or equivalently <math>\lVert f-g \rVert</math>, can be written in terms of the inner product:

:<math>\int_a^b [f(x)-g(x)]^2 \, dx = \left\langle f-g, f-g \right\rangle = \lVert f-g \rVert^2.</math>

In other words, the least squares approximation of <math>f</math> is the function <math>g</math> in the subspace <math>W</math> closest to <math>f</math> in terms of the inner product <math>\left\langle f,g \right\rangle</math>. Furthermore, this can be applied with a theorem:

:Let <math>f</math> be continuous on <math>[a,b]</math>, and let <math>W</math> be a finite-dimensional subspace of <math>C[a,b]</math>. The least squares approximating function of <math>f</math> with respect to <math>W</math> is given by

::<math>g = \left\langle f, \vec w_1 \right\rangle \vec w_1 + \left\langle f, \vec w_2 \right\rangle \vec w_2 + \cdots + \left\langle f, \vec w_n \right\rangle \vec w_n,</math>

:where <math>B = \{\vec w_1, \vec w_2, \dots, \vec w_n\}</math> is an orthonormal basis for <math>W</math>.
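The theorem can be sketched numerically. In the example below, f(x) = e^x on [-1, 1], W is the space of quadratic polynomials, and the normalized Legendre polynomials serve as the orthonormal basis; all of these are illustrative choices, not part of the text:

```python
import numpy as np

# Sketch of the theorem: the least-squares approximation of f from a
# finite-dimensional subspace W is g = sum of <f, w_i> w_i over an
# orthonormal basis {w_i} of W.  Example choices: f(x) = exp(x) on [-1, 1],
# W = quadratic polynomials, basis = normalized Legendre polynomials.

x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]

def inner(u, v):
    # <u, v> = integral of u(x) v(x) dx, trapezoidal rule
    w = u * v
    return dx * (w[:-1] + w[1:]).sum() / 2.0

f = np.exp(x)

# orthonormal basis of W: normalized Legendre polynomials P0, P1, P2
basis = [
    np.full_like(x, 1.0 / np.sqrt(2.0)),
    np.sqrt(3.0 / 2.0) * x,
    np.sqrt(5.0 / 2.0) * (3.0 * x ** 2 - 1.0) / 2.0,
]

# g = sum of <f, w_i> w_i, the orthogonal projection of f onto W
g = sum(inner(f, wi) * wi for wi in basis)

# the residual f - g is orthogonal to W, which characterizes the projection
residuals = [inner(f - g, wi) for wi in basis]
```

As a sanity check, this projection beats the degree-2 Taylor polynomial of e^x in the least-squares sense, since g minimizes the norm of the error over all of W.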

==References==