Carleman matrix

In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions, and Markov chains.

Definition

The Carleman matrix of an infinitely differentiable function f(x) is defined as:

:M[f]_{jk} = \frac{1}{k!}\left[\frac{d^k}{dx^k} (f(x))^j \right]_{x=0} ~,

so as to satisfy the (Taylor series) equation:

:(f(x))^j = \sum_{k=0}^{\infty} M[f]_{jk} x^k.

For instance, the computation of f(x) by

:f(x) = \sum_{k=0}^{\infty} M[f]_{1,k} x^k. ~

simply amounts to the dot product of row 1 of M[f] with the column vector \left[1,x,x^2,x^3,...\right]^T.

The entries of M[f] in the next row give the 2nd power of f(x):

:f(x)^2 = \sum_{k=0}^{\infty} M[f]_{2,k} x^k ~,

and, in order to include the zeroth power of f(x) in M[f], row 0 is taken to contain zeros everywhere except in the first position, so that

:f(x)^0 = 1 = \sum_{k=0}^{\infty} M[f]_{0,k} x^k = 1+ \sum_{k=1}^{\infty} 0\cdot x^k ~.

Thus, multiplying M[f] by the column vector \left[1,x,x^2,...\right]^T yields the column vector \left[1,f(x),f(x)^2,...\right]^T, i.e.,

: M[f] \begin{bmatrix}1\\x\\x^2\\ x^3\\\vdots\end{bmatrix} = \begin{bmatrix} 1\\f(x)\\(f(x))^2\\(f(x))^3\\\vdots\end{bmatrix}.
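
The defining formula lends itself to direct computation of truncated Carleman matrices. The following sketch (using SymPy; the helper name carleman and the truncation size N are illustrative choices, not standard notation) builds the N \times N truncation from the derivative formula above and checks that row j reproduces the first N Taylor coefficients of (f(x))^j:

<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    """Truncated N x N Carleman matrix of f about 0:
    M[f]_{j,k} = (1/k!) * [d^k/dx^k f(x)^j]_{x=0}."""
    return sp.Matrix(N, N, lambda j, k:
                     sp.diff(f**j, x, k).subs(x, 0) / sp.factorial(k))

N = 5
f = sp.exp(x)
M = carleman(f, N)

# Row j holds the first N Taylor coefficients of f(x)^j = exp(j*x),
# so row 1 recovers f itself: 1, 1, 1/2, 1/6, 1/24.
for j in range(N):
    coeffs = [sp.series(f**j, x, 0, N).removeO().coeff(x, k) for k in range(N)]
    assert list(M.row(j)) == coeffs
</syntaxhighlight>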

Generalization

A generalization of the Carleman matrix of a function can be defined around any point, such as:

:M[f]_{x_0} = M_x[x - x_0]M[f]M_x[x + x_0]

or M[f]_{x_0} = M[g], where g(x) = f(x + x_0) - x_0. Since M_x[x + x_0] and M_x[x - x_0] are inverse to each other, the inner factors cancel when the conjugated matrix is raised to a power, so that:

:(M[f]_{x_0})^n = M_x[x - x_0]M[f]^nM_x[x + x_0]
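
As a check on this relation, the short SymPy sketch below (the helper carleman, the choice f(x) = x^2 with x_0 = 1, and the truncation size are all illustrative) compares M[g] for g(x) = f(x + x_0) - x_0 with the conjugated product; with a finite N \times N truncation only the rows whose entries stay within the cut-off are exact, so only those rows are compared:

<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    # M[f]_{j,k} = (1/k!) * [d^k/dx^k f(x)^j]_{x=0}
    return sp.Matrix(N, N, lambda j, k:
                     sp.diff(f**j, x, k).subs(x, 0) / sp.factorial(k))

N, x0 = 9, sp.Integer(1)
f = x**2
g = sp.expand(f.subs(x, x + x0) - x0)   # g(x) = f(x + x0) - x0 = x**2 + 2*x

lhs = carleman(g, N)                                             # M[f]_{x0}
rhs = carleman(x - x0, N) * carleman(f, N) * carleman(x + x0, N)

# For f = x**2 the entries of row j extend up to column 2*j, so the
# truncated conjugation is exact only in the rows with 2*j < N.
for j in range(N):
    if 2 * j < N:
        assert lhs.row(j) == rhs.row(j)
</syntaxhighlight>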

== General series ==

:Another way to generalize the construction further is to consider a general series in the following way:

:Let h(x) = \sum_n c_n (h) \cdot \psi_n(x) be a series approximation of h(x), where \{\psi_n(x)\}_n is a basis of the space containing h(x).

:Assuming that \{\psi_n(x)\}_n is also a basis for f(x), we can define G[f]_{mn} = c_n(\psi_m \circ f), so that \psi_m \circ f = \sum_n c_n (\psi_m \circ f) \cdot \psi_n = \sum_n G[f]_{mn} \cdot \psi_n. We can now prove that G[g \circ f] = G[g] \cdot G[f], provided that \{\psi_n(x)\}_n is also a basis for g(x) and g(f(x)).

:Let g(x) be such that \psi_l \circ g = \sum_m G[g]_{lm} \cdot \psi_m where G[g]_{lm} = c_m (\psi_l \circ g).

:Now

:\begin{aligned}
\sum_n G[g \circ f]_{ln} \psi_n &= \psi_l \circ (g \circ f)\\
&= (\psi_l \circ g) \circ f\\
&= \sum_m G[g]_{lm} (\psi_m \circ f)\\
&= \sum_m G[g]_{lm} \sum_n G[f]_{mn} \psi_n\\
&= \sum_{n,m} G[g]_{lm} G[f]_{mn} \psi_n\\
&= \sum_{n} \left(\sum_m G[g]_{lm} G[f]_{mn}\right) \psi_n
\end{aligned}

:Comparing the first and the last terms, and using that \{\psi_n(x)\}_n is a basis for f(x), g(x) and g(f(x)), it follows that G[g \circ f]_{ln} = \sum_m G[g]_{lm} G[f]_{mn}, i.e. G[g \circ f] = G[g] \cdot G[f].

=== Examples ===

==== Rederive (Taylor) Carleman matrix ====

If we set \psi_n(x) = x^n, we recover the Carleman matrix. Because


h(x) = \sum_n c_n (h) \cdot \psi_n(x) = \sum_n c_n (h) \cdot x^n

the coefficient c_n(h) must be the n-th coefficient of the Taylor series of h, that is, c_n(h)=\frac{1}{n!} h^{(n)}(0). Hence

:G[f]_{mn}=c_n(\psi_m \circ f)=c_n(f(x)^m)=\frac{1}{n!}\left[\frac{d^n}{dx^n} (f(x))^m \right]_{x=0} ~,

which is the Carleman matrix given above. (Note that this basis is not orthonormal.)

==== Carleman matrix for an orthonormal basis ====

If \{e_n(x)\}_n is an orthonormal basis for a Hilbert space with a defined inner product \langle f,g \rangle, we can set \psi_n = e_n, and then c_n (h) = \langle h, e_n \rangle. Then

G[f]_{mn}=c_n(e_m \circ f)=\langle e_m \circ f,e_n\rangle .

==== Carleman matrix for Fourier series ====

If e_n(x) = e^{i n x}, we obtain the analogous construction for Fourier series. Let \hat{c}_n and \hat{G} denote the Carleman coefficients and matrix in the Fourier basis. Because the basis is orthonormal with respect to the normalized inner product below, we have

: \hat{c}_n (h) = \langle h,e_n \rangle = \frac{1}{2\pi} \int_{-\pi}^{\pi} h(x) \, e^{-inx}\,dx.


Therefore, \hat{G}[f]_{mn}=\hat{c}_n(e_m \circ f)=\langle e_m \circ f,e_n\rangle, which is

: \hat{G}[f]_{mn}=\frac{1}{2\pi} \int_{-\pi}^{\pi} e^{i m f(x)} \, e^{-inx}\,dx
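
As a concrete illustration (the function fourier_carleman, the doubling map f(x) = 2x, and the index ranges below are illustrative choices, not part of the definition), the entries can be approximated numerically; for f(x) = 2x the integral collapses to \hat{G}[f]_{mn} = \delta_{n,2m}:

<syntaxhighlight lang="python">
import numpy as np

def fourier_carleman(f, m_range, n_range, samples=4096):
    """Approximate G_hat[f]_{mn} = (1/2pi) * integral over [-pi, pi)
    of exp(i*m*f(x)) * exp(-i*n*x) dx by a uniform Riemann sum."""
    x = np.linspace(-np.pi, np.pi, samples, endpoint=False)
    G = np.empty((len(m_range), len(n_range)), dtype=complex)
    for a, m in enumerate(m_range):
        for b, n in enumerate(n_range):
            G[a, b] = np.mean(np.exp(1j * m * f(x) - 1j * n * x))
    return G

# Doubling map f(x) = 2x: exp(i*m*f(x)) = exp(2i*m*x), so the only
# nonzero entries sit at n = 2*m.
m_range, n_range = range(-2, 3), range(-4, 5)
G = fourier_carleman(lambda x: 2 * x, m_range, n_range)
for a, m in enumerate(m_range):
    for b, n in enumerate(n_range):
        assert abs(G[a, b] - (1.0 if n == 2 * m else 0.0)) < 1e-10
</syntaxhighlight>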

Properties

Carleman matrices satisfy the fundamental relationship

:M[f \circ g] = M[f]M[g] ~,

which makes the Carleman matrix M a (direct) representation of f(x). Here the term f \circ g denotes the composition of functions f(g(x)).
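
This relationship can be checked on truncated matrices. A minimal SymPy sketch follows, with illustrative choices f(x) = e^x and g(x) = \sin x; because g(0) = 0, column k of M[g] only involves rows 0 through k, so the truncated product introduces no truncation error:

<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    # M[f]_{j,k} = (1/k!) * [d^k/dx^k f(x)^j]_{x=0}
    return sp.Matrix(N, N, lambda j, k:
                     sp.diff(f**j, x, k).subs(x, 0) / sp.factorial(k))

N = 6
f = sp.exp(x)     # outer function
g = sp.sin(x)     # inner function, g(0) = 0

# M[f o g] agrees exactly with the product of the N x N truncations.
assert carleman(f.subs(x, g), N) == carleman(f, N) * carleman(g, N)
</syntaxhighlight>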

Other properties include:

  • M[f^n] = M[f]^n, the Carleman matrix of the n-th iterate f^n = f \circ f \circ \cdots \circ f ~;
  • M[f^{-1}] = M[f]^{-1}, the Carleman matrix of the compositional inverse of f (when it exists).

Examples

The Carleman matrix of a constant is:

:M[a] = \left(\begin{array}{cccc}
1&0&0& \cdots \\
a&0&0& \cdots \\
a^2&0&0& \cdots \\
\vdots&\vdots&\vdots&\ddots
\end{array}\right)

The Carleman matrix of the identity function is:

:M_x[x] = \left(\begin{array}{cccc}
1&0&0& \cdots \\
0&1&0& \cdots \\
0&0&1& \cdots \\
\vdots&\vdots&\vdots&\ddots
\end{array}\right)

The Carleman matrix of a constant addition is:

:M_x[a + x] = \left(\begin{array}{cccc}
1&0&0& \cdots \\
a&1&0& \cdots \\
a^2&2a&1& \cdots \\
\vdots&\vdots&\vdots&\ddots
\end{array}\right)

The Carleman matrix of the successor function is given by the binomial coefficients:

:M_x[1 + x] = \left(\begin{array}{ccccc}
1&0&0&0& \cdots \\
1&1&0&0& \cdots \\
1&2&1&0& \cdots \\
1&3&3&1& \cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{array}\right)

:M_x[1 + x]_{jk} = \binom{j}{k}

The Carleman matrix of the logarithm is related to the (signed) Stirling numbers of the first kind scaled by factorials:

:M_x[\log(1 + x)] = \left(\begin{array}{cccccc}
1&0&0&0&0& \cdots \\
0&1&-\frac{1}{2}&\frac{1}{3}&-\frac{1}{4}& \cdots \\
0&0&1&-1&\frac{11}{12}& \cdots \\
0&0&0&1&-\frac{3}{2}& \cdots \\
0&0&0&0&1& \cdots \\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{array}\right)

:M_x[\log(1 + x)]_{jk} = s(k, j) \frac{j!}{k!}

The Carleman matrix of the logarithm is related to the (unsigned) Stirling numbers of the first kind scaled by factorials:

:M_x[-\log(1 - x)] = \left(\begin{array}{cccccc}
1&0&0&0&0& \cdots \\
0&1&\frac{1}{2}&\frac{1}{3}&\frac{1}{4}& \cdots \\
0&0&1&1&\frac{11}{12}& \cdots \\
0&0&0&1&\frac{3}{2}& \cdots \\
0&0&0&0&1& \cdots \\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{array}\right)

:M_x[-\log(1 - x)]_{jk} = |s(k, j)| \frac{j!}{k!}

The Carleman matrix of the exponential function is related to the Stirling numbers of the second kind scaled by factorials:

:M_x[\exp(x) - 1] = \left(\begin{array}{cccccc}
1&0&0&0&0& \cdots \\
0&1&\frac{1}{2}&\frac{1}{6}&\frac{1}{24}& \cdots \\
0&0&1&1&\frac{7}{12}& \cdots \\
0&0&0&1&\frac{3}{2}& \cdots \\
0&0&0&0&1& \cdots \\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots
\end{array}\right)

:M_x[\exp(x) - 1]_{jk} = S(k, j) \frac{j!}{k!}
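
Both Stirling-number identities can be spot-checked symbolically. The sketch below is illustrative (the helper carleman and the truncation size are arbitrary choices); it uses SymPy's stirling function for the Stirling numbers and compares entry by entry:

<syntaxhighlight lang="python">
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')

def carleman(f, N):
    # M[f]_{j,k} = (1/k!) * [d^k/dx^k f(x)^j]_{x=0}
    return sp.Matrix(N, N, lambda j, k:
                     sp.diff(f**j, x, k).subs(x, 0) / sp.factorial(k))

N = 6
M_log = carleman(sp.log(1 + x), N)
M_exp = carleman(sp.exp(x) - 1, N)

for j in range(N):
    for k in range(N):
        scale = sp.factorial(j) / sp.factorial(k)
        # s(k, j): signed Stirling numbers of the first kind (0 when j > k)
        s1 = stirling(k, j, kind=1, signed=True) if j <= k else 0
        # S(k, j): Stirling numbers of the second kind (0 when j > k)
        s2 = stirling(k, j, kind=2) if j <= k else 0
        assert M_log[j, k] == s1 * scale
        assert M_exp[j, k] == s2 * scale
</syntaxhighlight>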

The Carleman matrix of exponential functions is:

:M_x[\exp(a x)] = \left(\begin{array}{ccccc}
1&0&0&0& \cdots \\
1&a&\frac{a^2}{2}&\frac{a^3}{6}& \cdots \\
1&2a&2a^2&\frac{4a^3}{3}& \cdots \\
1&3a&\frac{9a^2}{2}&\frac{9a^3}{2}& \cdots \\
\vdots&\vdots&\vdots&\vdots&\ddots
\end{array}\right)

:M_x[\exp(a x)]_{jk} = \frac{(j a)^k}{k!}

The Carleman matrix of a constant multiple is:

:M_x[cx] = \left(\begin{array}{cccc}
1&0&0& \cdots \\
0&c&0& \cdots \\
0&0&c^2& \cdots \\
\vdots&\vdots&\vdots&\ddots
\end{array}\right)

The Carleman matrix of a linear function is:

:M_x[a + cx] = \left(\begin{array}{cccc}
1&0&0& \cdots \\
a&c&0& \cdots \\
a^2&2ac&c^2& \cdots \\
\vdots&\vdots&\vdots&\ddots
\end{array}\right)

The Carleman matrix of a function f(x) = \sum_{k=1}^{\infty}f_k x^k is:

:M[f] = \left(\begin{array}{cccc}
1&0&0& \cdots \\
0&f_1&f_2& \cdots \\
0&0&f_1^2& \cdots \\
\vdots&\vdots&\vdots&\ddots
\end{array}\right)

The Carleman matrix of a function f(x) = \sum_{k=0}^{\infty}f_k x^k is:

:M[f] = \left(\begin{array}{cccc}
1&0&0& \cdots \\
f_0&f_1&f_2& \cdots \\
f_0^2&2f_0f_1&f_1^2+2f_0f_2& \cdots \\
\vdots&\vdots&\vdots&\ddots
\end{array}\right)

Related matrices

The Bell matrix or the Jabotinsky matrix of a function f(x) is defined as{{Cite journal |last=Knuth |first=D. |year=1992 |title=Convolution Polynomials |journal=The Mathematica Journal |volume=2 |issue=4 |pages=67–78 |arxiv=math/9207221|bibcode=1992math......7221K }}{{Cite journal |last=Jabotinsky |first=Eri |date=1953 |title=Representation of functions by matrices. Application to Faber polynomials |url=https://www.ams.org/proc/1953-004-04/S0002-9939-1953-0059359-0/ |journal=Proceedings of the American Mathematical Society |language=en |volume=4 |issue=4 |pages=546–553 |doi=10.1090/S0002-9939-1953-0059359-0 |issn=0002-9939|doi-access=free }}{{Cite journal |last=Lang |first=W. |year=2000 |title=On generalizations of the stirling number triangles |journal=Journal of Integer Sequences |volume=3 |issue=2.4 |pages=1–19|bibcode=2000JIntS...3...24L }}

:B[f]_{jk} = \frac{1}{j!}\left[\frac{d^j}{dx^j} (f(x))^k \right]_{x=0} ~,

so as to satisfy the equation

:(f(x))^k = \sum_{j=0}^{\infty} B[f]_{jk} x^j ~,

These matrices were developed in 1947 by Eri Jabotinsky to represent convolutions of polynomials.{{Cite journal |last=Jabotinsky |first=Eri |year=1947 |title=Sur la représentation de la composition de fonctions par un produit de matrices. Application à l'itération de e^x et de e^x-1. |journal= Comptes rendus de l'Académie des Sciences |volume=224 |pages=323–324}} The Bell matrix is the transpose of the Carleman matrix and satisfies

:B[f \circ g] = B[g]B[f] ~,

which makes the Bell matrix B an anti-representation of f(x).
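
As a quick consistency check, the following SymPy sketch (with illustrative choices f(x) = e^x - 1 and g(x) = \sin x, both fixing 0 so that the truncated products introduce no error) builds the Bell matrix from its defining formula and verifies the transpose relation and the reversed multiplication order:

<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')

def carleman(f, N):
    # M[f]_{j,k} = (1/k!) * [d^k/dx^k f(x)^j]_{x=0}
    return sp.Matrix(N, N, lambda j, k:
                     sp.diff(f**j, x, k).subs(x, 0) / sp.factorial(k))

def bell(f, N):
    # B[f]_{j,k} = (1/j!) * [d^j/dx^j f(x)^k]_{x=0}
    return sp.Matrix(N, N, lambda j, k:
                     sp.diff(f**k, x, j).subs(x, 0) / sp.factorial(j))

N = 6
f, g = sp.exp(x) - 1, sp.sin(x)

assert bell(f, N) == carleman(f, N).T                       # transpose relation
assert bell(f.subs(x, g), N) == bell(g, N) * bell(f, N)     # B[f o g] = B[g] B[f]
</syntaxhighlight>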

See also

Notes

{{Reflist}}

References

  • R Aldrovandi, [https://web.archive.org/web/20120519152123/http://www.worldscibooks.com/physics/4772.html Special Matrices of Mathematical Physics]: Stochastic, Circulant and Bell Matrices, World Scientific, 2001. ([https://books.google.com/books?hl=en&lr=&id=wb9aLGfVsOwC preview])
  • R. Aldrovandi, L. P. Freitas, Continuous Iteration of Dynamical Maps, online preprint, 1997.
  • P. Gralewicz, K. Kowalski, Continuous time evolution from iterated maps and Carleman linearization, online preprint, 2000.
  • K Kowalski and W-H Steeb, [https://web.archive.org/web/20120208174226/http://www.worldscibooks.com/mathematics/1347.html Nonlinear Dynamical Systems and Carleman Linearization], World Scientific, 1991. ([https://books.google.com/books?id=PTTCxQwFtMEC preview])

{{Matrix classes}}

Category:Functions and mappings

Category:Matrix theory

Category:Eponyms in mathematics