Convolution

{{short description|Integral expressing the amount of overlap of one function as it is shifted over another}}


[[File:comparison convolution correlation.svg|thumb|Visual comparison of convolution, cross-correlation, and autocorrelation. For the operations involving function f, and assuming the height of f is 1.0, the value of the result at 5 different points is indicated by the shaded area below each point. The symmetry of f is the reason f \star g and f*g are identical in this example.]]

In mathematics (in particular, functional analysis), convolution is a mathematical operation on two functions f and g that produces a third function f*g, as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The term convolution refers to both the resulting function and to the process of computing it. The integral is evaluated for all values of shift, producing the convolution function. The choice of which function is reflected and shifted before the integral does not change the integral result (see commutativity). Graphically, it expresses how the 'shape' of one function is modified by the other.

Some features of convolution are similar to cross-correlation: for real-valued functions, of a continuous or discrete variable, convolution f*g differs from cross-correlation f \star g only in that either f(x) or g(x) is reflected about the y-axis in convolution; thus it is a cross-correlation of g(-x) and f(x), or f(-x) and g(x).{{efn-ua|Reasons for the reflection include:

}} For complex-valued functions, the cross-correlation operator is the adjoint of the convolution operator.

Convolution has applications that include probability, statistics, acoustics, spectroscopy, signal processing and image processing, geophysics, engineering, physics, computer vision and differential equations.{{cite journal |last1=Bahri |first1=Mawardi |last2=Ashino |first2=Ryuichi |last3=Vaillancourt |first3=Rémi |title=Convolution Theorems for Quaternion Fourier Transform: Properties and Applications |url=https://core.ac.uk/download/pdf/25493611.pdf |archive-url=https://web.archive.org/web/20201021001150/https://core.ac.uk/download/pdf/25493611.pdf |archive-date=2020-10-21 |url-status=live |journal=Abstract and Applied Analysis |volume=2013 |access-date=2022-11-11 |pages=1–10 |doi=10.1155/2013/162769 |date=2013|doi-access=free}}

The convolution can be defined for functions on Euclidean space and other groups (as algebraic structures).{{citation needed|date=October 2017}} For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. (See row 18 at {{section link|DTFT|Properties}}.) A discrete convolution can be defined for functions on the set of integers.

Generalizations of convolution have applications in the field of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing.{{citation needed|date=October 2017}}

Computing the inverse of the convolution operation is known as deconvolution.

Definition

The convolution of f and g is written f * g, denoting the operator with the symbol *.{{efn-ua

| The symbol {{unichar|2217|asterisk operator}} is different than {{unichar|2A|asterisk}}, which is often used to denote complex conjugation. See {{slink|Asterisk|Mathematical typography}}.

}} It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. As such, it is a particular kind of integral transform:

:(f * g)(t) := \int_{-\infty}^\infty f(\tau) g(t - \tau) \, d\tau.

An equivalent definition is (see commutativity):

:(f * g)(t) := \int_{-\infty}^\infty f(t - \tau) g(\tau)\, d\tau.

While the symbol t is used above, it need not represent the time domain. At each t, the convolution formula can be described as the area under the function f(\tau) weighted by the function g(-\tau) shifted by the amount t. As t changes, the weighting function g(t-\tau) emphasizes different parts of the input function f(\tau): if t is positive, then g(t-\tau) is equal to g(-\tau) shifted along the \tau-axis toward the right (toward +\infty) by the amount t; if t is negative, it is equal to g(-\tau) shifted toward the left (toward -\infty) by the amount |t|.

For functions f, g supported on only [0,\infty) (i.e., zero for negative arguments), the integration limits can be truncated, resulting in:

:(f * g)(t) = \int_{0}^{t} f(\tau) g(t - \tau)\, d\tau \quad \ \text{for } f, g : [0, \infty) \to \mathbb{R}.

For the multi-dimensional formulation of convolution, see domain of definition (below).
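As a numerical illustration, the defining integral can be approximated by a Riemann sum on a uniform grid; the discrete sum is exactly what NumPy's convolve computes. A minimal sketch (the particular f and g below are arbitrary illustrative choices with a simple closed-form convolution):

<syntaxhighlight lang="python">
import numpy as np

# Riemann-sum approximation of (f*g)(t) for f, g supported on [0, inf).
# f(t) = exp(-t) and g = indicator of [0, 1) are illustrative choices only.
dt = 0.01
t = np.arange(0, 10, dt)
f = np.exp(-t)
g = np.where(t < 1, 1.0, 0.0)

# np.convolve evaluates the discrete sum; multiplying by dt approximates the integral
approx = np.convolve(f, g)[:len(t)] * dt

# closed form of (f*g)(t) for these particular f and g
exact = np.where(t < 1, 1 - np.exp(-t), np.exp(-(t - 1)) - np.exp(-t))
print(np.max(np.abs(approx - exact)))  # small discretization error, on the order of dt
</syntaxhighlight>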

= Notation =

A common engineering notational convention is:

{{cite book|last1=Smith|first1=Stephen W|title=The Scientist and Engineer's Guide to Digital Signal Processing|date=1997|publisher=California Technical Publishing|isbn=0-9660176-3-3|edition=1|chapter-url=https://dspguide.com/ch13/2.htm|access-date=22 April 2016|chapter=13.Convolution}}

: f(t) * g(t) \mathrel{:=} \underbrace{\int_{-\infty}^\infty f(\tau) g(t - \tau)\, d\tau}_{(f * g )(t)},

which has to be interpreted carefully to avoid confusion. For instance, f(t) * g(t-t_0) is equivalent to (f*g)(t-t_0), but f(t-t_0) * g(t-t_0) is in fact equivalent to (f * g)(t-2t_0).{{cite book|last1=Irwin|first1=J. David|author-link=J. David Irwin|title=The Industrial Electronics Handbook|date=1997|publisher=CRC Press|location=Boca Raton, FL|isbn=0-8493-8343-9|page=75|edition=1|chapter=4.3}}
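For finitely supported sequences, where delaying a signal corresponds to prepending zeros, the (f*g)(t-2t_0) pitfall can be checked numerically. A minimal sketch with arbitrary test data:

<syntaxhighlight lang="python">
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 4.0, 2.0])
k = 3  # plays the role of t_0 (an arbitrary integer delay)

delay = lambda x, k: np.concatenate((np.zeros(k), x))  # x(t) -> x(t - k) for causal sequences

lhs = np.convolve(delay(f, k), delay(g, k))   # f(t - t0) * g(t - t0)
rhs = delay(np.convolve(f, g), 2 * k)         # (f * g)(t - 2 t0)
print(np.allclose(lhs, rhs))                  # True
</syntaxhighlight>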

= Relations with other transforms =

Given two functions f(t) and g(t) with bilateral Laplace transforms (two-sided Laplace transform)

: F(s) = \int_{-\infty}^\infty e^{-su} \ f(u) \ \text{d}u

and

: G(s) = \int_{-\infty}^\infty e^{-sv} \ g(v) \ \text{d}v

respectively, the convolution operation (f * g)(t) can be defined as the inverse Laplace transform of the product of F(s) and G(s) .{{cite web |last1=Differential Equations (Spring 2010) |first1=MIT 18.03 |title=Lecture 21: Convolution Formula |url=https://ocw.mit.edu/courses/mathematics/18-03-differential-equations-spring-2010/video-lectures/lecture-21-convolution-formula/ |website=MIT Open Courseware |publisher=MIT |access-date=22 December 2021}}{{cite web |title=18.03SC Differential Equations Fall 2011 |url=https://ocw.mit.edu/courses/mathematics/18-03sc-differential-equations-fall-2011/unit-iii-fourier-series-and-laplace-transform/transfer-system-and-weight-functions-greens-formula/MIT18_03SCF11_s30_5text.pdf |archive-url=https://web.archive.org/web/20150906102242/https://ocw.mit.edu/courses/mathematics/18-03sc-differential-equations-fall-2011/unit-iii-fourier-series-and-laplace-transform/transfer-system-and-weight-functions-greens-formula/MIT18_03SCF11_s30_5text.pdf |archive-date=2015-09-06 |url-status=live |website=Green's Formula, Laplace Transform of Convolution}} More precisely,

:

\begin{align}

F(s) \cdot G(s) &= \int_{-\infty}^\infty e^{-su} \ f(u) \ \text{d}u \cdot \int_{-\infty}^\infty e^{-sv} \ g(v) \ \text{d}v \\

&= \int_{-\infty}^\infty \int_{-\infty}^\infty e^{-s(u + v)} \ f(u) \ g(v) \ \text{d}u \ \text{d}v

\end{align}

Let t = u + v , then

:

\begin{align}

F(s) \cdot G(s) &= \int_{-\infty}^\infty \int_{-\infty}^\infty e^{-st} \ f(u) \ g(t - u) \ \text{d}u \ \text{d}t \\

&= \int_{-\infty}^\infty e^{-st} \underbrace{\int_{-\infty}^\infty f(u) \ g(t - u) \ \text{d}u}_{(f * g)(t)} \ \text{d}t \\

&= \int_{-\infty}^\infty e^{-st} (f * g)(t) \ \text{d}t.

\end{align}

Note that F(s) \cdot G(s) is the bilateral Laplace transform of (f * g)(t) . A similar derivation can be done using the unilateral Laplace transform (one-sided Laplace transform).
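For concrete causal functions (zero for negative arguments), where the bilateral and unilateral transforms coincide, the identity can be spot-checked symbolically. A minimal SymPy sketch with illustrative decay rates 2 and 5; it assumes SymPy evaluates these standard transforms in closed form:

<syntaxhighlight lang="python">
import sympy as sp

t, tau, u, s = sp.symbols('t tau u s', positive=True)

f = sp.exp(-2 * u)   # f(u) = e^{-2u}, g(v) = e^{-5v}, both zero for negative arguments
g = sp.exp(-5 * u)

F = sp.laplace_transform(f, u, s, noconds=True)   # 1/(s + 2)
G = sp.laplace_transform(g, u, s, noconds=True)   # 1/(s + 5)

# convolution of causal functions: integrate over [0, t]
conv = sp.integrate(f.subs(u, tau) * g.subs(u, t - tau), (tau, 0, t))

# the transform of the convolution should equal the product of the transforms
print(sp.simplify(sp.laplace_transform(conv, t, s, noconds=True) - F * G))  # expected: 0
</syntaxhighlight>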

The convolution operation also describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created. The existing ones are only modified (amplitude and/or phase). In other words, the output transform is the pointwise product of the input transform with a third transform (known as a transfer function). See Convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms.

Visual explanation

class="wikitable"
{{ordered list

| list_style = margin-left:1.6em;
|Express each function in terms of a dummy variable \tau.
|Reflect one of the functions: g(\tau) \rightarrow g(-\tau).
|Add an offset of the independent variable, t, which allows g(-\tau) to slide along the \tau-axis: if {{mvar|t}} is positive, then g(t-\tau) equals g(-\tau) shifted along the \tau-axis toward the right (toward +\infty) by t; if t is negative, it equals g(-\tau) shifted toward the left (toward -\infty) by |t|.
|Start t at -\infty and slide it all the way to +\infty. Wherever the two functions intersect, find the integral of their product. In other words, at time t, compute the area under the function f(\tau) weighted by the weighting function g(t - \tau).

}}

The resulting waveform (not shown here) is the convolution of functions f and g.

If f(t) is a unit impulse, the result of this process is simply g(t). Formally:

: \int_{-\infty}^\infty \delta(\tau) g(t - \tau)\, d\tau = g(t)

| File:Convolution3.svg

In this example, the red-colored "pulse", \ g(\tau), is an even function (\ g(-\tau) = g(\tau)\ ), so convolution is equivalent to correlation. A snapshot of this "movie" shows functions g(t - \tau) and f(\tau) (in blue) for some value of parameter t, which is arbitrarily defined as the distance along the \tau axis from the point \tau = 0 to the center of the red pulse. The amount of yellow is the area of the product f(\tau) \cdot g(t - \tau), computed by the convolution/correlation integral. The movie is created by continuously changing t and recomputing the integral. The result (shown in black) is a function of t, but is plotted on the same axis as \tau, for convenience and comparison.

| File:Convolution of box signal with itself2.gif

In this depiction, f(\tau) could represent the response of a resistor-capacitor circuit to a narrow pulse that occurs at \tau = 0. In other words, if g(\tau) = \delta(\tau), the result of convolution is just f(t). But when g(\tau) is the wider pulse (in red), the response is a "smeared" version of f(t). It begins at t = -0.5, because we defined t as the distance from the \tau = 0 axis to the center of the wide pulse (instead of the leading edge).

| File:Convolution of spiky function with box2.gif

Historical developments

One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754 (Dominguez-Torres, p. 2).

Also, an expression of the type:

:\int f(u)\cdot g(x - u) \, du

was used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on differences and series, which is the last of three volumes of the encyclopedic series: {{Lang|fr|Traité du calcul différentiel et du calcul intégral}}, Chez Courcier, Paris, 1797–1800 (Dominguez-Torres, p. 4). Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1950s or 1960s. Prior to that it was sometimes known as Faltung (which means folding in German), composition product, superposition integral, and Carson's integral.

{{Citation

| chapter = Early work on imaging theory in radio astronomy

| author = R. N. Bracewell

| editor = W. T. Sullivan

| title = The Early Years of Radio Astronomy: Reflections Fifty Years After Jansky's Discovery

| publisher = Cambridge University Press

| year = 2005

| isbn = 978-0-521-61602-7

| page = 172

| chapter-url = https://books.google.com/books?id=v2SqL0zCrwcC&pg=PA172

}} Yet it appears as early as 1903, though the definition is rather unfamiliar in older uses.

{{Citation

| title = The algebra of invariants

| author = John Hilton Grace and Alfred Young

| publisher = Cambridge University Press

| year = 1903

| page = 40

| url = https://books.google.com/books?id=NIe4AAAAIAAJ&pg=PA40

}}

{{Citation

| title = Algebraic invariants

| author = Leonard Eugene Dickson

| publisher = J. Wiley

| year = 1914

| page = 85

| isbn = 978-1-4297-0042-9

| url = https://books.google.com/books?id=LRGoAAAAIAAJ&pg=PA85

}}

The operation:

:\int_0^t \varphi(s)\psi(t - s) \, ds,\quad 0 \le t < \infty,

is a particular case of composition products considered by the Italian mathematician Vito Volterra in 1913.

According to Lothar von Wolfersdorf (2000), "Einige Klassen quadratischer Integralgleichungen", Sitzungsberichte der Sächsischen Akademie der Wissenschaften zu Leipzig, Mathematisch-naturwissenschaftliche Klasse, volume 128, number 2, pp. 6–7, the source is Volterra, Vito (1913), Leçons sur les fonctions de lignes, Gauthier-Villars, Paris.

Circular convolution

{{Main article|Circular convolution}}

When a function g_T is periodic, with period T, then for functions, f, such that f * g_T exists, the convolution is also periodic and identical to:

:(f * g_T)(t) \equiv \int_{t_0}^{t_0+T} \left[\sum_{k=-\infty}^\infty f(\tau + kT)\right] g_T(t - \tau)\, d\tau,

where t_0 is an arbitrary choice. The summation is called a periodic summation of the function f.

When g_T is a periodic summation of another function, g, then f*g_T is known as a circular or cyclic convolution of f and g.

And if the periodic summation above is replaced by f_T, the operation is called a periodic convolution of f_T and g_T.

Discrete convolution

File:2D Convolution Animation.gif

For complex-valued functions f and g defined on the set \Z of integers, the discrete convolution of f and g is given by:{{harvnb|Damelin|Miller|2011|p=219}}

:(f * g)[n] = \sum_{m=-\infty}^\infty f[m] g[n - m],

or equivalently (see commutativity) by:

:(f * g)[n] = \sum_{m=-\infty}^\infty f[n-m] g[m].

The convolution of two finite sequences is defined by extending the sequences to finitely supported functions on the set of integers. When the sequences are the coefficients of two polynomials, then the coefficients of the ordinary product of the two polynomials are the convolution of the original two sequences. This is known as the Cauchy product of the coefficients of the sequences.

Thus when {{mvar|g}} has finite support in the set \{-M,-M+1,\dots,M-1,M\} (representing, for instance, a finite impulse response), a finite summation may be used:{{cite book |last1=Press |first1=William H. |last2=Flannery |first2=Brian P. |last3=Teukolsky |first3=Saul A. |last4=Vetterling |first4=William T. |title=Numerical Recipes in Pascal |year=1989 |publisher=Cambridge University Press |isbn=0-521-37516-9 |page=[https://archive.org/details/numericalrecipes0000unse/page/450 450] |url=https://archive.org/details/numericalrecipes0000unse/page/450}}

:(f * g)[n]=\sum_{m=-M}^M f[n-m]g[m].
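A direct evaluation of the defining sum can be compared with library routines; the same array of numbers also gives the coefficients of the product polynomial, illustrating the Cauchy-product remark above. A minimal sketch with arbitrary sample sequences:

<syntaxhighlight lang="python">
import numpy as np

f = [1, 2, 3]
g = [4, 5, 6, 7]

# direct evaluation of (f*g)[n] = sum_m f[m] g[n-m] over the indices where both factors exist
direct = [sum(f[m] * g[n - m] for m in range(len(f)) if 0 <= n - m < len(g))
          for n in range(len(f) + len(g) - 1)]

print(direct)                                   # [4, 13, 28, 34, 32, 21]
print(np.convolve(f, g))                        # same values
print(np.polynomial.polynomial.polymul(f, g))   # coefficients of (1 + 2x + 3x^2)(4 + 5x + 6x^2 + 7x^3)
</syntaxhighlight>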

= Circular discrete convolution =

When a function g_{_N} is periodic, with period N, then for functions, f, such that f*g_{_N} exists, the convolution is also periodic and identical to:

:(f * g_{_N})[n] \equiv \sum_{m=0}^{N-1} \left(\sum_{k=-\infty}^\infty {f}[m + kN]\right) g_{_N}[n - m].

The summation on k is called a periodic summation of the function f.

If g_{_N} is a periodic summation of another function, g, then f*g_{_N} is known as a circular convolution of f and g.

When the non-zero durations of both f and g are limited to the interval [0,N-1],  f*g_{_N} reduces to these common forms:

{{Equation box 1|title=

|indent=: |cellpadding= 0 |border= 0 |background colour=white

|equation={{NumBlk2|

|\begin{align}

\left(f * g_N\right)[n] &= \sum_{m=0}^{N-1} f[m]g_N[n - m] \\

&= \sum_{m=0}^n f[m]g[n - m] + \sum_{m=n+1}^{N-1} f[m]g[N + n - m] \\[2pt]

&= \sum_{m=0}^{N-1} f[m]g[(n - m)_\bmod{N}] \\[2pt]

&\triangleq \left(f *_N g\right)[n]

\end{align}        

|Eq.1}}}}

The notation f *_N g for cyclic convolution denotes convolution over the cyclic group of integers modulo N.

Circular convolution arises most often in the context of fast convolution with a fast Fourier transform (FFT) algorithm.
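Eq.1 can be evaluated directly with indices reduced modulo N and compared with the result obtained through the discrete Fourier transform, for which circular convolution corresponds to pointwise multiplication. A minimal sketch with arbitrary data:

<syntaxhighlight lang="python">
import numpy as np

N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# direct sum with indices reduced modulo N (Eq.1)
circ = np.array([sum(f[m] * g[(n - m) % N] for m in range(N)) for n in range(N)])

# same result via the DFT: circular convolution <-> pointwise product of DFTs
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(circ, via_fft))  # True
</syntaxhighlight>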

= Fast convolution algorithms =

In many situations, discrete convolutions can be converted to circular convolutions so that fast transforms with a convolution property can be used to implement the computation. For example, convolution of digit sequences is the kernel operation in multiplication of multi-digit numbers, which can therefore be efficiently implemented with transform techniques ({{harvnb|Knuth|1997|loc=§4.3.3.C}}; {{harvnb|von zur Gathen|Gerhard|2003|loc=§8.2}}).

{{EquationNote|Eq.1}} requires {{mvar|N}} arithmetic operations per output value and {{math|''N''<sup>2</sup>}} operations for {{mvar|N}} outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to reduce the cost of the convolution to O({{mvar|N}} log {{mvar|N}}) complexity.

The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output. Other fast convolution algorithms, such as the Schönhage–Strassen algorithm or the Mersenne transform,{{cite journal|last=Rader|first=C.M.|title=Discrete Convolutions via Mersenne Transforms|journal=IEEE Transactions on Computers|date=December 1972|volume=21|issue=12|pages=1269–1273|doi=10.1109/T-C.1972.223497|s2cid=1939809}} use fast Fourier transforms in other rings. The Winograd method is used as an alternative to the FFT.{{Cite book |last=Winograd |first=Shmuel |url=https://epubs.siam.org/doi/book/10.1137/1.9781611970364 |title=Arithmetic Complexity of Computations |date=January 1980 |publisher=Society for Industrial and Applied Mathematics |isbn=978-0-89871-163-9 |language=en |doi=10.1137/1.9781611970364}} It significantly speeds up 1D,{{Cite journal |last1=Lyakhov |first1=P. A. |last2=Nagornov |first2=N. N. |last3=Semyonova |first3=N. F. |last4=Abdulsalyamova |first4=A. S. |date=June 2023 |title=Reducing the Computational Complexity of Image Processing Using Wavelet Transform Based on the Winograd Method |url=https://link.springer.com/10.1134/S1054661823020074 |journal=Pattern Recognition and Image Analysis |language=en |volume=33 |issue=2 |pages=184–191 |doi=10.1134/S1054661823020074 |s2cid=259310351 |issn=1054-6618}} 2D,{{Cite journal |last1=Wu |first1=Di |last2=Fan |first2=Xitian |last3=Cao |first3=Wei |last4=Wang |first4=Lingli |date=May 2021 |title=SWM: A High-Performance Sparse-Winograd Matrix Multiplication CNN Accelerator |url=https://ieeexplore.ieee.org/document/9373543 |journal=IEEE Transactions on Very Large Scale Integration (VLSI) Systems |volume=29 |issue=5 |pages=936–949 |doi=10.1109/TVLSI.2021.3060041 |s2cid=233433757 |issn=1063-8210}} and 3D{{Cite journal |last1=Mittal |first1=Sparsh |last2=Vibhu |date=May 2021 |title=A survey of accelerator architectures for 3D convolution neural networks |url=https://linkinghub.elsevier.com/retrieve/pii/S1383762121000400 |journal=Journal of Systems Architecture |language=en |volume=115 |pages=102041 |doi=10.1016/j.sysarc.2021.102041|s2cid=233917781 }} convolution.
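The zero-extension technique mentioned above is short to write down: padding both sequences to at least the full output length makes the circular convolution computed by the FFT coincide with the ordinary (linear) convolution. A minimal sketch with arbitrary data:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(1000)
g = rng.standard_normal(300)

# zero-extend to the full output length so the circular convolution equals the linear one
n = len(f) + len(g) - 1
fast = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)

print(np.allclose(fast, np.convolve(f, g)))  # True, computed in O(n log n) operations
</syntaxhighlight>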

If one sequence is much longer than the other, zero-extension of the shorter sequence and fast circular convolution is not the most computationally efficient method available.{{cite book|editor-last=Madisetti |editor-first=Vijay K. |chapter=Fast Convolution and Filtering |first1=Ivan W. |last1=Selesnick |first2=C. Sidney |last2=Burrus |title=Digital Signal Processing Handbook |year=1999 |publisher=CRC Press |isbn=978-1-4200-4563-5 |page=Section 8}} Instead, decomposing the longer sequence into blocks and convolving each block allows for faster algorithms such as the overlap–save method and overlap–add method.{{cite web|last=Juang|first=B.H.|title=Lecture 21: Block Convolution|url=https://users.ece.gatech.edu/~juang/4270/BHJ4270-21.pdf |archive-url=https://web.archive.org/web/20040729204137/https://users.ece.gatech.edu/~juang/4270/BHJ4270-21.pdf |archive-date=2004-07-29 |url-status=live|publisher=EECS at the Georgia Institute of Technology|access-date=17 May 2013}} A hybrid convolution method that combines block and FIR algorithms allows for a zero input-output latency that is useful for real-time convolution computations.{{cite journal|last=Gardner|first=William G.|title=Efficient Convolution without Input/Output Delay|journal=Audio Engineering Society Convention 97|date=November 1994|series=Paper 3897|url=https://cs.ust.hk/mjg_lib/bibs/DPSu/DPSu.Files/Ga95.PDF |archive-url=https://web.archive.org/web/20150408211312/https://cs.ust.hk/mjg_lib/bibs/DPSu/DPSu.Files/Ga95.PDF |archive-date=2015-04-08 |url-status=live|access-date=17 May 2013}}
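A minimal sketch of the overlap–add idea, processing a long signal in blocks against a short FIR filter; the block length and the test data are arbitrary choices, not taken from the cited sources:

<syntaxhighlight lang="python">
import numpy as np

def overlap_add(x, h, block_len=128):
    """Linear convolution of a long signal x with a short filter h via overlap-add."""
    n_fft = block_len + len(h) - 1          # each block's linear convolution fits in n_fft samples
    H = np.fft.rfft(h, n_fft)               # filter spectrum, reused for every block
    y = np.zeros(len(x) + len(h) - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        seg = np.fft.irfft(np.fft.rfft(block, n_fft) * H, n_fft)
        y[start:start + n_fft] += seg[:min(n_fft, len(y) - start)]   # overlapping tails add up
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(10000)
h = rng.standard_normal(64)
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))  # True
</syntaxhighlight>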

Domain of definition

The convolution of two complex-valued functions on {{math|'''R'''<sup>''d''</sup>}} is itself a complex-valued function on {{math|'''R'''<sup>''d''</sup>}}, defined by:

:(f * g )(x) = \int_{\mathbf{R}^d} f(y)g(x-y)\,dy = \int_{\mathbf{R}^d} f(x-y)g(y)\,dy,

and is well-defined only if {{mvar|f}} and {{mvar|g}} decay sufficiently rapidly at infinity in order for the integral to exist. Conditions for the existence of the convolution may be tricky, since a blow-up in {{mvar|g}} at infinity can be easily offset by sufficiently rapid decay in {{mvar|f}}. The question of existence thus may involve different conditions on {{mvar|f}} and {{mvar|g}}:

= Compactly supported functions =

If {{mvar|f}} and {{mvar|g}} are compactly supported continuous functions, then their convolution exists, and is also compactly supported and continuous {{harv|Hörmander|1983|loc=Chapter 1}}. More generally, if either function (say {{mvar|f}}) is compactly supported and the other is locally integrable, then the convolution {{math|''f'' ∗ ''g''}} is well-defined and continuous.

Convolution of {{mvar|f}} and {{mvar|g}} is also well defined when both functions are locally square integrable on {{math|'''R'''}} and supported on an interval of the form {{math|[''a'', +∞)}} (or both supported on {{math|(−∞, ''a'']}}).

= Integrable functions =

The convolution of {{mvar|f}} and {{mvar|g}} exists if {{mvar|f}} and {{mvar|g}} are both Lebesgue integrable functions in {{math|''L''<sup>1</sup>('''R'''<sup>''d''</sup>)}}, and in this case {{math|''f'' ∗ ''g''}} is also integrable {{harv|Stein|Weiss|1971|loc=Theorem 1.3}}. This is a consequence of Tonelli's theorem. This is also true for sequences in {{math|ℓ<sup>1</sup>}} under the discrete convolution, or more generally for the convolution on any group.

Likewise, if {{math|''f'' ∈ ''L''<sup>1</sup>('''R'''<sup>''d''</sup>)}}  and  {{math|''g'' ∈ ''L''<sup>''p''</sup>('''R'''<sup>''d''</sup>)}}  where {{math|1 ≤ ''p'' ≤ ∞}},  then  {{math|''f'' ∗ ''g'' ∈ ''L''<sup>''p''</sup>('''R'''<sup>''d''</sup>)}},  and

:\|{f}* g\|_p\le \|f\|_1\|g\|_p.

In the particular case {{math|p {{=}} 1}}, this shows that {{math|''L''<sup>1</sup>}} is a Banach algebra under the convolution (and equality of the two sides holds if {{mvar|f}} and {{mvar|g}} are non-negative almost everywhere).

More generally, Young's inequality implies that the convolution is a continuous bilinear map between suitable {{math|''L''<sup>''p''</sup>}} spaces. Specifically, if {{math|1 ≤ ''p'', ''q'', ''r'' ≤ ∞}} satisfy:

:\frac{1}{p}+\frac{1}{q}=\frac{1}{r}+1,

then

:\left\Vert f*g\right\Vert_r\le\left\Vert f\right\Vert_p\left\Vert g\right\Vert_q,\quad f\in L^p,\ g\in L^q,

so that the convolution is a continuous bilinear mapping from {{math|''L''<sup>''p''</sup> × ''L''<sup>''q''</sup>}} to {{math|''L''<sup>''r''</sup>}}.
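The same inequality holds for sequences on \Z with counting measure (as noted below for convolution on other groups), which allows a quick numerical spot-check. A minimal sketch; the exponent triples are arbitrary examples satisfying 1/p + 1/q = 1/r + 1:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(40)
g = rng.standard_normal(60)

def lp_norm(x, p):
    # l^p norm of a finitely supported sequence (p = inf gives the supremum norm)
    return np.abs(x).max() if np.isinf(p) else (np.abs(x) ** p).sum() ** (1 / p)

# exponent triples with 1/p + 1/q = 1/r + 1
for p, q, r in [(1, 1, 1), (1, 2, 2), (2, 2, np.inf), (1.5, 1.5, 3)]:
    holds = lp_norm(np.convolve(f, g), r) <= lp_norm(f, p) * lp_norm(g, q)
    print(p, q, r, holds)   # True in every case
</syntaxhighlight>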

The Young inequality for convolution is also true in other contexts (circle group, convolution on {{math|'''Z'''}}). The preceding inequality is not sharp on the real line: when {{math|1 < ''p'', ''q'', ''r'' < ∞}}, there exists a constant {{math|''B''<sub>''p'',''q''</sub> < 1}} such that:

:\left\Vert f*g\right\Vert_r\le B_{p,q}\left\Vert f\right\Vert_p\left\Vert g\right\Vert_q,\quad f\in L^p,\ g\in L^q.

The optimal value of {{math|''B''<sub>''p'',''q''</sub>}} was discovered in 1975{{cite journal

| first1=William | last1=Beckner | authorlink1=William Beckner (mathematician)

| year=1975

| title=Inequalities in Fourier analysis

| journal=Annals of Mathematics |series=Second Series

| volume=102

| issue=1 | pages=159–182

| doi=10.2307/1970980| jstor=1970980}} and independently in 1976,{{cite journal

| first1=Herm Jan | last1=Brascamp

| first2=Elliott H. | last2=Lieb | authorlink2=Elliott H. Lieb

| year=1976

| title=Best constants in Young's inequality, its converse, and its generalization to more than three functions

| journal=Advances in Mathematics

| volume=20

| issue=2

| pages=151–173

| doi=10.1016/0001-8708(76)90184-5 | doi-access=free}} see Brascamp–Lieb inequality.

A stronger estimate is true provided {{math| 1 < p, q, r < ∞}}:

:\|f * g\|_r\le C_{p,q}\|f\|_p\|g\|_{q,w}

where \|g\|_{q,w} is the weak L^q norm (see {{slink|Lp space|Weak Lp}}). Convolution also defines a bilinear continuous map L^{p,w}\times L^{q,w}\to L^{r,w} for 1< p,q,r<\infty, owing to the weak Young inequality:{{harvnb|Reed|Simon|1975|loc=IX.4}}

:\|f * g\|_{r,w}\le C_{p,q}\|f\|_{p,w}\|g\|_{q,w}.

= Functions of rapid decay =

In addition to compactly supported functions and integrable functions, functions that have sufficiently rapid decay at infinity can also be convolved. An important feature of the convolution is that if f and g both decay rapidly, then f ∗ g also decays rapidly. In particular, if f and g are rapidly decreasing functions, then so is the convolution f ∗ g. Combined with the fact that convolution commutes with differentiation (see #Properties), it follows that the class of Schwartz functions is closed under convolution {{harv|Stein|Weiss|1971|loc=Theorem 3.3}}.

= Distributions =

{{Main article|Distribution (mathematics)}}

If f is a smooth function that is compactly supported and g is a distribution, then f ∗ g is a smooth function defined by

:\int_{\mathbb{R}^d} {f}(y)g(x-y)\,dy = (f*g)(x) \in C^\infty(\mathbb{R}^d) .

More generally, it is possible to extend the definition of the convolution in a unique way with \varphi the same as f above, so that the associative law

:f* (g* \varphi) = (f* g)* \varphi

remains valid in the case where f is a distribution, and g a compactly supported distribution {{harv|Hörmander|1983|loc=§4.2}}.

= Measures =

The convolution of any two Borel measures μ and ν of bounded variation is the measure \mu*\nu defined by {{harv|Rudin|1962}}

:\int_{\mathbf{R}^d} f(x) \, d(\mu*\nu)(x) = \int_{\mathbf{R}^d}\int_{\mathbf{R}^d}f(x+y)\,d\mu(x)\,d\nu(y).

In particular,

: (\mu*\nu)(A) = \int_{\mathbf{R}^d\times\mathbf R^d}1_A(x+y)\, d(\mu\times\nu)(x,y),

where A\subset\mathbf R^d is a measurable set and 1_A is the indicator function of A.

This agrees with the convolution defined above when μ and ν are regarded as distributions, as well as the convolution of L1 functions when μ and ν are absolutely continuous with respect to the Lebesgue measure.

The convolution of measures also satisfies the following version of Young's inequality

:\|\mu* \nu\|\le \|\mu\|\|\nu\|

where the norm is the total variation of a measure. Because the space of measures of bounded variation is a Banach space, convolution of measures can be treated with standard methods of functional analysis that may not apply for the convolution of distributions.

Properties

= Algebraic properties =

{{See also|Convolution algebra}}

The convolution defines a product on the linear space of integrable functions. This product satisfies the following algebraic properties, which formally mean that the space of integrable functions with the product given by convolution is a commutative associative algebra without identity {{harv|Strichartz|1994|loc=§3.3}}. Other linear spaces of functions, such as the space of continuous functions of compact support, are closed under the convolution, and so also form commutative associative algebras.

; Commutativity: f * g = g * f Proof: By definition, (f * g)(t) = \int^\infty_{-\infty} f(\tau)g(t - \tau)\, d\tau. Changing the variable of integration to u = t - \tau, the result follows.

; Associativity: f * (g * h) = (f * g) * h Proof: This follows from using Fubini's theorem (i.e., double integrals can be evaluated as iterated integrals in either order).

; Distributivity: f * (g + h) = (f * g) + (f * h) Proof: This follows from linearity of the integral.

; Associativity with scalar multiplication: a (f * g) = (a f) * g for any real (or complex) number a.

; Multiplicative identity: No algebra of functions possesses an identity for the convolution. The lack of identity is typically not a major inconvenience, since most collections of functions on which the convolution is performed can be convolved with a delta distribution (a unit impulse centered at zero) or, at the very least (as is the case of L1) admit approximations to the identity. The linear space of compactly supported distributions does, however, admit an identity under the convolution. Specifically, f * \delta = f where δ is the delta distribution.

; Inverse element: Some distributions S have an inverse element S−1 for the convolution which then must satisfy S^{-1} * S = \delta from which an explicit formula for S−1 may be obtained.{{paragraph}}The set of invertible distributions forms an abelian group under the convolution.

; Complex conjugation: \overline{f * g} = \overline{f} * \overline{g}

; Time reversal: If  q(t) = r(t)*s(t),  then  q(-t) = r(-t)*s(-t).

Proof (using convolution theorem):

q(t) \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ Q(f) = R(f)S(f)

q(-t) \ \stackrel{\mathcal{F}}{\Longleftrightarrow}\ \ Q(-f) = R(-f)S(-f)

\begin{align}

q(-t) &= \mathcal{F}^{-1}\bigg\{R(-f)S(-f)\bigg\}\\

&= \mathcal{F}^{-1}\bigg\{R (-f)\bigg\} * \mathcal{F}^{-1}\bigg\{S(-f)\bigg\}\\

&= r(-t) * s(-t)

\end{align}

; Relationship with differentiation: (f * g)' = f' * g = f * g' Proof:

:

\begin{align}

(f * g)' & = \frac{d}{dt} \int^\infty_{-\infty} f(\tau) g(t - \tau) \, d\tau \\

& =\int^\infty_{-\infty} f(\tau) \frac{\partial}{\partial t} g(t - \tau) \, d\tau \\

& =\int^\infty_{-\infty} f(\tau) g'(t - \tau) \, d\tau = f* g'.

\end{align}

; Relationship with integration: If F(t) = \int^t_{-\infty} f(\tau) d\tau, and G(t) = \int^t_{-\infty} g(\tau) \, d\tau, then (F * g)(t) = (f * G)(t) = \int^t_{-\infty}(f * g)(\tau)\,d\tau.
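For finitely supported sequences, several of the identities above are straightforward to spot-check numerically. A minimal sketch with randomly chosen test data:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(5)
g = rng.standard_normal(6)
h = rng.standard_normal(6)          # same length as g so that g + h is defined
fc = f + 1j * rng.standard_normal(5)
gc = g + 1j * rng.standard_normal(6)
a = 2.5

conv = np.convolve
print(np.allclose(conv(f, g), conv(g, f)))                        # commutativity
print(np.allclose(conv(f, conv(g, h)), conv(conv(f, g), h)))      # associativity
print(np.allclose(conv(f, g + h), conv(f, g) + conv(f, h)))       # distributivity
print(np.allclose(a * conv(f, g), conv(a * f, g)))                # associativity with scalar multiplication
print(np.allclose(np.conj(conv(fc, gc)), conv(np.conj(fc), np.conj(gc))))  # complex conjugation
</syntaxhighlight>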

= Integration =

If f and g are integrable functions, then the integral of their convolution on the whole space is simply obtained as the product of their integrals:{{Cite web|last=Weisstein|first=Eric W.|title=Convolution|url=https://mathworld.wolfram.com/Convolution.html|access-date=2021-09-22|website=mathworld.wolfram.com|language=en}}

: \int_{\mathbf{R}^d}(f * g)(x) \, dx=\left(\int_{\mathbf{R}^d}f(x) \, dx\right) \left(\int_{\mathbf{R}^d}g(x) \, dx\right).

This follows from Fubini's theorem. The same result holds if f and g are only assumed to be nonnegative measurable functions, by Tonelli's theorem.
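The discrete analogue is immediate to verify: the sum of a convolution of finitely supported sequences equals the product of their sums. A minimal sketch:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(50)
g = rng.standard_normal(80)
print(np.isclose(np.convolve(f, g).sum(), f.sum() * g.sum()))  # True
</syntaxhighlight>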

= Differentiation =

In the one-variable case,

: \frac{d}{dx}(f * g) = \frac{df}{dx} * g = f * \frac{dg}{dx}

where \frac{d}{dx} is the derivative. More generally, in the case of functions of several variables, an analogous formula holds with the partial derivative:

: \frac{\partial}{\partial x_i}(f * g) = \frac{\partial f}{\partial x_i} * g = f * \frac{\partial g}{\partial x_i}.

A particular consequence of this is that the convolution can be viewed as a "smoothing" operation: the convolution of f and g is differentiable as many times as f and g are in total.

These identities hold for example under the condition that f and g are absolutely integrable and at least one of them has an absolutely integrable (L1) weak derivative, as a consequence of Young's convolution inequality. For instance, when f is continuously differentiable with compact support, and g is an arbitrary locally integrable function,

: \frac{d}{dx}(f* g) = \frac{df}{dx} * g.

These identities also hold much more broadly in the sense of tempered distributions if one of f or g is a rapidly decreasing tempered distribution, a compactly supported tempered distribution or a Schwartz function and the other is a tempered distribution. On the other hand, two positive integrable and infinitely differentiable functions may have a nowhere continuous convolution.

In the discrete case, the difference operator D f(n) = f(n + 1) − f(n) satisfies an analogous relationship:

: D(f * g) = (Df) * g = f * (Dg).
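Treating finite arrays as finitely supported sequences on \Z (so the forward difference picks up boundary terms at both ends), the identity can be checked numerically. A minimal sketch with arbitrary data:

<syntaxhighlight lang="python">
import numpy as np

def D(x):
    # forward difference of a finitely supported sequence; zero-padding keeps the boundary terms
    return np.diff(np.concatenate(([0.0], x, [0.0])))

f = np.array([1.0, 2.0, 0.5])
g = np.array([0.3, -1.0, 2.0, 4.0])

lhs = D(np.convolve(f, g))
print(np.allclose(lhs, np.convolve(D(f), g)))  # D(f*g) = (Df)*g
print(np.allclose(lhs, np.convolve(f, D(g))))  # D(f*g) = f*(Dg)
</syntaxhighlight>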

= Convolution theorem =

The convolution theorem states that{{cite web |last1=Weisstein |first1=Eric W |title=From MathWorld--A Wolfram Web Resource |url=https://mathworld.wolfram.com/ConvolutionTheorem.html}}

: \mathcal{F}\{f * g\} = \mathcal{F}\{f\}\cdot \mathcal{F}\{g\}

where \mathcal{F}\{f\} denotes the Fourier transform of f.
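A symbolic spot-check on Gaussians, for which the convolution and all transforms have closed forms; a minimal SymPy sketch (it assumes SymPy evaluates these standard Gaussian transforms in closed form):

<syntaxhighlight lang="python">
import sympy as sp

x, t, tau, k = sp.symbols('x t tau k', real=True)

f = sp.exp(-sp.pi * x**2)   # its own Fourier transform under SymPy's convention
g = sp.exp(-sp.pi * x**2)

F = sp.fourier_transform(f, x, k)
G = sp.fourier_transform(g, x, k)

# direct convolution (f*g)(t), then its Fourier transform
conv = sp.integrate(f.subs(x, tau) * g.subs(x, t - tau), (tau, -sp.oo, sp.oo))
FT_conv = sp.fourier_transform(conv, t, k)

print(sp.simplify(FT_conv - F * G))  # expected: 0
</syntaxhighlight>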

== Convolution in other types of transformations ==

Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform.

== Convolution on matrices ==

If \mathcal W is the Fourier transform matrix, then

: \mathcal W\left(C^{(1)}x \ast C^{(2)}y\right) = \left(\mathcal W C^{(1)} \bull \mathcal W C^{(2)}\right)(x \otimes y) = \mathcal W C^{(1)}x \circ \mathcal W C^{(2)}y,

where \bull is face-splitting product,{{Cite journal|last=Slyusar|first=V. I.|date= December 27, 1996|title=End products in matrices in radar applications. |url=https://slyusar.kiev.ua/en/IZV_1998_3.pdf |archive-url=https://web.archive.org/web/20130811122444/https://slyusar.kiev.ua/en/IZV_1998_3.pdf |archive-date=2013-08-11 |url-status=live|journal=Radioelectronics and Communications Systems |volume=41 |issue=3|pages=50–53}}{{Cite journal|last=Slyusar|first=V. I.|date=1997-05-20|title=Analytical model of the digital antenna array on a basis of face-splitting matrix products. |url=https://slyusar.kiev.ua/ICATT97.pdf |archive-url=https://web.archive.org/web/20130811112059/https://slyusar.kiev.ua/ICATT97.pdf |archive-date=2013-08-11 |url-status=live|journal=Proc. ICATT-97, Kyiv|pages=108–109}}{{Cite journal|last=Slyusar|first=V. I.|date=1997-09-15|title=New operations of matrices product for applications of radars|url=https://slyusar.kiev.ua/DIPED_1997.pdf |archive-url=https://web.archive.org/web/20130811113217/https://slyusar.kiev.ua/DIPED_1997.pdf |archive-date=2013-08-11 |url-status=live|journal=Proc. Direct and Inverse Problems of Electromagnetic and Acoustic Wave Theory (DIPED-97), Lviv.|pages=73–74}}{{Cite journal|last=Slyusar|first=V. I.|date=March 13, 1998|title=A Family of Face Products of Matrices and its Properties|url=https://slyusar.kiev.ua/FACE.pdf |archive-url=https://web.archive.org/web/20130811113935/https://slyusar.kiev.ua/FACE.pdf |archive-date=2013-08-11 |url-status=live|journal=Cybernetics and Systems Analysis C/C of Kibernetika I Sistemnyi Analiz.- 1999.|volume=35|issue=3|pages=379–384|doi=10.1007/BF02733426|s2cid=119661450}}{{Cite journal|last=Slyusar|first=V. I.|date=2003|title=Generalized face-products of matrices in models of digital antenna arrays with nonidentical channels|url=https://slyusar.kiev.ua/en/IZV_2003_10.pdf |archive-url=https://web.archive.org/web/20130811125643/https://slyusar.kiev.ua/en/IZV_2003_10.pdf |archive-date=2013-08-11 |url-status=live|journal=Radioelectronics and Communications Systems|volume=46|issue=10|pages=9–17}} \otimes denotes Kronecker product, \circ denotes Hadamard product (this result is an evolving of count sketch properties{{cite conference

| title = Fast and scalable polynomial kernels via explicit feature maps

| last1 = Ninh | first1 = Pham

| first2 = Rasmus | last2 = Pagh | author2-link = Rasmus Pagh

| date = 2013

| publisher = Association for Computing Machinery

| conference = SIGKDD international conference on Knowledge discovery and data mining

| doi = 10.1145/2487575.2487591

}}).

This can be generalized for appropriate matrices \mathbf{A},\mathbf{B}:

: \mathcal W\left((\mathbf{A}x) \ast (\mathbf{B}y)\right) = \left((\mathcal W \mathbf{A}) \bull (\mathcal W \mathbf{B})\right)(x \otimes y) = (\mathcal W \mathbf{A}x) \circ (\mathcal W \mathbf{B}y)

from the properties of the face-splitting product.

= Translational equivariance =

The convolution commutes with translations, meaning that

: \tau_x (f * g) = (\tau_x f) * g = f * (\tau_x g)

where τxf is the translation of the function f by x defined by

: (\tau_x f)(y) = f(y - x).

If f is a Schwartz function, then τxf is the convolution with a translated Dirac delta function, τxf = f ∗ τxδ. So translation invariance of the convolution of Schwartz functions is a consequence of the associativity of convolution.

Furthermore, under certain conditions, convolution is the most general translation invariant operation. Informally speaking, the following holds

: Suppose that S is a bounded linear operator acting on functions which commutes with translations: S(τxf) = τx(Sf) for all x. Then S is given as convolution with a function (or distribution) gS; that is, Sf = gS ∗ f.

Thus some translation invariant operations can be represented as convolution. Convolutions play an important role in the study of time-invariant systems, and especially LTI system theory. The representing function gS is the impulse response of the transformation S.

A more precise version of the theorem quoted above requires specifying the class of functions on which the convolution is defined, and also requires that S be a continuous linear operator with respect to the appropriate topology. It is known, for instance, that every translation invariant continuous linear operator on L1 is the convolution with a finite Borel measure. More generally, every translation invariant continuous linear operator on Lp for 1 ≤ p < ∞ is the convolution with a tempered distribution whose Fourier transform is bounded. To wit, they are all given by bounded Fourier multipliers.

Convolutions on groups

If G is a suitable group endowed with a measure λ, and if f and g are real or complex valued integrable functions on G, then we can define their convolution by

:(f * g)(x) = \int_G f(y) g\left(y^{-1}x\right)\,d\lambda(y).

It is not commutative in general. In typical cases of interest G is a locally compact Hausdorff topological group and λ is a (left-) Haar measure. In that case, unless G is unimodular, the convolution defined in this way is not the same as \int f\left(xy^{-1}\right)g(y) \, d\lambda(y). The preference of one over the other is made so that convolution with a fixed function g commutes with left translation in the group:

:L_h(f* g) = (L_hf)* g.

Furthermore, the convention is also required for consistency with the definition of the convolution of measures given below. However, with a right instead of a left Haar measure, the latter integral is preferred over the former.

On locally compact abelian groups, a version of the convolution theorem holds: the Fourier transform of a convolution is the pointwise product of the Fourier transforms. The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T):

:T {f}(x) = \frac{1}{2 \pi} \int_{\mathbf{T}} {f}(y) g( x - y) \, dy.

The operator T is compact. A direct calculation shows that its adjoint T* is convolution with

:\bar{g}(-y).

By the commutativity property cited above, T is normal: T* T = TT* . Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. Then S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis {hk} that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have

:h_k (x) = e^{ikx}, \quad k \in \mathbb{Z},\;

which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above.

A discrete example is a finite cyclic group of order n. Convolution operators are here represented by circulant matrices, and can be diagonalized by the discrete Fourier transform.
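The circulant picture is easy to exhibit numerically: the matrix of the convolution operator with a fixed g is circulant, and conjugating it by the DFT matrix produces a diagonal matrix whose entries are the DFT of g. A minimal sketch with arbitrary data:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import circulant

n = 8
rng = np.random.default_rng(1)
g = rng.standard_normal(n)

C = circulant(g)               # C @ f is the circular convolution of g and f
F = np.fft.fft(np.eye(n))      # DFT matrix, F[j, k] = exp(-2*pi*1j*j*k/n)

# F C F^{-1} is diagonal, with the DFT of g on the diagonal
D = F @ C @ np.linalg.inv(F)
print(np.allclose(D, np.diag(np.fft.fft(g))))  # True (up to round-off)
</syntaxhighlight>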

A similar result holds for compact groups (not necessarily abelian): the matrix coefficients of finite-dimensional unitary representations form an orthonormal basis in L2 by the Peter–Weyl theorem, and an analog of the convolution theorem continues to hold, along with many other aspects of harmonic analysis that depend on the Fourier transform.

Convolution of measures

Let G be a (multiplicatively written) topological group.

If μ and ν are Radon measures on G, then their convolution μ ∗ ν is defined as the pushforward measure of the group action and can be written as{{harv|Hewitt|Ross|1979|p=266}}

:(\mu * \nu)(E) = \iint 1_E(xy) \,d\mu(x) \,d\nu(y)

for each measurable subset E of G. The convolution is also a Radon measure, whose total variation satisfies

:\|\mu * \nu\| \le \left\|\mu\right\| \left\|\nu\right\|.

In the case when G is locally compact with (left-)Haar measure λ, and μ and ν are absolutely continuous with respect to λ, so that each has a density function, the convolution μ ∗ ν is also absolutely continuous, and its density function is just the convolution of the two separate density functions. In fact, if either measure is absolutely continuous with respect to the Haar measure, then so is their convolution {{harv|Hewitt|Ross|1979|loc=Theorem 19.18, p. 272}}.

If μ and ν are probability measures on the topological group {{nowrap|(R,+),}} then the convolution μ ∗ ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν.

Infimal convolution

In convex analysis, the infimal convolution of proper (not identically +\infty) convex functions f_1,\dots,f_m on \mathbb R^n is defined by:{{citation|author=R. Tyrrell Rockafellar|title=Convex analysis|publisher=Princeton University Press|year=1970}}

(f_1*\cdots*f_m)(x)=\inf \{ f_1(x_1)+\cdots+f_m(x_m) \mid x_1+\cdots+x_m = x\}.

It can be shown that the infimal convolution of convex functions is convex. Furthermore, it satisfies an identity analogous to that of the Fourier transform of a traditional convolution, with the role of the Fourier transform played instead by the Legendre transform:

\varphi^*(x) = \sup_y ( x\cdot y - \varphi(y)).

We have:

(f_1*\cdots *f_m)^*(x) = f_1^*(x) + \cdots + f_m^*(x).
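A brute-force numerical illustration on a grid, using two arbitrarily chosen quadratics for which both the infimal convolution and the conjugates are known in closed form; a minimal sketch:

<syntaxhighlight lang="python">
import numpy as np

# f1(x) = x^2 and f2(x) = 2x^2; analytically (f1 □ f2)(x) = 2x^2/3,
# which also equals the Legendre conjugate of f1*(y) + f2*(y) = y^2/4 + y^2/8.
xs = np.linspace(-3, 3, 1201)
f1 = xs ** 2
f2 = 2 * xs ** 2

def infimal_conv(f1, f2, xs):
    # (f1 □ f2)(x) = infimum over grid points x1 of f1(x1) + f2(x - x1)
    out = np.empty_like(xs)
    for i, x in enumerate(xs):
        shifted = np.interp(x - xs, xs, f2, left=np.inf, right=np.inf)
        out[i] = np.min(f1 + shifted)
    return out

approx = infimal_conv(f1, f2, xs)
exact = 2 * xs ** 2 / 3
interior = slice(200, 1001)   # avoid the edges, where the grid truncates the infimum
print(np.max(np.abs(approx[interior] - exact[interior])))  # small grid-discretization error
</syntaxhighlight>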

Bialgebras

Let (X, Δ, ∇, ε, η) be a bialgebra with comultiplication Δ, multiplication ∇, unit η, and counit ε. The convolution is a product defined on the endomorphism algebra End(X) as follows. Let φ, ψ ∈ End(X), that is, φ, ψ: X → X are functions that respect all algebraic structure of X; then the convolution φ ∗ ψ is defined as the composition

:X \mathrel{\xrightarrow{\Delta}} X \otimes X \mathrel{\xrightarrow{\phi\otimes\psi}} X \otimes X \mathrel{\xrightarrow{\nabla}} X.

The convolution appears notably in the definition of Hopf algebras {{harv|Kassel|1995|loc=§III.3}}. A bialgebra is a Hopf algebra if and only if it has an antipode: an endomorphism S such that

:S * \operatorname{id}_X = \operatorname{id}_X * S = \eta\circ\varepsilon.

Applications

[[File:Halftone, Gaussian Blur.jpg|thumb|Gaussian blur can be used to obtain a smooth grayscale digital image of a halftone print.]]

Convolution and related operations are found in many applications in science, engineering and mathematics.

  • Convolutional neural networks apply multiple cascaded convolution kernels with applications in machine vision and artificial intelligence.{{Cite journal|last1=Zhang|first1=Yingjie|last2=Soon|first2=Hong Geok|last3=Ye|first3=Dongsen|last4=Fuh|first4=Jerry Ying Hsi|last5=Zhu|first5=Kunpeng|date=September 2020|title=Powder-Bed Fusion Process Monitoring by Machine Vision With Hybrid Convolutional Neural Networks|url=https://ieeexplore.ieee.org/document/8913613|journal=IEEE Transactions on Industrial Informatics|volume=16|issue=9|pages=5769–5779|doi=10.1109/TII.2019.2956078|s2cid=213010088|issn=1941-0050}}{{Cite journal|last1=Chervyakov|first1=N.I.|last2=Lyakhov|first2=P.A.|last3=Deryabin|first3=M.A.|last4=Nagornov|first4=N.N.|last5=Valueva|first5=M.V.|last6=Valuev|first6=G.V.|date=September 2020|title=Residue Number System-Based Solution for Reducing the Hardware Cost of a Convolutional Neural Network|url=https://linkinghub.elsevier.com/retrieve/pii/S092523122030583X|journal=Neurocomputing|language=en|volume=407|pages=439–453|doi=10.1016/j.neucom.2020.04.018|s2cid=219470398|quote=Convolutional neural networks represent deep learning architectures that are currently used in a wide range of applications, including computer vision, speech recognition, time series analysis in finance, and many others.}} Though these are actually cross-correlations rather than convolutions in most cases.{{Cite journal|last=Atlas, Homma, and Marks|title=An Artificial Neural Network for Spatio-Temporal Bipolar Patterns: Application to Phoneme Classification|url=https://papers.nips.cc/paper/1987/file/98f13708210194c475687be6106a3b84-Paper.pdf |archive-url=https://web.archive.org/web/20210414091306/https://papers.nips.cc/paper/1987/file/98f13708210194c475687be6106a3b84-Paper.pdf |archive-date=2021-04-14 |url-status=live|journal=Neural Information Processing Systems (NIPS 1987)|volume=1}}
  • In non-neural-network-based image processing
  • In digital image processing convolutional filtering plays an important role in many important algorithms in edge detection and related processes (see Kernel (image processing))
  • In optics, an out-of-focus photograph is a convolution of the sharp image with a lens function. The photographic term for this is bokeh.
  • In image processing applications such as adding blurring.
  • In digital data processing
  • In analytical chemistry, Savitzky–Golay smoothing filters are used for the analysis of spectroscopic data. They can improve signal-to-noise ratio with minimal distortion of the spectra.
  • In statistics, a weighted moving average is a convolution.
  • In acoustics, reverberation is the convolution of the original sound with echoes from objects surrounding the sound source.
  • In digital signal processing, convolution is used to map the impulse response of a real room onto a digital audio signal.
  • In electronic music convolution is the imposition of a spectral or rhythmic structure on a sound. Often this envelope or structure is taken from another sound. The convolution of two signals is the filtering of one through the other.Zölzer, Udo, ed. (2002). DAFX:Digital Audio Effects, p.48–49. {{isbn|0471490784}}.
  • In electrical engineering, the convolution of one function (the input signal) with a second function (the impulse response) gives the output of a linear time-invariant system (LTI). At any given moment, the output is an accumulated effect of all the prior values of the input function, with the most recent values typically having the most influence (expressed as a multiplicative factor). The impulse response function provides that factor as a function of the elapsed time since each input value occurred.
  • In physics, wherever there is a linear system with a "superposition principle", a convolution operation makes an appearance. For instance, in spectroscopy line broadening due to the Doppler effect on its own gives a Gaussian spectral line shape and collision broadening alone gives a Lorentzian line shape. When both effects are operative, the line shape is a convolution of Gaussian and Lorentzian, a Voigt function.
  • In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is a sum of exponential decays from each delta pulse.
  • In computational fluid dynamics, the large eddy simulation (LES) turbulence model uses the convolution operation to lower the range of length scales necessary in computation thereby reducing computational cost.
  • In probability theory, the probability distribution of the sum of two independent random variables is the convolution of their individual distributions.
  • In kernel density estimation, a distribution is estimated from sample points by convolution with a kernel, such as an isotropic Gaussian.{{sfn|Diggle|1985}}
  • In radiotherapy treatment planning systems, most modern calculation codes apply a convolution-superposition algorithm.{{Clarify|date=May 2013}}
  • In structural reliability, the reliability index can be defined based on the convolution theorem.
  • The definition of the reliability index for limit state functions with nonnormal distributions can be established in terms of the joint distribution function. In fact, the joint distribution function can be obtained using convolution theory.{{sfn|Ghasemi|Nowak|2017}}
  • In Smoothed-particle hydrodynamics, simulations of fluid dynamics are calculated using particles, each with surrounding kernels. For any given particle i, some physical quantity A_i is calculated as a convolution of A_j with a weighting function, where j denotes the neighbors of particle i: those that are located within its kernel. The convolution is approximated as a summation over each neighbor.{{cite journal |last1=Monaghan |first1=J. J. |title=Smoothed particle hydrodynamics |journal=Annual Review of Astronomy and Astrophysics |date=1992 |volume=30 |pages=543–547 |doi=10.1146/annurev.aa.30.090192.002551 |bibcode=1992ARA&A..30..543M |url=https://ui.adsabs.harvard.edu/abs/1992ARA&A..30..543M |access-date=16 February 2021 |ref=1992ARA&A..30..543M}}
  • In Fractional calculus convolution is instrumental in various definitions of fractional integral and fractional derivative.

See also

Notes

{{notelist-ua}}

References

{{reflist}}

Further reading

  • {{citation | last1=Bracewell | first1=R. | title=The Fourier Transform and Its Applications | edition=2nd | publisher=McGraw–Hill | year=1986 | bibcode=1986ftia.book.....B | isbn=0-07-116043-4}}.

  • {{citation | last1=Damelin | first1=S. | last2=Miller | first2=W. | title=The Mathematics of Signal Processing | publisher=Cambridge University Press | isbn=978-1107601048 | year=2011}}
  • {{citation | last=Diggle | first=P. J. | s2cid=116746157 | title=A kernel method for smoothing point process data | journal=Journal of the Royal Statistical Society, Series C | volume = 34 | issue=2 | pages = 138–147 |doi=10.2307/2347366 | year=1985 | jstor=2347366}}
  • Dominguez-Torres, Alejandro (Nov 2, 2010). "Origin and history of convolution". 41 pgs. https://slideshare.net/Alexdfar/origin-adn-history-of-convolution. Cranfield, Bedford MK43 OAL, UK. Retrieved Mar 13, 2013.
  • {{Citation | last1=Ghasemi | first1=S. Hooman | last2=Nowak | first2=Andrzej S. | title=Reliability Index for Non-normal Distributions of Limit State Functions | doi=10.12989/sem.2017.62.3.365 | year=2017 | journal=Structural Engineering and Mechanics | volume=62 | issue=3 | pages=365–372}}
  • {{Citation | last1=Grinshpan | first1=A. Z. | title=An inequality for multiple convolutions with respect to Dirichlet probability measure | doi=10.1016/j.aam.2016.08.001 | year=2017 | journal=Advances in Applied Mathematics | volume=82 | issue=1 | pages=102–119 | doi-access=free}}

  • {{Citation | last1=Hewitt | first1=Edwin | last2=Ross | first2=Kenneth A. | title=Abstract harmonic analysis. Vol. I | publisher=Springer-Verlag | location=Berlin, New York | edition=2nd | series=Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] | isbn=978-3-540-09434-0 | mr=551496 | year=1979 | volume=115}}.
  • {{Citation | last1=Hewitt | first1=Edwin | last2=Ross | first2=Kenneth A. | title=Abstract harmonic analysis. Vol. II: Structure and analysis for compact groups. Analysis on locally compact Abelian groups | publisher=Springer-Verlag | location=Berlin, New York | series=Die Grundlehren der mathematischen Wissenschaften, Band 152 | mr=0262773 | year=1970}}.
  • {{citation | last=Hörmander | first=L. | author-link=Lars Hörmander | mr=0717035 | title=The analysis of linear partial differential operators I | series= Grundl. Math. Wissenschaft. | volume= 256 | publisher= Springer | year=1983 | isbn=3-540-12104-8 | doi=10.1007/978-3-642-96750-4}}.
  • {{Citation | last1=Kassel | first1=Christian | title=Quantum groups | publisher=Springer-Verlag | location=Berlin, New York | series=Graduate Texts in Mathematics | isbn=978-0-387-94370-1 | mr=1321145 | year=1995 | volume=155 | doi=10.1007/978-1-4612-0783-2 | url-access=registration | url=https://archive.org/details/quantumgroups0000kass}}.
  • {{citation | last=Knuth | first=Donald | author-link=Donald Knuth | title=Seminumerical Algorithms|edition=3rd. | location=Reading, Massachusetts | publisher=Addison–Wesley | year=1997 | isbn=0-201-89684-2}}.
  • {{Narici Beckenstein Topological Vector Spaces|edition=2}}
  • {{citation | last1=Reed | first1=Michael | last2=Simon | first2=Barry | author2-link=Barry Simon | title=Methods of modern mathematical physics. II. Fourier analysis, self-adjointness | publisher=Academic Press Harcourt Brace Jovanovich, Publishers|location=New York-London | year= 1975 | pages= xv+361 | isbn =0-12-585002-6 | mr=0493420}}
  • {{Citation | last1=Rudin | first1=Walter | author1-link=Walter Rudin | title=Fourier analysis on groups | publisher=Interscience Publishers | location=New York–London | series=Interscience Tracts in Pure and Applied Mathematics | volume=12 | mr=0152834 | year=1962 | isbn=0-471-52364-X}}.
  • {{Schaefer Wolff Topological Vector Spaces|edition=2}}
  • {{citation | last1=Stein | first1=Elias | author-link1=Elias Stein | last2=Weiss | first2=Guido | title=Introduction to Fourier Analysis on Euclidean Spaces|publisher=Princeton University Press|year=1971|isbn=0-691-08078-X|url-access=registration|url=https://archive.org/details/introductiontofo0000stei}}.
  • {{springer | last=Sobolev | first=V.I.|id=C/c026430|title=Convolution of functions|year=2001}}.
  • {{citation | last=Strichartz | first=R.|year=1994|title=A Guide to Distribution Theory and Fourier Transforms|publisher=CRC Press|isbn=0-8493-8273-4}}.
  • {{citation | last=Titchmarsh | first=E|author-link=Edward Charles Titchmarsh|title=Introduction to the theory of Fourier integrals|isbn=978-0-8284-0324-5|year=1948|edition=2nd|publication-date=1986|publisher=Chelsea Pub. Co.|location=New York, N.Y.}}.
  • {{Trèves François Topological vector spaces, distributions and kernels}}
  • {{citation | last=Uludag | first=A. M. |author-link=A. Muhammed Uludag|title=On possible deterioration of smoothness under the operation of convolution|journal=J. Math. Anal. Appl. |volume=227 |issue=2 |pages=335–358|year=1998|doi=10.1006/jmaa.1998.6091 |doi-access=free|hdl=11693/25385 |hdl-access=free }}
  • {{citation | last1=von zur Gathen | first1=J. | last2=Gerhard | first2=J .|title=Modern Computer Algebra|isbn=0-521-82646-2|year=2003|publisher=Cambridge University Press}}.