Hadamard product (matrices)
{{Short description|Elementwise product of two matrices}}
Image:Hadamard product qtl1.svg
In mathematics, the Hadamard product (also known as the element-wise product, entrywise product{{rp|ch. 5}} or Schur product{{cite journal|last=Davis|first=Chandler|year=1962|title=The norm of the Schur product operation|journal=Numerische Mathematik|volume=4|pages=343–44|doi=10.1007/bf01386329|number=1|s2cid=121027182}}) is a binary operation that takes two matrices of the same dimensions and returns a matrix of the same dimensions in which each element is the product of the corresponding elements of the operands. This operation can be thought of as a "naive matrix multiplication" and is different from the matrix product. It is attributed to, and named after, either French mathematician Jacques Hadamard or German mathematician Issai Schur.
The Hadamard product is associative and distributive. Unlike the matrix product, it is also commutative.{{Cite web|last=Million|first=Elizabeth|date=April 12, 2007|title=The Hadamard Product|url=http://buzzard.ups.edu/courses/2007spring/projects/million-paper.pdf|access-date=September 6, 2020|website=buzzard.ups.edu}}
Definition
For two matrices {{math|A}} and {{math|B}} of the same dimension {{math|m × n}}, the Hadamard product {{math|A ⊙ B}} (sometimes written {{math|A ∘ B}}{{cite web|url=https://machinelearning.wtf/terms/hadamard-product/|title=Hadamard product - Machine Learning Glossary|website=machinelearning.wtf}}{{cite web|url=https://math.stackexchange.com/q/815315 |title=linear algebra - What does a dot in a circle mean?|website=Mathematics Stack Exchange}}{{cite web|url=https://math.stackexchange.com/a/601545/688715|title=Element-wise (or pointwise) operations notation?|website=Mathematics Stack Exchange}}{{user generated|date=March 2025}}) is a matrix of the same dimension as the operands, with elements given by
:
(A \odot B)_{ij} = A_{ij} B_{ij}.
For matrices of different dimensions ({{math|m × n}} and {{math|p × q}}, where {{math|m ≠ p}} or {{math|n ≠ q}}), the Hadamard product is undefined.
{{Anchor|Example}}An example of the Hadamard product for two arbitrary 2 × 3 matrices:
:
\begin{bmatrix}
2 & 3 & 1 \\
0 & 8 & -2
\end{bmatrix} \odot \begin{bmatrix}
3 & 1 & 4 \\
7 & 9 & 5
\end{bmatrix} = \begin{bmatrix}
2 \times 3 & 3 \times 1 & 1 \times 4 \\
0 \times 7 & 8 \times 9 & -2 \times 5
\end{bmatrix} = \begin{bmatrix}
6 & 3 & 4 \\
0 & 72 & -10
\end{bmatrix}.
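This worked example can be reproduced in code; the following is a minimal sketch in Python with NumPy (one of the languages surveyed below), where the array operator {{mono|*}} performs the entrywise product:

<syntaxhighlight lang="python">
import numpy as np

# The two 2 x 3 matrices from the example above.
A = np.array([[2, 3, 1],
              [0, 8, -2]])
B = np.array([[3, 1, 4],
              [7, 9, 5]])

# On NumPy arrays, * is the elementwise (Hadamard) product.
print(A * B)
# [[  6   3   4]
#  [  0  72 -10]]
</syntaxhighlight>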
Properties
- The Hadamard product is commutative (when working with a commutative ring), associative and distributive over addition. That is, if {{mvar|A}}, {{mvar|B}}, and {{mvar|C}} are matrices of the same size, and {{mvar|k}} is a scalar:
\begin{align}
A \odot B &= B \odot A, \\
A \odot (B \odot C) &= (A \odot B) \odot C, \\
A \odot (B + C) &= A \odot B + A \odot C, \\
(kA) \odot B &= A \odot (kB) = k(A \odot B), \\
A \odot 0 &= 0 \odot A = 0.
\end{align}
- The identity element under Hadamard multiplication of two {{math|m × n}} matrices is the {{math|m × n}} matrix of ones. This differs from the identity matrix under regular matrix multiplication, where only the elements of the main diagonal are equal to 1. Furthermore, a matrix has an inverse under Hadamard multiplication if and only if all of its elements are invertible; equivalently, over a field, if and only if none of its elements is equal to zero.{{cite web |last=Million |first=Elizabeth | title=The Hadamard Product |url=http://buzzard.ups.edu/courses/2007spring/projects/million-paper.pdf |access-date=2 January 2012}}
- For vectors {{math|x}} and {{math|y}} and corresponding diagonal matrices {{math|Dx}} and {{math|Dy}} with these vectors as their main diagonals, the following identity holds:{{cite book |last1=Horn |first1=Roger A. |first2=Charles R. |last2= Johnson|title=Matrix analysis |publisher=Cambridge University Press |date=2012}}{{rp|479}}
\mathbf{x}^* (A \odot B)\mathbf{y} = \operatorname{tr}\left({D}_\mathbf{x}^* A {D}_\mathbf{y} {B}^\mathsf{T}\right),
where {{math|x*}} denotes the conjugate transpose of {{math|x}}. In particular, using vectors of ones, this shows that the sum of all elements in the Hadamard product is the trace of {{math|AB<sup>T</sup>}}, where superscript T denotes the matrix transpose; that is, \sum_{i,j} (A \odot B)_{ij} = \operatorname{tr}\left(A B^\mathsf{T}\right). A related result for square {{mvar|A}} and {{mvar|B}} is that the row-sums of their Hadamard product are the diagonal elements of {{math|AB<sup>T</sup>}}, while the column-sums are the diagonal elements of {{math|B<sup>T</sup>A}} (see the numerical check after this list):
\sum_j (A \odot B)_{ij} = \left(A B^\mathsf{T}\right)_{ii}, \qquad \sum_i (A \odot B)_{ij} = \left(B^\mathsf{T} A\right)_{jj}.
Similarly,
(\mathbf{y}\mathbf{x}^*) \odot A = D_\mathbf{y} A D_\mathbf{x}^*.
Furthermore, a Hadamard matrix–vector product can be expressed as
(A \odot B) \mathbf{y} = \operatorname{diag}(A D_\mathbf{y} B^\mathsf{T}),
where {{math|diag(M)}} is the vector formed from the diagonal of the matrix {{mvar|M}}. Taking {{math|1=y = 1}}, the all-ones vector, this implies that
(A \odot B) \mathbf{1} = \operatorname{diag}\left(A B^\mathsf{T}\right).
- The Hadamard product is a principal submatrix of the Kronecker product.{{cite journal |last1=Liu |first1=Shuangzhe |last2=Trenkler |first2=Götz |year=2008 |title= Hadamard, Khatri-Rao, Kronecker and other matrix products |journal= International Journal of Information and Systems Sciences |volume=4 |issue=1 |pages=160–177}}{{cite journal |last1=Liu |first1=Shuangzhe |last2=Leiva |first2=Víctor |last3=Zhuang |first3=Dan |last4=Ma |first4=Tiefeng |last5=Figueroa-Zúñiga |first5=Jorge I. |year=2022 |title=Matrix differential calculus with applications in the multivariate linear model and its diagnostics |journal=Journal of Multivariate Analysis |volume=188 |pages=104849 |doi=10.1016/j.jmva.2021.104849 |s2cid=239598156 |doi-access=free}}{{Cite journal |last1=Liu |first1=Shuangzhe |last2=Trenkler |first2=Götz |last3=Kollo |first3=Tõnu |
last4=von Rosen |first4=Dietrich |last5=Baksalary |first5=Oskar Maria |date= 2023 |title=Professor Heinz Neudecker and matrix differential calculus |
journal=Statistical Papers |volume=65 |issue=4 |pages=2605–2639 |language=en |doi=10.1007/s00362-023-01499-w}}
- The Hadamard product satisfies the rank inequality
\operatorname{rank}(A \odot B) \leq \operatorname{rank}(A) \operatorname{rank}(B).
- If {{math|A}} and {{math|B}} are positive-definite matrices, then the following inequality involving the Hadamard product holds:{{cite journal |last1=Hiai |first1=Fumio |last2=Lin |first2=Minghua |title=On an eigenvalue inequality involving the Hadamard product |journal=Linear Algebra and Its Applications |date=February 2017 |volume=515 |pages=313–320 |doi=10.1016/j.laa.2016.11.017 |doi-access=free}}
\prod_{i=k}^n \lambda_i(A \odot B) \ge \prod_{i=k}^n \lambda_i(A B),\quad k = 1, \ldots, n,
where {{math|λi(A)}} is the {{math|i}}th largest eigenvalue of {{math|A}}.
- If {{mvar|D}} and {{mvar|E}} are diagonal matrices, then{{cite web |url=http://buzzard.ups.edu/courses/2007spring/projects/million-paper.pdf |title=Project |publisher=buzzard.ups.edu |date=2007 |access-date=2019-12-18}}
\begin{align}
D (A \odot B) E &= (D A E) \odot B = (D A) \odot (B E) \\
&= (AE) \odot (D B) = A \odot (D B E).
\end{align}
- The Hadamard product of two vectors {{math|a}} and {{math|b}} is the same as matrix multiplication of the corresponding diagonal matrix of one vector by the other vector:
\mathbf a \odot \mathbf b = D_\mathbf{a} \mathbf b = D_\mathbf{b} \mathbf a.
- The operator transforming a vector to a diagonal matrix may be expressed using the Hadamard product as
\operatorname{diag}(\mathbf{a}) = (\mathbf{a} \mathbf{1}^T) \odot I,
where {{math|1}} is a constant vector with all elements equal to 1, and {{mvar|I}} is the identity matrix.
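Several of the identities in this list can be spot-checked numerically. The following minimal sketch, in Python with NumPy (one of the languages surveyed below), checks the bilinear-form identity, the row- and column-sum formulas, and the diagonal-matrix identity for random real matrices; the variable names are illustrative:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
x = rng.standard_normal(n)
y = rng.standard_normal(n)

H = A * B  # Hadamard product

# x^T (A o B) y == tr(D_x A D_y B^T); real vectors, so * reduces to transpose.
lhs = x @ H @ y
rhs = np.trace(np.diag(x) @ A @ np.diag(y) @ B.T)
assert np.isclose(lhs, rhs)

# Row sums are the diagonal of A B^T; column sums the diagonal of B^T A.
assert np.allclose(H.sum(axis=1), np.diag(A @ B.T))
assert np.allclose(H.sum(axis=0), np.diag(B.T @ A))

# Hadamard product of vectors: a o b = D_a b.
a, b = rng.standard_normal(n), rng.standard_normal(n)
assert np.allclose(a * b, np.diag(a) @ b)
</syntaxhighlight>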
The mixed-product property
The Hadamard product obeys certain relationships with other matrix product operators.
- If {{math|⊗}} is the Kronecker product, assuming {{mvar|A}} has the same dimensions as {{mvar|C}} and {{mvar|B}} as {{mvar|D}}, then
(A \otimes B) \odot (C \otimes D) = (A \odot C) \otimes (B \odot D).
- If {{math|•}} is the face-splitting product, then{{Cite journal|last=Slyusar|first=V. I. |title=End products in matrices in radar applications. |url=http://slyusar.kiev.ua/en/IZV_1998_3.pdf |journal=Radioelectronics and Communications Systems |year=1998 |volume=41 |issue=3|pages=50–53}}
(A \bullet B) \odot (C \bullet D) = (A \odot C) \bullet (B \odot D).
- If {{math|∗}} is the column-wise Khatri–Rao product, then
(A \bullet B)(C \ast D) = (AC) \odot (BD).
These identities are checked numerically in the sketch below.
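The following sketch in Python with NumPy spot-checks two of the mixed-product identities above. NumPy has no built-in face-splitting or Khatri–Rao products, so the helper functions {{mono|face_split}} and {{mono|khatri_rao}} are ad-hoc illustrations, not library APIs:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
A, C = rng.standard_normal((2, 3)), rng.standard_normal((2, 3))
B, D = rng.standard_normal((4, 5)), rng.standard_normal((4, 5))

# (A kron B) o (C kron D) == (A o C) kron (B o D)
assert np.allclose(np.kron(A, B) * np.kron(C, D), np.kron(A * C, B * D))

def face_split(A, B):
    # Face-splitting product: row-wise Kronecker product.
    return np.vstack([np.kron(A[i], B[i]) for i in range(A.shape[0])])

def khatri_rao(C, D):
    # Column-wise Khatri-Rao product: column-wise Kronecker product.
    return np.column_stack([np.kron(C[:, j], D[:, j]) for j in range(C.shape[1])])

# (A face-split B)(C khatri-rao D) == (AC) o (BD)
A2, B2 = rng.standard_normal((3, 2)), rng.standard_normal((3, 4))
C2, D2 = rng.standard_normal((2, 5)), rng.standard_normal((4, 5))
assert np.allclose(face_split(A2, B2) @ khatri_rao(C2, D2), (A2 @ C2) * (B2 @ D2))
</syntaxhighlight>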
Schur product theorem
{{Main|Schur product theorem}}
The Hadamard product of two positive-semidefinite matrices is positive-semidefinite.{{Citation | doi=10.1016/0024-3795(73)90023-2 | last=Styan | first=George P. H. | title=Hadamard Products and Multivariate Statistical Analysis | journal=Linear Algebra and Its Applications | year=1973 | volume=6 | pages=217–240| hdl=10338.dmlcz/102190 | hdl-access=free }} This is known as the Schur product theorem, after Issai Schur. For two positive-semidefinite matrices {{mvar|A}} and {{mvar|B}}, it is also known that the determinant of their Hadamard product is greater than or equal to the product of their respective determinants:
\det(A \odot B) \ge \det(A) \det(B).
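Both claims can be spot-checked numerically; a minimal sketch in Python with NumPy follows, where {{mono|random_psd}} is an illustrative helper that builds positive-semidefinite matrices, and small tolerances absorb floating-point rounding:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 5

def random_psd(n):
    # M M^T is always positive-semidefinite.
    M = rng.standard_normal((n, n))
    return M @ M.T

A, B = random_psd(n), random_psd(n)
H = A * B  # Hadamard (Schur) product

# Schur product theorem: all eigenvalues of H are nonnegative (up to rounding).
assert np.all(np.linalg.eigvalsh(H) >= -1e-10)

# det(A o B) >= det(A) det(B) for positive-semidefinite A, B.
assert np.linalg.det(H) >= np.linalg.det(A) * np.linalg.det(B) - 1e-10
</syntaxhighlight>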
Analogous operations
Other Hadamard operations are also seen in the mathematical literature,{{cite journal |last=Reams |first=Robert |year=1999 |title=Hadamard inverses, square roots and products of almost semidefinite matrices |journal=Linear Algebra and Its Applications |volume=288 |pages=35–43 |doi=10.1016/S0024-3795(98)10162-3 |doi-access=free}} namely the {{visible anchor|Hadamard root}} and {{visible anchor|Hadamard power}} (which are in effect the same thing because of fractional indices), defined for a matrix {{mvar|A}} such that:
{{anchor|Root|Power}}For {{math|1=n = 2}},
\begin{align}
{B} &= {A}^{\circ 2} \\
B_{ij} &= {A_{ij}}^2
\end{align}
and for {{math|1=n = 1/2}},
\begin{align}
{B} &= {A}^{\circ \frac12} \\
B_{ij} &= {A_{ij}}^\frac12
\end{align}
{{anchor|Inverse}}The {{visible anchor|Hadamard inverse}} reads:
\begin{align}
{B} &= {A}^{\circ -1} \\
B_{ij} &= {A_{ij}}^{-1}
\end{align}
{{anchor|Division}}A {{visible anchor|Hadamard division}} is defined as:{{cite web |last1=Wetzstein |first1=Gordon |last2=Lanman |first2=Douglas |last3=Hirsch |first3=Matthew |last4=Raskar |first4=Ramesh |title=Supplementary Material: Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting |url=http://web.media.mit.edu/~gordonw/TensorDisplays/TensorDisplays-Supplement.pdf |work=MIT Media Lab}}{{cite book |last=Cyganek |first=Boguslaw |url=https://books.google.com/books?id=upsxI3bOZvAC&pg=PT109 |title=Object Detection and Recognition in Digital Images: Theory and Practice |publisher=John Wiley & Sons |year=2013 |isbn=9781118618363 |page=109}}
\begin{align}
{C} &= {A} \oslash {B} \\
C_{ij} &= \frac{A_{ij}}{B_{ij}}
\end{align}
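In array-language terms, these operations are ordinary elementwise powers, roots, reciprocals and quotients. A minimal sketch in Python with NumPy (illustrative values) makes the correspondence explicit:

<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1.0, 4.0],
              [9.0, 16.0]])
B = np.array([[2.0, 4.0],
              [8.0, 32.0]])

print(A ** 2)    # Hadamard power A^{o 2}: each entry squared
print(A ** 0.5)  # Hadamard root A^{o 1/2}: entrywise square root
print(A ** -1)   # Hadamard inverse A^{o -1}: entrywise reciprocal
print(A / B)     # Hadamard division A ⊘ B: entrywise quotient
</syntaxhighlight>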
In programming languages
Most scientific or numerical programming languages include the Hadamard product, under various names.
In MATLAB, the Hadamard product is expressed as "dot multiply": {{mono|a .* b}}, or the function call {{mono|times(a, b)}}.{{cite web |title=MATLAB times function|url=https://www.mathworks.com/help/matlab/ref/times.html}} It also has analogous dot operators, for example {{mono|a .^ b}} and {{mono|a ./ b}}.{{cite web |title=Array vs. Matrix Operations|url=https://www.mathworks.com/help/matlab/matlab_prog/array-vs-matrix-operations.html}} Because of this mechanism, it is possible to reserve {{mono|*}} and {{mono|^}} for matrix multiplication and matrix powers, respectively.
The programming language Julia has similar syntax to MATLAB, where Hadamard multiplication is called broadcast multiplication and is also denoted with {{mono|a .* b}}, and other operators are analogously defined element-wise, for example Hadamard powers use {{mono|a .^ b}}.{{cite web |title=Vectorized "dot" operators |url=https://docs.julialang.org/en/v1/manual/mathematical-operations/#man-dot-operators |access-date=31 January 2024}} But unlike MATLAB, in Julia this "dot" syntax is generalized with a generic broadcasting operator {{mono|.}}, which can apply any function element-wise. This includes both binary operators (such as the aforementioned multiplication and exponentiation, as well as any other binary operator such as the Kronecker product) and unary operators such as {{mono|!}} and {{mono|√}}. Thus, any function in prefix notation {{mono|f}} can be applied as {{mono|f.(x)}}.{{cite web |title=Dot Syntax for Vectorizing Functions |url=https://docs.julialang.org/en/v1/manual/functions/#man-vectorized |access-date=31 January 2024}}
Python does not have built-in array support, so notation varies across libraries. The NumPy numerical library interprets {{mono|a*b}} or {{mono|np.multiply(a, b)}} as the Hadamard product, and uses {{mono|a@b}} or {{mono|np.matmul(a, b)}} for the matrix product. With the SymPy symbolic library, multiplication of {{mono|array}} objects as either {{mono|a*b}} or {{mono|a@b}} will produce the matrix product. The Hadamard product can be obtained with the method call {{mono|a.multiply_elementwise(b)}}.{{Cite web |title=Common Matrices — SymPy 1.9 documentation |url=https://docs.sympy.org/latest/modules/matrices/common.html?highlight=multiply_elementwise#sympy.matrices.common.MatrixCommon.multiply}} Some Python packages include support for Hadamard powers using functions like {{mono|np.power(a, b)}}, or the Pandas method {{mono|a.pow(b)}}.
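The following short sketch contrasts the NumPy and SymPy conventions described above (the variable names are illustrative):

<syntaxhighlight lang="python">
import numpy as np
import sympy as sp

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(a * b)  # Hadamard product: [[ 5 12] [21 32]]
print(a @ b)  # matrix product:   [[19 22] [43 50]]

A = sp.Matrix([[1, 2], [3, 4]])
B = sp.Matrix([[5, 6], [7, 8]])
print(A * B)                      # matrix product
print(A.multiply_elementwise(B))  # Hadamard product
</syntaxhighlight>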
In C++, the Eigen library provides a {{mono|cwiseProduct}} member function for the {{mono|Matrix}} class ({{mono|a.cwiseProduct(b)}}), while the Armadillo library uses the operator {{mono|%}} to make compact expressions ({{mono|a % b}}; {{mono|a * b}} is a matrix product).
In GAUSS and HP Prime, the operation is known as array multiplication.
In Fortran, R, APL, J and Wolfram Language (Mathematica), the multiplication operator {{mono|*}} or {{mono|×}} applies the Hadamard product, whereas the matrix product is written using {{mono|matmul}}, {{mono|%*%}}, {{mono|+.×}}, {{mono|+/ .*}} and {{mono|.}}, respectively.
The R package [https://cran.r-project.org/web/packages/matrixcalc/matrixcalc.pdf matrixcalc] provides the function {{mono|hadamard.prod()}} for the Hadamard product of numeric matrices or vectors.{{cite web |date=16 May 2013 |title=Matrix multiplication |url=https://cran.r-project.org/doc/manuals/r-release/R-intro.html#Multiplication |access-date=24 August 2013 |work=An Introduction to R |publisher=The R Project for Statistical Computing}}
Applications
The Hadamard product appears in lossy compression algorithms such as JPEG. The decoding step involves an entry-for-entry product, in other words the Hadamard product.{{citation needed|date=November 2019}}
In image processing, the Hadamard operator can be used for enhancing, suppressing or masking image regions. One matrix represents the original image, the other acts as a weight or masking matrix.
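As a minimal sketch of masking, assuming a grayscale image stored as a NumPy array with made-up pixel values, a binary weight matrix suppresses unwanted regions via the Hadamard product:

<syntaxhighlight lang="python">
import numpy as np

# A tiny 3x3 "grayscale image" (illustrative values).
image = np.array([[200,  50, 120],
                  [ 90, 255,  30],
                  [ 10,  60, 180]])

# A binary mask: 1 keeps a pixel, 0 suppresses it.
mask = np.array([[1, 0, 1],
                 [0, 1, 0],
                 [1, 0, 1]])

masked = image * mask  # the Hadamard product applies the mask entrywise
print(masked)
</syntaxhighlight>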
It is used in the machine learning literature, for example, to describe the architecture of recurrent neural networks such as GRUs or LSTMs.{{cite arXiv |last1=Sak |first1=Haşim |last2=Senior |first2=Andrew |last3=Beaufays |first3=Françoise |date=2014-02-05 |title=Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition |class=cs.NE |eprint=1402.1128 }}
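As a hedged illustration of such gating (illustrative shapes and names, not any particular library's API), a GRU-style update blends the previous and candidate hidden states with Hadamard products:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
h_prev = rng.standard_normal(8)  # previous hidden state
h_cand = rng.standard_normal(8)  # candidate hidden state
z = 1 / (1 + np.exp(-rng.standard_normal(8)))  # update gate, entries in (0, 1)

# GRU-style update: Hadamard products blend old and candidate states entrywise.
h_new = z * h_prev + (1 - z) * h_cand
</syntaxhighlight>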
It is also used to study the statistical properties of random vectors and matrices.
{{cite journal
|last1=Neudecker|first1=Heinz|last2=Liu|first2=Shuangzhe|last3=Polasek|first3=Wolfgang
|year=1995
|title= The Hadamard product and some of its applications in statistics
|journal= Statistics |volume=26|issue=4|pages=365–373|doi=10.1080/02331889508802503 }}
{{cite journal
|last1=Neudecker|first1=Heinz|last2=Liu|first2=Shuangzhe
|year=2001
|title= Some statistical properties of Hadamard products of random matrices
|journal= Statistical Papers|volume=42|issue=4 |pages=475–487|doi=10.1007/s003620100074 |s2cid=121385730 }}
The penetrating face product
File:Penetrating face product.jpg
According to the definition of V. Slyusar, the penetrating face product of the {{math|p × g}} matrix {{mvar|A}} and the {{mvar|n}}-dimensional matrix {{mvar|B}} ({{math|n > 1}}) with {{math|p × g}} blocks ({{math|1=B = [B<sub>i</sub>]}}) is a matrix of size {{math|p × g·n}} of the form:{{Cite journal|last=Slyusar|first=V. I.|date=March 13, 1998|title=A Family of Face Products of Matrices and its properties |url=http://slyusar.kiev.ua/FACE.pdf |journal=Cybernetics and Systems Analysis C/C of Kibernetika I Sistemnyi Analiz. 1999.|volume=35|issue=3|pages=379–384|doi=10.1007/BF02733426|s2cid=119661450}}
{A} [\circ] {B} =
\left[\begin{array} { c | c | c | c }
{A} \circ {B}_1 & {A} \circ {B}_2 & \cdots & {A} \circ {B}_n
\end{array}\right].
= Example =
If
{A} =
\begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6 \\
7 & 8 & 9
\end{bmatrix},\quad
{B} =
\left[\begin{array} { c | c | c }
{B}_1 & {B}_2 & {B}_3
\end{array}\right] =
\left[\begin{array} { c c c | c c c | c c c }
1 & 4 & 7 & 2 & 8 & 14 & 3 & 12 & 21 \\
8 & 20 & 5 & 10 & 25 & 40 & 12 & 30 & 6 \\
2 & 8 & 3 & 2 & 4 & 2 & 7 & 3 & 9
\end{array}\right]
then
{A} [\circ] {B} =
\left[\begin{array} { c c c | c c c | c c c }
1 & 8 & 21 & 2 & 16 & 42 & 3 & 24 & 63 \\
32 & 100 & 30 & 40 & 125 & 240 & 48 & 150 & 36 \\
14 & 64 & 27 & 14 & 32 & 18 & 49 & 24 & 81
\end{array}\right].
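This example can be reproduced with a short sketch in Python with NumPy; the helper {{mono|penetrating_face}} is a made-up name for illustration, not an established API:

<syntaxhighlight lang="python">
import numpy as np

def penetrating_face(A, B, n):
    # Split B into n side-by-side blocks of A's shape, take the Hadamard
    # product of A with each block, and reassemble the blocks side by side.
    blocks = np.hsplit(B, n)
    return np.hstack([A * Bi for Bi in blocks])

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
B = np.array([[1,  4, 7,  2,  8, 14,  3, 12, 21],
              [8, 20, 5, 10, 25, 40, 12, 30,  6],
              [2,  8, 3,  2,  4,  2,  7,  3,  9]])

print(penetrating_face(A, B, 3))  # matches the example above
</syntaxhighlight>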
= Applications =
The penetrating face product is used in the tensor-matrix theory of digital antenna arrays. This operation can also be used in artificial neural network models, specifically convolutional layers.{{Cite journal|last1=Ha|first1=D.|last2=Dai|first2=A. M.|last3=Le|first3=Q. V.|date=2017|title=HyperNetworks|journal=The International Conference on Learning Representations (ICLR) 2017, Toulon|page=6|arxiv=1609.09106}}