Generalized singular value decomposition

{{short description|Name of two different techniques based on the singular value decomposition}}

{{Use dmy dates|date=October 2020}}

In linear algebra, the generalized singular value decomposition (GSVD) is the name of two different techniques based on the singular value decomposition (SVD). The two versions differ because one version decomposes two matrices (somewhat like the higher-order or tensor SVD) and the other version uses a set of constraints imposed on the left and right singular vectors of a single-matrix SVD.

== First version: two-matrix decomposition ==

The generalized singular value decomposition (GSVD) is a matrix decomposition of a pair of matrices which generalizes the singular value decomposition. It was introduced by Van Loan in 1976 and later developed by Paige and Saunders; the Paige–Saunders formulation is the version described here. In contrast to the SVD, the GSVD simultaneously decomposes a pair of matrices with the same number of columns. The SVD and the GSVD, as well as some other possible generalizations of the SVD,{{Cite book | last = Hansen | first = Per Christian | name-list-style = vanc | publisher = SIAM Monographs on Mathematical Modeling and Computation | year = 1997 | title = Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion | isbn = 0-89871-403-6 }}{{cite web | last1 = de Moor | first1 = Bart L. R. | last2 = Golub | first2 = Gene H. | name-list-style = vanc | year = 1989 | title = Generalized Singular Value Decompositions A Proposal for a Standard Nomenclature | url = http://ftp.esat.kuleuven.be/pub/SISTA/ida/reports/89-10.pdf }}{{cite journal | last1 = de Moor | first1 = Bart L. R. | last2 = Zha| first2 = Hongyuan | name-list-style = vanc | year = 1991 | title = A tree of generalizations of the ordinary singular value decomposition| journal = Linear Algebra and Its Applications | volume = 147 | pages = 469–500 | doi = 10.1016/0024-3795(91)90243-P | doi-access = free }} are extensively used in the study of the conditioning and regularization of linear systems with respect to quadratic semi-norms. In the following, let \mathbb{F} = \mathbb{R} or \mathbb{F} = \mathbb{C}.

=== Definition ===

The generalized singular value decomposition of matrices A_1 \in \mathbb{F}^{m_1 \times n} and A_2 \in \mathbb{F}^{m_2 \times n} is

\begin{align}
A_1 & = U_1\Sigma_1 [ W^* D, 0_D] Q^*, \\
A_2 & = U_2\Sigma_2 [ W^* D, 0_D] Q^*,
\end{align}

where

  • U_1 \in \mathbb{F}^{m_1 \times m_1} is unitary,
  • U_2 \in \mathbb{F}^{m_2 \times m_2} is unitary,
  • Q \in \mathbb{F}^{n \times n} is unitary,
  • W \in \mathbb{F}^{k \times k} is unitary,
  • D \in \mathbb{R}^{k \times k} is real diagonal with positive diagonal entries, and contains the non-zero singular values of C = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix} in decreasing order,
  • 0_D = 0 \in \mathbb{R}^{k \times (n - k)},
  • \Sigma_1 = \lceil I_A, S_1, 0_A \rfloor \in \mathbb{R}^{m_1 \times k} is real non-negative block-diagonal, where S_1 = \lceil \alpha_{r + 1}, \dots, \alpha_{r + s} \rfloor with 1 > \alpha_{r + 1} \ge \cdots \ge \alpha_{r + s} > 0, I_A = I_r, and 0_A = 0 \in \mathbb{R}^{(m_1 - r - s) \times (k - r - s)} ,
  • \Sigma_2 = \lceil 0_B, S_2, I_B \rfloor \in \mathbb{R}^{m_2 \times k} is real non-negative block-diagonal, where S_2 = \lceil \beta_{r + 1}, \dots, \beta_{r + s} \rfloor with 0 < \beta_{r + 1} \le \cdots \le \beta_{r + s} < 1, I_B = I_{k - r - s}, and 0_B = 0 \in \mathbb{R}^{(m_2 - k + r) \times r} ,
  • \Sigma_1^* \Sigma_1 = \lceil\alpha_1^2, \dots, \alpha_k^2\rfloor,
  • \Sigma_2^* \Sigma_2 = \lceil\beta_1^2, \dots, \beta_k^2\rfloor,
  • \Sigma_1^* \Sigma_1 + \Sigma_2^* \Sigma_2 = I_k,
  • k = \textrm{rank}(C).

We denote \alpha_1 = \cdots = \alpha_r = 1, \alpha_{r + s + 1} = \cdots = \alpha_k = 0, \beta_1 = \cdots = \beta_r = 0, and \beta_{r + s + 1} = \cdots = \beta_k = 1. While \Sigma_1 is diagonal, \Sigma_2 is not always diagonal, because of the leading rectangular zero matrix; instead \Sigma_2 is "bottom-right-diagonal".
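
As a minimal illustrative example (not taken from the references), let m_1 = m_2 = n = 2 and

A_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0.6 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0.8 \end{bmatrix}.

Then C^* C = I_2, so k = 2, D = Q = I_2, and one may take U_1 = U_2 = W = I_2 with

\Sigma_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0.6 \end{bmatrix} = \lceil I_1, S_1 \rfloor, \qquad \Sigma_2 = \begin{bmatrix} 0 & 0 \\ 0 & 0.8 \end{bmatrix} = \lceil 0_B, S_2 \rfloor,

where S_1 = (0.6), S_2 = (0.8) and 0_B = 0 \in \mathbb{R}^{1 \times 1} (the blocks 0_A, I_B and 0_D are empty). Here r = 1, s = 1, the generalized singular value pairs are (\alpha_1, \beta_1) = (1, 0) and (\alpha_2, \beta_2) = (0.6, 0.8), and indeed \Sigma_1^* \Sigma_1 + \Sigma_2^* \Sigma_2 = I_2.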

=== Variations ===

There are many variations of the GSVD. These variations arise from the fact that it is always possible to insert the product E E^* = I to the left of Q^*, where E \in \mathbb{F}^{n \times n} is an arbitrary unitary matrix. We denote

  • X = ([W^* D, 0_D] Q^*)^*. Then X^* = [0, R] \hat{Q}^*, where R \in \mathbb{F}^{k \times k} is upper-triangular and invertible, and \hat{Q} \in \mathbb{F}^{n \times n} is unitary. Such matrices exist by the RQ-decomposition.
  • Y = W^* D. Then Y is invertible.

Here are some variations of the GSVD:

  • In terms of X:

\begin{align}
A_1 & = U_1 \Sigma_1 X^*, \\
A_2 & = U_2 \Sigma_2 X^*.
\end{align}

  • In terms of the RQ-decomposition:

\begin{align}
A_1 & = U_1 \Sigma_1 [0, R] \hat{Q}^*, \\
A_2 & = U_2 \Sigma_2 [0, R] \hat{Q}^*.
\end{align}

  • Simplified:

\begin{align}
A_1 & = U_1\Sigma_1 [ Y, 0_D] Q^*, \\
A_2 & = U_2\Sigma_2 [ Y, 0_D] Q^*.
\end{align}
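
The RQ-decomposition used in the second variation can be illustrated numerically. The following Python sketch is illustrative only; it applies scipy.linalg.rq to a random wide matrix standing in for X^* = [W^* D, 0_D] Q^* and checks that the triangular factor has the [0, R] \hat{Q}^* form:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import rq

rng = np.random.default_rng(0)
k, n = 3, 5                            # k = rank(C) < n, so X^* is a wide k-by-n matrix
X_star = rng.standard_normal((k, n))   # stands in for [W^* D, 0_D] Q^*

R_full, Q_hat_star = rq(X_star)        # X^* = R_full @ Q_hat_star, Q_hat_star unitary
assert np.allclose(X_star, R_full @ Q_hat_star)

# R_full has the form [0, R] with R upper triangular and (generically) invertible,
# matching the variation X^* = [0, R] \hat{Q}^*.
R = R_full[:, n - k:]
assert np.allclose(R_full[:, :n - k], 0)
assert np.allclose(R, np.triu(R))
</syntaxhighlight>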

=== Generalized singular values ===

A generalized singular value of A_1 and A_2 is a pair (a, b) \in \mathbb{R}^2 such that

\begin{align}
\lim_{\delta \to 0} \det(b^2 A_1^* A_1 - a^2 A_2^* A_2 + \delta I_n) / \det(\delta I_{n - k}) & = 0, \\
a^2 + b^2 & = 1, \\
a, b & \geq 0.
\end{align}

By the above definitions, and writing Q = \begin{bmatrix}Q_1 & Q_2\end{bmatrix} with Q_1 \in \mathbb{F}^{n \times k} and Q_2 \in \mathbb{F}^{n \times (n - k)}, we have

  • A_i A_j^* = U_i \Sigma_i Y Y^* \Sigma_j^* U_j^*
  • A_i^* A_j = Q \begin{bmatrix} Y^* \Sigma_i^* \Sigma_j Y & 0 \\ 0 & 0 \end{bmatrix} Q^* = Q_1 Y^* \Sigma_i^* \Sigma_j Y Q_1^*

By these properties we can show that the generalized singular values are exactly the pairs (\alpha_i, \beta_i). We have

\begin{aligned}
& \det(b^2 A_1^* A_1 - a^2 A_2^* A_2 + \delta I_n) \\
= & \det(b^2 A_1^* A_1 - a^2 A_2^* A_2 + \delta Q Q^*) \\
= & \det\left(Q \begin{bmatrix} Y^* (b^2 \Sigma_1^* \Sigma_1 - a^2 \Sigma_2^* \Sigma_2) Y + \delta I_k & 0 \\ 0 & \delta I_{n - k} \end{bmatrix} Q^*\right) \\
= & \det(\delta I_{n - k}) \det(Y^* (b^2 \Sigma_1^* \Sigma_1 - a^2 \Sigma_2^* \Sigma_2) Y + \delta I_k).
\end{aligned}

Therefore

\begin{aligned}
& \lim_{\delta \to 0} \det(b^2 A_1^* A_1 - a^2 A_2^* A_2 + \delta I_n) / \det(\delta I_{n - k}) \\
= & \lim_{\delta \to 0} \det(Y^* (b^2 \Sigma_1^* \Sigma_1 - a^2 \Sigma_2^* \Sigma_2) Y + \delta I_k) \\
= & \det(Y^* (b^2 \Sigma_1^* \Sigma_1 - a^2 \Sigma_2^* \Sigma_2) Y) \\
= & |\det(Y)|^2 \prod_{i = 1}^k (b^2 \alpha_i^2 - a^2 \beta_i^2).
\end{aligned}

This expression is zero exactly when a = \alpha_i and b = \beta_i for some i.

In the literature, the generalized singular values are sometimes claimed to be those which solve \det(b^2 A_1^* A_1 - a^2 A_2^* A_2) = 0. However, this claim only holds when k = n, since otherwise the determinant is zero for every pair (a, b) \in \mathbb{R}^2; this can be seen by substituting \delta = 0 above.
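
When k = n and A_2 has full column rank, the pairs (\alpha_i, \beta_i) can also be recovered from the symmetric-definite generalized eigenvalue problem A_1^* A_1 x = \lambda A_2^* A_2 x, whose eigenvalues are \lambda_i = \alpha_i^2 / \beta_i^2. The following Python sketch is illustrative only and assumes this generic full-rank setting; it cross-checks the ratios against the singular values of A_1 A_2^{-1}:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
m1, n = 6, 4
A1 = rng.standard_normal((m1, n))
A2 = rng.standard_normal((n, n))       # square, hence (generically) invertible

# Generalized eigenvalues of (A1^T A1, A2^T A2); requires A2^T A2 positive definite.
lam = eigh(A1.T @ A1, A2.T @ A2, eigvals_only=True)   # ascending, lam_i = (alpha_i / beta_i)^2
alpha = np.sqrt(lam / (1.0 + lam))
beta = np.sqrt(1.0 / (1.0 + lam))
assert np.allclose(alpha**2 + beta**2, 1.0)

# With A2 invertible, the ratios alpha_i / beta_i are the singular values of A1 A2^{-1}.
ratios = np.sort(alpha / beta)
svals = np.sort(np.linalg.svd(A1 @ np.linalg.inv(A2), compute_uv=False))
assert np.allclose(ratios, svals)
</syntaxhighlight>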

=== Generalized inverse ===

Define E^+ = E^{-1} for any invertible matrix E \in \mathbb{F}^{n \times n}, 0^+ = 0^* for any zero matrix 0 \in \mathbb{F}^{m \times n}, and \left\lceil E_1, E_2 \right\rfloor^+ = \left\lceil E_1^+, E_2^+ \right\rfloor for any block-diagonal matrix. Then define

A_i^+ = Q \begin{bmatrix} Y^{-1} \\ 0 \end{bmatrix} \Sigma_i^+ U_i^*.

It can be shown that A_i^+ as defined here is a generalized inverse of A_i; in particular, it is a \{1, 2, 3\}-inverse of A_i. Since it does not in general satisfy (A_i^+ A_i)^* = A_i^+ A_i, it is not the Moore–Penrose inverse; otherwise we could derive (AB)^+ = B^+ A^+ for any choice of matrices, which only holds for certain classes of matrices.

Suppose Q = \begin{bmatrix}Q_1 & Q_2\end{bmatrix} , where Q_1 \in \mathbb{F}^{n \times k} and Q_2 \in \mathbb{F}^{n \times (n - k)}. This generalized inverse has the following properties:

  • \Sigma_1^+ = \lceil I_A, S_1^{-1}, 0_A^T \rfloor
  • \Sigma_2^+ = \lceil 0^T_B, S_2^{-1}, I_B \rfloor
  • \Sigma_1 \Sigma_1^+ = \lceil I, I, 0 \rfloor
  • \Sigma_2 \Sigma_2^+ = \lceil 0, I, I \rfloor
  • \Sigma_1 \Sigma_2^+ = \lceil 0, S_1 S_2^{-1}, 0 \rfloor
  • \Sigma_1^+ \Sigma_2 = \lceil 0, S_1^{-1} S_2, 0 \rfloor
  • A_i A_j^+ = U_i \Sigma_i \Sigma_j^+ U_j^*
  • A_i^+ A_j = Q \begin{bmatrix} Y^{-1} \Sigma_i^+ \Sigma_j Y & 0 \\ 0 & 0 \end{bmatrix} Q^* = Q_1 Y^{-1} \Sigma_i^+ \Sigma_j Y Q_1^*
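
Continuing the illustrative 2 \times 2 example from the definition section (where U_2 = W = D = Q = I_2, so Y = W^* D = I_2 and Q_1 = Q), the formula gives

A_2^+ = \Sigma_2^+ = \lceil 0_B^T, S_2^{-1} \rfloor = \begin{bmatrix} 0 & 0 \\ 0 & 1.25 \end{bmatrix},

and one checks directly that A_2 A_2^+ A_2 = A_2, A_2^+ A_2 A_2^+ = A_2^+ and (A_2 A_2^+)^* = A_2 A_2^+, i.e. the \{1, 2, 3\}-inverse properties. In this diagonal example A_2^+ A_2 happens to be Hermitian as well, so A_2^+ coincides with the Moore–Penrose inverse, but this is not the case in general.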

=== Quotient SVD ===

A generalized singular ratio of A_1 and A_2 is \sigma_i = \alpha_i \beta_i^+. By the above properties, A_1 A_2^+ = U_1 \Sigma_1 \Sigma_2^+ U_2^*. Note that \Sigma_1 \Sigma_2^+ = \lceil 0, S_1 S_2^{-1}, 0 \rfloor is diagonal and, ignoring the leading zeros, contains the singular ratios in decreasing order. If A_2 is invertible, then \Sigma_1 \Sigma_2^+ has no leading zeros, the generalized singular ratios are the singular values of the matrix A_1 A_2^+ = A_1 A_2^{-1}, and U_1 and U_2 are its matrices of singular vectors. In fact, computing the SVD of A_1 A_2^{-1} is one of the motivations for the GSVD, as "forming AB^{-1} and finding its SVD can lead to unnecessary and large numerical errors when B is ill-conditioned for solution of equations". Hence the sometimes-used name "quotient SVD", although this is not the only reason for using the GSVD. If A_2 is not invertible, then U_1 \Sigma_1 \Sigma_2^+ U_2^* is still the SVD of A_1 A_2^+ if we relax the requirement of having the singular values in decreasing order. Alternatively, a decreasing-order SVD can be found by moving the leading zeros to the back: U_1 \Sigma_1 \Sigma_2^+ U_2^* = (U_1 P_1) P_1^* \Sigma_1 \Sigma_2^+ P_2 (P_2^* U_2^*), where P_1 and P_2 are appropriate permutation matrices. Since the rank equals the number of non-zero singular values, \mathrm{rank}(A_1 A_2^+) = s.
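
In the same illustrative 2 \times 2 example, A_1 A_2^+ = \begin{bmatrix} 0 & 0 \\ 0 & 0.75 \end{bmatrix}, the generalized singular ratios are \sigma_1 = \alpha_1 \beta_1^+ = 0 and \sigma_2 = \alpha_2 / \beta_2 = 0.6 / 0.8 = 0.75, and \mathrm{rank}(A_1 A_2^+) = 1 = s, in agreement with the statements above.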

=== Construction ===

Let

  • C = P \lceil D, 0 \rfloor Q^* be the SVD of C = \begin{bmatrix} A_1 \\ A_2 \end{bmatrix}, where P \in \mathbb{F}^{(m_1 + m_2) \times (m_1 + m_2)} is unitary, and Q and D are as described,
  • P = [P_1, P_2], where P_1 \in \mathbb{F}^{(m_1 + m_2) \times k} and P_2 \in \mathbb{F}^{(m_1 + m_2) \times (m_1 + m_2 - k)},
  • P_1 = \begin{bmatrix} P_{11} \\ P_{21} \end{bmatrix}, where P_{11} \in \mathbb{F}^{m_1 \times k} and P_{21} \in \mathbb{F}^{m_2 \times k},
  • P_{11} = U_1 \Sigma_1 W^* by the SVD of P_{11}, where U_1, \Sigma_1 and W are as described,
  • P_{21} W = U_2 \Sigma_2 by a decomposition similar to a QR-decomposition, where U_2 and \Sigma_2 are as described.

Then

\begin{aligned}
C & = P \lceil D, 0 \rfloor Q^* \\
& = [P_1 D, 0] Q^* \\
& = \begin{bmatrix} U_1 \Sigma_1 W^* D & 0 \\ U_2 \Sigma_2 W^* D & 0 \end{bmatrix} Q^* \\
& = \begin{bmatrix} U_1 \Sigma_1 [W^* D, 0] Q^* \\ U_2 \Sigma_2 [W^* D, 0] Q^* \end{bmatrix}.
\end{aligned}

We also have

\begin{bmatrix} U_1^* & 0 \\ 0 & U_2^* \end{bmatrix} P_1 W = \begin{bmatrix} \Sigma_1 \\ \Sigma_2 \end{bmatrix}.

Therefore

\Sigma_1^* \Sigma_1 + \Sigma_2^* \Sigma_2 = \begin{bmatrix} \Sigma_1 \\ \Sigma_2 \end{bmatrix}^* \begin{bmatrix} \Sigma_1 \\ \Sigma_2 \end{bmatrix} = W^* P_1^* \begin{bmatrix} U_1 & 0 \\ 0 & U_2 \end{bmatrix} \begin{bmatrix} U_1^* & 0 \\ 0 & U_2^* \end{bmatrix} P_1 W = I.

Since P_1 has orthonormal columns, ||P_1||_2 \leq 1, and since P_{11} consists of the first m_1 rows of P_1, ||P_{11}||_2 \leq ||P_1||_2. Therefore

||\Sigma_1||_2 = ||U_1^* P_{11} W||_2 = ||P_{11}||_2 \leq 1.

We also have, for each x \in \mathbb{R}^k such that ||x||_2 = 1,

||P_{21} x||_2^2 \leq ||P_{11} x||_2^2 + ||P_{21} x||_2^2 = ||P_{1} x||_2^2 \leq 1.

Therefore ||P_{21}||_2 \leq 1, and

||\Sigma_2||_2 = || U_2^* P_{21} W ||_2 = ||P_{21}||_2 \leq 1.
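
The construction above translates directly into a short numerical check. The following Python sketch is illustrative only; it is restricted to the generic case m_1, m_2 \ge n with k = \mathrm{rank}(C) = n and all \alpha_i strictly between 0 and 1, and the function name gsvd is not a standard library routine.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import null_space

def gsvd(A1, A2, tol=1e-12):
    """GSVD sketch for the generic case rank([A1; A2]) = n <= m1, m2
    with no alpha_i equal to 0 or 1."""
    m1, n = A1.shape
    m2 = A2.shape[0]
    C = np.vstack([A1, A2])

    # Step 1: SVD of the stacked matrix, C = P [D, 0] Q^*.
    P, d, Qh = np.linalg.svd(C, full_matrices=True)
    k = int(np.sum(d > tol * d[0]))          # numerical rank; assumed equal to n here
    D = np.diag(d[:k])
    P1 = P[:, :k]
    P11, P21 = P1[:m1, :], P1[m1:, :]

    # Step 2: SVD of the top block, P11 = U1 Sigma1 W^*.
    U1, alpha, Wh = np.linalg.svd(P11, full_matrices=True)
    Sigma1 = np.zeros((m1, k))
    Sigma1[:k, :k] = np.diag(alpha)
    W = Wh.conj().T

    # Step 3: the columns of P21 W are orthogonal with norms beta_i = sqrt(1 - alpha_i^2);
    # normalising them gives the last k columns of U2, and beta goes on the
    # "bottom-right diagonal" of Sigma2.
    B2 = P21 @ W
    beta = np.linalg.norm(B2, axis=0)
    U2_cols = B2 / beta                      # requires all beta_i > 0
    U2 = np.hstack([null_space(U2_cols.conj().T), U2_cols])
    Sigma2 = np.zeros((m2, k))
    Sigma2[m2 - k:, :] = np.diag(beta)

    return U1, U2, Sigma1, Sigma2, W, D, Qh.conj().T, k

# Example with random matrices, which have full column rank with probability one.
rng = np.random.default_rng(0)
m1, m2, n = 5, 6, 3
A1 = rng.standard_normal((m1, n))
A2 = rng.standard_normal((m2, n))
U1, U2, S1, S2, W, D, Q, k = gsvd(A1, A2)

core = np.hstack([W.conj().T @ D, np.zeros((k, n - k))]) @ Q.conj().T
assert np.allclose(A1, U1 @ S1 @ core)
assert np.allclose(A2, U2 @ S2 @ core)
assert np.allclose(S1.T @ S1 + S2.T @ S2, np.eye(k))
</syntaxhighlight>

The final assertions verify the two factorizations and the identity \Sigma_1^* \Sigma_1 + \Sigma_2^* \Sigma_2 = I_k.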

== Applications ==


The GSVD, formulated as a comparative spectral decomposition,{{cite journal | vauthors = Alter O, Brown PO, Botstein D | title = Generalized singular value decomposition for comparative analysis of genome-scale expression data sets of two different organisms | journal = Proceedings of the National Academy of Sciences of the United States of America | volume = 100 | issue = 6 | pages = 3351–6 | date = March 2003 | pmid = 12631705 | pmc = 152296 | doi = 10.1073/pnas.0530258100 | bibcode = 2003PNAS..100.3351A | doi-access = free }} has been successfully applied to signal processing and data science, e.g., in genomic signal processing.{{cite journal | vauthors = Lee CH, Alpert BO, Sankaranarayanan P, Alter O | title = GSVD comparison of patient-matched normal and tumor aCGH profiles reveals global copy-number alterations predicting glioblastoma multiforme survival | journal = PLOS ONE| volume = 7 | issue = 1 | pages = e30098 | date = January 2012 | pmid = 22291905 | pmc = 3264559 | doi = 10.1371/journal.pone.0030098 | bibcode = 2012PLoSO...730098L | doi-access = free }}{{cite journal | vauthors = Aiello KA, Ponnapalli SP, Alter O | title = Mathematically universal and biologically consistent astrocytoma genotype encodes for transformation and predicts survival phenotype | journal = APL Bioengineering | volume = 2 | issue = 3 | pages = 031909 | date = September 2018 | pmid = 30397684 | pmc = 6215493 | doi = 10.1063/1.5037882 }}{{cite journal | vauthors = Ponnapalli SP, Bradley MW, Devine K, Bowen J, Coppens SE, Leraas KM, Milash BA, Li F, Luo H, Qiu S, Wu K, Yang H, Wittwer CT, Palmer CA, Jensen RL, Gastier-Foster JM, Hanson HA, Barnholtz-Sloan JS, Alter O | title = Retrospective Clinical Trial Experimentally Validates Glioblastoma Genome-Wide Pattern of DNA Copy-Number Alterations Predictor of Survival | journal = APL Bioengineering | volume = 4 | issue = 2 | pages = 026106 | date = May 2020 | doi = 10.1063/1.5142559 | pmid = 32478280 | pmc = 7229984 | id = [https://www.eurekalert.org/pub_releases/2020-05/uouh-gpf051320.php Press Release] | doi-access = free }}

These applications inspired several additional comparative spectral decompositions, namely the higher-order GSVD (HO GSVD) and the tensor GSVD.

The GSVD has also been used to estimate the spectral decompositions of linear operators when the eigenfunctions are parameterized with a linear model, i.e. a reproducing kernel Hilbert space.{{Cite arXiv|last1=Cabannes|first1=Vivien|last2=Pillaud-Vivien|first2=Loucas|last3=Bach|first3=Francis|last4=Rudi|first4=Alessandro|date=2021|title=Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning|class=stat.ML|eprint=2009.04324}}

== Second version: weighted single-matrix decomposition ==

The weighted version of the generalized singular value decomposition (GSVD) is a constrained matrix decomposition with constraints imposed on the left and right singular vectors of the singular value decomposition.{{cite book | vauthors = Jolliffe IT | title = Principal Component Analysis | url = https://archive.org/details/principalcompone00joll_0 | series = Springer Series in Statistics | edition = 2nd | publisher = Springer | location = NY | date = 2002 | isbn = 978-0-387-95442-4 | url-access = registration }}

{{Cite book | last = Greenacre | first = Michael | name-list-style = vanc | publisher = Academic Press | location = London | year = 1983 | title = Theory and Applications of Correspondence Analysis | isbn = 978-0-12-299050-2 }}{{Cite journal| vauthors = Abdi H, Williams LJ |year = 2010 | title = Principal component analysis. | journal = Wiley Interdisciplinary Reviews: Computational Statistics | volume = 2 |issue=4 | pages = 433–459 | doi=10.1002/wics.101|s2cid = 122379222 }} This form of the GSVD is an extension of the SVD as such. Given the SVD of an m×n real or complex matrix M

:M = U\Sigma V^* \,

where

:U^* W_u U = V^* W_v V = I.

Here I is the identity matrix, and U and V are orthonormal with respect to their respective constraint matrices W_u and W_v. Additionally, W_u and W_v are positive definite matrices (often diagonal matrices of weights). This form of the GSVD is the core of certain techniques, such as generalized principal component analysis and correspondence analysis.

The weighted form of the GSVD is so called because, with the correct selection of weights, it generalizes many techniques (such as multidimensional scaling and linear discriminant analysis).{{cite book | vauthors = Abdi H | date = 2007 | chapter = Singular Value Decomposition (SVD) and Generalized Singular Value Decomposition (GSVD). | veditors = Salkind NJ | title = Encyclopedia of Measurement and Statistics. | url = https://archive.org/details/encyclopediameas00salk | url-access = limited | location = Thousand Oaks (CA) | publisher = Sage | pages = [https://archive.org/details/encyclopediameas00salk/page/n939 907]–912 }}
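
With positive diagonal weight matrices W_u and W_v, this weighted GSVD can be obtained from an ordinary SVD of the rescaled matrix W_u^{1/2} M W_v^{1/2}. The following Python sketch is an illustration of this standard rescaling argument, not code from the cited references; the variable names are arbitrary.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
M = rng.standard_normal((m, n))
wu = rng.uniform(0.5, 2.0, size=m)      # diagonal of the row-weight matrix W_u
wv = rng.uniform(0.5, 2.0, size=n)      # diagonal of the column-weight matrix W_v

# SVD of the rescaled matrix W_u^{1/2} M W_v^{1/2}.
Mt = np.sqrt(wu)[:, None] * M * np.sqrt(wv)[None, :]
Ut, s, Vth = np.linalg.svd(Mt, full_matrices=False)

# Undo the rescaling to obtain the weighted GSVD factors.
U = Ut / np.sqrt(wu)[:, None]
V = Vth.conj().T / np.sqrt(wv)[:, None]

assert np.allclose(M, U @ np.diag(s) @ V.conj().T)
assert np.allclose(U.conj().T @ np.diag(wu) @ U, np.eye(len(s)))
assert np.allclose(V.conj().T @ np.diag(wv) @ V, np.eye(len(s)))
</syntaxhighlight>

The final assertions verify M = U \Sigma V^* together with the constraints U^* W_u U = V^* W_v V = I.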

== References ==

{{Reflist|refs=

{{cite journal | vauthors = Bradley MW, Aiello KA, Ponnapalli SP, Hanson HA, Alter O | title = GSVD- and tensor GSVD-uncovered patterns of DNA copy-number alterations predict adenocarcinomas survival in general and in response to platinum | journal = APL Bioengineering | volume = 3 | issue = 3 | pages = 036104 | date = September 2019 | pmid = 31463421 | pmc = 6701977 | doi = 10.1063/1.5099268 | id = [https://alterlab.org/publications/Bradley_et_al_APL_Bioeng_2019_Supplementary_Material.pdf Supplementary Material] }}

{{cite journal | vauthors = Sankaranarayanan P, Schomay TE, Aiello KA, Alter O | title = Tensor GSVD of patient- and platform-matched tumor and normal DNA copy-number profiles uncovers chromosome arm-wide patterns of tumor-exclusive platform-consistent alterations encoding for cell transformation and predicting ovarian cancer survival | journal = PLOS ONE| volume = 10 | issue = 4 | pages = e0121396 | date = April 2015 | pmid = 25875127 | pmc = 4398562 | doi = 10.1371/journal.pone.0121396 | bibcode = 2015PLoSO..1021396S | doi-access = free }}

{{cite journal | last= Van Loan | first = Charles F. | name-list-style = vanc | year = 1976 | title = Generalizing the Singular Value Decomposition | journal = SIAM J. Numer. Anal. | volume = 13 | issue = 1| pages = 76–83 | doi = 10.1137/0713009 | bibcode = 1976SJNA...13...76V }}

{{cite journal | last1 = Paige | first1 = C. C. | last2 = Saunders | first2 = M. A. | name-list-style = vanc | year = 1981 | title = Towards a Generalized Singular Value Decomposition | journal = SIAM J. Numer. Anal. | volume = 18 | issue = 3| pages = 398–405| doi = 10.1137/0718026 | bibcode = 1981SJNA...18..398P }}

{{cite journal | vauthors = Ponnapalli SP, Saunders MA, Van Loan CF, Alter O | title = A higher-order generalized singular value decomposition for comparison of global mRNA expression from multiple organisms | journal = PLOS ONE| volume = 6 | issue = 12 | pages = e28072 | date = December 2011 | pmid = 22216090 | pmc = 3245232 | doi = 10.1371/journal.pone.0028072 | bibcode = 2011PLoSO...628072P | doi-access = free }}

}}

== Further reading ==

{{refbegin}}

  • {{Cite book | last1 = Golub | first1 = Gene | last2 = Van Loan | first2 = Charles | name-list-style = vanc | publisher = Johns Hopkins University Press | location = Baltimore | year = 1996 | title = Matrix Computations | edition = Third | isbn = 0-8018-5414-8 }}
  • LAPACK manual [http://www.netlib.org/lapack/lug/node36.html]

{{refend}}

[[Category:Linear algebra]]

[[Category:Singular value decomposition]]