L1-norm principal component analysis

L1-norm principal component analysis (L1-PCA) is a general method for multivariate data analysis.{{cite journal|last1=Markopoulos|first1=Panos P.|last2=Karystinos|first2=George N.|last3=Pados|first3=Dimitris A.|title=Optimal Algorithms for L1-subspace Signal Processing|journal=IEEE Transactions on Signal Processing|date=October 2014|volume=62|issue=19|pages=5046–5058|doi=10.1109/TSP.2014.2338077|arxiv=1405.6785|bibcode=2014ITSP...62.5046M|s2cid=1494171}}

L1-PCA is often preferred over standard L2-norm principal component analysis (PCA) when the analyzed data may contain outliers (faulty values or corruptions), because L1-PCA is believed to be more robust to such points than standard PCA.{{cite journal|last1=Barrodale|first1=I.|title=L1 Approximation and the Analysis of Data|journal=Applied Statistics|date=1968|volume=17|issue=1|pages=51–57|doi=10.2307/2985267|jstor=2985267}}{{cite book|last1=Barnett|first1=Vic|last2=Lewis|first2=Toby|title=Outliers in statistical data|date=1994|publisher=Wiley|location=Chichester [u.a.]|isbn=978-0471930945|edition=3.}}{{cite book|last1=Kanade|first1=T.|last2=Ke|first2=Qifa|title=2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) |chapter=Robust L₁ Norm Factorization in the Presence of Outliers and Missing Data by Alternative Convex Programming |volume=1|pages=739–746|date=June 2005|doi=10.1109/CVPR.2005.309|publisher=IEEE|isbn=978-0-7695-2372-9|citeseerx=10.1.1.63.4605|s2cid=17144854}}

Both L1-PCA and standard PCA seek a collection of orthogonal directions (principal components) that define a subspace wherein data representation is maximized according to the selected criterion.{{cite book|last1=Jolliffe|first1=I.T.|title=Principal component analysis|date=2004|publisher=Springer|location=New York|isbn=978-0387954424|edition=2nd|url-access=registration|url=https://archive.org/details/principalcompone00joll_0}}{{cite book|last1=Bishop|first1=Christopher M.|title=Pattern recognition and machine learning|date=2007|publisher=Springer|location=New York|isbn=978-0-387-31073-2|edition=Corr. printing.}}{{cite journal|last1=Pearson|first1=Karl|title=On Lines and Planes of Closest Fit to Systems of Points in Space|journal=The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science|date=8 June 2010|volume=2|issue=11|pages=559–572|doi=10.1080/14786440109462720|s2cid=125037489 |url=https://zenodo.org/record/1430636}}

Standard PCA quantifies data representation as the aggregate of the squared L2-norms of the data-point projections onto the subspace; maximizing this quantity is equivalent to minimizing the aggregate squared Euclidean distance of the original points from their subspace-projected representations.

L1-PCA instead quantifies data representation as the aggregate of the L1-norms of the data-point projections onto the subspace.{{cite journal|last1=Markopoulos|first1=Panos P.|last2=Kundu|first2=Sandipan|last3=Chamadia|first3=Shubham|last4=Pados|first4=Dimitris A.|title=Efficient L1-Norm Principal-Component Analysis via Bit Flipping|journal=IEEE Transactions on Signal Processing|date=15 August 2017|volume=65|issue=16|pages=4252–4264|doi=10.1109/TSP.2017.2708023|arxiv=1610.01959|bibcode=2017ITSP...65.4252M|s2cid=7931130}} In both PCA and L1-PCA, the number of principal components (PCs) is lower than the rank of the analyzed matrix, which coincides with the dimensionality of the space spanned by the original data points.

Therefore, PCA and L1-PCA are commonly employed for dimensionality reduction, with the aim of data denoising or compression.

Among the advantages of standard PCA that contributed to its high popularity are low-cost computational implementation by means of singular-value decomposition (SVD){{cite journal|last1=Golub|first1=Gene H.|title=Some Modified Matrix Eigenvalue Problems|journal=SIAM Review|date=April 1973|volume=15|issue=2|pages=318–334|doi=10.1137/1015032|citeseerx=10.1.1.454.9868}} and statistical optimality when the data set is generated by a true multivariate normal data source.

However, in modern big data sets, data often include corrupted, faulty points, commonly referred to as outliers.{{cite book|last1=Barnett|first1=Vic|last2=Lewis|first2=Toby|title=Outliers in statistical data|date=1994|publisher=Wiley|location=Chichester [u.a.]|isbn=978-0471930945|edition=3.}}

Standard PCA is known to be sensitive to outliers, even when they appear as a small fraction of the processed data.{{cite journal|last1=Candès|first1=Emmanuel J.|last2=Li|first2=Xiaodong|last3=Ma|first3=Yi|last4=Wright|first4=John|title=Robust principal component analysis?|journal=Journal of the ACM|date=1 May 2011|volume=58|issue=3|pages=1–37|doi=10.1145/1970392.1970395|arxiv=0912.3599|s2cid=7128002}}

The reason is that the squared (L2-norm) formulation of standard PCA places quadratic emphasis on the magnitude of each coordinate of each data point, ultimately overemphasizing peripheral points, such as outliers.

On the other hand, following an L1-norm formulation, L1-PCA places linear emphasis on the coordinates of each data point, which limits the influence of outliers.{{cite journal|last1=Kwak|first1=N.|title=Principal Component Analysis Based on L1-Norm Maximization|journal=IEEE Transactions on Pattern Analysis and Machine Intelligence|date=September 2008|volume=30|issue=9|pages=1672–1680|doi=10.1109/TPAMI.2008.114|pmid=18617723|citeseerx=10.1.1.333.1176|s2cid=11882870}}

Formulation

Consider any matrix \mathbf X = [\mathbf x_1, \mathbf x_2, \ldots, \mathbf x_N] \in \mathbb R^{D \times N} consisting of N D-dimensional data points. Define r=rank(\mathbf X). For integer K such that 1 \leq K < r, L1-PCA is formulated as:

{{NumBlk|:|

\begin{align}

&\underset{\mathbf Q=[\mathbf q_1, \mathbf q_2, \ldots, \mathbf q_K] \in \mathbb R^{D \times K}}{\max}~~\| \mathbf X^\top \mathbf Q\|_1\\

&\text{subject to}~~ \mathbf Q^\top \mathbf Q=\mathbf I_K.

\end{align}

|{{EquationRef|1}}}}

For K=1, ({{EquationNote|1}}) simplifies to finding the L1-norm principal component (L1-PC) of \mathbf X by

{{NumBlk|:|

\begin{align}

&\underset{\mathbf q \in \mathbb R^{D \times 1}}{\max}~~\| \mathbf X^\top \mathbf q\|_1\\

&\text{subject to}~~ \| \mathbf q\|_2 =1.

\end{align}

|{{EquationRef|2}}}}

In ({{EquationNote|1}})-({{EquationNote|2}}), the L1-norm \| \cdot \|_1 returns the sum of the absolute entries of its argument and the L2-norm \| \cdot \|_2 returns the square root of the sum of the squared entries of its argument. If one substitutes \| \cdot \|_1 in ({{EquationNote|1}}) by the Frobenius/L2-norm \| \cdot \|_F, then the problem becomes standard PCA and it is solved by the matrix \mathbf Q that contains the K dominant left singular vectors of \mathbf X (i.e., the left singular vectors that correspond to the K highest singular values).
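
As a minimal illustration of the preceding remark, the following Python (NumPy) sketch, with arbitrary toy sizes and a random data matrix as placeholders, forms the standard-PCA solution from the K dominant left singular vectors of \mathbf X and evaluates the Frobenius metric that it maximizes:

<syntaxhighlight lang="python">
import numpy as np

# Toy sizes and a random data matrix, used only for illustration.
rng = np.random.default_rng(0)
D, N, K = 5, 100, 2
X = rng.standard_normal((D, N))

# Standard PCA: K dominant left singular vectors of X
# (numpy returns the singular values in descending order).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Q_pca = U[:, :K]

# The columns of Q_pca are orthonormal, as the constraint requires,
# and ||X^T Q_pca||_F is the metric that standard PCA maximizes.
assert np.allclose(Q_pca.T @ Q_pca, np.eye(K))
frobenius_metric = np.linalg.norm(X.T @ Q_pca)
print(frobenius_metric)
</syntaxhighlight>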

The maximization metric in ({{EquationNote|1}}) can be expanded as

{{NumBlk|:|

\| \mathbf X^\top \mathbf Q\|_1=\sum_{k=1}^K \sum_{n=1}^N |\mathbf x_n^\top \mathbf q_k|.

|{{EquationRef|3}}}}
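
For example, the identity in ({{EquationNote|3}}) can be verified numerically with the short Python (NumPy) sketch below; the toy dimensions and the random orthonormal \mathbf Q are placeholder choices:

<syntaxhighlight lang="python">
import numpy as np

# Toy sizes, a random data matrix, and a random orthonormal Q (placeholders).
rng = np.random.default_rng(0)
D, N, K = 5, 8, 2
X = rng.standard_normal((D, N))
Q, _ = np.linalg.qr(rng.standard_normal((D, K)))   # D x K with orthonormal columns

# ||X^T Q||_1 as the entrywise L1-norm of the projection coefficients ...
objective = np.abs(X.T @ Q).sum()
# ... equals the double sum of |x_n^T q_k| in (3).
double_sum = sum(abs(X[:, n] @ Q[:, k]) for n in range(N) for k in range(K))
assert np.isclose(objective, double_sum)
</syntaxhighlight>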

Solution

For any matrix \mathbf A \in \mathbb R^{m \times n} with m \geq n, define \Phi(\mathbf A) as the nearest (in the Frobenius-norm sense) matrix to \mathbf A that has orthonormal columns. That is, define

{{NumBlk|:|

\begin{align}

\Phi(\mathbf A) = & \underset{\mathbf Q \in \mathbb R^{m \times n}}{\text{argmin}}~~\| \mathbf A - \mathbf Q\|_F\\

&\text{subject to}~~ \mathbf Q^\top \mathbf Q=\mathbf I_n.

\end{align}

|{{EquationRef|4}}}}

Procrustes Theorem{{cite journal|last1=Eldén|first1=Lars|last2=Park|first2=Haesun|title=A Procrustes problem on the Stiefel manifold|journal=Numerische Mathematik|date=1 June 1999|volume=82|issue=4|pages=599–619|doi=10.1007/s002110050432|citeseerx=10.1.1.54.3580|s2cid=206895591}}{{cite journal|last1=Schönemann|first1=Peter H.|title=A generalized solution of the orthogonal procrustes problem|journal=Psychometrika|date=March 1966|volume=31|issue=1|pages=1–10|doi=10.1007/BF02289451|hdl=10338.dmlcz/103138|s2cid=121676935|hdl-access=free}} states that if \mathbf A has SVD \mathbf U_{m \times n} \boldsymbol \Sigma_{n \times n} \mathbf V_{n \times n}^\top, then

\Phi(\mathbf A)=\mathbf U \mathbf V^\top .
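
A minimal Python (NumPy) sketch of \Phi(\cdot), computed exactly as above via the thin SVD, could look as follows (the function name phi is an illustrative choice):

<syntaxhighlight lang="python">
import numpy as np

def phi(A):
    """Nearest matrix with orthonormal columns to A (equation (4)),
    via the Procrustes solution Phi(A) = U V^T, where A = U S V^T is the thin SVD."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

# Quick check on a random tall matrix: phi(A) has orthonormal columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
Q = phi(A)
assert np.allclose(Q.T @ Q, np.eye(3))
</syntaxhighlight>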

Markopoulos, Karystinos, and Pados showed that, if \mathbf B_{\text{BNM}} is the exact solution to the binary nuclear-norm maximization (BNM) problem

{{NumBlk|:|

\begin{align}

\underset{\mathbf B \in \{ \pm 1\}^{N \times K}}{\text{max}}~~\| \mathbf X \mathbf B\|_*^2,

\end{align}

|{{EquationRef|5}}}}

then

{{NumBlk|:|

\begin{align}

\mathbf Q_{\text{L1}} = \Phi(\mathbf X\mathbf B_{\text{BNM}})

\end{align}

|{{EquationRef|6}}}}

is the exact solution to L1-PCA in ({{EquationNote|1}}). The nuclear norm \| \cdot \|_* in ({{EquationNote|5}}) returns the sum of the singular values of its matrix argument and can be calculated by means of standard SVD. Moreover, it holds that, given the solution to L1-PCA, \mathbf Q_{\text{L1}}, the solution to BNM can be obtained as

{{NumBlk|:|

\begin{align}

\mathbf B_{\text{BNM}} = \text{sgn}(\mathbf X^\top \mathbf Q_{\text{L1}})

\end{align}

|{{EquationRef|7}}}}

where \text{sgn}(\cdot) returns the \{\pm 1\}-sign matrix of its matrix argument (with no loss of generality, we can consider \text{sgn}(0)=1). In addition, it follows that \| \mathbf X^\top \mathbf Q_{\text{L1}}\|_1 = \| \mathbf X \mathbf B_{\text{BNM}}\|_*. BNM in ({{EquationNote|5}}) is a combinatorial problem over antipodal binary variables, so its exact solution can be found through exhaustive evaluation of all 2^{NK} elements of its feasible set, with asymptotic cost \mathcal O(2^{NK}). Accordingly, L1-PCA can also be solved, through BNM, with cost \mathcal O(2^{NK}) (exponential in the product of the number of data points and the number of sought-after components). It turns out that L1-PCA can also be solved optimally (exactly) with complexity polynomial in N for fixed data dimension D, namely \mathcal{O}(N^{rK-K+1}).

For the special case of K=1 (single L1-PC of \mathbf X), BNM takes the binary-quadratic-maximization (BQM) form

{{NumBlk|:|

\begin{align}

& \underset{\mathbf b \in \{ \pm 1\}^{N \times 1}}{\text{max}}~~ \mathbf b^\top \mathbf X^\top \mathbf X \mathbf b.

\end{align}

|{{EquationRef|8}}}}

The transition from ({{EquationNote|5}}) to ({{EquationNote|8}}) for K=1 holds true, since the unique singular value of \mathbf X \mathbf b is equal to \| \mathbf X \mathbf b\|_2 = \sqrt{\mathbf b^\top \mathbf X^\top \mathbf X \mathbf b}, for every \mathbf b . Then, if \mathbf b_{\text{BNM}} is the solution to BQM in ({{EquationNote|8}}), it holds that

{{NumBlk|:|

\begin{align}

\mathbf q_{\text{L1}} = \Phi(\mathbf X \mathbf b_{\text{BNM}}) = \frac{\mathbf X \mathbf b_{\text{BNM}}}{\| \mathbf X \mathbf b_{\text{BNM}}\|_2}

\end{align}

|{{EquationRef|9}}}}

is the exact L1-PC of \mathbf X, as defined in ({{EquationNote|2}}). In addition, it holds that \mathbf b_{\text{BNM}} = \text{sgn}(\mathbf X^\top \mathbf q_{\text{L1}}) and \| \mathbf X^\top \mathbf q_{\text{L1}}\|_1 = \| \mathbf X \mathbf b_{\text{BNM}}\|_2.
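
For very small N, the single L1-PC can therefore be computed exactly by brute force over ({{EquationNote|8}}), as in the following Python (NumPy) sketch; the function name is an illustrative placeholder and the search is exponential in N:

<syntaxhighlight lang="python">
import numpy as np
from itertools import product

def l1_pc_exhaustive(X):
    """Exact single L1-PC of X via exhaustive search over the antipodal
    binary vectors of the BQM in (8); exponential in N, so only for toy sizes."""
    D, N = X.shape
    S = X.T @ X
    best_val, best_b = -np.inf, None
    for bits in product((-1.0, 1.0), repeat=N):
        b = np.array(bits)
        val = b @ S @ b                    # b^T X^T X b
        if val > best_val:
            best_val, best_b = val, b
    v = X @ best_b                         # q_L1 = X b / ||X b||_2, as in (9)
    return v / np.linalg.norm(v)
</syntaxhighlight>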

Algorithms

= Exact solution of exponential complexity =

As shown above, the exact solution to L1-PCA can be obtained by the following two-step process:

1. Solve the problem in ({{EquationNote|5}}) to obtain \mathbf B_{\text{BNM}}.

2. Apply SVD on \mathbf X\mathbf B_{\text{BNM}} to obtain \mathbf Q_{\text{L1}}.

BNM in ({{EquationNote|5}}) can be solved by exhaustive search over the domain of \mathbf B with cost \mathcal{O}(2^{NK}).
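
A brute-force Python (NumPy) sketch of this two-step process, intended only for very small N and K, could read as follows (the function name is an illustrative assumption):

<syntaxhighlight lang="python">
import numpy as np
from itertools import product

def l1_pca_exhaustive(X, K):
    """Exact L1-PCA by the two-step process above: exhaustive search for the
    BNM solution in (5), then the Procrustes step in (6).  Cost grows as
    2^(N*K), so this is a reference sketch for very small N and K only."""
    D, N = X.shape
    best_val, best_B = -np.inf, None
    for bits in product((-1.0, 1.0), repeat=N * K):
        B = np.array(bits).reshape(N, K)
        val = np.linalg.norm(X @ B, ord='nuc')   # nuclear norm ||X B||_*
        if val > best_val:
            best_val, best_B = val, B
    U, _, Vt = np.linalg.svd(X @ best_B, full_matrices=False)
    return U @ Vt                                # Q_L1 = Phi(X B_BNM)
</syntaxhighlight>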

= Exact solution of polynomial complexity =

Also, L1-PCA can be solved optimally with cost \mathcal{O}(N^{rK-K+1}), when r=rank(\mathbf X) is constant with respect to N (always true for finite data dimension D).{{cite book|last1=Markopoulos|first1=PP|last2=Kundu|first2=S|last3=Chamadia|first3=S|last4=Tsagkarakis|first4=N|last5=Pados|first5=DA|title=Advances in Principal Component Analysis |chapter=Outlier-Resistant Data Processing with L1-Norm Principal Component Analysis |date=2018|publisher=Springer|location=Singapore|pages=121–135|doi=10.1007/978-981-10-6704-4_6|isbn=978-981-10-6703-7}}

= Approximate efficient solvers =

In 2008, Kwak proposed an iterative algorithm for the approximate solution of L1-PCA for K=1. This iterative method was later generalized for K>1 components.{{cite journal|last1=Nie|first1=F|last2=Huang|first2=H|last3=Ding|first3=C|last4=Luo|first4=Dijun|last5=Wang|first5=H|title=Robust principal component analysis with non-greedy l1-norm maximization|journal=22nd International Joint Conference on Artificial Intelligence|date=July 2011|pages=1433–1438}} Another approximate efficient solver was proposed by McCoy and Tropp{{cite journal|last1=McCoy|first1=Michael|last2=Tropp|first2=Joel A.|date=2011|title=Two proposals for robust PCA using semidefinite programming|journal=Electronic Journal of Statistics|volume=5|pages=1123–1160|doi=10.1214/11-EJS636|arxiv=1012.1086|s2cid=14102421}} by means of semi-definite programming (SDP). Most recently, L1-PCA (and BNM in ({{EquationNote|5}})) were solved efficiently by means of bit-flipping iterations (L1-BF algorithm).

== L1-BF algorithm ==

 function L1BF(\mathbf X, K):
     Initialize \mathbf B^{(0)} \in \{\pm 1\}^{N \times K} and \mathcal L \leftarrow \{1,2,\ldots, NK\}
     Set t \leftarrow 0 and \omega \leftarrow \| \mathbf X \mathbf B^{(0)} \|_*
     Until termination (or T iterations)
         \mathbf B \leftarrow \mathbf B^{(t)}, t' \leftarrow t
         For x \in \mathcal L
             k \leftarrow \lceil \frac{x}{N} \rceil, n \leftarrow x-N(k-1)
             [\mathbf B]_{n,k} \leftarrow - [\mathbf B]_{n,k}   // flip bit
             a(n,k) \leftarrow \| \mathbf X \mathbf B \|_*   // calculated by SVD (or faster incremental updates)
             if a(n,k) > \omega
                 \mathbf B^{(t)} \leftarrow \mathbf B, t' \leftarrow t+1
                 \omega \leftarrow a(n,k)
             end
         if t' = t   // no bit was flipped
             if \mathcal L = \{1,2, \ldots, NK\}
                 terminate
             else
                 \mathcal L \leftarrow \{1,2, \ldots, NK\}

The computational cost of L1-BF is \mathcal O(ND\min\{N,D\} + N^2K^2(K^2 + r)).
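
A simplified greedy bit-flipping sketch in Python (NumPy), inspired by (but not identical to) the pseudocode above, is given below; the function name, the random initialization, and the sweep schedule are illustrative simplifications rather than the published L1-BF implementation:

<syntaxhighlight lang="python">
import numpy as np

def l1_bf_sketch(X, K, max_sweeps=100, seed=0):
    """Greedy bit-flipping sketch in the spirit of the L1-BF pseudocode above:
    sweep over the entries of B, keep any single flip that increases ||X B||_*,
    and stop when a full sweep yields no improvement."""
    N = X.shape[1]
    rng = np.random.default_rng(seed)
    B = rng.choice([-1.0, 1.0], size=(N, K))           # random antipodal initialization
    best = np.linalg.norm(X @ B, ord='nuc')
    for _ in range(max_sweeps):
        improved = False
        for n in range(N):
            for k in range(K):
                B[n, k] = -B[n, k]                      # tentatively flip one bit
                val = np.linalg.norm(X @ B, ord='nuc')
                if val > best:
                    best, improved = val, True          # keep the flip
                else:
                    B[n, k] = -B[n, k]                  # undo the flip
        if not improved:                                # no bit improves the metric
            break
    U, _, Vt = np.linalg.svd(X @ B, full_matrices=False)
    return U @ Vt                                       # Q = Phi(X B)
</syntaxhighlight>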

Complex data

L1-PCA has also been generalized to process complex data. For complex L1-PCA, two efficient algorithms were proposed in 2018.{{cite journal|last1=Tsagkarakis|first1=Nicholas|last2=Markopoulos|first2=Panos P.|last3=Sklivanitis|first3=George|last4=Pados|first4=Dimitris A.|title=L1-Norm Principal-Component Analysis of Complex Data|journal=IEEE Transactions on Signal Processing|date=15 June 2018|volume=66|issue=12|pages=3256–3267|doi=10.1109/TSP.2018.2821641|arxiv=1708.01249|bibcode=2018ITSP...66.3256T|s2cid=21011653}}

Tensor data

L1-PCA has also been extended for the analysis of tensor data, in the form of L1-Tucker, the L1-norm-based, outlier-robust analog of standard Tucker decomposition.{{cite journal|last1=Chachlakis|first1=Dimitris G.|last2=Prater-Bennette|first2=Ashley|last3=Markopoulos|first3=Panos P.|title=L1-norm Tucker Tensor Decomposition|journal=IEEE Access|date=22 November 2019|volume=7|pages=178454–178465|doi=10.1109/ACCESS.2019.2955134|arxiv=1904.06455|doi-access=free}} Two algorithms for the solution of L1-Tucker are L1-HOSVD and L1-HOOI.{{cite book|last1=Markopoulos|first1=Panos P.|last2=Chachlakis|first2=Dimitris G.|last3=Prater-Bennette|first3=Ashley|title=2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP) |chapter=L1-Norm Higher-Order Singular-Value Decomposition |date=21 February 2019|pages=1353–1357|doi=10.1109/GlobalSIP.2018.8646385|isbn=978-1-7281-1295-4|s2cid=67874182}}{{cite journal|last1=Markopoulos|first1=Panos P.|last2=Chachlakis|first2=Dimitris G.|last3=Papalexakis|first3=Evangelos|title=The Exact Solution to Rank-1 L1-Norm TUCKER2 Decomposition|journal=IEEE Signal Processing Letters|volume=25|issue=4|date=April 2018|pages=511–515|doi=10.1109/LSP.2018.2790901|arxiv=1710.11306|bibcode=2018ISPL...25..511M|s2cid=3693326}}

Code

MATLAB code for L1-PCA is available at MathWorks.{{cite web|title=L1-PCA TOOLBOX|url=https://www.mathworks.com/matlabcentral/fileexchange/64855-l1-pca-toolbox|access-date=May 21, 2018}}

References