Pseudo-determinant

In linear algebra and statistics, the pseudo-determinant{{cite web | author = Minka, T.P. | title = Inferring a Gaussian Distribution | url = http://research.microsoft.com/en-us/um/people/minka/papers/gaussian.html | year = 2001}} [http://research.microsoft.com/en-us/um/people/minka/papers/minka-gaussian.pdf PDF] is the product of all non-zero eigenvalues of a square matrix. It coincides with the regular determinant when the matrix is non-singular.

Definition

The pseudo-determinant of a square n-by-n matrix A may be defined as:

:|\mathbf{A}|_+ = \lim_{\alpha\to 0} \frac{|\mathbf{A} + \alpha \mathbf{I}|}{\alpha^{n-\operatorname{rank}(\mathbf{A})}}

where |A| denotes the usual determinant, I denotes the identity matrix and rank(A) denotes the matrix rank of A.{{cite book |first=Ionut |last=Florescu |title=Probability and Stochastic Processes |publisher=Wiley |year=2014 |page=529 |isbn=978-0-470-62455-5 |url=https://books.google.com/books?id=MaXCBwAAQBAJ&pg=PA529 }}
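As a concrete check (not from the cited sources), the limit definition can be approximated numerically and compared with the product of the non-zero eigenvalues; the sketch below assumes NumPy and an illustrative singular matrix.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative singular 3x3 matrix with eigenvalues 3, 2 and 0.
A = np.diag([3.0, 2.0, 0.0])

n = A.shape[0]
rank = np.linalg.matrix_rank(A)

# Approximate the limit definition with a small positive alpha.
alpha = 1e-9
pdet_limit = np.linalg.det(A + alpha * np.eye(n)) / alpha ** (n - rank)

# Product of the non-zero eigenvalues for comparison.
eigvals = np.linalg.eigvals(A)
pdet_eig = np.prod(eigvals[np.abs(eigvals) > 1e-12])

print(pdet_limit, pdet_eig)  # both are approximately 6.0
</syntaxhighlight>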

Definition of pseudo-determinant using Vahlen matrix

The Vahlen matrix of a conformal transformation, i.e. the Möbius transformation x \mapsto (ax + b)(cx + d)^{-1} with a, b, c, d \in \mathcal{G}(p, q), is defined as [f] = \begin{bmatrix}a & b \\c & d \end{bmatrix}. The pseudo-determinant of the Vahlen matrix of the conformal transformation is then defined as

: \operatorname{pdet} \begin{bmatrix}a & b\\ c& d\end{bmatrix} = ad^\dagger - bc^\dagger.

If \operatorname{pdet}[f] > 0, the transformation is sense-preserving (a rotation), whereas if \operatorname{pdet}[f] < 0, the transformation is sense-reversing (a reflection).
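As a trivial consistency check (not from the source), the identity map x \mapsto x = (1\cdot x + 0)(0\cdot x + 1)^{-1} has Vahlen matrix \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix}, for which

: \operatorname{pdet} \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} = 1\cdot 1^\dagger - 0\cdot 0^\dagger = 1 > 0,

consistent with the identity map being sense-preserving.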

Computation for positive semi-definite case

If A is positive semi-definite, then the singular values and eigenvalues of A coincide. In this case, if the singular value decomposition (SVD) is available, then |\mathbf{A}|_+ may be computed as the product of the non-zero singular values. If all singular values are zero, then the pseudo-determinant is 1.
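A minimal sketch of this computation, assuming NumPy and an illustrative positive semi-definite matrix, is:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative rank-2 positive semi-definite 3x3 matrix.
P = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
A = P @ P.T

# Pseudo-determinant as the product of the non-zero singular values.
s = np.linalg.svd(A, compute_uv=False)
tol = max(A.shape) * np.finfo(A.dtype).eps * s.max()
nonzero = s[s > tol]
pdet = nonzero.prod() if nonzero.size else 1.0  # empty product convention: 1

print(pdet)  # approximately 9.0
</syntaxhighlight>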

Supposing \operatorname{rank}(A) = k , so that k is the number of non-zero singular values, we may write A = PP^\dagger where P is some n-by-k matrix and the dagger is the conjugate transpose. The singular values of A are the squares of the singular values of P and thus we have |A|_+ = \left|P^\dagger P\right|, where \left|P^\dagger P\right| is the usual determinant in k dimensions. Further, if P is written as the block column P = \left(\begin{smallmatrix} C \\ D \end{smallmatrix}\right), then it holds, for any heights of the blocks C and D, that |A|_+ = \left|C^\dagger C + D^\dagger D\right|.
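Continuing with a factor of the same form (an illustrative numerical check, not from the source), the identities |A|_+ = \left|P^\dagger P\right| = \left|C^\dagger C + D^\dagger D\right| can be verified directly:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative n-by-k factor P, so that A = P P^T has rank k = 2.
P = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
A = P @ P.T

# |A|_+ as the k-dimensional determinant |P^T P|.
pdet_factor = np.linalg.det(P.T @ P)

# Split P into an upper block C and a lower block D; any split height works.
C, D = P[:1, :], P[1:, :]
pdet_blocks = np.linalg.det(C.T @ C + D.T @ D)

print(pdet_factor, pdet_blocks)  # both equal 9.0
</syntaxhighlight>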

Application in statistics

If a statistical procedure ordinarily compares distributions in terms of the determinants of variance-covariance matrices, then, in the case of singular matrices, this comparison can be undertaken by using a combination of the ranks of the matrices and their pseudo-determinants, with the matrix of higher rank being counted as "largest" and the pseudo-determinants only being used if the ranks are equal.[http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/viewer.htm#statug_rreg_sect021.htm SAS documentation on "Robust Distance"] Thus pseudo-determinants are sometimes presented in the output of statistical programs in cases where covariance matrices are singular.Bohling, Geoffrey C. (1997) "GSLIB-style programs for discriminant analysis and regionalized classification", Computers & Geosciences, 23 (7), 739–761 {{doi|10.1016/S0098-3004(97)00050-2}} In particular, the normalization for a multivariate normal distribution with a covariance matrix {{math|Σ}} that is not necessarily nonsingular can be written as

:\frac{1}{\sqrt{(2\pi)^{\operatorname{rank}(\mathbf\Sigma)}|\mathbf\Sigma|_+}} = \frac{1}{\sqrt{|2\pi\mathbf\Sigma|_+}}\,.
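As an illustration (not from the cited sources), this normalizing constant can be computed from the rank and pseudo-determinant of the covariance matrix; the sketch below assumes NumPy and an arbitrary singular covariance matrix.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative singular covariance matrix: rank 2 in 3 dimensions.
Sigma = np.diag([2.0, 3.0, 0.0])

rank = np.linalg.matrix_rank(Sigma)
eigvals = np.linalg.eigvalsh(Sigma)
pdet = np.prod(eigvals[eigvals > 1e-12])

# Normalizing constant 1 / sqrt((2*pi)^rank * pdet(Sigma)).
norm_const = 1.0 / np.sqrt((2 * np.pi) ** rank * pdet)
print(norm_const)  # approximately 0.0650
</syntaxhighlight>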

See also

References