Intrinsic dimension
{{Short description|Least variables needed to represent data}}
The intrinsic dimension for a data set can be thought of as the minimal number of variables needed to represent the data set. Similarly, in signal processing of multidimensional signals, the intrinsic dimension of the signal describes how many variables are needed to generate a good approximation of the signal.
When estimating intrinsic dimension, however, a slightly broader definition based on manifold dimension is often used, where a representation in the intrinsic dimension only needs to exist locally. Such intrinsic dimension estimation methods can thus handle data sets with different intrinsic dimensions in different parts of the data set. This is often referred to as local intrinsic dimensionality.
The intrinsic dimension provides a lower bound on the number of dimensions into which a data set can be compressed through dimension reduction, but it can also serve as a measure of the complexity of the data set or signal. For a data set or signal of N variables, its intrinsic dimension M satisfies 0 ≤ M ≤ N, although estimators may yield higher values.
Example
Let <math>f(x_1, x_2)</math> be a two-variable function (or signal) which is of the form
<math>f(x_1, x_2) = g(x_1)</math>
for some one-variable function g which is not constant. This means that f varies, in accordance with g, with the first variable or along the first coordinate. On the other hand, f is constant with respect to the second variable or along the second coordinate. Only the value of the first variable needs to be known in order to determine the value of f. Hence, it is a two-variable function, but its intrinsic dimension is one.
A slightly more complicated example is <math>f(x_1, x_2) = g(x_1 + x_2).</math>
f is still intrinsically one-dimensional, which can be seen by making the variable transformation
<math>y_1 = x_1 + x_2</math> and <math>y_2 = x_1 - x_2,</math>
which gives
<math>f(x_1, x_2) = g(y_1).</math>
Since the variation in f can be described by the single variable <math>y_1</math>, its intrinsic dimension is one.
For the case that f is constant, its intrinsic dimension is zero since no variable is needed to describe variation. For the general case, when the intrinsic dimension of the two-variable function f is neither zero nor one, it is two.
In the literature, functions which are of intrinsic dimension zero, one, or two are sometimes referred to as i0D, i1D or i2D, respectively.
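The examples above can be checked numerically. The following sketch (illustrative only; the choice g = sin and the sample points are arbitrary) verifies that <math>f(x_1, x_2) = g(x_1 + x_2)</math> is constant along the <math>y_2</math> direction and varies only along <math>y_1</math>:

<syntaxhighlight lang="python">
import numpy as np

# f(x1, x2) = g(x1 + x2) for an arbitrary non-constant one-variable g.
g = np.sin
f = lambda x1, x2: g(x1 + x2)

x1, x2, t = 0.3, 1.2, 0.5
# Moving along y2 = x1 - x2, i.e. to (x1 + t, x2 - t), leaves y1 = x1 + x2
# unchanged, so f does not change; moving along y1 changes f.
print(f(x1 + t, x2 - t) - f(x1, x2))  # 0.0: constant along y2
print(f(x1 + t, x2 + t) - f(x1, x2))  # nonzero: f varies along y1
</syntaxhighlight>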
Formal definition for signals
For an N-variable function f, the set of variables can be represented as an N-dimensional vector x:
<math>f = f(\mathbf{x}), \qquad \mathbf{x} = (x_1, x_2, \ldots, x_N)^{\mathrm{T}}.</math>
If for some M-variable function g and M × N matrix A it is the case that
- <math>f(\mathbf{x}) = g(\mathbf{A}\mathbf{x})</math> for all <math>\mathbf{x}</math>;
- M is the smallest number for which the above relation between f and g can be found,
then the intrinsic dimension of f is M.
The intrinsic dimension is a characterization of f; it is not an unambiguous characterization of g or of A. That is, if the above relation is satisfied for some f, g, and A, it must also be satisfied for the same f with g′ and A′ given by
<math>g'(\mathbf{y}) = g(\mathbf{B}\mathbf{y})</math> and <math>\mathbf{A}' = \mathbf{B}^{-1}\mathbf{A},</math>
where B is a non-singular M × M matrix, since
<math>g'\left(\mathbf{A}'\mathbf{x}\right) = g\left(\mathbf{B}\mathbf{A}'\mathbf{x}\right) = g\left(\mathbf{B}\mathbf{B}^{-1}\mathbf{A}\mathbf{x}\right) = g\left(\mathbf{A}\mathbf{x}\right).</math>
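The ambiguity of g and A can be illustrated with a small numeric check (a sketch with arbitrary choices of g, A, and B, here with M = 1 and N = 2):

<syntaxhighlight lang="python">
import numpy as np

# f(x) = g(Ax) with a one-variable g and a 1 x 2 matrix A, so M = 1, N = 2.
g = np.cos
A = np.array([[1.0, 1.0]])

# Any non-singular 1 x 1 matrix B = [b] yields an equivalent pair:
# g'(y) = g(By) and A' = B^{-1} A represent the same function f.
b = 2.5
g_prime = lambda y: g(b * y)
A_prime = A / b

x = np.array([0.7, -1.3])
print(g(A @ x), g_prime(A_prime @ x))  # identical: same f, different (g, A)
</syntaxhighlight>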
The Fourier transform of signals of low intrinsic dimension
An N-variable function which has intrinsic dimension M < N has a characteristic Fourier transform. Intuitively, since this type of function is constant along one or several directions, its Fourier transform must appear like an impulse (the Fourier transform of a constant) along those directions in the frequency domain.
= A simple example =
Let f be a two-variable function which is i1D. This means that there exists a normalized vector <math>\mathbf{n} \in \mathbb{R}^2</math> and a one-variable function g such that
<math>f(\mathbf{x}) = g(\mathbf{x}^{\mathrm{T}}\mathbf{n})</math>
for all <math>\mathbf{x} \in \mathbb{R}^2</math>. If F is the Fourier transform of f (both are two-variable functions) it must be the case that
<math>F(\mathbf{u}) = G(\mathbf{u}^{\mathrm{T}}\mathbf{n}) \cdot \delta(\mathbf{u}^{\mathrm{T}}\mathbf{m}).</math>
Here G is the Fourier transform of g (both are one-variable functions), δ is the Dirac impulse function, and m is a normalized vector in <math>\mathbb{R}^2</math> perpendicular to n. Since <math>\delta(\mathbf{u}^{\mathrm{T}}\mathbf{m})</math> vanishes unless u is perpendicular to m, F vanishes everywhere except on a line which passes through the origin of the frequency domain and is parallel to n. Along this line F varies according to G.
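This spectral concentration can be observed numerically. In the sketch below (illustrative only; the grid size and the signal g are arbitrary, and n is aligned with the first axis so the discrete transform is exact), the 2-D DFT of an i1D signal is non-zero only on a line through the origin parallel to n:

<syntaxhighlight lang="python">
import numpy as np

size = 64
x1, x2 = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
g = lambda t: np.cos(2 * np.pi * 5 * t / size)  # one-variable signal, 5 cycles
f = g(x1)  # i1D with n = (1, 0): f varies along x1, is constant along x2

F = np.fft.fft2(f)
# Non-zero coefficients lie only where u2 = 0, the line parallel to n.
print(np.argwhere(np.abs(F) > 1e-6))  # [[5 0], [59 0]]  (59 = -5 mod 64)
</syntaxhighlight>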
= The general case =
Let f be an N-variable function which has intrinsic dimension M, that is, there exists an M-variable function g and an M × N matrix A such that
<math>f(\mathbf{x}) = g(\mathbf{A}\mathbf{x})</math> for all <math>\mathbf{x}</math>.
Its Fourier transform F can then be described as follows:
- F vanishes everywhere except on a linear subspace of dimension M
- The M-dimensional subspace is spanned by the rows of the matrix A
- In the subspace, F varies according to G, the Fourier transform of g
Generalizations
The type of intrinsic dimension described above assumes that a linear transformation is applied to the coordinates of the N-variable function f to produce the M variables which are necessary to represent every value of f. This means that f is constant along lines, planes, or hyperplanes, depending on N and M.
In a general case, f has intrinsic dimension M if there exist M functions a1, a2, ..., aM and an M-variable function g such that
- <math>f(\mathbf{x}) = g(a_1(\mathbf{x}), a_2(\mathbf{x}), \ldots, a_M(\mathbf{x}))</math> for all <math>\mathbf{x}</math>
- M is the smallest number of functions which allows the above transformation
A simple example is transforming a 2-variable function f to polar coordinates:
- <math>f(x_1, x_2) = g\left(\sqrt{x_1^2 + x_2^2}\right)</math>, f is i1D and is constant along any circle centered at the origin
- <math>f(x_1, x_2) = g(\arctan(x_2 / x_1))</math>, f is i1D and is constant along all rays from the origin
For the general case, a simple description of either the point sets for which f is constant or its Fourier transform is usually not possible.
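The circle case of the polar-coordinate example can still be verified numerically; in this sketch (illustrative only; g = exp is an arbitrary choice) f takes the same value at every point of a circle centered at the origin:

<syntaxhighlight lang="python">
import numpy as np

# f(x1, x2) = g(sqrt(x1^2 + x2^2)) depends only on the single curvilinear
# coordinate a1(x) = sqrt(x1^2 + x2^2), so it is i1D in the generalized sense.
g = np.exp
f = lambda x1, x2: g(np.hypot(x1, x2))

r = 2.0
theta = np.linspace(0.0, 2.0 * np.pi, 9)
values = f(r * np.cos(theta), r * np.sin(theta))
print(np.ptp(values))  # ~0 (up to rounding): f is constant on the circle
</syntaxhighlight>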
Local Intrinsic Dimensionality
Local intrinsic dimensionality (LID) refers to the observation that data is often distributed on a lower-dimensional manifold when only a nearby subset of the data is considered. For example, a function <math>f(x, y)</math> that depends only on <math>x</math> near <math>y = 0</math>, on both variables near <math>y = 1</math>, and only on the sum <math>x + y</math> for large positive <math>y</math> can be considered locally one-dimensional when y is close to 0 (with the single variable x), two-dimensional when y is close to 1, and again one-dimensional when y is positive and much larger than 1 (with the single variable x + y).
Local intrinsic dimensionality is often used with respect to data. It is then usually estimated based on the k nearest neighbors of a data point,{{Cite book|last1=Amsaleg|first1=Laurent|last2=Chelly|first2=Oussama|last3=Furon|first3=Teddy|last4=Girard|first4=Stéphane|last5=Houle|first5=Michael E.|last6=Kawarabayashi|first6=Ken-ichi|last7=Nett|first7=Michael|title=Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining |chapter=Estimating Local Intrinsic Dimensionality |date=2015-08-10|chapter-url=https://doi.org/10.1145/2783258.2783405|series=KDD '15|location=Sydney, NSW, Australia|publisher=Association for Computing Machinery|pages=29–38|doi=10.1145/2783258.2783405|isbn=978-1-4503-3664-2|s2cid=16058196 }} often using a concept related to the doubling dimension in mathematics. Since the volume of a d-dimensional ball grows as the d-th power of its radius, the rate at which new neighbors are found as the search radius is increased can be used to estimate the local intrinsic dimensionality (e.g., GED estimation{{Cite book|last1=Houle|first1=M. E.|last2=Kashima|first2=H.|last3=Nett|first3=M.|title=2012 IEEE 12th International Conference on Data Mining Workshops |chapter=Generalized Expansion Dimension |date=2012|chapter-url=https://ieeexplore.ieee.org/document/6406405|pages=587–594|doi=10.1109/ICDMW.2012.94|isbn=978-1-4673-5164-5 |s2cid=8336466 }}). However, alternative estimation approaches have been proposed, for example angle-based estimation.{{Cite book|last1=Thordsen|first1=Erik|last2=Schubert|first2=Erich|date=2020|editor-last=Satoh|editor-first=Shin'ichi|editor2-last=Vadicamo|editor2-first=Lucia|editor3-last=Zimek|editor3-first=Arthur|editor4-last=Carrara|editor4-first=Fabio|editor5-last=Bartolini|editor5-first=Ilaria|editor6-last=Aumüller|editor6-first=Martin|editor7-last=Jónsson|editor7-first=Björn Þór|editor8-last=Pagh|editor8-first=Rasmus|editor8-link= Rasmus Pagh |chapter=ABID: Angle Based Intrinsic Dimensionality|chapter-url=https://link.springer.com/chapter/10.1007/978-3-030-60936-8_17|title=Similarity Search and Applications|series=Lecture Notes in Computer Science|volume=12440 |language=en|location=Cham|publisher=Springer International Publishing|pages=218–232|doi=10.1007/978-3-030-60936-8_17|isbn=978-3-030-60936-8|arxiv=2006.12880|s2cid=219980390 }}
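As an illustration of the nearest-neighbor approach, the following is a minimal sketch of a maximum-likelihood LID estimator of the kind analyzed by Amsaleg et al. (2015); the function name, the default k = 20, and the use of Euclidean distances are choices made for this example:

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial import cKDTree

def lid_mle(data, i, k=20):
    """Hill/MLE-type estimate of the local intrinsic dimensionality at
    data point i, from the distances to its k nearest neighbors."""
    tree = cKDTree(data)
    # k + 1 because the query point is returned as its own nearest neighbor.
    dists, _ = tree.query(data[i], k=k + 1)
    r = dists[1:]  # distances r_1 <= ... <= r_k to the k nearest neighbors
    # LID = -(1/k * sum_j log(r_j / r_k))^{-1}; the j = k term is log 1 = 0.
    return -1.0 / (np.sum(np.log(r / r[-1])) / k)

# Example: points on a 2-dimensional plane embedded in 5-dimensional space.
rng = np.random.default_rng(0)
plane = np.hstack([rng.normal(size=(1000, 2)), np.zeros((1000, 3))])
print(lid_mle(plane, i=0))  # close to 2
</syntaxhighlight>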
= Intrinsic dimension estimation =
Intrinsic dimension of data manifolds can be estimated by many methods, depending on the assumptions made about the data manifold. A 2016 review is given by Camastra and Staiano.{{Cite journal |last1=Camastra |first1=Francesco |last2=Staiano |first2=Antonino |date=2016-01-20 |title=Intrinsic dimension estimation: Advances and open problems |url=https://www.sciencedirect.com/science/article/pii/S0020025515006179 |journal=Information Sciences |volume=328 |pages=26–41 |doi=10.1016/j.ins.2015.08.029 |issn=0020-0255}}
The two-nearest-neighbors (TwoNN) method estimates the intrinsic dimension of an immersed Riemannian manifold. The algorithm is as follows:{{Cite journal |last1=Facco |first1=Elena |last2=d’Errico |first2=Maria |last3=Rodriguez |first3=Alex |last4=Laio |first4=Alessandro |date=2017-09-22 |title=Estimating the intrinsic dimension of datasets by a minimal neighborhood information |journal=Scientific Reports |volume=7 |issue=1 |page=12140 |doi=10.1038/s41598-017-11873-y |issn=2045-2322|doi-access=free |pmid=28939866 |pmc=5610237 |arxiv=1803.06992 |bibcode=2017NatSR...712140F }}
- Scatter some points on the manifold.
- Measure <math>\mu = r_2 / r_1</math> for many points, where <math>r_1</math> and <math>r_2</math> are the distances to each point's first and second nearest neighbors.
- Fit the empirical CDF of <math>\mu</math> to <math>\operatorname{CDF}(\mu) = 1 - \mu^{-d}</math>.
- Return <math>d</math>.
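A minimal implementation sketch follows (assumptions made here: Euclidean distances, SciPy's cKDTree for the neighbor search, and a least-squares fit of the linearized CDF through the origin; the function name is illustrative):

<syntaxhighlight lang="python">
import numpy as np
from scipy.spatial import cKDTree

def twonn_dimension(points):
    """TwoNN estimate of the intrinsic dimension (Facco et al., 2017)."""
    tree = cKDTree(points)
    # k = 3: each point is returned as its own neighbor at distance zero.
    dists, _ = tree.query(points, k=3)
    r1, r2 = dists[:, 1], dists[:, 2]
    mu = np.sort(r2 / r1)
    # Empirical CDF F(mu_(i)) = i/n; the model 1 - F(mu) = mu^{-d} gives
    # -log(1 - F) = d log(mu), so d is the slope of a line through the origin.
    n = len(mu)
    f_emp = np.arange(1, n + 1) / n
    x = np.log(mu[:-1])              # drop the last point, where 1 - F = 0
    y = -np.log(1.0 - f_emp[:-1])
    return float(x @ y / (x @ x))

# Example: a 2-dimensional plane embedded in 10-dimensional space.
rng = np.random.default_rng(0)
data = rng.uniform(size=(5000, 2)) @ rng.normal(size=(2, 10))
print(twonn_dimension(data))  # close to 2
</syntaxhighlight>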
History
During the 1950s, so-called "scaling" methods were developed in the social sciences to explore and summarize multidimensional data sets.{{cite book
| first = Warren S. |last=Torgerson
| title = Theory and methods of scaling
| orig-year = 1958 | year=1978 | isbn=0471879452 |oclc=256008416
| publisher = Wiley}} After Shepard introduced non-metric multidimensional scaling in 1962{{cite journal
| first = Roger N. |last=Shepard
| title = The analysis of proximities: Multidimensional scaling with an unknown distance function. I.
| journal = Psychometrika
| volume = 27
| issue = 2
| pages = 125–140
| year = 1962 |doi=10.1007/BF02289630
|s2cid=186222646
}} one of the major research areas within multi-dimensional scaling (MDS) was estimation of the intrinsic dimension.{{cite journal
| first = Roger N. |last=Shepard
| title = Representation of structure in similarity data: Problems and prospects
| journal = Psychometrika
| volume = 39
| issue = 4
| pages = 373–421
| year = 1974 |doi=10.1007/BF02291665
|s2cid=121704645
The topic was also studied in information theory, pioneered by Bennett, who in 1965 coined the term "intrinsic dimension" and wrote a computer program to estimate it.{{cite book
| first = Robert S. |last=Bennett
| chapter = Representation and analysis of signals—Part XXI: The intrinsic dimensionality of signal collections
| title = Rep. 163
| publisher = The Johns Hopkins University
| date = June 1965
| location = Baltimore, MD
| chapter-url = http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=AD0475844}}
{{cite thesis
| author = Robert S. Bennett
| title = Representation and Analysis of Signals Part XXI. The intrinsic dimensionality of signal collections
| type = PhD |publisher=The Johns Hopkins University
| date = 1965 |url=https://apps.dtic.mil/dtic/tr/fulltext/u2/475844.pdf
|archive-url=https://web.archive.org/web/20191227070509/https://apps.dtic.mil/dtic/tr/fulltext/u2/475844.pdf
|url-status=dead
|archive-date=December 27, 2019
| location = Ann Arbor, Michigan}}
{{cite journal
| first = Robert S. |last=Bennett
| title = The intrinsic dimensionality of signal collections
| journal = IEEE Transactions on Information Theory
| volume = 15
| issue = 5
| pages = 517–525
| date = September 1969 |doi=10.1109/TIT.1969.1054365
}}
During the 1970s, intrinsic dimensionality estimation methods were constructed that did not depend on dimensionality reductions such as MDS: methods based on local eigenvalues,{{Cite journal |last1=Fukunaga |first1=K. |last2=Olsen |first2=D. R. |date=1971 |title=An algorithm for finding intrinsic dimensionality of data |journal=IEEE Transactions on Computers |volume=20 |issue=2 |pages=176–183 |doi=10.1109/T-C.1971.223208|s2cid=30206700 }} on distance distributions,{{Cite journal |last1=Pettis |first1=K. W. |first2=Thomas A. |last2=Bailey |first3=Anil K. |last3=Jain |first4=Richard C. |last4=Dubes |date=1979 |title=An intrinsic dimensionality estimator from near-neighbor information |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=1 |issue=1 |pages=25–37 |doi=10.1109/TPAMI.1979.4766873|pmid=21868828 |s2cid=2196461 }} and on other dimension-dependent geometric properties.{{Cite journal |last=Trunk |first=G. V. |date=1976 |title=Statistical estimation of the intrinsic dimensionality of a noisy signal collection |journal=IEEE Transactions on Computers |volume=100 |issue=2 |pages=165–171 |doi=10.1109/TC.1976.5009231|s2cid=1181023 }}
Estimating intrinsic dimension of sets and probability measures has also been extensively studied since around 1980 in the field of dynamical systems, where dimensions of (strange) attractors have been the subject of interest.{{Cite journal |last1=Grassberger |first1=P. |last2=Procaccia |first2=I. |date=1983 |title=Measuring the strangeness of strange attractors |journal=Physica D: Nonlinear Phenomena |volume=9 |issue=1–2 |pages=189–208 |doi=10.1016/0167-2789(83)90298-1|bibcode=1983PhyD....9..189G }}{{Cite book |editor-first=Howell |editor-last=Tong |title=Dynamical Systems and Bifurcations, Proceedings of a Workshop Held in Groningen, The Netherlands, April 16-20, 1984 |last=Takens |first=F. |publisher=Springer-Verlag |year=1984 |isbn=3540394117 |series=Lecture Notes in Mathematics |volume=1125 |pages=99–106 |chapter=On the numerical determination of the dimension of an attractor |doi=10.1007/BFb0075637}}{{Cite book |title=Dimension estimation and models |last=Cutler |first=C. D. |publisher=World Scientific |year=1993 |isbn=9810213530 |series=Nonlinear Time Series and Chaos |volume=1 |pages=1–107 |chapter=A review of the theory and estimation of fractal dimension |chapter-url=https://books.google.com/books?id=uLyp99DIJG8C&pg=PA1}}{{Cite book |title=Multifractals — Theory and Applications |last=Harte |first=D. |publisher=Chapman and Hall/CRC |year=2001 |isbn=9781584881544 }} For strange attractors there is no manifold assumption, and the dimension measured is some version of fractal dimension — which also can be non-integer. However, definitions of fractal dimension yield the manifold dimension for manifolds.
In the 2000s, the "curse of dimensionality" was exploited to estimate intrinsic dimension.{{Cite journal |last=Chavez |first=E. |date=2001 |title=Searching in metric spaces |journal=ACM Computing Surveys |volume=33 |issue=3 |pages=273–321 |doi=10.1145/502807.502808|hdl=10533/172863 |s2cid=3201604 |hdl-access=free }}{{Cite journal |last=Pestov |first=V. |date=2008 |title=An axiomatic approach to intrinsic dimension of a dataset |journal=Neural Networks |volume=21 |issue=2–3 |pages=204–213 |doi=10.1016/j.neunet.2007.12.030 |pmid=18234471 |arxiv=0712.2063|s2cid=2309396 }}
Applications
The case of a two-variable signal which is i1D appears frequently in computer vision and image processing and captures the idea of local image regions which contain lines or edges. The analysis of such regions has a long history, but it was not until a more formal and theoretical treatment of such operations began that the concept of intrinsic dimension was established, even though the name has varied.
For example, the concept which here is referred to as an image neighborhood of intrinsic dimension 1 or i1D neighborhood is called 1-dimensional by Knutsson (1982),{{cite book
| first=Hans |last=Knutsson
| title=Filtering and reconstruction in image processing
| year=1982 |id=oai:DiVA.org:liu-54890
| series=Linköping Studies in Science and Technology |volume=88 | publisher=Linköping University |isbn=91-7372-595-1 |url=http://liu.diva-portal.org/smash/get/diva2:311054/FULLTEXT01.pdf }}
linear symmetric by Bigün & Granlund (1987){{cite book
| first1=Josef |last1=Bigün
| first2=Gösta H. |last2=Granlund
| chapter=Optimal orientation detection of linear symmetry
| title=Proceedings of the International Conference on Computer Vision
| year=1987
| pages=433–438
| chapter-url = http://www2.hh.se/staff/josef/publ/publications/bigun87london.pdf}}
and simple neighborhood in Granlund & Knutsson (1995).{{cite book
| first1=Gösta H. |last1=Granlund
| first2=Hans |last2=Knutsson
| title=Signal Processing in Computer Vision
| year=1995
| url=https://www.springer.com/engineering/signals/book/978-0-7923-9530-0
| isbn=978-1-4757-2377-9
| publisher=Kluwer Academic }}
References
{{reflist}}