t-distributed stochastic neighbor embedding

{{short description|Technique for dimensionality reduction}}

{{redirect|TSNE|the Boston-based organization|Third Sector New England}}

[[File:T-SNE visualisation of word embeddings generated using 19th century literature.png|thumb|T-SNE visualisation of word embeddings generated using 19th century literature]]

[[File:T-SNE Embedding of MNIST.png|thumb|T-SNE embedding of the MNIST dataset]]

{{lowercase title}}

{{Data Visualization}}

t-distributed stochastic neighbor embedding (t-SNE) is a statistical method for visualizing high-dimensional data by giving each datapoint a location in a two- or three-dimensional map. It is based on Stochastic Neighbor Embedding, originally developed by Geoffrey Hinton and Sam Roweis,{{cite conference |date=January 2002 |title=Stochastic neighbor embedding |url=https://papers.nips.cc/paper_files/paper/2002/file/6150ccc6069bea6b5716254057a194ef-Paper.pdf |conference=Neural Information Processing Systems |author1-last=Hinton |author1-first=Geoffrey |author2-last=Roweis |author2-first=Sam}} for which Laurens van der Maaten and Hinton later proposed the t-distributed variant.{{cite journal|last=van der Maaten|first=L.J.P.|author2=Hinton, G.E. |title=Visualizing Data Using t-SNE|journal=Journal of Machine Learning Research |volume=9|date=Nov 2008|pages=2579–2605|url=http://jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf}} It is a nonlinear dimensionality reduction technique that embeds high-dimensional data in a low-dimensional space of two or three dimensions for visualization. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that, with high probability, similar objects are modeled by nearby points and dissimilar objects by distant points.

The t-SNE algorithm comprises two main stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects in such a way that similar objects are assigned a higher probability while dissimilar points are assigned a lower probability. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the Kullback–Leibler divergence (KL divergence) between the two distributions with respect to the locations of the points in the map. While the original algorithm uses the Euclidean distance between objects as the base of its similarity metric, this can be changed as appropriate. A Riemannian variant is UMAP.

t-SNE has been used for visualization in a wide range of applications, including genomics, computer security research,{{cite journal|last=Gashi|first=I.|author2=Stankovic, V. |author3=Leita, C. |author4=Thonnard, O. |title=An Experimental Study of Diversity with Off-the-shelf AntiVirus Engines|journal=Proceedings of the IEEE International Symposium on Network Computing and Applications|year=2009|pages=4–11}} natural language processing, music analysis,{{cite journal|last=Hamel|first=P.|author2=Eck, D. |title=Learning Features from Music Audio with Deep Belief Networks|journal=Proceedings of the International Society for Music Information Retrieval Conference|year=2010|pages=339–344}} cancer research,{{cite journal|last=Jamieson|first=A.R.|author2=Giger, M.L. |author3=Drukker, K. |author4=Lui, H. |author5=Yuan, Y. |author6=Bhooshan, N. |title=Exploring Nonlinear Feature Space Dimension Reduction and Data Representation in Breast CADx with Laplacian Eigenmaps and t-SNE|journal=Medical Physics |issue=1|year=2010|pages=339–351|doi=10.1118/1.3267037|pmid=20175497|volume=37|pmc=2807447}} bioinformatics,{{cite journal|last=Wallach|first=I.|author2=Liliean, R. |title=The Protein-Small-Molecule Database, A Non-Redundant Structural Resource for the Analysis of Protein-Ligand Binding|journal=Bioinformatics |year=2009|pages=615–620|doi=10.1093/bioinformatics/btp035|volume=25|issue=5|pmid=19153135|doi-access=free}} geological domain interpretation,{{Cite journal|date=2019-04-01|title=A comparison of t-SNE, SOM and SPADE for identifying material type domains in geological data|url=https://www.sciencedirect.com/science/article/pii/S0098300418306010|journal=Computers & Geosciences|language=en|volume=125|pages=78–89|doi=10.1016/j.cageo.2019.01.011|issn=0098-3004|last1=Balamurali|first1=Mehala|last2=Silversides|first2=Katherine L.|last3=Melkumyan|first3=Arman|bibcode=2019CG....125...78B |s2cid=67926902}}{{Cite book|last1=Balamurali|first1=Mehala|last2=Melkumyan|first2=Arman|date=2016|editor-last=Hirose|editor-first=Akira|editor2-last=Ozawa|editor2-first=Seiichi|editor3-last=Doya|editor3-first=Kenji|editor4-last=Ikeda|editor4-first=Kazushi|editor5-last=Lee|editor5-first=Minho|editor6-last=Liu|editor6-first=Derong|chapter=t-SNE Based Visualisation and Clustering of Geological Domain|chapter-url=https://link.springer.com/chapter/10.1007/978-3-319-46681-1_67|title=Neural Information Processing|series=Lecture Notes in Computer Science|volume=9950|language=en|location=Cham|publisher=Springer International Publishing|pages=565–572|doi=10.1007/978-3-319-46681-1_67|isbn=978-3-319-46681-1}}{{Cite journal|last1=Leung|first1=Raymond|last2=Balamurali|first2=Mehala|last3=Melkumyan|first3=Arman|date=2021-01-01|title=Sample Truncation Strategies for Outlier Removal in Geochemical Data: The MCD Robust Distance Approach Versus t-SNE Ensemble Clustering|url=https://doi.org/10.1007/s11004-019-09839-z|journal=Mathematical Geosciences|language=en|volume=53|issue=1|pages=105–130|doi=10.1007/s11004-019-09839-z|bibcode=2021MatGe..53..105L |s2cid=208329378|issn=1874-8953}} and biomedical signal processing.{{Cite book|last1=Birjandtalab|first1=J.|last2=Pouyan|first2=M. B.|last3=Nourani|first3=M.|title=2016 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI) |chapter=Nonlinear dimension reduction for EEG-based epileptic seizure detection |date=2016-02-01|pages=595–598|doi=10.1109/BHI.2016.7455968|isbn=978-1-5090-2455-1|s2cid=8074617}}

For a data set with n elements, t-SNE runs in {{math|O(n<sup>2</sup>)}} time and requires {{math|O(n<sup>2</sup>)}} space.{{cite arXiv|title=Approximated and User Steerable tSNE for Progressive Visual Analytics|last=Pezzotti|first=Nicola|date=2015 |class=cs.CV |eprint=1512.01655 }}

== Details ==

Given a set of N high-dimensional objects \mathbf{x}_1, \dots, \mathbf{x}_N, t-SNE first computes probabilities p_{ij} that are proportional to the similarity of objects \mathbf{x}_i and \mathbf{x}_j, as follows.

For i \neq j, define

: p_{j\mid i} = \frac{\exp(-\lVert\mathbf{x}_i - \mathbf{x}_j\rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert\mathbf{x}_i - \mathbf{x}_k\rVert^2 / 2\sigma_i^2)}

and set p_{i\mid i} = 0.

Note that the denominator above ensures \sum_j p_{j\mid i} = 1 for all i.

As van der Maaten and Hinton explained: "The similarity of datapoint x_j to datapoint x_i is the conditional probability, p_{j|i}, that x_i would pick x_j as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at x_i."

Now define

: p_{ij} = \frac{p_{j\mid i} + p_{i\mid j}}{2N}

This definition is motivated as follows: with each marginal probability over the N samples estimated as p_i = 1/N, the joint probability decomposes as p_{ij} = p_{j\mid i}\, p_i = p_{j\mid i}/N and, likewise, p_{ji} = p_{i\mid j}/N. Averaging these two expressions enforces the symmetry p_{ij} = p_{ji} and yields the formula above.

Also note that p_{ii} = 0 and \sum_{i, j} p_{ij} = 1.
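
The construction of the matrix P can be summarized in a short NumPy sketch. This is an illustration rather than a reference implementation: the function name is hypothetical, and the per-point bandwidths \sigma_i are assumed to be given (in practice they are found from the perplexity, as sketched further below).

<syntaxhighlight lang="python">
import numpy as np

def joint_probabilities(X, sigmas):
    """Symmetrized t-SNE input similarities p_ij (illustrative sketch).

    X: (N, D) data matrix; sigmas: (N,) per-point Gaussian bandwidths,
    assumed here to be given.
    """
    N = X.shape[0]
    # Squared Euclidean distances ||x_i - x_j||^2 for all pairs.
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    # Conditional probabilities p_{j|i}, one row per point i.
    P = np.exp(-D / (2.0 * sigmas[:, None] ** 2))
    np.fill_diagonal(P, 0.0)           # p_{i|i} = 0
    P /= P.sum(axis=1, keepdims=True)  # each row sums to 1
    # Symmetrize: p_ij = (p_{j|i} + p_{i|j}) / (2N).
    return (P + P.T) / (2.0 * N)
</syntaxhighlight>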

The bandwidth \sigma_i of the Gaussian kernel for each point is set, using the bisection method, so that the entropy of the conditional distribution equals a predefined value. As a result, the bandwidth adapts to the density of the data: smaller values of \sigma_i are used in denser parts of the data space. The entropy increases with the perplexity of this distribution P_i; the two quantities are related by

: Perp(P_i) = 2^{H(P_i)}

where H(P_i) is the Shannon entropy H(P_i) = -\sum_j p_{j\mid i} \log_2 p_{j\mid i}.

The perplexity is a hand-chosen parameter of t-SNE, and as the authors state, "perplexity can be interpreted as a smooth measure of the effective number of neighbors. The performance of SNE is fairly robust to changes in the perplexity, and typical values are between 5 and 50."
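
A minimal sketch of this bandwidth search, assuming a precomputed row of squared distances for point i and searching over the precision \beta = 1/(2\sigma_i^2), as common implementations do (the function name and numerical tolerances are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def sigma_for_perplexity(dist_sq_row, target_perplexity, tol=1e-5, max_iter=64):
    """Find sigma_i by bisection so that 2^H(P_i) matches the target perplexity.

    dist_sq_row: squared distances from point i to every other point
    (the entry for the point itself removed beforehand).
    """
    lo, hi = 1e-20, 1e20           # bracket for beta = 1 / (2 sigma_i^2)
    target_entropy = np.log2(target_perplexity)
    beta = 1.0
    for _ in range(max_iter):
        # Shift by the minimum distance for numerical stability;
        # the shift cancels in the normalization.
        p = np.exp(-(dist_sq_row - dist_sq_row.min()) * beta)
        p /= p.sum()
        entropy = -np.sum(p * np.log2(p + 1e-12))
        if abs(entropy - target_entropy) < tol:
            break
        if entropy > target_entropy:
            lo = beta              # distribution too flat: raise the precision
        else:
            hi = beta              # distribution too peaked: lower the precision
        beta = np.sqrt(lo * hi)    # geometric midpoint covers many scales
    return np.sqrt(1.0 / (2.0 * beta))
</syntaxhighlight>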

Since the Gaussian kernel uses the Euclidean distance \lVert x_i-x_j \rVert, it is affected by the curse of dimensionality: in high-dimensional data, where distances lose the ability to discriminate, the p_{ij} become too similar (asymptotically, they converge to a constant). Adjusting the distances with a power transform, based on the intrinsic dimension of each point, has been proposed to alleviate this.{{Cite conference|last1=Schubert|first1=Erich|last2=Gertz|first2=Michael|date=2017-10-04|title=Intrinsic t-Stochastic Neighbor Embedding for Visualization and Outlier Detection|conference=SISAP 2017 – 10th International Conference on Similarity Search and Applications|pages=188–203|doi=10.1007/978-3-319-68474-1_13}}

t-SNE aims to learn a d-dimensional map \mathbf{y}_1, \dots, \mathbf{y}_N (with \mathbf{y}_i \in \mathbb{R}^d and d typically chosen as 2 or 3) that reflects the similarities p_{ij} as well as possible. To this end, it measures similarities q_{ij} between two points in the map \mathbf{y}_i and \mathbf{y}_j, using a very similar approach.

Specifically, for i \neq j, define q_{ij} as

: q_{ij} = \frac{(1 + \lVert \mathbf{y}_i - \mathbf{y}_j\rVert^2)^{-1}}{\sum_k \sum_{l \neq k} (1 + \lVert \mathbf{y}_k - \mathbf{y}_l\rVert^2)^{-1}}

and set q_{ii} = 0 .

Herein a heavy-tailed Student t-distribution (with one degree of freedom, which is the same as a Cauchy distribution) is used to measure similarities between low-dimensional points, in order to allow dissimilar objects to be modeled far apart in the map.
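
The matrix Q follows directly from this definition; as an illustrative NumPy sketch (hypothetical function name):

<syntaxhighlight lang="python">
import numpy as np

def low_dim_affinities(Y):
    """Student t (one degree of freedom) similarities q_ij in the map.

    Y: (N, d) current embedding coordinates.
    """
    sq = np.sum(Y ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T
    inv = 1.0 / (1.0 + D)        # (1 + ||y_i - y_j||^2)^{-1}
    np.fill_diagonal(inv, 0.0)   # q_{ii} = 0
    return inv / inv.sum()       # normalize over all pairs
</syntaxhighlight>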

The locations of the points \mathbf{y}_i in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution P from the distribution Q, that is:

: \mathrm{KL}\left(P \parallel Q\right) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}

The minimization of the Kullback–Leibler divergence with respect to the points \mathbf{y}_i is performed using gradient descent.

The result of this optimization is a map that reflects the similarities between the high-dimensional inputs.
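
The gradient of this objective has the closed form given by van der Maaten and Hinton,

: \frac{\partial C}{\partial \mathbf{y}_i} = 4 \sum_j (p_{ij} - q_{ij})(\mathbf{y}_i - \mathbf{y}_j)(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2)^{-1}

so a plain gradient-descent step can be sketched as follows (illustrative only; practical implementations also use momentum and an "early exaggeration" phase, omitted here for brevity):

<syntaxhighlight lang="python">
import numpy as np

def gradient_step(P, Y, learning_rate=200.0):
    """One plain gradient-descent step on KL(P || Q) (illustrative sketch)."""
    # Recompute the low-dimensional affinities Q from the current map Y.
    sq = np.sum(Y ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * Y @ Y.T
    inv = 1.0 / (1.0 + D)
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()
    # W_ij = (p_ij - q_ij) (1 + ||y_i - y_j||^2)^{-1}; the gradient for
    # point i is 4 * sum_j W_ij (y_i - y_j), written below in matrix form.
    W = (P - Q) * inv
    grad = 4.0 * (np.diag(W.sum(axis=1)) - W) @ Y
    return Y - learning_rate * grad
</syntaxhighlight>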

== Output ==

While t-SNE plots often seem to display clusters, the visual clusters can be strongly influenced by the chosen parameterization (especially the perplexity), so a good understanding of the t-SNE parameters is needed. Such "clusters" can be shown to appear even in structured data with no clear clustering,{{Cite web |title=K-means clustering on the output of t-SNE |url=https://stats.stackexchange.com/a/264647 |access-date=2018-04-16 |website=Cross Validated}} and so may be false findings. Similarly, the size of clusters produced by t-SNE is not informative, and neither is the distance between clusters.{{Cite journal |last1=Wattenberg |first1=Martin |last2=Viégas |first2=Fernanda |last3=Johnson |first3=Ian |date=2016-10-13 |title=How to Use t-SNE Effectively |url=http://distill.pub/2016/misread-tsne |journal=Distill |language=en |volume=1 |issue=10 |pages=e2 |doi=10.23915/distill.00002 |issn=2476-0757|doi-access=free }} Thus, interactive exploration may be needed to choose parameters and validate results.{{Cite journal |last1=Pezzotti |first1=Nicola |last2=Lelieveldt |first2=Boudewijn P. F. |last3=Maaten |first3=Laurens van der |last4=Hollt |first4=Thomas |last5=Eisemann |first5=Elmar |last6=Vilanova |first6=Anna |date=2017-07-01 |title=Approximated and User Steerable tSNE for Progressive Visual Analytics |journal=IEEE Transactions on Visualization and Computer Graphics |language=en-US |volume=23 |issue=7 |pages=1739–1752 |arxiv=1512.01655 |doi=10.1109/tvcg.2016.2570755 |issn=1077-2626 |pmid=28113434 |s2cid=353336}} It has been shown that t-SNE can often recover well-separated clusters, and with special parameter choices, approximates a simple form of spectral clustering.{{cite arXiv |eprint=1706.02582 |class=cs.LG |first1=George C. |last1=Linderman |first2=Stefan |last2=Steinerberger |title=Clustering with t-SNE, provably |date=2017-06-08}}
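
This parameter sensitivity can be probed directly, for example with scikit-learn, by embedding the same data at several perplexities and comparing the resulting plots (a minimal sketch; the dataset and perplexity values are arbitrary choices):

<syntaxhighlight lang="python">
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X = load_digits().data  # 1797 samples, 64 features

# Embed the same data at several perplexities; plotting the results
# side by side typically shows how cluster shapes, sizes, and
# separations change with the perplexity alone.
embeddings = {
    perp: TSNE(n_components=2, perplexity=perp, init="pca",
               random_state=0).fit_transform(X)
    for perp in (5, 30, 50)
}
</syntaxhighlight>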

== Software ==

* A C++ implementation of Barnes–Hut-accelerated t-SNE is available on the [https://github.com/lvdmaaten/bhtsne GitHub account] of one of the original authors.
* The R package [https://CRAN.R-project.org/package=Rtsne Rtsne] implements t-SNE in R.
* ELKI contains t-SNE, also with the Barnes–Hut approximation.
* scikit-learn, a popular machine learning library in Python, implements t-SNE with both exact solutions and the Barnes–Hut approximation.
* Tensorboard, the visualization kit associated with TensorFlow, also implements t-SNE ([https://projector.tensorflow.org/ online version]).
* The Julia package [https://juliapackages.com/p/tsne TSne] implements t-SNE.

== References ==

{{reflist}}