Locality-sensitive hashing

{{Short description|Algorithmic technique using hashing}}

In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability.{{cite web|url=http://infolab.stanford.edu/~ullman/mmds.html|title=Mining of Massive Datasets, Ch. 3.|last1=Rajaraman|first1=A.|last2=Ullman|first2=J.|author2-link=Jeffrey Ullman|year=2010}} (The number of buckets is much smaller than the universe of possible input items.) Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.

Hashing-based approximate nearest-neighbor search algorithms generally use one of two main categories of hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-dependent methods, such as locality-preserving hashing (LPH).{{cite conference |last1=Zhao |first1=Kang |last2=Lu |first2=Hongtao |last3=Mei |first3=Jincheng |title=Locality Preserving Hashing |conference=AAAI Conference on Artificial Intelligence | volume=28 | year=2014 |url=https://ojs.aaai.org/index.php/AAAI/article/view/9133/8992 |pages=2874–2880}}{{cite book |last1=Tsai |first1=Yi-Hsuan |last2=Yang |first2=Ming-Hsuan |title=2014 IEEE International Conference on Image Processing (ICIP) |chapter=Locality preserving hashing |date=October 2014 |pages=2988–2992 |doi=10.1109/ICIP.2014.7025604 |isbn=978-1-4799-5751-4 |s2cid=8024458 |issn=1522-4880}}

Locality-preserving hashing was initially devised as a way to facilitate data pipelining in implementations of massively parallel algorithms that use randomized routing and universal hashing to reduce memory contention and network congestion.{{cite thesis |last=Chin |first=Andrew |date=1991 |title=Complexity Issues in General Purpose Parallel Computing |pages=87–95 |type=DPhil |publisher=University of Oxford |url=https://perma.cc/E47H-WCVP}}{{cite journal |last1=Chin |first1=Andrew |date=1994 |title=Locality-Preserving Hash Functions for General Purpose Parallel Computation |url=http://unclaw.com/chin/scholarship/hashfunctions.pdf |journal=Algorithmica |volume=12 |issue=2–3 |pages=170–181 |doi=10.1007/BF01185209|s2cid=18108051 }}

Definitions

A finite family \mathcal F of functions h\colon M \to S is defined to be an LSH family{{cite journal

| author1 = Gionis, A.

| author2-link = Piotr Indyk | author2 = Indyk, P. | author3-link = Rajeev Motwani | author3 = Motwani, R.

| year = 1999

| title = Similarity Search in High Dimensions via Hashing

| url = http://people.csail.mit.edu/indyk/vldb99.ps

| journal = Proceedings of the 25th Very Large Database (VLDB) Conference

}}{{cite conference

| author1 = Indyk, Piotr.

| author2 = Motwani, Rajeev.

| year = 1998

| contribution = Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality.

| contribution-url = http://people.csail.mit.edu/indyk/nndraft.ps

| title = Proceedings of 30th Symposium on Theory of Computing

| title-link = Symposium on Theory of Computing

}}

for

  • a metric space \mathcal M =(M, d),
  • a threshold r>0,
  • an approximation factor c>1,
  • and probabilities p_1 > p_2

if it satisfies the following condition. For any two points a, b \in M and a hash function h chosen uniformly at random from \mathcal F:

  • If d(a,b) \le r, then h(a)=h(b) (i.e., {{mvar|a}} and {{mvar|b}} collide) with probability at least p_1,
  • If d(a,b) \ge cr, then h(a)=h(b) with probability at most p_2.

Such a family \mathcal F is called (r,cr,p_1,p_2)-sensitive.

= LSH with respect to a similarity measure =

Alternatively{{cite conference

| last = Charikar|first= Moses S.|author-link=Moses Charikar

| year = 2002

| contribution = Similarity Estimation Techniques from Rounding Algorithms

| title = Proceedings of the 34th Annual ACM Symposium on Theory of Computing

| pages = 380–388

| doi = 10.1145/509907.509965

|isbn= 1-58113-495-9| citeseerx = 10.1.1.147.4064}} it is possible to define an LSH family on a universe of items {{mvar|U}} endowed with a similarity function \phi\colon U \times U \to [0,1]. In this setting, an LSH scheme is a family of hash functions {{mvar|H}} coupled with a probability distribution {{mvar|D}} over {{mvar|H}} such that a function h \in H chosen according to {{mvar|D}} satisfies Pr [h(a) = h(b)] = \phi(a,b) for each a,b \in U.

=Amplification=

Given a (d_1, d_2, p_1, p_2)-sensitive family \mathcal F (where d_1 and d_2 play the roles of the distance thresholds r and cr in the definition above), we can construct new families \mathcal G by either the AND-construction or the OR-construction of \mathcal F.

To create an AND-construction, we define a new family \mathcal G of hash functions {{mvar|g}}, where each function {{mvar|g}} is constructed from {{mvar|k}} random functions h_1, \ldots, h_k from \mathcal F. We then say that for a hash function g \in \mathcal G, g(x) = g(y) if and only if all h_i(x) = h_i(y) for i = 1, 2, \ldots, k. Since the members of \mathcal F are independently chosen for any g \in \mathcal G, \mathcal G is a (d_1, d_2, p_{1}^k, p_{2}^k)-sensitive family.

To create an OR-construction, we define a new family \mathcal G of hash functions {{mvar|g}}, where each function {{mvar|g}} is constructed from {{mvar|k}} random functions h_1, \ldots, h_k from \mathcal F. We then say that for a hash function g \in \mathcal G, g(x) = g(y) if and only if h_i(x) = h_i(y) for one or more values of {{mvar|i}}. Since the members of \mathcal F are independently chosen for any g \in \mathcal G, \mathcal G is a (d_1, d_2, 1- (1 - p_1)^k, 1 - (1 - p_2)^k)-sensitive family.
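
The following minimal Python sketch (the helper names are illustrative, not from the cited literature) shows both constructions, using the bit-sampling family for Hamming distance, described under Methods below, as the base family:

<syntaxhighlight lang="python">
import random

d = 8  # dimension of the binary vectors

def bit_sampling_family():
    """Base LSH family: project a vector onto one random coordinate."""
    i = random.randrange(d)
    return lambda x: x[i]

def and_construction(base_family, k):
    """AND-construction: draw k base functions; two points collide only
    if every h_i agrees, so a collision probability p becomes p**k."""
    hs = [base_family() for _ in range(k)]
    return lambda x: tuple(h(x) for h in hs)  # equal tuples <=> all h_i agree

def or_collision(base_family, k, x, y):
    """OR-construction collision test: x and y collide if at least one of
    the k drawn functions agrees, so p becomes 1 - (1 - p)**k."""
    hs = [base_family() for _ in range(k)]
    return any(h(x) == h(y) for h in hs)

x = (0, 1, 1, 0, 1, 0, 0, 1)
y = (0, 1, 1, 0, 1, 0, 1, 1)   # Hamming distance 1 from x

g = and_construction(bit_sampling_family, k=4)
print(g(x) == g(y))                                # AND-collision of x and y
print(or_collision(bit_sampling_family, 4, x, y))  # OR-collision of x and y
</syntaxhighlight>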

Applications

LSH has been applied to several problem domains, including:

  • Near-duplicate detection

{{citation

| author = Das, Abhinandan S.

| title = Proceedings of the 16th international conference on World Wide Web

| chapter = Google news personalization: scalable online collaborative filtering

| pages = 271–280

| year = 2007|doi=10.1145/1242572.1242610|display-authors=etal| isbn = 9781595936547

| s2cid = 207163129

}}.

{{citation

| author = Koga, Hisashi |author2=Tetsuo Ishibashi |author3=Toshinori Watanabe

| title = Fast agglomerative hierarchical clustering algorithm using Locality-Sensitive Hashing

| journal = Knowledge and Information Systems |volume=12 |issue=1 |pages=25–53

| year = 2007 | doi=10.1007/s10115-006-0027-5|s2cid=4613827 }}.

{{citation

| author = Cochez, Michael |author2=Mou, Hao

|title=Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data

|chapter=Twister Tries

| pages=505–517

| year = 2015 | doi=10.1145/2723372.2751521| isbn = 9781450327589

|s2cid=14414777

| url = https://jyx.jyu.fi/bitstream/123456789/46537/1/cochezmousigmod15finalcameraready.pdf

}}.

{{citation

| author = Brinza, Dumitru

| title = RAPID detection of gene–gene interactions in genome-wide association studies

| journal = Bioinformatics |volume=26 |issue=22 |year=2010 |pages=2856–2862 | doi=10.1093/bioinformatics/btq529| pmid = 20871107

|display-authors=etal|pmc=3493125}}

{{citation

| title = dejavu - Audio fingerprinting and recognition in Python

| url = https://github.com/worldveil/dejavu| date = 2018-12-19}}

{{citation

| title = A Simple Introduction to Locality Sensitive Hashing (LSH)

| url =https://www.iunera.com/kraken/fabric/locality-sensitive-hashing-lsh/#7-applications-of-lsh| date = 2025-03-27}}

{{citation

|author1=Aluç, Güneş |author2=Özsu, M. Tamer |author3=Daudjee, Khuzaima

| title = Building self-clustering RDF databases using Tunable-LSH

| journal = The VLDB Journal | year = 2018 |volume=28 |issue=2 |pages=173–195 | doi = 10.1007/s00778-018-0530-9 |s2cid=53695535 }}

  • Training fully connected neural networks{{Cite arXiv|last1=Chen|first1=Beidi|last2=Medini|first2=Tharun|last3=Farwell|first3=James|last4=Gobriel|first4=Sameh|last5=Tai|first5=Charlie|last6=Shrivastava|first6=Anshumali|date=2020-02-29|title=SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems|class=cs.DC|eprint=1903.03129}}{{ citation

| last1=Chen | first1=Beidi | last2 = Liu | first2 = Zichang | last3 = Peng | first3 = Binghui | last4 = Xu | first4 = Zhaozhuo | last5 = Li | first5 = Jonathan Lingjie | last6 = Dao | first6 = Tri | last7 = Song | first7 = Zhao | last8 = Shrivastava | first8 = Anshumali | last9 = Re | first9 = Christopher| title = MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training | journal = International Conference on Learning Representations | year = 2021 | url = https://openreview.net/forum?id=wWK7yXkULyh }}

  • Computer security

{{cite conference

| author1 = Oliver, Jonathan| author2 = Cheng, Chun | author3 = Chen, Yanggui

| title = 2013 Fourth Cybercrime and Trustworthy Computing Workshop | chapter = TLSH -- A Locality Sensitive Hash | year = 2013

| pages = 7–13 | url = https://www.academia.edu/7833902

| doi = 10.1109/CTC.2013.9

| isbn = 978-1-4799-3076-0

}}

{{citation

| author1 = Fanaee-T, Hadi

| title = Natural Learning

| year = 2024

| arxiv = 2404.05903

}}

Methods

=Bit sampling for Hamming distance=

One of the easiest ways to construct an LSH family is by bit sampling. This approach works for the Hamming distance over {{mvar|d}}-dimensional vectors \{0,1\}^d. Here, the family \mathcal F of hash functions is simply the family of all the projections of points on one of the d coordinates, i.e., {\mathcal F}=\{h\colon \{0,1\}^d\to \{0,1\}\mid h(x)=x_i \text{ for some } i\in \{1, \ldots, d\}\}, where x_i is the ith coordinate of x. A random function h from {\mathcal F} simply selects a random bit from the input point. This family has the following parameters: P_1=1-R/d, P_2=1-cR/d.

That is, any two vectors x,y with Hamming distance at most R collide under a random h with probability at least P_1.

Any x,y with Hamming distance at least cR collide with probability at most P_2.
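
A minimal Python sketch of this family (function names are illustrative), empirically confirming that two vectors at Hamming distance {{mvar|R}} collide under a random {{mvar|h}} with probability 1 - R/d = P_1:

<syntaxhighlight lang="python">
import random

d = 16  # dimension of the binary vectors

def random_bit_hash():
    """Draw h uniformly from F: project onto one random coordinate."""
    i = random.randrange(d)
    return lambda x: x[i]

def empirical_collision_probability(x, y, trials=10_000):
    """Estimate Pr[h(x) = h(y)] over random h; analytically this
    equals 1 - hamming(x, y) / d."""
    hits = 0
    for _ in range(trials):
        h = random_bit_hash()
        hits += h(x) == h(y)
    return hits / trials

x = [0] * 16
y = [0] * 12 + [1] * 4  # Hamming distance R = 4
print(empirical_collision_probability(x, y))  # ≈ 0.75 = 1 - 4/16
</syntaxhighlight>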

=Min-wise independent permutations=

{{main|MinHash}}

Suppose {{mvar|U}} is composed of subsets of some ground set of enumerable items {{mvar|S}} and the similarity function of interest is the Jaccard index {{mvar|J}}. If {{mvar|π}} is a permutation on the indices of {{mvar|S}}, for A \subseteq S let h(A) = \min_{a \in A} \{ \pi(a) \}. Each possible choice of {{mvar|π}} defines a single hash function {{mvar|h}} mapping input sets to elements of {{mvar|S}}.

Define the function family {{mvar|H}} to be the set of all such functions and let {{mvar|D}} be the uniform distribution. Given two sets A,B \subseteq S the event that h(A) = h(B) corresponds exactly to the event that the minimizer of {{mvar|π}} over A \cup B lies inside A \cap B. As {{mvar|h}} was chosen uniformly at random, Pr[h(A) = h(B)] = J(A,B)\, and (H,D)\, define an LSH scheme for the Jaccard index.
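
A minimal Python sketch of this scheme (function names are illustrative): it draws explicit random permutations, which is feasible only for a small ground set as discussed below, and estimates the Jaccard index from the collision frequency:

<syntaxhighlight lang="python">
import random

def minhash_function(ground_set):
    """Draw a uniformly random permutation pi of the ground set and
    return h(A) = min over a in A of pi(a)."""
    ranks = list(range(len(ground_set)))
    random.shuffle(ranks)
    pi = dict(zip(ground_set, ranks))
    return lambda A: min(pi[a] for a in A)

S = list(range(100))     # ground set
A = set(range(0, 60))    # |A ∩ B| = 20 and |A ∪ B| = 100,
B = set(range(40, 100))  # so J(A, B) = 0.2

trials = 10_000
hits = 0
for _ in range(trials):
    h = minhash_function(S)
    hits += h(A) == h(B)
print(hits / trials)     # ≈ 0.2 = J(A, B)
</syntaxhighlight>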

Because the symmetric group on {{mvar|n}} elements has size {{mvar|n}}!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized {{mvar|n}}. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent" — a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen {{mvar|π}}. It has been established that a min-wise independent family of permutations is at least of size \operatorname{lcm}\{\,1, 2, \ldots, n\,\} \ge e^{n-o(n)},{{cite conference

| last1 = Broder | first1 = A.Z. | author1-link = Andrei Broder

|last2= Charikar | first2 = M. | author2-link = Moses Charikar

|last3= Frieze | first3 = A.M. | author3-link = Alan M. Frieze

|last4= Mitzenmacher | first4 = M. | author4-link = Michael Mitzenmacher

| year = 1998

| contribution = Min-wise independent permutations

| title = Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing

| pages = 327–336

| contribution-url = http://www.cs.princeton.edu/~moses/papers/minwise.ps

| access-date = 2007-11-14

| doi = 10.1145/276698.276781

| citeseerx = 10.1.1.409.9220}} and that this bound is tight.

{{cite journal

| title=An optimal construction of exactly min-wise independent permutations

| author1=Takei, Y. | author2 = Itoh, T. | author3 = Shinozaki, T.

| journal=Technical Report COMP98-62, IEICE, 1998

}}

Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families.

Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most {{mvar|k}}.{{cite journal

| author = Matoušek, J.

|author2=Stojakovic, M.

| year = 2002

| title = On Restricted Min-Wise Independence of Permutations

| journal = Preprint

| url = http://citeseer.ist.psu.edu/689217.html

| access-date = 2007-11-14

}}

Approximate min-wise independence differs from the property by at most a fixed {{mvar|ε}}.{{cite journal

| last1 = Saks | first1 = M. | author1-link = Michael Saks (mathematician)

|last2= Srinivasan| first2 = A.|last3= Zhou|first3= S.

|last4= Zuckerman | first4 = D.

| year = 2000

| title = Low discrepancy sets yield approximate min-wise independent permutation families

| journal = Information Processing Letters

| volume = 73

| issue = 1–2

| pages = 29–32

| url = http://citeseer.ist.psu.edu/saks99low.html

| access-date = 2007-11-14

| doi = 10.1016/S0020-0190(99)00163-5

| citeseerx = 10.1.1.20.8264 }}

=Open source methods=

==Nilsimsa Hash==

{{main|Nilsimsa Hash}}

Nilsimsa is a locality-sensitive hashing algorithm used in anti-spam efforts.{{cite web|author=Damiani |display-authors=et al|title=An Open Digest-based Technique for Spam Detection|year=2004|url=http://spdp.di.unimi.it/papers/pdcs04.pdf|access-date=2013-09-01}} The goal of Nilsimsa is to generate a hash digest of an email message such that the digests of two similar messages are similar to each other. The paper suggests that Nilsimsa satisfies three requirements:

  1. The digest identifying each message should not vary significantly for changes that can be produced automatically.
  2. The encoding must be robust against intentional attacks.
  3. The encoding should support an extremely low risk of false positives.

Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and Sdhash.{{cite journal|author=Oliver|display-authors=etal|title=TLSH - A Locality Sensitive Hash|url=https://www.academia.edu/7833902|journal=4th Cybercrime and Trustworthy Computing Workshop|date=2013|accessdate=2015-06-04}}

==TLSH==

TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic applications. The goal of TLSH is to generate hash digests for messages such that low distances between digests indicate that their corresponding messages are likely to be similar.

An implementation of TLSH is available as open-source software.{{cite web|url=https://github.com/trendmicro/tlsh |title=TLSH |website=GitHub |access-date=2014-04-10}}
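
A brief usage sketch with the Python bindings of that implementation (published on PyPI as py-tlsh; the exact API and minimum input requirements may differ between versions):

<syntaxhighlight lang="python">
# Usage sketch; assumes the open-source Python bindings ("py-tlsh").
# TLSH needs inputs of sufficient length and byte-level variation
# to produce a digest.
import random
import tlsh

random.seed(0)
base = bytes(random.randrange(256) for _ in range(1024))
other = bytes(random.randrange(256) for _ in range(512))

data1 = base
data2 = base[:512] + other  # shares half its content with data1

h1 = tlsh.hash(data1)       # hex-encoded digest
h2 = tlsh.hash(data2)

# Lower distance scores indicate more similar inputs;
# identical inputs give a distance of 0.
print(tlsh.diff(h1, h2))
</syntaxhighlight>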

=Random projection=

{{main|Random projection}}

[[File:Cosine-distance.png|thumb]]

The random projection method of LSH due to Moses Charikar, called SimHash (and sometimes referred to as arccos{{cite journal

| author1 = Alexandr Andoni

| author2-link = Piotr Indyk

| author2 = Indyk, P.

| year = 2008

| title = Near-Optimal Hashing Algorithms for Approximate Nearest Neighbor in High Dimensions

| journal = Communications of the ACM

| volume = 51

| number = 1

| pages = 117–122

| doi=10.1145/1327452.1327494

| citeseerx = 10.1.1.226.6905

| s2cid = 6468963

}}), uses an approximation of the cosine distance between vectors. The technique was used to approximate the NP-hard max-cut problem.

The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector {{mvar|r}}) at the outset and use the hyperplane to hash input vectors.

Given an input vector {{mvar|v}} and a hyperplane defined by {{mvar|r}}, we let h(v) = \sgn(v \cdot r). That is, h(v) = \pm 1 depending on which side of the hyperplane {{mvar|v}} lies. This way, each possible choice of a random hyperplane {{mvar|r}} can be interpreted as a hash function h(v).

For two vectors {{mvar|u,v}} with angle \theta(u,v) between them, it can be shown that

:Pr[h(u) = h(v)] = 1 - \frac{\theta(u,v)}{\pi}.

Since the ratio between \frac{\theta(u,v)}{\pi} and 1-\cos(\theta(u,v)) is at least 0.439 when \theta(u, v) \in [0, \pi],{{cite journal | last1=Goemans | first1=Michel X. | last2=Williamson | first2=David P. | title=Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming | journal=Journal of the ACM | publisher=Association for Computing Machinery (ACM) | volume=42 | issue=6 | year=1995 | issn=0004-5411 | doi=10.1145/227683.227684 | pages=1115–1145| s2cid=15794408 | doi-access=free }} the probability of two vectors being on different sides of the random hyperplane is approximately proportional to the cosine distance between them.
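
A minimal Python sketch of a single random-hyperplane hash (names are illustrative), empirically checking the collision probability 1 - \theta(u,v)/\pi:

<syntaxhighlight lang="python">
import math
import random

def random_hyperplane_hash(d):
    """Draw a random hyperplane normal r (spherically symmetric, via
    i.i.d. Gaussian components) and return h(v) = sgn(v · r)."""
    r = [random.gauss(0, 1) for _ in range(d)]
    return lambda v: 1 if sum(vi * ri for vi, ri in zip(v, r)) >= 0 else -1

u = [1.0, 0.0]
v = [1.0, 1.0]   # the angle between u and v is pi/4

trials = 10_000
hits = 0
for _ in range(trials):
    h = random_hyperplane_hash(2)
    hits += h(u) == h(v)

theta = math.pi / 4
print(hits / trials, 1 - theta / math.pi)  # both ≈ 0.75
</syntaxhighlight>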

=Stable distributions=

The hash function

{{cite journal

| author1 = Datar, M.

| author2 = Immorlica, N.|author2-link= Nicole Immorlica | author3-link = Piotr Indyk | author3 = Indyk, P. | author4 = Mirrokni, V.S.

| year=2004

| title = Locality-Sensitive Hashing Scheme Based on p-Stable Distributions

| url = http://theory.csail.mit.edu/~mirrokni/pstable.ps

| journal = Proceedings of the Symposium on Computational Geometry

}} h_{\mathbf{a},b} (\boldsymbol{\upsilon})\colon \mathcal{R}^d \to \mathcal{N} maps a {{mvar|d}}-dimensional vector \boldsymbol{\upsilon} onto the set of integers. Each hash function in the family is indexed by a choice of random \mathbf{a} and b, where \mathbf{a} is a {{mvar|d}}-dimensional vector with entries chosen independently from a stable distribution and b is a real number chosen uniformly from the range [0,r]. For a fixed \mathbf{a},b the hash function h_{\mathbf{a},b} is given by h_{\mathbf{a},b} (\boldsymbol{\upsilon}) = \left \lfloor \frac{\mathbf{a}\cdot \boldsymbol{\upsilon}+b}{r} \right \rfloor .
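
A minimal Python sketch of this family using the Gaussian distribution, which is 2-stable and therefore corresponds to Euclidean distance (the parameter choices are illustrative):

<syntaxhighlight lang="python">
import math
import random

def pstable_hash(d, r):
    """Draw h_{a,b}: the entries of a are i.i.d. standard Gaussian
    (the 2-stable case, matching Euclidean distance); b ~ U[0, r]."""
    a = [random.gauss(0, 1) for _ in range(d)]
    b = random.uniform(0, r)
    return lambda v: math.floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / r)

h = pstable_hash(d=3, r=4.0)
p = [0.0, 0.0, 0.0]
q_near = [0.1, -0.2, 0.1]
q_far = [5.0, 5.0, 5.0]
print(h(p) == h(q_near))  # usually True: nearby points share a bucket
print(h(p) == h(q_far))   # usually False: distant points rarely collide
</syntaxhighlight>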

Other construction methods for hash functions have been proposed to better fit the data.

{{cite journal

| author1 = Pauleve, L. | author2 = Jegou, H. | author3 = Amsaleg, L.

| year=2010

| title = Locality sensitive hashing: A comparison of hash function types and querying mechanisms

| url = http://hal.inria.fr/inria-00567191/en/

| journal = Pattern Recognition Letters

| volume = 31 | issue = 11 | pages = 1348–1358 | doi = 10.1016/j.patrec.2010.04.004 | bibcode = 2010PaReL..31.1348P | s2cid = 2666044 }}

In particular, k-means hash functions perform better in practice than projection-based hash functions, but without any theoretical guarantee.
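
As an illustration of the idea, not of the construction evaluated in the cited comparison, a data-dependent hash can be built by clustering a sample of the data and hashing each point to the index of its nearest centroid:

<syntaxhighlight lang="python">
import random

def dist2(x, y):
    """Squared Euclidean distance."""
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y))

def mean(points):
    return [sum(p[i] for p in points) / len(points)
            for i in range(len(points[0]))]

def kmeans_hash(sample, k, iters=20):
    """Learn k centroids from a data sample (naive Lloyd's algorithm) and
    return a hash mapping a vector to the index of its nearest centroid."""
    centroids = random.sample(sample, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for x in sample:
            j = min(range(k), key=lambda j: dist2(x, centroids[j]))
            buckets[j].append(x)
        centroids = [mean(b) if b else centroids[j]
                     for j, b in enumerate(buckets)]
    return lambda x: min(range(k), key=lambda j: dist2(x, centroids[j]))

random.seed(0)
sample = [[random.random(), random.random()] for _ in range(500)]
h = kmeans_hash(sample, k=8)
print(h([0.10, 0.10]), h([0.12, 0.09]))  # nearby points often share a bucket
</syntaxhighlight>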

=Semantic hashing=

Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have higher semantic similarity.{{Cite journal|last1=Salakhutdinov|first1=Ruslan|last2=Hinton|first2=Geoffrey|date=2008|title=Semantic hashing|journal=International Journal of Approximate Reasoning|language=en|volume=50|issue=7|pages=969–978|doi=10.1016/j.ijar.2008.11.006|doi-access=free}} The hashcodes are found via training of an artificial neural network or graphical model.{{citation needed|date=September 2021}}

=Improvements=

When {{mvar|t}}, the time needed to evaluate a single hash function from \mathcal F, is large, it is possible to reduce the time spent on hashing below the O(n^{\rho}) hash-function evaluations of the standard scheme.

This was shown by Dahlgaard, Søren, Mathias Bæk Tejs Knudsen, and Mikkel Thorup. [https://arxiv.org/pdf/1704.04370 "Fast similarity sketching."] 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS). IEEE, 2017, and by Christiani, Tobias. [https://arxiv.org/pdf/1708.07586 "Fast locality-sensitive hashing frameworks for approximate near neighbor search."] International Conference on Similarity Search and Applications. Springer, Cham, 2019, which gave

  • query time: O(t\log^2(1/P_2)/P_1 + n^{\rho}(d + 1/P_1));
  • space: O(n^{1+\rho}/P_1 + \log^2(1/P_2)/P_1);

It is also sometimes the case that the factor 1/P_1 can be very large.

This happens for example with Jaccard similarity data, where even the most similar neighbor often has a quite low Jaccard similarity with the query.

In Ahle, Thomas Dybdahl. "On the Problem of p_1^{-1} in Locality-Sensitive Hashing." International Conference on Similarity Search and Applications. Springer, Cham, 2020, it was shown how to reduce the query time to O(n^\rho/P_1^{1-\rho}) (not including hashing costs) and similarly the space usage.

See also

  • {{Annotated link |Bloom filter}}
  • {{Annotated link |Curse of dimensionality}}
  • {{Annotated link |Feature hashing}}
  • {{Annotated link |Fourier-related transforms}}
  • {{Annotated link |Geohash}}
  • {{Annotated link |Multilinear subspace learning}}
  • {{Annotated link |Principal component analysis}}
  • Random indexing (Gorman, James, and James R. Curran. [https://aclanthology.org/P06-1046.pdf "Scaling distributional similarity to large corpora."] Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2006.)
  • {{Annotated link |Rolling hash}}
  • {{Annotated link |Singular value decomposition}}
  • {{Annotated link |Sparse distributed memory}}
  • {{Annotated link |Wavelet compression}}

References

{{reflist}}

Further reading

  • Samet, H. (2006) Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann. {{ISBN|0-12-369446-9}}
  • {{Cite conference| last1 = Indyk | first1 = Piotr | author1-link= Piotr Indyk| last2 = Motwani | first2 = Rajeev | author2-link = Rajeev Motwani| last3 = Raghavan | first3 = Prabhakar| last4 = Vempala | first4 = Santosh | author4-link = Santosh Vempala| title = Locality-preserving hashing in multidimensional spaces | doi = 10.1145/258533.258656 | book-title = Proceedings of the twenty-ninth annual ACM symposium on Theory of computing| conference = STOC '97| pages = 618–625| year = 1997 | isbn = 978-0-89791-888-6| citeseerx = 10.1.1.50.4927| s2cid = 15693787 }}
  • {{Cite journal | last1 = Chin | first1 = Andrew| title = Locality-preserving hash functions for general purpose parallel computation | url = http://www.unclaw.com/chin/scholarship/hashfunctions.pdf| doi = 10.1007/BF01185209 | journal = Algorithmica| volume = 12 | issue = 2–3 | pages = 170–181 | year = 1994 | s2cid = 18108051}}