
{{Short description|Problem in computer science}}

In computer science, the '''count-distinct problem'''{{cite journal | last1=Ullman | first1=Jeff |author1-link=Jeffrey Ullman| last2 = Rajaraman | first2 = Anand | last3=Leskovec | first3=Jure |author3-link=Jure Leskovec| title=Mining data streams | url=http://infolab.stanford.edu/~ullman/mmds/ch4.pdf}} (also known in applied mathematics as the '''cardinality estimation problem''') is the problem of finding the number of distinct elements in a data stream with repeated elements.

This is a well-known problem with numerous applications. The elements might represent IP addresses of packets passing through a router, unique visitors to a web site, elements in a large database, motifs in a DNA sequence, or elements of RFID/sensor networks.

== Formal definition ==

: Instance: Consider a stream of elements x_1, x_2, \ldots, x_s with repetitions. Let n denote the number of distinct elements in the stream, with the set of distinct elements represented as \{e_1, e_2, \ldots, e_n\}.

: Objective: Find an estimate \widehat{n} of n using only m storage units, where m \ll n .

An example of an instance for the cardinality estimation problem is the stream: a,b,a,c,d,b,d . For this instance, n = |\left\{ {a,b,c,d}\right\}| = 4 .

== Naive solution ==

The naive solution to the problem is as follows:

# Initialize a counter, {{mvar|c}}, to zero: {{nowrap| c \leftarrow 0 }}.
# Initialize an efficient dictionary data structure, {{mvar|D}}, such as a hash table or a search tree, in which insertion and membership queries are fast.
# For each element x_i , issue a membership query:
#* If x_i is not a member of {{mvar|D}} ( x_i \notin D ): add x_i to {{mvar|D}} and increase {{mvar|c}} by one: {{nowrap| c \leftarrow c + 1 }}.
#* Otherwise ( x_i \in D ), do nothing.
# Output n = c .

As long as the number of distinct elements is not too big, {{mvar|D}} fits in main memory and an exact answer can be retrieved.

However, this approach does not scale for bounded storage, or if the computation performed for each element x_i should be minimized. In such a case, several streaming algorithms have been proposed that use a fixed number of storage units.
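The dictionary-based procedure above can be sketched in a few lines of Python (a minimal illustration; the function name is ours, and the built-in set plays the role of {{mvar|D}}):

```python
def count_distinct_naive(stream):
    # D: a hash-based set supporting fast insertion and membership queries.
    seen = set()
    c = 0
    for x in stream:
        if x not in seen:  # membership query
            seen.add(x)
            c += 1
    return c

# The example instance from above: n = |{a, b, c, d}| = 4.
print(count_distinct_naive(["a", "b", "a", "c", "d", "b", "d"]))  # prints 4
```

The memory used grows with the number of distinct elements, which is exactly what the streaming algorithms below avoid.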

== HyperLogLog algorithm ==

{{main|HyperLogLog}}

== Streaming algorithms ==

To handle the bounded storage constraint, streaming algorithms use randomization to produce a non-exact estimate \widehat{n} of the number of distinct elements, n.

State-of-the-art estimators hash every element e_j into a low-dimensional data sketch using a hash function, h(e_j) .

The different techniques can be classified according to the data sketches they store.

=== Min/max sketches ===

Min/max sketches{{cite journal | last1=Cosma | first1=Ioana A. | last2 = Clifford | first2 = Peter | year=2011 | title=A statistical analysis of probabilistic counting algorithms | journal=Scandinavian Journal of Statistics| arxiv=0801.3552 }}{{cite book | last1=Giroire | first1=Frederic | title=2007 Proceedings of the Fourth Workshop on Analytic Algorithmics and Combinatorics (ANALCO) | pages=223–231 | last2 = Fusy | first2 = Eric | year=2007 |citeseerx=10.1.1.214.270 |doi=10.1137/1.9781611972979.9| isbn=978-1-61197-297-9 }} store only the minimum or maximum hashed values. Examples of known min/max sketch estimators include the following. Chassaing et al.{{cite journal | last1=Chassaing | first1=Philippe | last2=Gerin | first2=Lucas | year=2006| title=Efficient estimation of the cardinality of large data sets | journal=Proceedings of the 4th Colloquium on Mathematics and Computer Science| bibcode=2007math......1347C | arxiv=math/0701347 }} present the max sketch, which is the minimum-variance unbiased estimator for the problem. The continuous max sketches estimator{{cite journal | last1=Cohen | first1=Edith |authorlink=Edith Cohen| year=1997| title=Size-estimation framework with applications to transitive closure and reachability | journal=J. Comput. Syst. Sci.| volume=55 | issue=3 | pages=441–453 | doi=10.1006/jcss.1997.1534 | doi-access=free }} is the maximum likelihood estimator. The estimator of choice in practice is the HyperLogLog algorithm.{{cite journal | last1=Flajolet | first1=Philippe|author1-link=Philippe Flajolet | last2 = Fusy | first2 = Eric | last3=Gandouet | first3=Olivier | last4=Meunier | first4=Frederic | title=HyperLoglog: the analysis of a near-optimal cardinality estimation algorithm | journal=Analysis of Algorithms |year=2007|url=https://hal.archives-ouvertes.fr/docs/00/40/61/66/PDF/FlFuGaMe07.pdf}}

The intuition behind such estimators is that each sketch carries information about the desired quantity. For example, when every element e_j is associated with a uniform random variable, h(e_j) \sim U(0,1) , the expected minimum value of h(e_1), h(e_2), \ldots, h(e_n) is 1/(n+1) . The hash function guarantees that h(e_j) is identical for all appearances of e_j ; thus, the existence of duplicates does not affect the value of the extreme order statistics.
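This intuition can be checked with a small sketch (illustrative only: the MD5-based hash standing in for h and the single-minimum estimate 1/\min - 1 are simplifications of what practical estimators do):

```python
import hashlib

def h(x):
    """Hash x to a pseudo-uniform value in [0, 1)."""
    digest = hashlib.md5(str(x).encode()).hexdigest()
    return int(digest, 16) / 2**128

def min_sketch_estimate(stream):
    """Invert E[min] = 1/(n + 1) to estimate n from the minimum hash."""
    return 1 / min(h(x) for x in stream) - 1

# Duplicates hash to identical values, so they cannot change the minimum:
assert min_sketch_estimate(["a", "b", "a", "c", "d", "b", "d"]) == \
       min_sketch_estimate(["a", "b", "c", "d"])
```

A single minimum has high variance; practical estimators combine many such observations, as HyperLogLog does.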

There are other estimation techniques other than min/max sketches. The first paper on count-distinct estimation{{cite journal | last1=Flajolet | first1=Philippe|author1-link=Philippe Flajolet | last2 = Martin | first2 = G. Nigel | year=1985 | title=Probabilistic counting algorithms for data base applications | journal=J. Comput. Syst. Sci. | volume=31| issue=2| pages=182–209| doi=10.1016/0022-0000(85)90041-8| url=https://hal.inria.fr/inria-00076244/file/RR-0313.pdf}} describes the Flajolet–Martin algorithm, a bit pattern sketch. In this case, the elements are hashed into a bit vector and the sketch holds the logical OR of all hashed values. The first asymptotically space- and time-optimal algorithm for this problem was given by Daniel M. Kane, Jelani Nelson, and David P. Woodruff.{{cite journal | last1=Kane | first1=Daniel M. | last2 = Nelson | first2 = Jelani | last3=Woodruff | first3=David P. | year=2010 | authorlink1=Daniel Kane (mathematician) | authorlink2=Jelani Nelson | title=An Optimal Algorithm for the Distinct Elements Problem | journal=Proceedings of the 29th Annual ACM Symposium on Principles of Database Systems (PODS)|url=https://dash.harvard.edu/bitstream/handle/1/13820438/f0.pdf;sequence=1}}
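The bit-pattern idea can be illustrated with a toy single-sketch version of Flajolet–Martin (a sketch under simplifying assumptions: the MD5-based hash is an arbitrary stand-in, and real implementations average many sketches to reduce variance):

```python
import hashlib

def fm_hash(x):
    """Hash x to a 32-bit integer."""
    return int(hashlib.md5(str(x).encode()).hexdigest(), 16) & 0xFFFFFFFF

def rho(v):
    """Position of the least significant 1-bit of v (0-based), capped at 32."""
    r = 0
    while v % 2 == 0 and r < 32:
        v //= 2
        r += 1
    return r

def fm_sketch(stream):
    """Logical OR of one-hot vectors 2**rho(h(x)); duplicates set the same bit."""
    sketch = 0
    for x in stream:
        sketch |= 1 << rho(fm_hash(x))
    return sketch

def fm_estimate(stream):
    """R = index of the lowest 0-bit of the sketch; estimate n as 2**R / 0.77351."""
    sketch, r = fm_sketch(stream), 0
    while sketch & (1 << r):
        r += 1
    return 2**r / 0.77351
```

Because the sketch is an OR of hashed values, repeated elements leave it unchanged, just as with the min/max sketches.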

=== Bottom-''m'' sketches ===

Bottom-''m'' sketches{{cite journal | last1=Cohen | first1=Edith|author1-link=Edith Cohen | last2 = Kaplan | first2 = Haim | year=2008 | title=Tighter estimation using bottom k sketches | journal=PVLDB | url=http://www.vldb.org/pvldb/1/1453884.pdf}} are a generalization of min sketches, which maintain the m minimal values, where m \geq 1 .

See Cosma et al. for a theoretical overview of count-distinct estimation algorithms, and Metwally et al.{{Citation | last1=Metwally | first1=Ahmed | last2=Agrawal | first2=Divyakant | last3=Abbadi | first3= Amr El | year=2008 | title=Why go logarithmic if we can go linear?: Towards effective distinct counting of search traffic | series=Proceedings of the 11th international conference on Extending Database Technology: Advances in Database Technology | pages=618–629 | citeseerx=10.1.1.377.4771 }} for a practical overview with comparative simulation results.
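A bottom-''m'' sketch can be sketched in a few lines (an illustration under simplifying assumptions: hashing via MD5 and the classical estimator (m-1)/u_{(m)} , where u_{(m)} is the m -th smallest distinct hash value):

```python
import hashlib

def h(x):
    """Hash x to a pseudo-uniform value in [0, 1)."""
    return int(hashlib.md5(str(x).encode()).hexdigest(), 16) / 2**128

def bottom_m_estimate(stream, m):
    """Keep the m smallest distinct hash values; a real implementation
    would maintain only those m values in a bounded heap while streaming."""
    smallest = sorted({h(x) for x in stream})[:m]
    if len(smallest) < m:
        return len(smallest)  # fewer than m distinct elements: count is exact
    return (m - 1) / smallest[-1]
```

Keeping m values instead of one trades a little memory for a substantially lower variance of the estimate.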

=== Python implementation of Knuth's CVM algorithm ===

<syntaxhighlight lang="python">
from random import uniform

def algorithm_d(stream, s: int):
    """Knuth's Algorithm D: estimate the number of distinct
    elements in stream using at most s stored samples."""
    p = 1.0
    buffer = []  # pairs [u, a]; sorting orders them by u
    for a in stream:
        u = uniform(0, 1)
        # Remove any earlier sample of a, so each element
        # is represented at most once.
        buffer = [x for x in buffer if x[1] != a]
        if u < p:
            if len(buffer) < s:
                buffer.append([u, a])
            else:
                buffer.sort()
                u_max = buffer[-1][0]
                if u > u_max:
                    p = u                # (a, u) is not inserted
                else:
                    p = u_max
                    buffer[-1] = [u, a]  # replace the maximum sample
    return len(buffer) / p
</syntaxhighlight>

=== CVM algorithm ===

Compared to other approximation algorithms for the count-distinct problem, the CVM algorithm{{Cite book |last1=Chakraborty |first1=Sourav |last2=Vinodchandran |first2=N. V. |last3=Meel |first3=Kuldeep S. |date=2022 |title=Distinct Elements in Streams: An Algorithm for the (Text) Book |series=Leibniz International Proceedings in Informatics (LIPIcs) |volume=244 |pages=6 pages, 727571 bytes |publisher=Schloss Dagstuhl – Leibniz-Zentrum für Informatik |doi=10.4230/LIPIcs.ESA.2022.34 |doi-access=free |arxiv=2301.10191 |isbn=978-3-95977-247-1 |issn=1868-8969}} (named by Donald Knuth after the initials of Sourav Chakraborty, N. V. Vinodchandran, and Kuldeep S. Meel) uses sampling instead of hashing. The CVM algorithm provides an unbiased estimator for the number of distinct elements in a stream, in addition to the standard (ε-δ) guarantees. Below is the CVM algorithm, including the slight modification by Donald Knuth.{{cite journal |last1=Knuth |first1=Donald |date=May 2023 |title=The CVM Algorithm for Estimating Distinct Elements in Streams |url=https://cs.stanford.edu/~knuth/papers/cvm-note.pdf |journal=}}

Initialize p \leftarrow 1

Initialize max buffer size s , where s \geq 1

Initialize an empty buffer, {{mvar|B}}

For each element a_t in data stream A of size n do:
: If (a_t, u) is in {{mvar|B}} for some u then
:: Delete (a_t, u) from {{mvar|B}}
: u \leftarrow random number in [0, 1)
: If u < p then
:: If |B| < s then
::: Insert (a_t, u) into {{mvar|B}}
:: else
::: Let (a', u') be the pair in {{mvar|B}} with the maximum u'
::: If u > u' then
:::: p \leftarrow u
::: else
:::: Replace (a', u') with (a_t, u)
:::: p \leftarrow u'
End For

return |B| / p .

The version above is improved by the following modification due to Donald Knuth, which adds a while loop that halves p whenever the buffer fills, ensuring that {{mvar|B}} never overflows.{{cite journal |last1=Knuth |first1=Donald |date=May 2023 |title=The CVM Algorithm for Estimating Distinct Elements in Streams |url=https://cs.stanford.edu/~knuth/papers/cvm-note.pdf |journal=}}

Initialize p \leftarrow 1

Initialize max buffer size s , where s \geq 1

Initialize an empty buffer, {{mvar|B}}

For each element a_t in data stream A of size n do:
: If a_t is in {{mvar|B}} then
:: Delete a_t from {{mvar|B}}
: u \leftarrow random number in [0, 1)
: If u < p then
:: Insert (a_t, u) into {{mvar|B}}
:: While |B| = s do
::: Remove every element (a', u') of {{mvar|B}} with u' \geq \frac{p}{2}
::: p \leftarrow \frac{p}{2}
:: End While
End For

return |B| / p .
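The modified algorithm can be sketched in Python (an illustrative sketch assuming the insert-then-purge formulation: insert when u < p , then repeatedly halve p and purge large tags while the buffer is full; a dict maps each buffered element to its random tag):

```python
from random import uniform

def cvm_knuth(stream, s):
    """Estimate the number of distinct elements in stream using
    a buffer of at most s (element, tag) samples."""
    p = 1.0
    buffer = {}  # element -> random tag u, with u < p for every entry
    for a in stream:
        buffer.pop(a, None)          # delete a_t from B if present
        u = uniform(0, 1)
        if u < p:
            buffer[a] = u
            while len(buffer) == s:  # buffer full: halve p and purge
                p /= 2
                buffer = {x: v for x, v in buffer.items() if v < p}
    return len(buffer) / p
```

When s exceeds the true number of distinct elements, the buffer never fills, p stays 1, and the estimate is exact; for example, `cvm_knuth(["a", "b", "a", "c", "d", "b", "d"], 10)` returns 4.0.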

== Weighted count-distinct problem ==

In its weighted version, each element is associated with a weight and the goal is to estimate the total sum of weights.

Formally,

: Instance: A stream of weighted elements x_1,x_2,\ldots,x_s with repetitions, and an integer m . Let n be the number of distinct elements, namely n = |\left\{ {x_1,x_2,\ldots,x_s}\right\}| , and let these elements be \left\{ {e_1,e_2,\ldots,e_n}\right\} . Finally, let w_j be the weight of e_j .

: Objective: Find an estimate \widehat{w} of w = \sum_{j=1}^{n}w_j using only m storage units, where m \ll n .

An example of an instance for the weighted problem is: a(3),b(4),a(3),c(2),d(3),b(4),d(3) . For this instance, e_1=a, e_2=b, e_3=c, e_4=d , the weights are w_1=3, w_2=4, w_3=2, w_4=3 and \sum{w_j}=12 .

As an application example, x_1,x_2,\ldots,x_s could be IP packets received by a server. Each packet belongs to one of n IP flows e_1,e_2,\ldots,e_n . The weight w_j can be the load imposed by flow e_j on the server. Thus, \sum_{j=1}^{n}{w_j} represents the total load imposed on the server by all the flows to which packets x_1,x_2,\ldots,x_s belong.

=== Solving the weighted count-distinct problem ===

Any extreme order statistics estimator (min/max sketches) for the unweighted problem can be generalized to an estimator for the weighted problem.{{cite journal | last1=Cohen | first1=Reuven | last2 = Katzir | first2 = Liran | last3=Yehezkel | first3=Aviv | year=2014| title=A Unified Scheme for Generalizing Cardinality Estimators to Sum Aggregation | journal=Information Processing Letters|doi=10.1016/j.ipl.2014.10.009 | volume=115 |issue=2| pages=336–342}}

For example, the weighted estimator proposed by Cohen et al. can be obtained when the continuous max sketches estimator is extended to solve the weighted problem.

In particular, the HyperLogLog algorithm can be extended to solve the weighted problem. The extended HyperLogLog algorithm offers the best performance, in terms of statistical accuracy and memory usage, among known algorithms for the weighted problem.
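The flavor of such generalizations can be conveyed with a small simulation (an illustrative sketch, not the exact scheme of the cited papers: each distinct element e_j receives a deterministic \mathrm{Exp}(w_j) tag, the minimum tag over distinct elements is \mathrm{Exp}(w) -distributed, and combining k independent minima yields an unbiased estimate of w ):

```python
import hashlib
import math

def exp_tag(x, w, seed):
    """Deterministic Exp(w)-distributed tag for element x: -ln(U)/w,
    with U in (0, 1] derived from a hash, so duplicates agree."""
    digest = hashlib.md5(f"{seed}:{x}".encode()).hexdigest()
    u = (int(digest, 16) + 1) / (2**128 + 1)
    return -math.log(u) / w

def weighted_estimate(stream, k=64):
    """stream: (element, weight) pairs. Each minimum tag is Exp(w) with
    w the sum of distinct weights, so the sum of k independent minima is
    Gamma(k, w)-distributed and (k - 1) / sum is an unbiased estimate of w."""
    total = sum(min(exp_tag(x, w, seed) for x, w in stream)
                for seed in range(k))
    return (k - 1) / total
```

Because the tags are deterministic per element, repeated occurrences cannot change any of the k minima, mirroring the duplicate-insensitivity of the unweighted sketches.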

== See also ==
* [[HyperLogLog]]
* [[Streaming algorithm]]

== References ==
{{Reflist}}