{{Short description|Family of algorithms for sampling from discrete probability distributions}}
In computing, the '''alias method''' is a family of efficient algorithms for sampling from a discrete probability distribution, published in 1974 by Alastair J. Walker.<ref>{{Cite journal |doi=10.1049/el:19740097 |title=New fast method for generating discrete random numbers with arbitrary frequency distributions |journal=Electronics Letters |volume=10 |issue=8 |pages=127–128 |date=18 April 1974 |last1=Walker |first1=A. J. |bibcode=1974ElL....10..127W }}</ref><ref>{{Cite journal |doi=10.1145/355744.355749 |title=An Efficient Method for Generating Discrete Random Variables with General Distributions |journal=ACM Transactions on Mathematical Software |volume=3 |issue=3 |pages=253–256 |date=September 1977 |last1=Walker |first1=Alastair J. |s2cid=4522588 |doi-access=free }}</ref> That is, it returns integer values {{math|1 ≤ i ≤ n}} according to some arbitrary discrete probability distribution {{mvar|pi}}. The algorithms typically use {{math|O(n log n)}} or {{math|O(n)}} preprocessing time, after which random values can be drawn from the distribution in {{math|O(1)}} time.<ref>{{Cite journal |doi=10.1109/32.92917 |title=A linear algorithm for generating random numbers with a given distribution |journal=IEEE Transactions on Software Engineering |volume=17 |issue=9 |pages=972–975 |date=September 1991 |last1=Vose |first1=Michael D. |url=http://web.eecs.utk.edu/~vose/Publications/random.pdf |archive-url=https://web.archive.org/web/20131029203736/http://web.eecs.utk.edu/~vose/Publications/random.pdf |archive-date=2013-10-29 |citeseerx=10.1.1.398.3339 }}</ref>
Operation
Internally, the algorithm consults two tables, a probability table {{mvar|Ui}} and an alias table {{mvar|Ki}} (for {{math|1 ≤ i ≤ n}}). To generate a random outcome, a fair die is rolled to determine an index {{mvar|i}} into the two tables. A biased coin is then flipped, choosing a result of {{mvar|i}} with probability {{mvar|Ui}}, or {{mvar|Ki}} otherwise (probability {{math|1 − Ui}}).<ref>{{cite web |url=http://www.keithschwarz.com/darts-dice-coins/ |title=Darts, Dice, and Coins: Sampling from a Discrete Distribution |date=29 December 2011 |website=KeithSchwarz.com |access-date=2011-12-27}}</ref>
More concretely, the algorithm operates as follows (see the code sketch after this list):
- Generate a uniform random variate {{math|0 ≤ x < 1}}.
- Let {{math|1=i = ⌊nx⌋ + 1}} and {{math|1=y = nx + 1 − i}}. (This makes {{math|i}} uniformly distributed on {{math|{1, 2, ..., n} }} and {{math|y}} uniformly distributed on {{math|[0, 1)}}.)
- If {{math|y < Ui}}, return {{mvar|i}}. This is the biased coin flip.
- Otherwise, return {{mvar|Ki}}.
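A minimal sketch of this lookup, assuming the tables have already been built. The function name <code>alias_draw</code> and the use of Python with 0-based indices (so outcomes satisfy {{math|0 ≤ i < n}}) are illustrative choices, not part of any particular published implementation:

<syntaxhighlight lang="python">
import random

def alias_draw(U, K):
    """Draw one outcome using the alias tables U (probabilities) and K (aliases).

    Indices are 0-based, so the result lies in range(len(U)).
    """
    n = len(U)
    x = random.random()              # uniform variate in [0, 1)
    i = int(n * x)                   # fair die roll: table index, uniform on 0..n-1
    y = n * x - i                    # leftover randomness reused as a uniform variate in [0, 1)
    return i if y < U[i] else K[i]   # biased coin flip: keep i or take its alias
</syntaxhighlight>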
An alternative formulation of the probability table, proposed by Marsaglia et al.<ref name="marsaglia">{{Citation |first1=George |last1=Marsaglia |author-link1=George Marsaglia |first2=Wai Wan |last2=Tsang |first3=Jingbo |last3=Wang |title=Fast Generation of Discrete Random Variables |journal=Journal of Statistical Software |date=2004-07-12 |volume=11 |issue=3 |pages=1–11 |doi=10.18637/jss.v011.i03 |doi-access=free |url=https://www.researchgate.net/publication/5142858}}</ref> as the ''square histogram'' method, avoids the computation of {{mvar|y}} by instead checking the condition {{math|1=x < Vi = (Ui + i − 1)/n}} in the third step.
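With 0-based indices the precomputed table becomes {{math|1=Vi = (Ui + i)/n}}, and the lookup needs only a single comparison against {{mvar|x}}. A sketch under the same assumptions as the listing above:

<syntaxhighlight lang="python">
import random

def square_histogram_draw(V, K):
    """Draw one outcome using Marsaglia's square-histogram formulation.

    V[i] = (U[i] + i) / n with 0-based indices, so y is never computed.
    """
    n = len(V)
    x = random.random()
    i = int(n * x)
    return i if x < V[i] else K[i]
</syntaxhighlight>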
Table generation
The distribution may be padded with additional probabilities {{math|1=pi = 0}} to increase {{mvar|n}} to a convenient value, such as a power of two.
To generate the two tables, first initialize {{math|1=Ui = npi}}. While doing this, divide the table entries into three categories:
- The "overfull" group, where {{math|Ui > 1}},
- The "underfull" group, where {{math|Ui < 1}} and {{mvar|Ki}} has not been initialized, and
- The "exactly full" group, where {{math|1=Ui = 1}} or {{mvar|Ki}} has been initialized.
If {{math|1=Ui = 1}}, the corresponding value {{mvar|Ki}} will never be consulted and is unimportant, but a value of {{math|1=Ki = i}} is sensible. This also avoids problems if the probabilities are represented as fixed-point numbers which cannot represent {{math|1=Ui = 1}} exactly.
As long as not all table entries are exactly full, repeat the following steps:
- Arbitrarily choose an overfull entry {{math|Ui > 1}} and an underfull entry {{math|Uj < 1}}. (If one exists, the other must as well, because the {{mvar|Ui}} of the entries that are not yet exactly full always average to exactly 1.)
- Allocate the unused space in entry {{mvar|j}} to outcome {{mvar|i}}, by setting {{math|1=Kj ← i}}.
- Remove the allocated space from entry {{mvar|i}} by changing {{math|1=Ui ← Ui − (1 − Uj) = Ui + Uj − 1}}.
- Entry {{mvar|j}} is now exactly full.
- Assign entry {{mvar|i}} to the appropriate category based on the new value of {{mvar|Ui}}.
Each iteration moves at least one entry to the "exactly full" category (and the last moves two), so the procedure is guaranteed to terminate after at most {{math|n − 1}} iterations. Each iteration can be done in {{math|O(1)}} time, so the table can be set up in {{math|O(n)}} time.
Because of the arbitrary choice in step 1, the alias structure is not unique.
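The construction can be sketched as follows. This is one possible implementation (in Python with 0-based indices; the function name <code>build_alias_tables</code> is chosen for illustration), in which the arbitrary choice of which overfull and underfull entries to pair is made by simply taking the last element of each worklist:

<syntaxhighlight lang="python">
def build_alias_tables(p):
    """Build the probability table U and alias table K for a distribution p.

    p is a sequence of probabilities summing to 1; indices are 0-based.
    """
    n = len(p)
    U = [n * prob for prob in p]   # scaled probabilities, U[i] = n * p[i]
    K = list(range(n))             # sensible default alias: K[i] = i

    # Partition indices into the overfull and underfull groups.
    overfull = [i for i in range(n) if U[i] > 1]
    underfull = [i for i in range(n) if U[i] < 1]

    while overfull and underfull:
        i = overfull.pop()         # an arbitrary overfull entry
        j = underfull.pop()        # an arbitrary underfull entry
        K[j] = i                   # give entry j's unused space to outcome i
        U[i] += U[j] - 1           # remove that space from entry i
        if U[i] > 1:
            overfull.append(i)     # entry i is still overfull
        elif U[i] < 1:
            underfull.append(i)    # entry i has itself become underfull

    return U, K
</syntaxhighlight>

With floating-point probabilities, rounding error can leave one group empty while the other still holds entries whose {{mvar|Ui}} differs marginally from 1; terminating as soon as either group is empty, as above, simply treats such leftovers as exactly full.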
As the lookup procedure is slightly faster if {{math|y < Ui}} (because {{mvar|Ki}} does not need to be consulted), one goal during table generation is to maximize the sum of the {{mvar|Ui}}. Doing this optimally turns out to be NP-hard,{{r|marsaglia|p=6}} but a greedy algorithm comes reasonably close: rob from the richest and give to the poorest. That is, at each step choose the largest {{mvar|Ui}} and the smallest {{mvar|Uj}}. Because this requires sorting the {{mvar|Ui}}, it takes {{math|O(n log n)}} time.
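One way to realize this greedy rule is to keep the overfull and underfull entries in heaps, so that the largest {{mvar|Ui}} and smallest {{mvar|Uj}} can be extracted in {{math|O(log n)}} time per iteration, {{math|O(n log n)}} overall. A sketch under the same conventions as the previous listing (the function name is again illustrative):

<syntaxhighlight lang="python">
import heapq

def build_alias_tables_greedy(p):
    """Greedy table construction: pair the largest U with the smallest at each step."""
    n = len(p)
    U = [n * prob for prob in p]
    K = list(range(n))

    # Min-heap of underfull entries; max-heap of overfull entries via negated keys.
    underfull = [(U[i], i) for i in range(n) if U[i] < 1]
    overfull = [(-U[i], i) for i in range(n) if U[i] > 1]
    heapq.heapify(underfull)
    heapq.heapify(overfull)

    while overfull and underfull:
        _, i = heapq.heappop(overfull)     # richest entry (largest U)
        uj, j = heapq.heappop(underfull)   # poorest entry (smallest U)
        K[j] = i
        U[i] += uj - 1
        if U[i] > 1:
            heapq.heappush(overfull, (-U[i], i))
        elif U[i] < 1:
            heapq.heappush(underfull, (U[i], i))

    return U, K
</syntaxhighlight>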
Efficiency
Although the alias method is very efficient if generating a uniform deviate is itself fast, there are cases where it is far from optimal in terms of random bit usage. This is because it uses a full-precision random variate {{mvar|x}} each time, even when only a few random bits are needed.
One case arises when the probabilities are particularly well balanced, so that many {{math|1=Ui = 1}}. For these values of {{mvar|i}}, {{mvar|Ki}} is not needed and generating {{mvar|y}} is a waste of time. For example, if {{math|1=p1 = p2 = {{frac|1|2}}}}, then a 32-bit random variate {{mvar|x}} contains enough randomness for 32 outputs, but the alias method generates only one.
Another case arises when the probabilities are strongly unbalanced, so that many {{math|Ui ≈ 0}}. For example, if {{math|1=p1 = 0.999}} and {{math|1=p2 = 0.001}}, then the great majority of the time only a few random bits are needed to determine that outcome 1 applies.
In such cases, the table method described by Marsaglia et al.{{r|marsaglia|p=1–4}} is more efficient. When many choices are made from the same distribution, the average number of unbiased random bits required per choice can be much less than one; using arithmetic coding techniques, it can approach the limit given by the binary entropy function.
Literature
- Donald Knuth, ''The Art of Computer Programming'', Vol. 2: ''Seminumerical Algorithms'', section 3.4.1.
Implementations
- [http://www.keithschwarz.com/darts-dice-coins/ Darts, Dice, and Coins] – Keith Schwarz: detailed explanation, a numerically stable version of Vose's algorithm, and a link to a Java implementation
- [https://jugit.fz-juelich.de/mlz/ransampl ransampl] – Joachim Wuttke: implementation as a small C library
- [https://gist.github.com/0b5786e9bfc73e75eb8180b5400cd1f8 Liam Huang's implementation in C++]
- [https://github.com/joseftw/jos.weightedresult/blob/develop/src/JOS.WeightedResult/AliasMethodVose.cs AliasMethodVose.cs] – C# implementation of Vose's algorithm
- [https://github.com/cdanek/KaimiraWeightedList KaimiraWeightedList] – C# implementation of Vose's algorithm without floating-point instability
References
{{reflist}}