Prediction by partial matching
{{Short description|Data compression technique}}
{{More footnotes needed|date=November 2015}}
Prediction by partial matching (PPM) is an adaptive statistical data compression technique based on context modeling and prediction. PPM models use a set of previous symbols in the uncompressed symbol stream to predict the next symbol in the stream. PPM algorithms can also be used to cluster data into predicted groupings in cluster analysis.
Theory
Predictions are usually reduced to symbol rankings: each symbol (a letter, bit, or any other unit of data) is ranked by how likely the model judges it to appear next, and the ranking determines the corresponding codeword (and therefore the compression rate).
In many compression algorithms, the ranking is equivalent to estimating a probability mass function: given the previous letters (the context), each symbol is assigned a probability.
For instance, in arithmetic coding the symbols are ranked by their probabilities to appear after previous symbols, and the whole sequence is compressed into a single fraction that is computed according to these probabilities.
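As a concrete sketch of this ranking, the following snippet builds a toy counting model, not a full PPM implementation (the training string and the context length of 2 are illustrative assumptions), and ranks candidate next-symbols by their empirical conditional probability given the context:

```python
from collections import Counter, defaultdict

def rank_symbols(text, context_len=2):
    """Build a toy fixed-order model and return a function that ranks
    candidate next-symbols by empirical conditional probability."""
    counts = defaultdict(Counter)
    for i in range(context_len, len(text)):
        counts[text[i - context_len:i]][text[i]] += 1

    def rank(context):
        seen = counts.get(context, Counter())
        total = sum(seen.values())
        # Most probable symbols rank first and would receive the
        # shortest codewords.
        return [(sym, n / total) for sym, n in seen.most_common()]

    return rank

rank = rank_symbols("abracadabra")
print(rank("ab"))  # -> [('r', 1.0)]: 'r' always follows "ab" here
```

An arithmetic coder would then use these probabilities to narrow the interval representing the whole sequence.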
The number of previous symbols, n, determines the order of the PPM model, denoted PPM(n). Unbounded variants, where the context has no length limit, also exist and are denoted PPM*. If no prediction can be made from all n context symbols, a prediction is attempted with n − 1 symbols. This process repeats until a match is found or no symbols remain in the context, at which point a fixed prediction is made.
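The fallback from order n to order n − 1 can be sketched as follows. This is a simplified illustration assuming a plain table of context counts; a real PPM coder would also encode an escape symbol at each failed level so the decoder can follow the same fallback:

```python
from collections import Counter

def predict(counts, context, alphabet):
    """Return a probability distribution over the next symbol,
    shortening the context until a match is found (PPM fallback)."""
    while context:
        seen = counts.get(context)
        if seen:
            total = sum(seen.values())
            return {s: n / total for s, n in seen.items()}
        context = context[1:]  # drop the oldest symbol: order n -> n-1
    # No context of any length matched: make a fixed (uniform) prediction.
    return {s: 1 / len(alphabet) for s in alphabet}

counts = {"ab": Counter({"r": 2})}
print(predict(counts, "xab", "abr"))  # falls back to "ab" -> {'r': 1.0}
```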
Much of the work in optimizing a PPM model is handling symbols that have not yet occurred in the input stream. The obvious way to handle them is to reserve a "never-seen" symbol that triggers an escape to a shorter context. But what probability should be assigned to a symbol that has never been seen? This is called the [https://en.wikibooks.org/wiki/Data_Compression/The_zero-frequency_problem zero-frequency problem]. One variant uses the Laplace estimator, which assigns the "never-seen" symbol a fixed pseudocount of one. A variant called PPMd increments the pseudocount of the "never-seen" symbol every time it is used; in other words, PPMd estimates the probability of a new symbol as the ratio of the number of unique symbols to the total number of symbols observed.
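The two estimators just described can be written down directly. This is a minimal sketch: the function name and interface are illustrative, and the PPMd branch follows the simplified description above rather than Shkarin's full implementation:

```python
from collections import Counter

def escape_probability(seen, method="laplace"):
    """Probability assigned to the 'never-seen' (escape) symbol.

    `seen` is a Counter of symbols already observed in this context."""
    total = sum(seen.values())
    if method == "laplace":
        # Fixed pseudocount of 1 for the never-seen symbol.
        return 1 / (total + 1)
    # PPMd-style, as described above: unique symbols / total observed.
    return len(seen) / total

seen = Counter({"a": 3, "b": 1})
print(escape_probability(seen))           # Laplace: 1/(4+1) = 0.2
print(escape_probability(seen, "ppmd"))   # 2 unique / 4 total = 0.5
```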
Implementation
PPM compression implementations vary greatly in other details. The actual symbol selection is usually recorded using arithmetic coding, though it is also possible to use Huffman coding or even some type of dictionary coding technique. The underlying model used in most PPM algorithms can also be extended to predict multiple symbols, and non-Markov modeling can be used to replace or supplement Markov modeling. The symbol size is usually static, typically a single byte, which makes generic handling of any file format easy.
Published research on this family of algorithms can be found as far back as the mid-1980s. Software implementations were not popular until the early 1990s because PPM algorithms require a significant amount of RAM. Recent PPM implementations are among the best-performing lossless compression programs for natural language text.
PPMd is a public-domain implementation of PPMII (PPM with information inheritance) by Dmitry Shkarin, which has undergone several incompatible revisions.{{cite web |title=BMF, PPMd Всё о сжатии данных, изображений и видео |url=http://compression.ru/ds/ |website=compression.ru|language=ru}} It is used by default in the RAR file format, and it is also available in the 7z and zip file formats.
Attempts to improve PPM algorithms led to the PAQ series of data compression algorithms.
Beyond compression, a PPM model is used to predict user input in the alternative input method program Dasher, increasing the efficiency of text entry.
Sources
- {{Cite journal| last1 = Cleary | first1 = J.| last2 = Witten | first2 = I.| doi = 10.1109/TCOM.1984.1096090| title = Data Compression Using Adaptive Coding and Partial String Matching| journal = IEEE Trans. Commun.| volume = 32| issue = 4| pages = 396–402| date=April 1984 | citeseerx = 10.1.1.14.4305}}
- {{Cite journal| last1 = Moffat | first1 = A.| title = Implementing the PPM data compression scheme| doi = 10.1109/26.61469| journal = IEEE Trans. Commun.| volume = 38| issue = 11| pages = 1917–1921| date=November 1990 | citeseerx = 10.1.1.120.8728}}
- {{Cite journal| last1 = Cleary | first1 = J. G.| last2 = Teahan | first2 = W. J.| last3 = Witten | first3 = I. H.| title = Unbounded length contexts for PPM | journal = The Computer Journal | year = 1997| volume = 40 | issue = 2_and_3 | pages = 67–75 | publisher = Oxford University Press | location = Oxford, England | language = English | url = https://academic.oup.com/comjnl/article-abstract/40/2_and_3/67/360793 | issn = 0010-4620 | doi = 10.1093/comjnl/40.2_and_3.67}}
- C. Bloom, [http://www.cbloom.com/papers/ppmz.zip Solving the problems of context modeling].
- W.J. Teahan, [http://cotty.16x16.com/compress/peppm.htm Probability estimation for PPM], [https://web.archive.org/web/20010605194536/http://www.cs.waikato.ac.nz/~wjt/papers/NZCSRSC.ps.gz Original Source from archive.org].
- {{Cite journal| last1 = Schürmann | first1 = T.| last2 = Grassberger | first2 = P.| doi = 10.1063/1.166191| title = Entropy estimation of symbol sequences| journal = Chaos| volume = 6| pmid = 12780271| issue = 3| pages = 414–427| date=September 1996 | arxiv = cond-mat/0203436| bibcode = 1996Chaos...6..414S| s2cid = 10090433}}
References
{{Reflist}}
External links
{{External links|date=November 2015}}
- [http://mattmahoney.net/dc/text.html Suite of PPM compressors with benchmarks]
- [http://www3.sympatico.ca/mt0000/bicom/bicom.html BICOM, a bijective PPM compressor] {{Webarchive|url=https://web.archive.org/web/20040415161053/http://www3.sympatico.ca/mt0000/bicom/bicom.html |date=2004-04-15 }}
- [https://marknelson.us/posts/1991/02/01/arithmetic-coding-statistical-modeling-data-compression.html#part2 "Arithmetic Coding + Statistical Modeling = Data Compression", Part 2]
- {{in lang|ru}} [http://compression.ru/ds/ PPMd compressor] by Dmitry Shkarin
- [https://github.com/rene-puschinger/ppm PPM compression in C++] by René Puschinger
{{Compression Methods}}