Unary coding
{{Short description|Entropy encoding}}
Unary coding,<ref group="nb" name="German_term" /> or the unary numeral system, is an entropy encoding that represents a natural number, n, with n ones followed by a zero (if the term natural number is understood as non-negative integer) or with n − 1 ones followed by a zero (if the term natural number is understood as strictly positive integer). The code length is therefore n + 1 under the first definition and n under the second. When written vertically, a unary code rises and falls with n like the mercury in a thermometer, and so it is sometimes called a thermometer code.<ref>{{Cite web |title=University of Alberta Dictionary of Cognitive Science: Thermometer Code |url=http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/Dictionary/contents/T/thermcode.html |access-date=2025-05-31 |website=www.bcp.psych.ualberta.ca}}</ref> An alternative representation uses n or n − 1 zeros followed by a one, effectively swapping the roles of ones and zeros, without loss of generality. For example, the first ten unary codes are:
{| class="wikitable"
! Unary code !! Alternative !! n (non-negative) !! n (strictly positive)
|-
| 0 || 1 || 0 || 1
|-
| 10 || 01 || 1 || 2
|-
| 110 || 001 || 2 || 3
|-
| 1110 || 0001 || 3 || 4
|-
| 11110 || 00001 || 4 || 5
|-
| 111110 || 000001 || 5 || 6
|-
| 1111110 || 0000001 || 6 || 7
|-
| 11111110 || 00000001 || 7 || 8
|-
| 111111110 || 000000001 || 8 || 9
|-
| 1111111110 || 0000000001 || 9 || 10
|}
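The encoding and decoding rules above can be summarised in a short sketch (illustrative only; it uses the non-negative convention with a terminating zero, and the function names are not from any particular library):

<syntaxhighlight lang="python">
def unary_encode(n: int) -> str:
    """Encode a non-negative integer n as n ones followed by a terminating zero."""
    return "1" * n + "0"

def unary_decode(code: str) -> int:
    """Decode a single unary code word by counting the ones before the first zero."""
    return code.index("0")

codes = [unary_encode(n) for n in range(5)]
print(codes)                              # ['0', '10', '110', '1110', '11110']
print([unary_decode(c) for c in codes])   # [0, 1, 2, 3, 4]
</syntaxhighlight>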
Unary coding is an optimally efficient{{Clarify|reason=This term needs some introduction.|date=May 2025}} encoding for the following discrete probability distribution{{Citation needed|date=May 2025|reason=Citation needed for this strong claim and to explain this term "optimally efficient".}}
:<math>P(n) = 2^{-n}</math>
for <math>n = 1, 2, 3, \dots</math>.
In symbol-by-symbol coding, it is optimal for any geometric distribution
:<math>P(n) = (k - 1)k^{-n}</math>
for which <math>k \geq \varphi = 1.6180339887\ldots</math>, the golden ratio, or, more generally, for any discrete distribution for which
:<math>P(n) \geq P(n+1) + P(n+2)</math>
for <math>n = 1, 2, 3, \dots</math>. Although it is the optimal symbol-by-symbol coding for such probability distributions, Golomb coding achieves better compression capability for the geometric distribution because it does not consider input symbols independently, but rather implicitly groups the inputs. For the same reason, arithmetic encoding performs better for general probability distributions, as in the last case above.
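Under the strictly positive convention the code for n is exactly n bits long, which equals −log<sub>2</sub> P(n) when P(n) = 2<sup>−n</sup>, so the expected code length coincides with the entropy of that distribution. The following numerical check is a sketch only (the truncation point N is arbitrary):

<syntaxhighlight lang="python">
import math

# P(n) = 2**-n for n = 1, 2, 3, ...; truncate at N, the remaining tail is negligible.
N = 60
probs = {n: 2.0 ** -n for n in range(1, N + 1)}

entropy = -sum(p * math.log2(p) for p in probs.values())   # ≈ 2 bits
expected_length = sum(n * p for n, p in probs.items())     # unary code length of n is n bits

print(f"entropy              ≈ {entropy:.6f} bits per symbol")
print(f"expected code length ≈ {expected_length:.6f} bits per symbol")
</syntaxhighlight>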
Unary coding is both a prefix-free code and a self-synchronizing code.
== Unary code in use today ==
Examples of unary code uses include:
* In Golomb–Rice coding, unary encoding is used to encode the quotient part of the Golomb code word.
* In UTF-8, unary encoding is used in the leading byte of a multi-byte sequence to indicate the number of bytes in the sequence, so that the length of the sequence can be determined without examining the continuation bytes (see the sketch after this list).
* Instantaneously trained neural networks use unary coding for efficient data representation.
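For the UTF-8 case above, the length of a multi-byte sequence can be recovered by counting the leading one bits of the first byte, as in the following sketch (not a full UTF-8 decoder; the function name is illustrative):

<syntaxhighlight lang="python">
def utf8_sequence_length(lead_byte: int) -> int:
    """Read the unary prefix of a UTF-8 lead byte to get the sequence length in bytes.

    0xxxxxxx -> 1 byte, 110xxxxx -> 2 bytes, 1110xxxx -> 3 bytes, 11110xxx -> 4 bytes.
    """
    ones, mask = 0, 0x80
    while lead_byte & mask:   # count the leading one bits
        ones += 1
        mask >>= 1
    return 1 if ones == 0 else ones

for b in (0x41, 0xC3, 0xE2, 0xF0):   # 'A' and lead bytes of 2-, 3- and 4-byte sequences
    print(f"{b:#04x} starts a {utf8_sequence_length(b)}-byte sequence")
</syntaxhighlight>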
== Unary coding in biological networks ==
Unary coding is used in the neural circuits responsible for birdsong production. The nucleus in the brain of songbirds that plays a part in both the learning and the production of birdsong is the HVC (high vocal center). The command signals for different notes in the birdsong emanate from different points in the HVC. This coding works as a space code, an efficient strategy for biological circuits due to its inherent simplicity and robustness.<ref name="Fiete_Seung_2007" />
== Standard run-length unary codes ==
All binary data can be described as a sequence of alternating run-lengths of 1s and 0s. Each run conforms to the standard definition of unary, i.e. N digits of the same value, 1 or 0. By definition every run-length contains at least one digit, so run-lengths represent strictly positive integers.
{| class="wikitable"
! n !! RL code !! Next code
|-
| 1 || 1 || 0
|-
| 2 || 11 || 00
|-
| 3 || 111 || 000
|-
| 4 || 1111 || 0000
|-
| 5 || 11111 || 00000
|-
| 6 || 111111 || 000000
|-
| 7 || 1111111 || 0000000
|-
| 8 || 11111111 || 00000000
|-
| 9 || 111111111 || 000000000
|-
| 10 || 1111111111 || 0000000000
|-
| colspan="3" | ...
|}
When reading arbitrary data, these codes are guaranteed to terminate validly at any data length. When writing, the bit value chosen for the first run can itself carry an extra bit of information, while the overall and per-integer unary code lengths remain exactly N.
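A sketch of reading such a stream of alternating run-lengths (the function name is illustrative):

<syntaxhighlight lang="python">
from itertools import groupby

def read_run_lengths(bits: str) -> list[int]:
    """Split a bit string into alternating runs of identical bits and return their lengths."""
    return [len(list(run)) for _, run in groupby(bits)]

# Runs 111, 00, 1, 0000 encode the strictly positive integers 3, 2, 1, 4.
# The value of the very first bit (here 1) is free to carry one extra bit of information.
print(read_run_lengths("1110010000"))   # [3, 2, 1, 4]
</syntaxhighlight>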
== Uniquely decodable non-prefix unary codes ==
The following is an example of a uniquely decodable unary code that is not a prefix code and is not instantaneously decodable (it [http://www.cs.ucf.edu/courses/cap5015/Huff.pdf requires look-ahead to decode]):
{| class="wikitable"
! n !! Unary code !! Alternative
|-
| 1 || 1 || 0
|-
| 2 || 10 || 01
|-
| 3 || 100 || 011
|-
| 4 || 1000 || 0111
|-
| 5 || 10000 || 01111
|-
| 6 || 100000 || 011111
|-
| 7 || 1000000 || 0111111
|-
| 8 || 10000000 || 01111111
|-
| 9 || 100000000 || 011111111
|-
| 10 || 1000000000 || 0111111111
|-
| colspan="3" | ...
|}
These codes also (when writing unsigned integers) allow for the use and transmission of an extra bit (the bit value used for the first code). They are thus able to transmit m integers of N unary bits each plus 1 additional bit of information within m·N bits of data.
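A decoding sketch for the primary form above ('1' followed by n − 1 zeros); each code word ends just before the next '1', so one bit of look-ahead is required (the function name is illustrative):

<syntaxhighlight lang="python">
def decode_with_lookahead(bits: str) -> list[int]:
    """Decode a stream of the codes above: '1' followed by n - 1 zeros encodes n."""
    values, i = [], 0
    while i < len(bits):
        j = i + 1
        while j < len(bits) and bits[j] == "0":   # the word continues until the next '1'
            j += 1
        values.append(j - i)                      # word length = n
        i = j
    return values

# the code words for 1, 2, 3 and 4 concatenated
print(decode_with_lookahead("1" + "10" + "100" + "1000"))   # [1, 2, 3, 4]
</syntaxhighlight>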
== Symmetric unary codes ==
The following symmetric unary codes can be read and instantaneously decoded in either direction:
{| class="wikitable" style="text-align: center;"
! Unary code !! Alternative !! n (non-negative) !! n (strictly positive)
|-
| 1 || 0 || 0 || 1
|-
| 00 || 11 || 1 || 2
|-
| 010 || 101 || 2 || 3
|-
| 0110 || 1001 || 3 || 4
|-
| 01110 || 10001 || 4 || 5
|-
| 011110 || 100001 || 5 || 6
|-
| 0111110 || 1000001 || 6 || 7
|-
| 01111110 || 10000001 || 7 || 8
|-
| 011111110 || 100000001 || 8 || 9
|-
| 0111111110 || 1000000001 || 9 || 10
|-
| colspan="4" | ...
|}
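A decoding sketch for the strictly positive column of the table ('1' for 1, otherwise a zero, n − 2 ones and a closing zero); because each code word is a palindrome, the same logic decodes the stream from either end (the function name is illustrative):

<syntaxhighlight lang="python">
def decode_symmetric(bits: str, reverse: bool = False) -> list[int]:
    """Decode a stream of the symmetric codes: '1' encodes 1, '0' + ones + '0' encodes n >= 2."""
    if reverse:
        bits = bits[::-1]      # the code words are palindromes, so the same rules apply
    values, i = [], 0
    while i < len(bits):
        if bits[i] == "1":     # the single-bit code word for 1
            values.append(1)
            i += 1
        else:
            j = i + 1
            while bits[j] == "1":      # count the ones between the two zeros
                j += 1
            values.append(j - i + 1)   # n = (number of ones) + 2
            i = j + 1
    return values

stream = "1" + "010" + "0110"                   # the code words for 1, 3, 4
print(decode_symmetric(stream))                 # [1, 3, 4]
print(decode_symmetric(stream, reverse=True))   # [4, 3, 1]
</syntaxhighlight>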
== Canonical unary codes ==
{{See also|Canonical Huffman code}}
For unary values where the maximum is known, one can use canonical unary codes, which are somewhat numerical in nature rather than character based. The largest n is assigned the numerical value '0' (or '−1', i.e. all ones, in the alternative form) with the maximum number of digits; each subsequent step then reduces the number of digits by one and increases/decreases the result by numerical '1'.{{Clarify|date=May 2025|reason=(2) Are canonical codes only for the positive natural number convention? (3) Shouldn't this say that we are starting with the largest n for this 0 or 2^(n-1) code? (4) Why use red to write the final code (in this example corresponding to n=10)? Does it belong or not belong to the canonical format? (6) Why is there an extra row after the n=10 row? (Shouldn't we delete that empty row to indicate that there are no more codes after the maximum?) (7) Is the maximum code corresponding to n=10 or n=9, which this table both uses 9 digits to express. (We should delete this n=10 row if is beyond the maximum...the algorithm part about "reducing the number of digits by one" would only make sense if 9 is the maximum.)}}
{| class="wikitable"
! n !! Unary code !! Alternative
|-
| 1 || 1 || 0
|-
| 2 || 01 || 10
|-
| 3 || 001 || 110
|-
| 4 || 0001 || 1110
|-
| 5 || 00001 || 11110
|-
| 6 || 000001 || 111110
|-
| 7 || 0000001 || 1111110
|-
| 8 || 00000001 || 11111110
|-
| 9 || 000000001 || 111111110
|- style="color: red;"
| 10 || 000000000 || 111111111
|-
| colspan="3" |
|}
Canonical codes can [http://www.cs.ucf.edu/courses/cap5015/Huff.pdf require less processing time to decode]{{Clarify|reason=Citation deals with Canonical Huffman code. Is the statement relevant only for when dealing with Huffman encoding, or is this a general statement about Canonical unary code?|date=May 2025}} when they are processed as numbers rather than as strings. If more than one code of a given symbol length is required, i.e. some lengths need additional non-unary codes, those are obtained by increasing or decreasing the values numerically without reducing the length in that case.
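A sketch of decoding the canonical codes in the table above numerically, assuming the maximum value is known in advance (the names and the choice of maximum are illustrative):

<syntaxhighlight lang="python">
def decode_canonical(bits: str, max_n: int = 10) -> list[int]:
    """Decode the canonical unary codes of the table above for a known maximum max_n."""
    values, value, length = [], 0, 0
    for b in bits:
        value = (value << 1) | int(b)   # build up the current code word as a number
        length += 1
        if value == 1:                  # a word of 'length' bits ending in 1 encodes n = length
            values.append(length)
            value, length = 0, 0
        elif length == max_n - 1:       # the all-zero word of maximum length encodes max_n
            values.append(max_n)
            value, length = 0, 0
    return values

# the code words for 1, 3 and the maximum 10 concatenated
print(decode_canonical("1" + "001" + "000000000"))   # [1, 3, 10]
</syntaxhighlight>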
== Generalized unary coding ==
A generalized version of unary coding was presented by Subhash Kak to represent numbers much more efficiently than standard unary coding. Below is an example of generalized unary coding for integers from 0 through 15 that requires only 7 bits, where a run of three bits (chosen arbitrarily) takes the place of the single one used in standard unary to mark the number. Note that the representation is cyclic: markers are used to represent higher integers in higher cycles.
{| class="wikitable"
! n !! Unary code !! Generalized unary
|-
| 0 || 0 || 0000000
|-
| 1 || 10 || 0000111
|-
| 2 || 110 || 0001110
|-
| 3 || 1110 || 0011100
|-
| 4 || 11110 || 0111000
|-
| 5 || 111110 || 1110000
|-
| 6 || 1111110 || 0010111
|-
| 7 || 11111110 || 0101110
|-
| 8 || 111111110 || 1011100
|-
| 9 || 1111111110 || 0111001
|-
| 10 || 11111111110 || 1110010
|-
| 11 || 111111111110 || 0100111
|-
| 12 || 1111111111110 || 1001110
|-
| 13 || 11111111111110 || 0011101
|-
| 14 || 111111111111110 || 0111010
|-
| 15 || 1111111111111110 || 1110100
|}
Generalized unary coding requires that the range of numbers to be represented be pre-specified, because this range determines the number of bits that are needed.
== See also ==
== Notes ==
{{Reflist|group="nb"|refs=
<ref group="nb" name="German_term">The equivalent of the term "unary coding" in German scientific literature is "{{lang|de|BCD-Zählcode}}", which translates to "binary-coded decimal counting code". This must not be confused with the similar German term "{{lang|de|BCD-Code}}", which translates to BCD code in English.</ref>
}}
== References ==
{{Reflist|refs=
<ref name="Fiete_Seung_2007">{{cite book |author-last1=Fiete |author-first1=I. R. |author-first2=H. S. |author-last2=Seung |chapter=Neural network models of birdsong production, learning, and coding |title=New Encyclopedia of Neuroscience |editor-first1=L. |editor-last1=Squire |editor-first2=T. |editor-last2=Albright |editor-first3=F. |editor-last3=Bloom |editor-first4=F. |editor-last4=Gage |editor-first5=N. |editor-last5=Spitzer |publisher=Elsevier |date=2007}}</ref>
}}
{{Compression Methods}}