Kraft–McMillan inequality

{{Short description|Concept in coding theory}}

In coding theory, the Kraft–McMillan inequality gives a necessary and sufficient condition for the existence of a prefix code (in Leon G. Kraft's version) or a uniquely decodable code (in Brockway McMillan's version) for a given set of codeword lengths. Its applications to prefix codes and trees appear throughout computer science and information theory. The prefix code can contain either finitely many or infinitely many codewords.

Kraft's inequality was published in {{harvtxt|Kraft|1949}}. However, Kraft's paper discusses only prefix codes, and attributes the analysis leading to the inequality to Raymond Redheffer. The result was independently discovered in {{harvtxt|McMillan|1956}}. McMillan proves the result for the general case of uniquely decodable codes, and attributes the version for prefix codes to a spoken observation in 1955 by Joseph Leo Doob.

Applications and intuitions

Kraft's inequality limits the lengths of codewords in a prefix code: if one raises the alphabet size to the negative of each codeword's length, the resulting set of values must behave like a probability mass function, that is, it must have total measure less than or equal to one. Kraft's inequality can be thought of in terms of a constrained budget to be spent on codewords, with shorter codewords being more expensive. Among the useful properties following from the inequality are the following statements (a short computational sketch after the list illustrates them):

  • If Kraft's inequality holds with strict inequality, the code has some redundancy.
  • If Kraft's inequality holds with equality, the code in question is a complete code.
  • If Kraft's inequality does not hold, the code is not uniquely decodable.
  • For every uniquely decodable code, there exists a prefix code with the same length distribution.
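
These statements can be checked mechanically from the Kraft sum itself. The following sketch is illustrative only; the codeword lengths and the binary alphabet are assumed examples, not taken from the sources cited here. It computes \sum_i r^{-\ell_i} exactly and reports which of the cases above applies.

<syntaxhighlight lang="python">
from fractions import Fraction

def kraft_sum(lengths, r=2):
    """Exact Kraft sum  sum_i r**(-l_i)  for the given codeword lengths."""
    return sum(Fraction(1, r ** l) for l in lengths)

def classify(lengths, r=2):
    """Classify a set of codeword lengths by the Kraft-McMillan criterion."""
    s = kraft_sum(lengths, r)
    if s > 1:
        return "no uniquely decodable code has these lengths"
    if s == 1:
        return "realizable; any such prefix code is complete"
    return "realizable; any such code has some redundancy"

# Assumed example lengths over a binary alphabet (r = 2):
print(classify([2, 3, 3, 3, 3]))  # sum = 3/4 < 1  -> redundancy
print(classify([1, 2, 3, 3]))     # sum = 1        -> complete
print(classify([1, 1, 2]))        # sum = 5/4 > 1  -> impossible
</syntaxhighlight>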

Formal statement

Let each source symbol from the alphabet

:S=\{\,s_1,s_2,\ldots,s_n\,\}

be encoded into a uniquely decodable code over an alphabet of size r with codeword lengths

:\ell_1,\ell_2,\ldots,\ell_n.

Then

:\sum_{i=1}^{n} r^{-\ell_i} \leqslant 1.

Conversely, for a given set of natural numbers \ell_1,\ell_2,\ldots,\ell_n satisfying the above inequality, there exists a uniquely decodable code over an alphabet of size r with those codeword lengths.
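
For example, over a binary alphabet (r = 2) the lengths 1, 2, 3, 3 satisfy

:2^{-1} + 2^{-2} + 2^{-3} + 2^{-3} = 1 \leqslant 1

and are realized by the prefix code \{0, 10, 110, 111\}, whereas the lengths 1, 1, 2 give 2^{-1} + 2^{-1} + 2^{-2} = \tfrac{5}{4} > 1, so no uniquely decodable code can have these lengths.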

Example: binary trees

[[Image:AVLtreef.svg|thumb|right]]

Any binary tree can be viewed as defining a prefix code for the leaves of the tree. Kraft's inequality states that

: \sum_{\ell \in \text{leaves}} 2^{-\text{depth}(\ell)} \leqslant 1.

Here the sum is taken over the leaves of the tree, i.e. the nodes without any children. The depth is the distance to the root node. In the tree to the right, this sum is

: \frac{1}{4} + 4 \left( \frac{1}{8} \right) = \frac{3}{4} \leqslant 1.
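
The same sum can be computed programmatically from a tree structure. The sketch below is illustrative only: the nested-tuple tree is an assumed stand-in with one leaf at depth 2 and four leaves at depth 3 (matching the total 3/4 above), not the tree in the image.

<syntaxhighlight lang="python">
from fractions import Fraction

def leaf_kraft_sum(tree, depth=0):
    """Sum 2**(-depth) over the leaves of a binary tree.

    A tree is None (absent child), a leaf (any non-tuple value),
    or a pair (left_subtree, right_subtree).
    """
    if tree is None:                      # missing child: contributes nothing
        return Fraction(0)
    if not isinstance(tree, tuple):       # leaf node
        return Fraction(1, 2 ** depth)
    left, right = tree
    return leaf_kraft_sum(left, depth + 1) + leaf_kraft_sum(right, depth + 1)

# Assumed example: one leaf at depth 2 and four leaves at depth 3.
tree = ((("a", "b"), ("c", "d")), ("e", None))
print(leaf_kraft_sum(tree))               # 3/4
</syntaxhighlight>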

Proof

= Proof for prefix codes =

[[Image:Kraft inequality example.png|thumb|right]]

First, let us show that the Kraft inequality holds whenever the code for S is a prefix code.

Suppose that \ell_1 \leqslant \ell_2 \leqslant \cdots \leqslant \ell_n . Let A be the full r-ary tree of depth \ell_n (thus, every node of A at level < \ell_n has r children, while the nodes at level \ell_n are leaves). Every word of length \ell \leqslant \ell_n over an r-ary alphabet corresponds to a node in this tree at depth \ell. The ith word in the prefix code corresponds to a node v_i; let A_i be the set of all leaf nodes (i.e. of nodes at depth \ell_n) in the subtree of A rooted at v_i. That subtree being of height \ell_n-\ell_i, we have

:|A_i| = r^{\ell_n-\ell_i}.

Since the code is a prefix code, those subtrees cannot share any leaves, which means that

:A_i \cap A_j = \varnothing,\quad i\neq j.

Thus, given that the total number of nodes at depth \ell_n is r^{\ell_n}, we have

: \left|\bigcup_{i=1}^n A_i\right|= \sum_{i=1}^n |A_i| = \sum_{i=1}^n r^{\ell_n-\ell_i} \leqslant r^{\ell_n}

from which the result follows.

Conversely, given any ordered sequence of n natural numbers,

:\ell_1 \leqslant \ell_2 \leqslant \cdots \leqslant \ell_n

satisfying the Kraft inequality, one can construct a prefix code with codeword lengths equal to each \ell_i by choosing a word of length \ell_i arbitrarily, then ruling out all words of greater length that have it as a prefix. Once again, we interpret this in terms of leaf nodes of an r-ary tree of depth \ell_n. First choose any node from the full tree at depth \ell_1; it corresponds to the first word of our new code. Since we are building a prefix code, all the descendants of this node (i.e., all words that have this first word as a prefix) become unsuitable for inclusion in the code. We consider the descendants at depth \ell_n (i.e., the leaf nodes among the descendants); there are r^{\ell_n-\ell_1} such descendant nodes that are removed from consideration. The next iteration picks a (surviving) node at depth \ell_2 and removes r^{\ell_n-\ell_2} further leaf nodes, and so on. After n iterations, we have removed a total of

:\sum_{i=1}^n r^{\ell_n-\ell_i}

nodes. The question is whether we need to remove more leaf nodes than we actually have available — r^{\ell_n} in all — in the process of building the code. Since the Kraft inequality holds, we have indeed

: \sum_{i=1}^n r^{\ell_n-\ell_i} \leqslant r^{\ell_n}

and thus a prefix code can be built. Note that as the choice of nodes at each step is largely arbitrary, many different suitable prefix codes can be built, in general.
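
This constructive argument translates directly into a small program. The sketch below is an illustrative implementation, not taken from the cited sources: for each length, in nondecreasing order, it scans the candidate words of that length in lexicographic order and keeps the first one that has no previously chosen codeword as a prefix, which is exactly the choice of a surviving node in the proof.

<syntaxhighlight lang="python">
from itertools import product

def greedy_prefix_code(lengths, alphabet="01"):
    """Build a prefix code with the given codeword lengths, assuming they
    satisfy the Kraft inequality for this alphabet size."""
    chosen = []
    for l in sorted(lengths):
        for candidate in ("".join(w) for w in product(alphabet, repeat=l)):
            # A "surviving node": no earlier codeword is a prefix of it.
            if not any(candidate.startswith(c) for c in chosen):
                chosen.append(candidate)
                break
        else:
            raise ValueError("the lengths violate the Kraft inequality")
    return chosen

print(greedy_prefix_code([2, 3, 3, 3, 3]))  # ['00', '010', '011', '100', '101']
</syntaxhighlight>

As in the proof, the choice at each step is largely arbitrary; scanning in lexicographic order is only one convenient policy.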

= Proof of the general case =

Now we will prove that the Kraft inequality holds whenever S is a uniquely decodable code. (The converse need not be proven, since it has already been proven for prefix codes, which is the stronger claim: every prefix code is uniquely decodable.) The proof is by Jack I. Karush.{{Cite journal |last=Karush |first=J. |date=April 1961 |title=A simple proof of an inequality of McMillan (Corresp.) |url=https://ieeexplore.ieee.org/document/1057625 |journal=IEEE Transactions on Information Theory |language=en |volume=7 |issue=2 |pages=118 |doi=10.1109/TIT.1961.1057625 |issn=0018-9448}}{{Cite book |last1=Cover |first1=Thomas M. |title=Elements of information theory |last2=Thomas |first2=Joy A. |date=2006 |publisher=Wiley-Interscience |isbn=978-0-471-24195-9 |edition=2nd |location=Hoboken, N.J}}

We need only prove the inequality when there are finitely many codewords: if there are infinitely many, then every finite subset of the code is also uniquely decodable, so it satisfies the Kraft–McMillan inequality, and taking the limit yields the inequality for the full code.

Denote C = \sum_{i=1}^n r^{-\ell_i}. The idea of the proof is to obtain an upper bound on C^m for m \in \mathbb{N} and show that it can hold for all m only if C \leq 1. Rewrite C^m as

:\begin{align}
C^m & = \left( \sum_{i=1}^n r^{-\ell_i} \right)^m \\
& = \sum_{i_1=1}^n \sum_{i_2=1}^n \cdots \sum_{i_m=1}^n r^{-\left(\ell_{i_1} + \ell_{i_2} + \cdots + \ell_{i_m} \right)}
\end{align}

Consider S^m, the set of all words of the form s_{i_1}s_{i_2}\dots s_{i_m}, where i_1, i_2, \dots, i_m are indices between 1 and n. Note that, since S was assumed to be uniquely decodable, s_{i_1}s_{i_2}\dots s_{i_m}=s_{j_1}s_{j_2}\dots s_{j_m} implies i_1=j_1, i_2=j_2, \dots, i_m=j_m. This means that each summand corresponds to exactly one word in S^m, which allows us to rewrite the expression as

:C^m = \sum_{\ell=1}^{m \cdot \ell_{max}} q_\ell \, r^{-\ell}

where q_\ell is the number of words in S^m of length \ell and \ell_{max} is the length of the longest codeword in S. For an r-letter alphabet there are only r^\ell possible words of length \ell, so q_\ell \leq r^\ell. Using this, we bound C^m from above:

:\begin{align}
C^m & = \sum_{\ell=1}^{m \cdot \ell_{max}} q_\ell \, r^{-\ell} \\
& \leq \sum_{\ell=1}^{m \cdot \ell_{max}} r^\ell \, r^{-\ell} = m \cdot \ell_{max}
\end{align}

Taking the m-th root, we get

:C = \sum_{i=1}^n r^{-\ell_i} \leq \left( m \cdot \ell_{max} \right)^{\frac{1}{m}}

This bound holds for every m \in \mathbb{N}. Since the right side tends to 1 as m \to \infty, it follows that \sum_{i=1}^n r^{-\ell_i} \leq 1 (otherwise the bound would fail for sufficiently large m).
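
The role of the m-th root can be seen numerically. The sketch below is illustrative, with assumed lengths whose Kraft sum exceeds 1: the bound C \leq (m \cdot \ell_{max})^{1/m} holds for small m but fails once m is large, exactly as the argument requires.

<syntaxhighlight lang="python">
# Illustrative check of the bound C <= (m * l_max)**(1/m) from the proof.
lengths = [1, 1, 2]   # assumed binary lengths with Kraft sum C = 5/4 > 1
r = 2
C = sum(r ** -l for l in lengths)
l_max = max(lengths)

for m in (1, 5, 10, 50, 100):
    bound = (m * l_max) ** (1 / m)
    print(f"m = {m:3d}   (m*l_max)^(1/m) = {bound:.4f}   bound holds: {C <= bound}")
# The right-hand side tends to 1, so the bound must eventually fail
# whenever C > 1; for a genuine uniquely decodable code it never fails.
</syntaxhighlight>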

= Alternative construction for the converse =

Given a sequence of n natural numbers,

:\ell_1 \leqslant \ell_2 \leqslant \cdots \leqslant \ell_n

satisfying the Kraft inequality, we can construct a prefix code as follows. Define the ith codeword, C_i, to be the first \ell_i digits after the radix point (e.g. decimal point) in the base r representation of

:\sum_{j = 1}^{i - 1} r^{-\ell_j}.

Note that by Kraft's inequality, each such sum is strictly less than 1, so its value is captured entirely by the digits after the radix point. For j > i, the sum defining C_j exceeds the sum defining C_i by at least r^{-\ell_i}, so the first \ell_i digits of C_j form a strictly larger number than C_i. Hence C_i is not a prefix of C_j, and the code is prefix free.
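
A direct implementation of this construction is sketched below (illustrative code, not from the cited sources; binary alphabet by default). Because the earlier lengths are all at most \ell_i, the partial sum times r^{\ell_i} is an integer, and its base-r digits give the codeword.

<syntaxhighlight lang="python">
def kraft_code(lengths, r=2):
    """Prefix code via the radix-point construction: codeword i consists of
    the first l_i base-r digits of sum_{j < i} r**(-l_j).  Assumes the
    lengths satisfy the Kraft inequality."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"[:r]
    lengths = sorted(lengths)
    codewords = []
    for i, l in enumerate(lengths):
        # The partial sum times r**l is an integer, since every earlier
        # length is at most l.
        value = sum(r ** (l - lengths[j]) for j in range(i))
        word = []
        for _ in range(l):                # extract l base-r digits
            value, d = divmod(value, r)
            word.append(digits[d])
        codewords.append("".join(reversed(word)))
    return codewords

print(kraft_code([2, 3, 3, 3, 3]))  # ['00', '010', '011', '100', '101']
</syntaxhighlight>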

Generalizations

The following generalization is found in the work of Foldes.{{cite arXiv|last=Foldes |first=Stephan |title=On McMillan's theorem about uniquely decipherable codes |date=2008-06-21 |class=math.CO |eprint=0806.3277 }}

{{Math theorem

| name = Theorem

| note =

| math_statement = If C, D are uniquely decodable, and every codeword in C is a concatenation of codewords in D, then \sum_{c\in C} r^{-|c|} \leq \sum_{c\in D} r^{-|c|}

}}The Kraft–McMillan inequality is recovered as the special case in which D = \{a_1, \dots, a_r\} is the alphabet itself, since then \sum_{c\in D} r^{-|c|} = 1.{{Math proof|title=Proof|proof=

Let Q_C(x) be the generating function of the code, that is, Q_C(x) := \sum_{c\in C} x^{|c|}.

By a counting argument, the k-th coefficient of Q_C^n is the number of strings of length n with C-codes of length k. That is,

Q_C^n(x) = \sum_{k\geq 0}x^k \#(\text{strings of length }n\text{ with }C\text{-codes of length }k)

Similarly,

\frac{1}{1-Q_C(x)} = 1 + Q_C(x) + Q_C(x)^2 + \cdots = \sum_{k\geq 0}x^k \#(\text{strings with }C\text{-codes of length }k)

Since the code is uniquely decodable, any power of Q_C is absolutely bounded by r|x| + r^2|x|^2 + \cdots = \frac{r|x|}{1-r|x|}, so each of Q_C, Q_C^2, \dots and \frac{1}{1-Q_C(x)} is analytic in the disk |x| < 1/r.

We claim that for all x \in (0, 1/r), Q_C^n \leq Q_D^n + Q_D^{n+1} + \cdots

The left side is \sum_{k\geq 0}x^k \#(\text{strings of length }n\text{ with }C\text{-codes of length }k) and the right side is

\sum_{k\geq 0}x^k \#(\text{strings of length}\geq n\text{ with }D\text{-codes of length }k)

Now, since every codeword in C is a concatenation of codewords in D, and D is uniquely decodable, each string of length n with C-code c_1\dots c_n of length k corresponds to a unique string s_{c_1}\dots s_{c_n} (where s_{c_i} denotes the D-decoding of c_i) whose D-code is c_1\dots c_n. This string has length at least n, since each c_i is the concatenation of at least one codeword of D.

Therefore, each coefficient on the left is less than or equal to the corresponding coefficient on the right.

Thus, for all x\in (0, 1/r) and all n = 1, 2, \dots, we have Q_C \leq \frac{Q_D}{(1-Q_D)^{1/n}}. Taking the limit n\to \infty, we obtain Q_C(x) \leq Q_D(x) for all x\in (0, 1/r).

Since Q_C(1/r) and Q_D(1/r) both converge, we have Q_C(1/r) \leq Q_D(1/r) by taking the limit and applying Abel's theorem.

}}There is also a generalization to quantum codes.{{Cite journal |last1=Schumacher |first1=Benjamin |last2=Westmoreland |first2=Michael D. |date=2001-09-10 |title=Indeterminate-length quantum coding |url=https://link.aps.org/doi/10.1103/PhysRevA.64.042304 |journal=Physical Review A |volume=64 |issue=4 |pages=042304 |doi=10.1103/PhysRevA.64.042304|arxiv=quant-ph/0011014 |bibcode=2001PhRvA..64d2304S |s2cid=53488312 }}
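
A small numerical check of the generalized inequality, with assumed example codes (D a binary prefix code and C a prefix code whose codewords are concatenations of codewords of D):

<syntaxhighlight lang="python">
# Assumed example codes over the binary alphabet (r = 2).
r = 2
D = ["0", "10", "11"]          # uniquely decodable (it is a prefix code)
C = ["00", "010", "1011"]      # "0"+"0", "0"+"10", "10"+"11"; also a prefix code

def kraft(code):
    """Kraft sum  sum_{c in code} r**(-|c|)."""
    return sum(r ** -len(c) for c in code)

print(f"{kraft(C)} <= {kraft(D)} : {kraft(C) <= kraft(D)}")  # 0.4375 <= 1.0 : True
</syntaxhighlight>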

Notes

{{Reflist|refs=

{{citation

| last1 = Cover | first1 = Thomas M.

| last2 = Thomas | first2 = Joy A.

| title = Elements of Information Theory

| edition = 2nd

| year = 2006

| publisher = John Wiley & Sons, Inc

| isbn = 978-0-471-24195-9

| pages = 108–109

| doi = 10.1002/047174882X.ch5

| chapter = Data Compression

}}

{{citation

| last1 = De Rooij | first1 = Steven

| last2 = Grünwald | first2 = Peter D.

| title = Philosophy of Statistics

| edition = 1st

| year = 2011

| publisher = Elsevier

| isbn = 978-0-080-93096-1

| pages = 875

| chapter = Luckiness and Regret in Minimum Description Length Inference

}}

}}

References

  • {{citation

| last = Kraft | first = Leon G.

| title = A device for quantizing, grouping, and coding amplitude modulated pulses

| publisher = Electrical Engineering Department, Massachusetts Institute of Technology

| location = Cambridge, MA

| year = 1949

| hdl = 1721.1/12390

| type = MS thesis

}}.

  • {{citation

| last = McMillan | first = Brockway

| title = Two inequalities implied by unique decipherability

| journal = IEEE Transactions on Information Theory

| volume = 2 | issue = 4 | year = 1956 | pages = 115–116

| doi = 10.1109/TIT.1956.1056818}}.

See also