Gibbs' inequality
{{Short description|Statement in information theory}}
[[File:Josiah Willard Gibbs -from MMS-.jpg|thumb|Josiah Willard Gibbs]]
In information theory, Gibbs' inequality is a statement about the information entropy of a discrete probability distribution. Several other bounds on the entropy of probability distributions are derived from Gibbs' inequality, including Fano's inequality.
It was first presented by J. Willard Gibbs in the 19th century.
== Gibbs' inequality ==
Suppose that <math>P = \{ p_1 , \ldots , p_n \}</math> and <math>Q = \{ q_1 , \ldots , q_n \}</math> are discrete probability distributions. Then
:<math> - \sum_{i=1}^n p_i \log p_i \leq - \sum_{i=1}^n p_i \log q_i </math>
with equality if and only if <math>p_i = q_i</math> for all <math>i</math>.{{cite book|author=Pierre Bremaud|title=An Introduction to Probabilistic Modeling|date=6 December 2012|publisher=Springer Science & Business Media|isbn=978-1-4612-1046-7}}{{rp|68}} Put in words, the information entropy of a distribution <math>P</math> is less than or equal to its cross entropy with any other distribution <math>Q</math>.
The difference between the two quantities is the Kullback–Leibler divergence or relative entropy, so the inequality can also be written:{{cite book|author=David J. C. MacKay|title=Information Theory, Inference and Learning Algorithms|date=25 September 2003|publisher=Cambridge University Press|isbn=978-0-521-64298-9}}{{rp|34}}
:<math> D_{\mathrm{KL}}(P \| Q) \equiv \sum_{i=1}^n p_i \log \frac{p_i}{q_i} \geq 0 .</math>
Note that the use of base-2 logarithms is optional, and allows one to refer to the quantity on each side of the inequality as an "average surprisal" measured in bits.
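As a concrete illustration, the following Python sketch evaluates both sides of the inequality, in bits, for two small distributions chosen arbitrarily here (they are not taken from the cited sources); the gap between the two sides is the Kullback–Leibler divergence.
<syntaxhighlight lang="python">
import math

# Two small illustrative distributions (chosen arbitrarily for this sketch).
p = [0.5, 0.25, 0.25]
q = [0.4, 0.4, 0.2]

# Entropy of P, cross entropy of P with Q, and their difference
# (the Kullback-Leibler divergence), all in bits.
entropy = -sum(pi * math.log2(pi) for pi in p if pi > 0)
cross_entropy = -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)
kl_divergence = cross_entropy - entropy

print(f"H(P)       = {entropy:.4f} bits")
print(f"H(P, Q)    = {cross_entropy:.4f} bits")
print(f"D_KL(P||Q) = {kl_divergence:.4f} bits")
assert entropy <= cross_entropy  # Gibbs' inequality
</syntaxhighlight>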
== Proof ==
For simplicity, we prove the statement using the natural logarithm, denoted by {{math|ln}}, since
:<math> \log_b a = \frac{\ln a}{\ln b} ,</math>
so the particular logarithm base {{math|b > 1}} that we choose only scales the relationship by the factor {{math|1 / ln b}}.
Let <math>I</math> denote the set of all <math>i</math> for which <math>p_i</math> is non-zero. Then, since <math>\ln x \leq x - 1</math> for all <math>x > 0</math>, with equality if and only if <math>x = 1</math>, we have:
:<math> - \sum_{i \in I} p_i \ln \frac{q_i}{p_i} \geq - \sum_{i \in I} p_i \left( \frac{q_i}{p_i} - 1 \right) = - \sum_{i \in I} q_i + \sum_{i \in I} p_i = - \sum_{i \in I} q_i + 1 \geq 0 .</math>
The last inequality is a consequence of the <math>p_i</math> and <math>q_i</math> being parts of probability distributions. Specifically, the sum of all the non-zero <math>p_i</math> is 1, so <math>\sum_{i \in I} p_i = 1</math>. Some non-zero <math>q_i</math>, however, may have been excluded from the sum, since the choice of indices is conditioned upon the <math>p_i</math> being non-zero. Therefore, the sum <math>\sum_{i \in I} q_i</math> may be less than 1.
So far, over the index set <math>I</math>, we have:
:<math> - \sum_{i \in I} p_i \ln \frac{q_i}{p_i} \geq 0 ,</math>
or equivalently
:<math> - \sum_{i \in I} p_i \ln q_i \geq - \sum_{i \in I} p_i \ln p_i .</math>
Both sums can be extended to all <math>i = 1, \ldots, n</math>, i.e. including <math>p_i = 0</math>, by recalling that the expression <math>p \ln p</math> tends to 0 as <math>p</math> tends to 0, and <math>( - p \ln q )</math> tends to <math>\infty</math> as <math>q</math> tends to 0 (for fixed <math>p > 0</math>). We arrive at
:<math> - \sum_{i=1}^n p_i \ln q_i \geq - \sum_{i=1}^n p_i \ln p_i .</math>
For equality to hold, we require
- <math>\frac{q_i}{p_i} = 1</math> for all <math>i \in I</math>, so that the equality <math>\ln \frac{q_i}{p_i} = \frac{q_i}{p_i} - 1</math> holds,
- and <math>\sum_{i \in I} q_i = 1</math>, which means <math>q_i = 0</math> whenever <math>i \notin I</math>, that is, <math>q_i = 0</math> whenever <math>p_i = 0</math>.
This can happen if and only if <math>p_i = q_i</math> for <math>i = 1, \ldots, n</math>.
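The chain of inequalities above can be traced numerically. The sketch below, using an arbitrary illustrative choice of <math>P</math> and <math>Q</math> in which one outcome has <math>p_i = 0</math>, checks the elementary bound <math>\ln x \leq x - 1</math> and the intermediate inequality over the support set <math>I</math>.
<syntaxhighlight lang="python">
import math

# Elementary bound used in the proof: ln(x) <= x - 1 for all x > 0,
# with equality only at x = 1.
for x in [0.1, 0.5, 1.0, 2.0, 10.0]:
    assert math.log(x) <= x - 1 + 1e-12

# Arbitrary illustrative distributions; the third outcome has p_i = 0,
# so its index is excluded from the support set I.
p = [0.7, 0.3, 0.0]
q = [0.5, 0.3, 0.2]
I = [i for i, pi in enumerate(p) if pi > 0]

lhs = -sum(p[i] * math.log(q[i] / p[i]) for i in I)   # -sum_{i in I} p_i ln(q_i / p_i)
mid = sum(p[i] for i in I) - sum(q[i] for i in I)     # = 1 - sum_{i in I} q_i
assert lhs >= mid >= -1e-12                           # matches the proof's chain of inequalities
</syntaxhighlight>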
== Alternative proofs ==
The result can alternatively be proved using Jensen's inequality, the log sum inequality, or the fact that the Kullback–Leibler divergence is a form of Bregman divergence.
=== Proof by Jensen's inequality ===
Because <math>\log</math> is a concave function, we have that:
:<math> \sum_i p_i \log \frac{q_i}{p_i} \leq \log \sum_i p_i \frac{q_i}{p_i} = \log \sum_i q_i = 0 ,</math>
where the first inequality is due to Jensen's inequality, and <math>Q</math> being a probability distribution implies the last equality.
Furthermore, since <math>\log</math> is strictly concave, by the equality condition of Jensen's inequality we get equality when
:<math> \frac{q_1}{p_1} = \frac{q_2}{p_2} = \cdots = \frac{q_n}{p_n} </math>
and
:<math> \sum_i q_i = 1 .</math>
Suppose that this common ratio is <math>\sigma</math>; then we have that
:<math> 1 = \sum_i q_i = \sum_i \sigma p_i = \sigma ,</math>
where we use the fact that <math>P</math> and <math>Q</math> are probability distributions. Therefore, equality happens when <math>p_i = q_i</math> for all <math>i</math>.
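The Jensen step can likewise be checked numerically; the sketch below assumes two strictly positive distributions (chosen arbitrarily for illustration) so that every ratio <math>q_i / p_i</math> is defined.
<syntaxhighlight lang="python">
import math

# Arbitrary strictly positive distributions (illustration only).
p = [0.2, 0.3, 0.5]
q = [0.25, 0.25, 0.5]

# Jensen's inequality for the concave logarithm:
#   sum_i p_i * log(q_i / p_i)  <=  log( sum_i p_i * (q_i / p_i) )  =  log( sum_i q_i )  =  0
lhs = sum(pi * math.log(qi / pi) for pi, qi in zip(p, q))
rhs = math.log(sum(q))
assert lhs <= rhs + 1e-12
print(lhs, rhs)  # rhs is log(1) = 0 up to rounding
</syntaxhighlight>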
=== Proof by Bregman divergence ===
Alternatively, it can be proved by noting that
:<math> q - p - p \ln \frac{q}{p} \geq 0 </math>
for all <math>p, q > 0</math>, with equality holding iff <math>p = q</math>. Then, summing over the states and using <math>\sum_i q_i = \sum_i p_i = 1</math>, we have
:<math> \sum_i p_i \ln \frac{p_i}{q_i} = \sum_i \left( q_i - p_i - p_i \ln \frac{q_i}{p_i} \right) \geq 0 ,</math>
with equality holding iff <math>p_i = q_i</math> for all <math>i</math>.
This is because the KL divergence is the Bregman divergence generated by the function <math>t \mapsto t \ln t</math>.
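A brief numerical sketch of this pointwise argument, with values and distributions chosen arbitrarily for illustration, is given below.
<syntaxhighlight lang="python">
import math

def bregman_gap(p, q):
    """Bregman divergence of t -> t*ln(t) between p and q: equals q - p - p*ln(q/p) >= 0."""
    return q - p - p * math.log(q / p)

# Pointwise inequality, checked at a few positive (p, q) pairs; it vanishes only when p == q.
for p, q in [(0.3, 0.3), (0.1, 0.9), (0.6, 0.2)]:
    assert bregman_gap(p, q) >= -1e-12

# Summing the gaps over the states of two illustrative distributions recovers
# the KL divergence, because the p_i and the q_i each sum to 1.
P = [0.6, 0.3, 0.1]
Q = [0.2, 0.5, 0.3]
total = sum(bregman_gap(pi, qi) for pi, qi in zip(P, Q))
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(P, Q))
assert abs(total - kl) < 1e-9 and kl >= 0
</syntaxhighlight>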