Inception score

{{Short description|Image algorithm}}

The Inception Score (IS) is an algorithm used to assess the quality of images created by a generative image model such as a generative adversarial network (GAN). The score is calculated based on the output of a separate, pretrained Inception v3 image classification model applied to a sample of (typically around 30,000) images generated by the generative model. The Inception Score is maximized when the following conditions are true:

  1. The entropy of the distribution of labels predicted by the Inception v3 model for the generated images is minimized. In other words, the classification model confidently predicts a single label for each image. Intuitively, this corresponds to the desideratum that generated images be "sharp" or "distinct".
  2. The predictions of the classification model are evenly distributed across all possible labels. This corresponds to the desideratum that the output of the generative model is "diverse".

It has been somewhat superseded by the related Fréchet inception distance. While the Inception Score only evaluates the distribution of generated images, the FID compares the distribution of generated images with the distribution of a set of real images ("ground truth").

Definition

Let there be two spaces, the space of images \Omega_X and the space of labels \Omega_Y. The space of labels is finite.

Let p_{gen} be a probability distribution over \Omega_X that we wish to judge.

Let a discriminator be a function of type p_{dis}:\Omega_X \to M(\Omega_Y)where M(\Omega_Y) is the set of all probability distributions on \Omega_Y. For any image x, and any label y, let p_{dis}(y|x) be the probability that image x has label y, according to the discriminator. It is usually implemented as an Inception-v3 network trained on ImageNet.

The Inception Score of p_{gen} relative to p_{dis} is

IS(p_{gen}, p_{dis}) := \exp\left( \mathbb E_{x\sim p_{gen}}\left[ D_{KL} \left(p_{dis}(\cdot | x) \,\Big\|\, \int p_{dis}(\cdot | x') p_{gen}(x')\,dx' \right) \right]\right)

Equivalent rewrites include

\ln IS(p_{gen}, p_{dis}) = \mathbb E_{x\sim p_{gen}}\left[ D_{KL} \left(p_{dis}(\cdot | x) \,\Big\|\, \mathbb E_{x'\sim p_{gen}}[p_{dis}(\cdot | x')]\right) \right]

and

\ln IS(p_{gen}, p_{dis}) = H[\mathbb E_{x\sim p_{gen}}[p_{dis}(\cdot | x)]] - \mathbb E_{x\sim p_{gen}}[H[p_{dis}(\cdot | x)]]

\ln IS is nonnegative by Jensen's inequality.
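The agreement between the KL form and the entropy form can be checked numerically. The sketch below uses NumPy with a random softmax matrix standing in for discriminator outputs; the shapes, seed, and variable names are illustrative assumptions, not from the original sources:

```python
import numpy as np

# Hypothetical discriminator outputs: 1000 images, 10 labels.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax rows

p_hat = probs.mean(axis=0)  # E_x[p_dis(. | x)], the marginal label distribution

# KL form: E_x[ D_KL(p_dis(. | x) || p_hat) ]
kl_form = np.mean(np.sum(probs * (np.log(probs) - np.log(p_hat)), axis=1))

# Entropy form: H[p_hat] - E_x[ H[p_dis(. | x)] ]
entropy_form = (-np.sum(p_hat * np.log(p_hat))
                - np.mean(-np.sum(probs * np.log(probs), axis=1)))

assert abs(kl_form - entropy_form) < 1e-10  # the two rewrites agree
assert kl_form >= 0                         # Jensen's inequality
```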

Pseudocode:{{blockquote|INPUT discriminator p_{dis}.

INPUT generator g.

Sample images x_i from generator.

Compute p_{dis}(\cdot |x_i), the probability distribution over labels conditional on image x_i.

Average the results to obtain \hat p, an empirical estimate of the marginal label distribution \int p_{dis}(\cdot | x') p_{gen}(x')\,dx'.

Sample more images x_i from generator, and for each, compute D_{KL} \left(p_{dis}(\cdot | x_i) \| \hat p\right).

Average the results, and take the exponential of the average.

RETURN the result.}}
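The pseudocode above can be sketched in NumPy, assuming the discriminator's outputs are already available as a matrix of per-image label probabilities. The function name and the eps smoothing constant are illustrative choices, and for simplicity this sketch reuses a single sample for both the marginal estimate \hat p and the KL average, whereas the pseudocode draws two:

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Inception Score from a (n_images, n_labels) matrix whose row i is
    the discriminator's label distribution p_dis(. | x_i).

    eps guards against log(0) for one-hot rows (an implementation choice).
    """
    probs = np.asarray(probs, dtype=np.float64)
    p_hat = probs.mean(axis=0)  # empirical estimate of the marginal label distribution
    # Per-image KL divergence from the marginal, then average and exponentiate.
    kl = np.sum(probs * (np.log(probs + eps) - np.log(p_hat + eps)), axis=1)
    return float(np.exp(kl.mean()))

# Perfectly sharp and diverse predictions over 4 labels score near 4;
# identical (indistinct) predictions score 1.
print(inception_score(np.eye(4)))              # ≈ 4.0
print(inception_score(np.full((10, 3), 1/3)))  # 1.0
```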

Interpretation

A higher inception score is interpreted as "better", as it means that p_{gen} is a "sharp and distinct" collection of pictures.

\ln IS(p_{gen}, p_{dis}) \in [0, \ln N], where N is the total number of possible labels.

\ln IS(p_{gen}, p_{dis}) = 0 iff, for almost all x\sim p_{gen},

p_{dis}(\cdot | x) = \int p_{dis}(\cdot | x') p_{gen}(x')\,dx'

That means p_{gen} is completely "indistinct": for any image x sampled from p_{gen}, the discriminator returns exactly the same label distribution p_{dis}(\cdot | x).

The highest inception score, N, is achieved if and only if both of the following conditions hold:

  • For almost all x\sim p_{gen}, the distribution p_{dis}(y|x) is concentrated on a single label, i.e. H_y[p_{dis}(y|x)] = 0. That is, the discriminator classifies every image sampled from p_{gen} with complete confidence.
  • For every label y, the proportion of generated images labelled as y is exactly \mathbb E_{x\sim p_{gen}}[p_{dis}(y | x)] = \frac 1 N. That is, the generated images are equally distributed over all labels.
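Both extremes can be illustrated numerically. The helper below is a hypothetical NumPy sketch using the entropy form of \ln IS; it evaluates a perfectly sharp-and-diverse prediction matrix against a completely indistinct one (the names, sizes, and eps constant are illustrative assumptions):

```python
import numpy as np

def log_is(probs, eps=1e-12):
    # ln IS via the entropy form: H[p_hat] - E_x[ H[p_dis(. | x)] ]
    p_hat = probs.mean(axis=0)
    h_marginal = -np.sum(p_hat * np.log(p_hat + eps))
    h_conditional = -np.sum(probs * np.log(probs + eps), axis=1).mean()
    return h_marginal - h_conditional

N = 5
# Every image classified with certainty, labels spread evenly: ln IS = ln N.
sharp_and_diverse = np.repeat(np.eye(N), 20, axis=0)
# Every image yields the same label distribution: ln IS = 0.
indistinct = np.full((100, N), 1.0 / N)

print(np.exp(log_is(sharp_and_diverse)))  # ≈ 5.0, the maximum N
print(np.exp(log_is(indistinct)))         # ≈ 1.0, the minimum
```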

References

{{reflist|refs=

{{Cite journal |last1=Salimans |first1=Tim |last2=Goodfellow |first2=Ian |last3=Zaremba |first3=Wojciech |last4=Cheung |first4=Vicki |last5=Radford |first5=Alec |last6=Chen |first6=Xi |last7=Chen |first7=Xi |date=2016 |title=Improved Techniques for Training GANs |url=https://proceedings.neurips.cc/paper/2016/hash/8a3363abe792db2d8761d6403605aeb7-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=29|arxiv=1606.03498 }}

{{cite journal

|title=Adversarial text-to-image synthesis: A review

|date=December 2021

|journal=Neural Networks|volume=144|pages=187–209

|last1=Frolov|first1=Stanislav|last2=Hinz|first2=Tobias|last3=Raue|first3=Federico|last4=Hees|first4=Jörn|last5=Dengel|first5=Andreas

|doi=10.1016/j.neunet.2021.07.019

|pmid=34500257

|s2cid=231698782

|doi-access=free|arxiv=2101.09983}}

{{Cite journal |last=Borji |first=Ali |date=2022 |title=Pros and cons of GAN evaluation measures: New developments |url=https://linkinghub.elsevier.com/retrieve/pii/S1077314221001685 |journal=Computer Vision and Image Understanding |language=en |volume=215 |pages=103329 |doi=10.1016/j.cviu.2021.103329|arxiv=2103.09396 |s2cid=232257836 }}

}}

{{Machine learning evaluation metrics}}

Category:Machine learning

Category:Computer graphics