Expectation propagation

{{Short description|Method to approximate a probability distribution}}

Expectation propagation (EP) is a technique in Bayesian machine learning.<ref>{{Cite book|title=Pattern Recognition and Machine Learning|last=Bishop|first=Christopher|publisher=Springer-Verlag New York Inc.|year=2007|isbn=978-0387310732|location=New York}}</ref>

EP finds approximations to a probability distribution. It uses an iterative approach that exploits the factorization structure of the target distribution, refining the approximation to one factor at a time. In this respect it differs from other Bayesian approximation approaches such as variational Bayesian methods.
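A compact sketch can make the iteration concrete. The Python below is illustrative only, not a reference implementation: it assumes the one-dimensional "clutter problem" used as an example in the Minka (2001) paper cited below (a Gaussian prior on a location parameter multiplied by mixture-likelihood factors), and the data, variable names, and parameter settings (mixing weight, prior variance, number of sweeps) are arbitrary choices. Each exact factor is replaced by an unnormalized Gaussian "site"; the loop removes one site, combines what remains with the exact factor, and projects the result back to a Gaussian. That projection is the moment matching made precise in the following paragraphs.

<syntaxhighlight lang="python">
import numpy as np

# Toy "clutter problem": each observation y_i comes either from N(x, 1) around an
# unknown location x, or (with probability w) from a broad background N(0, 10).
# The exact posterior over x is a mixture with 2^n components, so EP approximates
# it with a single Gaussian.
rng = np.random.default_rng(0)
w, x_true = 0.25, 1.5
y = np.where(rng.random(30) < w,
             rng.normal(0.0, np.sqrt(10.0), 30),   # clutter draws
             rng.normal(x_true, 1.0, 30))          # draws around x_true

prior_prec, prior_h = 1.0 / 100.0, 0.0   # vague prior N(0, 100) in natural parameters
rho = np.zeros_like(y)                   # site precisions (all sites start as the constant 1)
tau = np.zeros_like(y)                   # site precision-times-mean

def gauss(z, m, v):
    return np.exp(-0.5 * (z - m) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

for sweep in range(10):                  # a few passes over the factors
    for i in range(len(y)):
        # 1. form the cavity distribution by removing site i from q
        q_prec = prior_prec + rho.sum()
        q_h = prior_h + tau.sum()
        cav_prec, cav_h = q_prec - rho[i], q_h - tau[i]
        if cav_prec <= 0:                # skip pathological updates (no damping in this sketch)
            continue
        m, v = cav_h / cav_prec, 1.0 / cav_prec
        # 2. combine the cavity with the exact factor and compute the moments of the
        #    resulting "tilted" distribution (a two-component mixture in x)
        z_sig = (1 - w) * gauss(y[i], m, v + 1.0)   # y_i explained by x
        z_bg = w * gauss(y[i], 0.0, 10.0)           # y_i explained by clutter
        r = z_sig / (z_sig + z_bg)
        v1 = v / (v + 1.0)
        m1 = (m + v * y[i]) / (v + 1.0)
        mean_t = r * m1 + (1 - r) * m
        var_t = r * (v1 + m1 ** 2) + (1 - r) * (v + m ** 2) - mean_t ** 2
        # 3. moment-match: choose the new site so that cavity * site has the same
        #    mean and variance as the tilted distribution
        rho[i] = 1.0 / var_t - cav_prec
        tau[i] = mean_t / var_t - cav_h

q_prec = prior_prec + rho.sum()
print("EP posterior over x: mean %.3f, variance %.3f"
      % ((prior_h + tau.sum()) / q_prec, 1.0 / q_prec))
</syntaxhighlight>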

More specifically, suppose we wish to approximate an intractable probability distribution <math>p(\mathbf{x})</math> with a tractable distribution <math>q(\mathbf{x})</math>. Expectation propagation achieves this approximation by minimizing the Kullback–Leibler divergence <math>\mathrm{KL}(p\|q)</math>; variational Bayesian methods minimize <math>\mathrm{KL}(q\|p)</math> instead.
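The practical difference between the two objectives can be seen in a small numerical sketch (the bimodal target, the grid, and the optimizer's starting point below are arbitrary illustrative choices): fitting a single Gaussian by minimizing <math>\mathrm{KL}(p\|q)</math> reduces to matching the mean and variance of <math>p</math>, so the approximation spreads over both modes, while minimizing <math>\mathrm{KL}(q\|p)</math> typically concentrates it on a single mode.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Target: a two-component Gaussian mixture, standing in for an intractable p.
w, m1, m2, s = 0.5, -2.0, 2.0, 0.5
def p(x):
    return w * norm.pdf(x, m1, s) + (1 - w) * norm.pdf(x, m2, s)

xs = np.linspace(-8, 8, 4001)
px = p(xs)

# Minimizing KL(p||q) over Gaussians q: the optimum matches the mean and variance
# of p ("moment matching"), so the fitted Gaussian covers both modes.
mean_p = np.trapz(xs * px, xs)
var_p = np.trapz((xs - mean_p) ** 2 * px, xs)
print("inclusive KL (moment matching): mean %.3f, var %.3f" % (mean_p, var_p))

# Minimizing KL(q||p) numerically: the optimum typically locks onto one mode,
# which is the behaviour of variational Bayesian methods.
def kl_q_p(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    qx = norm.pdf(xs, mu, sigma)
    return np.trapz(qx * (norm.logpdf(xs, mu, sigma) - np.log(px + 1e-300)), xs)

res = minimize(kl_q_p, x0=[1.5, 0.0])
print("exclusive KL (mode seeking): mean %.3f, std %.3f" % (res.x[0], np.exp(res.x[1])))
</syntaxhighlight>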

If <math>q(\mathbf{x})</math> is a Gaussian <math>\mathcal{N}(\mathbf{x}|\mu, \Sigma)</math>, then <math>\mathrm{KL}(p\|q)</math> is minimized by setting <math>\mu</math> and <math>\Sigma</math> equal to the mean and covariance of <math>p(\mathbf{x})</math>, respectively; this is called moment matching.
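Written out, the minimizing parameters are the first two moments of <math>p</math>:

<math display="block">
\mathrm{KL}(p\|q) = \int p(\mathbf{x}) \ln \frac{p(\mathbf{x})}{q(\mathbf{x})} \, d\mathbf{x},
\qquad
\mu^{*} = \operatorname{E}_{p}[\mathbf{x}], \quad
\Sigma^{*} = \operatorname{E}_{p}\!\left[(\mathbf{x} - \mu^{*})(\mathbf{x} - \mu^{*})^{\mathsf{T}}\right].
</math>

More generally, when <math>q</math> is restricted to an exponential family, minimizing <math>\mathrm{KL}(p\|q)</math> amounts to matching the expected sufficient statistics of <math>q</math> to those of <math>p</math>.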

== Applications ==

Expectation propagation via moment matching plays a vital role in approximating the indicator functions that appear when deriving the message-passing equations for [[TrueSkill]].
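Moment matching against such an indicator factor has a closed form: multiplying a Gaussian belief by a step function yields a truncated Gaussian, whose mean and variance are given by standard formulas. The sketch below is illustrative only and is not TrueSkill's actual implementation; the threshold of zero and the numerical values are arbitrary choices.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

def truncated_gaussian_moments(mu, sigma, lower=0.0):
    """Mean and variance of N(x; mu, sigma^2) restricted to x > lower."""
    alpha = (lower - mu) / sigma
    lam = norm.pdf(alpha) / norm.sf(alpha)   # phi(alpha) / (1 - Phi(alpha))
    mean = mu + sigma * lam
    var = sigma ** 2 * (1.0 + alpha * lam - lam ** 2)
    return mean, var

# Example: a Gaussian belief over the performance difference between two players,
# constrained to be positive because the first player won the match.
m, v = truncated_gaussian_moments(mu=0.5, sigma=1.0)
print("moment-matched Gaussian: mean %.3f, variance %.3f" % (m, v))
</syntaxhighlight>

The ratio <math>\varphi(\alpha)/(1-\Phi(\alpha))</math> computed in the sketch corresponds (up to the draw margin, which is omitted here) to the correction function usually written <math>v</math> in published descriptions of the TrueSkill update equations; the Gaussian with the matched moments then replaces the exact, non-Gaussian message in the factor graph.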

== References ==

{{Reflist}}

* {{cite book|author=Thomas Minka|author-link=Thomas Minka|chapter=Expectation Propagation for Approximate Bayesian Inference|url=http://research.microsoft.com/en-us/um/people/minka/papers/ep/minka-ep-uai.pdf|editor=Jack S. Breese, Daphne Koller|title=UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence|location=University of Washington, Seattle, Washington, USA|date=August 2–5, 2001|pages=362–369}}