reparameterization trick
{{Short description|Technique used in stochastic gradient variational inference}}
The reparameterization trick (also known as the "reparameterization gradient estimator") is a technique used in statistical machine learning, particularly in variational inference, variational autoencoders, and stochastic optimization. It allows for the efficient computation of gradients through random variables, enabling the optimization of parametric probability models by stochastic gradient descent, and reduces the variance of gradient estimators.
It was developed in the 1980s in operations research, under the names of "pathwise gradients" and "stochastic gradients".{{Cite journal |last1=Figurnov |first1=Mikhail |last2=Mohamed |first2=Shakir |last3=Mnih |first3=Andriy |date=2018 |title=Implicit Reparameterization Gradients |url=https://proceedings.neurips.cc/paper_files/paper/2018/hash/92c8c96e4c37100777c7190b76d28233-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=31}}Fu, Michael C. "Gradient estimation." Handbooks in operations research and management science 13 (2006): 575-616. Its use in variational inference was proposed in 2013.{{cite arXiv |last1=Kingma |first1=Diederik P. |title=Auto-Encoding Variational Bayes |date=2022-12-10 |eprint=1312.6114 |last2=Welling |first2=Max|class=stat.ML }}
Mathematics
Let <math>z</math> be a random variable with distribution <math>q_\phi(z)</math>, where <math>\phi</math> is a vector containing the parameters of the distribution.
= REINFORCE estimator =
Consider an objective function of the form:
L(\phi) = \mathbb{E}_{z \sim q_\phi(z)}[f(z)]
Without the reparameterization trick, estimating the gradient <math>\nabla_\phi L(\phi)</math> can be challenging, because the parameter <math>\phi</math> appears in the distribution from which <math>z</math> is sampled. In more detail, we have to statistically estimate:
\nabla_\phi L(\phi) = \nabla_\phi \int f(z) q_\phi(z) \, dz
The REINFORCE estimator, widely used in reinforcement learning and especially policy gradient,{{Cite journal |last=Williams |first=Ronald J. |date=1992-05-01 |title=Simple statistical gradient-following algorithms for connectionist reinforcement learning |url=https://link.springer.com/article/10.1007/bf00992696 |journal=Machine Learning |language=en |volume=8 |issue=3 |pages=229–256 |doi=10.1007/BF00992696 |issn=1573-0565}} uses the following equality:
\nabla_\phi L(\phi) = \mathbb{E}_{z \sim q_\phi(z)}[\nabla_\phi(\ln q_\phi(z)) f(z)]
This allows the gradient to be estimated:
\nabla_\phi L(\phi) \approx \frac{1}{N} \sum_{i=1}^N \nabla_\phi(\ln q_\phi(z_i)) f(z_i), \quad z_i \sim q_\phi(z)
The REINFORCE estimator has high variance, and many methods were developed to reduce its variance.{{Cite journal |last1=Greensmith |first1=Evan |last2=Bartlett |first2=Peter L. |last3=Baxter |first3=Jonathan |date=2004 |title=Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning |url=https://jmlr.org/papers/v5/greensmith04a.html |journal=Journal of Machine Learning Research |volume=5 |issue=Nov |pages=1471–1530 |issn=1533-7928}}
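As an illustration, the REINFORCE estimator can be sketched on a toy problem (a hypothetical choice, not from the source): <math>f(z) = z^2</math> with <math>q_\phi = \mathcal{N}(\phi, 1)</math>, for which the exact gradient is <math>\nabla_\phi \mathbb{E}[z^2] = 2\phi</math>.

```python
import numpy as np

# REINFORCE (score-function) estimate of d/dphi E_{z ~ N(phi, 1)}[z^2].
# The exact value is d/dphi (phi^2 + 1) = 2*phi; with phi = 1.5 it is 3.
# The choice f(z) = z^2 and a unit-variance Gaussian q is illustrative.
rng = np.random.default_rng(0)
phi = 1.5
N = 1_000_000
z = rng.normal(phi, 1.0, size=N)
# For a unit-variance Gaussian, grad_phi ln q_phi(z) = (z - phi)
score = z - phi
grad_estimate = np.mean(score * z**2)
print(grad_estimate)  # close to 2*phi = 3, but with high per-sample variance
```

Even with a million samples, the per-sample terms of this estimator fluctuate widely, which is the high-variance behavior noted above.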
= Reparameterization estimator =
The reparameterization trick expresses <math>z</math> as:
z = g_\phi(\epsilon), \quad \epsilon \sim p(\epsilon)
Here, <math>g_\phi</math> is a deterministic function parameterized by <math>\phi</math>, and <math>\epsilon</math> is a noise variable drawn from a fixed distribution <math>p(\epsilon)</math>. This gives:
L(\phi) = \mathbb{E}_{z \sim q_\phi(z)}[f(z)] = \mathbb{E}_{\epsilon \sim p(\epsilon)}[f(g_\phi(\epsilon))]
Now, the gradient can be estimated as:
\nabla_\phi L(\phi) = \mathbb{E}_{\epsilon \sim p(\epsilon)}[\nabla_\phi f(g_\phi(\epsilon))] \approx \frac{1}{N} \sum_{i=1}^N \nabla_\phi f(g_\phi(\epsilon_i)), \quad \epsilon_i \sim p(\epsilon)
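On the same toy problem used for REINFORCE above (<math>f(z) = z^2</math>, <math>q_\phi = \mathcal{N}(\phi, 1)</math>, an illustrative assumption), the pathwise estimator can be sketched as:

```python
import numpy as np

# Reparameterization (pathwise) estimate of d/dphi E_{z ~ N(phi, 1)}[z^2],
# writing z = g_phi(eps) = phi + eps with eps ~ N(0, 1).
# The exact gradient is 2*phi = 3 for phi = 1.5 (toy setup, illustrative).
rng = np.random.default_rng(0)
phi = 1.5
N = 1_000_000
eps = rng.normal(0.0, 1.0, size=N)
# For f(z) = z^2 and z = phi + eps: grad_phi f(g_phi(eps)) = 2 * (phi + eps)
per_sample_grads = 2.0 * (phi + eps)
grad_estimate = np.mean(per_sample_grads)
print(grad_estimate)             # close to 2*phi = 3
print(np.var(per_sample_grads))  # ~4, far below the score-function estimator's
```

The per-sample variance here is exactly <math>4 \operatorname{Var}(\epsilon) = 4</math>, which is why the pathwise estimator typically needs far fewer samples than REINFORCE on such problems.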
Examples
For some common distributions, the reparameterization trick takes specific forms:
Normal distribution: For <math>z \sim \mathcal{N}(\mu, \sigma^2)</math>, we can use:
z = \mu + \sigma \epsilon, \quad \epsilon \sim \mathcal{N}(0, 1)
Exponential distribution: For <math>z \sim \text{Exp}(\lambda)</math>, we can use:
z = -\frac{\ln \epsilon}{\lambda}, \quad \epsilon \sim \text{Uniform}(0, 1)
Discrete distributions can be reparameterized by the Gumbel distribution (Gumbel-softmax trick or "concrete distribution").{{cite arXiv |last1=Maddison |first1=Chris J. |title=The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables |date=2017-03-05 |eprint=1611.00712 |last2=Mnih |first2=Andriy |last3=Teh |first3=Yee Whye|class=cs.LG }}
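The Gaussian and exponential reparameterizations above can be sketched as follows; the check against the known means is illustrative:

```python
import numpy as np

# Sampling via the reparameterizations listed above (Gaussian, exponential).
# Parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
N = 1_000_000

# Normal: z = mu + sigma * eps, with eps ~ N(0, 1)
mu, sigma = 2.0, 0.5
z_normal = mu + sigma * rng.normal(0.0, 1.0, size=N)

# Exponential: z = -ln(eps) / lam, with eps ~ Uniform(0, 1) (inverse-CDF form)
lam = 3.0
eps = rng.uniform(0.0, 1.0, size=N)
z_exp = -np.log(eps) / lam

print(z_normal.mean())  # ~ mu = 2.0
print(z_exp.mean())     # ~ 1/lam = 0.333...
```

In both cases the randomness lives entirely in the fixed-distribution noise, so the samples are differentiable functions of the distribution parameters.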
In general, any distribution that is differentiable with respect to its parameters can be reparameterized by inverting the multivariable CDF, then applying the implicit reparameterization method. See Figurnov et al. (2018) for an exposition and applications to the Gamma, Beta, Dirichlet, and von Mises distributions.
Applications
= Variational autoencoder =
File:Reparameterization_Trick.png
File:Reparameterized_Variational_Autoencoder.png
In Variational Autoencoders (VAEs), the VAE objective function, known as the Evidence Lower Bound (ELBO), is given by:
\text{ELBO}(\phi, \theta) = \mathbb{E}_{z \sim q_\phi(z|x)}[\log p_\theta(x|z)] - D_{\text{KL}}(q_\phi(z|x) || p(z))
where <math>q_\phi(z|x)</math> is the encoder (recognition model), <math>p_\theta(x|z)</math> is the decoder (generative model), and <math>p(z)</math> is the prior distribution over latent variables. The gradient of ELBO with respect to <math>\theta</math> is simply
\mathbb{E}_{z \sim q_\phi(z|x)}[\nabla_\theta \log p_\theta(x|z)] \approx \frac{1}{L} \sum_{l=1}^L \nabla_\theta \log p_\theta(x|z_l)
but the gradient with respect to <math>\phi</math> requires the trick. Express the sampling operation as:
z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)
where <math>\mu_\phi(x)</math> and <math>\sigma_\phi(x)</math> are the outputs of the encoder network, and <math>\odot</math> denotes element-wise multiplication. Then we have
\nabla_\phi \text{ELBO}(\phi, \theta) = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\left[\nabla_\phi [\log p_\theta(x|z) + \log p(z) - \log q_\phi(z|x)]\right]
where <math>z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon</math>. This allows us to estimate the gradient using Monte Carlo sampling:
\nabla_\phi \text{ELBO}(\phi, \theta) \approx \frac{1}{L} \sum_{l=1}^L \nabla_\phi [\log p_\theta(x|z_l) + \log p(z_l) - \log q_\phi(z_l|x)]
where <math>z_l = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon_l</math> and <math>\epsilon_l \sim \mathcal{N}(0, I)</math> for <math>l = 1, \ldots, L</math>.
This formulation enables backpropagation through the sampling process, allowing for end-to-end training of the VAE model using stochastic gradient descent or its variants.
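The sampling step of a VAE can be sketched as below; the specific parameter values and the finite-difference check are illustrative assumptions (a real VAE would obtain <math>\mu</math> and <math>\log \sigma^2</math> from an encoder network and use automatic differentiation):

```python
import numpy as np

# Minimal sketch of the VAE sampling step z = mu + sigma ⊙ eps, assuming
# the encoder outputs (mu, log_var) for a diagonal Gaussian. Values are
# illustrative stand-ins for encoder outputs.
rng = np.random.default_rng(0)

def sample_latent(mu, log_var, eps):
    # z = mu + sigma ⊙ eps, with sigma = exp(0.5 * log_var)
    return mu + np.exp(0.5 * log_var) * eps

mu = np.array([0.2, -1.0])
log_var = np.array([0.0, -2.0])
eps = rng.normal(size=2)   # with eps fixed, z is deterministic in (mu, log_var)
z = sample_latent(mu, log_var, eps)

# Because eps is held fixed, gradients pass through the sampling step:
# dz/dmu = 1 and dz/dlog_var = 0.5 * sigma * eps. Confirm the first by
# finite differences on the first coordinate.
h = 1e-6
dz_dmu = (sample_latent(mu + np.array([h, 0.0]), log_var, eps)[0] - z[0]) / h
print(dz_dmu)  # ~1.0
```

Holding the noise fixed is exactly what lets backpropagation treat the sampled <math>z</math> as a differentiable function of the encoder outputs.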
= Variational inference =
More generally, the trick allows using stochastic gradient descent for variational inference. Let the variational objective (ELBO) be of the form:
\text{ELBO}(\phi) = \mathbb{E}_{z \sim q_\phi(z)}[\log p(x, z) - \log q_\phi(z)]
Using the reparameterization trick, we can estimate the gradient of this objective with respect to <math>\phi</math>, writing <math>z = g_\phi(\epsilon)</math> with <math>\epsilon \sim p(\epsilon)</math>:
\nabla_\phi \text{ELBO}(\phi) \approx \frac{1}{L} \sum_{l=1}^L \nabla_\phi [\log p(x, g_\phi(\epsilon_l)) - \log q_\phi(g_\phi(\epsilon_l))], \quad \epsilon_l \sim p(\epsilon)
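The update above can be sketched end to end on a toy model (an illustrative assumption, not from the source): take <math>\log p(x, z) = -(z - 2)^2/2 + \text{const}</math>, so the exact posterior is <math>\mathcal{N}(2, 1)</math>, and fit <math>q_\phi = \mathcal{N}(\phi, 1)</math> by stochastic gradient ascent:

```python
import numpy as np

# Stochastic-gradient variational inference with the reparameterization
# trick on a toy model where log p(x, z) = -(z - 2)^2 / 2 + const, so the
# optimal variational mean is phi = 2. q_phi(z) = N(phi, 1) is
# reparameterized as z = phi + eps, eps ~ N(0, 1). All values illustrative.
rng = np.random.default_rng(0)
target_mean = 2.0
phi = -1.0        # initial variational parameter
lr, L = 0.1, 64   # learning rate and samples per step

for step in range(200):
    eps = rng.normal(size=L)
    z = phi + eps
    # grad_phi log p(x, z) = -(z - target_mean); the entropy of q_phi does
    # not depend on phi here (fixed unit variance), so its gradient is 0.
    grad = np.mean(-(z - target_mean))
    phi += lr * grad  # gradient *ascent* on the ELBO

print(phi)  # converges near target_mean = 2.0
```

Each step uses only noisy Monte Carlo gradients, yet the iterates settle near the exact posterior mean, which is the point of using the trick inside stochastic gradient descent.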
= Dropout =
The reparameterization trick has been applied to reduce the variance in dropout, a regularization technique in neural networks. The original dropout can be reparameterized with Bernoulli distributions:
y = (W \odot \epsilon) x, \quad \epsilon_{ij} \sim \text{Bernoulli}(\alpha_{ij})
where <math>W</math> is the weight matrix, <math>x</math> is the input, and <math>\alpha_{ij}</math> are the (fixed) dropout rates.
More generally, other distributions can be used than the Bernoulli distribution, such as Gaussian noise:
y_i = \mu_i + \sigma_i \odot \epsilon_i, \quad \epsilon_i \sim \mathcal{N}(0, I)
where <math>\mu_i</math> and <math>\sigma_i^2</math> are the mean and variance of the <math>i</math>-th output neuron, computed from the input and the noise parameters. The reparameterization trick can be applied to all such cases, resulting in the variational dropout method.{{Cite journal |last1=Kingma |first1=Durk P |last2=Salimans |first2=Tim |last3=Welling |first3=Max |date=2015 |title=Variational Dropout and the Local Reparameterization Trick |url=https://proceedings.neurips.cc/paper/2015/hash/bc7316929fe1545bf0b98d114ee3ecb8-Abstract.html |journal=Advances in Neural Information Processing Systems |volume=28|arxiv=1506.02557 }}
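The two noise schemes above can be sketched as follows; the shapes, rates, and the multiplicative form of the Gaussian variant are illustrative assumptions:

```python
import numpy as np

# Sketch of reparameterized dropout noise. Shapes and rates are illustrative.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # batch of 4 inputs, 8 features
W = rng.normal(size=(8, 3))   # weight matrix

# Bernoulli dropout on weights: y = (W ⊙ eps) x, eps_ij ~ Bernoulli(alpha)
alpha = 0.8                   # fixed retention probability
eps_bern = rng.binomial(1, alpha, size=W.shape)
y_bernoulli = x @ (W * eps_bern)

# Gaussian alternative: multiplicative N(1, s^2) noise on each output,
# reparameterized as 1 + s * eps with eps ~ N(0, 1), so gradients flow
# through the deterministic part.
s = 0.5
eps_gauss = rng.normal(size=(4, 3))
y_gaussian = (x @ W) * (1.0 + s * eps_gauss)
print(y_gaussian.shape)  # (4, 3)
```

Because the noise enters through a fixed distribution, the noise scale itself can be treated as a learnable parameter, which is the step that leads to variational dropout.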
See also
References
{{Reflist}}
Further reading
- {{cite journal |last1=Ruiz |first1=Francisco R. |last2=Titsias |first2=Michalis K. |last3=Blei |first3=David |date=2016 |title=The Generalized Reparameterization Gradient |url=https://proceedings.neurips.cc/paper_files/paper/2016/hash/f718499c1c8cef6730f9fd03c8125cab-Abstract.html |journal=Advances in Neural Information Processing Systems |volume=29 |arxiv=1610.02287 |access-date=September 23, 2024}}
- {{Cite journal |last1=Zhang |first1=Cheng |last2=Butepage |first2=Judith |last3=Kjellstrom |first3=Hedvig |last4=Mandt |first4=Stephan |date=2019-08-01 |title=Advances in Variational Inference |url=https://ieeexplore.ieee.org/document/8588399 |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=41 |issue=8 |pages=2008–2026 |doi=10.1109/TPAMI.2018.2889774 |pmid=30596568 |issn=0162-8828|arxiv=1711.05597 }}
- {{cite web |last=Mohamed |first=Shakir |date=October 29, 2015 |title=Machine Learning Trick of the Day (4): Reparameterisation Tricks |url=https://blog.shakirm.com/2015/10/machine-learning-trick-of-the-day-4-reparameterisation-tricks/ |access-date=September 23, 2024 |website=The Spectator |ref={{sfnref|The Spectator|2015}}}}