Normal-inverse-gamma distribution
{{Short description|Family of multivariate continuous probability distributions}}
{{Probability distribution |
name =normal-inverse-gamma|
type =density|
pdf_image =File:Normal-inverse-gamma.svg|
cdf_image =|
parameters = \mu\, location (real)
\lambda > 0\, (real)
\alpha > 0\, (real)
\beta > 0\, (real)|
support = x \in (-\infty, \infty),\ \sigma^2 \in (0, \infty)|
pdf =
\frac{ \sqrt{ \lambda } }{ \sqrt{ 2 \pi \sigma^2 }}
\frac{ \beta^\alpha }{ \Gamma( \alpha ) }
\left( \frac{1}{\sigma^2 } \right)^{\alpha + 1}
\exp \left( -\frac { 2\beta + \lambda (x - \mu)^2} {2\sigma^2}\right)
|
cdf =|
mean = \operatorname{E}[x] = \mu
\operatorname{E}[\sigma^2] = \frac{\beta}{\alpha - 1}, for \alpha > 1|
median =|
mode = x = \mu,\ \sigma^2 = \frac{\beta}{\alpha + 3/2}
|
variance = \operatorname{Var}[x] = \frac{\beta}{(\alpha - 1)\lambda}, for \alpha > 1
\operatorname{Var}[\sigma^2] = \frac{\beta^2}{(\alpha-1)^2(\alpha-2)}, for \alpha > 2
\operatorname{Cov}[x, \sigma^2] = 0, for \alpha > 1|
skewness =|
kurtosis =|
entropy =|
mgf =|
char =|
}}
In probability theory and statistics, the normal-inverse-gamma distribution (or Gaussian-inverse-gamma distribution) is a four-parameter family of multivariate continuous probability distributions. It is the conjugate prior of a normal distribution with unknown mean and variance.
Definition
Suppose
: x \mid \sigma^2, \mu, \lambda \sim \mathrm{N}(\mu, \sigma^2 / \lambda)
has a normal distribution with mean \mu and variance \sigma^2 / \lambda, where
: \sigma^2 \mid \alpha, \beta \sim \Gamma^{-1}(\alpha, \beta)
has an inverse-gamma distribution. Then (x, \sigma^2) has a normal-inverse-gamma distribution, denoted as
: (x, \sigma^2) \sim \text{N-}\Gamma^{-1}(\mu, \lambda, \alpha, \beta) .
(\text{NIG} is also used instead of \text{N-}\Gamma^{-1}.)
The normal-inverse-Wishart distribution is a generalization of the normal-inverse-gamma distribution that is defined over multivariate random variables.
Characterization
=Probability density function=
: f(x, \sigma^2 \mid \mu, \lambda, \alpha, \beta) = \frac{\sqrt{\lambda}}{\sqrt{2 \pi \sigma^2}} \, \frac{\beta^\alpha}{\Gamma(\alpha)} \left( \frac{1}{\sigma^2} \right)^{\alpha + 1} \exp \left( -\frac{2\beta + \lambda (x - \mu)^2}{2\sigma^2} \right)
For the multivariate form where \mathbf{x} is a k \times 1 random vector,
: f(\mathbf{x}, \sigma^2 \mid \boldsymbol{\mu}, \mathbf{V}^{-1}, \alpha, \beta) = |\mathbf{V}|^{-1/2} (2\pi)^{-k/2} \, \frac{\beta^\alpha}{\Gamma(\alpha)} \left( \frac{1}{\sigma^2} \right)^{\alpha + 1 + k/2} \exp \left( -\frac{2\beta + (\mathbf{x} - \boldsymbol{\mu})^T \mathbf{V}^{-1} (\mathbf{x} - \boldsymbol{\mu})}{2\sigma^2} \right)
where |\mathbf{V}| is the determinant of the matrix \mathbf{V}. Note how this last equation reduces to the first form if k = 1, so that \mathbf{x}, \boldsymbol{\mu}, \mathbf{V}^{-1} are scalars with \mathbf{V}^{-1} = \lambda.
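As an illustrative check (not part of the source article), the joint density above can be evaluated directly and compared against the product of a conditional normal density and an inverse-gamma density, using SciPy's `norm` and `invgamma`; the parameter values below are arbitrary:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

def nig_pdf(x, sigma2, mu, lam, alpha, beta):
    """Joint normal-inverse-gamma density f(x, sigma^2 | mu, lambda, alpha, beta)."""
    return (np.sqrt(lam) / np.sqrt(2 * np.pi * sigma2)
            * beta**alpha / gamma(alpha)
            * (1.0 / sigma2)**(alpha + 1)
            * np.exp(-(2 * beta + lam * (x - mu)**2) / (2 * sigma2)))

def nig_pdf_factored(x, sigma2, mu, lam, alpha, beta):
    """Same density written as N(x | mu, sigma^2/lambda) * Inv-Gamma(sigma^2 | alpha, beta)."""
    return (stats.norm.pdf(x, loc=mu, scale=np.sqrt(sigma2 / lam))
            * stats.invgamma.pdf(sigma2, alpha, scale=beta))
```

Both forms agree pointwise, which mirrors the definition of the distribution as a conditional normal times an inverse-gamma marginal.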
== Alternative parameterization ==
It is also possible to let \gamma = 1/\lambda, in which case the pdf becomes
: f(x, \sigma^2 \mid \mu, \gamma, \alpha, \beta) = \frac{1}{\sigma \sqrt{2 \pi \gamma}} \, \frac{\beta^\alpha}{\Gamma(\alpha)} \left( \frac{1}{\sigma^2} \right)^{\alpha + 1} \exp \left( -\frac{2\beta\gamma + (x - \mu)^2}{2\gamma\sigma^2} \right)
In the multivariate form, the corresponding change would be to regard the covariance matrix instead of its inverse as a parameter.
=Cumulative distribution function=
:
F(x, \sigma^2 \mid \mu, \lambda, \alpha, \beta) =
\frac{e^{-\beta/\sigma^2} \left(\frac{\beta}{\sigma^2}\right)^{\alpha}
\left(\operatorname{erf}\left(\frac{\sqrt{\lambda} (x-\mu )}{\sqrt{2} \sigma }\right)+1\right)}{2
\sigma^2 \Gamma (\alpha)}
Properties
=Marginal distributions=
Given
: (x, \sigma^2) \sim \text{N-}\Gamma^{-1}(\mu, \lambda, \alpha, \beta)
as above, \sigma^2 by itself follows an inverse gamma distribution:
: \sigma^2 \sim \Gamma^{-1}(\alpha, \beta)
while \sqrt{\frac{\alpha\lambda}{\beta}} (x - \mu) follows a Student's t distribution with 2\alpha degrees of freedom.{{Cite book |last=Ramírez-Hassan |first=Andrés |url=https://bookdown.org/aramir21/IntroductionBayesianEconometricsGuidedTour/sec42.html#sec42 |title=4.2 Conjugate prior to exponential family {{!}} Introduction to Bayesian Econometrics}}
{{math proof | title=Proof of the marginal distribution of x | proof=
For \lambda = 1, the probability density function is
f(x, \sigma^2 \mid \mu, \alpha, \beta) = \frac{1}{\sqrt{2\pi\sigma^2}} \, \frac{\beta^\alpha}{\Gamma(\alpha)} \left( \frac{1}{\sigma^2} \right)^{\alpha + 1} \exp \left( -\frac{2\beta + (x - \mu)^2}{2\sigma^2} \right)
The marginal distribution over x is
\begin{align}
f(x \mid \mu,\alpha,\beta)
& =
\int_0^\infty d\sigma^2 f(x,\sigma^2\mid\mu,\alpha,\beta)
\\
& =
\frac {1} {\sqrt{2\pi} } \, \frac{\beta^\alpha}{\Gamma(\alpha)}
\int_0^\infty d\sigma^2
\left( \frac{1}{\sigma^2} \right)^{\alpha + 1/2 + 1} \exp \left( -\frac { 2\beta + (x - \mu)^2} {2\sigma^2} \right)
\end{align}
Except for the normalization factor, the expression under the integral coincides with the density of an inverse-gamma distribution
\Gamma^{-1}(x; a, b) = \frac{b^a}{\Gamma(a)}\frac{e^{-b/x}}{{x}^{a+1}} ,
with a = \alpha + \tfrac{1}{2}, b = \frac{2\beta + (x - \mu)^2}{2}, x = \sigma^2.
Since \int_0^\infty \Gamma^{-1}(\sigma^2; a, b) \, d\sigma^2 = 1, and
\int_0^\infty d\sigma^2
\left( \frac{1}{\sigma^2} \right)^{\alpha + 1/2 + 1} \exp \left( -\frac { 2\beta + (x - \mu)^2} {2\sigma^2}
\right)
= \Gamma(\alpha + 1/2) \left(\frac { 2\beta + (x - \mu)^2} {2} \right)^{-(\alpha + 1/2)}
Substituting this expression and factoring out the dependence on x,
f(x \mid \mu,\alpha,\beta) \propto_{x} \left(1 + \frac{(x - \mu)^2}{2 \beta} \right)^{-(\alpha + 1/2)} .
The shape of the generalized Student's t-distribution is
t(x | \nu,\hat{\mu},\hat{\sigma}^2)
\propto_x
\left(1+\frac{1}{\nu} \frac{ (x-\hat{\mu})^2 }{\hat{\sigma}^2 } \right)^{-(\nu+1)/2}
.
The marginal distribution therefore follows a t-distribution with 2\alpha degrees of freedom:
f(x \mid \mu,\alpha,\beta) = t(x \mid \nu=2 \alpha, \hat{\mu}=\mu, \hat{\sigma}^2=\beta/\alpha) .
}}
In the multivariate case, the marginal distribution of \mathbf{x} is a multivariate t distribution:
: \mathbf{x} \sim t_{2\alpha}\left(\boldsymbol{\mu}, \frac{\beta}{\alpha} \mathbf{V}\right)
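The t marginal can be verified numerically (an illustrative sketch, not from the source): integrating the joint density over \sigma^2 recovers a Student t density with 2\alpha degrees of freedom, location \mu, and scale \sqrt{\beta/(\alpha\lambda)}. Parameter values are arbitrary:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu, lam, alpha, beta = 1.0, 2.0, 3.0, 1.5   # arbitrary example values

def joint(sigma2, x):
    # f(x, sigma^2) = N(x | mu, sigma^2/lambda) * Inv-Gamma(sigma^2 | alpha, beta)
    return (stats.norm.pdf(x, mu, np.sqrt(sigma2 / lam))
            * stats.invgamma.pdf(sigma2, alpha, scale=beta))

x = 0.7
# marginalize sigma^2 out by numerical integration
marginal, _ = quad(joint, 0, np.inf, args=(x,))
# Student t with 2*alpha dof, location mu, scale sqrt(beta / (alpha * lambda))
t_pdf = stats.t.pdf(x, df=2 * alpha, loc=mu, scale=np.sqrt(beta / (alpha * lam)))
```

The general-\lambda scale \sqrt{\beta/(\alpha\lambda)} reduces to \hat{\sigma}^2 = \beta/\alpha from the proof above when \lambda = 1.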
=Summation=
=Scaling=
Suppose
: (x, \sigma^2) \sim \text{N-}\Gamma^{-1}(\mu, \lambda, \alpha, \beta).
Then for c > 0,
: (cx, c\sigma^2) \sim \text{N-}\Gamma^{-1}(c\mu, \lambda/c, \alpha, c\beta).
Proof: To prove this, let (x, \sigma^2) \sim \text{N-}\Gamma^{-1}(\mu, \lambda, \alpha, \beta) and fix c > 0. Defining Y = (y_1, y_2) = (cx, c\sigma^2), observe that the PDF of Y evaluated at (y_1, y_2) is given by 1/c^2 times the PDF of an \text{N-}\Gamma^{-1}(\mu, \lambda, \alpha, \beta) random variable evaluated at (y_1/c, y_2/c). Hence the PDF of Y evaluated at (y_1, y_2) is given by:
: f_Y(y_1, y_2) = \frac{1}{c^2} \, f\!\left( \frac{y_1}{c}, \frac{y_2}{c} \,\Big|\, \mu, \lambda, \alpha, \beta \right) = \frac{\sqrt{\lambda/c}}{\sqrt{2 \pi y_2}} \, \frac{(c\beta)^\alpha}{\Gamma(\alpha)} \left( \frac{1}{y_2} \right)^{\alpha + 1} \exp \left( -\frac{2c\beta + (\lambda/c)(y_1 - c\mu)^2}{2 y_2} \right).
The right-hand expression is the PDF of an \text{N-}\Gamma^{-1}(c\mu, \lambda/c, \alpha, c\beta) random variable evaluated at (y_1, y_2), which completes the proof.
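The change-of-variables identity in the proof can be checked numerically (an illustrative sketch with arbitrary parameter values, using the conditional-normal-times-inverse-gamma factorization of the density):

```python
import numpy as np
from scipy import stats

def nig_pdf(x, sigma2, mu, lam, alpha, beta):
    """N-IG density as N(x | mu, sigma^2/lambda) * Inv-Gamma(sigma^2 | alpha, beta)."""
    return (stats.norm.pdf(x, mu, np.sqrt(sigma2 / lam))
            * stats.invgamma.pdf(sigma2, alpha, scale=beta))

mu, lam, alpha, beta, c = 1.0, 2.0, 3.0, 1.5, 2.5   # arbitrary example values
y1, y2 = 0.8, 1.7
# (1/c^2) * f(y1/c, y2/c | mu, lam, alpha, beta) ...
lhs = nig_pdf(y1 / c, y2 / c, mu, lam, alpha, beta) / c**2
# ... equals the N-IG(c*mu, lam/c, alpha, c*beta) density at (y1, y2)
rhs = nig_pdf(y1, y2, c * mu, lam / c, alpha, c * beta)
```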
=Exponential family=
Normal-inverse-gamma distributions form an exponential family with natural parameters \theta_1 = -\frac{\lambda}{2}, \theta_2 = \lambda\mu, \theta_3 = \alpha, and \theta_4 = -\beta - \frac{\lambda\mu^2}{2}, and sufficient statistics T_1 = \frac{x^2}{\sigma^2}, T_2 = \frac{x}{\sigma^2}, T_3 = \log\left(\frac{1}{\sigma^2}\right), and T_4 = \frac{1}{\sigma^2}.
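As a numerical sketch of the exponential-family form (not from the source; it assumes the leftover (1/\sigma^2)^{3/2} factor is absorbed into the base measure), the log-density minus the inner product of natural parameters and sufficient statistics, minus the log base measure, should be a constant independent of (x, \sigma^2):

```python
import numpy as np
from scipy import stats

mu, lam, alpha, beta = 0.5, 2.0, 3.0, 1.5   # arbitrary example values
# natural parameters; theta_3 pairs with log(1/sigma^2)
theta = np.array([-lam / 2, lam * mu, alpha, -beta - lam * mu**2 / 2])

def log_nig(x, s2):
    """N-IG log-density via the normal * inverse-gamma factorization."""
    return (stats.norm.logpdf(x, mu, np.sqrt(s2 / lam))
            + stats.invgamma.logpdf(s2, alpha, scale=beta))

def log_partition_residual(x, s2):
    T = np.array([x**2 / s2, x / s2, np.log(1 / s2), 1 / s2])
    # subtract theta . T and the assumed log base measure (3/2) log(1/sigma^2);
    # what remains is the negative log-partition constant
    return log_nig(x, s2) - theta @ T - 1.5 * np.log(1 / s2)
```

Evaluating `log_partition_residual` at different points of the support returns the same constant.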
=Information entropy=
=Kullback–Leibler divergence=
The Kullback–Leibler divergence from one normal-inverse-gamma distribution to another measures the dissimilarity between the two distributions.
Maximum likelihood estimation
{{Empty section|date=July 2010}}
Posterior distribution of the parameters
See the articles on normal-gamma distribution and conjugate prior.
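Because the normal-inverse-gamma distribution is conjugate for normal data with unknown mean and variance, the posterior is again normal-inverse-gamma with closed-form updated hyperparameters. The sketch below (illustrative; the update formulas are the standard ones given in the normal-gamma article, transcribed to the variance parameterization) verifies conjugacy by checking that the unnormalized posterior differs from the updated N-\Gamma^{-1} density only by a constant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0, lam0, a0, b0 = 0.0, 1.0, 2.0, 1.0   # prior hyperparameters (example values)
data = rng.normal(0.5, 1.3, size=20)     # observations x_i ~ N(m, s2)
n, xbar = len(data), data.mean()
ss = ((data - xbar) ** 2).sum()

# standard conjugate update for the N-IG prior on (m, s2)
mu_n  = (lam0 * mu0 + n * xbar) / (lam0 + n)
lam_n = lam0 + n
a_n   = a0 + n / 2
b_n   = b0 + ss / 2 + lam0 * n * (xbar - mu0) ** 2 / (2 * (lam0 + n))

def nig_logpdf(m, s2, mu, lam, a, b):
    return (stats.norm.logpdf(m, mu, np.sqrt(s2 / lam))
            + stats.invgamma.logpdf(s2, a, scale=b))

def log_unnorm_posterior(m, s2):
    # log prior + log likelihood
    return (nig_logpdf(m, s2, mu0, lam0, a0, b0)
            + stats.norm.logpdf(data, m, np.sqrt(s2)).sum())

# the difference is the log marginal likelihood: constant in (m, s2)
d1 = log_unnorm_posterior(0.2, 1.1) - nig_logpdf(0.2, 1.1, mu_n, lam_n, a_n, b_n)
d2 = log_unnorm_posterior(-0.7, 0.6) - nig_logpdf(-0.7, 0.6, mu_n, lam_n, a_n, b_n)
```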
Interpretation of the parameters
See the articles on normal-gamma distribution and conjugate prior.
Generating normal-inverse-gamma random variates
Generation of random variates is straightforward:
- Sample \sigma^2 from an inverse gamma distribution with parameters \alpha and \beta
- Sample x from a normal distribution with mean \mu and variance \sigma^2/\lambda
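The two steps above can be sketched as follows (illustrative; `sample_nig` is a hypothetical helper name, and the inverse-gamma draw is obtained by inverting a gamma variate):

```python
import numpy as np

def sample_nig(mu, lam, alpha, beta, size, seed=None):
    """Draw (x, sigma2) pairs from N-Gamma^{-1}(mu, lambda, alpha, beta)."""
    rng = np.random.default_rng(seed)
    # sigma^2 ~ Inv-Gamma(alpha, beta): invert a Gamma(alpha, scale=1/beta) draw
    sigma2 = 1.0 / rng.gamma(alpha, 1.0 / beta, size=size)
    # x | sigma^2 ~ N(mu, sigma^2 / lambda)
    x = rng.normal(mu, np.sqrt(sigma2 / lam))
    return x, sigma2

x, sigma2 = sample_nig(mu=1.0, lam=2.0, alpha=3.0, beta=1.5, size=200_000, seed=0)
```

Sample moments then approach the values in the infobox, e.g. E[x] = \mu and E[\sigma^2] = \beta/(\alpha - 1).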
Related distributions
- The normal-gamma distribution is the same distribution parameterized on the precision \tau = 1/\sigma^2 rather than the variance \sigma^2
- A generalization of this distribution which allows for a multivariate mean and a completely unknown positive-definite covariance matrix (whereas in the multivariate normal-inverse-gamma distribution the covariance matrix is regarded as known up to the scale factor \sigma^2) is the normal-inverse-Wishart distribution
See also
References
{{Reflist}}
- Denison, David G. T.; Holmes, Christopher C.; Mallick, Bani K.; Smith, Adrian F. M. (2002) Bayesian Methods for Nonlinear Classification and Regression, Wiley. {{ISBN|0471490369}}
- Koch, Karl-Rudolf (2007) Introduction to Bayesian Statistics (2nd Edition), Springer. {{ISBN|354072723X}}
{{ProbDistributions|multivariate}}
Category:Continuous distributions