Sub-Gaussian distribution
{{Short description|Probability distribution whose tails decay at least as fast as those of a Gaussian}}
In probability theory, a subgaussian distribution, the distribution of a subgaussian random variable, is a probability distribution with strong tail decay. More specifically, the tails of a subgaussian distribution are dominated by (i.e. decay at least as fast as) the tails of a Gaussian. This property gives subgaussian distributions their name.
Often in analysis, we divide an object (such as a random variable) into two parts, a central bulk and a distant tail, then analyze each separately. In probability, this division usually goes: everything interesting happens near the center, while the tail event is so rare that it may be safely ignored. Subgaussian distributions are worthy of study because the gaussian distribution is well-understood, and so we can give sharp bounds on the rarity of the tail event. Similarly, the subexponential distributions are also worthy of study.
Formally, the probability distribution of a random variable <math>X</math> is called subgaussian if there is a positive constant <math>C</math> such that for every <math>t \geq 0</math>,
<math display="block">\operatorname{P}(|X| \geq t) \leq 2 \exp{(-t^2/C^2)}.</math>
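As a numerical illustration (not part of the definition), the standard normal distribution itself satisfies this bound with the specific choice <math>C = \sqrt{2}</math>, since its two-sided tail equals <math>\operatorname{erfc}(t/\sqrt{2})</math>:

```python
import math

# Sanity check (illustrative): the standard normal satisfies the
# subgaussian tail definition with C = sqrt(2), i.e.
#   P(|X| >= t) = erfc(t / sqrt(2)) <= 2 * exp(-t**2 / 2).
def gaussian_two_sided_tail(t):
    # P(|X| >= t) for X ~ N(0, 1)
    return math.erfc(t / math.sqrt(2))

for t in [x / 10 for x in range(0, 51)]:
    assert gaussian_two_sided_tail(t) <= 2 * math.exp(-t**2 / 2)
```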
There are many equivalent definitions. For example, a random variable <math>X</math> is sub-Gaussian iff its tail distribution function is bounded from above (up to a constant) by that of a Gaussian:
<math display="block">\operatorname{P}(|X| \geq t) \leq c \operatorname{P}(|Z| \geq t) \quad \text{for all } t \geq 0,</math>
where <math>c \geq 1</math> is a constant and <math>Z</math> is a mean zero Gaussian random variable.{{r|Wainwright2019|at=Theorem 2.6}}
== Definitions ==

=== Subgaussian norm ===
The subgaussian norm of <math>X</math>, denoted as <math>\Vert X \Vert_{\psi_2}</math>, is
<math display="block">\Vert X \Vert_{\psi_2} = \inf\left\{c > 0 : \operatorname{E}\left[\exp\left(\frac{X^2}{c^2}\right)\right] \leq 2\right\}.</math>
In other words, it is the Orlicz norm of <math>X</math> generated by the Orlicz function <math>\Phi(u) = e^{u^2} - 1.</math> By condition (2) below, subgaussian random variables can be characterized as those random variables with finite subgaussian norm.
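A minimal sketch of computing this norm numerically, assuming a distribution for which <math>\operatorname{E}[\exp(X^2/c^2)]</math> has a closed form (here the standard normal, whose subgaussian norm is <math>\sqrt{8/3}</math>):

```python
import math

# Sketch (not from the article): the subgaussian norm is the smallest c
# with E[exp(X^2/c^2)] <= 2.  For X ~ N(0,1), the expectation equals
# 1/sqrt(1 - 2/c^2) when c^2 > 2 (and diverges otherwise), so the norm
# is sqrt(8/3); bisection recovers it numerically.
def mgf_of_square_std_normal(c):
    return math.inf if c * c <= 2 else 1 / math.sqrt(1 - 2 / (c * c))

def psi2_norm(mgf_of_square, lo=0.0, hi=100.0, iters=100):
    # E[exp(X^2/c^2)] is decreasing in c, so bisection applies
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mgf_of_square(mid) <= 2:
            hi = mid
        else:
            lo = mid
    return hi

assert abs(psi2_norm(mgf_of_square_std_normal) - math.sqrt(8 / 3)) < 1e-9
```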
=== Variance proxy ===
If there exists some <math>s^2</math> such that <math>\operatorname{E}[e^{\lambda (X - \operatorname{E}[X])}] \leq e^{\frac{s^2 \lambda^2}{2}}</math> for all <math>\lambda \in \R</math>, then <math>s^2</math> is called a variance proxy, and the smallest such <math>s^2</math> is called the optimal variance proxy and denoted by <math>\Vert X \Vert_{vp}^2</math>.
Since <math>\operatorname{E}[e^{\lambda X}] = e^{\frac{\sigma^2 \lambda^2}{2}}</math> when <math>X \sim \mathcal{N}(0, \sigma^2)</math> is Gaussian, we then have <math>\Vert X \Vert_{vp}^2 = \sigma^2</math>, as it should.
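The Gaussian computation behind this identity is the standard completion of the square:

```latex
\operatorname{E}\left[e^{\lambda X}\right]
  = \int_{-\infty}^{\infty} e^{\lambda x}\,
    \frac{e^{-x^2/(2\sigma^2)}}{\sqrt{2\pi\sigma^2}}\,dx
  = e^{\sigma^2\lambda^2/2}
    \int_{-\infty}^{\infty}
    \frac{e^{-(x-\lambda\sigma^2)^2/(2\sigma^2)}}{\sqrt{2\pi\sigma^2}}\,dx
  = e^{\sigma^2\lambda^2/2}.
```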
=== Equivalent definitions ===
Let <math>X</math> be a random variable. Let <math>K_1, K_2, \ldots, K_6</math> be positive constants. The following conditions are equivalent: (Proposition 2.5.2 {{Cite book |last=Vershynin |first=R. |title=High-dimensional probability: An introduction with applications in data science |publisher=Cambridge University Press |year=2018 |location=Cambridge}})
- Tail probability bound: <math>\operatorname{P}(|X| \geq t) \leq 2 \exp{(-t^2/K_1^2)}</math> for all <math>t \geq 0</math>;
- Finite subgaussian norm: <math>\Vert X \Vert_{\psi_2} = K_2 < \infty</math>;
- Moment: <math>\operatorname{E} |X|^p \leq 2K_3^p \Gamma\left(\tfrac{p}{2}+1\right)</math> for all <math>p \geq 1</math>, where <math>\Gamma</math> is the Gamma function;
- Moment: <math>\left(\operatorname{E} |X|^p\right)^{1/p} \leq K_4 \sqrt{p}</math> for all <math>p \geq 1</math>;
- Moment-generating function (of <math>X</math>), or variance proxy{{R|kahane}}{{R|buldygin}}: <math>\operatorname{E}[e^{\lambda X}] \leq e^{K_5^2 \lambda^2}</math> for all <math>\lambda \in \R</math> (assuming <math>\operatorname{E}[X] = 0</math>);
- Moment-generating function (of <math>X^2</math>): <math>\operatorname{E}[e^{\lambda^2 X^2}] \leq e^{K_6^2 \lambda^2}</math> for all <math>|\lambda| \leq \tfrac{1}{K_6}</math>;
- Union bound: for some <math>c > 0</math>, <math>\operatorname{E}\left[\max\{|X_1 - \operatorname{E}[X]|, \ldots, |X_n - \operatorname{E}[X]|\}\right] \leq c\sqrt{\ln n}</math> for all <math>n > c</math>, where <math>X_1, \ldots, X_n</math> are i.i.d. copies of <math>X</math>;
- Subexponential: <math>X^2</math> has a subexponential distribution.
Furthermore, the constants <math>K_i</math> are the same in the definitions (1) to (5), up to an absolute constant. So for example, given a random variable satisfying (1) and (2), the minimal constants in the two definitions satisfy <math>K_2 \leq cK_1</math> and <math>K_1 \leq c'K_2</math>, where <math>c, c'</math> are constants independent of the random variable.
=== Proof of equivalence ===
As an example, the first four definitions are equivalent by the proof below.

Proof. <math>(1) \implies (3)</math>: By the layer cake representation,
<math display="block">\begin{align}
\operatorname{E} |X|^p &= \int_0^\infty \operatorname{P}(|X|^p \geq t) dt\\
&= \int_0^\infty pt^{p-1}\operatorname{P}(|X| \geq t) dt\\
&\leq 2\int_0^\infty pt^{p-1}\exp\left(-\frac{t^2}{K_1^2}\right) dt.
\end{align}</math>
After a change of variables <math>u = t^2/K_1^2</math>, we find that
<math display="block">\begin{align}
\operatorname{E} |X|^p &\leq 2K_1^p \frac{p}{2}\int_0^\infty u^{\frac{p}{2}-1}e^{-u} du\\
&= 2K_1^p \frac{p}{2}\Gamma\left(\frac{p}{2}\right)\\
&= 2K_1^p \Gamma\left(\frac{p}{2}+1\right).
\end{align}</math>
<math>(3) \implies (2)</math>: By the Taylor series of the exponential,
<math display="block">\begin{align}
\operatorname{E}[\exp{(\lambda X^2)}] &= 1 + \sum_{p=1}^\infty \frac{\lambda^p \operatorname{E}{[X^{2p}]}}{p!}\\
&\leq 1 + \sum_{p=1}^\infty \frac{2\lambda^p K_3^{2p} \Gamma\left(p+1\right)}{p!}\\
&= 1 + 2 \sum_{p=1}^\infty \lambda^p K_3^{2p}\\
&= 2 \sum_{p=0}^\infty \lambda^p K_3^{2p}-1\\
&= \frac{2}{1-\lambda K_3^2}-1 \quad\text{for }\lambda K_3^2 <1,
\end{align}</math>
which is less than or equal to <math>2</math> for <math>\lambda K_3^2 \leq \tfrac13</math>. Let <math>\lambda = \tfrac{1}{3K_3^2}</math>, then <math>\operatorname{E}[\exp(X^2/(3K_3^2))] \leq 2</math>, and so <math>\Vert X \Vert_{\psi_2} \leq \sqrt{3} K_3</math>.

<math>(2) \implies (1)</math>: By Markov's inequality,
<math display="block">\operatorname{P}(|X| \geq t) = \operatorname{P}\left(e^{X^2/K_2^2} \geq e^{t^2/K_2^2}\right) \leq \frac{\operatorname{E}[e^{X^2/K_2^2}]}{e^{t^2/K_2^2}} \leq 2e^{-t^2/K_2^2}.</math>

<math>(3) \iff (4)</math>: by the asymptotic formula for the gamma function, <math>\Gamma\left(\tfrac{p}{2}+1\right)^{1/p} \sim \sqrt{\tfrac{p}{2e}}</math> as <math>p \to \infty</math>.
From the proof, we can extract a cycle of three inequalities:
- If <math>\operatorname{P}(|X| \geq t) \leq 2 \exp{(-t^2/K^2)}</math>, then <math>\operatorname{E} |X|^p \leq 2K^p \Gamma\left(\tfrac{p}{2}+1\right)</math> for all <math>p \geq 1</math>.
- If <math>\operatorname{E} |X|^p \leq 2K^p \Gamma\left(\tfrac{p}{2}+1\right)</math> for all <math>p \geq 1</math>, then <math>\Vert X \Vert_{\psi_2} \leq \sqrt{3} K</math>.
- If <math>\Vert X \Vert_{\psi_2} \leq K</math>, then <math>\operatorname{P}(|X| \geq t) \leq 2 \exp{(-t^2/K^2)}</math>.
In particular, the constants provided by the definitions are the same up to a constant factor, so we can say that the definitions are equivalent up to a constant independent of <math>X</math>.
Similarly, because <math>\Gamma\left(\tfrac{p}{2}+1\right)^{1/p}</math> equals <math>\sqrt{p}</math> up to a positive multiplicative constant, for all <math>p \geq 1</math>, the definitions (3) and (4) are also equivalent up to a constant.
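As a concrete check of the first inequality in this cycle (an illustration, not part of the argument): the standard normal satisfies the tail bound with <math>K = \sqrt{2}</math>, and its absolute moments <math>\operatorname{E}|X|^p = 2^{p/2}\Gamma(\tfrac{p+1}{2})/\sqrt{\pi}</math> indeed stay below <math>2K^p\Gamma(\tfrac{p}{2}+1)</math>:

```python
import math

# Numerical check (illustrative): X ~ N(0, 1) satisfies
# P(|X| >= t) <= 2 exp(-t^2/K^2) with K = sqrt(2), and its moments
# obey the first inequality of the cycle, E|X|^p <= 2 K^p Gamma(p/2 + 1).
def abs_moment_std_normal(p):
    # E|X|^p = 2^(p/2) * Gamma((p+1)/2) / sqrt(pi) for X ~ N(0, 1)
    return 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

K = math.sqrt(2)
for p in range(1, 21):
    assert abs_moment_std_normal(p) <= 2 * K**p * math.gamma(p / 2 + 1)
```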
== Basic properties ==
{{Math theorem
| math_statement = - If <math>X</math> is subgaussian, and <math>c \in \R</math>, then <math>\Vert cX \Vert_{\psi_2} = |c| \Vert X \Vert_{\psi_2}</math> and <math>\Vert cX \Vert_{vp} = |c| \Vert X \Vert_{vp}</math>.
- (Triangle inequality) If <math>X, Y</math> are subgaussian, then <math>\Vert X + Y \Vert_{\psi_2} \leq \Vert X \Vert_{\psi_2} + \Vert Y \Vert_{\psi_2}.</math>
- (Chernoff bound) If <math>X</math> is subgaussian with mean zero, then for all <math>t \geq 0</math>, <math>\operatorname{P}(X \geq t) \leq \exp\left(-\frac{t^2}{2\Vert X \Vert_{vp}^2}\right).</math>
| name = Basic properties
}}
Here <math>X \lesssim Y</math> means that <math>X \leq CY</math>, where the positive constant <math>C</math> is independent of <math>X</math> and <math>Y</math>.
{{Math theorem
| math_statement = If <math>X</math> is subgaussian, then <math>\Vert X - \operatorname{E}[X] \Vert_{\psi_2} \lesssim \Vert X \Vert_{\psi_2}</math>.
| name = Subgaussian deviation bound
}}
{{Math proof|title=Proof|proof= By the triangle inequality, <math>\Vert X - \operatorname{E}[X] \Vert_{\psi_2} \leq \Vert X \Vert_{\psi_2} + \Vert \operatorname{E}[X] \Vert_{\psi_2}</math>. Now we have <math>\Vert \operatorname{E}[X] \Vert_{\psi_2} = |\operatorname{E}[X]| \cdot \Vert 1 \Vert_{\psi_2} = \frac{|\operatorname{E}[X]|}{\sqrt{\ln 2}}</math>. By the equivalence of definitions (2) and (4) of subgaussianity, we have <math>|\operatorname{E}[X]| \leq \operatorname{E}[|X|] \lesssim \Vert X \Vert_{\psi_2}</math>.}}
{{Math theorem
| math_statement = If <math>X, Y</math> are subgaussian and independent, then <math>\Vert X + Y \Vert_{vp}^2 \leq \Vert X \Vert_{vp}^2 + \Vert Y \Vert_{vp}^2.</math>
| name = Independent subgaussian sum bound
}}
{{Math proof|title=Proof|proof= If independent, then use that the cumulant generating function of independent random variables is additive. That is, <math>\ln \operatorname{E}[e^{\lambda(X+Y)}] = \ln \operatorname{E}[e^{\lambda X}] + \ln \operatorname{E}[e^{\lambda Y}] \leq \frac{\lambda^2}{2}\left(\Vert X \Vert_{vp}^2 + \Vert Y \Vert_{vp}^2\right).</math>
If not independent, then by Hölder's inequality, for any <math>1/p + 1/q = 1</math> we have
<math display="block">\operatorname{E}[e^{\lambda(X+Y)}] = \left\Vert e^{\lambda(X+Y)} \right\Vert_1 \leq \left\Vert e^{\lambda X} \right\Vert_p \left\Vert e^{\lambda Y} \right\Vert_q \leq e^{\frac{\lambda^2}{2}\left(p \|X\|_{vp}^2 + q \|Y\|_{vp}^2\right)}.</math>
Solving the optimization problem
<math display="block">\begin{cases}
\min p \|X\|_{vp}^2 + q \|Y\|_{vp}^2 \\
1/p + 1/q = 1
\end{cases}</math>
we obtain the result <math>\Vert X + Y \Vert_{vp} \leq \Vert X \Vert_{vp} + \Vert Y \Vert_{vp}</math>.}}
{{Math theorem
| math_statement = Linear sums of subgaussian random variables are subgaussian.
| name = Corollary
}}
{{Math theorem
| math_statement = If <math>\operatorname{E}[X] = 0</math>, <math>\operatorname{E}[X^2] = 1</math>, and <math>\Pr(X \geq t) \leq e^{-\frac 12 at^2}</math> for all <math>t \geq 0</math>, then <math>\operatorname{E}[e^{tX}] \leq e^{Ct^2}</math> for all <math>t > 0</math>, where <math>C</math> depends on <math>a</math> only.
| name = Partial converse
| note = Matoušek 2008, Lemma 2.4
}}
{{hidden begin|style=width:100%|ta1=center|border=1px #aaa solid|title=Proof}}
{{Math proof|title=Proof|proof=
Let <math>F</math> be the CDF of <math>X</math>. The proof splits the integral in the MGF into two halves, one with <math>x \leq 1/t</math> and one with <math>x > 1/t</math>, and bounds each one respectively.
<math display="block">\begin{aligned}
E[e^{tX}]
&= \int_\R e^{tx} dF(x) \\
&= \int_{-\infty}^{1/t} e^{tx} dF(x) + \int_{1/t}^{+\infty} e^{tx} dF(x)
\end{aligned}</math>
Since <math>e^s \leq 1 + s + s^2</math> for <math>s \leq 1</math>,
<math display="block">\begin{aligned}
\int_{-\infty}^{1/t} e^{tx} dF(x)
&\leq \int_{-\infty}^{1/t} (1+tx + t^2x^2) dF(x) \\
&\leq \int_{\R} (1+tx + t^2x^2) dF(x) \\
&= 1 + tE[X] + t^2 E[X^2] \\
&= 1 + t^2 \\
&\leq e^{t^2}
\end{aligned}</math>
For the second term, upper bound it by a summation:
<math display="block">\begin{aligned}
\int_{1/t}^{+\infty} e^{tx} dF(x)
&\leq e^2 \Pr(X \in [1/t, 2/t]) + e^3 \Pr(X \in [2/t, 3/t]) + \dots \\
&\leq \sum_{k=1}^\infty e^{k+1} \Pr(X \geq k/t) \\
&\leq \sum_{k=1}^\infty e^{k(2-\frac 12 ak/t^2)}
\end{aligned}</math>
When <math>t^2 \leq a/8</math>, we have <math>2 - \frac 12 ak/t^2 \leq -\frac{a}{4t^2}</math> for any <math>k \geq 1</math>, so
<math display="block">\sum_{k=1}^\infty e^{k(2-\frac 12 ak/t^2)} \leq \sum_{k=1}^\infty e^{-\frac{a}{4t^2}k} = \frac{1}{e^{\frac{a}{4t^2}} - 1} \leq \frac 4a t^2</math>
When <math>t^2 > a/8</math>, by drawing out the curve of <math>f(x) = e^{x(2-\frac 12 ax/t^2)}</math>, and comparing the summation with the integral, we find that
<math display="block">\sum_{k=1}^\infty e^{k(2-\frac 12 ak/t^2)} \leq \int_\R f(x)dx + 2 \max_x f(x) = e^{\frac{2 t^2}{a}}\left(\sqrt{\frac{2 \pi t^2}{a}}+2\right) < 10 \sqrt{t^2/a}\, e^{\frac{2 t^2}{a}}</math>
Now verify that, in both cases, <math>e^{t^2}</math> plus the above bound is at most <math>e^{Ct^2}</math>, where <math>C</math> depends on <math>a</math> only.
}}{{hidden end}}
{{Math theorem
| name = Corollary
| note = Matoušek 2008, Lemma 2.2
| math_statement = Let <math>X_1, \dots, X_n</math> be independent random variables with the same upper subgaussian tail:
<math display="block">-\ln \Pr(X_i \geq t) \geq \frac 12 at^2</math>
for all <math>t \geq 0</math>. Also, <math>\operatorname{E}[X_i] = 0</math> and <math>\operatorname{E}[X_i^2] = 1</math>. Then for any unit vector <math>v \in \R^n</math>, the linear sum <math>\sum_i v_i X_i</math> has a subgaussian tail:
<math display="block">-\ln \Pr\left(\sum_i v_i X_i \geq t \right) \geq C_a t^2</math>
where <math>C_a</math> depends only on <math>a</math>.
}}
== Concentration ==
{{Math theorem|name=Gaussian concentration inequality for Lipschitz functions|note=Tao 2012, Theorem 2.1.12.|math_statement=
If <math>f: \R^n \to \R</math> is <math>L</math>-Lipschitz, and <math>X</math> is a standard gaussian vector in <math>\R^n</math>, then <math>f(X)</math> concentrates around its expectation at a rate
<math display="block">\Pr(f(X) - E[f(X)] \geq t)\leq e^{-\frac{2}{\pi^2}\frac{t^2}{L^2}}</math>
and similarly for the other tail.
}}
{{hidden begin|style=width:100%|ta1=center|border=1px #aaa solid|title=Proof}}
{{Math proof|title=Proof|proof=
By shifting and scaling, it suffices to prove the case where <math>L = 1</math> and <math>E[f(X)] = 0</math>.
Since every 1-Lipschitz function is uniformly approximable by 1-Lipschitz smooth functions (by convolving with a mollifier), it suffices to prove it for 1-Lipschitz smooth functions.
Now it remains to bound the cumulant generating function.
To exploit the Lipschitzness, we introduce <math>Y</math>, an independent copy of <math>X</math>, then by Jensen's inequality,
<math display="block">E[e^{t(f(X)-f(Y))}] = E[e^{tf(X)}]E[e^{-tf(Y)}] \geq E[e^{tf(X)}]e^{-tE[f(Y)]} = E[e^{tf(X)}]</math>
By the circular symmetry of gaussian variables, we introduce <math>X_\theta = Y\cos\theta + X\sin\theta</math>, so that <math>X_0 = Y</math> and <math>X_{\pi/2} = X</math>. This has the benefit that its derivative <math>X_\theta' = -Y\sin\theta + X\cos\theta</math> is independent of <math>X_\theta</math>. Then
<math display="block">\begin{aligned}
e^{t(f(X) - f(Y))}
&=e^{t(f(X_{\pi/2}) - f(X_0))} \\
&= e^{t\int_0^{\pi/2} \nabla f(X_\theta) \cdot X_\theta'd\theta} \\
&= e^{\pi t/2 \int_0^{\pi/2} \nabla f(X_\theta) \cdot X_\theta'\frac{d\theta}{\pi/2}} \\
&\leq \int_0^{\pi/2} e^{\pi t/2 \nabla f(X_\theta) \cdot X_\theta'}\frac{d\theta}{\pi/2}
\end{aligned}</math>
where the last step is Jensen's inequality applied to the probability measure <math>\frac{d\theta}{\pi/2}</math> on <math>[0, \pi/2]</math>.
Now take its expectation,
<math display="block">E[e^{t(f(X) - f(Y))}] \leq \int_0^{\pi/2} E[e^{\pi t/2 \nabla f(X_\theta) \cdot X_\theta'}]\frac{d\theta}{\pi/2}</math>
The expectation within the integral is over the joint distribution of <math>(X_\theta, X_\theta')</math>, but since the joint distribution of <math>(X_\theta, X_\theta')</math> is exactly the same as that of <math>(X, Y)</math>, we have
<math display="block">E[e^{\pi t/2 \nabla f(X_\theta) \cdot X_\theta'}] = E_X[E_Y[e^{\pi t/2 \nabla f(X) \cdot Y}]]</math>
Conditional on <math>X</math>, the quantity <math>\nabla f(X) \cdot Y</math> is normally distributed, with mean zero and variance <math>\|\nabla f(X)\|^2 \leq 1</math>, so
<math display="block">E_Y[e^{\pi t/2 \nabla f(X) \cdot Y}] \leq e^{\frac 12 (\pi t/2)^2} = e^{\frac{\pi^2}{8} t^2}</math>
Thus, we have
<math display="block">\ln E[e^{tf(X)}] \leq \ln E[e^{t(f(X) - f(Y))}] \leq \frac{\pi^2}{8}t^2</math>
and the Chernoff bound, optimized at <math>t = \frac{4s}{\pi^2}</math>, gives <math>\Pr(f(X) \geq s) \leq e^{-ts + \frac{\pi^2}{8}t^2} = e^{-\frac{2}{\pi^2}s^2}</math>.
}}{{hidden end}}
== Strictly subgaussian ==
Expanding the cumulant generating function:
<math display="block">\frac{s^2 \lambda^2}{2} \geq \ln \operatorname{E}[e^{\lambda (X - \operatorname{E}[X])}] = \frac{\operatorname{Var}[X]}{2}\lambda^2 + O(\lambda^3)</math>
we find that <math>\Vert X \Vert_{vp}^2 \geq \operatorname{Var}[X]</math>. At the edge of possibility, we define that a random variable <math>X</math> satisfying <math>\Vert X \Vert_{vp}^2 = \operatorname{Var}[X]</math> is called strictly subgaussian.
=== Properties ===
Theorem.{{cite arXiv |last1=Bobkov |first1=S. G. |title=Strictly subgaussian probability distributions |date=2023-08-03 |eprint=2308.01749 |last2=Chistyakov |first2=G. P. |last3=Götze |first3=F.|class=math.PR }} Let <math>X</math> be a subgaussian random variable with mean zero. If all zeros of its characteristic function are real, then <math>X</math> is strictly subgaussian.

Corollary. If <math>X_1, \dots, X_n</math> are independent and strictly subgaussian, then any linear sum of them is strictly subgaussian.
=== Examples ===
By calculating the characteristic functions, we can show that some distributions are strictly subgaussian: symmetric uniform distribution, symmetric Bernoulli distribution.
Since a symmetric uniform distribution is strictly subgaussian, its convolution with itself is strictly subgaussian. That is, the symmetric triangular distribution is strictly subgaussian.
Since the symmetric Bernoulli distribution is strictly subgaussian, any symmetric Binomial distribution is strictly subgaussian.
== Examples ==
{| class="wikitable"
! !! <math>\Vert X \Vert_{\psi_2}</math> !! <math>\Vert X \Vert_{vp}^2</math> !! strictly subgaussian?
|-
| gaussian distribution <math>\mathcal{N}(0, \sigma^2)</math>
| <math>\sqrt{8/3}\,\sigma</math>
| <math>\sigma^2</math>
| Yes
|-
| mean-zero Bernoulli distribution <math>p\delta_q + q\delta_{-p}</math>
| solution to <math>pe^{q^2/s^2} + qe^{p^2/s^2} = 2</math>
| <math>\frac{q-p}{2\ln(q/p)}</math>
| Iff <math>p = 1/2</math>
|-
| symmetric Bernoulli distribution
| <math>\frac{1}{\sqrt{\ln 2}}</math>
| <math>1</math>
| Yes
|-
| uniform distribution <math>U[-1, 1]</math>
| solution to <math>\int_0^1 e^{x^2/s^2} dx = 2</math>, approximately 0.7727
| <math>1/3</math>
| Yes
|-
| arbitrary distribution on interval <math>[a, b]</math>
|
| <math>\leq \frac{(b-a)^2}{4}</math>
|
|}
The optimal variance proxy is known for many standard probability distributions, including the beta, Bernoulli, Dirichlet{{R|marchal2017}}, Kumaraswamy, triangular{{R|arbel2020}}, truncated Gaussian, and truncated exponential.{{R|barreto2024}}
=== Bernoulli distribution ===
Let <math>p + q = 1</math> be two positive numbers. Let <math>X</math> be a centered Bernoulli random variable <math>p\delta_q + q\delta_{-p}</math>, meaning that <math>X = q</math> with probability <math>p</math> and <math>X = -p</math> with probability <math>q</math>, so that it has mean zero, then <math>\operatorname{Var}[X] = pq</math>. Its subgaussian norm is <math>s</math>, where <math>s</math> is the unique positive solution to <math>pe^{q^2/s^2} + qe^{p^2/s^2} = 2</math>.

Let <math>X</math> be a random variable with symmetric Bernoulli distribution (or Rademacher distribution). That is, <math>X</math> takes values <math>1</math> and <math>-1</math> with probabilities <math>1/2</math> each. Since <math>X^2 = 1</math>, it follows that
<math display="block">\Vert X \Vert_{\psi_2} = \inf\left\{c > 0 : e^{1/c^2} \leq 2 \right\} = \frac{1}{\sqrt{\ln 2}},</math>
and hence <math>X</math> is a subgaussian random variable.
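The companion fact that the Rademacher distribution has optimal variance proxy <math>1</math> rests on the classical inequality <math>\cosh(t) \leq e^{t^2/2}</math>, which can be checked numerically on a grid (the grid here is an arbitrary choice):

```python
import math

# Deterministic check of cosh(t) <= exp(t^2/2), the inequality giving
# variance proxy 1 for the Rademacher distribution: E[exp(tX)] = cosh(t).
for k in range(-200, 201):
    t = k / 10  # grid over [-20, 20]
    assert math.cosh(t) <= math.exp(t**2 / 2)
```

The inequality follows by comparing Taylor series term by term, since <math>(2k)! \geq 2^k k!</math>.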
=== Bounded distributions ===
[[File:Bounded probability distributions, compared with the normal distribution.svg|thumb|Bounded probability distributions, compared with the normal distribution]]
Bounded distributions have no tail at all, so clearly they are subgaussian.
If <math>X</math> is bounded within the interval <math>[a, b]</math>, Hoeffding's lemma states that <math>\Vert X \Vert_{vp}^2 \leq \frac{(b-a)^2}{4}</math>. Hoeffding's inequality is the Chernoff bound obtained using this fact.
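A deterministic check of Hoeffding's lemma in the special case of a Bernoulli variable on <math>\{0, 1\}</math> (so <math>b - a = 1</math> and the lemma gives <math>\operatorname{E}[e^{t(X-p)}] \leq e^{t^2/8}</math>); the grid and tolerance below are arbitrary choices:

```python
import math

# Check Hoeffding's lemma for X ~ Bernoulli(p) on {0, 1}:
#   E[exp(t(X - p))] = p*exp(t(1-p)) + (1-p)*exp(-t*p) <= exp(t^2/8).
def centered_bernoulli_mgf(p, t):
    return p * math.exp(t * (1 - p)) + (1 - p) * math.exp(-t * p)

for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    for k in range(-100, 101):
        t = k / 10  # grid over [-10, 10]
        # small tolerance guards against float rounding at equality (t = 0)
        assert centered_bernoulli_mgf(p, t) <= math.exp(t**2 / 8) + 1e-12
```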
=== Convolutions ===
[[File:Gaussian-mixture-example.svg|thumb]]
Since the sum of subgaussian random variables is still subgaussian, the convolution of subgaussian distributions is still subgaussian. In particular, any convolution of the normal distribution with any bounded distribution is subgaussian.
=== Mixtures ===
Given subgaussian distributions <math>X_1, X_2, \dots, X_n</math>, we can construct an additive mixture <math>X</math> as follows: first randomly pick a number <math>i \in \{1, 2, \dots, n\}</math> with probabilities <math>p_1, p_2, \dots, p_n</math>, then pick <math>X = X_i</math>.
Since <math>\operatorname{E}[e^{X^2/c^2}] = \sum_i p_i \operatorname{E}[e^{X_i^2/c^2}]</math>, taking <math>c = \max_i \Vert X_i \Vert_{\psi_2}</math> gives <math>\operatorname{E}[e^{X^2/c^2}] \leq 2</math>, so <math>\Vert X \Vert_{\psi_2} \leq \max_i \Vert X_i \Vert_{\psi_2}</math>, and so the mixture is subgaussian.
In particular, any gaussian mixture is subgaussian.
More generally, the mixture of infinitely many subgaussian distributions is also subgaussian, if the subgaussian norm has a finite supremum: <math>\sup_i \Vert X_i \Vert_{\psi_2} < \infty</math>.
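A worked sketch of this argument for a specific two-component Gaussian mixture (the components and weights here are arbitrary choices): taking <math>c</math> to be the larger of the two component subgaussian norms makes each component's contribution at most 2, hence the mixture's as well:

```python
import math

# Sketch (not from the article): for the mixture 0.5*N(0,1) + 0.5*N(0,4),
# take c = ||N(0,4)||_psi2 = 2*sqrt(8/3), the larger component norm.
# Each Gaussian component has E[exp(X_i^2/c^2)] = 1/sqrt(1 - 2*var/c^2)
# (valid for c^2 > 2*var), which is <= 2 by the choice of c.
def gaussian_mgf_of_square(var, c):
    return 1 / math.sqrt(1 - 2 * var / c**2)  # requires c^2 > 2*var

c = 2 * math.sqrt(8 / 3)
mixture = 0.5 * gaussian_mgf_of_square(1, c) + 0.5 * gaussian_mgf_of_square(4, c)
assert mixture <= 2  # so the mixture's subgaussian norm is at most c
```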
== Subgaussian random vectors ==
So far, we have discussed subgaussianity for real-valued random variables. We can also define subgaussianity for random vectors. The purpose of subgaussianity is to make the tails decay fast, so we generalize accordingly: a subgaussian random vector is a random vector whose tail decays fast.

Let <math>X</math> be a random vector taking values in <math>\R^n</math>. Define:
- <math>\Vert X \Vert_{\psi_2} := \sup_{v \in S^{n-1}} \Vert \langle X, v \rangle \Vert_{\psi_2}</math>, where <math>S^{n-1}</math> is the unit sphere in <math>\R^n</math>. Similarly for the variance proxy, <math>\Vert X \Vert_{vp}^2 := \sup_{v \in S^{n-1}} \Vert \langle X, v \rangle \Vert_{vp}^2</math>.
- <math>X</math> is subgaussian iff <math>\Vert X \Vert_{\psi_2} < \infty</math>.

Theorem. (Theorem 3.4.6 ) For any positive integer <math>n</math>, the uniformly distributed random vector <math>X \sim U(\sqrt{n}\, S^{n-1})</math> is subgaussian, with <math>\Vert X \Vert_{\psi_2} \lesssim 1</math>.

This is not so surprising, because as <math>n \to \infty</math>, the projection of <math>U(\sqrt{n}\, S^{n-1})</math> to the first coordinate converges in distribution to the standard normal distribution.
== Maximum inequalities ==
{{Math theorem
| math_statement =
If <math>X_1, \dots, X_n</math> are mean-zero subgaussians, with <math>\Vert X_i \Vert_{vp}^2 \leq \sigma^2</math>, then for any <math>\delta \in (0, 1)</math>, we have <math>\max_i X_i \leq \sigma\sqrt{2\ln\frac{n}{\delta}}</math> with probability <math>\geq 1 - \delta</math>.
}}
{{Math proof|title=Proof|proof= By the Chernoff bound, <math>\operatorname{P}\left(X_i \geq \sigma\sqrt{2\ln\frac{n}{\delta}}\right) \leq \frac{\delta}{n}</math>. Now apply the union bound. }}
{{Math theorem
| math_statement = If <math>X_1, \dots, X_n</math> are subgaussians, with <math>\Vert X_i \Vert_{\psi_2} \leq K</math>, then <math display="block">\operatorname{E}\left[\max_i |X_i|\right] \lesssim K\sqrt{\ln n}.</math> Further, the bound is sharp, since when <math>X_1, \dots, X_n</math> are IID samples of <math>\mathcal{N}(0, 1)</math> we have <math>\operatorname{E}[\max_i X_i] \sim \sqrt{2\ln n}</math> as <math>n \to \infty</math>.Kamath, Gautam. "[http://www.gautamkamath.com/writings/gaussian_max.pdf Bounds on the expectation of the maximum of samples from a gaussian]." (2015)
}}
{{Math theorem
| math_statement = If <math>X_1, \dots, X_n</math> are subgaussian, with <math>\Vert X_i \Vert_{vp}^2 \leq \sigma^2</math>, then
<math display="block">\begin{aligned}
E[\max_i (X_i - E[X_i])] \leq \sigma\sqrt{ 2\ln n}, &\quad \Pr(\max_i (X_i- E[X_i]) > t) \leq n e^{-\frac{t^2}{2\sigma^2}}, \\
E[\max_i |X_i - E[X_i]|] \leq \sigma\sqrt{ 2\ln (2n)},
&\quad
\Pr(\max_i |X_i- E[X_i]| > t) \leq 2 n e^{-\frac{t^2}{2\sigma^2}}
\end{aligned}</math>
}}
{{Math proof|title=Proof|proof= For any <math>t>0</math>:
<math display="block">\begin{aligned}
E\!\bigl[\max_{1\le i\le n}(X_i-E[X_i])\bigr]
&=\frac1t\,E\!\Bigl[\ln\max_{i}e^{\,t(X_i-E[X_i])}\Bigr]\\
&\le\frac1t\ln E\!\Bigl[\max_{i}e^{\,t(X_i-E[X_i])}\Bigr] \quad \text{by Jensen}\\
&\le\frac1t\ln\sum_{i=1}^{n}Ee^{t(X_i-E[X_i])}\\
&\le\frac1t\ln\sum_{i=1}^{n}e^{\sigma^{2}t^{2}/2}\quad \text{by def of }\|\cdot\|_{vp}\\
&=\frac{\ln n}{t}+\frac{\sigma^{2}t}{2} \\
&\overset{t=\sqrt{2\ln n}/\sigma}{=}\;\sigma\sqrt{2\ln n}.
\end{aligned}</math>
This is a standard proof structure for proving Chernoff-like bounds for sub-Gaussian variables. For the second inequality, it suffices to prove the case with one variable and zero mean, then use the union bound. First, by Markov's inequality, <math>\Pr(X > t) \leq \frac{E[e^{sX}]}{e^{st}}</math>; then, by the definition of the variance proxy, <math>\Pr(X > t) \leq e^{\frac{\sigma^2 s^2}{2} - st}</math>; and then optimize at <math>s = t/\sigma^2</math>, obtaining <math>\Pr(X > t) \leq e^{-\frac{t^2}{2\sigma^2}}</math>. The bounds for <math>|X_i - E[X_i]|</math> follow by applying the same argument to the <math>2n</math> variables <math>\pm(X_i - E[X_i])</math>.
}}
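A Monte Carlo illustration of the first bound (the sample sizes and seed are arbitrary choices): for standard normal variables, so <math>\sigma = 1</math>, the empirical mean of <math>\max_i X_i</math> stays below <math>\sqrt{2\ln n}</math>:

```python
import math
import random

# Monte Carlo illustration: E[max of n i.i.d. N(0,1)] <= sqrt(2 ln n).
random.seed(0)
n, trials = 256, 2000
bound = math.sqrt(2 * math.log(n))  # sigma = 1
emp = sum(max(random.gauss(0, 1) for _ in range(n)) for _ in range(trials)) / trials
assert emp <= bound
```

The empirical mean is in fact noticeably below the bound, reflecting the lower-order correction terms in the exact asymptotics of the Gaussian maximum.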
{{Math theorem
| math_statement = Fix a finite set of vectors <math>v_1, \dots, v_n</math>. If <math>X</math> is a random vector, such that each <math>\Vert \langle X, v_i \rangle \Vert_{vp}^2 \leq \sigma^2</math>, then the above 4 inequalities hold, with <math>\max_{v \in P} \langle X, v \rangle</math> replacing <math>\max_i X_i</math>. Here, <math>P</math> is the convex polytope hulled by the vectors <math>v_1, \dots, v_n</math>.
| note = over a convex polytope
| name=Corollary
}}
{{Math theorem
| math_statement = If <math>X</math> is a random vector in <math>\R^n</math>, such that <math>\Vert \langle X, v \rangle \Vert_{vp}^2 \leq \sigma^2</math> for all <math>v</math> on the unit sphere <math>S^{n-1}</math>, then <math display="block">\operatorname{E}[\Vert X \Vert_2] \leq 4\sigma\sqrt{n}.</math> For any <math>\delta \in (0, 1)</math>, with probability at least <math>1 - \delta</math>, <math display="block">\Vert X \Vert_2 \leq 4\sigma\sqrt{n} + 2\sigma\sqrt{2\ln(1/\delta)}.</math>
| note = subgaussian random vectors
}}
== Inequalities ==
Theorem. (Theorem 2.6.1 ) There exists a positive constant <math>C</math> such that given any number of independent mean-zero subgaussian random variables <math>X_1, \dots, X_n</math>,
<math display="block">\left\Vert \sum_i X_i \right\Vert_{\psi_2}^2 \leq C \sum_i \Vert X_i \Vert_{\psi_2}^2.</math>

Theorem. (Hoeffding's inequality) (Theorem 2.6.3 ) There exists a positive constant <math>c</math> such that given any number of independent mean-zero subgaussian random variables <math>X_1, \dots, X_n</math>,
<math display="block">\operatorname{P}\left(\left|\sum_i X_i\right| \geq t\right) \leq 2\exp\left(-\frac{ct^2}{\sum_i \Vert X_i \Vert_{\psi_2}^2}\right) \quad \forall t > 0.</math>

Theorem. (Bernstein's inequality) (Theorem 2.8.1 ) There exists a positive constant <math>c</math> such that given any number of independent mean-zero subexponential random variables <math>X_1, \dots, X_n</math>,
<math display="block">\operatorname{P}\left(\left|\sum_i X_i\right| \geq t\right) \leq 2\exp\left[-c\min\left(\frac{t^2}{\sum_i \Vert X_i \Vert_{\psi_1}^2}, \frac{t}{\max_i \Vert X_i \Vert_{\psi_1}}\right)\right] \quad \forall t > 0.</math>

Theorem. (Khinchine inequality) (Exercise 2.6.5 ) There exists a positive constant <math>C</math> such that given any number of independent, mean-zero, variance-one subgaussian random variables <math>X_1, \dots, X_n</math> with <math>\Vert X_i \Vert_{\psi_2} \leq K</math>, any <math>p \geq 2</math>, and any <math>a_1, \dots, a_n \in \R</math>,
<math display="block">\left(\sum_i a_i^2\right)^{1/2} \leq \left(\operatorname{E}\left[\left|\sum_i a_i X_i\right|^p\right]\right)^{1/p} \leq CK\sqrt{p}\left(\sum_i a_i^2\right)^{1/2}.</math>
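For Rademacher summands the Hoeffding-type bound can be checked exactly, since the sum is a shifted binomial; the following uses <math>\sum_i \Vert X_i \Vert_{vp}^2 = n</math>, giving the two-sided bound <math>2e^{-t^2/(2n)}</math> (the value <math>n = 20</math> is an arbitrary choice):

```python
import math

# Exact check of the Hoeffding-type tail bound for a sum S of n
# Rademacher variables:  P(|S| >= t) <= 2 * exp(-t^2 / (2n)).
# S = 2B - n where B ~ Binomial(n, 1/2), so the tail is a finite sum.
n = 20

def tail(t):
    # P(|S| >= t), computed exactly from binomial coefficients
    return sum(math.comb(n, k) for k in range(n + 1) if abs(2 * k - n) >= t) / 2**n

for t in range(0, n + 1, 2):
    assert tail(t) <= 2 * math.exp(-t * t / (2 * n))
```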
== Hanson-Wright inequality ==
The Hanson-Wright inequality states that if a random vector <math>X</math> is subgaussian in a certain sense, then any quadratic form <math>A</math> of this vector, <math>X^T A X</math>, is also subgaussian or subexponential. Further, the upper bound on the tail of <math>X^T A X</math> is uniform, in that it depends on the matrix <math>A</math> only through its Frobenius and operator norms.
A weak version of the following theorem was proved in (Hanson, Wright, 1971).{{Cite journal |last1=Hanson |first1=D. L. |last2=Wright |first2=F. T. |date=1971 |title=A Bound on Tail Probabilities for Quadratic Forms in Independent Random Variables |journal=The Annals of Mathematical Statistics |volume=42 |issue=3 |pages=1079–1083 |doi=10.1214/aoms/1177693335 |jstor=2240253 |issn=0003-4851|doi-access=free }} There are many extensions and variants. Much like the central limit theorem, the Hanson-Wright inequality is more a cluster of theorems with the same purpose, than a single theorem. The purpose is to take a subgaussian vector and uniformly bound its quadratic forms.
Theorem.{{Cite journal |last1=Rudelson |first1=Mark |last2=Vershynin |first2=Roman |date=January 2013 |title=Hanson-Wright inequality and sub-gaussian concentration |url=https://projecteuclid.org/journals/electronic-communications-in-probability/volume-18/issue-none/Hanson-Wright-inequality-and-sub-gaussian-concentration/10.1214/ECP.v18-2865.full |journal=Electronic Communications in Probability |volume=18 |issue=none |pages=1–9 |doi=10.1214/ECP.v18-2865 |issn=1083-589X|arxiv=1306.2872 }}{{Cite book |last=Vershynin |first=Roman |url=https://www.cambridge.org/core/books/highdimensional-probability/797C466DA29743D2C8213493BD2D2102 |title=High-Dimensional Probability: An Introduction with Applications in Data Science |date=2018 |publisher=Cambridge University Press |isbn=978-1-108-41519-4 |series=Cambridge Series in Statistical and Probabilistic Mathematics |location=Cambridge |chapter=6. Quadratic Forms, Symmetrization, and Contraction |pages=127–146 |doi=10.1017/9781108231596.009 |chapter-url=https://doi.org/10.1017/9781108231596.009}} There exists a constant <math>c</math>, such that:
Let <math>n</math> be a positive integer. Let <math>X_1, \dots, X_n</math> be independent random variables, such that each satisfies <math>\operatorname{E}[X_i] = 0</math> and <math>\Vert X_i \Vert_{\psi_2} \leq K</math>. Combine them into a random vector <math>X = (X_1, \ldots, X_n)</math>. For any <math>n \times n</math> matrix <math>A</math>, we have
<math display="block">\operatorname{P}\left(\left|X^T A X - \operatorname{E}[X^T A X]\right| > t\right) \leq 2 \exp \left[-c \min \left(\frac{t^2}{K^4\|A\|_F^2}, \frac{t}{K^2\|A\|}\right)\right]</math>
for all <math>t \geq 0</math>, where <math>\|A\|_F = \left(\sum_{i,j} |A_{ij}|^2\right)^{1/2}</math> is the Frobenius norm of the matrix, and <math>\|A\| = \max_{\|x\|_2 = 1} \|Ax\|_2</math> is the operator norm of the matrix.
In words, the quadratic form <math>X^T A X</math> has its tail uniformly bounded by an exponential, or a gaussian, whichever is larger.
In the statement of the theorem, the constant <math>c</math> is an "absolute constant", meaning that it has no dependence on <math>n</math>, the matrix <math>A</math>, or the distribution of the <math>X_i</math>. It is a mathematical constant much like pi and e.
=== Consequences ===
Theorem (subgaussian concentration). There exists a constant <math>C</math>, such that:
Let <math>n, m</math> be positive integers. Let <math>X_1, \dots, X_n</math> be independent random variables, such that each satisfies <math>\operatorname{E}[X_i] = 0, \operatorname{E}[X_i^2] = 1, \Vert X_i \Vert_{\psi_2} \leq K</math>. Combine them into a random vector <math>X = (X_1, \ldots, X_n)</math>. For any <math>m \times n</math> matrix <math>A</math>, we have
<math display="block">\left\Vert\, \Vert AX \Vert_2 - \Vert A \Vert_F \,\right\Vert_{\psi_2} \leq CK^2 \Vert A \Vert.</math>
In words, the random vector <math>AX</math> is concentrated on a spherical shell of radius <math>\Vert A \Vert_F</math>, such that <math>\Vert AX \Vert_2 - \Vert A \Vert_F</math> is subgaussian, with subgaussian norm <math>\leq CK^2 \Vert A \Vert</math>.
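The radius <math>\Vert A \Vert_F</math> of the shell comes from the identity <math>\operatorname{E}\Vert AX \Vert_2^2 = \Vert A \Vert_F^2</math> for isotropic <math>X</math>, which a short Monte Carlo simulation can illustrate (the dimension, seed, and tolerance are arbitrary choices):

```python
import math
import random

# Monte Carlo sketch: for X with independent Rademacher coordinates
# (isotropic, variance one), E||AX||_2^2 equals the squared Frobenius
# norm ||A||_F^2, the squared radius of the concentration shell.
random.seed(1)
n = 8
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
frob_sq = sum(a * a for row in A for a in row)

trials = 20000
acc = 0.0
for _ in range(trials):
    x = [random.choice((-1.0, 1.0)) for _ in range(n)]
    y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # y = A x
    acc += sum(v * v for v in y)  # ||Ax||_2^2
emp = acc / trials
assert abs(emp - frob_sq) / frob_sq < 0.1
```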
== See also ==

== Notes ==
{{reflist|refs=
}}
== References ==
- {{cite journal
|first1=J.P. |last1=Kahane
|title=Propriétés locales des fonctions à séries de Fourier aléatoires
|journal=Studia Mathematica
|volume=19
|pages=1–25
|year=1960
|doi=10.4064/sm-19-1-1-25 | doi-access=free
}}
- {{Cite book |last=Tao |first=Terence |title=Topics in random matrix theory |date=2012 |publisher=American Mathematical Society |isbn=978-0-8218-7430-1 |series=Graduate studies in mathematics |location=Providence, R.I}}
- {{Cite journal |last=Matoušek |first=Jiří |date=September 2008 |title=On variants of the Johnson–Lindenstrauss lemma |url=https://onlinelibrary.wiley.com/doi/10.1002/rsa.20218 |journal=Random Structures & Algorithms |language=en |volume=33 |issue=2 |pages=142–156 |doi=10.1002/rsa.20218 |issn=1042-9832|url-access=subscription }}
- {{cite journal
|first1=V.V. |last1=Buldygin
|first2=Yu.V. |last2=Kozachenko
|title=Sub-Gaussian random variables
|journal=Ukrainian Mathematical Journal
|volume=32
|pages=483–489
|year=1980
|issue=6
|doi=10.1007/BF01087176 | doi-access=
}}
- {{cite book
| last1 = Ledoux | first1 = Michel
| last2 = Talagrand | first2 = Michel
| title = Probability in Banach Spaces
| year = 1991
| publisher = Springer-Verlag
}}
- {{cite book
| last1 = Stromberg | first1 = K.R.
| title = Probability for Analysts
| year = 1994
| publisher = Chapman & Hall/CRC
}}
- {{cite journal
| first1=A.E. | last1=Litvak
| first2=A. | last2=Pajor
| first3=M. | last3=Rudelson
| first4=N. | last4=Tomczak-Jaegermann
| title=Smallest singular value of random matrices and geometry of random polytopes
| journal=Advances in Mathematics
| volume=195
| pages=491–523
| year=2005
| issue=2
| doi=10.1016/j.aim.2004.08.004 | doi-access=free
| url = http://www.math.ualberta.ca/~alexandr/OrganizedPapers/lprtlastlast.pdf
}}
- {{cite conference
| first1=Mark | last1=Rudelson
| first2=Roman | last2=Vershynin
| title=Non-asymptotic theory of random matrices: extreme singular values
| book-title=Proceedings of the International Congress of Mathematicians 2010
| pages=1576–1602
| arxiv=1003.2990
| doi=10.1142/9789814324359_0111
| year=2010
}}
- {{cite news
| first1=O. | last1=Rivasplata
| title=Subgaussian random variables: An expository note
| journal=Unpublished
| year=2012
| url=http://www.stat.cmu.edu/~arinaldo/36788/subgaussians.pdf
}}
- Vershynin, R. (2018). [https://www.math.uci.edu/~rvershyn/papers/HDP-book/HDP-book.pdf "High-dimensional probability: An introduction with applications in data science"] (PDF). Volume 47 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge.
- Zajkowski, K. (2020). "On norms in some class of exponential type Orlicz spaces of random variables". ''Positivity'' 24(5): 1231–1240. {{arxiv|1709.02970}}. {{doi|10.1007/s11117-019-00729-6}}.