Multidimensional Chebyshev's inequality#Infinite dimensions
In probability theory, the multidimensional Chebyshev's inequality{{Cite journal |last1=Marshall |first1=Albert W. |last2=Olkin |first2=Ingram |date=December 1960 |title=Multivariate Chebyshev Inequalities |url=https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-31/issue-4/Multivariate-Chebyshev-Inequalities/10.1214/aoms/1177705673.full |journal=The Annals of Mathematical Statistics |volume=31 |issue=4 |pages=1001–1014 |doi=10.1214/aoms/1177705673 |issn=0003-4851}} is a generalization of Chebyshev's inequality, which bounds the probability that a random variable differs from its expected value by more than a specified amount.
Let {{mvar|X}} be an {{mvar|N}}-dimensional random vector with expected value {{math|''μ'' {{=}} E[''X'']}} and covariance matrix
:
V = \operatorname{E} [(X - \mu)(X - \mu)^T].
If {{mvar|V}} is a positive-definite matrix, then for any real number {{math|''t'' > 0}}:
:
\Pr \left( \sqrt{( X-\mu)^T V^{-1} (X-\mu) } > t\right) \le \frac N {t^2}
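For {{math|''N'' {{=}} 1}}, this reduces to the classical Chebyshev inequality. As an informal illustration (a Monte Carlo sketch constructed for exposition, not taken from the cited sources; the dimension, mean, and covariance below are arbitrary choices), the bound can be checked empirically with NumPy:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
N = 3                                    # dimension of the random vector
mu = np.array([1.0, -2.0, 0.5])          # expected value (arbitrary choice)
A = rng.standard_normal((N, N))
V = A @ A.T + N * np.eye(N)              # a positive-definite covariance matrix

# Sample X ~ Normal(mu, V) and compute the Mahalanobis-type distance
# sqrt((X - mu)^T V^{-1} (X - mu)) for each sample.
X = rng.multivariate_normal(mu, V, size=200_000)
D = X - mu
d = np.sqrt(np.einsum("ij,jk,ik->i", D, np.linalg.inv(V), D))

for t in [2.0, 3.0, 5.0]:
    # The empirical tail frequency should never exceed N / t^2.
    print(f"t={t}: Pr(d > t) ~ {np.mean(d > t):.4f} <= N/t^2 = {N / t**2:.4f}")
</syntaxhighlight>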
Proof
Since {{mvar|V}} is positive-definite, so is {{math|''V''<sup>−1</sup>}}. Define the random variable
:
y = (X-\mu)^T V^{-1} (X-\mu).
Since {{mvar|y}} is nonnegative, Markov's inequality holds:
:
\Pr\left( \sqrt{(X-\mu)^T V^{-1} (X-\mu) } > t\right) = \Pr( \sqrt{y} > t) = \Pr(y > t^2)
\le \frac{\operatorname{E}[y]}{t^2}.
Finally,
:
\begin{align}
\operatorname{E}[y] &= \operatorname{E}[(X-\mu)^T V^{-1} (X-\mu)]\\[6pt]
&= \operatorname{E}[ \operatorname{trace} ( V^{-1} (X-\mu) (X-\mu)^T )]\\[6pt]
&= \operatorname{trace} ( V^{-1} \operatorname{E}[(X-\mu) (X-\mu)^T] )\\[6pt]
&= \operatorname{trace} ( V^{-1} V ) = N,
\end{align}
where the second equality uses the cyclic property of the trace and the third exchanges the trace with the expectation.{{cite arXiv|last=Navarro |first=Jorge |title=A simple proof for the multivariate Chebyshev inequality |date=2013-05-24 |class=math.ST |eprint=1305.5646 }}
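As a quick sanity check of the identity {{math|E[''y''] {{=}} ''N''}} (again an illustrative sketch rather than part of the cited proof; the distribution and covariance are arbitrary choices):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N = 4                                    # dimension (arbitrary choice)
A = rng.standard_normal((N, N))
V = A @ A.T + np.eye(N)                  # positive-definite covariance
X = rng.multivariate_normal(np.zeros(N), V, size=500_000)

# y = (X - mu)^T V^{-1} (X - mu) with mu = 0; its sample mean should be near N.
y = np.einsum("ij,jk,ik->i", X, np.linalg.inv(V), X)
print(y.mean())                          # ~ 4.0
</syntaxhighlight>
Note that this identity holds for any distribution with the given mean and covariance, not just the normal distribution used here.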
Infinite dimensions
There is a straightforward extension of the vector version of Chebyshev's inequality to infinite-dimensional settings.{{Cite book |last1=Altomare |first1=Francesco |last2=Campiti |first2=Michele |date=1994 |title=Korovkin-type Approximation Theory and Its Applications |url=https://doi.org/10.1515/9783110884586 |publisher=De Gruyter |language=en |page=313 |doi=10.1515/9783110884586 |isbn=978-3-11-014178-8 }} Let {{mvar|X}} be a random variable which takes values in a Fréchet space (equipped with seminorms {{math|{{!!}} ⋅ {{!!}}α}}). This includes most common settings of vector-valued random variables, e.g., when the underlying space is a Banach space (equipped with a single norm), a Hilbert space, or the finite-dimensional setting as described above.
Suppose that {{mvar|X}} is of "strong order two", meaning that
:
\operatorname{E}\left( \|X\|_\alpha^2 \right) < \infty
for every seminorm {{math|{{!!}} ⋅ {{!!}}α}}. This is a generalization of the requirement that {{mvar|X}} have finite variance, and is necessary for this strong form of Chebyshev's inequality in infinite dimensions. The terminology "strong order two" is due to Vakhania.{{cite book |last=Vakhania |first=Nikolai Nikolaevich |title=Probability distributions on linear spaces |location=New York |publisher=North Holland |year=1981}}
Let {{mvar|μ}} be the Pettis integral of {{mvar|X}} (i.e., the vector generalization of the mean), and let
:
\sigma_\alpha := \sqrt{\operatorname{E}\|X - \mu\|_\alpha^2}
be the standard deviation with respect to the seminorm {{math|{{!!}} ⋅ {{!!}}α}}. In this setting we can state the following:
:General version of Chebyshev's inequality. For every {{math|''k'' > 0}}:
:
\Pr\left( \|X - \mu\|_\alpha \ge k \sigma_\alpha \right) \le \frac{1}{k^2}.
Proof. The proof is straightforward, and essentially the same as the finitary version. If {{math|σα {{=}} 0}}, then {{mvar|X}} is constant (and equal to {{mvar|μ}}) almost surely, so the inequality is trivial.
If
:
\|X - \mu\|_\alpha \ge k \sigma_\alpha,
then {{math|{{!!}}X − μ{{!!}}α > 0}}, so we may safely divide by {{math|{{!!}}X − μ{{!!}}α}}. The crucial trick in Chebyshev's inequality is to recognize that on this event
:
1 = \frac{\|X - \mu\|_\alpha^2}{\|X - \mu\|_\alpha^2}.
The following calculations complete the proof:
:
\begin{align}
\Pr\left( \|X - \mu\|_\alpha \ge k \sigma_\alpha \right) &= \int_\Omega \mathbf{1}_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \, \mathrm d\Pr \\
& = \int_\Omega \left ( \frac{\|X - \mu\|_\alpha^2}{\|X - \mu\|_\alpha^2} \right ) \cdot \mathbf{1}_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \, \mathrm d\Pr \\[6pt]
&\le \int_\Omega \left (\frac{\|X - \mu\|_\alpha^2}{(k\sigma_\alpha)^2} \right ) \cdot \mathbf{1}_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \, \mathrm d\Pr \\[6pt]
&\le \frac{1}{k^2 \sigma_\alpha^2} \int_\Omega \|X - \mu\|_\alpha^2 \, \mathrm d\Pr && \mathbf{1}_{\|X - \mu\|_\alpha \ge k \sigma_\alpha} \le 1\\[6pt]
&= \frac{1}{k^2 \sigma_\alpha^2} \left (\operatorname{E}\|X - \mu\|_\alpha^2 \right )\\[6pt]
&= \frac{1}{k^2 \sigma_\alpha^2} \left (\sigma_\alpha^2 \right )\\[6pt]
&= \frac{1}{k^2}
\end{align}
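To illustrate the infinite-dimensional statement (a sketch constructed for exposition, not drawn from the cited references), one can simulate a random element of the Hilbert space {{math|''L''<sup>2</sup>[0, 1]}} truncated to finitely many Fourier modes; the {{math|1/''j''}} coefficient decay is an arbitrary choice that makes {{mvar|X}} of strong order two:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

# X(s) = sum_j (Z_j / j) * sin(j*pi*s) on [0, 1], truncated to 50 modes,
# with Z_j i.i.d. standard normal.  Here mu = 0 (the zero function).
modes = np.arange(1, 51)
Z = rng.standard_normal((100_000, modes.size))
coeffs = Z / modes                       # 1/j decay keeps E||X||^2 finite

# Squared L^2 norm via Parseval: ||X||^2 = (1/2) * sum_j coeffs_j^2,
# since each basis function sin(j*pi*s) has squared L^2 norm 1/2.
norm_sq = 0.5 * (coeffs ** 2).sum(axis=1)

# Exact sigma for this construction: E||X||^2 = (1/2) * sum_j 1/j^2.
sigma = np.sqrt(0.5 * np.sum(1.0 / modes ** 2))

for k in [2.0, 3.0, 5.0]:
    empirical = np.mean(np.sqrt(norm_sq) >= k * sigma)
    print(f"k={k}: Pr(||X|| >= k*sigma) ~ {empirical:.4f} <= 1/k^2 = {1 / k**2:.4f}")
</syntaxhighlight>
The empirical tail frequencies stay below {{math|1/''k''<sup>2</sup>}}, as the inequality guarantees for any seminorm with respect to which {{mvar|X}} has finite second moment.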