Empirical measure


In probability theory, an empirical measure is a random measure arising from a particular realization of a (usually finite) sequence of random variables. The precise definition is found below. Empirical measures are relevant to mathematical statistics.

The motivation for studying empirical measures is that it is often impossible to know the true underlying probability measure P. We collect observations X_1, X_2, \dots , X_n and compute relative frequencies. We can estimate P, or a related distribution function F by means of the empirical measure or empirical distribution function, respectively. These are uniformly good estimates under certain conditions. Theorems in the area of empirical processes provide rates of this convergence.

Definition

Let X_1, X_2, \dots be a sequence of independent identically distributed random variables with values in the state space S with probability distribution P.

Definition

:The empirical measure P_n is defined for measurable subsets of S and given by

::P_n(A) = {1 \over n} \sum_{i=1}^n I_A(X_i)=\frac{1}{n}\sum_{i=1}^n \delta_{X_i}(A)

::where I_A is the indicator function and \delta_X is the Dirac measure.
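The definition above can be illustrated with a short numerical sketch. The distribution (Uniform(0, 1)) and the set A = [0, 0.3] are illustrative choices, not part of the definition:

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_measure(sample, A):
    """P_n(A): the fraction of sample points falling in the set A.

    The set A is passed as a boolean predicate, i.e. its indicator function.
    """
    sample = np.asarray(sample)
    return np.mean([1.0 if A(x) else 0.0 for x in sample])

# Illustrative draw: X_i ~ Uniform(0, 1), A = [0, 0.3], so P(A) = 0.3.
sample = rng.uniform(0, 1, size=10_000)
p_hat = empirical_measure(sample, lambda x: x <= 0.3)  # close to 0.3
```

By the law of large numbers, p_hat concentrates around P(A) = 0.3 as the sample size grows.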

Properties

  • For a fixed measurable set A, nP_n(A) is a binomial random variable with mean nP(A) and variance nP(A)(1 − P(A)).
  • In particular, P_n(A) is an unbiased estimator of P(A).
  • For a fixed partition A_1, \dots, A_k of S, the random variables Y_i=nP_n(A_i) form a multinomial distribution with event probabilities P(A_i).
  • The covariance matrix of this multinomial distribution is \operatorname{Cov}(Y_i,Y_j)=nP(A_i)(\delta_{ij}-P(A_j)).
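The binomial property can be checked empirically. The following sketch (with an illustrative choice of distribution and set, not part of the theory) repeats the experiment many times and compares the sample mean and variance of nP_n(A) with nP(A) and nP(A)(1 − P(A)):

```python
import numpy as np

rng = np.random.default_rng(1)

n, p_A = 500, 0.3     # sample size and P(A) for the fixed set A = [0, 0.3]
trials = 20_000       # independent repetitions of the experiment

# Each repetition: draw X_1, ..., X_n ~ Uniform(0, 1) and count hits in A.
# The count equals n * P_n(A) for that realization.
counts = (rng.uniform(size=(trials, n)) <= p_A).sum(axis=1)

mean_hat = counts.mean()  # should be near n * P(A) = 150
var_hat = counts.var()    # should be near n * P(A) * (1 - P(A)) = 105
```

The agreement reflects the fact that each indicator I_A(X_i) is a Bernoulli(P(A)) variable, so their sum is Binomial(n, P(A)).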

Definition

:\bigl(P_n(c)\bigr)_{c\in\mathcal{C}} is the empirical measure indexed by \mathcal{C}, a collection of measurable subsets of S.

To generalize this notion further, observe that the empirical measure P_n maps measurable functions f:S\to \mathbb{R} to their empirical mean,

:f\mapsto P_n f=\int_S f \, dP_n=\frac{1}{n}\sum_{i=1}^n f(X_i)

In particular, the empirical measure of A is simply the empirical mean of the indicator function, P_n(A) = P_n I_A.

For a fixed measurable function f, P_nf is a random variable with mean \mathbb{E}f and variance \frac{1}{n}\mathbb{E}(f -\mathbb{E} f)^2.
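The functional view of the empirical measure can be sketched numerically. Here the distribution (Exponential(1)) and the function f(x) = x² are illustrative; for this choice \mathbb{E}f = \mathbb{E}X^2 = 2:

```python
import numpy as np

rng = np.random.default_rng(2)

def empirical_mean(sample, f):
    """P_n f = (1/n) * sum_i f(X_i), the empirical mean of f."""
    return np.mean(f(np.asarray(sample)))

# X_i ~ Exponential(1); for f(x) = x^2 the true mean is E[X^2] = 2.
sample = rng.exponential(scale=1.0, size=100_000)
pn_f = empirical_mean(sample, lambda x: x**2)  # close to 2
```

As the text notes, P_n f is itself a random variable; rerunning the draw with a different seed gives a slightly different value, fluctuating around \mathbb{E}f with variance of order 1/n.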

By the strong law of large numbers, P_n(A) converges to P(A) almost surely for fixed A. Similarly P_nf converges to \mathbb{E} f almost surely for a fixed measurable function f. The problem of uniform convergence of P_n to P was open until Vapnik and Chervonenkis solved it in 1968.{{cite journal|last=Vapnik|first=V.|author2=Chervonenkis, A|title=Uniform convergence of frequencies of occurrence of events to their probabilities|journal=Dokl. Akad. Nauk SSSR|year=1968|volume=181}}

If the class \mathcal{C} (or \mathcal{F}) is Glivenko–Cantelli with respect to P then P_n converges to P uniformly over c\in\mathcal{C} (or f\in \mathcal{F}). In other words, with probability 1 we have

:\|P_n-P\|_\mathcal{C}=\sup_{c\in\mathcal{C}}|P_n(c)-P(c)|\to 0,

:\|P_n-P\|_\mathcal{F}=\sup_{f\in\mathcal{F}}|P_nf-\mathbb{E}f|\to 0.

Empirical distribution function

{{main|Empirical distribution function}}

The empirical distribution function provides an example of empirical measures. For real-valued iid random variables X_1,\dots,X_n it is given by

:F_n(x)=P_n((-\infty,x])=P_nI_{(-\infty,x]}.

In this case, empirical measures are indexed by a class \mathcal{C}=\{(-\infty,x]:x\in\mathbb{R}\}. It has been shown that \mathcal{C} is a uniform Glivenko–Cantelli class, in particular,

:\sup_F\|F_n-F\|_\infty\to 0

with probability 1.
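The Glivenko–Cantelli convergence for this class can be observed numerically. The sketch below computes \sup_x|F_n(x)-F(x)| exactly for a Uniform(0, 1) sample (an illustrative choice, for which F(x) = x), using the fact that the supremum of the ECDF deviation is attained at the jump points:

```python
import numpy as np

rng = np.random.default_rng(3)

def ecdf_sup_distance(sample):
    """sup_x |F_n(x) - F(x)| for X_i ~ Uniform(0, 1), where F(x) = x.

    The ECDF is a step function, so the supremum is attained just before
    or at one of the order statistics X_(1) <= ... <= X_(n).
    """
    x = np.sort(np.asarray(sample))
    n = len(x)
    upper = np.arange(1, n + 1) / n - x  # F_n(X_(i)) - F(X_(i))
    lower = x - np.arange(0, n) / n      # F(X_(i)) - F_n(X_(i)-)
    return max(upper.max(), lower.max())

# The uniform distance shrinks as n grows, as Glivenko–Cantelli predicts.
d_small = ecdf_sup_distance(rng.uniform(size=100))
d_large = ecdf_sup_distance(rng.uniform(size=100_000))
```

The quantity computed here is the Kolmogorov–Smirnov statistic; the Dvoretzky–Kiefer–Wolfowitz inequality quantifies how fast it shrinks, at rate of order 1/\sqrt{n}.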


References

{{Reflist}}

Further reading

  • {{cite book |first=P. |last=Billingsley |title=Probability and Measure |publisher=John Wiley and Sons |location=New York |edition=Third |year=1995 |isbn=0-471-80478-9 }}
  • {{cite journal |first=M. D. |last=Donsker |title=Justification and extension of Doob's heuristic approach to the Kolmogorov–Smirnov theorems |journal=Annals of Mathematical Statistics |volume=23 |issue=2 |pages=277–281 |year=1952 |doi=10.1214/aoms/1177729445 |doi-access=free }}
  • {{cite journal |first=R. M. |last=Dudley |title=Central limit theorems for empirical measures |journal=Annals of Probability |volume=6 |issue=6 |pages=899–929 |year=1978 |jstor=2243028 |doi=10.1214/aop/1176995384|doi-access=free }}
  • {{cite book |first=R. M. |last=Dudley |title=Uniform Central Limit Theorems |series=Cambridge Studies in Advanced Mathematics |volume=63 |publisher=Cambridge University Press |location=Cambridge, UK |year=1999 |isbn=0-521-46102-2 }}
  • {{cite journal |first=J. |last=Wolfowitz |title=Generalization of the theorem of Glivenko–Cantelli |journal=Annals of Mathematical Statistics |volume=25 |issue=1 |pages=131–138 |year=1954 |jstor=2236518 |doi=10.1214/aoms/1177728852|doi-access=free }}

Category:Measures (measure theory)

Category:Empirical process