Uniform integrability

{{Short description|Mathematical concept}}

In mathematics, uniform integrability is an important concept in real analysis, functional analysis and measure theory, and plays a vital role in the theory of martingales.

Measure-theoretic definition

Uniform integrability is an extension of the notion of a family of functions being dominated in L^1, which is central to the dominated convergence theorem.

Several textbooks on real analysis and measure theory use the following definition:{{cite book|first=Walter|last=Rudin|authorlink=Walter Rudin|year=1987|title=Real and Complex Analysis|edition=3|publisher=McGraw–Hill Book Co.|location=Singapore|page=133|isbn=0-07-054234-1}}{{cite book|author1=Royden, H.L. |author2=Fitzpatrick, P.M. |name-list-style=amp |year=2010|title=Real Analysis|edition=4|publisher=Prentice Hall|location=Boston|page=93|isbn=978-0-13-143747-0}}

Definition A: Let (X,\mathfrak{M}, \mu) be a positive measure space. A set \Phi\subset L^1(\mu) is called uniformly integrable if \sup_{f\in\Phi}\|f\|_{L_1(\mu)}<\infty, and to each \varepsilon>0 there corresponds a \delta>0 such that

: \int_E |f| \, d\mu < \varepsilon

whenever f \in \Phi and \mu(E)<\delta.
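Every finite subset of L^1(\mu) satisfies Definition A: the norm bound is immediate, and for a single integrable f, given \varepsilon>0, dominated convergence provides an a>0 with \int(|f|-a)^+\,d\mu<\varepsilon/2, after which

: \int_E |f| \, d\mu \le \int (|f|-a)^+ \, d\mu + a\,\mu(E) < \varepsilon

whenever \mu(E)<\delta:=\varepsilon/(2a). Uniform integrability requires that a single \delta can be chosen to work simultaneously for every f\in\Phi.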

Definition A is rather restrictive for infinite measure spaces. A more general definition{{cite book|first=G. A.|last=Hunt|year=1966|title=Martingales et Processus de Markov|publisher=Dunod|location=Paris|page=254}} of uniform integrability that works well on general measure spaces was introduced by G. A. Hunt.

Definition H: Let (X,\mathfrak{M},\mu) be a positive measure space. A set \Phi\subset L^1(\mu) is called uniformly integrable if and only if

: \inf_{g\in L^1_+(\mu)}\sup_{f\in\Phi}\int_{\{|f|>g\}}|f|\, d\mu=0

where L^1_+(\mu)=\{g\in L^1(\mu): g\geq0\} .

Since Hunt's definition is equivalent to Definition A when the underlying measure space is finite (see Theorem 2 below), Definition H is widely adopted in mathematics.

The following result{{cite book|first=A.|last=Klenke|year=2008|title=Probability Theory: A Comprehensive Course|publisher=Springer Verlag|location=Berlin|isbn= 978-1-84800-047-6|pages=134–137}} provides another equivalent notion to Hunt's. This equivalence is sometimes given as the definition of uniform integrability.

Theorem 1: If (X,\mathfrak{M},\mu) is a (positive) measure space, then a set \Phi\subset L^1(\mu) is uniformly integrable if and only if

: \inf_{g\in L^1_+(\mu)}\sup_{f\in\Phi}\int (|f|- g)^+ \, d\mu=0

If in addition \mu(X)<\infty, then uniform integrability is equivalent to either of the following conditions

1. \inf_{a>0}\sup_{f\in \Phi}\int(|f|-a)^+\,d\mu =0.

2. \inf_{a>0}\sup_{f\in \Phi}\int_{\{|f|>a\}}|f|\,d\mu=0.
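The equivalence of conditions 1 and 2 follows from the elementary estimates

: (|f|-a)^+ \le |f|\,I_{\{|f|>a\}} \qquad\text{and}\qquad \int_{\{|f|>2a\}}|f|\,d\mu \le 2\int(|f|-a)^+\,d\mu,

the second inequality being a consequence of |f|-a>|f|/2 on the set \{|f|>2a\}; the first shows that condition 2 implies condition 1, and the second gives the converse.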

When the underlying space (X,\mathfrak{M},\mu) is \sigma -finite, Hunt's definition is equivalent to the following:

Theorem 2: Let (X,\mathfrak{M},\mu) be a \sigma-finite measure space, and h\in L^1(\mu) be such that h>0 almost everywhere. A set \Phi\subset L^1(\mu) is uniformly integrable if and only if \sup_{f\in\Phi}\|f\|_{L_1(\mu)}<\infty , and for any \varepsilon>0 , there exists \delta>0 such that

: \sup_{f\in\Phi}\int_A|f|\, d\mu <\varepsilon

whenever \int_A h\,d\mu <\delta .

A consequence of Theorems 1 and 2 is that the equivalence of Definitions A and H holds for finite measures. Indeed, the statement in Definition A is obtained by taking h\equiv1 in Theorem 2.

Probability definition

In the theory of probability, Definition A or the statement of Theorem 1 is often presented as the definition of uniform integrability, stated in terms of expectations of random variables.{{cite book|last=Williams|first=David|title=Probability with Martingales|year=1997|publisher=Cambridge Univ. Press.|location=Cambridge|isbn=978-0-521-40605-5|pages=126–132|edition=Repr.}}{{cite book|last=Gut|first=Allan|title=Probability: A Graduate Course|year=2005|publisher=Springer|isbn=0-387-22833-0|pages=214–218}}{{cite book|last=Bass|first=Richard F.|title=Stochastic Processes|year=2011|publisher=Cambridge University Press|location=Cambridge|isbn=978-1-107-00800-7|pages=356–357}} That is,

1. A class \mathcal{C} of random variables is called uniformly integrable if:

  • There exists a finite M such that, for every X in \mathcal{C}, \operatorname E(|X|)\leq M and
  • For every \varepsilon > 0 there exists \delta > 0 such that, for every measurable A such that P(A)\leq \delta and every X in \mathcal{C}, \operatorname E(|X|I_A)\leq\varepsilon.

or alternatively

2. A class \mathcal{C} of random variables is called uniformly integrable (UI) if for every \varepsilon > 0 there exists K\in[0,\infty) such that \operatorname E(|X|I_{|X|\geq K})\le\varepsilon\ \text{ for all } X \in \mathcal{C}, where I_{|X|\geq K} is the indicator function I_{|X|\geq K} = \begin{cases} 1 &\text{if } |X|\geq K, \\ 0 &\text{if } |X| < K. \end{cases}.
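The two formulations are equivalent. Indeed, if K is chosen as in Definition 2, then for every measurable A and every X\in\mathcal{C},

: \operatorname E(|X|)\le K+\varepsilon \qquad\text{and}\qquad \operatorname E(|X|I_A)\le \operatorname E\!\left(|X|I_{\{|X|\geq K\}}\right)+K\,P(A)\le \varepsilon+K\,P(A),

so Definition 1 holds with M=K+\varepsilon and \delta=\varepsilon/K (up to replacing \varepsilon by 2\varepsilon). Conversely, under Definition 1, Markov's inequality gives P(|X|\geq K)\le M/K\le\delta once K\ge M/\delta, and hence \operatorname E(|X|I_{\{|X|\geq K\}})\le\varepsilon for all X\in\mathcal{C}.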

Tightness and uniform integrability

Another concept associated with uniform integrability is that of tightness. In this article tightness is taken in a more general setting.

Definition: Let (X,\mathfrak{M},\mu) be a measure space. Let \mathcal{K}\subset\mathfrak{M} be a collection of sets of finite measure. A family \Phi\subset L_1(\mu) is tight with respect to \mathcal{K} if

: \inf_{K\in\mathcal{K}}\sup_{f\in\Phi}\int_{X\setminus K}|f|\,d\mu=0

A family that is tight with respect to the collection \mathcal{K} of all measurable sets of finite measure is simply said to be tight.

When the measure space (X,\mathfrak{M},\mu) is a metric space equipped with the Borel \sigma-algebra, \mu is a regular measure, and \mathcal{K} is the collection of all compact subsets of X, the notion of \mathcal{K}-tightness discussed above coincides with the well-known concept of tightness used in the analysis of regular measures in metric spaces.

For \sigma-finite measure spaces, it can be shown that if a family \Phi\subset L_1(\mu) is uniformly integrable, then \Phi is tight. This is captured by the following result, which is often used as a definition of uniform integrability in the analysis literature:

Theorem 3: Suppose (X,\mathfrak{M},\mu) is a \sigma-finite measure space. A family \Phi\subset L_1(\mu) is uniformly integrable if and only if

  1. \sup_{f\in\Phi}\|f\|_1<\infty.
  2. \inf_{a>0}\sup_{f\in \Phi}\int_{\{|f|>a\}}|f|\,d\mu=0
  3. \Phi is tight.

When \mu(X)<\infty, condition 3 is redundant (see Theorem 1 above).
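For example, take X=\mathbb{R} with Lebesgue measure and \Phi=\{ I_{[n,n+1]} : n\in\mathbb{N}\}. Conditions 1 and 2 of Theorem 3 hold (all norms equal 1, and the sets \{|f|>a\} are empty for a\ge 1), but for every K\in\mathfrak{M} with \mu(K)<\infty one has \mu(K\cap[n,n+1])\to 0, so

: \sup_{n}\int_{X\setminus K} I_{[n,n+1]}\,d\mu = 1.

Hence \Phi is not tight and, by Theorem 3, not uniformly integrable, even though it satisfies Definition A; this is the "escape of mass to infinity" that Definition A does not detect on infinite measure spaces.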

Uniform absolute continuity

There is another notion of uniformity, slightly different from uniform integrability, which also has many applications in probability and measure theory and which does not require random variables to have a finite integral.{{sfn|Bass|2011|page=356}}

Definition: Suppose (\Omega,\mathcal{F},P) is a probability space. A class \mathcal{C} of random variables is uniformly absolutely continuous with respect to P if for any \varepsilon>0, there is \delta>0 such that

: \operatorname E[|X|I_A]<\varepsilon

whenever P(A)<\delta.

It is equivalent to uniform integrability if the measure is finite and has no atoms; in that case the condition already forces a uniform bound on \operatorname E[|X|], since the space can then be partitioned into finitely many sets of measure less than \delta.

The term "uniform absolute continuity" is not standard,{{citation needed|date=August 2023}} but is used by some authors.{{cite book|first=J. J. |last=Benedetto|authorlink=J. J. Benedetto |year=1976|title=Real Variable and Integration|publisher=B. G. Teubner |location=Stuttgart|page=89| isbn=3-519-02209-5}}{{cite book |first=C. W.|last=Burrill|authorlink=C. W. Burrill|year=1972|title=Measure, Integration, and Probability| publisher=McGraw-Hill|page=180| isbn=0-07-009223-0}}

Related corollaries

The following results apply to the probabilistic definition.{{sfn|Gut|2005|pages=215–216}}

  • Definition 2 can be rewritten as a limit: \lim_{K \to \infty} \sup_{X \in \mathcal{C}} \operatorname E(|X|\,I_{|X|\geq K})=0.
  • A non-UI sequence. Let \Omega = [0,1] \subset \mathbb{R}, and define X_n(\omega) = \begin{cases} n, & \omega\in (0,1/n), \\ 0, & \text{otherwise.} \end{cases} Clearly X_n\in L^1, and indeed \operatorname E(|X_n|)=1 for all n. However, \operatorname E(|X_n| I_{\{|X_n|\ge K \}})= 1 \text{ for all } n \ge K, and comparing with Definition 2, it is seen that the sequence is not uniformly integrable (a numerical check appears after this list).


  • By using Definition 1 in the above example, it can be seen that the first clause is satisfied, as the L^1 norms of all X_n equal 1, i.e., are bounded. But the second clause does not hold: for any positive \delta, there is an interval (0, 1/n) with measure less than \delta and \operatorname E[|X_m| I_{(0, 1/n)}] = 1 for all m \ge n.
  • If X belongs to a UI class \mathcal{C} in the sense of Definition 2, then by splitting \operatorname E(|X|) = \operatorname E(|X| I_{\{|X| \geq K \}})+\operatorname E(|X| I_{\{|X| < K \}}) and bounding the two terms by \varepsilon and K respectively, it can be seen that a uniformly integrable class is always bounded in L^1.
  • If a sequence of random variables X_n is dominated by an integrable, non-negative Y, that is, |X_n(\omega)| \le Y(\omega) for all \omega and n, with \operatorname E(Y) < \infty, then the class \mathcal{C} = \{X_n\} is uniformly integrable.
  • A class of random variables bounded in L^p (p > 1) is uniformly integrable.
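The failure of uniform integrability in the example above can also be checked numerically. The following is a minimal Monte Carlo sketch (an illustration, not taken from the cited sources) that estimates \sup_n \operatorname E(|X_n| I_{\{|X_n|\ge K\}}) for the family X_n = n\, I_{(0,1/n)} on [0,1] and, for comparison, for the dominated (hence uniformly integrable) family Y_n = I_{(0,1/n)}; the comparison family Y_n, the truncation level K, the sample size and the values of n are arbitrary illustrative choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
# Sample points from the probability space ([0, 1], Lebesgue measure).
omega = rng.uniform(0.0, 1.0, size=200_000)

def tail_expectation(x, K):
    """Monte Carlo estimate of E(|X| * I{|X| >= K})."""
    return float(np.mean(np.abs(x) * (np.abs(x) >= K)))

K = 10.0
for n in [5, 10, 100, 1000]:
    X_n = n * (omega < 1.0 / n)    # X_n = n on (0, 1/n), 0 otherwise: the non-UI family
    Y_n = 1.0 * (omega < 1.0 / n)  # Y_n = 1 on (0, 1/n): dominated by 1, hence UI
    print(n, tail_expectation(X_n, K), tail_expectation(Y_n, K))

# For n >= K the estimate for X_n stays close to 1 (the unit of L^1 mass concentrates
# on the shrinking event (0, 1/n)), while the estimate for Y_n is 0 once K > 1,
# matching Definition 2.
</syntaxhighlight>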

Relevant theorems

In the following we use the probabilistic framework, but the results remain valid for measures that are not necessarily finite, provided the L^1-boundedness condition is added for the chosen subset of L^1(\mu).

  • Dunford–Pettis theorem{{Cite journal|last=Dunford|first=Nelson|date=1938|title=Uniformity in linear spaces |url=https://www.ams.org/| journal=Transactions of the American Mathematical Society |language=en |volume=44 |issue=2|pages=305–356 |doi=10.1090/S0002-9947-1938-1501971-X |issn=0002-9947 | doi-access=free}}{{Cite journal | last=Dunford|first=Nelson |date=1939 | title=A mean ergodic theorem| journal=Duke Mathematical Journal |language=en|volume=5 |issue=3|pages=635–646|doi=10.1215/S0012-7094-39-00552-1|issn=0012-7094}}{{pb}}A class of random variables \mathcal{C} \subset L^1(\mu) is uniformly integrable if and only if it is relatively compact for the weak topology \sigma(L^1,L^\infty), that is, the weak topology on L^1 induced by its pairing with L^\infty.{{citation needed|date=August 2023}}
  • de la Vallée-Poussin theoremMeyer, P.A. (1966). Probability and Potentials, Blaisdell Publishing Co, N. Y. (p.19, Theorem T22).{{Cite journal|last=Poussin | first=C. De La Vallee|date=1915|title=Sur L'Integrale de Lebesgue | journal=Transactions of the American Mathematical Society|volume=16 |issue=4 |pages=435–501 |doi=10.2307/1988879 |jstor=1988879 |hdl=10338.dmlcz/127627 |hdl-access=free}}{{pb}}The family \{X_{\alpha}\}_{\alpha\in\Alpha} \subset L^1(\mu) is uniformly integrable if and only if there exists a non-negative increasing convex function G(t) such that \lim_{t \to \infty} \frac{G(t)} t = \infty \text{ and } \sup_\alpha \operatorname E(G(|X_{\alpha}|)) < \infty.
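For instance, taking G(t)=t^p with p>1 gives G(t)/t=t^{p-1}\to\infty, so a family with \sup_\alpha \operatorname E(|X_\alpha|^p)<\infty, i.e. bounded in L^p, is uniformly integrable; this recovers the corollary on L^p-bounded classes stated above. In the converse direction, the theorem produces such a function G for every uniformly integrable family, but G in general depends on the family.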

Uniform integrability and stochastic ordering

A family of random variables \{X_i\}_{i \in I} is uniformly integrable if and only if{{Cite journal|author1=Leskelä, L.|author2=Vihola, M. | date=2013 |title=Stochastic order characterization of uniform integrability and tightness |url=https://www.sciencedirect.com/science/article/abs/pii/S0167715212003690?via%3Dihub| journal=Statistics and Probability Letters | language=en | volume=83 | issue=1 | pages=382–389 | doi=10.1016/j.spl.2012.09.023| arxiv=1106.0607 }} there exists a random variable X such that \operatorname E X < \infty and |X_i| \le_\mathrm{icx} X for all i \in I, where \le_\mathrm{icx} denotes the increasing convex stochastic order defined by A \le_\mathrm{icx} B if \operatorname E \phi(A) \le \operatorname E \phi(B) for all nondecreasing convex real functions \phi.

Relation to convergence of random variables

{{main|Convergence of random variables}}

A sequence \{X_n\} converges to X in the L_1 norm if and only if it converges in measure to X and is uniformly integrable. In probability terms, a sequence of random variables converging in probability converges in the mean if and only if it is uniformly integrable.{{cite book|last=Bogachev|first=Vladimir I.|chapter=The spaces Lp and spaces of measures | title=Measure Theory Volume I|year=2007 |publisher=Springer-Verlag|location=Berlin Heidelberg|isbn=978-3-540-34513-8|pages=268|doi=10.1007/978-3-540-34514-5_4}} This is a generalization of Lebesgue's dominated convergence theorem; see the Vitali convergence theorem.
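For instance, the non-UI sequence X_n = n\, I_{(0,1/n)} considered above converges to 0 in probability (indeed almost surely), yet \operatorname E(|X_n|)=1 for every n, so it does not converge to 0 in L_1; uniform integrability is exactly the hypothesis that fails.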

Citations

{{Reflist}}


Category:Martingale theory