Slutsky's theorem

{{Short description|Theorem in probability theory}}

In probability theory, Slutsky's theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables.{{cite book |first=Arthur S. |last=Goldberger |author-link=Arthur Goldberger |title=Econometric Theory |location=New York |publisher=Wiley |year=1964 |pages=[https://archive.org/details/econometrictheor0000gold/page/117 117]–120 |url=https://archive.org/details/econometrictheor0000gold |url-access=registration }}

The theorem was named after Eugen Slutsky.{{Cite journal |last=Slutsky |first=E. |author-link=Eugen Slutsky |year=1925 |title=Über stochastische Asymptoten und Grenzwerte |trans-title=On Stochastic Asymptotes and Limits |language=de |journal=Metron |volume=5 |issue=3 |pages=3–89 |jfm=51.0380.03 }} Slutsky's theorem is also attributed to Harald Cramér; it is also called Cramér's theorem according to Remark 11.1 (page 249) of {{Cite book |last=Gut |first=Allan |title=Probability: a graduate course |publisher=Springer-Verlag |year=2005 |isbn=0-387-22833-0 }}

Statement

Let X_n and Y_n be sequences of scalar, vector, or matrix random elements.

If X_n converges in distribution to a random element X and Y_n converges in probability to a constant c, then

  • X_n + Y_n \ \xrightarrow{d}\ X + c ;
  • X_nY_n \ \xrightarrow{d}\ Xc ;
  • X_n/Y_n \ \xrightarrow{d}\ X/c,   provided that c is invertible (for scalars, this means c \neq 0),

where \xrightarrow{d} denotes convergence in distribution.
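
A standard illustration of the theorem in use (a textbook consequence of combining it with the central limit theorem and the law of large numbers, added here as a worked example) is the studentized sample mean. Suppose X_1, X_2, \ldots are i.i.d. with mean \mu and finite variance \sigma^2 > 0. The central limit theorem gives \sqrt{n}\,(\bar{X}_n - \mu) \ \xrightarrow{d}\ N(0, \sigma^2), and the law of large numbers gives S_n \ \xrightarrow{p}\ \sigma for the sample standard deviation S_n. The ratio case of the theorem (with c = \sigma \neq 0) then yields

  \frac{\sqrt{n}\,(\bar{X}_n - \mu)}{S_n} \ \xrightarrow{d}\ N(0, 1).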

Notes:

  1. The requirement that Y_n converges to a constant is essential: if it were to converge to a non-degenerate random variable, the theorem would no longer be valid. For example, let X_n \sim {\rm Uniform}(0,1) and Y_n = -X_n. Then X_n + Y_n = 0 for all values of n, and Y_n \, \xrightarrow{d} \, {\rm Uniform}(-1,0), yet X_n + Y_n does not converge in distribution to X + Y, where X \sim {\rm Uniform}(0,1), Y \sim {\rm Uniform}(-1,0), and X and Y are independent. (A simulation sketch of this counterexample follows these notes.) See {{cite web |first=Donglin |last=Zeng |title=Large Sample Theory of Random Variables (lecture slides) |work=Advanced Probability and Statistical Inference I (BIOS 760) |url=https://www.bios.unc.edu/~dzeng/BIOS760/ChapC_Slide.pdf#page=59 |publisher=University of North Carolina at Chapel Hill |date=Fall 2018 |at=Slide 59 }}
  2. The theorem remains valid if we replace all convergences in distribution with convergences in probability.
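
The following is a minimal Monte Carlo sketch of note 1 (an illustration added here, assuming only NumPy; modelling Y_n as 2 plus noise of scale 1/\sqrt{n} is one arbitrary way to realize convergence in probability to the constant c = 2). It contrasts the valid case, where Y_n tends to a constant, with the counterexample, where Y_n = -X_n has a non-degenerate limit.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
reps = 100_000   # Monte Carlo replications of (X_n, Y_n) at a fixed large n
n = 10_000       # the (fixed, large) index of the sequence

# Valid case: X_n ~ Uniform(0, 1) and Y_n -> 2 in probability
# (here Y_n is modelled, for illustration, as 2 plus noise of scale 1/sqrt(n)).
x = rng.uniform(0.0, 1.0, size=reps)
y = 2.0 + rng.normal(0.0, 1.0 / np.sqrt(n), size=reps)
s = x + y
# Slutsky: X_n + Y_n converges in distribution to Uniform(2, 3).
print("valid case: mean ~ 2.5:", s.mean(), "range ~ (2, 3):", s.min(), s.max())

# Counterexample from note 1: Y_n = -X_n converges in distribution to
# Uniform(-1, 0) but not to a constant, and X_n + Y_n = 0 identically,
# which is not the sum X + Y of two independent limits.
y_dep = -x                                   # the actual, fully dependent sequence
y_indep = rng.uniform(-1.0, 0.0, size=reps)  # an independent Uniform(-1, 0) draw
print("Var(X_n + Y_n) (degenerate):", (x + y_dep).var())   # exactly 0.0
print("Var(X + Y) (independent)  :", (x + y_indep).var())  # ~ 1/6 = 0.1667
</syntaxhighlight>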

Proof

This theorem follows from the fact that if X_n converges in distribution to X and Y_n converges in probability to a constant c, then the joint vector (X_n, Y_n) converges in distribution to (X, c) (see the article on convergence of random variables).

Next we apply the continuous mapping theorem, recognizing that the functions g(x,y) = x + y, g(x,y) = xy, and g(x,y) = xy^{-1} are continuous (for the last function to be continuous, y has to be invertible).
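
Spelled out for the ratio case: g(x, y) = xy^{-1} is continuous at every point (x, c) with c invertible, so the set of discontinuity points of g has probability zero under the distribution of the limit (X, c), and the continuous mapping theorem applies:

  (X_n, Y_n) \ \xrightarrow{d}\ (X, c) \quad\Longrightarrow\quad X_n Y_n^{-1} = g(X_n, Y_n) \ \xrightarrow{d}\ g(X, c) = Xc^{-1}.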

See also

  • Convergence of random variables

References

{{reflist}}

Further reading

  • {{cite book |first=George |last=Casella |first2=Roger L. |last2=Berger |title=Statistical Inference |location=Pacific Grove |publisher=Duxbury |year=2001 |pages=240–245 |isbn=0-534-24312-6 }}
  • {{Cite book |last1=Grimmett |first1=G. |last2=Stirzaker |first2=D. |title=Probability and Random Processes |year=2001 |publisher=Oxford |edition=3rd }}

  • {{cite book |first=Fumio |last=Hayashi |author-link=Fumio Hayashi |title=Econometrics |publisher=Princeton University Press |year=2000 |isbn=0-691-01018-8 |pages=92–93 |url=https://books.google.com/books?id=QyIW8WUIyzcC&pg=PA92 }}

{{DEFAULTSORT:Slutsky's Theorem}}

Category:Asymptotic theory (statistics)

Category:Theorems in probability theory

Category:Theorems in statistics