Samuelson's inequality

{{Short description|Concept in statistics}}


In statistics, Samuelson's inequality, named after the economist Paul Samuelson,{{cite journal |first=Paul |last=Samuelson |title=How Deviant Can You Be? |journal=Journal of the American Statistical Association |volume=63 |issue=324 |year=1968 |pages=1522–1525 |jstor=2285901 |doi=10.2307/2285901 }} also called the Laguerre–Samuelson inequality,{{cite thesis |type=MSc |last=Jensen |first=Shane Tyler |date=1999 |title=The Laguerre–Samuelson Inequality with Extensions and Applications in Statistics and Matrix Theory |publisher=Department of Mathematics and Statistics, McGill University |url=http://www.collectionscanada.gc.ca/obj/s4/f2/dsk1/tape10/PQDD_0027/MQ50799.pdf }}{{cite book |first1=Shane T. |last1=Jensen |first2=George P. H. |last2=Styan |year=1999 |chapter=Some Comments and a Bibliography on the Laguerre-Samuelson Inequality with Extensions and Applications in Statistics and Matrix Theory |title=Analytic and Geometric Inequalities and Applications |pages=151–181 |doi=10.1007/978-94-011-4577-0_10 |isbn=978-94-010-5938-1 }} after the mathematician Edmond Laguerre, states that every element of any collection of real numbers x1, ..., xn lies within {{radic|n − 1}} uncorrected sample standard deviations of their sample mean.

Statement of the inequality

If we let

: \overline{x} = \frac{x_1+\cdots+x_n}{n}

be the sample mean and

: s = \sqrt{\frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2 }

be the uncorrected standard deviation of the sample, then

: \overline{x} - s\sqrt{n-1} \le x_j \le \overline{x} + s\sqrt{n-1}\qquad \text{for } j = 1,\dots,n. {{cite book |title=Advances in Inequalities from Probability Theory and Statistics |first1=Neil S. |last1=Barnett |first2=Sever Silvestru |last2=Dragomir |publisher=Nova Publishers |year=2008 |isbn=978-1-60021-943-6 |page=164 }}

Equality holds on the left (respectively the right) for x_j if and only if the n − 1 values x_i other than x_j are all equal to each other and greater (respectively smaller) than x_j.

If the corrected sample standard deviation s = \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2 } is used instead, then the inequality \overline{x} - s\sqrt{n-1} \le x_j \le \overline{x} + s\sqrt{n-1} still applies, and it can be slightly tightened to \overline{x} - s\tfrac{n-1}{\sqrt{n}} \le x_j \le \overline{x} + s\tfrac{n-1}{\sqrt{n}}.
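Both forms of the bound are straightforward to check numerically. The following is a minimal sketch in Python using NumPy (the data and variable names are illustrative, not from any cited source); it also exhibits the equality case described above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)          # any finite collection of reals
n = len(x)
mean = x.mean()

s_uncorr = x.std(ddof=0)         # divides by n (uncorrected)
s_corr = x.std(ddof=1)           # divides by n - 1 (corrected)

# Samuelson's bound with the uncorrected standard deviation ...
assert np.all(np.abs(x - mean) <= s_uncorr * np.sqrt(n - 1) + 1e-12)

# ... and the tightened bound with the corrected standard deviation.
assert np.all(np.abs(x - mean) <= s_corr * (n - 1) / np.sqrt(n) + 1e-12)

# The two half-widths coincide, since s_corr = s_uncorr * sqrt(n / (n - 1)).
assert np.isclose(s_uncorr * np.sqrt(n - 1), s_corr * (n - 1) / np.sqrt(n))

# Equality case: the other n - 1 values are equal and smaller than x_j = 3,
# so x_j sits exactly at the upper bound.
y = np.array([0.0, 0.0, 0.0, 3.0])
assert np.isclose(y.max(), y.mean() + y.std(ddof=0) * np.sqrt(len(y) - 1))
</syntaxhighlight>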

Comparison to Chebyshev's inequality

{{Main|Chebyshev's inequality#Samuelson's inequality}}

Chebyshev's inequality locates a certain fraction of the data within certain bounds, while Samuelson's inequality locates all the data points within certain bounds.

The bounds given by Chebyshev's inequality are unaffected by the number of data points, while for Samuelson's inequality the bounds loosen as the sample size increases. Thus for large enough data sets, Chebyshev's inequality is more useful.
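The difference in how the two guarantees scale can be seen numerically. The following Python sketch (with illustrative sample sizes) applies Chebyshev's inequality to the empirical distribution of the sample alongside Samuelson's bound:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
for n in (10, 100, 10_000):
    x = rng.standard_normal(n)
    z = np.abs(x - x.mean()) / x.std(ddof=0)  # distances in uncorrected SDs

    # Samuelson: every point lies within sqrt(n - 1) SDs of the mean,
    # a radius that widens as the sample grows.
    assert z.max() <= np.sqrt(n - 1)

    # Chebyshev on the empirical distribution: at most 1/k^2 of the points
    # lie k or more SDs from the mean, regardless of n (here k = 2).
    assert np.mean(z >= 2) <= 0.25

    print(f"n={n:>6}: largest |z| = {z.max():.2f}, "
          f"Samuelson radius = {np.sqrt(n - 1):.2f}")
</syntaxhighlight>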

Applications

{{expand section|date=July 2017}}

Samuelson's inequality has several applications in statistics and mathematics. It is useful in the studentization of residuals, where it provides a rationale for performing the studentization externally: the inequality bounds how large an internally studentized residual can be, so external studentization gives a better picture of the spread of residuals in regression analysis.

In matrix theory, Samuelson's inequality is used to locate the eigenvalues of certain matrices and tensors.

Furthermore, generalizations of this inequality apply to complex data and random variables in a probability space.{{Cite journal |last1=Jin |first1=Hongwei |last2=Benítez |first2=Julio |title=Some generalizations and probability versions of Samuelson's inequality |url=https://files.ele-math.com/articles/mia-20-01.pdf |access-date=4 September 2024 |journal=Mathematical Inequalities & Applications |date=2017 |pages=1–12 |language=English |doi=10.7153/mia-20-01}}{{Cite book |title=Paul Samuelson |chapter=Samuelson's Approach to Revealed Preference Theory: Some Recent Advances |chapter-url=https://dipot.ulb.ac.be/dspace/bitstream/2013/314701/3/working_paper.pdf |series=Remaking Economics: Eminent Post-War Economists |date=2019 |doi=10.1057/978-1-137-56812-0_9 |last1=Demuynck |first1=Thomas |last2=Hjertstrand |first2=Per |pages=193–227 |isbn=978-1-137-56811-3 }}

Relationship to polynomials

Samuelson was not the first to describe this relationship: the first was probably Laguerre in 1880, while investigating the roots (zeros) of polynomials.{{cite journal |first=Edmond |last=Laguerre |year=1880 |title=Mémoire pour obtenir par approximation les racines d'une équation algébrique qui a toutes les racines réelles |journal=Nouvelles Annales de Mathématiques |series=2e série |volume=19 |pages=161–172, 193–202 }}

Consider a polynomial with all roots real:

: a_0x^n + a_1x^{n-1} + \cdots + a_{n-1}x + a_n = 0

Without loss of generality let a_0 = 1 and let

: t_1 = \sum_{i=1}^n x_i \quad \text{and} \quad t_2 = \sum_{i=1}^n x_i^2

Then

: a_1 = -\sum_{i=1}^n x_i = -t_1

and

: a_2 = \sum_{i<j} x_i x_j = \frac{t_1^2 - t_2}{2}

Rearranging, and substituting t_1 = -a_1, gives t_2 in terms of the coefficients:

: t_2 = a_1^2 - 2a_2

Laguerre showed that the roots of this polynomial are bounded by

: -a_1 / n \pm b \sqrt{n - 1}

where

: b = \frac{\sqrt{nt_2 - t_1^2}}{n} = \frac{\sqrt{na_1^2 - a_1^2 - 2na_2}}{n}

Inspection shows that -\tfrac{a_1}{n} is the mean of the roots and that b is the uncorrected standard deviation of the roots.

Laguerre failed to notice this relationship to the mean and standard deviation of the roots, being more interested in the bounds themselves. The relationship permits a rapid estimate of the bounds of the roots and may be of use in locating them.
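As an illustration, the following Python sketch (the function name and example polynomial are ours, assuming a monic polynomial with all roots real) computes Laguerre's bounds directly from the coefficients and compares them with the actual roots:

<syntaxhighlight lang="python">
import numpy as np

def laguerre_root_bounds(coeffs):
    """Laguerre's bounds for a monic polynomial [1, a1, a2, ...]
    that is assumed to have all roots real."""
    n = len(coeffs) - 1
    a1, a2 = coeffs[1], coeffs[2]
    mean = -a1 / n                                   # mean of the roots
    b = np.sqrt(n * a1**2 - a1**2 - 2 * n * a2) / n  # SD of the roots
    return mean - b * np.sqrt(n - 1), mean + b * np.sqrt(n - 1)

# (x - 1)(x - 2)(x - 4) = x^3 - 7x^2 + 14x - 8
coeffs = [1, -7, 14, -8]
lo, hi = laguerre_root_bounds(coeffs)
print(lo, sorted(np.roots(coeffs).real), hi)  # roots 1, 2, 4 all lie in [lo, hi]
</syntaxhighlight>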

When the coefficients a_1 and a_2 are both zero, no information can be obtained about the location of the roots, because not all of the roots are real (as can be seen from Descartes' rule of signs) unless the constant term is also zero.

References