Location parameter

{{Short description|Concept in statistics}}

{{Multiple issues|

{{more citations needed|date=February 2020}}

{{disputed|date=July 2021}}

}}

In statistics, a location parameter of a probability distribution is a scalar- or vector-valued parameter x_0 that determines the "location" or shift of the distribution. In the literature on location parameter estimation, probability distributions with such a parameter are formally defined in one of the following equivalent ways:

  • either as having a probability density function or probability mass function f(x - x_0);{{cite journal |last1=Takeuchi |first1=Kei |title= A Uniformly Asymptotically Efficient Estimator of a Location Parameter |journal=Journal of the American Statistical Association |date=1971 |volume=66 |issue=334 |pages=292–301|doi=10.1080/01621459.1971.10482258 |s2cid=120949417 }} or
  • having a cumulative distribution function F(x - x_0);{{cite book |last1=Huber |first1=Peter J. |chapter=Robust Estimation of a Location Parameter |title=Breakthroughs in Statistics |series=Springer Series in Statistics |date=1992 |pages=492–518| publisher=Springer|doi=10.1007/978-1-4612-4380-9_35 |isbn=978-0-387-94039-7 |chapter-url=http://projecteuclid.org/euclid.aoms/1177703732 }} or
  • being defined as resulting from the random variable transformation x_0 + X, where X is a random variable with a certain, possibly unknown, distribution.{{cite journal |last1=Stone |first1=Charles J. |title=Adaptive Maximum Likelihood Estimators of a Location Parameter |journal=The Annals of Statistics |date=1975 |volume=3 |issue=2 |pages=267–284|doi=10.1214/aos/1176343056 |doi-access=free }} See also {{Slink||Additive noise}}.

A direct example of a location parameter is the parameter \mu of the normal distribution. To see this, note that the probability density function f(x | \mu, \sigma) of a normal distribution \mathcal{N}(\mu,\sigma^2) can have the parameter \mu factored out: in terms of x' = x - \mu it can be written as

:g(x' | \sigma) = \frac{1}{\sigma \sqrt{2\pi} } \exp\left(-\frac{1}{2}\left(\frac{x'}{\sigma}\right)^2\right)

thus fulfilling the first of the definitions given above.

The above definitions imply that, in the one-dimensional case, increasing x_0 shifts the probability density or mass function rigidly to the right, maintaining its exact shape.
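This rigid shift can be checked numerically. The sketch below (plain Python; the `normal_pdf` helper is illustrative and not part of the article) verifies that increasing the location parameter of a normal density by c moves every density value exactly c to the right:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2), written as a function of the shift x - mu."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Increasing the location parameter by c moves the whole curve rigidly:
# the density at x + c under location mu + c equals the density at x under mu.
mu, c = 2.0, 5.0
for x in (-1.3, 0.0, 0.7, 4.2):
    assert math.isclose(normal_pdf(x + c, mu + c), normal_pdf(x, mu))
```

The same check works for any density of the form f(x - x_0), since the shape depends only on the difference x - x_0.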

A location parameter can also be found in families having more than one parameter, such as location–scale families. In this case, the probability density function or probability mass function will be a special case of the more general form

:f_{x_0,\theta}(x) = f_\theta(x-x_0)

where x_0 is the location parameter, \theta represents additional parameters, and f_\theta is a function parametrized by the additional parameters.

Definition

Source:{{Cite book |last1=Casella |first1=George |title=Statistical Inference |last2=Berger |first2=Roger |year=2001 |isbn=978-0534243128 |edition=2nd |pages=116|publisher=Thomson Learning }}

Let f(x) be any probability density function and let \mu and \sigma > 0 be any given constants. Then the function

:g(x | \mu, \sigma)= \frac{1}{\sigma}f\left(\frac{x-\mu}{\sigma}\right)

is a probability density function.
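This claim can be checked numerically for a concrete base density. The sketch below uses the standard logistic density as an illustrative choice of f (not from the article) and verifies by a Riemann sum that the location–scale version still integrates to 1:

```python
import math

def f(x):
    """Standard logistic density -- an illustrative base pdf, not from the article."""
    e = math.exp(-abs(x))      # abs() avoids overflow in exp() for large |x|
    return e / (1 + e) ** 2    # the logistic density is symmetric, so this is valid

def g(x, mu, sigma):
    """Location-scale version: g(x | mu, sigma) = f((x - mu) / sigma) / sigma."""
    return f((x - mu) / sigma) / sigma

# Riemann-sum check that g integrates to 1 for one choice of (mu, sigma).
mu, sigma, h = 3.0, 2.0, 0.01
total = sum(g(-50.0 + i * h, mu, sigma) for i in range(10_000)) * h
assert abs(total - 1.0) < 1e-4
```

The 1/\sigma factor is exactly what the change of variables u = (x - \mu)/\sigma contributes, which is why the integral is preserved.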

The location family is then defined as follows:

Let f(x) be any probability density function. Then the family of probability density functions

:\mathcal{F} = \{f(x-\mu) : \mu \in \mathbb{R}\}

is called the location family with standard probability density function f(x), where \mu is called the location parameter for the family.

Additive noise

An alternative way of thinking of location families is through the concept of additive noise. If x_0 is a constant and W is random noise with probability density f_W(w), then X = x_0 + W has probability density f_{x_0}(x) = f_W(x-x_0) and its distribution is therefore part of a location family.
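The additive-noise view is easy to simulate. In the sketch below (illustrative choices throughout: uniform noise on [-1, 1] with mean 0, and x_0 = 4), adding the constant x_0 to every draw of W shifts the whole sample, and in particular the sample mean, by exactly x_0:

```python
import random

random.seed(42)
x0 = 4.0                            # the location parameter (a deterministic shift)

# W: zero-mean noise; uniform on [-1, 1] is an illustrative choice, not from the article
w = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
x = [x0 + wi for wi in w]           # X = x0 + W

# The density of X is f_W(x - x0): every location summary shifts by x0.
mean_w = sum(w) / len(w)
mean_x = sum(x) / len(x)
assert abs(mean_x - (x0 + mean_w)) < 1e-6   # the sample mean shifts by exactly x0
assert abs(mean_x - x0) < 0.02              # E[W] = 0, so the mean lands near x0
```

The same shift applies to the median and mode of the sample, since the entire empirical distribution is translated by x_0.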

Proofs

For the continuous univariate case, consider a probability density function f(x | \theta), x \in [a, b] \subset \mathbb{R}, where \theta is a vector of parameters. A location parameter x_0 can be added by defining:

:g(x | \theta, x_0) = f(x - x_0 | \theta), \; x \in [a + x_0, b + x_0]

It can be proved that g is a p.d.f. by verifying that it satisfies the two conditions{{cite book | last=Ross | first=Sheldon | title=Introduction to probability models | publisher=Academic Press | publication-place=Amsterdam Boston | year=2010 | isbn=978-0-12-375686-2 | oclc=444116127 }} g(x | \theta, x_0) \ge 0 and \int_{-\infty}^{\infty} g(x | \theta, x_0) dx = 1. g integrates to 1 because:

:\int_{-\infty}^{\infty} g(x | \theta, x_0) dx = \int_{a + x_0}^{b + x_0} g(x | \theta, x_0) dx = \int_{a + x_0}^{b + x_0} f(x - x_0 | \theta) dx

now making the variable change u = x - x_0 and updating the integration interval accordingly yields:

:\int_{a}^{b} f(u | \theta) du = 1

because f(x | \theta) is a p.d.f. by hypothesis. The condition g(x | \theta, x_0) \ge 0 follows from g taking the same values as f, which is nonnegative everywhere because it is a p.d.f.
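The proof can be mirrored numerically. The sketch below picks an illustrative base density, f(x) = 2x on [0, 1] (not from the article), shifts it by x_0 = 3, and checks both conditions: the shifted density is nonnegative, and a midpoint-rule sum over the shifted support [a + x_0, b + x_0] recovers the integral 1, just as the change of variables u = x - x_0 predicts:

```python
x0, n = 3.0, 10_000               # shift and number of grid cells

def f(x):
    """Triangular pdf f(x) = 2x on [0, 1] -- an illustrative choice, not from the article."""
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def g(x):
    """Shifted density g(x | x0) = f(x - x0), supported on [x0, 1 + x0]."""
    return f(x - x0)

h = 1.0 / n                       # grid step over the shifted support [x0, 1 + x0]
mids = [x0 + (i + 0.5) * h for i in range(n)]
total = sum(g(m) for m in mids) * h
assert abs(total - 1.0) < 1e-9    # the midpoint rule is exact for linear densities
assert all(g(m) >= 0.0 for m in mids)
```

Note that the support moved from [0, 1] to [3, 4] while the total mass stayed at 1, which is the content of the change-of-variables step above.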

See also

References

General references

  • {{Cite web |title=1.3.6.4. Location and Scale Parameters |url=https://www.itl.nist.gov/div898/handbook/eda/section3/eda364.htm |access-date=2025-03-17 |website=National Institute of Standards and Technology}}

{{Statistics|descriptive|state=collapsed}}

{{DEFAULTSORT:Location Parameter}}

Category:Summary statistics

Category:Statistical parameters