Grubbs's test

{{Short description|Statistical test}}

In statistics, Grubbs's test or the Grubbs test (named after Frank E. Grubbs, who published the test in 1950{{cite journal |last=Grubbs |first=Frank E. |title=Sample criteria for testing outlying observations |journal=Annals of Mathematical Statistics |volume=21 |issue=1 |pages=27–58 |doi=10.1214/aoms/1177729885 |year=1950 |doi-access=free |hdl=2027.42/182780 |hdl-access=free }}), also known as the maximum normalized residual test or extreme studentized deviate test, is a test used to detect outliers in a univariate data set assumed to come from a normally distributed population.

Definition

Grubbs's test is based on the assumption of normality. That is, one should first verify that the data can be reasonably approximated by a normal distribution before applying the Grubbs test.{{cite web |title=Engineering Statistics Handbook |at=Section 1.3.5.17 |publisher=NIST |url=http://www.itl.nist.gov/div898/handbook/eda/section3/eda35h.htm}}

Grubbs's test detects one outlier at a time. This outlier is expunged from the dataset and the test is iterated until no outliers are detected. However, multiple iterations change the probabilities of detection, and the test should not be used for sample sizes of six or fewer since it frequently tags most of the points as outliers.{{Cite journal|last1=Adikaram|first1=K. K. L. B.|last2=Hussein|first2=M. A.|last3=Effenberger|first3=M.|last4=Becker|first4=T.|date=2015-01-14|title=Data Transformation Technique to Improve the Outlier Detection Power of Grubbs's Test for Data Expected to Follow Linear Relation|journal=Journal of Applied Mathematics|volume=2015|pages=1–9|language=en|doi=10.1155/2015/708948|doi-access=free}}
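The iterative procedure described above can be sketched in Python. This is a minimal illustration, not part of the original article: it assumes NumPy and SciPy are available, and the function names, significance level, and sample data are illustrative.

```python
# Illustrative sketch of the iterative procedure: test, remove the flagged
# point, repeat until no outlier is detected. Helper names are hypothetical.
import numpy as np
from scipy import stats

def critical_value(n, alpha=0.05):
    """Two-sided Grubbs critical value for sample size n."""
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

def remove_outliers(y, alpha=0.05):
    y = [float(v) for v in y]
    while len(y) > 6:  # the test is unreliable for six or fewer points
        arr = np.asarray(y)
        dev = np.abs(arr - arr.mean())
        g = dev.max() / arr.std(ddof=1)  # Grubbs statistic G
        if g <= critical_value(len(arr), alpha):
            break  # no outlier detected; stop iterating
        y.pop(int(dev.argmax()))  # expunge the single detected outlier
    return y
```

Note that the loop stops once the sample would shrink to six points, reflecting the caveat about small sample sizes.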

Grubbs's test is defined for the following hypotheses:

:H0: There are no outliers in the data set

:Ha: There is exactly one outlier in the data set

The Grubbs test statistic is defined as

:<math>G = \frac{\displaystyle\max_{i=1,\ldots,N} \left| Y_i - \bar{Y} \right|}{s}</math>

with <math>\bar{Y}</math> and <math>s</math> denoting the sample mean and sample standard deviation, respectively. The Grubbs test statistic is the largest absolute deviation from the sample mean in units of the sample standard deviation.

This is the two-sided test, for which the hypothesis of no outliers is rejected at significance level α if

:<math>G > \frac{N-1}{\sqrt{N}} \sqrt{\frac{t_{\alpha/(2N),\,N-2}^2}{N - 2 + t_{\alpha/(2N),\,N-2}^2}}</math>

with <math>t_{\alpha/(2N),\,N-2}</math> denoting the upper critical value of the ''t''-distribution with <math>N-2</math> degrees of freedom and a significance level of <math>\alpha/(2N)</math>.
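As a worked sketch (not part of the original article), the statistic and critical value above can be computed with NumPy and SciPy; the function name and sample data are illustrative, and SciPy's `stats.t.ppf` supplies the ''t'' quantile.

```python
# A minimal sketch of the two-sided Grubbs test, assuming NumPy/SciPy.
import numpy as np
from scipy import stats

def grubbs_two_sided(y, alpha=0.05):
    """Return (G, critical value, reject?) for the two-sided Grubbs test."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    g = np.max(np.abs(y - y.mean())) / y.std(ddof=1)  # test statistic G
    # Upper critical value of the t-distribution, significance alpha/(2N)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return g, g_crit, bool(g > g_crit)
```

For example, on eight made-up points with one clear outlier, `[4.9, 5.1, 5.0, 5.2, 4.8, 5.1, 4.95, 9.0]`, the statistic exceeds the critical value at α = 0.05, so the hypothesis of no outliers is rejected.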

One-sided case

Grubbs's test can also be defined as a one-sided test, replacing <math>\alpha/(2N)</math> with <math>\alpha/N</math>. To test whether the minimum value is an outlier, the test statistic is

:<math>G = \frac{\bar{Y} - Y_\min}{s}</math>

with <math>Y_\min</math> denoting the minimum value. To test whether the maximum value is an outlier, the test statistic is

:<math>G = \frac{Y_\max - \bar{Y}}{s}</math>

with <math>Y_\max</math> denoting the maximum value.
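The one-sided variants can be sketched the same way (again an illustration, not part of the original article; the function name, `side` parameter, and data are hypothetical, and the only change from the two-sided case is using α/''N'' in the ''t'' quantile).

```python
# Hedged sketch of the one-sided Grubbs tests; alpha/(2N) becomes alpha/N.
import numpy as np
from scipy import stats

def grubbs_one_sided(y, alpha=0.05, side="max"):
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = y.std(ddof=1)  # sample standard deviation
    if side == "max":
        g = (y.max() - y.mean()) / s   # tests the maximum value
    else:
        g = (y.mean() - y.min()) / s   # tests the minimum value
    t = stats.t.ppf(1 - alpha / n, n - 2)  # note alpha/N, not alpha/(2N)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return g, g_crit, bool(g > g_crit)
```

On data whose only suspicious point is a large maximum, the `side="max"` test rejects while the `side="min"` test does not.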

Related techniques

Several graphical techniques can be used to detect outliers. A simple run sequence plot, a box plot, or a histogram should show any obviously outlying points. A normal probability plot may also be useful.


References

{{Reflist}}

Further reading

* {{cite journal|last=Grubbs|first=Frank|date=February 1969|title=Procedures for Detecting Outlying Observations in Samples|journal=Technometrics|volume=11|issue=1|pages=2–21|doi=10.2307/1266761|jstor=1266761}}
* {{cite journal|last=Stefansky|first=W.|year=1972|title=Rejecting Outliers in Factorial Designs|journal=Technometrics|volume=14|issue=2|pages=469–479|doi=10.2307/1267436|jstor=1267436}}

{{NIST-PD}}

Category:Statistical tests

Category:Statistical outliers