{{Short description|Rule for choosing histogram bins}}

Scott's rule is a method to select the number of bins in a histogram.{{cite journal |last=Scott |first=David W. |year=1979 |title=On optimal and data-based histograms |journal=Biometrika |volume=66 |issue=3|pages=605–610 |doi=10.1093/biomet/66.3.605}} Scott's rule is widely employed in data analysis software, including R,{{cite web | url=https://www.rdocumentation.org/packages/graphics/versions/3.6.2/topics/hist | title=Hist function - RDocumentation }} Python{{cite web | url=https://numpy.org/doc/stable/reference/generated/numpy.histogram_bin_edges.html#numpy.histogram_bin_edges | title=Numpy.histogram_bin_edges — NumPy v2.1 Manual }} and Microsoft Excel, where it is the default bin selection method.{{Cite web| url=https://support.microsoft.com/en-gb/office/create-a-histogram-85680173-064b-4024-b39d-80f17ff2f4e8#bkmk_scottrefrule| title=Excel:Create a histogram}}

For a set of n observations x_i, let \hat{f}(x) be the histogram approximation of the underlying probability density f(x). The integrated mean squared error (IMSE) is

:

\text{IMSE} = E\left[ \int_{-\infty}^{\infty} \left(\hat{f}(x) - f(x)\right)^2 \, dx \right]

where E[\cdot] denotes the expectation across many independent draws of n data points. By Taylor expanding f to first order in h, the bin width, Scott showed that the optimal width is

:

h^* = \left( 6 / \int_{-\infty}^{\infty} f'(x)^2 dx \right)^{1/3}n^{-1/3}

This formula is also the basis for the Freedman–Diaconis rule.

By taking a normal reference, i.e. assuming that f(x) is a normal distribution, the equation for h^* becomes

:

h^* = \left( 24 \sqrt{\pi} \right)^{1/3} \sigma n^{-1/3} \approx 3.5\, \sigma\, n^{-1/3}

where \sigma is the standard deviation of the normal distribution, estimated from the data. With this value of bin width Scott demonstrates that{{cite journal |last=Scott |first=D.W. |year=2010 |title=Scott's rule |journal=Wiley Interdisciplinary Reviews: Computational Statistics |volume=2 |issue=4 |pages=497–502}}

:\text{IMSE} \propto n^{-2/3}

showing how quickly the histogram approximation approaches the true distribution as the number of samples increases.
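The normal-reference formula above can be sketched in a few lines of NumPy (an illustration, not from the source; NumPy's built-in "scott" estimator applies the same constant but rounds the bin count up, so its realised width is slightly smaller):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)           # n = 1000 standard-normal samples

n = x.size
sigma = x.std()                     # sigma estimated from the data (ddof=0, as NumPy does)
h = (24 * np.sqrt(np.pi)) ** (1 / 3) * sigma * n ** (-1 / 3)   # Scott's bin width

# NumPy's 'scott' rule uses the same formula, then rounds the bin
# count up to an integer, so its realised width is at most h.
edges = np.histogram_bin_edges(x, bins="scott")
h_numpy = edges[1] - edges[0]
```

The two widths agree up to the rounding of the bin count.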

== Terrell–Scott rule ==

Another approach, developed by Terrell and Scott,{{cite journal |last1=Terrell |first1=G.R. |last2=Scott |first2=D.W. |year=1985 |title=Oversmoothed nonparametric density estimates |journal=Journal of the American Statistical Association |volume=80 |issue=389 |pages=209–214}} is based on the observation that, among all densities g(x) defined on a compact interval, say |x| < 1/2, with absolutely continuous derivatives, the density which minimises \int_{-\infty}^{\infty} (g^{(k)}(x))^2 \, dx is

:

f_k(x) = \begin{cases}

\frac{(2k+1)!}{2^{2k}(k!)^2}(1-4x^2)^k, \quad &|x|\leq1/2\\

0 &|x|>1/2

\end{cases}
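A quick numerical check of the k=1 case (a NumPy sketch, not part of the source): f_1 integrates to one, and its roughness \int f_1'(x)^2 \, dx comes out to 12, the value that enters Scott's expression for h^*.

```python
import numpy as np

# f_1(x) = (3/2)(1 - 4x^2) on |x| <= 1/2, the k = 1 minimiser above
x = np.linspace(-0.5, 0.5, 200_001)
dx = x[1] - x[0]
f1 = 1.5 * (1 - 4 * x**2)
df1 = -12.0 * x                     # f_1'(x), differentiated analytically

# trapezoidal integration, written out to avoid version-specific NumPy names
area = ((f1[:-1] + f1[1:]) / 2 * dx).sum()               # ≈ 1: a valid density
roughness = ((df1[:-1] ** 2 + df1[1:] ** 2) / 2 * dx).sum()  # ≈ 12
```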

Using this with k=1 in the expression for h^*, and noting that \int_{-\infty}^{\infty} f_1'(x)^2 \, dx = 12, gives an upper bound on the bin width:

:

h^*_{TS} = \left( \frac{1}{2n} \right)^{1/3}.

So, for functions satisfying the continuity conditions, at least

:

k_{TS} = \frac{b-a}{h^*_{TS}} = \left( 2n \right)^{1/3}

bins should be used.{{Cite journal | last1 = Scott | first1 = D.W. | year = 2009 | title = Sturges' rule | journal = WIREs Computational Statistics | volume = 1 | issue = 3| pages = 303–306 | doi=10.1002/wics.35| s2cid = 197483064 }}

[[File:Histogram rules.png|thumb]]

This rule is also called the oversmoothed rule or the Rice rule,{{cite web |url=http://onlinestatbook.com/ |title=Online Statistics Education: A Multimedia Course of Study |author=David M. Lane |publisher=Rice University |at=chapter 2 "Graphing Distributions", section "Histograms"}} so called because both authors worked at Rice University. The Rice rule is often reported with the factor of 2 outside the cube root, 2n^{1/3}, and may be considered a different rule. The key difference from Scott's rule is that this rule does not assume the data are normally distributed; the bin width depends only on the number of samples, not on any other property of the data.

In general \left( 2n \right)^{1/3} is not an integer, so \lceil \left( 2n \right)^{1/3} \rceil bins are used, where \lceil \cdot \rceil denotes the ceiling function.
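The two bin-count rules reduce to one-liners (a sketch; the function names are mine, not from any library):

```python
import math

def terrell_scott_bins(n: int) -> int:
    """Terrell–Scott (oversmoothed) lower bound: ceil((2n)^(1/3))."""
    return math.ceil((2 * n) ** (1 / 3))

def rice_bins(n: int) -> int:
    """Rice rule, with the factor of 2 outside the cube root: ceil(2 n^(1/3))."""
    return math.ceil(2 * n ** (1 / 3))
```

For n = 1000 these give 13 and 20 bins respectively, showing how moving the factor of 2 outside the cube root prescribes noticeably more bins.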

== References ==