Estimation statistics

{{Short description|Data analysis approach in frequentist statistics}}

{{distinguish|Estimator|Estimation theory}}

{{other uses | Estimation (disambiguation)}}

Estimation statistics, or simply estimation, is a data analysis framework that uses a combination of effect sizes, confidence intervals, precision planning, and meta-analysis to plan experiments, analyze data and interpret results.{{cite web|last=Ellis|first=Paul|title=Effect size FAQ|url=http://effectsizefaq.com/}} It complements hypothesis testing approaches such as null hypothesis significance testing (NHST) by going beyond the question of whether an effect is present or not, and providing information about how large the effect is.{{cite web|last=Cohen|first=Jacob|title=The earth is round (p<.05)|url=https://www.ics.uci.edu/~sternh/courses/210/cohen94_pval.pdf|access-date=2013-08-22|archive-date=2017-10-11|archive-url=https://web.archive.org/web/20171011155642/http://www.ics.uci.edu/~sternh/courses/210/cohen94_pval.pdf|url-status=dead}} Estimation statistics is sometimes referred to as the new statistics.{{cite book|last=Altman|first=Douglas|title=Practical Statistics For Medical Research|url=https://archive.org/details/isbn_9780412276309|url-access=registration|year=1991|publisher=Chapman and Hall|location=London}}{{cite book|title=Statistics with Confidence|year=2000|publisher=Wiley-Blackwell|location=London|editor=Douglas Altman}}{{pn|date=May 2022}}

The primary aim of estimation methods is to report an effect size (a point estimate) along with its confidence interval, the latter of which is related to the precision of the estimate. The confidence interval summarizes a range of likely values of the underlying population effect. Proponents of estimation see reporting a p-value as an unhelpful distraction from the important business of reporting an effect size with its confidence interval,{{cite web|last=Ellis|first=Paul|title=Why can't I just judge my result by looking at the p value?|url=http://effectsizefaq.com/2010/05/31/why-can%E2%80%99t-i-just-judge-my-result-by-looking-at-the-p-value/|access-date=5 June 2013|date=2010-05-31}} and believe that estimation should replace significance testing for data analysis.{{cite journal |last1=Claridge-Chang |first1=Adam |last2=Assam |first2=Pryseley N |title=Estimation statistics should replace significance testing |journal=Nature Methods |date=2016 |volume=13 |issue=2 |pages=108–109 |doi=10.1038/nmeth.3729 |pmid=26820542 |s2cid=205424566 |url=https://zenodo.org/record/60156 }}{{Cite journal |last1=Berner |first1=Daniel |last2=Amrhein |first2=Valentin |date=2022 |title=Why and how we should join the shift from significance testing to estimation |journal=Journal of Evolutionary Biology |volume=35 |issue=6 |language=en |pages=777–787 |doi=10.1111/jeb.14009 |pmid=35582935 |pmc=9322409 |s2cid=247788899 |issn=1010-061X}}

History

Starting in 1929, physicist Raymond Thayer Birge published review papers{{cite journal |last1=Birge |first1=Raymond T. |title=Probable Values of the General Physical Constants |journal=Reviews of Modern Physics |date=1929 |volume=1 |issue=1 |pages=1–73 |doi=10.1103/RevModPhys.1.1 |bibcode=1929RvMP....1....1B }} in which he used weighted-averages methods to calculate estimates of physical constants, a procedure that can be seen as the precursor to modern meta-analysis.{{cite journal|last=Hedges|first=Larry|title=How hard is hard science, how soft is soft science|journal=American Psychologist|year=1987|volume=42|issue=5|page=443|doi=10.1037/0003-066x.42.5.443|citeseerx=10.1.1.408.2317}}

In the 1930s Jerzy Neyman published a series of papers on statistical estimation where he defined the mathematics and terminology of confidence intervals.

Neyman, J. (1934). "On the Two Different Aspects of the Representative Method: The Method of Stratified Sampling and the Method of Purposive Selection". Journal of the Royal Statistical Society, 97(4), 558–625. https://doi.org/10.2307/2342192 (see Note I in the appendix)

Neyman, J. (1935). Annals of Mathematical Statistics, 6(3): 111–116 (September 1935). https://doi.org/10.1214/aoms/1177732585

Neyman, J. (1937). "Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability". Philosophical Transactions of the Royal Society A, 236(767): 333–380. Bibcode:1937RSPTA.236..333N. doi:10.1098/rsta.1937.0005. JSTOR 91337.

In the 1960s, estimation statistics was adopted by the non-physical sciences with the development of the standardized effect size by Jacob Cohen.

In the 1970s, modern research synthesis was pioneered by Gene V. Glass with the first systematic review and meta-analysis for psychotherapy.{{cite book|last=Hunt|first=Morton|title=How science takes stock: the story of meta-analysis|year=1997|publisher=The Russell Sage Foundation|location=New York|isbn=978-0-87154-398-1}} This pioneering work subsequently influenced the adoption of meta-analyses for medical treatments more generally.

In the 1980s and 1990s, estimation methods were extended and refined for practical application by biostatisticians including Larry Hedges, Michael Borenstein, Doug Altman, Martin Gardner, and many others, with the development of the modern (medical) [https://www.wiley.com/en-sg/Introduction+to+Meta+Analysis%2C+2nd+Edition-p-9781119558354 meta-analysis].

Starting in the 1980s, the systematic review, used in conjunction with meta-analysis, became a technique widely used in medical research. There are over 200,000 [https://pubmed.ncbi.nlm.nih.gov/?term=meta-analysis&sort=date citations] to "meta-analysis" in PubMed.

In the 1990s, editor [http://www.bu.edu/sph/profile/kenneth-rothman/ Kenneth Rothman] banned the use of p-values from the journal Epidemiology; compliance was high among authors but this did not substantially change their analytical thinking.{{cite journal |last1=Fidler |first1=Fiona |last2=Thomason |first2=Neil |last3=Cumming |first3=Geoff |last4=Finch |first4=Sue |last5=Leeman |first5=Joanna |title=Editors Can Lead Researchers to Confidence Intervals, but Can't Make Them Think: Statistical Reform Lessons From Medicine |journal=Psychological Science |date=2004 |volume=15 |issue=2 |pages=119–126 |doi=10.1111/j.0963-7214.2004.01502008.x |pmid=14738519 |s2cid=21199094 }}

In the 2010s, Geoff Cumming published a [https://thenewstatistics.com/itns/other-books/introduction-to-the-new-statistics/ textbook] dedicated to estimation statistics, along with software in Excel designed to teach effect-size thinking, primarily to psychologists.{{cite web|last=Cumming|first=Geoff|title=ESCI (Exploratory Software for Confidence Intervals)|url=http://www.latrobe.edu.au/psy/research/projects/esci|access-date=2013-05-12|archive-date=2013-12-29|archive-url=https://web.archive.org/web/20131229210924/http://www.latrobe.edu.au/psy/research/projects/esci|url-status=dead}} Also in the 2010s, estimation methods were increasingly adopted in neuroscience.{{cite journal |last1=Yildizoglu |first1=Tugce |last2=Weislogel |first2=Jan-Marek |last3=Mohammad |first3=Farhan |last4=Chan |first4=Edwin S.-Y. |last5=Assam |first5=Pryseley N. |last6=Claridge-Chang |first6=Adam |title=Estimating Information Processing in a Memory System: The Utility of Meta-analytic Methods for Genetics |journal=PLOS Genetics |date=2015 |volume=11 |issue=12 |pages=e1005718 |doi=10.1371/journal.pgen.1005718 |pmid=26647168 |pmc=4672901 |doi-access=free }}{{cite journal|last=Hentschke|first=Harald|author2=Maik C. Stüttgen|title=Computation of measures of effect size for neuroscience data sets|journal=European Journal of Neuroscience|date=2011|volume=34|issue=12|pages=1887–1894|doi=10.1111/j.1460-9568.2011.07902.x|pmid=22082031|s2cid=12505606}}

In 2013, the Publication Manual of the American Psychological Association recommended the use of estimation in addition to hypothesis testing.{{cite web|title=Publication Manual of the American Psychological Association, Sixth Edition|url=http://www.apastyle.org/manual/index.aspx|access-date=|archive-date=2013-03-05|archive-url=https://web.archive.org/web/20130305222323/http://www.apastyle.org/manual/index.aspx|url-status=dead}} Also in 2013, the Uniform Requirements for Manuscripts Submitted to Biomedical Journals document made a similar recommendation: "Avoid relying solely on statistical hypothesis testing, such as P values, which fail to convey important information about effect size."{{cite web|title=Uniform Requirements for Manuscripts Submitted to Biomedical Journals|url=http://www.icmje.org/manuscript_1prepare.html|access-date= |url-status=dead|archive-url=https://web.archive.org/web/20130515225111/http://www.icmje.org/manuscript_1prepare.html|archive-date=15 May 2013}}

In 2019, over 800 scientists signed an open comment calling for the entire concept of statistical significance to be abandoned. Amrhein, Valentin; Greenland, Sander; McShane, Blake (2019). [https://www.nature.com/articles/d41586-019-00857-9 "Scientists rise up against statistical significance"], Nature, 567, 305–307.

In 2019, the Society for Neuroscience journal eNeuro instituted a policy recommending the use of estimation graphics as the preferred method for data presentation.{{cite journal |last1=Bernard |first1=Christophe |title=Changing the Way We Report, Interpret, and Discuss Our Results to Rebuild Trust in Our Research |journal=eNeuro |date=2019 |volume=6 |issue=4 |doi=10.1523/ENEURO.0259-19.2019 |pmid=31453315 |pmc=6709206 }} And in 2022, the International Society of Physiotherapy Journal Editors recommended the use of estimation methods instead of null hypothesis statistical tests. Elkins, Mark; et al. (2022). [https://www.sciencedirect.com/science/article/pii/S1836955321001284 "Statistical inference through estimation: recommendations from the International Society of Physiotherapy Journal Editors"], Journal of Physiotherapy, 68(1), 1–4.

Despite the widespread adoption of meta-analysis for clinical research, and recommendations by several major publishing institutions, the estimation framework is not routinely used in primary biomedical research.{{cite journal |last1=Halsey |first1=Lewis G. |title=The reign of the p -value is over: what alternative analyses could we employ to fill the power vacuum? |journal=Biology Letters |date=2019 |volume=15 |issue=5 |pages=20190174 |doi=10.1098/rsbl.2019.0174 |pmid=31113309 |pmc=6548726 }}

Methodology

Many significance tests have an estimation counterpart;{{Cite book|title=Introduction to the New Statistics: Estimation, Open Science, and Beyond|last1=Cumming|first1=Geoff|last2=Calin-Jageman|first2=Robert|publisher=Routledge|year=2016|isbn=978-1138825529}}{{pn|date=May 2022}} in almost every case, the test result (or its p-value) can simply be replaced with the effect size and a precision estimate. For example, instead of using Student's t-test, the analyst can compare two independent groups by calculating the mean difference and its 95% confidence interval. Corresponding methods can be used for a paired t-test and for multiple comparisons. Similarly, for a regression analysis, an analyst would report the coefficient of determination (R²) and the model equation instead of the model's p-value.
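The substitution described above can be sketched in a few lines of Python. This is an illustrative example only: the function name and the Welch-style interval it computes are choices made for this sketch, not a standard API.

```python
import numpy as np
from scipy import stats

def mean_diff_ci(a, b, confidence=0.95):
    """Mean difference between two independent groups, with a
    Welch-style t confidence interval (illustrative sketch)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a.mean() - b.mean()
    # Per-group squared standard errors of the mean
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    se = np.sqrt(va + vb)
    # Welch–Satterthwaite approximation to the degrees of freedom
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    t_crit = stats.t.ppf(0.5 + confidence / 2, df)
    return diff, (diff - t_crit * se, diff + t_crit * se)
```

Reporting the point estimate together with the interval conveys both the magnitude of the effect and the precision with which it was measured, whereas a bare t-test p-value conveys neither.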

However, proponents of estimation statistics warn against reporting only a few numbers. Rather, it is advised to analyze and present data using data visualization. Examples of appropriate visualizations include the scatter plot for regression, and Gardner–Altman plots for two independent groups.{{cite journal |last1=Gardner |first1=M J |last2=Altman |first2=D G |title=Confidence intervals rather than P values: estimation rather than hypothesis testing. |journal=BMJ |date=1986 |volume=292 |issue=6522 |pages=746–750 |doi=10.1136/bmj.292.6522.746 |pmid=3082422 |pmc=1339793 }} While historical data-group plots (bar charts, box plots, and violin plots) do not display the comparison, estimation plots add a second axis to explicitly visualize the effect size.{{cite journal |last1=Ho |first1=Joses |last2=Tumkaya |first2=Tayfun |last3=Aryal |first3=Sameer |last4=Choi |first4=Hyungwon |last5=Claridge-Chang |first5=Adam |title=Moving beyond P values: Everyday data analysis with estimation plots |date=2018 |doi=10.1101/377978 |doi-access=free }}

File:20171231-wiki-figure-png.png

= Gardner–Altman plot =

The Gardner–Altman mean difference plot, first described by Martin Gardner and Doug Altman in 1986, is a statistical graph designed to display data from two independent groups. There is also a version suitable for [http://www.estimationstats.com/#/analyze/paired paired data]. The key instructions to make this chart are as follows: (1) display all observed values for both groups side-by-side; (2) place a second axis on the right, shifted to show the mean difference scale; and (3) plot the mean difference with its confidence interval as a marker with error bars. Gardner–Altman plots can be generated with [https://github.com/ACCLAB/DABEST-python DABEST-Python] or [https://github.com/ACCLAB/dabestr dabestr]; alternatively, the analyst can use GUI software like the [http://www.estimationstats.com/#/ Estimation Stats] app.
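The quantity drawn on the plot's second axis, the mean difference with its confidence interval, is typically obtained by bootstrap resampling. The following Python sketch is illustrative (the function name and the simple percentile-bootstrap choice are assumptions of this example, not the exact algorithm of any of the packages above):

```python
import numpy as np

def bootstrap_mean_diff(a, b, n_boot=5000, confidence=0.95, seed=0):
    """Percentile-bootstrap CI for the mean difference b - a, the
    quantity shown on a Gardner–Altman plot's difference axis."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = b.mean() - a.mean()
    boots = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each group independently, with replacement
        boots[i] = (rng.choice(b, size=len(b), replace=True).mean()
                    - rng.choice(a, size=len(a), replace=True).mean())
    alpha = (1 - confidence) / 2
    lo, hi = np.quantile(boots, [alpha, 1 - alpha])
    return diff, (lo, hi)
```

Plotting the raw observations of both groups side-by-side on the left axis, and the returned difference with its interval on a shifted right axis, yields the Gardner–Altman layout.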

File:Cumming_Estimation_Plot.png

= Cumming plot =

For multiple groups, [http://www.latrobe.edu.au/psychology/staff/profile?uname=GDCumming Geoff Cumming] introduced the use of a secondary panel to plot two or more mean differences and their confidence intervals, placed below the observed values panel; this arrangement enables [http://www.estimationstats.com/#/analyze/multi easy comparison] of mean differences ('deltas') over several data groupings. Cumming plots can be generated with the [https://thenewstatistics.com/itns/esci/ ESCI package], [https://github.com/ACCLAB/DABEST-python DABEST], or the [http://www.estimationstats.com/#/ Estimation Stats app].

= Other methodologies =

In addition to the mean difference, there are numerous other effect size types, each with its own advantages. Major types include effect sizes in the Cohen's d class of standardized metrics, and the coefficient of determination (R²) for regression analysis. For non-normal distributions, there are a number of more [https://garstats.wordpress.com/2016/05/02/robust-effect-sizes-for-2-independent-groups/ robust effect sizes], including Cliff's delta and the Kolmogorov–Smirnov statistic.
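Two of these effect sizes can be computed directly from their definitions. The following Python sketch is illustrative (the function names are this example's own):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: the mean difference standardized by the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

def cliffs_delta(a, b):
    """Cliff's delta: P(a > b) - P(a < b) over all cross-group pairs."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    greater = (a[:, None] > b[None, :]).sum()
    less = (a[:, None] < b[None, :]).sum()
    return (greater - less) / (len(a) * len(b))
```

Cliff's delta depends only on the ordering of the observations, not their values, which is what makes it robust to outliers and to departures from normality.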

Flaws in hypothesis testing

{{main|Statistical hypothesis testing#Criticism}}

{{see also|p-value#Criticism|Misuse of p-values}}

In hypothesis testing, the primary objective of statistical calculations is to obtain a p-value: the probability of seeing the obtained result, or a more extreme one, assuming the null hypothesis is true. If the p-value is low (usually < 0.05), the statistical practitioner is then encouraged to reject the null hypothesis. Proponents of estimation reject the validity of hypothesis testing{{cite book|last=Cumming|first=Geoff|title=Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis|publisher=Routledge|year=2011|isbn=978-0415879675|location=New York|pages=}}{{pn|date=May 2022}}{{cite journal |last1=Cohen |first1=Jacob |title=Things I have learned (so far) |journal=American Psychologist |date=1990 |volume=45 |issue=12 |pages=1304–1312 |doi=10.1037/0003-066x.45.12.1304 |url=http://revistas.um.es/analesps/article/view/28521 }} for the following reasons, among others:

  • P-values are easily and commonly misinterpreted. For example, the p-value is often mistakenly thought of as 'the probability that the null hypothesis is true.'
  • The null hypothesis is never exactly true for any set of observations: there is always some effect, even if it is minuscule.{{cite journal|last=Cohen|first=Jacob|title=The earth is round (p < .05).|journal=American Psychologist|year=1994|volume=49|issue=12|pages=997–1003|doi=10.1037/0003-066X.49.12.997}}
  • Hypothesis testing produces dichotomous yes-no answers, while discarding important information about magnitude.{{cite book|last=Ellis|first=Paul|title=The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results|year=2010|publisher=Cambridge University Press|location=Cambridge}}{{pn|date=May 2022}}
  • Any particular p-value arises through the interaction of the effect size, the sample size (all things being equal, a larger sample size produces a smaller p-value) and sampling error.{{cite book|title=The Significance Test Controversy: A Reader|year=2006|publisher=Aldine Transaction|isbn=978-0202308791|editor=Denton E. Morrison, Ramon E. Henkel}}{{pn|date=May 2022}}
  • At low power, simulation reveals that sampling error makes p-values extremely volatile.{{cite web|last=Cumming|first=Geoff|title=Dance of the p values|website=YouTube |date=3 March 2009 |url=https://www.youtube.com/watch?v=ez4DgdurRPg}}
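The volatility in the last point is easy to demonstrate by simulation, in the spirit of Cumming's "dance of the p values". In this illustrative sketch, the parameters are chosen to give a low-powered design (roughly 20% power at d = 0.5 with n = 10 per group):

```python
import numpy as np
from scipy import stats

def p_value_spread(effect=0.5, n=10, n_sims=1000, seed=1):
    """Simulate many identical low-powered experiments and collect
    the two-sample t-test p-value from each one."""
    rng = np.random.default_rng(seed)
    pvals = np.empty(n_sims)
    for i in range(n_sims):
        a = rng.normal(0.0, 1.0, n)   # control group
        b = rng.normal(effect, 1.0, n)  # treated group, true effect = 0.5 SD
        pvals[i] = stats.ttest_ind(a, b).pvalue
    return pvals
```

Across identical replications of the same underpowered experiment, the p-value swings from clearly "significant" to clearly "non-significant", even though the underlying effect never changes.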

Benefits of estimation statistics

=Quantification=

While p-values focus on yes/no answers, estimation directs the analyst's attention to quantification.

=Advantages of confidence intervals=

Confidence intervals behave in a predictable way. By definition, a 95% confidence interval has a 95% chance of covering the underlying population mean (μ). This property does not change with increasing sample size; what changes is that the interval becomes narrower. In addition, 95% confidence intervals are also 83% prediction intervals: one (pre-experimental) confidence interval has an 83% chance of covering any future experiment's mean. As such, knowing a single experiment's 95% confidence interval gives the analyst a reasonable range for the population mean. Nevertheless, confidence distributions and posterior distributions provide far more information than a single point estimate or interval,{{cite journal |last1=Xie |first1=Min-ge |last2=Singh |first2=Kesar |title=Confidence Distribution, the Frequentist Distribution Estimator of a Parameter: A Review |journal=International Statistical Review |date=2013 |volume=81 |issue=1 |pages=3–39 |doi=10.1111/insr.12000 |jstor=43298799 |s2cid=3242459 }} which can exacerbate dichotomous thinking according to whether the interval does or does not cover a "null" value of interest (i.e. the inductive behavior of Neyman as opposed to that of Fisher{{cite journal |last1=Halpin |first1=Peter F. |last2=Stam |first2=Henderikus J. |title=Inductive Inference or Inductive Behavior: Fisher and Neyman: Pearson Approaches to Statistical Testing in Psychological Research (1940-1960) |journal=The American Journal of Psychology |date=2006 |volume=119 |issue=4 |pages=625–653 |doi=10.2307/20445367 |jstor=20445367 |pmid=17286092 }}).
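The coverage property can be checked by simulation. In this Python sketch (the parameter values are arbitrary, chosen only for illustration), roughly 95% of the t-intervals computed from repeated samples cover the true mean:

```python
import numpy as np
from scipy import stats

def ci_coverage(mu=10.0, sigma=2.0, n=20, n_sims=2000, seed=42):
    """Fraction of 95% t-based confidence intervals, computed from
    repeated samples, that cover the true population mean."""
    rng = np.random.default_rng(seed)
    t_crit = stats.t.ppf(0.975, n - 1)
    covered = 0
    for _ in range(n_sims):
        x = rng.normal(mu, sigma, n)
        half = t_crit * x.std(ddof=1) / np.sqrt(n)
        covered += (x.mean() - half <= mu <= x.mean() + half)
    return covered / n_sims
```

Rerunning with a larger n shrinks the intervals but leaves the coverage fraction near 0.95, matching the behavior described above.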

=Evidence-based statistics=

Psychological studies of the perception of statistics reveal that reporting interval estimates leaves a more accurate perception of the data than reporting p-values.{{cite journal |last1=Beyth-Marom |first1=Ruth |last2=Fidler |first2=Fiona Margaret |last3=Cumming |first3=Geoffrey David |title=Statistical cognition: Towards evidence-based practice in statistics and statistics education |journal=Statistics Education Research Journal |year=2008 |volume=7 |issue=2 |pages=20–39 |doi=10.52041/serj.v7i2.468 |citeseerx=10.1.1.154.7648 |s2cid=18902043 }}

=Precision planning=

The precision of an estimate is formally defined as 1/variance, and, like power, increases (improves) with increasing sample size. Like power, a high level of precision is expensive; research grant applications would ideally include precision/cost analyses. Proponents of estimation believe precision planning should replace power analysis, since statistical power itself is conceptually linked to significance testing. Precision planning can be done with the [https://www.esci.thenewstatistics.com/esci-precision.html#tab-1 ESCI web app].
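As an illustration of planning for precision rather than power, one can solve for the smallest sample size whose confidence interval is expected to be no wider than a target. This sketch uses a simple z-based interval and assumes the population standard deviation is known; both simplifications are assumptions of this example:

```python
import math

def n_for_margin(sigma, target_half_width, confidence=0.95):
    """Smallest n so that a z-based CI for the mean has a half-width
    no larger than the target (precision-for-planning sketch)."""
    # Standard normal critical values for common confidence levels
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
    return math.ceil((z * sigma / target_half_width) ** 2)
```

Halving the target half-width roughly quadruples the required sample size, which makes the precision/cost trade-off explicit in a way a power calculation does not.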

See also

References

{{Reflist|30em}}

{{Statistics}}

Category:Estimation theory

Category:Effect size