family-wise error rate#Controlling procedures

{{Short description|Probability of making type I errors when performing multiple hypothesis tests}}

{{more citations needed|date=June 2016}}

In statistics, family-wise error rate (FWER) is the probability of making one or more false discoveries, or type I errors, when performing multiple hypothesis tests.

==Familywise and experimentwise error rates==

In 1953, John Tukey developed the concept of a familywise error rate as the probability of making a Type I error among a specified group, or "family," of tests.{{cite book |last1=Tukey |first1=J. W. |title=The problem of multiple comparisons |date=1953}} Building on Tukey (1953), Ryan (1959) proposed the related concept of an experimentwise error rate: the probability of making a Type I error in a given experiment.{{cite journal | last=Ryan | first=Thomas A. | title=Multiple comparison in psychological research. | journal=Psychological Bulletin | publisher=American Psychological Association (APA) | volume=56 | issue=1 | year=1959 | issn=1939-1455 | doi=10.1037/h0042478 | pages=26–47| pmid=13623958 }} Hence, an experimentwise error rate is a familywise error rate in which the family includes all the tests conducted within an experiment.

As Ryan (1959, Footnote 3) explained, an experiment may contain two or more families of multiple comparisons, each of which relates to a particular statistical inference and each of which has its own separate familywise error rate. Hence, familywise error rates are usually based on theoretically informative collections of multiple comparisons. In contrast, an experimentwise error rate may be based on a collection of simultaneous comparisons that refer to a diverse range of separate inferences. Some have argued that it may not be useful to control the experimentwise error rate in such cases. Indeed, Tukey suggested that familywise control was preferable in such cases (Tukey, 1956, personal communication, in Ryan, 1962, p. 302).{{cite journal |last1=Ryan |first1=T. A. |title=The experiment as the unit for computing rates of error |journal=Psychological Bulletin |date=1962 |volume=59 |issue=4 |pages=301–305 |doi=10.1037/h0040562|pmid=14495585 }}

==Background==

Within the statistical framework, there are several definitions for the term "family":

  • Hochberg & Tamhane (1987) defined "family" as "any collection of inferences for which it is meaningful to take into account some combined measure of error".{{Cite book |last1=Hochberg |first1=Y. |last2=Tamhane |first2=A. C. |year=1987 |title=Multiple Comparison Procedures |url=https://archive.org/details/multiplecomparis00hoch_295 |url-access=limited |location=New York |publisher=Wiley |page=[https://archive.org/details/multiplecomparis00hoch_295/page/n25 5] |isbn=978-0-471-82222-6 }}
  • According to Cox (1982), a set of inferences should be regarded as a family:{{citation needed|date=June 2016}}
  1. To take into account the selection effect due to data dredging
  2. To ensure simultaneous correctness of a set of inferences so as to guarantee a correct overall decision

To summarize, a family is perhaps best defined by the potential selective inference being faced: a family is the smallest set of items of inference in an analysis that are interchangeable in terms of their meaning for the research goal, and from which results may be selected for action, presentation or highlighting (Yoav Benjamini).{{citation needed|date=June 2016}}

===Classification of multiple hypothesis tests===

{{Main|Classification of multiple hypothesis tests}}

{{Classification of multiple hypothesis tests}}

==Definition==

Writing V for the number of type I errors (i.e. the number of true null hypotheses that are rejected) among the m hypotheses tested, the FWER is the probability of making at least one type I error in the family,

: \mathrm{FWER} = \Pr(V \ge 1),

or equivalently,

: \mathrm{FWER} = 1 -\Pr(V = 0).

Thus, by assuring \mathrm{FWER} \le \alpha, the probability of making one or more type I errors in the family is controlled at level \alpha.

A procedure controls the FWER in the weak sense if the FWER control at level \alpha is guaranteed only when all null hypotheses are true (i.e. when m_0 = m, meaning the "global null hypothesis" is true).{{cite book |last1=Dmitrienko |first1=Alex |last2=Tamhane |first2=Ajit |last3=Bretz |first3=Frank |title=Multiple Testing Problems in Pharmaceutical Statistics |date=2009 |publisher=CRC Press |isbn=9781584889847 |page=37 |edition=1}}

A procedure controls the FWER in the strong sense if the FWER control at level \alpha is guaranteed for any configuration of true and non-true null hypotheses (whether the global null hypothesis is true or not).
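When all m null hypotheses are true and the tests are independent, testing each at level \alpha gives \mathrm{FWER} = 1-(1-\alpha)^m; for m = 10 and \alpha = 0.05 this is about 0.40. A minimal Monte Carlo sketch (pure Python; the function name and parameters are illustrative) confirming this:

```python
import random

def simulate_fwer(m=10, alpha=0.05, n_sim=20000, seed=0):
    """Estimate the FWER when m independent true null hypotheses are each
    tested at level alpha with no correction.  Under a true null the
    p-value is Uniform(0, 1), so a type I error occurs iff p <= alpha."""
    rng = random.Random(seed)
    families_with_error = 0
    for _ in range(n_sim):
        # V >= 1: at least one of the m p-values falls below alpha
        if any(rng.random() <= alpha for _ in range(m)):
            families_with_error += 1
    return families_with_error / n_sim

print(simulate_fwer())  # close to 1 - 0.95**10 = 0.4013...
```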

==Controlling procedures==

{{broader|Multiple testing correction}}

{{see also|False coverage rate#Controlling procedures|False discovery rate#Controlling procedures}}

{{further|Post hoc analysis#Tests{{!}}List of post hoc tests}}

Several classical solutions ensure strong FWER control at level \alpha; some newer solutions also exist.

===The Bonferroni procedure===

{{main|Bonferroni correction}}

  • Denote by p_{i} the p-value for testing H_{i}.
  • Reject H_{i} if p_{i} \leq \frac{\alpha}{m}.
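A minimal sketch of this rule, assuming the input is a plain list of p-values (the values shown are illustrative):

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Return a list of booleans: reject H_i iff p_i <= alpha / m."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

# Illustrative p-values; with m = 4 the per-test threshold is 0.05/4 = 0.0125
print(bonferroni_reject([0.010, 0.013, 0.074, 0.600]))  # [True, False, False, False]
```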

===The Šidák procedure===

{{main|Šidák correction}}

  • The Šidák procedure tests each hypothesis at level \alpha_{SID} = 1-(1-\alpha)^\frac{1}{m}.
  • This procedure is more powerful than Bonferroni but the gain is small.
  • This procedure can fail to control the FWER when the tests are negatively dependent.
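A quick numerical comparison of the two per-test levels (function name illustrative) shows how small the gain over Bonferroni is:

```python
def sidak_level(alpha, m):
    """Per-test level giving exact FWER alpha for m independent tests."""
    return 1 - (1 - alpha) ** (1 / m)

# For m = 10, alpha = 0.05: Sidak 0.005116 vs Bonferroni 0.005000
m, alpha = 10, 0.05
print(f"Sidak: {sidak_level(alpha, m):.6f}  Bonferroni: {alpha / m:.6f}")
```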

===Tukey's procedure===

{{main|Tukey's range test}}

  • Tukey's procedure is only applicable for pairwise comparisons.
  • It assumes independence of the observations being tested, as well as equal variation across observations (homoscedasticity).
  • The procedure calculates for each pair the studentized range statistic: \frac {Y_{A}-Y_{B}} {SE} where Y_{A} is the larger of the two means being compared, Y_{B} is the smaller, and SE is the standard error of the data in question.{{citation needed|date=June 2016}}
  • Tukey's test is essentially a Student's t-test, except that it corrects for family-wise error-rate.{{citation needed|date=June 2016}}
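The statistic above can be sketched as follows for equally sized groups, taking SE = \sqrt{MSE/n} with MSE the pooled within-group variance (a common convention; the function name and data are illustrative, and the critical values of the studentized range distribution, which come from tables, are not computed here):

```python
import math

def pairwise_studentized_range(groups):
    """Studentized range statistic (Y_A - Y_B) / SE for each pair of
    group means, with a common group size n and SE = sqrt(MSE / n)."""
    n = len(groups[0])
    means = [sum(g) / n for g in groups]
    # Pooled within-group (error) mean square
    mse = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means)) \
          / (len(groups) * (n - 1))
    se = math.sqrt(mse / n)
    stats = {}
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            hi, lo = max(means[i], means[j]), min(means[i], means[j])
            stats[(i, j)] = (hi - lo) / se
    return stats

q = pairwise_studentized_range([[1, 2, 3], [2, 3, 4]])
print(round(q[(0, 1)], 3))  # 1.732
```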

===Holm's step-down procedure (1979)===

{{main|Holm–Bonferroni method}}

  • Start by ordering the p-values (from lowest to highest) P_{(1)} \ldots P_{(m)} and let the associated hypotheses be H_{(1)} \ldots H_{(m)}
  • Let k be the minimal index such that P_{(k)} > \frac{\alpha}{m+1-k}
  • Reject the null hypotheses H_{(1)} \ldots H_{(k-1)}. If k = 1 then none of the hypotheses are rejected.{{citation needed|date=June 2016}}
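A minimal sketch of the step-down rule, again taking the input as a plain list of p-values (illustrative values):

```python
def holm_reject(p_values, alpha=0.05):
    """Holm step-down: returns booleans in the original order of p_values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for step, i in enumerate(order):          # step = 0, 1, ..., m-1
        if p_values[i] > alpha / (m - step):  # compare P_(k) with alpha/(m+1-k)
            break                             # stop at the first non-rejection
        reject[i] = True
    return reject

# Sorted p-values: 0.010 <= 0.05/4, 0.013 <= 0.05/3, but 0.040 > 0.05/2 -> stop
print(holm_reject([0.040, 0.010, 0.013, 0.600]))  # [False, True, True, False]
```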

This procedure is uniformly more powerful than the Bonferroni procedure.{{cite journal | last1 = Aickin | first1= M | last2 = Gensler | first2 = H | year = 1996 | title = Adjusting for multiple testing when reporting research results: the Bonferroni vs Holm methods | journal = American Journal of Public Health | volume = 86 | issue = 5 | pages = 726–728 | pmc=1380484 | pmid=8629727|doi=10.2105/ajph.86.5.726 }}

This procedure controls the family-wise error rate for all m hypotheses at level α in the strong sense because it is a closed testing procedure: each intersection hypothesis is tested with the simple Bonferroni test.{{citation needed|date=June 2016}}

===Hochberg's step-up procedure===

Yosef Hochberg's step-up procedure (1988) is performed using the following steps:{{cite journal | last1 = Hochberg | first1= Yosef | year = 1988 | title = A Sharper Bonferroni Procedure for Multiple Tests of Significance | journal = Biometrika | volume = 75 | issue = 4 | pages = 800–802 | url = http://www-stat.wharton.upenn.edu/~steele/Courses/956/Resource/MultipleComparision/Hochberg88.pdf | doi=10.1093/biomet/75.4.800}}

  • Start by ordering the p-values (from lowest to highest) P_{(1)} \ldots P_{(m)} and let the associated hypotheses be H_{(1)} \ldots H_{(m)}
  • For a given \alpha, let R be the largest k such that P_{(k)} \leq \frac{\alpha}{m+1-k}
  • Reject the null hypotheses H_{(1)} \ldots H_{(R)}
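A sketch of the step-up rule (illustrative values, chosen to show the extra power over Holm: with p-values 0.041 and 0.040 at \alpha = 0.05, Holm rejects neither hypothesis since 0.040 > 0.05/2, while Hochberg rejects both since 0.041 \leq 0.05):

```python
def hochberg_reject(p_values, alpha=0.05):
    """Hochberg step-up: returns booleans in the original order of p_values."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest k (1-based) with P_(k) <= alpha / (m + 1 - k)
    largest_k = 0
    for k in range(1, m + 1):
        if p_values[order[k - 1]] <= alpha / (m + 1 - k):
            largest_k = k
    reject = [False] * m
    for i in order[:largest_k]:
        reject[i] = True
    return reject

print(hochberg_reject([0.041, 0.040]))  # [True, True]
```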

Hochberg's procedure is more powerful than Holm's. Nevertheless, while Holm’s is a closed testing procedure (and thus, like Bonferroni, has no restriction on the joint distribution of the test statistics), Hochberg’s is based on the Simes test, so it holds only under non-negative dependence.{{citation needed|date=June 2016}} The Simes test is derived under assumption of independent tests;{{cite journal |last1=Simes |first1=R. J. |year=1986 |title=An improved Bonferroni procedure for multiple tests of significance |journal=Biometrika |volume=73 |issue=3 |pages=751–754 |doi=10.1093/biomet/73.3.751}} it is conservative for tests that are positively dependent in a certain sense{{cite journal |last1=Sarkar |first1=Sanat K. |last2=Chang |first2=Chung-Kuei |year=1997 |title=The Simes method for multiple hypothesis testing with positively dependent test statistics |journal=Journal of the American Statistical Association |volume=92 |issue=440 |pages=1601–1608 |doi=10.1080/01621459.1997.10473682}}{{cite journal |last1=Sarkar |first1=Sanat K. |year=1998 |title=Some probability inequalities for ordered MTP2 random variables: a proof of the Simes conjecture| journal=The Annals of Statistics |volume=26 |issue=2 |pages=494–504|doi=10.1214/aos/1028144846 }} and is anti-conservative for certain cases of negative dependence.{{cite journal |last1=Samuel-Cahn |first1=Ester |year=1996 |title=Is the Simes improved Bonferroni procedure conservative? |journal=Biometrika |volume=83 |issue=4 |pages=928–933 |doi=10.1093/biomet/83.4.928}}{{cite journal |last1=Block |first1=Henry W. |last2=Savits |first2=Thomas H. 
|last3=Wang |first3=Jie |year=2008 |title=Negative dependence and the Simes inequality |journal=Journal of Statistical Planning and Inference |volume=138 |issue=12 |pages=4107–4110 |doi=10.1016/j.jspi.2008.03.026}} However, it has been suggested that a modified version of the Hochberg procedure remains valid under general negative dependence.{{cite journal| last1=Gou |first1=Jiangtao |last2=Tamhane |first2=Ajit C. |year=2018 |title=Hochberg procedure under negative dependence |journal=Statistica Sinica |volume=28 |pages=339–362 |url=https://www3.stat.sinica.edu.tw/statistica/oldpdf/A28n116.pdf |doi=10.5705/ss.202016.0306|doi-broken-date=2 December 2024 }}

===Dunnett's correction===

{{main|Dunnett's test}}

Charles Dunnett (1955, 1966) described an alternative alpha error adjustment when k groups are compared to the same control group. Now known as Dunnett's test, this method is less conservative than the Bonferroni adjustment.{{citation needed|date=June 2016}}

===Scheffé's method===

{{main|Scheffé's method}}

{{empty section|date=February 2013}}

===Resampling procedures===

The procedures of Bonferroni and Holm control the FWER under any dependence structure of the p-values (or, equivalently, of the individual test statistics). Essentially, this is achieved by accommodating a "worst-case" dependence structure (which is close to independence for most practical purposes). But such an approach is conservative if the dependence is actually positive. To give an extreme example: under perfect positive dependence, there is effectively only one test, and thus the FWER is uninflated.

Accounting for the dependence structure of the p-values (or of the individual test statistics) produces more powerful procedures. This can be achieved by applying resampling methods, such as bootstrap and permutation methods. The procedure of Westfall and Young (1993) requires a certain condition that does not always hold in practice (namely, subset pivotality).{{Cite book |last1=Westfall |first1=P. H. |last2=Young |first2=S. S. |year=1993 |title=Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment |location=New York |publisher=John Wiley |isbn=978-0-471-55761-6 }} The procedures of Romano and Wolf (2005a,b) dispense with this condition and are thus more generally valid.{{cite journal | last1 = Romano | first1= J.P. | last2 = Wolf | first2 = M. | year = 2005a | title = Exact and approximate stepdown methods for multiple hypothesis testing | journal = Journal of the American Statistical Association | volume = 100 | issue= 469 | pages = 94–108 | doi=10.1198/016214504000000539| hdl= 10230/576 | s2cid= 219594470 | hdl-access = free }}{{cite journal | last1 = Romano | first1= J.P. | last2 = Wolf | first2 = M. | year = 2005b | title = Stepwise multiple testing as formalized data snooping | journal = Econometrica | volume = 73 | issue= 4 | pages = 1237–1282 | doi=10.1111/j.1468-0262.2005.00615.x| citeseerx = 10.1.1.198.2473 }}
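The single-step "max-T" idea behind such resampling procedures can be sketched as follows (a simplified illustration in the spirit of Westfall and Young's approach, not their exact algorithm: the data are made up, and the absolute difference in group means stands in for a t-statistic):

```python
import random

def maxt_adjusted_pvalues(group_a, group_b, n_perm=2000, seed=0):
    """Single-step max-T permutation adjustment (illustrative sketch).

    group_a[j] and group_b[j] hold the two groups' observations for
    variable j.  The adjusted p-value for variable j is the fraction of
    permutations whose *maximum* statistic over all variables reaches the
    observed statistic for j; taking the maximum jointly over variables
    is what captures the dependence between the tests."""
    rng = random.Random(seed)
    m = len(group_a)
    n_a, n_b = len(group_a[0]), len(group_b[0])

    def stats(a_lists, b_lists):
        return [abs(sum(a) / len(a) - sum(b) / len(b))
                for a, b in zip(a_lists, b_lists)]

    observed = stats(group_a, group_b)
    exceed = [0] * m
    for _ in range(n_perm):
        # One shared relabelling of the n_a + n_b units, applied to every
        # variable, preserves the dependence structure between the tests.
        idx = list(range(n_a + n_b))
        rng.shuffle(idx)
        perm_a, perm_b = [], []
        for j in range(m):
            pooled = group_a[j] + group_b[j]
            shuffled = [pooled[i] for i in idx]
            perm_a.append(shuffled[:n_a])
            perm_b.append(shuffled[n_a:])
        max_stat = max(stats(perm_a, perm_b))
        for j in range(m):
            if max_stat >= observed[j]:
                exceed[j] += 1
    return [count / n_perm for count in exceed]

# Made-up data: variable 0 has a clear group difference, variable 1 is noise.
a = [[5.1, 4.9, 5.3, 5.2, 5.0], [0.1, -0.2, 0.0, 0.2, -0.1]]
b = [[3.0, 3.2, 2.9, 3.1, 3.0], [0.0, 0.1, -0.1, 0.2, -0.2]]
adj = maxt_adjusted_pvalues(a, b)
print(adj)  # small adjusted p-value for variable 0, near 1 for variable 1
```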

===Harmonic mean ''p''-value procedure===

{{Main|Harmonic mean p-value}}

The harmonic mean p-value (HMP) procedure{{cite journal |vauthors=Good, I J |date=1958 |title=Significance tests in parallel and in series |journal=Journal of the American Statistical Association |volume=53 |issue=284 |pages=799–813|jstor=2281953 |doi=10.1080/01621459.1958.10501480 }}{{cite journal |vauthors=Wilson, D J |date=2019 |title=The harmonic mean p-value for combining dependent tests|journal=Proceedings of the National Academy of Sciences USA |volume=116 |issue=4 |pages=1195–1200|doi=10.1073/pnas.1814092116 |pmid=30610179 |pmc=6347718 |doi-access=free |bibcode=2019PNAS..116.1195W }} provides a multilevel test that improves on the power of Bonferroni correction by assessing the significance of groups of hypotheses while controlling the strong-sense family-wise error rate. The significance of any subset \mathcal{R} of the m tests is assessed by calculating the HMP for the subset,

\overset{\circ}{p}_\mathcal{R} = \frac{\sum_{i\in\mathcal{R}} w_{i}}{\sum_{i\in\mathcal{R}} w_{i}/p_{i}},

where w_1,\dots,w_m are weights that sum to one (i.e. \sum_{i=1}^m w_i=1). An approximate procedure that controls the strong-sense family-wise error rate at level approximately \alpha rejects the null hypothesis that none of the p-values in subset \mathcal{R} are significant when \overset{\circ}{p}_\mathcal{R}\leq\alpha\,w_\mathcal{R}{{cite journal |last1=Sciences |first1=National Academy of |title=Correction for Wilson, The harmonic mean p-value for combining dependent tests |journal=Proceedings of the National Academy of Sciences |date=2019-10-22 |volume=116 |issue=43 |pages=21948 |doi=10.1073/pnas.1914128116|pmid=31591234 |pmc=6815184 |doi-access=free |bibcode=2019PNAS..11621948. }} (where w_\mathcal{R}=\sum_{i\in\mathcal{R}}w_i). This approximation is reasonable for small \alpha (e.g. \alpha<0.05) and becomes arbitrarily good as \alpha approaches zero. An asymptotically exact test is also available (see main article).
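A minimal sketch of this computation (function names and values are illustrative; the family here has m = 4 equally weighted tests, so each w_i = 1/4, and the subset \mathcal{R} contains the first two tests):

```python
def harmonic_mean_p(subset_p, subset_w):
    """Weighted HMP of a subset; the weights are the subset's shares of
    family weights that sum to one over the *whole* family of m tests."""
    return sum(subset_w) / sum(w / p for w, p in zip(subset_w, subset_p))

def hmp_reject(subset_p, subset_w, alpha=0.05):
    """Approximate level-alpha rule: reject the null that no p-value in
    the subset is significant when HMP <= alpha * w_R."""
    w_r = sum(subset_w)
    return harmonic_mean_p(subset_p, subset_w) <= alpha * w_r

p_subset, w_subset = [0.01, 0.02], [0.25, 0.25]
print(round(harmonic_mean_p(p_subset, w_subset), 4))  # 0.0133
print(hmp_reject(p_subset, w_subset))                 # True (0.0133 <= 0.05 * 0.5)
```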

==Alternative approaches==

{{further|False discovery rate}}

FWER control exerts a more stringent control over false discovery compared to false discovery rate (FDR) procedures. FWER control limits the probability of at least one false discovery, whereas FDR control limits (in a loose sense) the expected proportion of false discoveries. Thus, FDR procedures have greater power at the cost of increased rates of type I errors, i.e., rejecting null hypotheses that are actually true.{{cite journal |last=Shaffer |first=J. P. |year=1995 |title=Multiple hypothesis testing |journal=Annual Review of Psychology |volume=46 |pages=561–584 |doi=10.1146/annurev.ps.46.020195.003021 |hdl=10338.dmlcz/142950 |hdl-access=free }}

On the other hand, FWER control is less stringent than per-family error rate control, which limits the expected number of errors per family. Because FWER control is concerned only with at least one false discovery, unlike per-family error rate control it does not treat multiple simultaneous false discoveries as any worse than a single false discovery. The Bonferroni correction is often regarded as merely controlling the FWER, but it in fact also controls the per-family error rate.{{cite journal|last1=Frane|first1=Andrew|title=Are per-family Type I error rates relevant in social and behavioral science?|journal=Journal of Modern Applied Statistical Methods|date=2015|volume=14|issue=1|pages=12–23|doi=10.22237/jmasm/1430453040|doi-access=free}}

==References==

{{Reflist}}