Kuder–Richardson formulas
{{Use dmy dates|date=July 2022}}
In psychometrics, the Kuder–Richardson formulas, first published in 1937, are a measure of internal consistency reliability for measures with dichotomous choices. They were developed by Kuder and Richardson.
==Kuder–Richardson Formula 20 (KR-20)==
The name of this formula stems from the fact that it is the twentieth formula discussed in Kuder and Richardson's seminal paper on test reliability.<ref>Kuder, G. F., & Richardson, M. W. (1937). The theory of the estimation of test reliability. ''Psychometrika'', 2(3), 151–160.</ref>
It is a special case of Cronbach's α, computed for dichotomous scores.<ref>Cortina, J. M. (1993). What Is Coefficient Alpha? An Examination of Theory and Applications. ''Journal of Applied Psychology'', 78(1), 98–104.</ref><ref>{{cite conference |url=http://eric.ed.gov/?id=ED526237 |title=Understanding a Widely Misunderstood Statistic: Cronbach's "Alpha" |last=Ritter |first=Nicola L. |date=18 February 2010 |conference=Annual meeting of the Southwest Educational Research Association |location=New Orleans }}</ref> It is often claimed that a high KR-20 coefficient (e.g., > 0.90) indicates a homogeneous test. However, as with Cronbach's α, homogeneity (that is, unidimensionality) is an assumption of reliability coefficients, not a conclusion that can be drawn from them. It is possible, for example, to obtain a high KR-20 with a multidimensional scale, especially when the number of items is large.
Values can range from 0.00 to 1.00 (sometimes expressed as 0 to 100), with high values indicating that the examination is likely to correlate with alternate forms (a desirable characteristic). The KR-20 may be affected by the difficulty of the test, the spread in scores, and the length of the examination.
When scores are not tau-equivalent (for example, when examination items are not homogeneous but instead vary in difficulty), the KR-20 is an indication of the lower bound of internal consistency (reliability).
The formula for KR-20 for a test with <math>K</math> test items numbered <math>i = 1</math> to <math>K</math> is
:<math>\rho_{KR20} = \frac{K}{K-1}\left[1 - \frac{\sum_{i=1}^{K} p_i q_i}{\sigma^2_X}\right]</math>
where <math>p_i</math> is the proportion of correct responses to test item <math>i</math>, <math>q_i</math> is the proportion of incorrect responses to test item <math>i</math> (so that <math>p_i + q_i = 1</math>), and the variance for the denominator is
:<math>\sigma^2_X = \frac{\sum_{i=1}^{n} (X_i - \bar{X})^2}{n}</math>
where <math>n</math> is the total sample size, <math>X_i</math> is the number of items answered correctly by the <math>i</math>th respondent, and <math>\bar{X}</math> is the mean of the <math>X_i</math> values.
If it is important to use unbiased estimators, then the sum of squares should be divided by the degrees of freedom (<math>n - 1</math>) and the probabilities are multiplied by <math>\frac{n}{n-1}</math>.
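The computation can be carried out directly from a matrix of dichotomous responses. The following is a minimal Python sketch (illustrative only; the function name <code>kr20</code>, the use of NumPy, and the toy data are assumptions for the example, not part of the original formulation), implementing the displayed formula with the population (biased) variance:

<syntaxhighlight lang="python">
import numpy as np

def kr20(responses):
    """KR-20 from a 0/1 item-response matrix (rows = respondents, columns = items)."""
    X = np.asarray(responses, dtype=float)
    n, K = X.shape                                     # n respondents, K items
    p = X.mean(axis=0)                                 # proportion answering each item correctly
    q = 1.0 - p                                        # proportion answering each item incorrectly
    totals = X.sum(axis=1)                             # each respondent's total score X_i
    var_X = np.sum((totals - totals.mean()) ** 2) / n  # population (biased) variance, as in the formula above
    return (K / (K - 1)) * (1.0 - np.sum(p * q) / var_X)

# Toy example: 5 respondents answering 4 dichotomous items
scores = [[1, 1, 1, 1],
          [1, 1, 1, 0],
          [1, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 0]]
print(kr20(scores))  # roughly 0.747 for this toy data
</syntaxhighlight>

Using the unbiased variant described above (dividing by <math>n - 1</math> and scaling the item variances by <math>\tfrac{n}{n-1}</math>) leaves the coefficient unchanged only up to those consistent adjustments; the sketch keeps the population form for simplicity.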
==Kuder–Richardson Formula 21 (KR-21)==
Often discussed in tandem with KR-20 is Kuder–Richardson Formula 21 (KR-21).<ref>{{Cite web|url=http://www.real-statistics.com/reliability/kuder-richardson-formula-20/|title=Kuder and Richardson Formula 20 {{!}} Real Statistics Using Excel|language=en-US|access-date=8 March 2019}}</ref> KR-21 is a simplified version of KR-20 that can be used when the difficulty of all items on the test is known to be equal. Like KR-20, it takes its name from its position in Kuder and Richardson's 1937 paper, where it was the twenty-first formula discussed.
The formula for KR-21 is:
:<math>\rho_{KR21} = \frac{K}{K-1}\left[1 - \frac{K p (1 - p)}{\sigma^2_X}\right]</math>
As in KR-20, <math>K</math> is the number of items. The item difficulty <math>p</math> is assumed to be the same for every item; in practice, KR-21 can be applied by taking <math>p</math> to be the average item difficulty across the entire test. KR-21 tends to be a more conservative (lower) estimate of reliability than KR-20, which in turn is a more conservative estimate than Cronbach's α.
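A corresponding Python sketch (again illustrative only; the function name <code>kr21</code> is an assumption for the example) replaces the item-by-item term with the average difficulty:

<syntaxhighlight lang="python">
import numpy as np

def kr21(responses):
    """KR-21 from a 0/1 item-response matrix, using the average item difficulty as p."""
    X = np.asarray(responses, dtype=float)
    n, K = X.shape
    p = X.mean()                                       # average proportion correct, used as the common difficulty p
    totals = X.sum(axis=1)
    var_X = np.sum((totals - totals.mean()) ** 2) / n  # population (biased) variance of total scores
    return (K / (K - 1)) * (1.0 - K * p * (1.0 - p) / var_X)
</syntaxhighlight>

Applied to the toy data in the KR-20 sketch above, this yields roughly 0.67, below the KR-20 value of roughly 0.75, in line with KR-21 being the more conservative estimate.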
==References==
{{reflist}}
==External links==
- [http://www.hr-survey.com/WpAssessmentHandbook.htm Quality of assessment chapter in Illinois State Assessment handbook (1995)]
{{DEFAULTSORT:Kuder-Richardson Formula 20}}