Regression dilution

{{Short description|Statistical bias in linear regressions}}

[[File:Visualization of errors-in-variables linear regression.png|thumb|Two regression lines (red) bound the range of linear regression possibilities. The shallow slope is obtained when the independent variable (or predictor) is on the abscissa (x-axis). The steeper slope is obtained when the independent variable is on the ordinate (y-axis). By convention, with the independent variable on the x-axis, the shallower slope is obtained. Green reference lines are averages within arbitrary bins along each axis. Note that the steeper green and red regression estimates are more consistent with smaller errors in the y-axis variable.]]

'''Regression dilution''', also known as '''regression attenuation''', is the biasing of the linear regression slope towards zero (the underestimation of its absolute value), caused by errors in the independent variable.

Consider fitting a straight line for the relationship of an outcome variable y to a predictor variable x, and estimating the slope of the line. Statistical variability, measurement error or random noise in the y variable causes uncertainty in the estimated slope, but not bias: on average, the procedure calculates the right slope. However, variability, measurement error or random noise in the x variable causes bias in the estimated slope (as well as imprecision). The greater the variance in the x measurement, the closer the estimated slope is to zero rather than to the true value.
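This can be illustrated with a short simulation. The following sketch (with arbitrary choices of slope and noise levels, not drawn from any cited study) fits ordinary least squares lines and shows that noise in y leaves the slope estimate unbiased, while the same amount of noise in x shrinks it by the factor var(x)/(var(x) + var(error)):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_slope = 2.0

x = rng.normal(0.0, 1.0, n)                    # true predictor, variance 1
y = true_slope * x + rng.normal(0.0, 1.0, n)   # noise in y only
w = x + rng.normal(0.0, 1.0, n)                # noisy measurement of x, error variance 1

slope_y_on_x = np.polyfit(x, y, 1)[0]  # ~2.0: noise in y adds scatter, not bias
slope_y_on_w = np.polyfit(w, y, 1)[0]  # ~1.0: attenuated by var(x)/(var(x)+var(e)) = 0.5
print(slope_y_on_x, slope_y_on_w)
</syntaxhighlight>

Here the attenuation factor is 1/(1 + 1) = 0.5, so the expected fitted slope is 1.0 rather than the true 2.0.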

[[File:Scheme regression dilution.jpg|thumb]]

It may seem counter-intuitive that noise in the predictor variable x induces a bias, but noise in the outcome variable y does not. Recall that linear regression is not symmetric: the line of best fit for predicting y from x (the usual linear regression) is not the same as the line of best fit for predicting x from y.{{cite book |title=Applied Regression Analysis |edition=3rd |page=19 |last1=Draper |first1=N.R. |last2=Smith |first2=H. |publisher=John Wiley |year=1998 |isbn=0-471-17082-8}}
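A quick numerical check of this asymmetry (a sketch with arbitrary simulated data): the product of the two fitted slopes equals the squared correlation, so the two best-fit lines coincide only when the correlation is perfect.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50_000)
y = 0.6 * x + rng.normal(size=50_000)

b_yx = np.polyfit(x, y, 1)[0]   # slope for predicting y from x: cov(x,y)/var(x)
b_xy = np.polyfit(y, x, 1)[0]   # slope for predicting x from y: cov(x,y)/var(y)

r = np.corrcoef(x, y)[0, 1]
print(b_yx, 1.0 / b_xy)         # the two best-fit lines have different slopes...
print(b_yx * b_xy, r**2)        # ...and the product of the slopes equals r squared
</syntaxhighlight>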

==Slope correction==

Regression slope and other regression coefficients can be disattenuated as follows.

===The case of a fixed ''x'' variable===

The case that x is fixed, but measured with noise, is known as the functional model or functional relationship.{{cite journal | last1 = Riggs | first1 = D. S. | last2 = Guarnieri | first2 = J. A. |display-authors=etal | year = 1978 | title = Fitting straight lines when both variables are subject to error | journal = Life Sciences | volume = 22 | issue = 13–15 | pages = 1305–60 | doi=10.1016/0024-3205(78)90098-x| pmid = 661506 }}

It can be corrected using total least squares{{cite journal | last1=Golub | first1=Gene H. | last2=van Loan | first2=Charles F. | title=An Analysis of the Total Least Squares Problem | journal=SIAM Journal on Numerical Analysis | publisher=Society for Industrial & Applied Mathematics (SIAM) | volume=17 | issue=6 | year=1980 | issn=0036-1429 | doi=10.1137/0717073 | pages=883–893| hdl=1813/6251 | hdl-access=free }} and errors-in-variables models in general.
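As a sketch of the total least squares approach (assuming, for illustration, equal and independent error variances in the two variables), the slope can be obtained from the singular value decomposition of the centred data:

<syntaxhighlight lang="python">
import numpy as np

def tls_slope(w, y):
    """Total least squares (orthogonal regression) slope, assuming equal,
    independent error variances in the two variables."""
    A = np.column_stack([w - w.mean(), y - y.mean()])
    # The right-singular vector belonging to the smallest singular value
    # is the normal of the best-fitting line.
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    a, b = vt[-1]
    return -a / b

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 500)             # fixed design (the "functional model")
w = x + rng.normal(0.0, 1.0, x.size)        # x observed with noise
y = 2.0 * x + rng.normal(0.0, 1.0, x.size)

print(np.polyfit(w, y, 1)[0])  # ordinary least squares: attenuated, ~1.79
print(tls_slope(w, y))         # total least squares: ~2.0
</syntaxhighlight>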

===The case of a randomly distributed ''x'' variable===

The case that the x variable arises randomly is known as the structural model or structural relationship. For example, in a medical study patients are recruited as a sample from a population, and their characteristics such as blood pressure may be viewed as arising from a random sample.

Under certain assumptions (typically, normal distribution assumptions) there is a known ratio between the true slope and the expected estimated slope. Frost and Thompson (2000) review several methods for estimating this ratio and hence correcting the estimated slope.{{cite journal |last1=Frost |first1=C. |last2=Thompson |first2=S. G. |year=2000 |title=Correcting for regression dilution bias: comparison of methods for a single predictor variable |journal=Journal of the Royal Statistical Society, Series A |volume=163 |pages=173–190}} The term regression dilution ratio, although not defined in quite the same way by all authors, is used for this general approach, in which the usual linear regression is fitted and a correction then applied. The reply to Frost & Thompson by Longford (2001) refers the reader to other methods, which expand the regression model to acknowledge the variability in the x variable so that no bias arises.{{cite journal | last1 = Longford | first1 = N. T. | year = 2001 | title = Correspondence | journal = Journal of the Royal Statistical Society, Series A | volume = 164 | issue = 3 | page = 565 | doi=10.1111/1467-985x.00219| s2cid = 247674444 | doi-access = free }} Fuller (1987) is one of the standard references for assessing and correcting for regression dilution.{{cite book |last=Fuller |first=W. A. |year=1987 |title=Measurement Error Models |location=New York |publisher=Wiley |isbn=9780470317334 |url=https://books.google.com/books?id=Nalc0DkAJRYC }}
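A minimal sketch of the ratio approach, with the measurement error variance assumed known for illustration: the naively fitted slope is divided by the regression dilution ratio var(x)/(var(x) + var(error)).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
var_x, var_e = 4.0, 1.0   # variance of the true x and of its measurement error (assumed known)

x = rng.normal(0.0, np.sqrt(var_x), n)
w = x + rng.normal(0.0, np.sqrt(var_e), n)
y = 1.5 * x + rng.normal(0.0, 1.0, n)

naive = np.polyfit(w, y, 1)[0]
lam = var_x / (var_x + var_e)   # regression dilution ratio
print(naive, naive / lam)       # ~1.2 (diluted) and ~1.5 (corrected)
</syntaxhighlight>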

Hughes (1993) shows that the regression dilution ratio methods apply approximately in survival models.{{cite journal | last1 = Hughes | first1 = M. D. | year = 1993 | title = Regression dilution in the proportional hazards model | journal = Biometrics | volume = 49 | issue = 4 | pages = 1056–1066 | doi=10.2307/2532247| jstor = 2532247 | pmid = 8117900 }} Rosner (1992) shows that the ratio methods apply approximately to logistic regression models.{{cite journal | last1 = Rosner | first1 = B. | last2 = Spiegelman | first2 = D. |display-authors=etal | year = 1992 | title = Correction of Logistic Regression Relative Risk Estimates and Confidence Intervals for Random Within-Person Measurement Error | journal = American Journal of Epidemiology | volume = 136 | issue = 11 | pages = 1400–1403 | doi=10.1093/oxfordjournals.aje.a116453| pmid = 1488967 }} Carroll et al. (1995) give more detail on regression dilution in nonlinear models, presenting the regression dilution ratio methods as the simplest case of regression calibration methods, in which additional covariates may also be incorporated.{{cite book |last1=Carroll |first1=R. J. |last2=Ruppert |first2=D. |last3=Stefanski |first3=L. A. |year=1995 |title=Measurement Error in Non-linear Models |location=New York |publisher=Wiley}}

In general, methods for the structural model require some estimate of the variability of the x variable. This will require repeated measurements of the x variable in the same individuals, either in a sub-study of the main data set, or in a separate data set. Without this information it will not be possible to make a correction.
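As an illustration of how replicate measurements supply the required variance estimate (hypothetical duplicated measurements on the same individuals): the variance of the difference of two replicates is twice the error variance.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
x = rng.normal(0.0, 2.0, n)          # true long-term value, unobserved
w1 = x + rng.normal(0.0, 1.0, n)     # first measurement
w2 = x + rng.normal(0.0, 1.0, n)     # replicate measurement on the same individuals

var_e = np.var(w1 - w2, ddof=1) / 2.0   # var(w1 - w2) = 2 * var(error)
var_w = np.var(w1, ddof=1)
lam_hat = (var_w - var_e) / var_w       # estimated regression dilution ratio
print(var_e, lam_hat)                   # ~1.0 and ~0.8
</syntaxhighlight>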

===Multiple ''x'' variables===

The case of multiple predictor variables subject to variability (possibly correlated) has been well-studied for linear regression, and for some non-linear regression models. Other non-linear models, such as proportional hazards models for survival analysis, have been considered only with a single predictor subject to variability.

==Correlation correction==

In 1904, Charles Spearman developed a procedure for correcting correlations for regression dilution,{{cite journal | last=Spearman | first=C. | title=The Proof and Measurement of Association between Two Things | journal=The American Journal of Psychology | publisher=University of Illinois Press | volume=15 | issue=1 | year=1904 | issn=0002-9556 | jstor=1412159 | pages=72–101 | doi=10.2307/1412159 | url=https://archive.org/download/proofmeasurement00speauoft/proofmeasurement00speauoft_bw.pdf | access-date=2021-07-10}} i.e., to "rid a correlation coefficient from the weakening effect of measurement error".{{cite book | last=Jensen | first=A.R. | title=The g Factor: The Science of Mental Ability | publisher=Praeger | series=Human evolution, behavior, and intelligence | year=1998 | isbn=978-0-275-96103-9 }}

In measurement and statistics, the procedure is also called correlation disattenuation or the disattenuation of correlation.{{cite journal | last=Osborne | first=Jason W. | title=Effect Sizes and the Disattenuation of Correlation and Regression Coefficients: Lessons from Educational Psychology | journal=Practical Assessment, Research, and Evaluation | date=2003-05-27 | volume=8 | issue=1 | doi=10.7275/0k9h-tq64 | url=https://scholarworks.umass.edu/pare/vol8/iss1/11 | access-date=2021-07-10}}

The correction assures that the Pearson correlation coefficient across data units (for example, people) between two sets of variables is estimated in a manner that accounts for error contained within the measurement of those variables.{{Cite journal|last1=Franks|first1=Alexander|last2=Airoldi|first2=Edoardo|last3=Slavov|first3=Nikolai|date=2017-05-08|title=Post-transcriptional regulation across human tissues|journal=PLOS Computational Biology|volume=13|issue=5|pages=e1005535|doi=10.1371/journal.pcbi.1005535|issn=1553-7358|pmc=5440056|pmid=28481885 |doi-access=free }}

===Formulation===

Let <math>\beta</math> and <math>\theta</math> be the true values of two attributes of some person or statistical unit. These values are variables by virtue of the assumption that they differ for different statistical units in the population. Let <math>\hat{\beta}</math> and <math>\hat{\theta}</math> be estimates of <math>\beta</math> and <math>\theta</math> derived either directly by observation-with-error or from application of a measurement model, such as the Rasch model. Also, let

::<math>\hat{\beta} = \beta + \epsilon_{\beta}, \quad\quad \hat{\theta} = \theta + \epsilon_\theta,</math>

where <math>\epsilon_{\beta}</math> and <math>\epsilon_\theta</math> are the measurement errors associated with the estimates <math>\hat{\beta}</math> and <math>\hat{\theta}</math>.

The estimated correlation between two sets of estimates is

:<math>\operatorname{corr}(\hat{\beta},\hat{\theta}) = \frac{\operatorname{cov}(\hat{\beta},\hat{\theta})}{\sqrt{\operatorname{var}[\hat{\beta}]\operatorname{var}[\hat{\theta}]}} = \frac{\operatorname{cov}(\beta+\epsilon_{\beta}, \theta+\epsilon_\theta)}{\sqrt{\operatorname{var}[\beta+\epsilon_{\beta}]\operatorname{var}[\theta+\epsilon_\theta]}},</math>

which, assuming the errors are uncorrelated with each other and with the true attribute values, gives

:<math>\operatorname{corr}(\hat{\beta},\hat{\theta}) = \frac{\operatorname{cov}(\beta,\theta)}{\sqrt{(\operatorname{var}[\beta]+\operatorname{var}[\epsilon_\beta])(\operatorname{var}[\theta]+\operatorname{var}[\epsilon_\theta])}} = \frac{\operatorname{cov}(\beta,\theta)}{\sqrt{\operatorname{var}[\beta]\operatorname{var}[\theta]}} \cdot \frac{\sqrt{\operatorname{var}[\beta]\operatorname{var}[\theta]}}{\sqrt{(\operatorname{var}[\beta]+\operatorname{var}[\epsilon_\beta])(\operatorname{var}[\theta]+\operatorname{var}[\epsilon_\theta])}} = \rho \sqrt{R_\beta R_\theta},</math>

where <math>R_\beta</math> is the separation index of the set of estimates of <math>\beta</math>, which is analogous to Cronbach's alpha; that is, in terms of classical test theory, <math>R_\beta</math> is analogous to a reliability coefficient. Specifically, the separation index is given as follows:

:<math>R_\beta = \frac{\operatorname{var}[\beta]}{\operatorname{var}[\beta]+\operatorname{var}[\epsilon_\beta]} = \frac{\operatorname{var}[\hat{\beta}]-\operatorname{var}[\epsilon_\beta]}{\operatorname{var}[\hat{\beta}]},</math>

where the mean squared standard error of the person estimates gives an estimate of the variance of the errors <math>\epsilon_\beta</math>. The standard errors are normally produced as a by-product of the estimation process (see Rasch model estimation).

The disattenuated estimate of the correlation between the two sets of parameter estimates is therefore

:<math>\rho = \frac{\operatorname{corr}(\hat{\beta},\hat{\theta})}{\sqrt{R_\beta R_\theta}}.</math>

That is, the disattenuated correlation estimate is obtained by dividing the correlation between the estimates by the geometric mean of the separation indices of the two sets of estimates. Expressed in terms of classical test theory, the correlation is divided by the geometric mean of the reliability coefficients of two tests.
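The following sketch (with simulated estimates and constant, arbitrary standard errors) computes separation indices from the standard errors and applies the division described above:

<syntaxhighlight lang="python">
import numpy as np

def separation_index(estimates, standard_errors):
    """R = (var(estimates) - mean squared standard error) / var(estimates)."""
    v = np.var(estimates, ddof=1)
    return (v - np.mean(standard_errors**2)) / v

def disattenuated_corr(beta_hat, theta_hat, se_beta, se_theta):
    r = np.corrcoef(beta_hat, theta_hat)[0, 1]
    return r / np.sqrt(separation_index(beta_hat, se_beta)
                       * separation_index(theta_hat, se_theta))

rng = np.random.default_rng(5)
n = 5_000
beta = rng.normal(size=n)
theta = 0.7 * beta + rng.normal(0.0, np.sqrt(0.51), n)  # true correlation 0.7
beta_hat = beta + rng.normal(0.0, 0.5, n)               # estimates with error
theta_hat = theta + rng.normal(0.0, 0.5, n)
se = np.full(n, 0.5)                                    # reported standard errors

print(np.corrcoef(beta_hat, theta_hat)[0, 1])           # attenuated, ~0.56
print(disattenuated_corr(beta_hat, theta_hat, se, se))  # ~0.7
</syntaxhighlight>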

Given two random variables <math>X^\prime</math> and <math>Y^\prime</math> measured as <math>X</math> and <math>Y</math> with measured correlation <math>r_{xy}</math> and a known reliability for each variable, <math>r_{xx}</math> and <math>r_{yy}</math>, the estimated correlation between <math>X^\prime</math> and <math>Y^\prime</math> corrected for attenuation is

:<math>r_{x'y'} = \frac{r_{xy}}{\sqrt{r_{xx}r_{yy}}}.</math>

How well the variables are measured affects the correlation of <math>X</math> and <math>Y</math>. The correction for attenuation tells one what the estimated correlation is expected to be if one could measure <math>X'</math> and <math>Y'</math> with perfect reliability.

Thus if <math>X</math> and <math>Y</math> are taken to be imperfect measurements of underlying variables <math>X'</math> and <math>Y'</math> with independent errors, then <math>r_{x'y'}</math> estimates the true correlation between <math>X'</math> and <math>Y'</math>.

==Applicability==

A correction for regression dilution is necessary in statistical inference based on regression coefficients. However, in predictive modelling applications, correction is neither necessary nor appropriate; an exception is the prediction of change, discussed below, where correction is needed.

To understand this, consider the measurement error as follows. Let y be the outcome variable, x be the true predictor variable, and w be an approximate observation of x. Frost and Thompson suggest, for example, that x may be the true, long-term blood pressure of a patient, and w may be the blood pressure observed on one particular clinic visit. Regression dilution arises if we are interested in the relationship between y and x, but estimate the relationship between y and w. Because w is measured with variability, the slope of the regression line of y on w is less than that of the regression line of y on x.

Standard methods can fit a regression of y on w without bias. There is bias only if we then use the regression of y on w as an approximation to the regression of y on x. In the example, assuming that blood pressure measurements are similarly variable in future patients, our regression line of y on w (observed blood pressure) gives unbiased predictions.
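A short simulation of this point (hypothetical data mirroring the blood-pressure example): a model fitted to y on w has an attenuated slope, yet its predictions for new, similarly measured patients are unbiased.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)

def sample(n):
    x = rng.normal(0.0, 1.0, n)            # true long-term blood pressure
    w = x + rng.normal(0.0, 1.0, n)        # single-visit measurement
    y = 2.0 * x + rng.normal(0.0, 1.0, n)  # outcome
    return w, y

w, y = sample(50_000)
slope, intercept = np.polyfit(w, y, 1)     # diluted slope ~1.0, not 2.0

w_new, y_new = sample(50_000)              # future patients, "similarly variable"
pred = slope * w_new + intercept
print(np.mean(pred - y_new))               # ~0: predictions remain unbiased
</syntaxhighlight>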

An example of a circumstance in which correction is desired is prediction of change. Suppose the change in x is known under some new circumstance: to estimate the likely change in an outcome variable y, the slope of the regression of y on x is needed, not y on w. This arises in epidemiology. To continue the example in which x denotes blood pressure, perhaps a large clinical trial has provided an estimate of the change in blood pressure under a new treatment; then the possible effect on y, under the new treatment, should be estimated from the slope in the regression of y on x.

Another circumstance is predictive modelling in which future observations are also variable, but not (in the phrase used above) "similarly variable": for example, the current data set may include blood pressure measured with greater precision than is common in clinical practice. One specific example of this arose when developing a regression equation based on a clinical trial, in which blood pressure was the average of six measurements, for use in clinical practice, where blood pressure is usually a single measurement.{{cite journal | last1 = Stevens | first1 = R. J. | last2 = Kothari | first2 = V. | last3 = Adler | first3 = A. I. | last4 = Stratton | first4 = I. M. | last5 = Holman | first5 = R. R. | year = 2001 | title = Appendix to "The UKPDS Risk Engine: a model for the risk of coronary heart disease in type 2 diabetes (UKPDS 56)" | journal = Clinical Science | volume = 101 | pages = 671–679 | doi=10.1042/cs20000335}}

All of these results can be shown mathematically, in the case of simple linear regression assuming normal distributions throughout (the framework of Frost & Thompson).

A poorly executed correction for regression dilution, in particular one performed without checking the underlying assumptions, may do more damage to an estimate than no correction at all.{{cite journal |last1=Davey Smith |first1=G. |author-link=George Davey Smith |first2=A. N. |last2=Phillips |year=1996 |title=Inflation in epidemiology: 'The proof and measurement of association between two things' revisited |journal=British Medical Journal |volume=312 |issue=7047 |pages=1659–1661 |pmc=2351357 |doi=10.1136/bmj.312.7047.1659 |pmid=8664725}}

==Further reading==

Regression dilution was first mentioned, under the name attenuation, by Spearman (1904).{{cite journal | last1 = Spearman | first1 = C | year = 1904 | title = The proof and measurement of association between two things | journal = American Journal of Psychology | volume = 15 | issue = 1 | pages = 72–101 | doi=10.2307/1412159| jstor = 1412159 | url = https://archive.org/details/proofmeasurement00speauoft }} Those seeking a readable mathematical treatment might like to start with Frost and Thompson (2000).


==References==

{{DEFAULTSORT:Regression Dilution}}

Category:Regression models