Uncorrelatedness (probability theory)
{{Short description|Concept in probability theory}}
{{more citations needed|date=January 2013}}
In probability theory and statistics, two real-valued random variables, X, Y, are said to be uncorrelated if their covariance, \operatorname{cov}[X,Y] = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y], is zero. If two variables are uncorrelated, there is no linear relationship between them.
Uncorrelated random variables have a Pearson correlation coefficient of zero, when it exists; in the trivial case where either variable has zero variance (is a constant), the correlation coefficient is undefined.
In general, uncorrelatedness is not the same as orthogonality, except in the special case where at least one of the two random variables has an expected value of 0. In this case, the covariance is the expectation of the product, and X and Y are uncorrelated if and only if \operatorname{E}[XY] = 0.
If X and Y are independent, with finite second moments, then they are uncorrelated. However, not all uncorrelated variables are independent.{{cite book | last = Papoulis | first = Athanasios | title = Probability, Random Variables and Stochastic Processes | publisher = McGraw-Hill | year = 1991 | isbn = 0-07-048477-5}}{{rp|p. 155}}
Definition
=Definition for two real random variables=
Two random variables X, Y are called uncorrelated if their covariance \operatorname{cov}[X,Y] = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y] is zero.{{rp|p. 153}}{{cite book | last = Park | first = Kun Il | title = Fundamentals of Probability and Stochastic Processes with Applications to Communications | publisher = Springer | year = 2018 | isbn = 978-3-319-68074-3}}{{rp|p. 121}} Formally:
{{Equation box 1
|indent =
|title=
|equation = X, Y \text{ uncorrelated} \quad \iff \quad \operatorname{E}[XY] = \operatorname{E}[X] \cdot \operatorname{E}[Y]
|cellpadding= 6
|border
|border colour = #0073CF
|background colour=#F5FFFA}}
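As an informal illustration (not part of the definition), the following short Python/NumPy sketch estimates both sides of the boxed condition from simulated data; the sample size, seed, and variable names are arbitrary choices for the example.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Two independent samples; independence implies uncorrelatedness.
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)

lhs = np.mean(x * y)           # sample estimate of E[XY]
rhs = np.mean(x) * np.mean(y)  # sample estimate of E[X]*E[Y]

print(lhs - rhs)               # close to 0 (sampling error only)
print(np.cov(x, y)[0, 1])      # sample covariance, also close to 0
</syntaxhighlight>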
=Definition for two complex random variables=
Two complex random variables Z_1, Z_2 are called uncorrelated if their covariance \operatorname{K}_{Z_1 Z_2} = \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])\overline{(Z_2 - \operatorname{E}[Z_2])}] and their pseudo-covariance \operatorname{J}_{Z_1 Z_2} = \operatorname{E}[(Z_1 - \operatorname{E}[Z_1])(Z_2 - \operatorname{E}[Z_2])] are both zero, i.e.
:Z_1, Z_2 \text{ uncorrelated} \quad \iff \quad \operatorname{E}[Z_1 \overline{Z_2}] = \operatorname{E}[Z_1] \cdot \operatorname{E}[\overline{Z_2}] \text{ and } \operatorname{E}[Z_1 Z_2] = \operatorname{E}[Z_1] \cdot \operatorname{E}[Z_2]
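Both conditions can be checked numerically in the same spirit; the sketch below (again a Python/NumPy illustration with arbitrary sample size and seed) estimates the covariance and the pseudo-covariance of two independently generated complex samples, and both come out near zero.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two independent complex samples (real and imaginary parts drawn separately).
z1 = rng.normal(size=n) + 1j * rng.normal(size=n)
z2 = rng.normal(size=n) + 1j * rng.normal(size=n)

d1 = z1 - z1.mean()
d2 = z2 - z2.mean()

covariance        = np.mean(d1 * np.conj(d2))  # estimate of K_{Z1 Z2}
pseudo_covariance = np.mean(d1 * d2)           # estimate of J_{Z1 Z2}

print(covariance, pseudo_covariance)           # both close to 0
</syntaxhighlight>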
=Definition for more than two random variables=
A set of two or more random variables X_1, \ldots, X_n is called uncorrelated if each pair of them is uncorrelated. This is equivalent to the requirement that the non-diagonal elements of the autocovariance matrix \operatorname{K}_{\mathbf{X}\mathbf{X}} of the random vector \mathbf{X} = (X_1, \ldots, X_n)^{\mathrm{T}} are all zero. The autocovariance matrix is defined as:
:\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{cov}[\mathbf{X},\mathbf{X}] = \operatorname{E}[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathrm{T}}] = \operatorname{E}[\mathbf{X}\mathbf{X}^{\mathrm{T}}] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathrm{T}}
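A sample version of this matrix is straightforward to compute; the following Python/NumPy sketch (sample size and dimension are arbitrary) estimates the autocovariance matrix of a vector with independent components and shows that the off-diagonal entries are close to zero.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

# Each row is one draw of a random vector X = (X_1, X_2, X_3) with
# independent (hence pairwise uncorrelated) components.
samples = rng.normal(size=(100_000, 3))

# Sample estimate of the autocovariance matrix K_XX = E[(X - E X)(X - E X)^T].
centered = samples - samples.mean(axis=0)
K = centered.T @ centered / len(samples)

print(np.round(K, 3))  # off-diagonal entries are close to 0
</syntaxhighlight>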
Examples of dependence without correlation
{{main|Correlation and dependence}}
=Example 1=
- Let X be a random variable that takes the value 0 with probability 1/2, and takes the value 1 with probability 1/2.
- Let Y be a random variable, independent of X, that takes the value −1 with probability 1/2, and takes the value 1 with probability 1/2.
- Let U be a random variable constructed as U = XY.
The claim is that U and X have zero covariance (and thus are uncorrelated), but are not independent.
Proof:
Taking into account that
:\operatorname{E}[U] = \operatorname{E}[XY] = \operatorname{E}[X]\operatorname{E}[Y] = \operatorname{E}[X] \cdot 0 = 0,
where the second equality holds because X and Y are independent, one gets
:
\begin{align}
\operatorname{cov}[U,X] & = \operatorname{E}[(U-\operatorname E[U])(X-\operatorname E[X])] = \operatorname{E}[ U (X-\tfrac12)] \\
& = \operatorname{E}[X^2 Y - \tfrac12 XY] = \operatorname{E}[(X^2-\tfrac12 X)Y] = \operatorname{E}[(X^2-\tfrac12 X)] \operatorname E[Y] = 0
\end{align}
Therefore, U and X are uncorrelated.
Independence of U and X means that for all a and b, \Pr(U=a \mid X=b) = \Pr(U=a). This is not true, in particular, for a = 1 and b = 0:
:\Pr(U=1 \mid X=0) = \Pr(XY=1 \mid X=0) = 0
:\Pr(U=1) = \Pr(XY=1) = 1/4
Thus \Pr(U=1 \mid X=0) \ne \Pr(U=1), so U and X are not independent.
Q.E.D.
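The construction of Example 1 is also easy to simulate; the following Python/NumPy sketch (sample size and seed are arbitrary) reproduces the two facts shown above: the sample covariance of U and X is close to zero, while the conditional and unconditional probabilities of U = 1 differ.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

x = rng.integers(0, 2, size=n)   # X takes 0 or 1, each with probability 1/2
y = rng.choice([-1, 1], size=n)  # Y takes -1 or 1 with probability 1/2, independent of X
u = x * y                        # U = XY

print(np.cov(u, x)[0, 1])        # close to 0: U and X are uncorrelated

# ...but not independent: conditioning on X changes the distribution of U.
print((u[x == 0] == 1).mean())   # Pr(U = 1 | X = 0) = 0
print((u == 1).mean())           # Pr(U = 1) is about 1/4
</syntaxhighlight>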
=Example 2=
If X is a continuous random variable uniformly distributed on [-1,1] and Y = X^2, then X and Y are uncorrelated even though X determines Y and a particular value of Y can be produced by only one or two values of X:
f_X(t) = {1 \over 2} I_{[-1,1]} ; \quad f_Y(t)= {1 \over {2 \sqrt{t}}} I_{]0,1]}
On the other hand, the joint density f_{X,Y} is 0 on the triangle defined by 0 < x < y < 1, although f_X \times f_Y is not null on this domain. Therefore f_{X,Y}(x,y) \neq f_X(x) f_Y(y) and the variables are not independent.
E[X] = {{1-1} \over 4} = 0 ; E[Y]= {{1^3 - (-1)^3}\over {3 \times 2} } = {1 \over 3}
Cov[X,Y]=E \left [(X-E[X])(Y-E[Y]) \right ] = E \left [X^3- {X \over 3} \right ] = {{1^4-(-1)^4}\over{4 \times 2}}=0
Therefore the variables are uncorrelated.
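A quick numerical check of Example 2, again as an illustrative Python/NumPy sketch with an arbitrary sample size, gives a sample covariance near zero together with sample means matching E[X] = 0 and E[Y] = 1/3, even though Y is a deterministic function of X.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x ** 2                     # Y is completely determined by X

print(np.cov(x, y)[0, 1])      # close to 0: X and Y are uncorrelated
print(x.mean(), y.mean())      # about 0 and about 1/3, matching E[X] and E[Y] above
</syntaxhighlight>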
Generalizations
The notion extends directly to random vectors and to stochastic processes: two random vectors are called uncorrelated if their cross-covariance matrix is zero, and two stochastic processes are called uncorrelated if their cross-covariance function is zero for every pair of times.
See also
- Correlation and dependence
- Binomial distribution: Covariance between two binomials{{Broken anchor|date=2024-03-24|bot=User:Cewbot/log/20201008/configuration|reason= The anchor (Covariance between two binomials) has been deleted.}}
- Uncorrelated Volume Element
References
{{reflist}}
Further reading
- Probability for Statisticians, Galen R. Shorack, Springer (c2000) {{ISBN|0-387-98953-6}}