Uncorrelatedness (probability theory)

In probability theory and statistics, two real-valued random variables, $X$, $Y$, are said to be uncorrelated if their covariance, $\operatorname{cov}[X,Y] = \operatorname{E}[XY] - \operatorname{E}[X]\operatorname{E}[Y]$, is zero. If two variables are uncorrelated, there is no linear relationship between them.

Uncorrelated random variables have a Pearson correlation coefficient of zero, except in the trivial case when either variable has zero variance (is a constant), in which case the correlation coefficient is undefined.

In general, uncorrelatedness is not the same as orthogonality, except in the special case where at least one of the two random variables has an expected value of 0. In this case, the covariance is the expectation of the product, and $X$ and $Y$ are uncorrelated if and only if $\operatorname{E}[XY] = 0$.

If $X$ and $Y$ are independent, with finite second moments, then they are uncorrelated. However, not all uncorrelated variables are independent.[1]:p. 155

Definition

Definition for two real random variables

Two random variables $X, Y$ are called uncorrelated if their covariance $\operatorname{cov}[X,Y] = \operatorname{E}[(X - \operatorname{E}[X])(Y - \operatorname{E}[Y])]$ is zero.[1]:p. 153[2]:p. 121 Formally:

$X, Y \text{ uncorrelated} \quad \iff \quad \operatorname{E}[XY] = \operatorname{E}[X] \cdot \operatorname{E}[Y]$
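
As a quick numerical sanity check (an illustration only, not part of the definition), the condition can be estimated from simulated data; the snippet below is a minimal sketch using NumPy, with made-up variable names and distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent samples; independent variables are always uncorrelated.
x = rng.normal(size=100_000)
y = rng.normal(size=100_000)

# Sample version of cov[X, Y] = E[XY] - E[X] E[Y].
cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)
print(f"sample cov(X, Y) = {cov_xy:.5f}")  # close to 0
```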

Definition for two complex random variables

Two complex random variables $Z, W$ are called uncorrelated if their covariance $\operatorname{K}_{ZW} = \operatorname{E}[(Z - \operatorname{E}[Z])\overline{(W - \operatorname{E}[W])}]$ and their pseudo-covariance $\operatorname{J}_{ZW} = \operatorname{E}[(Z - \operatorname{E}[Z])(W - \operatorname{E}[W])]$ are zero, i.e.

$Z, W \text{ uncorrelated} \quad \iff \quad \operatorname{E}[Z\overline{W}] = \operatorname{E}[Z] \cdot \overline{\operatorname{E}[W]} \ \text{ and } \ \operatorname{E}[ZW] = \operatorname{E}[Z] \cdot \operatorname{E}[W]$
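
Both quantities can be estimated from samples; a minimal sketch (the distributions and names below are our own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two independent complex random variables.
z = rng.normal(size=n) + 1j * rng.normal(size=n)
w = rng.normal(size=n) + 1j * rng.normal(size=n)

zc = z - z.mean()  # center Z
wc = w - w.mean()  # center W

K = np.mean(zc * np.conj(wc))  # sample covariance K_ZW
J = np.mean(zc * wc)           # sample pseudo-covariance J_ZW
print("K_ZW ≈", np.round(K, 5), " J_ZW ≈", np.round(J, 5))  # both close to 0
```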

Definition for more than two random variables

A set of two or more random variables $X_1, \ldots, X_n$ is called uncorrelated if each pair of them is uncorrelated. This is equivalent to the requirement that the non-diagonal elements of the autocovariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{X}}$ of the random vector $\mathbf{X} = (X_1, \ldots, X_n)^{\mathrm{T}}$ are all zero. The autocovariance matrix is defined as:

$\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{cov}[\mathbf{X}, \mathbf{X}] = \operatorname{E}[(\mathbf{X} - \operatorname{E}[\mathbf{X}])(\mathbf{X} - \operatorname{E}[\mathbf{X}])^{\mathrm{T}}] = \operatorname{E}[\mathbf{X}\mathbf{X}^{\mathrm{T}}] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^{\mathrm{T}}$
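
In code, the autocovariance matrix of sampled data can be estimated with NumPy's `np.cov`; a minimal sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Three mutually independent coordinates: the vector is pairwise uncorrelated.
X = rng.normal(size=(3, 100_000))

# np.cov treats each row as one variable and returns the sample
# autocovariance matrix; off-diagonal entries estimate cov(X_i, X_j).
K = np.cov(X)
print(np.round(K, 3))  # off-diagonal entries close to 0
```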

Examples of dependence without correlation

Example 1

  • Let $X$ be a random variable that takes the value 0 with probability 1/2, and takes the value 1 with probability 1/2.
  • Let $Y$ be a random variable, independent of $X$, that takes the value −1 with probability 1/2, and takes the value 1 with probability 1/2.
  • Let $U$ be a random variable constructed as $U = XY$.

The claim is that $U$ and $X$ have zero covariance (and thus are uncorrelated), but are not independent.

Proof:

Taking into account that

$\operatorname{E}[U] = \operatorname{E}[XY] = \operatorname{E}[X]\operatorname{E}[Y] = \operatorname{E}[X] \cdot 0 = 0,$

where the second equality holds because $X$ and $Y$ are independent, one gets

$\operatorname{cov}[U, X] = \operatorname{E}[(U - \operatorname{E}[U])(X - \operatorname{E}[X])] = \operatorname{E}[U(X - \tfrac{1}{2})] = \operatorname{E}[X^2 Y - \tfrac{1}{2}XY] = \operatorname{E}[(X^2 - \tfrac{1}{2}X)Y] = \operatorname{E}[X^2 - \tfrac{1}{2}X]\operatorname{E}[Y] = 0.$

Therefore, $U$ and $X$ are uncorrelated.

Independence of $U$ and $X$ means that for all $a$ and $b$, $\Pr(U = a \mid X = b) = \Pr(U = a)$. This is not true, in particular, for $a = 1$ and $b = 0$.

  • $\Pr(U = 1 \mid X = 0) = \Pr(XY = 1 \mid X = 0) = 0$
  • $\Pr(U = 1) = \Pr(XY = 1) = \Pr(X = 1, Y = 1) = \Pr(X = 1)\Pr(Y = 1) = \tfrac{1}{4}$

Thus $\Pr(U = 1 \mid X = 0) \neq \Pr(U = 1)$, so $U$ and $X$ are not independent.

Q.E.D.
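
A short simulation (an illustration, not part of the proof) shows both halves of the claim numerically; the sampling code is a sketch of our own:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

x = rng.integers(0, 2, size=n)   # X uniform on {0, 1}
y = rng.choice([-1, 1], size=n)  # Y uniform on {-1, 1}, independent of X
u = x * y                        # U = XY

# Uncorrelated: the sample covariance is close to 0.
print("cov(U, X) ≈", np.mean(u * x) - np.mean(u) * np.mean(x))

# Dependent: Pr(U = 1 | X = 0) = 0 while Pr(U = 1) = 1/4.
print("Pr(U = 1 | X = 0) ≈", np.mean(u[x == 0] == 1))
print("Pr(U = 1)         ≈", np.mean(u == 1))
```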

Example 2

If $X$ is a continuous random variable uniformly distributed on $[-1, 1]$ and $Y = X^2$, then $X$ and $Y$ are uncorrelated even though $X$ determines $Y$ and a particular value of $Y$ can be produced by only one or two values of $X$: since $\operatorname{E}[X] = 0$ and $\operatorname{E}[XY] = \operatorname{E}[X^3] = 0$ by symmetry, $\operatorname{cov}[X, Y] = 0$.
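
The same can be seen in a short simulation (a sketch for illustration only):

```python
import numpy as np

rng = np.random.default_rng(4)

x = rng.uniform(-1.0, 1.0, size=100_000)  # X ~ Uniform[-1, 1]
y = x**2                                  # Y = X^2 is determined by X

# Covariance is close to 0 despite the deterministic relationship.
print("cov(X, Y) ≈", np.mean(x * y) - np.mean(x) * np.mean(y))
```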

When uncorrelatedness implies independence

There are cases in which uncorrelatedness does imply independence. One such case is when both random variables are two-valued (so each can be linearly transformed to have a Bernoulli distribution).[3] Further, two jointly normally distributed random variables are independent if they are uncorrelated,[4] although this does not hold for variables whose marginal distributions are normal and uncorrelated but whose joint distribution is not jointly normal (see Normally distributed and uncorrelated does not imply independent).
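
The caveat in the last sentence can be made concrete with the standard construction from the linked article: take $X$ standard normal and $Z = SX$ for a random sign $S$ independent of $X$. Both marginals are normal and $\operatorname{cov}[X, Z] = \operatorname{E}[S]\operatorname{E}[X^2] = 0$, yet $|Z| = |X|$, so the pair is dependent (and not jointly normal). A minimal simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

x = rng.normal(size=n)           # X ~ N(0, 1)
s = rng.choice([-1, 1], size=n)  # random sign, independent of X
z = s * x                        # Z is also N(0, 1), but (X, Z) is not jointly normal

print("cov(X, Z) ≈", np.mean(x * z) - np.mean(x) * np.mean(z))  # close to 0

# Dependence: |Z| = |X| always, so conditioning on |X| pins down |Z|.
print("Pr(|Z| > 1)           ≈", np.mean(np.abs(z) > 1))
print("Pr(|Z| > 1 | |X| > 1) =", np.mean(np.abs(z[np.abs(x) > 1]) > 1))
```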

Generalizations

Uncorrelated random vectors

Two random vectors $\mathbf{X} = (X_1, \ldots, X_m)^{\mathrm{T}}$ and $\mathbf{Y} = (Y_1, \ldots, Y_n)^{\mathrm{T}}$ are called uncorrelated if

$\operatorname{E}[\mathbf{X}\mathbf{Y}^{\mathrm{T}}] = \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^{\mathrm{T}}.$

They are uncorrelated if and only if their cross-covariance matrix $\operatorname{K}_{\mathbf{X}\mathbf{Y}}$ is zero.[5]:p.337

Two complex random vectors $\mathbf{Z}$ and $\mathbf{W}$ are called uncorrelated if their cross-covariance matrix and their pseudo-cross-covariance matrix are zero, i.e. if

$\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{J}_{\mathbf{Z}\mathbf{W}} = 0$

where

$\operatorname{K}_{\mathbf{Z}\mathbf{W}} = \operatorname{E}[(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])(\mathbf{W} - \operatorname{E}[\mathbf{W}])^{\mathrm{H}}]$

and

$\operatorname{J}_{\mathbf{Z}\mathbf{W}} = \operatorname{E}[(\mathbf{Z} - \operatorname{E}[\mathbf{Z}])(\mathbf{W} - \operatorname{E}[\mathbf{W}])^{\mathrm{T}}].$

Uncorrelated stochastic processes

Two stochastic processes $\{X_t\}$ and $\{Y_t\}$ are called uncorrelated if their cross-covariance $\operatorname{K}_{\mathbf{X}\mathbf{Y}}(t_1, t_2) = \operatorname{E}[(X(t_1) - \mu_X(t_1))(Y(t_2) - \mu_Y(t_2))]$ is zero for all times.[2]:p. 142 Formally:

$\{X_t\}, \{Y_t\} \text{ uncorrelated} \quad :\iff \quad \forall t_1, t_2 \colon \operatorname{K}_{\mathbf{X}\mathbf{Y}}(t_1, t_2) = 0.$
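
From repeated realizations of two processes, the cross-covariance can be estimated on a grid of time pairs; the sketch below uses made-up white-noise processes purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)
trials, T = 50_000, 5

# Independent realizations of two white-noise processes, T time steps each.
X = rng.normal(size=(trials, T))
Y = rng.normal(size=(trials, T))

# Estimate K_XY(t1, t2) = E[(X(t1) - mu_X(t1)) (Y(t2) - mu_Y(t2))].
Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
K = Xc.T @ Yc / trials  # (T, T) matrix; entry (t1, t2) is the estimate
print(np.round(K, 3))   # all entries close to 0
```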

See also

  • Normally distributed and uncorrelated does not imply independent

References

  1. ^ a b Papoulis, Athanasios (1991). Probability, Random Variables and Stochastic Processes. McGraw-Hill. ISBN 0-07-048477-5.
  2. ^ a b Kun Il Park (2018). Fundamentals of Probability and Stochastic Processes with Applications to Communications. Springer. ISBN 978-3-319-68074-3.
  3. ^ Virtual Laboratories in Probability and Statistics: Covariance and Correlation, item 17.
  4. ^ Bain, Lee; Engelhardt, Max (1992). "Chapter 5.5 Conditional Expectation". Introduction to Probability and Mathematical Statistics (2nd ed.). pp. 185–186. ISBN 0534929303.
  5. ^ Gubner, John A. (2006). Probability and Random Processes for Electrical and Computer Engineers. Cambridge University Press. ISBN 978-0-521-86470-1.

Further reading

  • Shorack, Galen R. (2000). Probability for Statisticians. Springer. ISBN 0-387-98953-6.