Correlation is Not (Always) Transitive

At first, I found this really puzzling. Suppose X is correlated (in the Pearson sense) with Y, and Y is correlated with Z. Does this mean X is necessarily correlated with Z? Intuitively, it seems like it should be. The answer, however, is “no.”

Perhaps the strangest thing is how easy it is to rationalize this “puzzle.” I drink more beer (X) and read more books (Z) when I am on a vacation (Y). That is, both pairs – X and Y and Z and Y – are positively correlated. But I do not drink more beer when I read more books – X and Z are not correlated. It is now obvious that correlation is not (always) transitive, but a second ago, this sounded bizarre.

Let’s go through the math.

Digging a Bit Deeper

Let’s denote the respective correlations between X, Y, and Z by cor(X,Y), cor(X,Z), and cor(Y,Z). For simplicity (and without loss of generality), let’s work with standardized versions of these variables – that is, means of 0 and variances of 1. This implies cov(X,Y) = cor(X,Y), and likewise for the other pairs.

We can write the linear projections of X and Z on Y as follows:

    \begin{equation*} X = cor(X,Y)Y + \epsilon^{X,Y}, \end{equation*}

    \begin{equation*} Z = cor(Z,Y)Y + \epsilon^{Z,Y}. \end{equation*}

Since each residual is, by construction, uncorrelated with Y, expanding cov(X,Z) leaves only two terms:

    \begin{equation*} cor(X,Z)=cor(X,Y)cor(Z,Y)+cov(\epsilon^{X,Y},\epsilon^{Z,Y}).\end{equation*}
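Because OLS residuals are exactly orthogonal to the regressor, this decomposition also holds exactly in a sample. A quick numerical check in R (the coefficients 0.5 and -0.3 are arbitrary choices of mine):

```r
set.seed(7)
y <- rnorm(500)
x <- 0.5 * y + rnorm(500)    # X loads positively on Y
z <- -0.3 * y + rnorm(500)   # Z loads negatively on Y

# Standardize so that covariances equal correlations
xs <- as.numeric(scale(x))
ys <- as.numeric(scale(y))
zs <- as.numeric(scale(z))

# Residuals from projecting (standardized) X and Z on Y
ex <- residuals(lm(xs ~ ys))
ez <- residuals(lm(zs ~ ys))

lhs <- cor(xs, zs)
rhs <- cor(xs, ys) * cor(zs, ys) + cov(ex, ez)
all.equal(lhs, rhs)  # TRUE: the decomposition holds exactly in-sample
```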

We can use the Cauchy-Schwarz inequality to bound the last term: its magnitude is at most the product of the residual standard deviations, sqrt((1-cor(X,Y)^2)(1-cor(Z,Y)^2)). This gives the final range of possible values for cor(X,Z):

    \begin{equation*}cor(X,Y)cor(Z,Y) - \sqrt{(1-cor(X,Y)^2) (1-cor(Z,Y)^2)}\end{equation*}

    \begin{equation*}\leq cor(X,Z) \leq  \end{equation*}

    \begin{equation*}cor(X,Y)cor(Z,Y) + \sqrt{(1-cor(X,Y)^2) (1-cor(Z,Y)^2)}\end{equation*}

For instance, if we set cor(X,Y)=cor(Z,Y)=0.6, then we get:

    \begin{equation*}-.28 \leq cor(X,Z) \leq 1.\end{equation*}

That is, cor(X,Z) can be negative.
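To see this in simulation, we can draw from a trivariate normal whose correlation matrix has cor(X,Y) = cor(Z,Y) = 0.6 but cor(X,Z) = -0.2, a value inside the admissible range above. A sketch in base R using a Cholesky factor (the specific target matrix is my choice):

```r
set.seed(1)
# Target correlation matrix, ordered (X, Y, Z): both X and Z
# correlate 0.6 with Y, yet X and Z correlate -0.2 with each other
R <- matrix(c( 1.0, 0.6, -0.2,
               0.6, 1.0,  0.6,
              -0.2, 0.6,  1.0), nrow = 3, byrow = TRUE)

U <- chol(R)                           # R = t(U) %*% U
w <- matrix(rnorm(3000 * 3), ncol = 3) # iid standard normals
d <- w %*% U                           # columns now have correlations ~ R
colnames(d) <- c("x", "y", "z")
round(cor(d), 2)
```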

An Extremely Simple Example

Perhaps the simplest example to illustrate this is:

  • X and Z are independent random variables,
  • Y=X+Z.

The result follows.
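To spell it out: if X and Z share a common variance \sigma^2, then cov(X,Y) = cov(X, X+Z) = \sigma^2 and var(Y) = 2\sigma^2, so

    \begin{equation*} cor(X,Y) = \frac{\sigma^2}{\sqrt{\sigma^2 \cdot 2\sigma^2}} = \frac{1}{\sqrt{2}} \approx 0.71, \end{equation*}

and likewise for cor(Z,Y), while cor(X,Z) = 0 by independence.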

The following code sets up this example in R.

set.seed(123)         # for reproducibility
x <- runif(n = 1000)  # X: uniform, independent of Z
z <- runif(n = 1000)  # Z: uniform, independent of X
y <- x + z            # Y is their sum

Correlation tests confirm the pattern: cor(x,y) and cor(z,y) are both strongly positive, with p-values near zero for the null hypotheses that they equal zero, while cor(x,z) is close to zero and statistically insignificant.

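These coefficients and p-values can be computed with cor.test; a sketch (the seed is my own choice):

```r
# Re-create the example so this snippet runs on its own
set.seed(123)
x <- runif(1000)
z <- runif(1000)
y <- x + z

cor.test(x, y)  # strong positive correlation, p-value near zero
cor.test(z, y)  # strong positive correlation, p-value near zero
cor.test(x, z)  # correlation close to zero
```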

You can find the code for this exercise in this GitHub repository.

When Is Correlation Transitive?

From the bounds above it follows that when both cor(X,Y) and cor(Z,Y) are sufficiently large, cor(X,Z) is sure to be positive (i.e., bounded below by 0). Precisely, the lower bound is positive whenever cor(X,Y)^2 + cor(Z,Y)^2 > 1.

In the example above, if we fix cor(X,Y)=.6, then we need cor(Z,Y)>.8 to guarantee that cor(X,Z)>0.
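The lower bound is easy to tabulate with a small helper function (the name is mine):

```r
# Smallest possible cor(X,Z) given cor(X,Y) = a and cor(Z,Y) = b
lower_bound <- function(a, b) {
  a * b - sqrt((1 - a^2) * (1 - b^2))
}

lower_bound(0.6, 0.6)  # -0.28: the example above
lower_bound(0.6, 0.8)  # 0: the boundary case
lower_bound(0.6, 0.9)  # > 0: cor(X,Z) must be positive
```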

Where to Learn More

Several Stack Overflow threads explain this phenomenon from various angles.

Olkin (1981) derives some further mathematical results related to transitivity in higher dimensions.

Bottom Line
  • X and Z both being correlated with Y does not guarantee that X and Z are correlated with each other.
  • Transitivity is guaranteed only when the former two correlations are “large enough” – specifically, when their squares sum to more than 1.

Olkin, I. (1981). Range restrictions for product-moment correlation matrices. Psychometrika, 46, 469-472. doi:10.1007/BF02293804
