• Machine Learning
  • Manuel Perez Yllan
  • APR 02, 2019

Causal Inference

“Correlation does not imply causation.” We all know this mantra from statistics, and we think we fully understand it. Human (and non-human) brains, being pattern-finding machines, quickly conclude that my coffee mug broke because it fell to the floor. One event (the fall) occurred just before the other (the mug breaking), and without the first event we would never see the second. So not only is there a correlation between mugs falling and mugs breaking, there is also a causal relation (with lots of physics going on). So far so good.


There are also easy-to-spot, funny correlations (see the Spurious Correlations website), but the problem arises when we can’t decide about a situation. Do electoral polls reflect voters’ preferences, or are electoral polls political weapons to influence citizens’ votes? Are human activities responsible for climate change, or are there more factors? Do those factors affect one another? Even a sequence in time can be misleading: in physics, many natural processes are symmetrical with respect to time, so the trick “what came first may be the cause” vanishes. This is the first of a series of blog posts clarifying cause-and-effect problems using a technique called causal inference.


Randomized controlled trials
To research the effects of drugs or treatments on humans, the tool of choice is the randomized controlled trial, or RCT: randomly divide the people participating in the test; the first group receives the treatment (the intervention group), and we compare its results with the group receiving a placebo (or the standard treatment). Comparing the results, we can deduce with standard statistics whether the treatment is working. Nice, but what happens when we’re trying to see whether tobacco causes cancer? Will you make people smoke for your test? Or when we’re trying to find out whether hydraulic fracturing is the cause of some earthquakes? Are human-made actions causing climate change? We only have one Earth to test on. These are the situations where causal inference is necessary: it allows us to extract conclusions from data without the need for an intervention (causing earthquakes on purpose or making people smoke).
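The RCT logic above can be sketched in a few lines of Python. This is a minimal simulation, not a real trial: the participant count, true effect size, and noise level are all made-up numbers chosen for illustration. The key point is that because assignment is random, the simple difference in group means is an unbiased estimate of the treatment effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical trial: the treatment adds a true effect of +2.0
# to each participant's outcome, on top of individual noise.
TRUE_EFFECT = 2.0
participants = list(range(200))

# Step 1: randomize participants into two equal groups.
random.shuffle(participants)
treated = set(participants[:100])

# Step 2: observe each participant's outcome (simulated here).
outcomes = {}
for p in participants:
    baseline = random.gauss(10.0, 1.0)  # individual variation
    outcomes[p] = baseline + (TRUE_EFFECT if p in treated else 0.0)

# Step 3: estimate the effect as the difference in group means.
mean_treated = statistics.mean(outcomes[p] for p in treated)
mean_control = statistics.mean(
    outcomes[p] for p in participants if p not in treated
)
estimated_effect = mean_treated - mean_control
```

With 100 people per group the estimate lands close to the true +2.0; randomization is what guarantees that no hidden trait (age, health, habits) systematically differs between the groups.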


Causal levels
From an evolutionary point of view, plants (with no brain at all) are capable of associating (correlating) light with their wellbeing, so sunflowers turn toward a light source when they find it. Babies are capable of experimenting (intervening, or doing) on things: what happens if I touch here? Unlike the sunflower, the baby does not settle for observing; it affects its environment to extract useful information. Small children and adult humans go one immense step further: they can imagine things. It’s like experimenting in our heads: would my health get better if I started exercising? This has immense implications, since counterfactuals (the ability to imagine a thing that is not a fact) are one of the bases of moral thinking: they give us the ability to evaluate different scenarios before they happen and choose the right one (under a specific moral system). For example, the press has given extensive coverage to what a self-driving car must do when confronted with a moral dilemma (see Self-driving cars don't care about your moral dilemmas, or The Moral Machine experiment for a more detailed view).

Associations, interventions, and counterfactuals form what is known as the ladder of causation, as Judea Pearl defined it. Causal inference and the ladder of causation are some of the key points that will make machine learning (and AI) much more robust and smart in the future. If you think about it for a moment, you’ll see that current ML sits only on the first rung.
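The gap between the first two rungs can be made concrete with a small simulation. This is a hypothetical setup with made-up probabilities: a confounder Z (say, age) raises both the chance of "treatment" X and the outcome Y, so the naive association between X and Y overstates the true causal effect. One standard causal-inference tool, back-door adjustment over Z (averaging the within-stratum differences), recovers the true effect from observational data alone, without any intervention:

```python
import random

random.seed(1)

def simulate(n=100_000):
    """Observational data where confounder Z drives both X and Y.
    The true causal effect of X on Y is +0.3 (by construction)."""
    data = []
    for _ in range(n):
        z = random.random() < 0.5                      # confounder
        x = random.random() < (0.8 if z else 0.2)       # Z raises X
        y = random.random() < (0.1 + 0.3 * x + 0.4 * z)  # Z raises Y too
        data.append((z, x, y))
    return data

def mean_y(data, x=None, z=None):
    """Average outcome, optionally conditioned on X and/or Z."""
    rows = [y for (zz, xx, y) in data
            if (x is None or xx == x) and (z is None or zz == z)]
    return sum(rows) / len(rows)

data = simulate()

# Rung 1 (association): naive difference, inflated by the confounder.
naive = mean_y(data, x=True) - mean_y(data, x=False)

# Rung 2 (intervention): back-door adjustment over Z recovers the
# causal effect without actually intervening on anyone.
p_z = sum(z for (z, _, _) in data) / len(data)
adjusted = sum(
    (mean_y(data, x=True, z=zv) - mean_y(data, x=False, z=zv)) * pz
    for zv, pz in [(True, p_z), (False, 1 - p_z)]
)
```

Here the naive estimate comes out near 0.54 while the adjusted one lands near the true 0.3: the association confuses "people likely to be treated" with "what the treatment does," and the adjustment disentangles them. This is the kind of reasoning that current ML, stuck at pure association, does not do by itself.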
