Jul 17, 2012

Imagine this.

Your child loves football and, as far as you can tell, he is very good at it. Of course, it may all be in his father’s eyes. So you would like to get an unbiased assessment of his real chances to develop into a top player. His coach tells you there is a test that can measure the child’s potential. The score goes from 0 to 100. In the last 20 years, each and every one of the top players who took the test in early childhood scored between 95 and 100. Your child takes the test and he scores 98. That’s excellent! says the coach. Not so fast, you reply, tell me first how many of the children with a high score failed to develop into top players. Not many, says the coach, only 5% of ordinary children wrongly got a high score.

So: what do you think is the chance that your child will become a top football player?

If you are like most people, you think the chance is very high. This is your reasoning: I don’t really know whether my child is good or not. But he has taken this test, which the coach says is very accurate: in fact it is infallible at spotting top players, and only rarely mistakes a normal child for a top player. If the test is really this accurate, my child will very likely have a bright future as a football star. It sounds obvious, right?

Wrong. Think of it this way. If there were no test, you would have asked the coach a very basic question: in your experience, what is the chance that a child like my son will become a top player? The coach would have dampened your enthusiasm: perhaps one in a thousand, he would have said, and it would have sounded about right. But with the test result in hand, you disregard this basic question. It surely looks irrelevant in the face of a very accurate test result.

This is a well-known phenomenon, which psychologists call, among other things, the Inverse Fallacy.

It is so called because it amounts to confusing the probability of a hypothesis, given some evidence, with the probability of the evidence, given the hypothesis. In our example, the hypothesis H is that your child will be a top player, and the evidence E is the high test score. What you want to know is P(H|E): the probability that the child will be a top player, given that the test says he will be. But what you know is P(E|H): the probability that the test says the child will be a top player, given that he will be. The coach told you this probability is 100%: the test is infallible at spotting top players. In answering your other question, he also told you P(E|not H): the probability of a high test score, given that the child will not be a top player. This probability is only 5%. In other words, the probability that the test correctly spots a normal child is 95%. You take this information and conclude that your child is very likely to turn into a champion.

How remarkable: before the test, P(H), the probability that your child will become a top player, is only 0.1%. After the test, it jumps all the way up to nearly 100%. Or so you think. In reality, from Bayes' Theorem:

P(H|E) = P(E|H)P(H) / P(E) = 1.96%

with P(E|H)=100%, P(H)=0.1% and P(E) = P(E|H)P(H) + P(E|not H)P(not H) = 5.1%.
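For the computationally minded, here is a minimal Python sketch of the same calculation. Python and the variable names are my choices; the numbers are the ones from the example:

```python
# Bayes' Theorem applied to the football-test example.

p_h = 0.001             # P(H): base rate of top players, 1 in 1000
p_e_given_h = 1.00      # P(E|H): the test always flags a future top player
p_e_given_not_h = 0.05  # P(E|not H): 5% of ordinary children score high anyway

# Total probability of a high score: P(E) = P(E|H)P(H) + P(E|not H)P(not H)
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior: P(H|E) = P(E|H) * P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e

print(f"P(E)   = {p_e:.4f}")          # ≈ 0.051
print(f"P(H|E) = {p_h_given_e:.4f}")  # ≈ 0.0196, i.e. less than 2%
```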

So here it is: a probability you think is close to 100% is really less than 2%. The Inverse Fallacy can open a wide gap between perceived and actual probabilities.
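If the algebra feels too slick, a quick Monte Carlo check tells the same story. The sketch below (the population size and random seed are arbitrary choices of mine, not part of the original example) simulates a million children with the stated base rate and error rates, then counts how many high scorers actually become top players:

```python
import random

random.seed(42)        # for reproducibility; an arbitrary choice

N = 1_000_000          # simulated children (an illustrative choice)
BASE_RATE = 0.001      # 1 in 1000 becomes a top player
FALSE_POSITIVE = 0.05  # 5% of ordinary children get a high score anyway

high_scorers = 0
top_among_high = 0
for _ in range(N):
    is_top = random.random() < BASE_RATE
    # The test flags every future top player, plus 5% of the rest
    scores_high = is_top or random.random() < FALSE_POSITIVE
    if scores_high:
        high_scorers += 1
        top_among_high += is_top

print(f"High scorers who become top players: "
      f"{top_among_high / high_scorers:.1%}")
```

Run it and the share comes out around 2%, in line with Bayes' Theorem and nowhere near the intuitive "close to 100%".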

The first time I encountered the Inverse Fallacy was in the shape of Tversky and Kahneman's cab problem. Many variations exist, but the structure of the problem is always the same: P(H|E) is confused with P(E|H), or with a number close to it. This is an amazing fact, with hugely important consequences in all sorts of fields. I use it a lot in my profession, looking for instances where the market falls prey to the Inverse Fallacy. But it has a much wider significance.

