Imagine this.

Your child loves football and, as far as you can tell, he is very good at it. Of course, it may all be in his father’s eyes. So you would like to get an unbiased assessment of his real chances of developing into a top player. His coach tells you there is a test that can measure the child’s potential. The score goes from 0 to 100. In the last 20 years, each and every one of the top players who took the test in early childhood scored between 95 and 100. Your child takes the test and he scores 98. That’s excellent! says the coach. Not so fast, you reply, tell me first how many of the children with a high score failed to develop into top players. Not many, says the coach, only 5% of ordinary children wrongly got a high score.

So: what do you think is the chance that your child will become a top football player?

If you are like most people, you think the chance is very high. This is your reasoning: I don’t really know whether my child is good or not. But he has taken this test, which the coach says is very accurate: in fact it is infallible at spotting top players, and only rarely mistakes a normal child for a top player. If the test is really this accurate, my child will very likely have a bright future as a football star. It sounds obvious, right?

Wrong. Think of it this way. If there were no test, you would have asked the coach a very basic question: in your experience, what is the chance that a child like my son will become a top player? The coach would have dampened your enthusiasm: perhaps one in a thousand, he would have said, and it would have sounded about right. But with the test result in hand, you disregard this basic question. It surely looks irrelevant in the face of a very accurate test result.

This is a well-known phenomenon, which psychologists call, among other things, the *Inverse Fallacy*.

It is so called because it amounts to confusing the probability of a hypothesis, given some evidence, with the probability of the evidence, given the hypothesis. In our example, the hypothesis H is that your child will be a top player, and the evidence E is the high test score. What you want to know is P(H|E): the probability that the child will be a top player, given that the test says he will be. But what you know is P(E|H): the probability that the test says the child will be a top player, given that he will be. The coach told you this probability is 100%: the test is infallible at spotting top players. In answering your other question, he also told you P(E|not H): the probability of a high test score, given that the child will not be a top player. This probability is only 5%. In other words, the probability that the test correctly spots a normal child is 95%. You take this information and conclude that your child is very likely to turn into a champion.

How remarkable: before the test, P(H), the probability that your child will become a top player, is only 0.1%. After the test, it jumps to close to 100%. Or so you think. In reality, from Bayes’ Theorem:

P(H|E) = P(E|H)P(H) / P(E) = 100% × 0.1% / 5.1% ≈ 2%

with P(E|H)=100%, P(H)=0.1% and P(E)=P(E|H)P(H)+P(E|not H)P(not H)=5.1%.
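The arithmetic can be checked in a few lines of Python (a minimal sketch, using only the numbers given in the example):

```python
# Prior and test characteristics from the example.
p_h = 0.001             # P(H): prior probability the child becomes a top player
p_e_given_h = 1.0       # P(E|H): the test always flags a future top player
p_e_given_not_h = 0.05  # P(E|not H): false positive rate on ordinary children

# Total probability of a high score, P(E).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' Theorem: P(H|E) = P(E|H) P(H) / P(E).
p_h_given_e = p_e_given_h * p_h / p_e

print(round(p_e, 5))          # 0.05095, i.e. about 5.1%
print(round(p_h_given_e, 4))  # 0.0196, i.e. less than 2%
```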

So here it is: a probability you think is close to 100% is really less than 2%. The Inverse Fallacy can open a wide gap between perceived and actual probabilities.

The first time I encountered the Inverse Fallacy was in the shape of Tversky and Kahneman’s cab problem. Many variations exist, but the structure of the problem is the same: P(H|E) is confused with P(E|H), or a number close to it. This is an amazing fact, with hugely important consequences, in all sorts of ways. I use it a lot in my profession, looking for instances where the market falls prey to the Inverse Fallacy. But it has a much wider significance.

Hi,

I happened to stumble upon this very interesting page, and I observed what I think is an error in explaining the Inverse Fallacy.

I want to point out the error and explain why I think so.

The second para, where you mention “tell me first how many of the children with a high score failed to develop into top players”, is later assumed to mean P(E|not H) in para 7. This, I think, is incorrect. When we say the number of children with a high score that failed to develop into top players, we are talking of a universe of high-scoring children – the evidence E – and, amongst them, the number that failed to develop into top players – the failed hypothesis ‘not H’. Thus it will mean P(not H|E) and not P(E|not H).

In which case I should continue to think my child is a football star because P(H|E) = 1 – P(not H|E), which would be 95%.

Would appreciate your comments

Warm regards.

Kimi

Kimi,

It should be clear if you look at the table in the ‘Accurate experts’ post of 19 July 2012. The number of children with a high score who failed to develop into top players is 999. That is 5% of the 19,980 ‘ordinary’ children: the False Positive Rate. It is the answer to the question: what is the probability that a child gets a high score, given that he later fails to become a champion? That is P(E|not H), where E=high score and H=champion. M

Thanks Massimo

I read the post of 19 July. I interpreted “how many of the children with a high score failed to develop into top players” as 999/1019 in the given example: the children with a high score are 1019, of which 999 failed to develop into top players. But it appears this is to be understood as 999/19,980. I am not sure why. Because, when we say “how many of something…”, that something becomes the base against which the percentage is computed. For instance, when we say “How many of the red balls had black dots?”, we mean the number of red balls with black dots divided by the number of red balls.

When you ask ‘how many’ you want a number, not a percentage. The number of children with a high score who failed to develop into top players is 999. That is 5% = 999/19,980 of all non-champions. This is the meaning of conditional probability: given that the child is a non-champion (not H), what is the probability that he got a high score (E)? That is P(E|not H) = 5%. A different question is: given that a child got a high score (E), what is the probability that he is a non-champion (not H)? That is P(not H|E) = 999/1019 = 98%.
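The distinction between the two conditional probabilities can also be verified directly from the counts in the exchange (a minimal sketch, assuming the 20,000-children population implied by the figures 20, 999, 1019 and 19,980):

```python
# Population implied by the example: 20,000 children, 0.1% become champions.
total = 20_000
champions = 20                 # 0.1% of 20,000
ordinary = total - champions   # 19,980 non-champions

# High scores: the test flags every future champion,
# plus 5% of ordinary children (false positives).
false_positives = ordinary * 5 // 100       # 5% of 19,980 = 999
high_scores = champions + false_positives   # 1019

# P(E|not H): probability of a high score, given the child is not a champion.
p_e_given_not_h = false_positives / ordinary     # 999/19,980 = 5%

# P(not H|E): probability of not being a champion, given a high score.
p_not_h_given_e = false_positives / high_scores  # 999/1019 ≈ 98%

print(p_e_given_not_h)            # 0.05
print(round(p_not_h_given_e, 2))  # 0.98
```

Same numerator, different base: dividing 999 by the non-champions gives P(E|not H), dividing it by the high scorers gives P(not H|E). Swapping one for the other is exactly the Inverse Fallacy.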

I hope it is clear. By the way, your perplexity is the Inverse Fallacy at work. ‘Blinded by Evidence – The paper’ of 15 October 2013 has the full monty.