Jul 26, 2012

Bayes’ Theorem defines the Posterior Probability PP of a hypothesis H in the light of evidence E as a function of three probabilities: the Base Rate BR, the True Positive Rate TPR and the False Positive Rate FPR:

PP = (TPR × BR) / (TPR × BR + FPR × (1-BR))
If the three probabilities can be measured through a controlled, replicable experiment, they constitute hard evidence. BR measures the relative frequency of H in a random sample of the relevant population. TPR and FPR measure the strength of the evidence, i.e. how accurately the sign E indicates that H is true. Overall accuracy A is equal to (TPR+TNR)/2, where TNR=1-FPR is the True Negative Rate. Hence A=0.5+(TPR-FPR)/2. Perfectly accurate evidence (A=1) has TPR=1 and FPR=0. Imperfectly accurate evidence implies a trade-off between TPR and FPR. Symmetric evidence has TPR+FPR=1 and therefore A=TPR.
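
As a concrete illustration of these definitions, here is a minimal Python sketch (mine, not the post's; the names posterior and accuracy are my own):

```python
def posterior(br, tpr, fpr):
    """Posterior Probability of H given evidence E, by Bayes' Theorem.

    br  -- Base Rate, P(H)
    tpr -- True Positive Rate, P(E | H)
    fpr -- False Positive Rate, P(E | not-H)
    """
    return (tpr * br) / (tpr * br + fpr * (1 - br))

def accuracy(tpr, fpr):
    """Overall accuracy A = (TPR + TNR)/2 = 0.5 + (TPR - FPR)/2."""
    return 0.5 + (tpr - fpr) / 2

print(accuracy(1.0, 0.0))  # 1.0: perfectly accurate evidence
print(accuracy(0.8, 0.2))  # 0.8: symmetric evidence, so A = TPR
```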

But hard evidence is not always available. When an experiment is not possible, or if it has not been performed, evidence is soft. While still based on empirical observation, soft evidence can only generate subjective probabilities, as determined by the observer’s perception. With soft evidence, BR is a prior probability, i.e. the observer’s belief about the relative frequency of H, while TPR and FPR measure the perceived accuracy of the evidence, i.e. the observer’s confidence in using E as a sign for evaluating the probability of H.

In our child footballer story, the test predicting whether the child will be a top player is an example of hard evidence. The coach’s assessment, based on accumulated experience, is an example of soft evidence. The Prior Indifference Fallacy can cause wrong beliefs to persist despite hard evidence. But at least a proper experiment can prove them wrong. When the only available evidence is of the soft kind, disproof is impossible, and wrong beliefs can remain unchallenged. Under prior indifference:

PP = TPR / (TPR + FPR)
and, if the evidence is symmetric, PP coincides precisely with TPR and therefore with A: Support equals Accuracy.
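
A quick numerical check of this identity, reusing the posterior and accuracy helpers sketched above:

```python
# Under prior indifference BR = 0.5, so the base rate cancels out:
# PP = TPR / (TPR + FPR)
print(posterior(0.5, 0.8, 0.2))  # 0.8
# With symmetric evidence (TPR + FPR = 1), PP = TPR = A:
print(accuracy(0.8, 0.2))        # 0.8 -- Support equals Accuracy
```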

The Prior Indifference Fallacy explains the power of experts. A confident coach predicts that your child will be a top player. His assessment may be perfectly honest: based on years of experience, he believes TPR=100% and FPR=5%. What he is missing is the fact that, since top players are rare, the number of False Positives is much larger than the number of True Positives, despite a low False Positive Rate. Focusing on the small rate of False Positives, rather than on their large number, leads the coach – and you – to confuse accuracy with support. As a result, a Posterior Probability of less than 2% appears as high as 95%.
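
These numbers can be verified with the posterior sketch above. The post does not state the base rate of top players; BR=0.1% is an assumed figure, chosen because it reproduces the "less than 2%" posterior:

```python
tpr, fpr = 1.0, 0.05               # the coach's perceived accuracy
print(posterior(0.001, tpr, fpr))  # ~0.0196: true support with an assumed BR of 0.1%
print(posterior(0.5, tpr, fpr))    # ~0.952: apparent support under prior indifference
```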

The Prior Indifference Fallacy gives experts an incentive to appear confident. The coach in our story is honest and may well be right, i.e. his confidence may reflect his true accuracy. But other experts may not be as scrupulous. An easy way to increase confidence is to increase TPR. Since the focus is on H, it is important not to miss H whenever H is true. In the extreme case, the coach could ensure TPR=1 by calling every child a champion. Of course, this approach would imply FPR=1 and, from Bayes’ Theorem, PP=BR: the coach’s assessment would be obviously worthless. Nonetheless, under prior indifference, the coach would be able to gain a totally undeserved 50% support.
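
In terms of the posterior sketch above (with the same assumed 0.1% base rate):

```python
# Calling every child a champion: TPR = 1 forces FPR = 1,
# so the call carries no information and PP collapses to BR ...
print(posterior(0.001, 1.0, 1.0))  # 0.001
# ... yet under prior indifference it still earns 50% support:
print(posterior(0.5, 1.0, 1.0))    # 0.5
```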

If this seems a bit stretched, imagine H is the hypothesis that “There will soon be a stock market downturn”. What can an expert – let’s call him Dr. Doom – do in order to gain support? He can call a downturn as often as possible. By doing so, he will maximise the chance of spotting all or most downturns. Clearly, there will be many times when his warning will not come true: the trade-off between Type I and Type II errors implies that an increase in TPR can only come at the cost of a higher FPR. But, as long as the public is worried about downturns, False Alarms will likely be forgiven and soon forgotten, and Dr. Doom will be hailed as an oracle.

An alternative to increasing TPR would be to decrease FPR. Ideally, FPR=0 would be preferable to TPR=1: from Bayes’ Theorem, it would imply PP=1 irrespective of BR and TPR (as long as both are above zero). An expert who delivers no False Positives does not need prior indifference: if he says that H is true, H is certainly true, no matter how rare or common H is. However, the cost of such infallibility would be a lower TPR, i.e. a higher False Negative Rate FNR=1-TPR: the expert would often have to refrain from calling market downturns, thereby incurring many False Negatives. But False Negatives are much worse than False Positives. A worried public will forgive False Alarms, but will penalise Misses.
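
Again with the posterior sketch: when FPR=0 the denominator reduces to TPR×BR, so PP=1 for any positive BR and TPR:

```python
print(posterior(0.001, 0.2, 0.0))  # 1.0: no False Positives means E guarantees H
print(posterior(0.5, 0.9, 0.0))    # 1.0: irrespective of BR and TPR
```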

Increasing TPR is therefore the better trick. An inflated focus on TPR, coupled with dimmed attention to FPR, is a form of Confirmation Bias. The unscrupulous expert can exploit the bias by emphasising good calls and obfuscating bad calls. If nobody keeps score, what counts is what is remembered. A high TPR and a hidden FPR imply a high Posterior Probability: if Dr. Doom calls it, the public believes there is a high probability of a market downturn.


  One Response to “Dr. Doom’s trick”

  1. This ties in well with some of the conclusions in Futurebabble (2010), Dan Gardner’s fascinating study of “expert” attempts to forecast the future – including Tetlock’s famous 20-year experiment on this.

