As further indication of the muddle surrounding the phrase ‘Absence of evidence is not evidence of absence’, RationalWiki says instead that it is: ‘Absence of evidence, or the failure to observe evidence that favours a hypothesis, is evidence against that hypothesis’.
That is: Seeing no smoke is evidence of no fire. Well, yes, indeed. But the point is that it is no proof of no fire – this is what people have in mind when they use the phrase. Likewise, not seeing a black swan does not mean that it doesn’t exist, and not finding your car keys does not mean that you’ve lost them.
Proof requires conclusive evidence. So seeing no smoke does not prove no fire, unless smokeless fire is impossible: FNR=0. And seeing no black swan does not prove that the hypothesis “All swans are white” is true; but seeing one proves that it is false: TPR=0. Therefore, the absence or presence of conclusive evidence amounts to proof of absence or presence, whereas the absence or presence of inconclusive evidence does not.
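The difference between conclusive and inconclusive evidence can be sketched numerically with Bayes’ rule; the prior of 0.3 and the rates below are made-up illustrations, not figures from the text.

```python
# Posterior probability of fire after seeing no smoke, via Bayes' rule.
# TPR = P(smoke | fire), FPR = P(smoke | no fire); values are illustrative assumptions.

def p_fire_given_no_smoke(prior, tpr, fpr):
    fnr = 1 - tpr                                  # P(no smoke | fire)
    tnr = 1 - fpr                                  # P(no smoke | no fire)
    p_no_smoke = fnr * prior + tnr * (1 - prior)   # total probability of seeing no smoke
    return fnr * prior / p_no_smoke

# Conclusive evidence: smokeless fire impossible (FNR = 0, i.e. TPR = 1).
print(p_fire_given_no_smoke(prior=0.3, tpr=1.0, fpr=0.1))  # 0.0: no smoke proves no fire

# Inconclusive evidence: smokeless fire possible (FNR > 0).
print(p_fire_given_no_smoke(prior=0.3, tpr=0.9, fpr=0.1))  # ~0.045: below the 0.3 prior, but not zero
```

With inconclusive evidence the posterior drops but stays positive: seeing no smoke is evidence of no fire, not proof of it.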
Inconclusive evidence cannot prove that a hypothesis is true or false – it can only confirm it or disconfirm it. Smoke is confirmative evidence of fire because TPR>FPR: the probability of seeing smoke when there is a fire is higher than the probability of seeing it when there is no fire. And, since FNR=1-TPR and TNR=1-FPR, then TNR>FNR: the probability of not seeing smoke when there is no fire is higher than the probability of not seeing it when there is a fire. Likewise, seeing white swans confirms (Popper said ‘corroborates’) the hypothesis that “All swans are white”. The more white swans we see, the higher is the probability that the next one will also be white. We can never be sure but, surely, given reasonable payoffs, we are prepared to bet on it. In the limit, the accumulation of confirmative or disconfirmative evidence can be so overwhelming as to be de facto conclusive: I’ll bet you anything that the sun will rise tomorrow.
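The accumulation of confirmative evidence can be sketched as repeated Bayesian updating; the 0.5 prior and the probability q of a white sighting under the rival hypothesis are illustrative assumptions.

```python
# Repeated updating on white-swan sightings.
# H: "all swans are white", so P(white sighting | H) = 1.
# Rival hypothesis: each sighted swan is white with probability q < 1 (assumed value).

def posterior_after_n_white(prior, q, n):
    like_h = 1.0 ** n    # P(n white sightings | H)
    like_alt = q ** n    # P(n white sightings | rival)
    return like_h * prior / (like_h * prior + like_alt * (1 - prior))

for n in (0, 10, 100):
    print(n, round(posterior_after_n_white(prior=0.5, q=0.95, n=n), 4))
# The posterior climbs toward 1 as sightings accumulate: never proof,
# but eventually de facto conclusive - a bet worth taking.
```

A single black swan, of course, sends the posterior straight to zero, however many white ones came before it.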
We crave conclusive evidence, de facto and de jure, because we want certainty. Conclusive evidence frees us from belief and gives us knowledge – doxa and episteme, as the ancient Greeks called them. Such is our desire for conclusive evidence that we are inclined to see it when it’s not there. Sometimes it is just what we want to see – like Conan Doyle and his spirits. But not necessarily: 9/11 truthers don’t like their truth, and Othello hates it. Sometimes it is what we don’t want to see – like portraying the latest heavy snowfall or similar anecdotes as proof that there is no global warming, or using missing links as proof that biological evolution is false.
We like conclusive evidence because we dislike uncertainty. Some people hate it; others live with it. We may even be intrigued by it, as in the world of fiction. But in the end we expect complete resolution: we want to know whodunit – the winner of the evidential tug of war.
In the real world, however, tugs of war often go on without a winner. Just open the newspaper: did Amanda Knox kill Meredith Kercher? Did Woody Allen molest Dylan Farrow? We may all have our beliefs, based on evidence that we may well regard as conclusive. But we don’t really know for sure. Even when the accumulation of evidence is so large as to be de facto conclusive – Who killed JFK? Who brought down the twin towers? Do humans descend from apes? – lack of truly incontrovertible proof means there is always room for doubt, and for the possibility that even a single piece of conclusive evidence may immediately overturn our strongest convictions.
Contemplating that possibility, some people manage to take lack of conclusive positive evidence as proof that the hypothesis is false, or lack of conclusive negative evidence as proof that the hypothesis is true, thereby falling into the hilarious logical blunder called ‘Denying the antecedent’ (or ‘Fallacy of the inverse’, not to be confused with the Inverse Fallacy): if E, then H; but not E, therefore not H. As in: if the frog is a mammal, then it is an animal; but it is not a mammal, therefore it is not an animal. Or: if there is conclusive evidence that biological evolution is true, then it is true; but there is no conclusive evidence, therefore it is false. Or: if there is conclusive evidence that extra-terrestrials are not among us, then they aren’t; but there isn’t, therefore they are.
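That the inference is invalid can be checked mechanically: a brute-force truth table finds an assignment where both premises hold and the conclusion still fails.

```python
# Truth-table check that "if E then H; not E; therefore not H"
# (denying the antecedent) is invalid: look for a row where both
# premises are true but the conclusion "not H" is false.

from itertools import product

counterexamples = [
    (e, h)
    for e, h in product([True, False], repeat=2)
    if ((not e) or h)      # premise 1: E -> H
    and (not e)            # premise 2: not E
    and not (not h)        # conclusion "not H" fails, i.e. H is true
]
print(counterexamples)     # [(False, True)]
```

The single counterexample, E false with H true, is exactly the frog: not a mammal, yet still an animal.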
On the other hand, portraying inconclusive evidence as conclusive can lead people to call Truth what they should, more correctly though less strikingly, define as a preponderant amount of evidence. Many scientific disputes are fruitlessly centred on this, with one side claiming that some evidence is ‘obviously’ conclusive and the other side retorting that it ‘obviously’ isn’t. As a result, both sides end up overstating their arguments, trying to win the tug of war on rhetoric, as they can’t manage to win it on evidence.
Demand for categorical views supported by seemingly conclusive evidence is met with an abundant supply of experts, offering a wide variety of full-proof answers. Their trick is always the same: emphasise confirmative evidence and obfuscate disconfirmative evidence, relying on our natural disposition to fall prey to the Confirmation Bias:
Once a man’s understanding has settled on something (either because it is an accepted belief or because it pleases him), it draws everything else also to support and agree with it. And if it encounters a larger number of more powerful countervailing examples, it either fails to notice them, or disregards them, or makes fine distinctions to dismiss and reject them, and all this with much dangerous prejudice, to preserve the authority of its first conceptions (Francis Bacon, The New Organon, Book I, XLVI).
Or: Guy walks into a bar. Orders a drink. Bartender sees the guy keep snapping his fingers. Asks the guy, “Why’re you snapping your fingers?” Guy says, “It keeps the elephants away.” Bartender says, “But there aren’t any elephants.” Guy: “See? It works.”
The Confirmation Bias gives experts an incentive to be overconfident, to boost their perceived accuracy and to gain trust. Unscrupulous experts flaunt as conclusive evidence what is nothing more than confirmative or disconfirmative, but inconclusive, evidence, and ignore, dismiss or explain away any possible counterevidence.
It was this tendency to play with the Confirmation Bias, portraying any evidence as confirming or verifying a favoured hypothesis, that led Karl Popper to condemn Marx’s theory of history and Freud’s psychoanalysis as unscientific:
These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory (Conjectures and Refutations, p. 45).
Against this, Popper posed a simple question: What evidence would be enough to change your mind? It is openness to such a basic demand that separates science – knowledge achieved by means of evidence – from non-science. Testability is the essential ethos of science. No self-respecting scientific theory can claim immunity from contradiction.
Popper’s question is the touchstone of scientific inquiry. His tirades against overbearing, all-encompassing theories, unreceptive to counterevidence, were spot on. The Efficient Market Theory is a more recent example. But then he went astray, with his unqualified insistence on Falsifiability. This is not always the most appropriate criterion. The closer a scientist is to being certain that a hypothesis is true, the stronger is the need to define appropriate conditions under which the hypothesis could be falsified. But the closer he is to being certain that a hypothesis is false, the stronger is the need for verification. For example, James Randi, who does not believe in supernatural powers, invites people to demonstrate them, under conditions that would conclusively verify their existence. Those conditions need to be stringent enough: obviously, Randi would not accept enthusiastic witness reports or uncontrolled demonstrations – he needs to exclude delusion and cheating. At the same time, he himself would not cheat by asking people to predict the lottery or make elephants fly. All he asks is that they do what they claim they can do, but under supervision. One just wishes Eugene Fama were as fair with successful investors, instead of simply dismissing their performance as a random outcome. It is as if, after watching Uri Geller bend a spoon, Randi would shrug and call him a lucky bastard.
But while it is a desirable goal, conclusive evidence from crucial experiments is not a sine qua non of the scientific quest. What makes evidential tugs of war scientific is not that they are bound to end with a winner, but that they are open to a fair and honest competition between opposing empirical claims. What evidence does Fama require that would be enough for him to change his mind about investment ability? No flying elephants, please.