Bayesian updating is a tug of war between confirmative and disconfirmative evidence, aimed at evaluating the probability of a hypothesis:
PO = LR1 ∙ LR2 ∙ … ∙ LRN ∙ BO
Evidence is essentially defined by a probability pair: the True Positive Rate TPR – the probability of the evidence if the hypothesis is true – and the False Positive Rate FPR – the probability of the evidence if the hypothesis is false. Their ratio is the Likelihood Ratio LR=TPR/FPR.
The updating process is iterative: starting with prior odds BO, confirmative evidence (LR>1) increases posterior odds PO, unconfirmative evidence (LR=1) leaves them unchanged, and disconfirmative evidence (LR<1) decreases them. The updated PO become the new BO, which is then further updated in the light of more evidence.
The process is cumulative, but in a multiplicative rather than an additive sense: a Smoking Gun (FPR=0) renders the hypothesis certainly true (infinite odds) and a Perfect Alibi (TPR=0) makes it certainly false (zero odds), irrespective of all other evidence. Certainty does not require conclusive evidence: the process can converge to the truth by the mere accumulation of overwhelming confirmative or disconfirmative evidence. However, convergence is not always assured. The tug of war does not necessarily end with a winner: it can remain somewhere in the middle, where all we can say is that the hypothesis is probably true.
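As a sketch, the multiplicative update can be written in a few lines of Python (the function names are mine, purely for illustration):

```python
def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds BO by the Likelihood Ratio of each piece of
    evidence: LR > 1 raises the odds, LR < 1 lowers them, LR = 1 leaves
    them unchanged."""
    posterior = prior_odds
    for lr in likelihood_ratios:
        posterior *= lr
    return posterior

def odds_to_prob(odds):
    """Convert odds O into the probability O / (1 + O)."""
    return odds / (1 + odds)

# Even prior odds, then three pieces of mixed evidence: 1 * 3 * 0.5 * 4 = 6
po = update_odds(1.0, [3.0, 0.5, 4.0])
print(odds_to_prob(po))  # 6/7, about 0.857

# A Perfect Alibi (TPR = 0, hence LR = 0) zeroes the odds, irrespective
# of any amount of confirmative evidence accumulated before it.
print(update_odds(1.0, [100.0, 0.0]))  # 0.0
```

Note how a single LR of zero (or of infinity, for a Smoking Gun) dominates the whole product: that is the multiplicative, rather than additive, sense of accumulation.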
Prior indifference is a distortion of Bayesian updating. While a correct update takes BO as given and increases or decreases it according to the Likelihood Ratio of new evidence, prior indifference triggers an inadvertent shift of the Base Rate to 50% before the update takes place. As a result, the update builds on Knightian uncertainty and perfect ignorance, rather than on prior evidence.
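A small numerical illustration of the distortion, with made-up numbers: suppose the hypothesis has a true Base Rate of 1% (prior odds of about 0.01) and the new evidence is fairly confirmative, with LR=9.

```python
# Illustrative numbers only: a 1% Base Rate and one confirmative signal.
bo = 0.01 / 0.99        # true prior odds, about 0.01
lr = 9.0                # e.g. TPR = 0.9, FPR = 0.1

correct = bo * lr       # about 0.09 -> posterior probability about 8.3%
# Prior indifference silently resets BO to even odds (Base Rate 50%):
indifferent = 1.0 * lr  # 9.0 -> posterior probability 90%

print(correct / (1 + correct), indifferent / (1 + indifferent))
```

The same evidence yields a posterior probability of about 8% under a correct update and 90% under prior indifference: the update is right, but it is built on the wrong starting point.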
The Availability heuristic (see Thinking, Fast and Slow, Chapters 12 and 13) is the process of judging frequency based on the ease with which instances come to mind. The area in which availability has been most extensively studied is risk perception.
My old friend Mimmo is terrified of flying. There is no point telling him that airplanes are safer than cars. The safest means of transportation – says Mimmo – is a car driven by me. This illusion of control is caused by an obviously improper comparison between his innumerable memories of safe driving – he’s had a car for 40 years and never had an accident – and many vivid episodes of catastrophic plane crashes.
Like Representativeness and Anchoring, Availability is a probability update in the light of new evidence. But with Availability the evidence comes from within: our own memory. Far from being a passive and faithful repository of ‘objective’ reality, memory is a highly reconstructive process, heavily influenced by feelings and emotions. As Mimmo tries to assess the relative odds of a fatal plane accident versus a fatal car accident, he may well be aware that airplane crashes are less frequent than car crashes. But when he updates Base Rates by retrieving evidence from memory, he finds that instances of plane crashes are more easily available than instances of car crashes.
Mimmo’s problem is essentially equivalent to Linda’s. Here we have BO1=Prior Odds of fatal car accidents and BO2=Prior Odds of fatal airplane accidents, with BO1>BO2: car travel is statistically riskier than air travel. Evidence consists of retrieved memory. Let’s again assume symmetry, hence accuracy A=TPR. Just as Linda’s description can be a more or less accurate portrayal of a Greenpeace supporter or a bank employee, the availability of instances of airplane or car accidents defines the accuracy of our memory. Again mirroring Linda’s example, let’s assume LR1=1: memory is neutral with respect to car accidents. A2, the availability of fatal airplane accidents, is higher than A1. But how much higher should it be for air travel to be perceived as riskier than car travel? Again we have A2>1/(1+K), where K=BO2/BO1 is the relative riskiness of air travel versus car travel. If air travel were as risky as car travel (K=1), all that would be necessary for airplanes to be perceived as riskier than cars would be more than neutrally available memories of airplane accidents: A2>50%. But for lower values of K the required A2 is higher. For instance, if K=10% (as seems to be the case in the US), A2 needs to be higher than 90% – which may be the reason why aviophobia is confined to a minority of exceedingly impressionable types (such as, apparently, Joseph Stalin).
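The threshold A2>1/(1+K) is easy to check numerically (a sketch; the function name is mine):

```python
def availability_threshold(k):
    """Minimum availability A2 of airplane-accident memories needed for
    air travel to be perceived as riskier than car travel, given the
    relative riskiness K = BO2/BO1 and neutral car-accident memory (LR1 = 1)."""
    return 1 / (1 + k)

print(availability_threshold(1.0))  # 0.5: equal risk -> A2 > 50%
print(availability_threshold(0.1))  # about 0.91: K = 10% -> A2 > 90%
```

The smaller K is – the safer air travel really is relative to car travel – the more lopsidedly available airplane-crash memories must be for the perception to flip.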
But what if Mimmo is right? Aviation safety is usually defined in terms of deaths per kilometre. This answers the question: if I travel from London to Edinburgh, am I safer going by plane or by car? The answer is crystal clear: airplanes win hands down. Similarly if safety is measured in deaths per hour. Given the same journey, measured in either distance or time, planes are much safer than cars. However, these two measures hide the fact that most airplane accidents happen at takeoff or landing, which occupy only a small percentage of journey distance and time. A different question is: what is the probability of dying in an airplane journey versus a car journey? When safety is measured in deaths per journey, the answer seems to be unequivocally the opposite: car journeys are safer. This may be the measure in the back of Mimmo’s mind each time he boards a plane. And while few of us go to Mimmo’s extremes, raise your hand if you are not ever so slightly more anxious when you are on a plane than when you are driving a car.
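The flip is easy to reproduce with made-up figures (these are not real statistics, only an illustration of how the choice of reference class changes the ranking):

```python
# Hypothetical figures: long, rare plane journeys vs short, frequent car trips.
plane = {"deaths": 1, "km": 5_000_000, "journeys": 10_000}
car   = {"deaths": 1, "km":   500_000, "journeys": 50_000}

# Deaths per km: the plane wins...
print(plane["deaths"] / plane["km"] < car["deaths"] / car["km"])              # True
# ...but deaths per journey: the car wins.
print(plane["deaths"] / plane["journeys"] > car["deaths"] / car["journeys"])  # True
```

Because plane journeys are long and infrequent while car journeys are short and frequent, the same death counts can make planes safer per kilometre yet riskier per journey.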
Since we are not sure how to define the appropriate reference class for transportation safety, we tend to neglect Base Rate differences: K=1. Such prior indifference explains our discomfort, as airplane accidents are more available than car accidents for most of us. There are different ways to avert plane crashes. Personally, I hum a little secret song at takeoff. It has been working beautifully.
Where Mimmo is certainly wrong is with his illusion of control. Next time I see him I will show him this:
OK, it’s Russia. Still.