Feb 27 2017

Armed with our TPR surface, let’s revisit John Ioannidis’s claim that ‘Most Published Research Findings Are False’.

Ioannidis’s target is the immeasurable confusion generated by the widespread mistake of interpreting Fisher’s 5% statistical significance as implying a high probability that the hypothesis of interest is true: FPR<5%, hence PP>95%. As we have seen, this is far from the general case: TPR=PP if and only if BO=FPR/FNR, which under prior indifference requires error symmetry.

Fisher’s 5% significance is neither a sufficient nor a necessary condition for accepting the hypothesis of interest. Besides FPR, acceptance and rejection depend on BR, PP and TPR. Given FPR=5%, all combinations of the three variables lying on or above the curved surface indicate acceptance. But, contrary to Fisher’s criterion, combinations below the surface indicate rejection. The same is true for values of FPR below 5%, which are even more ‘significant’ according to Fisher’s criterion. These widen the curved surface and shrink the roof, thus enlarging the scope for acceptance, but may still indicate rejection for low priors and high standards of proof, if TPR is not, or cannot be, high enough. On the other hand, FPR values above 5%, which according to Fisher’s criterion are ‘not significant’ and therefore imply unqualified rejection, reduce the curved surface and expand the roof, thus enlarging the scope for rejection, but may still indicate acceptance for higher priors and lower standards of proof, provided TPR is high enough. Here are pictures for FPR=2.5% and 10%:

So let’s go back to where we started. We want to know whether a certain claim is true or false. But now, rather than seeing it from the perspective of a statistician who wants to test the claim, let’s see it from the perspective of a layman who wants to know if the claim has been tested and whether the evidence has converged towards a consensus, one way or the other.

For example: ‘Is it possible to tell whether milk has been poured before or after hot water by just tasting a cup of tea?’ (bear with me please). We google the question and let’s imagine we get ten papers delving into this vital issue. The first and earliest is by none other than the illustrious Ronald Fisher, who performed the experiment on the algologist Muriel Bristol and, on finding that she made no mistakes with 8 cups – an event that has only a 1/70 probability of being the product of chance, i.e. a p-value of 1.4%, much lower than required by his significance criterion – concluded, against his initial scepticism, that ‘Yes, it is possible’. That’s it? Well, no. The second paper describes an identical test performed on the very same Ms Bristol three months later, where she made 1 mistake – an event that has a 17/70 probability of being a chance event, i.e. a p-value of 24.3%, much larger than the 5% limit allowed by Fisher’s criterion. Hence the author rejected Professor Fisher’s earlier claim about the lady’s tea-tasting ability. On to the third paper, where Ms Bristol was given 12 cups and made 1 mistake, an event with a 37/924=4% probability of being random, once again below Fisher’s significance criterion. And so on with the other papers, each one with its own setup, its own numbers and its own conclusions.
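These p-values are pure counting. Here is a minimal Python sketch (standard library only; the function name is mine) that reproduces all three:

```python
from math import comb

def tea_p_value(cups, mistakes):
    """Probability, under the null hypothesis of no ability, of doing
    at least this well when sorting `cups` cups into two equal halves
    (a hypergeometric tail, as in Fisher's exact test)."""
    k = cups // 2                                 # cups of each kind
    total = comb(cups, k)                         # equally likely selections under the null
    at_least_as_good = sum(comb(k, j) ** 2 for j in range(mistakes + 1))
    return at_least_as_good / total

print(tea_p_value(8, 0))    # 1/70   ~ 1.4%:  Fisher's original experiment
print(tea_p_value(8, 1))    # 17/70  ~ 24.3%: the second paper
print(tea_p_value(12, 1))   # 37/924 ~ 4.0%:  the third paper
```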

It is tempting at this point for the layman to throw up his hands in despair and execrate the so-called experts for being unable to give a uniform answer to such a simple question. But he would be entirely wrong. The evidential tug of war between confirmative and disconfirmative evidence is the very essence of science. It is up to us to update our prior beliefs through multiplicative accumulation of evidence and to accept or reject a claim according to our standard of proof.

If anything, the problem is the opposite: not too much disagreement but – and this is Ioannidis’s main point – too little. Evidence accumulation presupposes that we are able to collect the full spectrum, or at least a rich, unbiased sample of all the available evidence. But we hardly ever do. The evidence we see is what reaches publication, and publication is naturally skewed towards ‘significant’ findings. A researcher who is trying to prove a point will only seek publication if he thinks he has gathered enough evidence to support it. Who wants to publish a paper about a theory only to announce that he has got inadequate evidence for it? And even if he tried, what academic journal would publish it?

As a result, available evidence tends to be biased towards acceptance. And since acceptance is still widely based on Fisher’s criterion, most published papers present FPR<5%, while those with FPR>5% remain unpublished and unavailable. To add insult to injury, in order to reach publication some studies get squeezed into significance through more or less malevolent data manipulation. It is what Ioannidis calls bias: ‘the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced’. (p. 0697).

This dramatically alters the evidential tug of war. It is as if, when looking into the milk-tea question, we found only Fisher’s paper and others accepting the lady’s ability – including some conveniently glossing over a mistake or two – and none on the side of rejection. We would then be inclined to conclude that the experts agree and would be tempted to go along with them – perhaps disseminating and reinforcing the bias through our own devices.

How big a problem is this? Ioannidis clearly thinks it is huge – hence the dramatic title of his paper, enough to despair not just about some experts but about the entire academic community and its scientific enterprise. Is it this bad? Are we really swimming in a sea of false claims?

Let’s take a better look. First, we need to specify what we mean by true and false. As we know, it depends on the standard of proof, which in turn depends on utility preferences. What is the required standard of proof for accepting a research finding as true? Going by the wrongful interpretation of Fisher’s 5% significance criterion, it is PP>95%. But this is not only a mistake: it is the premise behind an insidious misrepresentation of the very ethos of scientific research. Obviously, truth and certainty are the ultimate goals of any claim. But the value of a research finding is not in how close it is to the truth, but in how much closer it gets us to the truth. In our framework, it is not in PP but in LR.

The goal of scientific research is finding and presenting evidence that confirms or disconfirms a specific hypothesis. How much more (or less) likely is the evidence if the hypothesis is true than if it is false? The value of evidence is in its distance from the unconfirmative middle point LR=1. A study is informative, hence worth publication, if the evidence it presents has a Likelihood Ratio significantly different from 1, and is therefore a valuable factor in the multiplicative accumulation of knowledge. But a high (or low) LR is not the same as a high (or low) PP. They only coincide under prior indifference, where, more precisely, PO=LR, i.e. PP=LR/(1+LR). So, for example, if LR=19 – the evidence is 19 times more likely if the hypothesis is true than if it is false – then PP=19/20=95%. But, as we know very well, prior indifference is not a given: it is a starting assumption which, depending on the circumstances, may or may not be valid. BO – Ioannidis calls it R, ‘the ratio of “true relationships” to “no relationships”’ (p. 0696) – gives the pre-study odds of the investigated hypothesis being true. It can be high if the hypothesis has been tested before and is already regarded as likely true, or low if it is a novel hypothesis that has never been tested and, if true, would be an unexpected discovery. In the first case, LR>1 is a further confirmation that the hypothesis should be accepted as true – a useful but hardly noteworthy exercise that just reinforces what is already known. On the other hand, LR<1 is much more interesting, as it runs against the established consensus. LR could be low enough to convert a high BO into a low PO, thus rejecting a previously accepted hypothesis. But not necessarily: while lower than BO, PO could remain high, thus keeping the hypothesis true, while casting some doubts on it and prodding further investigation. In the second case, LR<1 is a further confirmation that the hypothesis should be rejected as false. On the other hand, LR>1 increases the probability of an unlikely hypothesis. It could be high enough to convert a low BO into a high PO, thus accepting what was previously an unfounded conjecture. But not necessarily: while higher than BO, PO could remain low, thus keeping the hypothesis false, but at the same time stimulating more research.
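In odds form, the accumulation is literally multiplication. A small Python sketch (function names mine) of how a sequence of studies, each summarised by its LR, updates the odds:

```python
def posterior_odds(base_odds, likelihood_ratios):
    """Odds-form Bayes: PO = BO * LR1 * LR2 * ... -- each study
    contributes one likelihood ratio to the running product."""
    po = base_odds
    for lr in likelihood_ratios:
        po *= lr
    return po

def to_prob(odds):
    """PP = PO / (1 + PO)."""
    return odds / (1 + odds)

# Prior indifference (BO=1): a single LR=19 study gives PP=95% ...
print(to_prob(posterior_odds(1.0, [19])))       # 0.95
# ... but from a sceptical prior (BO=1/3) the same study only reaches ~86%
print(to_prob(posterior_odds(1/3, [19])))       # ~0.864
# A later disconfirmative study (LR<1) multiplies the odds back down
print(to_prob(posterior_odds(1/3, [19, 0.2])))  # ~0.559
```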

Such distinctions get lost in Ioannidis’s sweeping claim. True, SUTCing priors and neglecting a low BO can lead to mistakenly accepting hypotheses on the basis of evidence that, while confirmative, leaves PO well below any acceptance level. The mistake is exacerbated by Fisher’s Bias – confusing significance (a low FPR) with confirmation (a high LR) – and by Ioannidis’s bias – squeezing FPR below 5% through data alteration. FPR<5% does not mean PP>95% or even PP>50%. As shown in our TPR surface, for any standard of proof, the lower is BO the higher is the required TPR for any level of FPR. Starting from a low BO, accepting the hypothesis requires very powerful evidence. Without it, acceptance is a false claim. Moreover, published acceptances – rightful or wrongful – are not adequately counterbalanced by rejections, which remain largely unpublished. This however occurs for an entirely legitimate reason: there is little interest in rejecting an already unlikely hypothesis. Interesting research is what runs counter to prior consensus. Starting from a low BO, any confirmative evidence is interesting, even when it is not powerful enough to turn a low BO into a high PO. Making an unlikely hypothesis a bit less unlikely is interesting enough, and is worth publication. But – here Ioannidis is right – it should not be confused with acceptance. Likewise, there is little interest in confirmative evidence when BO is already high. What is interesting in this case is disconfirmative evidence, again even when it is not powerful enough to reject the hypothesis by turning a high BO into a low PO. Making a likely hypothesis a bit less likely is interesting enough. But it should not be confused with rejection.

Feb 17 2017

Back to PO=LR∙BO.

Whether we accept or reject a hypothesis, i.e. decide whether a claim is true or false, depends on all three elements.

Posterior Odds. The minimum standard of proof required to accept a hypothesis is PO>1 (i.e. PP>50%). We call it Preponderance of evidence. But, depending on the circumstances, this may not be enough. We have seen two other cases: Clear and convincing evidence: PO>3 (i.e. PP>75%), and Evidence beyond reasonable doubt: PO>19 (PP>95%), to which we can add Evidence beyond the shadow of a doubt: PO>99 (PP>99%) or even PO>999 (PP>99.9%). The spectrum is continuous, from 1 to infinity, where Certainty (PP=100%) is unattainable and is therefore a decision. The same is symmetrically true for rejecting the hypothesis, from PO<1 to the other side of Certainty: PO=PP=0.

Base Odds. To reach the required odds we have to start somewhere. A common starting point is Prior indifference, or Perfect ignorance: BO=1 (BR=50%). But, depending on the circumstances, this may not be a good starting place. With BO=1 it looks like Base Odds have disappeared, but they haven’t: they are just being ignored – which is never a good start. Like PO, Base Odds are on a continuous spectrum between the two boundaries of Faith: BR=100% and BR=BO=0. Depending on BO, we need more or less evidence in order to achieve our required PO.

Likelihood Ratio. Evidence is confirmative if LR>1, i.e. TPR>FPR, and disconfirmative if LR<1, i.e. TPR<FPR. The sizes of TPR and FPR are not relevant per se – what matters is their ratio. A high TPR means nothing without a correspondingly low FPR. Ignoring this leads to the Confirmation Bias. Likewise, a low FPR means nothing without a correspondingly high TPR. Ignoring this leads to Fisher’s Bias.

To test a hypothesis, we start with a BO level that best reflects our priors and set our required standard of proof PO. The ratio of PO to BO determines the required LR: the strength or weight of the evidence we demand to accept the hypothesis. In our tea-tasting story, for example, we have BO=1 (BR=50%) and PO>19 (PP>95%), giving LR>19: in order to accept the hypothesis that the lady has some tea-tasting ability, we require evidence that is at least 19 times more likely if the hypothesis is true than if the hypothesis is false. A test is designed to calculate FPR: the probability that the evidence is a product of chance. This requires defining a random variable and assigning to it a probability distribution. Our example is an instance of what is known as Fisher’s exact test, where the random variable is the number of successes over the number of trials without replacement, as described by the hypergeometric distribution. Remember that with 8 trials the probability of a perfect choice under the null hypothesis of no ability is 1/70, the probability of 3 successes and 1 failure is 16/70, and so on. Hence, in case of a perfect choice we accept the hypothesis that the lady has some ability if TPR>19∙(1/70)=27% – a very reasonable requirement. But with 3 successes and 1 failure we would require an impossible TPR>19∙(17/70). On the other hand, if we lower our required PO to 3 (PP>75%), then all we need is TPR>3∙(17/70)=73% – a high but feasible requirement. But if we lower our BO to a more sceptical level, e.g. BO=1/3 (BR=25%), then TPR>3∙3∙(17/70) is again too high, whereas a perfect choice may still be acceptable evidence of some ability, even with the higher PO: TPR>3∙19∙(1/70)=81%.
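All these numbers come from a single formula: rTPR=(rPO/BO)∙FPR. A minimal Python sketch (the function name is mine) reproduces each case in the paragraph above:

```python
def required_tpr(br, pp, fpr):
    """Minimum TPR needed to accept: rTPR = (rPO / BO) * FPR.
    A value above 1 means no evidence can be powerful enough."""
    bo = br / (1 - br)        # base odds
    rpo = pp / (1 - pp)       # required posterior odds
    return (rpo / bo) * fpr

print(required_tpr(0.50, 0.95, 1/70))   # ~0.27: perfect choice, prior indifference
print(required_tpr(0.50, 0.95, 17/70))  # ~4.61: one mistake -- impossible (>1)
print(required_tpr(0.50, 0.75, 17/70))  # ~0.73: one mistake, 75% standard
print(required_tpr(0.25, 0.75, 17/70))  # ~2.19: sceptical prior -- impossible again
print(required_tpr(0.25, 0.95, 1/70))   # ~0.81: perfect choice, sceptical prior
```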

So there are four variables: PP, BR, FPR and TPR. Of these, PP is set by our required standard of proof, BR by our prior beliefs and FPR by the probability distribution of the relevant random variable. These three combined give us the minimum level of TPR required to accept the hypothesis of interest. TPR – the probability of the evidence in case the hypothesis is true – is also known as statistical power, or sensitivity. Our question is: given our starting priors and required standard of proof, and given the probability that the evidence is a chance event, how powerful should the evidence be for us to decide that it is not a chance event but a sign that the hypothesis of interest is true?

Clearly, the lower is FPR, the more inclined we are to accept. As we know, Fisher would do it if FPR<5% – awkwardly preferring to declare himself able to disprove the null hypothesis at such significance level. That was enough for him: he apparently took no notice of the other three variables. But, as we have seen, what he might have been doing was implicitly assuming error symmetry, prior indifference and setting PP beyond reasonable doubt, thus requiring TPR>95%, i.e. LR>19. Or, more likely, at least in the tea test, he was starting from a more sceptical prior (e.g. 25%), while at the same time lowering his standard of proof to e.g. 75%, which at FPR=5% requires TPR>45%, i.e. LR>9, or perhaps to 85%, which requires TPR>85%, i.e. LR>17.

There are many combinations of the four variables that are consistent with the acceptance of the hypothesis of interest. To see it graphically, let’s fix FPR: imagine we have just run a test and calculated that the probability that the resulting evidence is the product of chance is 5%. Do we accept the hypothesis? Yes, says Fisher. But we say: it depends on our priors and standard of proof. Here is the picture:

For each BR, TPR is a positively convex function of PP. For example, with prior indifference (BR=50%) and a minimum standard of proof (PP>50%) all we need to accept the hypothesis is TPR>5% (i.e. LR>1): the hypothesis is more likely to be true than false. But with a higher standard, e.g. PP>75%, we require TPR>15% (LR>3), and with PP>95% we need TPR>95% (LR>19). The requirement gets steeper with a sceptical prior. For instance, halving BR to 25% we need TPR>15% for a minimum standard and TPR>45% for PP>75%. But PP>95% would require TPR>1: evidence is not powerful enough for us to accept the hypothesis beyond reasonable doubt. For each BR, the maximum standard of proof that keeps TPR below 1 is BO/(BO+FPR). Under prior indifference, that is 95% (95.24% to be precise: PO=20), but with BR=25% it is 87%. The flat roof area in the figure indicates the combination of priors and standards of proof which is incompatible with accepting the hypothesis at the 5% FPR level.
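The roof is easy to compute: the constraint TPR≤1 caps the attainable standard of proof at BO/(BO+FPR). A quick Python sketch (names mine):

```python
def max_pp(br, fpr):
    """Highest attainable standard of proof at a given FPR:
    the constraint TPR <= 1 caps PP at BO / (BO + FPR)."""
    bo = br / (1 - br)
    return bo / (bo + fpr)

for fpr in (0.01, 0.05, 0.10):
    print(fpr, round(max_pp(0.50, fpr), 4), round(max_pp(0.25, fpr), 4))
# FPR=5%: 95.24% under prior indifference, ~87% with BR=25% -- the 'roof' in the figure
# FPR=1% shrinks the roof; FPR=10% enlarges it (91% and 77%), as discussed below
```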

If TPR is on or above the curved surface, we accept the hypothesis. But, unlike Fisher, if it is below we reject it: despite a 5% FPR, the evidence is not powerful enough for our priors and standard of proof. Remember we don’t need to calculate TPR precisely. If, as in the tea-tasting story, the hypothesis of interest is vague – the lady has some unspecified ability – it might not be possible. But what we can do is assess whether TPR is above or below the required level. If we are prior indifferent and want to be certain beyond reasonable doubt that the hypothesis is true, we need TPR>95%. But if we are happy with a lower 75% standard then all we need is TPR>15%. If on the other hand we have a sceptical 25% prior, there is no way we can be certain beyond reasonable doubt, while with a 75% standard we require TPR>45%.

It makes no sense to talk about significance, acceptance and rejection without first specifying priors and standards of proof. In particular, very low priors and very high standards land us on the flat roof corner of the surface, making it impossible for us to accept the hypothesis. This may be just fine – there are many hypotheses that I am too sceptical and too demanding to be able to accept. At the same time, however, I want to keep an open mind. But doing so does not mean reneging on my scepticism or compromising my standards of proof. It means looking for more compelling evidence with a lower FPR. That’s what we did when the lady made one mistake with 8 cups. We extended the trial to 12 cups and, under prior indifference and with no additional mistakes, we accepted her ability beyond reasonable doubt. Whereas, starting with sceptical priors, acceptance required lowering the standard of proof to 75% or extending the trial to 14 cups.

To reduce the size of the roof we need evidence that is less likely to be a chance event. For instance, FPR=1% shrinks the roof to a small corner, where even a sceptical 25% prior allows acceptance up to PP=97%. In the limit, as FPR tends to zero – there is no chance that the evidence is a random event – we have to accept. Think of the lady gulping 100 cups in a row and spotlessly sorting them into the two camps: even the most sceptical statistician would have no shadow of a doubt to regard this as conclusive positive evidence (a Smoking Gun). On the other hand, coarser evidence, more probably compatible with a chance event, enlarges the roof, thus making acceptance harder. With FPR=10%, for instance, the maximum standard of proof under prior indifference is 91%, meaning that even the most powerful evidence (TPR=1) is not enough for us to accept the hypothesis beyond reasonable doubt. And with a sceptical 25% prior the limit is 77%, barely above the ‘clear and convincing’ level. While harder, however, acceptance is not ruled out, as it would be with Fisher’s 5% criterion. According to Fisher, a 1 in 10 probability that the evidence is the product of chance is just too high for comfort, making him unable to reject the null hypothesis. But what if TPR is very high, say 90%? In that case, LR=9: the evidence is 9 times more likely if the hypothesis is true than if it is false and, under prior indifference, PP=90%. Sure, we can’t be certain beyond reasonable doubt that the hypothesis is true, but in many circumstances it would be eminently reasonable to accept it.

Jan 21 2017

Sorry we’re very late, but anyway, HAPPY NEW YEAR.

One reason is that once again the video was blocked on a copyright issue. Sony Music, owner of Uptown Funk, had no problems: their policy is to monetise the song. But Universal Music Group, owner of Here Comes the Sun, just blocks all Beatles songs – crazy. I am disputing their claim – it is just the first 56 seconds and we’re singing along, for funk sake! They will give me a response by 17 February.

Talking about Funk, Bruno Mars says the word 29 times. Lorenzo assures me there is an n in each one of them, but I have my doubts. (By the way, check out Musixmatch, a great Italian company).

What does Funk mean anyway? It’s another great kaleidoscopic expression.

I am putting the video online against my brother’s advice – “It’s bellissimo for the family, but what do others care?” I say fuggettaboutit, it’s funk.

Update. I got the response: “Your dispute wasn’t approved. The claimant has reviewed their claim and has confirmed it was valid. You may be able to appeal this decision, but if the claimant disagrees with your appeal, you could end up with a strike on your account.” Silly.

So I put the video on Vimeo. No problem there. Crazy.

Dec 26 2016

This may be obvious to some of you, so I apologise. But it wasn’t to me, and since it has been such a great discovery I’d like to offer it as my Boxing Day present.

If you are my age, or even a bit younger, chances are that you have a decent/good/awesome hifi stereo system, with big, booming speakers and all the rest of it. Mine is almost 20 years old, and it works great. But until recently I’d been using it just to listen to my CDs. All the iTunes and iPhone stuff – including dozens of CDs that I imported into it – was, in my mind, separate: I would only listen to it through the PC speakers. Good, but not great. I wished there was a way to do it on the hifi. Of course, I could have bought better PC speakers, or a new whiz-bang fully-connected stereo system. But I couldn’t be bothered – I just wanted to use my old one.

The PC and the hifi are in different rooms, so connecting them doesn’t work, even wirelessly. But after some thinking and research I found a perfect solution. So here it is.

All I needed was this:

It’s a tiny 5×5 cm Bluetooth Audio Adapter, made by Logitech. You can buy it on the Logitech website or on Amazon, currently at £26. All you need to do is connect it to the stereo and pair your iPhone or iPad (or equivalent) to it.

To complete, I also bought this:

It is a 12×10 cm 1-Output 2-Port Audio Switch, also available on Amazon, currently at £7. Here is the back:

So you connect:

  1. The Audio Adapter to the first Input of the switch (the Adapter comes with its own RCA cable).
  2. The Output of your system’s CD player to the second Input of the switch (you need another RCA cable).
  3. The CD Input of your system’s amplifier to the Output of the switch (you need a third RCA cable).

And you’re done! Now if you want to listen to your iPhone stuff you press button 1 on the front of the switch. If you want to listen to your CDs you press button 2.

And of course – but this you should already know – you can also listen to the tons of music available on any of the streaming apps: Amazon Music, Google Play Music, Groove, Spotify, Deezer etc.

There you go. You’ve just saved hundreds or thousands of your favourite currency that an upgrade would have cost you. And you’re using your good old system to the full.

Merry Christmas.

Dec 26 2016

That morning I had an early meeting and left in a rush. So, on the way back home, I stopped by M&S Foodhall – I had to get some milk, the children were coming over – and got myself a freshly baked pain au chocolat. I got two, just in case. At home, I ate one in the kitchen, with a nice espresso. I didn’t need the second one, so I left it in its paper bag, up on the shelf.

The children arrived in the afternoon. They had their homework to do, I had mine. A couple of hours later, I went to the kitchen to make a cup of tea. There, on the table, I saw a blue plate, with some flaky crumbs on it.

There you go again. “Mau!”

Maurits, my younger son, has a bit of a sweet tooth. Nothing major – none of us care much about sugar. But while Lorenzo, my elder, is just like me, Mau likes the occasional candy – Mentos are his favourite. At around teatime, they usually take a break and fix themselves a slice of bread with Nutella. But this time Mau had also evidently discovered and gobbled up the second pain au chocolat.

“Mau, can you come here for a sec?” He was doing his maths. “Who did this?” I asked him with a stern smile.

“Not me! I swear!”

He had done it before. Of course, I didn’t mind him eating the pastry. And I didn’t even mind that he hadn’t asked me first, sweet boy. But I wanted to check on his disposition to own up to his actions.

“Mau, please tell me the truth. It’s fine you ate it. But just tell me. Did you?”

“No dad, I didn’t!” His initial smirk had gone. He looked down and started sobbing, quietly at first, but with a menacing crescendo. It was his tactic – he had even told me once, in a wonderful moment of intimate connivance.

“I know you, Mau. Don’t do this. It’s ok, you understand? I just want you to tell me the truth.”

Other times he would have given up, especially when, like now, it was just the two of us – Lorenzo was in the living room. But not this time. His sob turned into a louder cry:

“I didn’t! I swear I didn’t!”

Big tears started streaming down his cheeky cheeks – that’s what we call him sometimes, Lori and I. Lorenzo came over to check what was going on. As soon as I told him, he started defending his brother:

“Dad, if he says he didn’t do it he didn’t, ok?”

Good, I thought. That didn’t happen very often. Usually, in similar circumstances, they would snitch and bicker. But this time, evidently moved by his brother’s distress, Lorenzo was with him.

“You stay out of it. Go back to work”, I said.

I took Mau to the bedroom and closed the door. I hugged him and wiped his tears with kisses. It was a great opportunity to teach him a fundamental lesson about honesty, truth and trust. I told him how important it was for me, and how important for my children to understand it. He had heard it before.

“Dad, I did not do it.” He was still upset, but had stopped crying. Great, I thought, he’s about to surrender. One more go and we’re done – I couldn’t wait. So I gave him another big hug and whispered a few more words about sincerity and love.

“Ok dad, I admit it. I did it” he whispered back, with a sombre sigh.

“Great, Mau! I’m so proud of you, well done!” I felt a big smile exploding on my face. “You see how good it is when you tell the truth? You should never, never lie to me again, ok?”

He nodded, and we hugged one more time. Then I lifted him, threw him on the bed and we started fighting, bouncing, tickling and laughing, as we always do. Lorenzo joined in.

A couple of hours later, it was time for dinner. I went back to the kitchen and started preparing. I cleared the table – the blue plate and everything else – and asked the children what they wanted to eat. “Mau’s turn to decide!”

But then, as I looked up to get the plates, my blood froze. There, on the shelf where I had left it in the morning, was the paper bag and, inside it, the second pain au chocolat.

“Mau!”

He came over, with a not-again look on his face. I grabbed him and kissed him all over. “I’m sorry sorry sorry sorry!” I explained to him what had happened. I had used the blue plate in the morning and left it there. He had nothing to do with it.

“I told you, dad.”

“Yes, but why did you then admit you did it?” I replied, choking on my words as soon as I realised that the last thing I could do was to blame him again.

“I know dad, but you were going on and on. I had to do it.”

Dec 24 2016

Tempus fugit. It has been two years since I wrote about Ferrexpo, and more than three since I presented it at the valueconferences.com’s European Investment Summit. (By the way, Meyer Burger’s pearl status turned out to be short-lived. Barratt Developments stock price, on the other hand, proceeded to grow 40% in 2014 and another 40% in 2015, to a peak of 6.6 pounds. This year it fell like a stone after Brexit to as low as 3.3, but it is now back at 4.8 – market efficiency for you).

My point two years ago was to show how wrong I had been on Ferrexpo and to try to explain – first to myself – why. My main argument was that I had underestimated the negative consequences of the big three iron ore producers’ decision to continue to expand their production capacity. This had caused a steep drop in the iron ore price from 130 dollars per ton to 70, which the market was regarding as permanent, thus taking all iron ore producers – the big three as well as smaller companies like Ferrexpo – to very low valuations. My decision to stick with the investment was predicated on the assumption that the iron ore price drop would turn out to be cyclical rather than permanent, and that valuations based on a permanently low price would eventually be proven wrong.

Here is what happened since then:

Ferrexpo ended 2014 at 0.5. But then, in the first half of 2015, it started to look as if I was right: the stock price climbed back 60% to 0.8. Alas, not for long. In the second half, the iron ore price resumed its fall, and by the end of the year it was as low as 41 dollars, taking Ferrexpo’s price all the way down to 0.2.

In the meantime, the company had been chugging along, reporting lower but overall solid results. Its profile was as low as ever, and the message invariably the same: we are doing the best we can in the current adverse circumstances, but there is nothing we can do other than sit tight and weather the storm. Brokerage analysts took notice and kept pummelling the stock.

To my increasing frustration – there were things to do: share buybacks, dividend suspension, divestments, insider buying, strategic alliances – a stone-faced IR would, each time I saw her, give me the same ‘don’t look at me, I just work here’ message, the CFO kept sighing, and the CEO would not even meet me.

The attitude could not have been more different at Cliffs Natural Resources, the largest US iron ore producer. Here, after a fierce shareholder and management clash, a new CEO had been appointed in August 2014. Lourenço Gonçalves, a burly, ebullient Brazilian with a long experience in the mining industry, was as far as you could get from the amorphous Ferrexpo people. I had taken a position in Cliffs in 2013 and added to it in 2014.

Gonçalves’ first public foray, the Q3 2014 earnings call in October, was a stunner. The stock price was at 9 dollars, down from 17 in August. But Gonçalves was adamant: he was going to turn the company around, restore profitability and reverse the decline. The old management – he said – had been driving the company into the ground by pursuing the same aggressive, China-focused expansion and acquisition strategy that the big three producers had adopted. He warned that the strategy was reckless and self-destructive, and that the big three’s shareholders would eventually see the light and throw out the management responsible for it. His plan was to concentrate on the domestic market and sell everything else. He said brokerage analysts – who were almost unanimously sceptical about his plan and were ‘predicting’ more iron price weakness – were all wrong, and he was determined to prove them wrong. The company was buying back its stock and he and other company insiders were buying too on their personal account.

What a refreshing contrast to Ferrexpo’s lethargy! Here is a most memorable exchange with one of the analysts in the Q&A. And here is Gonçalves’ presentation at a Steel Industry conference in June 2015, where he effectively and presciently articulated his analysis and outlook:

The stock had gone up 22% on the day of the Q3 earnings call. But it had soon resumed its downward trend, ending 2014 at 7 dollars. On the day of the June ’15 conference it had reached 5, and a month later it was down to the beaten-up analyst’s 4 dollar valuation target. Lourenço would have none of it – but being in the midst of all this drama was no fun.

A value investor is used to enduring the pain when things go the wrong way – but there is always a limit, beyond which endurance becomes stubbornness, valiance turns into delusion and composure into negligence. I had bought Ferrexpo in 2012, added to it in 2013, when I also bought Cliffs, to which I had added in 2014. I just couldn’t do more. All along I had kept doing the Blackstonian right thing – trying to prove myself wrong. But there was no way: I agreed with Gonçalves one hundred percent. Giving up was surely the wrong thing to do.

But there is one thing I did in 2015: I moved out of Ferrexpo and concentrated entirely on Cliffs. Realising a loss is never a pleasant experience, but resisting it when there is a better alternative is a mistake. The loss is there, whether one realises it or not – and selling creates a tax asset. I was not giving up the fight, but I reckoned that, if I had to keep fighting, I might as well seek comfort in Cliffs’ enthusiasm rather than remain mired in Ferrexpo’s apathy. If I was ever going to be proven right, it was more likely – and more fun, though admittedly the company’s name didn’t help – to be on Cliffs’ side:

We are squeezing the freaking shorts and we are going to take them out one by one: I’m going to make them sell everything to get out of my way and they are going to lose their pants. How am I going to get there? The toolbox is full. (Gonçalves, Cliffs Q1 Earnings call, 28 April 2015).

Fun, but tough. The shorts kept winning, and winning big, for the rest of the year, which Cliffs ended all the way down at 1.6 dollars. But, in the meantime, things had started to change. Exactly as Gonçalves had predicted, the big three’s shareholders lost their patience and forced a radical change in management strategy. The first to go was the Head of Iron Ore at BHP Billiton, who had been the most vocal advocate of the supposed virtues of lower iron ore prices – BHP’s stock price halved in 2014-15 – soon followed by Rio Tinto’s CEO and its own Head of Iron Ore – Rio’s price went down 40% in the same period. The new management put a halt to the misconceived policy of big actual and planned production expansion, which by the end of the year had driven the iron ore price down to 40 dollars. As a result, lo and behold, at the beginning of this year – two full years later than my Ferrexpo post – the cyclical upturn in the iron ore price began in earnest. And, unlike the year before, it didn’t stop: the price has now doubled to 80 dollars.

Right, at last. And freaking shorts on the run, as Cliffs’ stock price blasted to a peak of 10.5 dollars earlier this month – five and a half times where it ended last year. Thank you, Lourenço, great job. So here is the graph I showed in July at this year’s Italian Value Investing Seminar, where, alongside their stock picks, speakers are asked to talk about a past mistake.

My big mistake was – what else? – Ferrexpo, one of the three stocks I had presented at the 2014 Seminar (the others were two Italian stocks, Cembre and B&C Speakers, which went up 39% and 34% in 2015). Ferrexpo had by then gone up merely 50% since year-end, from 0.2 to 0.3, while Cliffs was already at 6. It was the silver lining of a bad story, and a vindication of my 2015 move – enhanced, ironically, by the post-Brexit dollar appreciation versus the pound.

Little did I expect that, right after the July presentation, Ferrexpo’s own price would start a mighty catch-up, bringing it to a peak, earlier this month, of 1.5 pounds, basically in line with Cliffs, which is still a bit ahead from my pound perspective only because of the currency move:

There might be a perverse correlation between Ferrexpo’s stock price and my talking about it in Trani – but I doubt I will collect a third data point to test the hypothesis. The move had nothing to do with the company, which this year continued to be its dreary self. And very little to do with a change in brokerage analysts’ views – the Eeek analyst, bless him, still has a 0.26 price target. As is clear in the first graph, it has everything to do with the iron ore price correlation.

Be as it may, I am very happy for the few friends who followed me on this roller coaster. This has not been a successful investment (so far!) – and no, I did not increase my Cliffs position at the end of last year. So I hesitate to categorize this post under ‘Intelligent Investing’. But then ‘intelligent’ doesn’t always mean ‘successful’. Investing implies risk and must be judged on the intelligence of ex ante reasoning rather than the success of ex post outcomes. At the same time, however, it is wrong to indulge in self-forgiveness. We should learn from our mistakes: there is no right way to do the wrong thing. So, what did I get wrong?

When I first bought Ferrexpo at 1.7, back in September 2012, I saw it rapidly appreciate to 2.7 in February 2013. It was at that time that the first big downward revisions in iron ore price estimates started to surface, first from BREE (the Australian Bureau of Resources and Energy Economics), then from Rio Tinto’s chief economist, then from about everybody else and becoming consensus. As a result, in less than a month Ferrexpo’s stock price was back to where I had bought it. There I made my big mistake. The lower price forecasts were based on slowing Chinese demand and increased supply from the big three producers. But I dismissed the second point, on the assumption that the big three would not be so irrational. Therein lies the big lesson: it is wrong to assume that the turn of events will necessarily follow its most rational course – we’ve had a flood of evidence on this from this year’s major political developments!

Once it became clear, by 2013, that the big three were set on their expansion strategy and were not going to change their mind any time soon, I should have changed my mind and got out of Ferrexpo at close to par. I didn’t: neglecting my own precepts, I failed to acknowledge the multiplicative nature of evidence accumulation, whereby even a single piece of strong disconfirmative evidence can be enough to overwhelm any amount of confirmative evidence on the other side of the hypothesis. Confirmation Bias, right there in my own face! Going public with my Ferrexpo pick – first at valueconferences.com in October ’13 and then at the Trani Seminar the following summer – did not help. So I have promised myself not to fall into that trap again, not by avoiding public picks, which are a useful and worthy exercise, but by refusing to abide by them (so far so good with my subsequent picks).

I am still invested in Cliffs. I was right in expecting the iron ore price upturn. But iron ore futures are still predicting a drop to 50-60 dollars in the next couple of years. China is likely to slow down steel production and the US is likely to limit steel imports – notice the big jump in Cliffs’ stock price after Trump’s election: one more irony in this mind-blowing saga. The pellet premium – both Cliffs and Ferrexpo produce energy-efficient iron ore pellets, rather than lump and fines – should stay up, given China’s pressing need to combat air pollution.

But please don’t take my word for it.

Dec 15 2016

Ronald Fisher was sceptical about the lady’s tea-tasting prowess. But he was prepared to change his mind. If the lady could correctly identify 8 cups, he was willing to acknowledge her ability, or – in his unnecessarily convoluted but equivalent wording – to admit that her ability could not be disproved. He would have done the same with one mistake in 12 cups. In his mind, such a procedure had nothing to do with scepticism. He would not let his subjective beliefs taint his conclusions: only data should decide.

This is a misconception. Data can do nothing by themselves. We interpret data into explanations and decide to accept or reject hypotheses, based on our standard of proof (PO), evaluation of evidence (LR) and prior beliefs (BO).

There is hardly a more appropriate illustration of this point than Fisher’s astonishing smoking blunder.

By the time of his retirement in 1957, Sir Ronald Aylmer Fisher (he had been knighted by the newly crowned Elizabeth II in 1952) was a world-renowned luminary in statistics and biology. In the same year, he wrote a letter to the British Medical Journal, in response to an article that had appeared there a month earlier on the ‘Dangers of Cigarette-Smoking’. Fisher’s letter was titled ‘Alleged Dangers of Cigarette-Smoking‘:

Your annotation on “Dangers of Cigarette-smoking” leads up to the demand that these hazards “must be brought home to the public by all the modern devices of publicity”. That is just what some of us with research interests are afraid of. In recent wars, for example, we have seen how unscrupulously the “modern devices of publicity” are liable to be used under the impulsion of fear; and surely the “yellow peril” of modern times is not the mild and soothing weed but the original creation of states of frantic alarm.

A common “device” is to point to a real cause for alarm, such as the increased incidence of lung cancer, and to ascribe it in urgent terms to what is possibly an entirely imaginary cause. Another, also illustrated in your annotation, is to ignore the extent to which the claims in question have aroused rational scepticism.

Amazing. Fisher’s scepticism at work again. But this time he would not change his mind. To our eyes, this is utterly dumbfounding. Today we know: the evidence accumulated by epidemiological research in establishing tobacco smoking as a major cause of multiple diseases is overwhelming, conclusive and undisputed. But it took a while.

This is from 1931. This, featuring a well-known actor, is from 1949:

And this is the other Ronnie in 1956.

Public attitudes to smoking have changed a great deal over the last hundred years – especially in the last few decades. I remember – to my horror – having to bear with my colleagues’ “right” to smoke next to me in the open space where I worked in the ’90s. And what about smoking seats in the back of airplanes? (By the way, why do some airlines still remind passengers, before departure, that ‘This is a non-smoking flight’?).

The adverse health effects of smoking were not as clear in the ’50s as they are today. But there was already plenty of evidence. Granted, the earlier studies had an unsavoury source: Adolf Hitler – a heavy smoker in his youth – had later turned into an anti-tobacco fanatic and initiated the first public anti-smoking campaign in Nazi Germany – that’s where the term ‘passive smoking’ (Passivrauchen) comes from. The link between tobacco and lung cancer was first identified by German doctors. But by 1956 it had been well established by the British Doctors’ Study, building on the pioneering research of Doll and Hill.

How could Fisher – a world authority in the evaluation of statistical evidence! – be so misguided? Let’s see. As a start, he was sceptical about the harmfulness of smoking. Did being a lifelong cigarette and pipe smoker have anything to do with it? Of course. But not – of course – if you asked him: he was prior indifferent – ready to let the data decide. His aggressive, confrontational character – he was apparently intolerant to contradiction and hated being wrong on any subject – did not help either. Besides, he was being paid some – probably small – fee by the tobacco industry. None of this, however, is very important. It is fair to assume that, as in the tea tasting experiment, Fisher was open to let evidence overcome his scepticism. The crucial point is that his scepticism (and stubbornness), combined with a suitably high standard of proof, meant that the amount of evidence he required in order to change his mind was likely to be particularly large.

Here is where Fisher failed: answering Popper’s question. The more sceptical we are about the truth of a hypothesis, the more we should try to verify it. But he didn’t: rather than taking stock of all the evidence that had already been accumulated on the harmful effects of smoking, Fisher largely ignored it, focusing instead on implausible alternatives – What if lung cancer causes smoking? What if there is a genetic predisposition to smoking and lung cancer? – with no indication of how to explore them. And he nitpicked on minor issues with the data, blowing them out of proportion in order to question the whole construct.

The parallel with today’s global warming debate is evident, indicative and disturbing. It is what happens with all the weird beliefs we have been examining in this blog. The multiplicative nature of evidence accumulation is such that there are always two main ways to undermine a hypothesis: ignore or neglect confirmative evidence and look for conclusive counterevidence – dreaming it up or fabricating it when necessary.

What evidence do you require that would be enough to change your mind? Popper’s question invokes searching for disconfirmative evidence when we are quite convinced that a hypothesis is true. But if, on the contrary, we believe that the hypothesis is likely to be false, what is called for is a search for confirmative evidence – as much as we need to potentially overturn our initial conviction. Popper himself failed to draw this distinction, with his unqualified emphasis on falsifiability. And Fisher followed suit, as reflected in his null hypothesis flip and his verbal contortions to avoid saying ‘prove’, ‘accept’ or ‘verify’. The result was his embarrassing counter-anti-smoking tirade – definitely not the best end to a shining academic career.

PO=LR∙BO. Get any of these wrong and you are up for weird beliefs, embarrassing mistakes and wrong decisions. It can happen to anyone.

Dec 08 2016

Fisher’s 5% significance criterion can be derived from Bayes’ Theorem under error symmetry and prior indifference, with a 95% standard of proof.

Let’s now ask: What happens if we change any of these conditions?

We have already seen the effect of relaxing the standard of proof. While, with a 95% standard, TPR needs to be at least 19 times FPR, with a 75% standard 3 times will suffice. Obviously, the lower our standard of proof the more tolerant we are towards accepting the hypothesis of interest.

Error symmetry is an incidental, non-necessary condition. What matters for significance is that TPR is at least rPO times FPR. So, with rPO=19 it just happens that rTPR=19∙5%=95%=PP. But, for example, with FPR=4% any TPR above 76% – i.e. any FNR below 24% – would do (not, however, with FPR=6%, which would require TPR>1).
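In code, the condition is a one-liner. A quick Python sketch (the function name is mine):

```python
def rtpr(rpo, fpr):
    """Significance requires TPR to be at least rPO times FPR."""
    return rpo * fpr

print(rtpr(19, 0.05))  # 0.95: here, incidentally, rTPR = PP (error symmetry)
print(rtpr(19, 0.04))  # 0.76: any TPR above 76%, i.e. any FNR below 24%, will do
print(rtpr(19, 0.06))  # 1.14: impossible -- no TPR is high enough
```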

Prior indifference, on the other hand, is crucially important. Remember we assumed it at the start of our tea-tasting story, when we gave the lady a 50/50 initial chance that she might be right: BR=50%. Hence BO=1 and Bayes’ Theorem in odds form simplifies to PO=LR. Now it is time to ask: does prior indifference make sense?

Fisher didn’t think so. According to his daughter’s biography, his initial reaction to the lady’s claim that she could spot the difference between the two kinds of tea was: ‘Nonsense. Surely it makes no difference’. Such scepticism would call for a lower BO – not as low as zero, in deference to Cromwell’s rule, but clearly much lower than 1. But Fisher would have adamantly opposed it. For him there was no other value for BO but 1. This was not out of polite indulgence towards the lady, but because of his lifelong credo: ‘I shall not assume the truth of Bayes’ axiom’ (The Design of Experiments, p. 6).

Such a stalwart stance was based on ‘three considerations’ (p. 6-7):

1. Probability is ‘an objective quantity measured by observable frequencies’. As such, it cannot be used for ‘measuring merely psychological tendencies, theorems respecting which are useless for scientific purposes.’

This is the seemingly interminable and incredibly vacuous dispute between the objective and the subjective interpretations of probability. It is the same reason why Fisher ignored TPR: if it cannot be quantified, one might as well omit it. He was plainly wrong. As Albert Einstein did not say: ‘Not everything that can be counted counts, and not everything that counts can be counted.’ Science is the interpretation of evidence arranged into explanations. It relies on hard as well as soft evidence. Hard evidence is the result of a controlled, replicable experiment, generating objective, measurable probabilities grounded on empirical frequencies. Soft evidence is everything else: any sign that can help the observer’s effort to evaluate whether a hypothesis is true or false. Such effort arises from a primal need that long predates any theory of probability. The interpretation of soft evidence is inherently subjective. But, contrary to Fisher’s view, there is nothing unscientific about it: subjective probability can be laid out as a complete and coherent theory.

2. ‘My second reason is that it is the nature of an axiom that its truth should be apparent to any rational mind which fully apprehends its meaning. The axiom of Bayes has certainly been fully apprehended by a good many rational minds, including that of its author, without carrying this conviction of necessary truth. This, alone, shows that it cannot be accepted as the axiomatic basis of a rigorous argument.’

This is downright bizarre. First, Bayes’ is a theorem, not an axiom. Second, it is a straightforward consequence of two straightforward definitions. Hence, it is obviously and necessarily true. But somehow Fisher didn’t see it this way. He even believed that Reverend Bayes himself was not completely convinced about it, and that was the reason why he left his Essay unpublished. He had no evidence to support this claim – it was just a prior belief!

3. ‘My third reason is that inverse probability has been only very rarely used in the justification of conclusions from experimental facts’.

This is up there with Decca executive’s Beatles rejection: ‘We don’t like their sounds. Groups of guitars are on the way out’.

Fisher’s credo was embarrassingly wrong. But it doesn’t matter: whether one believes it or not, we are all Bayesian. We all have priors. Ignoring them only means that we are inadvertently assuming prior indifference: BR=50% and BO=1, with all its potentially misleading consequences. Rather than pretending they do not exist, we should try to get our priors right.

So let’s go back to Fisher’s sceptical reception of the lady’s claim. We might at first interpret his prior indifference as neutral open mindedness, expressing perfect ignorance. Maybe the lady is skilled, maybe not – we just don’t know: let’s give her a 50/50 chance and let the data decide. But hang on. Would we say the same and use the same amount of data if the claim had been much more ordinary – e.g. spotting sugar in the tea, or distinguishing between Darjeeling and Earl Grey? And what would we do, on the other hand, with a truly outlandish claim – e.g. spotting whether the tea contains one or two grains of sugar, or whether it has been prepared by a right-handed or a left-handed person? Surely, our priors would differ and we would require much more evidence to test the extraordinary claim than the ordinary one: extraordinary claims require extraordinary evidence.

Prior indifference may be appropriate for a fairly ordinary claim. But the more extraordinary the claim, the lower should be our prior belief and, therefore, the larger should be the amount of confirmative evidence required to satisfy a given standard of proof. For example, let’s share Fisher’s scepticism and halve BR to 25%, hence BO=1/3. Now, with a 95% standard of proof, we have rTPR=3∙19∙(1/70) in case of a perfect choice: it is three times as much as under prior indifference, but we can still accept the hypothesis. More so, obviously, if we relax the standard of proof to 75%. But notice in this case that with one mistake we now have rTPR=3∙3∙(17/70), which is higher than 1. So, while with prior indifference one mistake would still be clear and convincing evidence of some ability, starting with a sceptical prior would lead us to a rejection – coinciding again with Fisher’s conclusion. Did Fisher have an indifferent prior and a 95% standard of proof, or a sceptical prior and a 75% standard? Neither, if we asked him: he shunned priors. But in reality it is either (and – given his scepticism – more likely the latter): we are all Bayesian. In fact, Fisher’s 5% threshold is compatible with various combinations of priors and standards of proof. For instance, BR=25% and an 85% standard give rTPR=85%, where again, incidentally, PP=TPR.

(Note: Prior indifference and error symmetry are sufficient but not necessary conditions for PP=TPR. The necessary condition is BO=FPR/FNR).

Finally, let’s see what happens as we gather more evidence. Remember Fisher ran the experiment with 8 cups. If the lady made no mistake, he accepted the hypothesis that she had some ability; with one or more mistakes, he rejected it. Lowering the standard of proof would tolerate one mistake. But a lower prior would again mean rejection. We can however hear the lady’s protest: Come on, that was one silly mistake – I got distracted for a moment. Give me 4 more cups and I will show you: no more mistakes. As Bayesians, we consent: we allow new evidence to change our mind.

So let’s re-run the experiment with 12 tea cups (The Design of Experiments, p. 21). Now the probability of no mistake goes down to 1/924 (as 12!/[6!(12-6)!]=924), the probability of one mistake to 36/924 (as there are 6×6 ways to choose 5 right and 1 wrong cups), and the probability of one or no mistake to 37/924. Obviously, with no mistakes on 12 cups, the lady’s ability is even more apparent. But now, under prior indifference, one mistake satisfies Fisher’s 5% criterion, as 37/924=4%, and is compatible with a 95% standard of proof, as rTPR=19∙(37/924) is lower than 1. Hence we accept the hypothesis even if the lady makes one mistake. Not so, however, if we start with a sceptical prior: for that we need a 75% standard. Or we need to extend the experiment to 14 cups (by now you know what to do).
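Here is a Python sketch of the whole decision rule (names mine), combining the hypergeometric p-value with priors and standard of proof; each line reproduces one of the cases above:

```python
from math import comb

def acceptance_feasible(cups, mistakes, br, pp):
    """True if the required TPR, rTPR = (rPO/BO) * p-value, is attainable
    (i.e. does not exceed 1) for the given prior BR and standard of proof PP."""
    k = cups // 2
    p_value = sum(comb(k, j) ** 2 for j in range(mistakes + 1)) / comb(cups, k)
    bo = br / (1 - br)        # base odds
    rpo = pp / (1 - pp)       # required posterior odds
    return (rpo / bo) * p_value <= 1

print(acceptance_feasible(12, 1, 0.50, 0.95))  # True:  19*(37/924) < 1
print(acceptance_feasible(12, 1, 0.25, 0.95))  # False: 57*(37/924) > 1
print(acceptance_feasible(12, 1, 0.25, 0.75))  # True:   9*(37/924) < 1
print(acceptance_feasible(14, 1, 0.25, 0.95))  # True:  57*(50/3432) < 1
```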

To summarise (A=Accept, R=Reject):
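(Table reconstructed from the cases above: indifferent prior BR=50% vs sceptical prior BR=25%, standards of proof 95% vs 75%.)

                       BR=50%            BR=25%
                     95%     75%       95%     75%
8 cups, 0 mistakes    A       A         A       A
8 cups, 1 mistake     R       A         R       R
12 cups, 1 mistake    A       A         R       A
14 cups, 1 mistake    A       A         A       A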

There is more to testing a hypothesis than 5% significance. The decision to accept or reject it is the result of a fine balance between standard of proof, evidence and priors: PO=LR∙BO.

Nov 12 2016

‘The lady tasting tea’ is one of the most famous experiments in the history of statistics. Ronald Fisher told the story in the second chapter of The Design of Experiments, published in 1935 and considered since then the Bible of experimental design. Apparently, the lady was right: she could easily distinguish the two kinds of tea. We don’t know the details of the impromptu experiment, but on his subsequent reflection Fisher agreed that ‘an event which would occur by chance only once in 70 trials is decidedly “significant”’ (p. 13). At the same time, however, he found it ‘obvious’ that ‘3 successes to 1 failure, although showing a bias, or deviation, in the right direction, could not be judged as statistically significant evidence of a real sensory discrimination’ (p. 14-15). His reason:

It is usual and convenient for experimenters to take 5 per cent as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the great part of the fluctuations which chance causes have introduced into their experimental results. (p. 13).

Statistically significant at the 5% level: that’s where it all started – the most used, misused and abused criterion in the history of statistics.

Where did it come from? Let’s follow Fisher’s train of thought. Remember what we said about the Confirmation Bias: we cannot look at TPR without looking at its associated FPR. Whatever the result of an experiment, there is always a probability, however small, that it is just a product of chance. How small – asked Fisher – should that probability be for us to be comfortable that the result is not the product of chance? How small – in our framework – should FPR be? 5% – said Fisher. If FPR is lower than 5% – as it is with a perfect cup choice – we can safely conclude that the result is significant, and not a chance event. If FPR is above 5% – as it is with 3 successes and 1 failure – we cannot. That’s it – no further discussion. What about TPR? Shouldn’t we look at FPR in relation to TPR? Not according to Fisher: FPR – the probability of observing the evidence if the hypothesis is false – is all that matters. So much so that, with a bewildering flip, he pretended that the hypothesis under investigation was not that the lady could taste the difference, but its opposite: that she couldn’t. He called it the null hypothesis. After the flip, his criterion is: if the probability of the evidence, given that the null hypothesis is true – he calls it the p-value – is less than 5%, the null hypothesis is ‘disproved’ (p. 16). If it is above 5%, it is not disproved.

Why such an awkward twist? Because – said Fisher – only the probability of the evidence under the hypothesis of no ability can be calculated exactly, according to the laws of chance. Under the null hypothesis, the probability of a perfect choice is 1/70, the probability of 3 successes and 1 failure is 16/70, and so on. Whereas the probability of the evidence under the hypothesis of some ability cannot be calculated exactly, unless the ability level is specified. For instance, under perfect ability, the probability of a perfect choice is 100% and the probability of any number of mistakes is 0%. But how can we quantify the probability distribution under the hypothesis of some unspecified degree of ability – which, despite Fisher’s contortions, is the hypothesis of interest and the actual subject of the enquiry? We can’t. And if we can’t quantify it – seems to be the conclusion – we might as well SUTC it: Sweep it Under The Carpet.
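
For what it’s worth, the exact null distribution Fisher relied on takes only a few lines of code: under ‘no ability’ every choice of 4 cups out of 8 is equally likely, so the number of mistakes follows a hypergeometric distribution. A minimal sketch:

```python
from math import comb

# 8 cups, 4 of each kind: under 'no ability' every choice of 4 cups out of 8
# is equally likely, so the number of mistakes is hypergeometric
total = comb(8, 4)                                   # 70
for mistakes in range(5):
    ways = comb(4, 4 - mistakes) * comb(4, mistakes)
    print(f"{mistakes} mistakes: {ways}/{total}")
# 0: 1/70, 1: 16/70, 2: 36/70, 3: 16/70, 4: 1/70
```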

How remarkable. This is the Confirmation Bias’s mirror image. The Confirmation Bias fixates on TPR and forgets about FPR. Fisher’s Bias does the opposite: it focuses on FPR – because it can be calculated – and disregards TPR – because it can’t. The resulting mistake is the same: both ignore that what matters is not how high TPR is, or how low FPR is, but how the two relate to each other.

To see this, let’s assume for a moment that the hypothesis under investigation is ‘perfect ability’ versus ‘no ability’ – no middle ground. In this case, as we just said, TPR=1 for a perfect choice and 0 otherwise. Hence, under prior indifference, we have PO=LR=1/FPR or PO=0. Fisher agrees:

If it were asserted that the subject would never be wrong in her judgements we should again have an exact hypothesis, and it is easy to see that this hypothesis could be disproved by a single failure, but could never be proved by any finite amount of experimentation. (p. 16).

As we have seen, with a perfect choice over 8 cups we have FPR=1/70 and therefore PO=70, i.e. PP=98.6% (remember PO=PP/(1-PP)). True, it is not conclusive evidence that the lady is infallible – as the number of cups increases, FPR tends to zero but never reaches it – but to all intents and purposes we are virtually certain that she is. Fisher may abuse her patience and feed her a few more cups, and insist that all he did was disprove the claim that the lady was just lucky. In fact, all he required to do so was FPR<5%, i.e. PO>20 and PP>95.2% – a verdict beyond reasonable doubt. On the other hand, even a single mistake provides conclusive evidence to disprove the hypothesis that the lady is infallible – in the same way that a black swan disproves the hypothesis that all swans are white: TPR=0, hence PO=0 and PP=0%, irrespective of FPR.

Let’s now ask: What happens if we replace ‘perfect ability’ with ‘some ability’? The alternative hypothesis is still ‘no ability’, so FPR stays the same. The difference is that we cannot exactly quantify TPR. But we don’t need to. All we need to do is define the level of PP required to accept the hypothesis. This gives us a required PO – let’s call it rPO – which, given FPR, implies a required level of TPR: rTPR=rPO∙FPR. Let’s say for example that the required PP is 95%. Then rPO=19 and rTPR=19∙FPR. Hence, in case of a perfect choice, rTPR=19∙(1/70). At this point all we need to ask is: are we comfortable that the probability of a perfect choice, given that the lady has some ability, is at least 19/70? Remember that the same probability under no ability is 1/70 and under perfect ability is 70/70. If the answer is yes – as it should reasonably be – we accept the hypothesis. There is no need to know the exact value of TPR, as long as we are comfortable that it exceeds rTPR. On the other hand, if the lady makes one mistake we have rTPR=19∙(17/70): the required probability of one or no mistake, given some ability, exceeds 100%. Hence we reject the hypothesis.
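
Here is the same logic as a minimal sketch, for the 95% standard (the names are mine):

```python
from math import comb

total = comb(8, 4)                      # 70
fpr_perfect = 1 / total                 # perfect choice under no ability
fpr_le_one  = 17 / total                # one or no mistake: (1 + 16)/70

rPO = 19                                # required PO for a 95% standard of proof
print(round(rPO * fpr_perfect, 2))      # 0.27 -> plausibly exceeded: accept
print(round(rPO * fpr_le_one, 2))       # 4.61 -> impossible, as TPR <= 1: reject
```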

This coincides with Fisher’s conclusion, as 1/70 is below 5% and 17/70 is above. But what happens if we lower rPO? After all, 95% is a very high standard of proof: do we really need to be sure beyond reasonable doubt that the lady has some tea tasting ability? What if we are happy with 75%, i.e. rPO=3? In this case, rTPR=3∙(1/70) for a perfect choice – a comfortable requirement, close to no ability. But now with one mistake we have rTPR=3∙(17/70). This is about two thirds of the way between no ability (17/70) and perfect ability (70/70 – remember we need to consider the cumulative probability of one or no mistake: under perfect ability, this is 0%+100%). We may or may not feel comfortable with such a high level, but if we do then we must conclude that there is clear and convincing evidence that the lady has some ability, despite her mistake.

For illustration purposes, let’s push this argument to the limit and ask: what if we lower the standard of proof all the way down to 50%, i.e. rPO=1? In this case, all we would need in order to grant the lady some ability is a preponderance of evidence. This comfortably covers one mistake, and may even allow for two mistakes, as long as we accept rTPR=53/70 (notice there are 6×6 ways to choose 2 right and 2 wrong cups).

This may well be too lenient. But the point is that, as soon as we explicitly relate FPR to TPR, we are able to place Fisher’s significance criterion in a proper context, where his 5% threshold is not a categorical standard but one choice within a spectrum of options. In fact, once viewed in this light, we can see where Fisher’s criterion comes from.

Fisher focused on the probability of the evidence, given the null hypothesis, stating that a probability of less than 5% was small enough to be comfortable that the evidence was ‘significant’ and not a chance event. But why did he then proceed to infer that such significant evidence disproved the null hypothesis? That is: why did he conclude that the probability of the null hypothesis, given significant evidence, was small enough to disprove it? As we know (very well by now!), the probability of E given H is not the same as the probability of H given E. Why did Fisher seem to commit the Inverse Fallacy?

To answer this question, remember that the two probabilities are the same under two conditions: symmetric evidence and prior indifference. Under error symmetry, FPR=FNR. Hence, in our framework, where the hypothesis of no ability is the alternative to the tested hypothesis of some ability, FPR=5% implies TPR=95%, and therefore PO=19 and PP=TPR=95%. The result is the same in Fisher’s framework, where the two hypotheses are – unnecessarily and confusingly – flipped around: FPR becomes FNR, and the null hypothesis is rejected if FNR is less than 5%.

Since Fisher could not quantify TPR, he avoided any explicit consideration of FNR=1-TPR and of its relation with FPR – symmetric or otherwise. But his rejection of the null hypothesis required it: in the same way as we should avoid the Confirmation Bias – accepting a hypothesis based on a high TPR, without relating it to its associated FPR – we need to avoid Fisher’s Bias: accepting a hypothesis – or, as Fisher would have it, disproving a null hypothesis – based on a low FPR, without relating it to its associated TPR.

What level of TPR did Fisher associate with his 5% FPR threshold? We don’t know – and probably neither did he. All he said was that a p-value of less than 5% was low enough to disprove the null hypothesis. Ever since, Fisher’s Bias has been a source of immeasurable confusion. Assuming symmetry, FPR<5% has been commonly taken to imply PP>95%: ‘We have tested our theory and found it significant at the 5% level: therefore, there is only a 5% probability that we are wrong.’

Fisher would have cringed at such a statement. But his emphasis on significance inadvertently encouraged it. Evidence is not significant or insignificant according to whether FPR is below or above 5%. It is confirmative or disconfirmative according to whether LR is above or below 1, i.e. TPR is above or below FPR. What matters is not the level of FPR per se, but its relation to TPR. Confirmative evidence increases the probability that the hypothesis of interest is true, and disconfirmative evidence decreases it. We accept the hypothesis of interest if we have enough evidence to move LR beyond the threshold required by our standard of proof. Only then can we call such evidence ‘significant’. So, if our threshold is 95%, then, under error symmetry and prior indifference, FPR<5% implies TPR=PP>95%. There is no fallacy: TPR – the probability of E given H – is equal to PP – the probability of H given E. And 5% significance does mean that we have enough confirmative evidence to decide that the hypothesis of interest has indeed been proven beyond reasonable doubt.

This is where Fisher’s 5% criterion comes from: Bayes’ Theorem under error symmetry and prior indifference, with a 95% standard of proof. Fisher ignored TPR, because he could not quantify it. But TPR cannot be ignored – or rather: it is there, whether one ignores it or not. Fisher’s criterion implicitly assumes that TPR is at least 95%. Without this assumption, a 5% ‘significance’ level cannot be used to accept the hypothesis of interest. Just like the Confirmation Bias consists in ignoring that a high TPR means nothing without a correspondingly low FPR, Fisher’s Bias consists in ignoring that a low FPR means nothing without a correspondingly high TPR.
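
To make the reconstruction concrete, here is a minimal sketch under the two stated assumptions:

```python
fpr = 0.05              # Fisher's significance threshold
tpr = 1 - fpr           # error symmetry: FNR = FPR, hence TPR = 95%
bo  = 1.0               # prior indifference: BO = 1
po  = (tpr / fpr) * bo  # PO = LR * BO = 19
pp  = po / (1 + po)     # PP = 95%: proof beyond reasonable doubt
print(round(po), round(pp, 2))   # 19 0.95
```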

Oct 30 2016
 

The Made in Italy Fund started in May. It is up 7%, with the Italian market down 1%. It is a good time to go back to Hypothesis Testing.

We ask ourselves questions and give ourselves answers in response to thaumazein: wonder at what there is. Our questions spring from our curiosity. Our answers are grounded on evidence.

As David Hume and then Immanuel Kant made clear, all our answers are based on evidence. Everything we know cannot but be phenomena that we experience as evidence. Even Kant’s synthetic a priori propositions – like those of geometry and mathematics – are ultimately based on axioms that we regard as self-evidently true.

The interpretation of evidence arranged into explanations is what we call Science – knowledge that results from separating true from false. Science is based on observation – evidence that we preserve and comply with. We know that the earth rotates around the sun because we observe it, in the same way that Aristotle and Ptolemy knew that the sun rotated around the earth. We are right and they were wrong but, like us, they were observing and interpreting evidence. So were the ancient Greeks, Egyptians, Indians and Chinese when they concluded that matter consisted of four or five elements. And when the Aztecs killed children to provide the rain god Tlaloc with their tears, their horrid lunacy – widespread in ancient times – was not a fickle mania, but the result of an age-old accumulation of evidence indicating that the practice ‘worked’ and was therefore worth preserving. So was divination – the interpretation of multifarious signs sent by gods to humans.

While we now cringe at human sacrifice and laugh at divination, it is wrong to simply dismiss them as superseded primitiveness. Since our first Why?, humankind’s only way to answer questions is by making sense of evidence. Everything we say is some interpretation of evidence. Everything we say is science.

Contrary to Popper’s view, there is no such thing as non-science. The only possible opposition is between good science and bad science. Bad science derives from a wrongful interpretation of evidence, leading to a wrongful separation of true and false. This in turn comes from neglecting or underestimating the extent to which evidence can be deceitful. Phenomena do not come to us in full light. What there is – what we call reality – is not always as it appears. Good science derives from paying due attention to the numerous perils of misperception. Hence the need to look at evidence from all sides and to collect plenty of it, analyse it, reproduce it, probe it, disseminate it and – crucially – challenge it, i.e. look for new evidence that may conflict with and possibly refute the prevailing interpretation. This is the essence of what we call the Scientific Revolution.

Viewed in this light, the obvious misperceptions behind the belief in the effectiveness of sacrifice and divination bear an awkward resemblance to the weird beliefs examined in many of my posts. Why did Steve Jobs try to cure his cancer with herbs and psychics? Why do people buy homeopathic medicines (and Boiron is worth 1.6 billion euro)? Why do people believe useless experts? Why did Othello kill Desdemona? Why did Arthur Conan Doyle believe in ghosts? Why did 9/11 truthers believe it was a conspiracy? Why do Islamists promote suicide bombing? It is tempting to call it lunacy. But it isn’t. It is misinterpretation of evidence.

The most pervasive pitfall in examining available evidence is the Confirmation Bias: focusing on evidence that supports the hypothesis under investigation, while neglecting, dismissing or obfuscating evidence that runs contrary to it. A proper experiment, correctly gathering and analysing the relevant evidence, can easily show the ineffectiveness of homeopathic medicine – in the same way as it would show the ineffectiveness of divination and sacrifice (however tricky it would be to test the Tlaloc hypothesis).

In our framework, PO=LR∙BO, where LR=TPR/FPR is the Likelihood Ratio: the ratio between the True Positive Rate – the probability of observing the evidence if the hypothesis is true – and the False Positive Rate – the probability of observing the same evidence if the hypothesis is false. The Confirmation Bias consists in paying attention to TPR, especially when it is high, while disregarding FPR. As we know, it is a big mistake: what matters is not just how high TPR is, but how high it is relative to FPR. We say evidence is confirmative if LR>1, i.e. TPR>FPR, and disconfirmative if LR<1. LR>1 increases the probability that the hypothesis is true; LR<1 decreases it. We cannot look at TPR without at the same time looking at FPR.
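
The whole framework fits in two small functions – a minimal sketch, with function names of my own choosing:

```python
def posterior_odds(tpr: float, fpr: float, base_odds: float = 1.0) -> float:
    """PO = LR * BO, where LR = TPR / FPR."""
    return (tpr / fpr) * base_odds

def posterior_probability(po: float) -> float:
    """PP = PO / (1 + PO)."""
    return po / (1 + po)

# Confirmative evidence (LR > 1) raises the odds; disconfirmative (LR < 1) lowers them
print(round(posterior_odds(tpr=0.9, fpr=0.3), 2))   # 3.0: the odds triple
```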

How high does the probability of a hypothesis have to be for us to accept that it is true? Equivalently, how low does it have to be for us to reject the hypothesis, or to accept that it is false?

As we have seen, there is no single answer: it depends on the specific standard of proof attached to the hypothesis and on the utility function of the decision maker. For instance, if the hypothesis is that a defendant is guilty of a serious crime, a jury needs a very high probability of guilt – say 95% – before convicting him. On the other hand, if the hypothesis is that an airplane passenger is carrying a gun, a small probability – say 5% – is all a security guard needs in order to give the passenger a good check. Notice that in neither case is the decision maker saying that the hypothesis is true. What he is saying is that the probability is high enough for him to act as if the hypothesis were true. Such a threshold is known as the significance level, and the accumulated evidence that allows the decision maker to surpass it is itself called significant. We say that there is significant evidence to convict the defendant if, in the light of such evidence, the probability of guilt exceeds 95%. In the same way, we say that there is significant evidence to frisk the passenger if, in the light of the available evidence, the probability that he carries a gun exceeds 5%. In practice, we call the defendant ‘guilty’ but, strictly speaking, that is not what we are saying – in the same way that we are not saying that he is ‘innocent’ or ‘not guilty’ if the probability of guilt is below 95%. Even more so, we are not saying that the passenger is a terrorist. What matters is the decision – convict or acquit, frisk or let go.

With this proviso, let’s examine the standard case in which we want to decide whether a certain claim is true or false. For instance, a lady we are having tea with tells us that tea tastes different depending on whether milk is poured into the cup before or after the tea. She says she can easily spot the difference. How can we decide if she is telling the truth? Simple: we prepare a few cups of tea, half one way and half the other, and ask her to say which is which. Let’s say we make 8 cups, and tell her that 4 are made one way and 4 the other way. She tastes them one after the other and, wow, she gets them all right. Surely she’s got a point?

Not so fast. Let’s define:

H: The lady can taste the difference between the two kinds of tea.

E: The lady gets all 8 cups right.

Clearly, TPR – the probability of E given H – is high. If she’s got the skill, she probably gets all her cups right. Let’s even say TPR=100%. But we are not Confirmation-biased: we know we also need to look at FPR. So we must ask: what is the probability of E given not-H, i.e. if the lady was just lucky? This is easy to calculate: there are 8!/[4!(8-4)!]=70 ways to choose 4 cups out of 8, and there is only one way to get them all right. Therefore, FPR=1/70. This gives us LR=70. Hence PO – the odds of H in the light of E – is 70 times the Base Odds. What is BO? Let’s say for the moment we are prior indifferent: the lady may be skilled, she may be deluded – we don’t know. Let’s give her a 50/50 chance: BR=50%, hence BO=1 and PO=70. Result: PP – the probability that the lady is skilled, in the light of her fully successful choices – is 99%. That’s high enough, surely.

But what if she made one mistake? Notice that, while there is only one way to be fully right, there are 4 ways to make 3 right choices out of 4, and 4 ways to make 1 wrong choice out of 4. Hence, there are 4×4 ways to choose 3 right and 1 wrong cups. Therefore, FPR=16/70 and LR=4.4. Again assuming BO=1, this means PP=81%. Adding the 1/70 chance of a perfect choice, the probability of one or no mistake out of mere chance is 17/70 and LR=4.1, hence PP=80%. Is that high enough?
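
Before answering, a quick check of the numbers – a minimal sketch under the generous assumptions TPR=100% and BO=1:

```python
from math import comb

total = comb(8, 4)          # 70 equally likely guesses under no ability
for label, fpr in [("no mistakes", 1 / total), ("one or no mistake", 17 / total)]:
    po = 1.0 / fpr          # PO = LR * BO, with TPR = 1 and BO = 1
    pp = po / (1 + po)
    print(label, round(pp, 2))   # no mistakes 0.99; one or no mistake 0.8
```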

I would say yes. But Ronald A. Fisher – the dean of experimental design and one of the most prominent statisticians of the 20th century – would have none of it.

More in the next post.
