Dec 08 2016
 

Fisher’s 5% significance criterion can be derived from Bayes’ Theorem under error symmetry and prior indifference, with a 95% standard of proof.

Let’s now ask: What happens if we change any of these conditions?

We have already seen the effect of relaxing the standard of proof. While, with a 95% standard, TPR needs to be at least 19 times FPR, with a 75% standard 3 times will suffice. Obviously, the lower our standard of proof the more tolerant we are towards accepting the hypothesis of interest.

Error symmetry is an incidental, non-necessary condition. What matters for significance is that TPR is at least rPO times FPR. So, with rPO=19 it just happens that rTPR=19∙5%=95%=PP. But, for example, with FPR=4% any TPR above 76% – i.e. any FNR below 24% – would do (not, however, with FPR=6%, which would require TPR>1).
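
Here is a minimal Python sketch of the requirement rTPR = rPO∙FPR under prior indifference (for illustration only; the variable names mirror the notation above):

```python
# A minimal illustration of rTPR = rPO * FPR under prior indifference (BO = 1),
# for a 95% standard of proof, i.e. required posterior odds rPO = 19.
rPO = 0.95 / (1 - 0.95)   # = 19

for FPR in (0.05, 0.04, 0.06):
    rTPR = rPO * FPR
    note = "" if rTPR <= 1 else " (impossible: TPR cannot exceed 1)"
    print(f"FPR = {FPR:.0%}: required TPR >= {rTPR:.0%}{note}")

# FPR = 5%: required TPR >= 95%   (the symmetric case, where PP = TPR)
# FPR = 4%: required TPR >= 76%   (any FNR below 24% will do)
# FPR = 6%: required TPR >= 114%  (impossible: TPR cannot exceed 1)
```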

Prior indifference, on the other hand, is crucially important. Remember we assumed it at the start of our tea-tasting story, when we gave the lady a 50/50 initial chance that she might be right: BR=50%. Hence BO=1 and Bayes’ Theorem in odds form simplifies to PO=LR. Now it is time to ask: does prior indifference make sense?

Fisher didn’t think so. According to his daughter’s biography, his initial reaction to the lady’s claim that she could spot the difference between the two kinds of tea was: ‘Nonsense. Surely it makes no difference’. Such scepticism would call for a lower BO – not as low as zero, in deference to Cromwell’s rule, but clearly much lower than 1. But Fisher would have adamantly opposed it. For him there was no other value for BO but 1. This was not out of polite indulgence towards the lady, but because of his lifelong credo: ‘I shall not assume the truth of Bayes’ axiom’ (The Design of Experiments, p. 6).

Such a stalwart stance was based on ‘three considerations’ (pp. 6-7):

1. Probability is ‘an objective quantity measured by observable frequencies’. As such, it cannot be used for ‘measuring merely psychological tendencies, theorems respecting which are useless for scientific purposes.’

This is the seemingly interminable and incredibly vacuous dispute between the objective and the subjective interpretations of probability. It is the same reason why Fisher ignored TPR: if it cannot be quantified, one might as well omit it. He was plainly wrong. As Albert Einstein did not say: ‘Not everything that can be counted counts, and not everything that counts can be counted.’ Science is the interpretation of evidence arranged into explanations. It relies on hard as well as soft evidence. Hard evidence is the result of a controlled, replicable experiment, generating objective, measurable probabilities grounded on empirical frequencies. Soft evidence is everything else: any sign that can help the observer’s effort to evaluate whether a hypothesis is true or false. Such effort arises from a primal need that long predates any theory of probability. The interpretation of soft evidence is inherently subjective. But, contrary to Fisher’s view, there is nothing unscientific about it: subjective probability can be laid out as a complete and coherent theory.

2. ‘My second reason is that it is the nature of an axiom that its truth should be apparent to any rational mind which fully apprehends its meaning. The axiom of Bayes has certainly been fully apprehended by a good many rational minds, including that of its author, without carrying this conviction of necessary truth. This, alone, shows that it cannot be accepted as the axiomatic basis of a rigorous argument.’

This is downright bizarre. First, Bayes’ is a theorem, not an axiom. Second, it is a straightforward consequence of two straightforward definitions. Hence, it is obviously and necessarily true. But somehow Fisher didn’t see it this way. He even believed that Reverend Bayes himself was not completely convinced about it, and that was the reason why he left his Essay unpublished. He had no evidence to support this claim – it was just a prior belief!

3. ‘My third reason is that inverse probability has been only very rarely used in the justification of conclusions from experimental facts’.

This is up there with the Decca executive’s rejection of the Beatles: ‘We don’t like their sounds. Groups of guitars are on the way out’.

Fisher’s credo was embarrassingly wrong. But it doesn’t matter: whether one believes it or not, we are all Bayesian. We all have priors. Ignoring them only means that we are inadvertently assuming prior indifference: BR=50% and BO=1, with all its potentially misleading consequences. Rather than pretending they do not exist, we should try to get our priors right.

So let’s go back to Fisher’s sceptical reception of the lady’s claim. We might at first interpret his prior indifference as neutral open mindedness, expressing perfect ignorance. Maybe the lady is skilled, maybe not – we just don’t know: let’s give her a 50/50 chance and let the data decide. But hang on. Would we say the same and use the same amount of data if the claim had been much more ordinary – e.g. spotting sugar in the tea, or distinguishing between Darjeeling and Earl Grey? And what would we do, on the other hand, with a truly outlandish claim – e.g. spotting whether the tea contains one or two grains of sugar, or whether it has been prepared by a right-handed or a left-handed person? Surely, our priors would differ and we would require much more evidence to test the extraordinary claim than the ordinary one: extraordinary claims require extraordinary evidence.

Prior indifference may be appropriate for a fairly ordinary claim. But the more extraordinary the claim, the lower should be our prior belief and, therefore, the larger should be the amount of confirmative evidence required to satisfy a given standard of proof. For example, let’s share Fisher’s scepticism and halve BR to 25%, hence BO=1/3. Now, with a 95% standard of proof, we have rTPR=3∙19∙(1/70) in case of a perfect choice: it is three times as much as under prior indifference, but we can still accept the hypothesis. More so, obviously, if we relax the standard of proof to 75%. But notice in this case that with one mistake we now have rTPR=3∙3∙(17/70), which is higher than 1. So, while with prior indifference one mistake would still be clear and convincing evidence of some ability, starting with a sceptical prior would lead us to a rejection – coinciding again with Fisher’s conclusion. Did Fisher have an indifferent prior and a 95% standard of proof, or a sceptical prior and a 75% standard? Neither, if we asked him: he shunned priors. But in reality it must have been one or the other (and – given his scepticism – more likely the latter): we are all Bayesian. In fact, Fisher’s 5% threshold is compatible with various combinations of priors and standards of proof. For instance, BR=25% and an 85% standard give rTPR=17∙5%=85%, where again, incidentally, PP=TPR.

(Note: Prior indifference and error symmetry are sufficient but not necessary conditions for PP=TPR. The necessary condition is BO=FPR/FNR).

Finally, let’s see what happens as we gather more evidence. Remember Fisher ran the experiment with 8 cups. If the lady made no mistake, he accepted the hypothesis that she had some ability; with one or more mistakes, he rejected it. Lowering the standard of proof would tolerate one mistake. But a lower prior would again mean rejection. We can however hear the lady’s protest: Come on, that was one silly mistake – I got distracted for a moment. Give me 4 more cups and I will show you: no more mistakes. As Bayesians, we consent: we allow new evidence to change our mind.

So let’s re-run the experiment with 12 tea cups (The Design of Experiments, p. 21). Now the probability of no mistake goes down to 1/924 (as 12!/[6!(12-6)!]=924), the probability of one mistake to 36/924 (as there are 6×6 ways to choose 5 right and 1 wrong cups), and the probability of one or no mistake to 37/924. Obviously, with no mistakes on 12 cups, the lady’s ability is even more apparent. But now, under prior indifference, one mistake satisfies Fisher’s 5% criterion, as 37/924=4%, and is compatible with a 95% standard of proof, as rTPR=19∙(37/924) is lower than 1. Hence we accept the hypothesis even if the lady makes one mistake. Not so, however, if we start with a sceptical prior: for that we need a 75% standard. Or we need to extend the experiment to 14 cups (by now you know what to do).
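
The cup-counting arithmetic is easy to check with a few lines of Python (a minimal sketch, not part of the original argument):

```python
# With 2k cups, k of each kind, the number of ways to get exactly j cups wrong
# under pure guessing is C(k, j)^2, out of C(2k, k) equally likely selections.
from math import comb

def fpr(cups, mistakes):
    k = cups // 2
    return sum(comb(k, j) ** 2 for j in range(mistakes + 1)) / comb(2 * k, k)

print(fpr(8, 1))    # 17/70  ~ 24.3% -> not significant at 5%
print(fpr(12, 1))   # 37/924 ~  4.0% -> significant; 19*(37/924) ~ 0.76 < 1
print(fpr(14, 1))   # 50/3432 ~ 1.5% -> with a sceptical prior (BO = 1/3) and a 95%
                    #   standard, one mistake is fine: 3*19*(50/3432) ~ 0.83 < 1
```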

To summarise (A=Accept, R=Reject):
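
A minimal Python sketch can reproduce the summary, using the mechanical rule ‘accept if the required TPR does not exceed 1’ as a stand-in for the comfort judgment discussed above (an illustration of the logic, not the original table):

```python
# Accept (A) when rTPR = (rPO / BO) * FPR does not exceed 1; reject (R) otherwise.
# This mechanical rule stands in for the 'comfortable TPR' judgment in the text.
from math import comb

def fpr(cups, mistakes):
    k = cups // 2
    return sum(comb(k, j) ** 2 for j in range(mistakes + 1)) / comb(2 * k, k)

def decision(cups, mistakes, BO, standard):
    rPO = standard / (1 - standard)
    rTPR = (rPO / BO) * fpr(cups, mistakes)
    return "A" if rTPR <= 1 else "R"

for cups in (8, 12):
    for mistakes in (0, 1):
        for BO, prior in ((1, "indifferent"), (1 / 3, "sceptical")):
            for standard in (0.95, 0.75):
                print(f"{cups} cups, {mistakes} mistake(s), {prior} prior, "
                      f"{standard:.0%} standard: {decision(cups, mistakes, BO, standard)}")
```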

There is more to testing a hypothesis than 5% significance. The decision to accept or reject it is the result of a fine balance between standard of proof, evidence and priors: PO=LR∙BO.

Nov 12 2016
 

‘The lady tasting tea’ is one of the most famous experiments in the history of statistics. Ronald Fisher told the story in the second chapter of The Design of Experiments, published in 1935 and considered since then the Bible of experimental design. Apparently, the lady was right: she could easily distinguish the two kinds of tea. We don’t know the details of the impromptu experiment, but on his subsequent reflection Fisher agreed that ‘an event which would occur by chance only once in 70 trials is decidedly “significant”’ (p. 13). At the same time, however, he found it ‘obvious’ that ‘3 successes to 1 failure, although showing a bias, or deviation, in the right direction, could not be judged as statistically significant evidence of a real sensory discrimination’ (pp. 14-15). His reason:

It is usual and convenient for experimenters to take 5 per cent as a standard level of significance, in the sense that they are prepared to ignore all results which fail to reach this standard, and, by this means, to eliminate from further discussion the great part of the fluctuations which chance causes have introduced into their experimental results. (p. 13).

Statistically significant at the 5% level: that’s where it all started – the most used, misused and abused criterion in the history of statistics.

Where did it come from? Let’s follow Fisher’s train of thought. Remember what we said about the Confirmation Bias: we cannot look at TPR without looking at its associated FPR. Whatever the result of an experiment, there is always a probability, however small, that it is just a product of chance. How small – asked Fisher – should that probability be for us to be comfortable that the result is not the product of chance? How small – in our framework – should FPR be? 5% – said Fisher. If FPR is lower than 5% – as it is with a perfect cup choice – we can safely conclude that the result is significant, and not a chance event. If FPR is above 5% – as it is with 3 successes and 1 failure – we cannot. That’s it – no further discussion. What about TPR? Shouldn’t we look at FPR in relation to TPR? Not according to Fisher: FPR – the probability of observing the evidence if the hypothesis is false – is all that matters. So much so that, with a bewildering flip, he pretended that the hypothesis under investigation was not that the lady could taste the difference, but its opposite: that she couldn’t. He called it the null hypothesis. After the flip, his criterion is: if the probability of the evidence, given that the null hypothesis is true – he calls it the p-value – is less than 5%, the null hypothesis is ‘disproved’ (p. 16). If it is above 5%, it is not disproved.

Why such an awkward twist? Because – said Fisher – only the probability of the evidence under the hypothesis of no ability can be calculated exactly, according to the laws of chance. Under the null hypothesis, the probability of a perfect choice is 1/70, the probability of 3 successes and 1 failure is 16/70, and so on. Whereas the probability of the evidence under the hypothesis of some ability cannot be calculated exactly, unless the ability level is specified. For instance, under perfect ability, the probability of a perfect choice is 100% and the probability of any number of mistakes is 0%. But how can we quantify the probability distribution under the hypothesis of some unspecified degree of ability – which, despite Fisher’s contortions, is the hypothesis of interest and the actual subject of the enquiry? We can’t. And if we can’t quantify it – seems to be the conclusion – we might as well SUTC it: Sweep it Under The Carpet.
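
The exact probabilities under the null are a one-line counting exercise; here is a minimal Python sketch of the arithmetic:

```python
# The exact probabilities under the null hypothesis of no ability: choosing 4 cups
# out of 8 at random, j of the 4 'milk-first' cups are picked in C(4, j) * C(4, 4 - j)
# ways, out of C(8, 4) = 70 equally likely selections.
from math import comb

total = comb(8, 4)                           # 70
p_perfect = comb(4, 4) * comb(4, 0) / total  # 1/70  ~ 1.4%
p_3_of_4  = comb(4, 3) * comb(4, 1) / total  # 16/70 ~ 22.9%
print(p_perfect, p_3_of_4)
```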

How remarkable. This is the Confirmation Bias’s mirror image. The Confirmation Bias is transfixed on TPR and forgets about FPR. Fisher’s Bias does something similar: it focuses on FPR – because it can be calculated – and disregards TPR – because it can’t. The resulting mistake is the same: they both ignore that what matters is not how high TPR is, or how low is FPR, but how they relate to each other.

To see this, let’s assume for a moment that the hypothesis under investigation is ‘perfect ability’ versus ‘no ability’ – no middle ground. In this case, as we just said, TPR=1 for a perfect choice and 0 otherwise. Hence, under prior indifference, we have PO=LR=1/FPR or PO=0. Fisher agrees:

If it were asserted that the subject would never be wrong in her judgements we should again have an exact hypothesis, and it is easy to see that this hypothesis could be disproved by a single failure, but could never be proved by any finite amount of experimentation. (p. 16).

As we have seen, with a perfect choice over 8 cups we have FPR=1/70 and therefore PO=70, i.e. PP=98.6% (remember PO=PP/(1-PP)). True, it is not conclusive evidence that the lady is infallible – as the number of cups increases, FPR tends to zero but never reaches it – but to all intents and purposes we are virtually certain that she is. Fisher may abuse her patience and feed her a few more cups, and twist his tongue saying that what he did was disprove that the lady was just lucky. In fact, all he required to do so was FPR<5%, i.e. PO>20 and PP>95.2% – a verdict beyond reasonable doubt. On the other hand, even a single mistake provides conclusive evidence to disprove the hypothesis that the lady is infallible – in the same way that a black swan disproves the hypothesis that all swans are white: TPR=0, hence PO=0 and PP=0%, irrespective of FPR.
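
A minimal Python sketch of this case, assuming TPR=1 and prior indifference:

```python
# 'Perfect ability' vs 'no ability' under prior indifference: TPR = 1, so
# PO = LR = 1 / FPR and PP = PO / (1 + PO).
def pp_from_fpr(FPR):
    PO = 1 / FPR
    return PO / (1 + PO)

print(pp_from_fpr(1 / 70))   # 0.9859 -> PP = 98.6% after a perfect choice over 8 cups
print(pp_from_fpr(0.05))     # 0.9524 -> Fisher's 5% threshold: PO > 20, PP > 95.2%
```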

Let’s now ask: What happens if we replace ‘perfect ability’ with ‘some ability’? The alternative hypothesis is still ‘no ability’, so FPR stays the same. The difference is that we cannot exactly quantify TPR. But we don’t need to. All we need to do is define the level of PP required to accept the hypothesis. This gives us a required PO – let’s call it rPO – which, given FPR, implies a required level of TPR: rTPR=rPO∙FPR. Let’s say for example that the required PP is 95%. Then rPO=19 and rTPR=19∙FPR. Hence, in case of a perfect choice, rTPR=19∙(1/70). At this point all we need to ask is: are we comfortable that the probability of a perfect choice, given that the lady has some ability, is at least 19/70? Remember that the same probability under no ability is 1/70 and under perfect ability is 70/70. If the answer is yes – as it should reasonably be – we accept the hypothesis. There is no need to know the exact value of TPR, as long as we are comfortable that it exceeds rTPR. On the other hand, if the lady makes one mistake we have rTPR=19∙(17/70): the required probability of one or no mistake, given some ability, exceeds 100%. Hence we reject the hypothesis.

This coincides with Fisher’s conclusion, as 1/70 is below 5% and 17/70 is above. But what happens if we lower rPO? After all, 95% is a very high standard of proof: do we really need to be sure beyond reasonable doubt that the lady has some tea tasting ability? What if we are happy with 75%, i.e. rPO=3? In this case, rTPR=3∙(1/70) for a perfect choice – a comfortable requirement, close to no ability. But now with one mistake we have rTPR=3∙(17/70). This is about two thirds of the way between no ability (17/70) and perfect ability (70/70 – remember we need to consider the cumulative probability of one or no mistake: under perfect ability, this is 0%+100%). We may or may not feel comfortable with such a high level, but if we do then we must conclude that there is clear and convincing evidence that the lady has some ability, despite her mistake.

For illustration purposes, let’s push this argument to the limit and ask: what if we lower the standard of proof all the way down to 50%, i.e. rPO=1? In this case, all we would need in order to grant the lady some ability is a preponderance of evidence. This comfortably covers one mistake, and may even allow for two mistakes, as long as we accept rTPR=53/70 (notice there are 6×6 ways to choose 2 right and 2 wrong cups).
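
Putting the three standards of proof side by side, here is a quick Python sketch using the cumulative FPRs above (the variable names are mine):

```python
# Required TPR = rPO * FPR under prior indifference, for three standards of proof
# and for a perfect choice, at most one mistake and at most two mistakes (8 cups).
cases = ((1 / 70, "no mistakes"), (17 / 70, "one or no mistake"),
         (53 / 70, "two or fewer mistakes"))

for standard in (0.95, 0.75, 0.50):
    rPO = standard / (1 - standard)
    for FPR, label in cases:
        rTPR = rPO * FPR
        note = " (impossible)" if rTPR > 1 else ""
        print(f"{standard:.0%} standard, {label}: required TPR = {rTPR:.2f}{note}")
```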

This may well be too lenient. But the point is that, as soon as we explicitly relate FPR to TPR, we are able to place Fisher’s significance criterion in a proper context, where his 5% threshold is not a categorical standard but one choice within a spectrum of options. In fact, once viewed in this light, we can see where Fisher’s criterion comes from.

Fisher focused on the probability of the evidence, given the null hypothesis, stating that a probability of less than 5% was small enough to be comfortable that the evidence was ‘significant’ and not a chance event. But why did he then proceed to infer that such significant evidence disproved the null hypothesis? That is: why did he conclude that the probability of the null hypothesis, given significant evidence, was small enough to disprove it? As we know (very well by now!), the probability of E given H is not the same as the probability of H given E. Why did Fisher seem to commit the Inverse Fallacy?

To answer this question, remember that the two probabilities are the same under two conditions: symmetric evidence and prior indifference. Under error symmetry, FPR=FNR. Hence, in our framework, where the hypothesis of no ability is the alternative to the tested hypothesis of some ability, FPR=5% implies TPR=95% and therefore PO=19 and PP=TPR=95%. The result is the same in Fisher’s framework, where the two hypotheses are – unnecessarily and confusingly – flipped around, FPR becomes FNR and the null hypothesis is rejected if FNR is less than 5%.

Since Fisher could not quantify TPR, he avoided any explicit consideration of FNR=1-TPR and its relation with FPR – symmetric or otherwise. But his rejection of the null hypothesis required it: in the same way as we should avoid the Confirmation Bias – accepting a hypothesis based on a high TPR, without relating it to its associated FPR – we need to avoid Fisher’s Bias: accepting a hypothesis – or, as Fisher would have it, disproving a null hypothesis – based on a low FPR, without relating it to its associated TPR.

What level of TPR did Fisher associate to his 5% FPR threshold? We don’t know – and probably neither did he. All he said was that a p-value of less than 5% was low enough to disprove the null hypothesis. Since then, Fisher’s Bias has been a source of immeasurable confusion. Assuming symmetry, FPR<5% has been commonly taken to imply PP>95%: ‘We have tested our theory and found it significant at the 5% level: therefore, there is only a 5% probability that we are wrong.’

Fisher would have cringed at such a statement. But his emphasis on significance inadvertently encouraged it. Evidence is not significant or insignificant according to whether FPR is below or above 5%. It is confirmative or disconfirmative according to whether LR is above or below 1, i.e. TPR is above or below FPR. What matters is not the level of FPR per se, but its relation to TPR. Confirmative evidence increases the probability that the hypothesis of interest is true, and disconfirmative evidence decreases it. We accept the hypothesis of interest if we have enough evidence to move LR beyond the threshold required by our standard of proof. Only then can we call such evidence ‘significant’. So, if our threshold is 95%, then, under error symmetry and prior indifference, FPR<5% implies TPR=PP>95%. There is no fallacy: TPR – the probability of E given H – is equal to PP – the probability of H given E. And 5% significance does mean that we have enough confirmative evidence to decide that the hypothesis of interest has indeed been proven beyond reasonable doubt.

This is where Fisher’s 5% criterion comes from: Bayes’ Theorem under error symmetry and prior indifference, with a 95% standard of proof. Fisher ignored TPR, because he could not quantify it. But TPR cannot be ignored – or rather: it is there, whether one ignores it or not. Fisher’s criterion implicitly assumes that TPR is at least 95%. Without this assumption, a 5% ‘significance’ level cannot be used to accept the hypothesis of interest. Just like the Confirmation Bias consists in ignoring that a high TPR means nothing without a correspondingly low FPR, Fisher’s Bias consists in ignoring that a low FPR means nothing without a correspondingly high TPR.

Oct 30 2016
 

The Made in Italy Fund started in May. It is up 7%, with the Italian market down 1%. It is a good time to go back to Hypothesis Testing.

We ask ourselves questions and give ourselves answers in response to thaumazein: wonder at what there is. Our questions spring from our curiosity. Our answers are grounded on evidence.

As David Hume and then Immanuel Kant made clear, all our answers are based on evidence. Everything we know cannot but be phenomena that are experienced by us as evidence. Even Kant’s synthetic a priori propositions – like those of geometry and mathematics – are ultimately based on axioms that we regard as self-evidently true.

The interpretation of evidence arranged into explanations is what we call Science – knowledge that results from separating true from false. Science is based on observation – evidence that we preserve and comply with. We know that the earth revolves around the sun because we observe it, in the same way that Aristotle and Ptolemy knew that the sun revolved around the earth. We are right and they were wrong but, like us, they were observing and interpreting evidence. So were the ancient Greeks, Egyptians, Indians and Chinese when they concluded that matter consisted of four or five elements. And when the Aztecs killed children to provide the rain god Tlaloc with their tears, their horrid lunacy – widespread in ancient times – was not a fickle mania, but the result of an age-old accumulation of evidence indicating that the practice ‘worked’, and it was therefore worth preserving. So was divination – the interpretation of multifarious signs sent by gods to humans.

While we now cringe at human sacrifice and laugh at divination, it is wrong to simply dismiss them as superseded primitiveness. Since our first Why?, humankind’s only way to answer questions is by making sense of evidence. Everything we say is some interpretation of evidence. Everything we say is science.

Contrary to Popper’s view, there is no such thing as non-science. The only possible opposition is between good science and bad science. Bad science derives from a wrongful interpretation of evidence, leading to a wrongful separation of true and false. This in turn comes from neglecting or underestimating the extent to which evidence can be deceitful. Phenomena do not come to us in full light. What there is – what we call reality – is not always as it appears. Good science derives from paying due attention to the numerous perils of misperception. Hence the need to look at evidence from all sides and to collect plenty of it, analyse it, reproduce it, probe it, disseminate it and – crucially – challenge it, i.e. look for new evidence that may conflict with and possibly refute the prevailing interpretation. This is the essence of what we call the Scientific Revolution.

Viewed in this light, the obvious misperceptions behind the belief in the effectiveness of sacrifice and divination bear an awkward resemblance to the weird beliefs examined in many of my posts. Why did Steve Jobs try to cure his cancer with herbs and psychics? Why do people buy homeopathic medicines (and Boiron is worth 1.6 billion euro)? Why do people believe useless experts? Why did Othello kill Desdemona? Why did Arthur Conan Doyle believe in ghosts? Why did 9/11 truthers believe it was a conspiracy? Why do Islamists promote suicide bombing? It is tempting to call it lunacy. But it isn’t. It is misinterpretation of evidence.

The most pervasive pitfall in examining available evidence is the Confirmation Bias: focusing on evidence that supports the hypothesis under investigation, while neglecting, dismissing or obfuscating evidence that runs contrary to it. A proper experiment, correctly gathering and analysing the relevant evidence, can easily show the ineffectiveness of homeopathic medicine – in the same way as it would show the ineffectiveness of divination and sacrifice (however tricky it would be to test the Tlaloc hypothesis).

In our framework, PO=LR∙BO, where LR=TPR/FPR is the Likelihood Ratio, the ratio between the True Positive Rate – the probability of observing the evidence if the hypothesis is true – and the False Positive Rate – the probability of observing the same evidence if the hypothesis is false. The Confirmation Bias consists in paying attention to TPR, especially when it is high, while disregarding FPR. As we know, it is a big mistake: what matters is not just how high TPR is, but how high it is relative to FPR. We say evidence is confirmative if LR>1, i.e. TPR>FPR, and disconfirmative if LR<1. LR>1 increases the probability that the hypothesis is true; LR<1 decreases it. We cannot look at TPR without at the same time looking at FPR.
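
The whole framework fits in a few lines; here is a minimal Python sketch (the function name is mine):

```python
# PO = LR * BO, with LR = TPR / FPR and BO = BR / (1 - BR); PP = PO / (1 + PO).
def posterior_probability(TPR, FPR, BR=0.5):
    LR = TPR / FPR            # likelihood ratio
    BO = BR / (1 - BR)        # base odds
    PO = LR * BO              # posterior odds
    return PO / (1 + PO)      # posterior probability PP

print(posterior_probability(TPR=0.9, FPR=0.1))   # confirmative evidence: PP = 0.90
print(posterior_probability(TPR=0.1, FPR=0.9))   # disconfirmative evidence: PP = 0.10
```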

How high does the probability of a hypothesis have to be for us to accept that it is true? Equivalently, how low does it have to be for us to reject the hypothesis, or to accept that it is false?

As we have seen, there is no single answer: it depends on the specific standard of proof attached to the hypothesis and on the utility function of the decision maker. For instance, if the hypothesis is that a defendant is guilty of a serious crime, a jury needs a very high probability of guilt – say 95% – before convicting him. On the other hand, if the hypothesis is that an airplane passenger is carrying a gun, a small probability – say 5% – is all a security guard needs in order to give the passenger a good check. Notice that in neither case is the decision maker saying that the hypothesis is true. What he is saying is that the probability is high enough for him to act as if the hypothesis is true. Such a threshold is known as the significance level, and the accumulated evidence that allows the decision maker to surpass such a threshold is itself called significant. We say that there is significant evidence to convict the defendant if, in the light of such evidence, the probability of guilt exceeds 95%. In the same way, we say that there is significant evidence to frisk the passenger if, in the light of the available evidence, the probability that he carries a gun exceeds 5%. In practice, we call the defendant ‘guilty’ but, strictly speaking, it is not what we are saying – in the same way that we are not saying that he is ‘innocent’ or ‘not guilty’ if the probability of Guilt is below 95%. Even more so, we are not saying that the passenger is a terrorist. What matters is the decision – convict or acquit, frisk or let go.

With such proviso, let’s examine the standard case in which we want to decide whether a certain claim is true or false. For instance, a lady we are having tea with tells us that tea tastes different depending on whether milk is poured in the cup before or after the tea. She says she can easily spot the difference. How can we decide if she is telling the truth? Simple: we prepare a few cups of tea, half one way and half the other, and ask her to say which is which. Let’s say we make 8 cups, and tell her that 4 are made one way and 4 the other way. She tastes them one after the other and, wow, she gets them all right. Surely she’s got a point?

Not so fast. Let’s define:

H: The lady can taste the difference between the two kinds of tea.

E: The lady gets all 8 cups right.

Clearly, TPR – the probability of E given H – is high. If she’s got the skill, she probably gets all her cups right. Let’s even say TPR=100%. But we are not Confirmation-biased: we know we also need to look at FPR. So we must ask: what is the probability of E given not-H, i.e. that the lady was just lucky? This is easy to calculate: there are 8!/[4!(8-4)!]=70 ways to choose 4 cups out of 8, and there is only one way to get them all right. Therefore, FPR=1/70. This gives us LR=70. Hence PO – the odds of H in the light of E – is 70 times the Base Odds. What is BO? Let’s say for the moment we are prior indifferent: the lady may be skilled, she may be deluded – we don’t know. Let’s give her a 50/50 chance: BR=50%, hence BO=1 and PO=70. Result: PP – the probability that the lady is skilled, in the light of her fully successful choices – is 99%. That’s high enough, surely.

But what if she made one mistake? Notice that, while there is only one way to be fully right, there are 4 ways to make 3 right choices out of 4, and 4 ways to make 1 wrong choice out of 4. Hence, there are 4×4 ways to choose 3 right and 1 wrong cups. Therefore, FPR=16/70 and LR=4.4. Again assuming BO=1, this means PP=81%. Adding the 1/70 chance of a perfect choice, the probability of one or no mistake out of mere chance is 17/70 and LR=4.1, hence PP=80%. Is that high enough?
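
Here is the same arithmetic as a minimal Python sketch, taking TPR=100% as above (for illustration only):

```python
# The tea-tasting numbers, taking TPR = 100% and prior indifference (BR = 50%).
from math import comb

total = comb(8, 4)            # 70

def pp(FPR, TPR=1.0, BR=0.5):
    LR = TPR / FPR
    PO = LR * BR / (1 - BR)
    return PO / (1 + PO)

print(pp(1 / total))          # perfect choice: LR = 70, PP ~ 0.99
print(pp(16 / total))         # exactly one mistake: LR = 4.4, PP ~ 0.81
print(pp(17 / total))         # one or no mistake: LR = 4.1, PP ~ 0.80
```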

I would say yes. But Ronald A. Fisher – the dean of experimental design and one of the most prominent statisticians of the 20th century – would have none of it.

More on the next post.

Jun 13 2015
 

It is not what I believe, it is what I know – said Sir Arthur Conan Doyle about the innumerable tricks and hoaxes he was haplessly subject to in the course of his long life. Despite Harry Houdini’s steadfast efforts to convince him to the contrary, Sherlock Holmes’ dad remained certain about the powers of Spiritualism: he saw plenty of conclusive evidence.

Just think of what Helder Guimarães could have made him believe – err, know:

Unbelievable. We know there is a trick. The whole setting, and the magician himself, give us conclusive evidence: there is no way that this is not a trick. Yet, it appears as just the opposite. This is the deep beauty of sleight of hand magic: it blinds us with evidence.

Jun 12 2015
 

The framework we used to describe a judge’s decision to convict or acquit a defendant based on the probability of Guilt can be generalised to any decision about whether to accept or reject a hypothesis. The utility function is defined over two states – True or False – and two decisions – Accept or Reject:

The decision maker draws positive utility U(TP) from accepting a true hypothesis (True Positive) and negative utility U(FP) from accepting a false hypothesis (False Positive). And he draws positive utility U(TN) from rejecting a false hypothesis (True Negative) and negative utility U(FN) from rejecting a true hypothesis (False Negative). Based on these preferences, the threshold probability P that leaves the decision maker indifferent between accepting and rejecting the hypothesis satisfies:

P∙U(TP) + (1-P)∙U(FP) = P∙U(FN) + (1-P)∙U(TN)                                    (1)

Hence:

P = [U(TN) - U(FP)] / [U(TP) - U(FN) + U(TN) - U(FP)]                                                                       (2)

 

The decision maker accepts the hypothesis if he thinks the probability that the hypothesis is true is higher than P, and rejects it if he thinks it is lower. As in the judges’ case, we define BB=U(FP)/U(FN), CB=U(TN)/U(TP) and DB=-U(FN)/U(TP). BB is the ratio between the pain of a wrongful acceptance and the pain of a wrongful rejection. CB is the ratio between the pleasure of a rightful rejection and the pleasure of a rightful acceptance. And DB is the ratio between the pain of a wrongful rejection and the pleasure of a rightful acceptance. Using these definitions, (2) can be written as:

P = (CB + BB∙DB) / (1 + DB + CB + BB∙DB)                                                                                             (3)

 

which renders P independent of the utility function’s metric.

Again, with BB=CB=DB=1 we have P=50%: the hypothesis is accepted if it is more likely to be true than false. In most cases, however, the decision maker has some bias. We have seen a Blackstonian judge has BB>1: the pain of a wrongful conviction is higher than the pain of a wrongful acquittal. This increases the threshold probability above 50%. For example, with BB=10 we have P=85%: the judge wants to be at least 85% sure that the defendant is guilty before convicting him. On the other hand, the threshold probability of a ‘perverse’ Bismarckian judge, who dislikes wrongful acquittals more than wrongful convictions, is lower than 50%. For instance, with BB=0.1 we have P=35%: the judge convicts even if he is only 35% sure that the defendant is guilty.

In other cases, however, there is nothing perverse about BB<1. For instance, if the hypothesis is ‘There is a fire‘, a False Negative – missing a fire when there is one – is clearly worse than a False Positive – giving a False Alarm. This is generally the case in security screening, such as airport checks, malware detection and medical tests, where the mild nuisance of a False Alarm is definitely preferable to the serious pain of missing a weapon, a computer virus or a disease. Hence BB<1 and P<50%. As we have seen, with BB=0.1 we have P=35%. The same happens if CB<1: the pleasure of a True Positive – catching a terrorist, blocking a virus, diagnosing an illness – is higher than the pleasure of a True Negative – letting through an innocuous passenger, a regular email, a healthy patient. When both BB and CB are 0.1, P is reduced to 9% (the complement to 91% for BB=CB=10). Obviously, a 9% probability that a passenger may be carrying a weapon is high enough to check him out. In fact, in such cases the threshold probability is likely to be substantially lower, implying lower values for BB and CB. With BB=CB=0.01, for example, P is reduced to 1%. Again, if BB=CB then (3) reduces to P=BB/(1+BB), which tends to 0% as BB tends to zero, independently of DB. If, on the other hand, BB differs from CB, then DB does affect P. Assuming for instance BB=0.01 and CB=0.1, increasing DB from 1 to 10 – the pain of letting an armed man on board is higher than the pleasure of catching him beforehand – decreases P from 5% to 2%, while decreasing DB from 1 to 0.1 increases P to 8%. It is the other way around if BB=0.1 and CB=0.01. A higher DB increases the sensitivity to misses and decreases the sensitivity to hits, while a lower one has the opposite effect.
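
A quick numerical check of (3), as a minimal Python sketch (the function name is mine):

```python
# P = (CB + BB*DB) / (1 + DB + CB + BB*DB), reproducing the examples above.
def threshold(BB, CB, DB=1.0):
    return (CB + BB * DB) / (1 + DB + CB + BB * DB)

print(threshold(1, 1))            # 0.50 -> no bias
print(threshold(10, 1))           # 0.85 -> Blackstonian judge
print(threshold(0.1, 1))          # 0.35
print(threshold(0.1, 0.1))        # 0.09 -> security screening
print(threshold(0.01, 0.1, 10))   # ~0.02
print(threshold(0.01, 0.1, 0.1))  # ~0.08
```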

Hence the size of the three biases depends on the tested hypothesis. If in some cases accepting a false hypothesis is ‘worse’ than rejecting a true one (BB>1), in some other cases the opposite is true (BB<1). Likewise, sometimes rejecting a false hypothesis is ‘better’ than accepting a true one (CB>1), and some other times it is the other way around (CB<1). Finally, the pain of rejecting a true hypothesis can be higher (DB>1) or lower (DB<1) than the pleasure of accepting it.

This is all consistent with the Neyman-Pearson framework. They called a False Negative a Type I error and a False Positive a Type II error. In their analysis, H is the hypothesis of interest: a statistician wants to know whether H is true, as he surmises, or false. From his point of view, therefore, the first error – rejecting H when it is true – is ‘worse’ than the second error – accepting H when it is false. Hence BB<1. As a result, it makes sense to fix the probability of a Type I error to a predetermined low value, known as the significance level and denoted by α, while designing the test so as to minimise the probability of a Type II error, denoted by β, i.e. maximise 1-β, known as the test’s power – the probability of rejecting the hypothesis when it is false.

An inordinate amount of confusion is generated by the circuitous convention of formulating the test not in terms of the hypothesis of interest – the defendant is guilty, there is a fire, the passenger is armed, the email is spam, the patient is ill – but in terms of its negation: the so-called null hypothesis. This goes back to Ronald Fisher who, in fierce Popperian spirit, insisted that one can never accept a hypothesis – only fail to reject it. In this topsy-turvy world, rejecting the null hypothesis when it is true is a Type I error – a False Positive: calling a fire when there is none – while failing to reject the null when it is false is a Type II error – a False Negative: missing a fire when there is one. This is a pointless convolution (one wonders what Fisher told his girlfriend when he asked her to marry him: ‘Will you not reject me?’). For all intents and purposes, a non-rejection is tantamount to an acceptance: a test’s objective is to reach a practical decision, not to consecrate an absolute truth. For reference, here is a depiction of the straightforward Neyman-Pearson framework vs. the roundabout Fisher framework:

       Neyman-Pearson                                                       Fisher

Framing a test in terms of the hypothesis of interest reflects what a statistician is actually trying to accomplish: decide whether to accept or reject the hypothesis. As we have just seen, this depends not only on the tug of war between confirming and disconfirming evidence, indicating whether the hypothesis is true or false, but also on the decision maker’s utility preferences, measuring the relative costs and benefits of wrongful and rightful decisions.

May 14 2015
 

Wittgenstein thought Leibniz’s question was unanswerable and, therefore, senseless. Asking the question was a misuse of language, sternly proscribed in the last sentence of the Tractatus:

7. Whereof one cannot speak, thereof one must be silent.

(Ironically, the sentence is often misused as meaning ‘Shut up if you don’t know what you’re talking about’, in blatant contravention of its own supposed prescription).

The riddle does not exist. This was a direct reference to Arthur Schopenhauer, who had traced the origin of philosophy to “a wonder about the world and our own existence, since these obtrude themselves on the intellect as a riddle, whose solution then occupies mankind without intermission” (The World as Will and Representation, Volume II, Chapter XVII, p. 170). Schopenhauer was himself recalling Aristotle (p. 160): “For on account of wonder (thaumazein) men now begin and at first began to philosophise” (Metaphysics, Alpha 2), and Plato (p. 170): “For this feeling of wonder (thaumazein) shows that you are a philosopher, since wonder is the only beginning of philosophy, and he who said that Iris was the child of Thaumas made a good genealogy” (Theaetetus, 155d).

There would be no riddle – said Schopenhauer – if, in Spinoza’s sense, the world were an “absolute substance“:

Therefore its non-being would be impossibility itself, and so it would be something whose non-being or other-being would inevitably be wholly inconceivable, and could in consequence be just as little thought away as can, for instance, time or space. Further, as we ourselves would be parts, modes, attributes, or accidents of such an absolute substance, which would be the only thing capable in any sense of existing at any time and in any place, our existence and its, together with its properties, would necessarily be very far from presenting themselves to us as surprising, remarkable, problematical, in fact as the unfathomable and ever-disquieting riddle. On the contrary, they would of necessity be even more self-evident and a matter of course than the fact that two and two make four. For we should necessarily be quite incapable of thinking anything else than that the world is, and is as it is (p. 170-171).

Like Parmenides, Spinoza saw non-being as inconceivable. What-is-not cannot be spoken or thought. There is only being, the absolute substance, as self-evident as 2+2=4. Schopenhauer vehemently disagreed:

Now all this is by no means the case. Only to the animal lacking thoughts or ideas do the world and existence appear to be a matter of course. To man, on the contrary, they are a problem, of which even the most uncultured and narrow-minded person is at certain more lucid moments vividly aware, but which enters the more distinctly and permanently into everyone’s consciousness, the brighter and more reflective that consciousness is, and the more material for thinking he has acquired through culture (p. 171).

Thaumazein is the origin of philosophy and the inexhaustible fount of its core, metaphysics:

The balance wheel which maintains in motion the watch of metaphysics, that never runs down, is the clear knowledge that this world’s non-existence is just as possible as is its existence. Therefore, Spinoza’s view of the world as an absolutely necessary mode of existence, in other words, as something that positively and in every sense ought to and must be, is a false one (p. 171).

Spinoza’s solution to thaumazein was straighter than Leibniz’s own. The world is not God’s contingent creation ex nihilo. It is God itself: Deus sive natura. Hence, Leibniz’s question does not even have an answer. It is, as Wittgenstein put it, an unanswerable nonsense: like asking why 2+2 is 4 rather than 5. The riddle does not exist.

Nonsense – said Schopenhauer. The riddle does exist, and no solution can ever be found:

Therefore, the actual, positive solution to the riddle of the world must be something that the human intellect is wholly incapable of grasping and conceiving; so that if a being of a higher order came and took all the trouble to impart it to us, we should be quite unable to understand any part of his disclosures. Accordingly, those who profess to know the ultimate, i.e. the first grounds of things, thus a primordial being, an Absolute, or whatever else they choose to call it, together with the process, the reasons, grounds, motives, or anything else, in consequence of which the world results from them, or emanates, or falls, or is produced, set in existence, “discharged” and ushered out, are playing the fool, are vain boasters, if indeed they are not charlatans (p. 185).

Wow. So much for my faith. Notice the difference between Schopenhauer and Baloo. Baloo says: We don’t need an ultimate answer. Schopenhauer says: Of course we do. We’re not bears. Men wonder and care to know. But we can’t. As Immanuel Kant definitively demonstrated, there is no way for us to know ‘the first ground of things’ or, as he called them, things-in-themselves. All we can possibly know are phenomena – things as they appear to us, come to light and are experienced by us as evidence. Kant contrasted phenomena with noumena – things as abstract knowledge, thoughts and concepts produced by the mind (nous) independently of sensory experience. He used the term as a synonym for things-in-themselves, although – as noted by Schopenhauer (Volume I, Appendix, p. 477) – it was not quite the way the ancient Greeks used it. Be that as it may, Kant’s meaning has since prevailed. Noumena are things as they are per se – unknowable to the human intellect. Phenomena are things as they appear to us as evidence through our senses. Evidence is what there is, i.e. what ex-ists, is out there in what the ancient Greeks called physis and we call, in its Latin translation, nature. Physics is mankind’s endeavour to explain the phenomena of the natural world.

As we know, however, physics’ explanations unfold into endless why-chains, which we find inconceivable. Explanations cannot go on ad infinitum: why-chains must have a last ring – an ultimate answer that ends all questions. But where can we find it, if physis is all there is?

Hence thaumazein‘s first solution: physis is not all there is. Beyond physis there is a supernatural, self-sustaining entity that created it. We know such an entity not through experience but through pure reason – the logos of the ancient Greeks which, like Thaumas’ daughter, Iris, links mankind to the divine. Pure reason does not require evidence. God is logically self-evident, like 2+2=4: there cannot but be one.

It was against such pure reason that Kant unleashed his arresting Critique. After Kant, whoever professes to know the Absolute earns Schopenhauer’s unceremonious epithets. Mankind cannot know the Absolute, either by experience or pure reason. It can only believe in it through revelation, leaning upon the soft evidence emanating from a trusted source, such as: ‘We believe in one God, the Father Almighty, Maker of all things visible and invisible’. Alas, the obvious trouble with such divine disclosures is their wild variety in time, place and circumstance, leaving believers with a hodgepodge of conflicting but equally conclusive revelations, imparted by self-appointed messengers employing a full bag of tricks in order to establish and support their trustworthiness.

Where does this leave us, then, with our search for the last ring? If we cannot find it in physis, where phenomena, explained by endless why-chains, are all there is, and we cannot find it beyond physis, where noumena are inaccessible to our intellect, where can we look for it? Do we need to surrender and, following Schopenhauer, declare the riddle insoluble? Do we need to heed Wittgenstein’s proscription, declare thaumazein an unanswerable nonsense and remain silent?

Apr 27 2015
 

Children have little trouble understanding zero. When we teach them to count to ten on their fingers, two closed fists mean zero fingers. Easy, and not that interesting. In their attempt to size up the world, children are much more interested in upper limits. As every parent knows, they bombard us with measurement questions: What is the strongest animal? The fastest car? The best footballer? And as soon as they get into numbers and figure out that they go far beyond ten, to millions, billions and gazillions, comes the fateful question: what is the biggest number? To which the standard answer – there is no biggest number: take any number, if you add one to it you get a bigger number – is at first puzzling and rather upsetting. Until they get a name for it: infinity.

But this goes only some way to appease them. Infinity sounds like the biggest number, but it is a weird one. What is infinity plus one? Infinity. And infinity plus infinity? Still infinity. You know – I told them to assuage their perplexity – infinity is not really a number: it is a concept. ‘What’s a consett?’ Well, it’s an idea that we create in our mind to talk about things. ‘Hmm’ – I could hear their brain whirring.

No problem with zero, puzzled by infinity. Interestingly, it was the other way around with the ancient Greeks, who had trouble with ‘nothing’, but were quite comfortable with the unlimited – a-peiron. Anaximander saw it as the principle of all things. Euclid used it to define parallel lines and demonstrated the infinity of prime numbers: ‘Prime numbers are more than any assigned multitude of prime numbers’ (Elements, Book IX, Proposition 20). ‘More than any assigned magnitude’ is the concept of infinity that we teach our children. Aristotle called it potential infinity:

The infinite, then, exists in no other way, but in this way it does exist, potentially and by reduction. (Physics, Book III, Part 6).

According to Aristotle, there is no such thing as actual infinity. To demonstrate it, he contrasted arithmetic infinity with physical infinity, and infinity by addition with infinity by division:

It is reasonable that there should not be held to be an infinite in respect of addition such as to surpass every magnitude, but that there should be thought to be such an infinite in the direction of division. For the matter and the infinite are contained inside what contains them, while it is the form which contains. It is natural too to suppose that in number there is a limit in the direction of the minimum, and that in the other direction every assigned number is surpassed. In magnitude, on the contrary, every assigned magnitude is surpassed in the direction of smallness, while in the other direction there is no infinite magnitude. The reason is that what is one is indivisible whatever it may be, e.g. a man is one man, not many. Number on the other hand is a plurality of ‘ones’ and a certain quantity of them. Hence number must stop at the indivisible: for ‘two’ and ‘three’ are merely derivative terms, and so with each of the other numbers. But in the direction of largeness it is always possible to think of a larger number: for the number of times a magnitude can be bisected is infinite. Hence this infinite is potential, never actual: the number of parts that can be taken always surpasses any assigned number. But this number is not separable from the process of bisection, and its infinity is not a permanent actuality but consists in a process of coming to be, like time and the number of time.
With magnitudes the contrary holds. What is continuous is divided ad infinitum, but there is no infinite in the direction of increase. For the size which it can potentially be, it can also actually be. Hence since no sensible magnitude is infinite, it is impossible to exceed every assigned magnitude; for if it were possible there would be something bigger than the heavens. (Physics, Book III, Part 7).

According to Aristotle, in physics there are no infinitely large things, but there are infinitely small things. In arithmetic, however, it is the other way around: numbers are infinitely large, but not infinitely small. To make the point, he used Zeno’s Dichotomy argument: any magnitude can be bisected a potentially infinite number of times. Hence numbers are potentially infinite: ‘in the direction of largeness it is always possible to think of a larger number: for the number of times a magnitude can be bisected is infinite’. On the other hand, there is no number smaller than one: ‘one is indivisible whatever it may be, e.g. a man is one man, not many’…’Hence number must stop at the indivisible’.

For the ancient Greeks there were only natural numbers: ‘A unit is that by virtue of which each of the things that exist is called one’. ‘A number is a multitude composed of units’. (Elements, Book VII, Definitions 1 and 2). So, strictly speaking, one was not even a number (let alone zero) and there was definitely no number smaller than one. A fraction was not seen as a number per se, but as the ratio of two numbers: ‘A ratio is a sort of relation in respect of size between two magnitudes of the same kind’. (Elements, Book V, Definition 3). On the other hand, there was no largest number: the number of times a magnitude can be bisected is infinite. Hence, magnitudes can be infinitely small.

Aristotle was wrong on both counts. In mathematics, there are lots of real numbers smaller than one – in fact, an infinity of them – whereas in the physical world nothing is smaller than a Planck length. On the side of the large, however, he was right on both counts: numbers are infinite, but there is no such thing as an infinite physical magnitude.

Whether mathematical infinity is merely potential or fully actual has been and still is the subject of a heated debate. The prince of mathematicians, Carl Gauss, agreed with Aristotle:

So first of all I protest against the use of an infinite magnitude as something completed, which is never allowed in mathematics. The infinite is only a way of speaking, in which one is really talking in terms of limits, which certain ratios may approach as close as one wishes, while others may be allowed to increase without restriction. (Letter to H. C. Schumacher, no. 396, 12 July 1831).

On the other side, Georg Cantor was adamant about actual infinity, and regarded its staunch defence as a mission from God. If there is an actual infinity of natural numbers, infinity can be treated as a number. But then, since Cantor’s set theory implies that there is an infinity of infinities, our childish quest to get our arms around the biggest number is thrown into even deeper despair. Numbers are infinite – or even infinitely infinite.

At the same time, however, Aristotle made clear that there are no infinite physical magnitudes. When we call something ‘infinite’, what we usually mean is that it is really, really big. Otherwise, if we purposely intend to say that it is actually infinite, we don’t understand what we are talking about. Clearly, no thing can be infinite: the difference between the most ginormously big thing and infinity is, well, infinite. Take the 10^80 atoms in the observable universe. That’s a lot of atoms, but it is still a finite quantity. Call it U1. How big is U1 compared to U2=U1^80? As big as an atom relative to the observable universe. So is U2 compared to U3=U2^80. And that’s just a miserable 3: how about U1Million, U1Billion, U1Gazillion? They are all next to nothing compared to actual infinity.

Actual physical infinity is not an awe-inspiring immensity that we are too small to comprehend. It is an ill-considered, meaningless and unusable concept. There is no such thing as actual physical infinity. Nor is there potential infinity: ‘For the size which it can potentially be, it can also actually be’. In the physical world, potential infinity – we can call it indefiniteness – coincides with actual infinity: nothing can be bigger than the universe, otherwise it would itself be the universe.

Once properly rearranged, Aristotle’s crucial distinction between the mathematical and the physical world should not be forgotten. In mathematics, there is zero and there is infinity, and we can speak and think of both. There are infinitely small numbers and infinitely large numbers. Zero is itself a number and infinity can be treated as a number. But neither zero nor infinity are something: there is no such thing as nothing and no such thing as infinity. There are no infinitely small things and no infinitely large things. In the physical world, zero and infinity are just useful signs: zero indicates that something is absent and infinity indicates that something is indefinitely big. There are, however, three fundamental differences between them:

1.   We can observe zero: it is what we call negative evidence. There is nothing – zero things – on the table. But we cannot observe infinity: there is no infinity of things, on the table or anywhere else.

2.   Something becomes nothing after a finite number of bisections. Zero is the magnitude of nothing. But something cannot become infinite: no finite number of operations can turn something into infinity. No thing has infinite magnitude.

3.   We cannot observe the absence of everything – Nall – but we can imagine it. While Nall may be impossible, it is not senseless. But we can neither observe nor imagine the infinity of everything: it is an impossible and senseless concept.

Infinity is a sublime, breathtaking, but much abused word. We should never forget it is a consett – perfect to express a parent’s love for his children, but inapplicable to the size of any thing.

Apr 21 2015
 

Parmenides‘ trouble with ‘nothing’ was nothing new. The ancient Greeks thought the world started with Chaos, a variably imagined primordial mess, where the principle of all things (Arche) eventually gave rise to an ordered Cosmos. Whatever that was – Anaximander called it Apeiron, the limitless – it was something.

This was common to all ancient creation myths, including the Bible. They all started with something. It was only in the second and third century CE that Christian theologians, eager to affirm God’s absolute omnipotence, reinterpreted Genesis as creation ex nihilo. Why is there something rather than nothing? Because God created it. Leibniz was so keen to demonstrate it that he got into his own muddle.

Take the infinite series S=1+x+x^2+x^3+…= 1/(1-x). For x=-1 we have S=1-1+1-1+…=1/2. That is S=(1-1)+(1-1)+…=0+0+…=1/2. Amazing but true: an infinite sum of zeros equals 1/2. Nothing=Something. Luigi Guido Grandi, a Camaldolese monk and mathematician, saw this as a marvellous representation of creation ex nihilo. Here is another way to see it: S can be written as (1-1)+(1-1)+… or as 1-(1-1)-(1-1)-… . The first sum, with an even number of terms, equals zero, while the second sum, with an odd number of terms, equals 1. So – just like a bit – S is either 0 or 1, depending on when we stop counting. Since we have an equal probability of stopping at an even or at an odd number, the expected value of S is 1/2.

Leibniz – one of the smartest men on earth and co-inventor of infinitesimal calculus – bought the argument. The wonders of binary arithmetic – he thought: God, the One, created all Being from Nothing. He was so impressed with the idea that, according to Laplace, he wrote a letter to the Jesuit missionary Claudio Filippo Grimaldi, president of the tribunal of Mathematics in China, asking him to show it to the Chinese emperor and convince him to convert to Christianity! “I report this incident only to show to what extent the prejudices of infancy can mislead the greatest men” (A Philosophical Essay on Probabilities, p. 169).

As should have been obvious to Leibniz (and as it likely was to Grimaldi and to the emperor, who remained an infidel), S converges only for -1<x<1. For x=-1 it does not converge to any number: its partial sums bounce aimlessly between 1 and 0 – between something and nothing, if you want to insist on the metaphor. But in that case you don’t need the whole series. The first two terms are enough: 1-1=0. Nothing is the sum of two opposite somethings – hot and cold, wet and dry (as in Anaximander’s Apeiron), positive and negative, good and evil, light and darkness, matter and antimatter, Yin and Yang, Laurel and Hardy or whatever opposite pair you may fancy. Why not 42-42=0?
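For the arithmetically inclined, here is a minimal sketch – in Python, a choice of mine rather than anything in the original argument – of what is going on. For |x|<1 the partial sums of the geometric series settle on 1/(1-x); for x=-1 they oscillate between 1 and 0 forever, and it is only their running average – the Cesàro mean, which is the respectable version of the ‘expected value of S is 1/2’ argument – that tends to 1/2.

```python
# A minimal sketch (Python; illustrative, not part of the original post) of the
# geometric series S = 1 + x + x^2 + ... and of Grandi's case x = -1.

def partial_sums(x, n_terms):
    """First n_terms partial sums of the geometric series 1 + x + x^2 + ..."""
    sums, total, term = [], 0.0, 1.0
    for _ in range(n_terms):
        total += term
        sums.append(total)
        term *= x
    return sums

# Convergent case, |x| < 1: the partial sums settle on 1/(1 - x).
print(partial_sums(0.5, 20)[-1])   # ~2.0, i.e. 1/(1 - 0.5)

# Grandi's case, x = -1: the partial sums never settle, they bounce 1, 0, 1, 0, ...
print(partial_sums(-1, 8))         # [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]

# Only the running average of the partial sums (the Cesàro mean, i.e. the
# 'equal probability of stopping at an even or odd term' argument) tends to 1/2.
print(sum(partial_sums(-1, 1001)) / 1001)   # ~0.5
```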

Grandi’s series is not a good answer to Leibniz’s question. No wonder – Parmenides would have said. ‘Why is there something rather than nothing’ is a meaningless question: there is no such thing as nothing – we cannot even speak or think about it. In his Tractatus Logico-Philosophicus Wittgenstein agreed:

6.44    Not how the world is, is the mystical, but that it is.

6.45    The contemplation of the world sub specie aeterni is its contemplation as a limited whole.
The feeling of the world as a limited whole is the mystical feeling.

6.5      For an answer which cannot be expressed the question too cannot be expressed.
The riddle does not exist.
If a question can be put at all, then it can also be answered.

Wittgenstein’s world sub specie aeterni is the Einstein-Weyl block universe, and his world as a limited whole resembles Parmenides’ well-rounded sphere. In his 1929 Lecture on Ethics, Wittgenstein described thaumazein as ‘my experience par excellence’: ‘when I have it I wonder at the existence of the world’. Like Parmenides, however, he thought that any ‘verbal expression’ about thaumazein was ‘nonsense’: ‘If I say “I wonder at the existence of the world” I am misusing language’. ‘To say “I wonder at such and such being the case” has only sense if I can imagine it not to be the case’. ‘But it is nonsense to say that I wonder at the existence of the world, because I cannot imagine it not existing’ (p. 41). It is clear from this that the question Wittgenstein had in mind in 6.5 was precisely Leibniz’s question, which he regarded as senseless – a question that cannot be put at all and therefore cannot be answered. The riddle does not exist.

I find this very strange. What’s so difficult about imagining that nothing exists? Just imagine the absence of everything – any thing, all the 10⁸⁰ atoms in the observable universe, or however many there are in the whole universe. And if after that you are left with something – a vacuum space – imagine that away as well, until there is nothing – nothing at all. What’s the big deal? Let’s call this ‘Nall’ – short for Nothing at all – which of course is not a thing but just a name for the absence of any thing. Nall may be impossible, but it is certainly not unimaginable. In fact, ‘Why is there All rather than Nall?’ is a shorter version of Leibniz’s question. Nothing senseless about it.

Apr 172015
 

Nothing is smaller than a Planck length. When I say this, I mean: There is no such thing that is smaller than P, the Planck length. Notice, however, that the same sentence could be misunderstood as having the exact opposite meaning: There is such a thing, called nothing, that is smaller than P. Similarly, if I say: ‘Nothing is better than spaghetti alle vongole’, I mean that I like it a lot. But the opposite reading would mean that I hate it, and that I would rather eat nothing, i.e. not eat.

As obvious as this is, there is an astonishing amount of confusion about the use of the word ‘nothing’. The muddle goes back to Zeno’s teacher (and, according to Plato, his lover – which sheds a different light on his leaving the scene at the Dichotomy paradox): Parmenides. To be fair, the only thing Parmenides wrote is an allegorical poem, On Nature, of which only some 160 lines survive, reported in a few fragments by later writers. But, as these were purposeful selections, there is little chance that the rest was any clearer. Parmenides drew a stark distinction between the way of Truth (Aletheia), which he defined, in no uncertain but awkwardly convoluted terms, as ‘the way that it is and cannot not be’, and the way of Appearance (Doxa), defined as ‘the way that it is not and that it must not be’ (Fragment 3). According to Parmenides, the second is a no-go area: ‘There is no such thing as nothing’ (Fragment 5). The only way is the first:

Now only the one tale remains
Of the way that it is. On this way there are very many signs
Indicating that what-is is unborn and imperishable,
Entire, alone of its kind, unshaken, and complete.
It was not once nor will it be, since it is now, all together,
Single and continuous. (Fragment 8).

This is Einstein’s and Weyl’s block universe, which ‘simply is’, and where ‘the distinction between past, present and future is only a stubbornly persistent illusion’ (in conversation with Einstein, Karl Popper called him ‘Parmenides’; The Unended Quest, p. 129). It is a timeless world, governed by the Principle of Sufficient Reason, which Parmenides expressed in his own way, well before Spinoza and Leibniz:

For what birth could you seek for it?
How and from what did it grow? Neither will I allow you to say
Or to think that it grew from what-is-not, for that it is not
Cannot be spoken or thought. (Fragment 8).

Everything has a cause. Hence it is impossible to think that anything can come from what is not. As Lucretius would later put it in De Rerum Natura: Nihil igitur fieri de nihilo posse (Book I, 205), which is commonly abbreviated as Ex nihilo nihil fit: Nothing comes from nothing. How could it be otherwise? There is no such thing as nothing.

Here is where the muddle starts. Let’s assume for a moment that the Principle of Sufficient Reason is right: what-is cannot come from what-is-not. Does that mean that what-is-not cannot even be spoken or thought about? Clearly not, as indeed Parmenides himself shows by repeatedly referring to it. We can speak and think of what-is-not, i.e. nothing, as a name for the absence of something. ‘There is nothing on the table’ does not mean that there is a thing, called nothing, that is lying on the table. It means the opposite: the table is bare, there is no thing lying on it.

The ancient Greeks were aware of the confusion. When Ulysses told Polyphemus that his name was Nobody (Outis) and proceeded to blind him, the giant called for help. But when his fellow Cyclopes asked him what was happening, they were mystified by his answer: Nobody is trying to kill me (Odyssey, Book 9). Somehow, however, Homer’s descendants never managed to dissolve the puzzle: how can nothing be something? Amazingly, therefore, they had no sign and no use for the concept of zero, until they later imported it from the East. Zero is The Nothing That Is. But it is not something: it is just a sign that we use to indicate that something is not. As such, not only can we speak and think about it, but we can also observe it. It is what we call negative evidence, or absence of evidence. Look: there is nothing on the table, no thing, zero things. And there is no green rhino – zero rhinos – under the carpet. This is not positive evidence about the presence of something. It is negative evidence about its absence. So it does not presuppose the existence of the absent thing: green rhinos do not exist, nor is their existence implied by the sentence.

Speaking and thinking of what-is-not does not imply that it is. Somehow Parmenides was confused about this and, as per Boileau’s rule, expressed it obscurely:

For the same thing both can be thought and can be (Fragment 4).
It must be that what can be spoken and thought is, for it is there for being (Fragment 5).
By thinking gaze unshaken on things which, though absent, are present,
For thinking will not sever what-is from clinging to what-is (Fragment 6).

What? Surely he did not mean that whatever we can speak and think of must exist – wouldn’t that be wonderful? What he meant was that it is impossible to think that things are not. What-is has always been and will always be. Ex nihilo nihil fit. Nothing cannot be something, or, more precisely, nothing cannot become something. It is, as we have seen, another way to state the Principle of Sufficient Reason.

For the same reason, Parmenides thought that something could not become nothing:

Nor ever will the power of trust allow that from what-is
It becomes something other than itself (Fragment 8).

As Lucretius put it: Haud igitur redit ad nihilum res ulla (Book I, 248). It is the same reason on which Zeno built his paradoxes – which, as we have seen, are paradoxes precisely because, in reality, something can become nothing: nothing is smaller than a Planck length.

Parmenides was right: there is no such thing as nothing. More precisely, there is no such thing as nothingness – a place or a state in which phenomena are before they appear into existence. But then he went astray, insisting that what-is-not cannot even be thought about, and that there are no phenomena, no creation, no extinction and no change. What-is is an eternal, immutable one, ‘like the body of a well-rounded sphere’ (Fragment 8). In such a world, everything has a reason and could not be otherwise: there is no such thing as chance.

That is as crazy a world as that of Zeno’s paradoxes. In the real world, we can and do think of possibilities: what-is-not but could be. What-is was not bound to be. It is an event that happened, with some probability: one of several possibilities. Events are nothing before they happen and turn into nothing after they cease to exist. They do not appear from nothingness: they be-come from nothing – nothing at all.

 
