Jun 17, 2020

A positive effect of the coronavirus pandemonium has been to lay bare the naïve view of science as the repository of certainty. As humdrum media kept informing the public about ‘what science says’ and governments stuck to the mantra of ‘being driven by science’, scientists themselves staged a dazzling display of varying views, findings, recommendations and guidance.

The treacherous misconception according to which science knows the truth and scientists impart it received a mighty blow – I’d dare to say final, but I won’t. People know that economists disagree, and are used to it – whence the vacuous question: Is economics a science? Even more so for finance experts, whose differing views and opinions are the very essence of financial markets – in the face of a standard academic theory still based on the hyperuranian assumption of common knowledge. But when it comes to ‘real’ sciences, people expect experts to reach incontrovertible conclusions firmly grounded on objective evidence – the opposite of what they got from virologists, epidemiologists and assorted scientists around the world in the last few months.

Scientific disagreement should be no surprise: far from being the preserve of certainty, science is the realm of uncertainty. Scientists pursue certainty by asking appropriate questions, but are entirely comfortable with the uncertainty of provisional answers. It is not up to them to decide what to do with their findings.

What is surprising, however, is that most scientists at work on the pandemic anywhere in the world have failed not only to answer but even to ask a most basic question: how many people are infected? ‘A lot’ may have been an understandably quick answer in the initial stage of the tsunami, when all frantic efforts were focused on identifying and treating as many infections as possible. But when, by the beginning of March, the time came to take vital decisions on how best to contain the virus spread, hardly anyone pointed out that a more precise answer was necessary. John Ioannidis did it in mid-March; Giorgio Alleva et al. did it a little later, also providing an outstanding description of the operational framework required to overcome ‘convenience sampling’. A few others did, but no one heard. Instead, starting from Italy on 9 March, one country after another decided to impose blanket lockdowns, varying to some degree in intensity and scope, but all uniformly applied across the entire national territory, irrespective of what would have surely emerged as wide geographical variations in the Base Rate of infections.

Yinon Weiss’ trilogy spares me the task of expounding on what happened next – I agree with virtually everything he wrote. I add two observations. First, there is a stark parallel with the 2008 Great Financial Crisis, when fear of dread drove attention to the gloomiest scenarios of the most hyperbolic doomsayers. This had the disastrous effect of swaying many investors into locking in heavy losses and missing the 2009 turnaround. In the coronavirus panic, the direst predictions persuaded people to willingly acquiesce to unprecedented living conditions for the greater good of saving lives, while being largely oblivious to any consideration of future costs. Second, I hardly need to specify that questioning the appropriateness of lockdown measures has nothing to do with the foolish nonsense of virus deniers and assorted lunatics, no matter how they may attempt to hijack the arguments. Discussing the lockdowns does not mean rejecting their effectiveness in stemming the virus spread, let alone doubting their necessity in specific circumstances. It means assessing their impact vis-à-vis a full evaluation of their costs and alternative courses of action.

In this regard, as infections have started to recede, a major question currently being asked is what could have been done better with the benefit of hindsight. Unsurprisingly, the common answer is more of the same: earlier and stricter lockdowns. One notable exception, however, came from the UK Chief Medical Officer Chris Whitty, who recently expressed regret at having failed to increase testing capacity earlier on. “Many of the problems we had came because we were unable to work out exactly where we were, and we were trying to see our way through the fog.”

Indeed. Only at the end of April did the Office for National Statistics start to produce the Coronavirus Infection Survey Pilot, reporting an estimate of the number of people infected with coronavirus in the community population of England, excluding infections reported in hospitals, care homes and other institutional settings. The Base Rate, finally! The first reported number was 148,000 infections, equal to 0.27% of the population – 1 in 370. Since then the number has been trending down, and according to the latest report of 12 June is 33,000, equal to 0.06% of the population – 1 in 1667.
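As a back-of-the-envelope check (not taken from the ONS report itself), the ‘1 in N’ odds quoted above follow directly from the reported percentages:

```python
# Convert the ONS prevalence percentages into '1 in N' odds.
# A prevalence of p% means 1 infected person per 100/p people.
for pct in (0.27, 0.06):
    one_in = round(100 / pct)
    print(f"{pct}% of the population is about 1 in {one_in}")
# 0.27% -> 1 in 370, 0.06% -> 1 in 1667
```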

Curiously, on the same day I was invited to take part in a COVID-19 testing research study (Wave 2) conducted by Imperial College London and Ipsos MORI. ‘The study will help the Government work out how many people may have COVID-19 in different areas of the country. The test may indicate whether you currently have the COVID-19 virus. We have chosen your name at random, and participation is completely voluntary’.

Better late than never, I guess. But the question remains: Why did it take so long? Why wade through the fog for five months, guided only by rickety models full of crude assumptions? Why guess the virus spread through a highly abstract number rather than actually measure it on the ground?

We will never know what the infection rate was back in January and February – in the UK or anywhere else – and how it varied through time, across different areas, age groups, sex, and other cohorts – the kind of data that Ipsos MORI and other statistical research agencies routinely inundate us with, ahead of elections and in myriad other circumstances. Sure, a viral test is not as easy to carry out as a telephone interview. And, despite earlier warnings at the highest levels, testing capacity back then was woefully insufficient. But the mystery is that random testing was nowhere even considered as an option, including – as far as I can tell – in biostatistics and statistical epidemiology departments. The only option on the table was blanket lockdowns, with national governments left to decide their intensity and people left to dread their worst nightmares and bear all costs, in the name of a comforting but misleading precautionary principle.
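To illustrate how little is needed – this is a minimal sketch, not the design Alleva et al. propose, and the true prevalence, sample size and seed are all hypothetical – a modest random sample pins down the Base Rate to within a fraction of a percentage point:

```python
# Sketch: estimating prevalence from a random sample of the population.
# We simulate a region with a hypothetical true prevalence of 2%, test
# a random sample of 10,000 people, and report the estimate with a
# 95% confidence interval (standard Wald approximation).
import math
import random

random.seed(42)  # for reproducibility of the simulation

def estimate_prevalence(true_prevalence: float, sample_size: int):
    positives = sum(random.random() < true_prevalence
                    for _ in range(sample_size))
    p_hat = positives / sample_size
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, half_width

p_hat, hw = estimate_prevalence(0.02, 10_000)
print(f"Estimated prevalence: {p_hat:.2%} +/- {hw:.2%}")
```

Run separately by area and cohort, the same arithmetic would have revealed exactly the geographical and demographic variation that uniform lockdowns ignored.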

It is entirely possible that, despite showing cross-sectional and temporal variation, Base Rate data would have been judged too high to leave any alternative to the adopted lockdown policies. But the point is: what is too high? Is the current infection rate in England too high? Presumably not, given that lockdown measures are being relaxed. As the rate has been coming down since late April, it is reasonable to presume that it was higher earlier on. But how high? Was it 1%? 5%? 10%? We’ll never know. And, crucially, whatever it was, it was an average number, higher in certain areas and lower in others, higher for certain cohorts and lower for others, and varying through time. Such critical information would have been of great help in modulating restriction policies, intensifying them where needed and diminishing or even excluding them elsewhere.

Oh well, too late. But the point seems to be finally coming across. Hopefully, there won’t be a Wave 2. But, just in case, random testing will provide the visibility needed to navigate through its containment.

I am looking forward to taking my test. Thanks to the ONS Base Rate estimate, and not having any symptoms, I am almost sure I will come out negative. The letter does not specify the test’s accuracy – it just says in the Additional Information overleaf that ‘test results are not 100% accurate’. As we have seen, Base Rate estimation does not require high accuracy: as long as its accuracy level is known, any test would do (the same point is made here). But of course accuracy is important at the individual level. So what will happen in the unlikely event that I test positive? It depends. It would be bad news if the test had maximum Specificity – a Smoking Gun: FPR=0%. If not, however, a positive result will very likely be a False Positive. Hence it would be wrong to interpret it as proving that I am infected. Before reaching that conclusion, I would want to repeat the test and, if I am positive again, repeat it a third time.
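The arithmetic behind this can be made explicit with Bayes’ Theorem. Since the letter does not state the test’s characteristics, the figures below are pure assumptions for illustration – say Sensitivity 90% and Specificity 98%, i.e. FPR=2% – combined with the ONS Base Rate of 0.06%:

```python
# Posterior probability of infection after a positive test, via Bayes'
# Theorem. Sensitivity (90%) and specificity (98%) are hypothetical:
# the letter does not state the test's actual accuracy.
def posterior_positive(base_rate, sensitivity, specificity):
    true_pos = sensitivity * base_rate             # infected and positive
    false_pos = (1 - specificity) * (1 - base_rate)  # healthy but positive
    return true_pos / (true_pos + false_pos)

base_rate = 0.0006  # ONS estimate: 0.06% of the population
post1 = posterior_positive(base_rate, 0.90, 0.98)  # after one positive
post2 = posterior_positive(post1, 0.90, 0.98)      # after a second
post3 = posterior_positive(post2, 0.90, 0.98)      # after a third
print(f"{post1:.1%}, {post2:.1%}, {post3:.1%}")
```

Under these assumed numbers, a single positive leaves the probability of infection at only a few percent; it takes a second and a third concordant positive to push it above 95% – which is exactly why repeating the test matters.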

I hope that this point will be well clarified to the unlucky positives and that they will not be rushed into isolation.

