[from the London Mathematical Laboratory]
In the early weeks of the 2020 U.S. COVID-19 outbreak, guidance from the scientific establishment and government agencies included a number of dubious claims: masks don’t work, there is no evidence of human-to-human transmission, the risk to the public is low. These statements were backed by health authorities as well as public intellectuals, but were later disavowed or disproven, and the initial under-reaction was followed by an equally extreme overreaction and the imposition of draconian restrictions on social activity.
In a recent paper, LML Fellow Harry Crane examines how these early missteps ultimately contributed to higher death tolls, prolonged lockdowns, and diminished trust in science and government leadership. Even so, the organizations and individuals most responsible for misleading the public faced few or no consequences, and some even benefited from their mistakes. As he discusses, this perverse outcome can be seen as the result of authorities applying a formulaic procedure of “naïve probabilism” to highly uncertain and complex problems, largely assuming that decision-making under uncertainty boils down to probability calculations and statistical analysis.
This attitude, he suggests, might be captured in a few simple “axioms of naïve probabilism”:
Axiom 1: The more complex the problem, the more complicated the solution.
This idea is a hallmark of naïve decision-making. The COVID-19 outbreak was highly complex: a novel virus of uncertain origin spreading through an interconnected global society. But the potential usefulness of masks was not one of these complexities. The mask mistake was consequential not because masks were the antidote to COVID-19, but because they were a low-cost measure whose effect would be neutral at worst; wearing a mask can’t hurt in reducing the spread of a virus.
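To make the asymmetry concrete, here is a minimal expected-loss sketch, with purely hypothetical numbers that are not drawn from Crane’s paper, assuming only that masks carry a small fixed cost and a benefit that is at worst zero:

```python
# Toy expected-loss comparison for a cheap intervention whose worst case is neutral.
# All numbers are hypothetical illustrations, not estimates from Crane's paper.

def expected_loss(action_cost, benefit_if_effective, p_effective, baseline_harm):
    """Expected loss = baseline harm - expected benefit + cost of taking the action."""
    return baseline_harm - p_effective * benefit_if_effective + action_cost

BASELINE_HARM = 100.0   # harm (arbitrary units) if nothing is done
MASK_COST = 1.0         # small fixed cost of recommending masks
MASK_BENEFIT = 30.0     # harm averted *if* masks turn out to help

for p in (0.0, 0.25, 0.5, 0.9):   # unknown probability that masks help
    with_masks = expected_loss(MASK_COST, MASK_BENEFIT, p, BASELINE_HARM)
    no_masks = expected_loss(0.0, 0.0, 0.0, BASELINE_HARM)
    print(f"P(masks help)={p:.2f}  loss with masks={with_masks:6.1f}  without={no_masks:6.1f}")

# Even if masks turn out useless (p = 0), the downside is bounded by the small
# fixed cost; any p > 0 makes masking strictly better. The decision does not
# require resolving the uncertainty first.
```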
Yet the experts neglected common sense in favor of a more “scientific response” based on rigorous peer review and sufficient data. Two months after the initial U.S. outbreak, a study confirmed the obvious, and masks went from being strongly discouraged to being mandated by law. Precious time had been wasted, many lives lost, and the economy stalled.
Crane also considers another rule of naïve probabilism:
Axiom 2: Until proven otherwise, assume that the future will resemble the past.
In the COVID-19 pandemic, of course, there was at first no data that masks work, no data that travel restrictions work, no data on human-to-human transmission. How could there be? Yet some naïve experts took this as a reason to maintain the status quo. Indeed, many universities refused to take any preparatory action until a few cases had been detected on campus, at which point they had some data, along with hundreds or thousands of other, as yet undetected, infections.
Crane touches on some of the more extreme examples of this kind of thinking, which assumes that whatever can’t be explained in terms of something that happened in the past is speculative, non-scientific, and unjustifiable:
“This argument was put forward by John Ioannidis in mid-March 2020, as the pandemic outbreak was already spiralling out of control. Ioannidis wrote that COVID-19 wasn’t a ‘once-in-a-century pandemic,’ as many were saying, but rather a ‘once-in-a-century data-fiasco’. Ioannidis’s main argument was that we knew very little about the disease, its fatality rate, and the overall risks it poses to public health; and that in face of this uncertainty, we should seek data-driven policy decisions. Until the data was available, we should assume COVID-19 acts as a typical strain of the flu (a different disease entirely).”
Unfortunately, waiting for the data also means waiting too long if the virus turns out to be more serious. This is like waiting to hit the tree before accepting that the available data indeed supports wearing a seatbelt. Moreover, in the pandemic example, this “lack of evidence” argument ignores other evidence from before the virus entered the United States. China had locked down a city of 10 million; Italy had locked down its entire northern region, with the entire country soon to follow. There was worldwide consensus that the virus was novel, that it was spreading fast, and that medical communities had no idea how to treat it. That’s data, and plenty of information to act on.
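One way to see why waiting is so costly, sketched here with hypothetical parameters (a fixed doubling time and initial case count that are not taken from the paper), is that undetected infections compound while the data is being gathered:

```python
# Toy illustration of the cost of "waiting for the data" under exponential spread.
# The doubling time and initial case count are hypothetical, chosen only to show
# how delay compounds; they are not estimates from the paper.

INITIAL_CASES = 100
DOUBLING_TIME_DAYS = 4.0

def cases_after(days):
    """Undetected infections after `days` of unchecked exponential growth."""
    return INITIAL_CASES * 2 ** (days / DOUBLING_TIME_DAYS)

for delay in (0, 7, 14, 28):   # days spent waiting for conclusive local data
    print(f"delay = {delay:2d} days  ->  roughly {cases_after(delay):,.0f} cases")

# With a 4-day doubling time, a four-week wait turns 100 cases into ~12,800.
# The data eventually arrives, but only after cheap interventions have lost
# most of their value.
```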
Crane goes on to consider a third axiom of naïve probabilism, one that aims to turn ignorance into a strength. Overall, he argues that these axioms, despite being widely applied by prominent authorities and academic experts, capture a set of fallacies that are dangerous when acted on in the real world.
In reality, complex problems call for simple, actionable solutions; the past doesn’t repeat indefinitely (COVID-19, for instance, was never the flu); and ignorance is not a form of wisdom. The Naïve Probabilist’s primary objective is to be accurate with high probability rather than to protect against high-consequence, low-probability outcomes. This runs against common-sense principles of decision-making in uncertain environments where the consequences can be severe.
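A minimal toy calculation, with invented probabilities and losses rather than anything from Crane’s paper, illustrates why that objective can fail on its own terms:

```python
# Toy comparison of two policies: one optimized to be "right with high probability",
# one designed to cap the worst case. All probabilities and losses are invented
# for illustration.

P_MILD, P_SEVERE = 0.95, 0.05      # hypothetical chance the outbreak is mild vs. severe

# Losses (arbitrary units) for each policy under each outcome.
LOSS = {
    "treat_as_flu": {"mild": 0,  "severe": 1000},   # correct 95% of the time, ruinous otherwise
    "precaution":   {"mild": 20, "severe": 100},    # small sure cost, bounded downside
}

for policy, outcomes in LOSS.items():
    expected = P_MILD * outcomes["mild"] + P_SEVERE * outcomes["severe"]
    worst = max(outcomes.values())
    print(f"{policy:12s}  expected loss = {expected:5.1f}  worst case = {worst}")

# treat_as_flu: expected loss 50, worst case 1000
# precaution:   expected loss 24, worst case 100
# The policy that is accurate with high probability fares worse on both counts
# once the low-probability outcome is severe enough.
```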
Importantly, Crane emphasizes, the hallmark of Naïve Probabilism is naïveté, not ignorance, stupidity, crudeness, or other such base qualities. The typical Naïve Probabilist lacks not knowledge or refinement, but the experience and good judgment that come from making real decisions with real consequences in the real world. The most prominent naïve probabilists are recognized (academic) experts in mathematical probability or related fields: statistics, physics, psychology, economics, epistemology, medicine, or the so-called decision sciences. More worryingly, the best-known naïve probabilists are quite sophisticated, skilled in the art of influencing public policy decisions without bearing the risks those policies impose on the rest of society.
Read the paper. [Archived PDF]