COVID-19 and “Naïve Probabilism”

[from the London Mathematical Laboratory]

In the early weeks of the 2020 U.S. COVID-19 outbreak, guidance from the scientific establishment and government agencies included a number of dubious claims: masks don’t work, there is no evidence of human-to-human transmission, the risk to the public is low. These statements were backed by health authorities as well as public intellectuals, but were later disavowed or disproven, and the initial under-reaction was followed by an equally extreme overreaction and the imposition of draconian restrictions on human social activities.

In a recent paper, LML Fellow Harry Crane examines how these early missteps ultimately contributed to higher death tolls, prolonged lockdowns, and diminished trust in science and government leadership. Even so, the organizations and individuals most responsible for misleading the public suffered few or no consequences, and some even benefited from their mistakes. As he discusses, this perverse outcome can be seen as the result of authorities applying a formulaic procedure of “naïve probabilism” to highly uncertain and complex problems, largely assuming that decision-making under uncertainty boils down to probability calculations and statistical analysis.

This attitude, he suggests, might be captured in a few simple “axioms of naïve probabilism”:

Axiom 1: The more complex the problem, the more complicated the solution.

This idea is a hallmark of naïve decision making. The COVID-19 outbreak was highly complex: a novel virus of uncertain origins spreading through an interconnected global society. But the potential usefulness of masks was not one of these complexities. The mask mistake was consequential not because masks were the antidote to COVID-19, but because they were a low-cost measure whose effect would be neutral at worst; wearing a mask cannot hurt in reducing the spread of a virus.
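Crane’s point here is essentially a payoff-asymmetry argument: a cheap measure whose downside is negligible does not require precise probability estimates before acting. The short Python sketch below uses purely hypothetical numbers (illustrative assumptions, not figures from Crane’s paper) to show how the comparison comes out however the scenario probabilities are set.

```python
# Hypothetical payoff sketch: the numbers are illustrative assumptions,
# not estimates from Crane's paper or from COVID-19 data.

scenarios = {               # assumed probabilities of each scenario
    "masks help a lot": 0.3,
    "masks help a little": 0.5,
    "masks do nothing": 0.2,
}

# Losses in arbitrary units: wearing masks costs a small, fixed inconvenience;
# skipping them forfeits whatever benefit they would have provided.
loss_if_worn = {"masks help a lot": 1, "masks help a little": 1, "masks do nothing": 1}
loss_if_skipped = {"masks help a lot": 100, "masks help a little": 20, "masks do nothing": 0}

expected_worn = sum(p * loss_if_worn[s] for s, p in scenarios.items())
expected_skipped = sum(p * loss_if_skipped[s] for s, p in scenarios.items())

print(f"expected loss if masks are worn:    {expected_worn:.1f}")
print(f"expected loss if masks are skipped: {expected_skipped:.1f}")

# However the probabilities are shifted, the worn-mask loss stays near 1,
# while the skipped-mask loss blows up whenever masks turn out to help;
# a low-cost, neutral-at-worst measure does not need to wait for data.
```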

Yet the experts neglected common sense in favor of a more “scientific response” based on rigorous peer review and sufficient data. Two months after the initial U.S. outbreak, a study confirmed the obvious, and masks went from being strongly discouraged to being mandated by law. Precious time had been wasted, many lives lost, and the economy stalled.

Crane also considers another rule of naïve probabilism:

Axiom 2: Until proven otherwise, assume that the future will resemble the past.

In the COVID-19 pandemic, of course, there was at first no data that masks worked, no data that travel restrictions worked, no data on human-to-human transmission. How could there be? Yet some naïve experts took this as a reason to maintain the status quo. Indeed, many universities refused to do anything in preparation until a few cases had been detected on campus—at which point they had some data, along with hundreds or thousands of other as-yet-undetected infections.

Crane touches on some of the more extreme examples of this kind of thinking, which assumes that whatever can’t be explained in terms of something that happened in the past is speculative, non-scientific, and unjustifiable:

“This argument was put forward by John Ioannidis in mid-March 2020, as the pandemic outbreak was already spiralling out of control. Ioannidis wrote that COVID-19 wasn’t a ‘once-in-a-century pandemic,’ as many were saying, but rather a ‘once-in-a-century data-fiasco’. Ioannidis’s main argument was that we knew very little about the disease, its fatality rate, and the overall risks it poses to public health; and that in face of this uncertainty, we should seek data-driven policy decisions. Until the data was available, we should assume COVID-19 acts as a typical strain of the flu (a different disease entirely).”

Unfortunately, waiting for the data also means waiting too long if the virus turns out to be more serious. This is like waiting to hit the tree before accepting that the available data indeed supports wearing a seatbelt. Moreover, in the pandemic example, this “lack of evidence” argument ignores other evidence from before the virus entered the United States. China had locked down a city of 10 million; Italy had locked down its entire northern region, with the rest of the country soon to follow. There was worldwide consensus that the virus was novel, that it was spreading fast, and that medical communities had no idea how to treat it. That’s data, and plenty of information to act on.

Crane goes on to consider a third axiom of naïve probabilism, one that aims to turn ignorance into a strength. Overall, he argues, these axioms, despite being widely relied upon by prominent authorities and academic experts, actually capture a set of dangerous fallacies for action in the real world.

In reality, complex problems call for simple, actionable solutions; the past doesn’t repeat indefinitely (COVID-19, for example, was never the flu); and ignorance is not a form of wisdom. The Naïve Probabilist’s primary objective is to be accurate with high probability rather than to protect against high-consequence, low-probability outcomes. This goes against common-sense principles of decision-making in uncertain environments where the potential consequences are very severe.
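To make that distinction concrete, here is a minimal sketch with invented numbers (assumptions for illustration only, not taken from Crane’s paper): a policy that is “correct” in the most likely scenario can still carry a far worse expected and worst-case loss than a precautionary policy that looks wrong most of the time.

```python
# Hypothetical illustration: all probabilities and losses below are assumed
# for the sketch, not drawn from Crane's paper or from epidemiological data.

p_mild, p_severe = 0.95, 0.05        # assumed scenario probabilities

# Losses in arbitrary units for two policies under each scenario.
loss = {
    "do nothing": {"mild": 0,  "severe": 1000},   # "right" 95% of the time
    "act early":  {"mild": 10, "severe": 50},     # "wrong" in the typical case
}

for policy, outcome in loss.items():
    expected = p_mild * outcome["mild"] + p_severe * outcome["severe"]
    worst = max(outcome.values())
    print(f"{policy:10s}  expected loss = {expected:6.1f}   worst case = {worst}")

# "Do nothing" maximizes the chance of being accurate (it is optimal in the
# 95%-likely mild scenario), yet its expected and worst-case losses are far
# larger; that is exactly the trade-off the naive probabilist overlooks.
```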

Importantly, Crane emphasizes, the hallmark of Naïve Probabilism is naïveté, not ignorance, stupidity, crudeness, or other such base qualities. The typical Naïve Probabilist lacks not knowledge or refinement, but the experience and good judgment that come from making real decisions with real consequences in the real world. The most prominent naïve probabilists are recognized (academic) experts in mathematical probability or related fields: statistics, physics, psychology, economics, epistemology, medicine, or the so-called decision sciences. Moreover, and worryingly, the best-known naïve probabilists are quite sophisticated, skilled in the art of influencing public policy decisions without suffering from the risks those policies impose on the rest of society.

Read the paper. [Archived PDF]

Essay 42: The View From Nowhere as an Additional Problem in “Thinking About Thinking”

The View From Nowhere is a book by philosopher Thomas Nagel.

Published by Oxford University Press in 1986, it contrasts passive and active points of view in how humanity interacts with the world, relying either on a subjective perspective that reflects a particular point of view or on an objective perspective that is more detached. Nagel describes the objective perspective as the “view from nowhere,” one in which the only valuable ideas are those derived independently.

Epistemology (what we can know and why) is puzzling to the max if you ponder it for a moment. Think of a painting in a Boston museum. If you walk up to it, you see only the little piece in front of your nose, so you back up and try to get an “optimal grip” (to use Prof. Merleau-Ponty’s language). If you walk all the way to China and try to see it from there, you will see nothing of it, no matter what telescope you might use. This is sort of what we mean by “the view from nowhere.” You’re way too far away.

This brings us to the problem of the “detached observer” (modern versions of which stem from Descartes, who wants a bird’s-eye view of all other bird’s-eye views). This is tricky and elusive for obvious reasons. When Richard Feynman or some other physicist theorizes, is he achieving a view from nowhere, or isn’t he? No one will deny a place to theoretical “standpoints” and “viewpoints.” But the theoretician is himself a person who breathes, and sneezes, and yawns, and gets hungry, and has to stretch his or her legs after too much sitting. One can’t quite “move into one’s own mind,” since all theory is “embodied.”

Human beings have the unique ability to view the world in a detached way: 

We can think about the world in terms that “transcend” our own experience or interest, and consider the world from a vantage point that is, in Nagel’s words, “nowhere in particular.”

The strangeness of the human situation can be seen in the fact that this “view from nowhere,” this “detached observer” theoretical stance, includes the theorist himself, the detachment, and the theory as part of the “bird’s-eye view,” without any particular concrete bird serving as your ambassador or proxy.

“The unifying theme, as Nagel puts it at the beginning, is the problem of how to combine the perspective of a particular person ‘inside the world’ with an objective view of that same world, the person and his viewpoint included.”

(Bernard Williams, 1986 book review, London Review of Books.)

We have already seen the problem of Husserl’s (died in 1938) “rhomboid” or “matchbox” (i.e., you can’t see the entire matchbox all at once) and of Ortega y Gasset’s “orange” (i.e., you cannot see the back or reverse of a spherical orange unless you walk around it and lose the first view from the front), and all this “partial viewing” takes place on “Neurath’s boat” (where we’re like sailors on a knowledge ship who can’t go back to any origins and can’t discuss Platonism with Plato himself; the Harvard philosopher Quine, among others, mentions this problem). The ship moves forward, and the “matchbox/orange” is viewed in some cabin on the ship (i.e., your field, such as chemistry or history or biology).

Lastly: think of the opening line of Thomas Mann’s (died in 1955) great novel, Joseph and His Brothers: “Deep is the well of the past. Should we not call it bottomless?”

In other words, there is no way for us as “knowledge detectives” to go back to the origins of ourselves or our history, since that is all unrecoverable and lost “in the mists of time.”

A student embarking on a “knowledge quest” (university education) should not dodge these puzzles and mysteries but should look at them “unblinkingly.” A deep education means all the dimensions of the quest are in front of the student and not wished away. This includes the student’s own danger of being lost as “a leaf in the whirlwind of time” (a Hannah Arendt phrase we have already seen). Career aside, there are multiple “Rubik’s Cubes” here if the student wants to experience the deep and the wide.

Essay 36: What We Mean by “Epochal Waters”

We sometimes use the phrase “epochal waters” to refer to the deepest layers of the past, which we “swimmers” at the surface of the ocean don’t see or know. “Epochal waters” are latent; currents are closer to the surface.

There’s a similar idea from the French philosopher Michel Foucault, who died in 1984. In his classic from 1966, The Order of Things, he talks about the “episteme” (as in epistemology) that frames everything from deep down. (The Greeks distinguished between “techne” (arts, crafts, practical skills) and “episteme” (theory, overview).)

“In essence, Les mots et les choses (Foucault’s The Order of Things) maintains that every period is characterized by an underground configuration that delineates its culture, a grid of knowledge making possible every scientific discourse, every production of statements. Foucault designates this historical a priori as an episteme, deeply basic to defining and limiting what any period can—or cannot—think.

Each science develops within the framework of an episteme, and therefore is linked in part with other sciences contemporary with it.”

(Didier Eribon, Michel Foucault, Harvard University Press, 1991, page 158)

Take a simple example. A discussion comes up about what man is or does or thinks or knows. In today’s episteme or pre-definition, one thinks immediately not of man in terms of language or the invention of gods, but in terms of computational genomics, big data, and bipedalism (walking upright on two legs). It is assumed in advance, via an invisible episteme, that science and technology (physics, genetics, big data, chemistry, and biology) hold the answer, and that the rest is sort of outdated. This feeling is automatic and reflexive, like breathing, and might be called “mental breathing.”

One’s thoughts are immediately sent in certain directions or grooves, a process that is automatic and more like a “mental reflex” than a freely chosen “analytical frame.” The thinker has been “trained” in advance, and the episteme pre-decides what is thinkable and what is not.

There are deep epistemes that underlie all analyses: for example, in the Anglo-American tradition of looking at things, the phrase “human nature” inevitably comes in as a deus ex machina (i.e., a sudden way of clinching an argument, the “magic factor” that has been there all along). If you ask why the concept of “human nature” is suddenly being “imported,” the person who uses the phrase has no idea. It’s in the “epochal water,” or Foucault’s episteme, and it suddenly swims up from the sea floor below.

Another quick example: in the Anglo-American mind, there’s a belief from “way down and far away” that failure in life is mostly about individual behavior (laziness, alcoholism, etc.) and personal “stances,” while “circum-stances” are an excuse. This way of sequencing acceptable explanations is deeply pre-established in a way that is itself hard to explain. It serves to “frame the picture” in advance. These are all “epochal water” or episteme phenomena.