COVID-19 and “Naïve Probabilism”

[from the London Mathematical Laboratory]

In the early weeks of the 2020 U.S. COVID-19 outbreak, guidance from the scientific establishment and government agencies included a number of dubious claims: that masks don’t work, that there was no evidence of human-to-human transmission, and that the risk to the public was low. These statements were backed by health authorities as well as public intellectuals, but were later disavowed or disproven, and the initial under-reaction was followed by an equally severe overreaction and the imposition of draconian restrictions on social activity.

In a recent paper, LML Fellow Harry Crane examines how these early missteps ultimately contributed to higher death tolls, prolonged lockdowns, and diminished trust in science and government leadership. Even so, the organizations and individuals most responsible for misleading the public faced few or no consequences; some even benefited from their mistakes. As he discusses, this perverse outcome can be seen as the result of authorities applying a formulaic procedure of “naïve probabilism” to highly uncertain and complex problems, largely assuming that decision-making under uncertainty boils down to probability calculations and statistical analysis.

This attitude, he suggests, might be captured in a few simple “axioms of naïve probabilism”:

Axiom 1: The more complex the problem, the more complicated the solution.

This idea is a hallmark of naïve decision making. The COVID-19 outbreak was highly complex: a novel virus of uncertain origin spreading through an interconnected global society. But the potential usefulness of masks was not one of these complexities. The mask mistake was consequential not because masks were the antidote to COVID-19, but because they were a low-cost measure whose effect would be neutral at worst; wearing a mask can’t hurt in reducing the spread of a virus.

Yet the experts neglected common sense in favor of a more “scientific response” based on rigorous peer review and sufficient data. Two months after the initial U.S. outbreak, a study confirmed the obvious, and masks went from being strongly discouraged to being mandated by law. Precious time had been wasted, many lives lost, and the economy stalled.

Crane also considers another rule of naïve probabilism:

Axiom 2: Until proven otherwise, assume that the future will resemble the past.

In the COVID-19 pandemic, of course, there was at first no data that masks work, no data that travel restrictions work, no data on human-to-human transmission. How could there be? Yet some naïve experts took this as a reason to maintain the status quo. Indeed, many universities refused to do anything in preparation until a few cases had been detected on campus, at which point they had some data, along with hundreds or thousands of other as-yet-undetected infections.

Crane touches on some of the more extreme examples of this kind of thinking, which assumes that whatever can’t be explained in terms of something that happened in the past is speculative, non-scientific, and unjustifiable:

“This argument was put forward by John Ioannidis in mid-March 2020, as the pandemic outbreak was already spiralling out of control. Ioannidis wrote that COVID-19 wasn’t a ‘once-in-a-century pandemic,’ as many were saying, but rather a ‘once-in-a-century data-fiasco’. Ioannidis’s main argument was that we knew very little about the disease, its fatality rate, and the overall risks it poses to public health; and that in the face of this uncertainty, we should seek data-driven policy decisions. Until the data was available, we should assume COVID-19 acts as a typical strain of the flu (a different disease entirely).”

Unfortunately, waiting for the data also means waiting too long if the virus turns out to be more serious. This is like waiting to hit the tree before accepting that the available data indeed supports wearing a seatbelt. Moreover, in the pandemic example, this “lack of evidence” argument ignores other evidence from before the virus entered the United States. China had locked down a city of 10 million; Italy had locked down its entire northern region, with the entire country soon to follow. There was worldwide consensus that the virus was novel, that it was spreading fast, and that medical communities had no idea how to treat it. That’s data, and plenty of information to act on.

Crane goes on to consider a third axiom of naïve probabilism, which aims to turn ignorance into a strength. Overall, he argues, these axioms, despite being widely used by many prominent authorities and academic experts, actually capture a set of dangerous fallacies for action in the real world.

In reality, complex problems call for simple, actionable solutions; the past doesn’t repeat indefinitely (i.e., COVID-19 was never the flu); and ignorance is not a form of wisdom. The Naïve Probabilist’s primary objective is to be accurate with high probability rather than to protect against high-consequence, low-probability outcomes. This goes against common sense principles of decision making in uncertain environments with potentially very severe consequences.
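
A toy calculation (with entirely invented numbers, not taken from Crane’s paper) illustrates the distinction: a policy of waiting for definitive data can be “right” most of the time and still be a poor decision once low-probability, high-consequence outcomes are priced in.

```python
# Toy illustration with invented numbers -- not from Crane's paper.
# "Wait for more data" is the correct call 95% of the time, but the remaining
# 5% of scenarios are catastrophic; acting early carries a small, certain cost.
p_catastrophe = 0.05       # assumed probability the outbreak turns out severe
cost_early_action = 1.0    # small, certain cost of precautions (arbitrary units)
cost_catastrophe = 1000.0  # loss if we waited and the outbreak was severe

expected_loss_wait = p_catastrophe * cost_catastrophe  # 50.0
expected_loss_act = cost_early_action                  # 1.0

print(f"Waiting is 'right' {1 - p_catastrophe:.0%} of the time")
print(f"Expected loss if we wait:      {expected_loss_wait}")
print(f"Expected loss if we act early: {expected_loss_act}")
```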

Importantly, Crane emphasizes, the hallmark of Naïve Probabilism is naïveté, not ignorance, stupidity, crudeness, or other such base qualities. The typical Naïve Probabilist lacks not knowledge or refinement, but the experience and good judgment that come from making real decisions with real consequences in the real world. The most prominent naïve probabilists are recognized (academic) experts in mathematical probability, or, relatedly, in statistics, physics, psychology, economics, epistemology, medicine, or the so-called decision sciences. Moreover, and worryingly, the best-known naïve probabilists are quite sophisticated, skilled in the art of influencing public policy decisions without suffering from the risks those policies impose on the rest of society.

Read the paper. [Archived PDF]

Mathematics and the World: London Mathematical Laboratory

Stability of Heteroclinic Cycles in Rings of Coupled Oscillators

[from the London Mathematical Laboratory]

Complex networks of interconnected physical systems arise in many areas of mathematics, science and engineering. Many such systems exhibit heteroclinic cycles: dynamical trajectories that show roughly periodic behavior, with non-convergent time averages. In these systems, average quantities fluctuate continuously, although the fluctuations slow down as the dynamics repeatedly and systematically approach a set of fixed points. Despite this general understanding, key open questions remain concerning the existence and stability of such cycles in general dynamical networks.

In a new paper [archived PDF], LML Fellow Claire Postlethwaite and Rob Sturman of the University of Leeds investigate a family of coupled map lattices defined on ring networks and establish stability properties of the possible families of heteroclinic cycles. To begin, they consider a simple system of N coupled nodes, each based on the logistic map, with the coupling between nodes determined by a parameter γ. If γ = 0, each node independently follows logistic map dynamics, showing stable periodic cycles or chaotic behavior. The authors design the coupling to have a general inhibitory effect, driving the dynamics toward zero. Intuitively, this should encourage oscillatory behavior: a node can be active (take a non-zero value) and thereby inhibit the nodes it is connected to; it then decays when other nodes in turn inhibit it; and finally it grows back to an active state as the nodes inhibiting it decay in turn. In the simple case of N = 3, for example, these dynamics lead to a trajectory that cycles between three fixed points, as in the simulation sketch below.
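
The following minimal sketch illustrates this kind of system. The coupling function and parameter values are illustrative assumptions, not those of Postlethwaite and Sturman’s paper: each node follows a logistic map, multiplied by an assumed exponential suppression term driven by its one-way nearest neighbour on the ring.

```python
import numpy as np

# Illustrative sketch only: the paper's exact coupling function and parameter
# values are not reproduced here. Each node follows a logistic map and is
# inhibited (pushed toward zero) by its predecessor on a one-way ring.

def step(x, r=4.0, gamma=5.0):
    """One update of the coupled map lattice on a one-way ring of N nodes."""
    local = r * x * (1.0 - x)                    # uncoupled logistic dynamics
    inhibition = np.exp(-gamma * np.roll(x, 1))  # assumed inhibitory coupling from the predecessor node
    return local * inhibition

x = np.array([0.6, 1e-3, 1e-6])  # N = 3: node 0 active, the others near zero
for t in range(30):
    x = step(x)
    print(t, np.array_str(x, precision=4))
```

With these assumed values, the active node suppresses one neighbour while the other grows, so activity should pass from node to node around the ring, roughly in the spirit of the three-fixed-point cycle described above.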

The authors then extend earlier work to consider larger networks of coupled systems described by a directed graph, and describe how to find the fixed points and heteroclinic connections for such a system. In general, they show, this procedure results in a highly complex and difficult-to-analyze heteroclinic network. Simplifying to the special case of N-node directed graphs with one-way nearest-neighbor coupling, they derive results for the dynamic stability of subcycles within this network, establishing that only one of the subcycles can ever be stable.

Overall, this work demonstrates that heteroclinic networks can typically arise in the phase space dynamics of certain types of symmetric graphs with inhibitory coupling. Moreover, it establishes that at most one of the subcycles can be stable (and hence observable in simulations) for an open set of parameters. Interestingly, Postlethwaite and Sturman find that the dynamics associated with such cycles are not ergodic, so that long-term averages do not converge. In particular, averaged observed quantities such as Lyapunov exponents are ill-defined, and will oscillate at a progressively slower rate.
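
As a simplified illustration of this non-convergence (not the paper’s system), suppose a trajectory visits three states in turn, with the residence time near each fixed point growing geometrically, as is characteristic of attraction to a heteroclinic cycle. The running average of any observable that distinguishes the states then keeps oscillating, ever more slowly, instead of settling down:

```python
import numpy as np

# Simplified illustration, not the paper's system: a trajectory visits three
# states in turn, with residence times growing geometrically (assumed growth
# factor 1.5), mimicking attraction to a heteroclinic cycle. The running
# time-average of an indicator observable then oscillates ever more slowly
# instead of converging.

observable = {0: 1.0, 1: 0.0, 2: 0.0}  # indicator: "is the trajectory near state 0?"
residence, growth = 10.0, 1.5          # assumed initial residence time and growth factor

samples = []
for visit in range(20):
    state = visit % 3
    samples.extend([observable[state]] * int(residence))
    residence *= growth

running_mean = np.cumsum(samples) / np.arange(1, len(samples) + 1)
for i in np.linspace(0, len(samples) - 1, 12, dtype=int):
    print(i, round(float(running_mean[i]), 3))
```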

In addition, the authors address the more general question of whether or not a stable heteroclinic cycle is likely to be found in the corresponding phase space dynamics of a randomly generated physical network of nodes. In preliminary investigations using randomly generated Erdős–Rényi graphs, they find that the probability of existence of heteroclinic cycles increases both as the number of nodes in the physical network increases and as the density of edges in the physical network decreases. However, even in cases where the probability of existence of heteroclinic cycles is high, there is also a high chance of a stable fixed point existing in the phase space. From this they conclude that the stability of the heteroclinic cycle is important in determining whether or not the cycle, and its associated slowing down of trajectories, will be observed in the phase space associated with a randomly generated graph.

The paper is available as a pre-print here [archived PDF].