Science-Watching: From Ignition to Energy

[from Science & Technology Review July/August 2025 Research Highlights, by Noah Pflueger-Peters]

Achieving ignition at the National Ignition Facility (NIF) proved that harnessing the power of the Sun in a laboratory may be possible. The Sun’s extreme temperatures and pressures cause light elements to fuse together to create heavier ones, releasing enormous energy and sustaining conditions for more thermonuclear reactions. NIF replicates these conditions with inertial confinement fusion, in which lasers compress and heat a target capsule filled with deuterium and tritium (DT), “heavy” isotopes of hydrogen that contain extra neutrons. When the isotopes fuse, they create helium and a neutron, and the lost mass is converted into energy; harnessing that output for power production is the goal of inertial fusion energy (IFE).

Nuclear fusion produces significantly more energy than either nuclear fission or burning fossil fuels for equivalent amounts of fuel. Since the input materials for fusion energy are plentiful on Earth, an IFE power plant could produce safe, abundant, power grid-compatible energy without highly radioactive byproducts.

Although significant work remains to harness fusion energy, pursuing the development and deployment of IFE is crucial for the nation’s energy security, enabling the United States to shape implementation worldwide, avoid technological surprises from adversaries, and influence technical leadership in other energy-intensive technologies such as AI, machine learning (ML), and supercomputing.

IFE research stretches back to the early days of Lawrence Livermore, and today the Laboratory is fostering the overall fusion ecosystem. Livermore’s unique capabilities, expertise, and connections will be critical to laying the technical, logistical, and legal groundwork to make IFE possible. “IFE is a grand scientific and engineering challenge, something that is so incredibly difficult and high-risk and takes enormous expertise,” says Tammy Ma, Livermore’s IFE Institutional Initiative lead. “This challenge makes it the right kind of problem for national laboratories to pursue.”

This artist’s rendering shows the concept for an inertial fusion energy (IFE) power plant design, with a cutaway to show the plant’s target chamber in the center. Livermore researchers are laying the groundwork for private fusion companies to build similar designs. (Illustration by Eric Smith.)

Designing for Viability

NIF is the only facility to date to demonstrate the ignition and burning plasma conditions that are prerequisites for IFE, but it is an experimental facility for stockpile stewardship research, not a power plant. To be commercially viable and produce the energy to offset costs and meet demands (baseload power), IFE plants will need to generate more than 30 times the energy they deliver to the fusion target on every shot while firing 10 or more shots per second, compared to NIF’s rate of one or two shots per day.
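As a rough sense of scale, the viability thresholds above can be multiplied out. The per-shot energy delivered to the target below is an assumed NIF-scale value for illustration, not a figure from the article; only the 30-times gain and 10-shots-per-second thresholds come from the text:

```python
# Back-of-envelope check of the viability thresholds (illustrative only).
energy_to_target_J = 2.0e6   # ~2 MJ delivered to target per shot (NIF-scale assumption)
target_gain = 30             # minimum per-shot gain cited for viability
rep_rate_hz = 10             # minimum repetition rate cited for viability

fusion_power_W = energy_to_target_J * target_gain * rep_rate_hz
print(f"Gross thermal fusion output: {fusion_power_W / 1e6:.0f} MW")  # 600 MW
```

Net electricity delivered to the grid would be substantially lower, since the plant must convert heat to electricity at limited efficiency and recirculate part of the output to power its own lasers.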

The Laser Inertial Fusion Energy (LIFE) study, conducted between 2008 and 2013, aimed to build directly on technology developed for NIF to achieve IFE and took a systematic approach to this requirement by developing the Integrated Process Model (IPM). (See S&TR, April/May 2009 [archived PDF], pp. 6-15.)

IPM is a technoeconomic model of an IFE power plant with detailed technical and cost breakdowns and interdependencies of key systems and subsystems. “The work done under LIFE was fantastic,” says Ma. “IPM lays out engineering and physics requirements for the entire system to test out different scenarios and see the impact. Now, we not only get to expand on all that but also leverage 15 years of new data from NIF, better codes, and high-performance computing (HPC), as well as new work in AI, ML, advanced manufacturing, diagnostics, and nonproliferation across the Laboratory.”

IPM describes an IFE power plant that requires a solid-state laser driver system to “pump” lasers with optical energy using laser diodes instead of flashlamps as at NIF. The plant will also need to fabricate and fill target capsules onsite and send them into its target chamber at a high enough frequency to produce baseload power. “We will have to repeatedly inject targets into the chamber, so the targets must be able to withstand and survive that process,” explains Ma. “Then, the lasers will track the moving targets, and when one gets to the center of the chamber, they would fire on the centered target, repeating 10 to 20 times per second.”

The facility would convert fusion energy into heat and then electricity via steam turbines, sending most of the electricity to the power grid and recycling the rest to power operations on subsequent shots. Neutrons from the reaction would produce tritium needed for the DT fuel by bombarding lithium isotopes in a “breeding blanket” material lining its target chamber. By closing both the power and fuel cycles, IFE plants are expected to be self-sustaining.

Thanks in part to IFE STARFIRE (IFE Science and Technology Accelerated Research for Fusion Innovation and Reactor Engineering), a Department of Energy (DOE)-funded multi-institutional IFE research and development hub, researchers across the Laboratory are working to meet the new system’s demands. IPM can help identify key challenges, test the viability of new designs, and direct future research. “Many technical models and cost models exist for IFE, but very few, if any, pair systems and cost models together at the same depth as IPM,” says Mackenzie Nelson, a technoeconomic systems analyst in the Computational Engineering Division. “This type of tool offers such an advantage because we can assess design choices from both a technical and economic standpoint and create blueprints for what an IFE plant could look like.”

(left to right) Livermore researchers Bassem El Dasher, Claudio Santiago, and Mackenzie Nelson discuss a 3D model of a proposed IFE power plant design alongside the Integrated Process Model (IPM). IPM has more than 270 potential user inputs that researchers and collaborators can use to assess different IFE design choices to see the technical and cost impact on the entire design.

Operational Demands

NIF’s target capsules are extremely precise, fragile, and can take weeks to fabricate, fill, and position. Researchers are trying to reconcile that factor with the estimated demand of more than 800,000 capsules per day produced at less than $0.50 each to achieve IFE plant viability. To do this, they are examining optimal target designs for IFE and exploring advanced manufacturing methods such as microfluidics, volumetric additive manufacturing, and two-photon polymerization. (See S&TR, April/May 2025 [archived PDF], pp. 16-19.) Additional projects involve developing diagnostic instruments that can collect, analyze, and combine data with other diagnostics at the 10 to 20 shot per second frequency and use it to improve lasers in real time.
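The capsule-demand estimate follows directly from the repetition rate; a quick check, assuming continuous around-the-clock operation at the lower end of the cited range:

```python
# Daily target demand implied by the repetition rate (continuous operation assumed).
shots_per_second = 10            # lower end of the 10-to-20-shots-per-second range
seconds_per_day = 24 * 60 * 60
capsules_per_day = shots_per_second * seconds_per_day
cost_per_capsule = 0.50          # viability cost ceiling cited in the article, dollars

print(capsules_per_day)                      # 864000, consistent with "more than 800,000"
print(capsules_per_day * cost_per_capsule)   # daily target budget in dollars
```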

Fusion energy systems such as IFE are also a regulatory challenge, as they generate high-energy neutrons capable of breeding plutonium or uranium-233 and rely on large quantities of tritium. “Pure fusion energy systems do not require fissile material, but there are still ways to misuse these technologies that pose proliferation risk,” says Yana Feldman, the associate program leader for international safeguards. Bad actors may only need small amounts of tritium to make nuclear weapons, and some breeding blanket designs may inadvertently produce traces of plutonium that may be diverted for military purposes.

Nuclear fission reactors are regulated through international agreements and export control rules, and the independent International Atomic Energy Agency (IAEA) verifies that nuclear material and facilities are only being used for peaceful purposes. Neither treaties nor the IAEA address fusion energy, and no consensus has been reached on whether fusion energy systems need an international verification program. Verification methods for safeguarding tritium are also far less developed than those for plutonium and uranium and focus more on contamination and transfers than on analytical accounting for discrepancies. The precise amount of tritium that could go unaccounted for without posing proliferation risk is also unclear.

Fusion systems can be designed for proliferation resistance, but the absence of an established plant design to build safeguards around remains a challenge.

International security analyst Anne-Marie Riitsaar and her colleagues are exploring these complexities and starting conversations with international fusion experts and private industry to raise awareness. Riitsaar also plans to collaborate with the IPM team to map tritium diversion vulnerabilities and identify high-risk points where researchers could incorporate surveillance methods into plant designs to detect and prevent potential misuse. “People sometimes ask me why I’m thinking about fusion energy regulations and proliferation risks at this point, but it’s not too early,” says Riitsaar. “Reaching a multinational consensus on regulating sensitive technologies takes considerable time and effort.”

The National Ignition Facility is an experimental facility and not a power plant, so a commercial IFE plant design has vastly different requirements—many of which are being studied by Livermore researchers and their collaborators.

| | NIF | Viable IFE plant (estimated) |
| --- | --- | --- |
| Repetition rate | One shot per day | 10 to 20 shots per second |
| Energy gain | 4.13 times (as of April 2025) | 30 times (minimum), 50 times to 100 times (ideal) |
| How lasers gain energy | Flashlamps | Diode pumping |
| Target fabrication and fuel filling | Fabricated offsite over several weeks and filled manually in 1 to 5 days | Mass-manufactured and filled in a target factory within the facility |
| Target delivery | Positioned manually within the Target Chamber | Shot into the plant’s target chamber approximately 10 to 20 times per second |
| Laser alignment | Computationally, taking up to 8 hours | In real time |
| Power cycle | Open, requiring outside energy sources | Closed, applying reused energy to power laser and ancillary plant operations |
| Fuel cycle (tritium) | Produced offsite | Bred onsite |

The Laser Driven Fusion Integration Research and Science Test Facility (LD-FIRST) is a proposed blueprint for a proof-of-concept IFE facility that would test all the key IFE subsystems in an integrated fashion. A public-private partnership will likely be necessary to build the facility and would help the IFE community retire the main risks and technological challenges of building a commercial plant.

Converging on a Solution

The team seeks to make IPM as accurate and comprehensive as possible by meeting with subject matter experts across the Laboratory to incorporate the latest research. “We’re trying to evolve the model so it has the same level of high detail across every single functional area to tell us where we can focus research and help us find optimized solutions that we could propose to industry,” says Nelson.

Computer scientist Claudio Santiago and his colleagues also modernized IPM by porting its framework from Microsoft Excel to Python in December 2024, making it compatible with AI, ML, design optimization, and HPC to further inform designs. “Once we think about all the forcing functions such as minimum shot yield and materials requirements pinning us in from every direction, we end up with an optimized solution space. As we sharpen the pencil more with these tools, that optimized solution box gets smaller until eventually we’ve converged on a point design,” says IFE lead systems engineer Justin Galbraith. Galbraith and his team’s point design is called the Laser Driven Fusion Integration Research and Science Test Facility, or LD-FIRST, a proof-of-concept physics demonstration facility for IFE. “That point design, we anticipate, will serve as the foundation for a future public-private partnership that would facilitate building and realizing a physical facility to focus the IFE community in pursuit of fusion power on the grid,” says Galbraith.

Livermore is leading the charge in IFE, helping the United States develop a technological roadmap, growing and coordinating science and technology efforts within the Laboratory, and fostering partnerships across the fusion industry, academia, and government.

Ma chaired DOE’s “Basic Research Needs for IFE” workshop and report in 2022 and co-chairs the subcommittee providing recommendations on the nation’s fusion activities through DOE’s Fusion Energy Sciences Advisory Committee. She and her team travel often to Washington, D.C., working with DOE and legislators to expand fusion energy research and advocacy in the nation. Livermore also leads a “Collaboratory” with other DOE national laboratories to connect research project leads and facilitate public-private partnerships. The Collaboratory has hosted multiple events with industry, and the Laboratory has partnered with three private companies who aim to design pilot IFE plants.

Meanwhile, Galbraith and other IFE leaders have served as technical advisors for engineering design teams at Texas A&M University and given them IFE-relevant problems to solve, including advanced chamber and blanket design. Galbraith is working with Nelson to develop the IFE plant design portion of a high-energy-density science summer school program, which Nelson is leading in 2025 at the University of California at San Diego, and they have developed IFE curriculum that has been deployed at six universities starting in spring 2025. “We’re hoping we can get a group of students really excited about fusion and start to build up the next generation of engineers and scientists that will make fusion a reality,” says Galbraith. The team has led IFE strategic planning exercises at the Laboratory, and Lawrence Livermore will stand up a new fusion institute—named “LIFT,” for Livermore Institute for Fusion Technology—a research and development center that will coordinate and centralize institutional fusion energy research.

Harnessing IFE will be a massive undertaking, but Livermore’s broad and deep expertise, facilities, and capabilities put the Laboratory in a unique position to lead and play an impactful role. “If we can set it up correctly, IFE will be a big piece of the Laboratory’s long-term vision,” says Ma. “IFE plays off of our history and all of our strengths, and it is critical for long-term national security.”

Monomania and the West

There have been all kinds of “voices” in the history of Western civilization. Perhaps the loudest voice is that of the monomaniacs, who always claim that behind the appearance of the many is the one. If we look at the West at its roots, the intersection of Athens and Jerusalem, we see the origins of this monomania. Plato’s realm of ideas was supposed to explain everything encountered in our daily lives. His main student and rival, Aristotle, offered his own competing explanation, based in biology instead of mathematics.

These monomanias have their modern counterpart in ideologies. In communism, the key to everything is class and the resulting class struggle. Nazism revolves around race and racial conflict.

In our own era, the era of scientism, we have the idea of God replaced by Stephen Hawking’s “mind of god,” Leon Lederman’s The God Particle, and KAKU Michio’s The God Equation. In the 2009 film Angels & Demons, a senior Vatican official, played by Ewan McGregor, is absolutely outraged by the blasphemous phrase “the god particle.”

Currently, the monomania impetus continues full-force. For example, Professor Seth Lloyd of MIT tells us that reality is the cosmos and not chaos, because all of reality together is a computer. His MIT colleague, Max Tegmark, argues in his books that the world is not explained by mathematics, but rather is mathematics. Perhaps the climax of this kind of thinking is given to us by the essay “Everything Is Computation” by Joscha Bach:

These days we see a tremendous number of significant scientific news stories, and it’s hard to say which has the highest significance. Climate models indicate that we are past crucial tipping points and irrevocably headed for a new, difficult age for our civilization. Mark van Raamsdonk expands on the work of Brian Swingle and Juan Maldacena and demonstrates how we can abolish the idea of spacetime in favor of a discrete tensor network, thus opening the way for a unified theory of physics. Bruce Conklin, George Church, and others have given us CRISPR/Cas9, a technology that holds promise for simple and ubiquitous gene editing. “Deep learning” starts to tell us how hierarchies of interconnected feature detectors can autonomously form a model of the world, learn to solve problems, and recognize speech, images, and video.

It is perhaps equally important to notice where we lack progress: Sociology fails to teach us how societies work; philosophy seems to have become infertile; the economic sciences seem ill-equipped to inform our economic and fiscal policies; psychology does not encompass the logic of our psyche; and neuroscience tells us where things happen in the brain but largely not what they are.

In my view, the 20th century’s most important addition to understanding the world is not positivist science, computer technology, spaceflight, or the foundational theories of physics.

It is the notion of computation. Computation, at its core, and as informally described as possible, is simple: Every observation yields a set of discernible differences.

These we call information. If the observation corresponds to a system that can change its state, we can describe those state changes. If we identify regularity in those state changes, we are looking at a computational system. If the regularity is completely described, we call this system an algorithm. Once a system can perform conditional state transitions and revisit earlier states, it becomes almost impossible to stop it from performing arbitrary computation. In the infinite case—that is, if we allow it to make an unbounded number of state transitions and use unbounded storage for the states—it becomes a Turing machine, or a Lambda calculus, or a Post machine, or one of the many other mutually equivalent formalisms that capture universal computation.

Computational terms rephrase the idea of “causality,” something that philosophers have struggled with for centuries. Causality is the transition from one state in a computational system to the next. They also replace the concept of “mechanism” in mechanistic, or naturalistic, philosophy. Computationalism is the new mechanism, and unlike its predecessor, it is not fraught with misleading intuitions of moving parts.

Computation is different from mathematics. Mathematics turns out to be the domain of formal languages and is mostly undecidable, which is just another word for saying “uncomputable” (since decision making and proving are alternative words for computation, too). All our explorations into mathematics are computational ones, though. To compute means to actually do all the work, to move from one state to the next.

Computation changes our idea of knowledge: Instead of justified true belief, knowledge describes a local minimum in capturing regularities between observables. Knowledge is almost never static but progresses on a gradient through a state space of possible worldviews. We will no longer aspire to teach our children the truth, because, like us, they will never stop changing their minds. We will teach them how to productively change their minds, how to explore the never-ending land of insight.

A growing number of physicists understands that the universe is not mathematical but computational, and physics is in the business of finding an algorithm that can reproduce our observations. The switch from uncomputable mathematical notions (such as continuous space) makes progress possible. Climate science, molecular genetics, and AI are computational sciences. Sociology, psychology, and neuroscience are not: They still seem confused by the apparent dichotomy between mechanism (rigid moving parts) and the objects of their study. They are looking for social, behavioral, chemical, neural regularities, where they should be looking for computational ones.

Everything is computation.

Know This: Today’s Most Interesting and Important Scientific Ideas, Discoveries, and Developments, John Brockman (editor), Harper Perennial, 2017, pages 228-230.
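Bach’s informal definitions (a state, describable state changes, a completely described regularity as an algorithm, conditional transitions) can be made concrete with a minimal sketch. The Collatz rule below is our illustrative choice, not an example from the essay:

```python
# A minimal computational system in Bach's sense (illustrative example):
# a state, a rule for conditional state transitions, and iteration of that rule.
def step(n):
    """One conditional state transition (the Collatz rule)."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def run(state, halt_state=1):
    """Iterate the completely described regularity: an algorithm."""
    trajectory = [state]
    while state != halt_state:
        state = step(state)
        trajectory.append(state)
    return trajectory

print(run(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```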

Friedrich Nietzsche rebelled against this type of thinking most profoundly. If scientism represents the modern, then Nietzsche was the prophet of postmodernism. Nietzsche’s famous phrase, “God is dead,” is not about a creator or divinity, but rather about finality itself. There is no final explanation.

Problems of Perspective, Michel Foucault

Michel Foucault was one of the leading French philosophers of the 20th century. Often considered a postmodernist, he did not believe there was a final perspective that human knowledge could achieve. This immediately contrasts with the outlook of leading physicists like Stephen Hawking. In his 1988 classic, A Brief History of Time, Hawking concludes by saying that once science has achieved a theory of everything, which he believed was not far off, we will “know the mind of god.”

In his 1966 key work, The Order of Things: An Archaeology of the Human Sciences (French: Les Mots et les Choses: Une archéologie des sciences humaines), Foucault argued that the so-called order of things is invented, not discovered, by us. This is contrary to scientific thought.

Foucault sets up this limit in his surprising interpretation of Diego Velázquez’s masterpiece painting, Las Meninas (Spanish for The Ladies-in-Waiting). The painting is deliberately elusive in its use of perspective.

The great German thinker, Jürgen Habermas, explained this Foucault/Velázquez perspective difficulty:

This picture portrays the painter in front of a canvas not visible to the spectator; the painter is evidently looking, as are the two ladies-in-waiting next to him, in the direction of his two models, King Philip IV and his spouse. These two personages standing as models are found outside the frame of the picture; they can be identified by the spectator only with the help of a mirror pictured in the background. The point that Velázquez apparently had in mind is a confusing circumstance of which the spectator becomes aware by inference: The spectator cannot avoid assuming the place and the direction of the gaze of the counterfeit but absent royal pair — toward which the painter captured in the picture gazes — as well as the place and the perspective of Velázquez himself, which is to say, of the painter who actually produced this picture. For Foucault, in turn, the real point lies in the fact that the classical picture frame is too limited to permit the representation of the act of representing as such — it is this that Velázquez makes clear by showing the gaps within the classical picture frame left by the lack of reflection on the process of representing itself.29

29. Foucault constructs two different series of absences. On the one hand, the painter in the picture lacks his model, the royal couple standing outside the frame of the picture; the latter are in turn unable to see the picture of themselves that is being painted — they only see the canvas from behind; finally, the spectator is missing the center of the scene, that is, the couple standing as models, to which the gaze of the painter and of the courtesans merely directs us. Still more revealing than the absence of the objects being represented is, on the other hand, that of the subjects doing the representing, which is to say, the triple absence of the painter, the model, and the spectator who, located in front of the picture, takes in perspectives of the two others. The painter, Velázquez, actually enters into the picture, but he is not presented exactly in the act of painting — one sees him during a pause and realizes that he will disappear behind the canvas as soon as he takes up his labors again. The faces of the two models can actually be recognized unclearly in a mirror reflection, but they are not to be observed directly during the act of their portrayal. Finally, the act of the spectator is equally unrepresented — the spectator depicted entering into the picture from the right cannot take over this function. (See Foucault, The Order of Things, pp. 3-16, 307-311.)

Critique and Power: Recasting the Foucault/Habermas Debate, Michael Kelly, editor, MIT Press, 1994, pages 67, 77 [archived PDF].

Let us conclude by saying that one way of specifying the disagreement between scientists and these thinkers is that the sciences see themselves as “objective” while the thinkers feel science lacks objectivity because of the human observer. Kant, centuries ago, argued that concepts like causality, space, and time are imposed by the human mind on the world. Similarly, Heisenberg, in Physics and Philosophy: The Revolution in Modern Science, said that science does not finally answer questions about an objective reality but can only answer questions posed by us.

World-Watching: Science First Release, 10 July 2025

[from Science]

Accepted papers posted online prior to journal publication.

NASA Earth Science Division provides key data

by Dylan B. Millet, Belay B. Demoz, et al.

In May, the US administration proposed budget cuts to NASA, including a more than 50% decrease in funding for the agency’s Earth Science Division (ESD), the mission of which is to gather knowledge about Earth through space-based observation and other tools. The budget cuts proposed for ESD would cancel crucial satellites that observe Earth and its atmosphere, gut US science and engineering expertise, and potentially lead to the closure of NASA research centers. As former members of the recently dissolved NASA Earth Science Advisory Committee, an all-volunteer, independent body chartered to advise ESD, we warn that these actions would come at a profound cost to US society and scientific leadership.

[read more]

Spin-filter tunneling detection of antiferromagnetic resonance with electrically tunable damping

by Thow Min Jerald Cham, Daniel G. Chica, et al.

Antiferromagnetic spintronics offers the potential for higher-frequency operations and improved insensitivity to magnetic fields compared to ferromagnetic spintronics. However, previous electrical techniques to detect antiferromagnetic dynamics have utilized large, millimeter-scale bulk crystals. Here we demonstrate direct electrical detection of antiferromagnetic resonance in structures on the few-micrometer scale using spin-filter tunneling in PtTe2/bilayer CrSBr/graphite junctions in which the tunnel barrier is the van der Waals antiferromagnet CrSBr. This sample geometry allows not only efficient detection, but also electrical control of the antiferromagnetic resonance through spin-orbit torque from the PtTe2 electrode. The ability to efficiently detect and control antiferromagnetic resonance enables detailed studies of the physics governing these high-frequency dynamics.

[read more]

Scalable emulation of protein equilibrium ensembles with generative deep learning

by Sarah Lewis, Tim Hempel, et al.

Following the sequence and structure revolutions, predicting functionally relevant protein structure changes at scale remains an outstanding challenge. We introduce BioEmu, a deep learning system that emulates protein equilibrium ensembles by generating thousands of statistically independent structures per hour on a single GPU. BioEmu integrates over 200 milliseconds of molecular dynamics (MD) simulations, static structures and experimental protein stabilities using novel training algorithms. It captures diverse functional motions—including cryptic pocket formation, local unfolding, and domain rearrangements—and predicts relative free energies with 1 kcal/mol accuracy compared to millisecond-scale MD and experimental data. BioEmu provides mechanistic insights by jointly modeling structural ensembles and thermodynamic properties. This approach amortizes the cost of MD and experimental data generation, demonstrating a scalable path toward understanding and designing protein function.

[read more]

Negative capacitance overcomes Schottky-gate limits in GaN high-electron-mobility transistors

by Asir Intisar Khan, Jeong-Kyu Kim, et al.

For high-electron-mobility transistors based on two-dimensional electron gas (2DEG) within a quantum well, such as those based on AlGaN/GaN heterostructure, a Schottky-gate is used to maximize the amount of charge that can be induced and thereby the current that can be achieved. However, the Schottky-gate also leads to very high leakage current through the gate electrode. Adding a conventional dielectric layer between the nitride layers and gate metal can reduce leakage; but this comes at the price of a reduced drain current. Here, we used a ferroic HfO2-ZrO2 bilayer as the gate dielectric and achieved a simultaneous increase in the ON current and decrease in the leakage current, a combination otherwise not attainable with conventional dielectrics. This approach surpasses the conventional limits of Schottky GaN transistors and provides a new pathway to improve performance in transistors based on 2DEG.

[read more]

Heidegger vs. Marx as World Watchers

Marx (1818-1883) implies that the foundation of human reality is econo-technical, and on that basis society creates thoughts and philosophies, art and poems. This explanation seems appealing when we think of the economic development of China in our time, for example, or the rise of computers and software.

In a way, Heidegger (1889-1976) turns this upside down: at the basis of world history is culture, which produces society. You can make a simple “cartoon” and say that for Marx, economics shapes everything, while for Heidegger culture replaces economics.

For example, in his book What Is Called Thinking? (English translation, 1968, Harper & Row), Heidegger argues that the foundation of all Western thinking and culture comes from axioms such as logos [Ancient Greek: λόγος] (from which we have logic, cosmology, psychology, epistemology, etc.), as well as legein (the Greek verb λέγειν, “to speak”).

Heidegger states (on page 204), “Without the λέγειν of that logic, modern man would have to make do without his automobile. There would be no airplanes, no turbines, no Atomic Energy Commission.”

Our MI comment on this is that any monocausal explanation of how mankind went from Neanderthal to the Manhattan skyline is completely inadequate. You must create a “double-helix” of Marx and Heidegger, adding the dimensions of surprise and unintended consequences. Without the physics concepts of emergence and complexity, we have no possibility of understanding how we got to now. In the site tagline, we use the word “composite” as a reference to this kind of deeper understanding.

Speculative Science: The Reality beyond Spacetime, with Donald Hoffman

[from The Institute of Art and Ideas Science Weekly, July 22]

Donald Hoffman famously argues that we know nothing about the truth of the world. His book, The Case Against Reality, claims the process of survival of the fittest does not require a true picture of reality. Furthermore, Hoffman claims spacetime is not fundamental. So, what lies beneath spacetime, and can we know about it? And how does consciousness come into play? Join this interview with the famed cognitive psychologist and author exploring our notions of consciousness, spacetime, and what lies beneath. Hosted by Curt Jaimungal.

[watch the video]

COVID-19 and “Naïve Probabilism”

[from the London Mathematical Laboratory]

In the early weeks of the 2020 U.S. COVID-19 outbreak, guidance from the scientific establishment and government agencies included a number of dubious claims—masks don’t work, there’s no evidence of human-to-human transmission, and the risk to the public is low. These statements were backed by health authorities as well as public intellectuals, but were later disavowed or disproven, and the initial under-reaction was followed by an equally extreme overreaction and the imposition of draconian restrictions on human social activities.

In a recent paper, LML Fellow Harry Crane examines how these early missteps ultimately contributed to higher death tolls, prolonged lockdowns, and diminished trust in science and government leadership. Even so, the organizations and individuals most responsible for misleading the public suffered little or no consequences, or even benefited from their mistakes. As he discusses, this perverse outcome can be seen as the result of authorities applying a formulaic procedure of “naïve probabilism” to highly uncertain and complex problems, largely assuming that decision-making under uncertainty boils down to probability calculations and statistical analysis.

This attitude, he suggests, might be captured in a few simple “axioms of naïve probabilism”:

Axiom 1: The more complex the problem, the more complicated the solution.

This idea is a hallmark of naïve decision making. The COVID-19 outbreak was highly complex, being a novel virus of uncertain origins spreading through an interconnected global society. But the potential usefulness of masks was not one of these complexities. The mask mistake was consequential not because masks were the antidote to COVID-19, but because they were a low-cost measure whose effect would be neutral at worst; wearing a mask can’t hurt in reducing the spread of a virus.

Yet the experts neglected common sense in favor of a more “scientific response” based on rigorous peer review and sufficient data. Two months after the initial U.S. outbreak, a study confirmed the obvious, and masks went from being strongly discouraged to being mandated by law. Precious time had been wasted, many lives lost, and the economy stalled.

Crane also considers another rule of naïve probabilism:

Axiom 2: Until proven otherwise, assume that the future will resemble the past.

In the COVID-19 pandemic, of course, there was at first no data that masks work, no data that travel restrictions work, no data of human-to-human transmission. How could there be? Yet some naïve experts took this as a reason to maintain the status quo. Indeed, many universities refused to do anything in preparation until a few cases had been detected on campus—at which point they had some data, as well as hundreds or thousands of other as yet undetected infections.

Crane touches on some of the more extreme examples of this kind of thinking, which assumes that whatever can’t be explained in terms of something that happened in the past is speculative, non-scientific, and unjustifiable:

“This argument was put forward by John Ioannidis in mid-March 2020, as the pandemic outbreak was already spiralling out of control. Ioannidis wrote that COVID-19 wasn’t a ‘once-in-a-century pandemic,’ as many were saying, but rather a ‘once-in-a-century data-fiasco’. Ioannidis’s main argument was that we knew very little about the disease, its fatality rate, and the overall risks it poses to public health; and that in face of this uncertainty, we should seek data-driven policy decisions. Until the data was available, we should assume COVID-19 acts as a typical strain of the flu (a different disease entirely).”

Unfortunately, waiting for the data also means waiting too long if the virus turns out to be more serious. This is like waiting to hit the tree before accepting that the available data indeed supports wearing a seatbelt. Moreover, in the pandemic example, this “lack of evidence” argument ignores other evidence from before the virus entered the United States. China had locked down a city of 10 million; Italy had locked down its entire northern region, with the entire country soon to follow. There was worldwide consensus that the virus was novel, that it was spreading fast, and that medical communities had no idea how to treat it. That’s data, and plenty of information to act on.

Crane goes on to consider a third axiom of naïve probabilism, which aims to turn ignorance into a strength. Overall, he argues, these axioms, despite being widely used by many prominent authorities and academic experts, actually capture a set of dangerous fallacies for action in the real world.

In reality, complex problems call for simple, actionable solutions; the past doesn’t repeat indefinitely (i.e., COVID-19 was never the flu); and ignorance is not a form of wisdom. The Naïve Probabilist’s primary objective is to be accurate with high probability rather than to protect against high-consequence, low-probability outcomes. This goes against common sense principles of decision making in uncertain environments with potentially very severe consequences.
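Crane’s point about protecting against high-consequence, low-probability outcomes can be made concrete with a toy expected-loss calculation. The numbers below are purely illustrative assumptions, not figures from the paper; the sketch only shows why a cheap precaution can dominate inaction even before considering worst cases:

```python
# Hypothetical, illustrative numbers (not from Crane's paper):
# a low-cost precaution versus a low-probability, high-consequence loss.
p_catastrophe = 0.01       # assumed probability of the bad outcome
cost_catastrophe = 1000.0  # assumed loss if it happens
cost_precaution = 5.0      # assumed cost of the cheap measure (e.g., masks)

# Expected losses under each policy.
expected_loss_no_action = p_catastrophe * cost_catastrophe  # 10.0
expected_loss_precaution = cost_precaution                  # 5.0

# The precaution also caps the worst case at 5.0 instead of 1000.0,
# which is the tail-risk argument, independent of the expected values.
print(expected_loss_no_action, expected_loss_precaution)
```

With these assumed numbers the precaution wins on expected value alone, and its real advantage, bounding the worst case, holds even when the expected values are close.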

Importantly, Crane emphasizes, the hallmark of Naïve Probabilism is naïveté, not ignorance, stupidity, crudeness, or other such base qualities. The typical Naïve Probabilist lacks not knowledge or refinement, but the experience and good judgment that come from making real decisions with real consequences in the real world. The most prominent naïve probabilists are recognized (academic) experts in mathematical probability, or relatedly statistics, physics, psychology, economics, epistemology, medicine, or the so-called decision sciences. Moreover, and worryingly, the best-known naïve probabilists are quite sophisticated, skilled in the art of influencing public policy decisions without suffering from the risks those policies impose on the rest of society.

Read the paper. [Archived PDF]

Education and “Intuition Pumps”

Professor Daniel Dennett of Tufts uses the term “intuition pumps” in discussing intuitive understanding and how it can be tweaked.

Let’s do a simple example, avoiding as always “rocket science,” where the intricacies weigh you down in advance. We make a U-turn and go back by choice to elementary notions and examples.

Think of the basic statistics curve. It’s called the Bell Curve, the Gaussian, the Normal Curve.

The first name is sort of intuitive based on appearance, unless of course the curve is shifted or squeezed, in which case it’s less obvious. The second name must be based on either the discoverer or the “name-giver,” or both, if they are the same person. The third is a bit vague.

Already one’s intuitions and hunches are not fool-proof.

The formula for the Bell Curve is:

\begin{equation} y = \frac{1}{\sqrt{2\pi}}e^{\frac{-x^2}{2}} \end{equation}

We immediately see the two key constants: π (pi) and e. These are approximately 22/7 (more precisely, 3.14159…) and 2.71828… (the base of natural logarithms).

The first captures something about circularity, the second continuous growth as in continuous compounding of interest.

You would not necessarily anticipate seeing these two “irrational numbers” (they “go on” forever) in a statistics graph. Does that mean your intuition is poor or untutored or does it mean that “mathworld” is surprising?
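The surprise can be checked numerically. A minimal Python sketch of the formula above, showing that both π and e really do sit inside the statistics curve and that the total area under it is 1:

```python
import math

def bell_curve(x):
    """Standard normal density: pi and e appear together in one formula."""
    return (1.0 / math.sqrt(2.0 * math.pi)) * math.exp(-x**2 / 2.0)

# The peak at x = 0 is 1/sqrt(2*pi), roughly 0.3989.
peak = bell_curve(0.0)

# A crude Riemann sum over [-8, 8]: the area under the curve is ~1,
# as a probability density must be.
dx = 0.001
area = sum(bell_curve(-8.0 + i * dx) * dx for i in range(int(16.0 / dx)))

print(round(peak, 4), round(area, 4))
```

The point of the sketch is only that the two “irrational” constants are not decoration: remove either one and the curve no longer integrates to 1.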

It’s far from obvious.

For openers, why should π (pi) be everywhere in math and physics?

Remember Euler’s identity:

\begin{equation} e^{i\pi} + 1 = 0 \end{equation}

That the two key integers (1 and 0) should relate to π (pi), e, and i (the square root of −1) is completely unexpected and exotic.
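The identity can even be verified mechanically, up to floating-point rounding, with Python’s standard complex-math module; a minimal sketch:

```python
import cmath

# Euler's identity: e^(i*pi) + 1 = 0.
# In floating point the result is not exactly zero, but its magnitude
# is on the order of 1e-16, i.e., machine precision.
z = cmath.exp(1j * cmath.pi) + 1
print(abs(z))
```

The residual is pure rounding error; the exact statement ties together e, i, π, 1, and 0 in one line.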

Our relationship to “mathworld” is quite enigmatic and this raises the question whether Professor Max Tegmark of MIT who proposes to explain “ultimate reality” through the “math fabric” of all reality might be combining undoubted brilliance with quixotism. We don’t know.

Education and Finality Claims

Stephen Hawking kept saying he wanted to discover the ultimate world-equation. This would be the final “triumph of the rational human mind.”

This would presumably imply that if one had such a world-equation, one could infer or deduce all the formalisms in a university physics book with its thousand pages of equations, puzzles and conundrums, footnotes and names and dates.

While hypothetically imaginable, this seems very unlikely because too many phenomena are included, too many topics, too many rules and laws.

There’s another deep problem with such Hawking-type “final equation” quests. Think of the fact that a Henri Poincaré (died in 1912) suddenly appears and writes hundreds of excellent science papers. Think of Paul Erdős (died in 1996) and his hundreds of number theory papers. Since the appearance of such geniuses and powerhouses is not knowable in advance, the production of new knowledge is unpredictable and would “overwhelm” any world-equation formulated before that new knowledge existed.

Furthermore, if the universe is mathematical, as MIT’s Professor Max Tegmark claims, then a Hawking-type “world-equation” would have to cover all of mathematics; otherwise, parts of Tegmark’s universe would be “unaccounted for.”

In other words, history and historical experience cast doubt on the Stephen Hawking “finality” project. It’s not just that parts of physics don’t fit together (general relativity and quantum mechanics, gravity and the other three fundamental forces). Finality would also imply that there could be no new Stephen Hawking who would refute the world-equation as it stands at a certain point in time. In other words, if you accept, as scientists like Freeman Dyson claim, that the universe is a “vast evolutionary” process, then the mathematical thinking about it is also evolving or co-evolving, and there’s no end.

There are no final works in poetry, novels, jokes, language, movies or songs and there’s perhaps also no end to science.

Thus a Hawking-type quest for the final world-equation seems enchanting but quixotic.

Meaningfulness versus Informativeness

The Decoding Reality book is a classic contemporary analysis of the foundations of physics and the implications for the human world. The scientists don’t see that physics and science are the infrastructure on which the human “quest for meaning” takes place. Ortega (Ortega y Gasset, died in 1955) tells us that a person is “a point of view directed at the universe.” This level of meaning cannot be reduced to bits or qubits or electrons since man is a “linguistic creature” who invents fictional stories to explain “things” that are not things.

The following dialog between Paul Davies (the outstanding science writer) and Vlatko Vedral (the distinguished physicist) gropes along on these issues: the difference between science as one kind of story and the human interpretation of life and self, expressed in “tales” and parables, fictions and beliefs:

Davies: “When humans communicate, a certain quantity of information passes between them. But that information differs from the bits (or qubits) physicists normally consider, inasmuch as it possesses meaning. We may be able to quantify the information exchanged, but meaning is a qualitative property—a value—and therefore hard, maybe impossible, to capture mathematically. Nevertheless the concept of meaning obviously has, well… meaning. Will we ever have a credible physical theory of ‘meaningful information,’ or is ‘meaning’ simply outside the scope of physical science?”

Vedral: “This is a really difficult one. The success of Shannon’s formulation of ‘information’ lies precisely in the fact that he stripped it of all “meaning” and reduced it only to the notion of probability. Once we are able to estimate the probability for something to occur, we can immediately talk about its information content. But this sole dependence on probability could also be thought of as the main limitation of Shannon’s information theory (as you imply in your question). One could, for instance, argue that the DNA has the same information content inside as well as outside of a biological cell. However, it is really only when it has access to the cell’s machinery that it starts to serve its main biological purpose (i.e., it starts to make sense). Expressing this in your own words, the DNA has a meaning only within the context of a biological cell. The meaning of meaning is therefore obviously important. Though there has been some work on the theory of meaning, I have not really seen anything convincing yet. Intuitively we need some kind of a ‘relative information’ concept, information that is not only dependent on the probability, but also on its context, but I am afraid that we still do not have this.”
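Shannon’s probability-only notion of information, which Vedral describes above, can be made concrete in a few lines. The sketch below is a standard textbook formulation, not anything from the Davies–Vedral exchange itself:

```python
import math

def surprisal(p):
    """Shannon information content (in bits) of an event with probability p."""
    return -math.log2(p)

def entropy(dist):
    """Average surprisal of a distribution whose probabilities sum to 1."""
    return sum(p * surprisal(p) for p in dist if p > 0)

# A fair coin toss carries exactly 1 bit; a biased coin carries less,
# because its outcome is more predictable.
print(surprisal(0.5))         # 1.0
print(entropy([0.5, 0.5]))    # 1.0 bit per toss
print(entropy([0.9, 0.1]))    # roughly 0.469 bits per toss

# A uniformly random DNA base (A, C, G, T) carries 2 bits -- the sense in
# which DNA is a "four-letter digital code."
print(entropy([0.25] * 4))    # 2.0 bits per base
```

Note what the sketch does not contain: any notion of context or meaning. The same DNA string scores identically inside or outside a cell, which is exactly the limitation Vedral points to.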

For a physicist, all the world is information. The universe and its workings are the ebb and flow of information. We are all transient patterns of information, passing on the recipe for our basic forms to future generations using a four-letter digital code called DNA.

See Decoding Reality.

In this engaging and mind-stretching account, Vlatko Vedral considers some of the deepest questions about the universe and the implications of interpreting it in terms of information. He explains the nature of information, the idea of entropy, and the roots of this thinking in thermodynamics. He describes the bizarre effects of quantum behavior—effects such as “entanglement,” which Einstein called “spooky action at a distance”—and explores cutting-edge work on harnessing quantum effects in hyper-fast quantum computers, and how recent evidence suggests that the weirdness of the quantum world, once thought limited to the tiniest scales, may reach into the macro world.

Vedral finishes by considering the answer to the ultimate question: Where did all of the information in the universe come from? The answers he considers are exhilarating, drawing upon the work of distinguished physicist John Wheeler. The ideas challenge our concept of the nature of particles, of time, of determinism, and of reality itself.

Science is an “ontic” quest. Human life is an “ontological” quest. They are a “twisted pair” where each strand must be seen clearly and not confused. The content of your telephone conversation with your friend, say, is not reducible to the workings of a phone or the subtle electrical engineering and physics involved. A musical symphony is not just “an acoustical blast.”

The “meaning of meaning” is evocative and not logically expressible. There’s a kind of “spooky action at a distance” between these two levels, meaning and information, but they remain different “realms” or “domains.”