Science-Watching: From Ignition to Energy

[from Science & Technology Review July/August 2025 Research Highlights, by Noah Pflueger-Peters]

Achieving ignition at the National Ignition Facility (NIF) proved that harnessing the power of the Sun in a laboratory may be possible. The Sun’s extreme temperatures and pressures cause light elements to fuse together to create heavier ones, releasing enormous energy and sustaining conditions for more thermonuclear reactions. NIF replicates these conditions with inertial confinement fusion, in which lasers compress and heat a target capsule filled with deuterium and tritium (DT), “heavy” isotopes of hydrogen that contain extra neutrons. When the isotopes fuse, they create helium and a neutron, and the lost mass is converted into energy; harnessing that release for power production is the goal of inertial fusion energy (IFE).
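
The underlying reaction and its energy bookkeeping are standard physics, not NIF-specific measurements; a brief reference sketch:

    % Deuterium-tritium fusion: the reaction driven in NIF's target capsules.
    \[
      {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\longrightarrow\;
      {}^{4}\mathrm{He}\,(3.5~\mathrm{MeV}) + n\,(14.1~\mathrm{MeV}),
      \qquad Q = 17.6~\mathrm{MeV}
    \]
    % The energy comes from the mass defect via E = \Delta m\, c^2:
    % about 0.4% of the fuel mass (\Delta m \approx 0.019~u per reaction)
    % becomes energy, roughly 340 TJ per kilogram of DT fuel, which is why
    % fusion outstrips fission and combustion per unit of fuel.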

Nuclear fusion produces significantly more energy than either nuclear fission or burning fossil fuels for equivalent amounts of fuel. Since the input materials for fusion energy are plentiful on Earth, an IFE power plant could produce safe, abundant, power grid-compatible energy without highly radioactive byproducts.

Although significant work remains to harness fusion energy, pursuing the development and deployment of IFE is crucial for the nation’s energy security, enabling the United States to shape implementation worldwide, avoid technological surprises from adversaries, and maintain technical leadership in other energy-intensive technologies such as AI, machine learning (ML), and supercomputing.

IFE research stretches back to the early days of Lawrence Livermore, and today the Laboratory is fostering the overall fusion ecosystem. Livermore’s unique capabilities, expertise, and connections will be critical to laying the technical, logistical, and legal groundwork to make IFE possible. “IFE is a grand scientific and engineering challenge, something that is so incredibly difficult and high-risk and takes enormous expertise,” says Tammy Ma, Livermore’s IFE Institutional Initiative lead. “This challenge makes it the right kind of problem for national laboratories to pursue.”

This artist’s rendering shows the concept for an inertial fusion energy (IFE) power plant design, with a cutaway to show the plant’s target chamber in the center. Livermore researchers are laying the groundwork for private fusion companies to build similar designs. (Illustration by Eric Smith.)

Designing for Viability

NIF is the only facility to date to demonstrate the ignition and burning plasma conditions that are prerequisites for IFE, but it is an experimental facility for stockpile stewardship research, not a power plant. To be commercially viable and produce the energy to offset costs and meet demands (baseload power), IFE plants will need to generate more than 30 times the energy they deliver to the fusion target on every shot while firing 10 or more shots per second, compared to NIF’s rate of one or two shots per day.
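
To make those requirements concrete, a rough back-of-envelope estimate follows; the 2 MJ of laser energy on target is an assumed, NIF-scale figure, not a quoted plant specification:

    # Rough sketch of the gross fusion power implied by the article's gain
    # and repetition-rate targets. The 2 MJ driver energy is an assumed,
    # NIF-scale figure, not a quoted plant specification.
    laser_energy_mj = 2.0   # laser energy delivered to target per shot (assumed)
    gain = 30               # minimum viable target gain per the article
    rep_rate_hz = 10        # shots per second (low end of the 10-20 range)

    fusion_power_mw = laser_energy_mj * gain * rep_rate_hz  # MJ/s == MW
    print(f"Gross fusion power: {fusion_power_mw:.0f} MW thermal")  # 600 MW

    # NIF, firing roughly once per day at its reported ~8.6 MJ yield
    # (April 2025), averages a tiny fraction of that:
    nif_avg_kw = 8.6 / 86_400 * 1_000  # MJ per day -> average kW
    print(f"NIF average yield rate: {nif_avg_kw:.1f} kW")  # ~0.1 kW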

The Laser Inertial Fusion Energy (LIFE) study, conducted between 2008 and 2013, aimed to build directly on technology developed for NIF to achieve IFE. The study took a systematic approach to this requirement by developing the Integrated Process Model (IPM). (See S&TR, April/May 2009 [archived PDF], pp. 6-15.)

IPM is a technoeconomic model of an IFE power plant with detailed technical and cost breakdowns and interdependencies of key systems and subsystems. “The work done under LIFE was fantastic,” says Ma. “IPM lays out engineering and physics requirements for the entire system to test out different scenarios and see the impact. Now, we not only get to expand on all that but also leverage 15 years of new data from NIF, better codes, and high-performance computing (HPC), as well as new work in AI, ML, advanced manufacturing, diagnostics, and nonproliferation across the Laboratory.”

IPM describes an IFE power plant that requires a solid-state laser driver system to “pump” lasers with optical energy using laser diodes instead of flashlamps as at NIF. The plant will also need to fabricate and fill target capsules onsite and send them into its target chamber at a high enough frequency to produce baseload power. “We will have to repeatedly inject targets into the chamber, so the targets must be able to withstand and survive that process,” explains Ma. “Then, the lasers will track the moving targets, and when one gets to the center of the chamber, they will fire on the centered target, repeating 10 to 20 times per second.”

The facility would convert fusion energy into heat and then electricity via steam turbines, sending most of the electricity to the power grid and recycling the rest to power operations on subsequent shots. Neutrons from the reaction would produce tritium needed for the DT fuel by bombarding lithium isotopes in a “breeding blanket” material lining its target chamber. By closing both the power and fuel cycles, IFE plants are expected to be self-sustaining.
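
The lithium-6 breeding channel the blanket relies on is well established (lithium-7 also contributes through a separate, endothermic reaction):

    % Tritium breeding in the blanket: each fusion neutron can regenerate
    % the tritium consumed, closing the fuel cycle if the blanket's
    % tritium breeding ratio exceeds 1.
    \[
      n + {}^{6}\mathrm{Li} \;\longrightarrow\;
      {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8~\mathrm{MeV}
    \]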

Thanks in part to IFE STARFIRE (IFE Science and Technology Accelerated Research for Fusion Innovation and Reactor Engineering), a Department of Energy (DOE)-funded multi-institutional IFE research and development hub, researchers across the Laboratory are working to meet the new system’s demands. IPM can help identify key challenges, test the viability of new designs, and direct future research. “Many technical models and cost models exist for IFE, but very few, if any, pair systems and cost models together at the same depth as IPM,” says Mackenzie Nelson, a technoeconomic systems analyst in the Computational Engineering Division. “This type of tool offers such an advantage because we can assess design choices from both a technical and economic standpoint and create blueprints for what an IFE plant could look like.”
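
IPM itself is not public, but the pairing Nelson describes can be illustrated with a minimal toy sketch. All function names, efficiencies, and costs below are hypothetical placeholders, chosen only to show how technical inputs flow through to an economic metric:

    # Toy sketch (not IPM): couple a systems model to a cost model so a
    # design choice propagates to an economic metric. All values are
    # hypothetical placeholders.

    def net_electric_mw(laser_mj, gain, rep_hz,
                        thermal_eff=0.40, wall_plug_eff=0.18):
        """Net electric power after recirculating energy to run the laser."""
        fusion_mw = laser_mj * gain * rep_hz                 # gross thermal power
        gross_electric_mw = fusion_mw * thermal_eff          # steam-turbine output
        recirculated_mw = laser_mj * rep_hz / wall_plug_eff  # laser's own draw
        return gross_electric_mw - recirculated_mw

    def simple_lcoe(capital_usd, annual_opex_usd, net_mw,
                    capacity_factor=0.85, lifetime_years=30):
        """Crude levelized cost of electricity in $/MWh (no discounting)."""
        lifetime_mwh = net_mw * 8760 * capacity_factor * lifetime_years
        return (capital_usd + annual_opex_usd * lifetime_years) / lifetime_mwh

    net = net_electric_mw(laser_mj=2.0, gain=50, rep_hz=10)
    print(f"Net electric output: {net:.0f} MW")                # ~289 MW
    print(f"Toy LCOE: ${simple_lcoe(4e9, 1e8, net):.0f}/MWh")  # ~$108/MWh

In IPM itself, each scalar here is replaced by a detailed subsystem model with interdependencies, and the model exposes more than 270 user inputs rather than a handful.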

(left to right) Livermore researchers Bassem El Dasher, Claudio Santiago, and Mackenzie Nelson discuss a 3D model of a proposed IFE power plant design alongside the Integrated Process Model (IPM). IPM has more than 270 potential user inputs that researchers and collaborators can use to assess different IFE design choices to see the technical and cost impact on the entire design.

Operational Demands

NIF’s target capsules are extremely precise and fragile, and they can take weeks to fabricate, fill, and position. Researchers are trying to reconcile that reality with the estimated demand of more than 800,000 capsules per day, produced at less than $0.50 each, required for IFE plant viability. To do this, they are examining optimal target designs for IFE and exploring advanced manufacturing methods such as microfluidics, volumetric additive manufacturing, and two-photon polymerization. (See S&TR, April/May 2025 [archived PDF], pp. 16-19.) Additional projects involve developing diagnostic instruments that can collect, analyze, and combine data with other diagnostics at 10 to 20 shots per second and use it to improve lasers in real time.
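
The capsule demand follows directly from the repetition rate; simple arithmetic on the article’s figures:

    # Target demand implied by a 10 shots-per-second plant
    shots_per_second = 10
    capsules_per_day = shots_per_second * 60 * 60 * 24
    print(f"{capsules_per_day:,} capsules per day")      # 864,000
    # At the $0.50-per-target viability threshold, that is about
    # $432,000 per day spent on targets alone.
    print(f"${capsules_per_day * 0.50:,.0f} per day")    # $432,000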

Fusion energy systems such as IFE are also a regulatory challenge, as they generate high-energy neutrons capable of breeding plutonium or uranium-233 and rely on large quantities of tritium. “Pure fusion energy systems do not require fissile material, but there are still ways to misuse these technologies that pose proliferation risk,” says Yana Feldman, the associate program leader for international safeguards. Bad actors may only need small amounts of tritium to make nuclear weapons, and some breeding blanket designs may inadvertently produce traces of plutonium that may be diverted for military purposes.

Nuclear fission reactors are regulated through international agreements and export control rules, and the independent International Atomic Energy Agency (IAEA) verifies that nuclear material and facilities are only being used for peaceful purposes. Neither treaties nor the IAEA address fusion energy, and no consensus has been reached on whether fusion energy systems need an international verification program. Verification methods for safeguarding tritium are also far less developed than those for plutonium and uranium, focusing more on contamination and transfers than on analytically accounting for discrepancies. How much tritium could go unaccounted for without posing a proliferation risk also remains unclear.

Fusion systems can be designed for proliferation resistance, but the absence of an established plant design to build safeguards around remains a challenge.

International security analyst Anne-Marie Riitsaar and her colleagues are exploring these complexities and starting conversations with international fusion experts and private industry to raise awareness. Riitsaar also plans to collaborate with the IPM team to map tritium diversion vulnerabilities and identify high-risk points where researchers could incorporate surveillance methods into plant designs to detect and prevent potential misuse. “People sometimes ask me why I’m thinking about fusion energy regulations and proliferation risks at this point, but it’s not too early,” says Riitsaar. “Reaching a multinational consensus on regulating sensitive technologies takes considerable time and effort.”

The National Ignition Facility is an experimental facility and not a power plant, so a commercial IFE plant design has vastly different requirements—many of which are being studied by Livermore researchers and their collaborators.

NIF versus a viable IFE plant (estimated values); each entry reads NIF → IFE plant:
  • Repetition rate: one shot per day → 10 to 20 shots per second
  • Energy gain: 4.13 times (as of April 2025) → 30 times minimum, 50 to 100 times ideal
  • How lasers gain energy: flashlamps → diode pumping
  • Target fabrication and fuel filling: fabricated offsite over several weeks and filled manually in 1 to 5 days → mass-manufactured and filled in a target factory within the facility
  • Target delivery: positioned manually within the Target Chamber → shot into the plant’s target chamber approximately 10 to 20 times per second
  • Laser alignment: computational, taking up to 8 hours → in real time
  • Power cycle: open, requiring outside energy sources → closed, applying reused energy to power laser and ancillary plant operations
  • Fuel cycle (tritium): produced offsite → bred onsite

The Laser Driven Fusion Integration Research and Science Test Facility (LD-FIRST) is a proposed blueprint for a proof-of-concept IFE facility that will test all the key IFE subsystems in an integrated fashion. A public-private partnership will likely be necessary to build the facility and will help the IFE community address the main risks and technological challenges of building a commercial plant.

Converging on a Solution

The team seeks to make IPM as accurate and comprehensive as possible by meeting with subject matter experts across the Laboratory to incorporate the latest research. “We’re trying to evolve the model so it has the same level of high detail across every single functional area to tell us where we can focus research and help us find optimized solutions that we could propose to industry,” says Nelson.

Computer scientist Claudio Santiago and his colleagues also modernized IPM by porting its framework from Microsoft Excel to Python in December 2024, making it compatible with AI, ML, design optimization, and HPC to further inform designs. “Once we think about all the forcing functions such as minimum shot yield and materials requirements pinning us in from every direction, we end up with an optimized solution space. As we sharpen the pencil more with these tools, that optimized solution box gets smaller until eventually we’ve converged on a point design,” says IFE lead systems engineer Justin Galbraith. Galbraith and his team’s point design is called the Laser Driven Fusion Integration Research and Science Test Facility, or LD-FIRST, a proof-of-concept physics demonstration facility for IFE. “That point design, we anticipate, will serve as the foundation for a future public-private partnership that would facilitate building and realizing a physical facility to focus the IFE community in pursuit of fusion power on the grid,” says Galbraith.

Livermore is leading the charge in IFE, helping the United States develop a technological roadmap, growing and coordinating science and technology efforts within the Laboratory, and fostering partnerships across the fusion industry, academia, and government.

Ma chaired DOE’s “Basic Research Needs for IFE” workshop and report in 2022 and co-chairs the subcommittee providing recommendations on the nation’s fusion activities through DOE’s Fusion Energy Sciences Advisory Committee. She and her team travel often to Washington, D.C., working with DOE and legislators to expand fusion energy research and advocacy in the nation. Livermore also leads a “Collaboratory” with other DOE national laboratories to connect research project leads and facilitate public-private partnerships. The Collaboratory has hosted multiple events with industry, and the Laboratory has partnered with three private companies that aim to design pilot IFE plants.

Meanwhile, Galbraith and other IFE leaders have served as technical advisors for engineering design teams at Texas A&M University and given them IFE-relevant problems to solve, including advanced chamber and blanket design. Galbraith is working with Nelson to develop the IFE plant design portion of a high-energy-density science summer school program, which Nelson is leading in 2025 at the University of California at San Diego, and they have developed an IFE curriculum that has been deployed at six universities starting in spring 2025. “We’re hoping we can get a group of students really excited about fusion and start to build up the next generation of engineers and scientists that will make fusion a reality,” says Galbraith. The team has led IFE strategic planning exercises at the Laboratory, and Lawrence Livermore will stand up a new fusion institute—named “LIFT,” for Livermore Institute for Fusion Technology—a research and development center that will coordinate and centralize institutional fusion energy research.

Harnessing IFE will be a massive undertaking, but Livermore’s broad and deep expertise, facilities, and capabilities put the Laboratory in a unique position to lead and play an impactful role. “If we can set it up correctly, IFE will be a big piece of the Laboratory’s long-term vision,” says Ma. “IFE plays off of our history and all of our strengths, and it is critical for long-term national security.”

World-Watching: How Nature Paints With Color

[from Quanta Magazine]

by Yasemin Saplakoglu

When objects interact with light in particular ways — by absorbing or reflecting it — we see in color. A sunset’s orange hues and the ocean’s deep blues inspire artists and dazzle observant admirers. But colors are more than pretty decor; they also play a critical role in life. They attract mates, pollinators and seed-spreaders, and signal danger. And the same color can mean different things to different organisms: A red bird might attract a mate, while a red berry might warn off a hungry human.

For color to communicate meaning, systems to produce it had to evolve, by developing pigments to absorb certain wavelengths of light or structures to reflect them. Organisms also had to produce the machinery to perceive color. When you look out into a forest, you might see lush greenery dappled with yellowish sunlight and pink blooms. But this forest scene would look different if you were a bird or a fly. Color-perception machinery — which includes photoreceptors in our eyes that recognize and distinguish light — can differ between species. While humans can’t see ultraviolet light, some birds can. While dogs can’t see red or green, many humans can. Even within species there’s some variation: People who are colorblind have trouble distinguishing some combinations, such as green and red. And many organisms can’t see color at all.

Within one planet, many colorful worlds exist. But how did colors evolve in the first place?

What’s New and Noteworthy

To pinpoint when different kinds of color signals may have evolved, researchers recently reviewed many papers, covering hundreds of millions of years of evolutionary history, to bring together information from the fossil record and phylogenetic trees (diagrams that depict evolutionary relationships between species). Their analysis across the tree of life suggested that color signals likely evolved much later than color vision. It’s likely that color vision evolved twice, developing independently in arthropods and fish, between 400 million and 500 million years ago. Then plants started using bright colors to attract pollinators and animals to disperse their seeds, and then animals started using colors to warn off predators and eventually to attract mates.

One of the most common colors that we see in nature is green. However, this isn’t a color signal: It’s a result of photosynthesis. Most plants absorb almost all the photons in the red and blue light spectra but only 90% of the green photons. The remaining 10% are reflected, making the plants appear green to our eyes. But why did they evolve to do this? According to a model, this makes photosynthetic machinery more stable, suggesting that sometimes evolution favors stability over efficiency.

The majority of colors in nature are produced by pigments that absorb or reflect different wavelengths of light. While many plants can produce these pigments on their own, most animals can’t; instead, they acquire pigments from their diet. Some pigments, though, are hard to acquire, so some animals instead rely on nanoscale structures that scatter light in particular ways to create “structural colors.” For example, the shell of the blue-rayed limpet has layers of transparent crystals, each of which diffracts and reflects a sliver of the light spectrum. When the layers grow to a precise thickness, around 100 nanometers, the wavelengths in each layer interact with one another, canceling each other out — except for blue. The result is the appearance of a bright blue limpet shell.
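
The optics behind such structural color is multilayer thin-film interference; a generic sketch with illustrative numbers, not the limpet-specific model:

    % Bragg-like condition for a periodic two-material stack at normal
    % incidence: reflections from successive layers reinforce when
    \[
      m\,\lambda = 2\,(n_1 d_1 + n_2 d_2), \qquad m = 1, 2, \dots
    \]
    % With layer thicknesses near 100 nm, the reinforced wavelength lands
    % in the visible range; e.g., an optical period n1*d1 + n2*d2 of about
    % 240 nm puts the m = 1 reflection near 480 nm, which is blue.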

World-Watching: Human Enhanced Moisture Transport Exacerbated the Extreme Precipitation in Northern China

[from the American Geophysical Union, Geophysical Research Letters, 11 August 2025]

Abstract

Although previous studies suggest anthropogenic forcing may influence extreme precipitation probability, few have specifically investigated the human influence on moisture transport. Here, we leverage the 2023 record-breaking summer precipitation in Northern China (NC) to address this gap. Combining station observations with Coupled Model Intercomparison Project Phase 6 (CMIP6) model outputs, we demonstrate that the 2023-like heavy precipitation event was exacerbated by anthropogenically enhanced moisture transport. External forcing increased the probability of extreme southeasterly moisture transport by approximately 1.3 times (90% confidence interval: 1.0–1.8). Moreover, total anthropogenic forcing likely increased the probability of similar precipitation events at least 1.7 times (1.0–3.1), with both greenhouse gases and anthropogenic aerosols contributing positively. As greenhouse gas concentrations rise and anthropogenic warming intensifies, the frequency of similar extreme precipitation events in NC is projected to increase further.
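
The quoted multipliers are probability ratios, the standard metric in event attribution; stated explicitly (our notation, inferred from the abstract):

    % Probability ratio comparing the factual (forced) and counterfactual
    % (natural-forcing-only) worlds:
    \[
      \mathrm{PR} = \frac{P_{1}}{P_{0}}
    \]
    % P1: probability of the event under anthropogenic plus natural forcing.
    % P0: probability under natural forcing alone.
    % PR ~ 1.3 for the extreme moisture transport and PR >= 1.7 for the
    % precipitation event mean the events are 1.3 and at least 1.7 times
    % more likely, respectively, in the forced world.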

Plain Language Summary

The extent of human influence on moisture transport and consequent heavy precipitation remains a critical research question. While anthropogenic contributions to precipitation extremes are increasingly recognized, studies specifically addressing human-induced changes in moisture transport remain limited. The record-breaking summer precipitation in Northern China (NC) during 2023 provides a salient case study. This extreme event was fueled by substantial moisture transport from the southeast into NC, driven by Typhoons Doksuri and Khanun. Attribution analyses indicate that both greenhouse gas and anthropogenic aerosol emissions likely increased the probability of similar heavy precipitation events and associated moisture transport patterns. Such events are projected to become more frequent with continued anthropogenic warming. These findings demonstrate that human activities significantly influence moisture transport pathways and consequently modulate extreme precipitation occurrence in NC, deepening our understanding of the physical mechanisms underlying these events.

Read the full paper [archived PDF]

World-Watching: Science First Release, 10 July 2025

[from Science]

Accepted papers posted online prior to journal publication.

NASA Earth Science Division provides key data

by Dylan B. Millet, Belay B. Demoz, et al.

In May, the US administration proposed budget cuts to NASA, including a more than 50% decrease in funding for the agency’s Earth Science Division (ESD), the mission of which is to gather knowledge about Earth through space-based observation and other tools. The budget cuts proposed for ESD would cancel crucial satellites that observe Earth and its atmosphere, gut US science and engineering expertise, and potentially lead to the closure of NASA research centers. As former members of the recently dissolved NASA Earth Science Advisory Committee, an all-volunteer, independent body chartered to advise ESD, we warn that these actions would come at a profound cost to US society and scientific leadership.

[read more]

Spin-filter tunneling detection of antiferromagnetic resonance with electrically tunable damping

by Thow Min Jerald Cham, Daniel G. Chica, et al.

Antiferromagnetic spintronics offers the potential for higher-frequency operations and improved insensitivity to magnetic fields compared to ferromagnetic spintronics. However, previous electrical techniques to detect antiferromagnetic dynamics have utilized large, millimeter-scale bulk crystals. Here we demonstrate direct electrical detection of antiferromagnetic resonance in structures on the few-micrometer scale using spin-filter tunneling in PtTe2/bilayer CrSBr/graphite junctions in which the tunnel barrier is the van der Waals antiferromagnet CrSBr. This sample geometry allows not only efficient detection, but also electrical control of the antiferromagnetic resonance through spin-orbit torque from the PtTe2 electrode. The ability to efficiently detect and control antiferromagnetic resonance enables detailed studies of the physics governing these high-frequency dynamics.

[read more]

Scalable emulation of protein equilibrium ensembles with generative deep learning

by Sarah Lewis, Tim Hempel, et al.

Following the sequence and structure revolutions, predicting functionally relevant protein structure changes at scale remains an outstanding challenge. We introduce BioEmu, a deep learning system that emulates protein equilibrium ensembles by generating thousands of statistically independent structures per hour on a single GPU. BioEmu integrates over 200 milliseconds of molecular dynamics (MD) simulations, static structures and experimental protein stabilities using novel training algorithms. It captures diverse functional motions—including cryptic pocket formation, local unfolding, and domain rearrangements—and predicts relative free energies with 1 kcal/mol accuracy compared to millisecond-scale MD and experimental data. BioEmu provides mechanistic insights by jointly modeling structural ensembles and thermodynamic properties. This approach amortizes the cost of MD and experimental data generation, demonstrating a scalable path toward understanding and designing protein function.

[read more]

Negative capacitance overcomes Schottky-gate limits in GaN high-electron-mobility transistors

by Asir Intisar Khan, Jeong-Kyu Kim, et al.

For high-electron-mobility transistors based on a two-dimensional electron gas (2DEG) within a quantum well, such as those based on the AlGaN/GaN heterostructure, a Schottky gate is used to maximize the amount of charge that can be induced and thereby the current that can be achieved. However, the Schottky gate also leads to very high leakage current through the gate electrode. Adding a conventional dielectric layer between the nitride layers and gate metal can reduce leakage, but this comes at the price of a reduced drain current. Here, we used a ferroic HfO2-ZrO2 bilayer as the gate dielectric and achieved a simultaneous increase in the ON current and decrease in the leakage current, a combination otherwise not attainable with conventional dielectrics. This approach surpasses the conventional limits of Schottky GaN transistors and provides a new pathway to improve performance in transistors based on 2DEG.

[read more]

Science-Watching: New Insights into Polyamorphism Could Influence How Drugs Are Formulated

[from the Royal Society of Chemistry’s Chemistry World, by Patrick de Jongh]

Results from a study combining experiments and simulations could overturn the assumption that amorphous forms of the same compound have the same molecular arrangement. The team behind the work claims to have prepared three amorphous forms of the diuretic drug hydrochlorothiazide and determined that they have distinct properties and distinct types of disorder. ‘If polyamorphism is proved in the future to be a universal—or at least not a very rare—phenomenon, then the pharmaceutical industry will need to make screens for polyamorphism and this will also be an opportunity for patenting,’ comments Inês Martins, from the University of Copenhagen in Denmark, who led the work with Thomas Rades.

Crystalline active pharmaceutical ingredients (APIs) often suffer from poor solubility. A common strategy to circumvent this problem is converting APIs into their amorphous form. This has been demonstrated for various APIs, including hydrochlorothiazide. However, the physical properties of polyamorphs are dependent on how they were prepared. Given there are no straightforward techniques to study how molecules interact and organise themselves in amorphous materials, the area is poorly understood.

Nevertheless, a team led by Rades and Martins set out to identify how amorphous forms of the same API that present different physicochemical properties differ from each other. They decided to study hydrochlorothiazide because it was previously shown to have polyamorphs with glass transition temperatures above room temperature, which facilitates the preparation, isolation and analysis of its different polyamorphs. Starting from crystalline hydrochlorothiazide, they produced three polyamorphs: polyamorph I via spray-drying, polyamorph II via quench-cooling and polyamorph III via ball-milling. Thermal analysis revealed a significantly lower glass-transition temperature for polyamorph I (88.7°C), whereas polyamorphs II and III had similar glass-transition temperatures (117.5°C and 119.7°C, respectively). The polyamorphs also demonstrated very different shelf-life stabilities against crystallisation.

Subsequently, they studied polyamorphic interconversions by subjecting each polyamorph to the preparation conditions used for the others. For example, polyamorph I (obtained by spray-drying) was subjected to quench-cooling or ball-milling. Identifying temperature as a critical parameter, they observed that polyamorph II could be obtained from polyamorphs I and III, but the reverse pathway was not possible. Meanwhile, they observed that polyamorphs I and III interconvert. These results demonstrate that polyamorph II is the most stable amorphous form.

Source: © Thomas Rades/University of Copenhagen
Researchers used a variety of techniques to elucidate the different polyamorphs that can be produced from crystalline hydrochlorothiazide and the polyamorphic interconversions that occur when a specific amorphous form is submitted to temperature or milling treatments

‘The problem out of the gate with polyamorphism as a concept is how to tell the difference between a well-defined metastable amorphous structure and an unrelaxed one that simply results from kinetically trapped defects introduced during processing. This is hard to define since the amorphous structure is statistical in any case,’ comments Simon Billinge, who studies the structure of disordered materials at Columbia University in the US. ‘They process the samples very differently. We know—from our own work—that this results in amorphous phases with very different stabilities against recrystallisation, for example, but is this polyamorphism? On the other hand, they find that the pair distribution functions of each of their “forms” are identical. There is no experimental evidence for a distinct structure. Taken together, the results do little to advance my understanding of polyamorphism.’

Distinct dihedral angle distributions

To get further information on how the polyamorphs are different on a molecular level, Martins and Rades turned to molecular dynamics simulations, comparing the dihedral angles around the sulfonamide groups in polyamorphs I and II. ‘Polyamorph I, which has a large number of the molecules with a dihedral angle similar to the one reported for crystalline hydrochlorothiazide, has a lower physical stability and faster structural relaxation time than polyamorph II, which has a broader dihedral angle distribution. Our findings indicate that a broader dihedral angle distribution seems to contribute to a better physical stability and slower structural relaxation,’ says Martins. They therefore hypothesise that having half the molecules with a conformation closer to crystalline hydrochlorothiazide and half of the molecules with a different conformation could help in establishing specific molecular arrangements that would favour the stability of the amorphous form.

The team also says the simulations corroborated its experimental results that polyamorph I can transform into polyamorph II, while the opposite conversion did not take place.

However, Billinge does not believe the computational studies provide conclusive evidence: ‘There is a detailed molecular dynamics analysis where different annealing conditions in the simulations give some slightly different statistics on the molecular conformations, but despite their claim, the resulting computed pair distribution functions do not look like the measured ones, so we have no way of knowing if the molecular dynamics is capturing what is happening in the real material. For amorphous materials, it is very difficult to equilibrate them in a molecular dynamics simulation, so you will be looking at artefacts of how the ensemble was created. Any claims to have found polyamorphism from molecular dynamics simulations by themselves are therefore questionable.’

Rades says their results can change the field of pharmaceutics: ‘We expect that other drug molecules may exhibit polyamorphism and the question would be which structural parameters would be different. In the case of hydrochlorothiazide, the dihedral angle distribution was found to be a parameter contributing for the formation of different polyamorphs. In other drugs, maybe the dihedral angle distribution (molecular conformations) could be different as well, but also maybe the type of intermolecular interactions can play a more important role in the formation of polyamorphs.’

The team now hope the pharmaceutical industry will look at amorphous systems differently and not assume that all amorphous forms of the same compound are the same. ‘Knowing this and considering that a certain polyamorph will have better physical stability, solubility or dissolution properties than another polyamorph, this will be an opportunity for the pharmaceutical industry to prepare tablets of a drug where the dose could be lower than tablets containing the crystalline form,’ concludes Rades.

Science-Watching: Why Do Batteries Sometimes Catch Fire and Explode?

[from Berkeley Lab News, by Theresa Duque]

Key Takeaways
  • Scientists have gained new insight into why thermal runaway, while rare, could cause a resting battery to overheat and catch fire.
  • In order to better understand how a resting battery might undergo thermal runaway after fast charging, scientists are using a technique called “operando X-ray microtomography” to measure changes in the state of charge at the particle level inside a lithium-ion battery after it’s been charged.
  • Their work shows for the first time that it is possible to directly measure current inside a resting battery even when the external current measurement is zero.
  • Much more work is needed before the findings can be used to develop improved safety protocols.

How likely is an electric vehicle battery to self-combust and explode? The chances of that happening are actually pretty slim: Some analysts say that gasoline vehicles are nearly 30 times more likely to catch fire than electric vehicles. But recent news of EVs catching fire while parked has left many consumers – and researchers – scratching their heads over how these rare events could possibly happen.

Researchers have long known that high electric currents can lead to “thermal runaway” – a chain reaction that can cause a battery to overheat, catch fire, and explode. But without a reliable method to measure currents inside a resting battery, it has not been clear why some batteries go into thermal runaway, even when an EV is parked.

Now, by using an imaging technique called “operando X-ray microtomography,” scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have shown that the presence of large local currents inside batteries at rest after fast charging could be one of the causes behind thermal runaway. Their findings were reported in the journal ACS Nano.

“We are the first to capture real-time 3D images that measure changes in the state of charge at the particle level inside a lithium-ion battery after it’s been charged,” said Nitash P. Balsara, the senior author on the study. Balsara is a faculty senior scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley professor of chemical and biomolecular engineering.

“What’s exciting about this work is that Nitash Balsara’s group isn’t just looking at images – they’re using the images to determine how batteries work and change in a time-dependent way. This study is a culmination of many years of work,” said co-author Dilworth Y. Parkinson, staff scientist and deputy for photon science operations at Berkeley Lab’s Advanced Light Source (ALS).

The team is also the first to measure ionic currents at the particle level inside the battery electrode.

3D microtomography experiments at the Advanced Light Source enabled researchers to pinpoint which particles generated current densities as high as 25 milliamps per centimeter squared inside a resting battery after fast charging. In comparison, the current density required to charge the test battery in 10 minutes was 18 milliamps per centimeter squared. (Credit: Nitash Balsara and Alec S. Ho/Berkeley Lab. Courtesy of ACS Nano)
Measuring a battery’s internal currents

In a lithium-ion battery, the anode component of the electrode is mostly made of graphite. When a healthy battery is charged slowly, lithium ions weave themselves between the layers of graphite sheets in the electrode. In contrast, when the battery is charged rapidly, the lithium ions have a tendency to deposit on the surface of the graphite particles in the form of lithium metal.
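
In chemical terms, the two competing pathways are standard battery electrochemistry, not specific to this study:

    % Healthy charging: lithium intercalates between graphite sheets
    \[
      x\,\mathrm{Li}^{+} + x\,e^{-} + \mathrm{C}_{6} \;\longrightarrow\;
      \mathrm{Li}_{x}\mathrm{C}_{6}, \qquad 0 \le x \le 1
    \]
    % Fast charging can instead deposit metallic lithium on the particle
    % surface (plating): Li+ + e- -> Li(s), the side reaction associated
    % with capacity loss and safety hazards.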

“What happens after fast charging when the battery is at rest is a little mysterious,” Balsara said. But the method used for the new study revealed important clues.

Experiments led by first author Alec S. Ho at the ALS show that when graphite is “fully lithiated” or fully charged, it expands a tiny bit, about a 10% change in volume – and that current in the battery at the particle level could be determined by tracking the local lithiation in the electrode. (Ho recently completed his Ph.D. in the Balsara group at UC Berkeley.)
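
The study’s full pipeline is more involved, but the core inference, turning the change in a particle’s lithiation state between tomography frames into a local current, can be sketched as follows. Every name and value here is illustrative, not taken from the paper:

    # Illustrative sketch: infer a local current density from how much a
    # graphite particle's state of lithiation changes between two 3D frames.
    # All values are illustrative; the paper's analysis is more involved.

    GRAPHITE_CAPACITY_MAH_PER_G = 372.0  # theoretical graphite capacity
    GRAPHITE_DENSITY_G_PER_CM3 = 2.2     # approximate graphite density

    def local_current_density(delta_lithiation: float,
                              thickness_um: float,
                              delta_t_s: float) -> float:
        """Current density (mA/cm^2) implied by a lithiation change."""
        # Areal capacity of the particle (mAh per cm^2 of cross-section)
        areal_mah_cm2 = (GRAPHITE_CAPACITY_MAH_PER_G
                         * GRAPHITE_DENSITY_G_PER_CM3
                         * thickness_um * 1e-4)          # um -> cm
        # Charge moved (mAh -> mA.s), divided by the elapsed time
        return delta_lithiation * areal_mah_cm2 * 3600.0 / delta_t_s

    # e.g., a 50-um-thick particle relaxing by 10% state of charge in 5 min:
    print(f"{local_current_density(0.10, 50.0, 300.0):.1f} mA/cm^2")  # ~4.9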

A conventional voltmeter would tell you that when a battery is turned off, and disconnected from both the charging station and the electric motor, the overall current in the battery is zero.

But in the new study, the research team found that after charging the battery in 10 minutes, the local currents in a battery at rest (or currents inside the battery at the particle level) were surprisingly large. Parkinson’s 3D microtomography instrument at the ALS enabled the researchers to pinpoint which particles inside the battery were the “outliers” generating alarming current densities as high as 25 milliamps per centimeter squared. In comparison, the current density required to charge the battery in 10 minutes was 18 milliamps per centimeter squared.

The researchers also learned that the measured internal currents decreased substantially in about 20 minutes. Much more work is needed before their approach can be used to develop improved safety protocols.

Researchers from Argonne National Laboratory also contributed to the work.

The Advanced Light Source is a DOE Office of Science user facility at Berkeley Lab.

The work was supported by the Department of Energy’s Office of Science and Office of Energy Efficiency and Renewable Energy. Additional funding was provided by the National Science Foundation.

Speculative Science: The Reality beyond Spacetime, with Donald Hoffman

[from The Institute of Art and Ideas Science Weekly, July 22]

Donald Hoffman famously argues that we know nothing about the truth of the world. His book, The Case Against Reality, claims the process of survival of the fittest does not require a true picture of reality. Furthermore, Hoffman claims spacetime is not fundamental. So, what lies beneath spacetime, can we know about it? And how does consciousness come into play? Join this interview with the famed cognitive psychologist and author exploring our notions of consciousness, spacetime, and what lies beneath. Hosted by Curt Jaimungal.

[watch the video]

Technology-Watching: Quantum Microchips Connected in Record-Breaking World First

[from UK Research and Innovation]

Researchers in the UK have successfully transferred data between quantum microchips for the first time.

This helps overcome a key obstacle to building a commercial quantum computer.

The milestone, achieved by a team from the University of Sussex and Brighton-based quantum computer developer Universal Quantum, allows chips to be linked together like jigsaw puzzle pieces.

On track to useful quantum computers

It means that many more qubits, the basic calculating unit, can be joined together than is possible on a single microchip. This will make a more powerful quantum computer possible.

The project, which has been backed by the Engineering and Physical Sciences Research Council (EPSRC), has also broken the world record for quantum connection speed and accuracy.

The scaling of qubit numbers from the current level of around 100 qubits to nearer 1 million is central to creating a quantum processor that can make useful calculations.

The significant achievement is based on a technical blueprint for creating a large-scale quantum computer, which was first published in 2017 with funding from EPSRC.

The blueprint included the ground-breaking concept, successfully demonstrated in this research, of linking quantum computing modules with electrical fields.

Unlocking UK potential

The UK is a leader in the global race to develop useful quantum computers, which represent a step-change in computing power.

Their development may help solve pressing challenges from drug discovery to energy-efficient fertilizer production. But their impact is expected to sweep across the economy, transforming most sectors and all our lives.

Potential to scale up

Winfried Hensinger, Professor of Quantum Technologies at the University of Sussex and Chief Scientist and co-founder at Universal Quantum said:

As quantum computers grow, we will eventually be constrained by the size of the microchip, which limits the number of quantum bits such a chip can accommodate.

In demonstrating that we can connect 2 quantum computing chips, a bit like a jigsaw puzzle, and, crucially, that it works so well, we unlock the potential to scale up by connecting hundreds or even thousands of quantum computing microchips.

Speed and precision

The researchers were successful in transporting the qubits using electrical fields with a 99.999993% success rate and a connection rate of 2424 transfers per second. Both numbers are world records.
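
Simple arithmetic on those two figures gives a feel for the absolute error rate (our calculation, not a quoted result):

    # Error probability per transfer and mean time between errors at the
    # demonstrated rate (our arithmetic on the quoted figures).
    success_rate = 0.99999993
    transfers_per_second = 2424

    error_prob = 1.0 - success_rate                    # ~7e-8 per transfer
    seconds_between_errors = 1.0 / (error_prob * transfers_per_second)
    print(f"{error_prob:.1e} errors per transfer")     # ~7.0e-08
    print(f"~{seconds_between_errors / 3600:.1f} hours between errors")  # ~1.6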

Dr. Kedar Pandya, Director of Cross-Council Programmes at EPSRC, said:

This significant milestone is evidence of how EPSRC funded science is seeding the commercial future for quantum computing in the UK.

The potential for complex technologies, like quantum, to transform our lives and create economic value widely relies on visionary early-stage investment in academic research.

We deliver that crucial building block and are delighted that the University of Sussex and its spin-out company, Universal Quantum, are demonstrating the strength it supports.

Institute of Physics award winner

Universal Quantum has been awarded €67 million from the German Aerospace Center to build 2 quantum computers.

The University of Sussex spin-out was also recently named as one of the 2022 Institute of Physics award winners in the business start-up category.

Japanese Philosopher KARATANI Kōjin (柄谷 行人) Awarded the 2022 Berggruen Prize

An expansive thinker who crosses boundaries.

[from Nōema Magazine, by Nathan Gardels, Editor-in-Chief]

KARATANI Kōjin has been named this year’s laureate for the $1 million Berggruen Prize for Culture and Philosophy. An expansive thinker who straddles East and West while crossing disciplinary boundaries, Karatani is not only one of Japan’s most esteemed literary critics, but a highly original mind who has turned key suppositions of Western philosophy on their heads.

In Karatani’s sharpest departure from conventional wisdom, he locates the origins of philosophy not in Athens, but in the earlier Ionian culture that greatly influenced the so-called “pre-Socratic thinkers” such as Heraclitus and Parmenides. Their ideas centered on the flux of constant change, in which “matter moves itself” without the gods, and the oneness of all being—a philosophical outlook closer to Daoist and Buddhist thought than to Plato’s later metaphysics, which posited that, as Karatani puts it, “the soul rules matter.”

In the political realm, Karatani contrasts the form of self-rule from Ionian times based on free and equal reciprocity among all inhabitants — “isonomia” — with what he calls the “degraded democracy” of Athens that rested on slavery and conquest. He considers the former the better foundation for a just polity.

In a novel twist on classical categorizations, Karatani regards Socrates himself as fitting into the pre-Socratic mold. “If one wants to properly consider the pre-Socratics, one must include Socrates in their number,” he writes. “Socrates was the last person to try to re-institute Ionian thought in politics.”

A Degraded Form of Democracy in Athens

For Karatani, Athenian democracy was debased because it was “constrained by the distinctions between public and private, and spiritual and manual labor,” a duality of existence that Socrates and the pre-Socratics sought to dismantle. As a result, by Karatani’s reading, Socrates was both held in contempt by the “aristocratic faction,” which sought to preserve its privileges built on the labor of others, and condemned to death by a narrow-minded mobocracy for his idiosyncratic insistence on autonomy and liberty in pursuit of truth.

Appalled at Socrates’ fate, Plato blamed democracy for giving birth to demagoguery and tyranny, radically rejecting the idea of rule by the masses and proposing instead a political order governed by philosophers. In Karatani’s reckoning, Plato then “took as his life’s work driving out the Ionian spirit that touched off Athenian democracy”—in short, throwing out the baby with the bathwater but maintaining the disassociations, such as citizen and slave, that follow from the distinction between public and private grounded in an apprehension of reality that separates the spiritual from the material.

In order to refute “Platonic metaphysics,” Karatani argues, “it is precisely Socrates that is required.”

Turning Marx On His Head

In his seminal work, The Structure of World History, Karatani flips Marx’s core tenet that the economic “mode of production” is the substructure of society that determines all else. He postulates instead that it is the ever-shifting “modes of exchange” among capital, the state and nation which together shape the social order.

For Karatani, historically cultivated norms and beliefs about fairness and justice, including universal religions, compel the state to regulate inequality within the mythic commonality of the nation, which sees itself as a whole people, tempering the logic of the unfettered market. As he sees it, the siren call of reciprocity and equality has remained deeply resonant throughout the ages, drawing history toward a return to the original ideal of isonomia.

Expanding the Space of Civil Society

Not an armchair philosopher, Karatani has actively promoted a modern form of the kind of reciprocity he saw in ancient Ionian culture, which he calls “associationism.” In practical terms in Japan, this entails the activation of civil society, such as through citizens’ assemblies, that would exercise self-rule from the bottom up.

In the wake of the Fukushima nuclear accident in 2011, Karatani famously called for “a society where people demonstrate” that would expand the space of civil society and constrict the collusive power of Japan’s political, bureaucratic and corporate establishment. Like other activists, he blamed this closed system of governance that shuts out the voices of ordinary citizens for fatally mismanaging the nuclear power industry in a country where earthquakes and tsunamis are an ever-present danger.

An Expansive Mind

Along with The Structure of World History (2014) and Isonomia and the Origins of Philosophy (2017), the breadth of Karatani’s interests and erudition is readily evident in the titles of his many other books. These include Nation and Aesthetics: On Kant and Freud (2017), History and Repetition (2011), Transcritique: On Kant and Marx (2003), Architecture As Metaphor: Language, Number, Money (1995) and Origins of Modern Japanese Literature (1993).

The prize ceremony will be held in Tokyo in the spring.

First Clean Energy Cybersecurity Accelerator Participants Begin Technical Assessment

[From the National Renewable Energy Laboratory (NREL) News]

Program Selected Three Participants for Cohort 1

The Clean Energy Cybersecurity Accelerator™ (CECA)’s first cohort of solution providers—Blue Ridge Networks, Sierra Nevada Corporation, and Xage—recently began a technical assessment of their technologies that offer strong authentication solutions for distributed energy resources.

The selected solution providers will take part in a six-month acceleration period, where solutions will be evaluated in the Advanced Research on Integrated Energy Systems (ARIES) cyber range.

Working with its partners, CECA identifies urgent security gaps and supports emerging technologies in building security into new systems at the earliest stage—when security is most effective and efficient. The initiative is managed by the U.S. Department of Energy’s (DOE’s) National Renewable Energy Laboratory (NREL) and sponsored by DOE’s Office of Cybersecurity, Energy Security, and Emergency Response (CESER) and utility industry partners in collaboration with DOE’s Office of Energy Efficiency and Renewable Energy (EERE).

“We are thrilled to welcome and work with the first participants to the secure energy transformation,” said Jon White, director of NREL’s Cybersecurity Program Office. “These cyber-solution providers will work with NREL, using its world-class capabilities, to develop their ideas into real-world solutions. We are ready to build security into technologies at the early development stages when most effective and efficient.”

The selected innovators:

Blue Ridge Networks’ LinkGuard system “cloaks” critical information technology network operations from destructive and costly cyberattacks. The system overlays onto existing network infrastructure to secure network segments from external discovery or data exfiltration. Through a partnership with Schneider Electric, Blue Ridge Networks helped deploy a solution to protect supervisory control and data acquisition (SCADA) systems for the utility industry.

Sierra Nevada Corporation (SNC)’s Binary Armor® is used by the U.S. Department of Defense and utilities to protect critical assets, with subject matter experts helping deliver cyber solutions. SNC plans to integrate Binary Armor as a software solution into a communication gateway or other available edge processing to provide a scalable way to enforce safe operation in an unauthenticated ecosystem. SNC currently helps secure heating, ventilation, and air conditioning systems; programmable logic controllers; and wildfire detection, with remote monitoring for two different utilities.

Xage uses identity-based access control to protect users, machines, apps, and data, at the edge and in the cloud, enforcing zero-trust access to secure operations and data universally. To test technology in energy sector environments, Xage provides zero-trust remote access, has demonstrated proofs of concept, and deploys local and remote access at various organizations.

Three major U.S. utilities are partnering with CECA, with more expected to join: Berkshire Hathaway Energy, Duke Energy, and Xcel Energy. At the end of each cohort cycle, cyber innovators will present their solutions to the utilities with the goal of making an immediate impact.

Additionally, CECA participants benefit from access to NREL’s unique testing and evaluation capabilities, including its ARIES cyber range, developed with support from EERE. The ARIES cyber range provides one of the most advanced simulation environments with unparalleled real-time situational awareness and visualization to evaluate renewable energy system defenses.

Applications for the second CECA cohort will open in early January 2023 for providers offering solutions that uncover hidden risks due to incomplete system visibility and device security and configuration.

NREL is the U.S. Department of Energy’s primary national laboratory for renewable energy and energy efficiency research and development. NREL is operated for DOE by the Alliance for Sustainable Energy LLC.