Science-Watching: From Ignition to Energy

[from Science & Technology Review July/August 2025 Research Highlights, by Noah Pflueger-Peters]

Achieving ignition at the National Ignition Facility (NIF) proved that harnessing the power of the Sun in a laboratory may be possible. The Sun’s extreme temperatures and pressures cause light elements to fuse together to create heavier ones, releasing enormous energy and sustaining conditions for more thermonuclear reactions. NIF replicates these conditions with inertial confinement fusion, in which lasers compress and heat a target capsule filled with deuterium and tritium (DT), “heavy” isotopes of hydrogen that contain extra neutrons. When the isotopes fuse, they create helium and a neutron, and the lost mass is converted into energy. Harnessing that released energy for power production is the goal of inertial fusion energy (IFE).
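The energy bookkeeping behind that statement can be checked with a short calculation. The isotope masses below are standard physical constants rather than figures from this article:

```python
# Back-of-envelope: energy released per D-T fusion reaction, E = Δm·c²,
# expressed in atomic mass units (1 u ≈ 931.494 MeV of rest energy).
U_TO_MEV = 931.494  # MeV per atomic mass unit

# Standard atomic masses in u (textbook constants, not article figures)
m_deuterium = 2.014102
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV
print(f"Mass defect: {mass_defect:.6f} u -> {energy_mev:.1f} MeV per reaction")
```

The roughly 17.6 MeV released per reaction, most of it carried by the neutron, is the energy an IFE plant would capture as heat.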

Nuclear fusion produces significantly more energy than either nuclear fission or burning fossil fuels for equivalent amounts of fuel. Since the input materials for fusion energy are plentiful on Earth, an IFE power plant could produce safe, abundant, power grid-compatible energy without highly radioactive byproducts.

Although significant work remains to harness fusion energy, pursuing the development and deployment of IFE is crucial for the nation’s energy security, enabling the United States to shape implementation worldwide, avoid technological surprises from adversaries, and maintain technical leadership in other energy-intensive technologies such as AI, machine learning (ML), and supercomputing.

IFE research stretches back to the early days of Lawrence Livermore, and today the Laboratory is fostering the overall fusion ecosystem. Livermore’s unique capabilities, expertise, and connections will be critical to laying the technical, logistical, and legal groundwork to make IFE possible. “IFE is a grand scientific and engineering challenge, something that is so incredibly difficult and high-risk and takes enormous expertise,” says Tammy Ma, Livermore’s IFE Institutional Initiative lead. “This challenge makes it the right kind of problem for national laboratories to pursue.”

This artist’s rendering shows the concept for an inertial fusion energy (IFE) power plant design, with a cutaway to show the plant’s target chamber in the center. Livermore researchers are laying the groundwork for private fusion companies to build similar designs. (Illustration by Eric Smith.)

Designing for Viability

NIF is the only facility to date to demonstrate the ignition and burning plasma conditions that are prerequisites for IFE, but it is an experimental facility for stockpile stewardship research, not a power plant. To be commercially viable and produce the energy to offset costs and meet demands (baseload power), IFE plants will need to generate more than 30 times the energy they deliver to the fusion target on every shot while firing 10 or more shots per second, compared to NIF’s rate of one or two shots per day.
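A back-of-envelope calculation shows why the repetition rate matters as much as the gain. The 2 megajoules of laser energy per shot is an illustrative assumption, not a figure from this article:

```python
# Why repetition rate matters: average fusion power at the article's
# stated requirements vs. NIF's experimental cadence. The 2 MJ of laser
# energy delivered per shot is an assumed, illustrative value.
laser_mj = 2.0
gain = 30                             # minimum viable gain per the article
yield_per_shot_mj = laser_mj * gain   # 60 MJ of fusion yield per shot

plant_mw = yield_per_shot_mj * 10          # 10 shots per second -> MW
nif_avg_mw = yield_per_shot_mj / 86_400    # one shot per day, averaged

print(f"IFE plant: {plant_mw:.0f} MW average fusion power")
print(f"Same shot, once per day: {nif_avg_mw * 1000:.1f} kW average")
```

The same shot that sustains hundreds of megawatts at 10 hertz averages under a kilowatt at one shot per day, which is why an experimental facility like NIF cannot simply be scaled into a power plant.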

The Laser Inertial Fusion Energy (LIFE) study, conducted between 2008 and 2013, aimed to build directly on technology developed for NIF to achieve IFE and took a systematic approach to this requirement by developing the Integrated Process Model (IPM). (See S&TR, April/May 2009 [archived PDF], pp. 6-15.)

IPM is a technoeconomic model of an IFE power plant with detailed technical and cost breakdowns and interdependencies of key systems and subsystems. “The work done under LIFE was fantastic,” says Ma. “IPM lays out engineering and physics requirements for the entire system to test out different scenarios and see the impact. Now, we not only get to expand on all that but also leverage 15 years of new data from NIF, better codes, and high-performance computing (HPC), as well as new work in AI, ML, advanced manufacturing, diagnostics, and nonproliferation across the Laboratory.”

IPM describes an IFE power plant that requires a solid-state laser driver system to “pump” lasers with optical energy using laser diodes instead of flashlamps as at NIF. The plant will also need to fabricate and fill target capsules onsite and send them into its target chamber at a high enough frequency to produce baseload power. “We will have to repeatedly inject targets into the chamber, so the targets must be able to withstand and survive that process,” explains Ma. “Then, the lasers will track the moving targets, and when one gets to the center of the chamber, they would fire on the centered target, repeating 10 to 20 times per second.”

The facility would convert fusion energy into heat and then electricity via steam turbines, sending most of the electricity to the power grid and recycling the rest to power operations on subsequent shots. Neutrons from the reaction would produce tritium needed for the DT fuel by bombarding lithium isotopes in a “breeding blanket” material lining its target chamber. By closing both the power and fuel cycles, IFE plants are expected to be self-sustaining.
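The closed power cycle can be sketched in a few lines. The laser energy per shot, turbine (thermal) efficiency, and laser wall-plug efficiency below are illustrative assumptions, not figures from this article or from IPM:

```python
# Minimal sketch of the closed power cycle: gross fusion power -> turbine
# electricity -> minus the electricity recycled to run the laser.
# All parameter values are assumptions chosen for illustration only.
def net_electric_mw(laser_mj=2.0, gain=30, rep_hz=10,
                    thermal_eff=0.40, wall_plug_eff=0.18):
    fusion_mw = laser_mj * gain * rep_hz                 # gross fusion power
    gross_electric_mw = fusion_mw * thermal_eff          # steam-turbine output
    laser_demand_mw = laser_mj * rep_hz / wall_plug_eff  # recirculated power
    return gross_electric_mw - laser_demand_mw

print(f"Net to grid at gain 30:  ~{net_electric_mw():.0f} MWe")
print(f"Net to grid at gain 100: ~{net_electric_mw(gain=100):.0f} MWe")
```

Even in this toy model, the recirculated laser power eats a large fraction of the output at the minimum gain, which is one way to see why gains well above 30 times are desirable.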

Thanks in part to IFE STARFIRE (IFE Science and Technology Accelerated Research for Fusion Innovation and Reactor Engineering), a Department of Energy (DOE)-funded multi-institutional IFE research and development hub, researchers across the Laboratory are working to meet the new system’s demands. IPM can help identify key challenges, test the viability of new designs, and direct future research. “Many technical models and cost models exist for IFE, but very few, if any, pair systems and cost models together at the same depth as IPM,” says Mackenzie Nelson, a technoeconomic systems analyst in the Computational Engineering Division. “This type of tool offers such an advantage because we can assess design choices from both a technical and economic standpoint and create blueprints for what an IFE plant could look like.”

(left to right) Livermore researchers Bassem El Dasher, Claudio Santiago, and Mackenzie Nelson discuss a 3D model of a proposed IFE power plant design alongside the Integrated Process Model (IPM). IPM has more than 270 potential user inputs that researchers and collaborators can use to assess different IFE design choices to see the technical and cost impact on the entire design.

Operational Demands

NIF’s target capsules are extremely precise and fragile and can take weeks to fabricate, fill, and position. Researchers are trying to reconcile that reality with the estimated demand of more than 800,000 capsules per day, produced at less than $0.50 each, required for IFE plant viability. To do this, they are examining optimal target designs for IFE and exploring advanced manufacturing methods such as microfluidics, volumetric additive manufacturing, and two-photon polymerization. (See S&TR, April/May 2025 [archived PDF], pp. 16-19.) Additional projects involve developing diagnostic instruments that can collect, analyze, and combine data with other diagnostics at the 10 to 20 shot-per-second frequency and use it to improve lasers in real time.
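The capsule demand follows directly from the repetition rate, as a quick check using only figures from this article shows:

```python
# A target is consumed on every shot, so capsule demand scales directly
# with repetition rate. The figures below come from the article.
shots_per_second = 10              # lower end of the stated 10-20 rate
seconds_per_day = 24 * 60 * 60     # 86,400

capsules_per_day = shots_per_second * seconds_per_day
print(f"Capsules needed per day: {capsules_per_day:,}")

# At the <$0.50 cost target, the daily target spend stays under:
max_cost_per_capsule = 0.50
print(f"Daily target budget: < ${capsules_per_day * max_cost_per_capsule:,.0f}")
```

The result, 864,000 capsules, is consistent with the article’s estimate of more than 800,000 per day.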

Fusion energy systems such as IFE are also a regulatory challenge, as they generate high-energy neutrons capable of breeding plutonium or uranium-233 and rely on large quantities of tritium. “Pure fusion energy systems do not require fissile material, but there are still ways to misuse these technologies that pose proliferation risk,” says Yana Feldman, the associate program leader for international safeguards. Bad actors may only need small amounts of tritium to make nuclear weapons, and some breeding blanket designs may inadvertently produce traces of plutonium that may be diverted for military purposes.

Nuclear fission reactors are regulated through international agreements and export control rules, and the independent International Atomic Energy Agency (IAEA) verifies that nuclear material and facilities are only used for peaceful purposes. Neither treaties nor the IAEA address fusion energy, and no consensus has been reached on whether fusion energy systems need an international verification program. Verification methods for safeguarding tritium are also far less developed than those for plutonium and uranium and focus more on contamination and transfers than on analytical accounting for discrepancies. How much tritium could go unaccounted for without posing a proliferation risk also remains unclear.

Fusion systems can be designed for proliferation resistance, but the lack of an established plant design to build that resistance into remains a challenge.

International security analyst Anne-Marie Riitsaar and her colleagues are exploring these complexities and starting conversations with international fusion experts and private industry to raise awareness. Riitsaar also plans to collaborate with the IPM team to map tritium diversion vulnerabilities and identify high-risk points where researchers could incorporate surveillance methods into plant designs to detect and prevent potential misuse. “People sometimes ask me why I’m thinking about fusion energy regulations and proliferation risks at this point, but it’s not too early,” says Riitsaar. “Reaching a multinational consensus on regulating sensitive technologies takes considerable time and effort.”

The National Ignition Facility is an experimental facility and not a power plant, so a commercial IFE plant design has vastly different requirements—many of which are being studied by Livermore researchers and their collaborators.

|  | NIF | Viable IFE plant (estimated) |
| --- | --- | --- |
| Repetition rate | One shot per day | 10 to 20 shots per second |
| Energy gain | 4.13 times (as of April 2025) | 30 times (minimum), 50 to 100 times (ideal) |
| How lasers gain energy | Flashlamps | Diode pumping |
| Target fabrication and fuel filling | Fabricated offsite over several weeks and filled manually in 1 to 5 days | Mass-manufactured and filled in a target factory within the facility |
| Target delivery | Positioned manually within the Target Chamber | Shot into the plant’s target chamber approximately 10 to 20 times per second |
| Laser alignment | Computational, taking up to 8 hours | In real time |
| Power cycle | Open, requiring outside energy sources | Closed, applying reused energy to power laser and ancillary plant operations |
| Fuel cycle (tritium) | Produced offsite | Bred onsite |

The Laser Driven Fusion Integration Research and Science Test Facility (LD-FIRST) is a proposed blueprint for a proof-of-concept IFE facility that would test all the key IFE subsystems in an integrated fashion. A public-private partnership will likely be necessary to build the facility and will help the IFE community retire the main risks and technological challenges of building a commercial plant.

Converging on a Solution

The team seeks to make IPM as accurate and comprehensive as possible by meeting with subject matter experts across the Laboratory to incorporate the latest research. “We’re trying to evolve the model so it has the same level of high detail across every single functional area to tell us where we can focus research and help us find optimized solutions that we could propose to industry,” says Nelson.

Computer scientist Claudio Santiago and his colleagues also modernized IPM by porting its framework from Microsoft Excel to Python in December 2024, making it compatible with AI, ML, design optimization, and HPC to further inform designs. “Once we think about all the forcing functions such as minimum shot yield and materials requirements pinning us in from every direction, we end up with an optimized solution space. As we sharpen the pencil more with these tools, that optimized solution box gets smaller until eventually we’ve converged on a point design,” says IFE lead systems engineer Justin Galbraith. Galbraith and his team’s point design is called the Laser Driven Fusion Integration Research and Science Test Facility, or LD-FIRST, a proof-of-concept physics demonstration facility for IFE. “That point design, we anticipate, will serve as the foundation for a future public-private partnership that would facilitate building and realizing a physical facility to focus the IFE community in pursuit of fusion power on the grid,” says Galbraith.

Livermore is leading the charge in IFE, helping the United States develop a technological roadmap, growing and coordinating science and technology efforts within the Laboratory, and fostering partnerships across the fusion industry, academia, and government.

Ma chaired DOE’s “Basic Research Needs for IFE” workshop and report in 2022 and co-chairs the subcommittee providing recommendations on the nation’s fusion activities through DOE’s Fusion Energy Sciences Advisory Committee. She and her team travel often to Washington, D.C., working with DOE and legislators to expand fusion energy research and advocacy in the nation. Livermore also leads a “Collaboratory” with other DOE national laboratories to connect research project leads and facilitate public-private partnerships. The Collaboratory has hosted multiple events with industry, and the Laboratory has partnered with three private companies that aim to design pilot IFE plants.

Meanwhile, Galbraith and other IFE leaders have served as technical advisors for engineering design teams at Texas A&M University and given them IFE-relevant problems to solve, including advanced chamber and blanket design. Galbraith is working with Nelson to develop the IFE plant design portion of a high-energy-density science summer school program, which Nelson is leading in 2025 at the University of California at San Diego, and they have developed an IFE curriculum that has been deployed at six universities starting in spring 2025. “We’re hoping we can get a group of students really excited about fusion and start to build up the next generation of engineers and scientists that will make fusion a reality,” says Galbraith. The team has also led IFE strategic planning exercises at the Laboratory, and Lawrence Livermore will stand up a new fusion institute, the Livermore Institute for Fusion Technology (LIFT), a research and development center that will coordinate and centralize institutional fusion energy research.

Harnessing IFE will be a massive undertaking, but Livermore’s broad and deep expertise, facilities, and capabilities put the Laboratory in a unique position to lead and play an impactful role. “If we can set it up correctly, IFE will be a big piece of the Laboratory’s long-term vision,” says Ma. “IFE plays off of our history and all of our strengths, and it is critical for long-term national security.”

Science-Watching: Why Do Batteries Sometimes Catch Fire and Explode?

[from Berkeley Lab News, by Theresa Duque]

Key Takeaways
  • Scientists have gained new insight into why thermal runaway, while rare, could cause a resting battery to overheat and catch fire.
  • In order to better understand how a resting battery might undergo thermal runaway after fast charging, scientists are using a technique called “operando X-ray microtomography” to measure changes in the state of charge at the particle level inside a lithium-ion battery after it’s been charged.
  • Their work shows for the first time that it is possible to directly measure current inside a resting battery even when the external current measurement is zero.
  • Much more work is needed before the findings can be used to develop improved safety protocols.

How likely is an electric vehicle battery to self-combust and explode? The chances of that happening are actually pretty slim: Some analysts say that gasoline vehicles are nearly 30 times more likely to catch fire than electric vehicles. But recent news of EVs catching fire while parked has left many consumers – and researchers – scratching their heads over how these rare events could possibly happen.

Researchers have long known that high electric currents can lead to “thermal runaway” – a chain reaction that can cause a battery to overheat, catch fire, and explode. But without a reliable method to measure currents inside a resting battery, it has not been clear why some batteries go into thermal runaway, even when an EV is parked.

Now, by using an imaging technique called “operando X-ray microtomography,” scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have shown that the presence of large local currents inside batteries at rest after fast charging could be one of the causes behind thermal runaway. Their findings were reported in the journal ACS Nano.

“We are the first to capture real-time 3D images that measure changes in the state of charge at the particle level inside a lithium-ion battery after it’s been charged,” said Nitash P. Balsara, the senior author on the study. Balsara is a faculty senior scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley professor of chemical and biomolecular engineering.

“What’s exciting about this work is that Nitash Balsara’s group isn’t just looking at images – they’re using the images to determine how batteries work and change in a time-dependent way. This study is a culmination of many years of work,” said co-author Dilworth Y. Parkinson, staff scientist and deputy for photon science operations at Berkeley Lab’s Advanced Light Source (ALS).

The team is also the first to measure ionic currents at the particle level inside the battery electrode.

3D microtomography experiments at the Advanced Light Source enabled researchers to pinpoint which particles generated current densities as high as 25 milliamps per centimeter squared inside a resting battery after fast charging. In comparison, the current density required to charge the test battery in 10 minutes was 18 milliamps per centimeter squared. (Credit: Nitash Balsara and Alec S. Ho/Berkeley Lab. Courtesy of ACS Nano)
Measuring a battery’s internal currents

In a lithium-ion battery, the anode is mostly made of graphite. When a healthy battery is charged slowly, lithium ions weave themselves between the layers of graphite sheets in the electrode. In contrast, when the battery is charged rapidly, the lithium ions tend to deposit on the surface of the graphite particles in the form of lithium metal.

“What happens after fast charging when the battery is at rest is a little mysterious,” Balsara said. But the method used for the new study revealed important clues.

Experiments led by first author Alec S. Ho at the ALS show that when graphite is “fully lithiated” or fully charged, it expands a tiny bit, about a 10% change in volume – and that current in the battery at the particle level could be determined by tracking the local lithiation in the electrode. (Ho recently completed his Ph.D. in the Balsara group at UC Berkeley.)

A conventional voltmeter would tell you that when a battery is turned off, and disconnected from both the charging station and the electric motor, the overall current in the battery is zero.

But in the new study, the research team found that after charging the battery in 10 minutes, the local currents in a battery at rest (or currents inside the battery at the particle level) were surprisingly large. Parkinson’s 3D microtomography instrument at the ALS enabled the researchers to pinpoint which particles inside the battery were the “outliers” generating alarming current densities as high as 25 milliamps per centimeter squared. In comparison, the current density required to charge the battery in 10 minutes was 18 milliamps per centimeter squared.
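A simple comparison puts those current densities in perspective. Treating the decaying internal current as constant over the roughly 20-minute relaxation period is a deliberate simplification, so the result is an upper bound:

```python
# Compare the charge moved by the external fast charge with what the
# internal "outlier" currents could shuttle around after charging stops.
# Holding the decaying internal current constant over the full relaxation
# window is a simplification for illustration, giving an upper bound.
charge_current = 18e-3     # A/cm², external current for a 10-minute charge
charge_time_s = 10 * 60

internal_current = 25e-3   # A/cm², peak local current at rest (article)
relax_time_s = 20 * 60     # internal currents subsided in ~20 minutes

external_charge = charge_current * charge_time_s         # C/cm²
internal_charge_bound = internal_current * relax_time_s  # C/cm², upper bound

print(f"Charge delivered during fast charge: {external_charge:.1f} C/cm^2")
print(f"Upper bound on charge redistributed at rest: {internal_charge_bound:.1f} C/cm^2")
```

The point is not the exact numbers but the comparison: local currents inside a "resting" battery can exceed the already-aggressive charging current even while an external meter reads zero.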

The researchers also learned that the measured internal currents decreased substantially in about 20 minutes. Much more work is needed before their approach can be used to develop improved safety protocols.

Researchers from Argonne National Laboratory also contributed to the work.

The Advanced Light Source is a DOE Office of Science user facility at Berkeley Lab.

The work was supported by the Department of Energy’s Office of Science and Office of Energy Efficiency and Renewable Energy. Additional funding was provided by the National Science Foundation.

New Ultrathin Capacitor Could Enable Energy-Efficient Microchips

Scientists turn century-old material into a thin film for next-gen memory and logic devices

[from Berkeley Lab, by Rachel Berkowitz]

Electron microscope images show the precise atom-by-atom structure of a barium titanate (BaTiO3) thin film sandwiched between layers of strontium ruthenate (SrRuO3) metal to make a tiny capacitor. (Credit: Lane Martin/Berkeley Lab)

The silicon-based computer chips that power our modern devices require vast amounts of energy to operate. Despite ever-improving computing efficiency, information technology is projected to consume around 25% of all primary energy produced by 2030. Researchers in the microelectronics and materials sciences communities are seeking ways to sustainably manage the global need for computing power.

The holy grail for reducing this digital demand is to develop microelectronics that operate at much lower voltages, which would require less energy; doing so is a primary goal of efforts to move beyond today’s state-of-the-art CMOS (complementary metal-oxide semiconductor) devices.

Non-silicon materials with enticing properties for memory and logic devices exist, but their common bulk form still requires large voltages to manipulate, making them incompatible with modern electronics. Designing thin-film alternatives that not only perform well at low operating voltages but can also be packed into microelectronic devices remains a challenge.

Now, a team of researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley has identified one energy-efficient route—by synthesizing a thin-layer version of a well-known material whose properties are exactly what’s needed for next-generation devices.

First discovered more than 80 years ago, barium titanate (BaTiO3) found use in various capacitors for electronic circuits, ultrasonic generators, transducers, and even sonar.

Crystals of the material respond quickly to a small electric field, flipping the orientation of the charged atoms that make up the material in a way that is reversible yet persists even after the applied field is removed. This provides a way to switch between the proverbial “0” and “1” states in logic and memory storage devices—but doing so still requires voltages larger than 1,000 millivolts (mV).

Seeking to harness these properties for use in microchips, the Berkeley Lab-led team developed a pathway for creating films of BaTiO3 just 25 nanometers thin—less than a thousandth of a human hair’s width—whose orientation of charged atoms, or polarization, switches as quickly and efficiently as in the bulk version.

“We’ve known about BaTiO3 for the better part of a century and we’ve known how to make thin films of this material for over 40 years. But until now, nobody could make a film that could get close to the structure or performance that could be achieved in bulk,” said Lane Martin, a faculty scientist in the Materials Sciences Division (MSD) at Berkeley Lab and professor of materials science and engineering at UC Berkeley who led the work.

Historically, synthesis attempts have resulted in films that contain higher concentrations of “defects”—points where the structure differs from an idealized version of the material—as compared to bulk versions. Such a high concentration of defects negatively impacts the performance of thin films. Martin and colleagues developed an approach to growing the films that limits those defects. The findings were published in the journal Nature Materials.

To understand what it takes to produce the best, low-defect BaTiO3 thin films, the researchers turned to a process called pulsed-laser deposition. Firing a powerful beam of ultraviolet laser light onto a ceramic target of BaTiO3 transforms the material into a plasma, which then deposits atoms from the target onto a surface to grow the film. “It’s a versatile tool where we can tweak a lot of knobs in the film’s growth and see which are most important for controlling the properties,” said Martin.

Martin and his colleagues showed that their method could achieve precise control over the deposited film’s structure, chemistry, thickness, and interfaces with metal electrodes. By chopping each deposited sample in half and looking at its structure atom by atom using tools at the National Center for Electron Microscopy at Berkeley Lab’s Molecular Foundry, the researchers revealed a version that precisely mimicked an extremely thin slice of the bulk.

“It’s fun to think that we can take these classic materials that we thought we knew everything about, and flip them on their head with new approaches to making and characterizing them,” said Martin.

Finally, by placing a film of BaTiO3 in between two metal layers, Martin and his team created tiny capacitors—the electronic components that rapidly store and release energy in a circuit. Applying voltages of 100 mV or less and measuring the current that emerges showed that the film’s polarization switched within two billionths of a second and could potentially be faster—competitive with what it takes for today’s computers to access memory or perform calculations.

The work follows the bigger goal of creating materials with small switching voltages, and examining how interfaces with the metal components necessary for devices impact such materials. “This is a good early victory in our pursuit of low-power electronics that go beyond what is possible with silicon-based electronics today,” said Martin.

“Unlike our new devices, the capacitors used in chips today don’t hold their data unless you keep applying a voltage,” said Martin. And current technologies generally work at 500 to 600 mV, while a thin film version could work at 50 to 100 mV or less. Together, these measurements demonstrate a successful optimization of voltage and polarization robustness—which tend to be a trade-off, especially in thin materials.
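Because the energy dissipated in switching a capacitor scales as CV²/2, those voltage figures imply a large per-switch saving. Assuming equal capacitance for both technologies is a simplification for illustration:

```python
# Capacitor switching energy scales as E = (1/2) C V^2, so at fixed
# capacitance the energy ratio between two operating voltages is (V1/V2)^2.
# Equal capacitance is an assumption made for illustration only.
v_current_mv = 600    # upper end of today's operating voltages (article)
v_thin_film_mv = 100  # demonstrated switching voltage for the BaTiO3 film

energy_ratio = (v_current_mv / v_thin_film_mv) ** 2
print(f"Per-switch energy reduction: ~{energy_ratio:.0f}x")  # ~36x
```

A quadratic dependence is why even a modest voltage reduction pays off so strongly, and why Martin’s group targets operation at 50 to 100 mV or less.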

Next, the team plans to shrink the material down even thinner to make it compatible with real devices in computers and study how it behaves at those tiny dimensions. At the same time, they will work with collaborators at companies such as Intel Corp. to test the feasibility in first-generation electronic devices. “If you could make each logic operation in a computer a million times more efficient, think how much energy you save. That’s why we’re doing this,” said Martin.

This research was supported by the U.S. Department of Energy (DOE) Office of Science. The Molecular Foundry is a DOE Office of Science user facility at Berkeley Lab.