Science-Watching: From Ignition to Energy

[from Science & Technology Review July/August 2025 Research Highlights, by Noah Pflueger-Peters]

Achieving ignition at the National Ignition Facility (NIF) proved that harnessing the power of the Sun in a laboratory may be possible. The Sun’s extreme temperatures and pressures cause light elements to fuse together to create heavier ones, releasing enormous energy and sustaining conditions for more thermonuclear reactions. NIF replicates these conditions with inertial confinement fusion, in which lasers compress and heat a target capsule filled with deuterium and tritium (DT), “heavy” isotopes of hydrogen that contain extra neutrons. When the isotopes fuse, they create helium and a neutron, and the lost mass is converted into energy that can be harnessed for power production, the approach known as inertial fusion energy (IFE).
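The energy released per reaction follows from the lost mass via E = Δmc². As a rough check, a short calculation using standard reference values for the atomic masses (these numbers come from physics reference data, not from the article):

```python
# Energy released by one D + T -> He-4 + n reaction, via E = delta_m * c^2.
# Atomic masses are standard reference values (in atomic mass units, u).
U_TO_MEV = 931.494  # energy equivalent of 1 u of mass, in MeV

m_deuterium = 2.014102  # u
m_tritium   = 3.016049  # u
m_helium4   = 4.002602  # u
m_neutron   = 1.008665  # u

# Mass lost when the reactants fuse into the products
mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_defect * U_TO_MEV

print(f"Mass defect: {mass_defect:.6f} u")
print(f"Energy released per fusion: {energy_mev:.1f} MeV")  # ~17.6 MeV
```

About 17.6 MeV per reaction, most of it carried away by the neutron, which is why power-plant concepts capture the neutrons in a surrounding blanket.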

Nuclear fusion produces significantly more energy than either nuclear fission or burning fossil fuels for equivalent amounts of fuel. Since the input materials for fusion energy are plentiful on Earth, an IFE power plant could produce safe, abundant, power grid-compatible energy without highly radioactive byproducts.

Although significant work remains to harness fusion energy, pursuing the development and deployment of IFE is crucial for the nation’s energy security, enabling the United States to shape implementation worldwide, avoid technological surprises from adversaries, and sustain technical leadership in other energy-intensive technologies such as AI, machine learning (ML), and supercomputing.

IFE research stretches back to the early days of Lawrence Livermore, and today the Laboratory is fostering the overall fusion ecosystem. Livermore’s unique capabilities, expertise, and connections will be critical to laying the technical, logistical, and legal groundwork to make IFE possible. “IFE is a grand scientific and engineering challenge, something that is so incredibly difficult and high-risk and takes enormous expertise,” says Tammy Ma, Livermore’s IFE Institutional Initiative lead. “This challenge makes it the right kind of problem for national laboratories to pursue.”

This artist’s rendering shows the concept for an inertial fusion energy (IFE) power plant design, with a cutaway to show the plant’s target chamber in the center. Livermore researchers are laying the groundwork for private fusion companies to build similar designs. (Illustration by Eric Smith.)

Designing for Viability

NIF is the only facility to date to demonstrate the ignition and burning plasma conditions that are prerequisites for IFE, but it is an experimental facility for stockpile stewardship research, not a power plant. To be commercially viable, producing enough energy to offset costs and meet steady demand (baseload power), IFE plants will need to generate more than 30 times the energy they deliver to the fusion target on every shot while firing 10 or more shots per second, compared with NIF’s rate of one or two shots per day.
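The scale these requirements imply can be sketched with simple arithmetic. The 2-megajoule driver energy below is an assumption (roughly NIF-scale), while the gain and repetition rate are the article’s figures:

```python
# Rough scaling of the gain and rep-rate requirements for a viable IFE plant.
# The driver energy is an assumed, roughly NIF-scale value, not a plant spec.
driver_energy_mj = 2.0   # laser energy delivered to the target per shot (assumed)
target_gain = 30         # minimum viable energy gain cited in the article
rep_rate_hz = 10         # shots per second

# MJ per second is equivalent to MW
fusion_power_mw = driver_energy_mj * target_gain * rep_rate_hz
print(f"Gross fusion power: {fusion_power_mw:.0f} MW thermal")  # 600 MW
```

At these assumed numbers, the plant would generate roughly 600 MW of fusion power before accounting for conversion losses and the electricity recirculated to run the lasers.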

The Laser Inertial Fusion Energy (LIFE) study, conducted between 2008 and 2013, aimed to build directly on technology developed for NIF to achieve IFE and took a systematic approach to this requirement by developing the Integrated Process Model (IPM). (See S&TR, April/May 2009 [archived PDF], pp. 6-15.)

IPM is a technoeconomic model of an IFE power plant with detailed technical and cost breakdowns and interdependencies of key systems and subsystems. “The work done under LIFE was fantastic,” says Ma. “IPM lays out engineering and physics requirements for the entire system to test out different scenarios and see the impact. Now, we not only get to expand on all that but also leverage 15 years of new data from NIF, better codes, and high-performance computing (HPC), as well as new work in AI, ML, advanced manufacturing, diagnostics, and nonproliferation across the Laboratory.”

IPM describes an IFE power plant that requires a solid-state laser driver system to “pump” lasers with optical energy using laser diodes instead of flashlamps as at NIF. The plant will also need to fabricate and fill target capsules onsite and send them into its target chamber at a high enough frequency to produce baseload power. “We will have to repeatedly inject targets into the chamber, so the targets must be able to withstand and survive that process,” explains Ma. “Then, the lasers will track the moving targets, and when one gets to the center of the chamber, they will fire on the centered target, repeating 10 to 20 times per second.”

The facility would convert fusion energy into heat and then electricity via steam turbines, sending most of the electricity to the power grid and recycling the rest to power operations on subsequent shots. Neutrons from the reaction would produce tritium needed for the DT fuel by bombarding lithium isotopes in a “breeding blanket” material lining its target chamber. By closing both the power and fuel cycles, IFE plants are expected to be self-sustaining.

Thanks in part to IFE STARFIRE (IFE Science and Technology Accelerated Research for Fusion Innovation and Reactor Engineering), a Department of Energy (DOE)-funded multi-institutional IFE research and development hub, researchers across the Laboratory are working to meet the new system’s demands. IPM can help identify key challenges, test the viability of new designs, and direct future research. “Many technical models and cost models exist for IFE, but very few, if any, pair systems and cost models together at the same depth as IPM,” says Mackenzie Nelson, a technoeconomic systems analyst in the Computational Engineering Division. “This type of tool offers such an advantage because we can assess design choices from both a technical and economic standpoint and create blueprints for what an IFE plant could look like.”

(left to right) Livermore researchers Bassem El Dasher, Claudio Santiago, and Mackenzie Nelson discuss a 3D model of a proposed IFE power plant design alongside the Integrated Process Model (IPM). IPM has more than 270 potential user inputs that researchers and collaborators can use to assess different IFE design choices to see the technical and cost impact on the entire design.

Operational Demands

NIF’s target capsules are extremely precise and fragile, and a single capsule can take weeks to fabricate, fill, and position. Researchers are trying to reconcile that painstaking process with the estimated demand of more than 800,000 capsules per day produced at less than $0.50 each to achieve IFE plant viability. To do this, they are examining optimal target designs for IFE and exploring advanced manufacturing methods such as microfluidics, volumetric additive manufacturing, and two-photon polymerization. (See S&TR, April/May 2025 [archived PDF], pp. 16-19.) Additional projects involve developing diagnostic instruments that can collect, analyze, and combine data with other diagnostics at the 10 to 20 shot-per-second frequency and use it to improve lasers in real time.
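The daily demand figure follows directly from continuous operation at the target repetition rate; a quick check of the arithmetic:

```python
# Capsule throughput implied by continuous 10-shots-per-second operation,
# consistent with the article's "more than 800,000 capsules per day" estimate.
shots_per_second = 10
seconds_per_day = 86_400
capsules_per_day = shots_per_second * seconds_per_day
cost_per_capsule = 0.50  # dollars, the article's viability target

print(f"Capsules needed per day: {capsules_per_day:,}")  # 864,000
print(f"Daily target budget: ${capsules_per_day * cost_per_capsule:,.0f}")
```

Even at 50 cents apiece, fueling a single plant would mean a target budget of over $400,000 per day, which is why mass-manufacturing methods are central to the research.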

Fusion energy systems such as IFE are also a regulatory challenge, as they generate high-energy neutrons capable of breeding plutonium or uranium-233 and rely on large quantities of tritium. “Pure fusion energy systems do not require fissile material, but there are still ways to misuse these technologies that pose proliferation risk,” says Yana Feldman, the associate program leader for international safeguards. Bad actors may only need small amounts of tritium to make nuclear weapons, and some breeding blanket designs may inadvertently produce traces of plutonium that may be diverted for military purposes.

Nuclear fission reactors are regulated through international agreements and export control rules, and the independent International Atomic Energy Agency (IAEA) verifies that nuclear material and facilities are only being used for peaceful purposes. Neither treaties nor the IAEA address fusion energy, and no consensus has been reached on whether fusion energy systems need an international verification program. Verification methods for safeguarding tritium are also far less developed than for plutonium and uranium and focus more on contamination and transfers than analytical accounting for discrepancies. The precise scale of allowable tritium unaccounted for without posing proliferation risk is also unclear.

Fusion systems can be designed for proliferation resistance, but not having an existing design remains a challenge.

International security analyst Anne-Marie Riitsaar and her colleagues are exploring these complexities and starting conversations with international fusion experts and private industry to raise awareness. Riitsaar also plans to collaborate with the IPM team to map tritium diversion vulnerabilities and identify high-risk points where researchers could incorporate surveillance methods into plant designs to detect and prevent potential misuse. “People sometimes ask me why I’m thinking about fusion energy regulations and proliferation risks at this point, but it’s not too early,” says Riitsaar. “Reaching a multinational consensus on regulating sensitive technologies takes considerable time and effort.”

The National Ignition Facility is an experimental facility and not a power plant, so a commercial IFE plant design has vastly different requirements—many of which are being studied by Livermore researchers and their collaborators.

| | NIF | Viable IFE plant (estimated) |
| --- | --- | --- |
| Repetition rate | One shot per day | 10 to 20 shots per second |
| Energy gain | 4.13 times (as of April 2025) | 30 times (minimum), 50 to 100 times (ideal) |
| How lasers gain energy | Flashlamps | Diode pumping |
| Target fabrication and fuel filling | Fabricated offsite over several weeks and filled manually in 1 to 5 days | Mass-manufactured and filled in a target factory within the facility |
| Target delivery | Positioned manually within the Target Chamber | Shot into the plant’s target chamber approximately 10 to 20 times per second |
| Laser alignment | Computationally in real time, taking up to 8 hours | In real time |
| Power cycle | Open, requiring outside energy sources | Closed, applying reused energy to power laser and ancillary plant operations |
| Fuel cycle (tritium) | Produced offsite | Bred onsite |

The Laser Driven Fusion Integration Research and Science Test Facility (LD-FIRST) is a proposed blueprint for a proof-of-concept IFE facility that will test all the key IFE subsystems in an integrated fashion. A public-private partnership will likely be necessary to build the facility and will help the IFE community address the principal risks and technological challenges of building a commercial plant.

Converging on a Solution

The team seeks to make IPM as accurate and comprehensive as possible by meeting with subject matter experts across the Laboratory to incorporate the latest research. “We’re trying to evolve the model so it has the same level of high detail across every single functional area to tell us where we can focus research and help us find optimized solutions that we could propose to industry,” says Nelson.

Computer scientist Claudio Santiago and his colleagues also modernized IPM by porting its framework from Microsoft Excel to Python in December 2024, making it compatible with AI, ML, design optimization, and HPC to further inform designs. “Once we think about all the forcing functions such as minimum shot yield and materials requirements pinning us in from every direction, we end up with an optimized solution space. As we sharpen the pencil more with these tools, that optimized solution box gets smaller until eventually we’ve converged on a point design,” says IFE lead systems engineer Justin Galbraith. Galbraith and his team’s point design is called the Laser Driven Fusion Integration Research and Science Test Facility, or LD-FIRST, a proof-of-concept physics demonstration facility for IFE. “That point design, we anticipate, will serve as the foundation for a future public-private partnership that would facilitate building and realizing a physical facility to focus the IFE community in pursuit of fusion power on the grid,” says Galbraith.

Livermore is leading the charge in IFE, helping the United States develop a technological roadmap, growing and coordinating science and technology efforts within the Laboratory, and fostering partnerships across the fusion industry, academia, and government.

Ma chaired DOE’s “Basic Research Needs for IFE” workshop and report in 2022 and co-chairs the subcommittee providing recommendations on the nation’s fusion activities through DOE’s Fusion Energy Sciences Advisory Committee. She and her team travel often to Washington, D.C., working with DOE and legislators to expand fusion energy research and advocacy in the nation. Livermore also leads a “Collaboratory” with other DOE national laboratories to connect research project leads and facilitate public-private partnerships. The Collaboratory has hosted multiple events with industry, and the Laboratory has partnered with three private companies that aim to design pilot IFE plants.

Meanwhile, Galbraith and other IFE leaders have served as technical advisors for engineering design teams at Texas A&M University and given them IFE-relevant problems to solve, including advanced chamber and blanket design. Galbraith is working with Nelson to develop the IFE plant design portion of a high-energy-density science summer school program, which Nelson is leading in 2025 at the University of California at San Diego, and they have developed an IFE curriculum that has been deployed at six universities starting in spring 2025. “We’re hoping we can get a group of students really excited about fusion and start to build up the next generation of engineers and scientists who will make fusion a reality,” says Galbraith. The team has led IFE strategic planning exercises at the Laboratory, and Lawrence Livermore will stand up a new fusion institute—named “LIFT,” for Livermore Institute for Fusion Technology—a research and development center that will coordinate and centralize institutional fusion energy research.

Harnessing IFE will be a massive undertaking, but Livermore’s broad and deep expertise, facilities, and capabilities put the Laboratory in a unique position to lead and play an impactful role. “If we can set it up correctly, IFE will be a big piece of the Laboratory’s long-term vision,” says Ma. “IFE plays off of our history and all of our strengths, and it is critical for long-term national security.”

Economics-Watching: Estimating the Effects of Monetary Policy: An Ongoing Evolution

New monetary policy tools have lengthened the interval over which policy news is transmitted and processed.

[from the Federal Reserve Bank of Kansas City, 2 October 2025]

by Karlye Dilts Stedman, Amaze Lusompa & Phillip An

Disentangling how the economy responds to a monetary policy decision from its response to macroeconomic conditions at the time of the decision is an ongoing challenge. One popular method researchers use to measure the effect of a monetary policy announcement—high-frequency identification—analyzes the reaction of fast-moving financial variables immediately following the policy announcement, using a time window long enough for markets to respond but not so long that the response is contaminated by other information.

Since high-frequency identification was introduced in the early 2000s, policymakers have introduced tools such as forward guidance and large-scale asset purchases. Karlye Dilts Stedman, Amaze Lusompa, and Phillip An examine how the evolution of monetary policy has changed high-frequency identification and assess whether additional changes might be necessary to better capture the effect of modern monetary policy surprises. Although researchers have continually updated the asset mix used in high-frequency identification over time, they have not updated the measurement window. Because the timing of monetary policy communication has changed significantly in recent years, refining the length of this measurement window may be necessary going forward.
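In its simplest form, a high-frequency surprise is the change in a futures-implied policy rate across a narrow window bracketing the announcement. A minimal sketch with invented prices (fed funds futures are quoted as 100 minus the implied rate; the prices and window below are illustrative, not from the article):

```python
# Minimal high-frequency identification sketch: the policy "surprise" is the
# jump in the futures-implied rate over a tight window around the announcement.
# Prices and window lengths are made up for illustration.
def implied_rate(futures_price: float) -> float:
    """Fed funds futures quote convention: price = 100 - implied rate (%)."""
    return 100.0 - futures_price

price_before = 95.250  # quote shortly before the announcement (hypothetical)
price_after  = 95.150  # quote shortly after the announcement (hypothetical)

# A price drop means the implied rate rose: a hawkish (tightening) surprise
surprise_bp = (implied_rate(price_after) - implied_rate(price_before)) * 100
print(f"Policy surprise: {surprise_bp:+.0f} basis points")
```

The window must be long enough for markets to price in the news but short enough to exclude unrelated information, which is exactly the margin the authors argue may need updating as policy communication has lengthened.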

Read the full article [archived PDF].

World-Watching: How Nature Paints With Color

[from Quanta Magazine]

by Yasemin Saplakoglu

When objects interact with light in particular ways — by absorbing or reflecting it — we see in color. A sunset’s orange hues and the ocean’s deep blues inspire artists and dazzle observant admirers. But colors are more than pretty decor; they also play a critical role in life. They attract mates, pollinators and seed-spreaders, and signal danger. And the same color can mean different things to different organisms: A red bird might attract a mate, while a red berry might warn off a hungry human.

For color to communicate meaning, systems to produce it had to evolve by developing pigments to absorb certain wavelengths of light or structures to reflect them. Organisms also had to produce the machinery to perceive color. When you look out into a forest, you might see lush greenery dappled with yellowish sunlight and pink blooms. But this forest scene would look different if you were a bird or a fly. Color-perception machinery — which includes photoreceptors in our eyes that recognize and distinguish light — can differ between species. While humans can’t see ultraviolet light, some birds can. While dogs can’t see red or green, many humans can. Even within species there’s some variation: People who are colorblind have trouble distinguishing some combinations, such as green and red. And many organisms can’t see color at all.

Within one planet, many colorful worlds exist. But how did colors evolve in the first place?

What’s New and Noteworthy

To pinpoint when different kinds of color signals may have evolved, researchers recently reviewed many papers, covering hundreds of millions of years of evolutionary history, to bring together information from the fossil record and phylogenetic trees (diagrams that depict evolutionary relationships between species). Their analysis across the tree of life suggested that color signals likely evolved much later than color vision. It’s likely that color vision evolved twice, developing independently in arthropods and fish, between 400 million and 500 million years ago. Then plants started using bright colors to attract pollinators and animals to disperse their seeds, and then animals started using colors to warn off predators and eventually to attract mates.

One of the most common colors that we see in nature is green. However, this isn’t a color signal: It’s a result of photosynthesis. Most plants absorb almost all the photons in the red and blue light spectra but only 90% of the green photons. The remaining 10% are reflected, making the plants appear green to our eyes. But why did they evolve to do this? According to one model, reflecting some green light makes photosynthetic machinery more stable, suggesting that evolution sometimes favors stability over efficiency.

The majority of colors in nature are produced by pigments that absorb or reflect different wavelengths of light. While many plants can produce these pigments on their own, most animals can’t; instead, they acquire pigments from their diet. Some pigments, though, are hard to acquire, so some animals instead rely on nanoscale structures that scatter light in particular ways to create “structural colors.” For example, the shell of the blue-rayed limpet has layers of transparent crystals, each of which diffracts and reflects a sliver of the light spectrum. When the layers grow to a precise thickness, around 100 nanometers, the wavelengths in each layer interact with one another, canceling each other out — except for blue. The result is the appearance of a bright blue limpet shell.

Digitizing Heritage: Exploring the Transformation of Culture to Data

[from India in Transition by the Center for the Advanced Study of India at the University of Pennsylvania, 1 September 2025]

by Krupa Rajangam & Deborah Sutton

“Oh that. We just took some undergraduate history students on board as interns. They provided the content and it was done.”

The co-founder of a digital heritage initiative promoting interactive user interfaces offered these opening remarks. Speaking at a Delhi-based museum, he had been asked about the information provided to users as they moved their hands across an interactive board, revealing images and narratives relating to the Indian freedom movement. His response clarified that the physical and digital components of such installations—for example, the 3D-modeling software and hardware, the scanning equipment and its resolution, and the user interface—were more carefully designed and calibrated than the content they provided.

Contemporary cultural heritage (CH) is rife with digital innovation. The COVID pandemic accelerated this transformation as archivists and curators worked to develop content that would reach remote, locked-down audiences. Within significant limits, digital platforms can democratize and facilitate access to materials previously inaccessible. Instead of being physically siloed, digitized material—as data components and not just content on culture—can be reproduced, combined, and circulated infinitely to achieve a reach previously considered impossible. Accessibility and malleability are among the great boons of digital formats. But here, we consider the information economy of CH practice as it exists—and not its extraordinary and often hypothetical potential—in two overlapping realms of digitized CH: for-profit business enterprises and academic side-hustles related to more mainstream academic research.

In the former, questions of what is shared are often less significant than the appeal of the format. In the latter, innovation is often the result of short-term projects that languish, abandoned after project completion, and rarely find audiences. Our research builds on our individual experiences and the findings of a scoping exercise examining a number of India-based heritage projects conducted in 2021-22. It suggests the need for more careful consideration of the implications of transforming CH materials into forms of data; the change impacts everything from how we understand “originality” to the reliance on for-profit services to deliver heritage material to the public.

As digitized representations of CH and access to such formats become more widespread, are we, as CH practitioners and academics, giving enough thought to how digital technologies are reshaping the nature of CH and its audience? Beyond questions of wider reach, are we sufficiently acknowledging how these changes challenge a continued focus on originality and notions of academy as primary controllers of access to knowledge and its validity, both in research and practice?

Digitizing for Dissemination

In 2019, one of us—Deborah Sutton—developed a software platform, Safarnama, including an app and authored experiences around Delhi’s CH. The project subsequently extended to Karachi. Generating “original” content, such as audio-visual clips and old photos, to be hosted on the app platform, was key to its attractiveness and usefulness, but permissions proved tricky. Some collaborators who were initially keen to contribute content quietly withdrew, likely due to the unfamiliar format and unknown reach. The app format also raised other questions. Would incorporating content from non-digital but published scholarship require authorial permission or only acknowledgement?

In 2020, Krupa Rajangam held a sponsored incubation at the NSRCEL, a business incubator located at the Indian Institute of Management-Bangalore, to develop a web interface that would host geolocated stories of marginalized histories by drawing on both historical facts and lived experiences. Corporate mentors remained skeptical of her ability to source “original” content on an ongoing basis, i.e., content that was both authenticated and validated. They repeatedly advised her to focus on the format, user experience, and appeal for “mass markets” so her prototype would find audiences. Both projects equally raised questions over who would consume the content and what constitutes the public or audience.

In a scoping exercise undertaken for the Arts and Humanities Research Council (AHRC), UK, in 2021-22, we explored a number of India-based heritage projects funded by the AHRC in partnership with the Newton Fund and Indian Council for Historical Research, since 2015 (figure 1). We were particularly interested in the digital components, which all projects included, even if only a website.

Our exploratory surveys firmly established the divergence in interpreting both CH and digital technologies, which was not surprising. Some projects defined and treated CH as fixed pre-existing material, to be interpreted and presented to audiences through digital technologies. Others re-framed digital formats of CH as components of data, assembling, manipulating, and representing extant archival and other materials. The rest generated digitized CH, effectively altering its nature. Typically, such projects dealt with more ephemeral or less conventional forms of CH.

Fundamental Transformations

Notions of originality remain central to art, architectural and art historical training, and CH practice. Digitization transforms the access and retrieval value of “original” material in physical archives, such as old maps and letters, much lauded in traditional “analog” scholarship, to use value as data. Once the end-user (audience) accesses this data (whether historical facts or stories), it becomes nothing more than bytes occupying valuable space, to be deleted once consumed rather than stored, making it easy to overlook or disregard the source and its context.

For example, in the Safarnama project, the app contained carefully collected and authenticated narratives on “partition memories” in Delhi and Karachi. However, the bite-sized media format meant that users would only explore content once, as snippets. This realization led the team to develop the software and incorporate the ability to download content, which at least meant that users could collect, organize and store (archive) the assembled media.

Digitization also takes away the materiality of the archive, making it more ephemeral. The non-digital materials through, and into, which we render CH can (in endless combinations and cycles) be lost, forgotten, sold, recovered, collected, displayed, and stored. Digital files have analogous capacities, but they depend on varied and dynamic software ecologies for their existence and for sustained end-user access. Digital files created within one software architecture can be incompatible with, and therefore rendered obsolete by, another. The ethos of software development is constant change.

In another paper, we examined questions of quantity, quality, and reusability of data related to the digitization of building-crafts knowledge alongside the CARE and FAIR principles of data management. The principles were proposed and adopted by an international consortium of scholars and industry; CARE focuses on the responsible collection, use, and dissemination of data, especially data relating to vulnerable people, while FAIR focuses on sustainable data management.

As an example, one AHRC project experimented with methods to capture detailed 3D images of heritage sites and structures in dynamic crowded environments. They used one set of methods to capture the interiors and another for the exteriors, hoping to merge both together and develop holistic imagery for audiences. This proved impossible at first due to issues of software compatibility. Once that was partially resolved, the new software couldn’t handle the sheer volume of data captured—and it was unclear where and for how long such volumes of data would be stored.

New realms of intellectual property remain fuzzy. While the content on digital platforms is governed by licensing and proprietary legal frameworks, it is often hosted on open platforms, through web repositories such as GitHub. Prima facie, such openness appears to challenge the proprietorial nature of archives and other repositories as keepers of knowledge. However, it raises a host of questions about how to maintain a critical understanding of archives.

Digitization may, and should, transform access, but should it obliterate the regimes through which the materials were generated and organized, and what is included or excluded? For example, a local coordinator of one project that engaged with artists commented that digital technologies are typically used to document technical skills as forms of intangible heritage and develop artist encyclopedias, saying that “they are hardly used to interrogate the reality that many ‘traditional’ artists hail from marginalized castes.” Similarly, the local coordinator of another project that engaged with communities living in and around a protected heritage site commented on how digital technologies often end up being used to create a record of heritage structures without any reference to their day-to-day setting.

Any and all digital enterprise in CH, we argue, needs to integrate the ambition to use digital methods not just to present but also to counter and interrogate the material, its creation, and its purpose. Digital platforms and web- and app-based software are now able to manipulate and re-situate information in unprecedented ways. The novelty of such formats can displace original, provocative, and timely considerations of the material. Often, we are so taken by the visual and structural attributes of these formats that we accept them at face value and lose sight of the tone and content of heritage as a curated message about the past and the present.

Alongside this, digital augmentations and iterations of CH, including storage, have significant financial and infrastructural implications. The creation and maintenance of digital platforms requires either developing “in-house” digital specialization or, more commonly, reliance on private, for-profit platforms. Paying for external provision introduces complexities. Funders, including the AHRC, struggle to devise guidance or policy in relation to software licensing. However, a persistent challenge to projects, and partnerships between academic and non-academic partners, is devising data and software strategies that subsist beyond the life of the funded research project. Often, the adverse effects of the paucity of longer-term planning around IP issues, sustainability, and data archiving fall disproportionately on the non-academic stakeholder.

While digitization foregrounds the potential and promise of complete openness and equity, that promise may be lost in practice. Or digitization may merely mark the displacement of one set of ethics with another. There is a need for more careful consideration of the implications, complexities, and risks of taking CH materials out of boxes and off shelves and transforming them into data files, which are, in turn, dependent on digital platforms to provide end-user access. However, the question remains whether heritage-related disciplines are adequately prepared and willing to confront such new ways of working, which have begun to dislodge some of the privileges extant in current forms of research and practice.

Krupa Rajangam is nearing the end of her tenure as a Fulbright Fellow at the Historic Preservation Department, Weitzman School of Design, University of Pennsylvania. Her permanent designation is Founder-Director, Saythu…linking people and heritage, a professional conservation collective based in Bangalore, India.

Deborah Sutton is a Professor in Modern South Asian History at Lancaster University.

World-Watching: Human Enhanced Moisture Transport Exacerbated the Extreme Precipitation in Northern China

[from the American Geophysical Union, Geophysical Research Letters, 11 August 2025]

Abstract

Although previous studies suggest anthropogenic forcing may influence extreme precipitation probability, few have specifically investigated the human influence on moisture transport. Here, we leverage the 2023 record-breaking summer precipitation in Northern China (NC) to address this gap. Combining station observations with Coupled Model Intercomparison Project Phase 6 (CMIP6) model outputs, we demonstrate that the 2023-like heavy precipitation event was exacerbated by anthropogenically enhanced moisture transport. External forcing increased the probability of extreme southeasterly moisture transport by approximately 1.3 (90% confidence interval: 1.0–1.8) times. Moreover, the total anthropogenic forcing likely increased the probability of similar precipitation events at least 1.7 times (1.0–3.1), with both greenhouse gases and anthropogenic aerosols contributing positively. As greenhouse gas concentrations rise and anthropogenic warming intensifies, the frequency of similar extreme precipitation events in NC is projected to increase further.
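Attribution statements like “1.7 times more likely” are typically probability ratios: the frequency of threshold-exceeding events in all-forcings simulations divided by the frequency in counterfactual natural-forcings-only simulations. A toy sketch with invented ensemble counts (not the paper’s data):

```python
# Toy probability-ratio ("risk ratio") attribution calculation of the kind
# summarized in the abstract. All counts below are invented for illustration.
exceed_factual = 85         # ensemble years exceeding the threshold, all forcings
total_factual = 5000
exceed_counterfactual = 50  # ensemble years exceeding it, natural forcings only
total_counterfactual = 5000

p1 = exceed_factual / total_factual              # probability with human influence
p0 = exceed_counterfactual / total_counterfactual  # probability without it
probability_ratio = p1 / p0
print(f"Probability ratio: {probability_ratio:.1f}")  # 1.7x more likely
```

In practice, confidence intervals like the paper’s 1.0–3.1 range come from resampling these counts (e.g., bootstrapping) across models and ensemble members.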

Plain Language Summary

The extent of human influence on moisture transport and consequent heavy precipitation remains a critical research question. While anthropogenic contributions to precipitation extremes are increasingly recognized, studies specifically addressing human-induced changes in moisture transport remain limited. The record-breaking summer precipitation in Northern China (NC) during 2023 provides a salient case study. This extreme event was fueled by substantial moisture transport from the southeast into NC, driven by Typhoons Doksuri and Khanun. Attribution analyses indicate that both greenhouse gas and anthropogenic aerosol emissions likely increased the probability of similar heavy precipitation events and associated moisture transport patterns. Such events are projected to become more frequent with continued anthropogenic warming. These findings demonstrate that human activities significantly influence moisture transport pathways and consequently modulate extreme precipitation occurrence in NC, deepening our understanding of the physical mechanisms underlying these events.

Read the full paper [archived PDF]

Economics-Watching: How Green Innovation Can Stimulate Economies and Curb Emissions

[from IMF Blog, by Zeina Hasna, Florence Jaumotte & Samuel Pienknagura]

Coordinated climate policies can spur innovation in low-carbon technologies and help them spread to emerging markets and developing economies

Making low-carbon technologies cheaper and more widely available is crucial to reducing harmful emissions.

We have seen decades of progress in green innovation for mitigation and adaptation: from electric cars and clean hydrogen to renewable energy and battery storage.

More recently, though, momentum in green innovation has slowed, and promising technologies aren’t spreading fast enough to lower-income countries, where they could be especially helpful in curbing emissions. Green innovation peaked at 10 percent of total patent filings in 2010 and has seen a mild decline since. The slowdown reflects various factors, including hydraulic fracturing, which has lowered the price of oil, and the technological maturity of some early technologies such as renewables, which slows the pace of innovation.

The slower momentum is concerning because, as we show in a new staff discussion note, green innovation is not only good for containing climate change, but for stimulating economic growth too. As the world confronts one of the weakest five-year growth outlooks in more than three decades, those dual benefits are particularly appealing. They ease concerns about the costs of pursuing more ambitious climate plans. And when countries act jointly on climate, we can speed up low-carbon innovation and its transfer to emerging markets and developing economies.

IMF research [archived PDF] shows that doubling green patent filings can boost gross domestic product by 1.7 percent after five years compared with a baseline scenario. And that’s under our most conservative estimate—other estimates show up to four times the effect.

The economic benefits of green innovation mostly flow through increased investment in the first few years. Over time, further growth benefits come from cheaper energy and production processes that are more energy efficient. Most importantly, they come from less global warming and less frequent (and less costly) climate disasters.

Green innovation is associated with more innovation overall, not just a substitution of green technologies for other kinds. This may be because green technologies often require complementary innovation. More innovation usually means more economic growth.

A key question is how countries can better foster green innovation and its deployment. We highlight how domestic and global climate policies spur green innovation. For example, a big increase in the number of climate policies tends to boost green patent filings, our preferred proxy for green innovation, by 10 percent within five years.

Some of the most effective policies to stimulate green innovation include emissions-trading schemes that cap emissions; feed-in tariffs, which guarantee a minimum price for renewable energy producers; and government spending, such as subsidies for research and development. What’s more, global climate policies result in much larger increases in green innovation than domestic initiatives alone. International pacts like the Kyoto Protocol and the Paris Agreement amplify the impact of domestic policies on green innovation.

One reason policy synchronization has such a prominent impact on domestic green innovation is the so-called market size effect: there is more incentive to develop low-carbon technologies if innovators can expect to sell into a much larger potential market, that is, into countries that have adopted similar climate policies.

Another is that climate policies in other countries generate green innovations and knowledge that can be used in the domestic economy. This is known as technology diffusion. Finally, synchronized policy action and international climate commitments create more certainty around domestic climate policies, as they boost people’s confidence in governments’ commitment to addressing climate change.

Climate policies even help spread the use of low-carbon technologies in countries that are not sources of innovation, through trade and foreign-direct investment. Countries that introduce climate policies see more imports of low-carbon technologies and higher green FDI inflows, especially in emerging markets and developing economies.

Risks of protectionism

Lowering tariffs on low-carbon technologies can further enhance trade and FDI in green technologies. This is especially important for middle- and low-income countries, where such tariffs remain high. On the flip side, more protectionist measures would impede the broader spread of low-carbon technologies.

In addition, and given evidence of economies of scale, protectionism—with ultimately smaller potential markets—could stifle incentives for green innovation and lead to duplication of efforts across countries.

The risks of protectionism are exacerbated when climate policies, such as subsidies, do not abide by international rules. For example, local content requirements, whereby only locally produced green goods benefit from subsidies, undermine trust in multilateral trade rules and could result in retaliatory measures.

Beyond embracing a rules-based approach to climate policies, the advanced economies, where most green innovation occurs, have an important responsibility: sharing the technology so that emerging and developing economies can get there faster. Such direct technology transfers hold the promise of a double dividend for emerging markets and developing economies—reducing emissions and yielding economic benefits.

—This blog reflects research by Zeina Hasna, Florence Jaumotte, Jaden Kim, Samuel Pienknagura and Gregor Schwerhoff.

Science-Watching: Why Do Batteries Sometimes Catch Fire and Explode?

[from Berkeley Lab News, by Theresa Duque]

Key Takeaways
  • Scientists have gained new insight into why a resting battery might, in rare cases, undergo thermal runaway, overheat, and catch fire.
  • In order to better understand how a resting battery might undergo thermal runaway after fast charging, scientists are using a technique called “operando X-ray microtomography” to measure changes in the state of charge at the particle level inside a lithium-ion battery after it’s been charged.
  • Their work shows for the first time that it is possible to directly measure current inside a resting battery even when the external current measurement is zero.
  • Much more work is needed before the findings can be used to develop improved safety protocols.

How likely is an electric vehicle battery to self-combust and explode? The chances of that happening are actually pretty slim: some analysts say that gasoline vehicles are nearly 30 times more likely to catch fire than electric vehicles. But recent news of EVs catching fire while parked has left many consumers – and researchers – scratching their heads over how these rare events could possibly happen.

Researchers have long known that high electric currents can lead to “thermal runaway” – a chain reaction that can cause a battery to overheat, catch fire, and explode. But without a reliable method to measure currents inside a resting battery, it has not been clear why some batteries go into thermal runaway, even when an EV is parked.

Now, by using an imaging technique called “operando X-ray microtomography,” scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley have shown that the presence of large local currents inside batteries at rest after fast charging could be one of the causes behind thermal runaway. Their findings were reported in the journal ACS Nano.

“We are the first to capture real-time 3D images that measure changes in the state of charge at the particle level inside a lithium-ion battery after it’s been charged,” said Nitash P. Balsara, the senior author on the study. Balsara is a faculty senior scientist in Berkeley Lab’s Materials Sciences Division and a UC Berkeley professor of chemical and biomolecular engineering.

“What’s exciting about this work is that Nitash Balsara’s group isn’t just looking at images – they’re using the images to determine how batteries work and change in a time-dependent way. This study is a culmination of many years of work,” said co-author Dilworth Y. Parkinson, staff scientist and deputy for photon science operations at Berkeley Lab’s Advanced Light Source (ALS).

The team is also the first to measure ionic currents at the particle level inside the battery electrode.

3D microtomography experiments at the Advanced Light Source enabled researchers to pinpoint which particles generated current densities as high as 25 milliamps per centimeter squared inside a resting battery after fast charging. In comparison, the current density required to charge the test battery in 10 minutes was 18 milliamps per centimeter squared. (Credit: Nitash Balsara and Alec S. Ho/Berkeley Lab. Courtesy of ACS Nano)

Measuring a battery’s internal currents

In a lithium-ion battery, the anode component of the electrode is mostly made of graphite. When a healthy battery is charged slowly, lithium ions weave themselves between the layers of graphite sheets in the electrode. In contrast, when the battery is charged rapidly, the lithium ions have a tendency to deposit on the surface of the graphite particles in the form of lithium metal.

“What happens after fast charging when the battery is at rest is a little mysterious,” Balsara said. But the method used for the new study revealed important clues.

Experiments led by first author Alec S. Ho at the ALS show that when graphite is “fully lithiated” or fully charged, it expands a tiny bit, about a 10% change in volume – and that current in the battery at the particle level could be determined by tracking the local lithiation in the electrode. (Ho recently completed his Ph.D. in the Balsara group at UC Berkeley.)

A conventional voltmeter would tell you that when a battery is turned off, and disconnected from both the charging station and the electric motor, the overall current in the battery is zero.

But in the new study, the research team found that after charging the battery in 10 minutes, the local currents in a battery at rest (or currents inside the battery at the particle level) were surprisingly large. Parkinson’s 3D microtomography instrument at the ALS enabled the researchers to pinpoint which particles inside the battery were the “outliers” generating alarming current densities as high as 25 milliamps per centimeter squared. In comparison, the current density required to charge the battery in 10 minutes was 18 milliamps per centimeter squared.
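As a rough illustration of how the charging figure relates to charge time: the average current density is the cell's areal capacity divided by the charge time. The areal capacity below is an assumed value, not stated in the article, chosen so that a 10-minute charge works out to the quoted 18 milliamps per centimeter squared.

```python
# Illustrative arithmetic, not from the paper: relating charge time to
# average areal current density. The areal capacity is an assumed value
# (typical for a fast-charge cell) chosen so the numbers line up with
# the figures quoted in the article.
areal_capacity_mah_per_cm2 = 3.0  # assumed cell areal capacity, mAh/cm^2
charge_time_minutes = 10          # the fast charge described in the study

# Average current density needed to deliver the full capacity in that time
charge_current = areal_capacity_mah_per_cm2 * 60 / charge_time_minutes
print(charge_current)  # 18.0 mA/cm^2, matching the quoted figure

# Peak local current density observed at rest, from the study
local_rest_current = 25.0  # mA/cm^2
print(round(local_rest_current / charge_current, 2))  # 1.39
```

On this arithmetic, the hottest particles in the resting battery were carrying roughly 1.4 times the current density of the fast charge itself, which is why the researchers describe the internal currents as surprisingly large.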

The researchers also learned that the measured internal currents decreased substantially in about 20 minutes. Much more work is needed before their approach can be used to develop improved safety protocols.

Researchers from Argonne National Laboratory also contributed to the work.

The Advanced Light Source is a DOE Office of Science user facility at Berkeley Lab.

The work was supported by the Department of Energy’s Office of Science and Office of Energy Efficiency and Renewable Energy. Additional funding was provided by the National Science Foundation.

Movies and Chemistry: Keeping the Enchantment of Education

Several movies offer an “enchanting” back door or window into chemistry, letting you “beat” the tediousness of regular education and come to the field and its topics through film:

I.

The Man in the White Suit is a 1951 British comedy classic with Alec Guinness as a genius research chemist. He fiddles with his flasks and polymer and textile chemistry experiments until he invents a fabric that shows no wear and tear “forever.” This would seem like a great boon to humanity in its clothing needs, but the chemist (“Sidney Stratton”) finds that both labor and management violently reject his discovery because it threatens jobs and profits. Textile or fabric polymer chemistry is at the heart of the plot.

Cry Terror! is a taut 1958 crime thriller with James Mason and Rod Steiger. The plot involves a terrorist’s threat to blow up a domestic airliner with a hidden RDX cache (RDX being a successor to TNT) unless a demanded payment is made.

RDX was used by both sides in World War II; the U.S. produced about 15,000 long tons per month and Germany about 7,000. RDX had the major advantages of possessing greater explosive force than TNT, the standard of World War I, and of requiring no additional raw materials for its manufacture.

Semtex is a general-purpose plastic explosive containing RDX and PETN. It is used in commercial blasting, demolition, and in certain military applications.

A Semtex bomb was used in the 1988 bombing of Pan Am Flight 103 (also known as the Lockerbie bombing). A belt laden with 700 g (1.5 lb) of RDX explosives, tucked under the dress of the assassin, was used in the assassination of former Indian prime minister Rajiv Gandhi in 1991.

The 1993 Bombay bombings used RDX placed in several vehicle bombs. RDX was the main component in the 2006 Mumbai train bombings and the 2008 Jaipur bombings, and it is also believed to be the explosive used in the 2010 Moscow Metro bombings.

Traces of RDX were found on pieces of wreckage from the 1999 Russian apartment bombings and the 2004 Russian aircraft bombings. Further reports on the bombs used in the 1999 apartment bombings indicated that while RDX was not part of the main charge, each bomb contained plastic explosive as a booster charge.

Ahmed Ressam, the al-Qaeda Millennium Bomber, used a small quantity of RDX as one of the components in the bomb that he prepared to detonate in Los Angeles International Airport on New Year’s Eve 1999-2000; the bomb could have produced a blast forty times greater than that of a devastating car bomb.

In July 2012, the Kenyan government arrested two Iranian nationals and charged them with illegal possession of 15 kilograms (33 pounds) of RDX. According to the Kenyan Police, the Iranians planned to use the RDX for “attacks on Israeli, U.S., UK and Saudi Arabian targets.”

RDX was used in the assassination of Lebanese Prime Minister Rafic Hariri on February 14, 2005.

In the 2019 Pulwama attack in India, 250 kg of high-grade RDX was used by Jaish-e-Mohammed. The attack resulted in the deaths of 44 Central Reserve Police Force personnel as well as the attacker.

Semtex was developed and manufactured in Czechoslovakia, originally under the name B 1 and, from 1964, under the “Semtex” designation: first as SEMTEX 1A, from 1967 as SEMTEX H, and from 1987 as SEMTEX 10. Originally developed for Czechoslovak military use and export, Semtex eventually became popular with paramilitary groups, rebels, and terrorists because, prior to 2000, it was extremely difficult to detect, as in the case of Pan Am Flight 103.

The Russian apartment bombings were a series of explosions that hit four apartment blocks in the Russian cities of Buynaksk, Moscow and Volgodonsk in September 1999, killing more than 300, injuring more than 1,000, and spreading fear across the country. The bombings, together with the Invasion of Dagestan, triggered the Second Chechen War. The handling of the crisis by Vladimir Putin, who was prime minister at the time, boosted his popularity greatly and helped him attain the presidency within a few months.

The blasts hit Buynaksk on 4 September and Moscow on 9 and 13 September. On 13 September, Russian Duma speaker Gennadiy Seleznyov announced in the Duma that he had received a report of another bombing in the city of Volgodonsk. A bombing did indeed happen in Volgodonsk, but only three days later, on 16 September. Chechen militants were blamed for the bombings but denied responsibility, as did Chechen president Aslan Maskhadov.

A suspicious device resembling those used in the bombings was found and defused in an apartment block in the Russian city of Ryazan on 22 September. On 23 September, Vladimir Putin praised the vigilance of the inhabitants of Ryazan and ordered the air bombing of Grozny, which marked the beginning of the Second Chechen War. Three FSB agents who had planted the devices at Ryazan were arrested by the local police, with the devices containing a sugar-like substance resembling RDX.

II.

The movie Khartoum (1966) has General Charles Gordon traveling to Sudan in 1884 to quell the “mad mullah,” the Mahdi (the Osama bin Laden of his day).

At the train station where General Gordon starts his trip, there’s a railway ad sign promoting “Wright’s Coal Tar Soap.”

This small detail gives us a glimpse of the rise of the modern chemical industry.

III.

Think of “Sherlock Holmes” in terms of all the movies and TV series or the original stories and books:

Holmes has to explain to Watson how he survived the assassination attempt by Moriarty, “the Napoleon of Crime,” who threw him off the Reichenbach Falls. Holmes explains that he faked Moriarty out, clung to a bush or something, and was (obviously) not killed.

Holmes tells Watson what he did during the three years or so he spent traveling and studying before returning to civilization:

“I then passed through Persia, looking in at Mecca, and paid a short but interesting visit to the Khalifa at Khartoum, the results of which I communicated to the Foreign Office. Returning to France, I spent some months in a research into the coal-tar derivatives, which I conducted in a laboratory at Montpellier, in the south of France.”

The context implies the year 1894.

There is clear evidence that Mr. Holmes was deeply involved in the research of coal-tar derivatives as early as 1889 when the events of the Copper Beeches matter were transpiring.

We are told that on an evening in 1889, Mr. Holmes was seated in 221B Baker Street at the deal table loaded with retorts and test tubes. He was settling down to one of those all-night chemical researches in which he frequently indulged.

The research work was interrupted by a message of distress from Violet Hunter. Watson found that there was a train the next morning, and Holmes tells Watson:

“That will do very nicely. Then perhaps I had better postpone my analysis of the acetones as we may need to be at our best in the morning.”

It is clear that Holmes was engaged in coal-tar research long before his visit to Montpellier in the south of France.

The quotation from the Copper Beeches story refers to acetones, not to coal-tar derivatives.

“In the fractional distillation of coal-tar, the distillate separates into five distinct groups or layers, depending upon the stage of the process and the amount of heat applied. Category one of the five includes benzene, toluene, xylenes and cumenes.

Acetones [dimethyl ketone, CH3COCH3] may be derived from the oxidation of cumene. And cumene [isopropylbenzene, C6H5CH(CH3)2] is derived by distillation from the coal-tar naphtha fractions.”

Cumenes are derived from coal-tar, and acetones are derived from cumenes. Thus, a study of the acetones is, necessarily, research into coal-tar derivatives.
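The coal-tar-to-cumene-to-acetone chain rests on the two molecular formulas quoted above. As a quick sanity check, an illustration not part of the original text, their implied molar masses can be computed from standard atomic weights:

```python
# A quick sanity check of the molecular formulas quoted above, using
# standard atomic weights (an illustration, not part of the original text).
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(counts):
    """Molar mass in g/mol from a dict of element counts."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in counts.items())

# Acetone (dimethyl ketone), CH3COCH3 = C3H6O
acetone = {"C": 3, "H": 6, "O": 1}
# Cumene (isopropylbenzene), C6H5CH(CH3)2 = C9H12
cumene = {"C": 9, "H": 12}

print(round(molar_mass(acetone), 2))  # 58.08 g/mol
print(round(molar_mass(cumene), 2))   # ~120.19 g/mol
```

Both values agree with the standard figures for acetone and cumene, confirming that the condensed formulas in the quotation describe the molecules the passage names.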

The rise of chemical engineering and organic chemistry are at the heart of the Sherlock Holmes stories.

Thus we can “climb” into chemistry via these books and movies and keep a feeling of enchantment as a kind of educational “shoehorn.”

Technology-Watching: Quantum Microchips Connected in Record-Breaking World First

[from UK Research and Innovation]

Researchers in the UK have successfully transferred data between quantum microchips for the first time.

This helps overcome a key obstacle to building a commercial quantum computer.

The milestone, achieved by a team from the University of Sussex and Brighton-based quantum computer developer Universal Quantum, allows chips to be linked together like jigsaw pieces.

On track to useful quantum computers

It means that many more qubits, the basic calculating unit, can be joined together than is possible on a single microchip. This will make a more powerful quantum computer possible.

The project, which has been backed by the Engineering and Physical Sciences Research Council (EPSRC), has also broken the world record for quantum connection speed and accuracy.

The scaling of qubit numbers from the current level of around 100 qubits to nearer 1 million is central to creating a quantum processor that can make useful calculations.

The significant achievement is based on a technical blueprint for creating a large-scale quantum computer, which was first published in 2017 with funding from EPSRC.

The blueprint included the ground-breaking concept, successfully demonstrated in this research, of linking quantum computing modules with electrical fields.

Unlocking UK potential

The UK is a leader in the global race to develop useful quantum computers, which represent a step-change in computing power.

Their development may help solve pressing challenges from drug discovery to energy-efficient fertilizer production. But their impact is expected to sweep across the economy, transforming most sectors and all our lives.

Potential to scale up

Winfried Hensinger, Professor of Quantum Technologies at the University of Sussex and Chief Scientist and co-founder at Universal Quantum said:

As quantum computers grow, we will eventually be constrained by the size of the microchip, which limits the number of quantum bits such a chip can accommodate.

In demonstrating that we can connect 2 quantum computing chips, a bit like a jigsaw puzzle, and, crucially, that it works so well, we unlock the potential to scale up by connecting hundreds or even thousands of quantum computing microchips.

Speed and precision

The researchers were successful in transporting the qubits using electrical fields with a 99.999993% success rate and a connection rate of 2424 transfers per second. Both numbers are world records.
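A back-of-the-envelope reading of those two records, not taken from the press release: at the quoted fidelity and rate, a single transfer error would occur roughly once every hour and a half of continuous operation.

```python
# Back-of-the-envelope check, not from the press release: how often would
# a single transfer error occur at the quoted fidelity and rate?
success_rate = 0.99999993     # quoted per-transfer success probability
transfers_per_second = 2424   # quoted connection rate

error_prob = 1 - success_rate                  # ~7e-8 per transfer
errors_per_second = error_prob * transfers_per_second
mean_seconds_between_errors = 1 / errors_per_second
print(f"~one error every {mean_seconds_between_errors / 60:.0f} minutes")
```

This kind of figure is why both numbers matter together: a high success rate at a very low transfer rate, or a fast link with frequent errors, would each undermine scaling, whereas the combination keeps inter-chip transport from becoming the dominant error source.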

Dr. Kedar Pandya, Director of Cross-Council Programmes at EPSRC, said:

This significant milestone is evidence of how EPSRC-funded science is seeding the commercial future for quantum computing in the UK.

The potential for complex technologies, like quantum, to transform our lives and create economic value widely relies on visionary early-stage investment in academic research.

We deliver that crucial building block and are delighted that the University of Sussex and its spin-out company, Universal Quantum, are demonstrating the strength it supports.

Institute of Physics award winner

Universal Quantum has been awarded €67 million from the German Aerospace Center to build 2 quantum computers.

The University of Sussex spin-out was also recently named as one of the 2022 Institute of Physics award winners in the business start-up category.

China Monitor: How Immigration Is Shaping Chinese Society

(from MERICS China Monitor)

To the surprise of many, China has emerged as a destination country for immigration: as its population ages and its workforce shrinks, China needs more immigrants.

The background of immigrants to China is becoming more diverse. While the number of high-earning expatriates from developed countries has peaked, China is now also attracting more students than ever from all over the world, including many from less developed countries. Low-skilled labor and migration for marriage are also on the rise. The main areas that attract foreigners are the large urban centers along the coast (Guangzhou, Shanghai, Beijing) and borderland regions in the South, Northeast and Northwest, but smaller numbers are also making their way to smaller cities across China.

In the new MERICS China Monitor “How immigration is shaping Chinese society” [archived PDF], MERICS Director Frank N. Pieke and colleagues from other European universities and institutions discuss the most salient issues confronting the Chinese government and foreign residents themselves.

According to their analysis, China has become considerably less accommodating for many foreigners over the last ten years, particularly with regard to border control, public security, visa categories, and work and residence permits. China’s immigration policy is still driven by narrow concerns of regulation, institutionalization and control. It remains predicated on attracting high-quality professionals, researchers, entrepreneurs and investors. Long-term challenges, like the emerging demographic transition, remain to be addressed.

The authors detect a worrying trend towards intolerance to ethnic and racial difference, fed by increasing nationalism and ethnic chauvinism. They argue that the Chinese government, civil society, foreign diplomatic missions, employers of foreigners and international organizations present in China should take a clear stance against racism and discrimination. China’s immigration policy needs to include the integration of foreigners into society and provide clear and predictable paths to acquiring permanent residence.

[Archived PDF]