Science-Watching: From Ignition to Energy

[from Science & Technology Review July/August 2025 Research Highlights, by Noah Pflueger-Peters]

Achieving ignition at the National Ignition Facility (NIF) proved that harnessing the power of the Sun in a laboratory may be possible. The Sun’s extreme temperatures and pressures cause light elements to fuse together to create heavier ones, releasing enormous energy and sustaining conditions for more thermonuclear reactions. NIF replicates these conditions with inertial confinement fusion, in which lasers compress and heat a target capsule filled with deuterium and tritium (DT), “heavy” isotopes of hydrogen that contain extra neutrons. When the isotopes fuse, they create helium and a neutron, and the lost mass is converted into energy; capturing that energy for power production is the goal of inertial fusion energy (IFE).
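
For reference, the bookkeeping behind that last sentence can be written out. The reaction and its energy split are standard nuclear physics:

\[
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV}), \qquad E = \Delta m\,c^{2} \approx 17.6\ \mathrm{MeV}.
\]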

Nuclear fusion produces significantly more energy than either nuclear fission or burning fossil fuels for equivalent amounts of fuel. Since the input materials for fusion energy are plentiful on Earth, an IFE power plant could produce safe, abundant, power grid-compatible energy without highly radioactive byproducts.

Although significant work remains to harness fusion energy, pursuing the development and deployment of IFE is crucial for the nation’s energy security, enabling the United States to shape implementation worldwide, avoid technological surprises from adversaries, and maintain technical leadership in other energy-intensive technologies such as AI, machine learning (ML), and supercomputing.

IFE research stretches back to the early days of Lawrence Livermore, and today the Laboratory is fostering the overall fusion ecosystem. Livermore’s unique capabilities, expertise, and connections will be critical to laying the technical, logistical, and legal groundwork to make IFE possible. “IFE is a grand scientific and engineering challenge, something that is so incredibly difficult and high-risk and takes enormous expertise,” says Tammy Ma, Livermore’s IFE Institutional Initiative lead. “This challenge makes it the right kind of problem for national laboratories to pursue.”

This artist’s rendering shows the concept for an inertial fusion energy (IFE) power plant design, with a cutaway to show the plant’s target chamber in the center. Livermore researchers are laying the groundwork for private fusion companies to build similar designs. (Illustration by Eric Smith.)

Designing for Viability

NIF is the only facility to date to demonstrate the ignition and burning plasma conditions that are prerequisites for IFE, but it is an experimental facility for stockpile stewardship research, not a power plant. To be commercially viable, producing enough energy to offset costs and meet demand (baseload power), IFE plants will need to generate more than 30 times the energy they deliver to the fusion target on every shot while firing 10 or more shots per second, compared with NIF’s rate of one or two shots per day.
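
A rough back-of-envelope conveys the scale these requirements imply. This is a sketch under stated assumptions, not a plant specification; in particular, the per-shot driver energy below is an assumed NIF-scale figure:

```python
# Illustrative IFE plant power arithmetic (assumed values, not a design).
driver_energy_mj = 2.0  # assumed laser energy delivered to the target per shot (NIF-scale)
target_gain = 30        # the minimum viable gain cited above
rep_rate_hz = 10        # shots per second

# Energy per shot times shots per second gives power: MJ/s is MW.
fusion_power_mw = driver_energy_mj * target_gain * rep_rate_hz
print(fusion_power_mw)  # 600 MW of raw fusion power, before conversion losses
```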

The Laser Inertial Fusion Energy (LIFE) study, conducted between 2008 and 2013, aimed to build directly on technology developed for NIF to achieve IFE and took a systematic approach to this requirement by developing the Integrated Process Model (IPM). (See S&TR, April/May 2009 [archived PDF], pp. 6-15.)

IPM is a technoeconomic model of an IFE power plant with detailed technical and cost breakdowns and interdependencies of key systems and subsystems. “The work done under LIFE was fantastic,” says Ma. “IPM lays out engineering and physics requirements for the entire system to test out different scenarios and see the impact. Now, we not only get to expand on all that but also leverage 15 years of new data from NIF, better codes, and high-performance computing (HPC), as well as new work in AI, ML, advanced manufacturing, diagnostics, and nonproliferation across the Laboratory.”

IPM describes an IFE power plant whose solid-state laser driver system “pumps” its lasers with optical energy from laser diodes rather than the flashlamps used at NIF. The plant will also need to fabricate and fill target capsules onsite and send them into its target chamber at a high enough frequency to produce baseload power. “We will have to repeatedly inject targets into the chamber, so the targets must be able to withstand and survive that process,” explains Ma. “Then, the lasers will track the moving targets, and when one gets to the center of the chamber, they would fire on the centered target, repeating 10 to 20 times per second.”

The facility would convert fusion energy into heat and then electricity via steam turbines, sending most of the electricity to the power grid and recycling the rest to power operations on subsequent shots. Neutrons from the reaction would produce tritium needed for the DT fuel by bombarding lithium isotopes in a “breeding blanket” material lining its target chamber. By closing both the power and fuel cycles, IFE plants are expected to be self-sustaining.
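
The breeding step rests on a standard nuclear reaction: a fusion neutron striking lithium-6 in the blanket yields helium plus a fresh tritium nucleus to replace the one just burned,

\[
n + {}^{6}\mathrm{Li} \;\longrightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV}.
\]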

Thanks in part to IFE STARFIRE (IFE Science and Technology Accelerated Research for Fusion Innovation and Reactor Engineering), a Department of Energy (DOE)-funded multi-institutional IFE research and development hub, researchers across the Laboratory are working to meet the new system’s demands. IPM can help identify key challenges, test the viability of new designs, and direct future research. “Many technical models and cost models exist for IFE, but very few, if any, pair systems and cost models together at the same depth as IPM,” says Mackenzie Nelson, a technoeconomic systems analyst in the Computational Engineering Division. “This type of tool offers such an advantage because we can assess design choices from both a technical and economic standpoint and create blueprints for what an IFE plant could look like.”
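
IPM itself is not public, but the idea of pairing a systems model with a cost model at the same depth can be sketched in a few lines. Everything below is a hypothetical illustration: the class, parameter names, values, and formulas are assumptions chosen for exposition, not IPM’s internals.

```python
# Hypothetical technoeconomic sketch in the spirit of IPM (illustrative only).
from dataclasses import dataclass

@dataclass
class IFEPlantDesign:
    driver_energy_mj: float = 2.0       # laser energy on target per shot (assumed)
    target_gain: float = 50.0           # fusion energy / driver energy (assumed)
    rep_rate_hz: float = 10.0           # shots per second (assumed)
    thermal_efficiency: float = 0.40    # heat-to-electricity conversion (assumed)
    wall_plug_efficiency: float = 0.18  # grid power to laser light, diode-pumped (assumed)
    capital_cost_usd: float = 5.0e9     # overnight plant cost (assumed)
    target_cost_usd: float = 0.50       # per-target cost goal cited in the article

    def net_electric_mw(self) -> float:
        """Gross electric output minus the power recirculated to run the laser."""
        gross = (self.driver_energy_mj * self.target_gain
                 * self.rep_rate_hz * self.thermal_efficiency)
        recirculated = self.driver_energy_mj * self.rep_rate_hz / self.wall_plug_efficiency
        return gross - recirculated  # MJ/s == MW

    def cost_usd_per_mwh(self, capacity_factor: float = 0.85,
                         fixed_charge_rate: float = 0.08) -> float:
        """Crude levelized cost: annualized capital plus target costs per MWh sold."""
        hours = 8760 * capacity_factor
        annual_mwh = self.net_electric_mw() * hours
        annual_targets = self.rep_rate_hz * 3600 * hours
        annual_cost = (self.capital_cost_usd * fixed_charge_rate
                       + annual_targets * self.target_cost_usd)
        return annual_cost / annual_mwh

design = IFEPlantDesign()
print(f"{design.net_electric_mw():.0f} MW net, ${design.cost_usd_per_mwh():.0f}/MWh")
```

Even this toy version shows the coupling a tool like IPM captures: raising the repetition rate boosts output but also multiplies the annual target bill, so technical and economic choices cannot be assessed separately.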

(left to right) Livermore researchers Bassem El Dasher, Claudio Santiago, and Mackenzie Nelson discuss a 3D model of a proposed IFE power plant design alongside the Integrated Process Model (IPM). IPM has more than 270 potential user inputs that researchers and collaborators can use to assess different IFE design choices to see the technical and cost impact on the entire design.

Operational Demands

NIF’s target capsules are extremely precise and fragile and can take weeks to fabricate, fill, and position. Researchers are trying to reconcile that reality with the estimated demand of more than 800,000 capsules per day, produced at less than $0.50 each, needed for IFE plant viability (at 10 shots per second, a plant would consume 10 × 86,400, or 864,000, targets per day). To do this, they are examining optimal target designs for IFE and exploring advanced manufacturing methods such as microfluidics, volumetric additive manufacturing, and two-photon polymerization. (See S&TR, April/May 2025 [archived PDF], pp. 16-19.) Additional projects involve developing diagnostic instruments that can collect, analyze, and combine data with other diagnostics at the 10-to-20-shot-per-second frequency and use the results to improve lasers in real time.

Fusion energy systems such as IFE are also a regulatory challenge, as they generate high-energy neutrons capable of breeding plutonium or uranium-233 and rely on large quantities of tritium. “Pure fusion energy systems do not require fissile material, but there are still ways to misuse these technologies that pose proliferation risk,” says Yana Feldman, the associate program leader for international safeguards. Bad actors may only need small amounts of tritium to make nuclear weapons, and some breeding blanket designs may inadvertently produce traces of plutonium that may be diverted for military purposes.

Nuclear fission reactors are regulated through international agreements and export control rules, and the independent International Atomic Energy Agency (IAEA) verifies that nuclear material and facilities are used only for peaceful purposes. Neither treaties nor the IAEA address fusion energy, and no consensus has been reached on whether fusion energy systems need an international verification program. Verification methods for safeguarding tritium are also far less developed than those for plutonium and uranium and focus more on contamination and transfers than on analytical accounting for discrepancies. How much tritium could go unaccounted for without posing a proliferation risk is also unclear.

Fusion systems can be designed for proliferation resistance, but the absence of an established plant design to evaluate remains a challenge.

International security analyst Anne-Marie Riitsaar and her colleagues are exploring these complexities and starting conversations with international fusion experts and private industry to raise awareness. Riitsaar also plans to collaborate with the IPM team to map tritium diversion vulnerabilities and identify high-risk points where researchers could incorporate surveillance methods into plant designs to detect and prevent potential misuse. “People sometimes ask me why I’m thinking about fusion energy regulations and proliferation risks at this point, but it’s not too early,” says Riitsaar. “Reaching a multinational consensus on regulating sensitive technologies takes considerable time and effort.”

The National Ignition Facility is an experimental facility and not a power plant, so a commercial IFE plant design has vastly different requirements—many of which are being studied by Livermore researchers and their collaborators.

Parameter | NIF | Viable IFE plant (estimated)
Repetition rate | One shot per day | 10 to 20 shots per second
Energy gain | 4.13 times (as of April 2025) | 30 times (minimum), 50 to 100 times (ideal)
How lasers gain energy | Flashlamps | Diode pumping
Target fabrication and fuel filling | Fabricated offsite over several weeks and filled manually in 1 to 5 days | Mass-manufactured and filled in a target factory within the facility
Target delivery | Positioned manually within the Target Chamber | Shot into the plant’s target chamber approximately 10 to 20 times per second
Laser alignment | Performed computationally, taking up to 8 hours | Performed in real time
Power cycle | Open, requiring outside energy sources | Closed, applying reused energy to power laser and ancillary plant operations
Fuel cycle (tritium) | Produced offsite | Bred onsite

The Laser Driven Fusion Integration Research and Science Test Facility (LD-FIRST) is a proposed blueprint for a proof-of-concept IFE facility that would test all the key IFE subsystems in an integrated fashion. A public-private partnership will likely be necessary to build the facility and would help the IFE community retire the main risks and technological challenges of building a commercial plant.

Converging on a Solution

The team seeks to make IPM as accurate and comprehensive as possible by meeting with subject matter experts across the Laboratory to incorporate the latest research. “We’re trying to evolve the model so it has the same level of high detail across every single functional area to tell us where we can focus research and help us find optimized solutions that we could propose to industry,” says Nelson.

Computer scientist Claudio Santiago and his colleagues also modernized IPM by porting its framework from Microsoft Excel to Python in December 2024, making it compatible with AI, ML, design optimization, and HPC to further inform designs. “Once we think about all the forcing functions such as minimum shot yield and materials requirements pinning us in from every direction, we end up with an optimized solution space. As we sharpen the pencil more with these tools, that optimized solution box gets smaller until eventually we’ve converged on a point design,” says IFE lead systems engineer Justin Galbraith. Galbraith and his team’s point design is called the Laser Driven Fusion Integration Research and Science Test Facility, or LD-FIRST, a proof-of-concept physics demonstration facility for IFE. “That point design, we anticipate, will serve as the foundation for a future public-private partnership that would facilitate building and realizing a physical facility to focus the IFE community in pursuit of fusion power on the grid,” says Galbraith.

Livermore is leading the charge in IFE, helping the United States develop a technological roadmap, growing and coordinating science and technology efforts within the Laboratory, and fostering partnerships across the fusion industry, academia, and government.

Ma chaired DOE’s “Basic Research Needs for IFE” workshop and report in 2022 and co-chairs the subcommittee providing recommendations on the nation’s fusion activities through DOE’s Fusion Energy Sciences Advisory Committee. She and her team travel often to Washington, D.C., working with DOE and legislators to expand fusion energy research and advocacy in the nation. Livermore also leads a “Collaboratory” with other DOE national laboratories to connect research project leads and facilitate public-private partnerships. The Collaboratory has hosted multiple events with industry, and the Laboratory has partnered with three private companies that aim to design pilot IFE plants.

Meanwhile, Galbraith and other IFE leaders have served as technical advisors for engineering design teams at Texas A&M University and given them IFE-relevant problems to solve, including advanced chamber and blanket design. Galbraith is working with Nelson to develop the IFE plant design portion of a high-energy-density science summer school program, which Nelson is leading in 2025 at the University of California at San Diego, and they have developed an IFE curriculum that has been deployed at six universities starting in spring 2025. “We’re hoping we can get a group of students really excited about fusion and start to build up the next generation of engineers and scientists that will make fusion a reality,” says Galbraith. The team has led IFE strategic planning exercises at the Laboratory, and Lawrence Livermore will stand up a new fusion institute, LIFT (the Livermore Institute for Fusion Technology), a research and development center that will coordinate and centralize institutional fusion energy research.

Harnessing IFE will be a massive undertaking, but Livermore’s broad and deep expertise, facilities, and capabilities put the Laboratory in a unique position to lead and play an impactful role. “If we can set it up correctly, IFE will be a big piece of the Laboratory’s long-term vision,” says Ma. “IFE plays off of our history and all of our strengths, and it is critical for long-term national security.”

Economics-Watching: Where Could Reshoring Manufacturers Find Workers?

[from the Federal Reserve Bank of Cleveland, 9 October 2025]

by Stephan D. Whitaker, Senior Policy Economist

The United States has lost millions of manufacturing jobs in recent decades, but a variety of policies have been enacted to incentivize the creation of manufacturing jobs in America. This District Data Brief analyzes where manufacturers might find US workers to fill these roles.

Introduction

The announcement of new tariffs this year has reignited the discussion of whether the United States can expand its manufacturing employment by millions of workers. Reversing decades of manufacturing job losses is one explicit goal of the new higher tariffs. This District Data Brief presents measures of employment and demographics as context around the current and potential employment in US manufacturing. Raising manufacturing employment by 4 to 6 million workers would constitute a large increase relative to current levels. However, an increase of this scale would not be large relative to the global growth of manufacturing employment in recent decades, the current US labor force size, or the number of US adults not engaged in high-paying work.

With different priorities and approaches, policymakers have spent much of the past decade addressing issues related to the loss or absence of manufacturing in the United States. For example, America’s dependence on imported manufactured goods was highlighted at the beginning of the COVID-19 pandemic as supply chain disruptions led to shortages of medical equipment, pharmaceuticals, microchips, and other products. The CHIPS and Science Act and the Inflation Reduction Act featured tax breaks and subsidies to expand US manufacturing capacity for semiconductors, electric vehicles, and renewable energy equipment.

At the same time, economists have been documenting the loss of work opportunities and earning power by workers without college degrees as manufacturing employment has declined. In 2013, David Autor, David Dorn, and Gordon Hanson published a study that estimated the labor market impacts resulting from increased trade competition following China’s entrance into the World Trade Organization, an effect often referred to as the “China shock.” Dozens of studies have since used the regional variation in job and income losses caused by the China shock to measure the adverse impacts of job displacement on family structures, crime, health, and other social indicators. Some supporters of industrial subsidies and higher tariffs have expressed the hope that these dynamics can be put into reverse.

Read the full article [archived PDF].

Economics-Watching: Estimating the Effects of Monetary Policy: An Ongoing Evolution

New monetary policy tools have lengthened the interval over which policy news is transmitted and processed.

[from the Federal Reserve Bank of Kansas City, 2 October 2025]

by Karlye Dilts Stedman, Amaze Lusompa & Phillip An

Disentangling how the economy responds to a monetary policy decision from its response to macroeconomic conditions at the time of the decision is an ongoing challenge. One popular method researchers use to measure the effect of a monetary policy announcement—high-frequency identification—analyzes the reaction of fast-moving financial variables immediately following the policy announcement, using a time window long enough for markets to respond but not so long that the response is contaminated by other information.
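
As a purely illustrative sketch of the approach (the function, window lengths, and quotes below are assumptions, not the authors’ code), a surprise can be read off fed funds futures prices bracketing the announcement, in the spirit of Kuttner-style surprise measures:

```python
# Hypothetical sketch of a high-frequency monetary policy surprise.
import pandas as pd

def policy_surprise(quotes: pd.Series, announcement: pd.Timestamp,
                    pre: pd.Timedelta = pd.Timedelta("10min"),
                    post: pd.Timedelta = pd.Timedelta("20min")) -> float:
    """Change in the futures-implied rate across a tight announcement window.

    quotes: fed funds futures prices indexed by timestamp (price = 100 - implied rate).
    Positive values mean policy came in tighter than markets expected. (Kuttner-style
    measures also rescale the current-month contract by days left in the month.)
    """
    price_before = quotes.loc[:announcement - pre].iloc[-1]
    price_after = quotes.loc[announcement + post:].iloc[0]
    return (100 - price_after) - (100 - price_before)

# Made-up quotes around a hypothetical 2:00 p.m. announcement:
idx = pd.to_datetime(["2025-09-17 13:45", "2025-09-17 13:55", "2025-09-17 14:25"])
quotes = pd.Series([95.70, 95.70, 95.95], index=idx)
print(policy_surprise(quotes, pd.Timestamp("2025-09-17 14:00")))  # -0.25: easier than expected
```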

Since high-frequency identification was introduced in the early 2000s, policymakers have adopted tools such as forward guidance and large-scale asset purchases. Karlye Dilts Stedman, Amaze Lusompa, and Phillip An examine how the evolution of monetary policy has changed high-frequency identification and assess whether additional changes might be necessary to better capture the effect of modern monetary policy surprises. Although researchers have continually updated the asset mix used in high-frequency identification over time, they have not updated the measurement window. Because the timing of monetary policy communication has changed significantly in recent years, refining the length of this measurement window may be necessary going forward.

Read the full article [archived PDF].

Economics-Watching: Tracking the Economy in Real‑Time Through Regional Business Surveys

[from the Federal Reserve Bank of New York’s The Teller Window, 23 September 2025]

by Richard Deitz and Kartik Athreya

Federal Reserve policymakers need current information about economic conditions to make well-informed monetary policy decisions. But hard data, such as GDP and the unemployment rate, is released with a significant lag, making it difficult to get a precise, real-time read on the economy, especially during times of rapid change.

To help fill the gap, the New York Fed conducts two monthly regional business surveys: the Empire State Manufacturing Survey of manufacturers in New York state and the Business Leaders Survey, which covers service sector firms in New York state, northern New Jersey, and Fairfield County, Conn. These surveys provide timely soft data, available well before hard data is released.

Hard data is based on precise quantitative measurements, such as sales figures or the specific prices firms are charging. By contrast, soft data is qualitative, focusing on trends, expectations, and sentiment around economic activity. And while hard data looks backward, soft data from the regional surveys can look forward—providing important information about expectations for the future and emerging trends.

Gathering soft data quickly can be impactful—for example, the Empire State Manufacturing and Business Leaders surveys signaled a sharp downturn in economic activity in early March 2020 [archived PDF], providing a warning weeks before official statistics captured the full extent of the COVID pandemic’s economic impact.  

How the Surveys Work

The New York Fed launched the Empire State Manufacturing Survey in 2001. It was modeled after the Philadelphia Fed’s Business Outlook Survey, a long-running manufacturing survey that has historically been watched by financial markets and policymakers as an early signal of national manufacturing conditions. The Business Leaders Survey, launched in 2004, was among the first regional business surveys to target the service sector.

The surveys are sent to over 300 business executives and managers at firms across industries during the first week of every month. While about two-thirds of participating firms have 100 or fewer employees, some have hundreds or thousands of workers.

Leaders at the firms fill out a short questionnaire asking whether business activity has increased, decreased, or stayed the same compared to the prior month. The surveys ask about indicators such as prices, yielding insights into inflationary pressures, as well as employment, orders, and capital spending. Respondents also answer questions about how they expect these indicators to change over the next six months, offering a forward-looking perspective on the economy’s trajectory.

From the responses, New York Fed researchers construct diffusion indexes by calculating the difference between the percentage of firms reporting increased activity and those reporting decreased activity. Positive values indicate that more firms say activity increased than decreased, suggesting activity expanded over the month. Higher positive values indicate stronger growth, while lower negative values indicate stronger declines.
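
In code, the construction is one line of arithmetic. A minimal sketch with made-up responses:

```python
# Diffusion index: percent reporting increases minus percent reporting decreases.
responses = ["increase", "increase", "increase", "same", "decrease", "decrease"]

pct_up = 100 * responses.count("increase") / len(responses)
pct_down = 100 * responses.count("decrease") / len(responses)
diffusion_index = pct_up - pct_down  # 50.0 - 33.3 = 16.7
print(round(diffusion_index, 1))     # 16.7: modest expansion on balance
```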

The surveys include local businesses, like restaurants and car dealerships, as well as firms with national and global reach, such as software manufacturers and shipping enterprises. As a result, the economic indicators derived from the surveys are often early predictors of national economic patterns, frequently aligning with hard data released later.

Getting Answers on Current Issues

The surveys regularly ask supplemental questions about current economic issues to get real-time answers. Over the last few years, the surveys have asked about firms’ experience with tariffs, inflation expectations, whether the use of AI is leading to a reduction in employment, how often employees work from home [archived PDF], and whether supply availability was affecting their businesses.

Going Beyond the Indicators

In addition to providing data to track economic conditions, the regional surveys also provide a channel to hear directly from local business leaders. Every month, survey respondents are asked for their comments, offering the opportunity for businesses to share their thoughts, concerns, and experiences with the New York Fed. This helps researchers and policymakers understand how businesses are being affected by economic conditions.

The surveys act as one of the bridges between the New York Fed and the business community, ensuring the voices of regional businesses are considered in economic assessments and policy discussions as well as enhancing the ability of policymakers to make informed decisions to respond effectively to economic challenges.

Executives, owners, or managers of businesses in New York, northern New Jersey, or Fairfield County, Conn., interested in participating in the New York Fed’s monthly business surveys can find more information here. The next survey results will be released on Oct. 15 and 16.

World-Watching: PONARS Eurasia—In the News

[from George Washington University’s Institute for European, Russian and Eurasian Studies/PONARS Eurasia, 8 September 2025]

Robert Orttung, Debra Javeline, Graeme Robertson, Richard Arnold, Andrew Barnes, Edward Holland, Mikhail Troitskiy, Judyth Twigg, and Susanne Wengle argue that the renewed U.S.-Russia alignment under Trump and Putin prioritizes fossil fuel development over climate action and undermines international climate negotiations.

Read the full article [archived PDF].

In a statement to The Kyiv Independent, Peter Rutland echoes the contrast between the West’s diplomatic quarantine of Russia and the possibility of implementing policies without its permission, articulating how differing attitudes between Europe and Putin discourage any kind of escalation. In her recent article, Margarita Zavadskaya explores the “White Coat” narrative, explaining the origin and manipulation of Russian attitudes towards those who have left.

Read the Rutland article / read the Zavadskaya article [archived PDF].

In a recent interview, Volodymyr Dubovyk explains why he believes Putin “won” the Alaska summit, sharing his perspective on the meeting’s implications and concluding that the dynamics of peace negotiations have shifted somewhat. Richard Arnold underscores the Donbas’s significance, stating that Russian control of the “Fortress Belt” would enable Russia to wreak havoc on all areas to the west.

Read the Dubovyk interview / read the Arnold article.

Ryhor Nizknikau speaks with TVP World, interpreting the significance of Ukrainian Parliamentary Speaker Parubiy’s assassination. Tymofii Brik’s recent study, together with Oleksii Sereda, Anna Kokoba, and Alina Shmaliuk, appears in Vox Ukraine, covering the participants and reasoning behind the protest against the bill to limit SAPO and NABU’s independence.

Watch the Nizknikau interview / read the Vox Ukraine article.

In the context of Russia’s recent nuclear developments near the Pan’kovo testing range, Pavel Podvig comments that the new weapon, nicknamed “Skyfall” by NATO, has likely undergone testing already. During an interview with DW News, Mikhail Alekseev addresses the goals pursued by the Sino-Russian partnership, which range from the tangible benefits of constructing gas infrastructure to the more ideological advantage of presenting an alternative to the U.S.-led world order.

Read the Podvig article / watch the Alekseev interview.

World-Watching: How Nature Paints With Color

[from Quanta Magazine]

by Yasemin Saplakoglu

When objects interact with light in particular ways — by absorbing or reflecting it — we see in color. A sunset’s orange hues and the ocean’s deep blues inspire artists and dazzle observant admirers. But colors are more than pretty decor; they also play a critical role in life. They attract mates, pollinators and seed-spreaders, and signal danger. And the same color can mean different things to different organisms: A red bird might attract a mate, while a red berry might warn off a hungry human.

For color to communicate meaning, systems to produce it had to evolve, by developing pigments to absorb certain wavelengths of light or structures to reflect them. Organisms also had to produce the machinery to perceive color. When you look out into a forest, you might see lush greenery dappled with yellowish sunlight and pink blooms. But this forest scene would look different if you were a bird or a fly. Color-perception machinery — which includes photoreceptors in our eyes that recognize and distinguish light — can differ between species. While humans can’t see ultraviolet light, some birds can. While dogs can’t see red or green, many humans can. Even within species there’s some variation: People who are colorblind have trouble distinguishing some combinations, such as green and red. And many organisms can’t see color at all.

Within one planet, many colorful worlds exist. But how did colors evolve in the first place?

What’s New and Noteworthy

To pinpoint when different kinds of color signals may have evolved, researchers recently reviewed many papers, covering hundreds of millions of years of evolutionary history, to bring together information from the fossil record and phylogenetic trees (diagrams that depict evolutionary relationships between species). Their analysis across the tree of life suggested that color signals likely evolved much later than color vision. Color vision itself likely evolved twice, developing independently in arthropods and fish between 400 million and 500 million years ago. Plants then started using bright colors to attract pollinators and seed-dispersing animals, and animals later began using colors to warn off predators and eventually to attract mates.

One of the most common colors that we see in nature is green. However, this isn’t a color signal: It’s a result of photosynthesis. Most plants absorb almost all the photons in the red and blue light spectra but only 90% of the green photons. The remaining 10% are reflected, making the plants appear green to our eyes. But why did they evolve to do this? According to one model, this arrangement makes photosynthetic machinery more stable, suggesting that evolution sometimes favors stability over efficiency.

The majority of colors in nature are produced by pigments that absorb or reflect different wavelengths of light. While many plants can produce these pigments on their own, most animals can’t; instead, they acquire pigments from their diet. Some pigments, though, are hard to acquire, so some animals instead rely on nanoscale structures that scatter light in particular ways to create “structural colors.” For example, the shell of the blue-rayed limpet has layers of transparent crystals, each of which diffracts and reflects a sliver of the light spectrum. When the layers grow to a precise thickness, around 100 nanometers, the light waves reflected by each layer interfere with one another, canceling out every wavelength except blue. The result is the appearance of a bright blue limpet shell.
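
The selection rule at work is ordinary interference. In an idealized multilayer stack (real shells are messier), reflections from successive layers reinforce only at wavelengths satisfying

\[
2\,n\,d\,\cos\theta = m\,\lambda, \qquad m = 1, 2, \ldots,
\]

where n is the effective refractive index, d the layer spacing, and θ the angle of travel within the stack; wavelengths that miss the condition cancel, which is how layers about 100 nanometers thick can single out blue.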

Economics-Watching: SF FedViews: September 4, 2025

[from the Federal Reserve Bank of San Francisco]

Andrew Foerster, senior research advisor at the Federal Reserve Bank of San Francisco, shared views on the current economy and the outlook from the Economic Research Department as of September 4, 2025.

While economic activity in the United States has remained resilient, recent data show some softening in the labor market. Swings in net exports affected GDP in the first half of 2025, with imports surging in the first quarter and then declining in the second. Inflation remains above the Fed’s 2% goal, and a near-term rise from tariffs appears likely. Job gains in recent months have slowed. Downward revisions to recent job growth estimates have been large, but their magnitudes are not out of line with historical values. Job growth estimates remain reliable despite data collection challenges. With the balance of risks surrounding the Fed’s dual mandate now shifting, market participants are projecting an easing of monetary policy in coming months.

Read the full article [archived PDF].

Economics-Watching: Fourth District Beige Book

[from the Federal Reserve Bank of Cleveland, 3 September 2025]

Summary of Economic Activity

Fourth District contacts reported a slight increase in overall business activity in recent weeks and expected activity to rise modestly in the months ahead. Consumer spending was flat, with retailers noting continued affordability concerns among consumers. Manufacturers also reported flat demand for goods, citing trade policy uncertainty as the main driver. Demand for professional and business services grew moderately, albeit at a slower pace than in the past three reporting periods. Contacts generally reported flat employment levels and modest wage pressures. Nonlabor cost pressures remained robust, and selling prices continued to grow modestly.

Read the full report [archived PDF].

Economics-Watching: Neutral Interest Rates and the Monetary Policy Stance

[from the Federal Reserve Bank of Cleveland, 2 September 2025]

by Taylor Horn & Saeed Zaman

The neutral interest rate (r-star) is an important input in monetary policy discussions and is commonly used to assess the stance of monetary policy. This Economic Commentary presents estimates of the neutral interest rate from a recently developed model and provides a high-level description of this new model. With data through 2025:Q2, the model estimates the implied (medium-run) nominal neutral interest rate to be 3.7 percent, with a 68 percent coverage band ranging from 2.9 percent to 4.5 percent. Given that the effective nominal federal funds rate is currently in the range of 4.25 percent to 4.5 percent, this model estimates with a high level of certainty (77 percent probability) that the policy stance is in restrictive territory.
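
As a rough consistency check, assuming purely for illustration that the model’s uncertainty is normal (the Commentary does not claim this), the reported figures hang together:

```python
# Normal-approximation check of the "restrictive with 77% probability" statement.
from scipy.stats import norm

r_star = 3.7                 # model's nominal neutral-rate estimate (percent)
sigma = (4.5 - 2.9) / 2      # 68% coverage band read as +/- 1 std. dev. => 0.8
ffr_mid = (4.25 + 4.50) / 2  # midpoint of the current funds rate target range

p_restrictive = norm.cdf((ffr_mid - r_star) / sigma)
print(round(p_restrictive, 2))  # ~0.8, in the neighborhood of the reported 77%
```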

Read the full article [archived PDF].

Digitizing Heritage: Exploring the Transformation of Culture to Data

[from India in Transition by the Center for the Advanced Study of India at the University of Pennsylvania, 1 September 2025]

by Krupa Rajangam & Deborah Sutton

“Oh that. We just took some undergraduate history students on board as interns. They provided the content and it was done.”

The co-founder of a digital heritage initiative promoting interactive user interfaces offered these opening remarks. Speaking at a Delhi-based museum, he had been asked about the information provided to users as they moved their hands across an interactive board, revealing images and narratives relating to the Indian freedom movement. His response clarified that the physical and digital components of such installations—for example, the 3D-modeling software and hardware, the scanning equipment and its resolution, and the user interface—were more carefully designed and calibrated than the content they presented.

Contemporary cultural heritage (CH) is rife with digital innovation. The COVID pandemic accelerated this transformation as archivists and curators worked to develop content that would reach remote, locked-down audiences. Within significant limits, digital platforms can democratize and facilitate access to materials previously inaccessible. Instead of being physically siloed, digitized material—as data components and not just content on culture—can be reproduced, combined, and circulated infinitely to achieve a reach previously considered impossible. Accessibility and malleability are among the great boons of digital formats. But here, we consider the information economy of CH practice as it exists—and not its extraordinary and often hypothetical potential—in two overlapping realms of digitized CH: for-profit business enterprises and academic side-hustles related to more mainstream academic research.

In the former, questions of what is shared are often less significant than the appeal of the format. In the latter, innovation is often the result of short-term projects that languish, abandoned after completion, and rarely find audiences. Our research builds on our individual experiences and on the findings of a scoping exercise, conducted in 2021-22, that examined a number of India-based heritage projects. It suggests the need for more careful consideration of the implications of transforming CH materials into forms of data; the change affects everything from how we understand “originality” to the reliance on for-profit services to deliver heritage material to the public.

As digitized representations of CH and access to such formats become more widespread, are we, as CH practitioners and academics, giving enough thought to how digital technologies are reshaping the nature of CH and its audience? Beyond questions of wider reach, are we sufficiently acknowledging how these changes challenge a continued focus on originality and the notion of the academy as the primary controller of access to knowledge and its validity, both in research and practice?

Digitizing for Dissemination

In 2019, one of us—Deborah Sutton—developed a software platform, Safarnama, including an app and authored experiences around Delhi’s CH. The project subsequently extended to Karachi. Generating “original” content, such as audio-visual clips and old photos, to be hosted on the app platform, was key to its attractiveness and usefulness, but permissions proved tricky. Some collaborators who were initially keen to contribute content quietly withdrew, likely due to the unfamiliar format and unknown reach. The app format also raised other questions. Would incorporating content from non-digital but published scholarship require authorial permission or only acknowledgement?

In 2020, Krupa Rajangam held a sponsored incubation at the NSRCEL, a business incubator located at the Indian Institute of Management-Bangalore, to develop a web interface that would host geolocated stories of marginalized histories by drawing on both historical facts and lived experiences. Corporate mentors remained skeptical of her ability to source “original” content on an ongoing basis, i.e., content that was both authenticated and validated. They repeatedly advised her to focus on the format, user experience, and appeal for “mass markets” so her prototype would find audiences. Both projects raised similar questions about who would consume the content and what constitutes the public or audience.

In a scoping exercise undertaken for the Arts and Humanities Research Council (AHRC), UK, in 2021-22, we explored a number of India-based heritage projects funded since 2015 by the AHRC in partnership with the Newton Fund and the Indian Council for Historical Research (figure 1). We were particularly interested in the digital components, which all projects included, even if only a website.

Our exploratory surveys firmly established the divergence in interpreting both CH and digital technologies, which was not surprising. Some projects defined and treated CH as fixed pre-existing material, to be interpreted and presented to audiences through digital technologies. Others re-framed digital formats of CH as components of data, assembling, manipulating, and representing extant archival and other materials. The rest generated digitized CH, effectively altering its nature. Typically, such projects dealt with more ephemeral or less conventional forms of CH.

Fundamental Transformations

Notions of originality remain central to art, architectural and art historical training, and CH practice. Digitization transforms “original” material in physical archives, such as the old maps and letters much lauded in traditional “analog” scholarship, from something valued for access and retrieval into something valued for use as data. Once the end-user (audience) accesses this data (whether historical facts or stories), it becomes nothing more than bytes occupying valuable space, to be deleted once consumed rather than stored, making it easy to overlook or disregard the source and its context.

For example, in the Safarnama project, the app contained carefully collected and authenticated narratives on “partition memories” in Delhi and Karachi. However, the bite-sized media format meant that users would explore content only once, as snippets. This realization led the team to extend the software with the ability to download content, which at least meant that users could collect, organize, and store (archive) the assembled media.

Digitization also takes away the materiality of the archive, making it more ephemeral. The non-digital materials through, and into, which we render CH can (in endless combinations and cycles) be lost, forgotten, sold, recovered, collected, displayed, and stored. Digital files have analogous capacities, but they depend on varied and dynamic software ecologies for their existence and for sustained end-user access. Digital files created within one software architecture can be incompatible with, and therefore rendered obsolete by, another. The ethos of software development is constant change.

In another paper, we examined questions of quantity, quality, and reusability of data related to the digitization of building-crafts knowledge alongside the CARE and FAIR principles of data management. The principles were proposed and adopted by an international consortium of scholars and industry; CARE focuses on the responsible collection, use, and dissemination of data, especially data relating to vulnerable people, while FAIR focuses on sustainable data management.

As an example, one AHRC project experimented with methods to capture detailed 3D images of heritage sites and structures in dynamic crowded environments. They used one set of methods to capture the interiors and another for the exteriors, hoping to merge both together and develop holistic imagery for audiences. This proved impossible at first due to issues of software compatibility. Once that was partially resolved, the new software couldn’t handle the sheer volume of data captured—and it was unclear where and for how long such volumes of data would be stored.

New realms of intellectual property remain fuzzy. While the content on digital platforms is governed by licensing and proprietary legal frameworks, it is often hosted on open platforms, through web repositories such as GitHub. Prima facie, such openness appears to challenge the proprietorial nature of archives and other repositories as keepers of knowledge. However, it raises a host of questions about how to maintain a critical understanding of archives.

Digitization may, and should, transform access, but should it obliterate the regimes through which the materials were generated and organized, and what is included or excluded? For example, a local coordinator of one project that engaged with artists commented that digital technologies are typically used to document technical skills as forms of intangible heritage and to develop artist encyclopedias, saying that “they are hardly used to interrogate the reality that many ‘traditional’ artists hail from marginalized castes.” Similarly, the local coordinator of another project that engaged with communities living in and around a protected heritage site commented on how digital technologies often end up being used to create a record of heritage structures without any reference to their day-to-day setting.

Any and all digital enterprise in CH, we argue, needs to integrate the ambition to use digital methods not just to present the material but also to counter and interrogate it, its creation, and its purpose. Digital platforms and web- and app-based software are now able to manipulate and re-situate information in unprecedented ways. The novelty of such formats can displace original, provocative, and timely considerations of the material. Often, we are so taken by the visual and structural attributes of these formats that we accept them at face value and lose sight of the tone and content of heritage as a curated message about the past and the present.

Alongside this, digital augmentations and iterations of CH, including storage, have significant financial and infrastructural implications. The creation and maintenance of digital platforms requires either developing “in-house” digital specialization or, more commonly, reliance on private, for-profit platforms. Paying for external provision introduces complexities. Funders, including the AHRC, struggle to devise guidance or policy in relation to software licensing. However, a persistent challenge for projects, and for partnerships between academic and non-academic partners, is devising data and software strategies that subsist beyond the life of the funded-research project. Often, the adverse effects of the paucity of longer-term planning around IP issues, sustainability, and data archiving fall disproportionately on the non-academic stakeholder.

While digitization foregrounds the potential and promise of complete openness and equity, that promise may be lost in practice. Or digitization may merely mark the displacement of one set of ethics by another. There is a need for more careful consideration of the implications, complexities, and risks of taking CH materials out of boxes and off shelves and transforming them into data files, which are, in turn, dependent on digital platforms to provide end-user access. However, the question remains whether heritage-related disciplines are adequately prepared and willing to confront such new ways of working, which have begun to dislodge some of the privileges extant in current forms of research and practice.

Krupa Rajangam is nearing the end of her tenure as a Fulbright Fellow at the Historic Preservation Department, Weitzman School of Design, University of Pennsylvania. Her permanent designation is Founder-Director, Saythu…linking people and heritage, a professional conservation collective based in Bangalore, India.

Deborah Sutton is a Professor in Modern South Asian History at Lancaster University.