Science-Watching: Forecasting New Diseases in Low-Data Settings Using Transfer Learning

[from London Mathematical Laboratory]

by Kirstin Roster, Colm Connaughton & Francisco A. Rodrigues

Abstract

Recent infectious disease outbreaks, such as the COVID-19 pandemic and the Zika epidemic in Brazil, have demonstrated both the importance and difficulty of accurately forecasting novel infectious diseases. When new diseases first emerge, we have little knowledge of the transmission process, the level and duration of immunity to reinfection, or other parameters required to build realistic epidemiological models. Time series forecasts and machine learning, while less reliant on assumptions about the disease, require large amounts of data that are also not available in early stages of an outbreak. In this study, we examine how knowledge of related diseases can help make predictions of new diseases in data-scarce environments using transfer learning. We implement both an empirical and a synthetic approach. Using data from Brazil, we compare how well different machine learning models transfer knowledge between two different dataset pairs: case counts of (i) dengue and Zika, and (ii) influenza and COVID-19. In the synthetic analysis, we generate data with an SIR model using different transmission and recovery rates, and then compare the effectiveness of different transfer learning methods. We find that transfer learning offers the potential to improve predictions, even beyond a model based on data from the target disease, though the appropriate source disease must be chosen carefully. While imperfect, these models offer an additional input for decision makers for pandemic response.

Introduction

Epidemic models can be divided into two broad categories: data-driven models aim to fit an epidemic curve to past data in order to make predictions about the future; mechanistic models simulate scenarios based on different underlying assumptions, such as varying contact rates or vaccine effectiveness. Both model types aid in the public health response: forecasts serve as an early warning system of an outbreak in the near future, while mechanistic models help us better understand the causes of spread and potential remedial interventions to prevent further infections. Many different data-driven and mechanistic models were proposed during the early stages of the COVID-19 pandemic and informed decision-making with varying levels of success. This range of predictive performance underscores both the difficulty and importance of epidemic forecasting, especially early in an outbreak. Yet the COVID-19 pandemic also led to unprecedented levels of data-sharing and collaboration across disciplines, so that several novel approaches to epidemic forecasting continue to be explored, including models that incorporate machine learning and real-time big data streams. In addition to the COVID-19 pandemic, recent infectious disease outbreaks include Zika virus in Brazil in 2015, Ebola virus in West Africa in 2014–16, Middle East respiratory syndrome (MERS) in 2012, and coronavirus associated with severe acute respiratory syndrome (SARS-CoV) in 2003. This trajectory suggests that further improvements to epidemic forecasting will be important for global public health. Exploring the value of new methodologies can help broaden the modeler’s toolkit to prepare for the next outbreak. In this study, we consider the role of transfer learning for pandemic response.

Transfer learning refers to a collection of techniques that apply knowledge from one prediction problem to solve another, often using machine learning and with many recent applications in domains such as computer vision and natural language processing. Transfer learning leverages a model trained to execute a particular task in a particular domain, in order to perform a different task or extrapolate to a different domain. This allows the model to learn the new task with less data than would normally be required, and is therefore well-suited to data-scarce prediction problems. The underlying idea is that skills developed in one task, for example the features that are relevant to recognize human faces in images, may be useful in other situations, such as classification of emotions from facial expressions. Similarly, there may be shared features in the patterns of observed cases among similar diseases.

The value of transfer learning for the study of infectious diseases is relatively under-explored. The majority of existing studies on diseases remain in the domain of computer vision and leverage pre-trained neural networks to make diagnoses from medical images, such as retinal diseases, dental diseases, or COVID-19. Coelho and colleagues (2020) explore the potential of transfer learning for disease forecasts. They train a Long Short-Term Memory (LSTM) neural network on dengue fever time series and make forecasts directly for two other mosquito-borne diseases, Zika and Chikungunya, in two Brazilian cities. Even without any data on the two target diseases, their model achieves high prediction accuracy four weeks ahead. Gautam (2021) uses COVID-19 data from Italy and the USA to build an LSTM transfer model that predicts COVID-19 cases in countries that experienced a later pandemic onset.

These studies provide empirical evidence that transfer learning may be a valuable tool for epidemic forecasting in low-data situations, though research is still limited. In this study, we aim to contribute to this empirical literature not only by comparing different types of knowledge transfer and forecasting algorithms, but also by considering two different pairs of endemic and novel diseases observed in Brazilian cities, specifically (i) dengue and Zika, and (ii) influenza and COVID-19. With an additional analysis on simulated time series, we hope to provide theoretical guidance on the selection of appropriate disease pairs, by better understanding how different characteristics of the source and target diseases affect the viability of transfer learning.
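The synthetic analysis described above can be illustrated with a minimal discrete-time SIR simulation, in which "source" and "target" diseases differ only in their transmission and recovery rates. The parameter values, population size, and horizon below are illustrative stand-ins, not the settings used in the paper:

```python
import numpy as np

def simulate_sir(beta, gamma, n_days=200, population=100_000, i0=10):
    """Simulate a discrete-time SIR epidemic; return the daily new-case series."""
    s, i, r = population - i0, i0, 0
    new_cases = []
    for _ in range(n_days):
        infections = beta * s * i / population  # new infections this step
        recoveries = gamma * i                  # recoveries this step
        s -= infections
        i += infections - recoveries
        r += recoveries
        new_cases.append(infections)
    return np.array(new_cases)

# A source and a target disease with different transmission/recovery rates.
source_series = simulate_sir(beta=0.30, gamma=0.10)  # basic reproduction number 3.0
target_series = simulate_sir(beta=0.45, gamma=0.25)  # basic reproduction number 1.8
```

Varying `beta` and `gamma` across such pairs makes it possible to ask how far apart the source and target dynamics can be before transfer stops helping.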

Zika and COVID-19 are two recent examples of novel emerging diseases. Brazil experienced a Zika epidemic in 2015–16 and the WHO declared a public health emergency of international concern in February 2016. Zika is caused by an arbovirus spread primarily by mosquitoes, though other transmission routes, including congenital and sexual transmission, have also been observed. Zika belongs to the flavivirus family, and symptoms of infection share some commonalities with other mosquito-borne arboviruses, such as yellow fever, dengue fever, or chikungunya. Illness tends to be asymptomatic or mild but can lead to complications, including microcephaly and other brain defects in the case of congenital transmission.

Given the similarity of the pathogen and primary transmission route, dengue fever is an appropriate choice of source disease for Zika forecasting. Not only does the shared mosquito vector result in similar seasonal patterns of annual outbreaks, but consistent, geographically and temporally granular data on dengue cases is available publicly via the open data initiative of the Brazilian government.

COVID-19 is an acute respiratory infection caused by the novel coronavirus SARS-CoV-2, which was first detected in Wuhan, China, in 2019. It is transmitted directly between humans via airborne respiratory droplets and particles. Symptoms range from mild to severe and may affect the respiratory tract and central nervous system. Several variants of the virus have emerged, which differ in their severity, transmissibility, and level of immune evasion.

Influenza is also a contagious respiratory disease that is spread primarily via respiratory droplets. Infection with the influenza virus also follows patterns of human contact and seasonality. There are two types of influenza (A and B) and new strains of each type emerge regularly. Given the similarity in transmission routes and to a lesser extent in clinical manifestations, influenza is chosen as the source disease for knowledge transfer to model COVID-19.

For each of these disease pairs, we collect time series data from Brazilian cities. Data on the target disease from half the cities is retained for testing. To ensure comparability, the test set is the same for all models. Using this empirical data, as well as the simulated time series, we implement the following transfer models to make predictions.

  • Random forest: First, we implement a random forest model which was recently found to capture well the time series characteristics of dengue in Brazil. We use this model to make predictions for Zika without re-training. We also train a random forest model on influenza data to make predictions for COVID-19. This is a direct transfer method, where models are trained only on data from the source disease.
  • Random forest with TrAdaBoost: We then incorporate data from the target disease (i.e., Zika and COVID-19) using the TrAdaBoost algorithm together with the random forest model. This is an instance-based transfer learning method, which selects relevant examples from the source disease to improve predictions on the target disease.
  • Neural network: The second machine learning algorithm we deploy is a feed-forward neural network, which is first trained on data of the endemic disease (dengue/influenza) and applied directly to forecast the new disease.
  • Neural network with re-training and fine-tuning: We then retrain only the last layer of the neural network using data from the new disease and make predictions on the test set. Finally, we fine-tune all the layers’ parameters using a small learning rate and low number of epochs. These models are examples of parameter-based transfer methods, since they leverage the weights generated by the source disease model to accelerate and improve learning in the target disease model.
  • Aspirational baseline: We compare these transfer methods to a model trained only on the target disease (Zika/COVID-19) without any data on the source disease. Specifically, we use half the cities in the target dataset for training and the other half for testing. This gives a benchmark of the performance in a large-data scenario, which would occur after a longer period of disease surveillance.
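The direct-transfer setup in the first bullet can be sketched as follows: fit a forecaster on lagged case counts of the source disease, then apply it unchanged to the target disease. The synthetic series, lag length, and hyperparameters below are illustrative stand-ins for the paper's actual data and model configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_lagged(series, n_lags=4):
    """Turn a case-count series into (lagged-features, next-step-target) pairs."""
    X = np.array([series[t - n_lags:t] for t in range(n_lags, len(series))])
    y = series[n_lags:]
    return X, y

rng = np.random.default_rng(0)
# Noisy seasonal curves standing in for weekly case counts of two diseases.
t = np.linspace(0, 6 * np.pi, 150)
source = 100 * np.abs(np.sin(t)) + rng.normal(0, 5, 150)
target = 80 * np.abs(np.sin(t + 0.4)) + rng.normal(0, 5, 150)

X_src, y_src = make_lagged(source)
X_tgt, y_tgt = make_lagged(target)

# Direct transfer: train only on the source disease, evaluate on the target.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_src, y_src)
preds = model.predict(X_tgt)
mae = np.mean(np.abs(preds - y_tgt))  # error on the unseen target disease
```

The instance-based (TrAdaBoost) and parameter-based (fine-tuning) variants differ only in how target-disease data is folded in afterward: by re-weighting source examples, or by re-training some of the fitted model's parameters on the target series.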

The remainder of this paper is organized as follows. The models are described in more technical detail in Section 2. Section 3 shows the results of the synthetic and empirical predictions. Finally, Section 4 discusses practical implications of the analyses.

Access the full paper [via institutional access or paid download].

Coronavirus Update: Fall Boosters Could Have Bits of Omicron

[from ScienceNews Coronavirus Update, by Erin Garcia de Jesús]

For all the coronavirus variants that have thrown pandemic curve balls (including alpha, beta, gamma, and delta), COVID-19 vaccines have stayed the same. That could change this fall.

Yesterday, an advisory committee to the U.S. Food and Drug Administration met to discuss whether vaccine developers should update their jabs to include a portion of the omicron variant—the version of the coronavirus that currently dominates the globe. The verdict: The omicron variant is different enough that it’s time to change the vaccines. Exactly how is up in the air; the FDA still has to weigh in and decide what versions of the coronavirus will be in the shot.

“This doesn’t mean that we are saying that there will be boosters recommended for everyone in the fall,” Amanda Cohn, chief medical officer for vaccine policy at the U.S. Centers for Disease Control and Prevention, said at the June 28 advisory meeting. “But my belief is that this gives us the right vaccine for preparation for boosters in the fall.”

The decision to update COVID-19 vaccines didn’t come out of nowhere. In the two-plus years that the coronavirus has been spreading around the world, it has had a few “updates” of its own—mutating some of its proteins that allow the virus to more effectively infect our cells or hide from our immune systems.

Vaccine developers had previously crafted vaccines to tackle the beta variant that was first identified in South Africa in late 2020. Those were scrapped after studies showed that current vaccines remained effective.

The current vaccines gave our immune systems the tools to recognize variants such as beta and alpha, which each had a handful of changes from the original SARS-CoV-2 virus that sparked the pandemic. But the omicron variant is a slipperier foe. Lots more viral mutations combined with our own waning immunity mean that omicron can gain a foothold in the body. And vaccine protection isn’t as good as it once was at fending off COVID-19 symptoms.

The shots still largely protect people from developing severe symptoms, but there has been an uptick in hospitalizations and deaths among older age groups, Heather Scobie, deputy team lead of the CDC’s Surveillance and Analytics Epidemiology Task Force, said at the meeting. And while it’s impossible to predict the future, we could be in for a tough fall and winter, epidemiologist Justin Lessler of the University of North Carolina at Chapel Hill said at the meeting. From March 2022 to March 2023, simulations project that deaths from COVID-19 in the United States might number in the tens to hundreds of thousands.

A switch to omicron-containing jabs may give people an extra layer of protection for the upcoming winter. Pfizer-BioNTech presented data at the meeting showing that updated versions of its mRNA shot gave clinical trial participants a boost of antibodies that recognize omicron. One version included omicron alone, while the other is a twofer, or bivalent, jab that mixes the original formulation with omicron. Moderna’s bivalent shot boosted antibodies too. Novavax, which developed a protein-based vaccine that the FDA is still mulling whether to authorize for emergency use, doesn’t have an omicron-based vaccine yet, though the company said its original shot gives people broad protection, generating antibodies that probably will recognize omicron.

Pfizer and Moderna both updated their vaccines using a version of omicron called BA.1, which was the dominant variant in the United States in December and January. But BA.1 has siblings and has already been outcompeted by some of them.

Since omicron first appeared late last year, “we’ve seen a relatively troubling, rapid evolution of SARS-CoV-2,” Peter Marks, director of the FDA’s Center for Biologics Evaluation and Research in Silver Spring, Maryland, said at the advisory meeting.

Now, omicron subvariants BA.2, BA.2.12.1, BA.4 and BA.5 are the dominant versions in the United States and other countries. The CDC estimates that roughly half of new U.S. infections the week ending June 25 were caused by either BA.4 or BA.5. By the time the fall rolls around, yet another new version of omicron—or a different variant entirely—may join their ranks. The big question is which of these subvariants to include in the vaccines to give people the best protection possible.

BA.1, the version already in the updated vaccines, may be the right choice, virologist Kanta Subbarao said at the FDA meeting. An advisory committee to the World Health Organization, which Subbarao chairs, recommended on June 17 that vaccines may need to be tweaked to include omicron, likely BA.1. “We’re not trying to match [what variants] may circulate,” Subbarao said. Instead, the goal is to make sure that the immune system is as prepared as possible to recognize a wide variety of variants, not just specific ones. The hope is that the broader the immune response, the better our bodies will be at fighting the virus off even as it evolves.

The variant that is farthest removed from the original virus is probably the best candidate to accomplish that goal, said Subbarao, who is director of the WHO’s Collaborating Center for Reference and Research on Influenza at the Doherty Institute in Melbourne, Australia. Computational analyses of how antibodies recognize different versions of the coronavirus suggest that BA.1 is probably the original coronavirus variant’s most distant sibling, she said.

Some members of the FDA advisory committee disagreed with choosing BA.1, instead saying that they’d prefer vaccines that include a portion of BA.4 or BA.5. With BA.1 largely gone, it may be better to follow the proverbial hockey puck where it’s going rather than where it’s been, said Bruce Gellin, chief of Global Public Health Strategy with the Rockefeller Foundation in Washington, D.C. Plus, BA.4 and BA.5 are also vastly different from the original variant. Both BA.4 and BA.5 have identical spike proteins, which the virus uses to break into cells and the vaccines use to teach our bodies to recognize an infection. So when it comes to making vaccines, the two are somewhat interchangeable.

There are some real-world data suggesting that current vaccines offer the least amount of protection from BA.4 and BA.5 compared with other omicron subvariants, Marks said. Pfizer also presented data showing results from a test in mice of a bivalent jab with the original coronavirus strain plus BA.4/BA.5. The shot sparked a broad immune response that boosted antibodies against four omicron subvariants. It’s unclear what that means for people.

Not everyone on the FDA advisory committee agreed that an update now is necessary—two members voted against it. Pediatrician Henry Bernstein of Zucker School of Medicine at Hofstra/Northwell in Uniondale, N.Y., noted that the current vaccines are still effective against severe disease and that there aren’t enough data to show that any changes would boost vaccine effectiveness. Pediatric infectious disease specialist Paul Offit of Children’s Hospital of Philadelphia said that he agrees that vaccines should help people broaden their immune responses, but he’s not yet convinced omicron is the right variant for it.

Plenty of other open questions remain too. The FDA could authorize either a vaccine that contains omicron alone or a bivalent shot, although some data hinted that a bivalent dose might spark immunity that could be more durable. Pfizer and Moderna tested their updated shots in adults. It’s unclear what the results mean for kids. Also unknown is whether people who have never been vaccinated against COVID-19 could eventually start with such an omicron-based vaccine instead of the original two doses.

Maybe researchers will get some answers before boosters start in the fall. But health agencies need to make decisions now so vaccine developers have a chance to make the shots in the first place. Unfortunately, we’re always lagging behind the virus, said pediatrician Hayley Gans of Stanford University. “We can’t always wait for the data to catch up.”

OFR Working Paper Finds Cash Biases Measurement of Stock Return Correlations

[from the U.S. Office of Financial Research]

Today, the U.S. Office of Financial Research published a working paper, “Cash-Hedged Stock Returns” [archived PDF], and an accompanying blog (below), regarding firms’ cash holdings and the implications for asset prices and financial stability.

Cash holdings are important for financial stability because of their value in crises. Corporate cash piles vary across companies and over time. Firms’ cash holdings typically earn low returns, and their cash returns are correlated across firms. Thus, the asset pricing results are important for investors managing a portfolio’s risk and policymakers concerned about sources of vulnerability.

The working paper [archived PDF] shows how investors can hedge cash on firms’ balance sheets when making portfolio choices. Cash generates variation in beta estimates, and the working paper decomposes stock betas into components that depend on the firm’s cash holdings, return on cash, and cash-hedged return. Common asset pricing premia have large implicit cash positions, and portfolios of cash-hedged premia often have higher Sharpe ratios (a measure of risk-adjusted return) because of the correlation between firms’ cash returns. The paper also shows that the value of a dollar increased in 2020, and that firms hold cash in part because they are riskier.

Read the working paper [archived PDF].

OFR Finds Large Cash Holdings Can Lead to Mismeasuring Risk

[from the OFR blog, by Sharon Ross]

Cash is necessary for companies’ operations. Firms use cash to make payments, finance investments, and manage risk. But holding cash comes at a cost: its low pecuniary return. Published today by the OFR, the working paper, “Cash-Hedged Stock Returns” [archived PDF], shows that the cash returns of publicly traded, non-financial firms are correlated. Since cash returns are a part of equity returns, investors that are using equity return correlations to measure risk can mismeasure risk.

We show the importance of cash for systemic risk by documenting the value of cash in crises, showing that firms hold cash in part due to risk management, and studying how cash biases the measurement of the interconnectedness of stock returns. The consequences of cash are important for policymakers monitoring aggregate risks and sources of market vulnerability, and for investors making portfolio choices.

Cash holdings are important for financial stability because of their value in crises. Several papers document a “dash for cash” during the initial panicked stages of the coronavirus disease 2019 (COVID-19) pandemic, when firms rushed to hold cash in their coffers. The dash for cash was driven by firms drawing down on lines of credit from banks, which in turn affected bank lending. The dash for cash highlighted the critical role of firms’ cash holdings and returns in understanding risk in the financial system.

We show the value of a dollar increased in 2020. Moreover, our results show that firms may hold cash because they are riskier, as opposed to firms with high cash shares being less risky due to their cash holdings. Our results are consistent with a precautionary savings motive for holding cash. In other words, firms hold cash for risk management, in part to weather bad times.

Cash is a growing share of public firms’ assets. The value-weighted U.S. stock market held 22% of its assets in cash in December 2020 compared to 8% in the 1980s. An investor buying the market in 2020 ends up with an implicit cash position three times larger than in 1980. Individual firms vary in how much cash they hold. As cash holdings increase, it is important to understand how cash holdings affect returns, which in turn impacts who chooses to invest in the firms.

Cash returns are correlated across firms, and cash biases measurement of the interconnectedness of stock returns, making it a risk for financial stability. As a result, the asset pricing results are important both for investors managing portfolio risk and for policymakers concerned about interconnected returns.

We argue that the value of corporate cash is distinct, and we can separate the value of cash and the value of the firm’s primary business. We show how investors can explicitly account for the effect of corporate cash holdings when forming a portfolio. When an investor owns stock in a company with substantial cash, the investor has an implicit cash position managed by the company—something the investor might not intend. We argue that investors should account for the effect of corporate cash holdings in the portfolio decision to measure a portfolio’s risk. Firms’ cash management is not consistent across firms, and investors may want to manage their cash positions themselves. Policymakers should be aware of investors’ cash choices because of the implications for investors’ portfolio risk and for aggregate risk.

We separate a company’s stock return into its cash and non-cash components, and we show that using the non-cash return gives a more informative correlation structure across stocks. In other words, if investors take out the correlated cash returns, the remaining return is less correlated, yielding portfolios that provide better diversification. We show how cash holdings and returns affect the returns of standard asset pricing strategies and asset pricing models like the capital asset pricing model (CAPM).
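The separation of a stock return into cash and non-cash components rests on a simple weighting identity: the equity return is the cash-weighted average of the return on cash and the return on the rest of the business. A minimal sketch, with purely illustrative weights and returns (the paper's actual decomposition is more involved):

```python
def cash_hedged_return(r_equity, cash_weight, r_cash):
    """Back out the non-cash (cash-hedged) return from the identity
    r_equity = w * r_cash + (1 - w) * r_noncash."""
    return (r_equity - cash_weight * r_cash) / (1.0 - cash_weight)

# Two hypothetical firms whose equity returns co-move partly through
# a common return on their cash holdings.
r_cash = 0.0004                  # common per-period return on cash
r_equity_a, w_a = 0.010, 0.25    # firm A: 25% of assets in cash
r_equity_b, w_b = 0.008, 0.40    # firm B: 40% of assets in cash

r_noncash_a = cash_hedged_return(r_equity_a, w_a, r_cash)
r_noncash_b = cash_hedged_return(r_equity_b, w_b, r_cash)
```

Correlating the `r_noncash` series across firms, rather than the raw equity returns, strips out the common cash component that would otherwise inflate measured interconnectedness.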

As cash holdings of public firms increase, it is important that policymakers understand how these increases impact stock returns for both individual firms and the aggregate market. Cash returns are correlated across firms, and cash biases the measurement of the interconnectedness of stock returns. This correlation is important both for investors who are managing a portfolio’s risk and policymakers concerned about sources of vulnerability stemming from interconnected returns.

India and the Russia-Ukraine War: The Paradox of Military Dependence, Traditional Loyalty and Strategic Autonomy

[from India in Transition, published by the Center for the Advanced Study of India (CASI) of the University of Pennsylvania, by Arndt Michael]

India, long-established as the world’s most populous democracy, has been quite instrumental over the years in assisting various countries dealing with democratic struggles. This support has included a blend of bilateral and multilateral initiatives, and especially economic development projects. Yet, India’s recent attitude toward the Russian attack on Ukraine and its concomitant behavior in the United Nations Security Council (as a non-permanent member) seems to contradict its support of democracy. By abstaining, rather than explicitly voting in favor of UN resolutions condemning Russian aggression at the beginning of the war, India angered several UN member-countries.

In order to substantiate its abstention from voting, India felt compelled to issue a so-called “Explanation of Vote” (EoV). In it, India asked for a “return to the path of diplomacy” and an immediate cessation of “violence and hostilities.” Crucially, India stated in the EoV that “the contemporary global order has been built on the UN Charter, international law, and respect for the sovereignty and territorial integrity of states…all member states need to honor these principles in finding a constructive way forward. Dialogue is the only answer to settling differences and disputes, however daunting that may appear at this moment.” 

While these statements and the call for dialogue are in accordance with India’s professed stance toward the relevance and objectives enshrined in the UN Charter, the discrepancy between rhetoric and practice is still conspicuous. At first glance, a “good” relationship with Russia seems to be more significant than the expectations of the world community as represented in the United Nations. And, more importantly, by abstaining, India seemingly violated one of its central foreign and strategic policies: to always strive for strategic autonomy.

However, from a strategic perspective, India is precisely replicating what it did when the Soviet Union invaded Afghanistan. For India, its own national security is at stake, as well as its current and future geostrategic influence in Asia and the world. The military dependence that currently exists between India and Russia is nothing short of gigantic and has created a dangerous conundrum. Since the “Indo–Soviet Treaty of Peace, Friendship and Cooperation” was signed in 1971, defense agreements and long-term supply contracts have been in place. And while India and Russia have shared a strategic relationship since October 2000, this was upgraded in December 2020 to a “Special and Privileged Strategic Partnership.” 

Although there was a marked reduction of Russian imports in past years, official data from the Stockholm International Peace Research Institute (SIPRI) reveal that between 1996 and 2015, the Russian proportion of Indian military imports was almost 70 percent, and between 2016 and 2020 it still hovered around 49 percent. In fact, 70 percent of all Indian military equipment currently in use was directly produced in Russia, was manufactured with the majority of parts coming from Russia, or was licensed by Russia. In 2020, this included the majority of Indian tanks, the only aircraft carrier (the INS Vikramaditya, a heavily modified Kiev-class aircraft carrier) with all of its MiG-29 combat aircraft, six frigates, four destroyers, and the only nuclear-powered submarine. Additionally, eight out of fourteen Indian Navy submarines belong to the Russian Kilo-class. The Indian Air Force flies Sukhoi Su-30MKIs and Mil Mi-17s, which, respectively, constitute the largest share of its combat aircraft and utility helicopters, in addition to Russian tanker planes. India also just recently purchased the S-400 missile system.

Even though India has begun to reorient itself militarily toward other countries—the U.S., Israel, France and Italy—and has substituted foreign imports by slowly developing its own capabilities, a large number of new Indo-Russian projects are in the conceptual or implementation stages. In December 2021, in the framework of the so-called “2+2 Dialogue” (foreign and defense ministers), India and Russia began a new phase in their military-technological cooperation. Incidentally, India has used this very format for furthering cooperation in strategic, security and intelligence issues with four of its key strategic partners: Australia, the U.S., Japan and the newly added Russia. Russia and India agreed upon a further deepening of mutual military relations for ten years (until 2031). What is new is that next to the traditional purchase of Russian weapons systems, many common research projects and the development of new weapons systems—with their production taking place equally in both countries—have been agreed upon. This production includes new frigates, helicopters, submarines, cruise missiles and even Kalashnikovs.

The depth of this mutual engagement, and especially India’s dependence, highlights a huge dilemma that might not only have drastic strategic consequences, but also long-lasting regional repercussions. The worldwide sanctions issued against Russia aim at the Russian economy and military. When it comes to the procurement of such crucial components as microchips or airline parts, Russia is soon expected to face shortages, essentially crippling its capacity to repair, construct, or have spare parts available (let alone construct new equipment). Unless other countries, such as China, circumvent international sanctions and step in, the expected Russian inability to take care of its own military will have a spill-over effect. Russia is unlikely to be able to fulfill its contractual obligations toward India, and the lack of spare parts also has the potential to cripple India’s own military with regard to its Russian weapons equipment. The procurement agreements and common projects are, hence, all in jeopardy and India, now more than ever, depends on Russian goodwill.

Next to military dependence, there are other concomitant effects in the economic and political sphere that influence Indian voting behavior. The worldwide sanctions have already led to dramatic increases in oil and gas prices, with India relying on imports for up to 80 percent of its needs. India will, therefore, have to pay much more for such crucial imports. Military imports from other countries aimed at substituting Russian equipment will also be much more expensive. All of this deals the Indian economy another blow—an economy that has been especially hit hard by the COVID-19 pandemic. And politically, Indian hegemony in South Asia has been markedly under pressure, in no small part because of the China-Pakistan axis. In the eyes of India, this axis poses a serious threat to an already highly volatile Indo-Pakistan relationship. In addition, the Indo-China relationship reached a new low in May 2020 when Chinese infrastructure projects along the Himalayan borderlands led to fighting and the killing of soldiers. In addition, the Chinese claims to the South China Sea are categorically disputed by India. Chinese overtures toward Sri Lanka, the Maldives, and especially Pakistan in the framework of the Belt and Road Initiative are also regarded with growing discontent, as India claims that China is following a policy of encircling India.

In its 75th year of independence, India is practicing classic realpolitik, trying not to alienate Russia while pledging rhetorical support for Ukraine. The contradictory consequence is that Russia has now offered more discounted oil, gas, and investments, while at the same time the UK has suggested its military relationship with India could be upgraded—and has offered weapons made in the UK. For the Indian political establishment, India cannot forgo Russian support, either militarily or as a producer of cheap oil and gas. Going forward, India’s military will need to protect its national security and project Indian influence and power well beyond its borders.

Arndt Michael is a Lecturer in the Department of Political Science, University of Freiburg (Germany), author of the multi-award-winning book India’s Foreign Policy and Regional Multilateralism (Palgrave Macmillan, 2013), and co-editor of Indien Verstehen (Understanding India, Springer, 2016). His articles have been published in Asian Security, Cambridge Review of International Affairs, Harvard Asia Quarterly, India Quarterly and India Review.

Penn Wharton: U.S. Budget Model

The U.S. Fiscal Imbalance: June 2022

[from Penn Wharton, University of Pennsylvania]

We estimate that, under current law, the U.S. federal government faces a permanent fiscal imbalance equal to over 10 percent of all future GDP, as future federal spending outpaces tax and related receipts. Federal government debt will climb to 236 percent of GDP by 2050 and to over 800 percent of GDP by the year 2095 (within 75 years).

Read the full analysis [archived PDF].

View the data [archived XLSX].

Brief based on work by Agustin Diaz, Jagadeesh Gokhale and Kent Smetters. Prepared by Mariko Paulson.

EconoSpeak: Tariffs and Inflation

[from EconoSpeak, posted by Kevin Quinn]

Jason Furman and Janet Yellen have both suggested that cutting Trump’s tariffs would be anti-inflationary. But most economists agree that the incidence of the tariffs falls for the most part on U.S. consumers, not foreign suppliers (pace the treasonous and ignorant former president, who crowed about all the revenues we were raising from China). So how is a tax cut anti-inflationary? There is a supply-side effect, which is all to the good, but the demand-side effects may well wash that out. So get rid of the tariffs but reverse the Trump tax cuts, which Manchin favors, through reconciliation. Taxes remain the same, so we’ve neutralized the effects on demand; and we still get the good supply-side effects of a more rational global division of labor.

COVID-19 and “Naïve Probabilism”

[from the London Mathematical Laboratory]

In the early weeks of the 2020 U.S. COVID-19 outbreak, guidance from the scientific establishment and government agencies included a number of dubious claims—masks don’t work, there’s no evidence of human-to-human transmission, and the risk to the public is low. These statements were backed by health authorities, as well as public intellectuals, but were later disavowed or disproven, and the initial under-reaction was followed by an equally extreme overreaction and the imposition of draconian restrictions on human social activities.

In a recent paper, LML Fellow Harry Crane examines how these early missteps ultimately contributed to higher death tolls, prolonged lockdowns, and diminished trust in science and government leadership. Even so, the organizations and individuals most responsible for misleading the public suffered few or no consequences, or even benefited from their mistakes. As he discusses, this perverse outcome can be seen as the result of authorities applying a formulaic procedure of “naïve probabilism” in facing highly uncertain and complex problems, largely assuming that decision-making under uncertainty boils down to probability calculations and statistical analysis.

This attitude, he suggests, might be captured in a few simple “axioms of naïve probabilism”:

Axiom 1: The more complex the problem, the more complicated the solution.

This idea is a hallmark of naïve decision making. The COVID-19 outbreak was highly complex, being a novel virus of uncertain origins spreading through an interconnected global society. But the potential usefulness of masks was not one of these complexities. The mask mistake was consequential not because masks were the antidote to COVID-19, but because they were a low-cost measure whose effect would be neutral at worst; wearing a mask can’t hurt, and may well reduce the spread of a virus.

Yet the experts neglected common sense in favor of a more “scientific response” based on rigorous peer review and sufficient data. Two months after the initial U.S. outbreak, a study confirmed the obvious, and masks went from being strongly discouraged to being mandated by law. Precious time had been wasted, many lives lost, and the economy stalled.

Crane also considers another rule of naïve probabilism:

Axiom 2: Until proven otherwise, assume that the future will resemble the past.

In the COVID-19 pandemic, of course, there was at first no data that masks work, no data that travel restrictions work, no data of human-to-human transmission. How could there be? Yet some naïve experts took this as a reason to maintain the status quo. Indeed, many universities refused to do anything in preparation until a few cases had been detected on campus—at which point they had some data, as well as hundreds or thousands of other as yet undetected infections.

Crane touches on some of the more extreme examples of this kind of thinking, which assumes that whatever can’t be explained in terms of something that happened in the past is speculative, non-scientific, and unjustifiable:

“This argument was put forward by John Ioannidis in mid-March 2020, as the pandemic outbreak was already spiralling out of control. Ioannidis wrote that COVID-19 wasn’t a ‘once-in-a-century pandemic,’ as many were saying, but rather a ‘once-in-a-century data-fiasco’. Ioannidis’s main argument was that we knew very little about the disease, its fatality rate, and the overall risks it poses to public health; and that in face of this uncertainty, we should seek data-driven policy decisions. Until the data was available, we should assume COVID-19 acts as a typical strain of the flu (a different disease entirely).”

Unfortunately, waiting for the data also means waiting too long if the virus turns out to be more serious. This is like waiting to hit the tree before accepting that the available data indeed support wearing a seatbelt. Moreover, in the pandemic example, this “lack of evidence” argument ignored other evidence from before the virus entered the United States. China had locked down a city of 10 million; Italy had locked down its entire northern region, with the rest of the country soon to follow. There was worldwide consensus that the virus was novel, that it was spreading fast, and that medical communities had no idea how to treat it. That’s data, and plenty of information to act on.

Crane goes on to consider a third axiom of naïve probabilism, one that aims to turn ignorance into a strength. Overall, he argues, these axioms, despite being widely used by many prominent authorities and academic experts, actually capture a set of dangerous fallacies for action in the real world.

In reality, complex problems call for simple, actionable solutions; the past doesn’t repeat indefinitely (i.e., COVID-19 was never the flu); and ignorance is not a form of wisdom. The Naïve Probabilist’s primary objective is to be accurate with high probability rather than to protect against high-consequence, low-probability outcomes. This goes against common sense principles of decision making in uncertain environments with potentially very severe consequences.

Importantly, Crane emphasizes, the hallmark of Naïve Probabilism is naïveté, not ignorance, stupidity, crudeness, or other such base qualities. The typical Naïve Probabilist lacks not knowledge or refinement, but the experience and good judgment that come from making real decisions with real consequences in the real world. The most prominent naïve probabilists are recognized (academic) experts in mathematical probability or, relatedly, statistics, physics, psychology, economics, epistemology, medicine, or the so-called decision sciences. Moreover, and worryingly, the best-known naïve probabilists are quite sophisticated, skilled in the art of influencing public policy decisions without suffering from the risks those policies impose on the rest of society.

Read the paper [archived PDF].

World-Watching: Global Energy Tracker

[from the Council on Foreign Relations]

by Benn Steil and Benjamin Della Rocca

The Global Energy Tracker allows you to gauge trends in energy use across the globe through time.

The charts on the tracker page compile data on energy-consumption trends in seventy-nine countries going back to 1990. Each chart shows how much energy a given country consumes from nine different sources.

The charts display each country’s consumption data for each energy source by the amount of exajoules consumed, by exajoules consumed per capita, and as a share of that country’s total energy consumption. (Exajoules are a measure of energy; one exajoule is roughly equivalent to California’s annual electricity use.)

As the legend indicates, five energy sources covered by the tracker—coal, oil, natural gas, biofuels, and other (unclassified)—emit high levels of carbon dioxide. Four others—solar, wind, nuclear, and hydroelectric—are low-carbon emitters.

Together, the charts reveal significant trends in global energy usage. They show, for example, that high-carbon energy sources—especially oil—are the world’s dominant source of power. On average, 83 percent of tracker countries’ energy comes from high-carbon sources, and 37 percent specifically from oil.

Low-carbon sources, however, are on the rise, particularly in developed countries. Since 2010, the United States’ low-carbon consumption share climbed from 12 to 16 percent, the United Kingdom’s from 10 to 19 percent, and Germany’s from 14 to 19 percent. China, the world’s largest energy consumer, saw its low-carbon share rise from 9 to 15 percent. Rapid cost declines for low-carbon sources such as wind and solar, beneficiaries of technological innovation, explain much of the change. Still, low-carbon power’s share has actually declined in some rich countries, such as Japan—where it has fallen from 18 to 11 percent.

Some tracker countries rely heavily on low-carbon energy. Twenty-five percent of Canada’s energy and 29 percent of Brazil’s, for example, come from hydroelectric power—compared with 9 percent for tracker countries on average. France derives over a third of its energy from nuclear. Other countries remain heavy users of higher-carbon sources. China derives 56 percent of its power from coal—although that figure is down from 70 percent a decade ago.

View the Global Energy Tracker.

WANG Huiyao: To Save Global Trade, Start Small

[from the Center for China and Globalization]

by WANG Huiyao (王辉耀), Founder of the Center for China and Globalization

The global economy is being rocked by war, sanctions and spiraling commodity prices—not to mention the ongoing strain of the pandemic, geopolitical tensions and climate change. These compounding risks present a serious challenge to the system of open trade that the World Trade Organization was designed to uphold. But they also offer a chance for the beleaguered organization, which is holding its first ministerial conference since 2017, to prove its continuing relevance.

The WTO has traditionally focused on combating protectionism—measures designed to insulate producers from international competition. Now, though, the biggest threats to free trade come from policies meant to safeguard national security and protect citizens from risks, such as those related to health, the environment or digital spaces.

Former WTO Director-General Pascal Lamy has called this growing use of export controls, cybersecurity laws, investment blacklists, reshoring incentives and the like “precautionism.” It’s been on the rise since the start of the pandemic, when many countries moved to restrict exports of medical supplies and other essentials. COVID-19 has also raised concerns about the vulnerability of supply chains, particularly those dependent on geopolitical rivals.

The world’s two biggest trading nations, the United States and China, have both engaged in precautionism. The U.S. is actively pursuing a policy of “friend-shoring”—shifting trade flows from potentially hostile countries to friendlier ones. China’s “dual circulation” strategy aims in part to reduce dependence on foreign imports, especially technology, while its government has long imposed limits on data flows in and out of the country.

With Russia’s invasion of Ukraine, the momentum toward friend-shoring has grown. Meanwhile, food shortages and surging prices have triggered another round of precautionary measures: Since the war began, 63 countries have imposed more than 100 export restrictions on fertilizer and foodstuffs.

While the impulse driving such policies is understandable, the trend could cause great harm if allowed to run unchecked. It will increase inflation and depress global growth, especially if it involves costly redeployment of supply chains away from efficient producers such as China. A recent WTO study estimated that decoupling the global economy into “Western” and “Eastern” blocs would wipe out nearly 5% of global output, the equivalent of $4 trillion.

As a recent study by the International Monetary Fund points out, the way to make global value chains more resilient is to diversify, not dismantle them. Turning away from open trade will only make states more vulnerable to economic shocks such as war, disease or crop failures.

The WTO is an obvious vehicle to rally collective action on these issues. However, like other global institutions, it has been weakened by years of deadlock. At this week’s meeting, countries should start to build positive momentum with some small but symbolically significant breakthroughs to show the WTO can still mobilize joint action.

Given current threats to food security, at the very least members should agree not to restrict exports of foodstuffs purchased for the World Food Programme. A step further would be a joint statement calling on members to keep trade in food and agricultural products open and avoid imposing unjustified export restrictions. There should also be closer coordination to smooth supply chains and clogged logistics channels.

Another low-hanging fruit is finally securing a waiver covering intellectual property rights for COVID-19-related products. This proposal has languished for over 18 months but has now been redrafted to address concerns from the U.S. and European Union. Signing it would go some way toward expanding global access to vaccines, which are still sorely needed in many parts of the world.

Beyond this week, the WTO secretariat and members need to develop a work program to reform the organization. This should include developing a framework to ensure that if states do take precautionary measures, they do so in a transparent, rules-based manner that does not slide into more harmful forms of protectionism.

Reviving the WTO’s defunct dispute settlement mechanism is a clear priority. Twenty-five members have agreed to an interim arrangement that would function in a similar way. More members should join this agreement, ideally including the U.S., and start negotiating the full restoration of a binding mechanism. They should also set clear criteria for carveouts for legitimate precautionary measures related to national security, healthcare and environmental issues.

No one should expect big breakthroughs in Geneva. But practical agreements on immediate priorities such as food security and vaccines would at least help to reassert the WTO’s relevance and show that the world’s trading partners are not simply going to give up on multilateralism. At this dangerous moment, even small victories are welcome.

U.S. Bureau of Economic Analysis: Marine Economy, 2020

[from the U.S. Bureau of Economic Analysis]

The marine economy accounted for 1.7 percent, or $361.4 billion, of current-dollar U.S. gross domestic product (GDP) in 2020 and 1.7 percent, or $610.3 billion, of current-dollar gross output. Real (inflation-adjusted) GDP for the marine economy decreased 5.8 percent from 2019 to 2020, compared with a 3.4 percent decrease for the overall U.S. economy. Real gross output for the marine economy decreased 8.5 percent, while marine economy compensation decreased 1.2 percent, and employment decreased 10.8 percent.

Read the current release [archived PDF].