Economics-Watching: Multivariate Core Trend Inflation

[from the Federal Reserve Bank of New York]

Overview

The Multivariate Core Trend (MCT) model measures inflation’s persistence in the seventeen core sectors of the personal consumption expenditures (PCE) price index.

Whether inflation is short-lived or persistent, concentrated in a few sectors or broad-based, is of deep relevance to policymakers. We estimate a dynamic factor model on monthly data for the major sectors of the personal consumption expenditures (PCE) price index to assess the extent of inflation persistence and its broadness. The results give a measure of trend inflation and shed light on whether inflation dynamics are dominated by a trend common across sectors or are sector-specific.

The New York Fed updates the MCT estimates and shares sectoral insights at or shortly after 2 p.m. on the first Monday after the release of personal consumption expenditures (PCE) price index data from the Bureau of Economic Analysis. Data are available for download.

September 2023 Update

  • Multivariate Core Trend (MCT) inflation was 2.9 percent in September, a 0.3 percentage point increase from August (which was revised up from 2.5 percent). The 68 percent probability band is (2.4, 3.3).
  • Services ex-housing accounted for 0.54 percentage point (ppt) of the increase in the MCT estimate relative to its pre-pandemic average, while housing accounted for 0.50 ppt. Core goods had the smallest contribution, 0.03 ppt.
  • A large part of the persistence in housing and services ex-housing is explained by the sector-specific component of the trend.

Latest Release: 2:00 p.m. ET October 31, 2023

View the Multivariate Core Trend of PCE Inflation data here.

Frequently Asked Questions

What is the goal of the Multivariate Core Trend (MCT) analysis?

The New York Fed aims to provide a measure of inflation’s trend, or “persistence,” and identify where the persistence is coming from.

What data are reported?

The New York Fed’s interactive charts report monthly MCT estimates from 1960 to the present. The New York Fed also provides estimates of how much three broad sectors (core goods, core services excluding housing, and housing) are contributing to overall trend inflation over the same time span. The New York Fed further distinguishes whether the persistence owes to common or sector-specific components. Data are available for download.

What is the release schedule?

The New York Fed updates the estimate of inflation persistence and shares sectoral insights following the release of PCE price data from the U.S. Bureau of Economic Analysis each month.

What is the modeling strategy?

A dynamic factor model with time-varying parameters is estimated on monthly data for the seventeen major sectors of the PCE price index. The model decomposes each sector’s inflation as the sum of a common trend, a sector-specific trend, a common transitory shock, and a sector-specific transitory shock. The trend in PCE inflation is constructed as the sum of the common and the sector-specific trends weighted by the expenditure shares.

The New York Fed uses data from all seventeen of the PCE’s sectors; however, in constructing the trend in PCE inflation, we exclude the volatile non-core sectors (that is, food and energy). The approach builds on Stock and Watson’s 2016 “Core Inflation and Trend Inflation.”
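To make the decomposition concrete, here is a minimal simulation sketch in Python. It is not the New York Fed’s estimation code (the actual model is estimated on PCE data with time-varying parameters); the number of sectors, expenditure shares, loadings, and volatilities below are illustrative assumptions chosen only to show how the four components add up and how the trend is aggregated.

```python
import numpy as np

rng = np.random.default_rng(0)

T, N = 240, 17                       # months, number of sectors (illustrative)
shares = rng.dirichlet(np.ones(N))   # hypothetical expenditure shares
loadings = rng.uniform(0.5, 1.5, N)  # hypothetical sector loadings on the common trend

# Simulated latent components: trends as random walks, transitory parts as noise.
common_trend = np.cumsum(0.05 * rng.standard_normal(T))                # common trend
sector_trend = np.cumsum(0.05 * rng.standard_normal((T, N)), axis=0)   # sector-specific trends
common_shock = 0.3 * rng.standard_normal(T)                            # common transitory shock
sector_shock = 0.5 * rng.standard_normal((T, N))                       # sector-specific transitory shocks

# Each sector's monthly inflation is the sum of the four components.
sector_inflation = (loadings * common_trend[:, None] + sector_trend
                    + loadings * common_shock[:, None] + sector_shock)

# The trend keeps only the persistent pieces, weighted by expenditure shares.
# (In the actual construction the volatile non-core sectors, food and energy,
# are excluded from the weighting; here all simulated sectors are treated as core.)
trend = (shares * (loadings * common_trend[:, None] + sector_trend)).sum(axis=1)
aggregate = (shares * sector_inflation).sum(axis=1)

print(f"volatility of aggregate inflation: {aggregate.std():.2f}")
print(f"volatility of its trend component: {trend.std():.2f}")
```

The point of the sketch is the accounting: the trend strips out both transitory components before the expenditure-share weighting, which is what distinguishes it from simply aggregating sectoral inflation rates.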

How does the MCT measure differ from the core personal consumption expenditures (PCE) inflation measure?

The core inflation measure simply removes the volatile food and energy components. The MCT model seeks to further remove the transitory variation from the core sectoral inflation rates. This has been key in understanding inflation developments in recent years because, during the pandemic, many core sectors (motor vehicles and furniture, for example) were hit by unusually large transitory shocks. An ideal measure of inflation persistence should filter those out.

PCE data are subject to revision by the Bureau of Economic Analysis (BEA). How does that affect MCT estimates?

Monthly and other periodic BEA revisions to PCE price data do lead to reassessments of the estimated inflation persistence as measured by the MCT estimates, and larger revisions may lead to a more significant reassessment. A recent example of such a reassessment is described on Liberty Street Economics in “Inflation Persistence: Dissecting the News in January PCE Data.”

Historical estimates in our MCT data series back to 1960 are based on the latest vintage of data available and incorporate all prior revisions.

How does the MCT Inflation measure relate to other inflation measures?

The MCT model adds to the set of tools that aim at measuring the persistent component of PCE price inflation. Some approaches, such as the Cleveland Fed’s Median PCE and the Dallas Fed’s Trimmed Mean, rely on the cross-sectional distribution of price changes in each period. Other approaches, such as the New York Fed’s Underlying Inflation Gauge (UIG), rely on frequency-domain time series smoothing methods. The MCT approach shares some features with these measures, namely exploiting the cross-sectional distribution of price changes and using time series smoothing techniques. But the MCT model also has some features that are particularly relevant to inflation data: for example, it allows for outliers, and it lets both the noisiness of the data and each sector’s relation to the common component change over time.
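For comparison, the cross-sectional measures mentioned above can be illustrated schematically. The sketch below computes a weighted median and a symmetric weighted trimmed mean for one month of hypothetical component price changes; the official Cleveland Fed and Dallas Fed series are built from detailed PCE component data with calibrated, asymmetric trim points, so the numbers and the trimming rule here are purely illustrative.

```python
import numpy as np

def weighted_median(price_changes, weights):
    """Component price change at which cumulative expenditure weight crosses 50%."""
    order = np.argsort(price_changes)
    x, w = price_changes[order], weights[order] / weights.sum()
    return x[np.searchsorted(np.cumsum(w), 0.5)]

def trimmed_mean(price_changes, weights, trim=0.2):
    """Drop the top and bottom `trim` share of expenditure weight, average the rest.
    Symmetric trim for simplicity; the official trim points are asymmetric."""
    order = np.argsort(price_changes)
    x, w = price_changes[order], weights[order] / weights.sum()
    cum = np.cumsum(w)
    keep = (cum > trim) & (cum <= 1.0 - trim)
    return np.average(x[keep], weights=w[keep])

# Hypothetical one-month annualized price changes and expenditure weights.
changes = np.array([9.0, 4.2, 3.1, 2.5, 2.4, 1.8, -0.5, -6.0])
weights = np.array([0.05, 0.10, 0.20, 0.15, 0.20, 0.15, 0.10, 0.05])

print("weighted mean:  ", np.average(changes, weights=weights))
print("weighted median:", weighted_median(changes, weights))
print("trimmed mean:   ", trimmed_mean(changes, weights))
```

Both trimmed measures damp the influence of the outsized moves at either end of the cross-section in a given month, whereas the MCT approach instead models which part of each sector’s movement is persistent.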

How useful can MCT data be for policymakers?

The MCT model provides a timely measure of inflationary pressure and offers insights into how much price changes comove across sectors.

View the Multivariate Core Trend of PCE Inflation data here.

COVID-19 and “Naïve Probabilism”

[from the London Mathematical Laboratory]

In the early weeks of the 2020 U.S. COVID-19 outbreak, guidance from the scientific establishment and government agencies included a number of dubious claims: masks don’t work, there is no evidence of human-to-human transmission, and the risk to the public is low. These statements were backed by health authorities as well as public intellectuals, but were later disavowed or disproven. The initial under-reaction was followed by an equal overreaction and the imposition of draconian restrictions on human social activities.

In a recent paper, LML Fellow Harry Crane examines how these early mis-steps ultimately contributed to higher death tolls, prolonged lockdowns, and diminished trust in science and government leadership. Even so, the organizations and individuals most responsible for misleading the public suffered little or no consequences, or even benefited from their mistakes. As he discusses, this perverse outcome can be seen as the result of authorities applying a formulaic procedure of “naïve probabilism” in facing highly uncertain and complex problems, and largely assuming that decision-making under uncertainty boils down to probability calculations and statistical analysis.

This attitude, he suggests, might be captured in a few simple “axioms of naïve probabilism”:

Axiom 1: The more complex the problem, the more complicated the solution.

This idea is a hallmark of naïve decision making. The COVID-19 outbreak was highly complex: a novel virus of uncertain origin spreading through an interconnected global society. But the potential usefulness of masks was not one of these complexities. The mask mistake was consequential not because masks were the antidote to COVID-19, but because they were a low-cost measure whose effect would be neutral at worst; wearing a mask can’t hurt in reducing the spread of a virus.

Yet the experts neglected common sense in favor of a more “scientific response” based on rigorous peer review and sufficient data. Two months after the initial U.S. outbreak, a study confirmed the obvious, and masks went from being strongly discouraged to being mandated by law. Precious time had been wasted, many lives lost, and the economy stalled.

Crane also considers another rule of naïve probabilism:

Axiom 2: Until proven otherwise, assume that the future will resemble the past.

In the COVID-19 pandemic, of course, there was at first no data that masks work, no data that travel restrictions work, no data on human-to-human transmission. How could there be? Yet some naïve experts took this as a reason to maintain the status quo. Indeed, many universities refused to do anything in preparation until a few cases had been detected on campus, at which point they had some data, as well as hundreds or thousands of other as-yet-undetected infections.

Crane touches on some of the more extreme examples of this kind of thinking, which assumes that whatever can’t be explained in terms of something that happened in the past is speculative, non-scientific, and unjustifiable:

“This argument was put forward by John Ioannidis in mid-March 2020, as the pandemic outbreak was already spiralling out of control. Ioannidis wrote that COVID-19 wasn’t a ‘once-in-a-century pandemic,’ as many were saying, but rather a ‘once-in-a-century data-fiasco’. Ioannidis’s main argument was that we knew very little about the disease, its fatality rate, and the overall risks it poses to public health; and that in face of this uncertainty, we should seek data-driven policy decisions. Until the data was available, we should assume COVID-19 acts as a typical strain of the flu (a different disease entirely).”

Unfortunately, waiting for the data also means waiting too long if the virus turns out to be more serious. This is like waiting to hit the tree before accepting that the available data indeed supports wearing a seatbelt. Moreover, in the pandemic example, this “lack of evidence” argument ignores other evidence from before the virus entered the United States. China had locked down a city of 10 million; Italy had locked down its entire northern region, with the whole country soon to follow. There was worldwide consensus that the virus was novel, that it was spreading fast, and that medical communities had no idea how to treat it. That’s data, and plenty of information to act on.

Crane goes on to consider a third axiom of naïve probabilism, which aims to turn ignorance into a strength. Overall, he argues, these axioms, despite being widely used by many prominent authorities and academic experts, actually capture a set of dangerous fallacies for action in the real world.

In reality, complex problems call for simple, actionable solutions; the past doesn’t repeat indefinitely (i.e., COVID-19 was never the flu); and ignorance is not a form of wisdom. The Naïve Probabilist’s primary objective is to be accurate with high probability rather than to protect against high-consequence, low-probability outcomes. This goes against common sense principles of decision making in uncertain environments with potentially very severe consequences.

Importantly, Crane emphasizes, the hallmark of Naïve Probabilism is naïveté, not ignorance, stupidity, crudeness or other such base qualities. The typical Naïve Probabilist lacks not knowledge or refinement, but the experience and good judgment that come from making real decisions with real consequences in the real world. The most prominent naïve probabilists are recognized (academic) experts in mathematical probability or related fields such as statistics, physics, psychology, economics, epistemology, medicine, or the so-called decision sciences. Moreover, and worryingly, the best-known naïve probabilists are quite sophisticated, skilled in the art of influencing public policy decisions without suffering from the risks those policies impose on the rest of society.

Read the paper. [Archived PDF]

Mathematics and the World: London Mathematical Laboratory

Stability of Heteroclinic Cycles in Rings of Coupled Oscillators

[from the London Mathematical Laboratory]

Complex networks of interconnected physical systems arise in many areas of mathematics, science and engineering. Many such systems exhibit heteroclinic cycles: dynamical trajectories that show roughly periodic behavior, with non-convergent time averages. In these systems, average quantities fluctuate continuously, although the fluctuations slow down as the dynamics repeatedly and systematically approach a set of fixed points. Despite this general understanding, key open questions remain concerning the existence and stability of such cycles in general dynamical networks.

In a new paper [archived PDF], LML Fellow Claire Postlethwaite and Rob Sturman of the University of Leeds investigate a family of coupled map lattices defined on ring networks and establish stability properties of the possible families of heteroclinic cycles. They first consider a simple system of N coupled nodes, each based on the logistic map, with coupling between nodes determined by a parameter γ. If γ = 0, each node independently follows logistic map dynamics, showing stable periodic cycles or chaotic behavior. The authors design the coupling to have a general inhibitory effect, driving the dynamics toward zero. Intuitively, this should encourage oscillatory behavior: a node can be active (take a non-zero value) and thereby inhibit the nodes to which it is connected; it then decays when other nodes in turn inhibit it; and finally it grows back to an active state as the nodes inhibiting it decay in turn. In the simple case of N = 3, for example, these dynamics lead to a trajectory that cycles between three fixed points.
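The cycling intuition can be reproduced in a few lines. The sketch below assumes a simple multiplicative-exponential form for the inhibitory coupling on a one-way ring; the precise equations, parameter names (r, gamma), and values used by Postlethwaite and Sturman may differ, so this is only an illustration of the qualitative behavior.

```python
import numpy as np

# Minimal sketch (not the authors' exact equations): N logistic maps on a
# one-way ring, each node inhibited by its upstream neighbour through a
# multiplicative exp(-gamma * neighbour) factor.  All values are illustrative.

N, T = 3, 120
r, gamma = 2.8, 4.0                     # logistic parameter, coupling strength
x = 0.30 + 0.01 * np.arange(N)          # near-symmetric initial condition

history = np.empty((T, N))
for t in range(T):
    history[t] = x
    upstream = np.roll(x, 1)            # node j is inhibited by node j-1
    x = r * x * (1.0 - x) * np.exp(-gamma * upstream)

# After a transient, roughly one node at a time sits near the uncoupled
# logistic fixed point 1 - 1/r (about 0.64 here) while the other two are
# suppressed near zero, and the residence times should lengthen as the
# trajectory approaches the heteroclinic cycle.
print(np.round(history[::10], 3))
```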

The authors then extend earlier work to consider larger networks of coupled systems as described by a directed graph, describing how to find the fixed points and heteroclinic connections for such a system. In general, they show, this procedure results in a highly complex and difficult-to-analyze heteroclinic network. Simplifying to the special case of N-node directed graphs with one-way nearest-neighbor coupling, they derive results for the dynamic stability of subcycles within this network, establishing that only one of the subcycles can ever be stable.

Overall, this work demonstrates that heteroclinic networks can typically arise in the phase space dynamics of certain types of symmetric graphs with inhibitory coupling. Moreover, it establishes that at most one of the subcycles can be stable (and hence observable in simulations) for an open set of parameters. Interestingly, Postlethwaite and Sturman find that the dynamics associated with such cycles are not ergodic, so that long-term averages do not converge. In particular, averaged observed quantities such as Lyapunov exponents are ill-defined, and will oscillate at a progressively slower rate.

The authors also address the more general question of whether a stable heteroclinic cycle is likely to be found in the corresponding phase-space dynamics of a randomly generated physical network of nodes. In preliminary investigations using randomly generated Erdős–Rényi graphs, they find that the probability of existence of heteroclinic cycles increases both as the number of nodes in the physical network increases and as the density of edges in the physical network decreases. However, even in cases where the probability of existence of heteroclinic cycles is high, there is also a high chance that the phase space contains a stable fixed point. From this they conclude that the stability of the heteroclinic cycle is important in determining whether the cycle, and the associated slowing down of trajectories, will be observed in the phase space associated with a randomly generated graph.

The paper is available as a pre-print here [archived PDF].

Meaningfulness versus Informativeness

The Decoding Reality book is a classic contemporary analysis of the foundations of physics and the implications for the human world. The scientists don’t see that physics and science are the infrastructure on which the human “quest for meaning” takes place. Ortega (Ortega y Gasset, died in 1955) tells us that a person is “a point of view directed at the universe.” This level of meaning cannot be reduced to bits or qubits or electrons since man is a “linguistic creature” who invents fictional stories to explain “things” that are not things.

The following dialog between Paul Davies (the outstanding science writer) and Vlatko Vedral (the distinguished physicist) grapples with these issues, namely the difference between science as one kind of story and the human interpretation of life and self expressed in “tales” and parables, fictions and beliefs:

Davies: “When humans communicate, a certain quantity of information passes between them. But that information differs from the bits (or qubits) physicists normally consider, inasmuch as it possesses meaning. We may be able to quantify the information exchanged, but meaning is a qualitative property—a value—and therefore hard, maybe impossible, to capture mathematically. Nevertheless the concept of meaning obviously has, well… meaning. Will we ever have a credible physical theory of ‘meaningful information,’ or is ‘meaning’ simply outside the scope of physical science?”

Vedral: “This is a really difficult one. The success of Shannon’s formulation of ‘information’ lies precisely in the fact that he stripped it of all “meaning” and reduced it only to the notion of probability. Once we are able to estimate the probability for something to occur, we can immediately talk about its information content. But this sole dependence on probability could also be thought of as the main limitation of Shannon’s information theory (as you imply in your question). One could, for instance, argue that the DNA has the same information content inside as well as outside of a biological cell. However, it is really only when it has access to the cell’s machinery that it starts to serve its main biological purpose (i.e., it starts to make sense). Expressing this in your own words, the DNA has a meaning only within the context of a biological cell. The meaning of meaning is therefore obviously important. Though there has been some work on the theory of meaning, I have not really seen anything convincing yet. Intuitively we need some kind of a ‘relative information’ concept, information that is not only dependent on the probability, but also on its context, but I am afraid that we still do not have this.”
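Vedral’s point that Shannon’s measure depends on probability alone can be made concrete with a small illustration (not drawn from the book): the information content, or surprisal, of an event with probability p is -log2(p) bits, and entropy is the expected surprisal. Context and meaning appear nowhere in either formula.

```python
import math

def surprisal_bits(p):
    """Shannon information content of an event with probability p, in bits."""
    return -math.log2(p)

def entropy_bits(probs):
    """Expected surprisal (Shannon entropy) of a distribution, in bits."""
    return sum(p * surprisal_bits(p) for p in probs if p > 0)

# A uniform four-letter alphabet (A, C, G, T, say) carries 2 bits per symbol,
# whether or not any cellular machinery exists to "read" it.
print(entropy_bits([0.25, 0.25, 0.25, 0.25]))   # 2.0
print(surprisal_bits(1 / 1024))                 # a 1-in-1024 event: 10.0 bits
```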

For a physicist, all the world is information. The universe and its workings are the ebb and flow of information. We are all transient patterns of information, passing on the recipe for our basic forms to future generations using a four-letter digital code called DNA.

See Decoding Reality.

In this engaging and mind-stretching account, Vlatko Vedral considers some of the deepest questions about the universe and the implications of interpreting it in terms of information. He explains the nature of information, the idea of entropy, and the roots of this thinking in thermodynamics. He describes the bizarre effects of quantum behavior, such as “entanglement” (which Einstein called “spooky action at a distance”), explores cutting-edge work on harnessing quantum effects in hyper-fast quantum computers, and shows how recent evidence suggests that the weirdness of the quantum world, once thought limited to the tiniest scales, may reach into the macro world.

Vedral finishes by considering the answer to the ultimate question: Where did all of the information in the universe come from? The answers he considers are exhilarating, drawing upon the work of distinguished physicist John Wheeler. The ideas challenge our concept of the nature of particles, of time, of determinism, and of reality itself.

Science is an “ontic” quest. Human life is an “ontological” quest. They are a “twisted pair” where each strand must be seen clearly and not confused. The content of your telephone conversation with your friend, say, is not reducible to the workings of a phone or to the subtle electrical engineering and physics involved. A musical symphony is not just “an acoustical blast.”

The “meaning of meaning” is evocative and not logically expressible. There’s a kind of “spooky action at a distance” between these levels of meaning and information, but they are different “realms” or “domains.”