Education and the Triple Helix underneath It

We want to restate the basic instinct and intuitions behind this education or re-education project.

To get at the “schema,” it will help to digress for a moment and absorb this write-up of Professor Richard Lewontin’s (Harvard biology) 2002 masterpiece, The Triple Helix: Gene, Organism, and Environment.

The blurb from Harvard University Press tells us:

“One of our most brilliant evolutionary biologists, Richard Lewontin has also been a leading critic of those—scientists and non-scientists alike—who would misuse the science to which he has contributed so much. In The Triple Helix, Lewontin the scientist and Lewontin the critic come together to provide a concise, accessible account of what his work has taught him about biology and about its relevance to human affairs. In the process, he exposes some of the common and troubling misconceptions that misdirect and stall our understanding of biology and evolution.

The central message of this book is that we will never fully understand living things if we continue to think of genes, organisms, and environments as separate entities, each with its distinct role to play in the history and operation of organic processes. Here Lewontin shows that an organism is a unique consequence of both genes and environment, of both internal and external features. Rejecting the notion that genes determine the organism, which then adapts to the environment, he explains that organisms, influenced in their development by their circumstances, in turn create, modify, and choose the environment in which they live.

The Triple Helix is vintage Lewontin: brilliant, eloquent, passionate and deeply critical. But it is neither a manifesto for a radical new methodology nor a brief for a new theory. It is instead a primer on the complexity of biological processes, a reminder to all of us that living things are never as simple as they may seem.”

Borrow from Lewontin the idea of a “triple helix” and apply it to the ultimate wide-angle view of this process of understanding. The educational triple helix includes and always tries to coordinate:

  1. The student and their life (i.e., every student is first of all a person who is playing the role of a student). Every person is born, lives, and dies.
  2. The student and their field, as related to the rest of the campus. (William James: all knowledge is relational.)
  3. The student and the world. (Container ships from Kaohsiung, Taiwan are bringing Lenovo and Acer computers to Bakersfield, California in a world of techno-commerce, exchange rates, insurance, customs, contractual arrangements, etc. In other words, always with some sense of the global political economy.)

The student keeps the triple helix “running” in the back of the mind and tries to create a “notebook of composite sketches” of the world and its workings, and of oneself. This notebook develops through a life as a kind of portable, “homemade” university that stays alive, current, and vibrant long after one has forgotten the mean value theorem and the names and order of the six wives of Henry VIII.

The reader should think of Emerson’s point from the Journals of Ralph Waldo Emerson: 1824–1832—“The things taught in schools and colleges are not an education, but the means to an education.”

Economics-Watching: “Doing Nothing” Is Still Doing a Lot

[from the Federal Reserve Bank of Philadelphia: a speech by Patrick T. Harker, President and Chief Executive Officer, at the National Association of Corporate Directors Webinar, Philadelphia, PA (Virtual)]

Good afternoon, everyone.

I appreciate that you’re all giving up part of the end of your workday for us to be together, if only virtually.

My thanks to my good friend, Rick Mroz, for that welcome and introduction.

I do believe we’re going to have a productive session. But just so you all know, as much as I enjoy speaking and providing my outlook, I enjoy a good conversation even more.

So, first, let’s take a few minutes so I can give you my perspective on where we are headed, and then I will be more than happy to take questions and hear what’s on your minds.

But before we get into any of that, I must begin with the standard Fed disclaimer: The views I express today are my own and do not necessarily reflect those of anyone else on the Federal Open Market Committee (FOMC) or in the Federal Reserve System.

Put simply, this is one of those times where the operative words are, “Pat said,” not “the Fed said.”

Now, to begin, I’m going to first address the two topics that I get asked about most often: interest rates and inflation. And I would guess they are the topics front and center in many of your minds as well.

After the FOMC’s last policy rate hike in July, I went on record with my view that, if economic and financial conditions evolved roughly as I expected they would, we could hold rates where they are. And I am pleased that, so far, economic and financial conditions are evolving as I expected, if not perhaps even a tad better.

Let’s look at the current dynamics. There is a steady, if slow, disinflation under way. Labor markets are coming into better balance. And, all the while, economic activity has remained resilient.

Given this, I remain today where I found myself after July’s meeting: Absent a stark turnabout in the data and in what I hear from contacts, I believe that we are at the point where we can hold rates where they are.

In barely more than a year, we increased the policy rate by more than 5 percentage points and to its highest level in more than two decades — 11 rate hikes in a span of 12 meetings prior to September. We not only did a lot, but we did it very fast.

We also turned around our balance sheet policy — and we will continue to tighten financial conditions by shrinking the balance sheet.

The workings of the economy cannot be rushed, and it will take some time for the full impact of the higher rates to be felt. In fact, I have heard a plea from countless contacts asking us to give them some time to absorb the work we have already done.

I agree with them. I am sure policy rates are restrictive, and, as long as they remain so, we will steadily press down on inflation and bring markets into a better balance.

Holding rates steady will let monetary policy do its work. By doing nothing, we are still doing something. And I would argue we are doing quite a lot.

Headline PCE inflation remained elevated in August at 3.5 percent year over year, but it is down 3 percentage points from this time last year. About half of that drop is due to the volatile energy and food components, which, while basic necessities, are typically excluded by economists from the so-called core inflation rate to give a more accurate assessment of the pace of disinflation and its likely path forward.

Well, core PCE inflation has also shown clear signs of progress, and the August monthly reading was its smallest month-over-month increase since 2020.

So, yes, a steady disinflation is under way, and I expect it to continue. My projection is that inflation will drop below 3 percent in 2024 and level out at our 2 percent target thereafter.

However, there can be challenges in assessing the trends in disinflation. For example, September’s CPI report came out modestly on the upside, driven by energy and housing.

Let me be clear about two things. First, we will not tolerate a reacceleration in prices. But second, I do not want to overreact to the normal month-to-month variability of prices. And for all the fancy techniques, the best way to separate a signal from noise remains to average data over several months. Of course, to do so, you need several months of data to start with, which, in turn, demands that, yes, we remain data-dependent but patient and cautious with the data.
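As a side note for readers who want to see the averaging idea in miniature, here is a minimal sketch in Python. The monthly readings are hypothetical and the three-month window is just one common choice; nothing here reflects actual Fed practice or data.

```python
# A minimal sketch of "averaging data over several months," assuming a
# hypothetical series of month-over-month core inflation readings (percent).
# The numbers are illustrative, not actual PCE or CPI data.
import numpy as np

monthly_readings = np.array([0.4, 0.3, 0.1, 0.2, 0.5, 0.1])

# A three-month average smooths month-to-month noise into a trend signal.
window = 3
trend = np.convolve(monthly_readings, np.ones(window) / window, mode="valid")

print(trend)  # one smoothed value per month, once three months of data exist
```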

Turning to the jobs picture, I do anticipate national unemployment to end the year at about 4 percent — just slightly above where we are now — and to increase slowly over the next year to peak at around 4.5 percent before heading back toward 4 percent in 2025. That is a rate in line with what economists call the natural rate of unemployment, or the theoretical level at which labor market conditions support stable inflation at 2 percent.

Now, that said, as you know, there are many factors that play into the calculation of the unemployment rate. For instance, we’ve seen recent months where, even as the economy added more jobs, the unemployment rate increased because more workers moved off the sidelines and back into the labor force. There are many other dynamics at play, too, such as technological changes or public policy issues, like child care or immigration, which directly impact employment.

And beyond the hard data, I also have to balance the soft data. For example, in my discussions with employers throughout the Third District, I hear that given how hard they’ve worked to find the workers they currently have, they are doing all they can to hold onto them.

So, to sum up the labor picture, let me say, simply, I do not expect mass layoffs.

I do expect GDP gains to continue through the end of 2023, before pulling back slightly in 2024. But even as I foresee the rate of GDP growth moderating, I do not see it contracting. And, again, to put it simply, I do not anticipate a recession.

Look, this economy has been nothing if not unpredictable. It has proven itself unwilling to stick to traditional modeling and seems determined to not only bend some rules in one place, but to make up its own in another. However, as frustratingly unpredictable as it has been, it continues to move along.

And this has led me to the following thought: What has fundamentally changed in the economy from, say, 2018 or 2019? In 2018, inflation averaged 2 percent almost to the decimal point and was actually below target in 2019. Unemployment averaged below 4 percent for both years and was as low as 3.5 percent — both nationwide and in our respective states — while policy rates peaked below 2.5 percent.

Now, I’m not saying we’re going to be able to exactly replicate the prepandemic economy, but it is hard to find fundamental differences. Surely, I cannot and will not minimize the immense impacts of the pandemic on our lives and our families, nor the fact that for so many, the new normal still does not feel normal. From the cold lens of economics, I do not see underlying fundamental changes. I could also be wrong, and, trust me, that would not be the first time this economy has made me rethink some of the classic models. We just won’t know for sure until we have more data to look at over time.

And then, of course, there are the economic uncertainties — both national and global — against which we also must contend. The ongoing auto worker strike, among other labor actions. The restart of student loan payments. The potential of a government shutdown. Fast-changing events in response to the tragic attacks against Israel. Russia’s ongoing war against Ukraine. Each and every one deserves a close watch.

These are the broad economic signals we are picking up at the Philadelphia Fed, but I would note that the regional ones we follow are also pointing us forward.

First, while month-over-month activity declined in the Philadelphia Fed’s most recent business outlook surveys, which cover manufacturing and nonmanufacturing firms in the Third District, the six-month outlooks for each remain optimistic for growth.

And we also publish a monthly summary metric of economic activity, the State Coincident Indexes. In New Jersey, the index is up slightly year over year through August, which shows generally positive conditions. However, the three-month number from June through August was down, and while both payroll employment and average hours worked in manufacturing increased during that time, so did the unemployment rate — though a good part of that increase can be explained by more residents moving back into the labor force.

And for those of you joining us from the western side of the Delaware River, Pennsylvania’s coincident index is up more than 4 percent year over year through August and 1.7 percent since June. Payroll employment was up, and the unemployment rate was down; however, the number of average hours worked in manufacturing decreased.

There are also promising signs in both states in terms of business formation. The number of applications specifically for high-propensity businesses — those expected to turn into firms with payroll — remains elevated compared with pre-pandemic levels. Again, a promising sign.

So, it is against this full backdrop that I have concluded that now is the time at which the policy rate can remain steady. But I can hear you ask: “How long will rates need to stay high?” Well, I simply cannot say at this moment. My forecasts are based on what we know as of late 2023. As time goes by, as adjustments are completed, and as we have more data and insights on the underlying trends, I may need to adjust my forecasts, and with them my time frames.

I can tell you three things about my views on future policy. First, I expect rates will need to stay high for a while.

Second, the data and what I hear from contacts and outreach will signal to me when the time comes to adjust policy either way. I really do not expect it, but if inflation were to rebound, I know I would not hesitate to support further rate increases as our objective to return inflation to target is, simply, not negotiable.

Third, I believe that a resolute, but patient, monetary policy stance will allow us to achieve the soft landing that we all wish for our economy.

Before I conclude and turn things over to Rick to kick off our Q&A, I do want to spend a moment on a topic that he and I recently discussed, and it’s something about which I know there is generally great interest: fintech. In fact, I understand there is discussion about NACD hosting a conference on fintech.

Well, last month, we at the Philadelphia Fed hosted our Seventh Annual Fintech Conference, which brought business and thought leaders together at the Bank for two days of real in-depth discussions. And I am extraordinarily proud of the fact that the Philadelphia Fed’s conference has emerged as one of the premier conferences on fintech, anywhere. Not that it’s a competition.

I had the pleasure of opening this year’s conference, which always puts a focus on shifts in the fintech landscape. Much of this year’s conference centered around developments in digital currencies and crypto — and, believe me, some of the discussions were a little, shall we say, “spirited.” However, my overarching point to attendees was the following: Regardless of one’s views, whether in favor of or against such currencies, our reality requires us to move from thinking in terms of “what if” to thinking about “what next.”

In many ways, we’re beyond the stage of thinking about crypto and digital currency and into the stage of having them as reality — just as AI has moved from being the stuff of science fiction to the stuff of everyday life. What is needed now is critical thinking about what is next. And we at the Federal Reserve, both here in Philadelphia and System-wide, are focused on being part of this discussion.

We are also focused on providing not just thought leadership but actionable leadership. For example, the Fed rolled out our new FedNow instant payment service platform in July. With FedNow, we will have a more nimble and responsive banking system.

To be sure, FedNow is not the first instant payment system — other systems, whether operated by individual banks or through third parties, have been operational for some time. But by allowing banks to interact with each other quickly and efficiently to ensure one customer’s payment becomes another’s deposit, we are fulfilling our role in providing a fair and equitable payment system.

Another area where the Fed is assuming a mantle of leadership is in quantum computing, or QC, which has the potential to revolutionize security and problem-solving methodologies throughout the banking and financial services industry. But that upside also comes with a real downside risk, should other not-so-friendly actors co-opt QC for their own purposes.

Right now, individual institutions and other central banks globally are expanding their own research in QC. But just as these institutions look to the Fed for economic leadership, so, too, are they looking to us for technological leadership. So, I am especially proud that this System-wide effort is being led from right here at the Philadelphia Fed.

I could go on and talk about fintech for much longer. After all, I’m actually an engineer more than I am an economist. But I know that Rick is interested in starting our conversation, and I am sure that many of you are ready to participate.

But one last thought on fintech — my answers today aren’t going to be generated by ChatGPT.

On that note, Rick, thanks for allowing me the time to set up our discussion, and let’s start with the Q&A.

[archived PDF of the above speech]

World-Watching: Atlanta Federal Reserve Wage Growth Tracker Was 6.1 Percent in January

[from the Federal Reserve Bank of Atlanta]

The Atlanta Fed’s Wage Growth Tracker was 6.1 percent in January, the same as in December. For people who changed jobs, the Tracker in January was 7.3 percent, compared to 7.7 percent in December. For those not changing jobs, the Tracker was 5.4 percent, compared to the 5.3 percent reading in December.

The Atlanta Fed’s Wage Growth Tracker is a measure of the nominal wage growth of individuals. It is constructed using microdata from the Current Population Survey (CPS), and is the median percent change in the hourly wage of individuals observed 12 months apart. This measure is based on methodology developed by colleagues at the San Francisco Fed.
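For readers who want the methodology in miniature, here is a minimal sketch in Python of the core statistic: the median percent change in hourly wages for individuals observed 12 months apart. The toy wage panel is hypothetical; the real Tracker is built from matched CPS microdata with additional processing.

```python
# A toy sketch of the Wage Growth Tracker's core statistic, assuming a small
# panel of individuals whose hourly wages are observed 12 months apart.
# The real series uses matched CPS microdata and further processing.
import numpy as np

wage_12_months_ago = np.array([15.0, 22.0, 30.0, 18.5, 40.0])
wage_now           = np.array([16.0, 23.5, 31.0, 20.0, 42.5])

pct_change = 100 * (wage_now - wage_12_months_ago) / wage_12_months_ago
tracker = np.median(pct_change)  # median, not mean, per the stated methodology

print(f"Wage growth (median, year over year): {tracker:.1f} percent")
```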

The Wage Growth Tracker is updated once the Atlanta Fed’s CPS dataset is constructed (see the “Explore the Data” tab on that page). This is usually by the second Friday of the month. The exact timing depends on when the Bureau of the Census publishes the microdata from the CPS.

Stay informed of all Wage Growth Tracker updates by subscribing to the Atlanta Fed’s mailing list, subscribing to their RSS feed, downloading their EconomyNow app, or following the Atlanta Fed on Twitter. You can also build your own cuts of Wage Growth Tracker data using the CPS Data Application from CADRE or alternatively from here. Look for instructions and program files in the “Explore the Data” tab on the Wage Growth Tracker page.

Science-Watching: Forecasting New Diseases in Low-Data Settings Using Transfer Learning

[from London Mathematical Laboratory]

by Kirstin Roster, Colm Connaughton & Francisco A. Rodrigues

Abstract

Recent infectious disease outbreaks, such as the COVID-19 pandemic and the Zika epidemic in Brazil, have demonstrated both the importance and difficulty of accurately forecasting novel infectious diseases. When new diseases first emerge, we have little knowledge of the transmission process, the level and duration of immunity to reinfection, or other parameters required to build realistic epidemiological models. Time series forecasts and machine learning, while less reliant on assumptions about the disease, require large amounts of data that are also not available in early stages of an outbreak. In this study, we examine how knowledge of related diseases can help make predictions of new diseases in data-scarce environments using transfer learning. We implement both an empirical and a synthetic approach. Using data from Brazil, we compare how well different machine learning models transfer knowledge between two different dataset pairs: case counts of (i) dengue and Zika, and (ii) influenza and COVID-19. In the synthetic analysis, we generate data with an SIR model using different transmission and recovery rates, and then compare the effectiveness of different transfer learning methods. We find that transfer learning offers the potential to improve predictions, even beyond a model based on data from the target disease, though the appropriate source disease must be chosen carefully. While imperfect, these models offer an additional input for decision makers for pandemic response.

Introduction

Epidemic models can be divided into two broad categories: data-driven models aim to fit an epidemic curve to past data in order to make predictions about the future; mechanistic models simulate scenarios based on different underlying assumptions, such as varying contact rates or vaccine effectiveness. Both model types aid in the public health response: forecasts serve as an early warning system of an outbreak in the near future, while mechanistic models help us better understand the causes of spread and potential remedial interventions to prevent further infections. Many different data-driven and mechanistic models were proposed during the early stages of the COVID-19 pandemic and informed decision-making with varying levels of success. This range of predictive performance underscores both the difficulty and importance of epidemic forecasting, especially early in an outbreak. Yet the COVID-19 pandemic also led to unprecedented levels of data-sharing and collaboration across disciplines, so that several novel approaches to epidemic forecasting continue to be explored, including models that incorporate machine learning and real-time big data streams.

In addition to the COVID-19 pandemic, recent infectious disease outbreaks include Zika virus in Brazil in 2015, Ebola virus in West Africa in 2014–16, Middle East respiratory syndrome (MERS) in 2012, and coronavirus associated with severe acute respiratory syndrome (SARS-CoV) in 2003. This trajectory suggests that further improvements to epidemic forecasting will be important for global public health. Exploring the value of new methodologies can help broaden the modeler’s toolkit to prepare for the next outbreak. In this study, we consider the role of transfer learning for pandemic response.
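For readers unfamiliar with mechanistic models, the sketch below shows a minimal SIR (susceptible-infected-recovered) simulation of the kind the authors later use to generate synthetic outbreak curves. The discrete-time update and the specific transmission and recovery rates are illustrative assumptions, not parameters taken from the paper.

```python
# A minimal SIR simulation sketch, assuming a simple discrete-time (Euler)
# update; beta (transmission rate) and gamma (recovery rate) are illustrative
# values, not the paper's parameters.
import numpy as np

def simulate_sir(beta, gamma, n_days=160, N=1_000_000, I0=10):
    S, I, R = N - I0, I0, 0
    new_cases = []
    for _ in range(n_days):
        infections = beta * S * I / N   # new infections this step
        recoveries = gamma * I          # recoveries this step
        S -= infections
        I += infections - recoveries
        R += recoveries
        new_cases.append(infections)
    return np.array(new_cases)

# Different (beta, gamma) pairs stand in for "source" and "target" diseases.
source_curve = simulate_sir(beta=0.30, gamma=0.10)
target_curve = simulate_sir(beta=0.22, gamma=0.10)
```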

Transfer learning refers to a collection of techniques that apply knowledge from one prediction problem to solve another, often using machine learning and with many recent applications in domains such as computer vision and natural language processing. Transfer learning leverages a model trained to execute a particular task in a particular domain, in order to perform a different task or extrapolate to a different domain. This allows the model to learn the new task with less data than would normally be required, and is therefore well-suited to data-scarce prediction problems. The underlying idea is that skills developed in one task, for example the features that are relevant to recognize human faces in images, may be useful in other situations, such as classification of emotions from facial expressions. Similarly, there may be shared features in the patterns of observed cases among similar diseases.

The value of transfer learning for the study of infectious diseases is relatively under-explored. The majority of existing studies on diseases remain in the domain of computer vision and leverage pre-trained neural networks to make diagnoses from medical images, such as retinal diseases, dental diseases, or COVID-19. Coelho and colleagues (2020) explore the potential of transfer learning for disease forecasts. They train a Long Short-Term Memory (LSTM) neural network on dengue fever time series and make forecasts directly for two other mosquito-borne diseases, Zika and Chikungunya, in two Brazilian cities. Even without any data on the two target diseases, their model achieves high prediction accuracy four weeks ahead. Gautam (2021) uses COVID-19 data from Italy and the USA to build an LSTM transfer model that predicts COVID-19 cases in countries that experienced a later pandemic onset.

These studies provide empirical evidence that transfer learning may be a valuable tool for epidemic forecasting in low-data situations, though research is still limited. In this study, we aim to contribute to this empirical literature not only by comparing different types of knowledge transfer and forecasting algorithms, but also by considering two different pairs of endemic and novel diseases observed in Brazilian cities, specifically (i) dengue and Zika, and (ii) influenza and COVID-19. With an additional analysis on simulated time series, we hope to provide theoretical guidance on the selection of appropriate disease pairs, by better understanding how different characteristics of the source and target diseases affect the viability of transfer learning.

Zika and COVID-19 are two recent examples of novel emerging diseases. Brazil experienced a Zika epidemic in 2015–16 and the WHO declared a public health emergency of global concern in February 2016. Zika is caused by an arbovirus spread primarily by mosquitoes, though other transmission methods, including congenital and sexual, have also been observed. Zika belongs to the family of viral hemorrhagic fevers and symptoms of infection share some commonalities with other mosquito-borne arboviruses, such as yellow fever, dengue fever, or chikungunya. Illness tends to be asymptomatic or mild but can lead to complications, including microcephaly and other brain defects in the case of congenital transmission.

Given the similarity of the pathogen and primary transmission route, dengue fever is an appropriate choice of source disease for Zika forecasting. Not only does the shared mosquito vector result in similar seasonal patterns of annual outbreaks, but consistent, geographically and temporally granular data on dengue cases is available publicly via the open data initiative of the Brazilian government.

COVID-19 is an acute respiratory infection caused by the novel coronavirus SARS-CoV-2, which was first detected in Wuhan, China, in 2019. It is transmitted directly between humans via airborne respiratory droplets and particles. Symptoms range from mild to severe and may affect the respiratory tract and central nervous system. Several variants of the virus have emerged, which differ in their severity, transmissibility, and level of immune evasion.

Influenza is also a contagious respiratory disease that is spread primarily via respiratory droplets. Infection with the influenza virus also follows patterns of human contact and seasonality. There are two types of influenza (A and B) and new strains of each type emerge regularly. Given the similarity in transmission routes and to a lesser extent in clinical manifestations, influenza is chosen as the source disease for knowledge transfer to model COVID-19.

For each of these disease pairs, we collect time series data from Brazilian cities. Data on the target disease from half the cities is retained for testing. To ensure comparability, the test set is the same for all models. Using this empirical data, as well as the simulated time series, we implement the following transfer models to make predictions.

  • Random forest: First, we implement a random forest model, which was recently found to capture the time series characteristics of dengue in Brazil well. We use this model to make predictions for Zika without re-training. We also train a random forest model on influenza data to make predictions for COVID-19. This is a direct transfer method, where models are trained only on data from the source disease (a minimal sketch of this setup appears after this list).
  • Random forest with TrAdaBoost: We then incorporate data from the target disease (i.e., Zika and COVID-19) using the TrAdaBoost algorithm together with the random forest model. This is an instance-based transfer learning method, which selects relevant examples from the source disease to improve predictions on the target disease.
  • Neural network: The second machine learning algorithm we deploy is a feed-forward neural network, which is first trained on data of the endemic disease (dengue/influenza) and applied directly to forecast the new disease.
  • Neural network with re-training and fine-tuning: We then retrain only the last layer of the neural network using data from the new disease and make predictions on the test set. Finally, we fine-tune all the layers’ parameters using a small learning rate and low number of epochs. These models are examples of parameter-based transfer methods, since they leverage the weights generated by the source disease model to accelerate and improve learning in the target disease model.
  • Aspirational baseline: We compare these transfer methods to a model trained only on the target disease (Zika/COVID-19) without any data on the source disease. Specifically, we use half the cities in the target dataset for training and the other half for testing. This gives a benchmark of the performance in a large-data scenario, which would occur after a longer period of disease surveillance.
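To make the direct-transfer setup from the first bullet concrete, here is a minimal sketch in Python. The lag-feature construction, the synthetic Poisson case counts, and the random forest settings are illustrative assumptions, not the authors' exact specification; the point to notice is that the model is fit only on the source-disease series and then evaluated on the scarce target-disease series.

```python
# A minimal sketch of direct knowledge transfer between diseases, assuming
# weekly case-count series for a source disease (e.g., dengue) and a target
# disease (e.g., Zika). All data and settings here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def lag_features(series, n_lags=4):
    """Build a supervised dataset: predict the next value from the last n_lags values."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return np.array(X), np.array(y)

# Hypothetical case counts; in the paper these come from Brazilian city data.
rng = np.random.default_rng(0)
source_cases = rng.poisson(lam=50, size=200).astype(float)  # endemic disease
target_cases = rng.poisson(lam=30, size=30).astype(float)   # new disease, scarce data

X_src, y_src = lag_features(source_cases)
X_tgt, y_tgt = lag_features(target_cases)

# Direct transfer: the model never sees target-disease data during training.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_src, y_src)
preds = model.predict(X_tgt)

print("MAE on target disease:", np.mean(np.abs(preds - y_tgt)))
```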

The remainder of this paper is organized as follows. The models are described in more technical detail in Section 2. Section 3 shows the results of the synthetic and empirical predictions. Finally, Section 4 discusses practical implications of the analyses.

Access the full paper [via institutional access or paid download].