Memes, Tropes and “Signs” in the “University” That Surrounds Us

A famous face or tune or quip or saying or photo constitutes the raw material for a shared cultural understanding around the world. Everybody somehow knows “James Bond” or “Mercedes” or “Big Ben” in London or the “Eiffel Tower” in Paris.

Think of Andrzej Wajda’s Polish masterpiece Kanał (1957). It tells the doomed resistance story of the Warsaw Uprising of 1944 against the Germans, which followed the Warsaw Ghetto uprising of the year before.

“Kanał” means “sewer,” and the story takes place in the sewers of Warsaw as this band of resistance fighters tries to avoid being killed by the Germans, who still have overwhelming military superiority in weapons, ammunition, and the rest.

At one point these fighters enter a Polish “bourgeois” home (i.e., great wealth is on display). One of them, Michał, the classical musician, played by Vladek Sheybal (whom you may have seen as a villain in James Bond films, notably Kronsteen in From Russia with Love), goes over to the piano they find, a Bechstein, and “plays around,” including the famous tango from Uruguay and Argentina, “La cumparsita” (“the little parade”), which went “viral” (for its time) around 1916–1917, “radiating out” from Uruguay and Argentina.

Famous versions of this tango include Carlos Gardel’s and performances by orchestras led by Juan d’Arienzo, Osvaldo Pugliese and Astor Piazzolla. “La cumparsita” is very popular at milongas; it is a common tradition for it to be played as the last dance of the evening.

The song was declared the cultural and popular anthem of Uruguay by law in 1997.

Appearances in Movies

Gene Kelly dances to “La cumparsita” in the film Anchors Aweigh (1945). The song was included in a ballroom scene of the film Sunset Boulevard (1950), in which Gloria Swanson and William Holden danced the tango. In the 2006 dance movie Take the Lead, Jenna Dewan, Dante Basco and Elijah Kelley danced to a remixed version.

In the 1959 film Some Like It Hot, “La cumparsita” is played by a blindfolded Cuban band during a scene in which Jack Lemmon, in drag, dances with overstated flair in the arms of Joe E. Brown, who thinks Lemmon is a woman (“Daphne—you’re leading again”). During filming in 1958, actor George Raft taught the two men to dance the tango for this scene.

Miscellaneous

At the Sydney 2000 Olympic Games, the Argentine team marched to the Uruguayan tune “La cumparsita,” which prompted protests and official complaints from the Uruguayan government. The work also opened an infamous radio drama: “The War of the Worlds,” performed as a Halloween episode of the American radio anthology series The Mercury Theatre on the Air on October 30, 1938, aired over the Columbia Broadcasting System radio network, and directed and narrated by actor and future filmmaker Orson Welles.

Many artistic gymnasts have used variations of the song as their floor routine soundtracks including Vanessa Atler (1998–99), Jamie Dantzscher (2000), Oana Petrovschi (2001–02), Elvire Teza (1998), Elise Ray (1997–98), Natalia Ziganshina (2000), Maria Kharenkova (2013) and MyKayla Skinner (2011–12). Joannie Rochette skated to the song for her short program during the 2009–2010 season, most famously skating a clean performance at the 2010 Winter Olympics after the sudden death of her mother.

Students should realize, as part of an education thought of in the sense we are introducing here, that the world is an “ambient” ecosystem of these cultural icons, memes, and tropes, and one should be attuned to them and to how they connect the world into something like a semi-shared experience.

Notice, say, that the Bechstein piano played by Michał (Vladek Sheybal) in the movie Kanał (he later loses his mind and, playing an ocarina, begins reciting lines from Dante) bears the name of the Bechstein family, who were instrumental in the rise of Hitler. The irony should be part of the student’s viewing experience and should show how “screwy” the world is.

Notice too that there’s an ocarina-playing character in the movie Stalag 17 who’s described as mentally unwell.

All of these aspects are part of the material world, the iconic world, and the world of “signs” that form our education, understood as an ambience of which the campus is only one part.

Federal Reserve Review of Monetary Policy Strategy, Tools, and Communications: Some Preliminary Views

(Speech by Governor Lael Brainard at the presentation of the 2019 William F. Butler Award, New York Association for Business Economics, New York, New York)

It is a pleasure to be here with you. It is an honor to join the 45 outstanding economic researchers and practitioners who are past recipients of the William F. Butler Award. I want to express my deep appreciation to the New York Association for Business Economics (NYABE) and NYABE President Julia Coronado.

I will offer my preliminary views on the Federal Reserve’s review of its monetary policy strategy, tools, and communications after first touching briefly on the economic outlook. These remarks represent my own views. The framework review is ongoing and will extend into 2020, and no conclusions have been reached at this time.1

Outlook and Policy

There are good reasons to expect the economy to grow at a pace modestly above potential over the next year or so, supported by strong consumers and a healthy job market, despite persistent uncertainty about trade conflict and disappointing foreign growth. Recent data provide some reassurance that consumer spending continues to expand at a healthy pace despite some slowing in retail sales. Consumer sentiment remains solid, and the employment picture is positive. Housing seems to have turned a corner and is poised for growth following several weak quarters.

Business investment remains downbeat, restrained by weak growth abroad and trade conflict. But there is little sign so far that the softness in trade, manufacturing, and business investment is affecting consumer spending, and the effect on services has been limited.

Employment remains strong. The employment-to-population ratio for prime-age adults has moved up to its pre-recession peak, and the three-month moving average of the unemployment rate is near a 50-year low.2 Monthly job gains remain above the pace needed to absorb new entrants into the labor force despite some slowing since last year. And initial claims for unemployment insurance—a useful real-time indicator historically—remain very low despite some modest increases.
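To make the indicator in footnote 2 concrete, here is a minimal sketch of the Sahm rule in code. The implementation details are my paraphrase of the rule as the footnote states it, not Sahm’s own code.

```python
# Sketch of the Sahm-rule recession indicator (see note 2): flag a recession
# start when the three-month moving average of the unemployment rate rises
# at least 0.5 percentage point above its low over the previous 12 months.
def sahm_signals(u):
    """u: monthly unemployment rates in percent, oldest first."""
    signals = []
    for i in range(14, len(u)):  # need a 3-month MA plus a 12-month lookback
        ma3 = sum(u[i - 2:i + 1]) / 3
        prior_low = min(sum(u[j - 2:j + 1]) / 3 for j in range(i - 11, i))
        signals.append(ma3 - prior_low >= 0.5)
    return signals

# Example: a flat 4 percent labor market that deteriorates 0.2 pp per month.
series = [4.0] * 20 + [4.0 + 0.2 * k for k in range(1, 7)]
print(sahm_signals(series))  # flips to True once the rise exceeds 0.5 pp
```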

Data on inflation have come in about as I expected, on balance, in recent months. Inflation remains below the Federal Reserve’s 2 percent symmetric objective, which has been true for most of the past seven years. The price index for core personal consumption expenditures (PCE), which excludes food and energy prices and is a better indicator of future inflation than overall PCE prices, increased 1.7 percent over the 12 months through September.

Foreign growth remains subdued. While there are signs that the decline in euro-area manufacturing is stabilizing, the latest indicators on economic activity in China remain sluggish, and the news in Japan and in many emerging markets has been disappointing. Overall, it appears third-quarter foreign growth was weak, and the latest indicators point to little improvement in the fourth quarter.

More broadly, the balance of risks remains to the downside, although there has been some improvement in risk sentiment in recent weeks. The risk of a disorderly Brexit in the near future has declined significantly, and there is some hope that a U.S.-China trade truce could avert additional tariffs. While risks remain, financial market indicators suggest market participants see a diminution in such risks, and probabilities of recessions from models using market data have declined.

The baseline is for continued moderate expansion, a strong labor market, and inflation moving gradually to our symmetric 2 percent objective. The Federal Open Market Committee (FOMC) has taken significant action to provide insurance against the risks associated with trade conflict and weak foreign growth against a backdrop of muted inflation. Since July, the Committee has lowered the target range for the federal funds rate by ¾ percentage point, to the current range of 1½ to 1¾ percent. It will take some time for the full effect of this accommodation to work its way through economic activity, the labor market, and inflation. I will be watching the data carefully for signs of a material change to the outlook that could prompt me to reassess the appropriate path of policy.

Review

The Federal Reserve is conducting a review of our monetary policy strategy, tools, and communications to make sure we are well positioned to advance our statutory goals of maximum employment and price stability.3 Three key features of today’s new normal call for a reassessment of our monetary policy strategy: the neutral rate is very low here and abroad, trend inflation is running below target, and the sensitivity of price inflation to resource utilization is very low.4

First, trend inflation is below target.5 Underlying trend inflation appears to be running a few tenths below the Committee’s symmetric 2 percent objective, according to various statistical filters. This raises the risk that households and businesses could come to expect inflation to run persistently below our target and change their behavior in a way that reinforces that expectation. Indeed, with inflation having fallen short of 2 percent for most of the past seven years, inflation expectations may have declined, as suggested by some survey-based measures of long-run inflation expectations and by market-based measures of inflation compensation.

Second, the sensitivity of price inflation to resource utilization is very low. This is what economists mean when they say that the Phillips curve is flat. A flat Phillips curve has the important advantage of allowing employment to continue expanding for longer without generating inflationary pressures, thereby providing greater opportunities to more people. But it also makes it harder to achieve our 2 percent inflation objective on a sustained basis when inflation expectations have drifted below 2 percent.

Third, the long-run neutral rate of interest is very low, which means that we are likely to see more frequent and prolonged episodes when the federal funds rate is stuck at its effective lower bound (ELB).6 The neutral rate is the level of the federal funds rate that would keep the economy at full employment and 2 percent inflation if no tailwinds or headwinds were buffeting the economy. A variety of forces have likely contributed to a decline in the neutral rate, including demographic trends in many large economies, some slowing in the rate of productivity growth, and increases in the demand for safe assets. When looking at the Federal Reserve’s Summary of Economic Projections (SEP), it is striking that the Committee’s median projection of the longer-run federal funds rate has moved down from 4¼ percent to 2½ percent over the past seven years.7 A similar decline can be seen among private forecasts.8 This decline means the conventional policy buffer is likely to be only about half of the 4½ to 5 percentage points by which the FOMC has typically cut the federal funds rate to counter recessionary pressures over the past five decades.
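To make the “about half” arithmetic concrete, here is a back-of-the-envelope sketch using the figures just quoted; the assumption that the effective lower bound sits near 0 percent is mine, for illustration.

```python
# Back-of-the-envelope policy-space arithmetic (illustrative; assumes the
# effective lower bound is roughly 0 percent).
neutral_rate = 2.5             # percent: median longer-run fed funds projection
typical_cut = (4.5 + 5.0) / 2  # percentage points: typical recessionary cut

buffer = neutral_rate - 0.0    # room to cut before hitting the lower bound
print(f"conventional buffer:   {buffer:.2f} pp")
print(f"typical recession cut: {typical_cut:.2f} pp")
print(f"buffer / typical cut:  {buffer / typical_cut:.0%}")  # ~53%, 'about half'
```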

This large loss of policy space will tend to increase the frequency or length of periods when the policy rate is pinned at the ELB, unemployment is elevated, and inflation is below target.9 In turn, the experience of frequent or extended periods of low inflation at the ELB risks eroding inflation expectations and further compressing the conventional policy space. The risk is a downward spiral where conventional policy space gets compressed even further, the ELB binds even more frequently, and it becomes increasingly difficult to move inflation expectations and inflation back up to target. While consumers and businesses might see very low inflation as having benefits at the individual level, at the aggregate level, inflation that is too low can make it very challenging for monetary policy to cut the short-term nominal interest rate sufficiently to cushion the economy effectively.10

The experience of Japan and of the euro area more recently suggests that this risk is real. Indeed, the fact that Japan and the euro area are struggling with this challenging triad further complicates our task, because there are important potential spillovers from monetary policy in other major economies to our own economy through exchange rate and yield curve channels.11

In light of the likelihood of more frequent episodes at the ELB, our monetary policy review should advance two goals. First, monetary policy should achieve average inflation outcomes of 2 percent over time to re-anchor inflation expectations at our target. Second, we need to expand policy space to buffer the economy from adverse developments at the ELB.

Achieving the Inflation Target

The apparent slippage in trend inflation below our target calls for some adjustments to our monetary policy strategy and communications. In this context and as part of our review, my colleagues and I have been discussing how to better anchor inflation expectations firmly at our objective. In particular, it may be helpful to specify that policy aims to achieve inflation outcomes that average 2 percent over time or over the cycle. Given the persistent shortfall of inflation from its target over recent years, this would imply supporting inflation a bit above 2 percent for some time to compensate for the period of underperformance.

One class of strategies that has been proposed to address this issue is formal “makeup” rules that seek to compensate for past inflation deviations from target. For instance, under price-level targeting, policy seeks to stabilize the price level around a constant growth path that is consistent with the inflation objective.12 Under average inflation targeting, policy seeks to return the average of inflation to the target over some specified period.13

To be successful, formal makeup strategies require that financial market participants, households, and businesses understand in advance and believe, to some degree, that policy will compensate for past misses. I suspect policymakers would find communications to be quite challenging with rigid forms of makeup strategies, because of what have been called time-inconsistency problems. For example, if inflation has been running well below—or above—target for a sustained period, when the time arrives to maintain inflation commensurately above—or below—2 percent for the same amount of time, economic conditions will typically be inconsistent with implementing the promised action. Analysis also suggests it could take many years with a formal average inflation targeting framework to return inflation to target following an ELB episode, although this depends on difficult-to-assess modeling assumptions and the particulars of the strategy.14

Thus, while formal average inflation targeting rules have some attractive properties in theory, they could be challenging to implement in practice. I prefer a more flexible approach that would anchor inflation expectations at 2 percent by achieving inflation outcomes that average 2 percent over time or over the cycle. For instance, following five years when the public has observed inflation outcomes in the range of 1½ to 2 percent, to avoid a decline in expectations, the Committee would target inflation outcomes in a range of, say, 2 to 2½ percent for the subsequent five years to achieve inflation outcomes of 2 percent on average overall. Flexible inflation averaging could bring some of the benefits of a formal average inflation targeting rule, but it would be simpler to communicate. By committing to achieve inflation outcomes that average 2 percent over time, the Committee would make clear in advance that it would accommodate rather than offset modest upward pressures to inflation in what could be described as a process of opportunistic reflation.15
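As a numerical sketch of the averaging logic just described, using the speech’s hypothetical bands (the figures in the code are illustrative, not an official rule):

```python
# Illustrative flexible-inflation-averaging arithmetic (not an official rule).
past = [1.5, 1.7, 1.8, 1.6, 1.9]   # five years of observed inflation, percent
target = 2.0
years_ahead = 5

shortfall = sum(target - x for x in past)       # cumulative miss, pp
makeup_rate = target + shortfall / years_ahead  # average needed going forward

print(f"cumulative shortfall: {shortfall:.1f} pp")
print(f"aim for about {makeup_rate:.2f}% over the next {years_ahead} years")
# Here the Committee would aim for ~2.3%, inside the hypothetical 2-2.5%
# band, so that the ten-year average comes out at 2%.
```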

Policy at the ELB

Second, the Committee is examining what monetary policy tools are likely to be effective in providing accommodation when the federal funds rate is at the ELB.16 In my view, the review should make clear that the Committee will actively employ its full toolkit so that the ELB is not an impediment to providing accommodation in the face of significant economic disruptions.

The importance and challenge of providing accommodation when the policy rate reaches the ELB should not be understated. In my own experience of the international response to the financial crisis, I was struck that the ELB initially proved a severe impediment to the provision of policy accommodation. Once conventional policy reached the ELB, the long delays needed in nearly every jurisdiction to build consensus and act on unconventional policy sapped confidence, tightened financial conditions, and weakened recovery. Economic conditions in the euro area and elsewhere suffered for longer than necessary in part because of the lengthy process of building agreement to act decisively with a broader set of tools.

Despite delays and uncertainties, the balance of evidence suggests forward guidance and balance sheet policies were effective in easing financial conditions and providing accommodation following the global financial crisis.17 Accordingly, these tools should remain part of the Committee’s toolkit. However, the quantitative asset purchase policies that were used following the crisis proved to be lumpy, both to initiate at the ELB and to calibrate over the course of the recovery. This lumpiness tends to create discontinuities in the provision of accommodation that can be costly. To the extent that the public is uncertain about the conditions that might trigger asset purchases and about how long the purchases would be sustained, this uncertainty undercuts the efficacy of the policy. Similarly, significant frictions associated with the normalization process can arise as the end of the asset purchase program approaches.

For these reasons, I have been interested in exploring approaches that expand the space for targeting interest rates in a more continuous fashion as an extension of our conventional policy space and in a way that reinforces forward guidance on the policy rate.18 In particular, there may be advantages to an approach that caps interest rates on Treasury securities at the short-to-medium range of the maturity spectrum—yield curve caps—in tandem with forward guidance that conditions liftoff from the ELB on employment and inflation outcomes.

To be specific, once the policy rate declines to the ELB, this approach would smoothly move to capping interest rates on the short-to-medium segment of the yield curve. The yield curve ceilings would transmit additional accommodation through the longer rates that are relevant for households and businesses in a manner that is more continuous than quantitative asset purchases. Moreover, if the horizon on the interest rate caps is set so as to reinforce forward guidance on the policy rate, doing so would augment the credibility of the yield curve caps and thereby diminish concerns about an open-ended balance sheet commitment. In addition, once the targeted outcome is achieved, and the caps expire, any securities that were acquired under the program would roll off organically, unwinding the policy smoothly and predictably. This is important, as it could potentially avoid some of the tantrum dynamics that have led to premature steepening at the long end of the yield curve in several jurisdictions.

Forward guidance on the policy rate will also be important in providing accommodation at the ELB. As we saw in the United States at the end of 2015 and again toward the second half of 2016, there tends to be strong pressure to “normalize” or lift off from the ELB preemptively based on historical relationships between inflation and employment. A better alternative would have been to delay liftoff until we had achieved our targets. Indeed, recent research suggests that forward guidance that commits to delay the liftoff from the ELB until full employment and 2 percent inflation have been achieved on a sustained basis—say over the course of a year—could improve performance on our dual-mandate goals.19

To reinforce this commitment, the forward guidance on the policy rate could be implemented in tandem with yield curve caps. For example, as the federal funds rate approaches the ELB, the Committee could commit to refrain from lifting off the ELB until full employment and 2 percent inflation are sustained for a year. Based on its assessment of how long this is likely to take, the Committee would then commit to capping rates out the yield curve for a period consistent with the expected horizon of the outcome-based forward guidance. If the outlook shifts materially, the Committee could reassess how long it will take to get inflation back to 2 percent and adjust policy accordingly. One benefit of this approach is that the forward guidance and the yield curve ceilings would reinforce each other.

The combination of a commitment to condition liftoff on the sustained achievement of our employment and inflation objectives with yield curve caps targeted at the same horizon has the potential to work well in many circumstances. For very severe recessions, such as the financial crisis, such an approach could be augmented with purchases of 10-year Treasury securities to provide further accommodation at the long end of the yield curve. Presumably, the requisite scale of such purchases—when combined with medium-term yield curve ceilings and forward guidance on the policy rate—would be relatively smaller than if the longer-term asset purchases were used alone.

Monetary Policy and Financial Stability

Before closing, it is important to recall another important lesson of the financial crisis: The stability of the financial system is important to the achievement of the statutory goals of full employment and 2 percent inflation. In that regard, the changes in the macroeconomic environment that underlie our monetary policy review may have some implications for financial stability. Historically, when the Phillips curve was steeper, inflation tended to rise as the economy heated up, which prompted the Federal Reserve to raise interest rates. In turn, the interest rate increases would have the effect of tightening financial conditions more broadly. With a flat Phillips curve, inflation does not rise as much as resource utilization tightens, and interest rates are less likely to rise to restrictive levels. The resulting lower-for-longer interest rates, along with sustained high rates of resource utilization, are conducive to increasing risk appetite, which could prompt reach-for-yield behavior and incentives to take on additional debt, leading to financial imbalances as an expansion extends.

To the extent that the combination of a low neutral rate, a flat Phillips curve, and low underlying inflation may lead financial stability risks to become more tightly linked to the business cycle, it would be preferable to use tools other than tightening monetary policy to temper the financial cycle. In particular, active use of macroprudential tools such as the countercyclical buffer is vital to enable monetary policy to stay focused on achieving maximum employment and average inflation of 2 percent on a sustained basis.

Conclusion

The Federal Reserve’s commitment to adapt our monetary policy strategy to changing circumstances has enabled us to support the U.S. economy throughout the expansion, which is now in its 11th year. In light of the decline in the neutral rate, low trend inflation, and low sensitivity of inflation to slack as well as the consequent greater frequency of the policy rate being at the effective lower bound, this is an important time to review our monetary policy strategy, tools, and communications in order to improve the achievement of our statutory goals. I have offered some preliminary thoughts on how we could bolster inflation expectations by achieving inflation outcomes of 2 percent on average over time and, when policy is constrained by the ELB, how we could combine forward guidance on the policy rate with caps on the short-to-medium segment of the yield curve to buffer the economy against adverse developments.


  1. I am grateful to Ivan Vidangos of the Federal Reserve Board for assistance in preparing this text. These remarks represent my own views, which do not necessarily represent those of the Federal Reserve Board or the Federal Open Market Committee. (return to text)
  2. Claudia Sahm shows that a ½ percentage point increase in the three-month moving average of the unemployment rate relative to the previous year’s low is a good real-time recession indicator. See Claudia Sahm (2019), “Direct Stimulus Payments to Individuals” [archived PDF], Policy Proposal, The Hamilton Project at the Brookings Institution (Washington: THP, May 16). (return to text)
  3. Information about the review of monetary policy strategy, tools, and communications is available on the Board’s website. Also see Richard H. Clarida (2019), “The Federal Reserve’s Review of Its Monetary Policy Strategy, Tools, and Communication Practices” [archived PDF], speech delivered at the 2019 U.S. Monetary Policy Forum, sponsored by the Initiative on Global Markets at the University of Chicago Booth School of Business, New York, February 22; and Jerome H. Powell (2019), “Monetary Policy: Normalization and the Road Ahead” [archived PDF], speech delivered at the 2019 SIEPR Economic Summit, Stanford Institute of Economic Policy Research, Stanford, Calif., March 8. (return to text)
  4. See Lael Brainard (2016), “The ‘New Normal’ and What It Means for Monetary Policy” [archived PDF], speech delivered at the Chicago Council on Global Affairs, Chicago, September 12. (return to text)
  5. See Lael Brainard (2017), “Understanding the Disconnect between Employment and Inflation with a Low Neutral Rate” [archived PDF], speech delivered at the Economic Club of New York, September 5; and James H. Stock and Mark W. Watson (2007), “Why Has U.S. Inflation Become Harder to Forecast?” [archived PDF], Journal of Money, Credit and Banking, vol. 39 (s1, February), pp. 3–33. (return to text)
  6. See Lael Brainard (2015), “Normalizing Monetary Policy When the Neutral Interest Rate Is Low” [archived PDF], speech delivered at the Stanford Institute for Economic Policy Research, Stanford, Calif., December 1. (return to text)
  7. The projection materials for the Federal Reserve’s SEP are available on the Board’s website. (return to text)
  8. For example, the Blue Chip Consensus long-run projection for the three-month Treasury bill has declined from 3.6 percent in October 2012 to 2.4 percent in October 2019. See Wolters Kluwer (2019), Blue Chip Economic Indicators, vol. 44 (October 10); and Wolters Kluwer (2012), Blue Chip Economic Indicators, vol. 37 (October 10). (return to text)
  9. See Michael Kiley and John Roberts (2017), “Monetary Policy in a Low Interest Rate World” [archived PDF], Brookings Papers on Economic Activity, Spring, pp. 317–72; Eric Swanson (2018), “The Federal Reserve Is Not Very Constrained by the Lower Bound on Nominal Interest Rates” [archived PDF], NBER Working Paper Series 25123 (Cambridge, Mass.: National Bureau of Economic Research, October); and Hess Chung, Etienne Gagnon, Taisuke Nakata, Matthias Paustian, Bernd Schlusche, James Trevino, Diego Vilán, and Wei Zheng (2019), “Monetary Policy Options at the Effective Lower Bound: Assessing the Federal Reserve’s Current Policy Toolkit” [archived PDF], Finance and Economics Discussion Series 2019-003 (Washington: Board of Governors of the Federal Reserve System, January). (return to text)
  10. The important observation that some consumers and businesses see low inflation as having benefits emerged from listening to a diverse range of perspectives, including representatives of consumer, labor, business, community, and other groups during the Fed Listens events; for details, see this page. (return to text)
  11. See Lael Brainard (2017), “Cross-Border Spillovers of Balance Sheet Normalization” [archived PDF], speech delivered at the National Bureau of Economic Research’s Monetary Economics Summer Institute, Cambridge, Mass., July 13. (return to text)
  12. See, for example, James Bullard (2018), “A Primer on Price Level Targeting in the U.S.” [archived PDF], a presentation before the CFA Society of St. Louis, St. Louis, Mo., January 10. (return to text)
  13. See, for example, Lars Svensson (2019), “Monetary Policy Strategies for the Federal Reserve” [archived PDF], presented at “Conference on Monetary Policy Strategy, Tools and Communication Practices,” sponsored by the Federal Reserve Bank of Chicago, Chicago, June 5. (return to text)
  14. See Board of Governors of the Federal Reserve System (2019), “Minutes of the Federal Open Market Committee, September 17–18, 2019,” press release, October 9; and David Reifschneider and David Wilcox (2019), “Average Inflation Targeting Would Be a Weak Tool for the Fed to Deal with Recession and Chronic Low Inflation” [archived PDF], Policy Brief PB19-16 (Washington: Peterson Institute for International Economics, November). (return to text)
  15. See Janice C. Eberly, James H. Stock, and Jonathan H. Wright (2019), “The Federal Reserve’s Current Framework for Monetary Policy: A Review and Assessment” [archived PDF], paper presented at “Conference on Monetary Policy Strategy, Tools and Communication Practices,” sponsored by the Federal Reserve Bank of Chicago, Chicago, June 4. (return to text)
  16. See Board of Governors of the Federal Reserve System (2019), “Minutes of the Federal Open Market Committee, July 31–August 1, 2018” [archived PDF], press release, August 1; and Board of Governors (2019), “Minutes of the Federal Open Market Committee, October 29–30, 2019” [archived PDF], press release, October 30. (return to text)
  17. For details on purchases of securities by the Federal Reserve, see this page. For a discussion of forward guidance, see this page. See, for example, Simon Gilchrist and Egon Zakrajšek (2013), “The Impact of the Federal Reserve’s Large-Scale Asset Purchase Programs on Corporate Credit Risk,” Journal of Money, Credit and Banking, vol. 45 (s2, December), pp. 29–57; Simon Gilchrist, David López-Salido, and Egon Zakrajšek (2015), “Monetary Policy and Real Borrowing Costs at the Zero Lower Bound,” American Economic Journal: Macroeconomics, vol. 7 (January), pp. 77–109; Jing Cynthia Wu and Fan Dora Xia (2016), “Measuring the Macroeconomic Impact of Monetary Policy at the Zero Lower Bound,” Journal of Money, Credit and Banking, vol. 48 (March–April), pp. 253–91; and Stefania D’Amico and Iryna Kaminska (2019), “Credit Easing versus Quantitative Easing: Evidence from Corporate and Government Bond Purchase Programs” [archived PDF], Bank of England Staff Working Paper Series 825 (London: Bank of England, September). (return to text)
  18. See Board of Governors of the Federal Reserve System (2010), “Strategies for Targeting Interest Rates Out the Yield Curve,” memorandum to the Federal Open Market Committee, October 13, available at this page; and Ben Bernanke (2016), “What Tools Does the Fed Have Left? Part 2: Targeting Longer-Term Interest Rates” [archived PDF], blog post, Brookings Institution, March 24. (return to text)
  19. See Ben Bernanke, Michael Kiley, and John Roberts (2019), “Monetary Policy Strategies for a Low-Rate Environment” [archived PDF], Finance and Economics Discussion Series 2019-009 (Washington: Board of Governors of the Federal Reserve System) and Chung and others, “Monetary Policy Options at the Effective Lower Bound,” in note 9. (return to text)

Essay 116: Reports of Rising Police-Society Conflict in China

Interview with Suzanne Scoggins (November 25, 2019)

China is facing a rising tide of conflict between the nation’s police officers and the public. While protest events receive considerable media attention, lower-profile conflicts between police officers and residents also make their way onto the internet, shaping perceptions of the police. The ubiquity of live events streamed on the internet helps illuminate the nature of state-society conflict in China and the challenges faced by local law enforcement.

Simone McGuinness spoke with Suzanne Scoggins, a fellow with the National Asia Research Program (NARP), about the reports of rising police-society conflict in China. Dr. Scoggins discusses how the Chinese Communist Party has responded to the upsurge, what channels Chinese citizens are utilizing to express their concerns, and what the implications are for the rest of the world.

What is the current state of police-society relations in China?

Reports of police violence have been on the rise, although this does not necessarily mean that violence is increasing. It does, however, mean that the media is more willing to report violence and that more incidents of violence are appearing on social media.

What we can now study is the nature of that violence—some incidents are big events such as riots or attacks against the police, but there are also smaller events. For example, we see reports of passengers on trains who get into arguments with transit police. They may fight because one of the passengers is not in the right seat or is carrying something prohibited. Rather than complying with the officer, the passenger ends up getting into some sort of violent altercation. This kind of violence is typically captured by cellphone cameras, and sometimes it makes the news.

The nature of the conflict matters. If somebody is on a train and sitting in a seat that they did not pay for, then it is usually obvious to the people reading about or watching the incident that the civilian is at fault. But if it is chengguan (城管, “city administration”) telling an elderly woman to stop selling her food on the street and the chengguan becomes violent, then public perceptions may be very different. It is that second type of violence that can be threatening to the state. The public’s response to the type of conflict can vary considerably.

What are the implications for China as a whole?

Regarding what this means for China, it’s not good for the regime to sustain this kind of conflict between street-level officers or state agents and the public. It lowers people’s trust in the agents of the government, and people may assume that the police cannot enforce public security. There are many state agents who might be involved in a conflict, such as the chengguan, the xiejing (协警, auxiliary police), or the official police. The type of agent almost doesn’t matter because the uniforms often look similar.

When video of state agents behaving poorly goes up online, it makes the state a little more vulnerable. Even people who were not at the event might see it on social media or in the news and think, “Oh, this is happening in my community, or in my province, or across the nation.” This violates public expectations about how the police or other state agents should act. People should be able to trust the police and go to them when they have problems.

How has the Chinese government responded to the increase in reported violence?

There is a twofold approach. The first is censorship. When negative videos go up online or when the media reports an incident, the government will go in and take it down. We see this over time. Even over the course of my research, some of the videos that were initially available online became inaccessible simply because they had been censored. The government is removing many different types of content, not only violence. Censors are also interested in removing any sort of misinformation that might spread on social media.

If step one is to take the video or report down, step two is to counteract any negative opinion by using police propaganda. This is also referred to as “public relations,” and the goal is to present a better image of the police. Recently, the Ministry of Public Security put a lot of money and resources into their social media presence. Many police stations have a social media account on WeChat or Weibo (微博, “microblogging”) and aim to present a more positive, friendly image of the police. The ministry also teamed up with CCTV to produce television content. This has been going on for some time, but recently shows have become more sophisticated.

There is one program, for example, called Police Training Camp. It is a reality show where police officers are challenged with various tasks, and the production is very glossy. The ministry also produces other sorts of specials featuring police who are out in the field helping people. It shows the police officers working really long shifts, interacting positively with the public, and really making a difference in people’s lives. In this way, the government is counteracting negative opinions about police violence or misconduct.

In general, I will say that it is difficult for people in any society to get justice in disputes with police officers because of the way legal systems are structured and the power police hold in local government politics. In China, one of the things people are doing beyond reaching out to local governments or pursuing mediation is calling an official hotline.

This is a direct channel to the Ministry of Public Security, and all these calls are reviewed. There is not a whole lot that citizens can do about specific corruption claims. But if somebody has a particular goal, then the hotline is slightly more effective because it allows citizens to alert the ministry. However, many people do not know about the hotline, so the ministry is trying to increase awareness and also help staff the call center so that it can more effectively field calls.

As for how much relief people feel when they use these channels, this depends on what their goal is. If the goal is to get somebody fired, then the hotline may not work. But if someone is looking to air their grievances, then it may be helpful.

What are the implications of increased police-society conflict in China for the rest of the world? What can the United States or other countries do to improve the situation?

These are really sticky issues that are difficult to solve. When discussing police-society conflict, it is important to step back and think about who the police are—the enforcement agents of the state. So by their very nature, there will be conflict between police and society, and that is true in every country. In China, it really depends on where and what type of police climate we are talking about.

Xinjiang, for instance, has a very different police climate than other regions in China. There is a different type of policing and police presence. Chinese leaders certainly do not want any international intervention in Xinjiang. They see this as an internal issue. While some governments in Europe and the United States might want to intervene, that is going to be a nonstarter for China.

As for police problems more generally, I would say that if China is able to reduce some of the police-society conflict in other areas of the country, then this is good for the international community because it leads to a more stable government. We also know that there is a fair amount of international cooperation between police groups. China has police liaisons who travel and learn about practices and technology in different countries. The police in these groups attend conferences and take delegates abroad.

There are also police delegations from other nations that go to China to learn about and exchange best practices. But that work will not necessarily address the underlying issues that we see in a lot of the stations scattered throughout China outside the big cities like Beijing (北京) or Shanghai (上海). Those are the areas with insufficient training or manpower. Those issues must be addressed internally by the Ministry of Public Security.

How is the Chinese government improving its policing capabilities?

Recently, the ministry has tried to overcome manpower and other ground-level policing problems by using surveillance cameras and artificial intelligence. Networks of cameras are appearing all over the country, and police are using body cameras for recording interactions with the public. This type of surveillance is not just in large cities but also in smaller ones. Of course, it is not enough to just put the cameras up—you also need to train officers to use that technology properly. This process takes time, but it is one way that the ministry hopes to overcome on-the-ground problems such as the low number of police per capita.

How might the Hong Kong protests influence or change policing tactics in China?

The situation in Hong Kong is unlikely to change policing tactics in China, which are generally more aggressive in controlling protests than most of what we have seen thus far in Hong Kong. It is more likely that things will go in the other direction, with mainland tactics being used in Hong Kong, especially if we continue to observe increased pressure to bring the protestors in check.

Suzanne Scoggins is an Assistant Professor of Political Science at Clark University. She is also a 2019 National Asia Research Program (NARP) Fellow. Dr. Scoggins holds a Ph.D. in Political Science from the University of California, Berkeley, and her book manuscript Policing in the Shadow of Protest is forthcoming from Cornell University Press. Her research has appeared in Comparative Politics, The China Quarterly, Asian Survey, PS: Political Science and Politics, and the China Law and Society Review.

This interview was conducted by Simone McGuinness, the Public Affairs Intern at NBR.

Essay 115: Novels as Another University: Joseph Conrad

One can say that the first wave of imperial “neocons” was not the group that got the U.S. into the Iraq War (2003) but the group described by Warren Zimmermann in his classic book on the rise of the American Empire from the 1890s onwards:

First Great Triumph: How Five Americans Made Their Country a World Power. By Warren Zimmermann. Illustrated. 562 pp. New York: Farrar, Straus & Giroux.

“Americans like to pretend that they have no imperial past,” Warren Zimmermann tells us in First Great Triumph: How Five Americans Made Their Country a World Power. But they do.

The United States had been expanding its borders from the moment of its birth, though its reach had been confined to the North American continent until 1898, when American soldiers and sailors joined Cuban and Filipino rebels in a successful war against Spain. When the war was won, the United States acquired a “protectorate” in Cuba and annexed Hawaii, the Philippine Islands, Guam, and Puerto Rico. “In 15 weeks,” Zimmermann notes, “the United States had gained island possessions on both the Atlantic and Pacific sides of its continental mass. It had put under its protection and control more than 10 million people: whites, blacks, Hispanics, Indians, Polynesians, Chinese, Japanese and the polyethnic peoples of the Philippine archipelago.”

John Hay, at the time the American ambassador to Britain, writing to his friend Theodore Roosevelt in Cuba, referred to the war against Spain as “a splendid little war, begun with the highest motives, carried on with magnificent intelligence and spirit, favored by that Fortune which loves the brave.” He hoped that the war’s aftermath would be concluded “with that fine good nature, which is, after all, the distinguishing trait of the American character.” More than a century later, we are still asking ourselves just how splendid that little war and its consequences really were.

Zimmermann, a career diplomat and a former United States ambassador to Yugoslavia, begins his brilliantly readable book about the war and its aftermath with biographical sketches of the five men — Alfred T. Mahan, Theodore Roosevelt, Henry Cabot Lodge, John Hay and Elihu Root — who played a leading role in making “their country a world power.”

Ironically, it turns out that any reader of Joseph Conrad’s (died in 1924) famous novel Nostromo (1904) would have encountered the “manifesto” of the American Empire, enunciated very clearly by one of the characters in the novel:

“Time itself has got to wait on the greatest country in the whole of God’s universe. We shall be giving the word for everything; industry, trade, law, journalism, art, politics and religion, from Cape Horn clear over to Smith’s Sound (i.e., Canada/Greenland), and beyond too, if anything worth taking hold of turns up at the North Pole. And then we shall have the leisure to take in hand the outlying islands and continents of the earth.

“We shall run the world’s business whether the world likes it or not. The world can’t help it—and neither can we, I guess.”

Joseph Conrad, Nostromo, Penguin Books, 2007, pages 62–63

Conrad’s political stance, so denunciatory of imperialism here in Nostromo, seems very disrespectful of Africans in his Heart of Darkness, as Chinua Achebe (the Nigerian novelist and author of Things Fall Apart) and other African writers have shown and decried. Thus one sees layer upon layer of contradiction, both in American empire-mongering and in Conrad’s anticipation of it in Nostromo.

Essay 110: Education and Famine Analysis

The great historian Élie Halévy’s (died in 1937) History of the English People in the Nineteenth Century, a multi-volume classic, gives us a sense of nineteenth-century famine dynamics in the 1840s, combining failed harvests, failed incomes, and failed speculations:

“It was a ‘dearth’ (i.e., scarcity)—a crisis belonging to the old order—the last ‘dearth,’ in fact, Europe had known up to the present day (i.e., before 1937). The unsatisfactory harvest of 1845 was followed by the disastrous autumn of 1846. The potato disease was worse than it had been the year before. The cereal harvest, moderately good in 1845, was a failure not only in the United Kingdom, but in France and throughout Western Europe. In 1845, Great Britain could still purchase corn even in Ireland, while the Irish poor were starving to death. Nothing of the kind was possible at the end of 1846.

Britain could not obtain wheat from France or Germany. In short, it was no longer Ireland alone, but the whole of Western Europe that had to be saved from famine.

“The United Kingdom, France, and Germany must import Russian and American wheat, the only sources available to supply the deficit.

“In consequence the price of wheat rose from 50 shillings and 2d. on August 22 to 65 shillings and 7d. on November 18. The price of wheat rose once more. It exceeded 78 shillings in March.

“In Germany and France, where another ‘jacquerie’ seemed to have begun, hunger caused an outbreak of rioting. The same happened in Scotland and the south of England…but England suffered in common with Ireland and Continental Europe, and a drain on English gold began, to pay for the Russian and American wheat.

“Later there was a fall of 50% in four months. The corn factors (i.e., corn dealers) who for months had been gambling on a rise had no time to retrace their steps and were ruined at a single blow.” (“Commercial Failures in 1847,” Eclectic Review, December 1847)

(Élie Halévy, “Victorian Years (1841-1895),” Halévy’s History of the English People in the Nineteenth Century, Volume 4, pages 191-193, Ernest Benn Ltd., 1970)

Note that in British usage, “corn” refers to grains generally (here primarily wheat), not maize (“corn” in the American sense). For example, see the Corn Laws.
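A quick conversion of the pre-decimal prices Halévy quotes (12 pence, “d.,” to the shilling; wheat was customarily quoted per quarter) shows the scale of the swing; a sketch using the quoted figures:

```python
# Convert the quoted pre-decimal wheat prices to decimal shillings and
# compute the swings Halévy describes. 1 shilling = 12 pence (d.).
def to_shillings(shillings, pence):
    return shillings + pence / 12

aug_1846 = to_shillings(50, 2)   # 50s. 2d. on August 22
nov_1846 = to_shillings(65, 7)   # 65s. 7d. on November 18
mar_1847 = 78.0                  # "exceeded 78 shillings" in March

print(f"Aug -> Nov rise: {100 * (nov_1846 / aug_1846 - 1):.0f}%")  # ~31%
print(f"Aug -> Mar rise: {100 * (mar_1847 / aug_1846 - 1):.0f}%")  # ~55%
print(f"50% fall from the peak: about {mar_1847 * 0.5:.0f}s.")     # ~39s.
```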

We sense from Halévy’s description of the “food insecurity” of nineteenth-century Europe why the Revolutions of 1848 were to a large extent severe food riots, and how food poverty and speculation interacted with the prevailing risk and uncertainty.

This should be read and pondered in connection with Prof. Amartya Sen’s classic from 1981, Poverty and Famines, which highlights famines of income and buying power, as opposed to famines based on outright crop failures. Pearl Buck’s classic novel The Good Earth (1931) fits this topic set, as it analyzes in human terms the pattern of Chinese famines. It is interesting to note, parenthetically, that the 1937 movie of The Good Earth could not feature Chinese actors in lead roles due to the racial craziness of the time. Stepping back, we see a world of food insecurity aggravated by the spectre of racism further poisoning social relations worldwide.

Halévy states: “It was a ‘dearth’ (i.e., scarcity)—a crisis belonging to the old order—the last ‘dearth,’ in fact, Europe had known up to the present day…”.

It would be instructive to ponder whether this really was “a crisis belonging to the old order” given the catastrophes and food crises that could come with climate change from 2019 on out. Will we have “global ‘dearths’”?

Essay 108: Early View Alert: Water Resources Research

from the American Geophysical Union’s journals:

Research Articles

Modeling the Snow Depth Variability with a High-Resolution Lidar Data Set and Nonlinear Terrain Dependency

by T. Skaugen & K. Melvold

Summary: Using airborne laser scanning, 400 million snow depth measurements have been collected at Hardangervidda in Southern Norway. The amount of data has made possible in-depth studies of the spatial distribution of snow and its interaction with terrain and vegetation. We find that terrain variability, expressed by the square slope, together with the average amount of snow and whether the terrain is vegetated or not, largely explains the variation of snow depth. With this information it is possible to develop equations predicting snow depth variability for use in environmental models, which in turn are used for important tasks such as flood forecasting and hydropower planning. One major advantage is that these equations can be determined from data that are, in principle, available everywhere, provided there exists a detailed digital model of the terrain.

[Archived PDF article]
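The abstract does not reproduce the fitted equations. The following toy sketch merely illustrates the kind of regression described (snow-depth variability against squared slope, mean depth, and a vegetation flag); the data and coefficients are invented.

```python
# Toy sketch: regress snow-depth variability on terrain and snow predictors
# mirroring the abstract. All data here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
sq_slope = rng.uniform(0.0, 0.5, n)     # squared terrain slope
mean_depth = rng.uniform(0.2, 3.0, n)   # mean snow depth per cell (m)
vegetated = rng.integers(0, 2, n)       # 1 if vegetated, 0 if bare

# Synthetic "truth": variability grows with roughness and snow amount.
sd_depth = (0.3 * sq_slope + 0.15 * mean_depth + 0.1 * vegetated
            + rng.normal(0, 0.05, n))

# Ordinary least squares fit of the predictive equation.
X = np.column_stack([sq_slope, mean_depth, vegetated, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, sd_depth, rcond=None)
print("fitted coefficients:", np.round(coef, 3))
```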

Phosphorus Transport in Intensively Managed Watersheds

by Christine L. Dolph, Evelyn Boardman, Mohammad Danesh-Yazdi, Jacques C. Finlay, Amy T. Hansen, Anna C. Baker & Brent Dalzell

Abstract: When phosphorus from farm fertilizer, eroded soil, and septic waste enters our water, it leads to problems like toxic algae blooms, fish kills, and contaminated drinking supplies. In this study, we examine how phosphorus travels through streams and rivers of farmed areas. In the past, soil lost from farm fields was considered the biggest contributor to phosphorus pollution in agricultural areas, but our study shows that phosphorus originating from fertilizer stores in the soil and from crop residue, as well as from soil eroded from sensitive ravines and bluffs, contributes strongly to the total amount of phosphorus pollution in agricultural rivers. We also found that most phosphorus leaves farmed watersheds during the very highest river flows. Increased frequency of large storms due to climate chaos will therefore likely worsen water quality in areas that are heavily loaded with phosphorus from farm fertilizers. Protecting water in agricultural watersheds will require knowledge of the local landscape along with strategies to address (1) drivers of climate chaos, (2) reduction in the highest river flows, and (3) ongoing inputs and legacy stores of phosphorus that are readily transported across land and water.

[Archived PDF of article]

Detecting the State of the Climate System via Artificial Intelligence to Improve Seasonal Forecasts and Inform Reservoir Operations

by Matteo Giuliani, Marta Zaniolo, Andrea Castelletti, Guido Davoli & Paul Block

Abstract: Increasingly variable hydrologic regimes combined with more frequent and intense extreme events are challenging water systems management worldwide. These trends emphasize the need for accurate medium- to long-term predictions to prompt timely anticipatory operations. Although in some locations global climate oscillations, particularly the El Niño-Southern Oscillation (ENSO), may contribute to extending forecast lead times, in other regions there is no consensus on how ENSO can be detected and used, as local conditions are also influenced by other concurrent climate signals. In this work, we introduce the Climate State Intelligence framework to capture the state of multiple global climate signals via artificial intelligence and improve seasonal forecasts. These forecasts are used as additional inputs for informing water system operations, and their value is quantified as the corresponding gain in system performance. We apply the framework to the Lake Como basin, a regulated lake in northern Italy operated mainly for flood control and irrigation supply. Numerical results show notable teleconnection patterns dependent on both ENSO and the North Atlantic Oscillation over the Alpine region, which contribute to generating skillful seasonal precipitation and hydrologic forecasts. Using this information to condition the lake operations produces an average 44% improvement in system performance with respect to a baseline solution not informed by any forecast, a gain that further increases during extreme drought episodes. Our results also suggest that observed preseason sea surface temperature anomalies appear more valuable than hydrologic-based seasonal forecasts, producing an average 59% improvement in system performance.

[Archived PDF of article]

Landscape Water Storage and Subsurface Correlation from Satellite Surface Soil Moisture and Precipitation Observations

by Daniel J. Short Gianotti, Guido D. Salvucci, Ruzbeh Akbar, Kaighin A. McColl, Richard Cuenca & Dara Entekhabi

Abstract: Surface soil moisture measurements are typically correlated to some degree with changes in subsurface soil moisture. We calculate a hydrologic length scale, λ, which represents (1) the mean-state estimator of total column water changes from surface observations, (2) an e-folding length scale for subsurface soil moisture profile covariance fall-off, and (3) the best second-moment mass-conserving surface layer thickness for a simple bucket model, defined by the data streams of satellite soil moisture and precipitation retrievals. Calculations are simple, based on three variables: the autocorrelation and variance of surface soil moisture and the variance of the net flux into the column (precipitation minus estimated losses), which can be estimated directly from the soil moisture and precipitation time series. We develop a method to calculate the lag-one autocorrelation for irregularly observed time series and show global surface soil moisture autocorrelation. λ is driven in part by local hydroclimate conditions and is generally larger than the 50-mm nominal radiometric length scale for the soil moisture retrievals, suggesting broad subsurface correlation due to moisture drainage. In all but the most arid regions, radiometric soil moisture retrievals provide more information about ecosystem-relevant water fluxes than satellite radiometers can explicitly “see”; lower-frequency radiometers are expected to provide still more statistical information about subsurface water dynamics.

[Archived PDF of article]
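The paper develops its own estimator for irregular sampling; as a rough stand-in to fix ideas, a naive lag-one autocorrelation for unevenly spaced observations can pair up measurements whose spacing is close to the nominal revisit time (the details below are my simplification, not the authors’ method).

```python
# Naive lag-one autocorrelation for an irregularly sampled series: correlate
# observation pairs whose time gap is within a tolerance of the target lag.
import numpy as np

def lag_one_autocorr(t, x, lag, tol):
    """t: observation times; x: values; lag: target lag; tol: gap tolerance."""
    x = np.asarray(x, dtype=float)
    anom = x - x.mean()
    first, second = [], []
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            if abs((t[j] - t[i]) - lag) <= tol:
                first.append(anom[i])
                second.append(anom[j])
    return np.mean(np.array(first) * np.array(second)) / x.var()

# Example: an AR(1)-like series observed at jittered ~2-day intervals.
rng = np.random.default_rng(1)
t = np.cumsum(rng.uniform(1.5, 2.5, 300))
x = np.zeros(300)
for k in range(1, 300):
    x[k] = 0.8 * x[k - 1] + rng.normal(0, 0.2)
print(round(lag_one_autocorr(t, x, lag=2.0, tol=0.5), 3))  # near 0.8
```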

Process-Guided Deep Learning Predictions of Lake Water Temperature

by Jordan S. Read, Xiaowei Jia, Jared Willard, Alison P. Appling, Jacob A. Zwart, Samantha K. Oliver, Anuj Karpatne, Gretchen J. A. Hansen, Paul C. Hanson, William Watkins, Michael Steinbach & Vipin Kumar

Abstract: The rapid growth of data in water resources has created new opportunities to accelerate knowledge discovery with the use of advanced deep learning tools. Hybrid models that integrate theory with state-of-the-art empirical techniques have the potential to improve predictions while remaining true to physical laws. This paper evaluates the Process-Guided Deep Learning (PGDL) hybrid modeling framework with a use-case of predicting depth-specific lake water temperatures. The PGDL model has three primary components: a deep learning model with temporal awareness (long short-term memory recurrence), theory-based feedback (model penalties for violating conservation of energy), and model pre-training to initialize the network with synthetic data (water temperature predictions from a process-based model). In situ water temperatures were used to train the PGDL model, a deep learning (DL) model, and a process-based (PB) model. Model performance was evaluated in various conditions, including when training data were sparse and when predictions were made outside of the range in the training data set. The PGDL model performance (as measured by root-mean-square error (RMSE)) was superior to DL and PB for two detailed study lakes, but only when pretraining data included greater variability than the training period. The PGDL model also performed well when extended to 68 lakes, with a median RMSE of 1.65 °C during the test period (DL: 1.78 °C, PB: 2.03 °C; in a small number of lakes PB or DL models were more accurate). This case study demonstrates that integrating scientific knowledge into deep learning tools shows promise for improving predictions of many important environmental variables.

[Archived PDF of article]

Adjustment of Radar-Gauge Rainfall Discrepancy Due to Raindrop Drift and Evaporation Using the Weather Research and Forecasting Model and Dual-Polarization Radar

by Qiang Dai, Qiqi Yang, Dawei Han, Miguel A. Rico-Ramirez & Shuliang Zhang

Abstract: Radar-gauge rainfall discrepancies are usually attributed to errors in the radar rainfall measurements themselves, ignoring the fact that radar observes rain aloft while a rain gauge measures rainfall on the ground. Radar rainfall estimates implicitly assume that raindrops fall vertically to the ground without changing in size. This premise obviously does not hold, because raindrop location changes due to wind drift and raindrop size changes due to evaporation; both effects, however, are usually ignored. This study proposes a fully formulated scheme to numerically simulate both raindrop drift and evaporation in the air and to reduce the uncertainties of radar rainfall estimation. The Weather Research and Forecasting model is used to simulate high-resolution three-dimensional atmospheric fields. A dual-polarization radar retrieves the raindrop size distribution for each radar pixel. Three schemes are designed and implemented using the Hameldon Hill radar in Lancashire, England. The first considers only raindrop drift, the second considers only evaporation, and the last considers both aspects. Results show that wind advection can cause a large drift for small raindrops. Considerable loss of rainfall is observed due to raindrop evaporation. Overall, the three schemes improve the radar-gauge correlation by 3.2%, 2.9%, and 3.8% and reduce their discrepancy by 17.9%, 8.6%, and 21.7%, respectively, over eight selected events. This study contributes to the improvement of quantitative precipitation estimation from radar polarimetry and allows a better understanding of precipitation processes.
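
As a rough illustration of the drift component alone, the sketch below advects a falling drop through layered horizontal winds (such as a WRF-simulated profile). The power-law terminal velocity (the Atlas-Ulbrich form) and the callable wind profiles are assumptions of this sketch, not the paper's full scheme, which also evolves drop size by evaporation.

```python
def raindrop_drift(d_mm, fall_height_m, u_wind, v_wind, dz=50.0):
    """Horizontal displacement (dx, dy in m) of a raindrop of diameter
    d_mm falling from fall_height_m through winds u_wind(z), v_wind(z)
    given in m/s as functions of height (assumed inputs)."""
    w = 3.78 * d_mm ** 0.67          # terminal fall speed (m/s), power-law fit
    dt = dz / w                      # residence time in each layer (s)
    layers = range(int(fall_height_m / dz))
    dx = sum(u_wind(k * dz) * dt for k in layers)
    dy = sum(v_wind(k * dz) * dt for k in layers)
    return dx, dy
```

Because the fall speed grows with drop diameter, small drops spend far longer in the wind column, which is why the scheme finds large drift for small raindrops.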

[Archived PDF of article]

The Role of Collapsed Bank Soil on Tidal Channel Evolution: A Process-Based Model Involving Bank Collapse and Sediment Dynamics

by K. Zhao, Z. Gong, F. Xu, Z. Zhou, C. K. Zhang, G. M. E. Perillo & G. Coco

Abstract: We develop a process-based model to simulate the geomorphodynamic evolution of tidal channels, considering hydrodynamics, flow-induced bank erosion, gravity-induced bank collapse, and sediment dynamics. A stress-deformation analysis and the Mohr-Coulomb criterion, calibrated through previous laboratory experiments, are included in a model simulating bank collapse. Results show that collapsed bank soil plays a primary role in the dynamics of bank retreat. For bank collapse with small bank height, tensile failure in the middle of the bank (Stage I), tensile failure on the bank top (Stage II), and sectional cracking from bank top to the toe (Stage III) are present sequentially before bank collapse occurs. A significant linear relation is observed between bank height and the contribution of bank collapse to bank retreat. In contrast to flow-induced bank erosion, bank collapse prevents further widening, since the collapsed bank soil protects the bank from direct bank erosion. The bank profile is linear or slightly convex, and the planimetric shape of tidal channels (gradually decreasing in width landward) is similar when approaching equilibrium, regardless of the consideration of bank erosion and collapse. Moreover, the simulated width-to-depth ratio in all runs is comparable with observations from the Venice Lagoon. This indicates that the equilibrium configuration of tidal channels depends on hydrodynamic conditions and sediment properties, while bank erosion and collapse greatly affect the transient behavior (before equilibrium) of the tidal channels. Overall, this contribution highlights the importance of collapsed bank soil in investigating tidal channel morphodynamics using a combined perspective of geotechnics and soil mechanics.
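
The collapse trigger in models of this kind is ultimately a strength check. A minimal sketch of the Mohr-Coulomb criterion is below, assuming the normal and shear stresses on a candidate failure plane come from the stress-deformation analysis; the cohesion and friction angle would be calibrated against laboratory experiments, as the abstract describes. This is the generic criterion, not the authors' full staged-failure model.

```python
import math

def mohr_coulomb_fails(sigma_n, tau, cohesion, phi_deg):
    """True if the shear stress tau (kPa) on a candidate failure plane
    exceeds the Mohr-Coulomb strength tau_f = c + sigma_n * tan(phi),
    with normal stress sigma_n (kPa), cohesion c (kPa), and friction
    angle phi (degrees)."""
    tau_f = cohesion + sigma_n * math.tan(math.radians(phi_deg))
    return tau > tau_f
```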

[Archived PDF of article]

A Physically Based Method for Soil Evaporation Estimation by Revisiting the Soil Drying Process

by Yunquan Wang, Oliver Merlin, Gaofeng Zhu & Kun Zhang

Abstract: While numerous models exist for soil evaporation estimation, they are more or less empirically based either in the model structure or in the determination of introduced parameters. The main difficulty lies in representing the water stress factor, which is usually thought to be limited by capillarity-supported water supply or by vapor diffusion flux. Recent progress in understanding soil hydraulic properties, however, has shown that film flow, which is often neglected, is the dominant process under low moisture conditions. By including the impact of film flow, a reexamination of the typical evaporation process found that this usually neglected film flow might be the dominant process supporting Stage II evaporation (i.e., the fast-falling-rate stage), besides the generally accepted capillary-flow-supported Stage I evaporation and the vapor-diffusion-controlled Stage III evaporation. A physically based model for estimating the evaporation rate was then developed by parameterizing the Buckingham-Darcy law. Interestingly, the empirical Bucket model was found to be a specific form of the proposed model. The proposed model requires the in-equilibrium relative humidity as the sole input for representing water stress and introduces no adjustable parameter in relation to soil texture. The impact of vapor diffusion was also discussed. Model testing with laboratory data yielded excellent agreement with observations for both thin soil and thick soil column evaporation experiments. Model evaluation at 15 field sites generally showed a close agreement with observations, with a great improvement in the lower range of evaporation rates in comparison with the widely applied Priestley and Taylor Jet Propulsion Laboratory model.
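
The parameterization rests on the Buckingham-Darcy law for vertical unsaturated flow. A sketch of that flux law is below, with z positive upward and the hydraulic conductivity supplied as a function of matric head; this is the generic law the authors parameterize, not their specific evaporation model.

```python
import numpy as np

def buckingham_darcy_flux(K, h, z):
    """Vertical unsaturated water flux q = -K(h) * (dh/dz + 1), with
    matric head h (m), elevation z (m, positive upward), and K a
    conductivity function of head (an assumed callable)."""
    dhdz = np.gradient(h, z)       # finite-difference head gradient
    return -K(h) * (dhdz + 1.0)    # the "+1" is the gravity term
```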

[Archived PDF of article]

Floodplain Land Cover and Flow Hydrodynamic Control of Overbank Sedimentation in Compound Channel Flows

by Carmelo Juez, C. Schärer, H. Jenny, A. J. Schleiss & M. J. Franca

Abstract: Overbank sedimentation is predominantly due to fine sediments transported under suspension that become trapped and settle in floodplains when high-flow conditions occur in rivers. In a compound channel, the processes of exchanging water and fine sediments between the main channel and floodplains regulate the geomorphological evolution and are crucial for the maintenance of the ecosystem functions of the floodplains. These hydrodynamic and morphodynamic processes depend on variables such as the flow-depth ratio between the water depth in the main channel and the water depth in the floodplain, the width ratio between the width of the main channel and the width of the floodplain, and the floodplain land cover characterized by the type of roughness. This paper examines, by means of laboratory experiments, how these variables are interlinked and how the deposition of sediments in the compound channel is jointly determined by them. The combination of these compound channel characteristics modulates the production of large vertical-axis turbulent vortical structures in the mixing interface. Such vortical structures determine the water mass exchange between the main channel and the floodplain, conditioning in turn the transport of sediment particles conveyed in the water and, therefore, the resulting overbank sedimentation. The existence and pattern of sedimentation are conditioned by both the hydrodynamic variables (the flow-depth ratio and the width ratio) and the floodplain land cover, simulated in terms of smooth walls, meadow-type roughness, sparse-wood-type roughness, and dense-wood-type roughness.

[Archived PDF of article]

Identifying Actionable Compromises: Navigating Multi-city Robustness Conflicts to Discover Cooperative Safe Operating Spaces for Regional Water Supply Portfolios

by D. F. Gold, P. M. Reed, B. C. Trindade & G. W. Characklis

Summary: Cooperation among neighboring urban water utilities can help water managers face challenges stemming from climate change and population growth. Water utilities can cooperate by coordinating water transfers and water restrictions in times of water scarcity (drought) so that water is provided to areas that need it most. In order to successfully implement these policies, however, cooperative partners must find a compromise that is acceptable to all regional actors, a task complicated by asymmetries in resources and risks often present in regional systems. The possibility of deviations from agreed-upon actions is another complicating factor that has not been addressed in the water resources literature. Our study focuses on four urban water utilities in the Research Triangle region of North Carolina that are investigating cooperative drought mitigation strategies. We contribute a framework that includes the use of simulation models, optimization algorithms, and statistical tools to aid cooperating partners in finding acceptable compromises that are tolerant of modest deviations in planned actions. Our results can be used by regional utilities to avoid or alleviate potential planning conflicts and are broadly applicable to urban regional water supply planning across the globe.

[Archived PDF of article]

Detecting Changes in River Flow Caused by Wildfires, Storms, Urbanization, Regulation, and Climate across Sweden

by Berit Arheimer & Göran Lindström

Abstract: Changes in river flow may appear from shifts in land cover, constructions in the river channel, and climatic change, but currently there is a lack of understanding of the relative importance of these drivers. Therefore, we collected gauged river flow time series from 1961 to 2018 from across Sweden for 34 disturbed catchments to quantify how the various types of disturbances have affected river flow. We used trend analysis and the differences in observations versus hydrological modeling to explore the effects on river flow from (1) land cover changes from wildfires, storms, and urbanization; (2) dam constructions with regulations for hydropower production; and (3) climate-change impact in otherwise undisturbed catchments. A mini model ensemble, consisting of three versions of the S-HYPE model, was used, and the three models gave similar results. We searched for changes in annual and daily stream flow, seasonal flow regime, and flow duration curves. The results show that regulation of river flow has the largest impact, reducing spring floods by up to 100% and increasing winter flow by several orders of magnitude, with substantial effects transmitted far downstream. Climate change altered total river flow by up to 20%. Tree removal by wildfires and storms has minor impacts at medium and large scales. Urbanization, by contrast, increased high flows by 20%, even at medium scales. This study emphasizes the benefits of combining observed time series with numerical modeling to exclude the effect of varying weather conditions when quantifying the effects of various drivers on long-term streamflow shifts.

[Archived PDF of article]

Assessing the Feasibility of Satellite-Based Thresholds for Hydrologically Driven Landsliding

by Matthew A. Thomas, Brian D. Collins & Benjamin B. Mirus

Summary: Soil wetness and rainfall contribute to landslides across the world. Using soil moisture sensors and rain gauges, researchers have monitored these environmental conditions at numerous points across the Earth’s surface to define threshold conditions, above which landsliding should be expected for a localized area. Satellite-based technologies also deliver estimates of soil wetness and rainfall, potentially offering an approach to develop thresholds as part of landslide warning systems over larger spatial scales. To evaluate the potential for using satellite-based measurements for landslide warning, we compare the accuracy of landslide thresholds defined with ground- versus satellite-based soil wetness and rainfall information. We find that the satellite-based data over-predict soil wetness during the time of year when landslides are most likely to occur, resulting in thresholds that also over-predict the potential for landslides relative to thresholds informed by direct measurements on the ground. Our results encourage the installation of more ground-based monitoring stations in landslide-prone settings and the cautious use of satellite-based data when more direct measurements are not available.
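
Comparing ground- and satellite-informed thresholds comes down to contingency statistics for a joint exceedance rule. Below is a minimal sketch, assuming per-interval series and a simple "exceed both thresholds" warning rule; the metric names are generic illustrations, not the authors' exact skill scores.

```python
def threshold_skill(wetness, rain, slides, w_thresh, r_thresh):
    """Hit rate and false-alarm ratio for a joint wetness-rainfall
    threshold. wetness/rain are per-interval series; slides is a
    boolean series of observed landsliding (assumed inputs)."""
    warn = [w >= w_thresh and r >= r_thresh for w, r in zip(wetness, rain)]
    hits = sum(a and s for a, s in zip(warn, slides))
    misses = sum((not a) and s for a, s in zip(warn, slides))
    false_alarms = sum(a and (not s) for a, s in zip(warn, slides))
    hit_rate = hits / max(hits + misses, 1)
    false_alarm_ratio = false_alarms / max(false_alarms + hits, 1)
    return hit_rate, false_alarm_ratio
```

Run once with gauge-based series and once with satellite retrievals: a wetness bias in the satellite data shifts the optimal thresholds and inflates the false-alarm ratio, which is the effect the summary describes.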

[Archived PDF of article]

Modeling the Translocation and Transformation of Chemicals in the Soil-Plant Continuum: A Dynamic Plant Uptake Module for the HYDRUS Model

by Giuseppe Brunetti, Radka Kodešová & Jiří Šimůnek

Abstract: Food contamination is responsible for thousands of deaths worldwide every year. Plants represent the most common pathway for chemicals into the human and animal food chain. Although existing dynamic plant uptake models for chemicals are crucial for the development of reliable mitigation strategies for food pollution, they nevertheless simplify the description of physicochemical processes in soil and plants, mass transfer processes between soil and plants and in plants, and transformation in plants. To fill this scientific gap, we couple a widely used hydrological model (HYDRUS) with a multi-compartment dynamic plant uptake model, which accounts for differentiated multiple metabolization pathways in plant tissues. The developed model is validated first theoretically and then experimentally against measured data from an experiment on the translocation and transformation of carbamazepine in three vegetables. The analysis is further enriched by performing a global sensitivity analysis on the soil-plant model to identify factors driving the compound’s accumulation in plants’ shoots, as well as to elucidate the role and the importance of soil hydraulic properties in the plant uptake process. Results of the multilevel numerical analysis emphasize the model’s flexibility and demonstrate its ability to accurately reproduce physicochemical processes involved in the dynamic plant uptake of chemicals from contaminated soils.
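
At its core, a dynamic plant uptake model couples soil solution concentration to plant compartments through first-order transfers. The sketch below collapses the paper's multi-compartment, multi-pathway scheme into a single shoot compartment; the rate constants and the soil-concentration callable (which HYDRUS would supply in the coupled model) are illustrative assumptions.

```python
def shoot_concentration(c_soil, dt, n_steps, k_uptake=0.05, k_met=0.02):
    """Forward-Euler integration of dC/dt = k_uptake * c_soil(t) - k_met * C:
    uptake from the soil solution minus first-order metabolization.
    c_soil is a function of time (assumed input); rates are illustrative."""
    C, series = 0.0, []
    for i in range(n_steps):
        C += dt * (k_uptake * c_soil(i * dt) - k_met * C)
        series.append(C)
    return series
```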

[Archived PDF of article]

Physical Controls on Salmon Redd Site Selection in Restored Reaches of a Regulated, Gravel-Bed River

by Lee R. Harrison, Erin Bray, Brandon Overstreet, Carl J. Legleiter, Rocko A. Brown, Joseph E. Merz, Rosealea M. Bond, Colin L. Nicol & Thomas Dunne

Abstract: Large-scale river restoration programs have emerged recently as a tool for improving spawning habitat for native salmonids in highly altered river ecosystems. Few studies have quantified the extent to which restored habitat is utilized by salmonids, which habitat features influence redd site selection, or the persistence of restored habitat over time. We investigated fall-run Chinook salmon spawning site utilization and measured and modeled corresponding habitat characteristics in two restored reaches: a reach of channel and floodplain enhancement completed in 2013 and a reconfigured channel and floodplain constructed in 2002. Redd surveys demonstrated that both restoration projects supported a high density of salmon redds, 3 and 14 years following restoration. Salmon redds were constructed in coarse gravel substrates located in areas of high sediment mobility, as determined by measurements of gravel friction angles and a grain entrainment model. Salmon redds were located near transitions between pool-riffle bedforms in regions of high predicted hyporheic flows. Habitat quality (quantified as a function of stream hydraulics) and hyporheic flow were both strong predictors of redd occurrence, though the relative roles of these variables differed between sites. Our findings indicate that physical controls on redd site selection in restored channels were similar to those reported for natural channels elsewhere. Our results further highlight that in addition to traditional habitat criteria (e.g., water depth, velocity, and substrate size), quantifying sediment texture and mobility, as well as intragravel flow, provides a more complete understanding of the ecological benefits provided by river restoration projects.

[Archived PDF of article]

Mountain-Block Recharge: A Review of Current Understanding

by Katherine H. Markovich, Andrew H. Manning, Laura E. Condon & Jennifer C. McIntosh

Abstract: Mountain-block recharge (MBR) is the subsurface inflow of groundwater to lowland aquifers from adjacent mountains. MBR can be a major component of recharge but remains difficult to characterize and quantify due to limited hydrogeologic, climatic, and other data in the mountain block and at the mountain front. The number of MBR-related studies has increased dramatically in the 15 years since the last review of the topic was conducted by Wilson and Guan (2004), generating important advancements. We review this recent body of literature, summarize current understanding of factors controlling MBR, and provide recommendations for future research priorities. Prior to 2004, most MBR studies were performed in the southwestern United States. Since then, numerous studies have detected and quantified MBR in basins around the world, typically estimating MBR to be 5–50% of basin-fill aquifer recharge. Theoretical studies using generic numerical modeling domains have revealed fundamental hydrogeologic and topographic controls on the amount of MBR and where it originates within the mountain block. Several mountain-focused hydrogeologic studies have confirmed the widespread existence of mountain bedrock aquifers hosting considerable groundwater flow and, in some cases, identified the occurrence of interbasin flow leaving headwater catchments in the subsurface—both of which are required for MBR to occur. Future MBR research should focus on the collection of high-priority data (e.g., subsurface data near the mountain front and within the mountain block) and the development of sophisticated coupled models calibrated to multiple data types to best constrain MBR and predict how it may change in response to climate warming.

[Archived PDF of article]

An Adjoint Sensitivity Model for Steady-State Sequentially Coupled Radionuclide Transport in Porous Media

by Mohamed Hayek, Banda S. RamaRao & Marsh Lavenue

Abstract: This work presents an efficient mathematical/numerical model to compute the sensitivity coefficients of a predefined performance measure to model parameters for one-dimensional steady-state sequentially coupled radionuclide transport in a finite heterogeneous porous medium. The model is based on the adjoint sensitivity approach that offers an elegant and computationally efficient alternative way to compute the sensitivity coefficients. The transport parameters include the radionuclide retardation factors due to sorption, the Darcy velocity, and the effective diffusion/dispersion coefficients. Both continuous and discrete adjoint approaches are considered. The partial differential equations associated with the adjoint system are derived based on the adjoint state theory for coupled problems. Physical interpretations of the adjoint states are given in analogy to results obtained in the theory of groundwater flow. For the homogeneous case, analytical solutions for primary and adjoint systems are derived and presented in closed forms. Numerically calculated solutions are compared to the analytical results and show excellent agreement. Insights from sensitivity analysis are discussed to get a better understanding of the values of sensitivity coefficients. The sensitivity coefficients are also computed numerically by finite differences. The numerical sensitivity coefficients successfully reproduce the analytically derived sensitivities based on adjoint states. A derivative-based global sensitivity method coupled with the adjoint state method is presented and applied to a real field case represented by a site currently being considered for underground nuclear storage in Northern Switzerland, “Zürich Nordost,” to demonstrate the proposed method. The results show the advantage of the adjoint state method compared to other methods in terms of computational effort.
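
For readers new to adjoints, the discrete version is compact. Below is a generic sketch for a steady linear system A(p)u = b(p) with performance measure J(u): a single adjoint solve yields the sensitivity to any parameter, instead of one perturbed forward solve per parameter. This is the textbook discrete-adjoint recipe, not the paper's continuous formulation for coupled radionuclide transport.

```python
import numpy as np

def adjoint_sensitivity(A, b, dA_dp, db_dp, dJ_du):
    """Sensitivity dJ/dp for the discrete system A(p) u = b(p) and a
    measure J(u): solve A^T lam = dJ/du, then
    dJ/dp = lam^T (db/dp - (dA/dp) u). Inputs are the assembled matrix
    A, right-hand side b, their parameter derivatives, and the gradient
    of J with respect to the state (all assumed given)."""
    u = np.linalg.solve(A, b)            # forward (primary) solve
    lam = np.linalg.solve(A.T, dJ_du)    # adjoint state
    return lam @ (db_dp - dA_dp @ u)
```

The computational advantage the abstract reports follows directly: the adjoint system is solved once per performance measure, whereas finite differences require a forward solve per parameter.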

[Archived PDF of article]

Hydraulic Reconstruction of the 1818 Giétro Glacial Lake Outburst Flood

by C. Ancey, E. Bardou, M. Funk, M. Huss, M. A. Werder & T. Trewhela

Summary: Every year, natural and man-made dams fail and cause flooding. For public authorities, estimating the risk posed by dams is essential to good risk management. Efficient computational tools are required for analyzing flood risk. Testing these tools is an important step toward ensuring their reliability and performance. Knowledge of major historical floods makes it possible, in principle, to benchmark models, but because historical data are often incomplete and fraught with potential inaccuracies, validation is seldom satisfactory. Here we present one of the few major historical floods for which information on flood initiation and propagation is available and detailed: the Giétro flood. This flood occurred in June 1818 and devastated the Drance Valley in Switzerland. In the spring of that year, ice avalanches blocked the valley floor and formed a glacial lake, whose volume is today estimated at 25 × 10⁶ m³. The local authorities initiated protection works: A tunnel was drilled through the ice dam, and about half of the stored water volume was drained in 2.5 days. On 16 June 1818, the dam failed suddenly because of significant erosion at its base; this caused a major flood. This paper presents a numerical model for estimating flow rates, velocities, and depths during the dam drainage and flood flow phases. The numerical results agree well with historical data. The flood reconstruction shows that relatively simple models can be used to estimate the effects of a major flood with good accuracy.
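
The drainage phase lends itself to a very small model, in the spirit of the "relatively simple models" the summary mentions. The sketch below draws the lake down through the tunnel with orifice flow; the discharge coefficient, the constant lake area, and the fixed tunnel cross-section are assumptions of this sketch (in reality meltwater erosion enlarged the tunnel over time, and the paper's model is considerably more complete).

```python
import math

def drain_lake(h0, lake_area, tunnel_area, cd=0.6, g=9.81,
               dt=60.0, t_end=2.5 * 86400):
    """Explicit drawdown of head h (m) above a tunnel of cross-section
    tunnel_area (m2) via orifice flow Q = cd * A * sqrt(2 g h), with a
    constant lake surface area lake_area (m2). Returns (t, Q, h) steps."""
    h, t, series = h0, 0.0, []
    while t < t_end and h > 0.0:
        q = cd * tunnel_area * math.sqrt(2.0 * g * h)   # discharge (m3/s)
        h = max(h - q * dt / lake_area, 0.0)            # mass balance
        t += dt
        series.append((t, q, h))
    return series
```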

[Archived PDF of article]

The Representation of Hydrological Dynamical Systems Using Extended Petri Nets (EPN)

by Marialaura Bancheri, Francesco Serafin & Riccardo Rigon

Abstract: This work presents a new graphical system to represent hydrological dynamical models and their interactions. We propose an extended version of the Petri Nets mathematical modeling language, the Extended Petri Nets (EPN), which allows for an immediate translation from the graphics of the model to its mathematical representation in a clear way. We introduce the principal objects of the EPN representation (i.e., places, transitions, arcs, controllers, and splitters) and their use in hydrological systems. We show how to cast hydrological models in EPN and how to complete their mathematical description using a dictionary for the symbols and an expression table for the flux equations. Thanks to the compositional property of EPN, we show how it is possible to represent either a single hydrological response unit or a complex catchment where multiple systems of equations are solved simultaneously. Finally, EPN can be used to describe complex Earth system models that include feedback between the water, energy, and carbon budgets. The representation of hydrological dynamical systems with EPN provides a clear visualization of the relations and feedback between subsystems, which can be studied with techniques introduced in nonlinear systems theory and control theory.
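
The compositional idea can be mimicked with a toy data structure: places hold storages, transitions hold flux laws, and arcs wire them together. The class below is an illustrative reduction, not the authors' EPN notation; in particular it omits controllers and splitters.

```python
class ToyEPN:
    """Minimal Petri-net-like store-and-flux container (a sketch)."""

    def __init__(self):
        self.places = {}         # place name -> storage value
        self.transitions = []    # (source, sink, flux_fn) triples

    def add_place(self, name, storage=0.0):
        self.places[name] = storage

    def add_transition(self, source, sink, flux_fn):
        """flux_fn maps the places dict to a flux; source or sink may
        be None for external inputs/outputs."""
        self.transitions.append((source, sink, flux_fn))

    def step(self, dt):
        """Explicit-Euler update: evaluate all fluxes on the current
        state, then apply them, so order of arcs does not matter."""
        flows = [(s, d, f(self.places) * dt) for s, d, f in self.transitions]
        for src, dst, q in flows:
            if src is not None:
                self.places[src] -= q
            if dst is not None:
                self.places[dst] += q
```

A linear-reservoir arc from a soil store to a channel store, for instance, would be net.add_transition('soil', 'channel', lambda p: k * p['soil']), mirroring how an EPN transition carries the flux equation listed in the expression table.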

[Archived PDF of article]

A Regularization Approach to Improve the Sequential Calibration of a Semidistributed Hydrological Model

by A. de Lavenne, V. Andréassian, G. Thirel, M.-H. Ramos & C. Perrin

Abstract: In semidistributed hydrological modeling, sequential calibration usually refers to the calibration of a model by considering not only the flows observed at the outlet of a catchment but also the different gauging points inside the catchment from upstream to downstream. While sequential calibration aims to optimize the performance at these interior gauged points, we show that it generally fails to improve performance at ungauged points. In this paper, we propose a regularization approach for the sequential calibration of semidistributed hydrological models. It consists of adding a priori information on optimal parameter sets for each modeling unit of the semidistributed model. Calibration iterations are then performed by jointly maximizing simulation performance and minimizing drifts from the a priori parameter sets. The combination of these two sources of information is handled by a parameter k to which the method is quite sensitive. The method is applied to 1,305 catchments in France over 30 years. The leave-one-out validation shows that, at locations considered as ungauged, model simulations are significantly improved (over all the catchments, the median KGE criterion is increased from 0.75 to 0.83 and the first quartile from 0.35 to 0.66), while model performance at gauged points is not significantly impacted by the use of the regularization approach. Small catchments benefit most from this calibration strategy. These performances are, however, very similar to the performances obtained with a lumped model based on similar conceptualization.
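
Schematically, the regularized calibration trades simulation performance against drift from the a priori parameters. A minimal sketch of such an objective is below, with k playing the role of the abstract's sensitivity parameter; the exact combination used alongside the KGE criterion in the paper may differ.

```python
def regularized_objective(params, prior, performance, k):
    """Score to maximize during calibration of one modeling unit:
    simulation performance minus k times the squared drift from the
    a priori parameter set. performance is a callable returning, e.g.,
    a KGE-like score (an assumed input)."""
    drift = sum((p - q) ** 2 for p, q in zip(params, prior))
    return performance(params) - k * drift
```

The design tension is visible in the single line: k = 0 recovers plain sequential calibration (good at gauged points, poor transfer), while a very large k pins every unit to its prior and ignores the observations.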

[Archived PDF of article]

Proneness of European Catchments to Multiyear Streamflow Droughts

by Manuela I. Brunner & Lena M. Tallaksen

Summary: Droughts lasting longer than 1 year can have severe ecological, social, and economic impacts. They are characterized by below-average flows, not only during the low-flow period but also in the high-flow period when water stores such as groundwater or artificial reservoirs are usually replenished. Limited catchment storage might worsen the impacts of droughts and make water management more challenging. Knowledge of the occurrence of multiyear drought events enables better adaptation and increases preparedness. In this study, we assess the proneness of European catchments to multiyear droughts by simulating long discharge records. Our findings show that multiyear drought events mainly occur in regions where the discharge seasonality is mostly influenced by rainfall, whereas catchments whose seasonality is dominated by melt processes are less affected. The strong link between the proneness of a catchment to multiyear events and its discharge seasonality leads to the conclusion that future changes toward less snow storage and thus less snow melt will increase the probability of multiyear drought occurrence.

[Archived PDF of article]

Equifinality and Flux Mapping: A New Approach to Model Evaluation and Process Representation under Uncertainty

by Sina Khatami, Murray C. Peel, Tim J. Peterson & Andrew W. Western

Abstract: Uncertainty analysis is an integral part of any scientific modeling, particularly within the domain of hydrological sciences given the various types and sources of uncertainty. At the center of uncertainty rests the concept of equifinality, that is, reaching a given endpoint (finality) through different pathways. The operational definition of equifinality in hydrological modeling is that various model structures and/or parameter sets (i.e., equal pathways) are equally capable of reproducing a similar (not necessarily identical) hydrological outcome (i.e., finality). Here we argue that there is more to model equifinality than model structures/parameters, that is, other model components can give rise to model equifinality and/or could be used to explore equifinality within model space. We identified six facets of model equifinality, namely, model structure, parameters, performance metrics, initial and boundary conditions, inputs, and internal fluxes. Focusing on model internal fluxes, we developed a methodology called flux mapping that has fundamental implications in understanding and evaluating model process representation within the paradigm of multiple working hypotheses. To illustrate this, we examine the equifinality of runoff fluxes of a conceptual rainfall-runoff model for a number of different Australian catchments. We demonstrate how flux maps can give new insights into the model behavior that cannot be captured by conventional model evaluation methods. We discuss the advantages of flux space, as a subspace of the model space not usually examined, over parameter space. We further discuss the utility of flux mapping in hypothesis generation and testing, extendable to any field of scientific modeling of open complex systems under uncertainty.
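
In code, flux mapping amounts to projecting each behavioral (equifinal) parameter set into the space of internal fluxes rather than parameters. Below is a minimal sketch, assuming simulate is a stand-in for the conceptual rainfall-runoff model and returns a dictionary of named runoff-component series; the authors' flux mapping methodology is richer than this reduction.

```python
import numpy as np

def flux_map(simulate, behavioral_sets, forcing):
    """For each equifinal parameter set, return the share of total
    simulated runoff carried by each internal flux component, giving
    one point per parameter set in 'flux space'."""
    points = []
    for theta in behavioral_sets:
        fluxes = simulate(theta, forcing)        # name -> time series
        totals = {k: float(np.sum(v)) for k, v in fluxes.items()}
        grand = sum(totals.values())
        points.append({k: v / grand for k, v in totals.items()})
    return points
```

Two parameter sets with near-identical streamflow scores can land far apart in this space, which is exactly the diagnostic signal that conventional performance metrics miss.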

[Archived PDF of article]

Role of Extreme Precipitation and Initial Hydrologic Conditions on Floods in Godavari River Basin, India

by Shailesh Garg & Vimal Mishra

Abstract: Floods are the most frequent natural calamity in India. The Godavari river basin (GRB) witnessed several floods in the past 50 years. Notwithstanding the large damage and economic loss, the role of extreme precipitation and antecedent moisture conditions on floods in the GRB remains unexplored. Using the observations and the well-calibrated Variable Infiltration Capacity model, we estimate the changes in the extreme precipitation and floods in the observed (1955–2016) and projected future (2071–2100) climate in the GRB. We evaluate the role of initial hydrologic conditions and extreme precipitation on floods in both observed and projected future climate. We find a statistically significant increase in annual maximum precipitation for the catchments upstream of four gage stations during the 1955–2016 period. However, the rise in annual maximum streamflow at all four gage stations in the GRB was not statistically significant. The probability of floods driven by extreme precipitation (PFEP) varies between 0.55 and 0.7 at the four gage stations of the GRB and declines with the size of the basins. More than 80% of extreme precipitation events that cause floods occur under wet antecedent moisture conditions at all four locations in the GRB. The frequency of extreme precipitation events is projected to rise twofold or more (under RCP 8.5) in the future (2071–2100) at all four locations. However, the increased frequency of floods under the future climate will largely be driven by the substantial rise in extreme precipitation events rather than by wet antecedent moisture conditions.
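
The PFEP statistic is, in essence, a conditional frequency. A sketch is below, assuming event dates as datetime.date objects and an illustrative short attribution window; the paper's matching of precipitation events to flood peaks is more careful than this.

```python
def pfep(extreme_dates, flood_dates, window_days=3):
    """Empirical probability of floods driven by extreme precipitation:
    the fraction of extreme-precipitation events followed by a flood
    peak within window_days (window length is an assumption here)."""
    hits = sum(any(0 <= (f - e).days <= window_days for f in flood_dates)
               for e in extreme_dates)
    return hits / len(extreme_dates)
```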

[Archived PDF of article]

Research Letters

Combined Effect of Tides and Varying Inland Groundwater Input on Flow and Salinity Distribution in Unconfined Coastal Aquifers

by Woei Keong Kuan, Pei Xin, Guangqiu Jin, Clare E. Robinson, Badin Gibbes & Ling Li

Abstract: Tides and seasonally varying inland freshwater input, with different fluctuation periods, are important factors affecting flow and salt transport in coastal unconfined aquifers. These processes affect submarine groundwater discharge (SGD) and associated chemical transport to the sea. While the individual effects of these forcings have previously been studied, here we conducted physical experiments and numerical simulations to evaluate the interactions between varying inland freshwater input and tidal oscillations. Varying inland freshwater input was shown to induce significant water exchange across the aquifer-sea interface as the saltwater wedge shifted landward and seaward over the fluctuation cycle. Tidal oscillations led to seawater circulations through the intertidal zone that also enhanced the density-driven circulation, resulting in a significant increase in the total SGD. The combination of the tide and varying inland freshwater input, however, decreased the SGD components driven by the separate forcings (e.g., tides and density). Tides restricted the landward and seaward movement of the saltwater wedge in response to the varying inland freshwater input, in addition to reducing the time delay between the varying freshwater input signal and the landward-seaward movement of the saltwater wedge interface. This study reveals a nonlinear interaction between tidal fluctuations and varying inland freshwater input and will help to improve our understanding of SGD, seawater intrusion, and chemical transport in coastal unconfined aquifers.

[Archived PDF of article]

Essay 83: Press Release: World Energy Outlook 2019 Highlights Deep Disparities in the Global Energy System

Rapid and widespread changes across all parts of the energy system are needed to put the world on a path to a secure and sustainable energy future

Deep disparities define today’s energy world. The dissonance between well-supplied oil markets and growing geopolitical tensions and uncertainties. The gap between the ever-higher amounts of greenhouse gas emissions being produced and the insufficiency of stated policies to curb those emissions in line with international climate targets. The gap between the promise of energy for all and the lack of electricity access for 850 million people around the world.

The World Energy Outlook 2019, the International Energy Agency’s flagship publication, explores these widening fractures in detail. It explains the impact of today’s decisions on tomorrow’s energy systems, and describes a pathway that enables the world to meet climate, energy access and air quality goals while maintaining a strong focus on the reliability and affordability of energy for a growing global population.

As ever, decisions made by governments remain critical for the future of the energy system. This is evident in the divergences between WEO scenarios that map out different routes the world could follow over the coming decades, depending on the policies, investments, technologies and other choices that decision makers pursue today. Together, these scenarios seek to address a fundamental issue – how to get from where we are now to where we want to go.

The path the world is on right now is shown by the Current Policies Scenario, which provides a baseline picture of how global energy systems would evolve if governments make no changes to their existing policies. In this scenario, energy demand rises by 1.3% a year to 2040, resulting in strains across all aspects of energy markets and a continued strong upward march in energy-related emissions.

The Stated Policies Scenario, formerly known as the New Policies Scenario, incorporates today’s policy intentions and targets in addition to existing measures. The aim is to hold up a mirror to today’s plans and illustrate their consequences. The future outlined in this scenario is still well off track from the aim of a secure and sustainable energy future. It describes a world in 2040 where hundreds of millions of people still go without access to electricity, where pollution-related premature deaths remain around today’s elevated levels, and where CO2 emissions would lock in severe impacts from climate change.

The Sustainable Development Scenario indicates what needs to be done differently to fully achieve climate and other energy goals that policy makers around the world have set themselves. Achieving this scenario – a path fully aligned with the Paris Agreement aim of holding the rise in global temperatures to well below 2°C and pursuing efforts to limit it to 1.5°C – requires rapid and widespread changes across all parts of the energy system. Sharp emission cuts are achieved thanks to multiple fuels and technologies providing efficient and cost-effective energy services for all.

“What comes through with crystal clarity in this year’s World Energy Outlook is there is no single or simple solution to transforming global energy systems,” said Dr. Fatih Birol, the IEA’s Executive Director. “Many technologies and fuels have a part to play across all sectors of the economy. For this to happen, we need strong leadership from policy makers, as governments hold the clearest responsibility to act and have the greatest scope to shape the future.”

In the Stated Policies Scenario, energy demand increases by 1% per year to 2040. Low-carbon sources, led by solar PV, supply more than half of this growth, and natural gas accounts for another third. Oil demand flattens out in the 2030s, and coal use edges lower. Some parts of the energy sector, led by electricity, undergo rapid transformations. Some countries, notably those with “net zero” aspirations, go far in reshaping all aspects of their supply and consumption.

However, the momentum behind clean energy is insufficient to offset the effects of an expanding global economy and growing population. The rise in emissions slows but does not peak before 2040.

Shale output from the United States is set to stay higher for longer than previously projected, reshaping global markets, trade flows and security. In the Stated Policies Scenario, annual U.S. production growth slows from the breakneck pace seen in recent years, but the United States still accounts for 85% of the increase in global oil production to 2030, and for 30% of the increase in gas. By 2025, total U.S. shale output (oil and gas) overtakes total oil and gas production from Russia.

“The shale revolution highlights that rapid change in the energy system is possible when an initial push to develop new technologies is complemented by strong market incentives and large-scale investment,” said Dr. Birol. “The effects have been striking, with U.S. shale now acting as a strong counterweight to efforts to manage oil markets.”

The higher U.S. output pushes down the share of OPEC members and Russia in total oil production, which drops to 47% in 2030, from 55% in the mid-2000s. But whichever pathway the energy system follows, the world is set to rely heavily on oil supply from the Middle East for years to come.

Alongside the immense task of putting emissions on a sustainable trajectory, energy security remains paramount for governments around the globe. Traditional risks have not gone away, and new hazards such as cybersecurity and extreme weather require constant vigilance. Meanwhile, the continued transformation of the electricity sector requires policy makers to move fast to keep pace with technological change and the rising need for the flexible operation of power systems.

“The world urgently needs to put a laser-like focus on bringing down global emissions. This calls for a grand coalition encompassing governments, investors, companies and everyone else who is committed to tackling climate change,” said Dr. Birol. “Our Sustainable Development Scenario is tailor-made to help guide the members of such a coalition in their efforts to address the massive climate challenge that faces us all.”

A sharp pick-up in energy efficiency improvements is the element that does the most to bring the world towards the Sustainable Development Scenario. Right now, efficiency improvements are slowing: the 1.2% rate in 2018 is around half the average seen since 2010 and remains far below the 3% rate that would be needed.

Electricity is one of the few energy sources that sees rising consumption over the next two decades in the Sustainable Development Scenario. Electricity’s share of final consumption overtakes that of oil, today’s leader, by 2040. Wind and solar PV provide almost all the increase in electricity generation.

Putting electricity systems on a sustainable path will require more than just adding more renewables. The world also needs to focus on the emissions that are “locked in” to existing systems. Over the past 20 years, Asia has accounted for 90% of all coal-fired capacity built worldwide, and these plants potentially have long operational lifetimes ahead of them. This year’s WEO considers three options to bring down emissions from the existing global coal fleet: to retrofit plants with carbon capture, utilisation and storage or biomass co-firing equipment; to repurpose them to focus on providing system adequacy and flexibility; or to retire them earlier.

Access the 2019 World Energy Outlook report.

About the IEA: The International Energy Agency, the global energy authority, was founded in 1974 to help its member countries co-ordinate a collective response to major oil supply disruptions. Its mission has evolved and rests today on three main pillars: working to ensure global energy security; expanding energy cooperation and dialogue around the world; and promoting an environmentally sustainable energy future.

International Energy Agency Press Office
31-35 Rue de la Fédération, Paris, 75015

Essay 80: Short-Term Energy Outlook

U.S. Energy Information Administration
November 13, 2019 Release

Highlights

Global liquid fuels
  • Brent crude oil spot prices averaged $60 per barrel (b) in October, down $3/b from September and down $21/b from October 2018. EIA forecasts Brent spot prices will average $60/b in 2020, down from a 2019 average of $64/b. EIA forecasts that West Texas Intermediate (WTI) prices will average $5.50/b less than Brent prices in 2020. EIA expects crude oil prices will be lower on average in 2020 than in 2019 because of forecast rising global oil inventories, particularly in the first half of next year.
  • Based on preliminary data and model estimates, EIA estimates that the United States exported 140,000 b/d more total crude oil and petroleum products in September than it imported; total exports exceeded imports by 550,000 b/d in October. If confirmed in survey-collected monthly data, it would be the first time the United States exported more petroleum than it imported since EIA records began in 1949. EIA expects total crude oil and petroleum net exports to average 750,000 b/d in 2020 compared with average net imports of 520,000 b/d in 2019.
  • Distillate fuel inventories (a category that includes home heating oil) in the U.S. East Coast—Petroleum Administration for Defense District (PADD 1)—totaled 36.6 million barrels at the end of October, which was 30% lower than the five-year (2014–18) average for the end of October. The declining inventories largely reflect low U.S. refinery runs during October and low distillate fuel imports to the East Coast. EIA does not forecast regional distillate prices, but low inventories could put upward pressure on East Coast distillate fuel prices, including home heating oil, in the coming weeks.
  • U.S. regular gasoline retail prices averaged $2.63 per gallon (gal) in October, up 3 cents/gal from September and 11 cents/gal higher than forecast in last month’s STEO. Average U.S. regular gasoline retail prices were higher than expected, in large part, because of ongoing issues from refinery outages in California. EIA forecasts that regular gasoline prices on the West Coast (PADD 5), a region that includes California, will fall as the issues begin to resolve. EIA expects that prices in the region will average $3.44/gal in November and $3.12/gal in December. For the U.S. national average, EIA expects regular gasoline retail prices to average $2.65/gal in November and fall to $2.50/gal in December. EIA forecasts that the annual average price in 2020 will be $2.62/gal.
  • Despite low distillate fuel inventories, EIA expects that average household expenditures for home heating oil will decrease this winter. This forecast largely reflects warmer temperatures than last winter for the entire October–March period, and retail heating oil prices are expected to be unchanged compared with last winter. For households that heat with propane, EIA forecasts that expenditures will fall by 15% from last winter because of milder temperatures and lower propane prices.
Natural gas
  • Natural gas storage injections in the United States outpaced the previous five-year (2014–18) average during the 2019 injection season as a result of rising natural gas production. At the beginning of April, when the injection season started, working inventories were 28% lower than the five-year average for the same period. By October 31, U.S. total working gas inventories reached 3,762 billion cubic feet (Bcf), which was 1% higher than the five-year average and 16% higher than a year ago.
  • EIA expects natural gas storage withdrawals to total 1.9 trillion cubic feet (Tcf) between the end of October and the end of March, which is less than the previous five-year average winter withdrawal. A withdrawal of this amount would leave end-of-March inventories at almost 1.9 Tcf, 9% higher than the five-year average.
  • The Henry Hub natural gas spot price averaged $2.33 per million British thermal units (MMBtu) in October, down 23 cents/MMBtu from September. The decline largely reflected strong inventory injections. However, forecast cold temperatures across much of the country caused prices to rise in early November, and EIA forecasts Henry Hub prices to average $2.73/MMBtu for the final two months of 2019. EIA forecasts Henry Hub spot prices to average $2.48/MMBtu in 2020, down 13 cents/MMBtu from the 2019 average. Lower forecast prices in 2020 reflect a decline in U.S. natural gas demand and slowing U.S. natural gas export growth, allowing inventories to remain higher than the five-year average during the year even as natural gas production growth is forecast to slow. 
  • EIA forecasts that annual U.S. dry natural gas production will average 92.1 billion cubic feet per day (Bcf/d) in 2019, up 10% from 2018. EIA expects that natural gas production will grow much less in 2020 because of the lag between changes in price and changes in future drilling activity, with low prices in the third quarter of 2019 reducing natural gas-directed drilling in the first half of 2020. EIA forecasts natural gas production in 2020 will average 94.9 Bcf/d.
  • EIA expects U.S. liquefied natural gas (LNG) exports to average 4.7 Bcf/d in 2019 and 6.4 Bcf/d in 2020 as three new liquefaction projects come online. In 2019, three new liquefaction facilities—Cameron LNG, Freeport LNG, and Elba Island LNG—commissioned their first trains. Natural gas deliveries to LNG projects set a new record in July, averaging 6.0 Bcf/d, and increased further to 6.6 Bcf/d in October, when new trains at Cameron and Freeport began ramping up. Cameron LNG exported its first cargo in May, Corpus Christi LNG’s newly commissioned Train 2 shipped its first cargo in July, and Freeport followed in September. Elba Island plans to ship its first export cargo by the end of this year. In 2020, Cameron, Freeport, and Elba Island expect to place their remaining trains in service, bringing total U.S. LNG export capacity to 8.9 Bcf/d by the end of the year.
Electricity, coal, renewables, and emissions
  • EIA expects the share of U.S. total utility-scale electricity generation from natural gas-fired power plants will rise from 34% in 2018 to 37% in 2019 and to 38% in 2020. EIA forecasts the share of U.S. electric generation from coal to average 25% in 2019 and 22% in 2020, down from 28% in 2018. EIA’s forecast nuclear share of U.S. generation remains at about 20% in 2019 and in 2020. Hydropower averages a 7% share of total U.S. generation in the forecast for 2019 and 2020, down from almost 8% in 2018. Wind, solar, and other non-hydropower renewables provided 9% of U.S. total utility-scale generation in 2018. EIA expects they will provide 10% in 2019 and 12% in 2020.
  • EIA expects total U.S. coal production in 2019 to total 698 million short tons (MMst), an 8% decrease from the 2018 level of 756 MMst. The decline reflects lower demand for coal in the U.S. electric power sector and reduced competitiveness of U.S. exports in the global market. EIA expects U.S. steam coal exports to face increasing competition from Eastern European sources, and that Russia will fill a growing share of steam coal trade, causing U.S. coal exports to fall in 2020. EIA forecasts that coal production in 2020 will total 607 MMst.
  • EIA expects U.S. electric power sector generation from renewables other than hydropower—principally wind and solar—to grow from 408 billion kilowatt-hours (kWh) in 2019 to 466 billion kWh in 2020. In EIA’s forecast, Texas accounts for 19% of the U.S. non-hydropower renewables generation in 2019 and 22% in 2020. California’s forecast share of non-hydropower renewables generation falls from 15% in 2019 to 14% in 2020. EIA expects that the Midwest and Central power regions will see shares in the 16% to 18% range for 2019 and 2020.
  • EIA forecasts that, after rising by 2.7% in 2018, U.S. energy-related carbon dioxide (CO2) emissions will decline by 1.7% in 2019 and by 2.0% in 2020, partially as a result of lower forecast energy consumption. In 2019, EIA forecasts less demand for space cooling because of cooler summer months: cooling degree days are expected to decline by 5% from 2018, when they were significantly higher than the previous 10-year (2008–17) average. In addition, EIA expects U.S. CO2 emissions in 2019 to decline because the forecast share of electricity generated from natural gas and renewables will increase, and the share generated from coal, which is a more carbon-intensive energy source, will decrease.

Essay 79: Past and Present Thinking

History is “forever new” and we keep asking “what’s new?” but the past is “forever suggestive” and so we inquire here as to whether the past gives us interesting echoes of the more recent.

Specifically, we juxtapose the “closing of the gold window” in August 1971 (Nixon) and the British gold standard gyrations between 1925 and 1931, when England left gold (i.e., September 1931).

At the time, under Nixon, the U.S. also had an unemployment rate of 6.1% (August 1971) and an inflation rate of 5.84% (1971).

To combat these problems, President Nixon consulted Federal Reserve chairman Arthur Burns, incoming Treasury Secretary John Connally, and then Undersecretary for International Monetary Affairs and future Fed Chairman Paul Volcker.

On the afternoon of Friday, August 13, 1971, these officials, along with twelve other high-ranking White House and Treasury advisors, met secretly with Nixon at Camp David. There was great debate about what Nixon should do, but ultimately Nixon, relying heavily on the advice of the self-confident Connally, decided to break up Bretton Woods by announcing a sweeping set of actions on August 15.

Speaking on television on Sunday, August 15, when American financial markets were closed, Nixon said the following:

“The third indispensable element in building the new prosperity is closely related to creating new jobs and halting inflation. We must protect the position of the American dollar as a pillar of monetary stability around the world.

“In the past 7 years, there has been an average of one international monetary crisis every year …

“I have directed Secretary Connally to suspend temporarily the convertibility of the dollar into gold or other reserve assets, except in amounts and conditions determined to be in the interest of monetary stability and in the best interests of the United States.

“Now, what is this action—which is very technical—what does it mean for you?

“Let me lay to rest the bugaboo of what is called devaluation.

“If you want to buy a foreign car or take a trip abroad, market conditions may cause your dollar to buy slightly less. But if you are among the overwhelming majority of Americans who buy American-made products in America, your dollar will be worth just as much tomorrow as it is today.

“The effect of this action, in other words, will be to stabilize the dollar.”

Britain’s own experience in the twenties is explained like this:

“In 1925, Britain had returned to the gold standard.

(editor: This Churchill decision was deeply critiqued by Keynes.)

“When Labour came to power in May 1929 this was in good time for Black Friday on Wall Street in the following October.

“After the Austrian and German crashes in May and July 1931, Britain’s financial position became critical, and on 21st September she abandoned the gold standard.

“London was still the world’s financial capital in 1931, and the British abandonment of the gold standard set off a chain of reactions throughout the world.

“Strangely enough Germany and Austria maintained the gold standard…”

(Europe of the Dictators, Elizabeth Wiskemann, Fontana/Collins, 1977, pages 92-93)

Nixon’s policies gave us the demise of Bretton Woods, while the economic gyrations of 1925-1931 were part of the lead-up to World War II.

The settings are “infinitely different” across the decades, but the feeling of “flying blind” applies to both cases: the U.S.A. “closing the gold window” in August 1971, and Britain abandoning Churchill’s 1925 return to the gold standard by 1931. One gets the sense of “concealed turmoil” and a lot of “winging it” in both cases. Policy-makers disagreed, and they all saw the world of their moments “through a glass, darkly.”

Essay 66: Education and the Question of Fecklessness

We propose in Meta Intelligence an education that is completely global and cosmopolitan from Day 1.

The problem with education as a confusing area of activity is revealed to us in an episode of the great Japanese novel, The Makioka Sisters.

The Makioka Sisters (細雪 [Sasameyuki], “Light Snow”) is a novel by Japanese writer Jun’ichirō Tanizaki (died in 1965) that was serialized from 1943 to 1948. It follows the lives of the wealthy Makioka family of Osaka from the autumn of 1936 to April 1941, focusing on the family’s attempts to find a husband for the third sister, Yukiko.

In the novel, there’s a description of a “failed educational odyssey:”

“Mimaki was an old court family. The present viscount, the son, was well along in years. Mimaki Minoru, son by a concubine, was a graduate of the Peers School and had studied physics at the Imperial University, which he left to go to France.  In Paris he studied painting for a time, and French cooking for a time, and numerous other things, none for very long.

“Going on to America, he studied aeronautics in a not-too-famous state university, and he did finally take a degree, it seemed.

“After graduation, he continued to wander about the United States, and on to Mexico and South America. With his allowance from home cut off in the course of these wanderings, he made a living as a cook and even as a bellboy. He also returned to painting and even tried his hand at architecture.

“Following his whims and relying on his undeniable cleverness, he tried everything. He abandoned aeronautics when he left school.”

(The Makioka Sisters, Vintage Books, 1985, Seidensticker translation, pages 473-474)

This person winds up dabbling in architecture after his return to Japan.

This episode in Tanizaki’s great novel gives us a “flashlight” or “searchlight” into the whole problem of educational confusion.  Is this simply a case of one person’s “fecklessness?”  Is this just a case of what’s called “failure to launch” (see the movie by this name)?

Or is it partly perhaps that education as a “lockstep system” of schools, exams, courses, semesters, quizzes and grades is very “inhospitable” to “searchers?”

If we call everyone who “stumbles around” a dilettante and a feckless failure, we might be unnecessarily “binary,” exclusionary and unaware of the problem of “cold educational ecosystems” which punish exploring for those who are not “born specialists.”  Winners and losers are too polarized as an educational judgment, perhaps.

The classic German novel about youthful confusions is Fontane’s classic Irrungen, Wirrungen (Trials and Tribulations, 1888) and perhaps an argument could be made that the coldly “binary view” of “successes” versus “the feckless” causes the loss of many young people who had various kinds of emotional resistance to education as an “Olympics” of sorts, with “winners and losers.”  This might be seen as a kind of overly narrow kind of “edu-brutality” which is intolerant of more difficult adjustment stories for young people, which are not uncommon.