China Monitor: How Immigration Is Shaping Chinese Society

(from MERICS China Monitor)

To the surprise of many, China has emerged as a destination country for immigration: as its population ages and its workforce shrinks, the country needs more immigrants.

The background of immigrants to China is becoming more diverse. While the number of high-earning expatriates from developed countries has peaked, China is now also attracting more students than ever from all over the world, including many from less developed countries. Low-skilled labor migration and marriage migration are also on the rise. The main areas that attract foreigners are the large urban centers along the coast (Guangzhou, Shanghai, Beijing) and borderland regions in the South, Northeast and Northwest, but smaller numbers are also making their way to smaller cities across China.

In the new MERICS China Monitor “How Immigration Is Shaping Chinese Society” [archived PDF], MERICS Director Frank N. Pieke and colleagues from other European universities and institutions discuss the most salient issues confronting the Chinese government and foreign residents themselves.

According to their analysis, for many foreigners China has become considerably less accommodating over the last ten years, particularly with regard to border control, public security, visa categories, and work and residence permits. China’s immigration policy is still driven by narrow concerns of regulation, institutionalization and control. It remains predicated on attracting high-quality professionals, researchers, entrepreneurs and investors. Long-term challenges, such as the emerging demographic transition, remain to be addressed.

The authors detect a worrying trend towards intolerance of ethnic and racial difference, fed by increasing nationalism and ethnic chauvinism. They argue that the Chinese government, civil society, foreign diplomatic missions, employers of foreigners and international organizations present in China should take a clear stance against racism and discrimination. China’s immigration policy needs to include the integration of foreigners into society and provide clear and predictable paths to acquiring permanent residence.

[Archived PDF]

Federal Reserve Review of Monetary Policy Strategy, Tools, and Communications: Some Preliminary Views

(Speech by Governor Lael Brainard at the presentation of the 2019 William F. Butler Award, New York Association for Business Economics, New York, New York)

It is a pleasure to be here with you. It is an honor to join the 45 outstanding economic researchers and practitioners who are past recipients of the William F. Butler Award. I want to express my deep appreciation to the New York Association for Business Economics (NYABE) and NYABE President Julia Coronado.

I will offer my preliminary views on the Federal Reserve’s review of its monetary policy strategy, tools, and communications after first touching briefly on the economic outlook. These remarks represent my own views. The framework review is ongoing and will extend into 2020, and no conclusions have been reached at this time.1

Outlook and Policy

There are good reasons to expect the economy to grow at a pace modestly above potential over the next year or so, supported by strong consumers and a healthy job market, despite persistent uncertainty about trade conflict and disappointing foreign growth. Recent data provide some reassurance that consumer spending continues to expand at a healthy pace despite some slowing in retail sales. Consumer sentiment remains solid, and the employment picture is positive. Housing seems to have turned a corner and is poised for growth following several weak quarters.

Business investment remains downbeat, restrained by weak growth abroad and trade conflict. But there is little sign so far that the softness in trade, manufacturing, and business investment is affecting consumer spending, and the effect on services has been limited.

Employment remains strong. The employment-to-population ratio for prime-age adults has moved up to its pre-recession peak, and the three-month moving average of the unemployment rate is near a 50-year low.2 Monthly job gains remain above the pace needed to absorb new entrants into the labor force despite some slowing since last year. And initial claims for unemployment insurance—a useful real-time indicator historically—remain very low despite some modest increases.
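The real-time indicator alluded to here (detailed in note 2) is simple enough to sketch in a few lines. This is an illustrative implementation of the rule described in that note, using invented data; it is not official Federal Reserve code, and variants of the rule differ in exactly which series the twelve-month low is taken over.

```python
# Sketch of the real-time recession indicator from note 2 (the "Sahm rule"):
# signal when the three-month moving average of the unemployment rate rises
# at least 0.5 percentage point above its low over the previous 12 months.
# The unemployment series below are made up for illustration.

def three_month_averages(rates):
    """Three-month moving averages of monthly unemployment rates."""
    return [sum(rates[i - 2:i + 1]) / 3 for i in range(2, len(rates))]

def sahm_signal(rates, threshold=0.5):
    """rates: monthly unemployment rates, oldest first (needs >= 15 months)."""
    ma = three_month_averages(rates)
    current = ma[-1]
    prior_low = min(ma[-13:-1])  # low of the moving average, prior 12 months
    return current - prior_low >= threshold

flat = [3.6] * 15                       # steady labor market: no signal
rising = [3.6] * 12 + [4.0, 4.3, 4.6]   # sharp deterioration: signal
print(sahm_signal(flat), sahm_signal(rising))  # False True
```

The half-point threshold is the calibration cited in note 2; the function takes it as a parameter so alternative calibrations can be tested.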

Data on inflation have come in about as I expected, on balance, in recent months. Inflation remains below the Federal Reserve’s 2 percent symmetric objective, which has been true for most of the past seven years. The price index for core personal consumption expenditures (PCE), which excludes food and energy prices and is a better indicator of future inflation than overall PCE prices, increased 1.7 percent over the 12 months through September.

Foreign growth remains subdued. While there are signs that the decline in euro-area manufacturing is stabilizing, the latest indicators on economic activity in China remain sluggish, and the news in Japan and in many emerging markets has been disappointing. Overall, it appears third-quarter foreign growth was weak, and the latest indicators point to little improvement in the fourth quarter.

More broadly, the balance of risks remains to the downside, although there has been some improvement in risk sentiment in recent weeks. The risk of a disorderly Brexit in the near future has declined significantly, and there is some hope that a U.S.-China trade truce could avert additional tariffs. While risks remain, financial market indicators suggest market participants see a diminution in such risks, and probabilities of recessions from models using market data have declined.

The baseline is for continued moderate expansion, a strong labor market, and inflation moving gradually to our symmetric 2 percent objective. The Federal Open Market Committee (FOMC) has taken significant action to provide insurance against the risks associated with trade conflict and weak foreign growth against a backdrop of muted inflation. Since July, the Committee has lowered the target range for the federal funds rate by ¾ percentage point, to the current range of 1½ to 1¾ percent. It will take some time for the full effect of this accommodation to work its way through economic activity, the labor market, and inflation. I will be watching the data carefully for signs of a material change to the outlook that could prompt me to reassess the appropriate path of policy.

Review

The Federal Reserve is conducting a review of our monetary policy strategy, tools, and communications to make sure we are well positioned to advance our statutory goals of maximum employment and price stability.3 Three key features of today’s new normal call for a reassessment of our monetary policy strategy: trend inflation is running below target, the sensitivity of price inflation to resource utilization is very low, and the neutral rate is very low here and abroad.4

First, trend inflation is below target.5 Underlying trend inflation appears to be running a few tenths below the Committee’s symmetric 2 percent objective, according to various statistical filters. This raises the risk that households and businesses could come to expect inflation to run persistently below our target and change their behavior in a way that reinforces that expectation. Indeed, with inflation having fallen short of 2 percent for most of the past seven years, inflation expectations may have declined, as suggested by some survey-based measures of long-run inflation expectations and by market-based measures of inflation compensation.

Second, the sensitivity of price inflation to resource utilization is very low. This is what economists mean when they say that the Phillips curve is flat. A flat Phillips curve has the important advantage of allowing employment to continue expanding for longer without generating inflationary pressures, thereby providing greater opportunities to more people. But it also makes it harder to achieve our 2 percent inflation objective on a sustained basis when inflation expectations have drifted below 2 percent.

Third, the long-run neutral rate of interest is very low, which means that we are likely to see more frequent and prolonged episodes when the federal funds rate is stuck at its effective lower bound (ELB).6 The neutral rate is the level of the federal funds rate that would keep the economy at full employment and 2 percent inflation if no tailwinds or headwinds were buffeting the economy. A variety of forces have likely contributed to a decline in the neutral rate, including demographic trends in many large economies, some slowing in the rate of productivity growth, and increases in the demand for safe assets. When looking at the Federal Reserve’s Summary of Economic Projections (SEP), it is striking that the Committee’s median projection of the longer-run federal funds rate has moved down from 4¼ percent to 2½ percent over the past seven years.7 A similar decline can be seen among private forecasts.8 This decline means the conventional policy buffer is likely to be only about half of the 4½ to 5 percentage points by which the FOMC has typically cut the federal funds rate to counter recessionary pressures over the past five decades.
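The arithmetic behind the “about half” comparison can be made explicit. A minimal sketch, assuming the effective lower bound is roughly zero; the rate figures are the SEP medians and typical recession cuts quoted above, not model outputs.

```python
# Illustrative arithmetic: the conventional policy buffer is the gap between
# the neutral federal funds rate and the effective lower bound (taken here,
# as an assumption, to be roughly zero).
ELB = 0.0  # effective lower bound, in percent

def policy_buffer(neutral_rate, elb=ELB):
    """Room to cut the policy rate before hitting the lower bound, in pp."""
    return neutral_rate - elb

old_buffer = policy_buffer(4.25)  # SEP longer-run median seven years ago
new_buffer = policy_buffer(2.50)  # current SEP longer-run median

typical_cut = (4.5, 5.0)  # pp cut in past recessions, per the speech
print(f"old buffer: {old_buffer} pp, new buffer: {new_buffer} pp")
print(f"new buffer vs. typical cut: {new_buffer / typical_cut[1]:.0%} "
      f"to {new_buffer / typical_cut[0]:.0%}")
```

Dividing the 2½ percentage point buffer by the 4½ to 5 point cuts typical of past recessions gives roughly 50 to 56 percent, which is the “only about half” in the text.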

This large loss of policy space will tend to increase the frequency or length of periods when the policy rate is pinned at the ELB, unemployment is elevated, and inflation is below target.9 In turn, the experience of frequent or extended periods of low inflation at the ELB risks eroding inflation expectations and further compressing the conventional policy space. The risk is a downward spiral where conventional policy space gets compressed even further, the ELB binds even more frequently, and it becomes increasingly difficult to move inflation expectations and inflation back up to target. While consumers and businesses might see very low inflation as having benefits at the individual level, at the aggregate level, inflation that is too low can make it very challenging for monetary policy to cut the short-term nominal interest rate sufficiently to cushion the economy effectively.10

The experience of Japan and of the euro area more recently suggests that this risk is real. Indeed, the fact that Japan and the euro area are struggling with this challenging triad further complicates our task, because there are important potential spillovers from monetary policy in other major economies to our own economy through exchange rate and yield curve channels.11

In light of the likelihood of more frequent episodes at the ELB, our monetary policy review should advance two goals. First, monetary policy should achieve average inflation outcomes of 2 percent over time to re-anchor inflation expectations at our target. Second, we need to expand policy space to buffer the economy from adverse developments at the ELB.

Achieving the Inflation Target

The apparent slippage in trend inflation below our target calls for some adjustments to our monetary policy strategy and communications. In this context and as part of our review, my colleagues and I have been discussing how to better anchor inflation expectations firmly at our objective. In particular, it may be helpful to specify that policy aims to achieve inflation outcomes that average 2 percent over time or over the cycle. Given the persistent shortfall of inflation from its target over recent years, this would imply supporting inflation a bit above 2 percent for some time to compensate for the period of underperformance.

One class of strategies that has been proposed to address this issue are formal “makeup” rules that seek to compensate for past inflation deviations from target. For instance, under price-level targeting, policy seeks to stabilize the price level around a constant growth path that is consistent with the inflation objective.12 Under average inflation targeting, policy seeks to return the average of inflation to the target over some specified period.13

To be successful, formal makeup strategies require that financial market participants, households, and businesses understand in advance and believe, to some degree, that policy will compensate for past misses. I suspect policymakers would find communications to be quite challenging with rigid forms of makeup strategies, because of what have been called time-inconsistency problems. For example, if inflation has been running well below—or above—target for a sustained period, when the time arrives to maintain inflation commensurately above—or below—2 percent for the same amount of time, economic conditions will typically be inconsistent with implementing the promised action. Analysis also suggests it could take many years with a formal average inflation targeting framework to return inflation to target following an ELB episode, although this depends on difficult-to-assess modeling assumptions and the particulars of the strategy.14

Thus, while formal average inflation targeting rules have some attractive properties in theory, they could be challenging to implement in practice. I prefer a more flexible approach that would anchor inflation expectations at 2 percent by achieving inflation outcomes that average 2 percent over time or over the cycle. For instance, following five years when the public has observed inflation outcomes in the range of 1½ to 2 percent, to avoid a decline in expectations, the Committee would target inflation outcomes in a range of, say, 2 to 2½ percent for the subsequent five years to achieve inflation outcomes of 2 percent on average overall. Flexible inflation averaging could bring some of the benefits of a formal average inflation targeting rule, but it would be simpler to communicate. By committing to achieve inflation outcomes that average 2 percent over time, the Committee would make clear in advance that it would accommodate rather than offset modest upward pressures to inflation in what could be described as a process of opportunistic reflation.15
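The five-year example above reduces to simple averaging arithmetic. The following is a toy sketch of that calculation only, with an assumed equal-length makeup window and illustrative numbers; it is not a policy rule.

```python
# Toy sketch of flexible inflation averaging: after a period of below-target
# inflation, aim commensurately above target over an equally long future
# window so the average over the full window returns to 2 percent.
TARGET = 2.0  # percent

def compensating_target(past_inflation):
    """Inflation aim for a future window of the same length as
    past_inflation, so that the overall average equals TARGET."""
    shortfall = TARGET - sum(past_inflation) / len(past_inflation)
    return TARGET + shortfall

past = [1.5, 1.6, 1.7, 1.8, 1.9]        # five years running below target
future_aim = compensating_target(past)  # modest overshoot, here 2.3
n = len(past)
overall = (sum(past) + future_aim * n) / (2 * n)
print(future_aim, overall)  # ten-year average comes back to 2.0
```

The point of the sketch is only that the required overshoot is modest and symmetric: a few tenths of underperformance calls for a few tenths of overshoot, matching the 2 to 2½ percent range in the example above.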

Policy at the ELB

Second, the Committee is examining what monetary policy tools are likely to be effective in providing accommodation when the federal funds rate is at the ELB.16 In my view, the review should make clear that the Committee will actively employ its full toolkit so that the ELB is not an impediment to providing accommodation in the face of significant economic disruptions.

The importance and challenge of providing accommodation when the policy rate reaches the ELB should not be understated. In my own experience on the international response to the financial crisis, I was struck that the ELB proved to be a severe impediment to the provision of policy accommodation initially. Once conventional policy reached the ELB, the long delays as policymakers in nearly every jurisdiction worked to develop consensus and take action on unconventional policy sapped confidence, tightened financial conditions, and weakened recovery. Economic conditions in the euro area and elsewhere suffered for longer than necessary in part because of the lengthy process of building agreement to act decisively with a broader set of tools.

Despite delays and uncertainties, the balance of evidence suggests forward guidance and balance sheet policies were effective in easing financial conditions and providing accommodation following the global financial crisis.17 Accordingly, these tools should remain part of the Committee’s toolkit. However, the quantitative asset purchase policies that were used following the crisis proved to be lumpy both to initiate at the ELB and to calibrate over the course of the recovery. This lumpiness tends to create discontinuities in the provision of accommodation that can be costly. To the extent that the public is uncertain about the conditions that might trigger asset purchases and how long the purchases would be sustained, it undercuts the efficacy of the policy. Similarly, significant frictions associated with the normalization process can arise as the end of the asset purchase program approaches.

For these reasons, I have been interested in exploring approaches that expand the space for targeting interest rates in a more continuous fashion as an extension of our conventional policy space and in a way that reinforces forward guidance on the policy rate.18 In particular, there may be advantages to an approach that caps interest rates on Treasury securities at the short-to-medium range of the maturity spectrum—yield curve caps—in tandem with forward guidance that conditions liftoff from the ELB on employment and inflation outcomes.

To be specific, once the policy rate declines to the ELB, this approach would smoothly move to capping interest rates on the short-to-medium segment of the yield curve. The yield curve ceilings would transmit additional accommodation through the longer rates that are relevant for households and businesses in a manner that is more continuous than quantitative asset purchases. Moreover, if the horizon on the interest rate caps is set so as to reinforce forward guidance on the policy rate, doing so would augment the credibility of the yield curve caps and thereby diminish concerns about an open-ended balance sheet commitment. In addition, once the targeted outcome is achieved, and the caps expire, any securities that were acquired under the program would roll off organically, unwinding the policy smoothly and predictably. This is important, as it could potentially avoid some of the tantrum dynamics that have led to premature steepening at the long end of the yield curve in several jurisdictions.

Forward guidance on the policy rate will also be important in providing accommodation at the ELB. As we saw in the United States at the end of 2015 and again toward the second half of 2016, there tends to be strong pressure to “normalize” or lift off from the ELB preemptively based on historical relationships between inflation and employment. A better alternative would have been to delay liftoff until we had achieved our targets. Indeed, recent research suggests that forward guidance that commits to delay the liftoff from the ELB until full employment and 2 percent inflation have been achieved on a sustained basis—say over the course of a year—could improve performance on our dual-mandate goals.19

To reinforce this commitment, the forward guidance on the policy rate could be implemented in tandem with yield curve caps. For example, as the federal funds rate approaches the ELB, the Committee could commit to refrain from lifting off the ELB until full employment and 2 percent inflation are sustained for a year. Based on its assessment of how long this is likely to take, the Committee would then commit to capping rates out the yield curve for a period consistent with the expected horizon of the outcome-based forward guidance. If the outlook shifts materially, the Committee could reassess how long it will take to get inflation back to 2 percent and adjust policy accordingly. One benefit of this approach is that the forward guidance and the yield curve ceilings would reinforce each other.

The combination of a commitment to condition liftoff on the sustained achievement of our employment and inflation objectives with yield curve caps targeted at the same horizon has the potential to work well in many circumstances. For very severe recessions, such as the financial crisis, such an approach could be augmented with purchases of 10-year Treasury securities to provide further accommodation at the long end of the yield curve. Presumably, the requisite scale of such purchases—when combined with medium-term yield curve ceilings and forward guidance on the policy rate—would be relatively smaller than if the longer-term asset purchases were used alone.

Monetary Policy and Financial Stability

Before closing, it is important to recall another important lesson of the financial crisis: The stability of the financial system is important to the achievement of the statutory goals of full employment and 2 percent inflation. In that regard, the changes in the macroeconomic environment that underlie our monetary policy review may have some implications for financial stability. Historically, when the Phillips curve was steeper, inflation tended to rise as the economy heated up, which prompted the Federal Reserve to raise interest rates. In turn, the interest rate increases would have the effect of tightening financial conditions more broadly. With a flat Phillips curve, inflation does not rise as much as resource utilization tightens, and interest rates are less likely to rise to restrictive levels. The resulting lower-for-longer interest rates, along with sustained high rates of resource utilization, are conducive to increasing risk appetite, which could prompt reach-for-yield behavior and incentives to take on additional debt, leading to financial imbalances as an expansion extends.

To the extent that the combination of a low neutral rate, a flat Phillips curve, and low underlying inflation may lead financial stability risks to become more tightly linked to the business cycle, it would be preferable to use tools other than tightening monetary policy to temper the financial cycle. In particular, active use of macroprudential tools such as the countercyclical buffer is vital to enable monetary policy to stay focused on achieving maximum employment and average inflation of 2 percent on a sustained basis.

Conclusion

The Federal Reserve’s commitment to adapt our monetary policy strategy to changing circumstances has enabled us to support the U.S. economy throughout the expansion, which is now in its 11th year. In light of the decline in the neutral rate, low trend inflation, and low sensitivity of inflation to slack as well as the consequent greater frequency of the policy rate being at the effective lower bound, this is an important time to review our monetary policy strategy, tools, and communications in order to improve the achievement of our statutory goals. I have offered some preliminary thoughts on how we could bolster inflation expectations by achieving inflation outcomes of 2 percent on average over time and, when policy is constrained by the ELB, how we could combine forward guidance on the policy rate with caps on the short-to-medium segment of the yield curve to buffer the economy against adverse developments.


  1. I am grateful to Ivan Vidangos of the Federal Reserve Board for assistance in preparing this text. These remarks represent my own views, which do not necessarily represent those of the Federal Reserve Board or the Federal Open Market Committee. (return to text)
  2. Claudia Sahm shows that a ½ percentage point increase in the three-month moving average of the unemployment rate relative to the previous year’s low is a good real-time recession indicator. See Claudia Sahm (2019), “Direct Stimulus Payments to Individuals” [archived PDF], Policy Proposal, The Hamilton Project at the Brookings Institution (Washington: THP, May 16). (return to text)
  3. Information about the review of monetary policy strategy, tools, and communications is available on the Board’s website. Also see Richard H. Clarida (2019), “The Federal Reserve’s Review of Its Monetary Policy Strategy, Tools, and Communication Practices” [archived PDF], speech delivered at the 2019 U.S. Monetary Policy Forum, sponsored by the Initiative on Global Markets at the University of Chicago Booth School of Business, New York, February 22; and Jerome H. Powell (2019), “Monetary Policy: Normalization and the Road Ahead” [archived PDF], speech delivered at the 2019 SIEPR Economic Summit, Stanford Institute of Economic Policy Research, Stanford, Calif., March 8. (return to text)
  4. See Lael Brainard (2016), “The ‘New Normal’ and What It Means for Monetary Policy” [archived PDF], speech delivered at the Chicago Council on Global Affairs, Chicago, September 12. (return to text)
  5. See Lael Brainard (2017), “Understanding the Disconnect between Employment and Inflation with a Low Neutral Rate” [archived PDF], speech delivered at the Economic Club of New York, September 5; and James H. Stock and Mark W. Watson (2007), “Why Has U.S. Inflation Become Harder to Forecast?” [archived PDF], Journal of Money, Credit and Banking, vol. 39 (s1, February), pp. 3–33. (return to text)
  6. See Lael Brainard (2015), “Normalizing Monetary Policy When the Neutral Interest Rate Is Low” [archived PDF], speech delivered at the Stanford Institute for Economic Policy Research, Stanford, Calif., December 1. (return to text)
  7. The projection materials for the Federal Reserve’s SEP are available on the Board’s website. (return to text)
  8. For example, the Blue Chip Consensus long-run projection for the three-month Treasury bill has declined from 3.6 percent in October 2012 to 2.4 percent in October 2019. See Wolters Kluwer (2019), Blue Chip Economic Indicators, vol. 44 (October 10); and Wolters Kluwer (2012), Blue Chip Economic Indicators, vol. 37 (October 10). (return to text)
  9. See Michael Kiley and John Roberts (2017), “Monetary Policy in a Low Interest Rate World” [archived PDF], Brookings Papers on Economic Activity, Spring, pp. 317–72; Eric Swanson (2018), “The Federal Reserve Is Not Very Constrained by the Lower Bound on Nominal Interest Rates” [archived PDF], NBER Working Paper Series 25123 (Cambridge, Mass.: National Bureau of Economic Research, October); and Hess Chung, Etienne Gagnon, Taisuke Nakata, Matthias Paustian, Bernd Schlusche, James Trevino, Diego Vilán, and Wei Zheng (2019), “Monetary Policy Options at the Effective Lower Bound: Assessing the Federal Reserve’s Current Policy Toolkit” [archived PDF], Finance and Economics Discussion Series 2019-003 (Washington: Board of Governors of the Federal Reserve System, January). (return to text)
  10. The important observation that some consumers and businesses see low inflation as having benefits emerged from listening to a diverse range of perspectives, including representatives of consumer, labor, business, community, and other groups during the Fed Listens events; for details, see this page. (return to text)
  11. See Lael Brainard (2017), “Cross-Border Spillovers of Balance Sheet Normalization” [archived PDF], speech delivered at the National Bureau of Economic Research’s Monetary Economics Summer Institute, Cambridge, Mass., July 13. (return to text)
  12. See, for example, James Bullard (2018), “A Primer on Price Level Targeting in the U.S.” [archived PDF], a presentation before the CFA Society of St. Louis, St. Louis, Mo., January 10. (return to text)
  13. See, for example, Lars Svensson (2019), “Monetary Policy Strategies for the Federal Reserve” [archived PDF], presented at “Conference on Monetary Policy Strategy, Tools and Communication Practices,” sponsored by the Federal Reserve Bank of Chicago, Chicago, June 5. (return to text)
  14. See Board of Governors of the Federal Reserve System (2019), “Minutes of the Federal Open Market Committee, September 17–18, 2019,” press release, October 9; and David Reifschneider and David Wilcox (2019), “Average Inflation Targeting Would Be a Weak Tool for the Fed to Deal with Recession and Chronic Low Inflation” [archived PDF], Policy Brief PB19-16 (Washington: Peterson Institute for International Economics, November). (return to text)
  15. See Janice C. Eberly, James H. Stock, and Jonathan H. Wright (2019), “The Federal Reserve’s Current Framework for Monetary Policy: A Review and Assessment” [archived PDF], paper presented at “Conference on Monetary Policy Strategy, Tools and Communication Practices,” sponsored by the Federal Reserve Bank of Chicago, Chicago, June 4. (return to text)
  16. See Board of Governors of the Federal Reserve System (2019), “Minutes of the Federal Open Market Committee, July 31–August 1, 2018” [archived PDF], press release, August 1; and Board of Governors (2019), “Minutes of the Federal Open Market Committee, October 29–30, 2019” [archived PDF], press release, October 30. (return to text)
  17. For details on purchases of securities by the Federal Reserve, see this page. For a discussion of forward guidance, see this page. See, for example, Simon Gilchrist and Egon Zakrajšek (2013), “The Impact of the Federal Reserve’s Large-Scale Asset Purchase Programs on Corporate Credit Risk,” Journal of Money, Credit and Banking, vol. 45, (s2, December), pp. 29–57; Simon Gilchrist, David López-Salido, and Egon Zakrajšek (2015), “Monetary Policy and Real Borrowing Costs at the Zero Lower Bound,” American Economic Journal: Macroeconomics, vol. 7 (January), pp. 77–109; Jing Cynthia Wu and Fan Dora Xia (2016), “Measuring the Macroeconomic Impact of Monetary Policy at the Zero Lower Bound,” Journal of Money, Credit and Banking, vol. 48 (March–April), pp. 253–91; and Stefania D’Amico and Iryna Kaminska (2019), “Credit Easing versus Quantitative Easing: Evidence from Corporate and Government Bond Purchase Programs” [archived PDF], Bank of England Staff Working Paper Series 825 (London: Bank of England, September). (return to text)
  18. See Board of Governors of the Federal Reserve System (2010), “Strategies for Targeting Interest Rates Out the Yield Curve,” memorandum to the Federal Open Market Committee, October 13, available at this page; and Ben Bernanke (2016), “What Tools Does The Fed Have Left? Part 2: Targeting Longer-Term Interest Rates” [archived PDF], blog post, Brookings Institution, March 24. (return to text)
  19. See Ben Bernanke, Michael Kiley, and John Roberts (2019), “Monetary Policy Strategies for a Low-Rate Environment” [archived PDF], Finance and Economics Discussion Series 2019-009 (Washington: Board of Governors of the Federal Reserve System); and Chung and others, “Monetary Policy Options at the Effective Lower Bound,” in note 9. (return to text)

Essay 116: Reports of Rising Police-Society Conflict in China

Interview with Suzanne Scoggins (November 25, 2019)

China is facing a rising tide of conflict between the nation’s police officers and the public. While protest events receive considerable media attention, lower-profile conflicts between police officers and residents also make their way onto the internet, shaping perceptions of the police. The ubiquity of live events streamed on the internet helps illuminate the nature of state-society conflict in China and the challenges faced by local law enforcement.

Simone McGuinness spoke with Suzanne Scoggins, a fellow with the National Asia Research Program (NARP), about the reports of rising police-society conflict in China. Dr. Scoggins discusses how the Chinese Communist Party has responded to the upsurge, what channels Chinese citizens are utilizing to express their concerns, and what the implications are for the rest of the world.

What is the current state of police-society relations in China?

Reports of police violence have been on the rise, although this does not necessarily mean that violence is increasing. It does, however, mean that the media is more willing to report violence and that more incidents of violence are appearing on social media.

What we can now study is the nature of that violence—some are big events such as riots or attacks against the police, but there are also smaller events. For example, we see reports of passengers on trains who get into arguments with transit police. They may fight because one of the passengers is not in the right seat or is carrying something prohibited. Rather than complying with the officer, the passenger ends up getting into some sort of violent altercation. This kind of violence is typically being captured by cellphone cameras, and sometimes it makes the news.

The nature of the conflict matters. If somebody is on a train and sitting in a seat that they did not pay for, then it is usually obvious to the people reading about or watching the incident that the civilian is at fault. But if it is chengguan (城管, “city administration”) telling an elderly woman to stop selling her food on the street and the chengguan becomes violent, then public perceptions may be very different. It is that second type of violence that can be threatening to the state. The public’s response to the type of conflict can vary considerably.

What are the implications for China as a whole?

Regarding what this means for China, it’s not good for the regime to sustain this kind of conflict between street-level officers or state agents and the public. It lowers people’s trust in the agents of the government, and people may assume that the police cannot enforce public security. There are many state agents who might be involved in a conflict, such as the chengguan, the xiejing (auxiliary officer), or the official police. The type of agent almost doesn’t matter because the uniforms often look similar.

When footage of state agents behaving poorly goes up online, it makes the state a little more vulnerable. Even people who were not at the event might see it on social media or in the news and think, “Oh, this is happening in my community, or in my province, or across the nation.” This violates public expectations about how the police or other state agents should act. People should be able to trust the police and go to them when they have problems.

How has the Chinese government responded to the increase in reported violence?

There is a twofold approach. The first is censorship. When negative videos go up online or when the media reports an incident, the government will go in and take them down. We see this over time: even in the course of my research, some videos that were initially available online became inaccessible because they had been censored. The government removes many different types of content, not only violence. Censors are also interested in removing any misinformation that might spread on social media.

If step one is to take the video or report down, step two is to counteract any negative opinion by using police propaganda. This is also referred to as “public relations,” and the goal is to present a better image of the police. Recently, the Ministry of Public Security put a lot of money and resources into their social media presence. Many police stations have a social media account on WeChat or Weibo (微博, “microblogging”) and aim to present a more positive, friendly image of the police. The ministry also teamed up with CCTV to produce television content. This has been going on for some time, but recently shows have become more sophisticated.

There is one program, for example, called Police Training Camp. It is a reality show where police officers are challenged with various tasks, and the production is very glossy. The ministry also produces other sorts of specials featuring police who are out in the field helping people. It shows the police officers working really long shifts, interacting positively with the public, and really making a difference in people’s lives. In this way, the government is counteracting negative opinions about police violence or misconduct.

In general, I will say that it is difficult for people in any society to get justice in cases involving police officers because of the way legal systems are structured and the power police hold in local government politics. In China, one of the things people are doing beyond reaching out to local governments or pursuing mediation is calling an official hotline.

This is a direct channel to the Ministry of Public Security, and all these calls are reviewed. There is not a whole lot that citizens can do about specific corruption claims. But if somebody has a particular goal, then the hotline is slightly more effective because it allows citizens to alert the ministry. However, many people do not know about the hotline, so the ministry is trying to increase awareness and also help staff the call center so that it can more effectively field calls.

As for how much relief people feel when they use these channels, this depends on what their goal is. If the goal is to get somebody fired, then the hotline may not work. But if someone is looking to air their grievances, then it may be helpful.

What are the implications of increased police-society conflict in China for the rest of the world? What can the United States or other countries do to improve the situation?

These are really sticky issues that are difficult to solve. When discussing police-society conflict, it is important to step back and think about who the police are—the enforcement agents of the state. So by their very nature, there will be conflict between police and society, and that is true in every country. In China, it really depends on where and what type of police climate we are talking about.

Xinjiang, for instance, has a very different police climate than other regions in China. There is a different type of policing and police presence. Chinese leaders certainly do not want any international intervention in Xinjiang. They see this as an internal issue. While some governments in Europe and the United States might want to intervene, that is going to be a nonstarter for China.

As for police problems more generally, I would say that if China is able to reduce some of the police-society conflict in other areas of the country, then this is good for the international community because it leads to a more stable government. We also know that there is a fair amount of international cooperation between police groups. China has police liaisons that travel and learn about practices and technology in different countries. The police in these groups attend conferences and take delegates abroad.

There are also police delegations from other nations that go to China to learn about and exchange best practices. But that work will not necessarily address the underlying issues that we see in a lot of the stations scattered throughout China outside the big cities like Beijing (北京) or Shanghai (上海). Those are the areas with insufficient training or manpower. Those issues must be addressed internally by the Ministry of Public Security.

How is the Chinese government improving its policing capabilities?

Recently, the ministry has tried to overcome manpower and other ground-level policing problems by using surveillance cameras and artificial intelligence. Networks of cameras are appearing all over the country, and police are using body cameras for recording interactions with the public. This type of surveillance is not just in large cities but also in smaller ones. Of course, it is not enough to just put the cameras up—you also need to train officers to use that technology properly. This process takes time, but it is one way that the ministry hopes to overcome on-the-ground problems such as the low number of police per capita.

How might the Hong Kong protests influence or change policing tactics in China?

The situation in Hong Kong is unlikely to change policing tactics in China, which are generally more aggressive in controlling protests than most of what we have seen thus far in Hong Kong. It is more likely that things will go in the other direction, with mainland tactics being used in Hong Kong, especially if we continue to observe increased pressure to bring the protests under control.

Suzanne Scoggins is an Assistant Professor of Political Science at Clark University. She is also a 2019 National Asia Research Program (NARP) Fellow. Dr. Scoggins holds a Ph.D. in Political Science from the University of California, Berkeley, and her book manuscript Policing in the Shadow of Protest is forthcoming from Cornell University Press. Her research has appeared in Comparative Politics, The China Quarterly, Asian Survey, PS: Political Science and Politics, and the China Law and Society Review.

This interview was conducted by Simone McGuinness, the Public Affairs Intern at NBR.

Essay 114: FRBSF Economic Letter: Involuntary Part-Time Work a Decade after the Recession

by Marianna Kudlyak

Involuntary part-time employment reached unusually high levels during the last recession and declined only slowly afterward. The speed of the decline was limited because of a combination of two factors: the number of people working part-time due to slack business conditions was declining, and the number of those who could find only part-time work continued to increase until 2013. Involuntary part-time employment recently returned to its pre-recession level but remains slightly elevated relative to historically low unemployment, likely due to structural factors.

Read the full article at the Federal Reserve Bank of San Francisco. [Archived PDF]

Index of Economic Letters.

Essay 112: The Urban Institute’s State and Local Finance Initiative

The Urban Institute recently published its quarterly State Tax and Economic Review, which examines state tax revenue trends and the underlying economic factors.

They find that most states ended the year with surpluses. Yet states worry that the stimulus effect of the Trump tax cut is disappearing, forecasting weaker growth in income tax revenues for the fiscal year 2020.

[Archived PDF]

These analyses are based on data gathered directly from individual states. This collection is the only timely and accurate data source covering state tax revenue and fiscal performance for baselines and comparisons.

Abstract

State government tax revenues rebounded in the first quarter of 2019 after declines in the fourth quarter of 2018. However, year-over-year growth was substantially weaker in the first quarter of 2019 than in the final quarter of 2017 and the first three quarters of 2018. Most of the recent weakness was attributable to personal income tax declines.

State personal income taxes declined for the second consecutive quarter, reflecting a spike in state income tax payments in December 2017 and January 2018 in response to changes made by the Tax Cuts and Jobs Act (TCJA). However, preliminary data for the second quarter of 2019 indicate double-digit growth in state personal income tax revenues, mostly because of higher final payments and delayed estimated payments filed in April. The surge in personal income tax revenues made up for earlier shortfalls in most states and put the revenues back on track for the states to close the budget books for fiscal year 2019 without shortfalls.

[Archived PDF]

If you are interested in accessing this data, please visit the Urban Institute to subscribe to its data services.

Essay 111: CMR Magazine on Article 6 Pilots, Carbon Pricing in Latin America, and More

Carbon Pricing in Latin America

Is Article 6 on the home stretch? In this issue [PDF] of the Carbon Mechanisms Review, we look at success factors for the negotiations, and analyze what is needed to make the Article 6 rulebook text ready for enabling up-scaled mitigation action. With regard to the ‘Latin American COP’ (taking place in Madrid), we cover the emerging carbon pricing landscape in the region while our cover feature reports on and analyses the current Article 6 pilot initiatives. An analysis of the latest CORSIA developments rounds off the issue.

[Archived PDF]

Essay 108: Early View Alert: Water Resources Research

from the American Geophysical Union’s journals:

Research Articles

Modeling the Snow Depth Variability with a High-Resolution Lidar Data Set and Nonlinear Terrain Dependency

by T. Skaugen & K. Melvold

Summary: Using airborne laser scanning, 400 million snow depth measurements were collected at Hardangervidda in Southern Norway. The volume of data has made possible in-depth studies of the spatial distribution of snow and its interaction with terrain and vegetation. We find that terrain variability, expressed by the square slope, the average amount of snow, and whether the terrain is vegetated, largely explains the variation in snow depth. With this information it is possible to develop equations that predict snow depth variability for use in environmental models, which in turn are used for important tasks such as flood forecasting and hydropower planning. One major advantage is that these equations can be determined from data that are, in principle, available everywhere, provided there exists a detailed digital model of the terrain.

[Archived PDF article]

Phosphorus Transport in Intensively Managed Watersheds

by Christine L. Dolph, Evelyn Boardman, Mohammad Danesh-Yazdi, Jacques C. Finlay, Amy T. Hansen, Anna C. Baker & Brent Dalzell

Abstract: When phosphorus from farm fertilizer, eroded soil, and septic waste enters our water, it leads to problems like toxic algae blooms, fish kills, and contaminated drinking supplies. In this study, we examine how phosphorus travels through streams and rivers of farmed areas. In the past, soil lost from farm fields was considered the biggest contributor to phosphorus pollution in agricultural areas, but our study shows that phosphorus originating from fertilizer stores in the soil and from crop residue, as well as from soil eroded from sensitive ravines and bluffs, contributes strongly to the total amount of phosphorus pollution in agricultural rivers. We also found that most phosphorus leaves farmed watersheds during the very highest river flows. Increased frequency of large storms due to climate chaos will therefore likely worsen water quality in areas that are heavily loaded with phosphorus from farm fertilizers. Protecting water in agricultural watersheds will require knowledge of the local landscape along with strategies to address (1) drivers of climate chaos, (2) reduction in the highest river flows, and (3) ongoing inputs and legacy stores of phosphorus that are readily transported across land and water.

[Archived PDF of article]

Detecting the State of the Climate System via Artificial Intelligence to Improve Seasonal Forecasts and Inform Reservoir Operations

by Matteo Giuliani, Marta Zaniolo, Andrea Castelletti, Guido Davoli & Paul Block

Abstract: Increasingly variable hydrologic regimes combined with more frequent and intense extreme events are challenging water systems management worldwide. These trends emphasize the need for accurate medium- to long-term predictions to prompt anticipatory operations in a timely manner. Although in some locations global climate oscillations, particularly the El Niño-Southern Oscillation (ENSO), may contribute to extending forecast lead times, in other regions there is no consensus on how ENSO can be detected and used, as local conditions are also influenced by other concurrent climate signals. In this work, we introduce the Climate State Intelligence framework to capture the state of multiple global climate signals via artificial intelligence and improve seasonal forecasts. These forecasts are used as additional inputs for informing water system operations, and their value is quantified as the corresponding gain in system performance. We apply the framework to the Lake Como basin, a regulated lake in northern Italy operated mainly for flood control and irrigation supply. Numerical results show the existence of notable teleconnection patterns dependent on both ENSO and the North Atlantic Oscillation over the Alpine region, which contribute to generating skillful seasonal precipitation and hydrologic forecasts. Using this information to condition the lake operations produces an average 44% improvement in system performance with respect to a baseline solution not informed by any forecast, and this gain increases further during extreme drought episodes. Our results also suggest that observed preseason sea surface temperature anomalies appear more valuable than hydrologic-based seasonal forecasts, producing an average 59% improvement in system performance.

[Archived PDF of article]

Landscape Water Storage and Subsurface Correlation from Satellite Surface Soil Moisture and Precipitation Observations

by Daniel J. Short Gianotti, Guido D. Salvucci, Ruzbeh Akbar, Kaighin A. McColl, Richard Cuenca & Dara Entekhabi

Abstract: Surface soil moisture measurements are typically correlated to some degree with changes in subsurface soil moisture. We calculate a hydrologic length scale, λ, which represents (1) the mean-state estimator of total column water changes from surface observations, (2) an e-folding length scale for subsurface soil moisture profile covariance fall-off, and (3) the best second-moment mass-conserving surface layer thickness for a simple bucket model, defined by the data streams of satellite soil moisture and precipitation retrievals. Calculations are simple, based on three variables: the autocorrelation and variance of surface soil moisture and the variance of the net flux into the column (precipitation minus estimated losses), which can be estimated directly from the soil moisture and precipitation time series. We develop a method to calculate the lag-one autocorrelation for irregularly observed time series and show global surface soil moisture autocorrelation. λ is driven in part by local hydroclimate conditions and is generally larger than the 50-mm nominal radiometric length scale for the soil moisture retrievals, suggesting broad subsurface correlation due to moisture drainage. In all but the most arid regions, radiometric soil moisture retrievals provide more information about ecosystem-relevant water fluxes than satellite radiometers can explicitly “see”; lower-frequency radiometers are expected to provide still more statistical information about subsurface water dynamics.

[Archived PDF of article]
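The abstract above mentions a method for computing lag-one autocorrelation from an irregularly observed time series. One simple approach is to pair samples whose spacing is close to the nominal lag and correlate the paired values. The sketch below is a minimal illustration of that pairing idea, not the estimator developed in the paper; the function and parameter names are ours.

```python
import numpy as np

def lag_one_autocorr(times, values, lag=1.0, tol=0.25):
    """Rough lag-one autocorrelation for an irregularly sampled series.

    Pairs observations whose spacing is within `tol` of the nominal `lag`
    and computes the Pearson correlation of the paired values. Returns
    NaN when too few pairs are found.
    """
    times = np.asarray(times, dtype=float)
    values = np.asarray(values, dtype=float)
    order = np.argsort(times)
    times, values = times[order], values[order]

    x0, x1 = [], []
    j = 0
    for i in range(len(times)):
        # advance j to the first sample at least (lag - tol) ahead of sample i
        while j < len(times) and times[j] < times[i] + lag - tol:
            j += 1
        if j < len(times) and abs(times[j] - times[i] - lag) <= tol:
            x0.append(values[i])
            x1.append(values[j])
    if len(x0) < 3:
        return float("nan")
    return float(np.corrcoef(x0, x1)[0, 1])
```

For a regularly sampled series the pairing reduces to the ordinary lag-one pairs, so the estimator behaves as expected in the limiting case.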

Process-Guided Deep Learning Predictions of Lake Water Temperature

by Jordan S. Read, Xiaowei Jia, Jared Willard, Alison P. Appling, Jacob A. Zwart, Samantha K. Oliver, Anuj Karpatne, Gretchen J. A. Hansen, Paul C. Hanson, William Watkins, Michael Steinbach & Vipin Kumar

Abstract: The rapid growth of data in water resources has created new opportunities to accelerate knowledge discovery with the use of advanced deep learning tools. Hybrid models that integrate theory with state-of-the-art empirical techniques have the potential to improve predictions while remaining true to physical laws. This paper evaluates the Process-Guided Deep Learning (PGDL) hybrid modeling framework with a use-case of predicting depth-specific lake water temperatures. The PGDL model has three primary components: a deep learning model with temporal awareness (long short-term memory recurrence), theory-based feedback (model penalties for violating conservation of energy), and model pre-training to initialize the network with synthetic data (water temperature predictions from a process-based model). In situ water temperatures were used to train the PGDL model, a deep learning (DL) model, and a process-based (PB) model. Model performance was evaluated in various conditions, including when training data were sparse and when predictions were made outside of the range in the training data set. The PGDL model performance (as measured by root-mean-square error (RMSE)) was superior to DL and PB for two detailed study lakes, but only when pretraining data included greater variability than the training period. The PGDL model also performed well when extended to 68 lakes, with a median RMSE of 1.65 °C during the test period (DL: 1.78 °C, PB: 2.03 °C; in a small number of lakes PB or DL models were more accurate). This case study demonstrates that integrating scientific knowledge into deep learning tools shows promise for improving predictions of many important environmental variables.

[Archived PDF of article]
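The "theory-based feedback" component of PGDL amounts to adding a penalty for physics violations to the usual supervised loss. A minimal sketch of that idea follows; the names, weighting, and the stand-in `energy_residual` input are illustrative, not the paper's actual energy-balance computation (which is derived from the predicted temperature profile).

```python
import numpy as np

def pgdl_loss(y_pred, y_obs, energy_residual, physics_weight=0.1):
    """Toy process-guided loss: supervised error plus a physics penalty.

    `energy_residual` stands in for the energy-balance mismatch implied
    by the predicted temperatures (computed elsewhere); minimizing the
    combined loss pushes the network toward physically consistent output.
    """
    mse = np.mean((y_pred - y_obs) ** 2)           # fit to observed temperatures
    physics = np.mean(np.square(energy_residual))  # penalize conservation violations
    return float(mse + physics_weight * physics)
```

In training, the physics term penalizes predictions that fit the data but violate conservation of energy, which is what allows the hybrid model to generalize beyond the range of the training set.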

Adjustment of Radar-Gauge Rainfall Discrepancy Due to Raindrop Drift and Evaporation Using the Weather Research and Forecasting Model and Dual-Polarization Radar

by Qiang Dai, Qiqi Yang, Dawei Han, Miguel A. Rico-Ramirez & Shuliang Zhang

Abstract: Radar-gauge rainfall discrepancies are usually attributed to errors in radar rainfall measurements, ignoring the fact that radar observes rain aloft while a rain gauge measures rainfall on the ground. Radar rainfall estimation assumes that raindrops observed aloft fall vertically to the ground without changing in size. This premise obviously does not hold, because raindrop location changes due to wind drift and raindrop size changes due to evaporation; yet both effects are usually ignored. This study proposes a fully formulated scheme to numerically simulate both raindrop drift and evaporation in the air and reduce the uncertainties of radar rainfall estimation. The Weather Research and Forecasting model is used to simulate high-resolution three-dimensional atmospheric fields. A dual-polarization radar retrieves the raindrop size distribution for each radar pixel. Three schemes are designed and implemented using the Hameldon Hill radar in Lancashire, England. The first considers only raindrop drift, the second considers only evaporation, and the last considers both aspects. Results show that wind advection can cause a large drift for small raindrops. Considerable loss of rainfall is observed due to raindrop evaporation. Overall, the three schemes improve the radar-gauge correlation by 3.2%, 2.9%, and 3.8% and reduce their discrepancy by 17.9%, 8.6%, and 21.7%, respectively, over eight selected events. This study contributes to the improvement of quantitative precipitation estimation from radar polarimetry and allows a better understanding of precipitation processes.

[Archived PDF of article]
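Why wind drift matters more for small drops can be seen with a back-of-the-envelope calculation: a drop falling from beam height at its terminal velocity is advected horizontally for the whole fall. The sketch below assumes a constant wind and a power-law terminal-velocity relation of the form v = a·D^b; the constants and names are illustrative, not taken from the paper.

```python
def raindrop_drift(height_m, wind_ms, diameter_mm, a=3.78, b=0.67):
    """Horizontal drift of a raindrop released at radar beam height.

    Assumes a constant horizontal wind and a power-law terminal velocity
    v = a * D**b (illustrative constants). Small drops fall slowly, so
    they drift much farther downwind than large ones.
    """
    v_term = a * diameter_mm ** b   # terminal fall speed, m/s
    fall_time = height_m / v_term   # seconds to reach the ground
    return wind_ms * fall_time      # horizontal displacement, m
```

With a 10 m/s wind and a 1 km beam height, drizzle-sized drops can land kilometers away from the radar pixel they were observed in, which is the mislocation the paper's drift scheme corrects.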

The Role of Collapsed Bank Soil on Tidal Channel Evolution: A Process-Based Model Involving Bank Collapse and Sediment Dynamics

by K. Zhao, Z. Gong, F. Xu, Z. Zhou, C. K. Zhang, G. M. E. Perillo & G. Coco

Abstract: We develop a process-based model to simulate the geomorphodynamic evolution of tidal channels, considering hydrodynamics, flow-induced bank erosion, gravity-induced bank collapse, and sediment dynamics. A stress-deformation analysis and the Mohr-Coulomb criterion, calibrated through previous laboratory experiments, are included in a model simulating bank collapse. Results show that collapsed bank soil plays a primary role in the dynamics of bank retreat. For bank collapse with small bank height, tensile failure in the middle of the bank (Stage I), tensile failure on the bank top (Stage II), and sectional cracking from bank top to the toe (Stage III) are present sequentially before bank collapse occurs. A significant linear relation is observed between bank height and the contribution of bank collapse to bank retreat. In contrast to flow-induced bank erosion, bank collapse prevents further widening since the collapsed bank soil protects the bank from direct bank erosion. The bank profile is linear or slightly convex, and the planimetric shape of tidal channels (gradually decreasing in width landward) is similar when approaching equilibrium, regardless of the consideration of bank erosion and collapse. Moreover, the simulated width-to-depth ratio in all runs is comparable with observations from the Venice Lagoon. This indicates that the equilibrium configuration of tidal channels depends on hydrodynamic conditions and sediment properties, while bank erosion and collapse greatly affect the transient behavior (before equilibrium) of the tidal channels. Overall, this contribution highlights the importance of collapsed bank soil in investigating tidal channel morphodynamics using a combined perspective of geotechnics and soil mechanics.

[Archived PDF of article]

A Physically Based Method for Soil Evaporation Estimation by Revisiting the Soil Drying Process

by Yunquan Wang, Oliver Merlin, Gaofeng Zhu & Kun Zhang

Abstract: While numerous models exist for soil evaporation estimation, they are more or less empirically based, either in the model structure or in the determination of introduced parameters. The main difficulty lies in representing the water stress factor, which is usually thought to be limited by capillarity-supported water supply or by vapor diffusion flux. Recent progress in understanding soil hydraulic properties, however, has shown that film flow, which is often neglected, is the dominant process under low moisture conditions. By including the impact of film flow, a reexamination of the typical evaporation process found that this usually neglected film flow might be the dominant process supporting Stage II evaporation (i.e., the fast falling rate stage), besides the generally accepted capillary flow-supported Stage I evaporation and the vapor diffusion-controlled Stage III evaporation. A physically based model for estimating the evaporation rate was then developed by parameterizing the Buckingham-Darcy law. Interestingly, the empirical Bucket model was found to be a specific form of the proposed model. The proposed model requires the in-equilibrium relative humidity as the sole input for representing water stress and introduces no adjustable parameters in relation to soil texture. The impact of vapor diffusion was also discussed. Model testing with laboratory data yielded excellent agreement with observations for both thin and thick soil column evaporation experiments. Model evaluation at 15 field sites generally showed close agreement with observations, with a great improvement in the lower range of evaporation rates in comparison with the widely applied Priestley-Taylor Jet Propulsion Laboratory model.

[Archived PDF of article]

Floodplain Land Cover and Flow Hydrodynamic Control of Overbank Sedimentation in Compound Channel Flows

by Carmelo Juez, C. Schärer, H. Jenny, A. J. Schleiss & M. J. Franca

Abstract: Overbank sedimentation is predominantly due to fine sediments transported under suspension that become trapped and settle in floodplains when high-flow conditions occur in rivers. In a compound channel, the processes of exchanging water and fine sediments between the main channel and floodplains regulate the geomorphological evolution and are crucial for the maintenance of the ecosystem functions of the floodplains. These hydrodynamic and morphodynamic processes depend on variables such as the flow-depth ratio between the water depth in the main channel and the water depth in the floodplain, the width ratio between the width of the main channel and the width of the floodplain, and the floodplain land cover characterized by the type of roughness. This paper examines, by means of laboratory experiments, how these variables are interlinked and how the deposition of sediments in the compound channel is jointly determined by them. The combination of these compound channel characteristics modulates the production of large turbulent vortical structures with vertical axes in the mixing interface. Such vortical structures determine the water mass exchange between the main channel and the floodplain, conditioning in turn the transport of sediment particles conveyed in the water, and, therefore, the resulting overbank sedimentation. The existence and pattern of sedimentation are conditioned by both the hydrodynamic variables (the flow-depth ratio and the width ratio) and the floodplain land cover simulated in terms of smooth walls, meadow-type roughness, sparse-wood-type roughness, and dense-wood-type roughness.

[Archived PDF of article]

Identifying Actionable Compromises: Navigating Multi-city Robustness Conflicts to Discover Cooperative Safe Operating Spaces for Regional Water Supply Portfolios

by D. F. Gold, P. M. Reed, B. C. Trindade & G. W. Characklis

Summary: Cooperation among neighboring urban water utilities can help water managers face challenges stemming from climate change and population growth. Water utilities can cooperate by coordinating water transfers and water restrictions in times of water scarcity (drought) so that water is provided to areas that need it most. In order to successfully implement these policies, however, cooperative partners must find a compromise that is acceptable to all regional actors, a task complicated by asymmetries in resources and risks often present in regional systems. The possibility of deviations from agreed upon actions is another complicating factor that has not been addressed in the water resources literature. Our study focuses on four urban water utilities in the Research Triangle region of North Carolina that are investigating cooperative drought mitigation strategies. We contribute a framework that uses simulation models, optimization algorithms, and statistical tools to aid cooperating partners in finding acceptable compromises that are tolerant of modest deviations from planned actions. Our results can be used by regional utilities to avoid or alleviate potential planning conflicts and are broadly applicable to urban regional water supply planning across the globe.

[Archived PDF of article]

Detecting Changes in River Flow Caused by Wildfires, Storms, Urbanization, Regulation, and Climate across Sweden

by Berit Arheimer & Göran Lindström

Abstract: Changes in river flow may arise from shifts in land cover, constructions in the river channel, and climatic change, but currently there is a lack of understanding of the relative importance of these drivers. Therefore, we collected gauged river flow time series from 1961 to 2018 from across Sweden for 34 disturbed catchments to quantify how the various types of disturbances have affected river flow. We used trend analysis and the differences between observations and hydrological modeling to explore the effects on river flow from (1) land cover changes from wildfires, storms, and urbanization; (2) dam constructions with regulations for hydropower production; and (3) climate-change impact in otherwise undisturbed catchments. A mini model ensemble, consisting of three versions of the S-HYPE model, was used, and the three models gave similar results. We searched for changes in annual and daily stream flow, seasonal flow regime, and flow duration curves. The results show that regulation of river flow has the largest impact, reducing spring floods by up to 100% and increasing winter flow by several orders of magnitude, with substantial effects transmitted far downstream. Climate change altered total river flow by up to 20%. Tree removal by wildfires and storms has minor impacts at medium and large scales. Urbanization, by contrast, increased high flows by 20%, even at medium scales. This study emphasizes the benefits of combining observed time series with numerical modeling to exclude the effect of varying weather conditions when quantifying the effects of various drivers on long-term streamflow shifts.

[Archived PDF of article]
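The core trick described above, differencing gauged flow against a simulation of the undisturbed catchment to strip out weather-driven variability, can be sketched as a simple trend on the residuals. The authors' actual trend tests and the S-HYPE ensemble are more involved; the function below is our illustrative reduction to a least-squares slope.

```python
import numpy as np

def disturbance_trend(years, observed, simulated):
    """Least-squares slope of observed-minus-simulated flow (units/yr).

    Because the simulation responds to the same weather forcing as the
    gauge, a trend remaining in the residuals points at non-climatic
    drivers such as regulation or land-cover change.
    """
    years = np.asarray(years, dtype=float)
    resid = np.asarray(observed, dtype=float) - np.asarray(simulated, dtype=float)
    slope, _intercept = np.polyfit(years, resid, 1)
    return float(slope)
```

A positive slope in the residuals of a recently urbanized catchment, for example, would be consistent with the 20% increase in high flows the study reports.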

Assessing the Feasibility of Satellite-Based Thresholds for Hydrologically Driven Landsliding

by Matthew A. Thomas, Brian D. Collins & Benjamin B. Mirus

Summary: Soil wetness and rainfall contribute to landslides across the world. Using soil moisture sensors and rain gauges, these environmental conditions have been monitored at numerous points across the Earth’s surface to define threshold conditions, above which landsliding should be expected for a localized area. Satellite-based technologies also deliver estimates of soil wetness and rainfall, potentially offering an approach to develop thresholds as part of landslide warning systems over larger spatial scales. To evaluate the potential for using satellite-based measurements for landslide warning, we compare the accuracy of landslide thresholds defined with ground- versus satellite-based soil wetness and rainfall information. We find that the satellite-based data over-predict soil wetness during the time of year when landslides are most likely to occur, resulting in thresholds that also over-predict the potential for landslides relative to thresholds informed by direct measurements on the ground. Our results encourage the installation of more ground-based monitoring stations in landslide-prone settings and the cautious use of satellite-based data when more direct measurements are not available.

[Archived PDF of article]

Modeling the Translocation and Transformation of Chemicals in the Soil-Plant Continuum: A Dynamic Plant Uptake Module for the HYDRUS Model

by Giuseppe Brunetti, Radka Kodešová & Jiří Šimůnek

Abstract: Food contamination is responsible for thousands of deaths worldwide every year. Plants represent the most common pathway for chemicals into the human and animal food chain. Although existing dynamic plant uptake models for chemicals are crucial for the development of reliable mitigation strategies for food pollution, they nevertheless simplify the description of physicochemical processes in soil and plants, mass transfer processes between soil and plants and within plants, and transformation in plants. To fill this scientific gap, we couple a widely used hydrological model (HYDRUS) with a multi-compartment dynamic plant uptake model, which accounts for differentiated multiple metabolization pathways in plant tissues. The developed model is validated first theoretically and then experimentally against measured data from an experiment on the translocation and transformation of carbamazepine in three vegetables. The analysis is further enriched by performing a global sensitivity analysis on the soil-plant model to identify factors driving the compound’s accumulation in plants’ shoots, as well as to elucidate the role and the importance of soil hydraulic properties in the plant uptake process. Results of the multilevel numerical analysis emphasize the model’s flexibility and demonstrate its ability to accurately reproduce physicochemical processes involved in the dynamic plant uptake of chemicals from contaminated soils.

[Archived PDF of article]

Physical Controls on Salmon Redd Site Selection in Restored Reaches of a Regulated, Gravel-Bed River

by Lee R. Harrison, Erin Bray, Brandon Overstreet, Carl J. Legleiter, Rocko A. Brown, Joseph E. Merz, Rosealea M. Bond, Colin L. Nicol & Thomas Dunne

Abstract: Large-scale river restoration programs have emerged recently as a tool for improving spawning habitat for native salmonids in highly altered river ecosystems. Few studies have quantified the extent to which restored habitat is utilized by salmonids, which habitat features influence redd site selection, or the persistence of restored habitat over time. We investigated fall-run Chinook salmon spawning site utilization and measured and modeled corresponding habitat characteristics in two restored reaches: a reach of channel and floodplain enhancement completed in 2013 and a reconfigured channel and floodplain constructed in 2002. Redd surveys demonstrated that both restoration projects supported a high density of salmon redds, 3 and 14 years following restoration. Salmon redds were constructed in coarse gravel substrates located in areas of high sediment mobility, as determined by measurements of gravel friction angles and a grain entrainment model. Salmon redds were located near transitions between pool-riffle bedforms in regions of high predicted hyporheic flows. Habitat quality (quantified as a function of stream hydraulics) and hyporheic flow were both strong predictors of redd occurrence, though the relative roles of these variables differed between sites. Our findings indicate that physical controls on redd site selection in restored channels were similar to those reported for natural channels elsewhere. Our results further highlight that in addition to traditional habitat criteria (e.g., water depth, velocity, and substrate size), quantifying sediment texture and mobility, as well as intragravel flow, provides a more complete understanding of the ecological benefits provided by river restoration projects.

[Archived PDF of article]

Mountain-Block Recharge: A Review of Current Understanding

by Katherine H. Markovich, Andrew H. Manning, Laura E. Condon & Jennifer C. McIntosh

Abstract: Mountain-block recharge (MBR) is the subsurface inflow of groundwater to lowland aquifers from adjacent mountains. MBR can be a major component of recharge but remains difficult to characterize and quantify due to limited hydrogeologic, climatic, and other data in the mountain block and at the mountain front. The number of MBR-related studies has increased dramatically in the 15 years since the last review of the topic was conducted by Wilson and Guan (2004), generating important advancements. We review this recent body of literature, summarize current understanding of factors controlling MBR, and provide recommendations for future research priorities. Prior to 2004, most MBR studies were performed in the southwestern United States. Since then, numerous studies have detected and quantified MBR in basins around the world, typically estimating MBR to be 5–50% of basin-fill aquifer recharge. Theoretical studies using generic numerical modeling domains have revealed fundamental hydrogeologic and topographic controls on the amount of MBR and where it originates within the mountain block. Several mountain-focused hydrogeologic studies have confirmed the widespread existence of mountain bedrock aquifers hosting considerable groundwater flow and, in some cases, identified the occurrence of interbasin flow leaving headwater catchments in the subsurface—both of which are required for MBR to occur. Future MBR research should focus on the collection of high-priority data (e.g., subsurface data near the mountain front and within the mountain block) and the development of sophisticated coupled models calibrated to multiple data types to best constrain MBR and predict how it may change in response to climate warming.

[Archived PDF of article]

An Adjoint Sensitivity Model for Steady-State Sequentially Coupled Radionuclide Transport in Porous Media

by Mohamed Hayek, Banda S. RamaRao & Marsh Lavenue

Abstract: This work presents an efficient mathematical/numerical model to compute the sensitivity coefficients of a predefined performance measure to model parameters for one-dimensional steady-state sequentially coupled radionuclide transport in a finite heterogeneous porous medium. The model is based on the adjoint sensitivity approach that offers an elegant and computationally efficient alternative way to compute the sensitivity coefficients. The transport parameters include the radionuclide retardation factors due to sorption, the Darcy velocity, and the effective diffusion/dispersion coefficients. Both continuous and discrete adjoint approaches are considered. The partial differential equations associated with the adjoint system are derived based on the adjoint state theory for coupled problems. Physical interpretations of the adjoint states are given in analogy to results obtained in the theory of groundwater flow. For the homogeneous case, analytical solutions for primary and adjoint systems are derived and presented in closed form. Numerically calculated solutions are compared to the analytical results and show excellent agreement. Insights from sensitivity analysis are discussed to get a better understanding of the values of sensitivity coefficients. The sensitivity coefficients are also computed numerically by finite differences. The numerical sensitivity coefficients successfully reproduce the analytically derived sensitivities based on adjoint states. A derivative-based global sensitivity method coupled with the adjoint state method is presented and applied to a real field case represented by a site currently being considered for underground nuclear storage in Northern Switzerland, “Zürich Nordost,” to demonstrate the proposed method. The results show the advantage of the adjoint state method compared to other methods in terms of computational effort.

[Archived PDF of article]
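As a generic, hedged sketch of the adjoint idea (illustrative only, not the paper's exact sequentially coupled formulation), consider a single-species steady-state advection-dispersion-decay equation with a performance measure P:

```latex
% Forward problem: steady 1-D transport of concentration c(x),
% with velocity v, dispersion D, decay lambda, retardation R, source q:
\[
  v\,\frac{dc}{dx} \;-\; D\,\frac{d^{2}c}{dx^{2}} \;+\; \lambda R\,c \;=\; q(x),
  \qquad
  P \;=\; \int_{0}^{L} g\big(c(x)\big)\,dx .
\]
% Adjoint problem for the adjoint state psi(x): advection is reversed,
% dispersion and decay terms keep their form, and the forcing is dg/dc:
\[
  -\,v\,\frac{d\psi}{dx} \;-\; D\,\frac{d^{2}\psi}{dx^{2}} \;+\; \lambda R\,\psi
  \;=\; \frac{\partial g}{\partial c} .
\]
% A single adjoint solve then yields the sensitivity of P to any
% parameter p (e.g., v, D, or R) via the forward-equation residual:
\[
  \frac{dP}{dp} \;=\; \int_{0}^{L} \psi(x)\,
  \frac{\partial}{\partial p}
  \Big( q \;-\; v\,\tfrac{dc}{dx} \;+\; D\,\tfrac{d^{2}c}{dx^{2}} \;-\; \lambda R\,c \Big)\,dx .
\]
```

This is the source of the computational efficiency the abstract refers to: the cost is one forward plus one adjoint solve, independent of the number of parameters, whereas finite differences require a forward solve per parameter.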

Hydraulic Reconstruction of the 1818 Giétro Glacial Lake Outburst Flood

by C. Ancey, E. Bardou, M. Funk, M. Huss, M. A. Werder & T. Trewhela

Summary: Every year, natural and man-made dams fail and cause flooding. For public authorities, estimating the risk posed by dams is essential to good risk management. Efficient computational tools are required for analyzing flood risk. Testing these tools is an important step toward ensuring their reliability and performance. Knowledge of major historical floods makes it possible, in principle, to benchmark models, but because historical data are often incomplete and fraught with potential inaccuracies, validation is seldom satisfactory. Here we present one of the few major historical floods for which information on flood initiation and propagation is available and detailed: the Giétro flood. This flood occurred in June 1818 and devastated the Drance Valley in Switzerland. In the spring of that year, ice avalanches blocked the valley floor and formed a glacial lake, whose volume is today estimated at 25×10⁶ m³. The local authorities initiated protection works: A tunnel was drilled through the ice dam, and about half of the stored water volume was drained in 2.5 days. On 16 June 1818, the dam failed suddenly because of significant erosion at its base; this caused a major flood. This paper presents a numerical model for estimating flow rates, velocities, and depths during the dam drainage and flood flow phases. The numerical results agree well with historical data. The flood reconstruction shows that relatively simple models can be used to estimate the effects of a major flood with good accuracy.

[Archived PDF of article]
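The summary's figures permit a quick back-of-the-envelope check of the mean drainage rate through the tunnel (an average only; instantaneous rates certainly varied):

```python
# Back-of-the-envelope average tunnel discharge during the controlled
# drainage: about half of the 25e6 m^3 lake volume left in 2.5 days.
lake_volume_m3 = 25e6
drained_m3 = lake_volume_m3 / 2
duration_s = 2.5 * 86400          # 2.5 days in seconds
mean_discharge = drained_m3 / duration_s
print(round(mean_discharge, 1))   # roughly 58 m^3/s on average
```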

The Representation of Hydrological Dynamical Systems Using Extended Petri Nets (EPN)

by Marialaura Bancheri, Francesco Serafin & Riccardo Rigon

Abstract: This work presents a new graphical system to represent hydrological dynamical models and their interactions. We propose an extended version of the Petri Nets mathematical modeling language, the Extended Petri Nets (EPN), which allows for an immediate translation from the graphics of the model to its mathematical representation in a clear way. We introduce the principal objects of the EPN representation (i.e., places, transitions, arcs, controllers, and splitters) and their use in hydrological systems. We show how to cast hydrological models in EPN and how to complete their mathematical description using a dictionary for the symbols and an expression table for the flux equations. Thanks to the compositional property of EPN, we show how it is possible to represent either a single hydrological response unit or a complex catchment where multiple systems of equations are solved simultaneously. Finally, EPN can be used to describe complex Earth system models that include feedback between the water, energy, and carbon budgets. The representation of hydrological dynamical systems with EPN provides a clear visualization of the relations and feedback between subsystems, which can be studied with techniques introduced in nonlinear systems theory and control theory.

[Archived PDF of article]
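As a hedged illustration of the place/transition idea (the function and variable names here are hypothetical, not the authors' notation or software), a single linear-reservoir "place" drained by one outflow "transition" can be sketched as:

```python
# Minimal sketch: one "place" (a water storage S) connected by arcs to one
# "transition" (an outflow flux Q = k*S), integrated with explicit Euler.
# Names and structure are illustrative, not the authors' implementation.

def simulate_reservoir(precip, k=0.1, s0=0.0, dt=1.0):
    """Evolve dS/dt = P - k*S; return storages and outflows per step."""
    s, storages, outflows = s0, [], []
    for p in precip:
        q = k * s                 # transition: flux leaving the place
        s = s + dt * (p - q)      # place: storage update from net flux
        storages.append(s)
        outflows.append(q)
    return storages, outflows

storages, outflows = simulate_reservoir([5.0, 0.0, 0.0, 0.0])
```

The compositional property the abstract highlights corresponds to chaining such units: the outflow of one place becomes an input arc of the next, and splitters divide a flux among several downstream places.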

A Regularization Approach to Improve the Sequential Calibration of a Semidistributed Hydrological Model

by A. de Lavenne, V. Andréassian, G. Thirel, M.-H. Ramos & C. Perrin

Abstract: In semidistributed hydrological modeling, sequential calibration usually refers to the calibration of a model by considering not only the flows observed at the outlet of a catchment but also the different gauging points inside the catchment from upstream to downstream. While sequential calibration aims to optimize the performance at these interior gauged points, we show that it generally fails to improve performance at ungauged points. In this paper, we propose a regularization approach for the sequential calibration of semidistributed hydrological models. It consists in adding a priori information on optimal parameter sets for each modeling unit of the semidistributed model. Calibration iterations are then performed by jointly maximizing simulation performance and minimizing drifts from the a priori parameter sets. The combination of these two sources of information is handled by a parameter k to which the method is quite sensitive. The method is applied to 1,305 catchments in France over 30 years. The leave-one-out validation shows that, at locations considered as ungauged, model simulations are significantly improved (over all the catchments, the median KGE criterion is increased from 0.75 to 0.83 and the first quartile from 0.35 to 0.66), while model performance at gauged points is not significantly impacted by the use of the regularization approach. Small catchments benefit most from this calibration strategy. These performances are, however, very similar to the performances obtained with a lumped model based on similar conceptualization.

[Archived PDF of article]
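The joint objective described in the abstract can be sketched generically as a Tikhonov-style regularized calibration (the exact performance metric, distance measure, and weighting used in the paper may differ from this form):

```latex
% Regularized sequential calibration, generic sketch: maximize fit at the
% gauged point while penalizing drift of the parameter set theta from its
% a priori value, with the trade-off controlled by the weight k.
\[
  \theta^{*} \;=\; \arg\max_{\theta}\;
  \Big[\, \mathrm{KGE}\big(Q_{\mathrm{obs}},\, Q_{\mathrm{sim}}(\theta)\big)
  \;-\; k \,\big\lVert \theta - \theta_{\mathrm{apriori}} \big\rVert \,\Big] .
\]
```

With k = 0 this reduces to ordinary performance-only calibration; large k pins each modeling unit to its a priori parameters, which explains the method's reported sensitivity to k.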

Proneness of European Catchments to Multiyear Streamflow Droughts

by Manuela I. Brunner & Lena M. Tallaksen

Summary: Droughts lasting longer than 1 year can have severe ecological, social, and economic impacts. They are characterized by below-average flows, not only during the low-flow period but also in the high-flow period when water stores such as groundwater or artificial reservoirs are usually replenished. Limited catchment storage might worsen the impacts of droughts and make water management more challenging. Knowledge of the occurrence of multiyear drought events enables better adaptation and increases preparedness. In this study, we assess the proneness of European catchments to multiyear droughts by simulating long discharge records. Our findings show that multiyear drought events mainly occur in regions where the discharge seasonality is mostly influenced by rainfall, whereas catchments whose seasonality is dominated by melt processes are less affected. The strong link between the proneness of a catchment to multiyear events and its discharge seasonality leads to the conclusion that future changes toward less snow storage and thus less snow melt will increase the probability of multiyear drought occurrence.

[Archived PDF of article]

Equifinality and Flux Mapping: A New Approach to Model Evaluation and Process Representation under Uncertainty

by Sina Khatami, Murray C. Peel, Tim J. Peterson & Andrew W. Western

Abstract: Uncertainty analysis is an integral part of any scientific modeling, particularly within the domain of hydrological sciences given the various types and sources of uncertainty. At the center of uncertainty rests the concept of equifinality, that is, reaching a given endpoint (finality) through different pathways. The operational definition of equifinality in hydrological modeling is that various model structures and/or parameter sets (i.e., equal pathways) are equally capable of reproducing a similar (not necessarily identical) hydrological outcome (i.e., finality). Here we argue that there is more to model equifinality than model structures/parameters, that is, other model components can give rise to model equifinality and/or could be used to explore equifinality within model space. We identified six facets of model equifinality, namely, model structure, parameters, performance metrics, initial and boundary conditions, inputs, and internal fluxes. Focusing on model internal fluxes, we developed a methodology called flux mapping that has fundamental implications in understanding and evaluating model process representation within the paradigm of multiple working hypotheses. To illustrate this, we examine the equifinality of runoff fluxes of a conceptual rainfall-runoff model for a number of different Australian catchments. We demonstrate how flux maps can give new insights into the model behavior that cannot be captured by conventional model evaluation methods. We discuss the advantages of flux space, as a subspace of the model space not usually examined, over parameter space. We further discuss the utility of flux mapping in hypothesis generation and testing, extendable to any field of scientific modeling of open complex systems under uncertainty.

[Archived PDF of article]

Role of Extreme Precipitation and Initial Hydrologic Conditions on Floods in Godavari River Basin, India

by Shailesh Garg & Vimal Mishra

Abstract: Floods are the most frequent natural calamity in India. The Godavari river basin (GRB) witnessed several floods in the past 50 years. Notwithstanding the large damage and economic loss, the role of extreme precipitation and antecedent moisture conditions on floods in the GRB remains unexplored. Using the observations and the well-calibrated Variable Infiltration Capacity model, we estimate the changes in the extreme precipitation and floods in the observed (1955–2016) and projected future (2071–2100) climate in the GRB. We evaluate the role of initial hydrologic conditions and extreme precipitation on floods in both observed and projected future climate. We find a statistically significant increase in annual maximum precipitation for the catchments upstream of four gage stations during the 1955–2016 period. However, the rise in annual maximum streamflow at all the four gage stations in GRB was not statistically significant. The probability of floods driven by extreme precipitation (PFEP) varies between 0.55 and 0.7 at the four gage stations of the GRB, which declines with the size of the basins. More than 80% of extreme precipitation events that cause floods occur under wet antecedent moisture conditions at all the four locations in the GRB. The frequency of extreme precipitation events is projected to rise twofold or more (under RCP 8.5) in the future (2071–2100) at all four locations. However, the increased frequency of floods under the future climate will largely be driven by the substantial rise in the extreme precipitation events rather than wet antecedent moisture conditions.

[Archived PDF of article]

Research Letters

Combined Effect of Tides and Varying Inland Groundwater Input on Flow and Salinity Distribution in Unconfined Coastal Aquifers

by Woei Keong Kuan, Pei Xin, Guangqiu Jin, Clare E. Robinson, Badin Gibbes & Ling Li

Abstract: Tides and seasonally varying inland freshwater input, with different fluctuation periods, are important factors affecting flow and salt transport in coastal unconfined aquifers. These processes affect submarine groundwater discharge (SGD) and associated chemical transport to the sea. While the individual effects of these forcings have previously been studied, here we conducted physical experiments and numerical simulations to evaluate the interactions between varying inland freshwater input and tidal oscillations. Varying inland freshwater input was shown to induce significant water exchange across the aquifer-sea interface as the saltwater wedge shifted landward and seaward over the fluctuation cycle. Tidal oscillations led to seawater circulations through the intertidal zone that also enhanced the density-driven circulation, resulting in a significant increase in the total SGD. The combination of the tide and varying inland freshwater input, however, decreased the SGD components driven by the separate forcings (e.g., tides and density). Tides restricted the landward and seaward movement of the saltwater wedge in response to the varying inland freshwater input in addition to reducing the time delay between the varying freshwater input signal and landward-seaward movement in the saltwater wedge interface. The nonlinear interaction between tidal fluctuations and varying inland freshwater input revealed by this study will help to improve our understanding of SGD, seawater intrusion, and chemical transport in coastal unconfined aquifers.

[Archived PDF of article]

Essay 106: World Watching: Project Syndicate—New Commentary

from Project Syndicate:

The EU’s EV Greenwash

by Hans-Werner Sinn

EU emissions regulations that went into force earlier this year are clearly designed to push diesel and other internal-combustion-engine automobiles out of the European market to make way for electric vehicles. But are EVs really as climate-friendly and effective as their promoters claim?

MUNICH – Germany’s automobile industry is its most important industrial sector. But it is in crisis, and not only because it is suffering the effects of a recession brought on by Volkswagen’s own cheating on emissions standards, which sent consumers elsewhere. The sector is also facing the existential threat of exceedingly strict European Union emissions requirements, which are only seemingly grounded in environmental policy.

The EU clearly overstepped the mark with the carbon dioxide regulation [PDF] that went into effect on April 17, 2019. From 2030 onward, European carmakers must have achieved average vehicle emissions of just 59 grams of CO2 per kilometer, which corresponds to fuel consumption of 2.2 liters of diesel equivalent per 100 kilometers (107 miles per gallon). This simply will not be possible.
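The stated equivalence can be checked quickly, assuming the standard figure of roughly 2.64 kg of CO2 emitted per liter of diesel burned (this emission factor is an assumption of the check, not given in the article):

```python
# Checking the article's equivalence: 59 g CO2/km ~ 2.2 L diesel per 100 km.
# Assumes about 2.64 kg of CO2 per liter of diesel burned.
CO2_PER_LITER_DIESEL_G = 2640.0
liters_per_100km = 2.2

g_per_km = liters_per_100km * CO2_PER_LITER_DIESEL_G / 100
mpg_us = 235.215 / liters_per_100km   # convert L/100 km to US miles per gallon

print(round(g_per_km))   # ~58 g/km, consistent with the 59 g/km target
print(round(mpg_us))     # ~107 mpg
```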

As late as 2006, average emissions for new passenger vehicles registered in the EU were around 161 g/km. As cars became smaller and lighter, that figure fell to 118 g/km in 2016. But this average crept back up, owing to an increase in the market share of gasoline engines, which emit more CO2 than diesel engines do. By 2018, the average emissions of newly registered cars had once again climbed to slightly above 120 g/km, which is twice what will be permitted in the long term.

Even the most gifted engineers will not be able to build internal combustion engines (ICEs) that meet the EU’s prescribed standards (unless they force their customers into soapbox cars). But, apparently, that is precisely the point. The EU wants to reduce fleet emissions by forcing a shift to electric vehicles. After all, in its legally binding formula for calculating fleet emissions, it simply assumes that EVs do not emit any CO2 whatsoever.

The implication is that if an auto company’s production is split evenly between EVs and ICE vehicles that conform to the present average, the 59 g/km target will be just within reach. If a company cannot produce EVs and remains at the current average emissions level, it will have to pay a fine of around €6,000 ($6,600) per car, or otherwise merge with a competitor that can build EVs.
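The arithmetic behind this implication is straightforward under the EU formula's assumption that EVs count as zero-emission (the 120 g/km figure is the 2018 new-car average cited above):

```python
# Under the EU formula (EVs counted as 0 g/km), a 50/50 fleet split with
# ICE cars at the current ~120 g/km average lands almost exactly on target.
ev_share = 0.5
ice_avg_g_per_km = 120.0   # roughly the 2018 new-car average cited above
fleet_avg = ev_share * 0.0 + (1 - ev_share) * ice_avg_g_per_km
print(fleet_avg)  # 60.0 g/km, just at the edge of the 59 g/km target
```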

But the EU’s formula is nothing but a huge scam. EVs also emit substantial amounts of CO2, the only difference being that the exhaust is released at a remove—that is, at the power plant. As long as coal- or gas-fired power plants are needed to ensure energy supply during the “dark doldrums” when the wind is not blowing and the sun is not shining, EVs, like ICE vehicles, run partly on hydrocarbons. And even when they are charged with solar- or wind-generated energy, enormous amounts of fossil fuels are used to produce EV batteries in China and elsewhere, offsetting the supposed emissions reduction. As such, the EU’s intervention is not much better than a cut-off device for an emissions control system.

Earlier this year, the physicist Christoph Buchal and I published a research paper [PDF, in German] showing that, in the context of Germany’s energy mix, an EV emits a bit more CO2 than a modern diesel car, even though its battery offers drivers barely more than half the range of a tank of diesel. And shortly thereafter, data published [PDF, in German] by Volkswagen confirmed that its e-Rabbit vehicle emits slightly more CO2 [PDF, in German] than its Rabbit Diesel within the German energy mix. (When based on the overall European energy mix, which includes a huge share of nuclear energy from France, the e-Rabbit fares slightly better than the Rabbit Diesel.)

Adding further evidence, the Austrian think tank Joanneum Research has just published a large-scale study [PDF, in German] commissioned by the Austrian automobile association, ÖAMTC, and its German counterpart, ADAC, that also confirms those findings. According to this study, a mid-sized electric passenger car in Germany must drive 219,000 kilometers before it starts outperforming the corresponding diesel car in terms of CO2 emissions. The problem, of course, is that passenger cars in Europe last for only 180,000 kilometers, on average. Worse, according to Joanneum, EV batteries don’t last long enough to achieve that distance in the first place. Unfortunately, drivers’ anxiety about the cars’ range prompts them to recharge their batteries too often, at every opportunity, and at high speed, which is bad for durability.

As for EU lawmakers, there are now only two explanations for what is going on: either they didn’t know what they were doing, or they deliberately took Europeans for a ride. Both scenarios suggest that the EU should reverse its interventionist industrial policy, and instead rely on market-based instruments such as a comprehensive emissions trading system.

With Germany’s energy mix, the EU’s regulation on fleet fuel consumption will not do anything to protect the climate. It will, however, destroy jobs, sap growth, and increase the public’s distrust in the EU’s increasingly opaque bureaucracy.

Essay 104: Economics—A Decade after the Global Recession: Lessons and Challenges for Emerging and Developing Economies

from M. Ayhan Kose, Director, Prospects Group, World Bank Group:

Dear Colleagues,

This year marks the tenth anniversary of the 2009 global recession. Most emerging market and developing economies (EMDEs) weathered the global recession relatively well, in part by using the sizeable fiscal and monetary policy buffers accumulated during the prior years of strong growth. However, a short-lived rebound in activity has been followed by a decade of protracted weakness in EMDEs amid bouts of financial market stress, falling commodity prices, and subdued trade and investment.

Are EMDEs ready to face a deeper global downturn, if it materializes? Our new study A Decade After the Global Recession: Lessons and Challenges for Emerging and Developing Economies [PDF] takes on this question. It examines developments of the past decade, draws lessons for these economies, and discusses policy options. The study is the first comprehensive analysis on the topic with a truly EMDE focus. It offers three main conclusions. First, perhaps for the first time, many EMDEs were able to implement large-scale countercyclical fiscal and monetary policy stimulus during the last global recession. Second, looking ahead, policymakers in many EMDEs are now equipped with stronger policy frameworks than in earlier global downturns or financial crises. Third, EMDEs now have less policy room to face a global downturn than they had before the 2009 global recession. Irrespective of the timing of the next global downturn, the big lesson of the past decade for EMDEs is clear: since they are less well prepared today than prior to the 2009 episode, they urgently need to undertake cyclical and structural policy measures to be able to effectively confront the next downturn when it happens.

You can download the book here [PDF]. Its table of contents is below (each chapter individually downloadable). All charts featured in the book (with underlying data series) are also available below.

A Decade After the Global Recession: Lessons and Challenges for Emerging and Developing Economies [PDF]

Edited by M. Ayhan Kose and Franziska Ohnsorge

Part I: Context

Chapter 1: A Decade After the Global Recession: Lessons and Challenges [PDF]
Chapter 2: What Happens During Global Recessions? [PDF]

Part II: In the Rearview Mirror

Chapter 3: Macroeconomic Developments [PDF]
Chapter 4: Financial Market Developments [PDF]
Chapter 5: Macroeconomic and Financial Sector Policies [PDF]

Part III: Looking Ahead

Chapter 6: Prospects, Risks, and Vulnerabilities [PDF]
Chapter 7: Policy Challenges [PDF]

Part IV: Implications for the World Bank Group

Chapter 8: The Role of the World Bank Group [PDF]

Excel Charts

Complete archive [ZIP]

Chapter 1 [XLSX]
Chapter 2 [XLSX]
Chapter 3 [XLSX]
Chapter 4 [XLSX]
Chapter 5 [XLSX]
Chapter 6 [XLSX] Box [XLSX]
Chapter 7 [XLSX]
Chapter 8 [XLSX]

PS: This study follows on the World Bank Group’s recent book on Inflation in Emerging and Developing Economies. For their main periodical products, please visit: Global Economic Prospects and Commodity Markets Outlook. For their full menu of monitoring publications, please visit: World Bank Economic Monitoring. For their analytical work on topical policy issues, please visit Prospects Group Policy Research Working Papers.