Movies as an Education in Global Looting: The Sea Hawk (1940)

Movies and the World as an Arena of Violent Domination and Global Looting

The classic Warner Brothers swashbuckler The Sea Hawk (1940), within its romantic adventures and intricate swordfights (perhaps comparable to the car chases of later movies), offers a partly historical, partly fictional version of a world built on imperial struggles, ransacking, and despoiling. The hegemonic power in the West (and perhaps worldwide) is Spain. Philip II, the king-emperor, wants to own, dominate, and rule the whole world. In 1588, his Spanish Armada loses to England. (The British, of course, want to compare this to the Battle of Britain against the German Luftwaffe.)

Set in 1585, The Sea Hawk opens with King Philip II of Spain plotting world domination, laughing that all world maps will soon read simply “Spain” — once England is out of the way, of course.

The Spanish ambassador departs for England to escort his niece to Queen Elizabeth’s court, but in a spectacular sea battle, the Spanish galley is soundly damaged, boarded, raided, and sunk by a group of pirates led by Captain Geoffrey Thorpe, a stand-in for Elizabethan sea dogs like Sir Francis Drake, played by Errol Flynn. Thorpe rescues the galley slaves — they row the boat — and spares the crew, taking them aboard and delivering them to England. The jewels and other bounty (or a portion thereof) are a gift to the Queen.

His crew is part of a noble privateer coalition — the Sea Hawks — who justify their piracy as reclamation of English goods (and enslaved sailors) from the Spanish behemoth. The political fallout from Thorpe’s abduction of the ambassador forces Elizabeth to outlaw the Sea Hawks and to officially deny (while privately approving) his mission to Panama to steal a shipment of Aztec gold.

Inca gold is also mentioned in the movie as a target of plunder.

Sir John Hawkins (1532–1595), part of this group of global sailor-pirates and master-mariners, was one of the most notable sailors and naval commanders of the sixteenth century.

He is known for his pivotal role in the maritime history of England and the rise of the global slave trade.

John Hawkins, the son of a merchant, was born in Plymouth in 1532. He became a sea captain and in 1562 became the first Englishman to capture people in Sierra Leone and sell them as slaves to Spanish settlers in the Caribbean. (Notice that the slave trade did not discriminate against Spanish buyers even with Philip II threatening England. Business is business.)

Stealing Aztec gold as part of colonial or imperial plundering, like the slave trade, was part of the dark side of history, something the standard history books “skate over” dishonestly.

A key scene between the Spanish aristocratic beauty and Captain Thorpe:

Doña María Álvarez de Córdoba: “I’m not in the habit of conversing with thieves. I thought I made that quite clear, Captain Thorpe.”

Captain Geoffrey Thorpe: “Why, yes, all except your definition. Tell me, is a thief an Englishman who steals?”

Doña María Álvarez de Córdoba: “It’s anybody who steals… whether it’s piracy or robbing women.”

Captain Geoffrey Thorpe: “Oh, I see. I’ve been admiring some of the jewels we found in your chest… particularly the wrought gold. It’s Aztec, isn’t it? I wonder just how those Indians were persuaded to part with it.”

The Sea Hawk (1940)

Donald Trump continues this tradition of looting when he says of Iraq’s oil:

“Think of it as our oil under their sand.”

Thus the whole world is an arena where the weak don’t have any property rights: not the oil or gold, not themselves (slavery) and not their country (colonialism).

This exploitative hierarchy and “world-system” is part of “the way of the world” and even a romantic adventure story like 1940’s The Sea Hawk gives you a Hollywoodized glimpse into its roots. Imperial struggles in the West spill over into colonization and ransacking and looting. History books one sees in high school are dishonest and in that sense uninformative or even disinformative.

The popular PBS travel series Rick Steves’ Europe unintentionally gives us a wonderful example of this notion of plunder and looting as a pillar of world history in the show on Venice. Rick Steves is talking about the various statues in Venice’s central St. Mark’s Square (Piazza San Marco), and comments “I’d call the style ‘Early Ransack.’”

This Rick Steves quip about ransacking and historical wealth-building is very informative.

Science-Watching: Forecasting New Diseases in Low-Data Settings Using Transfer Learning

[from London Mathematical Laboratory]

by Kirstin Roster, Colm Connaughton & Francisco A. Rodrigues

Abstract

Recent infectious disease outbreaks, such as the COVID-19 pandemic and the Zika epidemic in Brazil, have demonstrated both the importance and difficulty of accurately forecasting novel infectious diseases. When new diseases first emerge, we have little knowledge of the transmission process, the level and duration of immunity to reinfection, or other parameters required to build realistic epidemiological models. Time series forecasts and machine learning, while less reliant on assumptions about the disease, require large amounts of data that are also not available in early stages of an outbreak. In this study, we examine how knowledge of related diseases can help make predictions of new diseases in data-scarce environments using transfer learning. We implement both an empirical and a synthetic approach. Using data from Brazil, we compare how well different machine learning models transfer knowledge between two different dataset pairs: case counts of (i) dengue and Zika, and (ii) influenza and COVID-19. In the synthetic analysis, we generate data with an SIR model using different transmission and recovery rates, and then compare the effectiveness of different transfer learning methods. We find that transfer learning offers the potential to improve predictions, even beyond a model based on data from the target disease, though the appropriate source disease must be chosen carefully. While imperfect, these models offer an additional input for decision makers for pandemic response.
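
As a concrete illustration of the synthetic analysis described above, here is a minimal sketch (not the authors’ code) of generating epidemic curves with a discrete-time SIR model under different transmission and recovery rates; the function name, parameter values, and population size below are illustrative assumptions.

```python
# Minimal sketch, assuming a discrete-time (daily) SIR model; not the authors' code.
import numpy as np

def sir_incidence(beta, gamma, n_days=200, population=1_000_000, i0=10):
    """Return daily new-case counts from a simple SIR model."""
    s, i, r = population - i0, float(i0), 0.0
    new_cases = []
    for _ in range(n_days):
        infections = beta * s * i / population   # new infections today
        recoveries = gamma * i                   # recoveries today
        s -= infections
        i += infections - recoveries
        r += recoveries
        new_cases.append(infections)
    return np.array(new_cases)

# A "source" disease and a "target" disease with different dynamics,
# playing the roles that the endemic and novel diseases play in the paper:
source_series = sir_incidence(beta=0.30, gamma=0.10)   # basic reproduction number ~3.0
target_series = sir_incidence(beta=0.18, gamma=0.10)   # basic reproduction number ~1.8
```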

Introduction

Epidemic models can be divided into two broad categories: data-driven models aim to fit an epidemic curve to past data in order to make predictions about the future; mechanistic models simulate scenarios based on different underlying assumptions, such as varying contact rates or vaccine effectiveness. Both model types aid in the public health response: forecasts serve as an early warning system of an outbreak in the near future, while mechanistic models help us better understand the causes of spread and potential remedial interventions to prevent further infections. Many different data-driven and mechanistic models were proposed during the early stages of the COVID-19 pandemic and informed decision-making with varying levels of success. This range of predictive performance underscores both the difficulty and importance of epidemic forecasting, especially early in an outbreak. Yet the COVID-19 pandemic also led to unprecedented levels of data-sharing and collaboration across disciplines, so that several novel approaches to epidemic forecasting continue to be explored, including models that incorporate machine learning and real-time big-data streams. In addition to the COVID-19 pandemic, recent infectious disease outbreaks include Zika virus in Brazil in 2015, Ebola virus in West Africa in 2014–16, Middle East respiratory syndrome (MERS) in 2012, and coronavirus associated with severe acute respiratory syndrome (SARS-CoV) in 2003. This trajectory suggests that further improvements to epidemic forecasting will be important for global public health. Exploring the value of new methodologies can help broaden the modeler’s toolkit to prepare for the next outbreak. In this study, we consider the role of transfer learning for pandemic response.

Transfer learning refers to a collection of techniques that apply knowledge from one prediction problem to solve another, often using machine learning and with many recent applications in domains such as computer vision and natural language processing. Transfer learning leverages a model trained to execute a particular task in a particular domain, in order to perform a different task or extrapolate to a different domain. This allows the model to learn the new task with less data than would normally be required, and is therefore well-suited to data-scarce prediction problems. The underlying idea is that skills developed in one task, for example the features that are relevant to recognize human faces in images, may be useful in other situations, such as classification of emotions from facial expressions. Similarly, there may be shared features in the patterns of observed cases among similar diseases.

The value of transfer learning for the study of infectious diseases is relatively under-explored. The majority of existing studies on diseases remain in the domain of computer vision and leverage pre-trained neural networks to make diagnoses from medical images, such as retinal diseases, dental diseases, or COVID-19. Coelho and colleagues (2020) explore the potential of transfer learning for disease forecasts. They train a Long Short-Term Memory (LSTM) neural network on dengue fever time series and make forecasts directly for two other mosquito-borne diseases, Zika and Chikungunya, in two Brazilian cities. Even without any data on the two target diseases, their model achieves high prediction accuracy four weeks ahead. Gautam (2021) uses COVID-19 data from Italy and the USA to build an LSTM transfer model that predicts COVID-19 cases in countries that experienced a later pandemic onset.

These studies provide empirical evidence that transfer learning may be a valuable tool for epidemic forecasting in low-data situations, though research is still limited. In this study, we aim to contribute to this empirical literature not only by comparing different types of knowledge transfer and forecasting algorithms, but also by considering two different pairs of endemic and novel diseases observed in Brazilian cities, specifically (i) dengue and Zika, and (ii) influenza and COVID-19. With an additional analysis on simulated time series, we hope to provide theoretical guidance on the selection of appropriate disease pairs, by better understanding how different characteristics of the source and target diseases affect the viability of transfer learning.

Zika and COVID-19 are two recent examples of novel emerging diseases. Brazil experienced a Zika epidemic in 2015–16, and the WHO declared a public health emergency of international concern in February 2016. Zika is caused by an arbovirus spread primarily by mosquitoes, though other transmission methods, including congenital and sexual transmission, have also been observed. Zika belongs to the family of viral hemorrhagic fevers, and symptoms of infection share some commonalities with other mosquito-borne arboviruses, such as yellow fever, dengue fever, or chikungunya. Illness tends to be asymptomatic or mild but can lead to complications, including microcephaly and other brain defects in the case of congenital transmission.

Given the similarity of the pathogen and primary transmission route, dengue fever is an appropriate choice of source disease for Zika forecasting. Not only does the shared mosquito vector result in similar seasonal patterns of annual outbreaks, but consistent, geographically and temporally granular data on dengue cases is available publicly via the open data initiative of the Brazilian government.

COVID-19 is an acute respiratory infection caused by the novel coronavirus SARS-CoV-2, which was first detected in Wuhan, China, in 2019. It is transmitted directly between humans via airborne respiratory droplets and particles. Symptoms range from mild to severe and may affect the respiratory tract and central nervous system. Several variants of the virus have emerged, which differ in their severity, transmissibility, and level of immune evasion.

Influenza is also a contagious respiratory disease that is spread primarily via respiratory droplets. Infection with the influenza virus also follows patterns of human contact and seasonality. There are two types of influenza (A and B) and new strains of each type emerge regularly. Given the similarity in transmission routes and to a lesser extent in clinical manifestations, influenza is chosen as the source disease for knowledge transfer to model COVID-19.

For each of these disease pairs, we collect time series data from Brazilian cities. Data on the target disease from half the cities is retained for testing. To ensure comparability, the test set is the same for all models. Using this empirical data, as well as the simulated time series, we implement the following transfer models to make predictions.

  • Random forest: First, we implement a random forest model which was recently found to capture well the time series characteristics of dengue in Brazil. We use this model to make predictions for Zika without re-training. We also train a random forest model on influenza data to make predictions for COVID-19. This is a direct transfer method, where models are trained only on data from the source disease.
  • Random forest with TrAdaBoost: We then incorporate data from the target disease (i.e., Zika and COVID-19) using the TrAdaBoost algorithm together with the random forest model. This is an instance-based transfer learning method, which selects relevant examples from the source disease to improve predictions on the target disease.
  • Neural network: The second machine learning algorithm we deploy is a feed-forward neural network, which is first trained on data of the endemic disease (dengue/influenza) and applied directly to forecast the new disease.
  • Neural network with re-training and fine-tuning: We then retrain only the last layer of the neural network using data from the new disease and make predictions on the test set. Finally, we fine-tune all the layers’ parameters using a small learning rate and a low number of epochs. These models are examples of parameter-based transfer methods, since they leverage the weights generated by the source disease model to accelerate and improve learning in the target disease model. (A minimal illustrative sketch of this freeze-then-fine-tune pattern appears after this list.)
  • Aspirational baseline: We compare these transfer methods to a model trained only on the target disease (Zika/COVID-19) without any data on the source disease. Specifically, we use half the cities in the target dataset for training and the other half for testing. This gives a benchmark of the performance in a large-data scenario, which would occur after a longer period of disease surveillance.
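
To make the parameter-based transfer in the neural-network item above more concrete, here is a minimal sketch assuming a small PyTorch feed-forward forecaster on lagged case counts. It is not the paper’s implementation; the architecture, data shapes, epochs, and learning rates are illustrative assumptions.

```python
# Minimal sketch of parameter-based transfer (pre-train on the source disease,
# re-train the last layer on the target disease, then fine-tune everything with a
# small learning rate). Illustrative only; not the paper's implementation.
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    def __init__(self, n_lags=8, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_lags, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)        # one-step-ahead case count

    def forward(self, x):
        return self.head(self.body(x))

def train(model, x, y, epochs, lr, params=None):
    params = list(params) if params is not None else list(model.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# Placeholder tensors: lagged windows of source-disease cases (plentiful)
# and target-disease cases (scarce). Real inputs would come from the case counts.
x_src, y_src = torch.randn(500, 8), torch.randn(500, 1)
x_tgt, y_tgt = torch.randn(40, 8), torch.randn(40, 1)

model = Forecaster()
train(model, x_src, y_src, epochs=200, lr=1e-3)          # 1) pre-train on source disease

for p in model.body.parameters():                         # 2) freeze shared layers and
    p.requires_grad = False                                #    re-train only the last layer
train(model, x_tgt, y_tgt, epochs=100, lr=1e-3, params=model.head.parameters())

for p in model.parameters():                               # 3) fine-tune all layers with a
    p.requires_grad = True                                  #    small learning rate, few epochs
train(model, x_tgt, y_tgt, epochs=20, lr=1e-4)
```

The same freeze-then-fine-tune pattern carries over to recurrent architectures such as the LSTMs cited earlier.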

The remainder of this paper is organized as follows. The models are described in more technical detail in Section 2. Section 3 shows the results of the synthetic and empirical predictions. Finally, Section 4 discusses practical implications of the analyses.

Access the full paper [via institutional access or paid download].

World-Watching: China Globalization Conference

[from the Center for China and Globalization]

The Center for China and Globalization (CCG) is proud to announce the full program of the upcoming 8th edition of its annual China and Globalization Forum 2022, to be held in an online-offline hybrid format in Beijing. Everyone is cordially invited to join the public sessions virtually. All public sessions will be broadcast live, and you will be able to access them on Zoom:

Monday, June 20th

09:00-10:00—Forum Special Online Program I: Advancing the 2030 Agenda in Uncertain Times: Sustainability and the Quest for China-U.S. Cooperation – Fireside Chat with Sec. Henry M. Paulson, Jr. and Mr. WANG Shi (王石)

10:30-12:30—Ambassadors’ Roundtable: Global Recovery in Post-Pandemic Times: Trends, Challenges, and Responses

14:00-16:00—China-Europe Roundtable: China-Europe Economic Cooperation: Moving Forward with the Global Quest for Sustainability

17:30-18:30—Forum Special Online Program II: History at a Turning Point: Pandemic, Ukraine, and the Changing Relations between China, Europe, and the United States – Dialogue with Historian Niall Ferguson

20:00-21:30—Forum Special Online Program III: Realigning the U.S.-China Trade and Economic Relationship: Inflation, Tariffs, and the Way Forward – China-U.S. Think Tank Dialogue

Zoom:
Webinar ID: 894 5641 9097
Passcode: 566991

Once you’re admitted into the Zoom meeting, your camera and audio will remain off. Simultaneous interpretation between English and Chinese will be available by selecting the language pane.

Agenda

Monday, June 20th

09:00-10:00—Forum Special Online Program I: Advancing the 2030 Agenda in Uncertain Times: Sustainability and the Quest for China-U.S. Cooperation – Fireside Chat with Sec. Henry M. Paulson, Jr. and Mr. WANG Shi (王石)

Host

WANG Huiyao (王辉耀), CCG President, Vice Chairman of China Association for International Economic Cooperation (CAFIEC)

Speakers

Henry M. Paulson, Jr., former U.S. Treasury Secretary, Founder and Chairman of the Paulson Institute
WANG Shi (王石), CCG Senior Vice President, Founder and Honorary Chairman of China Vanke Co., Ltd., Founder of C-Team

This program will also be livestreamed on the web via Baidu (English and Chinese language streams) and on CCG’s social media channels (YouTube, Twitter, and Facebook).

10:30-12:30—Ambassadors’ Roundtable: Global Recovery in Post-Pandemic Times: Trends, Challenges, and Responses

Chair

WANG Huiyao (王辉耀), CCG President, Vice Chairman of China Association for International Economic Cooperation (CAFIEC)

Opening remarks

LONG Yongtu, CCG Chairman; former Vice Minister of Commerce
LIN Songtian, President of the Chinese People’s Association for Friendship with Foreign Countries, former Chinese Ambassador to South Africa
Siddharth Chatterjee, UN Resident Coordinator, United Nations in China

Participants

(in alphabetic order by country):
Rahamtalla M. Osman, Permanent Representative of African Union to China
Graham Fletcher, Ambassador of Australia to China 
Paulo Estivallet de Mesquita, Ambassador of Brazil to China 
Nicolas Chapuis, Ambassador of European Union to China 
Laurent Bili, Ambassador of France to China 
Djauhari Oratmangun, Ambassador of Indonesia to China 
Luca Ferrari, Ambassador of Italy to China 
Raja Dato Nushirwan Zainal Abidin, Ambassador of Malaysia to China 
Clare Fearnley, Ambassador of New Zealand to China 
Signe Brudeset, Ambassador of Norway to China 
Moin ul Haque, Ambassador of Pakistan to China 
Luis Quesada, Ambassador of Peru to China 
José Augusto Duarte, Ambassador of Portugal to China 
James Kimonyo, Ambassador of Rwanda to China 
Alenka Suhadolnik, Ambassador of Slovenia to China 
Siyabonga Cwele, Ambassador of South Africa to China 
Bernardino Regazzoni, Ambassador of Switzerland to China 
Arthayudh Srisamoot, Ambassador of Thailand to China 
Ali Obaid Al Dhaheri, Ambassador of UAE to China

14:00-16:00—China-Europe Roundtable: China-Europe Economic Cooperation: Moving Forward with the Global Quest for Sustainability

Chair

Andy Mok, CCG Senior Fellow

Participants

(in alphabetic order)
Joseph Cash, Policy Analyst, China–Britain Business Council (CBBC)
CUI Hongjian, CCG Non-Resident Senior Fellow and Director of the Department of European Studies at the China Institute of International Studies (CIIS)
Vivian Ding, CCG Senior Council Member, Founder and CEO of WeBrand Global
FENG Zhongping, Director of Institute of European Studies, Chinese Academy of Social Sciences (CASS)
Allan Gabor, President of Merck China
Archil Kalandia, Ambassador of Georgia to China
LENG Yan, CCG Senior Council Member; Executive Vice President of Daimler Greater China
LIU Chang, Vice President of Knorr-Bremse Asia Pacific
Steven Lynch, Managing Director, BritCham China
Dario Mihelin, Ambassador of Croatia to China
Leena-Kaisa Mikkola, Ambassador of Finland to China
MIN Hao, CCG Senior Council Member; Founder, Chairman, and CEO of the Nanjing Easthouse Electric Ltd.
SUN Yongfu, CCG Senior Fellow; former Director-General of MOFCOM Department of European Affairs
Joerg Wuttke, President of the EU Chamber of Commerce in China
ZHOU Yanli, CCG Advisor; former Vice Chairman of China Insurance Regulatory Commission
Helen Zhu, CCG Senior Council Member; Vice President of Sanofi China

This program will also be livestreamed on the web via Baidu (English and Chinese language streams) and on CCG’s social media channels (YouTube, Twitter, and Facebook).

17:30-18:30—Forum Special Online Program II: History at a Turning Point: Pandemic, Ukraine, and the Changing Relations between China, Europe, and the United States – Dialogue with Historian Niall Ferguson

Speakers

Niall Ferguson, Milbank Family Senior Fellow at the Hoover Institution, Stanford University
WANG Huiyao (王辉耀), CCG President, Vice Chairman of China Association for International Economic Cooperation (CAFIEC)

20:00-21:30—Forum Special Online Program III: Realigning the U.S.-China Trade and Economic Relationship: Inflation, Tariffs, and the Way Forward – China-U.S. Think Tank Dialogue

Moderator

WANG Huiyao (王辉耀), CCG President, Vice Chairman of China Association for International Economic Cooperation (CAFIEC)

Speakers

(in alphabetic order)
Craig Allen, President, US-China Business Council (USCBC)
Wendy Cutler, Vice President, Asia Society Policy Institute; former Acting Deputy U.S. Trade Representative
JIN Xu, President, China Association of International Trade (CAIT)
Adam Posen, President, Peterson Institute for International Economics (PIIE)
Jeremie Waterman, President of China Center and Vice President, U.S. Chamber of Commerce
YI Xiaozhun, former Deputy Director-General of World Trade Organization, former Vice Commerce Minister

Tuesday, June 21st

09:30-12:30—China Globalization 30 Roundtable Experts Roundtable: China and Globalization in the 21st Century (Chinese language livestream, not available on Zoom)

Chair

Mabel Miao, CCG Secretary-General

Discussants

(in alphabetic order)
CHEN Zhiwu, Director of Asia Global Institute, Professor of Business School, Hong Kong University
DA Wei, Professor and Director of Center for International Security and Strategy, Tsinghua University
DONG Guanpeng, Vice President of China Public Relations Association, Dean of School of Government and Public Affairs, Communication University of China
GE Jianxiong, Director of Institute of Chinese Historical Geography, Fudan University
GU Xuewu, Director of Center for Globalization, University of Bonn
HU Biliang, Executive Director of the Belt and Road Institute and the Institute of Emerging Markets, Beijing Normal University
LI Xiangyang, Director of Institute of Asia-Pacific and Global Strategy, Chinese Academy of Social Sciences (CASS)
LIU Guoen, Dean of Institute for Global Health and Development, BOYA Distinguished Professor, Peking University
LIU Junhong, Director of Globalization Center, China Institutes of Contemporary International Relations (CICIR)
SU Hao, Director of Center for Strategy and Peace Studies, China Foreign Affairs University
XIE Tao, Dean of School of International Relations and Diplomacy, Beijing Foreign Studies University
XUE Lan, Dean of Schwarzman College, Tsinghua University
WANG Huiyao (王辉耀), President of Center for China and Globalization; Dean of Development Research Institute, Southwest University of Finance and Economics
WANG Ning, Zhiyuan Chair Professor, Shanghai Jiao Tong University, Foreign Member of the European Academy of Sciences
WANG Yiwei, Professor of School of International Relations, Renmin University of China
WANG Yong, Director of Center for International Political and Economic Studies, Peking University
WU Xinbo, Dean of Institute of International Studies, Director of Center for American Studies, Fudan University
WU Zhicheng, Vice President of the Institute of International Strategic Studies, Party School of the Central Committee of CPC (National Academy of Administration)
YANG Xuedong, Senior Professor of Political Science, Tsinghua University
ZHANG Shuhua, Director of Institute of Political Science, Chinese Academy of Social Sciences (CASS)
ZHANG Xudong, Professor of Comparative Literature & East Asian Studies, NYU
ZHANG Yunling, Member of Presidium of Academic Divisions of Chinese Academy of Social Sciences (CASS)

This session will also be livestreamed on the web, accessible via this Baidu link (Chinese language only, no simultaneous interpretation).

COVID-19 and “Naïve Probabilism”

[from the London Mathematical Laboratory]

In the early weeks of the 2020 U.S. COVID-19 outbreak, guidance from the scientific establishment and government agencies included a number of dubious claims—masks don’t work, there’s no evidence of human-to-human transmission, and the risk to the public is low. These statements were backed by health authorities, as well as public intellectuals, but were later disavowed or disproven, and the initial under-reaction was followed by an equal overreaction and imposition of draconian restrictions on human social activities.

In a recent paper, LML Fellow Harry Crane examines how these early mis-steps ultimately contributed to higher death tolls, prolonged lockdowns, and diminished trust in science and government leadership. Even so, the organizations and individuals most responsible for misleading the public suffered little or no consequences, or even benefited from their mistakes. As he discusses, this perverse outcome can be seen as the result of authorities applying a formulaic procedure of “naïve probabilism” in facing highly uncertain and complex problems, and largely assuming that decision-making under uncertainty boils down to probability calculations and statistical analysis.

This attitude, he suggests, might be captured in a few simple “axioms of naïve probabilism”:

Axiom 1: The more complex the problem, the more complicated the solution.

This idea is a hallmark of naïve decision making. The COVID-19 outbreak was highly complex: a novel virus of uncertain origin spreading through an interconnected global society. But the potential usefulness of masks was not one of these complexities. The mask mistake was consequential not because masks were the antidote to COVID-19, but because they were a low-cost measure whose effect would be neutral at worst; wearing a mask can’t hurt in reducing the spread of a virus.

Yet the experts neglected common sense in favor of a more “scientific response” based on rigorous peer review and sufficient data. Two months after the initial U.S. outbreak, a study confirmed the obvious, and masks went from being strongly discouraged to being mandated by law. Precious time had been wasted, many lives lost, and the economy stalled.

Crane also considers another rule of naïve probabilism:

Axiom 2: Until proven otherwise, assume that the future will resemble the past.

In the COVID-19 pandemic, of course, there was at first no data that masks work, no data that travel restrictions work, no data on human-to-human transmission. How could there be? Yet some naïve experts took this as a reason to maintain the status quo. Indeed, many universities refused to do anything in preparation until a few cases had been detected on campus—at which point they had some data, along with hundreds or thousands of other as-yet-undetected infections.

Crane touches on some of the more extreme examples of this kind of thinking, which assumes that whatever can’t be explained in terms of something that happened in the past is speculative, non-scientific, and unjustifiable:

“This argument was put forward by John Ioannidis in mid-March 2020, as the pandemic outbreak was already spiralling out of control. Ioannidis wrote that COVID-19 wasn’t a ‘once-in-a-century pandemic,’ as many were saying, but rather a ‘once-in-a-century data-fiasco’. Ioannidis’s main argument was that we knew very little about the disease, its fatality rate, and the overall risks it poses to public health; and that in face of this uncertainty, we should seek data-driven policy decisions. Until the data was available, we should assume COVID-19 acts as a typical strain of the flu (a different disease entirely).”

Unfortunately, waiting for the data also means waiting too long if the virus turns out to be more serious. This is like waiting to hit the tree before accepting that the available data indeed supports wearing a seatbelt. Moreover, in the pandemic example, this “lack of evidence” argument ignores other evidence from before the virus entered the United States. China had locked down a city of 10 million; Italy had locked down its entire northern region, with the whole country soon to follow. There was worldwide consensus that the virus was novel, that it was spreading fast, and that medical communities had no idea how to treat it. That’s data, and plenty of information to act on.

Crane goes on to consider a third axiom of naïve probabilism, which aims to turn ignorance into a strength. Overall, he argues, these axioms, despite being widely used by many prominent authorities and academic experts, actually capture a set of dangerous fallacies for action in the real world.

In reality, complex problems call for simple, actionable solutions; the past doesn’t repeat indefinitely (i.e., COVID-19 was never the flu); and ignorance is not a form of wisdom. The Naïve Probabilist’s primary objective is to be accurate with high probability rather than to protect against high-consequence, low-probability outcomes. This goes against common sense principles of decision making in uncertain environments with potentially very severe consequences.

Importantly, Crane emphasizes, the hallmark of Naïve Probabilism is naïveté, not ignorance, stupidity, crudeness, or other such base qualities. The typical Naïve Probabilist lacks not knowledge or refinement, but the experience and good judgment that come from making real decisions with real consequences in the real world. The most prominent naïve probabilists are recognized (academic) experts in mathematical probability or, relatedly, statistics, physics, psychology, economics, epistemology, medicine, or so-called decision sciences. Moreover, and worryingly, the best-known naïve probabilists are quite sophisticated, skilled in the art of influencing public policy decisions without suffering from the risks those policies impose on the rest of society.

Read the paper. [Archived PDF]

Education and the Historical Swirl: Part II

We concluded Part I on this topic with the following comments which we wish students to incorporate into their educations, irrespective of the major, field or concentration:

The gold standard itself, dominated from London, led to intricate problems: Golden Fetters: The Gold Standard and the Great Depression, 1919-1939 (published in 1992) by Barry Eichengreen, the leading historian of monetary systems, shows the downstream pitfalls of the gold standard.

In other words, the de facto emergence of Britain/London as the world commercial and policy center, and the relation of this emergence to empire and international tensions and rivalries, meant it was very problematic for any country to steer a course other than staying in tandem with British moods and ideologies, such as free trade. Any country by itself would find it difficult to have a more independent policy. (Friedrich List of Germany, who died in 1846, wrestles with these difficulties somewhat.) The attempts to find “autonomy and autarky” in the interwar years (Germany, Japan, Italy) led to worse nightmares. The world seems like a “no exit” arena of ideologies and rivalries.

The “crazy dynamics” and the semi-anarchy of the system, which continues to this day and is even worse, means that policy-making is always seen through a “dark windshield.”

History in the globalizing capitalist centuries, the nineteenth and the twentieth, is a kind of turbulent swirl and not a rational “walk.”

Here’s a bizarre but necessary comment on this sense of turbulent and surprising swirl propelling history forwards and backwards and sidewards at the same time:

The historian Barry Eichengreen (mentioned above) is a distinguished analyst of world monetary systems at U.C. Berkeley and perhaps the leading expert today on the evolution of such systems.

From movies such as Shoah and The Last of the Unjust by the great filmmaker Claude Lanzmann, we know that Barry Eichengreen’s mother was Lucille Eichengreen, a Jew born in Hamburg, Germany, in 1925 and deported to the Łódź Ghetto in Poland during World War II. She survived through many miraculous accidents and contingencies, then wrote about her experiences.

We get a deeper insight into “the way of the world” by seeing that the Holocaust itself has as a backdrop the anarcho-craziness of the world. The Fascists and Nazis were jumping from the “frying pan into the fire” by imagining that world conquest and world-murdering could “stop the world.” They and their favored populations could “get off” and step into a racial dreamworld. They were taking today’s concept of “gated community” and applying it to the “racial community” (Volksgemeinschaft, in German).

This led to the phenomenon depicted in Goya’s famous aquatint: The Sleep of Reason Produces Monsters.

The perceived madness of the world and the madness of leaders that this perception leads to have never been analyzed together.

The fact that the behavior of world leaders could be “crazy like a fox” (half-insane, half-opportunistic, or Machiavellian “clever”) is a complicating factor or twist from Mussolini until today.

World-Watching: 21st International Economic Forum on Africa 2022

[From the Organisation for Economic Co-operation and Development (OECD)]

The Future Africa Wants: Better Policies for the Next Generation and a Sustainable Transition

Fast and brutal mutations in the global economy today are reshaping conditions for transforming African economies and creating better opportunities for the continent’s youth. Efforts to reduce the continent’s dependence on raw material exports, advance productive transformation, and increase investment and domestic resource mobilization are being challenged.

Can innovative policies and international partnerships help address these challenges?

Join OECD experts, African leaders and policy makers and shakers to discuss next steps for a more sustainable future.

The 21st edition of the AU-OECD International Economic Forum on Africa takes place in the framework of the OECD Ministerial Council Meeting (MCM), chaired by Italy under the theme “The Future We Want: Better Policies for the Next Generation and a Sustainable Transition.” The Forum is an opportunity for OECD members to engage at a high level, yet informally, with Africa’s leaders, movers, and shakers on the way forward.

Register to attend in person.

Can’t come to Paris? Join the Forum online, or watch it on OECD TV, Twitter, or Facebook.

Education and Seeing the “Swirl” of History

The tempo and rhythm of world events and world history are not captured in the linear and bland books one reads in schools and colleges, where the sense of the stormy forward turbulence of the world is not communicated. Here’s an example that does communicate these “crazy dynamics”:

The leading historian, James Joll, in his excellent Europe Since 1870: An International History talks about gold and the gold standard in this way:

“The world supply of gold was diminishing, as the effects of the gold rushes in California and Australia in the 1850s and 1860s passed. This coincided with the decision in the 1870s of many of the leading countries to follow Britain’s example to use gold rather than silver as the basis of their currency — Germany in 1871, France in 1876 for example — so that the demand for gold rose just as the supply was temporarily declining. This in turn led to some doubt about the use of a gold standard and to much discussion about ‘bi-metallism’ and about the possibility of restoring silver to its place as the metal on which the world’s currency should be based, though this movement had more success in the United States than in Europe, where gold has now established itself firmly. By the 1890s however the discovery of new gold deposits in South Africa, Western Australia and Canada put an end to these discussions and uncertainties, as far as currency was concerned, for some fifty years.”

(James Joll, Europe Since 1870: An International History, Penguin Books, 1976, page 35)

These twists and turns and accidents or contingencies don’t communicate the real semi-turmoil surrounding all the decisions, which we can infer from the comment by a German politician in 1871: “We chose gold, not because gold was gold, but because Britain was Britain.” (Ian Patrick Austin, Common Foundations of American and East Asian Modernisation: From Alexander Hamilton to Junichiro Koizumi, Select Publishing, 2009, page 99.)

Professor Joll delineates the emergent primacy of England:

“The establishment of London as the most important center in the world for shipping, banking, insurance-broking and buying and selling generally, as well as the growth of British industry, had been based on a policy of free trade.”

(James Joll, Europe Since 1870: An International History, Penguin Books, 1976, page 34)

The gold standard itself, dominated from London, led to intricate problems: Golden Fetters: The Gold Standard and the Great Depression, 1919-1939 (published in 1992) by Barry Eichengreen, the leading historian of monetary systems, shows the downstream pitfalls of the gold standard.

In other words, the de facto emergence of Britain/London as the world commercial and policy center, and the relation of this emergence to empire and international tensions and rivalries, meant it was very problematic for any country to steer a course other than staying in tandem with British moods and ideologies, such as free trade. Any country by itself would find it difficult to have a more independent policy. (Friedrich List of Germany, who died in 1846, wrestles with these difficulties somewhat.) The attempts to find “autonomy and autarky” in the interwar years (Germany, Japan, Italy) led to worse nightmares. The world seems like a “no exit” arena of ideologies and rivalries.

The “crazy dynamics” and the semi-anarchy of the system, which continues to this day and is even worse, means that policy-making is always seen through a “dark windshield.”

History in the globalizing capitalist centuries, the nineteenth and the twentieth, is a kind of turbulent swirl and not a rational “walk.”

Education and the World’s Confusion

Students need to understand that the world, history, and the mood of the moment are always a “confusing swirl,” as experience shows, which implies that the present intersection of world/history/mood is also such a confusing swirl, an “opaque windshield.”

Take the example of Europe after World War I. Mussolini leaves his position at the left-wing paper Avanti! (English: “Forward”) and founds the bellicose Il Popolo d’Italia (English: “The People of Italy”), which is nationalist and warlike.

Avanti! was an Italian daily newspaper, born as the official voice of the Italian Socialist Party and published since 25 December 1896. It took its name from its German counterpart Vorwärts, the party newspaper of the Social Democratic Party of Germany. Il Popolo d’Italia was an Italian newspaper founded by Benito Mussolini in 1914, after his split from the Italian Socialist Party; it published editions every day except Mondays.

Mussolini was a complete tactical opportunist, and his profound flip-flops indicate that “the winds” of mood and opinion were capricious and somewhat blind to their own twists and turns.

Take the case of the (later) famous anti-fascist Arturo Toscanini, the great music conductor. His trajectory is non-linear and as “jumpy” as Mussolini’s, going the other way:

“From the start Fascism was an eclectic movement and in its early days in 1919 it attracted a number of people, including some, such as the great conductor Arturo Toscanini, who soon became its most determined opponents.”

(James Joll, Europe since 1870: An International History, Penguin Books, 1976, page 266)

Toscanini ran as a Fascist parliamentary candidate in Milan (1919) and this is a clue as to the tremendous disorientation in the wake of World War I.

In 1983, the outstanding Hebrew University scholar Professor Zeev Sternhell wrote Ni droite ni gauche: L’idéologie fasciste en France, which was translated into English three years later under the title Neither Right nor Left: Fascist Ideology in France. The title of this classic by Sternhell—“neither right nor left”—captures the indeterminate fusion and hodge-podge quality of modern ideologies. If they’re neither right nor left, where are they?

We could say there’s a deep pattern: World War I (yielding communism, fascism, and Nazism), then World War II (with atomic weapons and Auschwitz), and then the Cold War have all left very disorienting legacies, and since people in 2022 are legatees of these three wars, outlooks are very foggy. As the world becomes extremely confusing, people react accordingly and veer from mood to mood and opinion to opinion.

Essay 108: Early View Alert: Water Resources Research

from the American Geophysical Union’s journals:

Research Articles

Modeling the Snow Depth Variability with a High-Resolution Lidar Data Set and Nonlinear Terrain Dependency

by T. Skaugen & K. Melvold

Summary: Using airborne laser scanning, 400 million snow depth measurements at Hardangervidda in Southern Norway have been collected. The amount of data has made in-depth studies of the spatial distribution of snow and its interaction with the terrain and vegetation possible. We find that the terrain variability, expressed by the square slope, the average amount of snow, and whether the terrain is vegetated or not, largely explains the variation of snow depth. With this information it is possible to develop equations that predict snow depth variability that can be used in environmental models, which in turn are used for important tasks such as flood forecasting and hydropower planning. One major advantage is that these equations can be determined from data that are, in principle, available everywhere, provided there exists a detailed digital model of the terrain.

[Archived PDF article]

Phosphorus Transport in Intensively Managed Watersheds

by Christine L. Dolph, Evelyn Boardman, Mohammad Danesh-Yazdi, Jacques C. Finlay, Amy T. Hansen, Anna C. Baker & Brent Dalzell

Abstract: When phosphorus from farm fertilizer, eroded soil, and septic waste enters our water, it leads to problems like toxic algae blooms, fish kills, and contaminated drinking supplies. In this study, we examine how phosphorus travels through streams and rivers of farmed areas. In the past, soil lost from farm fields was considered the biggest contributor to phosphorus pollution in agricultural areas, but our study shows that phosphorus originating from fertilizer stores in the soil and from crop residue, as well as from soil eroded from sensitive ravines and bluffs, contributes strongly to the total amount of phosphorus pollution in agricultural rivers. We also found that most phosphorus leaves farmed watersheds during the very highest river flows. Increased frequency of large storms due to climate chaos will therefore likely worsen water quality in areas that are heavily loaded with phosphorus from farm fertilizers. Protecting water in agricultural watersheds will require knowledge of the local landscape along with strategies to address (1) drivers of climate chaos, (2) reduction in the highest river flows, and (3) ongoing inputs and legacy stores of phosphorus that are readily transported across land and water.

[Archived PDF of article]

Detecting the State of the Climate System via Artificial Intelligence to Improve Seasonal Forecasts and Inform Reservoir Operations

by Matteo Giuliani, Marta Zaniolo, Andrea Castelletti, Guido Davoli & Paul Block

Abstract: Increasingly variable hydrologic regimes combined with more frequent and intense extreme events are challenging water systems management worldwide. These trends emphasize the need for accurate medium- to long-term predictions to prompt anticipatory operations in a timely manner. Although in some locations global climate oscillations, particularly the El Niño-Southern Oscillation (ENSO), may contribute to extending forecast lead times, in other regions there is no consensus on how ENSO can be detected and used, as local conditions are also influenced by other concurrent climate signals. In this work, we introduce the Climate State Intelligence framework to capture the state of multiple global climate signals via artificial intelligence and improve seasonal forecasts. These forecasts are used as additional inputs for informing water system operations, and their value is quantified as the corresponding gain in system performance. We apply the framework to the Lake Como basin, a regulated lake in northern Italy mainly operated for flood control and irrigation supply. Numerical results show the existence of notable teleconnection patterns dependent on both ENSO and the North Atlantic Oscillation over the Alpine region, which contribute to generating skillful seasonal precipitation and hydrologic forecasts. The use of this information for conditioning the lake operations produces an average 44% improvement in system performance with respect to a baseline solution not informed by any forecast, and this gain further increases during extreme drought episodes. Our results also suggest that observed preseason sea surface temperature anomalies appear more valuable than hydrologic-based seasonal forecasts, producing an average 59% improvement in system performance.

[Archived PDF of article]

Landscape Water Storage and Subsurface Correlation from Satellite Surface Soil Moisture and Precipitation Observations

by Daniel J. Short Gianotti, Guido D. Salvucci, Ruzbeh Akbar, Kaighin A. McColl, Richard Cuenca & Dara Entekhabi

Abstract: Surface soil moisture measurements are typically correlated to some degree with changes in subsurface soil moisture. We calculate a hydrologic length scale, λ, which represents (1) the mean-state estimator of total column water changes from surface observations, (2) an e-folding length scale for subsurface soil moisture profile covariance fall-off, and (3) the best second-moment mass-conserving surface layer thickness for a simple bucket model, defined by the data streams of satellite soil moisture and precipitation retrievals. Calculations are simple, based on three variables: the autocorrelation and variance of surface soil moisture and the variance of the net flux into the column (precipitation minus estimated losses), which can be estimated directly from the soil moisture and precipitation time series. We develop a method to calculate the lag-one autocorrelation for irregularly observed time series and show global surface soil moisture autocorrelation. λ is driven in part by local hydroclimate conditions and is generally larger than the 50-mm nominal radiometric length scale for the soil moisture retrievals, suggesting broad subsurface correlation due to moisture drainage. In all but the most arid regions, radiometric soil moisture retrievals provide more information about ecosystem-relevant water fluxes than satellite radiometers can explicitly “see”; lower-frequency radiometers are expected to provide still more statistical information about subsurface water dynamics.

[Archived PDF of article]
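
The summary above mentions a method for computing the lag-one autocorrelation of irregularly observed time series but does not spell it out. Purely as an illustration (this is not the authors’ estimator), one naive approach is to correlate consecutive observations whose spacing is close to the nominal revisit interval; the function and tolerance below are assumptions:

```python
# Illustrative only: lag-1 autocorrelation from irregular sampling by keeping
# consecutive observation pairs whose time gap is within tol of nominal_dt (days).
import numpy as np

def lag_one_autocorr_irregular(times, values, nominal_dt, tol=0.25):
    times, values = np.asarray(times, float), np.asarray(values, float)
    pairs = [(values[j - 1], values[j])
             for j in range(1, len(times))
             if abs((times[j] - times[j - 1]) - nominal_dt) <= tol * nominal_dt]
    if len(pairs) < 3:
        return float("nan")          # too few near-nominal gaps to estimate
    a, b = np.array(pairs).T
    return float(np.corrcoef(a, b)[0, 1])
```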

Process-Guided Deep Learning Predictions of Lake Water Temperature

by Jordan S. Read, Xiaowei Jia, Jared Willard, Alison P. Appling, Jacob A. Zwart, Samantha K. Oliver, Anuj Karpatne, Gretchen J. A. Hansen, Paul C. Hanson, William Watkins, Michael Steinbach & Vipin Kumar

Abstract: The rapid growth of data in water resources has created new opportunities to accelerate knowledge discovery with the use of advanced deep learning tools. Hybrid models that integrate theory with state-of-the-art empirical techniques have the potential to improve predictions while remaining true to physical laws. This paper evaluates the Process-Guided Deep Learning (PGDL) hybrid modeling framework with a use-case of predicting depth-specific lake water temperatures. The PGDL model has three primary components: a deep learning model with temporal awareness (long short-term memory recurrence), theory-based feedback (model penalties for violating conservation of energy), and model pre-training to initialize the network with synthetic data (water temperature predictions from a process-based model). In situ water temperatures were used to train the PGDL model, a deep learning (DL) model, and a process-based (PB) model. Model performance was evaluated in various conditions, including when training data were sparse and when predictions were made outside of the range in the training data set. The PGDL model performance (as measured by root-mean-square error (RMSE)) was superior to DL and PB for two detailed study lakes, but only when pretraining data included greater variability than the training period. The PGDL model also performed well when extended to 68 lakes, with a median RMSE of 1.65 °C during the test period (DL: 1.78 °C, PB: 2.03 °C; in a small number of lakes PB or DL models were more accurate). This case-study demonstrates that integrating scientific knowledge into deep learning tools shows promise for improving predictions of many important environmental variables.

[Archived PDF of article]
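
As a rough sketch of the “process-guided” idea described above (not the paper’s actual loss or energy-balance terms), a hybrid training loss can combine prediction error on observed temperatures with a penalty on a physics residual; the function names and the weight lambda_phys are illustrative assumptions:

```python
# Illustrative only: prediction error plus a physics-based penalty term.
# `energy_residual` stands in for however the energy-balance violation is computed;
# it should be zero when the predictions conserve energy.
import torch

def pgdl_loss(pred_temps, obs_temps, energy_residual, lambda_phys=0.1):
    rmse = torch.sqrt(torch.mean((pred_temps - obs_temps) ** 2))
    physics_penalty = torch.mean(energy_residual ** 2)
    return rmse + lambda_phys * physics_penalty
```

In the paper itself, the network is additionally pre-trained on output from a process-based model before being trained on in situ observations.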

Adjustment of Radar-Gauge Rainfall Discrepancy Due to Raindrop Drift and Evaporation Using the Weather Research and Forecasting Model and Dual-Polarization Radar

by Qiang Dai, Qiqi Yang, Dawei Han, Miguel A. Rico-Ramirez & Shuliang Zhang

Abstract: Radar-gauge rainfall discrepancies are usually attributed to errors in the radar rainfall measurements, ignoring the fact that radar observes rain aloft while a rain gauge measures rainfall on the ground. Radar rainfall estimates assume that raindrops observed aloft fall vertically to the ground without changing in size. This premise obviously does not hold, because raindrop location changes due to wind drift and raindrop size changes due to evaporation; however, both effects are usually ignored. This study proposes a fully formulated scheme to numerically simulate both raindrop drift and evaporation in the air and reduce the uncertainties of radar rainfall estimation. The Weather Research and Forecasting model is used to simulate high-resolution three-dimensional atmospheric fields. A dual-polarization radar retrieves the raindrop size distribution for each radar pixel. Three schemes are designed and implemented using the Hameldon Hill radar in Lancashire, England. The first considers only raindrop drift, the second considers only evaporation, and the last considers both aspects. Results show that wind advection can cause a large drift for small raindrops. Considerable loss of rainfall is observed due to raindrop evaporation. Overall, the three schemes improve the radar-gauge correlation by 3.2%, 2.9%, and 3.8% and reduce their discrepancy by 17.9%, 8.6%, and 21.7%, respectively, over eight selected events. This study contributes to the improvement of quantitative precipitation estimation from radar polarimetry and allows a better understanding of precipitation processes.

[Archived PDF of article]

The Role of Collapsed Bank Soil on Tidal Channel Evolution: A Process-Based Model Involving Bank Collapse and Sediment Dynamics

by K. Zhao, Z. Gong, F. Xu, Z. Zhou, C. K. Zhang, G. M. E. Perillo & G. Coco

Abstract: We develop a process-based model to simulate the geomorphodynamic evolution of tidal channels, considering hydrodynamics, flow-induced bank erosion, gravity-induced bank collapse, and sediment dynamics. A stress-deformation analysis and the Mohr-Coulomb criterion, calibrated through previous laboratory experiments, are included in a model simulating bank collapse. Results show that collapsed bank soil plays a primary role in the dynamics of bank retreat. For bank collapse with small bank height, tensile failure in the middle of the bank (Stage I), tensile failure on the bank top (Stage II), and sectional cracking from bank top to the toe (Stage III) are present sequentially before bank collapse occurs. A significant linear relation is observed between bank height and the contribution of bank collapse to bank retreat. Contrary to flow-induced bank erosion, bank collapse prevents further widening since the collapsed bank soil protects the bank from direct bank erosion. The bank profile is linear or slightly convex, and the planimetric shape of tidal channels (gradually decreasing in width landward) is similar when approaching equilibrium, regardless of the consideration of bank erosion and collapse. Moreover, the simulated width-to-depth ratio in all runs is comparable with observations from the Venice Lagoon. This indicates that the equilibrium configuration of tidal channels depends on hydrodynamic conditions and sediment properties, while bank erosion and collapse greatly affect the transient behavior (before equilibrium) of the tidal channels. Overall, this contribution highlights the importance of collapsed bank soil in investigating tidal channel morphodynamics using a combined perspective of geotechnics and soil mechanics.

[Archived PDF of article]
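
For reference, the Mohr-Coulomb failure criterion invoked above has the standard textbook form, relating the shear strength on a plane to the normal stress through the cohesion c and friction angle φ (the paper’s calibrated parameter values are not reproduced here):

```latex
% Mohr-Coulomb criterion: shear strength \tau_f, normal stress \sigma_n,
% cohesion c, internal friction angle \varphi (textbook form).
\tau_f = c + \sigma_n \tan\varphi
```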

A Physically Based Method for Soil Evaporation Estimation by Revisiting the Soil Drying Process

by Yunquan Wang, Oliver Merlin, Gaofeng Zhu & Kun Zhang

Abstract: While numerous models exist for soil evaporation estimation, they are more or less empirically based, either in the model structure or in the determination of introduced parameters. The main difficulty lies in representing the water stress factor, which is usually thought to be limited by capillarity-supported water supply or by vapor diffusion flux. Recent progress in understanding soil hydraulic properties, however, has found that film flow, which is often neglected, is the dominant process under low moisture conditions. By including the impact of film flow, a reexamination of the typical evaporation process found that this usually neglected film flow might be the dominant process supporting Stage II evaporation (i.e., the fast falling rate stage), besides the generally accepted capillary flow-supported Stage I evaporation and the vapor diffusion-controlled Stage III evaporation. A physically based model for estimating the evaporation rate was then developed by parameterizing the Buckingham-Darcy law. Interestingly, the empirical Bucket model was found to be a specific form of the proposed model. The proposed model requires the in-equilibrium relative humidity as the sole input for representing water stress and introduces no adjustable parameter in relation to soil texture. The impact of vapor diffusion was also discussed. Model testing with laboratory data yielded an excellent agreement with observations for both thin soil and thick soil column evaporation experiments. Model evaluation at 15 field sites generally showed a close agreement with observations, with a great improvement in the lower range of evaporation rates in comparison with the widely applied Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model.

[Archived PDF of article]
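
For reference, the Buckingham-Darcy law that the model parameterizes has the standard form for vertical unsaturated flow, with water flux q, unsaturated hydraulic conductivity K(θ), matric potential head ψ, and elevation z taken positive upward (the paper’s specific parameterization is not reproduced here):

```latex
% Buckingham-Darcy law for vertical unsaturated flow (textbook form).
q = -K(\theta)\left(\frac{\partial \psi}{\partial z} + 1\right)
```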

Floodplain Land Cover and Flow Hydrodynamic Control of Overbank Sedimentation in Compound Channel Flows

by Carmelo Juez, C. Schärer, H. Jenny, A. J. Schleiss & M. J. Franca

Abstract: Overbank sedimentation is predominantly due to fine sediments transported under suspension that become trapped and settle in floodplains when high-flow conditions occur in rivers. In a compound channel, the processes of exchanging water and fine sediments between the main channel and floodplains regulate the geomorphological evolution and are crucial for the maintenance of the ecosystem functions of the floodplains. These hydrodynamic and morphodynamic processes depend on variables such as the flow-depth ratio between the water depth in the main channel and the water depth in the floodplain, the width ratio between the width of the main channel and the width of the floodplain, and the floodplain land cover characterized by the type of roughness. This paper examines, by means of laboratory experiments, how these variables are interlinked and how the deposition of sediments in the compound channel is jointly determined by them. The combination of these compound channel characteristics modulates the production of large turbulent vortical structures with vertical axes in the mixing interface. Such vortical structures determine the water mass exchange between the main channel and the floodplain, conditioning in turn the transport of sediment particles conveyed in the water, and, therefore, the resulting overbank sedimentation. The existence and pattern of sedimentation are conditioned by both the hydrodynamic variables (the flow-depth ratio and the width ratio) and the floodplain land cover, simulated in terms of smooth walls, meadow-type roughness, sparse-wood-type roughness, and dense-wood-type roughness.

[Archived PDF of article]

Identifying Actionable Compromises: Navigating Multi-city Robustness Conflicts to Discover Cooperative Safe Operating Spaces for Regional Water Supply Portfolios

by D. F. Gold, P. M. Reed, B. C. Trindade & G. W. Characklis

Summary: Cooperation among neighboring urban water utilities can help water managers face challenges stemming from climate change and population growth. Water utilities can cooperate by coordinating water transfers and water restrictions in times of water scarcity (drought) so that water is provided to areas that need it most. In order to successfully implement these policies, however, cooperative partners must find a compromise that is acceptable to all regional actors, a task complicated by asymmetries in resources and risks often present in regional systems. The possibility of deviations from agreed-upon actions is another complicating factor that has not been addressed in the water resources literature. Our study focuses on four urban water utilities in the Research Triangle region of North Carolina that are investigating cooperative drought mitigation strategies. We contribute a framework that includes the use of simulation models, optimization algorithms, and statistical tools to aid cooperating partners in finding acceptable compromises that are tolerant of modest deviations from planned actions. Our results can be used by regional utilities to avoid or alleviate potential planning conflicts and are broadly applicable to urban regional water supply planning across the globe.

[Archived PDF of article]

Detecting Changes in River Flow Caused by Wildfires, Storms, Urbanization, Regulation, and Climate across Sweden

by Berit Arheimer & Göran Lindström

Abstract: Changes in river flow may result from shifts in land cover, constructions in the river channel, and climatic change, but there is currently a lack of understanding of the relative importance of these drivers. Therefore, we collected gauged river flow time series from 1961 to 2018 from across Sweden for 34 disturbed catchments to quantify how the various types of disturbances have affected river flow. We used trend analysis and the differences between observations and hydrological modeling to explore the effects on river flow from (1) land cover changes from wildfires, storms, and urbanization; (2) dam constructions with regulations for hydropower production; and (3) climate-change impact in otherwise undisturbed catchments. A mini model ensemble, consisting of three versions of the S-HYPE model, was used, and the three models gave similar results. We searched for changes in annual and daily stream flow, seasonal flow regime, and flow duration curves. The results show that regulation of river flow has the largest impact, reducing spring floods by up to 100% and increasing winter flow by several orders of magnitude, with substantial effects transmitted far downstream. Climate change altered total river flow by up to 20%. Tree removal by wildfires and storms has minor impacts at medium and large scales. Urbanization, by contrast, increased high flows by 20%, even at medium scales. This study emphasizes the benefits of combining observed time series with numerical modeling to exclude the effect of varying weather conditions when quantifying the effects of various drivers on long-term streamflow shifts.
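
As a concrete illustration of the two diagnostics described above, the hedged Python sketch below pairs a nonparametric trend test on annual flows with the observed-minus-simulated residual that removes weather-driven variability. It is not the S-HYPE workflow; the function names and example series are hypothetical.

```python
# Sketch of the two diagnostics in the abstract: a Theil-Sen/Kendall trend
# test on annual flows, and the observed-minus-simulated residual that
# isolates non-climatic drivers. Not the S-HYPE model; values are made up.
import numpy as np
from scipy import stats

def annual_trend(years, annual_flow):
    """Theil-Sen slope and a Kendall-tau p-value for the annual flow series."""
    slope, intercept, lo, hi = stats.theilslopes(annual_flow, years)
    tau, p_value = stats.kendalltau(years, annual_flow)
    return slope, p_value

def driver_signal(obs, sim_undisturbed):
    """Observed minus simulated 'undisturbed' flow: what remains after the
    model has explained the weather-driven part of the record."""
    return np.asarray(obs, float) - np.asarray(sim_undisturbed, float)

# Hypothetical usage with synthetic annual series (m3/s):
years = np.arange(1961, 2019)
q_obs = 10 + 0.02 * (years - 1961) + np.random.normal(0, 1, years.size)
q_sim = 10 + np.random.normal(0, 1, years.size)
print(annual_trend(years, q_obs))
print(driver_signal(q_obs, q_sim).mean())
```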

[Archived PDF of article]

Assessing the Feasibility of Satellite-Based Thresholds for Hydrologically Driven Landsliding

by Matthew A. Thomas, Brian D. Collins & Benjamin B. Mirus

Summary: Soil wetness and rainfall contribute to landslides across the world. Using soil moisture sensors and rain gauges, these environmental conditions have been monitored at numerous points across the Earth’s surface to define threshold conditions, above which landsliding should be expected for a localized area. Satellite-based technologies also deliver estimates of soil wetness and rainfall, potentially offering an approach to develop thresholds as part of landslide warning systems over larger spatial scales. To evaluate the potential for using satellite-based measurements for landslide warning, we compare the accuracy of landslide thresholds defined with ground- versus satellite-based soil wetness and rainfall information. We find that the satellite-based data over-predict soil wetness during the time of year when landslides are most likely to occur, resulting in thresholds that also over-predict the potential for landslides relative to thresholds informed by direct measurements on the ground. Our results encourage the installation of more ground-based monitoring stations in landslide-prone settings and the cautious use of satellite-based data when more direct measurements are not available.
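
To make the threshold comparison concrete, here is a hedged Python sketch of a simple two-variable (soil wetness, rainfall) threshold and the hit/false-alarm bookkeeping used to compare ground-based and satellite-based versions of it; the critical values and function names are illustrative, not the authors'.

```python
# Illustrative two-variable landslide threshold and a simple skill check for
# comparing ground-based and satellite-based inputs. All numbers hypothetical.
import numpy as np

def exceeds_threshold(wetness, rainfall, wetness_crit=0.9, rain_crit=20.0):
    """True where saturation (0-1) and daily rainfall (mm) both exceed
    their critical values, i.e., conditions above the landslide threshold."""
    return (np.asarray(wetness) >= wetness_crit) & (np.asarray(rainfall) >= rain_crit)

def skill(predicted, observed_landslide):
    """Hit rate and false-alarm ratio against a landslide inventory."""
    predicted = np.asarray(predicted, bool)
    observed = np.asarray(observed_landslide, bool)
    hits = np.sum(predicted & observed)
    misses = np.sum(~predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    hit_rate = hits / max(hits + misses, 1)
    false_alarm_ratio = false_alarms / max(false_alarms + hits, 1)
    return hit_rate, false_alarm_ratio
```

An over-wet satellite product pushes more days above the threshold, raising the false-alarm ratio in exactly the way the summary describes.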

[Archived PDF of article]

Modeling the Translocation and Transformation of Chemicals in the Soil-Plant Continuum: A Dynamic Plant Uptake Module for the HYDRUS Model

by Giuseppe Brunetti, Radka Kodešová & Jiří Šimůnek

Abstract: Food contamination is responsible for thousands of deaths worldwide every year. Plants represent the most common pathway for chemicals into the human and animal food chain. Although existing dynamic plant uptake models for chemicals are crucial for the development of reliable mitigation strategies for food pollution, they nevertheless simplify the description of physicochemical processes in soil and plants, mass transfer processes between soil and plants and within plants, and transformation in plants. To fill this scientific gap, we couple a widely used hydrological model (HYDRUS) with a multi-compartment dynamic plant uptake model, which accounts for differentiated multiple metabolization pathways in plant tissues. The developed model is validated first theoretically and then experimentally against measured data from an experiment on the translocation and transformation of carbamazepine in three vegetables. The analysis is further enriched by performing a global sensitivity analysis on the soil-plant model to identify the factors driving the compound’s accumulation in plant shoots, as well as to elucidate the role and importance of soil hydraulic properties in the plant uptake process. Results of the multilevel numerical analysis emphasize the model’s flexibility and demonstrate its ability to accurately reproduce the physicochemical processes involved in the dynamic plant uptake of chemicals from contaminated soils.
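
The multi-compartment idea can be sketched with a toy first-order system: soil solution, root, and shoot compartments with uptake, translocation, and tissue-specific metabolization. This is not the HYDRUS module itself, and all rate constants below are hypothetical.

```python
# Toy compartmental plant-uptake model: first-order transfer from the soil
# solution to roots and shoots, with a separate metabolization (degradation)
# rate in each tissue. Illustrative only; rate constants are hypothetical.
from scipy.integrate import solve_ivp

def plant_uptake(t, y, k_up=0.05, k_rs=0.02, k_met_root=0.01, k_met_shoot=0.03):
    c_soil, c_root, c_shoot = y
    uptake = k_up * c_soil        # soil solution -> root
    transloc = k_rs * c_root      # root -> shoot (transpiration stream)
    dc_soil = -uptake
    dc_root = uptake - transloc - k_met_root * c_root
    dc_shoot = transloc - k_met_shoot * c_shoot
    return [dc_soil, dc_root, dc_shoot]

sol = solve_ivp(plant_uptake, (0, 120), [1.0, 0.0, 0.0])
print(sol.y[:, -1])  # concentrations after 120 days (arbitrary units)
```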

[Archived PDF of article]

Physical Controls on Salmon Redd Site Selection in Restored Reaches of a Regulated, Gravel-Bed River

by Lee R. Harrison, Erin Bray, Brandon Overstreet, Carl J. Legleiter, Rocko A. Brown, Joseph E. Merz, Rosealea M. Bond, Colin L. Nicol & Thomas Dunne

Abstract: Large-scale river restoration programs have emerged recently as a tool for improving spawning habitat for native salmonids in highly altered river ecosystems. Few studies have quantified the extent to which restored habitat is utilized by salmonids, which habitat features influence redd site selection, or the persistence of restored habitat over time. We investigated fall-run Chinook salmon spawning site utilization and measured and modeled corresponding habitat characteristics in two restored reaches: a reach of channel and floodplain enhancement completed in 2013 and a reconfigured channel and floodplain constructed in 2002. Redd surveys demonstrated that both restoration projects supported a high density of salmon redds, 3 and 14 years following restoration. Salmon redds were constructed in coarse gravel substrates located in areas of high sediment mobility, as determined by measurements of gravel friction angles and a grain entrainment model. Salmon redds were located near transitions between pool-riffle bedforms in regions of high predicted hyporheic flows. Habitat quality (quantified as a function of stream hydraulics) and hyporheic flow were both strong predictors of redd occurrence, though the relative roles of these variables differed between sites. Our findings indicate that physical controls on redd site selection in restored channels were similar to those reported for natural channels elsewhere. Our results further highlight that in addition to traditional habitat criteria (e.g., water depth, velocity, and substrate size), quantifying sediment texture and mobility, as well as intragravel flow, provides a more complete understanding of the ecological benefits provided by river restoration projects.
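
The "grain entrainment model" referred to above is, in generic form, a Shields-type criterion; the sketch below shows only that generic form and is not necessarily the authors' specific formulation.

```latex
% Generic Shields-type entrainment criterion (illustrative, not necessarily
% the entrainment model used in the paper).
\begin{align}
  \tau &= \rho g h S, \qquad
  \tau^* = \frac{\tau}{(\rho_s - \rho)\, g\, D_{50}}, \\
  &\text{entrainment when } \tau^* > \tau^*_c,
  \text{ with } \tau^*_c \text{ increasing with the measured grain friction angle } \phi.
\end{align}
% tau        : bed shear stress for flow depth h and slope S
% tau^*      : dimensionless (Shields) stress for median grain size D_50
% rho_s, rho : sediment and water densities
```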

[Archived PDF of article]

Mountain-Block Recharge: A Review of Current Understanding

by Katherine H. Markovich, Andrew H. Manning, Laura E. Condon & Jennifer C. McIntosh

Abstract: Mountain-block recharge (MBR) is the subsurface inflow of groundwater to lowland aquifers from adjacent mountains. MBR can be a major component of recharge but remains difficult to characterize and quantify due to limited hydrogeologic, climatic, and other data in the mountain block and at the mountain front. The number of MBR-related studies has increased dramatically in the 15 years since the last review of the topic was conducted by Wilson and Guan (2004), generating important advancements. We review this recent body of literature, summarize current understanding of factors controlling MBR, and provide recommendations for future research priorities. Prior to 2004, most MBR studies were performed in the southwestern United States. Since then, numerous studies have detected and quantified MBR in basins around the world, typically estimating MBR to be 5–50% of basin-fill aquifer recharge. Theoretical studies using generic numerical modeling domains have revealed fundamental hydrogeologic and topographic controls on the amount of MBR and where it originates within the mountain block. Several mountain-focused hydrogeologic studies have confirmed the widespread existence of mountain bedrock aquifers hosting considerable groundwater flow and, in some cases, identified the occurrence of interbasin flow leaving headwater catchments in the subsurface—both of which are required for MBR to occur. Future MBR research should focus on the collection of high-priority data (e.g., subsurface data near the mountain front and within the mountain block) and the development of sophisticated coupled models calibrated to multiple data types to best constrain MBR and predict how it may change in response to climate warming.

[Archived PDF of article]

An Adjoint Sensitivity Model for Steady-State Sequentially Coupled Radionuclide Transport in Porous Media

by Mohamed Hayek, Banda S. RamaRao & Marsh Lavenue

Abstract: This work presents an efficient mathematical/numerical model to compute the sensitivity coefficients of a predefined performance measure to model parameters for one-dimensional steady-state sequentially coupled radionuclide transport in a finite heterogeneous porous medium. The model is based on the adjoint sensitivity approach, which offers an elegant and computationally efficient alternative way to compute the sensitivity coefficients. The transport parameters include the radionuclide retardation factors due to sorption, the Darcy velocity, and the effective diffusion/dispersion coefficients. Both continuous and discrete adjoint approaches are considered. The partial differential equations associated with the adjoint system are derived based on the adjoint state theory for coupled problems. Physical interpretations of the adjoint states are given in analogy to results obtained in the theory of groundwater flow. For the homogeneous case, analytical solutions for the primary and adjoint systems are derived and presented in closed form. Numerically calculated solutions are compared to the analytical results and show excellent agreement. Insights from the sensitivity analysis are discussed to provide a better understanding of the values of the sensitivity coefficients. The sensitivity coefficients are also computed numerically by finite differences. The numerical sensitivity coefficients successfully reproduce the analytically derived sensitivities based on adjoint states. A derivative-based global sensitivity method coupled with the adjoint state method is presented and applied to a real field case, a site currently being considered for underground nuclear storage in northern Switzerland (“Zürich Nordost”), to demonstrate the proposed method. The results show the advantage of the adjoint state method compared to other methods in terms of computational effort.
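
For readers unfamiliar with the adjoint-state idea, a generic and heavily simplified sketch for a 1-D steady decay chain is given below; the notation is illustrative, and the paper's exact formulation, boundary terms, and discrete variant are not reproduced.

```latex
% Forward problem for chain member i (retardation R_i, decay constant lambda_i):
\begin{align}
  v\frac{dC_i}{dx} - D\frac{d^2 C_i}{dx^2} + \lambda_i R_i C_i
      &= \lambda_{i-1} R_{i-1} C_{i-1}, \\
% Adjoint problem for a performance measure P = \int_0^L h(C, x)\,dx
% (advection reversed, chain coupling transposed):
  -v\frac{d\psi_i}{dx} - D\frac{d^2 \psi_i}{dx^2} + \lambda_i R_i \psi_i
      &= \frac{\partial h}{\partial C_i} + \lambda_i R_i \psi_{i+1}, \\
% One adjoint solve then yields sensitivities to any parameter, e.g.
  \frac{\partial P}{\partial v}
      &= -\sum_i \int_0^L \psi_i\,\frac{dC_i}{dx}\,dx .
\end{align}
```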

[Archived PDF of article]

Hydraulic Reconstruction of the 1818 Giétro Glacial Lake Outburst Flood

by C. Ancey, E. Bardou, M. Funk, M. Huss, M. A. Werder & T. Trewhela

Summary: Every year, natural and man-made dams fail and cause flooding. For public authorities, estimating the risk posed by dams is essential to good risk management. Efficient computational tools are required for analyzing flood risk. Testing these tools is an important step toward ensuring their reliability and performance. Knowledge of major historical floods makes it possible, in principle, to benchmark models, but because historical data are often incomplete and fraught with potential inaccuracies, validation is seldom satisfactory. Here we present one of the few major historical floods for which information on flood initiation and propagation is available and detailed: the Giétro flood. This flood occurred in June 1818 and devastated the Drance Valley in Switzerland. In the spring of that year, ice avalanches blocked the valley floor and formed a glacial lake, whose volume is today estimated at 25 × 10⁶ m³. The local authorities initiated protection works: A tunnel was drilled through the ice dam, and about half of the stored water volume was drained in 2.5 days. On 16 June 1818, the dam failed suddenly because of significant erosion at its base; this caused a major flood. This paper presents a numerical model for estimating flow rates, velocities, and depths during the dam drainage and flood flow phases. The numerical results agree well with historical data. The flood reconstruction shows that relatively simple models can be used to estimate the effects of a major flood with good accuracy.

[Archived PDF of article]

The Representation of Hydrological Dynamical Systems Using Extended Petri Nets (EPN)

by Marialaura Bancheri, Francesco Serafin & Riccardo Rigon

Abstract: This work presents a new graphical system to represent hydrological dynamical models and their interactions. We propose an extended version of the Petri Nets mathematical modeling language, the Extended Petri Nets (EPN), which allows for a direct and unambiguous translation from the graphical form of a model to its mathematical representation. We introduce the principal objects of the EPN representation (i.e., places, transitions, arcs, controllers, and splitters) and their use in hydrological systems. We show how to cast hydrological models in EPN and how to complete their mathematical description using a dictionary for the symbols and an expression table for the flux equations. Thanks to the compositional property of EPN, we show how it is possible to represent either a single hydrological response unit or a complex catchment where multiple systems of equations are solved simultaneously. Finally, EPN can be used to describe complex Earth system models that include feedback between the water, energy, and carbon budgets. The representation of hydrological dynamical systems with EPN provides a clear visualization of the relations and feedback between subsystems, which can be studied with techniques introduced in nonlinear systems theory and control theory.
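
A minimal sketch of the EPN reading of a model is given below: places are storages, transitions are fluxes, and the net of a place's incoming and outgoing arcs is its budget equation. The class and flux names are hypothetical, the controller and splitter objects are omitted, and this is not the authors' implementation.

```python
# Places = storages, transitions = fluxes; the net of each place's arcs gives
# its budget dS/dt. Hypothetical names; controllers/splitters omitted.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class ExtendedPetriNet:
    places: List[str] = field(default_factory=list)
    # each transition: (from_place, to_place, flux(storages) -> value)
    transitions: List[Tuple[str, str, Callable[[Dict[str, float]], float]]] = field(default_factory=list)

    def rhs(self, storages: Dict[str, float]) -> Dict[str, float]:
        """Budget dS/dt for every place: incoming minus outgoing fluxes."""
        dS = {p: 0.0 for p in self.places}
        for src, dst, flux in self.transitions:
            q = flux(storages)
            if src in dS:
                dS[src] -= q
            if dst in dS:
                dS[dst] += q
        return dS

# Hypothetical single hydrologic response unit: rain -> canopy -> soil -> outlet
net = ExtendedPetriNet(
    places=["canopy", "soil"],
    transitions=[
        ("rain", "canopy", lambda S: 2.0),                 # forcing (mm/d)
        ("canopy", "soil", lambda S: 0.5 * S["canopy"]),   # throughfall
        ("soil", "outlet", lambda S: 0.1 * S["soil"]),     # linear reservoir
    ],
)
print(net.rhs({"canopy": 1.0, "soil": 10.0}))
```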

[Archived PDF of article]

A Regularization Approach to Improve the Sequential Calibration of a Semidistributed Hydrological Model

by A. de Lavenne, V. Andréassian, G. Thirel, M.-H. Ramos & C. Perrin

Abstract: In semidistributed hydrological modeling, sequential calibration usually refers to the calibration of a model by considering not only the flows observed at the outlet of a catchment but also the different gauging points inside the catchment, from upstream to downstream. While sequential calibration aims to optimize the performance at these interior gauged points, we show that it generally fails to improve performance at ungauged points. In this paper, we propose a regularization approach for the sequential calibration of semidistributed hydrological models. It consists of adding a priori information on optimal parameter sets for each modeling unit of the semidistributed model. Calibration iterations are then performed by jointly maximizing simulation performance and minimizing drifts from the a priori parameter sets. The combination of these two sources of information is handled by a parameter k, to which the method is quite sensitive. The method is applied to 1,305 catchments in France over 30 years. The leave-one-out validation shows that, at locations considered as ungauged, model simulations are significantly improved (over all the catchments, the median KGE criterion is increased from 0.75 to 0.83 and the first quartile from 0.35 to 0.66), while model performance at gauged points is not significantly affected by the use of the regularization approach. Small catchments benefit most from this calibration strategy. This performance is, however, very similar to that obtained with a lumped model based on a similar conceptualization.
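
The calibration criterion described above can be written compactly; the hedged Python sketch below shows its general shape (performance minus a weighted drift from the prior parameter set). The KGE formula is the standard 2009 formulation; the simulate function, the distance measure, and the value of k are placeholders rather than the paper's exact choices.

```python
# Regularized calibration objective: maximize performance while penalizing
# drift from an a priori parameter set, weighted by k. Illustrative only.
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (2009 formulation)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def regularized_objective(theta, theta_prior, simulate, obs, k=0.5):
    """Performance minus k times the distance to the a priori parameter set."""
    perf = kge(simulate(theta), obs)
    drift = np.linalg.norm(np.asarray(theta, float) - np.asarray(theta_prior, float))
    return perf - k * drift  # to be maximized by the calibration algorithm
```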

[Archived PDF of article]

Proneness of European Catchments to Multiyear Streamflow Droughts

by Manuela I. Brunner & Lena M. Tallaksen

Summary: Droughts lasting longer than 1 year can have severe ecological, social, and economic impacts. They are characterized by below-average flows, not only during the low-flow period but also in the high-flow period when water stores such as groundwater or artificial reservoirs are usually replenished. Limited catchment storage might worsen the impacts of droughts and make water management more challenging. Knowledge on the occurrence of multiyear drought events enables better adaptation and increases preparedness. In this study, we assess the proneness of European catchments to multiyear droughts by simulating long discharge records. Our findings show that multiyear drought events mainly occur in regions where the discharge seasonality is mostly influenced by rainfall, whereas catchments whose seasonality is dominated by melt processes are less affected. The strong link between the proneness of a catchment to multiyear events and its discharge seasonality leads to the conclusion that future changes toward less snow storage and thus less snow melt will increase the probability of multiyear drought occurrence.

[Archived PDF of article]

Equifinality and Flux Mapping: A New Approach to Model Evaluation and Process Representation under Uncertainty

by Sina Khatami, Murray C. Peel, Tim J. Peterson & Andrew W. Western

Abstract: Uncertainty analysis is an integral part of any scientific modeling, particularly within the domain of hydrological sciences given the various types and sources of uncertainty. At the center of uncertainty rests the concept of equifinality, that is, reaching a given endpoint (finality) through different pathways. The operational definition of equifinality in hydrological modeling is that various model structures and/or parameter sets (i.e., equal pathways) are equally capable of reproducing a similar (not necessarily identical) hydrological outcome (i.e., finality). Here we argue that there is more to model equifinality than model structures/parameters, that is, other model components can give rise to model equifinality and/or could be used to explore equifinality within model space. We identified six facets of model equifinality, namely, model structure, parameters, performance metrics, initial and boundary conditions, inputs, and internal fluxes. Focusing on model internal fluxes, we developed a methodology called flux mapping that has fundamental implications in understanding and evaluating model process representation within the paradigm of multiple working hypotheses. To illustrate this, we examine the equifinality of runoff fluxes of a conceptual rainfall-runoff model for a number of different Australian catchments. We demonstrate how flux maps can give new insights into the model behavior that cannot be captured by conventional model evaluation methods. We discuss the advantages of flux space, as a subspace of the model space not usually examined, over parameter space. We further discuss the utility of flux mapping in hypothesis generation and testing, extendable to any field of scientific modeling of open complex systems under uncertainty.
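
A schematic of the flux-mapping step may help: parameter sets that are equifinal in streamflow performance can still partition runoff very differently, and mapping the internal fluxes exposes that spread. The function names, the behavioural tolerance, and the two-flux partition below are illustrative, not the paper's protocol.

```python
# Flux mapping, schematically: keep the near-best ("behavioural") parameter
# sets and record how each one partitions runoff internally. Illustrative only.
import numpy as np

def flux_map(parameter_sets, run_model, performance, obs, tol=0.05):
    """run_model(theta) -> dict with 'streamflow', 'surface_runoff', 'baseflow';
    performance(sim, obs) -> scalar score (higher is better)."""
    results = [(theta, run_model(theta)) for theta in parameter_sets]
    scores = [performance(r["streamflow"], obs) for _, r in results]
    best = max(scores)
    partitions = []
    for (theta, r), score in zip(results, scores):
        if score >= best - tol:  # equifinal in performance
            total = np.sum(r["surface_runoff"]) + np.sum(r["baseflow"])
            partitions.append((theta, np.sum(r["surface_runoff"]) / total))
    return partitions  # fraction of runoff generated as surface flow, per set
```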

[Archived PDF of article]

Role of Extreme Precipitation and Initial Hydrologic Conditions on Floods in Godavari River Basin, India

by Shailesh Garg & Vimal Mishra

Abstract: Floods are the most frequent natural calamity in India. The Godavari river basin (GRB) has witnessed several floods in the past 50 years. Notwithstanding the large damage and economic loss, the role of extreme precipitation and antecedent moisture conditions on floods in the GRB remains unexplored. Using observations and the well-calibrated Variable Infiltration Capacity model, we estimate the changes in extreme precipitation and floods in the observed (1955–2016) and projected future (2071–2100) climate in the GRB. We evaluate the role of initial hydrologic conditions and extreme precipitation on floods in both the observed and projected future climate. We find a statistically significant increase in annual maximum precipitation for the catchments upstream of four gage stations during the 1955–2016 period. However, the rise in annual maximum streamflow at the four gage stations in the GRB was not statistically significant. The probability of floods driven by extreme precipitation (PFEP) varies between 0.55 and 0.7 at the four gage stations of the GRB and declines with basin size. More than 80% of the extreme precipitation events that cause floods occur under wet antecedent moisture conditions at all four locations in the GRB. The frequency of extreme precipitation events is projected to rise twofold or more (under RCP 8.5) in the future (2071–2100) at all four locations. However, the increased frequency of floods under the future climate will largely be driven by the substantial rise in extreme precipitation events rather than by wet antecedent moisture conditions.
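
The PFEP diagnostic named above can be illustrated with a few lines of bookkeeping: count the floods that coincide with, or closely follow, an extreme precipitation event. The attribution window and the event definitions are placeholders, not the paper's exact criteria.

```python
# Back-of-the-envelope PFEP: the fraction of floods preceded (within a short
# window) by an extreme precipitation event. Window length is illustrative.
import numpy as np

def pfep(flood_dates, extreme_precip_dates, window_days=3):
    """Probability that a flood is driven by extreme precipitation."""
    floods = np.asarray(flood_dates, dtype="datetime64[D]")
    extremes = np.asarray(extreme_precip_dates, dtype="datetime64[D]")
    driven = 0
    for d in floods:
        lag = (d - extremes).astype(int)  # days since each extreme event
        if np.any((lag >= 0) & (lag <= window_days)):
            driven += 1
    return driven / max(len(floods), 1)
```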

[Archived PDF of article]

Research Letters

Combined Effect of Tides and Varying Inland Groundwater Input on Flow and Salinity Distribution in Unconfined Coastal Aquifers

by Woei Keong Kuan, Pei Xin, Guangqiu Jin, Clare E. Robinson, Badin Gibbes & Ling Li

Abstract: Tides and seasonally varying inland freshwater input, with different fluctuation periods, are important factors affecting flow and salt transport in coastal unconfined aquifers. These processes affect submarine groundwater discharge (SGD) and associated chemical transport to the sea. While the individual effects of these forcings have previously been studied, here we conducted physical experiments and numerical simulations to evaluate the interactions between varying inland freshwater input and tidal oscillations. Varying inland freshwater input was shown to induce significant water exchange across the aquifer-sea interface as the saltwater wedge shifted landward and seaward over the fluctuation cycle. Tidal oscillations led to seawater circulation through the intertidal zone that also enhanced the density-driven circulation, resulting in a significant increase in total SGD. The combination of the tide and varying inland freshwater input, however, decreased the SGD components driven by the separate forcings (e.g., tides and density). Tides restricted the landward and seaward movement of the saltwater wedge in response to the varying inland freshwater input, in addition to reducing the time delay between the varying freshwater input signal and the landward-seaward movement of the saltwater wedge interface. The nonlinear interaction between tidal fluctuations and varying inland freshwater input revealed by this study will help to improve our understanding of SGD, seawater intrusion, and chemical transport in coastal unconfined aquifers.

[Archived PDF of article]