Essay 115: Novels as Another University: Joseph Conrad

One can say that the first wave of imperial “neocons” was not the group that got the U.S. into the Iraq War (2003) but the group described by Warren Zimmermann in his classic book on the rise of the American Empire from the 1890s onwards:

First Great Triumph: How Five Americans Made Their Country a World Power. By Warren Zimmermann. Illustrated. 562 pp. New York: Farrar, Straus & Giroux.

“Americans like to pretend that they have no imperial past,” Warren Zimmermann tells us in First Great Triumph: How Five Americans Made Their Country a World Power. But they do.

The United States had been expanding its borders from the moment of its birth, though its reach had been confined to the North American continent until 1898, when American soldiers and sailors joined Cuban and Filipino rebels in a successful war against Spain. When the war was won, the United States acquired a “protectorate” in Cuba and annexed Hawaii, the Philippine Islands, Guam, and Puerto Rico. “In 15 weeks,” Zimmermann notes, “the United States had gained island possessions on both the Atlantic and Pacific sides of its continental mass. It had put under its protection and control more than 10 million people: whites, blacks, Hispanics, Indians, Polynesians, Chinese, Japanese and the polyethnic peoples of the Philippine archipelago.”

John Hay, at the time the American ambassador to Britain, writing to his friend Theodore Roosevelt in Cuba, referred to the war against Spain as “a splendid little war, begun with the highest motives, carried on with magnificent intelligence and spirit, favored by that Fortune which loves the brave.” He hoped that the war’s aftermath would be concluded “with that fine good nature, which is, after all, the distinguishing trait of the American character.” More than a century later, we are still asking ourselves just how splendid that little war and its consequences really were.

Zimmermann, a career diplomat and a former United States ambassador to Yugoslavia, begins his brilliantly readable book about the war and its aftermath with biographical sketches of the five men — Alfred T. Mahan, Theodore Roosevelt, Henry Cabot Lodge, John Hay and Elihu Root — who played a leading role in making “their country a world power.”

Ironically, it turns out that any reader of Joseph Conrad’s (died in 1924) famous 1904 novel Nostromo would have encountered the “manifesto” of the American Empire, very clearly enunciated by one of the characters in the novel:

“Time itself has got to wait on the greatest country in the whole of God’s universe. We shall be giving the word for everything; industry, trade, law, journalism, art, politics and religion, from Cape Horn clear over to Smith’s Sound (i.e., Canada/Greenland), and beyond too, if anything worth taking hold of turns up at the North Pole. And then we shall have the leisure to take in hand the outlying islands and continents of the earth.

“We shall run the world’s business whether the world likes it or not. The world can’t help it—and neither can we, I guess.”

Joseph Conrad, Nostromo, Penguin Books, 2007, pages 62/63

Conrad’s political stance, which seems so denunciatory of imperialism here in Nostromo, seems very disrespectful of Africans in his Heart of Darkness, as Chinua Achebe (the Nigerian novelist and author of Things Fall Apart) and other African writers have shown and decried. Thus one sees layer upon layer of contradiction, both in American empire-mongering and in Conrad’s anticipation of it in Nostromo.

Essay 110: Education and Famine Analysis

The great historian Élie Halévy’s (died in 1937) History of the English People in the Nineteenth Century, a multi-volume classic, gives us a sense of nineteenth-century famine dynamics in the 1840s, when failed harvests, failed incomes, and failed speculations combined:

“It was a ‘dearth’ (i.e., scarcity)—a crisis belonging to the old order—the last ‘dearth,’ in fact, Europe had known up to the present day (i.e., before 1937). The unsatisfactory harvest of 1845 was followed by the disastrous autumn of 1846. The potato disease was worse than it had been the year before. The cereal harvest, moderately good in 1845, was a failure not only in the United Kingdom, but in France and throughout Western Europe. In 1845, Great Britain could still purchase corn even in Ireland, while the Irish poor were starving to death. Nothing of the kind was possible at the end of 1846.

“Britain could not obtain wheat from France or Germany. In short, it was no longer Ireland alone, but the whole of Western Europe that had to be saved from famine.

“The United Kingdom, France, and Germany must import Russian and American wheat, the only sources available to supply the deficit.

“In consequence the price of wheat rose from 50 shillings and 2d. on August 22 to 65 shillings and 7d. on November 18. The price of wheat rose once more. It exceeded 78 shillings in March.

“In Germany and France, where another ‘jacquerie’ seemed to have begun, hunger caused an outbreak of rioting. The same happened in Scotland and the south of England…but England suffered in common with Ireland and Continental Europe, and a drain on English gold began, to pay for the Russian and American wheat.

“Later there was a fall of 50% in four months. The corn factors (i.e., corn dealers) who for months had been gambling on a rise had no time to retrace their steps and were ruined at a single blow.” (“Commercial Failures in 1847,” Eclectic Review, December 1847)

(Élie Halévy, “Victorian Years (1841-1895),” Halévy’s History of the English People in the Nineteenth Century, Volume 4, pages 191-193, Ernest Benn Ltd., 1970)

Note that in British usage, “corn” refers to grain in general (primarily wheat), not maize (corn in the American sense). For example, see the Corn Laws.
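For readers unfamiliar with pre-decimal British currency (1 shilling = 12 pence), the price swings Halévy cites can be checked with a few lines of arithmetic. A small sketch; the percentage figures below are derived from the quoted prices, not stated by Halévy:

```python
# Pre-decimal British prices: 1 shilling (s.) = 12 pence (d.).

def to_pence(shillings, pence=0):
    """Convert a pre-decimal price to pence (1s. = 12d.)."""
    return shillings * 12 + pence

aug_1846 = to_pence(50, 2)   # 50s. 2d. on August 22
nov_1846 = to_pence(65, 7)   # 65s. 7d. on November 18
mar_1847 = to_pence(78)      # "exceeded 78 shillings" in March

rise_autumn = (nov_1846 - aug_1846) / aug_1846 * 100
rise_spring = (mar_1847 - aug_1846) / aug_1846 * 100
print(f"Aug-Nov rise: {rise_autumn:.0f}%")   # about 31%
print(f"Aug-Mar rise: {rise_spring:.0f}%")   # about 55%
```

A rise of over half in seven months, followed by the 50% fall Halévy reports, makes clear how the corn factors who "had been gambling on a rise" were ruined at a single blow.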

Halévy’s description of nineteenth-century European “food insecurity” helps us see why the Revolutions of 1848 were to a large extent severe food riots, and how food poverty and speculation interacted with the prevailing risk and uncertainty.

This should be read and pondered in connection with Prof. Amartya Sen’s classic from 1981, Poverty and Famines, which highlights the famine of income and buying power, as opposed to famines based on outright crop failures. Pearl Buck’s classic novel, The Good Earth (1931), fits this topic set, as it analyzes in human terms the pattern of Chinese famines. It is interesting to note, parenthetically, that the movie of The Good Earth could not feature Chinese actors in lead roles due to racial craziness at the time. Stepping back, we see a world of food insecurity aggravated by the spectre of racism further poisoning social relations worldwide.

Halévy states: “It was a ‘dearth’ (i.e., scarcity)—a crisis belonging to the old order—the last ‘dearth,’ in fact, Europe had known up to the present day…”.

It would be instructive to ponder whether this really was “a crisis belonging to the old order,” given the catastrophes and food crises that could come with climate change from 2019 onward. Will we have “global ‘dearths’”?

Essay 108: Early View Alert: Water Resources Research

From the American Geophysical Union’s journals:

Research Articles

Modeling the Snow Depth Variability with a High-Resolution Lidar Data Set and Nonlinear Terrain Dependency

by T. Skaugen & K. Melvold

Summary: Using airborne laser scanning, we collected 400 million snow depth measurements at Hardangervidda in southern Norway. The amount of data has made possible in-depth studies of the spatial distribution of snow and its interaction with terrain and vegetation. We find that terrain variability, expressed by the square slope, the average amount of snow, and whether the terrain is vegetated or not, largely explains the variation of snow depth. With this information it is possible to develop equations that predict snow depth variability for use in environmental models, which in turn are used for important tasks such as flood forecasting and hydropower planning. One major advantage is that these equations can be determined from data that are, in principle, available everywhere, provided there exists a detailed digital model of the terrain.

[Archived PDF of article]

Phosphorus Transport in Intensively Managed Watersheds

by Christine L. Dolph, Evelyn Boardman, Mohammad Danesh-Yazdi, Jacques C. Finlay, Amy T. Hansen, Anna C. Baker & Brent Dalzell

Abstract: When phosphorus from farm fertilizer, eroded soil, and septic waste enters our water, it leads to problems like toxic algae blooms, fish kills, and contaminated drinking supplies. In this study, we examine how phosphorus travels through streams and rivers of farmed areas. In the past, soil lost from farm fields was considered the biggest contributor to phosphorus pollution in agricultural areas, but our study shows that phosphorus originating from fertilizer stores in the soil and from crop residue, as well as from soil eroded from sensitive ravines and bluffs, contributes strongly to the total amount of phosphorus pollution in agricultural rivers. We also found that most phosphorus leaves farmed watersheds during the very highest river flows. Increased frequency of large storms due to climate chaos will therefore likely worsen water quality in areas that are heavily loaded with phosphorus from farm fertilizers. Protecting water in agricultural watersheds will require knowledge of the local landscape along with strategies to address (1) drivers of climate chaos, (2) reduction in the highest river flows, and (3) ongoing inputs and legacy stores of phosphorus that are readily transported across land and water.

[Archived PDF of article]
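The abstract’s central claim, that most phosphorus leaves farmed watersheds during the very highest river flows, is commonly quantified by computing the share of annual load carried by the top flow percentiles. A hedged sketch with synthetic data (not the authors’ code or data; the rating-curve exponent is an illustrative assumption):

```python
import numpy as np

# Synthetic daily discharge and a hypothetical phosphorus rating curve.
rng = np.random.default_rng(0)
flow = rng.lognormal(mean=1.0, sigma=1.2, size=365)  # skewed daily flows
conc = 0.05 * flow**0.4                              # assumed P concentration vs. flow
load = flow * conc                                   # daily P load (flow x concentration)

# Share of the annual load carried by the highest 10% of flow days.
top10 = flow >= np.percentile(flow, 90)
share = load[top10].sum() / load.sum()
print(f"Share of annual P load on top 10% flow days: {share:.0%}")
```

Because load grows faster than linearly with flow in this sketch, a small number of high-flow days dominate the annual export, which is why more frequent large storms would worsen downstream loading.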

Detecting the State of the Climate System via Artificial Intelligence to Improve Seasonal Forecasts and Inform Reservoir Operations

by Matteo Giuliani, Marta Zaniolo, Andrea Castelletti, Guido Davoli & Paul Block

Abstract: Increasingly variable hydrologic regimes combined with more frequent and intense extreme events are challenging water systems management worldwide. These trends emphasize the need for accurate medium- to long-term predictions to prompt anticipatory operations in a timely manner. Although in some locations global climate oscillations, particularly the El Niño–Southern Oscillation (ENSO), may contribute to extending forecast lead times, in other regions there is no consensus on how ENSO can be detected and used, as local conditions are also influenced by other concurrent climate signals. In this work, we introduce the Climate State Intelligence framework to capture the state of multiple global climate signals via artificial intelligence and improve seasonal forecasts. These forecasts are used as additional inputs for informing water system operations, and their value is quantified as the corresponding gain in system performance. We apply the framework to the Lake Como basin, a regulated lake in northern Italy mainly operated for flood control and irrigation supply. Numerical results show the existence of notable teleconnection patterns dependent on both ENSO and the North Atlantic Oscillation over the Alpine region, which contribute to generating skillful seasonal precipitation and hydrologic forecasts. The use of this information for conditioning the lake operations produces an average 44% improvement in system performance with respect to a baseline solution not informed by any forecast, and this gain further increases during extreme drought episodes. Our results also suggest that observed preseason sea surface temperature anomalies appear more valuable than hydrologic-based seasonal forecasts, producing an average 59% improvement in system performance.

[Archived PDF of article]

Landscape Water Storage and Subsurface Correlation from Satellite Surface Soil Moisture and Precipitation Observations

by Daniel J. Short Gianotti, Guido D. Salvucci, Ruzbeh Akbar, Kaighin A. McColl, Richard Cuenca & Dara Entekhabi

Abstract: Surface soil moisture measurements are typically correlated to some degree with changes in subsurface soil moisture. We calculate a hydrologic length scale, λ, which represents (1) the mean-state estimator of total column water changes from surface observations, (2) an e-folding length scale for subsurface soil moisture profile covariance fall-off, and (3) the best second-moment mass-conserving surface layer thickness for a simple bucket model, defined by the data streams of satellite soil moisture and precipitation retrievals. Calculations are simple, based on three variables: the autocorrelation and variance of surface soil moisture and the variance of the net flux into the column (precipitation minus estimated losses), which can be estimated directly from the soil moisture and precipitation time series. We develop a method to calculate the lag-one autocorrelation for irregularly observed time series and show global surface soil moisture autocorrelation. λ is driven in part by local hydroclimate conditions and is generally larger than the 50-mm nominal radiometric length scale for the soil moisture retrievals, suggesting broad subsurface correlation due to moisture drainage. In all but the most arid regions, radiometric soil moisture retrievals provide more information about ecosystem-relevant water fluxes than satellite radiometers can explicitly “see”; lower-frequency radiometers are expected to provide still more statistical information about subsurface water dynamics.

[Archived PDF of article]

Process-Guided Deep Learning Predictions of Lake Water Temperature

by Jordan S. Read, Xiaowei Jia, Jared Willard, Alison P. Appling, Jacob A. Zwart, Samantha K. Oliver, Anuj Karpatne, Gretchen J. A. Hansen, Paul C. Hanson, William Watkins, Michael Steinbach & Vipin Kumar

Abstract: The rapid growth of data in water resources has created new opportunities to accelerate knowledge discovery with the use of advanced deep learning tools. Hybrid models that integrate theory with state-of-the-art empirical techniques have the potential to improve predictions while remaining true to physical laws. This paper evaluates the Process-Guided Deep Learning (PGDL) hybrid modeling framework with a use-case of predicting depth-specific lake water temperatures. The PGDL model has three primary components: a deep learning model with temporal awareness (long short-term memory recurrence), theory-based feedback (model penalties for violating conservation of energy), and model pre-training to initialize the network with synthetic data (water temperature predictions from a process-based model). In situ water temperatures were used to train the PGDL model, a deep learning (DL) model, and a process-based (PB) model. Model performance was evaluated in various conditions, including when training data were sparse and when predictions were made outside of the range in the training data set. The PGDL model performance (as measured by root-mean-square error (RMSE)) was superior to DL and PB for two detailed study lakes, but only when pretraining data included greater variability than the training period. The PGDL model also performed well when extended to 68 lakes, with a median RMSE of 1.65 °C during the test period (DL: 1.78 °C, PB: 2.03 °C; in a small number of lakes PB or DL models were more accurate). This case study demonstrates that integrating scientific knowledge into deep learning tools shows promise for improving predictions of many important environmental variables.

[Archived PDF of article]
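The “theory-based feedback” component of PGDL can be pictured as a training loss that adds a penalty whenever predictions violate a physical constraint. A toy sketch of that idea (an assumed form, not the authors’ implementation; the energy-budget residual here is a placeholder):

```python
import numpy as np

def pgdl_loss(y_pred, y_obs, energy_residual, lam=0.5):
    """Supervised error plus a physics penalty weighted by lam."""
    rmse = np.sqrt(np.mean((y_pred - y_obs) ** 2))
    physics_penalty = np.mean(energy_residual ** 2)  # zero when energy is conserved
    return rmse + lam * physics_penalty

y_obs = np.array([12.0, 14.5, 16.0])    # observed lake temperatures (deg C)
y_pred = np.array([12.3, 14.1, 16.4])   # network predictions
residual = np.array([0.1, -0.2, 0.05])  # toy energy-budget mismatch per time step
print(pgdl_loss(y_pred, y_obs, residual))
```

Because the penalty is differentiable, gradient descent steers the network toward physically consistent output even where observations are sparse, which is consistent with the abstract’s finding that PGDL helps most outside the range of the training data.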

Adjustment of Radar-Gauge Rainfall Discrepancy Due to Raindrop Drift and Evaporation Using the Weather Research and Forecasting Model and Dual-Polarization Radar

by Qiang Dai, Qiqi Yang, Dawei Han, Miguel A. Rico-Ramirez & Shuliang Zhang

Abstract: Radar-gauge rainfall discrepancies are usually attributed to errors in the radar rainfall measurements, ignoring the fact that radar observes rain aloft while a rain gauge measures rainfall on the ground. Radar rainfall estimation typically assumes that raindrops observed aloft fall vertically to the ground without changing in size. This premise obviously does not hold, because raindrop location changes due to wind drift and raindrop size changes due to evaporation; however, both effects are usually ignored. This study proposes a fully formulated scheme to numerically simulate both raindrop drift and evaporation in the air and reduces the uncertainties of radar rainfall estimation. The Weather Research and Forecasting model is used to simulate high-resolution three-dimensional atmospheric fields. A dual-polarization radar retrieves the raindrop size distribution for each radar pixel. Three schemes are designed and implemented using the Hameldon Hill radar in Lancashire, England. The first considers only raindrop drift, the second considers only evaporation, and the last considers both aspects. Results show that wind advection can cause a large drift for small raindrops. Considerable loss of rainfall is observed due to raindrop evaporation. Overall, the three schemes improve the radar-gauge correlation by 3.2%, 2.9%, and 3.8% and reduce their discrepancy by 17.9%, 8.6%, and 21.7%, respectively, over eight selected events. This study contributes to the improvement of quantitative precipitation estimation from radar polarimetry and allows a better understanding of precipitation processes.

[Archived PDF of article]
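The abstract’s point that “wind advection can cause a large drift for small raindrops” follows from a back-of-envelope relation: a drop drifts horizontally by roughly wind speed times fall time, and small drops fall slowly. A hedged simplification (the paper uses a full 3-D atmospheric simulation; the velocities below are typical textbook values, not the paper’s):

```python
def drift_distance(beam_height_m, terminal_velocity_ms, wind_speed_ms):
    """Horizontal displacement of a drop falling from the radar beam height,
    assuming constant wind and constant terminal velocity."""
    fall_time = beam_height_m / terminal_velocity_ms
    return wind_speed_ms * fall_time

# Typical terminal velocities: ~1 m/s for drizzle-size drops, ~9 m/s for large drops.
small = drift_distance(1000.0, 1.0, 10.0)  # small drop from 1 km in a 10 m/s wind
large = drift_distance(1000.0, 9.0, 10.0)  # large drop, same conditions
print(small, large)  # 10 km vs. about 1.1 km of drift
```

An order-of-magnitude gap like this explains why the drift correction matters most for light rain, where the radar pixel a drop is observed in can be kilometers from the gauge it eventually reaches.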

The Role of Collapsed Bank Soil on Tidal Channel Evolution: A Process-Based Model Involving Bank Collapse and Sediment Dynamics

by K. Zhao, Z. Gong, F. Xu, Z. Zhou, C. K. Zhang, G. M. E. Perillo & G. Coco

Abstract: We develop a process-based model to simulate the geomorphodynamic evolution of tidal channels, considering hydrodynamics, flow-induced bank erosion, gravity-induced bank collapse, and sediment dynamics. A stress-deformation analysis and the Mohr-Coulomb criterion, calibrated through previous laboratory experiments, are included in a model simulating bank collapse. Results show that collapsed bank soil plays a primary role in the dynamics of bank retreat. For bank collapse with small bank height, tensile failure in the middle of the bank (Stage I), tensile failure on the bank top (Stage II), and sectional cracking from bank top to the toe (Stage III) are present sequentially before bank collapse occurs. A significant linear relation is observed between bank height and the contribution of bank collapse to bank retreat. Contrary to flow-induced bank erosion, bank collapse prevents further widening since the collapsed bank soil protects the bank from direct bank erosion. The bank profile is linear or slightly convex, and the planimetric shape of tidal channels (gradually decreasing in width landward) is similar when approaching equilibrium, regardless of the consideration of bank erosion and collapse. Moreover, the simulated width-to-depth ratio in all runs is comparable with observations from the Venice Lagoon. This indicates that the equilibrium configuration of tidal channels depends on hydrodynamic conditions and sediment properties, while bank erosion and collapse greatly affect the transient behavior (before equilibrium) of the tidal channels. Overall, this contribution highlights the importance of collapsed bank soil in investigating tidal channel morphodynamics using a combined perspective of geotechnics and soil mechanics.

[Archived PDF of article]

A Physically Based Method for Soil Evaporation Estimation by Revisiting the Soil Drying Process

by Yunquan Wang, Oliver Merlin, Gaofeng Zhu & Kun Zhang

Abstract: While numerous models exist for soil evaporation estimation, they are more or less empirically based either in the model structure or in the determination of introduced parameters. The main difficulty lies in representing the water stress factor, which is usually thought to be limited by capillarity-supported water supply or by vapor diffusion flux. Recent progress in understanding soil hydraulic properties, however, has found that film flow, which is often neglected, is the dominant process under low moisture conditions. By including the impact of film flow, a reexamination of the typical evaporation process found that this usually neglected film flow might be the dominant process supporting Stage II evaporation (i.e., the fast falling rate stage), besides the generally accepted capillary flow-supported Stage I evaporation and the vapor diffusion-controlled Stage III evaporation. A physically based model for estimating the evaporation rate was then developed by parameterizing the Buckingham-Darcy law. Interestingly, the empirical Bucket model was found to be a specific form of the proposed model. The proposed model requires the in-equilibrium relative humidity as the sole input for representing water stress and introduces no adjustable parameter in relation to soil texture. The impact of vapor diffusion was also discussed. Model testing with laboratory data yielded an excellent agreement with observations for both thin soil and thick soil column evaporation experiments. Model evaluation at 15 field sites generally showed a close agreement with observations, with a great improvement in the lower range of evaporation rates in comparison with the widely applied Priestley and Taylor Jet Propulsion Laboratory model.

[Archived PDF of article]

Floodplain Land Cover and Flow Hydrodynamic Control of Overbank Sedimentation in Compound Channel Flows

by Carmelo Juez, C. Schärer, H. Jenny, A. J. Schleiss & M. J. Franca

Abstract: Overbank sedimentation is predominantly due to fine sediments transported under suspension that become trapped and settle in floodplains when high-flow conditions occur in rivers. In a compound channel, the processes of exchanging water and fine sediments between the main channel and floodplains regulate the geomorphological evolution and are crucial for the maintenance of the ecosystem functions of the floodplains. These hydrodynamic and morphodynamic processes depend on variables such as the flow-depth ratio between the water depth in the main channel and the water depth in the floodplain, the width ratio between the width of the main channel and the width of the floodplain, and the floodplain land cover characterized by the type of roughness. This paper examines, by means of laboratory experiments, how these variables are interlinked and how the deposition of sediments in the compound channel is jointly determined by them. The combination of these compound channel characteristics modulates the production of large vertical-axis turbulent vortical structures in the mixing interface. Such vortical structures determine the water mass exchange between the main channel and the floodplain, conditioning in turn the transport of sediment particles conveyed in the water, and, therefore, the resulting overbank sedimentation. The existence and pattern of sedimentation are conditioned by both the hydrodynamic variables (the flow-depth ratio and the width ratio) and the floodplain land cover simulated in terms of smooth walls, meadow-type roughness, sparse-wood-type roughness, and dense-wood-type roughness.

[Archived PDF of article]

Identifying Actionable Compromises: Navigating Multi-city Robustness Conflicts to Discover Cooperative Safe Operating Spaces for Regional Water Supply Portfolios

by D. F. Gold, P. M. Reed, B. C. Trindade & G. W. Characklis

Summary: Cooperation among neighboring urban water utilities can help water managers face challenges stemming from climate change and population growth. Water utilities can cooperate by coordinating water transfers and water restrictions in times of water scarcity (drought) so that water is provided to areas that need it most. In order to successfully implement these policies, however, cooperative partners must find a compromise that is acceptable to all regional actors, a task complicated by asymmetries in resources and risks often present in regional systems. The possibility of deviations from agreed-upon actions is another complicating factor that has not been addressed in the water resources literature. Our study focuses on four urban water utilities in the Research Triangle region of North Carolina that are investigating cooperative drought mitigation strategies. We contribute a framework that includes the use of simulation models, optimization algorithms, and statistical tools to aid cooperating partners in finding acceptable compromises that are tolerant of modest deviations in planned actions. Our results can be used by regional utilities to avoid or alleviate potential planning conflicts and are broadly applicable to urban regional water supply planning across the globe.

[Archived PDF of article]

Detecting Changes in River Flow Caused by Wildfires, Storms, Urbanization, Regulation, and Climate across Sweden

by Berit Arheimer & Göran Lindström

Abstract: Changes in river flow may appear from shifts in land cover, constructions in the river channel, and climatic change, but currently there is a lack of understanding of the relative importance of these drivers. Therefore, we collected gauged river flow time series from 1961 to 2018 from across Sweden for 34 disturbed catchments to quantify how the various types of disturbances have affected river flow. We used trend analysis and the differences in observations versus hydrological modeling to explore the effects on river flow from (1) land cover changes from wildfires, storms, and urbanization; (2) dam constructions with regulations for hydropower production; and (3) climate-change impact in otherwise undisturbed catchments. A mini model ensemble, consisting of three versions of the S-HYPE model, was used, and the three models gave similar results. We searched for changes in annual and daily stream flow, seasonal flow regime, and flow duration curves. The results show that regulation of river flow has the largest impact, reducing spring floods by up to 100% and increasing winter flow by several orders of magnitude, with substantial effects transmitted far downstream. Climate change altered total river flow by up to 20%. Tree removal by wildfires and storms has minor impacts at medium and large scales. Urbanization, by contrast, showed a 20% increase in high flows, even at medium scales. This study emphasizes the benefits of combining observed time series with numerical modeling to exclude the effect of varying weather conditions when quantifying the effects of various drivers on long-term streamflow shifts.

[Archived PDF of article]

Assessing the Feasibility of Satellite-Based Thresholds for Hydrologically Driven Landsliding

by Matthew A. Thomas, Brian D. Collins & Benjamin B. Mirus

Summary: Soil wetness and rainfall contribute to landslides across the world. Using soil moisture sensors and rain gauges, these environmental conditions have been monitored at numerous points across the Earth’s surface to define threshold conditions, above which landsliding should be expected for a localized area. Satellite-based technologies also deliver estimates of soil wetness and rainfall, potentially offering an approach to develop thresholds as part of landslide warning systems over larger spatial scales. To evaluate the potential for using satellite-based measurements for landslide warning, we compare the accuracy of landslide thresholds defined with ground- versus satellite-based soil wetness and rainfall information. We find that the satellite-based data over-predict soil wetness during the time of year when landslides are most likely to occur, resulting in thresholds that also over-predict the potential for landslides relative to thresholds informed by direct measurements on the ground. Our results encourage the installation of more ground-based monitoring stations in landslide-prone settings and the cautious use of satellite-based data when more direct measurements are not available.

[Archived PDF of article]

Modeling the Translocation and Transformation of Chemicals in the Soil-Plant Continuum: A Dynamic Plant Uptake Module for the HYDRUS Model

by Giuseppe Brunetti, Radka Kodešová & Jiří Šimůnek

Abstract: Food contamination is responsible for thousands of deaths worldwide every year. Plants represent the most common pathway for chemicals into the human and animal food chain. Although existing dynamic plant uptake models for chemicals are crucial for the development of reliable mitigation strategies for food pollution, they nevertheless simplify the description of physicochemical processes in soil and plants, mass transfer processes between soil and plants and within plants, and transformation in plants. To fill this scientific gap, we couple a widely used hydrological model (HYDRUS) with a multi-compartment dynamic plant uptake model, which accounts for differentiated multiple metabolization pathways in plant tissues. The developed model is validated first theoretically and then experimentally against measured data from an experiment on the translocation and transformation of carbamazepine in three vegetables. The analysis is further enriched by performing a global sensitivity analysis on the soil-plant model to identify factors driving the compound’s accumulation in plants’ shoots, as well as to elucidate the role and the importance of soil hydraulic properties on the plant uptake process. Results of the multilevel numerical analysis emphasize the model’s flexibility and demonstrate its ability to accurately reproduce physicochemical processes involved in the dynamic plant uptake of chemicals from contaminated soils.

[Archived PDF of article]

Physical Controls on Salmon Redd Site Selection in Restored Reaches of a Regulated, Gravel-Bed River

by Lee R. Harrison, Erin Bray, Brandon Overstreet, Carl J. Legleiter, Rocko A. Brown, Joseph E. Merz, Rosealea M. Bond, Colin L. Nicol & Thomas Dunne

Abstract: Large-scale river restoration programs have emerged recently as a tool for improving spawning habitat for native salmonids in highly altered river ecosystems. Few studies have quantified the extent to which restored habitat is utilized by salmonids, which habitat features influence redd site selection, or the persistence of restored habitat over time. We investigated fall-run Chinook salmon spawning site utilization and measured and modeled corresponding habitat characteristics in two restored reaches: a reach of channel and floodplain enhancement completed in 2013 and a reconfigured channel and floodplain constructed in 2002. Redd surveys demonstrated that both restoration projects supported a high density of salmon redds, 3 and 14 years following restoration. Salmon redds were constructed in coarse gravel substrates located in areas of high sediment mobility, as determined by measurements of gravel friction angles and a grain entrainment model. Salmon redds were located near transitions between pool-riffle bedforms in regions of high predicted hyporheic flows. Habitat quality (quantified as a function of stream hydraulics) and hyporheic flow were both strong predictors of redd occurrence, though the relative roles of these variables differed between sites. Our findings indicate that physical controls on redd site selection in restored channels were similar to those reported for natural channels elsewhere. Our results further highlight that in addition to traditional habitat criteria (e.g., water depth, velocity, and substrate size), quantifying sediment texture and mobility, as well as intragravel flow, provides a more complete understanding of the ecological benefits provided by river restoration projects.

[Archived PDF of article]

Mountain-Block Recharge: A Review of Current Understanding

by Katherine H. Markovich, Andrew H. Manning, Laura E. Condon & Jennifer C. McIntosh

Abstract: Mountain-block recharge (MBR) is the subsurface inflow of groundwater to lowland aquifers from adjacent mountains. MBR can be a major component of recharge but remains difficult to characterize and quantify due to limited hydrogeologic, climatic, and other data in the mountain block and at the mountain front. The number of MBR-related studies has increased dramatically in the 15 years since the last review of the topic was conducted by Wilson and Guan (2004), generating important advancements. We review this recent body of literature, summarize current understanding of factors controlling MBR, and provide recommendations for future research priorities. Prior to 2004, most MBR studies were performed in the southwestern United States. Since then, numerous studies have detected and quantified MBR in basins around the world, typically estimating MBR to be 5–50% of basin-fill aquifer recharge. Theoretical studies using generic numerical modeling domains have revealed fundamental hydrogeologic and topographic controls on the amount of MBR and where it originates within the mountain block. Several mountain-focused hydrogeologic studies have confirmed the widespread existence of mountain bedrock aquifers hosting considerable groundwater flow and, in some cases, identified the occurrence of interbasin flow leaving headwater catchments in the subsurface—both of which are required for MBR to occur. Future MBR research should focus on the collection of high-priority data (e.g., subsurface data near the mountain front and within the mountain block) and the development of sophisticated coupled models calibrated to multiple data types to best constrain MBR and predict how it may change in response to climate warming.

[Archived PDF of article]

An Adjoint Sensitivity Model for Steady-State Sequentially Coupled Radionuclide Transport in Porous Media

by Mohamed Hayek, Banda S. RamaRao & Marsh Lavenue

Abstract: This work presents an efficient mathematical/numerical model to compute the sensitivity coefficients of a predefined performance measure to model parameters for one-dimensional steady-state sequentially coupled radionuclide transport in a finite heterogeneous porous medium. The model is based on the adjoint sensitivity approach that offers an elegant and computationally efficient alternative way to compute the sensitivity coefficients. The transport parameters include the radionuclide retardation factors due to sorption, the Darcy velocity, and the effective diffusion/dispersion coefficients. Both continuous and discrete adjoint approaches are considered. The partial differential equations associated with the adjoint system are derived based on the adjoint state theory for coupled problems. Physical interpretations of the adjoint states are given in analogy to results obtained in the theory of groundwater flow. For the homogeneous case, analytical solutions for primary and adjoint systems are derived and presented in closed form. Numerically calculated solutions are compared to the analytical results and show excellent agreement. Insights from sensitivity analysis are discussed to get a better understanding of the values of sensitivity coefficients. The sensitivity coefficients are also computed numerically by finite differences. The numerical sensitivity coefficients successfully reproduce the analytically derived sensitivities based on adjoint states. A derivative-based global sensitivity method coupled with the adjoint state method is presented and applied to a real field case represented by a site currently being considered for underground nuclear storage in Northern Switzerland, “Zürich Nordost,” to demonstrate the proposed method. The results show the advantage of the adjoint state method compared to other methods in terms of computational effort.
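
The discrete adjoint logic can be sketched in miniature. The toy discretization below is our own, not the paper's model: for a 1D steady advection-dispersion-decay system A c = b, the sensitivity of an outlet-concentration measure P = g^T c to the velocity v follows from one adjoint solve A^T lam = g via dP/dv = lam^T (db/dv - (dA/dv) c), and is checked against a finite-difference perturbation.

```python
# Hypothetical sketch (not the paper's model): discrete adjoint sensitivity for
# 1D steady advection-dispersion-decay transport, verified by finite differences.

def solve(M, rhs):
    """Dense Gaussian elimination with partial pivoting (small systems only)."""
    n = len(rhs)
    A = [row[:] for row in M]
    b = list(rhs)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

def system(v, D=0.01, k=0.2, n=20, c0=1.0):
    """Upwind discretization of -D c'' + v c' + k c = 0 on [0,1], c(0)=c0, c'(1)=0."""
    h = 1.0 / n
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for i in range(n):
        A[i][i] = 2 * D / h ** 2 + v / h + k
        if i > 0:
            A[i][i - 1] = -D / h ** 2 - v / h
        if i < n - 1:
            A[i][i + 1] = -D / h ** 2
    A[n - 1][n - 1] -= D / h ** 2        # fold in the zero-gradient outflow node
    b[0] = (D / h ** 2 + v / h) * c0     # inflow boundary moved to the RHS
    return A, b

def outlet_sensitivity_adjoint(v):
    """dP/dv for P = outlet concentration, via a single adjoint solve."""
    A, b = system(v)
    n = len(b)
    c = solve(A, b)
    g = [0.0] * n
    g[-1] = 1.0                          # performance measure P = c[n-1]
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    lam = solve(At, g)                   # adjoint state: A^T lam = g
    h = 1.0 / n
    # dA/dv has +1/h on the diagonal and -1/h on the lower diagonal; db/dv = c0/h.
    dAv_c = [c[i] / h - (c[i - 1] / h if i > 0 else 0.0) for i in range(n)]
    dbv = [0.0] * n
    dbv[0] = 1.0 / h
    return sum(lam[i] * (dbv[i] - dAv_c[i]) for i in range(n))

def outlet_sensitivity_fd(v, eps=1e-6):
    def P(vv):
        A, b = system(vv)
        return solve(A, b)[-1]
    return (P(v + eps) - P(v - eps)) / (2 * eps)
```

The point the abstract makes is visible here: one extra linear solve yields the exact sensitivity of the discrete system, whereas finite differences need two forward solves per parameter.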

[Archived PDF of article]

Hydraulic Reconstruction of the 1818 Giétro Glacial Lake Outburst Flood

by C. Ancey, E. Bardou, M. Funk, M. Huss, M. A. Werder & T. Trewhela

Summary: Every year, natural and man-made dams fail and cause flooding. For public authorities, estimating the risk posed by dams is essential to good risk management. Efficient computational tools are required for analyzing flood risk. Testing these tools is an important step toward ensuring their reliability and performance. Knowledge of major historical floods makes it possible, in principle, to benchmark models, but because historical data are often incomplete and fraught with potential inaccuracies, validation is seldom satisfactory. Here we present one of the few major historical floods for which information on flood initiation and propagation is available and detailed: the Giétro flood. This flood occurred in June 1818 and devastated the Drance Valley in Switzerland. In the spring of that year, ice avalanches blocked the valley floor and formed a glacial lake, whose volume is today estimated at 25×10⁶ m³. The local authorities initiated protection works: A tunnel was drilled through the ice dam, and about half of the stored water volume was drained in 2.5 days. On 16 June 1818, the dam failed suddenly because of significant erosion at its base; this caused a major flood. This paper presents a numerical model for estimating flow rates, velocities, and depths during the dam drainage and flood flow phases. The numerical results agree well with historical data. The flood reconstruction shows that relatively simple models can be used to estimate the effects of a major flood with good accuracy.
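
The summary's figures imply an average drainage rate during the tunnel phase; a quick illustrative check:

```python
# Arithmetic check from the summary's own numbers: about half of the
# 25 million m^3 lake was drained through the tunnel in 2.5 days.

volume_drained = 25e6 / 2          # m^3
seconds = 2.5 * 86400              # 2.5 days in seconds
mean_discharge = volume_drained / seconds
print(round(mean_discharge, 1))    # ~57.9 m^3/s average drainage rate
```

The sudden dam failure that followed produced peak discharges far above this controlled-drainage average, which is what the reconstruction model quantifies.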

[Archived PDF of article]

The Representation of Hydrological Dynamical Systems Using Extended Petri Nets (EPN)

by Marialaura Bancheri, Francesco Serafin & Riccardo Rigon

Abstract: This work presents a new graphical system to represent hydrological dynamical models and their interactions. We propose an extended version of the Petri Nets mathematical modeling language, the Extended Petri Nets (EPN), which allows for an immediate translation from the graphics of the model to its mathematical representation in a clear way. We introduce the principal objects of the EPN representation (i.e., places, transitions, arcs, controllers, and splitters) and their use in hydrological systems. We show how to cast hydrological models in EPN and how to complete their mathematical description using a dictionary for the symbols and an expression table for the flux equations. Thanks to the compositional property of EPN, we show how it is possible to represent either a single hydrological response unit or a complex catchment where multiple systems of equations are solved simultaneously. Finally, EPN can be used to describe complex Earth system models that include feedback between the water, energy, and carbon budgets. The representation of hydrological dynamical systems with EPN provides a clear visualization of the relations and feedback between subsystems, which can be studied with techniques introduced in nonlinear systems theory and control theory.
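
A toy sketch of the places/transitions idea follows; it is our own minimal interpretation, not the authors' EPN software or notation. A single linear-reservoir "place" is fed by a rainfall "transition" and drained by a discharge "transition", and the wiring of transitions to places defines the ODE system.

```python
# Toy sketch (illustrative only, not the authors' EPN implementation):
# a "place" holds a storage, a "transition" computes a flux, and arcs
# (src/dst) wire them into an ODE integrated with explicit Euler.

class Place:
    def __init__(self, name, storage=0.0):
        self.name, self.storage = name, storage

class Transition:
    def __init__(self, flux, src=None, dst=None):
        self.flux, self.src, self.dst = flux, src, dst  # flux: callable(t)

def step(places, transitions, t, dt):
    rates = {p.name: 0.0 for p in places}
    for tr in transitions:
        q = tr.flux(t)
        if tr.src: rates[tr.src.name] -= q   # flux leaves its source place
        if tr.dst: rates[tr.dst.name] += q   # flux enters its destination place
    for p in places:
        p.storage += dt * rates[p.name]

# Single linear reservoir: rainfall -> storage S, discharge Q = k*S.
S = Place("S")
rain = Transition(lambda t: 2.0, dst=S)             # constant input (illustrative units)
out = Transition(lambda t: 0.5 * S.storage, src=S)  # linear outflow, k = 0.5
for i in range(10000):
    step([S], [rain, out], i * 0.01, 0.01)
# S approaches the steady state P/k = 4.0
```

The compositional property the abstract emphasizes corresponds here to adding more places and transitions and letting the same `step` routine solve the coupled budgets together.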

[Archived PDF of article]

A Regularization Approach to Improve the Sequential Calibration of a Semidistributed Hydrological Model

by A. de Lavenne, V. Andréassian, G. Thirel, M.-H. Ramos & C. Perrin

Abstract: In semidistributed hydrological modeling, sequential calibration usually refers to the calibration of a model by considering not only the flows observed at the outlet of a catchment but also the different gauging points inside the catchment from upstream to downstream. While sequential calibration aims to optimize the performance at these interior gauged points, we show that it generally fails to improve performance at ungauged points. In this paper, we propose a regularization approach for the sequential calibration of semidistributed hydrological models. It consists of adding a priori information on optimal parameter sets for each modeling unit of the semidistributed model. Calibration iterations are then performed by jointly maximizing simulation performance and minimizing drifts from the a priori parameter sets. The combination of these two sources of information is handled by a parameter k to which the method is quite sensitive. The method is applied to 1,305 catchments in France over 30 years. The leave-one-out validation shows that, at locations treated as ungauged, model simulations are significantly improved (over all the catchments, the median KGE criterion is increased from 0.75 to 0.83 and the first quartile from 0.35 to 0.66), while model performance at gauged points is not significantly impacted by the use of the regularization approach. Small catchments benefit most from this calibration strategy. These performances are, however, very similar to those obtained with a lumped model based on a similar conceptualization.
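
The calibration trade-off described above can be illustrated with a one-dimensional quadratic toy objective; this is our own sketch, not the paper's algorithm. Maximizing performance minus k times the squared drift from the prior has a closed-form optimum that slides from the pure data fit toward the prior as k grows.

```python
# Toy sketch (not the paper's method): a regularized calibration objective
# J(theta) = -(theta - theta_fit)**2 - k * (theta - theta_prior)**2
# trades simulation performance against drift from an a priori parameter.

def regularized_optimum(theta_fit, theta_prior, k):
    """Closed-form maximizer of J: setting dJ/dtheta = 0 gives
    theta* = (theta_fit + k * theta_prior) / (1 + k)."""
    return (theta_fit + k * theta_prior) / (1.0 + k)

# With k = 0 the optimum is the pure data fit; as k grows the calibrated
# parameter is pulled toward the prior:
print(regularized_optimum(10.0, 4.0, 0.0))    # 10.0
print(regularized_optimum(10.0, 4.0, 1.0))    # 7.0
print(regularized_optimum(10.0, 4.0, 100.0))  # ~4.06
```

The sensitivity to k that the abstract mentions is visible in the closed form: k controls how far the calibrated value may drift from the prior.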

[Archived PDF of article]

Proneness of European Catchments to Multiyear Streamflow Droughts

by Manuela I. Brunner & Lena M. Tallaksen

Summary: Droughts lasting longer than 1 year can have severe ecological, social, and economic impacts. They are characterized by below-average flows, not only during the low-flow period but also in the high-flow period when water stores such as groundwater or artificial reservoirs are usually replenished. Limited catchment storage might worsen the impacts of droughts and make water management more challenging. Knowledge on the occurrence of multiyear drought events enables better adaptation and increases preparedness. In this study, we assess the proneness of European catchments to multiyear droughts by simulating long discharge records. Our findings show that multiyear drought events mainly occur in regions where the discharge seasonality is mostly influenced by rainfall, whereas catchments whose seasonality is dominated by melt processes are less affected. The strong link between the proneness of a catchment to multiyear events and its discharge seasonality leads to the conclusion that future changes toward less snow storage and thus less snow melt will increase the probability of multiyear drought occurrence.

[Archived PDF of article]

Equifinality and Flux Mapping: A New Approach to Model Evaluation and Process Representation under Uncertainty

by Sina Khatami, Murray C. Peel, Tim J. Peterson & Andrew W. Western

Abstract: Uncertainty analysis is an integral part of any scientific modeling, particularly within the domain of hydrological sciences given the various types and sources of uncertainty. At the center of uncertainty rests the concept of equifinality, that is, reaching a given endpoint (finality) through different pathways. The operational definition of equifinality in hydrological modeling is that various model structures and/or parameter sets (i.e., equal pathways) are equally capable of reproducing a similar (not necessarily identical) hydrological outcome (i.e., finality). Here we argue that there is more to model equifinality than model structures/parameters, that is, other model components can give rise to model equifinality and/or could be used to explore equifinality within model space. We identified six facets of model equifinality, namely, model structure, parameters, performance metrics, initial and boundary conditions, inputs, and internal fluxes. Focusing on model internal fluxes, we developed a methodology called flux mapping that has fundamental implications in understanding and evaluating model process representation within the paradigm of multiple working hypotheses. To illustrate this, we examine the equifinality of runoff fluxes of a conceptual rainfall-runoff model for a number of different Australian catchments. We demonstrate how flux maps can give new insights into the model behavior that cannot be captured by conventional model evaluation methods. We discuss the advantages of flux space, as a subspace of the model space not usually examined, over parameter space. We further discuss the utility of flux mapping in hypothesis generation and testing, extendable to any field of scientific modeling of open complex systems under uncertainty.
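
A toy illustration of why flux space can discriminate where total streamflow cannot (our own sketch, not the paper's models): two parameter sets that split rainfall differently between fast and slow pathways are equifinal in total runoff yet have distinct flux maps.

```python
# Toy equifinality demo (illustrative only): different internal flux
# partitions produce identical total streamflow, so evaluating only the
# total cannot distinguish the two "hypotheses" -- the fluxes can.

def toy_runoff(precip, frac_fast):
    """Split each rainfall pulse into fast and slow runoff fluxes."""
    fast = [frac_fast * p for p in precip]
    slow = [(1.0 - frac_fast) * p for p in precip]
    total = [f + s for f, s in zip(fast, slow)]
    return fast, slow, total

rain = [5.0, 0.0, 12.0, 3.0]
fast_a, slow_a, total_a = toy_runoff(rain, 0.8)  # mostly surface runoff
fast_b, slow_b, total_b = toy_runoff(rain, 0.3)  # mostly subsurface flow
# Totals match (equifinal outcome), but the internal flux maps differ.
```

A conventional streamflow metric scores both parameter sets identically; a flux map over (fast, slow) immediately separates them, which is the abstract's central point.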

[Archived PDF of article]

Role of Extreme Precipitation and Initial Hydrologic Conditions on Floods in Godavari River Basin, India

by Shailesh Garg & Vimal Mishra

Abstract: Floods are the most frequent natural calamity in India. The Godavari river basin (GRB) witnessed several floods in the past 50 years. Notwithstanding the large damage and economic loss, the role of extreme precipitation and antecedent moisture conditions on floods in the GRB remains unexplored. Using the observations and the well-calibrated Variable Infiltration Capacity model, we estimate the changes in the extreme precipitation and floods in the observed (1955–2016) and projected future (2071–2100) climate in the GRB. We evaluate the role of initial hydrologic conditions and extreme precipitation on floods in both observed and projected future climate. We find a statistically significant increase in annual maximum precipitation for the catchments upstream of four gage stations during the 1955–2016 period. However, the rise in annual maximum streamflow at all four gage stations in the GRB was not statistically significant. The probability of floods driven by extreme precipitation (PFEP) varies between 0.55 and 0.7 at the four gage stations of the GRB, and declines with the size of the basins. More than 80% of extreme precipitation events that cause floods occur under wet antecedent moisture conditions at all four locations in the GRB. The frequency of extreme precipitation events is projected to at least double (under RCP 8.5) in the future (2071–2100) at all four locations. However, the increased frequency of floods under the future climate will largely be driven by the substantial rise in extreme precipitation events rather than wet antecedent moisture conditions.

[Archived PDF of article]

Research Letters

Combined Effect of Tides and Varying Inland Groundwater Input on Flow and Salinity Distribution in Unconfined Coastal Aquifers

by Woei Keong Kuan, Pei Xin, Guangqiu Jin, Clare E. Robinson, Badin Gibbes & Ling Li

Abstract: Tides and seasonally varying inland freshwater input, with different fluctuation periods, are important factors affecting flow and salt transport in coastal unconfined aquifers. These processes affect submarine groundwater discharge (SGD) and associated chemical transport to the sea. While the individual effects of these forcings have previously been studied, here we conducted physical experiments and numerical simulations to evaluate the interactions between varying inland freshwater input and tidal oscillations. Varying inland freshwater input was shown to induce significant water exchange across the aquifer-sea interface as the saltwater wedge shifted landward and seaward over the fluctuation cycle. Tidal oscillations led to seawater circulations through the intertidal zone that also enhanced the density-driven circulation, resulting in a significant increase in the total SGD. The combination of the tide and varying inland freshwater input, however, decreased the SGD components driven by the separate forcings (e.g., tides and density). Tides restricted the landward and seaward movement of the saltwater wedge in response to the varying inland freshwater input, in addition to reducing the time delay between the varying freshwater input signal and the landward-seaward movement of the saltwater wedge interface. This study revealed a nonlinear interaction between tidal fluctuations and varying inland freshwater input that will help to improve our understanding of SGD, seawater intrusion, and chemical transport in coastal unconfined aquifers.

[Archived PDF of article]

Essay 83: Press Release: World Energy Outlook 2019 Highlights Deep Disparities in the Global Energy System

Rapid and widespread changes across all parts of the energy system are needed to put the world on a path to a secure and sustainable energy future

Deep disparities define today’s energy world. The dissonance between well-supplied oil markets and growing geopolitical tensions and uncertainties. The gap between the ever-higher amounts of greenhouse gas emissions being produced and the insufficiency of stated policies to curb those emissions in line with international climate targets. The gap between the promise of energy for all and the lack of electricity access for 850 million people around the world.

The World Energy Outlook 2019, the International Energy Agency’s flagship publication, explores these widening fractures in detail. It explains the impact of today’s decisions on tomorrow’s energy systems, and describes a pathway that enables the world to meet climate, energy access and air quality goals while maintaining a strong focus on the reliability and affordability of energy for a growing global population.

As ever, decisions made by governments remain critical for the future of the energy system. This is evident in the divergences between WEO scenarios that map out different routes the world could follow over the coming decades, depending on the policies, investments, technologies and other choices that decision makers pursue today. Together, these scenarios seek to address a fundamental issue – how to get from where we are now to where we want to go.

The path the world is on right now is shown by the Current Policies Scenario, which provides a baseline picture of how global energy systems would evolve if governments make no changes to their existing policies. In this scenario, energy demand rises by 1.3% a year to 2040, resulting in strains across all aspects of energy markets and a continued strong upward march in energy-related emissions.
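
For scale, 1.3% annual growth compounds substantially over the Outlook horizon. Assuming a roughly 22-year span from 2018 to 2040 (the horizon length here is our assumption, not a figure stated in the release):

```python
# Illustrative compounding arithmetic (22-year horizon is an assumption):
# cumulative effect of 1.3% annual energy demand growth to 2040.
cumulative = 1.013 ** 22 - 1
print(round(cumulative * 100, 1))  # roughly a one-third increase in demand
```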

The Stated Policies Scenario, formerly known as the New Policies Scenario, incorporates today’s policy intentions and targets in addition to existing measures. The aim is to hold up a mirror to today’s plans and illustrate their consequences. The future outlined in this scenario is still well off track from the aim of a secure and sustainable energy future. It describes a world in 2040 where hundreds of millions of people still go without access to electricity, where pollution-related premature deaths remain around today’s elevated levels, and where CO2 emissions would lock in severe impacts from climate change.

The Sustainable Development Scenario indicates what needs to be done differently to fully achieve climate and other energy goals that policy makers around the world have set themselves. Achieving this scenario – a path fully aligned with the Paris Agreement aim of holding the rise in global temperatures to well below 2°C and pursuing efforts to limit it to 1.5°C – requires rapid and widespread changes across all parts of the energy system. Sharp emission cuts are achieved thanks to multiple fuels and technologies providing efficient and cost-effective energy services for all.

“What comes through with crystal clarity in this year’s World Energy Outlook is there is no single or simple solution to transforming global energy systems,” said Dr. Fatih Birol, the IEA’s Executive Director. “Many technologies and fuels have a part to play across all sectors of the economy. For this to happen, we need strong leadership from policy makers, as governments hold the clearest responsibility to act and have the greatest scope to shape the future.”

In the Stated Policies Scenario, energy demand increases by 1% per year to 2040. Low-carbon sources, led by solar PV, supply more than half of this growth, and natural gas accounts for another third. Oil demand flattens out in the 2030s, and coal use edges lower. Some parts of the energy sector, led by electricity, undergo rapid transformations. Some countries, notably those with “net zero” aspirations, go far in reshaping all aspects of their supply and consumption.

However, the momentum behind clean energy is insufficient to offset the effects of an expanding global economy and growing population. The rise in emissions slows but does not peak before 2040.

Shale output from the United States is set to stay higher for longer than previously projected, reshaping global markets, trade flows and security. In the Stated Policies Scenario, annual U.S. production growth slows from the breakneck pace seen in recent years, but the United States still accounts for 85% of the increase in global oil production to 2030, and for 30% of the increase in gas. By 2025, total U.S. shale output (oil and gas) overtakes total oil and gas production from Russia.

“The shale revolution highlights that rapid change in the energy system is possible when an initial push to develop new technologies is complemented by strong market incentives and large-scale investment,” said Dr. Birol. “The effects have been striking, with U.S. shale now acting as a strong counterweight to efforts to manage oil markets.”

The higher U.S. output pushes down the share of OPEC members and Russia in total oil production, which drops to 47% in 2030, from 55% in the mid-2000s. But whichever pathway the energy system follows, the world is set to rely heavily on oil supply from the Middle East for years to come.

Alongside the immense task of putting emissions on a sustainable trajectory, energy security remains paramount for governments around the globe. Traditional risks have not gone away, and new hazards such as cybersecurity and extreme weather require constant vigilance. Meanwhile, the continued transformation of the electricity sector requires policy makers to move fast to keep pace with technological change and the rising need for the flexible operation of power systems.

“The world urgently needs to put a laser-like focus on bringing down global emissions. This calls for a grand coalition encompassing governments, investors, companies and everyone else who is committed to tackling climate change,” said Dr. Birol. “Our Sustainable Development Scenario is tailor-made to help guide the members of such a coalition in their efforts to address the massive climate challenge that faces us all.”

A sharp pick-up in energy efficiency improvements is the element that does the most to bring the world towards the Sustainable Development Scenario. Right now, efficiency improvements are slowing: the 1.2% rate in 2018 is around half the average seen since 2010 and remains far below the 3% rate that would be needed.

Electricity is one of the few energy sources that sees rising consumption over the next two decades in the Sustainable Development Scenario. Electricity’s share of final consumption overtakes that of oil, today’s leader, by 2040. Wind and solar PV provide almost all the increase in electricity generation.

Putting electricity systems on a sustainable path will require more than just adding more renewables. The world also needs to focus on the emissions that are “locked in” to existing systems. Over the past 20 years, Asia has accounted for 90% of all coal-fired capacity built worldwide, and these plants potentially have long operational lifetimes ahead of them. This year’s WEO considers three options to bring down emissions from the existing global coal fleet: to retrofit plants with carbon capture, utilisation and storage or biomass co-firing equipment; to repurpose them to focus on providing system adequacy and flexibility; or to retire them earlier.

Access the 2019 World Energy Outlook report.

About the IEA: The International Energy Agency, the global energy authority, was founded in 1974 to help its member countries co-ordinate a collective response to major oil supply disruptions. Its mission has evolved and rests today on three main pillars: working to ensure global energy security; expanding energy cooperation and dialogue around the world; and promoting an environmentally sustainable energy future.

International Energy Agency Press Office
31-35 Rue de la Fédération, Paris, 75015

Essay 80: Short-Term Energy Outlook

U.S. Energy Information Administration
November 13, 2019 Release

Highlights

Global liquid fuels
  • Brent crude oil spot prices averaged $60 per barrel (b) in October, down $3/b from September and down $21/b from October 2018. EIA forecasts Brent spot prices will average $60/b in 2020, down from a 2019 average of $64/b. EIA forecasts that West Texas Intermediate (WTI) prices will average $5.50/b less than Brent prices in 2020. EIA expects crude oil prices will be lower on average in 2020 than in 2019 because of forecast rising global oil inventories, particularly in the first half of next year.
  • Based on preliminary data and model estimates, EIA estimates that the United States exported 140,000 b/d more total crude oil and petroleum products in September than it imported; total exports exceeded imports by 550,000 b/d in October. If confirmed in survey-collected monthly data, it would be the first time the United States exported more petroleum than it imported since EIA records began in 1949. EIA expects total crude oil and petroleum net exports to average 750,000 b/d in 2020 compared with average net imports of 520,000 b/d in 2019.
  • Distillate fuel inventories (a category that includes home heating oil) in the U.S. East Coast—Petroleum Administration for Defense District (PADD 1)—totaled 36.6 million barrels at the end of October, which was 30% lower than the five-year (2014–18) average for the end of October. The declining inventories largely reflect low U.S. refinery runs during October and low distillate fuel imports to the East Coast. EIA does not forecast regional distillate prices, but low inventories could put upward pressure on East Coast distillate fuel prices, including home heating oil, in the coming weeks.
  • U.S. regular gasoline retail prices averaged $2.63 per gallon (gal) in October, up 3 cents/gal from September and 11 cents/gal higher than forecast in last month’s STEO. Average U.S. regular gasoline retail prices were higher than expected, in large part, because of ongoing issues from refinery outages in California. EIA forecasts that regular gasoline prices on the West Coast (PADD 5), a region that includes California, will fall as the issues begin to resolve. EIA expects that prices in the region will average $3.44/gal in November and $3.12/gal in December. For the U.S. national average, EIA expects regular gasoline retail prices to average $2.65/gal in November and fall to $2.50/gal in December. EIA forecasts that the annual average price in 2020 will be $2.62/gal.
  • Despite low distillate fuel inventories, EIA expects that average household expenditures for home heating oil will decrease this winter. This forecast largely reflects warmer temperatures than last winter for the entire October–March period, and retail heating oil prices are expected to be unchanged compared with last winter. For households that heat with propane, EIA forecasts that expenditures will fall by 15% from last winter because of milder temperatures and lower propane prices.
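
The forecast arithmetic in the bullets above can be checked directly, using only values taken from the text:

```python
# Quick check of the liquid-fuels forecast arithmetic (values from the text).
brent_2020 = 60.0             # $/b, forecast Brent average for 2020
wti_2020 = brent_2020 - 5.50  # WTI forecast to average $5.50/b below Brent
net_2019 = -520.0             # thousand b/d, average net imports in 2019
net_2020 = 750.0              # thousand b/d, average net exports in 2020
swing = net_2020 - net_2019   # year-over-year swing toward net exports
print(wti_2020, swing)        # implied WTI of $54.50/b; 1,270 thousand-b/d swing
```
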
Natural gas
  • Natural gas storage injections in the United States outpaced the previous five-year (2014–18) average during the 2019 injection season as a result of rising natural gas production. At the beginning of April, when the injection season started, working inventories were 28% lower than the five-year average for the same period. By October 31, U.S. total working gas inventories reached 3,762 billion cubic feet (Bcf), which was 1% higher than the five-year average and 16% higher than a year ago.
  • EIA expects natural gas storage withdrawals to total 1.9 trillion cubic feet (Tcf) between the end of October and the end of March, which is less than the previous five-year average winter withdrawal. A withdrawal of this amount would leave end-of-March inventories at almost 1.9 Tcf, 9% higher than the five-year average.
  • The Henry Hub natural gas spot price averaged $2.33 per million British thermal units (MMBtu) in October, down 23 cents/MMBtu from September. The decline largely reflected strong inventory injections. However, forecast cold temperatures across much of the country caused prices to rise in early November, and EIA forecasts Henry Hub prices to average $2.73/MMBtu for the final two months of 2019. EIA forecasts Henry Hub spot prices to average $2.48/MMBtu in 2020, down 13 cents/MMBtu from the 2019 average. Lower forecast prices in 2020 reflect a decline in U.S. natural gas demand and slowing U.S. natural gas export growth, allowing inventories to remain higher than the five-year average during the year even as natural gas production growth is forecast to slow. 
  • EIA forecasts that annual U.S. dry natural gas production will average 92.1 billion cubic feet per day (Bcf/d) in 2019, up 10% from 2018. EIA expects that natural gas production will grow much less in 2020 because of the lag between changes in price and changes in future drilling activity, with low prices in the third quarter of 2019 reducing natural gas-directed drilling in the first half of 2020. EIA forecasts natural gas production in 2020 will average 94.9 Bcf/d.
  • EIA expects U.S. liquefied natural gas (LNG) exports to average 4.7 Bcf/d in 2019 and 6.4 Bcf/d in 2020 as three new liquefaction projects come online. In 2019, three new liquefaction facilities—Cameron LNG, Freeport LNG, and Elba Island LNG—commissioned their first trains. Natural gas deliveries to LNG projects set a new record in July, averaging 6.0 Bcf/d, and increased further to 6.6 Bcf/d in October, when new trains at Cameron and Freeport began ramping up. Cameron LNG exported its first cargo in May, Corpus Christi LNG’s newly commissioned Train 2 shipped its first cargo in July, and Freeport LNG followed in September. Elba Island plans to ship its first export cargo by the end of this year. In 2020, Cameron, Freeport, and Elba Island expect to place their remaining trains in service, bringing the total U.S. LNG export capacity to 8.9 Bcf/d by the end of the year.
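
The storage arithmetic above can be checked directly, using only values taken from the text:

```python
# Quick check of the storage arithmetic (values from the text).
end_october = 3762                # Bcf of working gas on October 31
withdrawals = 1900                # Bcf of forecast winter withdrawals (1.9 Tcf)
end_march = end_october - withdrawals
print(end_march)                  # 1862 Bcf, i.e. "almost 1.9 Tcf" at end of March
```
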
Electricity, coal, renewables, and emissions
  • EIA expects the share of U.S. total utility-scale electricity generation from natural gas-fired power plants will rise from 34% in 2018 to 37% in 2019 and to 38% in 2020. EIA forecasts the share of U.S. electric generation from coal to average 25% in 2019 and 22% in 2020, down from 28% in 2018. EIA’s forecast nuclear share of U.S. generation remains at about 20% in 2019 and in 2020. Hydropower averages a 7% share of total U.S. generation in the forecast for 2019 and 2020, down from almost 8% in 2018. Wind, solar, and other non-hydropower renewables provided 9% of U.S. total utility-scale generation in 2018. EIA expects they will provide 10% in 2019 and 12% in 2020.
  • EIA expects total U.S. coal production in 2019 to total 698 million short tons (MMst), an 8% decrease from the 2018 level of 756 MMst. The decline reflects lower demand for coal in the U.S. electric power sector and reduced competitiveness of U.S. exports in the global market. EIA expects U.S. steam coal exports to face increasing competition from Eastern European sources, and that Russia will fill a growing share of steam coal trade, causing U.S. coal exports to fall in 2020. EIA forecasts that coal production in 2020 will total 607 MMst.
  • EIA expects U.S. electric power sector generation from renewables other than hydropower—principally wind and solar—to grow from 408 billion kilowatt-hours (kWh) in 2019 to 466 billion kWh in 2020. In EIA’s forecast, Texas accounts for 19% of the U.S. non-hydropower renewables generation in 2019 and 22% in 2020. California’s forecast share of non-hydropower renewables generation falls from 15% in 2019 to 14% in 2020. EIA expects that the Midwest and Central power regions will see shares in the 16% to 18% range for 2019 and 2020.
  • EIA forecasts that, after rising by 2.7% in 2018, U.S. energy-related carbon dioxide (CO2) emissions will decline by 1.7% in 2019 and by 2.0% in 2020, partially as a result of lower forecast energy consumption. In 2019, EIA forecasts less demand for space cooling because of cooler summer months, with cooling degree days expected to decline 5% from 2018, when they were significantly higher than the previous 10-year (2008–17) average. In addition, EIA expects U.S. CO2 emissions in 2019 to decline because the forecast share of electricity generated from natural gas and renewables will increase, and the share generated from coal, which is a more carbon-intensive energy source, will decrease.

Essay 79: Past and Present Thinking

History is “forever new,” and we keep asking “what’s new?” But the past is “forever suggestive,” so we inquire here whether it offers interesting echoes of more recent events.

Specifically, we juxtapose the “closing of the gold window” in August 1971 (Nixon) and the British gold standard gyrations between 1925 and 1931, when England left gold (i.e., September 1931).

At the time, under Nixon, the U.S. also had an unemployment rate of 6.1% (August 1971) and an inflation rate of 5.84% (1971).

To combat these problems, President Nixon consulted Federal Reserve chairman Arthur Burns, incoming Treasury Secretary John Connally, and then Undersecretary for International Monetary Affairs and future Fed Chairman Paul Volcker.

On the afternoon of Friday, August 13, 1971, these officials along with twelve other high-ranking White House and Treasury advisors met secretly with Nixon at Camp David. There was great debate about what Nixon should do, but ultimately Nixon, relying heavily on the advice of the self-confident Connally, decided to break up Bretton Woods by announcing a set of actions on August 15.

Speaking on television on Sunday, August 15, when American financial markets were closed, Nixon said the following:

“The third indispensable element in building the new prosperity is closely related to creating new jobs and halting inflation. We must protect the position of the American dollar as a pillar of monetary stability around the world.

“In the past 7 years, there has been an average of one international monetary crisis every year …

“I have directed Secretary Connally to suspend temporarily the convertibility of the dollar into gold or other reserve assets, except in amounts and conditions determined to be in the interest of monetary stability and in the best interests of the United States.

“Now, what is this action—which is very technical—what does it mean for you?

“Let me lay to rest the bugaboo of what is called devaluation.

“If you want to buy a foreign car or take a trip abroad, market conditions may cause your dollar to buy slightly less. But if you are among the overwhelming majority of Americans who buy American-made products in America, your dollar will be worth just as much tomorrow as it is today.

“The effect of this action, in other words, will be to stabilize the dollar.”

Britain’s own experience in the twenties is explained like this:

“In 1925, Britain had returned to the gold standard.

(editor: This Churchill decision was deeply critiqued by Keynes.)

“When Labour came to power in May 1929 this was in good time for Black Friday on Wall Street in the following October.

“After the Austrian and German crashes in May and July 1931, Britain’s financial position became critical, and on 21st September she abandoned the gold standard.

“London was still the world’s financial capital in 1931, and the British abandonment of the gold standard set off a chain of reactions throughout the world.

“Strangely enough Germany and Austria maintained the gold standard…”

(Europe of the Dictators, Elizabeth Wiskemann, Fontana/Collins, 1977, pages 92–93)

Nixon’s policies gave us the demise of Bretton Woods, while the economic gyrations of 1925–1931 were part of the lead-up to World War II.

The settings are “infinitely different” across the decades, but the feeling of “flying blind” applies to both cases: the U.S.A. “closing the gold window” in August 1971, and Britain overturning Churchill’s 1925 return to the gold standard by 1931. One gets the sense of “concealed turmoil” and a lot of “winging it” in both cases. Policy-makers disagreed, and they all saw the world of their moment “through a glass, darkly.”

Essay 66: Education and the Question of Fecklessness

We propose in Meta Intelligence an education that is completely global and cosmopolitan from Day 1.

The problem of education as a confusing area of activity is revealed to us in an episode from the great Japanese novel The Makioka Sisters.

The Makioka Sisters (細雪 [Sasameyuki], “Light Snow”) is a novel by Japanese writer Jun’ichirō Tanizaki (died in 1965) that was serialized from 1943 to 1948. It follows the lives of the wealthy Makioka family of Osaka from the autumn of 1936 to April 1941, focusing on the family’s attempts to find a husband for the third sister, Yukiko.

In the novel, there’s a description of a “failed educational odyssey:”

“Mimaki was an old court family. The present viscount, the son, was well along in years. Mimaki Minoru, son by a concubine, was a graduate of the Peers School and had studied physics at the Imperial University, which he left to go to France.  In Paris he studied painting for a time, and French cooking for a time, and numerous other things, none for very long.

“Going on to America, he studied aeronautics in a not-too-famous state university, and he did finally take a degree, it seemed.

“After graduation, he continued to wander about the United States, and on to Mexico and South America. With his allowance from home cut off in the course of these wanderings, he made a living as a cook and even as a bellboy. He also returned to painting and even tried his hand at architecture.

“Following his whims and relying on his undeniable cleverness, he tried everything. He abandoned aeronautics when he left school.”

(The Makioka Sisters, Vintage Books, 1985, Seidensticker translation, pages 473–474)

This person winds up dabbling in architecture after his return to Japan.

This episode in Tanizaki’s great novel gives us a “flashlight” or “searchlight” into the whole problem of educational confusion. Is this simply a case of one person’s “fecklessness”? Is this just a case of what’s called “failure to launch” (see the movie by this name)?

Or is it partly perhaps that education as a “lockstep system” of schools, exams, courses, semesters, quizzes and grades is very “inhospitable” to “searchers”?

If we call everyone who “stumbles around” a dilettante and a feckless failure, we might be unnecessarily “binary,” exclusionary and unaware of the problem of “cold educational ecosystems” which punish exploring for those who are not “born specialists.”  Winners and losers are too polarized as an educational judgment, perhaps.

The classic German novel about youthful confusions is Fontane’s Irrungen, Wirrungen (Trials and Tribulations, 1888), and perhaps an argument could be made that the coldly “binary” view of “successes” versus “the feckless” causes the loss of many young people who had various kinds of emotional resistance to education as an “Olympics” of sorts, with “winners and losers.” This might be seen as an overly narrow kind of “edu-brutality,” intolerant of the more difficult adjustment stories of young people, which are not uncommon.

Essay 55: Sharply-Focused and Informative Data for All Students

BEA News: Gross Domestic Product by Industry, 2nd quarter 2019 and annual update

The U.S. Bureau of Economic Analysis

The U.S. Bureau of Economic Analysis (BEA) has issued the following news release:

Professional, scientific, and technical services; real estate and rental and leasing; and mining were the leading contributors to the increase in U.S. economic growth in the second quarter of 2019.

The private goods‐ and services‐producing industries, as well as the government sector, contributed to the increase. Overall, 14 of 22 industry groups contributed to the 2.0 percent increase in real GDP in the second quarter.

The full text of the release [archived PDF] on BEA’s website can be found here.

The Bureau of Economic Analysis provides this service to you at no charge.  Visit us on the Web at www.bea.gov.  To subscribe, all you will need is your e-mail address. If you have questions or need assistance, please e-mail subscribe@bea.gov.

Essay 47: Novels as a Kind of University Demonstrating Storms of Global Finance and Technification

Edith Wharton began writing The Age of Innocence in 1917 as a way of recalling and criticizing the world of her youth, which had not yet experienced the devastation of World War I (1914–18).  Beginning in July 1920, the novel was published in serial form in New York’s monthly Pictorial Review.

The centrality of finance and technological change can be seen. We are reminded of the very first line of Booth Tarkington’s The Magnificent Ambersons, which tells the reader that the basis of the Ambersons’ magnificence was established when they somehow benefited from the financial crisis of 1873, which destroyed many others. (Whether the Ambersons were shrewd, lucky, or wily is not clarified.)

The Age of Innocence is set in New York in the 1870s and the financial storm and “techno-storm” become vital:

The Panic of 1873:

In The Age of Innocence, the investment bank run by Julius Beaufort collapses, bringing shame upon him and his wife and throwing New York into a tizzy. Beaufort’s business failure is a fictionalized version of the Panic of 1873, industrial capitalism’s first worldwide depression. At the time, the United States backed its currency with both silver and gold, but when Germany and several other countries stopped using silver to back their currencies, the price of silver fell precipitously, devaluing U.S. currency. The U.S. Treasury made matters worse by releasing large amounts of paper money into the economy. Speculators and bankers now had to pay off their debts immediately in gold.

In 1873, the prominent investment banking firm of Jay Cooke & Company went bankrupt; the effects rippled throughout the entire U.S. economy, and panic ensued. Trading was suspended for two weeks on the New York Stock Exchange as company after company failed, wages dropped precipitously, and unemployment spiked. The rise of the labor movement can be traced to the widespread unrest and economic instability set off by the panic. Additionally, the panic allowed a few of the wealthiest businessmen—such as Andrew Carnegie, John D. Rockefeller, and Cyrus McCormick, who retained access to valuable capital—to vastly increase their wealth and snuff out competitors.

Technological Advancements

Characters in The Age of Innocence are aware their world is about to be forever changed by the culture of outsiders, brought to them in part by advancements in technology. Although inventions like the telephone were on the horizon, they seemed improbably fantastic to people living in the early 1870s world of telegrams and horse-drawn carriages. However, in the final chapter, Wharton depicts Newland Archer living in a world that has been significantly altered by these technologies, a mere quarter century later.

In 1876, for example, American inventor Alexander Graham Bell (1847–1922) patented an early telephone and wowed audiences with the world’s first telephone call, placed from one telegraph station to another five miles away.  The Western Union company refused to buy Bell’s telephone patent, claiming his invention would amount to no more than a novelty. However, the first telephone line was built in 1877–78, and after that, telephone usage skyrocketed.  At the start of the 1880s, there were almost 50,000 telephones in use, a number that swelled to over half a million by the turn of the century.

A similar large-scale change was the invention and development of electricity. Although the first electric light was developed in 1835, it was not until 1879 that American inventor Thomas Edison (1847-1931) developed and patented a light bulb with a life span of 15 hours. Edison’s work also focused on the problems of electrical generation and conductivity.

At the same time that communication was becoming easier and the day was lengthened artificially through electric lighting, the distance between continents was shortened by advances in turbine steam engines.

In the 1860s, it took between eight and nine days to cross the Atlantic Ocean; by 1907, the Mauretania (the ship that Dallas and Newland Archer take to Europe in the last chapter) made the voyage in half that time.  This was a contributing factor to the great influx of European immigrants who arrived in the United States during the late 19th and early 20th centuries.

In Chapter 29, Newland contemplates the “brotherhood of visionaries,” who predict a train tunnel under the Hudson River as well as “ships that would cross the Atlantic in five days … and other Arabian Night marvels.” In 1904, excavation for train tunnels under the Hudson began, directed by Alexander Cassatt, president of the Pennsylvania Railroad. In 1910, New York’s Penn Station opened and began receiving traffic from electric trains that traveled through the tunnels.

Notice that the novel The Magnificent Ambersons is from 1918, Edith Wharton’s The Age of Innocence from 1920. In each, the personal storms of private emotion are somewhat carried along and swept up into the storms coming from national and even global finance (the Panic of 1873 caused a tremendous crash in Germany and Austria called the Gründerkrach [“founders’ crash”]), as well as techno-waves that are very baffling to the people of the time.

Essay 46: Novelists As Prophetic?

There are three French novelists who say prophetic things in their writings, predictions that are based on intuition and sensibility and not on any formal forecasting at all, but far-seeing nevertheless. Consider these three:

Jules Verne (died in 1905):

Paris in the Twentieth Century (French: Paris au XXe siècle) is a science fiction novel by Jules Verne. The book presents Paris in August 1960, 97 years in Verne’s future, where society places value only on business and technology.

Written in 1863 but first published 131 years later (1994), the novel follows a young man who struggles unsuccessfully to live in a technologically advanced, but culturally backwards world.  Often referred to as Verne’s “lost novel,” the work paints a grim, dystopian view of a technological future civilization.

Verne’s predictions for 1960:

The book’s description of the technology of 1960 was in some ways remarkably close to actual 1960s technology.

The book described in detail advances such as cars powered by internal combustion engines (“gas-cabs”) together with the necessary supporting infrastructure such as gas stations and paved asphalt roads, elevated and underground passenger train systems and high-speed trains powered by magnetism and compressed air, skyscrapers, electric lights that illuminate entire cities at night, fax machines (“picture-telegraphs”), elevators, primitive computers which can send messages to each other as part of a network somewhat resembling the Internet (described as sophisticated electrically powered mechanical calculators which can send information to each other across vast distances), the utilization of wind power, automated security systems, the electric chair, and remotely-controlled weapons systems, as well as weapons destructive enough to make war unthinkable.

The book also predicts the growth of suburbs and mass-produced higher education (the opening scene has Dufrénoy attending a mass graduation of 250,000 students), department stores, and massive hotels. A version of feminism has also arisen in society, with women moving into the workplace and a rise in illegitimate births. It also makes accurate predictions of 20th-century music, predicting the rise of electronic music, and describes a musical instrument similar to a synthesizer, and the replacement of classical music performances with a recorded music industry.  It predicts that the entertainment industry would be dominated by lewd stage plays, often involving nudity and sexually explicit scenes.

Flaubert (died in 1880):

In his posthumously published novel Bouvard and Pécuchet (1881), a satire on random knowledge-seeking, the two clerks of the book’s title conclude that sometime in the future America will “take over” the world, or at least its hegemonic leadership. To foresee, in those days, that America would supplant Europe was quite “counterintuitive.”

Bouvard and Pécuchet details the adventures of two Parisian copy-clerks, François Denys Bartholomée Bouvard and Juste Romain Cyrille Pécuchet, of the same age and nearly identical temperament. They meet one hot summer day in 1838 by the canal Saint-Martin and form an instant, symbiotic friendship. When Bouvard inherits a sizable fortune, the two decide to move to the countryside. They find a 94-acre (380,000 m2) property near the town of Chavignolles in Normandy, between Caen and Falaise, and 100 miles (160 km) west of Rouen. Their search for intellectual stimulation leads them, over the course of years, to flounder through almost every branch of knowledge.

Balzac (died in 1850):

In his novel, The Wild Ass’s Skin (La Peau de Chagrin), Balzac describes scenes and conversations which lead one insightful interpreter of his to remark:  “On the level of world history, this incident can be read as an allegorical prefiguration of the contemporary conversion of Asia to the materialistic motivations of the technological societies of the West.”  (Balzac: An Interpretation of La Comédie Humaine, F.J.W. Hemmings, Random House, 1967, page 173)

Hemmings says:  “European and then American norms are generally accepted among what we call the advanced societies of the world: a civilization concerned above all to stimulate and then gratify the innumerable private desires of its citizens…In Balzac’s day, this civilization had reached its highest development in Paris.”  (Hemmings’s book, page 173)

These three novelists bring to mind Heidegger’s (died in 1976) more recent sense that science and technology from Europe would come to dominate “planetary thinking,” wringing out any sense of “being” or “being-in-the-world.”

These three writers gave us “allegorical prefigurations” (to use Hemmings’s phrase above) of the present that are startling in their far-seeing sense of things, raising the question: who might their equivalents be in our time?