“Pre-Understanding” as a Pillar of Better Education

One pillar of our education-enhancement effort is the concept of “pre-understanding,” which argues that a step has usually been skipped in education: the overview or guidance or “lay of the land” step that must come before courses become efficacious. Tackling a 900-page textbook seems soul-crushing in the absence of “pre-understanding” (i.e., where are we and why are we doing this?), with nothing left to motivate the student but the coercive power of schools (grades, scholarships, recommendations, grad school admissions, etc.).

A person senses (not incorrectly) that economics as a field of study seems tedious and solipsistic (i.e., “talking to itself” and not to the student).

Can we give students a “pre-understanding” that opens a backdoor or side window into the field, where such doors and windows were never seen or noticed?

A person is trying to decide what airline they should use in flying from Boston to Nepal.

Immediate concerns are of course price, flexibility of ticket, safety reputation of different airlines, schedules, weather forecasts, routes, etc.

A person might argue: Flight A stops in Tokyo and I can make use of that because my friend who lives in the area will put me up for a weekend, whereby we can do the town and sights, talk about old times, re-connect, etc. There’s also some other task or chore there I could do, so the Tokyo interruption is to my liking. There are some risks associated with this (i.e., my fiancée who’s traveling with me might find it boring). I’m not sure (uncertainty).

Now suppose somebody tells you that such “decision theory” is at the heart of economics and involves four dimensions:

  1. Costs.
  2. Benefits.
  3. Risks.
  4. Uncertainties.

Whether you know it or not, you are optimizing some things (usefulness and pleasure of travel) and minimizing other things (time in the air, costs, safety risks, discomfort, etc.).

You don’t realize that you’re making subtle decisional calculations in which risks and uncertainties that cannot be strictly quantified are somehow being weighted, weighed, and quantified by you implicitly; the decision calculus is quite complicated.

Suppose you were now given to understand that economics is about economizing (i.e., budgeting your costs, benefits, risks, and uncertainties, some of which are qualitative and subjective), in which you find a way to assign some kind of numbers and weighting factors (i.e., importance to you) to your actual, but more likely intuitive, calculations.
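To make this implicit calculus concrete, here is a minimal sketch (in Python, with entirely hypothetical weights and scores, invented for illustration) of what writing down the traveler’s intuitive weighting of costs, benefits, risks, and uncertainties might look like:

```python
# A toy version of the traveler's decision calculus.
# All weights and scores below are hypothetical illustrations, not data.

# Importance weights for the four dimensions (chosen to sum to 1).
weights = {"costs": 0.30, "benefits": 0.35, "risks": 0.20, "uncertainties": 0.15}

# Each flight scored 0-10 per dimension; costs/risks/uncertainties are
# scored so that higher means better (i.e., cheaper, safer, more certain).
options = {
    "Flight A (Tokyo stopover)": {"costs": 5, "benefits": 9, "risks": 6, "uncertainties": 4},
    "Flight B (direct)":         {"costs": 7, "benefits": 5, "risks": 8, "uncertainties": 7},
}

def utility(scores):
    """The weighted sum the traveler computes intuitively."""
    return sum(weights[dim] * s for dim, s in scores.items())

for name, scores in options.items():
    print(f"{name}: {utility(scores):.2f}")
```

The point is not the particular numbers but that a fuzzy, intuitive deliberation can be cast as an explicit weighting exercise, which is exactly the door into formal cost-benefit analysis.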

Goaded and prompted by this “pre-understanding” you might then pick up a standard guide to actual cost-benefit analysis (such as Mishan’s classic book) and go through this previously unseen “door” into the field without being crushed by the feeling that it’s all so tiresome in its appearance.

Similarly, take a math concept like the square root of minus one and think of it as an imaginary “unicorn” of the mind; how is it, then, that it appears constantly throughout science and math, in Euler’s equation, Schrödinger’s equation, electrical engineering textbooks, etc.?

How can something so elusive be so useful?
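Part of the answer is that the “unicorn” is welded to the most familiar constants of mathematics. Euler’s formula ties the imaginary unit to the exponential and trigonometric functions, and at θ = π it collapses into Euler’s identity:

```latex
% Euler's formula, and its special case at \theta = \pi (Euler's identity):
e^{i\theta} = \cos\theta + i\sin\theta
\qquad\Longrightarrow\qquad
e^{i\pi} + 1 = 0
```

Anything that oscillates (waves, circuits, quantum amplitudes) is most compactly described this way, which is why the “unicorn” keeps showing up.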

This “pre-understanding” quest or detour or episode could give you, the student, a deep nudge through a hidden window or door into “math world.”
Without this “trampoline of pre-understanding,” an “ocean of math intricacy” seems to loom before you.

Education and “Chaos”: The Example of Climate Change

Students will have heard or read descriptions of “chaos theory” which try to capture the phenomenon that a small change “here” or now might involve a mega-change somewhere else, or later on, or both. In other words, tremendous turbulence could arise from overlooked minutiae in some other region or domain. Chaos here does not mean lawless…it means lawful but in surprising ways, like a pendulum swinging from another pendulum, where the laws of pendular motion are still in effect but the motions are “jumpy.”

This can be described as follows:

Chaos theory is a branch of mathematics focusing on the study of chaos—states of dynamical systems whose apparently-random states of disorder and irregularities are often governed by deterministic laws that are highly sensitive to initial conditions. Chaos theory is an interdisciplinary theory stating that, within the apparent randomness of chaotic complex systems, there are underlying patterns, constant feedback loops, repetition, self-similarity, fractals, and self-organization.

The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning that there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in China can cause a hurricane in Texas.
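One can watch this sensitivity happen in a few lines of arithmetic. Here is a minimal sketch (the logistic map, a standard textbook example of chaos, not drawn from the passage above) in which two starting values differing by one part in a billion end up completely uncorrelated:

```python
# Sensitive dependence on initial conditions: the logistic map
# x -> r * x * (1 - x) in its chaotic regime (r = 4).
r = 4.0
x, y = 0.2, 0.2 + 1e-9   # two initial conditions, one billionth apart

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.2e}")
```

By around step 30 the “gap” has grown from 10⁻⁹ to order 1: the butterfly’s flap in miniature, and entirely deterministic throughout.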

Blaise Pascal (17th century) gives us the example of “Cleopatra’s nose.” Had her nose been shorter, Pascal muses, she would presumably not have been so beautiful; this could have altered romantic entanglements and the behavior of rival Roman generals, and world history might have moved along completely different pathways (recall Caesar and Cleopatra, the play).

All of this “strange science” applies to climate change.

In the Winter 2019/20 issue of Options, from the International Institute for Applied Systems Analysis (IIASA, headquartered in Austria), there’s a short piece that shows you how climate change has such “chaos-type” features, which could “turbo-charge” changes already expected:

Will Forests Let Us Down?

Current climate models assume that forests will continue to remove greenhouse gases from the atmosphere at their current rate.

A study by an international team including researchers from IIASA, however, indicates that this uptake capacity could be strongly limited by soil phosphorus availability. If this scenario proves true, the Earth’s climate would heat up much faster than previously assumed.

(Options, Winter 2019/20 issue, IIASA, page 5, “News in Brief”)

Students should glimpse something here that points to a “deep structure.”
Climate scientists and climate modelers are at this time trying to re-examine and re-jigger predictions to include overlooked details that could add “chaotic dynamics” to the predictions. Knowledge itself is evolving, and if you add knowledge changes and revisions to model changes, you have to conclude that even with this fantastic level of human ingenuity and scientific intricacy, we “see the world through a glass, darkly,” because the facts, models, chaos math, and overviews are themselves in “interactive flux.”

Two Kinds of Extra Understanding: Pre and Post

We argue in this proposal for an educational remedy: two dimensions of understanding must be added to “retro-fit” education.

In the first addition, call it pre-understanding, a student is given an overview not only of the field but of his or her life as well as the “techno-commercial” environment that characterizes the globe.

Pre-understanding includes such “overall cautions” as those offered by Calderón de la Barca’s classic 17th-century Spanish play, Life is a Dream (Spanish: La vida es sueño). A student would perhaps ask: “What would it be like if I faced this ‘dreamlike quality’ of life, as shown by the Spanish play, and suddenly realized that a life of ‘perfect myopia’ is not what I want?”

Hannah Arendt warns similarly of a life “like a leaf in the whirlwind of time.”

Again, I, the student, ask: do I want such a Hannah Arendt-type leaf-in-the-whirlwind life, buried further under Calderón de la Barca’s “dream state”?

But that’s not all: while I’m learning about these “life dangers,” all around me, from my block to the whole world, humanity does its “techno-commerce” via container ships and robots, hundreds of millions of vehicles and smartphones, multilateral exchange rates, and tariff policies. Real understanding keeps one eye on the personal and the other on the impersonal, not one or the other.

All of these personal and impersonal layers of the full truth must be faced and followed, “en face,” as they say in French (i.e., “without blinking”).

Call all this pre-understanding which includes of course a sense of how my “field” or major or concentration fits into the “architecture of knowledge” and not in isolation without connections or a “ramification structure.”

Post-understanding comes from the other end: it is my lifelong effort to re-understand my life and times and book-learning, undertaken after just about everything I learned, from the six wives of King Henry VIII to the “mean value theorem”/Rolle’s theorem in freshman math, has been completely forgotten and has utterly evaporated from my mind.

Pre- and post-understanding together allow the phenomenon Wittgenstein describes: “light dawns gradually over the whole.”

Without these deeper dimensions of educational remedy, the student as a person would mostly stumble from “pillar to post” with “perfect myopia.” Education mostly adds to all the “fragmentariness” of the modern world and is, in that sense, incomplete or even disorienting.

Education in this deep sense is supposed to be the antidote to this overall sense of modern “shapelessness,” to use Kierkegaard’s term.

Meaningfulness versus Informativeness

Vlatko Vedral’s Decoding Reality is a classic contemporary analysis of the foundations of physics and of their implications for the human world. What the scientists don’t see is that physics and science are the infrastructure on which the human “quest for meaning” takes place. Ortega (Ortega y Gasset, died in 1955) tells us that a person is “a point of view directed at the universe.” This level of meaning cannot be reduced to bits or qubits or electrons, since man is a “linguistic creature” who invents fictional stories to explain “things” that are not things.

The following dialog between Paul Davies (the outstanding science writer) and Vlatko Vedral (the distinguished physicist) gropes along on these issues: the difference between science as one kind of story and the human interpretation of life and self expressed in “tales” and parables, fictions and beliefs:

Davies: “When humans communicate, a certain quantity of information passes between them. But that information differs from the bits (or qubits) physicists normally consider, inasmuch as it possesses meaning. We may be able to quantify the information exchanged, but meaning is a qualitative property—a value—and therefore hard, maybe impossible, to capture mathematically. Nevertheless the concept of meaning obviously has, well… meaning. Will we ever have a credible physical theory of ‘meaningful information,’ or is ‘meaning’ simply outside the scope of physical science?”

Vedral: “This is a really difficult one. The success of Shannon’s formulation of ‘information’ lies precisely in the fact that he stripped it of all “meaning” and reduced it only to the notion of probability. Once we are able to estimate the probability for something to occur, we can immediately talk about its information content. But this sole dependence on probability could also be thought of as the main limitation of Shannon’s information theory (as you imply in your question). One could, for instance, argue that the DNA has the same information content inside as well as outside of a biological cell. However, it is really only when it has access to the cell’s machinery that it starts to serve its main biological purpose (i.e., it starts to make sense). Expressing this in your own words, the DNA has a meaning only within the context of a biological cell. The meaning of meaning is therefore obviously important. Though there has been some work on the theory of meaning, I have not really seen anything convincing yet. Intuitively we need some kind of a ‘relative information’ concept, information that is not only dependent on the probability, but also on its context, but I am afraid that we still do not have this.”
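Vedral’s point that Shannon’s measure depends on probability alone, never on context, can be made concrete in a few lines. A minimal sketch (standard formulas, not from the dialog above):

```python
# Shannon's information: content depends only on probability,
# never on what the event means.
from math import log2

def surprisal(p):
    """Information content, in bits, of an event with probability p."""
    return -log2(p)

def entropy(dist):
    """Average information of a probability distribution."""
    return sum(p * surprisal(p) for p in dist if p > 0)

print(surprisal(0.5))       # 1.0 bit: a fair coin flip
print(entropy([0.25] * 4))  # 2.0 bits: one of four equally likely outcomes
# A coin flip and a DNA base with the same probabilities carry the same
# bits; the cell's "machinery," i.e., the context, never enters the formula.
```

The calculation is blind to meaning by construction, which is exactly the limitation Vedral concedes.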

For a physicist, all the world is information. The universe and its workings are the ebb and flow of information. We are all transient patterns of information, passing on the recipe for our basic forms to future generations using a four-letter digital code called DNA.

See Decoding Reality.

In this engaging and mind-stretching account, Vlatko Vedral considers some of the deepest questions about the universe and the implications of interpreting it in terms of information. He explains the nature of information, the idea of entropy, and the roots of this thinking in thermodynamics. He describes the bizarre effects of quantum behavior (effects such as “entanglement,” which Einstein called “spooky action at a distance”), explores cutting-edge work on harnessing quantum effects in hyper-fast quantum computers, and shows how recent evidence suggests that the weirdness of the quantum world, once thought limited to the tiniest scales, may reach into the macro world.

Vedral finishes by considering the answer to the ultimate question: Where did all of the information in the universe come from? The answers he considers are exhilarating, drawing upon the work of distinguished physicist John Wheeler. The ideas challenge our concept of the nature of particles, of time, of determinism, and of reality itself.

Science is an “ontic” quest. Human life is an “ontological” quest. They are a “twisted pair” where each strand must be seen clearly and not confused. The content of your telephone conversation with your friend, say, is not reducible to the workings of a phone or the subtle electrical engineering and physics involved. A musical symphony is not just “an acoustical blast.”

The “meaning of meaning” is evocative and not logically expressible. There’s a kind of “spooky action at a distance” between these levels of meaning and information, but they remain different “realms” or “domains.”

Education and the “Knowability” Problem

There was a wonderful PBS Nature episode in 2006 called “The Queen of Trees” [full video, YouTube] which went into detail about the survival strategy and rhythms of one tree in Africa, its interactions with its environment, and all the complexities this involves:

This Nature episode explores the evolution of a fig tree in Africa and its only pollinator, the fig wasp. The film takes us through a journey of intertwining relationships. It shows how the fig (queen) tree is life-sustaining for an entire range of species, from plants to insects to other animals and even mammals. These other species are in turn life-sustaining to the fig tree itself. It could not survive without the interaction of all these different creatures and the various functions they perform. This is one of the greatest documented (on video) examples of the wonders of our natural world: the intricacies involved in survival and in ensuring the perpetual existence of species.

It shows us how fragile the balance is between survival and extinction.

One can begin to see that the tree/animal/bacteria/season/roots/climate interaction is highly complex and not quite fully understood to this day.

The fact that one tree yields new information every time we probe into it gives you a “meta” (i.e., meta-intelligent) clue that final theories of the cosmos and fully unified theories of physics will be elusive at best and unreachable at worst. If one can hardly pin down the workings of a single tree, does it sound plausible that “everything that is” from the electron to galaxy clusters to multiverses will be captured by an equation? The objective answer has to be: not particularly.

Think of the quest of the great unifiers like the great philosopher-physicist Hermann Weyl (died in 1955, like Einstein):

Since the 19th century, some physicists, notably Albert Einstein, have attempted to develop a single theoretical framework that can account for all the fundamental forces of nature–a unified field theory. Classical unified field theories are attempts to create a unified field theory based on classical physics. In particular, unification of gravitation and electromagnetism was actively pursued by several physicists and mathematicians in the years between the two World Wars. This work spurred the purely mathematical development of differential geometry.

Hermann Klaus Hugo Weyl (9 November 1885 – 8 December 1955) was a German mathematician, theoretical physicist and philosopher. Although much of his working life was spent in Zürich, Switzerland and then Princeton, New Jersey, he is associated with the University of Göttingen tradition of mathematics, represented by David Hilbert and Hermann Minkowski.

His research has had major significance for theoretical physics as well as purely mathematical disciplines including number theory. He was one of the most influential mathematicians of the twentieth century, and an important member of the Institute for Advanced Study during its early years.

Weyl published technical and some general works on space, time, matter, philosophy, logic, symmetry and the history of mathematics. He was one of the first to conceive of combining general relativity with the laws of electromagnetism. While no mathematician of his generation aspired to the “universalism” of Henri Poincaré or Hilbert, Weyl came as close as anyone.

Weyl is quoted as saying:

“I am bold enough to believe that the whole of physical phenomena may be derived from one single universal world-law of the greatest mathematical simplicity.”

(The Trouble with Physics, Lee Smolin, Houghton Mifflin Co., 2006, page 46)

This reminds one of Stephen Hawking’s credo, repeated often and without wavering, that the rational human mind would soon understand “the mind of God.”

This Weyl-Hawking-Einstein program of “knowing the mind of God” via a world-equation seems extremely charming and beautiful as a human quest, but potentially monomaniacal à la Captain Ahab in Moby-Dick. The reason that only Ishmael survives the sinking of the ship, the Pequod, is that he has become non-monomaniacal and accepts the variegatedness of the world, thus achieving a more moderate view of human existence and its limits. “The Whiteness of the Whale” chapter in the novel gives you Melville’s sense (from 1851) of the unknowability of some final world-reality or world-theory or world-equation.

Education and the Pursuit of Improved Overviews

Professor Sherman Stein was a prominent mathematician and popularizer, and his book Mathematics: The Man-Made Universe is a modern classic. The subtitle “The Man-Made Universe” already tells you that you’re looking at a clear exposition of “humans made math,” in contrast to the “mathematics fundamentalism” of Professor Max Tegmark of MIT, whose tone seems to say that mathematics allowed for reality and for us.

This is of course a perfect “argument without end.” It is the kind of argument that should help a student rethink their assumptions and not obsess over some once-and-for-all final understanding, which can become an “idée fixe” (French for “fixed idea,” indicating being overly rigid or stuck).

In the preface to Professor Stein’s mathematics survey classic, he writes:

“We all find ourselves in a world we never made. Though we become used to the kitchen sink, we do not understand the atoms that compose it. The kitchen sink, like all the objects surrounding us, is a convenient abstraction.

Mathematics, on the other hand, is completely the work of man.

Each theorem, each proof, is the product of the human mind. In mathematics all the cards can be put on the table.

In this sense, mathematics is concrete whereas the world is abstract.”

(Sherman Stein, Mathematics: The Man-Made Universe, Third Edition, Dover Publications, 1999, Preface, page xiii)

Meta-intelligence tells you that if views of what is real, what is concrete, what is abstract, what is man-made, and what is mathematical are so radically different depending on the interpreter or analyst, it makes prudent sense to keep various views in one’s mind and modify or juggle them as you go along. Our ability as a species to nail down for eternity what the natures of mathematics, humans, and kitchen sinks are, and how they all interrelate, is elusive and tangled up in language, as Wittgenstein keeps saying.

Education and “The Three-Body Problem”

The brilliant math-watcher, Ian Stewart, says of this classic physics problem, the Three-Body Problem:

Newton’s Law of Gravity runs into problems with three bodies (earth, moon, sun, say).

In particular, the gravitational interaction of a mere three bodies, assumed to obey Newton’s inverse square law of gravity, stumped the mathematical world for centuries.

It still does, if what you want is a nice formula for the orbits of those bodies. In fact, we now know that three-body dynamics is chaotic–so irregular that it has elements of randomness.

There is no tidy geometric characterization of three-body orbits, not even a formula in coordinate geometry.

Until the late nineteenth century, very little was known about the motion of three celestial bodies, even if one of them were so tiny that its mass could be ignored.

(Visions of Infinity: The Great Mathematical Problems, Ian Stewart, Basic Books, 2014, page 136)
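Stewart’s “no tidy formula” is why, in practice, three-body orbits are computed rather than solved. A minimal numerical sketch (assuming NumPy and SciPy; units are nondimensional, and the starting state is the well-known “figure-eight” choreography, a rare periodic solution found numerically in the 1990s and later proven to exist):

```python
# Three bodies under Newtonian gravity: no closed-form orbit formula
# exists, so we integrate the equations of motion numerically.
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0                        # gravitational constant (nondimensional)
m = np.array([1.0, 1.0, 1.0])  # three equal masses

def accelerations(pos):
    """Pairwise inverse-square attractions on each of the three bodies."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return acc

def rhs(t, y):
    pos, vel = y[:6].reshape(3, 2), y[6:].reshape(3, 2)
    return np.concatenate([vel.ravel(), accelerations(pos).ravel()])

# Figure-eight initial conditions (planar, equal masses).
p1 = np.array([0.97000436, -0.24308753])
v3 = np.array([-0.93240737, -0.86473146])
y0 = np.concatenate([p1, -p1, [0.0, 0.0], -v3 / 2, -v3 / 2, v3])

sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-9)
print(sol.y[:2, -1])  # body 1's final position
```

Such special periodic solutions are exceptional; for generic starting conditions the same integrator produces the irregular, effectively random orbits Stewart describes, and numerical integration is the only general tool.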

Henri Poincaré, the great mathematician, wrestled with this with tremendous intricacy and ingenuity all his life:

Jules Henri Poincaré was a French mathematician, theoretical physicist, engineer, and philosopher of science. He is often described as a polymath, and in mathematics as “The Last Universalist,” since he excelled in all fields of the discipline as it existed during his lifetime.

Born: April 29, 1854, Nancy, France
Died: July 17, 1912, Paris, France.

We can now think of applying, in an evocative and not a rigorously mathematical way, the unexpected difficulties of the three-body problem to the n-body (i.e., more than three) problems of sociology or economics or history itself, and sense that social life is always multifactorial and not readily pin-downable: “everything is causing everything else,” and extracting mono-causal explanations must be elusive for all the planetary and Poincaré reasons and beyond.

This suggests to the student that novels are one attempt to say something about n-body human “orbits” based on “n-body” stances and “circumstances” with large amounts of randomness governing the untidy mess that dominates human affairs.

Words are deployed in novels and not numbers as in physics, but the “recalcitrance” of the world, social and physical, remains permanent.

Education and meta-intelligence become more complete when we see how the world, as someone put it, “won’t meet us halfway.” Remember Ian Stewart’s warning above:

“There is no tidy geometric characterization of three-body orbits…” and you sense that this must apply to human affairs even more deeply.

Early View Alert: Water Resources Research

from the American Geophysical Union’s journals:

Research Articles

Modeling the Snow Depth Variability with a High-Resolution Lidar Data Set and Nonlinear Terrain Dependency

by T. Skaugen & K. Melvold

Summary: Using airborne laser scanning, 400 million snow depth measurements have been collected at Hardangervidda in Southern Norway. The amount of data has made possible in-depth studies of the spatial distribution of snow and its interaction with the terrain and vegetation. We find that terrain variability, expressed by the square slope, the average amount of snow, and whether the terrain is vegetated or not, largely explains the variation of snow depth. With this information it is possible to develop equations that predict snow depth variability, which can be used in environmental models, which in turn are used for important tasks such as flood forecasting and hydropower planning. One major advantage is that these equations can be determined from data that are, in principle, available everywhere, provided there exists a detailed digital model of the terrain.

[Archived PDF article]

Phosphorus Transport in Intensively Managed Watersheds

by Christine L. Dolph, Evelyn Boardman, Mohammad Danesh-Yazdi, Jacques C. Finlay, Amy T. Hansen, Anna C. Baker & Brent Dalzell

Abstract: When phosphorus from farm fertilizer, eroded soil, and septic waste enters our water, it leads to problems like toxic algae blooms, fish kills, and contaminated drinking supplies. In this study, we examine how phosphorus travels through streams and rivers of farmed areas. In the past, soil lost from farm fields was considered the biggest contributor to phosphorus pollution in agricultural areas, but our study shows that phosphorus originating from fertilizer stores in the soil and from crop residue, as well as from soil eroded from sensitive ravines and bluffs, contributes strongly to the total amount of phosphorus pollution in agricultural rivers. We also found that most phosphorus leaves farmed watersheds during the very highest river flows. Increased frequency of large storms due to climate chaos will therefore likely worsen water quality in areas that are heavily loaded with phosphorus from farm fertilizers. Protecting water in agricultural watersheds will require knowledge of the local landscape along with strategies to address (1) drivers of climate chaos, (2) reduction in the highest river flows, and (3) ongoing inputs and legacy stores of phosphorus that are readily transported across land and water.

[Archived PDF of article]

Detecting the State of the Climate System via Artificial Intelligence to Improve Seasonal Forecasts and Inform Reservoir Operations

by Matteo Giuliani, Marta Zaniolo, Andrea Castelletti, Guido Davoli & Paul Block

Abstract: Increasingly variable hydrologic regimes combined with more frequent and intense extreme events are challenging water systems management worldwide. These trends emphasize the need for accurate medium- to long-term predictions to prompt timely anticipatory operations. Although in some locations global climate oscillations, and particularly the El Niño Southern Oscillation (ENSO), may contribute to extending forecast lead times, in other regions there is no consensus on how ENSO can be detected and used, as local conditions are also influenced by other concurrent climate signals. In this work, we introduce the Climate State Intelligence framework to capture the state of multiple global climate signals via artificial intelligence and improve seasonal forecasts. These forecasts are used as additional inputs for informing water system operations and their value is quantified as the corresponding gain in system performance. We apply the framework to the Lake Como basin, a regulated lake in northern Italy mainly operated for flood control and irrigation supply. Numerical results show the existence of notable teleconnection patterns dependent on both ENSO and the North Atlantic Oscillation over the Alpine region, which contribute to generating skillful seasonal precipitation and hydrologic forecasts. The use of this information for conditioning the lake operations produces an average 44% improvement in system performance with respect to a baseline solution not informed by any forecast, a gain that further increases during extreme drought episodes. Our results also suggest that observed preseason sea surface temperature anomalies appear more valuable than hydrologic-based seasonal forecasts, producing an average 59% improvement in system performance.

[Archived PDF of article]

Landscape Water Storage and Subsurface Correlation from Satellite Surface Soil Moisture and Precipitation Observations

by Daniel J. Short Gianotti, Guido D. Salvucci, Ruzbeh Akbar, Kaighin A. McColl, Richard Cuenca & Dara Entekhabi

Abstract: Surface soil moisture measurements are typically correlated to some degree with changes in subsurface soil moisture. We calculate a hydrologic length scale, λ, which represents (1) the mean-state estimator of total column water changes from surface observations, (2) an e-folding length scale for subsurface soil moisture profile covariance fall-off, and (3) the best second-moment mass-conserving surface layer thickness for a simple bucket model, defined by the data streams of satellite soil moisture and precipitation retrievals. Calculations are simple, based on three variables: the autocorrelation and variance of surface soil moisture and the variance of the net flux into the column (precipitation minus estimated losses), which can be estimated directly from the soil moisture and precipitation time series. We develop a method to calculate the lag-one autocorrelation for irregularly observed time series and show global surface soil moisture autocorrelation. λ is driven in part by local hydroclimate conditions and is generally larger than the 50-mm nominal radiometric length scale for the soil moisture retrievals, suggesting broad subsurface correlation due to moisture drainage. In all but the most arid regions, radiometric soil moisture retrievals provide more information about ecosystem-relevant water fluxes than satellite radiometers can explicitly “see”; lower-frequency radiometers are expected to provide still more statistical information about subsurface water dynamics.

[Archived PDF of article]

Process-Guided Deep Learning Predictions of Lake Water Temperature

by Jordan S. Read, Xiaowei Jia, Jared Willard, Alison P. Appling, Jacob A. Zwart, Samantha K. Oliver, Anuj Karpatne, Gretchen J. A. Hansen, Paul C. Hanson, William Watkins, Michael Steinbach & Vipin Kumar

Abstract: The rapid growth of data in water resources has created new opportunities to accelerate knowledge discovery with the use of advanced deep learning tools. Hybrid models that integrate theory with state-of-the-art empirical techniques have the potential to improve predictions while remaining true to physical laws. This paper evaluates the Process-Guided Deep Learning (PGDL) hybrid modeling framework with a use-case of predicting depth-specific lake water temperatures. The PGDL model has three primary components: a deep learning model with temporal awareness (long short-term memory recurrence), theory-based feedback (model penalties for violating conservation of energy), and model pre-training to initialize the network with synthetic data (water temperature predictions from a process-based model). In situ water temperatures were used to train the PGDL model, a deep learning (DL) model, and a process-based (PB) model. Model performance was evaluated in various conditions, including when training data were sparse and when predictions were made outside of the range in the training data set. The PGDL model performance (as measured by root-mean-square error (RMSE)) was superior to DL and PB for two detailed study lakes, but only when pretraining data included greater variability than the training period. The PGDL model also performed well when extended to 68 lakes, with a median RMSE of 1.65 °C during the test period (DL: 1.78 °C, PB: 2.03 °C; in a small number of lakes PB or DL models were more accurate). This case study demonstrates that integrating scientific knowledge into deep learning tools shows promise for improving predictions of many important environmental variables.

[Archived PDF of article]
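The “process-guided” idea in the abstract above (a standard supervised loss plus a penalty for violating physics) can be sketched generically. This is an illustration of the concept, not the authors’ PGDL code; it assumes PyTorch, and the energy-residual tensor is a hypothetical stand-in for the paper’s energy-budget term:

```python
# Generic physics-guided loss: data misfit + penalty for violating
# a conservation law (here, a stand-in "energy residual").
import torch

def physics_guided_loss(pred, obs, energy_residual, weight=0.1):
    """Mean-squared data error plus a weighted physics-violation penalty."""
    supervised = torch.mean((pred - obs) ** 2)
    physics = torch.mean(energy_residual ** 2)
    return supervised + weight * physics

# Dummy tensors standing in for model predictions, observations, and the
# energy-budget residual a process model would supply.
pred = torch.randn(32, requires_grad=True)
obs = torch.randn(32)
residual = torch.randn(32)
loss = physics_guided_loss(pred, obs, residual)
loss.backward()  # gradients flow through both terms
```

The penalty term is what keeps the network “true to physical laws” even where training data are sparse.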

Adjustment of Radar-Gauge Rainfall Discrepancy Due to Raindrop Drift and Evaporation Using the Weather Research and Forecasting Model and Dual-Polarization Radar

by Qiang Dai, Qiqi Yang, Dawei Han, Miguel A. Rico-Ramirez & Shuliang Zhang

Abstract: Radar-gauge rainfall discrepancies are usually attributed to radar rainfall measurements, ignoring the fact that radar observes rain aloft while a rain gauge measures rainfall on the ground. Treatments of raindrops observed aloft by weather radars assume that raindrops fall vertically to the ground without changing in size. This premise obviously does not stand, because raindrop location changes due to wind drift and raindrop size changes due to evaporation. However, both effects are usually ignored. This study proposes a fully formulated scheme to numerically simulate both raindrop drift and evaporation in the air and reduce the uncertainties of radar rainfall estimation. The Weather Research and Forecasting model is used to simulate high-resolution three-dimensional atmospheric fields. A dual-polarization radar retrieves the raindrop size distribution for each radar pixel. Three schemes are designed and implemented using the Hameldon Hill radar in Lancashire, England. The first considers only raindrop drift, the second considers only evaporation, and the last considers both aspects. Results show that wind advection can cause a large drift for small raindrops. Considerable loss of rainfall is observed due to raindrop evaporation. Overall, the three schemes improve the radar-gauge correlation by 3.2%, 2.9%, and 3.8% and reduce their discrepancy by 17.9%, 8.6%, and 21.7%, respectively, over eight selected events. This study contributes to the improvement of quantitative precipitation estimation from radar polarimetry and allows a better understanding of precipitation processes.

[Archived PDF of article]

The Role of Collapsed Bank Soil on Tidal Channel Evolution: A Process-Based Model Involving Bank Collapse and Sediment Dynamics

by K. Zhao, Z. Gong, F. Xu, Z. Zhou, C. K. Zhang, G. M. E. Perillo & G. Coco

Abstract: We develop a process-based model to simulate the geomorphodynamic evolution of tidal channels, considering hydrodynamics, flow-induced bank erosion, gravity-induced bank collapse, and sediment dynamics. A stress-deformation analysis and the Mohr-Coulomb criterion, calibrated through previous laboratory experiments, are included in a model simulating bank collapse. Results show that collapsed bank soil plays a primary role in the dynamics of bank retreat. For bank collapse with small bank height, tensile failure in the middle of the bank (Stage I), tensile failure on the bank top (Stage II), and sectional cracking from bank top to the toe (Stage III) are present sequentially before bank collapse occurs. A significant linear relation is observed between bank height and the contribution of bank collapse to bank retreat. Contrary to flow-induced bank erosion, bank collapse prevents further widening since the collapsed bank soil protects the bank from direct bank erosion. The bank profile is linear or slightly convex, and the planimetric shape of tidal channels (gradually decreasing in width landward) is similar when approaching equilibrium, regardless of the consideration of bank erosion and collapse. Moreover, the simulated width-to-depth ratio in all runs is comparable with observations from the Venice Lagoon. This indicates that the equilibrium configuration of tidal channels depends on hydrodynamic conditions and sediment properties, while bank erosion and collapse greatly affect the transient behavior (before equilibrium) of the tidal channels. Overall, this contribution highlights the importance of collapsed bank soil in investigating tidal channel morphodynamics using a combined perspective of geotechnics and soil mechanics.

[Archived PDF of article]

A Physically Based Method for Soil Evaporation Estimation by Revisiting the Soil Drying Process

by Yunquan Wang, Oliver Merlin, Gaofeng Zhu & Kun Zhang

Abstract: While numerous models exist for soil evaporation estimation, they are more or less empirically based, either in the model structure or in the determination of introduced parameters. The main difficulty lies in representing the water stress factor, which is usually thought to be limited by capillarity-supported water supply or by vapor diffusion flux. Recent progress in understanding soil hydraulic properties, however, has found that film flow, which is often neglected, is the dominant process under low-moisture conditions. By including the impact of film flow, a reexamination of the typical evaporation process found that this usually neglected film flow might be the dominant process supporting Stage II evaporation (i.e., the fast falling rate stage), besides the generally accepted capillary flow-supported Stage I evaporation and the vapor diffusion-controlled Stage III evaporation. A physically based model for estimating the evaporation rate was then developed by parameterizing the Buckingham-Darcy law. Interestingly, the empirical Bucket model was found to be a specific form of the proposed model. The proposed model requires the in-equilibrium relative humidity as the sole input for representing water stress and introduces no adjustable parameter in relation to soil texture. The impact of vapor diffusion was also discussed. Model testing with laboratory data yielded an excellent agreement with observations for both thin-soil and thick-soil column evaporation experiments. Model evaluation at 15 field sites generally showed a close agreement with observations, with a great improvement in the lower range of evaporation rates in comparison with the widely applied Priestley and Taylor Jet Propulsion Laboratory model.

[Archived PDF of article]

Floodplain Land Cover and Flow Hydrodynamic Control of Overbank Sedimentation in Compound Channel Flows

by Carmelo Juez, C. Schärer, H. Jenny, A. J. Schleiss & M. J. Franca

Abstract: Overbank sedimentation is predominantly due to fine sediments transported under suspension that become trapped and settle in floodplains when high-flow conditions occur in rivers. In a compound channel, the processes of exchanging water and fine sediments between the main channel and floodplains regulate the geomorphological evolution and are crucial for the maintenance of the ecosystem functions of the floodplains. These hydrodynamic and morphodynamic processes depend on variables such as the flow-depth ratio between the water depth in the main channel and the water depth in the floodplain, the width ratio between the width of the main channel and the width of the floodplain, and the floodplain land cover characterized by the type of roughness. This paper examines, by means of laboratory experiments, how these variables are interlinked and how the deposition of sediments in the compound channel is jointly determined by them. The combination of these compound channel characteristics modulates the production of large vertical-axis turbulent vortical structures in the mixing interface. Such vortical structures determine the water mass exchange between the main channel and the floodplain, conditioning in turn the transport of sediment particles conveyed in the water and, therefore, the resulting overbank sedimentation. The existence and pattern of sedimentation are conditioned by both the hydrodynamic variables (the flow-depth ratio and the width ratio) and the floodplain land cover, simulated in terms of smooth walls, meadow-type roughness, sparse-wood-type roughness, and dense-wood-type roughness.

[Archived PDF of article]

Identifying Actionable Compromises: Navigating Multi-city Robustness Conflicts to Discover Cooperative Safe Operating Spaces for Regional Water Supply Portfolios

by D. F. Gold, P. M. Reed, B. C. Trindade & G. W. Characklis

Summary: Cooperation among neighboring urban water utilities can help water managers face challenges stemming from climate change and population growth. Water utilities can cooperate by coordinating water transfers and water restrictions in times of water scarcity (drought) so that water is provided to areas that need it most. In order to successfully implement these policies, however, cooperative partners must find a compromise that is acceptable to all regional actors, a task complicated by asymmetries in resources and risks often present in regional systems. The possibility of deviations from agreed-upon actions is another complicating factor that has not been addressed in the water resources literature. Our study focuses on four urban water utilities in the Research Triangle region of North Carolina that are investigating cooperative drought mitigation strategies. We contribute a framework that includes the use of simulation models, optimization algorithms, and statistical tools to aid cooperating partners in finding acceptable compromises that are tolerant of modest deviations in planned actions. Our results can be used by regional utilities to avoid or alleviate potential planning conflicts and are broadly applicable to urban regional water supply planning across the globe.

[Archived PDF of article]

Detecting Changes in River Flow Caused by Wildfires, Storms, Urbanization, Regulation, and Climate across Sweden

by Berit Arheimer & Göran Lindström

Abstract: Changes in river flow may appear from shifts in land cover, constructions in the river channel, and climatic change, but currently there is a lack of understanding of the relative importance of these drivers. Therefore, we collected gauged river flow time series from 1961 to 2018 from across Sweden for 34 disturbed catchments to quantify how the various types of disturbances have affected river flow. We used trend analysis and the differences in observations versus hydrological modeling to explore the effects on river flow from (1) land cover changes from wildfires, storms, and urbanization; (2) dam constructions with regulations for hydropower production; and (3) climate-change impact in otherwise undisturbed catchments. A mini model ensemble, consisting of three versions of the S-HYPE model, was used, and the three models gave similar results. We searched for changes in annual and daily stream flow, seasonal flow regime, and flow duration curves. The results show that regulation of river flow has the largest impact, reducing spring floods by up to 100% and increasing winter flow by several orders of magnitude, with substantial effects transmitted far downstream. Climate change altered the total river flow by up to 20%. Tree removal by wildfires and storms has minor impacts at medium and large scales. Urbanization, on the contrary, showed a 20% increase in high flows even at medium scales. This study emphasizes the benefits of combining observed time series with numerical modeling to exclude the effect of varying weather conditions when quantifying the effects of various drivers on long-term streamflow shifts.

[Archived PDF of article]

Assessing the Feasibility of Satellite-Based Thresholds for Hydrologically Driven Landsliding

by Matthew A. Thomas, Brian D. Collins & Benjamin B. Mirus

Summary: Soil wetness and rainfall contribute to landslides across the world. Using soil moisture sensors and rain gauges, these environmental conditions have been monitored at numerous points across the Earth’s surface to define threshold conditions, above which landsliding should be expected for a localized area. Satellite-based technologies also deliver estimates of soil wetness and rainfall, potentially offering an approach to develop thresholds as part of landslide warning systems over larger spatial scales. To evaluate the potential for using satellite-based measurements for landslide warning, we compare the accuracy of landslide thresholds defined with ground- versus satellite-based soil wetness and rainfall information. We find that the satellite-based data over-predict soil wetness during the time of year when landslides are most likely to occur, resulting in thresholds that also over-predict the potential for landslides relative to thresholds informed by direct measurements on the ground. Our results encourage the installation of more ground-based monitoring stations in landslide-prone settings and the cautious use of satellite-based data when more direct measurements are not available.

[Archived PDF of article]

Modeling the Translocation and Transformation of Chemicals in the Soil-Plant Continuum: A Dynamic Plant Uptake Module for the HYDRUS Model

by Giuseppe Brunetti, Radka Kodešová & Jiří Šimůnek

Abstract: Food contamination is responsible for thousands of deaths worldwide every year. Plants represent the most common pathway for chemicals into the human and animal food chain. Although existing dynamic plant uptake models for chemicals are crucial for the development of reliable mitigation strategies for food pollution, they nevertheless simplify the description of physicochemical processes in soil and plants, mass transfer processes between soil and plants and in plants, and transformation in plants. To fill this scientific gap, we couple a widely used hydrological model (HYDRUS) with a multi-compartment dynamic plant uptake model, which accounts for differentiated multiple metabolization pathways in plant tissues. The developed model is validated first theoretically and then experimentally against measured data from an experiment on the translocation and transformation of carbamazepine in three vegetables. The analysis is further enriched by performing a global sensitivity analysis on the soil-plant model to identify factors driving the compound’s accumulation in plants’ shoots, as well as to elucidate the role and the importance of soil hydraulic properties in the plant uptake process. Results of the multilevel numerical analysis emphasize the model’s flexibility and demonstrate its ability to accurately reproduce physicochemical processes involved in the dynamic plant uptake of chemicals from contaminated soils.

[Archived PDF of article]

Physical Controls on Salmon Redd Site Selection in Restored Reaches of a Regulated, Gravel-Bed River

by Lee R. Harrison, Erin Bray, Brandon Overstreet, Carl J. Legleiter, Rocko A. Brown, Joseph E. Merz, Rosealea M. Bond, Colin L. Nicol & Thomas Dunne

Abstract: Large-scale river restoration programs have emerged recently as a tool for improving spawning habitat for native salmonids in highly altered river ecosystems. Few studies have quantified the extent to which restored habitat is utilized by salmonids, which habitat features influence redd site selection, or the persistence of restored habitat over time. We investigated fall-run Chinook salmon spawning site utilization and measured and modeled corresponding habitat characteristics in two restored reaches: a reach of channel and floodplain enhancement completed in 2013 and a reconfigured channel and floodplain constructed in 2002. Redd surveys demonstrated that both restoration projects supported a high density of salmon redds, 3 and 14 years following restoration. Salmon redds were constructed in coarse gravel substrates located in areas of high sediment mobility, as determined by measurements of gravel friction angles and a grain entrainment model. Salmon redds were located near transitions between pool-riffle bedforms in regions of high predicted hyporheic flows. Habitat quality (quantified as a function of stream hydraulics) and hyporheic flow were both strong predictors of redd occurrence, though the relative roles of these variables differed between sites. Our findings indicate that physical controls on redd site selection in restored channels were similar to those reported for natural channels elsewhere. Our results further highlight that in addition to traditional habitat criteria (e.g., water depth, velocity, and substrate size), quantifying sediment texture and mobility, as well as intragravel flow, provides a more complete understanding of the ecological benefits provided by river restoration projects.

[Archived PDF of article]

Mountain-Block Recharge: A Review of Current Understanding

by Katherine H. Markovich, Andrew H. Manning, Laura E. Condon & Jennifer C. McIntosh

Abstract: Mountain-block recharge (MBR) is the subsurface inflow of groundwater to lowland aquifers from adjacent mountains. MBR can be a major component of recharge but remains difficult to characterize and quantify due to limited hydrogeologic, climatic, and other data in the mountain block and at the mountain front. The number of MBR-related studies has increased dramatically in the 15 years since the last review of the topic was conducted by Wilson and Guan (2004), generating important advancements. We review this recent body of literature, summarize current understanding of factors controlling MBR, and provide recommendations for future research priorities. Prior to 2004, most MBR studies were performed in the southwestern United States. Since then, numerous studies have detected and quantified MBR in basins around the world, typically estimating MBR to be 5–50% of basin-fill aquifer recharge. Theoretical studies using generic numerical modeling domains have revealed fundamental hydrogeologic and topographic controls on the amount of MBR and where it originates within the mountain block. Several mountain-focused hydrogeologic studies have confirmed the widespread existence of mountain bedrock aquifers hosting considerable groundwater flow and, in some cases, identified the occurrence of interbasin flow leaving headwater catchments in the subsurface—both of which are required for MBR to occur. Future MBR research should focus on the collection of high-priority data (e.g., subsurface data near the mountain front and within the mountain block) and the development of sophisticated coupled models calibrated to multiple data types to best constrain MBR and predict how it may change in response to climate warming.

[Archived PDF of article]

An Adjoint Sensitivity Model for Steady-State Sequentially Coupled Radionuclide Transport in Porous Media

by Mohamed Hayek, Banda S. RamaRao & Marsh Lavenue

Abstract: This work presents an efficient mathematical/numerical model to compute the sensitivity coefficients of a predefined performance measure to model parameters for one-dimensional steady-state sequentially coupled radionuclide transport in a finite heterogeneous porous medium. The model is based on the adjoint sensitivity approach that offers an elegant and computationally efficient alternative way to compute the sensitivity coefficients. The transport parameters include the radionuclide retardation factors due to sorption, the Darcy velocity, and the effective diffusion/dispersion coefficients. Both continuous and discrete adjoint approaches are considered. The partial differential equations associated with the adjoint system are derived based on the adjoint state theory for coupled problems. Physical interpretations of the adjoint states are given in analogy to results obtained in the theory of groundwater flow. For the homogeneous case, analytical solutions for primary and adjoint systems are derived and presented in closed forms. Numerically calculated solutions are compared to the analytical results and show excellent agreement. Insights from sensitivity analysis are discussed to get a better understanding of the values of sensitivity coefficients. The sensitivity coefficients are also computed numerically by finite differences. The numerical sensitivity coefficients successfully reproduce the analytically derived sensitivities based on adjoint states. A derivative-based global sensitivity method coupled with the adjoint state method is presented and applied to a real field case represented by a site currently being considered for underground nuclear storage in Northern Switzerland, “Zürich Nordost,” to demonstrate the proposed method. The results show the advantage of the adjoint state method compared to other methods in terms of computational effort.

[Archived PDF of article]

Hydraulic Reconstruction of the 1818 Giétro Glacial Lake Outburst Flood

by C. Ancey, E. Bardou, M. Funk, M. Huss, M. A. Werder & T. Trewhela

Summary: Every year, natural and man-made dams fail and cause flooding. For public authorities, estimating the risk posed by dams is essential to good risk management. Efficient computational tools are required for analyzing flood risk. Testing these tools is an important step toward ensuring their reliability and performance. Knowledge of major historical floods makes it possible, in principle, to benchmark models, but because historical data are often incomplete and fraught with potential inaccuracies, validation is seldom satisfactory. Here we present one of the few major historical floods for which information on flood initiation and propagation is available and detailed: the Giétro flood. This flood occurred in June 1818 and devastated the Drance Valley in Switzerland. In the spring of that year, ice avalanches blocked the valley floor and formed a glacial lake, whose volume is today estimated at 25 × 10⁶ m³. The local authorities initiated protection works: A tunnel was drilled through the ice dam, and about half of the stored water volume was drained in 2.5 days. On 16 June 1818, the dam failed suddenly because of significant erosion at its base; this caused a major flood. This paper presents a numerical model for estimating flow rates, velocities, and depths during the dam drainage and flood flow phases. The numerical results agree well with historical data. The flood reconstruction shows that relatively simple models can be used to estimate the effects of a major flood with good accuracy.

[Archived PDF of article]

The Representation of Hydrological Dynamical Systems Using Extended Petri Nets (EPN)

by Marialaura Bancheri, Francesco Serafin & Riccardo Rigon

Abstract: This work presents a new graphical system to represent hydrological dynamical models and their interactions. We propose an extended version of the Petri Nets mathematical modeling language, the Extended Petri Nets (EPN), which allows for an immediate translation from the graphics of the model to its mathematical representation in a clear way. We introduce the principal objects of the EPN representation (i.e., places, transitions, arcs, controllers, and splitters) and their use in hydrological systems. We show how to cast hydrological models in EPN and how to complete their mathematical description using a dictionary for the symbols and an expression table for the flux equations. Thanks to the compositional property of EPN, we show how it is possible to represent either a single hydrological response unit or a complex catchment where multiple systems of equations are solved simultaneously. Finally, EPN can be used to describe complex Earth system models that include feedback between the water, energy, and carbon budgets. The representation of hydrological dynamical systems with EPN provides a clear visualization of the relations and feedback between subsystems, which can be studied with techniques introduced in nonlinear systems theory and control theory.

[Archived PDF of article]

A Regularization Approach to Improve the Sequential Calibration of a Semidistributed Hydrological Model

by A. de Lavenne, V. Andréassian, G. Thirel, M.-H. Ramos & C. Perrin

Abstract: In semidistributed hydrological modeling, sequential calibration usually refers to the calibration of a model by considering not only the flows observed at the outlet of a catchment but also the different gauging points inside the catchment from upstream to downstream. While sequential calibration aims to optimize the performance at these interior gauged points, we show that it generally fails to improve performance at ungauged points. In this paper, we propose a regularization approach for the sequential calibration of semidistributed hydrological models. It consists in adding a priori information on optimal parameter sets for each modeling unit of the semi-distributed model. Calibration iterations are then performed by jointly maximizing simulation performance and minimizing drifts from the a priori parameter sets. The combination of these two sources of information is handled by a parameter k to which the method is quite sensitive. The method is applied to 1,305 catchments in France over 30 years. The leave-one-out validation shows that, at locations considered as ungauged, model simulations are significantly improved (over all the catchments, the median KGE criterion is increased from 0.75 to 0.83 and the first quartile from 0.35 to 0.66), while model performance at gauged points is not significantly impacted by the use of the regularization approach. Small catchments benefit most from this calibration strategy. These performances are, however, very similar to the performances obtained with a lumped model based on similar conceptualization.

[Archived PDF of article]

Proneness of European Catchments to Multiyear Streamflow Droughts

by Manuela I. Brunner & Lena M. Tallaksen

Summary: Droughts lasting longer than 1 year can have severe ecological, social, and economic impacts. They are characterized by below-average flows, not only during the low-flow period but also in the high-flow period when water stores such as groundwater or artificial reservoirs are usually replenished. Limited catchment storage might worsen the impacts of droughts and make water management more challenging. Knowledge on the occurrence of multiyear drought events enables better adaptation and increases preparedness. In this study, we assess the proneness of European catchments to multiyear droughts by simulating long discharge records. Our findings show that multiyear drought events mainly occur in regions where the discharge seasonality is mostly influenced by rainfall, whereas catchments whose seasonality is dominated by melt processes are less affected. The strong link between the proneness of a catchment to multiyear events and its discharge seasonality leads to the conclusion that future changes toward less snow storage and thus less snow melt will increase the probability of multiyear drought occurrence.

[Archived PDF of article]

Equifinality and Flux Mapping: A New Approach to Model Evaluation and Process Representation under Uncertainty

by Sina Khatami, Murray C. Peel, Tim J. Peterson & Andrew W. Western

Abstract: Uncertainty analysis is an integral part of any scientific modeling, particularly within the domain of hydrological sciences given the various types and sources of uncertainty. At the center of uncertainty rests the concept of equifinality, that is, reaching a given endpoint (finality) through different pathways. The operational definition of equifinality in hydrological modeling is that various model structures and/or parameter sets (i.e., equal pathways) are equally capable of reproducing a similar (not necessarily identical) hydrological outcome (i.e., finality). Here we argue that there is more to model equifinality than model structures/parameters, that is, other model components can give rise to model equifinality and/or could be used to explore equifinality within model space. We identified six facets of model equifinality, namely, model structure, parameters, performance metrics, initial and boundary conditions, inputs, and internal fluxes. Focusing on model internal fluxes, we developed a methodology called flux mapping that has fundamental implications in understanding and evaluating model process representation within the paradigm of multiple working hypotheses. To illustrate this, we examine the equifinality of runoff fluxes of a conceptual rainfall-runoff model for a number of different Australian catchments. We demonstrate how flux maps can give new insights into the model behavior that cannot be captured by conventional model evaluation methods. We discuss the advantages of flux space, as a subspace of the model space not usually examined, over parameter space. We further discuss the utility of flux mapping in hypothesis generation and testing, extendable to any field of scientific modeling of open complex systems under uncertainty.

[Archived PDF of article]
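A toy example makes the internal-flux point in the abstract above vivid. In the sketch below (our own construction, not the authors' model), two parameter sets produce identical streamflow at the outlet yet partition it differently between internal pathways, so output-based metrics cannot tell them apart while a flux map can.

```python
import numpy as np

rng = np.random.default_rng(42)
precip = rng.gamma(shape=2.0, scale=3.0, size=365)  # toy daily rainfall


def toy_model(p, runoff_coeff, surface_frac):
    """Total runoff is fixed by runoff_coeff; surface_frac only
    splits it between two internal pathways."""
    total = runoff_coeff * p
    surface = surface_frac * total
    subsurface = (1.0 - surface_frac) * total
    return total, surface, subsurface


# Two "equifinal" parameter sets: identical flow at the outlet...
q1, surf1, sub1 = toy_model(precip, 0.4, 0.8)
q2, surf2, sub2 = toy_model(precip, 0.4, 0.2)
print(np.allclose(q1, q2))         # True: indistinguishable at the outlet
# ...but very different internal flux partitioning:
print(surf1.mean(), surf2.mean())  # the flux map separates them
```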

Role of Extreme Precipitation and Initial Hydrologic Conditions on Floods in Godavari River Basin, India

by Shailesh Garg & Vimal Mishra

Abstract: Floods are the most frequent natural calamity in India. The Godavari river basin (GRB) witnessed several floods in the past 50 years. Notwithstanding the large damage and economic loss, the role of extreme precipitation and antecedent moisture conditions on floods in the GRB remains unexplored. Using the observations and the well-calibrated Variable Infiltration Capacity model, we estimate the changes in the extreme precipitation and floods in the observed (1955–2016) and projected future (2071–2100) climate in the GRB. We evaluate the role of initial hydrologic conditions and extreme precipitation on floods in both observed and projected future climate. We find a statistically significant increase in annual maximum precipitation for the catchments upstream of four gage stations during the 1955–2016 period. However, the rise in annual maximum streamflow at all the four gage stations in the GRB was not statistically significant. The probability of floods driven by extreme precipitation (PFEP) varies between 0.55 and 0.7 at the four gage stations of the GRB, which declines with the size of the basins. More than 80% of the extreme precipitation events that cause floods occur under wet antecedent moisture conditions at all the four locations in the GRB. The frequency of extreme precipitation events is projected to rise twofold or more (under RCP 8.5) in the future (2071–2100) at all four locations. However, the increased frequency of floods under the future climate will largely be driven by the substantial rise in extreme precipitation events rather than by wet antecedent moisture conditions.

[Archived PDF of article]
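The probabilities quoted in the abstract above are, in essence, conditional fractions over event records. Here is a minimal sketch of how such fractions might be computed from boolean event series; the numbers are made-up toy values, and the paper's actual event definitions are considerably more careful.

```python
import numpy as np

# Toy boolean records, one entry per flood event (illustrative values only).
flood_from_extreme_precip = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0], dtype=bool)
wet_antecedent = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1, 1], dtype=bool)

# PFEP: fraction of floods driven by extreme precipitation.
pfep = flood_from_extreme_precip.mean()

# Of the extreme-precipitation floods, how many began on wet soils?
wet_share = wet_antecedent[flood_from_extreme_precip].mean()

print(f"PFEP = {pfep:.2f}, wet-antecedent share = {wet_share:.2f}")
```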

Research Letters

Combined Effect of Tides and Varying Inland Groundwater Input on Flow and Salinity Distribution in Unconfined Coastal Aquifers

by Woei Keong Kuan, Pei Xin, Guangqiu Jin, Clare E. Robinson, Badin Gibbes & Ling Li

Abstract: Tides and seasonally varying inland freshwater input, with different fluctuation periods, are important factors affecting flow and salt transport in coastal unconfined aquifers. These processes affect submarine groundwater discharge (SGD) and associated chemical transport to the sea. While the individual effects of these forcings have previously been studied, here we conducted physical experiments and numerical simulations to evaluate the interactions between varying inland freshwater input and tidal oscillations. Varying inland freshwater input was shown to induce significant water exchange across the aquifer-sea interface as the saltwater wedge shifted landward and seaward over the fluctuation cycle. Tidal oscillations led to seawater circulations through the intertidal zone that also enhanced the density-driven circulation, resulting in a significant increase in the total SGD. The combination of the tide and varying inland freshwater input, however, decreased the SGD components driven by the separate forcings (e.g., tides and density). Tides restricted the landward and seaward movement of the saltwater wedge in response to the varying inland freshwater input, in addition to reducing the time delay between the varying freshwater input signal and the landward-seaward movement of the saltwater wedge interface. The nonlinear interaction between tidal fluctuations and varying inland freshwater input revealed by this study will help to improve our understanding of SGD, seawater intrusion, and chemical transport in coastal unconfined aquifers.

[Archived PDF of article]

Bureau of Economic Analysis Materials for Every Student, Regardless of Major

We mentioned in a previous essay that an economist receives certain Bureau of Economic Analysis and Bureau of Labor Statistics updates, and that these allow them to “guesstimate” next year’s GDP growth by adding average labor productivity growth (growth in Y/L) to labor force growth (growth in L). Remember that Y (GDP) equals Y/L multiplied by L, so percentage growth in Y is approximately the sum of the percentage growth rates of the other two variables, Y/L and L. The sum approximates GDP growth and requires no mental gymnastics with complex mathematics of any kind. A wise student would learn what’s on offer from these government update services and realize that simple familiarity is half the game in everything. The economics pundits are not ten feet tall. They simply follow simple materials that the typical student does not have and has no idea exist.
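To see how little arithmetic is involved, here is a minimal sketch in Python of the guesstimate; the two input growth rates are made-up placeholder values, not actual BEA or BLS figures.

```python
# Back-of-the-envelope GDP growth "guesstimate" from the identity
# Y = (Y/L) * L, so %ΔY ≈ %Δ(Y/L) + %ΔL for small growth rates.
# The input numbers below are placeholders, not BEA/BLS figures.
productivity_growth = 0.014  # annual growth in Y/L (e.g., from BLS releases)
labor_force_growth = 0.006   # annual growth in L

gdp_growth_estimate = productivity_growth + labor_force_growth
print(f"Estimated GDP growth: {gdp_growth_estimate:.1%}")  # -> 2.0%
```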

BEA News:  Gross Domestic Product by Industry, 2nd quarter 2019 and annual update:

“The U.S. Bureau of Economic Analysis (BEA) has issued the following news release today:

“Professional, scientific, and technical services; real estate and rental and leasing; and mining were the leading contributors to the increase in U.S. economic growth in the second quarter of 2019. The private goods‐ and services‐producing industries, as well as the government sector, contributed to the increase.  Overall, 14 of 22 industry groups contributed to the 2.0 percent increase in real GDP in the second quarter.”

The full text of the release [archived PDF] can be found on BEA’s website.

The Bureau of Economic Analysis provides this service to you at no charge. Visit us on the Web at www.bea.gov. All you will need is your e-mail address. If you have questions or need assistance, please e-mail subscribe@bea.gov.

The Language Phenomenon in Education

Wittgenstein (1889–1951) identifies language as the principal “confusion-machine” within philosophy:

“Philosophy is a battle against the bewitchment of our intelligence by means of language.”

“The philosopher’s treatment of a question is like the treatment of an illness.”

“What is your aim in philosophy?—To show the fly the way out of the fly-bottle.”

Education, if deep and meaningful, would put language itself in front of the student, both to understand the “bewitchment” and perhaps to “escape from the fly-bottle.” The fly-bottle is roughly the “captive mind syndrome” described by Czesław Miłosz, the Polish poet-thinker.

There are various aspects of this language-watching:

Hans-Georg Gadamer (Heidegger’s successor, who died in 2002) writes:

“It is not that scientific methods are mistaken, but ‘this does not mean that people would be able to solve the problems that face us, peaceful coexistence of peoples and the preservation of the balance of nature, with science as such. It is obvious that not mathematics but the linguistic nature of people is the basis of civilization.’”

(German Philosophy, Oxford University Press, 2000, pages 122–123)

This is readily seen. Imagine Einstein and Kurt Gödel walking near the Princeton campus. They speak to each other in German, the native tongue they both “inhabit.” Gödel communicates the limits of logic and Einstein the limits of modern physics, such as quantum mechanics. They bring in Bohr and Heisenberg and the “Copenhagen Interpretation” as a counter-view. They refer to equations and experiments, conjectures and puzzles, current papers and conferences.

They take “communicative action” through speech, using German as their medium.

There are two levels here that are always confused: the ontological level (i.e., all the why-questions people ask using language) and the ontic level (i.e., all the how-questions people pose using mathematics and laboratory results, e.g., the Higgs boson).

Gödel once made the observation that if you look at language as a kind of logical system, it is absolutely puzzling that people can communicate at all, since language is so utterly ambiguous and “polyvalent.”

Take the sentence: “Men now count.” Out of context, does it mean count in the numerical sense (one, two, three apples in front of me), or does it mean that men in a certain country were given the right to vote and now “count” politically? Without context and the ability to contextualize, no sentence by itself makes determinate sense at all.

This is partly why Wittgenstein sees philosophy problems as “language games.”

Heidegger, taking “being-in-the-world” as foundational, calls language “the house of being.”

You inhabit a native language the way you “inhabit” a family home or a home town. You flow through.

When a child of ten plays marbles (as analyzed by Piaget) and his native language (say, French) comes pouring out of him in a spontaneous gusher, how can we really explain it, since the child doesn’t look up syntactical rules and grammatical definitions when he speaks? The words simply flow.

Heidegger retorts that language speaks you: in other words, you’re channeling the language the way a songwriter explains how a song comes to him. In the end, it’s something spontaneous, not propositional the way grammar is.

A moment’s reflection shows you how “slippery” language is: 

A man driving to New York says to you, “the car died on me halfway there.” He does not mean the car was “on” him physically. “To die on” doesn’t really mean to perish forever; it means the car stopped functioning in a way that can usually be fixed at a garage. It means this reparable conking-out gave him a big headache and aggravation as he waited for the Triple-A people to get there and do the paperwork. You visualize all these layers and twists.

Again, without a human context, the sentence “the car died on me” makes little sense. Without a human context, “the sky is blue” makes incomplete sense too. Does a camel or a cricket see a blue sky?

A full education would explore these dimensions of language, and this has nothing to do with bringing back Latin or Greek, or with studying a foreign language to meet a Ph.D. requirement. Formal linguistics à la Chomsky, Fodor, Katz, etc., is not what’s being discussed, as interesting as all that might be.

It also is not about language genes such as FOXP2 or how vocal cords work, since these questions are ontic (i.e., how does it work?) and not ontological (i.e., what does something mean or imply?). Thinking about language in an engineering sense, with the human mouth as a “buccal cavity,” is quite legitimate, and a voice coach might do well to do that. We are talking about something else: the centrality of language in human self-understanding, functioning, and the making of meaning.