Speculative Science: The Reality beyond Spacetime, with Donald Hoffman

[from The Institute of Art and Ideas Science Weekly, July 22]

Donald Hoffman famously argues that we know nothing about the truth of the world. His book, The Case Against Reality, claims the process of survival of the fittest does not require a true picture of reality. Furthermore, Hoffman claims spacetime is not fundamental. So, what lies beneath spacetime, and can we know about it? And how does consciousness come into play? Join this interview with the famed cognitive psychologist and author exploring our notions of consciousness, spacetime, and what lies beneath. Hosted by Curt Jaimungal.

[watch the video]

COVID-19 and “Naïve Probabilism”

[from the London Mathematical Laboratory]

In the early weeks of the 2020 U.S. COVID-19 outbreak, guidance from the scientific establishment and government agencies included a number of dubious claims—masks don’t work, there’s no evidence of human-to-human transmission, and the risk to the public is low. These statements were backed by health authorities, as well as public intellectuals, but were later disavowed or disproven, and the initial under-reaction was followed by an equal overreaction and imposition of draconian restrictions on human social activities.

In a recent paper, LML Fellow Harry Crane examines how these early mis-steps ultimately contributed to higher death tolls, prolonged lockdowns, and diminished trust in science and government leadership. Even so, the organizations and individuals most responsible for misleading the public suffered little or no consequences, or even benefited from their mistakes. As he discusses, this perverse outcome can be seen as the result of authorities applying a formulaic procedure of “naïve probabilism” in facing highly uncertain and complex problems, and largely assuming that decision-making under uncertainty boils down to probability calculations and statistical analysis.

This attitude, he suggests, might be captured in a few simple “axioms of naïve probabilism”:

Axiom 1: The more complex the problem, the more complicated the solution.

This idea is a hallmark of naïve decision making. The COVID-19 outbreak was highly complex: a novel virus of uncertain origins, spreading through an interconnected global society. But the potential usefulness of masks was not one of these complexities. The mask mistake was consequential not because masks were the antidote to COVID-19, but because they were a low-cost measure whose effect would be neutral at worst; wearing a mask can't hurt in reducing the spread of a virus.

Yet the experts neglected common sense in favor of a more “scientific response” based on rigorous peer review and sufficient data. Two months after the initial U.S. outbreak, a study confirmed the obvious, and masks went from being strongly discouraged to being mandated by law. Precious time had been wasted, many lives lost, and the economy stalled.

Crane also considers another rule of naïve probabilism:

Axiom 2: Until proven otherwise, assume that the future will resemble the past.

In the COVID-19 pandemic, of course, there was at first no data that masks work, no data that travel restrictions work, no data of human-to-human transmission. How could there be? Yet some naïve experts took this as a reason to maintain the status quo. Indeed, many universities refused to do anything in preparation until a few cases had been detected on campus—at which point they had some data, as well as hundreds or thousands of other as yet undetected infections.

Crane touches on some of the more extreme examples of this kind of thinking, which assumes that whatever can't be explained in terms of something that happened in the past is speculative, non-scientific and unjustifiable:

“This argument was put forward by John Ioannidis in mid-March 2020, as the pandemic outbreak was already spiralling out of control. Ioannidis wrote that COVID-19 wasn’t a ‘once-in-a-century pandemic,’ as many were saying, but rather a ‘once-in-a-century data-fiasco’. Ioannidis’s main argument was that we knew very little about the disease, its fatality rate, and the overall risks it poses to public health; and that in face of this uncertainty, we should seek data-driven policy decisions. Until the data was available, we should assume COVID-19 acts as a typical strain of the flu (a different disease entirely).”

Unfortunately, waiting for the data also means waiting too long if the virus turns out to be more serious. This is like waiting to hit the tree before accepting that the available data supports wearing a seatbelt. Moreover, in the pandemic example, this "lack of evidence" argument ignores other evidence from before the virus entered the United States. China had locked down a city of 10 million; Italy had locked down its entire northern region, with the rest of the country soon to follow. There was worldwide consensus that the virus was novel, that it was spreading fast, and that medical communities had no idea how to treat it. That's data, and plenty of information to act on.

Crane goes on to consider a third axiom of naïve probabilism, which aims to turn ignorance into a strength. Overall, he argues, these axioms, despite being widely used by many prominent authorities and academic experts, actually capture a set of dangerous fallacies for action in the real world.

In reality, complex problems call for simple, actionable solutions; the past doesn’t repeat indefinitely (i.e., COVID-19 was never the flu); and ignorance is not a form of wisdom. The Naïve Probabilist’s primary objective is to be accurate with high probability rather than to protect against high-consequence, low-probability outcomes. This goes against common sense principles of decision making in uncertain environments with potentially very severe consequences.

Importantly, Crane emphasizes, the hallmark of Naïve Probabilism is naïveté, not ignorance, stupidity, crudeness or other such base qualities. The typical Naïve Probabilist lacks not knowledge or refinement, but the experience and good judgment that come from making real decisions with real consequences in the real world. The most prominent naïve probabilists are recognized (academic) experts in mathematical probability or, relatedly, statistics, physics, psychology, economics, epistemology, medicine or the so-called decision sciences. Moreover, and worryingly, the best-known naïve probabilists are quite sophisticated, skilled in the art of influencing public policy decisions without suffering from the risks those policies impose on the rest of society.

Read the paper. [Archived PDF]

Education and “Intuition Pumps”

Professor Daniel Dennett of Tufts uses the term "intuition pumps" in discussing intuitive understanding and how it can be tweaked.

Let’s do a simple example, avoiding as always “rocket science,” where the intricacies weigh you down in advance. We make a U-turn and go back by choice to elementary notions and examples.

Think of the basic statistics curve. It’s called the Bell Curve, the Gaussian, the Normal Curve.

The first name is sort of intuitive based on appearance unless of course it’s shifted or squeezed and then it’s less obvious. The second name must be based on either the discoverer or the “name-giver” or both, if the same person. The third is a bit vague.

Already one’s intuitions and hunches are not fool-proof.

The formula for the Bell Curve is:

\begin{equation} y = \frac{1}{\sqrt{2\pi}}e^{\frac{-x^2}{2}} \end{equation}

We immediately see the two key constants: π (pi) and e. These are approximately 3.14159 (for which 22/7 is a rough approximation) and 2.71828 (the base of natural logs).

The first captures something about circularity, the second continuous growth as in continuous compounding of interest.

You would not necessarily anticipate seeing these two “irrational numbers” (they “go on” forever) in a statistics graph. Does that mean your intuition is poor or untutored or does it mean that “mathworld” is surprising?
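To see both constants at work, here is a minimal Python sketch of the Bell Curve formula above (the function name is my own, not a standard library name):

```python
import math

def std_normal_pdf(x):
    """Density of the standard normal (Bell) curve: y = e^(-x^2/2) / sqrt(2*pi)."""
    return math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

# The peak at x = 0 is 1/sqrt(2*pi) -- both pi and e are baked into the shape.
print(round(std_normal_pdf(0), 4))  # 0.3989
print(round(std_normal_pdf(1), 4))  # 0.242
```

The two "irrational" constants appear in one short expression: π fixes the height of the peak, e governs how fast the tails fall off.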

It’s far from obvious.

For openers, why should π (pi) be everywhere in math and physics?

Remember Euler's identity: e^(iπ) + 1 = 0

That the two key integers (1 and 0) should relate to π (pi), e, and i (the square root of −1) is completely unexpected and exotic.
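The identity can be checked numerically in a line or two of Python, using the standard cmath library for complex arithmetic:

```python
import cmath
import math

# Euler's identity: e^(i*pi) + 1 should equal 0, up to floating-point error.
value = cmath.exp(1j * math.pi) + 1
print(abs(value) < 1e-12)  # True
```

The residue is on the order of 10^-16, i.e., pure rounding error: the identity holds exactly in the mathematics, and as exactly as floating point allows on a computer.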

Our relationship to “mathworld” is quite enigmatic and this raises the question whether Professor Max Tegmark of MIT who proposes to explain “ultimate reality” through the “math fabric” of all reality might be combining undoubted brilliance with quixotism. We don’t know.

Education and Finality Claims

Stephen Hawking kept saying he wanted to discover the ultimate world-equation. This would be the final “triumph of the rational human mind.”

This would presumably imply that if one had such a world-equation, one could infer or deduce all the formalisms in a university physics book with its thousand pages of equations, puzzles and conundrums, footnotes and names and dates.

While hypothetically imaginable, this seems very unlikely because too many phenomena are included, too many topics, too many rules and laws.

There’s another deep problem with such Hawking-type “final equation” quests. Think of the fact that a Henri Poincaré (died in 1912) suddenly appears and writes hundreds of excellent science papers. Think of Paul Erdős (died in 1996) and his hundreds of number theory papers. Since the appearance of such geniuses and powerhouses is not knowable in advance, the production of new knowledge is unpredictable and would “overwhelm” any move towards some world-equation which was formulated without the new knowledge since it was not known at the time that the world-equation was formalized.

Furthermore, if the universe is mathematical as MIT's Professor Max Tegmark claims, then a Hawking-type "world-equation" would have to cover all of mathematics; otherwise, parts of Tegmark's universe would be "unaccounted for."

In other words, history and historical experience cast doubt on the Stephen Hawking "finality" project. It's not just that parts of physics don't fit together (general relativity and quantum mechanics, gravity and the other three fundamental forces). Finality would also imply that no new Stephen Hawking could come along and refute the world-equation as it stands at a certain point in time. If, as scientists like Freeman Dyson claim, the universe is a "vast evolutionary" process, then the mathematical thinking about it is also evolving or co-evolving, and there's no end.

There are no final works in poetry, novels, jokes, language, movies or songs and there’s perhaps also no end to science.

Thus a Hawking-type quest for the final world-equation seems enchanting but quixotic.

Meaningfulness versus Informativeness

The Decoding Reality book is a classic contemporary analysis of the foundations of physics and the implications for the human world. The scientists don’t see that physics and science are the infrastructure on which the human “quest for meaning” takes place. Ortega (Ortega y Gasset, died in 1955) tells us that a person is “a point of view directed at the universe.” This level of meaning cannot be reduced to bits or qubits or electrons since man is a “linguistic creature” who invents fictional stories to explain “things” that are not things.

The following dialog between Paul Davies (the outstanding science writer) and Vlatko Vedral (the distinguished physicist) gropes along on these issues: the difference between science as one kind of story and the human interpretation of life and self expressed in “tales” and parables, fictions and beliefs:

Davies: “When humans communicate, a certain quantity of information passes between them. But that information differs from the bits (or qubits) physicists normally consider, inasmuch as it possesses meaning. We may be able to quantify the information exchanged, but meaning is a qualitative property—a value—and therefore hard, maybe impossible, to capture mathematically. Nevertheless the concept of meaning obviously has, well… meaning. Will we ever have a credible physical theory of ‘meaningful information,’ or is ‘meaning’ simply outside the scope of physical science?”

Vedral: “This is a really difficult one. The success of Shannon’s formulation of ‘information’ lies precisely in the fact that he stripped it of all “meaning” and reduced it only to the notion of probability. Once we are able to estimate the probability for something to occur, we can immediately talk about its information content. But this sole dependence on probability could also be thought of as the main limitation of Shannon’s information theory (as you imply in your question). One could, for instance, argue that the DNA has the same information content inside as well as outside of a biological cell. However, it is really only when it has access to the cell’s machinery that it starts to serve its main biological purpose (i.e., it starts to make sense). Expressing this in your own words, the DNA has a meaning only within the context of a biological cell. The meaning of meaning is therefore obviously important. Though there has been some work on the theory of meaning, I have not really seen anything convincing yet. Intuitively we need some kind of a ‘relative information’ concept, information that is not only dependent on the probability, but also on its context, but I am afraid that we still do not have this.”
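Shannon's probability-only notion of information, as Vedral describes it, can be sketched in a few lines of Python (the function names here are illustrative, not from any particular library):

```python
import math

def surprisal(p):
    """Shannon information content (in bits) of an event with probability p."""
    return -math.log2(p)

def entropy(dist):
    """Average surprisal of a probability distribution (Shannon entropy, in bits)."""
    return sum(p * surprisal(p) for p in dist if p > 0)

# A fair coin carries 1 bit per toss; a heavily biased coin carries far less,
# even though both emit the same two symbols. Probability alone sets the number;
# nothing in the formula knows or cares what the symbols *mean*.
print(entropy([0.5, 0.5]))              # 1.0
print(round(entropy([0.99, 0.01]), 3))  # 0.081
```

This is exactly the strength and the limitation Vedral points to: the same calculation applies to DNA inside or outside a cell, because context, and hence meaning, never enters the formula.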

For a physicist, all the world is information. The universe and its workings are the ebb and flow of information. We are all transient patterns of information, passing on the recipe for our basic forms to future generations using a four-letter digital code called DNA.

See Decoding Reality.

In this engaging and mind-stretching account, Vlatko Vedral considers some of the deepest questions about the universe and the implications of interpreting it in terms of information. He explains the nature of information, the idea of entropy, and the roots of this thinking in thermodynamics. He describes the bizarre effects of quantum behavior—effects such as “entanglement,” which Einstein called “spooky action at a distance”—and explores cutting-edge work on harnessing quantum effects in hyper-fast quantum computers, and how recent evidence suggests that the weirdness of the quantum world, once thought limited to the tiniest scales, may reach into the macro world.

Vedral finishes by considering the answer to the ultimate question: Where did all of the information in the universe come from? The answers he considers are exhilarating, drawing upon the work of distinguished physicist John Wheeler. The ideas challenge our concept of the nature of particles, of time, of determinism, and of reality itself.

Science is an “ontic” quest. Human life is an “ontological” quest. They are a “twisted pair” where each strand must be seen clearly and not confused. The content of your telephone conversation with your friend, say, is not reducible to the workings of a phone or the subtle electrical engineering and physics involved. A musical symphony is not just “an acoustical blast.”

The “meaning of meaning” is evocative and not logically expressible. There’s a “spooky action at a distance” between these levels of meaning versus information but they are different “realms” or “domains.”

Words and Reality and Change: What Is a Fluctuation?

Ludwig Boltzmann who died in 1906 was a giant in the history of physics.

His name is associated with various fields like statistical mechanics, entropy and so on.

A standard physics overview book called Introducing Quantum Theory (2007, Icon/Totem Books) shows a “cartoon” of Boltzmann which says, “I also introduced the controversial notion of fluctuations.” (page 25)

In common parlance, some common synonyms of fluctuate are oscillate, sway, swing, undulate, vibrate and waver. While all these words mean “to move from one direction to its opposite,” fluctuate suggests (sort of) constant irregular changes of level, intensity or value. Pulses and some pulsations suggest themselves as related.

Expressions like “Boltzmann brains” refer to this great physicist Boltzmann and you can find this notion described here: “Boltzmann Brain.”

Notice that the word “fluctuation” occurs four times in one of the paragraphs of the article “Boltzmann Brain,” as you can see:

“In 1931, astronomer Arthur Eddington pointed out that, because a large fluctuation is exponentially less probable than a small fluctuation, observers in Boltzmann universes will be vastly outnumbered by observers in smaller fluctuations. Physicist Richard Feynman published a similar counterargument within his widely read 1964 Feynman Lectures on Physics. By 2004, physicists had pushed Eddington’s observation to its logical conclusion: the most numerous observers in an eternity of thermal fluctuations would be minimal “Boltzmann brains” popping up in an otherwise featureless universe.”
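Eddington's point, that a large fluctuation is exponentially less probable than a small one, can be illustrated with a toy calculation; the entropy costs below are invented numbers, chosen only to show the exponential suppression at work:

```python
import math

# In equilibrium statistical mechanics, the probability of a fluctuation falls
# off exponentially with its entropy cost. The two costs below (in units of k)
# are made up for illustration; only the ratio matters.
def relative_probability(delta_S):
    return math.exp(-delta_S)

small_cost, large_cost = 10, 100  # hypothetical entropy costs of two fluctuations
ratio = relative_probability(small_cost) / relative_probability(large_cost)
print(f"{ratio:.3g}")  # 1.22e+39 -- small fluctuations vastly outnumber large ones
```

Even a modest difference in entropy cost produces an astronomical difference in probability, which is why minimal "Boltzmann brains" would outnumber whole fluctuated universes.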

You may remember the term from elsewhere; perhaps you've heard it on a PBS Nova episode on quantum fluctuation.

In the classic history of science book, The Merely Personal by Dr. Jeremy Bernstein (Ivan Dee, Chicago, 2001), one encounters the word fluctuation all over:

“This uniform density of matter …and fluctuations from the average are what would produce the unwanted instability.”

“So Einstein chose the cosmological constant…” (page 83 of Bernstein’s book)

Suppose we allow our minds to be restless and turn to economics to “change the lens” we are using to look at the world, since lens-changing is one of the pillars of Meta Intelligence.

What do we see?

In 1927, Keynes’s professor Arthur Cecil Pigou (died in 1959) published the famous work, Industrial Fluctuations.

In 1915, twelve years earlier, the famous Sir Dennis Holme Robertson (died in 1963) published A Study of Industrial Fluctuation.

The word fluctuation seems to be migrating to or resonating in economics.

The larger point (i.e., the Meta Intelligent one): is the use of this word a linguistic accident or fashion or is something basic being discovered about how some “things” “jump around” in the world?

Is the world seen as more “jumpy” or has it become more jumpy due to global integration or disintegration or in going to the deeper levels of physics with the replacement of a Newtonian world by an Einsteinian one?

The phenomenon of change—call it “change-ology”—whooshes up in front of us, and a Meta Intelligent student of the world would immediately ponder fluctuations versus blips versus oscillations versus jumps and saltations (used in biology) and so on. What about pulsations? Gyrations?

This immediately places in front of you the question of the relationship of languages (words, numbers, images) to events.

The point is not to nail down some final answer. Our task here is not to delve into fields like physics or economics or whatever but to notice the very terms we are using across fields and in daily life (i.e., stock price fluctuations).

Notice, say, how the next blog post on oil price dynamics begins:

“Our oil price decomposition, reported weekly, examines what’s behind recent fluctuations in oil prices…”

The real point is to keep pondering and “sniffing” (i.e., Meta Intelligence), since MI is an awareness quest before all.

Essay 100: “The View From Nowhere” Problem

The phrase “view from nowhere” comes from the title of a 1986 classic philosophy book by Professor Thomas Nagel. It tries to wrestle with the paradox that the human ability to take a “detached view” (abstract theory, say) is potentially misleading since the person behind the detachment is a real person embodied and somewhere.

A theoretician like Richard Feynman (the great physicist) has a nervous system, a brain, a body, and uses his hand to write equations on the blackboard. One is trained to focus on the equations since that's the physics. The person, the physicist, is a detail, a distraction, an irrelevance. However, this can't be true, since the physicist—Richard Feynman in this example—represents a human way of looking at things, at a time and place, no matter how heterodox or offbeat the view.

The human “style” of “being-in-the-world” comes into the equations and to the very idea of equating.

Human beings have the unique ability to view the world in a detached way: We can think about the world in terms that transcend our own experience or interest, and consider the world from a vantage point that is, in Nagel’s words, “nowhere in particular.” At the same time, each of us is a particular person in a particular place, each with his own “personal” view of the world, a view that we can recognize as just one aspect of the whole. How do we reconcile these two standpoints—intellectually, morally, and practically?

To what extent are they irreconcilable and to what extent can they be integrated? Thomas Nagel’s ambitious and lively book tackles this fundamental issue, arguing that our divided nature is the root of a whole range of philosophical problems, touching, as it does, every aspect of human life. He deals with its manifestations in such fields of philosophy as: the mind-body problem, personal identity, knowledge and skepticism, thought and reality, free will, ethics, the relation between moral and other values, the meaning of life, and death.

Excessive objectification has been a malady of recent analytic philosophy, claims Nagel; it has led to implausible forms of reductionism in the philosophy of mind and elsewhere.

The solution is not to inhibit the objectifying impulse, but to insist that it learn to live alongside the internal perspectives that cannot be either discarded or objectified. Reconciliation between the two standpoints, in the end, is not always possible.

Table of Contents for The View from Nowhere book:

I. Introduction
II. Mind
III. Mind and Body
IV. The Objective Self
V. Knowledge
VI. Thought and Reality
VII. Freedom
VIII. Value
IX. Ethics
X. Living Right and Living Well
XI. Birth, Death, and the Meaning of Life

Essay 89: Physics AI Predicts That Earth Goes Around the Sun

from Nature Briefing:

Hello Nature readers,

Today we learn that a computer Copernicus has rediscovered that Earth orbits the Sun, ponder the size of the proton and see a scientific glassblower at work.

Physicists have designed artificial intelligence that thinks like the astronomer Nicolaus Copernicus by realizing the Sun must be at the center of the Solar System. (NASA/JPL/SPL)

AI ‘Discovers’ That Earth Orbits the Sun [PDF]

A neural network that teaches itself the laws of physics could help to solve some of physics’ deepest questions. But first it has to start with the basics, just like the rest of us. The algorithm has worked out that it should place the Sun at the centre of the Solar System, based on how movements of the Sun and Mars appear from Earth.

The machine-learning system differs from others because it's not a black box that spits out a result based on reasoning that's almost impossible to unpick. Instead, researchers designed a kind of ‘lobotomized’ neural network that is split into two halves and joined by just a handful of connections. That forces the learning half to simplify its findings before handing them over to the half that makes and tests new predictions.
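A very rough structural sketch of such a two-half "bottleneck" network, assuming nothing about the researchers' actual code (the sizes, weights, and names here are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random weights standing in for trained ones; this sketch shows structure only.
    return rng.normal(0, 0.1, (n_in, n_out))

W_observe = layer(50, 2)  # observation half: 50 raw inputs -> 2-number summary
W_predict = layer(3, 1)   # prediction half: summary + one question -> one answer

def forward(observations, question):
    # The narrow 2-number summary is the "handful of connections": the first
    # half must compress what it learned before the second half can use it.
    summary = np.tanh(observations @ W_observe)
    return np.tanh(np.concatenate([summary, [question]]) @ W_predict)

out = forward(rng.normal(size=50), 0.5)
print(out.shape)  # (1,)
```

The design choice is the point: because everything the prediction half knows must squeeze through two numbers, the learned summary tends to be simple and interpretable, which is how the researchers could read off physically meaningful variables.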

Next FDA Chief Will Face Ongoing Challenges

U.S. President Donald Trump has nominated radiation oncologist Stephen Hahn to lead the Food and Drug Administration (FDA). If the Senate confirms Hahn, who is the chief medical executive of the University of Texas MD Anderson Cancer Center, he’ll be leading the agency at the centre of a national debate over e-cigarettes, prompted by a mysterious vaping-related illness [archived PDF] that has made more than 2,000 people sick. A former FDA chief says Hahn’s biggest challenge will be navigating a regulatory agency under the Trump administration, which has pledged to roll back regulations.


Do We Know How Big a Proton Is? [PDF]

A long-awaited experimental result has found the proton to be about 5% smaller than the previously accepted value. The finding seems to spell the end of the ‘proton radius puzzle’: the measurements disagreed if you probed the proton with ordinary hydrogen, or with exotic hydrogen built out of muons instead of electrons. But solving the mystery will be bittersweet: some scientists had hoped the difference might have indicated exciting new physics behind how electrons and muons behave.

Contingency Plans for Research After Brexit

The United Kingdom should boost funding for basic research and create an equivalent of the prestigious European Research Council (ERC) if it doesn’t remain part of the European Union’s flagship Horizon Europe research-funding program [archived PDF]. That’s the conclusion of an independent review of how UK science could adapt and collaborate internationally after Brexit — now scheduled for January 31, 2020.

Nature’s 150th anniversary

A Century and a Half of Research and Discovery

This week is a special one for all of us at Nature: it’s 150 years since our first issue, published in November 1869. We’ve been working for well over a year on the delights of our anniversary issue, which you can explore in full online.

10 Extraordinary Nature Papers

A series of in-depth articles from specialists in the relevant fields assesses the importance and lasting impact of 10 key papers from Nature’s archive. Among them, the structure of DNA, the discovery of the hole in the ozone layer above Antarctica, our first meeting with Australopithecus and this year’s Nobel-winning work detecting an exoplanet around a Sun-like star.

A Network of Science

The multidisciplinary scope of Nature is revealed by an analysis of more than 88,000 papers Nature has published since 1900, and their co-citations in other articles. Take a journey through a 3D network of Nature’s archive in an interactive graphic. Or, let us fly you through it in this spectacular 5-minute video.

Then dig deeper into what scientists learnt from analyzing tens of millions of scientific articles for this project.

150 Years of Nature, in Graphics

An analysis of the Nature archive reveals the rise of multi-author papers, the boom in biochemistry and cell biology, and the ebb and flow of physical chemistry since the journal’s first issue in 1869. The evolution in science is mirrored in the top keywords used in titles and abstracts: they were ‘aurora’, ‘Sun’, ‘meteor’, ‘water’ and ‘Earth’ in the 1870s, and ‘cell’, ‘quantum’, ‘DNA’, ‘protein’ and ‘receptor’ in the 2010s.

Evidence in Pursuit of Truth

A century and a half has seen momentous changes in science, and Nature has changed along with it in many ways, says an Editorial in the anniversary edition. But in other respects, Nature now is just the same as it was at the start: it will continue in its mission to stand up for research, serve the global research community and communicate the results of science around the world.

Features & Opinion

Nature covers: from paste-up to Photoshop

Nature creative director Kelly Krause takes you on a tour of the archive to enjoy some of the journal’s most iconic covers, each of which speaks to how science itself has evolved. Plus, she touches on those that didn’t quite hit the mark, such as an occasion of “Photoshop malfeasance” that led to Dolly the sheep sporting the wrong leg.

Podcast: Nature bigwigs spill the tea

In this anniversary edition of Backchat, Nature editor-in-chief Magdalena Skipper, chief magazine editor Helen Pearson and editorial vice president Ritu Dhand take a look back at how the journal has evolved over 150 years, and discuss the part that Nature can play in today’s society. The panel also pick a few of their favorite research papers that Nature has published, and think about where science might be headed in the next 150 years.

Where I Work

Scientific glassblower Terri Adams uses fire and heavy machinery to hand-craft delicate scientific glass apparatus. “My workbench hosts an array of tools for working with glass, many of which were custom-made for specific jobs,” says Adams. “Each tool reminds me of what I first used it for and makes me consider how I might use it again.” (Leonora Saunders for Nature)

Quote of the Day

“At the very least … we should probably consider no longer naming *new* species after awful humans.”

Scientists should stop naming animals after terrible people — and consider renaming the ones that already are, argues marine conservation biologist and science writer David Shiffman. (Scientific American)

Yesterday was Marie Skłodowska Curie’s birthday, and for the occasion, digital colorist Marina Amaral breathed new life into a photo of Curie in her laboratory.

(If you have recommended people before and you want them to count, please ask them to email me with your details and I will make it happen!) Your feedback, as always, is very welcome at briefing@nature.com.

Flora Graham, senior editor, Nature Briefing

Essay 36: What We Mean by “Epochal Waters”

We sometimes use the phrase “epochal waters” to refer to the deepest layers of the past which we “swimmers” at the surface of the ocean don’t see or know. “Epochal waters” are latent; currents are closer to the surface.

There’s a similar idea from the French philosopher Michel Foucault who died in 1984. In his The Order of Things, classic from 1966, he talks about the “episteme” (as in epistemology) that frames everything from deep down. (The Greeks distinguished between “techne” (arts, crafts, practical skills and “episteme” (theory, overview).

“In essence, Les mots et les choses (Foucault’s The Order of Things) maintains that every period is characterized by an underground configuration that delineates its culture, a grid of knowledge making possible every scientific discourse, every production of statements. Foucault designates this historical a priori as an episteme, deeply basic to defining and limiting what any period can—or cannot—think.

Each science develops within the framework of an episteme, and therefore is linked in part with other sciences contemporary with it.”

(Didier Eribon, Michel Foucault, Harvard University Press, 1991, page 158)

Take a simple example. A discussion comes up about what man is or does or thinks or knows. In today’s episteme or pre-definition, one thinks immediately not of man in terms of language or the invention of gods, but in terms of computational genomics, big data, bipedalism (walking upright on two legs). It’s assumed in advance, via an invisible episteme, that science and technology (physics, genetics, big data, chemistry and biology) hold the answer and that the rest is sort of outdated. This feeling is automatic and reflexive, like breathing, and might be called “mental breathing.”

One’s thoughts are immediately sent in certain directions or grooves, a process  that is automatic and more like a “mental reflex” than a freely chosen “analytical frame.” The thinker has been “trained” in advance and the episteme pre-decides what is thinkable and what is not.

There are deep epistemes that underlie all analyses: for example, in the Anglo-American tradition of looking at things, the phrase “human nature” inevitably comes in as a deus ex machina (i.e., a sudden way of clinching an argument, the “magic factor” that has been there all along). If you ask why the concept of “human nature” is suddenly being “imported,” the person who uses the phrase has no idea. It’s in the “epochal water,” or Foucault’s episteme, and it suddenly swims up from the sea floor below.

Another quick example: in the Anglo-American mind, there’s a belief from “way down and far away” that failure in life is mostly about individual behavior (laziness, alcoholism, etc.) and personal “stances,” while “circum-stances” are an excuse. This way of sequencing acceptable explanations is deeply pre-established in a way that is itself hard to explain. It serves to “frame the picture” in advance. These are all “epochal water” or episteme phenomena.