Monomania and the West

There have been all kinds of “voices” in the history of Western civilization. Perhaps the loudest voice is that of the monomaniacs, who always claim that behind the appearance of the many is the one. If we look at the West at its roots, the intersection of Athens and Jerusalem, we see the origins of this monomania. Plato’s realm of ideas was supposed to explain everything encountered in our daily lives. His main student and rival, Aristotle, offered his own competing explanation, based in biology instead of mathematics.

These monomanias find their modern counterpart in ideologies. In communism, the key to everything is class and the resulting class struggle. Nazism revolves around race and racial conflict.

In our own era, the era of scientism, we have the idea of god replaced by Stephen Hawking’s “mind of god,” Leon Lederman’s The God Particle, and KAKU Michio’s The God Equation. In the 2009 film Angels & Demons, a senior Vatican official, played by Ewan McGregor, is absolutely outraged by the blasphemous phrase “the god particle.”

Currently, the monomaniac impetus continues at full force. For example, Professor Seth Lloyd of MIT tells us that reality is cosmos and not chaos because all of reality together is a computer. His MIT colleague, Max Tegmark, argues in his books that the world is not merely explained by mathematics but rather is mathematics. Perhaps the climax of this kind of thinking is given to us by the essay “Everything Is Computation” by Joscha Bach:

These days we see a tremendous number of significant scientific news stories, and it’s hard to say which has the highest significance. Climate models indicate that we are past crucial tipping points and irrevocably headed for a new, difficult age for our civilization. Mark van Raamsdonk expands on the work of Brian Swingle and Juan Maldacena and demonstrates how we can abolish the idea of spacetime in favor of a discrete tensor network, thus opening the way for a unified theory of physics. Bruce Conklin, George Church, and others have given us CRISPR/Cas9, a technology that holds promise for simple and ubiquitous gene editing. “Deep learning” starts to tell us how hierarchies of interconnected feature detectors can autonomously form a model of the world, learn to solve problems, and recognize speech, images, and video.

It is perhaps equally important to notice where we lack progress: Sociology fails to teach us how societies work; philosophy seems to have become infertile; the economic sciences seem ill-equipped to inform our economic and fiscal policies; psychology does not encompass the logic of our psyche; and neuroscience tells us where things happen in the brain but largely not what they are.

In my view, the 20th century’s most important addition to understanding the world is not positivist science, computer technology, spaceflight, or the foundational theories of physics.

It is the notion of computation. Computation, at its core, and as informally described as possible, is simple: Every observation yields a set of discernible differences.

These we call information. If the observation corresponds to a system that can change its state, we can describe those state changes. If we identify regularity in those state changes, we are looking at a computational system. If the regularity is completely described, we call this system an algorithm. Once a system can perform conditional state transitions and revisit earlier states, it becomes almost impossible to stop it from performing arbitrary computation. In the infinite case—that is, if we allow it to make an unbounded number of state transitions and use unbounded storage for the states—it becomes a Turing machine, or a Lambda calculus, or a Post machine, or one of the many other mutually equivalent formalisms that capture universal computation.

Computational terms rephrase the idea of “causality,” something that philosophers have struggled with for centuries. Causality is the transition from one state in a computational system to the next. They also replace the concept of “mechanism” in mechanistic, or naturalistic, philosophy. Computationalism is the new mechanism, and unlike its predecessor, it is not fraught with misleading intuitions of moving parts.

Computation is different from mathematics. Mathematics turns out to be the domain of formal languages and is mostly undecidable, which is just another word for saying “uncomputable” (since decision making and proving are alternative words for computation, too). All our explorations into mathematics are computational ones, though. To compute means to actually do all the work, to move from one state to the next.

Computation changes our idea of knowledge: Instead of justified true belief, knowledge describes a local minimum in capturing regularities between observables. Knowledge is almost never static but progresses on a gradient through a state space of possible worldviews. We will no longer aspire to teach our children the truth, because, like us, they will never stop changing their minds. We will teach them how to productively change their minds, how to explore the never-ending land of insight.

A growing number of physicists understand that the universe is not mathematical but computational, and physics is in the business of finding an algorithm that can reproduce our observations. The switch from uncomputable mathematical notions (such as continuous space) to computable ones makes progress possible. Climate science, molecular genetics, and AI are computational sciences. Sociology, psychology, and neuroscience are not: They still seem confused by the apparent dichotomy between mechanism (rigid moving parts) and the objects of their study. They are looking for social, behavioral, chemical, neural regularities, where they should be looking for computational ones.

Everything is computation.

Know This: Today’s Most Interesting and Important Scientific Ideas, Discoveries, and Developments, John Brockman (editor), Harper Perennial, 2017, pages 228-230.
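
To make Bach’s informal ladder concrete (observation, states, regular transitions, algorithm, universal machine), here is a minimal sketch in Python. It is our illustration, not Bach’s: the machine, its state names, and its rule table are invented for the example. A completely described regularity over states is, in his vocabulary, an algorithm; this one increments a binary number:

    # Bach's ladder in miniature: states plus regular state transitions
    # form a computational system; a complete description of the
    # regularity (this rule table) is an algorithm.

    # Transition table for a tiny Turing-style machine that increments a
    # binary number written on the tape least-significant-bit first.
    # (state, symbol_read) -> (symbol_to_write, head_move, next_state)
    RULES = {
        ("carry", "1"): ("0", +1, "carry"),  # 1 plus carry = 0, keep carrying
        ("carry", "0"): ("1", +1, "done"),   # 0 plus carry = 1, halt
        ("carry", "_"): ("1", +1, "done"),   # grow the tape if we run off it
    }

    def run(tape, state="carry", head=0):
        """Apply the regular state transitions until the machine halts."""
        tape = list(tape)
        while state != "done":
            symbol = tape[head] if head < len(tape) else "_"
            write, move, state = RULES[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head += move
        return "".join(tape)

    # "1101" read least-significant-bit first is eleven; incrementing
    # gives twelve, which is "0011" in the same convention.
    print(run("1101"))  # prints 0011

Allow such a machine an unbounded number of state transitions and an unbounded tape, and you are in the territory of the Turing machines, Lambda calculi, and Post machines Bach lists.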

Friedrich Nietzsche rebelled most profoundly against this type of thinking. If scientism represents the modern, then Nietzsche was the prophet of postmodernism. Nietzsche’s famous phrase “God is dead” is not about a creator or divinity, but rather about finality itself. There is no final explanation.

Education and the Triple Helix underneath It

We want to restate the basic instincts and intuitions of this education, or re-education, project.

To get at the “schema,” it will help to digress for a moment and absorb this write-up of Professor Richard Lewontin’s (Harvard biology) 2002 masterpiece, The Triple Helix: Gene, Organism and Environment.

The blurb from Harvard University Press tells us:

“One of our most brilliant evolutionary biologists, Richard Lewontin has also been a leading critic of those—scientists and non-scientists alike—who would misuse the science to which he has contributed so much. In The Triple Helix, Lewontin the scientist and Lewontin the critic come together to provide a concise, accessible account of what his work has taught him about biology and about its relevance to human affairs. In the process, he exposes some of the common and troubling misconceptions that misdirect and stall our understanding of biology and evolution.

The central message of this book is that we will never fully understand living things if we continue to think of genes, organisms, and environments as separate entities, each with its distinct role to play in the history and operation of organic processes. Here Lewontin shows that an organism is a unique consequence of both genes and environment, of both internal and external features. Rejecting the notion that genes determine the organism, which then adapts to the environment, he explains that organisms, influenced in their development by their circumstances, in turn create, modify, and choose the environment in which they live.

The Triple Helix is vintage Lewontin: brilliant, eloquent, passionate and deeply critical. But it is neither a manifesto for a radical new methodology nor a brief for a new theory. It is instead a primer on the complexity of biological processes, a reminder to all of us that living things are never as simple as they may seem.”

Borrow from Lewontin the idea of a “triple helix” and apply it to the ultimate wide-angle view of this process of understanding. The educational triple helix includes and always tries to coordinate:

  1. The student and their life (i.e., every student is first of all a person who is playing the role of a student). Every person is born, lives, and dies.
  2. The student and their field in relation to the rest of the campus. (William James: all knowledge is relational.)
  3. The student and the world. (Container ships from Kaohsiung, Taiwan are bringing Lenovo and Acer computers to Bakersfield, California in a world of techno-commerce, exchange rates, insurance, customs, contractual arrangements, etc. In other words, always with some sense of the global political economy.)

The student keeps the triple helix “running” in the back of the mind and tries to create a “notebook of composite sketches” of the world and its workings, and of oneself. This develops through a life as a kind of portable “homemade” university which stays alive and current and vibrant long after one has forgotten the mean value theorem and the names and sequence of the six wives of Henry VIII.

The reader should think of Emerson’s point from his Journals of Ralph Waldo Emerson: 1824–1832—“The things taught in schools and colleges are not an education, but the means to an education.”

Science-Watching: Nature webinar

Cryo-EM and artificial intelligence: A marriage made in cell extracts

Date: Thursday, June 16, 2022

About this webcast

Deep insights into how cellular proteins interact have been out of reach, especially in a native context. In this webcast, Dr. Panagiotis Kastritis of Martin Luther University Halle-Wittenberg will describe how analysis of endogenous cell extracts with cryo-EM and artificial intelligence methods can provide integrated biological analysis of protein communities in a closer-to-native setting.

Meaningfulness versus Informativeness

The Decoding Reality book is a classic contemporary analysis of the foundations of physics and of their implications for the human world. The scientists don’t see that physics and science are the infrastructure on which the human “quest for meaning” takes place. Ortega (José Ortega y Gasset, who died in 1955) tells us that a person is “a point of view directed at the universe.” This level of meaning cannot be reduced to bits or qubits or electrons, since man is a “linguistic creature” who invents fictional stories to explain “things” that are not things.

The following dialog between Paul Davies (the outstanding science writer) and Vlatko Vedral (the distinguished physicist) grapples with these issues: the difference between science as one kind of story and the human interpretation of life and self expressed in “tales” and parables, fictions and beliefs:

Davies: “When humans communicate, a certain quantity of information passes between them. But that information differs from the bits (or qubits) physicists normally consider, inasmuch as it possesses meaning. We may be able to quantify the information exchanged, but meaning is a qualitative property—a value—and therefore hard, maybe impossible, to capture mathematically. Nevertheless the concept of meaning obviously has, well… meaning. Will we ever have a credible physical theory of ‘meaningful information,’ or is ‘meaning’ simply outside the scope of physical science?”

Vedral: “This is a really difficult one. The success of Shannon’s formulation of ‘information’ lies precisely in the fact that he stripped it of all ‘meaning’ and reduced it only to the notion of probability. Once we are able to estimate the probability for something to occur, we can immediately talk about its information content. But this sole dependence on probability could also be thought of as the main limitation of Shannon’s information theory (as you imply in your question). One could, for instance, argue that the DNA has the same information content inside as well as outside of a biological cell. However, it is really only when it has access to the cell’s machinery that it starts to serve its main biological purpose (i.e., it starts to make sense). Expressing this in your own words, the DNA has a meaning only within the context of a biological cell. The meaning of meaning is therefore obviously important. Though there has been some work on the theory of meaning, I have not really seen anything convincing yet. Intuitively we need some kind of a ‘relative information’ concept, information that is not only dependent on the probability, but also on its context, but I am afraid that we still do not have this.”
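
Vedral’s point about probability can be made concrete with Shannon’s own formula. Here is a small sketch in Python (our illustration, not Vedral’s; the example strings are invented): the entropy H = -sum(p * log2(p)) depends only on symbol frequencies, so two DNA-like strings with identical letter frequencies carry identical Shannon information, whatever they might “mean” inside a cell.

    # Shannon's measure depends only on the probabilities of symbols,
    # never on what the symbols mean.
    from collections import Counter
    from math import log2

    def shannon_entropy(message):
        """Average information per symbol, in bits: -sum(p * log2(p))."""
        counts = Counter(message)
        total = len(message)
        return -sum((n / total) * log2(n / total) for n in counts.values())

    # Two strings over DNA's four-letter alphabet: whatever their biological
    # "meaning," identical letter frequencies give identical entropy.
    print(shannon_entropy("ACGTACGTACGT"))  # 2.0 bits per symbol
    print(shannon_entropy("AAACCCGGGTTT"))  # also 2.0 bits per symbol

Context, the very thing Vedral says Shannon’s theory lacks, is exactly what this calculation never sees.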

For a physicist, all the world is information. The universe and its workings are the ebb and flow of information. We are all transient patterns of information, passing on the recipe for our basic forms to future generations using a four-letter digital code called DNA.

See Decoding Reality.

In this engaging and mind-stretching account, Vlatko Vedral considers some of the deepest questions about the universe and the implications of interpreting it in terms of information. He explains the nature of information, the idea of entropy, and the roots of this thinking in thermodynamics. He describes the bizarre effects of quantum behavior—effects such as “entanglement,” which Einstein called “spooky action at a distance”—and explores cutting-edge work on harnessing quantum effects in hyper-fast quantum computers, and how recent evidence suggests that the weirdness of the quantum world, once thought limited to the tiniest scales, may reach into the macro world.

Vedral finishes by considering the answer to the ultimate question: Where did all of the information in the universe come from? The answers he considers are exhilarating, drawing upon the work of distinguished physicist John Wheeler. The ideas challenge our concept of the nature of particles, of time, of determinism, and of reality itself.

Science is an “ontic” quest. Human life is an “ontological” quest. They are a “twisted pair” in which each strand must be seen clearly and not confused with the other. The content of your telephone conversation with a friend, say, is not reducible to the workings of a phone or the subtle electrical engineering and physics involved. A musical symphony is not just “an acoustical blast.”

The “meaning of meaning” is evocative and not logically expressible. There is a kind of “spooky action at a distance” between these two levels, meaning and information, but they remain different “realms” or “domains.”

“The Whole”: A Quick Second Look

We started this book with a quote from Wittgenstein, “Light dawns gradually over the whole,” and argued that the meaning of the “whole” is, and will remain, elusive forever.

That is as it should be:

Think of the final pages of John Dewey’s classic book, The Quest for Certainty. You’ll sense how Dewey oscillates between the “pin-down-ability” of the “whole” and its eternal slipperiness:

“Diversification of discoveries and the opening up of new points of view and new methods are inherent in the progress of knowledge. This fact defeats the idea of any complete synthesis of knowledge upon an intellectual basis. The sheer increase of specialized knowledge will never work the miracle of producing an intellectual whole. The astronomer, biologist, chemist, may attain systematic wholes, at least for a time, within his whole field.

“Man has never had such a varied body of knowledge in his possession before, and probably never before has he been so uncertain and so perplexed as to what his knowledge means, what it points to in action and in consequences.”

(Dewey, The Quest for Certainty, Capricorn Books, 1960, pages 312-313)

Wholeness, Dewey senses, like the white whale in Moby-Dick, “won’t sit for a portrait.” That is why the student should accept an eternally “non-rigid” answer to these questions, which are “arguments without end,” and that is fine.

What We Mean by “Epochal Waters”

We sometimes use the phrase “epochal waters” to refer to the deepest layers of the past, which we “swimmers” at the surface of the ocean don’t see or know. “Epochal waters” are latent; currents are closer to the surface.

There’s a similar idea from the French philosopher Michel Foucault, who died in 1984. In The Order of Things, his classic from 1966, he talks about the “episteme” (as in epistemology) that frames everything from deep down. (The Greeks distinguished between “techne” (arts, crafts, practical skills) and “episteme” (theory, overview).)

“In essence, Les mots et les choses (Foucault’s The Order of Things) maintains that every period is characterized by an underground configuration that delineates its culture, a grid of knowledge making possible every scientific discourse, every production of statements. Foucault designates this historical a priori as an episteme, deeply basic to defining and limiting what any period can—or cannot—think.

Each science develops within the framework of an episteme, and therefore is linked in part with other sciences contemporary with it.”

(Didier Eribon, Michel Foucault, Harvard University Press, 1991, page 158)

Take a simple example. A discussion comes up about what man is or does or thinks or knows. In today’s episteme or pre-definition, one thinks immediately not of man in terms of language or the invention of gods, but in terms of computational genomics, big data, and bipedalism (walking upright on two legs). It is assumed in advance, via an invisible episteme, that science and technology, physics, genetics, big data, chemistry, and biology hold the answer and that the rest is somehow outdated. This feeling is automatic and reflexive, like breathing, and might be called “mental breathing.”

One’s thoughts are immediately sent in certain directions or grooves, a process that is automatic and more like a “mental reflex” than a freely chosen “analytical frame.” The thinker has been “trained” in advance, and the episteme pre-decides what is thinkable and what is not.

There are deep epistemes that underlie all analyses: for example, in the Anglo-American tradition of looking at things, the phrase “human nature” inevitably comes in as a deus ex machina (i.e., a sudden way of clinching an argument, the “magic factor” that has been there all along). If you ask why the concept of “human nature” is suddenly being “imported,” the person who uses the phrase has no idea. It’s in the “epochal water,” or Foucault’s episteme, and it suddenly swims up from the sea floor below.

Another quick example: In the Anglo-American mind, there’s a belief from “way down and far away” that failure in life is mostly about individual behavior (laziness, alcoholism, etc.) and personal “stances,” while “circum-stances” are an excuse. This way of sequencing acceptable explanations is deeply pre-established in a way that is itself hard to explain. It serves to “frame the picture” in advance. These are all “epochal water” or episteme phenomena.