Monomania and the West

There have been all kinds of “voices” in the history of Western civilization. Perhaps the loudest voice is that of the monomaniacs, who always claim that behind the appearance of the many is the one. If we look at the West at its roots, the intersection of Athens and Jerusalem, we see the origins of this monomania. Plato’s realm of ideas was supposed to explain everything encountered in our daily lives. His main student and rival, Aristotle, had his own competing explanation, grounded in biology instead of mathematics.

These monomanias find their modern counterpart in ideologies. In communism, the key to everything is class and the resulting class struggle. Nazism revolves around race and racial conflict.

In our own era, the era of scientism, the idea of god is replaced by Stephen Hawking’s “mind of god,” Leon Lederman’s The God Particle and KAKU Michio’s The God Equation. In the 2009 film Angels & Demons, a senior Vatican official, played by Ewan McGregor, is absolutely outraged by the blasphemous phrase “the god particle.”

Currently, the monomaniacal impetus continues full force. For example, Professor Seth Lloyd of MIT tells us that reality is a cosmos and not a chaos because all of reality, taken together, is a computer. His MIT colleague, Max Tegmark, argues in his books that the world is not merely explained by mathematics but rather is mathematics. Perhaps the climax of this kind of thinking is given to us by the essay “Everything Is Computation” by Joscha Bach:

These days we see a tremendous number of significant scientific news stories, and it’s hard to say which has the highest significance. Climate models indicate that we are past crucial tipping points and irrevocably headed for a new, difficult age for our civilization. Mark van Raamsdonk expands on the work of Brian Swingle and Juan Maldacena and demonstrates how we can abolish the idea of spacetime in favor of a discrete tensor network, thus opening the way for a unified theory of physics. Bruce Conklin, George Church, and others have given us CRISPR/Cas9, a technology that holds promise for simple and ubiquitous gene editing. “Deep learning” starts to tell us how hierarchies of interconnected feature detectors can autonomously form a model of the world, learn to solve problems, and recognize speech, images, and video.

It is perhaps equally important to notice where we lack progress: Sociology fails to teach us how societies work; philosophy seems to have become infertile; the economic sciences seem ill-equipped to inform our economic and fiscal policies; psychology does not encompass the logic of our psyche; and neuroscience tells us where things happen in the brain but largely not what they are.

In my view, the 20th century’s most important addition to understanding the world is not positivist science, computer technology, spaceflight, or the foundational theories of physics.

It is the notion of computation. Computation, at its core, and as informally described as possible, is simple: Every observation yields a set of discernible differences.

These we call information. If the observation corresponds to a system that can change its state, we can describe those state changes. If we identify regularity in those state changes, we are looking at a computational system. If the regularity is completely described, we call this system an algorithm. Once a system can perform conditional state transitions and revisit earlier states, it becomes almost impossible to stop it from performing arbitrary computation. In the infinite case—that is, if we allow it to make an unbounded number of state transitions and use unbounded storage for the states—it becomes a Turing machine, or a Lambda calculus, or a Post machine, or one of the many other mutually equivalent formalisms that capture universal computation.

Computational terms rephrase the idea of “causality,” something that philosophers have struggled with for centuries. Causality is the transition from one state in a computational system to the next. They also replace the concept of “mechanism” in mechanistic, or naturalistic, philosophy. Computationalism is the new mechanism, and unlike its predecessor, it is not fraught with misleading intuitions of moving parts.

Computation is different from mathematics. Mathematics turns out to be the domain of formal languages and is mostly undecidable, which is just another word for saying “uncomputable” (since decision making and proving are alternative words for computation, too). All our explorations into mathematics are computational ones, though. To compute means to actually do all the work, to move from one state to the next.

Computation changes our idea of knowledge: Instead of justified true belief, knowledge describes a local minimum in capturing regularities between observables. Knowledge is almost never static but progresses on a gradient through a state space of possible worldviews. We will no longer aspire to teach our children the truth, because, like us, they will never stop changing their minds. We will teach them how to productively change their minds, how to explore the never-ending land of insight.

A growing number of physicists understands that the universe is not mathematical but computational, and physics is in the business of finding an algorithm that can reproduce our observations. The switch from uncomputable mathematical notions (such as continuous space) makes progress possible. Climate science, molecular genetics, and AI are computational sciences. Sociology, psychology, and neuroscience are not: They still seem confused by the apparent dichotomy between mechanism (rigid moving parts) and the objects of their study. They are looking for social, behavioral, chemical, neural regularities, where they should be looking for computational ones.

Everything is computation.

Know This: Today’s Most Interesting and Important Scientific Ideas, Discoveries, and Developments, John Brockman (editor), Harper Perennial, 2017, pages 228-230.
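To make the quoted state-transition picture concrete, here is a minimal Python sketch of a toy computational system in Bach’s sense: a two-state machine whose state changes follow a completely described regularity (an “algorithm”). The machine and its names are our own illustration, not Bach’s.

```python
# Toy computational system: a deterministic state machine.
# Each input symbol is an observed "difference"; the transition table
# captures the regularity in the state changes.
TRANSITIONS = {
    ("even", 0): "even",  # a 0 leaves the parity unchanged
    ("even", 1): "odd",   # a 1 flips the parity
    ("odd", 0): "odd",
    ("odd", 1): "even",
}

def run(bits, state="even"):
    """Run the machine over a sequence of bits and return the final state."""
    for bit in bits:
        state = TRANSITIONS[(state, bit)]
    return state

print(run([1, 0, 1, 1]))  # -> "odd" (three 1s seen, so the parity is odd)
```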

Friedrich Nietzsche rebelled most profoundly against this type of thinking. If scientism represents the modern, then Nietzsche was the prophet of postmodernism. Nietzsche’s famous phrase, “God is dead,” is not about a creator or divinity, but rather about finality itself. There is no final explanation.

Problems of Perspective: Michel Foucault

Michel Foucault was one of the leading French philosophers of the 20th century. Often considered a postmodernist, he did not believe there was a final perspective that human knowledge could achieve. This immediately contrasts with the outlook of leading physicists like Stephen Hawking, who concludes his 1988 classic, A Brief History of Time, by saying that once science has achieved a theory of everything, which is not far off, we will “know the mind of god.”

In his key 1966 work, The Order of Things: An Archaeology of the Human Sciences (French: Les Mots et les Choses: Une archéologie des sciences humaines), Foucault argued that the so-called order of things is invented by us, not discovered. This runs contrary to scientific thought.

Foucault sets up this limit in his surprising interpretation of the Diego Velázquez masterpiece painting, Las Meninas (Spanish: The Ladies-in-waiting). The painting is deliberately elusive in its use of perspective.

The great German thinker, Jürgen Habermas, explained this Foucault/Velázquez perspective difficulty:

This picture portrays the painter in front of a canvas not visible to the spectator; the painter is evidently looking, as are the two ladies-in-waiting next to him, in the direction of his two models, King Philip IV and his spouse. These two personages standing as models are found outside the frame of the picture; they can be identified by the spectator only with the help of a mirror pictured in the background. The point that Velázquez apparently had in mind is a confusing circumstance of which the spectator becomes aware by inference: The spectator cannot avoid assuming the place and the direction of the gaze of the counterfeit but absent royal pair — toward which the painter captured in the picture gazes — as well as the place and the perspective of Velázquez himself, which is to say, of the painter who actually produced this picture. For Foucault, in turn, the real point lies in the fact that the classical picture frame is too limited to permit the representation of the act of representing as such — it is this that Velázquez makes clear by showing the gaps within the classical picture frame left by the lack of reflection on the process of representing itself.29

29. Foucault constructs two different series of absences. On the one hand, the painter in the picture lacks his model, the royal couple standing outside the frame of the picture; the latter are in turn unable to see the picture of themselves that is being painted — they only see the canvas from behind; finally, the spectator is missing the center of the scene, that is, the couple standing as models, to which the gaze of the painter and of the courtesans merely directs us. Still more revealing than the absence of the objects being represented is, on the other hand, that of the subjects doing the representing, which is to say, the triple absence of the painter, the model, and the spectator who, located in front of the picture, takes in perspectives of the two others. The painter, Velázquez, actually enters into the picture, but he is not presented exactly in the act of painting — one sees him during a pause and realizes that he will disappear behind the canvas as soon as he takes up his labors again. The faces of the two models can actually be recognized unclearly in a mirror reflection, but they are not to be observed directly during the act of their portrayal. Finally, the act of the spectator is equally unrepresented — the spectator depicted entering into the picture from the right cannot take over this function. (See Foucault, The Order of Things, pp. 3-16, 307-311.)

Critique and Power: Recasting the Foucault/Habermas Debate, Michael Kelly, editor, MIT Press, 1994, pages 67, 77 [archived PDF].

Let us conclude by saying that one way of specifying the disagreement between scientists and these thinkers is that the sciences see themselves as “objective,” while the thinkers feel science lacks objectivity because of the human observer. Kant, centuries ago, argued that concepts like causality, space and time are imposed by the human mind on the world. Heisenberg, in Physics and Philosophy: The Revolution in Modern Science, similarly said that science does not finally answer questions about an objective reality, but can only answer questions posed by us.

World-Watching: Science First Release, 10 July 2025

[from Science]

Accepted papers posted online prior to journal publication.

NASA Earth Science Division provides key data

by Dylan B. Millet, Belay B. Demoz, et al.

In May, the US administration proposed budget cuts to NASA, including a more than 50% decrease in funding for the agency’s Earth Science Division (ESD), the mission of which is to gather knowledge about Earth through space-based observation and other tools. The budget cuts proposed for ESD would cancel crucial satellites that observe Earth and its atmosphere, gut US science and engineering expertise, and potentially lead to the closure of NASA research centers. As former members of the recently dissolved NASA Earth Science Advisory Committee, an all-volunteer, independent body chartered to advise ESD, we warn that these actions would come at a profound cost to US society and scientific leadership.

[read more]

Spin-filter tunneling detection of antiferromagnetic resonance with electrically tunable damping

by Thow Min Jerald Cham, Daniel G. Chica, et al.

Antiferromagnetic spintronics offers the potential for higher-frequency operations and improved insensitivity to magnetic fields compared to ferromagnetic spintronics. However, previous electrical techniques to detect antiferromagnetic dynamics have utilized large, millimeter-scale bulk crystals. Here we demonstrate direct electrical detection of antiferromagnetic resonance in structures on the few-micrometer scale using spin-filter tunneling in PtTe2/bilayer CrSBr/graphite junctions in which the tunnel barrier is the van der Waals antiferromagnet CrSBr. This sample geometry allows not only efficient detection, but also electrical control of the antiferromagnetic resonance through spin-orbit torque from the PtTe2 electrode. The ability to efficiently detect and control antiferromagnetic resonance enables detailed studies of the physics governing these high-frequency dynamics.

[read more]

Scalable emulation of protein equilibrium ensembles with generative deep learning

by Sarah Lewis, Tim Hempel, et al.

Following the sequence and structure revolutions, predicting functionally relevant protein structure changes at scale remains an outstanding challenge. We introduce BioEmu, a deep learning system that emulates protein equilibrium ensembles by generating thousands of statistically independent structures per hour on a single GPU. BioEmu integrates over 200 milliseconds of molecular dynamics (MD) simulations, static structures and experimental protein stabilities using novel training algorithms. It captures diverse functional motions—including cryptic pocket formation, local unfolding, and domain rearrangements—and predicts relative free energies with 1 kcal/mol accuracy compared to millisecond-scale MD and experimental data. BioEmu provides mechanistic insights by jointly modeling structural ensembles and thermodynamic properties. This approach amortizes the cost of MD and experimental data generation, demonstrating a scalable path toward understanding and designing protein function.

[read more]

Negative capacitance overcomes Schottky-gate limits in GaN high-electron-mobility transistors

by Asir Intisar Khan, Jeong-Kyu Kim, et al.

For high-electron-mobility transistors based on a two-dimensional electron gas (2DEG) within a quantum well, such as those based on the AlGaN/GaN heterostructure, a Schottky gate is used to maximize the amount of charge that can be induced and thereby the current that can be achieved. However, the Schottky gate also leads to very high leakage current through the gate electrode. Adding a conventional dielectric layer between the nitride layers and gate metal can reduce leakage, but this comes at the price of a reduced drain current. Here, we used a ferroic HfO2-ZrO2 bilayer as the gate dielectric and achieved a simultaneous increase in the ON current and decrease in the leakage current, a combination otherwise not attainable with conventional dielectrics. This approach surpasses the conventional limits of Schottky-gate GaN transistors and provides a new pathway to improve performance in transistors based on 2DEG.

[read more]

Heidegger vs. Marx as World Watchers

Marx (1818-1883) implies that the foundation of human reality is econo-technical, and on that basis society creates thoughts and philosophies, art and poems. This explanation seems appealing when we think of the economic development of China in our time, for example, or the rise of computers and software.

In a way, Heidegger (1889-1976) turns this upside down: at the basis of world history is the culture a society produces, above all its ways of thinking. You can make a simple “cartoon” and say that for Marx economics shapes everything, while for Heidegger culture takes the place of economics.

For example, in his book What Is Called Thinking? (English translation, 1968, Harper & Row), Heidegger argues that the foundation of all Western thinking and culture comes from axioms such as logos [Ancient Greek: λόγος] (from which we have logic, cosmology, psychology, epistemology, etc.), as well as legein (the Greek verb λέγειν, “to speak”).

Heidegger states (on page 204), “Without the λέγειν of that logic, modern man would have to make do without his automobile. There would be no airplanes, no turbines, no Atomic Energy Commission.”

Our MI comment on this is that any monocausal explanation of how mankind went from Neanderthal to the Manhattan skyline is completely inadequate. You must create a “double-helix” of Marx and Heidegger, adding the dimensions of surprise and unintended consequences. Without the physics concepts of emergence and complexity, we have no possibility of understanding how we got to now. In the site tagline, we use the word “composite” as a reference to this kind of deeper understanding.

Speculative Science: The Reality beyond Spacetime, with Donald Hoffman

[from The Institute of Art and Ideas Science Weekly, July 22]

Donald Hoffman famously argues that we know nothing about the truth of the world. His book, The Case Against Reality, claims that the process of survival of the fittest does not require a true picture of reality. Furthermore, Hoffman claims spacetime is not fundamental. So what lies beneath spacetime, and can we know about it? And how does consciousness come into play? Join this interview with the famed cognitive psychologist and author exploring our notions of consciousness, spacetime, and what lies beneath. Hosted by Curt Jaimungal.

[watch the video]

COVID-19 and “Naïve Probabilism”

[from the London Mathematical Laboratory]

In the early weeks of the 2020 U.S. COVID-19 outbreak, guidance from the scientific establishment and government agencies included a number of dubious claims—masks don’t work, there’s no evidence of human-to-human transmission, and the risk to the public is low. These statements were backed by health authorities, as well as public intellectuals, but were later disavowed or disproven, and the initial under-reaction was followed by an equal overreaction and imposition of draconian restrictions on human social activities.

In a recent paper, LML Fellow Harry Crane examines how these early mis-steps ultimately contributed to higher death tolls, prolonged lockdowns, and diminished trust in science and government leadership. Even so, the organizations and individuals most responsible for misleading the public suffered little or no consequences, or even benefited from their mistakes. As he discusses, this perverse outcome can be seen as the result of authorities applying a formulaic procedure of “naïve probabilism” in facing highly uncertain and complex problems, and largely assuming that decision-making under uncertainty boils down to probability calculations and statistical analysis.

This attitude, he suggests, might be captured in a few simple “axioms of naïve probabilism”:

Axiom 1: The more complex the problem, the more complicated the solution.

This idea is a hallmark of naïve decision making. The COVID-19 outbreak was highly complex, being a novel virus of uncertain origins, and spreading through the interconnected global society. But the potential usefulness of masks was not one of these complexities. The mask mistake was consequential not because masks were the antidote to COVID-19, but because they were a low cost measure the effect of which would be neutral at worst; wearing a mask can’t hurt in reducing the spread of a virus.

Yet the experts neglected common sense in favor of a more “scientific response” based on rigorous peer review and sufficient data. Two months after the initial U.S. outbreak, a study confirmed the obvious, and masks went from being strongly discouraged to being mandated by law. Precious time had been wasted, many lives lost, and the economy stalled.

Crane also considers another rule of naïve probabilism:

Axiom 2: Until proven otherwise, assume that the future will resemble the past.

In the COVID-19 pandemic, of course, there was at first no data that masks work, no data that travel restrictions work, no data on human-to-human transmission. How could there be? Yet some naïve experts took this as a reason to maintain the status quo. Indeed, many universities refused to do anything in preparation until a few cases had been detected on campus—at which point they had some data, as well as hundreds or thousands of other as-yet-undetected infections.

Crane touches on some of the more extreme examples of this kind of thinking, which assumes that whatever can’t be explained in terms of something that happened in the past is speculative, non-scientific and unjustifiable:

“This argument was put forward by John Ioannidis in mid-March 2020, as the pandemic outbreak was already spiralling out of control. Ioannidis wrote that COVID-19 wasn’t a ‘once-in-a-century pandemic,’ as many were saying, but rather a ‘once-in-a-century data-fiasco’. Ioannidis’s main argument was that we knew very little about the disease, its fatality rate, and the overall risks it poses to public health; and that in face of this uncertainty, we should seek data-driven policy decisions. Until the data was available, we should assume COVID-19 acts as a typical strain of the flu (a different disease entirely).”

Unfortunately, waiting for the data also means waiting too long if the virus turns out to be more serious. This is like waiting to hit the tree before accepting that the available data indeed supports wearing a seatbelt. Moreover, in the pandemic example, this “lack of evidence” argument ignores other evidence from before the virus entered the United States. China had locked down a city of 10 million; Italy had locked down its entire northern region, with the entire country soon to follow. There was worldwide consensus that the virus was novel, that it was spreading fast, and that medical communities had no idea how to treat it. That’s data, and plenty of information to act on.

Crane goes on to consider a third axiom of naïve probabilism, one that aims to turn ignorance into a strength. Overall, he argues, these axioms, despite being widely used by many prominent authorities and academic experts, actually capture a set of dangerous fallacies for action in the real world.

In reality, complex problems call for simple, actionable solutions; the past doesn’t repeat indefinitely (i.e., COVID-19 was never the flu); and ignorance is not a form of wisdom. The Naïve Probabilist’s primary objective is to be accurate with high probability rather than to protect against high-consequence, low-probability outcomes. This goes against common sense principles of decision making in uncertain environments with potentially very severe consequences.
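A toy expected-loss calculation (the numbers below are our own, purely illustrative) shows why “accurate with high probability” and “protected against high-consequence, low-probability outcomes” can point to opposite decisions:

```python
# Two policies facing a 1% chance of catastrophe (cost 1000) vs. a certain small cost (1).
p_catastrophe = 0.01

# Policy A ("naive"): do nothing; it is "right" 99% of the time but exposed to the rare disaster.
expected_loss_do_nothing = p_catastrophe * 1000   # = 10.0

# Policy B ("precautionary"): pay the small certain cost (e.g., wear the mask).
expected_loss_precaution = 1                      # = 1.0

print(expected_loss_do_nothing, expected_loss_precaution)
# Doing nothing is accurate with high probability, yet its expected loss is ten times worse.
```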

Importantly, Crane emphasizes, the hallmark of Naïve Probabilism is naïveté, not ignorance, stupidity, crudeness or other such base qualities. The typical Naïve Probabilist lacks not knowledge or refinement, but the experience and good judgment that come from making real decisions with real consequences in the real world. The most prominent naïve probabilists are recognized (academic) experts in mathematical probability or, relatedly, statistics, physics, psychology, economics, epistemology, medicine or the so-called decision sciences. Moreover, and worryingly, the best-known naïve probabilists are quite sophisticated, skilled in the art of influencing public policy decisions without suffering from the risks those policies impose on the rest of society.

Read the paper. [Archived PDF]

Education and “Intuition Pumps”

Professor Daniel Dennett of Tufts uses the term “intuition pumps” in discussing intuitive understanding and how it can be tweaked.

Let’s do a simple example, avoiding as always “rocket science,” where the intricacies weigh you down in advance. We make a U-turn and go back by choice to elementary notions and examples.

Think of the basic statistics curve. It’s called the Bell Curve, the Gaussian, the Normal Curve.

The first name is sort of intuitive, based on appearance, unless of course the curve is shifted or squeezed, and then it’s less obvious. The second name must be based on either the discoverer or the “name-giver” or both, if the same person. The third is a bit vague.

Already one’s intuitions and hunches are not fool-proof.

The formula for the Bell Curve is:

\begin{equation} y = \frac{1}{\sqrt{2\pi}}e^{\frac{-x^2}{2}} \end{equation}

We immediately see the two key constants: π (pi) and e. These are approximately 3.14159 (roughly 22/7) and 2.71828 (the base of natural logs).

The first captures something about circularity, the second continuous growth as in continuous compounding of interest.
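As a quick numerical illustration of the formula above (a minimal sketch; the function name and sample points are our own), note how both constants enter the calculation:

```python
import math

def bell_curve(x):
    # Standard normal density: y = (1 / sqrt(2*pi)) * e^(-x^2 / 2)
    return (1.0 / math.sqrt(2.0 * math.pi)) * math.e ** (-(x ** 2) / 2.0)

for x in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    print(f"x = {x:+.1f}   y = {bell_curve(x):.5f}")
# The peak at x = 0 is 1/sqrt(2*pi) ≈ 0.39894, and the curve is symmetric about 0.
```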

You would not necessarily anticipate seeing these two “irrational numbers” (they “go on” forever) in a statistics graph. Does that mean your intuition is poor or untutored or does it mean that “mathworld” is surprising?

It’s far from obvious.

For openers, why should π (pi) be everywhere in math and physics?

Remember Euler’s identity: e^{iπ} + 1 = 0

That the two key integers (1 and 0) should relate to π (pi), e, and i (the square root of −1) is completely unexpected and exotic.
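A minimal Python check of the identity, using the standard cmath library (purely illustrative), confirms it numerically up to floating-point rounding:

```python
import cmath

# e^(i*pi) + 1 should be zero, up to floating-point rounding error.
value = cmath.exp(1j * cmath.pi) + 1
print(value)               # ~1.22e-16j, i.e., zero to machine precision
print(abs(value) < 1e-12)  # True
```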

Our relationship to “mathworld” is quite enigmatic, and this raises the question of whether Professor Max Tegmark of MIT, who proposes to explain “ultimate reality” through the “math fabric” of all reality, might be combining undoubted brilliance with quixotism. We don’t know.

Education and Finality Claims

Stephen Hawking kept saying he wanted to discover the ultimate world-equation. This would be the final “triumph of the rational human mind.”

This would presumably imply that if one had such a world-equation, one could infer or deduce all the formalisms in a university physics book with its thousand pages of equations, puzzles and conundrums, footnotes and names and dates.

While hypothetically imaginable, this seems very unlikely because too many phenomena are included, too many topics, too many rules and laws.

There’s another deep problem with such Hawking-type “final equation” quests. Think of the fact that a Henri Poincaré (died in 1912) suddenly appears and writes hundreds of excellent science papers. Think of Paul Erdős (died in 1996) and his hundreds of number theory papers. Since the appearance of such geniuses and powerhouses is not knowable in advance, the production of new knowledge is unpredictable and would “overwhelm” any move toward a world-equation, since such an equation would have been formulated without knowledge that did not yet exist.

Furthermore, if the universe is mathematical, as MIT’s Professor Max Tegmark claims, then a Hawking-type “world-equation” would have to cover all of mathematics, without which parts of Tegmark’s universe would be “unaccounted for.”

In other words, history and the historical experience cast doubt on the Stephen Hawking “finality” project. It’s not just that parts of physics don’t fit together (general relativity and quantum mechanics, gravity and the other three fundamental forces). Finality would also imply that there would be no new Stephen Hawking who would refute the world-equation as it stands at a certain point in time. Put differently, if you accept, as scientists like Freeman Dyson claim, that the universe is a “vast evolutionary” process, then the mathematical thinking about it is also evolving, or co-evolving, and there’s no end.

There are no final works in poetry, novels, jokes, language, movies or songs and there’s perhaps also no end to science.

Thus a Hawking-type quest for the final world-equation seems enchanting but quixotic.

Meaningfulness versus Informativeness

Decoding Reality is a classic contemporary analysis of the foundations of physics and their implications for the human world. What the scientists don’t see is that physics and science are the infrastructure on which the human “quest for meaning” takes place. Ortega (Ortega y Gasset, died in 1955) tells us that a person is “a point of view directed at the universe.” This level of meaning cannot be reduced to bits or qubits or electrons, since man is a “linguistic creature” who invents fictional stories to explain “things” that are not things.

The following dialogue between Paul Davies (the outstanding science writer) and Vlatko Vedral (the distinguished physicist) grapples with these issues: the difference between science as one kind of story and the human interpretation of life and self expressed in “tales” and parables, fictions and beliefs:

Davies: “When humans communicate, a certain quantity of information passes between them. But that information differs from the bits (or qubits) physicists normally consider, inasmuch as it possesses meaning. We may be able to quantify the information exchanged, but meaning is a qualitative property—a value—and therefore hard, maybe impossible, to capture mathematically. Nevertheless the concept of meaning obviously has, well… meaning. Will we ever have a credible physical theory of ‘meaningful information,’ or is ‘meaning’ simply outside the scope of physical science?”

Vedral: “This is a really difficult one. The success of Shannon’s formulation of ‘information’ lies precisely in the fact that he stripped it of all “meaning” and reduced it only to the notion of probability. Once we are able to estimate the probability for something to occur, we can immediately talk about its information content. But this sole dependence on probability could also be thought of as the main limitation of Shannon’s information theory (as you imply in your question). One could, for instance, argue that the DNA has the same information content inside as well as outside of a biological cell. However, it is really only when it has access to the cell’s machinery that it starts to serve its main biological purpose (i.e., it starts to make sense). Expressing this in your own words, the DNA has a meaning only within the context of a biological cell. The meaning of meaning is therefore obviously important. Though there has been some work on the theory of meaning, I have not really seen anything convincing yet. Intuitively we need some kind of a ‘relative information’ concept, information that is not only dependent on the probability, but also on its context, but I am afraid that we still do not have this.”
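As a small illustration of Vedral’s point that Shannon’s “information” rests solely on probability, here is a minimal Python sketch (the probabilities are our own example values) computing information content in bits:

```python
import math

def information_content(p):
    """Shannon information (surprisal), in bits, of an event with probability p."""
    return -math.log2(p)

for p in [0.5, 0.25, 0.01]:
    print(f"p = {p:<5} -> {information_content(p):.2f} bits")
# Rarer events carry more information: 1.00, 2.00, and ~6.64 bits, respectively.
```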

For a physicist, all the world is information. The universe and its workings are the ebb and flow of information. We are all transient patterns of information, passing on the recipe for our basic forms to future generations using a four-letter digital code called DNA.

See Decoding Reality.

In this engaging and mind-stretching account, Vlatko Vedral considers some of the deepest questions about the universe and considers the implications of interpreting it in terms of information. He explains the nature of information, the idea of entropy, and the roots of this thinking in thermodynamics. He describes the bizarre effects of quantum behavior—effects such as “entanglement,” which Einstein called “spooky action at a distance”—and explores cutting-edge work on harnessing quantum effects in hyper-fast quantum computers, as well as recent evidence suggesting that the weirdness of the quantum world, once thought limited to the tiniest scales, may reach into the macro world.

Vedral finishes by considering the answer to the ultimate question: Where did all of the information in the universe come from? The answers he considers are exhilarating, drawing upon the work of distinguished physicist John Wheeler. The ideas challenge our concept of the nature of particles, of time, of determinism, and of reality itself.

Science is an “ontic” quest. Human life is an “ontological” quest. They are a “twisted pair” where each strand must be seen clearly and not confused. The content of your telephone conversation with your friend, say, is not reducible to the workings of a phone or the subtle electrical engineering and physics involved. A musical symphony is not just “an acoustical blast.”

The “meaning of meaning” is evocative and not logically expressible. There’s a “spooky action at a distance” between these two levels, meaning and information, but they remain different “realms” or “domains.”

Words and Reality and Change: What Is a Fluctuation?

Ludwig Boltzmann, who died in 1906, was a giant in the history of physics.

His name is associated with various fields like statistical mechanics, entropy and so on.

A standard physics overview book called Introducing Quantum Theory (2007, Icon/Totem Books) shows a “cartoon” of Boltzmann which says, “I also introduced the controversial notion of fluctuations.” (page 25)

In common parlance, synonyms of fluctuate include oscillate, sway, swing, undulate, vibrate and waver. While all these words mean “to move from one direction to its opposite,” fluctuate suggests (sort of) constant irregular changes of level, intensity or value. Pulses and some pulsations suggest themselves as related.

Expressions like “Boltzmann brains” refer to this great physicist Boltzmann and you can find this notion described here: “Boltzmann Brain.”

Notice that the word “fluctuation” occurs four times in one of the paragraphs of the article “Boltzmann Brain,” as you can see:

“In 1931, astronomer Arthur Eddington pointed out that, because a large fluctuation is exponentially less probable than a small fluctuation, observers in Boltzmann universes will be vastly outnumbered by observers in smaller fluctuations. Physicist Richard Feynman published a similar counterargument within his widely read 1964 Feynman Lectures on Physics. By 2004, physicists had pushed Eddington’s observation to its logical conclusion: the most numerous observers in an eternity of thermal fluctuations would be minimal “Boltzmann brains” popping up in an otherwise featureless universe.”

You may also remember having heard the term, perhaps on a PBS Nova episode on quantum fluctuations.

In the classic history of science book, The Merely Personal by Dr. Jeremy Bernstein (Ivan Dee, Chicago, 2001), one encounters the word fluctuation all over:

“This uniform density of matter …and fluctuations from the average are what would produce the unwanted instability.”

“So Einstein chose the cosmological constant…” (page 83 of Bernstein’s book)

Suppose we allow our minds to be restless and turn to economics to “change the lens” we are using to look at the world, since lens-changing is one of the pillars of Meta Intelligence.

What do we see?

In 1927, Keynes’s professor Arthur Cecil Pigou (died in 1959) published the famous work, Industrial Fluctuations.

In 1915, twelve years earlier, the famous Sir Dennis Holme Robertson (died in 1963) published A Study of Industrial Fluctuation.

The word fluctuation seems to be migrating to or resonating in economics.

The larger point (i.e., the Meta Intelligent one): is the use of this word a linguistic accident or fashion or is something basic being discovered about how some “things” “jump around” in the world?

Is the world merely seen as more “jumpy,” or has it become more jumpy, whether through global integration or disintegration, or through the deeper levels of physics reached when a Newtonian world is replaced by an Einsteinian one?

The phenomenon of change—call it “change-ology”—whooshes up in front of us, and a Meta Intelligent student of the world would immediately ponder fluctuations versus blips versus oscillations versus jumps and saltations (a term used in biology), and so on. What about pulsations? Gyrations?

This immediately places in front of you the question of the relationship of languages (words, numbers, images) to events.

The point is not to nail down some final answer. Our task here is not to delve into fields like physics or economics or whatever, but to notice the very terms we are using across fields and in daily life (e.g., stock price fluctuations).

Notice, say, how the next blog post on oil price dynamics begins:

“Our oil price decomposition, reported weekly, examines what’s behind recent fluctuations in oil prices…”

The real point is to keep pondering and “sniffing” (i.e., Meta Intelligence), since MI is above all an awareness quest.