Monomania and the West

There have been all kinds of “voices” in the history of Western civilization. Perhaps the loudest voice is that of the monomaniacs, who always claim that behind the appearance of the many lies the one. If we look at the West at its roots, at the intersection of Athens and Jerusalem, we see the origins of this monomania. Plato’s realm of ideas was supposed to explain everything encountered in our daily lives. His main student and rival, Aristotle, offered his own competing explanation, based in biology instead of mathematics.

These monomanias have their modern counterparts in ideologies. In communism, the key to everything is class and the resulting class struggle. Nazism revolves around race and racial conflict.

In our own era, the era of scientism, the idea of God is replaced with Stephen Hawking’s “mind of god,” Leon Lederman’s The God Particle and KAKU Michio’s The God Equation. In the 2009 film Angels & Demons, a senior Vatican official, played by Ewan McGregor, is absolutely outraged by the blasphemous phrase “the god particle.”

Currently, the monomaniacal impetus continues in full force. For example, Professor Seth Lloyd of MIT tells us that reality is cosmos and not chaos, because all of reality together is a computer. His MIT colleague, Max Tegmark, argues in his books that the world is not merely described by mathematics, but rather is mathematics. Perhaps the climax of this kind of thinking is given to us in the essay “Everything Is Computation” by Joscha Bach:

These days we see a tremendous number of significant scientific news stories, and it’s hard to say which has the highest significance. Climate models indicate that we are past crucial tipping points and irrevocably headed for a new, difficult age for our civilization. Mark van Raamsdonk expands on the work of Brian Swingle and Juan Maldacena and demonstrates how we can abolish the idea of spacetime in favor of a discrete tensor network, thus opening the way for a unified theory of physics. Bruce Conklin, George Church, and others have given us CRISPR/Cas9, a technology that holds promise for simple and ubiquitous gene editing. “Deep learning” starts to tell us how hierarchies of interconnected feature detectors can autonomously form a model of the world, learn to solve problems, and recognize speech, images, and video.

It is perhaps equally important to notice where we lack progress: Sociology fails to teach us how societies work; philosophy seems to have become infertile; the economic sciences seem ill-equipped to inform our economic and fiscal policies; psychology does not encompass the logic of our psyche; and neuroscience tells us where things happen in the brain but largely not what they are.

In my view, the 20th century’s most important addition to understanding the world is not positivist science, computer technology, spaceflight, or the foundational theories of physics.

It is the notion of computation. Computation, at its core, and as informally described as possible, is simple: Every observation yields a set of discernible differences.

These we call information. If the observation corresponds to a system that can change its state, we can describe those state changes. If we identify regularity in those state changes, we are looking at a computational system. If the regularity is completely described, we call this system an algorithm. Once a system can perform conditional state transitions and revisit earlier states, it becomes almost impossible to stop it from performing arbitrary computation. In the infinite case — that is, if we allow it to make an unbounded number of state transitions and use unbounded storage for the states — it becomes a Turing machine, or a Lambda calculus, or a Post machine, or one of the many other mutually equivalent formalisms that capture universal computation.

Computational terms rephrase the idea of “causality,” something that philosophers have struggled with for centuries. Causality is the transition from one state in a computational system to the next. They also replace the concept of “mechanism” in mechanistic, or naturalistic, philosophy. Computationalism is the new mechanism, and unlike its predecessor, it is not fraught with misleading intuitions of moving parts.

Computation is different from mathematics. Mathematics turns out to be the domain of formal languages and is mostly undecidable, which is just another word for saying “uncomputable” (since decision making and proving are alternative words for computation, too). All our explorations into mathematics are computational ones, though. To compute means to actually do all the work, to move from one state to the next.

Computation changes our idea of knowledge: Instead of justified true belief, knowledge describes a local minimum in capturing regularities between observables. Knowledge is almost never static but progresses on a gradient through a state space of possible worldviews. We will no longer aspire to teach our children the truth, because, like us, they will never stop changing their minds. We will teach them how to productively change their minds, how to explore the never-ending land of insight.

A growing number of physicists understands that the universe is not mathematical but computational, and physics is in the business of finding an algorithm that can reproduce our observations. The switch from uncomputable mathematical notions (such as continuous space) makes progress possible. Climate science, molecular genetics, and AI are computational sciences. Sociology, psychology, and neuroscience are not: They still seem confused by the apparent dichotomy between mechanism (rigid moving parts) and the objects of their study. They are looking for social, behavioral, chemical, neural regularities, where they should be looking for computational ones.

Everything is computation.

Know This: Today’s Most Interesting and Important Scientific Ideas, Discoveries, and Developments, John Brockman (editor), Harper Perennial, 2017, pages 228-230.
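Bach’s informal hierarchy (discernible states, regular state changes, a completely described rule, i.e., an algorithm) can be made concrete in a few lines of code. The sketch below is only illustrative: the choice of the Collatz step as the transition rule is my own hypothetical example, not one drawn from Bach’s essay, but it shows a system with conditional state transitions that can revisit smaller states.

```python
# A minimal "computational system" in Bach's sense: a set of states
# plus a fully described, regular transition rule (an algorithm).
# Transition rule used here (hypothetical choice): the Collatz step,
# a conditional state transition.

def step(state: int) -> int:
    """One regular state transition."""
    return state // 2 if state % 2 == 0 else 3 * state + 1

def run(state: int, max_steps: int = 1000) -> list:
    """Trace the state changes until the system reaches the state 1."""
    trace = [state]
    while state != 1 and len(trace) < max_steps:
        state = step(state)
        trace.append(state)
    return trace

print(run(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

Each entry of the trace is a discernible difference from the last, which is exactly what Bach calls information; the rule `step` is the completely described regularity.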

Friedrich Nietzsche rebelled against this type of thinking most profoundly. If scientism represents the modern, then Nietzsche was the prophet of postmodernism. Nietzsche’s famous phrase, “God is dead,” is not about a creator or divinity, but rather about finality itself. There is no final explanation.

Problems of Perspective, Michel Foucault

Michel Foucault was one of the leading French philosophers of the 20th century. Often considered a postmodernist, he did not believe there was a final perspective that human knowledge could achieve. This immediately contrasts with the outlook of leading physicists like Stephen Hawking. In his 1988 classic, A Brief History of Time, Hawking concludes the book by saying that once science achieves a theory of everything, which he believed was not far off, we will “know the mind of god.”

In his 1966 key work, The Order of Things: An Archaeology of the Human Sciences (French: Les Mots et les Choses: Une archéologie des sciences humaines), Foucault argued that the so-called order of things is invented, not discovered, by us. This is contrary to scientific thought.

Foucault sets up this limit in his surprising interpretation of the Diego Velázquez masterpiece painting, Las Meninas (Spanish: The Ladies-in-waiting). The painting is deliberately elusive in its use of perspective.

The great German thinker, Jürgen Habermas, explained this Foucault/Velázquez perspective difficulty:

This picture portrays the painter in front of a canvas not visible to the spectator; the painter is evidently looking, as are the two ladies-in-waiting next to him, in the direction of his two models, King Philip IV and his spouse. These two personages standing as models are found outside the frame of the picture; they can be identified by the spectator only with the help of a mirror pictured in the background. The point that Velázquez apparently had in mind is a confusing circumstance of which the spectator becomes aware by inference: The spectator cannot avoid assuming the place and the direction of the gaze of the counterfeit but absent royal pair — toward which the painter captured in the picture gazes — as well as the place and the perspective of Velázquez himself, which is to say, of the painter who actually produced this picture. For Foucault, in turn, the real point lies in the fact that the classical picture frame is too limited to permit the representation of the act of representing as such — it is this that Velázquez makes clear by showing the gaps within the classical picture frame left by the lack of reflection on the process of representing itself.29

29. Foucault constructs two different series of absences. On the one hand, the painter in the picture lacks his model, the royal couple standing outside the frame of the picture; the latter are in turn unable to see the picture of themselves that is being painted — they only see the canvas from behind; finally, the spectator is missing the center of the scene, that is, the couple standing as models, to which the gaze of the painter and of the courtesans merely directs us. Still more revealing than the absence of the objects being represented is, on the other hand, that of the subjects doing the representing, which is to say, the triple absence of the painter, the model, and the spectator who, located in front of the picture, takes in perspectives of the two others. The painter, Velázquez, actually enters into the picture, but he is not presented exactly in the act of painting — one sees him during a pause and realizes that he will disappear behind the canvas as soon as he takes up his labors again. The faces of the two models can actually be recognized unclearly in a mirror reflection, but they are not to be observed directly during the act of their portrayal. Finally, the act of the spectator is equally unrepresented — the spectator depicted entering into the picture from the right cannot take over this function. (See Foucault, The Order of Things, pp. 3-16, 307-311.)

Critique and Power: Recasting the Foucault/Habermas Debate, Michael Kelly, editor, MIT Press, 1994, pages 67, 77 [archived PDF].

Let us conclude by noting that one way of specifying the disagreement between scientists and these thinkers is this: science sees itself as “objective,” while the thinkers feel science lacks objectivity because of the human observer. Kant, centuries ago, argued that concepts like causality, space and time are imposed by the human mind on the world. Similarly, Heisenberg, in Physics and Philosophy: The Revolution in Modern Science, said that science does not finally answer questions about an objective reality, but can only answer questions posed by us.

Education and the “Knowability” Problem

There was a wonderful PBS Nature episode in 2006 called “The Queen of Trees” [full video, YouTube] which went into detail about the survival strategy, rhythms, and environmental interactions of one tree in Africa, and all the complexities this involves:

This Nature episode explores the evolution of a fig tree in Africa and its only pollinator, the fig wasp. This film takes us through a journey of intertwining relationships. It shows how the fig (queen) tree is life sustaining for an entire range of species, from plants, to insects, to other animals and even mammals. These other species are in turn life-sustaining to the fig tree itself. It could not survive without the interaction of all these different creatures and the various functions they perform. This is one of the single greatest documented (on video) examples of the wonders of our natural world; the intricacies involved for survival and ensuring the perpetual existence of species.

It shows us how fragile the balance is between survival and extinction.

One can begin to see that the tree/animal/bacteria/season/roots/climate interaction is highly complex and not quite fully understood to this day.

The fact that one tree yields new information every time we probe into it gives you a “meta” (i.e., meta-intelligent) clue that final theories of the cosmos and fully unified theories of physics will be elusive at best and unreachable at worst. If one can hardly pin down the workings of a single tree, does it sound plausible that “everything that is” from the electron to galaxy clusters to multiverses will be captured by an equation? The objective answer has to be: not particularly.

Think of the quest of the great unifiers like the great philosopher-physicist Hermann Weyl (who, like Einstein, died in 1955):

Since the 19th century, some physicists, notably Albert Einstein, have attempted to develop a single theoretical framework that can account for all the fundamental forces of nature–a unified field theory. Classical unified field theories are attempts to create a unified field theory based on classical physics. In particular, unification of gravitation and electromagnetism was actively pursued by several physicists and mathematicians in the years between the two World Wars. This work spurred the purely mathematical development of differential geometry.

Hermann Klaus Hugo Weyl (9 November, 1885 – 8 December, 1955) was a German mathematician, theoretical physicist and philosopher. Although much of his working life was spent in Zürich, Switzerland and then Princeton, New Jersey, he is associated with the University of Göttingen tradition of mathematics, represented by David Hilbert and Hermann Minkowski.

His research has had major significance for theoretical physics as well as purely mathematical disciplines including number theory. He was one of the most influential mathematicians of the twentieth century, and an important member of the Institute for Advanced Study during its early years.

Weyl published technical and some general works on space, time, matter, philosophy, logic, symmetry and the history of mathematics. He was one of the first to conceive of combining general relativity with the laws of electromagnetism. While no mathematician of his generation aspired to the “universalism” of Henri Poincaré or Hilbert, Weyl came as close as anyone.

Weyl is quoted as saying:

“I am bold enough to believe that the whole of physical phenomena may be derived from one single universal world-law of the greatest mathematical simplicity.”

(The Trouble with Physics, Lee Smolin, Houghton Mifflin Co., 2006, page 46)

This reminds one of Stephen Hawking’s credo that he repeated often and without wavering, that the rational human mind would soon understand “the mind of God.”

This Weyl-Hawking-Einstein program of “knowing the mind of God” via a world-equation seems extremely charming and beautiful as a human quest, but potentially monomaniacal à la Captain Ahab in Moby-Dick. The reason that only Ishmael survives the sinking of the ship, the Pequod, is that he has ceased to be monomaniacal: he accepts the variegatedness of the world and thus achieves a more moderate view of human existence and its limits. “The Whiteness of the Whale” chapter in the novel gives you Melville’s sense (from 1851) of the unknowability of some final world-reality or world-theory or world-equation.

World Watching: Project Syndicate—New Commentary

from Project Syndicate:

The EU’s EV Greenwash

by Hans-Werner Sinn

EU emissions regulations that went into force earlier this year are clearly designed to push diesel and other internal-combustion-engine automobiles out of the European market to make way for electric vehicles. But are EVs really as climate-friendly and effective as their promoters claim?

MUNICH – Germany’s automobile industry is its most important industrial sector. But it is in crisis, and not only because it is suffering the effects of a recession brought on by Volkswagen’s own cheating on emissions standards, which sent consumers elsewhere. The sector is also facing the existential threat of exceedingly strict European Union emissions requirements, which are only seemingly grounded in environmental policy.

The EU clearly overstepped the mark with the carbon dioxide regulation [PDF] that went into effect on April 17, 2019. From 2030 onward, European carmakers must have achieved average vehicle emissions of just 59 grams of CO2 per kilometer, which corresponds to fuel consumption of 2.2 liters of diesel equivalent per 100 kilometers (107 miles per gallon). This simply will not be possible.
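The equivalence quoted above (59 g of CO2 per kilometer corresponding to roughly 2.2 liters of diesel per 100 km) can be checked with simple arithmetic, assuming the standard figure of roughly 2.64 kg of CO2 released per liter of diesel burned; the results agree with the article’s numbers to within rounding.

```python
# Back-of-the-envelope check of the EU target quoted above.
# Assumption: burning one litre of diesel releases ~2.64 kg of CO2.
CO2_G_PER_LITRE_DIESEL = 2640.0

target_g_per_km = 59.0
litres_per_100km = target_g_per_km / CO2_G_PER_LITRE_DIESEL * 100
mpg_us = 235.215 / litres_per_100km  # standard L/100km -> US mpg conversion

print(round(litres_per_100km, 1))  # 2.2
print(round(mpg_us))               # 105 (the article rounds to 107 mpg)
```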

As late as 2006, average emissions for new passenger vehicles registered in the EU were around 161 g/km. As cars became smaller and lighter, that figure fell to 118 g/km in 2016. But this average crept back up, owing to an increase in the market share of gasoline engines, which emit more CO2 than diesel engines do. By 2018, the average emissions of newly registered cars had once again climbed to slightly above 120 g/km, which is twice what will be permitted in the long term.

Even the most gifted engineers will not be able to build internal combustion engines (ICEs) that meet the EU’s prescribed standards (unless they force their customers into soapbox cars). But, apparently, that is precisely the point. The EU wants to reduce fleet emissions by forcing a shift to electric vehicles. After all, in its legally binding formula for calculating fleet emissions, it simply assumes that EVs do not emit any CO2 whatsoever.

The implication is that if an auto company’s production is split evenly between EVs and ICE vehicles that conform to the present average, the 59 g/km target will be just within reach. If a company cannot produce EVs and remains at the current average emissions level, it will have to pay a fine of around €6,000 ($6,600) per car, or otherwise merge with a competitor that can build EVs.
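The arithmetic behind “just within reach” is straightforward, assuming, as the regulation’s formula reportedly does, that every EV counts as a zero emitter; the 118 g/km ICE figure used below is the 2016 average quoted earlier.

```python
# Sketch of the fleet-average arithmetic under the EU formula
# described above, which counts every EV as emitting 0 g CO2/km.
def fleet_average(per_car_emissions_g_km):
    return sum(per_car_emissions_g_km) / len(per_car_emissions_g_km)

ice_average = 118.0          # g/km, the 2016 ICE average quoted above
fleet = [0.0, ice_average]   # an even 50/50 split of EVs and ICE cars
print(fleet_average(fleet))  # 59.0 -- exactly the 2030 target
```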

But the EU’s formula is nothing but a huge scam. EVs also emit substantial amounts of CO2, the only difference being that the exhaust is released at a remove — that is, at the power plant. As long as coal- or gas-fired power plants are needed to ensure energy supply during the “dark doldrums” when the wind is not blowing and the sun is not shining, EVs, like ICE vehicles, run partly on hydrocarbons. And even when they are charged with solar- or wind-generated energy, enormous amounts of fossil fuels are used to produce EV batteries in China and elsewhere, offsetting the supposed emissions reduction. As such, the EU’s intervention is not much better than a cut-off device for an emissions control system.

Earlier this year, the physicist Christoph Buchal and I published a research paper [PDF, in German] showing that, in the context of Germany’s energy mix, an EV emits a bit more CO2 than a modern diesel car, even though its battery offers drivers barely more than half the range of a tank of diesel. And shortly thereafter, data published [PDF, in German] by Volkswagen confirmed that its e-Rabbit vehicle emits slightly more CO2 [PDF, in German] than its Rabbit Diesel within the German energy mix. (When based on the overall European energy mix, which includes a huge share of nuclear energy from France, the e-Rabbit fares slightly better than the Rabbit Diesel.)

Adding further evidence, the Austrian think tank Joanneum Research has just published a large-scale study [PDF, in German] commissioned by the Austrian automobile association, ÖAMTC, and its German counterpart, ADAC, that also confirms those findings. According to this study, a mid-sized electric passenger car in Germany must drive 219,000 kilometers before it starts outperforming the corresponding diesel car in terms of CO2 emissions. The problem, of course, is that passenger cars in Europe last for only 180,000 kilometers, on average. Worse, according to Joanneum, EV batteries don’t last long enough to achieve that distance in the first place. Unfortunately, drivers’ anxiety about the cars’ range prompts them to recharge their batteries too often, at every opportunity, and at a high speed, which is bad for durability.

As for EU lawmakers, there are now only two explanations for what is going on: either they didn’t know what they were doing, or they deliberately took Europeans for a ride. Both scenarios suggest that the EU should reverse its interventionist industrial policy, and instead rely on market-based instruments such as a comprehensive emissions trading system.

With Germany’s energy mix, the EU’s regulation on fleet fuel consumption will not do anything to protect the climate. It will, however, destroy jobs, sap growth, and increase the public’s distrust in the EU’s increasingly opaque bureaucracy.

“The View From Nowhere” Problem

The phrase “view from nowhere” comes from the title of a 1986 classic philosophy book by Professor Thomas Nagel. It wrestles with the paradox that the human ability to take a “detached view” (abstract theory, say) is potentially misleading, since the person behind the detachment is a real, embodied person located somewhere.

A theoretician like Richard Feynman (the great physicist) has a nervous system, a brain, a body, and uses his hand to write equations on the blackboard. One is trained to focus on the equations, since that’s the physics. The person, the physicist, is a detail, a distraction, an irrelevance. However, this can’t be true, since the physicist — Richard Feynman in this example — represents a human way of looking at things, at a time and place, no matter how heterodox or offbeat the view.

The human “style” of “being-in-the-world” comes into the equations and to the very idea of equating.

Human beings have the unique ability to view the world in a detached way: We can think about the world in terms that transcend our own experience or interest, and consider the world from a vantage point that is, in Nagel’s words, “nowhere in particular.” At the same time, each of us is a particular person in a particular place, each with his own “personal” view of the world, a view that we can recognize as just one aspect of the whole. How do we reconcile these two standpoints—intellectually, morally, and practically?

To what extent are they irreconcilable and to what extent can they be integrated? Thomas Nagel’s ambitious and lively book tackles this fundamental issue, arguing that our divided nature is the root of a whole range of philosophical problems, touching, as it does, every aspect of human life. He deals with its manifestations in such fields of philosophy as: the mind-body problem, personal identity, knowledge and skepticism, thought and reality, free will, ethics, the relation between moral and other values, the meaning of life, and death.

Excessive objectification has been a malady of recent analytic philosophy, claims Nagel; it has led to implausible forms of reductionism in the philosophy of mind and elsewhere.

The solution is not to inhibit the objectifying impulse, but to insist that it learn to live alongside the internal perspectives that cannot be either discarded or objectified. Reconciliation between the two standpoints, in the end, is not always possible.

Table of Contents for The View from Nowhere book:

I. Introduction
II. Mind
III. Mind and Body
IV. The Objective Self
V. Knowledge
VI. Thought and Reality
VII. Freedom
VIII. Value
IX. Ethics
X. Living Right and Living Well
XI. Birth, Death, and the Meaning of Life