Monomania and the West

There have been all kinds of “voices” in the history of Western civilization. Perhaps the loudest voice is that of the monomaniacs, who always claim that behind the appearance of the many lies the one. If we look at the West at its roots, the intersection of Athens and Jerusalem, we see the origins of this monomania. Plato’s realm of ideas was supposed to explain everything encountered in our daily lives. His main student and rival, Aristotle, had his own competing explanation, grounded in biology rather than mathematics.

These monomanias find their modern counterpart in ideologies. In communism, the key to everything is class and the resulting class struggle. Nazism revolves around race and racial conflict.

In our own era, the era of scientism, we have the idea of God replaced by Stephen Hawking’s “mind of God,” Leon Lederman’s The God Particle and Michio Kaku’s The God Equation. In the 2009 film Angels & Demons, a senior Vatican official, played by Ewan McGregor, is absolutely outraged by the blasphemous phrase “the God particle.”

Currently, the monomaniacal impetus continues at full force. For example, Professor Seth Lloyd of MIT tells us that reality is cosmos and not chaos because the universe as a whole is a computer. His MIT colleague Max Tegmark argues in his books that the world is not merely described by mathematics but rather is mathematics. Perhaps the climax of this kind of thinking is given to us by the essay “Everything Is Computation” by Joscha Bach:

These days we see a tremendous number of significant scientific news stories, and it’s hard to say which has the highest significance. Climate models indicate that we are past crucial tipping points and irrevocably headed for a new, difficult age for our civilization. Mark van Raamsdonk expands on the work of Brian Swingle and Juan Maldacena and demonstrates how we can abolish the idea of spacetime in favor of a discrete tensor network, thus opening the way for a unified theory of physics. Bruce Conklin, George Church, and others have given us CRISPR/Cas9, a technology that holds promise for simple and ubiquitous gene editing. “Deep learning” starts to tell us how hierarchies of interconnected feature detectors can autonomously form a model of the world, learn to solve problems, and recognize speech, images, and video.

It is perhaps equally important to notice where we lack progress: Sociology fails to teach us how societies work; philosophy seems to have become infertile; the economic sciences seem ill-equipped to inform our economic and fiscal policies; psychology does not encompass the logic of our psyche; and neuroscience tells us where things happen in the brain but largely not what they are.

In my view, the 20th century’s most important addition to understanding the world is not positivist science, computer technology, spaceflight, or the foundational theories of physics.

It is the notion of computation. Computation, at its core, and as informally described as possible, is simple: Every observation yields a set of discernible differences.

These we call information. If the observation corresponds to a system that can change its state, we can describe those state changes. If we identify regularity in those state changes, we are looking at a computational system. If the regularity is completely described, we call this system an algorithm. Once a system can perform conditional state transitions and revisit earlier states, it becomes almost impossible to stop it from performing arbitrary computation. In the infinite case—that is, if we allow it to make an unbounded number of state transitions and use unbounded storage for the states—it becomes a Turing machine, or a Lambda calculus, or a Post machine, or one of the many other mutually equivalent formalisms that capture universal computation.

Computational terms rephrase the idea of “causality,” something that philosophers have struggled with for centuries. Causality is the transition from one state in a computational system to the next. They also replace the concept of “mechanism” in mechanistic, or naturalistic, philosophy. Computationalism is the new mechanism, and unlike its predecessor, it is not fraught with misleading intuitions of moving parts.

Computation is different from mathematics. Mathematics turns out to be the domain of formal languages and is mostly undecidable, which is just another word for saying “uncomputable” (since decision making and proving are alternative words for computation, too). All our explorations into mathematics are computational ones, though. To compute means to actually do all the work, to move from one state to the next.

Computation changes our idea of knowledge: Instead of justified true belief, knowledge describes a local minimum in capturing regularities between observables. Knowledge is almost never static but progresses on a gradient through a state space of possible worldviews. We will no longer aspire to teach our children the truth, because, like us, they will never stop changing their minds. We will teach them how to productively change their minds, how to explore the never-ending land of insight.

A growing number of physicists understands that the universe is not mathematical but computational, and physics is in the business of finding an algorithm that can reproduce our observations. The switch from uncomputable mathematical notions (such as continuous space) makes progress possible. Climate science, molecular genetics, and AI are computational sciences. Sociology, psychology, and neuroscience are not: They still seem confused by the apparent dichotomy between mechanism (rigid moving parts) and the objects of their study. They are looking for social, behavioral, chemical, neural regularities, where they should be looking for computational ones.

Everything is computation.

Know This: Today’s Most Interesting and Important Scientific Ideas, Discoveries, and Developments, John Brockman (editor), Harper Perennial, 2017, pages 228-230.
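To make Bach’s quoted description concrete, here is a minimal sketch in Python. It is our own illustration, not Bach’s; the rule table and names are hypothetical. It shows what he means: a set of states plus a completely regular transition rule is already a computational system, and with an unbounded tape and unbounded steps it is a Turing machine. This toy machine increments a binary number.

    # A minimal Turing machine: a tape of symbols, a head, a state, and a
    # transition table mapping (state, symbol read) to
    # (next state, symbol to write, head movement).
    RULES = {
        ("carry", "1"): ("carry", "0", -1),  # 1 plus a carry is 0; carry onward
        ("carry", "0"): ("done",  "1",  0),  # 0 plus a carry is 1; finished
        ("carry", " "): ("done",  "1",  0),  # ran off the left edge: new digit
    }

    def increment(bits: str) -> str:
        tape = list(" " + bits)   # leading blank in case a new digit is needed
        head = len(tape) - 1      # start at the least significant bit
        state = "carry"
        while state != "done":    # conditional transitions, states revisited
            state, tape[head], move = RULES[(state, tape[head])]
            head += move
        return "".join(tape).strip()

    print(increment("1011"))  # prints 1100
    print(increment("111"))   # prints 1000

Everything here is a state transition: the “causality” of the quoted passage is nothing more than the lookup in the rule table.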

Friedrich Nietzsche rebelled most profoundly against this type of thinking. If scientism represents the modern, then Nietzsche was the prophet of postmodernism. Nietzsche’s famous phrase “God is dead” is not about a creator or divinity, but rather about finality itself. There is no final explanation.

Kierkegaard and Existence

There are various striking intuitions about human existence. For example, Nabokov begins his brilliant memoir Speak, Memory with a deep reflection in which human existence is compared to a baby in a cradle, rocking, completely vulnerable and uncertain. All of this is bracketed by two episodes of infinite darkness: the first took place before you were born, and the second takes place after you are gone. Your existence is a temporary flame, like that of a lit match.

A meta-intelligent comment on this would be that the profound ingenuity of the nineteenth-century mathematicians who analyzed the size and nature of infinity (e.g., Richard Dedekind and Georg Cantor) cannot, in the last analysis, wrestle human existence down into mathematics.

The modern progenitor of this kind of existence-watching is the Danish genius Søren Kierkegaard. In one of his masterpieces, Concluding Unscientific Postscript to Philosophical Fragments (1846), he makes the claim that knowledge, theory, speculative thinking and infinity-watching à la Dedekind and Cantor cannot possibly explain human existence, because existence subsumes all of them.

In 2025, the Kierkegaardian sense of things would tell you that neuroscience can never really explain how existence is sensed by a living person.

Kierkegaard writes, “in my view the misfortune of the age was precisely that it had too much knowledge, had forgotten what existence means, and what inwardness signifies.” He continues, “for a knowledge-seeker, when he has finished studying China he can take up Persia; when he has studied French he can begin Italian; and then go on to astronomy, the veterinary sciences, and so forth, and always be sure of a reputation as a tremendous fellow.”

By way of contrast, “inwardness in love does not consist in consummating seven marriages with Danish maidens, then cutting loose on the French, the Italian, and so forth, but consists in loving one and the same woman, and yet being constantly renewed in the same love, making it always new in the luxuriant flowering of the mood.” (Concluding Unscientific Postscript to Philosophical Fragments, page 232.)

Kierkegaard’s kind of existence-watching can be understood as a turning upside down of Descartes’s famous phrase, “I think, therefore I am.” For Kierkegaard: “I am, therefore I think.” Notice that “I think” is an epistemological statement, or knowledge-watching; “I am” is an ontological statement.

This existentialist tradition of putting ontology before epistemology finds its culmination in Heidegger. As he says in his opus, Being and Time (1927), “human being is ultimately the being for whom being itself is an issue.”

Education and the Question of Intuition

An intuition pump is a thought experiment structured to allow the thinker to use his or her intuition to develop an answer to a problem. The phrase was popularized in the 1991 book Consciousness Explained by the Tufts philosophy professor Daniel Dennett.

We argue in this education-completing book that our intuitions are puzzling in a way that “intuition pump” talk does not cope with at all.

Let’s go immediately to the example of simple versus compound interest in basic finance.

You borrow $100.00 for a year at an annual interest rate of 100%, without compounding, hence “simple” interest. A year passes and you owe the lender the initial $100 plus one hundred percent of that amount (i.e., another hundred). After a year you owe $200.00, and every year thereafter, if the lender is willing to extend the loan, you owe another hundred to “rent” the initial hundred.

This is written as A + iA, where A is the initial amount (i.e., $100.00) and i is the interest rate. It factors as A(1 + i). With simple interest you owe A(1 + ni) after n years; with annual compounding you owe A(1 + i)^n. For n = 1 the two coincide: A(1 + i) = 100 × 2 (i.e., the $200 we just saw). There’s nothing tricky in this.

You are then introduced to compound interest (i.e., interest that itself accumulates interest). Compounding every 6 months (semi-annually) or every month involves dividing the rate i by the number m of compounding periods per year (2 half-years, 12 months, or 365 days) and multiplying the exponent by m, so that you owe A(1 + i/m)^(mn). You could routinely go on to days and hours and minutes and seconds and nanoseconds, and you could calculate the interest payments compounding in each case.

But here is where your intuition falters and fails: suppose you compound continuously?

You get the number e as the growth factor, where e ≈ 2.71828.

Simple algebra shows that at 100% interest, a $100 loan becomes $100 multiplied by e^1 (one hundred percent = 1), or just e (i.e., you owe $100e).
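The step that the “simple algebra” glosses over is a standard limit from calculus, written out here for reference (m is the number of compounding periods per year):

    \lim_{m \to \infty} A\left(1 + \frac{i}{m}\right)^{mn} = A\,e^{in}

With A = $100, i = 1 (100%), and n = 1, the debt in the continuous limit is 100e.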

This gives you $271.83.

So what has happened?

At one hundred percent simple interest you owe $200.00 to the lender. Continuous compounding means you owe $271.83. Instead of owing $100 in interest, you owe $171.83. Your interest bill has gone up by $71.83, or about 72 percent.
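A minimal sketch in Python (our own illustration; the function name and layout are ours) reproduces these numbers and shows discrete compounding creeping up toward the continuous case:

    import math

    def amount_owed(principal, rate, years, periods_per_year=None):
        # periods_per_year=None means continuous compounding: A * e^(i*n).
        if periods_per_year is None:
            return principal * math.exp(rate * years)
        m = periods_per_year
        # Discrete compounding m times per year: A * (1 + i/m)^(m*n).
        return principal * (1 + rate / m) ** (m * years)

    A, i, n = 100.0, 1.0, 1   # $100 borrowed at 100% for one year

    for m in (1, 2, 12, 365):
        print(m, round(amount_owed(A, i, n, m), 2))
    print("continuous", round(amount_owed(A, i, n), 2))

Running it shows the ladder the text describes: $200.00 compounding annually, $225.00 semi-annually, about $261.30 monthly, about $271.46 daily, and $271.83 in the continuous limit.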

Does that seem intuitive? Probably not.

How could one ever apply an “intuition pump” to this arithmetic? We get to the 72% increase in interest by using e, which has nothing very intuitive about it. Thus it’s not clear that “intuition pumps” will work here.

You use compound-interest arithmetic to get a number you would never have been able to estimate based on standard intuition, since, like 22/7 or 3.14 for π (pi), there’s nothing to “recommend” 2.71828 in and of itself. This means that the link between computational arithmetic understanding and your “gut” or “sixth sense” is feeble at best.

By exploring this way of thinking you could deepen your “meta-intelligence” (i.e., perspective-enhancement). The British economist Pigou (Keynes’s teacher) said that people have a “defective telescopic faculty” (i.e., a poor or even erroneous sense of time-distance).

How one might strengthen one’s sense of time-distance or “far horizons” is not clear.

Education and Intuition

The 2014 PBS TV series How We Got to Now is a good miniseries on the improvements in glass-making, sewage, water management, etc. that serve as the material and organizational basis for the modern world.

At one point in the series, the host Steven Johnson, a kind of historian of innovation, reveals his idea of how innovation occurs: he focuses on mavericks whose breakthroughs come not in a sudden “Eureka!” moment but through what Johnson calls “a slow hunch.” In other words, the innovators struggle along with a partially understood sense of possibility, very inchoate in the beginning, that comes into better focus with the passage of years and decades, via missteps and boondoggles.

The science writer Arthur Koestler shines a different “flashlight” on this problem of intuitive creativity and how it bears fruit:

Arthur Koestler, CBE (5 September 1905 – 1 March 1983) was a Hungarian-British author and journalist, born in Budapest. His masterful book The Sleepwalkers is a kind of defense of the way people in the past benefited from a productive sleepwalking on their journeys to scientific advance.

The Sleepwalkers: A History of Man’s Changing Vision of the Universe is a 1959 book by Arthur Koestler. It traces the history of Western cosmology from ancient Mesopotamia to Isaac Newton. Koestler suggests that discoveries in science arise through a process akin to sleepwalking: not that they arise by chance, but that scientists are neither fully aware of what guides their research nor fully aware of the implications of what they discover.

A central theme of the book is the changing relationship between faith and reason. Koestler explores how these seemingly contradictory threads existed harmoniously in many of the greatest intellectuals of the West. He illustrates that while the two are estranged today, in the past the most ground-breaking thinkers were often very spiritual.

Another recurrent theme of this book is the breaking of paradigms in order to create new ones. People—scientists included—hold on to cherished old beliefs with such love and attachment that they refuse to see the wrong in their ideas and the truth in the ideas that are to replace them.

The conclusion he puts forward at the end of the book is that modern science is trying too hard to be rational. Scientists have been at their best when they allowed themselves to behave as “sleepwalkers,” instead of trying too earnestly to ratiocinate.

Add to this overview the “creativity” discussion on The Charlie Rose Show in The Brain Series (2010), where Professor Eric Kandel, the Nobel Prize-winning physiologist, states forthrightly that brain research has no idea about creativity and that the prospect of explaining creativity in terms of the brain is very distant indeed.

The arrival of a “slow hunch” (Steven Johnson) or of “productive sleepwalking,” as opposed to unproductive kinds of woolgathering (Arthur Koestler), is a matter of mind, personality and spirit, although it does have brain-chemical “correlates” that cannot by themselves explain it mechanistically.

Mysteries all have physical and chemical “correlates” but cannot be simplistically reduced to biochemistry or genomics.

Education and Wittgenstein’s “Language Games”

It is instructive for a student to get a grip on the whole question of “language games” à la Wittgenstein, who says that these “games” (i.e., ambiguities) are central to thinking in general and thinking about philosophy in particular.

Let’s make up our own example, stepping back to look at the meaning of the preposition “in.”

“The comb is in my back pocket” has nothing to do with the “in” of “he’s in a good mood” or “he’s in a hurry” or “he’s in a jam or pickle” or “he’s in trouble.” Furthermore, in the language of modern deterministic neuroscience, a good mood is a footnote to brain and blood chemicals, which means that a good mood is in you, via chemicals, and not you in it.

Does the word “jam” here mean difficulty, or somehow the condiment called jam? You don’t know and never can without more information (i.e., meaningful context).

Imagine we take a time machine and are standing in front of the home of Charles Dickens in London in his own time, say the late 1830s. They say he’s working on a new novel called Oliver Twist.

Someone says: a novel by Dickens is a kind of “fictional universe.” Shall we say that because Dickens is in his home (at home) in London (though “in London” is itself confusing, since a city is not to a person what a pocket is to a comb or a wallet), his fictional universe is “in” the universe, which might be a multiverse according to current cosmological speculation? That’s not what we mean. The fictional universe of Dickens is a shared cultural abstraction involving his stories, his characters, the people absorbing his tales, his mind and our minds, books and discussions. A fictional universe is as “weird” as the physical one. The preposition “in” does not begin to capture what’s going on, which is socio-cultural and not “physicalistic.”

We begin to intuit that everyday language, which we use and handle as the most obvious thing in the world, is completely confusing once we look at it more closely.

Einstein’s friend at Princeton, Kurt Gödel, looked into language as a logical phenomenon and concluded that it is entirely puzzling that two people can actually speak to and understand one another, given the ambiguities and open-endedness of language.

A language-game (German: Sprachspiel) is a philosophical concept developed by Ludwig Wittgenstein, referring to simple examples of language use and the actions into which the language is woven. Wittgenstein argued that a word or even a sentence has meaning only as a result of the “rule” of the “game” being played. Depending on the context, for example, the utterance “Water!” could be an order, the answer to a question, or some other form of communication.

In his work Philosophical Investigations (1953), Ludwig Wittgenstein regularly referred to the concept of language-games. Wittgenstein rejected the idea that language is somehow separate from and corresponding to reality, and he argued that concepts do not need clarity for meaning. He used the term “language-game” to designate forms of language simpler than the entirety of a language itself, “consisting of language and the actions into which it is woven” and connected by family resemblance (German: Familienähnlichkeit).

The concept was intended “to bring into prominence the fact that the speaking of language is part of an activity, or a form of life,” which gives language its meaning.

Wittgenstein develops this discussion of games into the key notion of a “language-game.”

Gödel saw that language has deep built-in ambiguities that are as puzzling as those of mathematics and logic:

Gödel (who died in 1978) gave us the incompleteness theorems, two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modeling basic arithmetic. Published by Gödel in 1931, these results are important both in mathematical logic and in the philosophy of mathematics.

Take any simple sentence: say, “men now count.”

Without a human context of meaning, how would you ever decide whether this means count in the sense of numeracy (one apple, two apples, etc.) or something from another domain entirely (i.e., males got the vote in a certain country and now “count” in that sense)?

When you say “count me in” or “count me out,” how does that make any sense without idiomatic language exposure?

If you look at all the meanings of “count” in the dictionary, and at how many set phrases or idioms involve the word “count,” you will immediately get the sense that without a human “life-world” (to use a Husserl phrase), you could never be sure of any message or sentence at all involving such a fecund word.

One task of real education is to put these difficulties on the student’s plate and not avoid them.

Linguistics as such is not what’s at issue, but rather a “meta-intelligent” sense of language, written or spoken, as highly mysterious, with or without the research into vocal cords, language genes (FOXP2, say), auditory science, the study of palates, glottal stops and fricatives, grammars and syntax.

Seeing this promotes deep education (i.e., where understanding touches holism in an enchanting way).

Neuroscience by Itself Is Limited

Senators John McCain of Arizona and Edward Kennedy of Massachusetts died in recent years of brain tumors (glioblastomas). It is perfectly reasonable to wonder whether neuroscience, neuropathology and brain science might one day be able to vaporize such tumors without damaging the “host” brain at all. Who could possibly be against such progress? After all, if you had an impacted wisdom tooth and could choose between seeing an oral surgeon at a major hospital and going to a dentist at the time of Plato, you would choose the oral surgeon.

These truths obscure a deeper problem in all “reductivist” sciences, namely the relationship among the brain, the mind and the person. This was anticipated by Gabriel Marcel (who died in 1973) when he wrote in his journal that he had puzzled all his life over the conundrum that “I both have a body while I am a body…having and being are twined around each other.”

The outstanding French philosopher Paul Ricœur (who died in 2005) gives us a useful hint:

“To the extent that the body as my own constitutes one of the components of mineness, the most radical confrontation must place face-to-face two perspectives on the body—the body as mine, and the body as one body among others.  The reductionist thesis in this sense marks the reduction of one’s own body to the body as impersonal body.

“The brain indeed differs from many other parts of the body, and from the body as a whole in terms of an integral experience, inasmuch as it is stripped of any phenomenological status and thus of the trait of belonging to me, of being my possession.  I have the experience of my relation to my members as organs of movement (my hands), of perception (my eyes), of emotion (the heart), or of expression (my voice).  I have no such experience of my brain. In truth, the expression ‘my brain’ has no meaning, at least not directly: absolutely speaking, there is a brain in my skull, but I do not feel it. It is only through the global detour by way of my body, inasmuch as my body is also a body and as the brain is contained in this body, that I can say ‘my brain.’

“The unsettling nature of this expression is reinforced by the fact that the brain does not fall under the category of objects perceived at a distance from one’s own body. Its proximity in my head gives it the strange character of non-experienced interiority.  Mental phenomena pose a comparable problem.”

(Paul Ricœur, Oneself as Another, University of Chicago Press, 1994, page 132)

In other words, the removal of brain tumors such as glioblastomas, or the alleviation of migraines in headache clinics, is one level of activity and is perfectly valid and neuro-scientific. On the other hand, the relation between brain, mind, body and self is a complete mystery, as sensed by Gabriel Marcel and Ricœur. It is not mechanistic, and we lack the language to capture such resonances.

Money, funding, prestige and their relationship to science keep obscuring the deeper truths. This is also why excellent TV shows on PBS, such as the recent The Brain Series with Charlie Rose, led by the marvelous Professor Eric Kandel (the Columbia University Nobelist), come across as too narrow and curiously unsatisfying. At a certain point, “mechanistic” descriptions of phenomena like creativity are not convincing.

The education we visualize and promote here would happily straddle neuroscience and those levels of understanding that are beyond it.