The secrets of free will, consciousness, and the self will not be unlocked just through analyzing the brain’s most primordial physical components, argues Douglas Hofstadter. We need to better integrate high-level mental properties into our explanations of reality.
In his 2007 book I Am a Strange Loop, the cognitive scientist Douglas Hofstadter tackles the thorny subject of self: what is the ‘I’? In a cosmos bound by physical law, how does the ‘I’ come to exist? From the seething micro world of quanta, how does a sense of self, a sense of perspective and thought and feeling, possibly emerge?
Of course, it’s impossible to approach such questions without dwelling on the nature of the brain — this “teetering bulb of dread and dream”, as Hofstadter describes it (quoting the poet Russell Edson).
The brain is the philosophical hotspot for a number of difficult problems: free will, consciousness, and the self among them. And, over the last few decades, advances in neuroscience have granted us unprecedented access into the brain’s internal workings.
But while neuroscience has made fantastic progress in mapping the brain and investigating the behavior of neurons, Hofstadter says, brain researchers must be careful not to neglect the more abstract side of neurological study.
There is a widespread assumption, Hofstadter points out,
that the level of the most primordial physical components of a brain must also be the level at which the brain’s most complex and elusive mental properties reside.
But this kind of “neuroreductionism” is not helpful: brain research should not be limited to the study of things like neurotransmitters, synapses, neurons, and the visual cortex, Hofstadter implores.
Mental properties like ideas, concepts, and analogies are “structures” of the brain just as much as the left hemisphere is; but they won’t be found by zooming in on the brain’s tiny physical constituents, Hofstadter claims, for they operate on a level of abstraction above such constituents. He writes:
Trying to localize a concept or a sensation or a memory (etc.) down to a single neuron makes no sense at all. Even localization to a higher level of structure, such as a column in the cerebral cortex… makes no sense when it comes to aspects of thinking like analogy-making or the spontaneous bubbling-up of episodes from long ago.
Instead, if we are to make progress in analyzing such “elusive mental phenomena” as perception, concepts, thinking, consciousness, “I”, and so on, Hofstadter observes, then the brain must be viewed as a multilevel system. After all, he writes,
the brain is a thinking machine, and if we’re interested in understanding what thinking is, we don’t want to focus on the trees (or their leaves!) at the expense of the forest. The big picture will become clear only when we focus on the brain’s large-scale architecture, rather than doing ever more fine-grained analysis of its building blocks.
And by “large-scale architecture”, Hofstadter does not mean just our physical architecture; he means our large-scale mental constructs, like concepts, ideas, and a sense of self.
Mapping the physical constituents of the brain is hard enough, but modeling the workings of the brain in a way that includes mental properties like thoughts, concepts, and ideas — as well as how such properties could possibly emerge from or interact with the brain’s physical constituents — seems an almost impossible challenge.
Hofstadter spends much of I Am a Strange Loop attempting to do just that, so I can’t hope to do him justice in this short article. One immediate concern we might raise, however, is this: how could abstract mental properties possibly exert causal influence over physical matter?
In response to such concerns, Hofstadter offers two analogies that might help us reframe our thinking to make it clear that mental properties can and do play a causal role in the physical world, and can thus be integrated into our working models of the brain.
Before we consider those analogies, however, to set the scene and establish just how complex a multilevel brain model must be, Hofstadter draws on the neurologist Roger Sperry’s essay Mind, Brain, and Humanist Values.
In a substantial yet illuminating passage quoted by Hofstadter, Sperry muses on the manifold layers of causal forces in the brain as follows:
To put it very simply, it comes down to the issue of who pushes whom around in the population of causal forces that occupy the cranium. It is a matter, in other words, of straightening out the peck-order hierarchy among intracranial agents. There exists within the cranium a whole world of diverse causal forces; what is more, there are forces within forces within forces, as in no other cubic half-foot of universe that we know…
To make a long story short, if one keeps climbing upward in the chain of command within the brain, one finds at the very top those over-all organizational forces and dynamic properties of the large patterns of cerebral excitation that are correlated with mental states or psychic activity… Near the apex of this command system in the brain... we find ideas.
Man over the chimpanzee has ideas and ideals. In the brain model proposed here, the causal potency of an idea, or an ideal, becomes just as real as that of a molecule, a cell, or a nerve impulse. Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and, thanks to global communication, in far distant, foreign brains. And they also interact with the external surroundings to produce in toto a burstwise advance in evolution that is far beyond anything to hit the evolutionary scene yet, including the emergence of the living cell.
There is much to unpack in Sperry’s evocative passage, but the important question Hofstadter wants us to focus on is this: who is shoving whom around inside the cranium? Do flashes of neuronal activity push our thoughts and ideas around? Or do our thoughts and ideas cause flashes of neuronal activity?
As Hofstadter lyrically puts it:
Do dreads and dreams, hopes and griefs, ideas and beliefs, interests and doubts, infatuations and envies, memories and ambitions, bouts of nostalgia and floods of empathy, flashes of guilt and sparks of genius, play any role in the world of physical objects? Do such pure abstractions have causal powers? Can they shove massive things around, or are they just impotent fictions? Can a blurry, intangible “I” dictate to concrete physical objects such as electrons or muscles what to do?
While Hofstadter seeks to open our minds with such provocative questions, ultimately he wants to show us why the answer to all of them should be an emphatic “yes”: abstractions like thoughts, concepts, and ideas do have causal powers.
Indeed, mental properties are not detached from the physical, Hofstadter thinks: they govern the physical.
Thoughts, concepts, and ideas may occur at a level of abstraction ‘above’ the brain’s physical components (for Hofstadter, they represent the vast abstract patterns emerging from such components), but that does not make them empty add-ons or epiphenomena; rather, they have real causal power in the brain’s physical system.
To help us get clear on why this is so, Hofstadter presents two simple but powerful analogies. Let’s briefly look at each in turn.
To start, Hofstadter asks us to imagine a chain of falling dominoes, but with a few enhancements that transform the chain into a basic kind of mechanical computer (for instance: dominoes are split into various groups, each domino is spring-loaded and thus has the ability to right itself after falling, we can send signals that tell dominoes when to fall and when to right themselves, and so on).
With our enhanced domino “chanium”, we could perform some basic computations, such as working out whether, say, 641 is a prime number.
If 641 is identified as prime, then a particular stretch of the dominoes in the “results” section of the chain will remain standing. If, however, 641 isn’t a prime number, these “results” dominoes will fall.
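To make the analogy concrete, here is a rough sketch (my own illustration, not Hofstadter’s construction) of the abstract computation the chanium embodies — a simple trial-division primality check, with the “results” dominoes standing for its outcome:

```python
def is_prime(n: int) -> bool:
    """Trial division: the abstract computation the domino chain embodies."""
    if n < 2:
        return False
    divisor = 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 1
    return True

# The "results" dominoes remain standing exactly when the input is prime.
results_dominoes_standing = is_prime(641)
print(results_dominoes_standing)  # 641 is prime, so they stay up
```

The point carries over directly: the most illuminating description of this program’s output is not a trace of individual loop iterations (individual dominoes falling), but the abstract fact that 641 is prime.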
Now, suppose someone unaware of these background computations is observing the domino chain. After a while, they point to a particular domino in the results section of the chain and ask, “Why does this domino never fall?”
There are two very different types of answer we might give here, Hofstadter notes.
The first type of answer would be to refer to the dominoes themselves and say, well, “because that domino’s predecessor never falls!”
Of course, while correct, this answer doesn’t take us very far — it just passes the buck to the next domino in the chain.
The second (and much better) type of answer would be, “Because 641 is a prime number.”
While this is a much deeper explanation as to why the dominoes are behaving as they are, there is something curious about it, Hofstadter observes: it doesn’t actually refer to the physical dominoes at all:
Not only has the focus moved upwards to collective properties of the chanium, but those properties somehow transcend the physical and have to do with pure abstractions, such as primality… the second answer bypasses all the physics of gravity and domino chains and makes reference only to concepts that belong to a completely different domain of discourse.
The point of this analogy is to show that 641’s primality is the best explanation, Hofstadter writes, “perhaps even the only explanation, for why certain dominoes did fall and certain other ones did not fall.”
So, although 641’s primality is not itself a physical force, although it operates at a level of abstraction above the physical components involved (as indeed do thoughts, concepts, and ideas), it nevertheless can legitimately be described as playing a causal role in a physical system.
Why? As Hofstadter puts it, “because the most efficient and most insight-affording explanation of the chanium’s behavior depends crucially on that notion”:
In a word, 641 is the prime mover. So I ask: Who shoves whom around inside the domino chanium?
And who shoves whom around inside the human cranium?
Hofstadter’s second analogy involves cars in a gridlocked traffic jam. What’s “caused” your car to be immobile is, in some limited sense, that the cars immediately around you are immobile; but to efficiently describe the situation, Hofstadter notes, global abstractions like population density and rush hour will be of much more use than an analysis of individual cars:
No amount of expertise in car mechanics will help you to grasp the essence of such a situation; what is needed is knowledge of the abstract forces that can act on freeways and traffic. Cars are just pawns in the bigger game and, aside from the fact that they can’t pass through each other and emerge intact post-crossing (as do ripples and other waves), their physical nature plays no significant role in traffic jams.
Indeed, what’s “caused” your car to be immobile are high-level abstractions like “traffic”, “rush hour”, and “population density” — though they make no reference to the individual cars around you, these are the much deeper explanations for why your car is not moving.
Likewise, we can unpick the nature of the brain’s physical components all we like, but if we pay attention only to such “lower levels”, Hofstadter warns,
then you are doomed to taking the long way around, to understanding things only locally and without insight.
The real explanatory power, the real comprehension, will come from an understanding and modeling of the brain’s emergent high-level abstract patterns.
Hofstadter’s goal with these analogies is to convey that the brain — understood as a complex, multilevel causal system — requires multiple levels of explanation, but that the “higher” we go with our levels of abstraction (i.e., the more we focus on emergent patterns), the more efficient and insightful our explanations become. He writes:
Deep understanding of causality sometimes requires the understanding of very large patterns and their abstract relationships and interactions, not just the understanding of microscopic objects interacting in microscopic time intervals.
Indeed, Hofstadter wants us to realize that causal force and explanatory power travel downwards through our levels of description and abstraction.
In a combustion engine, for example, we say a gas’s temperature (an emergent high-level abstraction) “causes” the piston to move, even though the lower, fine-grained picture involves billions of individual molecules banging into each other.
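We can make this piston example a little more concrete with a minimal sketch of my own (the specific quantities are assumed for illustration): via the ideal gas law, the high-level quantity “temperature” fixes the force on the piston without any reference to individual molecules.

```python
R = 8.314      # gas constant, J/(mol*K)
n = 0.05       # moles of gas (assumed value)
V = 0.0005     # cylinder volume in m^3 (assumed value)
AREA = 0.004   # piston face area in m^2 (assumed value)

def piston_force(temperature_k: float) -> float:
    """Force on the piston in newtons, from P = nRT/V and F = P * area."""
    pressure = n * R * temperature_k / V
    return pressure * AREA

# Heating the gas "causes" a greater force on the piston -- a causal story
# told entirely at the level of macroscopic abstractions.
print(piston_force(300.0) < piston_force(900.0))  # True
```

The billions of molecular collisions are what is physically happening, but the efficient, insight-affording explanation lives at the level of temperature and pressure.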
Similarly, 641’s primality “causes” the dominoes to fall or remain standing, rush hour “causes” your car to be immobile, and a thought or belief “causes” us to behave in a certain way — even though the lower, fine-grained picture involves myriad physical components.
It’s important to emphasize that, while Hofstadter attributes causal power to patterns and abstractions, there is no “extra-physical” force here. At the lower levels — the individual dominoes, cars, molecules and indeed the physical components of the brain — the laws of physics take care of everything on their own.
But it is the global arrangement of these lower levels that actually determines what happens, and it is through explaining and modeling such arrangements with high-level abstractions (like “primality”, “traffic”, “temperature”, and indeed “thoughts” and “ideas”) that we can really attain insight into what’s going on.
“Brain research”, then, if it is to truly inform us about philosophical problems like free will, consciousness, and the self, Hofstadter argues, cannot just focus on low-level constituents of the brain like neurotransmitters, synapses, and neurons.
Rather, to get closer to comprehension of such difficult philosophical problems, we need to much better integrate high-level mental properties, patterns, and abstractions into our explanations of reality.
For a difficult problem like free will, for example, an explanation at the level of neurons is almost irrelevant, Hofstadter suggests. This would be like trying to explain a domino chain by focusing on individual dominoes, a gridlocked traffic jam by taking apart each individual car, or a combustion engine by looking at individual molecules.
The pertinent question is not, “are our neurons ‘free’?” (this is like asking, “are dominoes bound by the laws of physics?”); no, the real question is, “are our thoughts, ideas, and desires ‘free’?” — this is the level of description, the level of abstraction, the level of causality, relevant to macro human behavior.
Ultimately, then, Hofstadter thinks research areas like neuroscience won’t help us unlock the philosophical mysteries of the mind until we have a working theory for how to incorporate emergent, high-level abstractions — ideas, beliefs, and concepts — into our models of the brain. I Am a Strange Loop is his challenging, impressive, provocative attempt at doing just that.