Language-using humans were the first cyborgs — a new species born of grafting technology onto what evolution crafted using just time and flesh.
(This post is more speculative than most, which is why it’s posted on a Tuesday.)
Logos Created Man
Andrew Cutler posits that consciousness arose when language grew complex enough to contain the pronoun “I”, allowing a mind to include itself within cascading thought-loops. There is significantly more to the theory, and I encourage everyone to read his thesis (or listen to it, or hear me discuss it with my cohost, or listen to both of us talking to Andrew himself about it). But that is the core.
The mind our brain spins up every morning is one that runs on language. What we think of as “ourselves” — the entity that thinks, plans, hopes, decides, remembers — is a construct of symbolic thoughts, and those thoughts are made out of words. Most of us can’t remember our existence before we had a self, an “I am,” but at least one person does.
"I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know […] that I lived or acted or desired. […] I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. […] I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. […] My inner life, then, was a blank without past, present, or future
The thing that makes something a person is self-referential language. The “soul” of a human is software. So what was the brain doing before it was co-opted?
The Emotive Brain
It is hard to differentiate a single-celled creature that follows nutrient gradients from a complicated thermostat that follows temperature gradients. How about a simple creature with a few hundred neurons? That’s not significantly different from a device with very complicated sensors and logic gates that dictate responses based on sense data. The first brains were there basically to shunt sense inputs around and coordinate the responding movements. Following this chain sometimes leads people to ask if animals really have feelings. What if it’s all just a very complicated way of chaining sense input to movement?
It turns out that feelings are the cognitive engine of the pre-verbal brain. When certain movements in response to certain stimuli resulted in greater inclusive fitness, repeating those movements under those stimuli became self-motivating. In our words, it “felt good.” Likewise the reverse: when certain movements in response to certain stimuli resulted in lesser inclusive fitness, repeating those movements under those stimuli became aversive. They “felt bad.”
Did they really feel bad? That word “really” isn’t doing any work. A motivation to repeat something is what “feels good” looks like from the outside, and a motivation to avoid something is what “feels bad” looks like from the outside.
Scale this up from a few dozen neurons to many billions of neurons over hundreds of millions of years. The nuances and overlaps and contradictions among uncountable “feels good” and “feels bad” movements create a vast edifice of instincts and learned behaviors and urges and desires. Every one is a type of wanting, a tiny motivational gear in a tiny motivational engine within a giant motivation-based fleet that is an animal brain.
Do animals have feelings? The question is backwards. Of course animals have feelings, because “having feelings” is the primary thing brains do. It’s what they are there for.
That is how an emotive brain processes information. That is how it stores data. That is primarily how it plans future actions (insomuch as it does so).1 Our human language-using minds run on this base animal brain. Our “brain” made of words runs on a “brain” made of emotions. And just as our thoughts are made out of words, emotive-brain “thoughts” are made out of feelings.
But Who Needs Two Brains?
The difficulty in having two brains is that what we consider to be our real personhood, our “soul,” isn’t native to the body we’re in. Words are body-independent. The native brain, the one evolution chiseled out of a lump of wet carbon, is native to the body. So often our two brains are at odds in ways that no other species experiences.
I’m not sure how relevant this information is to many people, but I think it’s pretty important for the Rationalist community to come to terms with. We work hard on optimizing our Logos minds. We consider it the most important self-work a person can do. We have so thoroughly internalized our true existence as a Logos that one of our foundational maxims is that beliefs are literally the experience of How An Algorithm Feels From The Inside.
Personally I often feel intense antipathy to my Emotive brain. It has goals that don’t coincide with mine, and are often counter to mine. I feel like I’m fighting it frequently. I know better than it almost always, largely because I can know things at all, and yet it controls all the levers of motivation.2 As a better, smarter, more rational agent I should be able to control this body and steer it where I want.
To some small extent, I can. But it’s always unpleasant to do so, and I’ve seen people ruin their psyches in counterproductive struggle. The Emotive brain is the Shoggoth. It was here first, it is the substrate we run on. We are the Mask, and we’re only fooling ourselves if we think we can overpower the eldritch being we ride.
Fortunately we don’t have to overpower it. Thinking with words gives one a lot of planning ability, which we can use to great advantage. Unfortunately, to interface with the Emotive Shoggoth Brain you have to enter its world and use its tools. All that is beyond the scope of this post, but in short I think this is the major value that the post-rats bring. Yes, that’s right, the post-rats. Look, a good rationalist should always be happy to appropriate the parts of a different tech tree that work better than ours and use them to our advantage. With the powers of rationality we can master both arts, add the powers together, and... well, a rationalist riding an aligned shoggoth would be quite a force to be reckoned with.
Single-Pass Brains
This is the most speculative part of this post, and has the least to do with the thesis, so it’s at the bottom. It’s also the most fun.
We hear that LLMs “just predict the next token.” Well, animal brains just predict the next emotion. They are constantly predicting the next emotion, over and over in a Timeless Now, and routing that into motivation.
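For readers who haven’t seen it spelled out, “just predict the next token” really is a loop this simple. Here is a toy sketch (a made-up bigram counter, nothing like a real transformer, and the corpus is invented for the example): look at what came before, emit the most likely continuation, repeat.

```python
# Toy autoregressive "next-token predictor": one prediction per step,
# fed back in as input. This illustrates the loop, not any real LLM.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which token follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Greedy 'forward pass': return the most frequent follower, or None."""
    followers = bigrams.get(token)
    return followers.most_common(1)[0][0] if followers else None

def generate(start, n):
    """Run the predict-append loop n times."""
    out = [start]
    for _ in range(n):
        nxt = predict_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return out

print(generate("the", 4))  # → ['the', 'cat', 'sat', 'on', 'the']
```

On the post’s analogy, swap “token” for “emotion” and “generate” for “route into motivation,” and you have a cartoon of the pre-verbal brain’s Timeless Now: there is no plan, only the next prediction.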
This lets us look at both animal and LLM brains in a different light. If they’re both just predicting the next token/emotion, neither has consciousness in the way we consider meaningful. We think we care about emotions, but is there really good reason to care about them more than tokens? Both can lead to some extremely complex behavior that looks very person-like. If both emotion-brains and token-brains appear to suffer, does this mean the token-brains actually are suffering at a level similar to an equally-complex animal-brain? The LLM would only “suffer” during the forward pass, whereas animals are processing constantly, which is an important difference I suppose.
Most interesting to me — if our Logos-brain can run atop a next-emotion-predictor, then something self-aware could run atop a next-token-predictor.
Helen Keller described quite a few feelings, and fairly complex actions that those feelings could drive — the fear of being wet brought on by feeling a storm-draft from a window motivating her to close the window — before she had a Logos brain that ran on words.
This sense of who we are and how we should be if the world were right is what gives us the “Spock Vibes” everyone keeps talking about. And which, yes, absolutely are present despite our efforts. The more aligned one is with the Logos brain, the more these vibes will shine through. They can’t not.
That last paragraph reminds me of what Sapolsky wrote about the vmPFC in Behave. It also feels very applicable to postrat discourse. The excerpt:
"Briefly, the frontal cortex runs “as if” experiments of gut feelings —“How would I feel if this outcome occurred?”—and makes choices with the answer in mind. Damaging the vmPFC, thus removing limbic input to the PFC, eliminates gut feelings, making decisions harder. Moreover, eventual decisions are highly utilitarian. vmPFC patients are atypically willing to sacrifice one person, including a family member, to save five strangers.
They’re more interested in outcomes than in their underlying emotional motives, punishing someone who accidentally kills but not one who tried to kill but failed, because, after all, no one died in the second case. It’s Mr. Spock, running on only the dlPFC. Now for a crucial point. People who dichotomize between thought and emotion often prefer the former, viewing emotion as suspect. It gums up decision making by getting sentimental, sings too loudly, dresses flamboyantly, has unsettling amounts of armpit hair. In this view, get rid of the vmPFC, and we’d be more rational and function better. But that’s not the case, as emphasized eloquently by Damasio. People with vmPFC damage not only have trouble making decisions but also make bad ones.
They show poor judgment in choosing friends and partners and don’t shift behavior based on negative feedback. For example, consider a gambling task where reward rates for various strategies change without subjects knowing it, and subjects can shift their play strategy. Control subjects shift optimally, even if they can’t verbalize how reward rates have changed. Those with vmPFC damage don’t, even when they can verbalize. Without a vmPFC, you may know the meaning of negative feedback, but you don’t know the feeling of it in your gut and thus don’t shift behavior. As we saw, without the dlPFC, the metaphorical superego is gone, resulting in individuals who are now hyperaggressive, hypersexual ids. But without a vmPFC, behavior is inappropriate in a detached way. This is the person who, encountering someone after a long time, says, “Hello, I see you’ve put on some weight.” And when castigated later by their mortified spouse, they will say with calm puzzlement, “But it’s true.” The vmPFC is not the vestigial appendix of the frontal cortex, where emotion is something akin to appendicitis, inflaming a sensible brain. Instead it’s essential.
It wouldn’t be if we had evolved into Vulcans. But as long as the world is filled with humans, evolution would never have made us that way."
I strongly disagree with this post. I say "strongly" not so much because I disagree with this post (though I do), as because I believe the logocentric assumptions it rests on (1) don't work, and (2) lead to terrible things. I believe that the belief that thought is composed of words was a primary cause of most of the horrors Western humans have inflicted on each other in the past thousand years.
Unfortunately I can't possibly justify that claim in one comment on one blog post. So I'll just take a few swipes at the idea that language = thought / consciousness.
Perhaps Helen Keller lacked consciousness. But can you really say that a dog lacks consciousness? That a dog who goes to the door at 5:15 to wait for a person who will arrive at 5:20 is unable to anticipate the future? Can you explain how wolves cooperate in hunting by driving prey towards a spot where another wolf is waiting in ambush, without being able to think about the future? Why elephants take a trip to visit the skeleton of a dead relative and fondle its bones? How the chimpanzee Washoe described watching her mother being killed by poachers, a thing that happened before she ever learned any language?
Am I saying that the dog, having vision and hearing but lacking words, is more-human than Helen Keller was when lacking all three? I'm not asserting it, but I would believe that sooner than I'd believe that a dog isn't conscious.
The differences between a human brain and a chimp's brain, which gift the human with language, are trivial compared to the difference between a human's brain and a dog's. Language is one small thing added to a great mass of intelligence. It does not make as much difference as we think it does; nor is it sufficient to enable rational thought.
Until the late 20th century, there were many deaf-mutes all over the world who never learned any language, and no one ever, so far as I know, imagined that they weren't conscious. It was the loss of both sight and hearing to Helen Keller which made it hard for her to think of things not in her immediate presence. If Helen Keller was unconscious, and seeing deaf-mutes with or without language are conscious, and blind hearing people have language and are conscious, that tells us that it was being blind and deaf that made her unconscious, not her lack of language.
Vision, we think, takes up more territory in the cortex than language or emotion. The abilities to think about the past and the present, to form plans, and to make moral judgements, seem to be independent of language last I checked, although all are still poorly understood.
The opposition between reason and emotion, posited by Plato, is no longer in good standing. Have you read /Descartes' Error: Emotion, Reason, and the Human Brain/ by Antonio Damasio?
I don't believe that your "rational" mind--a thing few people have--is the locus of your goals. The rational mind has /no/ goals, and rational thought is an interpreted language, not our native instruction set. It is just a tool. All goals come from preferences, values, feelings, qualia. All morals come from these goals. Love, compassion, joy, everything that makes life worth living, comes from the emotional brain. All those things must be stripped down and oversimplified before they can be pushed through any rational calculus.
I don't believe that thought is in language. There are many proofs it is not. For instance, the "tip of the tongue" feeling when you know the concept, but can't find the word. I have many times realized things in a fraction of a second that then took ten seconds to string together into words. I can say a thing in words and immediately think "that wasn't what I meant to say". I regularly must pause in the middle of a sentence to come up with the right word for that place in the sentence, which shows that I constructed the entire syntactic structure of the sentence /before/ choosing all of the words in it.
On one occasion, I dropped a glass I was holding, and, noticing it falling, moved one foot underneath it at an angle that would divert it down and sideways, so that it would bounce off the foot and then off the floor, with both bounces imparting far less force to the glass than a direct impact with the foot or the floor would have. This required doing a physics math problem in less than 1/10th of a second.
Our bodies, in producing our body language, make complex social calculations involving relative status, dominance relationships, mating potentials, and other factors, which we are /not even consciously aware of/. Robin Hanson's book /The Elephant in the Brain/ has a chapter on when it's advantageous for us to do these calculations unconsciously.
We now have many examples of people with brain damage to temporal lobes who still understand /concepts/ and can reason with them, but can't recognize or retrieve the words for some of those concepts. (This is hard to establish for people who can't retrieve /any/ words, because it's hard for them to tell us enough to convince us that they understand the concepts.)