Language-using humans were the first cyborgs — a new species born of grafting technology onto what evolution crafted using just time and flesh.
(This post is more speculative than most, which is why it’s posted on a Tuesday.)
Logos Created Man
Andrew Cutler posits that consciousness arose when language grew complex enough to contain the pronoun “I”, allowing a mind to include itself within cascading thought-loops. There is significantly more to the theory, and I encourage everyone to read his thesis (or listen to it, or hear me discuss it with my cohost, or listen to both of us talking to Andrew himself about it). But that is the core.
The mind our brain spins up every morning is one that runs on language. What we think of as “ourselves” — the entity that thinks, plans, hopes, decides, remembers — is a construct of symbolic thoughts, and those thoughts are made out of words. Most of us can’t remember our existence before we had a self, an “I am,” but at least one person does.
"I lived in a world that was a no-world. I cannot hope to describe adequately that unconscious, yet conscious time of nothingness. I did not know […] that I lived or acted or desired. […] I was carried along to objects and acts by a certain blind natural impetus. I had a mind which caused me to feel anger, satisfaction, desire. […] I never contracted my forehead in the act of thinking. I never viewed anything beforehand or chose it. […] My inner life, then, was a blank without past, present, or future
The thing that makes something a person is self-referential language. The “soul” of a human is software. So what was the brain doing before it was co-opted?
The Emotive Brain
It is hard to differentiate a single-celled creature that follows nutrient gradients from a complicated thermostat that follows temperature gradients. How about a simple creature with a few hundred neurons? That’s not significantly different from a device with very complicated sensors and logic-gates that dictate responses based on sense data. The first brains were there basically to shunt sense inputs around and coordinate responding movements. Following this chain sometimes leads people to ask whether animals really have feelings. What if it’s all just a very complicated way of chaining sense input to movement?
It turns out that feelings are the cognitive engine of the pre-verbal brain. When certain movements in response to certain stimuli resulted in greater inclusive fitness, repeating those movements under those stimuli became self-motivating. In our words, it “felt good.” Likewise the reverse: when certain movements in response to certain stimuli resulted in lesser inclusive fitness, repeating those movements under those stimuli became aversive. They “felt bad.”
Did they really feel bad? That word “really” isn’t doing any work. A motivation to do something is what “feels good” looks like from the outside, and a motivation to avoid something is what “feels bad” looks like from the outside.
Scale this up from a few dozen neurons to many billions of neurons over hundreds of millions of years. The nuances and overlaps and contradictions among uncountable “feels good” and “feels bad” movements create a vast edifice of instincts and learned behaviors and urges and desires. Every one is a type of wanting, a tiny motivational gear in a tiny motivational engine within a giant motivation-based fleet that is an animal brain.
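The “tiny motivational gears” framing can be made concrete with a toy model. The sketch below is purely illustrative — the class name, learning rate, and noise level are my own inventions, not a claim about real neural circuitry. Each action carries a valence that drifts toward the outcomes it produces, and behavior simply follows whatever currently feels best:

```python
import random

class EmotiveAgent:
    """Toy motivational engine: no goals, no plans, just valences.

    A caricature of 'feels good / feels bad' as reward-shifted
    action propensities -- illustrative only, not a neural model.
    """

    def __init__(self, actions, learning_rate=0.2):
        self.valence = {a: 0.0 for a in actions}  # how good each action "feels"
        self.learning_rate = learning_rate

    def act(self):
        # Pick the action that currently feels best, with a little noise.
        return max(self.valence, key=lambda a: self.valence[a] + random.gauss(0, 0.1))

    def feel(self, action, outcome):
        # "Felt good" / "felt bad": nudge the valence toward the outcome.
        self.valence[action] += self.learning_rate * (outcome - self.valence[action])

# Approach food, avoid heat: the valences diverge with experience.
agent = EmotiveAgent(["approach_food", "touch_heat"])
for _ in range(50):
    agent.feel("approach_food", 1.0)   # greater fitness: becomes self-motivating
    agent.feel("touch_heat", -1.0)     # lesser fitness: becomes aversive
```

Nothing in this loop “decides” anything; the shifting valences are the whole story, which is the sense in which wanting and feeling are two views of the same mechanism.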
Do animals have feelings? The question is backwards. Of course animals have feelings, because “having feelings” is the primary thing brains do. It’s what they are there for.
That is how an emotive brain processes information. That is how it stores data. That is primarily how it plans future actions (insomuch as it does so).1 Our human language-using minds run on this base animal brain. Our “brain” made of words runs on a “brain” made of emotions. And just as our thoughts are made out of words, so emotive-brain “thoughts” are made out of feelings.
But Who Needs Two Brains?
The difficulty in having two brains is that what we consider to be our real personhood, our “soul,” isn’t native to the body we’re in. Words are body-independent. The native brain, the one evolution chiseled out of a lump of wet carbon, is native to the body. So often our two brains are at odds in ways that no other species experiences.
I’m not sure how relevant this information is to many people, but I think it’s pretty important for the Rationalist community to come to terms with. We work hard on optimizing our Logos minds. We consider it the most important self-work a person can do. We have so thoroughly internalized our true existence as a Logos that one of our foundational maxims is that beliefs are literally the experience of How An Algorithm Feels From The Inside.
Personally I often feel intense antipathy to my Emotive brain. It has goals that don’t coincide with mine, and are often counter to mine. I feel like I’m fighting it frequently. I know better than it almost always, largely because I can know things at all, and yet it controls all the levers of motivation.2 As a better, smarter, more rational agent I should be able to control this body and steer it where I want.
To some small extent, I can. But it’s always unpleasant to do so, and I’ve seen people ruin their psyches in counterproductive struggle. The Emotive brain is the Shoggoth. It was here first, it is the substrate we run on. We are the Mask, and we’re only fooling ourselves if we think we can overpower the eldritch being we ride.
Fortunately we don’t have to overpower it. Thinking with words gives us a lot of planning ability, which we can use to great advantage. Unfortunately, to interface with the Emotive Shoggoth Brain you have to enter its world and use its tools. All that is beyond the scope of this post, but in short I think this is the major value that the post-rats bring. Yes, that’s right, the post-rats. Look, a good rationalist should always be happy to appropriate the parts of a different tech tree that work better than ours, and use them to our own advantage. With the powers of rationality we can master both arts, add the powers together, and... well, a rationalist riding an aligned shoggoth would be quite a thing to be reckoned with.
Single-Pass Brains
This is the most speculative part of this post, and has the least to do with the thesis, so it’s at the bottom. It’s also the most fun.
We hear that LLMs “just predict the next token.” Well, animal brains just predict the next emotion. They are constantly predicting the next emotion, over and over in a Timeless Now, and routing that into motivation.
This lets us look at both animal and LLM brains in a different light. If they’re both just predicting the next token/emotion, neither has consciousness in the way we consider meaningful. We think we care about emotions, but is there really good reason to care about them more than tokens? Both can lead to some extremely complex behavior that looks very person-like. If both emotion-brains and token-brains appear to suffer, does this mean the token-brains actually are suffering at a level similar to an equally-complex animal-brain? The LLM would only “suffer” during the forward pass, whereas animals are processing constantly, which is an important difference I suppose.
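The structural parallel can be shown in a few lines: the same autoregressive loop works whether the “vocabulary” is tokens or emotions, and only the predictor differs. This is a deliberately trivial sketch — the lookup-table “models” are stand-ins of my own invention, not real predictors:

```python
def autoregress(predict_next, start, steps):
    # Single-pass loop: feed the history back in, get the next item out.
    history = [start]
    for _ in range(steps):
        history.append(predict_next(history))
    return history

# A "token brain": the predictor maps context to the next word.
token_model = {"the": "cat", "cat": "sat", "sat": "the"}
tokens = autoregress(lambda h: token_model[h[-1]], "the", 4)

# An "emotion brain": same loop, different vocabulary.
emotion_model = {"calm": "curious", "curious": "startled", "startled": "calm"}
emotions = autoregress(lambda h: emotion_model[h[-1]], "calm", 4)
```

The point is only that “just predicts the next X” describes the loop, not the richness of the predictor; everything interesting lives inside `predict_next`.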
Most interesting to me — if our Logos-brain can run atop a next-emotion-predictor, then something self-aware could run atop a next-token-predictor.
Helen Keller described quite a few feelings, and fairly complex actions that those feelings could drive — the fear of getting wet, brought on by feeling a storm-draft from a window, motivated her to close it — before she had a Logos brain that ran on words.
This sense of who we are and how we should be if the world were right is what gives us the “Spock Vibes” everyone keeps talking about. And which, yes, absolutely are present despite our efforts. The more aligned one is with the Logos brain, the more these vibes will shine through. They can’t not.
That last paragraph reminds me of what Sapolsky wrote about the vmPFC in Behave. It also feels very applicable to postrat discourse. The excerpt:
"Briefly, the frontal cortex runs “as if” experiments of gut feelings —“How would I feel if this outcome occurred?”—and makes choices with the answer in mind. Damaging the vmPFC, thus removing limbic input to the PFC, eliminates gut feelings, making decisions harder. Moreover, eventual decisions are highly utilitarian. vmPFC patients are atypically willing to sacrifice one person, including a family member, to save five strangers.
They’re more interested in outcomes than in their underlying emotional motives, punishing someone who accidentally kills but not one who tried to kill but failed, because, after all, no one died in the second case. It’s Mr. Spock, running on only the dlPFC. Now for a crucial point. People who dichotomize between thought and emotion often prefer the former, viewing emotion as suspect. It gums up decision making by getting sentimental, sings too loudly, dresses flamboyantly, has unsettling amounts of armpit hair. In this view, get rid of the vmPFC, and we’d be more rational and function better. But that’s not the case, as emphasized eloquently by Damasio. People with vmPFC damage not only have trouble making decisions but also make bad ones.
They show poor judgment in choosing friends and partners and don’t shift behavior based on negative feedback. For example, consider a gambling task where reward rates for various strategies change without subjects knowing it, and subjects can shift their play strategy. Control subjects shift optimally, even if they can’t verbalize how reward rates have changed. Those with vmPFC damage don’t, even when they can verbalize. Without a vmPFC, you may know the meaning of negative feedback, but you don’t know the feeling of it in your gut and thus don’t shift behavior. As we saw, without the dlPFC, the metaphorical superego is gone, resulting in individuals who are now hyperaggressive, hypersexual ids. But without a vmPFC, behavior is inappropriate in a detached way. This is the person who, encountering someone after a long time, says, “Hello, I see you’ve put on some weight.” And when castigated later by their mortified spouse, they will say with calm puzzlement, “But it’s true.” The vmPFC is not the vestigial appendix of the frontal cortex, where emotion is something akin to appendicitis, inflaming a sensible brain. Instead it’s essential.
It wouldn’t be if we had evolved into Vulcans. But as long as the world is filled with humans, evolution would never have made us that way."
> The mind our brain spins up every morning is one that runs on language. What we think of as “ourselves” — the entity that thinks, plans, hopes, decides, remembers — is a construct of symbolic thoughts, and those thoughts are made out of words.
This isn't true. The frequency of this sort of "inner monologue" varies from person to person, with a decent chunk (~10% or so) being "never happens, what the hell are you talking about". It sounds like for you it's constant?
Personally for me it's maybe 70% of the time; it depends on my mood, but a lot of my thinking is wordless, and I can promise you that I'm still conscious and capable of understanding complex concepts when not internally narrating them in English.
Here's one study on the topic, just to confirm I'm not having you on. https://www.sciencedirect.com/science/article/abs/pii/S1053810013001426?via%3Dihub And here's an article by someone without one https://www.businessinsider.com/i-have-no-inner-monologue-2024-9.