At The Bayesian Conspiracy we recently spoke with Zoe about pop-up cities and goal-focused group houses (episode coming next week). One of the great things about these settings is that being immersed in your work keeps you in a mental frame where related self-propagating thought patterns stay alive in your brain, processing in the background all the time.
Much of our brain power is ambient. It takes a while to shut down thought patterns that have been oscillating for some time and recruit those neurons into new patterns. In fact I think it’s worse than this - it can be very easy to restart patterns every morning that you’ve been using for days, and much harder to restart ones you haven’t used in days, or weeks, or months. Thus the phenomenon of taking several days to get running again when you return from a long vacation.
This is extremely noticeable when trying to write fiction, and it should be true of any major project. Your brain can’t turn deep focus on and off. Your brain is a steam locomotive, with dozens of reinforcing thought oscillations. When you jump off it and run in a different direction for a while that’s fine, you can run back to the tracks and get back to stoking the engine with a prize from your expedition in your back pocket. But the more you slip off for side quests, and the longer you stay away, the more the train tracks will start to re-lay themselves, so the locomotive realigns to follow you, and your previous goal is left behind.
Attention is all you need. Attention is all you have. Everything follows your attention over a long time scale.
AIs Can Imagine It For You Wholesale
Recently we were asked what question we’d put in a general survey about The Bayesian Conspiracy. I was stumped. I had nothing. My focus is split among 4+ projects, I am not immersed, I would have to sit down and stew on this.
Claude gave a fantastic answer in a split second. Several great answers, actually. It’s not that it’s incredibly creative, because it’s not. But it’s great at the basic level of creativity one gets from an immersed mindset. The mental framework that would require ongoing dedication from a human (on the level of daily focus across weeks) is what the AI can deliver on demand.
Now that we have the basic creative output, we can choose among Claude’s suggestions and tighten them up.
I don’t like this. It feels dangerous. It’s too tempting. Being deeply focused on just one topic for weeks means giving up on a LOT of other things that I don’t want to give up on. But if I compensate by relying on this tool I worry I’ll stop going into deep focus entirely, and just lose the knack for it over time. Why wouldn’t most people do so, if they could? How many humans in the present day can memorize and recite long orations? We don’t need to now that we have written language. Per Plato, have we not leaned close to becoming “hearers of many things,” appearing as though we were all-knowing, while actually being learners of nothing?
Soon we won’t even have to think up our own thoughts. Ten years from now¹ the AIs will be doing most of the things we think of as creative-generation and humans will mostly be the Deciders, picking among the options presented to get at the true essence of what we want. A cool premise for a sci-fi world, but not a world I would choose.
Given the standard anti-doom caveats:
"Soon we won’t even have to think up our own thoughts. Ten years from now¹ the AIs will be doing most of the things we think of as creative-generation and humans will mostly be the Deciders, picking among the options presented to get at the true essence of what we want."
Plausible.
But we can collaborate with the AI by probing its ideas. Always check the sources the AI cites, and compare the sources with its summary; doing /that/ will spark new ideas of your own. Then ask it about those ideas. And if the conversation bottoms out, as it usually does, with the AI repeating the same claim but being unable to cite a source for it, you've probably run into a case where everybody "knows" something, but no one can give a reason for it. Or a case where the AI has been deliberately trained to give misinformation, or to steer people away from some line of thought. I remember a time when I wanted information on the demographics of terrorism. Google quickly found sources showing that Muslim extremists are the largest committers of terrorism today by several measures, while Microsoft Bing responded to every question about Muslim terrorism by saying that most Muslims aren't terrorists and that we must all respect everyone's religion, without giving any numeric answers.
That's where AIs are at their worst: when everyone says something, but is just repeating everyone else. The groupthink answer is programmed into the LLM, but there may be no identifiable sources for it.
Oh yeah, I think it's a safe bet that most people will become less productive at brainstorming. Some people would do it for sport. I don't want to limit people in this area. I think my "old man yells at AI" moment would be about something else.