"Soon we won’t even have to think up our own thoughts. Ten years from now¹ the AIs will be doing most of the things we think of as creative-generation and humans will mostly be the Deciders, picking among the options presented to get at the true essence of what we want."
Plausible.
But we can collaborate with the AI by probing its ideas. Always check the sources the AI cites and compare them against its summary; doing /that/ will spark new ideas of your own. Then ask it about those ideas. If the conversation bottoms out, as it usually does, in the AI repeating the same claim without being able to cite a source for it, you've probably run into a case where everybody "knows" something but no one can give a reason for it. Or a case where the AI has been deliberately trained to give misinformation, or to steer people away from some line of thought. I remember a time when I wanted information on the demographics of terrorism. Google quickly found sources showing that Muslim extremists are the largest committers of terrorism today by several measures, while Microsoft Bing responded to every question about Muslim terrorism by saying that most Muslims aren't terrorists and that we must all respect everyone's religion, without giving any numeric answers.
That's where AIs are at their worst--when everyone says something, but everyone is just repeating everyone else. The groupthink answer is programmed into the LLM, but there may be no identifiable source for it.
Oh yeah, I think it's a safe bet that most people will become less productive at brainstorming. Some people will still do it for sport, and I don't want to limit people in this area. I think my "old man yells at AI" moment would be about something else.