Discussion about this post

Man in White

"Unfortunately the movement is leaning into massaging their image anyway. They talk a lot about literal PR. They spend actual effort on optics when that effort could be spent on doing something good instead."

Honestly, I'm not sure about that. People, including me, are still debating that castle, for example. Yes, I know Scott thinks it was actually a good purchase, but I still haven't found how they made the calculation.

Another thing: I really dislike when MacAskill is seen as strongly pro-longtermism based on his arguments, people push back on him, and he says, "I actually mean weak longtermism, but you seem to think I argue for hard longtermism." That's fine, but then I'd argue he needs to work on his communication.

Bad Horse

I agree that it's more important to focus on doing EA right than on how the normies perceive EA. Though I'm coming more from the sense that EA is less altruistic than it aspires to be, and the rationalist movement is far less rational than it imagines itself to be, so keeping the community's goals front and center is, and always will be, the most urgent thing.

The LessWrong community was always deeply infected with groupthink, and with already knowing what to think, back in the 2000s. I think it's gotten worse since then, and worse still in the AI-safety community that grew out of LW. I think highly of EA, and I think LW still does more good than harm; but it's bad enough in the AI-safety community that that community is more likely to do harm than good. They have a "party line", a way you're supposed to sound, and bad metaphysical assumptions you're supposed to accept--mostly the same ones assumed in LW, but they are more obviously harmful in AI safety.

I admit these perceptions stem mostly from how people in these communities treat me. I spent years building my reputation within LW, and I have extensive educational and career experience in AI, linguistics, philosophy, and other areas related to AI safety. But every time I step out of bounds, the community ignores me. Most recently I posted to LW what I think is a knock-down argument that instilling human values in AIs is theoretically incapable of aligning them with human interests unless we are willing to count them as human. Not one person took the time to understand it; hardly anyone even read it. I don't think that community is capable of hearing things it doesn't want to hear.
