p: my communist friends thought that effective altruism was about 'long termism' and not actually doing anything to help people here and now
m: So, they know GiveWell exists, they just think it's another community/movement funding it? Or they never heard of it?
p: never heard of it
m: and what do they think now?
p: Well they said it was “PR” when I brought it up
This is an actual discussion from yesterday. I’ve written before about this topic, but I used too many words; only the last three paragraphs mattered. To be shorter:
EA is weird. It’s autistic and tech-positive and cares about outcomes. All of these things are super-cringe. The people who need to hear about it — others like us — WILL hear about it. Everyone else will only hear about it when the clout-chasing normies dunk on how weird we are. And when we point to the things we do which they would theoretically strongly approve of, they will dismiss that as “PR.”
Why? Because they already know what to think. They were told what to think by their thought leaders. Any evidence that goes against what their thought leaders say is dismissed as disingenuous posturing. Actually doing good things does not matter to them. They dismiss it as PR because they know PR is primarily just deception, and they’re right.
Unfortunately the movement is leaning into massaging their image anyway. They talk a lot about literal PR. They spend actual effort on optics when that effort could be spent on doing something good instead. Optics to win over people who have antibodies specifically against manufactured optics.
If the actual, literal good things EA does — which are core to EA and predated any public awareness of the movement — are dismissed as “PR” because they don’t fit the NYT narrative… how will literal attempts to massage the EA image be viewed?
This isn’t really a problem if you don’t care what they think. These people literally do not matter. You cannot win over such people by mutilating your movement; you can only mutilate your movement.
Have the courage to be true to your convictions. That’s what brought EA to where it is today. That’s what attracted all the people who support it and work within it and fund it. Mutilating that in a cursed chase for approval from people who want to hate you will only drive away all the good people who made EA what it is.
"Unfortunately the movement is leaning into massaging their image anyway. They talk a lot about literal PR. They spend actual effort on optics when that effort could be spent on doing something good instead."
Honestly, I'm not sure about that. People, including me, are still debating that castle, for example. Yes, I know Scott thinks it's actually a good purchase, but I still haven't found how they made the calculation.
Another thing: I really dislike it when MacAskill is seen as super pro-longtermism based on his arguments, people push back on him, and he says "I actually mean weak longtermism, but you seem to think I argue for hard longtermism". That's great, but I'd argue he needs to work on his communication in that case.
I agree that it's more important to focus on doing EA right than on how the normies perceive EA. Though I'm coming more from the sense that EA is less altruistic than it aspires to be, and the rationalist movement is far less rational than it imagines itself to be, and so keeping the community's goals front and center is and always will be the most urgent thing.
The LessWrong community was always deeply infected with groupthink and with already knowing what to think back in the 2000s, and I think it's gotten worse, and worse still in the AI safety community that grew out of LW. I think highly of EA, and I think LW still does more good than harm; but I think it's bad enough in the AI safety community that the community is more likely to do harm than good. They have a "party line", and a way you're supposed to sound, and bad metaphysical assumptions you're supposed to assume: mostly the same ones assumed in LW, but they are more obviously harmful in AI safety.
I admit these perceptions stem mostly from how people in these communities treat me. I spent years building my reputation within LW, and have extensive educational and career experience in AI, linguistics, philosophy, and other areas related to AI safety. But every time I step out of bounds, the community ignores me. Most recently I posted to LW what I think is a knock-down argument that instilling human values in AI is theoretically incapable of aligning them with human interests unless we are willing to count them as human. Not one person took the time to understand it; hardly anyone even read it. I don't think that community is capable of hearing things it doesn't want to hear.