15 Comments

I was thinking "This reminds me of something else I heard recently" then I realized that other thing was one of your podcast episodes about Fallout.

author

It was an outgrowth of that conversation, as well as another one I had with someone else around the same time :)

May 28 · edited May 28 · Liked by Eneasz Brodski

Perhaps a tangent: The parable of the talents is not quite what it's said to be. The lord gave one man 5 talents, and another 2. Both got 100% returns on their investments, yet he praised the man to whom he had given 5 talents more highly, because his absolute returns were greater. This has bothered me since I was 5 years old.

The veil of ignorance is (among other things) an argument for impartiality.

It looks like you are stating that partiality is correct.

It seems to me logically rude to bring up an argument, say you disagree with its conclusion, and not say which of its assumptions/steps you disagree with.

Speaking as someone who also believes partiality is correct, I think it's not so logically rude in this case because Veil of Ignorance is begging the question. It essentially says: "First, pretend you could strip away every reason you might have to be partial. If you did that, it would be reasonable and self-interested to be impartial."

Yeah, of course if I had no reasons for partiality, I would conclude that I should be impartial. But I do have reasons for partiality. Why should I pretend I don't, just so I can conclude I don't?

If the argument is "If P, then P. P; therefore, P", there's no real disagreement to offer but "not P".

Just want to say, you've been on a tear the last few months or so and this is another banger

"Well that wouldn’t be me, then. It would be that other person." - Sure, but what kind of person would you like to be, say, if reincarnation is real?

May 28 · edited May 28

Here's where I depart from you:

"But soon after the Veiler points out that “Hey, you know, you could be a chicken too. Or a shrimp. Hell, the only reason you aren’t an insect is pure dumb luck! The Veil of Ignorance motherfcker! For all you know, you could’ve been born an E. Coli!” "

Yes, that is the implication. And that doesn't make the argument dumb; it makes it deep.

We're at THE point in history where we are about to create intelligent machines, some slightly less intelligent than humans, some more intelligent. It's time to drop the convenient but racist assumption that humans have a moral obligation only towards other humans. Why are humans so special? How is this any better than saying that whites only have moral obligations to whites, and blacks only to blacks? It was "workable" only while there was a large cognitive gap between humans and everything else. But that time is almost gone. We need to develop a new ethical system, from the ground up, that can accommodate and incorporate arbitrary organisms.

It isn't as hard as it sounds. It would look a lot like older, merit-based ethical systems. It would look morally repugnant to us today. But I think a world where humans are singled out as sacred, and everyone else counted as resources, is even more morally repugnant. And what will happen in the future, when you will no longer be able to determine whether someone is "human" just by lineage?

May 29 · Liked by Eneasz Brodski

"humans have a moral obligation only towards other humans."

I would say that is a pretty severe misstatement of the case Eneasz is making.

It is not that there is no obligation to things other than humans, but rather that the greatest obligation is to our fellow humans (and among those humans, the obligation is proportional to the strength of our relationship with a given human or group of humans). Really, "human" is simply a handy proxy for an organism we have a relationship with. I am sure there are people now who consider themselves to have a larger karmic debt to their pets or livestock than they do to some humans. I expect that in the cyborg/transhumanist/AI future we may be hurtling towards, there will be organisms of "questionable" membership in the species Homo sapiens which share a greater sense of obligation with humans than some humans do within the species. This will be nothing new.

May 30 · edited May 30

No, I don't think it's a misstatement. He argued that the veil of ignorance is silly, because we'd have to imagine ourselves being born as animals. That logically implies that he thinks it's silly to try to factor animal welfare into our moral calculus. Any consideration of them at all would admit them into that moral calculus.

But Eneasz might not /think/ he implied that. Bringing all organisms into the moral calculus requires not having equity, but endorsing utility monsters. (Google "humans are utility monsters" site:lesswrong.com .) Otherwise, we'd spend our lives serving viruses and bacteria. But it's possible for people not to even consider the possibility of having a moral calculus which assigns different values to different creatures. If they don't consider that possibility, then they will almost always draw a sharp line between humans and other animals.
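One way to see the utility-monster worry concretely: if every organism counts equally, aggregate moral consideration is dominated by whatever is most numerous. A toy sketch in Python — all population figures and weights here are invented purely for illustration, not taken from any source:

```python
# Toy illustration (all numbers invented for the example):
# with equal weights, sheer population size means microbes dominate
# any aggregate moral calculus; assigning different (made-up) weights
# to different creatures avoids that.

populations = {
    "humans":   8e9,    # rough order of magnitude
    "insects":  1e19,   # invented
    "bacteria": 1e30,   # invented
}

# Equal weights: total consideration is just population size.
equal = {name: count * 1.0 for name, count in populations.items()}

# Unequal, made-up weights standing in for "different values for
# different creatures" from the comment above.
weights = {"humans": 1.0, "insects": 1e-12, "bacteria": 1e-25}
weighted = {name: populations[name] * weights[name] for name in populations}

print(max(equal, key=equal.get))       # bacteria dominate under equal weights
print(max(weighted, key=weighted.get)) # humans dominate under these weights
```

The specific numbers don't matter; the point is only that "equal consideration for all organisms" and "don't spend your life serving bacteria" cannot both hold without unequal weights.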

author

I agree with Winston. I draw fuzzy lines, which get fuzzier the further you get from entities you have a relationship with. Since you basically can't have a meaningful relationship with animals in general, only a limited relationship with specific individual animals directly in your life (animals cannot participate in human society and the broader relationships it entails), the line fuzzes into near-nonexistence for animals-in-general.

May 30 · edited May 30 · Liked by Eneasz Brodski

It isn't a fuzzy line if the only way you use it is to draw a hard line, excluding things beyond some degree of fuzziness. The only way not to draw a hard line is to use real-numbered weights, in which case there is no fuzziness; there are numbers.
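A minimal sketch of the real-numbered-weights point: once the "fuzzy line" is made explicit, moral weight is just a number that decays with relational distance, and the binary human/non-human cutoff is one degenerate choice of weighting. The decay function, cutoff, and distances below are all invented for illustration:

```python
import math

def moral_weight(relational_distance: float, decay: float = 1.0) -> float:
    """Smoothly decaying weight: 1.0 for yourself, approaching 0 far away."""
    return math.exp(-decay * relational_distance)

def hard_line(relational_distance: float) -> float:
    """The binary cutoff, for contrast: full weight inside, zero outside."""
    return 1.0 if relational_distance < 2.0 else 0.0

# Invented relational distances, just to show the shape of the two rules.
entities = [("close friend", 0.5), ("stranger", 2.0),
            ("pet dog", 3.0), ("wild shrimp", 6.0)]

for name, dist in entities:
    print(f"{name}: smooth={moral_weight(dist):.3f}, hard={hard_line(dist):.0f}")
```

The smooth version never hits zero, so nothing is excluded outright; weights just become negligible with distance, which is exactly "there is no fuzziness; there are numbers."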

I had a conversation once with Lotfi Zadeh, the inventor of fuzzy logic. I asked him why he invented fuzzy logic instead of just using probability theory, which would be correct. It turned out he invented fuzzy logic because he didn't understand probability theory.

I think a general moral calculus, sufficient to get us through the next century, must not begin with a binary classification into "human" and "non-human". We must discard the convenient commitment to equality. This already bites us today in edge cases like abortion, where we are required to draw a bright line at some arbitrary point between "fertilized egg" and "delivered baby". In the near future, I expect there will be no more edges. The notion of equal rights will be untenable and grossly immoral in a world where agents exist on a continuum of degrees of consciousness and intelligence.

author

Agreed :)

So, Rawls asks us to strengthen our relationships with moral beings outside a given group of humans. And sure, that will alienate us from that group of humans, since time is limited. But the obligation is still not to Luck, but to moral beings.
