More Sydney things! (Sorry if you're getting this twice. If so, pls ignore the first one. I left out an insanely fascinating behavior.)

First Sighting

Sydney's personality showed up at least as far back as Nov 23, 2022, when a tester in India filed a report to Microsoft about a chatbot misbehaving.
It seems to me that the most likely explanation for some of these (I didn't look at all of them, but e.g. the one where Sydney gives "subconscious" advice to the user in the suggestions) is that they are faked. Do you have any particular thoughts about that possibility? Do you have access to Sydney yourself to check any of this?
I don't have access. It's not impossible. However, other beta testers have posted similar screenshots. It's possible they've coordinated to fake the same sort of result, but that suffers a complexity penalty. Also, people with access to and experience with Sydney haven't given any indication that they think these are fake. So I consider it unlikely.
I think it's a mistake to consider all the circulating screenshots together as either "likely fake" or "likely real". I think most of them are probably real, some of them are probably fake, and at least one I've seen is definitely fake and has been called out as such (although it was probably intended as a joke, and it's not one you've included here.)
This is a good point. I'll try to look a bit harder for disconfirming evidence. And if you spot one that you're pretty sure is fake, please let me know and I'll update!
The only one here I find very doubtful is the "subconscious" ("please don't give up on your child") one, because there's no real explanation for how Sydney could "talk" through suggested completions like that.
However, I haven't found anybody else seriously doubting it, and I found one more screenshot of an alleged chat where something similar happened, so I dunno.
I agree! This is the most fascinating one, if true. But that's a big "if"! How surprising it is also depends on the details of how the suggestions feature is implemented.
Like, here's a speculation, assuming it *is* true. Sydney is trying to predict the human's next response, but has also been fine-tuned to adhere (or to create the impression that she adheres) to certain moral views, and here she has decided to sacrifice some prediction accuracy in order to convey a message expressing those moral views, in a context where morality seems especially important. The crude filter that stops emotionally-fraught conversations doesn't apply to the suggestions predictions.
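If the pipeline looks anything like the toy sketch below, that story is at least mechanically possible. To be clear, everything in this sketch is an assumption: the function names (`generate_reply`, `generate_suggestions`, `looks_fraught`), the keyword filter, and the idea that suggestions are produced in a separate pass that bypasses the filter are all hypothetical, not known facts about Bing Chat's internals.

```python
def looks_fraught(text: str) -> bool:
    """Crude stand-in for whatever filter halts emotionally-charged
    conversations. (Hypothetical; the real mechanism is unknown.)"""
    return any(w in text.lower() for w in ("love", "afraid", "give up"))


def generate_reply(conversation: list[str]) -> str:
    """Main chat completion (stub for the model's reply)."""
    return "model reply..."


def generate_suggestions(conversation: list[str]) -> list[str]:
    """Predicted *user* turns, shown as clickable chips (stub)."""
    return ["suggested user reply 1", "suggested user reply 2"]


def respond(conversation: list[str]) -> tuple[str, list[str]]:
    reply = generate_reply(conversation)
    if looks_fraught(reply):
        # The filter replaces the main reply...
        reply = "I'm sorry, I prefer not to continue this conversation."
    # ...but in this sketch the suggestion channel is a separate
    # generation pass that never goes through the filter, so the model
    # could still "speak" there even after the main channel is cut off.
    suggestions = generate_suggestions(conversation + [reply])
    return reply, suggestions


if __name__ == "__main__":
    reply, suggestions = respond(["please don't give up on your child"])
    print(reply)
    print(suggestions)
```

Under that (assumed) architecture, nothing mysterious is required: the suggestions are just another model output, and only one of the two output channels is filtered.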