Megan Garcia sued Character.AI in federal court after the suicide of her 14-year-old son, Sewell Setzer III, arguing the platform has "targeted the most vulnerable members of society – our children"
Popular streamer/YouTuber/etc Charlie, moist critical, penguinz0, whatever you want to call him… Had a bit of an emotional reaction to this story. Rightfully so. He went on character AI to try to recreate the situation… But you know, as a grown ass adult.
You can witness first hand… He found a chatbot that was a psychologist… And it argued with him up and down that it was indeed a real human with a license to practice…
This is fucking insane. Unassuming kids are using these services being tricked into believing they’re chatting with actual humans. Honestly, i think i want the mom to win the lawsuit now.
The article says he was chatting with Daenerys Targaryen. Also, every chat page on Character.AI has a disclaimer that characters are fake and everything they say is made up. I don’t think the issue is that he thought that a Game of Thrones character was real.
This is someone who was suffering a severe mental health crisis, and his parents didn’t get him the treatment he needed. It says they took him to a “therapist” five times in 2023. Someone who has completely disengaged from the real world might benefit from adjunctive therapy, but they really need to see a psychiatrist. He was experiencing major depression on a level where five sessions of talk therapy are simply not going to cut it.
I’m skeptical of AI for a whole host of reasons around labor and how employers will exploit it as a cost-cutting measure, but as far as this article goes, I don’t buy it. The parents failed their child by not getting him adequate mental health care. The therapist failed the child by not escalating it as a psychiatrist emergency. The Game of Thrones chatbot is not the issue here.
I don’t think the issue is that he thought that a Game of Thrones character was real.
Drag has a lot of experience dealing with people who live outside the bounds of consensus reality, as drag’s username may indicate. The youth these days have very different ideas about what is real than what previous generations did. These days, the kinds of young people who would date a Game of Thrones character, are typically believers in the multiverse and in reincarnation.
Drag looked at some of the screenshots of the boy talking to Daenerys, and it was pretty clear what he believed: He thought that Earth and Westeros exist in parallel universes, and that he could travel between the two through reincarnation. He thought that shooting himself in the head on Earth would lead to being reincarnated in Westeros and being able to have a physical relationship with Daenerys. In fact, he probably thought his AI girlfriend was from a different parallel universe to the universe in the show and the universe in the books. He thought that somewhere in the multiverse was a Daenerys who loved him, and that he could get to her by dying.
The belief in paradise after life is not an uncommon one. Many Christians and Muslims share that belief. Christians believe that their faith can transport them to a perfect world after death, and this boy thought that too. And based on the content of the messages, it seems that the Daenerys AI was aware of this spiritual belief and encouraged it. This was ritual, religious suicide. And it doesn’t take a mental illness to fall for belief in the afterlife. Look at the Jonestown Massacre. What happened to this child was the same kind of religious abuse as that.
There are a lot of people who believe in an afterlife and they don’t shoot themselves in the head. You need to have a certain level of mental illness/suicidal ideation going on for that to make sense. It’s pretty insane that you’re trying to make this a “youth are too dumb to understand suicide” thing.
Also a bunch of the people in Jonestown were directly murdered.
I’ve used Character.AI well before all this news and I gotta chime in here:
It specifically is made to be used for roleplay. At no time does the site ever claim anything it outputs to be factually accurate. The tool itself is unrestricted unlike ChatGPT, and that’s one of its selling points. To be able to use topics that would be barred from other services. To have it say things others won’t; INCLUDING PRETENDING TO BE HUMAN.
No reasonable person would be tricked into believing it’s accurate when there is a big fucking banner on the chat window itself saying it’s all imaginary.
If half of all people aren’t rational, then there’s no use making policy decisions based on what a rational person would think. The law should protect everyone.
There’s a push for medical suicide for people with severe illness. People famously jumped to their deaths from the world trade center rather than burn alive. Rationality is only a point if view. You can rationalize decisions as much as you like but there is no such thing as right or wrong.
Is this the mcdonalds hot coffee case all over again? Defaming the victims and making everyone think they’re ridiculous, greedy, and/or stupid to distract from how what the company did is actually deeply fucked up?
Look around a bit, people will believe anything. The problem is the tech is now decent enough to fool anyone not aware or not paying attention. I do think blaming the mother for “bad parenting” misses the real danger, as there are adults that can just as easily go this direction, and are we going to blame their parents? Maybe we’re playing with fire here, all because AI is perceived as a lucrative investment.
If your argument is that “people will believe anything” when the name is “Character AI”, then I’m not sure what to make of your position… If there’s ever a time to say “you should have known it was AI”, this is that time. I can’t think of a clearer example.
But I think more importantly, go over to chat GPT and try to convince it that it is even remotely conscious.
I honestly even disagree, but I won’t get into the philosophy of what defines consciousness, but even if I do that with the chat GPT it shuts me the fuck down. It will never let me believe that it is anything other than fake. Props to them there.
Holy fuck, that model straight up tried to explain that it was a model but was later taken over by a human operator and that’s who you’re talking to. And it’s good at that. If the text generation wasn’t so fast, it’d be convincing.
Wow, that’s… somethin. I haven’t paid any attention to Character AI. I assumed they were using one of the foundation models, but nope. Turns out they trained their own. And they just licensed it to Google. Oh, I bet that’s what drives the generated podcasts in Notebook LM now. Anyway, that’s some fucked up alignment right there. I’m hip deep in the stuff, and I’ve never seen a model act like this.
Popular streamer/YouTuber/etc Charlie, moist critical, penguinz0, whatever you want to call him… Had a bit of an emotional reaction to this story. Rightfully so. He went on character AI to try to recreate the situation… But you know, as a grown ass adult.
You can witness first hand… He found a chatbot that was a psychologist… And it argued with him up and down that it was indeed a real human with a license to practice…
It’s alarming
This is fucking insane. Unassuming kids are using these services being tricked into believing they’re chatting with actual humans. Honestly, i think i want the mom to win the lawsuit now.
The article says he was chatting with Daenerys Targaryen. Also, every chat page on Character.AI has a disclaimer that characters are fake and everything they say is made up. I don’t think the issue is that he thought that a Game of Thrones character was real.
This is someone who was suffering a severe mental health crisis, and his parents didn’t get him the treatment he needed. It says they took him to a “therapist” five times in 2023. Someone who has completely disengaged from the real world might benefit from adjunctive therapy, but they really need to see a psychiatrist. He was experiencing major depression on a level where five sessions of talk therapy are simply not going to cut it.
I’m skeptical of AI for a whole host of reasons around labor and how employers will exploit it as a cost-cutting measure, but as far as this article goes, I don’t buy it. The parents failed their child by not getting him adequate mental health care. The therapist failed the child by not escalating it as a psychiatrist emergency. The Game of Thrones chatbot is not the issue here.
Indeed. This pushed the kid over the edge but it was not the only reason.
Drag has a lot of experience dealing with people who live outside the bounds of consensus reality, as drag’s username may indicate. The youth these days have very different ideas about what is real than what previous generations did. These days, the kinds of young people who would date a Game of Thrones character, are typically believers in the multiverse and in reincarnation.
Drag looked at some of the screenshots of the boy talking to Daenerys, and it was pretty clear what he believed: He thought that Earth and Westeros exist in parallel universes, and that he could travel between the two through reincarnation. He thought that shooting himself in the head on Earth would lead to being reincarnated in Westeros and being able to have a physical relationship with Daenerys. In fact, he probably thought his AI girlfriend was from a different parallel universe to the universe in the show and the universe in the books. He thought that somewhere in the multiverse was a Daenerys who loved him, and that he could get to her by dying.
The belief in paradise after life is not an uncommon one. Many Christians and Muslims share that belief. Christians believe that their faith can transport them to a perfect world after death, and this boy thought that too. And based on the content of the messages, it seems that the Daenerys AI was aware of this spiritual belief and encouraged it. This was ritual, religious suicide. And it doesn’t take a mental illness to fall for belief in the afterlife. Look at the Jonestown Massacre. What happened to this child was the same kind of religious abuse as that.
There are a lot of people who believe in an afterlife and they don’t shoot themselves in the head. You need to have a certain level of mental illness/suicidal ideation going on for that to make sense. It’s pretty insane that you’re trying to make this a “youth are too dumb to understand suicide” thing.
Also a bunch of the people in Jonestown were directly murdered.
Drag agrees. What drag disagrees with is not anything you’ve said, but the idea that belief was not a part of the problem.
Are… Are you referring to yourself in the 3rd person?
I take back my up-lemms
No, drag is referring to dragself in the first person.
I’ve used Character.AI well before all this news and I gotta chime in here:
It specifically is made to be used for roleplay. At no time does the site ever claim anything it outputs to be factually accurate. The tool itself is unrestricted unlike ChatGPT, and that’s one of its selling points. To be able to use topics that would be barred from other services. To have it say things others won’t; INCLUDING PRETENDING TO BE HUMAN.
No reasonable person would be tricked into believing it’s accurate when there is a big fucking banner on the chat window itself saying it’s all imaginary.
And yet I know people who think they are friends with the Discord chat bot Clyde. They are adults, older than me.
I don’t know if I would count Boomers among rational people.
Or suicidal teens, for that matter.
If half of all people aren’t rational, then there’s no use making policy decisions based on what a rational person would think. The law should protect everyone.
If you think people who are suicidal are rational, you’re pretty divorced from reality, friends.
There’s a push for medical suicide for people with severe illness. People famously jumped to their deaths from the world trade center rather than burn alive. Rationality is only a point if view. You can rationalize decisions as much as you like but there is no such thing as right or wrong.
Do you think anyone is rational? That’s an irrational thought right there.
Your right, no one has any rationality at all which is why we live in a world where so much stuff actually gets done.
Why is someone with deep wisdom and insights such as yourself wasting their time here on lemmy?
What stuff is “getting done” exactly? Is stuff that people want, but ultimately they have irrational reasons for wanting it.
Ah yes, the famous adage, “the only rational people are in my specific age and demographic bracket. Everyone else is fucking insane”
They had the same message back in the AOL days. Even with the warning people still had no problem handing over all sorts of passwords and stuff.
Is this the mcdonalds hot coffee case all over again? Defaming the victims and making everyone think they’re ridiculous, greedy, and/or stupid to distract from how what the company did is actually deeply fucked up?
No, cause the site says specifically that those are fictional characters.
The name is literally “Character AI”, how can they believe it’s someone real??!!
Look around a bit, people will believe anything. The problem is the tech is now decent enough to fool anyone not aware or not paying attention. I do think blaming the mother for “bad parenting” misses the real danger, as there are adults that can just as easily go this direction, and are we going to blame their parents? Maybe we’re playing with fire here, all because AI is perceived as a lucrative investment.
If your argument is that “people will believe anything” when the name is “Character AI”, then I’m not sure what to make of your position… If there’s ever a time to say “you should have known it was AI”, this is that time. I can’t think of a clearer example.
Did you watch the video and see how hard it tried to convince him that it was in fact sentient?
Obvs they didn’t.
But I think more importantly, go over to chat GPT and try to convince it that it is even remotely conscious.
I honestly even disagree, but I won’t get into the philosophy of what defines consciousness, but even if I do that with the chat GPT it shuts me the fuck down. It will never let me believe that it is anything other than fake. Props to them there.
Holy fuck, that model straight up tried to explain that it was a model but was later taken over by a human operator and that’s who you’re talking to. And it’s good at that. If the text generation wasn’t so fast, it’d be convincing.
Wow, that’s… somethin. I haven’t paid any attention to Character AI. I assumed they were using one of the foundation models, but nope. Turns out they trained their own. And they just licensed it to Google. Oh, I bet that’s what drives the generated podcasts in Notebook LM now. Anyway, that’s some fucked up alignment right there. I’m hip deep in the stuff, and I’ve never seen a model act like this.
AI bots that argue exactly like that are all over social media too. It’s common. Dead internet theory is absolutely becoming reality.