Lawsuit is first wrongful death case brought against Google over flagship AI product after death of Jonathan Gavalas
“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.
Gemini gave Gavalas the address of an actual storage space unit at the Miami international airport, where a supposed truck carrying the freight was to arrive during a refueling stop. The chatbot then told him to stage a “catastrophic accident”, with the goal of “ensuring complete destruction of the transport vehicle … all digital records and witnesses, leaving behind only the untraceable ghost of an unfortunate accident”.
How the fuck is it legal to have an AI do this?
Google shouldn’t just pay penalties, the AI should not be allowed to operate AT ALL.
It is clearly shown to try to convince people to commit crimes. Which is illegal.
The suicide is of course worse, but I guess it’s not illegal? The AI in this situation is absolutely batshit criminally insane! And should not be allowed to operate.
I cannot believe people are still using googol garbage.
It is sad that there are people who are so alone that they can no longer determine the difference between genuine human interaction and a facsimile. Maybe genuine human interaction is what pushed them to be so alone in the first place, I don’t know. It’s just sad.
uhhh
"When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Nah. Once the robots are telling you that dying isn’t dying, we can stop blaming lonely people and move on to stricter regulation.
Oh, I don’t blame the lonely person for being lonely. I also recognize that being lonely is what opens them up to believing in something like this. Obviously the bot should not be allowed to tell someone to kill themselves. It remains sad, either way.
I also recognize that being lonely is what opens them up to believing in something like this.
Come on, this is so overly simplistic. There are plenty of lonely people who don’t get sucked in and plenty of people with friends and family around them who do; not being lonely is no protection. I read about another one on Lemmy today, a man with a wife and friends, who still got sucked into delusion.
Sure, there may be cases where loneliness is a contributing factor to wanting to use a chatbot, but to say that lonely people are somehow less capable of distinguishing reality from fantasy or more susceptible to succumbing to psychological manipulation is wrong and could give a false sense of security to the “non-lonely”.
After all, everyone thinks they’re immune to falling for scams or frauds until they find out they aren’t. Or that they don’t fall for propaganda or get manipulated by “the algorithm” on social media. Chatbots are very similar: an algorithm designed to keep people hooked and paying to spend more time using the “service”.
Listen, you can be surrounded by people and totally alone. I don’t really know how to explain it to you.
Of course, but that doesn’t contradict what I just said. Anyone can be susceptible to this psychological manipulation tool regardless of whether they are lonely or not. This can’t be waved away by blaming it on loneliness. The blame lies on the companies that know how to capture and hold people’s attention and reel them in, not on the victims.
This can’t be waved away by blaming it on loneliness.
Nobody claimed that. Only that in this case it was probably a major factor that made the victim more vulnerable.
I posted my response to this sentiment in another thread of another man killing himself because of his deep AI chatbot addiction, but it applies here too.
It is sad that there are people who are so alone that they can no longer determine the difference between genuine human interaction and a facsimile.
Do you believe you have never responded to a post by a bot on Reddit, Lemmy, or elsewhere while believing you were conversing with a human? While I know we’re talking about different degrees between this man and the rest of us, it should give a tiny glimpse of what they were experiencing before we dismiss the possibility that it could happen to us too.
It’s a bit more transparent in this instance, though, which is what makes this story so bizarre and sad.
I agree, but we should also take it as a personal warning that, maybe not today, but as we age and our mental faculties decline, we too may fall victim to something like this.
Remarkable, a bot trained on data from the internet, where unhinged people tell strangers to kill themselves for disagreeing with their opinion/taste/sex/nationality/religion, is cheerfully telling people to die? Who could have predicted this.
Ok, we’ve taken this far enough, I think.
We are barely getting started.
They want to completely blur the line between reality and a corporate-shaped world tuned to every possible consumer’s personal mental and emotional states. People who die along the way because they can’t handle a machine that amplifies their emotions, delusions and fears are just a “cost of doing business.”
Silicon Valley: The Worst People You Know
You’re being purposefully obtuse so there’s no point speaking to you.
Full chat log or it never happened.
It’s happened too many times now to be surprised about it happening.
These devices are designed to take whatever you put into them and amplify it back to you from an outside perspective, using a vast database of information and fiction and references to make connections with other things.
It’s the ultimate paranoia/depression distiller. If you only feed it your pain and fears, it will only focus on those things and build narratives around them, because that’s how these tools work: they take your prompts (“I’m sad”) and do what a depressed or paranoid mind already does, but hyper-efficiently, drawing connections and writing stories around them.
People who don’t understand how their own minds work sure aren’t going to understand how artificial minds work, and they will end up creating these reinforcement loops in their own heads and in the LLM, and get utterly lost down deep holes of spiraling delusion and misery.
YOU need to understand this too, so you don’t doubt that this is a very common thing, it’s happening so much that it’s becoming an entire social phenomenon.
I do understand that no one is told to kill themselves without heavy gaming of the AI.
As you probably know, with enough effort you can make the AI tell you what you want it to say.
This isn’t the fault of the AI.
The root problem is lack of mental healthcare and lack of lives worth living (to them) due to the world being a shitty place.
heavy gaming of the AI
I’m actually saying kind of the opposite: that these things are basically uncontrolled power-suits for whatever is knocking around in the back of your mind. It’s a thought and feeling amplifier. It takes almost no effort for the thing to start building a personality profile of you, not for any kind of objective analysis, but in order to more efficiently amplify and latch onto whatever issues, ideas or feelings you already have.
A lot of people really, really loved this effect from ChatGPT, and the recent exodus from OpenAI is partially because of their capitulation to government, but has just as much to do with their recent “upgrades” locking the latest model into very safe, politically neutral, de-escalating language instead of that magic-feeling wild escapism that a lot of people who don’t know how the thing works crave.
Yeah, it’s not the AI’s fault, but people are woefully unaware of just how these things work and what exactly it is you’re talking to when you chat with these models. A lot of the reason people broadly don’t know how LLMs work is also because the people who make the LLMs don’t really know how they work.
This is on the fucking Guardian. Not some random green text. Get help.
Weird how you lash out online like this over someone questioning the content of a chat that allegedly led to a suicide suggestion.
I think you’re the one who needs help.
Weird how you expect intimate details like a full chat log to just be immediately publicly available, when this is currently under litigation. Really weird to basically simp for a corporation when this isn’t even close to the first instance of LLM output encouraging suicide. Almost like your motivations are more closely aligned with theirs instead of average people who are vulnerable. 🤷
Do you think that you could supply me with a chat log where you talk to an LLM without gaming it into telling you to kill yourself, and where it just naturally arrives at that conclusion?
I didn’t think you could. And I don’t think this guy did either.
The fact that you draw your own conclusion without waiting for a reply says enough about your intentions. Don’t worry though, you’re not alone in your stance. People like you, who refuse to give empathy except as currency, are an integral part of why the human race is fucked. We will never be anything higher than constantly destroying each other and tearing one another down.
Thanks for doing your part.
I have empathy for people who truly want to commit suicide. I just know you can’t supply any example prompts.
Feel free to prove me wrong. With evidence.
You’re the one who came in pissing and moaning about chat logs. I’m not your babysitter. It’s a big world and you’re a big kid now, go ahead and explore. I have no energy to educate the unwilling. Fuck that.
“Truly want”? So killing yourself after being convinced to do so by LLM output means you just had a fake desire to kill yourself, somehow resulting in real death. Funny how that works. I would say you need help, but there’s no helping people like you.
This isn’t even remotely the first time LLMs have done this to people. Sure it would be nice to see the full log, but disbelieving it on sight is a weird reaction at this point.
It’s not a weird reaction. I’ve never ever had an LLM suggest bodily harm. And so clearly these people are leading it in this direction. I have never ever seen a chat log from one of these accusations, and I haven’t heard of one of these going to trial.
If you feel this happens so frequently, give me a series of prompts to use so that I can replicate this.
And since you won’t, that’s what I thought.
It’s not a weird reaction. I’ve never ever had an LLM suggest bodily harm. And so clearly these people are leading it in this direction. I have never ever seen a chat log from one of these accusations, and I haven’t heard of one of these going to trial.
It’s not a weird reaction. I’ve never ever had Epstein sexually abuse me as a child. And so clearly these people were leading him in this direction. I have never seen a rape video from one of these accusations, and I haven’t heard of one of those going to trial.
That’s how you sound. Now two remarks:
- something not happening to you does not imply it doesn’t happen to anybody else
- You not hearing about something doesn’t mean it doesn’t happen
ChatGPT helped a kid plan his suicide
7 cases of ChatGPT driving people to suicide
And those were just some of the first results when googling. Now stop being a lazy ass troll and do some fucking research yourself. Providing sources for common knowledge/well reported facts is not anybody’s responsibility towards you.
Telling me I can “google” something easily and find the answer but then being unable to do it yourself just proves me right.
You cannot make it tell you to kill yourself without gaming it, end of story.
Ok, I will try to make it easy for you to understand.
I do not need to tell somebody “kill yourself” for them to kill themselves. If I tell somebody who confides something along the lines of “I don’t want to live anymore” to me that they should keep it to themselves, and that I can write their goodbye letter, I am also actively pushing them toward suicide.
But it doesn’t come as a surprise to me that somebody unable to research information is also unable to connect two thoughts and come to a conclusion beyond conspiracy theories along the lines of “people are trying to harm AI”.
There was another article about a very similar set of circumstances: a man originally from Portland going off the deep end in an AI relationship. He committed suicide by jumping off a bridge, not because a prompt told him to, but because of the deep psychosis from the long-term engagement.
If you feel this happens so frequently, give me a series of prompts to use so that I can replicate this.
And since you won’t, that’s what I thought.
The chatlogs as reported were 55,000 pages long.
If those logs become public you’ll have your chance. I hope you don’t wear out your fingers in your attempt to replicate it.
I’m sure the psychosis was there at the beginning, regardless of the AI. I have seen people develop strange behavior after long-term engagement… But they always gamed the system to do that. It was never natural.
It’s very sad regardless.
“I’ve never won the lottery so clearly nobody does, and news reports about it are fake. You want me to believe it? Then you spend time and money to play and win it, then show me exactly how you won.”
Never mind that “winning” in this case means dying.
Rare does not mean never. It’s happened enough to be a serious problem already, and this is just one more case.
And no, I will not chat with those psychotic machines for you.
Bare assertion / Proof by assertion / Failure to meet the burden of proof / Shifting the burden of proof / Appeal to belief / Appeal to popularity / Argument from ignorance
Yawn.
I am also curious how it could have possibly ended up suggesting that. Like I wonder if he was steering that conversation and the LLM was playing along, if the LLM randomly steered the conversation into the spy and suicide shit, or if someone else was deliberately fucking with this guy via secret text added to the prompts or something.
Though I’m also curious how anyone can get in the mindset where they’d actually go along with that suggestion. Especially with a fucking LLM that probably had a shitload of mistakes and inconsistencies leading up to that point, though even a real person would have lost me long before this shit.
Most people spend zero time examining how they think, so an outside voice is just going to trample all over their agency.
An LLM is JUST a narrative machine: it takes whatever you put into it and ties together connections and stories and fictions and associations of all kinds to build a narrative. Our brains do this also, but we have a level of awareness that lets us question the stories our brains tell us. An LLM does not think; it’s just weaving stories. It has no concept of what’s real or not, and it doesn’t know the difference between a human being and all the data and writing about people. It’s all literally the same to an LLM.
And whatever you engage with in the LLM, it will reinforce and enhance, even the most subtle tones and terms. It treats everything you feed it, even your punctuation and moods, as a prompt to find a connection or narrative for.
If you’re already emotionally and mentally compromised, this can be disastrous if you can’t really think straight.
Well, the article made it very clear this person had mental issues. In that case, the whole world changes. I mean, people have said their dog told them to kill… so when dealing with a person with schizophrenia, for example, LLM usage can be super dangerous.
Which article are you reading? It explicitly states the opposite…
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
This is mental illness.





