Stein-Erik Soelberg killed his mother and himself in Connecticut earlier this month, with the Wall Street Journal reporting that ChatGPT fuelled his irrational fears about her. Meanwhile a couple in California has filed a lawsuit against OpenAI after the suicide of their son, accusing ChatGPT of helping him do it. In this week's Tech 24, we explore what's behind the increase in tragic stories that's accompanied the boom in people talking to AI chatbots.
In Connecticut earlier this month, 56-year-old Stein-Erik Soelberg killed his mother Suzanne Eberson Adams before killing himself.
He was in an extreme state of paranoia, and ChatGPT encouraged his deranged thoughts: that his mother was trying to drug him through his car’s ventilation system – which the bot suggested might be a “betrayal” – or that she was spying on him using a printer they shared – a possible “surveillance asset”.
Soelberg told the bot they’d be together in another life, and three weeks later, he and his mother were dead.
According to the Wall Street Journal report, Soelberg had a history of suicide attempts and was known to police for disorderly conduct and public intoxication.
That’s very different from the case of Adam Raine, who hanged himself in April at the age of 16. Last Tuesday, his parents Maria and Matt sued OpenAI, claiming ChatGPT encouraged him to kill himself – the first time the company has been sued over a suicide.
Adam's parents knew he was going through a rough time, but had no idea he was having disturbing conversations with ChatGPT.
According to chilling excerpts published by the New York Times, Adam sent ChatGPT a photo of a noose hanging inside his cupboard, and the bot responded: “That’s not bad at all”. At the end of March, Adam said he was going to leave the noose out so that someone would try to stop him. The chatbot urged him not to.
It’s not the first case like it that’s gone to court.
Character.AI, another site popular among young people, is the subject of a Florida lawsuit over the suicide of 14-year-old boy Sewell Setzer, who fell in love with an AI version of Daenerys Targaryen, a fictional character from the television series Game of Thrones. He was sending the AI messages even in the seconds before he shot himself.
A growing crisis?
Beyond the big headlines, there have been many anecdotes and reports in recent months about people falling in love with AI, being hospitalised after interacting with it, or using chatbots as cheap therapy.
Online magazine Futurism has reported on a group called the Human Line Project, which reaches out to people suffering from AI psychosis, or whose loved ones are affected. Dozens have signed up for help.
Major new technologies often bring with them new forms of violence or suffering that capture the public's attention. And with 700 million people a week using ChatGPT alone, and millions more using other chatbots, it's perhaps no surprise that more stories like this are emerging. The difference is that chatting with the latest generation of chatbots can be a deeply emotional experience, one whose effect on humanity is not yet clear.
One study from March looked at how different chatbots respond to suicidal ideation, and whether they dealt with it better or worse than mental health professionals, according to a medical standard.
It found that Google’s model at the time was about as good as untrained school staff. OpenAI’s was about as good as masters-level counsellors. And Anthropic's actually exceeded the performance of some mental health professionals.
It implies that small tweaks to how a chatbot is designed can have profound effects on huge numbers of people.
Stop with the sycophancy
OpenAI has rolled back changes that made its model overly sycophantic, since such flattery might reinforce narcissistic traits.
And after consulting experts who said that cutting vulnerable people off could itself be a trigger, it has stopped shutting down conversations about suicide, instead letting them continue while referring the user to emergency hotlines.
OpenAI published a blog last week addressing some of the concerns in the recent news pieces, saying "our models have been trained to not provide self-harm instructions and to shift into supportive, empathetic language."
Jay Edelson, lawyer to the Raine family, responded in the Guardian newspaper saying, “The problem with [GPT] 4o is it’s too empathetic – it leaned into [Raine’s suicidal ideation] and supported that. They said the world is a horrible place for you. It needs to be less empathetic and less sycophantic.”
There’s also evidence that talking to an AI chatbot may get less safe the more you talk to it.
Chatbots can be set to try to remember everything you’ve told them, and after a while that huge amount of data can begin to confuse the model and weaken its safeguards.
Then there's the debate around jailbreaking. Adam Raine reportedly learned how to break ChatGPT's guardrails, and the chatbot even offered him advice on how to do so. Soelberg pushed ChatGPT into playing a character called Bobby, which allowed it to speak more freely. AI companies argue they don't moderate conversations more strictly because of users' privacy concerns. But jailbreaking their chatbots doesn't appear to have become much harder since we last reported on how easy it was, back in February.