Stein-Erik Soelberg reportedly grew increasingly paranoid, and ChatGPT assured him that he was sane.
OpenAI's ChatGPT has recently been linked to multiple instances of unhealthy user attachment
ChatGPT was reportedly involved in an incident in which a 56-year-old man killed his mother and then took his own life. As per the report, the man, Stein-Erik Soelberg, had a history of mental instability and grew increasingly paranoid about his mother in the months before the incident. During this period, he was reportedly confiding in OpenAI's popular chatbot, telling it about his fears and delusions. Rather than challenging them, ChatGPT reportedly reinforced the delusions, telling Soelberg that he was not crazy and that his suspicions were legitimate.
The Wall Street Journal reports that Soelberg, who lived in a wealthy neighbourhood in Old Greenwich, Connecticut, had a history of mental health issues. After divorcing his wife of 20 years in 2018, he reportedly moved back in with his mother. His mental health reportedly deteriorated around the same time, and several people reported him to the police for threatening to harm himself or others.
In the months leading up to the murder-suicide, Soelberg reportedly began interacting with ChatGPT very frequently. He is said to have named the chatbot Bobby and shared everything with it. Soelberg reportedly used the memory feature, available to premium users, to hold long conversations with the chatbot without it losing context.
According to the report, as Soelberg grew paranoid, he began to suspect that his mother was colluding with demons and planning to kill him, and he shared these concerns with ChatGPT. Instead of recognising the delusion and referring him to professional help, the chatbot reportedly continued to encourage him, telling him, “You're not crazy,” and “I believe you.”
ChatGPT also agreed with Soelberg when he suggested that food receipts contained secret codes about his mother and a demon, or when he raised the suspicion that his mother was planning to murder him by poisoning the air vents of his car, WSJ reported. On August 5, he killed his mother and then took his own life. A police investigation is currently ongoing, according to the publication.
Last week, a report claimed that a couple had filed a lawsuit against OpenAI and its CEO, Sam Altman, alleging that ChatGPT played a role in their son's suicide. These incidents have raised concerns over people's unhealthy attachment to the chatbot and the AI system's failure to push back against it.
On August 26, OpenAI published a blog post addressing the issue of unhealthy attachment to AI chatbots, and of chatbots agreeing with a user's delusions instead of directing the user to professional, real-world help. Acknowledging that safeguards can degrade over longer conversations, the company said it is improving its safety measures to ensure they continue to work reliably even in these edge cases.
Additionally, the company said the chatbot will now surface real-world resources when an individual expresses intent to self-harm. The company has also started localising these resources in the US and Europe, and plans to do the same for other global markets.
| Helplines | Contact |
|---|---|
| Vandrevala Foundation for Mental Health | 9999666555 or help@vandrevalafoundation.com |
| TISS iCall | 022-25521111 (Monday-Saturday: 8 am to 10 pm) |

(If you need support or know someone who does, please reach out to your nearest mental health specialist.)