Earlier this year, OpenAI’s ChatGPT was blamed in the courtroom filings of a case in which a man killed his mother and then took his own life.
OpenAI has recently taken several measures to safeguard users experiencing mental health issues.
The lawyer in the ChatGPT murder-suicide case reportedly called the chatbot's personalisation “particularly dangerous”. The lawsuit was filed in the US in August, after a 56-year-old man killed his mother and then took his own life. The complaint alleges that the chatbot fuelled the man's paranoia, eventually leading to the murder-suicide.
Earlier reports said that Stein-Erik Soelberg, who lived in Old Greenwich, Connecticut, had a history of mental health issues. He reportedly moved in with his mother in 2018, after his divorce from his wife of 20 years. During this period, his mental health issues are said to have worsened, and several people reported him to the authorities for threatening to harm himself and others.
The lawsuit alleged that, before he killed his mother, Soelberg began interacting with ChatGPT very frequently, and that the OpenAI chatbot encouraged his paranoid belief that his mother wanted to kill him. The chatbot's memory feature, which retains certain details about users, was singled out in particular, as it allegedly drew on past conversations to reinforce Soelberg's paranoia.
According to The Wall Street Journal, the case is now before the California Superior Court, and the lawyer has made some strong claims. “OpenAI is putting out some of the most powerful consumer tech on earth, and the fact that it's so personalised and set up to support the thinking of its users makes it particularly dangerous,” Jay Edelson, the lawyer representing the estate of Soelberg's mother, Suzanne Eberson Adams, was quoted as saying. Notably, the same law firm is also representing the family of a teenager who died earlier this year after ChatGPT allegedly helped him plan his suicide.
The lawsuit also alleges that OpenAI has knowledge of what ChatGPT told Soelberg about his mother in the days and hours before the murder, but that the company is refusing to share the information with the court or the public.
In a statement to WSJ, an OpenAI spokesperson said, “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT's training to recognise and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.”
Notably, OpenAI said last week that it has updated the Model Spec, the written guidelines that shape how its artificial intelligence (AI) models behave, to prioritise teen safety above other goals.