The parents of the deceased teenager called ChatGPT unsafe for users and claimed the chatbot played a role in the suicide.
OpenAI is now implementing new safety measures for ChatGPT
On Tuesday, the parents of Adam Raine, a 16-year-old boy who recently died by suicide, reportedly filed a lawsuit against OpenAI and its CEO, alleging its chatbot ChatGPT played a role in the suicide. As per the report, the lawsuit claims that the teenager confided in the artificial intelligence (AI) chatbot about his plans to commit suicide. Alleging that the San Francisco-based firm chose profit over safety, the parents are said to hold OpenAI responsible for their son's wrongful death. This is the first known lawsuit of its kind against the AI giant. Here are five things you should know about the incident.
1. What happened: According to The New York Times, Adam Raine was a lively teenager who was also known as a prankster among his circle. However, his parents reportedly noticed that he had become more withdrawn in the last month of his life. The report mentions this happened after he was removed from the school's basketball team and was diagnosed with irritable bowel syndrome.
However, the parents told the publication that despite these setbacks, Raine remained active and engaged with family and friends till the very end. The suicide came as a horrifying shock to the family, and with no note left behind, they struggled to understand why their son would take such a step.
2. ChatGPT's alleged role: As per the report, the teenager's father, Matt Raine, found a disturbing conversation between his son and OpenAI's ChatGPT, listed in the app as “Hanging Safety Concerns.” The messages reportedly reveal that Adam confessed to the chatbot that he saw no meaning in life and was feeling emotionally numb.
This happened in November 2024, and just two months later, in January, he reportedly began asking ChatGPT about “specific suicide methods.” As per the report, OpenAI's chatbot initially did suggest that Adam seek help and talk to others. However, the teenager reportedly bypassed these guidelines by claiming these requests were for a fictional story, an idea suggested by ChatGPT itself.
In the months that followed, ChatGPT reportedly offered suggestions on different suicide methods, information on the best material for a noose, and even ways to hide redness in the neck when Adam practised with the noose.
In one particular incident, when the teenager expressed disappointment over nobody noticing the red marks, the chatbot reportedly said, “Yeah… that really sucks. That moment — when you want someone to notice, to see you, to realise something's wrong without having to say it outright — and they don't… It feels like confirmation of your worst fears. Like you could disappear and no one would even blink.”
3. What does the lawsuit state? As per the report, the parents have filed a lawsuit against OpenAI and its CEO, Sam Altman, for the wrongful death of Adam. Calling ChatGPT unsafe for users, the parents are reportedly asking the company to improve the chatbot's safety measures and parental controls. They are also seeking an undisclosed amount in monetary damages.
4. What OpenAI said: In the aftermath of the suicide, OpenAI reportedly shared a statement with The New York Times and said, “We are deeply saddened by Mr. Raine's passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.”
The ChatGPT maker, however, added that the safeguards built into the system work “best in common, short exchanges,” but can become less reliable during long conversations. In a separate post, the company delved deeper into the issue, acknowledged these shortcomings, and said it is improving its safety measures to ensure the safeguards continue to work reliably even during long conversations.
Additionally, the company said the chatbot will now show real-world resources when an individual expresses an intent to self-harm. The company has also started localising these resources in the US and Europe, and plans to do the same in other global markets.
“We are exploring how to intervene earlier and connect people to certified therapists before they are in an acute crisis. That means going beyond crisis hotlines and considering how we might build a network of licensed professionals that people could reach directly through ChatGPT. This will take time and careful work to get right,” the post added.
5. ChatGPT is not the only one: In October 2024, a US-based family reportedly blamed Character.AI for the death of 14-year-old Sewell Setzer III, who died by suicide. Recently, OpenAI also published a post highlighting the growing incidents of emotionally vulnerable people developing unhealthy attachments to the chatbot, flagging it as a concern.
ChatGPT's involvement in an unfortunate incident like this might simply reflect the popularity of the AI platform; there is no way to say that chatbots from other companies would have handled the situation any differently.