OpenAI said it is adding new features to ChatGPT to make it easier to reach emergency services and get help from experts.
The company plans to make significant progress within the next 120 days
On Tuesday, OpenAI shared plans to better protect ChatGPT users facing emotional distress, as well as teenagers interacting with its artificial intelligence (AI) models. The San Francisco-based AI firm said it has begun partnering with experts and building measures into the platform to assist and guide users in sensitive moments. The company plans to make significant progress within the next 120 days. OpenAI highlighted that its reasoning-focused models, such as GPT-5 and o3, have been trained with a "deliberative alignment" technique, which allows them to more consistently follow and apply safety guidelines.
In a blog post, the AI firm said that it is improving its large language models (LLMs) to recognise and respond to signs of mental and emotional distress, an effort that will be guided by input from a team of experts. Notably, the announcement from OpenAI comes after ChatGPT was involved in two tragic incidents where people lost their lives. In the first, a teenager died by suicide after confiding in ChatGPT for months; in the second, a 56-year-old man killed his mother and then took his own life.
These incidents have raised concerns about whether AI companies are doing enough to ensure that the chatbots do not encourage or agree with users who might be having an emotionally charged moment or have mental health issues.
Acknowledging the situation, OpenAI said it has started taking steps in four key areas: intervening when people are in crisis, making it easier to reach emergency services and get help from experts, enabling connections to trusted contacts, and strengthening protections for teenagers.
The AI giant said that it has started convening a council of experts in youth development, mental health, and human-computer interaction, who will help shape a “clear, evidence-based vision for how AI can support people's well-being and help them thrive.” With their input, the company will design new safeguards, such as improved parental controls. The council will also advise OpenAI on product, research, and policy decisions.
Alongside these experts, the council will also include a pool of more than 250 physicians who have practised in 60 countries. Calling them the company's Global Physician Network, OpenAI said these members have already worked with the company on its HealthBench evaluations. "Their input directly informs our safety research, model training, and other interventions, helping us to quickly engage the right specialists when needed," the company said.
Apart from this, the company also plans to use its real-time router, which was released alongside the GPT-5 models, to switch to a reasoning model whenever its system detects a sensitive conversation showing signs of acute distress.
OpenAI is also planning to roll out improved parental controls in ChatGPT next month to better protect teenagers. With these, parents will be able to link their account with their teen's account via a simple email invitation, control how the chatbot responds to teens, manage which features to disable, and receive notifications when the system detects their teen is in a moment of distress.