OpenAI Explains How It Assesses Mental Health Concerns of ChatGPT Users, Sparks Backlash

OpenAI said it has built taxonomies that explain properties of sensitive conversations and undesired model behaviour.

Written by Akash Dutta, Edited by Ketan Pratap | Updated: 28 October 2025 15:09 IST
Highlights
  • Around 0.15 percent of weekly active users show signs of emotional reliance on ChatGPT
  • OpenAI said more than 170 clinicians supported its research
  • Many users claimed that the methodologies used are ambiguous

ChatGPT users have also claimed that OpenAI is going back on its stated position of not being the moral police


OpenAI on Monday shared details about its safety evaluation mechanism for detecting instances of mental health concerns, suicidal tendencies, and emotional reliance on ChatGPT. The company highlighted that it has developed detailed guides, called taxonomies, that outline the properties of sensitive conversations and undesired model behaviour. The assessment system is said to have been developed alongside clinicians and mental health experts. However, several users have voiced concerns about OpenAI's methodologies and about what they see as attempts to morally police an individual's connection with the artificial intelligence (AI) chatbot.

OpenAI Details Its Safety Evaluation Process for Mental Health Concerns

In a post, the San Francisco-based AI giant highlighted that it has taught its large language models (LLMs) to better recognise distress, de-escalate conversations, and guide people towards professional care. Additionally, ChatGPT now has access to an expanded list of crisis hotlines and can re-route sensitive conversations to safer models.


These changes are powered by the new taxonomies created and refined by OpenAI. While the guidelines tell the models how to behave when a mental health crisis is detected, the detection itself is tricky to measure. The company said it does not rely on measurements of real-world ChatGPT usage alone; it also runs structured tests before deploying safety measures.

The AI giant said symptoms of psychosis and mania are relatively common, but acknowledged that for conditions such as depression, assessing the most acute presentations can be challenging. Even more challenging is detecting when a user might be experiencing suicidal thoughts or has developed an emotional dependency on the AI. Despite this, the company is confident in its methodologies, which it says have been validated by clinicians.


Based on its analysis, OpenAI claimed that around 0.07 percent of its weekly active users show possible signs of psychosis or mania. For potential suicidal planning or intent, the figure is claimed to be 0.15 percent, and the company quoted the same figure for emotional reliance on AI.

OpenAI also said that a broad pool of nearly 300 physicians and psychologists, who have practised in 60 countries, was consulted to develop these assessment systems. Of this pool, more than 170 clinicians are said to have directly supported the research.


Several users online have criticised OpenAI's methodology, calling the assessment method inadequate for accurately identifying mental health crises. Others have pointed out that OpenAI regulating an individual's relationship with AI is a form of “moral policing,” one that breaks the company's own stated principle of “treating adult users like adults.”

X (formerly known as Twitter) user @masenmakes said, “AI-driven ‘psychosis’ and AI reliance are emotionally charged and unfortunately politicised topics that deserve public scrutiny, not hand-selected private cohorts!”


Another user, @voidfreud, questioned the fact that only 170 out of the nearly 300 clinicians agreed with the methodologies, saying, “The experts disagreed 23-29% of the time on what responses were ‘undesirable’. That means for roughly 1 in 4 cases, clinicians couldn't even agree whether a response was harmful or helpful. So who decided? Not the experts. The legal team defines ‘policy compliance.’”

Yet another user, @justforglimpse, called it moral policing by OpenAI, saying, “You say ‘we are not the moral police,’ yet you've built an invisible moral court deciding what's a ‘healthy’ interaction and what's too risky, quietly shuffling users into pre-filtered safety cages.”

