OpenAI Explains How It Assesses Mental Health Concerns of ChatGPT Users, Sparks Backlash

OpenAI said it has built taxonomies that outline the properties of sensitive conversations and undesired model behaviour.

Written by Akash Dutta, Edited by Ketan Pratap | Updated: 28 October 2025 15:09 IST
Highlights
  • Around 0.15% of weekly active users show signs of emotional reliance on ChatGPT
  • OpenAI said more than 170 clinicians supported its research
  • Many users claimed that the methodologies used are ambiguous

ChatGPT users also claimed that OpenAI is going back on its claim of not being the moral police

Photo Credit: Unsplash/Levart_Photographer

OpenAI on Monday shared details about its safety evaluation mechanism for detecting instances of mental health concerns, suicidal tendencies, and emotional reliance on ChatGPT. The company highlighted that it has developed detailed guides, called taxonomies, that outline the properties of sensitive conversations and undesired model behaviour. The assessment system is said to have been developed alongside clinicians and mental health experts. However, several users have voiced concerns about OpenAI's methodologies and what they see as an attempt to morally police an individual's relationship with the artificial intelligence (AI) chatbot.

OpenAI Details Its Safety Evaluation Process for Mental Health Concerns

In a post, the San Francisco-based AI giant highlighted that it has taught its large language models (LLMs) to better recognise distress, de-escalate conversations, and guide people towards professional care. Additionally, ChatGPT now has access to an expanded list of crisis hotlines and can re-route sensitive conversations originating from other models to safer models.

These changes are powered by the new taxonomies that OpenAI created and refined. While the guidelines tell the models how to behave once a mental health crisis is detected, the detection itself is difficult to measure. The company said it does not rely on measurements of real-world ChatGPT usage alone, and also runs structured tests before deploying safety measures.

For psychosis and mania, the AI giant said the symptoms are relatively common, but it acknowledged that assessing the most acute presentation of conditions such as depression can be challenging. Even more difficult is detecting when a user might be experiencing suicidal thoughts or has developed an emotional dependency on the AI. Despite this, the company is confident in its methodologies, which it says have been validated by clinicians.

Based on its analysis, OpenAI claimed that around 0.07 percent of its weekly active users show possible signs of psychosis or mania. For potential suicidal planning or intent, the figure is claimed to be 0.15 percent, and the same figure was quoted for emotional reliance on AI.

OpenAI added that a broad pool of nearly 300 physicians and psychologists, who have practised in 60 countries, was consulted to develop these assessment systems. Of these, more than 170 clinicians are said to have supported the research on one or more of its criteria.

Several users online have criticised OpenAI's methodology, calling the assessment system inadequate for accurately identifying mental health crises. Others have pointed out that OpenAI regulating an individual's personal relationship with AI is a form of “moral policing,” and that it breaks the company's stated principle of “treating adult users like adults.”

X (formerly known as Twitter) user @masenmakes said, “AI-driven ‘psychosis’ and AI reliance are emotionally charged and unfortunately politicised topics that deserve public scrutiny, not hand-selected private cohorts!”

Another user, @voidfreud, questioned why only 170 of the nearly 300 clinicians agreed with the methodologies, and said, “The experts disagreed 23-29% of the time on what responses were ‘undesirable’. That means for roughly 1 in 4 cases, clinicians couldn't even agree whether a response was harmful or helpful. So who decided? Not the experts. The legal team defines ‘policy compliance’.”

Yet another user, @justforglimpse, called it moral policing by OpenAI, and said, “You say ‘we are not the moral police,’ yet you've built an invisible moral court deciding what's a ‘healthy’ interaction and what's too risky, quietly shuffling users into pre-filtered safety cages.”
