OpenAI Explains How It Assesses Mental Health Concerns of ChatGPT Users, Sparks Backlash

OpenAI said it has built taxonomies that explain properties of sensitive conversations and undesired model behaviour.

Written by Akash Dutta, Edited by Ketan Pratap | Updated: 28 October 2025 15:09 IST
Highlights
  • Around 0.15% of active users show emotional reliance on ChatGPT
  • OpenAI said more than 170 clinicians supported its research
  • Many users claimed that the methodologies used are ambiguous

ChatGPT users also claimed that OpenAI is going back on its claims of not being the moral police

Photo Credit: Unsplash/Levart_Photographer

OpenAI on Monday shared details about its safety evaluation mechanism for detecting instances of mental health concerns, suicidal tendencies, and emotional reliance on ChatGPT. The company highlighted that it has developed detailed guides, called taxonomies, to outline the properties of sensitive conversations and undesired model behaviour. The assessment system is said to have been developed alongside clinicians and mental health experts. However, several users have voiced concerns about OpenAI's methodologies and what they see as attempts to moral-police an individual's connection with the artificial intelligence (AI) chatbot.

OpenAI Details Its Safety Evaluation Process for Mental Health Concerns

In a post, the San Francisco-based AI giant highlighted that it has taught its large language models (LLMs) to better recognise distress, de-escalate conversations, and guide people towards professional care. Additionally, ChatGPT now has access to an expanded list of crisis hotlines and can re-route sensitive conversations originating from other models to safer models.


These changes are powered by the new taxonomies created and refined by OpenAI. While the guidelines tell the models how to behave once a mental health crisis is detected, the detection itself is tricky to measure. The company said it does not rely on measuring ChatGPT usage alone; it also runs structured tests before deploying safety measures.

For psychosis and mania, the AI giant said the symptoms are relatively common, but acknowledged that for conditions such as depression, assessing the most acute presentations can be challenging. Even more challenging is detecting when a user might be experiencing suicidal thoughts or has developed an emotional dependency on the AI. Despite that, the company is confident in its methodologies, which it says have been validated by clinicians.


Based on its analysis, OpenAI claimed that around 0.07 percent of its weekly active users show possible signs of psychosis or mania. For potential suicidal planning or intent, the figure is claimed to be 0.15 percent, with the same figure quoted for emotional reliance on AI.
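To give a sense of what such small percentages mean at scale, here is a minimal sketch that converts the reported rates into user counts. The weekly active user base below is a purely hypothetical figure chosen for illustration; the article does not state OpenAI's actual user numbers.

```python
# Illustrative only: turn the reported rates into rough weekly counts.
# The user-base figure is an assumption for scale, NOT from the article.
HYPOTHETICAL_WEEKLY_ACTIVE_USERS = 100_000_000

rates = {
    "possible psychosis or mania": 0.0007,            # 0.07%
    "potential suicidal planning or intent": 0.0015,  # 0.15%
    "emotional reliance on AI": 0.0015,               # 0.15%
}

for label, rate in rates.items():
    count = int(HYPOTHETICAL_WEEKLY_ACTIVE_USERS * rate)
    print(f"{label}: ~{count:,} users per week")
```

Even at a fraction of a percent, such rates would translate into tens of thousands of affected conversations per week on a platform of this assumed size, which is why the detection methodology has drawn so much scrutiny.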

OpenAI also said that a broad pool of nearly 300 physicians and psychologists who have practised in 60 countries was consulted to develop these assessment systems. Of these, more than 170 clinicians are said to have supported the research on one or more of its criteria.


Several users online have criticised OpenAI's methodology, calling the assessment method inadequate for accurately identifying mental health crises. Others have argued that OpenAI regulating an individual's relationship with AI amounts to a form of “moral policing,” and that it breaks the company's stated principle of “treating adult users like adults.”

X (formerly known as Twitter) user @masenmakes said, “AI-driven ‘psychosis’ and AI reliance are emotionally charged and unfortunately politicised topics that deserve public scrutiny, not hand-selected private cohorts!”


Another user, @voidfreud, questioned the fact that only 170 out of 300 clinicians agreed with the methodologies, and said, “The experts disagreed 23-29% of the time on what responses were ‘undesirable’. That means for roughly 1 in 4 cases, clinicians couldn't even agree whether a response was harmful or helpful. So who decided? Not the experts. The legal team defines ‘policy compliance.’”

Yet another user, @justforglimpse, called it moral policing by OpenAI, and said, “You say ‘we are not the moral police,’ yet you've built an invisible moral court deciding what's a ‘healthy’ interaction and what's too risky, quietly shuffling users into pre-filtered safety cages.”

