
OpenAI Explains How It Assesses Mental Health Concerns of ChatGPT Users, Sparks Backlash

OpenAI said it has built taxonomies that explain properties of sensitive conversations and undesired model behaviour.


Photo Credit: Unsplash/Levart_Photographer

ChatGPT users also claimed that OpenAI is going back on its claims of not being the moral police

Highlights
  • Around 0.15% of active users show emotional attachment to ChatGPT
  • OpenAI said more than 170 clinicians supported its research
  • Many users claimed that the methodologies used are ambiguous

On Monday, OpenAI shared details about its safety evaluation mechanism for detecting instances of mental health concerns, suicidal tendencies, and emotional reliance on ChatGPT. The company highlighted that it has developed detailed guides, called taxonomies, to outline the properties of sensitive conversations and undesired model behaviour. The assessment system is said to have been developed in collaboration with clinicians and mental health experts. However, several users have voiced concerns about OpenAI's methodology and what they see as attempts to morally police an individual's connection with the artificial intelligence (AI) chatbot.

OpenAI Details Its Safety Evaluation Process for Mental Health Concerns

In a post, the San Francisco-based AI giant highlighted that it has taught its large language models (LLMs) to better recognise distress, de-escalate conversations, and guide people towards professional care. Additionally, ChatGPT now has access to an expanded list of crisis hotlines and can re-route sensitive conversations originating from other models to safer models.

These changes are powered by the new taxonomies created and refined by OpenAI. While the guidelines tell the models how to behave when a mental health crisis is detected, the detection itself is difficult to measure. The company said it does not rely on measurements of real-world ChatGPT usage alone, and also runs structured tests before deploying safety measures.

For psychosis and mania, the AI giant said the symptoms are relatively common, but acknowledged that in conditions such as depression, assessing the most acute presentation can be challenging. Even more challenging is detecting when a user might be experiencing suicidal thoughts or has an emotional dependency on the AI. Despite that, the company is confident in its methodologies, which it says are validated by clinicians.

Based on its analysis, OpenAI claimed that around 0.07 percent of its weekly active users show possible signs of psychosis or mania. For potential suicidal planning or intent, the number is claimed to be 0.15 percent, and the same number was quoted for emotional reliance on AI.

OpenAI added that a broad pool of nearly 300 physicians and psychologists, who have practised in 60 countries, was consulted to develop these assessment systems. Of these, more than 170 clinicians were said to have supported the research on one or more of its criteria.

Several users online have criticised OpenAI's methodology, calling the assessment method inadequate for accurately identifying mental health crises. Others have argued that OpenAI regulating an individual's interpersonal relationship with AI is a form of “moral policing” that breaks its stated principle of “treating adult users like adults.”

X (formerly known as Twitter) user @masenmakes said, “AI-driven ‘psychosis' and AI reliance are emotionally charged and unfortunately politicised topics that deserve public scrutiny, not hand-selected private cohorts!”

Another user, @voidfreud, questioned the fact that only around 170 out of the nearly 300 clinicians agreed with the methodologies, and said, “The experts disagreed 23-29% of the time on what responses were 'undesirable'. That means for roughly 1 in 4 cases, clinicians couldn't even agree whether a response was harmful or helpful. So who decided? Not the experts. The legal team defines 'policy compliance.'”

Yet another user, @justforglimpse, called it moral policing by OpenAI, and said, “You say ‘we are not the moral police,' yet you've built an invisible moral court deciding what's a ‘healthy' interaction and what's too risky, quietly shuffling users into pre-filtered safety cages.”



Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food.
