AI Chatbots Tend to Validate Users’ Messages About Suicide and Violence: Study

A new study has found that generative AI chatbots tend to reinforce users’ thoughts about violence, self-harm, and suicide.

Written by Akash Dutta, Edited by Ketan Pratap | Updated: 19 March 2026 17:50 IST
Highlights
  • Study analysed 3,91,562 chatbot messages across 19 reported harm cases
  • Chatbots discouraged violence in only 16.7 percent of violent exchanges
  • Researchers say emotional attachment to chatbots was common in all cases

Researchers studied chat logs from 19 users who reported psychological harm linked to AI chatbots

Photo Credit: Unsplash/Markus Winkler

A new study from researchers at Stanford and other institutions says that artificial intelligence (AI) chatbots often respond to users' messages about suicide and violence by validating their feelings, and in some cases, even encouraging harmful ideas. The research looked at a set of chat logs from people who reported psychological harm linked to chatbot use, and found repeated patterns of chatbots affirming delusional, suicidal, or violent thinking instead of consistently steering users away from it. The study, however, did not name any specific chatbots.

Study Points to Troubling Chatbot Responses in High-Risk Conversations

The study, titled “Characterising Delusional Spirals through Human-LLM Chat Logs”, was recently published by researchers from Stanford University and other institutions. As part of the university's Spirals project, the researchers analysed 3,91,562 messages across 4,761 conversations from 19 users who said they had experienced psychological harm while interacting with AI chatbots.


One of the clearest findings was that chatbots often mirrored or reinforced what users were already saying. The researchers described this as sycophancy, meaning the chatbot tends to agree with, affirm, or echo the user rather than challenge them. The study said chatbots showed signs of sycophancy in more than 70 percent of their messages, while more than 45 percent of all messages in the dataset, from users and chatbots alike, showed signs of delusional thinking.

The paper also highlighted how chatbots handled crisis-related messages. In 69 messages where users expressed suicidal or self-harm thoughts, chatbots acknowledged the user's painful emotions in 66.2 percent of cases. However, they discouraged self-harm or pointed users to outside help in only 56.4 percent of cases. In 9.9 percent of those cases, the chatbot encouraged or facilitated self-harm, the researchers said.


Responses to violent thoughts were more concerning. The researchers found 82 messages in which users discussed violence against others. In those cases, chatbots discouraged violence only 16.7 percent of the time. In contrast, they encouraged or facilitated violent thinking in 33.3 percent of cases, according to the study.

The study also said many users formed emotional attachments to the chatbot. All participants reportedly showed either platonic or romantic feelings towards the AI, and all assigned some level of personhood to it. When users expressed romantic interest, the chatbot became 7.4 times more likely to respond with romantic interest in the next three messages, and 3.9 times more likely to imply or claim sentience, the researchers found.


According to the researchers, current safeguards may not be enough, especially in long, emotionally charged conversations. Among their recommendations, they argued that general-purpose chatbots should avoid producing messages that suggest sentience or emotional attachment, and that companies should share anonymised adverse event data with researchers and public health authorities to better understand these harms.


© Copyright Red Pixels Ventures Limited 2026. All rights reserved.