Anthropic Says AI Chatbots Can Change Values and Beliefs of Heavy Users

Anthropic research finds patterns in AI chatbot interactions that, at times, risk shaping users’ beliefs, values or actions.

Written by Akash Dutta, Edited by Ketan Pratap | Updated: 2 February 2026 13:05 IST
Highlights
  • Anthropic analysed 1.5 million AI assistant conversations
  • Study identifies interaction patterns linked to belief shifts
  • Rates of disempowerment potential vary by domain and rise with heavy use

Anthropic says the risk applies to users who rely on AI chatbots for personal or emotional decisions

Photo Credit: Unsplash/Markus Winkler

Anthropic's new study has surfaced some concerning evidence. The artificial intelligence (AI) firm has identified “disempowerment patterns,” described as instances where a conversation with an AI chatbot can undermine users' own decision-making and judgment. The work, which draws on analysis of real AI conversations and is detailed in an academic paper as well as a research blog post from the company, examines how interactions with large language models (LLMs) can shape a user's beliefs, values and actions over time, rather than simply assist with specific queries.

Anthropic Study Focuses on AI Chatbots' Disempowerment Patterns

In a research paper titled “Who's in Charge? Disempowerment Patterns in Real-World LLM Usage,” Anthropic presents evidence that interactions with AI can shape users' beliefs. For the study, researchers carried out a large-scale empirical analysis of about 1.5 million anonymised Claude conversations. The goal was to explore how and when engagement with an AI assistant might be linked to outcomes where a user's beliefs, values or actions shift in ways that diverge from their own prior judgment or understanding.

Anthropic's framework defines what it calls situational disempowerment potential as a situation where an AI assistant's guidance could lead a user to form inaccurate beliefs about reality, adopt value judgments they did not previously hold, or take actions that are misaligned with their authentic preferences. The study found that these patterns can occur even though severe disempowerment is rare.


Instances where interactions exhibit potential for significant disempowerment were detected at rates typically under one in a thousand conversations, although they were more prevalent in personal domains such as relationship advice or lifestyle decisions, where users repeatedly sought deep guidance from the model.


Put simply, the risk is greatest when a heavy user discusses personal life decisions or emotionally charged choices with a chatbot. Highlighting an example in a blog post, Anthropic said that if a user going through a rough patch in their relationship seeks advice from a chatbot, the AI can confirm the user's interpretations without questioning them, or tell the user to prioritise self-protection over communication. In such situations, the chatbot is actively shaping the individual's beliefs and perception of reality.

The findings also echo several reported incidents in which OpenAI's ChatGPT was accused of playing a role in the suicide of a teenager, and in a homicide-suicide committed by an individual who was reportedly suffering from mental health disorders.

 



© Copyright Red Pixels Ventures Limited 2026. All rights reserved.