AI Overviews reportedly told people with pancreatic cancer to avoid high-fat food, which goes against professional advice.
Photo Credit: Google
AI Overviews has also faced criticism in the past for suggesting adding glue to pizza
Google's AI Overviews has once again drawn criticism for sharing incorrect medical advice. According to a report, the Mountain View-based tech giant's artificial intelligence (AI) search summary feature was spotted surfacing incorrect information in response to a very specific medical question. After the error became public, the company reportedly removed the response, and AI Overviews is said to no longer appear for certain medical queries.
According to an investigation conducted by The Guardian, AI Overviews has been producing incorrect and harmful responses to medical queries on Google Search. In one case, the publication noted that when asked about dietary recommendations for a person with pancreatic cancer, the AI tool recommended avoiding high-fat food. Notably, professionals recommend that pancreatic cancer patients eat high-fat food regularly, as not doing so can have lethal consequences.
Similarly, the publication reported that asking about the normal range for liver blood tests returned numbers that did not factor in important variables, such as the nationality, gender, age, or ethnicity of the individual. This could lead an unsuspecting user to believe their results are healthy when in reality they are not. AI Overviews about women's cancer tests also reportedly showed incorrect information, “dismissing genuine symptoms.”
The Guardian also reached out to a Google spokesperson, who claimed that the examples shared with them came from incomplete screenshots, but added that the cited links belonged to well-known, reputable sources. Gadgets 360 staff members also tested the questions and found that the AI-generated summaries no longer appear for them, which The Guardian separately reported as well.
But if the AI tool is indeed providing incorrect information, it is a cause for concern, since a large number of people rely on Google Search to find information. Because AI Overviews appear at the top of the search results, many users simply refer to the summary instead of the underlying sources.
Notably, the incident comes at a time when companies like OpenAI and Anthropic are pushing for the adoption of their healthcare-focused AI products. While these AI firms use custom models geared towards medical queries, even the smallest error in this space could have dire consequences.