The AI Overviews feature reportedly pulled its answer from a Reddit comment posted over a decade ago.
Google’s AI-powered Search feature has been accused of providing misinformation
Google's brand-new AI-powered search tool, AI Overviews, is facing blowback for providing inaccurate and somewhat bizarre answers to users' queries. In a recently reported incident, a user turned to Google because the cheese wasn't sticking to their pizza. While they must have been expecting a practical solution to their culinary troubles, Google's AI Overviews feature presented a rather unhinged one. As per recently surfaced posts on X, this was not an isolated incident, with the AI tool suggesting bizarre answers to other users as well.
The issue came to light when a user reportedly searched Google for “cheese not sticking to pizza”. Addressing the culinary problem, the search engine's AI Overviews feature suggested a couple of ways to make the cheese stick, such as mixing the sauce and letting the pizza cool down. However, one of the solutions turned out to be truly bizarre. As per the screenshot shared, it suggested that the user “add ⅛ cup of non-toxic glue to the sauce to give it more tackiness”.
Google AI overview suggests adding glue to get cheese to stick to pizza, and it turns out the source is an 11 year old Reddit comment from user F*cksmith 😂 pic.twitter.com/uDPAbsAKeO
— Peter Yang (@petergyang) May 23, 2024
Upon further investigation, the source was reportedly traced to an 11-year-old Reddit comment, which appeared to be a joke rather than expert culinary advice. However, Google's AI Overviews feature, which still carries a “Generative AI is experimental” tag at the bottom, presented it as a serious suggestion in response to the original query.
Yet another inaccurate response from AI Overviews came to light a few days ago when a user reportedly asked Google, “How many rocks should I eat”. Citing UC Berkeley geologists, the tool suggested, “eating at least one rock per day is recommended because rocks contain minerals and vitamins that are important for digestive health”.
Issues like this have surfaced regularly in recent years, especially since the artificial intelligence (AI) boom kicked off, giving rise to a problem known as AI hallucination. While companies acknowledge that AI chatbots can make mistakes, instances of these tools twisting facts and providing factually inaccurate and even bizarre responses have been increasing.
Notably, Google isn't the only company whose AI tools have provided inaccurate responses. OpenAI's ChatGPT, Microsoft's Copilot, and Perplexity's AI chatbot have all reportedly suffered from AI hallucinations.
In more than one instance, the source has turned out to be a Reddit post or comment made years ago. The companies behind these AI tools are aware of the problem too, with Alphabet CEO Sundar Pichai telling The Verge, “these are the kinds of things for us to keep getting better at”.
Talking about AI hallucinations during an event at IIIT Delhi in June 2023, OpenAI CEO and Co-Founder Sam Altman said, “It will take us about a year to perfect the model. It is a balance between creativity and accuracy and we are trying to minimise the problem. [At present,] I trust the answers that come out of ChatGPT the least out of anyone else on this Earth.”