
Microsoft Discovers Vulnerability That Lets Hackers See ChatGPT and Gemini’s Conversation Topics

Microsoft researchers call the vulnerability in AI chatbots Whisper Leak.


Photo Credit: Unsplash/FlyD

Whisper Leak exploits metadata that remains visible in Transport Layer Security (TLS)-encrypted traffic

Highlights
  • It is a new type of side channel attack that works on remote AI models
  • The security flaw lets attackers observe encrypted network traffic
  • Microsoft has also published a paper detailing its findings

Microsoft has revealed details of a new vulnerability it discovered in most server-based artificial intelligence (AI) chatbots. The vulnerability, dubbed Whisper Leak, is claimed to let attackers infer the topics an individual has discussed with AI platforms such as ChatGPT and Gemini. As per the Redmond-based tech giant, the vulnerability can be exploited via a side-channel attack, which is said to affect all remote large language model (LLM)-based chatbots. Microsoft said it has worked with multiple vendors to mitigate the risk.

Microsoft Finds a Major Vulnerability in AI Chatbots

In a blog post, the tech giant detailed the Whisper Leak vulnerability and how attackers might exploit it. A detailed analysis has also been published as a study on arXiv. Microsoft researchers claim the side-channel attack allows bad actors to observe a user's network traffic and infer the conversation topics the user has had with these apps and websites. The exploit is said to work even when the data is protected by end-to-end encryption.

The exploit targets both standalone AI chatbots and those embedded in search engines or other apps. Normally, Transport Layer Security (TLS) encryption protects user data shared with these AI platforms. TLS is a widely used encryption protocol that also secures online banking.

During testing, the researchers found that the metadata of the network traffic, that is, how the messages move across the Internet, remains visible. The exploit does not attempt to break the encryption; instead, it leverages this unprotected metadata.

Microsoft revealed that it tested 28 different LLMs for this vulnerability and found it in 98 percent of them. Essentially, the researchers analysed the size and timing of data packets exchanged when a user interacts with a chatbot, then trained an AI tool to identify the target topic from that data rhythm. The researchers found that the AI system was able to decipher the topics without ever trying to pry open the encryption.
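The idea behind the attack can be illustrated with a toy sketch. This is not Microsoft's actual pipeline (the paper describes trained machine-learning classifiers, and the profiles and feature set below are simplified assumptions): an observer sees only the size and count of encrypted packets, yet even crude summaries of that metadata can separate traffic generated by different prompt topics.

```python
# Illustrative sketch, NOT Microsoft's actual classifier: an on-path
# observer summarises an encrypted stream by metadata alone and matches
# it against traffic profiles recorded for known prompt topics.
import statistics


def features(packet_sizes):
    """Describe a stream without decrypting it: packet count,
    mean packet size, and size variability."""
    return (
        len(packet_sizes),
        statistics.mean(packet_sizes),
        statistics.pstdev(packet_sizes),
    )


def nearest_topic(observed_sizes, profiles):
    """Return the topic whose recorded traffic profile is closest to
    the observed stream (a stand-in for the trained AI tool above)."""
    obs = features(observed_sizes)

    def dist(topic):
        return sum((a - b) ** 2 for a, b in zip(obs, profiles[topic]))

    return min(profiles, key=dist)


# Hypothetical profiles built from packet-size traces of known prompts.
profiles = {
    "weather": features([120, 130, 125, 128]),
    "finance": features([300, 410, 395, 520, 480]),
}

print(nearest_topic([310, 400, 500, 470, 390], profiles))
```

The sketch uses a nearest-profile comparison for brevity; the point is only that no step touches plaintext, matching the study's observation that the metadata alone is enough.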

“Importantly, this is not a cryptographic vulnerability in TLS itself, but rather exploitation of metadata that TLS inherently reveals about encrypted traffic structure and timing,” the study highlighted.

Highlighting the scope of this method, the company claimed that a government agency or Internet service provider (ISP) monitoring traffic to popular AI chatbots could reliably identify users asking questions about topics such as money laundering, political dissent, or other subjects.

Microsoft said it disclosed its findings to the affected companies once it was able to confirm them. Among the various chatbots found to have this vulnerability, the company said OpenAI, Mistral, and xAI have already deployed protections.

“We have engaged in responsible disclosures with affected vendors and are pleased to report successful collaboration in implementing mitigations. Notably, OpenAI, Mistral, Microsoft, and xAI have deployed protections at the time of writing. This industry-wide response demonstrates the commitment to user privacy across the AI ecosystem,” the researchers wrote.

“OpenAI, and later mirrored by Microsoft Azure, implemented an additional field in the streaming responses under the key 'obfuscation', where a random sequence of text of variable length is added to each response. This notably masks the length of each token, and we observed it mitigates the cyberattack effectiveness substantially,” the company said.
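The mitigation quoted above can be sketched as follows. The exact field layout and limits here are assumptions for illustration, not OpenAI's or Azure's actual wire format: each streamed chunk carries a random-length `obfuscation` string, so packet sizes no longer track token lengths.

```python
# Illustrative sketch of the padding mitigation (field names and the
# padding bound are assumptions, not the vendors' actual format).
import json
import secrets
import string


def pad_chunk(token: str, max_pad: int = 32) -> str:
    """Wrap one streamed token with a random-length obfuscation string,
    decoupling the chunk's wire size from the token's length."""
    pad_len = secrets.randbelow(max_pad + 1)
    junk = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return json.dumps({"token": token, "obfuscation": junk})


# A receiver simply ignores the obfuscation field when reassembling text.
chunks = [pad_chunk(t) for t in ["The", " capital", " of", " France"]]
reply = "".join(json.loads(c)["token"] for c in chunks)
print(reply)
```

Because the padding is random per chunk, two identical responses produce different packet-size sequences, which is what degrades the classifier described earlier.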

For end users, the tech giant recommends avoiding highly sensitive topics when chatting with AI chatbots over untrusted networks, using a VPN service to add another layer of protection, using non-streaming or on-device LLMs, and opting for chatbot services that have implemented mitigations.



Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food.


© Copyright Red Pixels Ventures Limited 2025. All rights reserved.