OpenAI said these ChatGPT accounts were used by individuals apparently linked to Chinese government entities.
In February, OpenAI banned a Chinese account seeking to build a social media listening tool
OpenAI announced on Tuesday that it has banned several potentially China-linked ChatGPT accounts that were attempting to use the chatbot to develop mass surveillance tools. The San Francisco-based artificial intelligence (AI) giant said in a published report that some accounts were also seeking suggestions to build social media listening tools and profiling systems targeting specific groups of people. OpenAI said that all accounts found to be involved in such practices have been banned. The report also mentioned accounts from Russia that were trying to use AI to build phishing tools.
According to OpenAI's Disrupting Malicious Uses of AI: October 2025 report, a cluster of accounts, potentially linked to the Chinese government, was using ChatGPT to seek information and develop tools for authoritarian control. Calling it “a rare snapshot into the broader world of authoritarian abuses of AI,” the AI company highlighted several instances of these accounts trying to develop specialised tools for mass surveillance, profiling, and online monitoring. Notably, these incidents did not take place all at once but were spread across 2025.
One user asked ChatGPT to help draft project plans and promotional material for a “social media listening tool” allegedly intended for a government client, the report stated. The proposed system, referred to as a “social media probe,” was described as capable of scanning platforms such as X (formerly known as Twitter), Facebook, Instagram, Reddit, TikTok, and YouTube for extremist or politically sensitive content. OpenAI said it found no evidence that the tool was ever developed or operated.
Another banned account was linked to a user who sought assistance in drafting a proposal for what was described as a High-Risk Uyghur-Related Inflow Warning Model. The model was described as a system that would analyse transport bookings and cross-reference them with police records to track individuals labelled as “high-risk”. As with the previous case, OpenAI clarified that its models were not used to build or run such a tool and that the tool's existence could not be independently verified.
Other accounts appeared to use ChatGPT for profiling and online research. In one instance, a user asked the AI model to identify funding sources for an X account critical of the Chinese government. Another user sought information on the organisers of a petition in Mongolia. In both cases, ChatGPT returned only publicly available information.
Apart from this, OpenAI highlighted that some accounts also used ChatGPT as an open-source research tool, similar to a search engine. These accounts asked the chatbot to identify and summarise breaking news that would be relevant to China. They also sought information on sensitive topics such as the 1989 Tiananmen Square massacre and the birthday of the Dalai Lama.