Wikipedia’s new content policy prohibits the use of large language models (LLMs) to write or rewrite articles.
Wikipedia says stylistic or linguistic resemblance to AI text alone is not enough to sanction editors
Wikipedia recently updated its content policy to ban artificial intelligence (AI)-generated text in articles. The new guidelines explicitly prohibit the use of large language models (LLMs) to write an article or rewrite a page on the website. While taking a strong stance against AI, the platform has carved out two exceptions for editors, allowing them to use such tools to make copyedits to pages and to translate pages from other languages into English. Even so, it has warned contributors to exercise caution when using AI chatbots.
On a new project page, Wikipedia shared the updated content policy, stating, “The use of LLMs to generate or rewrite article content is prohibited.” The online encyclopedia explained that the decision was made because using AI-generated text from chatbots, such as ChatGPT, Gemini, Claude, DeepSeek, and others, “violates several of Wikipedia's core content policies.”
The main problem the non-profit platform is trying to address is the verifiability and neutrality of text: AI-generated content can subtly change the meaning of a passage, leaving it unsupported by the cited sources. AI's well-known hallucination problem can also introduce accuracy issues, a serious concern given Wikipedia's heavy focus on article quality.
That said, the policy carves out two exceptions for editors. First, editors may use LLMs and chatbots to suggest basic copyedits to their own writing. These suggestions can be incorporated into a page after human review, as long as the AI does not add content of its own. Wikipedia still asks editors to exercise caution whenever using such tools.
Second, Wikipedia is letting editors use AI chatbots to translate articles from other languages for the English Wikipedia. However, editors have been asked to follow the guidelines for LLM-assisted translation. Essentially, they must tag such text as “automatically translated” content needing review, and it is only approved after a human review.
Wikipedia's move comes at a time when social media spaces are increasingly filled with generic AI-generated posts, and many have expressed concerns about AI replacing human-written content and undermining its authenticity.