
China Proposes New AI Rules to Safeguard Minors, Prevent Harmful Output

The Cyberspace Administration of China has proposed rules targeting AI responses that could encourage self-harm and suicide.


Photo Credit: Unsplash/Markus Winkler

CAC has also told AI firms to use datasets that promote socialist values and Chinese culture

Highlights
  • The draft rules state that AI chatbots cannot promote gambling
  • AI companies have been asked to add special settings for minors
  • Damaging national honour has also been prohibited in the rules

The Cyberspace Administration of China (CAC) drafted a new set of rules last week to regulate artificial intelligence (AI) companies and systems. The rules primarily outline the activities that chatbots and AI tools cannot engage in, as well as the practices these systems must implement to comply with the country's laws. A key focus is safeguarding minors through child-safety tools such as time limits and personalised settings. The rules also instruct companies to ensure their chatbots do not generate harmful output.

China to Regulate AI With New Rules

According to the draft published by CAC, the rules aim to standardise AI services in accordance with China's civil code, its cybersecurity and data security laws, and other existing regulations. The draft is titled “Interim Measures for the Administration of Anthropomorphic Interactive Services Using Artificial Intelligence,” and the government body is currently inviting feedback from stakeholders. Feedback can be submitted until January 25, 2026.

CAC's draft lists multiple activities that an AI chatbot must not engage in. These include generating content that endangers national security, harms national honour and interests, conducts religious activities, or spreads rumours that disrupt the economic and social order. Beyond these nationally focused provisions, the rules also prohibit obscene, gambling-related, violent, and crime-inciting responses.

AI-generated responses relating to suicide and self-harm would also be prohibited if the rules come into effect. To protect minors, the rules call for a “minor mode” in chatbots and AI services with personalised safety settings, such as a child-friendly version, regular real-time reminders, and usage time limits. Parental controls are also mandated in scenarios where a chatbot provides emotional companionship services to minors.

CAC's draft rules also instruct AI companies to develop mechanisms that identify and assess users' emotions and their dependence on these products and services. If a user is found to be in extreme emotional distress or addicted to the product, companies are required to intervene. The regulator stresses that these mechanisms must not violate users' personal privacy.



Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food.


© Copyright Red Pixels Ventures Limited 2025. All rights reserved.