The Cyberspace Administration of China has proposed rules addressing AI responses that could encourage self-harm and suicide.
The CAC has also told AI firms to use datasets that promote socialist values and Chinese culture.
The Cyberspace Administration of China (CAC) drafted a new set of rules to regulate artificial intelligence (AI) companies and systems last week. The rules primarily outline the activities that chatbots and AI tools cannot engage in, as well as the practices these systems must implement to comply with the country's laws. A key focus is safeguarding minors through child-safety tools such as usage time limits and personalised safety settings. The rules also instruct companies to ensure their chatbots do not generate harmful output.
As per the document published by the CAC, the draft rules aim to standardise AI services in accordance with China's civil code, its cybersecurity and data security laws, and other existing regulations. Titled “Interim Measures for the Administration of Anthropomorphic Interactive Services Using Artificial Intelligence,” the draft is currently open for feedback from stakeholders, with a submission deadline of January 25, 2026.
The CAC's new rules list multiple activities that an AI chatbot must not engage in. These include generating content that endangers national security or national honour and interests, involves religious activities, or spreads rumours that disrupt the economic and social order. Beyond these nation-focused provisions, the rules also prohibit obscene, gambling-related, violent, and crime-inciting responses.
AI-generated responses relating to suicide and self-harm are also among the items that would be prohibited if the rules come into effect. To protect minors, the rules call for a “minor mode” in chatbots and AI services, with personalised safety settings such as switching to a child-friendly version, regular real-time reminders, and usage time limits. Parental controls are also mentioned for scenarios where a chatbot provides emotional companionship services to minors.
The CAC's draft rules also instruct AI companies to develop mechanisms to identify and assess users' emotions and their dependence on AI products and services. If a user is found to be in extreme emotional distress or addicted to the service, companies are told to intervene. The regulator adds that these mechanisms should not violate users' personal privacy.