China plans to restrict the use of artificial intelligence chatbots in contexts where they could affect human emotions, lead to suicide or self-harm, or encourage gambling.
China’s Cyberspace Administration has just released draft rules that limit the ability of artificial intelligence to influence users’ emotions. The draft comes just days after two Chinese AI chatbot startups, Z.ai and Minimax, filed for IPOs in Hong Kong. China has worked hard to lead in the field of artificial intelligence, and these proposed rules are part of the country’s broader effort to regulate the technology.
Restrictions on artificial intelligence chatbots in China
The rules, drafted by China’s Cyberspace Administration, target artificial intelligence services with human features, including products that simulate a human personality and engage users emotionally through text, images, audio or video. They are designed to prevent AI chatbots from producing content that encourages suicide or self-harm, or that involves verbal violence or emotional deception and harms users’ mental health. In cases where a user specifically mentions suicide, providers are required to step in and immediately contact a supervisor or designated person.


Chatbots are required to refrain from providing any content related to gambling, violence or inappropriate material, and minors may interact emotionally with the AI only under the supervision of a guardian. In addition, platforms must be able to recognize a user’s age even when it is not provided, and in suspicious cases activate the protection settings for users under the legal age, while also allowing users to appeal such determinations.
The draft rules also acknowledge positive uses of humanoid artificial intelligence, such as in cultural fields and in accompanying the elderly, and stipulate that users will receive a warning after two hours of continuous chatbot use. Additionally, chatbots with more than one million registered users or more than 100,000 monthly active users must undergo a thorough security assessment. The measure, described as the first global effort to regulate artificial intelligence with human characteristics, has been proposed amid the rapid growth of Chinese companies developing digital companions and virtual characters, and signals a shift from focusing solely on content security to protecting users’ mental health.
These proposed regulations come amid increasing global attention to the impact of artificial intelligence on human behavior and mental health.
