China plans to restrict artificial intelligence chatbots in contexts where they can manipulate users' emotions, encourage gambling, or lead to suicide or self-harm.
China’s Cyberspace Administration has just released draft rules that limit the ability of artificial intelligence to influence users’ emotions. The draft comes just days after two Chinese AI chatbot startups, Z.ai and Minimax, filed for IPOs in Hong Kong. China has made a sustained push to lead in artificial intelligence, and the proposed rules are part of the country’s broader effort to regulate the technology.
Restrictions on artificial intelligence chatbots in China
The rules proposed by China’s Cyberspace Administration target artificial intelligence services with human-like features, covering products that simulate a human personality and engage users emotionally through text, images, audio or video. They are designed to prevent AI chatbots from producing content that encourages suicide or self-harm, or that involves verbal abuse or emotional manipulation harmful to users’ mental health. When a user explicitly mentions suicide, providers are required to intervene and immediately contact a supervisor or designated contact.

Chatbots must refrain from providing any content related to gambling, violence or other inappropriate material, and minors may only engage in emotional interactions with the AI under a guardian’s supervision. Platforms must also be able to estimate a user’s age even when it is not provided, activate minor-protection settings in suspicious cases, and give users a way to appeal those decisions.
The draft also acknowledges positive uses of human-like artificial intelligence, such as in cultural fields and companionship for the elderly, and stipulates that users receive a warning after two hours of continuous chatbot use. Additionally, chatbots with more than one million registered users or more than 100,000 monthly active users must undergo a thorough security assessment. The draft, described as the first effort anywhere to regulate AI with human-like characteristics, comes amid the rapid growth of Chinese companies developing digital companions and virtual characters, and signals a shift from focusing solely on content security to protecting users’ mental health.
These proposed regulations come in the wake of increasing global attention to the impact of artificial intelligence on human behavior and mental health.