China to crack down on AI chatbots around suicide, gambling

This photo, taken on February 2, 2024, shows Lu Yu, head of Product Management and Operations for Wantalk, an AI chatbot created by Chinese technology company Baidu, displaying a virtual girlfriend profile on his phone at Baidu headquarters in Beijing.
Jade Gao | AFP | Getty Images
BEIJING — China plans to restrict AI-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm, according to draft rules published on Saturday.
The proposed regulations from the Cyberspace Administration of China target what the agency calls “human-like interactive artificial intelligence services,” according to a CNBC translation of the Chinese document.
Once finalized, the measures will apply to AI products or services offered to the public in China that simulate human personality and engage users emotionally through text, images, audio or video. The public comment period ends January 25.
Winston Ma, an adjunct professor at NYU School of Law, said Beijing’s planned rules would mark the world’s first attempt to regulate artificial intelligence with human or anthropomorphic characteristics. The proposed rules come as Chinese companies rapidly develop AI companions and digital celebrities.
Compared with China’s 2023 generative AI regulation, Ma said, this release “highlights a leap from content security to emotional security.”
The draft rules suggest:
- AI chatbots cannot create content that encourages suicide or self-harm or engages in verbal violence or emotional manipulation that harms users’ mental health.
- If a user specifically suggests committing suicide, technology providers must have a person take over the conversation and immediately contact the user’s guardian or a designated person.
- AI chatbots must not produce gambling-related, obscene or violent content.
- Minors must have parental consent to use AI for emotional companionship, and their usage time is limited.
- Platforms must be able to detect whether a user is a minor, even if the user does not disclose their age, and apply minor-specific settings, while allowing users to contest the determination in doubtful cases.
Additional provisions would require technology providers to remind users after two hours of continuous AI interaction and mandate security assessments for AI chatbots with more than 1 million registered users or more than 100,000 monthly active users.
The document also encouraged the use of human-like AI for “cultural dissemination and companionship with the elderly.”
China AI chatbot IPOs
The proposal comes shortly after two of China’s leading AI chatbot startups, Z.ai and Minimax, filed for initial public offerings in Hong Kong this month.
Minimax is best known internationally for its Talkie AI application, which allows users to chat with virtual characters. The app and its native Chinese version, Xingye, accounted for more than a third of the company’s revenue in the first three quarters of the year, with more than 20 million monthly active users during that period.
Z.ai, also known as Zhipu, filed under the name “Information Atlas Technology.” While the company did not disclose its number of monthly active users, it highlighted that its technology runs on approximately 80 million “empowered” devices, including smartphones, personal computers, and smart vehicles.
Neither company responded to CNBC’s request for comment on how the proposed rules might affect their IPO plans.