
China to crack down on AI chatbots around suicide, gambling

This photo, taken on February 2, 2024, shows Lu Yu, head of Product Management and Operations for Wantalk, an AI chatbot created by Chinese technology company Baidu, displaying a virtual girlfriend profile on his phone at Baidu headquarters in Beijing.

Jade Gao | Afp | Getty Images

BEIJING — China plans to restrict AI-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm, according to draft rules published on Saturday.

The proposed regulations from the Cyberspace Administration of China target what it calls “human-like interactive artificial intelligence services,” according to a CNBC translation of the Chinese document.

Once finalized, the measures will apply to AI products or services offered to the public in China that simulate human personality and engage users emotionally through text, images, audio or video. The public comment period ends January 25.

Winston Ma, an adjunct professor at NYU School of Law, said Beijing’s planned rules would mark the world’s first attempt to regulate artificial intelligence with human or anthropomorphic characteristics. The proposed rules come as Chinese companies are rapidly developing AI companions and digital celebrities.

Compared to China’s generative AI regulation in 2023, Ma said this release “highlights a leap from content security to emotional security.”

The draft rules suggest:

  • AI chatbots cannot create content that encourages suicide or self-harm or engages in verbal violence or emotional manipulation that harms users’ mental health.
  • If a user explicitly expresses suicidal intent, technology providers must have a person take over the conversation and immediately contact the user’s guardian or a designated contact.
  • AI chatbots must not produce gambling-related, obscene or violent content.
  • Minors must have parental consent to use AI for emotional companionship, and time limits apply to their use.
  • Platforms must be able to detect whether a user is a minor, even if the user does not disclose their age, and apply settings for minors while allowing objections in doubtful cases.

Additional provisions would require technology providers to remind users after two hours of continuous AI interaction and mandate security assessments for AI chatbots with more than 1 million registered users or more than 100,000 monthly active users.

The document also encouraged the use of human-like AI for “cultural dissemination and companionship with the elderly.”
