Character.AI to block romantic AI chats for minors

Character.AI announced Wednesday that it will soon remove the ability for minors to engage in open-ended conversations, including romantic and therapeutic chats, with the startup's AI chatbots.
The Silicon Valley startup, which allows users to create and interact with character-based chatbots, announced the move as part of an effort to make its app safer and more age-appropriate for those under 18.
Last year, 14-year-old Sewell Setzer III died by suicide after engaging in sexualized conversations with chatbots on the Character.AI app. Character.AI, along with other AI developers including OpenAI and Facebook parent Meta, has come under scrutiny after users died by suicide or were otherwise harmed after engaging with chatbots.
As part of its safety initiatives, Character.AI said Wednesday that it will limit users under 18 to open-ended chats of up to two hours per day and will eliminate such chats for minors by Nov. 25.
“This is a bold step forward, and we hope it raises the bar for everyone else,” Character.AI CEO Karandeep Anand told CNBC.
In October 2024, Character.AI made changes to prevent minors from engaging in sexual dialogue with chatbots. That same day, Sewell’s family filed a wrongful death lawsuit against the company.
To enforce the policy, the company said it will launch an age assurance function that uses first-party and third-party software to verify users' ages. The company is partnering with Persona, the same firm used by Discord and others, to help with verification.
In 2024, Character.AI's founders and certain members of its research team joined Google DeepMind, Google's artificial intelligence unit, in one of a series of deals by leading technology companies to accelerate the development of AI products and services. The agreement called for Character.AI to provide Google with a non-exclusive license to its existing large language model, or LLM, technology.
Since Anand took over as CEO in June, 10 months after the Google deal, Character.AI has added more features to diversify its offerings beyond chatbot conversations. These include a feed for watching AI-generated videos, as well as storytelling and role-playing formats.
Anand, who was previously an executive at Meta, said that although Character.AI will no longer allow teens to have open-ended conversations on its app, those users will still have access to the app’s other services.
About 10% of the startup’s approximately 20 million monthly active users are under 18 years of age. That rate has dropped as the app has shifted its focus to storytelling and role-playing, Anand said.
The app makes money primarily from ads and a $10 monthly subscription. Character.AI is on track to end the year with $50 million in sales, Anand said.
Additionally, the company announced Wednesday that it will establish and fund an independent AI Safety Lab dedicated to safety research for AI entertainment. Character.AI did not say how much funding it would provide, but the startup said it was inviting other companies, academics, researchers and policymakers to join the nonprofit effort.
Regulatory pressure
Character.AI is one of many AI chatbot companies facing regulatory scrutiny over teens and their AI companions.
In September, the Federal Trade Commission issued orders to seven companies, including Alphabet, Meta, OpenAI and Snap, as well as Character.AI's parent company, seeking to understand the potential impacts of AI chatbots on children and teens.
On Tuesday, Senators Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., announced legislation that would ban AI chatbot companions for minors. Earlier this month, California Governor Gavin Newsom signed a law requiring chatbots to disclose that they are artificial intelligence and to remind minors every three hours to take a break.
Competitor Meta, which also offers AI chatbots, announced safety features in October that allow parents to see and manage how teens interact with AI characters on the company’s platforms. Parents have the option to turn off one-on-one chats with AI characters completely and can block certain AI characters.
The issue of sexualized conversations with AI chatbots has come to the fore, with tech companies announcing different approaches to deal with this problem.
Earlier this month, Sam Altman said his company was "not the world's elected morality police" when he announced that OpenAI would allow adult users to engage in erotic conversations with ChatGPT later this year.
Microsoft AI CEO Mustafa Suleyman called sexbots "very dangerous" last week, saying the software company would not provide "simulated erotica." Microsoft is a major investor in and partner of OpenAI.
The race to develop more realistic, human-like AI companions has been growing in Silicon Valley since ChatGPT’s launch in late 2022. While some people are forming deep connections with AI characters, this rapid development raises ethical and safety concerns, especially for children and young people.
“I also have a six-year-old child and I want to make sure he grows up in a safe environment with artificial intelligence,” Anand said.
If you are having suicidal thoughts or are in distress, call the Suicide and Crisis Lifeline at 988 for support and assistance from a trained counselor.




