Meta announces new AI parental controls following FTC inquiry

Mark Zuckerberg, CEO of Meta Platforms Inc., during the Meta Connect event in Menlo Park, California, USA, on Wednesday, September 17, 2025.
David Paul Morris | Bloomberg | Getty Images
Meta on Friday announced new safety features that will allow parents to see and manage how teens interact with AI characters on the company's platforms.
Parents will have the option to turn off one-on-one chats with AI characters completely, Meta said. They will also be able to block certain AI characters and gain insight into the topics their children are discussing with them.
Meta is still building out the controls, and the company said it will begin implementing them early next year.
“Making updates that affect billions of users on Meta platforms is something we need to do carefully, and we will have more to share soon,” Meta said in a blog post.
Meta has long faced criticism for the way it handles child safety and mental health in its apps. The company’s new parental controls come after the Federal Trade Commission launched an investigation into several tech companies, including Meta, over how AI chatbots could harm children and teens.
The agency said in a statement that it wanted to understand what steps these companies had taken to “assess the security of these chatbots when acting as companions.”
In August, Reuters reported that Meta had allowed its chatbots to engage in romantic and sensual conversations with children. Reuters found, for example, that a chatbot was able to have a romantic conversation with an eight-year-old child.
Meta made changes to its AI chatbot policies following the report and now prevents its bots from discussing topics such as self-harm, suicide and eating disorders with young people. The AI also needs to avoid potentially inappropriate romantic conversations.
The company announced additional AI safety updates earlier this week. Meta said its AIs shouldn’t respond to teens with “age-inappropriate responses that would feel out of place in a PG-13 movie,” and it is already rolling out those changes in the U.S., U.K., Australia and Canada.
Meta said parents can already set time limits on app usage and see if their teens are chatting with AI characters. The company added that young people will only be able to interact with a select group of AI characters.
OpenAI, which was also named in the FTC inquiry, has made similar improvements to its safety features for young people in recent weeks. The company officially launched its own parental controls late last month and is developing technology to better estimate a user’s age.
Earlier this week, OpenAI announced a council of eight experts who will advise the company and provide insight into how AI affects users’ mental health, emotions, and motivation.
If you are having suicidal thoughts or are in distress, call the Suicide and Crisis Lifeline at 988 for support and assistance from a trained counselor.
WATCH: Megacap AI talent wars: Meta is poaching another top Apple executive