AI chatbot firms face stricter regulation to protect children in UK

A young girl solving homework at the desk with an artificial intelligence chatbot.
Phynart Studio | E+ | Getty Images
The UK government is closing a “loophole” in new online safety legislation, subjecting AI chatbots to obligations to tackle illegal material or face fines or even being blocked.
The move comes after the government harshly criticized Elon Musk’s Grok; chatbots including Google’s Gemini and Microsoft’s Copilot will also be brought within the scope of the government’s Online Safety Act.
Platforms will be expected to comply with “illegal content duties” or “face the consequences of breaking the law,” the announcement said.
It follows the European Commission opening an investigation into Musk’s X in January over the spread of sexually explicit images of children and other individuals. Starmer has led calls for Musk to put a stop to this.
British Prime Minister Keir Starmer at a press conference in London, England, Monday, January 19, 2026.
Bloomberg | Bloomberg | Getty Images
Earlier, Britain’s media watchdog Ofcom launched an investigation into X, which was reported to have spread sexually explicit images of children and other individuals.
“Our action against Grok sent a clear message that no platform gets a free pass,” Starmer said as he announced the latest measures. “We are closing gaps that put children at risk and laying the groundwork for further action.”
Starmer made a speech on Monday about the new powers, which include setting minimum age limits for social media platforms, restricting harmful features such as infinite scrolling, and restricting children’s use of AI chatbots and access to VPNs.
One of the measures announced would require social media companies to retain data after the death of a child, unless the online activity is clearly unrelated to the death.
“We are acting to protect the welfare of children and help parents navigate the minefield of social media,” Starmer said.
Alex Brown, TMT chair at law firm Simmons & Simmons, said the announcement showed the government taking a different approach to regulating rapidly evolving technology.
“Historically our legislators have been reluctant to regulate technology and have instead sought to regulate use cases, for good reason,” Brown told CNBC.
He said regulations that target a particular technology can quickly become obsolete and risk leaving gaps in how its uses are covered. Generative AI exposes the limits of the Online Safety Act, which focuses on “regulating services rather than technology,” Brown said.
He said Starmer’s latest announcement showed that the UK government wants to address dangers arising “from the design and behavior of the technologies themselves, and not just from user-generated content or platform features.”
There has been increased scrutiny of children and young people’s access to social media in recent months, with lawmakers citing harms to mental health and wellbeing. In December, Australia became the first country to implement legislation banning social media use by young people under 16.
Australia’s ban has forced apps such as Alphabet’s YouTube, Meta’s Instagram and ByteDance’s TikTok to have age verification methods, such as uploading ID or banking information, to prevent those under 16 from creating accounts.
While Spain became the first European country to implement the ban earlier this month, France, Greece, Italy, Denmark and Finland are also considering similar proposals.
The UK government launched a consultation in January about banning social media for under-16s.
In addition, the House of Lords, the country’s unelected upper legislative house, voted last month to amend the Child Welfare and Schools Bill to include a ban on social media for under-16s.
The next stage will see the bill examined by parliament’s House of Commons. Both houses must agree before any changes become law.