Makers of AI chatbots that put children at risk face big fines or UK ban

Makers of AI chatbots that put children at risk will face huge fines and even have their services blocked in the UK under changes to the law that Keir Starmer will announce on Monday.
Ministers are planning a “crackdown on disgusting illegal AI-generated content”, after public outrage last month forced Elon Musk to stop X’s Grok AI tool from creating sexualised images of real people in the UK.
With growing numbers of children using chatbots for everything from homework help to mental health support, the government said it would “act swiftly to close a legal loophole and force all AI chatbot providers to comply with the illegal content duties in the Online Safety Act or face the consequences of breaking the law”.
Starmer also plans to fast-track new restrictions on children’s social media use if a possible ban on under-16s is backed by MPs following a public consultation. That means changes to children’s social media use, including other measures such as restrictions on endless scrolling, could come as soon as this summer.
But the Conservatives dismissed the government’s claim that it was moving quickly as “more smoke and mirrors”, because the consultations had not yet started.
The shadow education secretary, Laura Trott, said: “It is simply not credible to claim to be taking ‘urgent action’ when the supposedly urgent consultations are not yet available. The Labour party has repeatedly said it has no view on whether under-16s should be blocked from accessing social media. That is not good enough. I am clear that we must stop under-16s from accessing these platforms.”
The moves come after online regulator Ofcom admitted it had no power to act against Grok because images and videos created by a chatbot without searching the internet are not covered by existing laws unless they amount to pornography. The change to bring AI chatbots under the Online Safety Act could happen within weeks, but the loophole has been known for more than two years.
“Technology is moving really fast and the law needs to keep up,” Starmer said. “Our action against Grok sent a clear message that no platform gets a free pass. Today we are closing loopholes that put children at risk and laying the groundwork for further action.”
Companies that breach the Online Safety Act could face fines of up to 10% of their global revenues, and regulators could apply to the courts to block their connections to the UK.
If AI chatbots are specifically used as search engines, to produce pornography, or to operate in user-to-user contexts, they are already covered by the law. But they can be used to create material that encourages people to harm themselves or commit suicide, or even to create child sexual abuse material, without facing any sanctions. This is the gap the government says it wants to close.
Chris Sherwood, chief executive of the NSPCC, said young people were contacting the helpline to report harm caused by AI chatbots and did not trust tech companies to design them safely.
In one case, a 14-year-old girl who spoke to an AI chatbot about her eating habits and body dysmorphia was given false information. In other cases, young people who disclosed that they were self-harming were still presented with self-harm content.
“Social media has brought great benefits to young people, but it has also done a lot of harm,” Sherwood said. “If we’re not careful, AI will be like this on steroids.”
OpenAI, the $500bn San Francisco startup behind ChatGPT, one of the UK’s most popular chatbots, and xAI, which produces Grok, have been approached for comment.
Since Adam Raine, a 16-year-old from California, took his own life after what his family claims was “months of encouragement from ChatGPT,” OpenAI has introduced parental controls and is rolling out age estimation technology to restrict access to potentially harmful content.
The government will also consult on forcing social media platforms to make it impossible for users to send or receive nude images of children, a practice that is already illegal.
The technology secretary, Liz Kendall, said: “We won’t wait to take the action families need, so we’ll be tightening the rules on AI chatbots and laying the groundwork so we can act quickly on the results of the consultations on young people and social media.”
The Molly Rose Foundation, founded by the father of 14-year-old Molly Russell, who took her own life after seeing harmful content online, described the steps as a “welcome down payment”. But it called on the prime minister to introduce a new Online Safety Bill that “strengthens regulation and makes clear that product safety and children’s welfare are the cost of doing business in the UK”.
