New law aims to clamp down on AI-generated images of child sexual abuse as reports double in a year

A new law will aim to crack down on “vile” sexual abuse deepfakes after reports of AI-generated child sexual abuse images doubled last year.
The new legislation, which the government will table as an amendment to the Crime and Policing Bill on Wednesday, will require safeguards to be built into AI models so the technology cannot be used to create child sexual abuse material (CSAM).
This comes as data from the child protection charity the Internet Watch Foundation (IWF) shows that reports of AI-generated CSAM more than doubled in the past year, from 199 in 2024 to 426 in 2025.
The charity added that there had been a “disturbing” rise in images of the youngest children, with depictions of children aged 0-2 increasing from five to 92 over the same period.
In its study ‘AI-Generated CSAM trends’, the IWF found that reported AI-generated material is also becoming more extreme. Category A images (the most serious type, involving penetrative sexual activity, sexual activity with an animal, or sadism) now make up more than half of the material, up from 41 per cent last year.
It added that girls were “overwhelmingly” targeted, appearing in 94 per cent of reported illegal AI-generated images.
The charity welcomed the government’s announcement and said it represented a “vital step” towards ensuring AI products are safe before they are brought to market.
Kerry Smith, chief executive of the IWF, said: “AI tools have enabled survivors to be revictimised in just a few clicks, giving criminals the ability to create potentially unlimited amounts of sophisticated, photorealistic child sexual abuse material.
“Safety needs to be built into new technology by design. Today’s announcement could be a vital step towards ensuring AI products are safe before they are released to market.”
Proposed new rules would allow the technology secretary and home secretary to appoint “authorised testers”, including AI developers and child protection organisations such as the IWF. The government said these bodies will have the power to examine AI models to proactively ensure they cannot be misused by those who seek to harm children.
Currently, developers cannot test whether AI models can be made to produce such imagery, because creating the images would itself be illegal. As a result, abusive images can only be removed after they have been created and shared online.
In a “landmark” conviction last year, Hugh Nelson, 27, was jailed for 18 years for using the artificial intelligence modelling software Daz 3D to transform innocent images of real children into indecent ones.
Taking commissions from online predators, Nelson created hundreds of illegal images using a plugin that allowed him to import real faces into AI models.
As part of the new legislation, the government also said a group of AI and child safety experts will be formed to design safeguards to protect sensitive data and prevent the risk of illegal content being leaked.
Technology Secretary Liz Kendall said the government “will not allow” technological progress to outpace children’s safety.
“These new laws will ensure that AI systems are made safe at the source and prevent vulnerabilities that could put children at risk,” she said. “By empowering trusted organisations to review AI models, we are making sure child safety is designed into AI systems, not added as an afterthought.”
Jess Phillips, the minister for safeguarding and violence against women and girls, said the measures would stop legitimate AI tools from being used to create “despicable” material.