
Criminals using AI to create ‘frightening’ number of child sexual abuse videos

Campaigners have issued a stark warning after artificial intelligence was used to create thousands of child sex abuse videos last year, contributing to record levels of such harrowing material available online.

The Internet Watch Foundation (IWF) said its analysts discovered 3,440 AI-generated videos depicting child sexual abuse in 2025, a dramatic increase from just 13 detected in 2024.

Overall, IWF staff verified 312,030 reports of abuse images found online in 2025, up from 291,730 the previous year.

Their research showed that of the 3,440 AI-generated videos, 2,230 fell into Category A, the most extreme classification under UK law, while a further 1,020 fell into the second most severe classification.

IWF Chief Executive Kerry Smith said: “When images and videos of children being sexually abused are distributed online, it makes everyone, especially children, less safe.

“Our analysts are working tirelessly to have these images removed to give victims some hope. But now AI has advanced so much that criminals can essentially have their own child sex abuse machines where they can do whatever they want to see.

“The alarming increase in extreme category A AI-generated child sexual abuse videos shows the kind of things criminals want, and it’s dangerous.

“The easy availability of this material will only embolden those with a sexual interest in children, accelerate its commercialization and further endanger children both online and offline.

X announces limits to AI chatbot Grok’s ability to manipulate images after reports that users can instruct it to sexualize images of women and children (PA Wire)

“Governments around the world must now ensure that AI companies integrate design principles and security from the outset. It is unacceptable to introduce technology that allows criminals to create this content.”

The research comes as X announced limits to its AI chatbot Grok's ability to manipulate images, following a backlash over reports that users could instruct it to sexualize images of women and children.

The company announced earlier this week that it would prevent Grok from “editing images of people in revealing clothing” and prevent users from creating images that resemble real people in countries where this is illegal.

Technology Secretary Liz Kendall said she was still waiting for the regulator Ofcom to establish the facts "fully and robustly". While the watchdog welcomed the new restrictions, she said its investigation would continue as it sought "answers about what went wrong and what is being done to fix it".

The IWF has previously said it wants all "nudify" software to be banned, arguing that AI companies need to make tools more secure before they are rolled out and insisting that the Government make this mandatory.

Children's charity the NSPCC said the IWF's findings were "both deeply worrying and sadly predictable".

Chris Sherwood, the charity's chief executive, said: "Criminals are using these tools to create extreme materials on a scale we've never seen before, and children are paying the price.

“Tech companies cannot continue to launch AI products without developing vital protections. They know the risks and the harm they can cause. It is up to them to ensure their products are never used to create indecent images of children.”

Children’s charity NSPCC said the IWF’s findings were “both deeply worrying and sadly predictable” (Punsayaporn Thaveekul/Alamy/PA)

“The UK Government and Ofcom must now step in and hold tech companies to account.

"We call on Ofcom to use every tool available to them through the Online Safety Act, and for the Government to impose a legal duty of care on the design of generative AI products and services so they are required to prevent these terrible crimes."

Ms Kendall called it "deeply abhorrent that AI is being used to target women and girls" and insisted the Government "will not tolerate this technology being weaponized for the purpose of harm, which is why I have stepped up our action to ban the creation of intimate images by AI without consent."

She added: "AI needs to be a force for progress, not misused, and we are committed to supporting its responsible use to stimulate growth, improve lives and deliver real benefits, and to take action where it is being misused.

"That's why we've launched a world-leading crackdown targeting AI models trained or adapted to create child sexual abuse material. Possessing, supplying or modifying these models will soon become a crime."

The Lucy Faithfull Foundation, which works with offenders to help them stop viewing child abuse images, said it had also seen the number of people using AI to view and make abuse images double in the past year.

Young people who are concerned that inappropriate images of themselves have been shared online can use the free Report Removal tool at childline.org.uk/remove.

Safeguarding Minister Jess Phillips said: “This increase in AI-generated child abuse videos is appalling; this Government will not sit back and allow predators to create this disgusting content.”

She added: "Tech companies have run out of excuses. Act now or we will force you to."
