UK regulator asks X about reports its AI makes ‘sexualised images of children’

Ofcom has made “urgent contact” with Elon Musk’s company xAI following reports that its AI tool Grok could be used to create “sexualised images of children” and undress women.
A spokesman for the regulator said it was also investigating concerns that Grok was producing “naked images” of people.
The BBC has seen several examples on the social media platform X. X did not respond to a request for comment, but on Sunday it issued a warning to users not to use Grok to produce illegal content, including child sexual abuse material.
Elon Musk also posted to say that anyone who asks the AI to produce illegal content will be “subject to the same consequences” as if they had uploaded that content themselves.
xAI’s own acceptable use policy prohibits “pornographic depictions of likenesses of persons”.
But people are using Grok to digitally undress people without their consent or knowledge.
Grok is a free AI assistant, with some paid premium features, that responds to prompts from X users when they tag it in a post.
Journalist Samantha Smith, who discovered users had used the AI to create bikini images of her, told the BBC’s PM programme on Friday that it made her feel “dehumanised and reduced to a sexual stereotype”.
“It wasn’t me who was undressed, but it looked like me, and it felt violating, like someone had posted a photo of me naked or in a bikini,” she said.
Ofcom says it is illegal under the Online Safety Act to create or share intimate or sexually explicit images, including AI-generated “deepfakes”, without a person’s consent.
Tech companies are also expected to take “appropriate steps” to reduce the risk of UK users encountering such content, and to “expeditiously” remove it when made aware.
A Home Office spokesman said legislation had been introduced to ban nudification tools, and that under a new criminal offence anyone providing such technology would “face imprisonment and significant fines”.