
California investigates Grok over AI deepfakes

California’s attorney general has launched an investigation into the spread of sexualized AI deepfakes produced by Elon Musk’s AI model Grok.

In a statement announcing the investigation, Attorney General Rob Bonta said: “The avalanche of reports detailing non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking.”

xAI, which developed Grok, has previously said that “anyone using Grok who encourages or uses Grok to create illegal content will be subject to the same consequences as if they had uploaded illegal content.”

California’s investigation comes as British Prime Minister Sir Keir Starmer warns of possible action against X.

In a statement on Wednesday, Bonta said: “This material, depicting women and children naked and in sexually explicit situations, was used to harass people online.”

The Democratic prosecutor called on xAI to take immediate action.

Democratic California Gov. Gavin Newsom wrote on X on Wednesday that xAI’s “decision to create and host a breeding ground for predators is despicable.”

The BBC has contacted xAI for comment.

On Wednesday, Musk wrote on X that he was “unaware, literally zero, of any underage nude images created by Grok.”

“Obviously Grok does not generate images spontaneously,” Musk wrote. “It only does this based on user requests.”

The tech billionaire, a Republican megadonor, also said critics of X were politically motivated and were using the Grok controversy as an “excuse for censorship.”

In November, Wired magazine reported that tools from other AI companies, such as OpenAI and Google, were also being used to create non-consensual sexualized images of people.

Last week, three US Democratic senators asked Apple and Google to remove X and Grok from their app stores.

A few hours after the request, X restricted the image-creation tool to paying subscribers only.

X and Grok remain available on the Apple App Store and Google Play.

This comes amid a debate over whether US tech companies should be shielded from liability for what users share on AI platforms.

Section 230 of the Communications Decency Act of 1996 provides legal immunity to online platforms for user-generated content.

However, Prof James Grimmelmann of Cornell University argues that this law “protects sites only from liability arising from third-party content coming from users, not content generated by the sites themselves”.

Grimmelmann says xAI may try to deflect blame by arguing that users, not the company, are responsible for the images, but he expressed doubt that this claim would hold up in court.

“This is not a situation where users create images themselves and then share them on X,” he said.

In this case, “xAI itself is making the images. This is outside the scope to which Section 230 applies,” he added.

Oregon Senator Ron Wyden argued that Section 230, which he co-authored, does not apply to images created by artificial intelligence. He said companies should be held fully responsible for such content.

“I’m happy to see states like California stepping in to investigate Elon Musk’s production of horrific child sexual abuse material,” Wyden told the BBC on Wednesday.

Wyden is one of three Democratic senators asking Apple and Google to remove X and Grok from their app stores.

The announcement of the investigation in California comes as the UK prepares legislation that would make it illegal to create non-consensual intimate images.

UK watchdog Ofcom also launched an investigation into Grok.

If Ofcom finds the platform in breach of the law, it could impose a fine of up to 10% of the company’s worldwide revenue or £18 million, whichever is greater.

On Monday, Sir Keir Starmer told Labour MPs that Musk’s social media platform X could lose its “right to self-regulate”, adding: “If X can’t control Grok, we will.”
