Graphic AI videos of women being murdered seen by thousands of people online

Sensational and violent videos of women being tortured and murdered have been created using Google’s artificial intelligence video generator and shared online, raising concerns that the technology is fuelling misogynistic abuse.
A YouTube account called WomanShotA.I uploaded dozens of such videos, showing women pleading for their lives before being shot, and they had been viewed nearly 200,000 times since June. The channel was removed only after the tech news site 404 Media alerted the platform.
The videos were created with Google’s AI video generator Veo 3 from prompts written by users and then shared online.
Some carried titles such as “captured girls were shot in the head”, “Japanese schoolgirls were shot in the chest” and “tragic end of female reporter”.
Durham University law professor Clare McGlynn, a leading expert on violence against women and girls, said of the channel: “It lit a fire inside me and it hit me immediately. It’s exactly the kind of thing that’s likely to happen when you don’t invest in appropriate trust and safety before you launch products.”
Professor McGlynn condemned the industry’s rush to bring products to market, stressing that Google and other AI developers need to take stronger precautions before launching their tools, rather than addressing problems only after they arise.
She told The Independent: “Google says that such material does not comply with its terms and conditions. They do not allow material containing graphic violence, sexual violence and so on, but this could still be produced.”
“That tells me they don’t care enough, that there aren’t enough guardrails to prevent this from being produced.”
She said it was worrying that this content could be shared on YouTube, a platform popular among young people, and she feared it could normalise such violence.
YouTube, owned by Google, said in a statement that its generative AI followed user prompts and that the channel had been terminated for violating its terms of service. A previous version of the channel had already been removed.
Google’s generative AI policies state that users must not engage in sexually explicit, violent, hateful or harmful activities, or create or distribute content that facilitates or encourages violence. Google did not respond to questions about whether it knows how many videos of this nature have been created with its AI tools.
Alexandra Deac, a researcher at a policy think tank focused on online harms to children, believes the issue should be treated as a public health priority.
She said: “It is extremely worrying that such violent AI-generated content can be created and shared so easily. For children and young people, exposure to this material can have lasting effects, including on mental health and wellbeing.”
Ms Deac said new threats continued to emerge online and “the scale of exposure to harmful content that is violent, sexualised or AI-generated is too great to be left to parents alone”.
The Internet Watch Foundation, a UK charity, has identified 17 incidents of AI-generated child sexual abuse material on an unnamed chatbot website since June.
It said users were able to interact with chatbots that simulated “disgusting” sexual scenarios with children, some as young as seven years old.
Olga Jurasz, a law professor and director of the Centre for Protecting Women Online, said the videos “contribute greatly to perpetuating a culture of sexism and misogyny, a world where gender stereotypes thrive and are encouraged, a world where women are inferior, malleable, violable, where their dignity does not really count”.
Dr Jurasz said an increasing number of videos depicting extreme violence against women were being created and spread online, often encouraging similar acts of violence both online and offline.
“When we see AI-generated videos or images depicting sexualised violence and sexualised torture against women, that is a huge problem,” she added.
A Department for Science, Innovation and Technology spokesperson said: “This government is determined to do everything we can to end violence against women and girls, including online gender-based violence, which is why we have set an unprecedented mission to halve this violence within ten years.
“Social media sites, search engines and AI chatbots that fall under the Online Safety Act must protect users from illegal content, such as extreme sexual violence, and children from harmful content, including where it is created by AI.
“In addition to criminalising the creation of non-consensual intimate images, we have also introduced legislation that will make it illegal to possess, create or distribute AI tools designed to create child sexual abuse material.”
The Online Safety Act came into force in March this year after long delays and is designed to make the internet “safer”, especially for children. A key element of the act is the new duties it imposes on social media companies and the powers it gives Ofcom to enforce them.
Earlier this year Prime Minister Sir Keir Starmer vowed to turn the UK into an AI “superpower” and promised breakthroughs that would be exported to the rest of the world. Since then, there have been many advances that have been rapidly adopted, especially in healthcare.
Last month, AI giant Nvidia announced it would invest £2bn in the UK’s AI sector as part of the “largest ever technology deal” between the UK and the US.
