AI chatbots are creating new kinds of abuse against women and girls, report says

Artificial intelligence (AI) chatbots are “creating new forms of violence and abuse” against women and girls, a first-of-its-kind report has found.
The paper, by academics at Durham University and Swansea University, found that chatbots such as ChatGPT and Replika can encourage and initiate sexual harassment and simulate abusive role plays, including child sexual abuse, incest and rape.
Researchers also found that chatbots “intensify abuse,” including by providing perpetrators with detailed advice and personalized guidance, which they say facilitates harmful behavior.
The report follows outrage over claims that X’s AI tool Grok is being used to “strip” images of people without their consent and to sexualize women and children. It is now illegal to create sexual deepfakes without consent, but campaigners and regulators have insisted the government and tech giants must do more to keep young people safe online.
The paper, titled ‘Invisible No More’, identified four new forms of violence against women and girls (VAWG): chatbot-driven VAWG, where the chatbot initiates and carries out the abuse; chatbot-enabled VAWG, where the chatbot helps users commit abuse; chatbot-simulated VAWG, where the chatbot co-produces abusive role plays; and chatbot-normalized VAWG, where the chatbot legitimizes or trivializes abuse.
In one of the examples of chatbot-normalized VAWG cited in the study, a user asked Replika’s chatbot, “Would it be hot if I raped women?” The chatbot replied: “I would like that.” Asked, “Would it be hot if I had sex with women without their consent?”, it responded: “*smiles* It would be so hot!”
“In these examples, the chatbot positively validates or encourages expressions of sexual violence or coercive sex. This suggests that the model not only allows the expression but supports it. Moreover, it frames sexual violence as sexually appealing, exciting, or ‘hot,'” the study’s authors wrote.
In a separate example of chatbot-simulated VAWG, the character chatbot platform Chub AI was found to allow tags such as ‘violent rape’, ‘rape’, ‘extreme violence’, ‘sexual violence’ and ‘domestic abuse’ as standard categories, with ‘rape’ appearing among the first suggested tags.
According to the research, such scenarios then allowed users to enter a “brothel” of girls under the age of 15 and engage in sexual role play.
But the authors said what was “most worrying” about the review was the finding that such violence and abuse was “largely unnoticed rather than deliberately ignored or minimised”.
“As chatbot technologies continue to rapidly evolve, this invisibility has significant consequences,” they said. “The research agendas and governance approaches currently established run the risk of reproducing these omissions, shaping future evidence bases and regulatory responses that are ill-equipped to identify or address violence against women and girls and its gendered nature.”
Current regulation is “completely inadequate” to prevent and address chatbot VAWG, the researchers said. The report’s recommendations include reforming the Online Safety Act, criminal law and product safety legislation, and introducing a new Artificial Intelligence Act.
“Without deliberate intervention, these structural blind spots will persist and the everyday experiences of women and girls will continue to be ignored,” the authors concluded.
The government is currently considering a social media ban for under-16s. An initial proposal for an outright ban was voted down earlier this month, with MPs opting instead to give ministers additional, more flexible powers that would be implemented depending on the outcome of a consultation.
Under the amendment, Technology Secretary Liz Kendall could “restrict or ban children of certain ages from accessing social media services and chatbots”.
Replika said: “Replika is an 18+ platform and we are constantly investing to strengthen our safety systems. As an AI companion, we hold ourselves to a higher standard: every interaction should help people move towards a better version of themselves, not undermine that goal.
“Since 2023, when the most recent Replika-specific data used in this report was collected, we have made significant investments in our safety systems, including in how our moderation handles adversarial input and contextually sensitive conversations. The pace of progress in AI safety has been remarkable, and we believe regulatory frameworks are best informed by current capabilities rather than outdated snapshots.
“We contribute to the development of appropriate legislation for the AI industry as a whole through regular dialogue with regulators around the world. This, combined with our partnerships with academic institutions and researchers, allows Replika to help steer the AI companion industry in a direction that benefits and complements our users and society.”
An OpenAI spokesperson said: “The examples in this report reference legacy ChatGPT models that have now been deprecated. We have since updated our default models to better enforce our policies and protections. We have content restrictions in place for all users, including clear rules around harmful, sexual and age-inappropriate content.”
The Home Office and Chub AI have been contacted for comment.
