Children exposed to guns, self-harm and misogyny ‘within minutes of creating social media profiles’

British children are potentially exposed to guns, self-harm, misogyny and sexting within minutes of creating social media profiles, a new study has found.
Researchers found that one profile designed to resemble that of a young teenager was shown footage of an extremely distressed woman being rescued from a torture scene just seconds after signing up for TikTok, while another was shown graphic gun content on YouTube.
Tech firms have been accused of designing systems that can expose young people to “highly harmful” content on social media, with “powerful, unregulated algorithms built to maximise engagement at all costs”.
Campaigners have reiterated their call for the government to raise the age of access to social media to 16, warning that “every day of delay exposes thousands more children to harm and exploitation”. Actress Natalie Cassidy described the findings as “every parent’s worst nightmare.”
The experiment, carried out by the Big Tech’s Little Victims campaign, set out to reveal what children are shown by algorithm-driven platforms when they sign up to social media at 13, the current minimum age of access.
Four fictional profiles were created on TikTok, Instagram, Snapchat and YouTube, based on typical 13-year-old girls and boys in the UK, using common interests such as gaming, beauty, music and sport. The researchers then used each platform for up to 30 minutes per day, scrolling like a child would.
The findings show that over the course of a week, the profiles were served hundreds of pieces of alarming content that glorified guns and knives, made explicit references to sex and pornography, promoted extreme fitness regimes and diets, and encouraged misogyny, isolation, self-harm and even suicide.
On average across the week, profiles were served harmful content within just three minutes of logging in, and for every minute spent scrolling they were shown a piece of harmful or inappropriate content. While harmful material was sometimes the first thing presented in a session, algorithmic content loops made it difficult, sometimes impossible, to escape the escalating harm, according to the research. In one 30-minute Snapchat session, researchers say they flagged 86 pieces of such content.
Daniel Kebede, general secretary of the National Education Union (NEU), which is running the campaign, said: “What this experiment shows is shocking but not surprising. Children are exposed to extremely harmful content on social media, even when the platforms know their age. This is no accident: these systems are designed this way.”
“At 13, children’s minds are still developing, but they are being targeted by powerful, unregulated algorithms built to maximise engagement at all costs. Teachers see the impact every day, with misogyny rising, concentration deteriorating, and pupils arriving at school exhausted by what they have been exposed to online. Parents are left to manage these harms alone at home.
“Therefore, the government must act now and raise the age of access to social media to 16. Every day of delay exposes thousands more children to harm and exploitation.”
The experiment also revealed clear differences between platforms and genders.
Alarming and harmful content was found to appear most frequently, and to escalate most sharply, on TikTok and Snapchat, while content on Instagram was mostly age-appropriate. One adult researcher using Snapchat said they had to stop a session because the self-harm and suicidal ideation content was so extreme.
In the experiment, girls were disproportionately served content encouraging self-harm and suicidal ideation, as well as intensely body-focused and sexualised material such as body scrutiny and body shaming. Researchers say TikTok delivered excessive health, fitness or diet content to girls’ profiles in 92 percent of sessions, and sexually explicit content in 83 percent of sessions.
Meanwhile, the findings show boys being pushed towards violence, misogyny and radicalisation, with repeated exposure to guns, hostile content about women, racist and anti-immigration narratives, and figures linked to extremist or conspiratorial views, such as Tommy Robinson and Andrew Tate. According to the research, male profiles on TikTok and YouTube were shown content containing hate speech or racist narratives in 77 percent of sessions, while on TikTok alone misogynistic content appeared in 85 percent of boys’ sessions, compared with just 13 percent of girls’ sessions.
Mental health issues were found to be a common theme across all profiles, with content of this nature appearing in 74 percent of TikTok, Snapchat and YouTube sessions.
Cassidy, an ambassador for the Big Tech’s Little Victims campaign, said: “Having your children watch this type of content is every parent’s worst nightmare. You’d assume that if a platform knows your child’s age it will protect them – but this experiment shows that’s not happening.
“Parents can’t watch every swipe, every video, every algorithmic decision. We need the government to step in and put children’s safety ahead of Big Tech’s profits.”
TikTok said it limits content that may be unsuitable for those under 18, sets age limits on certain features, such as requiring users to be 16 or older for their videos to appear in the ‘For You’ feed or to use direct messaging, and applies more restrictive privacy settings by default for younger users. The company added that it was reviewing the findings of the experiment.
A Snapchat spokesperson said: “Snapchat is a visual communications app built to encourage real conversations with real friends. Unlike other platforms, we don’t apply an algorithm to a feed of unmoderated content, so this type of harmful content has no place on Snapchat and we take immediate action when we find it.” The company is understood to have contacted the NEU about its findings.
Meta, the company that owns Instagram, said it is rolling out updates to Teen Accounts on Instagram, which place users under 18 in age-appropriate content settings by default, automatically limit exposure to sensitive or adult content across recommendations, search, accounts and AI experiences, and introduce stronger supervision options for parents.
A UK government spokesman said: “The law is clear. Under the Online Safety Act, platforms must protect children from harmful content, including violent and pornographic content and material that encourages self-harm.
“Those who fail to act will face sanctions from Ofcom. The regulator has the full support of the government in going after those who fail to comply with UK law.”
The Independent has contacted YouTube for comment.