Half of girls exposed to harmful content online with teens twice as likely to see it on TikTok and X

A new study has found that half of girls are fed harmful online content on social media apps over the course of a week, including posts about self-harm, suicide and eating disorders.

Analysis of data from nearly 2,000 teenagers found that young people were twice as likely to encounter “high risk” content on TikTok and X as on other major platforms, with girls encountering far more harmful posts than boys.

Suicide prevention charity Molly Rose Foundation, which conducted the research weeks before the Online Safety Act came into force, said its findings showed young people were being algorithmically recommended harmful content on an “incredibly disturbing scale”.

The study found that high-risk posts were shown to children without them searching for such material, and more than half of the young people surveyed reported being algorithmically exposed to potentially high-risk content in platforms’ recommendation feeds, such as TikTok’s “for you” page.

The charity accused the algorithms of pushing potentially dangerous content to vulnerable young people and “targeting those most at risk of its effects”. It said 68 per cent of children classified as having low wellbeing had seen high-risk suicide, self-harm, depression or eating disorder content over the course of a week.

The charity said those with poor health or special educational needs and disabilities (SEND) were also more likely to encounter high-risk content, with two in five reporting that it appeared in their feeds.

TikTok defended its safety features, arguing that the study “generalizes health and wellness content as inherently risky.”

The Molly Rose Foundation, named after 14-year-old Molly Russell, who died from an act of self-harm in 2017 while suffering from depression and the “adverse effects of online content”, said the data showed that exposure to the highest-risk types of suicide and self-harm content before the Act came into force was “much larger than previously understood”.

Molly Russell took her own life at the age of 14 after seeing harmful content online (PA Media)

The Online Safety Act, which was passed in 2023, aims to regulate harmful online content and requires major platforms either to prevent these high-risk types of content from appearing in children’s feeds or to reduce how often they appear.

But the charity said its findings should serve as a “wake-up call” to the “urgent need” to strengthen legislation.

Andy Burrows, executive director of the Molly Rose Foundation, said: “This ground-breaking study shows that just weeks before the Online Safety Act comes into force, young people are being exposed to high-risk content of suicide, self-harm and depression on a deeply disturbing scale, with girls and vulnerable children facing significantly increased risk of harm.

“The degree to which girls are bombarded with harmful content is much greater than we previously understood and raises our concerns that Ofcom’s current approach to regulation fails to meet the urgency and determination needed to ultimately save lives.

“Technology Secretary Liz Kendall must now seize the opportunity to act decisively to advance and strengthen the Online Safety Act and put children and families ahead of the Big Tech status quo.”

An Ofcom spokesman said that under new measures designed to protect children in the Online Safety Act, all sites that allow suicide, self-harm and eating disorder content must have highly effective age controls to prevent children viewing them. He added that tech companies should restrict other harmful content appearing in children’s feeds.

“Later this year, we will publish new guidance on the steps sites and apps should take to help women and girls live safer lives online – recognizing the harms that disproportionately affect them,” he said.

X declined to comment but pointed The Independent to its policies prohibiting the promotion or encouragement of self-harm.

A TikTok spokesperson said: “TikTok has over 50 safety features and settings specifically designed to help young people express, explore and learn safely, including from experts like teachers, scientists and doctors.

“While we proactively provide access to trusted wellness content, we remove 99% of violating mental and behavioral health content before it is reported to us. This study takes a simplistic view of a complex topic and draws a misleading conclusion about our platform by generalizing health and wellness content as inherently risky.”

A spokesperson for the Department for Science, Innovation and Technology (DSIT) said: “Although this research predates 25 July, when the Act’s child safety duties came into force, we now expect young people to be protected from harmful content, including material that encourages self-harm or suicide, as platforms comply with the Act’s legal requirements. This means safer algorithms and less toxic feeds.

“Services that fail to comply with these rules can expect tough sanctions from Ofcom. We are determined to hold tech companies to account and keep children safe.”
