TikTok and Meta’s algorithm race compromised safety for engagement, say whistleblowers: Report

Social media giants Meta and TikTok compromised safety in a race over algorithms, the BBC reported, citing a dozen whistleblowers and insiders at the companies. They said internal research showed increases in sexual blackmail, terrorism and violence, but the findings were ignored in favor of boosting engagement.
An engineer from Meta (which owns Instagram, Facebook and WhatsApp) told the broadcaster that he had been instructed to allow harmful "borderline" content "because stock prices were falling". This included content promoting conspiracy theories and misogyny.
A TikTok employee showed the broadcaster the platform's internal dashboard for user complaints and cited examples where staff were told to prioritize reports from politicians, to "maintain a strong relationship", over posts that put children at risk.
What are the allegations? ‘Users fed on fast food’
The whistleblowers spoke to the BBC for its documentary 'Inside the Rage Machine', which examines how TikTok's highly engaging short-video algorithm shook up the status quo and how competitors raced to catch up.
Senior Meta researcher Matt Motyl told the BBC that Instagram Reels, TikTok's direct competitor, was launched in 2020 without adequate safety measures. He cited dozens of high-level internal studies that found more incidents of bullying, harassment, hate speech and incitement to violence on Reels compared to other areas of the platform. The documents also showed that Facebook was aware of the problem.
Internal research showed that Facebook chose to "continue to feed users fast food", focusing on an algorithm that maximized profits "at the expense of the well-being of the audience", which was not in line with the company's stated mission.
Another former senior employee said 700 staff had been deployed to help Reels grow, while safety teams were denied requests for two experts to help police content harmful to children and 10 staff to help with election coverage.
‘Keep TikTok away from your children as much as possible’
Ruofan Ding, a machine learning engineer who worked on TikTok's recommendation engine from 2020 to 2024, said the algorithms are a "black box" that is difficult to examine, and that engineers rely on safety teams to ensure harmful content is removed. But he acknowledged that as the algorithm improved on a weekly basis, he began to see "borderline" content surface more frequently.
"Borderline" refers to harmful but legal content, such as conspiracy theories, misogynistic posts, racist content and sexualized posts.
"Nick", a member of TikTok's safety team, told the BBC he decided to speak out, showing reporters the internal dashboard and how the company handles reports. "If you feel guilty every day about what you should have done, should I say something at some point? You can decide," Nick said.
While "terrorism, sexual violence, physical violence, abuse, human trafficking" appear to be on the rise, he said, the volume of cases, layoffs and artificial intelligence (AI) taking over some tasks make it harder for moderation teams to protect children and young people. Nick added that the company's public statements did not match its actions. He told the BBC the solution was to "delete the app" and keep children "away from it for as long as possible".
How did companies react?
Responding to questions, TikTok told the BBC that the allegations were "fabricated" and that it was investing in technology to prevent harmful content from being displayed. It added that political content was not prioritized over safety and that such claims "fundamentally misrepresent the way moderation systems work."
In a statement, a spokesperson for Meta denied the whistleblowers' claims, saying: "Any suggestion that we deliberately amplified harmful content for financial gain is false." The spokesperson added that the company has strict policies and has made "significant investments in safety and security over the last decade."
Meta added that “real changes” have been made to the platform to protect teens, including a new Teen Accounts feature with “built-in protections and tools for parents to manage their teens’ experiences.”