UK MPs warn of repeat of 2024 riots unless online misinformation is tackled

Failure to properly tackle online misinformation means it is “only a matter of time” before viral content triggers a repeat of the summer 2024 riots, MPs have warned.
Chi Onwurah, chair of the Commons science and technology select committee, said ministers appeared complacent about the threat and this was putting the public at risk.
The committee said it was disappointed by the government’s response to its latest report, which warned that the business models of social media companies contributed to the unrest following the Southport murders.
Responding to the committee’s findings, the government rejected calls for a law to tackle generative AI platforms and said it would not intervene directly in the online advertising market, which MPs said had helped incentivise the creation of harmful material after the attack.
Onwurah said the government agreed with most of the conclusions but stopped short of supporting the recommendations for action.
Accusing ministers of putting the public at risk, Onwurah said: “The government urgently needs to fill the loopholes in the Online Safety Act (OSA) but instead appears complacent about the harms caused by the viral spread of legal but harmful misinformation. Public safety is at risk and it is only a matter of time before there is a repeat of the 2024 summer riots fueled by misinformation.”
In the report, titled Social Media, Misinformation and Harmful Algorithms, MPs said provocative AI-generated images were posted on social media platforms after the stabbing attack in which three children died, and warned that AI tools make it easier to create hateful, harmful or deceptive content.
In its response, published by the committee on Friday, the government said there was no need for new legislation because AI-generated content was already covered by the OSA, which regulates material on social media platforms. It said introducing further legislation would hinder the act’s implementation.
However, the committee noted testimony from an official at the communications regulator Ofcom, who said AI chatbots were not fully covered by the law and that further consultation with the tech industry was needed.
The government also refused to take immediate action on the committee’s recommendation that a new body be created to tackle social media advertising systems that allow “harmful and misleading content to be monetized”, including a website that spread misinformation about the name of the Southport killer.
In its response, the government said it “acknowledged concerns” about the lack of transparency in the online advertising market and would continue to review regulation of the sector. It added that its online advertising taskforce aims to increase transparency and accountability in the industry, particularly around illegal advertising and protecting children from harmful products and services.
Addressing the committee’s request for more research into how social media algorithms amplify harmful content, the government said Ofcom was “in the best position” to decide whether such research should be carried out.
Responding to the committee, Ofcom said it was carrying out work on recommendation algorithms but acknowledged further work was needed across the wider academic and research sectors.
The government also rejected the committee’s call for an annual report to parliament on the state of online misinformation, arguing that it could expose, and thereby hinder, its operational work to limit the spread of harmful information online.
The UK government defines misinformation as the inadvertent spread of false information, whereas disinformation is the deliberate creation and dissemination of false information intended to cause harm or disruption.
Onwurah said the responses on artificial intelligence and digital advertising were particularly concerning. “It is disappointing to see a lack of commitment to action, particularly on AI regulation and digital advertising,” she said.
“The committee is not convinced by the government’s argument that the OSA already covers generative AI, and the technology is evolving so rapidly that clearly more will need to be done to tackle its impact on online misinformation.
“And how do we stop this without addressing the advertising-based business models that encourage social media companies to algorithmically amplify misinformation?”
