Fake AI images stoking anti-immigration protests outside hotels

Research submitted to MPs says that AI-generated images depicting Muslims as a threat have helped stoke anti-immigration protests outside hotels in the UK.
A submission to a House of Commons inquiry into new forms of extremism found that far-right accounts were spreading manipulated images online that reinforce racist stereotypes, among them depictions of Muslims carrying weapons or wearing suicide vests.
The researchers said the rise of AI-generated material, combined with social media algorithms that reward divisive content, is accelerating radicalization and making extremist ideas more visible.
The riots that followed the Southport attack last year came amid waves of false information about the attacker. Analysts said the same networks that helped spread false claims during those disturbances have relied heavily on AI-generated images to normalize negative stereotypes of Muslims and hostility towards immigrants.

A London School of Economics study of 622 posts found that those containing visual depictions of racist conspiracy theories received about 30 percent more engagement than others. The research focused on groups promoting the “great replacement” conspiracy theory, the idea that white Christian populations are being deliberately displaced by Muslim immigrants.
Examples cited included a fabricated image of a woman in an England football shirt crying outside Parliament while Muslim men were shown behind her, and another of a woman beneath the Eiffel Tower in Paris surrounded by rubbish, with Muslim men sitting on the ground.
The study found that the images normalized negative associations with Muslims and drew more people towards anti-immigrant protests.
Dr. Beatriz Lopez, who examined how algorithms spread extremist narratives, said: “Our research reveals that the biases embedded in these systems can have serious real-world consequences when they are amplified on social media platforms.

“These tools repeatedly reproduce harmful stereotypes, associating black, brown and Muslim people with criminality and deceit while portraying white Western individuals as morally upright.
“Generative-AI images circulating online do not just reinforce prejudice; they can inspire violence by positioning white Western men as righteous defenders of their communities and families.”
“The most concerning finding is how these stereotypes are being normalized.
“AI-generated racist and Islamophobic imagery appears so routine that it circulates widely without raising alarm. That invisibility embeds negative depictions of minority communities in everyday digital content, making the problem worse.”