
State actors are behind much of the visual misinformation about the Iran war

As attacks spread following the bombing of Iran by US and Israeli forces, a video of crowds looking at fire, smoke and debris from the top of a tall building rumored to be in Bahrain was widely circulated.

Social media users claimed the skyscraper had been hit by an Iranian strike. But even though buildings in Bahrain were struck by Iranian missiles during the war, this video was not real: it was created with artificial intelligence and shared by accounts associated with the Iranian government as part of an effort to exaggerate the strikes' success.

There are several clues that the video is not original, including two cars that appear to be stuck together on the left side of the clip and a man whose elbow appears to be poking through his backpack in the lower right corner.

Since the Iran war began last week, a flurry of misrepresented or fabricated videos has spread widely online, fueled in part by state-run propaganda and influence campaigns, particularly around who won the war and how many people died.

“Content coming from state actors is a little bit better targeted,” said Melanie Smith, senior director of policy and research on information operations at the Institute for Strategic Dialogue. “They have a very clear narrative structure, and the videos are just used to support some sort of statement they want to make about the conflict and the geopolitical situation in general.”


Pro-Iran social media accounts adopted a narrative exaggerating the destruction and death toll inflicted by the country’s military, supported by reports in Iranian state media. This has fueled a wave of AI-generated videos of supposed airstrikes, such as the burning high-rise in Bahrain.
An ongoing Russia-aligned influence operation called Operation Overload, also known as Matryoshka or Storm-1679, has been releasing videos designed to impersonate intelligence agencies and news organizations, undermining people’s sense of security in order to influence their behavior. It’s a tactic the network has used before during election cycles. For example, it shared a warning falsely attributed to Israeli intelligence telling Israelis in Germany and the United States to be careful in public places, or not to go out at all.

Iranian censorship makes things even more confusing

Misrepresented and fabricated videos have been a key feature of other recent conflicts, such as the Russia-Ukraine and Israel-Hamas wars, but experts say the biggest difference now is the lack of information from the Iranian public, owing to internet shutdowns and general censorship. That loss of perspectives can work both for and against the Iranian government.

“In Ukraine, that message was so powerful that it really changed the whole dynamic of the conflict because the world was really aligned with the perspective of Ukrainians who were facing attacks and showing resilience in light of the attacks, but we’re kind of missing that story in Iran,” said Todd Helmus, a senior behavioral scientist at RAND who studies irregular warfare, terrorism and information operations.

Opportunistic social media users with no connection to state actors, chasing clicks, also contributed heavily to the misinformation spread in the early days of the Iran war: they presented old footage from other conflicts as current, shared video game clips as if they were real, and published their own AI-generated content.

Artificial intelligence, in particular, has amplified misinformation in ways that were not possible in past conflicts, even just a few years ago. Combined with state-linked disinformation and censorship, this creates an even wider gap in which the truth can disappear.

“The volume of AI content is really starting to horribly pollute the information landscape in these types of crisis environments,” Smith said. “It makes it increasingly difficult to access verified, reliable information at the moments people need it most.”

Nikita Bier, X’s chief product officer, wrote in a post on Tuesday that the platform would suspend users from its revenue-sharing program if they published AI-generated content about the conflict without proper disclosure. Penalties run 90 days for a first offense and become permanent thereafter.

Emerson Brooking, director of strategy and resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab, warns that social media platforms are now on the front lines of warfare, and that users need to be aware of how state actors can exploit them, even thousands of miles from the action on the ground.

“If you are in these spaces, understand that this is an extension of the physical battlefield,” he said. “There are actors on all sides of the conflict actively trying to spread propaganda and disinformation, to convince you that false things are true. Your eyes and your attention are an asset.”
