Trump Accuses Iran of AI-Driven Disinformation in Wartime Propaganda Efforts

Donald Trump has accused Iran of using artificial intelligence to fabricate images of the war, a practice he says is eroding public trust in the news and fuelling disputes over national policy. He specifically cited fabricated images of ‘kamikaze boats’ and mass rallies as tensions between the United States and Iran escalate. The episode underscores the difficulty of confronting AI-driven propaganda and the contested question of how news media should be regulated.

The former president said Iran was using artificial intelligence as a ‘way to spread false information’ on social media and in news reports, circulating fake wartime photographs and videos. He claimed the AI-generated material exaggerated Iran’s battlefield performance and domestic support, and that some Western news outlets were amplifying these narratives, though he offered no evidence that they were coordinating with Tehran.

Origin and timing of the claims

Trump made the statements in a post on his social media platform and in remarks to reporters aboard Air Force One. The comments came amid growing concern over coverage of the US and Israeli conflict with Iran, and as regulators and broadcasters argued over what constitutes fair reporting in wartime.

The Federal Communications Commission has signalled a tougher line toward broadcasters whose reporting it deems misleading. That backdrop raises the stakes of the dispute and prompts questions about media accountability, national security, and the limits of public scrutiny when AI-generated content is involved.

The specific claims Trump made

Trump cited three examples as evidence that Tehran was using AI to deceive. He said footage of ‘kamikaze boats’ was not genuine, that an attack on the USS Abraham Lincoln had been misrepresented, and that rally images purportedly showing 250,000 people supporting a new leader had been generated by AI.

He repeatedly described AI as ‘very risky’ and said news organisations that spread such material should be held accountable. Trump said some newspapers and broadcasters had circulated false images that undermined public trust in what they read and saw, but he did not directly demonstrate that those stories were tied to Iranian AI operations.

What is happening on the ground, and what reports dispute

Independent verification of some events points to a more nuanced picture. Footage has shown explosive-laden boats attacking tankers in the region, and at least one crew member was reportedly killed in an incident at sea near Basra. Iranian state reports also claimed that a US aircraft carrier had been struck, but those claims were not widely accepted internationally.

Large pro-government rallies have taken place in Tehran following the death of a senior official, but crowd estimates vary widely. The existence of footage or claims does not by itself establish whether specific images were AI-generated, and conflicting accounts from witnesses and officials make rapid verification difficult.

How AI reshapes wartime propaganda

Advances in synthetic media make it easier and cheaper to produce convincing but misleading images, audio and video. ‘Deepfakes’ and generative models can create scenes that appear authentic to lay audiences, allowing them to spread rapidly on social media, especially during crises.

Verification tools and forensic detection methods have improved, but they are not keeping pace with the development of the models themselves. The speed and scale of AI-generated content heighten the risk that false narratives will shape public opinion, policy debates, and even decisions made in wartime.

Policy debates and media credibility

The intersection of AI, propaganda and media regulation poses hard trade-offs. Regulators worry that disinformation could escalate conflicts or corrode social trust, while press-freedom advocates warn that heavy-handed measures could suppress legitimate reporting and dissenting views.

Broadcasters face pressure to correct or contextualise disputed claims, and social media platforms must balance content moderation with transparency about what is synthetic. Calls for punishments or licence revocations deepen polarisation and risk shifting the debate from establishing facts to fighting over enforcement.

What to watch next, and possible responses

Stakeholders are likely to pursue a mix of short- and long-term responses: stronger digital provenance and verification tools, clearer disclosure by platforms of synthetic content, international norms for wartime information operations, and media-literacy education to help audiences recognise fabricated imagery.

Policymakers will also have to decide how far regulation should reach. Effective responses will require cross-border cooperation, investment in detection technology, and incentives for quality journalism, so that verified information can outpace falsehoods at the moments when accurate reporting matters most.