
The Dangerous AI Future of Politics

Leon Mewse
August 31, 2025
5 min


The release of Veo3, one of the first widely available generative AI video models with native audio, has become a focal point in the debate over the virtues and role of AI, acting as yet another front on which its strongest supporters and opponents clash.

While the tech media have been quick to discuss whether AI Will Smith can now eat spaghetti convincingly, and whether its output amounts to ‘AI slop’, their more cynical sections have considered its darker potential.

Veo3 opens up new opportunities, many of them grim, as writers have already noted in discussing how it could be used for fraud and other nefarious purposes.

These are legitimate critiques: we should not underestimate the depravity of fraudsters, nor the greed of corporations, nor how readily both take advantage of new technological innovations to enrich themselves.

However, what concerns me most is the effect it will have on politics and foreign policy.

Current Fake News and Misinformation

We have already seen how rampant mis- and disinformation are on the internet. Fake news and misleading videos avalanche over any event in such volume that it becomes difficult, particularly when little reliable information is available in the moment, to know which pieces are true.

Anyone who keeps up with news on the Ukraine war may remember the flurry of videos surging out of Telegram in the early, foggy days of the invasion. With the democratisation of media and the speed at which content spreads, quality control is non-existent: supposed footage of a Russian fighter jet being shot down, for instance, later turned out to have been taken from a video game. In the modern cycle, news is broken, seen by millions, and retracted in a matter of hours or even minutes.

Russia is one of the most prolific users of fake news to push its agenda, seizing on any event or issue, immigration, protests, and the like, to amplify grievances and fuel divides among the citizenry of states it considers adversaries.

Given that misinformation made with little or no AI has already had a dramatic effect on public discourse, we must consider how AI videos could play a destructive role in that discourse, or even in relations between states.

Where AI Videos Could Take Us

Before I discuss how dire this situation could get, I must confess that I understand little of the technical or algorithmic side of AI. My aim is not to examine the technology itself, but its social, political, and economic effects in the context of mis- and disinformation, effects that can be seen clearly without specialist technical know-how.

While I started this piece discussing Veo3, it is only a harbinger of what is to come in the fast-moving realm of innovation. Other models will replace it, and some of them, perhaps even created by state actors capable of staving off model collapse with vast amounts of raw data, may lack the guardrails and censorship structures Veo3 and other AI software currently have. We must remember that it has only been a few years since consumers were playing around with the rudimentary, almost impressionist fever dreams produced by DALL-E. Since then, AI image and video models have kept developing, feasting upon the fruits of society’s digital contributions.

Given enough time, these models could soon produce content that challenges your perception of reality, especially if you are unfamiliar with the recognisable signatures of AI content (think of the people who already fall for comically obvious Nigerian prince scams).

Imagine fake footage, with sound and convincing authenticity, depicting a public figure engaging in an unacceptable act: something as mild as insulting constituents, or something far more horrific. Such authentic-looking media could ruin a career or, at the very least, stain it.

David Cameron had (almost certainly false) rumours that he had performed an obscene act with a pig’s head spread by a political opponent in a book. Despite the story being made up, with the book itself the only ‘evidence’, people still remember it and may even treat it as a defining feature of Cameron’s time as Prime Minister (not helped by the incident’s resemblance to a Black Mirror episode).

It does not take much to tarnish a reputation; advanced AI videos with sound could make this considerably worse.

Beyond simple acts of career homicide, advanced AI videos could affect not just elections, perhaps producing a repeat of Gordon Brown’s ‘bigoted woman’ moment, but also international relations and security.

Consider the effect that fake footage of war crimes, a terror attack, or even an entirely fabricated disaster could have on security. An AI video of an attack could trigger a ‘counter’-attack, a particular risk where tensions are already frayed.


With the democratised nature of media in the modern era, popular anger could very quickly boil over into physical violence. Aggrieved individuals or states could attack embassies, citizens, or even people who merely resemble the supposed perpetrators. The rise in anti-Asian sentiment during COVID-19 occurred under relatively ‘normal’ misinformation conditions; authentic-looking fake videos portraying minorities as a threat could make such episodes both worse and more common.

Malicious actors, including states invested in sowing discord among their rivals, will be pleased to find they can now undermine their opponents with even more sophisticated mis- and disinformation.

Yet what concerns me about advanced AI videos more than anything else discussed so far is the way they can muddy the waters of reality and accountability.

Not only will this perpetuate and worsen the existing misinformation avalanche, it will also allow actors to produce content making their opponents appear just as bad, or to invoke the prevalence of AI videos as a defence.

If a politician or other public figure were caught on camera committing a crime, they could now do two things: claim the video was produced by AI, and/or commission an AI video showing their opponents doing the same or worse, with the avalanche smothering it all.

Truth and justice will become meaningless in a cultural and media environment filled to the brim with fake videos and news, little more than playthings for those with power and no morals to do with as they wish.

Adding to all of this is the way traditional media could exacerbate the threats posed by advanced AI videos. All it would take is sloppy fact-checking or an unscrupulous editor for an AI video to make it onto live TV news.

Consider, for example, the track record of Fox News, an organisation that is less a news outlet than a propaganda machine given its passing relationship with reality. It has a large, disproportionately elderly audience, the perfect demographic among which to spread misinformation. If AI videos become the norm, elderly viewers could be misinformed by fake footage, further deepening political divides and the breakdown in any common perception of fact and reality.

Conclusion (and what benefits AI can bring)

This piece is not intended as a hit-piece against AI as a whole, although it may not be an exactly stellar defence of it either. I have been coldly Realist in my focus, concerned primarily with how AI videos could become part of the arsenal of hostile states, with minimal regard for the ways AI more broadly could be a boon for diplomacy and cooperation.

To be balanced, I ought to note contexts in which AI could be genuinely useful in politics and foreign policy. Research such as that by Christina Meleouni and Iris Panagiota Efthymiou suggests AI could assist in a whole variety of ways, including conflict resolution, international law compliance, and data analysis. AI can collate data and work out how to proceed toward a given aim; in conflict resolution, for instance, it could help draft settlements by taking the stated aims of all parties and identifying where common interests might be found and relationships fostered.
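To make that idea a little more concrete, here is a deliberately simple sketch of the ‘find the overlap in stated aims’ step. It is purely my own toy illustration, not drawn from the cited research; the party names and aims are hypothetical, and a real system would have to weigh preferences, trade-offs, and power asymmetries far more subtly.

```python
# Toy sketch: given each party's stated aims, surface the aims shared by all
# parties as candidate common ground for a negotiated settlement.
# Party names and aims below are hypothetical illustrations.

def find_common_ground(party_aims: dict[str, set[str]]) -> set[str]:
    """Return the set of aims that every party has stated."""
    all_aims = list(party_aims.values())
    common = set(all_aims[0]) if all_aims else set()
    for aims in all_aims[1:]:
        common &= aims
    return common

if __name__ == "__main__":
    parties = {
        "State A": {"ceasefire", "trade access", "border control"},
        "State B": {"ceasefire", "sanctions relief", "trade access"},
        "Mediator": {"ceasefire", "trade access", "monitoring mission"},
    }
    # Prints the aims common to all three parties: ceasefire and trade access.
    print(find_common_ground(parties))
```

Real diplomatic aims are rarely so neatly comparable, of course; the point is only that the overlap-finding the cited research describes is, at root, a tractable data problem.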

However, this does not mean AI is a panacea, and this piece has, I hope, shown why.

Generative AI videos open a Pandora’s box of issues: undermining careers, fabricating events to heighten tensions, fuelling violence and bigotry, and shielding powerful figures from accountability for crimes and socially unacceptable behaviour. Meanwhile, legacy media could supplement or even replace the facts with these fake videos, whether through malice or ignorance.