Even if you couldn’t quite describe what AI-based propaganda is or how it works, the odds are you’ve consumed it at least once.

More than likely it was just a relatively harmless photo of Donald Trump’s head seamlessly edited onto the ripped, muscular body of Conan the Barbarian (or Rocky, Rambo, Superman, or just about any other larger-than-life ’80s action movie hero) being shared around your crazy uncle’s social media.

(Image source: Variety)

However, generative AI has also been put to far more nefarious ends by governments, corporations, and anyone else with a desire to shape and mold public opinion in their favor.

The word propaganda stems from the name given to a group of cardinals of the Roman Catholic Church whose job it was to spread the religion to far-flung corners of the globe. The Congregatio de Propaganda Fide, or ‘Congregation for the Propagation of the Faith’, formed in 1622, was responsible for exporting Roman Catholicism to foreign lands.

Today, over 400 years later, the tools of propaganda have been sharpened and refined by constant use — and there isn’t a government in the world that hasn’t used them to shape public opinion to its advantage.

The Spread of AI Propaganda

In years gone by, peddlers of propaganda had to rely on communication tools like newspapers, radio broadcasts, and television news shows to spread their message. Today, an entirely fake video of an entirely fake newscaster delivering an entirely fake message can show up in your social media feed without you being any the wiser.

AI-based propaganda threatens to warp our very sense of reality thanks to the ease with which it can produce totally believable (yet totally fake) fabrications that can’t be distinguished from the real thing.

Today, anyone can create convincing fake videos and images using a host of online tools at virtually no cost. The ease with which the average citizen can now create lifelike fake videos is worrying enough.

But what’s even more concerning is that governments and intelligence agencies can use the same tools to inflict propaganda on their own citizens without their knowledge.

In 2023, at least 47 governments around the world deployed bot or human commentators to manipulate online discussions in their favor, while 16 countries used the generative AI toolkit to create fake videos to spread their self-serving message, according to research by human rights advocacy group, Freedom House.

For example, in 2023, several British actors were surprised to find themselves voicing support for the military junta of Ibrahim Traoré in the West African nation of Burkina Faso.

The actors, who had no knowledge of Burkina Faso or the Traoré regime, were used as the physical basis for AI videos created by London-based AI firm Synthesia, which produces AI-based videos for customers — including military regimes and governments — around the world.

“We must support… President Ibrahim Traoré… Homeland or death we shall overcome!” said one of the actors, who later revealed that he had no idea his likeness was being used for such purposes.

Similar situations arose in other countries, such as in Venezuela, where yet another British actor was used without his knowledge in a fake news video wherein he voiced support for the Venezuelan government and condemned Western meddling in the country’s domestic affairs.

Propaganda Wars Go International, Aided by AI Chatbots

It may not raise too many eyebrows to discover that AI is being used by governments and corporations to get people to vote for them or to buy a certain product. But what if governments used AI to wage a propaganda war on the citizens of other countries?

That’s exactly the situation we find ourselves in as Russia and the West engage in a virtual tit-for-tat where the goal is to disrupt democratic elections by sowing as much confusion as possible in the minds of the other country’s populace.

Even more worryingly, the fake news produced by these efforts needn’t find its way onto your social media feed, because it is already being sucked up by AI chatbots and spat back out at unsuspecting readers who believe they are receiving genuine information that has been fully vetted for accuracy.

Yet it has not. According to a recent study by NewsGuard, the ten leading AI chatbots on the market today were found to repeat Russian disinformation narratives 37% of the time when asked various questions about domestic politics.

The study found that a network of Russian disinformation websites was frequently cited by the chatbots, despite these sites being fake propaganda outlets specifically designed to look and sound like real American news publications.

The chatbots found to have spread Russian state propaganda to American citizens include OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity.

Evolution of Propaganda Technology

The earliest iterations of AI-based propaganda tools were fairly simple. Their impact was limited to spamming repetitive messages on internet forums, or ‘liking’ and ‘disliking’ content to shape how that content was perceived by future observers.

While these tools initially lacked nuance, they have since evolved into sophisticated conversational agents capable of holding seemingly sincere, authentic discussions that give the illusion of real human engagement.

Today, chatbots can be programmed to respond intelligently to online discussions and subtly shape the nature of conversations while adapting their responses to the profile of the human being they interact with.

Indeed, one of the main strengths of AI-driven propaganda is its ability to personalize its responses based on individual preferences and behavior.

Whereas propaganda techniques of the past were limited to a one-size-fits-all approach, AI-based propaganda can target specific segments of the population based on factors like social media profiles, browsing history, and even emotional reactions to content.

This enables the propaganda to be even more psychologically potent and effective by shaping its content according to the interests, biases, and susceptibilities of different audience segments.

For example, if data shows a particular internet user to be particularly concerned about economic security, the AI can target that user with disinformation narratives about employment statistics, economic collapse, immigration news, or other content which plays on the individual’s specific fears.
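The targeting logic described above can be sketched as a toy example. Everything here is a hypothetical illustration invented for this article (the profile fields, the fear categories, and the narrative strings are not drawn from any real system), but it shows the basic shape of matching content to an individual’s inferred concerns:

```python
# Toy sketch of audience-segment targeting (all fields and messages are
# hypothetical illustrations, not any real propaganda system).

from dataclasses import dataclass
from typing import Optional

@dataclass
class UserProfile:
    top_concern: str         # inferred from browsing history, reactions, etc.
    engagement_score: float  # how strongly the user reacts to related content

# Hypothetical mapping from an inferred concern to a tailored narrative.
NARRATIVES = {
    "economic_security": "Alarmist story about employment statistics",
    "immigration": "Misleading story about border policy",
    "health": "Scare story built on a food-safety rumor",
}

def pick_narrative(profile: UserProfile) -> Optional[str]:
    """Return the narrative most likely to resonate with this user,
    or None if their engagement is too low to be worth targeting."""
    if profile.engagement_score < 0.5:
        return None
    return NARRATIVES.get(profile.top_concern)

user = UserProfile(top_concern="economic_security", engagement_score=0.9)
print(pick_narrative(user))  # selects the employment-statistics narrative
```

Real systems would infer the profile from behavioral data rather than declaring it directly, but the principle is the same: the message is chosen to fit the fear.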

This targeted approach makes AI-based propaganda far more effective at manipulating emotions than the wholesale, generalized propaganda of the past. 

Future Implications of AI Propaganda

The development of AI tools shows no signs of slowing down. The genie is very much out of the bottle, and AI-based propaganda techniques are likely to become more sophisticated and more realistic as time goes on.

As AI technology becomes more integrated with the technology we use every day, the personalization factor involved in modern propaganda techniques is expected to grow tremendously.

For example, future AI systems will be able to deliver even more personalized content by analyzing data like GPS location, online purchase history, browsing habits, and even voice and facial recognition data.

This data is already used by internet companies to create a consumer profile, and will soon be used by propagandists to deliver narratives shaped specifically to the individual’s personality traits, political stance, and even their emotional states.

This ‘hyper-personalization’ will make it even more difficult to discern fake news from the real thing as the propaganda becomes an integrated part of a user’s digital environment, subtly reacting to biases and shaping opinion without the user knowing it.

This presents an even greater threat to personal liberty because the user will think they have arrived at their opinions by themselves.

And if enough people fall into the traps laid by AI-driven propagandists, the net effect could be the large-scale destabilization of entire political regimes, weakened democracies, and even internal conflicts like civil wars.

Final Take

AI technology might be the Trojan horse of the modern age. With the advent of AI, most of us assumed we were gaining a useful toolkit which would aid creativity and reduce the time we spend on certain tasks through the wonder of automation.

However, what we have also gained is a set of highly deceptive tools which can be used to warp reality, and ensnare us all within the confines of our own biases by creating a feedback loop which plays on our fears and emotions.

While there may currently be a healthy skepticism surrounding AI-based content, future generations will be born into a world where such content is the norm. While we may currently be sharp-eyed enough to identify an AI-based video, the ongoing development of AI technology will make it harder to identify what’s real and what’s not as time goes on.

To fight back against AI propaganda, the average internet user must become more engaged and discerning about the content they consume online. This might mean selectively pruning our social media feeds (or avoiding them altogether) and identifying trusted news sources that we know to be authentic and genuine.
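The “identify trusted sources” habit can be made concrete. A minimal sketch of a user-curated allowlist filter for a news feed is shown below; the domains are hypothetical placeholders, not endorsements of any real outlet:

```python
# Toy sketch of a trusted-source filter for a news feed.
# The domains below are hypothetical placeholders.

from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-news.com", "example-wire.org"}  # user-curated list

def is_trusted(url: str) -> bool:
    """Keep only items whose domain is on the user's curated allowlist."""
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):  # treat www.example.com like example.com
        host = host[4:]
    return host in TRUSTED_DOMAINS

feed = [
    "https://www.example-news.com/politics/story",
    "https://unknown-aggregator.net/viral-clip",
]
vetted = [u for u in feed if is_trusted(u)]
print(vetted)  # only the allowlisted item survives
```

An allowlist is blunt, and it won’t catch a trusted outlet that gets something wrong, but it inverts the default: unknown sources have to earn their way in rather than being believed on sight.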

AI-driven propaganda is already subtly shaping the modern world, but maybe not for the better.

