The Emerging Threat of AI Deepfakes in Global Elections
As voters around the world prepare to cast their ballots in upcoming national elections, a powerful new form of artificial intelligence known as “deepfakes” threatens to upend political discourse and sow confusion and mistrust on a massive scale.
Deepfakes – highly realistic fake videos or audio recordings generated by machine learning algorithms – have rapidly advanced in sophistication to the point where they are often indistinguishable from authentic content to the untrained eye.
While this technology has benign applications in entertainment and the arts, it also has the potential to be weaponized for political deception and propaganda in the run-up to pivotal elections in the U.S., Brazil, Nigeria, Taiwan, and other democracies over the next two years.
“We are on the cusp of a perfect storm,” warns Wilson Standish, Director of the Digital Forensics Lab at the Atlantic Council think tank. “The technology has gotten shockingly good while remaining accessible to anyone with a decent computer. Meanwhile, public trust in institutions and the media is at an all-time low, and foreign adversaries are eagerly exploiting our divisions. Put this together and you have a recipe for large-scale mayhem.”
“We saw what happened in 2016 with the WikiLeaks dumps and ‘fake news’ – and that was with primitive, easily debunked fakes and simple bot networks,” notes Sandra Marling, a fellow at Harvard’s Belfer Center. “Imagine that dialed up to 11. We could see fake videos of candidates spouting racial slurs, planning terrorist attacks, accepting bribes, anything you want, and most people won’t be able to tell the difference.”[1]
Marling argues the bigger threat is not changed votes but voter confusion, suppression and apathy. “If people don’t know what’s real anymore and lose all trust, they may just tune out and not bother voting. Which is exactly what the enemies of democracy want.”
The U.S. is far from alone in facing this threat. In Brazil, experts fear a repeat of the rampant disinformation and conspiracy mongering that marred the 2022 election, this time with far more convincing fakes. “We barely muddled through last time, and we had a president openly attacking the electoral system itself as rigged,” said Paulo Xavier, director of the Brazilian fact-checking group Aos Fatos. “Now the lies will be prettier and more viral than ever.”
Across the Atlantic, Nigeria’s 2023 presidential election, which brought the country to the brink of violence, provides a harrowing preview of what could unfold in its upcoming 2027 election. Deepfake videos appearing to show both major candidates engaging in voter intimidation and hate speech rippled across WhatsApp in the final week of the campaign. “The technology is a force multiplier for division and hate,” argued Aminu Sadiq, a political science professor at the University of Lagos. “In a country with deep polarization and low trust, the potential for serious violence is immense.”[2]
In Taiwan, officials are on high alert for a surge of deepfakes and other disinformation emanating from mainland China in the lead-up to the 2024 presidential election. “The Chinese Communist Party has already deployed crude deepfakes against Taiwanese targets, and we expect far more sophisticated attacks this time,” warned Ting-Yu Chen of the Taiwan Factcheck Center in Taipei.[3] Fakes could take the form of Taiwanese politicians surrendering to or collaborating with Beijing.
Yet the potential threats extend beyond national borders and democratic contests. Terrorist groups like the Islamic State are experimenting with deepfakes to grow their reach and inspire homegrown extremists abroad. “You could create fake videos of ‘lone wolves’ carrying out attacks in Western cities, or deepfakes of politicians anywhere insulting the prophet,” says Samira Haddad, an extremism researcher based in Berlin. “Groups have already used basic fakes for recruiting and incitement, so it’s inevitable they will embrace this as well.”[4]
Others worry deepfakes could abet nuclear brinkmanship during tense standoffs. Vincent Wu, an arms control expert at the Asia Research Institute in Singapore, offers an alarming scenario. “Imagine a deepfake video emerging of Kim Jong-un declaring a missile strike on Seoul, or Narendra Modi announcing an imminent attack on Pakistan. In the confusion and panic, a nuclear power could misinterpret this as a real first strike and retaliate in kind, with catastrophic consequences.”
So what can be done to address this gathering storm? Experts say there is no silver bullet, but point to a range of countermeasures that could help mitigate the damage.
The first and most urgent priority is boosting digital media literacy among the global public to engender a more critical mindset when consuming online content. “Just as we teach kids to question strangers offering candy, we need people to reflexively doubt the shocking political video that seems too good (or bad) to be true,” says Michael Barone, an advisor to the European Union’s East StratCom Task Force, which combats Russian disinformation. “You don’t need to be a master of pixels to spot fakes, just attentive to red flags like unnatural speech patterns, blurriness where you’d expect detail, and misaligned head movements and shadows.”[5]
Tech companies also have a major role to play in improving their ability to rapidly detect and remove deepfakes on their platforms while avoiding a whack-a-mole game. Facebook, Twitter and Google have all released open datasets of deepfakes to help train AI-powered screening tools, and have committed to information-sharing partnerships with governments and academic institutions.
But Standish of the Atlantic Council argues the platforms need to go further: “We need them to be far more transparent about the process of how they will determine fakes in real-time and what the thresholds are for removal. Right now there’s justified skepticism they’ll fall short in the heat of the moment.”
Digital forensics researchers in academia, media and cybersecurity firms are also racing to develop automated detection systems that sniff out the telltale artifacts deepfake generation leaves behind. While the fakers currently have the edge, promising breakthroughs continue to emerge: a UC Berkeley team recently unveiled a detection model boasting 97% accuracy.[6] But experts caution that it’s ultimately an arms race, as forgers will inevitably leverage the same machine learning techniques to evade screening.
Policymakers also have avenues to shape the legal and normative environment around synthetic media. A growing number of countries have passed laws criminalizing malicious deepfakes, with penalties reaching several years in prison in nations like China, South Korea and India. In the U.S., several state laws ban deepfake pornography, while a proposal by Sen. Marco Rubio would impose sanctions on foreign individuals or entities caught peddling election-related deepfakes.[7] But civil liberties advocates warn that overly broad laws could ensnare legitimate media and hinder artistic expression.
Norm-setting bodies like the Paris Call for Trust and Security in Cyberspace, which has buy-in from over 500 governments and companies, are also working to stigmatize deepfakes as unacceptable election interference on par with ballot-box stuffing – an admittedly tricky line to walk. “We want governments to pledge not to use deepfakes in each other’s elections as a confidence-building measure, while not discouraging media discussion and open research into the technology itself,” explains Alexander Klimburg, Director of the Global Commission on the Stability of Cyberspace.[8]
Ultimately, however, democratic societies will have to contend with the grim reality that they now inhabit a world where seeing is not necessarily believing. We may never eradicate deepfakes, but we can develop resilience and the wisdom to pause before amplifying content that seems a bit too extraordinary.
“It’s on all of us – journalists, leaders, educators, citizens – to defend facts and truth in the face of a world where the line between the real and the fabricated has blurred beyond recognition,” argues Marling of the Belfer Center. “We either learn to navigate that wilderness together or we let the fabric of reality be shredded before our eyes. The fight for democracy has entered a new stage, and we all have to rise to the moment.”