Deepfakes and elections: time for a zero-trust mindset

2024 is a bumper year for elections, with citizens going to the polls in over 60 countries. Artificial intelligence tools are creating new avenues for deception, from deepfake videos to voice clones of politicians speaking words they never said, and voters must remain vigilant about what they see and hear online.

In January, US voters in New Hampshire got a taste of the digital mimicry that is now part of the information landscape. Democrats received a robocall with a voice that sounded like President Joe Biden asking them to stay home and not vote in the state’s primary election. The facsimile of Biden’s voice, later traced to a telecom marketing company in Texas, even used one of the President’s favourite phrases, “What a bunch of malarkey.”

Malarkey indeed. And that’s just the start of it. Cybersecurity experts expect to see more attempts at this type of manipulation in the US elections.

Attempts to sway voter sentiment

Deepfakes are videos, photos, or audio clips that realistically replace a person’s face or voice with somebody else’s. One of their most common uses is the creation of non-consensual pornography, with targets including celebrities like Taylor Swift.

Other, more benign examples include a haunting deepfake replica of Morgan Freeman waxing philosophical about the “era of synthetic reality.”

In politics, deepfakes can be used to manipulate voters, spreading disinformation and fear. Last year, in Slovakia’s parliamentary election, a fake audio clip of Michal Šimečka, leader of the liberal Progressive Slovakia party, discussing how to rig the election went viral two days before voting day, as Wired and CNN report. Šimečka’s party lost at the polls, although it is unclear how many voters were influenced by the fake audio clip. (Another fake clip also featured his voice talking about raising the price of beer.)

In the UK, more than 100 deepfake video advertisements impersonating Prime Minister Rishi Sunak recently ran on Facebook in just one month, reports The Guardian.

In South Africa, concerns about deepfakes and election misinformation are growing. To limit the spread of misinformation and disinformation in this year’s election, the Independent Electoral Commission (IEC) has partnered with the Information Regulator. It is engaging with social media companies, including Meta and TikTok, and media watchdog Media Monitoring Africa (MMA).

The Political Advert Repository (Padre), an initiative of the IEC and MMA, is one platform for verifying political content. Political parties can upload their campaign ads, and the public can cross-check the official versions of the ads they view online.

Spotting fakes

Unfortunately, high-quality deepfakes are becoming increasingly tough to detect. But there are signs to watch out for.

For video content, pay close attention to the person’s face. According to the MIT Media Lab, a deepfake’s lips may move unnaturally and fall out of sync with the words being spoken. The person may blink excessively or not enough. They may have unnatural-looking facial hair or moles, and the skin on the cheeks and forehead may appear “too smooth or too wrinkly.”

Deepfake videos may also not accurately represent the “natural physics of a scene,” the MIT Media Lab notes. This can result in shadows around the eyes and eyebrows appearing in the wrong place and the glare off glasses being amiss.
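For the technically curious, some of these cues can be roughly automated. Below is a minimal Python sketch of the blink-rate heuristic, using OpenCV’s bundled Haar cascades. The file name is a placeholder, the detection parameters are illustrative, and Haar cascades are a crude stand-in for the facial-landmark models serious detectors use, so treat the output as a hint rather than a verdict.

```python
import cv2

# Rough blink-rate check: adults typically blink roughly 15-20 times per
# minute, and an abnormally low or high rate is one of the warning signs
# noted above. This is a heuristic sketch, not a deepfake detector.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture("clip.mp4")  # placeholder input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
blinks, eyes_were_open, frames = 0, False, 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        continue  # no face found in this frame; skip it
    x, y, w, h = faces[0]
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.3, 5)
    eyes_open = len(eyes) > 0
    if eyes_were_open and not eyes_open:
        blinks += 1  # an open-to-closed transition counts as one blink
    eyes_were_open = eyes_open

cap.release()
minutes = frames / fps / 60
if minutes > 0:
    print(f"~{blinks / minutes:.1f} blinks per minute (typical: 15-20)")
```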

Audio deepfakes, like the fake Biden robocall, are challenging to detect, even with software built explicitly for that purpose. However, there are warning signs. “Current deepfakes rarely include a person taking a breath in between words, and they often unnaturally space out each word evenly, unlike the way that real people talk,” according to an article by NBC News.
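That even-spacing cue can also be checked crudely in code. The sketch below uses the librosa audio library to measure the gaps between stretches of speech in a clip; the file name and the silence and uniformity thresholds are illustrative assumptions, and a uniform pause pattern is one weak signal, not proof of a fake.

```python
import numpy as np
import librosa

# Pause-regularity check: natural speech has irregular gaps (breaths,
# hesitations), while current voice clones often space words out evenly.
y, sr = librosa.load("clip.wav", sr=None)  # placeholder input file
speech = librosa.effects.split(y, top_db=30)  # non-silent spans, in samples

# Gaps between consecutive speech spans, converted to seconds.
gaps = np.array([(start - prev_end) / sr
                 for (_, prev_end), (start, _) in zip(speech[:-1], speech[1:])])

if len(gaps) > 1 and gaps.mean() > 0:
    cv = gaps.std() / gaps.mean()  # coefficient of variation of pause lengths
    print(f"{len(gaps)} pauses, mean {gaps.mean():.2f}s, variation {cv:.2f}")
    if cv < 0.3:  # illustrative threshold, not a calibrated one
        print("Pauses are unusually uniform; worth a closer look.")
```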

While assessing video and audio content for signs of manipulation is a good first step, it’s important to triangulate. The basics of sound fact-checking apply. If a political video, audio recording, or photo seems dubious, check to see where the information comes from and if it’s been verified by reputable sources.

US journalism institute Poynter advises doing a “lateral search.” This involves searching the web to “find out more information about what the claim is, who is sharing it, and what other sources are saying about it.”

Time for a zero-trust mindset

The boundaries between fact and fiction are eroding in our increasingly digital world.

With the rise of deepfakes and other deceptive online content and no shortage of hucksters seeking to manipulate public opinion, it pays to approach social media with a “zero-trust mindset.” This means applying a sceptical eye and being fastidious about verifying what one sees and hears online.

With elections taking place in South Africa and abroad, voters need to remain vigilant. Everyone with a smartphone can make a difference by not reflexively blasting unverified content, engineered to provoke and mislead, to their social media channels and WhatsApp groups.


Brendon Bosworth is a communications specialist and science communication trainer with an ever-growing interest in AI. He is the principal consultant at Human Element Communications. 


https://www.humanelementcommunications.com