Published on September 18, 2024

With more than two billion people voting in over 60 countries, 2024 is poised to be one of the most consequential election years in recent memory. Unfortunately, it may also be the year that AI undermines election integrity.

AI threatens election security in myriad ways. Election officials must contend with AI-fueled ransomware attacks, AI-powered phishing attempts, and bad actors using AI to overwhelm their offices with disingenuous open records requests.

Moreover, in the run-up to the U.S. presidential election, we can expect to see more synthetic media spreading across social networks. We’ve already seen deepfake audio used to manipulate voters in the New Hampshire primary. In that incident, a political consultant, Steven Kramer, created robocalls that used an AI-generated clone of Biden’s voice to encourage primary voters to stay away from the polls. In the aftermath, Lingo Telecom, the company that transmitted the calls, paid a $1 million FCC fine. The FCC subsequently ruled that robocalls using AI-generated voices are illegal, and Kramer faces a bevy of criminal charges.

Although AI raises many distinct concerns, the spread of AI-powered misinformation and disinformation on social media remains the most serious AI-related threat to election integrity.

Social media is ground zero for misinformation

Given the lack of effective content moderation on social media platforms, bad actors often have little trouble spreading disinformation (false content designed to manipulate elections) and misinformation (false content shared by users without nefarious intent). Complicating matters, AI-powered bots can hyper-target specific audiences while disseminating this content far and wide.

As deepfakes become increasingly convincing, it’s naive to think AI-fueled disinformation won’t affect voters this year. Unfortunately, the most prominent social media companies have little interest in flagging such content.

The top social media companies tend to prioritize user engagement above all else; as long as advertisers don’t leave, these companies have little incentive to act in society’s best interest. Meta (Facebook and Instagram) has long been accused of allowing targeted disinformation to spread in the run-up to elections.

As a quick example, a deepfake audio file circulated on Facebook in the run-up to Slovakia’s September 2023 parliamentary election, jeopardizing its integrity. Two days before the vote, users shared a fake recording that depicted liberal candidate Michal Šimečka and journalist Monika Tódová plotting to rig the election. At the time, Meta’s manipulated media policy did not account for deepfake audio; however, Meta subsequently vowed to adjust the policy.

Meta has made election ad policy adjustments

In the U.S., Meta blocks new electoral ads during the seven days leading up to elections, and since January 2024, political advertisers have been required to disclose whenever they’ve used AI to manipulate media in political ads on Meta’s platforms.

Meta’s manipulated media policy is well-intentioned; however, it doesn’t address the dissemination of all types of AI-manipulated content. For example, Meta’s manipulated media policy still wouldn’t cover the aforementioned fake conversation between Šimečka and Tódová. That would apparently fall under the purview of Meta’s “coordinated inauthentic behavior” policy.

According to the company’s own statistics, Meta has removed over 200 coordinated inauthentic behavior networks since 2017. If you find Meta’s moderation policies confusing, you’re not alone. Meta’s Oversight Board has called the company’s synthetic media policies “incoherent and confusing,” although Meta has supposedly made policy adjustments.

At the 2024 Munich Security Conference, Meta promised to address the proliferation of deepfake media designed to trick voters. Meta may be able to identify AI-generated images created with Meta AI; however, it remains to be seen how well it can assess the other forms of synthetic media flooding its platforms.

David Evan Harris and Lawrence Norden, scholars at UC Berkeley and NYU, respectively, are not optimistic about Meta’s efforts to address AI-fueled misinformation. Harris’ opinion on this matter carries particular weight, as he formerly served as a research manager for responsible AI at Meta.

Of late, Meta has made some good AI policy decisions; for example, earlier this month, Meta joined the steering committee for the Coalition for Content Provenance and Authenticity (C2PA). That said, given Meta’s immense reach and poor track record, a lot of synthetic, election-influencing content is sure to go undetected.

Nevertheless, Meta is not my greatest concern. That dubious distinction belongs to X.

Elon Musk uses X to advance his political interests

After firing his election integrity team last year, Musk wasted no time inserting himself into the election process. He quickly encouraged his followers to vote for Trump and reportedly vowed to contribute $45 million a month to a pro-Trump super PAC, America PAC.

Musk has also shown little fear of financial backlash or of provoking regulators. In 2018, for example, he tweeted that he had secured funding to take Tesla private, which resulted in a $40 million fraud settlement with the SEC, paid in equal parts by Musk and Tesla.

As of today, Musk faces at least eleven different legal battles with the U.S. federal government; thus, it should come as no surprise that he’s trying to use his funds (and X) to his political advantage.

Harvard Business School professor David Yoffie explains, “Given the challenges Musk faces and his political predisposition today, he would clearly like to see a Trump administration in office.” Trump and Musk have grown closer in recent months; in fact, Trump has offered Musk a role in his administration should the former president win.

Musk’s proclivity for spreading false election information, his aversion to content moderation related to election integrity, and his use of X to actively promote political candidates of his choice are all cause for concern—and that’s without dwelling on the fact that an immense amount of AI-generated election misinformation already flourishes on X.

There are solutions

Musk and X aside, there are steps that social media platforms can take to keep AI-fueled misinformation from jeopardizing elections.

Platforms need to verify all election officials’ accounts, and they should invest in tooling that detects and labels AI-generated content accordingly.
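To make the labeling recommendation concrete, here is a minimal sketch of one signal a platform could check: the IPTC “trainedAlgorithmicMedia” digital source type value that provenance standards such as C2PA can embed in a file’s metadata to mark synthetic media. This is my own illustration, not any platform’s actual pipeline, and the helper name is hypothetical; real systems parse metadata properly and pair it with classifiers, since metadata can be stripped or forged.

    # Naive illustration (not a production detector): scan a file's raw
    # bytes for the IPTC digital-source-type value that provenance
    # metadata can embed to mark AI-generated media.
    import sys

    AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC value for synthetic media

    def has_ai_metadata_marker(path: str) -> bool:  # hypothetical helper
        with open(path, "rb") as f:
            return AI_MARKER in f.read()

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            verdict = ("label as AI-generated" if has_ai_metadata_marker(path)
                       else "no AI marker found")
            print(f"{path}: {verdict}")

A check like this is cheap to run on upload; the hard part, as the platforms’ track records show, is content whose provenance metadata has already been removed.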

While domestic and international threat actors “flood the zone” with disinformation (political operative Steve Bannon uses a different term, of course), it’s important that everyday citizens do the opposite leading up to election day.

Local officials—and everyday citizens—can monitor social media and search platforms, verify election information, and amplify official election content.

As a quick example, everyday citizens can direct voters to the National Association of Secretaries of State’s #TrustedInfo2024 initiative and CISA’s Election Security Rumor vs. Reality website.

To be fair, X and Meta do have policies in place to remove misinformation related to voter registration and methods of voting. As X CEO Linda Yaccarino stated earlier this year, “Every citizen and company has a responsibility to safeguard free and fair elections.”

I agree with Yaccarino. However, considering who owns the leading social media companies, I think we may have to rely more on “citizens” and less on “companies” if we want to keep our election integrity intact.

John Donegan

Enterprise Analyst, ManageEngine

John is an Enterprise Analyst at ManageEngine. He covers infosec, cybersecurity, and public policy, addressing technology-related issues and their impact on business. Over the past fifteen years, John has worked at tech start-ups, as well as B2B and B2C enterprises. He has presented his research at five international conferences, and he has publications from Indiana University Press, Intellect Books, and dozens of other outlets. John holds a B.A. from New York University, an M.B.A. from Pepperdine University, an M.A. from the University of Texas at Austin, and an M.A. from Boston University.
