As social media guardrails fade and AI deepfakes go mainstream, experts warn of impact on elections

Dec 27, 2023, 4:23 PM | Updated: 4:23 pm

FILE - A booth is ready for a voter, Feb. 24, 2020, at City Hall in Cambridge, Mass., on the first morning of early voting in the state. (AP Photo/Elise Amendola, File)

NEW YORK (AP) — Nearly three years after rioters stormed the U.S. Capitol, the false election conspiracy theories that drove the violent attack remain prevalent on social media and cable news: suitcases filled with ballots, late-night ballot dumps, dead people voting.

Experts warn it will likely be worse in the coming presidential election contest. The safeguards that attempted to counter the bogus claims the last time are eroding, while the tools and systems that create and spread them are only getting stronger.

Many Americans, egged on by former President Donald Trump, have continued to push the unsupported idea that elections throughout the U.S. can’t be trusted. A majority of Republicans (57%) believe Democrat Joe Biden was not legitimately elected president.

Meanwhile, generative artificial intelligence tools have made it far cheaper and easier to spread the kind of misinformation that can mislead voters and potentially influence elections. And social media companies that once invested heavily in correcting the record have shifted their priorities.

“I expect a tsunami of misinformation,” said Oren Etzioni, an artificial intelligence expert and professor emeritus at the University of Washington. “I can’t prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified.”


Manipulated images and videos surrounding elections are nothing new, but 2024 will be the first U.S. presidential election in which sophisticated AI tools that can produce convincing fakes in seconds are just a few clicks away.

The fabricated images, videos and audio clips known as deepfakes have started making their way into experimental presidential campaign ads. More sinister versions could easily spread without labels on social media and fool people days before an election, Etzioni said.

“You could see a political candidate like President Biden being rushed to a hospital,” he said. “You could see a candidate saying things that he or she never actually said. You could see a run on the banks. You could see bombings and violence that never occurred.”

High-tech fakes already have affected elections around the globe, said Larry Norden, senior director of the elections and government program at the Brennan Center for Justice. Just days before Slovakia’s recent elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but they were shared as real across social media regardless.

These tools might also be used to target specific communities and hone misleading messages about voting. That could look like persuasive text messages, false announcements about voting processes shared in different languages on WhatsApp, or bogus websites mocked up to look like official government ones in your area, experts said.

Faced with content that is made to look and sound real, “everything that we’ve been wired to do through evolution is going to come into play to have us believe in the fabrication rather than the actual reality,” said misinformation scholar Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania.

Republicans and Democrats in Congress and the Federal Election Commission are exploring steps to regulate the technology, but they haven’t finalized any rules or legislation. That’s left states to enact the only restrictions so far on political AI deepfakes.

A handful of states have passed laws requiring deepfakes to be labeled or banning those that misrepresent candidates. Some social media companies, including YouTube and Meta, which owns Facebook and Instagram, have introduced AI labeling policies. It remains to be seen whether they will be able to consistently catch violators.


It was just over a year ago that Elon Musk bought Twitter and began firing its executives, dismantling some of its core features and reshaping the social media platform into what’s now known as X.

Since then, he has upended its verification system, leaving public officials vulnerable to impersonators. He has gutted the teams that once fought misinformation on the platform, leaving the community of users to moderate itself. And he has restored the accounts of conspiracy theorists and extremists who were previously banned.

The changes have been applauded by many conservatives who say Twitter’s previous moderation attempts amounted to censorship of their views. But pro-democracy advocates argue the takeover has shifted what once was a flawed but useful resource for news and election information into a largely unregulated echo chamber that amplifies hate speech and misinformation.

Twitter used to be one of the “most responsible” platforms, showing a willingness to test features that might reduce misinformation even at the expense of engagement, said Jesse Lehrich, co-founder of Accountable Tech, a nonprofit watchdog group.

“Obviously now they’re on the exact other end of the spectrum,” he said, adding that he believes the company’s changes have given other platforms cover to relax their own policies. X didn’t answer emailed questions from The Associated Press, only sending an automated response.

In the run-up to 2024, X, Meta and YouTube have together removed 17 policies that protected against hate and misinformation, according to a report from Free Press, a nonprofit that advocates for civil rights in tech and media.

In June, YouTube announced that while it would still regulate content that misleads about current or upcoming elections, it would stop removing content that falsely claims the 2020 election or other previous U.S. elections were marred by “widespread fraud, errors or glitches.” The platform said the policy was an attempt to protect the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions.”

Lehrich said even if tech companies want to steer clear of removing misleading content, “there are plenty of content-neutral ways” platforms can reduce the spread of disinformation, from labeling months-old articles to making it more difficult to share content without reviewing it first.

X, Meta and YouTube also have laid off thousands of employees and contractors since 2020, some of whom have included content moderators.

The shrinking of such teams, which many blame on political pressure, “sets the stage for things to be worse in 2024 than in 2020,” said Kate Starbird, a misinformation expert at the University of Washington.

Meta explains on its website that it has some 40,000 people devoted to safety and security and that it maintains “the largest independent fact-checking network of any platform.” It also frequently takes down networks of fake social media accounts that aim to sow discord and distrust.

“No tech company does more or invests more to protect elections online than Meta – not just during election periods but at all times,” the posting says.

Ivy Choi, a YouTube spokesperson, said the platform is “heavily invested” in connecting people to high-quality content on YouTube, including for elections. She pointed to the platform’s recommendation and information panels, which provide users with reliable election news, and said the platform removes content that misleads voters on how to vote or encourages interference in the democratic process.

The rise of TikTok and other, less regulated platforms such as Telegram, Truth Social and Gab, also has created more information silos online where baseless claims can spread. Some apps that are particularly popular among communities of color and immigrants, such as WhatsApp and WeChat, rely on private chats, making it hard for outside groups to see the misinformation that may spread.

“I’m worried that in 2024, we’re going to see similar recycled, ingrained false narratives but more sophisticated tactics,” said Roberta Braga, founder and executive director of the Digital Democracy Institute of the Americas. “But on the positive side, I am hopeful there is more social resilience to those things.”


Trump’s front-runner status in the Republican presidential primary is top of mind for misinformation researchers who worry that it will exacerbate election misinformation and potentially lead to election vigilantism or violence.

The former president still falsely claims to have won the 2020 election.

“Donald Trump has clearly embraced and fanned the flames of false claims about election fraud in the past,” Starbird said. “We can expect that he may continue to use that to motivate his base.”

Without evidence, Trump has already primed his supporters to expect fraud in the 2024 election, urging them to intervene to “guard the vote” to prevent vote rigging in diverse Democratic cities. Trump has a long history of suggesting elections are rigged if he doesn’t win and did so before voting in 2016 and 2020.

That continued wearing away of voter trust in democracy can lead to violence, said Bret Schafer, a senior fellow at the nonpartisan Alliance for Securing Democracy, which tracks misinformation.

“If people don’t ultimately trust information related to an election, democracy just stops working,” he said. “If a misinformation or disinformation campaign is effective enough that a large enough percentage of the American population does not believe that the results reflect what actually happened, then Jan. 6 will probably look like a warm-up act.”


Election officials have spent the years since 2020 preparing for the expected resurgence of election denial narratives. They’ve dispatched teams to explain voting processes, hired outside groups to monitor misinformation as it emerges and beefed up physical protections at vote-counting centers.

In Colorado, Secretary of State Jena Griswold said informative paid social media and TV campaigns that humanize election workers have helped inoculate voters against misinformation.

“This is an uphill battle, but we have to be proactive,” she said. “Misinformation is one of the biggest threats to American democracy we see today.”

Minnesota Secretary of State Steve Simon’s office is spearheading #TrustedInfo2024, a new online public education effort by the National Association of Secretaries of State to promote election officials as a trusted source of election information in 2024.

His office also is planning meetings with county and city election officials and will update a “Fact and Fiction” information page on its website as false claims emerge. A new law in Minnesota will protect election workers from threats and harassment, bar people from knowingly distributing misinformation ahead of elections and criminalize people who non-consensually share deepfake images to hurt a political candidate or influence an election.

“We hope for the best but plan for the worst through these layers of protections,” Simon said.

In a rural Wisconsin county north of Green Bay, Oconto County Clerk Kim Pytleski has traveled the region giving talks and presentations to small groups about voting and elections to boost voters’ trust. The county also offers equipment tests in public so residents can observe the process.

“Being able to talk directly with your elections officials makes all the difference,” she said. “Being able to see that there are real people behind these processes who are committed to their jobs and want to do good work helps people understand we are here to serve them.”

Fernando reported from Chicago. Associated Press writer Christina A. Cassidy in Atlanta contributed to this report.

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.