Proof Threshold: Exploring How Americans Perceive Deepfakes

Internet security research

Technological advancements have shaped society since early humans first crafted tools for hunting and survival. As Bill Gates points out, technology has come a long way and is improving quality of life in once-unimaginable ways, from machines that capture carbon dioxide to refrigerators that suggest recipes.

However, this progression of technology and artificial intelligence has bred a degree of fear and distrust – and for good reason. We've seen a rise in deepfake technology, which began with simple photo-altering apps but now allows for far greater manipulation of the truth.

Sometimes, it’s an attempt at comedy: For example, Sen. Kamala Harris’ national press secretary, Ian Sams, altered a viral photo to show Speaker Nancy Pelosi pointing her finger at Harris. Other times, deepfake technology borders on sexual abuse, splicing victims into pornography to discredit politicians.

Companies like Facebook and Microsoft have taken steps to protect their users from deepfake content, but are the American people concerned? We surveyed 1,011 people familiar with this technology to understand how Americans perceive deepfakes.

Trust what you see?

Deepfakes were defined to survey respondents as “media that is doctored using artificial intelligence-based technology to produce or alter video/image/audio content so that it presents something that didn’t, in fact, occur.”

More than half of the participants in our study were very or extremely concerned about the implications of deepfake technology. This is the equivalent of “fake news”: the beginning of a movement that seeks to break people’s confidence in the truth. After all, what is truth when everything can be altered?

That’s a question people are already wrestling with. Two-thirds of participants believed it will one day be impossible to distinguish a real video from a fake one. While more than 1 in 4 thought fake digital media is already displacing factual information, respondents believed, on average, that it would take about eight years before nothing can be trusted.

Implications of False Information

Twitter set a clear standard when it banned all political ads from its platform in 2019. The decision came after Facebook refused to remove false political ads, suggesting “free expression” is more important than the removal of false information.

According to our findings, more than 3 in 4 Americans were extremely or very concerned about the use of deepfake technology to spread false political information. Their anxiety is well-founded. In 2019, President Trump tweeted a video of Speaker Pelosi “[stammering] through [a] news conference”; although the video was viewed over 2.5 million times, it was fake.

After political misinformation, people most feared that deepfake technology would be used to commit fraud and other digital crimes. Cyber-enabled crimes cost Americans more than $2.7 billion in 2018, and the FBI reported that scams were one of the top three ways money was extorted from victims. Individuals aren’t the only ones at risk: Corporations stand to keep losing millions as artificial intelligence is used to mimic the voices of well-known CEOs.

Twitter’s decision to ban political ads may seem like a disservice to voters, but it may also protect them and their votes in 2020 – and it at least comforts some. Forty-two percent of people believed it is very or extremely likely that deepfakes will be used to mislead voters in 2020. Cambridge Analytica misused the Facebook data of 87 million people during the 2016 election, and it remains unclear what role Facebook will play in either the re-election of President Trump or the welcoming of a new president.

Identity and Impersonation

While the risk deepfake technology poses to ordinary citizens may not be immediate, some laws to protect Americans do exist, including safeguards against harassment and extortion. Even so, 7 in 10 participants in our study said deepfakes should be illegal, and some are taking action themselves.

The No. 1 way people said they would protect themselves from deepfakes is by denying others the ability to tag them in online photos. Forty-four percent of Americans would go so far as to remove all images of their face from the internet, and 42% said they would delete social media content.

Our findings show that the majority of Americans were concerned about deepfakes, but Gen Xers and millennials worried most – perhaps because they’ve spent more of their lives with technology and have seen more of its criminal potential than baby boomers have.

Deepfake Experiences (That You Know of)

From Rep. Alexandria Ocasio-Cortez and Elon Musk to Mark Zuckerberg, politicians, business executives, and celebrities seem to be the main targets of deepfakes. Although some of us have luckily missed out on these images, nearly 50% of survey participants said they’ve come across a celebrity deepfake, and 43% reported seeing a deepfake of a politician.

It’s one thing if someone believes an article from The Onion is factual, but it’s alarming when a news source or a viral Facebook post disseminates deepfakes. Yet, half of the people surveyed said they’ve come across deepfakes being shared as authentic content.

Proof and Perception

Deepfakes concern experts because they can pass a lie off as the truth and lead people to dispute genuine video evidence. What is true when everything on the internet could be a lie?

More than half of Republicans said they would be very or extremely skeptical of video evidence showing President Trump conducting criminal activity, compared to 11% of Democrats.

The sentiment isn’t the same for Mark Zuckerberg. Only 29% of Republicans would be very or extremely skeptical of video evidence showing the Facebook CEO admitting to malicious activity, compared to 18% of Democrats and Independents.

An Obscured Future

Much like facial recognition technology, digital manipulation techniques started out seemingly innocent: photoshopping hips for Instagram “likes” or retouching skin tone in a photo. Deepfakes have since evolved, however, and our findings show the majority of Americans are concerned about the technology being used to spread political misinformation.

However, deepfakes don’t have to be political to pose a threat. Voice technology has been used to extort money from business leaders, and scam calls threaten individuals. While laws offer only limited protection against deepfake technology, you can take personal action – starting by learning how to block robocalls.

Methodology

We conducted a survey of 1,011 Americans who were at least slightly familiar with deepfake technology. Respondents were then asked to answer questions about the possible implications of this technology and their current experiences with it.

Fifty-eight percent of our respondents identified as male, 42% identified as female, and less than 1% identified as a gender not listed in our survey. Respondents ranged in age from 18 to 82 with a mean of 38 and a standard deviation of 11.5.


The findings on this page rely on self-reporting and, as such, are susceptible to exaggeration or selective memory. No statistical testing was performed. The claims listed above are based on means alone and are presented for informational purposes.

Fair Use Statement

The emergence of deepfake technology is alarming, and we want people to have the tools to protect themselves. We welcome the sharing of our content, but please only share for noncommercial reuse and link back to this page so that readers can read the entire study and review our methodology.