Deepfakes are digitally produced videos, images, and audio recordings that look or sound real but are actually fake. Some deepfakes are relatively harmless, but others can be quite costly, as when a finance worker in Hong Kong was tricked into transferring $25.6 million to a fraudulent account after a video call featuring a deepfake of the company's chief financial officer. With the presidential elections on the horizon, U.S. government officials and citizens alike are sounding the alarm about how deepfake technology may be used to deceive voters and influence the outcome of an election.
Because deepfakes are proliferating, it is more important than ever for all of us, including our students, to develop the media literacy skills that help separate fact from fiction. While no approach is foolproof, the following seven strategies can help identify deepfakes.
Seven Strategies
1. Use a Deepfake Detector

Deepfake detectors are computer programs that analyze media content and estimate whether it is real or fake. Intel released the first real-time deepfake video detection program in November 2022, right around the time that ChatGPT was being rolled out to the public. This product, called FakeCatcher, identifies deepfake videos by detecting the subtle color changes in a person's face that occur as the heart pumps blood through the body. If those color changes are detected, the person displayed in the video is likely real. If no blood flow is identified, the video is likely a deepfake. Intel claims that FakeCatcher can identify deepfakes with 96% accuracy.
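Intel hasn't published FakeCatcher's code, but the core idea, detecting a faint, heartbeat-paced color change in the face, can be sketched in a few lines of Python. The toy sketch below assumes a hypothetical video file and a fixed face region, and its decision threshold is arbitrary; real tools track the face and use far more robust signal processing.

```python
# Toy sketch of pulse-based deepfake screening (photoplethysmography).
# Assumptions: "suspect_clip.mp4" is a hypothetical file, the face is
# assumed to sit in a fixed region of the frame (real tools track it),
# and the threshold of 4 is arbitrary. This is NOT Intel's implementation.
import numpy as np
import cv2  # pip install opencv-python

cap = cv2.VideoCapture("suspect_clip.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)
greens = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    face = frame[100:300, 200:400]       # assumed face region (no tracking)
    greens.append(face[:, :, 1].mean())  # green channel reflects blood flow best
cap.release()

# A real pulse shows up as a dominant frequency in the human heart-rate
# band (~0.7-4 Hz, roughly 42-240 beats per minute).
signal = np.asarray(greens) - np.mean(greens)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
peak_ratio = spectrum[band].max() / spectrum[1:].mean()
print("Pulse-like signal found" if peak_ratio > 4 else "No clear pulse signal")
```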
While Intel's software is considered one of the best deepfake detectors available, there are others on the market. For example, Sentinel AI, created by an Estonian cybersecurity firm, is used by governments and media outlets to combat disinformation campaigns. Microsoft has developed the Microsoft Video Authenticator, which detects deepfakes by recognizing "subtle grayscale changes" in videos "that are usually missed by normal eyes." Another approach, phoneme-viseme mismatch detection, flags moments when mouth shapes (visemes) don't match the spoken sounds (phonemes).
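To see what a phoneme-viseme (lip-sync) check is getting at, consider a small, self-contained toy example. Everything in it is synthetic: a sine wave stands in for measured speech loudness, and random noise stands in for the mouth movements of a badly synced fake. In a genuine clip, the two signals rise and fall together.

```python
# Self-contained toy: everything here is synthetic. A sine wave stands in
# for the loudness of real speech, and the two "mouth" tracks stand in
# for lip-opening measurements from a face tracker.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 150)                      # 5 seconds of 30 fps video
loudness = np.abs(np.sin(2 * np.pi * 1.5 * t))  # stand-in speech envelope

genuine_mouth = loudness + 0.1 * rng.normal(size=t.size)  # moves with the audio
dubbed_mouth = rng.random(t.size)                         # moves independently

def sync_score(mouth: np.ndarray, audio: np.ndarray) -> float:
    """Pearson correlation between mouth opening and loudness."""
    return float(np.corrcoef(mouth, audio)[0, 1])

print(f"genuine clip sync: {sync_score(genuine_mouth, loudness):.2f}")  # near 1.0
print(f"dubbed clip sync:  {sync_score(dubbed_mouth, loudness):.2f}")   # near 0.0
```

A strong correlation suggests the mouth really produced the audio; a score near zero is a red flag worth investigating.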
2. Look Closely

You don't need fancy detection software for this strategy. Instead, use your eyes, be observant, and look for anything that doesn't appear quite right. As the MIT Media Lab points out, "there's no single tell-tale sign of how to spot a fake." However, with videos, and sometimes pictures, there are common discrepancies you can look for to spot a potential deepfake. Ask yourself these types of questions:
- Is the skin too smooth or too wrinkly compared to the rest of the face?
- Are there shadows where you would not expect them?
- If the subject is wearing glasses that cause a glare, does the glare look natural?
- Do facial hair and moles look real?
- Does blinking look natural, or does the subject blink too often or too little?
- Do lip movements look natural?
If you're looking at a picture, you can ask a more general question, such as "Does anything look off?" That might mean noticing whether the people in the image have extra fingers or missing toes, details that AI programs often get wrong, or spotting discrepancies like shadows that don't match the position of the sun.
3. Listen Carefully

This strategy works for both video and audio recordings. Does the speaker's voice sound natural? Does the recording reflect the subject's natural speaking cadence? Are there audio artifacts indicating that the audio has been manipulated?
An audio artifact is an artificial-sounding blip or sudden change in pitch caused by manipulation of the audio file. If something sounds off, it's a reason to question the source. It might simply turn out to be a poor-quality recording, but it could also indicate that the audio is a deepfake.
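For readers who want to experiment, here is one rough, automatable version of "listening for artifacts": flag abrupt jumps in the frequency content between adjacent audio frames, which natural speech rarely produces. The file name, the mono-audio assumption, and the threshold below are all illustrative, not part of any particular detection product.

```python
# Toy sketch of artifact hunting: flag abrupt jumps in the spectral
# centroid between adjacent 20 ms frames. "suspect_audio.wav" is a
# hypothetical mono file, and the 1500 Hz threshold is arbitrary.
import numpy as np
from scipy.io import wavfile  # pip install scipy

rate, audio = wavfile.read("suspect_audio.wav")
audio = audio.astype(float)

frame = int(0.02 * rate)  # 20 ms analysis frames
centroids = []
for start in range(0, len(audio) - frame, frame):
    spectrum = np.abs(np.fft.rfft(audio[start:start + frame]))
    freqs = np.fft.rfftfreq(frame, d=1 / rate)
    centroids.append((freqs * spectrum).sum() / (spectrum.sum() + 1e-9))

# Natural speech drifts smoothly; a big jump between adjacent frames
# marks a spot worth listening to closely.
jumps = np.abs(np.diff(centroids))
for i in np.where(jumps > 1500)[0]:
    print(f"Possible splice or glitch near {i * 0.02:.2f} s")
```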
4. Practice Spotting Deepfakes

It can be helpful to practice spotting deepfakes, and it's especially helpful to examine both examples and nonexamples. The more examples you study, the more clues you will likely discover for identifying deepfakes.
The MIT Media Lab has created a website for this type of practice, Detect Fakes, which contains a series of videos and text selections related to the current presidential election cycle. All examples are currently based on content from Joe Biden and Donald Trump, and the content is divided evenly between the two men to reduce any implicit or perceived bias. The site works by prompting users through 20 examples and asking them to determine if each sample is real or fake. No account is required, and the content is school-appropriate, so this site could even be used with students.
5. Compare With What You Already Know

One of the best strategies is to compare what you are seeing and hearing with what you already know.
If you’ve been following the presidential elections, for example, you will likely be familiar with the talking points that each candidate favors. You’ll probably also have a good idea which policies each candidate would support and even their usual tone of voice. When you watch a video and hear one of the candidates speak, you can then ask yourself, “Based on what I know about this person, does this clip seem believable?”
This is not a perfect system, of course, but if something seems off or out of character, it deserves a closer look before you accept the clip as authentic. Ideally, this will lead you to do a little follow-up research and cross-reference the content against reliable sources. With this technique, the more you know, the better you can judge content and detect inconsistencies.
Because the effectiveness of this approach depends on depth of knowledge, it's important to remember that students may not yet have enough real-world experience to make fully informed evaluations. This isn't unique to students, either. The same could be said for adults who lack knowledge in a specific area or haven't kept up with world news. No matter how much background knowledge you possess, a savvy media consumer corroborates content with at least one other reliable source. If you can't do this, be especially critical of the content's credibility.
6. Check the Source

It's generally best practice to start your analysis by identifying the source of any new information you encounter. If you're not sure where the content came from or who produced it, it is difficult to know whether it's trustworthy. This strategy is a good starting point for any type of credibility check, not just for deepfakes.
Ask yourself questions like:
- Who created this media?
- Do I recognize the creator as a trusted source?
- Is the site sponsored by a credible organization, company, or government?
- Is the media from this source, or is it being shared secondhand?
If the source is in question, the media content should also be questioned and scrutinized. Credible sources and authentic media producers don’t typically hide their identity.
7. Maintain a Healthy Skepticism

This last strategy is an overarching one, and it can serve you and your students well when using any of the other six approaches to identifying deepfakes. The goal is not to create cynical consumers; rather, a healthy dose of skepticism can help us all think more critically about the media we consume.
As technology advances, deepfakes will become easier to create and likely more convincing as well. In turn, strategies for detecting deepfakes will need to evolve to keep up. For now, these seven strategies are a good place to start, as they can help you and your students better identify content that might look real but is, in fact, a deepfake.
AVID Connections
This resource connects with the following components of the AVID College and Career Readiness Framework:
- Instruction
- Rigorous Academic Preparedness
- Student Agency
Extend Your Learning
- How to Detect Deepfake Videos Like a Fact-Checker (PolitiFact)
- Detect DeepFakes: How to Counteract Misinformation Created By AI (MIT Media Lab)
- Deepfakes (U.S. Government Accountability Office)