As deepfakes and sophisticated AI technology get more public attention, it’s easy to overlook a simpler type of misinformation: the “cheap fake.”
Cheap fakes are like deepfakes in that they involve the manipulation of media to make something look real when it really isn’t. However, cheap fakes take very little skill to create, and they can be made with simple and accessible tools, rather than sophisticated artificial intelligence tools. In other words, you don’t have to be an AI whiz or have a lot of money to make a cheap fake.
The term cheap fake was coined by Britt Paris and Joan Donovan, who defined it as “an AV manipulation created with cheaper, more accessible software (or, none at all).” Nina Schick, author of Deepfakes: The Coming Infocalypse, describes a cheap fake as “a piece of media that has been crudely manipulated, edited, mislabeled, or improperly contextualized in order to spread disinformation.” The News Literacy Project refers to cheap fakes as “deepfake’s less polished and more believable cousin, which takes real audio, images or videos and cheaply manipulates or decontextualizes them.”
Bret Schafer from Michigan Online at the University of Michigan says, “Although deep fakes are more convincing, cheap fakes have fewer barriers to being created and disseminated and still misinform and mislead.”
There are four main types of cheap fakes: misleading text and captions, altered audio and video, selective editing, and doctored images.
The first type, associating misleading text with a media clip, creates a false context meant to confuse or persuade an audience. A picture, video, or audio clip is paired with text that encourages the reader to misinterpret the attached media, whether through misleading text in an article, a false heading, or an intentionally inaccurate caption. Often the text does not genuinely match the photo, video, or sound bite, and even when the media itself is authentic, a fabricated caption or headline can create a misleading or false narrative.
For example, a spreader of misinformation might attach a provocative caption or headline, like “Corruption Is on the Rise,” to an image of an individual. Even if the person in the image is actually denouncing corruption, the misleading caption can skew public perception and unjustly associate them with the very issue they oppose. This simple tactic can completely change the perceived context without altering the original media. It takes no technical skills and, if done through social media, costs nothing.
Another example is the use of old footage or media from another location to make it look like there is a current problem. Perhaps an out-of-context picture of a homeless encampment is posted on social media with the text, “Community Danger Rises Around Growing Encampments.” Even though the photo is not of the actual area of concern, or perhaps not even from the current time period, its association with the text will lead readers and viewers to assume that it is.
Misleading text, headlines, and captions may be the easiest form of cheap fake to produce: simply post a talking point and attach some form of media that appears to corroborate it, even if it’s totally unrelated. If it looks real, it will often be perceived as true.
The second type, altered audio and video, overlaps considerably with deepfakes. When AI is used to fabricate or alter the media, the result is considered a deepfake. However, when audio or video clips are published out of context or edited with simple, widely available software, they are cheap fakes.
Two of the most famous examples of this type of cheap fake involve Nancy Pelosi when she was Speaker of the United States House of Representatives. In one incident, a video of her speaking was slowed to about 75% of its original speed to make it appear as if she were slurring her words. The altered video was shared widely on social media in an effort to damage her credibility and call her competence into question.
The Associated Press notes another incident where Nancy Pelosi was the victim of a cheap fake, this time involving an edited video of her at the State of the Union address. The footage was edited and remixed to make it appear as if Pelosi tore up her script while the president was presenting a solemn tribute to a military family. While she did tear up her script, she did so after the event had finished, not during that tribute. The misleading edit was intended to portray her as unpatriotic.
A third type of cheap fake, selective editing, manipulates the truth by cutting out portions of a video, image, or audio clip. Without the full context, a short sound bite can easily be misunderstood or misinterpreted.
In one example, a viral video of President-elect Joe Biden was edited to make it sound as if he were admitting to fraud. In reality, Biden’s comments described a program designed to protect voters from voter fraud. By removing the full context, an authentic piece of media was transformed into a cheap fake, and it was created with simple and inexpensive video editing software.
Cropping of an image can also lead to misinformation. For instance, a photograph of a political rally could be cropped to show only a small group of attendees, suggesting poor turnout. The full image, however, might reveal a much larger crowd. This type of selective cropping could be used to mislead the audience about the event’s popularity and support. The cropped image is only telling part of the story, skewing the truth.
The fourth type of cheap fake, the doctored image, goes far beyond cropping. It can include digitally removing part of a picture, changing a background, or pasting something into a scene that was never actually there. This process continues to become cheaper and easier, even for a novice.
Doctored photos can be crudely produced and still fool people. Seeing is believing, after all, and many consumers lack the time or interest to study media in any real detail.
After Hurricane Harvey in 2017, an image supposedly showed a shark swimming along a flooded Houston highway. Although circulated widely on social media, the image was actually a cheap fake, and it has since resurfaced during natural disasters around the world so many times that it has become a running joke. Still, some people continue to believe it’s real, and in the original Houston incident, viewers mistakenly believed the displaced shark was a real consequence of hurricane flooding.
Similarly, a cheap fake photo of former President Donald Trump being arrested has circulated on social media. At first glance, it looks authentic. A closer look at his neck and hand, however, reveals that something doesn’t look natural in the photo. In fact, someone pasted the former president’s likeness into the scene to make it appear that he was being arrested; the scene never happened. While this type of image is increasingly generated with artificial intelligence tools, it can also be created as a cheap fake using inexpensive photo editing software and fairly little expertise.
Although cheap fakes are less sophisticated than deepfakes, they are still being used to effectively spread misinformation. Even if images, video, or audio seem too outrageous to be true, there are people who still believe them. The News Literacy Project explains this phenomenon, saying: “Deeply rooted biases tempt people to quickly jump to conclusions when they see social media content that appears to confirm their preconceived beliefs. This is especially true when it paints political foes in an unflattering light.”
The News Literacy Project goes on to explain how we might begin to combat this form of misinformation, writing, “. . . viral social media posts rarely contain enough information to accurately convey complex realities and are often presented out of context and attached to baseless and sensational assertions. By practicing a little restraint and seeking out additional information, such as standards-based news coverage or a high-quality fact check, people can restore the necessary context to viral outrage posts.”
The continued use of cheap fakes is one more reason to teach students to be savvy consumers of information. Educators can encourage students to ask probing questions, such as: Who created this? Why? Could this have been edited? What’s the source?
With a healthy amount of skepticism, students can learn to confirm and verify information, rather than being misled by cheap fakes and other forms of misinformation.
AVID Connections
This resource connects with the following components of the AVID College and Career Readiness Framework:
- Instruction
- Rigorous Academic Preparedness
- Student Agency
Extend Your Learning
- Worried About Political Deepfakes? Beware the Spread of ‘Cheapfakes’ (WIRED)
- Fake News, Cheapfakes, and Deepfakes: Evaluating AV Media (Cornell University Library)
- Get Smart About News (News Literacy Project)
- Cheapfakes: Experts Confront the Growing Threats Posed by DIY Deepfakes (Antispoofing)
- Deepfakes and Cheap Fakes (Data & Society)