Strategies to Protect Yourself From Misinformation

Review six action steps you can take to identify misinformation and avoid being misled.


The following six action steps can help protect you from misinformation and disinformation.

1. Understand What Misinformation and Disinformation Are

It’s probably not surprising that awareness is the first step in protecting yourself. To identify misinformation and disinformation, you first need to know what they are and what they might look like:

  • Misinformation is information that is factually incorrect but is presented as being true. Misinformation can range from an honest reporting mistake in a news story to an entirely fabricated story that has no basis in truth.
  • Disinformation is misinformation that is deliberately and knowingly spread with the intent to deceive someone else.

It’s also important to understand how these falsehoods can threaten you, both individually and as a member of a larger society. Explore the AVID Open Access article, Understanding the Basics of Misinformation and Disinformation, to learn more.

2. Recognize the Tactics of Disinformation Agents

This action item also involves awareness. If you know how bad actors try to draw you in and manipulate you with disinformation, you can be on the lookout for potential deception. If you don’t know what to look for, misinformation is difficult to notice, and you are much more likely to be fooled.

To help you and your students develop the skills needed to identify disinformation, the University of Cambridge Social Decision-Making Lab has developed an interactive learning experience called Bad News. This game-based activity encourages you to take on the role of a disinformation agent in order to raise awareness of their tactics.

Your challenge is to mislead as many people as possible as you work through various methods of pushing disinformation, including impersonation, emotion, polarization, conspiracy, discredit, and trolling. The goal is not to train you to be a disinformation agent; rather, by going through the experience, you gain an understanding of what disinformation agents are trying to do to you, which can make you a savvier consumer of information. The Bad News game has you take a pre- and post-test, so you can see your growth.

To learn more, explore the AVID Open Access article, Identifying Disinformation Strategies.

3. Know What Tech Companies Are Doing to Protect You

It’s important to know what the large tech companies are doing to protect you, so you don’t overestimate the amount of protection they provide. Recently, United States government officials have been pressuring the major tech and social media companies to address the issue of disinformation.

Each company is taking a slightly different approach. A recent Forbes article by Robert Hart outlines a few of them. Highlights include:

  • OpenAI: The owner of ChatGPT is probably doing the most to prevent disinformation. OpenAI won’t allow applications built for political campaigning or applications that deter people from voting. They are also prohibiting the impersonation of candidates, officials, and governments, and they hope to introduce an authentication tool, though it’s unclear what form that will take.
  • Meta: The parent company of Facebook, Instagram, and WhatsApp is requiring labeling of posts for state-controlled media and blocking any ads by those sources that target people in the U.S. They are also barring new political ads in the final week of the American election campaign, and they’re requiring a disclosure if AI was used to create or alter election-related content.
  • Google: This tech leader is requiring sources to disclose if content has been generated or altered with AI. They are also restricting Gemini, their AI chatbot that was previously called Bard, from answering certain election-related questions.
  • YouTube: This Google product will require labels on AI-generated content.
  • X: The social media platform formerly known as Twitter is relying on crowdsourced fact-checking. In effect, it is leaving users to police themselves.

While labeling policies and election-related restrictions are helpful, they are unlikely to stop those who are serious about spreading disinformation, and consumers shouldn’t rely on these companies’ policies for full protection. Each person needs to own this responsibility by reviewing information with caution and a critical eye.

4. Use Prebunking

Prebunking is the act of warning people ahead of time about specific falsehoods they are likely to encounter. It can also include explaining why a source might spread certain misinformation.

Prebunking is based on inoculation theory: the idea that you can inoculate, or protect, people against misinformation by warning them that it’s coming. In fact, the Bad News game was developed as a way to inoculate people against the harms of misinformation. Shannon Bond of National Public Radio (NPR) explains that the intent of such efforts is to “show people the tactics and tropes of misleading information before they encounter it in the wild—so they’re better equipped to recognize and resist it.”

Prebunking and inoculation strategies attempt to counteract what is called the misinformation effect, which was documented by psychologist and professor Elizabeth Loftus. Her research found that people’s memories don’t always work perfectly and can even be reprogrammed with post-event misinformation or disinformation. Her experiments have shown that the introduction of alternative views or fictional details, even after the fact, tends to change post-event memories, often without the person even being aware that it’s happening.

Sometimes, this introduction of a false narrative happens innocently through unintentional misinformation, but it can also be done intentionally as targeted disinformation. For example, something might happen that damages a person’s status or reputation. In response, the impacted person might start spreading alternate versions of events to confuse the truth. As other people hear these alternate versions, the truth becomes murky, and their memories become reprogrammed to the point where they may honestly believe the new version of the story instead of their original memory.

The misinformation effect is most likely to happen when the new version aligns with a person’s desires or beliefs. Instead of accepting a reality that conflicts with their preexisting beliefs, they end up believing a falsehood because it’s more comfortable and fits how they want to see the world. Loftus suggests that doctored photos, fake news stories, and repeated lies can all confuse a person’s memory of what really happened or was said.

Prebunking is an effort to head off this misinformation effect before it begins. Of course, prebunking isn’t always easy, since it depends on predicting ahead of time what misinformation might be spread.

5. Read a Variety of News Sources

This action step is good media practice in general, but it can be especially effective in preventing people from falling for misinformation or fake news. The more you know, the less likely you are to be fooled. By reading a variety of news sources, you’ll be more in tune with the diverse opinions that develop around controversial issues. This awareness can make it easier to identify an extreme perspective that is being spread through misinformation. As is often the case with fighting misinformation, knowledge is power.

If people don’t listen to diverse opinions and get their news from a variety of places, they are likely to find themselves living in an echo chamber where they hear the same opinions repeated over and over until they believe that’s the unanimous perspective. This is especially true on social media, where they might only be friends with or followers of those who share similar points of view. Even when they do have friends with differing perspectives, if they don’t engage with those connections frequently, the algorithms used by social media platforms will begin to filter those voices out of their feeds and show them more of the types of posts they’ve been engaging with. This phenomenon is called a filter bubble: people get trapped in a bubble of similar perspectives because opposing viewpoints are filtered out.
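
To make the mechanism concrete, here is a minimal sketch of how engagement-based ranking can produce a filter bubble. It is a toy simulation with illustrative names and numbers, not any platform’s actual algorithm.

```python
from collections import Counter

# Toy simulation of a filter bubble (illustrative only; not any
# platform's real ranking algorithm). Posts express viewpoint "A" or "B".
POSTS = [{"id": i, "viewpoint": "A" if i % 2 == 0 else "B"} for i in range(20)]

def rank_feed(posts, engagement):
    """Rank posts by how often the user has engaged with each viewpoint."""
    return sorted(posts, key=lambda p: engagement[p["viewpoint"]], reverse=True)

engagement = Counter()   # counts of the user's clicks, likes, and shares
user_preference = "A"    # this user tends to engage with posts they agree with

for day in range(30):
    feed = rank_feed(POSTS, engagement)
    for post in feed[:5]:  # the user only skims the top of the feed
        if post["viewpoint"] == user_preference:
            engagement[post["viewpoint"]] += 1

top_of_feed = [p["viewpoint"] for p in rank_feed(POSTS, engagement)[:5]]
print(top_of_feed)  # ['A', 'A', 'A', 'A', 'A']: viewpoint "B" is filtered out
```

Because the ranking only ever reinforces what the user already engages with, viewpoint “B” vanishes from the top of the feed after the first few cycles, even though half of the available posts express it.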

6. Verify Suspicious Content

If something seems a little off, it very well may be. If something seems extreme, attempt to verify it before simply believing the message as it was shared.

Central Washington University has created a Misinformation and Fake News research guide to help people discern the validity of news sources. It provides a number of case studies as well as some fake news practice exercises. Throughout the guide, it uses a consistent four-step process that aids in verifying content. This process has been borrowed from Michael Caulfield’s Web Literacy for Student Fact-Checkers. It includes the following:

  • Check Previous Work: See if anyone else has already fact-checked the item in question. Some places to start include FactCheck.org, PolitiFact, The Washington Post Fact Checker, and Snopes. For images, you may want to conduct a reverse image search. (See the sketch after this list for one way to automate this step.)
  • Go Upstream: This refers to finding the original source of the information. Most web content has been repurposed, and finding the origin of the information may help you discern its validity and the reason why it was originally shared.
  • Read Laterally: This one is really important. Once you have found the original source of the information, find out what other people are saying about that source (the publishing company, organization, or individual author). Also, see if you can find other trusted sources that are sharing the same content. Caulfield writes, “The truth is in the network.”
  • Circle Back: If things get tangled and confused, start the process over using what you’ve learned along the way so far. This newly informed perspective may lead you to greater success the second time around.
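
The Check Previous Work step can even be partially automated. Below is a minimal sketch that queries Google’s Fact Check Tools API, a real service that indexes published fact-checks; the API key is a placeholder, and the exact response fields used here are assumptions that should be checked against the current API documentation.

```python
import requests

# Sketch only: searches published fact-checks for a claim using Google's
# Fact Check Tools API. The key is a placeholder, and the response fields
# below are assumptions based on the API's documented shape.
API_KEY = "YOUR_API_KEY"  # placeholder; obtain a real key from Google Cloud
ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def check_previous_work(claim_text):
    """Print any published fact-checks that match the given claim."""
    resp = requests.get(ENDPOINT, params={"query": claim_text, "key": API_KEY})
    resp.raise_for_status()
    for claim in resp.json().get("claims", []):
        for review in claim.get("claimReview", []):
            publisher = review.get("publisher", {}).get("name", "unknown publisher")
            rating = review.get("textualRating", "no rating given")
            print(f"{publisher}: {rating} ({review.get('url', '')})")

check_previous_work("vaccines cause autism")
```

Running this for a claim prints each matching fact-check’s publisher, rating, and URL, which maps directly onto the “places to start” listed in the first step above.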

Caulfield’s process is similar to the SIFT strategy, which is detailed in Hapgood’s Check, Please! Starter Course:

  • Stop: Take a moment to survey a resource to get a sense of what you’re looking at.
  • Investigate the Source: Look at who is presenting the information and consider their credibility in relation to the topic being discussed.
  • Find Trusted Coverage: Find an even more credible source that corroborates the content of your original discovery.
  • Trace Claims, Quotes, and Media to the Original Context: Find out where the content originated and determine if the original source is credible.

Both approaches are helpful in guiding the verification process. The key is to slow down and question the credibility of the content you find.

It’s Up to You

Spreaders of misinformation will keep getting better, and new technology will come along to help them spread it faster and further. Generative AI tools like ChatGPT are just the latest example. It’s up to us to make sure that we are media literate enough to protect ourselves from as much misinformation as possible. It’s also up to us to help our students develop these skills. These six strategies probably won’t fully protect you from misinformation, but they can go a long way toward helping you identify much of it.

AVID Connections

This resource connects with the following components of the AVID College and Career Readiness Framework:

  • Instruction
  • Rigorous Academic Preparedness
  • Student Agency
