Understanding the Basics of Misinformation and Disinformation

Explore the motivations and psychology that contribute to the sharing of misinformation and disinformation in online spaces.

Grades K-12 | 10 min

While the terms misinformation and disinformation are similar, there are important differences between them.

Misinformation is information that is factually incorrect but is presented as being true. Misinformation can range from an honest reporting mistake in a news story to an entirely fabricated story that has no basis in truth. The key is that the content contains at least portions of untruth. Misinformation can be unintentional, and people who share misinformation often don’t realize that the information they are passing along is untrue. Regardless, this unintentional sharing can still be harmful because false information is being spread, and that information may end up misleading people.

Disinformation is similar to misinformation, as it also involves the distribution of factually untrue information, but there is one key difference. With disinformation, someone is deliberately and knowingly spreading false information with the intent to deceive someone else. In other words, misinformation becomes disinformation when it is being spread on purpose with an intent to mislead.

An Increasing Problem

Problems with misinformation and disinformation have worsened in recent years because technologies such as artificial intelligence, social media, and deepfakes have made it much easier to create and share fake and misleading content.

NewsGuard, a website that tracks AI-enabled misinformation, reported that as of February 1, 2024, they had identified 676 “unreliable AI-Generated News” websites. These are sites that have “little to no human oversight” and contain mostly AI-generated content. In most cases, they found a strong likelihood that no one is monitoring the quality or truthfulness of the content being posted on these sites. The goal of these platforms is not necessarily to be a serious news source; rather, the main purpose is to generate clickbait, getting people to click on content in order to earn the advertising revenue attached to the pages being visited.

As an example, NewsGuard shared that one of these questionable sites used AI to rewrite a piece of satire about the actions of Israeli Prime Minister Benjamin Netanyahu’s nonexistent psychiatrist. As a satire, the original content was never meant to be conveyed as truth. However, the site took the satirical content and repackaged it as a news story, presenting the fictitious story as fact. That story was later shared on an Iranian broadcast channel, and then it was spread further by users on TikTok, Reddit, and Instagram. It didn’t take long for this fake news story to be shared widely.

MSNBC published an article in December 2023 titled, “AI-Generated Weapons of Mass Misinformation Have Arrived.” The article calls attention to the scale of the problem and how easy it is to spread misinformation. The article cites a NewsGuard study that uncovered a network of over a dozen TikTok accounts that used AI text-to-speech software to spread political and health misinformation in the form of videos. Those videos, with their misleading and untrue content, have been viewed over 300 million times.

The Washington Post also issued cautionary language, calling the use of AI to generate fake news a “misinformation superspreader.” They point out that AI-generated fake news is often about elections, wars, and natural disasters, and that the publication of this type of content is on the rise. In fact, they report that “websites hosting AI-created false articles have increased by more than 1,000 percent” in the last half of 2023. To compound the problem, these sites are typically given generic names that sound legitimate, like Daily Time Update or Ireland Top News. Some of these sites intentionally mix in human-generated articles to give them more credibility and make it less obvious that the majority of their content is AI-generated.


Common Motivations

The motivations for spreading disinformation and misinformation can vary, but there are some noticeable trends.

With disinformation, the two most common reasons for people spreading false information are politics and profit. In politics, propaganda works, whether the information being spread is true or not, and political operatives have learned that they can influence people’s opinions and voting behavior through disinformation campaigns. Political disinformation campaigns are about swaying public opinion, which can then translate into votes and, ultimately, political power.

Money, the second biggest reason for disinformation, often comes down to clicks. The more clicks a website or social media post can draw to an advertiser’s product, the more money the content creator makes. Sensational content and eye-catching headlines that are posted to attract clicks are called clickbait. Once again, it doesn’t matter if the content is accurate and true. All that matters is that the post attracts a click—it is all about the money. Clickbait featuring fake news isn’t new, but generative AI tools like ChatGPT have made it infinitely easier to produce large amounts of fictitious clickbait. What used to take an army of human writers now just takes a well-crafted prompt and an AI chatbot.

The motivations behind spreading misinformation differ from those for disinformation. With misinformation, the misleading content is often spread by victims rather than perpetrators. Those who share misinformation are often doing so unintentionally. They are being targeted by those involved in disinformation campaigns, and they're getting lured in and tricked into believing something to be true. Unfortunately, many of these victims are fooled into believing the fallacious content and passing it along to others.

The Psychology Behind It

Studies have shown that some people are more prone than others to pass along misinformation.

The American Psychological Association (APA) points out that people are much more likely to share misinformation when it aligns with their personal identity, matches their social norms, and elicits a strong emotional response. Studies have shown that content causing anger or outrage typically gets the strongest reaction, and these reactions can quickly translate into impulsive actions, such as liking or sharing a post on social media. With a click of the share button, misinformation can be spread to all of a person's friends and followers. Each of those followers then has an opportunity to share it with their own friends, who can pass it along to still others.

Because sharing is so easy, it doesn't take long for misinformation to go viral. This is especially true in closed online communities, where most members of a group share the same ideological viewpoints. This homogenous environment is sometimes called an "echo chamber" because everyone in the group holds the same core opinions, which leads to similar ideas being shared over and over again throughout the community, like an echo. People living in a virtual echo chamber, such as a social media friends group, often don't hear all sides of a story. Instead, they hear different versions of the same story repeatedly, and when people hear only one point of view, they will often adopt it as their own or use it to reinforce a point of view they already held.

Confirmation bias also plays a role in the spread of misinformation. The APA Dictionary of Psychology defines confirmation bias as "the tendency to gather evidence that confirms preexisting expectations, typically by emphasizing or pursuing supporting evidence while dismissing or failing to see contradictory evidence." An American Psychological Association article from November 2023 states that people are less likely to question information that aligns with their point of view. Beyond that tendency, if people believe the information and it makes them angry, anxious, or scared, they are likely to pass it on. People are also more likely to believe false information that they hear repeatedly, a phenomenon the APA calls the "illusory truth effect."

In this context, a small number of online accounts can cause large amounts of misinformation to circulate. The APA points out that “most online misinformation originates from a small minority of ‘superspreaders.’” These originators typically use social media to motivate people to continue sharing and spreading the inflammatory and false information to others. Those superspreaders drop their misinformation into the social media pool and watch the ripples spread outward.

AVID Connections

This resource connects with the following components of the AVID College and Career Readiness Framework:

  • Instruction
  • Rigorous Academic Preparedness
  • Student Agency

Extend Your Learning