
Media Literacy & Misinformation: How Misinformation Spreads

Learn how to recognize and prevent misinformation and discover where your news comes from.

Misinformation can spread rapidly and across multiple platforms. Bots, trolls, social media and message boards - even word of mouth - can spread misinformation, disinformation and propaganda. And with the exponential growth of AI, misleading content is increasingly common in everything from product reviews to social media posts. Below you will find information and tools to help you learn to recognize and fight the bots and trolls that help spread "fake news"!

The Role of Bots & Trolls

Per a 2017 study in the University of Michigan Journal of Law Reform, the purveyors of bots and trolls typically do not seek a specific outcome; rather, they deploy them to sow chaos, confusion and paranoia in order to disrupt institutions great and small. Bots and trolls typically operate on online message boards and social media outlets and can be deployed in a variety of situations.

New for 2024 - Artificial Intelligence Spamming

According to information scientist Mike Caulfield, "The latest AI language tools are powering a new generation of spammy, low-quality content that threatens to overwhelm the internet unless online platforms and regulators find ways to rein it in." Caulfield believes it's essential for tech platforms to mitigate AI spam before they become completely unusable. Stay tuned.

What is a Bot?

Graphic image showing four colorful robots.

A Twitter bot is a type of automated software that controls a Twitter account. Automation of such accounts is governed by Twitter's rules on acceptable use. Improper use includes circumventing automation rate limits, a key indicator of nefarious bot behavior.

Spot the Bot

Experts use multiple criteria to judge whether a particular Twitter account is a bot. Learn to recognize some key telltale signs, illustrated in the sketch after this list!

  • Activity – How many posts per day have been generated by the account? The Oxford Internet Institute’s Computational Propaganda team views an average of more than 50 posts a day as suspicious.
  • Suspicious patterns of likes/retweets – very high numbers of likes and retweets relative to original posts, often in nearly identical quantities.
  • Follower imbalance – a high number of followers combined with a low number of accounts followed.

Source: Atlantic Council's Digital Forensic Research Lab
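
To make these heuristics concrete, here is a minimal scoring sketch in Python. It is an illustration only: the account fields and the amplification and follower-ratio cutoffs are assumptions made for the example (only the 50-posts-per-day figure comes from the Oxford Internet Institute criterion above), and real bot-detection tools weigh many more signals.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    """Hypothetical summary of a Twitter account's public activity."""
    posts_per_day: float   # average posts per day
    likes: int             # total likes given by the account
    retweets: int          # total retweets by the account
    original_posts: int    # posts authored by the account itself
    followers: int         # accounts following this one
    follows: int           # accounts this one follows

def bot_warning_signs(a: AccountStats) -> list[str]:
    """Return which of the telltale signs above an account exhibits."""
    signs = []
    # 1. Activity: more than 50 posts a day on average is suspicious,
    #    per the Oxford Internet Institute's Computational Propaganda team.
    if a.posts_per_day > 50:
        signs.append("high daily post volume")
    # 2. Amplification: far more likes/retweets than original posts,
    #    with like and retweet counts suspiciously close to each other.
    #    (The 10x ratio and 10% closeness cutoffs are illustrative.)
    if a.original_posts and (a.likes + a.retweets) / a.original_posts > 10:
        if abs(a.likes - a.retweets) / max(a.likes, a.retweets, 1) < 0.1:
            signs.append("near-identical like/retweet amplification")
    # 3. Imbalance: many followers but few accounts followed.
    #    (The 20x ratio cutoff is illustrative.)
    if a.follows and a.followers / a.follows > 20:
        signs.append("follower/follow imbalance")
    return signs

# Example: an account posting 120 times a day with lockstep like/retweet counts.
suspect = AccountStats(posts_per_day=120, likes=9800, retweets=9700,
                       original_posts=150, followers=40000, follows=90)
print(bot_warning_signs(suspect))  # all three signs fire
```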

Message Boards - 4Chan & Reddit

Historical graphic of logo for 4chan platform, white bear-like figure with an antenna and red eyes.

4chan, an online message board on which users remain anonymous, has been responsible for some of the largest hoaxes, cyberbullying incidents and Internet pranks of recent years, while Reddit has its own troubled history with fake news. While these and other message boards are by no means inherently bad, news and information appearing on them should be treated with caution.

Sources: PC Magazine, Washington Post

Fighting Disinformation & Propaganda

In June 2018, the European Parliament reported on fake news and disinformation attacks on member states, listing key actions to be taken in response. Strategies included:

  • Strengthen media fact-checking operations
  • Support quality journalism at all levels
  • Educate the public on how to recognize and respond to disinformation
  • Highlight prominent examples and expose tactics, methods, and sources
  • Increase security on communications platforms

Debunking and Prebunking - What Works?

Research into how to curb the spread of misinformation, disinformation and conspiracy theories is ongoing and plentiful. Sander van der Linden of the University of Cambridge and Stephan Lewandowsky of the University of Bristol are just two of the growing number of behavioral scientists researching this important topic. Below are some of their recent publications, which include important new findings about the techniques of prebunking and debunking misinformation. Per the latest research, both methods have merit, but prebunking - essentially inoculating people against misinformation before they encounter it - is gaining traction as an important tool. Read more on FirstDraft.

Van der Linden and colleagues have developed a new game called Bad News that highlights the tactic of prebunking. Visit the Tips and Tools page of this guide and check it out!

Karlsson, L. C., Mäki, K. O., Holford, D., Fasce, A., Schmid, P., Lewandowsky, S., & Soveri, A. (2024). Testing psychological inoculation to reduce reactance to vaccine-related communication. Health Communication, 1–9.

Lewandowsky, S., & van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348–384.

Traberg, C. S., Roozenbeek, J., & van der Linden, S. (2022). Psychological inoculation against misinformation: Current evidence and future directions. The ANNALS of the American Academy of Political and Social Science, 700(1), 136–151.

Curb Your Misinformation

A February 2024 study published in the Harvard Kennedy School's Misinformation Review found that debunking misinformation among fringe groups susceptible to it was not effective. Instead, the study found that focusing on limiting consumption of misinformation is far more effective. The strategy works by exposing the unreliability of sources, prompting audiences to reduce their consumption of misinformation so as not to be misled. This groundbreaking research has broad implications for how societies combat misinformation.

And a June 2024 study published in Nature found that exposure to misinformation on social media is not nearly as widespread as has been reported, but tends to concentrate "among a narrow fringe with strong motivations to seek out such information." According to the study, the algorithms that determine what content a person sees in their social media feed actually "tend to push users to more moderate content and to offer extreme content predominantly to those who have sought it out." The study also found that an effective way of deterring misinformation distributed via websites is to compile and release lists of the advertisers that purchase advertising on those sites. Like the study above, it asserts that the best way to counteract the effects of misinformation is to limit the consumption of such content among fringe groups.

To be continued...

Chatbots Spreading Russian Propaganda?

A June 2024 study by NewsGuard found that several major chatbot platforms, including ChatGPT, have been spreading Russian propaganda. NewsGuard co-CEO Steven Brill recommends that "for now, don't trust answers provided by most of these chatbots to issues related to news, especially controversial issues."


Recently, NewsGuard has come under fire from members of the U.S. House Oversight Committee (see the Stanford Internet Observatory box in this guide for more details). This is a developing story - watch this space.

Bots & Trolls

Bots and trolls had a disruptive influence on the 2016 U.S. presidential election, spreading disinformation and propaganda via multiple outlets. We can diminish their influence by educating ourselves and by ignoring them!

Hamilton 2.0

Masthead graphic from the Hamilton 2.0 project website, showing pencil drawing of Vladimir Putin gesturing at a cloud of red Twitter bird logos and Facebook thumbs up logos

Hamilton 2.0 is a website developed by the Alliance for Securing Democracy, a bipartisan, transatlantic initiative that works to publicly document and expose the ongoing efforts by Vladimir Putin and other authoritarian regimes to subvert democracy in the United States, Europe, and globally. Site users can view snapshots of Twitter bot traffic, hot topics and hashtags, and trending domains and URLs.

Beware of Astroturf

 

Color close-up photo of astroturf (artificial turf).

 

Per behavioral scientist Caroline Orr Bueno, an "astroturfed" social media campaign is a coordinated effort to sway public opinion in a particular direction by manipulating people's online behavior across multiple media outlets. Such campaigns typically consist of the following tactics:

  • exploiting online algorithms to amplify certain content and push it onto people’s social media feeds and to the top of search engine results
  • using a high volume of posts to drown out real, reasoned debate between humans and replace it with false content that pushes fringe or extreme viewpoints into the mainstream, ultimately hijacking and derailing public discourse
  • using divisive issues to widen existing cultural divides and promote infighting within a particular movement
  • creating "manufactured consensus,"  or the illusion of popularity, so that an idea or position without much public support appears more popular and mainstream than it actually is

"Astroturf" campaign tactics push a specific message into public debate by artificial means, resulting in:

  • coverage of the issue as a "trending topic" on both social media and traditional media outlets
  • specific words or phrases showing up as "suggested search terms" or hashtags in web and social media searches
  • feedback loops in which suggested search terms attract users who follow and participate in current social media trends
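
The "high volume" and "manufactured consensus" tactics leave a measurable footprint: many nominally independent accounts posting near-identical text at nearly the same time. Below is a minimal, hypothetical Python sketch of that signal. The one-hour window and 20-account threshold are illustrative assumptions, not values drawn from any source cited in this guide.

```python
from collections import defaultdict

def find_coordinated_bursts(posts, window_secs=3600, min_accounts=20):
    """
    Flag message texts posted by many distinct accounts within a short
    time window -- one crude signal of an astroturf campaign.

    posts: iterable of (account_id, unix_timestamp, text) tuples.
    Returns a list of (normalized_text, distinct_account_count) pairs.
    """
    by_text = defaultdict(list)  # normalized text -> [(timestamp, account)]
    for account, ts, text in posts:
        by_text[" ".join(text.lower().split())].append((ts, account))

    flagged = []
    for text, hits in by_text.items():
        hits.sort()  # order by timestamp
        best, start = 0, 0
        for end in range(len(hits)):
            # Shrink the window until it spans at most window_secs.
            while hits[end][0] - hits[start][0] > window_secs:
                start += 1
            distinct = len({acct for _, acct in hits[start:end + 1]})
            best = max(best, distinct)
        if best >= min_accounts:
            flagged.append((text, best))
    return flagged

# Example: 25 accounts pushing the same slogan within ten minutes.
posts = [("acct%d" % i, 1_700_000_000 + i * 20, "Candidate X is a fraud!!")
         for i in range(25)]
print(find_coordinated_bursts(posts))  # [('candidate x is a fraud!!', 25)]
```

Real coordinated-behavior detection combines many such signals with network analysis and human review; this sketch only shows why a burst of duplicate posts is a red flag.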


Arvind Narayanan, a visiting researcher at the Knight First Amendment Institute at Columbia University, is conducting a study on algorithmic amplification that will include an exploration of how algorithms can be manipulated through tactics such as astroturfing. Stay tuned.

#Pizzagate

Color photo of the neon Comet Pizza restaurant sign, yellow letters framed by a yellow star on dark green background

"Pizzagate" is a debunked conspiracy theory that went viral during the 2016 United States presidential election cycle, and an excellent example of how message boards and social media can rapidly spread fake news that does real world harm. Click here for an approximate timeline of the bizarre but true series of events.

Sources: Indictment document, U.S. v. Viktor Borisovich Netyksho, et al. (1:18-cr-215, District of Columbia); BuzzFeed; New York Times; Billboard; Snopes.

Disinformation is Dangerous

“Propaganda is amazing. People can be led to believe anything.” - Alice Walker

Color photo of author Alice Walker speaking at a podium.

Artificial amplification (aka "signal boosting") of media content is a cause for concern because it makes it relatively easy to manipulate mass opinion, which in turn can have disastrous effects on the stability of democratic systems of governance.

What is a Troll?

Color cartoon showing brown "troll" with long nose and a tail, looking at an outstretched hand with a plate of food, with legend "don't feed the troll."

Internet trolling is a behavior in which users post derogatory or false messages in a public forum such as a message board, newsgroup or social media platform. The purpose of trolling is to provoke others into displaying emotional responses or to normalize tangential discussion, either for amusement or for personal gain.

Sources: PC Magazine online encyclopedia, Collins English Dictionary

Social Media & Fake News

In a survey conducted in 2016, 64% of adults said that fake news had caused a great deal of confusion about the basic facts of current issues and events, while 23% said they themselves had shared a made-up news story online. In the United States, 93% of adults get at least some of their news online, via either mobile or desktop applications. Social media are a key driver of traffic to news sites, with Facebook leading the way.

Source: Pew Research Center

Bot Check

Graphic image of a blue bird facing a robotic blue bird in parody of Twitter logo.

 

Not sure if you're dealing with a bot or an actual person? Try these tools:

Bot-o-Meter

Twitter Audit

Bot Sentinel

BotSlayer

NOTE: There is no foolproof bot-detection app as of yet; the tools above may yield false positives or false negatives. Use your best judgment!

How to Recognize a Twitter Bot

Here are a couple of articles on ways to spot a Twitter bot: