Misinformation can spread rapidly and across multiple platforms. Bots, trolls, social media and message boards - even word of mouth - can spread misinformation, disinformation and propaganda. And with the exponential growth of AI, misleading content is increasingly common in everything from product reviews to social media posts. Below are information and tools to help you learn to recognize and fight the bots and trolls that help spread "fake news"!
Per a 2017 study in the University of Michigan Journal of Law Reform, the purveyors of bots and trolls typically do not seek a specific outcome; rather, they deploy them to sow chaos, confusion and paranoia in order to disrupt institutions great and small. Bots and trolls can typically be found on online message boards and social media outlets, and can be deployed in a variety of situations.
According to information scientist Mike Caulfield, "The latest AI language tools are powering a new generation of spammy, low-quality content that threatens to overwhelm the internet unless online platforms and regulators find ways to rein it in." Caulfield believes it's essential for tech platforms to mitigate AI spam before platforms become completely unusable. Stay tuned.
A Twitter bot is a type of automated software that controls a Twitter account. Automation of such accounts is governed by Twitter's rules of use; improper usage includes circumventing automation rate limits, a key indicator of nefarious bot behavior.
Experts use multiple criteria to judge whether a particular Twitter account is a bot. Learn to recognize some key telltale signs!
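Curious how such criteria translate into practice? Below is a minimal Python sketch combining a few commonly cited signals - posting volume, account age, default avatar, and follower ratio. The Account fields, thresholds, and sample data are hypothetical illustrations, not a real detection tool or any platform's API:

```python
# A minimal sketch of common bot heuristics, for illustration only.
# Fields and thresholds are hypothetical, not a proven detection method.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    days_old: int
    total_tweets: int
    has_default_avatar: bool
    followers: int
    following: int

def bot_signals(acct: Account) -> list[str]:
    """Return a list of telltale signs this account may be automated."""
    signals = []
    tweets_per_day = acct.total_tweets / max(acct.days_old, 1)
    if tweets_per_day > 50:  # sustained high volume is hard for a human
        signals.append(f"posts ~{tweets_per_day:.0f} tweets/day")
    if acct.has_default_avatar:
        signals.append("no profile photo")
    if acct.days_old < 30 and acct.total_tweets > 1000:
        signals.append("new account with unusually heavy activity")
    if acct.following > 0 and acct.followers / acct.following < 0.01:
        signals.append("follows many accounts but has almost no followers")
    return signals

suspect = Account("a8f3k2x9", days_old=12, total_tweets=4800,
                  has_default_avatar=True, followers=3, following=900)
print(bot_signals(suspect) or "no obvious bot signals")
```

No single signal is conclusive - plenty of real people trip one of these checks - which is why experts weigh multiple criteria together.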
4chan, an online message board on which users remain anonymous, has been responsible for some of the largest hoaxes, cyberbullying incidents and Internet pranks of recent years, while Reddit has its own troubled history with fake news. While these and other message boards are by no means inherently bad, news and information appearing on them should be treated with caution.
Sources: PC Magazine, Washington Post
In June 2018, the European Parliament reported on fake news and disinformation attacks on member states, listing key actions to be taken in response. Strategies included:
Research into how to curb the spread of misinformation, disinformation and conspiracy theories is ongoing and plentiful. Sander van der Linden of the University of Cambridge and Stephan Lewandowsky of the University of Bristol are just two of the growing number of behavioral scientists researching this important topic. Below are some of their recent publications, which include important new discoveries about the techniques of prebunking and debunking misinformation. Per the latest research, both methods have merit, but prebunking - essentially inoculating people against misinformation before they encounter it - is gaining traction as an important tool. Read more on FirstDraft.
Van der Linden and colleagues have developed a new game called Bad News that highlights the tactic of prebunking. Visit the Tips and Tools page of this guide and check it out!
Karlsson, L. C., Mäki, K. O., Holford, D., Fasce, A., Schmid, P., Lewandowsky, S., & Soveri, A. (2024). Testing psychological inoculation to reduce reactance to vaccine-related communication. Health Communication, 1–9.
Lewandowsky, S., & van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348–384.
Traberg, C. S., Roozenbeek, J., & van der Linden, S. (2022). Psychological inoculation against misinformation: Current evidence and future directions. The ANNALS of the American Academy of Political and Social Science, 700(1), 136–151.
A February 2024 study published in The Harvard Kennedy School's Misinformation Review found that debunking misinformation among fringe groups susceptible to it was not effective. Instead, the study found that focusing on limiting consumption of misinformation is far more effective. The strategy works by exposing the unreliability of sources, leading audiences to reduce their consumption of misinformation so as not to be misled. This groundbreaking research has broad implications for how societies combat misinformation.
And a June 2024 study published in Nature found that exposure to misinformation on social media is not nearly as widespread as has been reported, but tends to concentrate "among a narrow fringe with strong motivations to seek out such information." According to the study, the algorithms that determine what content a person sees in their social media feed actually "tend to push users to more moderate content and to offer extreme content predominantly to those who have sought it out." The study also found that an effective way of deterring misinformation distributed via websites is to compile and release lists of advertisers that purchase advertising on those sites. Like the study above, it asserts that the best way to counteract the effects of misinformation is to limit its consumption among fringe groups.
To be continued...
A June 2024 study by NewsGuard found that several major chatbot platforms, including ChatGPT, have been spreading Russian propaganda. NewsGuard co-CEO Steven Brill recommends that "for now, don't trust answers provided by most of these chatbots to issues related to news, especially controversial issues."
Recently, NewsGuard has come under fire from members of the U.S. House Oversight Committee (see Stanford Internet Observatory box in this guide for more details). This is a developing story - watch this space.
Bots and trolls had a disruptive influence on the 2016 U.S. presidential election, spreading disinformation and propaganda via multiple outlets. We can diminish their influence by educating ourselves and by ignoring them!
Hamilton 2.0 is a website developed by the Alliance for Securing Democracy, a bipartisan, transatlantic initiative that works to publicly document and expose the ongoing efforts by Vladimir Putin and other authoritarian regimes to subvert democracy in the United States, Europe, and globally. Site users can view snapshots of Twitter bot traffic, hot topics and hashtags, and trending domains and URLs.
Per behavioral scientist Caroline Orr Bueno, an "astroturfed" social media campaign is a coordinated effort to sway public opinion in a particular direction by manipulating people's online behavior on multiple media outlets. Such campaigns typically consist of the following tactics:
"Astroturf" campaign tactics push a specific message into public debate by artificial means, resulting in:
Knight Institute at Columbia University visiting researcher Arvind Narayanan is conducting a study on algorithmic amplification, which will include an exploration of how algorithms can be manipulated in various ways such as "astroturfing." Stay tuned.
"Pizzagate" is a debunked conspiracy theory that went viral during the 2016 United States presidential election cycle, and an excellent example of how message boards and social media can rapidly spread fake news that does real world harm. Click here for an approximate timeline of the bizarre but true series of events.
Sources: Indictment document, U.S. v. Viktor Borisovich Netyksho, et al (1:18-cr-215, District of Columbia); Buzzfeed; New York Times; Billboard; Snopes.
“Propaganda is amazing. People can be led to believe anything.” - Alice Walker
Artificial amplification (aka "signal boosting") of media content is a cause for concern because it makes it relatively easy to manipulate mass opinion, which in turn can have disastrous effects on the stability of democratic systems of governance.
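One reason artificial amplification can be detected at all is that it leaves statistical fingerprints. The Python sketch below shows the simplest such fingerprint - many distinct accounts posting near-identical text within minutes of one another. The sample posts, thresholds, and function are hypothetical illustrations; real platforms rely on far richer signals:

```python
# A minimal sketch of one way amplification can be spotted: many accounts
# posting near-identical text in a short window. Data are hypothetical.
from collections import defaultdict

posts = [  # (account, minute_posted, text) - hypothetical sample feed
    ("user_01", 0, "Breaking: candidate X caught in scandal!"),
    ("user_02", 1, "Breaking: candidate X caught in scandal!"),
    ("user_03", 2, "Breaking: candidate X caught in scandal!"),
    ("human_a", 5, "Anyone know a good pizza place downtown?"),
]

def find_amplified(posts, min_accounts=3, window_minutes=10):
    """Flag messages pushed by many distinct accounts in a short span."""
    by_text = defaultdict(list)
    for account, minute, text in posts:
        by_text[text.strip().lower()].append((account, minute))
    flagged = []
    for text, hits in by_text.items():
        accounts = {a for a, _ in hits}
        minutes = [m for _, m in hits]
        if len(accounts) >= min_accounts and max(minutes) - min(minutes) <= window_minutes:
            flagged.append(text)
    return flagged

print(find_amplified(posts))  # -> ['breaking: candidate x caught in scandal!']
```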
Internet trolling is a behavior in which users post derogatory or false messages in a public forum such as a message board, newsgroup or social media platform. The purpose of trolling is to provoke others into emotional responses or to derail on-topic discussion, either for amusement or personal gain.
Sources: PC Magazine online encyclopedia, Collins English Dictionary
In a survey conducted in 2016, 64% of U.S. adults said that fake news had caused a great deal of confusion about the basic facts of current issues and events, while 23% said they themselves had shared a made-up news story online. In the United States, 93% of adults get at least some of their news online, whether via mobile or desktop applications. Social media are a key driver of traffic to news sites, with Facebook leading the way.
Source: Pew Research Center
Not sure if you're dealing with a bot or an actual person? Try these tools:
NOTE: There is no foolproof bot-detection app as of yet; the above tools may yield false positives or negatives. Use your best judgment!
Here are a couple of articles on ways to spot a Twitter bot: