
Media Literacy & Misinformation: Artificial Intelligence & Misinformation

Learn how to recognize and prevent misinformation and discover where your news comes from.

A.I. and Misinformation

In June 2024, misinformation watchdog NewsGuard identified more than 800 websites that use A.I. to produce "unreliable news content." The sites often have bland-sounding names modeled after actual news outlets, and they distribute articles in multiple languages that might be mistaken for the work of human writers.

Checking for AI in Images


ContentCredentials.org has released a new tool that lets you upload an image, inspect its content credentials in detail, and see how the image has changed over time. It is not 100% foolproof, but it works well alongside reverse image search for tracing an image's history and usage.


Chatbots Spreading Russian Propaganda?

A June 2024 study by NewsGuard found that several major chatbot platforms, including ChatGPT, have been spreading Russian propaganda. NewsGuard co-CEO Steven Brill recommends that "for now, don't trust answers provided by most of these chatbots to issues related to news, especially controversial issues."


Recently, NewsGuard has come under fire from members of the U.S. House Oversight Committee (see the Stanford Internet Observatory box in this guide for more details). This is a developing story; watch this space.

Sloppy Intelligence?


Photo: a computer monitor on a desk surrounded by clutter. Courtesy Joe Dykes, CC BY-ND 2.0 license.

The use of A.I. to spam internet users has a name! Per a June 2024 article in The New York Times, A.I.-generated material of suspicious origin is now commonly referred to as "slop" by tech insiders. Rolling Stone magazine has also covered this topic.

Lying Chatbots

A September 2024 study in Nature found that three popular chatbot platforms are "more inclined to generate wrong answers than to admit ignorance." Worse still, users are often not fact-checking the answers, or even noticing the errors. Another reminder not to rely on chatbots for research.

Source: Nature.com

New for 2024 - Artificial Intelligence Spamming

According to information scientist Mike Caulfield, "The latest AI language tools are powering a new generation of spammy, low-quality content that threatens to overwhelm the internet unless online platforms and regulators find ways to rein it in." Caulfield believes it is essential for tech platforms to mitigate A.I. spam before they become completely unusable. Stay tuned.