Per a February 2025 Pew Research Center survey, a majority of American workers are more worried than hopeful about the use of AI in the workplace.
In June 2024, misinformation watchdog NewsGuard identified more than 800 websites that use AI to produce "unreliable news content." The sites often have bland-sounding names modeled after actual news outlets, and they distribute articles in multiple languages that could be mistaken for the work of human writers.
Brand new from ContentCredentials.org: a tool that lets you upload an image, inspect its content credentials in detail, and see how it has changed over time. It's not 100% foolproof, but it's a good complement to reverse image search for determining the history and usage of an image.
A June 2025 study posted to the preprint server arXiv yielded extremely concerning results about the effects of long-term AI use on the brain's cognitive and analytical abilities. Over time, study subjects showed significantly reduced capacity to learn, understand, and recall information due to diminished activity in the brain's neural pathways. This is the first of what are sure to be many studies on this topic, but with AI chatbots already returning false responses to simple queries, these reported effects on people's ability to recognize and avoid misinformation make it all the more likely that misinformation and disinformation will become ubiquitous in our culture. Stay tuned.
Image: https://pixy.org/4488352/ via a CC BY-NC-ND 4.0 license
Kosmyna, N., Hauptmann, E., Yuan, Y.T., Situ, J., Liao, X., Beresnitzky, A. V., Braunstein, I. & Maes, P. (2025). Your brain on ChatGPT: accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv.
Wong, A. (2025). The entire Internet is reverting to beta. The Atlantic.
Using AI to fact check what you see on social media? The experts say not so fast.
What to do instead? Fact check the old-fashioned way - see the Tips & Tools page of this guide for more info!
With all the headlines around AI and misinformation, it's hard to trust anything generated by AI. Per security experts Davi Ottenheimer and Bruce Schneier, data integrity is critical to the reliability of AI-generated materials going forward. In a new article published in IEEE Spectrum, they identify four key areas of focus:
1) Input integrity - the quality and authenticity of data entering a system.
2) Processing integrity - the correctness with which systems transform inputs into outputs.
3) Storage integrity - the correctness of information as it's stored and communicated.
4) Contextual integrity - the appropriate flow of information according to the norms of its larger context.
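Of these, storage integrity is the easiest to illustrate concretely: record a cryptographic hash when data is written, then re-check it when the data is read, so any alteration in between is detected. A minimal Python sketch (the function names and sample record are illustrative, not from the Ottenheimer and Schneier article):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest recorded at the time the data is stored."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, stored_digest: str) -> bool:
    """Re-hash the data on retrieval and compare against the stored digest."""
    return fingerprint(data) == stored_digest

record = b"Verified news article text entering an AI pipeline"
digest = fingerprint(record)

assert verify(record, digest)              # untampered data passes the check
assert not verify(record + b"!", digest)   # any alteration is detected
```

A real system would store the digest separately from the data (and typically sign it) so an attacker can't alter both together, but the core check is the same comparison shown here.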
Image courtesy Blogtrepreneur, used under Creative Commons Attribution License
Why is data integrity so important?
1) Decision quality - decisions made by AI directly affect our health and safety.
2) Accountability - understanding the causes of failures requires reliable logging, audit trails, and system records.
3) Security relationships between components - without these, malicious agents could impersonate trusted systems, potentially creating cascading failures.
4) Public definitions of safety - integrity provides the basis for meeting legal obligations.
Source: Ottenheimer, D. & Schneier, B. (2025). The agents of tomorrow need data integrity. IEEE Spectrum.
Per journalist Joseph Cox of 404 Media, the personal safety app Citizen is using AI to generate crime alerts with no review by humans prior to publication, resulting in unreliable information and outright fabrications being shared as fact.
Source: Cox, J. (2025). Citizen is using AI to generate crime alerts with no human review. It's making a lot of mistakes. 404 Media.
A September 2024 study in Nature found that three popular chatbot platforms are "more inclined to generate wrong answers than to admit ignorance." Worse still, users frequently fail to fact-check the answers or even notice the errors. Another reminder not to use chatbots for research!
Source: Nature.com
According to information scientist Mike Caulfield, "The latest AI language tools are powering a new generation of spammy, low-quality content that threatens to overwhelm the internet unless online platforms and regulators find ways to rein it in." Caulfield believes it's essential for tech platforms to mitigate AI spam before platforms become completely unusable. Stay tuned.