
Does trust require truth? Facts? Explanations of the speaker's reasoning and sources? Or just because "(A)I said it's so" – authoritatively?
So, what could go wrong? [1] … hardly anyone understands how their smartphone works (as with most electronics, eh). Nothing new there for technology … yet nobody likes to be conned or fooled … but is resistance futile with AI? [2]
I'm reminded of the 'Magic 8 Ball' toy, where people ask all kinds of yes–no questions and receive brief affirmative (10), neutral (5), or negative (5) answers (the answer set was designed by a psychology professor and printed on a 20-sided icosahedron die inside).
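The toy's mechanism is just a uniform roll over 20 canned answers. A minimal sketch (the answer wordings below are placeholders, not the toy's exact text):

```python
import random

# Sketch of the Magic 8 Ball: one uniform roll over 20 faces,
# split 10 affirmative / 5 neutral / 5 negative.
# Wordings are illustrative placeholders, not the official answer set.
ANSWERS = (
    [("affirmative", f"Yes-type answer #{i}") for i in range(10)]
    + [("neutral", f"Hazy answer #{i}") for i in range(5)]
    + [("negative", f"No-type answer #{i}") for i in range(5)]
)

def shake() -> tuple[str, str]:
    """Return a (tone, text) pair from a uniform roll of the 20-face die."""
    return random.choice(ANSWERS)
```

Note the design: with a uniform die, half of all shakes come back affirmative on average – an agreeable oracle, which may be part of why people keep asking.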
Is there an implicit "buyer beware" label or "Good Housekeeping Seal" warranty?
These articles explore the nature of AI branding.
• Washington Post > "You are hardwired to blindly trust AI. Here's how to fight it." by Shira Ovide (Jun 3, 2025) – Decades of research show our tendency to treat machines like magical answer boxes.
Key terms and points
- Automation bias
- Conversational agility ≠ smartness
- The problem, AI researchers say, is that those warnings conveniently ignore how we actually use technology — as machines that spit out the right “answer.”
- “Generative AI systems have both an authoritative tone and the aura of infinite expertise …”
- We’re all prone to automation bias, especially when we’re stressed or worked up [just trying to survive].
• The Atlantic > “What Happens When People Don’t Understand How AI Works” by Tyler Austin Harper (Jun 6, 2025) [paywall]
Today, Butler’s “mechanical kingdom” [Erewhon] is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT.
It joins another recently released book – The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna – in revealing the puffery that fuels much of the artificial-intelligence business.
Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.
A model is, by definition, a simplification of something – of a reality. Generative AI uses models. Behind the curtain, under the hood, there be truth-less dragons "which don't interact with [mirror] the world the way we do." Shadow mirrors.
• CNET > "LLMs and AI Aren't the Same. Everything You Should Know About What's Behind Chatbots" by Lisa Lacy, Katelyn Chedraoui (May 31, 2025) – Understanding how LLMs work is key to understanding how AI works.
Key terms: language model (“soothsayer for words”), chatbot, parameter, deep learning, training data, tokens, patterns, search engine.
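The "soothsayer for words" idea can be made concrete with a toy bigram model: count which token follows which in training data, then predict the most frequent successor. This is a deliberately tiny sketch of the pattern-matching an LLM does at vastly larger scale; the corpus and whitespace tokenization are illustrative placeholders.

```python
from collections import Counter, defaultdict

# Toy "soothsayer for words": predict the next token purely from
# co-occurrence counts in the training data. No understanding, no
# truth-checking -- just patterns.
corpus = "the cat sat on the mat and the cat slept"
tokens = corpus.split()  # crude whitespace "tokenizer" for illustration

# Count, for each token, which tokens followed it and how often.
successors: defaultdict[str, Counter] = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    successors[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent token seen after `token` in training data."""
    counts = successors[token]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # "cat" -- seen twice after "the", "mat" once
```

Ask it what follows "the" and it confidently answers "cat" – not because that's true of the world, only because it was the most common pattern in what it was trained on. Swap counting for billions of learned parameters and you have the family resemblance to an LLM.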
Notes
[1] Compare with the stories in the Black Mirror series regarding the consequences of some new technology, an “unending pursuit of scientific and technological advancement.”
[2] Equally futile, eh, like the freemium business model: Freemium 2.0 (ambrosia) – a 21st century fable – Terms of the Tree (resistance is futile).



