
Does trust require truth? Facts? Explanations of the speaker’s reasoning and sources? Or just because “(A)I said it’s so” – authoritatively?
So, what could go wrong? [1] Hardly anyone understands how their smartphone works (as with most electronics, eh). Nothing new there for technology … yet nobody likes to be conned or fooled … but is resistance futile with AI? [2]
I’m reminded of the ‘Magic 8 Ball’ toy, where people ask all kinds of yes-no questions and receive brief affirmative (10), neutral (5), or negative (5) answers (reportedly designed by a psychology professor, and printed on an internal 20-sided regular icosahedron die).
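For fun, here’s a minimal Python sketch of that mechanism (the answer wording follows the commonly cited standard list – treat it as illustrative, not official): every question, however earnest, maps to a uniformly random face of the die.

```python
import random

# The classic die: 10 affirmative, 5 neutral, 5 negative answers.
ANSWERS = [
    # 10 affirmative
    "It is certain.", "It is decidedly so.", "Without a doubt.",
    "Yes definitely.", "You may rely on it.", "As I see it, yes.",
    "Most likely.", "Outlook good.", "Yes.", "Signs point to yes.",
    # 5 neutral / non-committal
    "Reply hazy, try again.", "Ask again later.", "Better not tell you now.",
    "Cannot predict now.", "Concentrate and ask again.",
    # 5 negative
    "Don't count on it.", "My reply is no.", "My sources say no.",
    "Outlook not so good.", "Very doubtful.",
]
assert len(ANSWERS) == 20  # the 20-sided icosahedron die

def shake(question: str) -> str:
    """The question's content is ignored -- every face is equally likely."""
    return random.choice(ANSWERS)

print(shake("Can I always trust what an AI tells me?"))
```

The point of the toy: the authoritative-sounding answer is entirely decoupled from the question.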
Is there an implicit “buyer beware” label or “Good Housekeeping Seal” warranty?
These articles explore the nature of AI branding.
• Washington Post > “You are hardwired to blindly trust AI. Here’s how to fight it.” by Shira Ovide (Jun 3, 2025) – Decades of research shows our tendency to treat machines like magical answer boxes.
Key terms and points
- Automation bias
- Conversational agility ≠ smartness
- The problem, AI researchers say, is that those warnings conveniently ignore how we actually use technology — as machines that spit out the right “answer.”
- “Generative AI systems have both an authoritative tone and the aura of infinite expertise …”
- We’re all prone to automation bias, especially when we’re stressed or worked up [just trying to survive].
• The Atlantic > “What Happens When People Don’t Understand How AI Works” by Tyler Austin Harper (Jun 6, 2025) [paywall]
Today, Butler’s “mechanical kingdom” [Erewhon] is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT.
It joins another recently released book – The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna – in revealing the puffery that fuels much of the artificial-intelligence business.
Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.
A model is by definition a simplification of something – of a reality. Generative AI uses models. Behind the curtain, under the hood, there be truth-less dragons “which don’t interact with [mirror] the world the way we do.” Shadow mirrors.
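A throwaway Python illustration of that point (toy data, ordinary least squares computed by hand): fit a straight line – a two-number model – to a reality that is actually quadratic. The model is useful, yet structurally blind to everything it discards.

```python
# A model simplifies: summarize a curved "reality" with just 2 numbers.
xs = [0, 1, 2, 3, 4, 5]
ys = [x * x for x in xs]  # the underlying reality is quadratic

# Ordinary least-squares fit of a line y = slope * x + intercept.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# The fit "works" on average while systematically missing the curvature.
for x, y in zip(xs, ys):
    pred = slope * x + intercept
    print(f"x={x}: reality={y:>2}, model={pred:5.1f}, missed by {y - pred:+5.1f}")
```

The residuals are exactly the part of reality the model cannot represent – what stays behind the curtain.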
• CNET > “LLMs and AI Aren’t the Same. Everything You Should Know About What’s Behind Chatbots” by Lisa Lacy, Katelyn Chedraoui (May 31, 2025) – Understanding how LLMs work is key to understanding how AI works.
Key terms: language model (“soothsayer for words”), chatbot, parameter, deep learning, training data, tokens, patterns, search engine.
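To make the “soothsayer for words” idea concrete, here’s a toy bigram sketch in Python. It is emphatically not how production LLMs work (those use deep learning over billions of parameters and sub-word tokens), but it shows the core move: predict a plausible next token from patterns in training data, with no notion of truth.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in the training
# data, then sample the next word in proportion to those counts.
training_data = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(training_data, training_data[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word, weighted by how often it followed `prev`."""
    candidates = counts[prev]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate: start from "the" and keep predicting the next token.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # fluent-looking, pattern-driven, truth-free
```

Scale that pattern-matching up enormously and you get fluency that reads as authority – which is precisely the trust problem the articles above describe.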
Notes
[1] Compare with the stories in the Black Mirror series regarding the consequences of some new technology, an “unending pursuit of scientific and technological advancement.”
[2] Equally futile, eh, like the freemium business model: Freemium 2.0 (ambrosia) – a 21st century fable – Terms of the Tree (resistance is futile).
Image credit: Wiki, Creative Commons Attribution-Share Alike 4.0 International license.
As I learned while working as a public school teacher, intelligence is more than the typical notion of IQ [see Wiki citation below] – the “ability to answer exam questions, solve logical puzzles, or come up with novel answers to knotty math problems.” The current attribution of personality to generative AIs raises the question of emotional intelligence (EI or EQ) and emotional literacy. What might that mean?
One example is the knack (or maturity) to know when a situation or interaction is likely to drift darkly – either inappropriately or beyond one’s skill set – so that it is best handed off to someone else.
• Wired > “GPT-5 Doesn’t Dislike You – It Might Just Need a Benchmark for Emotional Intelligence” by Will Knight (Aug 13, 2025) – User affinity for gen AI models poses a challenge for alignment and engagement.
• Wiki > “Theory of multiple intelligences”
Yet, the above Wired article acknowledges that “chatbots are adept at mimicking engaging human communication.” So, if chatbots adopt the phrases profiled in this CNBC article (below), are their responses authentic? Or just ersatz (pro forma) emotional support? (Even if delivered by an avatar mimicking ‘body’ language and ‘eye’ contact, or by a robot adept at doing so? Cf. the classic Twilight Zone episode “The Lonely.”)
• CNBC > “If you use any of these 4 phrases, you have higher emotional intelligence than most” by Aditi Shrikant (Mar 13, 2024) – EQ isn’t as easy to quantify as other types of skills because empathy and self-awareness are hard to measure.
And providing emotional support typically requires some degree of introspection – the ability to assess one’s own capabilities & limitations (as in mistakes), as well as to share (when appropriate) relevant personal experiences & feelings. But, as this second Wired article points out about AIs: “There’s Nobody Home.”
• Wired > “Why You Can’t Trust a Chatbot to Talk About Itself” by Benj Edwards, Ars Technica (Aug 14, 2025) – You’ll be disappointed if you expect AI to be self-aware – that’s just not how it works.