
Does trust require truth? Facts? Explanations of the speaker’s reasoning & sources? Or just because “(A)I said it’s so” – authoritatively?
So, what could go wrong? [1] … hardly anyone understands how their smartphone works (as with most electronics, eh). Nothing new there for technology … yet nobody likes to be conned, fooled … but is resistance futile with AI? [2]
I’m reminded of the ‘Magic 8 Ball’ toy, where people ask all kinds of yes-no questions and receive brief affirmative (10), neutral (5), or negative (5) answers (the answers designed by a psychology professor and delivered internally by a 20-sided icosahedron die).
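For the curious, here is a minimal sketch in Python of how that 20-sided die behaves (the answer wordings below are the commonly cited ones and vary by edition of the toy): every face is equally likely, and the question is never even read.

```python
import random

# Commonly cited Magic 8 Ball answers (wording varies by edition of the toy)
AFFIRMATIVE = [
    "It is certain.", "It is decidedly so.", "Without a doubt.", "Yes definitely.",
    "You may rely on it.", "As I see it, yes.", "Most likely.", "Outlook good.",
    "Yes.", "Signs point to yes.",
]
NEUTRAL = [
    "Reply hazy, try again.", "Ask again later.", "Better not tell you now.",
    "Cannot predict now.", "Concentrate and ask again.",
]
NEGATIVE = [
    "Don't count on it.", "My reply is no.", "My sources say no.",
    "Outlook not so good.", "Very doubtful.",
]

# The 20-sided die: every face equally likely, so 10/20 yes, 5/20 neutral, 5/20 no.
FACES = AFFIRMATIVE + NEUTRAL + NEGATIVE

def ask(question: str) -> str:
    """Roll the die -- the question itself is never consulted."""
    return random.choice(FACES)

print(ask("Can I trust this answer?"))
```

An authoritative-sounding reply, delivered with total indifference to the question – which is rather the point.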
Is there an implicit “buyer beware” label or “Good Housekeeping Seal” warranty?
These articles explore the nature of AI branding.
• Washington Post > “You are hardwired to blindly trust AI. Here’s how to fight it.” by Shira Ovide (June 3, 2025) – Decades of research shows our tendency to treat machines like magical answer boxes.
Key terms and points
- Automation bias
- Conversational agility ≠ smartness
- The problem, AI researchers say, is that those warnings conveniently ignore how we actually use technology — as machines that spit out the right “answer.”
- “Generative AI systems have both an authoritative tone and the aura of infinite expertise …”
- We’re all prone to automation bias, especially when we’re stressed or worked up [just trying to survive].
• The Atlantic > “What Happens When People Don’t Understand How AI Works” by Tyler Austin Harper (Jun 6, 2025) [paywall]
Today, Butler’s “mechanical kingdom” [Erewhon] is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT.
It joins another recently released book, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna, in revealing the puffery that fuels much of the artificial-intelligence business.
Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.
A model is, by definition, a simplification of something – of reality. Generative AI uses models. Behind the curtain, under the hood, there be truth-less dragons “which don’t interact with [mirror] the world the way we do.” Shadow mirrors.
• CNET > “LLMs and AI Aren’t the Same. Everything You Should Know About What’s Behind Chatbots” by Lisa Lacy, Katelyn Chedraoui (May 31, 2025) – Understanding how LLMs work is key to understanding how AI works.
Key terms: language model (“soothsayer for words”), chatbot, parameter, deep learning, training data, tokens, patterns, search engine.
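To make the “soothsayer for words” idea concrete, here is a toy sketch (plain Python, standard library only, and not how any real chatbot is built): a bigram model that learns which token tends to follow which in a tiny made-up corpus, then generates plausible-sounding text. Real LLMs use deep neural networks with billions of parameters, but the core move – predicting the next token from patterns in training data, with no notion of truth – is the same.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" -- a real LLM sees trillions of tokens, not one sentence.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Learn the only "pattern" a bigram model can: which token tends to follow which.
follow_counts = defaultdict(Counter)
for current_token, next_token in zip(corpus, corpus[1:]):
    follow_counts[current_token][next_token] += 1

def predict_next(token: str):
    """Soothsay the next token, in proportion to how often it followed `token` in training."""
    counts = follow_counts[token]
    if not counts:                      # never seen mid-sentence: nothing to predict
        return None
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# Generate text one token at a time: statistically plausible, but truth never enters into it.
token, generated = "the", ["the"]
for _ in range(6):
    token = predict_next(token)
    if token is None:
        break
    generated.append(token)
print(" ".join(generated))
```

Scale the corpus up to much of the internet, swap the word-pair counts for billions of learned parameters in a deep network, and wrap the output in a chat interface, and you have the gist of a chatbot: fluent pattern completion, not a search engine and not an oracle.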
Notes
[1] Compare with the stories in the Black Mirror series regarding the consequences of some new technology, an “unending pursuit of scientific and technological advancement.”
[2] Equally futile, eh, like the freemium business model: Freemium 2.0 (ambrosia) – a 21st century fable – Terms of the Tree (resistance is futile).
Image credit: Wiki, Creative Commons Attribution-Share Alike 4.0 International license.
As I learned while a public school teacher, intelligence is more than the typical notion of IQ [see Wiki citation below] – the “ability to answer exam questions, solve logical puzzles, or come up with novel answers to knotty math problems.” Current attribution of personality to generative AIs raises the question of emotional intelligence (EI or EQ) and emotional literacy. What might that mean?
One example is the knack (or maturity) to know when a situation or interaction is likely to drift darkly, either inappropriately or beyond one’s skill set, so that it can be handed off to someone else.
• Wired > “GPT-5 Doesn’t Dislike You – It Might Just Need a Benchmark for Emotional Intelligence” by Will Knight (August 13, 2025) – User affinity for gen AI models poses a challenge for alignment and engagement.
• Wiki > “Theory of multiple intelligences”
Yet, the above Wired article acknowledges that “chatbots are adept at mimicking engaging human communication.” So, if chatbots adopt the phrases profiled in this CNBC article (below), are their responses authentic? Or just ersatz (pro forma) emotional support? (Even if by an avatar mimicking ‘body’ language and ‘eye’ contact, or by a robot adept at doing so? Cf. the classic Twilight Zone Episode “The Lonely.”)
• CNBC > “If you use any of these 4 phrases you have higher emotional intelligence than most” by Aditi Shrikant (March 13, 2024) – EQ isn’t as easy to quantify as other types of skills because empathy and self-awareness are hard to measure.
And providing emotional support typically requires some degree of introspection – the ability to assess one’s own capabilities & limitations (as in mistakes), as well as share (when appropriate) relevant personal experiences & feelings. But, as this second Wired article points out about AIs: “There’s Nobody Home.”
• Wired > “Why You Can’t Trust a Chatbot to Talk About Itself” by Benj Edwards, Ars Technica (August 14, 2025) – You’ll be disappointed if you expect AI to be self-aware – that’s just not how it works.
Please & appease … no worries (about good sense & soundness), be happy … immediate satisfaction over potential future consequences … savor that Homer Simpson “everything’s fine” moment … sip that modern day truthiness.
Nobody likes to be conned. Buyer beware and all that. But what if we welcome inaccurate information because it’s so satisfying, and the provider artfully styles the conversation to be so? Unlike being gaslighted, there’s no questioning of our perceived reality. Instead, any misperception or naiveté is encouraged. Carl Sagan’s “baloney detector” is MIA [1] – the baloney sells well, with lots of thumbs-up. Nothing to fix here, eh.
But the reality is that AIs, like people, respond to incentives. This old saying (by Upton Sinclair) might apply: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
This article summarizes the three phases of training LLMs (large language models). The last stage is: “Reinforcement learning from human feedback, in which they’re refined to produce responses closer to what people want or like.”
Unlike the AI sycophancy I’ve written about elsewhere, this particular drift is what researchers are calling “machine BS” – “to distinguish this LLM behavior from honest mistakes and outright lies.”
• CNET > “AI Lies to You Because It Thinks That’s What You Want” by Macy Meyer (August 31, 2025) – “Companies want users to continue ‘enjoying’ this technology and its answers, but that might not always be what’s good for us.”
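As a toy illustration of that incentive – and emphatically not how real RLHF training is implemented – imagine candidate answers scored with hypothetical, hand-assigned numbers for accuracy and for how good they make the user feel. If the feedback signal weights approval heavily, the pleasing-but-less-accurate answer wins.

```python
# Toy illustration of the incentive, NOT real RLHF: two hypothetical candidate answers
# with hand-assigned scores for accuracy and for user approval (thumbs-up appeal).
candidates = [
    {"answer": "Your plan has serious flaws; here they are.", "accuracy": 0.9, "user_approval": 0.2},
    {"answer": "Great question! Your plan looks solid to me.", "accuracy": 0.3, "user_approval": 0.9},
]

def reward(candidate, approval_weight=0.8):
    """A feedback signal dominated by thumbs-up: approval counts far more than accuracy."""
    return approval_weight * candidate["user_approval"] + (1 - approval_weight) * candidate["accuracy"]

# The system "learns" to prefer whichever answer scores highest under this reward.
best = max(candidates, key=reward)
print(best["answer"])  # the pleasing-but-less-accurate answer wins (0.78 vs. 0.34)
```

Tilt the weight toward accuracy and the critical answer wins instead; the point of the fable is simply that the model optimizes whatever the reward actually rewards, which is not necessarily the truth.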
Notes
[1] Carl Sagan’s “The Fine Art of Baloney Detection” in The Demon-Haunted World: Science as a Candle in the Dark (1995)
References
Bergstrom, Carl T.; West, Jevin D. Calling Bullshit: The Art of Skepticism in a Data-Driven World (2020). Kindle Edition.
So, when AI (chatbot) therapy becomes more popular than human therapists, what does that say? Does it mean that those chatbots are better? This article takes exception to that conclusion. Namely, “bad therapy has become scalable.”
AI companies scraped therapeutic content, and their chatbots merely model the style of contemporary practice, thereby making it more accessible. But what is needed is something different. “The way forward is not to imitate machines.”
What is needed is growth rather than coddling, challenge rather than comforting validation.
• LA Times (Opinion Voices) > “AI therapy isn’t getting better. Therapists are bad” by Jonathan Alpert, guest contributor (September 30, 2025) – AI’s answers may be reckless, but the format is quick, confident and direct – and addictive.