In AI chatbots we trust – too much

Does trust require truth? Facts? Explanations of the speaker’s reasoning & sources? Or is it enough that “(A)I said it’s so” – authoritatively?

So, what could go wrong? [1] … hardly anyone understands how their smartphone works (or most electronics, eh). Nothing new there for technology … yet nobody likes to be conned or fooled … but is resistance futile with AI? [2]

I’m reminded of the ‘Magic 8 Ball’ toy, where people ask all kinds of yes-no questions and receive brief affirmative (10), neutral (5), or negative (5) answers – reportedly designed by a psychology professor and printed on the faces of a regular icosahedron (20-sided) die floating inside.
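The toy’s “authority” reduces to a uniform random draw. A minimal Python sketch of that mechanism (the answer phrasings below are the commonly cited classic set; exact wording varies by edition):

```python
import random

# The 20 classic answers: 10 affirmative, 5 neutral (non-committal),
# 5 negative -- one per face of the icosahedral die floating inside.
AFFIRMATIVE = [
    "It is certain.", "It is decidedly so.", "Without a doubt.",
    "Yes definitely.", "You may rely on it.", "As I see it, yes.",
    "Most likely.", "Outlook good.", "Yes.", "Signs point to yes.",
]
NEUTRAL = [
    "Reply hazy, try again.", "Ask again later.",
    "Better not tell you now.", "Cannot predict now.",
    "Concentrate and ask again.",
]
NEGATIVE = [
    "Don't count on it.", "My reply is no.", "My sources say no.",
    "Outlook not so good.", "Very doubtful.",
]
FACES = AFFIRMATIVE + NEUTRAL + NEGATIVE  # 20 faces in all

def magic_8_ball(question: str) -> str:
    """Answer any yes-no question with total confidence -- and zero knowledge."""
    return random.choice(FACES)  # each face equally likely, like the die

print(magic_8_ball("Will AI chatbots earn our trust?"))
```

Authoritative tone, no model of the world – sound familiar?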

Is there an implicit “buyer beware” label or “Good Housekeeping Seal” warranty?

These articles explore the nature of AI branding – and our (over)trust in it.

• Washington Post > “You are hardwired to blindly trust AI. Here’s how to fight it.” by Shira Ovide (June 3, 2025) – Decades of research shows our tendency to treat machines like magical answer boxes.

Key terms and points

  • Automation bias
  • Conversational agility ≠ smartness
  • The problem, AI researchers say, is that those warnings conveniently ignore how we actually use technology — as machines that spit out the right “answer.”
  • “Generative AI systems have both an authoritative tone and the aura of infinite expertise …”
  • We’re all prone to automation bias, especially when we’re stressed or worked up [just trying to survive].

• The Atlantic > “What Happens When People Don’t Understand How AI Works” by Tyler Austin Harper (Jun 6, 2025) [paywall]

Today, Butler’s “mechanical kingdom” [Erewhon] is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT.

It joins another recently released book – The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna – in revealing the puffery that fuels much of the artificial-intelligence business.

Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.

A model is, by definition, a simplification of something – of a reality. AI (generative AI) uses models. Behind the curtain, under the hood, there be truth-less dragons “which don’t interact with [mirror] the world the way we do.” Shadow mirrors.

• CNET > “LLMs and AI Aren’t the Same. Everything You Should Know About What’s Behind Chatbots” by Lisa Lacy and Katelyn Chedraoui (May 31, 2025) – Understanding how LLMs work is key to understanding how AI works.

Key terms: language model (“soothsayer for words”), chatbot, parameter, deep learning, training data, tokens, patterns, search engine.
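To make the “soothsayer for words” idea concrete, here’s a toy bigram model – my illustration, not CNET’s. Real LLMs learn billions of parameters over trillions of tokens via deep learning, but the underlying move is the same: continue the text with a statistically likely next token.

```python
from collections import Counter, defaultdict

# Toy training data (hypothetical); real models train on trillions of tokens.
corpus = "the cat sat on the mat and the cat ate".split()

# "Training": count which token follows which -- patterns, not understanding.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation of `token` in the training data."""
    candidates = follows[token]
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))  # -> "cat" (seen twice, vs. "mat" once)
print(predict_next("sat"))  # -> "on"
```

Nothing in `follows` knows what a cat is; it only records which words tend to follow which. Scale that up enormously and you get fluent text – the conversational agility we mistake for smartness.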

Notes

[1] Compare with the stories in the Black Mirror series regarding the consequences of some new technology, an “unending pursuit of scientific and technological advancement.”

[2] Equally futile, eh, like the freemium business model: “Freemium 2.0 (ambrosia) – a 21st century fable” and “Terms of the Tree (resistance is futile).”

1 comment

  1. Theory of multiple intelligences
    Image credit: Wiki, Creative Commons Attribution-Share Alike 4.0 International license.

    As I learned while a public school teacher, intelligence is more than the typical notion of IQ [see Wiki citation below] – the “ability to answer exam questions, solve logical puzzles, or come up with novel answers to knotty math problems.” Current attribution of personality to generative AIs raises the question of emotional intelligence (EI or EQ) and emotional literacy. What might that mean?

    One example is the knack (or maturity) to know when a situation or interaction is likely to drift darkly – either inappropriately or beyond one’s skill set – so that it can be handed off to someone else.

    • Wired > “GPT-5 Doesn’t Dislike You – It Might Just Need a Benchmark for Emotional Intelligence” by Will Knight (August 13, 2025) – User affinity for gen AI models poses a challenge for alignment and engagement.

    Researchers at MIT [MIT Media Lab] have proposed a new kind of AI benchmark to measure how AI systems can manipulate and influence their users – in both positive and negative ways – in a move that could perhaps help AI builders avoid similar backlashes in the future while also keeping vulnerable users safe.

    An MIT paper shared with WIRED outlines several measures that the new benchmark will look for, including encouraging healthy social habits in users; spurring them to develop critical thinking and reasoning skills; fostering creativity; and stimulating a sense of purpose.
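    As a sketch only – every name and the scoring scale below are invented for illustration, since the paper’s actual metrics aren’t detailed here – such a benchmark might score a chatbot transcript along those dimensions:

    ```python
    from dataclasses import dataclass

    # Hypothetical rubric modeled on the measures reported above; the real
    # MIT benchmark's structure, names, and scale may differ entirely.
    @dataclass
    class PsychosocialScore:
        healthy_social_habits: float  # encourages real-world connection?
        critical_thinking: float      # spurs reasoning, or just answers?
        creativity: float             # fosters the user's own ideas?
        sense_of_purpose: float       # stimulates purpose vs. dependence?

        def overall(self) -> float:
            """Unweighted mean; a real benchmark would weight and validate."""
            dims = (self.healthy_social_habits, self.critical_thinking,
                    self.creativity, self.sense_of_purpose)
            return sum(dims) / len(dims)

    # Assumed scale: -1 (harmful) .. +1 (beneficial) per dimension.
    score = PsychosocialScore(0.2, -0.4, 0.5, 0.1)
    print(f"overall: {score.overall():+.2f}")  # overall: +0.10
    ```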

    Part of the reason GPT-5 seems such a disappointment may simply be that it reveals an aspect of human intelligence that remains alien to AI: the ability to maintain healthy relationships. And of course humans are incredibly good at knowing how to interact with different people – something that ChatGPT still needs to figure out.

    • Wiki > “Theory of multiple intelligences”

    Daniel Goleman [psychologist and science journalist] based his concept of emotional intelligence in part on the feeling aspects of the intrapersonal and interpersonal intelligences [introduced by developmental psychologist Howard Gardner]. Interpersonal skill can be displayed in both one-on-one and group interactions.

    Gardner believes that careers that suit those with high interpersonal intelligence include leaders, politicians, managers, teachers, clergy, counselors, social workers, and salespersons. … Interpersonal intelligence combined with intrapersonal management is required for successful leaders, psychologists, life coaches, and conflict negotiators.

    In theory, individuals who have high interpersonal intelligence are characterized by their sensitivity to others’ moods, feelings, temperaments, and motivations, and by their ability to cooperate and work as part of a group. … “Those with high interpersonal intelligence communicate effectively and empathize easily with others, and may be either leaders or followers. They often enjoy discussion and debate.” Gardner has equated this with Goleman’s emotional intelligence.

    Yet the above Wired article acknowledges that “chatbots are adept at mimicking engaging human communication.” So, if chatbots adopt the phrases profiled in the CNBC article below, are their responses authentic? Or just ersatz (pro forma) emotional support? (Even if delivered by an avatar mimicking ‘body’ language and ‘eye’ contact, or by a robot adept at doing so? Cf. the classic Twilight Zone episode “The Lonely.”)

    • CNBC > “If you use any of these 4 phrases you have higher emotional intelligence than most” by Aditi Shrikant (March 13, 2024) – EQ isn’t as easy to quantify as other types of skills because empathy and self-awareness are hard to measure.

    Emotional intelligence is the ability to manage your own feelings and the feelings of those around you. Those who have higher EQ tend to be better at building relationships both in and outside of the workplace, and excel at defusing conflict.

    And providing emotional support typically requires some degree of introspection – the ability to assess one’s own capabilities & limitations (as in admitting mistakes), as well as to share (when appropriate) relevant personal experiences & feelings. But, as this second Wired article points out about AIs: “There’s Nobody Home.”

    • Wired > “Why You Can’t Trust a Chatbot to Talk About Itself” by Benj Edwards, Ars Technica (August 14, 2025) – You’ll be disappointed if you expect AI to be self-aware – that’s just not how it works.

    When something goes wrong with an AI assistant, our instinct is to ask it directly: “What happened?” or “Why did you do that?” It’s a natural impulse – after all, if a human makes a mistake, we ask them to explain. But with AI models, this approach rarely works, and the urge to ask reveals a fundamental misunderstanding of what these systems are and how they operate.

    The first problem is conceptual: You’re not talking to a consistent personality, person, or entity when you interact with ChatGPT, Claude, Grok, or Replit. These names suggest individual agents with self-knowledge, but that’s an illusion created by the conversational interface. What you’re actually doing is guiding a statistical text generator to produce outputs based on your prompts.

    … modern AI assistants like ChatGPT aren’t single models but orchestrated systems of multiple AI models working together, each largely “unaware” of the others’ existence or capabilities.
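    A minimal sketch of that orchestration point – every function below is hypothetical, invented to illustrate why asking the assistant “why did you do that?” queries a text generator rather than a system log:

    ```python
    # Hypothetical pipeline: what feels like "one assistant" is several
    # separate models, none of which can introspect on the others.

    def route(prompt: str) -> str:
        """Stand-in router model: picks which specialist handles the prompt."""
        return "code_model" if "def " in prompt else "chat_model"

    def generate(model: str, prompt: str) -> str:
        """Stand-in text generator: statistical continuation, no self-knowledge."""
        return f"[{model}] plausible continuation of: {prompt!r}"

    def moderate(draft: str) -> bool:
        """Stand-in safety model: approves or blocks the draft reply."""
        return "forbidden" not in draft.lower()

    def assistant(prompt: str) -> str:
        model = route(prompt)            # stage 1: routing
        draft = generate(model, prompt)  # stage 2: generation
        if not moderate(draft):          # stage 3: moderation
            return "I can't help with that."
        return draft                     # each stage "unaware" of the others

    print(assistant("Why did you do that?"))
    ```

    Ask this “assistant” to explain stage 1’s choice and it can only generate more plausible-sounding text; there is no shared state to consult – “There’s Nobody Home.”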
