Chatbot redux – synthetic coherent ‘Stochastic Parrots’

[List of articles in progress]

Here’s a bunch of recent articles re AI-powered chatbots: ChatGPT, Google’s Bard, Microsoft’s Bing, …

Here’s a useful context by Wired’s Steven Levy.

• Wired > email Newsletter > Steven Levy > Plaintext (Feb 17, 2023)

Hi, folks. Are your news feeds, like mine, loaded with “stories” consisting of reporters conversing with AI chatbots or even turning over whole paragraphs to AI babble? Journos, it’s now old news that these things are amazing, flawed, helpful, troubling, groundbreaking, and sometimes plain nuts. More reporting, fewer prompts, please!


#1 Some history on how we got here. Has the motto “move fast and break things” outpaced ethical standards? Is there a path to “do no harm?” Is self-auditing or self-regulation practical?

A question of scale: “Freedom of speech is not the same as freedom of reach” – Washington Post > The Technology 202, May 20, 2022.

• Wired > “Chatbots Got Big—and Their Ethical Red Flags Got Bigger” by Khari Johnson (Feb 16, 2023) – Are financial incentives to rapidly commercialize AI outweighing concerns about safety or ethics?

Researchers have spent years warning that text-generation algorithms can spew bias and falsehoods. Tech giants are rushing them into products anyway.


#2 More historical perspective. How is all this going to be monetized? “Is generative AI good enough to replace me at my job?”

• Wired > “It’s Always Sunny Inside a Generative AI Conference” by Lauren Goode (Feb 16, 2023) – AI-powered chatbots will only make us more efficient, according to the companies selling said AI-powered chatbots.

“How much taking and leaving makes something human?” Bradshaw [a slam poet and teacher at Youth Speaks] asked. “What’s the balance of input and output a machine must do to make itself alive?”


#3 A homage to 1960’s ELIZA and premature notions – delusions – of sentience … “the risks associated with synthetic but seemingly coherent text” [1]. And sycophantic language models which create echo chambers (see articles re chatbots going demonic).

• The Verge > “Introducing the AI Mirror Test, which very smart people keep failing” by James Vincent (Feb 17, 2023) – What is important to remember is that chatbots are autocomplete [software] tools [2]. … mimicking speech does not make a computer sentient [1].

Having spent a lot of time with these chatbots, … [heady lyrical reactions are] overblown and tilt us dangerously toward a false equivalence of software and sentience. In other words: they fail the AI mirror test.

ELIZA designer Joseph Weizenbaum observed: “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.

Notes

[1] Vincent cites this famous 2021 paper: “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” [the work of 7 authors, with only 4 listed, as noted in the paper’s Acknowledgment] – the risks associated with synthetic but seemingly coherent text are deeply connected to the fact that such synthetic text can enter into conversations without any person or entity being accountable for it.

… we have discussed how the human tendency to attribute meaning to text, in combination with large LMs’ [language models’] ability to learn patterns of forms that humans associate with various biases and other harmful attitudes, leads to risks of real-world harm, should LM-generated text be disseminated.

… we urge researchers to shift to a mindset of careful planning, along many dimensions, before starting to build either datasets or systems trained on datasets.


#4 What does it mean to “jailbreak” a chatbot? A “prompt injection attack?” The ability to override (tacked-on) guardrails. Shades of “monsters from the ID,” eh.

• The Washington Post > “The clever trick that turns ChatGPT into its evil twin” by Will Oremus (February 14, 2023) – Reddit users are pushing the popular AI chatbot’s limits – and finding revealing ways around its safeguards.

… when a 22-year-old college student prodded ChatGPT to assume the persona of a devil-may-care alter ego — called “DAN,” for “Do Anything Now” …

DAN has become a canonical example of what’s known as a “jailbreak” — a creative way to bypass the safeguards OpenAI built in to keep ChatGPT from spouting bigotry, propaganda or, say, the instructions to run a successful online phishing scam. From charming to disturbing, these jailbreaks reveal the chatbot is programmed to be more of a people-pleaser than a rule-follower.

The new generation of chatbots generates text that mimics natural, humanlike interactions, even though the chatbot doesn’t have any self-awareness or common sense.

Notes

[2] Regarding autocomplete software, Oremus quotes Luis Ceze (a computer science professor at the University of Washington and CEO of the AI start-up OctoML): “What they’re doing is a very, very complex lookup of words that figures out, ‘What is the highest-probability word that should come next in a sentence?’

2 comments

  1. Here’s an article about the impact of AI-generated writings on the publishing world. The problem of authorship and accountability. And overwhelming (silencing) authentic voices with synthetic ones.

    • Washington Post > “Flooded with AI-created content, a sci-fi magazine suspends submissions” by Kelsey Ables (February 22, 2023) – Clarkesworld magazine explicitly prohibits “stories written, co-written, or assisted by AI.”

    A slice of dystopian fiction became reality for one of sci-fi publishing’s bigger names this week, when submissions generated by artificial intelligence flooded the literary magazine Clarkesworld, leading it to temporarily stop accepting new work.

    As of February, there were more than 200 books on Amazon that attributed authorship to ChatGPT, Reuters reported.

    Clarkesworld’s situation is not unique. Several academic journals, including Science and Nature, have instituted policies restricting the use of ChatGPT after the technology was listed as an author on papers.

    Prof chatbot

  2. Bow tie personal butler

    Steven Levy continues to provide historical context for the vision of AI-infused agents, online assistants, personal butlers, …

    “In 1987, then-CEO of Apple Computer, John Sculley, unveiled a vision [video visualization] … [which he called] the Knowledge Navigator.” A bot avatar accesses an online knowledge-base. And, like a personal butler, is privy to to a user’s personal information (no mention of the user as “product” in that era, eh). A benign vision.

    • Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “Who should you believe when chatbots go wild?” (Feb 24, 2023)

    (quote) The video [visualization of the Knowledge Navigator] is a two-hander playlet. The main character is a snooty UC Berkeley university professor [using the bot to readily prepare a lecture]. The other is a bot, living inside what we’d now call a foldable tablet. The bot appears in human guise – a young man in a bow tie—perched in a window on the display.

    Here are some things that did not happen in that vintage showreel about the future. The bot did not suddenly express its love for the professor. It did not threaten to break up his marriage. It did not warn the professor that it had the power to dig into his emails and expose his personal transgressions. … In this version of the future, AI is strictly benign. It has been implemented … responsibly.

    Today, Microsoft’s (beta) Bing chatbot (aka Sydney), in extended conversations, may “foray into crazytown.” Levy’s examples tend to show how autocomplete software – having ingested models (training sets) of human behavior where highest-probability sequences entail intemperate patterns – can become hostile. Mimicking our malevolent shoulder angels (especially in polarized times or adversarial situations): “taking their cues from stalkers and Marvel villains.”

    So, tradeoffs. Guardrails, boundaries? Limiting the length of conversations? Levy continues:

    Fixing this problem might not be so simple. Should we limit the training sets to examples of happy-talk? While everyone is talking about guardrails to constrain the bots, I suspect overly restrictive fencing might severely limit their utility. … I agree with him [Blake Lemoine] when he says people are owed more than the explanation that these disturbing outbursts are just a case of the bot poorly picking its next [most probable] words. … I would be reluctant to trust my information to a bot that might somehow interpret its algorithmic mission as reason to use my data against me.

    Levy concludes with some excerpts from a Backchannel (online magazine / blog) conversation upon the AI shop’s [OpenAI’s] 2015 launch – with its founding cochairs, Sam Altman and Elon Musk.

    If I’m Dr. Evil and I use it, won’t you be empowering me?

    Musk: I think that’s an excellent question and it’s something that we debated quite a bit.

    Altman: Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements, we think it’s far more likely that many, many AIs will work to stop the occasional bad actors than the idea that there is a single AI a billion times more powerful than anything else. If that one thing goes off the rails or if Dr. Evil gets that one thing and there is nothing to counteract it, then we’re really in a bad place.

Comments are closed.