[List of articles in progress]

Here’s a bunch of recent articles re AI-powered chatbots: ChatGPT, Google’s Bard, Microsoft’s Bing, …
Here’s some useful context from Wired’s Steven Levy.
• Wired > email Newsletter > Steven Levy > Plaintext (Feb 17, 2023)
Hi, folks. Are your news feeds, like mine, loaded with “stories” consisting of reporters conversing with AI chatbots or even turning over whole paragraphs to AI babble? Journos, it’s now old news that these things are amazing, flawed, helpful, troubling, groundbreaking, and sometimes plain nuts. More reporting, fewer prompts, please!
#1 Some history on how we got here. Has the motto “move fast and break things” outpaced ethical standards? Is there a path to “do no harm?” Is self-auditing or self-regulation practical?
A question of scale: “Freedom of speech is not the same as freedom of reach” – Washington Post > The Technology 202, May 20, 2022.
• Wired > “Chatbots Got Big—and Their Ethical Red Flags Got Bigger” by Khari Johnson (Feb 16, 2023) – Are financial incentives to rapidly commercialize AI outweighing concerns about safety or ethics?
Researchers have spent years warning that text-generation algorithms can spew bias and falsehoods. Tech giants are rushing them into products anyway.
#2 More historical perspective. How is all this going to be monetized? “Is generative AI good enough to replace me at my job?”
• Wired > “It’s Always Sunny Inside a Generative AI Conference” by Lauren Goode (Feb 16, 2023) – AI-powered chatbots will only make us more efficient, according to the companies selling said AI-powered chatbots.
“How much taking and leaving makes something human?” Bradshaw [a slam poet and teacher at Youth Speaks] asked. “What’s the balance of input and output a machine must do to make itself alive?”

#3 An homage to the 1960s chatbot ELIZA and premature notions – delusions – of sentience … “the risks associated with synthetic but seemingly coherent text” [1]. And sycophantic language models that create echo chambers (see articles re chatbots going demonic).
• The Verge > “Introducing the AI Mirror Test, which very smart people keep failing” by James Vincent (Feb 17, 2023) – What is important to remember is that chatbots are autocomplete [software] tools [2]. … mimicking speech does not make a computer sentient [1].
Having spent a lot of time with these chatbots, … [heady lyrical reactions are] overblown and tilt us dangerously toward a false equivalence of software and sentience. In other words: they fail the AI mirror test.
ELIZA designer Joseph Weizenbaum observed: “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
Notes

[1] Vincent cites this famous 2021 paper: “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” [the work of 7 authors, with only 4 listed, as noted in the paper’s Acknowledgment] – the risks associated with synthetic but seemingly coherent text are deeply connected to the fact that such synthetic text can enter into conversations without any person or entity being accountable for it.
… we have discussed how the human tendency to attribute meaning to text, in combination with large LMs’ [language models’] ability to learn patterns of forms that humans associate with various biases and other harmful attitudes, leads to risks of real-world harm, should LM-generated text be disseminated.
… we urge researchers to shift to a mindset of careful planning, along many dimensions, before starting to build either datasets or systems trained on datasets.
#4 What does it mean to “jailbreak” a chatbot? A “prompt injection attack?” The ability to override (tacked-on) guardrails. Shades of “monsters from the Id,” eh.
• The Washington Post > “The clever trick that turns ChatGPT into its evil twin” by Will Oremus (February 14, 2023) – Reddit users are pushing the popular AI chatbot’s limits – and finding revealing ways around its safeguards.
… when a 22-year-old college student prodded ChatGPT to assume the persona of a devil-may-care alter ego — called “DAN,” for “Do Anything Now” …
DAN has become a canonical example of what’s known as a “jailbreak” — a creative way to bypass the safeguards OpenAI built in to keep ChatGPT from spouting bigotry, propaganda or, say, the instructions to run a successful online phishing scam. From charming to disturbing, these jailbreaks reveal the chatbot is programmed to be more of a people-pleaser than a rule-follower.
The new generation of chatbots generates text that mimics natural, humanlike interactions, even though the chatbot doesn’t have any self-awareness or common sense.
Notes
[2] Regarding autocomplete software, Oremus quotes Luis Ceze (a computer science professor at the University of Washington and CEO of the AI start-up OctoML): “What they’re doing is a very, very complex lookup of words that figures out, ‘What is the highest-probability word that should come next in a sentence?’”
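Ceze’s “highest-probability next word” idea can be sketched with a toy bigram model. To be clear, this is only an illustration of the basic probability lookup he describes – real chatbots use large neural networks over huge vocabularies, not literal count tables – and the tiny corpus here is invented for the example:

```python
# Toy "autocomplete" sketch: for each word, count which words follow it,
# then pick the highest-probability next word. (Illustrative only; this
# is NOT how ChatGPT works internally.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the most probable next word and its probability."""
    counts = bigrams[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

word, p = most_likely_next("the")
print(word, p)  # → cat 0.5  ("the" is followed by "cat" 2 times out of 4)
```

Scaled up to billions of parameters and trained on web-scale text, this same “what usually comes next?” objective produces the fluent, humanlike output that tempts people toward the false software-equals-sentience conclusion discussed above.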

