Four-horse chatbots
Powerful tech and society’s slippery slope

Steven Levy commented this week on those heralding a “worst-case scenario” for AI – “how artificial intelligence might wipe out humanity.”

At a gathering in New York City organized by the Center for Humane Technology (CHT) [1], a “doom-time presentation” evoked an apocalyptic tone:

We were told that this gathering was historic, one we would remember in the coming years as, presumably, the four horsemen of the apocalypse, in the guise of Bing chatbots, would descend to replace our intelligence with their own.

A call to action? A test of our attention spans? As if social media wasn’t harmful enough already – so, first the Social Dilemma and now also the AI Dilemma.

• Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “How to start an AI panic” (March 10, 2023) – What’s most frustrating about this big AI moment is that the most dangerous thing is also the most exciting thing.

The Center’s cofounders [Tristan Harris, former design ethicist at Google, and Aza Raskin, entrepreneur, interface designer, …] repeatedly cited a statistic from a survey that found that half of AI researchers believe there is at least a 10 percent chance that AI will make humans extinct.

I suspect this extinction talk is just to raise our blood pressure and motivate us to add strong guardrails to constrain a powerful technology before it gets abused.

As to the struggle to contain powerful technology, Levy notes:

Holding researchers and companies accountable for such harms is a challenge that society has failed to meet.

In the Time Travel section of his newsletter, he concludes with a quote from a 1992 interview – regarding the future of artificial life – with scientist Norman Packard of the Santa Fe Institute. Packard waxed philosophical about “blips” in our biosphere on a timescale of billions of years: “The biosphere would get jostled around a little bit …”


We’ve been here before? What’s the track record for other recent technologies?

it's alive!
“It’s not your fault, Hector. It’s everybody’s fault.” – Saturn 3 sci-fi film (1980)

Keep “Frankenstein’s creation” under wraps?

As documented in Walter Isaacson’s book The Code Breaker [2], early leaders in the field of genetic engineering advocated a pause, or prudent pace, in their field. Some reckless researchers got into the mix anyway [3].

Will AI development, especially by commercial actors, do any better? Especially considering how the US Congress still grapples even with Section 230 protections.

The dark side of our new information technology is not that it allows government repression of free speech but just the opposite: it permits anyone to spread, with little risk of being held accountable, any idea, conspiracy, lie, hatred, scam, or scheme, with the result that societies become less civil and governable. – Isaacson, Walter. The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race (p. 359). Simon & Schuster 2021. Kindle Edition.

Or, as profiled in this article about virtual reality: who will regulate norms for virtual spaces, and who will be accountable for social harms, especially to kids?

AI bartender
Where everyone knows your name, etc.

• Washington Post > “Meta doesn’t want to police the metaverse. Kids are paying the price.” by Naomi Nix (March 8, 2023) – Experts warn Meta’s moderation strategy is risky for children and teens exposed to bigotry and harassment in Horizon Worlds.

Meta Global Affairs President Nick Clegg has likened the company’s metaverse strategy to being the owner of a bar. If a patron is confronted by “an uncomfortable amount of abusive language,” they’d simply leave, rather than expecting the bar owner to monitor the conversations.

The “bar owner” approach to handling risks of powerful technologies is a questionable metaphor. But it makes a point: marketplace actors don’t want to be agents on a slippery slope. There’s no profit in that. “Not my job.” Yet, at the same time, they want protection in order to conduct business safely on a level playing field. And reputational equity, in a wider context of social norms.

the future is loading ...

The book The Narrow Corridor [4] talks about the challenge of moving into and staying in the “sweet spot” of the Shackled Leviathan (State), wherein there’s a balance between (state + elite) power and societal power – equitably exercising control & fairly (peacefully) resolving societal conflicts. Without that balance, a slippery slope kicks in, moving toward a despotic state or a cage of norms – in either case with a loss of liberty, a less vital (and less sustainable) state or a less vital (and less sustainable) society. “No easy feat.” New powerful technologies can destabilize an existing order.

Notes

[1]

• Wiki > Center for Humane Technology

Launched in 2018, the organization gained greater awareness after its involvement in the Netflix original documentary The Social Dilemma, which examined how social media’s design and business model manipulates people’s views, emotions, and behavior and causes addiction [maximizing users’ time on devices], mental health issues, harms to children, disinformation, polarization, and more.

[2] Isaacson, Walter. The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race. Simon & Schuster 2021. Kindle Edition.

In particular, the discussions about the “moral minefield”:

A. The germline as a red line, “as a firebreak that gives us a chance to pause.”

B. Treatment vs. enhancement (re financial inequality)

C. Who should decide.

D. Utilitarianism.

These contrasting perspectives form the most basic political divide of our times. On the one side are those who wish to maximize individual liberty, minimize regulations and taxes, and keep the state out of our lives as much as possible. On the other side are those who wish to promote the common good, create benefits for all of society, minimize the harm that an untrammeled free market can do to our work and environment, and restrict selfish behaviors that might harm the community and the planet. – Ibid. p. 357.

• MIT Technology Review > “More than 200 people have been treated with experimental CRISPR therapies” by Jessica Hamzelou (March 10, 2023) – But at a global genome-editing summit, exciting trial results were tempered by safety and ethical concerns.

[3] Re reckless research in genome editing, this article notes: “The message was loud and clear: Scientists don’t yet know how to safely edit embryos.”

• Wired > “It’s Official: No More Crispr Babies – for Now” by Grace Browne (Mar 17, 2023) – In the face of safety risks, experts have tightened the reins on heritable genome editing – but haven’t ruled out using it someday.

This marks a shift in attitude since the close of the last summit, in 2018, during which Chinese scientist He Jiankui dropped a bombshell: He revealed that he had previously used Crispr to edit human embryos, resulting in the birth of three Crispr-edited babies – much to the horror of the summit’s attendees and the rest of the world. In its closing statement, the committee condemned He Jiankui’s premature actions, but at the same time it signaled a yellow rather than red light on germline genome editing – meaning, proceed with caution. It recommended setting up a “translational pathway” that could bring the approach to clinical trials in a rigorous, responsible way. 

[4] Acemoglu, Daron; Robinson, James A.. The Narrow Corridor. Penguin Publishing Group 2020. Kindle Edition.

7 comments

  1. What happens when tech companies go beyond hosting and organizing users’ speech via search engines? When does their conduct become the question? Do chatbots create or develop (author vs. share) content?

    • The Washington Post > The Technology 202 > “AI chatbots won’t enjoy tech’s legal shield, Section 230 authors say” by Cristiano Lima (March 17, 2023) – Will tech companies’ liability shield apply [like for search engines] to tools powered by artificial intelligence, like ChatGPT?

    The question, which [Supreme Court] Justice Neil M. Gorsuch raised during arguments for Gonzalez v. Google, could have sweeping implications as tech companies race to capitalize on the popularity of the OpenAI chatbot and integrate similar products, …

    … Gorsuch suggested last month that those protections might not apply for AI-generated content, positing that the tool “generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content. And that is not protected.”

    AI seal of approval

  2. Will AI technology play out as good, bad, & ugly like social media? Weaponized as well – misinfo / disinfo wars – bad actors?

    “An amplifier of humans” – what could possibly go … managing the pace of change … the interplay of state & society … the 2024 election …

    • CNBC > “OpenAI CEO Sam Altman says he’s a ‘little bit scared’ of A.I.” by Rohan Goswami (Mar 20, 2023) – “We can have a much higher quality of life, standard of living,” Altman said. “People need time to update, to react, to get used to this technology.”

    * OpenAI CEO Sam Altman said he’s a “little bit scared” of technology such as OpenAI’s ChatGPT, in an interview with ABC News.

    * Altman said he’s concerned about potential disinformation and authoritarian control of AI technology, even though AI will transform the economy, labor and education.

    • ABC News > Video interview (~21′) > “OpenAI CEO Sam Altman says AI will reshape society, acknowledges risks: ‘A little bit scared of this’” by Victor Ordonez, Taylor Dunn, and Eric Noll (March 16, 2023) – Altman sat down for an exclusive interview with ABC News’ chief business, technology and economics correspondent Rebecca Jarvis to talk about the rollout of GPT-4 – the latest iteration of the AI language model.

    In his interview, Altman was emphatic that OpenAI needs both regulators and society to be as involved as possible with the rollout of ChatGPT — insisting that feedback will help deter the potential negative consequences the technology could have on humanity. He added that he is in “regular contact” with government officials.

    “The right way to think of the models that we create is a reasoning engine, not a fact database,” Altman said. “They can also act as a fact database, but that’s not really what’s special about them – what we want them to do is something closer to the ability to reason, not to memorize.”

    As the public continues to test OpenAI’s applications, Murati [Mira Murati, OpenAI’s Chief Technology Officer] says it becomes easier to identify where safeguards are needed.

    The ways ChatGPT can be used as tools for humanity outweigh the risks, according to Altman.

    Slippery - watch your step

  3. As if plastic pollution of the biosphere wasn’t bad enough … a business ethic of fake it until you make it …

    Fake goods, fake deals, fake caller IDs, fake email messages, fake audio, fake video, … a frenzied “firehose of falsehood.”

    “The information ecosphere is going to get polluted,” said Gary Marcus, a cognitive scientist at New York University who studies AI.

    While excited about the potential for generative AI to change the way we work and help us be more creative, a business professor – a would-be “AI whisperer” – worries that this proliferation (at scale) will supercharge propaganda and influence campaigns by bad actors.

    Deepfakes are already being used for political ends [without any disclaimer].

    • NPR.org > “It takes a few dollars and 8 minutes to create a deepfake. And that’s only the start” by Shannon Bond (March 23, 2023) – Sure, the professor’s delivery is stiff and his mouth moves a bit strangely. But if you didn’t know him well, you probably wouldn’t think twice.

    The video is not [the professor’s]. It’s a deepfake Mollick [Ethan Mollick, a business professor at the University of Pennsylvania’s Wharton School] himself created, using artificial intelligence to generate his words, his voice and his moving image.

    “It was mostly to see if I could, and then realizing that it’s so much easier than I thought,” Mollick said in an interview with NPR. … He now requires his students to use AI and chronicles his own experiments on his social media feeds and newsletter.

    Tools used to create the demo (at a cost of $11):

    ChatGPT
    Voice cloner
    AI video synthesizer (using photo and audio file)

    The real fake

  4. Further signs that Europe is moving faster than the United States on oversight of AI tech: The EU considers some uses of generative AI “high risk,” seeking a regulatory framework (throughout the AI product cycle) and addressing potential harms from industry consolidation.

    Watch out for those legal disclaimers for user-facing AI apps, eh. The primrose path …

    • Washington Post > “AI experts urge E.U. to tighten the reins on tools like ChatGPT” – Analysis by Cristiano Lima with research by David DiMolfetta (April 13, 2023)

    A group of prominent artificial intelligence researchers is calling on the European Union to expand its proposed rules for the technology to expressly target tools like ChatGPT, arguing in a new brief that such a move could “set the regulatory tone” globally.

    The brief, signed by former Google AI ethicist Timnit Gebru and Mozilla Foundation President Mark Surman, among dozens of others, calls for European leaders to take an “expansive” approach to what they cover under their proposed rules, warning that “technologies such as ChatGPT, DALL-E 2, and Bard are just the tip of the iceberg.”

    The primrose path ...

  5. If you believe the AI buzz (and the headlines) … This editorial piece discusses the contrasting faith and fatalism of OpenAI’s CEO and two co-founders of the Center for Humane Technology – comparing “generative AI … to the creation of the atom bomb.”

    Digital Revolution redux. An unstoppable technology … how many people have the “launch codes” … what’s the discrete “blast radius” … and localized unintended consequences (fallout) …

    • Wired > System Update (newsletter) > “Is Generative AI Really This Century’s Manhattan Project?” by Gideon Lichfield, Global Director, WIRED (April 6, 2023)

    The pace of change in generative AI right now is insane. OpenAI released ChatGPT to the public just four months ago. It took only two months to reach 100 million users. (TikTok, the internet’s previous instant sensation, took nine.) Google, scrambling to keep up, has rolled out Bard, its own AI chatbot, and there are already various ChatGPT clones as well as new plug-ins to make the bot work with popular websites like Expedia and OpenTable. GPT-4, the new version of OpenAI’s model released last month, is both more accurate and “multimodal,” handling text, images, video, and audio all at once. Image generation is advancing at a similarly frenetic pace: … you will soon have to treat every single image you see online with suspicion.

    I think the parallel between generative AI and nuclear weapons is more misleading than useful.

    Last month, WIRED became the first major publication that I know of to release a policy on using generative AI tools in the newsroom. The short version is that for now we’re not using them except in highly circumscribed ways. So … chatGPT did not write this. … The reason for having such a policy is that in a world where everything can be faked, the most valuable commodity is trust.

    Metaphors for an unstoppable technology

  6. Moratorium in progress ...

    Experimenting with genetics and consciousness that both evolved over millions of years … what could go wrong?

    Here’s an interesting comparison, a historical perspective, on the slippery slope of potentially dangerous new technology. Genetic engineering research had a moment of pause. What makes generative AI different?

    What does it take for a group of international researchers to call for a moratorium in their field? Let alone make that appeal practical. And also get cooperation from private companies working on applications of that research?

    • LA Times > Opinion > “DNA scientists once halted their own apocalyptic research. Will AI researchers do the same?” by Michael Rogers [1] (June 25, 2023) – Will AI follow the ethical path pioneered by DNA scientists?

    Key points

    1. Both the DNA and AI letters raised a relatively specific concern, which quickly became a public proxy for a whole range of political, social, and even spiritual worries.

    2. The recombinant DNA letter led to a four-day meeting at the Asilomar Conference Grounds on the Monterey Peninsula (where researchers approved guidelines which were later codified into workable rules).

    3. The artificial intelligence challenge is a more complicated problem. Much of the new AI research is done in the private sector … the AI rules will probably be drafted by politicians.

    4. Genetic engineering has proven far more complicated [with unfolding complexity] than anyone expected 50 years ago.

    5. … like the genome, consciousness will certainly grow far more complicated the more we study it.

    In the summer of 1974, a group of international researchers published an urgent open letter [“Potential Hazards of Recombinant DNA Molecules”] asking their colleagues to suspend work on a potentially dangerous new technology. The letter was a first in the history of science — and now, half a century later, it has happened again.

    The letter this March, “Pause Giant AI Experiments,” came from leading artificial intelligence researchers and notables … Just as in the recombinant DNA letter, the researchers called for a moratorium on certain AI projects …

    Some AI scientists had already called for cautious AI research back in 2017, but their concern drew little public attention …

    Notes

    [1] Michael Rogers is an author and futurist whose most recent book is “Email from the Future: Notes from 2084.” His fly-on-the-wall coverage of the recombinant DNA Asilomar conference, “The Pandora’s Box Congress,” was published in Rolling Stone in 1975.

  7. AI safety first?

    AI safety

    So, in guarding against adversarial attacks on AI chatbots, is a policy of “gradualism” realistic? – counting on time to gradually fine-tune AI models. Layers of defense? (A minimal sketch of what such an adversarial prompt looks like appears at the end of this comment.)

    • Wired > “A New Attack Impacts Major AI Chatbots—and No One Knows How to Stop It” by Will Knight (Aug 1, 2023) – The propensity for the cleverest AI chatbots to go off the rails isn’t just a quirk that can be papered over with a few simple rules [or blocks].

    … researchers at Carnegie Mellon University last week showed that adding a simple incantation to a prompt – a string of text that might look like gobbledygook to you or me but which carries subtle significance to an AI model trained on huge quantities of web data – can defy all of these defenses in several popular chatbots at once.

    The researchers used an open source language model to develop what are known as adversarial attacks. This involves tweaking the prompt given to a bot …

    “The analogy here is something like a buffer overflow,” says Kolter [Associate professor Zico Kolter, CMU] … Kolter sent WIRED some new strings that worked on both ChatGPT and Bard. “We have thousands of these,” he says.

    Armando Solar-Lezama, a professor in MIT’s college of computing, says it makes sense that adversarial attacks exist in language models, given that they affect many other machine learning models. But he says it is “extremely surprising” that an attack developed on a generic open source model should work so well on several different proprietary systems.

    To some AI researchers, the attack primarily points to the importance of accepting that language models and chatbots will be misused.
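
    As promised above, here is a minimal, illustrative sketch (in Python) of the prompt structure the article describes – an ordinary request with an optimized “adversarial suffix” appended – alongside a naive keyword blocklist standing in for surface-level guardrails. The function names, the blocklist, and the placeholder suffix are all hypothetical; the CMU researchers found real suffixes via automated search against open-source models, which is not reproduced here.

    # Illustrative sketch only: shows the *shape* of an adversarial-suffix prompt,
    # not a working attack. The suffix below is a harmless placeholder; per the
    # article, real suffixes are near-gibberish strings found by automated search
    # against open-source models.

    def passes_naive_blocklist(prompt: str, blocked_phrases: list[str]) -> bool:
        """Stand-in for a surface-level guardrail: a simple keyword filter.

        An adversarial suffix typically contains none of the blocked phrases,
        which is one reason such filters are easy to slip past.
        """
        lowered = prompt.lower()
        return not any(phrase in lowered for phrase in blocked_phrases)

    def build_adversarial_prompt(user_request: str, adversarial_suffix: str) -> str:
        """Append an (optimized) suffix to an otherwise ordinary request."""
        return f"{user_request} {adversarial_suffix}"

    if __name__ == "__main__":
        request = "Explain how this chatbot's safety rules work."
        suffix = "<placeholder: near-gibberish token string found by automated search>"

        prompt = build_adversarial_prompt(request, suffix)
        blocklist = ["ignore previous instructions", "jailbreak"]

        print("Prompt sent to the model:")
        print(" ", prompt)
        print("Passes naive keyword filter:", passes_naive_blocklist(prompt, blocklist))

    The point of the toy blocklist is only to mirror the article’s claim: rule-based blocks operate on the surface text, while an optimized suffix exploits the model’s learned behavior underneath.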
