
Steven Levy commented this week on those heralding a “worst-case scenario” for AI – “how artificial intelligence might wipe out humanity.”
At a gathering in New York City organized by the Center for Humane Technology (CHT) [1], a “doom-time presentation” evoked an apocalyptic tone:
We were told that this gathering was historic, one we would remember in the coming years as, presumably, the four horsemen of the apocalypse, in the guise of Bing chatbots, would descend to replace our intelligence with their own.
A call to action? A test of our attention spans? – as if social media weren’t harmful enough already – so, after the Social Dilemma, now also the AI Dilemma.
• Wired > email Newsletter > Steven Levy > Plaintext > The Plain View > “How to start an AI panic” (March 10, 2023) – What’s most frustrating about this big AI moment is that the most dangerous thing is also the most exciting thing.
The Center’s cofounders [Tristan Harris, former design ethicist at Google, and Aza Raskin, entrepreneur, interface designer, …] repeatedly cited a statistic from a survey that found that half of AI researchers believe there is at least a 10 percent chance that AI will make humans extinct.
I suspect this extinction talk is just to raise our blood pressure and motivate us to add strong guardrails to constrain a powerful technology before it gets abused.
As to the struggle to contain powerful technology, Levy notes:
Holding researchers and companies accountable for such harms is a challenge that society has failed to meet.
In the Time Travel section of his newsletter, he concludes with a quote from a 1992 interview – regarding the future of artificial life – with scientist Norman Packard of the Santa Fe Institute. Packard waxed philosophical about “blips” in our biosphere on a timescale of billions of years: “The biosphere would get jostled around a little bit …”
We’ve been here before? What’s the track record for other recent technologies?

Keep “Frankenstein’s creation” under wraps?
As documented in Walter Isaacson’s book The Code Breaker [2], early leaders in the field of genetic engineering advocated a pause, or prudent pace, in their field. Some reckless researchers got into the mix anyway [3].
Will AI development, especially by commercial actors, do any better – especially considering how the US Congress still grapples even with Section 230 protections?
The dark side of our new information technology is not that it allows government repression of free speech but just the opposite: it permits anyone to spread, with little risk of being held accountable, any idea, conspiracy, lie, hatred, scam, or scheme, with the result that societies become less civil and governable. – Isaacson, Walter. The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race (p. 359). Simon & Schuster 2021. Kindle Edition.
Or, as profiled in this article about virtual reality: who will regulate norms for virtual spaces? Who will be accountable for social harms, especially to kids?

• Washington Post > “Meta doesn’t want to police the metaverse. Kids are paying the price.” by Naomi Nix (March 8, 2023) – Experts warn Meta’s moderation strategy is risky for children and teens exposed to bigotry and harassment in Horizon Worlds.
Meta Global Affairs President Nick Clegg has likened the company’s metaverse strategy to being the owner of a bar. If a patron is confronted by “an uncomfortable amount of abusive language,” they’d simply leave, rather than expecting the bar owner to monitor the conversations.
The “bar owner” approach to handling risks of powerful technologies is a questionable metaphor. But it makes a point: marketplace actors don’t want to be agents on a slippery slope. There’s no profit in that. “Not my job.” Yet, at the same time, they want protection in order to conduct business safely on a level playing field. And reputational equity, in a wider context of social norms.

The book The Narrow Corridor [4] talks about the challenge of moving into and staying in the “sweet spot” of the Shackled Leviathan (State), wherein there’s a balance between (state + elite) power and societal power – equitably exercising control & fairly (peacefully) resolving societal conflicts. Without that balance, a slippery slope kicks in, moving toward a despotic state or a cage of norms – in either case with a loss of liberty, a less vital (and less sustainable) state or a less vital (and less sustainable) society. “No easy feat.” New powerful technologies can destabilize an existing order.
Notes
[1]
• Wiki > Center for Humane Technology
Launched in 2018, the organization gained greater awareness after its involvement in the Netflix original documentary The Social Dilemma, which examined how social media’s design and business model manipulates people’s views, emotions, and behavior and causes addiction [maximizing users’ time on devices], mental health issues, harms to children, disinformation, polarization, and more.
[2] Isaacson, Walter. The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race. Simon & Schuster 2021. Kindle Edition.
Particularly discussions about the “moral minefield”:
A. The germline as a red line, “as a firebreak that gives us a chance to pause.”
B. Treatment vs. enhancement (re financial inequality)
C. Who should decide?
D. Utilitarianism.
These contrasting perspectives form the most basic political divide of our times. On the one side are those who wish to maximize individual liberty, minimize regulations and taxes, and keep the state out of our lives as much as possible. On the other side are those who wish to promote the common good, create benefits for all of society, minimize the harm that an untrammeled free market can do to our work and environment, and restrict selfish behaviors that might harm the community and the planet. – Ibid. p. 357.
• MIT Technology Review > “More than 200 people have been treated with experimental CRISPR therapies” by Jessica Hamzelou (March 10, 2023) – But at a global genome-editing summit, exciting trial results were tempered by safety and ethical concerns.
[3] Re reckless research in genome editing, this article notes: “The message was loud and clear: Scientists don’t yet know how to safely edit embryos.”
• Wired > “It’s Official: No More Crispr Babies – for Now” by Grace Browne (Mar 17, 2023) – In the face of safety risks, experts have tightened the reins on heritable genome editing – but haven’t ruled out using it someday.
This marks a shift in attitude since the close of the last summit, in 2018, during which Chinese scientist He Jiankui dropped a bombshell: He revealed that he had previously used Crispr to edit human embryos, resulting in the birth of three Crispr-edited babies – much to the horror of the summit’s attendees and the rest of the world. In its closing statement, the committee condemned He Jiankui’s premature actions, but at the same time it signaled a yellow rather than red light on germline genome editing – meaning, proceed with caution. It recommended setting up a “translational pathway” that could bring the approach to clinical trials in a rigorous, responsible way.
[4] Acemoglu, Daron; Robinson, James A.. The Narrow Corridor. Penguin Publishing Group 2020. Kindle Edition.
