ChatGPT at one year – we cut our path in moving on

“Our future will be characterized by a tension between copilot (AI as collaborator) and autopilot (humans as sidekick to AI). The latter is more efficient and cheaper in a narrow labor economics sense but troublesome in all sorts of ways.” – Wired > email Newsletter > Steven Levy > Plaintext > The Plain View (December 8, 2023)


Ready or not, in November 2022, a new chatbot “tumbled into civilization’s ongoing conversation.” The legacy of Eliza … neural nets and deep learning … captivating us like a beguiling Jedi mind trick – “You will use this tool!”

There’ll be books, movies, dramas … the rise of ChatGPT.

Has it been a year – and only a year – since ChatGPT kicked the hornet’s nest? In this article (below), Steven Levy looks back at (and beyond) all the fuss, promise, and portent of “the fastest-growing user base in history.” In a year when there was already enough international contention, conflict, & chaos, “OpenAI turned up the heat.”

While recently helping a friend move from an old flip phone to a smartphone, I was struck once again by the power of an agile, mobile user interface. A game changer. Seductive. Easy access matters.

Merriam-Webster’s “Word of the Year 2023” is “authentic.” What will authenticity look like going forward? (Levy concludes in his “Ask Me One Thing” section that “privacy is indeed a fight that we’ve lost.”)

As I wrote (years ago) in my poem “MORE LESSONS … (from gerbil poems)” …

what is there to reply, to say?
pay the beast, we cannot stay.
we have left the trees, put up walls.
gone the warming fire-ring without,
the tribal whole within.
ascending lord of neoteny,
at home, we yet stand that grassy plain
and cut our path in moving on

• Wired > “The Year of ChatGPT and Living Generatively” by Steven Levy (Dec 1, 2023) – In November last year, OpenAI launched a “low key research preview” called ChatGPT. What happened next transformed the tech industry – and perhaps humanity’s future.

Some key points

The response shocked ChatGPT’s creators at the AI startup OpenAI as much as anyone. When I was interviewing people at the company for WIRED’s October cover feature this year, virtually everyone admitted to wildly underestimating the chatbot’s impact.

In my first Plaintext column of 2023, I made the observation (too obvious to be a prediction) that ChatGPT would own the new year. I said that it would kick off a wet, hot AI summer, dispelling whatever chill lingered from an extended AI winter.

ChatGPT had scrambled tech’s balance of power [triggered an AI arms race].

Maybe most significantly, ChatGPT was a shrieking wake-up call that a technology with impact at least on the scale of the internet was about to change our lives.

Meanwhile, during this year of ChatGPT, many AI scientists themselves have come to believe that their brilliant creations could bring about disaster.

I appreciate ChatGPT for many things, but especially the clarity it provided us in an era of change. In the Before Days, meaning anytime prior to November 30, 2022, we already had long passed the turning point in digital technology’s remodeling of civilization.

1 comment

  1. Global AI corps

From a prior Washington Post > The Technology 202 newsletter by Cristiano Lima (11-14-2023)

• Google wants governments to form a ‘global AI corps’

‘Responsibility and opportunity are two sides of the same coin. We need to get both right,’ Kent Walker, Google’s president of global affairs, said in an interview Monday.

    The paper, penned by Walker, calls on governments to “scale up AI training programs” to “catch workers that are impacted by AI and reskill them so they can quickly bounce back into new and better jobs.” It also urges officials to create “flexible immigration pathways for AI experts.”

    • YouTube allows AI-generated videos of fake events if content is labeled

    YouTube said Tuesday that it will allow people to upload AI-generated videos that realistically show an event that didn’t happen, or depict a person saying something they didn’t say, as long as the content is labeled and follows the platform’s other rules around not misleading users when it comes to sensitive topics like elections and wars, our colleague Gerrit De Vynck reports for The Technology 202.
