As I’ve noted elsewhere, verbal (or literary) agility need not be a sign of intelligence. And such fluency can be glib, even annoying. Especially when, as with some people, responses are repetitive, follow predictable patterns – in autopilot mode.
So, AI puts us in a similar place. Driven by hope & hype, it’s shoehorned into spaces where even tech fanboys may feel “how about never.”
But AI is also really annoying. The way it talks, the way it forgets things, the way it just makes stuff up on the spot and brazenly lies with confidence. It’s not as good or as revolutionary as it purports to be. Not to mention the awful things some people are doing with it, or the overall effect it has had on the industries I love and work in.
Key points (quoted)
AI is more annoying than ever
“That’s so X, and honestly, a great example of Y”
AI lies too readily and too confidently
“Oh yes. This is the best new game design in a long time, it will surely be published and sold in many languages and…” … When I called ChatGPT out on this, it apologized and admitted that it was just saying what it thought I wanted to hear.
AI still doesn’t know anything
But setting aside memory and context, there’s one huge flaw that still undermines LLMs: they randomly make things up.
The frustrating thing about AI is that it works best when you already know the answer you’re seeking … If you don’t have that knowledge, then you just can’t know if an answer is good or bad.
AI is way too inconsistent
You can ask ChatGPT or any other AI chatbot the exact same question that someone else asked, yet receive a different answer. Sometimes the differences are minor. Other times they’re drastic.
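Much of that inconsistency is baked into how chatbots generate text: they sample each next token from a probability distribution, usually at a nonzero "temperature." A minimal sketch of that mechanism (the toy distribution and token names here are invented for illustration, not taken from any real model):

```python
import random

def sample_with_temperature(probs, temperature, rng):
    """Sample one token from a next-token distribution.

    temperature == 0 means greedy decoding (always the top token);
    higher temperatures flatten the distribution and add variety.
    """
    if temperature == 0:
        return max(probs, key=probs.get)
    tokens = list(probs)
    # Raising each probability to 1/T re-weights the distribution.
    weights = [probs[t] ** (1.0 / temperature) for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# A toy "next word" distribution for one and the same prompt.
next_word = {"Paris": 0.6, "Lyon": 0.25, "Marseille": 0.15}

# Two users ask the exact same question; sampling can answer differently.
user_a = sample_with_temperature(next_word, 0.9, random.Random(1))
user_b = sample_with_temperature(next_word, 0.9, random.Random(7))
print(user_a, user_b)
```

At temperature 0 the sampler collapses to the single most likely token, which is why "deterministic" settings exist, but chatbot defaults don't use them.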
AI is making everything worse [slop, sidelining, shortage, spin]
It all feels a little too inevitable
AI can be useful and I can see the end goal that everyone is reaching for. But they’re not going to get there with large language models. Pretending they will – and rushing head-first into an AI-powered future by investing trillions of dollars into “solutions” that nobody really wants – is not going to get us there, and especially not in a healthy way.
Two years after Time’s 2023 “TIME 100 AI” cover – the most influential people in AI, the movers & shakers shaping the good, the bad, & the ugly – we have a shorter list for the 2025 cover.
A new mythology – the Titan Atlas recast as Vidi-on-us holding up the world
I’ve read & written a lot about AI over the last two years.
Time’s article covers all the bases. Even some anecdotal tales of AI’s “Midas touch” seeping (or blending) into individual lives. It contains useful infographics:
the players, the lords of AI – Chip builders, Computing Providers, and Model Builders
the capital expenditures on AI – the deals driving investment and markets; where AI spending is going – builders, energizers, tech devs
how people use ChatGPT – let’s count all the ways, the pace, scale, & chatter.
There are the sirens of smartness, who pledge wonders and wealth – with wisdom perhaps an afterthought (and humility not even subtext?).
Whether bubble or historic boom … Is this a flywheel for prosperity or primrose path for the general public?
• AI Overview
Time Magazine (TIME) named “The Architects of AI” as its 2025 Person of the Year, recognizing tech leaders like Jensen Huang (Nvidia), Sam Altman (OpenAI), and Elon Musk (xAI) who developed and shaped artificial intelligence as it became a mainstream force, impacting everything from daily life to global competition. The choice highlights AI’s rapid integration in 2025, marking a significant shift from novel tech to a fundamental part of modern existence, with its creators influencing a future filled with both opportunity and uncertainty.
Who they are:
Jensen Huang: CEO of Nvidia, a key supplier of AI hardware.
Sam Altman: CEO of OpenAI, developer of ChatGPT.
Elon Musk: Founder of xAI and other ventures.
Lisa Su: CEO of AMD, another major chipmaker.
Mark Zuckerberg: CEO of Meta.
Dario Amodei: CEO of Anthropic.
Demis Hassabis: CEO of Google DeepMind.
Fei-Fei Li: AI researcher and advocate.
Why they were chosen:
Year of AI: 2025 was the year AI moved from early adoption to mainstream consumer use, changing how people work, search, and create.
Shaping the future: These individuals led the charge in creating technology that reshapes economies, information, and society.
Impact: Their work accelerated medical research, boosted productivity, and sparked global debates on AI’s disruptive potential.
TIME’s reasoning:
TIME’s Editor-in-Chief, Sam Jacobs, noted that the people who imagined, built, and drove AI had the most profound impact on the world in 2025, ushering humanity toward a highly automated and uncertain future.
• Time > The Architects of AI Are TIME’s 2025 Person of the Year by Charlie Campbell, Andrew R. Chow and Billy Perrigo (Dec 11, 2025) – A vibe of boom and abundance highlighted a year of AI, as “tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods.”
Memes depict Nvidia as Atlas, holding the stock market on its shoulders. More than just a corporate juggernaut, Nvidia also has become an instrument of statecraft, operating at the nexus of advanced technology, diplomacy, and geopolitics.
The AI boom seemed to swallow the economy into “a black hole that’s pulling all capital towards it,” says Paul Kedrosky, an investor and research fellow at MIT.
Tools … totems … T-factor (thalamocortical network) … consciousness in the circle of life, eh. [1]
As mentioned elsewhere, remember that “there’s nobody home” when interacting with gen AI models and chatbots – despite occasional chatbot psychosis, in which a user attributes some type of agency, perhaps even consciousness, to a chatbot. Yet expert commentary, as cited in the article below, is unlikely to counter “conspiracies” of consciousness – especially given the lack of public denial by the AI lords.
If dogs & cats are conscious, maybe that’s why some people dream of AI pets, eh.
In a wider societal context, I hope that any inquiry into cruelty to AIs – “AI welfare” – does not distract research from practical efforts to address human-on-human cruelty.
And I’m more concerned about abdication of agency to AIs. And “Amusing Ourselves to Death.”
The debate over whether AI models could one day be conscious — and merit legal safeguards — is dividing [some] tech leaders. In Silicon Valley, this nascent field has become known as “AI welfare,” …
Microsoft’s CEO of AI, Mustafa Suleyman, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”
Suleyman says that by adding credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we’re just starting to see around AI-induced psychotic breaks and unhealthy attachments to AI chatbots.
Suleyman believes it’s not possible for subjective experiences or consciousness to naturally emerge from regular AI models. Instead, he thinks that some companies will purposefully engineer AI models to seem as if they feel emotion and experience life.
See also:
Language agility need not be a sign of consciousness … it’s like a mirage – a ‘seemingly conscious AI.’
“We must build AI for people; not to be a digital person,” Suleyman writes.
Suleyman’s 4,600-word treatise is a timely reaction to a growing phenomenon of AI users ascribing human-like qualities of consciousness to AI tools.
And if something feels human, we are generally inclined to give it some autonomy and rights. Suleyman wants us and AI companies to nip this idea in the bud now.
Suleyman argues we should protect the well-being and rights of existing humans today, along with animals and the environment.
He also calls on AI companies to explicitly say that their AI products are not conscious …
Notes
[1] Google: define:thalamocortical network
AI Overview
A thalamocortical network is a neural circuit comprising the thalamus and cerebral cortex, connected by reciprocal thalamocortical and corticothalamic fibers. This network is crucial for acquiring, processing, and storing sensory information, regulating arousal and consciousness, and is involved in various cognitive functions and disorders. The network dynamically shifts between different functional states, influenced by neuromodulators and involving complex oscillatory activity, to control states like sleep, wakefulness, and the processing of sensory and cognitive information.
Components and Pathways
Thalamus: Acts as a relay station, receiving sensory and motor information and transmitting it to specific cortical areas.
Cerebral Cortex: Receives processed information from the thalamus.
Thalamocortical Fibers: Nerve fibers that carry information from the thalamus to the cortex.
Corticothalamic Fibers: Fibers that send information back from the cortex to the thalamus, creating a two-way communication loop.
Thalamic Reticular Nucleus (TRN): A shell of GABAergic neurons surrounding the thalamus that modulates activity in both the thalamus and cortex via inhibitory connections.
Key Functions
Sensory Processing
Primary sensory pathways for vision, audition, and touch pass through the thalamus en route to the cortex.
Cognition
The network plays a role in attention, executive control, and perceptual decision-making.
Consciousness and Arousal
Involved in regulating cortical activity and consciousness, as well as controlling transitions between sleep stages.
Memory
Involved in learning and episodic memory processes through connections with limbic structures.
Network Dynamics
Neuromodulators
Substances like acetylcholine and histamine modulate the strength and activity of connections within the thalamocortical network, influencing brain states.
Oscillations
The network generates synchronized brain rhythms, such as gamma and beta oscillations, that are essential for information processing.
Functional States
Activity in the thalamocortical network shifts between states of low-amplitude, fast activity during behavioral arousal and synchronized, slow oscillations during sleep.
A better technology, a better humanity. A personal coach (and emotional pal) in everyone’s pocket. That’s the drift. Bet on it.
Business is business, eh. If we don’t do it, then someone else will. If we can’t survive as a business, then no one will benefit. $$$ in play. Soft power. And so on, the pros & cons of free trade (whether it’s really free). The ongoing saga of Promethean technology.
Anthropic’s CEO says basing their success on the political alignment of outside investors would invite bad grace. Optics need not compromise good outcomes.
Jensen Huang of Nvidia is threading the same needle. Sidestepping hypocrisy. Downsides, but …
Well, maybe at least there’ll be a conversation. Usage policies. Rather than a political kerfuffle.
And, as noted below, there’s Fidji Simo, the incoming CEO of applications at OpenAI. Just think of the opportunities – truly historic. Power to the people. All those “high-profile business partnerships.”
Data centers – in the lands of Energy Lords – will rival Victorian architecture in grandeur (scale) and control (agency). The Grid and its networked structures are a statement of dominance over nature and a display of wealth and power.
In his memo, Amodei acknowledged that the decision to pursue investments from authoritarian regimes would lead to accusations of hypocrisy. In an essay titled “Machines of Loving Grace,” Amodei wrote: “Democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.”
By pursuing a “narrowly scoped, purely financial investment from Gulf countries,” the company hopes to avoid the risks associated with allowing outside investors to gain “leverage” over the company, the memo says.
He added: “It’s perfectly consistent to advocate for a policy of ‘No one is allowed to do x,’ but then if that policy fails and everyone else does X, to reluctantly do x ourselves.”
Soon-to-be former Instacart CEO Fidji Simo sent a memo to OpenAI staff Monday laying out her vision for how AI will change the world.
“If we get this right, AI can give everyone more power than ever,” Simo wrote, striking a hyper-optimistic tone, according to a copy of the memo viewed by WIRED. “But I also realize those opportunities won’t magically appear on their own.”
“AI can compress thousands of hours of learning into personalized insights delivered in plain language, at the pace that suits us, responsive to our specific level of understanding,” Simo writes. “It doesn’t just answer questions – it teaches us to ask better ones. And it helps us develop confidence in areas that once felt opaque or intimidating, growing both personally and professionally.”
“If AI can help people truly understand themselves, it could be one of the biggest gifts we could ever receive,” Simo writes.
Does trust require truth? Facts? With explanations of the speaker’s reasoning & sources? Or just because “(A)I said it’s so” – authoritatively.
So, what could go wrong [1] … hardly anyone understands how their smartphone works (or most electronics, eh). Nothing new there for technology … yet nobody likes to be conned, or fooled … but is resistance futile with AI? [2]
I’m reminded of the ‘Magic 8 Ball’ toy, where people ask all kinds of yes-no questions and receive brief affirmative (10), neutral (5), or negative (5) answers (designed by a psychology professor, with the answers printed internally on a 20-sided die, a regular icosahedron).
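The 8 Ball’s mechanics are simple enough to sketch in a few lines; the face labels below are tone categories, not the toy’s actual printed phrases:

```python
import random
from collections import Counter

# The icosahedral die has 20 faces: 10 affirmative, 5 neutral, 5 negative.
FACES = ["affirmative"] * 10 + ["neutral"] * 5 + ["negative"] * 5

def shake(rng):
    """One shake of the 8 Ball: a uniform roll of the 20-sided die."""
    return rng.choice(FACES)

# Over many shakes, half the answers trend positive by design.
rng = random.Random(0)
tally = Counter(shake(rng) for _ in range(10_000))
print(tally)
```

An authoritative tone, a fixed stock of answers, and not a shred of knowledge behind them – the comparison writes itself.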
The problem, AI researchers say, is that those warnings conveniently ignore how we actually use technology — as machines that spit out the right “answer.”
“Generative AI systems have both an authoritative tone and the aura of infinite expertise …”
We’re all prone to automation bias, especially when we’re stressed or worked up [just trying to survive].
Today, Butler’s “mechanical kingdom” [Erewhon] is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT.
It joins another recently released book – The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna—in revealing the puffery that fuels much of the artificial-intelligence business.
Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.
A model is by definition a simplification of something, of a reality. AI (generative AI) uses models. Behind the curtain, under the hood, there be truth-less dragons “which don’t interact with [mirror] the world the way we do.” Shadow mirrors.
Key terms: language model (“soothsayer for words”), chatbot, parameter, deep learning, training data, tokens, patterns, search engine.
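The “soothsayer for words” idea can be illustrated with a toy bigram model: it predicts whichever token most often followed the current one in its training data, with no notion of whether the continuation is true (the tiny corpus here is invented for illustration):

```python
from collections import Counter, defaultdict

# Tiny training corpus; a real LLM trains on trillions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count bigrams: which token tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most likely next token seen in training,
    or None if the word never preceded anything."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" followed "the" twice, "mat" only once
```

Swap counts for billions of learned parameters and you have the family resemblance: patterns in, patterns out – a pattern-matcher, not a fact-checker.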
Notes
[1] Compare with the stories in the Black Mirror series regarding the consequences of some new technology, an “unending pursuit of scientific and technological advancement.”