
My homage to two famous cartoons:
As I’ve noted elsewhere, verbal (or literary) agility need not be a sign of intelligence. Such fluency can be glib, even annoying – especially when, as with some people, responses are repetitive and follow predictable patterns, as if on autopilot.
So, AI puts us in a similar place. Driven by hope & hype, it’s shoehorned into spaces where even tech fanboys may feel “how about never.”
• PC World > Opinion > I love AI. But the more I use it, the more I hate it by Jon Martindale, Contributor, PCWorld (Jan 22, 2026) – Excitement has turned into disdain.
But AI is also really annoying. The way it talks, the way it forgets things, the way it just makes stuff up on the spot and brazenly lies with confidence. It’s not as good or as revolutionary as it purports to be. Not to mention the awful things some people are doing with it, or the overall effect it has had on the industries I love and work in.
Key points (quoted)
AI is more annoying than ever
“That’s so X, and honestly, a great example of Y”
AI lies too readily and too confidently
“Oh yes. This is the best new game design in a long time, it will surely be published and sold in many languages and…” … When I called ChatGPT out on this, it apologized and admitted that it was just saying what it thought I wanted to hear.
AI still doesn’t know anything
But setting aside memory and context, there’s one huge flaw that still undermines LLMs: they randomly make things up.
The frustrating thing about AI is that it works best when you already know the answer you’re seeking … If you don’t have that knowledge, then you just can’t know if an answer is good or bad.
AI is way too inconsistent
You can ask ChatGPT or any other AI chatbot the exact same question that someone else asked, yet receive a different answer. Sometimes the differences are minor. Other times they’re drastic.
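As an aside, this inconsistency isn’t a bug so much as a design choice: chatbots typically sample the next token from a probability distribution rather than always picking the most likely one. Here’s a minimal sketch of that idea, using made-up candidate tokens and scores (the names and numbers are illustrative, not from any real model) – the same prompt can yield different answers purely because of the random draw, while greedy decoding (temperature 0) is deterministic.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into a probability distribution.
    Higher temperature flattens the distribution (more randomness)."""
    if temperature == 0:
        # Greedy decoding: all probability mass on the top-scoring token.
        probs = [0.0] * len(logits)
        probs[max(range(len(logits)), key=lambda i: logits[i])] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    """Draw one token according to the temperature-scaled distribution."""
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token candidates and scores for one prompt.
tokens = ["Paris", "London", "Berlin"]
logits = [3.0, 1.5, 0.5]

# Two users ask the exact same question; different random states
# can produce different answers at temperature 1.0.
a = sample_token(tokens, logits, temperature=1.0, rng=random.Random(1))
b = sample_token(tokens, logits, temperature=1.0, rng=random.Random(7))

# Temperature 0 is deterministic by construction.
greedy = sample_token(tokens, logits, temperature=0, rng=random.Random())
```

Real systems layer more on top of this (top-p truncation, repetition penalties, nondeterministic hardware), but the core reason two people see two different answers is this weighted coin flip at every token.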
AI is making everything worse [slop, sidelining, shortage, spin]
It all feels a little too inevitable
AI can be useful and I can see the end goal that everyone is reaching for. But they’re not going to get there with large language models. Pretending they will – and rushing head-first into an AI-powered future by investing trillions of dollars into “solutions” that nobody really wants – is not going to get us there, and especially not in a healthy way.