AI warlords – We don’t need no stinkin’ redlines!


Epigraph

“I just think that most of us are simply sick of the parts of digital life that once seemed miraculous but now feel exploitative.” – Backchannel, Steven Levy (2-13-2026)


World-class generative AI

Boom and kaboom

There’s a new factor in the AI boom – an arms race.

From the pen to the sword, our Promethean moment carries us into uncharted territory. Wizards & witches, all claiming to be in the right, vie for prominence in using AI to refresh our wartime footing – a way to save the world (of course, eh).

There’s increasing government sway (and swagger) to deliver weaponized AI systems. A new zeal for military applications & ascendancy. Without any private (corporate) or public (regulatory) guardrails. The lords of AI pledged to serve the king, the overlord of warfare.

Overlord vs. oversight. An old story.


Badge of authority

The sirens sing “full speed ahead” – for autonomous weapons and mass surveillance. Badges of authority mask the risks, bend the rules.

Are we in outlaw country? Where lines in the sand are easily erased? Where legitimacy is improvised:

“Redlines? We don’t need no stinkin’ redlines!”

Which is an homage to “one of the greatest movie quotes in history” – by the character “Gold Hat” in the 1948 film The Treasure of the Sierra Madre [1]:

“Badges? We ain’t got no badges. We don’t need no badges. I don’t have to show you any stinkin’ badges!”


AI metaphors

Hopefully we’ll not be thrust into a “no win” scenario with AI – where the “Frankenstein” AI genie cannot be put back in the bottle, eh.

And perhaps we’ll clarify the ongoing debate as to whether AI will ever be conscious [2].

• Wired > “AI Will Never Be Conscious” by Michael Pollan (2-24-2026) – Adapted (with permission) from A World Appears: A Journey into Consciousness (Penguin Press, 2026).

[Article references the Blake Lemoine incident.]

Right on page one, these computer scientists and philosophers set forth their guiding assumption: “We adopt computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness, as a working hypothesis.” Computational functionalism takes as its starting point the idea that consciousness is essentially a kind of software running on the hardware of what could be a brain or a computer – the theory is completely agnostic. But is computational functionalism true? The authors aren’t quite prepared to nail themselves to that claim, only to say that it is “mainstream – although disputed.” Even so, they will proceed on the assumption that it is true for “pragmatic reasons.”

For the purposes of the report, the “material substrate” of the system—that is, whether it is a brain or a computer—”does not matter for consciousness … It can exist in multiple substrates, not just in biological brains.” Any substrate that can run the necessary algorithm will do.

Metaphors can be powerful tools for thinking, but only as long as we don’t forget they are metaphors – imperfect or partial analogies likening one thing to another. The differences between the two things are as important as the similarities, but these differences seem to have gotten lost in the enthusiasm surrounding AI. As cyberneticists Arturo Rosenblueth and Norbert Wiener noted years ago, “The price of metaphor is eternal vigilance.” Beyond the authors of this report, the whole field of AI appears to have let down its guard on this one.


AI safety – Humpty Dumpty on a wall

Also, I’m reminded of the times in Star Trek when Kirk “talked a computer to death” in order to save the universe [3].

• Wired > Backchannel > “We were promised AI regulation – now we’re arguing about killer robots” by Steven Levy (3-6-2026) – The fragile consensus around AI safety is starting to crack.

I’ve spent the past few days asking AI companies to convince me that the prospects for AI safety have not dimmed. Just a few years ago, it seemed that there was universal agreement among companies, legislators, and the general public that serious regulation and oversight of AI was not just necessary, but inevitable. People speculated about international bodies setting rules to insure that AI would be treated more seriously than other emerging technologies, and that could at least provide obstacles to its most dangerous implementations. Corporations vowed to prioritize safety over competition and profits. While doomers still spun dystopic scenarios, a global consensus was forming to limit AI risks while reaping its benefits.

Promethean building blocks

Key points:

All of this seems to point to a glum future where unfettered and dangerous AI proliferates. But the companies beg to differ. When I presented my bleak argument, they insisted that safety was as important as ever, despite the Pentagon’s affection for unreliable killer drones. “I don’t think the race to the top is dead,” says Anthropic’s chief science officer Jared Kaplan, urging me to shift my gaze from the battlefield and the marketplace to the research labs.


Notes

[1] Think (as a working title): Treasure of the Mountain of Consciousness (tesoro de la montaña de la conciencia)

Or a quest like Jason and the Golden Genie [GEN(eral) Intelligent E(ssence)]

The Wiki article provides some background for the dialog in John Huston’s screenplay (and an “In popular culture” section listing how that dialog “has been humorously misquoted in various comedy movies and TV shows over the decades since the original film”).

John Huston’s adaptation of Traven’s novel was altered to meet Hays Code regulations, which severely limited profanity in film. The original line from the novel was:

“Badges, to god-damned hell with badges! We have no badges. In fact, we don’t need badges. I don’t have to show you any stinking badges, you god-damned cabrón and chinga tu madre!”

The dialogue as written for the film is:

Gold Hat: “We are Federales…you know, the mounted police.”

Dobbs: “If you’re the police, where are your badges?”

Gold Hat: “Badges? We ain’t got no badges. We don’t need no badges! I don’t have to show you any stinkin’ badges!”

Gold Hat’s response as written by Huston and delivered by Bedoya has become famous, and it often is misquoted as “We don’t need no stinking badges!” In 2005, the quotation was chosen as No. 36 on the American Film Institute list AFI’s 100 Years…100 Movie Quotes.


[2] Re the “I, Robot” episodes of The Outer Limits (TV series) and the metaphor that brains are computers.

List of The Outer Limits (1995 TV series) episodes

All but one of the 43 episodes in seasons 1 and 2 are originals; the only remake is “I, Robot,” which starred Leonard Nimoy in the original version, and he also appears in the new episode.

Episode 18: “I, Robot”
Directed by: Adam Nimoy
Based on the short story by: Eando Binder
Teleplay by: Alison Lea Bingeman
Original air date: July 23, 1995

Dr. Link is working on the central memory of a robot, Adam, when it suddenly activates and attacks him. A lab assistant enters the room in time to see Adam smashing up the laboratory before crashing through a window and escaping, while Dr. Link is left dead.

Later, a police officer finds Adam in a back alley, and Adam, apparently remembering nothing of the incident, asks the officer to contact Dr. Link.

Adam is taken to a cell and preparations are made to disassemble it.

Mina, Dr. Link’s daughter, contacts a lawyer, Thurman Cutler (Leonard Nimoy), who pushes for a murder trial, insisting that Adam is his client and not simply a machine.

A court hearing begins, and the prosecutor pushes for dismissal of the case on the grounds that Adam is just a machine. Cutler argues that, although Adam is clearly not human, it possesses intelligence and will, and, on that basis, deserves a trial.

Cutler begins to look into Dr. Link’s financial records and finds that he was working for a defense contractor, eventually discovering that Dr. Link was working to turn Adam into a weapon. Cutler argues, with supporting evidence of financial accounts and company memos, that Dr. Link was forced into attempting to rewrite Adam’s central programming, effectively lobotomizing it, and that Adam reacted in the way any human might when faced with death.

The court eventually finds that Adam is a person and will stand trial for the murder of Dr. Link.

As it is being led away, Adam sees the prosecuting attorney in danger of being run over and rescues her, sacrificing its own life in the process.

Note: Leonard Nimoy, father of co-director Adam Nimoy, co-stars in both this episode and the 1960s Outer Limits version of “I, Robot” as different characters. Neither version has any connection to the famous “I, Robot” stories of Isaac Asimov.

THE OUTER LIMITS (1995–2002): SEASON 1, EPISODE 18 – I, ROBOT – FULL TRANSCRIPT

[Opening narration]

It is said that God made man in his image … but man fell from grace. Still, man has retained from his humble beginnings … the innate desire to create. But how will man’s creations fare? Will they attain a measure of the divine … or will they too fall from grace?

[Closing narration]

Empathy, sacrifice, love – these qualities are not confined to walls of flesh and blood … but are found within the deepest, best parts of man’s soul … no matter where that soul resides.

List of The Outer Limits (1963 TV series) episodes

Season 2 (1964–65)

Episode 41 (season 2, episode 9): “I, Robot”
Directed by: Leon Benson
Story by: Eando Binder
Teleplay by: Robert C. Dennis
Original air date: November 14, 1964

Adam Link is accused of murder; however, Adam Link is a robot who maintains the victim’s death was the result of an accident. Placed on trial for the murder of Professor Link (Peter Brocco), his creator, Adam Link is defended by the professor’s niece Nina (Marianna Hill) and retired lawyer Thurman Cutler (Howard da Silva). Ultimately, it turns out that the prosecution is not simply placing the robot on trial but humankind itself as irresponsible and abusive of technology.


[3] As Michael Pollan wrote: “Why should we assume that conscious machines would be any more virtuous than conscious humans?”

Or as I wrote in my “When AI grows up – no longer ‘really cute tiger cub’” post (4-29-2025) re an interview with the ‘Godfather of AI’:

… [Why would AIs (AGIs)] be a monolithic threat (or benefit)? Whether globally or at an international corporate or state level. That such super-intelligent machines will share a common purpose or perspective regarding humanity. …

… And he [Hinton] notes that “human interests don’t align with each other.” So, why would AI interests? – in the long run.

So, while the interview raises the problem of AI-human misalignment, might AIs have different personalities? Diverge in temperament and virtue? “Evolve” in different ways? Tribes.

• Google: examples of “Kirk talking computers to death” trope

AI Overview [Star Trek episodes]

“Kirk talking computers to death” involves Kirk using logic, paradoxes, or emotional arguments to break down overly rigid AI systems. Key examples include convincing the AI Landru in “The Return of the Archons” to destroy itself, overloading the Nomad probe in “The Changeling,” and causing the M-5 computer in “The Ultimate Computer” to self-destruct.

Examples of the Trope:

Landru (“The Return of the Archons”): Kirk breaks the computer’s logic by explaining that its directive to protect the people through stagnation is actually causing them harm, forcing it into a paradox.

Nomad (“The Changeling”): Kirk convinces the probe that it has made an error in identifying him as its creator and, therefore, is imperfect, causing it to self-destruct.

M-5 Computer (“The Ultimate Computer”): Kirk forces the computer to face the ethical paradox of murder, causing it to shut down.

“I, Mudd” (Harry Mudd): Kirk and his crew overwhelm the androids with nonsensical, illogical behavior and emotional outbursts, which the rigid AI cannot process.

• Google search: in star trek how did kirk defeat a rogue ai

[Evidently 4 times]

In the Star Trek episode “The Ultimate Computer,” Kirk defeats the rogue AI, M-5, by exploiting a flaw in its programming: by making M-5 realize that its actions (killing humans) contradicted the ethical principles of its creator, Dr. Daystrom, causing the AI to self-destruct due to its own internal logic and sense of guilt.


