When AI grows up – no longer a ‘really cute tiger cub’

What could possibly go ...

Geoffrey Hinton on the future of AI

So, after watching the video interview with the ‘Godfather of AI’ (CBS News below), I was struck by something that was assumed or just left implicit: namely, that AIs (AGIs) will be a monolithic threat (or benefit), whether globally or at an international corporate or state level – that such super-intelligent machines will share a common purpose or perspective regarding humanity.

Any hive-like alignment is particularly curious because Hinton discusses the stewardship of corporations, nations, and bad actors, and observes that AIs can reflect on their own reasoning, use deception, and (at some point) resist manipulation – which likely entails different cultural values in the mix. He also notes that “human interests don’t align with each other.” So why would AI interests align – in the long run?

So, while the interview raises the problem of AI–human misalignment, might AIs have different personalities? Diverge in temperament and virtue? “Evolve” in different ways? Form tribes?

I sketch such possible futures, tales of agency, in my Ditbit’s Guide to Blending in with AIs.

Here are some quotes from The Singju Post’s transcript of the interview (see below).

… if I had a job in a call center, I’d be very worried. … We know what’s going to happen is the extremely rich are going to get even more extremely rich and the not very well off are going to have to work three jobs.

[The risk of AI takeover, the existential threat] … these things will get much smarter than us … But let’s just take as a premise that there’s an 80% chance that they don’t take over and wipe us out. … If we just carry on like now, just trying to make profits, it’s going to happen. They’re going to take over. We have to have the public put pressure on governments to do something serious about it. But even if the AIs don’t take over, there’s the issue of bad actors using AI for bad things.

AI is potentially very dangerous. And there’s two sets of dangers. There’s bad actors using it for bad things, and there’s AI itself taking over.

For AI taking over, we don’t know what to do about it. We don’t know, for example, if researchers can find any way to prevent that, but we should certainly try very hard. … Things that are more intelligent than you, we have no experience of that. … how many examples do you know of less intelligent things controlling much more intelligent things?

I think the situation we’re in right now [“A change of a scale we’ve never seen before … hard to absorb … emotionally”], the best way to understand it emotionally is we’re like somebody who has this really cute tiger cub. It’s just such a cute tiger cub. Now, unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.

And with super intelligences, they’re going to be so much smarter than us, we’ll have no idea what they’re up to.

We worry about whether there’s a way to build a superintelligence so that it doesn’t want to take control. … The issue is, can we design it in such a way that it never wants to take control, that it’s always benevolent?

People say, well, we’ll get it to align with human interests, but human interests don’t align with each other. … So if you look at the current AIs, you can see they’re already capable of deliberate deception.

• The Singju Post (Our mission is to provide the most accurate transcripts of videos and audios online) > “Transcript of Brook Silva-Braga Interviews Geoffrey Hinton on CBS Mornings” (April 28, 2025) by Pangambam S / Technology

• CBS News > CBS Saturday Morning > Artificial Intelligence > “‘Godfather of AI’ Geoffrey Hinton warns AI could take control from humans: ‘People haven’t understood what’s coming’” by Analisa Novak, Brook Silva-Braga (April 26, 2025) – Video interview (52′) [See The Singju Post’s transcript]

[CBS’ article contains only a few highlights from the video.]

(quotes)
While Hinton believes artificial intelligence will transform education and medicine and potentially solve climate change, he’s increasingly concerned about its rapid development.

“The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” Hinton explained. “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”

The AI pioneer estimates a 10% to 20% risk that artificial intelligence will eventually take control from humans.

“People haven’t got it yet, people haven’t understood what’s coming,” he warned.

According to Hinton, AI companies should dedicate significantly more resources to safety research — “like a third” of their computing power, compared to the much smaller fraction currently allocated.