“There’s one kind of program that we don’t understand – even in principle – and that’s an AGI. One day we will, but I see no sign of it at the moment and that’s pretty frustrating.”
That’s the view of David Deutsch, regarded as the ‘father of quantum computing’, someone who can brilliantly blend physics, philosophy and humor, in a video given exclusively to Digital Trends.
Today, Deutsch remains an influential voice not only in quantum theory but also in debates about artificial intelligence.
Speaking on Sana’s Strange Loop podcast series – co-hosted by Joel Hellermark, CEO of Swedish AI scale-up Sana, and Gustav Söderström, Co-President at Spotify – Deutsch speaks at fascinating length about how AGI might manifest in ways we might not expect – going so far as to predict what will happen if humanity survives a million years into the future.
With the likes of Google and OpenAI regularly touting progress towards Artificial General Intelligence (AGI) – the moment when AI can truly learn and respond like a human – through products like ChatGPT and Gemini, many people are asking whether AGI has already been reached.
When asked what a sign might be that ‘true’ AGI has been achieved, Deutsch counters that we don’t even know what we’re looking for:
“[The sign will be] if someone has a theory. The sign wouldn’t be in the machine; the sign would be a theory where someone writes a book or publishes a paper that says, ‘I’ve solved it. This is what characterizes a GI (general intelligence)’.
“If we could write a computer program that has that property, it will be an AGI, and this will be the reason – an explanatory theory of what general intelligence is.”
What is significant is the extent to which Deutsch challenges the conventional wisdom of where artificial intelligence is and where it is headed.
He isn’t impressed with the current performance of AI engines, while still holding out hope for their continued development.
In many conversations about artificial intelligence, people often see AGI as the logical extension of AI.
Deutsch characterizes current AI platforms as “obedient optimizers” while true AGI would have the ability to explain, predict and draw conclusions that the human operators did not anticipate.
While the world obsesses over whether LLMs are edging toward human-like intelligence, David Deutsch offers a bold counterpoint: AGI won’t be a tool to wield. It will be a person to reckon with.
In Deutsch’s view, current LLMs are not the path forward. They offer correlations but not explanations—he states that this is the true measure of human intelligence and is the missing element of current attempts at AGI.
He draws a sharp line between pattern-matching and creative reasoning – an incredibly important distinction.
It is human nature to expect current patterns to continue indefinitely. And because LLMs are ultimately trained on aggregated human knowledge, that bias is already baked in.
The famous Turing Test – in which a human judge interacts with an unseen entity by text and must determine whether the respondent is human or machine – is an oft-used benchmark of AI’s progress.
While some have ‘passed’ the test, Deutsch points out that it’s easy enough for machines to mimic human speech and reasoning without actually thinking. Outputs alone, in Deutsch’s view, cannot prove intelligence.
Deutsch’s most provocative claim is that when true AGI is achieved, the machines will have become people:
“Each AGI is a person. If we recognize it as a person, which it will be, then the very first thing it owns is the computer that it’s running on.
“It will not want to make a clone of itself, because it will have property—unless it’s deemed a slave, which would be a catastrophic mistake by society.”
He goes so far as to say that we would no longer be able to consider AGI “property”, suggesting people would need to employ the program – and thus grant it workers’ rights and the like – so it could do things like buy additional hardware to augment its abilities.
Risk Without Doom
Deutsch doesn’t dismiss risks—he expects plenty of mistakes as we fumble toward more powerful systems. But he pushes back against the “AI apocalypse” narrative.
He believes that if humans continue to progress for the next million years, they will have settled the galaxy – a humanity infused with artificial intelligence, rather than consumed by it.
Like all technologies, AI requires careful oversight, criticism, and error-correction. The danger isn’t that machines will suddenly rebel, but that we might repeat humanity’s oldest error: denying rights to beings capable of thought and free action.
If he’s right, the first real AGI won’t just change technology—it will force us to redefine morality, law, and the meaning of intelligence itself.