The Infinity Man
Demis Hassabis, colleagues and rivals with a little help from their chips
I'm a weird British outlier, on this little island here, and I've made my own path. I've followed my passions and tried to stay true to what I believe in.
Demis Hassabis
The cover of The Infinity Machine, Sebastian Mallaby’s new biography of DeepMind’s Demis Hassabis, features a blurred portrait of its subject. The effect is probably meant to evoke a Star Trek transporter-like dissolving of the human form, as human intelligence transforms into a less tangible artificial variety.
For anyone familiar with Hassabis, though, whether through his regular public speaking and interview appearances or through this book, the image might evoke another response.
Hassabis is one of the most important individuals of his, or for that matter any, generation: founder of one of the world’s foremost AI labs, now leading AI development at Alphabet/Google, creator of many ground-breaking AI models, and a Nobel prize winner in chemistry for his work on AlphaFold.
Yet I’d wager that if you met him in person, unaware of his achievements, there is a good chance you’d judge him unremarkable. He is the opposite of flamboyant - he briefly owned a high-performance car whilst at university but now drives a modest family car - softly spoken, and seemingly lacking the ‘reality distortion field’ that Steve Jobs was able to deploy.
Hassabis seems quite strikingly - for want of a better word - normal and, as on the cover of Mallaby’s book, someone who easily blends into the background.
Hassabis’s apparent groundedness is a central feature of The Infinity Machine. The interviews for the book, as Mallaby repeatedly emphasises, mostly seem to take place at a cosy pub in North London close to Hassabis’s family home.
There is a reason for this, of course: to contrast Hassabis with his rivals Elon Musk and, most notably, Sam Altman of OpenAI - both outsized personalities with seemingly infinite egos.
The release of Mallaby’s book could not have been better timed to make the most of this comparison. Over the last week Altman’s character has been the subject of an extensive profile published in the New Yorker. It’s an article without a smoking gun but with a steady accumulation of problematic behaviours. The title of the profile - Sam Altman May Control Our Future—Can He Be Trusted? - somewhat unsubtly invites the reader to invoke Betteridge’s Law and reply ‘no’ from the outset.
The Infinity Machine does the opposite for Hassabis. The lack of controversy over its nearly 400 pages leaves the reader with the sense that Hassabis really is someone who can be trusted.
And that matters.
Because the timing of the release of Mallaby’s book is opportune in another respect too.
Human judgment and the ethics surrounding the use of AI models have suddenly become a matter of life and death.
This week has seen news break of Anthropic’s Mythos, a model that is so powerful that it can’t safely be released to the general public. The same firm continues to be embroiled in a bitter dispute with the US Government over the use of its models at the Department of War.
The portrayal of Hassabis plainly seeks to reassure us that we’d probably prefer him, rather than Musk or Altman, to be the first to lead the world to Artificial General Intelligence.
But this doesn’t answer the most profound question about Hassabis’s work. Given the risks, should anyone be doing it?
The Infinity Machine starts with the story of Deep Learning pioneer Geoffrey Hinton discussing his misgivings about his research in a conversation with Nick Bostrom. Hinton is profoundly pessimistic:
“I am in the camp that is hopeless,” Hinton informed Bostrom.
“In that you think it will not be a cause for good?” Bostrom inquired.
“I think political systems will use it to terrorize people,” Hinton answered.
“Then why are you doing the research?” Bostrom asked.
“I could give you the usual arguments,” Hinton replied. “But the truth is that the prospect of discovery is too sweet.”
Hinton was echoing J. Robert Oppenheimer, the creator of the atom bomb. “When you see something that is technically sweet, you go ahead and do it,” Oppenheimer said. “You argue about what to do about it only after you have had your technical success.”
Later, Hinton regretted his line. “It was very apt. That’s why I wish I hadn’t used it,” he told me.
Hinton’s perspective frames the rest of the book. Is it right for anyone, no matter how trustworthy, to pursue this research? How does Hassabis himself come to terms with the risks that his work creates for humanity?
Before we return to that question, let’s take a whirlwind tour through Hassabis’s early life. It’s a story of precocious talent and relentless determination.
Born in 1976 in London, the son of Greek and Chinese-Singaporean immigrants; chess prodigy; precocious video game developer; double-first in Computer Science at Cambridge; video game studio founder; PhD in cognitive neuroscience at University College London; post-doctoral work at Harvard and UCL; co-founder with Shane Legg and Mustafa Suleyman and CEO of DeepMind in London in 2010.
This, though, is where the story proper gets started, as DeepMind takes centre stage. Hassabis’s time as CEO of DeepMind runs as two threads in parallel. The first is a series of ground-breaking projects whose names are familiar to anyone with a passing interest in AI. The second is the story of how Hassabis created DeepMind and kept it viable as a commercial entity: raising capital from Peter Thiel, selling out to Google and then negotiating life as a subsidiary of the Californian giant, all whilst staying thousands of miles away in London.
It’s clear which of these threads is most important for Hassabis:
“I am first and foremost a scientist,” Hassabis began. “My goal is to understand nature.”
“But doing science is, sort of, like reading the mind of God. Understanding the deep mystery of the universe is my religion, kind of.”
“We humans, we have these faculties. The world is understandable.
But why should it be that way? I think there is a reason.”
“Computers are just bits of sand and copper,” Hassabis continued, now sounding more urgent. “Why should these combine to do anything? I mean, it’s absurd! The electrons move around and then that creates an AI system that can defeat a Go master? Why should that be possible?”
“Why should it be solid?”
The focus on research is accompanied by an indifference to the financial rewards that come his way, as one almost comical exchange with Mallaby brings to life:
“I’ve been in the same house for more than ten years.”
“What is the view from your home office?”
“There isn’t one. It’s an attic.”
“Do you own other homes?”
“Yes, but they’re for family members.”
“Holiday homes?”
“No.”
“Ski chalet?”
“No.”
“Beach house?”
“Nothing.”
“A yacht?”
“Of course not.”
“Scientific collectibles?”
“I’ve got some first editions of Shannon’s papers. They cost £5K or something.” Five thousand pounds was less than $6,500.
This is no hagiography though: the second thread of the story portrays Google and DeepMind’s failure to capitalise on their own transformer architecture. Their ceding of leadership in Large Language Models to OpenAI after the launch of ChatGPT, and their struggles to catch up, are set out in painstaking detail.
Hassabis was ‘the man who knew’ that AI would be world changing. Even he, however, underestimated the impact and the importance of ChatGPT and its rivals.
Hassabis’s response is captured in a single paragraph, neatly illustrating both his determination and - plausibly if perhaps also conveniently - the constraints he and his team are working within.
Hassabis was not just furious. He was furiously competitive. OpenAI had fired a starting gun, and however much Hassabis might wish to slow the march to AGI, he saw no choice but to rush forward. Short of quitting the industry and retiring to watch powerlessly from the sidelines, neither he nor his colleagues at Google had any more agency than the other contenders in this race. In fact, both the slowness of their start and their new resolve to sprint illustrated the forces of technological determinism.
The Infinity Machine isn’t just about Hassabis. There are detours into shenanigans at OpenAI - Hassabis himself comments that Altman “seems like he’s doing it for power” - cameos from leading figures at Google - the ‘steely’ Sundar Pichai and the shadowy and inconsistent Larry Page - and more detailed descriptions of key colleagues from DeepMind itself.
Most extensively there is a portrait of DeepMind co-founder Mustafa Suleyman (“Moose”), who is portrayed as a well-intentioned but complex and ultimately tragic figure.
Mostly absent from the narrative, though, are the machines that Hassabis, his colleagues and rivals, rely on.
I think it’s a natural, and common, flaw in ‘histories of AI’ to focus on the people who have made modern AI a reality, at the expense of the models and the technology that underpin the advances. The Infinity Machine is largely uninterested in the machines.
The descriptions of the models’ technical details are short and highly readable, even if they sometimes seem a little ‘walled off’ from the rest of the narrative (to let readers skip them, or because they were drafted separately and subjected to extensive external review?).
But the machines themselves, and the fact that advances in computational power have been a prerequisite for Hassabis’s work, go largely unremarked. Moore’s Law is mentioned early on and then forgotten. Nvidia gets precisely one mention in the main body of the book. ‘GPU’ doesn’t appear in the index at all.
And in turn this means there is little discussion of the ‘AI boom’ and its economic impact.
Perhaps the rationale is that this pales in comparison with the repercussions of creating AGI.
So what of Hassabis’s conscience? Perhaps we’re looking at it the wrong way, and AGI really will be a gift to the world:
“But I only started DeepMind because I thought it was the best way to get the mission off the ground. If I had stayed in academia, I wouldn’t have had the resources.”
“And anyway, AGI should be gifted to the world eventually. I mean, AGI is infinitely bigger than a company or a person or a set of owners.”
“It’s bigger than capitalism and national economies. It’s humanity-sized, really.”
“AI’s humanity’s invention and it’s going to affect all humanity. So humanity should run it. Unfortunately the problem is, what are the right institutions?”
And his personal contribution?
… I'll just have to do the best I can …
The Infinity Machine isn’t the first book - and won’t be the last - to set out the rivalry between leading firms in the race for superintelligence.
It’s an exceptionally well-written account of Hassabis’s life and achievements, combined with the story of DeepMind’s rivalry with OpenAI.
Amongst the competition, Cade Metz’s ‘Genius Makers’ is more detailed and somewhat more balanced in its coverage, but much harder going and a few years out of date by now. ‘Supremacy’ by Parmy Olson is more recent and a great read, with a more compelling narrative of the race between OpenAI and Google.
The Infinity Machine’s weaknesses come from its primary focus on Hassabis and from a suspicion that Mallaby might, perhaps, have become a little too close to his subject in the course of those cosy pub interviews. Don’t let that put you off though: it’s a great book.