Today’s article is a guest post from the writer and researcher Alex. I’ve worked with Alex for a long time across various newspapers and magazines (we co-edited the fine art/fashion magazine Supplement for a few years, and DJ’d together a lot), and his own work has seen him go very deep into some very smart research about architecture and the built environment for people like Heatherwick Studios. Ten years ago, when I was editing HOUSE magazine, I commissioned him to write a profile of Nick Bostrom. Alex thought it would be interesting to go back and look at how that conversation reads ten years on, and to write a new introduction. Take it away, Alex:
When I met with Nick Bostrom in 2014, the Swedish philosopher’s warnings about super-intelligent computers threatening the future of mankind seemed novel, interesting, and perhaps a tiny bit implausible – or at the very least, quite a distant prospect.
Bostrom’s timelines were reassuringly far in the future. As I wrote in the piece (originally commissioned for the Soho House magazine), “mankind could well develop human-level artificial intelligence within the next fifty to sixty years.” That horizon has since shrunk to just five or six years, according to AI pioneers such as Google DeepMind founder Demis Hassabis.
In person, Bostrom didn’t do much to dispel that remote quality. He came across as just a tiny bit nutty. In my research for the piece, I had read some of his poetry, which, as far as I remember, had a strange sci-fi element to it, and taken a look at his own art – Bostrom drew the owl on the cover of Superintelligence.
He also did not seem deeply committed to preventing the rise of AI. He preferred to privately philosophise rather than publicly proselytise, and was able to meet me in London only because he had come down from Oxford to speak at the Palace of Westminster, addressing, I think, MPs who were trying to find a way for the UK to gain some kind of technological advantage. Bostrom admitted that the best way to do that would be to invest in a very powerful computer – even though such computers would, of course, hasten the arrival of better-than-human intelligence.
Looking back, this remoteness and oddness make him well suited to truly considering the big risks and rewards of better-than-human computing. He chose not to be a public intellectual – perhaps because he knew some of his views, or once-held views, were not publicly palatable. In doing so, he left space for others to grapple with the difficulties of fame and the challenges of making difficult ideas appeal to big audiences. He hasn’t really engaged with the news cycle; Superintelligence has quite a lot to say about the ways AI might conquer mankind, but very little about short-term job losses or smaller-scale industry disruptions.
A decade later, Bostrom published Deep Utopia: Life and Meaning in a Solved World. Rather than reckon with the risks, this new book considers how we might handle the fantastic benison of incredibly advanced technology that might be coming our way at some point.
From his vantage point, these are the big questions, and industrial, social and labour-market disruptions are mere speed bumps on the route to a truly consequential fork in the road. It’s the long view all right, but it’s not quite as far-sighted as it seemed in 2014; a decade on, these changes might now be under a decade away.
–
Nick Bostrom
For Professor Nick Bostrom, all-out global thermonuclear war wouldn’t really be the end of the world. “According to our current models, a war would not have resulted in the extinction of humanity,” says the Oxford University professor, seated in a quiet reception area in central London on a hot afternoon. “It would cause mass starvation, but it looks like there would have been pockets of survivors in temperate regions, who would, perhaps, have repopulated the earth.”
Hardly dinner and a movie, then, but not the existential risk that Bostrom specialises in. The 41-year-old Swedish-born academic has a background in philosophy, physics and neuroscience, and is director of the Future of Humanity Institute, an Oxford research centre that examines all-encompassing risks to human life; risks that may lead, as the professor puts it, “to the extinction of all intelligent life, or drastically destroy our future potential.” Risks, he adds more bluntly, that could “destroy the entire future.”
While a nuclear exchange doesn’t meet Bostrom’s criteria, the invention of superintelligence, or computers brighter than humans, does.
“Why do humans have a dominant position on the planet?” he says. “It’s because our brains are different. Some tweaks to the brains of our great-ape ancestors allowed us to accumulate information and make technology. That’s why the fate of the gorilla depends much more on what we do than on its own actions. In the same way,” he goes on, “if we make machines that surpass us in intelligence, they have the potential to become extremely powerful in relation to us. They could then invent advanced technologies, plans and strategies.”
Despite the dystopian gloss of his subject, Bostrom remains quite clear-sighted. His new book, Superintelligence: Paths, Dangers, Strategies, describes the possible timescale, effects and countermeasures presented by better-than-human artificial intelligence. According to the experts he surveyed, mankind could well develop human-level artificial intelligence within the next fifty to sixty years, and then a better-than-human artificial intelligence a few years thereafter. Once such an intelligence is in place, mankind could, as Bostrom puts it, join the ranks of the gorillas.
At this point, science-fiction readers might recall something like Isaac Asimov’s Three Laws of Robotics, a fictional list of commandments the author outlined, which began ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm.’
Could something like this work? Bostrom shakes his head. “For one, it would be a hopeless task to try to lay down rules for all the things we care about and specify how they should be traded off against each other,” he says. “And even if we did, there’s the problem of interpretation. Take that rule about not allowing a human being to come to harm. You might then want to prevent any human from ever being born, or encase all existing humans in some cocoon and feed them nutrients. You can write down a sentence, but interpretation is always open.”
Instead, he places his faith in building some kind of motivational selection into machine intelligence as it’s being developed, asking the machines to make the same kind of choices mankind might make. This sort of countermeasure might work, were it added in from the ground up. “The only way is to start from scratch,” he says. “The bottom line is that it’s an unsolved problem that looks really difficult.”
Thankfully, Bostrom and his team are devoting themselves to the task in hand. Without naming names, the professor acknowledges that other academics have made their fortunes as public speakers, defending this or that cause. He could make a buck or two railing against the dangers of artificial intelligence; the only difficulty, Bostrom says, is that you then feel compelled to spend the rest of your life defending old work.
“I’m only 41,” he explains. “I’d like to think that I might still be able to contribute to ongoing research, and that makes the defence of your old ideas difficult. It’s hard enough anyway, even if you’re completely focussed.” And for the time being, Bostrom is still one of the smartest guys – or indeed things – in the room.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (Oxford University Press) http://www.nickbostrom.com