(M)ending the World
It’s either one or the other — or so they say
--
Imagine this: In front of you there’s a big magical button. You happen to know that, if you press it, there’s an indeterminate but non-zero chance that you’ll solve all the world’s problems right away. Sounds great! There’s a caveat, though. At the other end of the probability distribution lies a similarly tiny but very real possibility that you will, just as instantly, kill everyone.
Do you press it?
Superintelligence: Utopia or apocalypse?
That button is, as you may have imagined, a metaphor for the hypothetical AGI or superintelligence (I'll use the terms interchangeably) we hear about everywhere nowadays. The dichotomous scenario I described is the setting that so-called "AI optimists" and "AI doomers" have immersed us in. Superintelligence will be humanity's blessing or humanity's curse. It'll be a paradisiacal dream or a hellish nightmare. It'll be the panacea that solves all our problems or the doom that ends human civilization.
Public discussions, on social media and in traditional media, about superintelligence and the broad range of futures that would open up if (or when, for some) we manage to create an AGI have captured the conversation; everything else pales in comparison. Debates about current, actual problems that are existentially urgent to many people are relegated to obscurity because they're not as "existentially serious as … [AIs] taking over," as AI pioneer Geoffrey Hinton stated recently.
It doesn't surprise me, though. What's more pressing than deciding what to do next when, if we choose well, we can achieve the end of suffering instead of causing the end of the world? In this framing, Hinton is strictly right: his worries are more "existentially serious." One step toward utopia is one step away from dystopia, and there's no pathway in between.