(M)ending the World
It’s either one or the other — or so they say
Imagine this: In front of you there’s a big magical button. You happen to know that, if you press it, there’s a tiny but non-zero chance that you’ll solve all the world’s problems right away. Sounds great! There’s a caveat, though. At the other end of the probability distribution lies a similarly tiny but very real possibility that you will, just as instantly, kill everyone.
Do you press it?
Superintelligence: Utopia or apocalypse?
That button is, as you may have imagined, a metaphor for the hypothetical AGI or superintelligence (I’ll use the terms interchangeably) we hear about everywhere nowadays. The dichotomous scenario I described is the setting that so-called “AI optimists” and “AI doomers” have immersed us in. Superintelligence will be humanity’s blessing or humanity’s curse. It’ll be a paradisiacal dream or a hellish nightmare. It’ll be the panacea to solve all our…