A Non-Cynical Reading of AI Risk Letters
It’s not bad — it’s worse
--
I hadn’t planned to write about the CAIS Statement on AI Risk released on May 30, but the press goes crazy every time one of these is published, so my piece won’t add much noise to the pile anyway. Even so, I wouldn’t have posted this if I had nothing to say beyond the takeaways I’ve seen on Twitter and in the news. But I do.
The existential risk of AI has recently become a constant focus for the community (other risks are mentioned, but as fine print). The explanations I’ve read elsewhere for why that’s the case are incomplete at best. They leave loose ends: if Altman just wanted to get richer, why does he have no equity in OpenAI? If he just wanted political power, why was he openly talking about superintelligence before OpenAI existed? If everything is about business, why are academics signing the letters?
I recognize this topic isn’t easy to analyze. It involves remarkable scientific progress (at least in kind, if not also in impact) combined with unprecedented political and financial tensions, which in turn interact with the individual psychologies of the people involved, researchers and builders alike. On top of that, those people may partially hide their beliefs to shield them from public scrutiny, making a clean assessment difficult.
The position I present below is perhaps more assertively worded than it deserves, but the growing importance of the subject demands it. I’m prepared, and even eager (you’ll understand why), to be proved wrong. In any case, I hope this imperfect digression helps clear up some of the confusion, illuminate some obscure motivations, and add value to the conversation. (Let me know what you think in the comments, especially if you disagree!)
This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between AI, algorithms, and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.