A Non-Cynical Reading of AI Risk Letters
It’s not bad — it’s worse
I hadn’t planned to write about the CAIS statement on AI Risk released on May 30, but the press goes wild every time one of these is published, so I won’t be adding much noise to the pile anyway. Still, I wouldn’t have posted this if I had nothing to add to the takeaways I’ve seen on Twitter and in the news. But I do.
The existential risk of AI has recently become a constant focus for the community (other risks are mentioned, but as fine print). The explanations I’ve read elsewhere for why that’s the case are incomplete at best, and loose ends abound: if Altman just wanted to get richer, why does he hold no equity in OpenAI? If he just wanted political power, why was he openly talking about superintelligence before OpenAI existed? If it’s all about business, why are academics signing the letters?
I recognize this topic isn’t easy to analyze: it involves remarkable scientific progress (at least in kind, if not also in impact) combined with unprecedented political and financial tensions, which in turn interact with the individual psychologies of the people involved, researchers and builders alike. Not to mention that they may partially hide their beliefs to shield them from public scrutiny, which makes a clean assessment all the harder.