Even God Can’t Skip the Bureaucrats
Comments on the “AI 2027” report
A team of five savvy experts (for lack of a better label to group them under) has published a long and thorough report predicting what will happen in AI between now (April 2025) and the end of 2027.
For reference, it belongs together with “Situational Awareness” and “Machines of Loving Grace.” Similar quality, similar depth, similar angle. The main difference is its extreme concreteness (easily testable in retrospect but harder to get right).
For a non-technical audience, the “AI 2027” report can be summarized, at the risk of some unfairness, by its opening sentence:
We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.
I’ve read it in full. You should, too, if you care about the short-term future of AI (or you can watch the 3-hour Dwarkesh Patel podcast episode). But fair warning: parts of it are very technical, and others sound straight out of science fiction. Still, it’s valuable for one reason above all: it shows what “taking forecasts seriously” looks like.
This isn’t a summary but rather a set of general remarks on what I see as the report’s main flaws. Higher concreteness — which is laudable — allows for…