Fear Has Won

The US and the UK have decided what the future of AI looks like

Alberto Romero

--

It’s so over — Fear has captured the king

This article is a selection from The Algorithmic Bridge, an educational newsletter to bridge the gap between AI, algorithms, and people.

It was not the potential benefits of nuclear bombs that led the US to develop the first one in 1945, soon replicated by other superpowers. It was the fear that the Nazis could build it first. A reasonable fear (up to a point), but a fear nonetheless.

It’s not the potential benefits of AI that have led the US and the UK to, respectively, release the first Executive Order (EO) on AI and lead the AI Safety Summit this week. The one sentence that best describes both efforts is: “They have bought the fear.” The fear that the long-term risks of AI will materialize, including the possibility that it goes rogue and kills us all.

The UK AI Safety Summit was centered on the long-term risks of AI from the very beginning; the Prime Minister had been hinting for months that he had bought into the AI existential-risk narrative, so it’s really no surprise. This isn’t good or bad in itself, just profoundly illuminating of what matters to the UK government.

The EO, although much broader (it touches on virtually every topic), also bought an idea that industry leader Sam Altman proposed to the Senate in May: setting a threshold to control AI model development as a function of capability level (instead of regulating at the application level, which wouldn’t stifle open innovation), using the amount of FLOPs (floating-point operations) spent to train a model as a temporary proxy for “capabilities” (not the best approach if the goal is to minimize all kinds of risks). Companies that develop models above the threshold (10²⁶ FLOPs; above GPT-4, but probably not GPT-5 or Gemini) will have to report “safety test results” to the government.
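For a rough sense of where that line falls, training compute for dense transformer models is commonly estimated with the back-of-the-envelope formula of about 6 × parameters × training tokens. The sketch below applies that approximation to the EO’s 10²⁶ FLOP threshold; the model sizes and token counts are purely illustrative guesses, not figures for any real model, and the constant-6 rule is itself a standard approximation rather than anything the EO specifies.

```python
# Back-of-the-envelope check against the EO's 10^26 FLOP reporting threshold.
# Uses the common ~6 * N * D approximation for dense transformer training
# compute (N = parameters, D = training tokens). All model sizes below are
# illustrative assumptions, not official or leaked figures.

EO_THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical models: (parameter count, training tokens).
models = {
    "7B params, 2T tokens": (7e9, 2e12),
    "70B params, 2T tokens": (70e9, 2e12),
    "2T params, 10T tokens": (2e12, 1e13),
}

for name, (n, d) in models.items():
    flops = training_flops(n, d)
    status = "must report" if flops >= EO_THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

Under this approximation, today’s open models of tens of billions of parameters land two to three orders of magnitude below the line, which is why the threshold only bites for the very largest frontier training runs.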

This may not sound that bad, but writing reports is a real hurdle for smaller players yet a negligible delay for larger ones. Imagine if the government had regulated the size of software programs or the number of transistors a device could have back in the 80s or 90s; within months, only those who could pay the “risk tax” could have kept innovating. The world would look radically different: much more…
