The Era of Large AI Models Is Over

Size doesn’t matter

Alberto Romero
8 min read · Apr 26


Image credit: Midjourney

Just kidding: we all know size matters. It's definitely true for AI models, especially those trained on text data, i.e., language models (LMs). If there's one trend that has dominated AI above all others in the last five or six years, it is the steady increase in the parameter count of the best models, which I've seen referred to as a Moore's law for large LMs. The GPT family is the clearest, albeit not the only, embodiment of this fact: GPT-2 had 1.5 billion parameters; GPT-3 had 175 billion, roughly 100x its predecessor; and rumor has it that GPT-4's size, although officially undisclosed, has reached the 1 trillion mark. Not an exponential curve, but definitely a growing one.

OpenAI was dutifully following the guidance of the scaling laws it had itself discovered in 2020 (and that DeepMind later refined in 2022). The main takeaway is that size matters a lot. DeepMind revealed that other variables, like the amount of training data or its quality, also influence performance. But there's a truth we can't deny: we love nothing more than a bigger thing. Model size has been the gold standard for heuristically estimating how good an AI system will be.
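To make that concrete, here is a minimal sketch, not taken from this article, of DeepMind's refined parametric scaling law, using the approximate constants reported in the 2022 Chinchilla paper (treat the numbers as illustrative): predicted loss falls as you add either parameters or training tokens, so growing the model alone eventually runs into diminishing returns while the data term stays put.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Approximate pretraining loss L(N, D) = E + A/N**alpha + B/D**beta (Hoffmann et al., 2022)."""
    # Fitted constants reported in the Chinchilla paper; illustrative, not exact.
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Doubling parameters with the data held fixed buys less than also scaling the data.
print(chinchilla_loss(70e9, 1.4e12))    # roughly Chinchilla's own budget
print(chinchilla_loss(140e9, 1.4e12))   # 2x parameters, same data
print(chinchilla_loss(140e9, 2.8e12))   # 2x parameters and 2x data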

OpenAI and DeepMind have been making their models bigger over the years, searching for hints in the performance graphs, signs in the benchmark results, or whispers from the models themselves, of an otherwise merely hypothetical path toward AGI, the field's holy grail. They didn't find what they were looking for. Instead, they got predictable (although, if you ask me, impressive) improvements in language mastery that sadly don't reveal any clear direction toward the next stage.

Size has proven critical, as they predicted, but it seems companies have practically exhausted the "scale is all you need" doctrine. Most striking of all, the acknowledgment of this new reality doesn't come from a classical AI proponent or a deep learning critic but from OpenAI's CEO, Sam Altman himself.

This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between AI, algorithms, and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.
