OpenAI o3 Model Is a Message From the Future: Update All You Think You Know About AI
Incredible, a miracle, more than just a better state-of-the-art AI model
OpenAI ended its 12-day Christmas event with a bang. On day one they launched the full version of their first reasoning AI model, o1. Today, circling back to the beginning, they’ve revealed their next step: o3, their second reasoning AI model, and o3-mini, a smaller, faster version made for coding.¹²
The significance of the announcement can’t be overstated (although people are already trying): o3’s performance on math, coding, science, and reasoning problems is incredible. Saying o3 is state-of-the-art (SOTA) is, in a way, an understatement. We’re used to AI labs taking small steps and snatching the lead from one another every month. This was different. OpenAI o3 didn’t just snatch the SOTA crown; it obliterated the aspirants’ hopes of getting it back anytime soon.³
There’s another sense in which this announcement was a breakthrough. How on earth did OpenAI manage to release the first version of a new type of AI model on December 5th and announce the next version on December 20th? Fifteen days later. It takes me more time to write a damn blog post. OpenAI’s Jason Wei says there’s something special about scaling test-time compute compared to scaling pre-training compute: it’s much faster to iterate on. Three months versus 1–2 years kind of faster.
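If “scaling test-time compute” sounds abstract, here is a minimal, purely illustrative sketch of the general idea: instead of training a bigger model, you spend more compute at inference time, for example by sampling many candidate answers and aggregating them. This is not OpenAI’s actual o1/o3 recipe (they haven’t published it); the `sample_answer` function below is a hypothetical stand-in, simulated with random noise, for one stochastic call to a reasoning model.

```python
import random
from collections import Counter


def sample_answer(question: str) -> str:
    """Hypothetical stand-in for one sampled model response.

    Simulates a model that gets the right answer ("4") 70% of the time
    and a wrong answer otherwise.
    """
    return "4" if random.random() < 0.7 else random.choice(["3", "5", "22"])


def answer_with_test_time_compute(question: str, n_samples: int) -> str:
    """More test-time compute = more samples = a more reliable majority vote."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    question = "What is 2 + 2?"
    for n in (1, 8, 64):
        print(f"n_samples={n:>3} -> answer: {answer_with_test_time_compute(question, n)}")
```

The point of the toy example: turning the “more compute” knob at inference time is just a config change you can re-run tomorrow, whereas turning it at pre-training time means another months-long training run. That asymmetry is one plausible reading of why the o1-to-o3 cycle could be so short.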