GPT-4: A Viral Case of AI Misinformation
For those of you wondering, GPT-4 won’t have 100 trillion parameters

I’m responsible for the false claim that “GPT-4 will have 100 trillion parameters” going viral on social media.
In case you don’t know what I’m talking about, here are a couple of visual examples:
These two images above were shared on Twitter (together, at the time of writing, they’ve got 5 million views). Similar posts are circulating on LinkedIn, Reddit, and other sites, all slightly different versions of the same thing.
They combine an eye-catching graph comparing GPT-3 and GPT-4 (recent versions use GPT-3 as a proxy for ChatGPT) with an emotionally charged hook (e.g. “this is a frightening visual for me” or “[GPT-4] will make ChatGPT look like a toy”). There’s also an authoritative touch: more often than not, the posters show no trace of doubt in their claims, and sources are lacking.
The millions of people who first learn about GPT-4 through these posts come away irremediably hyped (and surprised or afraid) about what’s to come. The problem is twofold: the illusory certainty of fake knowledge keeps them from seeing the reality behind the hype, and, sooner or later, they’ll discover their expectations won’t be fulfilled (i.e. GPT-4 won’t be as amazing as those visuals suggest).
If we take this problem to the extreme, we get the perfect recipe for an AI winter. As I’ve argued recently, I don’t think that will happen, but the risk grows with this kind of viral misinformation.
This (personal) essay is a cautionary tale about how fast, and how deep, misinformation spreads through the internet, and how it can emerge even from the best…