ChatGPT: Eraser of the Implausible
ChatGPT is five months old, which by AI standards makes it ancient. During this time, one of the most practiced AI sports has been the search for the most succinct and precise description of what it is and what it does.
The original definition goes along these lines: ChatGPT is a system trained to predict the next token given a history of previous ones, then further tuned to follow human instructions. Andrew Kadel shared on Twitter a snarkier one his daughter came up with: ChatGPT is a “say something that sounds like an answer” machine. In the same vein, writer Neil Gaiman observed that “ChatGPT doesn’t give you information. It gives you information-shaped sentences.” And at the more imaginative end of the spectrum, we have the claim that ChatGPT is a language model with emergent capabilities that allow it to understand and reason about the world.
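The “predict the next token” framing is concrete enough to see in action. Here is a minimal sketch, assuming Python with PyTorch and Hugging Face’s transformers installed, and using the small, open GPT-2 model as a stand-in for ChatGPT’s far larger, instruction-tuned one (the prompt is an arbitrary example): given a history of tokens, the model assigns a probability to every token in its vocabulary as the possible next one.

```python
# A minimal sketch of next-token prediction. GPT-2 is used here as a
# small, open stand-in for ChatGPT's much larger, instruction-tuned model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch=1, seq_len, vocab_size)

# Probability distribution over the *next* token, given the prompt so far.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most plausible continuations; everything else in the
# ~50k-token vocabulary is left with near-zero probability.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r:>12}  p={prob.item():.3f}")
```

Each run prints only the handful of most plausible next tokens; the rest of the vocabulary, the implausible, is effectively erased by vanishing probability, which is one way to read the description this article builds toward.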
But we’re all tired of this game. Mainly because it’s irrelevant: people are using ChatGPT as a Google replacement anyway. But also because ChatGPT’s functional complexity forces any particular description to illuminate some aspects while leaving out others. The descriptions above are all partially right (though the “super-autocomplete” and “reasoning engine” takes aren’t equally sound to me: the first is a bit conservative, while the second takes too much interpretive license), yet none of them is completely correct.
But before we stop playing this game (and let’s not tell anyone else how they should use ChatGPT; it doesn’t work), I will play it one more time. There’s one description I learned about just now that, in its partiality and incomplete correctness, becomes poetic. And it’s precisely this unintentional lyricism that I find worth sharing here.