GPT-4 in 10 Keys
First impressions
GPT-4 is here. OpenAI's long-awaited, much-anticipated AI model was announced and launched as a product yesterday, March 14 (confirming the rumors first reported by Heise). People are already talking a lot about GPT-4, but I've yet to see a succinct overview of its abilities, significance, uniqueness, and disappointments in one place.
That's what this is: everything you need to know about GPT-4 in ten keys. Most of the citations come from the technical report, the research blog post, or the product blog post (they overlap quite a bit, so don't worry about reading them all in depth). I'll also write follow-up articles for TAB if warranted as new info or stories come out.
- Multimodality: The first good multimodal large language model
- Availability: ChatGPT+ and API
- Pricing and an enlarged context window
- High performance on human exams and language/vision benchmarks
- Predictive scaling: what will future models be capable of?
- Improved steerability to control GPT-4 better
- Limitations and risks (and modest improvements)
- A super-closed release: bad news for the AI community
- Microsoft has revealed that Bing Chat was GPT-4 all along
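On the pricing key above, a minimal sketch of how API costs add up, assuming the per-token launch prices OpenAI listed for the 8K-context model ($0.03 per 1K prompt tokens, $0.06 per 1K completion tokens); the function name and defaults are my own, for illustration:

```python
def estimate_cost_usd(prompt_tokens: int, completion_tokens: int,
                      prompt_rate: float = 0.03,
                      completion_rate: float = 0.06) -> float:
    """Estimate the dollar cost of one GPT-4 (8K context) API call.

    Rates are dollars per 1,000 tokens, as listed at launch;
    the 32K-context model is priced at double these rates.
    """
    return ((prompt_tokens / 1000) * prompt_rate
            + (completion_tokens / 1000) * completion_rate)

# A 1,000-token prompt with a 500-token reply:
print(round(estimate_cost_usd(1000, 500), 4))  # 0.06
```

At these rates, GPT-4 is roughly an order of magnitude more expensive per token than the gpt-3.5-turbo API, which is part of why the pricing key matters.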