ARTIFICIAL INTELLIGENCE

4 Things GPT-4 Will Improve From GPT-3

GPT-3 revolutionized AI. Will GPT-4 do the same?

Alberto Romero
Towards Data Science
8 min read · May 28, 2021


Photo by Robynne Hu on Unsplash

In May 2020, OpenAI presented GPT-3 in a paper titled Language Models are Few-Shot Learners. GPT-3, the largest neural network ever created, revolutionized the AI world. OpenAI released a beta API for people to play with the system, and soon the hype started building up. People were finding crazy results. GPT-3 could transform a description of a web page into the corresponding code. It could emulate people and write customized poetry or songs. And it could wonder about the future or the meaning of life.

And it wasn’t trained for any of this. GPT-3 was brute-force trained on most of the text data available on the Internet, but it wasn’t explicitly taught to do these tasks. The system is so powerful that it became a meta-learner: it learned how to learn. Users could communicate with it in plain natural language; GPT-3 would receive the description and recognize the task it had to do.
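To make the idea of meta-learning concrete, here is a minimal sketch of what few-shot prompting looked like through the beta API. The engine name, parameters, and translation task are illustrative choices of mine, not details from OpenAI’s documentation or from this article; the point is simply that the task is described entirely in plain text and the model infers what to do from the pattern.

```python
# Minimal sketch of few-shot prompting with the (pre-1.0) openai Python library.
# Engine name, parameters, and the translation task are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# The "prompt" is plain text: a short task description plus a couple of examples.
# GPT-3 picks up the pattern from the examples -- no task-specific training.
prompt = (
    "Translate English to French.\n"
    "English: Good morning.\nFrench: Bonjour.\n"
    "English: See you tomorrow.\nFrench: À demain.\n"
    "English: Thank you very much.\nFrench:"
)

response = openai.Completion.create(
    engine="davinci",     # the largest GPT-3 engine available in the beta
    prompt=prompt,
    max_tokens=20,
    temperature=0.0,      # keep the output focused for a translation task
    stop=["\n"],          # stop at the end of the line
)

print(response.choices[0].text.strip())  # e.g. "Merci beaucoup."
```

Change the examples in the prompt and the same model does a completely different task; that is what the article means by GPT-3 recognizing the task from a plain-language description.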

This was a year ago. For the last three years, OpenAI has released a GPT model yearly: GPT-1 in 2018, GPT-2 in 2019, and GPT-3 in 2020. Following this pattern, a hypothetical GPT-4 may not be far off. Given everything GPT-3 can do and the degree to which it has changed some paradigms within AI, the question is: What can we expect from GPT-4? Let’s get into it!

Disclaimer: GPT-4 doesn’t exist (yet). What follows is a compilation of speculations and predictions based on my knowledge of GPT models in general and GPT-3 in particular, which I covered in this long-form article for Towards Data Science:

GPT-3 is big. GPT-4 will be bigger. Here’s the reason

