Hi Harley, I agree with you on everything you said!

I'm not sure if GPT-6 or -7 will ever exist, but we'll see AIs getting closer and closer to passing the Turing test, and for long enough to actually confuse people.

And it's also true that the datasets GPT models are trained on affect the degree of proficiency the system develops for specific tasks. For example, GPT-J, an open-source alternative to GPT-3, can code better despite being smaller because its training data also included GitHub and StackExchange. (I'm writing an article about it right now.)

About DeepMind's paper, I had it in my "to read" pile, but when I saw your comment, I decided to read it right away, and I found it very interesting. Although it's a hypothesis paper (the authors don't aim to solve the problem they present), it's very appealing.

Cheers :)