Thank you for the response Paul!

According to its creators, Wu Dao 2.0 has achieved better performance on benchmarks in which GPT-3 and DALL·E were SOTA or close to it.

I don't think Wu Dao 2.0 offers anything "truly beyond" the previous models since GPT-3, even with 10x the number of parameters. I agree that the "bigger is better" approach will eventually hit a limit. I wrote an article about embodied AI that tackles that point.

I didn't offer an opinion on Wu Dao's performance because there's very little information available. I've covered the news, but nothing more.

As for GPT-4, it will possibly be better than Wu Dao 2.0. Since the GPT models appeared a few years ago, there has been a constant race to find the limits of pre-trained language models. If OpenAI presents a more powerful model later this year, China will present another one the following year, and so on, until the wall you mentioned is reached.

--

AI and Technology | Analyst at CambrianAI | Weekly Newsletter: https://mindsoftomorrow.ck.page | Contact: alber.romgar@gmail.com