Alberto Romero
1 min read · Jun 10, 2021
Thank you for the response Paul!

According to the creators of Wu Dao 2.0, it has achieved better performance on benchmarks in which GPT-3 and DALL·E were SOTA or close to it.

I don't think Wu Dao 2.0 offers anything "truly beyond" the previous models since GPT-3, even with 10x the parameters. I agree there's a limit that the "bigger is better" approach will eventually run into. I wrote an article about embodied AI that tackles that point.

I haven't opined on Wu Dao's performance because there's very little information available. I've covered the news, but nothing more.

As for GPT-4, it'll probably be better than Wu Dao 2.0. There's been a constant race, since the GPT models appeared a few years ago, to find the limits of pre-trained language models. If OpenAI presents a more powerful model later this year, China will present another one the next year, and so on, until the wall you mentioned is reached.