Thank you, Pete, for the clarification. Yes, I did that for each completion, although in a few cases I liked the first generation and didn't repeat it.

The reason I got such a nice story is that I didn't burden either model with a lot of text to generate at once. Every time one of them produced a completion, the entire previous text history was fed to the other as the prompt. The ability of these models to generate a short, meaningful completion from a long prompt is remarkable.
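The alternating loop described above can be sketched roughly as follows. The two model functions here are hypothetical stand-ins (in practice each call would hit a real language-model API); only the turn-taking structure, where each model receives the full accumulated history as its prompt, reflects the process described.

```python
def model_a(prompt: str) -> str:
    # Placeholder: a real model would return a short continuation of `prompt`.
    return " [A continues the story]"

def model_b(prompt: str) -> str:
    # Placeholder for the second model.
    return " [B continues the story]"

def co_write(opening: str, turns: int = 4) -> str:
    """Alternate short completions between two models.

    On every turn, the entire story so far is passed as the prompt,
    and the model's short completion is appended to the history.
    """
    history = opening
    models = [model_a, model_b]
    for i in range(turns):
        completion = models[i % 2](history)  # full history as the prompt
        history += completion
    return history

story = co_write("Once upon a time,")
```

Keeping each completion short, as the comment suggests, is what keeps the story coherent: each model only has to write a small continuation, while the long prompt carries all the context.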

In contrast, having them generate a long paragraph is much harder. Prompt engineering is also crucial; that may have something to do with your results.

--

Alberto Romero

AI and Technology | Analyst at CambrianAI | Weekly Newsletter: https://mindsoftomorrow.ck.page | Contact: alber.romgar@gmail.com
