Thanks for the comment, Gottfried!

Genuine in what sense? I'd say what you call "causal considerations" and "counterfactual pondering" could be explained by: "This word makes sense given the previous ones."

They may seem to display causal reasoning, but you can't get them to reason accurately every time. Of course, they get some reasoning right, but people get it right almost every single time. The criticism isn't that they never display causal or common-sense reasoning; it's that they don't do it often enough to be reliable.

The trick here is that I didn't intend the story to go anywhere. I didn't have a plot or an ending in mind. I let the models drive it, and that's why it feels like they did a great job. If you try to steer them toward a particular destination, you're much more likely to fail.

Also, as Pete added to your comment, I chose whichever completion fit the conversation best each time. For instance, if GPT-3 agreed with J1-Jumbo in one completion, it may have said the complete opposite in another.

It's more an entertainment piece than a careful analysis of LLM behavior.

--

AI & Tech | Analyst at CambrianAI | Weekly AI Newsletter: https://thealgorithmicbridge.substack.com/ | Contact: alber.romgar@gmail.com
