Hi Marc, thank you for the response.

Actually, if you've seen some of the best work by GPT-3, you'll realize just how difficult it is to recognize its authorship.

In the best (or worst) cases, it shows writing skill well above the human average. Looking at the content alone isn't enough anymore (and sadly, this isn't just an issue with written content, but also with visual content -- think deepfakes).

Going more into the philosophical side of this, I think AI is putting on the table a problem that already existed -- although we didn't seem to acknowledge it as such. Put as a question: what method or process do people use to verify and trust their sources of information? In the end, an ever-smaller share of information reaches us first-hand (what we see, hear, touch). Most of it is, at best, second-hand.

We fear disinformation by AI because it feels more alien to us. But disinformation and propaganda are nothing new. They fuel, to a significant degree, how the world works. What can we do, individually and collectively, about it?

Just some food for thought. Cheers :)
