Hi Marc, thank you for the response.

Actually, if you've seen some of GPT-3's best work, you'll realize just how difficult it is to tell that it was machine-written.

In the best (or worst) cases, it shows writing skill well above the human average. Looking at the content alone isn't enough anymore (and sadly, this isn't just an issue with written content, but also with visual content -- think deepfakes).

Going more into the philosophical side of this, I think AI is putting on the table a problem that already existed -- we just didn't acknowledge it as such. Put as a question: what method or process do people use to verify and trust their sources of information? In the end, an ever-smaller share of information comes to us first-hand (what we see, hear, touch). Most of it is, at best, second-hand.

We fear disinformation by AI because it feels more alien to us. But disinformation and propaganda are old things. They fuel, to a significant degree, how the world works. What can we do about it, individually and collectively?

Just some food for thought. Cheers :)

