Hi Ricardo! Thank you for your comment, always good to see you here!

I agree that no AI understands language the way we do. Because they don't live in the world, they can't interact with it and therefore lack access to contextual information that allows us to connect form with meaning.

In my third point, I tried to address this argument. It goes like this: even if AI doesn't understand anything - and therefore lacks intention - the value a reader gets from its outputs remains unaltered.

When we read a book, we don't get value out of it just because the writer intended it (actually, in many cases the reader doesn't get what the writer intended). We get the value from the meaning we give to the words we read.

Communication isn't a process of conveying meaning - although we act as if it were. Communication is a process of conveying form, where the sender hopes the intended meaning is the meaning the receiver obtains. The words "You can hear the little birds singing while a soft breeze caresses your skin" have an objective form, but the meaning behind them is subjective. We're certainly imagining different scenes from the same words.

In the case of AI, there's no intended meaning, that's true, but there is obtained meaning. The reader gets to ascribe meaning to words that didn't have any. And that has intrinsic value.

I agree there's a problem in distinguishing AI-made from human-made writing. We expect to understand a person better by reading their work, which we can't do with AI. But in this sense, I don't see much difference between this problem and ghostwriting.

What do you think? Cheers! :)

Writing at the intersection of AI, philosophy, and the cognitive sciences | alber.romgar@gmail.com | Get my articles for free: https://mindsoftomorrow.ck.page