The Alienness of AI Is a Bigger Problem Than Its Imperfection
Today I bring you a fresh perspective on a topic I’ve written about a lot before: AI imperfection.
But instead of enumerating the ways AI systems fail, as I typically do, I’m going to shift my point of view and offer a new — and, I think, rather convincing — argument that I haven’t seen written anywhere else.
Let’s start from the beginning. A few days ago, before ChatGPT was a thing, I was scrolling Twitter and saw this picture (try to recognize what you’re looking at):
It took me a whole minute to realize it’s just a little doggy.
Then it struck me: humans are nowhere near perfect.
It’s ironic that I write so much (maybe too much? let me know!) about AI ethics, bias, misinformation, unreliability, and systems wreaking havoc, and it turns out that humans fail a lot, too.
If we’re so imperfect, why am I demanding such perfection from AI? Why do I set such high standards for deciding that an AI system is good enough to be out in the world?
Am I being reasonable when I argue that companies should show restraint in turning research into products and services, and devote more resources to fixing these issues?
This article is a selection from The Algorithmic Bridge, an educational newsletter whose purpose is to bridge the gap between algorithms and people. It will help you understand the impact AI has on your life and develop the tools to better navigate the future.