The AI Empathy Crisis

Remember LaMDA and the Google engineer? That, but happening to millions of people.

Alberto Romero


Pareidolia: the tendency … to impose a meaningful interpretation on a nebulous stimulus … so that one sees … meaning where there is none. Credit: Author via Midjourney

AI language models (LMs) have recently gotten so skilled as to be believable — even deceptive.

Not in the sense of intentionally fooling people, but in the sense of being capable of generating utterances that would make us imagine a mind behind the screen.

We — gullible humans with a tendency to anthropomorphize non-living objects — are the perfect victims of this trap.

As access to LMs becomes widespread, many will start doubting that there's no mind behind the words. Some will even claim certainty: "AI is alive and sentient."

This powerful illusion, at scale, will be the beginning of the first AI empathy crisis.


From an isolated case to a full-blown crisis

The AI empathy crisis refers to a future stage of AI development in which we won't yet have built sentient AIs (whether we ever will, I don't know), but we'll have built AI so advanced that an increasingly large number of people will believe it's sentient or conscious.

If we assume, first, that it's easier to create the appearance of sentience than sentience itself, and second, that the AI community is steering us toward the former rather than the latter, we can conclude that some time from now this will unavoidably happen.

The term “robot empathy crisis” was introduced in 2017 by scientist and science-fiction author David Brin. I’ve modified it slightly given the recent explosion of virtual AIs, but the idea remains the same:

“The first robotic empathy crisis is going to happen very soon … Within three to five years