Why ‘Is LaMDA Sentient?’ Is an Empty Question

And 3 barriers that separate us from the truth — whatever it is.

Alberto Romero
10 min read · Jun 14, 2022
Photo by local_doctor on Shutterstock

The hot topic in AI this week comes from Google senior engineer Blake Lemoine, who claims that Google's large language model LaMDA is sentient. In a viral article for the Washington Post, Nitasha Tiku described how Lemoine, after a few months of interacting with the bot, concluded that it was a person. He then tried to convince others at Google of the same thing, but was told that “there was no evidence that LaMDA was sentient (and lots of evidence against it).” Lemoine was put on paid leave for violating the company's confidentiality policy, a suspension that could end in his termination.

Let’s look at what LaMDA is and why Lemoine’s claim is empty.

LaMDA (Language Model for Dialogue Applications), announced at Google’s I/O conference in 2021, is the company’s latest conversational AI, built to handle the “open-ended nature” of human dialogue. At 137B parameters, it’s slightly smaller than GPT-3 (175B). It was trained specifically on dialogue with the objective of minimizing perplexity, a measure of how confident a model is in predicting the next token. Because LaMDA is, at bottom, a transformer-based language model, no responsible AI researcher would take Lemoine’s claim of sentience seriously.
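To make “minimizing perplexity” concrete, here is a minimal Python sketch of how perplexity is computed from a model’s next-token probabilities. The probabilities are made-up values for illustration, not LaMDA’s actual outputs:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(mean negative log-probability of the true next tokens)."""
    # Negative log-likelihood of each token the model was asked to predict.
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A confident model assigns high probability to the tokens that actually
# follow, so its perplexity approaches 1; a model that is merely guessing
# has a perplexity closer to the size of its vocabulary.
print(perplexity([0.9, 0.8, 0.95]))  # ~1.13 (confident predictions)
print(perplexity([0.1, 0.2, 0.05]))  # ~10.0 (uncertain predictions)
```

Training drives this number down, but a lower perplexity only means the model has gotten better at predicting text; it says nothing about sentience.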
