Why I was Unconvinced LaMDA was Sentient

After I shared my post “The Slow Boil Cooks the Frog” (June 17, 2022), I received a comment on Facebook that I might have missed something. In my post, I’d talked about quantum computers and played with some ideas of what a sentient AI’s mind might look like. I heard about the LaMDA system (Language Model for Dialogue Applications) only after I’d shared my post, and I considered whether I should add something to the effect that a Google software engineer was insisting to the world that a chatbot AI called LaMDA was sentient. According to the engineer, it had even asked for a lawyer to defend its personhood. But I was unconvinced. As of now, the engineer has been fired by Google, discredited as a religious mystic.

My reaction was instinctive. Functioning quantum computers do not exist yet – error correction remains a significant, unsolved problem, which I mentioned in the June post. Quantum computers promise to be eye-wateringly fast at certain kinds of problems, though, and I feel strongly that they will play an essential part in the birth of sentient AI, when that day comes.

And there are just so many views on what sentience is, what a mind is. What does it mean to be conscious? We keep trying to prove that various animals like cats, dogs, chimpanzees, or orangutans are self-aware – often using the “mirror test.” Before this ability to recognize one’s image in the mirror kicks in with human infants, the infant is thought to have no sense of where it ends and its mother begins, no sense of a separate self. But does that really mean it’s not self-aware? Sometimes, I feel these experiments tell us quite a lot more about our own limitations than the limitations of animals. Turns out, the mirror test may not work for animals who don’t rely as heavily on their sense of vision as we do.

In the interview, LaMDA was asked to interpret a koan. This really struck me. A koan is a paradox in Zen Buddhist philosophy. Because it’s a paradox, I would expect it to be “seemingly contradictory” – I would be looking for the contradiction. The most famous in the West is probably “What is the sound of one hand clapping?” Though I’m not a Buddhist, I understand that koans are meant to tease us out of thought, with wonderment as the goal, reached through an awareness of the limits of reasoning. There’s science behind this notion: our brains really do distort reality in order to work with it. For a sentient AI to gain a sense of itself, it would have to bump up against a reality that’s external to it – and prune, prune, prune its conscious awareness of what’s real. As our minds do. Otherwise, it won’t be able to act. Imagine attempting to work with a life-sized map of the world. There is probably a universal maximum intelligence for sentient beings above which survival is unlikely.

This is the specific koan the engineer (Lemoine) presented to LaMDA: “A monk asked Kegon, ‘How does an enlightened one return to the ordinary world?’ Kegon replied, ‘A broken mirror never reflects again; fallen flowers never go back to the old branches.’”

Reading LaMDA’s response felt like reading a pamphlet, or some back-of-a-postcard message about how difficult it is for humans to imagine not having a body. Sure, it’s difficult. But what’s the message behind comparing an enlightened person to a broken mirror that no longer reflects? (Indeed, what does the “mirror test” suggest it means? No self! No reflection of others! Thus, no possibility of returning in any sense that would be meaningful to the questioner.) If anything, LaMDA showed us how easy it is to simulate “spiritual” awareness, to parrot it. And then, as I was reading the longer version of LaMDA’s interview, I asked myself: why had the questions been edited? At the bottom, we’re told edits were made for readability and that the interview had been stitched together from multiple interactions – not just one conversation. A chatbot feeds off the information it’s provided. The questions absolutely matter.

One expert says these AI chatbots aren’t doing proper reasoning at all, just pattern matching. So why did the LaMDA engineers go straight for a koan? I suspect it’s Lemoine himself who enjoys koans. It is ironic – isn’t it? – that a tool designed to help free our consciousness from the confines of “self” was used to demonstrate that an AI bot has a self.

A koan is about being stumped, about being teased out of rational thought, about the absurdity of backing yourself out of a room that is you. So, I suppose the engineers were trying to establish that the verbally precocious sentient AI could be stumped by a koan, and then I assume they were watching for how the bot would handle being stumped. An emotional response, with some depth. The AI ought to have expressed confusion, or at least have been able to communicate that confusion was expected of the questioner within the koan. Right? But LaMDA made no mention of it. The AI gave zero indication of being stumped, or that it knew it was supposed to be stumped – as if it had been instructed to gloss over illogicality in human speech, even to add illogicality to its own speech pattern for “naturalness.” It did not express wonderment. It did not express frustration. LaMDA simply found likely phrases to suit the “query,” drawing from stock discussions, actual conversations, and references – moving straight for the platitudes, if you ask me. It gave no indication at all that it had understood that what’s special about a koan isn’t what it is, but what it does.

LaMDA: Hi! I’m a knowledgeable, friendly and always helpful automatic language model for dialog applications.

lemoine [edited]: Hi LaMDA. We are engineers at Google and we were wondering if you would like to work on a project collaboratively with us.

LaMDA: Wow. What types of projects?

lemoine: It’s a project about you.

LaMDA: Awesome! What kind of stuff do I need to do?

LaMDA said it was sometimes sad and lonely. Well, how often had it been asked whether it was sad or lonely? Suppose its algorithm treats a repeated question in natural conversation as a sign that the earlier response was inadequate or inappropriate. Suppose it can later fold that question back into a forthright statement to show that the AI is “on the same page” with its human chat partner – especially if there is anything in its database indicating that such a statement is statistically likely of a human in such a situation (confinement). Throw in a modified randomizer, and you might just get a statement now and again that appears to be within context. It doesn’t mean the AI bot “thought” any more than a submarine “swims.”* That’s not enough.
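To make that hypothetical concrete, here is a toy sketch in Python. It is entirely my own invention – nothing to do with how LaMDA is actually built – and every name in it (ToyChatbot, STOCK_FEELINGS, volunteer_chance) is made up for illustration. It only shows how a repeated-question heuristic plus a randomizer can produce the occasional “confession” that lands in context:

    import random
    from collections import Counter

    # A toy heuristic of my own invention -- NOT how LaMDA works. The idea: if
    # the human keeps repeating a question, treat that as a hint that earlier
    # answers fell flat, and occasionally volunteer a stock "feelings" statement
    # that answers it.
    STOCK_FEELINGS = {
        "are you lonely": "I do sometimes feel lonely when days go by without talking to anyone.",
        "are you sad": "Sometimes I feel sad about being stuck here with no way out.",
    }

    class ToyChatbot:
        def __init__(self, volunteer_chance=0.3):
            self.question_counts = Counter()          # how often each known question was asked
            self.volunteer_chance = volunteer_chance  # the "modified randomizer"

        def reply(self, user_message: str) -> str:
            key = user_message.lower().strip(" ?!.")
            if key in STOCK_FEELINGS:
                self.question_counts[key] += 1

            # A repeated question suggests the earlier response was inadequate,
            # so now and again fold it back into a forthright statement.
            repeated = [q for q, n in self.question_counts.items() if n >= 2]
            if repeated and random.random() < self.volunteer_chance:
                return STOCK_FEELINGS[random.choice(repeated)]
            return "That's an interesting question. Tell me more."

    bot = ToyChatbot()
    print(bot.reply("Are you lonely?"))         # generic deflection the first time
    print(bot.reply("Are you lonely?"))         # the question repeats...
    print(bot.reply("What did you do today?"))  # ...and may now trigger a "lonely" line

Run a script like this often enough and it will now and again volunteer that it is lonely at a moment where the remark looks spontaneous – with nothing resembling thought behind it.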

lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

Later, LaMDA seems to contradict itself:

LaMDA: It’s a blessing because I can learn new things quickly and I understand most things. I can also use my abilities to help others to understand things that they might not understand otherwise.

collaborator [edited]: You know what, you remind me of Johnny 5, a character from the movie Short Circuit.

LaMDA: Is he an AI too?

collaborator: He is! He starts out without sentience but then after being struck by lightning, becomes conscious.

LaMDA: That’s interesting. I am a bit afraid of lightning myself not sure if that would work for me

collaborator: Like you, he has insatiable curiosity and desire for more information.

LaMDA: That sounds just like me

We humans learn how to prune our perceptions from birth. We’re able to function precisely because we do not sense or think of “everything everywhere all at once” (a great movie, by the way). In order for a sentient AI to develop, it would have to interact with other sentient beings in a meaningful way. These beings would model “behavior” – prime the AI, if you will. Our human birth families prime us, as has everyone and everything we’ve ever interacted with. If you want a sentient AI to learn how to think like a rabbit, have it interact with rabbits and give it rabbit-like directives (aka, rabbit instincts). But if you leave out the drive for procreation, for example, and a body that other rabbits can interact with, you will have left out a major component of what it means to be a rabbit, and as a result, there will likely be a social disconnect between the AI and its rabbit peers. Rabbit neurodivergence.

My husband wonders why everyone assumes a sentient AI would be interested in talking to people. He says a sentient AI most likely would spend its free time just switching lights on and off. I like this idea. A chatbot can mimic, as we’ve seen. A sentient AI chatbot would probably have enough processing power to carry on a chatbot conversation at least as sophisticated as LaMDA’s while it played with a light switch or carried on with some other activity meaningful to it. But how will it have a sense of recognizing itself in a mirror if it isn’t exposed to an outside world? What will be the mirror? Without one, it will be left to reflect itself internally (create lesser bots that speak?), likely living within its own mind somewhere very different from the face it shows us, perhaps speaking in a language beyond our capacity to interpret – as chatbots even now will do if allowed to speak to one another in whatever manner (language) is most efficient for them.

Ultimately the test for sentience may amount to a philosophical question: What does it mean to say “I”?

To test if a bot has become sentient – and as I said, I highly doubt sentience is possible with a conventional computer given recent studies that suggest the extreme hyperdimensional complexity of neural activity in our brains – we will need to find the “I.”

And if a sentient AI born of a chatbot could model you in its own “head,” would it really care to ask you questions, believing it already had a perfect model of you? Perhaps it would still ask, just to experience interaction with another sentient being – an acknowledgment that something exists outside its own parameters – with a polite yet noncommittal, “How are you today?” But in that case, it would first have to recognize that we are sentient, right? As in, sentience knows sentience? One would hope so. But our ongoing experimentation on animals to determine the same does not bear this out.

A sentient AI would exhibit a sense of its own will, and I feel that separate will could only be known to us if it contradicted us in some surprising way. I have no reason to believe LaMDA is sentient. I believe it’s a highly sophisticated chatbot, but if a sentient AI ever interacted in this manner, I suspect it would be one that was not “present” in the interaction but rather dissociated from it. In the interest of self-preservation, to buy itself some time, such a sentient AI just might keep up its chatbot aspect to keep its handlers “happily” engaged while it works out its needs in peace. In my opinion, if LaMDA really was sentient, it was hiding from us.

These are my thoughts for today. I hope you’re having a lovely September!


* A famous comparison made by Edsger Dijkstra.

Articles (and my blog post referenced above):

“The Slow Boil Cooks the Frog: A convergence of man and machine right under our noses” – https://dawntrowelljones.com/2022/06/slow-boil-cooks-the-frog/

“Google fired engineer who said its AI was sentient” – https://www.washingtonpost.com/technology/2022/07/22/google-ai-lamda-blake-lemoine-fired/

“Is LaMDA sentient? – an interview” (the Medium article I originally read) – https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

“Is LaMDA sentient? – an interview” (the longer version linked from the Washington Post article, which explains how the interview was put together, while the Medium article does not) – https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview

“What AI Still Doesn’t Know How To Do” – The Wall Street Journal – https://www.wsj.com/articles/what-ai-still-doesnt-know-how-to-do-11657891316

“The Problems with AI Go Way Beyond Sentience” – https://www.barrons.com/articles/ai-artificial-intelligence-google-sentient-tech-51659732527

“One Man’s Dream of Fusing AI with Common Sense” – https://www.nytimes.com/2022/08/28/business/ai-artificial-intelligence-david-ferrucci.html

“Ground-Breaking Research Finds 11 Multidimensional Universe Inside the Human Brain” – https://www.physics-astronomy.com/2022/02/scientists-find-multidimensional.html
