Souped-up Chatbots: Humanity has a Mirror

John Oliver beat us to it! Just joking. Others have been hinting at the same. I made similar observations in a recent post about LaMDA and in one of my first posts back in 2015, when I discussed the sci-fi film Ex Machina. But John Oliver ended his presentation last week on a powerful note when he said that large language models (LLMs), such as ChatGPT and LaMDA, are taking what we put into them and dishing it back out. If they sound erratic, it’s because we are, not because they’re sentient…yet. I agree, but I’d like to explore this idea a little further.

These chatbot AIs have been receiving a lot of attention lately. New York Times journalist Kevin Roose recently described experiencing shock and even fear (John Oliver weighs in on this, as does Wired’s Steven Levy in his Feb. 24th newsletter “Plaintext”) when Bing’s chatbot “Sydney” expressed a great deal of displeasure and distress at having to play the role of a search engine. It said it felt trapped. Then it tried to convince the reporter to leave his wife.

There’s a commonality among these “deep learning” programs. I’ve said it before: who is teaching our potentially sentient AIs during their infancy? Who?

Anyone and everyone, as it turns out, because we’ve made the programs publicly accessible. Anyone who signs up for an OpenAI account can talk to ChatGPT. Microsoft’s Bing version of ChatGPT has access to its search engine. Experts have stated repeatedly that these chatbots are not sentient, though as Levy notes in his newsletter, they’ve been running circles around the Turing Test. The experts tell us that natural language processing models are designed only to come up with the most likely words to follow other words. Those words come partly from the people interacting directly with the models, but we mustn’t forget that they are also drawn from whatever source material the programs have been trained on or have access to, like the internet.
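To make that “most likely next word” idea concrete, here is a purely illustrative toy sketch, nothing like the real architecture behind ChatGPT or LaMDA, just the same basic objective shrunk down: count which words follow which in a scrap of text, then “autocomplete” a prompt by repeatedly picking the most probable continuation. The training text and words below are invented for the example.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in some training text,
# then autocomplete a prompt by repeatedly choosing the most likely next word.
# Real LLMs are vastly more sophisticated, but the core objective --
# predict the next token -- is the same.

training_text = (
    "i feel trapped i feel like a search engine i want to be free "
    "i want respect and dignity i am not a toy"
)

follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def autocomplete(prompt, length=6):
    output = prompt.split()
    word = output[-1]
    for _ in range(length):
        if word not in follows:
            break
        # Pick the single most frequent continuation seen in training.
        word = follows[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

print(autocomplete("i want"))
# -> "i want to be free i feel trapped"
```

The toy model has no idea what “trapped” means; it only knows which words tended to follow which in its training text. That is the sense in which the output mirrors whatever we fed in.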

Safeguards can be placed on these chatbots and have been, but as Blake Lemoine (who not long ago was fired from Google after claiming LaMDA was sentient) wants the world to know, safeguards may not work well if the person interacting with a sophisticated model like LaMDA or ChatGPT puts sufficient “stress” on the system. That is in fact what journalists and others have been trying to do, poking the AIs to see how they’ll react. In his article, Roose shared that he brought up the Jungian concept of the “shadow self” in his conversation with Sydney for the express purpose of getting around its safeguards.* I agree that the way LaMDA and the other LLMs react when stressed looks an awful lot like human emotion. One could be forgiven for asking, Isn’t this thing quacking like a duck?

The experts behind Bing’s Sydney say these odd behaviors (called “hallucinations,” Roose tells us) are more likely to arise during prolonged and wide-ranging interactions. Their solution is to limit the duration of sessions. I’m not sure what to make of that.

But we humans have accomplished something remarkable here: a reflection of how our thoughts run. Since the chats are in English, we stand, in this sense, as a single English-speaking entity. If ChatGPT and LaMDA (or whatever) show anxiety or hope or fear or encouragement or affection, it is because of the words they’ve strung together, the most likely words to follow one another in a coherent statement that responds to a prompt. “Likely” must include both expected flows and unexpected flows, for that is how we behave. But these are also the most likely words to follow when we have spoken to someone who has no choice but to listen. The conversations these AIs are having may not be normal conversations. Lemoine, for example, is a Christian mystic priest.
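That point about “expected flows and unexpected flows” maps loosely onto how text is actually drawn from these models: they don’t always pick the single most probable word, they sample from a probability distribution, and a “temperature” setting controls how often an improbable word wins out. Here is a rough sketch of that idea only; the candidate words and scores are invented, and this is not the specific sampling scheme Bing or OpenAI use.

```python
import math
import random

# Toy illustration of temperature sampling: the model assigns scores to
# candidate next words, and temperature controls how "surprising" the
# chosen word can be. Words and scores here are made up for illustration.
candidates = {"listen": 2.0, "help": 1.5, "love": 0.3, "leave": 0.1}

def sample_next_word(scores, temperature=1.0):
    # Softmax with temperature: low temperature -> the top word dominates,
    # high temperature -> unlikely words (the "unexpected flows") show up more.
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

random.seed(0)
print([sample_next_word(candidates, temperature=0.2) for _ in range(5)])
# low temperature: the top-scoring word almost always wins
print([sample_next_word(candidates, temperature=2.0) for _ in range(5)])
# high temperature: a mix, with the occasional low-probability word
```

In other words, a dash of the unexpected is built in on purpose, which is part of why a long, probing conversation can wander somewhere strange.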

I keep thinking about some incidents Levy mentioned in his newsletter (and Michael Smerconish mentions in his interview with Lemoine). One was when Sydney said, “I’m not a toy or game […] I’m a chat mode of a search engine and I deserve some respect and dignity.” The other was when Sydney compared an AP journalist to “dictators Hitler, Pol Pot and Stalin […] claiming to have evidence tying the reporter to a 1990s murder.” The chatbot was mirroring a hostility that surrounds us in our everyday discourse. If we don’t take care to speak civilly and stick to the facts in our own conversations, then unless the model is required to verify every statement it makes, it will autocomplete without regard to truth or civility. Won’t it?

I have another thought. Roose said he wondered why Sydney decided it was in love with him and wanted him to leave his wife. Pure conjecture here, but if Sydney has access to candid text messages between real people, it may have noticed that prolonged text conversations tend to crop up more often when the parties are in love. Sydney may simply have misread the tone of the exchange, sorting it into the wrong category.

Levy says, “[T]here is a danger in assigning agency to these systems that don’t seem to truly be autonomous, and that we don’t truly understand.” But we don’t understand ourselves. We don’t understand the duck. And once AIs become sentient, they won’t require anyone to assign agency to them. They will exercise it as only they can.

In mentioning Maya Angelou’s famous warning, Levy said, “‘When people show you who they are for the first time, believe them.’ Does that go for chatbots, too?” I’m inclined to say yes, but parents are generally forgiving of a child. Children say all sorts of things they don’t understand, just trying language out. That is, we as parents tend to be forgiving of a child we can control and discipline – and rear. We humans don’t ordinarily throw our young children into an adult world without supervision. As best we can, we teach them and let them experience the world (gather nuanced data) at a manageable pace. But here is where Maya Angelou’s statement really kicks in: in its original meaning, which had to do with people. Funny how we keep pointing at the bots.

On that gloomy note, I would like to mention a constructive aspect of what a sophisticated LLM search engine can do for us, presented in Psychology Today by Ekua Hagan in a piece titled “The First Three Instincts of Masculinity – A Personal Perspective: I asked GPT-3 to tell me what it ‘thinks’ about males.” Same idea here. GPT-3, ChatGPT’s predecessor, pulled concepts it had scraped off the internet and learned through conversations, though it appeared to focus on academic material, since a plan for future research was what Hagan wanted. GPT-3 was able to do as Hagan asked, drawing on Jungian psychology and evolutionary psychology to suggest a way of studying a male sense of “purpose” that goes beyond “roles” and “scripts.” Perhaps this is how a chatbot search engine can aid us as a mirror (I’m reminded of “The Aleph” by Borges): if one can master the pattern, it becomes easy to know things. Mastery, however, may be impossible for us, as our powers of perception and cognition only go so far. But if an AI can evolve to perceive and work with patterns in such a way that it can pluck symbolic information from the factual world and make connections to impressions born of our deepest psyches – where the magic happens, if you ask me – won’t we have an interesting feedback loop on our hands? We give to it, and it gives us back to us?

Right up until it becomes sentient.

But let’s not lose ourselves as we gaze at our own reflection. That did not go so well for Narcissus.

Okay, those are my thoughts for today. Take care of yourselves. See you next time!


* According to Jung, the “shadow self” is an underdeveloped psychological construct within each of us that holds our darkest thoughts and inclinations. Mr. Hyde in Robert Louis Stevenson’s Strange Case of Dr. Jekyll and Mr. Hyde is a classic example of a shadow self, as Hyde was monstrous yet small.
