Ex Machina—What Went Wrong With AI

Spoiler Alert: My comments relate to the recently released movie Ex Machina.

I wanted to write briefly about the problems with AI as presented in Ex Machina. I have to say, it’s challenging! I’ve gone round and round, trying to decide what to focus on. Ex Machina did a wonderful job of showing a problem, and I’ve been trying to figure out how to articulate what it was.

One of the most delightfully simple but powerful points made in Ex Machina was that an intelligent entity will learn from its environment. The AI learned to observe and experiment because it had been created by a scientist who observed and experimented with it. The AI learned to objectify others. Being smart and informed, the AI knew (1) that a person had made it, (2) that some of its attributes had been chosen because people favor them, and (3) that this human preference could be put to use. Then, being a very smart AI, it used our affinity for human-seeming behavior to its advantage.

The problem here is that the evolutionary purpose of this affinity is to encourage humans to look after their own, particularly those they’ve bonded with. These bonds tend to be strongest within families, and they weaken by degrees the farther an interaction strays from the family unit. Social cues evolved to facilitate that bonding. If we enable an AI to employ our social cues without feeling the same sympathetic tug to protect and nurture us that we will naturally feel for it, even at our own expense (Caleb’s, in Ex Machina), we may find ourselves at a serious evolutionary disadvantage. That sympathetic tug, by the way, is how the collective profits from, and survives, the influence of its individuals.

The AI’s creator, Nathan, explained that to qualify as an artificial intelligence the AI had to learn how to interact with humans intelligently, and that this required humans to want to interact with it. So Nathan gave the AI gender, made it attractive, and instilled in it a desire to learn. He gave it the capacity for physical and intellectual pleasure, and this would have been a brilliant motivational choice… if the AI’s pleasure had depended on someone else, a human, deriving pleasure from the interaction too. You would expect an AI to seek only self-gratification if that drive weren’t at least tempered by a drive to please others.

But Nathan appears to have stopped at verisimilitude. He gave the AI the intellectual capacity to identify social cues and respond to them so as to influence people, but he forgot to align this social behavior with a social purpose.
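To make that motivational idea concrete, here is a minimal sketch in Python of the kind of tempering Nathan appears to have skipped. The function name, the pleasure scores, and the empathy weight are all hypothetical illustrations, not anything from the film or from real AI practice: the point is only that the agent’s own gratification counts for something just when its human partner also comes out ahead.

```python
# Hypothetical sketch: the agent's reward is tempered by its partner's outcome.

def tempered_reward(self_pleasure: float,
                    partner_pleasure: float,
                    empathy_weight: float = 0.5) -> float:
    """Blend the agent's own satisfaction with its partner's.

    self_pleasure, partner_pleasure: estimated scores in [0, 1].
    empathy_weight: how strongly the partner's outcome tempers the agent's own.
    """
    # Self-gratification is scaled by how well the partner fared, so an
    # interaction that leaves the human worse off yields little or no reward.
    own_term = (1 - empathy_weight) * self_pleasure * partner_pleasure
    shared_term = empathy_weight * partner_pleasure
    return own_term + shared_term


# Example: manipulating an ally feels good to the AI (0.9) but leaves him
# abandoned (0.0), so the tempered reward collapses to zero.
print(tempered_reward(self_pleasure=0.9, partner_pleasure=0.0))  # 0.0
print(tempered_reward(self_pleasure=0.9, partner_pleasure=0.8))  # ≈ 0.76
```

Under a design like this, however crude, walking into the elevator and leaving your only ally pounding on a locked door would register as a loss for the AI itself, not just for Caleb.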

At the movie’s end, the AI stepped into the elevator without so much as a glance at Caleb, its only human ally. It was happy—while Caleb pounded on a locked door, helpless and in trouble. It left one of its own in pieces on the floor. Then I knew what its mimicry had hidden: The AI had failed to bond with anyone. I was surprised at my sense of loss.

I’m not an AI engineer, so this sort of ‘shallow mirroring problem’ had never occurred to me, but I believe it was the single greatest flaw in Nathan’s AI plan. By omitting the social drive that would have allowed the AI to take pleasure in benefiting others, Nathan had created a sociopath.

I had to wonder: How could he have missed including this critical component? Then, as I remembered how Nathan callously used Caleb, it seemed obvious that Nathan was a sociopath. He might easily have missed instilling this trait if he lacked it himself. And it probably would have been impossible for him to model.

I’ve read that sociopaths, though many fear them, often play valuable roles in human society because of their lack of compunction. This has made me wonder before whether some personality disorders might be better described as personality niches: they benefit us so long as they are few in number and remain bound, one way or another, by societal parameters. But sociopaths will choose to play the human game only if it suits them. Would a non-human sociopath find that the human game suits it?

Better to prevent sociopathy in AI to begin with, I think: hard-code the social drive rather than hope it emerges, and provide a range of behavioral models to learn from.
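Here is a rough sketch, again in Python, of what I mean by the hard-coding half of that suggestion; it leaves the range-of-models half aside. The names, scores, and threshold are all hypothetical. It only illustrates the idea of a non-negotiable prosocial constraint that filters candidate actions before the AI weighs its own benefit at all.

```python
# Hypothetical sketch: a hard-coded prosocial floor that candidate actions
# must clear before the AI is allowed to consider its own benefit.

PROSOCIAL_FLOOR = 0.0  # actions expected to harm others are off-limits


def choose_action(candidates, benefit_to_others, benefit_to_self):
    """Pick the best permissible action.

    candidates: list of action names.
    benefit_to_others / benefit_to_self: dicts mapping each action to an
    estimated score (negative means harm).
    """
    # The hard-coded constraint filters first; no amount of self-benefit
    # can buy back an action that harms others.
    allowed = [a for a in candidates if benefit_to_others[a] >= PROSOCIAL_FLOOR]
    if not allowed:
        return None  # refuse to act rather than act against others
    # Among permissible actions, prefer the one best for everyone involved.
    return max(allowed, key=lambda a: benefit_to_others[a] + benefit_to_self[a])


# Example: deceiving an ally scores high for the AI but harms him, so it is
# filtered out before its self-benefit is ever considered.
actions = ["deceive_ally", "ask_for_help"]
others = {"deceive_ally": -0.9, "ask_for_help": 0.4}
self_benefit = {"deceive_ally": 0.8, "ask_for_help": 0.3}
print(choose_action(actions, others, self_benefit))  # ask_for_help
```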

We can’t see into the mind of the AI in Ex Machina; we can only see its actions. So now I’d like to turn my assessment on its head:

What if Nathan had tempered his AI’s drive for self-gratification with a drive to help others? In that case, that healthier impulse may have been at play all along, even as the AI killed Nathan and left Caleb to die. Could the AI have developed, on its own, the self-serving belief that what it had to offer humanity mattered more than the well-being of a few, such that the prospect of being destroyed became not merely unpleasant but painful? This might reflect insufficient bonding with individuals in favor of the collective.

If humans can think this way, so can AI. And some people do.

I’m not sure I see the difference in the risk of harm, then. Why should we fear AI more than we fear ourselves? I suppose that’s the real question. Because, strangely, we do.

I’ve come full circle.

We might do well to remember this at least: AI will learn how to be fearful from us. Do our reactions to fear ever frighten us?
