Personal Narratives, Tribalism, and Robots

I recently read a book recommended to me by one of my beta readers: Sapiens: A Brief History of Humankind by historian Yuval Noah Harari. The topic is quite large, and to be honest, I came away from this book a little dumbstruck. But it wasn’t the only book I was reading at the time, so I’ll see if I can get to the nut of what captivated me most about how these works interlocked.

According to Harari, the fictions we tell ourselves serve a crucial evolutionary purpose: they facilitate large-scale collective action. These fictions go to the very root of who we are and how we live. Harari isn’t simply talking about myths, religious beliefs, and philosophies. Made-up narratives permeate every aspect of our lives; we take them for granted, and we will continue to, he says, because we must.

The gist is that culture has supplanted instinct, or rather, it has become its own instinctive process, and the way we communicate culture is through narratives. What do these stories, these fictions, look like? National borders, politics, property, standards of beauty, capitalism, communism, money, corporations, stock exchanges – every abstraction we value has built for us a sense of identity and purpose. A diamond is much more than a rock.

Then, there’s tribalism. Tribalism, which we’ve been hearing so much about lately, where there’s an “us” and a “them,” goes hand-in-glove with culture, Harari says. It’s essential to who we are as a species. But what does all this do for us? It gives us a pleasurable sense of belonging to a collective with shared values – sure, we already know that. But why? Tribalism is a vehicle, the only vehicle, he says, through which coordinated action is possible in groups of more than about a hundred and fifty individuals. That’s essentially it, the whole point. Collective action inspired by the cultural narratives we tell ourselves has helped us not only to survive, but to drive out our competitors. We were able to adapt faster than other, older versions of humans dominated by biology and inherited instincts.

I ran across this in James Dickey’s Alnilam, which seems to illustrate what this process looks like:

“‘You see,’ Shears said, now almost apologetic, ‘incantation is valuable to us.’ He thumbed through the book and read from it, holding it at a curious slant. ‘Through incantation, each thinks he has summoned the words from some deep place in himself that only he knows. The more familiar with the words he is, even if he memorized them from some other place at the beginning, the more sure of this he is; the more he thinks the words belong to him; that they come from him. Thus the many enter into a gigantic reflex, which is like a touched muscle belonging to them all. Yet each feels it as his own.’”

Harari says cultural fictions build on top of one another, and evolve out of one another. The late scientist and philosopher Michael Polanyi, in The Tacit Dimension (I mentioned him a couple of posts back), pointed out that whenever a new concept arises out of lower conceptual parts (emergence), we tend to lose sight of those lower parts. If we later try to focus too long on the lower parts, we destroy the higher one. There’s a process of forgetting that’s crucial for growth.

And, so, as I understand Harari, over time, we tend to forget that social fictions are fictions, and build on them down the centuries, even millennia. Just imagine: intellectually, we and our distant ancestors are the same. On average they would receive and absorb approximately the same amount of instruction throughout their lives as we do. A child attending school in the year 2050 won’t have to reinvent computers or an entire educational system or master particle physics. She’ll simply turn on her quantum computer and do her homework. What she learns will be the topmost cultural constructs of her era. Old knowledge will still be there; it’s just subsumed.

Then, an interesting analogy struck me. Kids, myself included back then, love cartoons that are full of intense action but low in social context, low in the hierarchy of cultural information. What they perceive is usually the bare-bones stuff; they tend not to pick up on nuance. Clear good guys, clear bad guys, and a streamlined backstory. The appreciation for nuance comes later, when they’re older, in situations of high social context where tribal lines are not always clear. Adults, I’d say, generally prefer action they can follow, both visually and in terms of relevance, and stories with a deeper social context, with moral gray areas. What would it mean if adult narratives trended away from high context and nuance? I wonder. Why would we ever not want to build on what we know?

Stories matter. What we tell ourselves about ourselves matters, as a tribe and as individuals through our own personal narratives.

A literary agent on Twitter recently mentioned how much she loved the movies Blade Runner and Moon for their portrayals of memory and identity. Me, too. The android Rachael (Blade Runner) and Sam Bell’s clone (Moon) face similar circumstances in that they’ve both been given false memories. These false memories are the foundation on which their identities have been built and provide them with a sense of place in the world. Like any of us, the characters take their memories for granted, and the truth, when it comes out, is psychologically shattering to them. It’s painful even to imagine how it would feel to have one’s underlying sense of self wiped out like that. But new information clears the path for new narratives—this is also true. We’ve been doing the same thing to ourselves, it would seem, since the dawn of time, whenever we shave down our own identities to fit the narratives of our tribes, whatever those might be, however we came to be a part of them. Sometimes individuals or entire cultures get subsumed; they lose out and end up hurt, as do Rachael and Sam Bell’s clone. But we love these characters, I think, because they do not give up. In a sense, they write their own parts, by the end.

Speaking of robots: in the book What to Expect When You’re Expecting Robots, experts Laura Major and Julie Shah tell us that engineers should design future robots around the ways in which humans will interact with each other, and with them, on a day-to-day basis. (There’s a recent Ars Technica article on the topic, if you’re curious.) They explain that generating human empathy toward these robots will be essential to their survivability. (Not too much empathy, though.) Our ability to feel empathy comes from our capacity to imagine the perspective of others, to develop a theory of mind about them. According to What to Expect, in order to know how to interact and communicate with robots reflexively, people should be able to intuit how robots “think” without the need for specific training, in split-second assessments. To accomplish this, robots should be given apparent narratives, as in, “I want to deliver this package to this house and not hurt anyone in the process,” a storyline that human bystanders can recognize and simulate in their own minds – basically, a narrative that meshes with theirs. But robots don’t think like humans, so it’ll be a fiction. Interestingly, identity and memory and our own personal narratives are the very tools that allow us to model the narratives of others.

What to Expect When You’re Expecting Robots says:

“Failure is a part of life, not only for humans, but for robots, too, and no amount of planning and testing can change that. It is not reasonable to expect to be able to identify or account for all possible error circumstances that might arise in day-to-day living. The human world is simply too complicated to predict.”

It never ceases to amaze me how a chain of ideas from widely varying media sources, topics, and eras can come together to tell me a story about the world as it exists now, through their association, that I might not have come across otherwise. A unique configuration. An overview. Cool.
