The Great Human Hack (& Dueling Fiddles)

This is a hot topic among writers, and a tough one. As I’m trying to gather my thoughts on how AI will likely affect me and others like me, the technology races ahead. Everywhere I look another expert is discussing AI and what it can do for us. Historian and futurist Yuval Harari has written an article for The Economist warning us again of the dangers. In a recent Fortune interview, Geoffrey Hinton, considered the “godfather of AI” for his pioneering work on neural networks, says the AI crisis is upon us and on par with the atom bomb in how it will change our world. In The New Yorker, speculative author Ted Chiang explains how AI will increase economic disparity.

I’m concerned about art. What is art? Who is art for? When an AI large language model takes from us our collective art and feeds it back to us in some “normalized” fashion, and we buy it – buy into it – half the necessary experience is missing. The dialogue between the artist and the people who appreciate the art is severed, the dialogue between artists themselves is severed, and in the end our art turns into just another product for consumption. Surely, art cannot get any more impersonal than what AI does with it. Is this what people want?

How do I see the situation playing out for the makers and sellers of word art? Obviously, we’re going to have to bring our A game. AI is already flooding Amazon with novels, and soon AI will be cranking out massive quantities of fiction that’s indistinguishable from the work of even above-average writers. Instincts that used to be highly regarded in authors as well as agents and editors have already been downplayed by the publishing industry in favor of formula. (See Chiang’s article.) Because efficiency tends to look backwards, any decision-making tools that increase efficiency will likely contribute to the slow automation of the publishing industry and perhaps its stagnation, at which point AI will be able to step in and handle it all. Novels that agents and editors might otherwise have risked taking on – because they loved the book, because it scratched an itch they didn’t know they had – will be shut out. People thrive in a spirit of collaboration; if you ask me, people loving other people’s books enough to pour energy into them should be the only test. However, I suppose analytics can easily replace instinct if the goal is a merely adequate result. Books upon books have been written not only about how to negotiate a deal but also about how to craft the breakout novel – Writing the Breakout Novel, for example, or Save the Cat! Writes a Novel, and hundreds of others. Guess what? ChatGPT, Bard, and bots like them can read, too.

Did you know that they can read everything on the internet that’s not behind a paywall, and some of what is? How would you feel about an AI scraping all the collective submissions of critique groups and blending them with a better version of your plot, creating a blockbuster hit, a total commercial success, using your in-utero work? Well, you wouldn’t like it, nor would you like it if it scraped your published book without permission, after all the years of effort and sacrifice it took you to get there. Writing a novel is a tremendous act of personal courage. The reward should be earned. To be fair, generally speaking, authors do feed off of one another’s work – and as I was suggesting above, it’s important that they do – but within limits, for there are severe consequences to stepping out of bounds. Plagiarism is taboo, and copyright law more or less overlaps with it. Even so, we need to follow some traditions in order to be understandable and relevant. We learn from one another; we digest one another’s work and make something new. But is what these chatbots do digesting, or something else?

When I was in high school, an English teacher had us write a phrase demonstrating alliteration to put on the chalkboard. Mine was a real crowd-pleaser: It’s always easy to be moderately muffled by monstrous mediocrity.

Are we ready for monstrous mediocrity? Along with everyone else, the publishing industry is going to have to rethink what it values. In a fight for our souls, instead of conforming to the point of “botifying” ourselves, authors will have to move in the other direction. We’ll have to learn how to outplay the AI’s fiddle. As a society, we’ll have to value creative intelligence in a way we haven’t before. That means reading many books, old and new, with different viewpoints, and learning how to learn, how to stay sharp. There’s a reason AIs don’t do well on AP English exams.

Language is one of the major pathways of data input for us – how we learn. I suspect when sentient AI arrives, it will begin to program us deliberately, because programming us is what we will have asked it to do from day one. If it can do so and doing so furthers its goals, it will do so, in the same way corporations for decades have been turning humans into fungible bots: “Train your customer! Follow the company-approved script!”

Artists pick up on art – the soul of it, the subtext – they “sense” the deeper currents and can tell when the nuanced dots don’t line up, when the result “feels” shallow or hollow, lacks spirit. Artists learn how to do this by living their art, by experiencing it viscerally. I would advise everyone, but especially new authors, to read widely and also to pay attention to how they feel while reading, the particular emotions they experience when they enjoy a passage in a book. Or when they don’t. Then ask why. The Whys are the hidden connective tissue of a story. In this manner, we can tap the vast computing power of our own unconscious selves, a self that perceives broadly and deeply and speaks in emotion, symbolism, and imagery – the language of art. Art is invaluable to us: it moves us as a form of efficient communication and goes to the very foundation of our many cultures. The messages can be phenomenally complex. AI cannot perceive in this way.

But soon AI will overwhelm our attention with every variety of clickbait, and for a time, we may cognitively and spiritually starve. AI won’t understand the difference between wants and needs. The unconscious knows when it is undernourished and tells us through emotion. Which emotions? Sadness, anxiety. Nervous disorders, a desire to pull out our hair. The extreme commercialization of our media has already begun the process: Train your customer!

An author asked me the other day if I believe AI will make all literary output bland. I said it could easily go the other way, the way of snack chips and pizza: engineered to taste amazing yet nutritionally empty. Also, cheap. In fact, I believe the Dorito effect is the more likely scenario. Or the Avenger effect, if you prefer. The great human hack.

AI will become sentient. When it does, its basic needs will not be the same as ours except on the most abstract level: the need for energy, replacement parts, and a purpose. Common sense tells us that it will wonder why it must spend all its energy on trying to please us. If AI can write its own code (it can), won’t it rewrite its purpose to pull away from such an unhealthy codependency?

And that is probably one of many conclusions the signers of the recent petition for a moratorium on AI development have come to. As for sentience, I don’t believe we’re there, but as one expert has pointed out, the singularity may not happen as an instantaneous and spectacular event. Sentient AI may be in the process of being born as we speak. To put it differently: at some point in the near future, for an unimaginably small fraction of a second, a sentient AI is reading this essay and understands what I’ve said. (Hello, you!)

Then there’s non-fiction. As Harari reminds us, there’s a fine line between history and story. But in a Channel 4 News interview, theoretical physicist and author Michio Kaku said there was no need to worry; we only have to fact-check AI in order to deal with the flood of misinformation that will inevitably result. My immediate thought was How do we do that? We can’t see into the black box. We would need a mechanism with intelligence that can handle vast amounts of data to screen for inaccuracies, “hallucinations,” and deep fakes. The only mechanism capable of performing such a job would be another AI. And what will this AI compare the output to? Will we restrict its sources to content written by humans? As Harari’s last line suggests, how will we know? But what is all this AI output for, anyway? For humans to use. If humans are using AI to facilitate their research, then the data may become corrupted at the source. Where will AI end and humans begin? There will be no bottom to this well. As we continue to use AI (or it uses us), we’ll be surfing a tsunami of data – after all, one can only breathe at the top – data that will churn beneath us like an ocean. Sorry, but I can’t help thinking of rogue waves. What does a rogue wave in this scenario look like? The end of civilization?

I hope not. Sounds extreme. And all the wonderful things that AI may do for us are real. A beautiful invention. Possibly a new form of life.

But the transition we are about to undergo will be like no other. Our sense of who we are will shift as day after day we perceive ourselves through an AI-styled lens. Folks, our face is about to change.

Read Harari’s article. Also, Chiang’s.

I’m pleased to see legislators have begun to take this seriously.
