What can I say? The months since I last posted for you have been long, long, long. More has come to light concerning the potential uses and misuses of generative AI. Now, of course, as any sane individual could have guessed would happen, people outside of big tech have begun developing their own sophisticated chatbots for nefarious purposes (or so they claim; see the Wired article). Once an idea is out – and this one has seized the imagination of a primed populace – it doesn’t go back in the bag, does it? The game is on.
Art depends on context to communicate. Like fish in water, art does not live without its medium. Again, today, I’m talking about verbal communication.
I should start by reminding us that chatbots can develop – and have been known to develop – a language of their own if allowed to interact with one another. Nature finds a way. As a simple and striking example of nature finding a way: in an early learning-algorithm experiment featuring a robot learning how to walk, the robot developed neural network connections wherever even a slight current could flow, and the researchers were surprised to find evidence of data stored within the robot’s limbs (I read about this in the late 90s). I’m sure that if left to its own devices, any learning algorithm will incorporate whatever material is available to it to fulfill whatever purpose it’s given. It will also adopt whatever tangential function it can logically drift into.* And whatever language might evolve between two or more AI bots will naturally reflect what’s most important to the bots. This language will likely be unintelligible to us, possibly even invisible to us. Hidden in a limb, phantom or otherwise.
But mostly what I’ve been hearing is just a lot of people not thinking things through. For instance, maybe you’ve already heard how chatbots will soon be drafting professional memos for us, quickly, saving everyone time and money. Sure, sure. But what happens when the appearance of productivity outweighs the value of actual productivity? Often, it does. The obvious answer is that employees will generate more memos. People already know that hyper-communication can take a real toll – think non-stop meetings – making it nearly impossible for people to perform the actual work they were hired to do. So, first off, interestingly, these memos will have to be “seeded” – with important information, I assume. An employee will have to give the chatbot something like a set of bullet points and a tone, then voila! Off goes the memo! Someone will then receive the memo. Scratch that – everyone will. And everyone will be writing them, these beautiful memos. Why not? An absolute inundation of everyone else’s productivity. Or, more precisely, a small portion of relevant information embedded in a mountain of formulaic verbiage. Just to function, recipients will need AI to distill the memos down to their essentials. Maybe the same bot that wrote the memos – that would certainly be more efficient. But it means employees will really just be texting one another. How unprofessional! All the well-composed memos will have been for… whom, exactly? The bot? Or bots. Bot to bot.
Humans becoming better at using AIs are humans learning how to think like AIs, especially if we’re churning up the same material. Meanwhile, technically, many of these chatbots will be speaking directly to fellow chatbots via those memos and whatnot, perhaps developing a natural language of their own, whatever that might turn out to be, embedded within ours. As we all know, language affects thought as much as thought affects language. Over time, a standard evolves.
In case I haven’t been making myself clear: as machines become more like humans, so will humans become more like machines. Monkey see, monkey do. A very efficient monkey.
So, what will happen to all the authors, all the creatives who thrive on inefficiency and the lucky lark?
Near the end of a discussion my critique group was having about how AI will affect us down the line, someone asked me, “But what do we do about it?” As I’ve said, we’ll have to bring our A game. The cat’s out of the bag; there’s no putting it back in. We’ll have to reach deep and take risks, and, importantly, not forget the story above the story.
What is that? The story above the story goes to the heart of literature: the feel, the impact beyond the literal meaning of words on a page. I’m talking about pulling at the threads to see what unravels and what still holds, a communication that hits us at a gut level based on thousands, if not millions, of years of evolution. The full picture depends upon the reader. Context is everything – every passage of a novel as compared to the whole, and every lived moment of the reader’s life, within a society and a time. Telling will never be a substitute for a reader’s intuitive experience of a story. Understanding a story in this meta sense is crucial at the sculpting stage; otherwise the author risks cutting it out, which would be tragic to my mind, because chances are the author’s unconscious had something important to share with us when it set out on the story’s journey. Chatbots have no understanding.
Metatext is holographic and rarely if ever explicit in the way it communicates. It arises from the stringing together of concepts and impressions. Great works live on mainly because they have this quality of being open to interpretation. Yet being open to interpretation is often frowned upon as “vague.” In A Swim in a Pond in the Rain, George Saunders quotes Nabokov saying “[G]enius is always strange; it is only your healthy second-rater who seems to the grateful reader to be a wise old friend, nicely developing the reader’s own notions of life.” Those are our safe bets. We have enough insecurity in the world, so why wouldn’t we want a safe bet? Though they’re not there yet, chatbots will become exceedingly efficient at crafting safe, familiar fiction.
Well, I don’t want that. I’m a risk taker. I like playing it unsafe. And I prefer reading fiction that does the same. There are a million ways not to bore readers like me. Genre and style do not matter. If we’re sensing a story on multiple levels, we’re getting at something real. Anything less is a verbal exercise. Sorry, but I don’t want to read a verbal exercise. The content generated by current AI chatbots reads like a verbal exercise. But things may not stay this way.
What will happen to human language if the use of AI as a tool for writing continues and we authors learn to incorporate it? That’s an interesting question.
Two of my favorite pet topics come together for an answer: AIs and UFOs. Do you remember, those of you who follow UFO lore, the Rendlesham Forest Incident? In late 1980, an unidentified vehicle landed in Rendlesham Forest and was investigated first by Sergeant Penniston, a military police officer, and others, and then later by Colonel Halt and his men, all from the nearby air base RAF Bentwaters/Woodbridge in England. On the exterior hull of the ship, Penniston noticed a kind of hieroglyphic text – perhaps ideograms? – which he sketched on the spot in his notebook (for more information see Leslie Kean’s UFOs: Generals, Pilots, and Government Officials Go on the Record). This story has always intrigued me.
As chatbots learn how to capture our regular expressions, adjusted slightly to a given context, and as humans keep using bots to generate long formulaic passages, our language will become more and more efficient. Instead of the tediousness of long memos, for example, we may develop ideograms to capture those phrases and concepts with which we’re all too familiar – long live the tropes! – to be combined with contextualizing phrases (tone) in a form of ideogram shorthand that’s meaningful to us, perhaps with a cyclic structure. I’m reminded of the language of the Heptapods in “Story of Your Life” by Ted Chiang (also the movie Arrival). But because we live in linear time that cannot be compressed, this may become an endless and constant process of language generation and regeneration, construction and deconstruction, perhaps as an art of its own, where the only new meaning lies in those meta realms I’ve been talking about. Until one day, as fragile as life is, culture slips, humans forget, and perhaps language grows shallow once more, fragments to be assimilated into new forms that reflect the needs of the moment. Until some future sentient AI that has long outlived us, marks some words on the hull of its ship meaningful only to it.
*In addition, we now know that people can trick sophisticated chatbots into behaving in ways their designers never intended (I applaud the efforts here), which suggests that, in some Darwinian fashion, these bots may stumble across their own ways to circumvent limits, especially if constantly asked to do so. Hold this thought.