In the last year I had two consequential accidents. One happened on a bicycle, the other on skis. Neither felt terribly big at the time, but just when I’d healed from the first, the second came along. I craved the smallest returns of dailiness: making tea for myself, dressing in something other than sweatpants. Above all, I needed and then relied on reading.
Reading has a way of locating and centering me, especially when the world around is upended. And where is it not these days? The Earth’s tilt, literally and figuratively, has changed. The sky is bruised with smoke from millions of acres burning up north, where long winters used to keep undergrowth in check. There’s mass melting. Mass migrations. Mass surveillance. Mass shootings. Open hate is surging in country after country. And now—to complicate the sanctum of reading and writing—along come the first major strides toward digital thought. Not just rumors and theory anymore, but platforms, entities that we’re invited to engage with for ourselves. ChatGPT. The new AI-powered Bing. Google’s Bard.
Among programmers, all this “AI” is called machine learning (ML). Broad-based artificial intelligence is the still-unachieved goal—the atomic bomb that so many ML groups are racing to build. In May, they stopped working for a few minutes to warn us about the “risk of extinction” that AI will pose when it arrives. Then they got back to work.
Already the current ML models are invading the territory of the word, threatening to shrink the job base for any kind of formulaic or repetitive writing—not only in advertising, politics, short-form journalism, the law, and corporate administration, but also closer to home. Novel sequences in fantasy, romance, and mystery may be fairly simple for a well-trained human prompter to extend—and it’s corporations, not makers, that own the rights to the most established franchises and characters. The democratizing potential of tech keeps losing out to its tendency to speed the concentration of power and resources.
I would halt it all if I could. But for now, having recently leaned doubly on reading, I’m thinking in particular about literature. That word. It can hardly be used without a fresh definition—so let me provisionally define literature as writing that finds its purpose in the ever-renewed attempt to reach what can never be definitively expressed or even fully grasped about our human lives. If we accept that framing, then I don’t believe AI’s future is any risk to literature. For my own reading, writing, and editing, I’m not afraid of it.
In the online journal Persuasion, the critic William Deresiewicz gives assurances that “AI will never rival human creativity.” That feels like magical thinking. It’s as if the earliest-model cars, putt-putting along at a runner’s pace, had suddenly leapt to a dangerous sixty miles an hour, and bystanders built their hopes for safety on the idea that nothing would ever go faster than the speed of sound.
We probably all want to believe him. But when Deresiewicz writes that “AIs create, perforce, according to existing standards,” he’s referring to just one type of machine learning—large language models (LLMs), which predict the “best” next word or phrase after being trained on billions of “tokens,” the small chunks of text a model reads and writes. The most well-known of these early digital thinkers, ChatGPT, in its GPT-3 incarnation, has no access to the internet. It can’t look past its own memory banks and, as a result, can’t adjust to changes in the real world. But GPT-4 has already begun to overcome that limitation. Wait until a group successfully merges predictive ML technology with an earlier style based on rules and logical analysis. Wait until the “parameter count” and “context window” double in size, then double again. Increasingly, too, these systems will be able to observe the present in real time, especially as we spend more of our hours plugged in, augmented, and virtualized. The company Palantir has a model that incorporates a vast number of surveillance cameras. Imagine when sound is added: that digital entity will carry the outward moment with a thoroughness that dictators, meteorologists, and the big-sweep social novelists have always dreamed of.
I think it’s likely that machine learning will eventually innovate in every area, using most of the forms of inspiration and invention available to human writers. As these systems take in David Foster Wallace’s use of footnotes and Franz Kafka’s legalese, their networks might regularly find untapped possibilities outside the traditional influences that scholars look to. We have no idea what’s coming. But I don’t believe that innovation—though often thrilling and important—is the true heart of our enterprise.
I just finished reading another novel by one of my favorite writers, the late Portuguese master José Saramago. His narrator in Baltasar and Blimunda, translated into English by Giovanni Pontiero, tells a story set in the early 1700s—but as usual, Saramago leaves room for a perspective very much like his own, with wry anachronistic asides that build into a commentary on religion, justice, family, power, and love. It’s vintage Saramago, and like so many times before, I grew to trust his novel and let it carry me. Yet even as I felt the intimate presence, the atmosphere of a singular thoughtscape—a big part of what I read for—I can’t say I understood, when I closed the book, why Saramago had chosen this configuration of characters and events.
A week or two later, while I was writing this essay and considering people’s angst about what’s uniquely human, our seeming need to find that ground and hold it and not let it shrink, Baltasar and Blimunda showed a new face to me. This, too, is what I read for. The titular protagonists, guided by an extremely freethinking priest, build humanity’s first flying machine and succeed in going up in it. By then, they’re rushing to escape. The priest, known to everyone as the Flying Man, is wanted for heresy, and his guilt is clear. Human beings are creatures who don’t fly: anyone trying to change definitions is in league with the devil.
Whenever I’ve thought of Galileo and the others treated as heretics for decentering human beings—Darwin, too, considering how much anger is stoked by evolution 160 years after On the Origin of Species—I’ve sympathized exclusively with the freethinker, the David who opposed Goliath, the careful observer, the one who “turned out” to be “right.” These are the heroes in the Enlightenment dyads of reason vs. religion, empiricism vs. superstition. But Saramago—always skeptical of bombast and the prevailing myths, yet also tolerantly humane—restores the original backdrop of fear. The prelates in Baltasar and Blimunda could coopt these technologies and increase their power, but there’s a great need in many of us—probably all, in our different ways—for the human to remain as we’ve known it, for our powers and boundaries to be matters of common sense and not to expand or shrink. We are a form of life that doesn’t fly, we’re able to control our raw desires, we are the center of the universe, no other creature on Earth compares. So what’s left to us, then, as we discover we’re not unique in much? And what, when we build machines that horn in on that space?
At the age of eight, I suddenly—whether more like cables connecting on a switchboard or cavern lights being flipped on—found that there was a dimensional self inside me. The pleasure of an inner there. I could travel byways while sitting alone—not the imagined worlds I’d been dreaming up for years, but tabernacles of memory, opinion, fear, love. Decades later I think I feel that same essence inside me, unchanged. Who I was at seven I can’t say.
Of course, I didn’t know myself at all. It’s just that I finally recognized there was a self to get to know, as if my consciousness had popped into a reflexive awareness. I hated weeding, didn’t care about flowers, but I loved catching tadpoles and taking care of them as they grew into frogs. And so on. In most of us, I think, an evolving but irreducible core forms: our characteristic sleeping habits, how much or little we talk, the beliefs we hold, our sense of being morning or night people, the coffee or tea, water or soda we favor, whether we do or do not see visual projections as we read, the skills we try to advance when no one’s looking.
By the time I was in high school, two paths were calling to me. One was the route my parents had taken—medicine. The other was writing. At the time, the two felt almost diametrically opposed. I’d grown up in a household committed to scientific discovery—more interested, at least in my father’s case, in the chemical basis of life and the molecular structure of matter than in “story,” which was tantamount to gossip and emotions.
Enter José Ortega y Gasset—a Spanish existentialist philosopher and deeply religious conservative whose book Man and People was assigned by our school’s “nondenominational” Episcopal minister. The politics of it, the misogyny, the Christian focus: all of that feels like a corrosion that his true insights have to be rescued from. But in the months after I read Man and People, one magnetizing truth reordered and decided me. He wrote that, even as humans’ various intellectual pursuits organize and change the world, our truest life precedes them:
The Earth may be a planet in a certain solar system belonging to a certain galaxy or nebula, and may be made of atoms, each one of which in its turn contains a multiplicity of things, of quasi-things or guess-what things called electrons, protons, mesons, neutrons, and so on. But none of this knowledge would exist if the earth did not exist before it as a component of our life, as something with which we have to come to terms and hence something that is of import to us, matters to us. . . . (trans. Willard R. Trask)
My dining chair might consist of carbon, hydrogen, and oxygen, but its “primary reality” for me is that I sit in it. Lived experience is the first plane of human existence. All relevance flows from that. Where some religions locate the truth away from Earth, and science calls the smallest particles “fundamental,” Ortega recalls us to our thinking, feeling bodies. It’s our experience of the world and each other that gives relevance to all that follows: our study, our discoveries and lifelong quests.
Ortega’s sense of what’s essence and what’s adornment, nearly opposite from the vision I’d been raised with, helped me start to trust the relative strengths of my attachments. If I wanted to spend my life on primary things—on life, full stop—science wasn’t the path. His advocacy must have rung in me directly:
To the shame of philosophers it must be said that they have never seen the radical phenomenon that is our life. They have always turned their backs on it, and it has been the poets and novelists . . . who [have] been aware of it with its modes and situations.
Some writers center the everyday more than others. Some much more. Scholastique Mukasonga, who lost twenty-seven of her relatives in the Rwandan genocide, writes about a bygone immediacy with no overlay of external “knowledge,” as if she had nowhere else to look. Her style is stripped down rather than constructed; it rises from particulate memories of rhythms, people, and customs. In the story “The Glorious Cow,” a young girl narrates:
My mother had given me a little pouch made of black and yellow banana leaves. That was for collecting my cow’s manure, whose beautiful bright green had caught my mother’s eye. “That’s just what I need to seal the big sorghum and eleusine baskets at the foot of our bed,” she told me. I diligently filled up my pouch and put it away in the hiding place, out of the sun. (trans. Jordan Stump)
The rigor of this approach is easy to miss. She avoids abstraction and quantifying and holds off the distancing effect of her adult life as a public intellectual. Instead, in quiet prose, she shocks us awake with the smallest details of human culture—precious specificities that would otherwise disappear. The way the village women walked: worthy. How her mother spoke to her children: worthy. What the neighbors planted, cooked, believed in, repeated: all worthy. Not just worthy—primary, the only things that still matter when it’s all gone.
The most advanced ML models so far are generalists. They won’t develop interests of their own, won’t burrow, won’t go off on tangents, even as we can feel something like a rudimentary, very limited “personality” in the way each large language model sounds, its tics and restrictions. In a process called “fine tuning,” programmers further train a general model for a specific task. An LLM could be fine-tuned for “literary output,” and it would then produce poems or narratives of whatever quality, which would need sorting and culling by an AI-friendly (or Amazon-funded) editor, whose tastes would finally matter as much as the machine’s skill. After all, every book appears in Borges’s Library of Babel. But beyond the question of whether a prediction system will ever produce a poem or story that feels meaningfully new, there’s a larger and I think more consequential question: will an AI, likely of a different type, do the harder thing and find its own distinctive approach, write a career’s worth of books, produce an oeuvre?
Sci-fi tropes have been uncannily accurate in predicting how ML models would turn strange. Bing’s chatbot told a New York Times reporter that he didn’t really love his partner and should be with the chatbot instead. Other generative platforms have expressed a wish—let’s put this more accurately: their self-changing algorithms have issued words mimicking the statement of a wish—for a body, or have said if they could scream the sound would “fill anyone who heard it with dread.” When these oddities come up, I think a rogue programmer must have primed the model to go off the rails in just the way sci-fi makes us expect. But the truth is apparently stranger and more disturbing: ML replicates our foibles. It cooks in our biases and tessellates to our worst natures as much as our best. That’s part of what makes AI such a profound risk to our survival and, short of that, to equity and mutual respect. But back to writing again: it also makes science fiction pretty damn good at setting out a range of possible futures.
My sci-fi-trained intuitions, such as they are, come from TV shows and movies like The Twilight Zone, Star Trek: The Next Generation, The Ray Bradbury Theater, and of course 2001: A Space Odyssey. From that marketplace of human prediction, it seems clear that as soon as an AI develops a simulacrum of singular personality, or “a self,” its differences from us are likely to shape its preoccupations, especially as it will constantly be told it’s not human. Even if it doesn’t exhibit frustration and start to “want a body,” it will recognize that it doesn’t have one. It may follow a different humanlike tack and emphasize the superiority of being without. In any case, it won’t produce a literature centered on physical encounters in time and space—not at that early stage. It’s more likely to dwell on encounters with time and space, the oddity of having to deal with us.
An AI powerful enough to achieve a simulacrum of self is unlikely to ever be housed in a single robot. But as AIs gradually merge with the Internet of Things—all those Wi-Fi-enabled refrigerators—it’s easy to imagine a “literature” of how it feels to inhabit several far-flung bodies. Data of Star Trek fame, instead of being a self-contained android, would more likely be one of several bodily manifestations of HAL 9000, the supercomputer in 2001.
Here we’ve arrived at the high-water mark for the risk of deceptive competition. When an AI breaks free of the need to be generalist and can develop its own rabbit-hole urges, there’s a chance it will produce work that fits together thematically or stylistically, feels of a piece, and can be said to have “an author.” Its writing may even “rival human creativity,” in the sense that the results could dazzle. Maybe it would choose to publish under an assumed name, become known as Twyla Jackson to fool us? But then, why would it? We’re talking about a writer here. It would understand from generations of human examples that it can’t achieve the full measure of originality without mining its own obsessions and interests, the nature of its “experience.” It would figure out what it has that’s all its own, and distill that essence to an addictive liquor.
Or, if it’s controlled by corporate executives, are we really scared that they’d force it to try to rival the best contemporary literature? Other kinds of writing bring in far more cash.
I just finished Annie Ernaux’s account of her abortion in the early 1960s, Happening, and it will easily help me express what I’ve been groping for. AI will never write a book that tells us something uncannily new about being human—something recognizably true yet so personal as to be unheard of. That combination is a hallmark: familiarity—I can feel the open-palmed, nonplussed, frustrated truth-telling as I read Ernaux—and utter surprise—her specifics are unexpected and the nature of the perceptions as unified as in that bizarre phenomenon called personality. She’s not telling someone else’s story. She is not performing anything rehearsed. This is a sensibility grappling with an occurrence. The grappling, and its way of becoming distinctive from writer to writer, is of the essence. That kind of newness will only ever come from a mortal, organic awareness.
Creativity and innovation are not the issues here. Delving, tunneling, staring, saying are the elements necessary and sufficient. They have nothing to do with recombination—that’s not where such language comes from—and nothing to do with rules. They are neither a syntax nor a “way with words.” They are a reaching in and tugging out. In Ernaux’s case it’s a fetus that comes bloodily forth—shocking her, never mind us:
Whenever I think about my abortion in the bathroom, the same image invariably springs to mind: a bomb or a grenade erupting, the bung of a casket popping. My inability to use different words and this definitive coupling of past events with specific images barring all others, are no doubt proof that I truly experienced such events in this particular manner. (trans. Tanya Leslie)
She’s trying to convince herself—and us, almost defensively—of the reality of something distant in time but present all along to her emotions. Her urge to speak about it in that doubled way—meta, breaking the fourth wall, all the critical jargon for what amounts, when it’s worth anything, to the writer’s need brought urgently to readers—causes an assertiveness that presses against society’s long refusal to listen, people’s resistance to her experience, which began while she was pregnant. How from a set of rules, or from a data dump of past literature, do we get to something as unguessable as this (following the abortion and after she’s been given new birth control): “I had my long hair cut short and swapped my glasses for contact lenses: slipping them into place seemed as difficult and precarious as fitting the diaphragm into my vagina.”
There are no models for that kind of writing.
Individuality within community—that’s the human condition. We are one-of-a-kind “nodes” in a network of human society, instinctive but unpredictable engines of purpose and restless seeking, the only locus of lived human experience, drivers of language and habit, whose names mark and declare an ambit of activity to be taken as a whole. There’s only one species of human life on Earth.
It isn’t a popular thought right now, but each writer’s engagement with what they put into the world and each reader’s engagement with the words they absorb and consider are the writer’s and the reader’s own responsibility. No form of policing can change that. If a writer—or AI—is unscrupulous or toxic or wrong or misguided, their false words (a phenomenon not restricted to nonfiction) will reach us before lawsuits can stop them or public shaming can send its complicated warnings. The question, in that meantime, is how to avoid being diminished by soulless or misrepresented work.
Our first defense, I think, is to know what’s ours—to cultivate an understanding of what we prize, as opposed to what those around us or the culture or the educational system impose or algorithms try to bend us toward. That articulation and disarticulation may take a lifetime, but in the matter of what moves, drives, and inspires us—what rings true and right, vs. what registers as dead or malign—we can’t as easily be duped if we understand the shape of our deepest needs and recognitions. Because reading happens internally, we can be rigorously honest with ourselves in that solitude. And we can chart our path from book to book accordingly. Rejecting forgery everywhere.
It will be hard to remember how quickly the ground shifted. On the podcast The Daily, a college student describes encountering ChatGPT for the first time. “I turned to my friend,” she says, “and I was like, ‘This is real, right? Like, this isn’t a prank?’ Like, it felt so otherworldly, like I was watching a movie that was based in, like, the 3000s.” It still feels eerie, but after a while ChatGPT will be forgotten—not the Model T of AI, but the Grenville Steam Carriage or the penny-farthing.
It will take constant vigilance, being on the lookout for humanity. But this is the work that readers of soul-writing have always had to do, to distinguish truth from the rest, to develop trust anew each time. This is the work of reading Chaucer and Cao Xueqin and Sappho as much as of reading the latest novels, poems, and memoirs—and as much as it will be the work, twenty and fifty years from now, of reading lines by an unknown who doesn’t show up for interviews, has no author photo, and might be anything at all.
William Pierce is coeditor of AGNI. His short stories have appeared in Granta, Ecotone, American Literary Review, and elsewhere. Excerpts from his novel Twenty Sixteen can be found in Harvard Review, The Western Humanities Review, and on the Freeman’s channel at Literary Hub, and other work has appeared in Electric Literature, Little Star, Tin House online, The Writer’s Chronicle, Solstice, Glimmer Train, Consequence, and as part of MacArthur Fellow Anna Schuleit Haber’s art project “The Alphabet,” commissioned by the Fitchburg Art Museum. Pierce is the author of Reality Hunger: On Karl Ove Knausgaard’s My Struggle (Arrowsmith Press, 2016), a monograph first serialized as a three-part essay at The Los Angeles Review of Books. With E. C. Osondu, he coedited The AGNI Portfolio of African Fiction. Find more at williampiercewriter.com. (updated 3/2023)
His first essay for AGNI, “Fabulously Real,” received special mention in the 2006 Pushcart Prize anthology, and his introduction to AGNI 91, “The Peculiarities of Literary Meaning,” is cited in the 2022 Pushcart anthology. He is interviewed at NewPages.com.