Readers want to know you're there
Why AI-generated novels will always feel like something's missing
Everywhere I went, I kept running into Ada Lovelace. First, in the Wayfarers series by Becky Chambers, which is excellent, by the way, and you should totally check it out. There’s a character named Lovelace. “That’s a weirdly specific name,” I thought, yet didn’t bother to look it up. Last weekend I went to the Computer History Museum in Mountain View, where I learned who she actually was. Fun fact: her father was Lord Byron, the famous poet and champion of the Luddite movement. Then Lovelace reappeared this week while I was attending a lecture by Hannes Bajohr about Novels and LLMs. He paraphrased a line from her famous Note G, part of the notes she appended to her translation of a paper describing the Analytical Engine. Why did this keep happening?
First, let’s talk about Bajohr and what he discovered when he tried to write a novel with AI¹. “Wrote” is almost a misnomer, because a large language model (LLM) generated it. What he found was that the output was a convincing imitation lacking something crucial: superficially it read like a novel, but in reality it was incoherent, with no underlying causal logic.
Going back to Lovelace: her notes describe a theoretical “Analytical Engine” — a massive, steam-powered machine that would fit the description of Alan Turing’s general-purpose computing machine nearly a century later. The Engine borrowed its punch cards from the Jacquard loom, which at the time used them to automate elaborate woven designs. In Note G, Lovelace made the claim that machines don’t “originate anything”; rather, they do whatever they’re instructed to do.
I encountered the same idea at university under a different name: the Chinese Room. The question it poses is this: if an algorithm is simply following instructions, can we assign it sentience just because its output matches the complexity of what a human would produce? Modern LLMs are basically the Chinese Room. Their generation relies on pattern matching, using correlations formed during training and stored as weights in arrays of numbers. What they can’t do is care. Set the temperature to 0 and an LLM becomes a fully deterministic algorithm: given the same input, you’ll always get the same output, each token passing through matrix multiplications predetermined during training. There is no intention.
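You can see this determinism in a toy sketch. Everything below is made up for illustration — a five-word vocabulary and random numbers standing in for trained weights, not a real model — but the mechanics are the same: at temperature 0, generation is a lookup followed by an argmax, so the same prompt always yields the same text.

```python
import numpy as np

VOCAB = ["the", "cat", "sat", "mat", "."]

# Fixed "weights" stand in for what training produced: correlations
# between the current token and each possible next token.
rng = np.random.default_rng(seed=42)  # frozen, like trained weights
W = rng.normal(size=(len(VOCAB), len(VOCAB)))

def next_token(token: str) -> str:
    logits = W[VOCAB.index(token)]        # one row lookup per step
    return VOCAB[int(np.argmax(logits))]  # temperature 0: always take the max

def generate(prompt: str, steps: int = 4) -> list[str]:
    out = [prompt]
    for _ in range(steps):
        out.append(next_token(out[-1]))
    return out

# Same input, same output, every time. No room for intention.
assert generate("the") == generate("the")
```

Raising the temperature only adds weighted dice rolls over those same frozen correlations; it introduces variation, not intent.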
Intentionality is, after all, what readers really want when they pick up a novel. They want to feel a connection to another mind, to see the world a certain way, to be changed by what they’ve read. An AI will never do this. Context windows may get larger, and reinforcement learning from human feedback may make the results more convincing, but the fundamental fact is unchanged: LLMs are incapable of intentionality. More practically, as Bajohr’s experiment showed, they’re also terrible at causality, and causality is essential to narrative. Narrative momentum arises from one action following inevitably from another.
So, LLMs will never replace writers. As the torrent of AI-generated slop gains momentum, people’s appetite for authenticity only increases. We see it in the online backlash against the use of AI in popular video games like Baldur’s Gate 3 and Clair Obscur: Expedition 33. In an increasingly disconnected world, we crave connection.
That’s why a daily writing practice is so valuable. It’s an exercise in living deliberately, an opportunity to step back from the world of infinite digital distraction and listen to your inner voice. The most popular newsletters on Substack are written by authors with distinct voices and strong opinions, authors who make you feel like you’re on a journey together. Compare that to LLM-generated text, which is optimized for blandness: by design, it aims for the most likely output, the statistical average. Basically, it’s mid by design.
A book is a carefully crafted experience striving toward a specific effect in the reader’s mind. Maybe it’s an idea; maybe it’s an emotion. George Saunders describes writing as driving a motorcycle with the reader in the sidecar: the author’s goal is to keep the reader next to you at all times, to make them feel what you feel as you give them a thrilling ride. With AI, it’s more like riding inside a Waymo. Sure, it’s a smooth and comfortable ride, and you end up where you told it to take you. But it isn’t a memorable one. Compare that to scraping pegs on Skyline Blvd and try to tell me they’re the same thing.
Ada Lovelace is the coolest computer scientist I never learned about while studying computer science at university. Her quote is a reminder that there is no substitute for chatting with another mind. In his book On Writing, Stephen King describes writing as telepathy across time and space. It’s nothing short of magic.
“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.”
- Ada Lovelace
¹ Hannes Bajohr took GPT-J, an open-source LLM roughly equivalent to GPT-3, and fine-tuned it (additional training rounds) on four contemporary German novels. He generated the novel with minimal prompting, feeding the LLM a single word or sentence at a time; then, with some light editing and reordering of chapters, he had a novel, Berlin, Miami, published in German by Rohstoff Verlag, with an English translation from MIT Press coming in 2027.


