Text-Savvy AI Is Here to Write Fiction

Six years ago this month, Portland, Oregon artist Darius Kazemi watched a flood of tweets from would-be novelists. November is National Novel Writing Month, a time when people hunker down to churn out 50,000 words in a span of weeks. To Kazemi, a computational artist whose preferred medium is the Twitter bot, the idea sounded mildly torturous. “I was thinking I would never do that,” he says. “But if a computer could do it for me, I’d give it a shot.”

Kazemi sent off a tweet to that effect, and a community of like-minded artists quickly leapt into action. They set up a repo on GitHub, where people could post their projects and swap ideas and tools, and a few dozen people set to work writing code that would write text. Kazemi didn’t ordinarily produce work on the scale of a novel; he liked the pith of 140 characters. So he started there. He wrote a program that grabbed tweets fitting a certain template: some (often subtweets) posing questions, and others, from elsewhere in the Twitterverse, offering plausible answers. It made for some interesting dialogue, but the weirdness didn’t satisfy. So, for good measure, he had the program grab entries from online dream diaries and intersperse them between the conversations, as if the characters were slipping into a fugue state. He called it Teens Wander Around a House. First “novel” accomplished.
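
For a sense of the structure, though not of Kazemi’s actual code, a sketch like the one below interleaves question-and-answer tweet pairs with dream-diary entries. It assumes the source material has already been collected into plain Python lists; the strings here are invented stand-ins.

```python
import random

# Hypothetical stand-ins for material the real program pulled from Twitter
# and from online dream diaries.
question_tweets = ["why does nobody ever text back", "is it weird to eat cereal at 2am"]
answer_tweets = ["honestly it depends on the moon", "no, that is completely normal"]
dream_entries = ["I was in a house with endless staircases, and every door opened onto another hallway."]

novel, word_count = [], 0
while word_count < 50000:
    # A beat of "dialogue": a question tweet paired with a plausible answer.
    line = '"{}?" "{}"'.format(random.choice(question_tweets).rstrip("?"),
                               random.choice(answer_tweets))
    novel.append(line)
    word_count += len(line.split())
    # Occasionally slip in a dream, as if the characters drift into a fugue state.
    if random.random() < 0.2:
        dream = random.choice(dream_entries)
        novel.append(dream)
        word_count += len(dream.split())

print("\n\n".join(novel))
```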

It’s been six years since that first NaNoGenMo—that’s “Generation” in place of “Writing.” Not much has changed in spirit, Kazemi says, though the event has expanded well beyond his circle of friends. The GitHub repo is filled with hundreds of projects. “Novel” is loosely defined. Some participants strike out for a classic narrative—a cohesive, human-readable tale—hard-coding formal structures into their programs. Most do not. Classic novels are algorithmically transformed into surreal pastiches; wiki articles and tweets are aggregated and arranged by sentiment, mashed up in odd combinations. Some attempt visual word art. At least one person will inevitably do a variation on “meow, meow, meow…” 50,000 times over.

“That counts,” Kazemi says. In fact, it’s an example on the GitHub welcome page.

But one thing that has changed is the tools. New machine learning models, trained on billions of words, have given computers the ability to generate text that sounds far more human-like than when Kazemi started out. The models are trained to follow statistical patterns in language, learning basic structures of grammar. They generate sentences and even paragraphs that are perfectly readable (grammatically, at least) even if they lack intentional meaning. Earlier this month, OpenAI released the full version of GPT-2, among the most advanced of such models, for public consumption. You can even fine-tune the system to produce a specific style—Georgic poetry, New Yorker articles, Russian misinformation—leading to all sorts of interesting distortions.
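
One convenient way to run the released model is through the Hugging Face transformers library; that library is an assumption here, since the article doesn’t say what tooling any participant used. A minimal sketch of sampling a continuation from a short prompt follows; fine-tuning for a particular style uses the same model with extra training steps.

```python
# A minimal sketch of sampling text from the released GPT-2 weights,
# assuming the Hugging Face `transformers` library is installed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The teens wandered around the house,"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) keeps the output from looping on itself.
output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```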

GPT-2 can’t write a novel; not even a semblance of one, if you’re thinking Austen or Franzen. It can barely get out a sentence before losing the thread. But it has still proven a popular choice among the 80 or so NaNoGenMo projects started so far this year. One guy generated a book of poetry on a six-hour flight from New York to Los Angeles. (The project also underlined the hefty carbon footprint involved in training such language models.) Janelle Shane, a programmer known for her creative experiments with cutting-edge AI, tweeted about the challenges she’s run into. Some GPT-2 sentences were so well-crafted that she wondered if they were plagiarized, plucked straight from the training dataset. Otherwise, the computer often journeyed into a realm of dull repetition or “uncomprehending surrealism.”

“No matter how much you’re struggling with your novel, at least you can take comfort in the fact that AI is struggling even more,” she writes.

“It’s a fun trick to make text that has this outward appearance of verisimilitude,” says Allison Parrish, who teaches computational creativity at New York University. But from an aesthetic perspective, GPT-2 didn’t seem to have much more to say than older machine learning techniques, she says—or even Markov chains, which have been used in text prediction since the 1940s, when Claude Shannon first declared language was information. Since then, artists have been using those tools to make the assertion, Parrish says, “that language is nothing more than statistics.”
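
A Markov chain generator of the sort Parrish mentions really is just counting: record which words follow which in a source text, then walk those counts. A minimal word-level sketch, where the corpus file name is a placeholder rather than a reference to any particular project:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=200):
    """Walk the chain, picking each next word in proportion to its frequency."""
    out = list(random.choice(list(chain.keys())))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = open("source_novel.txt").read()  # placeholder: any plain-text novel
print(generate(build_chain(corpus)))
```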

Many of Parrish’s students plan to work with GPT-2, as part of a NaNoGenMo final project for a course on computational narrative. There’s nothing wrong with that, she notes; advanced AI is yet another tool for creative code experiments, as work like Shane’s demonstrates. She just thinks it could be a challenge, artistically, given the temptation to feed a few lines into GPT-2 and let readers divine some deeper meaning in the patterns. Humans are, after all, charitable creatures of interpretation.

There are plenty of ways to elevate code-generated text. One method is to set some boundaries. For this year’s event, Nick Montfort, a digital media professor at MIT, came up with the idea of Nano-NaNoGenMo, a challenge to produce novel-length works using snippets of code no longer than 256 characters. It harkens back to the cyberpunk era, he says, imposing the kinds of constraints coders dealt with in the 1980s on their Commodore 64s—no calls to fancy machine learning code. Nostalgia aside, Montfort is a fan of code and datasets you can read and interpret. He prefers to avoid the black boxes of the new language models, which generate text rooted in the statistical vagaries of massive datasets. “I look forward to reading the code as well as the novels,” he says. “I do read computational novels thoroughly front to back.”
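
For a sense of what 256 characters buys, here is a hypothetical entry, not an actual submission, in the spirit of the inevitable “meow” variations mentioned earlier; the program itself comes in well under the cap.

```python
# A hypothetical sub-256-character entry: a 50,000-word novel of cat noises.
import random;print(" ".join(random.choice(["meow","purr","hiss"]) for _ in range(50000)))
```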

Quite literally, in some cases. Montfort has published and bound a few NaNoGenMo novels, which other presses eventually “translated” by rejiggering the underlying code to produce text in other languages. His first submission-turned-book, back in 2013, constructed a series of vignettes for each moment of the day, set in different cities and adjusted for time zone. In each, a character reads ordinary texts—the backs of cereal boxes, drug labels. He wrote it over a few hours using 165 lines of Python code. His next effort built off Samuel Beckett’s novel Watt, which is so impenetrable it almost reads as computerized. He thought that by generating his own version, finding the right features and patterns to augment, he might become a better reader of Beckett.

This year, Montfort’s nano submissions are simple. (One of them deletes first-person pronouns from Moby Dick.) That’s a benefit, he says, because it encourages NaNoGenMo to stay beginner-friendly, with projects simple in both concept and execution. “You’re not going to be seriously judged and shut down based on what you do,” he says. “People aren’t going to stop inviting you to poetry readings.”
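
Montfort’s own code isn’t reproduced here, but a pronoun-deleting program of that kind can be very short. A sketch of the general idea in Python, with the Moby-Dick file name as a placeholder:

```python
import re

# A sketch, not Montfort's actual code: strip first-person pronouns from a
# plain-text copy of Moby-Dick (the surrounding whitespace is left as-is).
pronouns = r"\b(I|me|my|mine|myself|we|us|our|ours|ourselves)\b"
text = open("moby_dick.txt").read()  # placeholder path to the novel's text
print(re.sub(pronouns, "", text, flags=re.IGNORECASE))
```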

Take heart in that sentiment, would-be novel generators. Yes, November is half over. And yes, 50,000 words is a lot. But don’t worry, you’ve got a computer to help things along. The wonderful—and horrible—thing about computers is that they can spit out a lot of things, fast. Kazemi is saving his entry for the last minute, too. He prefers a hands-off approach, with no post-production tweaks beyond some formatting, and he likes to try out new tools. He’s looking forward to seeing what he can make with GPT-2.

Parrish is still in planning mode too. She’s considering a rewrite of Alice in Wonderland, in which the words are replaced by statistical representations—graphs of some kind. What will it look like? “I don’t know yet,” she says. The fun part is the discovery.

