A Chatbot Is Secretly Doing My Job

On creating serviceable copy using ChatGPT

A painting of a woman writing is superimposed with a glitchy computer effect.
Arsh Raziuddin / The Atlantic. Source: Getty

I have a part-time job that is quite good, except for one task I must do—not even very often, just every other week—that I actively loathe. The task isn’t difficult, and it doesn’t take more than 30 minutes: I scan a long list of short paragraphs about different people and papers from my organization that have been quoted or cited in various publications and broadcasts, pick three or four of these items, and turn them into a new, stand-alone paragraph, which I am told is distributed to a small handful of people (mostly board members) to highlight the most “important” press coverage from that week.

Four weeks ago, I began using AI to write this paragraph. The first week, it took about 40 minutes, but now I’ve got it down to about five. Only one colleague knows I’ve been doing this; we used to switch off writing this blurb, but since it’s become so quick and easy and, frankly, interesting, I’ve taken over doing it every week.

The process itself takes place within OpenAI’s “Playground” feature, which offers much the same functionality as the company’s ChatGPT product. The Playground presents as a blank page, not a chat, and is therefore better at shaping existing words into something new. At the top I write my prompt, which always begins with something like “Write a newspaper-style paragraph out of the following.” Then, below the prompt, I paste the three or four paragraphs I selected from the list and—this is crucial, I have learned—edit them a touch, to ensure that the machine “reads” them properly. Sometimes that means placing a proper noun closer to a quote, or doing away with an existing headline. Perhaps you’re thinking, This sounds like work too, and it is—but it’s quite a lot of fun to refine my process and see what the machine spits out at the other end. I like to think that I’ve turned myself from the meat grinder into the meat grinder’s minder—or manager.
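For the curious, the assembly step described above (instruction on top, lightly edited press items pasted underneath) is simple enough to sketch in a few lines of Python. The function name and the blank-line separators are my own illustration of the layout, not the author's actual setup:

```python
def build_playground_prompt(items, instruction="Write a newspaper-style paragraph out of the following."):
    """Assemble a Playground-style prompt: the instruction sits at the
    top, and the lightly edited press items are pasted below it,
    separated by blank lines."""
    return instruction + "\n\n" + "\n\n".join(items)


# Hypothetical press items, already lightly edited so the model
# "reads" them properly (proper nouns moved near quotes, headlines cut).
items = [
    "Dr. Example of the institute told the Tribune the findings were 'striking.'",
    "A 2022 working paper from the institute was cited in a broadcast segment.",
]
prompt = build_playground_prompt(items)
```

The resulting string is what gets pasted into the Playground's blank page; the editing of the items themselves stays a human task.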

I keep waiting to be found out, and I keep thinking that somehow the copy will reveal itself for what it is. But I haven’t, and it hasn’t, and at this point I don’t think I or it ever will (at least, not until this essay is published). Which has led me to a more interesting question: Does it matter that I, a professional writer and editor, now secretly have a robot doing part of my job?

I’ve surprised myself by deciding that, no, I don’t think it matters at all. This in turn has helped clarify precisely what it was about the writing of this paragraph that I hated so much in the first place. I realized that what I was doing wasn’t writing at all, really—it was just generating copy.

Copy is everywhere. There’s a very good chance that even you, dear reader, are encountering copy as you read this: in the margins, between the paragraph breaks, beyond this screen, or in another window, always hovering, in ads or emails—the wordy white noise of our existence.

ChatGPT and the Playground are quite good at putting copy together. The results certainly aren’t great, but they’re absolutely good enough, which is exactly as good as most copy needs to be: intelligible but not smart—simply serviceable. These tools require an editor to liven the text up or humanize it a touch. I often find myself adding an em dash here or there—haven’t you noticed? I love em dashes—or switching a sentence around, adjusting tenses, creating action. At one point, early on, I complained to a data-scientist friend who has worked with machine-learning systems that the robot didn’t seem to understand my command to “avoid the passive voice”; he suggested the prompt “no past tense verbs,” which helped but wasn’t quite right either. I sent him more of my prompts. He said they were too suggestive and that I needed to be firmer, more precise, almost mean. “You can’t hurt the robot’s feelings,” he said, “because it doesn’t have any.”

But that’s just the thing, isn’t it? Writing is feeling. And thinking. And although writing certainly has rules, plenty of good writing breaks nearly all of them. When ChatGPT was first released, and everyone, particularly in academia, seemed to be freaking out, I thought back to my own experience as a writer who grew up with another computer-assisted writing tool: spell-check. I am a terrible—really, truly abysmal—speller. I’ve often thought that in a different, pre-spell-check era, my inability to confidently construct words might have kept me from a vocation that I love.

I think now of all the kids coming up who are learning to write alongside ChatGPT, just as I learned to write with spell-check. ChatGPT isn’t writing for them; it’s producing copy. For plenty of people, having a robot help them produce serviceable copy will be exactly enough to allow them to get by in the world. But for some, it will lower a barrier. It will be the beginning of their writing career, because they will learn that even though plenty of writing begins with shitty, soulless copy, the rest of writing happens in edits, in reworking the draft, in all the stuff beyond the initial slog of just getting words down onto a page.

Already, folks are working hard to close off this avenue for new writing and new writers. Just as I was writing the sentences above, I received an email from the digital editorial director at Travel + Leisure alerting me to an important update regarding “our content creation policy.” “At Travel + Leisure,” she wrote, in bold, “we only publish content authored entirely by humans and it is against our policies to use ChatGPT or similar tools to create the articles you provide to us, in part or in full.”

This and other panicked responses seem to fundamentally misunderstand the act of writing, which is generative—a process. Surely there will be writers—new writers, essential writers, interesting writers—who come to their own process alongside ChatGPT or the Playground or other AI-based writing tools, who break open new aesthetics and ideas in writing and what it can be.

After all, there are already great artists who have long worked with robots. One of my favorites is Brian Eno, who has been an evangelist for the possibilities of musical exploration and collaboration with computer programs for decades now. A few years ago, in a conversation with the producer Rick Rubin, Eno laid out his process: He begins with an algorithmic drum loop that is rhythmically perfect, and then starts inserting small errors—bits of humanity—before playing with other inputs to shape the sound. “What I have been doing quite a lot is tuning the system so that it starts to get into that interesting area of quasi-human” is how he described playing alongside the machine. “Sometimes, there will be a particularly interesting section, where the ‘drummer’”—that is, the computer—“does something really extraordinary … Sometimes the process is sort of iterated two or three times to get somewhere I like.”

Then Eno chuckled his very British-sounding chuckle: “Very little of this stuff have I actually released … I’m just playing with it, and fascinated by it.” To which I can only add: So am I.