More than a few submissions to our annual Predictions for Journalism series touched on generative AI this year.
Some predicted the tech could be “a game-changer” for journalism, particularly resource-strapped local newsrooms. Others cautioned that producing convincing disinformation just got a lot cheaper and faster, raised ethical questions the news industry is only beginning to grapple with, and predicted that AI-written content would soon flood the internet.
A prediction by Gannett’s Eric Ulken read, in part, “I don’t imagine we’ll see GPT-3–produced copy in the pages of The New York Times in 2023, but it’s likely we’ll turn to machines for some previously unthinkable creative tasks.”
I didn’t realize how close we were to that first possibility until I listened to a recent episode of The New York Times podcast Hard Fork, hosted by Times tech columnist Kevin Roose and Casey Newton of Platformer.
“I will make a confession here on this podcast that I have tried to write parts of my column using AI,” Roose said. “I’ve said, ‘I’m sort of stuck on this paragraph. I wonder if it could help me figure out a way to complete this thought.’”
Roose hasn’t been entirely impressed with the results. (He used an app called Lex that he described as a “Google Doc with GPT-3 built in.”)
“Sometimes what it comes up with is passable, but it’s not good,” Roose said. “It’s not something that I would be happy to pass off as my own, even if it were ethical to do so — which I don’t think it would be.”
Our own Joshua Benton came to a similar conclusion after experimenting with GPT-2 back in 2019. Since then, the Microsoft-backed tech company OpenAI has trained its language-processing AI on a much larger dataset and introduced a chatbot interface that will bring the technology to many more users than earlier iterations reached. OpenAI is also developing a watermark that would help detect text generated by ChatGPT.
Even with the improvements, Roose said he hasn’t been tempted to include AI-generated writing in his Times column just yet.
“I wouldn’t actually be copying and pasting any of the text verbatim, because it just, frankly, isn’t that unique or interesting or stylish,” Roose said in the episode.
“Maybe it’ll get to a point with GPT-4 where it’s better than I am, and then I’ll have to have some hard thoughts about what I can ethically and spiritually stand outsourcing to the AI,” he added.
Roose envisions using AI to help outline and research his columns. In an earlier Hard Fork episode, the hosts discussed using the tech to generate story ideas, submit broken code for correction, and create multiple explanations of complicated concepts at different levels of difficulty.
Roose also mentioned another way that AI may help him write his columns.
“One thing that I do when I’m writing is I try to anticipate what people might object to, what good points people might make in response to some argument that I’m making,” Roose said. “I feel like I’m O.K. at that, but a GPT-3 or GPT-4 might be better at it. I might be able to paste in my column and say, ‘What are three counterarguments to this?’”
“Right,” Newton quipped. “Until now, if you wanted to find out why your argument was stupid, you had to tweet out a link to your story.”
Their most recent episode also included Hard Fork’s own predictions. (Newton said “the media’s divorce from Twitter will begin in earnest” in 2023 and Roose claimed to be “medium-confident” that TikTok would be banned in the United States before the year was through.) You can listen or read a transcript here.