Journalists productively harness generative AI tools

The online hype for OpenAI’s latest AI-driven chat tool is almost insufferable. Admittedly, ChatGPT is impressive, and you should go play with it if you haven’t already. You can prompt the machine with requests like “Write three newsworthy headlines for this article” or “Summarize this abstract without the scientific jargon,” and it will probably produce a sensibly written response. Many in my feeds seem amazed by the quality of the text the AI generates. Some even think it could be the death knell of the college essay. The technology is poised to disrupt many aspects of the media and communication industries by making content — not only text, but also visual imagery — easier to create.

We’re still early in the hype cycle, but in the next year I expect the field of journalism to soberly work out how such new tools might actually be productive. No, they’re not going to write ready-to-publish articles for you, despite the misleading headlines. But there are plenty of ways they might save bits of time on various newsroom production tasks. Journalists need to test the possibilities and boundaries of the technology and set to work exploring how these power tools can be adapted to their needs. Getting the most out of the AI will take lots of experimentation with writing prompts. On top of that, serious ethical thinking is needed to decide when and how to use the technology responsibly.

These AI tools can already do a lot. They can rewrite text to simplify it for different audiences, summarize documents, draft potential headlines, and brainstorm angles or directions for reporting. In data journalism they can be used to classify documents or extract data (with varying degrees of success), or to generate short text descriptions from structured data. What else could be done with these tools?
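To make that concrete, here is a minimal sketch of how a data desk might script two of these tasks — topic classification and short descriptions of structured records. It assumes OpenAI’s Python client (openai >= 1.0) with an API key in the environment; the model name, label set, and helper functions are illustrative assumptions, not a recommendation of any particular setup.

```python
from openai import OpenAI

# Assumption: OPENAI_API_KEY is set in the environment.
client = OpenAI()

def classify_document(text: str) -> str:
    """Ask the model to tag a document with one newsroom topic label (illustrative labels)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: any chat-capable model could be swapped in
        messages=[
            {"role": "system", "content": "You label documents for a newsroom."},
            {"role": "user", "content": (
                "Classify this document as one of: crime, education, environment, "
                "politics, other. Reply with the label only.\n\n" + text
            )},
        ],
        temperature=0,  # keep the labeling as deterministic as possible
    )
    return response.choices[0].message.content.strip()

def describe_row(row: dict) -> str:
    """Draft a one-sentence, plain-English description of a row of structured data."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": (
            "Write one plain-English sentence describing this record for readers: " + str(row)
        )}],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Hypothetical inputs, just to show the shape of the workflow.
    print(classify_document("The city council voted 7-2 to expand the bike lane network..."))
    print(describe_row({"county": "Cook", "median_rent": 1450, "yearly_change_pct": 8.2}))
```

Whatever a script like this returns still needs human review before it goes anywhere near publication, for the reasons below.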

There are of course limitations, including bias, nonsense text, and a range of other concerns. For news, the biggest issue is that the tools hallucinate with confidence, which makes them ready instruments for producing disinformation. Any text they output still needs to be checked for accuracy, which rules them out for some tasks. It’s probably better to think of them as internal newsroom aids, making suggestions to reporters and editors rather than generating text that will be published directly. Research is blazing ahead to make future versions of the technology better at outputting factually accurate text. News organizations could also invest more in R&D to fine-tune and further adapt the models so they are better aligned with journalistic needs. In the meantime, fact-checking should be a growth area for news organizations.

Like any other AI technology, these tools are not a button you press to fix what ails the news media. But I’m fundamentally optimistic about what can be done with them when journalists use them responsibly.

Nicholas Diakopoulos is an associate professor of communication studies and computer science at Northwestern University.
