AI will start fact-checking. We may not like the results.

Algorithmic fact checking will go mainstream in 2023.

Teams of computer scientists around the globe have already developed AI systems designed to detect manipulated media, misinformation or fake news.

Some types of verification are well-suited to a high-tech approach: A team from Drexel University recently published a new approach for detecting forged and manipulated videos. Their system combines forensic analysis with deep learning to detect fake videos that would slip past human reviewers or existing systems.

But fact-checking isn’t usually so straightforward. A quote might be accurate but misleading. Every news story is built on a subjective frame of what’s included — or excluded. That nuance, however, is lost when researchers test a new AI model against benchmark datasets that catalog posts as simply true or false.

The researchers who are developing cutting-edge AI fact-checking systems today measure their accuracy to the hundredth of a percent against benchmark datasets of social media posts and articles. That’s the standard way for artificial intelligence researchers to test and share their results, but it’s not well-suited to supporting journalists on deadline or platforms that need to make moderation decisions at scale.
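The benchmark-evaluation practice described above can be sketched in a few lines of Python. Everything here is invented for illustration — the posts, the labels, and the keyword heuristic are stand-ins for the large labeled corpora and trained models real research teams use:

```python
# Illustrative sketch of how AI fact-checking research reports results:
# score a classifier against a benchmark of posts labeled true or false.
# All data and logic below are hypothetical, for illustration only.

# A tiny "benchmark dataset": posts paired with true/false labels.
benchmark = [
    ("The vaccine was tested in clinical trials.", True),
    ("Miracle cure hidden by doctors!", False),
    ("Officials confirmed the election results.", True),
    ("Secret lab leak covered up worldwide!", False),
]

def toy_classifier(post: str) -> bool:
    """A stand-in for a trained model: flags sensational keywords as false."""
    sensational = ("miracle", "secret", "hidden", "covered up")
    return not any(word in post.lower() for word in sensational)

def accuracy(dataset, classifier) -> float:
    """Fraction of posts where the classifier agrees with the benchmark label."""
    correct = sum(classifier(post) == label for post, label in dataset)
    return correct / len(dataset)

print(f"Benchmark accuracy: {accuracy(benchmark, toy_classifier):.2%}")
```

The catch, as the passage above notes, is that a headline accuracy number like this says nothing about nuance: a post can match its label while still being misleading in framing or context.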

The teams building these tools are well-intentioned, but their work risks being a high-tech Maginot Line: A defense against misleading COVID tweets from 2020 that can’t anticipate — or respond to — the next iteration of information warfare.

The challenge for journalists will come if these tools are widely adopted by social media platforms, ISPs, or content-management systems. Fake and misleading stories might still circulate, because bad actors would have the resources to engineer ways around the automated tools. Reporters, on the other hand, might find their work blocked, because investigations and enterprise reporting lack the precedent in the AI’s model to register as “true.”

But it doesn’t need to be that way.

Journalists can engage with the technologists who are working in this space to improve — and direct — their work. As an industry we can wrestle with the ethics of letting machines audit and improve our reporting. We can be open to the disruptive power of artificial intelligence at all points in our value chain, instead of closing ourselves off and assuming the future will look just like today.

If we spend 2023 waiting for the killer AI app that will save journalism, we won’t like the results. We’ll wind up playing catch-up, as we have with each wave of technological disruption for the last two decades.

But if we use the coming year to act — by tracking the technologies that will shape our industry, by building partnerships that equip our organizations to grow, by taking stock of the skills we need in the newsroom — we can have agency over our future.

We can create a future where AI is a tool for making journalism more sustainable, instead of inhabiting a future created by others.

Sam Guzik leads product strategy for WNYC and is a foresight expert advisor at the Future Today Institute.
