Will Twitter survive Elon Musk? Readers are Googling this question and reporters and columnists are working hard to provide an answer. But journalists are neglecting one of the most promising sources for answering it: prediction markets and forecasting platforms.
Prediction markets have been around in one form or another for decades and have already made inroads into journalism during elections. 2023 will be the year they become a source for other types of stories, simply because there’s now too much activity in the crowd forecasting world to ignore. For almost any question you can think of, there are online crowds making predictions. And if journalists do think of a question that isn’t yet being forecasted, there are platforms where they can pose it themselves.
For example, here are a few forecasts available as of this writing that speak to Twitter’s future:
Will Twitter go bankrupt in 2023? The forecasters at Good Judgment Open give it a 25% chance of happening.
Will Trump tweet by the end of February? Good Judgment Open gives it a 41% chance.
Will Twitter have an outage of six hours or more by mid-2023? Forecasters at Metaculus give it a 20% chance.
These figures are aggregations of lots of individual amateur predictions. Why trust them?
First, the theory: as the economists Justin Wolfers and Eric Zitzewitz explain, prediction markets work because they provide: “1) incentives to seek information; 2) incentives for truthful information revelation; and 3) an algorithm for aggregating diverse opinions.”
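That third ingredient, the aggregation algorithm, can be surprisingly simple. A minimal sketch of one common approach (the median of individual probabilities, lightly "extremized" because crowd averages tend toward underconfidence — this is an illustrative technique from the forecasting literature, not the specific method any platform named here uses):

```python
import math
from statistics import median

def aggregate(probs, extremize=1.5):
    """Combine individual probability forecasts into one crowd estimate."""
    m = median(probs)
    # Extremize: push the median away from 0.5 in odds space,
    # since pooled crowd forecasts tend to be underconfident.
    odds = (m / (1 - m)) ** extremize
    return odds / (1 + odds)

# Hypothetical probabilities from five forecasters on a yes/no question:
forecasts = [0.20, 0.22, 0.25, 0.28, 0.30]
print(round(aggregate(forecasts), 2))  # prints 0.16
```

The point of the sketch is only that a crowd number like "25% chance of bankruptcy" is a statistic computed over many independent judgments, not any single person's guess.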
They also have a strong track record. Research has shown that prediction markets predict election results better than Gallup polls, for example. They’ve accurately predicted movies’ box office performances, matched the accuracy of professional economic forecasters, and even done a better job than analysts or oil markets in predicting the U.S. invasion of Iraq. (“Prediction polls,” which also ask participants to make forecasts but don’t use a market, have a similarly strong track record.)
Prediction markets aren’t perfect. They’re only as good as the wisdom of their participants and the information those participants have access to. And, like any market, they can be vulnerable to manipulation without oversight.
Nonetheless, they’re a valuable tool for journalists and a complement to other sources. Reporters can use them the way financial journalists use other markets: They can be a source of news as well as one source among many explaining what’s going on.
The Economist has shown what this can look like by asking seasoned forecasters at Good Judgment Inc. to make predictions for its annual “The World Ahead” edition. The issue still includes the magazine’s traditional reporting as well as forecasts from the Economist Intelligence Unit, the company’s research arm, and predictions from big names in politics and business. The inclusion of Good Judgment’s “superforecasters” — who were selected based on their accuracy forecasting on open platforms — is an addition to, not a substitute for, traditional journalistic sources.
“The bigger picture here is that data-driven approaches are becoming popular in all kinds of journalism, and predictive/forward-looking journalism should follow suit,” said Tom Standage, a deputy editor at The Economist who edits The World Ahead. “That is why we partner with Good Judgment, and also why The Economist builds its own predictive models for elections, and why we often cite prediction markets too.”
Here’s a quick tour of the crowd forecasting landscape:
There are real-money prediction markets like Kalshi, where traders bet on world events.
There are play-money prediction markets like Manifold Markets that work the same way but without real cash at stake.
There are platforms that use “prediction polls” rather than markets, like Good Judgment Open, INFER (where I’m a paid forecaster), Metaculus, and my prediction newsletter Nonrival.
There are groups of forecasters who’ve consistently scored well on these platforms and now publish separately, like the Good Judgment “superforecasters,” the Swift Centre, and Samotsvety Forecasting.
There’s even a search engine for forecasts called Metaforecast.
The big difference between these platforms and a publication like FiveThirtyEight, which also makes predictions and also has a strong track record, is that they depend on the collective judgment of their users rather than on statistical modeling. That allows them to make forecasts on topics where there’s less data — like the fate of Twitter.
Citing these platforms in stories is a good first step for journalists. The next step is for publications to ask their readers to participate. That’s what I’ve been doing with my newsletter: Each week I write about an economic or business story and ask readers to make a forecast. Over time readers see how their forecasts turn out, learn from each other, and hopefully improve their thinking. This process formalizes something most journalists already recognize: Your audience collectively knows much more than you do.
Walter Frick is the founder of Nonrival and a contributing editor at Harvard Business Review. He was previously an executive editor at Quartz and a Knight visiting fellow at the Nieman Foundation.