Facebook is failing to label many posts from websites most likely to publish climate change misinformation, according to a new report from a British watchdog group.
That’s despite the company having rolled out a feature in May 2021 that adds information labels to climate change-related posts, a feature now available in several countries around the world.
The group, the Center for Countering Digital Hate, looked at a small sample of English-language articles related to climate change from publishers it had previously named to its “Toxic Ten” list. In November 2021, CCDH found that this group of 10 websites — including Breitbart, Newsmax, and the Daily Wire — was responsible for nearly 70% of engagement on Facebook with climate denial content.
The report’s authors used the analytics tool NewsWhip to search for nearly two dozen terms, such as “climate hoax,” “climate alarmism,” “climategate,” and “global warming scam,” to arrive at a shortlist of articles. Together, the Facebook posts sharing these articles had more than 1 million interactions, including likes, shares, and comments.
The shortlist was then evaluated specifically for articles containing climate misinformation as defined by the voluntary coalition known as the Conscious Advertising Network. The final list was made up of 184 articles published between May 19, 2021 (after the company rolled out its informational labeling feature) and January 20, 2022.
Using CrowdTangle, the study authors identified the most popular Facebook post associated with each article and assessed whether these posts included an information or fact-checking label. (Facebook parent company Meta is now limiting access to CrowdTangle, which could make analyses like these more difficult to do in the future.)
The study found that half of these posts contained no information label, while the other half did.
“50% is a failing grade. It’s an F,” Imran Ahmed, chief executive of CCDH, said in a call with reporters. “Someone with the resources of Facebook should be aiming for an A.”
The unlabeled half (a total of 93 posts) drew nearly 542,000 Facebook interactions, which the authors found amounted to a little more than half of all interactions with articles in the sample.
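The arithmetic behind these two figures — the share of posts with labels and the share of interactions on unlabeled posts — is straightforward. As a rough illustration with invented numbers (not the report’s actual data, which came from NewsWhip and CrowdTangle), the calculation looks like:

```python
# Hypothetical sketch of the report's label-coverage arithmetic.
# The post records below are made up for illustration only.

posts = [
    # (post_id, has_label, interactions)
    ("a", False, 6000),
    ("b", True, 4000),
    ("c", False, 5000),
    ("d", True, 3000),
]

unlabeled = [p for p in posts if not p[1]]

# Share of posts that carry an information label.
label_coverage = 1 - len(unlabeled) / len(posts)

# Share of all interactions that landed on unlabeled posts.
unlabeled_share = sum(p[2] for p in unlabeled) / sum(p[2] for p in posts)

print(f"labeled: {label_coverage:.0%}, unlabeled interaction share: {unlabeled_share:.0%}")
```

In the report’s sample, both numbers came out to roughly half: 93 of 184 posts were unlabeled, and those posts accounted for a little more than half of all interactions.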
There didn’t seem to be any predictable patterns behind which posts earned a label and which didn’t, according to Callum Hood, head of research at CCDH. “Bottom line, it seemed quite arbitrary,” Hood told reporters. “We had posts with very high numbers of interactions that you might have intuitively thought Facebook would pay more attention to but contained phrases clearly associated with climate denial that were not labeled. And then you had others, which didn’t really contain those words or phrases, or were less popular and did have labels,” he said.
The report highlights examples of posts that were missing information or fact-checking labels, as well as posts to which Facebook did choose to add them.
Still, labeling is not always an effective tool against misinformation. Facebook’s own internal research has shown that adding labels has a limited effect.
The new report comes just days after whistleblower and former Facebook employee Frances Haugen filed a pair of complaints with the SEC alleging that Facebook misled investors about how it was combating Covid-19 and climate change misinformation on its website.
The new CCDH report builds on what Haugen is claiming, Ahmed said. “[Labeling] was the major intervention that Facebook said it was going to do, and it hasn’t done it,” he said. “We’ve got another case here of where a tech giant has made a sweeping promise about what it’s going to do to address a disinformation problem on its platform. And our research, again, shows that it simply isn’t doing it.”
Facebook spokesperson Kevin McAlister said in a statement:
“We combat climate change misinformation by connecting people to reliable information in many languages from leading organizations through our Climate Science Center and working with a global network of independent fact checkers to review and rate content. When they rate this content as false, we add a warning label and reduce its distribution so fewer people see it. During the time frame of this report, we hadn’t completely rolled out our labeling program, which very likely impacted the results.”