Fact-Checking, COVID-19 Misinformation, and the British Medical Journal


Throughout the COVID-19 pandemic, authoritative research and publications have been critical to building a better understanding of the virus and how to combat it. Unlike previous pandemics, however, this one has been further exacerbated by a massive wave of misinformation and disinformation spreading across both traditional and social media.

The increasing volume of misinformation and urgent calls for better moderation have made processes like fact-checking, the practice of assessing the accuracy of reporting, integral to how social media companies deal with the dissemination of content. But a valid question persists: who should check facts? The question is particularly pertinent given how such checks can shape perceptions, encourage biases, and undermine longstanding, authoritative voices. Social media fact-checks currently come in different shapes and sizes; for instance, Facebook outsources the role to third-party organizations that label misinformation, while Twitter’s internal practices determine which posts are flagged as misleading, disputed, or unverified.

That Facebook relies on external fact-checkers is not in and of itself a problem; there is something appealing about Facebook relying on outside experts rather than acting as the sole arbiter of truth. But Facebook vests a lot of authority in its fact-checkers and then mostly steps out of the way of any disputes that arise around their decisions. This raises concerns about whether Facebook is fulfilling its obligation to provide its users with adequate notice and appeals procedures when their content is moderated by its fact-checkers.

According to Facebook, its fact-checkers may assign one of four labels to a post: “False,” “Partly False,” “Altered,” or “Missing Context.” The label is accompanied by a link to the fact-checker and a more detailed explanation of that decision. Each label triggers a different action from Facebook. Content rated either “False” or “Altered” is subject to a dramatic reduction in distribution and gets the strongest warning labels. Content rated “Partly False” also gets reduced distribution, but to a lesser degree than “False” or “Altered.” Content rated “Missing Context” is not typically subject to distribution reduction; rather, Facebook surfaces more information from its fact-checking partners. But under its current temporary policy, Facebook will reduce the distribution of posts about COVID-19 or vaccines that its fact-checkers mark as “Missing Context.”

As a result, these fact-checkers exert significant control over many users’ posts and how they may be shared.
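To make the differences between the labels concrete, here is a minimal, purely illustrative sketch (in Python) of how the label-to-enforcement mapping described above might be modeled. The label names come from Facebook’s public descriptions; the function, field names, and the COVID-19/vaccine flag are hypothetical and are not Facebook’s actual implementation.

```python
# Illustrative model of the label-to-enforcement mapping described above.
# Label names come from Facebook's public fact-checking descriptions;
# everything else (function name, the covid_or_vaccine flag) is hypothetical.

def enforcement_for(label: str, covid_or_vaccine: bool = False) -> dict:
    """Return the illustrative consequences of a fact-check label."""
    if label in ("False", "Altered"):
        return {"distribution": "dramatically reduced", "warning": "strongest"}
    if label == "Partly False":
        return {"distribution": "reduced", "warning": "moderate"}
    if label == "Missing Context":
        # Under the temporary COVID-19/vaccine policy described above,
        # "Missing Context" also triggers reduced distribution.
        if covid_or_vaccine:
            return {"distribution": "reduced", "warning": "informational"}
        return {"distribution": "unchanged", "warning": "informational"}
    raise ValueError(f"Unknown label: {label}")

# Example: a COVID-19 article labeled "Missing Context".
print(enforcement_for("Missing Context", covid_or_vaccine=True))
```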

A recent incident demonstrates some of the problems with this system.

In November 2021, the British Medical Journal (BMJ) published a story about a whistleblower’s allegations of poor practices at three clinical trial sites run by Ventavia, one of the companies contracted by Pfizer to carry out its COVID-19 vaccine trials. After publication, BMJ’s readers began reporting a variety of problems, including being unable to share the article and receiving warnings from Facebook that people who repeatedly share “false information” might have their posts removed from Facebook’s News Feed.

BMJ’s article was fact-checked by Lead Stories, one of the ten fact-checking companies contracted by Facebook in the United States. After BMJ contacted Lead Stories to inquire about the flagging and removal of the post, the company maintained that the “Missing Context” label it had assigned the BMJ article was valid. In response to this, BMJ wrote an open letter to Mark Zuckerberg about Lead Stories’ fact-check, requesting that Facebook allow its readers to share the article undisturbed. Instead of hearing from Facebook, however, BMJ received a response to its open letter from Lead Stories.

It turns out that Facebook outsources not just fact-checking but also communication. According to Facebook, “publishers may reach out directly to third-party fact-checking organisations if they have corrected the rated content or they believe the fact-checker’s rating is inaccurate.” Facebook goes on to note that “these appeals take place independently of Facebook.” Facebook apparently has no role at all once one of its fact-checkers labels a post.

This was the first mistake. Although Facebook may properly outsource its fact-checking, it’s not acceptable to outsource its appeals process or the responsibility for follow-up communications. When Facebook vests fact-checkers with the power to label its users’ posts, Facebook remains responsible for those actions and their effects on its users’ speech. Facebook cannot merely step aside and force its users to debate the fact-checkers. Facebook must provide, maintain, and administer its own appeals process.

But more on this in a moment; now, back to the story:

According to Lead Stories’ response, the reasons for the “Missing Context” label came down to two points: first, the headline and other substantive parts of the publication, in Lead Stories’ view, overstated the dangers and unfairly called into question the data collected from the Pfizer trials; second, it doubted the credibility of the whistleblower, since in some other instances she had appeared not to express unreserved support for COVID vaccines on social media. Lead Stories claims it was further influenced by the fact that the article was being widely shared as part of a larger campaign to discredit vaccines and their efficacy.

What happened next is interesting. The “appeals” process, such as it was, played out in public. Lead Stories responded to BMJ’s open letter in a series of articles published on its site, and it further used Twitter to defend its decision and criticize both BMJ and the investigative journalist who wrote the article.

What does all this tell us about Facebook’s fact-checking and its implications for the restriction of legitimate, timely speech and expression on the platform? It tells us that users with legitimate questions about being fact-checked will not get much help from Facebook itself, even when the user is a well-established, well-regarded scholarly journal.

It is unacceptable that users who feel ill-served by Facebook must navigate a whole new and complex system run by a party they never chose to deal with. Since 2019, Facebook has endorsed the Santa Clara Principles, which, among other things, require companies to provide a clear and easily accessible appeals process. This means that “users should be able to sufficiently access support channels that provide information about the actioning decision and available appeals processes once the initial actioning decision is made.” Does Lead Stories offer such an appeals process? Has it signed on to the Santa Clara Principles? Does Facebook require its outside fact-checkers to offer robust notice and appeals processes? Has Facebook even encouraged them to?

Given the current state of misinformation, there is really no question that fact-checking can help navigate the often-overwhelming world of content moderation. At the same time, fact-checking should not mean that users are thrust into a whole new ecosystem of new actors, new processes, and new rules. Facebook and other technology companies cannot encourage processes that detach the checking of facts from the overall content moderation process. Instead, they must take on the task of creating systems that users can trust and depend on. Unfortunately, the current system Facebook has created fails to achieve that.

