The day after Surgeon General Vivek Murthy issued an advisory calling for a “whole-of-society” effort to combat the “urgent threat to public health” posed by “health misinformation,” President Joe Biden accused Facebook and other social media platforms of “killing people” by allowing the spread of anti-vaccine messages. Bridling at the homicide charge, Facebook noted that “vaccine acceptance” among the platform’s users has increased by 10 to 15 percentage points since January.
“The data shows that 85% of Facebook users in the US have been or want to be vaccinated against COVID-19,” the company said in a statement on Saturday. “President Biden’s goal was for 70% of Americans to be vaccinated by July 4. Facebook is not the reason this goal was missed.”
The escalation of Biden’s tiff with Facebook reflects his frustration with the company’s failure to control speech the way he thinks it should. As Reason’s Robby Soave noted last week, Biden seems to think Facebook should be treating his censorious suggestions as orders. “These platforms have to recognize they’ve played a major role in the increase in speed and scale with which misinformation is spreading,” Murthy said on CNN yesterday. His recommendations include “appropriate legal and regulatory measures that address health misinformation while protecting user privacy and freedom of expression.”
It is hard to imagine how any such measures could be “appropriate” in light of the First Amendment. The basic problem Biden and Murthy have identified is that freedom of speech allows people to say things that are misleading or flat-out wrong. And while Biden and Murthy are surely right that misguided opinions and erroneous statements of fact can have bad consequences, that has always been understood as an unavoidable cost of free speech.
The alternative—empowering the government to determine which opinions are acceptable and which statements are accurate—does not eliminate error, since the officials charged with making those judgments are just as fallible as the rest of us, as the history of public pronouncements about COVID-19 amply illustrates. Such a regime would replace the cacophony of contending views that offends Biden and Murthy with a single, authoritative voice that might also be wrong, but without the opportunity for correction. When the government uses “legal and regulatory measures” to suppress “misinformation” it deems dangerous, it destroys the noisy, imperfect, but ultimately enlightening mechanism that allows us to arrive at the truth.
At the end of his advisory, just before the list of references, Murthy tackles a subject that you might think deserved higher billing: What is “misinformation”?
“Defining ‘misinformation’ is a challenging task, and any definition has limitations,” Murthy concedes. “One key issue is whether there can be an objective benchmark for whether something qualifies as misinformation. Some researchers argue that for something to be considered misinformation, it has to go against ‘scientific consensus.’ Others consider misinformation to be information that is contrary to the ‘best available evidence.’ Both approaches recognize that what counts as misinformation can change over time with new evidence and scientific consensus. This Advisory prefers the ‘best available evidence’ benchmark since claims can be highly misleading and harmful even if the science on an issue isn’t yet settled.”
Who decides what the “best available evidence” indicates? Trusting government-appointed experts with that job seems risky, to say the least.
Anthony Fauci, the federal government’s top COVID-19 adviser, has admitted that he deliberately lowballed the threshold for herd immunity because he did not think the public could handle the truth. Rochelle Walensky, director of the Centers for Disease Control and Prevention (CDC), misrepresented the “best available evidence” concerning outdoor COVID-19 transmission in several significant ways. Fauci, the CDC, and Jerome Adams, Murthy’s predecessor as surgeon general, initially dismissed the value of face masks as a precaution against COVID-19, only to reverse themselves for reasons that had nothing to do with evolving science.
Now that the official guidance has changed, is it “misinformation” to question whether face masks have done much to prevent COVID-19 transmission? Since the evidence is mixed, you might think that mask skeptics would be allowed to state their views on social media. But Murthy worries about messages that might encourage people to “reject public health measures such as masking and physical distancing.” Questioning the effectiveness of masks certainly seems to fall into that category. Murthy even expresses concern that “misinformation” has “led to harassment of and violence against” airline personnel, although the proximate cause of those conflicts is the Transportation Security Administration’s scientifically dubious rule requiring that all passengers wear face masks, regardless of whether they have been vaccinated.
Whether skepticism about masks or about mask mandates qualifies as “misinformation” depends on what Murthy or some other officially recognized expert decides the “best available evidence” tells us. Likewise with controversies such as the origin of COVID-19, its infection fatality rate, the herd immunity threshold, the usefulness of specific treatments, and the cost-effectiveness of lockdowns.
The fact that Murthy does not cite a single specific example of “health misinformation” in his 22-page advisory on that subject is not exactly reassuring. He repeatedly echoes Biden’s concern that certain messages might dissuade people from getting vaccinated, but even here he is vague. “A recent study showed that even brief exposure to COVID-19 vaccine misinformation made people less likely to want a COVID-19 vaccine,” he says.
Several of the messages used in that study were straightforwardly wacky. One stated that “97% of corona vaccine recipients will become infertile,” for example, while another warned that “you will essentially become a genetically modified human being” if you receive an mRNA vaccine. But one post wondered why everyone needs to be vaccinated against a virus that kills a tiny percentage of people infected by it, which is more of a question than a false claim.
According to Murthy’s definition, “misinformation” includes statements that are true but “misleading,” which suggests the label could be applied to, say, concerns about the sample sizes or follow-up periods used in vaccine studies. If someone notes that the studies did not follow subjects for years to see whether they suffered any long-term side effects, that would be true but unhelpful and therefore probably would be deemed “misleading.”
In addition to worrying that “health misinformation” is “undermining vaccination efforts,” Murthy says it is “dividing families and communities,” which to his mind counts as the sort of “significant harm” that demands government attention. If so, pretty much any scientific, political, or social issue that provokes arguments between relatives and friends would qualify as a “public health” problem requiring a government response.
Even purging clearly false statements about COVID-19 vaccines from social media platforms is a tall order. Yet Murthy seems to think Facebook et al. also should suppress debatable or even accurate statements that might discourage vaccination, along with other kinds of “misinformation” about COVID-19. And the platforms are expected to do that without “an objective benchmark,” based on an assessment of what counts as “misleading” in light of the “best available evidence,” which is open to interpretation and “can change over time.” Good luck with that.
Murthy notes “the challenges of content moderation” as well as the potential for “unintended consequences,” such as “migration of users to less-moderated platforms.” Unfazed by those challenges, he thinks platforms “should also address misinformation in live streams, which are more difficult to moderate due to their temporary nature and use of audio and video.”
If social media platforms fail to accomplish this impossible mission, Murthy warns, “legal and regulatory measures” may be necessary. What might those look like?
Now that the surgeon general “has declared the barrage of misinformation spreading on social media a public health hazard,” Harvard media researchers Joan Donovan and Jennifer Nilsen argue in an NBC News essay, the government should treat social media companies “like Big Tobacco” by imposing “serious consumer protection regulations.” They note that Murthy thinks Facebook et al. should “redesign their algorithms so that search and recommendation systems don’t surface reckless misinformation, and to make it easier for people to identify and report misinformation.”
If those recommendations became commands, they would clearly impinge on the First Amendment rights of social media companies and people who use their platforms. But even if such regulations could pass constitutional muster, they would face the same basic problem as voluntary efforts to curb “misinformation”: Once you get beyond clear examples like warnings about vaccine-induced mass sterility, misinformation is in the eye of the beholder.