Reports have surfaced about the removal of information about abortion from social media. Unfortunately, none of it is unprecedented. Platforms like Facebook and Instagram have long maintained broad and vague community standards that allow them to remove content with little recourse for users.
What Is Happening
As reported by Vice and followed up on by Wired, posts about abortion receive intense scrutiny online. The difference, one activist told Vice, is simply that more people are seeing their posts removed than before.
Vice found that the truthful sentence “abortion pills can be mailed” was flagged as violating Facebook’s rules on “buying, selling, or exchanging non-medical drugs.” A moderator of a Facebook group that connects people seeking information about abortions told Wired she has always had to carefully monitor posts to avoid the group being removed entirely, with clear rules about what can be posted; any links are banned, for instance.
The moderator expressed a frustration we’ve heard constantly about community guidelines: users have no idea where the lines actually are, and the lines shift suddenly, with no warning.
In the wake of the COVID-19 pandemic, social media platforms tightened enforcement of rules surrounding medical information, making their automated systems and human reviewers arbiters of truth. Their rules also ban the buying, selling, or gifting of pharmaceuticals; it is this rule that posts containing the sentence “abortion pills can be mailed” ran afoul of.
Furthermore, during the pandemic, Facebook removed posts at the request of states’ attorneys general related to the “promotion and sale of regulated goods and services.” In the context of abortion care and information, that precedent becomes especially dangerous.
Additionally, nearly every social media platform has some rule banning “illegal” activity or the promotion of it. What is unclear, in the wake of the Supreme Court’s overturning of Roe, is whether those rules will now be applied on a state-by-state basis. We don’t know how companies will react: whether they are even capable of blocking content state by state (and whether they will, even if they can), whether they will comply with an obviously unconstitutional or illegal law until it is struck down, and how they will handle people seeking abortion care in states where it is still legal from states where it isn’t. There is enormous uncertainty here, and these companies do not have a history of providing clear guidance even in the best of circumstances.
These kinds of policies are also easily weaponized by those seeking to silence people trying to share information and provide community support. All it takes is a few bad-faith reports against a person or a group for posts to be removed and accounts banned. Even if posts and accounts are eventually reinstated, the downtime means that people seeking help will not be able to find it.
What Should Companies Be Doing
At the risk of repeating ourselves, companies need to make their policies clear and consistent. Vague and broad community standards don’t provide true guidance to users about what they can and cannot say. This becomes especially true when the appeals process is broken.
Vice tested Facebook by posting “abortion pills can be mailed” several times, once “disagreeing” with the flag and once “agreeing” with it. (These are the two options given to users.) The posts disappeared and the user’s account was suspended for 24 hours, even though the post where they disagreed with Facebook’s assessment was eventually reinstated.
That’s a problem: losing access to an account for 24 hours, even though it was ultimately determined that no rule was broken, is easily weaponized. It is also a problem that, in the case of Facebook, appeals are limited to clicking yes or no on a box, and it is unclear whether a human being ever reviews the case.
Clear, consistent policies with a functional appeals process would do a lot for people looking to share information online. However uncomfortable for them, companies need to take stands on behalf of their users. Inconsistent enforcement based on press attention or political pressure ill-serves everyone.
Companies’ transparency reports also need to start being broken down not just by country but by state, wherever differences in state law matter. It’s important to know which states’ attorneys general and other law enforcement entities are making abortion-related requests of the companies. This will help us figure out how companies are reacting to various state laws. In the Facebook example above, we don’t know which states’ attorneys general asked Facebook to remove material. That information should be public.
We see transparency reports, and policies that restrict access to material based on “local laws,” but nothing more granular than national data is provided.
Thinking Beyond Facebook
Policies like those held by Facebook, Instagram, and Twitter exist beyond social media. And when infrastructure is implicated, they can be even more dangerous.
Services like Cloudflare could be pressured to disable access to websites with abortion information on them. ISPs could be pressured to cut off internet access to accounts providing such information. Payment processors could prevent people from paying for abortion care. In addition to the grave consequences for speech and access to speech, the consequences compound the further down the technical stack you go.
Removing posts is one thing; taking down an entire account or website, when only part of it relates to abortion, is far more dangerous. And cutting off internet access does not merely prevent someone from speaking about abortion. It prevents them, and anyone in their household, from working from home, from attending school remotely, and from connecting with their family.
Amazon’s AWS, the cloud computing service that dominates website hosting, has a policy allowing content to be disabled if it is “illegal” or in order to comply “with applicable law or any judicial, regulatory or other governmental order or request,” giving government entities broad ability to get websites providing abortion information taken down.
Similarly, Google prohibits users of Google Docs from “engaging in illegal activities or to promote activities, goods, services, or information that cause serious and immediate harm to people or animals.” Many people use Google Docs as a place to gather research and information and then share the link among their communities. Using Google Docs for abortion care information thus exposes users to the possible loss of their Google Account, meaning the loss of anything and everything you have entrusted to Google. Your emails, pictures, home videos, all gone.
And that doesn’t include the possibility that Google will report you to the authorities.
As users we find ourselves increasingly dependent on a handful of companies for internet access, hosting and sharing of resources, and communication. If we are booted from one, there are vanishingly few alternatives. Most Americans don’t have a choice in their internet service provider. If they are booted, they are offline. Google provides a suite of services to its users that, if lost, would be devastating. AWS could scrub the internet of many websites.
Optimally, we’d be able to choose. We’d be able to pick the social media site, ISP, or web host that shared our values and had a commitment to standing up against unjust government demands. Instead, we are forced to try to figure out what is possible under the rules of these companies. So we have to pressure them to stand up for us. To resist government pressure to remove information, users, or groups. To make the rules clear so we can share and access information. To serve us, their customers.