Hateful speech presents one of the most difficult problems of content moderation. At a global scale, it’s practically impossible.
That’s largely because few people agree about what hateful speech is—whether it is limited to derogations based on race, gender, religion, and other personal characteristics historically subject to hate, whether it includes all forms of harassment and bullying, and whether it applies only when directed from a place of power to those denied such power. Just as governments, courts, and international bodies struggle to define hateful speech with the requisite specificity, so do online services. As a result, the significant efforts online services do undertake to remove hateful speech can often come at the expense of freedom of expression.
That’s why there’s no good solution to Facebook’s current dilemma of how to treat the term “Zionism.”
The trouble with defining hateful speech
Hateful speech presents one of the most difficult problems of content moderation for the dominant online services. Few of these services want to host hateful speech; at best it is tolerated, never welcomed. As a result, services make significant efforts to remove it, and those efforts, unfortunately but not unexpectedly, often come at the expense of freedom of expression. Online services struggle, as governments, courts, and various international bodies have, to define hateful speech with the necessary degree of specificity, and they often find themselves having to take sides in long-running political and cultural disputes.
In its attempts to combat hate speech on its platform, Facebook has often struggled with nuance. In a post from 2017, the company’s VP of policy for Europe, the Middle East, and Africa, Richard Allan, illustrated the difficulties of moderating with nuance on a global platform, writing:
People who live in the same country—or next door—often have different levels of tolerance for speech about protected characteristics. To some, crude humor about a religious leader can be considered both blasphemy and hate speech against all followers of that faith. To others, a battle of gender-based insults may be a mutually enjoyable way of sharing a laugh. Is it OK for a person to post negative things about people of a certain nationality as long as they share that same nationality? What if a young person who refers to an ethnic group using a racial slur is quoting from lyrics of a song?
Indeed, determining what is or isn’t “hate speech” is a hard problem: words that are hateful in one context are not in another, and words that are hateful in one population or region are not in others. Laws designed to protect minority populations from hateful words based on race or ethnicity have historically been used by the majority race or ethnicity to suppress criticism or expressions of minority pride. Most recently, this has been seen in the United States, as some have labeled the speech of the Movement for Black Lives as hate speech against white people.
Of course, making such determinations on a platform with billions of users from nearly every country in the world is much more complicated—and even more so when the job is either outsourced to low-paid workers at third-party companies or worse: to automated technology.
The difficulty of automating synonymy
The latest vocabulary controversy at Facebook surrounds the word “Zionist,” used to describe an adherent to the political ideology of Zionism—the historic and present-day nationalist movement that supports a Jewish homeland in Israel—but also sometimes used in a derogatory manner, as a proxy for “Jew” or “Jewish.” Adding complexity, the term is also used by some Palestinians to refer to Israelis, whom they view as colonizers of their land.
Because of the multi-dimensional uses of this term, Facebook is reconsidering the ways in which it moderates the term’s use. More specifically, the company is considering adding “Zionist” as a protected category under its hate speech policy.
Facebook’s hate speech policy has in the past been criticized for making serious mistakes, like elevating “white men” as a protected category, while refusing to do so for groups like “black children.”
According to reports, the company is under pressure to adopt the International Holocaust Remembrance Alliance’s working definition of antisemitism, which functionally encompasses harsh criticism of Israel. That definition has been criticized by others in the Jewish community: a group of 55 scholars of antisemitism, Jewish history, and the Israeli-Palestinian conflict called it “highly problematic and controversial.” A US-based group called Jewish Voice for Peace, working in collaboration with a number of Jewish, Palestinian, and Muslim groups, has launched a campaign urging Facebook not to add “Zionist” as a protected category in its hate speech policy.
Moderation at scale
While there is no denying that “Zionist” can be used as a proxy for “Jewish” in ways that are antisemitic, the term’s myriad uses make it a prime example of why moderation at scale is an impossible endeavor.
In a perfect world, moderation might be conducted by individuals who specialize in the politics of a given region or who have a degree in political science or human rights. In the real world, however, the political economy of content moderation is such that companies pay workers to sit at a desk and meet time-based quotas, deciding what expression does or does not violate a set of constantly changing rules. These workers, as EFF Pioneer Award recipient and scholar Sarah T. Roberts describes in her 2019 book Behind the Screen, labor “under a number of different regimes, employment statuses, and workplace conditions around the world—often by design.”
While some proportion of moderation still receives a human touch, Facebook and other companies are increasingly using automated technology—even more so amid the global pandemic—to deal with content, meaning that a human touch isn’t guaranteed even when the expression at hand requires a nuanced view.
Imagine the following: a user posts an angry rant about a group of people—in this example, let’s use the term “socialist,” which is also contested and multi-dimensional, to illustrate the point. The rant talks about the harm that socialists have caused to the person’s country and expresses serious anger—but does not threaten violence—toward socialists.
If this rant came from someone in the former Eastern bloc, it would carry a vastly different meaning than if it came from someone in the United States. A human moderator well-versed in those differences and given enough time to do their job could tell the two apart. An algorithm, on the other hand, could not.
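To make that limitation concrete, here is a minimal sketch, in Python, of the kind of blunt keyword matching that automated moderation can reduce to when context is unavailable. This is not Facebook’s actual system; the watchlist, the function, and the sample posts are all hypothetical.

```python
# A deliberately crude, hypothetical moderation filter: it flags any post
# containing a watched term, with no sense of who is speaking or why.
FLAGGED_TERMS = {"socialist"}  # hypothetical watchlist for this example


def flag_post(text: str) -> bool:
    """Return True if the post contains any watched term (substring match)."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)


# Two very different posts receive the same verdict, because the filter
# sees only the string, not the speaker, the region, or the intent.
angry_rant = "Socialists have done real harm to this country and I am furious."
self_description = "As a socialist, I spend my weekends volunteering at the food bank."

print(flag_post(angry_rant))        # True
print(flag_post(self_description))  # True
```

Real systems layer machine-learned classifiers and user reports on top of rules like this, but the gap the sketch illustrates remains: identical text gets identical treatment, even though its meaning depends on who is speaking, where, and about whom.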
Facebook has admitted these difficulties in other instances, such as when dealing with the Burmese term kalar, which when used against Rohingya Muslims is a slur, but in other cases carries an entirely innocuous meaning (among its definitions is simply “split pea”). Of that term, Richard Allan wrote:
We looked at the way the word’s use was evolving, and decided our policy should be to remove it as hate speech when used to attack a person or group, but not in the other harmless use cases. We’ve had trouble enforcing this policy correctly recently, mainly due to the challenges of understanding the context; after further examination, we’ve been able to get it right. But we expect this to be a long-term challenge.
Everything old is new again
This isn’t the first time that the company has grappled with the term “Zionist.” In 2017, the Guardian released a trove of documents used to train content moderators at Facebook. Dubbed the Facebook Files, these documents listed “Zionist” alongside “homeless people” and “foreigners” as groups warranting protection when dealing with credible threats of violence. Also considered particularly vulnerable were journalists, activists, heads of state, and “specific law enforcement officers.” The leaked slides were not only sophomoric in their political analysis, but provided far too much information—and complexity—for already over-taxed content moderation workers.
In Jillian C. York’s forthcoming book, Silicon Values: The Future of Free Speech Under Surveillance Capitalism, a former operations staffer from Facebook’s Dublin office, who left in 2017 and whose job included weighing in on these difficult cases, said that the consideration of “Zionism” as a protected category was a “constant discussion” at the company, while another said that numerous staffers tried in vain to explain to their superiors that “Being a Zionist isn’t like being a Hindu or Muslim or white or Black—it’s like being a revolutionary socialist, it’s an ideology … And now, almost everything related to Palestine is getting deleted.”
“Palestine and Israel [have] always been the toughest topic at Facebook. In the beginning, it was a bit discreet,” she further explained, with the Arabic-language team mainly in charge of tough calls, but later, Facebook began working more closely with the Israeli government (just as it did with the governments of the United States, Vietnam, and France, among others), resulting in a change in direction.
This story is ongoing, and it remains unclear what Facebook will decide. But ultimately, the fact that Facebook is in a position to make such a decision at all is a problem. While we hope that the company will not restrict yet another nuanced term that it lacks the capacity to moderate fairly, whatever it chooses, it must ensure that its rules are transparent, and that users have the ability to appeal—to a human moderator—any decisions that are made.
The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. We work to ensure that rights and freedoms are enhanced and protected as our use of technology grows. Visit https://www.eff.org