How Bad Is Online Harassment? 

In September, the Committee to Protect Journalists (CPJ), an American organization that monitors attacks on freedom of the press worldwide, issued a report on what it called a major threat to journalists—particularly female journalists—in the United States and Canada: online harassment. The report opened with an anecdote meant to illustrate the problem. A Texas-based freelancer suddenly found her inbox flooded with spam, from sales promotions to fake job offers, and realized that someone had subscribed her to dozens of email lists; she suspected that the culprit was a bigoted commenter previously banned from a website for which she wrote. It was, the report quoted her as saying, “kind of scary.”

Given that CPJ deals with issues that range from censorship to beatings, kidnappings, and even murders of journalists, junk-mail bombing seems like the epitome of a First World Problem. (I say that as someone targeted by a similar prank a couple of years ago.) Yet such trivial annoyances show up quite frequently in accounts that treat online abuse as an extremely grave social problem.

CPJ is far from the only organization to address the issue. A 2018 report from Amnesty International, a globally revered human rights advocacy group, was titled Toxic Twitter and examined “violence and abuse against women online.” The same year, PEN America, the nearly 100-year-old nonprofit that promotes freedom of speech, issued a statement describing online harassment as a “clear threat to free expression.” The United Nations has also weighed in, holding its first hearing on the subject in 2015.

Some of the behavior that falls under the general umbrella of “online harassment” is not only noxious but genuinely frightening and even criminal. The article by journalist Amanda Hess that set off the current panic—“Why Women Aren’t Welcome on the Internet,” published by Pacific Standard in January 2014—discussed Hess’ own experience of being cyberstalked by a man who progressed from tweets to emails to threatening phone calls. In other cases, harassment in cyberspace crosses over into real life via “swatting”: prank emergency calls that dispatch law enforcement to handle a supposedly dangerous situation. In 2017, police in Wichita, Kansas, shot and killed an unarmed 28-year-old man after one such fraudulent 911 call.

The rapid evolution of the internet has often outpaced the law’s ability to deal with cybercrime, including stalking and threats. Unfortunately, as with many other issues, the discussion of online harassment easily lends itself to catastrophizing. Every “go jump off a cliff” tweet becomes virtual terrorism, grounds for social media banishment if not criminal investigation. The sense of urgency is amplified by shoddy analysis, politically driven double standards, and “do something!” calls to action—action that often involves speech suppression.

The great thing about the internet is that you can reach just about anyone, anywhere, in an instant. The awful thing about the internet is that just about anyone, from anywhere, can reach you in an instant. As a journalist, you can reach vast numbers of new readers, connect with fans, and find information that would once have been out of reach; you can also get nasty messages from hundreds of haters who no longer have to go to the trouble of mailing a letter.

Online harassment is far from the first internet-related panic—remember sex fiends lurking in chat rooms? But while earlier alarmism about online horrors usually came from the right and was not overtly political, the panic about internet harassment has come primarily from the left and is transparently politicized in its selection of “deserving” victims.

Hess’ Pacific Standard article came just two weeks after a woman named Justine Sacco watched her life fall apart because of an internet mob. On her way from London to Cape Town, Sacco tweeted a joke meant to mock the privileged “bubble” of affluent Americans: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!” The tweet went viral, and by the time Sacco landed she was not only jobless but so infamous some hotels canceled her bookings.

This was a textbook example of online harassment. Yet neither Hess’ article nor the ensuing conversation mentioned Sacco’s ordeal. When British journalist Jon Ronson wrote about it a year later in So You’ve Been Publicly Shamed (Riverhead Books), many faulted him for being too sympathetic. A Washington Post essay by academic and author Patrick Blanchfield chided those who would turn this “30-something, well-educated, relatively affluent white woman” into the “martyr of choice” for internet abuse. Blanchfield’s own martyrs of choice included successful white feminists targeted by right-wing trolls.

“People are much more likely to view things done or said to their side as harassment and to view what happens to people they don’t like as just something they should deal with, or not that bad, or maybe made up,” says Ken White, a Los Angeles–based criminal defense attorney and First Amendment litigator who blogs and tweets under the handle “Popehat.”

In the case of online harassment, narratives from the progressive tribe have dominated mainstream media coverage and advocacy. That means concerns about online harassment don’t usually extend, for instance, to outrage cycles targeting alleged bigots, even when the outrage is misplaced.

In November 2018, a Portland, Oregon, woman nicknamed “Crosswalk Cathy” had to scrub her online presence after a viral video pilloried her for calling the cops on a black couple over a bad parking job. But subsequent reports revealed that she had called a parking hotline about a car partially blocking a crosswalk while its owners, who were getting takeout food nearby, were away—meaning she had no idea they were black.

Nor do progressive concerns about harassment extend to victims of online vigilantism for ostensibly noble causes. A few years ago, in the wake of a teen girl’s highly publicized sexual assault by two high school football players in Steubenville, Ohio, many locals experienced egregious harassment by members of the “hacktivist” group Anonymous. Emails were hacked and personal data posted online. Jim Parks, the webmaster of the football team’s fan site, was accused of being the mastermind of a teen porn ring because of supposed photos of nude underage girls found in his email account. (The subjects all turned out to be adult women, and the principal hacker, Deric Lostutter, who was later identified and questioned by the FBI, issued a public apology to Parks for the “embarrassment” he had suffered.) Others—adults and teenagers—were smeared as accomplices to rape and barraged with threats. Yet when Lostutter faced possible criminal charges several months later, much progressive opinion treated him as a hero. Parks, who described Anonymous as “terrorists,” would no doubt have disagreed.

In late 2014, a few months after Hess’ influential article, the internet conflagration known as GamerGate—in which various members of the video game community battled over sexism in the gaming industry and press—broke out. Death threats eventually forced feminist video game critic Anita Sarkeesian to temporarily leave her home and cancel a lecture.

Yet progressive critics of GamerGate quickly began to conflate actual threats with mere disagreeable speech. Perspectives on Harmful Speech Online, published in 2017 by Harvard’s Berkman Klein Center for Internet & Society, includes a discussion of “sealioning,” defined as “persistent questioning [combined] with a loudly-insisted-upon commitment to reasonable debate.” (The term originates from a 2014 web comic in which a couple is pestered by a talking sea lion.) Testifying before a 2015 U.N. panel, Sarkeesian insisted that “cyberviolence” includes not only actual threats but “the day-to-day grind of, ‘You’re a liar,’ ‘You suck,’” as well as “hate videos” attacking her critiques of sexism in video games.

GamerGate spurred anti-harassment measures at many big tech companies, usually developed in close collaboration with social justice activists. A roundtable discussion of Silicon Valley’s efforts to curb online harassment, published in Wired in late 2015 and prominently featuring Twitter Vice President for Trust and Safety Del Harvey, was notable for its explicit assumption that solutions to harassment should focus on “marginalized” victims—“women, people of color, and LGBT people”—and should help progressive causes. In early 2016, Sarkeesian’s Feminist Frequency website was officially listed as part of Twitter’s Trust and Safety Council—along with about 40 other organizations, many of which have a censorious bent.

Some anti-harassment measures by social media platforms have been uncontroversial. Twitter, for instance, has made it easier to ignore hostile messages by letting users block and mute accounts or receive notifications only from known accounts. But other, more heavy-handed measures—account restrictions, suspensions, and bans—have resulted in pitched battles over double standards, political biases, and uneven enforcement.

In February 2016, Twitter abruptly perma-banned far-right blogger Robert Stacy McCain for “targeted abuse,” without ever pointing to actual abusive tweets. McCain, who has peddled racist fare and posted rants against homosexuality, is not a sympathetic figure. But unlike, say, former Breitbart writer and professional troll Milo Yiannopoulos, who joined him in Twitter exile a few months later, McCain neither instigated nor participated in online attacks. Plenty of people who found his views abhorrent nonetheless felt that his banishment was a clear sign of biased enforcement; some wondered if it was related to his vitriolic polemics against Sarkeesian, newly elevated to Twitter’s Trust and Safety Council.

McCain’s ban boosted complaints on the right about the social media platforms’ left-wing bias—a theme incessantly flogged by Breitbart but also echoed by more moderate conservatives. Around the same time, First Amendment attorney and blogger Marc Randazza reported an unscientific but plausible experiment in which he tracked both actual Twitter users and his own “decoy” handles and found that conservatives were disciplined for nasty tweets far more often than social justice or feminist accounts.

The online wars have since escalated to an even higher pitch, with Donald Trump and the Trumpian right on one side and the “Resistance” and hyper-“woke” left on the other.

The alt-right’s manic trolling, which included bombarding anti-Trump journalists with grotesque racist and anti-Semitic memes, has solidified the view that online harassment is something that comes predominantly from right-wingers and bigots. Meanwhile, on the left, new clashes over free speech and social media harassment have focused on the battles between transgender activists and radical feminists who believe that allowing trans women access to single-sex female spaces jeopardizes women’s safety. In November 2018, shortly after Twitter amended its policy to prohibit “misgendering” as a form of “hateful conduct,” Canadian feminist author and activist Meghan Murphy was banned for using male pronouns to refer to Jessica Yaniv, the notorious litigant in British Columbia who (unsuccessfully) demanded that beauticians who wax women’s pubic hair be forced to serve transgender women with intact male genitals.

Does biased enforcement on social media platforms pose a free speech problem? To free speech advocates such as White, the answer is a clear no: Twitter, Facebook, et al. are private entities with their own free association rights. “If they don’t want Nazis, or they don’t want vegans, that’s them and that’s part of their expression,” White says. If social media companies restrict too much legitimate speech, he adds, “the best remedy is for people to create their own communities or vote with their feet.”

White emphatically rejects the idea—advanced, for instance, by current Federal Communications Commission (FCC) Chairman Ajit Pai—that speech policing by social media platforms weakens essential liberal values and norms. Pai, then an FCC commissioner, told The Washington Examiner in 2016 that “there are certain cultural values that undergird the [First A]mendment that are critical for its protections to have actual meaning.” White sees it very differently: If anything, he says, voluntary speech moderation by social media companies is a “safety valve” that protects “a culture of legal free speech” by letting people have online spaces in which they don’t have to deal with verbal abuse or overt bigotry.

At the other end, people as politically different as conservative talk show host Laura Ingraham and the Electronic Frontier Foundation’s William Budington believe that big social media companies should be regulated as public utilities. Facebook and Twitter, Budington writes by email, are “closed platforms with no socially viable alternatives.” While he would much rather see them “exchange freely with newcomers” in a competitive environment, he says, the realistic option is to regulate them to ensure fair access.

Somewhere in the middle, legal scholar and author Nadine Strossen, a former head of the American Civil Liberties Union whose most recent book is Hate: Why We Should Resist It With Free Speech, Not Censorship (Oxford University Press), agrees that the internet giants are not obliged to abide by First Amendment speech protections and have the right to moderate content without government interference. But she also believes that powerful institutions have a moral responsibility to safeguard speech. “It would behoove them from a business perspective” as well, she says, since politically fraught speech restrictions will never satisfy everyone and will only invite attack from both sides. (And so they do.)

At worst, the policing of broadly defined internet harassment can cross the line into the suppression of speech by law—particularly in countries without First Amendment–type speech protections.

In England, where the Communications Act 2003 prohibits electronic messages that cause “annoyance, inconvenience or needless anxiety,” several people have faced criminal charges for alleged online harassment of transgender activists, based largely on “misgendering.” One defendant was self-described transsexual Miranda Yardley, whose case was dismissed on the first day of trial. (Yardley was accused of outing an activist’s transgender child, but evidence showed that the activist herself had frequently mentioned the child on Twitter.) Others, including prominent Irish comedian Graham Linehan, have been questioned and warned by the police for using the wrong pronouns.

In Canada, Toronto-based graphic artist Gregory Alan Elliott was prosecuted for criminal harassment over Twitter fights with two local feminists, Stephanie Guthrie and Heather Reilly—which started, ironically enough, when Elliott criticized Guthrie for proposing to “sic” internet mobs on the creator of a video game that allowed players to digitally “beat up” Sarkeesian. Elliott created a #FascistFeminists hashtag to denounce Guthrie and Reilly; they and their supporters monitored his tweets and repeatedly blasted him as a misogynistic creep. There was a lot of mutual sniping and name-calling, but even the police conceded that none of Elliott’s tweets were threatening or sexually harassing.

When Elliott was finally acquitted in January 2016 after a three-year legal battle that destroyed his business, Canadian and American feminists deplored the verdict as a failure to take online harassment seriously. (Disclosure: I participated in a fundraiser for Elliott’s defense in 2015.)

In the U.S., the First Amendment remains a strong bulwark against criminalizing speech in the war on internet harassment (despite a couple of cases in which misguided judges have issued unconstitutional restraining orders forbidding someone to “harass” a public figure by writing about him or her online). One potential weak spot, however, is Section 230 of the Communications Decency Act, which rightly exempts internet platforms from liability for user-posted content.

“That’s the thing that people are taking the most shots at, and I think people underestimate how important it is to how the internet works,” White says.

As often happens, the pressure is coming from both directions. In 2015, left-wing commentator and self-styled “social justice stormtrooper” Arthur Chu wrote an article for TechCrunch urging the repeal of Section 230 to combat the scourge of online harassment. Four years later, Sen. Josh Hawley (R–Mo.) introduced legislation to amend Section 230 by requiring large tech platforms to be “politically neutral” in moderating content, largely in response to conservative complaints about unfair enforcement of harassment policies.

How bad a problem is internet harassment? The Pew Research Center’s most recent study, in 2017, found that over 40 percent of Americans had experienced online harassment and nearly one in five had been subjected to “severe” harassment, defined as physical threats, sexual harassment, sustained harassment, or stalking. Yet the Pew report also acknowledged that its conclusions were complicated by subjective definitions. Notably, even among people classified as victims of “severe” online harassment, 28 percent did not consider their experiences to be harassment and another 21 percent were not sure. Gender makes a difference: Only 31 percent of men who had experienced online harassment as defined in the report felt that the term applied to their most recent incident, but 42 percent of women did.

This gap points to the complex gender dynamics at play—and the trouble with framing the problem as a particular burden on women. “You would have to show pretty extensive evidence about disproportionate impact and intent,” says Strossen, who cautions against the notion that “women are inherently more vulnerable to this kind of attack because of who we are.”

In fact, studies consistently show fewer women than men saying they experience internet harassment of every kind, except for sexual harassment. Counterintuitively, even so-called revenge porn—nonconsensual exposure of intimate images—may happen to men more often, according to the 2017 Pew survey. And while women in the survey were considerably more likely than men to rate their online harassment experiences as extremely or very upsetting, they were no more likely to report negative consequences ranging from mental and emotional stress to problems at work or school. More women—70 percent vs. 54 percent of men—saw online harassment as a major problem, but only 36 percent of women (compared to a quarter of men) wanted stronger laws to deal with it.

There is no question that online harassment can be terrifying, or at least severely disruptive. In 2016, a Jewish real estate agent in Whitefish, Montana, was deluged with threatening calls after the neo-Nazi site The Daily Stormer targeted her—and posted her phone number—for supposedly trying to pressure the mother of white nationalist Richard Spencer into selling her home. Last November, a Honolulu man was arrested for using both online ads and the telephone to direct hundreds of unwanted service calls and food deliveries to the home of his former girlfriend over the course of a year.

Most of what is commonly labeled “online harassment” is far less extreme, though some of it, like other human conflict, can be extremely stressful and injurious. The internet makes both a negative and a positive difference: Malicious gossip can now spread much faster and wider than before, but its targets also have many more opportunities to learn about it and counter it.

Some supposed harassment is vaguely and subjectively defined: One person’s “callout” is another’s “cyberbullying.” Even “doxing,” or public disclosure of private information, turns out to be a flexible concept: A Twitter user once accused me of doxing her because I mentioned a job listed in her public Twitter profile. Ultimately, most so-called internet harassment is simply trash talk—a minor annoyance that we can learn to handle by, as Strossen puts it, “developing resilience.”

The current panic has also made it possible to use accusations of harassment as a weapon to silence criticism—and even to harass one’s critics. In February 2016, I watched a blow-up in which a male Twitter user repeatedly asked a fairly big-name progressive female journalist to correct a tweet containing erroneous information; the journalist responded by tweeting at the man’s employer to accuse him of “hounding strange women on Twitter during work hours” (and by mobilizing her followers to dogpile him as a harasser).

At its core, the online harassment crusade is a push for political control over speech. It is happening in the midst of growing authoritarian sympathies on the right and growing hostility to First Amendment protections for “harmful” speech on the left. Legally, those protections remain as robust as ever—for now. But in these unpredictable times, it would be reckless to assume that the erosion of basic freedoms is something that can’t happen here.

