I’m still doing some research related to President Trump’s “Preventing Online Censorship” draft Executive Order, and hope to post more about this today. But for now, I wanted to post some background I put together earlier about 47 U.S.C. § 230 (enacted 1996), the statute that is so important to the order; I hope people find this helpful.
Section 230 makes Internet platforms and other Internet speakers immune from liability for material that’s posted by others (with some exceptions). That means, for instance, that
- I’m immune from liability for what is said in our comments.
- A newspaper is immune from liability for the comments readers post on its website.
- Yelp and similar sites are immune from liability for business reviews that users post.
- Twitter, Facebook, and YouTube (which is owned by Google) are immune from liability for what their users post.
- Google is generally immune from liability for its search engine results.
And that’s true whether or not the Internet platform or speaker chooses to block or remove certain third-party materials. I don’t lose my immunity just because I occasionally delete some comments (e.g., ones that contain vulgar personal insults); Yelp doesn’t lose its immunity because it sometimes deletes reviews that appear to have come from non-customers; the other entities are likewise allowed to engage in such selection and still retain immunity. Section 230 has recently become controversial, and I want to step back a bit from the current debates to explain where it fits within the traditions of American law (and especially American libel law).
Historically, American law has divided operators of communications systems into three categories.
- Publishers, such as newspapers, magazines, and broadcast stations, which themselves print or broadcast material submitted by others (or by their own employees).
- Distributors, such as bookstores, newsstands, and libraries, which distribute copies that have been printed by others. Property owners on whose property people might post things —such as bars on whose restroom walls people scrawl “For a good time, call __“—are treated similarly to distributors.
- Platforms, such as telephone companies, cities on whose sidewalks people might demonstrate, or broadcasters running candidate ads that they are required to carry.
And each category had its own liability rules:
- Publishers were basically liable for material they republished the same way they were liable for their own speech. A newspaper could be sued for libel in a letter to the editor, for instance. In practice, there was some difference between liability for third parties’ speech and for the company’s own, especially after the Supreme Court required a showing of negligence for many libel cases (and knowledge of falsehood for some); a newspaper would be more likely to have the culpable mental state for the words of its own employees. But, still, publishers were pretty broadly liable, and had to be careful in choosing what to publish. See Restatement (Second) of Torts § 578.
- Distributors were liable on what we might today call a “notice-and-takedown” model. A bookstore, for instance, wasn’t expected to have vetted every book on its shelves, the way that a newspaper was expected to vet the letters it published. But once it learned that a specific book included some specific likely libelous material, it could be liable if it didn’t remove the book from the shelves. See Restatement (Second) of Torts § 581; Janklow v. Viking Press (S.D. 1985).
- Platforms weren’t liable at all. For instance, even if a phone company learned that an answering machine had a libelous outgoing message (see Anderson v. N.Y. Telephone Co. (N.Y. 1974)), and did nothing to cancel the owner’s phone service, it couldn’t be sued for libel. Likewise, a city couldn’t be liable for defamatory material on signs that someone carried on city sidewalks (even though a bar could be liable once it learned of libelous material on its walls), and a broadcaster couldn’t be liable for defamatory material in a candidate ad.
Categorical immunity for platforms was thus well-known to American law; and indeed New York’s high court adopted it in 1999 for e-mail systems, even apart from § 230. See Lunney v. Prodigy Servs. (N.Y. 1999).
But the general pre-§ 230 tradition was that platforms were entities that didn’t screen the material posted on them, and indeed were generally (except in Lunney) legally forbidden from screening such materials. Phone companies are common carriers. Cities are generally barred by the First Amendment from controlling what demonstrators say. Federal law requires broadcasters to carry candidate ads unedited.
Publishers were free to choose what third-party work to include in their publications, and were fully liable for that work. Distributors were free to choose what third-party work to put on their shelves (or to remove from their shelves), and were immune until they were notified that such work was libelous. Platforms were not free to choose, and therefore were immune, period.
Enter the Internet, in the early 1990s. Users started speaking on online bulletin boards, such as America Online, CompuServe, Prodigy, and the like, and of course started libeling each other. This led to two early decisions: Cubby, Inc. v. CompuServe, Inc. (S.D.N.Y. 1991), and Stratton Oakmont, Inc. v. Prodigy Services Co. (N.Y. trial ct. 1995).
- Cubby held that Internet service providers (such as CompuServe) were entitled to be treated as distributors, not publishers.
- Stratton Oakmont held that only service providers that exercised no editorial control over publicly posted materials (such as CompuServe) would get distributor treatment, and that service providers that exercised some editorial control (such as Prodigy)—for instance, by removing vulgarities—would be treated as publishers.
Neither considered the possibility that an ISP could actually be neither a publisher nor a distributor but a categorically immune platform, perhaps because at the time only entities that had a legal obligation not to edit were treated as platforms. And Stratton Oakmont’s conclusion that Prodigy was a publisher because it “actively utilize[d] technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and ‘bad taste,’” is inconsistent with the fact that distributors (such as bookstores and libraries) have always had the power to select what to distribute (and what to stop distributing), without losing the limited protection that distributor liability offered.
But whether or not those two decisions were sound under existing legal principles, they gave service providers a strong incentive not to restrict speech in their chat rooms and other public-facing portions of their service. If they were to try to block or remove vulgarity, pornography, or even material that they were persuaded was libelous or threatening, they would lose their protection as distributors, and would become potentially strictly liable for material their users posted. At the time, that looked like it would be ruinous for many service providers (perhaps for all but the unimaginably wealthy, will-surely-dominate-forever America Online).
This was also a time when many people were worried about the Internet, chiefly because of porn and its accessibility to children. That led Congress to enact the Communications Decency Act of 1996, which tried to limit online porn; but the Supreme Court struck down those provisions in Reno v. ACLU (1997). Part of the Act, though, remained: 47 U.S.C. § 230, which basically made all Internet service and content providers platforms as to their users’ speech—whether or not they blocked or removed certain kinds of speech.
Congress, then, deliberately provided platform immunity to entities that (unlike traditional platforms) could and did select what user content to keep up. It did so precisely to encourage platforms to block or remove certain speech (without requiring them to do so), by removing a disincentive (loss of immunity) that would have otherwise come with such selectivity. And it gave them this flexibility regardless of how the platforms exercised this function.
And Congress deliberately chose platform treatment (categorical immunity) rather than distributor treatment (notice-and-takedown immunity). For copyright claims, it retained distributor liability (I oversimplify here), soon codified in 17 U.S.C. § 512, part of the Digital Millennium Copyright Act of 1998: If you notify Google, for instance, that some video posted on YouTube infringes copyright, Google will generally take it down—and if it doesn’t, you can sue Google for copyright infringement. Not so for libel.
So what do we make of this? A few observations:
[1.] Under current law, Twitter, Facebook, and the like are immune as platforms, regardless of whether they edit (including in a politicized way). Like it or not, this was a deliberate decision by Congress. You might prefer an “if you restrict your users’ speech, you become liable for the speech you allow” model. Indeed, that was the model accepted by the court in Stratton Oakmont. But Congress rejected this model, and that rejection stands so long as § 230 remains in its current form. (I’ll have more technical statutory details on this in a later post.)
[2.] Section 230 does indeed change traditional legal principles in some measure, but not that much. True, Twitter is immune from liability for its users’ posts, and a print newspaper is not immune from liability for letters to the editor. But the closest analogy to Twitter isn’t the newspaper (which prints only a few hundred words of third-party letters to the editor each day), but either
- the bookstore or library (which houses millions of third-party words, which it can’t be expected to screen at the outset) or
- the phone company or e-mail service.
Twitter is like the bookstore or library in that it runs third-party material without a human reading it carefully, and reserves the right to remove some material (just as a bookstore can refuse to sell a particular book, whether because it’s vulgar or unreliable or politically offensive or anything else). Twitter is like the phone company or e-mail service in that it handles a vast range of material, much more than even a typical bookstore or library, and generally keeps up virtually all of it (though it isn’t legally obligated to do so, the way a phone company would be). Section 230 is thus a broadening of the platform category, to include entities that might otherwise have been distributors.
[3.] Now of course § 230 could be amended, whether to impose publisher liability (in which case many sites, including ours, would have to regretfully close their comment sections) or distributor notice-and-takedown liability (which would impose a lesser burden, but still create pressure to over-remove material, especially when takedown demands come from wealthy, litigious people or institutions). It could also be amended to impose distributor liability on sites that restrict user speech in some situations, while retaining platform liability for sites that don’t restrict it at all. I hope to blog more in the coming days about these options, and about the specific wording of § 230. But for now, I hope this gives a good general perspective on the traditional common-law rules, and the way § 230 has amended those rules.
(Disclosure: In 2012, Google commissioned me to co-write a White Paper arguing for First Amendment protection for search engine results; but this post discusses quite different issues from those in that White Paper. I am writing this post solely in my capacity as an academic and a blogger, and I haven’t been commissioned by anyone to do it.)