More from the free speech and social media platforms symposium in the first issue of our Journal of Free Speech Law; you can read the whole article (by Michigan State law professor Adam Candeub) here, but here’s the abstract:
Section 230 of the Communications Decency Act gives internet platforms legal protection for content moderation. Even though the statute is 25 years old, courts have not clearly stated which provision within section 230 protects content moderation. Some say section 230(c)(1), others section 230(c)(2). But section 230(c)(1) speaks only to liability arising from third-party content, codifying common carriers’ liability protection for delivering messages.
And while section 230(c)(2) addresses content moderation, its protections extend only to content moderation involving certain types of speech. Content moderation decisions made for reasons not specified in section 230(c)(2), such as those based on material being deemed "hate speech," "disinformation," or "incitement," stand outside section 230's protections. More important, because section 230(c)(2) regulates both First Amendment-protected and unprotected speech, it raises constitutional concerns, though they may not be fatal.