
Platform Liability Trends Around the Globe: Moving Forward


This is the final installment in a four-part blog series surveying global intermediary liability laws. You can read the additional posts in the series here:

As this blog series has sought to show, increased attention to issues like hate speech, online harassment, misinformation, and the amplification of terrorist content continues to prompt policymakers around the globe to adopt stricter regulations for online speech, including new responsibilities for online intermediaries.

EFF has long championed efforts to promote freedom of expression and to create an enabling environment for innovation in a manner that balances the needs of governments and other stakeholders. We recognize the delicate balance to be struck between addressing the very real problem of platforms hosting and amplifying harmful content and activity, and giving those platforms enough protection that they are not incentivized to remove protected user speech.

Today, as global efforts to change long-standing intermediary liability laws continue, we approach new platform regulation proposals with three primary questions in mind: Are intermediary liability regulations the problem? Will the proposed solution fix that problem? And can the inevitable collateral effects be mitigated?

We are hopeful that policymakers will shift in the right direction on internet policy and affirm the important role of immunity for online intermediaries in fostering an enabling environment for users’ freedom of expression. We outline our recommendations on how to do so below.

Our Recommendations

Online Intermediaries Should Not Be Held Liable for User Content

Intermediaries are vital pillars of internet architecture, and fundamental drivers of free speech, as they enable people to share content with audiences at an unprecedented scale. Immunity from liability for third-party content plays a vital role in propelling the success of online intermediaries. This is one of the fundamental principles that we believe must continue to underpin internet regulation: Platforms should not be held responsible for the ideas, images, videos, or speech that users post or share online. 

Regulators should ensure that online intermediaries continue to benefit from comprehensive liability exemptions and are not held liable for content provided by users, provided they are not involved in co-creating or modifying that content in a way that substantially contributes to its illegality. Any additional obligations must be proportionate and must not curtail free expression and innovation.

No Mandated Content Restrictions Without an Order by a Judicial Authority

Where governments choose to impose positive duties on online platforms, any rules governing intermediary liability must be provided by law and be precise, clear, and accessible. Such rules must follow due process and respect the principle that it is up to independent judicial authorities to assess the illegality of content and to decide whether it should be restricted. Most importantly, intermediaries should not be held liable for choosing not to remove content simply because they received a private notification from a user. In jurisdictions where knowledge of illegal content is relevant to the liability of online intermediaries, regulators should follow the principle that intermediaries obtain actual knowledge of illegality only when they are presented with an order by a court or similar authority that operates with sufficient safeguards for independence, autonomy, and impartiality.

No Mandatory Monitoring or Filtering

Obligations for platforms to monitor what users share online have a chilling effect on speech: users change their behavior and abstain from communicating freely when they know they are being actively observed. Such obligations also undermine users’ privacy rights and their right to private life. Policymakers should therefore not require digital service providers to affirmatively monitor their platforms or networks for illegal content that users post, transmit, or store, nor impose a general obligation on platforms to actively monitor facts or circumstances indicating illegal activity by users. The use of automated filters that evaluate the legality of third-party content or prevent the (re)upload of illegal content should never be mandated, especially since filters are prone to error and tend to over-block legitimate material. By the same token, no liability should be based on an intermediary’s failure to detect illegal content, as this would incentivize platforms to filter, monitor, and screen user speech.

Limit the Scope of Takedown Orders

Recent cases have demonstrated the perils of worldwide content takedown orders. In Glawischnig-Piesczek v Facebook Ireland, the Court of Justice of the EU held that a court of a Member State can order a platform not only to take down defamatory content globally, but also to take down identical or “equivalent” material. This was a terrible outcome: content deemed illegal in one state may be clearly lawful in many others. And by referring to “automated technologies” to detect similar language, the court opened the gates to filter-based monitoring, which is notoriously inaccurate and prone to over-blocking legitimate material.

Reforms to internet legislation are an opportunity to acknowledge that the internet is global and that takedown orders of global reach are deeply unjust and impair users’ freedom. New rules should ensure that court orders, and particularly injunctions, are not used to superimpose the laws of one country on every other state in the world. Takedown orders should be limited to the content at issue and based on the principles of necessity and proportionality in terms of geographic scope. Otherwise, one country’s government could dictate what residents of other countries can say, see, or share online, leading to a “race to the bottom” toward an ever more restrictive and splintered global internet. A worthwhile effort to limit the scope of takedown orders appears in the proposal for the EU’s Digital Services Act, which provides that court orders should not exceed what is strictly necessary to achieve their objective and must respect the Charter of Fundamental Rights and general principles of international law.

Regulate Processes, Rather than Speech

Instead of holding platforms accountable for content shared by users or forcing platforms to scan every piece of content uploaded to their servers, modern platform regulation should focus on setting out standards for platforms’ processes, such as changes to terms of service and algorithmic decision-making. Accountable governance, such as notifications and explanations to users whenever platforms change their terms of service, can help reduce the information asymmetry between users and powerful gatekeeper platforms. Users should be empowered to better understand how they can notify platforms about both problematic content and problematic takedown decisions, and should be informed about how content moderation works on large platforms. Privacy by default, improved transparency, and procedural safeguards, such as due process and effective redress mechanisms for removal or blocking decisions, can help to ensure the protection of fundamental rights online.

Moving Forward in the Right Direction

We strongly believe that imposing heavy-handed liability on intermediaries for the content shared by their users hinders the right to freedom of expression. That doesn’t mean we shouldn’t consider proposals to reform existing regulatory regimes and introduce new legislative elements that help address the fundamental flaws of the current online ecosystem.

For many users, being online means being locked into a few powerful platforms, nonconsensually tracked across the Web, with their ability to access and share information left at the mercy of algorithmic decision-making systems that curate their online lives. Policymakers should put users back in control of their online experiences rather than give the few large platforms that have monopolized the digital space more power, or even oblige them, to police expression and to arbitrate access to content, knowledge, and goods and services. 

Adjustments to internet legislation offer policymakers an opportunity to examine existing rules and to make sure that the internet remains an open platform for free expression. While the trend toward stricter liability for online intermediaries dismays us, it has also reinvigorated our commitment to advocating for regulatory frameworks that promote freedom of expression and innovation.



