
Three Political Ad Policies and No Good Answers

Observations of election manipulation on social media, and fears about how it could get worse in future elections, are prompting several platforms to take action. Twitter’s new political ads policy goes into effect today, just days after Google announced changes to its own ads policy and in the midst of potential adjustments to the policy Facebook announced last month.

Social media advertising, because it is so inexpensive compared to broadcast and print, is a valuable tool for campaigns and causes with less funding. Nevertheless, these giant platforms can and should make changes to prevent the use of their advertising tools from distorting the political process.

But moderating content—including advertising—at scale is a difficult and unsolved problem. In appointing themselves the judges of who and what qualifies as “political,” all three of these companies will have to make impossible decisions that will inevitably result in mistakes and inconsistencies. Given the history of these companies’ decisions regarding content moderation, their mistakes will most likely harm already-marginalized groups, such as racial minorities and LGBTQIA people.

We’ve written before about how Facebook’s decision to exempt political advertising from fact-checking and hate speech guidelines imposes harmful power imbalances on users. This post will focus on Twitter’s and Google’s new policies, and how all three compare.

Twitter: Political Ads Ban and Limited Microtargeting

Twitter’s new rules on political ads boil down to two main policies: banning ads for political campaigns and candidates, and limiting microtargeting for general issue or “cause-based” ads.

The microtargeting limits apply to an almost impossibly broad range of advertisements adjacent to the banned campaign content: any ads that “educate, raise awareness, and/or call for people to take action in connection with civic engagement, economic growth, environmental stewardship, or social equity causes.”

Instead of outright banning this category of advertisement, Twitter's new policy restricts the kinds of targeting advertisers can use. Features like Tailored Audiences, Twitter's tool to let an advertiser target Twitter users based on their own marketing list, are off the table. Instead, advertisers can only target based on limited geographic location, keywords, and interests. For example, using political affiliation (e.g. keywords like "conservative" or "liberal") to target cause-based ads will no longer be allowed.
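To make the restriction concrete, here is a minimal sketch (not Twitter's actual enforcement code; the option and keyword names are assumptions for illustration) of how a cause-based ad's targeting request might be screened against these rules:

```python
# Hypothetical screening of a cause-based ad's targeting options, modeled on
# the policy described above. All names here are illustrative assumptions.

PROHIBITED_OPTIONS = {"tailored_audiences"}           # advertiser-uploaded lists
ALLOWED_OPTIONS = {"geo", "keywords", "interests"}    # the only permitted axes
POLITICAL_AFFILIATION_KEYWORDS = {"conservative", "liberal"}

def violations(targeting: dict) -> list[str]:
    """Return a list of policy problems for a cause-based ad's targeting."""
    problems = []
    for option in targeting:
        if option in PROHIBITED_OPTIONS:
            problems.append(f"prohibited option: {option}")
        elif option not in ALLOWED_OPTIONS:
            problems.append(f"unknown option: {option}")
    # Keyword targeting is allowed in general, but not as a proxy for
    # political affiliation.
    for kw in targeting.get("keywords", []):
        if kw.lower() in POLITICAL_AFFILIATION_KEYWORDS:
            problems.append(f"political-affiliation keyword: {kw}")
    return problems
```

The sketch shows why enforcement is hard in practice: the policy hinges on open-ended lists like "political affiliation keywords," which a real system would have to maintain and interpret at scale.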

Google: Limited Microtargeting and Continued Fact-Checking

Google’s latest announcement will affect ads on Google search, YouTube, and across the web via its “display ad” network.

Google is continuing to allow election ads, but limiting how they can be microtargeted. Political advertisers will only be able to target based on age, gender, and postal code, and will no longer be able to use public voter records or affiliations like “left-leaning, right-leaning, and independent.” They will also be barred from using Google’s Customer Match, a tool for advertisers to combine their email lists or other information with Google’s user data (similar to Twitter’s Tailored Audiences and Facebook’s Custom Audiences). Emails to campaign staffers suggest that other microtargeting tools will also be prohibited.
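Google's restriction reduces to a short whitelist. A hedged sketch (not Google's actual enforcement code; field names are assumptions for illustration):

```python
# Hypothetical check of Google's stated rule: election ads may be targeted
# only by age, gender, and postal code. Field names are illustrative.

ALLOWED_ELECTION_TARGETING = {"age", "gender", "postal_code"}

def disallowed_fields(requested: set[str]) -> set[str]:
    """Return the targeting fields an election ad is not allowed to use."""
    return requested - ALLOWED_ELECTION_TARGETING
```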

In a not-so-subtle criticism of Facebook's decision not to fact-check political ads, Google's announcement also clarified its general ads policy against false claims, including "'deep fakes' (doctored and manipulated media), misleading claims about the census process, and ads or destinations making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process."

Implementation Problems

While the Political Science 101 student in all of us could argue that all advertising is political, Twitter, Google, and Facebook are still in the position of deciding who and what counts as political on their platforms, with the increased scrutiny that new policies and a major U.S. election bring. In the U.S., FCC guidelines for broadcasters to identify "legally qualified candidates for public office" are available—but the scope of decision-making that the platforms' political ads policies require is much broader.

Twitter’s policy puts a couple of these additional decisions front and center: an exemption for “news publishers,” and a certification process for cause-based advertisers.

Twitter offers no information about the latter, which would affect non-profits, community groups, and other awareness-raising or advocacy organizations. The relevant page in its policies only says to check back once enforcement of the policy has begun.

The policy for news publishers, however, is live, and has serious problems. For a publication to qualify as a “news publisher” and thus be allowed to run ads referring to political candidates and active legislation, its website has to have a minimum of 200,000 unique monthly visitors in the U.S. This knocks out local and independent news sources across the country, from Oregon’s Bend Source (110,000 monthly unique visitors) to Arkansas’s Pine Bluff Commercial (75,000) to Northern California’s Jewish news weekly J (110,000). Twitter’s requirement that these website visitors be in the U.S. also raises the question of how Twitter will regard news publishers in the rest of the world.
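The traffic bar is mechanical, which makes its effect on small outlets easy to see. A minimal sketch, using the threshold and the outlet figures cited above (this is illustrative, not Twitter's eligibility code):

```python
# Twitter's stated "news publisher" threshold: at least 200,000 unique
# monthly U.S. visitors. Outlet figures below are the ones cited in the text.

NEWS_PUBLISHER_MIN_US_VISITORS = 200_000

def qualifies(monthly_us_visitors: int) -> bool:
    """Does a publication clear Twitter's traffic bar for the exemption?"""
    return monthly_us_visitors >= NEWS_PUBLISHER_MIN_US_VISITORS

outlets = {
    "Bend Source": 110_000,
    "Pine Bluff Commercial": 75_000,
    "J": 110_000,
}
excluded = [name for name, visitors in outlets.items() if not qualifies(visitors)]
```

All three outlets fall under the line, illustrating how a single national traffic number excludes local and independent publications wholesale.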

It Is Impossible for a Few Tech Giants to Govern Speech Appropriately

Twitter, Google, and Facebook are trying approaches across the spectrum: from Twitter’s combined ban and limited microtargeting, to Google’s limited microtargeting and fact-checking, to Facebook’s explicit exception for political ads from its content policies. It’s not worth comparing which one is better or worse, because each of these different policies has the potential to worsen the problem it is trying to solve.

On one end, to the extent that cheaper online advertising has been a way to level the playing field among campaigns with different resources, Twitter's ban on political campaign ads may blunt one tool for civil society and upstart candidates to compete with well-funded campaigns.

On the other, Facebook’s policy also exacerbates power dynamics by applying more empowering rules to advertisers who, as candidates in a political campaign, already have the ability to reach wide audiences.

On massive platforms with global influence and algorithmic content delivery, content moderation rules tend to hit marginalized groups harder. Mike Masnick sums it up well in the introduction to his “impossibility theorem” for content moderation:

Content moderation at scale is impossible to do well. Importantly, this is not an argument that we should throw up our hands and do nothing. Nor is it an argument that companies can’t do better jobs within their own content moderation efforts. But I do think there’s a huge problem in that many people—including many politicians and journalists—seem to expect that these companies not only can, but should, strive for a level of content moderation that is simply impossible to reach. 

Alternative approaches could come from smaller platforms and communities with more manageable volumes of content. Even better, a healthy ecosystem of many small platforms would let users choose the moderation approaches that work for them. More platforms could experiment with tools to empower communities and individuals to curate their experiences and protect themselves from manipulative advertising, such as letting users designate their own trusted fact-checking resources.

The dueling political ads policies from Twitter, Google, and Facebook give us a taste of what it means to have different approaches to choose from. But as long as we’re stuck with just a few tech-giant approved monocultures, content moderation remains an impossible task.

This post has been republished with permission from a publicly-available RSS feed found on EFF. The views expressed by the original author(s) do not necessarily reflect the opinions or views of The Libertarian Hub, its owners or administrators.


About The Author

Gennie Gebhart

The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. We work to ensure that rights and freedoms are enhanced and protected as our use of technology grows.
