Facebook’s Attack on Research is Everyone’s Problem

Facebook recently banned from its platform the accounts of several New York University (NYU) researchers who run Ad Observer, an accountability project that tracks paid disinformation. This has major implications: not just for transparency, but for user autonomy and the fight for interoperable software.

Ad Observer is a free/open source browser extension used to collect Facebook ads for independent scrutiny. Facebook has long opposed the project, but its latest decision to attack Laura Edelson and her team is a powerful new blow to transparency. Worse, Facebook has spun this bullying as defending user privacy. This “privacy-washing” is a dangerous practice that muddies the waters about where real privacy threats come from. To make matters worse, the company has been gilding such excuses with legally indefensible claims about the enforceability of its terms of service.

Taken as a whole, Facebook’s sordid war on Ad Observer and accountability is a perfect illustration of how the company warps the narrative around user rights. Facebook is framing the conflict as one between transparency and privacy, implying that a user’s choice to share information about their own experience on the platform is an unacceptable security risk. This is disingenuous and wrong. 

This story is a parable about the need for data autonomy, protection, and transparency—and how Competitive Compatibility (AKA “comcom” or “adversarial interoperability”) should play a role in securing them.

What is Ad Observer?

Facebook’s ad-targeting tools are the heart of its business, yet for users on the platform they are shrouded in secrecy. Facebook collects information on users from a vast and growing array of sources, then categorizes each user with hundreds or thousands of tags based on their perceived interests or lifestyle. The company then sells the ability to use these categories to reach users through micro-targeted ads. User categories can be weirdly specific, cover sensitive interests, and be used in discriminatory ways, yet according to a 2019 Pew survey 74% of users weren’t even aware these categories exist.
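To make the targeting model concrete, here is a minimal, hypothetical sketch in TypeScript of the kind of interest-category data an ad buy can be keyed to. The category names and fields are illustrative assumptions for this post, not Facebook’s actual schema.

```typescript
// Hypothetical sketch: category names and fields are illustrative
// assumptions, not Facebook's real targeting schema.
interface AdTargetingSpec {
  interestCategories: string[]; // interest tags inferred about users
  demographics: {
    ageRange: [number, number]; // [min, max] age
    locations: string[];        // regions the campaign targets
  };
}

// An advertiser buys the ability to reach every user matching a spec like this:
const exampleCampaign: AdTargetingSpec = {
  interestCategories: ["Parenting (newborns)", "Cryptocurrency", "Away from family"],
  demographics: { ageRange: [25, 40], locations: ["Ohio", "Pennsylvania"] },
};

console.log(
  `This campaign reaches users tagged with: ${exampleCampaign.interestCategories.join(", ")}`
);
```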

To unveil how political ads use this system, ProPublica launched its Political Ad Collector project in 2017. Anyone could participate by installing a browser extension called “Ad Observer,” which copies (or “scrapes”) the ads they see along with the information provided under each ad’s “Why am I seeing this ad?” link. The tool then submits this information to the researchers behind the project, which as of last year has been run by NYU Engineering’s Cybersecurity for Democracy.

The extension never included any personally identifying information—simply data about how advertisers target users. In aggregate, however, the information shared by thousands of Ad Observer users revealed how advertisers use the platform’s surveillance-based ad targeting tools. 
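For a sense of the mechanics, here is a minimal sketch, in TypeScript, of how a consent-based ad-observation extension can work: it reads only what is already rendered in the user’s own browser and submits it with no identifying fields. The selectors, field names, and endpoint URL are illustrative assumptions, not Ad Observer’s actual code.

```typescript
// Hypothetical content-script sketch; selectors, field names, and the
// endpoint URL are placeholders, not Ad Observer's actual implementation.

interface ObservedAd {
  adText: string;        // the ad copy the user can already see
  advertiser: string;    // the page or account that paid for the ad
  targetingInfo: string; // text from the "Why am I seeing this ad?" dialog
  observedAt: string;    // ISO timestamp of when the ad was observed
}

// Collect only what is already rendered in the user's own browser window.
// "[data-ad]" is a stand-in selector for however sponsored posts are marked up;
// no account data, friend lists, or cookies are ever read.
function collectVisibleAds(): ObservedAd[] {
  const ads: ObservedAd[] = [];
  document.querySelectorAll<HTMLElement>("[data-ad]").forEach((el) => {
    ads.push({
      adText: el.innerText,
      advertiser: el.getAttribute("data-advertiser") ?? "unknown",
      targetingInfo: el.getAttribute("data-targeting") ?? "",
      observedAt: new Date().toISOString(),
    });
  });
  return ads;
}

// Submit the observed ads to a research endpoint. The payload contains
// nothing that identifies the user who saw them.
async function submitAds(ads: ObservedAd[]): Promise<void> {
  if (ads.length === 0) return;
  await fetch("https://example-research-project.org/api/ads", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(ads),
  });
}

submitAds(collectVisibleAds()).catch(console.error);
```

The key property is that nothing leaves the browser that the user could not already see and copy by hand, and nothing is sent unless the user has installed the extension and agreed to its privacy policy.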

This improved transparency is important to better understand how misinformation spreads online, and Facebook’s own practices for addressing it. While Facebook claims it “do[es]n’t allow misinformation in [its] ads”, it has been hesitant to block false political ads, and it continues to provide tools that enable fringe interests to shape public debate and scam users. For example, two groups were found to be funding the majority of antivaccine ads on the platform in 2019. More recently, the U.S. Surgeon General spoke out on the platform’s role in misinformation during the COVID-19 pandemic—and just this week Facebook stopped a Russian advertising agency from using the platform to spread misinformation about COVID-19 vaccines. Everyone from oil and gas companies to political campaigns has used Facebook to push their own twisted narratives and erode public discourse.

Revealing the secrets behind this surveillance-based ecosystem to public scrutiny is the first step in reclaiming our public discourse. Content moderation at scale is notoriously difficult, and it’s unsurprising that Facebook has failed again and again. But given the right tools, researchers, journalists, and members of the public can monitor ads themselves to shed light on misinformation campaigns. Just in the past year Ad Observer has yielded important insights, including how political campaigns and major corporations buy the right to propagate misinformation on the platform.

Facebook does maintain its own “Ad Library” and research portal. The former has been unreliable and difficult to use, and offers no information about targeting based on user categories; the latter comes swathed in secrecy and requires researchers to allow Facebook to suppress their findings. Facebook’s attacks on the NYU research team speak volumes about the company’s real “privacy” priority: defending the secrecy of its paying customers—the shadowy operators pouring millions into paid disinformation campaigns.

This isn’t the first time Facebook has attempted to crush the Ad Observer project. In January 2019, Facebook made critical changes to the way its website works, temporarily preventing Ad Observer and other tools from gathering data about how ads are targeted. Then, on the eve of the hotly contested 2020 U.S. national elections, Facebook sent a dire legal threat to the NYU researchers, demanding the project cease operation and delete all collected data. Facebook took the position that any data collection through “automated means” (like web scraping) is against the site’s terms of service. But hidden behind the jargon is the simple truth that “scraping” is no different from a user copying and pasting. Automation here is just a matter of convenience, with no unique or additional information being revealed. Any data collected by a browser plugin is already, rightfully, available to the user of the browser. The only potential issue with a plugin “scraping” data is if it happens without the user’s consent, which has never been the case with Ad Observer.

Another issue EFF emphasized at the time is that Facebook has a history of dubious legal claims that such violations of service terms are violations of the Computer Fraud and Abuse Act (CFAA). That is, if you copy and paste content from any of the company’s services in an automated way (without its blessing), Facebook thinks you are committing a federal crime. If this outrageous interpretation of the law were to hold, it would have a debilitating impact on the efforts of journalists, researchers, archivists, and everyday users. Fortunately, a recent U.S. Supreme Court decision dealt a blow to this interpretation of the CFAA.  

Last time around, Facebook’s attack on Ad Observer generated enough public backlash that it seemed Facebook was going to do the sensible thing and back down from its fight with the researchers. Last week, however, it turned out that this was not the case.

Facebook’s Bogus Justifications 

Facebook’s Product Management Director, Mike Clark, published a blog post defending the company’s decision to ban the NYU researchers from the platform. Clark’s message mirrored the rationale offered back in October by then-Advertising Integrity Chair Rob Leathern (who has since left for Google). These company spokespeople have made misleading claims about the privacy risk that Ad Observer posed, and then used these smears to accuse the NYU team of violating Facebook users’ privacy. The only thing that was being “violated” was Facebook’s secrecy, which allowed it to make claims about fighting paid disinformation without subjecting them to public scrutiny. 

Secrecy is not privacy. A secret is something no one else knows. Privacy is when you get to decide who knows information about you. Since Ad Observer users made an informed choice to share the information about the ads Facebook showed them, the project is perfectly compatible with privacy. In fact, the project exemplifies how to do selective data sharing for public interest reasons in a way that respects user consent.

It’s clear that Ad Observer poses no privacy risks to its users. Information about the extension is available in an FAQ and privacy policy, both of which accurately and comprehensively describe how the tool works. Mozilla thoroughly and independently reviewed the extension’s open source code before recommending it to users. That’s something Facebook itself could have done, if it were genuinely worried about what information the plugin was gathering.

In Clark’s post defending Facebook’s war on accountability, he claimed that the company had no choice but to shut down Ad Observer, thanks to a “consent decree” with the Federal Trade Commission (FTC). This order, imposed after the Cambridge Analytica scandal, requires the company to strictly monitor third-party apps on the platform. This excuse was obviously not true, as a casual reading of the consent decree makes clear. If there was any doubt, it was erased when the FTC’s acting director of the Bureau of Consumer Protection, Sam Levine, published an open letter to Mark Zuckerberg calling this invocation of the consent decree “misleading,” adding that nothing in the FTC’s order bars Facebook from permitting good-faith research. Levine added, “[W]e hope that the company is not invoking privacy – much less the FTC consent order – as a pretext to advance other aims.” This shamed Facebook into a humiliating climbdown in which it admitted that the consent decree did not force it to disable the researchers’ accounts.

Facebook’s anti-Ad Observer spin relies on both overt and implicit tactics of deception. It’s not just false claims about FTC orders—there’s also subtler work, like publishing a blog post about the affair entitled “Research Cannot Be the Justification for Compromising People’s Privacy,” which invoked the infamous Cambridge Analytica scandal of 2018. This seeks to muddy any distinction between the actions of a sleazy for-profit disinformation outfit and those of a scrappy band of academic transparency researchers.

Let’s be clear: Cambridge Analytica is nothing like Ad Observer. Cambridge Analytica did its dirty work by deceiving users, tricking them into using a “personality quiz” app that siphoned away both their personal data and that of their Facebook “friends,” using a feature provided by the Facebook API. This information was packaged and sold to political campaigns as a devastating, AI-powered, Big Data mind-control ray, and saw extensive use in the 2016 U.S. presidential election. Cambridge Analytica gathered this data and attempted to weaponize it by using Facebook’s own developer tools (tools that were already known to leak data), without meaningful user consent and with no public scrutiny. The slimy practices of the Cambridge Analytica firm bear absolutely no resemblance to the efforts of the NYU researchers, who have prioritized consent and transparency in all aspects of their project.

An Innovation-Killing Pretext

Facebook has shown that it can’t be trusted to present the facts about Ad Observer in good faith. The company has conflated Cambridge Analytica’s deceptive tactics with NYU’s public interest research; it’s conflated violating its terms of service with violating federal cybersecurity law; and it’s conflated the privacy of its users with secrecy for its paying advertisers. 

Mark Zuckerberg has claimed he supports an “information fiduciary” relationship with users. This is the idea that companies should be obligated to protect the user information they collect. That would be great, but not all fiduciaries are equal. A sound information fiduciary system would safeguard users’ true control over how they share this information in the first place. For Facebook to be a true information fiduciary, it would have to protect users from unnecessary data collection by first parties like Facebook itself. Instead, Facebook says it has a duty to protect user data from the users themselves.

Even some Facebookers are disappointed with their company’s secrecy and anti-accountability measures. According to a New York Times report, there’s a raging internal debate about transparency after Facebook dismantled the team responsible for its content-tracking tool CrowdTangle. According to interviewees, there’s a sizable internal faction at Facebook that sees the value of sharing how the platform operates (warts and all), and a cadre of senior execs who want to bury this information. (Facebook disputes this.) Combine this with Facebook’s attack on public research, and you get a picture of a company that, under the guise of privacy, wants to burnish its reputation by hiding its sins from the billions of people who rely on it, instead of owning those mistakes and making amends for them.

Facebook’s reputation-laundering spills out into its relationship with app developers. The company routinely uses privacy-washing as a pretext to kill external projects, a practice so widespread it’s got a special name: “getting Sherlocked.” Last year EFF weighed in on another case where Facebook abused the CFAA to demand that the “Friendly browser” cease operation. Friendly allows users to control the appearance of Facebook while they use it, and doesn’t collect any user data or make use of Facebook’s API. Nevertheless, the company sent dire legal threats to its developers, which EFF countered in a letter that demolished the company’s legal claims. This pattern played out again recently with the open source Instagram app Barinsta, which received a cease and desist notice from the company.

When developers go against Facebook, the company uses all of its leverage as a platform to respond full tilt. Facebook doesn’t just kill your competing project: it deplatforms you, burdens you with legal threats, and bricks any of your hardware that requires a Facebook login.

What to do

Facebook is facing a vast amount of public backlash (again!). Several U.S. senators sent Zuckerberg a letter asking him to clarify the company’s actions. Over 200 academics signed a letter in solidarity with Laura Edelson and the other banned researchers. One simple remedy is clearly necessary: Facebook must reinstate all of the accounts of the NYU research team. Management should also listen to the workers at Facebook calling for greater transparency, and cease all CFAA legal threats against not just researchers, but anyone accessing their own information in an automated way.

This Ad Observer saga provides even more evidence that users cannot trust Facebook to act as an impartial and publicly accountable platform on its own. That’s why we need tools to take that choice out of Facebook’s hands. Ad Observer is a prime example of competitive compatibility—grassroots interoperability without permission. To prevent further misuse of the CFAA to shut down interoperability, courts and legislators must make it clear that anti-hacking laws don’t apply to competitive compatibility. Furthermore, platforms as big as Facebook should be obligated to loosen their grip on user information, and open up automated access to basic, useful data that users and competitors need. Legislation like the ACCESS Act would do just that, which is why we need to make sure it delivers.

We need the ability to alter Facebook to suit our needs, even when Facebook’s management and shareholders try to stand in the way.

