Why Social Media Takes a “Ban First, Ask Questions Later” Approach

While some people are increasingly concerned about (and others cheer) the political implications of censorship on social media platforms like Facebook and Twitter, few have engaged in any analysis of how things reached this point or why social media companies seem so uniformly of one mind in designing their policies. Ultimately, it stems from the nature of the current social media business model itself.

Social media companies from Facebook and Twitter to Reddit and Tumblr earn their revenue almost exclusively from advertising space sold to put ads in front of users. Typically, these show up as “sponsored posts” or are called out more directly as advertisements in scrolling timelines and “news feeds.” The precise mechanisms are less important than the revenue model itself, however: what is being sold is access to the user, particularly their attention as measured in views and clicks.

These ads are, naturally, more valuable the more people see them. This essentially boils down to two conditions: (1) more users scrolling down more timelines and/or (2) each user spending more time on the platform. These are the primary metrics the social media business relies upon, which is why statistics like “daily active users” (DAU) or “peak concurrent users” (PCU) are bandied about to demonstrate how much “engagement” is available to advertisers. (Twitter even goes a step further and reports statistics like “timeline views per monthly active user.”) These two conditions, and these alone, play out in a sort of social media lifecycle.
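
To make the incentive concrete, here is a minimal sketch of how ad revenue scales with those two conditions. All the numbers (ads per minute, CPM, user counts) are hypothetical placeholders, not figures from any actual platform:

```python
# Illustrative only: a toy model of ad-driven revenue, using made-up numbers.
# Revenue scales with the two conditions described above:
#   (1) how many users scroll a feed each day (DAU), and
#   (2) how long each user spends scrolling.

def daily_ad_revenue(dau: int, minutes_per_user: float,
                     ads_per_minute: float = 1.5,
                     cpm_dollars: float = 5.0) -> float:
    """Estimate daily ad revenue from engagement metrics.

    cpm_dollars is the price advertisers pay per 1,000 impressions;
    it and ads_per_minute are hypothetical placeholders.
    """
    impressions = dau * minutes_per_user * ads_per_minute
    return impressions / 1000 * cpm_dollars

# Doubling either factor doubles revenue, which is why "more users"
# and then "more time per user" are pursued in turn.
print(daily_ad_revenue(dau=2_000_000, minutes_per_user=30))  # growth stage
print(daily_ad_revenue(dau=2_000_000, minutes_per_user=60))  # engagement stage
```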

In the early, heady days of a new social media platform, the most important growth driver is condition (1). They need you to sign up. They need you to get your friends to sign up. They need those friends to get their friends to sign up, and so on. Expanding the number of users, particularly the ones who use the platform regularly to check in with their friends, is the primary means of expansion for a young social media company, and it therefore does not behoove them to worry about censoring users or removing “unsavory elements” from the platform. “Community standards” are typically focused on particularly egregious behavior and enforced with a light touch.

As the platform’s network grows, however, its rate of expansion necessarily slows down. Adding ten thousand new users is not as meaningful when you have 1.5 billion DAUs as it is when you only have a few million. The means to grow revenue thus turns toward maximizing condition (2): get the users you do have to spend more time scrolling on the platform.

This is the stage at which the algorithms determining what goes into a news feed or timeline become of vast importance. They become, indeed, THE Algorithm, as everything is tweaked toward the sole purpose of getting people to spend more time on the social media platform in question. Ultimately, this comes down to appealing to users’ confirmation biases, fandoms, and major interests. The Algorithm increasingly silos people into groups so that when you scroll through your feed you see item after item that feeds your ideological preconceptions, your favorite media franchises, and so on. Social media gradually becomes less about connection with friends and family and more about segmented and regimented “identities.”
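
As a rough illustration of the kind of objective involved, here is a toy feed-ranking rule. The field names, weights, and scores below are invented for this sketch; real ranking systems are vastly more complex, but the optimization target (predicted engagement) is the same:

```python
# Illustrative only: a toy feed-ranking rule of the kind described above.
# All fields, weights, and example values are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float       # how often the user interacts with this author (0-1)
    topic_match: float           # overlap with the user's inferred interest silo (0-1)
    predicted_engagement: float  # model's guess at click/like/comment probability (0-1)

def feed_score(post: Post) -> float:
    # Weighting engagement probability by topic match is what produces
    # the siloing effect: content outside the user's cluster almost
    # never outranks content inside it.
    return (post.predicted_engagement * (0.5 + 0.5 * post.topic_match)
            + 0.2 * post.author_affinity)

posts = [
    Post(author_affinity=0.9, topic_match=0.10, predicted_engagement=0.3),  # close friend, off-interest
    Post(author_affinity=0.1, topic_match=0.95, predicted_engagement=0.8),  # stranger, in-silo
]
# The in-silo post from a stranger outranks the close friend's post,
# which is why feeds drift from "friends and family" toward "identities".
print(sorted(posts, key=feed_score, reverse=True))
```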

This mostly works for a time: people do indeed spend more time on a social media platform when it shows them things they want to see. There isn’t even a strong argument against this at this stage: though people tended to complain as these changes pushed them further from keeping up with their “real-world” relationships, they kept using the social media platforms more and more. It appears that news articles that confirm one’s biases and people saying things one agrees with are, on some level, what people want out of social media.

It is not long before social media usage reaches another stage, however, where the imperfections of the algorithms cause the experience to start breaking down. Commentary that disputes your views breaks through to your news feed. A friend of a friend with a dissenting opinion posts upsetting comments in response to your posts. The group you joined about the latest Marvel movies becomes politically charged as fans disagree about the latest casting choice or the quality of the last film and whether politics are involved. The echo chambers aren’t complete, and as a result some people start looking for alternative platforms.

“Community standards” become a great way to deal with this breakdown. Posts and comments that would potentially induce people to “jump ship” are increasingly made verboten. At scale, algorithms are built to enforce these standards, and they are inevitably designed to err on the side of removing content rather than admitting it. Removal might annoy the person who posted the content, even to the point of their leaving the service, but the asymmetry of many viewers to any one post or comment naturally pushes enforcement toward a “block first, ask questions later” style.
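
That asymmetry can be made concrete with a toy expected-cost rule. The cost figures below are hypothetical, but they show why a moderation system tuned this way blocks more aggressively as a post’s audience grows:

```python
# Illustrative only: a toy expected-cost rule showing why automated
# moderation leans toward removal. All cost figures are hypothetical.

def should_remove(p_violation: float, expected_viewers: int,
                  cost_per_offended_view: float = 0.01,
                  cost_of_wrongful_removal: float = 5.0) -> bool:
    """Remove a post when the expected cost of showing it exceeds
    the expected cost of wrongly removing it."""
    cost_if_shown = p_violation * expected_viewers * cost_per_offended_view
    cost_if_removed = (1 - p_violation) * cost_of_wrongful_removal
    return cost_if_shown > cost_if_removed

# Even at only a 10% estimated chance of violating the rules, a post
# with a large enough audience gets blocked: one annoyed poster is
# cheaper than many annoyed viewers.
print(should_remove(p_violation=0.10, expected_viewers=50))      # False: small audience
print(should_remove(p_violation=0.10, expected_viewers=10_000))  # True: large audience
```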

These “community standards” are, inevitably, created and enforced by biased individuals (or algorithms created by said biased individuals) and so naturally take on these biases and cement them into the policies. Those who share the biases quickly come to see these policies as reasonable and fair removal of highly offensive content, while those who do not see them as riddled with double standards and selective enforcement. Ultimately, depending on how “activist” these individuals are, they can even intentionally impose a desired world view through the design and enforcement of the rules.

In the end, this power over social media inevitably starts to break down as more people are censored or removed from the platform, more prominent voices seek out alternatives that are in earlier stages and don’t censor as heavily, and fragmentation makes the service less useful and interesting rather than a source of affirmation and good feelings. The natural response of a social media giant at this stage is to begin taking anticompetitive measures: pushing regulation that small companies couldn’t afford to comply with, painting censorship as desirable and necessary while working to “deplatform” competitors who “don’t do enough” (the strongest example being the tech giants’ responses to Parler in early 2021), and so forth. Whether, and for how long, this works depends on consumer and governmental responses that are still unfolding at the time of this writing.

Within the paradigm of centralized social media companies that rely on advertising, there is no real escape from this lifecycle. The good news is that these companies are not the only avenue toward what social media provides. Decentralized alternatives exist that allow for “self-hosted” networks that can overlap and communicate with each other. Messaging applications like Signal allow for the creation of groups, enabling ongoing chat rooms for small-scale networking and community. Companies like MeWe are arising that treat the users as the customers, drawing their revenue not from advertising but from subscriptions, add-ons, and so forth. These alternatives are not a cure-all, but they offer a path that isn’t bound to fall into the trap of “continuous growth” that ad-revenue social media giants must chase to maintain stock prices and revenue. This (and, importantly, not the government regulation that social media giants actually desire) provides a glimmer of hope for the future of social media.

Originally published at Disinthrallment.com.

