How LGBTQ+ Content is Censored Under the Guise of “Sexually Explicit”

The latest news from Apple, that the company will open up a backdoor in its efforts to combat child sexual abuse material (CSAM), has us rightly concerned about the privacy impacts of such a decision.

As always, some groups will be potentially subject to more harm than others. One feature of Apple’s new plan is designed to notify minor iPhone users enrolled in a Family Plan when they receive, or attempt to send, a photo via iMessage that Apple’s machine learning classifier deems “sexually explicit.” If the child is under 13 years of age and chooses to send or receive the content, their parent will be notified and the image saved to the parental controls section of their phone for the parent to view later. Children between 13 and 17 will also receive a warning, but the parent will not be notified.
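To make the mechanics concrete, here is a minimal sketch of the decision flow as Apple has described it. Every name below is hypothetical; this is the announced policy restated as code, not Apple’s API.

```swift
// Hypothetical restatement of the announced policy; these types and
// names are illustrative stand-ins, not Apple's actual API.

enum ScanOutcome {
    case allowSilently           // the classifier did not flag the image
    case warnChildOnly           // ages 13-17: warning shown, no parent notification
    case warnChildNotifyParent   // under 13: proceeding notifies the parent
                                 // and saves the image for later parental review
}

func outcome(imageFlagged: Bool, childAge: Int) -> ScanOutcome {
    // Everything downstream depends on the classifier's verdict here.
    guard imageFlagged else { return .allowSilently }
    if childAge < 13 {
        return .warnChildNotifyParent
    } else if childAge <= 17 {
        return .warnChildOnly
    }
    // Adults are outside the scope of the Family Plan feature.
    return .allowSilently
}
```

Note that the entire flow hinges on the classifier’s verdict in that first check, and that verdict is where the trouble starts.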

While this feature is intended to protect children from abuse, Apple doesn’t seem to have considered the ways in which it could enable abuse. The feature assumes that parents are benevolent protectors, but for many children that isn’t the case: a parent can also be the abuser, or may hold more traditional or restrictive ideas about acceptable exploration than their child. And using machine learning classifiers to decide what is or is not sexual in nature may very well result in children being shamed or discouraged from seeking out information about their sexuality.

As Apple’s product FAQ explains, the feature will use on-device machine learning to determine which content is sexually explicit—machine learning that is proprietary and not open to public or even civil society review.
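Apple hasn’t published how its classifier works, but for context, on-device image classification on iOS is typically built with Core ML and the Vision framework. The sketch below is purely illustrative: ExplicitContentModel and the “explicit” label are hypothetical stand-ins, the confidence threshold is an invented parameter, and nothing here reflects Apple’s actual implementation.

```swift
import CoreML
import Vision

// Illustrative sketch of on-device classification as commonly done with
// Core ML + Vision. "ExplicitContentModel" is a hypothetical placeholder;
// Apple's real classifier is proprietary and not open to review.
func isFlaggedAsExplicit(_ image: CGImage, threshold: Float = 0.9) throws -> Bool {
    // Wrap a (hypothetical) compiled Core ML model for use with Vision.
    let mlModel = try ExplicitContentModel(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: visionModel)

    // Run the classifier entirely on-device against the image.
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])

    guard let observations = request.results as? [VNClassificationObservation] else {
        return false
    }
    // The flagging threshold is itself a policy choice that outside
    // reviewers cannot inspect or audit.
    return observations.contains { $0.identifier == "explicit" && $0.confidence >= threshold }
}
```

The point of the sketch is that both the model’s labels and the threshold at which an image is flagged are choices baked into the system, invisible to the public and to civil society.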

The trouble with this is that there’s a long history of non-sexual content—and particularly, LGBTQ+ content—being classified by machine learning algorithms (as well as human moderators) as “sexually explicit.” As Kendra Albert and Afsaneh Rigot pointed out in a recent piece for Wired, “Attempts to limit sexually explicit speech tend to (accidentally or on purpose) harm LGBTQ people more.”

From filtering software company Netsweeper to Google News, Tumblr, YouTube, and PayPal, tech companies don’t have a good track record when it comes to differentiating between pornography and artistic, educational, or community-oriented content. A recent paper from scholar Ari Ezra Waldman demonstrates this, arguing that “content moderation for ‘sexual activity’ is an assemblage of social forces that resembles oppressive anti-vice campaigns from the middle of the last century in which ‘disorderly conduct’, ‘vagrancy’, ‘lewdness’, and other vague morality statutes were disproportionately enforced against queer behavior in public.”

On top of that, Apple itself has a history of over-defining “obscenity.” Apple TV has limited content for being too “adult,” and its App Store has placed prohibitions on sexual content—as well as on gay hookup and dating apps in certain markets, such as China, Saudi Arabia, the United Arab Emirates, and Turkey.

Thus far, Apple says that its new feature is limited to “sexually explicit” content, but as these examples show, that’s a broad area that—without clear parameters—can easily catch important content in the net.

Right now, Apple’s intention is to roll out this feature only in the U.S.—which is good, at least, because different countries and cultures hold widely different beliefs about what is and is not sexually explicit.

But even in the U.S., no company is going to satisfy everyone when it comes to defining, via an algorithm, which photos are sexually explicit. Are breast cancer awareness images sexually explicit? Facebook has said so in the past. Are shirtless photos of trans men who’ve had top surgery sexually explicit? Instagram isn’t sure. Is a photo documenting sexual or physical violence or abuse sexually explicit? In cases like these, the answers aren’t clear, and Apple wading into the debate, tattling on children who may share or receive the images, will likely only produce more frustration and more confusion.

