Facebook boasts about new AI-based censorship


Facebook (aka Meta) has announced that it is doubling down on using “AI” to censor what the company considers “harmful content.”

When Big Tech talks about “AI” used for moderation and censorship, it really means machine learning (ML) powered automation, and time and again this technology has proved not to be up to the task.

But now Facebook says it has developed “new AI technology” stemming from “scientific breakthroughs” in the field, and that it has started using the new system, called Few-Shot Learner (FSL), which is supposed to speed up its censorship response times.

Instead of taking months to detect “harmful content,” Facebook now believes it will be able to achieve the goal within weeks, and quickly remove “evolving” examples of posts slated for censorship. The previous method meant collecting and labeling up to several million examples on which to train ML algorithms.

Now, Facebook says, the model is set up to “start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks.” And it will be applied to both images and text, in as many as 100 languages.
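Facebook describes the approach only at a high level and has not published FSL itself, but the zero-shot behavior it describes resembles publicly documented entailment-based classifiers. As a purely illustrative sketch (the Hugging Face transformers library, the public facebook/bart-large-mnli model, the labels, and the sample post below are assumptions for illustration, not Facebook's actual system), this is how a model can score text against policy labels with zero task-specific training examples:

    # Illustrative zero-shot classification sketch; NOT Facebook's FSL system.
    # Requires the open-source "transformers" library and a public NLI model.
    from transformers import pipeline

    classifier = pipeline(
        "zero-shot-classification",
        model="facebook/bart-large-mnli",  # public model, unrelated to FSL internals
    )

    post = "This shot is a DNA changer, do your own research."
    labels = ["vaccine misinformation", "incitement to violence", "benign discussion"]

    result = classifier(post, labels)
    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label}: {score:.3f}")

The only point of the sketch is that no labeled training examples specific to these categories are needed, which matches the “zero labeled examples” claim quoted above.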

Facebook was happy to go into some detail about how the new system is supposed to work technically, but the blog post doesn’t bother to define what “harmful content” is from the giant’s point of view. Some examples, but no definition, are given, inevitably revolving around Covid, vaccines, and whatever passes for disinformation on any given day.

Here, the post explains that the existing moderation algorithms would previously have missed “implied” misinformation. And Facebook appears to lean into its editorial role so far as to include “sensationalized information” in the “harmful” category.

One example given is the use of the phrase “DNA changer” in the context of vaccines; another is somebody who wonders whether another person “needs all their teeth.” Facebook considers these to be examples of inflammatory and sensationalized information, as well as of incitement to violence.

Facebook promises that its algorithmic moderation and censorship machine is now sophisticated enough to catch and ban posts containing similar wording, and that FSL will also be applied to identifying hate speech.
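The post doesn’t explain how “similar wording” would be caught, but one common, publicly documented technique is comparing sentence embeddings. A minimal sketch, assuming the open-source sentence-transformers library, the public all-MiniLM-L6-v2 model, and a made-up threshold (none of which are confirmed to be part of Facebook’s pipeline), could look like this:

    # Illustrative "similar wording" check via sentence embeddings; NOT Facebook's FSL.
    # Library, model, example posts, and threshold are all assumptions.
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small public embedding model

    known_violation = "Does that guy really need all his teeth?"
    new_post = "Maybe he could do with a few less teeth."

    embeddings = model.encode([known_violation, new_post], convert_to_tensor=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()

    # A moderation system might flag anything above a tuned threshold (value assumed).
    if similarity > 0.6:
        print(f"Flag for human review (cosine similarity = {similarity:.2f})")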

“Over time,” the giant thinks it can “enhance the performance of all of our integrity AI systems by letting them leverage a single, shared knowledge base and backbone to deal with many different types of violations.”


