YouTube Algorithms Don’t Turn Unsuspecting Masses Into Extremists, New Study Suggests


“Over years of reporting on internet culture, I’ve heard countless versions of [this] story: an aimless young man—usually white, frequently interested in video games—visits YouTube looking for direction or distraction and is seduced by a community of far-right creators,” wrote Kevin Roose for The New York Times back in 2019. “Some young men discover far-right videos by accident, while others seek them out. Some travel all the way to neo-Nazism, while others stop at milder forms of bigotry.”

Never one to dial back alarmism, The Daily Beast put out a headline in 2018 calling YouTube’s algorithm a “far-right radicalization factory” and claimed that an “unofficial network of fringe channels is pulling YouTubers down the rabbit hole of extremism.” Even MIT Technology Review sounded the alarm in 2020 about how “YouTube’s algorithm seems to be funneling people to alt-right videos.”

A new study by City University of New York’s Annie Y. Chen, Dartmouth’s Brendan Nyhan, University of Exeter’s Jason Reifler, Stanford’s Ronald E. Robertson, and Northeastern’s Christo Wilson complicates these popular narratives. “Using paired behavioral and survey data provided by participants recruited from a representative sample (n=1,181), we show that exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment,” write the researchers. “These viewers typically subscribe to these channels (causing YouTube to recommend their videos more often) and often follow external links to them. Contrary to the ‘rabbit holes’ narrative, non-subscribers are rarely recommended videos from alternative and extremist channels and seldom follow such recommendations when offered.”

The researchers were precise in their definition of what constitutes a "rabbit hole." They also distinguished between "alternative" content (Steven Crowder, Tim Pool, Candace Owens) and "extremist" content (Stefan Molyneux, David Duke, Mike Cernovich). Methodological details are laid out in the study itself.

Less than 1 percent of those studied (0.6 percent, to be exact) were responsible for an astonishing 80 percent of the watch time for channels deemed extremist. And only 1.7 percent of participants were responsible for 79 percent of the watch time for channels deemed alternative. These study participants typically found those videos by having watched similar ones previously.

But how many people studied by the researchers watched innocuous videos and were led by algorithms from those videos toward extremist content? This occurred in 0.02 percent of the total video visits that were studied, for a grand total of 108 times. If you apply an even stricter definition of rabbit holes and exclude cases where a viewer was subscribed to a similar extremist or alternative channel and followed an algorithmic suggestion to a similar one, rabbit holes become even more rare, comprising 0.012 percent of all video visits.
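To make those concentration figures concrete, here is a minimal, hypothetical sketch (not the researchers' code, and the numbers are made up) of how one might compute what share of viewers accounts for a given share of total watch time from per-user viewing logs:

```python
# Hypothetical illustration only: measure how concentrated watch time is,
# i.e., what fraction of viewers accounts for a target share of total minutes.
# The toy data below is invented and does not come from the study.

def concentration(watch_minutes, target_share=0.80):
    """Return the fraction of viewers needed to cover `target_share`
    of total watch time, counting the heaviest viewers first."""
    total = sum(watch_minutes)
    covered = 0.0
    for i, minutes in enumerate(sorted(watch_minutes, reverse=True), start=1):
        covered += minutes
        if covered >= target_share * total:
            return i / len(watch_minutes)
    return 1.0

# Toy example: 1,000 viewers, a handful of heavy watchers and many light ones.
viewers = [500.0] * 6 + [0.5] * 994
print(f"{concentration(viewers):.1%} of viewers account for 80% of watch time")
```

Run on the toy data above, the sketch reports that 0.6% of viewers cover 80% of the watch time, which is the shape of the skew the study describes: a small core of heavy watchers, not a broad audience, drives nearly all exposure.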

Basically, the narrative that hordes of unwitting YouTube browsers are suddenly stumbling across far-right extremist content and becoming entranced by it does not hold much water.

Instead, it is people who were already choosing to watch fringe videos who might be fed extremist content by the algorithm. It’s the people who were already Alex Jones-curious, per their own browsing habits, who receive content about Pizzagate and how the chemicals in the water supply are turning the frogs gay and who veer toward Cernovich or Duke types. (Duke himself was banned from YouTube two years ago.)

Plenty of tech critics and politicians fearmonger about algorithmic rabbit holes, invoking these powerful algorithms to explain why people get radicalized and calling for regulatory crackdowns to prevent this problem. But this new study provides decent evidence that the rabbit hole concern is overblown, and, possibly, that the steps YouTube took to tweak its algorithm circa 2019 have led to the relative absence of rabbit holes. The changes "do appear to have affected the propagation of some of the worst content on the platform, reducing both recommendations to conspiratorial content on the platform and sharing of YouTube conspiracy videos on Twitter and Reddit," write the researchers.

“Our research suggests that the role of algorithms in driving people to potentially harmful content is seemingly overstated in the post-2019 debate about YouTube,” Nyhan tells Reason, “but people with extreme views are finding extremist content on the platform. The fact that they are often seeking such content out does not mean YouTube should escape scrutiny for providing free hosting, subscriptions, etc. to the channels in question—these are choices that the company has made.”

Algorithm tweaks have not stopped tech journalists and politicians from frequently bloviating about YouTube radicalization, regardless of whether their rabbit hole critique remains true today.
