YouTube is trying to reduce the spread of toxic videos on the platform by limiting how often they appear in users' recommendations. The company announced the shift in a blog post on Friday, writing that it would begin cracking down on so-called "borderline content" that comes close to violating its community standards without quite crossing the line.
"We’ll begin reducing recommendations of borderline content and content that could misinform users in harmful ways—such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11," the company wrote. These are just a few examples of the broad array of videos that might be targeted by the new policy. According to the post, the shift should affect less than one percent of all videos on the platform.
Social media companies have come under heavy criticism for their role in the spread of misinformation and extremism online, rewarding such content—and the engagement it gets—by pushing it to more users. In November, Facebook announced plans to reduce the visibility of sensational and provocative posts in News Feed, regardless of whether they explicitly violate the company's policies. A YouTube spokesperson told WIRED the company has been working on its latest policy shift for about a year, saying it has nothing to do with the similar change at Facebook. The spokesperson stressed that Friday's announcement is still in its earliest stages, and the company may not catch all of the borderline content immediately.
Over the last year, YouTube has spent substantial resources on trying to clean up its platform. It’s invested in news organizations and committed to promoting only “authoritative” news outlets on its homepage during breaking news events. It’s partnered with companies like Wikipedia to fact check common conspiracy theories, and it’s even spent millions of dollars sponsoring video creators who promote social good.
The problem is, YouTube’s recommendation algorithm has been trained over the years to give users more of what it thinks they want. So if a user happens to watch a lot of far-right conspiracy theories, the algorithm is likely to lead them down a dark path to even more of them. Last year, Jonathan Albright, director of research at Columbia University's Tow Center for Digital Journalism, documented how a search for “crisis actors” after the Parkland, Florida, shooting led him to a network of 9,000 conspiracy videos. A recent BuzzFeed story showed how even innocuous videos often lead to recommendations of increasingly extreme content.
With this shift, YouTube is hoping to throw people off that trail by removing problematic content from recommendations. But implementing such a policy is easier said than done. The YouTube spokesperson says it will require human video raters around the world to answer a series of questions about videos they watch to determine whether they qualify as borderline content. Their answers will be used to train YouTube’s algorithms to detect such content in the future. YouTube's sister company, Google, uses similar processes to assess the relevance of search results.
It’s unclear what signals both the human raters and the machines will analyze to determine what videos constitute borderline content. The spokesperson, who asked not to be named, declined to share additional details, except to say that the system will look at more than just the language in a given video’s title and description.
For as much as these changes stand to improve platforms like Facebook and YouTube, instituting them will no doubt invite new waves of public criticism. People are already quick to claim that tech giants are corrupted by partisan bias and are practicing viewpoint censorship. And that’s in an environment where both YouTube and Facebook have published their community guidelines for all to see. They’ve drawn bright lines about what is and isn’t acceptable behavior on their platforms and have still been accused of fickle enforcement. Now, both companies are, in a way, blurring those lines, penalizing content that hasn’t yet crossed them.
YouTube will not take these videos off the site altogether, and they'll still be available in search results. The shift also wouldn't stop, say, a September 11th truther from subscribing to a channel that only spreads conspiracies. "We think this change strikes a balance between maintaining a platform for free speech and living up to our responsibility to users," the blog post read.
In other words, YouTube, like Facebook before it, is trying to appease both sides of the censorship debate. It's guaranteeing people the right to post their videos—it's just not guaranteeing them an audience.