Facebook is trying to redefine authoritativeness on the internet as part of its efforts to fight the spread of misinformation and abuse on its platforms.
On Wednesday, the company rolled out a slew of announcements that aim to promote more trustworthy news sources, tamp down on Groups that spread misinformation, and give the public more insight into how Facebook crafts its content policies writ large. The changes, broadly, seek to nurture what Facebook refers to as "integrity" on the platform at a time when many users, regulators, and politicians have come to see Facebook and its other apps—WhatsApp, Instagram, and Messenger—as the chief propagators of propaganda, hate speech, and fake news online.
By far the biggest change to come from these announcements is the introduction of a new metric called Click-Gap, which Facebook's News Feed algorithms will use to determine where to rank a given post. Click-Gap, which Facebook is launching globally today, is the company’s attempt to limit the spread of websites that are disproportionately popular on Facebook compared with the rest of the web. If Facebook finds that a ton of links to a certain website are appearing on Facebook, but few websites on the broader web are linking to that site, Facebook will use that signal, among others, to limit the website’s reach.
“This can be a sign that the domain is succeeding on News Feed in a way that doesn't reflect the authority they've built outside it and is producing low-quality content,” Guy Rosen, Facebook's vice president of integrity, and Tessa Lyons, head of news feed integrity, wrote in a blog post.
Click-Gap could be bad news for fringe sites that optimize their content to go viral on Facebook. Some of the most popular stories on Facebook come not from mainstream sites that also get lots of traffic from search or directly, but rather from small domains specifically designed to appeal to Facebook’s algorithms.
Experts like Jonathan Albright, director of the Digital Forensics Initiative at Columbia University's Tow Center for Digital Journalism, have mapped out how social networks, including Facebook and YouTube, acted as amplification services for websites that would otherwise receive little attention online, allowing them to spread propaganda during the 2016 election. Facing backlash for its role in those misinformation campaigns, Facebook revamped its News Feed algorithm last year to prioritize content shared by friends and family over posts from publisher pages. But at least one study of the platform since then suggests that the change rewards engagement, outrage, and division.
For Facebook, pulling off Click-Gap requires the company to map the internet and all of its inbound and outbound links. In that way, it’s similar to the idea that Google’s cofounders had when they first launched their search engine, built on a system called PageRank. That algorithm analyzed linking patterns between websites to determine which ones deserved the most prominence in search. Facebook’s Click-Gap metric is a new spin on that old concept, using links to assess whether Facebook is an accurate reflection of the internet at large.
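The intuition behind such a signal can be sketched in a few lines of code. Facebook has not published the Click-Gap formula, so the function below, its inputs, and the example numbers are invented purely for illustration: they compare how often a domain is clicked on Facebook with how often the rest of the web links to it.

```python
# Hypothetical sketch of a "click gap" style signal. Facebook has not
# disclosed its actual formula; the names and numbers here are assumptions.

def click_gap_score(facebook_clicks: int, inbound_web_links: int) -> float:
    """Ratio of a domain's popularity on Facebook to its popularity on
    the broader web. A high value suggests the domain thrives on News
    Feed without matching authority elsewhere."""
    # Add 1 to the denominator so a domain with zero inbound links
    # doesn't cause a division-by-zero error.
    return facebook_clicks / (inbound_web_links + 1)

# A fringe site: heavy Facebook traffic, almost no one links to it.
fringe = click_gap_score(facebook_clicks=500_000, inbound_web_links=40)

# A mainstream outlet: similar Facebook traffic, widely linked elsewhere.
mainstream = click_gap_score(facebook_clicks=500_000, inbound_web_links=90_000)

# Under this toy scoring, the fringe site would be the one downranked.
assert fringe > mainstream
```

The real system presumably combines many such signals, but the toy ratio captures the core idea: popularity on Facebook that isn't mirrored by links from the wider web is treated as a red flag rather than a reward.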
Such a change is likely to invite even more attacks from Facebook’s critics, particularly Republicans, who have accused the social network for years of stymying free speech on the platform. On Wednesday, the Senate Judiciary Committee is holding a hearing on that very subject. News sites across the web have already seen their Facebook traffic decline over the last year, a consequence of Facebook’s attempts to reduce the spread of fake news, and one that has sparked outrage among the owners of far-right sites, like Gateway Pundit, who allege they’re being disproportionately targeted. It’s possible those sites could see their reach even further diminished by Click-Gap.
In addition to introducing this new metric, Facebook is also turning its attention to Groups, where so much amplification happens. Private or semi-private Groups have exploded in popularity in recent years, in part thanks to a push by Facebook to promote communities. At the same time, they have become petri dishes of misinformation, radicalization, and abuse. Groups are a particularly tricky part of Facebook’s ecosystem, because they are intimate, insular, and often opaque. Because many are secret or closed, when abuse happens, it’s up to members to report it or Facebook’s automated tools to detect it.
Facebook will now take a more punitive approach toward administrators of toxic Groups, and will factor in moderator behavior when assessing the health of a group. “When reviewing a group to decide whether or not to take it down, we will look at admin and moderator content violations in that group, including member posts they have approved, as a stronger signal that the group violates our standards,” Rosen and Lyons wrote.
Facebook will also penalize these Groups for spreading fake news, which doesn't always violate community standards. In much the same way the platform reduces the reach of sites that get repeatedly dinged by its partner fact-checkers, Facebook will now downgrade the reach of Groups it finds to be constantly sharing links to such sites. That, the company hopes, will make the Groups harder to find.
All Group admins will now get more clarity into actions Facebook takes against their Group, with a feature called Group Quality. This will provide administrators with an overview of the content flagged in the group, removed by Facebook, and designated fake news. Additionally, starting soon, if people leave a group, they’ll have the option to remove past posts, too.
As part of these announcements, Facebook is also acknowledging a perennial problem, which is that it could never hire or partner with enough human beings to monitor all of the news published on its platform. This has been a drain on some of the news organizations that Facebook partnered with to fact-check false news. Earlier this year, Snopes announced it was leaving the partnership to evaluate “the ramifications and costs of providing third-party fact-checking services.” Now, Facebook says it will be consulting with academics, journalists, and other groups to sort out new approaches that can adequately address the scope of the problem.
“Our professional fact-checking partners are an important piece of our strategy against misinformation, but they face challenges of scale: There simply aren't enough professional fact-checkers worldwide and, like all good journalism, fact-checking takes time,” the blog post reads. One possibility Facebook is considering is relying on groups of users for help with fact-checking, a system that would no doubt introduce the possibility for manipulation. In the meantime, Facebook recently began adding what it calls “Trust Indicators” to news stories. These ratings are created by a news consortium called The Trust Project, which assesses news outlets’ credibility based on things like their ethics standards, corrections policies, and ownership.
Finally, starting today, Facebook will begin publishing changes it makes to its community standards, which dictate what is and isn't allowed on the platform. This is a lengthy, living document, which Facebook is constantly reassessing. Facebook made this document public last year, but until now, there hasn't been an easy way to track the changes the company makes to it over time.
For all of the announcements Facebook made Wednesday, there’s a lot missing, too. For example, there was no mention of Facebook’s encrypted messaging platform WhatsApp—a glaring omission given Mark Zuckerberg’s plan to merge WhatsApp with Instagram Direct and Messenger. Instagram and Messenger were barely discussed. Facebook did say it has begun limiting the spread of content on Instagram that it deems inappropriate, even if it doesn't explicitly violate its content policies, but didn't provide many details.
As for Messenger, Rosen and Lyons said it will now get verification badges and a Context Button, like Facebook has, which offers users more information on the publisher behind each news article. They’re also introducing something called a Forward Indicator, which signals to users that they are receiving a forwarded message. Facebook added this feature on WhatsApp last year, where forwarding has contributed to the spread of massive disinformation campaigns, to dire effect.
All of this comes just a day after Facebook faced Congress to answer for the role it played in spreading the livestreamed mass murder of 50 people in Christchurch, New Zealand in March. After years of scrutiny into how—or whether—it can keep propaganda, abuse, and misleading information off the site, Facebook is finally coming around to the fact that assuming the role of a supposedly neutral platform for so many years has had some not-so-neutral consequences.
Facebook executives are meeting with reporters in Menlo Park today. This story will be updated as needed.
Correction: 1:29 pm ET 4/10/2019 An earlier version of this story stated that people who leave Groups can remove old posts starting today. That change will be implemented soon.