Though Facebook says it will still remove content from politicians that encourages violence or harm, Willner argues that allowing hate speech—whether it's from a politician or a private citizen white supremacist—can create a dangerous atmosphere. To back up his claim, he cites research from the Dangerous Speech Project, which studies the types of public speech that spark violence. He also charges that Facebook’s exception now makes politicians a privileged class, enjoying rights denied to everyone else on the platform. Not only is Facebook avoiding hard choices, Willner says, it is betraying the safety of its users to placate the politicians who have threatened to regulate or even break up the company. “Restricting the speech of idiot 14-year-old trolls while allowing the President to say the same thing isn't virtue,” writes Willner. “It's cowardice.”

Willner intended his post, distributed to “friends-only,” to generate discussion and attention from former colleagues, both inside and outside Facebook. That it did. One top executive who engaged with Willner was Andrew “Boz” Bosworth, Facebook’s outspoken vice president of AR/VR and one of the engineers who created the News Feed. Taking pains to note that he is not part of the content standards team, Bosworth described Facebook’s decision as a reasonable balance between maintaining standards and letting people know what their political leaders really think. “I just feel you aren't paying enough respect to the newsworthiness case,” he wrote in a comment under Willner’s post. “I'm not convinced the horror of the speech is greater than the horror of it going unnoticed.”
Another participant in the fray was Paul C. Jeffries, Facebook’s former head of legal operations. Speaking like someone who has spent a lot of time with lawyers—though he holds a physics degree—Jeffries says of Facebook that “they aren’t really applying different rules to regular folks and politicians. It’s the same rule, it just evaluates [sic] different.”

The conflict is genuinely knotty. As Bosworth notes, it’s a classic tradeoff between two values. Twitter gives a pass to Donald Trump’s hateful tweets, because CEO Jack Dorsey believes it’s in the public good to document what the president says. And Facebook has used newsworthiness as a moderation factor for several years. In December 2015, Facebook’s top executives debated what to do with Trump’s denigration of Muslims, which some employees thought clearly violated its standards. They allowed it to stand. A year later, Facebook took down a post from a Norwegian writer that included the iconic “Terror of War” photo, which depicts children fleeing a US napalm attack in Vietnam, because it showed a naked young girl. Responding to the outcry that the company was censoring the Pulitzer Prize–winning image, Facebook reversed itself. Thereafter, “newsworthiness” formally became a factor in treating speech that otherwise violated the company’s standards.
At the same time, the company mounted a huge effort, led by CTO Mike Schroepfer, to create artificial intelligence systems that can, at scale, identify the content that Facebook wants to zap from its platform, including spam, nudes, hate speech, ISIS propaganda, and videos of children being put in washing machines.