Can the Wisdom of Crowds Help Fix Social Media’s Trust Issue?

Social media misinformation outrage cycles tend to go through familiar phases. There’s the initial controversy over some misleading story that goes viral, then the platform’s response. Then someone asks “What about Fox News?” Finally, someone points out that the real problem, as far as social media is concerned, is the algorithms that determine who sees what. Those algorithms are optimized primarily for engagement, not accuracy. False and misleading stories can be more engaging than true ones, so absent some intervention by the platform, that’s what people are going to see. Fixing the algorithm, the argument goes, would be a better way to deal with the problem than taking down viral misinformation after the fact.

But fix it how? To change the ranking to favor true stories over false ones, say, the platforms would need a way to systematically judge everything that gets shared, or at least everything that gets shared a nontrivial amount. The current prevailing approach punts that judgment to outside parties. Facebook, for example, partners with third-party fact-checking organizations to determine whether a given link merits a warning label, and Twitter builds its fact-checks by linking to external sources. That approach could never be scaled up to the level of the algorithm: there aren’t enough professional fact checkers in the world to go over every article that might get posted on social media. Worse, research has found that checking only a subset of content creates an “implied truth effect”: some users assume that any article without a label must be accurate, even if it simply was never checked.
A new paper published in Science Advances suggests a promising solution to these issues: fact-checking by the crowd. In the study, a team of researchers led by David Rand, a professor at MIT, set out to test whether groups of random laypeople could approximate the results of professional fact checkers. Using a set of 207 articles that had been flagged for fact-checking by Facebook’s AI, they had three professional fact checkers score the articles on several dimensions to produce an overall score from 1 (totally false) to 7 (totally trustworthy). Then they recruited about 1,100 ordinary people from Amazon Mechanical Turk, divided them into groups equally balanced between self-identified Democrats and Republicans, and had them do the same thing, but with a twist: While the fact checkers read the entire article and did their own research to verify the claims, the laypeople only looked at the headline and first sentence of each story.
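The group-assembly step described above can be sketched in a few lines of code. This is a minimal illustration, not the paper’s actual procedure: the rater pool, party labels, group size, and ratings are all made-up stand-ins.

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical pool of raters tagged with self-identified party.
# The size matches the article's "about 1,100" figure; everything
# else here is an illustrative assumption.
pool = [{"id": i, "party": random.choice(["D", "R"])} for i in range(1100)]

def balanced_group(pool, size):
    """Draw a group with equal numbers of Democrats and Republicans."""
    dems = [p for p in pool if p["party"] == "D"]
    reps = [p for p in pool if p["party"] == "R"]
    half = size // 2
    return random.sample(dems, half) + random.sample(reps, half)

group = balanced_group(pool, 26)

# Each rater scores an article from 1 (totally false) to 7 (totally
# trustworthy) based on headline and lede alone; the group's score
# is the simple mean of those ratings.
ratings = [random.randint(1, 7) for _ in group]
group_score = round(mean(ratings), 2)
print(group_score)
```

The balancing matters because it prevents a group’s average from simply reflecting the partisan lean of whoever happened to sign up.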

Amazingly, that was enough for the crowd to match and even exceed the fact checkers’ performance.

To measure the crowd’s performance, the researchers first measured the correlation between the scores assigned by the three fact checkers themselves. (The correlation came out to 0.62: high, but far from uniform agreement. When judging stories on a binary true/false scale, however, at least two out of three fact checkers agreed with each other more than 90 percent of the time.) Then they measured the correlation between the crowd-assigned scores, on the one hand, and the average of the three fact checkers’ scores, on the other. The basic idea was that the average of the professionals’ ratings represents a better benchmark of accuracy than any one fact checker alone. And so if the laypeople’s ratings correlated with the average fact checker score as closely as the individual fact checkers agreed with each other, it would be fair to say that the crowd was performing as well as or better than a professional. The question: How many laypeople would you need to assemble to hit that threshold?
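The statistical logic here, that averaging many noisy ratings produces a score that tracks the benchmark more closely as the crowd grows, can be demonstrated with a toy simulation. Everything below is assumed for illustration: the noise levels, the latent “true” scores, and the crowd sizes are invented, not taken from the study.

```python
import random
from statistics import mean

random.seed(0)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy model: each of 207 articles has a latent trustworthiness score
# on the 1-7 scale; every rating is that score plus independent noise.
N_ARTICLES = 207
truth = [random.uniform(1, 7) for _ in range(N_ARTICLES)]

def rate(noise):
    """One rater's scores for all articles: truth plus Gaussian noise."""
    return [t + random.gauss(0, noise) for t in truth]

# Benchmark: the mean of three relatively accurate "fact checkers".
checker_avg = [mean(col) for col in zip(*(rate(1.0) for _ in range(3)))]

# Crowd: the mean of k noisier "lay" ratings. As k grows, the noise
# averages out and the crowd mean correlates more strongly with the
# fact-checker benchmark (the wisdom-of-crowds effect).
results = {}
for k in (1, 5, 25):
    crowd_avg = [mean(col) for col in zip(*(rate(2.5) for _ in range(k)))]
    results[k] = pearson(crowd_avg, checker_avg)
    print(k, round(results[k], 2))
```

Under these assumptions the correlation climbs steadily with crowd size, which is the intuition behind the paper’s question of how many laypeople it takes to match a professional.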