A new paper published in Science Advances suggests a promising solution to these issues: fact-checking by the crowd. In the study, a team of researchers led by David Rand, a professor at MIT, set out to test whether groups of random laypeople could approximate the results of professional fact checkers. Using a set of 207 articles that had been flagged for fact-checking by Facebook’s AI, they had three professional fact checkers score them on several dimensions to produce an overall score from 1 (totally false) to 7 (totally trustworthy). Then they recruited about 1,100 ordinary people from Amazon Mechanical Turk, divided them into groups equally balanced between self-identified Democrats and Republicans, and had them do the same thing, but with a twist: While the fact checkers read each entire article and did their own research to verify its claims, the laypeople looked only at the headline and first sentence of each story.
Amazingly, that was enough for the crowd to match and even exceed the fact checkers’ performance.
To measure the crowd’s performance, the researchers first measured the correlation between the scores assigned by the three fact checkers themselves. (The correlation came out to .62—high, but far from perfect agreement. When judging stories on a binary true/false scale, however, at least two out of three fact checkers agreed with each other more than 90 percent of the time.) Then they measured the correlation between the crowd-assigned scores, on the one hand, and the average of the three fact checkers’ scores, on the other. The basic idea was that the average of the professionals’ ratings represents a better benchmark of accuracy than any one fact checker alone. And so if the laypeople’s ratings correlated with the average fact checker score as closely as the individual fact checkers agreed with each other, it would be fair to say that the crowd was performing as well as or better than a professional. The question: How many laypeople would you need to assemble to hit that threshold?
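The logic of that benchmark can be illustrated with a toy simulation (entirely synthetic data, not the study’s: the noise levels, article scores, and crowd sizes below are made-up assumptions). The idea is the one described above: average more and more individual lay ratings per article, and watch the correlation between those averages and the mean professional score climb toward the professionals’ own pairwise agreement.

```python
import random
import statistics

random.seed(0)

# Synthetic setup: each article has a latent veracity score on the 1-7 scale.
N_ARTICLES = 207
truth = [random.uniform(1, 7) for _ in range(N_ARTICLES)]

def noisy_rating(t, sd):
    """One rater's score: latent truth plus Gaussian noise, clipped to 1-7."""
    return min(7.0, max(1.0, random.gauss(t, sd)))

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Benchmark: the mean of three (assumed moderately noisy) professionals.
pros = [[noisy_rating(t, sd=1.0) for t in truth] for _ in range(3)]
benchmark = [statistics.mean(col) for col in zip(*pros)]

# Pairwise agreement among the professionals themselves.
pro_agreement = statistics.mean(
    pearson(pros[i], pros[j]) for i in range(3) for j in range(i + 1, 3)
)

# Crowd: average k noisier lay ratings per article, correlate with benchmark.
rs = {}
for k in (1, 5, 10, 25):
    crowd = [
        statistics.mean(noisy_rating(t, sd=2.5) for _ in range(k))
        for t in truth
    ]
    rs[k] = pearson(crowd, benchmark)
    print(f"crowd of {k:2d}: r = {rs[k]:.2f} "
          f"(pro pairwise agreement = {pro_agreement:.2f})")
```

Averaging washes out individual error, so the crowd’s correlation with the benchmark rises as the group grows; the threshold question above amounts to finding the smallest k whose correlation matches the professionals’ pairwise agreement.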