“We're committed to publishing every tweet, video, and image that we can reliably attribute to a state-backed information operation,” a Twitter spokesperson says via email. “We have an obligation to balance these important public disclosures with our commitment to protecting people's reasonable expectation of privacy, and we conduct thorough impact assessments before each.”

Twitter and other social media companies are trying to find a balance among transparency, user privacy, and a timely response to state-sponsored activity. Facebook, which was also targeted by the IRA and other groups before and after the 2016 election, has taken a different approach with its data. Instead of releasing troves of information to the public, Facebook partners with researchers it trusts, including the Digital Forensic Research Lab where Nimmo works. Facebook also shares data through an independent research commission called Social Science One that vets the information and the researchers who get access to it, hoping to prevent another Cambridge Analytica-style privacy breach.

Google, which owns YouTube, says it has taken steps to counter state-sponsored activity and to prevent phishing and hacking campaigns. The company shares information with law enforcement and with other social media companies, but it doesn’t usually release that information to the public. Google, along with Facebook and Twitter, released some information to researchers at Oxford’s Computational Propaganda Project, which issued a comprehensive report on the IRA’s impact on American politics from 2012 through 2018. That report noted that Google’s contribution was “by far the most limited in context and least comprehensive of the three.”
For all of Twitter’s openness, much remains unknown about its data releases. No one outside the company knows how Twitter finds suspicious accounts, how it defines “state-sponsored,” or how it distinguishes between acceptable and “malicious” content. Twitter doesn’t discuss how it chooses which countries and networks to focus on. As a result, it’s difficult to assess how successful the company is at ferreting out disinformation.
Twitter would not reveal any specifics about its process for this article. “We seek to protect the integrity of our efforts and avoid giving bad actors too much information, but in general, we focus on conduct, rather than content,” the Twitter spokesperson wrote in an emailed statement. “This means we look at the behavioral signals behind networks of accounts to intricately understand how they interact across the service,” the statement continued, adding that Twitter works with governments, law enforcement, and other tech companies to better understand such operations.

But in keeping those specifics secret, Twitter and other social media companies make oversight impossible and make themselves the sole arbiters of what kinds of speech are authentic and legitimate, says Danny O’Brien, director of strategy at the Electronic Frontier Foundation. The platforms decide who is normal, who is newsworthy, and who is dangerous, without revealing how they make those judgment calls. “From a social standpoint this puts a huge amount of faith and trust and responsibility in the platforms,” says Buntain.

In some ways, the operations Twitter has identified in Russia, Iran, and elsewhere are low-hanging fruit. It’s against Twitter’s rules to impersonate someone in order to intentionally “mislead, confuse, or deceive others.” It’s also straightforward to say one country shouldn’t mount a massive, covert disinformation campaign to manipulate another country’s voters. But the issues get more complex when you look at domestic social media campaigns. Is it wrong for a political action committee to hire marketing and PR firms to promote specific ideas on social media? Or for a private citizen to set up a web of blogs and posts that promote particular candidates or disparage others? “Is the problem that people are trying to influence one another? 
Because if it is, then you’re probably going to have to ban elections, because that’s the whole point of elections,” O’Brien says.

Erin Gallagher, a social media researcher, says the market for this persuasive online activity is growing, getting more complex, and harder to categorize. “Globally we're looking at a smorgasbord of actors and methods in a cottage industry that no one really knows much about,” she wrote in an email.

In his 1970 book Culture Is Our Business, Marshall McLuhan examined American civilization through advertising. Part collage, part social commentary, it smashes McLuhan’s own frighteningly prescient observations against articles about smoking, quotes from Finnegans Wake, and ads for Hertz, Western Electric, Karmann Ghia, and TWA. “World War III is a guerrilla information war with no division between military and civilian participation,” he wrote.

That description mirrors the world some researchers describe: one in which personal political views and state-sponsored propaganda easily intermingle and are difficult to untwine. “Basically this is where we are right now, and it’s a total clusterfuck,” wrote Gallagher. The line between a bad actor who intentionally posts misleading information and an individual promoting persuasive posts is muddy and hard to define.

As disinformation tactics spread, such ethical questions get even more complicated. Recent elections in Brazil and India were plagued by disinformation campaigns launched on WhatsApp, a Facebook-owned secure messaging service that uses end-to-end encryption. That encryption gives users an added expectation of privacy, but it makes the platform harder for researchers to monitor. “Is it worth the risk of invading peoples’ privacy to collect the data that academics would need in order to understand how these platforms are being used?” asks Buntain. “I just don’t know the answer to that question.”
Flooding the Zone

In a 16-month study of 1.5 billion tweets, Zubair Shafiq, a computer science professor at the University of Iowa, and his graduate student Shehroze Farooqi identified more than 167,000 apps using Twitter's API to automate bot accounts that spread tens of millions of tweets pushing spam, links to malware, and astroturfing campaigns.