Any rule that SolarWinds violates would be a new one, he argues, given that the hacking campaign was by all appearances focused on the kind of cyberespionage US intelligence agencies routinely carry out, with no clear evidence that it was intended to cause disruptive effects.
On March 25, the CEOs of Google, Facebook, and Twitter will once again testify before a committee of the House of Representatives, this time about the spread of disinformation on their platforms.
As the director of the US Cybersecurity and Infrastructure Security Agency, Krebs oversaw the country's election preparedness, grappling not only with potential foreign hacking threats but also with a firehose of disinformation from President Donald Trump and his associates.
“When I read some of your tweets, my jaw dropped,” the host told Jodi Doering, referring to her account of gravely ill patients who “scream at you for a magic medicine and that Joe Biden is going to ruin the USA.”
Within minutes of Donald Trump tweeting that he had fired Christopher Krebs as the director of the Department of Homeland Security’s cybersecurity agency Tuesday night, Twitter slapped on a warning label that the accompanying claim about electoral fraud “is disputed.” The disinformation warning was, in some ways, a fitting denouement to a two-week-long battle between Krebs, the head of the Cybersecurity and Infrastructure Security Agency, and his boss in the Oval Office.
We applaud these changes, and believe that if Twitter is serious about its stated goal of “protecting the integrity of the election conversation,” there's another thing the platform should consider: putting a time delay on the tweets of Donald Trump and other political elites.
Facebook attributed one of the disinformation distribution networks to "actors associated with election interference in the US in the past, including those involved in 'DC leaks' in 2016." The network tied to IRA-linked individuals included accounts and groups collectively posing as a Turkey-based think tank.
Maria Ressa, CEO and executive editor of Rappler, an investigative news website in the Philippines, says we talk about disinformation all wrong. Ressa repeatedly warned Facebook of the threat to press freedoms and democratic institutions just as Russian campaigns were working to destabilize the 2016 US presidential campaign.
The company was underlining how critical it is to provide trustworthy information during an election period, while simultaneously defending its ambivalent political ads policy, which allows politicians and parties to deliver misleading statements using Facebook’s powerful microtargeting tools.
While Netflix makes no mention of it whatsoever, The Hater is actually a sequel to Komasa’s 2011 movie, The Suicide Room, which is about a teenager whose life becomes a catastrophe after a video of him kissing another boy on a dare gets circulated online.
The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and the US troops in Poland and the Baltics; they’ve posted fake content on everything from social media to pro-Russian news websites.
Today, Ayyadurai is one of the most dangerous vectors of health disinformation, racking up millions of engagements on posts that rail against vaccinations, claim Anthony Fauci is a member of the “deep state,” and instruct followers to point blow dryers down their throats to kill the coronavirus.
It has run relatively few campaigns related to Syria and its civil war but is devoted to a common priority for Russia-backed digital actors: undermining and destabilizing Ukraine. Though Secondary Infektion's activities are difficult to track, Graphika researchers were able to piece its activity together by looking at the rare occasions where the group reused an account a few times, and by identifying patterns in the sets of blogs and forums the group would post to.
That’s true even in the modern era of microtargeted advertising and social-media-enabled disinformation; and it’s particularly relevant as we begin to imagine what electoral campaigns will look like in the midst of (or, hopefully, the aftermath of) the Covid-19 pandemic.
That day, representatives learned that a “high school kid with a good graphics card can make this stuff.” That the creators of malicious deepfakes (the bad guys) and those working to identify and intercept fake content (the good guys) are locked in an unending arms race.
A new United Nations-sponsored report offers one of the most comprehensive overviews of the challenges to global electoral integrity posed by the onslaught of misinformation, online extremism, and social media manipulation campaigns, and calls for a series of reforms from platforms, politicians, and international governing bodies.
Moreover, very little of the IRA’s spending was on traditional political advertising: The Senate report notes that only about 5 percent of the Russian ads users saw prior to the presidential election actually referenced Hillary Clinton or Donald Trump directly.
Among the features announced Monday were new interstitials—notices that appear in front of a post—that warn users when content in their Instagram or Facebook feeds has been flagged as false by outside fact-checkers.
A years-old internet hoax warning about a new, nonexistent Instagram rule resurfaced this week—and demonstrated the staying power of even low-stakes misinformation online. If the story pushed by a meme or hoax fits in a way that feels like a coherent narrative to a critical mass of people, it's game over, says Phillips.
“I can see why the platforms would be hesitant,” says Ben Nimmo, a senior fellow of the Atlantic Council’s Digital Forensic Research Lab. People who followed IRA or other state-sponsored accounts may have been manipulated, but they weren’t breaking the law or even violating Twitter’s terms of service.
"Let's say I want to wage a disinformation campaign to attack a political opponent or a company, but I don’t have the infrastructure to create my own Internet Research Agency," Gully told WIRED in an interview, speaking publicly about Jigsaw's year-old disinformation experiment for the first time.
Last August, researchers from the threat intelligence firm FireEye uncovered a vast social media influence campaign, conducted by a network of inauthentic news outlets and fake personas with ties to Iran.
The discussions organized by Avaaz served as a counterpoint to all that pressure, as individual victims of online harassment campaigns came forward to tell tech companies exactly how they’ve been hurt by the hate and hoaxes that have festered on their platforms.
Feelings of helplessness and symptoms associated with post-traumatic stress disorder—like anxiety, guilt, and anhedonia—are on the rise, they said, as warnings go unheeded and their hopes for constructive change are dashed time and time again. “We are in a time where a lot of things feel futile,” says Alice Marwick, a media and technology researcher and professor at the University of North Carolina at Chapel Hill.
These overdue moves illustrate the companies’ ability to identify and police false content, and they undercut a notion widely embraced in the social media industry that Facebook, Twitter, and YouTube shouldn’t be “arbiters of the truth.”
Flooding the Zone
In a 16-month study of 1.5 billion tweets, Zubair Shafiq, a computer science professor at the University of Iowa, and his graduate student Shehroze Farooqi identified more than 167,000 apps using Twitter's API to automate bot accounts that spread tens of millions of tweets pushing spam, links to malware, and astroturfing campaigns.
From Russian disinformation on Facebook, Twitter, and Instagram to YouTube extremism to drones grounding air traffic, Soltani argues, tech companies need to think not just about protecting their own users but about what he calls abusability: the possibility that users could exploit their tech to harm others, or the world.