Consider that our forgeries are already much better and far stranger than those of Big Brother. Right now, you can experience the pleasure of a well-done fake with only a click or a tap, no Winston Smith required. Visit the website This Person Does Not Exist, and refresh as many fictitious comrades as your heart desires. When the digital ghosts get too weird, you can marvel at a deepfake face swap: Behold, if you dare, Steve Buscemi as Jennifer Lawrence. Or you can fake yourself with FaceApp.

Fakery isn’t always harmless. More than 90 percent of deepfakes are pornographic, much of it “revenge porn.” Criminals have deployed deepfaked audio to impersonate CEOs. Synthetic content, experts warn, could be used to influence elections, sway financial markets, or trigger wars. Back in June 2019, House Democrat Adam Schiff, a man permanently on the cusp of letting out a long, tired sigh, led a congressional hearing on deepfakes. All told, it was a gloomy affair.

That day, representatives learned that a “high school kid with a good graphics card can make this stuff.” That the creators of malicious deepfakes (the bad guys) and those working to identify and intercept fake content (the good guys) are locked in an unending arms race. Hany Farid, an expert in digital forensics at UC Berkeley, has said, “We are outgunned ... The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.” Finally, representatives learned of the tipping point of indistinguishability: In a few years, it will be impossible for the naked eye to distinguish a real video from a deepfake. The prospects are harrowing: perfect fakes, creatable by anyone, unleashed at scale and difficult to discern. It’s no wonder that during the hearing Washington representative Denny Heck repeatedly quoted from Dante’s Inferno: “Abandon hope all ye who enter here.”
Putting aside such brash pessimism, what can be done? The platforms on which fake content appears (Facebook, Instagram, YouTube, Twitter) have taken some steps to combat disinformation. In a recent blog post, Facebook pledged to “remove misleading manipulated media” if “it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.” This is important, but detection is a serious challenge, and few trust Big Tech to fully self-regulate. Sure, there’s a profusion of laws around identity theft and defamation that might dissuade creators of harmful fakes, but it’s unclear who will enforce them or how.
As law professors Danielle Citron and Robert Chesney describe in their paper “Deepfakes: A Looming Challenge for Privacy, Democracy, and National Security,” three federal agencies (the FCC, the FEC, and the FTC) could in theory regulate the dissemination of fake content, but “on close inspection, their potential roles appear quite limited.” The FCC’s jurisdiction is limited to radio and television. The FEC is concerned only with the electoral process. The FTC oversees “fake advertising,” but deepfakes aren’t typically hawking products or services.