Mark Zuckerberg has promised to deploy AI to help solve some of the company’s biggest problems by policing hate speech, fake news, and cyberbullying (an effort that has seen limited success so far). More recently, Facebook has been forced to reckon with how to stop AI-powered deception in the form of deepfake videos, which could convincingly spread misinformation as well as enable new forms of harassment.

Pesenti joined Facebook in January 2018, inheriting a research lab created by Yann LeCun, one of the biggest names in the field. Before that, he worked on IBM’s Watson AI platform and at BenevolentAI, a company applying the technology to medicine.
And so simultaneously the company mounted a huge effort, led by CTO Mike Schroepfer, to create artificial intelligence systems that can, at scale, identify the content that Facebook wants to zap from its platform, including spam, nudes, hate speech, ISIS propaganda, and videos of children being put in washing machines.
Pesenti met with Will Knight, senior writer at WIRED, near the magazine’s offices in New York. The conversation has been edited for length.
Will Knight: AI has been presented as a solution to fake news and online abuse, but that may oversell its power. What progress are you really making there?

Jerome Pesenti: Moderating automatically, or even with humans and computers working together, at the scale of Facebook is a super challenging problem. But we’ve made a lot of progress.
Early on, the field made progress on vision—understanding scenes and images. In the last few years we’ve been able to apply that to recognizing nudity, recognizing violence, and understanding what's happening in images and videos.
Recently there’s been a lot of progress in the field of language, allowing us a much more refined understanding of interactions through the language that people use. We can understand if people are trying to bully, if it’s hate speech, or if it’s just a joke. By no measure is it a solved problem, but there's clear progress being made.
WK: What about deepfakes?
JP: We’re taking that very seriously. We actually went around and created new deepfake videos, so that people could test deepfake detection techniques. It’s a really important challenge that we are trying to be proactive about. It’s not really significant on the platform at the moment, but we know it can be very powerful. We’re trying to be ahead of the game, and we’ve engaged the industry and the community.
WK: Let’s talk about AI more generally. Some companies, for instance DeepMind and OpenAI, claim their objective is to develop “artificial general intelligence.” Is that what Facebook is doing?

JP: As a lab, our objective is to match human intelligence. We're still very, very far from that, but we think it’s a great objective. But I think many people in the lab, including Yann, believe that the concept of “AGI” is not really interesting and doesn't really mean much.
On the one hand, you have people who assume that AGI is human intelligence. But I think that's a bit disingenuous, because if you really think about human intelligence, it is not very general. On the other hand, people project onto AGI the idea of the singularity—that if you had an AGI, then you would have an intelligence that can make itself better and keep improving. But there’s no real model for that. Humans can’t make themselves more intelligent. I think people are kind of throwing it out there to pursue a certain agenda.