One day at work last year, Lade Obamehinti encountered an algorithm that had a problem with black people.
The Facebook program manager was helping test a prototype of the company’s Portal video chat device, which uses computer vision to identify and zoom in on a person speaking. But as Obamehinti, who is black, enthusiastically described her breakfast of French toast, the device ignored her and focused instead on a colleague—a white man.
Tom Simonite covers artificial intelligence for WIRED.
Obamehinti related that experience Wednesday at Facebook’s annual developer conference. The day prior, CEO Mark Zuckerberg claimed his company’s many products would become more private.
The conference’s second day, headlined by Facebook’s chief technology officer Mike Schroepfer, was more sober. He, Obamehinti, and other technical leaders reflected on the challenges of using technology—particularly artificial intelligence—to safeguard or enhance the company’s products without creating new biases and problems. “There aren’t simple answers,” Schroepfer said.
Schroepfer and Zuckerberg have said that, at Facebook’s scale, AI is essential to remedy the unintended consequences of the company digitizing human relationships. But like any disruptive technology, AI creates unpredictable consequences of its own, Facebook’s director of AI, Joaquin Candela, said late Wednesday. “It’s just impossible to foresee,” he said.
Obamehinti’s tale of algorithmic discrimination showed how Facebook has had to invent new tools and processes to fend off problems created by AI. She said being ignored by the prototype Portal spurred her to develop a new “process for inclusive AI” that has been adopted by several product development groups at Facebook.
That involved measuring racial and gender biases in the data used to create the Portal’s vision system, as well as the system’s performance. She found that women and people with darker skin were underrepresented in the training data, and that the prerelease product was less accurate at seeing those groups.
Many AI researchers have recently raised the alarm about the risk of biased AI systems as they are assigned more critical and personal roles. In 2015, Google’s photo organizing service tagged photos of some black people as “gorillas”; the company responded by blinding the product to gorillas, monkeys, and chimps.
Obamehinti said she found a less-sweeping solution for the system that had snubbed her, and managed to ameliorate the Portal’s blind spots before it shipped. She showed a chart indicating that the revised Portal recognized men and women of three different skin tones more than 90 percent of the time—Facebook’s goal for accuracy—though it still performed worse for women and the darkest skin tones.
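Facebook hasn’t published the details of Obamehinti’s audit, but the kind of measurement the article describes—breaking recognition accuracy down by demographic group and checking each group against a 90 percent target—can be sketched simply. The field names, data layout, and `audit_by_group` function below are illustrative assumptions, not Facebook’s actual tooling:

```python
# Hypothetical sketch of a per-group accuracy audit like the one described
# above. Only the 90 percent threshold comes from the article; the rest is
# an illustrative assumption.
from collections import defaultdict

def audit_by_group(samples, threshold=0.90):
    """Compute recognition accuracy for each demographic group and
    flag any group that falls below the target threshold."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for s in samples:
        group = (s["gender"], s["skin_tone"])
        totals[group] += 1
        if s["recognized"]:
            correct[group] += 1
    report = {g: correct[g] / totals[g] for g in totals}
    failing = [g for g, acc in report.items() if acc < threshold]
    return report, failing

# Toy test set: the system spots both light-skinned men but misses
# one of the two dark-skinned women.
samples = [
    {"gender": "woman", "skin_tone": "dark", "recognized": True},
    {"gender": "woman", "skin_tone": "dark", "recognized": False},
    {"gender": "man", "skin_tone": "light", "recognized": True},
    {"gender": "man", "skin_tone": "light", "recognized": True},
]
report, failing = audit_by_group(samples)
# The ("woman", "dark") group sits at 50 percent accuracy here,
# well under the threshold, so it is flagged for remediation.
```

The same per-group breakdown applies whether the groups are skin tones, genders, or—as in the misinformation work described below—languages.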
A similar process is now used to check that Facebook’s augmented reality photo filters work equally well on all kinds of people. Although algorithms have gotten more powerful, they require careful direction. “When AI meets people,” Obamehinti said, “there’s inherent risk of marginalization.”
Candela, Facebook’s AI director, spoke Wednesday about how Facebook’s use of AI to fight misinformation has also required engineers to be careful that the technology doesn’t create inequities.
The company has deployed a content filtering system to identify posts that may be spreading political misinformation during India’s month-long national election. It highlights posts for human review, and operates in several of the country’s many languages. Candela said engineers have been carefully comparing the system’s accuracy among languages to ensure that Facebook’s guidelines are enforced equitably.
Similar concerns have arisen in a project testing whether Facebook could flag fake news faster by crowdsourcing the work of identifying supporting or refuting evidence to some of its users. Candela said that a team working on bias in AI and related issues has been helping work out how to ensure that the pool of volunteers who review any particular post is diverse in outlook, and not all drawn from one region or community.
Facebook’s AI experts hope some of the challenges of making their technology perform equitably will diminish as the technology becomes more powerful. Schroepfer, the company’s CTO, highlighted research that has allowed Facebook’s systems for processing images or text to achieve high accuracy with smaller amounts of training data. He didn’t share any figures indicating that Facebook has improved at flagging content that breaches its rules, though, instead repeating numbers released last November.
Candela acknowledged that AI advances, and tools developed to expose and measure shortcomings of AI systems, won’t alone fix Facebook’s problems. Fixing them also requires Facebook’s engineers and leaders to do the right thing. “While tools are definitely necessary, they’re not sufficient, because fairness is a process,” he said.