But, like a hare zigzagging back and forth to avoid a falcon, this robot’s seeming madness is in fact a special brand of cleverness, one that Facebook thinks holds the key not only to better robots but to better artificial intelligence. This robot, you see, is teaching itself to explore the world. And that could one day, Facebook says, lead to intelligent machines like telepresence robots.
At the moment robots are very dumb—generally you have to spell everything out in code for them: this is how you roll forward, this is how you move your arm. We humans are much smarter in how we learn. Even babies understand that an object that moves out of view hasn’t vanished from the physical universe. They learn they can roll a ball, but not a couch. It’s fine to fall off a couch, but not a cliff.

All of that experimentation builds a model of the world in your brain, which is why later on you can learn to drive a car without crashing it immediately. “We know in advance that if we're driving near a cliff and we turn the wheel to the right, the car is going to run off a cliff and nothing good is going to happen,” says Yann LeCun, chief AI scientist at Facebook. We have a self-learned model in our head that keeps us from doing dumb things. Facebook is trying to give that kind of model to machines, too. Systems that learn “models of the world is in my opinion the next challenge to really make significant progress in AI,” LeCun adds.
Now, the group at Facebook isn’t the first to try to get a robot to teach itself to move. Over at UC Berkeley, a team of researchers used a technique called reinforcement learning to teach a two-armed robot named Brett to shove a square peg in a square hole. Simply put, the robot tries lots and lots of random movements. If one gets it closer to the goal, the system gives it a digital “reward.” If it screws up, it gets a digital “demerit,” and the robot keeps a tally of both. Over many iterations, the reward-seeking robot gets its hand closer and closer to the square hole, and eventually drops the peg in.

What Facebook is experimenting with is a bit different. “What we wanted to try out is to instill this notion of curiosity,” says Franziska Meier, an AI research scientist at Facebook. That’s how humans learn to manipulate objects: children are driven by curiosity about their world. They don’t try something new, like yanking a cat’s tail, because they have to, but because they wonder what might happen if they do, much to the detriment of poor old Whiskers.

So whereas a robot like Brett refines its motions bit by bit—drawing closer to its target, resetting, and drawing closer still with the next try—Facebook’s robot arm might get closer and then veer way off course. That’s because the researchers aren’t rewarding it for incremental success, but instead giving it freedom to try non-optimal movements. It’s trying new things, like a baby, even if those things don’t seem particularly rational in the moment.

Each movement provides data for the system: what did this application of torque in each joint do to move the arm to that particular spot? “Although it didn't achieve the task, it gave us more data, and the variety of data we get by exploring like this is bigger than if we weren't exploring,” says Meier.
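The reward-and-demerit idea described above can be sketched in a few lines of toy code. This is purely illustrative (it is not Berkeley's or Facebook's actual code, and the one-dimensional "arm" and the numbers are invented for the example): a position searches for a target by trying random movements, keeping moves that reduce the distance and tallying rewards and demerits along the way.

```python
import random

# Toy reinforcement-style search (illustrative only): a 1-D "arm"
# tries random movements toward a target, keeping a tally of
# digital "rewards" and "demerits."
def reward_learning(target=7.0, steps=200, seed=0):
    rng = random.Random(seed)
    position = 0.0
    tally = 0
    for _ in range(steps):
        move = rng.uniform(-1.0, 1.0)  # try a random movement
        new_position = position + move
        if abs(new_position - target) < abs(position - target):
            tally += 1                 # digital "reward"
            position = new_position   # keep the move that helped
        else:
            tally -= 1                 # digital "demerit"; discard the move
    return position, tally

final_pos, score = reward_learning()
print(round(final_pos, 2))  # ends up close to the target of 7.0
```

Because only distance-reducing moves are kept, the search homes in on the target over many iterations—the same incremental, reward-seeking behavior the article attributes to Brett.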
This concept is known as self-supervised learning—the robot tries new things and updates a software model, which can help it predict the consequences of its actions. The idea is to make machines more flexible and less single-minded about a task.

Think of it like completing a maze. Maybe a robot knows the direction it needs to head to find the exit. It might try over and over to get there, even if it inevitably hits a dead end in that pursuit. “Since you're so focused on moving in that single direction, you might walk yourself into corners,” says University of Oslo roboticist Tønnes Nygaard, who has developed a four-legged robot that learns to walk on its own. (Facebook is also experimenting with getting a six-legged robot to walk on its own, but wasn’t able to demonstrate that research for my visit to the lab.) “Instead of being so focused on saying, I want to go in the direction I know the solution is in, instead I try to focus on just going to explore. I'm going to try finding new solutions.”
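A minimal sketch of that self-supervised loop, under invented assumptions (this is not Facebook's code, and the simple linear dynamics are hypothetical): the "robot" explores with random actions, records what actually happened, and fits a model from its own experience that predicts the consequence of an action.

```python
import random

# Illustrative self-supervised learning (hypothetical toy dynamics):
# explore randomly, log (state, action, next_state) experience, then
# fit a forward model that predicts what an action will do.
def explore_and_model(trials=500, seed=1):
    rng = random.Random(seed)
    data = []
    state = 0.0
    for _ in range(trials):
        action = rng.uniform(-1.0, 1.0)    # curious, non-optimal moves
        next_state = state + 2.0 * action  # true dynamics, unknown to the robot
        data.append((state, action, next_state))
        state = next_state

    # Least-squares estimate of the action's effect from the data alone:
    num = sum(a * (ns - s) for s, a, ns in data)
    den = sum(a * a for s, a, ns in data)
    return num / den

k = explore_and_model()
print(round(k, 3))  # recovers the true multiplier, 2.0
```

No human labels the data: the "supervision" is just the recorded outcome of each movement, which is the sense in which the learning is self-supervised.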
So those incoherent movements Facebook’s robot arm is making are really a form of curiosity, and it’s that kind of curiosity that could lead to machines that more readily adapt to their environment. Think of a home robot that’s trying to load a dishwasher. Maybe it thinks the most efficient way to put a mug on the top rack is to come at it sideways, in which case it bumps the edge of the rack. It’s deterministic, in a sense: trial and error, over and over, leads it down this less-than-ideal path, where it keeps trying to get better at loading the rack sideways and can’t back up and try something new. A robot loaded with curiosity, on the other hand, can experiment and learn that it’s actually best to come in from above. It’s flexible, not deterministic, which in theory would allow it to adapt more easily to dynamic human environments.
Now, an easier, faster way to teach robots how to do stuff is with simulations. That is, build a digital world for, say, an animated stick figure, and let it teach itself to run through the same kind of trial and error. The method is relatively fast, because the iterations happen much quicker when the digital “machines” aren’t constrained by real-world physics and can run faster than real time.
But while simulation might be more efficient, it’s an imperfect representation of the real world—there’s just no way to fully simulate the complexities of dynamic human environments. So while researchers have been able to train robots to do something first in simulation, then port that knowledge to robots in the real world, the transition is extremely messy, because the digital and physical worlds are mismatched.

Doing everything in the physical world may be slower and more laborious, but the data you get is more pure, in a sense. “If it works in the real world, it actually works,” says Roberto Calandra, an AI research scientist at Facebook. If you’re designing supremely complex robots, you can’t simulate the chaos of the human world they’ll be tackling. They’ve got to live it. This will be particularly important as the tasks we give robots get more complex. A robot lifting car doors on a factory line is relatively easy to just code, but to navigate the chaos of a home (clutter on the floor, children, children on the floor…) a robot will have to adapt on its own with creativity, so it doesn’t get stuck in feedback loops. A coder can’t hold its hand for every obstacle.
Facebook’s project is part of a great coming-together of AI and robots. Traditionally, these worlds have largely kept to themselves. Yes, robots have always needed AI to operate autonomously, like using machine vision to sense the world. But while tech giants like Google, Amazon, and Facebook have pushed major advances in the development of AI in purely digital contexts—getting computers to recognize objects in images, for example, by having humans label those objects first—robots have remained fairly dumb as researchers have focused on getting the things to move without falling on their faces.

That’s beginning to change, as AI researchers start using robots as platforms to refine software algorithms. Facebook, for instance, might want to teach a robot to solve a series of tasks on its own. That, in turn, might inform the development of AI assistants that can better plan a sequence of actions for you, the user. “It's the same problem,” says LeCun. “If you solve it in one context, you'll solve it in the other context.”
In other words, AI is making robots smarter, but robots are also now helping advance AI. “A lot of the interesting problems and interesting questions that are connected with AI—particularly the future of AI, how can we get to human-level AI—are currently being addressed by people who work in robotics,” says LeCun. “Because you can't cheat with robots. You can't have thousands of people labeling images for you.”
Still: what would a digital behemoth like Facebook want with robots? At the moment, the company says this research isn’t connected to a particular product pipeline. But keep in mind that Facebook is in the connecting-people business (well, and in the ad-selling business). “We think robotics is going to be an important component of this—think about things like telepresence,” says LeCun. Facebook is already a hardware company, after all, what with the Oculus VR system and Portal, its video conference device. “The logical succession of this is perhaps things that you can control from a distance.” (Which, if you’ve been reading WIRED recently, will certainly bring up questions of privacy and security.)

But we’re getting ahead of ourselves. Every home robot so far, save for the Roomba, has failed, in part because the machines just aren’t smart or useful enough. No robot is particularly smart. But maybe Facebook’s flailing robot arm can help fix that.