AI Helps Warehouse Robots Pick Up New Tricks

Some of the biggest names in artificial intelligence, including two godfathers of the machine learning boom, are betting that clever algorithms are about to transform the abilities of industrial robots.

Geoffrey Hinton and Yann LeCun, who shared this year’s Turing Award with Yoshua Bengio for their work on deep learning, are among the AI luminaries who have invested in Covariant, a startup developing AI technology for warehouse bin-picking. Covariant has developed a platform that consists of off-the-shelf robot arms equipped with cameras, a special gripper, and plenty of computing power for figuring out how to grasp objects tossed into warehouse bins. The company, emerging from stealth Wednesday, announced the first commercial installations of its AI-equipped robots: picking boxes and bags of products for a German electronics retailer called Obeta.

Picking up everyday boxes and plastic packages might sound trivial, and it is for most humans. Workers in factories and warehouses are frequently given new objects to handle, or a batch of different items mixed together, but it’s deceptively difficult for a machine to quickly work out how to grab the next doodad. Workplace robots are still incredibly dumb and clumsy, and teaching them to grasp unfamiliar objects or those with complex shapes remains a holy grail of AI and robotics research.

In recent years, a number of companies have sprung up offering robots that use simpler algorithms to perform useful warehouse tasks, including limited product picking. Successful players include Plus One Robotics, Picnic, and RightHand Robotics.
Safer robot arms, custom grippers, off-the-shelf sensors, and open source code for robot vision and control have made it easier for startups to deploy robots in new roles, such as ferrying products around warehouses or unloading boxes. Covariant has not yet developed a robot as dexterous or adaptable as a human, but it has apparently succeeded in applying an exotic research technique, called reinforcement learning, in an industrial setting. It is hard for robots to learn in the real world without making mistakes, and commercial robot installations require extreme levels of reliability.

The company was founded in 2017 by Pieter Abbeel, a prominent AI professor at UC Berkeley, and several of his students. Abbeel pioneered the application of machine learning to robotics, and he made a name for himself in academic circles in 2010 by developing a robot capable of folding laundry (albeit very slowly).

Covariant uses a range of AI techniques to teach robots how to grasp unfamiliar objects. These include reinforcement learning, in which an algorithm trains itself through trial and error, a little like the way animals learn through positive and negative feedback.
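That trial-and-error loop can be illustrated with a toy tabular Q-learning sketch. This is a generic, minimal illustration of the technique, not Covariant's system; the "grasp angles," success probabilities, and rewards below are invented for the example.

```python
import random

# A toy agent repeatedly tries actions, receives positive or negative
# feedback, and nudges its value estimates toward what it observed.
random.seed(0)

ACTIONS = [0, 1, 2]                       # e.g. three candidate grasp angles
TRUE_SUCCESS = {0: 0.2, 1: 0.8, 2: 0.5}   # hidden success probability per action

q = {a: 0.0 for a in ACTIONS}  # learned value estimate per action
alpha = 0.1                    # learning rate
epsilon = 0.1                  # exploration rate

for step in range(5000):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: q[x])
    # The environment gives +1 for a successful "grasp", -1 for a drop.
    reward = 1.0 if random.random() < TRUE_SUCCESS[a] else -1.0
    # Move the estimate toward the observed reward (trial and error).
    q[a] += alpha * (reward - q[a])

best = max(ACTIONS, key=lambda x: q[x])
print(best)  # the agent should settle on the most reliable action
```

After a few thousand trials the estimates separate: the unreliable action accumulates negative feedback, the reliable one positive, and the agent's greedy choice shifts accordingly. Real robotic systems replace this tiny lookup table with deep neural networks and far richer observations, but the feedback loop is the same idea.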

Reinforcement learning has driven spectacular recent breakthroughs in AI, including the superhuman game-playing algorithms developed by Alphabet’s AI subsidiary, DeepMind. The approach can help a robot figure out what shape an object is from a video image and where to grasp it, even if it has only been trained on objects of a different shape. This may be done in simulation so that the process can be accelerated.

But reinforcement learning is finicky and needs lots of computer power. “I used to be skeptical about reinforcement learning, but I’m not anymore,” says Hinton, a professor at the University of Toronto who also works part time at Google. Hinton says the amount of computer power needed to make reinforcement learning work has often seemed prohibitive, so it is striking to see commercial success. He says it is particularly impressive that Covariant’s system has been running in a commercial setting for a prolonged period.