Among the many things we humans like to lord over the rest of the animal kingdom is our complex language. Sure, other creatures talk to one another, but we’ve got all these wildly complicated written languages with syntax and fun words like defenestrate. This we can also lord over robots, who, in addition to lacking emotion and the ability to not fall on their faces, can’t write novels.
At least not yet. Researchers at Brown University just got a robot to do something as linguistically improbable as it is beautiful: After training to hand-write Japanese characters, the robot then turned around and started to copy words in a slew of other languages it’d never written before, including Hindi, Greek, and English, just by looking at examples of that handwriting. Not only that, it could do English in print and cursive. Oh, and then it copied a drawing of the Mona Lisa on its own for good measure.
Like walking on two legs, handwriting is one of those seemingly simple human charms that is in fact elaborate. When you write a word, you have to know where to put down your pen, how long to draw a line and in which direction, then pick up your pen, sometimes mid-letter (like with a capital A), and know where to put it down again.
Matt Simon covers cannabis, robots, and climate science for WIRED.
So to get a kid to write, you can’t just show them a sample and set them loose—you have to give them instructions on how to form each letter. “They give you these little algorithms for what strokes to do and what order to put them in to make the character,” says Brown University roboticist Stefanie Tellex, who developed the system with Atsunobu Kotani, also at Brown. “And that's what our algorithm is learning to do.”
Their learning system is split into two distinct models. A “local” model is in charge of what’s going on with the current stroke of the pen—so aiming in the right direction and determining how to end the stroke. And a “global” model is in charge of moving the robot’s writing utensil to the next stroke of the character.
To train the robot, the researchers fed it a corpus of Japanese characters, and provided information about how the component strokes of a character are supposed to work. “From that, it basically learns a model that looks at pixels of the image and predicts where it needs to go to start the next stroke, and then where it needs to move while it’s drawing the stroke to reproduce the image,” Tellex says.
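The local/global split described above can be sketched as a simple control loop. The toy code below is not the Brown team's system; it's a hypothetical illustration where both "models" are stand-in rules driven by a target image of ink pixels, just to show the shape of the loop: a global step places the pen at the start of a new stroke, and a local step advances the pen along the stroke and decides when to lift it.

```python
# Hypothetical sketch (not the researchers' code) of the two-model split:
# a "local" step draws the current stroke, a "global" step places the pen
# for the next one. The "image" is just a set of (x, y) ink pixels.

def local_step(image, pos):
    """Local model stand-in: move one pixel toward any unvisited ink
    adjacent to pos. Returns (new_pos, lift); lift=True ends the stroke."""
    x, y = pos
    for dx, dy in [(1, 0), (0, 1), (-1, 0), (0, -1),
                   (1, 1), (1, -1), (-1, 1), (-1, -1)]:
        nxt = (x + dx, y + dy)
        if nxt in image:   # unvisited ink nearby: keep drawing
            return nxt, False
    return pos, True       # nothing adjacent: lift the pen

def global_step(image):
    """Global model stand-in: pick where the next stroke should start
    (here, simply the leftmost remaining ink pixel)."""
    return min(image) if image else None

def trace(image):
    """Reproduce an image as an ordered list of pen strokes."""
    image = set(image)
    strokes = []
    while image:
        pos = global_step(image)    # global model: put the pen down
        stroke = [pos]
        image.discard(pos)
        while True:
            pos, lift = local_step(image, pos)  # local model: draw
            if lift:
                break
            stroke.append(pos)
            image.discard(pos)
        strokes.append(stroke)
    return strokes

# Two separate horizontal dashes: the loop recovers two strokes.
dashes = [(0, 0), (1, 0), (2, 0), (5, 0), (6, 0)]
print(len(trace(dashes)))  # 2
```

In the real system, both steps are learned from pixels rather than hand-coded, but the division of labor is the same: one model handles the pen while it's down, the other decides where it goes next.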
Then they decided to try to confuse the hell out of the robot by writing hello on a whiteboard in Hindi and Tamil and Yiddish—all of which use unique scripts. Incredibly, the robot could eyeball each with machine vision and write its own copies of the words, even though it had only ever trained on Japanese. Also, they showed it English cursive in addition to print, and it handled both fine.
Then a gaggle of kindergarteners visited Tellex’s lab. Surely, the robot couldn’t recognize and replicate their … suboptimal handwriting? Yeah, it copycatted them with ease. “Just to watch it reproduce the somewhat wobbly writing of these little 6-year-olds, it was just incredible, having never seen that before and never trained on that,” Tellex says.
Surely, the robot wouldn’t be able to copy a rough sketch that Kotani did of the Mona Lisa on the whiteboard? Well, this robot is not so easily confused. “That was back in August, and that picture is still on our whiteboard in our lab,” Tellex says.
But nobody’s perfect. Because the researchers trained the robot on modern Japanese, which is written left to right, the system could generalize to English, which is written in the same direction. But it didn’t do so hot with languages written right to left.
Still, it’s a remarkable demonstration of the interconnectivity of languages, many different scripts that nevertheless come from the same human (and now robot) hand. And it’s a step toward opening up a new line of communication between humans and machines. Maybe not so much in the near term, but perhaps one day humanoid robots could leave us handwritten notes, as opposed to having them spit printouts from their bodies. Ideally not something ominous like defenestrate! defenestrate! defenestrate!