While concern for the movement of bodies is central to both dance and robotics, historically, the disciplines have rarely overlapped. On the one hand, the Western dance tradition has long maintained a generally anti-intellectual culture that poses great challenges to those interested in interdisciplinary research. George Balanchine, the acclaimed founder of the New York City Ballet, famously told his dancers, “Don't think, dear, do.” As a result of this sort of culture, the stereotype of dancers as servile bodies that are better seen than heard unfortunately calcified long ago. Meanwhile, the field of computer science—and robotics by extension—demonstrates comparable, if distinct, body issues. As sociologists Simone Browne, Ruha Benjamin, and others have demonstrated, there is a long-standing history of emerging technologies that cast human bodies as mere objects of surveillance and speculation. The result has been the perpetuation of racist, pseudoscientific practices like phrenology, mood-reading software, and AIs that purport to know if you’re gay by how your face looks. The body is a problem for computer scientists, and the overwhelming response by the field has been technical “solutions” that seek to read bodies without meaningful feedback from their owners. That is, an insistence that bodies be seen, but not heard.
Despite the historical divide, it is perhaps not too great a stretch to consider roboticists as choreographers of a specialized sort, and to think that the integration of choreography and robotics could benefit both fields. Usually, the movement of robots isn’t studied for meaning and intentionality the way it is for dancers, but roboticists and choreographers are preoccupied with the same foundational concerns: articulation, extension, force, shape, effort, exertion, and power. “Roboticists and choreographers aim to do the same thing: to understand and convey subtle choices in movement within a given context,” writes Amy LaViers, a certified movement analyst and founder of the Robotics, Automation, and Dance (RAD) Lab, in a recent National Science Foundation–funded paper. When roboticists work choreographically to determine robot behaviors, they’re making decisions about how human and inhuman bodies move expressively in the intimate context of one another. This is distinct from the utilitarian parameters that tend to govern most robotics research, where optimization reigns supreme (does the robot do its job?), and what a device’s movement signifies or makes someone feel is of no apparent consequence.
Madeline Gannon, founder of the research studio AtonAton, leads the field in her exploration of robot expressivity. Her World Economic Forum–commissioned installation, Manus, exemplifies the possibilities of choreo-robotics both in its brilliant choreographic consideration and its feats of innovative mechanical engineering. The piece consists of 10 robot arms displayed behind a transparent panel, each stark and brilliantly lit. The arms call to mind the production design of techno-dystopian films like Ghost in the Shell. Such robot arms are engineered to perform repetitive labor, and are customarily deployed for utilitarian matters like painting car chassis. Yet when Manus is activated, its robot arms embody none of the expected, repetitious rhythms of the assembly line, but instead appear alive, each moving individually to animatedly interact with its surroundings. Depth sensors installed at the base of the robots’ platform track the movement of human observers through space, measuring distances and iteratively responding to them. This tracking data is distributed across the entire robotic system, functioning as shared sight for all of the robots. When passersby move sufficiently close to any one robot arm, it will “look” closer by tilting its “head” in the direction of the stimuli, and then move closer to engage. Such simple, subtle gestures have been used by puppeteers for millennia to imbue objects with animus. Here, they have the cumulative effect of making Manus appear curious and very much alive. These tiny choreographies give the appearance of personality and intelligence. They are the functional difference between a haphazard row of industrial robots and the coordinated movements of intelligent pack behavior.
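To make the mechanics concrete, the behavior described above—shared sensor data broadcast to every arm, with each arm orienting toward a passerby who crosses an engagement threshold—can be sketched in a few lines of Python. This is an illustrative approximation, not Gannon's actual code; the class names, the 1.5-meter radius, and the arm layout are all hypothetical.

```python
import math

# Hypothetical engagement threshold in meters.
ENGAGE_RADIUS = 1.5

class Arm:
    """One robot arm with a base position and a 'gaze' heading."""
    def __init__(self, x, y):
        self.x, self.y = x, y   # base position on the platform
        self.heading = 0.0      # current gaze angle, in radians
        self.engaged = False

    def observe(self, px, py):
        """React to a tracked person at (px, py)."""
        dist = math.hypot(px - self.x, py - self.y)
        if dist <= ENGAGE_RADIUS:
            # "Look" by tilting toward the stimulus.
            self.heading = math.atan2(py - self.y, px - self.x)
            self.engaged = True
        else:
            self.engaged = False

def broadcast(arms, person):
    """Distribute one tracking observation across the whole system,
    mimicking the shared-sight behavior described above."""
    for arm in arms:
        arm.observe(*person)

# A row of ten arms spaced one meter apart, and a passerby near arm 3.
arms = [Arm(float(x), 0.0) for x in range(10)]
broadcast(arms, (3.2, 1.0))
```

Even this toy version captures the key design choice: the intelligence lives in the shared observation, while each arm's response is a tiny, local gesture of attention.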
But while tech giants like Google, Amazon, and Facebook have pushed major advances in the development of AI in purely digital contexts—getting computers to recognize objects in images, for example, by having humans label those objects first—robots have remained fairly dumb, as researchers have focused on getting the things to move without falling on their faces.