A New Robo-Car Report Card Isn't Quite What It Seems

The autonomous vehicle disengagement reports published annually by California's DMV are flawed. But the public can still glean some secrets about the self-driving industry.

By Christopher Ham

The latest batch of autonomous vehicle developer disengagement reports—the closest thing we’ve got to a robo-report card—has just been published by the California Department of Motor Vehicles. The columns and columns of data contained therein don’t quite illuminate the secrets of the very secretive self-driving vehicle industry. But its many pages make clear that while the Silicon Valley hype around robocars may have cooled, progress toward the day when humans are unshackled from the steering wheel continues: The 48 autonomous vehicle developers that tested their tech on public roads collectively drove 2.05 million miles between December 2017 and November 2018, up from 500,000 the year before.

These reports spell out how many times each company’s vehicle “disengaged” out of autonomous mode and switched back to old-fashioned, human-hands-on-the-wheel manual driving. Waymo, for example, reported that a driver had to take over once every 11,017 miles, 97 percent better than a year ago. GM Cruise reported a takeover once every 5,205 miles, 320 percent better than a year ago.
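For readers who want to check the math, here is a minimal sketch of how those improvement percentages fall out of the miles-per-disengagement rates. The prior-year rates used below (roughly 5,596 miles per disengagement for Waymo and 1,254 for Cruise) are assumptions drawn from the companies' 2017 filings, not figures stated in this story.

```python
# Sketch of the arithmetic behind the year-over-year improvement figures.
# Prior-year rates are assumptions taken from the 2017 reports:
# Waymo ~5,596 miles per disengagement, Cruise ~1,254.

def improvement_pct(current_rate: float, prior_rate: float) -> float:
    """Year-over-year change in miles per disengagement, as a percent."""
    return (current_rate / prior_rate - 1) * 100

waymo = improvement_pct(11_017, 5_596)
cruise = improvement_pct(5_205, 1_254)

print(f"Waymo:  {waymo:.0f}% better")   # ~97% better
print(f"Cruise: {cruise:.0f}% better")  # ~315% better
```

Under these assumed prior-year rates, Cruise's improvement works out closer to 315 percent; the 320 percent figure above may reflect different rounding or slightly different underlying mileage totals.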

But like many standardized tests, the reports miss a lot. They are poor tools for understanding how well this technology works, and how each company’s progress compares to its competitors’. Some in the space gripe that the reports risk encouraging engineers to baby their cars to keep disengagement counts low, juking the stats.

Still, in a new industry built on complex software and hardware, in which secrecy is paramount, any public information can be revealing. And there is knowledge to be gleaned from these thousands upon thousands of pages.

Before diving in, a few words about the reports themselves. California is the only state to require something like them (it also requires developers to publicly report all collisions). And understanding them means accepting a few big caveats:

  • The reports are unscientific, because each company reports its data in a different way, offering various levels of detail and idiosyncratic explanations for what jolted their vehicles out of self-driving mode.
  • They’re packed with vague language and lack context. This was supposed to be the first year all companies used a standard data entry format, but not all of them appear to have followed the rules. And their language remains vague. Waymo, for example, cites a bunch of disengagements for “unwanted movement of the vehicle that was undesirable under the circumstances.” Uber labels a ton of disengagements as “precautionary takeover or operator discretion.” Do those describe similar situations? Who knows!
  • The reports are of little use for anyone who wants to compare rival companies, because those companies aren’t running the same tests. Waymo does most of its testing in simple suburbs; Zoox focuses on the complex city. They’re better for tracking the progress of each outfit, but still not great, because those companies change how and where they test over time. And comparing the number of miles each company has driven doesn't actually tell you much about anyone's progress.
  • Crucially: These reports only cover driving on public roads in California. So we don’t know anything about Ford, which focuses its testing around Detroit and Pittsburgh. And we don’t see data for Waymo’s increasingly important test program in Phoenix. (Aurora does include data from its testing activities in Pittsburgh, but without detailing specific disengagements.)

If we set aside the flawed idea of using disengagements as a metric, though, we can still learn plenty from these reports. They also include information on how many miles the companies are covering, how big their fleets are, and what sorts of roads they’re testing on.

Let’s start our 21st century Kremlinology by noting the mix of players. Old-timers like Waymo, General Motors’ Cruise, Nissan, and Uber are joined by little-known newcomers like Roadstar.Ai, aiPod, Nullmax, and WeRide. Smaller names have become big ones: Aurora just netted $530 million in funding, delivery-focused Nuro reeled in close to $1 billion, and Apple finally unveiled some information about its secretive (and possibly troubled) self-driving project. All three filed their first disengagement reports this year. So yes, the money doesn’t flow as freely in Autonoworld as it once did. But the growing field of competitors indicates the dream of the self-driving car is alive as ever.

The reports also indicate that almost every company’s testing operations are growing. Cruise had 109 vehicles testing in the state last reporting period; now it has 194. (Many of those new ones are the Generation 3, the company's electric production model AV.) Waymo upped the number of cars in the state by almost a factor of five. Uber is the exception: It shut down its self-driving program in California almost a year ago, after one of its testing vehicles struck and killed a woman in Arizona. In all, the DMV had approved 665 robocars to test in California by the end of November 2018, up from 326 the year before.

Some of those vehicles covered more ground than others—and the differences show how each company’s self-driving strategy diverges from its competitors’. Aurora’s five vehicles traveled just 32,858 miles in autonomous mode, far fewer than Waymo or Cruise, and with a higher disengagement rate than either of those companies. That low number is intentional, says Aurora cofounder Sterling Anderson. Driving around in public comes at a cost: in fuel, in pay for the drivers who sit behind the wheel (and also receive equity in the startup), and in the risk of a crash. Aurora does most of its development in simulation and on a test track. It reserves its on-road, in-public testing for the tricky stuff, to see how its tech is advancing. When it stops seeing high disengagement rates, it moves on to a new geographic area, with new challenges.

“Testing on road should not be development,” Anderson says. “Testing on road is more verification.”

Competitors might call that spin—Aurora’s way of explaining away low miles and high disengagements. What’s interesting, though, is that the company is a second act for each of its three cofounders: Chris Urmson led Google’s program (now Waymo), Drew Bagnell helped start Uber’s team, and Anderson worked on Tesla’s Autopilot system. That they used their fresh start to settle on this unusual way of testing shows that the methods of creating this technology are themselves evolving and diversifying—a sign of a gradually maturing industry.
