ABOUT: Justin Sherman (@jshermcyber) is a Fellow at the Atlantic Council’s Cyber Statecraft Initiative.

Davos is hardly known for its modesty—nor for favoring government solutions to publicly shared problems. That is why it might seem unexpected, even counterintuitive, that companies like Microsoft and IBM spoke at this year’s summit about the need for technology regulation.
But this perfectly captures the US tech industry’s shift toward talking regulation—just in a way that benefits itself—and the related risks of allowing private corporations to set the American (or even global) agenda on technology governance.

Microsoft CEO Satya Nadella caught the media’s attention when he said at Davos, “I think we should be thinking a lot harder about regulation” of facial recognition and object recognition technology. IBM CEO Ginni Rometty hosted a Davos panel on precision regulation of AI, in line with IBM’s push to “guide regulation” in the space. Palantir CEO Alex Karp even joined the fray, criticizing at once Silicon Valley’s aversion to regulation and its reluctance to work with the US government.

Just days before Davos, Google CEO Sundar Pichai called for governments to regulate AI. He even publicly supported the European Union’s proposal to temporarily ban facial recognition.
It’s possible some of these stances are well-intentioned. Tech firms have felt the heat for the harm their technologies and behaviors have inflicted; that may have kicked slight attitude shifts into gear. At the same time, however, these calls for tech regulation from private tech firms have a sharp corporate twist.
First, when a corporation calls for regulation, it can nudge the public to over-focus on the technology itself and under-focus on the nature of that technology’s development and use. This gives the company leverage. Take facial recognition, for example.
Certain things about facial recognition are unique, like its use of a face as the mechanism of identification—a face is not so easily changed as a password, a phone number, or even a home address. In this way, bans on facial recognition technology could have positive effects. As Google’s Pichai said recently, artificial intelligence shouldn’t be used to “support mass surveillance or violate human rights,” and facial recognition could certainly play a role in those practices.

Yet, as Bruce Schneier laid out in The New York Times, “focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building.” Facial recognition is “just one identification technology among many.” In other words, prohibitions on the use of facial recognition are one thing for a company’s bottom line: they target a specific identification method or technology (depending on how you want to define it).
But regulating the underlying data collection and analysis? That’s an entirely different animal, one that could challenge the core business models of major search engines, social media platforms, and AI product developers. The effects would be far more disruptive for those firms—albeit welcomed by citizens who want legally protected data privacy. Zeroing in on a single technology thus pivots the regulatory dialogue in corporations’ favor, away from talk of more fundamental, government-driven change.