“Now is the time,” the board’s report says, “to hold serious discussions about norms of AI development and use in a military context—long before there has been an incident.” A section musing on potential problems from AI cites “unintended engagements leading to international instability,” or, put more plainly, war.

The Pentagon has declared it a national priority to rapidly expand the military’s use of AI everywhere from the battlefield to the back office. An updated National Defense Strategy released last year says AI is needed to stay ahead of rivals such as China and Russia that are leaning on new technologies to compete with US power. A new Joint AI Center aims to accelerate projects built on commercial AI technology, expanding on a strategy tested under Project Maven, which tapped Google and others to apply machine learning to drone surveillance footage.
The Defense Innovation Board’s report lays out five ethical principles it says should govern such projects. The first is that humans should remain responsible for the development, use, and outcomes of the department’s AI systems. It echoes an existing policy introduced in 2012 that states there should be a “human in the loop” when deploying lethal force.

Other principles on the list describe practices that one might hope are already standard for any Pentagon technology project. One states that AI systems should be tested for reliability; another says that experts building AI systems should understand and document what they’ve made.
The remaining principles say the department should take steps to avoid bias in AI systems that could inadvertently harm people, and that Pentagon AI should be able to detect unintended harm and automatically disengage if it occurs, or allow deactivation by a human.
The recommendations highlight how AI is now seen as central to the future of warfare and other Pentagon operations—but also how the technology still relies on human judgment and restraint. Recent excitement about AI is largely driven by progress in machine learning. But as the slower-than-promised progress on autonomous driving shows, AI is best at narrowly defined and controlled tasks, and rich, real-world situations can be challenging.
“There’s a legitimate need for these kinds of principles predominantly because a lot of the AI and machine learning technology today has a lot of limitations,” says Paul Scharre, director of the technology and national security program at the Center for a New American Security. “There are some unique challenges in a military context because it’s an adversarial environment and we don’t know the environment you will have to fight in.”
Although the Pentagon asked the Defense Innovation Board to develop AI principles, it is not committed to adopting them. Top military brass sounded encouraging, however. Lieutenant General Jack Shanahan, director of the Joint Artificial Intelligence Center, said in a statement that the recommendations would “help enhance the DoD's commitment to upholding the highest ethical standards as outlined in the DoD AI strategy, while embracing the US military's strong history of applying rigorous testing and fielding standards for technology innovations."
If accepted, the guidelines could spur more collaboration between the tech industry and the US military. Relations have been strained by employee protests over Pentagon work at companies including Google and Microsoft. Google decided not to renew its Maven contract and released its own AI principles after thousands of employees protested the project.

Pentagon AI ethics principles might help executives sell potentially controversial projects internally. Microsoft and Google have both made clear they intend to remain engaged with the US military, and both have executives on the Defense Innovation Board. Google’s AI principles specifically allow military work. Microsoft was named Friday as the surprise winner of a $10 billion Pentagon cloud contract known as JEDI, intended to power a broad modernization of military technology, including AI.