Artificial Intelligence Moving to Battlefield as Ethics Weighed
By Travis J. Tritten
- Congress expected to boost funding for war-fighting algorithms
- Ethics recommendations to be unveiled later this month
The Pentagon, taking the next big step of deploying artificial intelligence to aid troops and help select battlefield targets, must settle lingering ethical concerns about using the technology for waging war.
Search giant Google dealt a blow last year to the military’s maiden artificial intelligence, or AI, program, which sorted drone footage. Thousands of employees protested working on surveillance technology that they said could eventually be used to kill.
The Pentagon’s Joint Artificial Intelligence Center is pushing ahead with a new series of AI projects that will be rolled out to commanders over the coming year with an expected funding boost from Congress. It’s part of the U.S. race for military artificial intelligence supremacy over China and Russia. At the same time, a defense board is hammering out ethical guidelines for the cutting-edge technology.
“We will use artificial intelligence in our weapons systems,” said Lt. Gen. Jack Shanahan, director of the Joint Artificial Intelligence Center, during an appearance last month. “If we don’t do that, I think we are worse off for it, and I’m not sure everybody agrees with me.”
Google Employees Protest
Shanahan headed the drone program, called Project Maven, when it drew the ethical protests from Google employees, and he has been dogged by fears of fully autonomous weapons that could kill with no human input. The military has been explicit that it doesn’t want that type of weapon, which could violate international laws of war as well as the Pentagon’s own ethical doctrine.
The Pentagon adopted rules before the dawn of its AI program that require any semi-autonomous system, such as drones, to be subject to human judgment and control.
For now, new AI algorithms would sort data and help troops make quicker battlefield decisions. The applications envisioned by Shanahan include target development, swarming drones, battlefield communication, intelligence, and faster strikes.
The center is forming development teams and poring over tens of thousands of war records from U.S. operations in Iraq and Afghanistan for data it will use to design the new AI programs. The earlier Project Maven freed troops from the arduous task of viewing thousands of hours of military drone footage to glean intelligence.
Shanahan said introducing the technology to the military’s war-fighting commands is a top priority this fiscal year as special operators and others clamor for AI to test in the real world.
“They will make enormous impact day to day on the war-fighter by getting through command and control faster,” meaning better communication between leaders and troops in the field, he said during a briefing in August.
Military Decisions
Shanahan’s team is stopping short of unleashing autonomous robots, but the projects could for the first time insert artificial intelligence into military operations on the battlefield, where human decisions can come down to life and death.
The Defense Innovation Board, which advises the defense secretary, has been weighing those types of ethical concerns since last year. The board, which includes former Alphabet Inc. Executive Chairman Eric Schmidt and astrophysicist Neil deGrasse Tyson, is working on a set of principles for the Pentagon’s ethical use of artificial intelligence.
A final draft of its recommendations is planned for release and voting during the board’s Oct. 31 meeting.
Shanahan is also looking to hire a new ethicist for the AI center to oversee its models and algorithms, according to spokesman Lt. Cmdr. Arlo Abrahamson.
Automation Bias
One concern is that commanders and troops could succumb to automation bias, meaning they become overly trusting of the machine’s input, said Paul Scharre, a senior fellow at the Center for a New American Security and author of the book “Army of None: Autonomous Weapons and the Future of War.”
“We don’t want a situation where humans are pushing the button but humans are just a cog in the machine,” Scharre said.
Flawed artificial intelligence could lead to mistakes on the battlefield.
“What if the computer doesn’t have it quite right, and faulty analytic outputs are used by commanders or other decision-makers down the line?” according to an analysis by the consulting firm Booz Allen Hamilton that was submitted to the defense board.
Despite concerns, there are also great hopes for military artificial intelligence, including a capability to launch more precise strikes and operations while limiting civilian casualties.
Reducing Collateral Damage
Shanahan said the advances will give the U.S. an advantage over adversaries and deter war, and save lives during conflicts by making war more precise. The Pentagon’s AI strategy published in February says the technology will be used to protect against civilian casualties and unnecessary destruction around the world.
“The issue of collateral damage is something where AI has an enormous opportunity to be a very positive influence on how war is prosecuted,” Martin Heinrich (D-N.M.), a member of the Senate Armed Services panel overseeing the technology, said during a recent conference.
Ethical uses of the technology could include the development of landmines, similar to the Claymore mines used by the U.S. in Vietnam, that can distinguish between adults carrying weapons and children, the nonprofit research group Mitre Corp. told the defense board at a public hearing at Carnegie Mellon University in March.
Shanahan’s center first employed the technology to help fight wildfires in California and elsewhere and has discussed humanitarian relief uses in the Pacific with Japan and Singapore. Much of the potential for military artificial intelligence lies outside direct battlefield operations in areas such as logistics and accounting.
“Let’s say you could improve the fuel efficiency of the Department of Defense by 5%. That’s a lot of money, that’s huge,” Scharre said.
Congress has signaled a readiness to back the Pentagon’s work.
The Pentagon requested $209 million for the Joint Artificial Intelligence Center in fiscal 2020. The money would be a small drop in a potential defense budget of $738 billion but it would more than double the center’s first annual budget of $93 million.
The Senate’s defense authorization bill (S. 1790) would grant the military its full request, while the House bill (H.R. 2500) would allow $167 million. Shanahan has described the budget outlook as “very good” for artificial intelligence.
A final bill could be negotiated when lawmakers return next week after a two-week recess.
Meanwhile, adversaries such as China are pouring resources into the technology. President Xi Jinping has made leading the world on artificial intelligence a top priority and total Chinese spending on its strategy is in the tens of billions of dollars, according to the Center for a New American Security.
“The move from Google not to continue with Project Maven, I think that was a bad move,” Rep. Will Hurd (R-Texas) said during a conference last month. “The only entity that benefited from that was the Chinese government because you have the largest AI company not working with the Department of Defense.”
To contact the reporter on this story: Travis J. Tritten at ttritten@bgov.com
To contact the editors responsible for this story: Paul Hendrie at phendrie@bgov.com; Robin Meszoly at rmeszoly@bgov.com