Pentagon Urged to Do More Against Biased Artificial Intelligence


By Josh Axelrod

  • Lawmakers call on DOD to clarify guidelines around ethical AI
  • Contractors researching new technology to combat bias

The Pentagon has committed to combating bias in artificial intelligence but may not be asking the right questions of industry partners to be successful.

AI has become a bedrock of 21st century military strategy, and the Defense Department argues that machine learning is integral to the U.S. military’s competitive edge. But traditional AI models are “brittle” and “not ready for deployment,” Nisheeth Vishnoi, a Yale professor of computer science, said. On the battlefield, wrong results can mean misidentifying human and environmental targets.

The department has spent billions of dollars on AI development, including the establishment of the Joint Artificial Intelligence Center, or JAIC, in 2018. It continues to issue procurement requirements for AI-based tools and gadgets from the private sector, but those requirements for the most part can only describe what already exists.

Skeptics of the Pentagon’s current trajectory of artificial intelligence development warn of widespread deployment of weapons systems inextricably linked to bias. Lucy Suchman, professor of the anthropology of science and technology at Lancaster University, is one advocate for interrupting what has been characterized as an “automation arms race.”

Making AI ‘Explainable’

The Pentagon spent $2 billion on its “AI Next” initiative in 2018 to fund AI research and development at the Defense Advanced Research Projects Agency; part of that mandate was to make AI more explainable.

Alex John London, author of “For the Common Good: Philosophical Foundations of Research Ethics,” has challenged the premise, however: “The idea that explainability is a solution to all the problems that we lump under bias—I think that’s real problematic.”

DARPA worked hand-in-hand with contractors, including through a nearly $1 million contract that bankrolled Kitware Inc.’s “Explainable AI Toolkit.” The product lets users “interrogate the model just on its output and get some understanding of the factors it was using to make the decision,” Anthony Hoogs, vice president of AI at Kitware, said.
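Model-agnostic explainability tools of this kind generally work by perturbing a model’s inputs and watching how its output confidence shifts. Below is a minimal sketch of that general idea, occlusion sensitivity for a black-box image classifier; it is not Kitware’s toolkit, and the `predict` function and image layout are assumptions made for illustration.

```python
import numpy as np

def occlusion_map(predict, image, target_class, patch=16):
    """Estimate which image regions drive a classifier's confidence in
    target_class by blanking one patch at a time and re-querying the model.

    predict: black-box function mapping an H x W x C array to a vector of
             class probabilities (an assumption for this sketch).
    """
    base = predict(image)[target_class]
    h, w = image.shape[:2]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0  # occlude one patch
            # the drop in confidence measures how much this region mattered
            heat[i // patch, j // patch] = base - predict(masked)[target_class]
    return heat
```

Regions with large drops are the factors the model leaned on; a heat map concentrated on background clutter rather than the object itself is a warning sign.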

Other companies are also working with DOD to diagnose problems within computer vision models and to inform human users where to deploy them.

Veritone, a software company, was one of 79 vendors added in February to a $249 million-ceiling blanket purchase agreement from the Army Contracting Command to help the JAIC with evaluation, metrics, testing standards, and best practices for AI-enabled systems. Veritone’s aiWARE platform can determine which parts of massive datasets a model can read effectively and which it can’t.

These companies aren’t in the business of creating more equitable models, however. That’s a far more involved task that data scientists are just beginning to understand.

“We are not focusing on fixing the AI for that particular use case,” Al Brown, chief technology officer of aiWARE, said. “We are focusing on showing how well each of the models do against the disparate datasets.”
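Evaluating a model against disparate datasets, rather than retraining it, amounts to measuring accuracy slice by slice and reporting where the model holds up and where it doesn’t. A minimal, generic sketch of that kind of check follows; it is not Veritone’s aiWARE, and the `model` callable and sample format are assumptions.

```python
from collections import defaultdict

def accuracy_by_slice(model, samples):
    """Report a model's accuracy separately for each labeled slice of data,
    so users can see where it can be trusted and where it can't.

    samples: iterable of (input, true_label, slice_name) tuples (assumed format).
    model:   callable returning a predicted label for an input (assumed).
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for x, label, slice_name in samples:
        totals[slice_name] += 1
        hits[slice_name] += int(model(x) == label)
    return {name: hits[name] / totals[name] for name in totals}

# e.g. {'daylight': 0.94, 'low_light': 0.61, 'aerial': 0.48}
# -> the model is only deployable where its numbers hold up
```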

Retraining models and plugging holes are part of the effort to combat bias in Defense Department AI, but ethics researchers say the technology can’t be relied on to police itself.

‘Noisy’ Algorithms

AI models aren’t biased by design; the problem stems from the human designers who create the algorithms and choose what data to feed them, especially when it comes to images.

A common pitfall for computer vision models is unbalanced training sets. When companies scrape massive public-domain datasets from the internet to feed an algorithm, the resulting model is more error-prone on inputs that differ from what was in the original data, “so the training sets are crucial,” Suchman said.
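One straightforward, if partial, way to surface the problem before training is to measure how the scraped data is distributed across the groups or classes the model will have to handle. A minimal sketch follows; the 5% cutoff and the example labels are chosen purely for illustration.

```python
from collections import Counter

def underrepresented_groups(labels, threshold=0.05):
    """Flag groups that make up less than `threshold` of a training set.
    A model trained on such data tends to be more error-prone on those
    groups than its headline accuracy suggests.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    return {group: share for group, share in shares.items() if share < threshold}

# Example: a scraped vehicle dataset dominated by one class
print(underrepresented_groups(["sedan"] * 9000 + ["motorcycle"] * 150))
# {'motorcycle': 0.016...}  -> scarce in training, likely misread in the field
```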

Even with representative training samples, sometimes humans encode biases in models unintentionally, as seen in law enforcement’s use of AI for predictive policing.

Perhaps the stumbling block that best represents AI’s immaturity is that current computer vision models have immense difficulty recognizing objects or faces when they appear even slightly different from the examples they were trained on.

“As soon as you introduce differences in lighting, differences in background, multiple people, differences in orientation, things become very, very noisy for these systems,” Suchman said.
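The brittleness Suchman describes can be measured directly by re-scoring a model on copies of its test set with the lighting and orientation shifts a fielded system would actually encounter. Here is a minimal sketch using Pillow image transforms; the `model` callable and the specific brightness and rotation values are assumptions.

```python
from PIL import ImageEnhance

def robustness_check(model, images, labels):
    """Compare accuracy on clean test images against the same images under
    lighting and orientation shifts.

    model:  callable mapping a PIL image to a predicted label (assumed).
    images: list of PIL images; labels: matching ground-truth labels.
    """
    perturbations = {
        "clean":   lambda im: im,
        "dim":     lambda im: ImageEnhance.Brightness(im).enhance(0.4),
        "bright":  lambda im: ImageEnhance.Brightness(im).enhance(1.8),
        "rotated": lambda im: im.rotate(15),
    }
    results = {}
    for name, perturb in perturbations.items():
        correct = sum(model(perturb(im)) == y for im, y in zip(images, labels))
        results[name] = correct / len(labels)
    return results  # a large clean-vs-perturbed gap signals a brittle model
```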

Better Guidelines

DOD established two main sets of ethical guidelines to govern AI use. Congress wants them updated.

The first is a 2012 document, DOD Directive 3000.09, which stipulates “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” The Defense Department followed up with Ethical Principles for Artificial Intelligence in 2020, which include responsibility, equity, traceability, reliability, and governability.

Sen. Cory Booker (D-N.J.), Sen. Elizabeth Warren (D-Mass.), Rep. Anthony Brown (D-Md.), and 10 other members of Congress issued an open letter to Secretary of Defense Lloyd Austin in April, calling for oversight of DOD’s automated systems policies “to ensure the use of this technology does not exacerbate discrimination and bias.”

A DOD spokesperson said the department “will respond appropriately, and directly, to the congressional members.”

Industry leaders have joined Congress in calling for more specifics on the existing guidelines.

Ron Keesing, vice president for AI at Leidos Inc., wants the industry to agree on what is meant by terms like “fairness” and “bias.”

“Without formal definitions, it’s very easy for everyone in the community to talk past each other, to believe that we’re doing the same thing when we’re not,” Keesing said.

Advocates of clearer expectations for military AI worry that contractors will keep churning out new AI tech without adopting stronger ethics and fairness standards unless the Defense Department provides more robust definitions, specifies where the guidelines apply, and spells out contract language.

Although some defense contractors are working to address bias, not every company competing for DOD awards will be proactive in addressing the problem, said Pramod Raheja, CEO and co-founder of Airgility Inc., a robotics company that contracts with the Pentagon.

“When the government puts something out, you’re looking at a set of requirements—you’re going to go off that set of requirements, but it doesn’t necessarily mean you’re going to go above and beyond it,” Raheja said. “Or it doesn’t mean that I’m going to think about all the other factors that they haven’t mentioned. Because I’m just trying to get that job done, and I want that contract.”

Reworking a ‘Bogus’ Model

Starting from square one on an entirely new type of model could be a creative—and lucrative—solution to tackling bias.

One government contractor has shown promising results in sidestepping the myriad problems that typically plague computer vision models.

Instead of using billions of training samples and requiring intense processing power, Z Advanced Computing’s cognitive-based model works off a few samples and a single laptop.

It treats samples as objects or concepts, not as pixels. Rather than training a machine to classify a car as a recognizable grid of image fragments, the ZAC cognitive model learns to see a car as something closer to a rectangular body with circular wheels.

The Air Force awarded ZAC $1.5 million across three contracts, starting in 2019, for its cognitive AI model.

Bijan and Saied Tadayon, the brothers behind ZAC, began developing the model in 2011, rejecting the traditional neural network-based models, which Bijan calls “unreliable,” “fragile,” and “bogus.”

Oxford researchers listed ZAC as one of the five leading global big data companies in the “Fourth Industrial Revolution” in a paper for the Industrial and Corporate Change journal.

“Probably, in 10 years, our method will be the golden standard,” Bijan said.

To contact the reporter on this story: Josh Axelrod in Washington at jaxelrod@bloombergindustry.com

To contact the editors responsible for this story: Amanda H. Allen at aallen@bloombergindustry.com; Anna Yukhananov at ayukhananov@bloombergindustry.com
