Goldman Banker Snared by AI as U.S. Government Embraces New Tech

  • Advanced data analytics flags areas for examination
  • Humans still needed to create actionable intelligence

The Securities and Exchange Commission used a proprietary algorithm to spot the suspicious trading that will soon send a former Goldman Sachs Group Inc. banker to prison.

That case is only one example of the rapid adoption of artificial intelligence across the U.S. government. About half of the top 100 regulatory agencies are now using one or more types of AI to carry out their daily work, according to researchers from Stanford and New York universities who are cataloging its use and expect to publish their findings later this year.

For now, “in very few instances have we observed algorithms truly displacing the final exercise of human discretion by agency employees or officials,” said Daniel Ho, law professor at Stanford Law School and one of the team leaders working on the project for the Administrative Conference of the U.S. But AI is pointing the way to using “historical agency data” to predict real-world behaviors.

At the Social Security Administration, a text processing tool will be able to scan a draft decision in a disability benefits appeal, extract the claimant’s functional impairments, and compare them against the requirements of the relevant occupational title, helping attorneys or judges decide whether the individual can be gainfully employed.
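
How such a comparison might work can be sketched in a few lines. The SSA’s actual system and vocabulary aren’t public; the impairment phrases and occupational requirements below are hypothetical stand-ins:

```python
# Hypothetical sketch: flag functional impairments mentioned in a draft
# decision and compare them against a job's physical requirements.
# The phrase lists here are illustrative, not the SSA's actual vocabulary.

IMPAIRMENT_PHRASES = {
    "cannot lift more than 10 pounds": "lifting",
    "unable to stand for prolonged periods": "standing",
    "limited fine motor control": "manual dexterity",
}

OCCUPATION_REQUIREMENTS = {
    "warehouse worker": {"lifting", "standing"},
    "data entry clerk": {"manual dexterity"},
}

def extract_impairments(draft_text: str) -> set[str]:
    """Return the functional capacities the draft says are impaired."""
    text = draft_text.lower()
    return {cap for phrase, cap in IMPAIRMENT_PHRASES.items() if phrase in text}

def conflicting_requirements(draft_text: str, occupation: str) -> set[str]:
    """Occupational requirements that conflict with the claimant's impairments."""
    impaired = extract_impairments(draft_text)
    return OCCUPATION_REQUIREMENTS.get(occupation, set()) & impaired

draft = ("The claimant is unable to stand for prolonged periods "
         "and cannot lift more than 10 pounds.")
print(conflicting_requirements(draft, "warehouse worker"))
# -> {'lifting', 'standing'} (set order may vary)
```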

And the Patent and Trademark Office is testing an algorithm that uses advanced computer vision to search the images in an application and analyze how similar they are to images already trademarked.


Inconsistencies Flagged

Far more than a keyword search, the AI at work in these agencies is machine learning, in which an agency’s own data is used to train an algorithm to aid in decision-making. Right now, it’s used primarily to assist agency staff faced with hundreds if not thousands of text-based filings by scanning all the data and suggesting areas that may be worth further examination, enforcement or adjudication, developers said.

For now, natural language processing—a computer’s ability to understand language and text—is incapable of making end-to-end assessments. Instead, machines are shrinking the haystacks to help human evaluators and subject-matter experts find the needles.
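
The agencies’ models are proprietary, but the “shrinking the haystack” pattern itself is straightforward. A minimal sketch, assuming scikit-learn and toy stand-in filings (none of the data, features, or thresholds here reflect any agency’s actual system):

```python
# Illustrative triage sketch: train a text classifier on past filings that
# humans labeled as warranting review, then surface the highest-scoring new
# filings for human evaluators. Data and labels below are toy stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

past_filings = [
    "routine quarterly report, no material changes",
    "restated earnings after undisclosed related-party transactions",
    "standard proxy statement",
    "unusual trading activity ahead of merger announcement",
]
needs_review = [0, 1, 0, 1]  # labels from past human determinations

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(past_filings)
model = LogisticRegression().fit(X, needs_review)

new_filings = [
    "annual report consistent with prior years",
    "large insider purchases shortly before earnings release",
]
scores = model.predict_proba(vectorizer.transform(new_filings))[:, 1]

# Shrink the haystack: route only the top-scoring filings to human reviewers.
for filing, score in sorted(zip(new_filings, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {filing}")
```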

The haystacks are huge. At the Social Security Administration, about 1,600 judges must make about 500,000 adjudication decisions annually. The SEC’s EDGAR electronic-document system receives 100,000 to 120,000 submissions per year, mostly text-based. At the PTO, about 600 trademark attorneys receive about 600,000 applications per year, each time-consuming to examine.

The algorithm being piloted by the PTO could take the World Wildlife Fund panda logo, for example, search for similar marks across disparate data sets, and come back with a ranked set of possible matches, said David Engstrom, law professor at Stanford Law School who is also working on the ACUS project.
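
The ranking step Engstrom describes can be illustrated in isolation. A minimal sketch, assuming each mark has already been reduced to an embedding vector by a computer-vision model (the PTO’s actual pipeline is not public, and the random vectors below are stand-ins for real image embeddings):

```python
# Illustrative ranking step: given embedding vectors assumed to be produced
# by a computer-vision model, return the registered marks most similar to
# the applicant's image by cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in embeddings; a real system would derive these from the images.
registered = {f"mark_{i}": rng.normal(size=128) for i in range(1000)}
query = rng.normal(size=128)  # embedding of the applicant's logo

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(registered.items(), key=lambda kv: cosine(query, kv[1]),
                reverse=True)
for name, vec in ranked[:5]:  # top candidates for a human attorney to inspect
    print(name, round(cosine(query, vec), 3))
```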

“Very quickly, AI technologies are evolving from far-off dreams of science fiction to mainstream, everyday uses that take computers to new levels at awe-inspiring speeds,” said Patent and Trademark Office Director Andrei Iancu in a speech in January.

Finding Pandas and Pigs

But for all its promise, there is a risk to relying on AI.

“It could be a better tool and yet it could be subject to gaming,” Engstrom said. “Better-heeled members of the regulated community may be able to reverse engineer the tool and figure out what the agency is doing, and then duck enforcement,” he said.

That fear explains why agencies are reluctant to discuss specifics of what tools they use and how they work.

The SEC won’t discuss how its algorithms spotted insider trading last year by former Goldman Sachs banker Woojae “Steve” Jung, who pleaded guilty and last month was sentenced to three months in prison.

“The Division of Enforcement uses a number of tools to identify suspicious trading and abuses perpetrated on retail investors by financial professionals,” SEC Chairman Jay Clayton said in June, when he broadly outlined a few of the commission’s techniques for analyzing data.

Some agencies are earlier adopters than others.

Stanford scholars demonstrated how recent advances in image-recognition techniques could vastly improve regulatory enforcement of concentrated animal feeding operations by the Environmental Protection Agency.

EPA has estimated that nearly 60% of CAFOs do not hold permits, but they’re hard to locate. Using satellite images downloaded from the Department of Agriculture, the Stanford scholars successfully trained two convolutional neural networks to detect the presence of pig and poultry CAFOs in North Carolina.
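
The Stanford models themselves aren’t reproduced here, but a minimal PyTorch sketch shows the general shape of a convolutional classifier of this kind, with illustrative layer sizes and a random tensor standing in for a satellite tile:

```python
# Minimal sketch of a convolutional classifier of the kind described:
# a binary prediction (CAFO present / absent) from a satellite image tile.
# Architecture and sizes are illustrative, not the Stanford models.
import torch
import torch.nn as nn

class CAFODetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))  # logit: CAFO vs. no CAFO

model = CAFODetector()
tile = torch.randn(1, 3, 64, 64)  # stand-in for a 64x64 RGB satellite tile
prob = torch.sigmoid(model(tile))
print(f"P(CAFO present) = {prob.item():.2f}")
```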

At present, however, the EPA is not aware of any use of AI regarding CAFOs, an agency spokesperson told Bloomberg Government. The agency is using AI in a program that allows scientists to simulate human behaviors that create a risk of exposure to chemicals.

What the Future Holds

Legal scholars are looking to the future of AI and its implications for administrative law.

One question is how courts overseeing challenges to regulations would review whether an agency had adequate reasons for decisions made by machines, said Catherine Sharkey, law professor at NYU School of Law, who is also working on the ACUS project.

That applies whether a machine makes the decision or is the subject of it. The National Highway Traffic Safety Administration may be asked to approve fully autonomous vehicles, which could dramatically reduce accident fatalities caused by human error, Sharkey said. The challenge is how regulators measure whether the AI components of autonomous operations pose unreasonable risks.

“To me, all sorts of questions about how we’re going to go about regulating health and safety risks in society are implicated by this topic,” Sharkey said.

Cary Coglianese, law and political science professor at the University of Pennsylvania Law School, said he envisions a day in which certain benefits or licensing determinations could be made using AI alone, without human intervention.

Machine learning algorithms are sometimes called “black-box” algorithms because they learn on their own—effectively making choices as they work through vast quantities of data to find patterns—making it difficult to say exactly why a specific determination was made, Coglianese said.

“Some observers will see automated, opaque governmental systems as raising basic constitutional and administrative law principles,” he said.

Yet legal analysis supports agency use of machine learning, Coglianese said. At the same time, agencies have to be willing to spend on infrastructure, have people who understand how AI tools work, and be transparent about how decisions are made, he said.

To contact the reporter on this story: Cheryl Bolen in Washington at cbolen@bgov.com

To contact the editors responsible for this story: Bernie Kohn at bkohn@bloomberglaw.com; Heather Rothman at hrothman@bgov.com
