The financial world wants to open AI’s black boxes

Powerful machine-learning methods have taken the tech world by storm in recent years, vastly improving voice and image recognition, machine translation, and many other things.

Now these techniques are poised to upend countless other industries, including the world of finance. But progress may be stymied by a significant problem: it’s often impossible to explain how these “deep learning” algorithms reach a decision.

Adam Wenchel, vice president of machine learning and data innovation at Capital One, says the company would like to use deep learning for all sorts of functions, including deciding who is granted a credit card. But it cannot do that because the law requires companies to explain the reason for any such decision to a prospective customer. Late last year Capital One created a research team, led by Wenchel, dedicated to finding ways of making these computer techniques more explainable.

“Our research is to ensure we can maintain that high bar for explainability as we push into these much more advanced, and inherently more opaque, models,” he says.

Deep learning emerged in the last five years as a powerful way of mimicking human perceptual abilities. The approach involves training a very large neural network to recognize patterns in data. It is loosely inspired by a theory about the way neurons and synapses facilitate learning. Although each simulated neuron is simply a mathematical function, the complexity of these interlinked functions makes the reasoning of a deep network extremely difficult to untangle.
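To make that concrete, here is a minimal sketch in Python with NumPy, purely illustrative and not any company's model: each simulated "neuron" is just a weighted sum pushed through a nonlinearity, yet stacking even a few layers of such functions already yields a mapping whose behavior is hard to trace back to any single input.

```python
import numpy as np

def neuron_layer(x, w, b):
    # Each simulated "neuron" is simply a weighted sum passed through
    # a nonlinearity (here, ReLU).
    return np.maximum(0.0, x @ w + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)  # a 4-feature input, stand-in for real data

# Compose a few layers of these simple functions. Every piece is
# elementary math, but the combined mapping from input to output
# quickly becomes difficult to untangle.
h1 = neuron_layer(x, rng.normal(size=(4, 8)), rng.normal(size=8))
h2 = neuron_layer(h1, rng.normal(size=(8, 8)), rng.normal(size=8))
out = h2 @ rng.normal(size=8)  # a single opaque score
```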

Other machine-learning techniques, including some that outperform deep learning in certain scenarios, are far more transparent. But deep learning, which enables the kind of sophisticated analytics the finance industry finds useful, can be very difficult to interrogate.

Some startups aim to exploit concerns over the opacity of existing algorithms by promising to use more transparent approaches (see “An AI-Fueled Credit Formula Might Help You Get a Loan”).

This issue could become more significant over the next few years as deep learning becomes more commonly used and as regulators turn their attention to algorithmic accountability. Starting next year, under its General Data Protection Regulation, the European Union may require any company to be able to explain a decision made by one of its algorithms.

The problem has also caught the attention of the Defense Advanced Research Projects Agency, which does research for the U.S. Department of Defense. Last year DARPA launched an effort to fund approaches to making machine learning less opaque (see “The U.S. Military Wants Autonomous Machines to Explain Themselves”). The 13 projects selected for funding show a variety of approaches for making algorithms more transparent.

The hope is that deep learning can be used to go beyond just matching human perceptual capabilities. A credit card company might, for example, feed credit history and other financial data into a deep network and train it to recognize people who might default on their credit card payments.
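A rough sketch of that setup, written here with Keras on fabricated data, might look like the following; the feature count, network size, and training details are illustrative assumptions, not Capital One's system.

```python
import numpy as np
import tensorflow as tf

# Stand-in for real credit data: e.g. utilization, payment history, income.
n_features = 20
X = np.random.rand(1000, n_features).astype("float32")
y = np.random.randint(0, 2, size=(1000,)).astype("float32")  # 1 = defaulted

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of default
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# The trained network outputs a default probability, but offers no built-in
# account of *why* a given applicant scored high or low -- which is exactly
# the explainability gap regulators care about.
p_default = model.predict(X[:1])
```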

Capital One is also looking at deep learning as a way of automatically detecting fraudulent charges more reliably, Wenchel says, although the company is wary of trusting such a system when its reasoning can’t be examined. “We operate in a heavily regulated industry,” he says. “We need to be able to explain both internally as well as to people why we’re making decisions. And make sure we’re making decisions for the right reasons.”

“Deep learning is a very big buzzword right now, and there’s been great progress in computer vision and natural language processing,” says Trevor Darrell, a professor at UC Berkeley who is leading one of the projects selected for funding from DARPA. “But [deep-learning systems] are criticized because it’s sometimes hard to figure out what’s going on inside of them.”

For the DARPA project, Darrell’s group is developing several new deep-learning approaches, including more complex deep networks capable of learning several things simultaneously. There are also approaches that include an explanation in the training data: in the case of image captioning, for example, an image classified as a cat would be paired with an explanation for why it was classified as such. The same approach could be used in classifying credit card charges as fraudulent. “All these things get us to more interpretable deep networks,” Darrell says.
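One way to picture the "explanation in the training data" idea, offered here only as an illustration and not as Darrell's actual architecture, is a network with two output heads: one predicts whether a charge is fraudulent, while the other is trained against human-supplied reason codes so the model learns to emit a justification alongside its decision.

```python
import tensorflow as tf

# Illustrative sizes: per-transaction features and a small set of
# hypothetical reason codes ("unusual location", "amount spike", ...).
n_features = 30
n_reasons = 8

inputs = tf.keras.Input(shape=(n_features,))
h = tf.keras.layers.Dense(64, activation="relu")(inputs)
h = tf.keras.layers.Dense(64, activation="relu")(h)

# Two heads trained jointly: the decision and its stated reason.
fraud = tf.keras.layers.Dense(1, activation="sigmoid", name="fraud")(h)
reason = tf.keras.layers.Dense(n_reasons, activation="softmax", name="reason")(h)

model = tf.keras.Model(inputs, [fraud, reason])
model.compile(
    optimizer="adam",
    loss={"fraud": "binary_crossentropy",
          "reason": "sparse_categorical_crossentropy"},
)
# Training would pair each labeled charge with a human-provided reason code:
# model.fit(X, {"fraud": y_fraud, "reason": y_reason_code}, epochs=5)
```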
