Using AI And Deep Learning To Improve Consumer Access To Credit

Artificial intelligence, machine learning, and neural network-based deep learning are concepts that have recently come to dominate venture capital funding, startup formation, promotion, and exits, as well as policy discussions. The highly publicized triumphs over humans in Go and poker; rapid progress in speech recognition, image identification, and language translation; and the proliferation of talking and texting virtual assistants and chatbots have helped inflate the market caps of Apple (#1 as of February 17), Google (#2), Microsoft (#3), Amazon (#5), and Facebook (#6).

While these companies dominate the headlines (and the war for the relevant talent), other companies that have been analyzing data or providing analysis tools for years are also capitalizing on recent AI advances. Cases in point are Equifax and SAS: the former is developing deep learning tools to improve credit scoring, and the latter is adding new deep learning functionality to its data mining tools and offering a deep learning API.

Both companies have a lot of experience in what they do. Equifax, founded in 1899, is a credit reporting agency, collecting and analyzing data on more than 820 million consumers and more than 91 million businesses worldwide. SAS, founded in 1976, develops and sells data analytics and data management software.

The AI concepts that make headlines today also have a long history. Moving beyond speedy calculation, two approaches to applying early computers to other types of cognitive work emerged in the 1950s. One was labeled “artificial intelligence,” the other “machine learning” (a decidedly less sexy and attention-grabbing name). While the artificial intelligence approach was rooted in symbolic logic, a branch of mathematics, the machine-learning approach was rooted in statistics. There was another important distinction between the two: the artificial intelligence approach followed the dominant computer science paradigm, in which a programmer defines what the computer has to do by coding an algorithm, a model, a program in a programming language. The machine-learning approach instead relied on data and on statistical procedures that found patterns in the data or classified it into different buckets, allowing the computer to “learn” (i.e., optimize the accuracy of a given task) and to “predict” (i.e., classify) new data fed to it.
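
To make the distinction concrete, here is a minimal, hypothetical sketch of the machine-learning side: rather than a programmer hand-coding rules, a model fits patterns in labeled data and then classifies new data it has not seen. The toy loan data and the scikit-learn usage are this illustration’s own assumptions, not anything from Equifax or SAS.

```python
# Instead of hand-coded rules, a statistical model "learns" from labeled
# examples and "predicts" a bucket for new, unseen examples.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [monthly_income, debt_ratio] -> repaid (1) / defaulted (0)
X_train = [[5200, 0.15], [1800, 0.65], [4100, 0.30], [2200, 0.70], [6000, 0.10]]
y_train = [1, 0, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X_train, y_train)     # "learn": optimize accuracy on the task

X_new = [[3900, 0.25], [2000, 0.80]]
print(model.predict(X_new))     # "predict": classify new data into buckets
```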

For traditional computer science, data was what the program processed and the output of that processing. With machine learning, the data itself defines what to do next. Says Oliver Schabenberger, Executive Vice President and Chief Technology Officer at SAS: “What sometimes gets overlooked is that it’s really the data that drives machine learning.”

Over the years, machine learning has been applied successfully to problems such as spam filtering, handwriting recognition, machine translation, fraud detection, and product recommendations. Many successful “digital natives,” such as Google, Amazon, and Netflix, have built their fortunes with the help of machine learning algorithms. The real-world experience of these companies has proved how successful machine learning can be at using lots of data from a variety of sources to predict consumer behavior: using lots and lots of data makes predictive models more robust and predictions more accurate. “Big Data,” however, gave rise not only to a new type of data-driven company, but also to a new type of machine learning: “Deep Learning.”

Deep learning takes the machine-learning approach much further by applying it to multi-layer “artificial neural networks.” Influenced by a computational model of human neural networks first developed in 1943, artificial neural networks got their first software manifestation in the 1957 Perceptron, a pattern recognition algorithm based on a two-layer network. Abandoned for a while because of the limited computing power of the day, deep neural networks have seen a remarkable revival over the last decade, fueled by advanced algorithms, big data, and increased computing power, specifically in the form of graphics processing units (GPUs), which process data in parallel and thus cut down on the time required to “train” the computer.
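
The Perceptron mentioned above can be sketched in a few lines. What follows is an illustrative reconstruction of the classic learning rule on toy data (here, learning the logical AND), not Rosenblatt’s original code.

```python
# A bare-bones perceptron: a weighted sum of inputs is thresholded, and the
# weights are nudged after every mistake until the errors stop.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1])                                   # AND of the two inputs

w = np.zeros(2)   # weights, one per input
b = 0.0           # bias (threshold)
lr = 0.1          # learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0   # threshold activation
        w += lr * (target - pred) * xi      # update weights only when wrong
        b += lr * (target - pred)

print(w, b)  # a line separating the AND-true case from the rest
```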

Today’s deep neural networks move vast amounts of data through many layers of hardware and software, each layer coming up with its own representation of the data and passing what it “learned” to the next layer. Artificial intelligence attempts “to make a machine that thinks like a human,” says Schabenberger, while “deep neural networks try to solve pretty narrow tasks.” Relinquishing the quest for human-like intelligence, deep learning has succeeded in vastly expanding the range of narrow tasks machines can learn and perform.
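
A minimal sketch of that layer-by-layer flow, with random (untrained) weights purely to show how each layer re-represents the data it receives; the layer sizes are arbitrary.

```python
# Each layer transforms the representation it receives and passes the result on.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)                  # input: 8 raw features

layer_sizes = [8, 16, 16, 4]            # three layers, narrowing to 4 outputs
h = x
for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    W = rng.normal(size=(n_in, n_out))  # this layer's (here random) weights
    h = np.maximum(0, h @ W)            # ReLU: this layer's representation
    print(h.shape)                      # handed to the next layer
```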

“We noticed a couple of years ago,” says Peter Maynard, Senior Vice President of Global Analytics at Equifax, “that we were not getting enough statistical lift from our traditional credit scoring methodology.” The conventional wisdom in the credit scoring industry at the time was that lenders had to keep using traditional machine learning approaches such as logistic regression because the results were interpretable, i.e., in compliance with regulation. Modern machine learning approaches such as deep neural networks, which promised more accurate results, presented a challenge in that regard: they were not interpretable, being considered a “black box,” a process so complex that even its programmers do not fully understand how the learning machine reached the results it produced.
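
The interpretability advantage of a method like logistic regression is easy to see in code: each input attribute gets a single coefficient whose sign and size can be read off directly. The feature names and data below are hypothetical.

```python
# Logistic regression stays interpretable: one readable coefficient per attribute.
from sklearn.linear_model import LogisticRegression

features = ["utilization", "missed_payments", "account_age_months"]
X = [[0.20, 0, 84], [0.95, 3, 12], [0.40, 1, 40], [0.10, 0, 120], [0.85, 2, 18]]
y = [1, 0, 1, 1, 0]   # 1 = repaid, 0 = defaulted (toy labels)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")  # sign and magnitude explain each attribute's pull
```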

“My team decided to challenge that and find a way to make neural nets interpretable,” says Maynard. He explains: “We developed a mathematical proof that shows that we could generate a neural net solution that can be completely interpretable for regulatory purposes. Each of the inputs can map into the hidden layer of the neural network and we imposed a set of criteria that enable us to interpret the attributes coming into the final model. We stripped apart the black box so we can have an interpretable outcome. That was revolutionary, no one has ever done that before.”
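
Equifax’s actual criteria are the subject of its patent filing and are not spelled out here, so the sketch below shows just one generic way a hidden layer can be kept readable: wire each hidden unit only to a single group of related inputs, so that it behaves like a named sub-score. The groupings, names, and weights are all assumptions made for illustration.

```python
# One illustrative interpretability constraint: mask the input-to-hidden
# weights so each hidden unit sees exactly one named group of attributes.
import numpy as np

groups = {                                  # input index groups -> one hidden unit each
    "payment_history": [0, 1],
    "utilization":     [2, 3],
    "account_age":     [4],
}

x = np.array([0.9, 0.8, 0.3, 0.2, 0.7])    # one hypothetical applicant
W = np.zeros((len(x), len(groups)))        # start with no connections
for j, idx in enumerate(groups.values()):
    W[idx, j] = 1.0                        # allow only in-group connections

hidden = x @ W                             # each unit is a traceable sub-score
for name, h in zip(groups, hidden):
    print(f"{name}: {h:.2f}")
```

Because the off-group weights are pinned at zero, every hidden value can be traced back to a named set of inputs, which captures the spirit (though certainly not the specifics) of the mapping Maynard describes.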

Maynard reports that the neural net has improved the predictive ability of the model by up to 15%. The larger the data set analyzed and the more complex the analysis, the bigger the improvement. “In credit scoring,” says Maynard, “we spend a lot of time creating segments to build a model on. Determining the optimal segments can sometimes take 20% of the time it takes to build a model. In the context of neural nets, those segments are the hidden layers: the neural net does it all for you. The machine is figuring out what the segments are and what the weights in a segment are, instead of having an analyst do that. I find it really powerful.”

The immediate benefit of using neural nets is faster model development, as some of the work previously done by data scientists in building and testing a model is automated. But Maynard envisions “full automation,” especially for a big part of a data scientist’s job: the ongoing tweaking of the model. Maynard: “You have a human reviewing it to make sure it’s executing as intended, but the whole thing is done automatically. It’s similar to search optimization or product recommendations, where the model gets tweaked every time you click. In credit scoring, when you have a neural network with superior predictability and interpretability, there is no reason to have a person in the middle of that process.”
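
One way to picture that click-by-click tweaking is an incrementally trained model that folds in each new outcome as it arrives, with a human reviewing metrics rather than re-fitting by hand. The features, labels, and scikit-learn setup below are hypothetical.

```python
# A sketch of continuous model updating: each observed outcome nudges the model.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")           # logistic-loss linear model
classes = np.array([0, 1])                       # defaulted / repaid

def on_new_outcome(x, y):
    """Fold one freshly observed outcome into the live model."""
    model.partial_fit(np.array([x]), np.array([y]), classes=classes)

on_new_outcome([0.4, 1, 36], 1)                  # outcomes stream in one at a time
on_new_outcome([0.9, 4, 10], 0)
print(model.predict([[0.5, 2, 24]]))             # the model reflects the latest data
```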

In addition, the “attributes,” or the factors affecting a credit score (e.g., the size of an individual’s checking account balance and how it was used over the last six months), are now “data-driven.” Instead of being hypotheses developed by data scientists, the attributes are now created by the deep learning process, on the basis of a much larger set of historical or “trended” data. “We are looking at 72 months of data and identifying patterns of consumer behavior over time, using machine learning to understand the signal and the strength of the signal over that time period,” says Maynard. “Now, instead of creating thousands of attributes, we can create hundreds of thousands of attributes for testing. The algorithms will determine what’s the most predictive in terms of the behavior we are trying to model.”
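
A rough sketch of how such trended attributes might be mass-produced and ranked: compute summary statistics over trailing windows of a 72-month history, then score every candidate attribute against the outcome being modeled. The windows, statistics, labels, and scoring method here are stand-ins, not Equifax’s actual pipeline.

```python
# Generate candidate attributes from 72 months of trended data, then rank them.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
balances = rng.gamma(2.0, 500.0, size=(1000, 72))        # 1000 consumers x 72 months
y = (balances[:, -6:].mean(axis=1) < 900).astype(int)    # toy outcome to predict

attributes, names = [], []
for window in (3, 6, 12, 24, 72):                        # trailing windows, in months
    for stat, fn in (("mean", np.mean), ("max", np.max), ("std", np.std)):
        attributes.append(fn(balances[:, -window:], axis=1))
        names.append(f"balance_{stat}_{window}m")

X = np.column_stack(attributes)                          # 15 candidate attributes
scores = mutual_info_classif(X, y)                       # rank by predictive signal
for name, s in sorted(zip(names, scores), key=lambda t: -t[1])[:5]:
    print(f"{name}: {s:.4f}")                            # the strongest candidates
```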

The result, and the most important benefit of using modern machine learning tools, is greater access to credit. Analyzing two years’ worth of U.S. mortgage data, Equifax determined that numerous declined loans could have been made safely. That promises a considerable expansion of the universe of approved mortgages. “The use case we showed regulators,” says Maynard, “was in the telecom industry, where people had to put down a down payment to get a cell phone. With this model they don’t need to do that anymore.”

Equifax has filed for a patent on its work on improving credit scoring. “It’s the dawn of a new age: enabling greater access to credit is a huge opportunity,” says Maynard.
