Artificial intelligence (AI) in banking: The double-edged sword

Artificial intelligence (AI) is proving to be a double-edged sword for the banking industry. Sifting through the chatter in the financial industry, two main themes emerge. First, the ‘BigTechs’, with their prowess in data, AI and cloud, could exert significant strain on banking profits and eventually on the stability of financial systems. Second, the use of AI algorithms and models brings risks that are not yet fully understood or appreciated by industry or regulators. In other words, damned if you do, damned if you don't.

The fact that AI is both an inevitability and an accelerating risk concerns regulators around the world. Yet when it comes to the use of AI models in the financial industry, the language is neither clear nor consistent. What are regulators referring to when they talk about the risks of AI, and what exactly is the concern? Recent warnings from the US and Australian regulators suggest they appreciate the strategic importance of getting this right, but there is still no indication of any formal guidance on the issue.

Banks have been using advanced models for a while now, including what could be classified as AI. Basel II regulation certainly introduced a whole heap of statistical learning models in areas such as credit risk, market risk, and operational risk. Banks have been investing in model risk management functions and have built capabilities to manage the risks of these models. So, what exactly is the regulatory concern when it comes to using AI models, and what is hindering policy direction? Let’s start with the basics.

If I had a penny…! AI may be well defined and understood by a limited group of academics, researchers and industry practitioners, but it is far from commonly understood across industries. Britannica defines AI as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings”. Intelligent beings? That could be another can of worms. The definition is correct, but it is hardly a practical explanation. At the other end, we have technical explanations where AI is used as an umbrella term for a combination of techniques such as machine learning (ML), deep learning (DL), statistics, mathematics and other advanced analytical techniques. The industry is in dire need of a workable explanation and classification of AI, one that will improve the quality of discussions around AI and the risks associated with it.

The Defense Advanced Research Projects Agency (DARPA), the standard bearer for cutting-edge innovation in AI, has come up with a novel way to explain AI: three waves of AI models, classified by their ability to process information.

Handcrafted knowledge models are rule-based AI models in which humans define the structure of the knowledge and machines explore the specifics. A rule-based engine is a rudimentary AI application that enables reasoning over ‘narrowly defined problems’, but it is nonetheless AI as per DARPA. An example is the way banks calculate Basel II regulatory capital, where a pre-determined capital formula is applied to a set of pre-defined products. If there is new information, such as a new product, the rule needs to be updated. These AI models have no learning capability and handle uncertainty very poorly. Yet these first wave AI models are still relevant to emerging and accelerating risks such as cybersecurity. Recently, a system called ‘Mayhem’, built on first wave principles, solved a decade-old security challenge in DARPA’s Cyber Grand Challenge (CGC).
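To make the first wave concrete, here is a minimal sketch of a rule-based capital calculator. The product names, risk weights and the 8% capital ratio are simplified illustrative assumptions, not a faithful reproduction of any bank's actual Basel II calculation.

```python
# Illustrative rule-based ("first wave") capital calculator.
# Risk weights and the 8% capital ratio are simplified assumptions for illustration.

RISK_WEIGHTS = {
    "residential_mortgage": 0.35,
    "corporate_loan": 1.00,
    "sovereign_bond": 0.00,
}

CAPITAL_RATIO = 0.08  # capital held as a fraction of risk-weighted assets


def required_capital(product: str, exposure: float) -> float:
    """Apply a pre-defined rule; a new product requires a new rule to be written."""
    if product not in RISK_WEIGHTS:
        raise KeyError(f"No rule defined for product: {product}")
    risk_weighted_asset = exposure * RISK_WEIGHTS[product]
    return risk_weighted_asset * CAPITAL_RATIO


print(required_capital("corporate_loan", 1_000_000))  # 80000.0
# required_capital("crypto_asset", 500_000) would raise KeyError:
# the model cannot handle information it has no rule for.
```

The point of the sketch is the last comment: the engine reasons perfectly well inside its pre-defined rules and fails completely outside them.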

The inherent risk of first wave AI models, as measured by their ability to handle uncertainty, is very high. However, these models exist in highly controlled environments, because they do not cope well with the dynamics of the natural world. The realised risk of first wave AI models can therefore be very low, as they are applied to narrowly defined problems, which reduces uncertainty to begin with. Nonetheless, their usage can pose catastrophic risks if these models are making material decisions.

Statistical learning models are capable of learning within a ‘defined problem domain’. The domain could be, for example, language processing or visual pattern recognition. The complexity of the problem is represented by the data: richer data yields more information, which requires more complex (non-linear) learning algorithms to represent it. If you have ten pictures of dogs, you can develop a simple algorithm that learns from those ten pictures and identifies a dog with reasonable accuracy. If you have a thousand pictures covering different breeds, you can develop a much more complex model that identifies not just the dog but the breed as well, as sketched below. Statistical learning AI models possess nuanced classification and prediction capabilities but have minimal reasoning ability and no contextual capability.
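A minimal sketch of that idea follows, using synthetic data in place of dog pictures and comparing a simple linear learner with a more complex non-linear one. The dataset, features and model choices are illustrative assumptions only.

```python
# Sketch: richer, more structured data calls for a more complex (non-linear) learner.
# Synthetic features stand in for image data; all numbers are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# "Many pictures, many breeds": a multi-class problem with non-linear structure.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)
nonlinear = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("linear model accuracy:    ", accuracy_score(y_test, linear.predict(X_test)))
print("non-linear model accuracy:", accuracy_score(y_test, nonlinear.predict(X_test)))
```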

The financial industry is rife with statistical learning models. Credit rating models that estimate the probability of an individual or a company defaulting, and anti-money laundering models that estimate the propensity for money laundering, are just two examples.
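As an illustration of what such a model might look like in its simplest form, here is a sketch of a toy probability-of-default (PD) model fitted with logistic regression. The features, synthetic data and coefficients are invented for illustration and bear no relation to any real credit scoring model.

```python
# Sketch of a "second wave" credit risk model: estimate probability of default (PD)
# from borrower features. Data and feature choices are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Hypothetical features: debt-to-income ratio, months in arrears, credit utilisation.
X = np.column_stack([
    rng.uniform(0.0, 1.0, n),   # debt-to-income
    rng.poisson(0.5, n),        # months in arrears
    rng.uniform(0.0, 1.0, n),   # credit utilisation
])

# Synthetic default outcomes driven by those features (illustration only).
logit = -4.0 + 3.0 * X[:, 0] + 1.5 * X[:, 1] + 2.0 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

pd_model = LogisticRegression().fit(X, y)

new_applicant = [[0.45, 0, 0.80]]
print("estimated PD:", pd_model.predict_proba(new_applicant)[0, 1])
```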

The inherent risk of second wave models, as measured by their ability to handle uncertainty, is high. They cope reasonably well with uncertainty but are dependent on data that represents that uncertainty. More complex algorithms trained on big data are statistically impressive but individually unreliable, and they are prone to inherent biases that can be exploited. Any autonomy given to these models needs to be monitored and governed, as maladaptation and unwanted behaviour are possible; one simple monitoring control is sketched below. Overall, the realised risk of these AI models can be very high, because the uncertainty of the problem domain can vary by large margins. As with first wave models, the use of statistical learning AI models can result in catastrophic risks depending on the materiality of the decisions.
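One example of the kind of monitoring control banks already apply to second wave models is a data drift check. The sketch below uses the population stability index (PSI) on model scores; the quantile bucketing and the 0.1 / 0.25 thresholds are conventional rules of thumb rather than regulatory requirements, and the score distributions are made up.

```python
# Sketch of a common model monitoring control: the population stability index (PSI),
# which flags when the population a model scores has drifted from its development sample.
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, buckets: int = 10) -> float:
    """PSI between a reference (development) sample and a current (production) sample."""
    edges = np.quantile(expected, np.linspace(0, 1, buckets + 1))
    edges[0], edges[-1] = -np.inf, np.inf                      # catch out-of-range values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))


rng = np.random.default_rng(0)
scores_at_development = rng.beta(2, 5, 10_000)   # scores when the model was built
scores_in_production = rng.beta(2, 3, 10_000)    # scores on today's population

value = psi(scores_at_development, scores_in_production)
status = "investigate" if value > 0.25 else "monitor" if value > 0.1 else "stable"
print(f"PSI = {value:.3f} -> {status}")
```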

Contextual adaptation AI models explain their decisions and are typically built as systems that construct contextual models of real-world phenomena. The key here is explainability alongside automation. Generative models create explanations that provide context for decisions and probabilities. This is cutting-edge research under the banner of Explainable Artificial Intelligence (XAI), and it matters for complex decisions, such as those involving ethical dilemmas. Contextual adaptation is critical to reducing model risk, and ultimately the risk of decisions, as most decisions are expected to be automated in the future. A flavour of present-day explanation techniques is sketched below.
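Mature third wave tooling does not yet exist, but simpler post-hoc explanation techniques give a flavour of what explainability looks like in practice. The sketch below uses permutation importance, which measures how much a model's accuracy degrades when each input feature is shuffled. It is a stand-in for illustration only, not the generative, context-building capability DARPA describes, and the data and model are synthetic assumptions.

```python
# Sketch of a simple post-hoc explanation technique (permutation importance):
# how much does accuracy drop when each feature's link to the outcome is broken?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    shuffled = X_test.copy()
    rng.shuffle(shuffled[:, j])   # destroy the feature's relationship with the target
    drop = baseline - accuracy_score(y_test, model.predict(shuffled))
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

Features with the largest accuracy drop are the ones the model leans on most, which is the kind of statement a model owner, or a regulator, would want alongside every material automated decision.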

The inherent and realised risks of third wave AI models are very high due to the immaturity of the field. While explainability is important for understanding AI decisions, it is absolutely critical for managing their risks.

The financial industry is already using the advanced techniques discussed under the first and second waves of AI models to make decisions or to inform them. Any concerns raised about the use of AI in banking need to be measured and specific. A blanket call on the risks posed by AI models ignores the fact that banks have built significant capabilities to handle most of the risks discussed under the first and second waves. Regulation needs to catch up, and the pace of policy setting and guidance needs to speed up, to alleviate any inertia in implementing AI. There is a considerable risk of either hindering the progress of AI use in banking or not properly recognising the risks that AI models bring to it. AI governance efforts need to start with the appropriate identification and classification of AI models. The DARPA method discussed here may not be the final solution, but it is certainly a starting point.

Finally, AI is meant to reflect human intelligence. The more scared we are of it, the scarier it looks. It is time to approach AI risks head on and promote the use of AI in the banking industry.
