Explainable AI (XAI) is an emerging branch of AI in which AI systems are built to explain the reasoning behind every decision they make. We investigate some of its key benefits and design principles.
Having deployed 20+ AI solutions over the past 10 years, from an Intelligent Audience Measurement System for a media company in 2009 to an Intelligent Financial Compliance System for a large CPG customer in 2018, one skepticism has stayed constant with enterprise customers: trustworthy production deployment of an AI system. Yes, it is the Holy Grail of AI, and for good reason, whether it is losing a high-value customer to a wrong churn prediction or losing dollars to an incorrectly classified financial transaction. In reality, customers are less bothered about the accuracy of the AI model; their real concern is a data scientist's inability to answer the question, "How do I trust its decision making?"
AI Systems: How do I trust them?

In most AI-enabled digital transformations, customers are fascinated by adding AI capability to their systems for a particular business value proposition. On the other hand, most data scientists are fascinated by applying the most fashionable algorithms (DNNs, GANs, DRNs, etc.). Sadly, both of these stakeholders forget one key consideration: accountability and trust. In real life, every decision, whether made by a machine, a junior employee, or the CEO, is subject to regular scrutiny, and the decision maker must be able to explain its actions in order to improve the overall business systems and processes. This gives rise to an emerging branch of AI called "Explainable AI" (XAI).
XAI is an emerging branch of AI in which AI systems are made to explain the reasoning behind every decision they make. The following is a simple depiction of the full circle of AI.
Apart from addressing the above scenarios, XAI offers deeper business benefits, such as:
Just like DevOps, MLOps is another emerging field of AI in which the dreaded deployment scenarios are being solved using tools and technology. But it is not just about tools and tech; it is about the roles humans play around these AI systems. Broadly, we can define them in three buckets: Trainers, Explainers, and Sustainers.
For both the Trainer and Sustainer roles, there are plenty of toolkits available in the data scientist's bag. But for the Explainer, the situation is not as rosy. Here is the reason.
AI/ML algorithms are notoriously black box in nature because they learn their weights and biases from large volumes of highly non-linear training data.
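To make the black-box point concrete, here is a minimal sketch of my own (not from the article) contrasting a white-box linear model, whose one-coefficient-per-feature weights can be read directly, with a small neural network whose thousands of weights carry no direct feature meaning. The dataset and model choices are purely illustrative assumptions.

```python
# Illustrative sketch; scikit-learn and this dataset are stand-ins,
# not the article's actual stack.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# White box: one readable coefficient per input feature.
white_box = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
print("Linear model weights (one per feature):", white_box[-1].coef_.shape)

# Black box: the same task learned as thousands of entangled weights.
black_box = make_pipeline(
    StandardScaler(), MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000)
).fit(X, y)
n_weights = sum(w.size for w in black_box[-1].coefs_)
print("Neural net weights (no direct feature meaning):", n_weights)
```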
There are three key dimensions of XAI: Responsible, Traceable, and Understandable AI.
There are 8 general principles for adopting XAI, from the conceptualization to the deployment of AI solutions.
There are two major families of XAI techniques.
Model-Specific Techniques: These fall into two sets: first, existing ML algorithms that are already partially explainable; and second, algorithms being researched and developed to be completely explainable, aka White Box Models. SPINE in particular is making news, but it has yet to make landfall in enterprise production.

Model-Agnostic Techniques: These work from outside the operational model by probing how it responds to inputs. One such technique, LIME, approximates the model's local decision boundary around a single prediction with a simple, interpretable surrogate; a sketch follows below.
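To ground the model-agnostic idea, here is a minimal, hedged sketch of LIME on tabular data. The scikit-learn random forest and the Iris dataset are my own illustrative assumptions; the lime calls (LimeTabularExplainer, explain_instance, as_list) are the library's actual API.

```python
# Minimal LIME sketch: explain one prediction of a black-box classifier.
# Assumes `pip install lime scikit-learn`; dataset/model are illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

iris = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(iris.data, iris.target)  # the black box we want to explain

# LIME perturbs the instance and fits a local linear surrogate around it.
explainer = LimeTabularExplainer(
    iris.data,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    iris.data[0], model.predict_proba, num_features=4
)

# Each pair is (feature condition, local weight): the surrogate's view of
# which feature ranges pushed this one prediction up or down.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The key design point is that LIME never opens the model: it only needs the prediction function, which is why it works for any classifier.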
I have used the LIME library extensively in my past engagements, coupled with Natural Language Generation (NLG) tech, to narrate explanations for Sustainers and Operators.
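As a toy illustration of that LIME-plus-NLG pairing (my own sketch, not the author's production code, which the article does not show), one can template LIME's (feature, weight) pairs into a plain-English narrative:

```python
# Toy narration of LIME output; a real system would use a proper NLG stack.
def narrate(prediction: str, factors: list[tuple[str, float]]) -> str:
    """Turn (feature condition, weight) pairs into one English sentence."""
    pros = [f for f, w in factors if w > 0]  # pushed toward the prediction
    cons = [f for f, w in factors if w < 0]  # pushed against it
    text = f"The model predicted '{prediction}'"
    if pros:
        text += ", mainly because " + " and ".join(pros)
    if cons:
        text += ", despite " + " and ".join(cons)
    return text + "."

# Hypothetical churn factors, in the shape LIME's as_list() returns:
print(narrate("churn", [("support_tickets > 5", 0.42),
                        ("tenure_years > 3.0", -0.17)]))
# The model predicted 'churn', mainly because support_tickets > 5,
# despite tenure_years > 3.0.
```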
The following shows the current state of algorithms in the accuracy vs. explainability space, along with approach directions.
The XAI field has a promising future in helping enterprises deal with AI's shortcomings. Here are a few directions for it.
Bio: Saurabh Kaushik is passionate about leading Digital Product Engineering using Artificial Intelligence and other digital tech, deploying them at global scale to make our clients more successful and their customers more delightful.