‘Explainable Artificial Intelligence’: Cracking open the black box of AI

Researchers and enterprises want to build deep learning neural networks that can explain their actions to humans

At a demonstration of Amazon Web Services' new artificial intelligence image recognition tool last week, the deep learning analysis calculated with near certainty that a photo of speaker Glenn Gore depicted a potted plant.

“It is very clever, it can do some amazing things but it needs a lot of hand holding still. AI is almost like a toddler. They can do some pretty cool things, sometimes they can cause a fair bit of trouble,” said Gore, AWS’ chief architect, in his day two keynote at the company’s summit in Sydney.

Where the toddler analogy falls short, however, is that a parent can make a reasonable guess as to, say, what led to their child drawing all over the walls, and ask them why. That’s not so easy with AI.

Artificial intelligence – in its application of deep learning neural networks, complex algorithms and probabilistic graphical models – has become a ‘black box’, according to a growing number of researchers.

And they want an explanation.

“You don’t really know why a system made a decision. AI cannot tell you that reason today. It cannot tell you why,” says Aki Ohashi, director of business development at PARC (Palo Alto Research Center). “It’s a black box. It gives you an answer and that’s it, you take it or leave it.”

For AI to be confidently rolled out by industry and government, he says, the technologies will require greater transparency and the ability to explain their decision-making process to users.

“You need to have the system accountable,” he told the AIIA Navigating Digital Government Summit in Canberra on Wednesday. “You can’t blame the technology. They have to be more transparent about the decisions that are made. It’s not just saying – well that’s what the system told me.”

PARC has been working with the Defense Advanced Research Projects Agency (DARPA), an agency of the U.S. Department of Defense, on what is being called Explainable Artificial Intelligence, or XAI.

The research is working towards new machine-learning systems that will have the ability to explain their rationale, characterise their strengths and weaknesses, and convey an understanding of how they will behave in the future. Importantly, they will also translate models into understandable and useful explanations for end users.

In current models, nodes effectively decide for themselves which features to base decisions on; in image recognition that can mean minuscule dots or shadows.

“They focus on whatever they want. The things they focus on are not things that tend to be intuitive to humans,” Ohashi says.

One way to change this, being explored by PARC, is to restrict what the nodes in a neural network can consider to ‘concepts’ such as colour, shape and texture.

“The AI then starts thinking about things from a perspective which is logically understandable to humans,” Ohashi says.
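
That approach resembles what researchers often call a ‘concept bottleneck’: the network is forced to route its decision through scores for a handful of human-readable concepts, so an explanation can be read directly off those scores. Below is a minimal sketch of the idea, assuming a PyTorch-style setup; the concept names, layer sizes and the ConceptBottleneckNet class are illustrative only, not PARC’s actual architecture.

```python
# Minimal sketch of a "concept bottleneck" style network (illustrative, not PARC's model).
import torch
import torch.nn as nn

CONCEPTS = ["green", "round", "leafy", "smooth"]  # hypothetical human-readable concepts

class ConceptBottleneckNet(nn.Module):
    def __init__(self, feature_dim=512, num_classes=10):
        super().__init__()
        # Backbone image features -> a score for each human-interpretable concept
        self.concept_head = nn.Linear(feature_dim, len(CONCEPTS))
        # The final decision may only read the concept scores, never the raw features
        self.classifier = nn.Linear(len(CONCEPTS), num_classes)

    def forward(self, features):
        concept_scores = torch.sigmoid(self.concept_head(features))
        logits = self.classifier(concept_scores)
        return logits, concept_scores

model = ConceptBottleneckNet()
features = torch.randn(1, 512)            # stand-in for features from an image backbone
logits, concepts = model(features)
# The explanation is read straight off the bottleneck:
for name, score in zip(CONCEPTS, concepts[0].tolist()):
    print(f"{name}: {score:.2f}")
```

Because the classifier never sees anything except the concept scores, the system can answer “why” in terms a person recognises – it judged the image to be green, round and leafy – rather than in terms of unintelligible pixel patterns.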

Others are working towards the same goal. While “humans are surprisingly good at explaining their decisions,” deep learning models “frequently remain opaque”, wrote researchers at the University of California, Berkeley, and the Max Planck Institute for Informatics in Germany in a recent paper.

They are seeking to “build deep models that can justify their decisions, something which comes naturally to humans”.

Their December paper, Attentive Explanations: Justifying Decisions and Pointing to the Evidence, which focuses primarily on image recognition, is a significant step towards AI that can provide natural-language justifications for its decisions and point to the supporting evidence.
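
At a high level, work in this vein lets an attention mechanism do double duty: the attention weights both pool the image regions used for the prediction and serve as a pointer to the evidence behind it. The sketch below illustrates only that attention-pooling idea, assuming PyTorch; the AttentiveClassifier class and all dimensions are hypothetical, and the paper’s natural-language justification component is omitted.

```python
# Rough sketch of attention doing double duty: weighting image regions for the
# prediction and pointing to the evidence behind it (illustrative, not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveClassifier(nn.Module):
    def __init__(self, region_dim=256, num_classes=10):
        super().__init__()
        self.attn = nn.Linear(region_dim, 1)        # one relevance score per image region
        self.classifier = nn.Linear(region_dim, num_classes)

    def forward(self, regions):                     # regions: (batch, num_regions, region_dim)
        weights = F.softmax(self.attn(regions).squeeze(-1), dim=-1)
        pooled = torch.einsum("br,brd->bd", weights, regions)
        return self.classifier(pooled), weights     # weights ~ "which regions mattered"

model = AttentiveClassifier()
regions = torch.randn(2, 36, 256)                   # e.g. features for a 6x6 grid of regions
logits, evidence = model(regions)
print(evidence.argmax(dim=-1))                      # the region each prediction leans on most
```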

Being able to explain its decision-making is necessary for AI to be fully embraced and trusted by industry, Ohashi says. You wouldn't put a toddler in charge of business decisions.

“If you use AI for financial purposes and it starts building up a portfolio of stocks which are completely against the market, how does a human being evaluate whether it’s something that made sense and the AI is really, really smart, or if it’s actually making a mistake?” Ohashi says.

There have been some early moves into XAI among enterprises. In December, Capital One Financial Corp told the Wall Street Journal that it was employing in-house experts to study ‘explainable AI’ as a means of guarding against potential ethical and regulatory breaches.

UK start-up Weave, which is now focused on XAI solutions, has been the target of takeover talks in Silicon Valley, the Financial Times reports.