As society enters an era in which AI will make life-or-death decisions, from spotting whether moles are cancerous to driving us to work, trusting these machines will become ever more important.
The difficulty is that it's almost impossible for us to understand the inner workings of many modern AI systems that perform human-like tasks, such as recognizing real-life objects or understanding speech.
The models produced by the deep-learning systems that have powered recent AI breakthroughs are largely opaque, functioning as black boxes that spit out a result but whose operation remains mysterious. This inscrutability stems from the complexity of the large neural networks that underpin deep-learning systems. These brain-inspired networks are interconnected layers of simple processing units that pass data from one to the next and can be trained to carry out specific tasks. What these systems have learned is spread across their sprawling, densely connected layers, dispersed in a way that makes their workings very difficult to make sense of.
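To make the point concrete, here is a minimal, purely illustrative sketch in Python (not drawn from any particular production system): even a toy two-layer network's "knowledge" amounts to nothing more than matrices of numbers, with no label saying what any individual weight means.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied between layers.
    return np.maximum(0, x)

# Randomly initialised weights stand in for a trained model's parameters.
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 1))   # hidden layer -> output

def predict(features):
    hidden = relu(features @ W1)   # intermediate representation
    return hidden @ W2             # final score

x = rng.normal(size=(1, 4))        # one input example
print(predict(x))                  # a number comes out...
print(W1)                          # ...but the "reasoning" is just this matrix
```

Real deep-learning models work the same way, only with millions or billions of such weights spread across many layers, which is why reading an explanation out of them is so hard.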
Technology giants such as Google, Facebook, Microsoft and Amazon have laid out a vision of the future where AI agents will help people in their daily lives, both at work and at home: organizing our day, driving our cars, delivering our goods.
But for that future to be realized, machine learning models will need to be open to scrutiny, says Dr Tolga Kurtoglu, CEO of PARC, the pioneering Silicon Valley research facility renowned for work in the late 1970s that led to the creation of the mouse and graphical user interface.
"There is a huge need in being able to meaningfully explain why a particular AI algorithm came to the conclusion it did," he said, particularly as AI increasingly interacts with consumers.
"That will have a profound impact on how we think about human-computer interaction in the future."
Systems will need to be able to articulate their assumptions, which paths they explored, what they ruled out and why, and how they arrived at their conclusion, according to Kurtoglu.
"It's the first step towards establishing a trusted relationship between human agents and AI agents," he said, adding that collaboration between humans and machines could prove highly effective in solving problems.
Greater insight into an AI's workings would also help identify where faulty assumptions originated. Machine learning models are only as good as the training data used to create them, and inherent biases in that data will be reflected in the conclusions these models reach.
For example, the facial recognition system that categorizes images in Google's Photos app made headlines when it tagged black faces as gorillas, an error that was blamed on it not being trained on sufficient images of African Americans. Similarly, a system that learned to associate male and female names with concepts such as 'executive' and 'professional' ended up repeating reductive gender stereotypes.
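As an illustration of how such an association might be surfaced, the sketch below is hypothetical: it assumes a pretrained word-embedding lookup (for example word2vec or GloVe vectors loaded as a word-to-array dictionary), and the helper names and word lists are placeholders chosen for this example.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_gap(word, female_names, male_names, embeddings):
    """How much closer a word sits to one group of name vectors than the other.

    A consistently positive or negative gap across many profession words is
    one simple symptom of the learned stereotype described above.
    """
    target = embeddings[word]
    female = np.mean([cosine(target, embeddings[n]) for n in female_names])
    male = np.mean([cosine(target, embeddings[n]) for n in male_names])
    return female - male

# 'embeddings' would be a real pretrained lookup (word -> NumPy array);
# the call below is commented out because the names and word are placeholders.
# gap = association_gap("executive", ["amy", "lisa"], ["john", "paul"], embeddings)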
As responsibility for decisions that can have a material effect on our lives is handed to AI, such as whether someone should be given a loan or which treatment is best suited to a patient, the need for transparency becomes more pressing, said Dr Ayanna Howard, of the School of Electrical and Computer Engineering at the Georgia Institute of Technology.
"This is an important issue, especially when these intelligent agents are included in decision-making processes that directly impact an individual's well-being, liberty, or subjective treatment by society," she said.
"For example, if a machine learning system is involved in determining what medical treatment or procedure an individual should receive, without some disclosure of the system's thinking process - how do we know if such decisions are biased or not?"
Research at Georgia Institute of Technology has shown that in certain situations people will "overtrust" decisions made by AI or robots, she said, highlighting the need for systems capable of articulating their reasoning process to users.
PARC recently won a four-year grant from the US Department of Defense to work on this problem, devising learning systems that offer greater transparency.
Kurtoglu was optimistic about the research's prospects, but said it would likely "take a long time to really crack some of those hard technical questions that need to get answered".
"One of the things that we're looking at is being able to translate between the semantic representations of how humans think about a certain set of problems, and computational representations of knowledge and information, and be able to seamlessly go back and forth—so that you can map from one domain to another."
The challenge of augmenting current deep-learning approaches to be more understandable will be considerable, according to Dr Sean Holden, senior lecturer in machine learning in the Computer Laboratory at Cambridge University.
"I don't see any evidence that a solution is on the horizon," he said.
However, while deep-learning models have had tremendous success in areas such as image and speech recognition, and far outperform other AI techniques on particular tasks, there are other approaches to AI whose reasoning is clearer.
"Despite the current obsession with all things deep, this represents only one part of the wider field of AI. Other areas in AI are much more amenable to producing explanations," Holden said.