Trevor Darrell (UCB, USA)
Title: Deep Learning for Perception, Action, and Explanation
Abstract: Learning of layered or deep representations has provided significant advances in computer vision in recent years, but has traditionally been limited to fully supervised settings with very large amounts of training data, and the resulting models have lacked interpretability. New results in adversarial adaptive representation learning show how such methods can also excel when learning across modalities and domains, and can further be trained or constrained to provide natural language explanations or multimodal visualizations to their users. I'll present recent long-term recurrent network models that learn cross-modal description and explanation, using implicit and explicit approaches, which can be applied to domains including fine-grained recognition and visuomotor policies.
Bio: Prof. Darrell is on the faculty of the CS and EE Divisions of the EECS Department at UC Berkeley. He leads Berkeley's DeepDrive (BDD) Industrial Consortium, is co-Director of the Berkeley Artificial Intelligence Research (BAIR) lab, and is Faculty Director of PATH at UC Berkeley. Darrell's group develops algorithms for large-scale perceptual learning, including object and activity recognition and detection, for a variety of applications including autonomous vehicles, media search, and multimodal interaction with robots and mobile devices. His areas of interest include computer vision, machine learning, natural language processing, and perception-based human-computer interfaces. Prof. Darrell previously led the vision group at the International Computer Science Institute in Berkeley, and was on the faculty of the MIT EECS department from 1999 to 2008, where he directed the Vision Interface Group. He was a member of the research staff at Interval Research Corporation from 1996 to 1999, and received the S.M. and Ph.D. degrees from MIT in 1992 and 1996, respectively. He obtained the B.S.E. degree from the University of Pennsylvania in 1988.
Prof. Darrell also serves as consulting Chief Scientist for the start-up Nexar, and is a technical consultant on deep learning and computer vision for Pinterest. Darrell is on the scientific advisory board of several other ventures, including DeepScale, WaveOne, SafelyYou, and Graymatics. Previously, Darrell advised Tyzx (acquired by Intel), IQ Engines (acquired by Yahoo), Koozoo, BotSquare/Flutter (acquired by Google), and MetaMind (acquired by Salesforce). As time permits, Darrell has served and is available as an expert witness for patent litigation relating to computer vision.
Rao Kambhampati (Arizona State U., USA)
Topic: Learning to Predict the Explicability and Predictability of Plans
Title: Explicability and Explanations in Human-Aware AI Agents
Abstract: Human-aware AI agents need to exhibit behavior that is "explicable" to humans, and be ready to provide "explanations" where needed. I will argue that both explicability and explanations can be understood in terms of the differences between the AI agent's model M and the human partner's mental model of the agent, Mh. The agent plans its behavior based on M, but that behavior is viewed by the human partner through the lens of Mh. Explicability in this setup involves the agent making its behavior close to what is expected in terms of Mh. Explanations can be formalized as a dialog between the agent and the human intended to bring Mh closer to M. I will discuss the realization of this set-up in our ongoing work on human-robot teaming.
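To make this set-up concrete, here is a minimal Python sketch (an illustration under assumed toy definitions, not the speaker's system): a model is a set of facts, a plan is an action sequence, explicability is scored as the overlap between the agent's plan and the plan expected under Mh, and explanation is a dialog that reveals only those model differences that make the agent's plan expected. The planner, the facts, and the scoring function are all invented for illustration.

# Minimal illustrative sketch: explicability and explanation via M vs. Mh.
# Models are toy sets of facts; plans are action sequences; all names are
# assumptions made for this example only.

def expected_plan(model):
    """Stand-in planner: the plan a given model predicts the agent will follow."""
    plan = ["move(A,B)"]
    if "door_locked" in model:            # a fact the human may not know
        plan.insert(0, "unlock(door)")
    return plan

def explicability(agent_plan, human_model):
    """Score in [0, 1]: how close the agent's plan is to what Mh expects."""
    expected = expected_plan(human_model)
    shared = len(set(agent_plan) & set(expected))
    return shared / max(len(agent_plan), len(expected))

def explain(agent_model, human_model, agent_plan):
    """Dialog: reveal model differences until the plan looks right under Mh."""
    messages = []
    for fact in sorted(agent_model - human_model):
        if explicability(agent_plan, human_model) == 1.0:
            break                          # Mh already explains the behavior
        updated = human_model | {fact}
        if explicability(agent_plan, updated) > explicability(agent_plan, human_model):
            human_model = updated          # the human accepts the new fact
            messages.append(f"Because {fact}.")
    return messages, human_model

M = {"door_locked", "battery_low"}          # agent's model
Mh = set()                                  # human's incomplete mental model
plan = expected_plan(M)                     # agent plans with its own model M
print("explicability before:", explicability(plan, Mh))   # 0.5
messages, Mh = explain(M, Mh, plan)
print("explanation:", messages)                            # ['Because door_locked.']
print("explicability after:", explicability(plan, Mh))    # 1.0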
Bio: Subbarao Kambhampati (Rao) is a professor of Computer Science at Arizona State University, the current president of the Association for the Advancement of Artificial Intelligence (AAAI), and a trustee of the Partnership on AI. His research focuses on automated planning and decision making, especially in the context of human-aware AI systems. He is an award-winning teacher and spends significant time pondering the public perceptions and societal impacts of AI. He was an NSF young investigator and is a fellow of AAAI. He has served the AI community in multiple roles, including as program chair for IJCAI 2016 and program co-chair for AAAI 2005. Rao received his bachelor's degree from the Indian Institute of Technology, Madras, and his PhD from the University of Maryland, College Park. More information can be found at rakaposhi.eas.asu.edu.
Freddy Lecue (Accenture Technology Labs, Dublin (Ireland) & INRIA (France))
Title: Beyond Machine Learning: Delivering Technology for People through Explainable AI
Abstract: Machine learning models have been widely studied as efficient solutions to problems ranging from regression and classification to clustering. However, explaining such models and their underlying predictions remains an open problem, mainly due to model complexity and limited interpretability. This work presents our journey towards Explainable AI: how we systematically decode and enrich machine learning systems with knowledge graphs, and how we apply such learning and reasoning systems to explain abnormal items in the contexts of (1) finance, covering 80,000,000+ travel expense lines from 191,346 employees, and (2) contract risk, covering 600,000+ client deals at Accenture. Our semantics-aware travel expense reasoning system has demonstrated scalability and accuracy in explaining abnormalities at large scale.
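As a rough illustration of the idea (not the Accenture system, whose data and knowledge graphs are proprietary), the following Python sketch flags abnormal expense lines with a simple statistical test and then attaches facts from a toy knowledge graph so the abnormality can be described in domain terms; all entities, amounts, and thresholds are invented.

from statistics import mean, pstdev

# Toy "knowledge graph": entity -> related facts (in practice an RDF/OWL graph).
KG = {
    "hotel:LuxPalace": ["located_in(Paris)", "category(5-star)"],
    "employee:4711":   ["role(consultant)", "assignment(on-site, Paris)"],
}

# Invented expense lines; one is deliberately far from the others.
expenses = [
    {"id": 1, "employee": "employee:4711", "merchant": "hotel:LuxPalace", "amount": 950.0},
    {"id": 2, "employee": "employee:4711", "merchant": "hotel:Budget",    "amount": 120.0},
    {"id": 3, "employee": "employee:4711", "merchant": "hotel:Budget",    "amount": 135.0},
    {"id": 4, "employee": "employee:4711", "merchant": "hotel:Budget",    "amount": 110.0},
]

amounts = [e["amount"] for e in expenses]
mu, sigma = mean(amounts), pstdev(amounts)

for e in expenses:
    z = (e["amount"] - mu) / sigma          # crude abnormality score
    if abs(z) > 1.5:
        # Enrich the flagged line with knowledge-graph context for the explanation.
        context = KG.get(e["merchant"], []) + KG.get(e["employee"], [])
        print(f"Expense {e['id']} looks abnormal (z={z:.1f}); context: {', '.join(context)}")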
Bio: Dr Freddy Lecue (PhD 2008, Habilitation 2015) is a principal scientist and research manager in large-scale reasoning systems at Accenture Technology Labs, Dublin, Ireland. He is also a research associate at INRIA, in the WIMMICS team, Sophia Antipolis, France. His research area is at the frontier of learning and reasoning systems, with a strong interest in semantics-driven explainable AI. Before joining Accenture in January 2016, he was a research scientist and lead investigator in large-scale reasoning systems at IBM Research - Ireland. His research has received internal recognition at IBM, including an IBM Research Division Award in 2015 and an IBM Technical Accomplishment Award in 2014, as well as external recognition, including best paper awards from ISWC (International Semantic Web Conference) and ESWC (Extended Semantic Web Conference) in 2014, and Semantic Web Challenge awards from ISWC in 2013 and 2012. Prior to joining IBM Research he was a Research Fellow at The University of Manchester from 2008 to 2011 and a Research Engineer at Orange Labs (formerly France Telecom R&D) from 2005 to 2008. He received his Research Habilitation (HdR - Accreditation to supervise research) from the University of Nice (France) in 2015, and a PhD from École des Mines de Saint-Etienne (France) in 2008. His PhD thesis was sponsored by Orange Labs and received an award from the French Association for Artificial Intelligence.
Daniele Magazzeni (King's College London, UK)
Title: Explainable Planning
Abstract: As AI is increasingly being adopted into application solutions, the challenge of supporting interaction with humans is becoming more apparent. Partly this is to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but it is also a necessary step in building trust as humans transfer greater responsibility to such systems. The challenge is to find effective ways to communicate the foundations of AI-driven behaviour when the algorithms that drive it are far from transparent to humans. In this talk we consider the opportunities that arise in AI planning, exploiting the model-based representations that form a familiar and common basis for communication with users, while acknowledging the gap between planning algorithms and human problem-solving.
Bio: Dr. Daniele Magazzeni is a Lecturer in Artificial Intelligence at King's College London. His research explores the links between Artificial Intelligence and Verification, and the use of AI in innovative applications. Magazzeni is an elected member of the ICAPS Executive Council. He is Editor-in-Chief of AI Communications. He was Conference Chair of ICAPS 2016, is Workshop Chair of IJCAI 2017, and will chair the Robotics track at ICAPS 2018. He is a co-investigator on UK and EU projects. Daniele is a scientific advisor to, and has collaborations and consultancy projects with, a number of companies and organisations.
Darryn Reid (DSTG Edinburgh, Australia)
Topic: Uncertainty, Resource Allocation, and Trust
Title: TBD
Raymond Sheh (Curtin U., Australia)
Title: Why Explain?
Abstract: Explainable AI (XAI) is enjoying renewed focus from both the research community and broader society. Matching the growing body of XAI techniques with its applications benefits from an understanding of the contrasting reasons why explanations are desired in different situations. We present a way of categorising different approaches to XAI from the perspective of why the explanations are required. Such a classification not only provides the research community with a framework for discussing XAI but also helps the broader community to better specify their goals when demanding explanations, and to understand the explanations they are provided with.
Bio: Dr Raymond Sheh is a Senior Lecturer at the Department of Computing, Curtin University. He specialises in the areas of Artificial Intelligence, Robotics and Cyber Security, and has been involved in robotics research since 2003. Dr Sheh established the Intelligent Robots Group at the Department of Computing with the aim of developing ways of allowing robots and other intelligent systems to learn about their environments and tasks in a way that not only allows them to perform those tasks better, but also to explain their actions and justify their decisions. This ability has significant implications for safety and trust between humans and intelligent systems.
Dr Sheh's current activities include robotics for hazardous environments, surgical robotics and the application of robotic sensing technologies to industrial automation. The former includes a significant education and research outreach component to other universities and high schools through Dr Sheh's position on the Executive Committee of the International RoboCup Rescue Robot League competition.
Prior to joining Curtin in 2013, Dr Sheh was with the US National Institute of Standards and Technology, developing standardised test methods for response robots used in hazardous environments. He holds a PhD in Artificial Intelligence from The University of New South Wales, an Honours degree in Electronic and Communications Engineering from Curtin University, and a degree in Computer Science from Curtin University.
Barry Smyth (UC Dublin, Ireland)
Title: Explainable Recommendations
Abstract: Recommender systems are now a near-ubiquitous part of the online landscape, influencing the news and books that we read, the music we listen to, the movies we watch, and even the people we date. In this talk we will consider the role of explanation in modern recommender systems, starting with the conventional approach of providing explanations as a way to justify suggestions to a user. We will then take this a step further, arguing for a more intimate connection between explanations and recommendations, one that sees explanations playing a more central, formative role in the selection and ranking of recommendations.
Bio: Barry is a professor of computer science at University College Dublin in Ireland. His research interests include recommender systems, artificial intelligence, case-based reasoning, information retrieval and the sensor web. He is a Founding Director of the Insight Centre for Data Analytics. You can find out more about his research through his publication list and the people he works with.
Barry has a keen interest in the commercialisation of research. To this end he has co-founded a number of startups - ChangingWorlds Ltd (now a division of Amdocs) and HeyStaks Technologies Ltd - and he also advises and serves on the boards of a number of organisations.
Mohan Sridharan (U. Auckland, New Zealand)
Topic: Explainable Agency