
Machine Learning Explainability for External Stakeholders

Workshop Paper by Umang Bhatt, McKane Andrus, Adrian Weller, Alice Xiang

ICML Workshop XXAI: Extending Explainable AI Beyond Deep Models and Classifiers, 2020. Also selected for a spotlight presentation at the ICML Workshop on Human Interpretability (WHI), 2020, and accepted at the IJCAI 2020 Workshop on Explainable AI.

Abstract: As machine learning is increasingly deployed in high-stakes contexts affecting people’s livelihoods, there have been growing calls to “open the black box” and to make machine learning algorithms more explainable. Providing useful explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, we conducted a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of, and potential solutions for, deploying explainable machine learning in service of transparency goals. We also asked participants to share case studies in deploying explainable machine learning at scale. In this paper, we provide a short summary of various case studies of explainable machine learning, lessons from those studies, and a discussion of open challenges.

Download Workshop Paper