Explainable Machine Learning in Deployment

Academic article by Umang Bhatt and Adrian Weller

Explainable Machine Learning in Deployment. NeurIPS Workshop on Human-Centric Machine Learning, 2019.

FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, pp. 648–657. https://doi.org/10.1145/3351095.3375624

Explainable machine learning seeks to provide various stakeholders with insight into model behavior via feature importance scores, counterfactual explanations, and influential samples, among other techniques. Despite recent advances in this line of work, there has been little study of how organizations use these techniques in practice. This study explores how organizations view and deploy explainability for stakeholder consumption. We find that the majority of deployments serve not end users affected by the model but machine learning engineers, who use explainability to debug the model itself. This reveals a gap between explainability in practice and the goal of public transparency: explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations of current explainability techniques that hamper their use by end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability, including a focus on normative desiderata.
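To make the engineer-facing debugging use case concrete, here is a minimal sketch of one technique the abstract names, feature importance, computed as permutation importance with scikit-learn. The dataset, model, and parameters are illustrative placeholders, not drawn from the study.

```python
# Minimal sketch of an engineer-facing explainability workflow:
# permutation feature importance used to debug a trained model.
# Dataset and model here are illustrative, not from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in score;
# large drops flag features the model leans on most heavily.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most important features, with variability across repeats.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: "
          f"{result.importances_mean[idx]:.3f} "
          f"± {result.importances_std[idx]:.3f}")
```

A debugging session like this surfaces features the model depends on unexpectedly, which is the internal, engineer-oriented consumption of explanations that the study found dominates deployments.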


Related People

Umang Bhatt

Student Fellow

Adrian Weller

Programme Director