
Getting a CLUE: A Method for Explaining Uncertainty Estimates

Workshop Paper by Javier Antoran, Umang Bhatt, Tameem Adel Hesham, Adrian Weller, Jose Miguel Hernandez-Lobato

ICLR Workshop on Machine Learning in Real Life (ML-IRL), 2020 [Selected Oral Presentation].

Abstract: Uncertainty estimates from machine learning models allow domain experts to
assess prediction reliability and can help practitioners identify model failure modes.
We introduce Counterfactual Latent Uncertainty Explanations (CLUE), a method
that answers: "How should we change an input such that our model produces more
certain predictions?" We perform a user study, concluding that CLUE allows users
to understand which regions of input space contribute to predictive uncertainty.
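The question CLUE answers can be framed as an optimisation: starting from an uncertain input, minimise the model's predictive uncertainty plus a penalty on how far the counterfactual moves from the original. The sketch below illustrates only this objective, using a toy fixed logistic classifier and plain input space in place of the paper's Bayesian neural network and generative-model latent space; the weights, hyperparameters, and finite-difference optimiser are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy fixed classifier (assumed weights, purely for illustration).
W = np.array([2.0, -1.0])

def predict_proba(x):
    # Probability of the positive class under a logistic model.
    return 1.0 / (1.0 + np.exp(-x @ W))

def entropy(x):
    # Predictive entropy: our stand-in uncertainty measure (the paper
    # uses uncertainty estimates from a Bayesian neural network).
    p = np.clip(predict_proba(x), 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def clue_objective(x, x0, lam=0.1):
    # CLUE-style loss: uncertainty of the candidate counterfactual
    # plus a distance penalty keeping it close to the original input.
    return entropy(x) + lam * np.linalg.norm(x - x0)

def generate_counterfactual(x0, lr=0.5, steps=200, lam=0.1, eps=1e-5):
    # Minimise the objective by finite-difference gradient descent
    # (the paper optimises in a generative model's latent space instead).
    x = x0.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        for i in range(len(x)):
            dx = np.zeros_like(x)
            dx[i] = eps
            grad[i] = (clue_objective(x + dx, x0, lam)
                       - clue_objective(x - dx, x0, lam)) / (2 * eps)
        x -= lr * grad
    return x

x0 = np.array([0.1, 0.1])  # an uncertain input near the decision boundary
x_cf = generate_counterfactual(x0)
print(f"entropy before: {entropy(x0):.3f}, after: {entropy(x_cf):.3f}")
```

The distance penalty determines how the two terms trade off: a larger `lam` keeps the counterfactual closer to the original input at the cost of less uncertainty reduction, which is what makes the resulting change interpretable as "which parts of the input drive the uncertainty".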

Download Workshop Paper