Getting a CLUE: A Method for Explaining Uncertainty Estimates. ICLR Workshop on Machine Learning in Real Life (ML-IRL), 2020 [Selected Oral Presentation].
Abstract: Uncertainty estimates from machine learning models allow domain experts to
assess prediction reliability and can help practitioners identify model failure modes.
We introduce Counterfactual Latent Uncertainty Explanations (CLUE), a method
that answers: “How should we change an input such that our model produces more
certain predictions?” We perform a user study, concluding that CLUE allows users
to understand which regions of input space contribute to predictive uncertainty.