
You shouldn't trust me: Learning models which conceal unfairness from multiple explanation methods

Conference Paper by Botty Dimanov, Umang Bhatt, Mateja Jamnik, Adrian Weller

You shouldn't trust me: Learning models which conceal unfairness from multiple explanation methods. European Conference on Artificial Intelligence (ECAI), 2020.

Transparency of algorithmic systems has been discussed as a way for end-users and regulators to develop appropriate trust in machine learning models. One popular approach, LIME (Ribeiro, Singh, and Guestrin 2016), even suggests that model explanations can answer the question “Why should I trust you?”. Here we show a straightforward method for modifying a pre-trained model to manipulate the output of many popular feature importance explanation methods with little change in accuracy, thus demonstrating the danger of trusting such explanation methods to evaluate the trustworthiness of models. We show how this explanation attack can mask a model’s discriminatory use of a sensitive feature, raising strong concerns about using such explanation methods as reliable evaluations to check the fairness of a model.
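To make the idea concrete, here is a minimal sketch of such an explanation attack, assuming a gradient-penalty fine-tuning approach in PyTorch; it is not the authors' released implementation. The function name explanation_attack_finetune and the hyperparameters sensitive_idx, lam, epochs and lr are illustrative assumptions. The sketch keeps the original task loss but adds a term that shrinks the input gradient of one sensitive feature, so gradient-based attribution methods report that feature as unimportant while predictive accuracy is largely preserved.

```python
# Hedged sketch of an explanation attack via gradient-penalty fine-tuning.
# NOT the authors' code: the attack target here is gradient-based attributions.
import torch
import torch.nn as nn


def explanation_attack_finetune(model, loader, sensitive_idx, lam=1.0,
                                epochs=5, lr=1e-4):
    """Fine-tune `model` so attributions for feature `sensitive_idx` shrink.

    `sensitive_idx`, `lam`, `epochs` and `lr` are illustrative choices.
    """
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    task_loss = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x = x.clone().requires_grad_(True)
            logits = model(x)
            loss = task_loss(logits, y)
            # Input gradient of the summed logits, kept in the graph so the
            # penalty is differentiable w.r.t. the model parameters.
            grads = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
            # Penalise the magnitude of the gradient on the sensitive feature.
            loss = loss + lam * grads[:, sensitive_idx].abs().mean()
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    return model
```

In such a setup one would compare test accuracy and the sensitive feature's attribution ranking before and after fine-tuning: the paper's claim is that the apparent importance of the sensitive feature can be suppressed across many popular feature importance methods with little change in accuracy.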
