UK Parliament: Research Briefing - Interpretable machine learning

Adrian Weller was among the stakeholders who contributed to the recent UK Parliament Research Briefing on interpretable machine learning, published on Tuesday 6 October 2020.

Abstract: Machine learning (ML, a type of artificial intelligence) is increasingly being used to support decision-making in a variety of applications, including recruitment and clinical diagnoses. While ML has many advantages, there are concerns that in some cases it may not be possible to explain completely how its outputs have been produced. This POSTnote gives an overview of ML and its role in decision-making. It examines the challenges of understanding how a complex ML system has reached its output, and some of the technical approaches to making ML easier to interpret. It also gives a brief overview of some of the proposed tools for making ML systems more accountable.
