
"Explaining" AI poses new policy challenges

Diane Coyle CBE and Leverhulme CFI Programme Director Adrian Weller have just released an article in Science that discusses the policy challenges revealed when we try to "explain" machine learning. There is increasing awareness of bias in algorithms, which often results from biased data. What is less widely appreciated is the conflict between the explicitness required by algorithms and the ambiguity inherent in policy decisions. Coyle and Weller explain: "Most policy decision-making makes extensive use of constructive ambiguity to pursue shared objectives with sufficient political consensus. There is thus a tension between political or policy decisions, which trade off multiple (often incommensurable) aims and interests, and ML, typically a utilitarian maximizer of what is ultimately a single quantity and which typically entails explicit weighting of decision criteria."

Read the full article online in Science.

