Why Value Judgements Should Not Be Automated

Reports, Policy and White Papers by Rune Nyrup, Jess Whittlestone, Stephen Cave

DOI: https://doi.org/10.17863/CAM.41552

Submitted as evidence for the Committee on Standards in Public Life’s review into artificial intelligence and its impact on standards across the public sector.

Abstract
AI technologies are already being used for a number of purposes in public services, including automating (parts of) decision processes and making recommendations and predictions in support of human decisions. The increasing application of AI in public services therefore has the potential to impact several of the Seven Principles of Public Life, presenting new challenges for public servants in upholding those values. We believe AI is particularly likely to affect the principles of Objectivity, Openness, Accountability and Leadership. Algorithmic bias has the potential to threaten the objectivity of public sector decisions, while several forms of opacity in AI systems raise challenges for openness in public services; this, in turn, impacts the ability of public servants to be accountable and to exercise proper leadership.


Related People

Stephen Cave, Director

Rune Nyrup, Associate Fellow

Jess Whittlestone, Senior Research Fellow (2018–2021)