The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions

Conference Paper by Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Stephen Cave

The Second AAAI/ACM Annual Conference on AI, Ethics, and Society, 26-28 January 2019, Hawaii, USA

The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some different specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them.

This research is part of the Ethical and Societal Implications of Algorithms, Data and AI project, commissioned by the Nuffield Foundation to inform the strategy of the Ada Lovelace Institute.
