This project synthesises existing research on the ethics of AI across disciplines and sectors, and proposes a roadmap for future work.
Interest in the ethical and societal implications of technologies based on algorithms, data, and AI has exploded over the past decade across many circles: academia, policy, industry, activism, and the popular media. The perceived opportunities and threats of these technologies have spurred a proliferation of initiatives to establish a common ethics for their development and use.
While these efforts are praiseworthy, they remain, as is to be expected at this stage, unsystematic and often vague. Much of the work has focused on putting forward a unified framework, or on formulating a list of high-level principles for the ethical use of AI and data. However, these approaches suffer from three interrelated blind spots, which this report examines.
This project, commissioned by the Nuffield Foundation to inform the strategy of the new Ada Lovelace Institute, assesses the key strengths and limitations of existing work on the ethics of AI, and identifies priorities for future research. In particular, we explore the different kinds of tensions that are likely to arise when ethical principles are applied to the use of AI across different domains.
Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research
Report by: Jess Whittlestone, Rune Nyrup, Anna Alexandrova, Kanta Dihal, Stephen Cave.