This project synthesises existing research on the ethics of AI across disciplines and sectors, and proposes a roadmap for future work.
Interest in the ethical and societal implications of technologies based on algorithms, data, and AI has grown rapidly over the past decade across academic, policy, industry, activist, and popular-media circles. The perceived opportunities and threats of these technologies have spurred a proliferation of initiatives to establish a common ethics for their development and use.
While these efforts are praiseworthy, they remain, as is to be expected at this stage, unsystematic and often vague. Much of the work has focused on attempts to put forward a unified framework, or to formulate a list of high-level principles for ethical use of AI and data. However, these approaches suffer from three interrelated blind spots:
- Lack of clarity or consensus around the meaning of central ethical concepts.
- Lack of attention to tensions between different ideals and values.
- Lack of solid evidence on both technical capabilities and societal needs related to key issues and tensions.
This project, commissioned by the Nuffield Foundation to inform the strategy of the new Ada Lovelace Institute, assesses the key strengths and limitations of existing work on the ethics of AI, and identifies priorities for future research. In particular, we explore the different kinds of tensions that are likely to arise as ethical principles are applied to the use of AI in practice across different domains.