Governance, ethics and responsible innovation

Ensuring that the impacts of AI are beneficial in the near and long-term requires engaging with the principles and practices that underpin the development and deployment of AI. It requires shaping governance at the national and international level. And it requires the development of a global, cooperative community working fruitfully together to ensure that AI benefits everyone, both now and in the future.

Research within this theme focuses on:
●      Norms and principles: What kind of publication norms, collaboration norms and practices, and ethical principles are needed within AI research communities, and in the broader application and governance of AI? How do we move beyond high-level principles to practical implementation, and how can tensions between principles be navigated? What norms and principles, if established now, could most positively influence the trajectory of AI’s global impacts as more powerful technologies are developed in higher-stakes contexts?
●      Cross-cultural cooperation: Wherever AI is developed, its impacts will be global. And global ethics and principles must be shaped by a diversity of global voices. AI:FAR, in collaboration with colleagues in CSER and CFI, has been working to build a global cooperative community, with a particular focus on building links to leading Asian thinkers, technologists and institutions. Initiatives include ChinUK; a translation series on AI ethics, governance and sustainability; and papers and joint workshops.
●      International governance: Recent years have seen a proliferation of proposals for the international governance of AI, and the establishment of new bodies and fora. However, AI and robotics applications are already subject to extensive domain-specific international law, although in some regimes ratification and implementation are severely lacking. Our research examines the current state of international law and governance for AI, the strengths and weaknesses of different models of future governance, and governance priorities for achieving meaningful and inclusive stewardship of AI globally.
●      Responsible Innovation: Responsible innovation means “taking care of the future through collective stewardship of science and innovation in the present”. Collective stewardship of AI requires meaningful collaboration between academia, industry, policymakers, civil society and affected communities. Our research examines ethical activism in the AI community, methods for trustworthy collaboration and oversight of AI research, and the role of participatory methods to achieve a better understanding of cross-societal concerns and priorities for AI. We work closely with technology-leading research groups, governments and civil society organisations.
 
Recent papers include:
●      Kunz, M., & Ó hÉigeartaigh, S. (Forthcoming, 2021). Artificial Intelligence and Robotization. In Robin Geiß and Nils Melzer (eds.), Oxford Handbook on the International Law of Global Security. Oxford University Press.
●      Cave, S., Whittlestone, J., Nyrup, R., Ó hÉigeartaigh, S. S., and Calvo, R. (Forthcoming, 2021). The Ethics of Using AI in Pandemic Management. Forthcoming in a BMJ and WHO Special Issue on AI and COVID-19.
●      Liu, H. Y., & Maas, M. M. (2021). ‘Solving for X?’ Towards a problem-finding framework to ground long-term governance strategies for artificial intelligence. Futures, 126, 102672.
●      Cihon, P., Maas, M. M., & Kemp, L. (2020). Fragmentation and the Future: Investigating Architectures for International AI Governance. Global Policy, 11(5), 545-556.
●      Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Ó hÉigeartaigh, S.S. ... & Maharaj, T. (2020). Toward trustworthy AI development: mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213.
●      Stix, C., & Maas, M. M. (2020). Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy. AI and Ethics, 1-11.
●      Tzachor, A., Whittlestone, J., Sundaram, L., & Ó hÉigeartaigh, S. S. (2020). Artificial intelligence in a crisis needs ethics with urgency. Nature Machine Intelligence.
●      Ó hÉigeartaigh, S. S., Whittlestone, J., Liu, Y., et al. (2020). Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance. Philosophy & Technology.
●      Belfield, H. (2020). Activism by the AI Community: Analysing Recent Achievements and Future Prospects. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 15-21).
●      Hagerty, A., & Rubinov, I. (2019). Global AI ethics: a review of the social impacts and ethical implications of artificial intelligence. arXiv preprint arXiv:1907.07892.  
●      Stix, C. (2019). A Survey of the European Union’s Artificial Intelligence Ecosystem.
●      Whittlestone, J., & Ovadya, A. (2019). The tension between openness and prudence in responsible AI research. NeurIPS 2019 Joint Workshop on AI for Social Good.
●      Ovadya, A., & Whittlestone, J. (2019). Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning. arXiv preprint arXiv:1907.11274.
●      Vold, K., & Whittlestone, J. (2019). Privacy, Autonomy, and Personalised Targeting: rethinking how personal data is used. IE Report on Data, Privacy and the Individual.
●      Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019). The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the AAAI/ACM Conference on AI Ethics and Society, Honolulu, HI, USA (pp. 27-28).
●       Whittlestone, J., Nyrup, R., Alexandrova, A., Dihal, K., & Cave, S. (2019). Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. London: Nuffield Foundation.
 
Researchers:
Jess Whittlestone, Martina Kunz, Alexa Hagerty, Seán Ó hÉigeartaigh, Haydn Belfield, Matthijs Maas.
