Policy and Responsible Innovation

This project focuses on the collaborations between technologists, policymakers, and other stakeholders needed for the responsible development of AI.

There are good prospects for the development of a robust ‘safety and benefits’ culture within the industry.

The open letter prepared at Puerto Rico, now signed by a large body of industry and academic research leaders, called for ‘research not only on making AI more capable, but also on maximizing the societal benefit of AI’. Its overall message echoes Stilgoe et al.’s definition of responsible innovation: ‘taking care of the future through collective stewardship of science and innovation in the present’ (Stilgoe et al. 2013, ‘Developing a framework for responsible innovation’, Research Policy, 42).

But this has been tried in other technology fields, with mixed results. How are the lessons of these historical cases relevant to AI? What is the role of regulation, what are its prospects, and how can the technology community work with policymakers towards mutual goals? How can industry leaders balance near-term commercial responsibilities with the need to engage with broader and longer-term challenges? As a ‘virtuous cycle’ takes hold, in which greater commercial success leads to ever-greater investment in R&D, these questions become increasingly urgent. By connecting AI technologists with academic specialists in science and policy studies, this project aims to help answer them.
