With a focus on responsible AI in practice, this programme of research investigates the education and daily practice of those involved in AI development, deployment and leadership across sectors. The goal is to translate higher-level principles into actionable guidance and tools for professionals working with AI. This includes clarifying barriers to responsible action and addressing these to better ensure a responsible AI ecosystem.
Intersectional and anthropological perspectives on designing AI for just and sustainable futures. As AI technologies race ahead, we are seeing unintended social consequences: algorithms that promote everything from racial bias in healthcare to the misinformation eroding faith in democracies. The project, ‘Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures’, funded by the Mercator Foundation in Germany, is a […]
I believe that a technical field such as AI can contribute a great deal to our understanding of human existence, but only once it develops a much more flexible and reflexive relation to its own language and to the experience of research and life that this language organizes. Phil Agre, Towards a critical technical practice: […]
Growing concern over the impact of digital technologies on psychological wellbeing has prompted the largest technology companies to develop initiatives on ‘digital wellbeing’. However, these initiatives tend to focus on encouraging people to change their technology behaviour rather than on changing technology itself. Yet, it’s unclear why users should bear the burden of adjusting to, or […]
This project, led by the CFI Imperial Spoke, will explore ethical issues to do with the design and development of AI-enhanced conversational agents and how to address these issues as part of the technology development process. The research will survey several development projects currently underway at Imperial that employ Natural Language Processing techniques to facilitate helpful human-machine conversation. The dialogues are delivered […]
This AI Act Toolkit project is developing a step-by-step pro-justice compliance tool for developers of high-risk AI. Complying with the AI Act can be a daunting task for providers of high-risk AI, as it requires extensive documentation, including a risk management system, data governance and a declaration of conformity. Our tool will assist anyone who is actively involved […]
AI in the Street: scoping everyday observatories for public engagement with connected and automated urban environments. AI in the Street is a group project led by the University of Warwick, with the Universities of Cambridge and Edinburgh, King’s College London, and a non-profit partner, Careful Industries. Working with city-based AI innovation partners across sectors […]
Associate Director | Senior Research Fellow
Senior Research Fellow | Student Advisor
Visitor and Associate Fellow
Senior Research Fellow | Student Advisor
Associate Director (Research Partnerships) | Senior Research Fellow
Spoke Co-Lead, Imperial
Associate Fellow
Director
Research Fellow | Student Advisor
Research Assistant
April 2, 2024