
AI for Just and Sustainable Futures

Intersectional and anthropological perspectives on designing AI for just and sustainable futures.

As AI technologies race ahead, we are seeing unintended social consequences: algorithms that entrench everything from racial bias in healthcare to the misinformation eroding faith in democracies.

The project, ‘Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures’, funded by the Mercator Foundation in Germany, is a collaboration between the Universities of Cambridge and Bonn, led by Dr Stephen Cave from LCFI and Professor Markus Gabriel from the Institute for Philosophy at Bonn.

The new research project comes as the European Commission negotiates its Artificial Intelligence Act, which has ambitions to ensure AI becomes more “trustworthy” and “human-centric”. The Act will require AI systems to be assessed for their impact on fundamental rights and values. The researchers on the Desirable Digitalisation project will collaboratively investigate the many questions that arise from these plans, such as: What exactly does a “human-centric” approach to AI look like? How can we meaningfully assess whether and how AI systems violate fundamental rights and values? And how can we foster awareness of discriminatory practices, and how can we stop them?

The Desirable Digitalisation project is divided into two parts.

Part 1: Anthropological and intersectional perspectives

In the first part, researchers will investigate intercultural and intersectional perspectives on AI and fundamental rights and values, looking beyond the code and drawing on lessons from history and political science. This part of the project will ask questions from two perspectives: anthropological (How will our idea of ‘the human’ influence and be influenced by digital technology?) and intersectional (How do the structural injustices of the past influence today’s technology and its influence on fundamental rights and values?). The teams will work not only with colleagues across Europe, but also with teams in Asia and Africa. The project investigates foundational, anthropological questions concerning the human in the digital age, such as: How do different ideas of the human shape different cultures’ views of desirable digitalisation?

Part 2: Designing AI for just and sustainable futures

In the second part of the project, researchers from both universities will work with the AI industry to develop design and education resources that put sustainability and justice at the heart of technological progress. The project will approach sustainability in all its dimensions, including the social, ecological, economic, and technological, and treat it as a central value in designing AI. Only by taking sustainability and justice into account can these technologies improve our lives and our world.

For upcoming events and other information, head to the project website.


Many Worlds of AI: Intercultural Approaches to the Ethics of Artificial Intelligence

Conference at the University of Cambridge, 26-28 April 2023

Many Worlds of AI is the inaugural conference in a series of biennial events organised as part of the ‘Desirable Digitalisation: Rethinking AI for Just and Sustainable Futures’ research programme. The aim of the conference is to interrogate how an intercultural approach to ethics can inform the processes of conceiving, designing, and regulating artificial intelligence (AI).

Registration for free online attendance is now open here.

For more information and the event schedule, view the conference website here.
