Global AI Narratives

Exploring the ways we talk about AI

Artificial intelligence (AI) is set to have an unprecedented global impact, and public perceptions of AI will influence much of how the technology is developed, adopted and regulated. To better understand the likely impacts of AI technologies around the world, the Global AI Narratives (GAIN) project investigates how people from different cultures and regions view the risks and benefits of AI, and, more specifically, how AI narratives are helping to shape these views.

GAIN is an interdisciplinary academic research project within the Leverhulme Centre for the Future of Intelligence (LCFI) at the University of Cambridge. The core GAIN team works with a network of Global Partners and funders to run events and organise research collaborations around the world.

Project goals

Many worldviews on AI are currently poorly understood. Public perceptions of AI beyond the Western perspective (a viewpoint primarily fueled by cinematic narratives such as The Terminator and I, Robot) receive very little attention from academics and policymakers. We aim to address this lack of representation through research on the production and dissemination of AI narratives in communities around the world.

The goals of GAIN are therefore:

  • To understand how different cultures and regions perceive the risks and benefits of AI, and the influences that are shaping those perceptions.

  • To create a global community of experts who can relate these diverse visions to pressing questions of AI ethics and governance.

  • To foster interdisciplinary academic research from underrepresented regions and groups through collaborative projects and publications.

Project outputs

GAIN began as a four-year project running from 2018 to 2022. Over that period, the team held four in-person workshops and sixteen virtual workshops across five continents, each taking place in a different region outside the UK and North America. At each workshop we asked local experts to present their experience of AI – whether an academic presenting their research, an industry expert discussing their current work, an artist or writer presenting their creative work, or an activist talking about the remit or purpose of their civil action. Outputs from each workshop were tailored to the preferences and specialisms of the relevant regional partner and to the workshop's findings, and include one or more of the following: academic publications, general-audience reports, and peer-reviewed reports. We also summarised each workshop in an online report, linked below.

Book

The project also resulted in the publication of an edited collection: Imagining AI: How the World Sees Intelligent Machines. This is a companion volume to the recently published AI Narratives: A History of Imaginative Thinking about Intelligent Machines (Oxford University Press, 2020).

Online Workshop Reports


People

Stephen Cave

Director

Kanta Dihal

Visitor and Associate Fellow

Tomasz Hollanek

Research Fellow | Student Advisor

Toshie Takahashi

Associate Fellow; Visiting Fellow, 2018/2019