Today, CFI's Desirable AI project is launching the proceedings of the Many Worlds of AI: Intercultural Approaches to the Ethics of Artificial Intelligence conference, which took place in Cambridge, UK, earlier this year, gathering over 100 in-person participants and over 500 online participants from all over the world.
The conference interrogated how an intercultural approach to ethics can inform the processes of conceiving, designing, and regulating artificial intelligence (AI), and was produced in conjunction with the Center for Science and Thought at the University of Bonn, supported by Stiftung Mercator.
The online repository is a curated collection of presentation abstracts and recordings documenting the range of topics covered during the conference, allowing visitors to search for specific presentations, abstracts and keywords. The presentations are organised into four broad categories:
Design and Development: This category explores how to responsibly design, develop, and deploy AI systems while acknowledging and drawing on the differences between groups and societies. The presentations deal with the difficulties of operationalising intercultural ethics of AI when considering concepts such as accuracy or explainability, and highlight how AI development can draw on diverse local languages, situated knowledges, value systems, and community practices. Some presentations also capture efforts towards decolonisation and resistance against discriminatory practices and ideologies in AI development.
Philosophy / Fundamental Questions: This section unpacks particular principles, values, and fundamental concepts that shape the contemporary understanding of AI and its ethics, acknowledging potential disagreements between different worldviews and highlighting the need for a plurality of visions for ethical frameworks for responsible AI. Presentations in this category discuss how different philosophical stances, religious values, indigenous perspectives, and anti-discriminatory approaches can inform the ethical scaffolding for AI development.
Policy and Governance: Presentations in this category discuss how legal frameworks and policy guidelines for AI development and deployment are entangled with geopolitical power, how AI systems can inhibit civil rights or undermine human rights, and how community-based and participatory approaches to AI design can allow for more equitable technological futures.
Art / History / Narratives: Presentations in this category highlight the histories that make and remake AI and data-driven systems, contemporary narratives that shape public perceptions of such systems, as well as artistic interventions as a form of response or resistance to AI. The presentations tie the critique of machine learning and data collection practices to the discussion of different socio-technical imaginaries, as well as longer histories of nationalism and colonialism, and present-day climate politics and global power relations. They speak to the relevance of history, art, and digital humanities to the question of responsible AI development.
Please email desirableAI@lcfi.cam.ac.uk if you have any questions.