Dr Kanta Dihal is a Visitor and Associate Fellow at the Leverhulme Centre for the Future of Intelligence.
In her research, she explores how fictional and nonfictional stories shape the development and public understanding of artificial intelligence.
Kanta’s work intersects the fields of science communication, literature and science, and science fiction. She obtained her DPhil in science communication at the University of Oxford: in her thesis, titled ‘The Stories of Quantum Physics,’ she investigated the communication of conflicting interpretations of quantum physics to adults and children. She is co-editor of the forthcoming collection AI Narratives: A History of Imaginative Thinking About Intelligent Machines (Oxford University Press, 2020) and is currently working with Dr Stephen Cave on the monograph AI: A Mythology.
Cave, S., & Dihal, K. 2021. ‘AI Will Always Love You’, in Minding the Future: Contemporary Issues in Artificial Intelligence, ed. Dainton, B., Slocombe, W., & Tanyi, A. New York: Springer.
White Mischief. Episode 2 – Kind of Nightmarish. Originally broadcast on BBC Radio 4, 11 October 2021. Kanta Dihal was a contributor to this broadcast, which examined the role artificial intelligence plays in reinforcing ideas about whiteness.
Narratives and Visions: SNF Nostos Conference 2021. 6 September 2021. Video recording of a conference session featuring Kanta Dihal as a panel member. Download Video Recording
The Good Robot: Kanta Dihal on Decolonising AI Narratives. In this episode, postdoctoral researchers Eleanor Drage and Kerry Mackereth chat to Kanta Dihal, Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, who leads the Decolonising AI project. They discuss the stories that are being told about AI, why these stories […]
Applying intersectional and anthropological perspectives to designing AI for just and sustainable futures.
How we talk about new technologies and their risks and benefits can significantly influence their development, regulation, and place in public opinion. Balancing AI’s potential and its pitfalls therefore requires navigating this web of stories and associations.
AI is set to have an unprecedented global impact, and public perceptions will shape much of it, affecting how the technology is developed, adopted, and regulated. But different cultures see AI through very different lenses.
If the developer demographic does not diversify, and if bias is not sufficiently addressed in datasets, AI stands to exacerbate inequality and social injustice on a global scale.
The ‘Gender and AI’ research stream develops feminist and queer approaches to AI, informed by critical race theory, postcolonial/decolonial theory, Asian American/Asian diaspora studies, crip theory, and other areas of justice-oriented knowledge and work.