
Decolonising AI

It is increasingly recognised that artificial intelligence has a gender and ethnicity problem. Early deployments of AI-related technologies have repeatedly been shown to perpetuate existing ethnic and racial biases. A notorious example is Google's image-tagging function categorising Black people as gorillas – a problem Google 'solved' simply by removing the tag 'gorilla' from its system. If the designer demographic does not diversify, and if bias in datasets is not adequately addressed, AI stands to exacerbate inequality and social injustice on a global scale.

There is a vicious cycle in which a systematic lack of diversity among AI designers and existing structural injustices are reflected in the technology, which in turn perpetuates those injustices. This takes two main, related forms. First, the lack of diversity among developers leads to 'single vision' biases, limiting the values and interests built into AI systems to those of certain groups and excluding others. Second, the human data on which much AI depends reflects existing social inequalities, and so can compound or exacerbate them. The Decolonising AI project aims to document these interrelationships so that we can identify the most effective interventions to ensure AI is developed in ways that are beneficial for all.

