It is increasingly recognised that artificial intelligence has a gender and ethnicity problem. Early deployments of AI-related technologies have repeatedly shown that they perpetuate existing ethnic and racial biases. A notorious example is Google Photos’ image-tagging function, which labelled photos of Black people as ‘gorillas’; Google’s follow-up ‘solution’ was simply to remove the tag ‘gorilla’ from the system. If the developer demographic does not diversify, and if bias in datasets is not adequately addressed, AI stands to exacerbate inequality and social injustice on a global scale.
There is a vicious cycle: a systematic lack of diversity among AI developers and existing structural injustices are reflected in the technology, which in turn perpetuates those injustices. This takes two main, related forms. First, the lack of diversity among developers leads to ‘single vision’ biases, limiting the values and interests built into AI systems to those of certain groups and excluding others. Second, the human data on which much AI depends reflects existing social inequalities, and so can compound or exacerbate them. The Decolonising AI project aims to document these interrelationships so that we can identify the most effective interventions to ensure that AI is developed in ways that benefit all.
Stephen Cave, ‘The Problem with Intelligence: Its Value-Laden History and the Future of AI’, Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (2020).
Stephen Cave and Kanta Dihal, ‘The Whiteness of AI’, Philosophy & Technology (2020). https://doi.org/10.1007/s13347-020-00415-6