
AI Ethics & Pedagogy

There has been a rush to institute ‘ethics owners’, cadres of people tasked with ‘doing’ ethics in engineering, data, and software development organisations in Silicon Valley (Moss and Metcalf, 2020). However, their influence remains limited: their roles are poorly defined, and their input into the design of technologies is often curtailed. Similarly, in many other sectors and industries, there is significant enthusiasm among working professionals to learn what ‘ethical AI’ means in their fields and how to address it.

This is evidenced by the growing number of ethical and responsible AI courses for working professionals and university students, and by the number of people applying to our educational programmes. However, there is limited evidence and analysis of how teaching and learning materials and resources correspond to ‘ethical and responsible AI’, or to a critical technical practice, in data science, engineering, business, civil society or government. Promising pedagogical approaches have nevertheless emerged in formal university teaching contexts, such as using science fiction to teach ethics to computer science undergraduates (Burton, Goldsmith and Mattei, 2018) and reading fiction to cultivate the imagination required for teaching information ethics (Vamanu, 2023).

We find that working professionals face a distinct kind of immediacy in responding to the ethical and societal challenges of AI. While many may want to move into working with profitable AI technologies and industries, others already in those industries, or having to make decisions about AI applications, are concerned with the impacts of the products they build and the communities they represent or are accountable to. What kinds of pedagogical practices enable working professionals to do ‘ethics work’, and what conditions enable their success? What are the implications of the professionalisation and credentialing of ‘AI ethics’ in different professional contexts? These are the questions that interest the Critical AI Pedagogies project (CAP).

We understand ‘ethics work’ as the work of care, repair, harm reduction, risk mitigation, and responsibility in the shaping of AI. Thus, we re-frame such work to include contending with: limited socio-technical literacies in regulation; how the rhetoric of geopolitical ‘races’ and hype influence business imaginaries and practices; ethics and values positioned as design trade-offs; the lack of will and resources for values-oriented interventions; the lack of ‘upward’ accountability; fragmented communities of practice; the translation required between specialists; and other social and organisational practices in the complex and distributed assembly of AI production.

CAP is interested in a range of pedagogical practices responding to these challenges: approaches to curricula, kinds of assessment, learning methods, and the meta-epistemological perspectives shaping AI as an interdisciplinary field. Rather than identifying circular causality, this project maps how different kinds of pedagogical practices are influential, or not, in stimulating working professionals’ fluencies in organisational contexts re-shaped by the introduction of AI technologies. The results of this participatory project will include teaching and learning curricula, workshops, and community networks to support research, teaching and learning about AI. The project will involve the working professionals, scholars, regulators, and graduate students whom we encounter through our educational work.

Multiple conceptual frameworks shape this inquiry: Social Construction of Technology (SCOT) approaches; ‘located accountabilities’ (Suchman, 2002); the role of lay experts in the shaping of scientific innovation (Epstein, 1996); and the ‘epistemic fluency’ required of working professionals in emerging and challenging areas of science and technology (Markauskaite and Goodyear, 2017). ‘Critical AI Pedagogies’ is also inspired by the well-established ‘critical pedagogy’ tradition in Education (Giroux, 2020), as well as by the writings of the AI scientist Philip Agre, who outlined a ‘critical technical practice’ in the 1990s.


I believe that a technical field such as AI can contribute a great deal to our understanding of human existence, but only once it develops a much more flexible and reflexive relation to its own language, and to the experience of research and life that this language organizes.

Phil Agre, ‘Toward a critical technical practice: Lessons learned in trying to reform AI’ (1997)

Agre wrote about why it was necessary to ‘reform AI’, to ‘wake up’ and build a ‘critical technical practice’ in computing (Agre, 1997). Alongside building software, Agre developed a custom practice of hermeneutical inquiry that included Heideggerian philosophy, the history of the organisation of ideas, minute observation of and journaling about his own behaviour, and cross-disciplinary social networks, among other elements (Masís, 2014). In other words, he tried to approach the problems AI was being built to solve from multiple, different theoretical and embodied perspectives. For him, exploring and negotiating “difference, tension, and conflict” were key practices that “demonstrate the value of interdisciplinary work and teaching” (Baker and Däumer, 2015).

For Agre, as for us, how we learn about and engage with a world of diversity and difference is integral to learning about AI. From this perspective, CAP is also interested in how AI works as a meta-epistemological prompt about the practice of scientific research and teaching, and what this means for the shaping of knowledge about the world. What kinds of creative, disruptive, and innovative research and teaching methods are scholars in the sciences and the humanities employing to teach and learn about AI in times of environmental, democratic, and human crises?

Through collaborations with academic partners, CAP will investigate how pedagogy is being re-shaped to build expertise across disciplinary divisions.


References:

  • Agre, P. E. (1997). Toward a critical technical practice: Lessons learned in trying to reform AI. In G. C. Bowker, S. L. Star, W. Turner, & L. Gasser (Eds.), Social Science, Technical Systems, and Cooperative Work: Beyond the Great Divide (pp. 131–157). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Baker, W. D., & Däumer, E. (2015). Designing interdisciplinary instruction: Exploring disciplinary and conceptual differences as a resource. Pedagogies: An International Journal, 10(1), 38–53. https://doi.org/10.1080/1554480X.2014.999776
  • Burton, E., Goldsmith, J., & Mattei, N. (2018). How to teach computer ethics through science fiction. Communications of the ACM, 61(8), 54–64. https://doi.org/10.1145/3154485
  • Epstein, S. (1996). Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley: University of California Press.
  • Giroux, H. A. (2020). Introduction. In On Critical Pedagogy (1st ed., pp. 1–16). London: Bloomsbury Academic. http://www.bloomsburycollections.com/book/on-critical-pedagogy/introduction/
  • Markauskaite, L., & Goodyear, P. (2017). Epistemic Fluency and Professional Education. Dordrecht: Springer Netherlands.
  • Masís, J. (2014). Making AI philosophical again: On Philip E. Agre’s legacy. Continent, 4(1), 58–70.
  • Moss, E., & Metcalf, J. (2020). Ethics Owners: A New Model of Organizational Responsibility in Data-Driven Technology Companies. New York: Data & Society Research Institute. https://datasociety.net/pubs/Ethics-Owners.pdf
  • Suchman, L. (2002). Located accountabilities in technology production. Scandinavian Journal of Information Systems, 14(2), Article 7.
  • Vamanu, I. (2023). Cultivating imagination: A case for teaching information ethics with works of fiction. Journal of Education for Library and Information Science, 64(1), 1–17. https://doi.org/10.3138/jelis-2020-0035
