

MPhil in Ethics of AI, Data and Algorithms: Elective Modules 2023-2024*

*Please note: modules will only run with a minimum of three people enrolled.

MADA1: Artificial sentience and the Robot Rights debate

Convenor: Dr Henry Shevlin
Teaching weeks: Michaelmas Term, Weeks 5-8
Summary: Under what conditions, if any, should a machine be considered a ‘moral patient’? Moral patiency refers to the status of being someone who matters morally, for example, someone whose rights or wellbeing others must take into account in ethical decision-making. In this module, participants will be introduced to prominent philosophical theories of moral patiency and discuss whether these apply to any current or foreseeable types of AI.

MADA2: Gender and AI

Convenors: Dr Eleanor Drage and Dr Apolline Taillandier
Teaching weeks: Michaelmas Term, Weeks 5-8
Summary: This module gives an overview of how feminist and queer thought offers a unique and robust basis for AI ethics. During the module we will discuss some of the major feminist perspectives and epistemologies, their core aims and strategies, and how they relate to concrete attempts at developing feminist AI. We will also discuss feminist posthumanism and the critiques this perspective has raised against transhumanist thinking about AI.

MADA3: Ethical Considerations in Machine Learning and Machine Intelligence

Convenor: Dr Miri Zilka
Teaching weeks: Michaelmas Term, Weeks 5-8
Summary: This will be an Engineering MLMI MPhil core module with 4x1hr seminars. CFI students will receive an additional 2hr workshop.
This course provides an introduction to algorithmic auditing and its role in identifying and mitigating bias in algorithms. The course emphasises the importance of context when auditing models intended for real-world use, drawing examples from criminal justice, healthcare, and finance. We will learn about different technical and legal definitions of fairness and the tensions between them. We will pay particular attention to understanding data issues, a common source of bias that is easy to overlook without appropriate domain knowledge. This course is jointly offered to CFI and Engineering students, and aims to foster interdisciplinary understanding and exchange.

MADA4: Race, Colonialism and AI

Convenors: Dr Stephen Cave and Dr Kerry McInerney
Teaching weeks: Michaelmas Term, Weeks 5-8
Summary: This module explores how racism, colonialism, and imperialism shaped the tech industry, and how these legacies continue to shape AI development and deployment today. Students will gain an understanding of different approaches to race and racism, from Afropessimism and Asian diaspora studies through to critical whiteness studies and Indigenous Futurisms. The module will also explore how these theories illuminate specific tech uses and domains, such as surveillance and the use of AI in medicine.

MADA5: Ethics of AI Prediction

Convenor: Dr Claire Benn
Teaching weeks: Lent Term, Weeks 1-4
Summary: AI and data have tremendous potential to generate new insights and knowledge. While more knowledge may initially seem an unqualified good, or at worst an ethically neutral tool, things are not so simple. This module explores the ethical questions and risks behind the epistemic power of AI, such as: What counts as accurate? Can we know too much? When is classification a matter of policy as much as science? What should we do when the truth can be presented in multiple ways? Considering these questions and others shines a light on how the information and knowledge we gain from AI demand a deeply ethical analysis.

MADA6: Evaluation of AI Systems: Capabilities, Safety and Generality

Convenor: Dr John Burden
Teaching weeks: Lent Term, Weeks 1-4
Summary: The module explores the current state of AI evaluation and the myriad issues surrounding it. It will provide participants with an understanding of why robust evaluation is important from a societal and ethical perspective, along with an overview of alternative approaches. Finally, we will discuss the difficulties associated with evaluating increasingly large and capable systems.

MADA7: AI Governance: Intelligence Rising

Convenors: Dr Haydn Belfield and Dr Shahar Avin
Teaching weeks: Lent Term, Weeks 1-4
Summary: This module looks at the international governance of potential future AI technologies. The first three weeks will consist of a strategic role-playing game, through which participants will be introduced to different actors, mechanics and dynamics that might arise in scenarios involving transformative technological change. This role-playing game is one that the module leaders have used to study AI governance with decision makers in industry and policy. In the final week, we will discuss what this type of exercise can teach us about international AI governance.

MADA8: Theories of Socio-digital Design for Human-Centred AI

Convenors: Prof Alan Blackwell and Dr Tomasz Hollanek
Teaching weeks: Lent Term, Weeks 1-4
Summary: This module offers a critical encounter with the broad field of human-computer interaction (HCI), extending to topics such as intelligent user interfaces, critical and speculative design, participatory design, cognitive models of users, and human-centred as well as more-than-human-centred design, among others. While the module is hosted by the Department of Computer Science and Technology, it is theory-focused and does not require previous programming or design experience. Through discussion of both theoretical material and relevant ‘tools’ aiming to impact AI design practice (such as design checklists, policy briefs, or ideation cards), the module will allow students of diverse academic backgrounds to link the political, social, and ethical considerations in HCI to specific technical and conceptual design challenges in Human-Centred AI development.