
MPhil Course Content

Course Content and Structure

The MPhil includes one set module, "Introduction to the Ethics of AI, Data and Algorithms", and a choice of elective modules that students can undertake depending on their research interests. Students are expected to attend at least four elective modules from a list which changes each year depending on the focus of researchers in CFI.

The course is full-time and 9 months long. It runs over the three Cambridge terms in a year: Michaelmas, Lent and Easter.

Over the year, students will be marked on a series of essays and a dissertation. A Critical Analysis Essay of up to 3,000 words and two Research Essays, of up to 5,000 and up to 7,000 words respectively, will be the focus of work in Michaelmas and Lent Terms. These, together with a short presentation, form Part 1 of the assessment structure.

Part 2 of the assessment is the dissertation, which can be up to 12,000 words; the majority of this work is expected to be undertaken in Easter Term.

MPhil assessment structure

Students can expect 10 hours of one-to-one supervision to support them in developing ideas and written work for each essay. One supervision will be given for the Critical Analysis Essay, two for Research Essay 1, three for Research Essay 2 and four for the Dissertation.

Core Module: Introduction to the Ethics of AI, Data and Algorithms

In this 4-week, 8-session course, we will come together to gain an overview of some key concepts, theories, technologies, questions and problems in the ethics of ADA, gaining practice in applying specific frameworks and conceptual lenses to concrete examples, as well as experience of reading, analysing and discussing academic texts. This module will also build cohort collegiality and communication skills, and open students up to interdisciplinary perspectives. The content is broken into three main sections.

Introduction to key concepts

Here we will explore the realm of ethics and other normative domains, as well as get an overview of the history of AI, data and algorithms. We will explore questions such as: What is technology, and can it be value-free? What roles can humans have in relation to AI? How do these roles affect our answers to ethical questions?

A deeper dive into contemporary debates

We will then turn to an in-depth exploration of some key areas in the ethics of ADA: How does bias enter into the design, functioning and effects of AI, and how can it be resolved? How can different power relations be problematic? What should machine ethics look like? Should we defer to AI on moral matters? What is the separation between the virtual and the real? What ethical issues are raised by AI-human interaction?

Moving beyond our current context

Finally, we will explore what happens when ADA is deployed across borders, cultures and values, and for increasing numbers of people, and we will look towards the future. We will address questions such as: How do we resolve moral disagreement? What ethical problems arise from deploying AI at scale? How can we anticipate and prepare for future ethical issues arising from AI? What role can imagination play in helping us understand these?

Examples of possible elective modules

In addition to the core module, students will choose a minimum of four elective modules. These will not be marked through an explicit assessment, but they will support students' ideas and writing for their chosen research essay and dissertation topics.

Artificial Sentience and the Robot Rights Debate

Under what conditions, if any, should a machine be considered a ‘moral patient’? Moral patiency refers to the status of being someone who matters morally: for example, someone whose rights or wellbeing others must consider when making ethical decisions. In this module, participants will be introduced to prominent philosophical theories of moral patiency, and discuss whether these apply to any current or foreseeable types of AI.

Gender and AI

This module gives an overview of how feminist and queer thought offers a unique and robust basis for AI ethics. During the module we will discuss some of the major feminist perspectives and epistemologies, their core aims and strategies, and how they relate to concrete attempts at developing feminist AI. We will also discuss feminist posthumanism and the critiques this perspective has raised against transhumanist thinking about AI.

Ethical Considerations in Machine Learning and Machine Intelligence

This course introduces algorithmic auditing and its role in identifying and mitigating bias in algorithms. The course emphasises the importance of context when auditing models intended for real-world use, drawing examples from criminal justice, healthcare, and finance. We will learn about different technical and legal definitions of fairness, and the tensions between them. We will pay particular attention to understanding data issues, a common source of bias that is easy to overlook without appropriate domain knowledge. This course is jointly offered to CFI and Engineering students, and aims to foster interdisciplinary understanding and exchange.

Race, Colonialism and AI

This module explores how racism, colonialism and imperialism shaped the tech industry, and how these legacies continue to shape AI development and deployment today. Students will gain an understanding of different approaches to race and racism, from Afropessimism and Asian diaspora studies through to critical whiteness studies and Indigenous Futurisms. It will also explore how these theories illuminate specific tech uses and domains, such as surveillance and the use of AI in medicine.

Ethics of AI Prediction

AI and data have tremendous potential to generate new insights and knowledge. While more knowledge may initially seem an unqualified positive, or at worst an ethically neutral tool, things are not so simple. This module explores the ethical questions and risks behind the epistemic power of AI, such as: What counts as accurate? Can we know too much? When is classification a matter of policy as much as science? What should we do when the truth can be presented in multiple ways? Considering these questions and others shines a light on how the information and knowledge we gain from AI demands a deeply ethical analysis.

Evaluation of AI Systems

The module explores the current state of AI evaluation and the myriad issues surrounding it. It will provide participants with an understanding of why robust evaluation is important from a societal and ethical perspective, and an overview of alternative approaches. Finally, we will discuss the difficulties associated with evaluating increasingly larger and more capable systems.

Intelligence Rising

This module looks at the international governance of potential future AI technologies. The first three weeks will consist of a strategic role-playing game, through which participants will be introduced to different actors, mechanics and dynamics that might arise in scenarios involving transformative technological change. This role-playing game is one that the module leaders have used to study AI governance with decision makers in industry and policy. In the final week, we will discuss what this type of exercise can teach us about international AI governance.

Theories of Socio-digital Design for Human-Centred AI

This module offers a critical encounter with the broad field of human-computer interaction (HCI), extending to consider topics such as intelligent user interfaces, critical and speculative design, participatory design, cognitive models of users, human-centred, as well as more-than-human-centred design, and others. While the module is hosted by the Department of Computer Science and Technology, it is theory-focused and does not require previous programming or design experience. Through discussion of both theoretical material and relevant ‘tools’ aiming to impact AI design practice (such as design checklists, policy briefs, or ideation cards), the module will allow students of diverse academic backgrounds to effectively link the political, social, and ethical considerations in HCI to specific technical and conceptual design challenges in Human-Centred AI development.