Current Students
We want to extend a warm welcome to all of our new MPhil students who joined us at Darwin College for their induction day. Our cohort for this year has a diverse array of interests, as shown below, and we can’t wait to work with them!
You can click on a person’s name to send them an email.
Daisy Coburn
Artificial consciousness and the grounds on which an artificially intelligent agent may have a place in the moral circle. Further, the social and ethical implications of ascribing moral status and rights to artificially intelligent agents.
Daniel brings a decade of experience in policy advocacy and political strategy. Using social research, he has designed effective messages that change public perceptions and behaviours for some of Australia’s most prominent political and social campaigns – including eight elections and the landmark Campaign for Marriage Equality. He holds a University Medal for his research on online political discourse and has collaborated with C-suite executives and Cabinet Ministers to improve election integrity and ad transparency, and to reduce misinformation. At CFI, he is developing a values-based messaging framework that empowers public accountability of emerging technology. He is a fellow of the Centre for Human Technology and chair of the Australian Progressive Technology Network.
Philosopher specialising in consciousness science, artificial intelligence and aesthetics. www.dvijamehta.com
Looking to learn broadly, but my main interests at the moment are in the philosophy and safe regulation of AI systems; cognitive approaches to language processing; and information ecosystems, democracy and the digital public sphere.
My core intellectual and altruistic interest consists in helping Earth-originating intelligent life (get a better understanding of how to) prevent as much (intense) suffering as possible, considering all minds within all of spacetime. Consequently, I am particularly interested in questions about suffering, the future, and uncertainty. Considerations regarding scope sensitivity and expected (dis)value motivate me to focus on s-risks: risks of worst-case scenarios in terms of suffering. The main question I intend to explore in the course of this MPhil is which (global) AI governance efforts, if any, might constitute promising avenues, at this point, for mitigating extreme AI-related risks other than those of human extinction. Barely researched sub-topics thereof include global stable totalitarianism and other lock-ins of dystopias; a world economy based on productive AIs suffering on an unfathomable scale; and catastrophic conflict where at least one party is an advanced AI or an AI-enabled actor, especially those very high in dark tetrad traits.
I am interested in the concept of machine consciousness and the attribution of intentionality to AI chatbots such as ChatGPT and Bard by users. I aim to research what it would take for a model to be considered conscious and the ethical implications this carries.
Additionally, I plan to explore techniques for reducing bias in algorithmic decision-making systems. My research will focus on the efficacy of transparent systems and address concerns about their susceptibility to manipulation.
Inbar holds an Honours Bachelor’s degree in Education and Psychology from the Hebrew University of Jerusalem. In addition to exploring the dynamic challenges and opportunities that AI poses in education, she is interested in exploring the ethical implications associated with educational technology, as well as how policymakers respond to these issues.
Formerly a member of the Head Scientist’s office in the Israeli Ministry of Education, she worked on educational technology and AI policy issues.
Her research interests include rentier capitalism and education rentiers, the marketisation of education, feminist theory, complexity theory, and privacy and data collection in the digital age.
Addictively terrified by AI sentience, curious about organoid intelligence, serious about AI governance, and willing to experiment with AI pedagogy.
I centre my research on the political dimension of the ethics of AI, data and algorithms. I enjoy exploring the tension between the right decision and the right to decide in emerging digital ecosystems. The topics I am particularly interested in include transhumanism, AI governance, the future of digital payment, and intersectional digital futures, among others.
My research interests include the inner and outer alignment problems, responsibility and accountability in opaque decision systems, the ontology of reasoning, human-computer interaction and extended cognition, AI governance, and infinite ethics. I have completed EA Cambridge’s ‘AGI Safety: Technical Alignment Fellowship’ and won Trinity College’s Samuel Devons Essay Competition (2022) for essays on science for the fulfilment of moral and social responsibilities.
I’m looking to gain a broad perspective on current research fields across AI ethics, while going into greater detail on topics within global politics and the geopolitical implications of AI; faith, religion and AI; as well as more specific safety and alignment research.
I am interested in the ethical, social, and legal aspects of AI, focusing specifically on the emerging sub-field of AI for global development (AI4D) and its promise to empower women and Indigenous people through innovative AI initiatives in the Global South. My research interests include the governance of AI, coloniality/modernity of gender, and the role AI plays in the production of universal narratives and transformative imaginaries of the future.
My research interests concern negligence in AI, particularly when it is used within medical settings and with regard to self-driving cars. I hope to use this MPhil as a chance to explore the development of comprehensive legal and ethical frameworks for AI in these situations, thereby enabling more precise allocation of liability.
Nikhil is an emerging creative and academic interested in situating “AI” technologies within historical structures of power and violence, drawing from transfeminist, anti-capitalist, abolitionist, and anti-caste perspectives. A recent graduate of Harvard University in Computer Science and History/Literature, Nikhil spent their undergraduate years studying biometric surveillance, Brahminical-colonial eugenics, and histories of anti-colonial solidarity in the Global South. Learning from the deep wisdom of marginalized communities, Nikhil’s work grapples with what it means to build collective power against sociotechnical systems while foregrounding complicity as a primary ethical obligation (writing as a caste- and immensely class-privileged person raised in Silicon Valley).
Artificial Creativity and Digital Equality
Raphael Hernandes is a journalist specializing in technology and data and is currently undertaking LCFI’s MPhil program. Before coming to Cambridge, he served as an editor-at-large for Folha de S.Paulo, one of the most influential newspapers in Brazil.
As a reporter who can also code, he manages technology projects, applies his data skills to reporting, and writes about the intersection between tech and society, focusing on artificial intelligence, cybersecurity, metaverse, and ethics.
Now, his career and study goals are centered on algorithm accountability, with a particular interest in understanding their impact on the information environment.
I am interested in exploring global institutions governing AI, with a focus on their structure and design. My goal is to understand how these entities can ensure fair and equitable AI benefit-sharing globally while mitigating risks. I aim to develop inclusive and ethical governance models by reconciling varied stakeholder interests and fostering responsible and equitable AI use within global governance frameworks.
My research interests are primarily in artificial intelligence (AI) governance, particularly developing ethical frameworks, regulations, and policies to promote the responsible development and application of AI technologies. I seek to tackle issues such as accountability, bias, transparency, and societal impact.
Valena is an MPhil student in Ethics of AI at the University of Cambridge, fully funded by the Gates Cambridge Scholarship. Her research interests include normative ethics, meta-ethics, and political philosophy. The questions she focuses on assess the impact of AI on our epistemic agency, how to assign responsibility to AI frameworks, and especially how an artificial moral agent should ethically behave in a pluralistic society. Before Cambridge, Valena completed her BA in Philosophy at King’s College London.
My research interests lie at the intersection of AI, law, healthcare, and ethics. I aim to explore how interdisciplinary legal scholarship can address the instances in which AI is unsafe and unethical. The application of technology in healthcare is a particular interest of mine, as I seek to explore how the benefits of medical AI can be realised without compromising patient safety.