
Postdoctoral Researcher - Trust and Transparency

Location: Cambridge, UK

Role information

In collaboration with the Machine Learning Group, the Leverhulme Centre for the Future of Intelligence (CFI) at the University of Cambridge invites applications for a postdoctoral Research Associate to work on the project 'Trust and Transparency in AI', which spans multiple disciplines including machine learning, law, psychology and policy. Funding for this position is available for 2 years in the first instance. It is an exciting opportunity for a talented individual to make a major contribution to the development of this field.

CFI is a new, highly interdisciplinary research centre addressing the challenges and opportunities posed by AI. Funded by the Leverhulme Trust, CFI is based at the University of Cambridge, with partners in the University of Oxford, Imperial College, and UC Berkeley, and has close links with industry and policymakers.

This is a new post within CFI's Trust and Transparency project. This project, led by Dr Adrian Weller and Professor Zoubin Ghahramani, and involving partners at Imperial College and DeepMind, examines technical, legal and social mechanisms for ensuring AI systems are appropriately transparent and trustworthy. Its strands include:

(i) Transparency: studies ways to make the reasons for an AI's predictions or decisions interpretable. There is an emerging field of research studying these issues within machine learning, though the psychology of how humans understand systems is also important.

(ii) Trustworthiness: seeks to understand when humans tend to trust machines, and when they should; that is, what makes intelligent and autonomous systems appropriately trustworthy. This strand includes topics such as reliability and robustness, and may involve insights from machine learning, psychology, human computer interaction, anthropology and more.

(iii) Law and Governance: explores what policy instruments and standards can help ensure that AI systems are fair, appropriately transparent, trustworthy, interpretable, and respect privacy and human rights; what we mean by these concepts with regard to algorithms; and how they should be enforced.

Candidates are expected to have expertise in at least one of these strands, e.g. machine learning or law, including a relevant PhD. If the appointee has a machine learning background, they could be offered a joint appointment with the Machine Learning Group in the Department of Engineering.

If you have any questions about this vacancy, please contact Susan Gowans. Please quote reference GO17672 on your application and in any correspondence about this vacancy.

Closing Date: Midnight (GMT) on 31 January 2019

Interviews planned for February 2019

To apply online for this vacancy, please click on the 'Apply here' button below. This will route you to the University's Web Recruitment System, where you will need to register an account (if you have not already) and log in before completing the online application form.

Please upload in the Upload section of the online application (1) your CV; (2) a Covering Letter of no more than 1,500 words, outlining a proposed research direction and explaining how your skills and proposal would contribute to this project in particular, and to CFI more broadly; and (3) a Sample of Writing of no more than 5,000 words that demonstrates your suitability for this project. If you upload any additional documents, we will not be able to consider these as part of your application.

The University values diversity and is committed to equality of opportunity.

The University has a responsibility to ensure that all employees are eligible to live and work in the UK.

Further particulars

Apply here
