Research Associate in Paradigms of Artificial General Intelligence and Their Associated Risk

Location: Cambridge
Closing date: 26 August 2019

Role information

The University of Cambridge’s Centre for the Study of Existential Risk (CSER) invites applications for a Post-Doctoral Research Associate to work on safety challenges associated with increasingly general artificial intelligence systems.

Research efforts are being devoted globally to developing artificial intelligence systems with greater generality: an ability to function effectively in a wider range of environments, and to solve a broader range of tasks. Looking ahead, there are likely to be areas of scientific and intellectual progress that will require the types of planning, abstract reasoning, and meaningful understanding of the world that we associate with general intelligence in humans and animals. A key question is whether systems with a greater degree of generality may have different risks and unknowns in comparison to the more specialised, constrained systems we are used to.

The Associate will contribute to and lead technical research on topics including: use of resources, performance on tasks requiring general intelligence, and rates of progress in artificial intelligence. The research will link to the growing body of work on different aspects of AI safety, with the aim of better understanding the links between the capability, generality and safety of AI systems.

As well as producing targeted research outputs within these areas, the Research Associate will collaborate on project organisation, and will build collaborations with world-leading partners in academia and industry, building on existing connections between CSER, the Leverhulme Centre for the Future of Intelligence and research groups at Cambridge, Oxford, Imperial, OpenAI, the Partnership on AI and others. This is an exciting opportunity for a talented researcher to engage in a cutting-edge research programme and to develop their own lines of enquiry.

Applicants must have:

- A PhD in a relevant field or professional experience in a relevant research area commensurate with the requirements of the role.
- Expertise relevant to the focus area.
- The ability to engage with scientific literature ranging from AI/machine learning, to AI safety, to performance measurement and testing.
- Evidence of ability to work in collaborative environments, and the ability to engage with diverse communities of experts.
- Excellent written and oral communication and presentation skills.
- Evidence of a serious research interest in the research foci of the Centre.

Fixed term: Funding for this post is available for two years in the first instance.

The University actively supports equality, diversity and inclusion and encourages applications from all sections of society.

The University has a responsibility to ensure that all employees are eligible to live and work in the UK.
