
Leverhulme Centre for the Future of Intelligence welcomes new Centre for Data Ethics and Innovation

The Autumn Budget 2017 announced:

 “The government will create a new Centre for Data Ethics and Innovation to enable and ensure safe, ethical and ground-breaking innovation in AI and data driven technologies. This world-first advisory body will work with government, regulators and industry to lay the foundations for AI adoption”

Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence (LCFI), said:

 “I’m delighted that the UK will have the world’s first national advisory body for artificial intelligence. We have long been supporters of such a Centre – it’s great to see the Government take such a groundbreaking step. We look forward to working with the Centre to ensure safe, ethical and ground-breaking innovation in AI.”

Background

The mission of the Leverhulme Centre for the Future of Intelligence (LCFI) is to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal: to work together to ensure that we make the best of the opportunities of artificial intelligence as it develops over coming decades.

In May 2016 LCFI Director Professor Huw Price recommended “establishing an appropriate standing body” on data and AI in Written Evidence to the House of Commons Science and Technology Committee inquiry into Robotics and artificial intelligence[1]. In its Report the Committee recommended to the Government that “a standing Commission on Artificial Intelligence be established”[2]. In June 2017 the Royal Society and the British Academy also recommended “the creation of an independent body to steward the [data] governance landscape as a whole.”[3] In September 2017, CFI members emphasised in Written Evidence to the House of Lords Select Committee on AI the importance of such a body also covering AI.[4]

In a speech to LCFI’s first annual conference, the Rt Hon Matt Hancock MP, Minister of State for Digital, publicly announced support for the body: "I’m delighted that the Royal Society and the British Academy have done such excellent work, setting out the challenges we face, and proposing a new body or institute that can take a view, talk to the public and provide leadership. It’s a very interesting idea similar to that we proposed in the manifesto, and we are considering it carefully."[5]



[1] http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/robotics-and-artificial-intelligence/written/32598.html

See also Dr Owen Cotton-Barratt on behalf of LCFI sister organisations the Future of Humanity Institute, Centre for the Study of Existential Risk, Global Priorities Project, and Future of Life Institute: http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/robotics-and-artificial-intelligence/written/34840.html

[2] https://publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/14509.htm#_idTextAnchor039

[3] https://royalsociety.org/topics-policy/projects/data-governance/

[4] http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69702.pdf

[5] https://www.gov.uk/government/speeches/matt-hancocks-speech-to-the-leverhulme-centre
