
Leverhulme CFI researchers provide input on AI and Data-Driven Targeting to the UK Government’s new Centre for Data Ethics and Innovation  

In late 2017, the UK Government announced the creation of the Centre for Data Ethics and Innovation (CDEI). The Leverhulme Centre for the Future of Intelligence (LCFI) welcomed the new CDEI, recognising that the UK Government is the first to announce an official centre of this kind, a move that positions the country as a global leader in ethical AI.
 
In June 2018, Roger Taylor was announced as the Chair of the CDEI, and at the same time the Government launched its Consultation on the Centre’s roles and activities. In this Consultation the Government proposed “six possible areas in which the Centre could undertake projects to strengthen the governance of data and AI uses.” These themes were targeting, fairness, transparency, liability, data access, and intellectual property and ownership. Following this, the Government sought the views of interested groups, including experts from academia, industry, and policy, to inform the Centre’s operations and work programme, as well as its understanding of the current state of data-driven technologies.
 
Several LCFI researchers were asked to contribute a discussion paper to this consultation, sharing their vision and insight on the theme of ‘targeting’. The CDEI Consultation describes the topic of targeting in the following way: “Data and artificial intelligence can produce powerful insights about our behaviour and emotions. This can be used to create better and more efficient public and commercial services, for example by ensuring that individuals receive recommendations for products and services that they value. But it may also restrict the information and choices available to us or even be used to influence, manipulate or control our behaviour in harmful ways.”
 
In particular, the CDEI requested responses in the following areas:
 
(1) What are the technological developments relevant to the theme of ‘targeting’?
(2) What are the biggest opportunities and risks in relation to this theme?
(3) What role can the Centre play in maximising the opportunities and minimising the risks in relation to this theme?
 
LCFI researchers Karina Vold and Jessica Whittlestone, former visiting student Anunya Bahanda, and Executive Director Stephen Cave contributed a short discussion paper on the theme of ‘targeting’, addressing each question posed by the CDEI.
 
Our discussion begins by outlining three ways in which technological advances have changed what is, and what will be, possible in the area of targeting. First, the ubiquitous use of digital technologies makes it possible to collect vastly more, and vastly more personalised, data about individuals’ lives. Second, advances in machine learning allow us to process these large amounts of information and to draw inferences that go far beyond what is explicitly contained in the data. Finally, smart devices, applications, and social media make some forms of targeting easier than ever before, allowing frequent messages and interventions to be delivered, sometimes without people even being aware of it.

We point out many new opportunities that targeting makes possible, such as delivering better public services to those most in need, making more efficient use of resources, and saving people time and energy. But we also raise concerns about some of the risks that come with these new opportunities. These include the concern that targeting methods might be used by self-interested parties to manipulate people in ways that are harmful or that threaten individual autonomy. We also warn that targeting may undermine important notions of fairness in society, as more information can make it easier to discriminate against individuals or groups in harmful ways.

Finally, we focus on two key areas where we believe the CDEI can play a crucial role. The first is identifying areas where targeting is particularly likely to be beneficial. Here we advocate a “policy first, data second” approach: by starting with a clear case for the benefit, and then working backwards to collect data on which groups to target, it is much easier to ensure that targeting is not being used to exploit or manipulate individuals. The second is setting ethical standards for targeting by drawing on an understanding of public opinion about the acceptability of various forms of targeting, as well as on various forms of expertise. In particular, by supporting research and public engagement on these topics, the CDEI could gain valuable insights into how to craft targeted policies that both respect important ethical standards and can gain public support.

You can read LCFI’s full contribution here.