
Response to UK's AI regulation whitepaper

The UK Government released its whitepaper, AI regulation: A pro-innovation approach, on Wednesday 29 March 2023. Research from both LCFI and CSER was cited in the report, including The Malicious Use of AI, to which Shahar Avin, Haydn Belfield, Sean O hEigeartaigh and SJ Beard contributed, and Why and how government should monitor AI development by CSER/LCFI alumna Jess Whittlestone and co-author Jack Clark.

The whitepaper describes an agile, sector-specific approach to AI governance, adhering to five principles: (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. The sector-specific approach will be combined with a central function to “coordinate, monitor and adapt the framework as a whole”.

Below are comments on the whitepaper from several LCFI and CSER researchers:

Governance of frontier AI models and evaluation of extreme risks

Sean O hEigeartaigh, Director of the AI:FAR Programme (a joint initiative of LCFI and CSER), is encouraged by the iterative approach proposed: “An agile, iterative approach focusing on sector-specific applications makes sense for most AI applications. Foundation models in particular are showing remarkable advances, and their broad capabilities mean that they will affect many sectors and could pose a variety of risks. Each successive generation demonstrates surprising new capabilities with both positive and misuse potential, as well as persistent problems such as ‘hallucination’. It is appropriate that these AI systems be subject to ongoing monitoring and evaluation, under the central function.”

"I was pleased to see that the government is considering steps such as monitoring compute use for training runs, and requirements for reporting for frontier models above a certain size,”  he added. “It was also encouraging to see that a central, cross-economy risk function will be established, and that it will 'include "high impact but low probability" risks such as existential risks posed by artificial general intelligence or AI biosecurity risk'. We look forward to advising the Government on implementation of these functions.”

Context-specific regulation

LCFI Student Fellow and University of Cambridge PhD candidate Harry Law states: “In principle, empowering existing regulators to develop and oversee context-specific rules regarding the use of machine learning will equip them with the awareness and knowledge to effectively implement new requirements. In practice, however, for such an approach to be effective it will require the appropriate allocation of resources towards enhancing regulatory authorities' capabilities, measures to foster technical proficiency, and efforts to connect regulators to the wider governance ecosystem.”

AI in recruitment

Dr Kerry McInerney, Research Fellow at LCFI, was pleased to note the attention given to AI in recruitment: “We welcome the addition of a case study on AI used for hiring to the white paper, as our 2022 study found that AI-powered video recruitment systems are frequently misadvertised to consumers. This is leading to the inappropriate uptake and deployment of AI-powered products that simply cannot do what they say on the tin. In a moment of extreme AI ‘hype’ it is more important than ever to ensure consumers are getting accurate information about what AI products can and cannot do.”

However, LCFI Research Fellow Dr Eleanor Drage cautions that there is still work to be done: “While the white paper’s ‘appropriate transparency and explainability’ principle now directs AI companies towards responsible marketing practices such as product labeling, my colleagues Stephen Cave and Kerry McInerney and I have called for the Office of AI to go further in regulating how AI companies communicate their product’s capabilities to procurers and the public.”

"AI companies must state their products’ limitations as well as their capabilities, and these capabilities should be based in scientifically proven research rather than - as is currently the case - advertisements and corporate white papers dressed up as peer-reviewed science. We look forward to collaborating with the Office of AI on AI Assurance and on AI Standards to address this issue. We are also pleased to see AI-specific regulators being encouraged to join forces with other regulators, as in the hiring case study. Collaboration and joint guidance will be crucial to ensuring existing standards are maintained.”  

Further commentary

CSER’s Haydn Belfield also separately co-authored a response to the whitepaper with Labour for the Long Term.
