
Comment on the EU's world-first AI regulation: "an historic opportunity"

The European Commission is proposing the world’s first regulation on AI - a historic opportunity to set global standards that will influence how this technology shapes our world.

This comment represents the views of the AI:FAR team, which includes: Jess Whittlestone, Haydn Belfield, Seán Ó hÉigeartaigh, Matthijs Maas, Alexa Hagerty, John Burden and Shahar Avin.

Overall, we are very supportive of the proposed regulation, and particularly of its risk-based approach. AI systems promise substantial benefits, but we are also already seeing risks of substantial and irreversible harm, both to individuals and to society more broadly. For example, AI is already being used in ways that can result in discrimination and manipulation at the individual level, and is changing how we interact with information online at the broader societal level. As this technology advances rapidly and is used more widely, the stakes of potential failures and misuse will inevitably rise, with possibly extreme and long-lasting systemic impacts: on our information ecosystem, labour markets, and international relations, for example. It is essential that governments have the tools to understand and anticipate the risks of AI, and to hold those developing AI accountable for harms.

However, the impact this regulation has will depend on many specific details of implementation, including how different risk levels are interpreted; how new sources of risk are identified; and how prohibitions and assessments are implemented in practice. It is particularly important that AI governance can be flexible and responsive to changes in the underlying technology's capabilities, applications, and impacts on society. Regulatory approaches have an important role in an AI governance ecosystem, but without care they could quickly become ossified. Regulation may be most effective if complemented by a strong set of adaptive, non-regulatory approaches, such as initiatives to strengthen governments’ capacity to monitor, assess, and anticipate the impacts of AI, and the use of participatory approaches to create space for wider public discourse about what we do and do not want from AI.

We plan to comment on the regulation in more depth in response to the open consultation, but for now briefly highlight three points that we believe are particularly important.

First, key to a risk-based approach is how risk levels are defined, and this needs more thought. The proposal suggests that several “unacceptable” uses of AI should be prohibited, including uses of AI for manipulation and exploitation, general purpose social scoring, and surveillance in public spaces. The bounds of these prohibited uses are fuzzy - how do we determine whether an AI system is responsible for ‘manipulating’ human behaviour in ways that lead to harm? Some of the categories identified as ‘high risk’ are very broad, including the use of AI in educational and vocational training, employment, and access to essential services. There may well be some more bureaucratic uses of AI in these areas which it would be disproportionate to consider high-risk, so the process used to identify specific high-risk uses within the broad categories outlined will be important. On the other hand, some areas where AI is beginning to be used are missing from the list despite appearing to be at least as risky as those currently listed, including the use of AI in automated financial trading, political advertising on social networks, and cybersecurity. Part of the challenge in clearly defining and delineating risk levels is that it can be difficult to determine the exact role of “AI” in many contexts: many human decisions today are influenced by data, knowledge or recommendations provided by AI systems in ways people may not be aware of or able to trace.

Second, the current approach to risk assessment pays insufficient attention to systemic risk. The proposal’s approach to risk assessment focuses almost entirely on cases where AI systems may result in direct harm to individuals, but broader structural or systemic impacts are at least as important to consider. For example, the use of AI in online content recommendation may directly harm individuals if deemed manipulative, but may also have broader harms as a result of changing the nature and quality of online discourse (e.g. increasing polarisation or reducing trust). The potential for uses of AI to increase discrimination also goes beyond specific instances of biased decisions, and needs to take into account broader ways that disadvantaged communities may be more likely to experience the harms, and less likely to access the benefits, of AI. We suggest that the proposed approach to risk assessment could be strengthened and complemented by investing in capacity for more systemic risk assessment, which could also help with identifying new high-risk areas in future. There is an open question as to whether systemic risks can be addressed via the same regulatory processes as more direct and easily identifiable harms, but better processes for identifying and understanding these risks are a crucial starting point for addressing them.

Third, the development of assessment processes in coming years will be important. The proposal outlines a number of requirements for systems classified as high-risk, including technical documentation and logging processes, and that systems are designed and developed in ways that ensure traceability of functioning, ‘sufficient’ transparency of operation, effective oversight, and “an appropriate level of accuracy, robustness, and cybersecurity”. These are broad requirements, and the processes that will be needed to conduct these assessments are not yet laid out; there are currently many different tools and approaches to testing and assuring the operation of AI systems and components. The proposed regulation is likely to prompt considerable experimentation in the development of assessment processes in coming years, which we see as a good thing, so long as it does not simply allow companies to ‘mark their own work’ however they like. The EC will need ways to establish independent certification bodies and robust assessment processes, and to determine which are best suited to different purposes. We would be particularly keen to see work on developing broad and open-ended assessment processes that can take into account the more systemic impacts of AI discussed above, including processes using participatory methods to ensure a wide range of impacts and concerns are captured.
