Executive Summary: We are a group of academic researchers in AI, with positions at the Universitat Politecnica de Valencia, the University of Copenhagen, the University of Cambridge, and the Leverhulme Centre for the Future of Intelligence – a leading international centre in AI ethics. We have published dozens of technical papers on machine learning and artificial intelligence, as well as reports and white papers on the ethics and governance of AI.
Our submission focuses mainly on the “Ecosystem of Trust: Regulatory Framework for AI”, and in particular on the mandatory conformity assessments for high-risk AI applications carried out by independent testing centres. Our key recommendation is to retain this proposed structure and not to water it down.
In this submission we 1) support the Commission’s proposed structure, defend this approach on technical, policy, practical, and ethical grounds, and raise some considerations for future extensions; and 2) offer specific recommendations for the mandatory requirements.