
EU AI Act: How well does it protect children and young people?

Article written by Dr Nomisha Kurian, a CFI Associate Fellow, Teaching Associate in Digital Sociology, and Churchill College By-Fellow

It has been exciting to see the European Parliament approving the world's first comprehensive framework for regulating AI and thereby encouraging responsible innovation. But how well does the EU AI Act protect some of its most vulnerable users - children?

Positives:

To start with what is promising for children’s rights: one of the strengths of the EU AI Act is its explicit attention to children as a vulnerable category.

This level of acknowledgement of child users was not present in initial drafts of the Act, which suggests that policymakers responded sensitively to child rights advocacy groups who asked for explicit clarification within the Act that children are a specific group to be protected from AI systems that exploit age-related vulnerability. Recital 28a clarifies explicitly that “children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being”.

This wording establishes a clear legal basis for assessing the impact of AI systems on children: the most well-established global legal standard for protecting children’s rights and wellbeing (the UNCRC), together with its tailored protections for the digital environment (General Comment No. 25). These explicit references to international child rights law thus set out potential protections for children within the scope of AI governance and make clear that policymakers and AI developers are accountable to their youngest users.

Another strength of the Act for protecting children’s rights is its foresight in anticipating potential pitfalls in one of the most impactful settings in children’s lives: their education.

Annex III, referenced in Article 6, classifies a wide range of uses of AI in education and vocational training as high-risk, a precautionary measure that addresses many of the most potentially problematic or extreme uses of AI in education. There is explicit reference to AI systems intended to be used to determine access or admission, or to assign students to education or training institutions. In other words, uses of educational AI that could heavily affect young people’s life chances or outcomes are now classified as high-risk - a promisingly proactive approach to safeguarding the next generation in the very environments that will shape their futures.

The Act also classifies as ‘unacceptable’ risk the cognitive behavioural manipulation of people or specific vulnerable groups, and lists age as an axis of vulnerability. This gives all those protecting children’s interests scope to advocate against, for example, ‘AI-driven toys that can encourage dangerous behaviour in children’.

Recognising that children could be susceptible to manipulation or exploitation is particularly significant because such psychological harms often fall into legally ambiguous territory. For example, the UN Convention on the Rights of the Child protects children’s rights to be safe from physical and emotional abuse and provides clear protection against tangible risks such as neglect and violence - yet it remains unclear how persuasive AI, such as anthropomorphic chatbots that induce emotional attachments, relates to child protection frameworks. The Act’s explicit acknowledgement that AI-mediated manipulation poses special risks to children can help raise complex legal and ethical questions that demand further scrutiny and regulatory clarity.

The strengths of the Act for children:

  • Explicit attention to children’s rights in both educational and psychological terms.
  • Sensitivity to their mental, emotional, and physical well-being, which will help lay the groundwork for further legal measures to shape what safe childhoods look like in an increasingly AI-mediated world.

Negatives:

However, certain limitations emerge within the Act in relation to children’s needs and challenges. One gap concerns the full spectrum of harms caused by deepfake technology: the creation of hyper-realistic but fake synthetic media that are increasingly difficult to distinguish from real content. Although child sexual abuse material is acknowledged as a criminal offence under the Act, there does not appear to be sufficient attention to children’s unique vulnerabilities to deepfake technology, and the Act’s compliance standards may prove inadequate as a result.

For context, the surge of free and easily accessible generative AI tools in recent years has led to a rise in minors creating and sharing non-consensual, manipulated imagery of each other as a tool for bullying and shaming. Adolescent girls have been the most prominent targets of this trend. “To be in a situation where you see young girls traumatised at a vulnerable stage of their lives is hard to witness,” said the mayor of a New Jersey town in November 2023 (Blanco, 2023). Her remark referred to a fresh form of psychological warfare sweeping the local school community through deepfake technology.

The girls’ male classmates had used generative AI to create and circulate nude images of them. The boys shared the photos in group chats, leading the girls not only to experience shame and fear but also to contemplate deleting their social media accounts (Blanco, 2023). The school superintendent noted that “all” school districts were “grappling with the challenges and impact of artificial intelligence” that had suddenly become “available to students at any time and anywhere” (McNicholas, 2023). Indeed, reports of students using generative AI to create sexually explicit or non-consensual imagery to bully their peers have emerged all over the world, from Australia (Long, 2023) to Spain (Guy, 2023), to the point that the New York Times has declared “deepfake nudes in schools” an “epidemic” for teenage girls (Singer, 2024).

Child safeguarding has been a perennial concern for educators and scholars. Yet rapid advancements in generative AI pose a new “wicked problem”, defined as “a class of social system problems which are ill-formulated, where the information is confusing, where there are many clients and decision-makers with conflicting values, and where the ramifications in the whole system are thoroughly confusing” (Buchanan, 1992). The EU AI Act’s approach to regulating deepfakes seems to focus only on the principle of disclosure as a mode of protection.

Recital 134 states that creators “should clearly and distinguishably disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin” and stresses “compliance with this transparency obligation”. Similarly, Recital 133 focuses on encouraging precise tools for tagging and identifying AI-generated content, such as watermarks and metadata tags. This focus on transparency and disclosure - making it easier for all users to distinguish between real and artificial content - bodes well for countering many misuses of deepfakes, such as the spread of fake news, disinformation, election manipulation, and content created to ridicule political opponents.
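To make the labelling idea concrete, the sketch below (in Python, using the Pillow imaging library) shows one very simple way a disclosure tag could be embedded in an image file’s metadata. It is a minimal illustration only: the field names are hypothetical, not drawn from the Act, and real provenance mechanisms (invisible watermarking, or content-provenance standards such as C2PA) are considerably more sophisticated than a plain metadata entry.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated image (a blank canvas, for illustration only).
generated = Image.new("RGB", (64, 64), color="gray")

# Attach a machine-readable disclosure label as PNG text metadata.
# The keys "ai_generated" and "generator" are hypothetical field names,
# not terms defined by the EU AI Act or any provenance standard.
label = PngInfo()
label.add_text("ai_generated", "true")
label.add_text("generator", "example-model-v1")

generated.save("labelled_output.png", pnginfo=label)

# Any platform or viewer could then read the disclosure flag back:
reloaded = Image.open("labelled_output.png")
print(reloaded.text.get("ai_generated"))  # -> "true"
```

The very simplicity of such a tag also hints at the limitation discussed next: the label travels only with the file’s metadata, and it does nothing to address the harm done once the content has been seen or shared.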

The transparency and disclosure rule would be a robust preventative tool in these examples, yet it falls short in tackling the complex harms of deepfake content. 

As Mateusz Łabuz notes in his work on regulating deepfakes, applying disclosure rules does not mitigate harms like the depression, anxiety or PTSD that can haunt victims of deepfakes. Many of the young girls victimised have reported these severe psychological effects, in addition to becoming reluctant or completely unwilling to continue attending school (Singer, 2024). There is also reputational harm to consider, alongside the damage to victims’ educational prospects.

Here it is relevant to note the ‘false promise of transparent deep fakes’, as researchers at the Centre for Digital Governance in Berlin aptly put it (Centre for Digital Governance, 2022). The ‘false promise’ of transparent disclosure rules is the mistaken notion that simply revealing that a piece of content is artificial can effectively mitigate the adverse impacts of creating and circulating it.

Disclosure alone does not tackle the deep-seated psychological, reputational and educational harms that children face when deepfakes are used as a form of gender-based violence or peer-to-peer bullying. In such cases, demand for the content does not necessarily depend on its authenticity. The inadequacy of the disclosure and transparency principle thus forms one of the gaps in the Act in terms of child protection.  

Key gaps of the EU AI Act for children:

  • The complexity of deepfake technologies is underestimated in the current regulation. 
  • Disclosure and transparency are inadequate measures to address the potential psychological and emotional impacts of deepfakes on children’s well-being.

It is exciting to see the EU AI Act’s commitment to acknowledging children and young people as a category of vulnerable users and making explicit reference to some of our strongest human rights law protections. These are significant steps forward.

At the same time, there remain critical areas for refinement, notably in addressing the unique risks posed by deepfake technology in a psychosocial sense - not just the question of whether what we are seeing is real, but also how it affects our youngest citizens’ emotional well-being, social interactions, and educational experiences when used as a tool for bullying, humiliation or exploitation. 

The Act has undoubtedly set a new global benchmark for mitigating the risks of AI. Yet, we still need more dialogue and debate about how high it sets the bar for safeguarding children. The most transformative policies are those that protect their most vulnerable stakeholders. 


References
  • Blanco, A. (2023). Teen boys at New Jersey school accused of creating AI deepfake nudes of female classmates. The Independent. Retrieved from https://www.independent.co.uk/news/deepfake-nude-westfield-high-school-nj-b2440793.html
  • Buchanan, R. (1992). Wicked problems in design thinking. Design Issues, 8(2), 5-21.
  • Centre for Digital Governance (2022). The false promise of transparent deep fakes: How transparency obligations in the draft AI Act fail to deal with the threat of disinformation and image-based sexual abuse. Hertie School. Retrieved from https://www.hertie-school.org/en/digital-governance/research/blog/detail/content/the-false-promise-of-transparent-deep-fakes-how-transparency-obligations-in-the-draft-ai-act-fail-to-deal-with-the-threat-of-disinformation-and-image-based-sexual-abuse (Accessed: 18 April 2024).
  • Guy, J. (2023). Outcry in Spain as artificial intelligence used to create fake naked images of underage girls. CNN News. Retrieved from https://edition.cnn.com/2023/09/20/europe/spain-deepfake-images-investigation-scli-intl/index.html
  • Long, C. (2023). First reports of children using AI to bully their peers using sexually explicit generated images, eSafety commissioner says. ABC News. Retrieved from https://www.abc.net.au/news/2023-08-16/esafety-commisioner-warns-ai-safety-must-improve/102733628
  • McNicholas, T. (2023). New Jersey high school students accused of making AI-generated pornographic images of classmates. CBS News. Retrieved from https://www.cbsnews.com/newyork/news/westfield-high-school-ai-pornographic-images-students/
  • Singer, N. (2024). Teen Girls Confront an Epidemic of Deepfake Nudes in Schools. The New York Times. Retrieved from https://www.nytimes.com/2024/04/08/technology/deepfake-ai-nudes-westfield-high-school.html
  • Wang, K., Gou, C., Duan, Y., Lin, Y., Zheng, X., & Wang, F. Y. (2017). Generative adversarial networks: introduction and outlook. IEEE/CAA Journal of Automatica Sinica, 4(4), 588-598.

CFI Blog Series curated by Dr Aisha Sobey. 
