• Oxford University researchers recommend a multidisciplinary approach to designing AI systems suitable for young users, one that involves not only developers and designers but also parents and teachers.
• They also deplore the fact that there has been little or no research into the impact of algorithms on children's long-term development.
Tailor-made experiences, automated learning, video games: many digital applications for children and teenagers are based on artificial intelligence technologies, but these are not always suited to such young audiences. Worse still, it has emerged that content recommendation algorithms can even undermine children’s mental health. A recent report by RTÉ highlighted how TikTok accounts targeting 13-year-olds, created as part of the broadcaster’s investigation, could very easily be triggered into displaying a persistent and progressively more intense stream of content relating to self-harm and suicidal thoughts. The problem underlying such unwanted outcomes is that while there is now a consensus on the main principles of responsible AI, these guidelines do not necessarily take children into account. “It is important to find a way of translating these principles into real-life practices that are adapted to children’s uses and needs, and in line with their stage of development,” explains Oxford University doctoral researcher Ge Wang. “It’s a difficult task, because the AI community measures the success of its systems using quantifiable data, and it is very difficult to integrate data relating to human behaviour and development into the design of these tools.”
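To make the escalation dynamic concrete, here is a deliberately minimal sketch, not drawn from the RTÉ report or from Wang’s paper, of how an engagement-maximising recommender can drift toward ever more extreme content once a viewer starts dwelling on it. All names and numbers are hypothetical.

```python
import random

# Deliberately simplified model, not taken from the RTÉ report or the paper:
# each piece of content has an "intensity" score in [0, 1], and a hypothetical
# engagement-maximising recommender nudges its profile of the user toward
# whatever holds their attention.

def simulate_feedback_loop(steps: int = 20, seed: int = 1) -> list[float]:
    rng = random.Random(seed)
    profile = 0.1  # the account starts out being shown mostly benign content
    served = []
    for _ in range(steps):
        # Recommend something close to the current profile, with a little spread.
        item = min(1.0, max(0.0, profile + rng.gauss(0.0, 0.1)))
        # Stand-in for a vulnerable viewer: more intense content holds
        # attention longer, so engagement rises with intensity.
        engagement = item
        # The recommender chases engagement, so the profile ratchets upward.
        profile = min(1.0, profile + 0.15 * engagement)
        served.append(item)
    return served

if __name__ == "__main__":
    print(" -> ".join(f"{x:.2f}" for x in simulate_feedback_loop()))
    # Intensities climb from roughly 0.1 toward 1.0 within about 20 steps.
```

The self-reinforcing step is the point: nothing in the loop represents the viewer’s wellbeing, only their engagement, which is exactly the kind of quantifiable success metric Wang describes.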
The need for AI tools adapted for juvenile audiences
In an article published in Nature Machine Intelligence, Wang and her fellow researchers identified a number of challenges. “In studying international AI ethics initiatives, we realised that the developmental side of childhood is not taken into consideration.” Publishers and developers of artificial intelligence tools should pay more attention to children’s ages, backgrounds, and stages of development, notably adolescence, a critical period in the acquisition of digital habits. “We need to be able to identify children’s best interests, conduct research in the field so that we can quantify and qualify them, and listen to the voices of stakeholders such as parents, schools, etc.” This is also linked to an academic challenge. “Given the very recent development of technologies like image-generating AI tools, existing research in this area is limited, and there is hardly any evidence on the impact of algorithms on teenagers and children.” For example, there is as yet no initiative to incorporate child protection principles into AI innovations based on large language models, which may expose children to inappropriate and biased content.
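The researchers note the absence of such initiatives rather than prescribing one. Purely as an illustration of what “incorporating child protection principles” could mean in practice, the sketch below gates a language model’s output through an age-aware safety check; the categories, age threshold, and keyword “classifier” are all hypothetical and not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Set

# Purely illustrative: the categories, age threshold and keyword "classifier"
# below are hypothetical stand-ins, not taken from the paper or any platform.

SENSITIVE_TOPICS: Set[str] = {"self-harm", "graphic violence", "gambling"}

@dataclass
class SafetyVerdict:
    allowed: bool
    reason: str

def child_safety_gate(
    response: str,
    user_age: int,
    classify: Callable[[str], Set[str]],  # a topic classifier, injected
) -> SafetyVerdict:
    """Decide whether a model response may be shown, given the user's age."""
    flagged = classify(response) & SENSITIVE_TOPICS
    if user_age < 18 and flagged:
        return SafetyVerdict(False, "withheld for minors: " + ", ".join(sorted(flagged)))
    return SafetyVerdict(True, "ok")

# Toy keyword matcher standing in for a real, vetted classifier.
def toy_classifier(text: str) -> Set[str]:
    return {t for t in SENSITIVE_TOPICS if t.replace("-", " ").split()[0] in text.lower()}

print(child_safety_gate("a detailed description of self-harm methods", 13, toy_classifier))
# SafetyVerdict(allowed=False, reason='withheld for minors: self-harm')
```

The toy matcher is far too crude for real use; the point is structural. The researchers’ complaint is precisely that no shared industry standard yet tells developers what belongs in such a list, how ages should be verified, or how a gate like this should behave.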
Re-evaluating the role of parents and teachers
Parents have a crucial role to play in helping children develop the ability to think critically about their activity online. “Children need to be taught to ask themselves why particular content is being recommended to them, and to be made aware of their online digital rights, as they do not always realise, for example, that their data is being monetised,” points out Ge Wang. The article also highlights a paradox: parents are often assumed to have greater digital expertise than their children, but this is not always the case. The researchers therefore suggest adopting a child-centred approach, rather than one focused on parents or teachers.
The need for a multidisciplinary approach
The researchers point out that there are significant “gaps in knowledge and methodologies adopted by different scientific approaches” when it comes to addressing the challenges posed by children’s use of AI. With this in mind, they advocate a multidisciplinary strategy for developing such systems, one that draws on stakeholders from a range of fields: human-computer interaction, design, algorithms, policy guidance, data protection law, and education. At the same time, developers and designers of AI tools must work together to draw up ethical AI principles that take the needs and interests of children into account. “Industry is not providing enough support for developers who are called on to interpret broad guidelines. We suggest that AI ethics practitioners and organisations improve their collaboration with developers and designers and adopt a bottom-up approach to create a common basis for industry standards and practices.”
Source:
Wang, G., Zhao, J., Van Kleek, M. et al. Challenges and opportunities in translating ethical AI principles into practice for children. Nat Mach Intell 6, 265–270 (2024). https://doi.org/10.1038/s42256-024-00805-x