● Researchers are now calling for greater integration of cognitive science into the development of AI to ensure that future systems are more robust and easier to understand.
● Their initiative highlights existing ethical challenges, the need to steer clear of anthropomorphic thinking, and the case for regulation that nonetheless preserves AI’s potential to expedite tedious tasks.
Can AI be progressively taught to generalize in a more human-like way? Whereas machines struggle to solve problems not covered by their training data, humans can analyse unknown situations and make decisions in response to unforeseen conditions. Humans can generalize on the basis of relatively few examples: a human shown a single black cat will recognize that a cat with orange fur is also a cat, something that AI cannot yet do. Now an international research team has published an article in Nature Machine Intelligence that calls for greater alignment between these two worlds, with a view to creating more robust AIs that better reflect human ways of thinking.
In cognitive sciences, generalization involves conceptual thinking and abstraction, whereas in AI it implies the production of results from out-of-domain data, for example with neuro-symbolic systems that combine logic and neural networks. “AI is inspired by what we observe in nature and in humans. Initially, we focused on reasoning, then on perception and processing input from sensors to produce predictions, and now we’ve advanced to content generation. But we are still trying to reproduce what humans do,” explains Emilie Sirvent-Hien, Responsible AI Programme Manager at Orange.
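The gap between memorizing training data and inducing a rule that holds out of domain can be made concrete with a deliberately simple sketch. Everything below is invented for illustration; it is not from the Nature Machine Intelligence article and does not depict any real system. A pure lookup table only answers for inputs it has already seen, while a learner that extracts the underlying rule can extrapolate far beyond its training range:

```python
# Toy illustration of in-domain vs. out-of-domain generalization.
# A "memorizer" stores its training pairs verbatim; a "rule learner"
# induces the underlying linear rule and can therefore extrapolate.

def train_memorizer(examples):
    """Store (input, output) pairs verbatim, like a lookup table."""
    table = dict(examples)
    return lambda x: table.get(x)  # returns None for unseen inputs

def train_rule_learner(examples):
    """Induce a proportional rule y = a*x (assumes one exists in the data)."""
    a = sum(y for _, y in examples) / sum(x for x, _ in examples)
    return lambda x: a * x

examples = [(1, 2), (2, 4), (3, 6), (4, 8)]  # underlying rule: y = 2x
memorizer = train_memorizer(examples)
rule = train_rule_learner(examples)

print(memorizer(3), rule(3))      # in-domain: both answer 6
print(memorizer(100), rule(100))  # out-of-domain: None vs. 200.0
```

Neuro-symbolic approaches of the kind the article mentions aim, roughly speaking, at the right-hand behaviour: coupling learned statistical patterns with explicit logic so that answers remain valid outside the training distribution.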
The fallacy of anthropomorphism
The authors of the article argue that human generalization capacities are based on cognitive mechanisms such as categorization by prototypes, analogy, and the construction of mental models. However, “current neural networks also reason in ways that are not fully understood, which can lead us to attribute intentionality and an ability to generalize to these systems which they do not possess,” points out the researcher. And the risk that we will confuse technical performance with real understanding in artificial intelligence is growing. “We can fall into the trap of anthropomorphism and imagine that AIs reason in the same way we do, whereas in fact they just calculate probabilities.” At the same time, the prospect of AIs with a mastery of generalization that will allow them to think more like humans is already raising ethical questions, notably with regard to responsibility for the decisions they may take. For Emilie Sirvent-Hien, “Responsibility must always remain human. Designers and users are the ones who set the parameters, choose the data and define limits for these systems.” Thus, the main challenge will be keeping AIs under control, which means that their actions must be traceable and their decisions comprehensible, even if some of their details cannot be fully explained. As the researcher puts it, “The more powerful AI becomes, the more we will need to regulate its use.”
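Categorization by prototypes, the first cognitive mechanism cited above, can be sketched in a few lines: each category is summarized by the mean of its observed examples, and a new item is assigned to the category whose prototype is nearest. The feature values below (size, ear shape, fur hue) are invented purely for illustration; they echo the article's black-cat example but come from no real dataset:

```python
# Toy sketch of prototype-based categorization: summarize each category by
# the mean of its examples, then classify a new item by nearest prototype.

def prototype(examples):
    """Mean feature vector of a category's examples."""
    n = len(examples)
    return tuple(sum(e[i] for e in examples) / n for i in range(len(examples[0])))

def classify(item, prototypes):
    """Assign item to the category whose prototype is nearest (squared distance)."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(item, p))
    return min(prototypes, key=lambda label: dist(prototypes[label]))

# Hypothetical features: (size, ear pointiness, fur hue).
prototypes = {
    "cat": prototype([(0.3, 0.9, 0.1)]),                  # a single black cat
    "dog": prototype([(0.8, 0.2, 0.5), (0.9, 0.3, 0.4)]),
}

# An orange cat differs in fur hue but matches the cat prototype on shape.
print(classify((0.3, 0.9, 0.8), prototypes))  # prints "cat"
```

The point of the sketch is that one example can suffice when the features that matter (shape, not colour) dominate the distance, which is the kind of few-shot, abstraction-driven generalization the authors say current neural networks lack.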
A framework to define generalization in AI
But how can AI decisions that are either opaque or highly abstract be explained? The researchers at Bielefeld and the other members of their international team are proposing a shared framework for the evaluation of generalization that combines insights from cognitive approaches and AI techniques. “The biggest challenge is that ‘Generalization’ means completely different things for AI and humans,” explains Benjamin Paaßen, junior professor for Knowledge Representation and Machine Learning at Bielefeld. For the research team, this shared framework needs to encompass three dimensions: “What do we mean by generalization? How is it achieved? And how can it be evaluated?” And the answers to these questions will need to bridge the gap between cognitive science and AI research.
In a context where professions are being transformed by GenAI, the improvement of models will likely raise new questions in companies. “AI can automate tedious tasks and free up time for higher added-value activities. Just as Internet search engines have made knowledge more accessible, the advent of AI will mean that there are certain skills that we won’t call on as often as we do now, but we will also develop new ones. What is important is to be prepared to take the initiative to seize new opportunities while avoiding excessive measures,” concludes Emilie Sirvent-Hien.
