Mental health not only helps to bring the ethical issues of social robotics into focus; it is also one of the few fields in which this technology has actually been deployed, with autism spectrum disorder (ASD) and Alzheimer’s disease considered the most widespread and most promising use cases [2], both following the same logic of resorting to artificial sociability to combat people’s isolation.
Regarding ASD, it all started in Great Britain at the end of the 1990s, when Kerstin Dautenhahn, a researcher in social robotics, hypothesised that the robot could act as a “social mediator” capable of increasing children’s capacities for social engagement [3]. Following this founding project, research on the use of social robots to support children with ASD has grown internationally [4], though without yielding generalisable results [5].
Currently, Nao is the subject of the largest number of experiments in France. It was designed in 2006 and marketed from 2008 by a French start-up, Aldebaran Robotics, bought in 2015 by the Japanese company Softbank Robotics. Nao has “independent living” programs that allow it to interact with its environment. It can also be programmed using Chorégraphe, a software suite based on the Python programming language.
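As an illustration of this programmability, below is a minimal sketch using the NAOqi Python SDK on which Chorégraphe behaviours rely. The robot’s address is a placeholder; the services called (ALTextToSpeech, ALMotion) are standard NAOqi modules, and the SDK targets Python 2.7.

```python
# Minimal sketch: make Nao stand up, speak, then rest.
# Assumes the NAOqi Python SDK (Python 2.7) and a reachable robot.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder address of the robot
PORT = 9559                # default NAOqi port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
motion = ALProxy("ALMotion", ROBOT_IP, PORT)

motion.wakeUp()                     # enable motors and stand up
tts.say("Hello, my name is Nao.")   # synthesised speech
motion.rest()                       # return to a safe resting posture
```

Chorégraphe essentially wraps sequences of such calls in graphical behaviour boxes, which is what makes the robot scriptable by non-specialists.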
A member of the innovation hub at Softbank describes Bruno Maisonnier’s stance at the beginning of the Nao creation process as follows: “He said: I don’t know what it will be used for, but I’m making a platform and people will come up with uses for it. Autism was the textbook example of that”. Researchers from the psychology department of the University of Notre Dame in Indiana invented this use of the robot and led the designers to develop a dedicated software suite in 2013, Ask Nao (Autism Solution for Kids), then its tablet interface in 2015, Ask Nao Tablet, to facilitate the guidance of the robot, which was previously done by voice or from a computer.
This article draws on a study concerning the design of the Ask Nao app and the use of the Nao robot in seven entities providing care for children with ASD in France (day hospitals and medico-social care centers). Healthcare professionals work with Nao in different ways, but they all share the hypothesis that this robot can attract and motivate children with ASD, given these children’s penchant for new technologies [6].
Nao: a disappointing companion
The research teams at Softbank harbour the ambition of producing robots capable of recognising their interlocutors and adapting to each individual. Concerning Pepper, another humanoid robot developed by the company, a researcher explains: “I really like to focus on the question: what will improve the robot’s long-term interaction with human beings? […] One of the main technical sticking points is for the robot to be able to recognise you and for the discussion you have with the robot to generate something. The next day, when it sees you again, it remembers what has been said. A bond is created.”
When they meet Nao, healthcare professionals tend to observe a gap between the robot’s purported capabilities, notably those cited in Softbank’s communications, and the reality. Many denounce the “big lie” or the “sham” surrounding Nao’s capacities. One child psychiatrist found the robot a very disappointing companion: “The truth soon became clear. Compared to the Star Wars-style fantasy, the reality brought us down to earth with a bang. In fact, it has very little self-sufficiency indeed and is not at all as mature as presented.”
A specialist teacher recounts his dashed hope of building a personal and lasting relationship with Nao through memorisation of past interactions: “If I come into class as I have every day for the last 8 years, Nao will never say to me ‘How’s it hanging, T.? You look worried.’ No, it just sits there like an idiot. It clearly has no capacity to improvise. Nao doesn’t create anything and doesn’t memorise any details about the person.”
Professionals in care centers regret the robot’s low autonomy, its interactional stupidity. All of them present Nao as technically unfinished, which leads them to consider that the robot does not fall within the domain of artificial intelligence and to describe it instead as a “puppet”, a “remote-controlled car” or a “big tape recorder”.
It turns out that the “independent living” programs can hardly be used with children with ASD. Nao is too demanding: it only answers if its interlocutor is positioned directly in front of it; it only understands what is pronounced correctly; and so on. The robot must therefore be programmed before it can be used in care centers. In a medico-educational institute (IME), a computer scientist who has become “head of innovative projects” estimates that two hours of programming are needed to prepare a fifteen-minute session with Nao.
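To give a concrete sense of this scripting work, here is a hedged sketch of such a pre-programmed session, in the remote-controlled spirit described above: every robot action is written out in advance and triggered by the professional. The robot address and script content are placeholders; raw_input reflects the SDK’s Python 2.7 target.

```python
# Sketch of a fully scripted session: nothing is autonomous,
# the operator triggers each pre-written step by hand.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder address
tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)

# Every step of the fifteen-minute session is prepared in advance.
script = [
    "Hello! Shall we play together?",
    "Can you raise your arms like me?",
    "Well done! Now let's try clapping.",
    "Goodbye, see you next time.",
]

for line in script:
    raw_input("Press Enter for the next step... ")  # operator in control
    tts.say(line)
```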
Is public debate wide of the mark?
In light of these observations, several actors consider that some of the ethical questions raised in public debate are not really relevant, given the still very limited interactional skills of social robots.
The reports published by different institutions over the past five years offer a homogeneous corpus representative of the lines taken on “AI ethics”. All of these reports make recommendations based on a future-oriented approach, without referring to empirical studies on the current state of technologies.
Concerning social robots, these reports aim to alert the public, professional communities and the public authorities to two types of risks: first, the risk of humans forming an inappropriate attachment to the machine, which could jeopardise social bonds; and second, the risk of excessive autonomy for the machine, together with a loss of human control.
These arguments may seem out of step with the current state of technology. For example, a researcher at Softbank highlights the gap between the attachment problem and the limits of facial recognition: “It’s true that we try not to do anything stupid. We try not to be too disconnected from the social aspect. But we have so many technical limits to deal with first. We had really interesting debates with loads of people for hours, but in any case, the robot doesn’t recognise humans.”
Healthcare professionals share this view and tend to shift some ethical issues, such as the question of empathising with artificial beings, onto what they consider to be AI, a field which, to their mind, excludes Nao. A child psychiatrist explains: “For the moment, as we were saying, it’s our puppet. […] If we are talking about artificial intelligence, with a robot that intervenes, responds to a child’s feelings, reacts, etc., there needs to be robust consideration of the ethical issues before such devices are introduced into care for young people.”
Getting back to ordinary ethics
Do the current technical limits make public debates about the ethics of social robotics irrelevant? Does this mean that the question of ethics does not have to be asked just yet, and that these matters will be addressed in good time, when the technology is more advanced?
The current limitations of social robotics invite us to focus instead on ordinary ethics with regard to the concrete deployment of this technology. As Danah Boyd and Madeleine Elish point out, “the real questions of AI ethics sit in the mundane rather than the spectacular” [7].
What about the ethical concerns that emerge from experiments rooted in the “here and now” in care establishments? To what extent does sociological inquiry shift the terms of the debate and identify the “real issues”?
We would like to focus on how the “ethical” issues identified in public reports are framed by those who design and use social robots. That means examining the work they do to moralise these robots, and how they attempt to establish the right place for interactional robots.
Morality refers here to all the rules governing the ways we live together in society, while ethics is seen as a possible formalisation of morality.
We will focus on two issues that are central to the debates: the relationship that may be formed between the human and the robot, and the most desirable division of tasks between them.
Confusion between the human and the non-human
The production of anthropomorphised robots, endowed with the characteristics we spontaneously attribute to humans, raises first and foremost the issue of confusion between human and robot. In public reports, this risk is considered to be all the greater for vulnerable individuals.
However, most healthcare professionals believe that no one can have any illusions about Nao’s lack of autonomy. In a hospital, the health executive and the child psychiatrist agree on this point, highlighting the intelligibility of the robot’s command system (“the children understood very well that it was coming from the computer”) and the lack of fluidity in its movements (“it’s not fluid enough or humanoid enough for any confusion to be possible”).
In another establishment, a speech therapist considers confusion to be an initial stage that “does not last”, particularly as his project involves having the children program Nao themselves, allowing them to pull the strings.
Nevertheless, professionals do not unanimously exclude the risk of children considering the robot as a human. One child psychiatrist expresses more doubts, and a certain unease: “Because what bothers me in this study project is the idea that… Not all autistic children, but some of them are so in their own world that you feel that, between humans and robots, they don’t really grasp the fact that the robot is… not independent, that it isn’t speaking on its own. You feel that it’s sometimes difficult for them to see that. And ethically, I find that somewhat questionable.”
Whatever their feelings, all professionals stress the importance of not cultivating confusion, by establishing protocols for presenting the robot (taking it in and out of its box in front of the children) or language norms for talking about it (saying “it’s been put away in its box” rather than “it’s sleeping”). They express a desire to avoid suggesting that the robot has emotional, or even sensory, experiences in the same way a human does.
Attachment
The desire to avoid confusion between human and non-human raises a second question: that of attachment. Public reports present this issue as being particularly critical for vulnerable people, who are considered more likely to succumb to anthropomorphic traps.
One psychomotor therapist makes an explicit link between confusion and attachment by telling the story of a child who, after giving Nao a lot of “cuddles”, ended up “losing interest” in the robot when he understood that it was “controlled by the tablet”. She concludes: “He didn’t want to guide Nao, he wanted Nao to reply: ‘hello’. He wanted a partner.” Dispelling the confusion could therefore resolve the problem of attachment.
However, interestingly, other actors separate these two issues. They posit the possibility of a child becoming attached to the robot without confusing it with a human. A child psychiatrist explains: “Firstly, they draw a clear distinction between the robot and the adult who is there, who is their point of reference. I mean, there is no confusion. I don’t think there is any question of that. It’s a medium for projecting things.”
In the end, from the professionals’ viewpoint, the issue is not so much to prevent attachment, which they consider they have little control over, but rather to avoid fuelling the idea of possible reciprocity, particularly by suggesting that the robot has emotions similar to humans’. Regarding Pepper, one psychologist explains: “It is really sold as having a heart, as being able to cry with you if you are sad. And we don’t like that at all. It is not at all ethical.”
Substitution
The emergence of artificial sociability also raises the question of the possible replacement of humans by machines. In public reports, this possibility is perceived as being all the more problematic in the care field.
This empirical study shows that the concept of substitution actually encompasses three very different realities. The first conception is what we might call functional substitution: the idea that some functions (such as childcare) could be performed by robots, with the risk that some professions (such as that of childcare worker) could eventually disappear.
This fear is anticipated by the designers. Regarding Ask Nao, the project manager explains: “We don’t want to talk about therapy done entirely by computer. People aren’t interested in that. It doesn’t appeal to them. It scares everyone and it neglects the role of the therapist”. The fear of substitution is also dispelled by the professionals, who never present the robot as a self-sufficient system that could operate without their intervention. A psychologist points out: “There’s nothing therapeutic about the robot itself. It is the way it is used that will be therapeutic.”
A second conception of substitution is what we might call economic substitution: the idea that investing in a robot (allocating the necessary resources to buy one and use it) is a kind of trade-off that could, particularly in a context of budget cuts, result in a lack of resources (e.g. personnel) to do the “real work”, i.e. therapy and childcare.
A third conception, more radical but also less widespread (which is logical, given the disillusion actors experience), is what we might call ontological substitution: the possibility of robots replacing humans not in performing a given function but as a species, “the idea that the algorithm is going to become so intelligent that it will surpass humans”, as a project manager from the Red Cross puts it.
Decision making
Lastly, the boom in AI and robotisation raises the question of decision making and professional responsibility: the ability of humans to “keep control” (CNIL, 2017), to take the final decision and control the machine.
In practice, actors often make sure the robot’s decision-making capacity does not impinge on core professional competencies, in this case the ability to handle the therapeutic or educational dimension of a situation and to judge the appropriate action to take in caring for a child.
Healthcare professionals express a desire to keep control over the evaluation of the child’s competencies, a task they consider cannot be delegated to the robot, whether in absolute terms or given the current state of technology.
Concerning a program to work on imitation with Nao, they give two reasons why it is up to the carer to judge whether the child demonstrates the expected behaviour: because the robot cannot do so correctly, but also because “the judgement is for the carer to make”; the carer alone is capable of judging what can be expected of a child, according to the context and his or her capacities.
The researchers at Softbank have clearly identified the importance of leaving control of evaluation in professionals’ hands. This observation also guided them in the design of the Ask Nao tablet interface: “The idea was to give the therapist control. […] We gave them back a little more control.”
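To make this division of labour concrete, here is a hedged sketch of what carer-controlled evaluation could look like during an imitation exercise. The exercise list, prompts and log structure are hypothetical, not the actual Ask Nao implementation; only the NAOqi calls are real API. The robot merely demonstrates, and the professional alone records the judgement.

```python
# Sketch of carer-controlled scoring: the robot prompts the child,
# but every evaluation is entered by the professional, never computed
# by the robot itself.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # placeholder address
tts = ALProxy("ALTextToSpeech", ROBOT_IP, 9559)

exercises = ["raise your arms", "clap your hands", "touch your head"]
session_log = []  # the carer's judgements

for gesture in exercises:
    tts.say("Can you " + gesture + " like me?")
    # The professional observes the child and records the outcome.
    verdict = raw_input("Did the child imitate? (y/n) ")
    session_log.append((gesture, verdict.strip().lower() == "y"))

print(session_log)  # kept for the carer's records, not the robot's
```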
Thus, with regard to Nao, the question is not so much the delegation of responsibility from the practitioner to the robot, but the increased responsibility of practitioners in handling the robot as a therapeutic tool, through evaluation of the risks involved.
Conclusion: a social actor like no other
By leaving fantasised functions aside and concentrating on concrete usage situations, this study therefore makes it possible to identify the ethical issues at the heart of the actual deployment of social robotics.
It sheds light on the reticence to establish robots as social actors with the moral competence to punish or reward our behaviour. Indeed, actors tend to consider that the robot’s appeal as an interlocutor lies precisely in its social incompetence, its inability to judge our actions. Many of them point out that Nao “does not judge”.
Thus, professionals are constantly working, in theory and in practice, to carve out a unique place for the robot as a social actor like no other. And we can hypothesise that this observation generalises to other “intelligent artificial agents”, such as animated virtual agents and conversational agents [8].
This research thus informs Orange’s positioning, enabling it to identify certain points requiring vigilance and to orient its technological choices so as to keep pace with the emerging uses of intelligent artificial agents.
Public reports on the ethical issues of AI and robotics (non-exhaustive list)
- November 2014: Commission on the Ethics of Research in Digital Sciences and Technologies (CERNA) opinion – Ethics of research in robotics
- December 2016: IEEE (Institute of Electrical and Electronics Engineers) report – Ethically aligned design. A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems
- January 2017: Delvaux report to the European Commission – Report with recommendations to the Commission on Civil Law Rules on Robotics
- March 2017: Report of the OPECST (Parliamentary Office for the Evaluation of Scientific and Technological Choices) – For controlled, useful and demystified artificial intelligence
- December 2017: Report of the CNIL (French National Data Protection Commission) – How can we ensure humans keep control? The ethical issues of algorithms and AI
- January 2018: White Paper by the CNOM (National Council of the French Medical Association) – Doctors and patients in the world of data, algorithms and artificial intelligence. Analyses and recommendations
- March 2018: Villani report to the French government – Giving meaning to artificial intelligence: for a national and European strategy
- November 2018: Report of the CCNE (National Ethics Advisory Committee) in conjunction with CERNA – Digital and Health: what ethical issues and what regulations?
References:
[1] Turkle S., 2011, Alone together: Why we expect more from technology and less from each other, Basic Books.
[2] Costescu, C., Vanderborght, B., David, D., 2014, “The effects of robot-enhanced psychotherapy: A meta-analysis”, Review of General Psychology, vol. 18, n° 2, p. 127-136.
[3] Dautenhahn, K., 2007, “Encouraging social interaction skills in children with autism playing with robots. A case study evaluation of triadic interactions involving children with autism, other people (peers and adults) and a robotic toy”, Enfance, vol. 59, n° 1, p. 72-81.
[4] Cabibihan, J., Javed, H., Ang, M., Aljunied, S., 2013, “Why robots? A survey on the roles and benefits of social robots for the therapy of children with autism”, International Journal of Social Robotics, vol. 5, n° 4, p. 593-618.
[5] Baddoura, R., 2017, “Le robot social médiateur : un outil thérapeutique prometteur encore à explorer”, Le journal des psychologues, n° 350, p. 33-37.
[6] Grossard, C., Grynszpan, O., 2015, “Entraînement des compétences assistées par les technologies numériques dans l’autisme : une revue”, Enfance, vol. 1, n° 1, p. 67-85.
[7] Boyd, D., Elish, M., 2018, “Don’t believe every AI you see”, blog post, New America, https://www.newamerica.org/public-interest-technology/blog/dont-believe-every-ai-you-see/
[8] Borelle C., 2018, “Sortir du débat ontologique. Éléments pour une sociologie pragmatique des interactions entre humains et agents artificiels intelligents”, Réseaux, vol. 36, n° 212, p. 206-231.
To learn more:
- Borelle Céline, “La moralisation des robots sociaux par leurs utilisateurs”, Sociologie du travail, forthcoming in 2020.
- Borelle Céline, À quoi les robots sociaux peuvent-ils servir ? Éléments sur la conception et les usages de robots sociaux dans le domaine de la santé mentale, internal report, Orange Labs, 2019.
- Borelle Céline, Velkovska Julia, Zouinar Moustafa, Personnalité, émotions et anthropomorphisme dans la conception et les usages des agents “intelligents”, internal report, Orange Labs, 2018.
- https://www.fondationorange.com/Autisme-et-numerique-quel-bilan-5-ans-apres?