● Instead of imitating humans, effective social robots should be designed to facilitate collective reflexivity that not only pools ideas but also sheds light on how social groups think, act and interact.
● Social robots co-designed with future users would be better able to map their needs and could also assist in creating communities and documenting group experiences.
In the context of an ageing society and the growing prevalence of chronic health conditions, informal caregivers play an increasingly important role in health systems grappling with scarce resources. Social robots – robots designed to offer companionship and assistance to humans – could potentially provide them with valuable support. This is the hypothesis explored in a study conducted by researchers at the University of Cambridge, which set out to determine whether interaction with a robot like Pepper could help these carers cope with emotional distress. The researchers found that after ten sessions with the social robot, the caregivers reported improved mood, reduced feelings of loneliness, and greater acceptance of their role.

These results also shine a spotlight on a libertarian understanding of care: robots are presented as tools that can help individuals better manage their stress, but “without any questioning of the structural conditions of their isolation,” argues Orange sociologist Céline Borelle. The researcher is highly critical of a techno-solutionist approach that masks the political dimensions of the problem: “The fundamental question is: why do these carers lack moral and material support to the point where they are so isolated? The reasons for this situation are swept under the carpet.”
Instead of seeking to imitate humans, social robots for caregivers could function as a catalyst for collective reflexivity.
The question of anthropomorphism
The study suggests that more human-like robots are more effective – a conclusion that Céline Borelle questions. “The questionnaires to which participants are exposed are not appropriate, given that they transpose a vocabulary that is habitually used for human interaction,” she points out, adding that “effectiveness is not necessarily a function of likeness, which favours the illusion of interacting with a being as competent as a human.” On this point she cites Agnès Giard’s research on artificial romantic relationships in Japan, and how certain companies such as Gatebox, a producer of holographic companions, deliberately design incompetent partners that under-exploit the available technology. The idea behind this “is to play on ontological distance to create an alternative space for interaction, where people can experiment and try something else,” notes Céline Borelle.

As for what that something else could be in this case: instead of seeking to imitate humans, social robots for caregivers could function as a catalyst for collective reflexivity. For example, rather than mimicking human therapists, the robots could ask open-ended questions about the caregiving experience, then compile anonymised responses that could be shared with other carers or political decision makers. In so doing, they would act as mediators between individual and collective experiences, with the goal of transforming atomised suffering into actionable demands.
Rethinking the design of social robots
For Céline Borelle, the caregivers’ progressive self-disclosure to the robots represents an opportunity, albeit one that may be limited in scope: “These tools allow for a form of reflexivity, but within a very individualistic framework, in a self-to-self relationship, as with a personal diary.” She further wonders: “Can we not devise social robots that can be used to facilitate forms of collective reflexivity and reintroduce alterity into this inner monologue?” Naturally, this would require closer study of how humans interact with these cultural artefacts, which, as the sociologist points out, have not been sufficiently studied in natural settings outside the laboratory. At the same time, caregivers and other users could take part in developing these robots, both so that they are acknowledged as co-creators of future solutions and so that the impact of social robots on existing forms of solidarity – their potential to reinforce or weaken human ties – can be assessed. In short, the use of social robots should be refocused on collective goals: better mapping caregivers’ needs in order to develop appropriate solutions, bringing caregivers together in a community via secure platforms, and empowering them to document their experiences.