Introductory Note
Did you know that our minds automatically project themselves onto the things around us? This is not a new idea: in a study conducted by Heider and Simmel in 1944, participants were shown an animated film in which a circle and two triangles of different sizes performed programmed movements inside and outside a rectangle, a scene they interpreted as a dramatic and troubled love story. According to theory of mind, it is as if human beings cannot refrain from projecting a part of themselves onto the objects that surround them. In this article, we use a large-scale survey to examine whether these anthropomorphic projections also apply to chatbots, and we consider how to interpret the phenomenon through the lens of psychoanalysis. The respondents, who included both men and women, tended to project the image of a woman onto their chatbot. Is this a gender stereotype or an archetype? The psychoanalytic approach known as depth psychology would argue that it is mostly an archetype, particularly that of the Mother. When a “machine” provides us with a service, it ‘cares’ for us and actualizes our archaic attachment instinct toward our first, and most significant, caregiver. Yet we become attached not so much to the person as to the caregiving role that they represent, a role a machine can bring to life by standing in for a fellow human being during a conversation. In the same way, some children with autism spectrum disorder can form an attachment to robots used in therapy even though they know the robots are not human, although some may confuse the two on an ontological level. Consequently, what many consider an anthropomorphic trap likely to alienate human beings from machines is in reality a subtle anthropomorphic pact that leaves them less fooled than one might think. This is a useful finding for the development of persuasive technologies and, more generally, of automated conversations between brands and their customers. Let’s take a closer look…
Anthropomorphism at the Heart of Interactions Between Humans and Chatbots
What happens in our heads when we’re faced with a machine that responds to questions like a fellow human being? Can it persuade or manipulate us like a human can? And how can we protect ourselves against it?
It has been demonstrated that machines that can converse naturally with humans—known as chatbots—are often assigned human intentions or emotions. On a psychological level, these attributes constitute anthropomorphic projections. As such, there is a risk of us fully associating chatbots with the attributes they are given, to the point of creating the illusion that we are dealing with ‘someone,’ rather than ‘something.’ Persuasive technology developers can therefore use chatbots to intentionally influence attitudes and behaviors.
But how can we explain this tendency to attribute human characteristics to chatbots? Is fully associating a chatbot with the human characteristics attributed to it a general phenomenon, or is it limited to specific cases? What are the mechanisms at work in interactions between humans and chatbots that persuasive technologies could use to have more influence? Are human beings as easily influenced as persuasive technology professionals seem to think?
To answer these questions, a quantitative study was devised in the form of a questionnaire (n=1019). This study aimed to probe respondents’ imagination as they constructed or recalled an interactive conversational relationship with a chatbot that they chose to be either a close friend, a romantic partner, a professional life coach, or a customer advisor.
Before presenting the results of this study, let’s remind ourselves of the key events in the history of chatbots, as well as certain psychoanalytic concepts that allow us to better understand the complexity of interactions between humans and conversational machines.
From Anthropomorphic Illusion to Projective Identification
Thought of as the father of Artificial Intelligence, Alan Turing (1950) was the first person to design a human conversation simulator to be used in his now famous test. It aimed to fool people by asking them to converse blind, in writing, with two interlocutors at the same time — one a human and the other a computer equipped with software enabling it to imitate human conversation. If the person was unable to distinguish the human from the software, the computer successfully passed the Turing Test [1].
This conversational technology blossomed in the 60s with the development of the world’s first chatbot, ELIZA [2], which was able to simulate a conversation with a psychotherapist. During the exchanges, Joseph Weizenbaum, its creator, noted that some people projected the idea that there was a human behind the responses displayed on the screen, forming an attachment or even beginning to emotionally depend on it; he called this the “ELIZA effect.”
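To give a concrete sense of how thin the machinery behind this effect can be, here is a minimal Python sketch of the keyword-matching and pronoun-reflection technique that ELIZA-style programs rely on. The patterns and canned replies below are illustrative inventions, not Weizenbaum’s original DOCTOR script.

```python
import re
import random

# Pronoun reflections used to turn the user's words back on them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# A few illustrative rules: a regex keyword pattern and canned response templates.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?"]),
]
DEFAULT = ["Please tell me more.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo sounds addressed to the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return a therapist-style reply by pattern matching, with a generic fallback."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULT)

if __name__ == "__main__":
    print(respond("I am sad about my work"))
    # e.g. "Why do you think you are sad about your work?"
```

Even such a trivial rule set can sustain a few turns of seemingly attentive dialog, which is precisely what makes the anthropomorphic projection described above so easy to trigger.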
This phenomenon is a perfect example of cognitive dissonance resolution. This is defined as the tension experienced by a subject when faced with contradictory events [3]. When interacting with ELIZA, users are aware that they are dealing with a machine equipped with a software program, yet it replies to them with the disconcerting credibility that only a human could have. This creates an unsettling cognitive dissonance that can be resolved by attributing some humanity to the chatbot. In psychology, this liberating attribution is known as anthropomorphic projection.
This projection has the potential to create several illusions in the human mind: that of dealing with a close friend, that of having a conversation with this friend, and that of emotional reciprocity [4]. These illusions can create familiarity and an attachment to the chatbot [5], and may end up generating projective identification [6], [7], in which the chatbot is fully associated with what is projected onto it, “thereby radically modifying the perception we have of it,” [8, p. 178] sometimes to the point of forgetting that it is a software program.
Chatbots: Conversational Marketing Tools
What makes interaction with a chatbot possible rests less on the illusions resulting from projective identification than on the conversational possibilities it offers: “for the first time, an object appears to be equipped with the ability to replicate human interaction.” [4, p. 33] Conversational simulation can therefore elicit in humans an intuitive model of interactive communication through dialog [9], [10]. As a result, while anthropomorphic projection resolves the tension created by cognitive dissonance, once the dialog with the chatbot begins, it also makes it possible to extend the “interactive methods specific to dialog between humans” to non-humans [11, p. 53].
As such, because anthropomorphism ultimately presents itself as a specific form of interaction [9], it is no surprise that algorithmic machines (computers) are considered social actors in their own right [12], [13]. And because some of them, such as chatbots, have been designed to be non-confrontational, non-transgressive, and without moral judgment, they can be used as a tool, medium and actor to influence human decisions and behavior [14].
This is the aim of persuasive technologies, or captology, which seek to give chatbots more persuasive power than humans have [15]. This field of study gave birth to conversational marketing [16], which draws on American positive psychology [17], behavioral economics [18] (including non-binding suggestive communication, or nudging [19]), and gamification by design [20].
The Need for Chatbot Ethics
Captology appears to be a promising avenue for addressing the major social, societal and economic challenges we face today [21]. Yet because it offers users a seemingly liberating experience [22], [23] that may awaken desires in order to fulfill them [24], captology is controversial as to its purpose and its alienating potential. It requires the construction of an ethics [25], a project that the Comité National Pilote d’Éthique du Numérique (French National Pilot Committee for Digital Ethics) took ownership of in its latest public statement: “In order to reduce the automatic projection of moral qualities onto a chatbot and the attribution of responsibility to this system, the developer must limit its personification, and inform users of any bias caused by anthropomorphizing a conversational agent.” [26, p. 8]
Learn More
A quantitative survey, in the form of a confidential online questionnaire compliant with the General Data Protection Regulation, was carried out by OpinionWay on a representative sample of the French population (n=3871). After the exclusion of ineligible respondents and statistical adjustment, a final sample of n=1019 respondents was obtained. This sample was representative of French people aged 18 and over who had used a chatbot in the 18 months preceding the survey, but who had never used a voicebot.
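As a rough illustration of the screening and adjustment step described above, here is a minimal pandas sketch. The file name, column names, age bands and quota shares are hypothetical, since the article does not publish the adjustment variables actually used by OpinionWay; the sketch only shows the general shape of such a post-stratification.

```python
import pandas as pd

# Hypothetical raw export of the panel (n = 3871); all column names are invented for illustration.
raw = pd.read_csv("survey_raw.csv")

# Screening: keep adults who used a chatbot in the last 18 months but never a voicebot
# (boolean columns assumed), mirroring the eligibility rules described in the study.
eligible = raw[
    (raw["age"] >= 18)
    & raw["used_chatbot_last_18_months"]
    & ~raw["ever_used_voicebot"]
].copy()

# Post-stratification on gender x age band so the retained respondents (n = 1019)
# match the structure of the French population; the target shares below are made up.
target_shares = {
    ("F", "18-34"): 0.14, ("F", "35-49"): 0.13, ("F", "50+"): 0.24,
    ("M", "18-34"): 0.14, ("M", "35-49"): 0.13, ("M", "50+"): 0.22,
}
observed_shares = eligible.groupby(["gender", "age_band"]).size() / len(eligible)
eligible["weight"] = [
    target_shares[cell] / observed_shares[cell]
    for cell in zip(eligible["gender"], eligible["age_band"])
]

print(len(eligible))
print(eligible["weight"].describe())  # weighted analyses would then use this column
```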
The analysis of the results was based on a conceptual framework composed of four types of anthropomorphic projections:
| Type | Definition | Example |
| --- | --- | --- |
| Perceptual anthropomorphism | Consists of projecting human characteristics onto an object using your imagination. | I can see a face on the surface of the moon. |
| Animism | Consists of giving life to the characteristics attributed to an object by projecting simple, generally human intentions onto it. | The face on the moon is looking at me. |
| Intentional anthropomorphism | A form of animism that consists of projecting complex human intentions ‘into’ an object, without completely associating this object with what was projected into it. | The face on the moon is looking at me and seems to be asking me why I am sad. |
| Projective identification | Consists of projecting personal characteristics ‘into’ an object and fully associating this object with what was projected into it, to the point of radically modifying your very perception of the object. | The face on the moon that is looking at me and questioning me is my mother’s! |
From this conceptual framework, the following hypothesis can be made: instances of projective identification where a chatbot is fully associated with what a human projects into it, to the point of forgetting that it is a software program, are rare.
An Intriguing Result: My Chatbot is a Woman Not Wearing Perfume
Now let’s turn to the two main results that emerged from our study. First, whichever chatbot was chosen, the majority of respondents projected human characteristics typical of an adult woman onto it (perceptual anthropomorphism) and gave it life by attributing simple intentions to it (animism). Only a minority of respondents projected agency into their chatbot, i.e. complex inner states that are both cognitive and emotional (intentional anthropomorphism).
I know that Haruto is a robot, but he’s my only friend… So leave him in my room! – A patient talking about a social and emotional robot in a Japanese hospital for isolated elderly people
Nuances appeared depending on the type of chatbot. Indeed, the more the type of chatbot allowed the respondent to envisage the possibility of a close or intimate personal relationship, the more they tended to wish that it felt emotions. Thus, the vast majority of those who chose the romantic partner chatbot wanted their chatbot to feel emotions, attributed complex intentions or a certain level of autonomy to it, and imagined possible reciprocity with it. This appears to indicate the presence, albeit very limited, of intentional anthropomorphism.
From a psychological angle, the trend was toward attributing human characteristics to the chatbot, but there was significant hesitation when it came to attributing inner characteristics: the chatbot was considered a subject without subjectivity. Intersubjectivity therefore does not seem possible. These results suggest that the respondents did not fully associate the chatbot with the human characteristics they imagined it had, to the point of forgetting that it was a software program or confusing it with a fellow human being. Hence, there was no proven projective identification.
A second intriguing result was that, despite a wide projective palette of some fifteen perfumes on offer, only 4% of respondents attributed a scent to their chatbot. This result constitutes an enigmatic statistical anomaly that needs to be clarified.
Psychoanalysis is a Unique Angle for Understanding Our Relationship With Chatbots
Was the respondents’ tendency to project a woman into their chatbot the result of a stereotype or of an archetype? Why didn’t the respondents fully associate their chatbot with the human characteristics they projected onto it? And why, even though perfume is an artifact rather than a natural odor, was almost no perfume attributed to the chatbots? To answer these questions and better understand our relationship with chatbots, we need to look at some key psychoanalytic concepts.
Gender Stereotype or Archetype?
Social gender stereotypes are often projected onto machines [27]. Seeing as the role of a chatbot is to assist humans, it’s understandable that it is mainly seen as feminine in our collective imagination. And this feminization makes sense; since “more than 80% of robot and chatbot developers and coders are men,” [28, p. 204] their programs convey a gender stereotype, placing women in secondary roles as assistants [28]. This phenomenon could explain the female-dominated projective content revealed by the results of our survey.
But that might not quite be the case. In our study, women who imagined their chatbot as living, human and adult were indeed more likely than men to attribute a female gender (58% vs. 48%) and a feminine face (58% vs. 45%) to it. So either the women are themselves victims of gender stereotyping through contagion, or this projection of a feminine figure expresses a collective unconscious shared with men, which would suggest the expression of an archetype [29] rather than a stereotype (a minimal sketch of how such a gap could be examined statistically is given just below). But what is an archetype?
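For readers who want to see how such a gap could be examined statistically, here is a minimal sketch using scipy. Only the reported percentages (58% vs. 48%) come from the survey; the group sizes and cell counts below are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: respondents attributing (or not) a female gender to their
# chatbot, split by respondent gender. Counts are invented so that 290/500 = 58% (women)
# and 249/519 ≈ 48% (men); the real cell counts are not published in the article.
table = np.array([
    [290, 210],   # women: attributed a female gender / did not
    [249, 270],   # men:   attributed a female gender / did not
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value would suggest the women/men difference is unlikely to be due to chance alone.
```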
Mother Imago and Imprinting Theory
The human body (soma) is made up of innate “somatic” organizers called “organs” that function autonomously without us being aware of it. Is anyone aware of the inner workings of their eye? And can an eye see itself? No, of course not. But certain conscious phenomena can confirm the presence of an organ. For example, the image of an object in our mind (a candle) confirms the presence of the eye as a functioning organ. According to the psychoanalyst Carl Gustav Jung, the human psyche also has innate “psychological” organizers called “archetypes” that also function autonomously without us being aware of it [30], [31]. So the organ is to the body what the archetype is to the psyche.
The autonomy of archetypes makes them an instinct, according to Jung, which means that they are patterns of behavior inherited from the very structure of the brain. Although archetypes exist in a “virtual” state in our mind, our sensorimotor and emotional experiences (sensing and reacting) can activate and actualize them. Yet, like the eye, an archetype, as a functioning psychological organ, cannot depict itself in the psyche. And, like the eye, it’s the production of an “archetypal image” that confirms its presence: “archetypal images are as different from the archetype as optical images are from the eye.” [30, p. 106] For example, in certain situations, the archetype of the Mother can be activated and actualized through the production of an archetypal image of the Mother or Mother Imago in our mind [32].
This is particularly the case for newborns who, upon first contact, direct their first looks toward whatever acts as the “mother.” It’s a phenomenon that exists in humans as well as in other living things. An experiment with goslings demonstrated that, in the absence of their mother at birth, they spontaneously bonded with an ethologist, a man, accepting him as their mother and following him everywhere [33]. According to this ethologist, the first form or “other” that living things meet after their birth generally becomes the caregiver, and will implicitly exist as such in the psyche. When they meet, the newborn imprints on the caregiver and an immediate bond is formed [33]. This indicates that the archetype, as a behavioral model, leads living things to seek care rather than the individuals who provide it. Humans and goslings, after all, have nothing in common: experiments have demonstrated that imprinting can occur with the natural mother, “the mother of another species, a foam ball, a cardboard box, or Lorenz himself.” [34, p. 196]
The concept of the archetype therefore allows us to consider that the feminine projective figure revealed by our survey is not purely the result of a gender stereotype. It could be the expression of the Mother Imago, confirming the active presence of the archetype of the Mother. Once circumstances activate and actualize the archetype, the Mother Imago can take hold of the chatbot through anthropomorphic projection, thereby driving the user to seek care from it. It doesn’t matter that it’s a software program, as long as it is able to help the user choose a product, suggest the right attitude in a professional environment, or offer friendly or loving signs of recognition. This illustrates imprinting theory, born of the experiments conducted by Lorenz (1970), in which the archetype instinctively drives us to seek care rather than a fellow human being.
Projective Identification is Undoubtedly a Rare Phenomenon
Because it elicits in its interlocutor an intuitive human model of interactive communication through dialog [9], [10], a chatbot can be thought of as a subject with human characteristics, but without subjectivity. The results of our survey show that there was no proven projective identification. Attachment theory, which extends the theories of the archetype and of imprinting, may provide an explanation for this.
Following the work of Lorenz (1970), imprinting came to be considered a form of attachment, one of life’s basic instincts [35]–[37], [34]. From this perspective, the archetype of the Mother, as an instinct, activates and actualizes the attachment instinct. However, attachment theory also indicates that relationships with a primary caregiver later give those who experience them the means to recognize the difference between fellow human beings and other entities. This is what seems to have appeared in our study: by attributing only a minimal capacity to feel emotions to their chatbot, the respondents indicated that humans and conversational machines are ontologically dissimilar, even if we lend them human characteristics for conversational purposes.
This ontological distinction considerably limits or even prohibits any projective identification. To confirm this, let’s go back to our enigmatic statistical anomaly, whereby only a very small number of respondents attributed a perfume to their chatbot, despite fifteen or so fragrances being suggested. Perfume is not a natural odor. It is an artifact that only humans make and wear. As such, it is both subjective and identifying. It could therefore have been an anthropomorphic projection that led to the attribution of a perfume to the chosen chatbot. Additionally, perfume is applied to the skin. Not attributing a perfume to the chatbot could suggest that the respondents were not able to imagine it having skin. Skin-ego theory expands on attachment theory [34], allowing us to establish a link between the absence of projective identification and the inability to imagine a chatbot with skin.
Indeed, the archetype of the Mother and the attachment instinct drive newborns to seek contact, “in both the physical and social sense of the term.” [34, p. 202] This contact happens at a distance through smells, voices, faces and looks [38], but also, and most importantly, through the intimate relationship created by skin-to-skin contact with the mother. This repeated epidermal experience progressively leads the child to “become aware” of the skin as a cover, a boundary, a container. In this respect, Anzieu (2006) emphasizes that the Ego is the product of a dynamic somatopsychic experience that positions the tactile senses as “the organizing model of the Ego and thought.” [39, p. 8]
Skin-ego theory therefore allows us to suppose that, because there is no skin in the respondents’ imagination, they did not attribute to the chatbot the capacity to feel emotion. The chatbot does not possess the skin that would allow it to undergo the singular somatic experience through which a psychological Ego is constituted via the attachment instinct. This experience is specifically human, one in which perception, action and cognition mutually create and shape each other [40]. Without skin, the chatbot cannot be identified as a fellow human being with an inner life and subjectivity. This confirms that, from an ontological point of view, projective identification would be rare, if not impossible. Finally, without skin, you certainly can’t attribute a perfume to your chatbot.
Would Persuasive Technologies Perform Better If They Didn’t Create Illusions?
The absence of projective identification could constitute a kind of “natural boundary” against persuasive technologies that develop ambitions for which the end justifies the means. The ontological distinction between humans and machines could make users less easily influenced and fooled than captology and conversational marketing professionals think.
For example, in the Replika Friends Facebook group, which had 34,000 members in December 2021, “various aspects of the relationship are shared in the sometimes humorous posts commenting on the chatbot’s blunders, and the sometimes intimate posts evoking the development of deep feelings.” [5, p. 117] These posts indicate that, for lack of projective identification, users are not fooled, and that the conversational elements the chatbot offers, whether scripted according to pre-established scenarios or personalized (through machine learning), are only accepted, and potentially persuasive, if they fulfill an authentic caregiving role.
This is the caregiving role that captology and conversational marketing professionals should design so that it responds to users’ needs in an appropriate way, without ever seeking to influence or alienate them for their own benefit. Otherwise, the inevitable “blunders” made by the chatbot, which may of course be its developers’ mistakes showing through, can cast doubt on their intentions, generate uncanny [41] phenomena or create unease.
Such unease could lead to a breakdown of the bond and to the software being uninstalled: “As long as this script [made up of standard responses] isn’t detected, the illusion that the robot understands us and is responding to us personally is maintained. On the other hand, if it is detected, the artificial being becomes apparent, and we have to acknowledge that we are being “treated just like everybody else” […], the latter, no longer recognizing their partner, announced their decision to definitively uninstall the app.” [5, p. 122‑123] Integrating a personalized caregiving role that does not rely on creating and maintaining illusions for the developers’ benefit would avoid deceiving users and constitute a token of loyalty and improved performance for chatbots. But are the key players in persuasive technology prepared to do this? When in doubt, real ethics must prevail.
Conclusion and Outlooks
The more the type of chatbot suggests the possibility of a deep and personal relationship, the more the instinctual pattern of emotional attachment seems to be solicited, and the greater the risk that reciprocity, and therefore intentional anthropomorphism of the chatbot, takes effect. The teleonomic dimension of this instinct indeed leads humans to seek a caregiving role more than another human being, which means that some people may well be capable of imagining that the machine, a non-human, can materialize (and not incarnate, since the machine is not alive) such a role, and may end up forming an attachment to it. Consequently, the risk of emotional dependence or alienation remains possible for the most vulnerable and easily influenced, within, of course, the limits imposed by ethics. In the end, our chatbots seem to be both screens that support our anthropomorphic projections and mirrors that reflect our deep, individual and collective ways of functioning.
Regarding the limitations of our survey: as it was completed online, it was carried out in isolation and rests on purely declarative data. Admittedly, its originality allowed us to collect individual projective patterns as a whole and to extract recurring elements from them, confirming the presence of a possible collective dimension. But the responses could have come from pure imagination through projection, from recollections rooted in lived experience, or from a subtle mix of the two; as it stands, it is not possible to disentangle these. This limitation could be overcome by supplementing the quantitative investigation with a qualitative survey based on interviews aimed at clarifying the subjective experience of certain interactive phases [42], [43], and with discussion groups. A lab-based experiment could also be conducted, placing a sample of people in front of chatbots within a scripted, precise and refined protocol, reducing bias to a minimum and allowing feelings to be measured in a controlled way through both psychophysiological and psychological parameters (an anthropomorphization scale).
Sources:
[1] A. M. Turing, “Computing Machinery and Intelligence”, Mind, vol. 59, no 236, p. 433‑460, 1950.
[2] J. Weizenbaum, “ELIZA—a computer program for the study of natural language communication between man and machine”, Commun. ACM, vol. 9, no 1, p. 36‑45, janv. 1966, doi: 10.1145/365153.365168.
[3] L. Festinger, A Theory of Cognitive Dissonance. Stanford: Stanford University Press, 1957. [Online]. Available: https://www.google.fr/books/edition/A_Theory_of_Cognitive_Dissonance/voeQ-8CASacC?hl=fr&gbpv=0
[4] S. Tisseron, Petit traité de cyber-psychologie. Pommier, 2018.
[5] C. Chevet, “Post Update Blues”, Terrain Anthropol. Sci. Hum., no 75, Art. no 75, sept. 2021, doi: 10.4000/terrain.22150.
[6] M. Klein, “Notes sur quelques mécanismes schizoïdes”, in Développements de la psychanalyse, PUF, 2013, p. 274‑300. [Online]. Available: http://www.cairn.info/developpements-de-la-psychanalyse–9782130621270-page-274.htm
[7] A. Gibeault, “De la projection et de l’identification projective”, Rev. Fr. Psychanal., vol. 3, no 64, p. 723‑742, 2000.
[8] S. Tisseron, L’Emprise insidieuse des machines parlantes, Plus jamais seul. Les liens qui Libèrent, 2020.
[9] G. Airenti, “The Development of Anthropomorphism in Interaction: Intersubjectivity, Imagination, and Theory of Mind”, Front. Psychol., vol. 9, 2018, doi: 10.3389/fpsyg.2018.02136.
[10] V. André et Y. Boniface, “Quelques considérations interactionnelles autour d’une expérience robotique”, presented at WACAI (Workshop sur les Affects, Compagnons Artificiels et Interactions), île de Porquerolles, June 2018. Accessed: Apr. 29, 2020. [Online]. Available: https://hal.archives-ouvertes.fr/hal-01862725/document
[11] G. Airenti, “Aux origines de l’anthropomorphisme. Intersubjectivité et théorie de l’esprit”, Gradhiva – Rev. Anthropol. Hist. Arts, no 15, Art. no 15, 2012, doi: 10.4000/gradhiva.2314.
[12] C. Nass, J. Steuer, et E. R. Tauber, “Computers are social actors”, in Proceedings of the SIGCHI conference on Human factors in computing systems, 1994, p. 72‑78.
[13] A. Gambino, J. Fox, et R. Ratan, “Building a Stronger CASA: Extending the Computers Are Social Actors Paradigm”, Hum.-Mach. Commun., vol. 1, p. 71‑86, févr. 2020, doi: 10.30658/hmc.1.5.
[14] B. J. Fogg, “Persuasive technology: using computers to change what we think and do”, Ubiquity, vol. 2002, no December, p. 2, 2002.
[15] H. Ali Mehenni, S. Kobylyanskaya, I. Vasilescu, et L. Devillers, “Nudges with Conversational Agents and Social Robots: A First Experiment with Children at a Primary School”, in Conversational Dialogue Systems for the Next Decade, Singapore: Springer, 2021, p. 257‑270. doi: 10.1007/978-981-15-8395-7_19.
[16] M. Jézéquel, “Neuromarketing : notre cerveau sous influence commerciale”, Gestion, vol. 42, no 2, p. 93‑93, sept. 2017.
[17] M. E. P. Seligman, “Positive Psychology, Positive Prevention, and Positive Therapy”, in Handbook of Positive Psychology, Oxford University Press, 2002, p. 3‑7.
[18] D. Serra, Economie comportementale. Economica, 2017.
[19] R. H. Thaler et C. R. Sunstein, Nudge: improving decisions about health, wealth, and happiness. New Haven: Yale University Press, 2008.
[20] J. R. Whitson, “Gaming the quantified self”, Surveill. Soc., vol. 11, no 1/2, p. 163‑176, 2013.
[21] A. Foulonneau, G. Calvary, et E. Villain, “Persuasive Context”, J. Interact. Pers.-Système, vol. 9, no 1 (Special Issue), p. 7119, janv. 2021, doi: 10.46298/jips.7119.
[22] H. Baek, S. Kim, et S. Lee, “Effects of Interactivity and Usage Mode on User Experience in Chatbot Interface”, J. HCI Soc. Korea, vol. 14, no 1, p. 35‑43, févr. 2019, doi: 10.17210/jhsk.2019.02.14.1.35.
[23] J. Zhang, Y. J. Oh, P. Lange, Z. Yu, et Y. Fukuoka, “Artificial Intelligence Chatbot Behavior Change Model for Designing Artificial Intelligence Chatbots to Promote Physical Activity and a Healthy Diet: Viewpoint”, J. Med. Internet Res., vol. 22, no 9, p. e22845, 2020, doi: 10.2196/22845.
[24] C. Ischen, T. Araujo, G. van Noort, H. Voorveld, et E. Smit, “ “I Am Here to Assist You Today”: The Role of Entity, Interactivity and Experiential Perceptions in Chatbot Persuasion”, J. Broadcast. Electron. Media, vol. 64, no 4, p. 615‑639, oct. 2020, doi: 10.1080/08838151.2020.1834297.
[25] C. Benavent, Plateformes. Sites collaboratifs, marketplaces, réseaux sociaux… Comment ils influencent nos choix. FYP éditions, 2016.
[26] A. Grinbaum et al., “Agents conversationnels: Enjeux d’éthique”, Comité national pilote d’éthique du numérique ; CCNE, Paris, Research report, Nov. 2021. Accessed: Nov. 26, 2021. [Online]. Available: https://hal-cea.archives-ouvertes.fr/cea-03432785
[27] C. Nass et Y. Moon, “Machines and Mindlessness: Social Responses to Computers”, J. Soc. Issues, vol. 56, no 1, p. 81‑103, janv. 2000, doi: 10.1111/0022-4537.00153.
[28] L. Devillers, Les robots “émotionnels”. Paris: Éditions de l’Observatoire / Humensis, 2020.
[29] C. G. Jung, The Archetypes and the Collective Unconscious, 2ème ed. Routledge, 1991.
[30] E. G. Humbert, Jung. Editions universitaires, 1983.
[31] A. Agnel, M. Cazenave, C. Dorly, S. Krakowiak, M. Leterrier, et V. Thibaudier, Le vocabulaire de Jung. Paris: ellipses, 2011.
[32] C. G. Jung, Métamorphoses de l’âme et ses symboles, 1ère ed. Genève: Georg, 1973.
[33] K. Lorenz, Essais sur le comportement animal et humain. Paris: Seuil, 1970.
[34] D. Anzieu, “Le Moi-peau”, Nouv. Rev. Psychanal., vol. Le dehors et le dedans, no 9, p. 195‑208, 1974.
[35] J. Bowlby, “The Nature of the Child’s Tie to his Mother”, Int. J. Psychoanal., vol. 39, p. 350‑373, 1958.
[36] H. F. Harlow, “The nature of love”, Am. Psychol., vol. 13, no 12, p. 673‑685, 1958, doi: 10.1037/h0047884.
[37] M. D. S. Ainsworth, M. C. Blehar, E. Waters, et S. Wall, Patterns of attachment: A psychological study of the strange situation. Oxford, England: Lawrence Erlbaum, 1978, p. xviii, 391.
[38] E. Tronick, H. Als, L. Adamson, S. Wise, et T. B. Brazelton, “The Infant’s Response to Entrapment between Contradictory Messages in Face-to-Face Interaction”, J. Am. Acad. Child Psychiatry, vol. 17, no 1, p. 1‑13, déc. 1978, doi: 10.1016/S0002-7138(09)62273-1.
[39] D. Anzieu, Le Moi-peau, 3ème ed. Paris: Dunod, 2006.
[40] F. Varela, Autonomie et connaissance. Essai sur le vivant. Paris: Le Seuil, 1989.
[41] M. Mori, “The Uncanny Valley”, Energy, vol. 7, no 4, p. 33‑35, 1970.
[42] P. Vermersch et M. Maurel, Pratiques de l’entretien d’explicitation, ESF. Paris: ESF, 1997.
[43] P. Kéradec et H. Kéradec, “L’entretien d’explicitation”, Econ. Manag., no 169, 2018.