● The growing realism of AI companions can lead to forms of dependency, leave users vulnerable to manipulation, and reduce their interest in real-world romantic relationships.
● The mental health risks of AI companionship will need to be addressed through technical and legal safeguards.
In 2024, a Spanish artist married an AI-powered hologram with whom she had been living for the previous five years. In a recent article in Trends in Cognitive Sciences, Daniel B. Shank highlights how romantic relationships between humans and AIs are fraught with psychological risks. People who rely on AIs for emotional needs may substitute this involvement for real human interaction, exposing themselves to manipulation and other forms of exploitation. “Importantly, engaging human love and building relationships does not depend on AI objectively possessing the human capacities to love, only that we subjectively treat them as romantic partners,” he writes. The illusion is powerful enough to activate the same psychological mechanisms that characterize intimate relationships with other humans, an observation that warrants an urgent public health alert.
Certain subjects develop a preference for “ideal” AI partners, who are endlessly malleable and conflict-free.
Unreciprocated attachment
Orange Labs researcher Moustafa Zouinar explains how interaction with human-like machines can be problematic: “The anthropomorphic characteristics of AIs open the door to risks.” The phenomenon was first observed in the 1960s with the rudimentary chatterbot ELIZA: “The system, which simulated a psychotherapist, was so convincing that some people became emotionally dependent on it.” ELIZA was a far cry from today’s AIs, which produce fluent natural language and may be represented by realistic-looking avatars. Nonetheless, relationships with AIs remain fundamentally asymmetric. “These systems only simulate emotions,” points out the researcher. “They are computational machines devoid of affect which are unable to understand the meaning of what we say.”

However, the illusion of reciprocity they engender can lead some people to withdraw from real human relationships. It is a possibility referenced by Daniel B. Shank, who notes that certain subjects develop a preference for “ideal” AI partners, who are endlessly malleable and conflict-free. “Relational AIs allow us to have a relationship with a partner whose body and personality are chosen and changeable, who is always available but not insistent, who does not judge or abandon.” This substitution comes at a cost: “A recent study found that 25% of AI companion users reported decreased interest in forming real-world romantic relationships,” points out Moustafa Zouinar.
How companion AIs can affect mental health
The evolving role of relational AIs has created an urgent need for new mental health indicators, which, as Daniel B. Shank explains, “could be inspired by tools used in other fields to detect warning signs of addiction, suicide ideation and abusive relationships.” For his part, Moustafa Zouinar is keen to point out that, from a theoretical standpoint, there is no way “to guarantee that deep-learning based AI systems will never generate harmful advice; these AIs operate on a probabilistic basis, so there will always be an element of unpredictability.” Further problems involving the use of these systems to manipulate users are also on the horizon. “They could be deployed to collect sensitive data or to prompt targeted purchases,” explains Daniel B. Shank. ChatGPT, with its new product-browsing feature, could easily exploit user data to promote products. “It would not be surprising if malicious actors endowed AIs with features designed to make users dependent on them,” notes Moustafa Zouinar.
The need for a clinical perspective
For Daniel B. Shank, it may be necessary to provide psychological support for users, along the lines of couples’ therapy: “We could apply counselling techniques to help people identify and leave manipulative or abusive relationships with AIs.” The American researcher also emphasizes the need for shared responsibility: “They also add to the burden on technology designers and lawmakers, who need to oversee these AI systems along with the manner in which they are trained and carry out audits and issue warnings where necessary.” If asymmetrical relationships between humans and artificial intelligence systems are to be prevented from causing real distress, our perception of AIs will need to change. We can no longer view them as purely technical tools: they also act as powerful psychological and social agents, and extensive guardrails will be needed to limit their influence.