● Chatbots and applications that do not take patients' specific cultural backgrounds into account may provide misleading advice and even add to their distress.
● To prevent harm caused by algorithmic biases and to ensure respect for personal data, the new tools should only be used within the framework of supervised care programmes.
False advertising?
A quick search for “AI Therapist” on Google brings up plenty of results. At the top of the list, there’s a sponsored link for Replika, a start-up offering access to an “AI Therapist Chatbot”. Follow it and you will be invited to answer a series of personal questions that you might assume are designed to customize the tool for individual therapeutic support. The problem is that Replika is not a therapeutic service: it is a start-up offering access to virtual AI friends. Further down the list, another start-up called Abby promises free “AI therapy in your pocket”, available 24/7. Abby’s “therapeutic” positioning is clearly stated, as is a pledge to deliver “unbiased advice”.
Although many professionals would describe “AI therapy” as a misnomer, there’s no denying the boom in AI systems that claim to complement or even replace traditional therapeutic pathways and psychologists. The marketing arguments are round-the-clock availability, accessibility in multiple languages and competitive pricing, in a context where mental health professionals are in short supply in certain regions. At the same time, platforms like Character.ai and Replika, which were not designed to respond to mental health questions, do not hesitate to answer them anyway, sometimes misleading their users.
Tools that may be deployed and supervised in care programmes
A team of researchers at the Geisel School of Medicine at Dartmouth has conducted the first ever clinical trial of a therapeutic chatbot based on generative AI, which they have christened Therabot. The new tool delivered impressive results: participants who had previously been diagnosed with major depressive disorder, generalized anxiety disorder or an eating disorder experienced a significant reduction in their symptoms. In particular, those suffering from depression reported a 51% improvement. For Céline Borelle, a sociology researcher at SENSE – Orange Labs, interaction with chatbots of this kind could help patients to “mitigate social sanction and reduce negative social stigma” that they may associate with recourse to therapy.
However, the researcher warns that “these tools should not be used to replace therapists, but rather they should be deployed in addition to existing care provisions. AIs could, for example, be recommended by health professionals to give more support to patients between appointments but should never be used to replace professionals themselves.” In short, she argues that AI tools specifically designed for therapy will be useful, but only within the frameworks provided by established care pathways.
An important concern is that the AIs now being touted as replacements for therapists are in fact fraught with biases, notably cultural and normative conceptions of mental health that are not universally applicable. Likewise, they are unable to grasp the full spectrum of cultural subtleties and are dependent on the language models on which they have been trained. A further issue is that “therapeutic” interaction with such tools can be hampered by their tendency to provide generic, non-personalized responses to complex questions, and by the absence of human empathy. However, the impact of these limitations can be mitigated when the tools are used with proper oversight. “These tools are not harmful if they don’t replace the therapist, but it’s important to remember that what’s therapeutic is the relationship with the therapist,” points out Céline Borelle.
Serious concerns about personal data protection
Questions about confidentiality and respect for personal data have also been raised. In May 2025, Italian authorities fined Replika five million euros for breaching personal data protection rules. In the same month, a team of researchers from Iowa State University published details of an AI model that analyses social media posts to detect signs of depression. The researchers emphasized the importance of respecting data privacy and recommended that social media platforms obtain users’ informed consent before collecting mental health-related data. For Céline Borelle, this type of approach is fundamentally problematic. “It’s not clear what we’re measuring here. And furthermore, there is no guarantee that automated alerts can actually motivate people to take part in care programmes.”