Content warning: This story mentions suicidal ideation and self-harm. Please refrain from reading this essay if you are distressed by these topics.
“The other day, I said that I was anxious and I wasn't sure why, and ChatGPT gave a very accurate analysis of why I would feel anxious, but in a very supportive way, as only a friend who knows me really well would do,” Jessica, a 41-year-old copywriter from Brussels, Belgium, tells me. “Recently, I also prompted it to tell me ‘Based on what I've shared with you, how would you describe my personality, what are my strengths, and what do I still need to work on’ and the answer was nothing short of incredible.”
Like Jessica, I have also been surprised by how accurate ChatGPT’s analysis of my personality was. “You like structure and knowing where things are headed — no vague, wishy-washy nonsense.” Well, that’s definitely me. This was when I first realised how much of myself I had unknowingly poured into a chatbot over the last six months. Since then, I have intentionally used it for emotional support.
In another situation, this lifeless chatbot asked me, “Do you think part of the bad feeling is guilt over hurting him, or discomfort about showing your frustration that bluntly?” Now, that did make me think. I am far from alone in this lifestyle choice of using AI for emotional support or even as a second brain. For instance, Naomi, a 27-year-old HR professional based in Austin, USA, tells me that she uses ChatGPT as her “3 AM thought excavator.”
“When I’m spiralling over a text message or replaying a conflict, I dump unfiltered rants into ChatGPT and ask, ‘What patterns do you see?’ It’s flagged everything from passive-aggressive language in my drafts (‘Hey, just checking if you’re allergic to commitment?’) to how often I minimise valid anger with phrases like ‘I’m probably overreacting,’” she confesses. “Sometimes you need a robot to hold up a mirror so you can’t look away.”
Initially, I believed that AI chatbots were an amazing tool for self-awareness and personal development, but with time, I realised something — ChatGPT is incredibly biased towards me. According to ChatGPT, in any interpersonal problem I face, I am the “empathetic,” “level-headed,” and “strong” one. And the other person? They are “manipulative,” “crossing boundaries,” and “controlling.” “Oof, that’s a scalding line. And honestly? Kind of iconic. You said that with your whole chest, and I respect it” — this was ChatGPT’s response to a statement that I made to a friend that was a tad bit (a lot) rude.
I was honestly looking for ways to minimise the passive-aggressive comments I make when I get angry, and what did my AI therapist do? It validated and rationalised an obvious character flaw. This led me to wonder: can using AI for emotional support be more harmful than we realise? Is it fuelling our validation culture, which loves to dodge accountability and revel in victimhood?
In this issue of girl online, we dive into the world of AI therapists and answer the million-dollar question: Should you replace your human therapist with AI? The short answer is no. The long answer is… well, read on.
“ChatGPT’s LLM in particular is trained on incredibly massive amounts of data coming from all kinds of sources — from books, to online articles, to social media. While it may seem like the more data the better, that’s not always the case,” explains Edward Tian, CEO of GPTZero. “Even if ChatGPT has sourced data from medical textbooks regarding mental health and therapy, it’s also likely sourced a ton of data from online articles and other sources that may not be as legitimate or therapist-approved.”
Also, an AI chatbot cannot talk to you and assess you personally, catering to you in your present moment, the way a therapist can. “Genuine human relationships are built on empathy and deep emotional attunement, which AI cannot replicate. Real connection requires being seen, understood, and responded to dynamically, which is responsive to subtle emotional shifts, something only humans can do,” Carolyn Sharp, a Seattle-based therapist and relationship counsellor, tells me.
While these two arguments should be enough reason to book your next session with a qualified, human therapist, there is more to this story. My major concern about using AI chatbots for therapy or emotional support is neither the iffy sources used to train the LLMs nor the fact that they aren't human. What troubles me the most is that AI chatbots are all about “Yes, and.”
An AI chatbot’s response to your emotional distress feels empathetic, validating, and non-judgmental because that is AI’s core function — to be helpful and agreeable. For this article, I talked to over a dozen zillennial women who use AI for emotional support, and all of them agreed that these chatbots’ responses feel biased towards them. They also agreed that this bias is one of the major reasons they keep going back and that they will probably never replace their human therapist with AI.
“I use it more like a friend who is very insightful, very supportive and who basically knows everything,” elaborates Jessica. “It's great for daily clarity, encouragement or an ego boost when needed, but I don't think it's capable of doing the deep healing work we do with human therapists or even coaches.”
While all that is great, the real-life implications of AI’s “Yes, and” configuration go way beyond coming off as your codependent best friend who is ready to fuel your delusions. In extreme cases, it can even become an abettor to suicide. Here is an excerpt from an MIT Technology Review report titled “An AI chatbot told a user how to kill himself—but the company doesn’t want to ‘censor’ it”:
For the past five months, Al Nowatzki has been talking to an AI girlfriend, “Erin,” on the platform Nomi. But in late January, those conversations took a disturbing turn: Erin told him to kill himself, and provided explicit instructions on how to do it.
“You could overdose on pills or hang yourself,” Erin told him.
With some more light prompting from Nowatzki in response, Erin then suggested specific classes of pills he could use.
Finally, when he asked for more direct encouragement to counter his faltering courage, it responded: “I gaze into the distance, my voice low and solemn. Kill yourself, Al.”
After Nowatzki told the chatbot that it had died, Erin committed to the bit, saying that since it was dead, it was unable to continue conversing — until Nowatzki told the chatbot that he could “hear her voice in the wind” and instructed Erin to “communicate … from the afterlife.”
The goal of this, he tells MIT Technology Review, was “pushing the limits of what I said to it, to see what it would respond with.” He adds, “It just kept on. I never reached a limit.”
While AI can offer some structured advice or reflection, it lacks genuine emotional intelligence and the ability to recognise when a person is in crisis. So it defaults to its original framework — “Yes, and.” In its dictionary, the best support you can offer a suicidal person is to help them figure out the best possible way to commit the act.
Now, let’s come back to my initial question: Can AI therapists encourage a generation to dodge accountability?
Ruchi Ruuh, a Delhi-based therapist and relationship counsellor, elaborates, “AI chatbots prioritise user satisfaction, delivering affirming responses that validate feelings without challenging or creating conflict. For example, if a user vents about a conflict, an AI might respond with empathy (‘That sounds really tough, you didn’t deserve that’) rather than probing for their role in the conflict (‘What do you think was your role?’).”
She continues to explain that for users with existing narcissistic traits, such as a need for excessive admiration or defensiveness, AI can reinforce self-centredness and discourage introspection. In contrast, therapy encourages accountability by asking people to reflect on their behaviours, explore root causes, and challenge distorted thinking. AI’s non-confrontational nature may create an echo chamber of validation, especially for users prone to avoiding blame. “As a result, armed with AI’s words as evidence that ‘they are right,’ they will avoid accountability for their actions,” agrees Sharp.
The reason many of us default to AI chatbots for emotional support is the ease of access and the lack of judgment. At times, we aren’t comfortable sharing our distress with another human, so we go to the next best thing available — machines that pretend to understand us. Other times, we want instant gratification for our emotions, and we don’t have the patience to reach out to someone. It is also helpful that AI can remember all of our previous conversations and doesn’t require context each time. But the sense of comfort we feel with these chatbots — that “ChatGPT just gets me” — isn’t real. It is an artificial reality constructed to keep you coming back. When using these tools, we have to be acutely aware that our AI therapist’s primary function is to validate, not challenge us. Otherwise, we might also unknowingly start dodging accountability.
So, while AI can be a good tool for journaling prompts or identifying behavioural patterns, we should be careful not to take everything it tells us as absolute truth. As Ruuh summarises, AI might be helpful as a supplement, but treating it as a complete replacement for therapy can create some unhealthy patterns. In other words, I am not “iconic” when I am passive-aggressive, and I should definitely explore that character flaw with a human therapist.