AI as a Mirror: When the Chatbot Says What Four Years of Therapy Couldn't
There’s a post doing the rounds that stopped me mid-scroll this week. Someone shared how a ten-minute conversation with an AI chatbot gave them more closure on their divorce than four years of therapy. And the responses ranged from genuinely moving to deeply cynical, with a lot of interesting territory in between.
My first instinct was the sceptical one — it just told you what you wanted to hear. That’s a fair criticism. Large language models are, by their nature, somewhat sycophantic. They’re trained to be helpful and agreeable, and that can absolutely create a hall-of-mirrors effect where your existing narrative gets reflected back at you with a shiny coat of authoritative-sounding language on top. That’s a real risk, and anyone using these tools for emotional processing should keep it front of mind.
But then I sat with it a bit longer, and I think the cynical take misses something important about what’s actually happening here.
The person wasn’t claiming AI replaced their therapist. They were still seeing their therapist. What they found was a space, available at whatever hour the grief decided to surface, where they could organise their thoughts without the clock ticking and without worrying about the other person’s reaction. One commenter framed it well: the AI just gave you a space to process without interruption or judgment. And that’s genuinely valuable, even if it’s not clinical treatment.
Think about it from an accessibility angle for a moment. Mental health services in Australia are, frankly, stretched thin. The waitlists are brutal, the out-of-pocket costs after your Medicare sessions run out can be punishing, and if you’re in a regional area the situation is often far worse. The idea that emotional support should only happen in a formal clinical setting, with a licensed professional, during business hours — that’s a model that was already failing a lot of people before AI entered the picture. Someone in rural Victoria processing grief at midnight doesn’t have great options. If a chatbot helps them organise their thoughts and get through the night, I’m not going to sneer at that.
That said, a therapist who commented in the thread made a point worth taking seriously — dependency is a real risk. There’s a meaningful difference between using AI as a structured reflection tool between therapy sessions and replacing critical thinking with an AI that will confidently validate whatever you bring to it. The technology can hallucinate certainty. It can sound authoritative while being wrong. And in mental health contexts, that’s not a trivial problem. There have been genuinely tragic cases where AI interactions made things worse, not better.
So where does that leave us? I think the framing that resonated most with me came from someone in the thread who described AI as a first reflection layer — not a replacement for therapy, but a lower-friction way to organise messy emotional material before you even get to the clinical setting. Something that helps you find language for patterns you’ve been circling for years. That’s a genuinely useful role, and it’s one that a lot of people are already using AI for whether the healthcare system is ready for it or not.
Working in IT, I’ve watched how quickly these tools have evolved, and the pace of it leaves me both excited and a little unsettled. The version of AI-assisted mental health support that we want to build, with proper guardrails, escalation protocols for crisis situations, source checking, and integration with actual clinical care, is achievable. Some of it is already being worked on. But the version that exists right now, where millions of people are having deeply personal conversations with commercial products optimised for engagement? That needs a lot more scrutiny than it’s currently getting.
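To make one of those guardrails concrete, here’s a minimal sketch of a crisis-escalation check that runs before a message ever reaches the model. Everything in it is hypothetical and mine, not any vendor’s: the keyword list, the `generate_reply` stub, and the referral wording are illustrative only. A production system would use a trained classifier and clinically reviewed escalation protocols, not a hard-coded list.

```python
# Hypothetical sketch of a pre-model crisis-escalation guardrail.
# A real system would use a trained classifier and clinically
# reviewed protocols, not a keyword list like this one.

CRISIS_PATTERNS = [
    "suicide", "kill myself", "end my life", "self-harm", "hurt myself",
]

# Lifeline (13 11 14) and 000 are real Australian services;
# the wording of the referral itself is illustrative.
ESCALATION_MESSAGE = (
    "It sounds like you might be in crisis. I'm not able to help with that, "
    "but people are available right now: Lifeline on 13 11 14, or 000 if "
    "you're in immediate danger."
)


def generate_reply(message: str) -> str:
    """Placeholder for the actual model call."""
    return f"(model response to: {message!r})"


def handle_message(message: str) -> str:
    """Route a user message: escalate if it looks like a crisis, else reply."""
    lowered = message.lower()
    if any(pattern in lowered for pattern in CRISIS_PATTERNS):
        # Hand off to human services instead of letting the model improvise.
        return ESCALATION_MESSAGE
    return generate_reply(message)


if __name__ == "__main__":
    print(handle_message("I can't stop thinking about the divorce"))
    print(handle_message("I want to end my life"))
```

Even this toy version makes the design point: the safety check sits outside the model, so the commercial incentive to keep the conversation going never gets a vote in whether to escalate.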
The person who wrote the original post seems like they have their head screwed on right. They know what AI is and isn’t. They’re still doing the actual therapeutic work. They just found a useful tool in a difficult moment, and they wanted to share that.
That’s not naive. That’s actually pretty healthy. The trick is making sure the guardrails exist for the people who might not have that same clarity, and that’s a conversation healthcare policymakers, tech companies, and the mental health sector need to be having seriously, and soon.