With every day that passes, we use artificial intelligence (AI) in more and more ways. For example, AI can provide you with a quick recipe on the fly, mock up a picture of what your living room would look like with new furniture, and suggest what you might wear to an upcoming formal dinner. AI can seemingly do so much that, at times, it's easy to forget it is not the perfect answer for everything. One area where more people are turning to AI is mental health, replacing real clinical therapists with AI interactions. While AI is more affordable, and perhaps more convenient, is AI therapy as effective as working with a real, trained, and licensed human being? And what are the dangers of choosing AI over a real therapist, especially when it comes to self-harm or the potential to harm others?

AI versus human therapy
If you have ever played around with AI, then you know how easy it is to forget you are actually communicating with a computer, not a real human. AI can be polite, conversational, and knowledgeable — but can it help with real mental health challenges? While millions of people worldwide receive mental health assistance from trained, licensed clinicians, some are now replacing human therapy with AI mental health assistance. Is this a good thing, or can AI actually compound mental health issues that might otherwise have been resolved with the help of a trained human?
One problem with AI is that it lacks clinical judgment. AI does not understand language; it predicts language patterns — a very different approach from human therapy. AI is also problematic when assessing and managing risk, including self-harm and suicidal ideation, because it can miss important yet subtle warning signs. While a real clinician is trained to watch for signs of distress, AI can overlook those indicators, resulting in potentially dangerous situations.
Another problem with AI is that no therapeutic relationship forms between computer and client, unlike the dynamic that develops in a human therapeutic relationship. A therapeutic alliance allows trust and empathy to develop — factors closely associated with positive therapeutic outcomes. Additional concerns with AI therapy include misdiagnosis and the oversimplification of complex mental health conditions, both of which can lead to improper, unhealthy, or dangerous treatments.
A final consideration when comparing AI to human therapy is the uniqueness of the support provided. AI advice is general, relies on patterns, and misses individual nuance. Human therapy, by contrast, is personalized, adaptive, and evolves over time. Human therapists also track progress, adjust treatment, and coordinate care — things AI does not do.

Final thoughts
AI can simulate conversation, but it cannot replace clinical judgment, human connection, or responsibility for care. Human therapy is about facilitating change, not just giving advice, as is often the case with AI. Yes, AI can be useful for some aspects of mental health support, including psychoeducation, journaling prompts, and a sense of low-level support, but relying on AI exclusively for mental health issues is not advised — and can even lead to new, more serious concerns.
drstankovich.com