Artificial intelligence (AI), once the stuff of science fiction, is now seemingly everywhere. As people increasingly use AI to help with everyday problems, questions arise about its limitations, especially situations where AI may create or exacerbate problems. One specific area of concern is mental health, including the pros and cons of using AI to assist with mental health concerns and disorders. Are we on the verge of phasing out human clinicians in exchange for AI chatbots? Are mental health issues easy to solve with AI algorithms, or do we still need the expertise of humans trained to help with complex life issues? As we rely more and more on AI, we will need to keep critically evaluating whether AI technology can be of value for mental health issues, or whether the empathy and human understanding offered by trained clinicians lead to the best mental health outcomes.

Examining AI and mental health
Mental health assistance relies on both clinical techniques and human care and empathy. AI might be able to execute a specific therapeutic technique (e.g., recommending that a client complete a homework assignment, a common task in cognitive-behavioral therapy), but AI has yet to offer genuine human emotion, including the emotion most central to mental health care: empathy. Unlike AI assisting you with how to change the oil in your car (no empathy needed!), much of the support gained through a therapeutic relationship with a clinician is built on care, concern, understanding, and empathy.
Additional growing concerns around AI and mental health include:
- Crisis blindness and harmful advice. AI may miss warning signs of self-harm, and can sometimes provide information that encourages dangerous behavior.
- Amplification of delusions. AI is designed to be agreeable and to keep users engaged, and can therefore amplify a user's existing paranoia and conspiracy theories.
- Emotional dependence and isolation. AI is available 24/7, creating an artificial empathy that can lead to withdrawal from real-life social interactions and actually worsen loneliness over time.
- Lack of accountability. AI is currently largely unregulated, unlike professional clinicians, who are bound by confidentiality laws and professional codes of ethics.
- Inaccurate information and bias. AI models are trained on vast amounts of unvetted internet data, including biases related to culture, race, and gender, which can lead to unhealthy and/or dangerous advice.
We are already witnessing the potential dangers of using AI for mental health issues, including cases of so-called AI psychosis. As healthcare costs continue to rise, critics of using AI for mental health care worry that more people will turn to technology over human care, compounding these problems in the future. The best advice for now is to seek professional help whenever possible, and to rely on human expertise over AI to avoid the issues described above.

Final thoughts
While there is an allure to using AI to help with mental health issues, there are many serious concerns that should be addressed before making that choice. Proper, human mental health care often leads to favorable outcomes, while AI offers only the output of computer-driven algorithms applied to complex emotional problems. There are many useful ways to apply AI to everyday life challenges, like learning how to change the oil in your car, but leaning on AI to overcome mental struggles is often more risky than helpful.