Recent advances in artificial intelligence (AI) have aided health systems in myriad ways. The mental health sector has been slower to adopt its use but promise – and peril – are on the horizon. In this article, we examine the state of AI in mental health care, its potentialities and its perils.
Introduction
Today’s mental health landscape is dominated by two major realities, which might – with proper care and attention – come to complement one another. The first is that increasing numbers of people globally are experiencing mental health concerns post-pandemic. The Centers for Disease Control and Prevention (CDC) in the United States estimates that one in two Americans could suffer from depression – and about two-thirds of cases go undiagnosed. Estimates are that, for every ten individuals experiencing mental health issues, only three secure access to a licensed professional in the system (Kesari, 2021).
The second reality is that artificial intelligence has arrived in the healthcare sector and is showing no signs of slowing down. Just in the last few years, there has been a huge surge in robotic surgeries, AI-predicted health assessments and treatment plans, and greater detection of diseases through AI data analysis (Skowron, 2024). Mental health care, however – with its requirements for human connection, emotional attunement and empathic understanding – would seem much harder to deliver via a machine. Yet expert commentators have identified multiple ways in which AI is already being deployed in the mental health field, with plenty of potential for growth. In this article, we give an overview of how it is already being used, with brief notes on its promise and a few perils to look out for.
The present: Ways we are already using artificial intelligence in mental health care
Some expert commentators have noted that AI tools presently serve two primary functions: operating behind the scenes to predict health risks or recommend personalised treatment plans, and directly interfacing with patients in the form of therapeutic chatbots (Abrams, 2021). The first is much more common.
Quality control: Keeping therapeutic conversations on track, monitoring progress
When you’re with a client, you may have a few exchanges of initial chitchat to establish rapport and settle into the session. But therapists can ask themselves: how much of the total session comprises therapeutic interventions learned in their training, and how much is just conversation without therapeutic purpose? With ever-increasing demand for services and ever-higher workloads, some mental health clinics are looking into automated ways to monitor the quality of their therapists’ work.
Recent advances in linguistic AI have enabled more accurate scoring of psychotherapy interventions, a task that previously relied on supervisors and “human rating” constructs. Natural language processing models such as BERT (Bidirectional Encoder Representations from Transformers) can automatically transcribe sessions and code therapist and client behaviours. This can not only help train new clinicians more effectively, but also help seasoned therapists maintain their skills.
For the client, this allows ongoing, real-time assessment of therapist skills, areas for improvement, and efficacy across the course of therapy (Bateman, 2021; Skowron, 2024). An obvious advantage here is AI’s ability to pick up on utterances that may signal the need for a different tack in the therapy, or even a different therapist. For example, therapists using motivational interviewing approaches are always listening for “change talk” (i.e., client statements that indicate readiness to change behaviour). AI can detect such statements and feed them back to the therapist, who may have missed them while focusing on other aspects of the session. Conversely, if the AI detects no such statements, a review of the transcript might convince the therapist that the therapy is not working as intended (Bateman, 2021).
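For readers curious about the mechanics, below is a minimal sketch of this kind of utterance-level coding. It uses a general-purpose zero-shot classifier from the open-source Hugging Face transformers library to label client statements as change talk, sustain talk, or neutral conversation. The model and label wording are our own illustrative assumptions, not the purpose-built systems described by Bateman (2021), which are fine-tuned on coded therapy transcripts.

```python
# Minimal sketch: labelling client utterances as "change talk" vs "sustain talk".
# Uses the open-source Hugging Face `transformers` library; the model choice and
# label wording are illustrative assumptions, not the systems cited in the article.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

labels = [
    "change talk (client expresses readiness to change)",
    "sustain talk (client argues for keeping things as they are)",
    "neutral conversation",
]

utterances = [
    "I really can't keep drinking like this; something has to give.",
    "Honestly, the drinking isn't the problem, my job is.",
    "The weather's been awful this week.",
]

for text in utterances:
    result = classifier(text, candidate_labels=labels)
    # The top-scoring label is a rough proxy for how an automated coder
    # might flag the utterance for the therapist's review.
    print(f"{text!r} -> {result['labels'][0]} ({result['scores'][0]:.2f})")
```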
Detecting problems earlier and more accurately
AI has made great strides in early detection of mental health issues, particularly the garden-variety mental illnesses of anxiety and depression.
Anxiety
A recent study found that AI picked up anxiety symptoms with 92% accuracy. Researchers recorded the movements of adults in Pakistan with a wearable sensor as the individuals performed a series of activities in a set order, then used motion data and deep-learning techniques to identify specific anxiety-linked behaviours, such as nail-biting, knuckle cracking, and hand-tapping. While the study had the significant limitation of including only 10 participants, its author noted that “a major takeaway from this research is that we can safely and conveniently use artificial intelligence to provide measurement, analysis and diagnostics for anxiety”.
Given that the measurement can be as simple as the client wearing a smartwatch and checking the readings in an app, the approach has the added benefit of being non-intrusive as well as objective and accurate. By contrast, the traditional way of measuring anxiety – with subjective measures and scales – has sometimes caused further stress and anxiety among both clients and clinicians (Jagoo, 2022)!
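To make the general approach more concrete, here is a small sketch of the kind of model that can classify short windows of smartwatch motion data. The architecture, the 5-second window at 50 Hz, and the two-class setup are illustrative assumptions on our part, not the model used in the study described above.

```python
# Illustrative sketch: a tiny 1-D CNN that classifies short windows of tri-axial
# smartwatch accelerometer data as anxiety-linked fidgeting (e.g., nail-biting,
# hand-tapping) or ordinary movement. The architecture, window length and labels
# are assumptions for illustration, not the study's actual model.
import torch
import torch.nn as nn

class MotionClassifier(nn.Module):
    def __init__(self, n_channels: int = 3, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, samples); 5 s at 50 Hz gives 250 samples.
        return self.net(x)

model = MotionClassifier()
dummy_window = torch.randn(1, 3, 250)  # one synthetic 5-second window
probs = model(dummy_window).softmax(dim=-1)
print(probs)  # two-class probabilities for this window
```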
Depression
Incredible as it seems, AI can now use voice biomarkers to accurately flag symptoms of depression. The story behind this development is a fascinating case of necessity being the mother of invention. Two women, Rima Seiilova-Olson (a research analyst) and Grace Chang (a technologist), both had difficulty accessing health care when they needed it and wanted to “democratise” it. The result was Kintsugi, a startup that uses AI to address two current problems in mental health care in many parts of the world: (1) getting access to a clinician, as clinicians are scarce; and (2) getting an accurate diagnosis.
Kintsugi, Sonde Health, and the Digital Strategy section of Bristol Myers Squibb have all noted the promise of digital therapeutics, which are evidence-based, clinically evaluated software tools that help treat, manage, and prevent a broad range of diseases (Kesari, 2021).
As with AI-detected anxiety, where diagnosis has traditionally relied on screening tools, the AI solution here processes not only the obvious audio characteristics – such as variations in pitch, energy, tonal quality, and rhythm – but also the small changes that occur in a person’s voice every few milliseconds as their body and health conditions change. The resulting “rich” data make it possible to identify which vocal features map to particular disease symptoms or changes in health.
The Sonde Health team has used this approach to train machine learning (ML) models that can provide cues when people start experiencing depressive symptoms. Their model uses six vocal biomarkers that measure aspects such as how well you can hold your vocal pitch or how dynamic your voice is when speaking. Through multiple means, they have acquired over 1,000,000 voice samples from over 80,000 people globally.
From just 20 seconds of audio, Kintsugi’s AI solution detects mental health issues with over 80% accuracy, compared with the 47.3% of cases detected accurately by health professionals (Kesari, 2021).
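As a purely illustrative analogue of such vocal biomarkers, the sketch below uses the open-source librosa library to extract a few simple acoustic features (pitch stability, loudness dynamics, voicing) from a short speech clip. The file name and feature choices are assumptions made for illustration; they are not the features used by Kintsugi or Sonde Health.

```python
# Toy analogue of "vocal biomarkers": a few simple acoustic features extracted
# from a short speech clip with the open-source librosa library. These are
# illustrative stand-ins, not the proprietary features used by Kintsugi or
# Sonde Health.
import numpy as np
import librosa

y, sr = librosa.load("speech_sample.wav", sr=16000)  # hypothetical ~20-second clip

# Fundamental frequency (pitch) track; NaNs mark unvoiced frames.
f0, voiced_flag, _ = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
pitch_spread = np.nanstd(f0)  # lower spread ~ steadier pitch

# Short-term loudness (RMS energy) and how dynamic the voice is.
rms = librosa.feature.rms(y=y)[0]
loudness_dynamics = rms.std() / (rms.mean() + 1e-8)

voiced_ratio = float(np.mean(voiced_flag))  # proportion of frames with voicing

print(f"pitch spread (Hz): {pitch_spread:.1f}")
print(f"loudness dynamics: {loudness_dynamics:.2f}")
print(f"voiced ratio: {voiced_ratio:.2f}")
```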
Virtual reality
One of the most widely discussed and researched technologies in this space is virtual reality (VR), which creates an immersive world in which clients can work through their concerns. Most studies have centred on the use of VR in the treatment of phobias, where exposure therapy can be undertaken in a simulated (and therefore safe) way. It has also been used to treat schizophrenia, social anxiety, eating disorders, and addiction. VR has been shown to elicit physiological and psychological reactions in clients similar to those produced by real-world experiences, but with better capability for controlled exposure (or experimental manipulation in research), enabling greater methodological rigour and more accurate, tailored assessment. The programs often cost thousands of dollars, though (Skowron, 2024).
Therapeutic chatbots
Perhaps the most intriguing development of all is therapeutic chatbots, such as Woebot, a smartphone application that uses machine learning and natural language processing to deliver cognitive behavioural therapy (CBT) to its tens of thousands of daily users. Users can bring relationship problems, stressful situations, and other issues to Woebot and, through the exchange of short text messages with it, learn about CBT concepts such as overgeneralisation and black-and-white thinking.
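To give a flavour of how a text-based helper can surface cognitive distortions, the sketch below uses simple keyword rules to flag wording that often signals overgeneralisation or black-and-white thinking and to offer a CBT-style reply. It is purely illustrative: Woebot itself uses machine learning and natural language processing rather than hand-written rules like these.

```python
# Purely illustrative sketch: a rule-based check for wording that often signals
# overgeneralisation or black-and-white thinking, with a CBT-style reply.
# This is NOT how Woebot is implemented; it only illustrates the idea of a
# chatbot surfacing cognitive distortions from short text messages.
import re

DISTORTION_PATTERNS = {
    "overgeneralisation": r"\b(always|never|every time|everyone|no one)\b",
    "black-and-white thinking": r"\b(total failure|completely ruined|all or nothing|worthless)\b",
}

def reply(message: str) -> str:
    for distortion, pattern in DISTORTION_PATTERNS.items():
        if re.search(pattern, message, flags=re.IGNORECASE):
            return (f"It sounds like that thought might involve {distortion}. "
                    "Can you think of a time when it wasn't completely true?")
    return "Thanks for sharing. What was going through your mind when that happened?"

print(reply("I always mess things up at work."))
print(reply("My presentation went badly today."))
```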
Similarly, ADHD (attention-deficit/hyperactivity disorder) can now be treated with AI-based programs such as the video game EndeavorRx, which has been cleared by the U.S. Food and Drug Administration for use under medical supervision (Abrams, 2021).
CBT in session and wearables outside it
In less spectacular ways, AI is improving clients’ lives when it is used in place of something else. In the U.K., the National Institute for Health and Care Excellence (NICE) recently changed its guidelines to encourage the use of CBT before medication for cases of mild depression: an understandable move, given that antidepressant prescribing rose 23% in 2020-2021 compared with 2015-2016. The session transcript analyses referred to earlier can help alert therapists to more of the places where maladaptive thought patterns appear, so they can help clients replace them with more rational, kinder ways of thinking. Used this way, sessions with higher levels of CBT-focused conversation have been linked to better recovery rates than sessions dominated by more general chat (Bateman, 2021).
Outside the clinic, clients using wearable technologies such as activity trackers (e.g., Fitbit or similar) can give their therapists higher-quality data on, for example, their sleep or exercise than they could provide by remembering or subjectively evaluating how well they slept or how long they exercised (Bateman, 2021).
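As a simple illustration of how such wearable data might be summarised for review between sessions, the sketch below aggregates a hypothetical exported tracker file (with assumed columns date, sleep_hours, and steps) into weekly figures. Real exports vary by device and platform.

```python
# Sketch: summarising exported wearable data for review between sessions.
# The file name and columns (date, sleep_hours, steps) are hypothetical;
# real exports differ by device and platform.
import pandas as pd

df = pd.read_csv("tracker_export.csv", parse_dates=["date"])

weekly = (
    df.set_index("date")
      .resample("W")
      .agg({"sleep_hours": "mean", "steps": "sum"})
      .round(1)
)

print(weekly)  # one row per week: average nightly sleep and total steps
```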
The promise of artificial intelligence in mental health
The goals of using AI are twofold: to provide more accurate data with which clinical researchers and health professionals can identify, evaluate, treat, and even prevent mental health disorders; and to provide therapeutic “conversation” that assists clients directly. AI therapeutic tools offer several clear advantages over those scarce, overworked human clinicians: they are available 24/7; they never get tired, distracted, or impatient; they have seemingly unlimited knowledge of the psychological literature; and they remember every interaction they have ever had with a client (beat that, huh?). They can deliver treatments in real time, customised to meet a client’s needs, and they cost a whole lot less than you (a mental health professional) will have to charge a client in order to continue buying groceries (Abrams, 2021). What’s not to love here?
The peril of artificial intelligence in mental health
The “but” hangs in the air! AI is increasingly amazing, but it will never replace the human touch. And there is more.
Cultural competency and inclusivity
Modern psychotherapy demands cultural competency, but AI may not be as inclusive as a human clinician working with diverse populations would be. One striking example of technology gone wrong occurred in 2019, when researchers discovered that a predictive algorithm used by UnitedHealth Group was biased against Black clients. By using healthcare spending as a proxy for illness, the tool inadvertently perpetuated systemic inequities that have historically kept Black clients from receiving adequate care. We have to face the fact that algorithms are created by people who have their own values, assumptions, and explicit and implicit biases about the world. Those biases are bound to influence how the AI models function (Abrams, 2021).
Informed consent, privacy, and clinical validity (i.e., safety)
The sheer mass of data AI can generate is stunning in its richness, but there are concerns around informed consent and the privacy of clients’ data (Vigliotti, 2023; Skowron, 2024). Beyond that lies a further question: is the response that – for example – Woebot makes to your client a clinically valid one? What if the bot responds in a way that exacerbates, rather than alleviates, the distress of a higher-risk user who struggles with suicidal ideation or self-harming behaviours? The thought is concerning.
In this vein, finally, there is the issue that AI is only as good as the data from which it learns. AI solutions can be biased because the data are often generated from people experiencing mental health concerns rather than those who are healthy. Decreasing this risk means balancing data samples with sufficient healthy individuals.
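As a small illustration of what such balancing can look like in practice, the sketch below downsamples the over-represented group so that clinical and healthy samples appear in equal numbers. The DataFrame and its label column are hypothetical placeholders; real projects may instead collect, or upsample, additional data from healthy individuals.

```python
# Sketch: one simple way to rebalance a training set that over-represents
# people already experiencing mental health concerns. The DataFrame and its
# `label` column (1 = clinical sample, 0 = healthy control) are hypothetical.
import pandas as pd
from sklearn.utils import resample

def balance_by_downsampling(data: pd.DataFrame, label_col: str = "label",
                            seed: int = 42) -> pd.DataFrame:
    minority_size = data[label_col].value_counts().min()
    parts = [
        resample(group, replace=False, n_samples=minority_size, random_state=seed)
        for _, group in data.groupby(label_col)
    ]
    # Recombine and shuffle so both groups are equally represented.
    return pd.concat(parts).sample(frac=1, random_state=seed)
```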
Artificial intelligence is adding immeasurable value to the mental health field: highlighting aspects of clients’ and clinicians’ language and other themes, analysing assessments, and detecting mental health problems such as anxiety and depression. As the demand for mental health practitioners increasingly outstrips supply, it can help close the gap, making mental health support more accessible and affordable, and chatbots like Woebot can be integrated into real-life treatment. In short, AI is increasingly pervasive in our lives. It has the potential to revolutionise the mental health field, and it is an ever more valuable adjunct to human-guided therapy, but we don’t believe it will ever fully replace the empathic therapist who offers unconditional positive regard and is genuinely, humanly, present with the client.
Key takeaways
- AI is increasingly pervasive in the mental health field, already being used for quality control and tracking of therapeutic conversations, earlier detection of mental health problems, therapeutic chatbots, and virtual reality experiences in various scenarios.
- AI tools have numerous advantages over human clinicians, including greater accessibility, lower cost, encyclopaedic knowledge, and the ability to “remember” every interaction.
- We can embrace AI tools for the value they offer but must be mindful of potential perils in using them, such as a lack of cultural inclusivity, possible issues with informed consent or privacy, and potential lack of clinical validity.
References
- Abrams, Z. (2021). The promise and challenges of AI. American Psychological Association. Retrieved 27 February 2024 from https://www.apa.org/monitor/2021/11/cover-artificial-intelligence
- Bateman, K. (2021). 4 ways artificial intelligence is improving mental health therapy. World Economic Forum. Retrieved 27 February 2024 from https://www.weforum.org/agenda/2021/12/ai-mental-health-cbt-therapy/
- Jagoo, K. (2022). Artificial intelligence could be the future of mental illness detection. Verywell Mind. Retrieved 27 February 2024 from https://www.verywellmind.com/artificial-intelligence-could-be-the-future-of-mental-illness-detection-5213212
- Kesari, G. (2021). AI can now predict depression from your voice, and it’s twice as accurate as human practitioners. Forbes. Retrieved 27 February 2024 from https://www.forbes.com/sites/ganeskesari/2021/05/24/ai-can-now-detect-depression-from-just-your-voice/?sh=2bf607224c8d
- Skowron, C. (2024). Three ways we’re already using AI in mental health care. Psychology Today. Retrieved 27 February 2024 from https://www.psychologytoday.com/us/blog/a-different-kind-of-therapy/202402/3-ways-were-already-using-ai-in-mental-health-care
- Vigliotti, A. (2023). AI in the mental health field. Psychology Today. Retrieved 27 February 2024 from https://www.psychologytoday.com/us/blog/the-now/202310/ai-in-the-mental-health-field