Chatbot therapy is risky. It’s also not useless.
Getting AI to improve mental health outcomes is not as simple as firing up ChatGPT.
I didn’t find a therapist when I first felt I might need one, nor when I finally found the energy to start Googling therapists with offices near me. I didn’t find one months later when, after glancing at the results of my depression screening, my physician pushed back her next appointment, pulled up a list of therapists, and helped me send emails to each of them asking if they were taking on new patients. My search finally ended a year later, when a friend who was moving away gave me the name of the therapist who had been treating her.
I was fortunate: My full-time job included health insurance. I lived in an area with many mental health professionals, and I had the means to consider therapists who were out of network. Many people trying to get mental health care do so without any of the institutional, social, or financial resources I had.
This lack of access, fueled by a nationwide mental health crisis and a shortage of therapists in the US — not to mention a health care system that can, for many, make it extremely difficult to find an in-network provider — is a problem that urgently needs solutions. As with any such problem, there are people out there who say the solution is technology.
Enter AI. As generative AI chatbots have rolled out to a wider range of users, some people have started using readily available, multipurpose tools like ChatGPT as therapists. Vice spoke to some of these users earlier this year, noting that anecdotal reports of people praising their experiences with chatbots had spread across social media. One Redditor even wrote a guide to “jailbreaking” ChatGPT in order to get around the chatbot’s guardrails against providing mental health advice.
But ChatGPT is not built to be anyone’s therapist. It’s not bound by the privacy or accountability requirements that guide the practice and ethics of human therapists. While there are consequences when a chatbot, say, fabricates a source for a research paper, those consequences are not nearly as serious as the potential harm caused by a chatbot providing dangerous or inaccurate medical advice to someone with a serious mental health condition.
This doesn’t necessarily mean that AI is useless as a mental health resource. Betsy Stade, a psychologist and postdoctoral researcher at the Stanford Institute for Human-Centered AI, says that any analysis of AI and therapy should be framed around the same metric used in psychology to evaluate a treatment: Does it improve patient outcomes? Stade, who is the lead author of a working paper on the responsible incorporation of generative AI into mental health care, is optimistic AI can help patients and therapists receive and provide better care with better outcomes. But it’s not as simple as firing up ChatGPT.
If you have questions about where AI therapy stands now — or what it even is — we’ve got a few answers.
What is an AI therapist?
The term “AI therapist” has been used to refer to a couple different things. First, there are dedicated applications that are designed specifically to assist in mental health care, some of which are available to the public and some not. And then there are AI chatbots pitching themselves as something akin to therapy. These apps existed long before tools like ChatGPT. Woebot, for example, is a service launched in 2017 designed to provide assistance based on cognitive behavioral therapy; it gained popularity during the pandemic as a mental health aid that was easier and cheaper to access than therapy.
More recently, there has been a proliferation of free or cheaper-than-therapy chatbots that can provide uncannily conversational interactions, thanks to large language models like the one that underpins ChatGPT. Some have turned to this new generation of AI-powered tools for mental health support, a task they were not designed to perform. Others have done it unwittingly. Last January, the co-founder of the mental health platform Koko announced that it had provided AI-created responses to thousands of users who thought they were speaking to a real human being.
It’s worth noting that the conversation around chatbots and therapy is happening alongside research into roles that AI might play in mental health care outside of mimicking a therapy session. For instance, AI tools could help human therapists do things like organize their notes and ensure that standards for proven treatments are upheld, something that has a track record of improving patient outcomes.
Why do people like chatbots for therapy, even if they weren’t designed for it?
There are a few hypotheses about why so many people seeking therapy respond to AI-powered chatbots. Maybe they find emotional or social support from these bots. But the level of support probably differs from person to person, and it is certainly influenced by their mental health needs and their expectations of what therapy is, as well as by what an app might be able to provide for them.
Therapy means a lot of different things to different people, and people come to therapists for a lot of different reasons, says Lara Honos-Webb, a clinical psychologist who specializes in ADHD and the co-founder of a startup aimed at helping those managing the condition. Those who have found ChatGPT useful, she said, might be approaching these tools at the level of “problem, solution.” Tools like this might seem like they’re pretty good at reframing thoughts or providing “behavioral activation,” such as a list of healthy activities to try. Stade added that, from a research perspective, experts don’t really know what it is that people feel is working for them in this case.
“Beyond super subjective, qualitative reports of what a few people are doing, and then some people posting on Reddit about their experiences, we actually don’t have a good accounting of what’s happening out there,” she said.
So what are the risks of chatbot therapy?
There are some obvious concerns here: Privacy is a big one. That includes the handling of the training data used to make generative AI tools better at mimicking therapy as well as the privacy of the users who end up disclosing sensitive medical information to a chatbot while seeking help. There are also the biases built into many of these systems as they stand today, which often reflect and reinforce the larger systemic inequalities that already exist in society.
But the biggest risk of chatbot therapy — whether it’s poorly conceived or provided by software that was not designed for mental health — is that it could hurt people by not providing good support and care. Therapy is more than a chat transcript and a set of suggestions. Honos-Webb, who uses generative AI tools like ChatGPT to organize her thoughts while writing articles on ADHD but not for her practice as a therapist, noted that therapists pick up on a lot of cues and nuances that AI is not prepared to catch.
Stade, in her working paper, notes that while large language models have a “promising” capacity to conduct some of the skills needed for psychotherapy, there’s a difference between “simulating therapy skills” and “implementing them effectively.” She noted specific concerns around how these systems might handle complex cases, including those involving suicidal thoughts, substance abuse, or specific life events.
Honos-Webb gave the example of an older woman who recently developed an eating disorder. One level of treatment might focus specifically on that behavior: If someone isn’t eating, what might help them eat? But a good therapist will pick up on more than that. Over time, that therapist and patient might connect the disorder to recent life events: Maybe the patient’s husband recently retired, and she’s angry because suddenly he’s home all the time, taking up her space.
“So much of therapy is being responsive to emerging context, what you’re seeing, what you’re noticing,” Honos-Webb explained. And the effectiveness of that work is directly tied to the developing relationship between therapist and patient.
But can AI help solve the crisis of access to mental health care?
Implemented ethically, AI could become a valuable tool for helping people improve their results when seeking mental health care. But Stade noted that the reasons behind this crisis are wider-reaching than the realm of technology and would require a solution that is not simply a new app.
When I asked Stade about AI’s role in solving the access crisis in US mental health care, she said: “I believe we need universal health care. There’s so much outside the AI space that needs to happen.”
“That said,” she added, “I do think that these tools have some exciting opportunities to expand and fill gaps.”