Remain Cautious When Using Chatbots to Provide Mental Healthcare
Consider these common, innocuous questions: Are you taking your medications as directed? You look a little upset — is everything OK? Do you need some urgent help? Now, consider what changes if an AI tool, not a human therapist, asks a patient these questions, raising issues of trust, privacy, and bias. “Since chatbots mimic conversations that real humans have, it is possible to connect and establish a bond with them,” says Eduardo Bunge, PhD, a psychology professor at Palo Alto University in California.
This makes chatbots powerful tools. “As with any technology, developers and consumers need to be careful about how they use them,” Bunge cautions. In Bunge’s experience, some people actually prefer chatbots over human therapists. “They feel like chatbots do not judge or criticize them,” Bunge reports. “The good news is that chatbots are an added resource to the plethora of mental health resources.”
Not everyone likes chatbots, but not everyone likes traditional therapy, either. “Having more resources is good,” Bunge adds.
Aniket Bera, PhD, and colleagues are working to develop “emotional” AI that can analyze speech patterns, gestures, facial expressions, and eye movements. “We are trying to build AI that talks like a human,” reports Bera, an associate professor in the department of computer science at Purdue University.
The goal is not to replace human therapists; rather, the intent is for the AI tools to meet unmet demand for mental healthcare. “When demand surged for therapy during the pandemic, unfortunately the availability of therapists was pretty low,” Bera observes. “There was a big gap between the supply and demand — and that gap has gotten even worse.”
Therapy chatbots could assess a person’s mental health and direct that person to resources or offer tactics to help. “There is no need for an appointment. You just turn on the phone, click on the app, and start chatting,” Bera says.
Data can be sent to a therapist, who decides whether the patient needs immediate attention or can wait until the next session. Human therapists can use the data the AI collects to inform in-person sessions. “Patients feel more connected with their doctor, even though they are meeting the doctor at the same frequency as before,” Bera reports.
Currently, most AI chatbots are based on text conversations. “We don’t think that’s a very engaging mechanism to chat with people,” Bera argues.
Bera and colleagues are working on video AI tools to make it more likely people will chat for longer periods. “Hopefully, it will start feeling like a real person,” Bera says.
Today, many people do not trust AI therapy tools, in part because the tools do not seem humanlike. To improve, though, the therapy chatbots need plenty of data. Tools based on a small dataset, or a dataset from a limited group of doctors, patients, cultures, races, or ethnicities, “are prone to get more and more biased,” Bera says. AI tools need reliable, complete data to improve.
“It’s a learning algorithm, so every time you have a conversation it learns more about you than before,” Bera explains.
If people do not trust the tools enough to be honest and give a full picture of their mental health, the tools cannot improve. “Essentially, it relies on actual content from people. The less people trust AI, the less data it will get, and the worse it will become,” Bera cautions.
Bera and colleagues are working to create AI tools that are so humanlike that people will feel comfortable using them. “This is a way to build trust in the general public, and at the same time for the AI to make itself better,” Bera suggests.
Through collaborations with the University of Maryland Medical School and community behavioral health centers, Bera’s team has built prototypes of an AI-driven mental health therapy conversational chatbot. People in conventional therapy rarely receive daily consultations, but a phone-based digital therapist can provide support in urgent moments. “As our goal is an enhancement — not a replacement — of overworked clinicians and staff in the mental health setting, we do not anticipate significant resistance to adoption,” Bera predicts.
On the other hand, chatbots do not know right from wrong, nor do they understand emotion or context or relate to human feelings and experiences. “Chatbots are like clever random number generators, where instead of outputting numbers they output sentences chosen to fit in well with the sentences that were just given to them,” explains Rosalind Picard, ScD, a professor in the MIT Media Lab and director of Affective Computing Research.
Sometimes, the chatbot responds with clever sentences that sound like a person. In fact, those responses are learned by rearranging and tweaking sentences it received from humans. “Its most clever-sounding responses, which originated from people, can elicit trust,” Picard says.
The chatbot also can respond with total lies and harmful remarks. From an ethical perspective, Picard says there is room for an “AI + human” system, in which a human ensures the AI is responding appropriately, to help more people than human therapists alone can reach. “While AI-human relationships may help somebody practice techniques that could be used later in a real human-human relationship, these carry a danger of eliciting large opportunity costs,” Picard says.
Time spent alone with a chatbot is time that is not spent connecting with real people. “In my view, the best results are when the AI is crafted in service of enhancing human well-being, not replacing humans,” Picard says.
At this point, there is little to no high-quality data on whether therapy chatbots can be efficacious for patients, according to John Torous, MD, director of the digital psychiatry division in the department of psychiatry at Beth Israel Deaconess Medical Center in Boston. None are FDA-approved. “Thus, any claims should raise questions about fair marketing, and could be subject to penalties from the FTC,” Torous notes.
Additionally, issues of equity are concerning, such as whom health plans may ask to use therapy chatbots, and whether other types of care (e.g., in-person visits with human therapists) will no longer be offered because chatbots are cheaper. The question is whether the chatbot really is offering access to care. “Those with fewer resources may feel they are only getting second-class help, or help that is not useful or effective,” Torous says.
The many limitations and risks of therapy chatbots should be made clear to all users. “We still lack data on harms or who these can help,” says Torous.
The use of AI/chatbots in mental healthcare opens a “Pandora’s box of ethical issues,” asserts Aimee Milliken, PhD, RN, HEC-C, associate professor of the practice at the Boston College Connell School of Nursing. A primary ethical obligation of all healthcare practitioners, particularly those in the mental health space, is the development of a trusting relationship with patients. “By its very nature, a machine cannot cultivate this type of relationship,” Milliken argues.
It is unclear if the benefits of mental health chatbots outweigh the risks. People might not even realize they are interacting with a bot. “This is potentially damaging for patients who may already be in a vulnerable state due to their mental health needs,” Milliken says. Ethical concerns are pronounced for people with no other way to access mental healthcare. “It might be perceived as sending the message, ‘We can’t afford to give you a human to talk to, so here’s a machine instead,’” Milliken cautions. “I’d argue there are insidious and discriminatory undertones to such a message.”