
Recently, while chatting with a voice-based assistant, I was informed that it now supported “empathy”. Looking to test the feature, I told the assistant that I was feeling sad.
“I am sorry you feel that way,” it chirped, in a clipped, official tone. It then recommended I do something to make me happy, like talk to someone. Duh! If I’d really been sad, this cold and banal interaction would have only made things worse. Yet, more and more people are sharing their feelings with chatbots of various hues.
There are some advantages to this: chatbots are always available and will hear us out patiently. We also know that they won’t judge us (quite unlike people).
A nice little market has emerged for dispensing therapy using chatbots. But does it really work well enough to address complex issues like mental health?
“Hi Tyagarajan, can we do a check-in now?”
That was my therapist, reaching out to me through Facebook Messenger. I click “Yes”.
“So tell me, what’s your energy like?”
I am given three options: High, medium and low.
This was Day 3 of the 14-day free trial (after which I’d have to pay $39 a month for the service) with Woebot, a therapy chatbot created by a team of Stanford psychologists and AI experts.
So far, our back-and-forth has been frustratingly limited. But I must confess: I’m a sceptic who’s trying to break it. The bot promises to get better as it learns more about me, but I suspect that at best, it will be an automated thinking coach, prompting me with questions like this:
OK which of the following is an example of fortunetelling?
Woebot’s questions are based on a popular talk-therapy method called cognitive behavioural therapy (CBT). CBT is widely used to treat a variety of issues ranging from pain to depression. It works by reframing gloomy thoughts about the self into a more factual context by peeling away negative assumptions.
Every hour, one student commits suicide in India. Just last week, an 18-year-old jumped from a 20-storey building. In 2016, more than 100 army men committed suicide. What used to be the occasional story of an IT engineer or a business owner committing suicide is becoming more common, especially as the market gets tough. And that’s just the suicides. A 2015 WHO report stated that nearly 5% of Indians suffer from depression and another 3% from anxiety disorders.
Yet, seeking support in India faces multiple barriers. There’s the obvious social taboo that favours avoidance over acceptance and treatment. There aren’t enough psychotherapists around, and finding them isn’t easy, especially if you aren’t near a big urban centre. Therapy may also be seen as a costly indulgence in a value-conscious environment where people seek tangible returns; spending hundreds of rupees per hour may be beyond what many can afford.
A free, easily available and private therapy chatbot like Wysa is, therefore, a massively disruptive solution, one that could dramatically widen access to mental health treatment.
The problem with chatbots is that they don’t chat very well. Within a very narrow scope of algorithmised responses (where the bot suggests what you can say next), they barely manage to wear the human mask. But stray just a little and the veneer falls away, revealing the crude, idiot machine underneath.
Deep learning neural nets have carried us rapidly into the realm of conversational artificial intelligence (AI), but they are still super specialists. The quest to build a multipurpose neural network is still in its early days. The vast majority of chatbots don’t even use sophisticated neural nets to learn; instead, they rely on rather simplistic answer trees. The result is a proliferation of crappy chatbots, not unlike the clunky websites of the early internet.
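To make the “answer tree” point concrete, here is a minimal, hypothetical sketch (in Python) of how such a rule-based bot typically works: a fixed script of prompts and canned replies, with anything off-script falling through to a generic fallback. The node names and wording below are invented for illustration and don’t reflect any real product’s logic.

# A minimal sketch of an "answer tree" chatbot: a fixed script, not learning.
# All node names and wording here are invented for illustration only.

ANSWER_TREE = {
    "check_in": {
        "prompt": "What's your energy like? (high / medium / low)",
        "branches": {
            "high": "Great! Want to note down one thing that went well today?",
            "medium": "Okay. Want to try a two-minute breathing exercise?",
            "low": "I'm sorry to hear that. Want to tell me what's weighing on you?",
        },
    },
}

FALLBACK = "Sorry, I didn't quite get that. Please pick one of the options."

def reply(node: str, user_input: str) -> str:
    # Look up a canned reply; anything off-script hits the generic fallback.
    branches = ANSWER_TREE[node]["branches"]
    return branches.get(user_input.strip().lower(), FALLBACK)

print(ANSWER_TREE["check_in"]["prompt"])
print(reply("check_in", "Low"))          # a scripted branch: sounds almost human
print(reply("check_in", "I feel lost"))  # off-script: the veneer falls away

Anything typed outside the scripted branches gets the same canned fallback, which is why such a bot feels human only as long as you stay on its rails.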
The capabilities of today’s bots may be enough to provide limited customer support or some basic automated commerce, but they’re unlikely to be sufficient to address mental health challenges.
This is not to say that the simple one-on-one chatbots in the market today aren’t helping mental health patients. A study by Alison Darcy indicated that students who used such a bot saw a significant reduction in symptoms of depression.
Mental health chatbots don’t take the Hippocratic oath. Nor are they independently regulated right now. Unlike a licensed psychotherapist, algorithms don’t need certifications to prove their legitimacy. A majority of mental health-related apps and bots don’t follow a rigorous scientific method when it comes to designing their interactions.
To be fair, most of these bots call out their role right at the beginning. The founders of these companies are careful to describe them as life coaches or trainers rather than therapists. But is such a casual disclaimer enough to balance against the boatload of narrative about how robot healers are coming to solve your mental troubles?
The danger here is that those seeking help may feel that they are addressing their issues by chatting with a bot. In India, where there’s a taboo around seeking mental health treatment, a readily available, free alternative could end up blocking real treatment.
AI is beginning to play a huge role in mental health. It will help psychotherapists and other stakeholders analyse a lot more information and surface patterns that form the basis for decisions. It can help gather behavioural evidence that is objective and captured at the right time. But chatbots engaging in discussions about our mental state? We are nowhere close to that yet.
Chatbots and voice interfaces will get smarter and more “human” and we could have deeper conversations on philosophy or even the meaning of life. At some point, we may even have fully autonomous therapy bots that can converse as well as a human, read the inflections in our voice and tone, use computer vision to read our facial expressions, and develop a sophisticated profile of our emotional state.
Layer on top of that the kind of rich, complex information obtained by combining real-time and historical data from disparate sources: wearables, our medical history, shopping patterns, lab test results, eating habits and so on. At that point, we’ll have an individual profile that is complex, unique and far richer than anything a human doctor could have determined.
We aren’t there yet, though. And when it comes to mental health, it would be dangerous to let these half-baked bots loose without human supervision.