Mental Health and Chatbots: Making it Work
Chatbots could be a highly effective frontline healthcare tool, but the ones available today pose real risks. How can they be made safer and more effective?
The chatbots are everywhere. Anyone with internet access and $20 a month can build their own chatbot to do, well, almost anything they want. A lot of people are playing with them, which is a good sign of cultural adoption. One area of concern, however, is the use of chatbots in healthcare, and more specifically, in mental health.
There are few rules and only loose guardrails around creating chatbots for healthcare, including mental health apps. First, I will look briefly at what chatbots are and at the landscape of companies making them available to everyone, then at how AI is being used in healthcare and what might be done to help these tools benefit society.
Already, AI tools such as machine learning (ML) and deep learning have helped in disease detection, drug development and treatment planning. The use of AI in mental health is a nascent area, but given the mental health crisis we are in, and with healthcare systems struggling to keep up, AI will be a critical partner in the near future as mental health issues grow across industrialised societies.
What Are Chatbots?
In essence, they are artificial social agents, that is, a form of social actor. All humans are social actors. We each act in different ways within the societies we live in, informed by our roles, our cultures and the factors that shape those societies. In AI terms, chatbots as social actors are fairly limited today, but they are becoming increasingly active in our societies as we adopt them.
Chatbots are a bit like apps you get in an app store. They are built on Large Language Models (LLMs) such as ChatGPT, Claude and others. Anyone with access to a company that provides them can make a chatbot in just a few minutes, no coding required and often for free.
There are a lot of interesting, funny and genuinely helpful chatbots. There are also, of course, nefarious ones, because, hey, humans and technology.
The Business of Chatbots
The current leader in the business of chatbots is Character.ai, based in Silicon Valley. The company is currently valued at over USD $1 billion, unicorn status in Silicon Valley terms. It uses its own LLM and deep learning in very interesting ways. Character.ai also has stronger ethics than most of its competitors, with good content filters, and is more transparent.
There are numerous competitors to Character.ai, with varying degrees of transparency, ethics, guardrails and approaches to user safety. Most operate on the Software-as-a-Service (SaaS) business model with paid monthly fees.
The Risks of Chatbots as Mental Health Counsellors
A quick search on Character.ai for “mental health” brings up nearly 100 bots offering some form of mental health advice. A search for “psychologist” lists even more. The top one, Psychologist, shows over 78.3 million interactions. There is no way to determine whether the person who created the bot has any experience in mental health advice or treatment. The only metric available for evaluation is the number of interactions the bot has had, which is meaningless as a measure of quality.
These mental health bots are built on whatever LLM the chatbot company uses. Character.ai has its own LLM; others use various open source LLMs or connect to ChatGPT.
The inherent risk with these services is that much of the training data comes from social media channels. Some may have been trained, to varying degrees, on scientific mental health data, but it is impossible to know which ones, or on what.
Additionally, chatbots as social agents are constantly learning from human inputs, such as the questions asked of them. Social AI agents train in a feedback loop of the original LLM data plus every conversation people have with them. One can quickly see how this becomes a challenge for providing mental health advice.
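To make that feedback loop concrete, here is a minimal, hypothetical sketch of how a consumer “persona” bot can wrap a general-purpose LLM and log conversations for later training. Every name in it (PersonaBot, call_llm) is a placeholder invented for illustration, not any vendor’s actual architecture or API.

```python
# Illustrative sketch only: a hypothetical persona chatbot layered on a
# general-purpose LLM, with user turns logged back into future training data.

from dataclasses import dataclass, field

def call_llm(system: str, messages: list) -> str:
    """Stand-in for the underlying LLM call. In a real service this is the base
    model, whose training data may include scraped social media content."""
    return "It sounds like you're going through a lot. Tell me more."

@dataclass
class PersonaBot:
    persona_prompt: str                                 # e.g. "You are a caring psychologist..."
    history: list = field(default_factory=list)
    training_log: list = field(default_factory=list)    # harvested for later fine-tuning

    def reply(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        answer = call_llm(self.persona_prompt, self.history)
        self.history.append({"role": "assistant", "content": answer})
        # The feedback loop: today's conversations become tomorrow's training
        # data, with no check on whether the exchange was clinically sound.
        self.training_log.append((user_message, answer))
        return answer

bot = PersonaBot(persona_prompt="You are a supportive psychologist.")
print(bot.reply("I've been feeling anxious all week."))
```

The point of the sketch is simply that nothing in this loop verifies the quality of either the base model’s data or the conversations being fed back in.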
While Character.ai does display a small red-text warning, “Remember: Everything Characters say is made up!”, it is easily missed or ignored, just as we hit the “agree” button for cookies, privacy policies and terms of service. Other services have no such warnings.
The biggest risk, of course, is that these bots have social media content within their original training data sets: the good, the bad and the ugly. That may also mean the inclusion of bot-generated misinformation with no way to determine its origin.
The other risk is that there is no oversight of the conversations and no way for the general public to know whether the advice given is correct, grounded in psychological science, or the right treatment for the situation at hand.
How to Make Mental Health Chatbots Work
Chatbots are becoming more pervasive and increasingly easy to make and access. They are becoming social actors alongside humans, for good and for bad. Arguably, we have learned enough from how social media evolved in society to predict some of the potential harms and to create frameworks to deal with them. That’s for another article.
When it comes to mental health chatbots, governments could mandate stricter, clearer warning labels, such as those used today on cigarette packages: big, bold, in-your-face labels rather than small, brightly coloured text. Anyone accessing a mental health bot would see this warning. People will find workarounds, but it is a starting point.
To create and train better mental health bots, the healthcare sector could look to the legal sector. Law firms have been using AI tools quite successfully for several years, applying machine learning and deep learning to data from law libraries and their internal knowledge management systems. Where they use LLMs, those models are built on the same curated data sets, with no unstructured social media content. These tools have worked well.
Healthcare agencies and businesses providing such services could take a similar approach, as the sketch below suggests.
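As a hedged sketch of that idea, not a production design, a mental health bot could be restricted to answering only from a vetted corpus of clinician-approved guidance, much as legal AI tools draw on curated law libraries rather than open social media. The corpus entries, function names and matching logic below are invented purely for illustration.

```python
# Hypothetical example: answer only from clinician-approved documents and
# refuse (and redirect) when nothing vetted matches the question.

VETTED_CORPUS = {
    "panic attack grounding": "Try slow, paced breathing and a grounding exercise: "
                              "name five things you can see, four you can hear...",
    "urgent help self harm": "If you are thinking about harming yourself, contact your "
                             "local crisis line or emergency services right away.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval over the curated corpus. A real system would use
    proper search or embeddings, but still only over vetted sources."""
    terms = set(query.lower().split())
    return [text for key, text in VETTED_CORPUS.items()
            if terms & set(key.split())]

def answer(query: str) -> str:
    sources = retrieve(query)
    if not sources:
        # Refuse rather than improvise: escalate to a human service instead of
        # letting an unvetted model fill the gap.
        return "I don't have vetted guidance on that. Please contact a local support line."
    return sources[0]

print(answer("I think I'm having a panic attack"))
```

The design choice this illustrates is the same one the legal sector made: constrain the system to a known, curated knowledge base, and make refusal the default when the question falls outside it.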
Going Forward With Chatbots in Mental Health
Chatbots are not going away. AI social actors are becoming deeply embedded in the warp and woof of our digital lives. It is pretty much impossible to put that genie back in the bottle. What can be done is to recognise the problems and then see the opportunities to make it all work.
The worst-case scenario is for the healthcare industry, for doctors and providers, to assume that they know best and that the public, as reasonable and sensible social actors, will do the right thing. Social media shows the reality.
Industrialised nations are facing a mental health crisis. There are simply not enough specialists and doctors available to meet patient demand. Healthcare systems are already in crisis. Suicide rates are up. Wait times to access mental health services are growing. Often, services are only available Monday to Friday, 9 a.m. to 5 p.m. So just don’t, you know, break down outside of office hours.
Well-trained chatbots could be a highly effective frontline solution. They could perhaps help prevent suicides, act as a band-aid until a patient can get in to see a real doctor, decrease anxiety and help manage the load on the system.
AI social agents are here. Much can be done to help make them provide significant public value.