Disruptive Potential of AI Chatbots
Accessibility and Affordability
One way AI chatbots are disruptive is by breaking down barriers to accessibility.
Devillers (2021) highlights a significant development: the creation of
"companion robots" (or chatbots) that assist patients with disabilities by providing
therapy or tracking their health over time. These robots are especially helpful for
individuals with conditions such as Alzheimer's disease or other brain disorders, as they
can offer care around the clock with patience and without tiring. Additionally, traditional
therapy and medical consultations are often expensive, time-consuming, or hard to access
due to location. AI chatbots, by contrast, provide quick, 24/7 assistance at a much lower
cost, making it easier for more people to receive mental health and medical advice without
the usual barriers of time or cost.
Mental healthcare is often expensive or inaccessible, particularly for minority groups.
Habicht et al. (2024) conducted a study to examine the potential of digital tools, specifically
a "personalized artificial intelligence-enabled self-referral chatbot", in bridging the
accessibility gap of mental healthcare. The study found a significant increase (15%) in referrals,
particularly among minority groups such as nonbinary (179%) and ethnic minority (29%) individuals.
Based on qualitative feedback from 42,332 individuals, the study reported that patients'
recognition of needing help and the chatbot's human-free nature were the main drivers
encouraging more diverse groups to seek care (Habicht et al., 2024).
Emotional and Behavioral Impact
In "An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Descriptions
and User Reviews," Haque et al. (2023) found that people liked chatting with bots because
the interactions felt personal and friendly. Because bots are always available, some people
might start relying on them more than on friends and family. The bots also created a safe
space where people felt comfortable sharing personal or sensitive information without being
judged. Since mental health is often stigmatized and people are hesitant to seek help, this
safe, anonymous, judgment-free space makes it easier for users to share their feelings.
According to a study published in JMIR Mental Health (Forbat et al., 2017), a
conversational agent named Woebot was designed to deliver basic cognitive behavioral
therapy (CBT) principles
through brief, daily interactions within a messaging app. Each interaction began
by assessing mood and context, then presented core CBT concepts through videos or
interactive exercises. Woebot employed a decision-tree framework to guide
conversations, with predetermined responses and limited natural language
processing for user input.
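To make the described decision-tree framework concrete, the sketch below shows one way such a
flow could be structured: each node has a fixed prompt and predetermined branches, with simple
keyword matching standing in for the "limited natural language processing" the study mentions.
This is not Woebot's actual implementation; the node names, prompts, and keywords are invented
for illustration.

```python
# Illustrative sketch of a decision-tree conversation flow of the kind the study
# describes: fixed prompts, predetermined branches, and simple keyword matching
# in place of full natural language processing. All node names and wording here
# are assumptions, not Woebot's actual content.

from dataclasses import dataclass, field


@dataclass
class Node:
    prompt: str                                    # what the bot says at this step
    branches: dict = field(default_factory=dict)   # user keyword -> next node id


TREE = {
    "mood_check": Node(
        "How are you feeling right now?",
        {"sad": "reframe_intro", "anxious": "reframe_intro", "fine": "wrap_up"},
    ),
    "reframe_intro": Node(
        "Thanks for sharing. Would you like to try a short thought-reframing exercise? (yes/no)",
        {"yes": "exercise", "no": "wrap_up"},
    ),
    "exercise": Node("Write down the thought that is bothering you...", {}),
    "wrap_up": Node("Okay! I'll check in with you again tomorrow.", {}),
}


def next_node(current: str, user_text: str) -> str:
    """Pick the next node by scanning the user's text for predetermined keywords."""
    for keyword, target in TREE[current].branches.items():
        if keyword in user_text.lower():
            return target
    return "wrap_up"  # fall back when no keyword matches


if __name__ == "__main__":
    node = "mood_check"
    print(TREE[node].prompt)
    node = next_node(node, "I feel a bit sad today")
    print(TREE[node].prompt)
```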
For those in need of an active listener for emotional conversations, AI chatbots are an option
to consider. SimSimi is a widely used open-domain social chatbot. It differs from other
notable chatbots (Woebot, Youper, etc.) in that it is not specifically designed to support
mental health conditions but rather to discuss anything flexibly. Chin et al. (2023)
analyzed more than 150,000 conversation utterances in SimSimi containing specified keywords
such as "depressed" and "sad". They aimed to identify how the AI chatbot facilitates users'
expression of sadness and depressive moods, and how the expression of such emotions differs
culturally between Western countries (Canada, the United Kingdom, and the United States) and
Eastern countries (Indonesia, India, Malaysia, the Philippines, and Thailand).
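As a rough illustration of this kind of keyword-based analysis (not the authors' actual
pipeline), the sketch below filters utterances containing mood-related keywords and compares
their prevalence across regions; the keywords, country codes, data format, and function name
are assumptions made for the example.

```python
# Illustrative sketch of keyword-based filtering of chatbot utterances and a
# regional prevalence comparison. Keywords, region lists, and data format are
# assumptions for illustration only.

from collections import Counter

KEYWORDS = {"depressed", "sad"}          # example mood keywords from the study
WESTERN = {"CA", "GB", "US"}
EASTERN = {"ID", "IN", "MY", "PH", "TH"}


def keyword_prevalence(utterances):
    """utterances: iterable of (country_code, text) pairs.
    Returns the share of utterances containing a keyword, per region."""
    totals, hits = Counter(), Counter()
    for country, text in utterances:
        region = ("Western" if country in WESTERN
                  else "Eastern" if country in EASTERN
                  else "Other")
        totals[region] += 1
        if any(keyword in text.lower() for keyword in KEYWORDS):
            hits[region] += 1
    return {region: hits[region] / totals[region] for region in totals}


sample = [("US", "I feel so depressed today"), ("PH", "what should I eat?"),
          ("GB", "just checking in"), ("TH", "I'm sad about school")]
print(keyword_prevalence(sample))   # {'Western': 0.5, 'Eastern': 0.5}
```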
Among their findings, in 18.24% of the analyzed conversations users regarded the chatbot
as a social actor, empathizing with or comforting it.
Because users felt emotional support from the chatbot and identified it as a virtual
nonhuman partner, they were able to engage openly in emotional discourse and express
sadness. This is further supported by a comparison of their findings with a previous
study from 2017 that analyzed public discussion of depression on Twitter (now known as "X"):
the researchers found a higher prevalence of conversations about sadness with the chatbot
(49.83%) than on Twitter (7.53%) (Chin et al., 2023).
Students commonly face various stressors that can affect their mental health, which may
in turn affect their academic performance. Abot is an AI chatbot designed to encourage
healthy lifestyle habits and support student well-being. A study by Sia et al. (2021)
had 25 senior high school students use Abot for a week to investigate its acceptability
and perceived effectiveness on their well-being. On a 4-point Likert scale, the chatbot
received average scores of 3.35 for performance, 3.36 for humanity, and 3.68 for effect,
supporting the potential of AI chatbots as tools to assist students in managing their
mental health.
Risks
It is important to note that although AI chatbots may help monitor and guide one's
self-management of mental health, they are unable to replace human mental health
professionals and their services. Confusing these roles is, according to Khawaja and
Bélisle-Pipon (2023), a "kind of therapeutic misconception" in which being unaware of or
confusing the role of these chatbots may lead an individual to underestimate the restrictions
and limitations of AI chatbots while overestimating their ability to provide therapeutic
support and guidance. These misconceptions may be formed due to "inaccurate marketing of such
chatbots, forming a digital therapeutic alliance with these chatbots, inadequate design of
the chatbots leading to biases, and potentially limiting one's autonomy"
(Khawaja & Bélisle-Pipon, 2023).
Communication with an AI chatbot mainly takes the form of an exchange of text, a
conversation. However, AI chatbots are nonhuman and are therefore unable to develop
the same meaningful therapeutic alliance that human mental health professionals can
build with their patients. It may nonetheless feel as if there is a connection of
sorts between user and chatbot; in one study, users described their experience with
Woebot as the chatbot showing concern for them (Khawaja & Bélisle-Pipon, 2023;
Darcy et al., 2021).
This impression is misleading, however, as users may believe the bond is real.
It risks therapeutic misconception: users may entrust the chatbot with sensitive
information and rely on it for therapeutic services without human intervention
(Khawaja & Bélisle-Pipon, 2023). In the long term, this can lead to ineffective
or inaccurate advice or, in bleaker cases, harm to a user's mental health.
Though an application or the chatbot itself may remind or encourage users to
seek formal therapy, this effort may not be sufficient to avoid therapeutic
misconception.