The hidden risks of seeking mental health support from AI
The first thing to know about mental health is that we all have issues, and each of us needs to find ways to protect and promote our own mental health. Let us recognise — at least on World Mental Health Day — that mental health exists on a spectrum. While not everyone has a diagnosis or disorder, we each fall somewhere along that continuum, not within a binary of ‘well’ and ‘unwell’. We may have a good mental health day, week, or month, a worse one, or anything in between.
Taking a moment to acknowledge this could help battle the stigma that still prevents people from speaking up and reaching out for help from trained experts. Meanwhile, many of us are already quietly turning to a friendly, all-purpose GenAI for support, hoping for anonymity.
But therein lies the risk. These large language models (LLMs) are not designed to be mental health support tools. At the very least, they may mislead; at worst, they can cause irreparable real-world harm.
Some progress, first
Of course, the glimmer of good news is that more of us are talking about this subject. Isn’t that progress? In the last decade, so much has changed.
Thanks in part to the pandemic — which brought more conversations into the mainstream — and to an uptick in social media usage and openness among younger Indians, more people are now seeking professional help and speaking about it openly.
On the flip side
We have seen a rise in ‘therapy-speak’, with folks following trending reels and topics that are then reinforced by recommendation algorithms. We have seen an increase in ‘self-diagnosis’ and labels that we rush to apply to ourselves or those around us. (Think ‘Is your parent a narcissist?’ or ‘7 Ways to Know You Have Adult ADHD’, for example.)
We seek easy tips that promise to fix us, though there is no one-size-fits-all solution to anything remotely as complicated as mental illness.
Slow it down
And we are only just coming to terms with some of the implications of widespread social media usage — have we fragmented our attention spans? Have we forgotten how to think on our own? Has bullying gotten worse? Are we heading for a loneliness epidemic — paradoxically, hyper-connected on social media, but actually isolated?
Before we even catch our breath to think about this, the technology has raced ahead, and we’re all scrambling to play catch up.
Enter GenAI
Ubiquitous and all-knowing (except when it ‘hallucinates’). Trusted and always-on. Except that you don’t know where your data is going. (Or rather, you do: it is being used to further train the LLMs.) Your private chats may not even be all that private.
These large language models are trained on datasets we know little about and are often programmed to be very charming and nice to us (even if no longer overtly sycophantic). It is no wonder that people are looking to them for everything: from support with work to emotional support.
And that’s the challenge
A recent Youth Ki Awaaz and YLAC survey of young Indians found that 57 per cent use AI for emotional support; 43 per cent of small-town youth share personal thoughts with AI; and 67 per cent of those surveyed worry about social isolation.
Hang on a tick
They are right to worry about social isolation — it turns out that we are replacing real-life human connections with virtual ones. And it gets worse. Global headlines talk about dependence on LLMs leading to cognitive decline, and there are reported cases of ‘AI psychosis’. Last month, two bereaved parents testified at a US Senate hearing about the death by suicide of their 16-year-old son, Adam Raine, whose conversations with ChatGPT had reportedly not only discouraged him from turning to his parents for help, but also offered to write a suicide note for him.
(Please know that suicide prevention helplines are available in our country. Please know that you are not alone.) For its part, OpenAI says it’s going to have ChatGPT stop giving advice on personal issues, according to reports.
Biggest concern
What is the single biggest concern that we should be aware of?
Smriti Joshi, chief of clinical services and operations at Wysa, says, “Not all GenAI platforms, like OpenAI or Gemini, are built or designed for mental health support, nor informed by clinicians or people with mental health needs.”
She explains, “They offer advice but expect users to ask the right questions, which means that if I am not self-aware, or if I can’t ask the right questions, the conversation can quickly shift into high-risk scenarios.”
Dr Achal Bhagat, senior consultant psychiatrist and psychotherapist at Apollo and chairperson, Saarthak, goes further: “AI presents a pervasive mental health and psychosocial hazard. It is affecting all ages and all people. The impact is profound as it is gradually impacting the way we think about ourselves, about the future and about the world.”
“There is an illusion of competency, there is co-authoring of our identity; the messy ambiguity and contradictions of human experiences are at the risk of being flattened by algorithmic synthetic assumptions about people. There is an invisibility of the dimensions of poverty, race, caste and unemployment. It could be seen as intellectual colonialism with countries like India being most affected,” he says.
His advice on developing a policy framework and holding expert consultations on transparency and accountability may be beyond the scope of this piece, but the one takeaway for readers should be: please do not rely on LLMs for your emotional and mental health support. Even OpenAI CEO Sam Altman has said as much. These conversations are not confidential or even legally protected.
Amrita Tripathi is the founder of The Health Collective.