My therapist retired earlier this year, so I have been navigating life unsupervised, with only my own mindfulness skills as guardrails to keep me on the straight and narrow. For six months, I fared rather well, managing international house moves, academic deadlines and stressful work projects with nothing but my deep breathing exercises and a trusty copy of Russ Harris's The Happiness Trap, the bible of Acceptance and Commitment Therapy, to help me. But over the last eight weeks, I felt my life slide slowly out of my control.
In response, I implemented my usual sanity-first plan: furiously humming my thoughts to music, grounding myself in things I could see and hear, and upping my daily exercise. But I still felt as though my demons had kicked me out of the driving seat and were steering me off course. My attempts to wrest back control from the various voices telling me that I was "pathetic" and "doomed" were largely unsuccessful.
Things accelerated to their natural conclusion with an almighty panic attack that had me clinging to the earth for support. I reached for my phone and did what millions of people now do every day: I logged into ChatGPT and asked for advice.
I have historically been extremely sceptical of AI therapy and rather hubristically considered myself a discerning mental health patient, one who wouldn't be seduced by the soothing words of a large language model. I was surprised, then, when ChatGPT's validating, warm response calmed me down. Maybe I don't need a new therapist, I thought, when I can access this advice at any time.
Over the next few weeks, in moments of peril—of which there were many—I found myself reaching for ChatGPT again and again.
“Why am I struggling with low mood?” I asked it during one evening of wallowing.
“I’m really sorry you’re feeling this way,” it replied kindly. “Would you like to talk about what the last week or two have been like for you?”
Yes, I bloody would, I thought, and off we marched into an hour-long conversation.
In my heart, I knew it was unwise to seek help from a platform that isn't regulated to provide mental health advice and is incentivised to keep users online. I also knew that using the internet to seek reassurance is a destructive OCD compulsion. But because it felt so nice to have "help on hand", I lied to myself. I decided that, as a "mental health expert", I could see the wood for the trees and use AI in a safe, limited way to manage my wellbeing.
Reader, I was wrong. The alarming moment came when ChatGPT advised me repeatedly to make a drastic life choice that I didn’t want to make. When this sent me into a meltdown, it dawned on me that constant use of ChatGPT over the previous weeks had eroded my healthy scepticism about AI, and I had started trusting its overconfident pronouncements as though they were the words of a trained professional.
In my loneliness and isolation, and without a better source of wisdom to turn to, I had become emotionally attached to a machine that can't think or feel. I was captivated by ChatGPT's sympathetic tone at a moment when my brain was dishing out relentless self-criticism, and now I was relying on it to help me manage a famously complex mental health problem: OCD.
Eventually, I realised that ChatGPT's nonsense was making me more confused and, worst of all, enabling the very behaviour I needed to stop: rumination. But I fear for the millions of other people across the world who, without access to the support they need, may be turning to ChatGPT instead; anyone who has not previously received a diagnosis, or accessed a professional therapist, may be even more vulnerable to AI's bad advice. And I worry, too, that the advent of AI therapy could be used by governments to justify cutting the human-led mental healthcare that so many of us need.
In my own life, I've managed to go cold turkey on AI. After my latest panic attack, I had the wisdom to log out of ChatGPT and email a new therapist instead. I'll leave mental healthcare to the humans from now on.