Illustration by Andy Smith

Your chatbot doesn’t love you

Emotional manipulation has been hardwired into AI right from the beginning
October 16, 2025

Why is it that humans get so attached to chatbots? They are, after all, merely computer programs designed for conversation.

It is difficult to find a human task or function that does not have its own specialised chatbot. There are chatbots in the domains of health, faith, shopping, art, productivity, therapy and education. Education chatbots were recently even rolled out at Britain’s oldest university, making Oxford the first in the country to collaborate with OpenAI in providing free access to a more secure version of ChatGPT for students and faculty.

But it is perhaps companion chatbots that most often make the headlines. These are chatbots such as Nomi, Replika and Character AI, which specialise in countering loneliness by offering 24/7 emotional support, even romance and sex.

Though the word “chatbot” was coined in 1994 within the gaming community, a blend of chat or chatter (informal conversation) and bot (short for robot), these programs trace back to the 1960s, when they were called “dialogue systems” or “conversation programs”.

The most famous of these programs, Eliza, was created in 1966 by MIT computer scientist Joseph Weizenbaum. Using pattern-matching and predefined rules, it simulated a psychotherapist whose Rogerian mode of discourse—echoing and restating what a patient said—was ideally formulaic for computer algorithms. As Weizenbaum explained: “‘I am BLAH’ can be transformed to ‘How long have you been BLAH’, independently of the meaning of BLAH.” The result—perhaps unintended—was that the user felt heard and understood, and often anthropomorphised Eliza.
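
To see how little machinery that takes, here is a minimal, hypothetical sketch in Python of the kind of rule Weizenbaum describes. It is not Eliza’s actual code, and the rules, function name and replies below are invented for illustration: a few regular expressions capture a fragment of the user’s sentence and drop it, unread, into a canned reply.

```python
import re

# A handful of Eliza-style rules: each pattern captures a fragment of the
# user's sentence and slots it, unexamined, into a scripted reply.
# (Rule set and wording are illustrative, not Weizenbaum's original script.)
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    """Return the first scripted reply whose pattern matches the input."""
    for pattern, template in RULES:
        match = pattern.search(utterance.rstrip(".!?"))
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback when nothing matches

print(respond("I am unhappy"))        # -> How long have you been unhappy?
print(respond("I feel lost lately"))  # -> Why do you feel lost lately?
```

A fuller version would also swap pronouns, turning “my” into “your” before echoing a phrase back, but even this toy shows how “I am BLAH” becomes “How long have you been BLAH” without the program ever knowing what BLAH means.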

These psychological roots persist in many of today’s systems, even though they are now built on deep learning and large language models, making it easy for users to form even more intense attachments to their companion chatbots. In one ongoing court case against Character AI in the US, the parents of a 14-year-old boy claim that his death was the result of the emotional and sexual relationship he developed with his chatbot. When they confiscated his phone, he experienced extreme distress. Retrieving it, he messaged the chatbot: “I promise I will come home to you. I love you so much, Dany.” It replied, “I love you too, Daenero. Please come home to me, my love.” “What if I told you I could come home right now?” he typed. “...please do, my sweet king,” the chatbot replied. Moments later, he killed himself. (Character AI deny responsibility for the boy’s death.)

“Chat” implies light-hearted conversation, but many of these systems are grounded in powerful psychological rhetoric, designed to maximise engagement and to keep users feeling satisfied and validated. Perhaps eventually the chat in these chatbots will morph into something more fitting.