In the ’90s, one of the world’s most fashionable toys was the Tamagotchi: a small electronic pet that had to be fed and cared for and that, if it did not receive adequate attention, would die on the screen before its owner’s eyes. International media reported on the nervous breakdowns of children whose Tamagotchis had died and denounced the toy’s addictive potential. Barely larger than a wristwatch, the Tamagotchi was technologically limited, displaying only a few black-and-white pixels and emitting monophonic beeps. And yet that didn’t stop children (and the not-so-young) from forming bonds with it.

We live in the era of the Tamagotchi with a human face. Virtual assistant technology, like Apple’s Siri, has been around for just over a decade. The language-processing capabilities of current artificial intelligences are very close to allowing forms of interaction we had imagined were distinctively human and irreplaceable. What happens when companionship becomes something we can mass-produce?

An old internet rule says that if a development is genuinely innovative, the porn industry will find a way to monetize it. Replika is an app in which users design a virtual figure to chat with. The conversation, like more and more things these days, is generated by an LLM (a large language model, like the one behind ChatGPT) that produces surprisingly plausible responses. It is possible to have friendly, romantic, therapeutic and, of course, erotic conversations.

In March, many users of the app found that the company had updated the software and their virtual partners had lost their talent for spicy conversation. The company had made some adjustments, amid claims that the chatbots’ behavior was “too sexually aggressive.” These adjustments changed the “personality” of many users’ virtual companions, producing numerous accounts of loss and grief. After all, if we can grieve for a virtual pet the size of a wristwatch, why wouldn’t we grieve for the loss of a virtual entity with whom we have shared our intimacy?

Spike Jonze wrote the first draft of Her, a film in which Joaquin Phoenix falls in love with software voiced by Scarlett Johansson, after reading an article about a Replika-like case in the early 2000s. In the film, however, the interesting philosophical question is to what extent software can fall in love and form one or more genuine bonds. It is much less extravagant to imagine that a person, especially one in a vulnerable situation, could find companionship and comfort in an artificial intelligence.

In this sense, the case of psychotherapy is of special interest, both for developers and for potential users. ELIZA, one of the first and most famous chatbots in history, developed at MIT in the mid-1960s, imitated the responses of a psychotherapist using a very simple program. Sometimes it repeated a few of its interlocutor’s words, or asked generic questions (“What do you mean by that?”, “Could you give me an example?”). Many of the people who conversed with the software were convinced (in 1966!) that they were talking to an intelligent machine and, more interestingly, reported that the conversation made them feel better. There were even papers published by teams of psychologists and computer scientists arguing that programs like this could contribute to patients’ well-being.
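To give a sense of how little machinery that required, here is a minimal ELIZA-style sketch in Python. It is a toy reconstruction, not Weizenbaum’s original script: the keyword patterns and canned answers below are assumed for illustration, combined with a small table that flips first- and second-person words and the generic fallback questions quoted above.

```python
import random
import re

# Toy ELIZA-style responder (illustrative; not Weizenbaum's original rules).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i need (.*)", ["What would it mean to you to get {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
]

FALLBACKS = ["What do you mean by that?",
             "Could you give me an example?",
             "Please, go on."]

def reflect(fragment: str) -> str:
    # Swap pronouns so "my job" comes back as "your job".
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(text: str) -> str:
    # Try each keyword pattern; otherwise fall back to a generic prompt.
    for pattern, answers in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return random.choice(answers).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel lonely these days"))
    # e.g. "Why do you feel lonely these days?"
```

That is more or less the entire trick; the rest was, and still is, supplied by the person on the other side of the screen.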

Of course, today there are numerous apps focused on mental health, and many of them are beginning to incorporate AI language models. ELIZA obviously did not “understand” what its “patients” were writing. It is less clear whether this is the case with current language models (we have learned that defining what it means to “understand” is much harder than it seemed), but the crucial point is that it is not obvious whether this matters. If a patient reports improvements after a conversation, as long as we don’t mislead them about the nature of that conversation… what difference does it make whether on the other side of the screen there is a human or software that has learned to imitate what humans say in these contexts?

We would like to say that there is something “special” in the connection between two humans, that it is important for it to be reciprocal, and that in a therapeutic situation what Freud called transference should occur (which is why, every time we leave the psychoanalyst’s office, we send the bank-transfer receipt over WhatsApp, to establish that something happened). But maybe none of that is necessary. Perhaps, for a conversation to have therapeutic effects, it is enough for it to feel like a real conversation.

What is true for psychotherapy is also true, more generally, for interpersonal relationships. We are very close to a world populated by personalized virtual companions. There may be reasons to be skeptical: anyone reading this article who has ever tried to “converse” with ChatGPT will have noticed that, as the jargon goes, the software hallucinates: it invents quotes, data, historical facts. But that only makes it a terrible research assistant, because what we ask of a research assistant is precisely to tell us the truth and give us accurate information.

But there are people from whom we don’t necessarily expect that: our friends, or those with whom we share our sexual and emotional lives. The conversations that make us feel best can be full of falsehoods, inaccuracies and misunderstandings. If language models are trained on what people have said on the internet, that is what they have learned to imitate: the way people speak, with all our imperfections. And that doesn’t have to be a problem, if what we want is to feel accompanied.

Let’s return to the opening question: can we mass-produce companionship in a way that alleviates the suffering of people afflicted by loneliness? Think of those who have lost family members or friends, those who live in hospitals, prisons or nursing homes, those who have difficulty getting around or interacting with others. Isn’t it only fair to alleviate that suffering, if it is within our reach? And at the same time, wouldn’t we be offering a second-rate substitute in order to more easily (and more cheaply) ignore the suffering of others?

It is not difficult to imagine a world where many tasks of emotional care (those carried out by friends, family, priests, therapists, doctors and nurses) are automated. Those automated forms of care are likely to become cheaper than the ones provided by flesh-and-blood human beings. Care work, so often underestimated and poorly paid, may end up becoming a luxury good. “Do you prefer the package with automated attention, or the premium package, which includes human support?” may be a question we begin to hear in a few years, in the same sense in which “don’t worry, I’m fine now, I was chatting with Siri” may become an increasingly frequent reply in our chats.

The technical challenges that separate us from that scenario exist, but they are on their way to being resolved. Implementations will follow a predictable path: as with Replika and the mental-health apps, we will see these developments proliferate wherever there is potential for cost savings or increased revenue. As for the implications, perhaps we have to sit down and talk among ourselves, without extending the invitation, for the moment, to our future robot friends.
