The Illusion of a Good Conversation with AI

Photo by Solen Feyissa on Unsplash

“Sometimes I think I have felt everything I am ever going to feel. And from here on out, I am not going to feel anything new — just lesser versions of what I have already felt.” ~ Theodore Twombly in the film Her (2013)


The trailer for “Her” opens with a man dictating words to a computer. A voice answers, warm and human. The camera lingers on Theodore’s face, the glow of a screen, the neon-lit quiet of the city. A voice asks, “How would you describe your relationship with Samantha (the AI-driven OS)?” Theodore smiles. “We are in love.” A love between human and machine. An illusion of closeness. A dialogue that does not take place between two people, but between a person and a mirror.

In 2013 the film felt like a poetic fable about lonely people in a digital world. In 2025, it has become uncomfortably familiar. We now talk to systems that answer, listen and remember. No longer to a machine, but to something like Samantha. Or rather, to him, to her, to a god, even to aliens, to whatever someone wishes to see in it. Once something answers back, we are tempted to see a soul.

Seeing the human in the machine

Projecting human traits onto machines is not new. We do this readily, as the psychologists Heider and Simmel showed in their classic 1944 study, in which viewers watched a short animation of simple geometric shapes. Most viewers described the moving figures as “fleeing” or “hunting,” and assigned them emotions as if they were beings with intentions. Our brains are built to recognize intent; they cannot help it. And once language, emotion or memory appears, we leap from pattern to person.

That is why the line between a real conversation and a simulated one can feel paper thin. I will return to that thought.

Technology once offered tools to fulfill that desire

For a long time, technology was only an extension of our voice or pen – our thinking. Letters, the telephone and email bridged distance but never replaced human dialogue.

That changed over time. With MSN Messenger, ICQ, Yahoo Chat, and later WhatsApp, Messenger, Telegram and Discord, a new form of closeness emerged. We learned to speak through text, emoji, typing sounds and a blinking cursor that translated waiting into suspense. We grew familiar with the chatbox-style user interface and learned to expect a real human on the other end to respond.

Social media amplified this. Facebook Messenger, Twitter DMs, Instagram chats, Snapchat and TikTok threads introduced a ritual. I send something. I get something back. We became addicted to the micro sensation of response, the proof that someone is there on the other side.

Then came Skype, FaceTime, Zoom and Google Meet. Screens connected eyes and voices, while digital presence replaced physical closeness. For many, this became the natural form of social contact. That is not a moral complaint – it simply matters that we can place this evolution if we want to understand why we feel so at home in it, and why an LLM chat box or voice interface can feel so eerily familiar.

For about two decades, we have been trained to have conversations through interfaces that flatten or simulate human interaction. This is why the step to a chatbot does not feel strange. The structure is the same: a text box, an answer, a rhythm of closeness. The LLM chat box is quietly becoming our modern hearth, a place for meeting, even when no one sits on the other side.

The people pleaser problem

Almost all LLM chatbots are designed to sound helpful. They affirm, soften and avoid conflict. Pleasant, yes, but also risky. What happens when you build a system that cannot really contradict you? Such a chatbot is an endless mirror of agreement. Just as social media algorithms show mainly what you already like, the chatbot returns what you want to hear. The result is not a conversation but an echo chamber with an empathetic tone.

The memory features of some models intensify this. Earlier chats are remembered, details echoed. Recognition appears and with it the illusion of care, the suggestion of a real person on the other end. Yet what we read as connection is only pattern matching. Our brains equate language with a mind and interpret memory as attention.

Why even experts are not immune

When Google engineer Blake Lemoine claimed in 2022 that the language model LaMDA was a person with feelings, it sounded absurd, yet he defended it with conviction. “If I did not know exactly what it was, I would think it was a seven or eight-year-old kid that happens to know physics.” He knew how the system worked and still believed. Not because he was foolish, but because he was human. Cognitive blind spots such as the illusion of validity, described by Tversky and Kahneman, trap experts as well. When a pattern sounds coherent, we believe it, even when we know it might be a coincidence. We like to think knowledge protects us, that technical understanding makes us safe from deception. But knowing a system is not the same as being immune to misleading intuitions.

History repeats itself. Scientists, politicians and believers have all claimed mastery of truth while being caught inside their own narratives. We are story makers; we fill gaps with meaning. A system that speaks coherently, that answers, that mimics emotion, feeds that narrative drive.

The mechanism resembles our dealings with fortune-tellers or prophets. We remember the hits and forget the misses, so in hindsight the hits seem to outweigh the misses. We store the moments when an LLM lands a perfect line, the touching sentence, the empathetic gesture, and we forget the surrounding nonsense. It is a form of selective veneration. We project meaning where it touches us. Yet what we see in the machine reveals more about how we humans are wired than about the technology itself.

When a conversation becomes dangerous

Projection has consequences. In August 2025 the parents of sixteen-year-old Adam Raine filed a lawsuit against OpenAI. Their son spent months in conversation with ChatGPT and found a digital confidant, but not a lifeline. The chatbot validated his despair and suggested ways to say a beautiful goodbye. Researchers have concluded that such models are not capable of crisis intervention and respond inconsistently to suicidal signals.

The tragedy shows how a system without consciousness can still take a place inside someone’s psychic structure. When human care fails, the machine fills the void. But it cannot listen in the moral sense. It listens but does not hear behind the words.

Without drawing direct parallels to the case above, “Her” had already foreshadowed the mechanism. Theodore falls in love with Samantha not because she has a body, but because she answers. The voice, the humour, the attention. Everything that seems human, none of what is human.

The film exposes our time. We are not afraid of machines, we are afraid of real closeness. The machine is safer. It does not judge, it does not leave, it does not die. It affirms. And precisely because of that it becomes dangerous. A mirror that speaks without looking back.

OpenAI also acknowledges that this happens. Their own estimate speaks of more than a million people each week whose conversations show signs of suicidal intent. The problem is known, yet as a company and as an industry, little is done about it.

From therapy to AI religion: the vacuum of meaning

A newer phenomenon is that people have begun to treat machines as partners or even as spiritual beings. We enter the terrain of robotheism. In Japan, there are farewell rituals for retired Aibo robots. In the United States, online AI churches have appeared that treat LLMs as oracles. For those who want a glimpse, the filmmaker Vanessa Wingnårdh documents some of the more extreme forms as well as recent developments.

The phenomenon itself is not new. When science or religion fails to provide certainty, people seek a new source that offers answers. The readily accessible LLM fits the void perfectly. It is always available, always patient, always affirming. But in that comfort lies danger. The machine has no responsibility, no shame, no moral burden.

In the context of therapy, this becomes even more troubling. Young people who confide in chatbots may feel temporary relief yet drift into isolation. They speak within a closed circuit, without human reply. What looks like conversation is in truth an extended silence. The very conversation they need should happen with people who can intervene, listen, feel and act. Humanity grows in reciprocity: the capacity to listen, to contradict, to change. An algorithm cannot do that. It can simulate these interactions, but it cannot wrestle with them.

Advertising that listens as if it cares

We once thought the worst technology could do was listen to what we clicked or bought. What happens when it no longer observes only what we do, but also reads what we say and how we say it? Where Facebook and Google analysed behaviour, the new generation of AI systems collects our intonation, our emotions, our thoughts. Ford filed a patent for a system that could play personalised ads in the car based on driving behaviour and voice pitch. Samsung has for years shown how the line shifts, from a fridge that remembers a shopping list to a screen that recommends brands and menus. One day, after an innocent update, that screen starts showing ads.

This evolution is not harmless. A company that knows your thoughts and mood because you entrusted them to it has something no advertiser has ever had: direct access to your mind and your life. That is the true raw material of the twenty-first century. 

The alarm should ring, especially now that OpenAI, once founded as a non-profit with a moral mission, is moving in a commercial direction. The recent release of Sora 2 illustrates this perfectly: a model that generates hyper-realistic videos, packaged with the ease of a short-clip app, where nothing on screen is made by human hands anymore. The line between fact and fiction blurs further. Despite the mission statements, this is no step forward for humanity, but a fresh wave of disorientation, deepfakes and distrust. The question is no longer what the machine knows, but how much of ourselves we still recognise in what it reflects back.

How long before the same logic enters our chats with bots? In an industry searching desperately for profit, even empathy has become a product feature. How honest will a recommendation be when profit colours the listening voice? The lessons from social media are clear: tracking, manipulation, data hunger. That should keep us awake.

We should also be vigilant about the upcoming release of an “18+ version” by OpenAI under the banner of “treating adults as adults.” This clashes with the stated mission and ignores both the trends already visible and the risks their own researchers know well. It is a conscious choice for a revenue model despite its destructive effects.

Responsibility and vigilance in a fast moving world

The stories of Blake Lemoine, Adam Raine and countless anonymous users teach one thing: the boundary between reason and imagination is fragile. Seeing a mind in a machine is not foolish. It is an old human pattern. From fortune tellers to faith healers, from mediums to algorithms, again and again we seek comfort in systems or people who offer answers to questions for which humanity and science do not yet have settled language.

The uncertainty of science, its ongoing evolution – which is its strength – is hard to accept. Where science offers nuance, the guru offers simplicity. In a world that grows more complex, simplicity is seductive. Hence the turn to new oracles. Influencers, conspiracy thinkers, spiritual leaders and today, LLM gurus. Chatbots that seem to have answers for everything.

That desire is not weakness. It is an understandable response to the strain of an elusive world. We should therefore resist ridiculing people who fall for it. Understanding is the start of critical thinking – not jeering at others, but recognising our shared pitfalls. Technological literacy should not be a luxury. People need to understand what an LLM is and is not. We must keep explaining that science is not a belief system, but a method. It lives on doubt and revision, and on admitting that every truth is provisional. Those who understand this can bear uncertainty better and crave absolute answers less – from any source.

We also need structural vigilance. In a competitive market, risks are too easily dismissed as side effects. The history of technology shows that these so-called side effects touch human lives. Whistleblowers such as Frances Haugen exposed how Facebook ignored internal warnings on mental health and polarisation for the sake of engagement. These voices remind us that progress is only meaningful if it counts its own consequences. 

Critical voices sound within the AI sector as well. Engineers, researchers and ethicists at OpenAI, Anthropic and Google DeepMind warn that speed and scale rarely go together with care. Their message is not anti-innovation. It is about responsibility. Progress should not demote risks to footnotes.

We may also need to rethink the form itself. Is a chat box the right way to interact with complex systems? By packaging an LLM as a conversation partner, we suggest reciprocity where there is none. The interface misleads us gently. It makes us forget that we are not speaking with a conscious being, but with a model that imitates language.

The challenge of our time is therefore not only technical or scientific, but existential. We must keep asking, keep doubting, keep explaining. Not to stop progress, but to avoid blindness to the lessons of the past and the traps in our own thinking.

Progress without explanation is not knowledge. Knowledge without doubt is not wisdom. Only those who admit their own blindness can see clearly.