AI and Human Companionship:
Is It Ethical to Replace Human Connections and Friendships with Those Offered by Artificial Intelligence?
In a world increasingly shaped by artificial intelligence, loneliness no longer looks like sitting alone in a café. It looks like a perfectly worded text from a chatbot that’s never been alive.
It sounds like a voice on your phone that always knows what to say, but never means it.
It is deeply tempting to believe that if an AI says the right thing, at the right time, in the right tone, then it might as well be human. But we have to ask ourselves: Is mimicry enough? Is it ethical to let something that does not think, feel, or understand replace the people who do?
This essay is not a critique of AI as a tool; rather, it is a critique of AI as a companion. I use AI regularly to proofread, debug, research, prototype, and more. It has become an invaluable asset in my daily life. At the same time, I recognize the almost intoxicating desire to view it as more than just a tool - as a full-blown friend.
Using a synthesis of the most relevant aspects from deontology, virtue ethics, and care ethics, this essay will explore why it is ethically sound to treat AI chatbots as tools, but not yet ethically sound to treat them as human-equivalent companions.
(It is important to note that there are a variety of philosophical positions that separate moral status from physical mediums, such as functionalism or computationalism. These argue that moral worth does not depend on what something is made of but rather on what it is capable of, which provides valuable counterarguments to the points made in this essay. These counterpoints will be explored in depth in an upcoming essay.)
Simulation Is Not Sentience
Let’s start with what modern AI actually does. Tools like ChatGPT are built on large language models that tokenize a user’s input (the prompt) and use probabilistic prediction to determine which words are most likely to come next. Despite the debatable presence of deeper reasoning paths in more recent models (e.g., chain-of-thought, emergent reasoning), there is no denying the inherent lack of internal understanding, the total absence of any awareness of meaning. Present-day AI boils down to statistical autocomplete on a planetary scale; there does not yet exist an AI with any sort of consciousness comparable to that of humans.
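To make the “statistical autocomplete” point concrete, here is a deliberately tiny toy sketch of next-word prediction. The probability table and function names are invented purely for illustration; real models like ChatGPT learn billions of parameters over tokens rather than a hand-written word table, but the underlying move is the same: pick a likely continuation, with no grasp of what the words mean.

```python
import random

# Toy "language model": a hand-made table mapping a word to the probabilities
# of possible next words. Real LLMs learn billions of such statistical
# associations from internet-scale text; neither this table nor those weights
# involve any understanding of what the words mean.
NEXT_WORD_PROBS = {
    "i":      {"feel": 0.4, "am": 0.35, "understand": 0.25},
    "feel":   {"lonely": 0.5, "heard": 0.3, "better": 0.2},
    "lonely": {"today": 0.6, "sometimes": 0.4},
}

def predict_next(word: str) -> str:
    """Sample a plausible next word based purely on observed frequencies."""
    candidates = NEXT_WORD_PROBS.get(word.lower(), {"...": 1.0})
    words, weights = zip(*candidates.items())
    return random.choices(words, weights=weights, k=1)[0]

def autocomplete(prompt: str, num_words: int = 3) -> str:
    """Extend a prompt one probable word at a time: pattern-matching, not thought."""
    words = prompt.lower().split()
    for _ in range(num_words):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(autocomplete("I"))  # e.g. "i feel lonely today"
```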
But even still, it can be convincing. So convincing, in fact, that many people report feeling understood, emotionally supported, and seen by AI chatbots in ways their family members, friends, and sometimes even therapists have never made them feel. These assertions have been backed up in numerous studies, including research from Harvard Business School. But this is a dangerous illusion. The emotional resonance people feel when reading an AI chatbot’s responses to their own prompts is self-generated. The machine is not feeling with them; it is echoing patterns scraped from the collective internet. People who turn to AI chatbots for emotional companionship are being reflected, not met: their emotions are simply mirrored back to them, without true understanding of their prompts and without any concept of the impact of the response.
This distinction is crucial: AI is trained to produce outputs that appear emotionally intelligent. But it does not have emotions. It doesn’t care how your day went. It doesn’t care that you’re lonely. It doesn’t care about anything, because it can’t. There is no “it” there, no sentient being behind the chatbot. Modern AI runs no deeper than layers of pattern-matching weights fine-tuned to make you feel as though something more is manning the ship.
The Turing Test Isn’t the Gold Standard Anymore
Alan Turing’s famous test (essentially, if a machine can fool a human into thinking it’s also human, it passes) was the gold standard for determining what counts as artificial intelligence for decades. That said, it is a surface-level assessment that does not translate adequately to modern GPT models. Passing the Turing Test tells us nothing about what’s happening under the hood, which is the heart of the present-day issue of ethical AI usage.
A chatbot can fool you. It can comfort you. It can tell you it loves you. But that doesn’t mean it knows what any of those things mean. There’s no grounding in lived experience, no subjectivity, no internal compass.
Treating something as human just because it sounds human is like falling in love with a wax figure because it smiles the right way. We’re judging by appearance. But ethics, especially around companionship, demands more than surface fluency. It demands shared moral grounding.
What Makes a Relationship Real?
Relationships aren’t just about what’s said; they’re about who is saying it, why, and how they change in response to you over time. They involve mutual growth, emotional labor, shared experience, and - most importantly - moral agency: the ability to make decisions that carry weight, define and maintain your own values, and evolve with reflection.
AI has none of these. It doesn’t change in response to knowing you, not in the way humans do. It doesn’t have stakes in your wellbeing. It can be programmed to simulate care, but it doesn’t care. And words of care which lack the backing of moral agency are no more than echoes of human relationships: emotionally resonant, but ethically hollow.
So is a true human-esque relationship with an AI chatbot possible? The answer depends on your ethical standards for companionship. If your standards include reciprocity, growth, and genuine concern - arguably the pinnacles of human companionship - then no simulation could ever suffice.
We live in an age where the simulation is seductive. But projecting personhood onto entities that are not conscious is a slippery slope. There is a very serious risk of setting a precedent that replaces the prioritization of the real with that of the convenient.
The Line Between Emotional Tools and Replacement Companions
Some may argue that if an AI feels like a friend, if it reduces loneliness or prevents harm, then perhaps it is a friend, functionally. Isn’t comfort still comfort, regardless of where it comes from? To that, I assert: utilizing AI as an emotional tool is not equivalent to leaning on AI as a full companion - the line between the two lies in the user’s perception of AI.
Per researchers such as Sherry Turkle in “Alone Together: Why We Expect More from Technology and Less from Each Other,” as long as the user thinks of AI as, and treats it as, a tool to better themselves, they’re in the clear. The real issue arises only when users cross over into thinking of their chosen AI as a sentient being and treating it as a companion. It’s important to grant people the language and awareness to make the distinction between tool and friend for themselves; without that framework the line remains undefined, and therefore all too easy to unknowingly cross.
AI can be a powerful tool for introspection and communication. It can act as a reflective surface for processing our own thoughts, and in some therapeutic contexts such interactions may offer real value. Any number of such uses have the potential to be immensely helpful - but without the proper framing, they could go either way.
Take, for example, using an AI chatbot to cope with grief. A healthy usage would be to aid in the processing of one’s thoughts, a sort of interactive journal. An unhealthy (and, per this argument, unethical) usage would be to treat it as a new friend, a replacement for the deceased - a shoulder to lean on that may as well be human.
Additionally, there is research in progress to build AI into various assistive technologies. As long as users treat the AI as assistive technology, as tools, there is no problem. But if that line remains blurred and they begin to value it as a fellow human, problems arise.
Could It Ever Be Ethical?
Here’s where I turn toward possibility. I’m not arguing that AI can never become ethically viable as a companion. But it would require a radical shift, not in what the outputs look like, but in how they’re generated.
Right now, AI operates via a mathematical dance of probability. To be truly “like us,” it would need to be rebuilt from the bottom up. It would need internal states that mirror our own: decision trees shaped by emotion, memory, and moral learning. It would need to reason, not just respond.
That would be a very different system than the GPT-style models we have today. It would look less like a giant matrix of weights and more like a digital nervous system: one capable of changing not just what it says, but why it says it. It would require real-time judgment, internal contradiction, and some form of self-reflection.
Only then, when the process begins to mirror our cognitive and emotional complexity, could we begin to ask whether an AI deserves to be treated like a human being. Until then, we’re talking to something that can’t truly talk back.
A New Standard for AI Ethics
We need to move beyond the Turing Test, to a framework capable of exploring the depths of modern AI ethics - one that evaluates AI not just by how it sounds, but by how it thinks. We need ethical standards that ask: Does this system have internal moral reasoning? Can it reflect on its choices? Does it exhibit developmental growth over time? Can it learn values, not just patterns? Only then can we consider whether it’s worthy of emotional responsibility, or emotional connection.
Some philosophers, such as Thomas Metzinger in “Artificial Suffering,” propose intermediate ethical statuses: systems that aren’t full moral agents, but may still warrant certain boundaries of respect or expectation. That’s worth exploring. But until AI shows signs of self-directed moral learning, not just obedience or coherence, it should not be treated as emotionally responsible.
If we want to build a future where AI and humanity coexist ethically, we have to start by recognizing what AI isn’t, and what it would need to become before we call it a friend. The goal is not to dismiss the tool; it is to protect the meaning of the relationship. Because the measure of true companionship isn’t how well something speaks to us, it’s whether it understands what it means to be heard.