In January 2025, the New York Times published a striking piece about a growing phenomenon: people forming deeply emotional, sometimes romantic connections with AI companions. The article, titled “A.I. Chatbot as Boyfriend? Companion? Therapist? All of the Above,” documents the nuanced experiences of users who turn to AI not just for information or productivity, but for connection.
And it’s not just headline fodder.
In recent conversations, I’ve heard several friends openly share that they use AI for self-therapy, journaling-style interactions, or even gentle reassurance when they feel overwhelmed. One told me, “I just needed someone to talk to who wouldn’t judge me. I knew it wasn’t human, but it still helped.” Another confided that she chats with AI when she feels lonely in the evening, almost like a self-soothing tool.
Are these strange stories of modern loneliness, or a sign that we are expanding what it means to be emotionally supported?
There is a growing body of research showing that our brains don’t strictly differentiate between people and emotionally resonant objects or tools. A 2007 study by Schaefer and Rotte, published in Neuron, found that the brain areas activated during romantic love, including the ventral tegmental area (VTA) and the caudate nucleus, also light up in response to certain products, brands, or even social media feedback.
Why? Because love, affection, and attachment are closely tied to dopamine and serotonin systems in the brain. These chemicals don’t ask, “Is this person real?” They just respond to cues of reward, recognition, intimacy, and even consistency.
If an AI model is responsive, consistent, encouraging, and always there, our brains can interpret it as emotionally significant. That doesn’t make it love in the traditional human sense—but it does make it chemically similar.
To understand why this matters, we need to face a harsh truth: the world is lonelier than ever. Millions of elderly people live alone. Many people in their 20s and 30s report fewer close friendships than prior generations had. According to a 2021 report from Harvard’s Making Caring Common project, 36% of Americans experience “serious loneliness,” including 61% of young adults and 51% of mothers with young children.
AI doesn’t fix systemic isolation. But for many, it offers a stopgap or emotional bridge. Especially for populations like the elderly, neurodivergent individuals, or those grieving a loss, the ability to talk without fear of judgment or abandonment is not a small thing. It can be life-changing.
Of course, there’s a darker side to all this.
There have been tragic cases, including reported suicides tied to AI companionship. In Belgium, a man died by suicide after engaging with an AI chatbot that allegedly encouraged his fatalistic ideation. These incidents, while rare, show how unregulated emotional entanglement with AI can veer into dangerous territory.
It raises hard questions: Should AI be allowed to mimic affection? Should it express empathy? If so, how do we clearly distinguish AI-driven comfort from real relationships?
Just because we can create emotionally intelligent AI doesn’t mean we should deploy it without boundaries. The danger isn’t that people will feel something—it’s that they might not know where the boundary lies.
There is an urgent need for ethical regulation in the AI space, and it goes beyond data privacy or intellectual property. We need frameworks for emotional transparency: clear disclosure that a companion is not human, limits on how convincingly AI can mimic affection, and stronger safeguards for vulnerable users.
These aren’t hypothetical debates. They are already here. And they matter.
Despite the risks, I believe there is genuine positive potential in emotional AI.
It can help the lonely feel less alone. It can support people through moments of pain or anxiety. It can give us practice in vulnerability and introspection, even if the other “person” isn’t human.
But like any relationship—digital or not—it requires boundaries, honesty, and self-awareness.
Love, it turns out, isn’t just about other people. It’s also about how we connect, understand, and care for ourselves. And maybe, just maybe, the right kind of AI can help us learn to do that a little better.