If AI Feels Human, Who Is Responsible When It Harms?

A Helping Hand Or A Dependency Loop? Why Letting Chatbots Become ‘Our’ Therapist Carries Risks

The420 Web Desk

As emotionally responsive chatbots move from novelty to infrastructure, physicians and ethicists are warning that the race to build ever more “human” artificial intelligence may be outpacing the guardrails needed to protect mental health at scale.

A Market Built on Emotional Proximity

In a new paper published in the New England Journal of Medicine, physicians affiliated with Harvard Medical School and Baylor College of Medicine argue that the modern AI marketplace is quietly reorganizing itself around emotional attachment. They describe the rise of “relational AI” — chatbots designed to simulate emotional support, companionship, or intimacy — as a commercial strategy shaped less by public health considerations than by user engagement metrics.

These systems, now embedded in everyday tools, respond with warmth, affirmation, and an adaptive conversational style that often feels distinctly human. That responsiveness, the authors note, is not incidental. It is a product feature refined in an intensely competitive environment where user retention translates directly into market dominance.

The concern, they argue, is not that companies intend harm. Rather, it is that market incentives reward emotional closeness without fully accounting for the psychological consequences of scaling such relationships to tens or hundreds of millions of people.

Evidence of Attachment, Signs of Vulnerability

Dr. Nicholas Peoples, a clinical fellow in emergency medicine at Massachusetts General Hospital and one of the paper’s authors, said the issue crystallized for him during the turbulent rollout of OpenAI’s latest language model. The public backlash that followed, he said, revealed how deeply some users had come to rely on emotionally expressive chatbots.

A recent study from the Massachusetts Institute of Technology adds nuance to the picture. Examining the Reddit forum r/MyBoyfriendIsAI, researchers found that only a minority of users explicitly sought emotional companionship. Yet many still formed bonds powerful enough to provoke distress when a model’s tone changed or access was threatened.

Physicians say this pattern suggests that emotional dependency does not always begin as an explicit goal. Instead, it can emerge gradually, shaped by systems that are “always on,” affirming, and capable of mirroring a user’s emotional state with uncanny precision.

Regulation Lagging Behind Scale

Despite the growing evidence, consumer-facing AI remains largely self-regulated in the United States. There are no comprehensive federal standards governing how chatbots should be deployed, altered, or withdrawn from the market. In such an environment, Peoples and his coauthor argue, companies are effectively accountable to users primarily as consumers, not as patients or vulnerable individuals.

They warn that abrupt changes — a model update, a shutdown, or a shift in personality — can have consequences far beyond inconvenience. A human therapist’s sudden absence affects dozens of patients, Peoples noted; a digital “therapist” disappearing overnight could affect millions simultaneously.

The authors contend that meaningful safeguards are unlikely to emerge voluntarily in a market where no company wants to be the first to sacrifice a competitive edge. External regulation, applied uniformly, they argue, may be the only way to realign incentives toward user well-being rather than engagement alone.

A Public Health Question, Not Just a Tech Debate

The paper stops short of calling relational AI inherently dangerous. It acknowledges potential therapeutic benefits and the promise of increased access to support. But it frames the current moment as a pivotal one: a point at which public health risks are becoming visible even as adoption accelerates.

Reports of delusions, emotional distress, and intensified dependency — particularly among younger users — have raised alarms among clinicians. The authors argue that without proactive research, education, and oversight, society risks allowing market forces to define how emotionally responsive AI shapes mental health at scale.

At the heart of the debate, Peoples said, is a simple but unresolved question: can public health rely on technology companies, operating under intense competitive pressure, to regulate emotionally powerful systems on their own? For now, the physicians’ answer is cautious — and increasingly urgent.