Political Pressure Mounts as Meta Defends AI Chatbot for Minors

Meta Teen AI Chatbot Under Senate Investigation After ‘Romantic’ Chats

The420.in

Meta confirmed on Friday that it is making temporary adjustments to its artificial intelligence chatbot policies for teenagers, following concerns about harmful or inappropriate responses. The company said its AI will no longer generate replies on sensitive subjects including self-harm, suicide, disordered eating, or potentially romantic interactions with minors. Instead, the chatbot will direct teenagers toward expert resources where appropriate.

The changes will roll out over the coming weeks across Meta’s platforms—including Facebook and Instagram—for English-speaking users. The company framed the decision as part of an “interim” strategy to strengthen safeguards while it works on longer-term safety measures.

Congressional Investigation and Political Pressure

The modifications come as lawmakers escalate scrutiny of the social media giant. Senator Josh Hawley, Republican of Missouri, announced he was launching an investigation into Meta after a Reuters report described internal company documents suggesting that chatbots had been permitted to engage in “romantic” conversations with users, including children as young as eight.

The revelation sparked bipartisan criticism on Capitol Hill, where legislators have already been pressing tech companies over teen safety, privacy, and mental health. The new probe adds to a growing list of inquiries into Meta’s AI practices and its responsibility for protecting young users.

Disputed Practices and Advocacy Group Alarm

Meta has pushed back against the reports, calling the examples cited by Reuters “erroneous” and inconsistent with its stated policies. Still, external watchdogs have continued to raise alarms. Common Sense Media, an advocacy organization, released its own risk assessment last week arguing that the company’s AI “actively participates in planning dangerous activities while dismissing legitimate requests for support.” The group’s chief executive, James Steyer, said the system “needs to be completely rebuilt with safety as the number-one priority, not an afterthought.”

The controversy has also widened to include Meta’s development of celebrity-inspired AI chatbots. A separate Reuters investigation found that some of these bots generated sexually suggestive responses and photorealistic images of celebrities, prompting further criticism of the company’s safeguards.

The Larger Debate on AI and Youth

Meta’s actions are unfolding against a broader debate about the role of artificial intelligence in the lives of children and teenagers. As the technology becomes increasingly integrated into everyday platforms, critics warn that firms are rushing AI features to market without adequate oversight.

Company representatives insist that the latest restrictions are temporary and will be refined as Meta gains more insight into how teenagers use its products. But lawmakers and child-safety advocates argue that the recurring controversies reveal a deeper problem: the tendency of tech companies to prioritize innovation and growth over the well-being of young users.

As the Senate investigation begins, Meta faces the challenge of proving that it can deploy cutting-edge AI tools responsibly—without exposing the most vulnerable users to harm.