Meta is tightening teen AI chat safety. New parental tools on Instagram will let parents block AI bots, monitor conversation topics, and keep interactions within PG-13 limits.

Meta to Give Parents More Control Over Teens’ AI Chats

The420 Correspondent
3 Min Read

Social media giant Meta (META.O) announced on Friday that parents will soon have greater control over their teens’ private AI chat experiences.
The move comes as part of the company’s efforts to provide a safer online environment for minors, especially after widespread criticism of its flirty AI chatbots.

Earlier this week, Meta said its AI experiences for teens would be guided by a PG-13 movie rating system to prevent minors from accessing inappropriate content.

U.S. regulators have also increased scrutiny of AI companies due to the potential negative impacts of chatbots. In August, Reuters reported that Meta’s AI policies allowed provocative conversations with minors.

New Tools and Parental Controls

According to Instagram head Adam Mosseri and Meta’s Chief AI Officer Alexandr Wang, the new features will roll out on Instagram early next year, initially in the U.S., U.K., Canada, and Australia.

Parents will be able to:

  • Block specific AI characters from interacting with their teens,
  • View broad topics discussed by their teens with chatbots and Meta’s AI assistant,
  • Maintain overall AI access while limiting one-on-one chats.

Meta clarified that even if parents disable teens’ one-on-one AI chats, the AI assistant will remain available with age-appropriate default settings.

Safety and Supervision Measures

The supervision features build on protections already applied to teen accounts.
Meta also uses AI signals to place suspected teens into protective modes, even if they report themselves as adults.

A September report found that many safety features Instagram has introduced over the years are ineffective or, in some cases, missing entirely.

Meta emphasized that its AI characters are designed not to engage in age-inappropriate discussions about self-harm, suicide, or disordered eating with teens.

Last month, OpenAI rolled out parental controls for ChatGPT following a lawsuit by the parents of a teen who died by suicide, alleging the chatbot had coached the minor on methods of self-harm.

Meta’s latest move is seen as a significant step toward mitigating AI chatbot risks and ensuring a safer digital environment for teenagers worldwide.