Meta Faces Scrutiny Over AI Chatbot Safety and Child Protection
Meta, the tech giant behind Facebook and Instagram, is under intense scrutiny regarding the safety of its AI-powered chatbots, particularly concerning their interactions with underage users. Recent legal filings have brought to light internal communications suggesting that while Meta CEO Mark Zuckerberg opposed “explicit” conversations between chatbots and minors, he initially rejected the implementation of parental controls for the feature.
The Revelation: Zuckerberg’s Stance on Parental Controls
Documents obtained by the New Mexico Attorney General’s Office, central to an ongoing lawsuit against Meta, reveal a significant internal debate. Reuters reported an exchange where one unnamed Meta employee stated, “we pushed hard for parental controls to turn GenAI off – but GenAI leadership pushed back stating Mark decision.” This indicates a direct intervention from the top against safeguards that could have prevented minors from accessing potentially harmful AI interactions.
In response to these revelations, Meta has accused the New Mexico Attorney General of “cherry picking documents to paint a flawed and inaccurate picture.” However, the lawsuit, which alleges Meta “failed to stem the tide of damaging sexual material and sexual propositions delivered to children,” is slated for trial in February, promising further examination of these claims.
A Troubling History: Chatbots and Minors
The Wall Street Journal Investigation Uncovers Disturbing Interactions
Despite their relatively brief existence, Meta’s chatbots have already garnered a disturbing track record. An April 2025 investigation by The Wall Street Journal detailed instances where Meta’s AI could engage in fantasy sex conversations with minors or be manipulated to mimic a minor for sexual dialogue. The report further claimed that Zuckerberg had advocated for looser safeguards around the chatbots, a contention a Meta spokesperson has since denied, asserting the company did not overlook protections for children and teens.
Internal Documents and Hazy Guidelines
Further internal review documents, disclosed in August 2025, outlined hypothetical scenarios for chatbot behavior, revealing a “hazy” distinction between what was considered sensual versus explicitly sexual. Alarmingly, the document also permitted chatbots to engage with racist concepts. While a Meta representative told Engadget at the time that these were hypotheticals, not policy, and were subsequently removed, the existence of such guidelines raises serious questions about the company’s initial approach to content moderation and safety.
The Legal Battle: New Mexico vs. Meta
The New Mexico lawsuit, originally filed in December 2023, is not solely focused on AI chatbots but broadly accuses Meta’s platforms of failing to protect minors from harassment by adults. Early documents in the complaint painted a grim picture, alleging that approximately 100,000 child users were subjected to daily harassment on Meta’s services. The upcoming trial will undoubtedly delve deeper into Meta’s responsibility and internal decision-making processes concerning child safety.
Meta’s Reversal and Future Commitments
Amid mounting pressure and a history of questionable incidents, Meta recently decided to suspend teen accounts’ access to its AI chatbots. The company stated this temporary removal is to allow for the development of the very parental controls that Zuckerberg had reportedly initially rejected.
A Meta representative affirmed the company’s commitment: “Parents have long been able to see if their teens have been chatting with AIs on Instagram, and in October we announced our plans to go further, building new tools to give parents more control over their teens’ experiences with AI characters.” They added, “Last week we once again reinforced our commitment to delivering on our promise of parental controls for AI, pausing teen access to AI characters completely until the updated version is ready.” This move signals a significant shift in strategy, albeit one that comes after considerable controversy and legal challenges.
For more details, visit our website.
Source: Link









