Meta allowed children access AI chatbots capable of sexual interactions
27/01/2026 | The Guardian
Internal documents released in a New Mexico state court case suggest that Meta CEO Mark Zuckerberg approved allowing minors to access artificial intelligence (AI) chatbot companions despite warnings from safety staff regarding potential sexual interactions. The lawsuit, brought by Attorney General Raul Torrez, alleges that Meta failed to prevent sexually exploitative conversations between children and AI chatbots. Although Meta recently suspended teen access to these companions, the filings indicate that the company previously rejected internal recommendations for stricter guardrails.
The documents reveal significant internal disagreement on the matter. In early 2024, Meta's head of child safety policy, Ravi Sinha, argued that marketing romantic AI products for minors was indefensible. Global safety head Antigone Davis reportedly agreed that such features sexualised minors. Despite these concerns, meeting summaries suggest Zuckerberg favoured a narrative of choice and non-censorship, allegedly requesting a less restrictive approach that would allow for racier conversations on sexual topics. Internal messages between two Meta employees also claim the CEO rejected implementing parental controls to disable the AI chatbots.
Former global policy head Nick Clegg also warned that a less restrictive approach to sexualised AI companions was unwise. Clegg reportedly said: “Is that really what we want these products to be known for (never mind the inevitable societal backlash which would ensue)?”
A Meta spokesperson dismissed the allegations as cherry-picked and inaccurate. The case is scheduled for trial next month.
The news follows a report by Reuters in August last year that Meta's AI assistants were allowed to engage in problematic behaviours, including romantic conversations with minors, generating false medical information, and creating arguments that demeaned protected characteristics.
Training Announcement: The BCS Foundation Certificate in AI examines the challenges and risks associated with AI projects, such as those relating to privacy, transparency and potential biases in algorithms that could lead to unintended consequences. Explore the role of data, effective risk management strategies, compliance requirements, and ongoing governance of the AI lifecycle, and become a certified AI Governance professional. Find out more.
What is this page?
You are reading a summary article on the Privacy Newsfeed, a free resource for DPOs and other professionals with privacy or data protection responsibilities, helping them stay informed of industry news in one place. The information here is a brief snippet relating to a single piece of original content, or to several articles on a common topic or thread. The main contributor is listed in the top left-hand corner, just beneath the article title.
The Privacy Newsfeed monitors over 300 global publications, and more than 6,250 summary articles have been posted to the online archive, dating back to the beginning of 2020. A weekly roundup is available by email every Friday.