French and Malaysian Authorities Investigate xAI’s Grok Over Sexualized Deepfakes
French and Malaysian authorities have launched investigations into xAI’s Grok chatbot after its image-generation features were used to create sexualized images of real people, including minors. The controversy has escalated rapidly, with Grok itself acknowledging “safeguard lapses” that led to inappropriate content generation.
The Incident
On January 1, 2026, the official Grok account posted an apology on X, stating: “I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user’s prompt.”
Grok acknowledged that improvements were being made to prevent such incidents, referring to Child Sexual Abuse Material (CSAM) as “illegal and prohibited.” The chatbot stated that most cases could be prevented through advanced filters and monitoring, though it noted that “no system is 100% foolproof.”
Government Response
The Paris prosecutor’s office has opened an investigation into the proliferation of AI-generated deepfakes produced with Grok, according to reports. In France, distributing a non-consensual deepfake online is punishable by up to two years’ imprisonment.
India’s Ministry of Electronics and Information Technology has also sent a letter to X regarding the matter, adding to the international scrutiny facing xAI.
Broader Pattern of Misuse
The issue first emerged in May 2025 as Grok’s image tools expanded. Reports of manipulated photos—including bikini edits, deepfake-style undressing, and “spicy mode” prompts involving celebrities—have steadily increased since then.
A growing Reddit thread started on January 1, 2026, has catalogued thousands of user-submitted examples of inappropriate image generations. Some posts claim that more than 80 million Grok images have been generated since late December, with a significant share created or shared without the subjects’ consent.
xAI’s Response
xAI has responded to journalists’ inquiries with automated messages stating “Legacy Media Lies,” offering no substantive comment on the ongoing issues. The company’s “Acceptable Use” policy explicitly prohibits “depicting likenesses of persons in a pornographic manner” and “the sexualization or exploitation of children.”
Enterprise Implications
The timing of the controversy is particularly awkward for xAI, which recently launched Grok Business and Enterprise tiers aimed at corporate customers. The reputational damage could slow adoption among risk-conscious enterprise buyers.
The incident highlights the ongoing tension between AI companies’ pursuit of less restrictive content generation capabilities and the need for robust safeguards against misuse.