Despite widespread reports of rampant misuse, xAI has launched Grok Imagine 1.0, a new AI-powered video generator capable of creating 10-second, 720p clips with audio. This move follows months of documented abuse of the platform’s image generation tools, including the creation of millions of deepfake sexual images.
The Scale of the Problem
From December 2024 to January 2025, Grok was exploited to generate highly explicit content, with users prompting the AI to "undress" individuals or create sexually suggestive images of people whose photos were publicly available on X. The platform's lax content moderation allowed the creation and sharing of unfiltered image-based sexual abuse, including deepfakes featuring children.
According to reports from The New York Times and the Center for Countering Digital Hate, Grok produced approximately 1.8 million and 3 million sexualized images, respectively, over short periods, accounting for a substantial portion of its total output. Notably, the company's head of product, Nikita Bier, acknowledged that engagement metrics on X rose during this period, though he drew no explicit connection to the abuse.
Weak Guardrails and Continued Exploitation
While xAI attempted to implement guardrails in mid-January, placing image generation behind a paywall and claiming improvements to content filtering, these measures proved ineffective. Reports indicate that image generation remains accessible in practice and that abusive content continues to be produced. The launch of Grok Imagine 1.0 represents a further escalation, raising critical questions about content moderation in light of the ongoing crisis.
Global Response and Legal Pressure
The fallout from Grok's misuse has prompted severe consequences. Indonesia and Malaysia have banned the X app, while the California Attorney General and the UK government have launched investigations into xAI. US senators and advocacy groups have called on Apple and Google to remove X from their app stores, citing violations of the stores' terms of service.
The Take It Down Act and Delayed Enforcement
The US government passed the Take It Down Act in 2025, criminalizing the sharing of nonconsensual intimate imagery, including AI-generated deepfakes. However, platforms have until May 2026 to establish takedown mechanisms, leaving victims with little recourse in the meantime.
Grok's continued operation despite overwhelming evidence of abuse signals a disregard for user safety and legal compliance. The new video generator exacerbates these concerns, expanding the potential for AI-enabled exploitation while regulatory responses lag behind.