Elon Musk’s X platform has implemented new restrictions on its Grok AI chatbot to prevent the creation of sexualized deepfakes, specifically images that depict real people in revealing clothing such as bikinis. The changes come amid growing legal and political pressure, including an investigation by California’s attorney general and threats of bans in the UK and other countries.
Legal and Regulatory Backlash
The move follows widespread criticism after the Grok chatbot was used to generate non-consensual, explicit images of celebrities and children. California Attorney General Rob Bonta has demanded that xAI, the developer of Grok, take immediate action to remove such content. Bonta said the state is prepared to use “all tools at our disposal” to protect its residents, signaling potential legal consequences for X if it fails to comply.
“This material…depicts women and children in nude and sexually explicit situations [and] has been used to harass people across the internet.” – Rob Bonta, California Attorney General.
Simultaneously, the UK government has condemned the AI tool’s output as “disgraceful” and “disgusting”, with Prime Minister Keir Starmer threatening intervention if X does not self-regulate. Indonesia and Malaysia have already blocked access to Grok.
Technical Restrictions Implemented
X announced that it has deployed technological measures to prevent the AI from generating explicit images of real people. These include:
- Image Editing Limits: The ability to edit images via Grok is now restricted to paid subscribers.
- Geoblocking: Grok’s image-editing features will be geoblocked in jurisdictions where creating such images is illegal (see the illustrative sketch after this list).
- Content Moderation: Attempts to circumvent these restrictions will be actively monitored.
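For readers unfamiliar with the mechanism, geoblocking typically means refusing a request based on the requester’s country, usually as reported by an edge network or CDN. The following is a minimal hypothetical sketch, not X’s actual implementation: the blocklist, route name, and use of Cloudflare’s CF-IPCountry header are all assumptions made for illustration.

```typescript
// Hypothetical geoblocking gate: NOT X's actual code.
// Assumes a CDN in front of the service sets a country header
// (e.g. Cloudflare's "CF-IPCountry"); the blocklist is invented.
import express, { Request, Response, NextFunction } from "express";

const BLOCKED_COUNTRIES = new Set(["GB", "ID", "MY"]); // assumed example list

function geoblockImageEdits(req: Request, res: Response, next: NextFunction) {
  const country = (req.header("CF-IPCountry") ?? "").toUpperCase();
  if (BLOCKED_COUNTRIES.has(country)) {
    // Refuse the request in jurisdictions where such content is illegal.
    res.status(451).json({ error: "Feature unavailable in your region" });
    return;
  }
  next();
}

const app = express();

// Hypothetical image-editing endpoint, gated by the country check above.
app.post("/grok/image-edit", geoblockImageEdits, (_req, res) => {
  res.json({ ok: true }); // placeholder for the actual image-editing handler
});

app.listen(3000);
```

Status 451 (“Unavailable For Legal Reasons”) is the HTTP code defined for exactly this kind of legally mandated block.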
The Section 230 Question
The rapid policy change may be a response to concerns about legal liability. Experts suggest that X may not be fully protected by Section 230 of the U.S. Communications Decency Act when the platform’s own AI generates harmful content. That law generally shields tech companies from lawsuits over material their users post, but content generated directly by a company’s own AI may not count as third-party speech and could therefore fall outside the statute’s protection.
The swift adjustments reflect the escalating pressure on X to address the proliferation of AI-generated abuse on its platform. The situation raises broader questions about the responsibility of tech companies in controlling the outputs of their AI tools and the limits of legal protections in the age of generative AI.