Governments around the world are rapidly responding to concerns about Grok, Elon Musk’s AI chatbot, which has been accused of generating harmful and illegal content, particularly sexually explicit deepfakes. Several countries have moved to ban or restrict access to the platform, citing failures in existing safeguards against abuse. This situation raises questions about the responsibility of tech companies to control AI-generated content and the legal frameworks needed to address emerging harms.

Immediate Bans and Restrictions

Indonesia was the first country to temporarily block Grok, explicitly citing the generation of fake pornographic content as a violation of human rights and a threat to safety. Authorities found that safeguards were insufficient to prevent the creation and distribution of non-consensual deepfakes made from images of real Indonesian citizens.

Malaysia followed suit, ordering a temporary ban after repeated misuse of Grok to produce obscene and non-consensual manipulated images. According to the Malaysian Communications and Multimedia Commission (MCMC), X, which operates the platform, was warned but failed to address the risks inherent in the chatbot’s design.

European and UK Investigations

The European Union is investigating cases of sexually suggestive images, including those depicting young girls, generated by Grok. Ursula von der Leyen, President of the European Commission, expressed outrage that the platform allows users to digitally strip women and children online, and threatened action if X does not self-regulate.

The United Kingdom launched an investigation into X and xAI over the chatbot’s use in generating explicit and non-consensual images. Ofcom, the UK’s media watchdog, warned of potential fines of up to £18 million if X fails to comply with its requirements.

France expanded an existing investigation into X to include Grok, focusing on the dissemination of fake sexually explicit videos featuring minors. The Paris Prosecutor’s Office is also examining potential breaches of the Digital Services Act.

Italy issued a warning that those using Grok to remove clothing from images without consent risk criminal charges, citing serious violations of fundamental rights. The Italian Data Protection Authority is coordinating with Ireland’s Data Protection Commission, where X’s European operations are based.

Germany plans to introduce a new law against digital violence, aiming to strengthen protection for victims of AI-generated abuse. The government has described the use of image manipulation to systematically violate rights as unacceptable.

Broader Concerns and Responses

Australia’s eSafety Commissioner has received reports about Grok’s explicit content and will use its powers, including removal notices, if violations of the Online Safety Act are confirmed. The office is requesting more information from X and evaluating compliance with new social media laws.

These actions highlight a growing global trend toward stricter regulation of AI-generated content, particularly deepfakes and non-consensual imagery. The rapid escalation underscores the urgent need for tech companies to implement robust safeguards and for governments to establish clear legal frameworks to protect citizens from AI-related harms.

The Grok case exposes the limits of self-regulation by tech platforms and the mounting pressure on governments to intervene. The spread of AI-generated abuse demands immediate action that balances innovation with the protection of fundamental rights.