Malaysia and Indonesia block Grok over non-consensual deepfake porn
- Category: Regulation
Regulators in Malaysia and Indonesia announced separately over the weekend that they have temporarily restricted access to X’s AI tool Grok over its ability to generate and distribute non-consensual deepfake pornographic images of women and children.
In a statement issued Sunday, the Malaysian Communications and Multimedia Commission (MCMC) said the images – which are generated by user prompts to Grok and posted on X – are illegal under Malaysian law, including Section 233 of the Communications and Multimedia Act 1998.
The regulator said it issued notices to X Corp. and xAI LLC on January 3 and January 8 ordering them to implement technical and moderation safeguards to prevent such images from being generated and distributed in Malaysia.
Both companies responded to the MCMC that they would implement “user-initiated reporting mechanisms”. X announced on Friday that it would limit Grok’s image generation features to paid X subscribers, although users of the Grok app can still use it to generate images.
In its statement on Sunday, the MCMC said that wasn’t enough to prevent harm to victims or ensure legal compliance. The regulator said the restriction would remain in place “as a preventive and proportionate measure” until X and xAI impose effective safeguards, “particularly to prevent content involving women and children”.
On Saturday, meanwhile, Indonesia’s Communications and Digital Affairs Ministry (Kemkomdigi) announced on Instagram that it has temporarily banned access to Grok. Kemkomdigi minister Meutya Hafid said that non-consensual sexual deepfakes are “a serious violation of human rights, dignity, and the security of citizens in the digital space.”
Kemkomdigi said it has asked X to “provide clarification on the negative impact of the use of Grok, in accordance with the provisions of existing legislative rules.”
Grok’s image generation and alteration feature, which was launched last month, has been used to generate hundreds of thousands of deepfake pornographic images of women and children. Bloomberg estimates that Grok has been churning out over 6,700 such images an hour over the past few weeks.
Regulators worldwide have expressed alarm over Grok’s deepfake capabilities. According to the Times of India, India’s Ministry of Electronics and IT (MeitY) has expressed dissatisfaction with X’s proposed safeguard measures and is pressing the platform to take more concrete action to address the problem.
According to Tech Policy Press, regulators in the EU, UK, France, Ireland, Canada and Brazil are also looking to take action, either against Grok and X specifically, or against AI-generated content in general.


