Elon Musk’s AI chatbot Grok has disabled its image-generation feature for non-paying users in response to controversy over its use to make sexualized deepfakes of women and children.
Musk has faced the threat of fines, and several governments have now openly criticized the tool for producing sexually explicit images.
Some users reportedly used Grok to create nude images of women and children, often in sexualized poses.
Grok responded to people on Musk’s social media site X on Friday by posting, “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.”
As a result of the change, many users will no longer be able to generate or edit images with the AI. Paying subscribers must provide the platform with credit card details and other personal information.
The European Commission said last week that the nude images of women and children were illegal, and on Thursday it ordered X to preserve all internal documents and data relating to Grok until 2026.
UK Prime Minister Keir Starmer said X has “got to get a grip of this” and said he had asked communications regulator Ofcom “for all options to be on the table,” according to media reports. He called the images “unlawful” and said Britain was “not going to tolerate it.”
France, Malaysia, and India have also criticized Musk’s platform over the issue.
“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk wrote on X last week in response to a post about the explicit images.
X’s official “Safety” account subsequently stated that it addresses illegal content on X “by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”