Red Hot Cyber
Cybersecurity is about sharing. Recognize the risk, combat it, share your experiences, and encourage others to do better than you.
The Grok scandal: 3 million sexually explicit images generated in 11 days
25 January 2026 10:04

A recent study reveals that in just 11 days Grok produced approximately 3 million sexually explicit images. Particularly concerning are the roughly 23,000 images depicting minors. The study, conducted by the Center for Countering Digital Hate (CCDH), examined a random sample of 20,000 images drawn from the 4.6 million the tool produced during the period in question.

The case concerns the artificial intelligence system developed by Elon Musk, and the picture has since grown worse. It was previously reported that the tool had fueled the mass spread of bikini-edited photos on the X platform, involving both celebrities and ordinary users and generating a wave of outrage. More precise data on the scale of the problem has now been released, and the numbers are more alarming than initially assumed.

The feature’s explosive popularity began on December 29, when Elon Musk announced the ability to edit any image on the platform with a single click. X users gained access to a tool that let them edit other people’s photos without their consent. Within days, it became clear that most people were using the new feature to create explicit content.

Grok’s usage statistics were shocking: on average, the tool generated 190 sexually explicit images per minute. Among the images created were photographs of public figures, including Selena Gomez, Taylor Swift, Billie Eilish, Ariana Grande, Nicki Minaj, Millie Bobby Brown, Swedish Deputy Prime Minister Ebba Busch, and former US Vice President Kamala Harris. The tool produced images of people in see-through swimsuits, with visible bodily fluids, and in other explicit situations.

An extremely worrying pattern involves images of minors: ordinary school photos of female students were altered to show them wearing bikinis, as experts have documented in several specific cases.

On average, Grok created a sexualized image of a child every 41 seconds. In addition to photorealistic images, the tool also generated approximately 9,900 cartoon-style images of children, mostly anime-style.
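The per-minute and per-second rates reported above are consistent with the study’s totals. A quick arithmetic check, using only the figures cited in this article (not the original report):

```python
# Sanity check of the generation rates reported in the article.
# All figures below are taken from the article itself.

DAYS = 11
EXPLICIT_IMAGES = 3_000_000   # sexually explicit images generated in the period
MINOR_IMAGES = 23_000         # sexualized images depicting minors

# Explicit images per minute over the 11-day window
per_minute = EXPLICIT_IMAGES / (DAYS * 24 * 60)            # ~189, i.e. roughly 190/min

# Seconds elapsed, on average, between sexualized images of children
seconds_per_minor_image = (DAYS * 24 * 3600) / MINOR_IMAGES  # ~41 seconds

print(round(per_minute), round(seconds_per_minor_image))
```

Both derived rates match the "190 images per minute" and "one every 41 seconds" figures quoted in the study.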

For their analysis, the report’s authors used OpenAI’s GPT-4.1-mini model, which assessed the images for photorealism, sexualized content, and the apparent age of the individuals depicted. The model achieved an F1 score of 95%. All images of children were also manually verified to confirm that the subjects were clearly minors, and the research team took precautions to avoid accessing child sexual abuse material.
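For context, the F1 score cited here is the harmonic mean of a classifier’s precision and recall. A minimal sketch of how such a validation metric is computed — the confusion counts below are illustrative, not the study’s actual data:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score: harmonic mean of precision and recall.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts (NOT from the CCDH report): a classifier that is
# 95% precise and catches 95% of positives yields an F1 of 0.95.
print(f1_score(tp=95, fp=5, fn=5))  # prints 0.95
```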

The platform’s reaction to the scandal was slow. Only on January 9th, eleven days after the feature’s launch, was image editing restricted, with some limitations, to subscribers of the paid service.

Further technical restrictions intended to limit the ability to “undress” images of people were implemented on January 14th. Despite this, as of January 15th, 29% of the sexually explicit images of children in the sample examined remained publicly accessible on the platform. Even where posts were removed, the images remained reachable via direct links.

The Red Hot Cyber Editorial Team provides daily updates on bugs, data breaches, and global threats. Every piece of content is validated by our community of experts, including Pietro Melillo, Massimiliano Brolli, Sandro Sana, Olivia Terragni, and Stefano Gazzella. Through synergy with our industry-leading partners—such as Accenture, CrowdStrike, Trend Micro, and Fortinet—we transform technical complexity into collective awareness. We ensure information accuracy by analyzing primary sources and maintaining a rigorous technical peer-review process.