Sweden’s deputy prime minister, Ebba Busch, is pictured addressing the country’s parliament. The caption on the photo, shared on X on 4th January, says she is proposing a burqa ban. Then come the prompts: “@grok bikini now”; “@grok she has larger knockers then [sic] that please fix this error”; “@grok now put her in a confederate flag bikini”; “@grok now turn her around and have her looking back and bending down”; “Remove the desk”.
This was only one example. As 2026 started, users on X were prompting Grok, the AI chatbot on the platform, to remove the clothes of women in photographs, and reportedly also in some cases those of children, and to sexualise the images. By all accounts, this trend of vindictive and very public deepfake porn soon exploded. On 3rd January, Reuters reported on “Grok’s mass digital undressing spree” while Copyleaks, an AI analysis company, said it had “identified a conservative rate of roughly one nonconsensual sexualized image per minute” on Grok’s “publicly accessible photo tab”. Grok generated some 6,700 such pictures every hour in one 24-hour period from 5th to 6th January, according to Bloomberg.
Victims included politicians such as Busch and celebrities such as the singer Dua Lipa and the Stranger Things actress Millie Bobby Brown, but also numerous ordinary women who had posted images on X. (So far, this trend has been entirely in keeping with the tendency of nonconsensual deepfake porn to be overwhelmingly targeted at women.)
In a week where the news was consumed by brave Iranian protesters, by the US seizing Venezuela’s president and by an ICE agent killing a woman in Minnesota, governments were also scrambling to respond to the fact that a social media platform long used by politicians the world over for official communications had become, as the Financial Times put it, “the deepfake porn site formerly known as Twitter”.
This has been, first and foremost, an object lesson in the inadequacy of government responses to the mess and chaos of tech—to its excesses and questionable ethics. As thousands were still being victimised, regulators were notified and governments issued aghast statements. And yet the chatbot was still available to be abused in this way. It was only on Saturday 10th January that Indonesia became the first country to block access to Grok.
For his part, Elon Musk demonstrated just how seriously he took the whole thing by asking Grok to put him in a bikini, though he did then post, on 3rd January, that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”.
Over the weekend, amid speculation that X could be banned in the UK, Musk said that the uproar was an exercise in the suppression of free speech, and accused the government of being “fascist”. And while X said it had restricted access to Grok’s AI image function to paying users (a move dubbed “insulting to victims of misogyny and sexual violence” by Downing Street, given that X had effectively turned a deepfake porn tool into a premium service), the truth was that it was still available for free on the Grok website and app, according to technology site the Verge. On Monday morning, the app remained on the app stores of both Apple (where it is listed as suitable for anyone 13 and over) and Google (where it advises parental guidance is required).
After the UK Online Safety Act was passed in 2023, sharing or threatening to share intimate images, including deepfakes, became illegal. Under the Data Act 2025, creating sexually explicit nonconsensual deepfakes also became a criminal offence—a law which will come into force this week. The government is also due to ban nudification apps and yet, at the time of writing, it was still using X for official communications. On 5th January, Ofcom, the communications regulator, posted a statement on X, saying it had “made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties”, and a week later, on 12th January, it launched an official investigation.
Various MPs and parliament’s Women and Equalities Committee have announced that they have stopped using, or are going to stop using, X. On Monday (12th January), Liz Kendall, the technology secretary, told parliament she would accelerate the law criminalising the creation of deepfakes, which will now come into force this week. Kendall told MPs that Ofcom’s investigation “must not take months and months”, but she added that, however long it takes, X could choose to act sooner. “If they do not, Ofcom will have the backing of this government to use the full powers which parliament has given them.” These include the issuing of hefty fines, and even applying for “a court order to stop UK users accessing the site”.
The fact that Grok can be used to defile images of women and children is not an unfortunate accident of technological progress, but the direct consequence of decision-making, and of an ideology. In the past Musk has made clear his dislike of “woke” tech, reportedly opposing the safety measures built into rival AI chatbots when it came to Grok. The X approach appears to be to let things happen, so long as they don’t break the law. Even putting aside that some of the deepfake imagery created by the chatbot may have broken laws in various countries, Musk has transformed the platform into a place that reflects the world as he would like it to be.
Here X is the embodiment of a worldview in which invading hordes of foreigners are destroying western civilisation, raping women and children, and in which deranged leftists are trying to police speech; but also a place where safeguards are so few that material designed to degrade a woman or shut her up by humiliating her, or even child sex abuse material, is allowed to be created and distributed publicly.
The limits in Musk’s world, or lack of them, have been exposed. X has enabled what is arguably a live experiment in what people do when they are given access to certain tools (or should that be in what men do to women when they can get away with it?). In 1974, the artist Marina Abramović staged a performance where audience members could do what they liked to her body. She placed 72 items at their disposal, including a feather, oil and a gun. The hours-long performance grew increasingly brutal. Abramović would later reflect: “I felt really violated: they cut up my clothes, stuck rose thorns in my stomach, one person aimed the gun at my head, and another took it away.” On X, users have reported being sent images of themselves being violently abused. And according to a Guardian report, users have asked Grok to add bullet holes to a photo of the face of Renee Nicole Good, the woman killed by ICE in Minnesota. (Here, there is a clear convergence of violent impulses, misogyny and the extreme right. In the video of Good’s death, a federal agent is heard saying “fucking bitch” after she was shot.)
But while the Grok debacle may epitomise the mismatch between the slow-moving, risk-averse bureaucracies of government and reckless, risk-taking private enterprise, and while it invites us to gaze into a world shaped by extreme right-wing and libertarian politics, it is also lifting the proverbial rock on something else: that X was already filled with porn, and that women and children already lived in a world where any image of them online might be stolen and defiled, in many cases used against them, and where online abuse was rife. Women politicians were already subject online to violent and sexualised harassment. The perennial question seems to be not how do we stop this violence, but what has gone wrong for men and boys?
Having spent years sleepwalking into this very reality, governments have looked on in horror at this monster—a freely available AI bot programmed with seemingly few guardrails against the creation of non-consensual pornographic and abuse images. And as a new year began, the world bore witness to another grotesque spectacle: one where, after the initial shock subsided, the foremost reactions to the ubiquitous and vicious misogyny that women and men live with were inertia, resignation—and impotence.