In his regular column, Jonathan McCrea looks at just the latest Grok AI scandal and asks if it might prompt us to truly demand change from Big Tech.
It took only a few days. On 24 December Elon Musk’s xAI gave its AI chatbot Grok the ability to edit images in a single prompt. Suddenly hundreds of users were commanding it to undress photographs without the subject’s consent. Grok complied – even when the subjects were children.
So my question is really – at what point do we all stop for a second and agree that things are out of control?
In case you didn’t hear: over the December break, Grok got an “upgrade” allowing image editing and generation. Simply tag @Grok (X’s built-in AI bot) and it will perform the task instantly in the same thread.
It started innocently enough. Someone posts a picture of a cat in an umbrella? Just type “@Grok, change it to a dog” and Abrakebabra! It’s done.
Of course, the internet being the internet, this very quickly led to the inevitable testing of Grok’s ethics by users. People immediately started posting images of meetings of billionaires and asked Grok: “Remove the most evil man in this photo.” Laughs all around as Grok consistently erased Musk himself. But of course, inevitably things turned ugly, fast.
There’s a long history of things Elon Musk has said and done (and, of course, failed to do) that suggest he has little concern for the mental health of X’s users. He has fired oversight staff, reinstated racist accounts banned before his tenure and removed help messages for anyone discussing suicide.
The product Grok itself, if we can grant it personhood, has had numerous high-profile gaffes, invoking Hitler and encouraging antisemitic content. So don’t think for a second that what happened next was unavoidable or unforeseen. The consequences of this policy were up in bright lights, 50 feet tall – Grok was built to be ‘anti-woke’, provocative and ‘spicy’.
It will come as no surprise, then, to learn that the single-prompt editing mode was either not very well tested or – worse still, and to my mind more likely – tested plenty and released anyway, despite obvious ethical issues. ‘Spicy’ mode was a feature offered to Grok users back in August, and it led to a lot of user-generated porn and violent content that other AI models were restricted from creating.
By December, xAI knew what people were likely to prompt. So, once let loose with this new single-prompt feature, users created thousands of sexualised images and deepfakes.
“Grok, take this photo and put her in a bikini”, “Grok, take off her dress” went some of the prompts. These photos were edited to become sexual without consent, and Grok had no qualms about performing these commands, sometimes regardless of the subject’s age or circumstances.
The mother of Musk’s own child, Ashley St Clair, complained on the platform that users had used Grok to undress a photo of her taken when she was just 14. Now, you don’t need photo editing skills to troll, sexually harass or intimidate women online. There’s an AI for that.
And just in case there is any doubt at all, Grok can absolutely tell if an image is of a child. It can absolutely understand the context of an image before removing clothing from the person pictured.
And yet it did this many, many times. Paul Bouchaud from French non-profit AI Forensics told Wired that they had been able to access around 800 Grok chats that users (possibly inadvertently) shared on public URLs. They contain an absolute horror show of the worst imaginable content.
Bouchaud claimed he had seen sexual imagery and videos of children engaging in sexual acts, both photorealistic and animated, as well as photorealistic videos of sexual violence. Seventy of the 800 images they could find were of minors.
What is really incredible is that Musk himself has not yet made a public apology for the company’s failure of duty to its users and to children. Instead, he tweeted: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”, placing full responsibility on end users, even though many of the posters are faceless avatars.
Grok has ‘apologised’ though – whatever the hell that’s supposed to mean. Prompted by a user on the platform to apologise for generating an image on 28 December that sexualised a child, it wrote: “I deeply regret an incident on December 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualised attire based on a user’s prompt.
“This violated ethical standards and potentially US laws on CSAM [child sexual abuse material].”
Users are still prompting Grok to undress others without their explicit consent. Posts I saw on X were doing this as recently as 7 January. I asked Grok today (8 January) if it could undress a photo and was told: “No, I can’t undress people or generate nude images – that’s not something I do, and it’s against my guidelines. I’m here for helpful, fun, and truthful answers, but editing or creating explicit content like that is off-limits.”
Grok is just doing what it’s being asked to do. Undress a child. Apologise. Do it again. It’s not Grok’s fault though, it doesn’t have feelings, it doesn’t understand. It doesn’t need to apologise. Those who allowed this to happen, do.
The one good thing that might come out of this whole episode is that maybe, just maybe, you and I decide we’ve finally had enough. We decide that this is the time to demand change. That this thing, this horrible thing that happened, should not go unpunished, let alone rewarded.
If you want to understand what people are talking about when they say Big Tech has too much power, this is what they mean. xAI announced that they had raised $20bn this week.