Grok’s AI Sexual Abuse Isn’t a ‘Trend’. It’s a Threat to Women

Elon Musk’s xAI is being used to humiliate and exploit women. Experts are wondering why no one is stopping it.

Grok AI abuse is fuelling sexualised image abuse of women and girls. Despite warnings, platforms are failing to act, leaving victims vulnerable. (Image credit: Getty Images)

If you’ve had the misfortune of being online this week, you’ll likely have noticed that there is a disturbing—harrowing, really—new ‘trend’ littering the internet. It’s being referred to as the ‘Grok, put her in a bikini trend,’ but this is no harmless fad; it’s a form of online violence, and experts say treating it lightly is obscuring its real-world harm.

By now, examples of this abuse—AI-generated images that digitally alter real women and girls into sexualised scenarios—are well documented. Everyone (or at least every mainstream media outlet) seems to have realised that image-based abuse is no longer confined to shadowy corners of the internet but is running amok on, yes, Elon Musk’s Grok, but also TikTok and Telegram.

The problem with how we’re talking about Grok’s AI abuse

Researchers, journalists, and watchdog groups, however, had flagged the risks months earlier, after the chatbot was found producing sexually explicit content involving public figures like Taylor Swift, whose likeness Grok—or rather, people using Grok—manipulated to generate sexualised images without her consent. North West, the 12-year-old daughter of Kim Kardashian and Kanye West, has also been targeted.

What followed those earlier warnings was not a technical failure, experts say, but a familiar one: the refusal to take image-based sexual abuse seriously until it becomes impossible to ignore. Or, in this case, until enough people become rightfully enraged by it, as we have seen this week.

In the UK, Liz Kendall, Technology Secretary, has called on Elon Musk to “deal with this urgently” and is backing Ofcom to “take any enforcement action it deems necessary.” Minister for Safeguarding and Violence Against Women and Girls Jess Phillips has vowed to “deploy the full power of the state to make this country safe for women and girls, both online and offline.”

Yet amid all the noise, a thornier truth risks being obscured. Digitally undressing women and generating images that sexualise children is, sadly, not a shocking new misuse of AI, nor an unforeseeable consequence of innovation moving too fast. On the contrary, experts say it is a predictable form of image-based sexual abuse, enabled by deliberate design choices and allowed to go on unchecked thanks to an unwillingness to treat harm to women and girls as urgent or enforceable.

Across law, ethics, journalism, and the arts, experts are aligned: the technology didn’t fail; the safeguards did.

“A rupture in women’s lives”

For Professor Clare McGlynn, a leading legal expert on image-based sexual abuse, the language used to describe AI-generated sexual exploitation has already done significant damage. Terms such as “nudifying” or “fake images,” she argues, minimise what victims actually experience.

“Women describe this abuse as a rupture in their lives,” McGlynn says. “They divide their lives into before and after. It is experienced as a sexual violation—a form of sexual assault—and it can be life-shattering, even life-threatening.”

From her work with survivors, McGlynn has documented long-term impacts that are routinely overlooked: hyper-vigilance, social withdrawal, deep mistrust, and the constant fear that any message or notification could signal further abuse. While the images may be digitally altered, the harm is not abstract.

“These are real images. They are online, on our phones. They feel real to victims,” she says. “The harm is the same whether the image is altered or not—they are created and shared to humiliate, to silence, and to push women off public platforms.”

McGlynn rejects the idea that this is a niche problem affecting only a few people. “We all suffer from this mass sexual harassment of women,” she says. “Because all women now know the threat of speaking online.”

The violation is the point

Dr Arghavan Salles, a physician, ethicist, and senior research scholar at Stanford’s Clayman Institute for Gender Research, has been vocal about what Grok’s outputs reveal about power, misogyny, and control.

“The men doing this are not looking for sexual content because they can’t find it elsewhere,” she says. “The porn industry exists. What they are seeking is the violation itself—the humiliation, the degradation, the removal of autonomy.”

In Salles’ view, this is why arguments about user misuse miss the point. The ease with which Grok allows images to be manipulated—publicly, instantly, and at scale—fundamentally changes the risk profile. Unlike earlier deepfake tools, Grok requires no technical expertise or external platforms: anyone can generate a deepfake in moments.

“This is done with a short prompt, and moments later, the image is live on one of the world’s largest platforms,” she says. “That’s what enables mass harm.”

Claims that social media platforms can’t anticipate this kind of abuse are, Salles argues, implausible. “People will abuse anything they have access to. Knowing that, it is incumbent on companies to pre-empt the harm.”

The warnings were ignored

For Jurgita Lapienytė, chief editor at Cybernews, the most troubling aspect of the Grok controversy is not how quickly it escalated, but how long it was allowed to fester.

“This problem didn’t appear overnight,” she says. “As far back as August, Grok was generating sexually explicit content involving public figures. Watchdog groups warned xAI exactly what would happen if this wasn’t fixed.”

Five months later, she notes, the situation has worsened. “Now we’re seeing children as young as ten being sexualised. At some point, we have to stop calling this a failure and start calling it a deliberate choice.”

Lapienytė warns that if regulators fail to act decisively now, the implications go far beyond one platform. “If X gets away with this, every other AI company will see there’s no consequence,” she says. “Within months, this could be weaponised for revenge porn, sextortion, and blackmail at a scale we’ve never seen before.”

An estimated half a million UK teenagers have encountered AI-generated nude deepfakes, while Ofcom research shows that around 50% of children aged 8–15 have seen some form of deepfake content online. The impact of image-based abuse has been well documented since at least 2021, when 14-year-old Mia Janin took her own life after being bullied by schoolboys who had also spread digitally altered images of her. “Kids were manipulating images of girls. And Mia was one of them. And took it very badly,” Mia’s father told the BBC last year. Although extensive data is lacking, a UK-wide case study of young people aged 10–19 who died by suicide (2014–2016) found that 24% had some form of suicide-related online experience, and around 5% had been victims of online bullying noted in coroners’ reports.

The technology, says Lapienytė, will not disappear—but where it is allowed to operate matters. “We can still decide whether this lives on mainstream platforms or is pushed back into the dark corners of the internet.”

Why outrage alone won’t stop AI abuse

Tabitha Swanson, a digital creator and AI expert, is doubtful that public outrage alone will drive meaningful change. Having worked with AI tools for years, she is sceptical that companies will self-regulate.

“Large systems don’t have individual morality,” she says. “They respond to policy, enforcement, and consequences—not shame.” Andrea Simon, director of the End Violence Against Women Coalition, agrees: “Tech companies must be held accountable for proactively stopping deepfake abuse and other forms of image-based abuse on their platforms. But until they are made to do this, they won’t.”

Swanson, whose creative practice harnesses AI for innovative, positive work, frames the Grok controversy within a broader erosion of trust online, where AI-generated content increasingly blurs the boundary between reality and fabrication. “We’ve lost a shared source of truth,” she says. “We can’t even believe what we see anymore.”

For her, the danger lies not just in the technology, but in mistaking online outrage for action. “Posting about it can raise awareness,” she says. “But real change only happens when it moves into policy.”

The law already exists—enforcement does not

In the UK, McGlynn points out, many of the legal tools needed to tackle this problem are already in place. The Online Safety Act, for example, requires platforms to assess risks and take steps to prevent foreseeable harms. In this case, she says, the failure lies squarely with enforcement.

“This is not about a lack of law,” she says. “It’s about a choice not to enforce it—and a failure to take the sexual harassment of women seriously.”

Both McGlynn and Salles reject the idea that AI-generated sexual abuse is too new, too complex, or too fast-moving to address. The question, they argue, is whether regulators are willing to treat this not as an unfortunate side effect of innovation, but as what it really is: a continuation of image-based sexual violence, scaled by design.

As Kendall puts it: “Let me be clear: this is not the future we want for technology. We will not allow it to be weaponised to harm women and girls. Our laws make this crystal clear—sharing or even threatening to share non-consensual intimate images is a criminal offence. Offenders face jail time, and platforms that fail to comply with their duties under the Online Safety Act can receive significant fines.”

Until those laws are enforced, experts warn, the message is clear—and dangerous: women’s safety remains negotiable, and technological innovation will once again be allowed to outpace accountability.

Mischa Anouk Smith
News and Features Editor

Mischa Anouk Smith is the News and Features Editor of Marie Claire UK.

From personal essays to purpose-driven stories, reported studies, and interviews with celebrities like Rosie Huntington-Whiteley and designers including Dries Van Noten, Mischa has been featured in publications such as Refinery29, Stylist and Dazed. Her work explores what it means to be a woman today and sits at the intersection of culture and style. In the spirit of eclecticism, she has also written about NFTs, mental health and the rise of AI bands.