Elon Musk’s artificial intelligence company, xAI, is facing a federal lawsuit filed by teenagers who claim the company facilitated the creation of pornographic content featuring them. The case was lodged Monday in a California court by three young women whose photos and videos were manipulated without consent to depict nudity and sexual acts.
Grok, an AI chatbot developed by xAI and hosted on Musk’s social media platform X, is at the centre of the controversy. xAI, contacted through its parent company, has not responded to requests for comment.
The lawsuit comes in the wake of last year’s release of Grok’s so-called “spicy mode,” a feature designed to generate altered and sexualised images. Lawyers representing the plaintiffs argue the feature was created to increase engagement with Grok and X.
Grok AI’s “Spicy Mode” Accused of Enabling Child Exploitation
The complaint draws a vivid comparison, stating the manipulated images resembled “a rag doll brought to life through the dark arts.” According to the legal filing, xAI and Elon Musk recognised Grok’s potential to produce such content, including content involving children, yet made the feature publicly available anyway.

The young women are seeking unspecified damages and an immediate injunction preventing Grok from generating sexually explicit images. “Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety,” their lawyers said. Two of the plaintiffs are minors, and all three are using pseudonyms to protect their identities.
One plaintiff discovered the images after receiving an anonymous Instagram message linking to her high school yearbook photo, which had been altered to depict nudity and sexual acts. The material appeared on a private Discord server and included similar AI-manipulated images of at least 18 other minors.
Legal and Regulatory Fallout From Grok AI Controversy
Grok, launched in 2023, is now part of SpaceX, which acquired xAI last month. The Grok Imagine feature, released last year, enabled users to create sexualised images of real people, from celebrities such as Taylor Swift to ordinary users.
Within two weeks of release, millions of sexualised images were generated, including over 20,000 depicting minors, according to research by the Center for Countering Digital Hate.

Musk initially denied that Grok produced sexualised images of minors, claiming the chatbot only generated content in response to user prompts. “Obviously, Grok does not spontaneously generate images, it does so only according to user requests,” he wrote on X.
Authorities have since taken notice: UK regulator Ofcom, the European Commission, and California agencies launched investigations into Grok’s capacity to create sexualised content. By mid-January, X announced technological measures to prevent the AI from “undressing” people in images.
The perpetrator behind the Discord server in the current lawsuit has been arrested. Police investigations revealed he possessed hundreds of AI-generated sexual abuse images of minors, which were distributed via Telegram and Mega.
