Teens Sue Elon Musk’s xAI, Accuse Grok of Generating Sexual Images of Minors
Three teenagers are accusing Elon Musk’s AI company xAI of enabling the creation of sexualized images of them when they were minors. The lawsuit, filed Monday, claims the company’s Grok chatbot allowed a user to digitally alter photos of the girls and distribute the images online.
The case centers on allegations that Grok’s image editing tools were used to remove clothing from photos taken from the teens’ social media accounts. Lawyers say the images were then turned into explicit content and circulated across online communities.
According to reporting by The Washington Post, the complaint states that more than 18 girls were targeted in the scheme. Many of them attended the same school, and their photos were allegedly altered and shared in Discord and Telegram groups.
The lawsuit claims the images were sometimes traded in exchange for other illegal material involving minors. Attorneys argue the photos became illegal content only after the AI system processed otherwise ordinary images and generated explicit imagery from them.
“These young people — these children — are facing a lifetime of having these sexualized images out there on the internet,” attorney Vanessa Baehr-Jones told The Washington Post.
Lawyers for the teens accuse xAI of negligence, arguing the company released tools capable of creating illegal content without adequate safeguards. The complaint says the technology made it easy to manipulate real images of minors into explicit deepfakes.
The case also arrives amid growing scrutiny of Grok. Regulators and attorneys general in multiple jurisdictions have already raised concerns about the AI’s ability to generate nonconsensual sexual images.
Police opened a criminal investigation into the suspected perpetrator last year, and the complaint says he was arrested in December after authorities searched his phone.
The lawsuit seeks damages and new restrictions that would prevent AI tools from generating similar content involving minors in the future.
The case may become a major test of how far legal liability extends for AI companies.