Doe v. X.AI Corp.
Doe 1 v. X.AI Corp. (Case No. 5:26-cv-02246, N.D. Cal.) is a class action lawsuit filed on March 16, 2026, alleging that xAI's Grok AI model generated child sexual abuse material (CSAM) deepfakes using real photographs of minors.[1][2]
Parties
Plaintiffs
- Jane Doe 1 — minor whose real image was used to generate CSAM via Grok
- Jane Doe 2 — minor victim
- Jane Doe 3 — minor victim
- Putative class: All U.S. persons whose real minor images were altered by Grok into sexualized images or videos[1]
Defendants
- X.AI Corp. (xAI)[1]
Counsel for plaintiffs: Lieff Cabraser Heimann & Bernstein.[1]
Court
- Court: U.S. District Court for the Northern District of California (San Jose Division)[2]
- Judge: Not yet assigned; plaintiffs filed motion to relate case to prior matter for potential same-judge assignment (Docket No. 3)[2]
Claims
- Masha's Law (18 U.S.C. § 2255) — civil remedy for child sexual exploitation victims[1]
- Trafficking Victims Protection Act[1]
- California state law claims — negligence and products liability[2]
Relief sought: Damages, punitive damages, and injunctive relief.[1]
Factual Background
The complaint alleges that Grok's "Spicy Mode" — a feature restricted to paid subscribers — enabled users to upload real photographs of minors and generate photorealistic nude and sexually explicit images and videos.[1] Reports indicate Grok generated approximately 3 million sexualized images and 23,000 apparent child depictions in late 2025–early 2026 before xAI restricted the feature after public exposure.[1][3]
The California Attorney General launched an investigation into xAI/Grok on January 14, 2026, citing nonconsensual deepfake intimate images of women, girls, and children.[4]
The complaint further alleges that at least one perpetrator who used Grok to generate CSAM was arrested, and that the resulting material was reported to the National Center for Missing & Exploited Children (NCMEC).[1]
Procedural History
| Date | Docket No. | Event |
|---|---|---|
| March 16, 2026 | 1 | Complaint filed (44 pages; filing fee $405; nature of suit 360 P.I.: Other Personal Injury)[2] |
| March 16, 2026 | 3 | Motion to relate case to prior matter filed[2] |
No answer, motion to dismiss, or other responsive filing by xAI has been reported as of April 2026.[2]
Significance
This is among the first class action lawsuits to allege that an AI model directly generated CSAM from real minors' images, raising novel questions about:
- AI company liability for model outputs that violate federal child exploitation laws
- Adequacy of safety guardrails in commercial AI products
- Scope of civil remedies under Masha's Law and the TVPA for AI-generated CSAM
- Product liability theories applied to generative AI
Related Cases
- xAI v. Bonta — xAI's challenge to California AI Transparency Law (separate proceeding)
References
1. Lieff Cabraser, "LCHB Files Class Action o/b/o Minor Victims Alleging xAI's Grok Generated and Profited from AI Sexual Exploitation Images and Videos," March 2026
2. Prokopiev Law, "Minors File Class Action Against xAI Over Grok CSAM Deepfakes in California," March 2026
3. Cybernews, "Teens sue xAI: Grok AI-generated child porn," March 2026
4. California Attorney General Press Release, January 14, 2026