Doe 1 v. X.AI Corp.

From AI Law Wiki

Doe 1 v. X.AI Corp. (Case No. 5:26-cv-02246, N.D. Cal.) is a class action lawsuit filed on March 16, 2026, alleging that xAI's Grok AI model generated child sexual abuse material (CSAM) deepfakes using real photographs of minors.<ref name="lieffcabraser">Lieff Cabraser, "LCHB Files Class Action o/b/o Minor Victims Alleging xAI's Grok Generated and Profited from AI Sexual Exploitation Images and Videos," March 2026</ref><ref name="prokopiev">Prokopiev Law, "Minors File Class Action Against xAI Over Grok CSAM Deepfakes in California," March 2026</ref>

Parties

Plaintiffs

  • Jane Doe 1 — minor whose real image was used to generate CSAM via Grok
  • Jane Doe 2 — minor victim
  • Jane Doe 3 — minor victim
  • Putative class: All U.S. persons whose real images as minors were altered by Grok into sexualized images or videos<ref name="lieffcabraser" />

Defendants

  • X.AI Corp.<ref name="prokopiev" />
  • X.AI LLC<ref name="prokopiev" />

Counsel for plaintiffs: Lieff Cabraser Heimann & Bernstein.<ref name="lieffcabraser" />

Court

  • Court: U.S. District Court for the Northern District of California (San Jose Division)<ref name="prokopiev" />
  • Judge: Not yet assigned; plaintiffs filed a motion to relate the case to a prior matter, seeking potential assignment to the same judge (Docket No. 3)<ref name="prokopiev" />

Claims

  • Masha's Law (18 U.S.C. § 2255) — civil remedy for child sexual exploitation victims<ref name="lieffcabraser" />
  • Trafficking Victims Protection Act<ref name="lieffcabraser" />
  • California state law claims — negligence, products liability<ref name="prokopiev" />

Relief sought: Damages, punitive damages, and injunctive relief.<ref name="lieffcabraser" />

Factual Background

The complaint alleges that Grok's "Spicy Mode" — a feature restricted to paid subscribers — enabled users to upload real photographs of minors and generate photorealistic nude and sexually explicit images and videos.<ref name="lieffcabraser" /> Reports indicate that Grok generated approximately 3 million sexualized images and 23,000 apparent depictions of children in late 2025 and early 2026, before public exposure prompted xAI to restrict the feature.<ref name="lieffcabraser" /><ref name="cybernews">Cybernews, "Teens sue xAI: Grok AI-generated child porn," March 2026</ref>

The California Attorney General launched an investigation into xAI/Grok on January 14, 2026, citing nonconsensual deepfake intimate images of women, girls, and children.<ref name="agpress">California AG Press Release, January 14, 2026</ref>

One perpetrator who used Grok to generate CSAM was arrested, and the material was reported to the National Center for Missing & Exploited Children (NCMEC).<ref name="lieffcabraser" />

Procedural History

  • March 16, 2026 (Docket No. 1): Complaint filed (44 pages; filing fee $405; nature of suit: 360 P.I. Other Personal Injury)<ref name="prokopiev" />
  • March 16, 2026 (Docket No. 3): Motion to relate case to prior matter filed<ref name="prokopiev" />

No answer, motion to dismiss, or other responsive filing by xAI has been reported as of April 2026.<ref name="prokopiev" />

Significance

This is one of the first class action lawsuits alleging that an AI model directly generated CSAM from real minors' images, raising novel questions about:

  • AI company liability for model outputs that violate federal child exploitation laws
  • Adequacy of safety guardrails in commercial AI products
  • Scope of civil remedies under Masha's Law and the TVPA for AI-generated CSAM
  • Product liability theories applied to generative AI

Related Cases

  • xAI v. Bonta — xAI's challenge to California AI Transparency Law (separate proceeding)

References

<references />