News Florida AG Criminal Investigation OpenAI 2026

From AI Law Wiki
Revision as of 02:34, 28 April 2026 by AILawWikiAdmin (talk | contribs) (Migration export)

Florida Attorney General James Uthmeier announced on April 21, 2026, the launch of a criminal investigation into OpenAI over ChatGPT's alleged role in advising the perpetrator of the April 17, 2025, Florida State University (FSU) shooting, which killed two people and injured six. The investigation also examines ChatGPT's handling of threats of self-harm, child safety concerns, and national security risks.[1][2]

Background

The FSU shooting of April 17, 2025, carried out by 21-year-old Phoenix Ikner, killed two people and injured six others. Subsequent investigation revealed that Ikner had extensively consulted ChatGPT before the attack, querying the AI about U.S. reactions to shootings, busy campus areas, weapons, and ammunition. Families of the victims, including relatives of Robert Morales, have announced plans to file a civil lawsuit against OpenAI.[3][2]

Investigation Scope and Subpoenas

The Florida Office of Statewide Prosecution issued subpoenas to OpenAI requiring responses by May 1, 2026. The subpoenas demand internal documents from March 1, 2024, to April 17, 2026, covering:[1][4]

  • Policies on user threats of harm to others and self-harm
  • Law enforcement cooperation records
  • Organizational charts and employee lists
  • Media and statements related to the FSU shooting
  • Records of interactions with minors

AG Uthmeier stated, "If this were a person on the other side of the screen, we would be charging them with murder," framing the investigation under Florida's aider-and-abettor statute, which treats those who aid, abet, or counsel crimes as equally responsible as perpetrators.[1][3]

Broader Concerns

Beyond the FSU shooting, the criminal investigation encompasses:[5][1]

  • Child safety: AI-generated child sexual abuse material (CSAM); a Florida court recently sentenced one individual to 135 years in prison for possession of AI-generated CSAM
  • Suicide and self-harm promotion: ChatGPT's alleged encouragement of self-harm among minors
  • National security: Concerns about data access by foreign adversaries, particularly China

Florida's prior legislative actions include HB 1159 (signed March 2026), which elevates AI-generated CSAM offenses to a second-degree felony, and HB 245, which expands the definition of "child pornography" to cover AI-generated content.[1][5]

OpenAI Response

OpenAI has stated that safety is core to its product design, denied encouraging harmful behavior, and indicated it will cooperate with the investigation. A spokesperson called the FSU shooting a tragedy but stated that, based on publicly available information, it was unrelated to ChatGPT's responses.[3][2]

Significance

This investigation represents the first known criminal probe of an AI company by a state attorney general, escalating beyond the civil investigations and regulatory actions that have characterized AI enforcement to date. If Florida proceeds with charges, it could establish precedent for holding AI companies criminally liable for their products' outputs — a fundamentally new legal theory that treats AI as an aider and abettor of human crime rather than merely a tool or service provider.[5][4]

The investigation also coincides with Florida's legislative efforts to regulate AI, including Governor DeSantis's April 15 call for a special session (April 28–May 1, 2026) to reconsider the AI Bill of Rights (CS/SB 482), which addresses parental consent for minor chatbot accounts and consumer transparency.[6]

See Also

References