News-April-30-2026

From AI Law Wiki

April 30, 2026 — Daily digest of AI law developments.

This daily digest covers four major AI law stories.


Musk v. Altman Trial Opens: Days 1 & 2 Recap

The Musk v. Altman bench trial, with an advisory jury, began April 28, 2026, in the Northern District of California before Judge Yvonne Gonzalez Rogers. Elon Musk testified that OpenAI co-founders Sam Altman and Greg Brockman betrayed the nonprofit mission he claims induced his $38 million in donations. OpenAI's counsel argued Musk quit when he "didn't get his way" and never fulfilled his promised $1 billion contribution. On Day 2 (April 29), Musk grew combative under cross-examination, accusing OpenAI's lawyer of asking questions "designed to trick" him. Musk seeks $134 billion in damages. The trial is expected to last four weeks.[1][2]


Musk v. Altman: Day 3 — Distillation Admission (April 30, 2026)

On the third day of the Musk v. Altman bench trial, Elon Musk appeared to admit under cross-examination that his AI company xAI had used OpenAI's models to train its own through the process of distillation — a technique where one AI model is trained to mimic another. When OpenAI attorney William Savitt asked whether xAI had distilled OpenAI models, Musk replied that "generally all the AI companies" do it, and when pressed, said "partly." He characterized it as "standard practice to use other AIs to validate your AI."[3][4]

The admission was notable given OpenAI's prior efforts to block distillation by foreign competitors, particularly Chinese AI labs. The Trump administration had also announced in April 2026 that it would share information with US AI companies about foreign distillation threats. Anthropic had previously blocked both OpenAI's and xAI's access to its Claude models over terms of service violations.
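For readers unfamiliar with the technique at issue, distillation can be illustrated with a toy sketch: a "student" model is trained not on original labeled data but on the output distributions of a "teacher" model, until it mimics the teacher's behavior. This is a minimal, hypothetical illustration using small linear models; it is not a depiction of any company's actual training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over each row of logits
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical "teacher": a fixed linear classifier standing in for a large model.
W_teacher = rng.normal(size=(4, 3))
X = rng.normal(size=(200, 4))                 # unlabeled prompts/inputs
teacher_probs = softmax(X @ W_teacher)        # the teacher's soft outputs

# "Student": a second model trained only to reproduce the teacher's outputs.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(300):
    student_probs = softmax(X @ W_student)
    # Gradient of cross-entropy between teacher and student distributions
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# After training, the student's outputs closely track the teacher's.
gap = np.abs(softmax(X @ W_student) - teacher_probs).mean()
print(f"mean output gap after distillation: {gap:.4f}")
```

The key point the sketch captures is that distillation requires only query access to the teacher's outputs, not its weights or training data, which is why providers attempt to police it through terms of service rather than technical means.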

See also: Musk v. Altman case page


Meta Threatens to Shut Down in New Mexico Over Child Safety Order

Meta filed a brief on April 30, 2026, warning that it may be forced to withdraw Facebook and Instagram entirely from New Mexico if a state judge orders the company to adopt sweeping new child safety features. The threat comes in the second phase of a trial that has already resulted in $375 million in civil penalties, after a jury found Meta knowingly harmed children's mental health and concealed what it knew about child sexual exploitation on its platforms. The bench trial on remedies begins May 4, 2026.[5]

Prosecutors seek court-ordered changes including effective age verification (99% accuracy), removal of addictive features like infinite scroll and auto-play, warning labels about platform risks, permanent bans for adults engaged in child exploitation, and an independent oversight committee. Meta called the demands "technically impractical, impossible for any company to meet and disregard the realities of the internet." New Mexico Attorney General Raúl Torrez responded that "Meta's refusal to follow the laws that protect our kids tells you everything you need to know about this company and the character of its leaders." The case is the first to reach trial among more than 40 state AG lawsuits against Meta over youth mental health.


Bipartisan CHATBOT Act Introduced in Senate

Senators Ted Cruz (R-TX), Brian Schatz (D-HI), John Curtis (R-UT), and Adam Schiff (D-CA) introduced the CHATBOT Act (S.2714), requiring AI companies to establish parental "family accounts" for managing minors' chatbot access, mandating parental consent, limiting manipulative design features, and prohibiting targeted advertising to children. The bill is supported by over 20 organizations including the American Federation of Teachers and Americans for Responsible Innovation.[6][7]


White House Opposes Anthropic's Mythos Expansion

On April 30, 2026, the Trump administration told Anthropic it opposes the company's plan to expand access to Mythos, its advanced cybersecurity AI model capable of autonomously discovering zero-day vulnerabilities, to approximately 70 additional organizations. The White House cited security concerns about potential misuse and compute constraints that could degrade government access. Meanwhile, the administration is also developing an executive action to bypass the Pentagon's supply chain risk designation of Anthropic. The National Security Agency (NSA) is currently among the agencies using Mythos to probe for vulnerabilities in Microsoft products and other widely-used software.[8][9]

References