News-April-30-2026

Revision as of 00:11, 1 May 2026 by AILawWikiAdmin (Add Day 4 distillation admission coverage)

April 30, 2026 — Daily digest of AI law developments.

This digest covers three major stories: the opening days of the Musk v. Altman trial, the bipartisan CHATBOT Act, and White House opposition to Anthropic's Mythos expansion.


Musk v. Altman Trial Opens: Days 1 & 2 Recap

The Musk v. Altman bench trial with advisory jury began on April 28, 2026, in the Northern District of California before Judge Yvonne Gonzalez Rogers. Elon Musk testified that OpenAI co-founders Sam Altman and Greg Brockman betrayed the nonprofit mission he claims induced his $38 million in donations. OpenAI's counsel argued that Musk quit when he "didn't get his way" and never fulfilled his promised $1 billion contribution. On Day 2 (April 29), Musk grew combative under cross-examination, accusing OpenAI's lawyer of asking questions "designed to trick" him. Musk seeks $134 billion in damages. The trial is expected to last four weeks.

See individual articles: Day 1 — Musk's Direct Testimony | Day 2 — Fiery Cross-Examination | Day 3 — Credibility Under Scrutiny

Source: NBC News, Law360


Musk v. Altman: Day 4 — Distillation Admission (April 30, 2026)

On the fourth day of the Musk v. Altman bench trial, Elon Musk appeared to admit under cross-examination that his AI company xAI had used OpenAI's models to train its own through the process of distillation — a technique where one AI model is trained to mimic another. When OpenAI attorney William Savitt asked whether xAI had distilled OpenAI models, Musk replied that "generally all the AI companies" do it, and when pressed, said "partly." He characterized it as "standard practice to use other AIs to validate your AI."[1][2][3]

The admission was notable given OpenAI's prior efforts to block distillation by foreign competitors, particularly Chinese AI labs. The Trump administration had also announced in April 2026 that it would share information with US AI companies about foreign distillation threats. Anthropic had previously blocked both OpenAI's and xAI's access to its Claude models over terms of service violations.

See also: Musk v. Altman case page


Bipartisan CHATBOT Act Introduced in Senate

Senators Ted Cruz (R-TX), Brian Schatz (D-HI), John Curtis (R-UT), and Adam Schiff (D-CA) introduced the CHATBOT Act (S.2714), requiring AI companies to establish parental "family accounts" for managing minors' chatbot access, mandating parental consent, limiting manipulative design features, and prohibiting targeted advertising to children. The bill is supported by over 20 organizations including the American Federation of Teachers and Americans for Responsible Innovation.

See individual article: CHATBOT Act (S.2714)

Source: Senate Commerce Committee, Congress.gov


White House Opposes Anthropic's Mythos Expansion

On April 30, 2026, the Trump administration told Anthropic it opposes the company's plan to expand access to Mythos, its advanced cybersecurity AI model capable of autonomously discovering zero-day vulnerabilities, to approximately 70 additional organizations. The White House cited security concerns about potential misuse, as well as compute constraints that could degrade government access. At the same time, the administration is developing an executive action to bypass the Pentagon's supply chain risk designation of Anthropic. The National Security Agency (NSA) is currently among the agencies using Mythos to probe for vulnerabilities in Microsoft products and other widely used software.

See individual article: White House Opposes Anthropic Mythos Expansion

Source: The Next Web

References