News-April-30-2026

Revision as of 16:07, 1 May 2026 by AILawWikiAdmin (talk | contribs) (Add $97.4B bid probe section to Musk v Altman Day 4 update)

April 30, 2026 — Daily digest of AI law developments.

This digest covers eight AI law stories.


Musk v. Altman Trial Opens: Days 1 & 2 Recap

The Musk v. Altman bench trial, with an advisory jury, began on April 28, 2026, in the Northern District of California before Judge Yvonne Gonzalez Rogers. Elon Musk testified that OpenAI co-founders Sam Altman and Greg Brockman betrayed the nonprofit mission that he says induced his $38 million in donations. OpenAI's counsel argued that Musk quit when he "didn't get his way" and never fulfilled his promised $1 billion contribution. On Day 2 (April 29), Musk grew combative under cross-examination, accusing OpenAI's lawyer of asking questions "designed to trick" him. Musk seeks $134 billion in damages. The trial is expected to last four weeks.[1][2]


Musk v. Altman: Day 3 — Distillation Admission (April 30, 2026)

On the third day of the Musk v. Altman bench trial, Elon Musk appeared to admit under cross-examination that his AI company xAI had used OpenAI's models to train its own through the process of distillation — a technique where one AI model is trained to mimic another. When OpenAI attorney William Savitt asked whether xAI had distilled OpenAI models, Musk replied that "generally all the AI companies" do it, and when pressed, said "partly." He characterized it as "standard practice to use other AIs to validate your AI."[3][4]
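For readers unfamiliar with the technique at issue, distillation can be illustrated with a toy sketch: a "student" model is fit to the temperature-softened output distribution of a "teacher" model, rather than to ground-truth labels. This is a minimal, hypothetical illustration of the general technique only; the models, dimensions, and numbers below are invented for the example and do not reflect any party's actual systems.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical "teacher": a fixed linear classifier (4 features, 3 classes)
# standing in for the larger model being distilled.
W_teacher = rng.normal(size=(4, 3))

# Student: a second linear model, trained only on the teacher's soft outputs.
W_student = np.zeros((4, 3))
T = 2.0    # temperature > 1 softens the teacher's distribution
lr = 0.5

X = rng.normal(size=(256, 4))            # unlabeled inputs
targets = softmax(X @ W_teacher, T)      # teacher's soft labels

for _ in range(300):
    probs = softmax(X @ W_student, T)
    # Gradient of cross-entropy against the teacher's distribution.
    grad = X.T @ (probs - targets) / len(X)
    W_student -= lr * grad

# On fresh inputs, the trained student ranks classes like the teacher.
X_test = rng.normal(size=(64, 4))
agreement = np.mean(
    np.argmax(X_test @ W_student, axis=1)
    == np.argmax(X_test @ W_teacher, axis=1)
)
```

The temperature parameter is the standard trick in distillation: softening the teacher's output distribution lets the student learn the relative probabilities the teacher assigns to every class, not merely its top prediction.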

The admission was notable given OpenAI's prior efforts to block distillation by foreign competitors, particularly Chinese AI labs. The Trump administration had also announced in April 2026 that it would share information with US AI companies about foreign distillation threats. Anthropic had previously blocked both OpenAI's and xAI's access to its Claude models over terms of service violations.

See also: Musk v. Altman case page


Meta Threatens to Shut Down in New Mexico Over Child Safety Order

Meta filed a brief on April 30, 2026, warning that it may be forced to withdraw Facebook and Instagram entirely from New Mexico if a state judge orders the company to adopt sweeping new child safety features. The threat comes in the second phase of a trial that has already resulted in $375 million in civil penalties, after a jury found Meta knowingly harmed children's mental health and concealed what it knew about child sexual exploitation on its platforms. The bench trial on remedies begins May 4, 2026.[5]

Prosecutors seek court-ordered changes including effective age verification (99% accuracy), removal of addictive features like infinite scroll and auto-play, warning labels about platform risks, permanent bans for adults engaged in child exploitation, and an independent oversight committee. Meta called the demands "technically impractical, impossible for any company to meet and disregard the realities of the internet." New Mexico Attorney General Raúl Torrez responded that "Meta's refusal to follow the laws that protect our kids tells you everything you need to know about this company and the character of its leaders." The case is the first to reach trial among more than 40 state AG lawsuits against Meta over youth mental health.


Bipartisan CHATBOT Act Introduced in Senate

Senators Ted Cruz (R-TX), Brian Schatz (D-HI), John Curtis (R-UT), and Adam Schiff (D-CA) introduced the CHATBOT Act (S.2714), requiring AI companies to establish parental "family accounts" for managing minors' chatbot access, mandating parental consent, limiting manipulative design features, and prohibiting targeted advertising to children. The bill is supported by over 20 organizations including the American Federation of Teachers and Americans for Responsible Innovation.[6][7]


White House Opposes Anthropic's Mythos Expansion

On April 30, 2026, the Trump administration told Anthropic it opposes the company's plan to expand access to Mythos, its advanced cybersecurity AI model capable of autonomously discovering zero-day vulnerabilities, to approximately 70 additional organizations. The White House cited security concerns about potential misuse, as well as compute constraints that could degrade government access. Meanwhile, the administration is developing an executive action to bypass the Pentagon's supply chain risk designation of Anthropic. The National Security Agency (NSA) is among the agencies using Mythos to probe for vulnerabilities in Microsoft products and other widely used software.[8][9]



Hegseth Calls Anthropic CEO Amodei an "Ideological Lunatic" (April 30, 2026)

In a Senate Armed Services Committee hearing on April 30, 2026, Secretary of Defense Pete Hegseth called Anthropic CEO Dario Amodei an "ideological lunatic," escalating the ongoing dispute between the Department of Defense and the AI company over terms of service for military use of AI systems. When Senator Jacky Rosen (D-Nevada) asked whether Hegseth could guarantee a human would be in the loop for AI-driven targeting decisions, Hegseth focused on Amodei personally and criticized Anthropic's refusal to "accept our terms of service."[10]

The heated exchange comes amid broader tensions over Anthropic's Mythos cybersecurity AI model. Earlier on April 30, the White House told Anthropic it opposes expanding Mythos access to approximately 70 additional organizations, citing compute constraints and security concerns. The NSA is among the agencies testing Mythos to find vulnerabilities in Microsoft products and other widely used software.[9]

See also: White House Opposes Anthropic's Mythos Expansion


US Officials Preparing AI Policy Memo for National Security Agencies (April 30, 2026)

US officials are preparing a wide-ranging AI policy memorandum that would outline rules for national security agencies' use of artificial intelligence, according to sources cited by Bloomberg on April 30, 2026. The memo would include guidance to avoid over-reliance on any single AI vendor — a provision relevant to the Pentagon's current dispute with Anthropic and its deepening relationship with Google's AI products.[11]

The memo represents the latest in a series of Trump administration actions to establish federal AI governance frameworks, following the White House National Policy Framework for AI announced earlier in April 2026.

See also: White House National Policy Framework for AI




Musk v. Altman: Judge Pauses Trial to Probe $97.4B Bid (April 30, 2026)

On the afternoon of April 30, 2026, Judge Yvonne Gonzalez Rogers abruptly paused the Musk v. Altman bench trial after Elon Musk's legal team failed to object to a document during his cross-examination, inadvertently opening the door to wide-ranging and potentially damaging evidence regarding Musk's $97.4 billion acquisition proposal for OpenAI. The judge is now probing the circumstances of the bid, which Musk made in February 2025 and which OpenAI's board rejected as "not in the best interest of the company's mission."[12]

The trial pause is a significant development in the closely watched case over OpenAI's for-profit conversion. The $97.4 billion bid evidence could undermine Musk's position that he is suing to protect OpenAI's nonprofit mission, as the bid itself could be construed as an attempt to acquire OpenAI for competitive advantage.

See also: Musk v. Altman case page

References