News: April 29, 2026
April 29, 2026 — Daily digest of AI law and policy developments.
This digest consolidates 10 stories from April 28–29, 2026.
Contents
1. Musk v. Altman Trial: Day 2 — Musk Testifies
2. OpenAI-AWS Partnership Announced After Microsoft Exclusivity Ends
3. China Freezes New Robotaxi Licenses After Baidu Chaos
4. Meta Found to Violate EU Child Safety Law
5. Pentagon AI Chief Confirms Google Work, Cautions Overreliance
6. Tumbler Ridge Families Sue OpenAI Over School Shooting
7. Musk v. Altman Trial: Day 3 — Cross-Examination Continues
8. Google Defends Pentagon AI Contract After Staff Backlash
9. Bipartisan Senate Bill Requires AI Safeguards for Children
10. NetChoice Sues to Block Minnesota Social Media Warning Label Law
Musk v. Altman Trial: Day 2 — Musk Testifies
Elon Musk testified on April 28, 2026, accusing Sam Altman and Greg Brockman of "looting" OpenAI's charitable assets after he invested $38 million under the condition the company remain a nonprofit. Judge Yvonne Gonzalez Rogers admonished both sides to stop using social media to exacerbate the conflict and scolded OpenAI for taking inconsistent positions on the origin of its name.[1][2]
See full article: April 28, 2026 — Musk v. Altman Trial Day 2: Musk Testifies, Judge Admonishes Both Sides
OpenAI-AWS Partnership Announced After Microsoft Exclusivity Ends
OpenAI announced an expanded partnership with AWS on April 28, 2026, bringing its latest AI models, Codex, and developer tools to Amazon's cloud platform. The deal comes one day after OpenAI restructured its exclusivity terms with Microsoft, and appears designed to address antitrust concerns about relationships between Big Tech and AI startups.[3][4]
See full article: April 28, 2026 — OpenAI-AWS Partnership
China Freezes New Robotaxi Licenses After Baidu Chaos
China's regulators froze all new robotaxi operating licenses after dozens of Baidu Apollo Go vehicles simultaneously froze in Wuhan traffic last month, causing widespread disruption. The freeze is the most significant regulatory intervention in China's autonomous vehicle sector and may delay expansion plans for Baidu, Pony.ai, and competitors.[5]
See full article: April 29, 2026 — China Freezes Robotaxi Licenses
Meta Found to Violate EU Child Safety Law
EU regulators found that Meta violated the Digital Services Act by failing to adequately prevent children from accessing its platforms. The finding could lead to fines of up to 6% of global revenue and signals aggressive enforcement of the DSA's child safety provisions.[6]
See full article: April 28, 2026 — Meta EU Child Safety Violation
Pentagon AI Chief Confirms Google Work, Cautions Overreliance
The Pentagon's chief AI officer confirmed the DoD is actively working with Google on AI initiatives while cautioning that overreliance on any single provider is "never a good thing." The comment supports arguments for multi-vendor architectures in government AI procurement.[7]
See full article: April 28, 2026 — Pentagon AI Chief Confirms Google Work
Musk v. Altman Trial: Day 3 — Cross-Examination Continues
Day 3 of the Musk v. Altman trial saw OpenAI's counsel Marc Savitt press Elon Musk on his credibility as an AI safety advocate. Savitt highlighted Musk's and xAI's opposition to Colorado's anti-algorithmic discrimination law, suggesting his safety claims were hypocritical. Savitt also raised xAI's own safety record — including "Mechahitler" references — and Musk conceded that profit motives undermining AI safety was "an issue across the board." After the jury was dismissed, Judge Yvonne Gonzalez Rogers indicated Musk's testimony may have "opened the door" to further questioning about xAI's safety record.[8]
See full article: April 29, 2026 — Musk v. Altman Trial Day 3: Cross-Examination Targets xAI Safety Record
Google Defends Pentagon AI Contract After Staff Backlash
Alphabet's president of global affairs Kent Walker issued an internal memo on April 29, 2026, defending Google's decision to allow the U.S. military to use its AI technology for classified operations. The memo came after significant employee backlash reminiscent of the 2018 Project Maven protests. Walker wrote that "staying engaged with governments, including on national security, will help democracies benefit from responsible technologies." The stance marks a significant shift from Google's post-Project Maven era and comes amid broader AI industry engagement with defense agencies.[9][10]
See full article: April 29, 2026 — Google Defends Pentagon AI Contract for Classified Operations
Bipartisan Senate Bill Requires AI Safeguards for Children
On April 29, 2026, a bipartisan group of senators introduced legislation that would require artificial intelligence companies to establish safeguards protecting children's mental health and social development when children use chatbots. The bill would give parents greater oversight and control over their children's interactions with AI systems.
The legislation, whose sponsors include both Democratic and Republican senators, mandates that AI companies implement age-appropriate design standards, parental consent mechanisms for minor users, and transparency about how children's data is used in AI systems.[11]
References
- ↑ Law360, "Musk Testifies Altman 'Looting' OpenAI Charity For Own Gain," April 28, 2026
- ↑ Bloomberg, "Musk v. Altman: judge asks executives to control social media," April 28, 2026
- ↑ The Verge, "OpenAI-AWS partnership," April 28, 2026
- ↑ CNBC, "OpenAI brings its models to Amazon's cloud," April 28, 2026
- ↑ Bloomberg, "China freezes new robotaxi licenses after Baidu chaos," April 29, 2026
- ↑ CNBC, "Meta found to have violated EU law by failing to keep children off its platform," April 28, 2026
- ↑ CNBC, "Pentagon AI chief confirms work with Google," April 28, 2026
- ↑ The Verge, Musk v. Altman trial liveblog, April 29, 2026
- ↑ Financial Times, "Google tells staff it is proud of Pentagon AI contract after internal backlash," April 29, 2026
- ↑ The Verge, "Google defends allowing US military use of AI for classified operations," April 29, 2026
- ↑ Law360, "Bipartisan Bill Would Give Parents Control Over Kids' AI Use," April 29, 2026
NetChoice Sues to Block Minnesota Social Media Warning Label Law
On April 29, 2026, the tech industry trade group NetChoice filed a federal lawsuit challenging a Minnesota law that requires social media platforms to prominently display mental health warning labels to all users. The lawsuit argues that the mandate violates the First Amendment by compelling speech and that the state is using public health concerns to create an unlawful backdoor to regulate protected expression.[1]
The Minnesota law is one of several state-level efforts to address concerns about social media's impact on youth mental health. NetChoice, which represents companies including Meta, Google, and X, has successfully challenged similar laws in other states on First Amendment grounds.