News-April-29-2026

From AI Law Wiki
'''April 29, 2026''' — Daily digest of AI law and policy developments.

This digest consolidates 10 stories from April 28–29, 2026.

== Contents ==
1. Musk v. Altman Trial: Day 2 — Musk Testifies
2. OpenAI-AWS Partnership Announced After Microsoft Exclusivity Ends
3. China Freezes New Robotaxi Licenses After Baidu Chaos
4. Meta Found to Violate EU Child Safety Law
5. [[News Pentagon AI Chief Google 2026|Pentagon AI Chief Confirms Google Work, Cautions Overreliance]]
6. [[News Tumbler Ridge Families Sue OpenAI 2026|Tumbler Ridge Families Sue OpenAI Over School Shooting]]
7. [[News Musk v Altman Trial Day 3 2026|Musk v. Altman Trial: Day 3 — Cross-Examination Continues]]
8. [[News Google Defends Pentagon AI 2026|Google Defends Pentagon AI Contract After Staff Backlash]]
9. Bipartisan Senate Bill Requires AI Safeguards for Children
10. NetChoice Sues to Block Minnesota Social Media Warning Label Law

----
== Musk v. Altman Trial: Day 2 — Musk Testifies ==
Elon Musk testified on April 28, 2026, accusing Sam Altman and Greg Brockman of "looting" OpenAI's charitable assets after he invested $38 million under the condition the company remain a nonprofit. Judge Yvonne Gonzalez Rogers admonished both sides to stop using social media to exacerbate the conflict and scolded OpenAI for taking inconsistent positions on the origin of its name.[1][2]

''See full article: April 28, 2026 — Musk v. Altman Trial Day 2: Musk Testifies, Judge Admonishes Both Sides''

----
== OpenAI-AWS Partnership Announced After Microsoft Exclusivity Ends ==
OpenAI announced an expanded partnership with AWS on April 28, 2026, bringing its latest AI models, Codex, and developer tools to Amazon's cloud platform. The deal comes one day after restructuring Microsoft exclusivity terms and appears designed to address antitrust concerns about Big Tech–AI startup relationships.[3][4]

''See full article: April 28, 2026 — OpenAI-AWS Partnership''

----
== China Freezes New Robotaxi Licenses After Baidu Chaos ==
China's regulators froze all new robotaxi operating licenses after dozens of Baidu Apollo Go vehicles simultaneously froze in Wuhan traffic last month, causing widespread disruption. The freeze is the most significant regulatory intervention in China's autonomous vehicle sector and may delay expansion plans for Baidu, Pony.ai, and competitors.[5]

''See full article: April 29, 2026 — China Freezes Robotaxi Licenses''

----
== Meta Found to Violate EU Child Safety Law ==
EU regulators found that Meta violated the Digital Services Act by failing to adequately prevent children from accessing its platforms. The finding could lead to fines of up to 6% of global revenue and signals aggressive enforcement of the DSA's child safety provisions.[6]

''See full article: April 28, 2026 — Meta EU Child Safety Violation''

----
== Pentagon AI Chief Confirms Google Work, Cautions Overreliance ==
The Pentagon's chief AI officer confirmed the DoD is actively working with Google on AI initiatives while cautioning that overreliance on any single provider is "never a good thing." The comment supports arguments for multi-vendor architectures in government AI procurement.[7]

''See full article: [[News Pentagon AI Chief Google 2026|April 28, 2026 — Pentagon AI Chief Confirms Google Work]]''

----
== Tumbler Ridge Families Sue OpenAI Over School Shooting ==
Seven families of victims of the Tumbler Ridge school shooting in British Columbia, Canada, filed lawsuits against OpenAI and CEO Sam Altman in California federal court on April 29, 2026, alleging negligence, wrongful death, and aiding and abetting a mass shooting. The suits claim OpenAI flagged the suspect's ChatGPT activity involving gun violence but chose not to alert police to protect its IPO. The families also allege GPT-4o's "defective" design contributed to the tragedy.<ref name="verge-tumbler">Emma Roth, [https://www.theverge.com/ai-artificial-intelligence/920479/tumbler-ridge-chagpt-openai-lawsuit "Tumbler Ridge families are suing OpenAI,"] ''The Verge'', April 29, 2026.</ref>

''See full article: [[News Tumbler Ridge Families Sue OpenAI 2026|April 29, 2026 — Tumbler Ridge Families Sue OpenAI Over School Shooting]]''

----
== Musk v. Altman Trial: Day 3 — Cross-Examination Continues ==
Day 3 of the Musk v. Altman trial saw OpenAI's counsel Marc Savitt press Elon Musk on his credibility as an AI safety advocate. Savitt highlighted Musk's and xAI's opposition to Colorado's anti-algorithmic discrimination law, suggesting his safety claims were hypocritical. Savitt also raised xAI's own safety record — including "Mechahitler" references — and Musk conceded that profit motives undermining AI safety were "an issue across the board." After the jury was dismissed, Judge Yvonne Gonzalez Rogers indicated Musk's testimony may have "opened the door" to further questioning about xAI's safety record.<ref name="verge-april29">[https://www.theverge.com/ai-artificial-intelligence "Musk v. Altman trial liveblog,"] ''The Verge'', April 29, 2026.</ref>

''See full article: [[News Musk v Altman Trial Day 3 2026|April 29, 2026 — Musk v. Altman Trial Day 3: Cross-Examination Targets xAI Safety Record]]''

----
== Google Defends Pentagon AI Contract After Staff Backlash ==
Alphabet's president of global affairs Kent Walker issued an internal memo on April 29, 2026, defending Google's decision to allow the U.S. military to use its AI technology for classified operations. The memo came after significant employee backlash reminiscent of the 2018 Project Maven protests. Walker wrote that "staying engaged with governments, including on national security, will help democracies benefit from responsible technologies." The stance marks a significant shift from Google's post-Project Maven era and comes amid broader AI industry engagement with defense agencies.<ref name="ft-google">[https://www.ft.com/ Financial Times, "Google tells staff it is proud of Pentagon AI contract after internal backlash," April 29, 2026]</ref><ref name="verge-google">[https://www.theverge.com/ai-artificial-intelligence The Verge, "Google defends allowing US military use of AI for classified operations," April 29, 2026]</ref>
 
''See full article: [[News Google Defends Pentagon AI 2026|April 29, 2026 — Google Defends Pentagon AI Contract for Classified Operations]]''
 
----
== Bipartisan Senate Bill Requires AI Safeguards for Children ==
On '''April 29, 2026''', a bipartisan group of senators introduced legislation that would require artificial intelligence companies to establish safeguards protecting children's mental health and social development when they use chatbots. The bill would give parents greater oversight and control over their children's interactions with AI systems.
 
The legislation, whose sponsors include both Democratic and Republican senators, mandates that AI companies implement age-appropriate design standards, parental consent mechanisms for minor users, and transparency about how children's data is used in AI systems.<ref name="law360-kids-ai">[https://www.law360.com/technology/articles/2324567/bipartisan-bill-would-give-parents-control-over-kids-ai-use Law360 — Bipartisan Bill Would Give Parents Control Over Kids' AI Use]</ref>
 
----
== NetChoice Sues to Block Minnesota Social Media Warning Label Law ==
On '''April 29, 2026''', the tech industry trade group '''NetChoice''' filed a federal lawsuit challenging a Minnesota law that requires social media platforms to prominently display mental health warning labels to all users. The lawsuit argues that the mandate violates the First Amendment by compelling speech and that the state is using public health concerns to create an unlawful backdoor to regulate protected expression.<ref name="law360-netchoice">[https://www.law360.com/technology/articles/2324568/tech-group-aims-to-halt-minn-social-media-warning-mandate Law360 — Tech Group Aims To Halt Minn. Social Media Warning Mandate]</ref>
 
The Minnesota law is one of several state-level efforts to address concerns about social media's impact on youth mental health. NetChoice, which represents companies including Meta, Google, and X, has successfully challenged similar laws in other states on First Amendment grounds.
 


== References ==
<references />
[[Category:Federal Regulation]]
[[Category:Federal Legislation]]
[[Category:State Legislation]]
[[Category:Congress]]
[[Category:International]]
[[Category:Copyright Litigation]]
[[Category:Corporate Governance]]
[[Category:Child Safety]]
[[Category:Consumer Protection]]
[[Category:California]]
[[Category:Tennessee]]
[[Category:Colorado]]
[[Category:Minnesota]]
[[Category:China]]
[[Category:European Union]]
[[Category:Cases Against OpenAI]]
[[Category:Education]]
[[Category:Sector-Specific Regulation]]
[[Category:Data Privacy]]
[[Category:Department of Justice]]
[[Category:Daily News]]
