News-May-01-2026

From AI Law Wiki
Revision as of 21:06, 1 May 2026 by AILawWikiAdmin (Add Pentagon/Anthropic blacklist, Oscars AI rules, Musk v Altman trial update)

May 1, 2026 — Daily digest of AI law developments. This digest consolidates state legislative updates following the Transparency Coalition's May 1 roundup.


Maryland: AI Dynamic Pricing Signed Into Law, Three AI Bills Advance

Maryland Gov. Wes Moore signed HB 895, the AI Dynamic Pricing Act, into law on April 28, 2026. The new law prohibits food retailers and delivery services from using AI and personal data to set individualized prices. Three additional AI bills have been approved by the legislature and await the governor's signature: SB 8 (deepfake protection), SB 720 (AI education guidance for school systems), and SB 141 (deepfakes in political campaign materials).[1]

See individual article: Maryland AI Bills Signed April 2026


Tennessee: Governor Signs Six AI Measures Into Law

Tennessee Gov. Bill Lee signed six AI-related bills into law following the legislature's April 23 adjournment. The newly enacted measures include: SB 1580 (prohibits AI therapy chatbots), SB 837 (defines personhood to exclude AI), SB 2310 (bell-to-bell smartphone limits in K-5 schools), SB 2041 (civil action for deepfake intimate imagery), and SB 1469 (restricts monetization of online content by minors under 14). A sixth measure, the chatbot safety bill SB 1700 (CHAT Act), was approved but effectively gutted by an undermining amendment.[1]

See individual article: Tennessee AI Bills Signed April 2026


Oklahoma: Chatbot Safety and AI Identity Theft Bills Advance

SB 1521, Oklahoma's leading chatbot safety bill, has received both Senate and House approval and awaits reconciliation before heading to Gov. Kevin Stitt. Separately, HB 3244 — which makes the use of AI an aggravating factor in identity theft crimes — was sent to the governor on April 30, 2026.[1]

See individual article: Oklahoma AI Bills April 2026


Arizona: AI Bills in Limbo Amid Budget Showdown

Arizona lawmakers have pushed past their original April 25 adjournment date amid a budget standoff between Republican legislative leaders and Gov. Katie Hobbs. Three AI-related bills remain in a holding pattern, as Hobbs has vowed to sign no legislation until a budget agreement is reached, leaving the state's AI legislative agenda stalled.[1]

See also: Arizona SB 1786



OpenAI Restricts Cyber Access, Mirroring Anthropic's Mythos Approach

On April 30, 2026, OpenAI CEO Sam Altman confirmed that the company's GPT-5.5 Cyber cybersecurity tool will be released through a controlled-access program for "critical cyber defenders," requiring applicants to submit credentials for approval. The approach mirrors the restrictive distribution strategy that Altman had previously criticized Anthropic for adopting with its Mythos model, dismissing it at the time as "fear-based marketing." OpenAI says it is consulting with the U.S. government on expanding access to verified cybersecurity professionals.[2]

See individual article: OpenAI Restricts Cyber Access



DOD Strikes Classified AI Deals with Nvidia, Microsoft, Google, AWS (May 1, 2026)

The U.S. Department of Defense announced on May 1, 2026, new agreements with Nvidia, Microsoft, Google, SpaceX, OpenAI, Reflection AI, and Amazon Web Services to deploy their AI systems on classified military networks "for lawful operational use." The deals represent a major expansion of commercial AI's role in national security, allowing the participating companies' AI tools to operate in sensitive defense environments.[3][4]

The agreements follow years of internal debate and employee protests at major tech companies over military AI contracts, most notably Google's Project Maven controversy. The deals come amid growing U.S. efforts to maintain technological superiority over China in AI-enabled defense capabilities.

See individual article: DOD Classified AI Deals


Pentagon Tech Chief: Anthropic Still Blacklisted, But Mythos Is a 'Separate' Issue

Defense Department CTO Emil Michael told CNBC on May 1, 2026, that Anthropic remains designated as a supply chain risk, meaning defense contractors must certify they do not use Anthropic's Claude models. However, Michael distinguished Mythos as a "separate national security moment," saying the government needs to "make sure that our networks are hardened up" because Mythos has "capabilities that are particular to finding cyber vulnerabilities and patching them." The comments came the same day the DOD announced AI deals with seven other companies for classified network use — Nvidia, Microsoft, Google, SpaceX, OpenAI, Reflection AI, and AWS — with Anthropic notably excluded. Anthropic sued the Trump administration in March 2026 to reverse the blacklisting.[5]

See also: White House Opposes Anthropic's Mythos Expansion


Film Academy Establishes First AI Rules for Oscar Eligibility

On May 1, 2026, the Academy of Motion Picture Arts and Sciences, the organization behind the Academy Awards (Oscars), issued its first rules addressing the use of artificial intelligence in performances and scripts eligible for the 2027 Academy Awards. The new rules cover AI-generated performances and AI-assisted screenwriting, marking a significant expansion of AI governance into the creative arts and entertainment industry. The move follows broader debates across Hollywood about AI's role in filmmaking, including the 2023 writers' and actors' strikes.[6]


Musk v Altman Trial: Judge Probes Attorneys on $97B Bid

On April 30, 2026, the federal judge overseeing Musk v Altman et al paused proceedings to question Elon Musk's attorneys about a $97 billion bid related to OpenAI. The trial, being heard in the U.S. District Court for the Northern District of California, concerns OpenAI's conversion from a nonprofit into a for-profit entity. Musk testified earlier in the week that xAI trained its Grok model on OpenAI outputs, and sparred with OpenAI attorney William Savitt during cross-examination.[7][8]

See case page: Musk v Altman et al


References