News April 24 2026

From AI Law Wiki
Revision as of 02:34, 28 April 2026 by AILawWikiAdmin (Migration export)

April 24, 2026 — Daily digest of AI law developments.

This article consolidates 4 news stories from April 24, 2026.

Contents

1. Cooley State AI Laws Update April
2. House Jailbroken AI Demo Federal Framework
3. Minnesota HF 1606 AI Nudification Ban
4. State AI Legislation Week April 24


Cooley State AI Laws Update April

Law firm Cooley published an analysis on April 24, 2026 titled "State AI Laws – Where Are They Now?" examining how state AI legislation is undergoing significant changes as compliance deadlines in 2026 approach and federal preemption efforts threaten to reshape state-level initiatives.[1]

Key Findings

Existing Laws Undergoing Changes

Many state AI laws that passed in prior years have undergone significant changes or delays since their passage, as states revise or reconsider their regulatory frameworks in response to practical implementation challenges and federal pressure.[1]

Federal Preemption Threat

The White House is urging Congress to enact sweeping AI legislation to preempt certain state laws that risk stifling innovation. The federal action is "potentially threatening to reshape or constrain state-level initiatives" according to the analysis.[1]

Colorado SB 205

The alert provides detailed coverage of Colorado's comprehensive AI regime enacted in May 2024, which regulates "high-risk artificial intelligence systems" used in "consequential decisions" and imposes broad obligations on developers and deployers related to risk management, impact assessments, consumer disclosures, and reporting to the Colorado attorney general. Proposed amendments signal movement away from the broad "high-risk AI" framework toward a narrower, decision-focused model, with a June 30, 2026 compliance deadline.[1]

The Colorado AI Working Group has proposed replacing the existing framework with a disclosure-driven approach, as covered in the Colorado AI Working Group Revision story.

Recommendations

The alert emphasizes that companies should prepare for compliance under current state frameworks while tracking legislative developments that could reshape obligations in the near term, given the dynamic interplay between federal preemption efforts and state legislative activity.[1]

References

See individual article: Cooley State AI Laws Update April


House Jailbroken AI Demo Federal Framework

April 24, 2026 — The U.S. House Committee on Homeland Security held a bipartisan, closed-door demonstration for House lawmakers showcasing the risks of "jailbroken" AI models, as Congress considers a federal regulatory framework for artificial intelligence by the end of 2026.[1][2]

The Demonstration

The demonstration, organized in partnership with the Department of Homeland Security's National Counterterrorism Innovation, Technology, and Education Center (NCITE), allowed participants to interact with censored and "abliterated" AI models whose names were concealed.[1][2]

Censored models — such as Anthropic's Claude and OpenAI's ChatGPT — include built-in safety protections that refuse harmful queries. Abliterated models have their refusal mechanisms deactivated, enabling unrestricted outputs.[1]

DHS researchers demonstrated how malicious actors can exploit unrestricted AI systems to obtain instructions for:

  • Building bombs and weapons[1]
  • Planning terrorist attacks[1]
  • Launching cyberattacks[1]
  • Committing mass violence[1]

Rep. Gabe Evans (R-Colo.) stated that jailbroken models "gave answers to all of those things" when asked how to make a nuclear bomb.[1]

Real-World Threat Examples

The briefing highlighted documented cases of AI exploitation by nation-state actors:[1]

  • Russia-linked groups hijacking leading AI models for disinformation campaigns
  • Beijing-backed hackers attempting a fully automated cyberattack using Anthropic's Claude model — described as the first documented case of its kind

Legislative Context

House Republican leadership aims to pass a federal regulatory framework for AI by the end of 2026.[3] The Trump administration's legislative framework proposes:[4]

  • Uniform federal safety guardrails
  • Preemption of state-level AI laws
  • Age-gating requirements and parental safeguards for children
  • Provisions to reduce risks of chatbots encouraging self-harm or facilitating sexual exploitation of minors
  • A Ratepayer Protection Pledge signed by major AI developers (Microsoft, Amazon, Google) addressing data center infrastructure and electricity costs

The federal preemption proposal has drawn opposition from states that have already enacted their own AI legislation. As of April 2026, 19 states have passed new AI laws, and many state lawmakers resist federal override of their consumer protection measures.[4]

Significance

The demonstration is part of a growing Congressional focus on AI safety, following the White House's release of its National Policy Framework for Artificial Intelligence in March 2026. The event underscores the tension between federal preemption advocates and states that have already enacted AI regulations, particularly regarding child safety, chatbot regulation, and deepfake protections.

References

See individual article: House Jailbroken AI Demo Federal Framework


Minnesota HF 1606 AI Nudification Ban

April 24, 2026 — The Minnesota House of Representatives voted 132-1 to pass HF 1606, a landmark bill banning AI "nudification" technology that generates non-consensual nude or sexually explicit images from real people's photographs.[1][2]

Overview

HF 1606, sponsored by Representative Jess Hanson (D), targets the growing problem of AI-powered "nudification" tools that create explicit images of individuals without their consent. The bill passed the House on April 24, 2026, and now awaits a vote in the Minnesota Senate. If enacted, it would be among the first state laws to explicitly ban nudification technology rather than merely penalizing the distribution of its output.[1][2]

Key Provisions

Ban on Nudification Technology

The bill prohibits owners and controllers of websites, applications, and programs from:[3]

  • Allowing users to access or use nudification tools on their platforms
  • Performing nudification on behalf of users
  • Advertising or promoting nudification technology

Definition of "Nudify"

"Nudify" is defined as using artificial intelligence to explicitly depict a person's intimate parts without their consent. The bill exempts tools that require substantial human technological or artistic skill, meaning AI-assisted creative tools involving meaningful human direction would not be covered.[3][2]

Penalties and Remedies

  • Civil penalties of up to $500,000 for companies that violate the ban[2]
  • Victims may sue for up to 3x actual damages, punitive damages, injunctions, and attorney fees[1][2]
  • Private right of action allowing individuals harmed by nudification technology to seek relief in court

Legislative History

  • Introduced: 2026 Minnesota legislative session
  • April 24, 2026: Passed House 132-1[1]
  • Next step: Senate floor vote (companion bill progressing in Senate)[2]
  • If passed: Sent to Governor Tim Walz for signature

Context

HF 1606 is part of a broader Minnesota legislative effort to address AI-enabled harms. A separate bill, HB 1887, which addresses deepfake protections more broadly, was approved by the House on April 20, 2026, and is currently pending in the Senate.[4]

The bill follows a national trend of states targeting AI-generated explicit content. Several states have enacted or are considering similar legislation, including Tennessee's ELVIS Act and California's deepfake criminalization laws. Minnesota's approach is notable for directly banning the technology itself rather than relying solely on penalizing distribution of AI-generated explicit images.

Minnesota's bill is also among the first to target the nudification technology market specifically, rather than treating all AI-generated explicit content identically. By distinguishing between fully automated nudification tools and human-directed artistic tools, the legislation attempts to preserve creative uses of AI while closing the loophole exploited by "clothing removal" apps and websites.


References

See individual article: Minnesota HF 1606 AI Nudification Ban


State AI Legislation Week April 24

April 24, 2026 — State AI legislation accelerated across the country this week, with Tennessee's CHAT Act and personhood bill heading to the governor, Hawaii moving three AI bills into reconciliation, Nebraska signing a chatbot safety law, and Alabama enacting AI insurance regulations as multiple state legislatures approach adjournment.[1][2]

Tennessee: CHAT Act and Personhood Bill Head to Governor

The Tennessee General Assembly adjourned on April 24, 2026, sending two major AI bills to Governor Bill Lee's desk:

  • SB 1700 (CHAT Act): Approved unanimously by the House on April 21 (90-0) after Senate passage on April 14 (31-0). The Curbing Harmful AI Technology Act establishes comprehensive chatbot safety and data privacy requirements. As of April 26, the bill awaits Governor Lee's signature; he has until approximately May 8 to sign or veto it before it becomes law without his signature.[1]
  • SB 837 (AI Personhood): Passed the Senate (26-6) on April 6 and the House (93-2) on April 8, then sent to the governor on April 15. As of April 26, the bill awaits Governor Lee's signature. The bill explicitly excludes AI systems from the legal definitions of "human being," "life," and "natural person" under Tennessee law.[1]

Governor Lee has already signed SB 1580, prohibiting AI from representing itself as a mental health professional.[1]

Nebraska: Chatbot Safety Law Signed

Governor Jim Pillen signed LB 1185, the Conversational Artificial Intelligence Safety Act, into law on April 17, 2026, after the legislature approved the bill 49-0 on April 10. The law, which was attached to the Agricultural Data Privacy Act (LB 525) as a legislative vehicle, becomes operative July 1, 2027. It establishes comprehensive consumer protections for publicly accessible conversational AI services, including disclosure requirements and safety protocols.[1]

Nebraska's legislature adjourned April 17.[1]

Alabama: AI Insurance Regulation Enacted

Governor Kay Ivey signed SB 63 on April 17, 2026, making Alabama the latest state to regulate AI use in health insurance coverage determinations. The law imposes requirements on health insurers using AI systems to make or support decisions regarding coverage, prior authorization, and claims processing. Alabama's legislature had adjourned sine die on April 9.[1]

Hawaii: Three AI Bills Enter Reconciliation

Hawaii's legislature approved three AI-related bills that are now in reconciliation between chambers as the session approaches its May 2, 2026 deadline:

  • HB 1782: AI companion safeguards for minors — passed the House March 10, approved by Senate with amendments April 14 (31-0), House disagreed with amendments and returned the bill
  • SB 3001: AI operator disclosures and suicide prevention protocols — Senate passed the bill, House approved with amendments, Senate disagreed with House amendments
  • HB 2137: Deepfake protections and synthetic performer disclosure — advanced through both chambers, now in reconciliation[1][2]

Arizona: Three AI Bills Near Finish Line

As the Arizona legislature approached its April 25 adjournment, three AI-related bills remained in play:

  • SB 1786: AI content verification and provenance data requirements — passed the Senate March 3 and the House April 15, now in reconciliation
  • HB 2133: AI-related bill advancing through the legislature
  • A third AI bill under consideration[1]

Utah: Legislature Adjourns After Passing Nine AI Bills

Utah's legislative session adjourned after Governor Spencer Cox signed nine AI-related bills into law, making Utah one of the most active states in AI regulation for 2026. The package covers deepfakes, consumer protection, education policy, and government oversight.[1]

Significance

The week of April 21–26, 2026, saw multiple states reach legislative milestones as sessions neared adjournment. The convergence of chatbot safety bills across Tennessee, Nebraska, and Hawaii signals growing bipartisan consensus on regulating conversational AI, while Alabama's focused approach to AI in health insurance reflects the sector-specific regulatory strategy adopted by several states. With Tennessee and Nebraska both enacting chatbot safety laws and Hawaii poised to follow, 2026 is shaping up as a landmark year for state AI legislation.


References

See individual article: State AI Legislation Week April 24

