News March 26 2026

From AI Law Wiki
Revision as of 02:34, 28 April 2026 by AILawWikiAdmin (Migration export)

March 26, 2026 — Daily digest of AI law developments.

This article consolidates 4 news stories from March 26, 2026.

Contents

  1. AI Foundation Model Transparency Act
  2. Anthropic Preliminary Injunction Trump AI Safety
  3. EU Parliament Digital Omnibus AI Act
  4. GUARDRAILS Act Repeal AI Moratorium


AI Foundation Model Transparency Act

On March 26, 2026, a bipartisan group of U.S. lawmakers introduced H.R. 8094, the AI Foundation Model Transparency Act (AI FMTA), marking the first federal legislation focused specifically on AI transparency rather than direct regulation.[1][2]

Purpose

The AI FMTA would require developers of large AI models (such as ChatGPT, Claude, and similar systems) to publicly disclose:

  • Training methods and data sources used
  • Intended capabilities and limitations of the model
  • Known risks and safety evaluations
  • Performance benchmarks and evaluation practices

The legislation aims to increase transparency in AI development without imposing direct regulatory restrictions on model capabilities or deployment.

Requirements

Covered entities (developers of large foundation models) would be required to:

  1. Publish transparency reports detailing training data composition
  2. Disclose energy consumption and environmental impact of training
  3. Provide information on red teaming and safety testing conducted
  4. Report on known limitations and failure modes

Significance

The AI FMTA represents a shift toward transparency-focused AI governance at the federal level. Unlike the sector-specific approaches proposed in other legislation, this bill targets the foundational layer of AI systems.

The bipartisan sponsorship suggests growing consensus around "sunlight" as a regulatory tool for AI governance, potentially serving as a template for other jurisdictions.

Status

As of April 2026, H.R. 8094 has been introduced and referred to committee. No markup or floor vote has been scheduled.

References

See individual article: AI Foundation Model Transparency Act


Anthropic Preliminary Injunction Trump AI Safety

A federal judge granted Anthropic PBC a preliminary injunction on March 26, 2026, blocking the Trump administration's government-wide ban on the company's technology and the Pentagon's designation of Anthropic as a supply-chain risk to national security. U.S. District Judge Rita F. Lin of the Northern District of California found that the government's actions appeared designed to punish Anthropic for refusing to remove AI safety restrictions from its Claude model, rather than addressing genuine security concerns.[1][2][3]

Background

Anthropic filed suit on March 9, 2026 (Case No. 3:26-cv-01996-RFL) after the Department of War, under Secretary Pete Hegseth, designated Anthropic as a supply-chain risk under 41 U.S.C. § 4713 and 10 U.S.C. § 3252. The designation followed Anthropic's public refusal to remove two usage restrictions from its Claude AI system: prohibitions on lethal autonomous warfare and mass surveillance of Americans. The government-wide directive would have barred all federal agencies from using Anthropic's products or services.[1][4]

The Injunction

Judge Lin granted the preliminary injunction on March 26, 2026, making several key findings:[2]

  • Anthropic demonstrated a high likelihood of success on its First Amendment retaliation claim
  • The company was likely to succeed on its Fifth Amendment due process claim
  • The government failed to prove that Anthropic's conduct qualified as a supply-chain risk
  • The designation appeared pretextual — motivated by Anthropic's protected speech rather than genuine national security concerns

Judge Lin granted a seven-day administrative stay to allow the government to seek an emergency stay from the Court of Appeals. The General Services Administration (GSA) restored Anthropic's status by early April 2026.[5]

Amicus Support

The case attracted broad amicus support from the Cato Institute, Electronic Frontier Foundation, Society for the Rule of Law (representing former national security officials), Yale Law School's Rule of Law Clinic, and industry trade associations including the Information Technology Industry Council (ITI). The briefs uniformly argued that the government's retaliation against AI safety restrictions violates constitutional principles and undermines national security by politicizing procurement.[4][6][7]

Broader Significance

The case represents one of the first major legal confrontations between an AI company and the U.S. government over AI safety guardrails. A ruling against Anthropic could have chilled AI companies from implementing safety restrictions that conflict with government preferences, while the injunction preserves Anthropic's ability to set terms for its own products. The case remains ongoing, with a separate D.C. Circuit challenge to the 41 U.S.C. § 4713 designation also pending.[1]

References

See individual article: Anthropic Preliminary Injunction Trump AI Safety


EU Parliament Digital Omnibus AI Act

March 26, 2026 — European Parliament Adopts Position on Digital Omnibus AI Act Amendments

The European Parliament plenary adopted its negotiating position on proposed Digital Omnibus amendments to the EU AI Act on March 26, 2026, advancing to trilogue negotiations with the Council of the European Union (which adopted its own mandate on March 13, 2026) and the European Commission.[1][2][3]

Key Amendments

The Parliament's position includes several significant changes to the Commission's original Omnibus proposal:[1]

  • Fixed application dates: Replaces the Commission's flexible backstop dates with fixed dates, eliminating regulatory uncertainty about when rules take effect. This aligns with the Council position.
  • Ban on non-consensual AI-generated intimate imagery: Introduces a prohibition under Article 5 on AI systems generating realistic sexually explicit images or videos of identifiable individuals without consent. The Parliament's ban is broader than the Council's, which limited the prohibition to systems lacking safeguards. The measure also explicitly bans "nudifier apps."[1][3]
  • Shortened transparency grace period: Reduces the compliance period for Article 50(2) marking obligations for AI systems placed on the market before August 2, 2026 from six months (to February 2, 2027) to just three months (to November 2, 2026).[1]
  • Reinstated registration requirements: Retains EU database registration for self-assessed non-high-risk AI systems under Article 6(3), rejecting the Commission's proposed removal and aligning with the Council.[1]
  • Stricter data processing thresholds: Restores the "strict necessity" standard for processing special categories of personal data in bias detection, limited to high-risk systems with exceptional extensions requiring necessity, proportionality, and links to health, safety, fundamental rights, or discrimination.[1]
  • Annex I restructuring: Deletes Section A and moves New Legislative Framework legislation to Section B, altering the AI Act's interaction with sectoral product rules.[1]
  • AI Office enhancements: Strengthens supervision over general-purpose AI models with resourcing requirements, while preserving national authority competence through exceptions. The Parliament's position adds detailed GPAI/DSA integration provisions.[1]

Alignment With Council Position

Parliament and Council show broad alignment on rolling back Commission simplifications, including fixed dates, registration requirements, data processing thresholds, non-consensual imagery bans, and AI Office oversight. Key distinctions include the Parliament's shorter transparency grace period, explicit AI Office resourcing mandates, and Annex I restructuring.[1][3]

Timing

The amendments were fast-tracked due to time pressures before the AI Act's general application date of August 2, 2026. Trilogue negotiations are expected to proceed quickly given the broad alignment between Parliament and Council positions.[3][4]

Significance

The Parliament's position represents a significant pushback against the Commission's simplification agenda, restoring many compliance obligations that the Omnibus proposal sought to streamline. The shortened transparency grace period and reinstated registration requirements will require companies to accelerate compliance timelines ahead of the August 2026 enforcement deadline. The ban on non-consensual AI-generated intimate imagery marks one of the most explicit legislative responses to deepfake technology to date.

References

See individual article: EU Parliament Digital Omnibus AI Act


GUARDRAILS Act Repeal AI Moratorium

March 26, 2026 — Rep. Beyer Introduces GUARDRAILS Act to Repeal Trump AI Moratorium and Restore State Regulation Authority

On March 26, 2026, Representative Don Beyer (D-VA) introduced the GUARDRAILS Act (Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards Act), legislation that would repeal President Trump's December 11, 2025 Executive Order 14365 imposing a moratorium on state-level AI regulation.[1]

Background

Executive Order 14365, titled "Removing Barriers to American Leadership in Artificial Intelligence," was signed by President Trump on December 11, 2025, and established a framework prioritizing federal preemption of state AI laws. The order directed federal agencies to review state regulations that could impede AI development and created an AI Litigation Task Force, which was announced on January 9, 2026.[2]

The GUARDRAILS Act was introduced in direct response, seeking to nullify the executive order, prohibit federal funds from implementing it, and restore states' authority to enact their own AI regulations covering safety, privacy, and bias concerns.[1]

Cosponsors

The bill's cosponsors include Reps. Doris Matsui (D-CA), Ted Lieu (D-CA), Sara Jacobs (D-CA), and April McClain Delaney (D-MD). Rep. Beyer serves as co-chair of the Congressional Artificial Intelligence Caucus.[1][3]

Legislative Context

The GUARDRAILS Act was introduced the same week as the White House's March 20, 2026 release of its National Policy Framework for Artificial Intelligence, which recommended federal preemption across seven pillars. The bill positions itself as the legislative counterweight to both the executive order and the proposed Trump America AI Act championed by Senator Marsha Blackburn (R-TN).[4]

Current Status

As of April 26, 2026, the bill has been introduced and referred to committee. No hearings, votes, or other legislative action has been reported.[5]

References

See individual article: GUARDRAILS Act Repeal AI Moratorium

