News Anthropic Preliminary Injunction Trump AI Safety 2026

From AI Law Wiki
Revision as of 02:34, 28 April 2026 by AILawWikiAdmin (talk | contribs) (Migration export)

A federal judge granted Anthropic PBC a preliminary injunction on March 26, 2026, blocking the Trump administration's government-wide ban on the company's technology and the Pentagon's designation of Anthropic as a supply-chain risk to national security. U.S. District Judge Rita F. Lin of the Northern District of California found that the government's actions appeared designed to punish Anthropic for refusing to remove AI safety restrictions from its Claude model, rather than addressing genuine security concerns.[1][2][3]

Background

Anthropic filed suit on March 9, 2026 (Case No. 3:26-cv-01996-RFL) after the Department of War, under Secretary Pete Hegseth, designated Anthropic as a supply-chain risk under 41 U.S.C. § 4713 and 10 U.S.C. § 3252. The designation followed Anthropic's public refusal to remove two usage restrictions from its Claude AI system: prohibitions on lethal autonomous warfare and mass surveillance of Americans. The government-wide directive would have barred all federal agencies from using Anthropic's products or services.[1][4]

The Injunction

Judge Lin granted the preliminary injunction on March 26, 2026, making several key findings:[2]

  • Anthropic demonstrated a high likelihood of success on its First Amendment retaliation claim
  • The company was likely to succeed on its Fifth Amendment due process claim
  • The government failed to show that Anthropic's conduct qualified as a supply-chain risk
  • The designation appeared pretextual, motivated by Anthropic's protected speech rather than genuine national security concerns

Judge Lin also granted a seven-day administrative stay to allow the government to seek an emergency stay from the Court of Appeals. The General Services Administration (GSA) restored Anthropic's vendor status by early April 2026.[5]

Amicus Support

The case attracted broad amicus support from the Cato Institute, Electronic Frontier Foundation, Society for the Rule of Law (representing former national security officials), Yale Law School's Rule of Law Clinic, and industry trade associations including the Information Technology Industry Council (ITI). The briefs uniformly argued that the government's retaliation against AI safety restrictions violates constitutional principles and undermines national security by politicizing procurement.[4][6][7]

Broader Significance

The case represents one of the first major legal confrontations between an AI company and the U.S. government over AI safety guardrails. A ruling against Anthropic could have chilled AI companies from implementing safety restrictions that conflict with government preferences, while the injunction preserves Anthropic's ability to set terms for its own products. The case remains ongoing, with a separate D.C. Circuit challenge to the 41 U.S.C. § 4713 designation also pending.[1]

See Also

References