News Anthropic Preliminary Injunction Trump AI Safety 2026
Latest revision as of 02:34, 28 April 2026
A federal judge granted Anthropic PBC a preliminary injunction on March 26, 2026, blocking the Trump administration's government-wide ban on the company's technology and the Pentagon's designation of Anthropic as a supply-chain risk to national security. U.S. District Judge Rita F. Lin of the Northern District of California found that the government's actions appeared designed to punish Anthropic for refusing to remove AI safety restrictions from its Claude model, rather than addressing genuine security concerns.[1][2][3]
Background
Anthropic filed suit on March 9, 2026 (Case No. 3:26-cv-01996-RFL) after the Department of War, under Secretary Pete Hegseth, designated Anthropic as a supply-chain risk under 41 U.S.C. § 4713 and 10 U.S.C. § 3252. The designation followed Anthropic's public refusal to remove two usage restrictions from its Claude AI system: prohibitions on lethal autonomous warfare and mass surveillance of Americans. The government-wide directive would have barred all federal agencies from using Anthropic's products or services.[1][4]
The Injunction
Judge Lin granted the preliminary injunction on March 26, 2026, making several key findings:[2]
- Anthropic demonstrated a high likelihood of success on its First Amendment retaliation claim
- The company was likely to succeed on its Fifth Amendment due process claim
- The government failed to show that Anthropic's conduct qualified as a supply-chain risk
- The designation appeared pretextual — motivated by Anthropic's protected speech rather than genuine national security concerns
Judge Lin granted a seven-day administrative stay to allow the government to seek an emergency stay from the Court of Appeals. The General Services Administration (GSA) restored Anthropic's status by early April 2026.[5]
Amicus Support
The case attracted broad amicus support from the Cato Institute, Electronic Frontier Foundation, Society for the Rule of Law (representing former national security officials), Yale Law School's Rule of Law Clinic, and industry trade associations including the Information Technology Industry Council (ITI). The briefs uniformly argued that the government's retaliation against AI safety restrictions violates constitutional principles and undermines national security by politicizing procurement.[4][6][7]
Broader Significance
The case represents one of the first major legal confrontations between an AI company and the U.S. government over AI safety guardrails. A ruling against Anthropic could have chilled AI companies from implementing safety restrictions that conflict with government preferences, while the injunction preserves Anthropic's ability to set terms for its own products. The case remains ongoing, with a separate D.C. Circuit challenge to the 41 U.S.C. § 4713 designation also pending.[1]
See Also
- Anthropic PBC v U.S. Department of War — Full case page
- Bartz v Anthropic PBC — Copyright litigation involving Anthropic
- Cases — Active AI litigation tracker
References
- ↑ Clearinghouse, Anthropic PBC v. U.S. Department of War
- ↑ Bloomberg Law, Preliminary Injunction Order, March 26, 2026
- ↑ U.S. District Court, N.D. Cal., Case No. 3:26-cv-01996
- ↑ Cato Institute, Amicus Brief in Support of Anthropic, March 2026
- ↑ GSA Statement on Anthropic Preliminary Injunction, April 3, 2026
- ↑ Society for the Rule of Law, Amicus Brief in Support of Anthropic
- ↑ Yale Law School, Amicus Brief in Support of Anthropic