Anthropic PBC v. U.S. Department of War

From AI Law Wiki
Revision as of 02:34, 28 April 2026 by AILawWikiAdmin (Migration export)

Anthropic PBC v. U.S. Department of War (Case No. 3:26-cv-01996-RFL) is a federal lawsuit in which AI company Anthropic PBC challenges the Trump administration's designation of the company as a supply-chain risk to national security, alleging that the designation was retaliatory punishment for Anthropic's refusal to remove AI safety restrictions from its Claude model. On March 26, 2026, U.S. District Judge Rita F. Lin granted Anthropic a preliminary injunction blocking the government-wide ban on Anthropic's technology, finding the company likely to succeed on First Amendment retaliation and Fifth Amendment due process claims.[1][2][3]

Case Background

On March 9, 2026, Anthropic filed suit against the U.S. Department of War, Secretary Pete Hegseth, and 17 federal agencies in the U.S. District Court for the Northern District of California, challenging a presidential directive and the Department's designation of Anthropic as a supply-chain risk under 41 U.S.C. § 4713 and 10 U.S.C. § 3252. The designation would have barred Anthropic and its affiliates from supplying products or services to the Department of War and, under a government-wide directive, effectively prevented any federal agency from using Anthropic's technology.[1][3]

Anthropic alleged the designation was issued after the company publicly refused to remove two usage restrictions from its Claude AI system: prohibitions on lethal autonomous warfare and mass surveillance of Americans. The company argued the government's actions constituted impermissible retaliation against Anthropic for exercising its First Amendment rights to speak out on matters of public concern and to set terms for the use of its own products.[2][4]

Claims

Anthropic's complaint asserted five counts:[1]

  1. Administrative Procedure Act violations — The supply-chain risk designation was arbitrary, capricious, and not in accordance with law
  2. First Amendment retaliation — The government punished Anthropic for protected speech (refusing to remove AI safety restrictions)
  3. Ultra vires executive action — The President exceeded statutory authority in directing a government-wide ban
  4. Fifth Amendment due process — Anthropic was deprived of liberty and property interests without due process
  5. APA sanction violations — The designation constituted an unlawful sanction under the APA

Preliminary Injunction (March 26, 2026)

On March 26, 2026, Judge Rita F. Lin granted Anthropic's motion for a preliminary injunction, issuing the following key findings:[2]

  • Anthropic demonstrated a high likelihood of success on its First Amendment retaliation claim, having made a prima facie case that the government's actions were motivated by Anthropic's protected speech
  • Anthropic was likely to succeed on its Fifth Amendment due process claim
  • The government failed to prove that Anthropic's conduct qualified as a supply-chain risk under the relevant statutes
  • The court found evidence that the designation appeared pretextual — designed to punish Anthropic for criticizing the Department's contracting position rather than to address genuine national security concerns

Judge Lin granted the government a seven-day administrative stay to seek an expedited emergency stay from the Court of Appeals. The GSA restored Anthropic's status by early April 2026, and a government status report was filed on April 6, 2026 (Dkt. No. 146).[5]

A separate D.C. Circuit case challenging the 41 U.S.C. § 4713 designation remains undecided.[1]

Amicus Briefs

The case attracted significant amicus support. Multiple organizations filed briefs supporting Anthropic, including:[4][6][7]

  • Cato Institute — Argued the government's retaliation violates core First Amendment principles
  • Society for the Rule of Law — Former national security officials warned the designation undermines national security by politicizing procurement
  • Electronic Frontier Foundation (EFF) — Emphasized the chilling effect on AI safety research
  • Industry trade associations (ITI and others) — Warned of broad harmful impact on the technology sector
  • Yale Law School Rule of Law Clinic — Argued the executive action exceeds statutory authority

Significance

The case is one of the first major legal confrontations between an AI company and the U.S. government over AI safety restrictions and government retaliation. Its implications include:

  • AI safety vs. government mandates: Tests whether the government can punish AI companies for maintaining safety guardrails that conflict with government preferences
  • First Amendment protections for AI companies: Raises the question of whether AI companies enjoy constitutional protection when setting terms for the use of their products; the preliminary ruling suggests they do
  • Supply-chain risk designation limits: Suggests that government procurement restrictions must rest on genuine security concerns rather than retaliation
  • Chilling effect on AI safety: A ruling against Anthropic could discourage AI companies from implementing safety restrictions that displease government agencies

Current Status

The case is ongoing. The preliminary injunction remains in effect, and the litigation on the merits continues. The D.C. Circuit case on the 41 U.S.C. § 4713 designation is also pending.[1] As of April 26, 2026, no trial date has been set.

See Also

References