Pentagon to Deploy AI From Seven Companies on Classified Networks (2026)
The U.S. Department of Defense announced on May 1, 2026, that it had reached agreements with seven technology companies to deploy their artificial intelligence systems on classified military computer networks. The agreements represent the largest coordinated effort to date to integrate commercial AI into the Pentagon's classified infrastructure.[1]
The seven companies are Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX. The Defense Department stated the technology will help "augment warfighter decision-making in complex operational environments."[1]
Anthropic Exclusion
Notably absent from the list is Anthropic, which was excluded following a public dispute and legal fight with the Trump administration over the ethics and safety of AI use in warfare. Anthropic had sought assurances that its technology would not be used in fully autonomous weapons or for surveillance of Americans; Defense Secretary Pete Hegseth insisted the company must allow any uses the Pentagon deemed lawful.[1]
After Anthropic refused, President Donald Trump ordered all federal agencies to stop using Anthropic's chatbot Claude, and Hegseth sought to label the company a supply chain risk. Anthropic subsequently sued the administration.[1]
Background
OpenAI had already announced a deal with the Pentagon in March 2026 to deploy ChatGPT in classified environments, effectively replacing Anthropic.[1] The new round of agreements significantly expands the roster of commercial AI providers operating within classified government networks.
Helen Toner, interim executive director at Georgetown University's Center for Security and Emerging Technology and former OpenAI board member, noted that questions about appropriate levels of human involvement, risk, and operator training "are still being worked out."[1]
Significance
The agreements mark a significant milestone in the U.S. military's adoption of commercial AI. They establish a framework for deploying private-sector AI systems on classified networks, while leaving open questions about the role of AI in lethal autonomous decision-making, human oversight requirements, and the government's ability to compel AI companies to permit military uses of their technology.