CAISI Expands Pre-Deployment Testing to Google, Microsoft, and xAI as White House Weighs AI Executive Orders

May 5, 2026 — The Trump administration significantly expanded its oversight of frontier artificial intelligence on Tuesday. The Commerce Department's Center for AI Standards and Innovation (CAISI) announced new pre-deployment testing agreements with Google DeepMind, Microsoft, and xAI, while the White House considered executive orders to create an AI working group and to bar companies from interfering with government AI use.

CAISI Agreements Expand to Google, Microsoft, xAI

CAISI, housed within the National Institute of Standards and Technology (NIST) at the Department of Commerce, announced on May 5, 2026, that it had signed agreements with Google DeepMind, Microsoft, and xAI to conduct pre-deployment evaluations of their frontier AI models before public release. The agreements allow government evaluators to assess AI models for national security risks and capabilities, including in classified environments. To date, CAISI has completed more than 40 such evaluations, including on state-of-the-art models that remain unreleased.[1]

The new agreements build on CAISI's 2024 partnerships with OpenAI and Anthropic, which have been renegotiated to reflect directives from Commerce Secretary Howard Lutnick and America's AI Action Plan. Under Lutnick's direction, CAISI serves as industry's primary point of contact within the U.S. government for AI testing, collaborative research, and best-practice development.[2]

CAISI Director Chris Fall stated that "independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," adding that the expanded collaborations "help us scale our work in the public interest at a critical moment."[1]

White House Eyes Executive Orders on AI Security

Beyond the CAISI agreements, the White House is weighing the creation of a new AI working group that would bring together tech executives and government officials to explore potential oversight procedures, including plans to vet models before public release. The group may be established through an executive order.[2]

Separately, according to Politico, the administration is also considering a 16-page executive order that would prohibit private-sector companies from "interfering" with the government's use of AI models and create more aggressive contracting and termination standards for federal vendors.[3] These provisions appear driven by the recent standoff between the Defense Department and Anthropic, which refused to let the military use its Claude model for surveillance or autonomous weapons, leading Defense Secretary Pete Hegseth to designate Anthropic a supply chain risk to national security in March 2026.[3]

The White House told CNBC that discussion of potential executive orders is speculative, and that any official policy announcement would come directly from President Donald Trump.[2]

Significance

The expansion of CAISI's testing agreements represents a notable shift in the Trump administration's approach to AI regulation, which had previously taken a hands-off stance at the urging of laissez-faire advisers including David Sacks and Marc Andreessen. The combination of voluntary pre-deployment testing and potential executive orders to enforce government access to AI models signals the emergence of a more structured federal AI oversight regime — one that balances industry cooperation with national security mandates.

References