
Kevin Hassett, director of the White House National Economic Council (NEC), said in a Fox Business interview on May 7 that the Trump administration is studying an executive order requiring AI models to undergo government security reviews before they are publicly released, drawing an analogy to the FDA’s pre-market approval process for drugs. But according to Politico’s report on May 8, senior White House officials later said the remark was “selectively quoted.”
On May 4, 2026, the New York Times reported that the White House was discussing creating a pre-release review mechanism for AI models, which was characterized at the time as “under consideration.” On May 7, 2026, in an appearance on Fox Business, Kevin Hassett publicly said: “We are looking at whether we can, via an executive order, require that future AI, which might create vulnerabilities, can only be deployed after it demonstrates that it is safe—just like FDA drugs.”
Late on the night of May 7, 2026, White House Chief of Staff Susie Wiles posted on X that the government "is not responsible for picking winners and losers," and said that the safe deployment of powerful technology should be driven by "America's outstanding innovators rather than bureaucratic agencies." It was only the fourth post published on her official account since she created it.
Citing three anonymous sources, Politico reported that the White House is discussing having intelligence agencies conduct preliminary assessments before AI models are publicly released. One U.S. government official said in the report that one purpose of the move is to “ensure the intelligence community studies and uses these tools before adversaries like Russia and China understand the new capabilities.”
The Center for AI Standards and Innovation (CAISI), under the Department of Commerce, announced this week that it has signed AI safety assessment agreements with Google DeepMind, Microsoft, and xAI, expanding its scope beyond OpenAI and Anthropic, which were already covered. CAISI's voluntary assessment framework has been in place since 2024.
On May 8, 2026, Deputy Secretary of Defense Emil Michael, speaking at an AI conference in Washington, publicly backed government evaluation of AI models before their public release. He cited Anthropic's Mythos system as a reference case, saying that such models "will eventually show up" and that the government must establish response mechanisms.
According to Politico, in March 2026 Defense Secretary Pete Hegseth placed Anthropic on a risk list, citing supply-chain concerns, and banned its models from use in Department of Defense contracts; Trump subsequently ordered federal agencies to stop using Anthropic products within six months. Meanwhile, Anthropic disclosed last month that its AI system Mythos has software vulnerability-discovery capabilities powerful enough to exceed the safety thresholds the company requires for public release, and multiple federal agencies subsequently submitted requests to integrate it. On May 8, 2026, OpenAI announced a limited preview of GPT-5.5-Cyber, a new tool designed to detect and fix network vulnerabilities.
Daniel Castro, president of the Information Technology and Innovation Foundation (ITIF), said in a Politico report: “If pre-market approval can be denied, that’s a big problem for any company. If one competitor gets approved and another doesn’t, the weeks or months gap in market access will have a huge impact.” ITIF funders include Anthropic, Microsoft, and Meta.
In the same report, a senior White House official said: "There are definitely one or two people who are very enthusiastic about government regulation, but they are just a few." The official was granted anonymity on the grounds that the discussion involves sensitive policy matters.
According to Politico’s May 8, 2026 report, senior White House officials said Hassett’s remarks were “a bit selectively quoted,” and that the White House’s policy direction is to partner with companies rather than pursue government regulation. Chief of Staff Susie Wiles also posted again to reaffirm that the government does not intervene in market choices.