Leaked: White House discussed AI pre-release review mechanism; officials walk back Hassett’s remarks the next day

MarketWhisper

White House AI Pre-Release Review Mechanism Discussion

Kevin Hassett, director of the White House National Economic Council (NEC), said in a Fox Business interview on May 7 that the Trump administration is studying an executive order requiring AI models to undergo government security reviews before public release, drawing an analogy to the FDA’s pre-market approval process for drugs. However, according to Politico’s May 8 report, senior White House officials later said the remark had been “selectively quoted.”

Event Timeline: New York Times report brings White House policy contradictions to light

On May 4, 2026, the New York Times reported that the White House was discussing creating a pre-release review mechanism for AI models, which was characterized at the time as “under consideration.” On May 7, 2026, in an appearance on Fox Business, Kevin Hassett publicly said: “We are looking at whether we can, via an executive order, require that future AI, which might create vulnerabilities, can only be deployed after it demonstrates that it is safe—just like FDA drugs.”

Late on the night of May 7, 2026, White House Chief of Staff Susie Wiles posted on X that the government “is not responsible for picking winners and losers,” and said that the safe deployment of powerful technology should be driven by “America’s outstanding innovators rather than bureaucratic agencies.” According to the history of her official account, it was only the fourth post Wiles had published since creating the account.

Citing three anonymous sources, Politico reported that the White House is discussing having intelligence agencies conduct preliminary assessments before AI models are publicly released. One U.S. government official said in the report that one purpose of the move is to “ensure the intelligence community studies and uses these tools before adversaries like Russia and China understand the new capabilities.”

Agencies Involved and Policy Framework

CAISI expands voluntary AI safety assessment agreements

The Center for AI Standards and Innovation (CAISI), under the Department of Commerce, announced this week that it has signed AI safety assessment agreements with Google DeepMind, Microsoft, and xAI, expanding coverage beyond OpenAI and Anthropic, which were already included. CAISI’s voluntary assessment framework has been in place since 2024.

Deputy Defense Secretary publicly supports pre-evaluation mechanism

On May 8, 2026, Deputy Secretary of Defense Emil Michael, speaking at an AI conference in Washington, publicly supported a government pre-evaluation before the public release of AI models, and cited Anthropic’s Mythos system as a reference case, saying that such models “will eventually show up,” and that the government must establish response mechanisms.

Trump administration and Anthropic policy backdrop

According to Politico, in March 2026, Defense Secretary Pete Hegseth put Anthropic on a risk list citing supply-chain risks, and banned its models from being used for Department of Defense contracts; afterward, Trump separately required federal agencies to stop using Anthropic products within six months. Meanwhile, last month Anthropic disclosed that its AI system Mythos has powerful software vulnerability-discovery capabilities that go beyond the safety thresholds required for public release, and multiple federal agencies subsequently submitted requests to integrate it. On May 8, 2026, OpenAI announced a limited preview of GPT-5.5-Cyber, a new tool designed to detect and fix network vulnerabilities.

Industry opposition to mandatory review mechanisms

Daniel Castro, president of the Information Technology and Innovation Foundation (ITIF), said in a Politico report: “If pre-market approval can be denied, that’s a big problem for any company. If one competitor gets approved and another doesn’t, the weeks or months gap in market access will have a huge impact.” ITIF funders include Anthropic, Microsoft, and Meta.

In the same report, a senior White House official said: “There are definitely one or two people who are very enthusiastic about government regulation, but they are just a few.” The official was granted anonymity on the grounds that the discussion involves sensitive policy matters.

Frequently Asked Questions

When and where did Kevin Hassett make the AI pre-review remarks?

According to Politico, on May 7, 2026, during a Fox Business interview, Kevin Hassett publicly said that the government is considering an executive order requiring AI models to pass government security reviews before they are released, drawing an analogy to the FDA drug approval process.

What is the basis for the White House denying Hassett’s remarks?

According to Politico’s May 8, 2026 report, senior White House officials said Hassett’s remarks were “a bit selectively quoted,” and that the White House’s policy direction is to partner with companies rather than pursue government regulation. Chief of Staff Susie Wiles’s post on X likewise reaffirmed that the government does not intervene in market choices.

What new AI safety assessment agreements did CAISI add this week?

According to CAISI’s statement this week, the newly added agreements cover Google DeepMind, Microsoft, and xAI, in addition to OpenAI and Anthropic that were already covered. The voluntary assessment framework has been in place since 2024.

