Commerce Department Removes Online Details of Microsoft, Google, xAI AI Safety Deal

Key Points
- Commerce Department deleted the web page detailing a pre‑release AI testing deal with Microsoft, Google and xAI.
- The original May 5 announcement said the three firms would submit frontier AI models for security review before public deployment.
- The page now redirects to the Center for AI Standards and Innovation, the agency that runs the testing program.
- No comment was received from the Commerce Department, the Trump White House, or the three companies.
- The removal follows an executive order that refocused the AI safety institute toward standards and industry coordination.
- Critics argue that pre‑release government access could increase vulnerability to cyber‑espionage.
- The testing program itself appears to remain active despite the missing webpage.

The U.S. Commerce Department deleted a web page that described an agreement in which Microsoft, Google and Elon Musk's xAI would submit their most advanced AI models to government scientists for security testing before public release. The page, first posted on May 5, vanished Monday afternoon and now redirects to the Center for AI Standards and Innovation, the agency that runs the tests. Neither the department nor the Trump White House offered an explanation, and the three companies have not commented. The removal comes amid shifting federal AI policy and ongoing debate over giving the government pre‑release access to frontier AI systems.
The Commerce Department quietly erased a web page that outlined a high‑profile partnership between Microsoft, Google and Elon Musk's artificial‑intelligence company xAI. The original posting, dated May 5, said the three firms would hand over their frontier AI systems to a federal testing team for evaluation of cyber‑attack vulnerabilities, risks of military misuse and other national‑security flaws before the models reached the market.
By Monday afternoon, Washington time, the link returned a generic "Sorry, we cannot find that page" notice. Visitors were automatically redirected to the website of the Center for AI Standards and Innovation, the body that now oversees the testing program. The Center, a successor to the U.S. AI Safety Institute, operates within the National Institute of Standards and Technology (NIST), itself a component of the Commerce Department.
The shift in online presence follows an executive order that scaled back the previous administration’s AI‑safety architecture. Instead of a broad safety‑evaluation mandate, the order refocused the institute’s mission on developing standards and coordinating with industry. The change in branding and web location reflects that policy pivot.
Neither the Commerce Department nor the Trump White House responded to Reuters’ requests for comment on why the page was removed. The three companies also declined to comment. The lack of an official statement leaves observers to wonder whether the deletion signals a deeper policy disagreement or simply a routine website update.
When it was published on May 5, the announcement was seen as a tangible sign of growing federal concern about the national‑security risks posed by powerful AI models. It also marked a rare public commitment by major AI developers to subject their cutting‑edge systems to pre‑deployment government review.
Industry insiders recall that the deal followed the Trump administration’s earlier removal of Anthropic from a Pentagon AI contract over alleged safety‑related constraints, though Anthropic was not listed as a participant in the Commerce Department’s testing program.
Critics have warned that granting the government access to frontier AI models before they are released could create a new target for nation‑state cyber‑espionage. Several federal officials have publicly questioned the wisdom of such pre‑release access, arguing that it may inadvertently expose sensitive technology to hostile actors.
Despite the page’s disappearance, the Center for AI Standards and Innovation continues to operate, and the redirected site still hosts general information about its program. No indication has been given that the testing arrangement itself has been canceled.
The episode underscores the ongoing tension within U.S. AI policy circles. Supporters of robust government oversight view the original announcement as a cornerstone of the administration’s approach to AI risk mitigation. Detractors see the removal of a positive, public‑facing statement as evidence of internal discord about how deeply the government should involve itself in the development of frontier AI.
For now, the precise status of the Microsoft, Google and xAI testing agreement remains opaque. The public record no longer includes the specifics of the pre‑release testing arrangement, leaving policymakers, industry players and the public to infer the next steps from a shifting regulatory landscape.