White House Accuses China of Industrial-Scale AI Model Theft, Announces Intelligence Sharing

The Next Web

Key Points

  • OSTP memo accuses Chinese entities of industrial‑scale AI model distillation.
  • U.S. will share threat intelligence with domestic AI firms and explore sanctions.
  • OpenAI and Anthropic provided evidence of millions of illicit queries by DeepSeek, MiniMax and Moonshot AI.
  • Frontier Model Forum now facilitates AI industry threat‑sharing across competitors.
  • Congress introduced the Deterring American AI Model Theft Act to blacklist offending entities.
  • The memo precedes a Trump‑Xi summit scheduled for May 14, linking AI security to diplomatic talks.
  • Hardware export controls face circumvention; smuggling of Nvidia chips was recently charged.
  • Open‑source models like Meta's Llama pose additional national‑security challenges.

The White House Office of Science and Technology Policy released a memo on Wednesday alleging that entities in China are running industrial‑scale campaigns to distill U.S. artificial‑intelligence models. The memorandum pledges to share threat intelligence with American AI firms and to explore sanctions against the perpetrators. The claim builds on accusations from OpenAI and Anthropic that Chinese labs have used millions of queries to replicate frontier models. Lawmakers responded with the Deterring American AI Model Theft Act, while the memo arrives three weeks before a planned Trump‑Xi summit in Beijing.

The Office of Science and Technology Policy (OSTP) issued a policy memorandum on Wednesday that directly accuses China of conducting "industrial‑scale" theft of American artificial‑intelligence models. Director Michael Kratsios said the United States has evidence that foreign entities, primarily in China, are executing large‑scale distillation campaigns to copy U.S. AI capabilities. The memo commits federal agencies to share intelligence with domestic AI developers and to explore accountability measures, though it stops short of announcing specific sanctions.

Distillation, the technique at the center of the dispute, does not involve stealing model weights or hacking servers. Instead, a distiller submits thousands or millions of carefully crafted queries to a frontier model, collects the responses, and uses that data to train a cheaper replica that mimics the original’s performance. The legal status of this practice remains unsettled, but its strategic implications are significant.
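The mechanics can be illustrated with a deliberately simplified toy. Here the "teacher" is a stand‑in for a frontier model's API: an opaque black box whose internals the distiller never sees. The distiller submits many queries, records the responses, and fits a cheap replica to the collected pairs. Everything below is illustrative only, not any lab's actual pipeline.

```python
import random

def teacher(x):
    """Stand-in for a frontier model's API: an opaque black box.
    Internally a secret linear rule the distiller never sees."""
    return 3.0 * x + 7.0

def distill(n_queries=1000):
    """Query the black box, collect (input, output) pairs, and fit
    a cheap replica (here, ordinary least squares on the pairs)."""
    xs = [random.uniform(-10, 10) for _ in range(n_queries)]
    ys = [teacher(x) for x in xs]          # the harvested outputs
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept  # the student replica

student = distill()
# The replica mimics the teacher without ever touching its weights,
# which is why distillation evades hacking- and theft-based framings.
print(abs(student(5.0) - teacher(5.0)) < 1e-6)
```

The point of the sketch is the one the memo turns on: nothing is exfiltrated except ordinary API responses, yet the resulting student reproduces the teacher's behavior.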

OpenAI first raised the alarm in February, filing a formal memo with the House Select Committee on China that named DeepSeek as a party that had been extracting outputs from its models. Anthropic followed with a more detailed report later that month, identifying three Chinese laboratories—DeepSeek, MiniMax and Moonshot AI—as having generated more than 16 million exchanges with its Claude model through roughly 24,000 fraudulent accounts. The accounts employed jail‑breaking techniques and commercial proxy services to bypass geofencing and other restrictions.

By early April, three leading U.S. AI developers—OpenAI, Anthropic and Google—began sharing distillation threat intelligence through the Frontier Model Forum, a coalition founded in 2023 alongside Microsoft. The arrangement mirrors cybersecurity threat‑sharing frameworks: when one company spots an attack pattern, it alerts the others. The cooperation underscores how seriously the industry views the emerging threat.

Congress moved in parallel. On April 15, Representative Bill Huizenga introduced the Deterring American AI Model Theft Act (H.R. 8283), co‑sponsored by Representative John Moolenaar, chair of the House Select Committee on China. The bill would direct the Commerce Department to blacklist entities that employ "improper query‑and‑copy techniques" and impose sanctions accordingly. A hearing on the issue was held on April 16, drawing bipartisan support.

The memo arrives three weeks before a scheduled Trump‑Xi summit in Beijing on May 14, positioning AI model protection as both a national‑security priority and a bargaining chip. While the United States has long restricted advanced AI chips to China—tightening export rules in 2022, 2023, and again in 2025—smuggling schemes have demonstrated the limits of hardware controls. In March, federal prosecutors brought charges over an alleged $2.5 billion scheme to divert Nvidia chips to China, and Nvidia CEO Jensen Huang warned that Chinese optimization of these chips could render the hardware choke point ineffective.

Open‑source models add another layer of complexity. Meta's freely downloadable Llama series has already been fine‑tuned by PLA‑linked institutions for military intelligence purposes. Although Meta prohibits military and espionage uses, it lacks technical means to enforce that restriction once the weights are public. The current legislative focus on distillation sidesteps the harder question of how to regulate open‑source AI that can be repurposed by adversaries.

What follows will test the United States’ ability to enforce a border around something that has no physical form. Detecting illicit distillation requires behavioral analysis of API traffic, not customs inspections of hardware. The upcoming summit will reveal whether the OSTP memo marks the start of a sustained enforcement campaign or merely a negotiating lever aimed at extracting concessions from Beijing.
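What that behavioral analysis might look like can be sketched crudely. The fragment below flags accounts whose query volume is an extreme statistical outlier relative to the population; the account names and threshold are hypothetical, and a real system would also weigh query diversity, prompt structure, and proxy or geolocation signals of the kind Anthropic described.

```python
from statistics import mean, stdev

def flag_distillation_suspects(query_counts, z_threshold=3.0):
    """Flag accounts whose daily query volume is an extreme outlier
    (z-score above the threshold) -- a crude stand-in for the
    behavioral API-traffic analysis the memo implies."""
    counts = list(query_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [acct for acct, c in query_counts.items()
            if (c - mu) / sigma > z_threshold]

# Hypothetical traffic: 200 ordinary accounts plus one bulk harvester.
accounts = {f"user{i}": 120 + (i % 7) for i in range(200)}
accounts["bulk-scraper"] = 50_000
print(flag_distillation_suspects(accounts))
```

Even this toy shows why enforcement is a detection problem rather than a customs problem: the signal lives in usage patterns, not in any physical shipment.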

Tags: AI security, model distillation, US–China relations, technology policy, OpenAI, Anthropic, White House, OSTP, AI theft, export controls, semiconductors, Trump–Xi summit
