Pentagon Plans to Train AI Models on Classified Military Data

Key Points
- Pentagon aims to train AI models on classified military data for exclusive use.
- Training would occur in a secure data center approved for classified work.
- The Department would retain ownership of all data used in model training.
- OpenAI and xAI are expected to participate; Anthropic may be excluded.
- Experts warn that broader model deployment could expose classified info to uncleared personnel.
- The initiative supports the DoD's "AI‑first" warfighting strategy.
- Security clearances may be granted to select AI company staff for training access.

The Department of Defense is reportedly preparing to have artificial‑intelligence companies train versions of their models on classified information for exclusive military use. The initiative would take place in a secure data center authorized for classified projects, with the Pentagon retaining ownership of all training data. Companies such as OpenAI and xAI are expected to participate, while Anthropic may be excluded due to its policy restrictions. Experts warn that training on sensitive data could expose classified material to personnel lacking proper clearance, raising security concerns about broader model deployment within the defense establishment.
Background
The U.S. Department of Defense is moving toward an "AI‑first" warfighting posture, as outlined in a recent statement by Secretary of Defense Pete Hegseth. The Department already uses artificial‑intelligence models such as Anthropic's Claude, which reportedly assisted in operations including the capture of Venezuelan President Nicolás Maduro and an attack on Iran. These existing deployments rely on publicly available AI technology; the Pentagon now seeks more precise capabilities by training models on classified data.
Planned Approach
According to MIT Technology Review, the Defense Department intends to conduct the training in a secure data center cleared for classified government projects. The plan calls for creating copies of AI models owned solely by the Pentagon, with all training data remaining under government control. In limited cases, individuals from the AI firms may be granted the security clearances needed to access the classified material used in training.
Potential Risks
Alek Mehta, who formerly led AI policy at Google and OpenAI, cautioned that training models on classified data carries significant risk. Although the resulting models would be dedicated to military purposes, there is concern that if a single model were deployed across the Defense Department, personnel without the appropriate clearance could inadvertently gain access to sensitive information embedded in the model's responses.
Industry Involvement
The initiative is expected to involve major AI developers such as OpenAI and Elon Musk's xAI, both of which have recently signed agreements with the Department. Anthropic, a long‑time government contractor, may be excluded because it has refused to allow its technology to be used for mass surveillance or autonomous weapons, a stance that previously led to a ban on its use by federal agencies.
Implications
If implemented, the program would mark a shift toward integrating highly specialized AI tools into military decision‑making processes, potentially improving the accuracy and detail of responses in scenarios that are not publicly documented. However, the approach also raises questions about data security, clearance management, and the broader impact of embedding classified knowledge within AI systems.