OpenAI Secures Deal with U.S. Defense Department to Deploy Its AI Models

Key Points
- OpenAI signed a contract to deploy its AI models within the U.S. Defense Department’s network.
- The agreement embeds two safety principles: no domestic mass surveillance and human responsibility for the use of force, including in autonomous weapon systems.
- OpenAI will provide technical safeguards and assign engineers to work with the department.
- Deployment will run on cloud infrastructure; a newly announced partnership with Amazon will make the models available on Amazon Web Services for enterprise customers.
- Anthropic declined a similar government contract, citing opposition to surveillance and autonomous weapons.
- The Defense Department highlighted that the contract references existing legal authorities and mutually agreed safety mechanisms.

OpenAI announced a contract with the U.S. Defense Department to place its artificial‑intelligence models within the agency’s network. The agreement includes two core safety principles: a prohibition on domestic mass surveillance and a requirement that humans retain responsibility for the use of force, including in autonomous weapon systems. OpenAI will provide technical safeguards, assign engineers to work with the department, and run the models on cloud infrastructure; a partnership with Amazon will also make the models available on Amazon Web Services for enterprise customers. The deal comes as rival Anthropic declined a similar government offer, citing concerns over surveillance and weaponization.

Deal Overview
OpenAI has entered into a contract with the United States Defense Department to deploy its artificial‑intelligence models on the agency’s internal network. The company’s chief executive announced the agreement publicly, emphasizing that it incorporates two of OpenAI’s most important safety principles: a prohibition on domestic mass surveillance and the requirement that humans retain responsibility for the use of force, including in autonomous weapon systems.

Safety Commitments
The safety principles are embedded in the contract, and OpenAI has pledged to build technical safeguards to ensure the models behave as intended. OpenAI engineers will be assigned to work directly with the Defense Department to monitor and maintain these safeguards. The deployment will be limited to cloud networks; separately, the company plans to make its models available on Amazon Web Services for enterprise customers.

Government Context
The agreement follows a broader governmental push to regulate AI use in sensitive areas. A senior official from the Defense Department noted that the contract references existing legal authorities and includes mutually agreed‑upon safety mechanisms. The same safety framework was offered to other AI firms, but not all have accepted.

Anthropic’s Stance
Anthropic, another leading AI developer, publicly refused a comparable contract with the Defense Department. The company reiterated its opposition to domestic mass surveillance and fully autonomous weapons, stating it would challenge any designation of “supply chain risk” in court. Anthropic’s refusal underscores differing corporate approaches to government partnerships involving AI safety constraints.

Cloud Infrastructure and Partnerships
OpenAI’s deployment will initially run on cloud platforms, and the company has announced a partnership with Amazon to make its models available on Amazon Web Services for enterprise customers. While the Defense Department currently relies on a different cloud provider, the partnership could enable future migration of OpenAI’s models to the department’s preferred infrastructure.

Implications
The deal illustrates a growing intersection between advanced AI technology and national‑security agencies, and it highlights the role of safety safeguards in government contracts. By embedding an explicit prohibition on domestic mass surveillance and a requirement for human responsibility over the use of force, including in autonomous weapon systems, OpenAI aims to balance innovation with ethical responsibility while meeting the Defense Department’s operational needs.