Anthropic’s Surveillance Restrictions Spark Tension with White House

White House officials reportedly frustrated by Anthropic’s law enforcement AI limits
Ars Technica

Key Points

  • White House officials are frustrated by Anthropic’s ban on domestic surveillance use of Claude models.
  • Federal contractors for the FBI and Secret Service face roadblocks when seeking AI‑driven surveillance tools.
  • Anthropic’s Claude models are among the few AI systems cleared for top‑secret work through AWS GovCloud.
  • The company has a $1‑fee agreement to provide AI services to federal customers.
  • A GSA blanket agreement now permits OpenAI, Google and Anthropic to supply AI tools to federal workers.
  • OpenAI will deliver ChatGPT Enterprise to more than 2 million federal executive‑branch employees for $1 per agency for one year.
  • Anthropic also serves the Department of Defense but still prohibits its models for weapons development.

White House officials have expressed frustration with Anthropic’s policy barring the use of its Claude models for domestic surveillance. The restriction is creating roadblocks for federal contractors working with agencies such as the FBI and the Secret Service. Anthropic’s models are among the few AI systems cleared for top‑secret environments through Amazon Web Services’ GovCloud, and the company has a nominal‑fee agreement to provide services to federal customers. The dispute comes as the government has also signed a blanket agreement with OpenAI, Google and Anthropic to supply AI tools to federal workers.

Background

Anthropic’s Claude models are used in high‑security contexts and are cleared for top‑secret environments via Amazon Web Services’ GovCloud. The company has a special arrangement with the federal government that provides its services for a nominal $1 fee.

Policy Restrictions

Anthropic’s usage policies prohibit the use of its AI models for domestic surveillance, a stance that has drawn criticism from senior White House officials. Contractors working with agencies like the FBI and the Secret Service have encountered obstacles when attempting to employ Claude for surveillance‑related tasks. Officials worry that the company enforces its policies selectively and uses vague terminology that allows broad interpretation.

Federal Agreements

In addition to Anthropic’s $1‑fee deal, the General Services Administration recently signed a blanket agreement allowing OpenAI, Google and Anthropic to supply AI tools to federal workers. OpenAI announced a separate contract to provide more than 2 million federal executive‑branch employees with ChatGPT Enterprise access for $1 per agency for one year.

Industry Context

The friction highlights a tension between private AI providers’ ethical usage policies and government agencies’ demand for advanced AI capabilities in law‑enforcement and national‑security operations. While Anthropic also works with the Department of Defense, its policies continue to prohibit the use of its models for weapons development.

Generated with News Factory - Source: Ars Technica