Anthropic Introduces Safer Auto Mode for Claude Code

The Verge

Key Points

  • Anthropic adds an auto mode to Claude Code for safer AI‑driven actions.
  • The feature flags and blocks potentially risky operations before execution.
  • If an action is blocked, Claude Code can retry or request user intervention.
  • Auto mode is currently a research preview for Team plan users.
  • Anthropic plans to extend access to Enterprise and API users shortly.
  • The tool is labeled experimental and should be used in isolated environments.
  • Anthropic emphasizes that auto mode reduces but does not remove all risk.

Anthropic has launched an auto mode for its Claude Code tool, allowing the AI to act on users' behalf while reducing the risk of unwanted actions. The feature flags and blocks potentially risky operations, prompting the model to retry or request user intervention. Auto mode is currently available as a research preview for Team plan users, and Anthropic plans to extend access to Enterprise and API users in the coming days. The company emphasizes that the tool remains experimental and recommends using it in isolated environments.

Overview of Claude Code Auto Mode

Anthropic announced a new auto mode for its Claude Code product that lets the AI make permission-level decisions on developers' behalf. The addition targets a middle ground between constant manual oversight and granting the model unrestricted autonomy, which can lead to undesirable outcomes such as accidental file deletion, unintended data sharing, or execution of malicious code.

How Auto Mode Enhances Safety

The auto mode is designed to intercept actions that could be risky before they are executed. When Claude Code attempts an operation that may pose a threat, the feature flags the action, blocks it, and either offers the model a chance to try an alternative approach or asks the user to intervene. This safety layer aims to provide developers with a more secure environment while still leveraging the convenience of AI‑driven assistance.
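The article does not describe Anthropic's actual implementation, but the flag-block-escalate workflow it outlines can be illustrated with a minimal sketch. Everything here is hypothetical: the `RISKY_PATTERNS` deny-list, the `assess` classifier, and the `run_with_gate` wrapper are illustrative names, not Claude Code's real rules or API.

```python
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()
    ASK_USER = auto()

# Hypothetical deny-list of risky command fragments (not Anthropic's rules).
RISKY_PATTERNS = ("rm -rf", "chmod 777", "curl ")

def assess(command: str) -> Verdict:
    """Flag a command as risky *before* execution, as the article describes."""
    if any(pattern in command for pattern in RISKY_PATTERNS):
        return Verdict.ASK_USER   # escalate instead of running it
    return Verdict.ALLOW

def run_with_gate(command: str, ask_user) -> str:
    """Execute, or block and hand the decision to the user."""
    if assess(command) is Verdict.ALLOW:
        return f"executed: {command}"
    if ask_user(command):         # user explicitly approves the flagged action
        return f"executed after approval: {command}"
    return f"blocked: {command}"  # the model would retry another approach
```

In this sketch, a benign command like `ls -la` runs immediately, while `rm -rf /tmp/x` is intercepted and either approved by the user or blocked, mirroring the "retry or request user intervention" behavior described above.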

Current Availability and Planned Expansion

At launch, the auto mode is offered as a research preview limited to users on Anthropic’s Team plan. Anthropic has indicated that access will be broadened to include Enterprise customers and users of its API in the coming days, allowing a wider audience to test the feature.

Experimental Nature and Recommended Use

Anthropic cautions that the auto mode remains experimental and does not eliminate risk entirely. The company advises developers to employ Claude Code in isolated environments to mitigate potential impacts. By acknowledging the limitations, Anthropic encourages responsible experimentation while continuing to develop safer AI assistance tools.

