Anthropic Expands Claude Data Use, Offers Opt-Out for Users

Key Points
- Anthropic will start using new Claude chats and coding tasks as training data for its AI models.
- The policy update becomes effective on October 8, after a delay from the originally planned September 28 date.
- A default “Help improve Claude” toggle is enabled; users must turn it off in Privacy Settings to opt out.
- For users who do not opt out, training applies to all new chats and any reopened conversations; archived threads are untouched unless reactivated.
- Data retention is extended from 30 days to five years for all users, irrespective of training opt‑in status.
- Commercial‑tier users licensed through government or educational programs are exempt from data use for training.
- Because Claude is widely used for coding assistance, coding projects are also included in the training dataset for users who have not opted out.
- Prior to this change, Claude was one of the few major chatbots that did not automatically use user conversations for model training.
Anthropic announced that it will begin using new Claude chat interactions and coding tasks as training data for its large language models. The shift follows an update to the company’s privacy policy slated for October 8, which will automatically include user data in model training unless individuals explicitly opt out. Users can control the setting through a “Help improve Claude” toggle in Privacy Settings. The policy also extends data retention from 30 days to five years for all users, while commercial‑tier accounts licensed through government or educational programs remain exempt from training data collection.
Policy Change and Rationale
Anthropic is preparing to incorporate user conversations with its Claude chatbot, as well as coding tasks performed within the tool, into the training data for future large language models. The company explained that large language models require extensive datasets, and real‑world interactions provide valuable insights into which responses are most useful and accurate for users. This represents a departure from Anthropic’s previous stance, where user chats were not automatically used for model training.
Implementation Timeline
The updated privacy policy is set to take effect on October 8. The change was originally scheduled for September 28 but was postponed to give users additional time to review the new terms. Gabby Curtis, a spokesperson for Anthropic, indicated the delay was intended to ensure a smooth technical transition.
Opt‑Out Mechanism
New Claude users will encounter a decision prompt during the sign‑up process, while existing users may see a pop‑up outlining the changes. The default setting, labeled “Help improve Claude,” is turned on, meaning users are opted in unless they actively turn the switch off. To opt out, users should navigate to the Privacy Settings and toggle the switch off. If users do not opt out, the policy applies to all new chats and to any older conversations they reopen; it does not apply retroactively to archived threads unless those threads are reactivated.
Data Retention Extension
Alongside the training data change, Anthropic is extending its data retention period. Previously, most user data was retained for 30 days; under the new policy, data will be stored for up to five years, regardless of whether the user has opted in to model training.
Scope of Affected Users
The policy covers both free and paid commercial‑tier users of Claude. However, commercial users who are licensed through government or educational plans are exempt; their conversations will not be used for model training. Claude’s popularity as a coding assistant means that coding projects submitted through the platform will also be included in the training dataset for users who have not opted out.
Industry Context
Before this update, Claude was one of the few major chatbots that did not automatically use user conversations for training. In contrast, OpenAI’s ChatGPT and Google’s Gemini default to allowing model training on personal accounts unless users choose to opt out. The shift places Anthropic in line with industry practices regarding data use for AI model improvement.
What Users Can Do
Users who wish to keep their Claude interactions private should locate the “Help improve Claude” toggle in the Privacy Settings and switch it off. Those interested in broader privacy considerations can consult guides that outline opt‑out procedures for various AI services.