Anthropic Expands Claude Chat Data Use, Offers Opt-Out Option

Key Points
- Anthropic will use Claude chat logs and coding sessions to train future models unless users opt out.
- The data‑sharing toggle is on by default, so users are opted in unless they switch it off; the setting can be changed at any time in Privacy Settings.
- Data retention is extended from 30 days to up to five years.
- Commercial‑tier accounts are exempt from the new training policy.
- Reopened archived chats become eligible for training if the user has not opted out.
- The policy brings Anthropic’s data practices in line with industry norms set by OpenAI and Google.
Anthropic announced that it will begin using user conversations and coding sessions from its Claude chatbot to train future large language models unless users actively opt out. The policy change, detailed in an updated privacy notice, also extends data retention from 30 days to up to five years. New users encounter a data‑sharing toggle during sign‑up that is switched on by default, while existing users see a pop‑up prompting a choice. Users can manage the setting at any time under Privacy Settings by disabling the “Help improve Claude” switch. Commercial‑tier accounts remain exempt from the new training policy.
Background
Anthropic’s Claude chatbot has historically been one of the few major AI assistants that did not use user interactions as training data for its large language models. The company’s privacy policy has now been revised to allow the repurposing of chat logs and coding tasks for model improvement, aligning its approach with industry norms.
Policy Change
The updated privacy notice states that, starting from the effective date, all new and revisited chats may be incorporated into Anthropic’s training pipeline unless the user opts out. The change also lengthens the data retention period from a typical 30‑day hold to a maximum of five years for stored user data. The policy applies to both free and paid personal accounts, while commercial‑tier users, including government and educational licenses, are explicitly excluded.
Opt‑Out Process
During the sign‑up flow for new Claude users, a clear choice is presented: a toggle labeled “Allow the use of your chats and coding sessions to train and improve Anthropic AI models.” The toggle is on by default, meaning users who do not actively disable it are opted in. Existing users are shown a privacy pop‑up and can adjust their preference at any time via the Privacy Settings menu. The relevant setting, titled “Help improve Claude,” can be switched off to prevent future chat data from being used for training.
Implications for Users
For users who opt out, Anthropic will not use their new conversations or coding work for model training. Reopening an archived chat, however, makes that conversation eligible for training if the user has not opted out. The expanded retention window also means that data covered by the policy may be stored far longer than before, potentially up to five years.
Industry Context
Anthropic’s new policy brings its data practices in line with those of other leading AI providers, such as OpenAI’s ChatGPT and Google’s Gemini, which also train on user conversations by default and require an explicit opt‑out to restrict it. By offering a straightforward opt‑out mechanism, Anthropic aims to balance its need for real‑world interaction data to improve Claude against user privacy concerns.