
Anthropic to Use User Data for AI Training: Act Now!


Anthropic announced that it will begin using user data, including transcripts of new chat and coding sessions, to train its AI models unless users expressly opt out. The company is also extending its data retention period to five years for users who do not opt out.

All users have until September 28th to make a choice. According to a blog post Anthropic published on Thursday, users who select “Accept” will have their data incorporated into AI model training immediately, with retention of up to five years.

The new policy applies to “new or resumed chats and coding sessions.” Even if users consent to training, old chats and coding sessions that have not been resumed are excluded; resuming an older session, however, grants permission to use that data.

The updated terms are relevant for all of Claude’s consumer subscription tiers, which encompass Claude Free, Pro, and Max. This includes usage when employing Claude Code from accounts linked to those plans. However, these updates do not apply to commercial tiers like Claude Gov, Claude for Work, Claude for Education, or API use via third-party platforms such as Amazon Bedrock and Google Cloud’s Vertex AI.

New users will need to set their preferences during the signup process, while existing users will encounter a pop-up prompting them for their decision, which can be deferred by selecting a “Not now” option. Nevertheless, a decision must be made by the specified deadline of September 28th.

Users should take note of how the choice is presented: the design of the notification makes it easy to click “Accept” without thoroughly reviewing the accompanying information.

Anthropic’s new terms. Image: Anthropic

The notification users will encounter states prominently, “Updates to Consumer Terms and Policies,” followed by information that reads, “An update to our Consumer Terms and Privacy Policy will take effect on September 28, 2025. You can accept the updated terms today.” A large “Accept” button is positioned at the bottom of the notice.

Accompanying this are smaller details indicating, “Allow the use of your chats and coding sessions to train and improve Anthropic AI models,” complete with an on/off toggle switch pre-set to “On.” This may lead many users to inadvertently click “Accept” without altering the toggle setting.

Users wishing to opt out can do so by switching the toggle to “Off” upon seeing the notification. For those who have already accepted and wish to amend their choice, they must go to Settings, then the Privacy tab, find the Privacy Settings section, and toggle to “Off” under the “Help improve Claude” option. Users can revisit their privacy settings at any time to alter their choices; however, any new preferences will only affect future data and cannot retract data already used in model training.

In its blog post, Anthropic reassured users about their privacy, stating, “To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data. We do not sell users’ data to third parties.”

