Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out

Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models—unless those users opt out.

Previously, the company did not train its generative AI models on user chats. When Anthropic's updated privacy policy takes effect on October 8, users will have to opt out, or their new chat logs and coding tasks will be used to train future Anthropic models.

Why the switch-up? “All large language models, like Claude, are trained using large amounts of data,” reads part of Anthropic’s blog explaining why the company made this policy change. “Data from real-world interactions provide valuable insights on which responses are most useful and accurate for users.” With more user data thrown into the LLM blender, Anthropic’s developers hope to make a better version of their chatbot over time.

The change was originally scheduled to take place on September 28 before being bumped back. “We wanted to give users more time to review this choice and ensure we have a smooth technical transition,” Gabby Curtis, a spokesperson for Anthropic, wrote in an email to WIRED.

How to Opt Out

New users are asked to make a decision about their chat data during their sign-up process. Existing Claude users may have already encountered a pop-up laying out the changes to Anthropic’s terms.

“Allow the use of your chats and coding sessions to train and improve Anthropic AI models,” it reads. The toggle that shares your data with Anthropic for training is switched on by default, so users who accepted the updates without clicking that toggle were opted in to the new training policy.

All users can toggle conversation training on or off in Claude's Privacy Settings. If you'd rather not have your Claude chats train Anthropic's new models, find the setting labeled Help improve Claude and make sure the switch is flipped off, to the left.

If a user doesn’t opt out of model training, the new training policy covers all new and revisited chats. That means Anthropic is not automatically training its next model on your entire chat history, but if you go back into the archives and reignite an old thread, that conversation is reopened and becomes fair game for future training.

The new privacy policy also arrives with an expansion to Anthropic’s data retention policies. Anthropic increased the amount of time it holds onto user data from 30 days in most situations to a much more extensive five years, whether or not users allow model training on their conversations.

Anthropic’s change in terms applies to consumer-tier users, free as well as paid. Commercial users, like those licensed through government or educational plans, are not impacted by the change, and conversations from those users will not be used as part of the company’s model training.

Claude is a favorite AI tool for some software developers who’ve latched onto its abilities as a coding assistant. Since the privacy policy update includes coding projects as well as chat logs, Anthropic could gather a sizable amount of coding information for training purposes with this switch.

Before this privacy policy update, Claude was one of the only major chatbots that did not automatically use conversations for LLM training. By comparison, the default settings for both OpenAI’s ChatGPT and Google’s Gemini on personal accounts allow model training unless the user chooses to opt out.

Check out WIRED’s full guide to AI training opt-outs for more services where you can request generative AI not be trained on user data. While choosing to opt out of data training is a boon for personal privacy, especially when dealing with chatbot conversations or other one-on-one interactions, it’s worth keeping in mind that anything you post publicly online, from social media posts to restaurant reviews, will likely be scraped by some startup as training material for its next giant AI model.

