Anthropic Will Use Claude Chats for Training Data. Here’s How to Opt Out

Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models—unless those users opt out.

Previously, the company did not train its generative AI models on user chats. When Anthropic's updated privacy policy takes effect on October 8, users will have to opt out, or their new chats and coding sessions will be used to train future Anthropic models.

Why the switch-up? “All large language models, like Claude, are trained using large amounts of data,” reads part of Anthropic’s blog explaining why the company made this policy change. “Data from real-world interactions provide valuable insights on which responses are most useful and accurate for users.” With more user data thrown into the LLM blender, Anthropic’s developers hope to make a better version of their chatbot over time.

The change was originally scheduled to take place on September 28 before being bumped back. “We wanted to give users more time to review this choice and ensure we have a smooth technical transition,” Gabby Curtis, a spokesperson for Anthropic, wrote in an email to WIRED.

How to Opt Out

New users are asked to make a decision about their chat data during their sign-up process. Existing Claude users may have already encountered a pop-up laying out the changes to Anthropic’s terms.

“Allow the use of your chats and coding sessions to train and improve Anthropic AI models,” it reads. The toggle to provide your data to Anthropic to train Claude is automatically on, so users who chose to accept the updates without clicking that toggle are opted into the new training policy.

All users can toggle conversation training on or off in their Privacy Settings. If you'd rather not have your Claude chats train Anthropic's new models, make sure the switch labeled Help improve Claude is turned off.

If a user doesn't opt out of model training, the new policy covers all new and revisited chats. That means Anthropic is not automatically training its next model on your entire chat history; only new conversations are affected, unless you go back into the archives and reignite an old thread. Once reopened, that old chat is fair game for future training.

The new privacy policy also arrives with an expansion of Anthropic's data retention practices. For users who allow model training on their conversations, Anthropic is extending the amount of time it holds onto user data from 30 days in most situations to five years; users who opt out keep the existing 30-day retention period.

Anthropic's change in terms applies to consumer-tier users, free as well as paid. Commercial users, such as those licensed through government or educational plans, are not affected by the change, and conversations from those accounts will not be used in the company's model training.

Claude is a favorite AI tool for some software developers who’ve latched onto its abilities as a coding assistant. Since the privacy policy update includes coding projects as well as chat logs, Anthropic could gather a sizable amount of coding information for training purposes with this switch.

Before Anthropic updated its privacy policy, Claude was one of the only major chatbots that did not use conversations for LLM training by default. By comparison, the default settings for personal accounts on both OpenAI's ChatGPT and Google's Gemini allow model training unless the user opts out.

Check out WIRED’s full guide to AI training opt-outs for more services where you can request generative AI not be trained on user data. While choosing to opt out of data training is a boon for personal privacy, especially when dealing with chatbot conversations or other one-on-one interactions, it’s worth keeping in mind that anything you post publicly online, from social media posts to restaurant reviews, will likely be scraped by some startup as training material for its next giant AI model.



