Understanding Anthropic’s Data Usage Policy: What Users Need to Know

Key Update for Claude Users

Anthropic has announced significant changes to its Consumer Terms and Privacy Policy, effective September 28, 2025. These updates affect users of the Claude Free, Pro, and Max plans: the company will now use chats and coding sessions to train and improve its AI models. Participation is optional, but the relevant toggle in the privacy controls defaults to on, so users who do not want their data used must manually opt out.

What Has Changed?

  1. Data Usage for Training
  • Anthropic will use your chats and coding sessions to enhance its AI models.
  • This applies only to personal accounts (not commercial or API users).
  2. Data Retention
  • If you allow data usage, your chats and coding sessions will be retained for five years for training purposes.
  • You can modify your preferences anytime via the Privacy Settings.

How to Opt Out

  • Review Privacy Settings: Log into Claude.ai and open your privacy settings to review your preferences before the September 28 deadline.
  • Manual Adjustment: The setting defaults to ‘on,’ so toggle it to ‘off’ if you do not want your data used for training.

Why This Matters

This policy shift raises concerns about data privacy and user consent, particularly because the setting is enabled by default. While Anthropic says the changes will improve model quality and safety, users who want to keep their conversations out of training data must take action themselves.

For more details, visit Anthropic’s official blog or review their updated Privacy Policy. Stay informed to make empowered decisions about your data.