Data Privacy & Security Weekly AI News

September 1 - September 9, 2025

This weekly update covers major changes in how AI agents handle your personal data and new security risks emerging from their widespread use.

Anthropic updated Claude's privacy policy in September 2025, which means your conversations with Claude may now be used to train Anthropic's AI models unless you actively opt out. To protect your data, go to your privacy settings and turn off the toggle labeled "Help improve Claude." Similarly, ChatGPT uses your chats for model training by default, so users must manually disable this feature in their data controls.

Meanwhile, courts are taking AI privacy violations seriously. A U.S. Federal Court allowed a major class action lawsuit to proceed against an AI customer service provider for recording customer conversations without proper consent. The court rejected the company's argument that pizza orders don't deserve privacy protection, noting that customers shared personal and financial information including names, addresses, and credit card details.

On the security front, research shows AI coding assistants are creating significant new vulnerabilities. While these tools can boost programming speed by up to four times, they also introduce ten times more security flaws in code. This creates a dangerous trade-off between productivity and safety.

As cybersecurity budget growth slows from 17% in 2022 to just 4% in 2025, organizations are turning to AI-powered security tools to fill the gaps. However, this same AI technology is being weaponized by criminals, with AI chatbots inadvertently helping hackers plan cybercrimes. The education sector has become the biggest target for these AI-enhanced attacks.

Extended Coverage