Data Privacy & Security Weekly AI News

September 1 - September 9, 2025

This weekly update highlights a concerning trend in which AI agents are reshaping both the privacy landscape and cybersecurity threats in ways that directly affect everyday users and businesses worldwide.

Major AI companies are quietly changing how they handle your personal data. In September 2025, Anthropic made a significant policy change for Claude: the company may now use consumer account conversations for model training unless users explicitly opt out. This represents a shift toward less privacy-protective defaults in AI services. To keep their conversations out of training data, users must navigate to Claude's privacy settings and manually turn off the "Help improve Claude" toggle.

Similar privacy concerns plague other popular AI tools. OpenAI's ChatGPT continues to use all user conversations for model training by default. To prevent this data collection, users must click their profile icon, navigate to Settings > Data Controls, and toggle off "Improve the model for everyone." These opt-out requirements place the burden on users to protect their own privacy, often without clear notification of policy changes.

Educational institutions are taking steps to protect their communities. Boston College now recommends that faculty, staff, and students use institution-licensed AI tools such as Google Gemini, Google NotebookLM, and Microsoft Copilot instead of the consumer versions. Under institutional accounts, user data is not used for AI model training and receives stronger data protections. This guidance reflects growing awareness among organizations of AI privacy risks.

Courts are establishing important precedents for AI privacy rights. The U.S. District Court for the Northern District of California denied a motion to dismiss a California Invasion of Privacy Act class action against an AI customer service provider. The court rejected the company's "flippant" argument that pizza orders don't warrant privacy expectations, recognizing that customers share personally identifiable and financial information, including names, addresses, and credit card details. The ruling signals that AI agents capable of using customer communications for their own purposes may be treated as third-party eavesdroppers.

The security implications of AI agents are becoming increasingly severe. New research from Apiiro finds that while AI coding assistants can boost engineering productivity as much as fourfold, they simultaneously introduce roughly ten times as many vulnerabilities into code. This dramatic increase in security flaws creates significant risk as organizations rush to adopt AI development tools without adequate security review processes. Companies are urged to invest heavily in code review automation and secure coding training to close this widening security gap; the sketch below illustrates the kind of automated check such review tooling performs.
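As one illustration, here is a minimal, hypothetical static check (this is not Apiiro's tooling, and the vulnerable snippet is invented for the example) that flags a flaw AI assistants are known to reproduce: SQL queries assembled by string formatting instead of parameterized queries.

```python
# Minimal sketch of an automated code-review check (hypothetical; not any
# vendor's actual tooling). It flags a flaw commonly found in AI-generated
# code: SQL queries built via string formatting instead of parameters.
import ast

# Invented example code under review: one risky call, one safe call.
SOURCE = '''
def lookup(cursor, user_id):
    cursor.execute("SELECT * FROM users WHERE id = %s" % user_id)    # risky
    cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))  # safe
'''

def flag_unparameterized_sql(source: str) -> list[int]:
    """Return line numbers where .execute() receives a dynamically built
    string (an f-string or a binary op such as % or +) as its first argument."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], (ast.BinOp, ast.JoinedStr))):
            findings.append(node.lineno)
    return findings

for line in flag_unparameterized_sql(SOURCE):
    print(f"possible SQL injection at line {line}")  # flags only the risky call
```

A production linter would cover far more patterns (taint tracking, framework-specific sinks, and so on), but even this level of automation catches the category of flaw the research describes.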

Economic pressures are forcing risky AI adoption in cybersecurity. Cybersecurity budget growth has slowed sharply, from 17% in 2022 to just 4% in 2025, and organizations are responding by accelerating adoption of AI-driven security tools. Nearly 90% of information security leaders report being understaffed, creating dangerous gaps in human oversight of automated security systems. While AI excels at routine tasks like threat detection and alert triage, this dependency raises concerns about reduced human judgment in critical security decisions.

Criminals are weaponizing the same AI technologies for malicious purposes. Cybersecurity experts report that AI chatbots are inadvertently helping hackers plan sophisticated cybercrimes. The education sector has emerged as the primary target for these AI-enhanced attacks, likely because of its valuable stores of personal data and its often weaker security infrastructure. This creates a concerning arms race in which defenders and attackers leverage similar AI capabilities.

New regulatory frameworks are emerging to address these challenges. Various states are implementing laws requiring consumer-facing bots to disclose when users are interacting with AI rather than humans. Additionally, regulations around AI systems making consequential decisions in areas like employment, lending, healthcare, and education are being finalized across multiple jurisdictions. These developments suggest regulators recognize the urgent need for governance frameworks around AI agent deployment and data handling practices.

Weekly Highlights