Data Privacy & Security Weekly AI News

August 4 - August 12, 2025

This weekly update highlights growing privacy and security challenges as AI agents become more deeply integrated into our digital lives. These intelligent systems, designed to help users with tasks and decisions, are creating new risks that companies and regulators are struggling to address.

Apple's Siri faces major privacy scrutiny after researchers at Black Hat 2025 conference revealed serious data handling issues. The Israeli cybersecurity firm Lumia Security discovered that Siri routinely sends sensitive user information to Apple servers, including dictated messages and WhatsApp communications, even when this data transmission isn't necessary to complete user requests. This behavior occurs outside Apple's heavily promoted Private Cloud Compute system, which the company markets as providing enhanced privacy protections for AI processing.

Apple initially acknowledged the findings and indicated it would work toward fixes, but later changed its position. The company now claims the data transmission isn't a privacy issue related to Apple Intelligence, but rather stems from third-party services using SiriKit, Apple's system for integrating external apps with Siri. However, this distinction between Siri's servers and the Private Cloud Compute system isn't clearly communicated to users, leaving many confused about how their personal data is actually being handled.

AI agent security vulnerabilities extend far beyond Apple's ecosystem. Security researchers recently uncovered major flaws in McDonald's AI hiring chatbot Olivia, developed by third-party provider Paradox AI. These vulnerabilities were shockingly basic - hackers could access chat logs and contact information simply by changing applicant ID numbers. Most concerning, researchers gained full admin access using the login "admin" with password "123456." These security failures potentially exposed personal data from up to 64 million job applicants, demonstrating how poorly some companies protect AI systems that handle sensitive information.

Regulatory responses are accelerating as governments recognize the risks posed by AI agents. California's Civil Rights Council secured final approval for regulations protecting against employment discrimination from AI systems. These new rules require employers, employment agencies, and other organizations to maintain automated-decision data for at least four years. The regulations also specify that AI assessments, including tests that might reveal information about disabilities, could constitute unlawful medical inquiries.

Minnesota's Consumer Data Privacy Act took effect during this period, adding significant new obligations for companies using AI to process personal data. Unlike previous U.S. state privacy laws, Minnesota's law requires organizations to maintain detailed data inventories and designate Chief Privacy Officers or other individuals specifically responsible for consumer data protection. The law also grants consumers new rights to challenge and obtain additional information about how AI systems profile their personal information.

International developments show the global nature of AI agent privacy concerns. The European Commission received the final version of the General Purpose AI Code of Practice, a voluntary framework designed to guide responsible AI development and deployment in the EU. This code provides practical guidance on transparency, risk management, and accountability for AI providers and deployers, though it remains voluntary for organizations seeking to demonstrate compliance with the EU AI Act.

However, enforcement challenges persist across borders. Privacy rights organization NOYB filed formal complaints against three major Chinese apps - TikTok, AliExpress, and WeChat - alleging violations of the EU's General Data Protection Regulation. The complaints accuse these companies of failing to adequately respond to user data access requests, with TikTok providing incomplete data, AliExpress sending broken files, and WeChat ignoring requests entirely.

Advanced attack techniques targeting AI agents are becoming more sophisticated. Researchers discovered ways to trick Google's Gemini AI through prompt injection attacks. By including invisible text in emails, attackers can pass hidden commands to AI systems that summarize messages, tricking them into outputting malicious content without users realizing what's happening. This technique exploits the fact that AI agents often process untrusted content from the internet while having access to sensitive user data and system functions.

Security experts warn of a "lethal trifecta" when AI agents combine three dangerous elements: access to tools and systems, ability to process untrusted external content, and difficulty distinguishing between legitimate and malicious instructions. This combination makes it nearly impossible for current AI systems to reliably prevent attacks while maintaining their useful capabilities.

Industry responses show companies are taking AI agent security more seriously. HPE unveiled new AI-driven security solutions at Black Hat USA 2025, including SASE copilot systems that provide AI-powered insights on network activity and security gaps. These tools represent the growing trend of using AI to defend against AI-powered attacks, though experts note this creates an ongoing arms race between attackers and defenders.

The fundamental challenge remains that AI agents require extensive data access to function effectively, making traditional privacy frameworks inadequate. As these systems become embedded in everything from hiring decisions to personal assistants, the complexity makes it difficult for users to understand when their data is being transmitted, processed locally, or shared with third parties. Enterprise users face particular compliance concerns when sensitive corporate information potentially leaves organizational networks through employee devices running AI agents.

This weekly update demonstrates that while AI agents offer tremendous benefits for productivity and convenience, the privacy and security challenges they create require urgent attention from companies, regulators, and users alike.

Weekly Highlights