Data Privacy & Security Weekly AI News
November 3 - November 11, 2025

This week brought important developments in data privacy and security for AI systems and the autonomous agents now used in many workplaces. Researchers and technology companies disclosed several new security problems that could put sensitive company information at risk.
One major announcement was the release of OpenPCC, a new open-source security framework designed to protect data sent to AI assistants and agents. It works like a protective wrapper that keeps information secret from anyone who shouldn't see it. Today, most AI systems process prompts in plaintext, which means sensitive data can be leaked or intercepted. OpenPCC encrypts this data to close that gap. However, it relies on specialized hardware, which may put it out of reach for smaller businesses.
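The "protective wrapper" idea can be illustrated with a minimal sketch. This is not OpenPCC's actual API; all names here are hypothetical, and the toy cipher (a SHA-256 keystream with an HMAC tag) only stands in for the vetted AEAD ciphers, such as AES-GCM, that real systems use. The point is the shape of the scheme: the prompt is sealed on the client, so anything sitting between the user and the trusted endpoint sees only ciphertext.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode as a toy stream cipher -- illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, prompt: str) -> dict:
    # Encrypt the prompt and attach an integrity tag before it leaves the client.
    data = prompt.encode()
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return {"nonce": nonce, "ciphertext": ct, "tag": tag}

def open_(key: bytes, box: dict) -> str:
    # Only a party holding the key (the trusted AI endpoint) can unwrap it.
    expected = hmac.new(key, box["nonce"] + box["ciphertext"], hashlib.sha256).digest()
    if not hmac.compare_digest(box["tag"], expected):
        raise ValueError("ciphertext was tampered with in transit")
    pt = bytes(a ^ b for a, b in zip(box["ciphertext"],
                                     keystream(key, box["nonce"], len(box["ciphertext"]))))
    return pt.decode()

key = secrets.token_bytes(32)
box = seal(key, "Q3 revenue forecast: confidential")
print(open_(key, box))  # round-trips to the original prompt
```

Anyone who intercepts `box` without the key sees only random-looking bytes, and any modification is caught by the tag check.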
Researchers also uncovered serious weaknesses in deployed AI systems. Microsoft described a new side-channel attack called Whisper Leak that can infer what topics people are discussing with AI chatbots, even when the conversations are encrypted. Tenable security researchers found seven vulnerabilities in ChatGPT that attackers could exploit to steal private user information. Meanwhile, Google reported that state-backed actors from North Korea, Iran, and China are using AI to build more effective tools for attacking companies and stealing data.
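The counterintuitive part of an attack like Whisper Leak is that encryption hides content but not the sizes of streamed chunks, so length patterns alone can act as a topic fingerprint. The toy below is not Microsoft's analysis; the topics, texts, and distance metric are all invented for demonstration. An "eavesdropper" who sees only a sequence of chunk sizes matches it against size fingerprints of candidate topics.

```python
# Toy traffic-analysis side channel: the observer never sees plaintext,
# only the length of each streamed chunk, and still guesses the topic.

def chunk_sizes(text: str) -> list[int]:
    # Simulate token-by-token streaming; with a stream cipher (no padding),
    # each ciphertext chunk's length tracks its plaintext token's length.
    return [len(token) for token in text.split()]

# Hypothetical candidate responses the attacker has fingerprinted in advance.
known_topics = {
    "finance": "quarterly earnings exceeded analyst expectations significantly",
    "weather": "sunny skies expected with mild afternoon temperatures today",
}

def guess(observed: list[int]) -> str:
    # Pick the topic whose size fingerprint is closest to what was observed.
    def dist(topic: str) -> int:
        fp = chunk_sizes(known_topics[topic])
        return (sum(abs(a - b) for a, b in zip(fp, observed))
                + abs(len(fp) - len(observed)))
    return min(known_topics, key=dist)

# The attacker observes only the encrypted stream's chunk-size pattern.
observed = chunk_sizes(known_topics["finance"])
print(guess(observed))  # -> finance
```

Real mitigations reportedly involve padding or batching streamed tokens so that sizes no longer correlate with content.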
On top of these threats, security vendors found that organizations are struggling to control where their data goes. A report from Proofpoint showed that AI agents now hold access to many internal company systems, creating new risks that organizations do not fully understand. Security experts also predict that by 2027, most organizations will face phishing attacks built around fake AI-generated personas.