Data Privacy & Security Weekly AI News
August 25 - September 2, 2025

Companies around the world are facing new challenges as AI agents become more common in workplaces. These smart computer programs can work on their own, but they also create new ways for data to be stolen or misused.
The biggest problem right now is something called shadow AI. This happens when workers use AI tools that their companies don't know about. IBM's latest report shows this is a huge risk: one in five companies in the study traced a data breach to unauthorized AI tools used by employees. When these shadow AI breaches happen, they cost companies much more than regular data breaches - about $670,000 extra on average.
What makes shadow AI so dangerous is that companies can't protect what they can't see. The IBM report found that 97% of companies that suffered an AI-related security incident lacked proper controls over who could use AI tools. Even worse, 63% of the breached companies had no rules at all for governing AI or catching unauthorized use.
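To see what catching shadow AI can look like in practice, here is a minimal sketch in Python. It scans a web proxy log for traffic to well-known public AI services that the company never approved. The log format, the host lists, and the function name are all assumptions made up for this example; they do not come from the IBM report.

```python
import csv
from collections import defaultdict

# Well-known public AI API hosts (illustrative list; a real deployment
# would keep this updated from a threat-intelligence or SaaS catalog feed).
KNOWN_AI_HOSTS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

# AI tools the company has officially approved; anything else is shadow AI.
APPROVED_AI_HOSTS = {"api.openai.com"}  # example only


def find_shadow_ai(proxy_log_path: str) -> dict:
    """Return {user: [hosts]} for AI traffic that was never approved.

    Assumes a CSV proxy log with 'user' and 'dest_host' columns - an
    illustrative format, not any specific product's schema.
    """
    hits = defaultdict(list)
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["dest_host"].strip().lower()
            if host in KNOWN_AI_HOSTS and host not in APPROVED_AI_HOSTS:
                hits[row["user"]].append(host)
    return hits


if __name__ == "__main__":
    for user, hosts in find_shadow_ai("proxy_log.csv").items():
        print(f"{user}: unauthorized AI traffic to {sorted(set(hosts))}")
```

Even a simple report like this gives security teams a list of people to talk to and tools to either approve or block, which is the visibility the IBM numbers say most breached companies lacked.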
Security experts are trying to solve these problems with new approaches. The Cloud Security Alliance just released new guidelines for protecting digital identities when AI agents work independently. Traditional security systems were made for humans and regular computer programs, not for AI that can make its own decisions.
The new framework suggests using zero trust architecture, which means never automatically trusting any user or device. It also recommends watching AI agents continuously and changing their permissions based on what they're doing in real time. This is very different from old systems that gave fixed permissions to users and rarely changed them.
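Here is a rough sketch, in Python, of what "watching an AI agent and changing its permissions in real time" could look like. The risk signals, thresholds, and permission scopes are invented for illustration; they are not part of the Cloud Security Alliance framework itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class AgentActivity:
    """Recent behavior signals for one AI agent (illustrative fields)."""
    failed_auth_attempts: int = 0
    records_accessed_last_hour: int = 0
    touched_sensitive_data: bool = False


@dataclass
class Grant:
    """A short-lived, scoped permission grant issued to the agent."""
    scopes: set = field(default_factory=set)
    expires_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def evaluate_agent(activity: AgentActivity, requested_scopes: set) -> Grant:
    """Zero trust style check: re-score the agent on every request."""
    risk = 0
    risk += 2 * activity.failed_auth_attempts
    risk += activity.records_accessed_last_hour // 100
    risk += 5 if activity.touched_sensitive_data else 0

    if risk >= 10:
        # High risk: deny everything until a human reviews the agent.
        return Grant(scopes=set())

    # Medium risk: strip write access, keep read-only scopes.
    allowed = requested_scopes - {"write", "delete"} if risk >= 5 else set(requested_scopes)

    # Even a "trusted" grant expires quickly and must be re-requested.
    ttl = timedelta(minutes=5 if risk >= 5 else 15)
    return Grant(scopes=allowed, expires_at=datetime.now(timezone.utc) + ttl)


if __name__ == "__main__":
    busy_agent = AgentActivity(records_accessed_last_hour=800, touched_sensitive_data=True)
    print(evaluate_agent(busy_agent, {"read", "write"}))  # high risk: empty scopes
```

The important design choice is that even a well-behaved agent only receives a short-lived grant, so its permissions are re-checked against fresh behavior every few minutes instead of being fixed once and rarely revisited.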
Anthropic, the company behind Claude AI, shared worrying examples of how criminals are using AI tools. They found cases where bad actors used their AI coding assistant to attack the computer systems of Vietnamese phone companies. They also discovered fraud schemes in which multiple AI agents worked together. This shows that AI isn't just being attacked - it's also being used as a weapon.
There's some good news though. Scientists at UC Riverside created a new way to remove private information from AI models. Before this breakthrough, companies had to retrain entire AI models from scratch to remove personal data, which was very expensive and used a lot of energy. Now they can surgically remove specific information while keeping the AI working properly.
This discovery is especially important because of privacy laws. The European Union's General Data Protection Regulation and the California Consumer Privacy Act require companies to delete personal information when people ask for it. The UC Riverside method works even when companies no longer have access to the original data used to train their AI models.
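To give a feel for the general idea of machine unlearning, here is a toy sketch of one simple approach known as gradient-ascent unlearning: nudging a trained model away from the records it is supposed to forget. This is a generic illustration, not the UC Riverside method, and the tiny model and synthetic data are made up for the example.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Toy setup: a small classifier trained on synthetic "personal" records.
X = torch.randn(200, 8)
y = (X[:, 0] > 0).long()
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Normal training.
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Ten records a user has asked to be deleted.
forget_X, forget_y = X[:10], y[:10]

# "Unlearning": a few steps of gradient ASCENT on the forget set,
# pushing the model away from those records instead of toward them.
for _ in range(10):
    opt.zero_grad()
    (-loss_fn(model(forget_X), forget_y)).backward()
    opt.step()

# Compare how well the model still predicts the forgotten records versus
# the rest of the data. Real unlearning methods add safeguards so the rest
# of the model is not damaged in the process.
with torch.no_grad():
    acc_forget = (model(forget_X).argmax(1) == forget_y).float().mean().item()
    acc_rest = (model(X[10:]).argmax(1) == y[10:]).float().mean().item()
print(f"accuracy on forgotten records: {acc_forget:.2f}, on everything else: {acc_rest:.2f}")
```

The appeal of approaches like this is exactly what the researchers describe: the deletion happens as a small, targeted update instead of an expensive retraining of the whole model.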
Security leaders around the world are feeling the pressure. Proofpoint's survey of 1,600 security chiefs found that 76% expect a major cyber attack in the next year. In the United States, 80% of security leaders are worried about losing customer data through public AI platforms like ChatGPT or Google's AI tools.
Despite these concerns, companies are still rushing to use AI. The same survey found that 64% of security leaders say enabling AI tools is a top priority for the next two years. This creates a difficult balance between getting the benefits of AI and keeping data safe.
The human factor remains a major problem. Two-thirds of security leaders experienced significant data loss in the past year, and most of it was caused by people inside their own companies. In 92% of those cases, the loss was linked to employees leaving the company, whether the exposure was accidental or deliberate.
Regulators are starting to respond to these new risks. Italy recently issued new data protection rules specifically for AI use in healthcare. Other countries are likely to follow with their own AI-specific privacy laws as they see more security incidents.
Experts say the key to staying safe is treating this as a data-centric security problem rather than just an AI problem. This means understanding exactly what data AI systems can access, monitoring how they use it, and putting barriers in place before problems happen. Companies need to evolve their security practices to match how quickly AI technology is advancing.
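As a small illustration of that data-centric mindset, the sketch below puts a policy check and an audit log between an AI system and the data it asks for. The agent names, data categories, and policy table are all invented for the example.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-data-guard")

# Which data categories each AI system is allowed to read (illustrative).
POLICY = {
    "support-chatbot": {"product_docs", "order_status"},
    "analytics-agent": {"product_docs", "aggregated_sales"},
}


def guarded_fetch(agent: str, category: str, fetch_fn):
    """Run fetch_fn only if policy allows this agent to read this category,
    and write an audit record either way."""
    allowed = category in POLICY.get(agent, set())
    log.info("%s agent=%s category=%s allowed=%s",
             datetime.now(timezone.utc).isoformat(), agent, category, allowed)
    if not allowed:
        raise PermissionError(f"{agent} may not read {category}")
    return fetch_fn()


if __name__ == "__main__":
    # The support chatbot may read order status...
    print(guarded_fetch("support-chatbot", "order_status",
                        lambda: {"order": 123, "status": "shipped"}))
    # ...but payroll records are blocked before any data leaves the system.
    try:
        guarded_fetch("support-chatbot", "payroll_records", lambda: "secret")
    except PermissionError as err:
        print("Blocked:", err)
```

The barrier sits in front of the data, not inside the AI, so the same guard works no matter which model or vendor the agent happens to use.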