Data Privacy & Security Weekly AI News

September 29 - October 7, 2025

This weekly update covers how companies worldwide are racing to solve security challenges created by AI agents. These smart computer helpers are becoming common in workplaces, but they bring new risks that traditional security systems cannot handle.

Microsoft announced major updates to its Sentinel security platform. The company wants to turn Sentinel into what it calls an "agentic platform." This means security teams can use AI agents to fight cyber criminals. The new system can understand how different parts of a company's network connect to each other. When hackers attack, the AI agents can quickly see which systems might be affected. Microsoft says this helps security teams respond much faster than before. The company also made Sentinel work better with other Microsoft security tools. This gives security teams a complete picture of what's happening across their organization.

The Coalition for Secure AI released a report highlighting serious problems with AI agent identity. Current computer systems treat AI agents like either human users or simple computer programs. But AI agents are neither of these things. They can work for hours without stopping, learn from their experiences, and even create copies of themselves. This creates confusion about what permissions they should have. The coalition warns that companies that don't solve this problem will either limit their AI agents too much or create dangerous security holes.
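One way to picture the alternative the report points toward is a credential that is scoped to a single task and expires on its own, instead of an agent inheriting a human user's standing permissions. The class, action names, and policy below are a hypothetical sketch for illustration, not taken from any real product:

```python
from dataclasses import dataclass
import time

@dataclass(frozen=True)
class AgentCredential:
    """A task-scoped, self-expiring credential for an AI agent (illustrative)."""
    agent_id: str
    allowed_actions: frozenset  # e.g. {"read:tickets"}; everything else is denied
    expires_at: float           # Unix timestamp after which the credential is dead

    def permits(self, action: str) -> bool:
        # Deny anything outside the task scope, and everything after expiry.
        return time.time() < self.expires_at and action in self.allowed_actions

def issue_for_task(agent_id: str, actions: set, ttl_seconds: int) -> AgentCredential:
    """Mint a credential covering exactly one task and one time window."""
    return AgentCredential(agent_id, frozenset(actions), time.time() + ttl_seconds)

cred = issue_for_task("summarizer-01", {"read:tickets"}, ttl_seconds=3600)
print(cred.permits("read:tickets"))    # in scope and not expired
print(cred.permits("delete:tickets"))  # outside the task scope, so denied
```

The point of the sketch is the shape of the fix: an agent that "works for hours without stopping" never keeps permissions longer than its current task needs.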

Security experts are especially worried about the insider threat problem with AI agents. Traditional insider threats involve human employees who might steal information or cause damage. AI agents could be much more dangerous because they work faster and have access to more systems. If hackers trick an AI agent through techniques like prompt injection, the damage could be massive. The coalition says companies need detection systems that understand what AI agents are supposed to be doing. These systems must spot when an AI agent starts acting strangely.
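A detection system that "understands what AI agents are supposed to be doing" can be sketched as a simple behavioral baseline: record the actions each agent normally performs for its job, and flag anything outside that profile. The agent names and action labels below are invented for illustration:

```python
# Known job profiles for each agent (hypothetical names and actions).
EXPECTED_BEHAVIOR = {
    "ticket-triage-agent": {"read:ticket", "label:ticket", "post:comment"},
    "report-agent": {"read:metrics", "write:report"},
}

def audit(agent: str, observed_actions: list) -> list:
    """Return observed actions that fall outside the agent's known job."""
    baseline = EXPECTED_BEHAVIOR.get(agent, set())
    return [a for a in observed_actions if a not in baseline]

# A prompt-injected triage agent suddenly tries to export customer data:
alerts = audit("ticket-triage-agent",
               ["read:ticket", "label:ticket", "export:customer_db"])
print(alerts)  # the out-of-profile action should trigger an alert
```

Real systems would build these baselines from observed history rather than a hand-written table, but the alerting logic is the same: an agent acting outside its profile is treated like a suspicious insider.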

Several companies launched new products to address these concerns. Dataminr announced Intel Agents for monitoring physical world events. These AI agents watch for threats like natural disasters, security incidents, and other risks that could affect businesses. The company says their agents provide context about events, not just raw information. This helps security teams understand whether they need to take action.

Entro extended its security platform to cover AI agents specifically. The company focuses on what it calls "non-human identities." Its tools help companies see which AI agents they have, what those agents can access, and whether they're behaving normally. This type of specialized security tool shows how the industry is adapting to the unique challenges of AI agents.
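In spirit, an inventory of non-human identities answers three questions: which agents exist, what each one can reach, and whether any is over-privileged. The registry below is a minimal hypothetical sketch of that idea, not a description of Entro's actual product:

```python
# Hypothetical inventory: each agent mapped to the access it holds.
REGISTRY = {
    "invoice-agent": {"erp:read", "erp:write"},
    "chat-agent":    {"kb:read"},
    "legacy-agent":  {"erp:read", "hr:read", "prod-db:admin"},  # forgotten pilot
}

def over_privileged(registry: dict, risky: set) -> dict:
    """Report each agent's risky grants so teams can review or revoke them."""
    findings = {}
    for agent, grants in registry.items():
        bad = grants & risky
        if bad:
            findings[agent] = sorted(bad)
    return findings

report = over_privileged(REGISTRY, risky={"prod-db:admin", "hr:read"})
print(report)  # only the forgotten pilot agent holds risky access
```

Even this toy version surfaces the common failure mode the article describes: an agent deployed for a pilot and forgotten, still holding production access long after anyone is watching it.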

The timing of these announcements is important. Many companies are deploying AI agents without fully understanding the security implications. The Coalition for Secure AI warns that organizations need to build defenses now for attacks that don't exist yet. They say the traditional approach of reacting to new threats won't work when AI systems can develop and spread attacks faster than humans can respond.

Industry experts predict that AI agents will soon be as common as human employees in many organizations. Companies that figure out how to secure these digital workers will have major advantages. Those that don't may face security breaches that make traditional cyber attacks look small by comparison. The race is on to develop security frameworks that can keep pace with rapidly advancing AI capabilities.

Weekly Highlights