This week brought significant updates on data privacy and security challenges tied to agentic AI systems. The Open Web Application Security Project (OWASP) published comprehensive guidance on securing AI agents that operate autonomously, such as those writing code or configuring systems without human input. These systems pose unique risks because they can pass data and results between tools rapidly and adapt to their environments without human oversight. For example, if an AI agent misconfigures a system, it could expose sensitive data or create security gaps. OWASP's guidance emphasizes technical safeguards like least-privilege access and real-time monitoring to mitigate these risks.
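To make the least-privilege idea concrete, the sketch below shows one way an agent's tool access could be restricted to an explicit allow-list, with every call logged for real-time review. All names (ToolRegistry, run_tool, the example tools) are illustrative assumptions, not taken from the OWASP guidance.

```python
# Minimal sketch of least-privilege tool access for an autonomous agent.
# Names and structure are illustrative, not from OWASP's guidance.
import logging
from typing import Callable, Dict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

class ToolRegistry:
    """Maps tool names to callables and enforces a per-agent allow-list."""
    def __init__(self, tools: Dict[str, Callable[..., object]]):
        self._tools = tools

    def run_tool(self, agent_id: str, allowed: set, name: str, **kwargs):
        # Deny by default: the agent may only call tools it was explicitly granted.
        if name not in allowed or name not in self._tools:
            log.warning("DENIED agent=%s tool=%s args=%s", agent_id, name, kwargs)
            raise PermissionError(f"{agent_id} is not permitted to call {name}")
        # Log every call so a human or monitoring system can review it in real time.
        log.info("ALLOWED agent=%s tool=%s args=%s", agent_id, name, kwargs)
        return self._tools[name](**kwargs)

# Example: a configuration agent may read settings but not write them.
registry = ToolRegistry({
    "read_config": lambda path: f"<contents of {path}>",
    "write_config": lambda path, value: f"wrote {value} to {path}",
})
print(registry.run_tool("config-agent", {"read_config"}, "read_config", path="/etc/app.conf"))
```

The deny-by-default check plus the audit log captures the two safeguards the guidance highlights: the agent cannot reach tools it was never granted, and every action it does take leaves a trail that monitoring can inspect.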

In the finance sector, a PYMNTS Intelligence report revealed widespread skepticism about agentic AI adoption. While nearly all CFOs understand the concept, only 15% plan to deploy it soon. Key concerns include trust in decision-making, integration with legacy systems, and real-time visibility into AI actions. Experts warn that traditional monitoring tools struggle to track AI activity, especially when encrypted traffic or siloed authentication systems are involved. For instance, if an AI agent accesses unauthorized systems, organizations might not detect the breach until it’s too late. Deloitte’s survey of finance professionals echoed these concerns, noting trust remains the main barrier despite optimism about AI’s future role in tasks like data analysis and error reduction.

The legal industry saw progress with Epiq’s new agentic AI platform, which combines proprietary tools and third-party agents like Microsoft Copilot for tasks such as contract review and cyber incident response. The platform emphasizes human oversight in critical areas to balance efficiency and security. For example, Epiq’s AI Discovery Assistant™ helped a law firm achieve 90% recall in document reviews while maintaining precision. However, integrating multiple AI agents raises questions about data privacy and access controls, particularly when handling sensitive legal information.

Cybersecurity experts highlighted the need for new defense strategies as agentic AI becomes more prevalent. Traditional approaches like fixed security boundaries are insufficient for systems that act autonomously. Instead, organizations should adopt zero-trust models and automated moving target defense (AMTD), which continuously changes system configurations to confuse attackers. For example, AMTD could rotate encryption keys or alter network pathways to disrupt malicious actors. However, these advanced defenses require securing the AI systems themselves, as they become prime targets for attackers seeking to exploit their autonomy.
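The key-rotation example can be pictured as a store that retires its encryption key on a short schedule, so a stolen key has a limited useful lifetime. The sketch below is a simplified illustration of that single AMTD tactic under assumed parameters, not a production AMTD implementation.

```python
# Simplified illustration of one AMTD tactic: rotating encryption keys on a
# schedule so a compromised key quickly becomes useless. Not a production design.
import os
import time

class RotatingKeyStore:
    def __init__(self, rotation_seconds: float = 300.0):
        self.rotation_seconds = rotation_seconds
        self._rotate()

    def _rotate(self) -> None:
        # Generate a fresh 256-bit key and record when it was issued.
        self.current_key = os.urandom(32)
        self.issued_at = time.monotonic()

    def get_key(self) -> bytes:
        # Rotate automatically once the current key's lifetime has expired.
        if time.monotonic() - self.issued_at > self.rotation_seconds:
            self._rotate()
        return self.current_key

store = RotatingKeyStore(rotation_seconds=300.0)  # assumed 5-minute key lifetime
key = store.get_key()  # callers always fetch the freshest key
```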

Manufacturers face unique risks as autonomous AI agents manage supply chains and production. Autonomous procurement agents, for instance, might interface with compromised supplier platforms, leading to data leaks or malicious firmware downloads. These risks are amplified in distributed supply chains, where a single compromised node could cascade vulnerabilities across the network. Experts recommend dynamic guardrails and "AI monitoring AI" to detect and correct risky behavior. For example, secondary AI agents could audit primary agents' actions in real time, ensuring compliance with safety and ethical standards, as sketched below.
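The sketch shows one way a secondary "auditor" agent might vet a procurement agent's proposed order against policy rules before it executes. The supplier allow-list, spend limit, and firmware rule are hypothetical assumptions for illustration, not rules drawn from the reporting.

```python
# Hypothetical sketch of a secondary auditor agent vetting a procurement agent's
# proposed action against policy before it runs. Rules and names are illustrative.
from dataclasses import dataclass

@dataclass
class ProposedOrder:
    supplier: str
    amount_usd: float
    firmware_download: bool

APPROVED_SUPPLIERS = {"acme-parts", "globex-components"}  # assumed allow-list
SPEND_LIMIT_USD = 50_000.0                                # assumed per-order cap

def audit(order: ProposedOrder) -> tuple:
    """Secondary agent's check: approve the action, or block it and explain why."""
    if order.supplier not in APPROVED_SUPPLIERS:
        return False, f"unapproved supplier: {order.supplier}"
    if order.amount_usd > SPEND_LIMIT_USD:
        return False, f"amount {order.amount_usd} exceeds spend limit"
    if order.firmware_download:
        return False, "firmware downloads require human sign-off"
    return True, "ok"

approved, reason = audit(ProposedOrder("acme-parts", 12_000.0, firmware_download=False))
print(approved, reason)  # True ok; only now would the primary agent place the order
```

In practice the auditor could itself be a model rather than fixed rules, but the pattern is the same: the primary agent's action only executes after an independent check signs off.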

Finally, EY’s survey revealed a disconnect between AI investments and understanding of agentic AI. While 60% of organizations plan to prioritize AI in the next year, many lack clear guidelines for responsible deployment. Only 39% have comprehensive frameworks addressing governance, privacy, and ethical use. Without these safeguards, organizations risk deploying AI agents that act unpredictably or violate data protection laws. For example, an AI agent trained on biased data could make discriminatory decisions, harming both users and the organization’s reputation.

Weekly Highlights