This week saw major developments in data privacy and security for agentic AI systems. The Open Web Application Security Project (OWASP) released a new guide to help developers secure AI agents that act autonomously, such as agents that write code or configure systems without human input. Meanwhile, a PYMNTS report found that only 15% of CFOs are ready to adopt agentic AI, citing trust issues, technical challenges, and compliance risks in finance and payments. Legal tech firm Epiq launched an AI platform for law firms that pairs tools such as contract analysis and cyber response with human oversight, aiming to balance efficiency with security.

Cybersecurity experts warned that agentic AI requires new defenses, such as zero-trust models and automated moving target defense, because these systems are becoming high-value targets themselves. Deloitte found that trust remains the top barrier in finance, with only 13.5% of organizations currently using agentic AI despite optimism about its future. Researchers also demonstrated vulnerabilities that let attackers exploit AI agents to steal data or breach systems. In manufacturing, autonomous procurement agents risk amplifying supply chain threats if compromised. EY's survey highlighted a gap between AI investment and understanding of agentic AI, with cybersecurity and data privacy ranked as the top concerns.
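The zero-trust approach the experts describe boils down to treating every agent-initiated action as untrusted until it is explicitly verified. The sketch below is a hypothetical illustration of that idea in Python, not drawn from the OWASP guide or any report cited above: each tool call an agent proposes is checked against an explicit allowlist policy, and anything outside it is escalated for human approval.

```python
# Minimal zero-trust gate for agent tool calls (illustrative sketch only).
# The tool names, policy structure, and paths are assumptions for the example.
from dataclasses import dataclass, field


@dataclass
class ActionPolicy:
    # Tools the agent may call without human review, each with an argument check.
    allowed_tools: dict = field(default_factory=lambda: {
        "read_file": lambda args: args.get("path", "").startswith("/srv/agent-workspace/"),
        "run_tests": lambda args: True,
    })

    def evaluate(self, tool: str, args: dict) -> str:
        """Allow only explicitly permitted calls; everything else escalates."""
        check = self.allowed_tools.get(tool)
        if check is None:
            return "escalate"  # unknown tool: never trusted by default
        return "allow" if check(args) else "escalate"


def execute_with_gate(policy: ActionPolicy, tool: str, args: dict) -> None:
    decision = policy.evaluate(tool, args)
    if decision == "allow":
        print(f"executing {tool} with {args}")
    else:
        # A real deployment would log this and route it to human review.
        print(f"blocked {tool}: requires human approval")


if __name__ == "__main__":
    policy = ActionPolicy()
    execute_with_gate(policy, "read_file", {"path": "/srv/agent-workspace/report.txt"})  # allowed
    execute_with_gate(policy, "read_file", {"path": "/etc/passwd"})                      # escalated
    execute_with_gate(policy, "delete_database", {"name": "prod"})                       # escalated
```

The design choice is deny-by-default: rather than enumerating what an agent must not do, the gate only executes what the policy affirmatively permits, which is the core assumption behind the zero-trust models mentioned above.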

Extended Coverage