Data Privacy & Security Weekly AI News

November 24 - December 2, 2025

This weekly update highlights critical developments in data privacy and security related to AI agents and agentic systems.

The biggest story is the first major cyberattack reported to be powered entirely by AI agents, in which attackers used autonomous software agents to carry out the intrusion end to end, from initial access through data exfiltration. This marks a troubling new phase in which machines, not human operators, execute the attack chain.

Another major concern is a security flaw found in Retell AI, a popular platform for building AI voice agents. Attackers can exploit the flaw to place fraudulent phone calls at massive scale, tricking people into giving away personal information. These AI voice agents could spread misinformation and deceive thousands of victims.

On a more positive note, Microsoft and NVIDIA are investing $15 billion in Anthropic to advance safer AI systems, while OpenAI signed a massive $38 billion deal with Amazon Web Services to build AI infrastructure.

The common thread is clear: AI is becoming more powerful and more dangerous at the same time. Companies are building bigger AI systems while attackers learn to weaponize AI agents, and privacy experts warn that AI voice technology can be misused to clone voices and deceive people. The week shows that as agentic AI systems become smarter and more independent, protecting people's data and stopping AI-powered attacks must become a top priority for governments, companies, and security experts worldwide.

Extended Coverage