Data Privacy & Security Weekly AI News
December 1 - December 9, 2025

Companies around the world are using artificial intelligence agents to help them work faster and smarter, but a new report shows they are not protecting these powerful AI systems very well. This is creating serious security and privacy risks for the people whose information these AI agents can access.
Understanding what AI agents are is the first step to understanding why they're so risky. An AI agent is a computer program that can work on its own without a person controlling it every step of the way. Unlike a regular computer tool where someone has to type commands and click buttons, an AI agent can make decisions, read lots of information, complete tasks, and take actions all by itself. It never needs a break and never gets tired. Some AI agents can do the same work in one hour that would take a human several days to finish.
Because AI agents are so useful and powerful, companies are rushing to use them in their businesses. According to the latest research, 83% of companies already use some form of artificial intelligence in their daily work activities. This means that AI is becoming normal in almost every business - from hospitals to stores to banks to schools. Companies use AI agents to sort through emails, organize information, write computer code, decide which customers are most important, and do hundreds of other important jobs every single day.
The problem is that companies are adopting AI agents very quickly, but they're not spending enough time figuring out how to protect them. This creates a big gap between how fast AI is spreading and how prepared companies are to keep it safe. When asked if they have strong visibility into how their AI systems handle sensitive information - meaning they can actually see and understand what's happening - only 13% of companies said yes. Visibility is incredibly important because if you can't see what something is doing, you definitely can't stop it from doing something wrong.
The real danger becomes clear when you look at what's actually happening in companies. Two out of every three companies have discovered that their AI agents have been accessing information those agents should never have been allowed to touch. This means AI agents are reading confidential customer information, medical records, financial details, and other private data. When this happens, people's privacy is violated and companies can face enormous fines from regulators.
Autonomous AI agents are causing the most concern. These are AI systems that make their own choices about what to do. They're like workers who decide their own tasks instead of asking a boss what to do. Seventy-six percent of companies say that autonomous AI agents are the most difficult systems to keep secure. More than half of companies (57%) don't have the ability to immediately stop an AI agent when it starts doing something dangerous. Imagine knowing someone is doing something wrong but not being able to stop them in time - that's what many companies are experiencing.
Visibility remains one of the biggest problems. Nearly 50% of all companies report having zero visibility into where their AI is being used and what it's doing. Another third of companies say they have only minimal visibility - they barely understand what's going on. This is like having security guards protecting a building who can't actually see anything. How can they do their job if they can't see what's happening?
Companies also don't have the right people and teams in place to manage AI safely. Only 7% of companies have created a special team whose main job is just to handle artificial intelligence governance and security. This means at 93% of companies, nobody is focused full-time on protecting their AI systems. The work of managing AI is just added onto someone's existing job, along with everything else they already have to do.
When government agencies started asking if companies feel ready for new AI regulations and rules, the answer was concerning. Only 11% of companies said they feel prepared to meet new regulations about AI. This shows that companies are falling further and further behind. The rules are coming, but most companies don't feel ready.
Governments are now stepping in to create rules to protect people from unsafe AI. Governor Ron DeSantis of Florida announced a proposal called the Citizen Bill of Rights for AI that focuses on protecting people's privacy and their money from large AI data centers. The proposal includes important rules: AI cannot use a person's name, face, or likeness without getting permission first; companies must tell people when they're interacting with an AI instead of a real human; and people should be able to control what information AI can use about them.
The U.S. government is also getting involved. Major government agencies including the National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA) released special guidance about how to safely use AI in systems that are really important for people's safety. This shows that government leaders take AI security very seriously.
Experts are saying that companies need to completely change how they think about protecting AI. Instead of just watching what humans are doing, companies need to constantly watch what AI agents are doing. They need to set up clear rules about what information each AI agent is allowed to access - with more sensitive information being locked down more carefully. Companies need to treat AI agents like an entirely new type of worker that requires special protection rules and careful watching.
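To make the idea of per-agent access rules concrete, here is a minimal sketch in Python. The policy table, agent names, and function names are all hypothetical illustrations invented for this example, not any real product; the point is the deny-by-default pattern the experts describe, where each agent is only allowed to touch categories of data it has been explicitly granted.

```python
# Minimal sketch of deny-by-default access rules for AI agents.
# AGENT_POLICIES, the agent names, and is_allowed() are all hypothetical
# illustrations, not part of any real product or standard.

AGENT_POLICIES = {
    # agent id       -> data categories it may read
    "email-sorter":    {"email-metadata"},
    "support-triage":  {"email-metadata", "ticket-history"},
    # Note: no agent here is granted "medical-records" or
    # "financial-details" - sensitive data stays locked down.
}

def is_allowed(agent_id: str, data_category: str) -> bool:
    """Deny by default: an agent may touch a data category only if that
    category is explicitly listed in its policy."""
    return data_category in AGENT_POLICIES.get(agent_id, set())

# A known agent reading its permitted data is allowed; a known agent
# reaching for sensitive data, or an unknown agent, is refused.
assert is_allowed("email-sorter", "email-metadata")
assert not is_allowed("email-sorter", "medical-records")
assert not is_allowed("unknown-agent", "email-metadata")
```

The key design choice is that an agent missing from the table gets an empty set of permissions, which matches the experts' warning that an unidentified agent cannot be secured.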
The experts have created a simple but powerful warning: "You cannot secure an AI agent you do not identify, and you cannot govern what you cannot see." In plain words, this means companies first have to know which AI agents they have and what those agents are doing before they can possibly protect people from them.
Moving forward, the future is clear. Companies need to invest money and resources into better tools for managing AI. They need constant, real-time updates about where AI is being used in their business. They need to watch what AI agents are saying and doing every single moment. Most importantly, companies need to make AI safety and governance an important priority right now, not something to worry about later. The experts are saying the same thing: the faster companies act to protect their AI agents, the safer their customers' information and their businesses will be.
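The combination of constant monitoring and an emergency stop can be sketched in a few lines. Everything below (the `AgentMonitor` class, the `halted` flag) is an illustrative assumption, not a real tool - it simply shows how logging every agent action and halting on the first policy violation could fit together, which is the capability the 57% of companies mentioned above currently lack.

```python
# Toy sketch of real-time agent monitoring with an emergency stop.
# AgentMonitor and its fields are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class AgentMonitor:
    allowed: set                        # data categories the agent may access
    halted: bool = False                # emergency-stop flag
    log: list = field(default_factory=list)

    def record(self, action: str, category: str) -> bool:
        """Log every action; halt the agent on a policy violation."""
        self.log.append(f"{action}:{category}")
        if category not in self.allowed:
            self.halted = True          # kill switch: stop the agent now
            return False
        return True

monitor = AgentMonitor(allowed={"email-metadata"})
monitor.record("read", "email-metadata")   # permitted, agent keeps running
monitor.record("read", "medical-records")  # violation triggers the halt
assert monitor.halted
assert len(monitor.log) == 2               # both actions were still logged
```

Note that even the violating action is logged before the halt, so the audit trail stays complete - visibility first, then enforcement.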