Data Privacy & Security Weekly AI News
February 2 - February 10, 2026
AI Agents Are Everywhere, But Nobody Is Watching Them
Artificial intelligence is changing how companies work every single day. One big change is the use of AI agents, which are programs that can think and make decisions without a human telling them exactly what to do. These agents are now doing real work in companies, not just being tested by engineers. The problem is that these agents are multiplying very quickly—companies are creating and deploying them faster than they can keep track of them.
A study released recently looked at companies in the United States and the United Kingdom and found something shocking. Out of about 3 million AI agents being used, about 1.5 million of them are not being properly watched or controlled. This is like having security guards working in a building, but nobody checking to see if they are doing their jobs correctly. One security expert said that even that 53% figure might be too low, because in reality every single AI agent could potentially cause problems.
The Governance Problem: Who Is in Charge?
When a regular employee makes a mistake at work, a manager can ask them why they did it and teach them to do better. But with AI agents, it is much harder to know what happened and why. Privacy leaders and business experts say that companies need clear identity and responsibility rules for AI agents, just like they have for human workers. Each AI agent needs to have an owner, a way to check if it really is who it says it is, and limits on what it can access.
Right now, many companies are letting their AI agents do whatever they want. They have broad access to many systems and can touch lots of sensitive data, but nobody knows who is in charge if something goes wrong. Think of it like giving someone the keys to your entire house when they only need the key to one room. This is dangerous because an AI agent with too much power can become an insider threat—similar to when a trusted employee misuses their access.
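To make the "keys to one room" idea concrete, here is a small sketch of what giving an AI agent a named owner and a limited scope could look like in code. This is only an illustration: names like AgentIdentity and allowed_systems are made up for this example and are not part of any real product.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these names are assumptions, not a real product's API.

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                                          # the human accountable for this agent
    allowed_systems: set = field(default_factory=set)   # least-privilege scope

def check_access(agent: AgentIdentity, system: str) -> bool:
    """Allow the agent to touch a system only if it is inside its declared scope."""
    if system in agent.allowed_systems:
        return True
    # Anything outside the scope is denied and can be flagged for the owner to review.
    print(f"DENIED: {agent.agent_id} (owner: {agent.owner}) tried to reach {system}")
    return False

# Example: an invoice-processing agent that only needs the billing system.
invoice_agent = AgentIdentity(
    agent_id="invoice-bot-01",
    owner="finance-team@example.com",
    allowed_systems={"billing"},
)

check_access(invoice_agent, "billing")     # True: inside its scope
check_access(invoice_agent, "hr_records")  # False: the rest of the "house" stays locked
```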
Data Privacy: The Information Challenge
Companies collect lots of personal information from their customers, like names, email addresses, and shopping history. When AI agents start using this data, privacy becomes much more complicated. One big question is: How is the company using the data I gave them permission to use? If a company said it would use your data to show you better ads, but then uses that same data to train an AI model for something totally different, that is not okay.
Privacy experts say companies must know exactly what data their AI agents are touching, where that data came from, and what the agents are doing with it. This is called data lineage, which is like following a map of where information travels. Companies also need to be able to stop using someone's data when that person asks them to, even if an AI model was trained using that data.
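As a rough illustration of data lineage and consent in code, the short sketch below tags each record with where it came from and what the person agreed to, and refuses to hand the data to an agent for any other purpose. Every name and field here is hypothetical.

```python
# Hypothetical sketch of a consent-aware data lookup; all names and fields are assumptions.

records = {
    "user-123": {
        "email": "user123@example.com",
        "source": "signup-form-2024",                # simple lineage: where the data came from
        "consented_purposes": {"ads_personalization"},
    },
}

def fetch_for_agent(user_id: str, purpose: str):
    """Hand a record to an agent only if the person consented to this purpose."""
    record = records.get(user_id)
    if record is None:
        return None
    if purpose not in record["consented_purposes"]:
        # For example: the person agreed to better ads, not to model training.
        raise PermissionError(f"{user_id} did not consent to '{purpose}'")
    return record

fetch_for_agent("user-123", "ads_personalization")    # allowed
# fetch_for_agent("user-123", "model_training")       # would raise PermissionError
```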
Everyone Is Spending Money on AI Security
Companies around the world are very worried about AI security threats, and they are putting a lot of money into protecting themselves. A survey asked leaders in charge of security at big companies like Blackstone, Virgin, and Rakuten what they would spend money on in 2026. The answer was clear: almost 8 out of every 10 security leaders said they would spend more money on AI-powered security tools.
Specifically, 77.8% of these security leaders plan to use AI tools to protect their companies, and 41.3% want AI systems that can handle security tasks automatically. Other popular choices include protecting data stored in the cloud and finding threats to people's identities, which were each chosen by 33% of the leaders. Most of these leaders also believe that by the end of 2026, using AI for defense will be normal and standard.
New Threats: AI-Powered Attacks
While companies are using AI to protect themselves, attackers are also using AI to attack. Bad actors can now use AI agents to automatically find ways into computer networks with very little human help. They can also create fake emails and messages that feel real and personal, because AI can look at information about you from social media and other places to make messages that look genuine. Deepfake technology—where AI makes fake videos or voice recordings of people—is also becoming a bigger threat.
The scary part is that traditional security teams are designed to catch attacks made by humans working at normal speed. But AI-powered attacks happen at machine speed, which means they can get through a network and cause damage in minutes instead of days. Security teams need to change how they work and use AI to help them defend faster.
What Needs to Happen Now
Experts say the answer is not simple, but there are clear steps companies should take. First, companies need clear rules about who is responsible for each AI agent—similar to how they track what human employees do. Second, they need to see all their data and understand what AI agents are doing with it. Third, they need to control what data people agree to share and make sure AI systems follow those rules. Fourth, they need to explain how AI makes decisions and have humans review those decisions.
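A very small sketch of the first two steps might look like the following: go through an inventory of agents and flag any that have no accountable owner or that can touch too many systems. The inventory format and the threshold are assumptions made up for this example, not a real company's setup.

```python
# A made-up agent inventory, used only to illustrate the audit idea.
agent_inventory = [
    {"agent_id": "invoice-bot-01", "owner": "finance-team@example.com", "systems": ["billing"]},
    {"agent_id": "report-bot-07", "owner": None, "systems": ["billing", "hr_records", "crm"]},
]

MAX_SYSTEMS = 2  # an arbitrary threshold for "too much access" in this sketch

def audit(agents):
    """Flag agents that have no accountable owner or that touch too many systems."""
    findings = []
    for agent in agents:
        if not agent["owner"]:
            findings.append((agent["agent_id"], "no accountable owner"))
        if len(agent["systems"]) > MAX_SYSTEMS:
            findings.append((agent["agent_id"], "access scope looks too broad"))
    return findings

for agent_id, issue in audit(agent_inventory):
    print(f"{agent_id}: {issue}")
```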
The bigger message is that privacy and security cannot be added at the end anymore—they need to be built into AI systems from the beginning. Companies that figure this out will build trust with their customers, and trust is becoming the most important thing that separates successful companies from ones that fail.