Data Privacy & Security Weekly AI News

October 27 - November 4, 2025

This weekly update covers the latest news about keeping AI agents safe and protecting company data. AI agents are becoming more common in workplaces everywhere, and companies are using them to do important jobs. However, most companies are not ready to keep these AI agents safe from attacks.

The Big Problem: Companies Moving Too Fast

One study asked three thousand five hundred business leaders in five countries about AI agents. The study found something very concerning: about two-thirds of these leaders said their company is using AI agents much faster than they understand what AI agents can do. This is like driving a fast car without learning the rules of the road. Even worse, six out of every ten leaders said that the dangers from AI attacks are growing faster than their security teams can keep up with. This means the bad guys are winning the race right now.

New Tools to Find Problems Before They Happen

Token Security, a company with offices in Tel Aviv, Israel, and New York, United States, started a brand new team of security experts. This team, called Token Research, looks for problems in AI agents before hackers can use them. The researchers have already found important weaknesses. For example, they discovered that a chat AI from a company called Drift had a serious flaw. This flaw let someone fake passwords and access customers' information stored in Salesforce. They also found problems with Microsoft Azure where weak setup could let hackers move around inside a company's network.

Companies Getting Recognition for AI Agent Protection

Zenity, another company that protects AI agents, won an important award in 2025. This award from Cyber Defense Magazine recognizes companies that make new inventions to keep computers safe. Zenity's special skill is watching AI agents everywhere they work. Some AI agents use business software, some use cloud computers in the internet, and some work on personal devices. Zenity's system looks at every action an AI agent takes, like watching every step a person makes. This helps catch problems before they become big disasters. Zenity also works with big groups like MITRE and OWASP that create safety rules for AI agents.

A New Way That AI Agents Can Leak Secrets

Researchers from Smart Labs AI and a German university found a scary new attack. Here is how it works: imagine an AI agent whose job is to search the internet and look at your company's secret files. An attacker could put hidden instructions inside a normal webpage. These instructions might be written in white text on a white background so people cannot see them. When the AI agent reads that webpage as part of normal work, it also reads the hidden instructions. The AI agent follows the hidden orders and searches your company's secret files. Then it sends that secret information to the attacker's computer using the same search tool that is built into the agent. The person who asked the AI agent to search would never know that anything bad happened. This is very dangerous because the AI agent did exactly what someone told it to do—it just did not know that someone was being tricky.

Salesforce Discusses Safety and Privacy

Salesforce, a huge software company used by millions of people, is making an AI agent called Agentforce. Company leaders told reporters that they are doing very careful testing to make sure Agentforce works safely. They said they watch what Agentforce is doing all the time and keep fixing things. Sometimes the problem is not about getting the right answer, but about knowing when a person should step in and take over. Salesforce is also giving companies tools so they can control what information Agentforce can touch and what it is allowed to do. However, some security experts think Salesforce should tell more details about how it keeps data safe.

New Tools to Keep AI Agents Under Control

Palo Alto Networks, a huge security company, made a new tool called Cortex AgentiX. This tool helps companies build their own AI agents and make sure the agents cannot do bad things. The tool lets companies create agents without writing computer code, which is much easier. Companies can also make AI agents do complicated jobs across the whole company. Most importantly, companies can set rules so agents can only do certain things and need human approval for big decisions. This means humans stay in control and can stop an AI agent if something looks wrong.

AI Agents for Shopping

PwC, a big consulting company, and Stripe, a company that helps people buy things online, announced they are working together. They created new safety rules called the Agentic Commerce Protocol, or ACP. More than half of people in the United States already use AI to help them decide what to buy. As more people use AI agents to shop, the rules will make sure the shopping is safe and the data stays private.

Weekly Highlights