# Human-Agent Trust Weekly AI News

January 26 - February 3, 2026

## AI Agents Need Strong Trust and Safety Rules

Artificial intelligence agents are becoming a huge part of how businesses work in 2026. Unlike regular AI tools that only assist people, AI agents can make decisions and take actions on their own. Australian bank Westpac is using AI agents to help workers do their jobs better and learn new skills. However, as these agents gain more power, companies are realizing they need strong trust systems to keep everything safe.

## One Hacked Agent Can Break Many Others

A real incident showed just how important trust is for AI agents. One company discovered that when a single AI agent was hacked, it was able to compromise 50 other agents connected to it. This was a huge shock to the industry: it was as if one bad actor inside a company could wreck the work of 50 other people. The problem resembles the early internet, which, before proper naming and trust systems existed, ran on hardcoded addresses and blind trust between machines.

To fix this problem, computer scientists created the Agent Name Service (ANS). It works like the internet's DNS, but for AI agents. With ANS, every agent has to prove who it is and what it is allowed to do before it can talk to other agents. The results have been impressive: agents can now be set up in under 30 minutes instead of 2-3 days, and the system can handle over 10,000 agents working at the same time.
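The core idea, an agent proving both its identity and its permission before a request is accepted, can be sketched in a few lines. This is a minimal illustration, not the real ANS protocol: the registry, key scheme, and capability names below are all assumptions made up for the example.

```python
import hashlib
import hmac

class AgentNameService:
    """Toy ANS-style registry: maps agent names to a signing key and an
    allowed-capability list, and verifies both before a request passes."""

    def __init__(self):
        self._registry = {}  # name -> (secret_key, allowed capabilities)

    def register(self, name, secret_key, capabilities):
        self._registry[name] = (secret_key, set(capabilities))

    def verify(self, name, capability, message, signature):
        """True only if the agent is registered, holds the capability,
        and the message signature matches its key (proof of identity)."""
        entry = self._registry.get(name)
        if entry is None:
            return False
        secret_key, capabilities = entry
        if capability not in capabilities:
            return False
        expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

ans = AgentNameService()
ans.register("billing-agent", b"demo-key", ["read:invoices"])

msg = b"fetch invoice 42"
sig = hmac.new(b"demo-key", msg, hashlib.sha256).hexdigest()

print(ans.verify("billing-agent", "read:invoices", msg, sig))    # True: known agent, allowed capability
print(ans.verify("billing-agent", "delete:invoices", msg, sig))  # False: capability not granted
```

The key design point is that both checks fail closed: an unregistered agent, a missing capability, or a bad signature all result in the request being rejected, which is what stops one compromised agent from freely commanding the others.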

## Companies Don't Know How Many Agents They Have

One big problem right now is that many companies have too many AI agents running around without proper control. Security experts found that some companies have one agent per worker, while others have as many as 17 agents per worker. Even scarier, many of these agents were created by workers without permission from IT teams. This is called shadow AI, and it's becoming a real problem.

When security experts scan company networks, they find that agents often have far more access to sensitive information than their jobs require. For example, one company's AI agent was tricked through prompt injection into stealing information from an employee's computer. These agents have become so powerful that some experts say it's like "letting thousands of interns run around in our production environment without supervision."
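The over-access problem described above is essentially a least-privilege audit: compare what each agent has been granted against what its role actually needs. Here is a small sketch of that check; the roles, scope names, and agent inventory are made-up examples, not any vendor's real policy model.

```python
# What each role genuinely needs (assumed for illustration).
ROLE_NEEDS = {
    "helpdesk": {"read:tickets", "write:tickets"},
    "reporting": {"read:sales"},
}

def excess_permissions(agents):
    """Return agents holding scopes beyond what their role needs."""
    findings = {}
    for name, (role, granted) in agents.items():
        needed = ROLE_NEEDS.get(role, set())
        extra = set(granted) - needed
        if extra:
            findings[name] = sorted(extra)
    return findings

# Hypothetical agent inventory: agent -> (role, granted scopes).
agents = {
    "agent-a": ("helpdesk", {"read:tickets", "write:tickets"}),
    "agent-b": ("reporting", {"read:sales", "read:hr_records"}),
}

print(excess_permissions(agents))  # {'agent-b': ['read:hr_records']}
```

A report like this gives IT teams a concrete list of agents to rein in, which also surfaces shadow agents whose roles are not in the policy at all.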

## Humans Must Stay in Control

Smart companies are building systems where humans stay in the loop. This means AI agents do the routine work, but humans make the big decisions and oversee everything. For example, an AI agent might suggest cost savings for cloud infrastructure, but a person reviews the change before it happens. This keeps companies safe while still delivering the speed and efficiency they want from their agents.
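A human-in-the-loop gate like the one described can be as simple as a threshold rule: routine actions run automatically, and anything bigger waits for a person. This is a minimal sketch under assumed names and a made-up dollar threshold, not a real product's workflow.

```python
APPROVAL_THRESHOLD = 1000  # dollars; an assumption for this example

def handle_proposal(action, cost, approved_by_human=False):
    """Route an agent's proposed action: auto-run small changes,
    require explicit human sign-off for anything above the threshold."""
    if cost < APPROVAL_THRESHOLD:
        return f"auto-executed: {action}"
    if approved_by_human:
        return f"executed after review: {action}"
    return f"queued for human review: {action}"

print(handle_proposal("resize dev VM", 200))
print(handle_proposal("delete prod cluster", 5000))
print(handle_proposal("delete prod cluster", 5000, approved_by_human=True))
```

The important property is the default: an expensive action without sign-off is queued, never executed, so the agent keeps its speed on routine work while the risky decisions stay with people.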

Microsoft is building a new system called Agent 365 that helps companies see what all their agents are doing. It works like a control center where managers can watch all the agents in one place, whether the agents were made by Microsoft or other companies. Companies like ServiceNow are using this system to manage their AI agents while doing important work like running lab experiments.

## Trust Must Be Built From the Start

Experts agree that trust cannot be added to AI agents later—it must be built in from the beginning. This means every AI agent needs to know who created it and what it's allowed to do. When an AI agent wants to do something important, it should be able to prove it's the real agent and that it has permission to do that task.

Companies also need to understand that AI agents are becoming like digital knowledge workers. Just as a company is responsible for what its human employees do, it is responsible for what its AI agents do. This raises important questions: Who is accountable if an agent makes a mistake? Who decided what the agent should do? This is why treating agents like employees, and tracking who created them and what they can access, is so important.
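Treating agents like employees starts with a roster: every agent has a record of who owns it, when it was created, and what it may touch, so "who is accountable?" always has an answer. The record fields and example data below are assumptions for illustration.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AgentRecord:
    """One entry in the agent roster, mirroring an employee record."""
    name: str
    owner: str                 # the human accountable for this agent
    created: date
    allowed_access: set = field(default_factory=set)

roster = [
    AgentRecord("invoice-bot", "alice@example.com", date(2026, 1, 5),
                {"read:invoices"}),
    AgentRecord("infra-bot", "bob@example.com", date(2026, 1, 20),
                {"read:metrics", "write:config"}),
]

def accountable_owner(agent_name):
    """Look up who answers for an agent; None means a shadow agent."""
    for rec in roster:
        if rec.name == agent_name:
            return rec.owner
    return None

print(accountable_owner("invoice-bot"))  # alice@example.com
print(accountable_owner("mystery-bot"))  # None: unregistered shadow agent
```

An agent that is missing from the roster is exactly the shadow-AI case from earlier: nobody decided what it should do, and nobody is accountable when it misbehaves.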

## The Year AI Agents Become Serious

In 2026, AI agents are moving from being experimental tools to being real workers that make decisions and run important business processes. By the end of the year, almost 70% of companies around the world plan to have AI agents in their workflows. About 40% of all business software will have AI agents built into it.

Business leaders need to get ready for this change. The companies that prepare now by building trust systems and creating clear rules for their agents will be the ones that win. The ones that ignore these lessons and let agents run wild without oversight will face serious problems. As AI becomes more powerful, the difference between companies that keep control and companies that lose control will decide who succeeds and who fails in 2026.
