Human-Agent Trust Weekly AI News
March 23 - March 31, 2026

AI Agents Are Growing Much Faster Than Expected
This week brought big news about how fast AI agents are taking over the internet. HUMAN Security, a company that tracks online traffic, looked at over one quadrillion digital interactions in 2025 and found something shocking: AI traffic grew by 187% during the year. That means AI agents are doing far more work online than ever before. Even more surprising, automation is now growing eight times faster than human activity on the internet.
What does this mean? It means that right now, computers and AI agents are doing more things online than actual people. AI agents are no longer just something companies test in their offices. They are real tools that buy and sell things in stores, handle customer service, and manage travel bookings.
The Trust Problem Gets More Important
As AI agents become more powerful, a big question keeps coming up: Should we trust AI agents to do important work? This is the biggest challenge facing businesses right now. A big study from McKinsey asked 500 companies how ready they are to handle AI agents safely. The answer shows that most companies are not fully prepared.
Only about 30% of companies feel they have good enough security and control systems for AI agents. This is a big problem because companies are putting AI agents in charge of important jobs like managing money, checking paperwork, and helping customers make decisions. When companies are not ready, bad things can happen.
New Dangers From Bad Actors
Criminals are learning how to use AI agents to attack companies too. The HUMAN Security report counted over 400,000 attacks on customer accounts in 2025. That sounds bad enough, but here is the shocking part: this number is more than four times higher than in 2024. Bad actors are getting smarter at using AI agents to steal passwords and break into accounts.
Companies like HUMAN Security say that the biggest challenge is telling the difference between good AI agents and bad AI agents. They explained that "the line between helpful automation and harmful automation is getting very thin." This means we cannot just ask, "Is it a bot or a human?" anymore. We have to ask, "Can we trust this AI?"
Some Surprising Good News About AI Agents
But here is something interesting: in some cases, AI agents might actually be more trustworthy than people. A company called SaaStr tested this idea with their sales team and found that 50% of big sales deals actually came through AI agents first. The AI agents answered questions and set up meetings, and then a human closed the deal.
Why might AI agents be more trustworthy? Because AI agents do not have pressure to make a sale like human salespeople do. A human salesperson might stretch the truth to earn money, but an AI agent does not need that money. An AI agent is also less likely to accidentally promise something the company cannot deliver.
Companies Must Build Better Safety Systems
Experts this week agreed on one thing: companies need much better ways to check if an AI agent is safe and real. When AI agents handle customer information or make business deals, companies need systems to make sure the agents are truly safe. They need to protect customer data, make sure agents are actually who they say they are, and have clear rules about what agents can do.
NVIDIA, one of the world's biggest technology companies, announced new tools this week to help companies control and manage their AI agents better. These tools help separate how AI agents are built from how they actually work in real businesses. This is important because it lets companies have more control over what their AI agents can and cannot do.
The Big Picture: Trust Is The New Security
The biggest idea from this week is that trust is becoming as important as security in the age of AI agents. Companies are realizing that they cannot just focus on stopping bad bots anymore. They have to focus on understanding which AI agents are good and which ones might cause problems.
Governments and companies are also starting to understand that there need to be rules and standards for AI agents. Europe is already starting to enforce AI rules, and more countries are paying attention. This is important because AI agents will soon be handling health information, money, and other very important things for people.
The message from experts this week is clear: AI agents are the future, but companies and people need to be smart about trusting them. We need better security, clearer rules, and ways to check that AI agents are safe before they handle important work. If we do this right, AI agents can help people work better and faster. If we do not, bad things could happen.