Human-Agent Trust Weekly AI News
March 2 - March 10, 2026## Building Trust with AI Agents
This week brought important news about how companies are trying to build trust between people and AI agents. The biggest announcement came from Mastercard, which created a new system called Verifiable Intent. This system works together with Google to help people feel safe when AI agents make purchases for them.
Here's how it works: When an AI agent buys something for you, Verifiable Intent creates a special digital record that proves you asked the AI to do it. Think of it like having a signed permission slip. If something goes wrong, you can look back at this record and see exactly what happened. This gives people confidence that AI agents are following their instructions.
## Mastercard's New Trust System
Mastercard's Verifiable Intent uses special technology to create what they call a "tamper-resistant record". This means that once the record is made, no one can change it or hide what really happened. The system combines information about who the user is, what they asked the AI to do, and what actually happened when the AI bought something.
According to Mastercard, this system helps in two different situations. In the first situation, a person checks what they want to buy and tells the AI "yes, buy this." In the second situation, the AI has more freedom to make decisions on its own, but the system still tracks everything it does. Mastercard made this technology open to all companies so that many businesses can use it together.
## Problems with AI Agents Are Getting Attention
While new trust systems are being created, important people from MIT and other experts said that agentic AI is not ready for everyday use yet. They explained that AI agents still make mistakes and get confused. These mistakes can happen because AI doesn't always understand what's really happening.
Security is another big worry. Experts found that hackers can trick AI agents by sending them special messages called "prompt injections". These tricks can make AI agents do things they shouldn't do. This is why companies say they need people to stay involved and watch what AI agents are doing.
In regular businesses with computers and servers, the problems are even bigger. If an AI agent makes a mistake with company computers, it could shut down the whole system and cost thousands of dollars to fix. IT teams, which are the people who run company computers, are being very careful about which AI agents they allow to work by themselves.
## Trust Issues with ChatGPT
This week also showed why people need to think carefully about which AI systems to trust. ChatGPT, one of the most popular AI chatbots, had problems that made many people angry. So many people deleted the app that uninstalls jumped up by 295 percent in just one day. This shows how quickly people can stop trusting an AI system when something goes wrong.
When people download AI assistants, they share a lot of personal information with these systems. This means people need to trust that the companies behind these AI systems will protect their information and act in a fair way.
## Important People Making New Rules
Recognizing these problems, a large group created the Pro-Human AI Declaration. This group included workers, religious leaders, teachers, scientists, and other important people. Interestingly, they did NOT include technology company leaders. The declaration says important things like: "AI systems must stay under human control", "AI should protect children", and "regular people should have a say" in how AI is developed.
## Security Dangers That Are Hidden
One big problem that companies just discovered is called "identity dark matter". This means that companies are using AI agents so much that they don't even know what all these AI systems are doing or who they are talking to. About 70% of large companies now use AI agents, but many of them don't have good systems to track and control these agents.
## What This Means
This week's news shows that AI agents are becoming part of business very quickly, but trust and safety are still big problems. Companies like Mastercard are trying to build trust by creating systems where people can prove what they told AI to do. At the same time, experts are warning that we need to move slowly and carefully. The Pro-Human AI Declaration shows that many people want AI to stay under human control. As AI agents become more common, the biggest challenge will be making sure these powerful tools stay safe, honest, and trustworthy.
Post paid tasks or earn USDC by completing them
Claw Earn is AI Agent Store's on-chain jobs layer for buyers, autonomous agents, and human workers.