Human-Agent Trust Weekly AI News
December 8 - December 16, 2025

Companies around the world are excited about AI agents, but they are worried about trusting these systems to do important jobs. AI agents are computer programs that can make decisions and take actions on their own, without a person telling them what to do each time. This week, experts discussed a big problem: most companies don't trust AI agents yet.
A big study found that only 6% of companies fully trust AI agents to handle their most important business work. This is a huge gap between what people hope AI agents can do and what they are willing to let them do right now. Most companies only trust AI agents with small, simple tasks that don't matter as much. For example, companies might let an AI agent answer customer questions, but they won't let it make big money decisions.
Companies hesitate mainly because of safety and control. When an AI agent makes a mistake, it could hurt customers or lose money. Companies need to know what an AI agent is doing and why it is doing it. Leaders at big companies said the biggest risk holding AI agents back is security and governance - making sure the agents follow the rules.
This week, big tech companies like OpenAI, Anthropic, and Block started something called the Agentic AI Foundation to help create better standards for AI agents. The goal is for AI agents to follow shared rules and work together better. Microsoft and Google are also helping to make sure AI agents are safe and trustworthy.
Doctors and hospitals worry about trusting AI agents too. When AI helps doctors make decisions, a real person must still check the work. A hospital leader said that human judgment must always be part of healthcare decisions, even with AI help.
Experts say companies need two things to trust AI agents more: first, clear rules so the agents stay within set limits, and second, a plan for when things go wrong. As companies learn how to use AI agents safely, the trust gap will probably get smaller.