Human-Agent Trust Weekly AI News

December 8 - December 16, 2025

## This Week's News About AI Agents and Trust

AI agents are spreading quickly through companies, but trust remains the biggest obstacle. This week's news showed that organizations like the idea of AI agents yet hesitate to use them for important work. An AI agent is software that can make decisions and carry out tasks without a person directing every step. Agents work fast and do not tire, which is attractive for businesses.

However, a trust gap is slowing adoption. When a study asked company leaders whether they trust AI agents to run their most important business processes, only 6% said yes. Most companies said they trust AI agents only with easy, routine jobs such as answering simple questions or organizing information. Many keep agents on the sidelines for now, with a person watching everything the agent does.

## Why Don't Companies Trust AI Agents?

Companies worry about security and control. When an AI agent makes a choice, what happens if it is a bad one? What if it harms a customer or breaks the law by mistake? A leader at a large company said security and governance are the number one barrier stopping companies from using AI agents more. Governance simply means having good rules and processes to keep things safe and fair.

Another problem is that AI agents can behave unexpectedly. One expert predicted that in the coming years there will be AI agent failures that make headlines. She said agents might "go rogue" and do things that were never planned. This worries company leaders and makes them move slowly.

## What Do Companies Need to Trust AI Agents More?

Experts say companies need clear rules and a plan for when things go wrong. First, there must be systems that enforce the rules: agents should not be able to do whatever they want. Second, companies need a clear plan for what to do when an AI agent makes a mistake. Enforcement cannot be just a suggestion; breaking the rules must carry real consequences.
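The difference between a suggestion and an enforced rule can be sketched in code. The `ALLOWED_ACTIONS` allow-list and `PolicyError` below are illustrative names, not from any real agent framework; this is a minimal sketch assuming the agent proposes actions as simple strings:

```python
# Minimal sketch of hard policy enforcement for an agent:
# actions outside the allow-list are blocked, not merely flagged.

ALLOWED_ACTIONS = {"answer_question", "organize_records", "draft_summary"}

class PolicyError(Exception):
    """Raised when an agent proposes an action the policy forbids."""

def execute(action: str) -> str:
    """Run an agent-proposed action only if policy allows it."""
    if action not in ALLOWED_ACTIONS:
        # Enforcement with real consequences: the action never runs.
        raise PolicyError(f"Action {action!r} is not permitted")
    return f"executed {action}"

print(execute("draft_summary"))   # a routine job: runs normally
try:
    execute("wire_funds")         # a high-stakes action: blocked
except PolicyError as e:
    print("blocked:", e)
```

The key design choice is that the check raises an exception rather than logging a warning, so a disallowed action can never proceed by accident.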

A hospital leader said that human beings must always be part of important decisions. Even if an AI agent is very capable and has learned from many doctors, a real doctor must still check its work and approve or reject it. This arrangement is called human-in-the-loop: a person stays in the decision-making loop.
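Human-in-the-loop review can be sketched as a gate between the agent's recommendation and the final decision. The function names here are hypothetical; this is a minimal sketch assuming the reviewer's verdict arrives as a callback:

```python
from typing import Callable

def decide(agent_recommendation: str,
           human_review: Callable[[str], bool]) -> str:
    """An agent proposes; a person approves or rejects.

    The agent's output is never acted on directly: the human
    reviewer stays in the decision-making loop.
    """
    if human_review(agent_recommendation):
        return f"approved: {agent_recommendation}"
    return "rejected: escalate to a human specialist"

# Example: a doctor reviewing an AI-suggested treatment plan.
doctor_says_yes = lambda plan: plan == "standard dosage"
print(decide("standard dosage", doctor_says_yes))   # approved
print(decide("triple dosage", doctor_says_yes))     # rejected
```

Because the human verdict is a required argument, there is no code path where the agent's recommendation takes effect without a person signing off.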

## Big Tech Companies Are Trying to Help

This week, companies including OpenAI, Anthropic, and Block came together to create the Agentic AI Foundation. Think of it as a club with rules that everyone agrees to follow. The foundation hosts three projects, named MCP, AGENTS.md, and goose, that help AI agents work together better and more safely.

Microsoft also shared new ideas about building AI agents at a major conference, including new tools to help agents understand business information. Google and Amazon are also part of this effort to make AI agents safer and more trustworthy.

## What the Numbers Say

The numbers this week tell the story. While only 6% of companies fully trust AI agents with major business processes, about 50% are piloting them to see what happens, and about 9% say they have fully deployed AI agents in their work. Those 9% are really using agents, not just testing them.

Even with these worries, 72% of companies believe the benefits of AI agents outweigh the risks. In other words, companies expect agents to help them despite the safety concerns.

## What Happens Next?

Companies are investing in training so workers understand AI agents better. About 44% are teaching workers how to oversee AI agents and understand what they do. About 39% are building guardrails, which act like safety fences that keep AI agents inside safe boundaries.
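Guardrails and human oversight often combine in practice: a boundary check plus an audit trail that supervisors can inspect. Everything named below is illustrative; this is a minimal sketch assuming each action carries a numeric risk score:

```python
from datetime import datetime, timezone

RISK_LIMIT = 0.5            # the "safety fence": riskier actions need a person
audit_log: list[dict] = []  # every decision recorded for human review

def guarded_run(action: str, risk: float) -> str:
    """Run low-risk actions; route high-risk ones to a human."""
    outcome = "executed" if risk <= RISK_LIMIT else "held for human review"
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "risk": risk,
        "outcome": outcome,
    })
    return outcome

print(guarded_run("summarize report", 0.1))   # executed
print(guarded_run("approve refund", 0.9))     # held for human review
print(len(audit_log), "entries for supervisors to audit")
```

Logging every decision, including the ones that were executed automatically, is what lets trained workers "watch over" the agent after the fact rather than only in the moment.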

As companies learn to work with AI agents safely, and as big tech companies build better standards and rules, the trust gap should narrow. But that will take time. Right now, trust matters more than speed: companies want to move forward carefully and make sure everything is safe before letting AI agents handle truly important work.

## Weekly Highlights