# Human-Agent Trust Weekly AI News

October 20 - October 28, 2025

## Building Trust Between Humans and AI Agents

This week in October 2025 shows agentic AI moving from ideas to real use in companies around the world. Unlike regular AI tools that answer questions, agentic AI can think through problems and take action on its own. These systems can write and test computer code, fix customer problems, or even manage supply chains without someone watching every step.

The speed of this change is exciting but also scary for many people. Companies see huge possibilities: agentic AI could help them automate tasks that normally need many people working for a long time. Some experts think agentic AI could double how much work people can get done by 2027. But this speed also means workers and leaders might not have time to understand the changes happening around them.

## What Workers Really Think

A new survey from EY, a large consulting firm, asked over 1,100 workers about agentic AI, and the results tell an interesting story. About 84% of employees said they are eager to use agentic AI in their jobs. Among people who already use AI agents at work, 90% feel confident they can use them well, and 86% say AI agents have made their teams more productive.

But beneath this excitement is worry. More than half of workers (56%) are anxious about their job security. Workers understand that AI agents can do some of their work, and they are not sure there will still be jobs for them in the future. This creates what experts call a "human readiness gap": people want to learn and work with AI, but they are also worried about what it means for their jobs.

## The Communication Problem

The biggest problem might be that company leaders are not explaining their plans clearly enough. When leaders explain their AI plans well, workers feel much better and work harder. But many companies are not doing this: only about 52% of senior leaders say their company has good training and learning programs for agentic AI.

Without clear communication, workers feel confused and uncertain about what is happening, and that confusion slows adoption and wastes time. Leaders need to share their complete AI plans with workers, explain how AI agents will change jobs, and give people the training they need. When workers understand the plan and get good training, they trust the AI agents more and perform better.

## Making AI Agents Trustworthy and Safe

Security experts, from outlets like The Hacker News to firms like EY, say that trust in AI agents depends on strong security. Each AI agent should be treated like an employee who needs proper identification and permission checks. Right now, many companies do not do this well enough.

Here is why this matters: AI agents can connect to databases, run programs, and make changes in company systems. If an AI agent has too much power and something goes wrong, the problem can spread to other systems very quickly. To prevent this, companies need to give each AI agent only the minimum access it needs to do its job, a rule called "least privilege". Every action an AI agent takes should also be recorded so people can see what happened later.
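To make these two ideas concrete, here is a minimal Python sketch of a least-privilege allowlist combined with an audit log. Everything in it (the agent names, the action names, and the `run_agent_action` helper) is a hypothetical illustration, not code from any real product mentioned above.

```python
import logging
from datetime import datetime, timezone

# Audit trail: every attempted action is recorded, allowed or not.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent_audit")

# Least privilege: each agent gets only the actions its job requires.
# (Hypothetical agents and actions, for illustration only.)
AGENT_PERMISSIONS = {
    "invoice-agent": {"read_invoices", "create_payment_draft"},
    "support-agent": {"read_tickets", "post_reply"},
}

def run_agent_action(agent_id: str, action: str, payload: dict) -> bool:
    """Allow an action only if it is on the agent's allowlist, and audit it."""
    allowed = action in AGENT_PERMISSIONS.get(agent_id, set())
    audit_log.info(
        "time=%s agent=%s action=%s allowed=%s payload_keys=%s",
        datetime.now(timezone.utc).isoformat(), agent_id, action,
        allowed, sorted(payload),
    )
    if not allowed:
        return False  # Deny by default: anything not listed is refused.
    # ...dispatch to the real database or system here...
    return True

# The support agent may post replies, but it cannot touch payments.
run_agent_action("support-agent", "post_reply", {"ticket": 123, "text": "Hi"})
run_agent_action("support-agent", "create_payment_draft", {"amount": 100})  # denied
```

The key design choice is deny-by-default: if an action is not on an agent's list, the answer is no, and the attempt still shows up in the log for reviewers to check later.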

Companies like Anthropic and OpenAI are building safety features into agentic AI systems. In October tests, these safety systems caught 95% of mistakes that could have caused problems. But experts say this is just the beginning: agentic AI is still very new and can be unpredictable.

## Real Companies Taking Action

Some companies are already using agentic AI in their work. Mimecast, a security company, just released a new AI agent called Mihra that helps investigate security problems; the company says it can work seven times faster than people doing the same job. Companies like Maersk (shipping), Accenture (consulting), and Salesforce (software) are also using AI agents now.

These companies are learning important lessons about working with AI agents. They are discovering that human oversight still matters: someone needs to check that AI agents are doing the right thing. They are also learning that building security in from the start is much easier than fixing problems later. Early adopters are treating AI agents as untrusted visitors that need careful supervision, not as trusted employees.

## What Needs to Happen Next

Experts say the next step is building better governance and rules for AI agents. This means deciding what each AI agent can do, how to watch what it does, and how to stop it if something goes wrong, as the sketch below shows. Organizations also need AI literacy training: teaching people how AI agents work and what risks they bring.
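Here is a hedged sketch of what such a governance rule might look like in code: an allowlist of actions, a list of risky actions that need human approval, and a kill switch. The `AgentPolicy` class and every name in it are assumptions made for illustration, not a real framework.

```python
from dataclasses import dataclass

# Hypothetical governance record: what an agent may do, which actions need
# a human reviewer, and a kill switch that stops the agent entirely.
@dataclass
class AgentPolicy:
    agent_id: str
    allowed_actions: set
    requires_human_approval: set
    enabled: bool = True  # kill switch: set to False to halt the agent

    def decide(self, action: str) -> str:
        """Return "deny", "review", or "allow" for a proposed action."""
        if not self.enabled:
            return "deny"    # the agent has been stopped
        if action not in self.allowed_actions:
            return "deny"    # never listed, never allowed
        if action in self.requires_human_approval:
            return "review"  # a person must approve before it runs
        return "allow"

# A supply-chain agent may check and reorder stock on its own, but
# issuing a purchase order is routed to a human first.
policy = AgentPolicy(
    agent_id="supply-chain-agent",
    allowed_actions={"check_stock", "reorder_stock", "issue_purchase_order"},
    requires_human_approval={"issue_purchase_order"},
)
print(policy.decide("reorder_stock"))         # allow
print(policy.decide("issue_purchase_order"))  # review
policy.enabled = False                        # emergency stop
print(policy.decide("check_stock"))           # deny
```

Watching what the agent does (the monitoring piece) would build on the same kind of audit log shown earlier; this policy only covers permissions and the stop button.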

The window of time to build these protections is short. Organizations moving now will understand agentic AI better and can use it more safely. Those that wait will have to catch up later and might face security problems they did not see coming.

The path forward requires both excitement and caution. Workers can be excited about working with agentic AI, but companies must invest in training, communication, and security. When people understand AI agents and trust that their company is protecting them and their jobs, then real progress can happen.

## Weekly Highlights