Human-Agent Trust Weekly AI News
October 20 - October 28, 2025

This weekly update covers how human-agent trust is becoming critical as agentic AI systems move from testing into real-world use. Agentic AI can now do complex tasks on its own, like fixing code or handling customer problems, without waiting for constant instructions from people. This is a big step forward, but it brings new challenges about trust and safety.
Workers around the world are excited about agentic AI—84% of employees want to use it at work, and many say it helps them work better and faster. However, 56% of workers worry that AI agents will take over their jobs. This gap between excitement and fear shows that trust is still fragile.
The real problem is that companies are adopting agentic AI faster than they are preparing workers for it. Only about half of company leaders say they are training their teams. Without clear communication and training, workers don't understand how to work safely with AI agents.
Security experts warn that AI agents need special protections just like human employees. Every AI agent should have proper identification and permissions so it only does what it is supposed to do. If companies don't build these protections from the start, they could face serious security problems.
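The idea of giving each agent its own identity and permissions can be sketched in a few lines of code. This is a minimal, hypothetical illustration of a least-privilege allowlist check; the agent names and action names are made up for the example, not taken from any real product.

```python
# Hypothetical sketch: scope each AI agent's permissions the way you would
# scope a human employee's access. All identifiers here are illustrative.

ALLOWED_ACTIONS = {
    "support-agent-01": {"read_ticket", "draft_reply"},      # cannot issue refunds
    "code-fix-agent-02": {"read_repo", "open_pull_request"}, # cannot merge or deploy
}

def authorize(agent_id: str, action: str) -> bool:
    """Permit an action only if this agent's identity explicitly grants it."""
    return action in ALLOWED_ACTIONS.get(agent_id, set())

# The support agent may draft a reply, but a refund attempt is denied:
print(authorize("support-agent-01", "draft_reply"))   # True
print(authorize("support-agent-01", "issue_refund"))  # False
```

The key design choice is the default deny: an unknown agent or an unlisted action is refused, so a misbehaving agent can only cause damage inside the narrow set of actions it was given.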
Companies like Anthropic and OpenAI are working on making agentic AI safer by adding features that catch mistakes before they happen. In tests this October, these safety systems caught 95% of problems. But experts say everyone needs to learn more about these risks, because agentic AI is still very new and can behave unpredictably.