Human-Agent Trust Weekly AI News

January 19 - January 27, 2026

This weekly update covers how companies are building trust in AI agents. A major study from Dynatrace found that companies are not rushing to deploy AI agents because they worry about safety and control. Instead of letting agents work alone, most companies want humans to check the agents' work: about 69% of companies say a human verifies an agent's decisions before they are acted on. This shows that human oversight is still very important.

Companies are also worried about security and privacy. More than half of companies say that security, privacy, and regulatory compliance are the biggest obstacles to using AI agents. To address this, companies are building better governance systems to monitor AI agents and make sure they follow the rules. Some companies are also writing constitutions for their AI models: rule books that tell the AI what values to follow.

Another big concern is identity and access control. As AI agents take on more work, companies need to make sure each agent can see only the information it should see and perform only the actions it is allowed to perform. This is like giving each agent a special key card that opens only certain doors. Companies in travel, retail, and healthcare are now testing AI agents in real situations, and they are learning that keeping trust between humans and AI strong means keeping humans in charge.
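The "key card" idea above can be sketched as a deny-by-default permission check, where an agent is only allowed to do something if it was explicitly granted that permission. This is a minimal illustration, not any vendor's actual system; all names (AgentKeyCard, the resources, the agent ID) are made up for the example.

```python
class AgentKeyCard:
    """Grants an agent access only to explicitly allowed resources and actions."""

    def __init__(self, agent_id, allowed):
        self.agent_id = agent_id
        # Map each resource to the set of actions the agent may perform on it.
        self.allowed = {resource: set(actions) for resource, actions in allowed.items()}

    def can(self, resource, action):
        # Deny by default: only explicitly granted actions return True.
        return action in self.allowed.get(resource, set())


# Example: a hypothetical travel-booking agent may read and book flights,
# and may read payment records, but cannot issue refunds.
card = AgentKeyCard("travel-agent-01", {
    "flights": ["read", "book"],
    "payments": ["read"],
})

print(card.can("flights", "book"))     # True: explicitly granted
print(card.can("payments", "refund"))  # False: never granted
print(card.can("hotels", "read"))      # False: unknown door, card does not open it
```

Because the check denies anything not on the list, adding a new resource or action requires a deliberate grant, which is the point of the key-card model: humans decide in advance which doors each agent can open.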

Extended Coverage