Human-Agent Trust Weekly AI News
January 19 - January 27, 2026

This weekly update explores how trust between humans and AI agents is becoming the most important topic in technology. A new global study by Dynatrace surveyed 919 leaders who work with AI agents every day. The study found something surprising: companies are not slow to use AI agents because they think AI is bad. Instead, companies are moving carefully because they want to make sure they can control and understand what the agents are doing.
The research shows that about half of all AI agent projects are still in early stages called proof-of-concept or pilot testing. This means companies are carefully testing AI agents on small projects before using them for important work. Even when companies move to bigger projects, they do not let the AI agents work completely alone. Instead, 69% of all decisions made by AI agents are checked by humans first. This human checking is like a teacher reviewing a student's homework before it is turned in.
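The human-checking pattern described above can be sketched in code. This is a minimal illustration of the idea, not anything from the study itself; the `AgentDecision` fields, the callback, and the example refund action are all invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class AgentDecision:
    agent_id: str    # which agent proposed the action (hypothetical field)
    action: str      # a human-readable description of the proposed action

def review_gate(decision: AgentDecision, human_approve) -> bool:
    """Hold an agent's decision until a person signs off on it.

    `human_approve` stands in for a real review step (for example, a
    ticket queue or an approval UI); here it is just a callback.
    """
    # Nothing runs until the human reviewer returns True.
    return human_approve(decision)

# Usage: a reviewer policy that only approves refund actions.
decision = AgentDecision("support-bot-7", "refund $40")
approved = review_gate(decision, lambda d: "refund" in d.action)
print(approved)  # True
```

The key design point is that the agent only proposes; the gate makes the human the final step, matching the 69% figure in the study.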
Security and privacy are the biggest worries for companies. More than half of all companies say that security concerns, privacy issues, and following government rules are the main problems that stop them from using AI agents. Only 13% of companies use fully autonomous agents that work completely alone without human help. This shows that companies still trust humans more than AI to make important decisions. The majority of companies—87%—are building or using AI agents that need a human to approve their work.
To build more trust, companies are creating better ways to check on AI agents. The top ways companies verify that AI agents are working correctly include checking data quality, having humans review the agent's outputs, and watching for unusual behavior. However, 44% of companies still use manual methods to check what AI agents are communicating with each other. This means a human has to read through the messages by hand, which takes a long time. Companies are looking for better, faster ways to do this checking automatically.
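One way to automate that slow manual review of agent-to-agent messages is a simple screening filter that flags only suspicious traffic for a human to read. The sketch below is hypothetical: the keyword list and the length threshold are invented examples, and a real system would use much more sophisticated anomaly detection.

```python
# Invented keyword list for the sketch: words that might signal a
# sensitive-data leak between agents.
SUSPICIOUS = ("password", "ssn", "credit card")

def flag_message(msg: str) -> bool:
    """Return True if a message needs human attention."""
    text = msg.lower()
    if any(word in text for word in SUSPICIOUS):
        return True   # possible sensitive data in the message
    if len(msg) > 2000:
        return True   # unusually long payload is worth a look
    return False

messages = [
    "Booking confirmed for room 204.",
    "Here is the customer's credit card number...",
]
flagged = [m for m in messages if flag_message(m)]
print(len(flagged))  # 1
```

A filter like this inverts the workload: instead of reading every message by hand, humans only review the small fraction that gets flagged.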
Another important step is creating clear values and rules for AI. Anthropic, the company that makes the AI assistant Claude, recently published a longer constitution for its model. This constitution is like a rule book that explains what values the AI should follow, such as being truthful and safe. The new constitution helps the AI make better decisions in hard situations where different values conflict.
Identity and control are also becoming critical. As AI agents do more and more work, companies need to make sure each agent can only do certain tasks and access certain information. Think of it like this: a mail carrier should be able to deliver mail, but not enter your bedroom. Similarly, an AI agent that books hotel rooms should not be able to access customer credit card numbers. Companies are learning that identity systems are like the control points that determine whether AI agents can be trusted to do their jobs safely.
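The mail-carrier idea maps naturally onto a least-privilege allow-list: each agent identity carries an explicit set of permitted scopes, and everything else is denied by default. The agent names and scope strings below are made up for illustration; they are not from any real product.

```python
# Hypothetical permission table: each agent identity gets only the
# scopes it needs for its job, and nothing more (least privilege).
PERMISSIONS = {
    "booking-agent": {"read:availability", "write:reservation"},
    "support-agent": {"read:order_status"},
}

def is_allowed(agent: str, scope: str) -> bool:
    # Default-deny: an unknown agent or an unlisted scope gets no access.
    return scope in PERMISSIONS.get(agent, set())

print(is_allowed("booking-agent", "write:reservation"))  # True
print(is_allowed("booking-agent", "read:credit_card"))   # False
```

Like the mail carrier in the analogy, the booking agent can make reservations but can never reach the credit card data, because that scope was simply never granted.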
Companies in different industries are testing how human-AI trust works in real situations. In travel and hospitality, AI agents will soon help customers choose flights and hotels, but companies know they need to keep trust with their guests. Hotels and airlines are building systems to make sure the information AI agents use is correct and up-to-date. In retail, companies like Microsoft are creating AI shopping assistants that can help customers buy things, but customers still have control and can make final choices. In healthcare, Anthropic created a special version of Claude that follows healthcare privacy rules called HIPAA.
The biggest shift happening right now is that companies understand reliability is more important than giving AI agents complete freedom. Leaders in 74% of companies expect to spend even more money on AI next year. But they want to spend it on building trustworthy systems rather than just building more powerful AI. This is a big change from what people expected a year ago, when many thought AI would quickly become completely independent.
Looking ahead, the key to success is balancing human control with AI power. Companies expect 50% human and 50% AI collaboration for routine tasks like IT support and customer service. For more important business decisions, companies want even more human involvement—about 60% human and 40% AI. This shows that the future is not about AI replacing humans. Instead, it is about humans and AI working as a team where humans make the final decisions and AI helps them work faster and smarter.
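The human/AI split described above could be enforced with a simple routing rule: routine tasks lean on the agent with human spot checks, while higher-stakes decisions stay with people and the agent only assists. This is a sketch of the pattern, not anything from the survey; the task names and routing labels are invented.

```python
# Invented example of "routine" task types (IT support, customer service).
ROUTINE = {"password_reset", "order_lookup"}

def route(task: str) -> str:
    """Decide who leads on a task: the agent or the human."""
    if task in ROUTINE:
        return "agent_drafts_human_spot_checks"
    # Anything not explicitly routine is treated as high-stakes.
    return "human_decides_agent_assists"

print(route("password_reset"))    # agent_drafts_human_spot_checks
print(route("quarterly_budget"))  # human_decides_agent_assists
```

Note the asymmetry: unknown task types fall through to the human-led path, which matches the article's point that humans keep the final say on important decisions.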